Data seems to be discussed everywhere. It has perhaps become a truism that firms stand to benefit from better leveraging the data they hold: big data and artificial intelligence (AI), for example, promise a huge leap forward for insurers, enabling better decisions, made faster and with greater accuracy. These technologies allow organisations to better understand their customers, enhance their products, price risk more accurately and improve efficiency, driving down costs.
At the same time, firms, the authorities and the public are becoming increasingly aware that AI and advanced algorithms in general also bring some new risks while accentuating others.
The financial services industry is already heavily regulated, but firms will need to make sure that their approach to the GDPR, the ePrivacy Regulation, the Financial Conduct Authority (FCA) rulebook and so on is calibrated to fit these new technologies and evolving customer expectations. For example, how do requirements to be transparent with customers and to treat them fairly map onto decisions made using machine learning? It may not be clear how existing regulation applies to new technologies.
Regulators are certainly getting interested too. The Information Commissioner's Office (ICO), the FCA and the Bank of England (BoE) are all planning work and guidance on data ethics and AI, and the new Centre for Data Ethics and Innovation has been set up to advise government and business on a wide range of potential challenges.
Our joint KPMG-UK Finance report on data ethics (March 2019) suggested a set of principles and some next steps to help firms ensure they look at risks in the round and embed a data ethics approach throughout the business.
Within insurance, the FCA published the interim report (October 2019) from its Market Study examining the extent to which pricing practices focused on higher margins cause consumer harm (higher premiums) for long-standing home and motor insurance customers. Beyond controversial practices such as price optimisation, which relies on profiling customers, other risk-related ‘correlations’ are appearing in all sorts of unexpected places, with unintended consequences that increase the risk of consumers experiencing discriminatory outcomes.
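To make the proxy risk concrete, consider the minimal sketch below. It is entirely hypothetical: the "area_score" factor, the synthetic data and the threshold are illustrative assumptions, not drawn from the FCA study. It shows one simple governance check an insurer could run, testing whether an apparently neutral rating factor correlates with a protected characteristic before that factor is used in pricing.

```python
# Hypothetical proxy-variable check: does an apparently neutral rating
# factor (here, a postcode-derived "area_score") correlate with a
# protected characteristic? All data below is synthetic and illustrative.
import pandas as pd

# Synthetic quote data: 'area_score' is the neutral-looking rating factor,
# 'protected_group' flags membership of a protected class (illustrative).
quotes = pd.DataFrame({
    "area_score":      [0.90, 0.80, 0.85, 0.20, 0.30, 0.25, 0.70, 0.15],
    "protected_group": [1,    1,    1,    0,    0,    0,    1,    0],
})

# Correlation between the rating factor and group membership.
corr = quotes["area_score"].corr(quotes["protected_group"])
print(f"Correlation with protected group: {corr:.2f}")

# A simple governance trigger: flag factors that look like proxies
# for manual review before they go anywhere near a pricing model.
PROXY_THRESHOLD = 0.5  # illustrative cut-off, not a regulatory standard
if abs(corr) > PROXY_THRESHOLD:
    print("Flag: 'area_score' may be acting as a proxy -- review before use.")
```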
While an insurer can check that the inputs to its AI- or machine learning (ML)-based models are in order, the very power of AI that insurers are looking to exploit could still produce unfair outcomes. Consumers are becoming less concerned with whether a firm has done the right things and more focused on whether it has achieved the right outcomes. This is why legal compliance is no longer enough, and effective continuous monitoring frameworks, like the outcome check sketched below, will become necessary. So what else can insurers and other financial services organisations do?
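As a minimal illustration of such outcome monitoring (the customer segments, premium figures and 10% tolerance are hypothetical assumptions, not a prescribed framework), the following sketch compares average quoted premiums across segments and raises an alert when any group's disparity exceeds the tolerance:

```python
# Hypothetical outcome-monitoring check: compare average quoted premiums
# across customer segments and alert on material disparity. The segments,
# figures and 10% tolerance are illustrative assumptions only.
from statistics import mean

# Quotes produced by the pricing model in the latest monitoring window,
# keyed by a customer segment of interest (e.g. tenure band).
quotes_by_group = {
    "new_customers": [310.0, 295.0, 320.0, 305.0],
    "long_standing": [410.0, 395.0, 430.0, 405.0],
}

TOLERANCE = 0.10  # alert if a segment's mean premium deviates >10% from overall

all_quotes = [q for qs in quotes_by_group.values() for q in qs]
overall_mean = mean(all_quotes)

for group, qs in quotes_by_group.items():
    deviation = (mean(qs) - overall_mean) / overall_mean
    status = "ALERT" if abs(deviation) > TOLERANCE else "ok"
    print(f"{group}: mean={mean(qs):.2f} deviation={deviation:+.1%} [{status}]")
```

Run on these illustrative figures, the check would flag the long-standing segment as paying materially more than average, which is exactly the loyalty-penalty pattern the FCA's Market Study is concerned with.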
One additional control could be to incorporate a “tweet test” as part of your ethical challenge over AI/algorithmic use cases. Ask yourself: what would the reaction from the “court of public opinion” be to the outcomes of the proposed algorithm? As evidenced by a viral tweet that “accused Apple’s new credit card of being sexist” (1), which led to an investigation by New York State regulators, reputation-damaging media coverage can occur if you cannot appropriately explain and justify what the algorithm is doing.
(1) Time: https://time.com/5724098/new-york-investigating-goldman-sachs-apple-card/