
Why addressing trust and ethics in AI is important to reap its benefits


The use of models in financial services isn’t new. Rule-based models have typically been used to inform decisions in back- and middle-office functions, such as credit risk models and pricing algorithms for loans and insurance, or in areas such as fraud detection and financial crime. COVID-19 and the rapid increase in remote working have accelerated the development and use of artificial intelligence (AI) across organisations and into consumer interactions.

Throughout 2020, we’ve seen AI deployed to help organisations better anticipate the impact of COVID-19 across the globe and across industry sectors, so that they can respond to it with greater resilience. We have also seen a revitalised focus on the role technology and AI play across the Environmental, Social, and Governance (ESG) landscape.

Within insurance, AI promises a huge leap forward in making better and faster decisions with increased accuracy. It allows organisations to understand their customers better, enhance their products, price risk more accurately and improve efficiency by driving down costs. But AI doesn’t just bring benefits to organisations: the benefits to consumers and wider society are also clear, through examples such as:

  • Faster and more effective claims processing
  • More personalised insurance products and premiums
  • Supporting call-centre agents with AI technology such as chatbots

However, harm is caused by unfair pricing practices in personal lines insurance, unsuitable or poor-value products and services, and remuneration practices in firms that drive down value to the customer. Developments in underwriting practice, including an increasing shift towards more complex modelling techniques such as machine learning, might also lead to problems of access for some customers. For example, using biometric or genetic data for risk modelling could make some consumers uninsurable, removing their access to the pooling of risk.

Trust has long been a defining factor in an organisation’s success or failure, and trust in the insurance sector is already low. Concerns over AI have been fuelled by high-profile cases in which AI systems were biased, discriminatory, manipulative, unlawful or violated human rights. Realising the benefits AI offers requires building and maintaining the public’s trust: citizens need to be confident that AI is being developed and used in an ethical and trustworthy manner. Findings from KPMG’s recent study, Trust in Artificial Intelligence: A Five Country Study, confirm that trust strongly influences AI acceptance, and is therefore important for societal uptake of the technology and the realisation of its benefits.

Of the four key drivers influencing citizens’ trust in AI systems, the perceived adequacy of current regulations and laws was clearly the strongest, highlighting the importance of ensuring adequate regulatory and legal mechanisms are in place to protect people from the risks associated with AI use. Such regulation in turn supports citizen uptake and adoption.

Regulators, trade bodies and government are all actively engaged. In its recently published business plan, the FCA sets out the need to deliver fair value as a key driver underpinning consumer trust, pointing out, through its recent investigations of pricing practices in general insurance, cash savings and mortgages, that “markets sometimes fail to achieve fair value for consumers, some of whom pay a loyalty penalty”.

The FCA’s commitment includes deepening its engagement with industry and society on AI, specifically machine learning (ML), with a focus on how to enable safe, appropriate and ethical use of new technologies. This has been driven by its fundamental concern that practices which achieve higher margins by taking advantage of low customer awareness, characteristics of vulnerability or low levels of engagement are harmful. In addition to controversial practices such as price optimisation, which relies on profiling customers, other risk-related ‘correlations’ are appearing in all sorts of unexpected places with unintended consequences, increasing the risk of consumers experiencing discriminatory outcomes, or simply a level of creepiness they are not comfortable with.

ABI research conducted by Britain Thinks in February 2020 found that consumers approach the use of data in insurance through a double-layered lens of mistrust, and operate with a relatively shallow understanding of how their insurance is priced and what insurers know about them. As their understanding grows, tensions emerge in consumers’ priorities around the use of data and new technology.

Driving the broader financial services industry forward, our joint KPMG-UK Finance report on data ethics (December 2020) sets out five core principles for the ethical use of AI in financial services, together with context to help businesses move towards implementation. However, it appears more is needed. So, what else can insurers and other financial services organisations do?

Outside of any anticipated additional regulation, it’s clear that a focus on education and awareness around both AI and ethics is needed. This should cover the benefits to the consumer, what AI is and isn’t, and how organisations are mitigating potential harm through transparency, explainability and AI auditing.

In addition, increasing diversity within teams, setting performance targets relating to ethics, and establishing ethics boards and councils that bring broad views to challenge use cases can all reduce the risks and provide comfort that appropriate governance frameworks are in place.

To hear more from Abhishek on the evolving role of data ethics in the insurance and long-term savings industry, register for the ABI's young professionals speed networking event taking place on 6 May 2021.

Last updated 30/04/2021