What if you could create, price and market a personalised product offer for every individual customer in a matter of minutes? And do this not just once, but every time each customer’s circumstances changed throughout their lives?
How about being able to identify all the missing information in a complex claim in a matter of seconds and prompt the claimant with the right questions to provide what's required, all in real time? Or what if all underwriting administration could be executed in seconds with unparalleled accuracy?
Transformational scenarios like these, and plenty more besides, are already possible with generative AI. The technology has the potential to transform insurance companies from front to back, having a huge impact on both operations and customer experience.
A new wave of change
No surprise, then, that insurers are urgently exploring optimal use cases for gen AI, as well as how to build generative models and incorporate them into their day-to-day work. However, while the business case for generative AI is powerfully persuasive, insurers need to consider more than its impact on productivity and efficiency.
Without paying attention to the regulatory and ethical context in which generative AI is put to work, the negative consequences could be serious and far-reaching. That means not simply regulatory fines (which could be substantial) for breaches of both existing and emerging rules, but also reputational damage which, while less easy to quantify, is arguably every bit as costly in the longer term.
A regulatory landscape for generative AI
The regulatory environment for generative AI in the insurance industry is still taking shape. But it’s already clear that insurers will have to navigate an intricate route to ensure that they remain compliant with the letter and spirit of regulations designed to protect customers.
Customer data, for example, is already subject to strict privacy and security standards thanks to GDPR. The EU Artificial Intelligence Act, adopted by the European Parliament in March 2024, means that both regulators and consumers have the right to know if and how any assessments or decisions were made by AI. That puts the fairness and accountability of decision-making centre stage. Insurers need to make sure they have the right reporting mechanisms in place, along with repeatable workflows that support the transparency regulators will increasingly demand.
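To make that concrete, here's a minimal sketch in Python (with entirely hypothetical field names, not drawn from any specific regulation or product) of the kind of audit record such reporting mechanisms might capture for each AI-assisted decision, so it can later be explained to a regulator or a customer:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    """One auditable entry per AI-assisted decision (hypothetical schema)."""
    customer_id: str
    model_name: str        # which model produced the output
    model_version: str     # exact version, so the decision is reproducible
    inputs: dict           # the data the model actually saw
    output: str            # what the model recommended or generated
    human_reviewer: str    # who verified the output before it was acted on
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example: recording an AI-assisted underwriting recommendation
record = AIDecisionRecord(
    customer_id="C-1042",
    model_name="underwriting-assistant",
    model_version="2024-03-01",
    inputs={"age": 42, "policy_type": "term life"},
    output="Refer to manual underwriting",
    human_reviewer="j.smith",
)
```

However such records are actually stored, the point is repeatability: every AI-influenced decision can be traced back to a specific model version, specific inputs and a named human reviewer.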
The responsible AI framework
So how can insurers go about realising the huge gains that generative AI promises while also making sure that its use meets the required standards for security and transparency? The answer is to ensure that generative AI is developed and implemented within a responsible AI framework. This establishes the ethical guidelines and guardrails that not only maximise regulatory compliance, but also underpin trusted relationships with customers.
Responsible AI is a key element in the process of building trust. At a 2023 global summit within the World Economic Forum framework – with Cognizant among the contributors – experts and policymakers delivered 30 recommendations for the responsible stewardship of gen AI.
One key area where responsible AI is essential? Fairness, bias and veracity of data. A generative AI model's outputs can only ever be as reliable and accurate as the data used to train it, and any residual bias in that data will be replicated in the content the model creates. That makes data governance imperative, especially data traceability and testing outputs for veracity. Only once there is full confidence in the underlying data and its security should any experimentation with generative AI be contemplated.
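As a simple illustration of what testing for bias can look like in practice (the data, group labels and threshold below are invented for the example), one common first step is to compare outcome rates across groups in the historical data before any model is trained on it:

```python
from collections import defaultdict

def approval_rates_by_group(records, group_key="group", label_key="approved"):
    """Compute the share of positive outcomes per group in a dataset."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += int(r[label_key])
    return {g: positives[g] / totals[g] for g in totals}

# Toy historical claims decisions (invented for illustration)
history = [
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 1},
    {"group": "B", "approved": 1},
    {"group": "B", "approved": 0},
]

rates = approval_rates_by_group(history)
# A gap above an agreed threshold flags the data for investigation
if max(rates.values()) - min(rates.values()) > 0.2:
    print(f"Potential bias in training data: {rates}")
```

A gap in outcome rates is not proof of unfairness on its own, but it is exactly the kind of signal that data governance processes should surface and investigate before training begins.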
A human in the loop – always
Approaching the development of generative AI solutions with a responsible AI framework enables insurers to proceed with the confidence that they are addressing potential risks as clearly and comprehensively as possible.
Essential to that goal is the continued presence of a 'human in the loop'. While there may come a day when generative AI adds infallibility to its many existing advantages, we are not there yet. The technology will work alongside people to augment and expand their capabilities, but it won't replace them. Process design must take that into account and ensure that generative AI's outputs are always subject to human verification: checking not only that they are correct, but that they are equitable and reflect an organisation's values. It's only when people and technology work closely together that those outcomes can be achieved.
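As a sketch of what that process design can look like (the function and queue names are hypothetical, and we assume the model exposes some confidence score), the essential property is that no branch sends a generated output straight to a customer:

```python
from dataclasses import dataclass

@dataclass
class Draft:
    """A generated output awaiting human verification (hypothetical)."""
    text: str
    model_confidence: float  # the model's own confidence score

def route_draft(draft: Draft, review_queue: list, escalation_queue: list,
                threshold: float = 0.9) -> None:
    """Every draft goes to a person; confidence only decides which person.

    Note there is no branch that sends a generated output directly
    to a customer without human sign-off.
    """
    if draft.model_confidence >= threshold:
        review_queue.append(draft)      # routine human sign-off
    else:
        escalation_queue.append(draft)  # senior specialist review

# Example usage with two drafted claim responses
review, escalate = [], []
route_draft(Draft("Your claim is missing a repair invoice.", 0.95), review, escalate)
route_draft(Draft("Coverage determination is unclear.", 0.55), review, escalate)
```

Confidence scores decide which person reviews a draft, never whether a person reviews it.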
Real diversity
Responsible AI also has implications beyond the outputs of a particular AI. Teams responsible for the development of AI models and tools must also reflect a real diversity of viewpoints and experiences. That’s important to help ensure that bias is surfaced before a solution is created and that those solutions address the widest possible spectrum of users’ needs.
People are also at the heart of the impact that AI will have on future roles and employment in insurance. The industry, in common with many other sectors, will see huge changes driven by AI over the next few years. By maintaining an ethical and responsible approach, insurers can ensure that the coming transformation maximises positive results for organisations, employees and the communities they support.