


The EU AI Act is the first comprehensive piece of legislation to regulate AI systems. Here’s what it means for insurers, and how they can keep innovating with AI as regulation tightens.

AI is fast taking us into new realms of possibility. Each day seems to bring new leaps forward—and new horror stories. For every AI model that can detect a tumour or predict the location of a landmine, there’s one that can fake an insurance claim or misidentify a shopper as a thief.

The potential for innovation may be huge, but so is the potential for harm—which is why we’re starting to see tighter regulation around the use of AI. The European Union AI Act, which comes into force this summer, will be the first to impact innovators, but others won’t be far behind.

Finding a balance between innovation and regulation

AI pioneers in the insurance sector will need to find a balance between innovation and regulation: between developing valuable new applications of AI and ensuring those applications meet regulatory standards for safety, security, privacy, inclusivity, ethics and consumer rights.

This needn’t mean putting the brakes on innovation. The COVID-19 vaccines showed that even in the most highly regulated industries, groundbreaking products can be launched quickly and safely. Insurers developing AI-enabled solutions now stand to reap substantial benefits—not just by making existing processes more efficient, but by developing new revenue streams, such as hyper-personalised insurance.

But to make the most of AI’s promise, software engineering processes will almost certainly have to change. Let’s have a look at what developers will need to bear in mind.

The EU AI Act and Liability Directives: A quick guide  

The Act is the world’s first comprehensive legislation aimed at regulating the use of AI, and will likely serve as a benchmark for others to follow or adapt. Its aim is to prevent accidental or deliberate harm by AI systems, and to do this it takes both a rules-based and a risk-based approach.

In terms of rules, Article 5 explicitly forbids certain uses of AI as they pose an ‘unacceptable risk’ of harm. These include tracking a person’s behaviour in a way that could result in discrimination against them; using biometric information to ascertain a person’s race, sexual orientation or beliefs; and most uses of real-time facial recognition or remote biometric identification (RBI) in public places.

Know the risk level of your AI systems

For all other uses of AI, the Act takes a risk-based approach. This requires developers and deployers to assess, monitor and disclose the risk of any AI system—and especially to identify whether it falls into a ‘high-risk’ category. High-risk uses include creditworthiness assessment in banking and risk assessment and pricing in life and health insurance, making AI risk management a non-negotiable activity for insurers.
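
To make that classification step concrete, here is a minimal sketch, in Python, of how an insurer’s engineering team might tag each AI use case with a risk tier and derive the work it triggers. The tier names, use-case names and obligation checklist are illustrative assumptions, not wording from the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers loosely mirroring the Act's structure (not the legal text)."""
    UNACCEPTABLE = "prohibited"   # Article 5 practices: do not build or deploy
    HIGH = "high-risk"            # e.g. risk assessment and pricing in life/health insurance
    LIMITED = "limited-risk"      # transparency duties, e.g. customer-facing chatbots
    MINIMAL = "minimal-risk"

# Hypothetical mapping of insurance use cases to tiers, for illustration only.
USE_CASE_TIERS = {
    "life_and_health_pricing": RiskTier.HIGH,
    "claims_enquiry_chatbot": RiskTier.LIMITED,
    "internal_document_search": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> list[str]:
    """Return a rough, non-exhaustive checklist implied by the assigned tier."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)  # unknown use case? default to the stricter tier
    if tier is RiskTier.UNACCEPTABLE:
        return ["do not deploy"]
    if tier is RiskTier.HIGH:
        return ["risk management system", "data governance", "technical documentation",
                "logging", "human oversight", "conformity assessment"]
    if tier is RiskTier.LIMITED:
        return ["transparency notice to users"]
    return ["voluntary codes of conduct"]

print(obligations_for("life_and_health_pricing"))
```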

The Act also covers the use of general-purpose AI models as the underlying architecture of AI products. Developers using models like OpenAI’s GPT-4 will need to keep detailed documentation, educate partners on the functionality and limits of the tools, and identify and label models that carry ‘systemic risk’ – i.e., models that could cause catastrophic harm.
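
As a sketch of what that record-keeping might look like in practice, the snippet below defines a simple documentation entry for each general-purpose model a product relies on. The field names and the example values are assumptions for illustration, not a template prescribed by the Act.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelIntegrationRecord:
    """Illustrative documentation entry for a general-purpose model used in a product."""
    model_name: str                 # provider's model name, recorded as-is
    provider: str
    intended_use: str               # what the product actually uses the model for
    known_limitations: list[str]    # limits communicated to partners and users
    systemic_risk: bool             # flagged if the model is labelled as carrying systemic risk
    last_reviewed: date
    evidence: list[str] = field(default_factory=list)  # links to eval reports, provider docs

record = ModelIntegrationRecord(
    model_name="gpt-4",
    provider="OpenAI",
    intended_use="drafting first-pass claim summaries for human review",
    known_limitations=["may misstate policy terms", "no access to live policy data"],
    systemic_risk=False,
    last_reviewed=date(2024, 7, 1),
)
print(record)
```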

New rules around AI liability

If a regulator determines that the developer of an AI system has failed to comply with the Act, it will be able to impose fines of up to 7% of global annual turnover for the most serious breaches. However, regulators won’t be able to compensate affected parties for harm caused. That’s the role of the two AI-related Liability Directives:

The EU AI Liability Directive will enable non-contractual compensation claims against any person or legal entity for harm caused by AI systems where that harm was due to the fault or omission of that person or entity. In certain circumstances, the onus will be on that person (or legal entity) to prove that their fault or omission did not cause the harm, rather than on the claimant to prove that it did.

In terms of the kinds of cases this might cover, an interesting recent example involved Air Canada. When its chatbot wrongly offered a customer a discount, the company tried to argue that the chatbot was a separate legal entity responsible for its own actions, and thus that the airline was not liable for the discount. The tribunal didn’t see it that way—a cautionary tale for deployers of AI systems!

The revised EU Product Liability Directive will enable civil compensation claims against manufacturers and importers for harm caused by defective software and AI products, or by products that incorporate software and AI systems. Again, the onus will be on the manufacturer to prove that the defect did not cause the harm, rather than on the complainant to prove that it did.

Time to rethink risk management and compliance

So what does all of this mean for AI developers and innovators? Firstly, since much of the Act is risk-based, it places a significant burden on the developer to evaluate the risks inherent in the AI system and to disclose, monitor and report on them as necessary.

For many, this will require a new approach to risk management. A best-practice risk management framework is one that is flexible enough to adapt to different (and evolving) regulatory regimes, and that spans the whole risk management lifecycle, from identifying and assessing risks to mitigating, monitoring and reporting on them.
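
One way to picture that lifecycle is as a small state machine that every identified risk has to move through, with monitoring able to send a risk back for re-assessment. The stage names and transitions below are a sketch under those assumptions, not a prescribed framework.

```python
from enum import Enum, auto

class RiskStage(Enum):
    """Hypothetical lifecycle stages for a risk register entry."""
    IDENTIFIED = auto()
    ASSESSED = auto()
    MITIGATED = auto()
    MONITORED = auto()
    REPORTED = auto()

# Allowed transitions: a risk cannot be reported as handled before it is assessed and mitigated,
# and monitoring can push it back to re-assessment if conditions change.
ALLOWED = {
    RiskStage.IDENTIFIED: {RiskStage.ASSESSED},
    RiskStage.ASSESSED: {RiskStage.MITIGATED},
    RiskStage.MITIGATED: {RiskStage.MONITORED},
    RiskStage.MONITORED: {RiskStage.REPORTED, RiskStage.ASSESSED},
    RiskStage.REPORTED: {RiskStage.MONITORED},
}

def advance(current: RiskStage, target: RiskStage) -> RiskStage:
    """Move a risk to the next stage, refusing transitions that skip steps."""
    if target not in ALLOWED[current]:
        raise ValueError(f"cannot move a risk from {current.name} to {target.name}")
    return target

stage = advance(RiskStage.IDENTIFIED, RiskStage.ASSESSED)
print(stage.name)
```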

Compliance processes may also need to be overhauled. The EU AI Act emphasises the need for transparency around AI systems, making it vital to maintain detailed documentation and implement robust data governance. In an AI world, that means ensuring that data used to train or operate AI systems is properly managed, stored, and used, and that privacy and security are protected.

Maintain speed of innovation with ‘agile compliance’

Insurers who start now with AI innovation can refine their models, learn fast, iterate fast and stay ahead of the game. But being fast must be matched with being smart and compliant. No insurer wants to see their investment go down the drain or have a project derailed by a compliance issue.

The way forward is to change the way risk and compliance are involved in the software development process. Today, it works a lot like the waterfall software development model of old: first the product gets built, then it goes ‘over the wall’ to QA – or, in this case, to risk and compliance.

That leads to what I call the ‘59th-minute’ problem. If you have 60 minutes to launch a product, you don’t want to find out in the 59th minute that it needs to be substantially rebuilt for compliance reasons, or scrapped altogether.

A better way is to move to what might be called ‘agile compliance’ – having risk and compliance professionals and developers collaborate from the start, so AI systems are risk-managed at every stage, all the right documentation is produced, and there are no delays or nasty surprises at the end.
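
What that collaboration can look like in the pipeline itself: a small gate that runs on every build and fails early when agreed compliance artifacts are missing, rather than at the 59th minute. The file names below are hypothetical examples of what risk and compliance might ask for; a real gate would check whatever artifacts the two teams agree on.

```python
import sys
from pathlib import Path

# Hypothetical artifacts agreed with risk and compliance; adapt to your own process.
REQUIRED_ARTIFACTS = [
    "docs/risk_assessment.md",    # current risk classification and mitigations
    "docs/model_card.md",         # model documentation, including known limitations
    "docs/data_governance.md",    # provenance and handling of training/operational data
]

def compliance_gate(repo_root: str = ".") -> int:
    """Return 0 if every required artifact exists, 1 otherwise (CI fails on non-zero)."""
    missing = [p for p in REQUIRED_ARTIFACTS if not (Path(repo_root) / p).is_file()]
    for path in missing:
        print(f"compliance gate: missing required artifact: {path}")
    return 1 if missing else 0

if __name__ == "__main__":
    sys.exit(compliance_gate())
```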

With the EU AI Act now coming into force, and more regulation to follow, we’re finding more clients asking if we can work like this with them on development projects. After all, it could mean the difference between launching on time and being beaten to it by a competitor.


Hellen Beveridge

Global Responsible AI Governance Delivery Lead, Cognizant
