Skip to main content Skip to footer
Cognizant logo


June 10, 2024

How banks and insurers can avoid the top 10 generative AI risks

To capitalize on generative AI, financial institutions and insurers need to recognize the biggest risks this technology introduces and understand how to mitigate them.


There’s intense pressure on banks, financial services and insurance (BFSI) firms to invest in generative AI. From JPMorgan Chase to The Travelers to Italy’s Intesa Sanpaolo, BFSI businesses are pouring millions of dollars into generative AI initiatives. The goal: boost customer service, adviser proficiency, systems development capabilities and process efficiency and effectiveness.

To capitalize on the upsides and reduce the downsides of generative AI in banking, finance and insurance, however, BFSI leaders need to recognize the 10 biggest risks this technology can introduce to their business. Mitigating these risks begins in system design—long before a stitch of code is written or generated—and continues through systems development, operations and user adoption and training.

Mitigating the 10 biggest generative AI risks

We’ve categorized the biggest risks of generative AI in banking, finance and insurance into three main groups—unintended consequences, market evolutions and human nature—and provided some specific ways for BFSI organizations to avoid them. (For a more detailed view into the 10 risks and specific mitigation strategies, see our eBook, “How financial firms can maximize value and minimize risk with gen AI.”)

Category 1: unintended consequences

Financial services firms and insurers don’t set out to inject bias into their large language models (LLMs), infringe on others’ intellectual property or expose their proprietary information. But these are all risks they take when using generative AI.

Here’s a closer look at these unintended consequences and how to mitigate them.

1)    Misplaced trust: Generative AI is all too capable of producing inaccurate information or biased answers. One way to instill trust is through prompt design strategies. For example, prompt engineers could specify information that should not be included in a response or create prompt design templates to ensure greater predictability.

BFSIs also need to design privacy into the system. This entails everything from data traceability (ie, clearly understanding where the data resides that’s involved with generating a response), adherence to data retention and deletion regulations, and clearly communicating to customers how the financial institution will use their data.

Further, BFSIs should conduct regular performance and quality audits of the AI system and involve employees themselves in checking generative AI outputs.

2)    IP infringement: Public LLMs trained on third-party content could expose financial services firms to copyright infringement. Beyond content, this could also apply to work processes and patented software programs.

Mitigating this risk is a work in progress. Systems designers should maintain detailed documentation of their design decisions and work with the legal department to make sure the LLMs they use aren’t trained on copyrighted code.

BFSIs should also conduct spot checks of employee awareness by randomly issuing assignments that would result in infringement if the employee went through with it without question.

3)    IP loss: Generative AI systems that use public models trained on sensitive or confidential data could expose the BFSI’s proprietary information to competitors.

The best protection against IP loss is to specify a system development environment that either automates controls for IP leakage or isolates sensitive IP from other systems. Designers should make sure the development environment automates the tagging and filtering of protected IP to ensure it cannot be included in data sets exposed to public LLMs.

4)    Orphan code: Generative AI may one day enable non-techies to become programmers. While there are many benefits to this democratization of software engineering, it could also result in orphan code: code that is abandoned when the creator leaves the organization but still must be maintained by the corporate IT function.

To avoid this, BFSIs should standardize on how generative AI systems are designed and built. For instance, users could be given standard prompt templates and libraries of reusable prompts and embeddings, in addition to an AI-enabled tool that recommends library items relevant to their needs.

Category 2: market evolutions 

Much is in flux in these early days of generative AI, from regulations, to vendor viability to the competitive arena. Here’s how to mitigate risk in all three of these areas:

5)      Regulatory reflux: Globally, regulations on data privacy, generative AI use and related issues are still in their infancy. For financial institutions operating across borders, this is an area of great risk.

BFSIs should consider implementing an LLM dedicated to comparing existing and emerging regulations against existing rules and suggest modifications and additions. This will be familiar work to the many financial institutions already using deep-learning algorithms to monitor for regulatory compliance.

6)      Tool/vendor roulette: Choosing generative AI vendors with staying power is a risky proposition given the technology’s embryonic state. A generative AI platform that files for bankruptcy in three years is not likely to be as easy to maintain as one whose owner has a thriving business.

In general, the best defense against overreliance on a single vendor is to design a loosely coupled, modular architecture that separates various functions such as data preprocessing, feature extraction and the actual generative model.

7)      Unsustainable advantage: Many of today’s experiments and pilots utilize readily available solutions from hyperscalers and other product vendors. They are often focused on capabilities that will become rapidly commoditized (i.e., chatbots, document summary tools, etc.). If nearly every company is using the same tools and infrastructures, sustainable advantage can rapidly shrink.

BFSIs should determine whether they have the assets and risk appetite to design solutions capable of maintaining sustained advantage. For those that do, the focus should be on differentiating the architecture and operating model rather than functionality.

For those that don’t, the design focus should be on developing “ecosystem assembly strategies” that help them affordably keep pace with market expectations and services become commoditized. Or they should focus on safely leveraging proprietary data and creative prompt design to generate unique outputs even when designing for commercially available LLMs.

Category 3: human nature

The success of a generative AI implementation is highly dependent on the humans involved. But the very fact that humans are involved means expectations can be overblown, cyber criminals will seek a way in, and employees and customers will reject the new way of working or transacting. Here’s how to mitigate the top risks caused by humans.

8)      Audacious overreach: Overly ambitious objectives can lead to both governance challenges and speculative investments. If early returns fall short of expectations, initial excitement can quickly turn into skepticism.

To guard against this outcome, BFSIs should devise a formal systems design philosophy, along with supporting methodologies and playbooks, that embrace experimentation and innovation simulation. They should also create explainable estimation models that help communicate the cost implications (both build and run) of requested solutions.

Designers need to create models that help non-technologists understand the implementation and operational costs, using reality-based metrics (i.e., lowering transaction costs, elevating customer satisfaction scores). The management team could then greenlight designs with a much higher probability of success in moving the needle on business impact.

9)      Malicious behavior: Every time a new technology tool emerges, cyber criminals figure out how to abuse it—sometimes much faster than the good actors do.

If generative AI has visibility into system configurations and timings of configuration changes, it could reveal when the windows of vulnerability are open. Designers, therefore, should design systems to track logins for a change request. The design should allow humans to see the precise time of each login and whether the change request was implemented. This would allow IT operations to see which files were changed and what was changed within them.

Another way to stymie potential jail breaks is to understand the implications of prompt injections, and to design in ways that prevent them.

10)  Organ rejection: There are many reasons why employees, customers or business partners could be slow to adopt, or even reject, generative AI-based solutions. These include a lack of clarity about company and regulatory policies, low confidence in system outputs, poor education on leveraging the new capabilities, or a lack of trust in employer intentions.

To minimize gen AI rejection, businesses should focus on usability design to ensure these systems augment knowledge workers’ experience and judgement. A core assumption of the design should be that humans are empowered to override machine-generated output. This is particularly true for knowledge workers who are highly educated and compensated but fear being displaced by such systems.

Moving forward with generative AI in banking, finance and insurance

As generative AI use expands across the BFSI industry, financial firms will need to create a generative AI design guide that highlights the organization’s gen AI vision, ethical code, business objectives and the numerous risks that can undermine them.

By entering into their generative AI initiatives with their eyes wide open to these 10 risks, BFSIs are in a prime position to capitalize fully on this powerful technology.

For a full depiction of generative AI opportunities and risks in banking, finance and insurance, see our reports, “Capitalizing on generative AI,” and “How financial firms can maximize value and minimize risk with gen AI.”
 


 



Ed Merchant

Vice President, Banking and Capital Markets

Author Image

Ed is a Vice President in the Banking and Capital Markets Group. He is responsible for advising CIO and CTOs on execution strategies for technology-driven operational improvement, transformation and innovation initiatives. He participates both as a Consultant and a Delivery Leader.

ed@cognizant.com




Babak Hodjat

CTO, AI

Babak Hodjat

Babak Hodjat is CTO of AI at Cognizant and former co-founder & CEO of Sentient. He is responsible for the technology behind the world’s largest distributed AI system and was the founder of the world's first AI-driven hedge fund.



Latest posts

Focus on customer with digital banking solutions

Visit the Banking section of our website.

A woman making a digital payment holding a bank card

Related posts

Subscribe for more and stay relevant

The Modern Business newsletter delivers monthly insights to help your business adapt, evolve, and respond—as if on intuition