

November 29, 2022

The governance ‘must-haves’ to scale AI in life sciences

Here are three guiding principles for building an AI governance model that will help you meet AI regulatory, risk and ethical challenges.


The role of artificial intelligence and machine learning (AI/ML) in life sciences is moving rapidly from “potential” to “innovative” to “necessary.” And as it does, the industry’s obligation to take a responsible, compliant approach to the technology becomes ever more important.

While the US Food and Drug Administration (FDA) and other regulatory bodies work toward mapping the borders of AI regulation, breakthrough AI/ML-driven solutions, such as the approach a recent study identified as a promising tool in the fight against breast cancer, have begun to emerge. However, unless companies invest in working proactively with regulators to establish clear AI governance structures that regulate and scale AI usage, it may be decades before these breakthrough solutions are sophisticated enough to overcome data biases and convince regulators to allow commercial or clinical use.

Most life sciences organizations understand they need to adjust and align their internal governance processes to manage these challenges. For instance, they need to prepare for the concrete steps the FDA has outlined to build a fundamental AI/ML regulatory structure. The question is how to do that.

We recommend implementing a separate, dedicated AI governance framework. Pharma companies can use this framework to assess their current governance, decision-making and risk management models, and then seek guidance on the gaps they identify. Ultimately, this will enable them to deploy AI solutions more effectively and at scale.

Three AI governance must-haves

Here are three guiding principles to consider when establishing an AI governance model:

1. A well-defined internal AI governance structure. To ensure accountability and oversight for the use of AI technologies, organizations should take three actions:

  • Choose a governance model: The organization’s existing goals, values, ethical considerations and risk appetite all need to be aligned with its use of AI. We advise adopting a hybrid model using parts of your existing governance setup and decision-making tools alongside newly developed structures for decentralized AI governance and accountability.

    For example, you can leverage your existing enterprise risk management structure to manage the risks associated with AI, and set up a separate committee to specifically assess AI ethical challenges.

  • Build the right team: Involve employees across the AI lifecycle in governance practices, from the AI solution developer to the domain expert to the end user. Doing so not only ensures transparency of the AI model but also helps maintain a balance between reliance on ML algorithms and human oversight.

    For instance, organizations can deploy small cross-functional teams across various stages of AI development and deployment, each with a specific set of roles, responsibilities and accountability. A technical team of data scientists, medical personnel and domain experts could work in tandem with legal counsel and compliance officers to address ethical issues during AI development and testing, and align on the adoption of AI best practices.

  • Manage stakeholder communication: Transparent communication inspires trust, confidence and acceptance among employees and customers, an essential ingredient of successful AI adoption. Create a holistic communication framework composed of stakeholder-specific communication policies that clearly outline roles and responsibilities.

    For instance, for external stakeholders like customers and regulators, communication should build awareness of how much the products and services they use depend on AI and how that AI behaves. It should also include a robust feedback channel to gather insights on the accuracy of AI-augmented decisions.

    For internal stakeholders, communication should build awareness of the risks, ethical evaluation results and expected impact of the AI tools under development.

2. Robust risk management and internal controls. Establish a risk committee consisting of data governance personnel, compliance experts, privacy officers, security officers and ethics experts to continually identify and review risks relevant to the organization’s AI technology solutions. Perform periodic risk impact assessments, mitigate those risks, and maintain a response plan should mitigation fail.

Three major categories of AI deployment-related risks require a specific mitigation strategy:

  • AI data-related risks: For data issues related to bias, poor quality, unreliable sources, redundancy and obsolescence, organizations can establish an AI data governance team as a subset of the general data governance team. This team should be responsible for managing data as a strategic asset for the company and for developing and reviewing the related governance processes and policies.

  • AI knowledge gaps: When key personnel move in and out of the organization, gaps can open in the AI governance structure. Maintain standard knowledge management documentation, create knowledge transfer templates and deploy regular mandatory trainings to ensure proper knowledge transfer.

  • AI performance risks: Because AI algorithms continuously evolve, the performance of an AI model is often difficult to benchmark, which fuels skepticism about adoption.

    Establish monitoring and reporting systems and processes to keep key leadership up to date on how the deployed AI is performing. This can include autonomous monitoring, where appropriate, to scale human oversight effectively. AI systems can also be designed to report the confidence level of their predictions, which builds the required levels of trust among end users and company leadership.
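To make this concrete, here is a minimal Python sketch of prediction-confidence reporting. It assumes a scikit-learn-style classifier that exposes predict_proba; the 0.80 threshold, logger name and synthetic data are illustrative assumptions, not part of any regulatory guidance.

```python
# Minimal sketch: surface prediction confidence and flag low-confidence
# cases for human review. The 0.80 threshold is a hypothetical choice.
import logging

import numpy as np
from sklearn.linear_model import LogisticRegression

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_monitoring")

CONFIDENCE_THRESHOLD = 0.80  # illustrative cutoff for human review


def predict_with_confidence(model, X):
    """Return predictions and confidence scores, logging cases that
    fall below the review threshold."""
    probabilities = model.predict_proba(X)
    predictions = probabilities.argmax(axis=1)
    confidences = probabilities.max(axis=1)
    for i, confidence in enumerate(confidences):
        if confidence < CONFIDENCE_THRESHOLD:
            logger.info("Sample %d flagged for review (confidence %.2f)",
                        i, confidence)
    return predictions, confidences


# Toy usage with synthetic data standing in for a real clinical dataset.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 4))
y_train = (X_train.sum(axis=1) > 0).astype(int)
model = LogisticRegression().fit(X_train, y_train)
predictions, confidences = predict_with_confidence(model, rng.normal(size=(5, 4)))
```

Reporting a confidence score alongside each prediction gives reviewers a concrete trigger for stepping in, rather than asking them to second-guess every output.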

3. AI operations and feedback governance, including AI model development, testing, performance enhancements and feedback processes. Because of the continuously evolving nature of AI algorithms, it’s essential to establish specialized processes for data modeling, algorithm selection and change control:

  • Development & testing platforms: Relevant departments with responsibility for data quality, model training and model selection should work together to establish sound data accountability practices. These teams should also propagate FDA-proposed good machine learning practices (GMLPs) for data management, feature extraction, training, interpretability, evaluation and documentation, throughout the organization.

    Organizations can also implement standardized AI code evaluation and deployment practices, such as automated code testing (a sketch follows this list), annual penetration tests by an independent third-party security firm, and post-mortem analyses. Doing so helps identify root causes, put controls in place to keep the development and testing process reliable, and ultimately build the trust of regulators and users in the organization’s AI models.

  • Pre-determined change control plan: To establish the safety and effectiveness of the outcomes of AI algorithms, and to implement responsible performance enhancements, organizations need to properly document how model training and selection are conducted. They also need to document the reasons decisions are made, as well as the changes (whether related to performance, inputs or intended use) that the organization expects to achieve once the AI solution is in use.

    The first step is ensuring compliance with the FDA’s guidance on AI operations governance (i.e., the pre-determined change control plan). Since such regulatory guidelines are dynamic, the organization needs to stay aware of changes and adapt accordingly. An illustrative change control record also follows this list.

  • Real-world performance monitoring: Deploy centralized real-world data collection and monitoring pathways to understand how your AI models are used, identify areas of improvement, and respond to safety and usability concerns (a simple drift-detection sketch appears below). The FDA foresees that evaluations performed during real-world performance monitoring will allow for the development of thresholds and performance evaluations for AI/ML-based software as a medical device (SaMD), particularly with regard to safety and usability.
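For the development and testing practices in the first item above, an automated pre-deployment check might look like the following pytest-style sketch. The 0.90 accuracy floor and the synthetic validation data are assumptions for illustration only, not a prescribed release criterion.

```python
# Sketch of an automated pre-deployment gate: the release pipeline runs
# this test and blocks deployment if the model misses the accuracy floor.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

ACCURACY_FLOOR = 0.90  # hypothetical release criterion


def test_model_meets_accuracy_floor():
    # Synthetic stand-in for a curated, held-out validation set.
    rng = np.random.default_rng(42)
    X = rng.normal(size=(500, 4))
    y = (X.sum(axis=1) > 0).astype(int)
    X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=42)

    model = LogisticRegression().fit(X_train, y_train)
    accuracy = model.score(X_val, y_val)

    assert accuracy >= ACCURACY_FLOOR, (
        f"Accuracy {accuracy:.2f} is below the release floor"
    )


if __name__ == "__main__":
    test_model_meets_accuracy_floor()
```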
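For the pre-determined change control plan in the second item, the required documentation might be captured in a structured record along these lines. The fields below are our assumptions about what such a record could hold; they are not an FDA-mandated schema.

```python
# Illustrative structure for documenting a planned model change; every
# field here is an assumed example, not a regulatory requirement.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ChangeControlRecord:
    model_version: str
    change_type: str        # e.g., "performance", "input", "intended use"
    rationale: str          # why the change is being made
    expected_impact: str    # the improvement the organization expects
    validation_plan: str    # how safety and effectiveness will be re-verified
    approved_by: str
    logged_on: date = field(default_factory=date.today)


record = ChangeControlRecord(
    model_version="2.1.0",
    change_type="performance",
    rationale="Retrain on an expanded, demographically broader dataset",
    expected_impact="Lower false-negative rate across patient subgroups",
    validation_plan="Re-run the held-out clinical validation suite",
    approved_by="AI risk committee",
)
```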
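And for real-world performance monitoring, one simple building block is input-drift detection, sketched below. The Kolmogorov-Smirnov test and the alert threshold are illustrative choices on our part, not methods the FDA prescribes.

```python
# Minimal sketch of real-world input-drift monitoring: compare live
# feature values against the training-time baseline distribution.
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01  # hypothetical alert threshold


def feature_has_drifted(baseline: np.ndarray, live: np.ndarray) -> bool:
    """Flag drift when live data no longer matches the baseline."""
    _statistic, p_value = ks_2samp(baseline, live)
    return p_value < DRIFT_P_VALUE


rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, size=2000)  # distribution seen in training
live = rng.normal(0.5, 1.0, size=500)       # shifted production inputs
if feature_has_drifted(baseline, live):
    print("Drift detected: route to safety and usability review")
```

A drifted input distribution does not prove the model is failing, but it is a cheap early signal that real-world conditions have moved away from what the model was validated on.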

Imperative or choice?

Organization-wide AI adoption is a relatively new concept in life sciences. At this early point, then, a separate governance model for AI can feel more like a choice than a requirement. In many ways, however, it is very much required.

Establishing an AI governance model not only streamlines the organization’s AI deployment processes but also builds regulator and customer trust. It provides a regulated space for AI experts, health experts and data scientists to innovate and, eventually, improve health outcomes. It is the first step toward ensuring future compliance with regulations that are still being written.

To learn more about our three-pronged approach to tackling AI regulations, please see our report, “AI regulation is coming to life sciences: three steps to take now.”

Rohan Desai and Lakshay Bhambri—both members of Cognizant’s Life Sciences Consulting Practice—also contributed to this blog post.



Shirali Desai
Life Sciences Consultant

Shirali Desai is a Manager in the Cognizant Life Sciences Consulting India Practice. She has 8+ years of life sciences consulting experience in the areas of digital transformation, process optimization, ERP integration and digital health.

ShiraliJitesh.Desai@cognizant.com



Mini Nair
Life Sciences Consultant

Mini Nair is a Senior Manager in the Cognizant Consulting India Practice. She has 19+ years of life sciences consulting experience managing strategic and digital transformation projects, drawing on her domain acumen and transformation management competencies.

Mini.Nair@cognizant.com

