

December 15, 2023

4 ways to embed ethics into generative AI

Ethical AI is not a box-checking exercise. It requires a new mindset that balances innovation with responsibility.


What do hiring managers look for in a new hire? Experience and education? “Soft skills” like problem-solving or collaboration? Well, a speaker at the recent International AI Summit in Brussels revealed that, for one AI-enabled recruiting tool, the top two factors “most determinative of job performance” were the name Jared and a history of playing lacrosse in high school.

This particular AI model, though a few years old and presumably not in use today, remains a cautionary tale about how quickly bias can creep into business and society at large if AI is left to its own devices. As the example shows, AI models are extremely adept at finding patterns within data, but they cannot distinguish correlation from causation. That gap is what makes checks, balances and oversight critical whenever AI is used.
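To see how easily this happens, consider a minimal sketch in Python. Everything here is fabricated for illustration (the features, labels and their correlation come from synthetic data, not from any real recruiting tool); the point is simply that a standard classifier rewards a meaningless proxy just as readily as a genuine signal when the two are correlated in the training data.

# Illustrative sketch: a model latching onto a spurious feature.
# All data below is synthetic; no real hiring data is involved.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=42)
n = 1000

# Hypothetical features: years of experience (a plausible signal) and
# a proxy flag ("played lacrosse") that, in this fabricated sample,
# happens to correlate with past hiring outcomes.
experience = rng.normal(5, 2, n)
played_lacrosse = rng.integers(0, 2, n)

# Fabricated historical labels: "good performance" driven partly by
# experience but also tainted by the biased proxy.
label = ((0.3 * experience + 1.5 * played_lacrosse
          + rng.normal(0, 1, n)) > 2.5).astype(int)

X = np.column_stack([experience, played_lacrosse])
model = LogisticRegression().fit(X, label)

# The model weights the proxy heavily, because it sees correlation,
# not causation.
for name, coef in zip(["experience", "played_lacrosse"], model.coef_[0]):
    print(f"{name}: {coef:.2f}")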

With the fast uptake of generative AI, ethical principles and guidelines have become a common talking point. But successfully embedding ethics within AI programs requires more than box-checking. It demands a mindset shift among leaders to ensure models and tools operate fairly, safely and responsibly. It is this ethical mindset that unlocks the true power of AI: innovation aligned with values, in a technology-driven world that is both advanced and humane.

How to create an ethical gen AI program

1. Incorporate ethics checkpoints into development cycles

AI has no inherent intelligence of its own (at least not yet). Instead, it is a tool to augment human intelligence and help us become more productive and effective.


So, even as AI takes on more tasks—and more complex tasks at that—humans must continue to question, at each step, how the output of these tools will affect people and society.

For example, if a model is designed to keep people engaged on a platform, at what point does that engagement become unhealthy? When does it drive overconsumption? Does it leave users open to being exploited? Businesses will always have commercial goals, but these need to be balanced with a consideration of the social impacts.

The most common way to achieve this balance is to maintain a "human in the loop": a person or group of people who consistently review, approve and adapt the model's inputs and outputs. This helps avoid issues like bias and ensures the organization keeps a nuanced, complete understanding of the decisions it is making.
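As a rough sketch of what such a gate might look like in code, the Python fragment below routes flagged outputs to a human reviewer before anything is published. Every name in it (flags_bias, request_human_review, publish) is a hypothetical placeholder invented for this sketch, not part of any real library:

# Minimal human-in-the-loop sketch; all function names are
# hypothetical placeholders, not a real API.
from dataclasses import dataclass

@dataclass
class Review:
    approved: bool
    revised_text: str

def flags_bias(text: str) -> bool:
    # Placeholder for an automated screen (keyword lists, classifiers,
    # fairness metrics); here, a toy rule for illustration.
    return "lacrosse" in text.lower()

def request_human_review(text: str) -> Review:
    # In a real system this would route to a reviewer queue with audit
    # logging; here we simply ask at the console.
    answer = input(f"Approve this output?\n---\n{text}\n---\n[y/n] ")
    return Review(approved=answer.strip().lower() == "y", revised_text=text)

def publish(text: str) -> None:
    print("PUBLISHED:", text)

def handle_model_output(text: str) -> None:
    # Nothing ships without human sign-off once the screen raises a flag.
    if flags_bias(text):
        review = request_human_review(text)
        if not review.approved:
            return  # blocked by the reviewer
        text = review.revised_text
    publish(text)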

For example, when financial services organizations assess a loan application the traditional way, human reviewers consider both credit history and individual circumstances, such as employment changes due to a pandemic. This holistic approach allows factors like repayment history to contribute to a more complete view of the individual. Algorithms, on the other hand, rely on raw data to make decisions and often lack a nuanced, but ultimately very relevant, understanding of human circumstances.

These are precisely the sorts of issues that companies of all kinds need to consider and avoid when integrating generative AI into processes.
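One concrete check a human-in-the-loop team might run on such a system is a disparate-impact comparison of approval rates across applicant groups. The sketch below uses the common "four-fifths rule" heuristic; the groups and approval counts are invented for illustration:

# Disparate-impact check on model approvals, using the common
# "four-fifths rule" heuristic. All counts below are fabricated.
def approval_rate(approved: int, total: int) -> float:
    return approved / total

# Hypothetical loan-approval outcomes for two applicant groups.
group_a = approval_rate(approved=420, total=600)  # 0.70
group_b = approval_rate(approved=280, total=560)  # 0.50

ratio = min(group_a, group_b) / max(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")

# A ratio below 0.8 is a common trigger for human investigation;
# it is a prompt to look closer, not an automatic verdict of bias.
if ratio < 0.8:
    print("Flag for human review: approval rates diverge across groups.")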

2. Gather diverse voices to establish AI governance

Companies cannot address—or likely even comprehend—the full implications of AI using only their existing teams and resources.

Because generative AI is a novel, unregulated technology, it is up to companies to decide how they manage both the input and output of these models, as well as their design, development, operation and adaptation. Every step of this process, from ethically collecting training data to ensuring transparency with consumers and stakeholders, requires not just one specialized skill set but many specialized skill sets working together as a common body.

With this in mind, organizations should draw on the skills and expertise of external voices, such as academia, external counsel, industry consortia, government agencies, sociologists, ethicists and others, to establish a governing body. That body could take the form of a board, working group or steering committee, tasked with developing, implementing and overseeing the governance controls.

At this stage, many Big Tech organizations are setting a visible example on AI ethics. Both Google and Microsoft have clearly outlined the principles and practices guiding their AI programs, and both are leading industry-wide discussions and collaborative efforts to address the challenges of responsible and ethical use. Companies just beginning their AI journey may find it helpful to review the openly published materials these companies offer and use them as a blueprint for their own programs.

3. Envision AI systems that empower people rather than replace them

AI will likely play a central role in much of the work we do in the future. While it may seem the burden is on the workforce to embrace AI, the onus is really on leaders to empower people to do so.

Companies need to upskill and reskill existing employees to enable the leap forward that AI represents. As part of this process, leaders need to explain and demonstrate the value of the technology at both an individual and a corporate level, and create a clear path toward adoption.

One way to embed ethics into generative AI is to train employees on ethical considerations during design and development. This starts with establishing a set of ethical principles and guidelines for AI development and teaching employees how to apply them in their work. These principles should draw on widely accepted ethical norms such as transparency, accountability, fairness and non-maleficence. When they are built into employee training, AI systems are far more likely to operate ethically and to align with societal values and norms.

Likewise, to equip the future workforce with relevant skills and help them understand the implications of this technology, schools need to adapt their curricula to reflect AI's outsized role as a productivity tool and to teach students how to use these new capabilities safely, securely and responsibly.

4. Commit to continuous transparency and accountability

To earn a high level of trust, companies must be transparent and accountable in their use of AI.

What is transparency?

Transparency means being open and honest with stakeholders about what the business is doing with AI and the steps it is taking to be trustworthy, responsible and ethical.

What is accountability?

Accountability means documenting each stage of the journey so that the organization can demonstrate to all stakeholders that those steps were taken.


Because of the dynamic nature of generative AI—both in terms of the technology itself and the regulatory landscape—companies need to continuously reevaluate how they are using AI and the impact of doing so.

For example, algorithms in use today may not comply with legislation arriving in the coming months, such as the EU AI Act. Companies also need to consider that any change to a model, whether to its inputs, training data or underlying mathematics, will affect its outputs. This is another area where a cross-section of expert voices on the governing body is helpful, since they can oversee and manage the many facets of this evolving technology and the broader landscape.
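As an illustration of what continuous accountability might look like in practice, the sketch below records each AI-assisted decision together with the model version and training-data snapshot that produced it, so that later changes to the model remain traceable. The schema and field names are assumptions made for this example, not a prescribed standard:

# Accountability-log sketch: record every AI-assisted decision with
# enough context to reconstruct it later. All fields are illustrative.
import hashlib
import json
from datetime import datetime, timezone

def log_decision(model_version: str, training_data_id: str,
                 inputs: dict, output: str, reviewer: str | None) -> str:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,        # which model produced this
        "training_data_id": training_data_id,  # which data snapshot it saw
        "inputs": inputs,                      # what it was asked
        "output": output,                      # what it answered
        "human_reviewer": reviewer,            # who signed off, if anyone
    }
    line = json.dumps(record, sort_keys=True)
    with open("decision_log.jsonl", "a") as f:
        f.write(line + "\n")
    # A content hash per entry makes tampering with the log detectable.
    return hashlib.sha256(line.encode()).hexdigest()

When regulation or the model itself changes, entries like these let the organization show stakeholders exactly which version of which model, trained on which data, made a given decision.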

Why AI ethics is good for business

As companies explore the incredible potential of generative AI, business and tech leaders have the added responsibility of asking not just whether something can be done, but whether it should be.

Adopting an ethical mindset isn’t just a moral imperative; it also drives innovation, attracts talent, builds consumer trust and ensures compliance with emerging regulations.


The path forward lies in partnership: meeting leaders where they are but also inspiring a vision of AI built ethically to improve lives. With care, wisdom and foresight, we can build an AI future we can trust—one in which ethics moves from box-checking to mindset shifts and where technology reflects the best of our humanity.

Do you have questions about how to realize the potential of gen AI through an ethical mindset? Visit our Generative AI webpage to learn more about how this technology is reshaping the world around us—and how to ensure your company is leveraging this technology safely, securely and responsibly.



Tahir Latif

Global Practice Lead - Data Privacy & Responsible AI


Tahir Latif is a globally recognized leader in data privacy and AI governance. His extensive experience spans data privacy, AI ethics, establishing frameworks for trustworthy and responsible AI, and aligning emerging technologies with organizational values.

Tahir.Latif@cognizant.com


