

March 27, 2025

AGI: Enough with the hype; let’s ask the important questions

Today’s horse-race coverage of artificial general intelligence fails to consider its implications.


Artificial general intelligence (AGI) has become one of the most talked-about topics in technology circles, evoking excitement, speculation and concern. Yet for leaders navigating today’s rapidly evolving AI and digital landscape, the more relevant questions are not about when AGI might arrive, but what it means, why it matters and how to prepare.

As we move through 2025, the AGI conversation must shift from abstract futurism to concrete implications for markets, operations and society.

What, exactly, is AGI?

Generative AI, which surged in popularity during 2022 and 2023, is undergoing a necessary recalibration. After widespread experimentation, many enterprises are discovering that implementation hurdles—ranging from hallucination risks to integration costs and regulatory uncertainty—are often greater than anticipated. Gartner now places generative AI in the “trough of disillusionment” phase of its Hype Cycle—not because the underlying technologies lack promise, but because expectations exceeded what current systems can reliably deliver.

This moment of pause presents an opportunity for business leaders to reframe the AGI discussion away from timelines and hype, and toward more meaningful strategic considerations. One such consideration is the question of which capabilities truly define AGI—that is, what exactly is AGI? The term is often used loosely to mean a sort of omniscient software, but in reality, general intelligence encompasses a variety of functions: transferring knowledge across domains, reasoning about causality, navigating social contexts, generating creative solutions and making decisions under uncertainty.

Each of these functions presents distinct technical challenges and offers different kinds of value—and risk. Rather than treating AGI as a monolith, businesses should assess which cognitive functions are most relevant to their industries and operational needs—those capable of best delivering shareholder value, contributing to society, and strengthening long-term strategic advantage.

Moving from AGI benchmarks to AGI benefits

Similarly, progress in AI should not be measured by leaderboard performance alone. Accuracy in narrow tasks is not equivalent to intelligence. A model that excels at answering trivia questions may fail catastrophically in unfamiliar or ambiguous scenarios.

For businesses, robustness, adaptability and reliability matter far more than benchmark supremacy. The focus should be on how systems perform in dynamic, real-world environments—how they generalize, how they fail and how those failures are detected and mitigated.

AGI regulatory and governance issues

Governance is another area where AGI debates intersect with pressing business concerns. Regulatory frameworks are evolving but remain uneven. The European Union’s AI Act entered into force in 2024, establishing specific requirements for high-risk systems. Global standards organizations like ISO and IEEE are proposing early frameworks for AGI safety, but comprehensive oversight mechanisms remain nascent.

For companies operating across jurisdictions or deploying powerful AI models, proactive governance—through internal audits, industry collaboration and policy advocacy—is quickly becoming a strategic necessity.

The here and now of AGI

As businesses grapple with what AGI might mean for the future, current AI systems are already reshaping industries and altering risk landscapes, for better and worse. In healthcare, diagnostic tools are improving access and efficiency, while innovations like AlphaFold are accelerating drug discovery. In construction, AI-based safety monitoring has been linked to significant reductions in workplace incidents.

Meanwhile, in finance and commerce, the rise of synthetic media and deepfake technologies has exposed organizations to new forms of fraud and brand risk. In one 2024 poll, 26% of executives said their organization had experienced at least one “deepfake incident targeting financial and accounting data” in the past 12 months—a trend with real implications for cybersecurity and trust.

The labor market is also undergoing transformation. Our New work, new world research found that 90% of jobs could be disrupted by generative AI. That disruption would only intensify with AGI. For business leaders, this is not merely a workforce planning issue but a strategic opportunity: to invest in reskilling, shape future-of-work policies, and support talent transitions that keep their organizations resilient and competitive.

The time to address AGI questions is now

All of these developments suggest that the most urgent AGI-related challenges are not in the distant future—they are here now. The AGI risks that often animate discussions, such as misalignment with human values or concentration of power, are already playing out in today’s systems. Addressing these challenges now not only mitigates near-term harm but also lays the foundation for responsibly navigating more advanced capabilities down the line.

Ultimately, AGI should not be seen as a finish line, but as part of a broader continuum of increasingly capable AI systems. The question may not be when machines might match human intelligence, but what kind of intelligent systems we are choosing to develop today, and whether they align with long-term goals of trust, accountability and economic inclusion.



Amir Banifatemi

Chief Responsible AI Officer, Cognizant

Amir Banifatemi is a leading technology executive, investor, and thought leader with over 25 years of experience creating technology-based ventures and new markets. As Chief Responsible AI Officer, he leads Cognizant’s effort to define, enable and govern the company’s approach to responsible and trustworthy AI. His career has focused on advancing AI and human empowerment while prioritizing ethics and safety, demonstrating responsible innovation at scale.


