

February 14, 2025

DeepSeek is no Sputnik, but it’s an important accelerator

Rather than panicking, business and tech leaders should study and embrace new directions for the future of AI.


Following the shock of DeepSeek’s arrival, many are catching their collective breath and asking important questions about what it means now for competition, innovation, and the future of artificial intelligence.

Yes, DeepSeek is an undeniably significant advancement. It roiled the stock market and panicked the tech industry in ways we rarely see.

But I think it’s worth taking a step back and looking at the bigger picture. DeepSeek is remarkable, but it is not the watershed moment it’s being hyped up to be.

Instead of looking at comparisons to past breakthroughs like Sputnik, the Soviet Union’s 1957 satellite that launched the space race, let’s look at what DeepSeek tells us about where AI is going. What I’m seeing is that DeepSeek is an important industry correction to accelerate AI innovation and adoption, and it’s one that’s been a long time coming.

The future lies in democratization

Remember when computers were the size of hallways? Today, our watches have orders of magnitude more processing capacity. The driving force behind this evolution was not a fundamental change in how computers work but the continuous optimization of processing power, allowing it to be packed into ever smaller form factors at substantially reduced cost.

In a similar way, DeepSeek has used creative optimization to make its model smaller and faster. This is smart, because it’s also the direction the industry is going, and it’s what is necessary for enterprise-grade AI to gain mainstream adoption.

Traditional, larger AI models have been slow and expensive to run, which limits their usage potential. By comparison, my team and I have run and tested smaller DeepSeek models on a MacBook and been amazed at how much more powerful they are than similarly sized models of just a few months ago. No massive processing power or storage was needed. Additionally, these models can be trained faster while requiring less horsepower.
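To make that concrete, here is a minimal sketch of what running a small DeepSeek model on a laptop can look like. It assumes the Hugging Face transformers library and one of the publicly released distilled DeepSeek-R1 checkpoints; the exact model ID is an illustrative choice, not a recommendation:

```python
# Minimal sketch: run a small distilled DeepSeek model locally.
# Assumes: pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative checkpoint; other small distilled variants work the same way.
model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

messages = [
    {"role": "user", "content": "In two sentences, why do smaller LLMs matter for enterprises?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Generation is feasible on a laptop precisely because the model is small.
outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Nothing here requires special hardware; the point is simply that a model of this size loads and generates comfortably on a consumer machine.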

The takeaway here is to look at where it leads us: democratization. The exclusive ownership of powerful models by a few commercial entities is fading. This is good for us all.

Multi-agent systems can abound in a post-DeepSeek world

Until now, the bar has been set very high for which AI models can support advanced AI reasoning capabilities. We’ve had only a handful of options that clear the bar for acceptable large language model (LLM)-based agents. Now, we’ve cracked the door open for more models inspired by DeepSeek. What I’m watching closely now is what this means for agents and multi-agent systems.

We’ve all seen how agents have taken hold as a powerful way to leverage AI, and we are starting to see how they can be connected into multi-agent systems that deliver enormous gains in productivity and autonomy. The optimization improvements DeepSeek has introduced allow us to consider a wider scope of use cases for multi-agent systems because we can now begin working with cheaper, smaller, faster AI models.

As these new models catch on, businesses will gain far more flexibility in where and how they build and deploy them. In turn, this can significantly accelerate the AI adoption timeline many enterprises are on.

A proliferation of use cases

As LLMs become faster and more energy-efficient, the feasibility of multi-agent solutions increases. Consequently, a broader range of use cases will benefit from multi-agent applications. This is huge.

So far, we’ve been targeting use cases in which a decision maker within a certain domain uses multi-agent systems to augment process flows among a few users. The challenge comes when you try to scale these solutions to thousands of users. Under load, large models can be slow and inefficient in multi-agent settings. But now we can leverage smaller, faster, more cost-effective models to run such systems at scale and expand the number of people who use them. As the types of process flows and throughput capacity increase, use cases proliferate.
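As a hypothetical illustration of that scaling shift (small_model and serve_user below are stand-ins, not a specific framework), the pattern moves from queuing users behind one large model to fanning requests out across many cheap, fast calls:

```python
# Hypothetical sketch: fan thousands of user requests out across
# concurrent calls to a small, fast model instead of queuing them
# behind one large one. small_model() stands in for a real local
# inference call.
import asyncio

async def small_model(prompt: str) -> str:
    await asyncio.sleep(0.05)  # stand-in latency for a fast local model
    return f"answer: {prompt[:40]}"

async def serve_user(user_id: int) -> str:
    # Each request gets its own lightweight agent call; cheap models
    # make this affordable at scale.
    return await small_model(f"user {user_id}: summarize my open requests")

async def main() -> None:
    results = await asyncio.gather(*(serve_user(i) for i in range(1000)))
    print(f"served {len(results)} users concurrently")

asyncio.run(main())
```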

What does this look like? The most obvious area to immediately benefit would be call center and support line augmentation, with multiple AI agents handling a customer inquiry from start to finish.

Another example is in the insurance industry. Property underwriters typically assess volumes of third-party data, review past underwriting decisions on similar properties, and ultimately determine whether and how to insure. This is a complex process that involves input from many parties. As multi-agent systems become more accessible, an insurance company could augment this process through interconnected agents that analyze third-party and internal information; distill and consolidate perspectives; and present options, along with predicted outcomes that account for risk, win-loss, and cost. And because these systems have high data security requirements, running them fully on-premises now becomes much more cost-effective.
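As a sketch of what this could look like in code (the agent roles, prompts, and local_llm stub below are hypothetical illustrations, not a description of any specific product), each process node becomes a small agent whose output feeds the next:

```python
# Hypothetical sketch of the underwriting flow above: each process
# node is a small agent backed by a cheap on-prem model. local_llm()
# is a stub; swap in a real inference client for a small model.
from dataclasses import dataclass

def local_llm(prompt: str) -> str:
    # Stand-in for an on-premises call to a small, fast model.
    return f"[model output for: {prompt[:50]}...]"

@dataclass
class Agent:
    role: str
    instructions: str

    def run(self, task: str, context: str = "") -> str:
        prompt = (
            f"You are the {self.role}. {self.instructions}\n\n"
            f"Context:\n{context}\n\nTask:\n{task}"
        )
        return local_llm(prompt)

def underwrite(property_record: str) -> str:
    data_agent = Agent("data analyst", "Analyze third-party and internal data on the property.")
    precedent_agent = Agent("precedent reviewer", "Summarize past underwriting of similar properties.")
    decision_agent = Agent("recommender", "Propose insure/decline options with risk, win-loss, and cost estimates.")

    # Agents run as a pipeline; each node consumes the previous node's output.
    analysis = data_agent.run(property_record)
    precedents = precedent_agent.run(property_record, context=analysis)
    return decision_agent.run(property_record, context=f"{analysis}\n{precedents}")

print(underwrite("12-story commercial property, coastal flood zone, built 1998"))
```

Because each call targets a small model, the entire pipeline can stay on-premises, which is exactly what the data security requirements above demand.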

Look to the future

The dominant discourse around modern LLM-based agents has leaned heavily on building large single agents as one-stop shops, and I don’t think that scales. “Bigger is better” is not the way forward. The way to go is to harness the collective intelligence of a multitude of smaller, cost-effective AI agents, each smart enough to represent a node in a business process flow.

DeepSeek has rightfully inspired the industry to embrace models that are faster, more efficient and more cost-effective. This brings significant promise to what can be achieved through cross-enterprise use case deployments.

As the dust settles around DeepSeek, I’m hopeful we’ll start to feel the early rumblings of a gold rush in terms of what these models can be used for, and how enterprises can harness their power to deliver breakthrough transformation.



Babak Hodjat

CTO, AI

Babak Hodjat is CTO of AI at Cognizant and former co-founder & CEO of Sentient. He is responsible for the technology behind the world’s largest distributed AI system and was the founder of the world's first AI-driven hedge fund.


