

September 28, 2021

The UK’s ‘Goldilocks Problem’: getting AI regulation just right

As the UK balances AI regulation and innovation, businesses there need to factor privacy and security into their decisions while also complying with other regions’ rules.


When you think of global leadership in the field of artificial intelligence (AI), you might not automatically think of the UK; however, that may change in the near future. 

Already, the Global AI Index, which benchmarks nations on their level of AI investment, innovation and implementation, rates Britain third, behind the traditional AI powerhouses of the US and China. AI companies such as BenevolentAI are attracting millions in funding, and further startup activity will likely be spurred by the UK government’s “Grand Challenge” to facilitate medical research, improve care and prevent disease using AI. And in our recent Work Ahead study, 93% of UK respondents said AI’s impact will only increase over the next two years.

For this to happen, however, the UK will need to navigate a bevy of AI regulatory questions and strategies across jurisdictions. Untethered from the EU, the UK now manages its own regulations and, as a result, must foster an environment supportive of startups and innovators while also striving to ensure the security, privacy and protection of consumers, businesses and the state.

Tradeoffs are inevitable. British businesses, the economy and society will benefit from regulation only if it is fit for purpose, not watered down to avoid difficult choices.

A choice for the UK

So far, the UK has not published a full AI regulatory framework, and it may not intend to, in the interest of encouraging innovation. Instead, the country has formed an AI Council made up of experts from business, the public sector and academia to create a roadmap that guides rather than regulates.

This relatively hands-off regulatory approach closely follows that of the US, which similarly has no formal regulation to police the use of AI as it emerges from Silicon Valley and other tech hubs. While the US government has published some general principles, any discussions formalized in official documents have focused on “maintaining American leadership” and a continued drive for innovation, rather than exploring how misuse may be restricted.

In contrast, the EU’s recently published proposals for AI regulation are predictably robust. Some uses of AI are banned outright, such as determining a person’s “social score,” as China is potentially looking to do with its social credit system, or determining identity through real-time facial recognition and biometric capture, a prohibition that may leave Clearview AI hamstrung in Europe.

In the EU, implementations that pose a high risk to consumers or employees will be heavily vetted by EU regulators. This covers recruitment, applicant screening and the safety management of critical infrastructure, among many other things.

At the other end of the spectrum, even China has recently passed a new data protection law intended to crack down on the collection of personal data by large corporations. Ultimately, however, control, rather than regulation, of that data and its uses will sit with the state, which views them as tools to steer economic development and social harmony.

In search of regulation that’s ‘just right’

Both the “moonshots” noted in the UK’s roadmap and its Grand Challenge indicate the country is building its AI strategy around freedom to experiment and, notably, smoothing the path from academic research to commercial product. While this can be a lengthy process, freedom from overarching regulation, combined with the right funding, removes at least one barrier along the way.

Beyond fostering innovation, the UK’s approach to regulating AI will no doubt be influenced by the cost to businesses. The US- and Brussels-based Center for Data Innovation predicts that, if implemented, the EU’s AI Act will cost the bloc’s economy €31 billion over the next five years.

These costs result not just from implementing new protocols and documenting compliance but also from the missed opportunities when businesses choose not to proceed with prohibitively expensive and time-consuming projects. Indeed, some will take their business elsewhere, and the UK will no doubt welcome them with open arms.

Striking a balance

A number of factors could impede the UK’s progress with this approach. Its roadmap lays out various guidelines on ensuring fairness, privacy, and diversity and inclusion when creating AI products, but these are not legal requirements. As a result, these fundamental rights could be put at risk for the sake of growth and profit.

While consumer backlash is a real risk when personal data, facial recognition and algorithmic bias are all in the mix, the deeper impact could emerge in the longer term. Any misstep that leads to even a perceived rights violation, or that pursues technical superiority at the expense of social equity, would be disastrous not only for the company involved but also for the government that allowed it to happen.

More practically, any business operating in the UK that wants to trade digitally with the EU will have to ensure compliance with EU law. Such was the case when GDPR was first implemented: the “Brussels Effect” took hold globally, and companies around the world that held data on EU citizens scrambled to comply.

Since the European AI framework will likely have the same effect, UK businesses that want to work with their European neighbors and access that market must keep its rules in mind from the outset.

The impact on UK businesses

We see significant opportunities for the UK to quickly become an AI leader by adopting a looser regulatory framework and leaning on its already strong education system and access to funding. The European Commission, for its part, wants to lead by regulation, but to lead the EU must also be a major player, and its pursuit of tight rules and governance around AI increases the risk of its startup sector being stifled by bureaucracy.

The UK clearly doesn’t want to be left behind by adopting the same policy; it will fund and support innovation in AI and other areas without getting bogged down in bureaucracy. But it can’t do this at the cost of privacy and security, and businesses must factor both into any decisions they make.

Most importantly, AI innovation should not be an isolationist practice. Business leaders must engage in industry- and economy-wide discussions across geographic and regulatory boundaries in order to learn from others. By working with partners that are subject to these regulations to understand their impacts and benefits, businesses can ensure AI is used for the common good even when there is no legal imperative to do so.

To learn more, read The Work Ahead in the UK.



Duncan Roberts
Senior Manager, Thought Leadership

Duncan Roberts is a Senior Manager at Cognizant. A thought leader and researcher, he draws on his experience as a digital strategy and transformation consultant, advising clients on how best to use emerging tech to meet strategic objectives.

Duncan.Roberts@cognizant.com

