The government should lean into the new EU AI Act


After an eleventh-hour turnabout by Starmer’s government on the introduction of an eagerly anticipated AI bill, the future of AI development in the UK is up in the air.

In order not to fall flat on its face, the new Labour government would do well to look closely at continental efforts.

To understand the European Union's philosophy when it comes to regulating technology, Labour should consider the proverb: “Fool me once, shame on you; fool me twice, shame on me.” It could also crib the attitude of Albert Einstein, who famously quipped: “Insanity is doing the same thing over and over again and expecting different results.”

Whichever saying the government prefers, the message from the EU to the major tech companies commercialising AI, delivered via the new EU AI Act, in force this week, is this: you must regain our trust and learn the lessons of the serial data protection failures and information chaos caused by the rapid adoption and growth of online commerce and social media platforms.

And while the temptation for Labour, in its search for economic growth, might be to hew closely to the American approach to AI (all-in on innovation, light touch on regulation), the new government would do well to consider the EU alternative. There is good reason to do so. The major technology companies are no longer plucky upstarts in need of a permissive regulatory framework; they are now industry behemoths that spend tens of millions of dollars lobbying governments around the world. And these giants need to be reined in, for their own good and that of society.

The change in the size and influence of the major tech platforms matters. Unlike in the early days of the internet, the commercialisation of AI is being pursued, arms-race style, by some of the world's wealthiest and most powerful corporations, with an eager and interested China competing alongside. This could easily become a recipe for disaster. And no, this is not a reference to fantastical killer robots or AI suddenly extinguishing the world (although that might happen!).

No, the more mundane threats from AI are why former Prime Minister Rishi Sunak was right to convene all the major players for last year’s groundbreaking AI Safety Summit. A collective framework and a common understanding of recent history are essential for a technology that could profoundly alter labour markets, narrow the scope of consumer choice (from healthcare to financial services, if AI models decide what is best for us), potentially complicate our collective ability to address the climate challenge (AI is energy- and water-hungry at a time when renewable supply remains limited and water stress is rising), and seriously unsettle the bonds of trust and truth (through deepfakes and other synthetic information). We have to get this right, and get it right at the first time of asking.

Even though large parts of the world (with the notable exception of China) have thus far taken a lighter-touch approach to regulating AI than the EU, nearly all the leading AI players will pursue EU private and public sector customers. After all, the 27-member bloc is the second-largest economy in the world by GDP after the United States. And with a shortfall of more than £20bn in the government's books, following the EU's example will ensure UK firms developing AI technology have quick and seamless access to that market.

Sceptics within Labour need only look at the GDPR and the spectacular global success the EU has had in setting the regulatory approach to personal data: more than a dozen countries, from Canada and Brazil to Nigeria and South Africa, have incorporated elements of the law into their national data protection regimes. History could quite easily repeat itself with AI, as the GDPR will be a central regulatory tool for AI involving the processing of personal data, a point that is often overlooked. This gives the EU an in-built advantage in setting the regulatory pace. The UK is likely to remain closely aligned with the EU approach to data protection to ensure that its regime continues to be viewed as adequate by the EU. This convergence is an opportunity for the Starmer government to seize a global leadership role on AI regulation for itself.

The regulatory winds are shifting, and the major tech platforms will cry foul (and lobby accordingly), but we must learn from our history. The chaos that flowed from the addition of a simple 'like' button to social media platforms was bad enough; what harms might we see from a technology with far more power, open to manipulation by anyone, that can expand at an exponential rate? Freedom always comes with responsible constraints.

More to the point, applying a sensible regulatory regime, as the EU did with the GDPR but at an earlier stage, will promote greater buy-in from consumers for AI-enabled services and technology. It will also provide a degree of regulatory certainty for innovators. In other words, we will help AI grow by giving it proper guardrails. It is the right thing to do, and the best way to foster sustainable and responsible innovation.


Megha Kumar is a Partner at CyXcel, managing the firm's risk analysis and threat protection capabilities for clients.
