Britain can become a leader in AI safety
Artificial intelligence (AI) is the great technology of our lifetime. The release of ChatGPT a year ago sparked a fire in the imaginations of business, policymakers, and the public alike. But it’s a mistake to think of ChatGPT as a watershed moment – it’s a point on an exponential curve, and one that shows no sign of slowing.
A decade ago, a big milestone was reached: an AI model surpassed human performance for the first time on a set of visual recognition tasks. Today, that model looks like a toy. GPT-4, the AI model that powers ChatGPT, was created using 100 million times more computational power. It’s rumoured that the next generation of models, expected to be released next year, will involve 100 times more investment again. And, crucially, no one – including the AI companies – knows what that means for AI’s capabilities in 2024.
This means AI presents enormous promise, but also real risk. Deployed safely, AI could transform public services, revolutionise healthcare and accelerate the economy. But, if we fail to make it safe, AI could put enormous destructive power in the hands of bad actors by facilitating the design and deployment of devastating bio- and cyber-weapons. Dario Amodei, CEO of Anthropic, one of the leading AI companies, recently told the United States Senate that powerful systems like those expected in 2024 “could be misused on a grand scale” in traditional national security domains.
We therefore have a platform for urgent international action. Today, there exists no shared understanding of the risks of frontier AI or what to do about them. That’s the role that November’s AI Safety Summit at Bletchley Park can play. This will be the first time the world has come together to discuss frontier risks. AI does not stop at national borders, so this needs to be a truly global conversation – one that includes some countries with whom we have profound differences.
We hope to agree on broad collaboration on the crucial task of identifying and mitigating the risks of frontier AI. We can’t hope to resolve all the questions AI raises at this summit, but we do aspire to begin a conversation that shapes the international agenda for years to come.
One challenge is that, more than any other strategically important technology of the last 200 years, AI has been dominated by the private sector. That needs to change. We want a thriving, competitive AI ecosystem, but we can’t let the companies building powerful AI mark their own homework when it comes to evaluating risk. Governments have an obvious legitimate interest in understanding the capabilities of AI systems, but until now have lacked the skills and infrastructure to do so.
The UK’s Frontier AI Taskforce, backed by £100m of government investment and some of the world’s leading experts, is a big step in the right direction. The taskforce is equipped to work as a peer with the leading private sector actors to collaborate on safety research and evaluation. Now, though, we need to build international public sector capacity in this domain, and we hope that the AI Safety Summit is a catalytic moment for accelerating this.
Safety will be a critical pillar in the continuing development and deployment of AI, and the UK’s leadership in this domain is already attracting cutting-edge talent and investment from the world’s top AI companies. Nearly 80 years ago, Bletchley Park established itself as the birthplace of modern computing. In just a few days, it will witness the start of a new international effort, led by the UK, to face up to the risks posed by the descendants of those early systems and put safety at the heart of a positive AI future.
Matt Clifford, Prime Minister’s representative for the AI Safety Summit