AI Safety Summit Lauded As "Success" But MPs Question What's Next
The AI Safety Summit brought together delegates from the world's biggest economies (Alamy)
The government's landmark AI Safety Summit has been largely celebrated as a success, achieving multiple agreements between international governments and technology developers to work together on preventing existential risks.
Representatives from 27 countries signed a “world-first” agreement on the first day of the summit, committing to work with the UK to combat the "catastrophic" risks the technology could present.
Top AI companies at the summit also agreed that government should have a role in testing the safety of frontier AI models.
The UK also announced a new £100 million fund for accelerating the use of AI to tackle health issues, and said it would invest in an AI supercomputer.
Conservative MP and chair of the Science, Innovation and Technology Committee Greg Clark told PoliticsHome that while he deemed the summit to have been a "success", the UK government would now need to follow up by answering some fundamental questions about its own approach.
"It has been a success. It was clearly a good thing to have held the summit and an achievement to have the attendance the PM and team have managed," he said, applauding the fact that the international agreement had secured signatures from both China and the United States.
"USA and China and many more in between have said they are willing to work together," he said.
But were there any details on how they will do so? "That is the next step," he replied.
Clark welcomed the agreement from companies that governments should be able to access and test AI models.
"Were the government to discover something they found dangerous about the models they would have to act," he said. "But the mechanism for acting has not been determined."
The committee will hold a session with Technology Secretary Michelle Donelan next week, in which members will question her on what will come next following the summit.
"We regard existential risks as being one risk but not the only one: there are eleven others," Clark said.
"And what is our attitude to open source? This will need to be decided now by the UK government. Following the global summit what is the government going to do about the here and now risks?"
Conservative MP and former digital minister Matt Warman agreed that getting China, the US, and the EU "in the same room, talking the same language" was a success in itself, and one that could only have been achieved by the UK.
He also praised the work of the top officials involved in organising the summit, Matt Clifford and Ian Hogarth.
"Their work is partly why this has succeeded, some of which was down to getting the companies in the room," he said.
"The day two agreement [between companies and governments] is much deeper, it's all about national security."
Warman said that the focus needed to be on consulting on the AI white paper and for the government to put "more flesh on the bones" of its own approach to AI regulation.
"The government needs to be looking now where to take the slightly more stringent approach and where to take the lighter approach... it will require putting more flesh on the bones," he said.
Conservative MP and former justice secretary Robert Buckland said that he now wanted to see the government "delve into different sectors" and assess what harms AI could be causing now, rather than just in the future.
"The very fact that this is the first of several summits has given me encouragement that it wasn't just a publicity stunt," he said.
"I think the declaration is a very good start, but that we do need to delve down into different sectors.
"There wasn't a reference to justice, which I think has to be part of the consideration now, and how we have a set of international principles for the way we use AI in justice, because the deepfakes problem is already affecting justice."
Some think tanks and experts have echoed the MPs' calls for more work on domestic reform.
Responding to the summit, the Adam Smith Institute called for the UK government to do more to encourage workers skilled in AI to come to the UK, but welcomed the government's recognition of the benefits AI could bring.
Mimi Yates, Director of Engagement and Operations at the Adam Smith Institute, said: “The government has made key steps to both harness the immense benefits AI can bring, whilst protecting us from the risks, through investing in supercomputing and skills, and creating an AI Safety Institute.
"The government is also right to continue to make the distinction between 'frontier' AI and 'narrow' AI – but we risk giving insufficient attention to the latter, which has the potential to transform health, transport and productivity.
"To capitalise on this strong start, the Government should go further with domestic reform, such as planning and high-skilled visa routes, to encourage more highly skilled AI researchers to move to the UK and make Britain the world leader in the tech of tomorrow.”
The Ada Lovelace Institute called for the agreements at the summit to be followed up by legislation.
“The conversations at Bletchley reinforced that the AI Safety Summit wasn’t fundamentally about technology or regulation, but about people," Fran Bennett, Interim Director of the Ada Lovelace Institute, said, emphasising the importance of involving people in the discussions around AI.
"Any effective governance must be backed by legislation. Without it, we won’t be able to incentivise developers and users of AI properly to make AI safe, or give regulators the scope, powers and resources they need.
“The UK Government has two live opportunities to start addressing the regulation of AI: in the King’s Speech and the Data Protection and Digital Information Bill. If the Government seizes these opportunities, it will be a significant step forward for making AI work for people and society.”
Professor of Science and Technology Policy at UCL and policy co-lead for the research programme Responsible AI, Jack Stilgoe, warned in the run-up to the summit that civil society voices were at risk of being excluded.
However, he told PoliticsHome on Wednesday that he had been "pleasantly surprised" by the diversity of voices who ended up being involved.
"I think the British policy-makers have listened to a lot of the concerns that were expressed in the initial framing of the summit," he said.
"So the range of issues, the range of people is much broader than we were fearing it would be.
"The pleasant surprise was just looking at the live stream of the plenary yesterday where a lot of civil society perspectives were actually given space at the main summit, and I think that's the thing I've been really pleased to see."
However, while welcoming the international agreement, he voiced concern that it could be seen as a "job done" rather than just the first of many steps.
"My worry would be that if the agreement was taken by some people as 'job done', if the tech industry just said 'okay, we've showed that we can engage in these discussions now, let us go away and get on with it, trust us', then that would be a huge mistake," he said.
"I think the agreement has to be a first step towards not just ongoing meetings, but also institutionalising some of these processes, so making sure that a broad range of people is involved and institutionalising ongoing international collaboration.
"The consensus has been that we can't trust the industry to self regulate, that that myth has been busted. But the question of what regulation should look like, and whether the UK can catch up with the EU and US approaches, I think remains open."
South Korea has agreed to co-host a mini virtual summit on AI in the next six months, and France will host the next in-person summit in a year's time.