The deepfake dilemma: What happens when AI is used against politicians?

Former prime minister Boris Johnson sitting in a cell, created by AI platform Midjourney

AI-generated deepfakes are on the rise – and experts are warning that government is behind the eight-ball when it comes to legislating for the tsunami of disinformation that could follow. (All images are AI generated by Tali Fraser using Midjourney)

I have in my possession an image of Boris Johnson in a prison cell. Of course, most people are unlikely to fall for it – but for a few who aren’t particularly politically engaged, or for those whose biases it plays to, it’s realistic enough to be convincing. It took me just minutes to create it. 

The image was made using a simple artificial intelligence (AI) prompt that the average person could come up with (this is no longer just the preserve of tech nerds). All I had to do was go on the generative AI platform Midjourney and type in: “Boris Johnson sitting in a jail cell on the bed, deflated posture, shot from behind the bars [with a little bit of technical jargon at the end].”  
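For context, the workflow is nothing more exotic than a text prompt sent to an image model. Midjourney itself is operated through Discord prompts rather than a public API, so the sketch below uses OpenAI’s image-generation endpoint as an analogue of the same text-to-image pattern; the model name, parameters and prompt wording are illustrative assumptions, and mainstream services typically refuse prompts that name real political figures.

```python
# Illustrative sketch only: Midjourney has no public API, so this uses OpenAI's
# image endpoint to show the general text-to-image pattern. Model, size and
# prompt are assumptions; providers' safety filters usually block prompts that
# depict real politicians.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-3",
    prompt=(
        "A politician sitting in a jail cell on the bed, deflated posture, "
        "shot from behind the bars, photorealistic"
    ),
    size="1024x1024",
    n=1,
)

print(response.data[0].url)  # URL of the generated image
```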

Someone with greater technical ability than mine could have produced something even more realistic. 

Matt Warman, Conservative MP for Boston and Skegness and former DCMS minister, puts the concern for those outside of the Westminster bubble plainly: “People who are not political geeks are going to be more susceptible to even the less convincing deepfakes.” 

You can create Jeremy Corbyn waving an Israeli flag, or Theresa Villiers lying down in front of a bulldozer at a housing estate. The technology is far from foolproof: every time I tried to put two people in the same image, it either created a twin of one figure or a peculiar morphing of the two – and it makes Rishi Sunak look like a model. But as the software develops through further trial and error, the results will only improve and the possibilities will become endless. 

An accidental twin of former Labour leader Jeremy Corbyn, created on Midjourney

These are deepfakes, named after the “deep learning” that allows AI to generate or imitate faces, voices and movements to mimic human behaviour – and they could mean that seeing is no longer believing. 

For example, a set of AI images went viral on Twitter depicting former president Donald Trump being arrested before his indictment, gathering almost five million views in just two days.  

A series that became popular in the UK included images of politicians doing low-paid, gig economy jobs with Sunak as a Deliveroo driver, Matt Hancock pushing supermarket trolleys and Liz Truss pulling pints.  

Software development has been so rapid that audio and video deepfakes have also advanced at “lightning speed”, according to Henry Ajder, a British specialist in AI and synthetic media.  

The only known case so far of deepfakes being used by bad actors against Westminster politicians was against Tom Tugendhat, now security minister, during his time as chair of the Foreign Affairs Select Committee. He was targeted, along with other European parliamentarians, by deepfake video calls imitating Russian opposition figures. 

One of those calls featured a man who looked like Leonid Volkov, an ally of Russian opposition leader Alexei Navalny. A screenshot from the call was posted to Twitter alongside a photo of the real Volkov. Volkov himself said the two looked virtually identical: “Looks like my real face – but how did they manage to put it on the Zoom call? Welcome to the deepfake era…” 

US President Joe Biden shaking hands with Russian President Vladimir Putin, created on Midjourney

Ajder says: “The foundations are set in a general election for people to start weaponising or utilising these tools in ways that can be deceptive and malicious.” 

It is “almost inevitable”, says Dr Tim Stevens, head of King’s College London’s Cyber Security Research Group, that a senior British politician will “be the star, shall we say, of something risqué or undermining” – and it doesn’t matter whether it really happened or not. 

The risqué element here could be just that, or it could be a lot more sinister. There is a disturbing trend for deepfake pornography, which is exactly what it sounds like: using technology to put (primarily) women into fake but compromising positions. 

It is “undoubtedly” being used against female politicians, according to Professor Clare McGlynn, a legal expert in online abuse – and it already has been, though not yet against members of the House of Commons. 

“There are examples in other countries where deepfake porn has been used against women in the public eye and women politicians as a way of harassing and abusing them, minimising their seriousness. It is definitely being weaponised against women politicians.” 

A proposed amendment to the Online Safety Bill, championed by Conservative MP Maria Miller, would make it a criminal offence to use deepfake technology to distribute explicit images or video manipulated to look like someone else without their consent – but, crucially, it does not provide for any civil remedies.  

It is “an arms race” in terms of developments, according to internet law expert Professor Lilian Edwards – and, because of the proliferation of deepfake content and tools, politicians should not expect to press the red button and have them go away, says Dr Stevens. 

He is concerned that politicians do not want to tackle AI policy for fear of “nanny statism”, instead “keeping their fingers crossed to see what happens, but it might be too late”. 

Dr Stevens adds: “There are very clear signs that politicians are reluctant to get too heavily into this discussion. Our AI policy is almost exclusively framed in terms of innovation, which I think is quite short-sighted.” 

Former Labour leader Jeremy Corbyn waving an Israeli flag, created on Midjourney

Professor Edwards is unimpressed by the calibre of MPs’ understanding of the technical side of AI: “You tend to get better scrutiny in the House of Lords than you do in the Commons. I think it is an appalling problem that we have a non-scientific, non-technically-trained legislature.” 

She sees the UK going down “a laissez-faire, minimal regulatory path relating to AI” – and at a time when, she says, “the rest of the world is heading in the opposite direction”.  

“I think at some point, it may take a while, it might take a new government, we are going to have to review this policy.” 

When it comes to the rest of the world, Darren Jones, Labour MP for Bristol North West, would like the UK to take a more independent approach: “The thing that frustrates me slightly is I think we’re probably being led by the Americans on these issues. 

“I think Britain has its own capacity and potential for global leadership on these issues and so I would rather that we were kind of not just falling behind the Americans and the Europeans, but actually providing more global leadership on this.” 

British Prime Minister Rishi Sunak sitting in a pile of money, created on Midjourney

Following the release of the AI white paper in March, Professor Edwards says the UK is focusing on regulators knowing what they are doing and cooperating with each other. Privately, a number of experts called this plan “pathetic”, with one going so far as to say they were “angry” that the government had not come up with a “proper plan”, instead relying on a “shambolic holding mechanism”. 

Louise Edwards, director of regulation and digital transformation at the Electoral Commission – the electoral watchdog and one such regulator – says they have been asking “for a little while now” to improve the sharing powers between regulators “to work much more effectively”. 

She also says that British laws require modernisation to protect our democratic processes from deepfake misinformation: “These are laws that were written 20 years ago, more than 20 years ago, and they haven’t kept pace with more ways of campaigning, including things that are more digitally based.” 

Jones adds that currently regulators don’t have enough AI specialists: “If you want the regulatory network to work better you probably will need to legislate to update the mandates of those organisations, to put in a duty of cooperation between them … there is going to be some kind of spending required around that.” 

Warman adds that while the white paper was “a really good starting point”, since it was published, “which is obviously very recent”, people have expressed more concern about the potential downsides. 

Part of the problem is that deepfakes provide an avenue for people to dismiss real content as fake. In 2017, Trump reportedly began suggesting the infamous Access Hollywood tape – where he said he liked to grab women “by the pussy” – was a deepfake. The excuse of plausible deniability, or, as Ajder puts it, “the liar’s dividend”, is being weaponised to try to convince people that real videos, images and audio are fake. 

Former prime minister Boris Johnson putting up gold wallpaper, created on Midjourney

That is producing “an awful lot of cynicism and scepticism”, Dr Stevens says – and established organisations are being tarred with the same brush. 

He adds: “If you can’t trust what has been produced by the BBC or whatever news channel, or indeed what you are seeing online, then we are in a fairly sticky situation.” 

The BBC, the Washington Post and the Canadian Broadcasting Corporation recently held an event with Adobe to discuss AI and deepfakes. They are looking at watermarking their content, according to Professor Edwards. 

“They were saying: ‘If we put in some kind of watermark that is totally certified, digitally hard to remove, then we can at least prove this video did come from the BBC.’” 
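The idea being described is content provenance, in the spirit of the Adobe-led Content Credentials (C2PA) effort: the publisher cryptographically attests to a piece of media, and anyone can later check that attestation. The sketch below is a minimal illustration of that underlying mechanism using an ordinary digital signature, not the broadcasters’ actual implementation; the filename is hypothetical, and a real scheme embeds the credential in the file’s metadata so it travels with the content.

```python
# Minimal provenance sketch (assumption: this stands in for schemes like
# Content Credentials/C2PA, not the broadcasters' real system): the publisher
# signs a hash of the clip, and a verifier checks that signature against the
# publisher's published public key.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: sign the SHA-256 digest of the clip (filename is hypothetical)
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

with open("news_clip.mp4", "rb") as f:
    digest = hashlib.sha256(f.read()).digest()

signature = private_key.sign(digest)

# Verifier side: recompute the digest and check it against the signature
with open("news_clip.mp4", "rb") as f:
    check = hashlib.sha256(f.read()).digest()

try:
    public_key.verify(signature, check)
    print("Signature valid: the clip is unchanged since the publisher signed it")
except InvalidSignature:
    print("Signature invalid: the clip was altered or did not come from this publisher")
```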

As Warman notes, tech companies like OpenAI have themselves been expressing concerns and, in the United States, calling for regulation to mitigate the risk of increasingly powerful AI models: “I think we should pay attention to their worries because they are emerging faster and more openly and honestly than people might expect.” 

A government spokesperson said: “The AI landscape is constantly evolving. Under the Online Safety Bill, companies will be required to take action to stop deepfakes appearing on their services when they become aware of it.” 
