We can no longer support the Online Safety Bill

Around the world, children are being exposed to violent and inappropriate content online with sometimes devastating results.

Social media platforms are being used to foment political unrest, spread misleading health messages, and shower abuse on women, ethnic minorities and other vulnerable groups. The tools that were supposed to connect us and give power to the masses are imperilling individuals and our social fabric. 

As the dangers of an unregulated internet become frighteningly clear, the need for better laws and policies is obvious. But after a litany of delays and controversies, the United Kingdom’s proposed Online Safety Bill, designed to make social media and search services safer and more secure, is no longer fit for purpose. 

Both our organisations have long championed better digital regulation in the UK and have been critical friends throughout the development of the bill. We have contributed evidence and expertise, including testifying in Parliament. But we are afraid that after too many missed opportunities to course-correct, and too many unworkable additions, the bill prioritises catchy slogans over delivery, and now risks making the online world less safe for many.

There are three main realities that the Online Safety Bill fails to address. Firstly, it doesn’t get to grips with the nature and extent of online harms. The bill currently focuses on just three reductive categories of content: “illegal”, “age-inappropriate” and “unpleasant content that people would prefer not to see”. These categories gloss over the reality of the threats we are facing and obscure what platforms need to do to tackle issues such as the reach of misogynistic influencers, dangerous health misinformation, or political disinformation.

If platforms can only be asked to mitigate the risks associated with illegal speech, then – far from protecting free speech – there is a danger that the list of what it is illegal to say will simply grow longer. For example, the government’s latest proposal is that content showing Channel crossings in “a positive light” should be banned, but campaigners have pointed out that this could lead to legitimate content about the refugee crisis being widely restricted. The fact that automated algorithms will be making most of these decisions will not help.

Protecting the free and fair press is a crucial cornerstone of democracy, but the bill’s attempts to do this don’t stand up to scrutiny. If an organisation that qualifies as a news publisher – under an overly broad definition – shares content that violates a platform’s rules, the platform has to leave it up while an appeal is heard, in tension with the requirement that platforms consistently enforce their terms and conditions on users’ content. This fails to take into account how important speed is when responding to emerging information threats: by the time an original piece of disinformation is taken down, it could have been shared and amplified millions of times.

The second major flaw in the legislation is its focus on individual action and choice rather than system change. The proposed solution for hateful and dangerous speech is to require so-called “user empowerment tools” which allow individuals to “turn off” some kinds of harmful content they don’t want to see. While that might sound great, it puts the onus of preventing the spread of harmful content onto individual users, while the social media companies can continue to amplify, promote and monetise hate. Harm doesn’t stop just because some people choose not to look at it.

Finally, barriers alone aren’t a good enough solution to online safety. A key purpose of the bill is to make online spaces safer for children, with platforms expected to risk-assess their services and take steps to mitigate risk. But in practice, what platforms are being asked to do is put up barriers between children and content that might be harmful, not to make online environments safer overall.

Creating a separate online world for children raises huge issues about their fundamental right of inclusion and access to information. And at what cost to privacy and security – for adults and children alike – can this be achieved? This will drive platforms to collect more data on their users, age-gating to ensure certain people are only seeing certain content. Provisions in the bill also risk undermining end-to-end encryption, meaning we could be in a situation where there is no way to have a private conversation free from corporate or government surveillance. 

Good digital regulation is urgently needed to tackle hugely complex and widespread harms: but that doesn’t mean we should accept whatever is proposed. As members of the House of Lords gather to review the legislation, they should know that the Online Safety Bill doesn’t deliver what we need: a flexible, responsive, forward-looking regulatory framework that puts people’s wellbeing and fundamental rights first. In its current form, we can no longer support it.

Ellen Judson is head of CASM at Demos. Kyle Taylor is founder and director of Fair Vote UK.
