We must address AI’s potential for bias – or risk automating discrimination
ChatGPT has been a game-changer in artificial intelligence (AI). The pace of development continues to both impress and concern me.
AI truly has the power to transform our lives for the better – but we are currently at a crossroads. The pace of change brings huge risks: there will come a point when no one is interested in looking back at where and why it all started, and by then it will be too late to tackle the root of the problem. This is why it is crucial for society that governments around the world legislate on how AI is used, because the simple reality is that we cannot keep up with the pace of progress.
Readers may not know that my first job was in computing, as a computer programmer. Since then, I have maintained a strong interest in science and technology, especially AI, as its influence on our daily lives has grown rapidly in recent years.
During my time on the Science, Innovation and Technology Select Committee, we scrutinised AI technology, and we ensured the committee's AI report was published before the general election because of its pressing importance. What struck me most is that, in the main, we have no idea where the information used to build these systems has come from.
We know, for instance, that some large organisations have scraped their subscribers' information, in some cases without explicit consent. Just imagine how many times you have clicked yes to all cookies – who has time to read 20 pages of terms and conditions? Or think about those free games: nothing is free.
I pushed for a strong focus on bias in AI, as that is one of the main challenges with this technology. If we do not take it seriously, we risk automating and accelerating discrimination.
Until now, we have believed everything a system throws at us; AI, and programs that allow content to be manipulated even in real time, have changed that. So you can imagine just how dangerous an AI built on bias actually is.
If the data used to train a model is biased, the AI will learn and replicate that bias in its output, leading to harmful outcomes. These models can also generate highly realistic but entirely fake text, which could be used to spread disinformation, manipulate public opinion and even harm individuals. It is estimated that over 80 per cent of what we view online in 2025 will be AI-generated.
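To make that first point concrete, here is a minimal, hypothetical sketch in Python – not drawn from any real system, with every name and number invented purely for illustration. It trains a toy hiring model on synthetic "historical" data in which one group was favoured regardless of skill, and shows the model faithfully learning that prejudice.

```python
# Hypothetical illustration only: a toy model trained on biased
# "historical" hiring data. All data here is synthetic and invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)   # a protected attribute (0 or 1)
skill = rng.normal(0, 1, size=n)     # the genuinely job-relevant signal

# Past decisions favoured group 1 regardless of skill, so the labels
# the model learns from are themselves discriminatory.
hired = (skill + 1.5 * group + rng.normal(0, 1, size=n)) > 1.0

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# Two applicants with identical skill, differing only by group:
same_skill = np.array([[0, 1.0], [1, 1.0]])
print(model.predict_proba(same_skill)[:, 1])  # group 1 scores far higher
```

Run this and the model gives two equally skilled applicants very different scores, purely because the past decisions it learned from did the same – the bias has been automated.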
This climate of misinformation and disinformation – now including AI-generated images, video, audio and messages – is worrying. And with the development of bots, we must focus on how we combat this new iteration of hate and incitement to hate.
I have consistently called for serious measures to address the risks AI poses to our human rights and equalities. I hope we will consider a human rights bill to address the potential harms and ensure basic protections for the individual.
The Metropolitan police uses facial recognition. If the system wrongly identifies an individual, that person then has to prove their innocence against a computer program. That breaches our principle of innocent until proven guilty.
AI is the future. I remain clear – it can be a force for good, but only if appropriate guardrails are in place to prohibit its most dangerous uses, and we introduce strong safeguards for our human rights.
Dawn Butler, Labour MP for Brent East