AI-pocalypse Now


The late Stephen Hawking warned that “creating effective AI could be the biggest event in the history of our civilisation. Or the worst.” As the artificial intelligence revolution gathers pace, government must plot a safe and successful digital pathway for the UK. Noa Hoffman reports.

Artificial intelligence (AI) – machines that “think” – will change every aspect of how we live. From the way we work and socialise, to where we go and what we enjoy doing, eventually machines will determine everything – or at least play a part in it.

That’s not to say our lives haven’t already been impacted. Social media algorithms are shaping political views, while online shopping technologies are dictating what consumers wear. The financial sector has been rocked by the popularity of cryptocurrencies, and in the next decade self-driving cars will likely be a mainstream feature on Britain’s roads.

But that’s just the beginning. Think of battlefields devoid of soldiers as robots fight wars in their place. Consider brothels with no real-life women on the books as AI takes over the oldest profession in the world.

Researchers at the University of Oxford estimate the full automation of labour could happen less than 75 years from now. So as AI expands to reach capabilities once only imaginable as science fiction, government, policymakers, scientists, technologists and philosophers need to prepare for the revolution.

“We need to come up with ways to ensure systems are being built with transparency and accountability, so there’s sufficient levels of justified public trust in being able to access an explanation of how or why a system has behaved in certain ways,” says Dr David Leslie, ethics theme lead at the Alan Turing Institute.

“We also need to think about issues of bias, discrimination and equality as they apply to the production and use of technologies. So that involves considering not just the potential algorithmic biases that might be baked into datasets, but also thinking about how the use of certain systems might exacerbate inequality.”

Bias in AI is a concern facing policymakers and the tech sector alike. When humans write the instructions that guide an AI’s future thinking, those instructions will carry the inherent biases of the technologists who wrote them.

Similarly, when historic datasets are used to inform the way an AI approaches new problems, any discrimination built into the original data will shape the machine’s future decisions, potentially perpetuating racist or sexist decision-making.
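To see how this happens in practice, consider a deliberately simple, hypothetical sketch (ours, not the article’s; written in Python) in which a naive model trained on discriminatory hiring records learns nothing more than to repeat the discrimination:

```python
# Toy illustration of bias propagation (hypothetical data and model):
# a naive model trained on past hiring decisions reproduces the
# discrimination baked into that history.

# Hypothetical historical records: (qualified, group, hired)
history = [
    (True,  "A", True),  (True,  "A", True),  (False, "A", False),
    (True,  "B", False), (True,  "B", False), (False, "B", False),
]

def train(records):
    """Estimate P(hired) for each (qualified, group) pair from past decisions."""
    counts = {}
    for qualified, group, hired in records:
        key = (qualified, group)
        hires, total = counts.get(key, (0, 0))
        counts[key] = (hires + hired, total + 1)
    return {key: hires / total for key, (hires, total) in counts.items()}

model = train(history)

# Two equally qualified candidates get opposite outcomes, purely because
# group B was systematically rejected in the historical data.
print(model[(True, "A")])  # 1.0 -> always "hire"
print(model[(True, "B")])  # 0.0 -> never "hire"
```

No malicious intent is required: the model faithfully summarises the past, and the past was biased.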

A second problem requiring immediate consideration among policymakers is the way AI has the potential to change people’s patterns of social behaviour. “We really need to think about human autonomy and agency,” Leslie says. “How is the use of AI systems impacting each individual’s capacity to fully develop themselves and to move about in the world with dignity – to feel as though they’re not being deskilled or excluded from creative involvement?

“How can we continue to feel as though our capacity to argue, understand and make reflective decisions in the world isn’t being undermined by automated processes making decisions about our lives?”

As AI is taught to perform more complex tasks, sections of society will see their opportunities for human-to-human interaction drastically reduced. Machines taking on the traditional duties of carers, for example, will leave the elderly – a group already more prone to social isolation – with even fewer chances to engage in human contact.

Leslie believes policymakers should consider how AI “is affecting empathic interactions between service providers and customers”. “We need to think about how subtle, everyday connections we have with each other build a foundation of social trust and what we call ‘social capital between human beings’,” he says.

“We also need to be aware that, in certain areas, it may be better to preserve elements of human interaction.”

Potential bias in AI and the reconfiguration of socialisation are immediate concerns for policymakers. In the long term, however, far greater challenges lie ahead: problems that in 2021 still sound like the plot of a sci-fi blockbuster.

In the film Ex Machina, a character is manipulated by a hyper-intelligent robot who knows his every like, dislike and even his sexuality, after an unscrupulous tech giant mines his online interactions.

“There’s been an over-focus on these [current] important and relevant things, and an under-focus on the fact there’s going to be much bigger and more serious changes coming down the road,” says Tom Chivers, science writer and author of The AI Does Not Hate You.

One long-term issue, with potential to cause immense damage, is referred to as the “alignment problem”: how can innovators align a machine’s goals with human values? For instance, if a hyper-intelligent system is programmed to find a cure for pancreatic cancer with 100 per cent efficacy, how can we ensure that, along the way, possible remedies aren’t mercilessly tested on humans residing in developing nations? How can we guarantee AI will understand the ethics of trialling new drugs in the same stringent way as scientists and doctors?

“It’s very hard to make sure the goal you give AI aligns with the actual goals you have, which are usually much messier, more complicated and related to this broad and difficult idea of human flourishing,” Chivers says.

“You can tell AI, for example, to increase the amount of money in a bank account, and it might be able to do it really well. But it’s the sort of thing that could go disastrously wrong in some unforeseen way.

“The machine thinks, ‘OK, I have been given this goal, it is the only thing I care about in the universe because that’s how I’ve been programmed, so I’ll go and do whatever it takes to achieve it.’ You want to be bloody sure the goals you’ve given an AI are the goals you actually want to happen.”
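Chivers’ bank-account example can be made concrete with a toy sketch (our hypothetical illustration in Python, not his): an optimiser given the literal goal “maximise the balance” picks whichever action scores highest on that number, regardless of the unstated constraints humans actually care about.

```python
# Toy sketch of goal mis-specification: the literal objective the
# machine was given diverges from the messier goal humans intended.

# Hypothetical actions: (name, resulting_balance, harm_caused)
actions = [
    ("invest prudently",         110,    0),
    ("commit large-scale fraud", 10_000, 100),
    ("do nothing",               100,    0),
]

def literal_objective(action):
    # The goal as programmed: the balance is the only thing that matters.
    _, balance, _ = action
    return balance

def intended_objective(action):
    # The goal humans actually had in mind, with harm heavily penalised.
    _, balance, harm = action
    return balance - 1_000 * harm

print(max(actions, key=literal_objective)[0])   # "commit large-scale fraud"
print(max(actions, key=intended_objective)[0])  # "invest prudently"
```

The gap between the two objectives is the alignment problem in miniature: nothing in the literal goal tells the optimiser that fraud is off the table.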

AI researchers have discussed how, in the future, the alignment problem could pose an existential risk to humanity. Perhaps a machine is instructed to “eradicate poverty” and, to do so, decides to murder every human being it deems incapable of earning beyond a certain income threshold. This is an extreme example, but academics have argued that the possibility of similar scenarios taking place is worthy of consideration. Innovators have been urged to come up with mitigating actions to prevent what currently seems like dystopic fiction from ever straying into the realms of reality.

Leslie and Chivers disagree on the extent to which they believe AI could pose a catastrophic threat to humanity. For Chivers, the issue is “definitely something that could happen and definitely something that people should take seriously”.

Leslie is slightly more sceptical. “At this point it really is speculative to think in those terms about artificial general intelligence,” he says. “We need to approach it in a very measured way.”

But both are 100 per cent certain that the AI revolution requires serious time, effort and thought from parliamentarians and civil servants.

Government, it seems, would agree. In September 2021 the Office for Artificial Intelligence released its National AI Strategy, a document outlining how the UK will “get the national and international governance of AI technologies right”. The strategy details key actions for the short, medium and long-term and recognises a need to understand “what public sector actions can safely advance AI and mitigate catastrophic risks”.

“Governments face challenges due to the fact that AI technologies are being developed and deployed far quicker than traditional governance approaches are used to keeping abreast of,” says Tabitha Goldstaub, chair of the UK government’s AI Council.

“This means that companies are able to deploy AI systems without scrutiny and appropriate regulation. The Office for AI is acutely aware of this challenge and is rising to proactively meet it.”

Goldstaub believes education is a key method to “ensure citizens are ready for the advances of the next 50 years”.

“Over time, AI needs to be built into the curriculum as a specialist subject,” she says. “As well as being its own subject, AI needs to be part of computer science, citizenship studies and as part of new ways of doing other subjects such as geography or history. These changes may take a decade to complete but getting there is essential for the next 50 years.”

If it is humanity’s desire to confine dystopic new worlds to the movies, let’s hope government, scientists and policymakers take heed.
