Autonomous weapons could transform the nature of warfare, but without regulation the risks are huge
The Lords special inquiry select committee investigating AI and weapon systems brings together a remarkable range of relevant knowledge and experience
Artificial Intelligence (AI) is now in the headlines every day. It is described in terms ranging from the mildly curious to the terrifyingly apocalyptic. But one particular application of AI is moving to the top of the agenda: autonomous weapon systems, or AWS. These are systems which can detect, select and engage targets with little or no human intervention. So what are the practical implications?
AWS, if fully autonomous, could change the nature of warfare. Weapons can be told before deployment what criteria to observe: for example, what collateral damage is permissible. They will be prepared to press home attacks when human commanders, and their pilots and soldiers, might not. They may be much more effective in identifying and responding to threats to their mission, again without human intervention.
This scenario immediately raises a host of questions. How can AWS be deployed ethically and responsibly? The question of how international humanitarian law applies is a challenging one. If there is, in the jargon, “meaningful human control”, then the decision-maker can be identified, and called to account if AWS “commits” war crimes. But if there is indeed a human in the loop, full autonomy is lost, which may not be attractive to some states. And, even if there is meaningful human control, the practical ability of legal processes to operate effectively may be open to question.
The speed of development of AWS, as with AI more generally, is breathtaking. This brings with it another set of challenges. Do the “owners” of an autonomous system really understand how it is coded, and how can they identify when it might operate in an unexpected way? If it is capable of learning in deployment, how do you distinguish between original and learned capabilities? And how do you know when it goes wrong?
The speed of development means that our conventional assumptions about procurement are out of date. Forget the decades needed for the introduction of a new frigate or fighter aircraft. Significant developments in AWS may happen in a matter of months. And the balance between purchasing governments and private industry may tilt towards the latter. No longer might the conversation be “This is what we want; can you supply it?” but “This is what we have developed; do you have a use for it?”
Is there any way in which this technology can be regulated, either internationally or by the United Kingdom in respect of its own potential use of AI-assisted systems? Comparisons with treaties on nuclear weapons and proliferation might offer some lessons. But the difficulties of reaching international agreement may be indicated by the first challenge: how do you reach agreement on a working definition of what an autonomous weapon system actually is?
I was delighted that the House of Lords decided to establish a special inquiry select committee to investigate AI and weapon systems. The committee has 13 members, bringing together a remarkable range of relevant knowledge and experience.
We have already taken a great deal of oral and written evidence: from AI professionals, academics, international lawyers, those retired from the military and the civil service, and industry, both in the UK and worldwide. In addition to our final sessions of evidence, we will be visiting research establishments in Cambridge, Glasgow and Edinburgh. We have to report by the end of November, and we are on track to do so. We have no illusions about the complexity of the issues; but this is the right inquiry at the right time.
Lord Lisvane is a Crossbench peer and chair of the AI in Weapon Systems Lords Select Committee