AI can aid defence, but must be governed by the highest standards
Combat vehicle apparently capable of AI (Credit: Andrew Blyth / Alamy Stock Photo)
Artificial intelligence (AI) and its implications for our future have certainly caught the public’s interest recently, whether it’s curiosity about applications such as ChatGPT or dystopian predictions of a machine-run future.
While this is fuelled by rapid technological changes, the science behind AI has been around for decades, dating back to the 1950s.
The debate we now face is how to harness AI for the good of society while safeguarding against its unintended consequences and against those who would use it to do harm. Nowhere is this more apparent than in the military and defence sphere. Modern militaries, like many organisations today, run on large amounts of data – from managing logistics to interpreting the masses of information that modern military systems collect to inform targeting decisions on the battlefield.
The importance of an accurate and timely supply of munitions has been evident from the war in Ukraine. AI will be vital in managing modern military logistics – not just in managing current stocks but also in predicting what stocks will be needed in a time of crisis. This could revolutionise our ability to react quickly to events and ensure better visibility of stockpiles and their supply chains.
Many modern defence systems, from aircraft and ships to land platforms, are swimming in sensors but drowning in data. The problem is that data is now collected on a scale beyond human capacity to interpret effectively and turn into informed decisions. AI can process this data quickly enough to aid system operators in their decisions in a way that is not possible at present. While some may say this gives too much power to a machine, any action will still ultimately be taken by a human – an action that, I would argue, is better informed.
This leads to the question of whether we allow completely autonomous systems to make decisions such as whether to engage a target, taking humans out of the loop. This is a debate we need to have. But as long as the necessary legal safeguards are built in, I think this is the way certain systems are going to develop.
These debates, particularly over how much autonomy we allow systems to have, are very much live across our allies. Nato has already started work on AI certification standards, to be adopted across the alliance. But what about the individuals or state actors who will not abide by ethical or legal standards? Those with no regard for such standards will always use whatever means they choose, however they choose, for bad purposes. That is true of any technology, and it is no reason not to argue for, and put in place, high standards for ourselves.
Some argue that we need brand new international agreements to govern the use of AI in defence. I would urge caution here. Any new rules or regulations would quickly be outdated by the rapid pace of change we are seeing. A better approach is to apply existing international law, including international humanitarian law, to military decisions involving the use of AI.
The AI genie is out of the bottle and it cannot be put back in. We therefore need to exploit the advantages of AI but also recognise, particularly in the military sphere, that we have an obligation to ensure its use is governed by high international standards in line with our values – values that are worth fighting for.
Kevan Jones, Labour MP for North Durham