Can autonomous killer robots be stopped?

Advances in AI could make lethal weapon technology devastatingly effective

Earlier this week robotics and artificial intelligence experts signed an open letter calling on the United Nations to help prevent the “third revolution in warfare”: lethal autonomous weapons.

“Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend,” the letter states. “These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”

What is a lethal autonomous weapon?

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can be sent to search for and shoot anyone in a pre-determined area. They don’t include remotely piloted drones, which keep a human in the loop to ‘pull the trigger’, or active protection systems such as fixed sentry guns, which fire at targets detected by sensors to defend an area.

How likely are they?

According to Ryan Gariepy, founder and CTO of Clearpath Robotics, very likely.

“Unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability,” he said.

"This is not a hypothetical scenario, but a very real, very pressing concern which needs immediate action."

Rapid technological advances mean fully autonomous weapons are possible, and many companies are pursuing that goal. In July the Australian Defence Force announced a $50 million research centre to develop ‘Trusted Autonomous Systems’.

UTS Professor Mary-Anne Williams imagines a rather terrifying near-future: “Weaponised robots could be like the velociraptors in Jurassic Park, with agile mobility and lightning-fast reactions, able to hunt humans with high-precision sensors augmented with information from computer networks,” she said.

What do the experts want?

The latest open letter (a similar one was issued in 2015) calls on the United Nations to take action “to find a way to protect us all from these dangers”.

A UN Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems was due to meet for the first time this week, but the event was cancelled and rescheduled for November.

“We entreat the high contracting parties participating in the GGE to work hard at finding means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilising effects of these technologies,” the letter continues.

Do they want a ban?

Some of the signatories have gone a step further than the language of the letter and have called for an outright ban on autonomous weapons “similar to bans on chemical and other weapons”.

A separate group, the Campaign to Stop Killer Robots, has long called for an international treaty to stop the technology.

“A comprehensive, pre-emptive prohibition on the development, production and use of fully autonomous weapons – weapons that operate on their own without human intervention – is urgently needed,” the group says.

But surely it’s better to lose bots in battle than human soldiers?

“Lethal autonomous robots might well reduce collateral damage – just as the current arsenal of fire-and-forget weapons has. This negates the notion that lethal autonomous robots should be declared unlawful per se,” argues Professor Anthony Finn, director of the Defence and Systems Institute at the University of South Australia.

On the other hand, some argue that replacing human troops with machines removes the disincentive of losing human lives, which could make the decision to go to war much easier, increasing the likelihood of conflict.

Is a ban possible? Would it be effective anyway?

Nearly 200 states worldwide are bound by the Chemical Weapons Convention, which, since the early ’90s, has led to the destruction of 93 per cent of known stockpiles and the deactivation of 97 declared production facilities. The convention has been successful in massively reducing the use of chemical weapons, although some countries continue to use them.

A ban on lethal autonomous weapons might falter, however, due to the ‘me-tooism’ of opposing states, as it has with nuclear weapons.

Last month 122 countries endorsed a UN treaty to ban the use of nuclear weapons, heralded by its supporters as a big step towards the elimination of all nuclear arms. However, key nuclear-armed states and their allies were absent from the treaty, some of which said recent posturing by North Korea about its nuclear missile capabilities was good reason to keep their own.

“Enforcing such a ban is highly problematic and it might create other problems; stopping countries such as Australia from developing defensive killer robots would leave us vulnerable to other countries and groups that ignore the ban,” says Williams. “A ban on killer robots cannot be the only strategy. Society and nations need much more than a killer robot ban.”

What about banning further research?

Many researchers are guided by ethical norms. However, research ethics are not enshrined in law, and they vary between institutions and countries.

"It is an excellent idea to consider the positives and the negatives of autonomous systems research and to ban research that is unethical,” says Dr Michael Harre, lecturer with the Faculty of Engineering and Information Technologies at the University of Sydney. However, he adds, there needs to be a “closer examination of what constitutes 'ethical' research”.

Many of the best minds currently working in AI and robotics have signed the open letter, but far from all of them. Unsurprisingly, companies involved in military technology don’t often publish their breakthroughs in peer-reviewed journals, giving the scientific community little or no oversight.

But some kind of agreement is surely better than nothing?

The signatories certainly believe so.

“The key is to establish circumstances under which their use might be permitted and to develop practical legal frameworks that allocate responsibility for infringements,” says Finn.

If a ban is not possible, it’s better to have a legal framework in place sooner rather than later.

"In the past, technology has often advanced much faster than legal and cultural frameworks, leading to technology-driven situations such as mutually assured destruction during the Cold War, and the proliferation of land mines,” says James Harland, Associate Professor in Computational Logic at RMIT.

“I have seen first-hand the appalling legacy of land mines in countries such as Vietnam [where RMIT has two campuses], where hundreds of people are killed or maimed each year from mines planted over 40 years ago. I think we have a chance here to establish this kind of legal framework in advance of the technology for a change, and thus allow society to control technology rather than the other way around.”
