Computerworld

Killer robot campaign defector to 'embed ethics' in autonomous weapons

UNSW Canberra and University of Queensland to commence $9m, Defence-backed research

Dr Jai Galliott – an academic at UNSW Canberra – used to be against fully autonomous weapons, an emerging class of military technology that leverages artificial intelligence to select and shoot enemies.

Along with thousands of academics, activists and artists, he signed an open letter calling on governments to preemptively ban the so-called ‘slaughterbots,’ fearing a rise of ruthlessly effective killing machines, lacking in moral judgement, ethics and accountability.

He was a vocal supporter of the International Committee for Robot Arms Control (ICRAC) and the Campaign to Stop Killer Robots (CtSKR), which want an international treaty against the technology akin to the bans on chemical weapons and cluster munitions.

But in 2015, he had a “radical change in opinion”. He has since expressed regret about contributing to what he now calls “fearmongering” on the issue.

“Some people are just so determined to see a ban on anything that might resemble any kind of new weapons technology, whether it’s a lethal robot or not,” Galliott says. “Essentially they’re peaceniks and they’re not going to be happy until every nation either has them or they manage to get a ban. I don’t think that’s going to happen.”

Galliott now takes a more pragmatic view: it is better to work with the military to ensure ethics and the law are embedded in the AI and autonomous systems being used on the battlefield.

Along with University of Queensland Professor Rain Liivoja, Galliott is now commencing a five-year, $9 million study to explore the ethical constraints required in such systems, and the potential of autonomy to “enhance compliance” with social values.

“Why invest so much time and effort in trying to push for something that’s never going to occur when you can invest your efforts in working directly with the people who are developing the technologies and to make sure they’re as ethical and legal as can be,” Galliott says.

“I think that’s going to drive a better humanitarian outcome, rather than being very critical and being a constant contrarian,” he adds.

Before the genie’s out

The research is being funded by the government through the Defence Cooperative Research Centre (DCRC) for Trusted Autonomous Systems, a $50 million initiative launched by the federal government in 2017.

As well as surveying Defence personnel to understand what they expect from new robotic comrades, the study will pair ethicists and lawyers with the programmers and engineers working on AI-supported weapons to “nut out a lot of the ethical and legal challenges at the time of the design rather than trying to do it all after the fact, after the genie’s out of the bottle,” Galliott said.

The Australian military has an “unwritten policy” against completely autonomous weapons, requiring there be a “human in the loop”. But, Galliott says, “it comes in degrees”.

“When you’re deploying these robots in semi-autonomous or autonomous mode, the whole idea of course is not to have a human overseeing every little action – so at the end of the day, the human that’s involved is going to be very distant from any effect,” Galliott says.

For example, a tank could be fitted with computer vision capabilities to identify ‘person with weapon’ in a landscape, and aim a gun at them.

Should potential targets be labelled with a percentage confidence score or a green box? How can the user interface help avoid ‘automation bias’, where soldiers blindly shoot at whatever the AI suggests?

Such questions shift many of the ethical considerations to the coding stage.

“It’s the programmer that’s going to have a degree of responsibility over how this potentially lethal action is meted out,” Galliott says. “And programmers inevitably apply their own sense of ethics, whenever they’re coding anything. You can’t avoid it…The aim of this project is to try and uncover that and maybe improve the design process.”
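The confidence-score question comes down to a handful of design decisions made in code. As a purely illustrative sketch (the names, threshold and interface here are assumptions, not anything from the DCRC project), deciding whether an operator sees a raw score or just a simple green box, and how confident the model must be before a cue appears at all, might look something like this:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical detection produced by a computer-vision model.
@dataclass
class Detection:
    label: str          # e.g. "person with weapon"
    confidence: float   # model's confidence, between 0.0 and 1.0

# Design choices that carry ethical weight: how confident must the model be
# before a target is even shown, and does the operator see the raw score
# (inviting scrutiny) or only a plain cue (inviting automation bias)?
MIN_DISPLAY_CONFIDENCE = 0.85
SHOW_RAW_CONFIDENCE = True

def render_target_cue(det: Detection) -> Optional[str]:
    """Return the cue shown to the operator, or None to suppress it."""
    if det.confidence < MIN_DISPLAY_CONFIDENCE:
        return None  # below threshold: never presented as a target
    if SHOW_RAW_CONFIDENCE:
        return f"{det.label} ({det.confidence:.0%} confidence)"
    return det.label  # binary cue, the equivalent of a plain green box

print(render_target_cue(Detection("person with weapon", 0.91)))
```

Even in a toy example like this, the threshold value and the choice of cue are decisions a programmer makes long before a soldier ever sees the screen.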

The research effort will also see the establishment of an advisory board that organisations can consult on ethical matters, and will explore where AI can be used to make weapons safer, for example by teaching it to identify ambulances or hospitals and alert soldiers.
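The ambulance example is the kind of safeguard that could, in principle, sit alongside the targeting code. A minimal sketch, assuming a hypothetical list of protected labels and a plain text alert (nothing here reflects the actual project’s design):

```python
# Illustrative only: a hypothetical safeguard that checks detections against
# a list of protected objects and warns the operator before any engagement.
PROTECTED_LABELS = {"ambulance", "hospital", "red cross marking"}

def engagement_warnings(detections: list[str]) -> list[str]:
    """Return a warning for each protected object seen in the scene."""
    return [f"WARNING: {label} detected - engagement restricted"
            for label in detections
            if label in PROTECTED_LABELS]

for warning in engagement_warnings(["person with weapon", "ambulance"]):
    print(warning)
```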

“Even if it were to do nothing but help eliminate a number of lethal accidents, that in itself is a really good thing,” Galliott says.


Pulling the trigger

The research is timely. For some years, major defence forces, including the Australian Navy, have used highly automated gun turrets in remote areas that fire at anything in their proximity. But thanks to advances in AI, military forces are now looking to send autonomous weapons into situations where the potential casualty count is far higher.

Last month, the US Army posted on a federal government contracting site that it was seeking industry partners to help it “leverage recent advances in computer vision and artificial intelligence” and develop “autonomous target acquisition technology”.

The call comes as part of its ATLAS program – Advanced Targeting and Lethality Automated System – the army’s effort to use technology to “acquire, identify and engage targets at least three times faster” than human soldiers can.

The Australian Army plans to ramp up its use of robotics and autonomous systems in ground combat over the next decade, it revealed last year, to “augment soldiers performing dirty, dangerous and dull roles” and improve decision-making.

ATLAS officials, quoted in Breaking Defense this week, said the system would not be pulling the trigger, pointing to a US Department of Defense directive which ensures autonomous weapons systems allow human commanders to “exercise appropriate levels of human judgment over the use of force”.

The department last month released its AI strategy, stating it will soon “articulate its vision and guiding principles for AI ethics and safety in defense matters”.

“There will always be a human somewhere in the loop, it’s just a matter of where in that loop and how far from it. It’s not like the military is trying to replace soldiers or anything like that. The human is still going to be the ultimate moral and legal arbiter in warfare, nobody’s trying to change that,” Galliott says.

Execute any evil order

Galliott’s involvement in the project, and the project’s premise, has not gone down well with his former comrades at ICRAC and the CtSKR.

Mary Wareham from CtSKR and Human Rights Watch said the group was “shocked” by the announcement.

“[The DoD investment] implies the Australian government believes it is possible to program ethics and the laws of war into machines, despite the widespread view among AI experts that this will never be possible. That’s why I called the research effort ‘doomed’,” she says.

Galliott’s fellow UNSW academic, AI Professor Toby Walsh said he was “severely disappointed” in his university “that this amount of money is being thrown at this particular aspect of the problem”.

Between them, UNSW Canberra and University of Queensland are putting $3.5 million towards the research.

Walsh last year led 122 AI experts working in Australia in signing an open letter to then Prime Minister Malcolm Turnbull, calling on Australia to “take a firm global stand” against lethal autonomous weapons systems that remove “meaningful human control” when selecting targets and deploying lethal force.

British physicist Stephen Hawking, Apple co-founder Steve Wozniak, cognitive scientist Noam Chomsky, Tesla chief Elon Musk and Mustafa Suleyman, head of applied AI at Google’s DeepMind, have all signed similar letters in recent years.

“There are arguments that [autonomous weapons] will change the character of war in a very bad way. [The weapons] will change the speed, accuracy and duration of warfare. And they will be the perfect weapons for terrorists. They would execute any order however evil it was. Target all Caucasians, or kill all children. Those are the things you could give a weapon like this and they would do it without question,” Walsh says.

"If you start building them they’ll turn up on the black market, and then we’ll be defending ourselves," Walsh adds.

Last year, Google CEO Sundar Pichai vowed that the company would not deploy artificial intelligence in deadly weapons, following resignations and protests from staff over its involvement with the US Department of Defense’s Project Maven. Microsoft employees have also protested against the company’s involvement with US defence projects.

Among the public, an Ipsos poll from December last year found 59 per cent of Australians surveyed opposed the use of lethal autonomous weapons systems, with 15 per cent in support.

Despite the opposition, Galliott is defiant.

“The backlash has been small, a very small subset of people who are absolute pacifist peaceniks,” he says. “It’s very easy to sit back and criticise and do nothing, sometimes the more beneficial or even courageous thing is to get involved…I’m doing the right thing.”