Killer robot campaign defector to 'embed ethics' in autonomous weapons

UNSW Canberra and University of Queensland to commence $9m, Defence-backed research

Pulling the trigger

The research is timely. Major defence forces, including the Australian Navy, have for some years deployed highly automated gun turrets in remote areas that fire at anything in their proximity. But thanks to advances in AI, military forces are now looking to send autonomous weapons into situations where the potential casualty count is far higher.

Last month, the US Army posted on a federal government contracting site that it was seeking industry partners to help it “leverage recent advances in computer vision and artificial intelligence” and develop “autonomous target acquisition technology”.

The call comes as part of its ATLAS program – Advanced Targeting and Lethality Automated System – the army’s effort to use technology to “acquire, identify and engage targets at least three times faster” than human soldiers can.

The Australian Army plans to ramp up its use of robotics and autonomous systems in ground combat over the next decade, it revealed last year, to “augment soldiers performing dirty, dangerous and dull roles” and improve decision-making.

ATLAS officials, quoted in Breaking Defense this week, said the system would not be pulling the trigger, pointing to a US Department of Defense directive requiring that autonomous weapons systems allow human commanders to “exercise appropriate levels of human judgment over the use of force”.

The department last month released its AI strategy, stating it will soon “articulate its vision and guiding principles for AI ethics and safety in defense matters”.

“There will always be a human somewhere in the loop, it’s just a matter of where in that loop and how far from it. It’s not like the military is trying to replace soldiers or anything like that. The human is still going to be the ultimate moral and legal arbiter in warfare, nobody’s trying to change that,” Galliott says.

Execute any evil order

Galliott’s involvement in the project, and the project’s premise, have not gone down well with his former comrades at ICRAC and the CtSKR.

Mary Wareham from CtSKR and Human Rights Watch said the group was “shocked” by the announcement.

“[The DoD investment] implies the Australian government believes it is possible to program ethics and the laws of war into machines, despite the widespread view among AI experts that this will never be possible. That’s why I called the research effort ‘doomed’,” she says.

Galliott’s fellow UNSW academic, AI Professor Toby Walsh, said he was “severely disappointed” in his university “that this amount of money is being thrown at this particular aspect of the problem”.

Between them, UNSW Canberra and University of Queensland are putting $3.5 million towards the research.

Walsh last year led 122 AI experts working in Australia in signing an open letter to then Prime Minister Malcolm Turnbull, calling on Australia to “take a firm global stand” against lethal autonomous weapons systems that remove “meaningful human control” when selecting targets and deploying lethal force.

British physicist Stephen Hawking, Apple co-founder Steve Wozniak, cognitive scientist Noam Chomsky, Tesla chief Elon Musk and Mustafa Suleyman, head of applied AI at Google’s DeepMind, have all signed similar letters in recent years.

“There are arguments that [autonomous weapons] will change the character of war in a very bad way. [The weapons] will change the speed, accuracy and duration of warfare. And they will be the perfect weapons for terrorists. They would execute any order however evil it was. Target all Caucasians, or kill all children. Those are the things you could give a weapon like this and they would do it without question,” Walsh says.

"If you start building them they’ll turn up on the black market, and then we’ll be defending ourselves," Walsh adds.

Last year, Google CEO Sundar Pichai vowed that the company would not deploy artificial intelligence for use in deadly weapons, following resignations and protests from staff over its involvement with the US Department of Defense’s Project Maven. Microsoft employees have also protested against the company’s involvement with Defense projects.

Among the public, an Ipsos poll from December last year found 59 per cent of Australians surveyed opposed the use of lethal autonomous weapons systems, with 15 per cent in support.

Despite the opposition, Galliott is defiant.

“The backlash has been small, a very small subset of people who are absolute pacifist peaceniks,” he says. “It’s very easy to sit back and criticise and do nothing, sometimes the more beneficial or even courageous thing is to get involved…I’m doing the right thing.”
