Foreign minister Julie Bishop has delivered a blow to a campaign against the weaponisation of artificial intelligence, by saying the government considers it “premature to support” any ban.
In November, 122 AI experts working in Australia signed an open letter to Prime Minister Malcolm Turnbull, calling on Australia to “take a firm global stand” against lethal autonomous weapons systems that remove “meaningful human control” when selecting targets and deploying lethal force.
The group, led by UNSW Professor of AI Toby Walsh, urged the government to support a prohibition of such weapons at the United Nations Conference on the Convention on Certain Conventional Weapons (CCW) that year.
While Brazil, Iraq and Uganda joined 19 other countries in calling for the ban, Australia did not.
In her response to the open letter’s signatories, Bishop said that the international community was yet to reach a common understanding of lethal autonomous weapons systems, and the Australian government “considers these dimensions should be explored more comprehensively” before policy is set.
“At the same time, Australia also has an interest in all aspects of emerging technology relevant to Australian Defence Force missions, including autonomous weapons systems. Trusted autonomous weapons systems could have important benefits such as allowing faster and more accurate actions that could reduce risk to friendly units and civilian populations,” she wrote.
In July, the government launched a $50 million Defence Cooperative Research Centre focused on ‘Trusted Autonomous Systems’.
Bishop added that under current policy there would “always be human interaction with autonomous systems”.
Walsh said her reply was “a little disappointing” despite the promise that Australia would remain actively engaged with the CCW and the UN’s newly established Group of Governmental Experts on Lethal Autonomous Weapons Systems, which will meet for the first time in April.
“This appears to be a very weak commitment to tackle this troubling issue and to commit Australia to a standard less than ‘meaningful human control’. Sadly, Australia continues not to take moral leadership in this space,” Walsh said.
Lethal autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can be sent to search for and shoot anyone in a pre-determined area. They don’t include remotely piloted drones with humans in the loop to ‘pull the trigger’, or active protection systems such as fixed sentry guns, which fire at targets detected by sensors to defend an area.
They have been dubbed the “third revolution in warfare”.
The Australian experts’ open letter is part of a broader international campaign hoping to stop the use of such weapons by establishing treaties similar to the ban on chemical weapons.
“The concern is that a variety of available sensors and advances in artificial intelligence are making it increasingly practical to design weapons systems that would target and attack without any meaningful human control,” says the Campaign to Stop Killer Robots, an associated group launched five years ago.
“States must draw the line now against unchecked autonomy in weapon systems by ensuring that the decision to take human life is never delegated to a machine.”
In August last year, Elon Musk, founder of Tesla, SpaceX and OpenAI, and Mustafa Suleyman, co-founder and Head of Applied AI at Google’s DeepMind, were among 116 signatories of an open letter to the UN “raising the alarm” on the technology.
“We do not have long to act. Once this Pandora’s box is opened, it will be hard to close,” they wrote.