Australian military to state 'clear position' on killer robot ethics next year
- 08 August, 2019 14:59
A “clear position” on the ethical use of artificial intelligence in warfare will be written into the Australian military’s guiding doctrine document by the end of next year, Computerworld has learnt.
The position on how Australian defence forces should use AI-enabled weapons – such as so-called ‘killer robots’ – and how to ensure they have “appropriate ethical oversight” will be published in a dedicated chapter of the doctrine.
“The principles will be iterated and adapted, ensuring ethical vigilance on the use of AI and autonomous technologies on an ongoing basis,” a Department of Defence spokesperson told Computerworld.
The ethical position and set of guiding principles around the Army, Navy and Air Force’s use of AI on the battlefield and beyond have already been drafted, following a low-key summit last week in Canberra.
Philosophers, legal experts, scientists and military personnel were among those who attended the three-day workshop hosted by the Department of Defence’s Science and Technology Group, the Royal Australian Air Force’s ‘augmented intelligence’ project Plan Jericho, and the government’s Trusted Autonomous Systems Defence Cooperative Research Centre.
Computerworld can reveal that global arms manufacturers BAE Systems and Thales were among the organisations that sent representatives to the meeting.
Boeing’s military products research group Phantom Works, and military-focused robot and drone makers DefendTex, Cyborg Dynamics Engineering and Skybourne Technologies were also in attendance, along with bureaucrats from the Attorney General's Department and the Department of Foreign Affairs and Trade.
“The aim of the conference was to ensure a uniform position on how autonomous weapons and systems should be regulated. Australia is an active participant in ensuring that all autonomous weapons have appropriate ethical oversight,” a Defence spokesperson said.
The event was prompted by a five-year, $9 million study into the ethical constraints required of autonomous weapon systems, conducted by Australian Defence Force Academy group leader (and former supporter of the Campaign to Stop Killer Robots) Dr Jai Galliott.
The study will explore the potential of autonomy to “enhance compliance” with social values. It was slammed earlier this year by anti-AI weapon campaigners, who called it “doomed”; Galliott responded by calling the campaigners “absolute pacifist peaceniks”.
Just over a hundred people attended the workshop last week at the capital’s Shine Dome to work on the doctrine entry. The doctrine provides “authoritative and proven guidance” to all military personnel, although it does not have legal standing.
“Participants contributed to discussions on developing principles relevant to Defence contexts for AI and autonomous systems. The workshop produced draft principles which will be evaluated, validated and approved by relevant stakeholders,” the spokesperson added.
Defence confirmed that it will be responsible for the final evaluation and approval of the principles.
“Industry and other stakeholders were invited to contribute their ideas towards these principles. However, they will not be involved in the approval process,” the spokesperson said.