Philosophers, legal experts, scientists and military personnel gathered in Canberra this week to discuss the ethical issues surrounding the use of artificial intelligence on the battlefield and beyond.
The goal was to establish a set of principles that will guide the use of AI and autonomous systems in a range of Defence applications, from weapons systems to business management and decision support tools.
“This is about developing relevant ethical principles to facilitate communication between software engineers, integrators and operators during the development and operation of military AI systems,” said Chief Defence Scientist Professor Tanya Monro in a statement this week.
“The objective of the workshop was to bring together the best national and international people in the field, work through incredibly complex moral issues and create a roadmap for ethical AI into the future,” she added.
The three-day summit was hosted by the Department of Defence’s Science and Technology Group, the Royal Australian Air Force’s ‘augmented intelligence’ project Plan Jericho, and the government’s Trusted Autonomous Systems Defence Cooperative Research Centre.
Around 80 people from 45 organisations came to the capital’s retro-futuristic Shine Dome, including representatives from the Australian Defence Force, the Australian Defence College’s Centre for Defence Leadership and Ethics, Queensland University of Technology and the Australian National University.
Among them was Air Vice-Marshal Cath Roberts, head of Air Force capability, who said after the event that the military “must be sure” AI technologies are trusted and transparent “before we bring them into service”.
“This workshop is a key activity in developing Defence’s understanding in this critical area. Our focus is on how to ensure appropriate action and moral responsibility for decisions, and continuously evaluating which decisions can be made by machines and which must be made by humans,” she said.
The ethics surrounding Defence’s use of AI have become a focus for the department in recent years.
The government established a $50 million Defence Cooperative Research Centre (DCRC) for Trusted Autonomous Systems in 2017 to “ensure reliable and effective cooperation between people and machines” during military operations.
Earlier this year, the DCRC backed a $9 million study to explore the ethical constraints required in such systems, and the potential of autonomy to “enhance compliance” with social values.
Although this week’s workshop heard “honest and challenging questions”, according to organiser Dr Kate Devitt from QUT, some have questioned the lack of independent, dissenting voices at the event.
“I didn't attend because I didn't know it was taking place,” Toby Walsh, professor of artificial intelligence at the University of New South Wales and Data61, told Computerworld.
Walsh is co-chair of an Australian Council of Learned Academies (ACOLA) expert working group, which this week released a government-backed report into the ‘Effective and Ethical Development of Artificial Intelligence’.
Walsh is also a contributor to the government-funded effort by Data61 to formulate a national AI ethics framework. He has contributed extensively to the debate on autonomous weapons at the UN and elsewhere, and last year led 122 experts working in Australia in signing an open letter to government, calling on Australia to “take a firm global stand” against AI weapons that remove “meaningful human control” when selecting targets and deploying lethal force.
“Why then was someone like myself…not invited? I fear they didn't want their ideas tested by independent and outside voices,” he said.