The government is seeking the public’s views on the ethical concerns raised by artificial intelligence, as it works towards formulating a national AI ethics framework.
CSIRO’s Data61 today launched a discussion paper to “encourage a conversation on how the nation develops and uses AI” covering the benefits and risks of the increasingly ubiquitous technology.
Funded by the government in its May Budget, the discussion paper proposes eight principles “to guide organisations in the use or development of AI systems”.
Feedback on the paper – and on the eight principles in particular – is being sought until the end of May.
“AI has the potential to provide real social, economic, and environmental benefits – boosting Australia’s economic growth and making direct improvements to people’s everyday lives,” said Minister for Industry, Science and Technology Karen Andrews, launching the paper today.
“But importantly, we need to make sure people are heard about any ethical concerns they may have relating to AI in areas such as privacy, transparency, data security, accountability, and equity. The impact of AI is likely to be widespread and we have an imperative to ensure the best possible outcomes; while the community needs to be able to trust that AI applications are safe, secure and reliable,” she said.
The principles – which the Data61 paper says “should be seen as goals that define whether an AI system is operating ethically” – state that AI systems should “do no harm”, “generate net benefits” and work within local and international laws and regulations. They should also ensure people’s privacy is protected, and not “result in unfair discrimination”.
The principles propose that individuals should know when an algorithm is being used that impacts them and “they should be provided with information about what information the algorithm uses to make decisions”. People should also be able to challenge the use or output of an algorithm that may have impacted them, via an “efficient process”.
Those responsible for creating and implementing AI-based systems “should be identifiable and accountable”, although the paper notes that “developers cannot be expected to bear the responsibility for achieving these outcomes all on their own”.
“With a proactive approach to the ethical development of AI, Australia can do more than just mitigate against risks – if we can build AI for a fairer go, we can secure a competitive advantage as well as safeguard the rights of Australians,” the paper Artificial Intelligence: Australia’s Ethics Framework says.
Following the consultation period, the government will lay out a set of principles and practical measures that organisations and individuals can use “as a guide to ensure their design, development and use of AI meets community expectations”.
Struggling to keep up
A group of Australian business leaders and academics earlier this year called for greater ethical oversight of AI, arguing that “old, dumb law [is] struggling to keep up” with the rapid progress in the field.
Many companies – particularly those considered early adopters of AI – are establishing internal AI ethics committees to appraise their efforts. Insurer IAG in December announced it was founding a research institute with CSIRO’s Data61 and the University of Sydney, with the aim of creating a ‘world where all systems behave ethically’. SAP and Axon have also established committees.
Major technology companies including Microsoft, Facebook and Google have established ethics committees to tackle the issue, with varied results. Yesterday, Google confirmed to Vox that it was pulling the plug on its AI ethics board, just a week after it had been established. The company faced employee petitions calling for the removal of board members based on past comments and involvement in military use of AI.
Late last year, Microsoft published six ethical principles to guide its development of AI. The move followed Google’s sharing of seven similar principles. In September, IBM released a “practical guide” for designers and developers working with AI called Everyday Ethics for Artificial Intelligence covering five areas of focus.