NAB, the Commonwealth Bank of Australia, Telstra, Microsoft and Flamingo AI have signed up to “trial” principles that form part of the government’s new AI ethics framework.
The government said that the businesses would test the principles “to ensure they deliver practical benefits and translate into real world solutions.”
The government in the 2018-19 budget allocated funding for a number of artificial intelligence initiatives, including the development of an AI ethics framework. Earlier this year it released a paper produced by the CSIRO’s Data61 to help guide the process and spark discussion.
The government has released the results of that consultation, including the set of voluntary AI ethics principles and a guide on how to apply them.
The eight principles are:
- Human, social and environmental wellbeing: Throughout their lifecycle, AI systems should benefit individuals, society and the environment.
- Human-centred values: Throughout their lifecycle, AI systems should respect human rights, diversity, and the autonomy of individuals.
- Fairness: Throughout their lifecycle, AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.
- Privacy protection and security: Throughout their lifecycle, AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.
- Reliability and safety: Throughout their lifecycle, AI systems should reliably operate in accordance with their intended purpose.
- Transparency and explainability: There should be transparency and responsible disclosure to ensure people know when they are being significantly impacted by an AI system, and can find out when an AI system is engaging with them.
- Contestability: When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or output of the AI system.
- Accountability: Those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.
The principles define an AI system as “a collection of interrelated technologies used to solve problems autonomously, and perform tasks to achieve defined objectives, without explicit guidance from a human being.”
Telco group Communications Alliance had called for an additional principle explicitly focused on security.
“We hope to make a meaningful contribution to the discussion, to learn more about how we can leverage AI in an ethical way in order to help deliver new and improved experiences for our customers,” said NAB’s chief data officer, Glenda Crisp, in a statement provided by the government.
“Collaborating with government and across industry drives diversity of thinking which is vital in developing new ways of working and implementing new technologies safely.”
Telstra is “proud to be a part of the AI ethics trial and we look forward to learning from other companies who are also involved,” said the telco’s chief data officer, Noel Jarrett.
“There’s no doubt that AI can improve the experiences of our customers and our employees by making things simpler and easier. We want to make sure that we’re using this technology in the right way from the start, and testing these principles will help guide us as we consider how to best use AI.”
Industry, science and technology minister Karen Andrews said the government is “determined to create an environment where AI helps the economy and everyday Australians to thrive.”
Separately, the New South Wales government is developing its own AI strategy and ethics framework.
Earlier this year, the Department of Defence hosted a gathering to help guide its thinking on the use of AI technologies on the battlefield.