An industry group representing the telecommunications sector has called for a principle focused on securing the data used by artificial intelligence systems to be incorporated into a government-backed ethical framework for AI.
Earlier this year the government launched a consultation on developing an ethical framework for AI, commissioning a discussion paper from the CSIRO’s Data61.
“AI has the potential to provide real social, economic, and environmental benefits – boosting Australia’s economic growth and making direct improvements to people’s everyday lives,” industry, science and technology minister Karen Andrews said at the time.
“But importantly, we need to make sure people are heard about any ethical concerns they may have relating to AI in areas such as privacy, transparency, data security, accountability, and equity.
“The impact of AI is likely to be widespread and we have an imperative to ensure the best possible outcomes; while the community needs to be able to trust that AI applications are safe, secure and reliable.”
The Data61 paper outlined eight core principles that it said “should be seen as goals that define whether an AI system is operating ethically”.
Those principles, elaborated on in the paper (PDF), are: generates net-benefits; do no harm; regulatory and legal compliance; privacy protection; fairness; transparency and explainability; contestability; and accountability.
In its contribution to the consultation, which is being run by the Department of Industry, Innovation and Science, telco group Communications Alliance said that an additional principle focused on security should be incorporated into the framework.
“AI will pose a significant challenge from a cyber security perspective as large volumes of centralised data create a ‘honeypot’ that is likely to be targeted by criminal actors,” the group’s written submission to the consultation said.
“In addition, the power of AI systems is likely to present an attractive target for those who seek to exert control through the use of AI and who wish to manipulate AI systems.”
“AI itself may also facilitate very complex cyber attacks against companies and Government organisations,” the document adds.
“It can be argued that securing the powerful AI that we create must be part of an ethical consideration rather than a mere commercial implication or prerequisite to applying other principles, such as the privacy protection principle.”
Such a principle would also boost the alignment of the ethics framework with other international principles, such as the OECD Principles, the group argues.
The submission takes issue with what it describes as a “relatively broad” definition of AI. The consultation paper defines AI as a “collection of interrelated technologies used to solve problems autonomously and perform tasks to achieve defined objectives without explicit guidance from a human being”.
“Based on this definition, it will be difficult to discern when a certain activity or technology constitutes AI,” Communications Alliance argues – although the group’s submission acknowledges that is a “difficulty that would likely arise with most, if not all, definitions of AI.”
In general, the group says the Data61 paper is “constructive and balanced in tone”. However, it also warns against any over-regulation of AI. Instead, it argues, the government should consider whether existing frameworks and regulations can “accommodate evolving new technologies” and intervene only when there is a clear “failure of markets to produce the desired outcome”.