Does AI have to be a privacy nightmare?

AI is a privacy minefield, but the technology could also have privacy-enhancing aspects, argues an issues paper released by Victoria’s privacy watchdog

Although artificial intelligence (AI) presents challenges to many of the key principles that underpin privacy legislation, it’s not a foregone conclusion that AI has to usher in a data-driven Orwellian nightmare.

An issues paper (PDF) on AI and privacy issued by Victoria’s privacy watchdog — the Office of the Victorian Information Commissioner (OVIC) — argues that the increased use of AI technologies does not mean privacy will suddenly become irrelevant.

In fact, it is possible to imagine scenarios in which AI can be a privacy enabler, OVIC argues: “For instance, it is likely to mean that less people will actually need access to raw data in order to work with it, which could in turn minimise the risk of privacy breaches due to human error.”

“It could also empower more meaningful consent, in which individuals receive personalised services dependent on privacy preferences that have been learnt over time.”

Although the growth of “big data” technology foreshadowed some of the potential privacy implications of AI, AI’s ability to learn and adapt, and its frequently opaque processes, create a range of additional challenges.

AI can also alter the privacy implications of existing technologies, such as CCTV camera networks deployed in public spaces, the OVIC paper argues — for example, by using them in conjunction with a facial recognition system.
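To see how small a step that pairing is, consider a minimal sketch using the open-source face_recognition Python library. The filenames and the one-person watchlist are hypothetical placeholders, and this illustrates the general technique only, not how any deployed government system works.

```python
# Illustrative sketch only: matching faces in a single CCTV frame against
# a watchlist photo. Filenames are hypothetical placeholders.
import face_recognition

# Encode a reference photo of a known person into a 128-dimension embedding.
known_image = face_recognition.load_image_file("known_person.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Detect and encode every face visible in one CCTV frame.
frame = face_recognition.load_image_file("cctv_frame.jpg")
frame_encodings = face_recognition.face_encodings(frame)

# A "match" means the distance between embeddings falls below the
# library's default tolerance of 0.6.
for i, encoding in enumerate(frame_encodings):
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    print(f"Face {i}: {'match' if match else 'no match'}")
```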

The federal government, with the support of the state and territory governments, has committed to rolling out a national system that will provide facial recognition and facial verification services.

Human rights and privacy groups have warned that the system stands to have an unprecedented impact on Australians’ privacy and could provide “the legal infrastructure to enable mass surveillance”.

The OVIC paper argues that AI is also helping to further blur the distinction between “what is and is not ‘personal information’”.

“The increased emergence of AI is likely to lead to an environment in which all information that is generated by or related to an individual is identifiable,” it argues.

There have been some noteworthy data privacy missteps in recent years, including the Department of Health’s release of supposedly de-identified data. Elements of the data, drawn from the Pharmaceutical Benefits Scheme and the Medicare Benefits Schedule, were successfully re-identified by researchers, leading the government to push for new laws that would criminalise re-identification.
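The technique at the heart of that episode is a linkage attack: joining supposedly anonymous records to outside knowledge through quasi-identifiers such as postcode, birth year and sex. The minimal sketch below, using pandas and entirely fabricated data and column names, shows how few attributes it can take to single someone out.

```python
# Linkage-attack sketch with fabricated data: quasi-identifiers left in a
# "de-identified" dataset can still single out one record.
import pandas as pd

# De-identified records: names removed, quasi-identifiers retained.
health = pd.DataFrame({
    "postcode":   ["3000", "3000", "3121", "3121"],
    "birth_year": [1975, 1982, 1975, 1990],
    "sex":        ["F", "M", "F", "M"],
    "medication": ["drug_a", "drug_b", "drug_c", "drug_d"],
})

# Auxiliary knowledge about a specific person, e.g. from public sources.
target = {"postcode": "3121", "birth_year": 1975, "sex": "F"}

# If the quasi-identifiers match exactly one row, that person is
# re-identified and the sensitive attribute is exposed.
matches = health[
    (health["postcode"] == target["postcode"])
    & (health["birth_year"] == target["birth_year"])
    & (health["sex"] == target["sex"])
]
if len(matches) == 1:
    print("Re-identified; medication:", matches.iloc[0]["medication"])
```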

(The federal government recently earmarked $65 million for initiatives designed to give individuals and small businesses greater control over their personal data as well as make more data sets available for businesses and researchers.)

Collection, purpose and use

OVIC’s paper argues that AI “fundamentally challenges” three of the key principles that underpin the 1980 OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data, which helped guide the development of a number of information privacy frameworks, including Victoria’s.

Those principles are limiting the collection of personal information to only what is necessary, specifying the purpose of collecting personal information, and limiting the use of personal information only to the purpose for which it was collected.

“Moving forward, our understanding of AI and privacy could benefit from shifting the focus from the collection aspect of information privacy, toward emphasising safeguards to ensure information is handled ethically and responsibly once it is obtained,” the paper states.

“Attempts to control or limit collection of data are likely to become increasingly futile as data-collecting technology becomes ubiquitous,” the OVIC paper argues.


“As such, shifting the emphasis toward 'ethical data stewardship' over data once it is collected may be more worthwhile,” it adds. “This would require a genuine commitment to transparency and accountability through good governance practices.”

(It’s worth noting that the EU’s GDPR does, however, impose limitations on the collection of data.)

The government recently committed $29.9 million over four years to boosting Australia’s AI and machine learning capabilities. In addition to aiding the development of a technology roadmap and standards framework, some of the funding will be used to produce a national AI Ethics Framework.

In New Zealand, the use of algorithms and AI by government agencies is currently the subject of a government review.

“The government is acutely aware of the need to ensure transparency and accountability as interest grows regarding the challenges and opportunities associated with emerging technology such as artificial intelligence,” NZ’s digital services minister, Clare Curran, said last month when she announced the review.

“Government has an important role to play in creating an environment in which a commitment to developing safe and fair AI can be balanced with technological progress,” OVIC argues.

“Leveraging existing information privacy frameworks, as well as re-imagining traditional concepts, will be a key component in building, using and regulating AI,” the paper concludes.

Analyst firm Gartner has predicted that global business value derived from AI will reach US$1.2 trillion this year, an increase of 70 per cent on 2017. By 2022, Gartner expects the figure to grow to US$3.9 trillion.
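(Taken together, and assuming annual compounding, those figures imply a 2017 base of roughly US$0.7 trillion (1.2 ÷ 1.7) and a compound annual growth rate of about 34 per cent from 2018 to 2022, since (3.9 ÷ 1.2)^¼ ≈ 1.34.)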
