Why Australia has an opportunity to lead the way on AI regulation
- 19 July, 2018 09:46
We are on the cusp of a fourth industrial revolution, with advances in AI continuing at a rapid pace. Yet despite its promise, Australian businesses are hesitant to embrace AI, a reluctance fuelled by misunderstandings and misapprehensions. These have only been exacerbated by recent headlines about the misuse of personal data by tech giants.
Rewiring the either/or mindset on AI
While the business benefits of AI are well documented, the conversation keeps circling back to the implications it will have for our workforce. And rightly so. Too many assume that adopting AI means obliterating jobs, and that discourse has to change.
It is up to those of us leading the AI charge — the startups, developers and data owners — to help change this narrative. There is no denying the adoption of AI will have an enormous impact on our jobs and the way we work, but it’s not a case of either/or. AI will have some enormous benefits, and these need to be raised and discussed as much as the risks and downsides.
We need far more education for business and the community that AI can augment jobs, not just destroy them.
Yes, jobs that traditionally rely on rules, repetition or data will be more efficiently handled by AI algorithms. That is what AI products and machines do best – perform repetitive tasks, analyse huge data sets and handle routine cases.
But in turn, this frees up time for humans to do what they do best – resolve ambiguous issues, solve problems in new and creative ways, exercise judgment with difficult problems and deal with dissatisfied customers. Indeed, we thrive in situations where there is little or no data, while machines excel at the opposite.
Businesses need both of these capabilities working together, and when that is recognised, human-AI collaboration can flourish.
So what’s needed to reset AI attitudes and uptake in Australia? Guiding principles for AI development are a good start. The industry can then not only highlight its capabilities and the benefits it brings to businesses, but how these technologies serve as a natural extension of many workforce functions.
Take Amazon, for example. It has applied AI beyond individual functions, incorporating it into its wider business processes. The inner workings of the company’s fulfilment centres demonstrate this symbiosis on a much larger scale, with robots working in unison with employees to reduce the overall need for manual labour.
And while we are working with existing businesses and their employees, we also need to ensure we have a future-ready workforce entering the market. Institutions and training pathways must adapt to help people gain the relevant skills to navigate this transitional period. Schools and universities will have to reassess the careers and roles they are preparing students for.
The federal government’s $29.9 million investment in AI announced in the 2018 Budget is a small first step in the right direction for nurturing AI. However, further funding for education and research is required if we’re to propel Australia to the forefront of the AI industry.
Protecting and promoting the integrity of AI
Guiding principles for the AI sector and greater education are noble goals, but all is for naught if we don’t address three fundamental aspects of our industry – privacy, transparency and integrity. The Facebook and Cambridge Analytica scandal has only heightened the widely held view that tech giants look out for themselves and their own interests. It has never been more crucial to educate the public and to protect and promote the integrity of the technology.
With the European General Data Protection Regulation (GDPR) now in effect, representing the biggest overhaul of the world's privacy rules in more than 20 years, the issue of data governance is firmly in the spotlight. Now is the time to promote greater transparency in big tech and data.
There is an opportunity for Australia to lead the way on how AI can be adopted and expanded in workplaces and broader society, and also to prevent companies such as Cambridge Analytica from setting precedents for the dangerous and unethical use of the technology. A collaborative approach between startups, academia and government can pave the way for the ethical use of AI.
Crucial to this will be the establishment of a governing body for AI to set minimum industry standards of behaviour. This must involve members of government, the startup community, business and academia.
We are on the cusp of an era that presents enormous opportunity for our country, our economy and our people. We need to mobilise our government, schools, universities and business to join other countries and embrace AI to rethink our world.
Richard Kimber is the CEO and co-founder of Australian AI software company Daisee.