Australian companies are split over whether they or the providers of artificial intelligence solutions should be responsible for its outcomes, according to a LivePerson-commissioned survey by research firm Fifth Quadrant.
The survey of 562 IT, customer experience and digital decision makers in Australian businesses with at least 20 employees, conducted last month, found 46 per cent believed accountability for AI outcomes lay with the company that developed the AI, while the same proportion believed it lay with the business deploying it.
Within their organisations, respondents said their company’s leadership, including the C-suite (34 per cent) and board of directors (34 per cent), were most likely to have ultimate accountability for the decisions made by AI systems.
Despite this, only 40 per cent of Australian businesses surveyed had AI standards or guidelines in place. Some 15 per cent were developing such guidelines internally, with 13 per cent looking to global tech companies for inspiration and 12 per cent seeking prompts from local tech companies.
Most businesses felt that the responsibility for setting and enforcing AI ethics and principles in Australia should sit with the government (40 per cent), or an independent Australian body (25 per cent).
“I would like to reiterate to other business leaders the need to foster an ‘ethical AI mindset’. This starts with developing a well-defined ethical AI strategy, as without this, AI will become the next digital technology that divides us,” said LivePerson CEO and founder Rob Locascio.
“Ultimately, the technology industry, business and government must work together to right the future,” he added.
In April, CSIRO’s Data61 launched a discussion paper to “encourage a conversation on how the nation develops and uses AI” covering the benefits and risks of the increasingly ubiquitous technology.
The paper proposes eight principles “to guide organisations in the use or development of AI systems”, feedback on which is being sought until the end of this month.
A group of Australian business leaders and academics earlier this year called for greater ethical oversight of AI, arguing that “old, dumb law [is] struggling to keep up” with the rapid progress in the field.
“It’s really a checkpoint to make sure that, if you’ve agreed this is how we’re going to operate as an organisation – for example we’re not going to use post codes because they discriminate against certain races or demographics – then the ethics committee needs to make sure that ethos is being held up and followed and nobody’s going ‘well we can make a quick buck let’s just do it’,” Accenture APAC’s AI delivery leader Amit Bansal explained to CIO Australia earlier this year.
A separate survey by SAS, Accenture and Intel found that out of the Australian companies that have already adopted AI in some form, 72 per cent have established such a committee (slightly higher than the global average of 70 per cent).
Australian businesses are taking other actions to minimise the potential risks of AI on society, the LivePerson survey found.
More than a third were conducting risk assessments, with a similar number monitoring industry standards and conducting ethics training for employees. Best practice guidelines were being written by 30 per cent of respondents, with a similar number conducting impact assessments.
When it came to businesses’ top concerns about AI, respondents cited the potential for AI technologies to fall into the wrong hands, loss of privacy and unauthorised access to data.
“We’re on the cusp of a new era. AI will transform people’s lives by making technology more efficient, easier to use, reliable and capable, opening up tremendous human potential. Progressive companies are already starting to achieve this,” said Locascio.
“However, business leaders should approach AI with their eyes wide open, looking at practical and proactive measures to ensure ongoing ethical implementation that results in the best outcomes for customers,” he added.