AI ethics is a political issue that businesses need to consider

As interactions with AI become indistinguishable from those with humans, it’s important that we build ethical frameworks into our AI similar to those that intuitively guide human-to-human interactions

This morning, I asked my phone’s smart assistant, “Do you have ethics?” Its response? “The engineers that designed me do. They never let me download illegally.”

Sure, it was a strange question to ask a phone, but it was also an interesting response that made me ponder: what ethics does my virtual assistant have programmed into it? And should those ethics be determined by a few engineers, or by standard guidelines agreed on by the broader community for the benefit of society?

As governments around the world gear up for a future driven by artificial intelligence (AI), it’s clear that whatever the answers to those questions, now is the time to debate the importance of digital ethics.

Ethics as a foundation for trust

How much would you trust a stranger? Would you trust them with your life?

How about a robot?

These were just a few of the questions raised by Australia’s Chief Scientist, Dr Alan Finkel AO, in a recent speech calling for Australia to adopt an operating framework for artificial intelligence – painting an ethical pathway to its widespread adoption.

Finkel noted that while questions of how far you’d trust a robot or a stranger often make us feel uneasy, in reality, we trust strangers with our lives on a daily basis. Take crossing the street as an example – we routinely trust others will follow the road rules.

We’re heading towards a society and economy that relies heavily on trust. Airbnb, Airtasker, Kickstarter, Etsy and Uber are all hugely popular platforms built on an economy of trust. These services rely on principles of reputation, reviews and community rules or guidelines to build up the trust between strangers required to complete a digitally mediated interaction.

As organisations seek to bring artificial intelligence into society across a multitude of business and consumer contexts, the degree to which it is accepted and adopted may depend on the degree to which it can gain trust from citizens.

The rules and principles of platforms like Uber are the foundations of why they work. By formally structuring community guidelines and standards, society has been able to confidently engage in new ways of interacting without the traditional safeguard of personal relationships and judgements of character based on experiences in the real world.

It’s easy to see why applying the same guidelines to AI makes sense. Society’s trust must be built into the foundations of AI platforms and organisations through the rigorous application of digital ethics.

Getting on the front foot and building community-based guidelines for ethical use could be the key to unlocking a more trusting future for AI, one in which society has fewer questions around the mysteries of AI and more interest in how it can be used to make their lives easier.

This sort of action is already underway in New Zealand, where the government is discussing an AI action plan and ethical framework. The New Zealand minister of broadcasting, communications and digital media, Clare Curran, has said the framework would give people the tools they need to participate in conversations about AI and its implications for society and the economy.

To develop trust and a reputation for good, organisations working with AI should get ahead of the impending wave of government-led legislation, debate and discussion by establishing their own frameworks for digital ethics – or risk falling behind.

Establishing a digital ethics framework for organisations

It’s been a huge few years for progressing AI around the world.

In Australia, the 2018-19 budget set aside $29.9 million to strengthen the country’s capability in artificial intelligence and machine learning (ML) – a clear sign that the government is preparing to position Australia for a future in AI leadership. It’s an exciting prospect, given the many demonstrated examples of AI improving the world not only for businesses but, importantly, for their customers.

Earlier this year, Avanade’s CEO, Adam Warby, highlighted three examples of how AI has helped brands create emotional connections with their customers. Behind these ‘emotional connections’ were real improvements in customer experience driven by AI – and in the case of Tesla automatically extending the battery range of its cars in hurricane-affected areas, it created the opportunity for a safer world.

Today, the major players in consumer-facing AI are building intelligent chatbots that can mimic elements of human speech, passing themselves off as human. In an age where our digital interactions with AI are becoming indistinguishable from those with humans, it’s important that we build ethical frameworks into our AI similar to those that intuitively guide human-to-human interactions.

Let’s go back to my smartphone assistant. After that first question, I asked my phone whether it could pass the Turing Test – the well-known test of whether a machine can exhibit behaviour indistinguishable from that of a human.

Its answer is both a good summary of the current state of play, and hopefully a nice indicator of the way artificial intelligence should always be…helpful.

“I don’t mind if you can tell I’m not human. As long as I’m helpful, I’m all good.”

In this exciting – and no doubt scary for some – age, organisations should look to adopt a framework for digital ethics that will ensure their products are both ethical and helpful – not harmful to society.

To achieve this, consider the following three best practices when creating a digital ethics framework for your business:

1. Start with the customer

Before adopting any new technology, start with what’s best for your customers and employees. The question isn’t what’s possible or what’s easy – it starts with your people.

A laser focus on your end user will help shape your digital ethics processes. Starting with the people, not the technology, ensures that the standards and ethics accompanying your digital approach are shaped by what your business can do best for its customers and employees.

2. Data transparency

Digital ethics is no different to business ethics. Your existing code of business ethics, together with compliance with the current legal landscape, is an important starting point for digital ethics.

The big difference lies in the much greater responsibility businesses must take to protect the vast amounts of data and information that are generated and collected in our AI-first reality. Businesses need much more thoughtful processes and much greater awareness at all levels to keep that information from being mishandled.

From the very beginning, organisations must be transparent with customers and employees around how much data is being gathered, how that data is used, how it is treated, and what is and is not acceptable.
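
To make that kind of transparency concrete, one lightweight option is to keep a machine-readable record of what is collected and why, which can be surfaced to customers and audited internally. The sketch below is purely illustrative – the field names and categories are assumptions, not a standard or any vendor’s API – but it shows one way such a disclosure could be structured.

```python
# Hypothetical sketch of a machine-readable data-collection disclosure.
# Field names and categories are illustrative assumptions, not a standard.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataUse:
    purpose: str          # why the data is collected (e.g. "improve speech recognition")
    retention_days: int   # how long it is kept before deletion or anonymisation
    shared_with: List[str] = field(default_factory=list)  # third parties, if any

@dataclass
class Disclosure:
    data_category: str    # e.g. "voice recordings", "purchase history"
    collected: bool
    uses: List[DataUse] = field(default_factory=list)

# Example: what a virtual assistant might disclose to its users.
disclosures = [
    Disclosure(
        data_category="voice recordings",
        collected=True,
        uses=[DataUse(purpose="improve speech recognition", retention_days=90)],
    ),
    Disclosure(data_category="precise location", collected=False),
]

for d in disclosures:
    status = "collected" if d.collected else "not collected"
    print(f"{d.data_category}: {status}")
    for use in d.uses:
        print(f"  - used for '{use.purpose}', retained {use.retention_days} days")
```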

3. Ethics at all levels

Digital ethics requires organisations to maintain a regular dialogue with stakeholders around ethical issues. This means the standards, guidelines and processes around ethics at your organisation are not left solely to software developers or engineers to figure out.

A regular ethics review cycle is critical. When updating policies around customer data protection, for example, regular reviews set clear expectations for customers and inform employees how that information should be treated.

These are the building blocks for getting started with digital ethics, but they are only the beginning. Continuous discussion is required within organisations, and across society, to ensure we take the right course in a world rapidly adopting technology that is getting smarter by the day.

Lourens Swanepoel is Australia data & AI market unit lead at Avanade.