Senior APAC Amazon Web Services executives have said the company does consider the ethics of artificial intelligence and machine learning but doesn't publicise its efforts because "shouting loudest isn’t the best strategy".
While its rivals Microsoft, Google and IBM have laid out the ethical principles guiding their work on AI, AWS tends to put the onus on its customers to use the technologies appropriately, at least publicly.
“We do similar things; we just don’t talk that much about it. Because I don’t think there’s value in just saying ‘hey we’re doing something’ – what are you doing?” the cloud giant’s head of emerging technologies for the region, Olivier Klein, told Computerworld.
Late last year, Microsoft published six ethical principles to guide its own development of AI. The move followed Google’s sharing of seven similar principles in June. In September, IBM released a “practical guide” for internal and external developers working with AI called Everyday Ethics for Artificial Intelligence covering five areas of focus.
AWS offers some best practice advice relating to its customers' use of data, but has stopped short of laying out its own guiding principles. It is up to clients to decide whether their use of AWS tools is ethical, said the company's head of solution architecture in ANZ, Dr Peter Stanski.
“We certainly don’t want to do evil; everything we’ve released to customers to innovate [helps] to lift the bar on what’s actually happening in the industry. It’s really up to the individual organisation how they use that tech,” he told Computerworld.
“Whether it’s AI or other things it comes back to the actual end user,” he said.
In any case ethical boards and guiding principles often failed, Stanski added, referring to the cancellation of Google’s AI ethics board due to controversy over a number of its members.
“When you look at some of those initiatives, the wheels are falling off some of those initiatives unfortunately, because it’s just too early,” he said.
“The reality is going to be – customers will figure out what is applicable and what’s not,” Stanski added.
Klein pointed to AWS’s involvement in external ethical AI groups. Amazon was a founding member of Partnership on AI, an industry consortium focused on establishing best practices for AI systems.
“I do truly believe there are very few bad intentions and like any piece of tech you can always misuse it… That’s not to say we don’t have responsibility in making sure our platform is appropriately used and we do understand and assume that responsibility,” Klein said.
Both executives, speaking to Computerworld at the AWS summit this week in Sydney, emphasised that the company adhered to the laws of every country it operated in, adding that its Acceptable Use Policy prohibited customers from violating the law. This is the same response AWS CEO Andy Jassy gave when questioned on machine learning ethics by media at the AWS re:Invent conference in Las Vegas in December.
AWS’s managing director for ANZ Paul Migliorini said if a customer was found to be using the company’s AI products to break the law, the company would “look at that really closely”.
“But we haven’t seen any scenario where that’s happened here,” he added.
In February AWS called for more focus on AI from policymakers and legislators, specifically regarding facial recognition, a core Rekognition capability. This followed Microsoft’s call for facial recognition regulation in a July blog post. Microsoft’s president and chief legal officer Brad Smith – during a March trip to Australia – claimed a world without any legal regulation on the technology would be “like a day out of the book 1984”.
In March, a group of leading AI researchers from industry and academia signed an open letter calling on Amazon to “stop selling Rekognition to law enforcement”.
The letter, signed by experts from academia and companies including Google, DeepMind, Facebook, Intel and Microsoft, claimed AWS’s facial analysis and facial recognition tool Rekognition “exhibits gender and racial bias for gender classification”.
In a response to the letter, AWS called the work “misleading”.
“In the two-plus years we’ve been offering Amazon Rekognition, we have not received a single report of misuse by law enforcement,” the company’s head of global public policy Michael Punke wrote.
AWS has said Rekognition – which launched in 2016 – is used by the Washington County Sheriff’s Office and the City of Orlando, Florida.
Last year the company reportedly met with US Immigration and Customs Enforcement officials to promote the product, prompting a letter circulated among AWS staff calling on Amazon CEO Jeff Bezos to stop selling the Rekognition tool. Some shareholders have since made similar demands.
Klein said that for facial recognition and other machine learning based products, AWS worked hard to educate its customers on best practices.
“It’s much more around education to customers. I would argue customers generally don’t have bad intentions, they just sometimes might not know exactly how to apply those best practices,” he said.
Although there should be “enforcement in doing what is right”, Klein said, “…it’s much more the education around how do you utilise this correctly and how can you build services and functionalities that make it easier for you to do what is right for your end customer”.