‘Explainable Artificial Intelligence’: Cracking open the black box of AI

Researchers and enterprises want to build deep learning neural networks that can explain their actions to humans

Since December, Amazon has offered three artificial intelligence-based services on its cloud platform: Lex, a deep learning speech recognition and natural-language tool; Polly, a text-to-speech tool; and Rekognition, an image recognition service.

Speaking to Computerworld on Thursday, Gore hinted that there would eventually be some ‘explainable’ element to the offering.

“Right now no. You just put data in and get attributes out,” he said. “As it evolves over time, being able to understand that decision making process to a certain level will be there. So you can try and work out why it may be making a recommendation.”
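
To illustrate the "data in, attributes out" pattern Gore describes, here is a minimal sketch of calling Rekognition through the AWS SDK for Python (boto3). The bucket and image names are hypothetical placeholders, and the code is an assumption about a typical integration rather than anything described in the article: the response lists labels and confidence scores, but says nothing about how the model arrived at them.

```python
# Minimal sketch of the "data in, attributes out" pattern with Amazon Rekognition.
# Assumes AWS credentials are configured; bucket and key names are hypothetical.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Send an image stored in S3 and get labels back.
response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "photo.jpg"}},
    MaxLabels=10,
    MinConfidence=75,
)

# The response contains labels and confidence scores, but no explanation
# of why the model assigned them.
for label in response["Labels"]:
    print(f"{label['Name']}: {label['Confidence']:.1f}%")
```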

Trust in the system

Beyond the potential commercial benefits of AI that can explain, in human terms, how it has reached a decision, there is also a societal need for such systems.

Our lives will be increasingly influenced by deep learning algorithms, from those with immediate consequences for human safety, such as medical diagnosis systems or driverless car autopilots, to AI built into larger systems that could determine our credit rating, insurance premium or opportunity for promotion.

“It’s incredibly easy to be seduced by the remarkable nature of the technology that is coming. And it is remarkable,” ANU anthropologist and Intel senior fellow Genevieve Bell told Wednesday’s AIIA summit.

“What is coming is amazing. Some of that tech is provocative and remarkable and delightful. Having humans in the middle both as the objects and subjects and regulators of that technology is the most important and in some ways the hardest thing to do.”

The Institute of Electrical and Electronics Engineers is considering the issue with its Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. Its Ethically Aligned Design standards guide suggests that systems must be accountable and transparent.

“For users, transparency is important because it builds trust in the system, by providing a simple way for the user to understand what the system is doing and why,” it reads.

One notable suggestion, set out in the standards for physical robots, is exactly what AI needs: that every robot be fitted with a button marked 'why-did-you-do-that?'.
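
A software equivalent of that button would attach a simple attribution to each individual decision. The sketch below is one illustrative approach, not drawn from the IEEE guide or from DARPA's XAI research: a model-agnostic, leave-one-feature-out probe that reports which inputs most changed the model's score for a single prediction. The toy credit model, its weights and feature names are all hypothetical.

```python
# Illustrative per-decision "why-did-you-do-that?" probe.
# Model-agnostic: `predict` can wrap any scoring function (e.g. a credit model).
from typing import Callable, Dict

def explain_decision(predict: Callable[[Dict[str, float]], float],
                     instance: Dict[str, float],
                     baseline: Dict[str, float]) -> Dict[str, float]:
    """For each feature, replace its value with a neutral baseline and record
    how much the model's score changes. A larger absolute change means the
    feature mattered more for this particular decision."""
    original_score = predict(instance)
    contributions = {}
    for feature in instance:
        perturbed = dict(instance)
        perturbed[feature] = baseline[feature]
        contributions[feature] = original_score - predict(perturbed)
    # Sort by magnitude so the most influential features come first.
    return dict(sorted(contributions.items(), key=lambda kv: -abs(kv[1])))

# Usage with a toy credit-scoring function (hypothetical weights and features).
def toy_credit_model(x: Dict[str, float]) -> float:
    return 0.5 * x["income"] - 0.8 * x["missed_payments"] + 0.2 * x["years_employed"]

applicant = {"income": 0.6, "missed_payments": 0.3, "years_employed": 0.4}
baseline = {"income": 0.0, "missed_payments": 0.0, "years_employed": 0.0}

for feature, contribution in explain_decision(toy_credit_model, applicant, baseline).items():
    print(f"{feature}: {contribution:+.2f}")
```

XAI research aims well beyond this kind of crude probe, but even a simple attribution gives a user something concrete to interrogate when a system's decision seems wrong.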

Correction: The article originally reported that PARC was working with DARPA on XAI research. DARPA's XAI research is being conducted independently of PARC.
