Google's research chief questions value of 'Explainable AI'

Peter Norvig says output of machine learning systems a more useful probe for fairness

Google’s own algorithmic outputs have been accused of bias. A Google image search for ‘hands’ or ‘babies’, for example, displays exclusively white-skinned results. In 2015, the company’s Photos app mistakenly labelled a black couple as ‘gorillas’.

Accusations of racism and sexism have also been levelled at Google’s autocomplete function, which, for example, completed ‘Are jews’ with ‘a race’, ‘white’, ‘Christians’ and ‘evil’.

“These results don’t reflect Google’s own opinions or beliefs,” the company said in response to an Observer story in December, adding the results were merely a “reflection of the content across the web”. In response to a similar story published by the Sydney Morning Herald late last year, the company said “we acknowledge that autocomplete isn't an exact science and we're always working to improve our algorithms”.

There were better ways to avoid bias than looking under the hood of machine learning algorithms, Norvig explained.

"We certainly have other ways to probe because we have the system available to us," he said. "We could say well what if the input was a little bit different, would the output be different or would it be the same? So in that sense there's lots of things that we can probe."

Where we're going

Although checks on outputs might be a satisfactory approach from Google's perspective, individuals and governments are beginning to demand that they, and all entities that employ machine learning, go much further.

Earlier this year, the UK government’s chief scientific adviser wrote in a Wired op-ed: “We will need to work out mechanisms to understand the operations of algorithms, in particular those that have evolved within a computer system's software through machine learning.”

European legislators are making significant efforts in the area to protect individuals. The EU’s General Data Protection Regulation, which will come into force in May 2018, restricts automated decision-making systems which "significantly affect" users. It also creates a "right to explanation," whereby a user can ask for an explanation of an algorithmic decision that was made about them.

Australian businesses that supply or operate in the EU “or that monitor the behaviour of individuals in the EU” may need to comply.

In May, Google said it would “continue to evolve our capabilities in accordance with the changing regulatory landscape” while helping customers do the same.

Despite the significant implications, Norvig welcomed the regulators’ focus.

“I think it’s good that we’re starting to look into what the effects are going to be. I think it’s too early to have the answers,” he said. “I think it’s good that right now, as we start seeing the promise of AI, we’re not waiting; we’re asking the questions today, trying to figure out where we’re going.”
