New institute wants ‘world where systems behave ethically’

Not-for-profit Gradient Institute will release open source tools and provide training in bid to make machine learning fairer

Ensuring machine learning models are ethical and unbiased is hard. In October it emerged that Amazon.com had scrapped a four-year project to automate its recruitment process.

The company, according to a Reuters report, found the system was not rating candidates for developer and technical roles in a gender-neutral way.

The issue is that methods to make machine learning models fairer are not always intuitive. In a recruitment scenario, a data scientist might believe, understandably, that removing gender as an input to the model would result in a fairer outcome.

But, as a blog post from the Gradient Institute describes, doing so can actually result in outcomes that are much worse for women.

“Some people think that by deliberately withholding data you can end up with fairer outcomes. But you can get a detrimental outcome. That’s one of many examples where an intuitive and reasonable human intuition turns out to be not correct,” explains Bill Simpson-Young, director of the new institute.

“Instead of drawing on naive intuitions, data scientists need to draw on understanding of how bias can seep into AI and how the bias can be reduced,” Simpson-Young added.
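The mechanism is easy to demonstrate on synthetic data. The sketch below is a minimal, hypothetical illustration (not Gradient Institute code) of the simpler half of the point: withholding the gender column does not remove bias when a correlated proxy feature lets the model reconstruct the signal. All features, names, and the data-generating process here are illustrative assumptions.

```python
# Hypothetical sketch: an "unaware" recruitment model with gender
# withheld still produces gendered outcomes via a proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

gender = rng.integers(0, 2, n)  # 1 = woman, 0 = man (synthetic)

# Proxy correlated with gender, e.g. years of uninterrupted
# employment, which historical data may depress for women.
proxy = rng.normal(loc=5.0 - 2.0 * gender, scale=1.0, size=n)

# A genuinely job-relevant feature, independent of gender.
skill = rng.normal(size=n)

# Historical hiring labels carry a bias in favour of men.
hired = (skill + 0.5 * (1 - gender)
         + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

# "Unaware" model: gender column withheld, proxy retained.
X = np.column_stack([proxy, skill])
preds = LogisticRegression().fit(X, hired).predict(X)

print("Predicted hire rate, men:  ", preds[gender == 0].mean())
print("Predicted hire rate, women:", preds[gender == 1].mean())
# The gap persists: withholding the column did not remove the bias.
```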

The Gradient Institute – its name borrowed from the machine learning term for the direction a model follows as it optimises – launched yesterday with the aim of creating a ‘world where all systems behave ethically’.

Bias is just one of a multitude of pitfalls in building ethical machines. The institute will also consider the value of human oversight of machine learning models, and their explainability.

“We also see some mistakes being made by people who think that the fact there’s been some bad decisions made by machines is a sign humans should be making decisions,” Simpson-Young said.

The Gradient Institute is a Sydney-based not-for-profit, staffed by around 10 mathematicians and machine learning experts, with plans to grow its headcount over time. Simpson-Young, formerly director of engineering and design at CSIRO’s Data61, is CEO, with Dr Tiberio Caetano, co-founder of IAG subsidiary Ambiata, in the role of chief scientist.

Tiberio Caetano and Bill Simpson-Young

The institute will undertake research in the emerging field of quantitative fairness and work with the public and private sectors to put the research into practice. It will release open source ethical AI tools that can be adopted and adapted, Simpson-Young said.
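As a hedged illustration of the kind of measure quantitative fairness formalises, the snippet below computes the demographic parity difference: the gap in positive-prediction rates between two groups. The function name and interface are hypothetical, not drawn from the institute's forthcoming tools.

```python
# Illustrative (hypothetical) fairness metric: demographic parity
# difference between two groups of predictions.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between groups 0 and 1."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Toy example: four predictions per group (0 = men, 1 = women).
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.5
```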

Although its work will mostly be “deeply technical,” the group will train both coders and decision-makers on how to build and run ethical AI systems.

“It’s not enough that just the data scientists and developers understand it, it’s critical that the people operating a machine learning system understand it. Now, often the things that are causing the ethical implications are being done down at the code level, but really they should be done at the senior management level too,” Simpson-Young said.

“Those people should be making the decisions about where the trade-offs are being made,” he said.

Improved trust, better experiences

Insurer IAG, a founding partner of the institute along with CSIRO’s Data61 and the University of Sydney, will be an early adopter of its tools. Ensuring ethical AI is the right thing to do, and there is business value in it, according to IAG.

“Ethical AI will improve trust in how automated machines make decisions. IAG hopes to be an early adopter of the techniques and tools the Institute develops so we can provide better experiences for our customers,” said Julie Batch, IAG chief customer officer.

“Leaning into the challenges and opportunities of AI requires considered thinking about fairness and equality. No government or business can do this alone. We need to work together across sectors and we need to do this with urgency,” she added.

That urgency was echoed by Simpson-Young.

“People worry about AI in the future taking over the world, but actually there are many, many consequential decisions being made by machine learning today – whether that’s recruiting decisions or who gets a home loan – all of those decisions are being mediated by machine learning,” Simpson-Young said.

“Machine learning is really good at doing what it tries to do, but people are often specifying and constraining machine learning in ways that are not really very sophisticated. They’re getting what they’re designing but they’re not considering all the ethical considerations of what they’re doing,” he said.
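One concrete reading of “specifying and constraining” a model more carefully is to put the ethical trade-off directly into the training objective. The sketch below is a speculative illustration, not a published Gradient Institute method: a logistic-regression loss with an added penalty, weighted by a hypothetical parameter `lam`, on the gap between groups’ average predicted scores.

```python
# Speculative sketch: logistic regression trained with an explicit
# fairness penalty, so the accuracy/fairness trade-off becomes a
# stated design choice (`lam`) rather than an accident of the data.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_fair(X, y, group, lam=1.0, lr=0.1, steps=2000):
    """Minimise log-loss + lam * (gap in mean predicted score)^2."""
    X, y, group = np.asarray(X), np.asarray(y), np.asarray(group)
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / len(y)      # log-loss gradient
        gap = p[group == 1].mean() - p[group == 0].mean()
        dp = p * (1.0 - p)                 # sigmoid derivative
        dgap = (X[group == 1] * dp[group == 1, None]).mean(axis=0) \
             - (X[group == 0] * dp[group == 0, None]).mean(axis=0)
        grad += lam * 2.0 * gap * dgap     # penalty gradient
        w -= lr * grad
    return w
```

Raising `lam` shrinks the gap at some cost in raw accuracy, which is precisely the kind of trade-off Simpson-Young argues senior management, not just coders, should own.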
