How to maintain the ethical use of technology in healthcare

The growing use of machine learning raises ethical challenges for the health sector

In the last few years, advances in both the software and hardware used for machine learning have made it possible to build predictive models from very large volumes of data.

In healthcare, we hold large collections of semi-structured data such as laboratory results, radiology reports and admission records. Modern machine learning techniques allow us to combine these disparate data sources to build AI models in ways that earlier statistical techniques could not.

However, the ability to pour large volumes of data into essentially ‘black box’ algorithms has significant ethical and equity implications, many of which are manageable but require significant effort to address.

In the race to market among AI startups, these considerations are often left unaddressed: many models are built from whatever data is readily available rather than from purpose-designed data collection, which can introduce bias and error.

These ethical dilemmas exist across all industries. Amazon, for example, abandoned an automated hiring tool it had developed over fears it would reinforce gender imbalances, and significant bias has been found in the legal system’s predictive models that provide decision support for setting bail.

To maintain the ethical use of technology in the healthcare sector, we must consider the following.

The limitations of descriptive models

Machine learning models, particularly the deep neural networks popular today, are “descriptive models”: they describe the large data sets used to train them. Deep learning methods offer no straightforward way to incorporate prior expert knowledge, which means every prediction rests entirely on the underlying data.

As a result, these models can reinforce any biases present in the data, for example by learning systemically poor practices or by producing less accurate predictions for minority populations that are under-represented in the training set. Without critical analysis and curation of the input data used to build models, AI risks perpetuating structural disadvantage and discrimination.
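
One practical safeguard is to audit a model’s performance separately for each patient subgroup rather than relying on a single overall figure. The sketch below assumes a fitted binary classifier `model` and a pandas DataFrame `df` with a hypothetical demographic column; all names are illustrative, not details of any particular system.

```python
# A minimal sketch of a per-subgroup audit, assuming a fitted binary
# classifier `model` and a pandas DataFrame `df` with a hypothetical
# demographic column "group". All names are illustrative.
import pandas as pd
from sklearn.metrics import precision_score, recall_score

def audit_by_subgroup(model, df, feature_cols, label_col="label", group_col="group"):
    """Report sensitivity and precision separately for each subgroup."""
    rows = []
    for group, subset in df.groupby(group_col):
        preds = model.predict(subset[feature_cols])
        rows.append({
            "group": group,
            "n": len(subset),
            "sensitivity": recall_score(subset[label_col], preds),
            "precision": precision_score(subset[label_col], preds),
        })
    # Large gaps between subgroups are a red flag for under-representation
    # or systematically biased labels in the training data.
    return pd.DataFrame(rows)
```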

Brittleness

Machine learning models, and in particular deep learning models, are “brittle”. This is because their performance can deteriorate if there are subtle changes in the input data or if there is missing data.
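
A simple way to probe for this brittleness, assuming a fitted scikit-learn-style classifier and a numeric test set (both hypothetical here), is to re-evaluate the model while randomly blanking out a growing fraction of input values, mimicking the missing fields common in real clinical feeds:

```python
# A minimal sketch of a brittleness check. `model`, `X_test` (a float
# NumPy array) and `y_test` are assumed to exist; the model is assumed to
# tolerate NaNs (e.g. a pipeline with an imputer, or scikit-learn's
# HistGradientBoostingClassifier, which handles missing values natively).
import numpy as np
from sklearn.metrics import accuracy_score

def accuracy_under_missingness(model, X_test, y_test,
                               missing_rates=(0.0, 0.05, 0.1, 0.2), seed=0):
    rng = np.random.default_rng(seed)
    results = {}
    for rate in missing_rates:
        X_degraded = X_test.copy()
        X_degraded[rng.random(X_degraded.shape) < rate] = np.nan
        results[rate] = accuracy_score(y_test, model.predict(X_degraded))
    # A steep drop between adjacent rates is a warning sign of brittleness.
    return results
```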

To make matters worse, these models are poor at signalling when they are unsure of a prediction, so they can produce confidently wrong answers.
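
One common mitigation, sketched below, is to treat the model’s own probability estimate as a gate and refer low-confidence cases to a clinician. The threshold is an illustrative placeholder, and in practice the raw probabilities would first need calibration (for example with scikit-learn’s CalibratedClassifierCV), since uncalibrated scores are often overconfident.

```python
# A minimal sketch of abstention: accept the model's answer only when its
# (calibrated) probability estimate clears a threshold, otherwise refer the
# case to a clinician. The 0.8 threshold is an illustrative placeholder.
def predict_or_defer(model, X, confidence_threshold=0.8):
    proba = model.predict_proba(X)              # shape: (n_samples, n_classes)
    confidence = proba.max(axis=1)
    labels = proba.argmax(axis=1).astype(object)
    labels[confidence < confidence_threshold] = "refer_to_clinician"
    return labels, confidence
```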

This is further exacerbated by the difficulty of extracting meaningful explanations from deep learning models: patients can be exposed to clinical risk because the healthcare professional has no way of telling when to override the predictive model.
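
Model-agnostic techniques can at least recover a global picture of what a black-box model relies on. The sketch below uses scikit-learn’s permutation importance; it shows which inputs matter to the model overall, not why it made a particular prediction for a particular patient, and the feature names are assumed.

```python
# A minimal, model-agnostic sketch using scikit-learn's permutation
# importance: it reports which inputs the model leans on globally by
# measuring how much shuffling each feature hurts validation performance.
# It is not a per-patient explanation. Feature names are assumptions.
from sklearn.inspection import permutation_importance

def top_features(model, X_val, y_val, feature_names, k=5):
    result = permutation_importance(model, X_val, y_val,
                                    n_repeats=10, random_state=0)
    ranked = sorted(zip(feature_names, result.importances_mean),
                    key=lambda pair: pair[1], reverse=True)
    return ranked[:k]  # the k features the model depends on most
```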

Accuracy vs harm

Deep learning rose to fame by decisively beating previous techniques in competitions such as object identification in internet images and speech recognition. That mindset persists in the machine learning ecosystem, with fierce competition to improve accuracy.

In healthcare, however, accuracy is not the most important metric. For example, a model used to identify pathology on X-rays may be accurate overall, but in the few cases where it is wrong it can interpret non-malignant changes as cancer and trigger dangerous investigations or treatments for patients.

Healthcare models need to be designed to minimise the harm of incorrect predictions, not just to maximise raw accuracy.
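
One way to make that concern concrete, again as a sketch rather than clinical guidance, is to score a binary screening model against an asymmetric cost matrix in which a missed cancer is weighted far more heavily than a false alarm. The specific cost values here are illustrative placeholders.

```python
# A minimal sketch of harm-weighted evaluation for a binary screening model:
# score predictions against an asymmetric cost matrix in which a missed
# cancer (false negative) costs far more than a false alarm (false positive).
# The cost values are illustrative placeholders, not clinical guidance.
from sklearn.metrics import confusion_matrix

def expected_harm(y_true, y_pred, cost_fp=1.0, cost_fn=20.0):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return (fp * cost_fp + fn * cost_fn) / len(y_true)  # average harm per case
```

Two models with identical accuracy can score very differently on a measure like this, which is the point: the metric should reflect the clinical consequences of each kind of error.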

These technological challenges are solvable, but they require new approaches to training, deploying and monitoring AI models in healthcare settings. Unfortunately, many of these problems are subtle and not obvious to the users of these models, so it’s vital to establish best-practice recommendations for how models are built and how they are monitored in production.
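
Monitoring, in particular, can start simply. The sketch below compares the distribution of each input feature in live data against the training data with a two-sample Kolmogorov–Smirnov test; the p-value threshold and feature list are assumptions for illustration, and flagged features would warrant human investigation rather than automatic action.

```python
# A minimal sketch of post-deployment monitoring: compare each feature's
# distribution in live data against the training data with a two-sample
# Kolmogorov-Smirnov test. The p-value threshold and feature list are
# illustrative assumptions; drifted features warrant investigation.
from scipy.stats import ks_2samp

def drift_report(train_df, live_df, feature_cols, p_threshold=0.01):
    drifted = []
    for col in feature_cols:
        stat, p_value = ks_2samp(train_df[col].dropna(), live_df[col].dropna())
        if p_value < p_threshold:
            drifted.append((col, round(stat, 3)))
    return drifted  # features whose live distribution has shifted
```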

Prior to co-founding Alcidion in 2000, Dr Malcolm Pradhan was the associate dean of IT and director of medical informatics, University of Adelaide.

During his time at the university, Dr Pradhan conducted research into applications of clinical decision support and into the optimal use of a variety of statistical and probabilistic methods for delivering it. He was also active in the Australian health informatics community as a founding fellow of the Australasian College of Health Informatics.

Dr Pradhan has also served as a clinical lead within the Australian government’s National eHealth Transition Authority (NeHTA), where he provided guidance on the design and development of the Personally Controlled eHealth Record System, as well as on a number of the definitional documents that underpin NeHTA’s healthcare interoperability standards.
