MOTION MODELLING & ANALYSIS GROUP

Artificial Intelligence for Medical Image Analysis

Fairness and bias

One of the group's main focuses is fairness and bias in AI for medical image analysis. See below for details of our research and activities in this important field.

Fairness in CMR segmentation

In many computer vision applications, artificial intelligence (AI) models have been shown to exhibit performance bias against protected groups that were underrepresented in the data used to train them. In this work we investigated whether AI semantic segmentation models can exhibit similar bias. We trained AI models to segment the chambers of the heart from short-axis cine cardiac MR images and found significant racial bias, with lower performance for underrepresented minority races. This bias could potentially lead to higher misdiagnosis rates for heart failure, whose diagnosis is typically based on the patient's ejection fraction as estimated from segmentations of the cardiac structures from cardiac MR.
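
As a rough illustration of how such bias can be quantified, the sketch below computes a per-subject Dice overlap between predicted and manual segmentations and compares the mean score across protected groups. It is not the group's actual evaluation pipeline; the masks, group labels and White/Black grouping are illustrative assumptions.

# Minimal sketch: per-group Dice scores as a simple probe for segmentation bias.
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice overlap between a predicted and a ground-truth binary mask."""
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def per_group_dice(preds, gts, groups):
    """Mean Dice per protected group, e.g. {'White': 0.93, 'Black': 0.89}."""
    scores = {}
    for pred, gt, group in zip(preds, gts, groups):
        scores.setdefault(group, []).append(dice_score(pred, gt))
    return {g: float(np.mean(s)) for g, s in scores.items()}

# Toy example with random masks; in practice these would be the model's
# cardiac chamber segmentations and the corresponding manual contours.
rng = np.random.default_rng(0)
preds = [rng.integers(0, 2, (64, 64)) for _ in range(6)]
gts = [rng.integers(0, 2, (64, 64)) for _ in range(6)]
groups = ["White", "White", "White", "Black", "Black", "Black"]
print(per_group_dice(preds, gts, groups))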

Figure: Sample segmentations from a biased model for different races.

Figure: Relationship between training set imbalance and segmentation performance (white vs. black).

Fairness in brain MR segmentation

Figure: Illustration of bias in brain segmentation models for black and white females.

We also investigated the potential for bias in the brain MR segmentation task. We systematically varied the level of protected group imbalance in the training set of a FastSurfer segmentation model and observed both sex and race bias in the performance of the resulting models. The bias was localised to specific regions of the brain and was stronger for race (white vs. black) than for sex.
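
The sketch below illustrates one way such an experiment can be set up: sampling training subsets with a controlled proportion of each protected group before training the segmentation model. It is a minimal sketch rather than the actual experimental code; the subject IDs, attribute labels and split proportions are illustrative assumptions.

# Minimal sketch: building training sets with controlled protected-group imbalance.
import numpy as np

def sample_imbalanced_subset(subject_ids, attributes, target_fraction,
                             group_a, group_b, subset_size, seed=0):
    """Sample `subset_size` subjects, a `target_fraction` of them from group_a
    and the remainder from group_b."""
    rng = np.random.default_rng(seed)
    ids = np.asarray(subject_ids)
    attrs = np.asarray(attributes)
    n_a = int(round(target_fraction * subset_size))
    n_b = subset_size - n_a
    chosen_a = rng.choice(ids[attrs == group_a], size=n_a, replace=False)
    chosen_b = rng.choice(ids[attrs == group_b], size=n_b, replace=False)
    return np.concatenate([chosen_a, chosen_b])

# Example: training sets ranging from all-white to balanced.
subjects = [f"sub-{i:03d}" for i in range(200)]
race = ["white"] * 100 + ["black"] * 100
for frac in (1.0, 0.75, 0.5):
    subset = sample_imbalanced_subset(subjects, race, frac, "white", "black", 40)
    print(f"{int(frac * 100)}% white:", len(subset), "subjects")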

Fairness in unsupervised anomaly detection

We have also investigated different sources of bias in AI-based unsupervised anomaly detection (UAD) from brain MR. UAD involves training an AI model to learn the distribution of a training set containing only normal (healthy) data; any sample that falls outside this distribution is then considered abnormal, i.e. an anomaly. However, previous work has implicitly assumed that pathology is the only possible source of distribution shift between the training data and new samples. In this work, we found that race and sex, as well as MR scanner vendor, can also cause such shifts, and that UAD models trained on imbalanced data can exhibit biases in a similar way to supervised models.
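
As a rough illustration of the reconstruction-error principle behind UAD, the sketch below fits a model to healthy data only and flags test samples whose reconstruction error exceeds a threshold derived from the healthy distribution. PCA stands in for the deep generative models typically used in practice, and the feature vectors, threshold choice and group-wise evaluation are illustrative assumptions.

# Minimal sketch: reconstruction-error anomaly detection trained on healthy data only.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
healthy_train = rng.normal(0.0, 1.0, size=(200, 32))    # "normal" training data
test = np.vstack([rng.normal(0.0, 1.0, size=(20, 32)),  # healthy test samples
                  rng.normal(3.0, 1.0, size=(20, 32))])  # anomalous samples

model = PCA(n_components=8).fit(healthy_train)

def recon_error(x):
    """Per-sample mean squared reconstruction error under the fitted model."""
    recon = model.inverse_transform(model.transform(x))
    return ((x - recon) ** 2).mean(axis=1)

# Threshold: e.g. the 95th percentile of error on the healthy training set.
threshold = np.percentile(recon_error(healthy_train), 95)
is_anomaly = recon_error(test) > threshold
print("flagged as anomalous:", int(is_anomaly.sum()), "of", len(test))
# To probe bias, the same threshold would be evaluated separately for each
# race / sex / scanner-vendor subgroup and the detection rates compared.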

Figure: Illustration of biases in unsupervised anomaly detection from brain MR for different sources of domain shift.

Fairness in breast cancer characterisation

There has recently been increasing interest in the use of AI to determine characteristics of tumours from radiological images. We have investigated the use of AI to determine breast tumour molecular subtype from dynamic contrast-enhanced MR (DCE-MR) images. We found that a random forest model trained on radiomics features was able to classify the race of the subject (White vs. Black) from such data, and furthermore that models for predicting tumour subtype could exhibit race bias when trained on imbalanced data.
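
The sketch below shows, with synthetic data, the kind of experiment described above: a random forest trained on radiomics-style features and evaluated with cross-validation to test whether a label such as race (or molecular subtype) can be predicted from them. The feature values, sample sizes and labels are illustrative assumptions rather than the study's actual data or code.

# Minimal sketch: can a random forest predict race from radiomics-style features?
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_features = 120, 50             # illustrative sizes
radiomics = rng.normal(size=(n_subjects, n_features))
race = rng.integers(0, 2, size=n_subjects)   # 0 = White, 1 = Black (synthetic)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, radiomics, race, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.2f}")
# Accuracy well above chance on real data would indicate that race information
# is encoded in the radiomics features, a precondition for the bias observed in
# subtype-prediction models trained on imbalanced data.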

Figure: A dynamic contrast-enhanced MR image of a breast cancer patient.

Dr King and Dr Puyol-Antón are founding members and organisers of FAIMI, an independent academic initiative aimed at promoting research into Fairness of AI in Medical Imaging. FAIMI holds a free annual online symposium to showcase the best and latest research into fair AI in medical imaging, and also runs a workshop at the MICCAI conference. Check out the FAIMI website for recordings of events, details of future events and useful resources on fairness research.