MOTION MODELLING & ANALYSIS GROUP
Artificial Intelligence for Medical Image Analysis
Fairness and bias
One of the group's main focuses is fairness and bias in AI for medical image analysis. See below for details of our research and activities in this important field.
Fairness in CMR segmentation
In many computer vision applications, artificial intelligence (AI) models have been known to exhibit bias in performance against protected groups that were underrepresented in the data used to train them. In this work we investigated whether AI semantic segmentation models can also exhibit such bias. We trained AI models to segment the chambers of the heart from short-axis cine cardiac MR images and found significant racial bias in performance against minority races. This bias could potentially lead to higher misdiagnosis rates for heart failure, whose diagnosis is typically based on the patient's ejection fraction as estimated from segmentations of the cardiac structures in cardiac MR.
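Because the ejection fraction is computed directly from the segmented ventricular volumes, any segmentation bias propagates straight into this diagnostic measure. The following is a minimal Python sketch of that calculation, not our actual pipeline; the voxel spacing, label convention and synthetic masks are illustrative assumptions.

    import numpy as np

    def lv_volume_ml(mask, voxel_dims_mm=(1.8, 1.8, 8.0), lv_label=1):
        # Volume of the left-ventricular blood pool in millilitres.
        # voxel_dims_mm: assumed typical short-axis cine voxel spacing.
        voxel_ml = np.prod(voxel_dims_mm) / 1000.0  # mm^3 -> ml
        return np.sum(mask == lv_label) * voxel_ml

    def ejection_fraction(ed_mask, es_mask):
        # EF (%) = (EDV - ESV) / EDV, computed from end-diastolic and
        # end-systolic segmentation masks.
        edv = lv_volume_ml(ed_mask)
        esv = lv_volume_ml(es_mask)
        return 100.0 * (edv - esv) / edv

    # Synthetic stand-ins for model-predicted masks:
    ed = np.zeros((10, 10, 10), dtype=int)
    ed[2:8, 2:8, 2:8] = 1
    es = np.zeros((10, 10, 10), dtype=int)
    es[3:7, 3:7, 3:7] = 1
    print(f"EF = {ejection_fraction(ed, es):.1f}%")  # ~70%

A model that systematically under-segments the ventricle for one racial group will shift the estimated ejection fractions for that group, and with them the apparent rates of heart failure.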
Publications:
T. Lee, E. Puyol-Antón, B. Ruijsink, K. Aitcheson, M. Shi, A. P. King, "An Investigation Into the Impact of Deep Learning Model Choice on Sex and Race Bias in Cardiac MR Segmentation", Proceedings MICCAI FAIMI, 2023. (paper)
T. Lee, E. Puyol-Antón, B. Ruijsink, M. Shi, A. P. King, "A Systematic Study of Race and Sex Bias in CNN-based Cardiac MR Segmentation", Proceedings MICCAI STACOM, 2022. (paper)
E. Puyol-Antón, B. Ruijsink, J. Mariscal-Harana, S. K. Piechnik, S. Neubauer, S. E. Petersen, R. Razavi, P. Chowienczyk, A. P. King, "Fairness in Cardiac Magnetic Resonance Imaging: Assessing Sex and Racial Bias in Deep Learning-Based Segmentation", Frontiers in Cardiovascular Medicine, 2022. (open access paper)
E. Puyol-Antón, B. Ruijsink, S. K. Piechnik, S. Neubauer, S. E. Petersen, R. Razavi, A. P. King, "Fairness in Cardiac MR Image Analysis: An Investigation of Bias Due to Data Imbalance in Deep Learning Based Segmentation", Proceedings MICCAI, 2021. (paper)
Figure: Sample segmentations from a biased model for different races.
Figure: Relationship between training set imbalance and segmentation performance (white vs. black).
Fairness in brain MR segmentation
Figure: Illustration of bias in brain segmentation models on black and white females.
We also investigated the potential for bias in the brain MR segmentation task. We systematically varied the level of protected group imbalance in the training set of a FastSurfer segmentation model and observed both sex and race bias in the performance of the resulting models. The bias was localised to specific regions of the brain and was stronger for race (white vs. black) than for sex.
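As a rough sketch of this type of experiment (hypothetical names throughout; the actual study trained FastSurfer models on brain MR data), one can draw training subsets with controlled protected-group proportions and compare per-group Dice scores on a fixed, balanced test set:

    import random

    def sample_imbalanced(subjects, group_of, proportion_a, n_total, seed=0):
        # Draw a training subset containing a fixed proportion of group 'A'.
        # subjects: list of subject IDs; group_of: dict ID -> 'A' or 'B'.
        rng = random.Random(seed)
        group_a = [s for s in subjects if group_of[s] == 'A']
        group_b = [s for s in subjects if group_of[s] == 'B']
        n_a = round(proportion_a * n_total)
        return rng.sample(group_a, n_a) + rng.sample(group_b, n_total - n_a)

    def dice(pred, gt):
        # Dice overlap between two binary numpy masks.
        inter = (pred & gt).sum()
        return 2.0 * inter / (pred.sum() + gt.sum())

    # For each proportion in e.g. [0.0, 0.25, 0.5, 0.75, 1.0]: train a model
    # on sample_imbalanced(...), then report the mean Dice separately for
    # each protected group in the test set to expose any performance gap.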
Publications:
S. Ioannou, H. Chockler, A. Hammers, A. P. King, "A Study of Demographic Bias in CNN-based Brain MR Segmentation", Proceedings MICCAI MLCN, 2022. (paper)
Fairness in unsupervised anomaly detection
We have also investigated different sources of bias in AI-based unsupervised anomaly detection (UAD) from brain MR. UAD involves training an AI model to learn the distribution of a training set containing only normal (i.e. healthy) samples; any test sample that falls outside this learned distribution is then considered abnormal, or an anomaly. However, previous work has implicitly assumed that abnormality is the only source of distributional shift. In this work, we found that race and sex, as well as MR scanner vendor, can also cause such shifts, and that UAD models trained on imbalanced data can exhibit bias in a similar way to supervised models.
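The reconstruction-based flavour of UAD can be summarised in a few lines. This is a generic sketch of the principle, not the specific models evaluated in the paper, and the model.predict interface is an assumed autoencoder API.

    import numpy as np

    def anomaly_scores(model, images):
        # A UAD model trained only on 'normal' images reconstructs them
        # well, so per-image reconstruction error serves as an anomaly score.
        recon = model.predict(images)  # assumed autoencoder API
        return np.mean((images - recon) ** 2, axis=(1, 2))

    def is_anomalous(scores, normal_val_scores, percentile=95):
        # Threshold at a high percentile of scores on held-out normal data.
        tau = np.percentile(normal_val_scores, percentile)
        return scores > tau

    # The pitfall studied above: if the 'normal' training set is dominated
    # by one demographic group or scanner vendor, healthy images from other
    # groups lie further from the learned distribution and are more likely
    # to be flagged as anomalies.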
Publications:
C. I. Bercea, E. Puyol-Antón, B. Wiestler, D. Rueckert, J. A. Schnabel, A. P. King, "Bias in Unsupervised Anomaly Detection in Brain MRI", Proceedings MICCAI FAIMI, 2023. (paper)
Figure: Illustration of biases in unsupervised anomaly detection from brain MR for different sources of domain shift.
Fairness in breast cancer characterisation
There has recently been increasing interest in the use of AI to determine characteristics of tumours from radiological images. We have investigated the use of AI for determining breast tumour molecular subtype from dynamic contrast-enhanced MR images (DCE-MRI). We found that a random forest model trained with radiomics features was able to classify the race of the subject (white vs. black) from such data, and furthermore that models for predicting tumour subtype could exhibit race bias when trained with imbalanced data.
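A minimal scikit-learn sketch of this kind of per-group evaluation is given below. The data here are random placeholders; the radiomics feature extraction and exact protocol in the paper differ.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # X: radiomics features per tumour, y: molecular subtype label,
    # race: protected attribute per subject (synthetic placeholders).
    rng = np.random.default_rng(0)
    X = rng.random((200, 50))
    y = rng.integers(0, 2, 200)
    race = rng.choice(['White', 'Black'], 200)

    X_tr, X_te, y_tr, y_te, r_tr, r_te = train_test_split(
        X, y, race, test_size=0.3, random_state=0, stratify=race)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_tr)

    # Fairness check: compare subtype-prediction accuracy per racial group.
    for g in ('White', 'Black'):
        acc = clf.score(X_te[r_te == g], y_te[r_te == g])
        print(f'{g}: accuracy = {acc:.3f}')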
Publications:
M. Huti, T. Lee, E. Sawyer, A. P. King, "An Investigation Into Race Bias in Random Forest Models Based on Breast DCE-MRI Derived Radiomics Features", Proceedings MICCAI FAIMI, 2023. (paper)
Figure: A dynamic contrast-enhanced MR image of a breast cancer patient.
Dr King and Dr Puyol-Antón are founding members and organisers of FAIMI, an independent academic initiative aimed at promoting research into Fairness of AI in Medical Imaging. FAIMI holds a free annual online symposium showcasing the best and latest research into fair AI in medical imaging, and also runs a workshop at the MICCAI conference. Check out the FAIMI website for recordings of past events, details of future events and useful resources on fairness research.