Fairness & bias
Fairness in CMR segmentation
In many computer vision applications, artificial intelligence (AI) models have been shown to exhibit performance bias against protected groups that were underrepresented in their training data. In this work we investigated whether AI semantic segmentation models can exhibit the same kind of bias. We trained AI models to segment the chambers of the heart from short-axis cine cardiac MR images and found significant racial bias, with worse performance for underrepresented minority races. This bias could lead to higher misdiagnosis rates for heart failure, whose diagnosis is typically based on the patient's ejection fraction as estimated from segmentations of the cardiac structures in cardiac MR.
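To illustrate why segmentation errors matter clinically, ejection fraction (EF) is derived directly from the segmented left-ventricular blood pool at end-diastole and end-systole, so a biased under- or over-segmentation propagates into the EF estimate. The sketch below is a minimal illustration, assuming label 1 denotes the blood pool and an illustrative voxel spacing; it is not the pipeline used in the papers.

```python
import numpy as np

def lv_volume_ml(seg, voxel_spacing_mm):
    """Volume of the left-ventricular blood pool (label 1, an assumed
    labelling convention) from a 3D segmentation, in millilitres."""
    voxel_vol_mm3 = float(np.prod(voxel_spacing_mm))
    return (seg == 1).sum() * voxel_vol_mm3 / 1000.0  # mm^3 -> ml

def ejection_fraction(seg_ed, seg_es, voxel_spacing_mm):
    """EF (%) = (EDV - ESV) / EDV * 100."""
    edv = lv_volume_ml(seg_ed, voxel_spacing_mm)
    esv = lv_volume_ml(seg_es, voxel_spacing_mm)
    return 100.0 * (edv - esv) / edv

# Toy masks: a systematic under-segmentation at either phase would
# shift the estimated EF and hence the heart-failure assessment.
ed = np.zeros((10, 64, 64), dtype=np.int8)
es = np.zeros((10, 64, 64), dtype=np.int8)
ed[2:8, 20:40, 20:40] = 1   # blood pool at end-diastole
es[3:7, 25:35, 25:35] = 1   # smaller blood pool at end-systole
print(f"EF = {ejection_fraction(ed, es, (10.0, 1.25, 1.25)):.1f}%")
```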
T. Lee, E. Puyol-Antón, B. Ruijsink, M. Shi, A. P. King, "A Systematic Study of Race and Sex Bias in CNN-based Cardiac MR Segmentation", Proceedings MICCAI STACOM, 2022. (paper)
E. Puyol-Antón, B. Ruijsink, J. Mariscal-Harana, S. K. Piechnik, S. Neubauer, S. E. Petersen, R. Razavi, P. Chowienczyk, A. P. King, "Fairness in Cardiac Magnetic Resonance Imaging: Assessing Sex and Racial Bias in Deep Learning-Based Segmentation", Frontiers in Cardiovascular Medicine, 2022. (open access paper)
E. Puyol-Antón, B. Ruijsink, S. K. Piechnik, S. Neubauer, S. E. Petersen, R. Razavi, A. P. King, "Fairness in Cardiac MR Image Analysis: An Investigation of Bias Due to Data Imbalance in Deep Learning Based Segmentation", Proceedings MICCAI, 2021. (paper)
Sample segmentations from biased model for different races
Relationship between training set imbalance and segmentation performance: white vs. black
Relationship between training set imbalance and segmentation performance: white vs. Asian
Illustration of bias in brain segmentation models on black and white females
Fairness in Brain MR segmentation
We also investigated the potential for bias in brain MR segmentation. We systematically varied the level of protected-group imbalance in the training set of a FastSurfer segmentation model and observed both sex and race bias in the resulting model's performance. The bias was localised to specific regions of the brain and was stronger for race (white vs. black) than for sex.
S. Ioannou, H. Chockler, A. Hammers, A. P. King, "A Study of Demographic Bias in CNN-based Brain MR Segmentation", Proceedings MICCAI MLCN, 2022. (paper)
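One simple way to quantify the kind of bias studied above is to compare per-group mean segmentation accuracy (e.g. Dice overlap) and report the gap between the best- and worst-performing protected groups. The sketch below is illustrative only, with hypothetical scores and group labels, and is not the exact evaluation protocol of the papers.

```python
import numpy as np

def dice(pred, gt):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def group_bias(scores, groups):
    """Mean score per protected group and the largest between-group gap,
    a simple scalar summary of performance bias."""
    means = {g: float(np.mean([s for s, gr in zip(scores, groups) if gr == g]))
             for g in set(groups)}
    gap = max(means.values()) - min(means.values())
    return means, gap

# Toy example: per-subject Dice scores labelled by (hypothetical) group.
scores = [0.92, 0.91, 0.85, 0.84]
groups = ["white", "white", "black", "black"]
means, gap = group_bias(scores, groups)
print(means, f"gap = {gap:.3f}")
```

A gap near zero suggests comparable performance across groups; repeating this per brain region is one way to localise where the bias occurs.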
Dr King and Dr Puyol-Antón are co-organisers of the first international workshop on Fairness of AI in Medical Imaging. The event was held online on 5th October 2022 and was free to all participants. It attracted more than one hundred attendees and featured four keynote speakers and a panel discussion. You can view recordings of the event at the workshop web site: