MOTION MODELLING & ANALYSIS GROUP

Artificial Intelligence for Medical Image Analysis

Uncertainty

Enabling AI models to express confidence or uncertainty in their outputs is crucial if AI is to be used as a tool to support human decision making. Find out how we are developing models that can do this in a meaningful way.

Uncertainty-aware training

AI models based upon deep learning are notoriously bad at quantifying the confidence of their outputs. When trained with 'hard' labels solely to maximise performance (e.g. classification accuracy), they tend to over-estimate their confidence. In this case we say that the models are poorly 'calibrated' with respect to uncertainty. We have pioneered the development of training methods that aim to maximise not only accuracy but also calibration, which we refer to as 'uncertainty-aware training'. By demonstrating these methods on real medical imaging problems we hope to advance the adoption of AI models for decision support in medicine.
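As a concrete illustration, one simple way to make a training objective calibration-aware is to add an auxiliary penalty on the gap between a batch's mean confidence and its accuracy alongside the usual cross-entropy loss. The PyTorch sketch below shows this idea only; the function name and the weighting parameter beta are illustrative, and this is not a description of our published methods.

```python
import torch
import torch.nn.functional as F

def uncertainty_aware_loss(logits, targets, beta=1.0):
    """Cross-entropy plus a penalty on the gap between the batch's
    mean confidence and its accuracy (one simple auxiliary
    calibration term; beta weights the two objectives)."""
    ce = F.cross_entropy(logits, targets)
    probs = F.softmax(logits, dim=1)
    confidence, predicted = probs.max(dim=1)
    accuracy = predicted.eq(targets).float().mean()  # no gradient flows here
    gap = (confidence.mean() - accuracy).abs()       # gradient flows via confidence
    return ce + beta * gap
```

In this sketch the accuracy term is treated as a constant, so gradients reach the calibration penalty only through the predicted confidences, nudging the model to report confidence levels that match how often it is actually correct.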

Figure: Reliability diagrams showing model calibration for a range of uncertainty-aware training methods (b-g) against the baseline model (a). The orange and green regions show poor calibration, representing over- and under-estimation of confidence respectively.
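The quantities behind a reliability diagram are straightforward to compute: predictions are binned by confidence, and each bin's mean confidence is compared with its observed accuracy. Below is a minimal NumPy sketch under assumed inputs (confidences holds each prediction's maximum softmax probability, correct is a 0/1 array indicating whether the prediction was right); the binning scheme and function name are illustrative.

```python
import numpy as np

def reliability_diagram_data(confidences, correct, n_bins=10):
    """Bin predictions by confidence and compare mean confidence with
    observed accuracy per bin. Bins where confidence exceeds accuracy
    indicate over-confidence (and vice versa)."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bins, ece = [], 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            bin_conf = confidences[in_bin].mean()
            bin_acc = correct[in_bin].mean()
            ece += in_bin.mean() * abs(bin_conf - bin_acc)  # weight by bin occupancy
            bins.append((lo, hi, bin_conf, bin_acc))
    return bins, ece
```

The occupancy-weighted sum of per-bin gaps is the expected calibration error (ECE), a common scalar summary of the calibration that a reliability diagram visualises.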