QU-BraTS: MICCAI BraTS 2020 Challenge on Quantifying Uncertainty in Brain Tumor Segmentation — Analysis of Ranking Scores and Benchmarking Results


Abstract

Deep learning (DL) models have provided state-of-the-art performance in various medical imaging benchmarking challenges, including the Brain Tumor Segmentation (BraTS) challenges. However, the task of focal pathology multi-compartment segmentation (e.g., tumor and lesion sub-regions) is particularly challenging, and potential errors hinder translating DL models into clinical workflows. Quantifying the reliability of DL model predictions in the form of uncertainties could enable clinical review of the most uncertain regions, thereby building trust and paving the way toward clinical translation. Several uncertainty estimation methods have recently been introduced for DL medical image segmentation tasks. Developing scores to evaluate and compare the performance of uncertainty measures will assist the end-user in making more informed decisions. In this study, we explore and evaluate a score developed during the BraTS 2019 and BraTS 2020 task on uncertainty quantification (QU-BraTS) and designed to assess and rank uncertainty estimates for brain tumor multi-compartment segmentation. This score (1) rewards uncertainty estimates that produce high confidence in correct assertions and those that assign low confidence levels at incorrect assertions, and (2) penalizes uncertainty measures that lead to a higher percentage of under-confident correct assertions. We further benchmark the segmentation uncertainties generated by 14 independent participating teams of QU-BraTS 2020, all of which also participated in the main BraTS segmentation task. Overall, our findings confirm the importance and complementary value that uncertainty estimates provide to segmentation algorithms, highlighting the need for uncertainty quantification in medical image analyses. Finally, in favor of transparency and reproducibility, our evaluation code is made publicly available at https://github.com/RagMeh11/QU-BraTS.
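The abstract only characterizes the score qualitatively. As a rough illustration of the filtering idea it describes, the NumPy sketch below scores a single binary tumor region: voxels whose uncertainty exceeds a threshold are "filtered out" (deferred to a reviewer), Dice is recomputed on the confident remainder, and the fractions of filtered-out true positives (FTP) and true negatives (FTN) penalize discarding correct voxels. The function name, the 0-100 uncertainty normalization, the threshold sweep, and the trapezoidal AUC here are assumptions made for illustration, not the challenge's reference implementation; consult the linked GitHub repository for that.

import numpy as np

def dice(pred, gt):
    """Dice overlap of two boolean masks (defined as 1.0 when both are empty)."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 1.0 if denom == 0 else 2.0 * inter / denom

def qu_brats_style_score(pred, gt, unc, thresholds=np.linspace(0, 100, 101)):
    """Hypothetical sketch of a QU-BraTS-style score for one binary region.

    pred, gt : boolean arrays -- predicted and reference segmentations
    unc      : float array in [0, 100], where 100 marks the most uncertain voxel

    At each threshold tau, only voxels with uncertainty <= tau are kept.
    Filtering wrong-but-uncertain voxels raises the Dice of what remains,
    while filtering correct voxels is penalized through the ratios of
    filtered-out true positives (FTP) and true negatives (FTN).
    """
    tp_all = np.logical_and(pred, gt).sum()            # unfiltered true positives
    tn_all = np.logical_and(~pred, ~gt).sum()          # unfiltered true negatives
    dices, ftps, ftns = [], [], []
    for tau in thresholds:
        keep = unc <= tau                              # confident voxels only
        dices.append(dice(pred[keep], gt[keep]))
        tp = np.logical_and(pred, gt)[keep].sum()
        tn = np.logical_and(~pred, ~gt)[keep].sum()
        ftps.append((tp_all - tp) / max(tp_all, 1))    # ratio of filtered-out TPs
        ftns.append((tn_all - tn) / max(tn_all, 1))    # ratio of filtered-out TNs
    auc = lambda ys: np.trapz(ys, thresholds / 100.0)  # area under each curve
    # Reward Dice retention; penalize throwing away correct voxels.
    return (auc(dices) + (1 - auc(ftps)) + (1 - auc(ftns))) / 3.0

Under this reading, an ideal uncertainty map assigns zero uncertainty to correct voxels (so the FTP and FTN curves stay near 0) and high uncertainty to erroneous ones (so the filtered Dice climbs toward 1), maximizing all three terms of the average.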

Documents

Bibtex (lrde.bib)

@Article{	  mehta.22.melba,
  author	= {Mehta, Raghav and Filos, Angelos and Baid, Ujjwal and
		  Sako, Chiharu and McKinley, Richard and Rebsamen, Michael
		  and D{\"{a}}twyler, Katrin and Meier, Raphael and
		  Radojewski, Piotr and Murugesan, Gowtham Krishnan and
		  Nalawade, Sahil and Ganesh, Chandan and Wagner, Ben and Yu,
		  Fang F. and Fei, Baowei and Madhuranthakam, Ananth J. and
		  Maldjian, Joseph A. and Daza, Laura and G{\'{o}}mez,
		  Catalina and Arbel{\'{a}}ez, Pablo and Dai, Chengliang and
		  Wang, Shuo and Reynaud, Hadrien and Mo, Yuanhan and
		  Angelini, Elsa and Guo, Yike and Bai, Wenjia and Banerjee,
		  Subhashis and Pei, Linmin and AK, Murat and
		  Rosas-Gonz{\'{a}}lez, Sarahi and Zemmoura, Ilyess and
		  Tauber, Clovis and Vu, Minh Hoang and Nyholm, Tufve and
		  L{\"{o}}fstedt, Tommy and Ballestar, Laura Mora and
		  Vilaplana, Veronica and McHugh, Hugh and Talou, Gonzalo
		  Maso and Wang, Alan and Patel, Jay and Chang, Ken and
		  Hoebel, Katharina and Gidwani, Mishka and Arun, Nishanth
		  and Gupta, Sharut and Aggarwal, Mehak and Singh, Praveer
		  and Gerstner, Elizabeth R. and Kalpathy-Cramer, Jayashree
		  and Boutry, Nicolas and Huard, Alexis and Vidyaratne,
		  Lasitha and Rahman, Md Monibor and Iftekharuddin, Khan M.
		  and Chazalon, Joseph and Puybareau, Elodie and Tochon,
		  Guillaume and Ma, Jun and Cabezas, Mariano and Llado,
		  Xavier and Oliver, Arnau and Valencia, Liliana and
		  Valverde, Sergi and Amian, Mehdi and Soltaninejad,
		  Mohammadreza and Myronenko, Andriy and Hatamizadeh, Ali and
		  Feng, Xue and Dou, Quan and Tustison, Nicholas and Meyer,
		  Craig and Shah, Nisarg A. and Talbar, Sanjay and Weber,
		  Marc-Andr{\'{e}} and Mahajan, Abhishek and Jakab, Andras
		  and Wiest, Roland and Fathallah-Shaykh, Hassan M. and
		  Nazeri, Arash and Milchenko, Mikhail and Marcus, Daniel and
		  Kotrotsou, Aikaterini and Colen, Rivka and Freymann, John
		  and Kirby, Justin and Davatzikos, Christos and Menze,
		  Bjoern and Bakas, Spyridon and Gal, Yarin and Arbel, Tal},
  title		= {{QU-BraTS}: {MICCAI} {BraTS} 2020 Challenge on Quantifying
		  Uncertainty in Brain Tumor Segmentation --- {A}nalysis of
		  Ranking Scores and Benchmarking Results},
  journal	= {Journal of Machine Learning for Biomedical Imaging
		  (MELBA)},
  volume	= {26},
  pages		= {1--54},
  month		= sep,
  year		= {2022},
  abstract	= {Deep learning (DL) models have provided state-of-the-art
		  performance in various medical imaging benchmarking
		  challenges, including the Brain Tumor Segmentation (BraTS)
		  challenges. However, the task of focal pathology
		  multi-compartment segmentation (e.g., tumor and lesion
		  sub-regions) is particularly challenging, and potential
		  errors hinder translating DL models into clinical
		  workflows. Quantifying the reliability of DL model
		  predictions in the form of uncertainties could enable
		  clinical review of the most uncertain regions, thereby
		  building trust and paving the way toward clinical
		  translation. Several uncertainty estimation methods have
		  recently been introduced for DL medical image segmentation
		  tasks. Developing scores to evaluate and compare the
		  performance of uncertainty measures will assist the
		  end-user in making more informed decisions. In this study,
		  we explore and evaluate a score developed during the BraTS
		  2019 and BraTS 2020 task on uncertainty quantification
		  (QU-BraTS) and designed to assess and rank uncertainty
		  estimates for brain tumor multi-compartment segmentation.
		  This score (1) rewards uncertainty estimates that produce
		  high confidence in correct assertions and those that assign
		  low confidence levels at incorrect assertions, and (2)
		  penalizes uncertainty measures that lead to a higher
		  percentage of under-confident correct assertions. We
		  further benchmark the segmentation uncertainties generated
		  by 14 independent participating teams of QU-BraTS 2020, all
		  of which also participated in the main BraTS segmentation
		  task. Overall, our findings confirm the importance and
		  complementary value that uncertainty estimates provide to
		  segmentation algorithms, highlighting the need for
		  uncertainty quantification in medical image analyses.
		  Finally, in favor of transparency and reproducibility, our
		  evaluation code is made publicly available at
		  https://github.com/RagMeh11/QU-BraTS. },
  nodoi		= {}
}