Segmentation of Gliomas and Prediction of Patient Overall Survival: A Simple and Fast Procedure


Abstract

In this paper, we propose a fast automatic method that segments glioma without any manual assistance, using a fully convolutional network (FCN) and transfer learning. From this segmentation, we predict the patient overall survival using only the results of the segmentation and a home-made atlas. The FCN is the base network of VGG-16, pretrained on ImageNet for natural image classification, and fine-tuned with the training dataset of the MICCAI 2018 BraTS Challenge. It relies on the "pseudo-3D" method published at ICIP 2017, which allows for segmenting objects from 2D color images that contain 3D information of MRI volumes. For each nth slice of the volume to segment, we consider three images, corresponding to the (n-1)th, nth and (n+1)th slices of the original volume. These three gray-level 2D images are assembled to form a 2D RGB color image (one image per channel). This image is the input of the FCN to obtain a 2D segmentation of the nth slice. We process all slices, then stack the results to form the 3D output segmentation. With such a technique, the segmentation of a 3D volume takes only a few seconds. The prediction is based on Random Forests, and has the advantage of not being dependent on the acquisition modality, making it robust to inter-base data.
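The pseudo-3D slice assembly described in the abstract can be sketched in a few lines of NumPy. The code below is an illustrative reconstruction, not the authors' code: segment_slice is a hypothetical stand-in for the fine-tuned VGG-16 FCN, and clamping the first and last slice indices at the volume boundaries is an assumption not stated in the abstract.

import numpy as np

def pseudo3d_slices(volume):
    # volume: 3D array of shape (depth, H, W).
    # Yields one (H, W, 3) "RGB" image per slice, whose channels are the
    # (n-1)th, nth and (n+1)th gray-level slices (indices clamped at the
    # volume boundaries -- an assumption for the first and last slices).
    depth = volume.shape[0]
    for n in range(depth):
        prev_slice = volume[max(n - 1, 0)]
        curr_slice = volume[n]
        next_slice = volume[min(n + 1, depth - 1)]
        yield np.stack([prev_slice, curr_slice, next_slice], axis=-1)

def segment_volume(volume, segment_slice):
    # Apply a 2D slice-wise model to every pseudo-3D image, then stack the
    # resulting 2D label maps back into a 3D segmentation of shape (depth, H, W).
    # segment_slice: any callable mapping an (H, W, 3) image to an (H, W) label
    # map; here it is a placeholder for the VGG-16-based FCN used in the paper.
    return np.stack([segment_slice(img) for img in pseudo3d_slices(volume)], axis=0)

# Toy usage with a dummy "model" and BraTS-like dimensions (155 x 240 x 240):
# volume = np.random.rand(155, 240, 240).astype(np.float32)
# dummy_model = lambda img: (img[..., 1] > 0.5).astype(np.uint8)
# seg3d = segment_volume(volume, dummy_model)   # shape (155, 240, 240)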

Documents

Bibtex (lrde.bib)

@InProceedings{	  puybareau.18.brainles,
  author	= {\'Elodie Puybareau and Guillaume Tochon and Joseph
		  Chazalon and Jonathan Fabrizio},
  title		= {Segmentation of Gliomas and Prediction of Patient Overall
		  Survival: {A} Simple and Fast Procedure},
  booktitle	= {Proceedings of the Workshop on Brain Lesions (BrainLes),
		  in conjunction with MICCAI},
  year		= 2018,
  series	= {Lecture Notes in Computer Science},
  volume	= {11384},
  pages		= {199--209},
  publisher	= {Springer},
  abstract	= {In this paper, we propose a fast automatic method that
		  segments glioma without any manual assistance, using a
		  fully convolutional network (FCN) and transfer learning.
		  From this segmentation, we predict the patient overall
		  survival using only the results of the segmentation and a
		  home-made atlas. The FCN is the base network of VGG-16,
		  pretrained on ImageNet for natural image classification,
		  and fine-tuned with the training dataset of the MICCAI 2018
		  BraTS Challenge. It relies on the "pseudo-3D" method
		  published at ICIP 2017, which allows for segmenting objects
		  from 2D color images that contain 3D information of MRI
		  volumes. For each nth slice of the volume to segment, we
		  consider three images, corresponding to the (n-1)th, nth,
		  and (n+1)th slices of the original volume. These three
		  gray-level 2D images are assembled to form a 2D RGB color
		  image (one image per channel). This image is the input of
		  the FCN to obtain a 2D segmentation of the nth slice. We
		  process all slices, then stack the results to form the 3D
		  output segmentation. With such a technique, the
		  segmentation of a 3D volume takes only a few seconds. The
		  prediction is based on Random Forests, and has the
		  advantage of not being dependent on the acquisition
		  modality, making it robust to inter-base data.},
  doi		= {10.1007/978-3-030-11726-9_18}
}