0899

Brain Tumor Segmentation and Uncertainty Quantification Using Monte Carlo Dropout Sampling
Joohyun Lee1, Woojin Jung1, and Jongho Lee1

1Laboratory for Imaging Science and Technology, Department of Electrical and Computer Engineering, Seoul National University, Seoul, Korea, Republic of

Synopsis

Deep learning has made tremendous progress in many areas, but it is often regarded as a black box with uncertain outcomes. More reliable methods are therefore needed before it can be applied in the medical field. In this work, we designed a brain tumor segmentation network that provides uncertainty quantification using Monte Carlo dropout sampling. The proposed method achieved competitive segmentation accuracy and also provides an option for selectively maximizing precision or recall using the uncertainty quantification.

Introduction

In the diagnosis and treatment of brain tumors, it is critical to determine the extent of the tumor, which is commonly done using multi-contrast MRI (e.g., T1-weighted, contrast-enhanced T1-weighted, FLAIR, and T2-weighted images). However, delineating the tumor boundary in 3D MRI is not only experience-dependent but also time-consuming. Hence, automation using machine learning algorithms has been an important topic of research. Recently, powered by advances in deep learning, substantial improvements have been demonstrated1. Despite these improvements, deep learning carries risks when brought into medical practice due to its limited reliability and interpretability. In this paper, we present a more reliable and interpretable tumor segmentation method that provides uncertainty quantification of the segmented results. The proposed method can selectively maximize precision or recall by excluding or including highly uncertain abnormal regions.

Methods

[Data and Aim] This work was performed using the BraTS 2018 training dataset2, which consists of 285 multi-contrast MRI scans (FLAIR, T1, T1c, and T2). Our target tasks were to segment the whole tumor, tumor core, and enhancing tumor regions and to provide uncertainty quantification.

[Segmentation] For segmentation, PSPNet3 was applied (Figure 1). PSPNet can exploit multi-contrast MRI images effectively using its pyramid pooling module, which applies kernels of multiple sizes to extract regional information at various scales from each image. The network inputs were the multi-contrast MRI images (FLAIR, T1, T1c, and T2), and the outputs were segmentation, uncertainty, and combined masks. Training and evaluation were done using 10-fold cross-validation on the BraTS 2018 training dataset. A loss function based on the Dice coefficient (DSC) was used to train the model. The Dice coefficient4 is defined as

$$DSC=\frac{2\sum_i^Np_{i}g_{i}}{\sum_i^Np_i^2+\sum_i^Ng_i^2}$$

where $p_i$ is the value of voxel $i$ in the segmentation output and $g_i$ is the binary value of voxel $i$ in the ground truth.
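The Dice-based loss above can be sketched as follows; this is a minimal numpy illustration of the V-Net-style formulation, not the exact training code (function names and the smoothing term `eps` are our own additions):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-6):
    """Soft Dice coefficient (V-Net form): 2*sum(p*g) / (sum(p^2) + sum(g^2)).

    pred   -- predicted per-voxel values, flattened
    target -- binary ground-truth labels per voxel, flattened
    eps    -- small constant to avoid division by zero (illustrative)
    """
    pred = np.asarray(pred, dtype=float).ravel()
    target = np.asarray(target, dtype=float).ravel()
    intersection = np.sum(pred * target)
    denom = np.sum(pred ** 2) + np.sum(target ** 2)
    return 2.0 * intersection / (denom + eps)

def dice_loss(pred, target):
    """Loss to minimize during training: 1 - DSC."""
    return 1.0 - dice_coefficient(pred, target)
```

A perfect prediction gives a DSC near 1 (loss near 0), while a prediction with no overlap gives a DSC of 0.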

[Uncertainty Quantification] To generate an uncertainty mask, Monte-Carlo dropout sampling1, a type of variational inference, was applied. To quantify the uncertainty, the segmentation model was trained with dropout, i.e., random disconnection of units in the network. At inference time, the same data were passed through the network N times while applying different dropout masks. The individual results were then analyzed to quantify aleatoric uncertainty and epistemic uncertainty, defined as follows:
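The sampling procedure can be sketched with a toy stand-in for the trained network; the single-layer model below is only for illustration (it is not PSPNet), but it shows the key point that dropout stays active at inference time so repeated passes over the same input give different outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(features, weights, p_drop=0.5):
    """One stochastic forward pass: randomly disconnect weights, then predict.

    A toy single-layer stand-in for the trained segmentation network.
    Dropout is kept ON at inference, so each call gives a different output.
    """
    mask = rng.random(weights.shape) >= p_drop        # random disconnection
    logits = features @ (weights * mask) / (1.0 - p_drop)
    return 1.0 / (1.0 + np.exp(-logits))              # sigmoid per voxel

def mc_dropout_samples(features, weights, n_samples=20):
    """Apply the same data N times with different dropout masks."""
    return np.stack([dropout_forward(features, weights)
                     for _ in range(n_samples)])
```

The stack of N sampled outputs is what the uncertainty formulas below operate on, voxel by voxel.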

$$Aleatoric\ uncertainty=\frac{1}{N}\sum_n^N(x_{i,n})(1-x_{i,n})$$

$$Epistemic\ uncertainty=\frac{1}{N}\sum_n^Nx_{i,n}^2-(\frac{1}{N}\sum_n^Nx_{i,n})^2$$

where $x_{i,n}$ is the value of voxel $i$ in the $n$th segmentation output. The final uncertainty value is calculated as follows:

$$u_{i}=\alpha\times Aleatoric\ Uncertainty+\beta\times Epistemic\ Uncertainty$$

where $u_i$ is the value of voxel $i$ in the uncertainty mask, and α and β are hyperparameters, both set to 0.5.
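Given the N sampled outputs, the three uncertainty quantities above can be computed per voxel as a direct transcription of the formulas (a minimal numpy sketch; the function name is ours):

```python
import numpy as np

def uncertainty_masks(samples, alpha=0.5, beta=0.5):
    """Voxel-wise uncertainty from N Monte-Carlo dropout samples.

    samples -- array of shape (N, ...) holding the N segmentation
               outputs x_{i,n} in [0, 1] for each voxel i.
    Returns (aleatoric, epistemic, combined) arrays, one value per voxel.
    """
    x = np.asarray(samples, dtype=float)
    # Aleatoric: mean of x*(1-x) over the N samples
    aleatoric = np.mean(x * (1.0 - x), axis=0)
    # Epistemic: variance of x over the N samples, E[x^2] - (E[x])^2
    epistemic = np.mean(x ** 2, axis=0) - np.mean(x, axis=0) ** 2
    # Final uncertainty u_i = alpha*aleatoric + beta*epistemic
    combined = alpha * aleatoric + beta * epistemic
    return aleatoric, epistemic, combined
```

If all N samples agree with a confident value (all 0 or all 1), both terms vanish; disagreement across samples drives the epistemic term, while individually hedged predictions (values near 0.5) drive the aleatoric term.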

Finally, the segmentation mask was combined with the uncertainty mask to generate a combined mask. In the segmentation mask, red pixels indicate tumor. In the uncertainty mask, green pixels mark normal tissue with high uncertainty values, and yellow pixels mark tumor with high uncertainty values. In the combined mask, the color code follows the uncertainty mask first, then the segmentation mask. Using this combined mask, highly uncertain predictions can be either included or excluded. Excluding them yields fewer false positives, predicting a certain and safe tumor area (i.e., optimizing precision). Including them yields fewer false negatives, covering most of the tumor (i.e., optimizing recall). Hence, our method provides a selectivity that can be tuned to the purpose of the application.
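The include/exclude decision can be sketched as a simple voxel-wise rule; the thresholds `u_thresh` and `p_thresh` below are illustrative values, not parameters reported in this work:

```python
import numpy as np

def select_tumor_mask(seg_prob, uncertainty, u_thresh=0.1, p_thresh=0.5,
                      mode="certain"):
    """Build All-Tumor or Certain-Tumor masks from segmentation + uncertainty.

    seg_prob    -- per-voxel tumor probability (e.g., mean over MC samples)
    uncertainty -- per-voxel combined uncertainty u_i
    mode        -- "certain": drop uncertain voxels  (favors precision)
                   "all":     add uncertain voxels   (favors recall)
    u_thresh, p_thresh -- illustrative thresholds, chosen here for the sketch
    """
    seg = np.asarray(seg_prob) >= p_thresh          # "red": predicted tumor
    uncertain = np.asarray(uncertainty) >= u_thresh # high-uncertainty voxels
    if mode == "certain":
        return seg & ~uncertain                     # keep confident tumor only
    return seg | uncertain                          # include uncertain regions
```

In this sketch, "certain" corresponds to the red-only Certain-Tumor mask and "all" to the red-plus-yellow All-Tumor mask.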

Results and discussion

Figures 2 and 3 show sample results. In many datasets, the network segmented images precisely (Figures 2a, d, and e). However, segmentation was less accurate in other datasets (for example, the small tumor fragments in Figure 2b and the tumor on a dark background in Figure 2c). The uncertainty map represents the uncertainty value of each predicted voxel. Using this information, prediction results can selectively include or exclude highly uncertain regions (Figure 3). Table 1 summarizes the DSC results of recent segmentation methods and the proposed method. The proposed method achieved competitive results (DSC of 0.920, 0.844, and 0.815 for whole tumor, tumor core, and enhancing tumor, respectively). Table 2 shows the combined-mask evaluation results of the proposed method for All-Tumor (red + yellow) and Certain-Tumor (red only). Including the yellow area yields higher recall (i.e., fewer false negatives), whereas excluding it yields higher precision (i.e., fewer false positives), demonstrating the utility of the uncertainty mask.

Conclusion

In this work, PSPNet with uncertainty estimation was applied to the brain tumor segmentation problem. The method may provide more reliable and interpretable diagnosis results.

Acknowledgements

This research was supported by NRF-2015M3C7A1031969 and Brain Korea 21 Plus Project in 2018.

References

[1] Kendall, A., et al. What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision? NIPS 2017.

[2] Menze, B.H., et al. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS). IEEE Trans. Med. Imaging 34, 1993–2024 (2015).

[3] Zhao, H., et al. Pyramid Scene Parsing Network. CVPR 2017.

[4] Milletari, F., et al. V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation. 3D Vision (3DV), 2016 Fourth International Conference on, pp. 565–571. IEEE, 2016.

[5] Alain, G., Bengio, Y. Understanding intermediate layers using linear classifier probes. arXiv:1610.01644, 2016.

[6] Dong, H., et al. Automatic Brain Tumor Detection and Segmentation Using U-Net Based Fully Convolutional Networks. MIUA, 2017.

[7] Kwon, Y., et al. Uncertainty quantification using Bayesian neural networks in classification: Application to ischemic stroke lesion segmentation. Medical Imaging with Deep Learning (MIDL), 2018.

[8] Isensee, F., et al. No New-Net. arXiv:1809.10483, 2018.

Figures

Figure 1. PSPNet for segmentation and uncertainty. Monte-Carlo dropout sampling was used to estimate uncertainty

Figure 2. Sample results. Segmentation (red) shows the tumor segmentation mask. Uncertainty (green/yellow) shows the uncertainty mask. Combined shows the combined segmentation and uncertainty mask. Ground-Truth (blue) shows the ground-truth tumor mask.

Figure 3. Including/excluding uncertain tumor regions based on combined mask. All-Tumor includes uncertain tumor regions and Certain-Tumor excludes uncertain tumor regions.

Table 1. Dice-coefficient results of recent methods and the proposed method on BraTS 2018 training dataset.

Table 2. Evaluation results of the combined mask. All-Tumor is the tumor prediction including areas of both high and low uncertainty (red area + yellow area). Certain-Tumor is the tumor prediction including only areas of low uncertainty (red area only).

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)