3081

Fully automated segmentation of meningiomas using a specially trained deep-learning-model on multiparametric MRI
Kai Roman Laukamp1,2,3, Frank Thiele1,4, Lenhard Pennig1, Robert Reimer1, Georgy Shakirin1,4, David Zopfs1, Simon Lennartz1, Marco Timmer5, David Maintz1, Michael Perkuhn1,4, and Jan Borggrefe1

1Institute for Diagnostic and Interventional Radiology, University Hospital Cologne, Cologne, Germany, 2Radiology, University Hospitals Cleveland Medical Center, Cleveland, OH, United States, 3Radiology, Case Western Reserve University, Cleveland, OH, United States, 4Philips Research Europe, Aachen, Germany, 5Neurosurgery, University Hospital Cologne, Cologne, Germany

Synopsis

Volumetric assessment of meningiomas plays an instrumental role in primary assessment and in the detection of tumor growth. We used a specially trained deep learning model on multiparametric MR data of 116 patients to evaluate its performance in automated segmentation. The deep learning model was trained on 249 gliomas and then further adapted on a subgroup of our meningioma patients (n=60). A second group of meningiomas (n=56) was used to test the performance of the deep learning model against manual segmentations. The automated segmentations showed strong agreement with the manual segmentations: dice coefficients were 0.87±0.15 for contrast-enhancing tumor in T1CE and 0.82±0.12 for total tumor volume (the union of contrast-enhancing tumor and edema). Automated segmentation yielded accurate results comparable to manual interreader variabilities.

INTRODUCTION:

Meningiomas are among the most common brain tumors, with a high incidence in routine brain MR imaging1,2. MRI is essential for diagnosis, treatment planning, and monitoring of meningiomas3. Their growth pattern is known to be rather slow, multifocal, and multidirectional. Volumetric evaluation of meningiomas is therefore superior to traditional diameter-based methods for assessing tumor growth; however, it has proven time-consuming1,4, and intra- and inter-reader variabilities in brain tumor segmentation are high, ranging from 20-30%5,7. To be considered reliable, automated brain tumor segmentation has to address several challenging factors: anatomical variations, dural attachments, heterogeneous imaging data from differing MR scanners, and varying scanner parameters3,5. Advances in deep learning models have brought significant improvements in automated detection and segmentation6. Apart from deep learning, other (semi-)automated methods have been applied to brain tumor segmentation, including segmentation of the most common primary intracranial neoplasms, meningiomas and gliomas8. We applied a specially trained deep learning model to routine multiparametric MR data to investigate its performance in automated detection and segmentation.

METHODS:

This study was approved by the local institutional review board. Only patients with a complete MR dataset (T1-/T2-weighted, T1-weighted contrast-enhanced [T1CE], FLAIR) and a histopathological specimen were included, resulting in a total of 116 patients (n=91 grade I and n=25 grade II meningiomas). The deep learning model was trained on an external, independent dataset of 249 gliomas6 and then further adapted on a subgroup of our meningioma patients (n=60). The remaining meningioma cases (n=56) were used to compare segmentation performance between the deep learning model and manual segmentations. The model performs voxel-wise classification into four tumor classes (edema, contrast-enhancing tumor, necrosis, non-enhancing tumor) as defined in the BRATS benchmark9. For our study, we applied a previously implemented technique using the following defined tumor volumes8: total tumor volume (contrast-enhancing tumor in T1CE plus surrounding edema in FLAIR) and contrast-enhancing tumor in T1CE. Preprocessing of imaging data included registration, skull stripping, resampling, and normalization. The DeepMedic architecture6 was used, comprising a 3D convolutional neural network for segmentation and a 3D post-processing step to remove false positives. Detection and segmentation in FLAIR and T1CE were compared to manual segmentations, which were performed by two experienced radiologists in consensus using IntelliSpace Discovery (Philips Healthcare). The Wilcoxon signed rank test was applied to evaluate statistical differences. Segmentation accuracy was computed as the overlap of the ground-truth segmentation S1 and the model segmentation S2 using the dice coefficient (similarity index)8,10: DSC(S1,S2) = 2|S1∩S2| / (|S1|+|S2|)
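For illustration, the dice coefficient above can be computed directly from two binary segmentation masks. The small 2D masks below are hypothetical stand-ins for the 3D tumor volumes; only the formula itself comes from the study.

```python
import numpy as np

def dice_coefficient(s1: np.ndarray, s2: np.ndarray) -> float:
    """Dice similarity coefficient: DSC(S1, S2) = 2|S1 ∩ S2| / (|S1| + |S2|)."""
    s1 = s1.astype(bool)
    s2 = s2.astype(bool)
    denom = s1.sum() + s2.sum()
    if denom == 0:
        return 1.0  # two empty masks are treated as perfect overlap
    return 2.0 * np.logical_and(s1, s2).sum() / denom

# Toy "manual" and "automated" contrast-enhancing-tumor masks (4x4 grid):
ce_manual = np.zeros((4, 4), dtype=bool); ce_manual[1:3, 1:3] = True  # 4 voxels
ce_auto   = np.zeros((4, 4), dtype=bool); ce_auto[1:3, 1:4] = True    # 6 voxels
print(dice_coefficient(ce_manual, ce_auto))  # 2*4 / (4+6) → 0.8
```

The same function applies unchanged to 3D masks, and the total-tumor-volume mask would simply be the voxel-wise union (`np.logical_or`) of the contrast-enhancing-tumor and edema masks.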

RESULTS:

The deep learning model correctly detected 55 of 56 meningiomas, corresponding to a detection accuracy of 98.2%. The single meningioma the algorithm missed was located next to the left medial sphenoid wing and showed a rather small contrast-enhancing tumor (12.7 cm³) with little surrounding edema (3.6 cm³). Average contrast-enhancing tumor volume from manual segmentations was 30.7±25.1 cm³ and total tumor volume was 71.3±65.9 cm³ (Figure 1). The automatically detected tumor volumes were smaller: contrast-enhancing tumor was 27.0±22.6 cm³ and total tumor volume was 67.3±60.5 cm³ (Figure 1). The automated segmentations showed strong agreement with the manual segmentations, as indicated by high dice coefficients: average values were 0.87±0.15 (range: 0.07-0.97) for contrast-enhancing tumor and 0.82±0.12 (range: 0.46-0.93) for total tumor volume (Figures 2 & 3). There was a significant difference between the dice coefficients of total tumor volume and contrast-enhancing tumor (p<0.05). Furthermore, the dice coefficients of the 16 meningiomas attached to the skull base were significantly lower (contrast-enhancing tumor, 0.79±0.24; total tumor volume, 0.80±0.09) than those of the convexity meningiomas (contrast-enhancing tumor, 0.91±0.06; total tumor volume, 0.83±0.13).

DISCUSSION:

The specially trained deep learning model detected (>98%) and segmented tumors accurately; the automated segmentations showed strong agreement with the manual segmentations, with high overlap for both contrast-enhancing tumor and total tumor volume (dice coefficients, 0.87 and 0.82, respectively). These results are comparable or superior to other recently published studies on automated brain tumor segmentation5,6,8,11–14 and to general segmentation accuracies accounting for intra- and interreader variabilities5,7. Beyond glioma, research on meningioma using semi-automated or fully automated approaches has shown promising results for tumor volume definition15–19. In our patient cohort, automated segmentation of skull base meningiomas performed considerably worse than that of convexity meningiomas. Although accuracies were still sufficiently high, this is an important limitation to consider, as radiologists and neurosurgeons might rely on assistance from the deep learning model specifically in this region. Another important point is how the training data influenced segmentation performance. The deep learning model was initially trained on glioma data and then, for our study, further adapted on meningioma cases, as proposed in recent work8,14. This led to a distinct improvement in segmentation accuracy compared to the results of a model without meningioma-specific training, for which dice coefficients between 0.78 and 0.81 have been reported8.
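The pretrain-then-adapt strategy discussed above can be sketched in miniature. The numpy-only toy below pretrains a logistic voxel classifier on a "source" task and then continues gradient descent on a small "target" dataset; it is purely illustrative, since the study itself fine-tuned the DeepMedic 3D CNN, and all data, features, and learning rates here are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def train(w, X, y, lr, steps):
    """Plain logistic-regression gradient descent on voxel feature vectors."""
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Stand-in "source domain" (glioma-like) voxel features and labels.
Xg = rng.normal(size=(200, 4)); yg = (Xg[:, 0] + Xg[:, 1] > 0).astype(float)
# Stand-in "target domain" (meningioma-like) data with a shifted decision rule
# and far fewer samples, mimicking the smaller adaptation cohort.
Xm = rng.normal(size=(50, 4)); ym = (Xm[:, 0] + 0.5 * Xm[:, 2] > 0).astype(float)

w = train(np.zeros(4), Xg, yg, lr=0.5, steps=200)        # pretraining
w_adapted = train(w.copy(), Xm, ym, lr=0.1, steps=100)   # adaptation step

acc_before = np.mean((sigmoid(Xm @ w) > 0.5) == ym)
acc_after = np.mean((sigmoid(Xm @ w_adapted) > 0.5) == ym)
```

The design point is that adaptation starts from the pretrained weights `w` rather than from scratch and uses a smaller learning rate, so the target-domain data only nudges an already useful representation.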

CONCLUSION:

Fully automated segmentation yielded accurate results comparable to manual interreader variabilities, suggesting that it could potentially replace manual segmentation in the future.

Acknowledgements

We thank Natalie C. Vick for her contribution to this study and the review of the manuscript.

References

1. Fountain DM, Soon WC, Matys T, et al. Volumetric growth rates of meningioma and its correlation with histological diagnosis and clinical outcome: a systematic review. Acta Neurochir. (Wien). 2017;159:435–445.

2. Vernooij MW, Ikram MA, Tanghe HL, et al. Incidental Findings on Brain MRI in the General Population. N. Engl. J. Med. 2007;357:1821–1828.

3. Goldbrunner R, Minniti G, Preusser M, et al. EANO guidelines for the diagnosis and treatment of meningiomas. Lancet Oncol. 2016;17:e383–e391.

4. Chang V, Narang J, Schultz L, et al. Computer-aided volumetric analysis as a sensitive tool for the management of incidental meningiomas. Acta Neurochir. (Wien). 2012;154:589–597.

5. Akkus Z, Galimzianova A, Hoogi A, et al. Deep Learning for Brain MRI Segmentation: State of the Art and Future Directions. J. Digit. Imaging. 2017:1–11.

6. Kamnitsas K, Ledig C, Newcombe VFJ, et al. Efficient Multi-Scale 3D CNN with Fully Connected CRF for Accurate Brain Lesion Segmentation. 2016.

7. Mazzara GP, Velthuizen RP, Pearlman JL, et al. Brain tumor target volume determination for radiation treatment planning through automated MRI segmentation. Int. J. Radiat. Oncol. 2004;59:300–312.

8. Laukamp KR, Thiele F, Shakirin G, et al. Fully automated detection and segmentation of meningiomas using deep learning on routine multiparametric MRI. Eur. Radiol. 2018:1–9.

9. Menze BH, Jakab A, Bauer S, et al. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS). IEEE Trans. Med. Imaging. 2015;34:1993–2024.

10. Crum WR, Camara O, Hill DLG. Generalized overlap measures for evaluation and validation in medical image analysis. IEEE Trans. Med. Imaging. 2006;25:1451–61.

11. Havaei M, Davy A, Warde-Farley D, et al. Brain tumor segmentation with Deep Neural Networks. Med. Image Anal. 2017;35:18–31.

12. Farzaneh N, Soroushmehr SMR, Williamson CA, et al. Automated subdural hematoma segmentation for traumatic brain injured (TBI) patients. In: 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2017:3069–3072.

13. Zhuge Y, Krauze A V., Ning H, et al. Brain tumor segmentation using holistically nested neural networks in MRI images. Med. Phys. 2017;44:5234–5243.

14. Perkuhn M, Stavrinou P, Thiele F, et al. Clinical Evaluation of a Multiparametric Deep Learning Model for Glioblastoma Segmentation Using Heterogeneous Magnetic Resonance Imaging Data From Clinical Routine. Invest. Radiol. 2018;53:647–654.

15. Ben Shimol E, Joskowicz L, Eliahou R, et al. Computer-based radiological longitudinal evaluation of meningiomas following stereotactic radiosurgery. Int. J. Comput. Assist. Radiol. Surg. 2018;13:215–228.

16. Latini F, Larsson E-M, Ryttlefors M. Rapid and Accurate MRI Segmentation of Peritumoral Brain Edema in Meningiomas. Clin. Neuroradiol. 2017;27:145–152.

17. Koley S, Das DK, Chakraborty C, et al. Pixel-based Bayesian classification for meningioma brain tumor detection using post contrast T1-weighted magnetic resonance image. In: 2014 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT). IEEE; 2014:000358–000363.

18. Hsieh TM, Liu Y-M, Liao C-C, et al. Automatic segmentation of meningioma from non-contrasted brain MRI integrating fuzzy clustering and region growing. BMC Med. Inform. Decis. Mak. 2011;11:54.

19. Laukamp KR, Lindemann F, Weckesser M, et al. Multimodal Imaging of Patients With Gliomas Confirms (11)C-MET PET as a Complementary Marker to MRI for Noninvasive Tumor Grading and Intraindividual Follow-Up After Therapy. Mol. Imaging. 2017;16:1536012116687651.

Figures

Figure 1: Box-plot diagram depicting sizes of total tumor volume and contrast-enhancing tumor for automated and manual segmentations.

Figure 2: Box-plot diagram depicting dice coefficients of total tumor volume and contrast-enhancing tumor.

Figure 3: 85-year-old female patient with a grade I meningioma in the left parietal lobe with wide dural attachment. The tumor shows sharp circumscription, pronounced gadolinium enhancement, and edema in the surrounding white matter. Manual and automated segmentations show excellent agreement.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)