
Fetal Brain Automatic Segmentation Using 3D Deep Convolutional Neural Network
Li Zhao1, Xue Feng2, Craig Meyer2, Yao Wu1, Adre J. du Plessis3, and Catherine Limperopoulos1

1Diagnostic Imaging and Radiology, Children's National Medical Center, Washington, DC, United States, 2Biomedical Engineering, University of Virginia, Charlottesville, VA, United States, 3Fetal Medicine, Children's National Medical Center, Washington, DC, United States

Synopsis

Fetal brain MR image segmentation is necessary for brain development research. Currently, this task relies mainly on labor-intensive manual contouring or correction, because automatic segmentation often fails due to low image quality. In this work, we apply a convolutional neural network, 3D U-Net, to segment fetal brain regions. The proposed method was validated on 209 fetal brain MRI scans, including healthy fetal controls and high-risk fetuses with congenital heart disease. The proposed method showed high consistency with the manually corrected results and may facilitate the identification of aberrant fetal brain development by providing quantitative morphological information.

INTRODUCTION

The human brain develops rapidly during the fetal period. Neuroimaging by MRI is an essential tool to quantitatively evaluate in vivo brain development and to diagnose its impairments1. Current studies of the fetal brain focus mainly on morphological development with advancing gestational age, in which identifying the brain regions is the first step. However, this is a challenging task because fetal MR images have inherently low image contrast and are often contaminated by motion artifacts. Conventional segmentation based on probability templates2 is not well suited to identifying fetal brain regions. Currently, fetal brain segmentation is performed with intensive manual contouring and correction. Manual measurements also introduce subjective bias and reduce reproducibility in large-cohort studies.

In the past few years, deep convolutional neural networks have shown promising performance in various image processing tasks. In particular, the U-Net method has been recognized as a milestone in automatic medical image segmentation3,4. In this work, we present a modified U-Net structure for 3D MR image segmentation and our preliminary results on 209 fetal brain segmentations.

METHODS

In this study, 143 MRI scans were performed on 121 healthy fetuses at a mean gestational age (GA) of 31.3±5.5 weeks (the GA of one scan was missing). An additional 66 MRI scans were performed on 54 fetuses diagnosed with congenital heart disease (CHD) at a mean GA of 32.1±3.9 weeks. The control and CHD data were pooled in the following analysis. 80% of the scans were selected randomly and used for training the 3D U-Net model; the remaining 20% were used for validation.
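As a minimal sketch of this random split (the scan identifiers and seed are hypothetical and not part of the original pipeline):

```python
import random

def split_scans(scan_ids, train_fraction=0.8, seed=0):
    """Randomly split scan IDs into training and validation sets."""
    ids = list(scan_ids)
    random.Random(seed).shuffle(ids)
    n_train = int(round(train_fraction * len(ids)))
    return ids[:n_train], ids[n_train:]

# 209 scans (143 control + 66 CHD) pooled before splitting
all_scans = [f"scan_{i:03d}" for i in range(209)]
train_ids, val_ids = split_scans(all_scans)
print(len(train_ids), len(val_ids))  # 167 42
```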

T2-weighted images were acquired on a GE 1.5T scanner. High-resolution single-shot fast spin echo sequences were acquired in the coronal, sagittal, and axial planes with three repetitions. The flip angle was 130 degrees; the matrix size was 256x192; the slice thickness was 2 mm; TE was 157-163 ms; and the minimum TR was used (about 1100 ms). The data were acquired with partial Fourier and without parallel imaging.

The images from all three planes were reconstructed into a high-resolution 3D volume using a slice-to-volume method5. The head region was manually selected in each slice, and segmentation was performed on the brain region only. The high-resolution images were first segmented with the Developing brain Region Annotation With Expectation-Maximization (Draw-EM) tool6, which provided labels for eight regions: cerebral cortex, white matter, external CSF, internal CSF/lateral ventricles, cerebellum, subcortical gray matter, brain stem, and amygdala+hippocampus. This conventional segmentation was then manually corrected and used as the ground truth. The third and fourth ventricles were labeled as external CSF in both the conventional segmentation and the manual corrections.

The fetal brain segmentation was performed using a modified 3D U-Net structure. Compared to the original design, there are two modifications. First, the model was implemented with a parametric rectified linear unit (PReLU), defined below, instead of the rectified linear unit:

$$f(x) =\begin{cases}x & x > 0\\ax & x \le 0\end{cases} $$

where a is a parameter learned during training.
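For illustration, a minimal NumPy sketch of the PReLU activation is given below. In the network, a is a trainable parameter (typically one per channel, e.g. torch.nn.PReLU in PyTorch); the fixed slope here serves only to restate the formula.

```python
import numpy as np

def prelu(x, a=0.25):
    """Parametric ReLU: identity for x > 0, slope a for x <= 0.
    The slope a is learned during training; a fixed value is used
    here only for illustration."""
    return np.where(x > 0, x, a * x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
# Negative inputs are scaled by a, positive inputs pass through:
print(prelu(x))  # [-0.5, -0.125, 0.0, 1.5]
```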

Second, during training, 64x64x64 patches were randomly extracted from the 3D T2-weighted images and fed to the network, whose first layer contained 96 filters. For data augmentation, the images were randomly flipped along the left-right direction. The cross-entropy loss was calculated against the manually corrected label maps, which served as the ground truth. The Dice coefficient was used for evaluation. The data were processed on a cluster with Tesla P100 GPUs.
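A minimal sketch of the patch sampling, left-right flip augmentation, and Dice evaluation described above (NumPy only; the array layout, left-right axis, and function names are assumptions rather than the authors' implementation, and the standard cross-entropy loss is omitted):

```python
import numpy as np

def random_patch(image, label, size=64, rng=np.random):
    """Extract a random size^3 patch from a 3D image and its label map."""
    starts = [rng.randint(0, dim - size + 1) for dim in image.shape]
    sl = tuple(slice(s, s + size) for s in starts)
    return image[sl], label[sl]

def random_lr_flip(image, label, lr_axis=0, rng=np.random):
    """Randomly flip a patch and its labels along the assumed left-right axis."""
    if rng.rand() < 0.5:
        image = np.flip(image, axis=lr_axis).copy()
        label = np.flip(label, axis=lr_axis).copy()
    return image, label

def dice_coefficient(pred, truth, region):
    """Dice overlap between predicted and ground-truth masks of one region."""
    p, t = (pred == region), (truth == region)
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0

# Example on a synthetic volume with labels 0 (background) and 1 (one region)
rng = np.random.RandomState(0)
image = rng.rand(128, 128, 128).astype(np.float32)
label = (image > 0.5).astype(np.int32)
patch_img, patch_lab = random_patch(image, label, rng=rng)
patch_img, patch_lab = random_lr_flip(patch_img, patch_lab, rng=rng)
print(patch_img.shape, dice_coefficient(patch_lab, patch_lab, region=1))
# (64, 64, 64) 1.0
```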

RESULTS

In the validation data, the method yielded Dice coefficients of 0.89 in the cortex, 0.78 in the white matter, 0.91 in the external CSF, 0.89 in the internal CSF/lateral ventricles, 0.94 in the cerebellum, 0.90 in the subcortical gray matter, 0.95 in the brain stem, and 0.80 in the amygdala and hippocampus, indicating good to excellent performance. Figure 1 shows the performance of the proposed method on a healthy fetus at 38 weeks GA. Figure 2 shows the performance of the proposed method on a fetus with CHD at 33 3/7 weeks GA. The proposed method showed high consistency with the ground truth and promising performance on fetal brain segmentation.

DISCUSSION

In this work, we demonstrated the performance of a modified 3D U-Net for brain segmentation in healthy and high-risk fetuses. The proposed method showed superior performance to the conventional method in both the healthy and CHD cohorts. This method offers a promising tool for the quantitative analysis and automatic characterization of in vivo fetal brain development.

Acknowledgements

This work was partly supported by grant R01HL116585 from the NIH National Heart, Lung, and Blood Institute.

References

1. Clouchoux C, Guizard N, Evans AC, Du Plessis AJ, Limperopoulos C. Normative fetal brain growth by quantitative in vivo magnetic resonance imaging. Am. J. Obstet. Gynecol. 2012;206:173.e1-173.e8 doi: 10.1016/j.ajog.2011.10.002.

2. Gholipour A, Estroff JA, Barnewolt CE, Connolly SA, Warfield SK. Fetal brain volumetry through MRI volumetric reconstruction and segmentation. Int. J. Comput. Assist. Radiol. Surg. 2011;6:329–339 doi: 10.1007/s11548-010-0512-x.

3. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. 2015:1–8 doi: 10.1007/978-3-319-24574-4_28.

4. Çiçek Ö, Abdulkadir A, Lienkamp SS, Brox T, Ronneberger O. 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation. 2016:424–432 doi: 10.1007/978-3-319-46723-8_49.

5. Kainz B, Steinberger M, Wein W, et al. Fast Volume Reconstruction from Motion Corrupted Stacks of 2D Slices. IEEE Trans. Med. Imaging 2015;34:1901–1913 doi: 10.1109/TMI.2015.2415453.

6. Makropoulos A, Gousias IS, Ledig C, et al. Automatic whole brain MRI segmentation of the developing neonatal brain. IEEE Trans. Med. Imaging 2014;33:1818–1831 doi: 10.1109/TMI.2014.2322280.

Figures

Figure 1. Brain segmentations of a healthy fetal brain at the gestational age of 38 weeks. A) 3D reconstruction from the axial, coronal, and sagittal 2D single-shot fast spin echo images. B) Manually corrected segmentations based on the conventional segmentation results. C) Automatic segmentation based on the modified 3D U-Net.

Figure 2. Brain segmentations of a fetus with congenital heart disease at the gestational age of 33+3/7 weeks. A) 3D reconstruction from the axial, coronal, and sagittal 2D single-shot fast spin echo images. B) Manually corrected segmentations based on the conventional segmentation results. C) Automatic segmentations based on the modified 3D U-Net.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)
1070