4797

Automated Fetal Brain Segmentation Using Deep Convolutional Neural Network
Bin Chen1, Liming Wu1, Bing Zhang2, Simin Liu3, and Hua Guo3

1Purdue University Northwest, Hammond, IN, United States, 2Nanjing University Medical School, Nanjing, China, 3Tsinghua University, Beijing, China

Synopsis

Recent advances have shown promising fetal brain reconstruction results through image motion correction and super-resolution applied to a stack of unregistered images composed of in-plane motion-free snapshot slices acquired with fast imaging methods. Most motion correction and super-resolution techniques for 3D volume reconstruction require accurate fetal brain segmentation as the first step of image analysis. In this study, a customized U-Net-based deep learning method was implemented for automatic fetal brain segmentation. The high accuracy of deep-learning-based semantic segmentation improves the performance of volume registration as well as quantitative studies of brain development and group analysis.

Introduction

Fetal MRI has been increasingly used in quantitative brain development studies as well as in the diagnosis of normal/abnormal development at gestational ages (GA) over 20 weeks [1]. While fast imaging methods are capable of acquiring in-plane motion-free 2D snapshots, unconstrained fetal motion in utero between slice acquisitions usually leads to unregistered image volumes. Recent advances have shown promising volume reconstruction results from a stack of unregistered images through fetal image motion correction and super-resolution volume reconstruction. The performance of most 3D volume registration and reconstruction techniques usually depends on the accuracy of fetal brain segmentation [2,3]. In this study, a customized U-Net [4] based deep learning method was implemented for automatic fetal brain segmentation. The results are evaluated with commonly used measures in medical image segmentation and compared with FSL's popular brain extraction tool (BET).

Methods

A modified U-Net model was implemented to better fit the fetal brain segmentation task. The input layer dimensions were resized from 480×480 to 512×512 pixels for convenient feature concatenation in the implementation. The convolutions used a kernel size of 3×3, stride 2, and zero padding. Batch normalization was adopted for faster and more stable network convergence. Manually segmented images from 52 volumes of 19 fetuses with GA between 30 and 33 weeks served as the training dataset. The binary cross-entropy loss was calculated between the ground truth and the output prediction. The SGD optimizer was set to a learning rate of 0.01 with a decay rate of 50% every 10 epochs. The network converged and the loss started to plateau after 20 epochs. The training time was approximately 5 hours on a computer with two Nvidia GeForce GTX 1080 graphics cards. For comparison, both the trained network and FSL's BET program were applied to a test dataset consisting of subjects not included in the training dataset. The results from FSL's BET with a fractional intensity threshold of 0.80, which by visual inspection gave the best segmentation among thresholds ranging from 0.5 to 0.9, were used as a reference. The performance of segmentation was evaluated by intersection over union (IOU), Dice coefficient, sensitivity, and specificity. A minimal sketch of the training configuration is given below.
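The following is a minimal PyTorch sketch of the training setup described above (binary cross-entropy loss, SGD with a learning rate of 0.01 halved every 10 epochs, 3×3 zero-padded convolutions with batch normalization, 512×512 inputs). The tiny two-level encoder-decoder, the class name TinyUNet, and the dummy training data are illustrative assumptions standing in for the authors' full modified U-Net, not their actual implementation.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, stride=1):
    # 3x3 convolution with zero padding, batch normalization, and ReLU
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    # Stand-in for the modified U-Net: one strided-conv downsampling step,
    # one upsampling step, and a single skip connection.
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)                 # 512x512 feature maps
        self.down = conv_block(16, 32, stride=2)      # 256x256 via strided conv
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec1 = conv_block(32, 16)                # after skip concatenation
        self.head = nn.Conv2d(16, 1, kernel_size=1)   # per-pixel brain/non-brain logit

    def forward(self, x):
        e1 = self.enc1(x)
        d = self.up(self.down(e1))
        d = self.dec1(torch.cat([d, e1], dim=1))      # skip connection
        return self.head(d)

# Dummy data standing in for the manually segmented 512x512 training slices.
images = torch.rand(4, 1, 512, 512)
masks = (torch.rand(4, 1, 512, 512) > 0.5).float()
train_loader = [(images, masks)]

model = TinyUNet()
criterion = nn.BCEWithLogitsLoss()          # binary cross-entropy on logits
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

for epoch in range(20):                     # loss reported to plateau after ~20 epochs
    for image, mask in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(image), mask)
        loss.backward()
        optimizer.step()
    scheduler.step()                        # halve the learning rate every 10 epochs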

Results

Figure 1 shows the segmented areas of a fetal brain at a gestational age of 31 weeks in the test dataset. The green contours represent the manually segmented ground truth. The red contours are the segmentations predicted by the trained deep learning network and by FSL's BET program. Where the ground truth and predicted contours overlap, they are shown in yellow. The results show that the deep-learning-based segmentation closely resembles the manual segmentation in shape and size. The quantitative measures for performance evaluation are shown in Table 1. In addition to the Dice coefficient, the IOU score provides an intuitive way to illustrate the resemblance between the ground truth and predicted segments. Both techniques have high sensitivity and specificity scores. A sketch of how these measures can be computed follows.
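The sketch below shows one way to compute the measures reported in Table 1 (IOU, Dice coefficient, sensitivity, specificity) from binary prediction and ground-truth masks. The function and variable names, and the toy masks used in the example, are illustrative assumptions rather than the authors' evaluation code.

import numpy as np

def segmentation_scores(pred, truth):
    # pred, truth: binary masks of the same shape (1 = brain, 0 = background)
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # true positives
    fp = np.logical_and(pred, ~truth).sum()   # false positives
    fn = np.logical_and(~pred, truth).sum()   # false negatives
    tn = np.logical_and(~pred, ~truth).sum()  # true negatives
    return {
        "IOU": tp / (tp + fp + fn),                 # intersection over union
        "Dice": 2 * tp / (2 * tp + fp + fn),        # Dice coefficient
        "Sensitivity": tp / (tp + fn),              # true positive rate
        "Specificity": tn / (tn + fp),              # true negative rate
    }

# Example on toy masks (two overlapping squares)
pred = np.zeros((512, 512), dtype=np.uint8);  pred[100:300, 100:300] = 1
truth = np.zeros((512, 512), dtype=np.uint8); truth[120:320, 120:320] = 1
print(segmentation_scores(pred, truth))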

Discussion

Segmentation usually serves as a first step for quantitative analysis. Deep-learning-based techniques have demonstrated strong performance on challenging medical image segmentation tasks in recent years. Our results show that the trained network can accurately extract the brain regions of fetuses at similar GAs. A further study may compare the segmentation performance of a single one-fits-all network trained across all gestational ages with that of multiple networks trained for different GA stages.

Acknowledgements

No acknowledgement found.

References

  1. Makropoulos A, Counsell S, Rueckert D. A Review on Automatic Fetal and Neonatal Brain MRI Segmentation. NeuroImage. 2018;170:231-248.
  2. Keraudren K, Kuklisova-Murgasova M, Kyriakopoulou V, Malamateniou C, Rutherford MA, Kainz B, Hajnal JV, Rueckert D. Automated fetal brain segmentation from 2D MRI slices for motion correction. Neuroimage. 2014; 101:633-43.
  3. Tourbier S, Velasco-Annis C, Taimouri V, Hagmann P, Meuli R, Warfield S, Cuadra M, Gholipour A. Automated Template-based Brain Localization and Extraction for Fetal Brain MRI Reconstruction. NeuroImage. 2017;155:460-472.
  4. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. 2015. Medical Image Computing and Computer-Assisted Intervention (MICCAI), Springer, LNCS, 9351: 234-241.

Figures

Figure 1: Left: The contours of FSL's BET predicted segmentation (in red) and the ground truth (in green) overlaid on a fetal brain image volume. Right: The contours of deep learning predicted segmentation (in red) and the ground truth (in green) overlaid on the same fetal brain image volume. The yellow contour indicates the overlap between the contours of the ground truth and predicted segmentation.

Table 1: Performance Comparison with Commonly Used Measures in Medical Image Segmentation
