3481

Learning 4D Probabilistic Atlas of Fetal Brain with Multi-channel Registration Network
Yuchen Pei1, Fenqiang Zhao1, Liangjun Chen1, Zhengwang Wu1, Tao Zhong1, Ya Wang1, Li Wang1, He Zhang2, and Gang Li1
1Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States, 2Department of Radiology, Obstetrics and Gynecology Hospital, Fudan University, Shanghai, China

Synopsis

Brain atlases are of fundamental importance for analyzing the dynamic neurodevelopment of fetal brains. Since brain size, shape, and structure change rapidly during prenatal development, it is essential to construct a spatiotemporal (4D) atlas with tissue probability maps for accurately characterizing dynamic changes in fetal brains and providing tissue priors for segmentation of fetal brain MR images. We propose a novel unsupervised learning framework for building multi-channel atlases by incorporating tissue segmentation. Based on 98 healthy fetuses from 22 to 36 gestational weeks, the learned 4D fetal brain atlas includes intensity templates, corresponding tissue probability maps, and parcellation maps.

Introduction

Since fetal brain size, shape, and structure change rapidly during early brain development, fetal brain atlases should be spatiotemporal (4D) to densely cover multiple time points. However, existing brain atlas construction methods[1,2] typically perform several rounds of group-wise registration, which involve co-registering the subjects and averaging the intensity values at each voxel. Multiple atlases can be constructed for different subgroups, but this demands a significant amount of time and expertise. Recently, a deep learning-based atlas construction framework[3,4] has proven computationally more efficient by jointly learning an atlas synthesis network and an unsupervised registration network. The network synthesizes a template and produces a corresponding deformation field that aligns the template to the input image. Our work is inspired by the aforementioned work, but incorporates multi-channel inputs to further enhance image alignment based not only on the relatively noisy intensity information but also on the reliable tissue segmentation maps, thus obtaining a high-quality spatiotemporal fetal brain atlas and its corresponding tissue probability maps.
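To make the idea of an age-conditioned atlas generator concrete, below is a minimal NumPy sketch (the weights are random stand-ins, not the trained network; all sizes and names are illustrative): a toy decoder maps a gestational age $$$t$$$ to a multi-channel template volume, with a softmax enforcing that the tissue channels form probability maps.

```python
import numpy as np

rng = np.random.default_rng(0)
D = H = W = 8          # toy volume size (real volumes are far larger)
C = 4                  # channels: T2w intensity + WM/GM/CSF probability maps
HID = 16               # hidden width of the toy decoder

# Randomly initialized toy weights standing in for the trained generator G.
W1 = rng.normal(size=(1, HID))
W2 = rng.normal(size=(HID, C * D * H * W)) * 0.01

def synthesize_atlas(t_weeks):
    """G(t): map a gestational age (in weeks) to a (C, D, H, W) template."""
    t = np.array([[(t_weeks - 22.0) / 14.0]])    # normalize 22-36 weeks to [0, 1]
    h = np.tanh(t @ W1)                          # age embedding
    out = (h @ W2).reshape(C, D, H, W)
    # Softmax over the tissue channels so channels 1-3 sum to 1 at every voxel.
    tissue = np.exp(out[1:]) / np.exp(out[1:]).sum(axis=0, keepdims=True)
    return np.concatenate([out[:1], tissue], axis=0)

atlas_30w = synthesize_atlas(30)
assert atlas_30w.shape == (4, 8, 8, 8)
assert np.allclose(atlas_30w[1:].sum(axis=0), 1.0)   # valid probability maps
```

Because the age enters as a continuous input, templates can be synthesized at arbitrary time points between 22 and 36 weeks, which is what makes the atlas densely spatiotemporal.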

Materials and Methods

The MRI data for atlas construction in this study were obtained from 98 healthy fetuses scanned at gestational ages from 22 to 36 weeks. All scans were acquired on a 1.5T Siemens Avanto scanner with a resolution of 0.54×0.54×4.4 mm3. Preprocessing, including brain localization, extraction[5], and super-resolution volume reconstruction[6] from 2D stacks, was performed to generate 3D brain volumes with an isotropic resolution of 0.8×0.8×0.8 mm3. Brain tissues were segmented into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF)[7], and then manually corrected by experts.

The network architecture is shown in Fig. 2. Let $$$V=\{V_i^j\}$$$ denote a fetal volumetric dataset containing $$$n$$$ subjects ($$$i=1,\dots,n$$$), each with a T2w image and three tissue label maps ($$$j=0,1,2,3$$$, representing the T2w image, WM, GM, and CSF, respectively), and let $$$t_i$$$ denote the gestational age of subject $$$i$$$. We aim to jointly train an atlas synthesis network $$$G$$$ and a U-Net that aligns the multi-channel atlas to individual images. In order to enforce tissue correspondence among the multi-channel inputs, we concatenate the intensity image (T2w MRI) and the tissue segmentation maps as input to the U-Net and incorporate a tissue map similarity loss into our loss function. The objective function used to optimize our learning framework is:
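As a concrete illustration of the multi-channel input described above (a minimal NumPy sketch, not the training code; array names and sizes are illustrative), a subject's T2w volume and three tissue maps are concatenated along a channel axis before entering the registration U-Net:

```python
import numpy as np

# Toy volume size; the real volumes are isotropic 0.8 mm reconstructions.
D, H, W = 8, 8, 8
rng = np.random.default_rng(0)

t2w = rng.random((D, H, W))    # V_i^0: intensity (T2w) image
wm  = rng.random((D, H, W))    # V_i^1: white-matter map
gm  = rng.random((D, H, W))    # V_i^2: gray-matter map
csf = rng.random((D, H, W))    # V_i^3: CSF map

# Stack the four channels; together with the 4-channel atlas A_i = G(t_i),
# this tensor forms the input to the registration U-Net.
subject = np.stack([t2w, wm, gm, csf], axis=0)
assert subject.shape == (4, D, H, W)
```

The registration network thus sees the reliable tissue channels alongside the noisier intensity channel, which is what drives the improved alignment reported in the Results.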

$$$L=\sum_i \|V_i^0 - A_i^0 \circ \Phi_i\|^2 - \sum_i\sum_{j=1}^{3}\mathrm{NCC}(V_i^j, A_i^j \circ \Phi_i) + \frac{\lambda_d}{2}\|\bar{u}\|^2 + \frac{\lambda_a}{2}\sum_i\|u_i\|^2 + \frac{\lambda_s}{2}\sum_i\|\nabla u_i\|^2$$$

where $$$A_i = G(t_i)$$$ represents the synthesized multi-channel atlas at time point $$$t_i$$$, $$$A_i^0$$$ the intensity atlas, $$$A_i^j$$$ ($$$j=1,2,3$$$) the tissue probability maps, and $$$\Phi_i$$$ the deformation field aligning the atlas to subject $$$i$$$. The first term enforces the similarity between the moved intensity image and the individual intensity image. The second term enforces the similarity between the moved tissue probability maps and the individual tissue maps, where NCC denotes normalized cross-correlation. The remaining terms regularize the unbiasedness, magnitude, and smoothness of the displacement fields $$$u_i = \Phi_i - \mathrm{Id}$$$. To facilitate ROI-based analysis, we finally warped the CRL fetal brain atlas[2] with 126 regions onto our atlases.
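The objective above can be sketched term by term in plain NumPy (a simplified illustration, not the actual implementation: NCC is computed globally rather than over local windows, the warping $$$A_i \circ \Phi_i$$$ is assumed to have been applied already by a spatial transformer, and the weights are placeholder values):

```python
import numpy as np

def ncc(a, b, eps=1e-8):
    """Global normalized cross-correlation between two volumes."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def grad_sq(u):
    """Sum of squared finite-difference gradients of a (3, D, H, W) field."""
    return sum(float((np.diff(u, axis=ax) ** 2).sum()) for ax in (1, 2, 3))

def atlas_loss(V, A_warped, u, lam_d=1.0, lam_a=0.01, lam_s=0.1):
    """V, A_warped: per-subject (4, D, H, W) arrays (channel 0 = intensity,
    1-3 = tissue maps); u: per-subject (3, D, H, W) displacement fields."""
    # Term 1: intensity similarity (sum of squared differences).
    intensity = sum(float(((v[0] - a[0]) ** 2).sum())
                    for v, a in zip(V, A_warped))
    # Term 2: tissue-map similarity (negative NCC over the 3 tissue channels).
    tissue = -sum(ncc(v[j], a[j])
                  for v, a in zip(V, A_warped) for j in (1, 2, 3))
    # Terms 3-5: unbiasedness (mean displacement), magnitude, smoothness.
    u_bar = np.mean(u, axis=0)
    reg = (lam_d / 2) * float((u_bar ** 2).sum()) \
        + (lam_a / 2) * sum(float((ui ** 2).sum()) for ui in u) \
        + (lam_s / 2) * sum(grad_sq(ui) for ui in u)
    return intensity + tissue + reg
```

With identical inputs and zero displacement fields the similarity terms reach their optimum (intensity term 0, NCC of 1 per tissue channel) and the regularizers vanish, which is a quick sanity check on the sign conventions.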

Results and Discussion

Fig. 1 shows examples of typical axial slices from the constructed 4D volumetric atlas from 22 to 36 gestational weeks. From left to right, each column corresponds to the intensity image, the tissue probability maps for GM, WM, and CSF, and the parcellation maps. From this figure, we can observe distinct morphological changes between adjacent gestational stages. Fig. 3 shows two atlases at 32 gestational weeks constructed by different methods: the top row illustrates the atlas constructed by the network incorporating the tissue map similarity loss, and the bottom row shows the atlas constructed by the network without it. We can observe that the atlas built with the tissue map similarity loss preserves more structural details and its corresponding tissue probability maps are sharper.

Conclusion

We present a novel learning-based atlas construction framework to efficiently and accurately build a multi-channel 4D fetal brain atlas. The constructed 4D atlas preserves more structural details for accurately mapping fetal brain development. The tissue probability maps and parcellation maps are also provided, thus offering a valuable reference and resource for fetal brain development studies. Our 4D fetal brain atlas will be released to the community soon.

Acknowledgements

This work was partially supported by NIH grants (MH116225, MH117943).

References

[1] Serag A, Kyriakopoulou V, Rutherford M A, et al. A multi-channel 4D probabilistic atlas of the developing brain: application to fetuses and neonates[J]. Annals of the BMVA, 2012, 2012(3): 1-14.

[2] Gholipour A, Rollins C K, Velasco-Annis C, et al. A normative spatiotemporal MRI atlas of the fetal brain for automatic segmentation and analysis of early brain growth[J]. Scientific reports, 2017, 7(1): 1-13.

[3] Dalca A V, Rakic M, Guttag J, et al. Learning conditional deformable templates with convolutional networks[C]//Advances in Neural Information Processing Systems. 2019: 806-818.

[4] Evan M Y, Dalca A V, Sabuncu M R. Learning Conditional Deformable Shape Templates for Brain Anatomy[C]//International Workshop on Machine Learning in Medical Imaging. Springer, Cham, 2020: 353-362.

[5] Liao L, Zhang X, Zhao F, et al. Joint Image Quality Assessment and Brain Extraction of Fetal MRI Using Deep Learning[C]//International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2020: 415-424.

[6] Ebner M, Wang G, Li W, et al. An automated framework for localization, segmentation and super-resolution reconstruction of fetal brain MRI[J]. NeuroImage, 2020, 206: 116324.

[7] Pei Y, Wang L, Zhao F, et al. Anatomy-Guided Convolutional Neural Network for Motion Correction in Fetal Brain MRI[C]//International Workshop on Machine Learning in Medical Imaging. Springer, Cham, 2020: 384-393.

Figures

Fig. 1. 4D fetal brain volumetric atlas, including intensity templates (a), tissue probability maps (b-d) and parcellation maps (e) at 9 time points from 22 to 36 gestational weeks.

Fig. 2. The proposed network architecture.

Fig. 3. Qualitative comparison of the two atlases at 32 gestational weeks. Top row: the atlas constructed by the network with the tissue similarity loss. Bottom row: the atlas constructed by the network without the tissue similarity loss.

Proc. Intl. Soc. Mag. Reson. Med. 29 (2021)