
Self-Supervised Transfer Learning for Infant Cerebellum Segmentation with Multi-Domain MRIs
Yue Sun1, Kun Gao1, Shihui Ying1, Weili Lin1, Gang Li1, Sijie Niu1, Mingxia Liu1, and Li Wang1
1Department of Radiology and Biomedical Research Imaging Center, UNC at Chapel Hill, Chapel Hill, NC, United States

Synopsis

This study develops a self-supervised transfer learning (SSTL) framework to generate reliable cerebellum segmentations for infant subjects with multi-domain MRIs, aiming to alleviate the domain shift between different time-points/sites and improve generalization ability. Experiments demonstrate that, by transferring limited manual labels from late time-points (or a specific site) with high tissue contrast to early time-points (or other sites) with low contrast, our method achieves improved performance. The framework can also be applied to other tasks, especially those involving multi-site data.

Introduction

The cerebellum is a rapidly developing structure in the early postnatal years1 (see Figure 1). Accurate segmentation of the cerebellum into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) is essential to characterize early cerebellum development. To the best of our knowledge, there is no previous work on cerebellum tissue segmentation for infants younger than 24 months of age, and the task is challenging for three reasons. First, the number of training subjects is often limited, especially for 0-month-old infants. Second, there is a domain shift caused by the different distributions of images acquired with different imaging protocols/scanners at different imaging centers2. Third, anatomical errors often appear during infant cerebellum segmentation. To alleviate these challenges, we propose a self-supervised transfer learning (SSTL) framework for infant cerebellum segmentation with multi-domain MRIs.

Methods

The proposed SSTL framework, shown in Figure 2, is designed to reduce the effort of manual annotation and generate accurate cerebellum segmentations for multi-domain infant MRIs. It consists of the following major components.
1) Based on the fact that cerebellum MRIs at early time-points (e.g., 0 months) exhibit extremely low tissue contrast, whereas the 24-month-old cerebellum shows much higher contrast, we first transfer manual labels of infants at late time-points (or a specific site) to early time-points (or other sites) via a segmentation model. The ADU-Net3 is utilized as the backbone of our segmentation model. A total of 18 subjects (with paired T1- and T2-weighted MRIs) at the 24-month time-point and their corresponding manual labels are used to train the ADU-Net, generating probability maps of the different tissues.
2) Inspired by a previous study4, we design a confidence model to evaluate the reliability of the automatically generated segmentation at each voxel, and apply the U-Net structure5 with contracting and expansive paths to achieve this task. The error map, defined as the voxel-wise difference between the manual labels and the automatic segmentations, is used as the target to train the confidence model (a minimal sketch of this target is given after this list).
3) To alleviate the domain shift issue, we automatically generate a set of reliable training samples for the other time-points/sites. Specifically, to deal with the limited number of training subjects at early time-points or other sites, we utilize the confidence map generated by the trained confidence model to automatically identify anatomical errors in the automatic segmentations and select reliable training samples for each time-point/site. We then retrain the segmentation model (i.e., ADU-Net) on the generated training set, guided by our proposed spatially-weighted cross-entropy loss function, which is defined as
$$L_{seg-weights}=-w\sum_{i=1}^{C}y_{i}\ln x_{i}$$
where $$$C$$$ is the number of classes ($$$C=4$$$ in this work, i.e., background, CSF, GM, and WM), $$$x_{i}$$$ represents the predicted probability map, $$$y_{i}$$$ is the target, and $$$w$$$ denotes the voxel-wise weights from the confidence map (a sketch of this loss is given below).
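
A minimal sketch of the error-map target used to train the confidence model in step 2; the array names and the label encoding are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def error_map_target(manual_label: np.ndarray, auto_seg: np.ndarray) -> np.ndarray:
    """Voxel-wise disagreement between manual labels and automatic segmentations.

    Both inputs are integer label volumes (assumed encoding: 0=background,
    1=CSF, 2=GM, 3=WM). The returned map is 1 where the automatic segmentation
    differs from the manual label and 0 elsewhere, and serves as the training
    target of the confidence (U-Net) model.
    """
    return (manual_label != auto_seg).astype(np.float32)
```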
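
A hedged PyTorch sketch of the spatially-weighted cross-entropy loss defined above; the tensor names, shapes, and the softmax-probability input are assumptions rather than the authors' code.

```python
import torch

def spatially_weighted_ce(probs: torch.Tensor,
                          target_onehot: torch.Tensor,
                          confidence_weights: torch.Tensor,
                          eps: float = 1e-8) -> torch.Tensor:
    """probs:              (B, C, D, H, W) predicted probabilities x_i (softmax outputs)
       target_onehot:      (B, C, D, H, W) one-hot targets y_i
       confidence_weights: (B, 1, D, H, W) voxel-wise weights w from the confidence map
    """
    # Per-voxel cross-entropy: -sum_i y_i * ln(x_i) over the C classes
    ce = -(target_onehot * torch.log(probs + eps)).sum(dim=1, keepdim=True)
    # Weight each voxel by its confidence and average over the volume
    return (confidence_weights * ce).mean()
```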

Results

In this study, T1- and T2-weighted infant brain MRIs were randomly chosen from the UNC/UMN Baby Connectome Project (BCP)6, in which subjects were scanned at around 0, 3, 6, 9, 12, 18, and 24 months of age on a Siemens Prisma scanner. The 24-month-old subjects with manual labels are used as training data, while subjects at earlier time-points are used as testing data.
We compare our SSTL method with volBrain7, ASD-Net4, and ADU-Net3. volBrain is an automated MRI brain volumetry system (https://volbrain.upv.es/index.php), ASD-Net is an attention-based semi-supervised deep learning framework, and ADU-Net is the backbone of our segmentation model. Figure 3 presents the comparison of segmentation results among these methods on 18-, 12-, 9-, 6-, 3-, and 0-month-old testing subjects. The input T1- and T2-weighted MRIs and the manual labels are shown in the first and last rows, respectively. Compared with the other methods, the cerebellum segmentations of the proposed method are much more consistent with the manual labels (see the fifth row of Figure 3). In addition, we perform the Wilcoxon signed-rank test to assess the statistical significance of the difference between our SSTL and each competing method in terms of Dice ratio, with results reported in Figure 4 (a sketch of this evaluation is given below). When testing on younger subjects, the Dice ratio gradually decreases, but the proposed method still generates more reliable results than the competing methods.
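
A minimal sketch, assuming NumPy and SciPy, of how the per-subject Dice ratio and the paired Wilcoxon signed-rank test reported in Figure 4 could be computed; the function and variable names are illustrative, not the authors' evaluation code.

```python
import numpy as np
from scipy.stats import wilcoxon

def dice_ratio(seg: np.ndarray, manual: np.ndarray, label: int) -> float:
    """Dice overlap of one tissue class (e.g., WM) between a segmentation and the manual label."""
    a, b = (seg == label), (manual == label)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Paired per-subject Dice scores for SSTL and one competing method
# (hypothetical arrays of equal length, one entry per testing subject):
# dice_sstl, dice_other = np.array([...]), np.array([...])
# stat, p_value = wilcoxon(dice_sstl, dice_other)
```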

Discussion and Conclusion

We propose an SSTL framework for infant cerebellum segmentation to deal with the domain shift caused by different time-points/sites. It achieves superior results compared with competing methods, especially for younger infants. To the best of our knowledge, this is among the first attempts to segment the cerebellum of infants younger than 24 months.
Our framework is general and can be applied to other tasks, especially those involving multi-site data. We further validate the proposed SSTL method on the cerebrum segmentation task of the iSeg-2019 challenge2, which contains three testing sites (i.e., BCP, Stanford University, and Emory University). The comparison between our SSTL and the three top-ranked methods (i.e., QL111111, Tao_SMU, and FightAutism) is presented in Figure 5. The proposed method achieves the highest Dice ratio on GM and WM segmentation on all three sites and has the smallest variance of Dice ratios across the three sites. In future work, we will validate our method on more subjects from multiple sites.

Acknowledgements

This work was supported in part by National Institutes of Health grants MH109773, MH116225, and MH117943. This work utilizes approaches developed by an NIH grant (1U01MH110274) and the efforts of the UNC/UMN Baby Connectome Project Consortium.

References

1. Wolf U, Rapoport M J, and Schweizer T A. Evaluating the affective component of the cerebellar cognitive affective syndrome. J Neuropsychiatry Clin Neurosci. 2009; 21(3): 245-53.

2. Sun Y, Gao K, Wu Z, et al. Multi-Site Infant Brain Segmentation Algorithms: The iSeg-2019 Challenge. 2020.

3. Wang L, Li G, Shi F, et al. Volume-Based Analysis of 6-Month-Old Infant Brain MRI for Autism Biomarker Identification and Early Diagnosis. in MICCAI. 2018; 11072: 411-419.

4. Nie D, Gao Y, Wang L, et al. ASDNet: Attention Based Semi-supervised Deep Networks for Medical Image Segmentation. in MICCAI. 2018; 370-378.

5. Ronneberger O, Fischer P, and Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. in MICCAI. 2015; 234-241.

6. Howell B R, Styner M A, Gao W, et al. The UNC/UMN Baby Connectome Project (BCP): An overview of the study design and protocol development. NeuroImage. 2019; 185: 891-905.

7. Manjón J V and Coupé P. volBrain: An Online MRI Brain Volumetry System. Front Neuroinform. 2016; 10: 30.

Figures

Figure 1: Infant cerebellum from 0 month to 24 months, shown in T1- and T2-weighted MRIs.

Figure 2: Flowchart of the proposed method. 1) A segmentation model is trained with the supervision of limited manual labels at one time-point/site; 2) a confidence model is then trained to evaluate the reliability of the segmentation at each voxel; 3) top-ranked subjects with their automatic segmentations are automatically selected as the training set to retrain a segmentation model for the remaining time-points/sites, guided by the proposed spatially-weighted cross-entropy loss.

Figure 3: Segmentation comparison of the infant cerebellum at around 18, 12, 9, 6, 3, and 0 months. The first row displays the infant cerebellum MRIs at each time-point (left: T1w image, right: T2w image). From the second to the fifth row, the segmentations are obtained from volBrain, ASD-Net, ADU-Net, and our proposed method, respectively. The corresponding manual labels are shown in the last row. For easier comparison among these results, some regions are indicated by red dotted lines with zoomed views.

Figure 4: Performance comparison between the three competing methods and our proposed method on 30 testing subjects at 18, 12, 9, 6, 3, and 0 months of age from BCP, in terms of Dice ratio.

Figure 5: Performance comparison between the top 3 methods and our proposed method on testing subjects from three sites (i.e., BCP, Stanford University, and Emory University sites in the iSeg-2019 challenge), in terms of Dice ratio.

Proc. Intl. Soc. Mag. Reson. Med. 29 (2021) 3482