
Development of U-Net Breast Density Segmentation Method for Fat-Sat T1-Weighted Images Using Transfer Learning from Model for Non-Fat-Sat Images
Yang Zhang1, Jeon-Hor Chen1,2, Kai-Ting Chang1, Siwa Chan3, Huay-Ben Pan4, Jiejie Zhou5, Ouchen Wang6, Meihao Wang5, and Min-Ying Lydia Su1

1Department of Radiological Sciences, University of California, Irvine, CA, United States, 2Department of Radiology, E-Da Hospital and I-Shou University, Kaohsiung, Taiwan, 3Department of Medical Imaging, Taichung Tzu-Chi Hospital, Taichung, Taiwan, 4Department of Radiology, Kaohsiung Veterans General Hospital, Kaohsiung, Taiwan, 5Department of Radiology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China, 6Department of Thyroid and Breast Surgery, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China

Synopsis

U-Net deep learning is a feasible method for segmentation of the breast and fibroglandular tissue on non-fat-suppressed (non-fat-sat) T1-weighted images. Whether it can also work on fat-sat images, which are more commonly used for diagnosis, was studied here. Three datasets were used: a training set of 126 patients and two independent testing sets of 62 (Set-A) and 41 (Set-B) patients. The model was developed with and without transfer learning based on the parameters of the previous model developed for non-fat-sat images. The results show that U-Net can also achieve high segmentation accuracy for fat-sat images, and that when the number of training cases is small, transfer learning can help to improve accuracy.

Introduction

Breast MRI is an important imaging modality for the management of breast cancer. With well-established clinical indications, the number of MRI examinations is increasing rapidly, and large datasets have gradually become available. MRI is also recommended for women at high risk of developing breast cancer, as assessed by several validated risk prediction models. Breast density is a known risk factor, and quantitative measurement of density may help to improve the accuracy of the risk models [1]. Many semi-automatic and automatic breast MRI segmentation methods have been developed [2,3], but the need for operator input or manual post-processing correction hampers their clinical use. In a previous study we developed an automatic segmentation method using a Fully-Convolutional Residual Neural Network (FCR-NN), or U-Net, for non-fat-sat T1-weighted MRI of 286 patients [4], which achieved high accuracy. For diagnosis of breast cancer, fat-suppressed (fat-sat) images are used more often, and whether the developed method can be applied or modified to perform breast density segmentation on fat-sat images has not been studied before; this is the purpose of the present work. Three datasets from different institutions were used. The largest, with 126 patients, was used for training, with or without transfer learning based on the previously developed model for non-fat-sat images. The developed segmentation model was then applied to two independent datasets to evaluate its accuracy.

Methods

The previous non-fat-sat image dataset from 286 patients was acquired using a Siemens 3T scanner. In this study, the fat-sat dataset used for training came from 126 breast cancer patients (age range 22-75 years, mean 49) scanned on a Siemens 1.5T system. The pre-contrast images of the DCE sequence were used for analysis, acquired using fat-suppressed 3D-FLASH with TR/TE=4.50/1.82 ms, flip angle=12°, matrix size=512x512, FOV=32 cm, and slice thickness=1.5 mm. Only the normal breast was analyzed. The ground truth segmentation of the breast and fibroglandular tissue was generated using a template-based method [2,3].

Deep learning segmentation was performed using the U-Net [5,6], as shown in Figure 1. It is a fully convolutional residual network, consisting of convolution and max-pooling layers in the descending part (the left component of the U), and convolution and up-sampling layers in the ascending part (the right component of the U). For transfer learning, the initial values of the trainable parameters were taken from the previous model trained on non-fat-sat images [4]. Without transfer learning, the model was developed using cross-validation within the training dataset, with the trainable parameters initialized using the He normal method [7]. To investigate training efficiency, we used 10 cases, 20 cases, … up to all 126 cases to develop the segmentation models.

Two independent validation datasets, Testing Set-A and Testing Set-B, came from patients receiving diagnostic MRI at two different institutions. Set-A included 62 patients (age range 28-70 years, mean 49) scanned on a Siemens 3T scanner using a fat-suppressed turbo spin echo sequence, with TR/TE=4.36/1.58 ms, FOV=30 cm, acquisition matrix=384x288, slice thickness=1 mm, and flip angle=10°. Set-B included 41 patients (age range 24-82 years, mean 52) scanned on a GE 3T scanner using the VIBRANT sequence, with TR/TE=5/2 ms, flip angle=10°, slice thickness=1.2 mm, FOV=34 cm, and matrix size=416x416. Only the normal breast was analyzed, and the ground truth was generated with the template-based method to evaluate segmentation performance. The Dice Similarity Coefficient (DSC) and the overall accuracy calculated from all pixels were reported.
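The abstract does not specify the software framework; the following is a minimal illustrative sketch in Python (assuming tensorflow.keras) of a plain U-Net with He normal initialization [7], and of how transfer learning from the non-fat-sat model [4] could be set up by loading its saved weights. The residual connections of the FCR-NN are omitted for brevity, and build_unet() and the weight file name are hypothetical placeholders, not the authors' code.

```python
# Illustrative sketch only; the framework, build_unet(), and the weight
# file name are assumptions, not the authors' implementation.
from tensorflow.keras import layers, models

def conv_block(x, filters):
    # Two 3x3 convolutions; He normal initialization as in reference [7]
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu",
                          kernel_initializer="he_normal")(x)
    return x

def build_unet(input_shape=(256, 256, 1), base_filters=32, depth=4):
    inputs = layers.Input(shape=input_shape)
    x, skips = inputs, []
    for d in range(depth):                       # descending part of the U
        x = conv_block(x, base_filters * 2 ** d)
        skips.append(x)                          # saved for skip connections
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, base_filters * 2 ** depth)
    for d in reversed(range(depth)):             # ascending part of the U
        x = layers.UpSampling2D(2)(x)
        x = layers.Concatenate()([x, skips[d]])  # arrows in Figure 1
        x = conv_block(x, base_filters * 2 ** d)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)  # probability map
    return models.Model(inputs, outputs)

model = build_unet()
USE_TRANSFER_LEARNING = True
if USE_TRANSFER_LEARNING:
    # Initialize from the model previously trained on non-fat-sat images [4]
    model.load_weights("unet_nonfatsat.h5")      # hypothetical file name
# otherwise keep the He normal initialization and train from scratch
```

Note that transfer learning in this form requires the fat-sat model to share the exact architecture of the pretrained non-fat-sat model, so that the saved weights map one-to-one onto the trainable parameters.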

Results

For breast segmentation, the foreground is the breast and the background is the non-breast area of the whole image. For fibroglandular tissue segmentation within the segmented breast, the foreground is the fibroglandular tissue and the background is the fatty tissue. Figures 2-3 show two case examples. The breast and fibroglandular tissue segmented using the template-based method and the U-Net model are similar. The case in Figure 3 has an obvious bias-field artifact in the medial part of the breast, which for the template-based method needs to be corrected using complicated algorithms, as reported in [3]. In the U-Net segmentation, this bright region is not misclassified as dense tissue, demonstrating a great strength of deep learning. Table 1 shows the DSC and accuracy in the training set and Testing Sets A and B. The results obtained with transfer learning are slightly better than those without. To further evaluate this effect, the DSC in Testing Sets A and B using models developed with different numbers of training cases (10, 20, …, up to 126) is plotted in Figure 4. The results show that when the number of training cases is small, the DSC is poor, but when transfer learning is applied, the DSC is improved substantially.
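For reference, the two reported metrics can be computed from binary prediction and ground-truth masks as follows; this is a generic sketch of the standard definitions, not the authors' evaluation code.

```python
import numpy as np

def dice_and_accuracy(pred, truth):
    # DSC = 2|P ∩ T| / (|P| + |T|); accuracy = fraction of correctly
    # labeled pixels over the whole image
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    dsc = 2.0 * intersection / (pred.sum() + truth.sum())
    accuracy = (pred == truth).mean()
    return dsc, accuracy
```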

Discussion

Developing an efficient and reliable breast density segmentation method may provide helpful information for a woman to assess her cancer risk more accurately and to choose an optimal screening and management strategy. In this study we tested a deep-learning method using the Fully-Convolutional Residual Neural Network (FCR-NN) reported by Dalmış et al. [6]. We previously demonstrated that the U-Net can perform segmentation on non-fat-sat images [4], and in this study we show that it can also be applied to fat-sat images, which have a lower signal-to-noise ratio, more severe artifacts, and lower fat-fibroglandular tissue contrast than non-fat-sat images. The results also demonstrate that when the number of training cases is limited, applying transfer learning can help to develop a good model and achieve high accuracy.

Acknowledgements

This work was supported in part by NIH R01 CA127927, R21 CA208938, and the Natural Science Foundation of Zhejiang (No. LY14H180006).

References

[1]. Kerlikowske K, Ma L, Scott CG, et al. Combining quantitative and qualitative breast density measures to assess breast cancer risk. Breast Cancer Research. 2017;19(1):97.
[2]. Lin M, Chen JH, Wang X, Chan S, Chen S, Su MY. Template-based automatic breast segmentation on MRI by excluding the chest region. Medical Physics. 2013;40(12).
[3]. Lin M, Chan S, Chen JH, et al. A new bias field correction method combining N3 and FCM for improved segmentation of breast density on MRI. Medical Physics. 2011;38(1):5-14.
[4]. Zhang Y, et al. Automatic breast and fibroglandular tissue segmentation using deep learning by a fully-convolutional residual neural network. Presented at the Joint Annual Meeting ISMRM-ESMRMB, Paris, France, June 16-21, 2018; Program Number: 2420.
[5]. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. Paper presented at: International Conference on Medical Image Computing and Computer-Assisted Intervention; 2015.
[6]. Dalmış MU, Litjens G, Holland K, et al. Using deep learning to segment breast and fibroglandular tissue in MRI volumes. Medical Physics. 2017;44(2):533-546.
[7]. He K, Zhang X, Ren S, Sun J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. Paper presented at: Proceedings of the IEEE International Conference on Computer Vision; 2015.

Figures

Figure 1: Architecture of the Fully-Convolutional Residual Neural Network (FCR-NN). The input of the network is the normalized image and the output is the probability map of the segmentation result. This U-Net consists of convolution and max-pooling layers in the descending part (the initial part of the U), which can be seen as the down-sampling stage. In the ascending part of the network, up-sampling operations are performed, which are also implemented by convolutions whose kernel weights are learned during training. The arrows between the two parts show the incorporation of the information available at the down-sampling steps into the up-sampling operations performed in the ascending part of the network.

Figure 2: An example illustrating the segmentation results: the normal right breast of a 64-year-old patient with breast cancer on the left side. A: The original fat-suppressed T1-weighted image. B: The ground truth breast segmentation generated by the template-based method, shown in green. The fibroglandular tissue within the breast, segmented using K-means clustering after bias-field correction, is shown in red. C: The segmentation results produced by the U-Net. The maps segmented by the two methods are similar. For breast segmentation, DSC is 0.96 and accuracy is 0.98; for fibroglandular tissue segmentation, DSC is 0.90 and accuracy is 0.95.

Figure 3: An example illustrating the segmentation results: the normal left breast of a 73-year-old patient with breast cancer on the right side. A: The original fat-suppressed T1-weighted image. There is a clear bias-field artifact in the medial part of the breast. B: The ground truth breast segmentation generated by the template-based method, shown in green. The fibroglandular tissue within the breast, segmented using K-means clustering after bias-field correction, is shown in red. C: The segmentation results produced by the U-Net. The bright bias-field artifact is not misclassified. The maps segmented by the two methods are similar. For breast segmentation, DSC is 0.90 and accuracy is 0.93; for fibroglandular tissue segmentation, DSC is 0.81 and accuracy is 0.84. Since the breast is fatty, it is difficult to segment the fibroglandular tissue with high accuracy.

Table 1: The Dice Similarity Coefficient (DSC) and Accuracy in the Training Set, Testing Set-A, and Testing Set-B Using the U-Net Models Developed with and without Transfer Learning

Figure 4: The plot of DSC in Testing Set-A and Set-B using models developed with different numbers of training cases (10, 20, …, up to 126), with and without transfer learning. When the number of training cases is small, the DSC is low, but when transfer learning is applied, the DSC is improved substantially. When a sufficient number of cases is used for training (>30 for breast segmentation, and >80 for fibroglandular tissue segmentation), the DSC achieved with and without transfer learning is comparable, being slightly better with transfer learning.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019); Program Number: 4740