Yang Zhang1, Jeon-Hor Chen1,2, Kai-Ting Chang1, Vivian Youngjean Park3, Min Jung Kim3, Siwa Chan4, Peter Chang1, Daniel Chow1, Alex Luk1, Tiffany Kwong1, and Min-Ying Lydia Su1
1Department of Radiological Sciences, University of California, Irvine, CA, United States, 2Department of Radiology, E-Da Hospital and I-Shou University, Kaohsiung, Taiwan, 3Department of Radiology and Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea, 4Department of Medical Imaging, Taichung Tzu-Chi Hospital, Taichung, Taiwan
Synopsis
Segmentation of the breast and fibroglandular tissue (FGT) using the U-net architecture was implemented with training MRI from 286 patients, and the developed model was tested on independent validation datasets from 28 healthy women acquired using 4 different MR scanners. The Dice similarity coefficient was 0.86 for the breast and 0.83 for FGT, and the accuracy was 0.94 for the breast and 0.93 for FGT. The results on MRI acquired using different MR scanners were similar. U-net provides a fully automatic and efficient segmentation method for large MRI datasets, which can be used to evaluate the role of breast density in breast cancer risk assessment and hormonal therapy response prediction.
Introduction
Breast MRI is an established clinical imaging
modality for high-risk screening, diagnosis, pre-operative staging and
neoadjuvant therapy response evaluation. The most common clinical indication
was diagnostic evaluation (40.3%), followed by screening (31.7%) [1]. Passage
of the breast density notification law has had a major impact on MRI
utilization, which increased from 8.5% to 21.1% among non-high-risk women after the law in California went into effect [2]. The increasing popularity of breast MRI has led to the rapid accumulation of large breast MRI databases. This offers
a great opportunity to address unanswered clinical questions regarding the use
of breast density, e.g. whether the volumetric density can be incorporated into
risk models to improve the prediction accuracy [3], or be used as a surrogate
biomarker to predict hormonal treatment efficacy [4,5]. Since MRI is a
three-dimensional (3D) imaging modality with distinctive tissue contrast, it
can be used to measure the fibroglandular tissue (FGT) volume. However, because
many imaging slices are acquired in one MRI, an efficient, objective, and
reliable segmentation method is needed. In a previous study, we developed an automatic segmentation method using the Fully-Convolutional Residual Neural Network (FC-RNN), commonly known as U-net, with non-fat-suppressed T1-weighted MRI of 286 patients [6]. In this study, we tested the segmentation performance of this developed method in 56 breasts of 28 healthy volunteers, with each woman scanned using 4 different MR scanners.
Methods
The initial dataset used for training included
286 patients with unilateral invasive breast cancer (median age, 49 years;
range, 30–80 years), and only the contralateral normal breast was analyzed. MRI
was performed on a 3T Siemens Trio-Tim scanner, and the pre-contrast
T1-weighted images without fat suppression were used for segmentation. The
ground truth was generated using a template-based segmentation method [7]. Deep learning segmentation
was performed using the U-net, which is a fully convolutional residual network [8] consisting of convolution and max-pooling layers in the descending part (the left component of the U) and convolution and up-sampling layers in the ascending part (the right component of the U). The final segmentation
model was developed using 10-fold cross-validation. The independent validation
dataset included 28 healthy volunteers (mean age, 35 years; range, 20–64 years). Each
subject was scanned using four different MR scanners in two institutions,
including GE Signa-HDx 1.5T, GE Signa-HDx 3T (GE Healthcare, Milwaukee, WI),
Philips Achieva 3.0T TX (Philips Medical Systems, Eindhoven, Netherlands) and
Siemens Symphony 1.5T TIM (Siemens, Erlangen, Germany). Non-contrast
T1-weighted images without fat suppression were used for segmentation. Since
both left and right breasts were normal, they were analyzed separately, so
there was a total of 56 breasts. The validation was done using the 56 breasts
acquired by each scanner first, and then using all 224 breasts acquired by all
4 scanners together. The ground truth was generated using the template-based
method for comparison. The segmentation performance was evaluated using the
Dice Similarity Coefficient (DSC) and the overall accuracy based on all pixels.
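For reference, a minimal sketch of these two metrics is shown below, assuming binary NumPy masks (1 = tissue, 0 = background) for the prediction and the ground truth of a single breast; this is an illustration, not the evaluation code used in this study.

```python
import numpy as np

def dice_coefficient(pred, truth):
    # DSC = 2|A intersect B| / (|A| + |B|), computed on binary masks.
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def pixel_accuracy(pred, truth):
    # Fraction of all pixels labeled identically in both masks.
    return float(np.mean(pred == truth))
```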
In addition, Pearson's correlation was applied to evaluate the correlation between the U-net prediction output and the ground truth.
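To illustrate the encoder-decoder structure described above, a minimal 2D U-net-style network is sketched in PyTorch below. The depth, channel sizes, and the three-class output (background, breast, FGT) are illustrative assumptions rather than the configuration used in this study; the skip connections that concatenate features from the descending part into the ascending part are the defining feature of the U shape.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, as in the original U-net design.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=3):  # e.g., background / breast / FGT
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)                           # descending part
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)   # ascending part
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.out = nn.Conv2d(32, n_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.out(d1)                                    # per-pixel class logits
```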
Results
The DSC and accuracy for each scanner were calculated separately, and then combined for all 4 scanners together. The
results are shown in Table 1. Figures 1 and 2 illustrate the
segmentation results of two women with different breast morphology. The
correlation between the U-net prediction output and ground truth for breast
volume is shown in Figure 3. The results obtained with the four different scanners were similar. The correlation coefficient r was high, in the range of 0.96–0.98. In
each figure, the fitted line was very close to the unity line, and the slope
was close to 1. The segmentation result for FGT volume
is shown in Figure 4. The FGT
segmentation results for MRI acquired using 4 different scanners were similar.
The correlation coefficient r was very high, in the range of 0.97–0.99. However, using the unity line as reference, the U-net-segmented FGT volume was lower than the ground truth, as also seen in the two case examples demonstrated in Figures 1 and 2.
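For completeness, the volume agreement analysis can be sketched as follows; the function names, the voxel-volume argument, and the use of SciPy are assumptions for illustration, not the analysis code used in this study.

```python
import numpy as np
from scipy import stats

def mask_volume_cm3(mask, voxel_volume_mm3):
    # Volume = number of segmented voxels x voxel volume (mm^3 converted to cm^3).
    return mask.sum() * voxel_volume_mm3 / 1000.0

def volume_agreement(pred_volumes, truth_volumes):
    # Pearson correlation and least-squares slope/intercept of the prediction
    # versus the ground truth; a slope near 1 indicates agreement with the unity line.
    r, p = stats.pearsonr(pred_volumes, truth_volumes)
    slope, intercept = np.polyfit(truth_volumes, pred_volumes, 1)
    return r, slope, intercept
```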
Discussion
The results suggest that deep learning segmentation using U-net is feasible for fully automatic segmentation of the breast and FGT and yields reasonable accuracy compared to the ground truth segmented using a template-based method verified by a radiologist. Over the last decade, segmentation of the breast and FGT on MRI has been studied using semi-automatic to automatic approaches requiring some operator input. The processing time for these methods varies from minutes to more than half an hour, partly due to the need for post-segmentation manual correction. Testing on independent validation datasets in this study allowed us to evaluate whether the developed segmentation method can be applied widely to other MRI datasets acquired using different imaging sequences on different MR scanners. This fully automatic, deep learning-based segmentation method may provide an accurate and efficient means of quantifying FGT volume for evaluation of breast density in very large datasets. The results can potentially be used for breast cancer risk assessment, and for evaluating the response in women receiving hormonal therapy or chemoprevention.
Acknowledgements
This study is supported in part by NIH R01
CA127927, R21 CA208938, and a Basic Science Research Program through the
National Research Foundation of Korea (NRF) funded by the Ministry of Education
(NRF-2017R1D1A1B03035995).
References
[1] Wernli et al. Patterns of breast magnetic resonance imaging use in community practice. JAMA Internal Medicine. 2014;174(1):125-132.
[2] Ram et al. Impact of the California Breast Density Law on Screening Breast MR Utilization, Provider Ordering Practices, and Patient Demographics. Journal of the American College of Radiology. 2018;15(4):594-600.
[3] Kerlikowske et al. Combining quantitative and qualitative breast density measures to assess breast cancer risk. Breast Cancer Research. 2017;19(1):97.
[4] Lundberg et al. Association of infertility and fertility treatment with mammographic density in a large screening-based cohort of women: a cross-sectional study. Breast Cancer Research. 2016;18(1):36.
[5] Chen et al. Reduction of breast density following tamoxifen treatment evaluated by 3-D MRI: preliminary study. Magnetic Resonance Imaging. 2011;29(1):91-98.
[6] Zhang et al. Automatic Breast and Fibroglandular Tissue Segmentation Using Deep Learning by a Fully-Convolutional Residual Neural Network. Presented at the Joint Annual Meeting ISMRM-ESMRMB, Paris, France, June 16-21, 2018; Program Number: 2420.
[7] Lin et al. Template-based automatic breast segmentation on MRI by excluding the chest region. Medical Physics. 2013;40(12).
[8] Ronneberger et al. U-net: Convolutional networks for biomedical image segmentation. Presented at: International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2015.