
Automatic breast lesion segmentation in MR images employing a dense attention fully convolutional network
Cheng Li1, Hui Sun2, Qiegen Liu3, Zaiyi Liu4, Meiyun Wang5, Hairong Zheng1, and Shanshan Wang1

1Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China, 2School of Control Science and Engineering, Shandong University, Shandong, China, 3Department of Electronic Information Engineering, Nanchang University, Nanchang, China, 4Department of Radiology, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China, 5Henan Provincial People's Hospital, Henan, China

Synopsis

Despite its high sensitivity, breast MR imaging suffers from relatively low specificity and a high false positive rate, making automatic breast lesion detection algorithms desirable. To this end, we propose a new network, the dense attention network (DANet), for breast lesion segmentation in MR images. In DANet, we design a feature fusion and selection mechanism: features from the corresponding encoder layer and from all previous decoder layers are fused by concatenation, and a channel attention module is introduced to highlight the information-rich channels. DANet achieved better segmentation results than commonly applied segmentation networks on our 2D contrast-enhanced T1-weighted breast MR dataset.

Introduction

Breast cancer is one of the most frequently diagnosed cancers in women, and early diagnosis is important for reducing breast cancer mortality. Among the available breast imaging techniques, MR imaging possesses the highest sensitivity [1]. However, MR imaging suffers from a relatively high false positive rate and low specificity [2], and manual reading is prone to errors [3]. Computer-aided detection/diagnosis (CAD) systems that can automatically and accurately detect breast lesions in MR images are therefore important. Currently available CAD systems mainly rely on manually defined features to locate and classify breast lesions [4]. A few studies have applied deep learning methods, such as UNet, to breast cancer detection [5], but the rapidly evolving models from computer vision have yet to be introduced to breast MR image processing. In this study, we propose a new network, the dense attention network (DANet), for the segmentation of breast lesions in MR images. Specifically, our model investigates the importance of feature fusion and selection for this task. Features from the corresponding encoder layer and from all previous decoder layers are fused by concatenation. To highlight the information-rich channels, a channel attention module is introduced. Results show that our method achieves better segmentation results than three commonly used fully convolutional networks (FCNs): UNet [6], FCDenseNet103 [7], and RefineNet101 [8].

Methods and Experimental Setup

The architecture of the proposed network is shown in Figure 1. Our network adopts the classical FCN architecture. There are five basic blocks in the encoder pathway, the first four of which are followed by max-pooling operations that downsample the input images to 1/16 of their original resolution in total. Bilinear upsampling is applied in the decoder to restore the image resolution. To improve segmentation performance, we fuse high- and low-level features by concatenation. A feature adaptation step cleans and prepares the features from the encoder pathway before fusion, and channel attention is applied right after the fusion to highlight the information-rich channels. Moreover, we add dense connections between the decoder blocks to fully utilize the features from different layers and to mitigate possible errors introduced by the upsampling process.
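As a concrete illustration, the sketch below shows one plausible PyTorch implementation of the fusion-and-attention step described above. The abstract does not specify the exact form of the modules, so the squeeze-and-excitation style of the channel attention, the 3x3 convolutions used for feature adaptation, and the reduction ratio are all our assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed form)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling: (B, C, H, W) -> (B, C, 1, 1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # per-channel weights in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # re-weight channels to highlight informative ones

class FusionBlock(nn.Module):
    """Fuse one encoder feature with all previous decoder features, then attend."""
    def __init__(self, encoder_ch, decoder_chs, out_ch):
        super().__init__()
        # feature adaptation: a 3x3 conv to "clean" encoder features before fusion
        self.adapt = nn.Sequential(
            nn.Conv2d(encoder_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        fused_ch = out_ch + sum(decoder_chs)
        self.attention = ChannelAttention(fused_ch)
        self.conv = nn.Sequential(
            nn.Conv2d(fused_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, enc_feat, dec_feats):
        size = enc_feat.shape[-2:]
        # bilinearly upsample every previous decoder feature to the current size
        ups = [nn.functional.interpolate(f, size=size, mode="bilinear",
                                         align_corners=False) for f in dec_feats]
        fused = torch.cat([self.adapt(enc_feat)] + ups, dim=1)  # dense fusion
        return self.conv(self.attention(fused))
```

In this sketch, each decoder stage would call FusionBlock with its matching encoder feature and the list of all earlier decoder outputs, which is what gives the decoder its dense connectivity.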

We evaluated the performance of the proposed DANet as well as the three comparison FCNs using contrast-enhanced T1-weighted (T1C) breast MR scans of 314 women. The MR images were collected between 2011 and 2017 on an Achieva 1.5T system (Philips Healthcare, Best, The Netherlands) with a four-channel phased-array breast coil. Axial T1C images with fat suppression were obtained after the intravenous injection of 0.3 mL/kg of gadodiamide (BeiLu Healthcare, Beijing, China) (TR/TE = 5.2 ms/2.3 ms, FOV = 300 mm x 320 mm, section thickness = 1 mm, and flip angle = 15°). Although 3D images were acquired, only the central slices with the largest cross-sectional areas were labelled by two experienced radiologists and processed in this study. 5-fold cross-validation experiments were conducted with 80% of the images for training and 20% for validation. The models were implemented in PyTorch on an NVIDIA TITAN Xp GPU (12 GB). Three independent experiments were performed, and the results are reported as mean ± standard deviation.
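For completeness, here is a minimal sketch of the cross-validation protocol described above, assuming a standard scikit-learn split over the 314 labelled slices; the use of KFold and all variable names are illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.model_selection import KFold

image_ids = np.arange(314)  # one labelled central slice per patient (per the text)

# 5 folds: each fold uses 80% of the images for training and 20% for validation
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kf.split(image_ids)):
    train_ids, val_ids = image_ids[train_idx], image_ids[val_idx]
    print(f"fold {fold}: {len(train_ids)} train / {len(val_ids)} val")
```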

Results and Discussion

The performance of the proposed DANet was compared with that of UNet, FCDenseNet103, and RefineNet101, quantified by the Dice similarity coefficient (DSC), sensitivity, and relative area difference (DA) (Figure 2). All three evaluation metrics indicate that our proposed DANet achieved the best segmentation results, obtaining the highest average DSC, the highest average sensitivity, and the lowest average DA. Unpaired t-tests showed significant differences between the DSC of DANet and that of each of the other three FCNs (p < 0.05). Segmentation results for two examples are shown in Figure 3. Our proposed DANet was better at delineating irregular lesion boundaries (example 1) and at reducing false positives (example 2).
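For reference, the sketch below computes the three reported metrics from binary masks. The abstract does not give an explicit formula for the relative area difference, so the definition used here, |A_seg - A_gt| / A_gt, is a common convention and an assumption on our part.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

def sensitivity(pred, target, eps=1e-8):
    """True positive rate: fraction of lesion pixels that are detected."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    return tp / (target.sum() + eps)

def relative_area_difference(pred, target, eps=1e-8):
    """Relative area difference (DA), assumed here as |A_seg - A_gt| / A_gt."""
    pred, target = pred.astype(bool), target.astype(bool)
    return abs(int(pred.sum()) - int(target.sum())) / (target.sum() + eps)
```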

Conclusion

In this study, we proposed a new network, DANet, for the segmentation of breast lesions in 2D MR images. Both quantitative and qualitative results show that DANet outperforms UNet, FCDenseNet103, and RefineNet101.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (61601450, 61871371, and 81830056) and the Science and Technology Planning Project of Guangdong Province (2017B020227012).

References

  1. Partridge S C, Rahbar H, Murthy R, et al. Improved diagnostic accuracy of breast MRI through combined apparent diffusion coefficients and dynamic contrast-enhanced kinetics. Magn. Reson. Med. 2011; 65(6): 1759–1767.
  2. Kuhl C. The current status of breast MR imaging Part I. Choice of technique, image interpretation, diagnostic accuracy, and transfer to clinical practice. Radiology. 2007; 244(2): 356–378.
  3. Vreemann S, Gubern-Merida A, Lardenoije S, et al. The performance of MRI screening in the detection of breast cancer in an intermediate and high risk screening program. 24th ISMRM. 2016; 0405.
  4. Pinker-Domenig K, Amirhessam T, Wengert G, et al. Radiomics with magnetic resonance imaging of the breast for early prediction of response to neo-adjuvant chemotherapy in breast cancer patients. ISMRM-ESMRMB. 2018; 0098.
  5. Dalmış M U, Vreemann S, Kooi T, et al. Fully automated detection of breast cancer in screening MRI using convolutional neural networks. J. Med. Imaging. 2018; 5(1): 014502-1–9.
  6. Ronneberger O, Fischer P, and Brox T. U-Net: Convolutional networks for biomedical image segmentation. MICCAI. 2015; 234–241.
  7. Jegou S, Drozdzal M, Vazquez D, et al. The one hundred layers tiramisu: Fully convolutional DenseNets for semantic segmentation. IEEE CVPR Workshops. 2017; 1175–1183.
  8. Lin G, Milan A, Shen C, et al. RefineNet: Multi-path refinement networks for high-resolution semantic segmentation. IEEE CVPR. 2017; 1925–1934.

Figures

Figure 1. Architecture of the proposed dense attention network (DANet).

Figure 2. Evaluation metrics of the different FCNs with (a) showing the Dice similarity coefficient, (b) showing the sensitivity, and (c) showing the relative area difference.

Figure 3. Segmentation results of different networks. From left to right, the columns correspond to the input images, the ground-truth masks, and the segmentation results of UNet, FCDenseNet103, RefineNet101, and our proposed DANet, respectively. The white lines indicate the boundaries of the labels, and the green lines indicate the boundaries of the segmentation results. The red number at the bottom right of each image is the Dice similarity coefficient of the segmentation result.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019); 1718