Automatic Search in Breast MRI Dataset for Detection of Suspicious Lesions Using Mask R-CNN
Yang Zhang1, Kai-Ting Chang1, Siwa Chan2, Peter Chang1, Daniel Chow1, Jeon-Hor Chen1,3, and Min-Ying Lydia Su1

1Department of Radiological Sciences, University of California, Irvine, CA, United States, 2Department of Medical Imaging, Taichung Tzu-Chi Hospital, Taichung, Taiwan, 3Department of Radiology, E-Da Hospital and I-Shou University, Kaohsiung, Taiwan

Synopsis

A Mask R-CNN algorithm was implemented to search entire breast MRI datasets and identify suspicious lesions for further diagnosis. A total of 102 patients with confirmed cancer were analyzed, yielding 2,314 positive cases (imaging slices containing a lesion) and 8,512 slices without a lesion as negative cases. The search yielded 1,943 true positives, 6,149 true negatives, 2,363 false positives, and 371 false negatives, corresponding to a sensitivity of 0.83, a specificity of 0.72, and an overall detection accuracy of 0.75. The Dice Similarity Coefficient of the tumor segmented within the detection box, compared to the ground truth, was 0.84.

Introduction

Breast MRI is a well-established imaging modality for the diagnosis of breast cancer. The analysis is usually performed by radiologists' visual interpretation of images or maps generated by subtraction, maximum intensity projection (MIP), the DCE time course, and color-coded DCE wash-out patterns. Because thin slices are acquired, many slices are needed to cover the entire breast, and carefully evaluating all of them takes considerable time and effort. For patients with multiple lesions or satellite lesions, smaller lesions may be overlooked. More advanced algorithms can be applied to detect lesions automatically and to further characterize them to provide a benign versus malignant diagnostic impression. Lesion detection is based mainly on contrast enhancement maps; however, strong enhancement in the chest and strong parenchymal enhancement may degrade the performance of search algorithms and lead to false positive diagnoses. Recently, deep learning, especially the convolutional neural network (CNN), has shown great potential in pattern recognition and object detection, and it may be applied to the automatic detection of breast lesions. The purpose of this study was to implement the Mask Region-based Convolutional Neural Network (Mask R-CNN) to search for and detect suspicious lesions in the entire image dataset. After the location of a lesion is correctly detected, the tumor is further segmented and the result is compared to the ground truth.

Methods

A total of 102 patients (age range 22-75 years, mean 48.5) with pathologically confirmed breast cancer were studied. If multiple lesions were found on one image slice, they were treated as separate cases. There were a total of 2,314 positive cases (imaging slices containing a lesion) and 8,512 slices without a lesion as negative cases. MRI was performed on a Siemens 1.5T system. The ground truth tumor was segmented on the contrast-enhanced maps. For mass tumors, a fuzzy C-means (FCM) clustering-based algorithm was applied [1]. For non-mass lesions, a rectangular box was first placed to cover the suspicious cancerous tissue. The signal intensity histograms of tissues inside and outside the rectangular ROI were obtained and fitted by two unnormalized Gaussian probability density functions, and the intersection between the two Gaussians was used as the threshold for region growing to obtain the tumor boundary. Based on the segmented tumor mask, the smallest bounding box covering each lesion was computed and used to evaluate the tumor box detected by the deep learning algorithm. The detection algorithm was implemented using the Mask R-CNN framework [2]. ResNet101 was selected as the backbone to build the Feature Pyramid Network (FPN), initialized with ImageNet pre-trained weights [3]. The architecture is shown in Figure 1. The number of input channels is 3: the slice to be processed is combined with its two neighboring slices. Focal Loss was selected as the loss function. As shown in Figure 1, the outputs of the FPN were sent to three sub-networks: a bounding box regression network, an object classification network, and a mask network. The bounding box regression network output the bounding boxes covering the lesion. The Intersection over Union (IoU) between the predicted bounding box and the ground truth was used to evaluate detection accuracy; a prediction was considered a true positive if the IoU exceeded 0.5. If no bounding box was detected on an image that did not contain a lesion, the prediction was considered a true negative. All true positive results were sent to the mask network to generate lesion masks, and the segmentation performance was evaluated using the Dice Similarity Coefficient (DSC). The final detection model was developed using 10-fold cross-validation. Minimal code sketches of the histogram thresholding, the network configuration, and the IoU criterion are given below.
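To make the histogram-thresholding step concrete, the following Python sketch is an illustration, not the authors' implementation: each histogram is fitted with an unnormalized Gaussian by weighted moments (the moment-based fit and the function names are our assumptions), and the intersection is found by equating the two log-Gaussians and solving the resulting quadratic.

```python
import numpy as np

def fit_unnormalized_gaussian(intensities, bins=64):
    """Fit A*exp(-(x-m)^2 / (2*s^2)) to an intensity histogram (moment-based sketch)."""
    hist, edges = np.histogram(intensities, bins=bins)
    x = 0.5 * (edges[:-1] + edges[1:])                    # bin centers
    m = np.average(x, weights=hist)                       # weighted mean
    s = np.sqrt(np.average((x - m) ** 2, weights=hist))   # weighted std
    A = hist.max()                                        # peak height (unnormalized)
    return A, m, s

def gaussian_intersection(A1, m1, s1, A2, m2, s2):
    """Intersection of two unnormalized Gaussians, obtained by equating the
    two log-Gaussians and solving the resulting quadratic in x."""
    a = 1.0 / (2 * s2**2) - 1.0 / (2 * s1**2)
    b = m1 / s1**2 - m2 / s2**2
    c = m2**2 / (2 * s2**2) - m1**2 / (2 * s1**2) + np.log(A1 / A2)
    roots = np.roots([a, b, c])          # np.roots also handles a == 0 (equal stds)
    roots = roots[np.isreal(roots)].real
    # Keep the root lying between the two means: the meaningful threshold.
    lo, hi = sorted([m1, m2])
    between = roots[(roots >= lo) & (roots <= hi)]
    return between[0] if between.size else roots[np.argmin(np.abs(roots - (m1 + m2) / 2))]

# Usage sketch: `inside` / `outside` are 1-D arrays of voxel intensities
# sampled inside and outside the rectangular ROI; `thr` then serves as the
# region-growing threshold for the tumor boundary.
# thr = gaussian_intersection(*fit_unnormalized_gaussian(inside),
#                             *fit_unnormalized_gaussian(outside))
```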
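The network configuration can be approximated with torchvision's detection API. This is a sketch under stated assumptions, not the authors' code: resnet_fpn_backbone/MaskRCNN builds a ResNet101-FPN Mask R-CNN, but torchvision's stock classification loss is cross-entropy, so the Focal Loss used in this work would need to be substituted separately; the `pretrained` flag applies to older torchvision versions, and the edge-slice replication in the input builder is our assumption.

```python
import numpy as np
import torch
from torchvision.models.detection import MaskRCNN
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

# ResNet101-FPN backbone initialized from ImageNet, two classes
# (background vs. lesion). Stock torchvision losses are used here;
# the abstract's Focal Loss would replace the classification loss.
backbone = resnet_fpn_backbone('resnet101', pretrained=True)
model = MaskRCNN(backbone, num_classes=2)

def three_channel_input(volume, i):
    """Stack slice i with its two neighbors as a 3-channel image.
    `volume` is a (num_slices, H, W) array; edge slices are replicated
    (an assumption -- the abstract does not specify edge handling)."""
    lo, hi = max(i - 1, 0), min(i + 1, volume.shape[0] - 1)
    x = np.stack([volume[lo], volume[i], volume[hi]], axis=0).astype(np.float32)
    return torch.from_numpy(x)
```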
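The ground-truth bounding box and the IoU-based true positive rule are likewise easy to state; a minimal sketch (function names are ours):

```python
import numpy as np

def mask_to_box(mask):
    """Smallest axis-aligned box (x1, y1, x2, y2) covering a binary mask."""
    ys, xs = np.where(mask)
    return xs.min(), ys.min(), xs.max(), ys.max()

def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(pred_box, gt_mask, threshold=0.5):
    """A detection counts as a true positive when its IoU with the
    ground-truth box exceeds 0.5, per the evaluation rule above."""
    return iou(pred_box, mask_to_box(gt_mask)) > threshold
```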

Results

All validation predictions from the 10-fold cross-validation were pooled and evaluated. Based on the IoU calculated against the ground truth bounding boxes, there were 1,943 true positives, 6,149 true negatives, 2,363 false positives, and 371 false negatives, giving a sensitivity of 0.83, a specificity of 0.72, and an overall detection accuracy of 0.75. Figures 2-5 show case examples of the detection results. Notably, no false positives were detected inside the chest region. For the 1,943 true positives, the tumor within the detected box was segmented and compared to the ground truth segmentation. Across the 10 cross-validation folds, the DSC ranged from 0.64 to 0.97, with a mean of 0.84.
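For reference, the summary statistics above follow the standard confusion-matrix definitions, and the DSC compares two binary masks; a minimal sketch (variable and function names are ours):

```python
import numpy as np

# Slice-level counts reported above (10-fold cross-validation, pooled)
TP, TN, FP, FN = 1943, 6149, 2363, 371

sensitivity = TP / (TP + FN)                   # lesion slices correctly detected
specificity = TN / (TN + FP)                   # lesion-free slices correctly cleared
accuracy    = (TP + TN) / (TP + TN + FP + FN)  # overall detection accuracy

def dice(mask_a, mask_b):
    """Dice Similarity Coefficient: DSC = 2|A ∩ B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```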

Discussion

Developing an efficient and reliable lesion detection method may provide helpful information for the diagnosis of lesions on breast MRI. We applied Mask R-CNN with a ResNet101 Feature Pyramid Network backbone trained with Focal Loss (as in RetinaNet) and three sub-networks: a bounding box regression network, a classification network, and a mask network. The FPN backbone increases sensitivity when searching the whole image, while the sub-networks reduce false positives and thereby increase specificity. The results show that deep learning with Mask R-CNN provides an efficient method to localize and segment lesions on breast MRI, achieving an overall detection accuracy of 0.75. Currently available commercial software for breast MRI analysis can generate useful information to improve radiologists' reading efficiency. Although these methods work well to assist radiologists' interpretation, a fully automatic method that can search the entire set of images and identify suspicious lesions is needed to build automatic, artificial intelligence-based diagnostic tools.

Acknowledgements

This work was supported in part by NIH R01 CA127927 and R21 CA208938.

References

1. Nie K, Chen JH, Yu HJ, Chu Y, Nalcioglu O, Su MY. Quantitative analysis of lesion morphology and texture features for diagnostic prediction in breast MRI. Academic Radiology. 2008;15(12):1513-1525.
2. He K, Gkioxari G, Dollár P, Girshick R. Mask R-CNN. Proceedings of the IEEE International Conference on Computer Vision (ICCV); 2017.
3. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. 2014.

Figures

Figure 1: Mask R-CNN architecture. A hybrid 3D-contracting (middle block) and 2D-expanding (right block) fully convolutional feature pyramid network architecture is used as the Mask R-CNN backbone. The architecture incorporates both traditional 3×3 filters and bottleneck 1×1-3×3-1×1 modules (left block). The contracting arm is composed of 3D operations and convolutional kernels. The number of input channels is 3: the slice to be processed is combined with its two neighboring slices as input. Focal Loss is selected as the loss function.

Figure 2: True positive case example from a 42-year-old patient with a mass-type breast cancer showing typical rim enhancement. (A) Pre-contrast image; (B) the second post-contrast image; (C) the subtraction image; (D) the tumor detection result from the algorithm. The segmented tumor, highlighted in green, is used as the ground truth. The red box is the detection output of Mask R-CNN, which correctly locates the cancer.

Figure 3: True positive case example from a 55-year-old patient with a mass-type breast cancer showing an irregular shape. (A) Pre-contrast image; (B) the second post-contrast image; (C) the subtraction image; (D) the tumor detection result from the algorithm. The segmented tumor, highlighted in green, is used as the ground truth. The red box is the detection output of Mask R-CNN, which correctly locates the cancer.

Figure 4: Case example from a 41-year-old patient with a heterogeneously enhancing breast cancer. (A) Pre-contrast image; (B) the second post-contrast image; (C) the subtraction image; (D) the tumor detection result from the algorithm. The segmented tumor, highlighted in green, is used as the ground truth. The red box is the detection output of Mask R-CNN, which correctly locates the cancer, a true positive result. The blue box is another output of the network that detects a small benign enhancing focus and is therefore a false positive.

Figure 5: Case example from a 62-year-old patient with a small mass-type breast cancer and strong parenchymal enhancement in both breasts. (A) Pre-contrast image; (B) the second post-contrast image; (C) the subtraction image; (D) the tumor detection result from the algorithm. The segmented tumor, highlighted in green, is used as the ground truth. The two large blue boxes are detection outputs of Mask R-CNN. Although the box in the left breast (right side of the image) correctly encloses the cancer, it also contains the surrounding parenchymal enhancement and is much larger than the cancer, so it is considered an incorrect detection based on the Intersection over Union (IoU). The other blue box, in the right breast (left side of the image), also detects parenchymal enhancement and is therefore a false positive.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019) 0594