2098

Multi-class segmentation of the Carotid Arteries using Deep Learning
Magnus Ziegler1,2, Jesper Alfraeus1,2, Elin Good1,2,3, Jan Engvall1,2,3, Ebo de Muinck1,2,3, and Petter Dyverfeldt1,2

1Linköping University, Linköping, Sweden, 2Center for Medical Image Science and Visualization (CMIV), Linköping, Sweden, 3Linköping University Hospital, Linköping, Sweden

Synopsis

The rupture of atherosclerotic plaques in the carotid arteries can lead to stroke, one of the most common causes of death worldwide. MRI can provide geometric, compositional, and hemodynamic information about the carotid arteries, but to access this information the images must first be segmented to delineate the regions of interest. This work proposes a state-of-the-art convolutional neural network, developed from the DeepMedic architecture, that performs automated, multi-class segmentation of the carotid arteries. The network achieved high quantitative and qualitative scores, with Dice = 0.8750, sensitivity = 0.9374, specificity = 0.9942, and F2 = 0.9067.

Introduction

Non-invasive imaging and monitoring is crucial for tracking the progression of atherosclerosis in the carotid arteries [1]. MRI can be used to generate geometric, hemodynamic, and compositional information from the vessels. However, to extract this information, the vessels must be identified and delineated in the images. When performed manually, this is a difficult and time-consuming process, and the time required increases with the degree of localization, for example when each arterial branch is segmented separately. Examining large cohorts would not be feasible without a significant reduction in user input or complete automation of these tasks. Convolutional neural networks (CNNs) have emerged as a useful method for medical image segmentation [2-4]. This work therefore aims to deploy and validate such a method for automated segmentation of the carotid arteries using contrast-enhanced MR angiography data.

Methods

Data from 91 subjects was used in this study: 26 for training, 5 for validation, and 10 for quantitative evaluation, each with ground truth (GT) that includes the left and right common, internal, and external carotid arteries (CCA, ICA, ECA). Branches had standardized lengths of 2.5 cm, measured from the bifurcation. An additional qualitative study included 50 subjects without GT. Subjects were 50-64 years old, with non-symptomatic atherosclerotic plaques measured using ultrasound to be at least 2.7 mm thick. Subjects were recruited as part of the Swedish CArdioPulmonary bioImage Study (SCAPIS), and all participants gave written, informed consent.

Contrast-enhanced MR angiography (CEMRA) data was acquired using a gadolinium-based contrast agent (Gadovist, Bayer Schering Pharma AG) to generate bright-blood images. Scan parameters included: a coronal slab with 3D field of view (FOV) = 200x200x50 mm³ and matrix size 512x512x100, positioned to cover the carotid arteries from the clavicle to the circle of Willis; flip angle 27°; TE 1.8 ms; TR 4.9 ms; parallel imaging (SENSE) factor 2; and reconstructed spatial resolution of 0.48x0.48x0.50 mm³. Data was acquired on a 3T Philips Ingenia system (Philips Healthcare, Best, the Netherlands) using an 8-channel carotid coil.

The DeepMedic network [2] was selected as a platform for further development. This network was tailored for multi-class segmentation of the carotid arteries by introducing anisotropic convolution kernels to increase the network's receptive field in desired directions, a triple-pathway architecture to access larger spatial context, and a cost function based on the Fβ-measure. The Fβ cost function includes a parameter that allows the impact of sensitivity versus specificity on the cost to be tuned, in order to counteract class imbalance. Segmentations generated by the network were post-processed to remove disjoint areas, smooth the masks, and create a standardized output. The data was divided in the sagittal plane during training, validation, and testing. This allowed for data augmentation and lowered the number of labels considered from 7 to 4 per input image, simplifying the task for the network.
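As a sketch of the weighting described above (using standard notation not given in the abstract, where TP, FP, and FN denote voxel-wise true positives, false positives, and false negatives), the Fβ-measure underlying the cost function can be written as:

```latex
F_\beta = \frac{(1+\beta^{2})\,\mathrm{TP}}{(1+\beta^{2})\,\mathrm{TP} + \beta^{2}\,\mathrm{FN} + \mathrm{FP}}
```

Choosing β > 1 penalizes false negatives more heavily than false positives, i.e. it favors sensitivity over precision, which helps counteract the large background-to-vessel class imbalance typical of angiographic volumes.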

To evaluate the performance of the network, in addition to the 31 subjects used during the training and validation process, we quantitatively examined 10 previously unseen subjects using the following metrics: Dice, sensitivity, specificity, and Fβ with β = 2. These 10 subjects, and an additional 50 subjects for which ground truth does not exist, were segmented using the network, and segmentation quality was assessed visually by two observers. Qualitative scores used the following 0-4 scale, based on the amount of adjustment needed before the masks could be used for further analyses: 0 - failed segmentation or an incorrect label; 1 - significant adjustments; 2 - some adjustments; 3 - minor adjustments; and 4 - no adjustments necessary.
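As an illustration of the evaluation metrics, the four scores can be computed voxel-wise from binary masks as below. This is a minimal sketch, not the authors' implementation; the function names are our own, and flattened binary NumPy arrays are assumed.

```python
import numpy as np

def confusion_counts(pred, gt):
    """Voxel-wise true/false positives and negatives for binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.sum(pred & gt)    # predicted vessel, truly vessel
    fp = np.sum(pred & ~gt)   # predicted vessel, truly background
    fn = np.sum(~pred & gt)   # missed vessel voxels
    tn = np.sum(~pred & ~gt)  # correctly labeled background
    return tp, fp, fn, tn

def dice(pred, gt):
    tp, fp, fn, _ = confusion_counts(pred, gt)
    return 2 * tp / (2 * tp + fp + fn)

def sensitivity(pred, gt):
    tp, _, fn, _ = confusion_counts(pred, gt)
    return tp / (tp + fn)

def specificity(pred, gt):
    _, fp, _, tn = confusion_counts(pred, gt)
    return tn / (tn + fp)

def f_beta(pred, gt, beta=2.0):
    """F-beta score; beta > 1 weights sensitivity over precision."""
    tp, fp, fn, _ = confusion_counts(pred, gt)
    b2 = beta ** 2
    return (1 + b2) * tp / ((1 + b2) * tp + b2 * fn + fp)
```

With β = 2, as used here, false negatives are weighted four times as heavily as false positives, matching the emphasis on sensitivity described in the Methods.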

Results

The proposed network performed well when evaluated quantitatively and qualitatively. Segmentations generated by the network scored: Dice = 0.8750, sensitivity = 0.9374, specificity = 0.9942, and F2 = 0.9067 following post-processing. The quantitative results for all datasets are provided in Table 1, and mean scores of the original implementation and our modified network are provided in Table 2. Mean qualitative scores after post-processing were 3.8 and 3.6 for the left and right carotid artery, respectively, an improvement over the original implementation, as shown in Table 3. Figures 1 and 2 depict segmentation results from the proposed network. Qualitative assessment of the 50 subjects without GT resulted in a mean score of 3.4.

Discussion

Accurate segmentation is critical for subsequent analyses, and automation is necessary for large-cohort studies. The alterations to the original DeepMedic network substantially improved performance, and the proposed network was able to accurately segment the carotid bifurcation into its constituent branches, enabling increased localization in future analyses. Segmentations were judged to be of sufficient quality to be used immediately for analysis. These results further illustrate the promise of deep learning and its value for segmentation of medical images.

Acknowledgements

No acknowledgement found.

References

1. Liapis, C. D., Kakisis, J. D. & Kostakis, A. G. Carotid stenosis. Stroke (2001). DOI: 10.1161/hs1201.099797. http://stroke.ahajournals.org/content/32/12/2782.full.pdf

2. Kamnitsas, K. et al. Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. CoRR abs/1603.05959 (2016).

3. Çiçek, Ö., Abdulkadir, A., Lienkamp, S. S., Brox, T. & Ronneberger, O. 3D U-Net: Learning dense volumetric segmentation from sparse annotation. CoRR abs/1606.06650 (2016).

4. LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436-444 (2015).

Figures

Table 1: Quantitative evaluation for 10 previously unseen subjects.

Table 2: Quantitative comparison between the original implementation of DeepMedic and the proposed network using 10 previously unseen subjects.

Table 3: Results from qualitative evaluation, comparing the original implementation of DeepMedic and the proposed network using 10 previously unseen subjects and two reviewers.

Figure 1: Example results for subject 1. a) Raw output from the network. b) Ground truth (top) and prediction after post-processing (bottom). c) Ground truth (top) and prediction after post-processing (bottom), skeletonized. d) Several mid-stack slices, where brighter segments indicate overlap between GT and prediction, green indicates GT only, and purple indicates prediction only.

Figure 2: Example results for subject 8. a) Raw output from the network. b) Ground truth (top) and prediction after post-processing (bottom). c) Ground truth (top) and prediction after post-processing (bottom), skeletonized. d) Several mid-stack slices, where brighter segments indicate overlap between GT and prediction, green indicates GT only, and purple indicates prediction only.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)