0788

A Deep Learning Based Left Atrium Anatomy Segmentation and Scar Delineation in 3D Late Gadolinium Enhanced CMR Images
Guang Yang1,2, Jun Chen3, Zhifan Gao4, Shuo Li4, Hao Ni5,6, Elsa Angelini7, Tom Wong1,2, Raad Mohiaddin1,2, Eva Nyktari2, Ricardo Wage2, Lei Xu8, Yanping Zhang3, Xiuquan Du3, Heye Zhang9, David Firmin1,2, and Jennifer Keegan1,2

1National Heart and Lung Institute, Imperial College London, London, United Kingdom, 2Cardiovascular Research Centre, Royal Brompton Hospital, London, United Kingdom, 3Anhui University, Anhui, China, 4Department of Medical Imaging, Western University, London, ON, Canada, 5Department of Mathematics, University College London, London, United Kingdom, 6Alan Turing Institute, London, United Kingdom, 7NIHR Imperial Biomedical Research Centre, Imperial College London, London, United Kingdom, 8Department of Radiology, Beijing Anzhen Hospital, Beijing, China, 9School of Biomedical Engineering, Sun Yat-Sen University, Shenzhen, China

Synopsis

3D late gadolinium enhanced (LGE) CMR images of left atrial (LA) scar tissue can be used to stratify patients with atrial fibrillation and to guide subsequent ablation therapy. This requires segmentation of the LA anatomy (usually from an anatomical acquisition) and a further segmentation of the scar tissue within the LA (from a 3D LGE acquisition). We propose a deep learning based framework incorporating multiview information and an attention mechanism to perform both the LA anatomy and scar segmentations simultaneously from a single 3D LGE acquisition. Compared to existing methods, we show improved segmentation accuracy (mean Dice=93%/87% for LA/scar).

Introduction

Atrial fibrillation (AF) is the most common arrhythmia of clinical significance, and its incidence is increasing rapidly as the population ages [1]. It affects quality of life and is associated with an increased risk of stroke and heart failure. Visualization and quantification of the left atrium (LA) and scar tissue using 3D late gadolinium enhanced (LGE) CMR can provide clinical information for patient stratification and ablation therapy guidance, and can also be used to predict outcome. These require accurate segmentation of the LA anatomy (usually from an anatomical acquisition, e.g., 3D b-SSFP or MRA [2,3]) and a further delineation of the scar tissue within the LA (from a 3D LGE acquisition). In this study, we propose an automated deep learning based framework that accomplishes these two segmentations simultaneously from a single 3D LGE acquisition. This reduces the overall study duration and avoids the registration errors associated with using two separate acquisitions.

Methods

With ethical approval, 202 CMR studies were carried out between 2011 and 2018 in patients presenting with long-standing persistent AF on a Siemens Magnetom Avanto 1.5T scanner. Transverse navigator-gated 3D LGE CMR [4,5] was performed using an inversion prepared segmented gradient echo sequence (TE/TR=2.2ms/5.2ms, resolution: (1.4–1.5)×(1.4–1.5)×4mm³ reconstructed into (0.7–0.75)×(0.7–0.75)×2mm³) 15 minutes after gadolinium administration (Gadovist, gadobutrol, 0.1mmol/kg body weight) [6]. A dynamic inversion time (TI) was used to null the signal from normal myocardium [7]. Prior to contrast agent administration, coronal navigator-gated 3D b-SSFP data (TE/TR=1ms/2.3ms, resolution: (1.6–1.8)×(1.6–1.8)×3.2mm³ reconstructed into (0.8–0.9)×(0.8–0.9)×1.6mm³) were acquired. Both LGE and b-SSFP data were acquired during free-breathing using a prospective crossed-pairs navigator positioned over the dome of the right hemi-diaphragm, with a navigator acceptance window of 5mm and CLAWS respiratory motion control [6,8]. Navigator artefact (from the navigator-restore pulse) in the LGE acquisition was reduced by introducing a navigator-restore delay of 100ms [6,8]. Our proposed method requires only the LGE CMR data; the b-SSFP data were used only for comparison studies.
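For clarity, the two acquisition protocols above can be summarized as a simple configuration structure. This is only an illustrative summary in Python: the field names are our own, and the values are those stated in the text.

```python
# Illustrative summary of the stated acquisition parameters (field names are assumptions).
acquisitions = {
    "3D_LGE": {
        "orientation": "transverse",
        "sequence": "inversion prepared segmented gradient echo",
        "TE_ms": 2.2,
        "TR_ms": 5.2,
        "acquired_resolution_mm": ((1.4, 1.5), (1.4, 1.5), 4.0),
        "reconstructed_resolution_mm": ((0.7, 0.75), (0.7, 0.75), 2.0),
        "inversion_time": "dynamic TI to null normal myocardium [7]",
        "contrast": "gadobutrol 0.1 mmol/kg, imaging 15 min post-injection",
        "navigator_restore_delay_ms": 100,
    },
    "3D_bSSFP": {
        "orientation": "coronal",
        "timing": "pre-contrast",
        "TE_ms": 1.0,
        "TR_ms": 2.3,
        "acquired_resolution_mm": ((1.6, 1.8), (1.6, 1.8), 3.2),
        "reconstructed_resolution_mm": ((0.8, 0.9), (0.8, 0.9), 1.6),
    },
    "respiratory_gating": {
        "navigator": "prospective crossed-pairs over the right hemi-diaphragm dome",
        "acceptance_window_mm": 5,
        "motion_control": "CLAWS [6,8]",
    },
}
```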

Based on image quality scores assigned by a senior cardiac MRI physicist, 190 of the 202 cases were retrospectively included in this study. Manual segmentations of the LA anatomy (including the proximal pulmonary veins) and scar were performed by a cardiac MRI physicist, with consensus from a senior radiologist, and were used as the ground truth for training and evaluation.

Our deep learning framework (Figure 1) uses multiview information to mimic the inspection process of reporting clinicians, who step through 2D axial slices to find correlated information while also using complementary information from the orthogonal views. In addition, an attention mechanism was used to force the network to focus on the scar regions and thereby improve the delineation. Our segmentations were compared with the manually delineated ground truths using Dice scores. For the LA anatomy segmentation, we also compared our method with the whole heart segmentation (WHS) [9], U-Net [10,11] and V-Net [12,13] based methods. For the scar delineation, we compared with unsupervised learning based methods [3] as well as U-Net and V-Net based methods [10–13].
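As a rough illustration of the kind of multiview fusion with attention described above, the following PyTorch sketch combines features from three orthogonal views and re-weights them with a spatial attention gate before predicting LA and scar masks. It is a minimal sketch under our own assumptions (2D slice inputs resampled to a common grid, identical dilated residual branches for all views, a single attention gate, arbitrary channel sizes) and is not the authors' implementation.

```python
# Minimal sketch of multiview fusion with a spatial attention gate (illustrative only).
import torch
import torch.nn as nn


class DilatedResidualBlock(nn.Module):
    """Residual block with dilated convolutions (used here for each view branch)."""
    def __init__(self, channels, dilation=2):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + x)  # residual connection


class AttentionGate(nn.Module):
    """Spatial attention that re-weights the fused multiview features toward scar regions."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Sequential(
            nn.Conv2d(channels, channels // 2, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 1, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.score(x)  # per-pixel attention weights in [0, 1]


class MultiViewSegmenter(nn.Module):
    """Fuses axial, sagittal and coronal features and predicts LA and scar masks."""
    def __init__(self, in_ch=1, feat=16):
        super().__init__()
        self.axial = nn.Sequential(nn.Conv2d(in_ch, feat, 3, padding=1), DilatedResidualBlock(feat))
        self.sagittal = nn.Sequential(nn.Conv2d(in_ch, feat, 3, padding=1), DilatedResidualBlock(feat))
        self.coronal = nn.Sequential(nn.Conv2d(in_ch, feat, 3, padding=1), DilatedResidualBlock(feat))
        self.attention = AttentionGate(3 * feat)
        self.head = nn.Conv2d(3 * feat, 2, 1)  # channel 0: LA anatomy, channel 1: scar

    def forward(self, axial, sagittal, coronal):
        fused = torch.cat([self.axial(axial), self.sagittal(sagittal), self.coronal(coronal)], dim=1)
        return torch.sigmoid(self.head(self.attention(fused)))


# Example forward pass on random slices resampled to a common grid (an assumption).
model = MultiViewSegmenter()
views = [torch.randn(1, 1, 96, 96) for _ in range(3)]
masks = model(*views)  # shape: (1, 2, 96, 96)
print(masks.shape)
```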

Results

Our method was validated by dividing the data into training/cross-validation (N=170) and independent testing (N=20) datasets. On the independent testing data, the proposed method achieved a mean Dice score of 93% for the LA anatomy segmentation and 87% for the scar delineation (Figure 2). Unsupervised learning based methods performed considerably worse. U-Net and V-Net based methods performed well, but were still outperformed by our proposed method. In addition, there was excellent agreement between the scar percentages assessed with the proposed method and those determined from the manual segmentations (Figures 3 and 4). Our proposed method was also highly efficient, taking ~0.27 seconds to simultaneously segment the LA anatomy and scar (Figure 5) from a 3D LGE CMR dataset of 60–68 2D slices.
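For reference, the Dice score used throughout the evaluation can be computed from binary masks as below. This is a generic NumPy sketch with synthetic inputs, not the authors' evaluation code.

```python
# Generic Dice score for binary segmentation masks (illustrative inputs).
import numpy as np


def dice_score(prediction: np.ndarray, ground_truth: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks A (prediction) and B (ground truth)."""
    prediction = prediction.astype(bool)
    ground_truth = ground_truth.astype(bool)
    intersection = np.logical_and(prediction, ground_truth).sum()
    denominator = prediction.sum() + ground_truth.sum()
    return 2.0 * intersection / denominator if denominator > 0 else 1.0


# Example with random masks standing in for a 60-slice 3D LGE volume.
rng = np.random.default_rng(0)
pred = rng.random((60, 160, 160)) > 0.5
gt = rng.random((60, 160, 160)) > 0.5
print(f"Dice: {dice_score(pred, gt):.3f}")
```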

Discussion

Segmentation of both the LA anatomy and scar from a single 3D LGE dataset is highly desirable as it avoids the need for additional acquisitions and the subsequent registration errors between them; however, the task is challenging due to the nulling of signal from healthy tissue, the low signal-to-noise ratio and the often limited image quality in this difficult AF cohort. The proposed automated framework solves this problem using supervised deep learning that incorporates multiview information with an attention mechanism. Our method achieved high accuracy relative to the manually delineated ground truth and a significant improvement over existing methods.

Conclusion

Our study shows that the proposed deep learning framework can accurately and efficiently segment both the LA anatomy and scar simultaneously from 3D LGE CMR data without the need for additional acquisitions. This can support the further development of patient-specific anatomical models combined with LA scar segmentation for patients with AF.

Acknowledgements

This study was funded by the British Heart Foundation Project Grant (Project Number: PG/16/78/32402). D Firmin and J Keegan are co-last authors.

References

1. Chugh SS, Havmoeller R, Narayanan K, Singh D, Rienstra M, Benjamin EJ, et al. Worldwide Epidemiology of Atrial Fibrillation: A Global Burden of Disease 2010 Study. Circulation [Internet]. 2014;129:837–47. Available from: http://circ.ahajournals.org/cgi/doi/10.1161/CIRCULATIONAHA.113.005119

2. Ravanelli D, Dal Piaz EC, Centonze M, Casagranda G, Marini M, Del Greco M, et al. A novel skeleton based quantification and 3-D volumetric visualization of left atrium fibrosis using late gadolinium enhancement magnetic resonance imaging. IEEE Trans Med Imaging. 2014;33:566–76.

3. Karim R, Housden RJ, Balasubramaniam M, Chen Z, Perry D, Uddin A, et al. Evaluation of current algorithms for segmentation of scar tissue from late gadolinium enhancement cardiovascular magnetic resonance of the left atrium: an open-access grand challenge. J Cardiovasc Magn Reson [Internet]. 2013;15:105–22. Available from: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3878126&tool=pmcentrez&rendertype=abstract

4. Peters DC, Wylie J V., Hauser TH, Nezafat R, Han Y, Woo JJ, et al. Recurrence of Atrial Fibrillation Correlates With the Extent of Post-Procedural Late Gadolinium Enhancement. A Pilot Study. JACC Cardiovasc Imaging [Internet]. Elsevier Inc.; 2009;2:308–16. Available from: http://dx.doi.org/10.1016/j.jcmg.2008.10.016

5. Oakes RS, Badger TJ, Kholmovski EG, Akoum N, Burgon NS, Fish EN, et al. Detection and quantification of left atrial structural remodeling with delayed-enhancement magnetic resonance imaging in patients with atrial fibrillation. Circulation. 2009;119:1758–67.

6. Keegan J, Jhooti P, Babu-Narayan S V., Drivas P, Ernst S, Firmin DN. Improved respiratory efficiency of 3D late gadolinium enhancement imaging using the continuously adaptive windowing strategy (CLAWS). Magn Reson Med. 2014;71:1064–74.

7. Keegan J, Gatehouse PD, Haldar S, Wage R, Babu-Narayan S V, Firmin DN. Dynamic inversion time for improved 3D late gadolinium enhancement imaging in patients with atrial fibrillation. Magn Reson Med [Internet]. 2015;73:646–54. Available from: http://www.ncbi.nlm.nih.gov/pubmed/24604664

8. Keegan J, Drivas P, Firmin DN. Navigator artifact reduction in three-dimensional late gadolinium enhancement imaging of the atria. Magn Reson Med. 2014;72:779–85.

9. Zhuang X, Shen J. Multi-scale patch and multi-modality atlases for whole heart segmentation of MRI. Med Image Anal [Internet]. Elsevier B.V.; 2016;31:77–87. Available from: http://linkinghub.elsevier.com/retrieve/pii/S1361841516000219

10. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. Med Image Comput Comput Interv [Internet]. 2015. p. 234–41. Available from: http://link.springer.com/10.1007/978-3-319-24574-4_28

11. Çiçek Ö, Abdulkadir A, Lienkamp SS, Brox T, Ronneberger O. 3D U-Net: learning dense volumetric segmentation from sparse annotation. Int Conf Med Image Comput Comput Interv. 2016. p. 424–32.

12. Winther HB, Hundt C, Schmidt B, Czerner C, Bauersachs J, Wacker F, et al. ν-net: Deep Learning for Generalized Biventricular Cardiac Mass and Function Parameters. arXiv preprint arXiv:1706.04397. 2017.

13. Milletari F, Navab N, Ahmadi S-A. V-Net: Fully convolutional neural networks for volumetric medical image segmentation. 2016 Fourth International Conference on 3D Vision (3DV). 2016. p. 565–71.

14. Yang G, Zhuang X, Khan H, Haldar S, Nyktari E, Li L, et al. Fully Automatic Segmentation and Objective Assessment of Atrial Scars for Longstanding Persistent Atrial Fibrillation Patients Using Late Gadolinium-Enhanced MRI. Med Phys. American Institute of Physics; 2018;45:1562–76.

Figures

Figure 1: Flowchart of our deep learning based segmentation framework, which incorporates multiview information and an attention mechanism. The multiview learning consists of two types of deep neural networks: (1) a sequential learning network for the axial view, mimicking the inspection process of reporting clinicians who step through 2D axial slices to find correlated information between slices, and (2) two dilated residual networks that learn complementary information from the sagittal and coronal views. A dilated attention network is designed to segment the LA scars more accurately.

Figure 2: Boxplots of the Dice scores obtained by our proposed method compared with other methods. Dice scores of the LA anatomy segmentation using the 10-fold cross-validation datasets (a) and the independent testing datasets (b); Dice scores of the scar delineation using the 10-fold cross-validation datasets (c) and the independent testing datasets (d). Our method requires only the LGE CMR data; the WHS requires b-SSFP data for the LA anatomy segmentation, which is then co-registered with the LGE CMR data. (Tested unsupervised learning based approaches include the 2SD, c-Means and k-Means methods [3]; 2SD: 2-standard-deviation based thresholding.)

Figure 3: Correlation between the estimated scar percentage of our proposed method and the scar percentage from the manual delineation (diagonal lines in color represent the fit using linear regression). Here the scar percentage is defined as the volume of scar tissue as a percentage of the LA anatomy volume [14]. (a) Results of the 10-fold cross-validation datasets (r=0.94 and r=0.96 for the pre-ablation (N=97) and post-ablation (N=93) cases, respectively) and (b) Results of the independent testing datasets (r=0.98 and r=0.99 for the pre-ablation and post-ablation cases, respectively).

Figure 4: Bland-Altman plots for the estimated scar percentage (ESP) of our proposed method and the scar percentage derived from the manual delineation (MSP). (a) Results of the 10-fold cross-validation, bias=−1% [95% CI −7% to 5%] and (b) Results of the independent testing, bias=−0.1% [95% CI −2% to 2%].
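The agreement analysis summarized in Figures 3 and 4 can be reproduced conceptually with a few lines of NumPy: the scar percentage per case, the Pearson correlation coefficient, and the Bland-Altman bias with 95% limits of agreement. The function names and synthetic inputs below are illustrative assumptions, not the authors' analysis code.

```python
# Illustrative sketch of the scar-percentage agreement analysis (Figures 3-4).
import numpy as np


def scar_percentage(scar_mask: np.ndarray, la_mask: np.ndarray) -> float:
    """Scar volume expressed as a percentage of the LA anatomy volume [14]."""
    return 100.0 * scar_mask.sum() / la_mask.sum()


def bland_altman(estimated: np.ndarray, manual: np.ndarray):
    """Return the bias and 95% limits of agreement between two measurements."""
    diff = estimated - manual
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)


# In practice, scar_percentage() would be applied to each case's predicted and
# manual masks; here we use synthetic per-case percentages for illustration.
rng = np.random.default_rng(0)
manual = rng.uniform(5, 40, size=20)               # manual scar percentages (MSP)
estimated = manual + rng.normal(0, 1.5, size=20)   # automated estimates (ESP)
r = np.corrcoef(estimated, manual)[0, 1]           # Pearson correlation coefficient
bias, limits = bland_altman(estimated, manual)
print(f"r={r:.2f}, bias={bias:.2f}%, limits of agreement={limits}")
```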

Figure 5: Visualization of the LA segmentation (top) and scar delineation (bottom) for example pre-ablation and post-ablation cases. (a) For the LA segmentation, we show multiple slices of an example pre-ablation case and an example post-ablation case (red contour: manually delineated ground truth; green contour: our segmentation). (b) For the scar delineation, we show the manually delineated ground truth (red) and our automatic segmentation (green) in an example slice from each of 3 pre-ablation and 3 post-ablation cases. The three cases in each group were selected as examples in which the scar percentage showed poor, fair and good agreement with the ground truth, respectively.
