It can be argued that the most significant technical impediment to wider clinical adoption of fully-quantitative cardiac perfusion MRI is the lack of a fully-automatic post-processing workflow across all scanner platforms. In this work, we present an initial proof of concept for a deep-learning approach to quantification of myocardial blood flow that eliminates the need for motion correction, enabling a rapid and platform-independent post-processing framework. This is achieved by training a cascade of deep convolutional neural networks to learn the common spatio-temporal features of a dynamic perfusion image series and to use them to jointly detect the myocardial contours across all dynamic frames in the dataset.
With recent technical advances, fully-quantitative perfusion cardiac MR (CMR) imaging is being adopted as a potentially superior modality for detection of myocardial ischemia, providing a validated quantitative tool for assessing the presence and severity of ischemic heart disease. It can be argued that the most significant technical impediment to wider clinical adoption of fully-quantitative perfusion CMR is the lack of a robust, rapid, and fully-automatic post-processing workflow across all scanner platforms. On select platforms, retrospective motion correction with non-rigid registration is available, which enables a faster workflow for manual analysis – although the accuracy of motion correction varies significantly depending on the registration technique.
A recent work proposed an optimized approach that automatically generates a pixel-wise myocardial blood flow (MBF) map.1 This approach currently requires a manual step (segmentation of the MBF pixel map) to derive, e.g., a global stress MBF value for each myocardial slice, and it requires a customized pulse sequence. Deep convolutional neural networks (CNNs) have recently been applied to segmentation of cine CMR images with the goal of automatic assessment of cardiac function.3,4 In this work, we present the first attempt at applying deep learning to rapid automatic analysis of perfusion CMR datasets.
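The manual reduction step noted above – averaging a pixel-wise MBF map over the segmented myocardium to obtain one global value per slice – can be sketched as follows (the function name and mask convention are illustrative, not from the cited work):

```python
import numpy as np

def global_mbf(mbf_map, myo_mask):
    """Collapse a pixel-wise MBF map (e.g., ml/g/min) to a single global
    value for a slice by averaging over the segmented myocardial mask."""
    myo = myo_mask.astype(bool)
    return float(mbf_map[myo].mean())

# Toy example: uniform flow of 2.0 ml/g/min inside a ring-shaped myocardium
mbf_map = np.full((64, 64), 2.0)
yy, xx = np.mgrid[:64, :64]
r = np.hypot(yy - 32, xx - 32)
myo_mask = (r > 10) & (r < 20)
print(global_mbf(mbf_map, myo_mask))  # 2.0
```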
Stress/rest perfusion images from 62 patients with suspected/known ischemia and 10 healthy volunteers were analyzed. All subjects underwent free-breathing vasodilator-stress CMR (saturation-recovery FLASH at 3T; contrast dose: 0.05 mmol/kg) with images acquired in 3 short-axis slices over 60 heartbeats. Mean MBF for each slice was quantified by an expert physicist using manual segmentation of the myocardium (endocardial/epicardial contours) in each slice and Fermi deconvolution of the gadolinium concentration time-curves.

As shown in Fig. 1, the proposed deep-learning network is composed of a cascade of two CNNs, each with an optimized U-net architecture.2 The first CNN acts as the "heart localizer" by detecting the centroid of the left ventricle (LV) and, as shown in Fig. 2, its output is used to crop all of the image frames to an "LV region-of-interest" that serves as the input to the second CNN. The second CNN jointly processes the 3D stack (2D + time) of image frames for each slice and jointly detects the myocardial borders for all of the first-pass perfusion frames by computing a deep cascade of feature maps from the spatio-temporal information in the dynamic image series (gray boxes represent multi-channel feature maps in the optimized U-net architecture).

Each of the two CNNs was separately trained/validated using 70 of the available 72 stress/rest perfusion studies (≈ 24,000 images). As described in Fig. 3, the training dataset was augmented by applying random affine transforms to the training images; Figure 4 shows an example of an augmented image series. For the two held-out patients (not among the training dataset), agreement between automatic and manual segmentation was assessed with the Dice score, and mean per-slice MBF values from the two approaches (3 stress and 3 rest MBF values per patient) were compared using Pearson correlation.
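Fermi deconvolution, used here for the reference MBF values, fits the myocardial enhancement curve as the convolution of the arterial input function (AIF) with a Fermi-shaped impulse response whose amplitude estimates flow. A self-contained sketch on synthetic curves (the parameterization, synthetic AIF, and assumed 1-s RR interval are illustrative, not from this study):

```python
import numpy as np
from scipy.optimize import curve_fit

def fermi_response(t, amp, tau, w):
    # Fermi-shaped tissue impulse response; its early-time amplitude
    # (~ amp) is the flow estimate in Fermi-model deconvolution.
    return amp / (1.0 + np.exp((t - w) / tau))

def make_model(aif, dt):
    # Forward model: myocardial curve = AIF convolved with impulse response.
    def model(t, amp, tau, w):
        r = fermi_response(t, amp, tau, w)
        return np.convolve(aif, r)[: len(t)] * dt
    return model

t = np.arange(60, dtype=float)                # 60 heartbeats, RR ~ 1 s (assumed)
dt = 1.0
aif = np.exp(-0.5 * ((t - 12.0) / 3.0) ** 2)  # synthetic bolus-shaped AIF
true_params = (0.9, 4.0, 8.0)                 # ground-truth (amp, tau, w)
myo = make_model(aif, dt)(t, *true_params)    # noiseless myocardial curve

popt, _ = curve_fit(make_model(aif, dt), t, myo, p0=(0.5, 2.0, 5.0))
print(round(popt[0], 3))  # recovered flow amplitude, close to 0.9
```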
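The role of the first CNN amounts to reducing each frame to a fixed-size window around the predicted LV centroid, applied identically to every dynamic frame. A minimal sketch (the window size, padding behavior, and function name are assumptions, not from the abstract):

```python
import numpy as np

def crop_lv_roi(frames, centroid, size=64):
    """Crop a size x size window centered on the LV centroid from every
    frame of a (time, H, W) perfusion series; zero-pads at the borders."""
    t, h, w = frames.shape
    cy, cx = int(round(centroid[0])), int(round(centroid[1]))
    half = size // 2
    out = np.zeros((t, size, size), dtype=frames.dtype)
    y0, y1 = max(cy - half, 0), min(cy + half, h)
    x0, x1 = max(cx - half, 0), min(cx + half, w)
    oy, ox = y0 - (cy - half), x0 - (cx - half)
    out[:, oy:oy + (y1 - y0), ox:ox + (x1 - x0)] = frames[:, y0:y1, x0:x1]
    return out

series = np.random.rand(60, 192, 192)          # 60-heartbeat dynamic series
roi = crop_lv_roi(series, centroid=(96, 110))  # same crop for all frames
print(roi.shape)  # (60, 64, 64)
```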
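The affine augmentation can be implemented by drawing one random transform per training series and applying it to every frame, so the spatio-temporal structure the second CNN learns from is preserved. A sketch with scipy (the rotation, scale, and shift ranges are assumed; the abstract does not specify them):

```python
import numpy as np
from scipy.ndimage import affine_transform

rng = np.random.default_rng(42)

def augment_series(frames, max_rot_deg=15.0, max_shift=5.0):
    """Apply ONE random rotation/scale/shift to all frames of a
    (time, H, W) series so temporal correspondence is preserved."""
    theta = np.deg2rad(rng.uniform(-max_rot_deg, max_rot_deg))
    scale = rng.uniform(0.9, 1.1)
    shift = rng.uniform(-max_shift, max_shift, size=2)
    c, s = np.cos(theta), np.sin(theta)
    mat = np.array([[c, -s], [s, c]]) / scale       # output-to-input mapping
    center = (np.array(frames.shape[1:]) - 1) / 2.0
    offset = center - mat @ center + shift          # rotate about image center
    return np.stack([affine_transform(f, mat, offset=offset, order=1)
                     for f in frames])

series = np.random.rand(60, 64, 64)
aug = augment_series(series)  # one shared transform across all 60 frames
```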
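The two evaluation metrics – Dice overlap for segmentation agreement and Pearson correlation for per-slice MBF agreement – can be computed as follows (the toy masks and MBF values are illustrative):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * (a & b).sum() / denom if denom else 1.0

def pearson_r(x, y):
    """Pearson correlation between paired MBF measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.corrcoef(x, y)[0, 1])

manual = np.zeros((8, 8), bool); manual[2:6, 2:6] = True
auto = np.zeros((8, 8), bool);   auto[3:7, 3:7] = True
print(dice(manual, auto))  # 0.5625 (9 overlapping pixels / (16 + 16))
print(pearson_r([2.1, 1.8, 3.0], [2.0, 1.9, 2.9]))
```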
1. Kellman P et al. Myocardial perfusion cardiovascular magnetic resonance: optimized dual sequence and reconstruction for quantification. J Cardiovasc Magn Reson 2017;19:43. doi: 10.1186/s12968-017-0355-5
2. Ronneberger O et al. U-Net: Convolutional Networks for Biomedical Image Segmentation. Proceedings of Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Lecture Notes in Computer Science 2015;9351. doi: 10.1007/978-3-319-24574-4_28
3. Avendi MR et al. Automatic segmentation of the right ventricle from cardiac MRI using a learning-based approach. Magn Reson Med 2017;78(6):2439-2448. doi: 10.1002/mrm.26631
4. Bai W et al. Automated cardiovascular magnetic resonance image analysis with fully convolutional networks. J Cardiovasc Magn Reson 2018;20:65. doi: 10.1186/s12968-018-0471-x