2154

Free-breathing Multi-Phase MRI using Deep Learning-based Respiratory Motion Compensation
Vahid K Ghodrati1,2, Jiaxin Shao1, Mark Bydder1, Kim-Lien Nguyen3,4, Xiaodong Zhong5, Yingli Yang6, and Peng Hu1,2

1Radiology, University of California Los Angeles, Los Angeles, CA, United States, 2Biomedical Physics Inter-Departmental Graduate Program, University of California Los Angeles, Los Angeles, CA, United States, 3Department of Medicine, Division of Cardiology, University of California Los Angeles, Los Angeles, CA, United States, 4Division of Cardiology, Veterans Affairs Greater Los Angeles Healthcare System, Los Angeles, CA, United States, 5Siemens, Los Angeles, CA, United States, 6Department of Oncology, University of California Los Angeles, Los Angeles, CA, United States

Synopsis

To minimize respiratory motion-induced image blurring and artifacts, conventional cardiothoracic and abdominal MRI techniques rely mostly on breath-holding. These approaches result in a limited time window for data acquisition, especially in ill patients who are unable to breath-hold for an extended period of time. In this study, we employed deep learning as a promising tool for the detection and correction of complex respiratory motion during free-breathing MRI scanning. On average, our proposed network increased the sharpness of the images by 20%.

Introduction:

To minimize respiratory motion-induced image blurring and artifacts, conventional cardiothoracic and abdominal MRI techniques rely mostly on breath-holding. These approaches result in a limited time window for data acquisition, especially in ill patients who are unable to breath-hold for an extended period of time. Several respiratory motion compensation strategies, such as navigators1, MR self-gating2, and image-based navigators3, have been studied extensively. However, these techniques have their respective limitations: reliability issues and inaccurate representation of the true underlying complex motion can result in residual motion artifacts that affect clinical diagnoses and patient management. In this study, we propose deep learning as a promising tool for the detection and correction of complex respiratory motion during free-breathing MRI scanning.

Methods:

Figure 1 shows our network architecture, which uses an adversarial autoencoder structure. An adversarial autoencoder is a probabilistic autoencoder that performs variational inference by matching the aggregated posterior of the hidden code vector with an arbitrary prior distribution4,5. The architecture consists of an autoencoder (two U-Net structures) and an adversarial component, with the latter serving as a discriminator. Intuitively, the autoencoder, consisting of an encoder U-Net and a decoder U-Net, attempts to learn the identity map for images with free-breathing motion artifacts and thereby preserves consistency with the input. The discriminator, which interacts only with the output of the encoder, forces the encoder (the first U-Net) to remove the motion artifacts. We trained the autoencoder and the adversarial network jointly with stochastic gradient descent in two phases, a consistency phase and a correction phase, both executed on each mini-batch. In the consistency phase, the autoencoder was updated along the consistency path to minimize the reconstruction error. In the correction phase, the adversarial network first updated its discriminator and then updated the encoder to minimize the motion artifacts. To stabilize the training process, a Markovian patch-based approach was used for the correction phase6: the output of the encoder was divided into four patches, and the discriminator made its real/fake decision based on the average score over the patches.
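To illustrate the patch-averaged discriminator decision described above, the following minimal NumPy sketch splits an encoder output into a 2x2 grid of patches and averages per-patch discriminator scores. The function and the toy discriminator are hypothetical stand-ins for the actual trained networks, not the authors' implementation:

```python
import numpy as np

def patch_averaged_score(encoder_output, discriminator):
    """Split a 2D encoder output into a 2x2 grid of patches and
    return the mean of the discriminator's per-patch scores."""
    h, w = encoder_output.shape
    patches = [encoder_output[i * h // 2:(i + 1) * h // 2,
                              j * w // 2:(j + 1) * w // 2]
               for i in range(2) for j in range(2)]
    return float(np.mean([discriminator(p) for p in patches]))

# Toy stand-in discriminator: scores a patch by its mean intensity.
toy_disc = lambda p: float(p.mean())
img = np.arange(16, dtype=float).reshape(4, 4)
score = patch_averaged_score(img, toy_disc)
```

Averaging over patches rather than scoring the full image penalizes local texture inconsistencies, which is the stabilizing effect the Markovian patch-based approach6 is intended to provide.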

The datasets used to train our network consisted of two groups: 1) free-breathing 2D multi-slice, retrospectively EKG-triggered, bSSFP cardiac cine MR images in the short- and long-axis views from 10 volunteers (Siemens, 1.5T), in which normal breathing during acquisition provided realistic respiratory motion artifacts; and 2) breath-held 2D multi-slice, retrospectively EKG-triggered, bSSFP cardiac cine MR images from 42 patients. We trained the discriminator of the adversarial pathway using the patient data. Five of the 10 volunteer datasets were used to train the network, and the remaining five were used to test its performance. To quantify image sharpness as a surrogate for good image quality and absence of motion, we calculated the maximum intensity gradient between the myocardium and the blood pool. We compared the sharpness scores of the motion-resolved images with those of the free-breathing images using paired-sample t-tests.
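The sharpness metric and the paired comparison above can be sketched as follows. This is a minimal NumPy illustration, assuming the sharpness score is the maximum absolute intensity gradient along a 1D profile crossing the myocardium-blood pool border; the function names and the synthetic profiles are hypothetical, not study data:

```python
import numpy as np

def sharpness_score(profile):
    """Maximum absolute intensity gradient along a 1D line profile
    crossing the myocardium-blood pool border."""
    p = np.asarray(profile, dtype=float)
    return float(np.max(np.abs(np.diff(p))))

def paired_t_statistic(a, b):
    """Paired-sample t statistic: t = mean(d) / (std(d) / sqrt(n)),
    where d is the vector of per-subject differences."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return float(d.mean() / (d.std(ddof=1) / np.sqrt(d.size)))

# A blurred border (gradual ramp) scores lower than a sharp one (step).
blurred = sharpness_score([10, 30, 50, 70, 90])  # largest step is 20
sharp = sharpness_score([10, 10, 90, 90, 90])    # largest step is 80
```

In practice, the t statistic and its p-value would be computed over the per-slice sharpness-score pairs from the motion-resolved and free-breathing test images.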

Results:

Figure 2 shows representative respiratory motion-corrected images generated by our network and the corresponding motion-contaminated images from three volunteers. The signal intensity profiles across the heart (red lines) throughout the cardiac cycle are plotted to demonstrate the dynamic nature of myocardial motion. The network was able to resolve the respiratory motion artifacts for both the long- and short-axis views. The endocardial border in the motion-resolved images is sharper than in the non-corrected acquisition, most notably in the long-axis images.

Figure 3a illustrates the derivation of the sharpness score; Figure 3b shows box-and-whisker plots of the sharpness scores; Figure 3c summarizes the paired-sample t-test comparisons. There was a significant difference (p<0.001) between the sharpness scores of the motion-resolved and the free-breathing images. On average, our proposed network increased the sharpness of the images by 20%.

Figure 4 provides another illustrative comparison, showing reduced motion artifacts in both the cardiac and the abdominal regions and confirming that the motion-resolved image is structurally consistent with the motion-corrupted image.

Discussion:

This work represents an innovative and early application of deep learning for motion correction of MRI. A major consideration is whether the proposed network can produce images that are consistent with the acquired k-space data while correcting for motion artifacts. This concern is addressed by the consistency pathway of the autoencoder, which is designed to preserve the fidelity of the images to the acquired k-space samples.

Conclusions:

An adversarial autoencoder architecture is effective for respiratory motion correction of free-breathing, multi-phase cardiac and dynamic abdominal MR imaging. Future functional validation (e.g., LVEF measurements) is needed to demonstrate the accuracy of deep learning respiratory motion-resolved images for quantitative assessment.

Acknowledgements

No acknowledgement found.

References

[1] Y. Wang, R. C. Grimm, P. J. Rossman, J. P. Debbins, S. J. Riederer, and R. L. Ehman, “3D coronary MR angiography in multiple breath‐holds using a respiratory feedback monitor,” Magn. Reson. Med., vol. 34, no. 1, pp. 11–16, 1995.

[2] A. C. Larson et al., “Preliminary investigation of respiratory self‐gating for free‐breathing segmented cine MRI,” Magn. Reson. Med., vol. 53, no. 1, pp. 159–168, 2005.

[3] M. H. Moghari et al., “Free‐breathing 3D cardiac MRI using iterative image‐based respiratory motion correction,” Magn. Reson. Med., vol. 70, no. 4, pp. 1005–1015, 2013.

[4] A. Makhzani, “Implicit Autoencoders,” arXiv:1805.09804, 2018.

[5] A. Makhzani, J. Shlens, N. Jaitly, and I. J. Goodfellow, “Adversarial Autoencoders,” arXiv:1511.05644, 2015.

[6] C. Li and M. Wand, “Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks,” arXiv:1604.04382, 2016.

Figures

Figure 1: The autoencoder network consists of two U-Nets. The correction path is applied to the output of the encoder to encourage the encoder to reconstruct a motion artifact-free image. Each training epoch first updates the encoder and decoder parameters along the consistency path and subsequently updates the parameters in the adversarial path. Therefore, after successful training, the output of the encoder is expected not only to be consistent with the input but also to have minimal motion artifacts. The input to the encoder U-Net is complex-valued free-breathing cine MRI k-space data.

Figure 2: Samples of the motion-corrupted images (b) and their corrected versions (a) for three test datasets: the first and second rows present the short-axis (SA) views and the last row shows the horizontal long-axis (HLA) view. Intensity profiles of the pixels along the red dashed lines are shown across the 25 cardiac phases. The motion-corrected images are improved, with reduced motion blurring and artifacts.

Figure 3: Statistical analysis: (a) derivation of the sharpness score; (b) box-and-whisker plots of the sharpness scores for the respiratory motion-resolved test datasets and the corresponding free-breathing images; (c) paired-sample t-test comparisons of the sharpness scores for motion-resolved relative to free-breathing images.

Figure 4: Illustrative images of free-breathing volunteer test data and the corresponding respiratory motion-corrected images. Consistency of respiratory motion correction across cardiac and abdominal structures is qualitatively demonstrated.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)