0075

Retrospective motion correction using deep learning
Thomas Kuestner1,2,3, Bin Yang3, Fritz Schick2, Sergios Gatidis2, and Karim Armanious2,3

1School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom, 2Department of Radiology, University Hospital Tübingen, Tübingen, Germany, 3Institute of Signal Processing and System Theory, University of Stuttgart, Stuttgart, Germany

Synopsis

Motion is the main extrinsic source of imaging artifacts, which can strongly deteriorate image quality and thus impair diagnostic accuracy. Numerous motion correction strategies have been proposed to mitigate or capture the artifacts. These methods share the requirement of being applied during the actual measurement, with a-priori knowledge of the expected motion type and appearance. We propose deep neural networks for retrospective motion correction in a reference-free setting, i.e. without requiring any a-priori motion information. The feasibility, the influence of motion type and origin, and the choice of architecture are investigated.

Introduction

Motion is the main extrinsic source of imaging artifacts in MRI, which can strongly deteriorate image quality and thus impair diagnostic accuracy. The artifacts manifest in the image as blurring, aliasing and deformations. Numerous motion correction strategies have been proposed to mitigate or capture the artifacts1-10. These methods share the requirement of being applied during the actual measurement, with a-priori knowledge of the expected motion type and appearance. Only few methods have been proposed for the correction of already acquired data, such as auto-focusing11. This correction requires an external surrogate signal or motion model reflecting the motion, and it depends on a reliable quality metric to identify the motion-free alignment. For complex non-rigid motion without any a-priori knowledge of the motion, this problem remains unsolved. Moreover, in the context of large epidemiological cohort studies12,13, any impairment by motion artifacts can reduce the reliability and precision of the image analysis, and a motion-free reacquisition can become time- and cost-intensive.

The emergence of deep neural networks enables retrospective motion correction in a reference-free setting by learning from pairs of motion-free and motion-affected images. This image-to-image translation problem has previously been studied for rigid motion correction in neurological cases14-17. We propose a variational autoencoder (VAE) and a generative adversarial network (GAN), named MedGAN18,19, to perform rigid and non-rigid motion correction simultaneously. The aim is to provide motion-corrected MR images from motion-corrupted images alone, without the need for any a-priori motion information or reference.

Material and Methods

The proposed VAE and MedGAN share the common concept of an encoder-decoder structure, as shown in Fig.1a and sketched in code below. In the encoder (discriminator), the network learns to classify and separate the motion, while the reconstruction part (decoder) performs the correction task. In contrast to MedGAN (Fig.1c), the VAE (Fig.1b) assumes a Gaussian probability distribution in the latent space, parametrized by mean and standard deviation, from which samples are drawn for the motion-corrected reconstruction in the decoder. The two network parts are placed sequentially and trained end-to-end. The training database consists of imaging data from 18 healthy subjects in the head, abdomen and pelvis, scanned with T1w and T2w FSE sequences with the parameters stated in Tab.120. Each acquisition was performed twice to obtain a motion-free and a motion-affected image. Motion was induced by head tilting (head) and body movement (pelvis), representing rigid motion, and by free-breathing (abdomen), representing non-rigid motion.
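To illustrate the encoder-decoder concept, the following is a minimal PyTorch sketch of a VAE for 48×48 patches. The layer sizes, latent dimension and the omission of the skip connections shown in Fig.1b are simplifications for brevity; this is not the published architecture.

```python
import torch
import torch.nn as nn

class MotionVAE(nn.Module):
    """Minimal encoder-decoder VAE sketch; layer sizes are hypothetical."""
    def __init__(self, channels=32, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(                      # learns to separate motion features
            nn.Conv2d(1, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 2 * channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten())
        feat = 2 * channels * 12 * 12                      # 48x48 input downsampled twice
        self.fc_mu = nn.Linear(feat, latent_dim)           # latent mean
        self.fc_logvar = nn.Linear(feat, latent_dim)       # latent log-variance
        self.fc_up = nn.Linear(latent_dim, feat)
        self.decoder = nn.Sequential(                      # motion-corrected reconstruction
            nn.Unflatten(1, (2 * channels, 12, 12)),
            nn.ConvTranspose2d(2 * channels, channels, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, 1, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(self.fc_up(z)), mu, logvar
```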

All images are first normalized to the range 0 to 1 and then divided into 48×48 patches with 80% overlap (VAE only; MedGAN operates on whole-image input), as sketched below. Training is performed on 2D motion-free and motion-affected patch/image pairs (head: 240,500/1,296; abdomen: 337,880/1,116; pelvis: 408,000/1,440 pairs) with leave-one-subject-out cross-validation for testing. Networks are trained on different body regions and tested on the same (intra-region correction) and on other body parts (inter-region correction) for 100 epochs with the ADAM optimizer21. The VAE loss function consists of a Charbonnier loss22, gradient entropy23, Kullback-Leibler divergence24 and a perceptual loss25 which compares feature maps of the first 3 layers of a pre-trained VGG-19; a sketch of these components follows the patching example below. The MedGAN loss function18 contains an adversarial loss, a perceptual loss which is directly obtained from the trained discriminator, and a style transfer loss which is derived from the Gram matrix of the pre-trained VGG-19.
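A minimal NumPy sketch of the normalization and patching step; with 80% overlap the stride is 48×(1−0.8)≈10 pixels, where the exact rounding and boundary handling are assumptions not stated in the abstract:

```python
import numpy as np

def extract_patches(img, size=48, overlap=0.8):
    """Slide a size x size window over a 2D image; 80% overlap -> stride of ~10 px."""
    stride = max(1, int(round(size * (1 - overlap))))
    return np.stack([img[i:i + size, j:j + size]
                     for i in range(0, img.shape[0] - size + 1, stride)
                     for j in range(0, img.shape[1] - size + 1, stride)])

slice2d = np.random.rand(256, 256).astype(np.float32)                  # stand-in MR slice
slice2d = (slice2d - slice2d.min()) / (slice2d.max() - slice2d.min())  # normalize to [0, 1]
patches = extract_patches(slice2d)                                     # -> (441, 48, 48) here
```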
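The VAE loss components could be combined as sketched below in PyTorch. The relative weights and the exact VGG-19 layer slice are illustrative assumptions (the abstract states only "the first 3 layers"), and ImageNet input normalization before the VGG pass is omitted for brevity:

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

# Early layers of a pre-trained VGG-19 act as a fixed feature extractor (perceptual loss).
vgg_features = vgg19(pretrained=True).features[:6].eval()
for p in vgg_features.parameters():
    p.requires_grad = False

def charbonnier(x, y, eps=1e-3):
    # smooth, L1-like penalty that stays differentiable at zero
    return torch.mean(torch.sqrt((x - y) ** 2 + eps ** 2))

def kl_divergence(mu, logvar):
    # KL between q(z|x) = N(mu, sigma^2) and the standard normal prior
    return -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())

def gradient_entropy(x, eps=1e-8):
    # entropy of the normalized image gradient, a sharpness-promoting autofocus metric
    gx = x[..., 1:, :] - x[..., :-1, :]
    gy = x[..., :, 1:] - x[..., :, :-1]
    g = torch.sqrt(gx[..., :, :-1] ** 2 + gy[..., :-1, :] ** 2) + eps
    h = g / g.sum()
    return -(h * torch.log(h)).sum()

def perceptual(x, y):
    # compare VGG-19 feature maps of corrected and motion-free patches
    x3, y3 = x.repeat(1, 3, 1, 1), y.repeat(1, 3, 1, 1)   # grayscale -> 3 channels
    return F.l1_loss(vgg_features(x3), vgg_features(y3))

def vae_loss(corrected, target, mu, logvar,
             w_kl=1e-2, w_ge=1e-4, w_percep=1e-1):        # weights are placeholders
    return (charbonnier(corrected, target)
            + w_kl * kl_divergence(mu, logvar)
            + w_ge * gradient_entropy(corrected)
            + w_percep * perceptual(corrected, target))
```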

Images were evaluated by normalized root-mean-squared error (NRMSE), structural similarity index (SSIM) and normalized mutual information (NMI); a sketch of these metrics is given below. Visual perception of the results was scored in a blinded fashion by two experienced radiologists with respect to motion artifacts (0: "no motion artifacts" to 3: "strong motion artifacts") and overall image quality (1: "non-diagnostic quality" to 4: "excellent image quality").
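A sketch of how these quantitative metrics can be computed, assuming images normalized to [0, 1]; the NMI follows the (H(X)+H(Y))/H(X,Y) definition, and the histogram bin count is an assumption:

```python
import numpy as np
from skimage.metrics import structural_similarity

def nrmse(ref, img):
    # root-mean-squared error normalized by the reference intensity range
    return np.sqrt(np.mean((ref - img) ** 2)) / (ref.max() - ref.min())

def nmi(ref, img, bins=64):
    # normalized mutual information (H(X)+H(Y))/H(X,Y) from the joint histogram
    joint, _, _ = np.histogram2d(ref.ravel(), img.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    h = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))   # Shannon entropy
    return (h(px) + h(py)) / h(pxy)

ref = np.random.rand(256, 256)   # stand-in for the motion-free reference
img = np.random.rand(256, 256)   # stand-in for the motion-corrected output
print(nrmse(ref, img), structural_similarity(ref, img, data_range=1.0), nmi(ref, img))
```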

Results and Discussion

Figs. 2 and 3 show qualitative comparisons of the proposed architectures (VAE, GAN) versus TV denoising in mildly and strongly moving subjects in the head and abdomen. Compared to the motion-affected images, the degree of motion artifacts was significantly reduced after correction by the networks. MedGAN provided the most effective reduction of motion artifacts, and motion-obstructed content was recovered. A blinded expert reading (Fig. 4) substantiates the markedly reduced motion artifact appearance and improved image quality.

The current study has limitations: only MR images of a single sequence type were considered, and correction was only performed on 2D real-valued magnitude images, i.e. through-plane motion correction is limited and no phase information is exploited yet. In the future, this will therefore be extended towards 3D multi-channel complex-valued motion correction.

Conclusion

This feasibility study shows the potential of deep learning networks for motion correction to restore near-realistic image content. High reader scores and evaluation metrics indicate the potential of the networks, which will be further investigated in the future.

Acknowledgements

No acknowledgement found.

References

1. Cheng JY, Zhang T, Ruangwattanapaisarn N, Alley MT, Uecker M, Pauly JM, Lustig M, Vasanawala SS. Free‐breathing pediatric MRI with nonrigid motion correction and acceleration. Journal of Magnetic Resonance Imaging 2015;42(2):407-420.
2. Cruz G, Atkinson D, Henningsson M, Botnar RM, Prieto C. Highly efficient nonrigid motion‐corrected 3D whole‐heart coronary vessel wall imaging. Magnetic Resonance in Medicine 2017;77(5):1894-1908.
3. Henningsson M, Koken P, Stehning C, Razavi R, Prieto C, Botnar RM. Whole‐heart coronary MR angiography with 2D self‐navigated image reconstruction. Magnetic Resonance in Medicine 2012;67(2):437-445.
4. Küstner T, Würslin C, Schwartz M, Martirosian P, Gatidis S, Brendle C, Seith F, Schick F, Schwenzer NF, Yang B. Self‐navigated 4D cartesian imaging of periodic motion in the body trunk using partial k‐space compressed sensing. Magnetic Resonance in Medicine 2017;78(2):632-644.
5. Maclaren J, Herbst M, Speck O, Zaitsev M. Prospective motion correction in brain imaging: a review. Magnetic Resonance in Medicine 2013;69(3):621-636.
6. Prieto C, Doneva M, Usman M, Henningsson M, Greil G, Schaeffter T, Botnar RM. Highly efficient respiratory motion compensated free‐breathing coronary MRA using golden‐step Cartesian acquisition. Journal of Magnetic Resonance Imaging 2015;41(3):738-746.
7. Skare S, Hartwig A, Martensson M, Avventi E, Engstrom M. Properties of a 2D fat navigator for prospective image domain correction of nodding motion in brain MRI. Magnetic Resonance in Medicine 2015;73(3):1110-1119.
8. Speck O, Hennig J, Zaitsev M. Prospective real-time slice-by-slice motion correction for fMRI in freely moving subjects. Magnetic Resonance Materials in Physics, Biology and Medicine 2006;19(2):55.
9. Wallace TE, Afacan O, Waszak M, Kober T, Warfield SK. Head motion measurement and correction using FID navigators. Magnetic Resonance in Medicine (Epub ahead of print).
10. Zaitsev M, Maclaren J, Herbst M. Motion artifacts in MRI: A complex problem with many partial solutions. Journal of Magnetic Resonance Imaging 2015;42(4):887-901.
11. Atkinson D, Hill DL, Stoyle PN, Summers PE, Keevil SF. Automatic correction of motion artifacts in magnetic resonance images using an entropy focus criterion. IEEE transactions on medical imaging 1997;16(6):903-910.
12. Ollier W, Sprosen T, Peakman T. UK Biobank: from concept to reality. Pharmacogenomics 2005;6(6):639-646.
13. Bamberg F, Kauczor H-U, Weckbach S, Schlett CL, Forsting M, Ladd SC, Greiser KH, Weber M-A, Schulz-Menger J, Niendorf T. Whole-body MR imaging in the German National Cohort: rationale, design, and technical background. Radiology 2015;277(1):206-220.
14. Cao X, Yang J, Wang L, Wang Q, Shen D. Non-rigid Brain MRI Registration Using Two-stage Deep Perceptive Networks. 2018; Paris. p 1176.
15. Johnson P, Drangova M. Motion correction in MRI using deep learning. 2018; Paris. p 4098.
16. Küstner T, Liebgott A, Mauch L, Martirosian P, Bamberg F, Nikolaou K, Gatidis S, Yang B, Schick F. Motion artifact quantification and localization for whole-body MRI. 2018; Paris. p 664.
17. Pawar K, Chen Z, Shah N, Egan G. Motion Correction in MRI using Deep Convolutional Neural Network. 2018; Paris. p 1174.
18. Armanious K, Nikolaou K, Gatidis S, Yang B, Küstner T. Retrospective correction of Rigid and Non-Rigid MR motion artifacts using GANs. arXiv preprint arXiv:180906276 2018.
19. Armanious K, Yang C, Fischer M, Küstner T, Nikolaou K, Gatidis S, Yang B. MedGAN: Medical Image Translation using GANs. arXiv preprint arXiv:180606397 2018.
20. Küstner T, Liebgott A, Mauch L, Martirosian P, Bamberg F, Nikolaou K, Yang B, Schick F, Gatidis S. Automated reference-free detection of motion artifacts in magnetic resonance images. Magnetic Resonance Materials in Physics, Biology and Medicine 2018;31(2):243-256.
21. Kingma DP, Ba J. Adam: A method for stochastic optimization. arXiv preprint arXiv:14126980 2014.
22. Barron JT. A more general robust loss function. arXiv preprint arXiv:170103077 2017.
23. McGee KP, Manduca A, Felmlee JP, Riederer SJ, Ehman RL. Image metric‐based correction (autocorrection) of motion effects: analysis of image metrics. Journal of Magnetic Resonance Imaging 2000;11(2):174-181.
24. Kingma DP, Welling M. Auto-encoding variational bayes. arXiv preprint arXiv:13126114 2013.
25. Johnson J, Alahi A, Fei-Fei L. Perceptual losses for real-time style transfer and super-resolution. European Conference on Computer Vision (ECCV). 2016. Springer. p 694-711.

Figures

Fig. 1: a) General deep learning architecture for retrospective motion correction. A VGG-19 loss network provides the perceptual loss (VAE) and style transfer loss (GAN) for the performed motion correction. Proposed network architectures: b) Variational autoencoder (VAE) with skip connections between encoder and decoder. c) Generative adversarial network (GAN) with a CasNet generator consisting of three concatenated UNets (encoder/decoder), a discriminator, and a style transfer feature extractor for the loss calculation.

Fig. 2: Comparison of rigid motion correction in the head of two subjects who were instructed to perform head tilting. A mild and a strong movement case are depicted. Motion-affected images are fed into the networks. The motion-corrected outputs of the proposed VAE and GAN are compared against TV denoising. Correction quality can be appreciated visually and quantitatively (NRMSE, SSIM, NMI) in comparison to the motion-free reference.

Fig. 3: Comparison of non-rigid motion correction in the abdomen of two subjects who were breathing freely. A mild and a strong movement case are depicted. Motion-affected images are fed into the networks. The motion-corrected outputs of the proposed VAE and GAN are compared against TV denoising. Correction quality can be appreciated visually and quantitatively (NRMSE, SSIM, NMI) in comparison to the motion-free reference.

Fig. 4: Blinded expert reading in terms of motion artifact appearance and overall perceived image quality. Statistical significance (Friedman test, p<0.05) between motion-affected and motion-corrected images is indicated for all reconstruction methods.

Tab. 1: MR acquisition parameters to create training database for motion artifacts.
