Motion Correction of Magnitude MR Images using Generative Adversarial Networks
Yuan Bian1, Ye Wang2, and Stanley Reeves1

1Electrical and Computer Engineering, Auburn University, Auburn, AL, United States, 2Computer Science and Software Engineering, Auburn University, Auburn, AL, United States

Synopsis

Motion during an MRI scan can reduce image quality through induced artifacts. We present a novel data-driven motion correction method for magnitude MR images using generative adversarial networks (GANs). A GAN (the pix2pix model) is trained to reduce motion artifacts and reconstruct motion-corrupted images through adversarial training between a generator and a discriminator, which forces the motion-corrected image to be close to the reference image. The training set consists of image pairs: motionless reference images and the corresponding motion-simulated images. The proposed method was validated on a simulated-motion test set and a real-motion (experimental) test set.

Introduction

Motion is an unavoidable issue in MRI: it induces image artifacts that degrade image quality. Two classes of MRI motion correction methods have been proposed: prospective motion correction and retrospective motion correction.1 Prospective techniques update the data acquisition strategy in real time by adaptively tracking the object position, which typically requires extra navigator pulse sequences, complex sampling patterns, special patient markers, or specialized hardware. Retrospective techniques usually postprocess raw (complex) MR scanner data and reconstruct the MR images after the data are fully acquired; however, the raw data are not always accessible in clinical applications. We present a data-driven motion correction method for magnitude MR images using the GAN-based pix2pix model,2 which requires no extra equipment, specialized sampling patterns, or raw data. A pix2pix model is trained on pairs of motion-free and motion-simulated MR images and tested on real motion-corrupted images and unseen motion-simulated images.

Methods

pix2pix model

The pix2pix model is an application of conditional GANs. A GAN comprises two parts, a generator and a discriminator. The discriminator learns to decide whether its input comes from the model distribution or from the data distribution, and the generator learns a mapping from random noise to the targets. For MR image motion correction, the pix2pix model learns a mapping from a motion-corrupted image x to a motionless image y, G: x → y. The generator G is trained to produce motion-corrected images that cannot be distinguished from motionless images by an adversarially trained discriminator D, while D is trained to distinguish the output of G (the motion-corrected image) from a real motionless image. This training procedure is diagrammed in Fig 1; a minimal code sketch follows below. We adopted the generator and discriminator architectures of the pix2pix model.2
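
As a minimal sketch of this adversarial objective (a hedged illustration, not the authors' exact code), one training step in PyTorch could look as follows; the small UNetGenerator and PatchDiscriminator classes are toy stand-ins for the real pix2pix architectures of ref. 2:

    import torch
    import torch.nn as nn

    class UNetGenerator(nn.Module):
        # Toy stand-in for the pix2pix U-Net generator (see ref. 2).
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 1, 3, padding=1))
        def forward(self, x):
            return self.net(x)

    class PatchDiscriminator(nn.Module):
        # Toy stand-in for the pix2pix PatchGAN; input is the 2-channel pair.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(2, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(64, 1, 4, padding=1))
        def forward(self, pair):
            return self.net(pair)          # patch-wise real/fake logits

    G, D = UNetGenerator(), PatchDiscriminator()
    bce = nn.BCEWithLogitsLoss()           # adversarial loss on logits
    l1 = nn.L1Loss()                       # pix2pix adds an L1 term toward y
    lam = 100.0                            # L1 weight from the pix2pix paper

    def train_step(x, y, opt_G, opt_D):
        # x: motion-corrupted image, y: motionless reference, shape (N,1,H,W).
        fake = G(x)
        # Discriminator: real pair (y, x) vs. fake pair (G(x), x).
        opt_D.zero_grad()
        d_real = D(torch.cat([y, x], dim=1))
        d_fake = D(torch.cat([fake.detach(), x], dim=1))
        loss_D = (bce(d_real, torch.ones_like(d_real))
                  + bce(d_fake, torch.zeros_like(d_fake)))
        loss_D.backward()
        opt_D.step()
        # Generator: fool D while staying close to the reference in L1.
        opt_G.zero_grad()
        d_fake = D(torch.cat([fake, x], dim=1))
        loss_G = bce(d_fake, torch.ones_like(d_fake)) + lam * l1(fake, y)
        loss_G.backward()
        opt_G.step()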

Dataset

A pix2pix model requires pairs of images as the training set: motionless images and corresponding motion-corrupted images. 8000 2D images were taken from the 3D datasets of 276 subjects in the Autism Brain Imaging Data Exchange (ABIDE) dataset to serve as the motionless training images.3 The corresponding motion-corrupted images were generated from these motionless images. Motion was simulated in k-space according to the properties of the Fourier transform. Each k-space line (along the phase-encode direction) was modeled with two random translations within ±12 pixels and one random rotation angle within ±12°. The central k-space lines (a randomly chosen block of 20 to 60 lines) were preserved without motion corruption to keep the basic image structure (see the simulation sketch below).
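
A minimal NumPy/SciPy sketch of this line-by-line k-space simulation (an illustration of the scheme described above, assuming 2D rigid motion; the function and parameter names are ours, not the authors'):

    import numpy as np
    from scipy.ndimage import rotate

    def simulate_motion(img, max_shift=12, max_rot=12, center_lines=40,
                        rng=None):
        # Assemble a corrupted k-space one phase-encode line at a time; each
        # line comes from the FFT of the image under a random rigid transform.
        rng = rng or np.random.default_rng()
        ny, nx = img.shape
        ky = np.fft.fftshift(np.fft.fftfreq(ny))[:, None]  # cycles/pixel
        kx = np.fft.fftshift(np.fft.fftfreq(nx))[None, :]
        k_corrupt = np.fft.fftshift(np.fft.fft2(img))
        lo = ny // 2 - center_lines // 2                   # motion-free block
        hi = lo + center_lines
        for line in range(ny):
            if lo <= line < hi:
                continue                                   # keep center lines
            dy, dx = rng.uniform(-max_shift, max_shift, size=2)
            theta = rng.uniform(-max_rot, max_rot)
            # Rotation applied in image space; translation applied as a
            # linear phase ramp in k-space (Fourier shift theorem).
            k_rot = np.fft.fftshift(
                np.fft.fft2(rotate(img, theta, reshape=False, order=1)))
            phase = np.exp(-2j * np.pi * (ky * dy + kx * dx))
            k_corrupt[line, :] = (k_rot * phase)[line, :]
        return np.abs(np.fft.ifft2(np.fft.ifftshift(k_corrupt)))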

Two test sets were used to validate the proposed method. The first, the simulation test set, includes 1780 simulated motion-corrupted images generated by applying the same motion model to motion-free images from another subset of the ABIDE dataset (53 subjects). The second, the experimental test set, includes 200 real motion-corrupted images from 2 subjects. Six datasets were acquired with a standard vendor-supplied MP-RAGE sequence (1 mm isotropic resolution, flip angle 9°) on a Siemens Verio open-bore 3T scanner. The subjects were instructed to hold still or to move their heads, so as to acquire motionless (reference) images and real motion-corrupted images, respectively.

Training

For pix2pix model training, a minibatch Adam solver was used with learning rate 0.0002 and momentum parameters β1=0.5, β2=0.999. The network was trained for 200 epochs (40 hours) on an NVIDIA Tesla P100 GPU.
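
Continuing the PyTorch sketch above (with G and D as defined there), these reported hyperparameters map onto the optimizers as:

    import torch

    # Learning rate and betas as reported in the text.
    opt_G = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
    opt_D = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))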

Results

Fig 2 and Fig 3 show several examples of applying the proposed motion correction method to simulated and real motion-corrupted images, respectively. The motion-corrected images indicate that our method was able to eliminate motion artifacts while preserving sharp boundaries. In Fig 2, the corrected images recovered almost all of the detail of the reference images. In Fig 3, blurred edges became sharp and the corrected images recovered most of the detail of the reference images, although some differences between the reference and corrected images remained.

Discussion

Two factors may explain the differences between the reference images and the corrected images. First, the subjects' position changed during the experiment: even a small position change between the two scans leads to a mismatch between the motion-corrupted images and the reference images. Second, information is lost to through-plane motion. Some through-plane motion is almost inevitable when subjects perform motion, so some information from the reference plane is lost and some information from other planes is mixed in.

Conclusion

A data-driven motion correction method for magnitude MR images using generative adversarial networks has been developed. Simulations and experiments show that the proposed method can effectively correct and reconstruct motion-corrupted images.

Acknowledgements

No acknowledgement found.

References

1. Godenschweger F, Kägebein U, Stucht D, Yarach U, Sciarra A, Yakupov R, Lüsebrink F, Schulze P, Speck O. Motion correction in MRI of the brain. Physics in Medicine and Biology, 2016;61(5):R32-R56.

2. Isola P, Zhu J, Zhou T, Efros A. Image-to-Image Translation with Conditional Adversarial Networks. arXiv:1611.07004v2, 2017.

3. Di Martino A, et al. The autism brain imaging data exchange: towards a large-scale evaluation of the intrinsic brain architecture in autism. Molecular Psychiatry, 2014;19(6):659–667.

Figures

Fig 1 Training a pix2pix model to map a motion-corrupted image to a motionless image. The discriminator D is trained to distinguish the pair of motion-corrected image G(x) and motion-corrupted image x from the pair of motionless image y and motion-corrupted image x. The generator G is trained to fool the discriminator D.

Fig 2 Results of motion correction on three examples from the simulation test set. The first column shows motion-corrupted images generated from the images in the second column; the second column shows the motionless reference images; the third column shows the motion-corrected images produced by the pix2pix model.

Fig 3 Results of motion correction on two examples from the experimental test set. The first column shows motion-corrupted images acquired while the subject moved; the second column shows no-motion reference images acquired while the subject held still; the third column shows the motion-corrected images produced by the pix2pix model.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019): 4857