Jongyeon Lee1, Byungjai Kim1, Wonil Lee1, and HyunWook Park1
1Korea Advanced Institute of Science and Technology, Daejeon, Korea, Republic of
Synopsis
Deep learning techniques have been applied to
motion artifact correction without motion estimation or tracking. We previously
studied a motion correction method for multi-contrast brain MRI using NMI
maximization and a multi-input neural network. However, because the previous work
suffered from a prolonged alignment time and an inconvenient training procedure, we adopt
a registration network to reduce the alignment time and a multi-output neural
network that needs to be trained only once. The proposed method successfully reduces
motion artifacts in the multi-contrast images.
Introduction
Motion during an MRI scan typically degrades image quality
by producing motion artifacts1. To reduce these artifacts,
various motion correction techniques have been developed. Prospective motion
correction methods track motion during acquisition using a navigator2 or external sensors3 and update the sequence in real time4 to keep the patient's relative position stationary. In contrast, retrospective
motion correction methods estimate motion information after the scan using
self-navigated trajectories5 to correct inconsistencies among
k-space lines. However, these techniques typically lack
generality due to their dependency on the MRI system or on specialized pulse sequences,
and they require additional data acquisition to correct motion artifacts6.
To overcome these issues in the multi-contrast brain MRI environment, we
previously introduced a deep learning technique that corrects motion-corrupted
images using motion-free images of other contrasts7. This work
utilized multi-contrast MRI with three contrasts, T1-weighted
(T1w), T2-weighted (T2w), and T2-weighted FLuid-Attenuated Inversion Recovery
(FLAIR)8, applied multi-modal registration by normalized
mutual information (NMI) maximization, and used a motion correction network.
However, the previous work suffered from a prolonged runtime for the
registration and the inconvenience of training a separate network for
each contrast. To resolve these problems, we extend this work to a new
framework that utilizes a registration network to reduce the time consumed in NMI
maximization and employs a multi-output motion correction network, which can be
trained only once regardless of the number of contrasts.
Methods
As illustrated in Figure 1-a, the proposed
framework consists of two parts: image alignment and motion correction. To
correct misalignment between multi-contrast images due to motion, the image
alignment is performed with an unsupervised registration network, which uses
the normalized cross-correlation9 (NCC) loss to yield the
motion parameters that minimize the loss among the multi-contrast input images,
followed by NMI maximization to fine-tune the alignment. Their combination
facilitates faster and more accurate image alignment. After the aligning
process, the motion correction network, which consists of three encoders, one
transformer, and three decoders, corrects motion artifacts in the input images using
the SSIM10 loss and the VGG11 loss with a ratio of 1:0.001.
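The weighted training objective can be sketched as follows. This is a minimal NumPy illustration, assuming a single-scale (whole-image) SSIM and a placeholder scalar for the VGG feature distance; it is not the authors' exact Keras implementation:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Single-scale, whole-image SSIM (a simplification of the windowed SSIM loss)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def total_loss(pred, target, vgg_feature_loss=0.0):
    """Combine the SSIM loss with a perceptual term at the 1:0.001 ratio
    stated in the abstract. `vgg_feature_loss` is a hypothetical stand-in
    for a real VGG feature distance, which requires a pretrained network."""
    return (1.0 - ssim_global(pred, target)) + 0.001 * vgg_feature_loss
```

In practice the SSIM term would be computed with local windows (e.g. `tf.image.ssim`) and the perceptual term from intermediate VGG feature maps; the sketch only shows how the two terms are weighted.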
This multi-input and multi-output network structure enables effective
correction of motion artifacts for every contrast. The detailed network architecture is
described in Figure 1-b. For training, the Adam
optimizer was used with a learning rate of 0.001. The implementation was
performed on TensorFlow 2.0 with the Keras library.
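The two alignment criteria can be illustrated in isolation. The following is a minimal NumPy sketch of the NCC and NMI measures themselves; in the actual framework the NCC loss drives the registration network and the NMI is maximized iteratively over motion parameters:

```python
import numpy as np

def ncc(x, y):
    """Zero-mean normalized cross-correlation; a registration loss is
    commonly taken as 1 - ncc, so that minimizing the loss maximizes NCC."""
    xm, ym = x - x.mean(), y - y.mean()
    return (xm * ym).sum() / (np.sqrt((xm ** 2).sum() * (ym ** 2).sum()) + 1e-12)

def nmi(x, y, bins=32):
    """Normalized mutual information from a joint intensity histogram,
    the quantity maximized during alignment fine-tuning."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```

Perfectly aligned identical images give NCC = 1 and NMI = 2, while misaligned or independent images score lower, which is what makes both quantities usable as alignment objectives across contrasts.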
2D motion-free axial
brain MR images with three contrasts were obtained from 41 healthy subjects
(age 22.7 ± 2.8 years) with a slice thickness
of 5 mm using a 3.0T MRI system (MAGNETOM Verio, Siemens)
and a 32-channel head coil. The field of view (FOV) was 220 mm × 220 mm, and the
image matrix size was 256 × 256. From 262 motion-free slices, 193 slices were
used for training, 34 slices for validation, and 35 slices for testing.
To train, validate, and test the network, we simulated motion
artifacts depending on the sequences used in the study and synthesized
datasets with the simulated motion artifacts. Figure 2 shows example cases of
the random motion simulation for a) a T1w spin-echo image and b) a T2w turbo
spin-echo image. The training process was performed only with the synthesized
training dataset.
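The simulation details are sequence-dependent, but the basic mechanism of rigid in-plane motion corrupting segments of k-space can be sketched as follows. This is an assumed, simplified translation-only model for illustration; the study's simulation additionally reflects the acquisition order of the T1w spin-echo and T2w turbo spin-echo sequences:

```python
import numpy as np

def simulate_translation_artifact(img, max_shift=3.0, n_events=4, seed=0):
    """Corrupt an image with random in-plane translations applied to
    contiguous blocks of phase-encoding lines. A translation in image
    space is a linear phase ramp in k-space, so each block of k-space
    lines acquired during one motion state gets its own phase ramp."""
    rng = np.random.default_rng(seed)
    ny, nx = img.shape
    k = np.fft.fftshift(np.fft.fft2(img))
    ky = np.fft.fftshift(np.fft.fftfreq(ny))[:, None]  # cycles/pixel, phase-encode axis
    kx = np.fft.fftshift(np.fft.fftfreq(nx))[None, :]
    bounds = np.sort(rng.integers(0, ny, n_events))    # random motion-event times
    for start, stop in zip(np.r_[0, bounds], np.r_[bounds, ny]):
        dy, dx = rng.uniform(-max_shift, max_shift, 2)  # shift held during this segment
        phase = np.exp(-2j * np.pi * (ky * dy + kx * dx))
        k[start:stop, :] *= phase[start:stop, :]
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))
```

The inconsistency between segments, rather than the translation itself, is what produces the characteristic ghosting and ringing along the phase-encoding direction.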
Tests
were performed using both the synthesized test dataset and an in-vivo test dataset,
which contains random real motion artifacts from 5 subjects (age 25.2 ± 1.8 years).
To investigate the effectiveness of each process in the proposed method, we
performed experiments that correct motion artifacts without the image
alignment, with the network registration only, and with the whole image
alignment process including the NMI maximization for the synthesized test
dataset.
Results
Figure
3 shows the results for example test images from a) the synthesized dataset
and b) the real motion dataset, where the image of the target contrast is
motion-corrupted and the other contrast images are motion-free. For the real
motion dataset, the motion artifacts are visibly reduced, as shown in the
areas within the colored boxes.
The
importance of the registration network and the NMI maximization is presented
in Figure 4 with the average SSIM and NMI scores for the motion-corrected
images of the synthesized test dataset. The test scores without any image
alignment are the lowest, whereas the test scores with the whole image
alignment process are the highest. Example images for these experiments are
also presented in Figure 5, where the experiment with the whole image alignment
shows the best performance among the experiments for all contrasts, as
indicated by the highest SSIM scores and the lowest NRMSE scores of the motion-corrected images.
Discussion and Conclusion
Although our method is currently limited to the correction
of intra-slice motion artifacts in the given contrast images, it can
likely be extended to handle through-plane motion artifacts and other
contrast images.
We proposed a framework with image alignment and motion
correction using deep learning, and the proposed method successfully reduced
motion artifacts for all contrasts, as the quantitative scores and the tests
with the real motion datasets demonstrate.
Acknowledgements
This research was supported by a grant of the Korea
Health Technology R&D Project through the Korea Health Industry Development
Institute (KHIDI), funded by the Ministry of Health & Welfare, Republic of
Korea (grant number: HI14C1135), and by an Institute for Information &
communications Technology Promotion (IITP) grant funded by the Korea government
(MSIT) (No. 2017-0-01778).
References
- Zaitsev M, Maclaren J, Herbst M. Motion artifacts in MRI: A complex problem with many partial solutions. Journal of Magnetic Resonance Imaging. 2015;42(4):887-901.
- Firmin D, Keegan J. Navigator echoes in cardiac magnetic resonance. Journal of Cardiovascular Magnetic Resonance. 2001;3(3):183-193.
- Zaitsev M, Dold C, Sakas G, Hennig J, Speck O. Magnetic resonance imaging of freely moving objects: prospective real-time motion correction using an external optical motion tracking system. Neuroimage. 2006;31(3):1038-1050.
- Herbst M, Maclaren J, Weigel M, Korvink J, Hennig J, Zaitsev M. Prospective motion correction with continuous gradient updates in diffusion weighted imaging. Magnetic Resonance in Medicine. 2012;67(2):326-338.
- Pipe JG. Motion correction with PROPELLER MRI: application to head motion and free‐breathing cardiac imaging. Magnetic Resonance in Medicine. 1999;42(5):963-969.
- Godenschweger F, Kägebein U, Stucht D, et al. Motion correction in MRI of the brain. Physics in Medicine & Biology. 2016;61(5):R32.
- Lee J, Kim B, Jeong N, Park H. Motion correction for a multi-contrast brain MRI using a multi-input neural network. In Proceedings of the 28th Annual Meeting of ISMRM, 2020.
- Lu H, Nagae‐Poetscher LM, Golay X, Lin D, Pomper M, Van Zijl PC. Routine clinical brain MRI sequences for use at 3.0 Tesla. Journal of Magnetic Resonance Imaging. 2005;22(1):13-22.
- Zhao F, Huang Q, Gao W. Image matching by normalized cross-correlation. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing. 2006.
- Wang Z, Bovik AC, Sheikh HR, Simoncelli, EP. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing. 2004;13(4):600-612.
- Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations. 2015.