
Towards motion-robust MRI – Autonomous motion timing and correction during MR scanning using multi-coil data and a deep-learning neural network
Rafi Brada¹, Michael Rotman¹, Ron Wein¹, Sangtae Ahn², Itzik Malkiel¹, and Christopher J. Hardy²

¹GE Global Research, Herzliya, Israel; ²GE Global Research, Niskayuna, NY, United States

Synopsis

We propose a method for timing and correcting for rigid-body in-plane patient motion during an MRI scan. The motion is detected using differences between coil-intensity-corrected images from different coils in the receiver array together with the scan-order information. The method allows for the detection and timing of multiple movements during the scan. For each scan where motion was detected, k-space data are divided into different motion states, which are used as input to a deep neural network whose output is a motion-corrected image. The system shows promising results on MR data containing simulated and real motion.

Introduction

Patient motion during MRI exams is a significant clinical problem, sometimes rendering scans clinically unusable and often requiring rescans. Over the years, multiple methods have been proposed for detecting and correcting patient motion during a scan, most recently including methods based on deep-learning networks [1-4]. We propose a new two-step method that first detects and times patient motion during a scan and then corrects for it, without requiring any additional hardware or navigator sequences.

Methods

Figure 1a shows an example of a phase-encode order for a fast spin echo (FSE) sequence. Motion during the scan is detected and timed by Fourier transforming a pair of coil-intensity-corrected images from two of the coils in the receiver array back into k-space, calculating their difference, projecting along the frequency-encode direction, and finding the locations of peaks in the resulting profile. Figure 1b shows a color map of the pre- and post-motion regions of k-space when a discrete motion occurred at scan step 144. Figure 1c shows the corresponding projection, illustrating that the peaks correspond to borders between the two motion states. The method is extended to multiple motion steps by zero-filling partial k-space and excluding boundaries with the zero-filled regions from consideration.
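For illustration, the boundary-finding step might look like the following minimal NumPy/SciPy sketch. This is not the authors' implementation: the function name detect_motion_boundaries, the choice of axis 1 as the frequency-encode direction, the peak threshold height_frac, and the assumption that scan_order is a permutation mapping scan step to phase-encode line are all assumptions for the sake of the example.

    import numpy as np
    from scipy.signal import find_peaks

    def detect_motion_boundaries(img_a, img_b, scan_order, height_frac=0.5):
        # img_a, img_b: coil-intensity-corrected complex images from two coils
        # scan_order:   permutation array, scan_order[step] = phase-encode line
        # Transform each coil image back into k-space
        k_a = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(img_a)))
        k_b = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(img_b)))
        # Difference between coils, projected along the frequency-encode
        # direction (assumed here to be axis 1)
        profile = np.abs(k_a - k_b).sum(axis=1)
        # Peaks mark boundaries between k-space regions acquired in
        # different motion states (cf. Fig. 1c)
        lines, _ = find_peaks(profile, height=height_frac * profile.max())
        # Map boundary phase-encode lines back to scan steps via the
        # scan order (argsort inverts the permutation)
        step_of_line = np.argsort(scan_order)
        return np.sort(step_of_line[lines])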

This information is then used to correct motion artifacts in the image as follows. The motion-segregated portions of k-space are grouped into motion-state 1 (chosen as the state with the dominant signal at the center of k-space) and motion-state 2 (combining all other segregated k-space regions). Each group is zero-filled, transformed to the image domain, and coil-combined before being fed as a separate complex input into the deep-learning network of Fig. 2. The network cascades ten processing blocks, each followed by a data-consistency term that ensures the k-space region from motion-state 1 is not changed by the network. In the basic processing unit (Fig. 2b), motion-state 2 is passed through a ResNet block before being concatenated as additional channels onto the "main" Image-1 channels, which are then fed to a U-net block, followed by the data-consistency block. Four motion-correction models were trained (one for motion in each quartile Q1-Q4 of k-space, according to the scan order) on a simulated rigid-body-motion dataset of about 6000 images, each containing 2-3 randomly generated patient movements, where each movement comprises a random translation of up to 10 pixels in any direction and a random small rotation.
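The data-consistency operation described above can be sketched in PyTorch as follows. This is a minimal illustration under stated assumptions, not the authors' code: the tensor names, the centered-FFT convention, and the single-image (post-coil-combination) formulation are assumptions.

    import torch

    def data_consistency(x_pred, k_state1, mask1):
        # x_pred:   complex image estimate output by the U-net block
        # k_state1: measured k-space of motion-state 1 (zero elsewhere)
        # mask1:    boolean mask of k-space lines acquired in motion-state 1
        k_pred = torch.fft.fftshift(torch.fft.fft2(torch.fft.ifftshift(x_pred)))
        # Overwrite the network's k-space with the measured motion-state-1
        # lines, so those data pass through the block unchanged
        k_out = torch.where(mask1, k_state1, k_pred)
        return torch.fft.fftshift(torch.fft.ifft2(torch.fft.ifftshift(k_out)))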

The motion detection/timing algorithm was tested on a set of 6000 images with simulated motion at random timings. The motion-correction network was tested on a simulated-motion dataset of 500 scans for each quartile of motion timing and on volunteer scans containing real head motion.

Results

Figure 3 plots simulated vs. measured motion timings. The algorithm was able to detect and time motion to within several phase-encodes (SD = 2.8 steps). Figure 4 shows an example of a motion-corrupted image repaired by the network of Fig. 2. Figure 5 shows the normalized mean square error (NMSE) of repaired images relative to ground truth, as a function of the timing of the first motion step. The average NMSE was 8×10⁻³ for Q1, 5.9×10⁻³ for Q2, 1×10⁻³ for Q3, and 1.4×10⁻⁴ for Q4. In the most difficult cases, with motion near the center of k-space, small residual motion artifacts sometimes remained visible in the repaired images.
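The abstract does not spell out its exact NMSE normalization; a common definition, used here purely as an assumption for reference, is:

    import numpy as np

    def nmse(x, x_ref):
        # Normalized mean square error of image x relative to the
        # motion-free ground-truth image x_ref
        return np.sum(np.abs(x - x_ref) ** 2) / np.sum(np.abs(x_ref) ** 2)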

Discussion

The proposed motion-detection algorithm can detect and time patient motion (including multiple motion steps) during the scan with a high degree of accuracy. The proposed motion-correction algorithm makes use of the known motion timings by breaking the k-space data into internally consistent parts and feeding them as separate inputs to a deep neural network that calculates a corrected image. This allows the motion-correction network to include a data-consistency term constraining the reconstructed image to remain consistent with the part of k-space containing the most signal energy. The most difficult cases to correct were those where one of the motion steps occurred near the center of k-space (the end of Q1 and the beginning of Q2 for the scan order of Fig. 1a).

Conclusion

This work presents a novel approach to detecting patient motion during a scan and reconstructing a clinically usable image. It opens the door to multiple strategies for overcoming motion. Knowing the motion timing enables a novel deep-network architecture with a data-consistency term that performs better than deep-learning solutions that operate without knowledge of the timing.

Acknowledgements

No acknowledgement found.

References

  1. K Pawar, et al. Motion correction in MRI using deep convolutional neural network. Proc ISMRM 2018, Paris. p 1174.
  2. K Sommer, et al. Correction of motion artifacts using a multi-resolution fully convolutional neural network. Proc ISMRM 2018, Paris. p 1175.
  3. S Braun, et al. Wasserstein GAN for motion artifact reduction of MR images. Proc ISMRM 2018, Paris. p 4093.
  4. P Johnson, et al. Motion correction in MRI using deep learning. Proc ISMRM 2018, Paris. p 4098.

Figures

Figure 1 – (a) MRI scan order for a 256x256 FSE image with an echo-train length of 8. Note the period of 8 scan steps before the adjacent phase-encode line in k-space is scanned. (b) Map indicating in green the regions of k-space that were scanned before motion occurred (at scan step 144) and in red the regions that were scanned after motion. (c) The detected peaks in the absolute value of the difference in k-space between two receiver coils, projected along the frequency-encoding direction. The peaks are located at the boundaries between the red and green areas.

Figure 2 – (a) The motion-correction deep neural network architecture uses 10 cascaded blocks ("Iterations"). (b) In each Iteration, Image 1 contains the dominant portion of the motion-consistent k-space data and Image 2 the remaining k-space. Image 2 is passed through a ResNet before being concatenated as additional channels onto Image 1 and passed on to the U-net and data-consistency blocks. (c) The data-consistency block outputs an image in which the k-space lines corresponding to the original Image-1 input have overwritten the calculated k-space.

Figure 3 – Time step measured by the method of Fig. 1, plotted against the true simulated step, for 6000 different randomly generated motions. Measured steps were within several phase-encodes of the true simulated steps (SD = 2.8).

Figure 4 – Example of motion correction by the network of Fig. 2, using the motion detection and timing method of Fig. 1. (a) Motion-corrupted image produced by a volunteer instructed to move his head during the scan; (b) output of the motion-correction network.

Figure 5 – Results of correcting simulated-motion test sets containing two or three random motions during the scan: NMSE (relative to corresponding motion-free ground-truth images) of images corrected using the network of Fig. 2, as a function of the timing of the first motion step. The average NMSE was 8×10⁻³ for Q1, 5.9×10⁻³ for Q2, 1×10⁻³ for Q3, and 1.4×10⁻⁴ for Q4.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019) 4438