3393

Iteration-based motion compensation method for multi-shot diffusion imaging
Zhongbiao Xu1, Rongli Zhang2,3, Yingjie Mei1,4, Zhifeng Chen1, Yaohui Wang5, Ed X. Wu6, Feng Huang7, and Yanqiu Feng1

1School of Biomedical Engineering, Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China, 2School of Medicine, South China University of Technology, Guangzhou, China, 3Guangdong General Hospital, School of Medicine, South China University of Technology, Guangzhou, China, 4Philips Healthcare, Guangzhou, China, 5Division of Superconducting Magnet Science and Technology, Institute of Electrical Engineering, Chinese Academy of Sciences, Beijing, China, 6Laboratory of Biomedical Imaging and Signal Processing, University of Hong Kong, Hong Kong, China, 7Neusoft Medical System, Shanghai, China

Synopsis

The multi-shot EPI technique is vulnerable to patient motion. Although CIRIS, proposed by our group, tackles infrequent macroscopic motion in multi-shot EPI by clustering and registration, it cannot handle frequent motion (e.g., shot-wise motion). In this work, an iterative motion-compensation framework was introduced to correct for frequent motion during multi-shot acquisition. Simulation experiments demonstrated that, compared with CIRIS, the proposed method achieves improved image quality in the presence of infrequent motion and can even correct for shot-wise motion.

Purpose

Multi-shot EPI has been widely used to overcome the shortcomings of large distortion and low spatial resolution in single-shot EPI acquisition. However, this technique is vulnerable to patient motion. Minuscule motion induces inter-shot phase variations, resulting in ghosting artifacts. Macroscopic motion causes pixel mismatch, leading to image blurring and artifacts. MUSE1 and IRIS2 have well addressed the inter-shot phase variations caused by minuscule motion, but neglect macroscopic motion. Although our previous work CIRIS3 addresses macroscopic motion by clustering and registration, it fails in the presence of frequent motion (e.g., shot-wise motion). In this work, an iterative reconstruction framework4 was introduced to address frequent motion during multi-shot EPI acquisition.

Methods

Multi-shot EPI with a navigator2 was used for data acquisition. The proposed method integrated the phase and motion information estimated from the navigators into an iterative reconstruction framework4 to correct for motion. Fig. 1 presents the flowchart of the proposed method. In each iteration, the phase and motion information from the navigator of each shot was first applied to the image from the last iteration (initialized to zero) to generate a diffusion image for each shot; these shot images differ in phase and spatial position. Then, the shot images were weighted by the coil sensitivities and folded to obtain an estimated aliased image for each shot. After that, the multi-channel aliasing residuals of each shot were computed as the differences between the estimated aliased images and the aliased images from the measured data. These multi-channel aliasing residuals were unfolded and combined to generate a residual image for each shot, all sharing the same phase and position. Finally, these residual images were added to the image from the last iteration to improve image quality. This process was repeated until the image converged. In this study, the iteration number was set to 20.
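The update loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the normalization by the number of shots, and the representation of the per-shot rigid motion as caller-supplied forward/inverse callables are all assumptions for the sketch.

```python
import numpy as np

def reconstruct(y_meas, coils, masks, phases, motion_fwd, motion_inv, n_iter=20):
    """Hypothetical sketch of the iterative motion-compensated update.

    y_meas     : (n_shots, n_coils, ny, nx) measured per-shot k-space data
    coils      : (n_coils, ny, nx) coil sensitivity maps
    masks      : (n_shots, ny, nx) per-shot k-space sampling masks (folding)
    phases     : (n_shots, ny, nx) unit-magnitude navigator phase maps
    motion_fwd : list of callables applying each shot's rigid motion
    motion_inv : list of callables applying the inverse motion
    """
    n_shots = y_meas.shape[0]
    x = np.zeros(y_meas.shape[2:], dtype=complex)  # initial estimate = 0
    for _ in range(n_iter):
        update = np.zeros_like(x)
        for s in range(n_shots):
            # 1) Apply this shot's motion and navigator phase to the estimate.
            x_shot = motion_fwd[s](x) * phases[s]
            # 2) Weight with coil sensitivities, then fold via a masked FFT.
            y_est = masks[s] * np.fft.fft2(coils * x_shot)
            # 3) Per-channel aliasing residual against the measured data.
            resid_k = y_meas[s] - y_est
            # 4) Unfold and coil-combine the residual in image space.
            resid = np.sum(np.conj(coils) * np.fft.ifft2(resid_k), axis=0)
            # 5) Remove the shot phase and undo the motion so all shot
            #    residuals share the same phase and spatial position.
            update += motion_inv[s](resid * np.conj(phases[s]))
        # 6) Add the combined residual to refine the estimate.
        x = x + update / n_shots
    return x
```

With identity motion, unit coil sensitivities, and full sampling, each iteration simply replaces the estimate with the measured image, so the loop reduces to a fixed point after one pass; the undersampled, multi-shot case converges gradually instead.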

To evaluate the performance of the proposed method, a multi-shot static brain dataset was acquired on a Philips Achieva 3.0 Tesla scanner (Philips Healthcare, Best, The Netherlands) using an 8-channel head coil. The acquisition parameters included: in-plane resolution = 1.0 × 1.0 mm2, number of signal averages (NSA) = 2, number of shots = 6, b-value = 800 s/mm2, and number of diffusion gradients = 6. The motion-corrupted six-shot datasets with six diffusion directions were generated as follows. First, random rotational and translational motion was simulated and applied to the gold-standard tensors calculated from the acquired static dataset. Then, a diffusion image was generated using the tensors with motion and the corresponding b-matrix. After that, the phase map from the in vivo data and the eight coil sensitivities were applied to the diffusion image to generate multi-channel diffusion data. Finally, the data of one shot were extracted from the k-space of the multi-channel diffusion images. The k-space datasets for the other shots and directions were generated in the same way. Navigator data were obtained by downsampling these multi-channel diffusion images to a 72 × 36 matrix. NSA was set to 2, and Gaussian noise with an SNR of 10 was added to the k-space in the simulation. Two experiments with different motion frequencies were simulated: 1) two movements per direction; 2) shot-wise random motion. The rotation ranged from –10° to 10°, and the translation ranged from –3 to 3 pixels.
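The final step of the simulation pipeline, splitting a motion-corrupted multi-channel diffusion image into interleaved per-shot k-space data, might look like the sketch below. The function name and the assumption of an interleaved phase-encode sampling pattern are illustrative choices, not taken from the abstract.

```python
import numpy as np

def simulate_shot_kspace(img, phase, coils, n_shots):
    """Hypothetical sketch: generate per-shot k-space data from one
    diffusion image, a phase map, and coil sensitivity maps.

    img     : (ny, nx) complex diffusion image (already motion-corrupted)
    phase   : (ny, nx) in vivo phase map for this shot
    coils   : (n_coils, ny, nx) coil sensitivity maps
    n_shots : number of interleaves (6 in the experiments above)
    """
    # Apply phase map and coil sensitivities, then go to k-space.
    full_k = np.fft.fft2(coils * (img * phase))
    shots = []
    for s in range(n_shots):
        # Extract this shot's interleaved phase-encode lines.
        mask = np.zeros(full_k.shape[-2:])
        mask[s::n_shots, :] = 1
        shots.append(mask * full_k)
    return shots
```

Repeating this per shot and per diffusion direction (with a fresh motion state each time) yields the motion-corrupted six-shot datasets; the shot masks tile the full k-space, so the extracted shots sum back to the fully sampled data.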

Results

Fig. 2 shows the results of the simulated experiment with two movements per direction for the different methods. Compared with IRIS, both CIRIS and the proposed method corrected for the motion. However, the proposed method outperformed CIRIS in terms of image quality. The results of the shot-wise motion experiment are presented in Fig. 3. CIRIS cannot deal with such frequent motion, so its results are not shown. The proposed method successfully corrected for the shot-wise motion.

Discussion

The proposed method estimated motion parameters and phase information from the navigators and then integrated this information into the iterative reconstruction framework to correct for motion. Compared with CIRIS, the proposed method avoids the clustered reconstruction, which requires sufficient shots in one cluster, and can thus deal with frequent motion. Compared with the motion-corrected k-space reconstruction5, the proposed method corrects for motion in the image domain and thus has the ability to correct for non-rigid motion.

Conclusion

Compared with CIRIS, the proposed method improves image quality in the presence of infrequent motion and can even correct for shot-wise motion.

Acknowledgements

None.

References

[1] Chen N-k, et al. NeuroImage 2013;72:41-47.

[2] Jeong H-K, et al. MRM 2013;69:793-802.

[3] Xu Z-b, et al. MP 2018, DOI: 10.1002/mp.13232.

[4] Nielsen T, et al. MRM 2011;66(5):1339-1345.

[5] Dong Z-j, et al. MRM 2017;79(4):1992-2002.

Figures

Figure 1 Flowchart of the proposed method.

Figure 2 Results of the simulated experiment with two movements per direction, reconstructed with different methods. CIRIS and the proposed method both correct for the motion artifacts; however, the proposed method achieves better image quality.

Figure 3 Results of the simulated experiment with shot-wise motion, reconstructed with different methods. CIRIS cannot correct for shot-wise motion, so its results are not shown. The proposed method successfully removes the shot-wise motion artifacts.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)