
Retrospective motion compensation for spiral brain imaging with a deep convolutional neural network
Quan Dou1, Zhixing Wang1, Xue Feng1, John P. Mugler2, and Craig H. Meyer1
1Biomedical Engineering, University of Virginia, Charlottesville, VA, United States, 2Radiology & Medical Imaging, University of Virginia, Charlottesville, VA, United States

Synopsis

Head motion can severely degrade the quality of MR brain images. In this study, a deep convolutional neural network was implemented to retrospectively compensate for motion in spiral imaging. The network was trained on images with simulated motion artifacts and tested on both simulated and in vivo data. Image quality improved after motion correction.

Introduction

Subject motion can introduce blurring and artifacts into the resulting MR images. Both prospective and retrospective motion compensation methods have been proposed over the past 20 years to address this issue. Recently, several deep learning-based approaches have been presented for motion correction with Cartesian sampling1,2. Since spiral sampling can achieve higher scan efficiency, we aimed to develop a deep convolutional neural network (DCNN) to remove motion artifacts from spiral brain images.

Methods

An open-source data set (http://www.brain-development.org) containing T2-weighted, Cartesian TSE magnitude images for 578 subjects was used. The imaging parameters were TR = 5.7 s, TE = 100 ms, in-plane field of view = 240 × 240 mm2, matrix size = 256 × 256, and echo train length (ETL) = 16. Data from 347 of the subjects were used for training and validation, and data from the remaining 231 subjects were used for testing the network performance.

To simulate motion artifacts for spiral imaging, constant-density, variable-density, and dual-density spiral trajectories were generated based on the image FOV and resolution. After combining k-space spiral interleaves from different motion states, an adjoint nonuniform fast Fourier transform (NUFFT)3,4 was applied to the combined k-space to obtain a motion-corrupted image, as shown in Figure 1. The motion simulation parameters are summarized in Table 1, including the ranges and distributions of the translation, the rotation, and the fraction of motion-affected spiral interleaves. To improve network robustness and save training time, augmentations consisting of random shift, rotation, horizontal/vertical flip, and contrast stretching were applied before the simulation.
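For illustration, the following is a minimal sketch of this simulation step using SigPy4 and SciPy. The spiral trajectory generation, the random sampling of motion parameters (Table 1), and density compensation are omitted for brevity, and the function and argument names are hypothetical rather than taken from our implementation.

import numpy as np
import sigpy as sp
from scipy.ndimage import rotate, shift

def simulate_motion_corrupted(image, interleaves, motions):
    """Combine spiral interleaves sampled in different rigid motion
    states, then reconstruct with the adjoint NUFFT.

    image      : (N, N) real magnitude image
    interleaves: list of (n_pts, 2) k-space coordinates per interleaf,
                 in SigPy's convention (units of pixels, ~[-N/2, N/2))
    motions    : list of (dy, dx, theta_deg), one state per interleaf;
                 (0, 0, 0) marks an interleaf unaffected by motion
    """
    ksp = []
    for coord, (dy, dx, theta) in zip(interleaves, motions):
        # Move the object into this interleaf's motion state
        moved = shift(rotate(image, theta, reshape=False, order=1),
                      (dy, dx), order=1)
        # Forward NUFFT samples the moved image along this interleaf
        ksp.append(sp.nufft(moved.astype(np.complex64), coord))
    # Gridding reconstruction of the combined k-space via the adjoint
    # NUFFT (density compensation omitted here for brevity)
    coords = np.concatenate(interleaves, axis=0)
    data = np.concatenate(ksp, axis=0)
    return np.abs(sp.nufft_adjoint(data, coords, oshape=image.shape))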

The network was adapted from pix2pix5, which comprises a U-Net6 generator (G) and a three-layer discriminator (D). The generator was trained to minimize the combined conditional generative adversarial network (cGAN) loss and L1 loss, $$$\mathcal{L}_{cGAN}(G, D)+\lambda\mathcal{L}_{L1}(G)$$$, where $$$\lambda$$$ was set to 100 empirically; the discriminator was trained adversarially to maximize the cGAN loss. The network was implemented in PyTorch7 and optimized using Adam8 with a learning rate of 0.0002. The structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), root-mean-square error (RMSE), and absolute difference (ABSD) were calculated as image quality metrics. The trained network was also applied to images acquired from a healthy volunteer who was asked to move their head during a spiral TSE scan on a 1.5 T scanner (MAGNETOM Avanto, Siemens Healthcare, Erlangen, Germany). The scans were conducted with approval of the institutional review board. The parameters for spiral imaging were TR = 3 s, TE = 91 ms, in-plane field of view = 240 × 240 mm2, matrix size = 256 × 256, and ETL = 15.
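For reference, a minimal PyTorch sketch of this training objective follows. The discriminator update and the Adam betas shown are standard pix2pix choices5 and are assumptions, not details reported in this abstract.

import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()  # adversarial (cGAN) term on D's logits
l1 = nn.L1Loss()
LAMBDA = 100.0  # weight on the L1 term, as stated above

def generator_loss(d_fake_logits, fake_img, real_img):
    # G tries to make D classify its output as real, plus an L1 penalty
    adv = bce(d_fake_logits, torch.ones_like(d_fake_logits))
    return adv + LAMBDA * l1(fake_img, real_img)

def discriminator_loss(d_real_logits, d_fake_logits):
    # D is pushed toward 1 on real pairs and 0 on generated pairs
    real = bce(d_real_logits, torch.ones_like(d_real_logits))
    fake = bce(d_fake_logits, torch.zeros_like(d_fake_logits))
    return 0.5 * (real + fake)

# Optimizers: lr = 0.0002 as above; betas = (0.5, 0.999) follows pix2pix
# opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
# opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))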

Results

Figure 2 shows the network performance on the test subjects with simulated spiral motion artifacts. The motion-compensated images show fewer artifacts and higher quality than the motion-corrupted images. Quantitatively, after motion compensation the average SSIM increased from 0.6659 to 0.8965 and the average PSNR increased from 24.20 dB to 26.36 dB, while the average RMSE decreased from 0.0661 to 0.0393 and the average ABSD decreased from 0.0492 to 0.0221. The network performance on the in vivo data is shown in Figure 3. The motion-corrupted images showed substantial motion artifacts, which were markedly reduced in the motion-compensated images. The average processing time per slice was less than 100 ms (excluding I/O) on a 12 GB NVIDIA TITAN Xp GPU.
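For completeness, the following sketch shows one way to compute the four image quality metrics reported above using scikit-image and NumPy. Normalizing the images to [0, 1] and reading ABSD as the mean absolute difference are our assumptions.

import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def image_quality_metrics(ref, img):
    # Normalize both images to [0, 1] (assumed convention) so the
    # metrics are comparable across subjects
    ref = ref / ref.max()
    img = img / img.max()
    ssim = structural_similarity(ref, img, data_range=1.0)
    psnr = peak_signal_noise_ratio(ref, img, data_range=1.0)  # in dB
    rmse = np.sqrt(np.mean((ref - img) ** 2))
    absd = np.mean(np.abs(ref - img))  # ABSD read as mean |difference|
    return ssim, psnr, rmse, absd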

Discussion

In this study, we developed a DCNN to compensate for motion in spiral brain imaging. The network operates retrospectively in the image domain. We evaluated its performance on both simulated and in vivo data, and fast, effective artifact reduction was achieved in both cases. Future work will include developing methods to minimize contrast loss and blurring in motion-compensated images, as well as additional in vivo testing.

Acknowledgements

No acknowledgement found.

References

[1] Johnson, P. M., & Drangova, M. (2019). Conditional generative adversarial network for 3D rigid-body motion correction in MRI. Magnetic Resonance in Medicine, 82(3), 901-910.

[2] Haskell, M. W., Cauley, S. F., Bilgic, B., Hossbach, J., Splitthoff, D. N., Pfeuffer, J., ... & Wald, L. L. (2019). Network Accelerated Motion Estimation and Reduction (NAMER): Convolutional neural network guided retrospective motion correction using a separable motion model. Magnetic Resonance in Medicine, 82(4), 1452-1461.

[3] Fessler, J. A., & Sutton, B. P. (2003). Nonuniform fast Fourier transforms using min-max interpolation. IEEE Transactions on Signal Processing, 51(2), 560-574.

[4] Ong, F., & Lustig, M. (2019). SigPy: A Python package for high performance iterative reconstruction. In Proceedings of the 27th Annual Meeting of ISMRM, Montréal (Abstract 4819).

[5] Isola, P., Zhu, J. Y., Zhou, T., & Efros, A. A. (2017). Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1125-1134).

[6] Ronneberger, O., Fischer, P., & Brox, T. (2015). U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention (MICCAI) (pp. 234-241). Springer, Cham.

[7] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., ... & Desmaison, A. (2019). PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems (pp. 8026-8037).

[8] Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Figures

Figure 1. Motion simulation strategy for spiral sampling (A), and network architecture adopted in this study (B).

Table 1. Description of motion simulation parameters.

Figure 2. Representative motion compensation results on simulated data, from subjects 1 (A), 2 (B), and 3 (C).

Figure 3. Motion compensation results on in vivo data in slices 1 (A), 2 (B), and 3 (C). Motion artifacts and blurring are reduced in each slice.
