
Deep Learning-based motion artifact correction improves the quality of cortical reconstructions
Ben A Duffy1, Lu Zhao1, Arthur Toga1, and Hosung Kim1

1Institute of Neuroimaging and Informatics, University of Southern California, Los Angeles, CA, United States

Synopsis

Cortical reconstruction is prone to failure without high-quality structural imaging data. Here, motion artifacts were simulated on good-quality structural MRI images, and a regression convolutional neural network was trained to predict the motion-free images as its output. We show that retrospective motion correction using a convolutional neural network significantly reduces the number of cortical surface reconstruction quality control failures.

Introduction

Head motion during MRI scanning has serious confounding effects on subsequent neuroimaging analyses. Excluding images with visually recognized motion artifacts through a standard image quality control procedure inevitably shrinks the study sample. Moreover, motion artifacts that are not easily identified in structural images may still degrade the performance of post-processing procedures such as cortical tissue segmentation and surface reconstruction. Here, we show that such degraded processing can be improved by retrospectively removing motion artifacts with a deep learning model trained on motion artifact patterns generated by mathematical simulation.

Methods

We trained a regression convolutional neural network (CNN) using 875 T1-weighted MRI images from the Autism Brain Imaging Data Exchange (ABIDE) dataset that were deemed free of significant artifacts by our in-house quality control (QC) protocol. A modified version of HighRes3dNet [1], a compact and efficient 3D CNN with 8 convolutional layers suited to large-scale 3D image data, was used with an input size of 96 x 96 x 96. The CNN was trained using NiftyNet [2] with the Adam optimizer, an L2 loss function, and a batch size of 1 per GPU. Networks were trained on three GPUs (Nvidia GTX 1080 Ti) for 50,000 iterations. The model took motion-simulated images as input and was trained to predict the ground-truth motion-free images; motion simulation was performed online during training. Artifacts were simulated by applying random linear phase shifts to a random selection of p% of the phase-encoding lines in the Fourier transform of the magnitude image, where p was sampled from a uniform distribution over 30-40% or 30-60% (Fig. 1a). The image position at each corrupted line was related to that at adjacent corrupted lines by a Gaussian random walk, with a low standard deviation (SD) of 0.01 voxels to simulate more coherent ghosting or a high SD of 1 voxel to produce random artifacts. The central 7% of k-space lines were preserved, as corrupting these would change the contrast and position of the image. Sketches of the simulation and of the training scheme are given below. For evaluation, FreeSurfer was applied to a separate set of 2034 test images from the ABIDE I and ABIDE II datasets, reconstructing cortical surfaces from each image before and after motion correction. QC failures were detected using a visual QC protocol by an operator blinded to group identity.
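To make the simulation concrete, the following is a minimal NumPy sketch of this k-space corruption scheme. It is an illustration under stated assumptions, not the authors' code: the function name simulate_motion and its defaults are hypothetical, a single phase-encode axis (axis 0) is assumed, and motion is restricted to translation along that axis for brevity.

```python
import numpy as np

def simulate_motion(image, frac_range=(0.30, 0.40), walk_sd=0.01,
                    center_frac=0.07, rng=None):
    """Corrupt a random fraction of phase-encoding lines with linear
    phase shifts (i.e. translations), sparing the centre of k-space.
    Hypothetical sketch; assumes phase encoding along axis 0."""
    rng = np.random.default_rng() if rng is None else rng
    n = image.shape[0]
    k = np.fft.fftshift(np.fft.fftn(image), axes=0)  # centre ky at n // 2

    # Choose lines to corrupt, excluding the central 7% of k-space so
    # that image contrast and gross position are preserved.
    is_centre = np.abs(np.arange(n) - n // 2) < center_frac * n / 2
    candidates = np.flatnonzero(~is_centre)
    p = rng.uniform(*frac_range)
    lines = np.sort(rng.choice(candidates, size=int(p * n), replace=False))

    # Image position at successive corrupted lines follows a Gaussian
    # random walk: SD ~0.01 voxels gives coherent ghosting,
    # SD ~1 voxel gives incoherent, random-looking artifacts.
    positions = np.cumsum(rng.normal(0.0, walk_sd, size=lines.size))

    # By the Fourier shift theorem, a translation dx along axis 0
    # multiplies each k-space line by exp(-2*pi*i * ky * dx).
    ky = np.fft.fftshift(np.fft.fftfreq(n))  # cycles per voxel
    for line, dx in zip(lines, positions):
        k[line] *= np.exp(-2j * np.pi * ky[line] * dx)

    return np.abs(np.fft.ifftn(np.fft.ifftshift(k, axes=0)))
```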
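The training scheme, online corruption of clean images paired with an L2 regression target, can likewise be summarized compactly. The authors trained HighRes3dNet with NiftyNet; the sketch below instead uses a PyTorch loop with a small placeholder network and random volumes standing in for the QC-passed T1-weighted images, so it illustrates the scheme rather than reproducing their pipeline.

```python
import numpy as np
import torch
import torch.nn as nn

# Placeholder 3D regression network standing in for the modified
# HighRes3dNet; input and output are (batch, 1, 96, 96, 96) volumes.
model = nn.Sequential(
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 3, padding=1),
)

optimizer = torch.optim.Adam(model.parameters())
l2_loss = nn.MSELoss()  # L2 loss against the motion-free target

for step in range(50_000):
    # Random volume standing in for a QC-passed T1w image; a real
    # loader would sample from the 875 training images.
    clean = np.random.rand(96, 96, 96).astype(np.float32)
    corrupted = simulate_motion(clean)  # online corruption (sketch above)
    x = torch.from_numpy(corrupted).float()[None, None]
    y = torch.from_numpy(clean)[None, None]
    loss = l2_loss(model(x), y)  # predict the motion-free image
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```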

Results

By visual inspection, the CNN model was highly effective at removing motion artifacts when tested on real motion-affected images (Fig. 1c). Models trained with coherent ghosting artifacts visually outperformed those trained with more random trajectories. The model trained with more severe artifacts (30-60% of phase-encoding lines corrupted) removed motion artifacts more effectively, but at test time the resulting images suffered excessive smoothing and loss of detail; the model trained by corrupting 30-40% of lines was therefore used for the remaining experiments. The difference image before and after correction showed plausible ghosting artifacts in the left-right and superior-inferior directions, consistent with those observed in the uncorrected image (Fig. 1d). Before correction, 2011 images completed the FreeSurfer pipeline (98%) and 115 of the completed cases (5.6%) failed the QC protocol. After correction, 2019 images completed the pipeline (99%) and 35 (1.7%) failed the QC protocol. Examples of images that failed QC before correction and passed after correction are shown in Fig. 1e.

Discussion

CNN models trained on simulated data were able to substantially improve data affected by real motion artifacts. Visually, models trained with coherent artifacts outperformed those trained on more random ones. Cortical reconstruction requires good-quality images and is prone to fail even when images are affected by mild or moderate artifacts. These results indicate that the quality of motion-affected structural MRI data can be improved to the point where the data are likely to be usable for further cortical surface-based analyses.

Acknowledgements

No acknowledgement found.

References

[1] W. Li, G. Wang, L. Fidon, S. Ourselin, M. J. Cardoso, and T. Vercauteren, "On the Compactness, Efficiency, and Representation of 3D Convolutional Networks: Brain Parcellation as a Pretext Task," in Information Processing in Medical Imaging (IPMI), pp. 348–360, Springer, Cham, 2017.

[2] E. Gibson, W. Li, C. Sudre, L. Fidon, D. I. Shakir, G. Wang, Z. Eaton-Rosen, R. Gray, T. Doel, Y. Hu, T. Whyntie, P. Nachev, M. Modat, D. C. Barratt, S. Ourselin, M. J. Cardoso, and T. Vercauteren, "NiftyNet: a deep-learning platform for medical imaging," Computer Methods and Programs in Biomedicine, vol. 158, pp. 113–122, 2018.

Figures

(a) Schematic of the motion simulation. (b) Motion simulation examples for the two parameters: (horizontal axis) the percentage of phase-encoding lines corrupted; (vertical axis) the SD of the Gaussian random walk relating image positions at adjacent corrupted lines, set to 0.01 voxels to generate more coherent ghosts or 1 voxel to produce random artifacts. (c) Model output on real motion-corrupted images for models trained with different parameters. (d) Before, after, and difference images for a dataset affected by real motion artifacts. (e) Cortical reconstruction failures that occurred before artifact correction and were absent after correction.
