The inherent speed of EPI is penalized by the calibration prescans necessary to suppress N/2 ghosts. Here, we propose a deep neural network with a novel architecture that suppresses N/2 ghosts in a post-processing step starting from magnitude images, thereby eliminating the need for a prescan. The proposed network achieves better results than conventional networks of the same size by exploiting the N/2 structure of the ghosts. The network architecture could easily be adapted to also correct higher-order ghosts in multi-shot EPI.
EPI is one of the fastest sampling sequences in MRI. However, due to hardware imperfections such as gradient delays, the resulting images are subject to artifacts known as N/2 ghosts [1].
The traditional approach to remove these is to perform a prescan to calibrate the hardware imperfections and include them in a model-based reconstruction. The drawbacks of this method are that in long sequences such as fMRI, the prescan may have to be repeated during the sequence because gradient delays can drift over time, while for short scans, the prescan can be as time-consuming as the scan itself. Approaches that do not rely on a prescan are therefore desirable. For example, it was proposed [2] to use phased-array coil information in combination with a model-based iterative reconstruction.
Here, we present a method focusing on the case of single-shot EPI. We propose to remove N/2 ghosting with a trained neural network in a post-processing step. In contrast to a recently proposed deep-learning method [3], the network acts in image space rather than in k-space.
Training data generation: Training data were generated from T1-weighted brain images taken from the IXI dataset [4]. N/2 ghosts were added to axial images (matrix size 256×160) by simulating a single-shot EPI acquisition with an added echo shift and a global phase that differ between odd and even echoes. The training dataset consisted of 2000 images; the odd/even echo shifts and added phases were drawn at random for each image and each training epoch so as to obtain a wide variety of artifacts. The magnitude images were taken as the starting point for the artifact removal.
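As an illustration, the following minimal sketch shows one plausible way to simulate such artifacts (our assumption, not the exact simulation code used here): a linear phase ramp modelling the echo shift and a constant phase offset are applied to every other phase-encode line of the simulated k-space.

import numpy as np

def simulate_n2_ghost(image, echo_shift, phase_offset):
    """Add an N/2 ghost to a magnitude image (axis 0 = phase encode,
    axis 1 = readout). `echo_shift` is the odd/even echo shift in pixels,
    `phase_offset` the odd/even global phase difference in radians."""
    ny, nx = image.shape
    kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image)))

    # Gradient delays shift the odd echoes along the readout direction and
    # add a constant phase: a linear phase ramp plus an offset in k-space.
    kx = np.fft.fftshift(np.fft.fftfreq(nx))
    ramp = np.exp(1j * (2.0 * np.pi * echo_shift * kx + phase_offset))

    # Perturb every other phase-encode line (the odd echoes); this period-2
    # modulation along ky creates a replica shifted by FOV/2 (the N/2 ghost).
    kspace[1::2, :] *= ramp[np.newaxis, :]

    ghosted = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))
    return np.abs(ghosted)

Drawing echo_shift and phase_offset at random for each image and epoch, as described above, then yields the variety of ghost strengths used for training.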
Network architecture: An appropriate network for this problem is a residual U-Net [5], taking ghosted images as input and producing corrected images as output. We also propose a modified version that exploits our knowledge of the ghosting structure. As illustrated in Figure 1, the modified network’s input layer has two channels, namely the two half-FOV images obtained by halving the full image along the ghosting direction. This way, the artifact created by a structure in one channel lies at exactly the same position in the other channel. The network’s receptive field therefore automatically covers both the structure and its ghost, even without a deep structure or large kernel sizes. The corrected image is obtained by stitching the two channels of the output layer back into a single image. The modified network used for the experiments has depth d=3, a kernel size of 3×3 and a pooling size of 2×2. The regular U-Nets used for comparison have depths d=3 and d=4. The number of channels in the deepest layer is 16·2^(d−1).
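The split-and-stitch operations around the U-Net can be written compactly; the sketch below (one possible TensorFlow implementation, with function names of our choosing) shows the reshaping for the two-channel input and the single-image output.

import tensorflow as tf

def split_half_fov(image):
    """Split a full-FOV image (batch, ny, nx, 1) into two half-FOV sub-images
    along the ghosting (phase-encode) axis and stack them as channels, so a
    structure and its N/2 ghost share the same position across channels."""
    top, bottom = tf.split(image, num_or_size_splits=2, axis=1)
    return tf.concat([top, bottom], axis=-1)      # (batch, ny/2, nx, 2)

def stitch_half_fov(channels):
    """Inverse operation applied to the two-channel network output:
    concatenate the channels back along the ghosting axis into one image."""
    top, bottom = tf.split(channels, num_or_size_splits=2, axis=-1)
    return tf.concat([top, bottom], axis=1)       # (batch, ny, nx, 1)

Because the two channels are spatially aligned, a single 3×3 convolution in the first layer already sees every pixel together with its N/2 ghost, which is what allows the shallow modified network to compete with much deeper regular U-Nets.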
Training: All networks were implemented in TensorFlow and trained with the Adam optimizer with default parameters for 1000 epochs, using the mean-squared error between the reference image and the network output as the loss function. Total training times were about 4 hours.
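Under these settings the training loop reduces to a few Keras calls; the following is a minimal sketch in which build_modified_unet, ghosted_images, reference_images and the batch size are placeholders/assumptions rather than the exact code used here.

import tensorflow as tf

# Hypothetical constructor for the two-channel modified U-Net described above.
model = build_modified_unet(depth=3, kernel_size=3, pool_size=2)

# Adam with default parameters and a mean-squared-error loss, as stated.
model.compile(optimizer=tf.keras.optimizers.Adam(), loss="mse")

# Ghosts are ideally re-drawn every epoch (e.g. via a data generator);
# fixed arrays are shown here for brevity. The batch size is an assumption.
model.fit(ghosted_images, reference_images, epochs=1000, batch_size=16)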
Figure 2 shows test-dataset images with and without simulated artifacts, along with the reconstruction from the proposed modified U-Net. Ghosting artifacts of different strengths are successfully removed.
Figure 3 compares reconstructions obtained with the described modified U-Net and with regular U-Nets of depth 3 and 4. The regular U-Net of depth 3 is clearly insufficient for the task, while the one of depth 4 leaves significant residual artifacts despite having a deeper structure and more learned parameters than the modified U-Net of depth 3. Only at depth 5 does the regular U-Net have a sufficient receptive field and achieve the same results as the proposed modified U-Net.
The simplified setting presented here can easily be extended to more complex settings such as multi-shot EPI and multi-coil acquisition. In 2-shot EPI, N/4 ghosts are produced, which can be handled by changing the input layer to a 4-channel image with ¼ FOV per channel. The extension to a multi-coil setting can be achieved by simulating multi-coil acquisition in the training dataset. Further work in this direction will make it possible to assess the proposed network’s performance on actual EPI acquisitions.
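For instance, the half-FOV split sketched earlier generalizes directly; the hypothetical helper below (our assumption, not part of the presented work) splits the image into 2·n_shots strips so that a structure and all of its ghosts again share the same position across channels.

import tensorflow as tf

def split_fov(image, n_shots=2):
    """Generalised input layer for n-shot EPI: split the full-FOV image
    (batch, ny, nx, 1) into 2*n_shots equal strips along the phase-encode
    axis and stack them as channels, e.g. 4 channels of 1/4 FOV for 2-shot
    EPI, which exhibits N/4 ghosts."""
    strips = tf.split(image, num_or_size_splits=2 * n_shots, axis=1)
    return tf.concat(strips, axis=-1)   # (batch, ny/(2*n_shots), nx, 2*n_shots)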
[1] Brown RW, et al. Magnetic Resonance Imaging: Physical Principles and Sequence Design. John Wiley & Sons, 2014.
[2] Yarach U, et al. Model-based iterative reconstruction for single-shot EPI at 7 T. Magnetic Resonance in Medicine 78(6):2250-2264, 2017.
[3] Lee D, Yoo J, Ye JC. Deep artifact learning for compressed sensing and parallel MRI. arXiv preprint arXiv:1703.01120, 2017.
[4] IXI Dataset, http://brain-development.org/ixi-dataset/, accessed June 2018.
[5] Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 234-241, 2015.