Alfredo De Goyeneche ^{1}, Shreya Ramachandran^{1}, Ke Wang^{1,2}, Ekin Karasan^{1}, Stella Yu^{1,2}, and Michael Lustig^{1}

^{1}Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, United States, ^{2}International Computer Science Institute, University of California, Berkeley, Berkeley, CA, United States

We propose a physics-inspired, unrolled-deep-learning framework for off-resonance correction. Our forward model includes coil sensitivities, multi-frequency bins, and non-uniform Fourier transforms, and is hence compatible with fat/water imaging and parallel-imaging acceleration. The network, which includes data-consistency terms and CNN modules serving as proximal operators, is trained end-to-end using only synthetic random field maps, coil sensitivities, and noise-like images with statistics (smoothness) mimicking natural signals. Our aim is to train the network to reverse off-resonance irrespective of the type of imaging, so that it generalizes to any anatomy and contrast without retraining. We demonstrate initial results on simulations, phantom, and in-vivo data.

We propose a physics-inspired unrolled-deep-learning framework for off-resonance correction. Our model enforces data consistency with a forward model that includes coil sensitivities, multi-frequency bins, and non-uniform Fourier transforms. The model is trained end-to-end using only synthetic random field maps, coil sensitivities, and noise-like images, with the aim of learning to reverse off-resonance irrespective of the type of imaging and hence generalizing to any anatomy and contrast without retraining.

We aim to solve for $$$x$$$ such that $$$A(x)=y$$$, where $$$A$$$ is the forward model, $$$y$$$ the multi-coil k-space measurements, and $$$x$$$ the clean image for each frequency bin. We approximate the forward model as follows (Figure 1): $$A=\Sigma\cdot M\cdot F\cdot S$$ where $$$M=\exp{(j\cdot2\pi\cdot f\cdot t)}$$$, with $$$f$$$ the bin center frequency and $$$t$$$ the readout time.

We model the image as a stack of sharp images at different resonant frequencies around the center Larmor frequency, each containing the region of the object that is resonant. The coil images for each frequency bin are obtained by multiplication with coil sensitivity maps ($$$S$$$). Finally, a density compensated NUFFT ($$$F$$$) calculates the non-Cartesian k-space for each bin, which is then modulated at the corresponding frequency ($$$M$$$). These differently modulated k-spaces are then summed ($$$\Sigma$$$) to obtain a complete representation of the acquired multi-coil k-space.
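The forward model above can be illustrated with a minimal NumPy sketch. A slow, explicit non-uniform DFT stands in for the NUFFT $$$F$$$, and all names, shapes, and units are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def forward_model(x, smaps, coords, times, freqs):
    """Apply A = Sigma . M . F . S to a stack of per-bin images.

    x:      (nbins, H, W)  sharp image for each frequency bin
    smaps:  (ncoils, H, W) coil sensitivity maps (S)
    coords: (npts, 2)      k-space trajectory in cycles/pixel
    times:  (npts,)        readout time of each sample in s
    freqs:  (nbins,)       bin center frequencies in Hz
    """
    nbins, H, W = x.shape
    ncoils = smaps.shape[0]
    # Pixel grid for an explicit (slow) non-uniform DFT, standing in for F.
    yy, xx = np.mgrid[:H, :W]
    pos = np.stack([yy.ravel(), xx.ravel()], axis=1)        # (H*W, 2)
    dft = np.exp(-2j * np.pi * coords @ pos.T)              # (npts, H*W)
    y = np.zeros((ncoils, coords.shape[0]), dtype=complex)
    for b in range(nbins):
        coil_imgs = smaps * x[b]                            # S: coil images
        ksp = coil_imgs.reshape(ncoils, -1) @ dft.T         # F: per-coil k-space
        ksp = ksp * np.exp(2j * np.pi * freqs[b] * times)   # M: bin modulation
        y += ksp                                            # Sigma: sum over bins
    return y
```

Note that the modulation $$$M$$$ acts on each bin's k-space as a function of readout time, matching $$$M=\exp{(j\cdot2\pi\cdot f\cdot t)}$$$ above, and the whole operator is linear in $$$x$$$.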

This problem is ill-posed, and hence we incorporate a neural network for regularization. We take a MoDL-inspired approach: an unrolled reconstruction that alternates between CNN modules serving as learned proximal operators and data-consistency layers solved with the conjugate gradient method (Figure 3a).
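Each data-consistency layer in Figure 3a solves for $$$x_k$$$ with the conjugate gradient (CG) method. A minimal NumPy sketch, assuming the MoDL-style normal-equation form $$$(A^HA+\lambda I)x=A^Hy+\lambda z$$$, where $$$z$$$ is the preceding CNN output; function names and the fixed iteration count are illustrative:

```python
import numpy as np

def dc_layer(AH_y, normal_op, z, lam, n_iter=10):
    """Solve (A^H A + lam*I) x = A^H y + lam*z with conjugate gradients.

    AH_y:      adjoint of the forward model applied to the measured k-space
    normal_op: callable applying A^H A to a flattened image stack
    z:         output of the CNN module in the previous half of the unroll
    """
    b = AH_y + lam * z
    x = np.zeros_like(b)
    r = b.copy()          # initial residual for x = 0
    p = r.copy()
    rs = np.vdot(r, r)
    for _ in range(n_iter):
        Ap = normal_op(p) + lam * p
        alpha = rs / np.vdot(p, Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.vdot(r, r)
        if np.sqrt(abs(rs_new)) < 1e-9:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

Since $$$A^HA+\lambda I$$$ is Hermitian positive definite, CG is applicable and the regularization term keeps the solve well conditioned even where the forward model is poorly conditioned.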

The 3D CNN architecture is depicted in Figure 3b. The network takes the complex images obtained by applying $$$A^H$$$ to the multi-coil k-space, with the real and imaginary parts as separate channels. Convolutions are applied across image dimensions and frequency bins. We use a combination of an attention module across frequencies and residual modules.

Inspired by [8], all the training data consists of simulated synthetic random field maps, coil sensitivities, and noise-like images with statistics (smoothness) mimicking natural signals. Random images and field maps are generated by applying a 2D FFT to exponentially weighted random complex data, where the weighting radius determines the level of smoothness. Similarly, random sensitivity maps are generated from weighted random SPIRiT kernels.
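The image and field-map generation (Figure 2a) can be sketched in NumPy; the decay constants, field-map range, and binning rule below are illustrative assumptions:

```python
import numpy as np

def random_smooth_image(shape, decay, rng):
    """Colored-noise image: 2D FFT of exponentially weighted random k-space.

    Larger `decay` concentrates energy at low spatial frequencies,
    yielding a smoother image.
    """
    H, W = shape
    ky = np.fft.fftfreq(H)[:, None]
    kx = np.fft.fftfreq(W)[None, :]
    radius = np.sqrt(ky**2 + kx**2)
    weights = np.exp(-decay * radius)            # exponential k-space weighting
    noise = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
    return np.fft.ifft2(weights * noise)

def random_field_map(shape, decay, max_hz, rng):
    """A field map is a real-valued smooth image, scaled to a Hz range."""
    fmap = random_smooth_image(shape, decay, rng).real
    return max_hz * fmap / np.max(np.abs(fmap))

def split_into_bins(image, fmap, bin_edges):
    """Use the field map to assign each pixel to a frequency bin."""
    nbins = len(bin_edges) - 1
    x = np.zeros((nbins,) + image.shape, dtype=complex)
    idx = np.clip(np.digitize(fmap, bin_edges) - 1, 0, nbins - 1)
    for b in range(nbins):
        x[b][idx == b] = image[idx == b]
    return x
```

Because each pixel falls into exactly one bin, summing the stack over bins recovers the original image, which is the property the combined-output supervision below relies on.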

From here, we have all the elements to supervise our model. The model’s output $$$\hat{x}$$$ can be supervised with the ground truth $$$x$$$, and the combined output image $$$\hat{I}=\sum_i\hat{x}_{f_i}$$$ with $$$I$$$. Additionally, we can aid learning by supervising the attention maps with the frequency bin weights obtained from the field map.
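A minimal sketch of the two image-domain supervision terms, assuming simple L2 penalties and illustrative weights (the attention-map supervision is omitted):

```python
import numpy as np

def training_loss(x_hat, x_true, w_bins=1.0, w_img=1.0):
    """Per-bin loss on x_hat vs x, plus loss on the combined images."""
    bin_loss = np.mean(np.abs(x_hat - x_true) ** 2)
    I_hat = x_hat.sum(axis=0)      # combined image: sum over frequency bins
    I_true = x_true.sum(axis=0)
    img_loss = np.mean(np.abs(I_hat - I_true) ** 2)
    return w_bins * bin_loss + w_img * img_loss
```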

Results on noise data and real brain images with simulated synthetic off-resonance are shown in Figure 4. Results from evaluating our noise-trained network on acquired phantom and in-vivo brain data are shown in Figure 5. We used a spiral GRE sequence with a spectral-spatial pulse for water-only excitation. We scanned with the same readout trajectory as in training, as well as with a higher-resolution ($$$1\text{mm}^2$$$) scan using a different spiral trajectory (13 interleaves, 10.72ms readout). Sensitivity maps for the acquired data were estimated with ESPIRiT using BART.

M. A. Bernstein, K. F. King, and X. J. Zhou, Handbook of MRI pulse sequences. Elsevier, 2004.

Y. Yang, G. H. Glover, P. van Gelderen, A. C. Patel, V. S. Mattay, J. A. Frank, and J. H. Duyn, “A comparison of fast MR scan techniques for cerebral activation studies at 1.5 Tesla,” Magn Reson Med, vol. 39, no. 1, pp. 61–67, Jan 1998.

D. Y. Zeng, J. Shaikh, S. Holmes, R. L. Brunsing, J. M. Pauly, D. G. Nishimura, S. S. Vasanawala, and J. Y. Cheng, “Deep residual network for off-resonance artifact correction with application to pediatric body MRA with 3D cones,” Magnetic Resonance in Medicine, vol. 82, no. 4, pp. 1398–1411, 2019. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1002/mrm.27825

Y. Lim, Y. Bliesener, S. Narayanan, and K. S. Nayak, “Deblurring for spiral real-time MRI using convolutional neural networks,” Magnetic Resonance in Medicine, vol. 84, no. 6, pp. 3438–3452, 2020. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1002/mrm.28393

M. W. Haskell, A. A. Cao, D. C. Noll, and J. A. Fessler, “Deep learning field map estimation with model-based image reconstruction for off-resonance correction of brain images using a spiral acquisition.”

H. K. Aggarwal, M. P. Mani, and M. Jacob, “MoDL: Model-based deep learning architecture for inverse problems,” IEEE Transactions on Medical Imaging, vol. 38, no. 2, pp. 394–405, Feb 2019. [Online]. Available: http://dx.doi.org/10.1109/TMI.2018.2865356

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” 2015.

B. S. Hu and J. Y. Cheng, “System and method for noise-based training of a prediction model,” Mar 2020.

M. Lustig and J. M. Pauly, “SPIRiT: Iterative self-consistent parallel imaging reconstruction from arbitrary k-space,” Magnetic Resonance in Medicine, vol. 64, no. 2, pp. 457–471, 2010. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1002/mrm.22428

D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, Y. Bengio and Y. LeCun, Eds., 2015. [Online]. Available: http://arxiv.org/abs/1412.6980

M. Uecker, P. Lai, M. J. Murphy, P. Virtue, M. Elad, J. M. Pauly, S. S. Vasanawala, and M. Lustig, “ESPIRiT—an eigenvalue approach to autocalibrating parallel MRI: Where SENSE meets GRAPPA,” Magnetic Resonance in Medicine, vol. 71, 2014.

M. Uecker, F. Ong, J. I. Tamir, D. Bahri, P. Virtue, J. Y. Cheng, T. Zhang, and M. Lustig, “Berkeley advanced reconstruction toolbox,” in Proc. Intl. Soc. Mag. Reson. Med., vol. 23, no. 2486, 2015.

Figure 1. Multi-frequency, multi-coil forward model $$$A$$$. We model the image as a stack of sharp images at multi-frequency bins. Each of the images is multiplied by coil sensitivity maps ($$$S$$$), followed by a NUFFT ($$$F$$$) on the non-Cartesian k-space trajectory. Each of the bins is then modulated with the corresponding bin frequency ($$$M$$$). Finally, k-spaces across bins are summed ($$$\Sigma$$$) to obtain a complete representation of the acquired multi-coil k-space.

Figure 2. (a) Synthetic training data generation. First, colored noise images and field maps are obtained by applying a 2D FFT to exponentially weighted random complex data. Then, the generated field map is used to assign pixels into frequency bins. (b) Examples of images obtained from demodulating k-space at multiple frequencies are shown for: a colored noise image used during training, a toy example, and a brain image.

Figure 3. Framework overview. (a) We propose an unrolled model-based framework for our off-resonance correction. The model consists of $$$N$$$ unrolls of 3D CNNs and Data Consistency (DC) layers. DC layers take the model definition $$$A$$$, acquired k-space $$$y$$$ and previous estimate of the output $$$x_{k-1}$$$ to solve for $$$x_k$$$ using the Conjugate Gradient (CG) method. (b) Our 3D CNN architecture consists of an attention over frequencies module, and $$$R$$$ residual blocks. DHWC are intermediate representation dimensions: demodulations, height, width, channels.

Figure 4. Evaluation on simulated off-resonance. Input images at multiple demodulation frequencies, center frequency input, simulated field map, reconstructed outputs at each frequency bin, final combined output image, and ground truth image are shown. (a) The model trained on synthetic data along the low-resolution trajectory ($$$2.2\text{mm}^2$$$) is able to generalize to real anatomy with simulated random off-resonance sampled with (b) the same trajectory and also (c) at a higher resolution ($$$1.0\text{mm}^2$$$).

Figure 5. Evaluation on in-vivo acquired data. Phantom and brain data were acquired on a GE 3T MR750W system with a 24-channel head coil, using spiral GRE with spectral-spatial excitation. Results are shown for low-resolution scans ($$$2.2\text{mm}^2$$$) that match the training trajectory, as well as higher-resolution scans ($$$1\text{mm}^2$$$) with a different spiral trajectory. Sensitivity maps for the acquired data were estimated with ESPIRiT^{11} using BART^{12}.

DOI: https://doi.org/10.58530/2022/0555