
Learned Off-Resonance Correction for Simultaneous Radial 23Na and 1H Acquisitions at 7T
Kirsten Koolstra1, Olga Dergachyova2, Zidan Yu2,3, Andrew Webb1, and Martijn Cloos2,3
1C.J. Gorter Center for High Field MRI, Department of Radiology, Leiden University Medical Center, Leiden, Netherlands, 2Center for Advanced Imaging Innovation and Research, Department of Radiology, New York University School of Medicine, New York, NY, United States, 3Sackler Institute of Graduate Biomedical Sciences, NYU Langone Health, New York, NY, United States

Synopsis

Simultaneous proton (1H) and sodium (23Na) acquisition can provide important metabolic information. However, the proton data may suffer from off-resonance artifacts because of the long dwell time required to obtain sufficient SNR for 23Na. In this work we use center-outward and center-inward image pairs to train a convolutional neural network that corrects the proton data for off-resonance effects without a separately measured field map.

Introduction

Simultaneous proton (1H) and sodium (23Na) acquisition can provide important metabolic information1. To obtain sufficient SNR for 23Na, a center-outward (CO) radial trajectory with a long dwell time (small acquisition bandwidth) is desired2. Unfortunately, such sampling strategies generally lead to blurred 1H images: because of the larger gyromagnetic ratio of 1H compared to 23Na, the same ΔB0 corresponds to a larger off-resonance frequency for 1H, and considerable phase accrues over the long acquisition window. The 1H images therefore need to be corrected for off-resonance effects. In this work we train a convolutional neural network (CNN) that performs this correction without a separately measured field map. Instead, we use two images as inputs: the usual CO image, and an image obtained from a center-inward (CI) trajectory captured during the rephasing gradient. These two images have different effective echo times and acquisition windows, and hence jointly encode the field map without increasing scan time.
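The blurring follows from the standard off-resonance signal model (a minimal sketch, assuming a single-channel radial readout starting at echo time TE):

$$ s(t) = \int \rho(\mathbf{r})\, e^{-i 2\pi \mathbf{k}(t)\cdot\mathbf{r}}\, e^{-i 2\pi \Delta B_0(\mathbf{r})\,(T_E + t)}\, \mathrm{d}\mathbf{r} $$

where ρ is the object, k(t) the radial trajectory, and ΔB0(r) the field offset in Hz. Because the CO and CI readouts traverse k-space with different timing, the same ΔB0(r) imprints a different phase pattern on each image, which is the information the network can exploit.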

Methods

Training data: 70 MP-RAGE and 63 T2-weighted 3D brain images from the Human Connectome Project (https://ida.loni.usc.edu/login.jsp) were downloaded and transformed into transverse (MP-RAGE: 40, T2w: 65) and sagittal (MP-RAGE: 36, T2w: 66) slices. The slices from one MP-RAGE and one T2-weighted volunteer were set aside as a model validation set.
Simulation: Linear, uniform and Gaussian-shaped ΔB0 maps with a maximum absolute off-resonance of 500 Hz were simulated. Each 2D image was blurred for the CO (4.8 ms) and the CI (1 ms) acquisition windows, using one of the simulated ΔB0 maps and a TE of 1.5 ms (a simplified simulation sketch is given below). After blurring, the data were augmented by rotating each image by multiples of 90°, resulting in 26,580 training and 828 validation examples. A Gaussian-shaped phase offset was randomly added to each CO/CI pair. Images were normalized (absolute values between 0 and 1), and the real and imaginary components of the CO and CI images were stacked as a 4-channel input. One Gaussian-shaped ΔB0 map was shifted by ~100 Hz and used to simulate an additional validation example.
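The sketch below illustrates one way such blurring could be simulated; it is not the authors' code. It approximates the radial readout by time segments mapped to annular k-space bands, uses a Cartesian FFT instead of a NUFFT, and simplifies the CI timing relative to the CO readout; the function name and segment count are assumptions.

```python
# Hypothetical sketch of off-resonance blurring for CO/CI readouts (not the authors' code).
import numpy as np


def blur_with_b0(img, db0_hz, t_readout, te=1.5e-3, inward=False, n_seg=32):
    """Blur a 2D image with a B0 map (Hz) for a radial-like readout.

    img        : 2D (complex) image.
    db0_hz     : 2D off-resonance map in Hz, same shape as img.
    t_readout  : readout duration in seconds (4.8e-3 for CO, 1e-3 for CI).
    inward     : False for center-outward, True for center-inward sampling.
    n_seg      : number of time segments approximating the readout.
    """
    ny, nx = img.shape
    ky, kx = np.meshgrid(np.fft.fftfreq(ny), np.fft.fftfreq(nx), indexing="ij")
    k_radius = np.sqrt(kx**2 + ky**2) / 0.5          # 0 at k-space center, ~1 at the edge

    kspace = np.zeros_like(img, dtype=complex)
    edges = np.linspace(0.0, 1.0, n_seg + 1)          # one radial band per time segment
    for s in range(n_seg):
        # time at which this band is acquired (timing within the readout is simplified)
        frac = (edges[s] + edges[s + 1]) / 2
        t = te + (1.0 - frac) * t_readout if inward else te + frac * t_readout
        # off-resonance phase accrued by each voxel at that time
        phase = np.exp(-2j * np.pi * db0_hz * t)
        seg_k = np.fft.fft2(img * phase)
        band = (k_radius >= edges[s]) & (k_radius < edges[s + 1])
        kspace[band] = seg_k[band]
    return np.fft.ifft2(kspace)


# Example (inputs are placeholders): blur one slice for both trajectories.
# co_img = blur_with_b0(img, db0, t_readout=4.8e-3, inward=False)
# ci_img = blur_with_b0(img, db0, t_readout=1.0e-3, inward=True)
```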
Model and training: We used a residual neural network with three residual blocks containing two layers each3. Every layer performed a 2D convolution with 3×3 kernels and 128 filters. ReLU (hidden layers) and tanh (output layer) activations were used. We minimized the mean absolute error over 6 epochs using the Adam optimizer with an initial learning rate of 5×10⁻⁴ and a batch size of 16. Dropout with a probability of 0.25 was applied to all layers except the last. Training was performed in TensorFlow on a GeForce RTX 2060 GPU. A schematic Keras sketch of this architecture is shown below.
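The following is a schematic Keras sketch assuming the architecture outlined above (3×3 convolutions, 128 filters, three residual blocks of two layers, ReLU hidden activations, tanh output, dropout 0.25, MAE loss with Adam at 5×10⁻⁴). The exact placement of the skip connections, the stem/output layers, and the two-channel complex output are assumptions, not the authors' exact model.

```python
# Hypothetical Keras sketch of the residual CNN described above (not the authors' code).
import tensorflow as tf
from tensorflow.keras import layers, Model


def residual_block(x, filters=128, dropout=0.25):
    """Two 3x3 convolutions with ReLU and dropout, plus an identity skip connection."""
    skip = x
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Dropout(dropout)(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Dropout(dropout)(x)
    return layers.Add()([x, skip])


def build_model(input_shape=(None, None, 4)):
    """4-channel input: real/imaginary parts of the CO and CI images."""
    inp = layers.Input(shape=input_shape)
    x = layers.Conv2D(128, 3, padding="same", activation="relu")(inp)
    x = layers.Dropout(0.25)(x)
    for _ in range(3):                         # three residual blocks of two layers each
        x = residual_block(x)
    # assumed 2-channel output (real/imaginary of the corrected image), tanh, no dropout
    out = layers.Conv2D(2, 3, padding="same", activation="tanh")(x)
    return Model(inp, out)


model = build_model()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-4),
              loss="mae")                      # mean absolute error
# model.fit(train_inputs, train_targets, validation_data=(val_inputs, val_targets),
#           batch_size=16, epochs=6)
```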
Test data acquisition: Blurred CO and CI images were acquired on a 7T MR system (MAGNETOM, Siemens, Erlangen, Germany) with a 32-channel Nova head coil for phantom experiments and an 8-channel dual-tuned coil for in vivo brain experiments (informed consent obtained). Each scan was performed 1) after B0 shimming, 2) with intentionally perturbed shim settings, and 3) with a constant 300 Hz offset added to the scanner's resonance frequency. Scan parameters: 1×1×3 mm³ resolution, FOV=240 mm, TE/TR=1.5/10 ms, total scan time=15 s. A radial ΔB0 map was acquired for comparison.

Results

Figure 1 displays the mean squared error (MSE) loss for training and validation data during training. Figure 2 shows the deblurring performance for one slice of the validation data set. Note that the shifted Gaussian-shaped ΔB0 map used for this case was not encountered during training. Figure 3 shows the deblurred phantom data, compared with conjugate phase reconstructions (CPR) of the CO and CI images using the measured ΔB0 map as input. Figure 4 shows the corresponding results for an in vivo brain experiment; white circles outline an example region where the prediction is sharper than both the CO and CI images. Although training the network took 95 min, the final model corrects a blurred image pair in 0.19 s.

Discussion

Absolute error maps for the validation data confirm that the network's predictions are close to the ground truth images. Good performance on measured phantom data suggests that the model did not overfit to the simulated data used for training. The trained model produces a sharper phantom image than the CPR-corrected CO image, while the trained model and CPR perform similarly for the CI phantom image. In vivo brain results also show a sharp network prediction, whereas CPR suffers from imperfections around the skull and loss of detail. Phantom and in vivo experiments show that, although the network predicts an image that is sharper than both the CO and the CI images, most of the contrast is learned from the CI image. This is especially visible around the fine structures in the brain image. Further optimization of the training set, for example by adapting it to the contrast of interest and by including relaxation effects in the simulation, may help to reduce this effect.

Conclusion

It is possible to correct for off-resonance artifacts without knowledge of the field map, using a deep learning model trained on CO and CI image pairs. This approach can be used in simultaneous radial 23Na and 1H acquisitions, for which a short TE and a long dwell time are necessary and CI images can be acquired efficiently. Further model optimization is necessary to improve contrast estimation around fine anatomical structures.

Acknowledgements

This project was partially funded by the Leiden University Fund (LUF) and the European Research Council Advanced Grant 670629 NOMA MRI.

References

1. Madelin, G. et al. Biomedical applications of sodium MRI in vivo. JMRI, 2013;38:511-529.

2. Yu, Z. et al. Simultaneous proton MR fingerprinting and sodium imaging. Proc. ISMRM, Montréal, 2019; 0488.

3. Zeng, D. et al. Deep residual network for off-resonance artifact correction with application to pediatric body MRA with 3D cones. MRM, 2019;82(4):1398-1411.

Figures

Figure 1. The MSE loss for training and validation data as a function of iteration number. The validation loss (blue) decreases with the training loss (red) over 6 epochs.

Figure 2. The performance of the network for simulated validation data. (a,b) The CO (4.8 ms) and CI (1 ms) images are blurred due to the large field inhomogeneity, shown in (c). Note that the shifted Gaussian-shaped ΔB0 map was not used during training. (d,e) The learned correction results in a sharp image prediction, which is close to the ground truth. (f) This is confirmed by the error map (magnified 4×).

Figure 3. The performance of the network for measured phantom data. (a) The measured ΔB0 map in Hz (case with a 300 Hz shift). (b,c) The acquired CO (4.8 ms) and CI (1 ms) images are both blurred due to the large off-resonance. (d) The learned off-resonance correction results in an image prediction that is sharper than the acquired CO and CI images. (e,f) The CO/CI images were also reconstructed using CPR with the measured ΔB0 map as input. The learned correction results in a sharper image than the CPR-corrected CO image, while the result is very similar to the CPR-corrected CI image.

Figure 4. The performance of the network for measured brain data. (a) The measured ΔB0 map in Hz (case with a 300 Hz shift). (b,c) Again, the acquired CO (4.8 ms) and CI (1 ms) images are both blurred due to the large off-resonance. (d) The learned off-resonance correction results in an image prediction that is sharper than the acquired CO and CI images, as observed, for example, within the white circles in the zoomed region. (e,f) The CPR-corrected images show imperfections around the skull and loss of detail for the CPR-corrected CO image.

Proc. Intl. Soc. Mag. Reson. Med. 28 (2020), 3581