Jiaming Liu^{1}, Cihat Eldeniz^{1}, Yu Sun^{1}, Weijie Gan^{1}, Sihao Chen^{1}, Hongyu An^{1}, and Ulugbek S. Kamilov^{1}

^{1}Washington University in St. Louis, St. Louis, MO, United States

We propose a new MR image reconstruction method that systematically enforces data consistency while also exploiting a deep-learning imaging prior. The prior is specified through a convolutional neural network (CNN) trained to remove undersampling artifacts from MR images without any artifact-free ground truth. Results on reconstructing free-breathing MRI data into ten respiratory phases show that the method can form high-quality 4D images from severely undersampled measurements corresponding to acquisitions of about 1 minute in length. The results also highlight the improved performance of the method over several popular alternatives, including compressive sensing and UNet3D.

We introduce a new RED algorithm that replaces the denoising CNN with a more general image-restoration CNN. We observe that respiratory binning leads to different k-space coverage patterns for different acquisition times, and hence to distinct artifact patterns. Based on this observation, and inspired by Noise2Noise [17], we learn our prior by mapping pairs of complex MR volumes acquired over different acquisition times to one another, without using artifact-free ground-truth images. The trained CNN is then introduced into the iterative RED algorithm, where it is combined with the k-space data-consistency term. We refer to our technique as RED-N2N.
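The combination described above can be sketched with the gradient-step form of RED, where a restoration operator takes the place of the usual denoiser. The following numpy toy is a minimal illustration only, not the authors' implementation: a dense random matrix `A` stands in for the multi-coil NUFFT forward model, a simple moving-average filter stands in for the trained CNN, and the step sizes `gamma` and `tau` are illustrative.

```python
import numpy as np

def red_reconstruct(y, A, restore, gamma=0.1, tau=0.5, n_iters=200):
    """Gradient-step RED: x <- x - gamma * (A^H (A x - y) + tau * (x - R(x))).

    A       : forward operator (a matrix here; multi-coil NUFFT in practice)
    restore : image-restoration operator standing in for the trained CNN
    """
    x = A.conj().T @ y  # adjoint ("zero-filled") initialization
    for _ in range(n_iters):
        grad_data = A.conj().T @ (A @ x - y)   # k-space data-consistency gradient
        grad_prior = tau * (x - restore(x))    # RED regularization term
        x = x - gamma * (grad_data + grad_prior)
    return x

# Toy example: undersampled measurements of a smooth 1D signal
rng = np.random.default_rng(0)
n, m = 64, 32                                  # 2x undersampling
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.convolve(rng.standard_normal(n), np.ones(8) / 8, mode="same")
y = A @ x_true

smooth = lambda v: np.convolve(v, np.ones(5) / 5, mode="same")  # stand-in prior
x_hat = red_reconstruct(y, A, smooth)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The prior never needs to be a Gaussian denoiser here; any restoration operator with a well-behaved residual `x - R(x)` can be plugged into the same iteration.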

The acquisition parameters were as follows: TE/TR = 1.69 ms/3.54 ms, FOV = 360 × 360 mm², in-plane resolution = 1.125 × 1.125 mm², partial Fourier factor = 6/8, number of radial lines = 2000, slice resolution = 50%, slices per slab = 96 with a slice thickness of 3 mm, total acquisition time ≈ 5 minutes (slightly longer for larger subjects).

RED-N2N replaces the denoiser in RED with a 3D DnCNN network (x-y-phase) [18] trained to remove streaking artifacts from complex-valued MR volumes. The training of DnCNN was inspired by Noise2Noise [17] and uses pairs of MR volumes corresponding to the same subject but acquired over different acquisition times, with no ground-truth data. Figure 1 illustrates the details of the RED-N2N method. We used 8 healthy subjects for training and 1 for validation. The remaining 6 healthy subjects and the 17 patients were used for testing. Images were reconstructed from 400, 800, 1200, and 1600 radial spokes.
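The Noise2Noise-style pairing can be illustrated in one dimension: two reconstructions of the same underlying signal, each corrupted by independent artifacts (modeled here as white noise), serve as input and target, and the clean signal never enters the loss. In this hypothetical numpy sketch a 9-tap learnable filter stands in for DnCNN; the signal, noise levels, and learning rate are all illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
x_true = np.sin(np.linspace(0, 8 * np.pi, n))     # stand-in artifact-free signal
x_short = x_true + 0.5 * rng.standard_normal(n)   # e.g. a short-acquisition volume
x_long = x_true + 0.2 * rng.standard_normal(n)    # e.g. a longer-acquisition volume

# "Network": a learnable 9-tap filter trained so filter(x_short) ~ x_long.
# The clean signal x_true never appears in the loss -- Noise2Noise-style pairing.
k = np.zeros(9)
k[4] = 1.0                                        # identity initialization
lr = 1e-3
for _ in range(3000):
    pred = np.convolve(x_short, k, mode="same")
    resid = pred - x_long
    # gradient of 0.5 * ||pred - x_long||^2 w.r.t. each filter tap
    grad = np.array([np.dot(resid, np.convolve(x_short, np.eye(9)[i], mode="same"))
                     for i in range(9)])
    k -= lr * grad

denoised = np.convolve(x_short, k, mode="same")
print(np.linalg.norm(denoised - x_true), np.linalg.norm(x_short - x_true))
```

Because the target's noise is independent of the input's, fitting the noisy target drives the filter toward the same solution as fitting the clean signal, which is the core Noise2Noise argument exploited by the training above.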

We evaluated the performance of RED-N2N against the multi-coil non-uniform inverse fast Fourier transform (MCNUFFT), compressed sensing (CS) [4], and a UNet3D (x-y-phase) trained using the 5-minute CS reconstruction as the ground truth.

1. Grimm R, Fürst S, Souvatzoglou M, et al. Self-gated MRI motion modeling for respiratory motion compensation in integrated PET/MRI. Med. Image Anal. 2015;19(1):110–120.

2. Feng L, Grimm R, Block KT, et al. Golden-angle radial sparse parallel MRI: combination of compressed sensing, parallel imaging, and golden-angle radial sampling for fast and flexible dynamic volumetric MRI. Magn. Reson. Med. 2014;72(3):707–717.

3. Feng L, Axel L, Chandarana H, et al. XD-GRASP: Golden-angle radial MRI with reconstruction of extra motion-state dimensions using compressed sensing. Magn. Reson. Med. 2016;75(2):775–788.

4. Eldeniz C, Fraum T, Salter A, et al. CAPTURE: Consistently Acquired Projections for Tuned and Robust Estimation: A Self-Navigated Respiratory Motion Correction Approach. Invest Radiol. 2018;53(5):293–305.

5. Lustig M, Donoho DL, Pauly JM. Sparse MRI: The Application of Compressed Sensing for Rapid MR Imaging. Magn. Reson. Med. 2007;58(6):1182–1195.

6. Knoll F, Bredies K, Pock T, et al. Second Order Total Generalized Variation (TGV) for MRI. Magn. Reson. Med. 2011;65(2):480–491.

7. Otazo R, Candès E, Sodickson DK. Low-Rank Plus Sparse Matrix Decomposition for Accelerated Dynamic MRI with Separation of Background and Dynamic Components. Magn. Reson. Med. 2015;73:1125–1136.

8. Han YS, Yoo J, Ye JC. Deep learning with domain adaptation for accelerated projection‐reconstruction MR. Magn. Reson. Med. 2017;80(3):1189–1205.

9. Lee D, Yoo J, Tak S, et al. Deep Residual Learning for Accelerated MRI Using Magnitude and Phase Networks. IEEE Trans. Biomed. Eng. 2018;65(9):1985–1995.

10. Aggarwal HK, Mani MP, Jacob M. MoDL: Model Based Deep Learning Architecture for Inverse Problems. IEEE Trans. Med. Imag. 2019;38(2):394–405.

11. Romano Y, Elad M, Milanfar P. The Little Engine That Could: Regularization by Denoising (RED). SIAM J. Imaging Sci. 2017;10(4):1804–1844.

12. Reehorst ET, Schniter P. Regularization by Denoising: Clarifications and New Interpretations. IEEE Trans. Comput. Imag. 2019;5(1):52–67.

13. Metzler CA, Schniter P, Veeraraghavan A, et al. prDeep: Robust Phase Retrieval with a Flexible Deep Network. In: Proc. 35th Int. Conf. Machine Learning (ICML). Stockholm, Sweden; 2018.

14. Sun Y, Liu J, Kamilov US. Block Coordinate Regularization by Denoising. In: Proc. Advances in Neural Information Processing Systems 32. Vancouver, BC, Canada; 2019.

15. Mataev G, Elad M, Milanfar P. DeepRED: Deep Image Prior Powered by RED. In: Proc. IEEE Int. Conf. Comp. Vis. Workshops (ICCVW). Seoul, South Korea; 2019.

16. Wu Z, Sun Y, Liu J, et al. Online Regularization by Denoising with Applications to Phase Retrieval. In: Proc. IEEE Int. Conf. Comp. Vis. Workshops (ICCVW). Seoul, South Korea; 2019.

17. Lehtinen J, Munkberg J, Hasselgren J, et al. Noise2Noise: Learning Image Restoration without Clean Data. In: Proc. 35th Int. Conf. Machine Learning (ICML). Stockholm, Sweden; 2018.

18. Zhang K, Zuo W, Chen Y, et al. Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising. IEEE Trans. Image Process. 2017;26(7):3142–3155.