Zhuo-Xu Cui^{1}, Sen Jia^{2}, Zhilang Qiu^{2}, Qingyong Zhu^{1}, Yuanyuan Liu^{3}, Jing Cheng^{2}, Leslie Ying^{4}, Yanjie Zhu^{2}, and Dong Liang^{1}

^{1}Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China, ^{2}Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China, ^{3}National Innovation Center for Advanced Medical Devices, Shenzhen, China, ^{4}Department of Biomedical Engineering and the Department of Electrical Engineering, The State University of New York, Buffalo, NY, United States

k-space deep learning (DL) is emerging as an alternative to conventional image-domain DL for accelerated MRI. Typically, DL requires training on large amounts of data, which are often inaccessible in clinical practice. This paper presents an untrained k-space deep generative model (DGM) for interpolating missing data. Specifically, the missing data are interpolated by a carefully designed untrained generator whose output layer conforms to the multichannel prior of MR images, while the architecture of the remaining layers implicitly captures k-space statistical priors. Furthermore, we prove that the proposed method guarantees sufficient accuracy bounds for the interpolated data under commonly used sampling patterns.

On the other hand, the key to the success of CS-DGM lies in its ability to capture image statistical priors. In MRI, however, discrete or truncated sampling in k-space causes basis mismatches between the singularities of the true continuous image and the discrete grid [4], [5], [6], so the extraction of image-domain statistical priors is biased. Recent work on k-space interpolation has shown that, owing to the duality between the continuous image domain and k-space, regularization on the true continuous image can be converted into structured low-rank (SLR) regularization in k-space [7]. The SLR regularization, however, has two shortcomings: first, it incurs a high computational burden from the singular value decomposition (SVD) of the SLR matrices; second, it relies only on image structure priors (multichannel prior, smooth phase, sparsity, etc.) through the domain dual transformation, without exploiting the statistical priors of k-space itself. Recent work on DGMs has shown that statistical priors can be implicitly captured by the architecture of a carefully designed generator [1], which motivates us to study the k-space DGM.
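As a hedged illustration of the low-rank structure that SLR methods exploit, and of the SVD step that dominates their cost, the following NumPy sketch builds a Hankel-structured matrix from a 1-D k-space signal and checks its numerical rank. The window length `patch` and the single-harmonic test signal are illustrative assumptions, not the construction used in [7]:

```python
import numpy as np

def hankel_matrix(kspace, patch=4):
    """Stack sliding windows of a 1-D k-space signal into a
    Hankel-structured matrix; SLR methods penalize the rank of
    such matrices. `patch` is an illustrative window length."""
    n = len(kspace)
    rows = n - patch + 1
    H = np.empty((rows, patch), dtype=complex)
    for i in range(rows):
        H[i] = kspace[i:i + patch]
    return H

# A k-space signal whose image-domain counterpart is a single on-grid
# harmonic yields a rank-1 Hankel matrix -- the low-rank structure
# that SLR regularization exploits.
n = 64
x = np.exp(2j * np.pi * 5 * np.arange(n) / n)
H = hankel_matrix(x)
s = np.linalg.svd(H, compute_uv=False)  # the computationally heavy SVD step
rank = int(np.sum(s > 1e-8 * s[0]))     # numerical rank: 1
```

Each row of `H` is a scalar multiple of the first, so only one singular value is numerically nonzero; undersampling or additional image content raises the rank, which is what the SLR penalty controls.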

Inspired by the DGM, we generalize the Landweber iteration into a generator network. Specifically, following the iterative structure of the Landweber method, we generalize the first iteration into a residual network and absorb the unknown variables $$${z}$$$ and $$${k}_i$$$ into the network parameters, so that the projection onto the subspace $$$U$$$ can be realized inexpensively. The architecture of the generator is depicted in Figure 1. The network $$$\mathcal{G}$$$ is not pre-trained. To make the generated data $$$\mathcal{G}(\xi)$$$ match the true k-space data, we minimize the following loss function:$$\min_{\mathcal{G}}\|\mathcal{M}\mathcal{G}(\xi)-y\|_F^2.$$
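A minimal sketch of fitting an untrained generator to the masked loss above, assuming a toy real-valued linear layer in place of the residual network of Figure 1; the dimensions, mask density, and step size are illustrative assumptions, not the settings of the proposed K-DGM:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for min_G ||M G(xi) - y||_F^2: a fixed random latent
# input xi, a single linear layer W as the "generator" (an assumption --
# the actual generator is a residual network), and a binary mask M.
n, d = 32, 8
xi = rng.standard_normal(d)                  # fixed input, never optimized
W = np.zeros((n, d))                         # untrained generator parameters
mask = (rng.random(n) < 0.5).astype(float)   # under-sampling mask M
y = mask * rng.standard_normal(n)            # observed (masked) k-space data

lr = 0.5 / (xi @ xi)                         # step size for stable descent
for _ in range(200):
    r = mask * (W @ xi) - y                  # residual M G(xi) - y
    W -= lr * np.outer(mask * r, xi)         # gradient step on the squared loss
loss = np.sum((mask * (W @ xi) - y) ** 2)    # masked data fitted to ~0
```

Only the generator parameters are optimized; the input $$$\xi$$$ stays fixed, mirroring the untrained-DGM setting in which the network architecture itself, rather than learned weights, supplies the prior.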

The experimental results show that K-DGM, without any training, is competitive with trained k-space deep learning (K-Unet), image-domain deep learning (I-Unet), the traditional k-space SLR method (P-LORAKS), and the image-domain DGM (ConvDecoder) under both deterministic and random trajectories.

[1] D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Deep image prior,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.

[2] S. Dittmer, T. Kluth, P. Maass, and D. Otero Baguer, “Regularization by architecture: A deep prior approach for inverse problems,” Journal of Mathematical Imaging and Vision, vol. 62, no. 3, pp. 1573–7683, 2020.

[3] A. Bora, A. Jalal, E. Price, and A. G. Dimakis, “Compressed sensing using generative models,” in International Conference on Machine Learning, 2017, pp. 537–546.

[4] G. Ongie and M. Jacob, “Off-the-grid recovery of piecewise constant images from few Fourier samples,” SIAM Journal on Imaging Sciences, vol. 9, no. 3, pp. 1004–1041, 2016.

[5] Y. Chi, L. L. Scharf, A. Pezeshki, and A. R. Calderbank, “Sensitivity to basis mismatch in compressed sensing,” IEEE Transactions on Signal Processing, vol. 59, no. 5, pp. 2182–2195, 2011.

[6] J.-F. Cai, J. K. Choi, and K. Wei, “Data driven tight frame for compressed sensing MRI reconstruction via off-the-grid regularization,” SIAM Journal on Imaging Sciences, vol. 13, no. 3, pp. 1272–1301, 2020.

[7] M. Jacob, M. P. Mani, and J. C. Ye, “Structured low-rank algorithms: Theory, magnetic resonance applications, and links to machine learning,” IEEE Signal Processing Magazine, vol. 37, no. 1, pp. 54–68, 2020.

[8] J. P. Haldar and J. Zhuo, “P-LORAKS: Low-rank modeling of local k-space neighborhoods with parallel imaging data,” Magnetic Resonance in Medicine, vol. 75, no. 4, pp. 1499–1514, 2016.

[9] M. Z. Darestani and R. Heckel, “Accelerated MRI with untrained neural networks,” IEEE Transactions on Computational Imaging, vol. 7, pp. 724–733, 2021.

[10] Y. Han, L. Sunwoo, and J. C. Ye, “k-space deep learning for accelerated MRI,” IEEE Transactions on Medical Imaging, vol. 39, no. 2, pp. 377–386, 2020.

[11] J. Zbontar, F. Knoll, A. Sriram, T. Murrell, Z. Huang, M. J. Muckley, A. Defazio, R. Stern, P. Johnson, M. Bruno, M. Parente, K. J. Geras, J. Katsnelson, H. Chandarana, Z. Zhang, M. Drozdzal, A. Romero, M. Rabbat, P. Vincent, N. Yakubova, J. Pinkerton, D. Wang, E. Owens, C. L. Zitnick, M. P. Recht, D. K. Sodickson, and Y. W. Lui, “fastMRI: An open dataset and benchmarks for accelerated MRI,” 2019.

The network architecture of the generator $$$\mathcal{G}$$$. The input is an initialized vector in $$$B(s)$$$, $$$g_{\phi}$$$ denotes a residual network with parameters $$$\phi$$$, and the output layer performs the projection onto the subspace $$$U$$$. For simplicity, we abbreviate $$${g_{\phi}(\xi)}$$$ as $$${g}$$$.

Reconstruction results under deterministic (VISTA) under-sampling at R = 5. The values in the corner are the NMSE/PSNR/SSIM values of each slice. As shown in this figure, the aliasing pattern still remains in the images reconstructed by P-LORAKS and ConvDecoder. For K-Unet and I-Unet, although noise is well suppressed, some wavy artifacts remain in the middle region of the reconstructed images and details are severely lost. The proposed K-DGM suppresses the aliasing pattern and restores details effectively.

Reconstruction results under 2-D random under-sampling at R = 8. The values in the corner are the NMSE/PSNR/SSIM values of each slice. As shown in this figure, all methods effectively suppress artifacts in this experiment. In terms of detail recovery, our method most effectively reconstructs the sharp edge indicated by the red arrow in the enlarged view. All quantitative metrics also show that our method significantly outperforms the comparison methods in this test.

DOI: https://doi.org/10.58530/2022/4334