0253

Deep Learning MR Relaxometry with Joint Spatial-Temporal Under-sampling
Hongyu Li1, Mingrui Yang2, Jeehun Kim2, Ruiying Liu1, Chaoyi Zhang1, Peizhou Huang1, Sunil Kumar Gaire1, Dong Liang3, Xiaojuan Li2, and Leslie Ying1
1Department of Biomedical Engineering, Department of Electrical Engineering, The State University of New York at Buffalo, Buffalo, NY, United States, 2Program of Advanced Musculoskeletal Imaging (PAMI), Cleveland Clinic, Cleveland, OH, United States, 3Paul C. Lauterbur Research Center for Biomedical Imaging, Medical AI research center, SIAT, CAS, Shenzhen, China

Synopsis

This abstract presents a deep learning method to generate MR parameter maps from very few subsampled echo images. The method uses deep convolutional neural networks to learn the nonlinear relationship between the subsampled T1rho/T2-weighted images and the T1rho/T2 maps, bypassing the conventional exponential decay models. Experimental results show that the proposed method is able to generate T1rho/T2 maps from only 2 subsampled echo images, with quantitative values comparable to those of the T1rho/T2 maps generated from 8 fully sampled echo images using conventional exponential decay curve fitting.

Introduction

With the conventional model-fitting method, 4-8 echoes are typically necessary for reliable estimation of the parameter maps, resulting in prolonged acquisitions. It is therefore of great interest to accelerate quantitative imaging to increase its clinical use. Deep learning methods have recently been used to accelerate MR acquisition by reconstructing images from subsampled k-space data1-6. These learning-based reconstruction methods have the benefit of ultrafast online reconstruction once offline training is completed. Although there is a considerable body of work on deep reconstruction of morphological images, few works have studied tissue parameter mapping7-9. In this abstract, we develop a deep learning-based framework for ultrafast quantitative MR imaging. Different from existing works using deep learning for parameter mapping, our network learns information in both the spatial and temporal directions, so that both the k-space measurements and the number of echoes can be reduced. Specifically, we formulate parameter mapping as a deep network with multi-channel input (images from different echoes), building on our previous network10 for diffusion tensor imaging. The purpose of this study is to demonstrate the feasibility of such a framework, named Model Skipped Convolutional Neural Network (MSCNN), for ultrafast T1rho/T2 mapping. Using knee cartilage data, we demonstrate for the first time the feasibility of T1rho/T2 mapping using as few as 2 subsampled (in k-space) echo images, with quantitative maps comparable to those from 8 fully sampled echo images.
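For context, the conventional baseline that MSCNN bypasses fits a mono-exponential decay to the echo train at every pixel. The sketch below is illustrative only (the function names, initial guesses, and noiseless synthetic signal are assumptions, not taken from the abstract), but it shows the per-pixel fitting step whose cost motivates the learned approach:

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(tsl, s0, t1rho):
    # Mono-exponential T1rho decay model: S(TSL) = S0 * exp(-TSL / T1rho)
    return s0 * np.exp(-tsl / t1rho)

def fit_t1rho_pixel(signal, tsls):
    # Fit S0 and T1rho for one pixel from its echo-train signal.
    # Initial guess (signal[0], 40 ms) is an illustrative assumption.
    popt, _ = curve_fit(mono_exp, tsls, signal, p0=(signal[0], 40.0))
    return popt[1]  # estimated T1rho in ms

tsls = np.array([0., 10., 20., 30., 40., 50., 60., 70.])  # ms, as acquired
signal = 100.0 * np.exp(-tsls / 45.0)  # synthetic noiseless decay, T1rho = 45 ms
t1rho = fit_t1rho_pixel(signal, tsls)
```

Repeating this nonlinear fit for every pixel and slice is what drives the long processing times quoted later in the abstract.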

Theory and Methods

The proposed MSCNN reconstructs parametric maps directly from subsampled echo images using a deep CNN. The goal of the network is to learn the nonlinear relationship $$$F$$$ between input $$$x$$$ and output $$$y$$$, represented as $$$y=F(x;Θ)$$$, where $$$Θ$$$ denotes the network parameters to be learned during training. This mapping is learned by minimizing a loss function between the network prediction and the corresponding ground truth. In line with our prior study on diffusion tensor imaging with reduced acquisition10, the loss term $$$L(Θ)=\frac{1}{n}\sum_{i=1}^{n}\| F(x_{i};Θ) -y_{i}\|^{2} (1)$$$ ensures that the parameter maps reconstructed by the end-to-end CNN are consistent with the maps from the fully sampled echo images, while bypassing the conventional, error-prone model-fitting step. During training, a reduced number of subsampled echo images are used as the inputs $$$x_{i}$$$, and the corresponding reference T1rho/T2 maps $$$y_{i}$$$ (obtained by fitting all 8 fully sampled echo images) as the outputs. We learn the network parameters $$$Θ$$$ that minimize the loss function, i.e., the mean-square error between the network output and the reference T1rho/T2 maps (n is the number of training datasets). The network consists of ten weighted layers; each layer except the last uses 64 filters with a kernel size of 3. The deep network exploits both the spatial correlation among pixels and the temporal correlation between the selected echoes, while learning the complicated nonlinear relationship between the subsampled T1rho/T2-weighted images and the T1rho/T2 maps.
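The architecture described above (ten weighted layers, 64 filters of kernel size 3 in all but the last, multi-channel echo input, MSE loss of Eq. (1)) could be sketched in PyTorch as follows. Activation functions, padding, and batch shapes are assumptions not specified in the abstract:

```python
import torch
import torch.nn as nn

class MSCNN(nn.Module):
    # Ten convolutional (weighted) layers: 64 filters of kernel size 3 in all
    # but the last, which maps to the single-channel parameter map.
    # ReLU activations and same-padding are assumptions.
    def __init__(self, n_echoes=2):
        super().__init__()
        layers = [nn.Conv2d(n_echoes, 64, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(8):
            layers += [nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(64, 1, 3, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

model = MSCNN(n_echoes=2)
loss_fn = nn.MSELoss()  # Eq. (1): mean-square error against reference maps
echoes = torch.randn(4, 2, 160, 320)   # subsampled echo images (inputs x_i)
ref_map = torch.randn(4, 1, 160, 320)  # reference T1rho/T2 maps (outputs y_i)
loss = loss_fn(model(echoes), ref_map)
```

Feeding the echoes as input channels is what lets the convolutions mix spatial and temporal (inter-echo) information in a single pass.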
Ten sets of knee data were collected on a 3T MR scanner (Prisma, Siemens Healthineers) with a 1Tx/15Rx knee coil (QED), using magnetization-prepared angle-modulated partitioned k-space spoiled gradient echo snapshots (MAPSS) T1rho and T2 quantification sequences (spin-lock times [TSLs] of 0, 10, 20, 30, 40, 50, 60, 70 ms; spin-lock frequency 500 Hz; preparation TEs of 0, 9.7, 21.3, 32.9, 44.5, 56.1, 67.6, 79.2 ms; matrix size 160×320×8×24 [PE×FE×Echo×Slice]; FOV 14 cm; slice thickness 4 mm). Among them, 8 datasets were used to train the proposed MSCNN and 2 for testing. In the initial experiment, echoes 1, 3, and 8 were selected out of 8, and 2D Poisson random sampling was used with an acceleration factor (AF) of 2, giving a joint reduction factor of 5.33. For further acceleration, only the first and last echoes were selected, and a 2D Poisson random sampling pattern was used with an additional acceleration factor of 2 or 3. Parameter maps were generated with joint reduction factors of 4 (no k-space AF in echoes 1 and 8), 8 (AF 2), and 12 (AF 3). Training on a workstation with an Intel Core i9-7980XE CPU, 64 GB RAM, and two NVIDIA GTX 1080 Ti GPUs took around 10 hours. Testing takes only 0.07 seconds to generate a complete set of T1rho/T2 maps through the learned network, in contrast to the ~15 min processing time of the conventional exponential decay curve fitting.
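The joint reduction factors quoted above follow from combining the temporal (echo selection) and spatial (k-space) undersampling; a quick check, under the natural reading that the joint factor is the product of the two:

```python
def joint_reduction_factor(total_echoes, selected_echoes, kspace_af):
    # Joint factor = temporal reduction (echoes kept out of the full train)
    # times the spatial k-space acceleration factor.
    return total_echoes / selected_echoes * kspace_af

af_3of8 = joint_reduction_factor(8, 3, 2)  # echoes 1, 3, 8 with k-space AF 2
af_full = joint_reduction_factor(8, 2, 1)  # echoes 1 & 8 only, fully sampled
af_2x   = joint_reduction_factor(8, 2, 2)  # echoes 1 & 8 with k-space AF 2
af_3x   = joint_reduction_factor(8, 2, 3)  # echoes 1 & 8 with k-space AF 3
print(round(af_3of8, 2), af_full, af_2x, af_3x)  # 5.33 4.0 8.0 12.0
```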

Results

Figures 1 and 2 show the T1rho and T2 maps generated by the proposed MSCNN with different echo undersampling and different k-space subsampling. Results from all 8 fully sampled echoes using the conventional fitting model are shown as the reference. The quantitative maps generated by MSCNN are very close to the reference even with only 2 subsampled echo images. Performance is further quantified by the normalized mean squared error (NMSE) shown at the bottom left of each image.
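The abstract does not define its NMSE; one common definition (an assumption here) normalizes the squared error by the energy of the reference map:

```python
import numpy as np

def nmse(estimate, reference):
    # Normalized MSE: squared error relative to the reference map's energy.
    return np.sum((estimate - reference) ** 2) / np.sum(reference ** 2)

ref = np.full((4, 4), 40.0)  # toy reference T1rho map (ms)
est = ref + 2.0              # toy estimate with a uniform 2 ms bias
print(nmse(est, ref))        # (2/40)^2 = 0.0025
```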

Conclusion

In this abstract, we present MSCNN, a deep convolutional neural network for ultrafast quantitative MR imaging. The network exploits both spatial and temporal information in the training datasets. Experimental results show that the proposed network is capable of generating T1rho/T2 maps from as few as 2 subsampled echo images. Future studies will use larger datasets to evaluate quantification accuracy and diagnostic performance.

Acknowledgements

The work was partly supported by the Arthritis Foundation.

References

[1] Wang S, Su Z, Ying L, Peng X, Liang D, et al. Accelerating magnetic resonance imaging via deep learning. IEEE 14th International Symposium on Biomedical Imaging (ISBI), pp. 514-517, Apr. 2016.

[2] Wang S, Huang N, Ying L, Liang D, et al. 1D Partial Fourier Parallel MR imaging with deep convolutional neural network. Proceedings of International Society of Magnetic Resonance in Medicine Scientific Meeting, 2017.

[3] Lee D, Yoo J, et al. Deep residual learning for compressed sensing MRI. IEEE 14th International Symposium on Biomedical Imaging (ISBI), Melbourne, VIC, pp. 15-18, Apr. 2017.

[4] Wang S, Zhao T, Ying L, and Liang D. Feasibility of Multi-contrast MR imaging via deep learning. Proceedings of International Society of Magnetic Resonance in Medicine Scientific Meeting, 2017.

[5] Schlemper J, Caballero J, et al. A Deep Cascade of Convolutional Neural Networks for Dynamic MR Image Reconstruction. arXiv:1704.02422, 2017.

[6] Jin K H, McCann M T et al. Deep Convolutional Neural Network for Inverse Problems in Imaging. in IEEE Transactions on Image Processing, vol. 26, no. 9, pp. 4509-4522, Sept. 2017.

[7] Liu F, Feng L, Kijowski R. MANTIS: Model-Augmented Neural neTwork with Incoherent k-space Sampling for efficient estimation of MR parameters. International Society of Magnetic Resonance in Medicine Machine Learning Workshop Part II, 2018.

[8] Zhang Q, Su P, Liao Y, et al. Deep learning based MR fingerprinting ASL ReconStruction. Proceedings of 27th ISMRM Annual Meeting, Montreal, Canada, #0820, 2019.

[9] Pirkl C, Lipp I, Buonincontri G, et al. Deep Learning-enabled Diffusion Tensor MR Fingerprinting. Proceedings of 27th ISMRM Annual Meeting, Montreal, Canada, #1102, 2019.

[10] Li H, Zhang C, Liang Z, Liang D, Shen B, et al. Deep Learned Diffusion Tensor Imaging. Proceedings of 27th ISMRM Annual Meeting, Montreal, Canada, #3344, 2019.

Figures

Fig.1. T1rho (top) and T2 (bottom) maps from 3 echoes using MSCNN with a joint reduction factor of 5.33, the reference T1rho/T2 maps from 8 fully sampled echoes, and the corresponding difference maps.

Fig.2. T1rho (top) and T2 (bottom) maps from 2 echoes using MSCNN with joint reduction factors of 4, 8, and 12, and the reference T1rho and T2 maps from 8 fully sampled echoes.

Proc. Intl. Soc. Mag. Reson. Med. 28 (2020)