Fang Liu1, Li Feng2, and Richard Kijowski1
1Radiology, University of Wisconsin-Madison, Madison, WI, United States, 2Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States
Synopsis
The purpose of this work was to develop and evaluate a novel deep learning-based reconstruction framework called Model-Augmented Neural neTwork with Incoherent k-space Sampling (MANTIS) for accelerated MR parameter mapping. Our approach combines end-to-end CNN mapping with k-space consistency using the concept of cyclic loss to further enforce data and model fidelity. Incoherent k-space sampling is used to improve reconstruction performance. A physical model is incorporated into the proposed framework, so that the parameter maps can be efficiently estimated directly from undersampled images. The performance of MANTIS was demonstrated for T2 mapping of the knee joint. Our study demonstrated that the proposed MANTIS framework represents a promising approach for efficient MR parameter mapping. MANTIS can potentially be extended to other types of parameter mapping with appropriate models.
Introduction
Deep learning methods have been successfully used for image reconstruction with promising initial results. Exemplary approaches extend the compressed sensing framework using various deep learning architectures and have achieved great success (1,2). Other approaches directly remove artifacts from undersampled images or k-space using end-to-end convolutional neural network (CNN) mapping (3–5). While these deep learning methods have focused on highly efficient image reconstruction for conventional MR imaging, applications to dynamic imaging, such as accelerated parameter mapping, have been limited. In this work, we aimed to develop a deep learning-based framework for rapid MR parameter mapping. Our approach combines efficient end-to-end CNN mapping with k-space consistency using the concept of cyclic loss to further enforce data and model fidelity. A random k-space sampling pattern combined with a fully randomized training strategy, termed "sampling augmentation", was also proposed to improve robustness during image inference. The purpose of this study was to demonstrate the feasibility of such a framework, called MANTIS, for rapid and accurate T2 mapping of the knee joint.
Methods
(a) MANTIS Quantitative Imaging: MANTIS reconstructs parametric maps directly from undersampled source images using a CNN. In line with our prior study on image reconstruction using CycleGAN with a focus on data consistency (6), MANTIS (Fig. 1) consists of two training objectives. The first loss term (loss 1) ensures that the undersampled images produce estimated parametric maps consistent with the reference maps (i.e., the objective of standard end-to-end mapping). The second loss term (loss 2) ensures that the parameter maps estimated by the end-to-end CNN mapping produce undersampled data matching the acquired k-space measurements (i.e., data consistency), enforced either in the k-space domain or in the undersampled image domain (Fig. 1).
(b) Network Implementation: We used a U-Net as the convolutional encoder/decoder for the end-to-end CNN mapping. The network was trained on an Nvidia GeForce GTX 1080Ti card using adaptive gradient descent optimization with a learning rate of 0.0002 for 200 epochs.
(c) Evaluation: The evaluation was performed on 110 multi-echo knee image sets from our routine clinical scans, collected on a 3T scanner (Signa Excite HDx, GE Healthcare) using a multi-echo spin-echo T2 mapping sequence (8 echoes) in the sagittal orientation. The training data were obtained by randomly selecting 100 knee image sets, with the remaining 10 used for evaluation. Undersampling was retrospectively simulated by retaining 5% of the central k-space lines and selecting the remaining lines to achieve acceleration rates (R) of 5 and 8 using a 1D variable-density Cartesian pattern. The sampling pattern was randomly generated for each dynamic frame, and the masks were further randomly modified during each training iteration to create "sampling augmentation" of the training data (Fig. 2). Training and reconstruction took ~19 hours and 8.2 sec, respectively.
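To make the two-term objective and the per-iteration mask generation concrete, below is a minimal NumPy sketch. All function names, tensor shapes, loss norms, and the mono-exponential T2 signal model parameterization (M0, T2 per pixel) are illustrative assumptions, not the exact implementation; in practice the maps would come from the U-Net and the loss would be backpropagated in a deep learning framework.

```python
import numpy as np

def t2_signal_model(m0, t2, echo_times):
    """Multi-echo spin-echo magnitude signal: S_e = M0 * exp(-TE_e / T2)."""
    return m0[None] * np.exp(-echo_times[:, None, None] / t2[None])

def variable_density_mask(ny, n_echoes, accel=5, center_frac=0.05, rng=None):
    """1D variable-density Cartesian ky mask; regenerating it every training
    iteration implements the "sampling augmentation" idea."""
    rng = np.random.default_rng() if rng is None else rng
    n_keep = ny // accel
    n_center = max(int(center_frac * ny), 1)
    start = ny // 2 - n_center // 2
    mask = np.zeros((n_echoes, ny), dtype=bool)
    # sampling probability decays away from the k-space center
    prob = np.exp(-np.abs(np.arange(ny) - ny / 2) / (ny / 4))
    for e in range(n_echoes):
        mask[e, start:start + n_center] = True  # always keep central lines
        p = np.where(mask[e], 0.0, prob)
        p /= p.sum()
        extra = n_keep - n_center
        if extra > 0:
            mask[e, rng.choice(ny, size=extra, replace=False, p=p)] = True
    return mask

def mantis_loss(pred_m0, pred_t2, ref_m0, ref_t2, y_acquired, mask,
                echo_times, lam=1.0):
    """loss 1: estimated parameter maps vs. reference maps (end-to-end mapping);
    loss 2: k-space consistency of the model-predicted multi-echo signal."""
    loss1 = np.mean(np.abs(pred_t2 - ref_t2)) + np.mean(np.abs(pred_m0 - ref_m0))
    imgs = t2_signal_model(pred_m0, pred_t2, echo_times)
    ksp = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(imgs, axes=(1, 2))),
                          axes=(1, 2))
    diff = (ksp - y_acquired) * mask[:, :, None]  # compare acquired lines only
    loss2 = np.mean(np.abs(diff) ** 2)
    return loss1 + lam * loss2
```

When the estimated maps equal the reference maps, both terms vanish on the acquired k-space samples; the weighting `lam` between the two terms is a free hyperparameter.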
Results
Fig. 3 shows exemplary results at R=5 and 8. Undersampling prevented reliable reconstruction of the T2 map with a simple inverse FFT (Zero-Filling), direct U-Net mapping (CNN-Only), or iterative globally and locally low-rank constrained reconstruction methods (GLR and LLR) (7). MANTIS successfully removed aliasing artifacts and preserved tissue structure, closely matching the reference. MANTIS showed the highest correspondence with the reference as measured by normalized root mean square error (nRMSE): 11.9%, 10.8%, 6.5%, and 5.3% for GLR, LLR, CNN-Only, and MANTIS at R=5, and 14.1%, 13.5%, 7.9%, and 6.3% at R=8, respectively. The high accuracy of MANTIS was maintained regardless of the sampling mask selected during evaluation, as shown in Fig. 4, indicating robustness against sampling-pattern discrepancies. Fig. 5 demonstrates the capability of MANTIS to accurately depict small lesions in cartilage and meniscus at R=5 and 8.
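For reference, the nRMSE values quoted above can be computed as sketched below; the normalization convention (here, the RMS of the reference map) is an assumption, since the abstract does not state it explicitly.

```python
import numpy as np

def nrmse_percent(estimate, reference):
    """Root-mean-square error normalized by the RMS of the reference map,
    expressed as a percentage (assumed convention)."""
    rmse = np.sqrt(np.mean((estimate - reference) ** 2))
    return 100.0 * rmse / np.sqrt(np.mean(reference ** 2))
```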
Discussion
We proposed a novel deep learning-based reconstruction approach called MANTIS for rapid MR parametric mapping, which yielded superior results compared to traditional iterative sparsity-based methods. By combining efficient end-to-end CNN mapping, model-based data fidelity reinforcement, and a randomized training strategy, MANTIS enables rapid parametric mapping directly from undersampled data with high quality and good robustness.
Acknowledgements
No acknowledgement found.
References
1. Hammernik K, Klatzer T, Kobler E, Recht MP, Sodickson DK, Pock T, Knoll F. Learning a Variational Network for Reconstruction of Accelerated MRI Data. Magn. Reson. Med. [Internet] 2017;79:3055–3071. doi: 10.1002/mrm.26977.
2. Mardani M, Gong E, Cheng JY, Vasanawala SS, Zaharchuk G, Xing L, Pauly JM. Deep Generative Adversarial Neural Networks for Compressive Sensing (GANCS) MRI. IEEE Trans. Med. Imaging [Internet] 2018:1–1. doi: 10.1109/TMI.2018.2858752.
3. Wang S, Su Z, Ying L, Peng X, Zhu S, Liang F, Feng D, Liang D. Accelerating magnetic resonance imaging via deep learning. In: 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI). IEEE; 2016. pp. 514–517. doi: 10.1109/ISBI.2016.7493320.
4. Schlemper J, Caballero J, Hajnal JV, Price A, Rueckert D. A Deep Cascade of Convolutional Neural Networks for Dynamic MR Image Reconstruction. IEEE Trans. Med. Imaging [Internet] 2017:1–1. doi: 10.1007/978-3-319-59050-9_51.
5. Zhu B, Liu JZ, Cauley SF, Rosen BR, Rosen MS. Image reconstruction by domain-transform manifold learning. Nature [Internet] 2018;555:487–492. doi: 10.1038/nature25988.
6. Liu F, Samsonov A. Data-Cycle-Consistent Adversarial Networks for High-Quality Reconstruction of Undersampled MRI Data. In: the ISMRM Machine Learning Workshop. ; 2018.
7. Zhang T, Pauly JM, Levesque IR. Accelerating parameter mapping with a locally low rank constraint. Magn. Reson. Med. [Internet] 2015;73:655–661. doi: 10.1002/mrm.25161.