0307

MANTIS: Model-Augmented Neural neTwork with Incoherent k-space Sampling for accelerated MR parameter mapping
Fang Liu1, Li Feng2, and Richard Kijowski1

1Radiology, University of Wisconsin-Madison, Madison, WI, United States, 2Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States

Synopsis

The purpose of this work was to develop and evaluate a novel deep learning-based reconstruction framework called Model-Augmented Neural neTwork with Incoherent k-space Sampling (MANTIS) for accelerated MR parameter mapping. Our approach combines end-to-end CNN mapping with k-space consistency using the concept of cyclic loss to further enforce data and model fidelity. Incoherent k-space sampling is used to improve reconstruction performance. A physical model is incorporated into the proposed framework, so that the parameter maps can be efficiently estimated directly from undersampled images. The performance of MANTIS was demonstrated for T2 mapping of the knee joint. Our study demonstrated that the proposed MANTIS framework represents a promising approach for efficient MR parameter mapping. MANTIS can potentially be extended to other types of parameter mapping with appropriate models.

Introduction

Deep learning methods have been successfully used for image reconstruction with promising initial results. Exemplary approaches extend the compressed sensing framework using various deep learning architectures and have achieved great success (1,2). Other approaches remove artifacts directly from undersampled images or k-space using end-to-end convolutional neural network (CNN) mapping (3–5). While these deep learning methods have focused on highly efficient image reconstruction for conventional MR imaging, applications to dynamic imaging, such as accelerated parameter mapping, have been limited. In this work, we aimed to develop a deep learning-based framework for rapid MR parameter mapping. Our approach combines efficient end-to-end CNN mapping with k-space consistency using the concept of cyclic loss to further enforce data and model fidelity. An incoherent random sampling pattern in k-space, combined with a fully randomized training strategy ("sampling augmentation"), was also proposed to improve the robustness of the method during image inference. The purpose of this study was to demonstrate the feasibility of such a framework, called MANTIS, for rapid and accurate T2 mapping of the knee joint.

Methods

(a) MANTIS Quantitative Imaging: MANTIS reconstructs parametric maps directly from undersampled source images using a CNN. In line with our prior work on image reconstruction using a CycleGAN with a focus on data consistency (6), MANTIS (Fig. 1) consists of two training objectives. The first loss term (loss 1) ensures that the undersampled images produce estimated parametric maps consistent with the reference maps (i.e., the objective of standard end-to-end mapping). The second loss term (loss 2) ensures that the parameter maps reconstructed by the end-to-end CNN mapping produce undersampled data matching the acquired k-space measurements (i.e., data consistency), enforced either in the k-space domain or in the undersampled image domain (Fig. 1).

(b) Network Implementation: We used a U-Net as the convolutional encoder/decoder for the end-to-end CNN mapping. The network was trained on an Nvidia GeForce GTX 1080Ti card using adaptive gradient descent optimization with a learning rate of 0.0002 for 200 epochs.

(c) Evaluation: The evaluation was performed on 110 multi-echo knee images from our routine clinical scans, collected on a 3T scanner (Signa Excite HDx, GE Healthcare) using a multi-echo spin-echo T2 mapping sequence (8 echoes) in the sagittal orientation. The training data were obtained by randomly selecting 100 knee images, with the remaining 10 images used for evaluation. Undersampling was retrospectively simulated by retaining 5% of the central k-space lines and randomly selecting the remaining lines to achieve acceleration rates (R) of 5 and 8 using a 1D variable-density Cartesian pattern. The sampling pattern was randomly generated for each dynamic frame, and the masks were further randomly modified at each training iteration to create "sampling augmentation" for the training data (Fig. 2). Training and reconstruction took ~19 hours and 8.2 sec, respectively.
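For concreteness, the following is a minimal sketch of how the two loss terms described above could be combined during training, assuming a PyTorch implementation in which the U-Net outputs I0 and T2 channels and data consistency is enforced in the k-space domain. The variable names, the L1 pixel-wise losses, and the weighting factor lam are illustrative assumptions, not the exact implementation used in this work.

```python
# Hedged sketch of a MANTIS-style two-term training objective (loss 1 + loss 2).
import torch
import torch.nn.functional as F

def mantis_loss(unet, img_us, maps_ref, kspace_us, mask, echo_times, lam=1.0):
    """img_us:     (B, E, H, W) undersampled multi-echo images
       maps_ref:   (B, 2, H, W) reference I0 and T2 maps
       kspace_us:  (B, E, H, W) acquired undersampled k-space (complex)
       mask:       (B, E, H, W) sampling mask (1 = acquired line)
       echo_times: (E,) tensor of echo times in ms"""
    maps_est = unet(img_us)                       # (B, 2, H, W): I0, T2
    loss1 = F.l1_loss(maps_est, maps_ref)         # end-to-end mapping loss

    i0, t2 = maps_est[:, 0:1], maps_est[:, 1:2].clamp(min=1e-3)
    te = echo_times.view(1, -1, 1, 1)
    echoes = i0 * torch.exp(-te / t2)             # mono-exponential T2 signal model

    k_syn = torch.fft.fftshift(
        torch.fft.fft2(echoes.to(torch.complex64)), dim=(-2, -1))
    loss2 = F.l1_loss(torch.view_as_real(mask * k_syn),
                      torch.view_as_real(kspace_us))   # k-space consistency

    return loss1 + lam * loss2
```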

Results

Fig. 3 shows representative results at R=5 and R=8. Undersampling prevented reliable reconstruction of the T2 map with a simple inverse FFT (Zero-Filling), direct U-Net mapping (CNN-Only), or iterative globally and locally low-rank constrained reconstruction methods (GLR and LLR) (7). MANTIS successfully removed aliasing artifacts and better preserved tissue structure, yielding maps similar to the reference. The closest correspondence with the reference was achieved by MANTIS, as confirmed by the normalized root mean square error (nRMSE): 11.9%, 10.8%, 6.5%, and 5.3% for GLR, LLR, CNN-Only, and MANTIS at R=5, and 14.1%, 13.5%, 7.9%, and 6.3%, respectively, at R=8. The high accuracy of MANTIS was observed regardless of the sampling mask used during evaluation (Fig. 4), indicating robustness against sampling pattern discrepancies. Fig. 5 demonstrates the capability of MANTIS to accurately reconstruct small cartilage and meniscus lesions at R=5 and R=8.
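For reference, the nRMSE values quoted above can be computed as in the short helper below; the exact normalization convention (here, the root-mean-square value of the reference map) is an assumption.

```python
# Minimal nRMSE helper, assuming normalization by the RMS of the reference map.
import numpy as np

def nrmse(estimate, reference):
    rmse = np.sqrt(np.mean((estimate - reference) ** 2))
    return 100.0 * rmse / np.sqrt(np.mean(reference ** 2))  # in percent
```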

Discussion

We proposed a novel deep learning-based reconstruction approach called MANTIS for rapid MR parametric mapping, which yielded superior results compared to traditional iterative sparsity-based methods. By combining efficient end-to-end CNN mapping, model-based data fidelity reinforcement, and a randomized training strategy, MANTIS enables rapid parametric mapping directly from undersampled data with high quality and good robustness.

Acknowledgements

No acknowledgement found.

References

1. Hammernik K, Klatzer T, Kobler E, Recht MP, Sodickson DK, Pock T, Knoll F. Learning a Variational Network for Reconstruction of Accelerated MRI Data. Magn. Reson. Med. [Internet] 2017;79:3055–3071. doi: 10.1002/mrm.26977.

2. Mardani M, Gong E, Cheng JY, Vasanawala SS, Zaharchuk G, Xing L, Pauly JM. Deep Generative Adversarial Neural Networks for Compressive Sensing (GANCS) MRI. IEEE Trans. Med. Imaging [Internet] 2018:1–1. doi: 10.1109/TMI.2018.2858752.

3. Wang S, Su Z, Ying L, Peng X, Zhu S, Liang F, Feng D, Liang D. Accelerating magnetic resonance imaging via deep learning. In: 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI). IEEE; 2016. pp. 514–517. doi: 10.1109/ISBI.2016.7493320.

4. Schlemper J, Caballero J, Hajnal J V., Price A, Rueckert D. A Deep Cascade of Convolutional Neural Networks for Dynamic MR Image Reconstruction. IEEE Trans. Med. Imaging [Internet] 2017:1–1. doi: 10.1007/978-3-319-59050-9_51.

5. Zhu B, Liu JZ, Cauley SF, Rosen BR, Rosen MS. Image reconstruction by domain-transform manifold learning. Nature [Internet] 2018;555:487–492. doi: 10.1038/nature25988.

6. Liu F, Samsonov A. Data-Cycle-Consistent Adversarial Networks for High-Quality Reconstruction of Undersampled MRI Data. In: the ISMRM Machine Learning Workshop. ; 2018.

7. Zhang T, Pauly JM, Levesque IR. Accelerating parameter mapping with a locally low rank constraint. Magn. Reson. Med. [Internet] 2015;73:655–661. doi: 10.1002/mrm.25161.

Figures

Figure 1: Illustration of the MANTIS framework, which features two loss components. The first loss term (pixel-wise loss 1) ensures that the undersampled images produce parameter maps that match the reference parameter maps. The second loss term (pixel-wise loss 2) ensures that the parameter maps reconstructed by the CNN mapping produce synthetic undersampled data matching the acquired k-space measurements. The MANTIS framework thus incorporates both the data-driven deep learning component and the signal model from basic MR physics.

Figure 2: Schematic demonstration of the undersampling patterns used in the study. (a) Examples of the 1D variable-density random undersampling masks applied to the 1st and 2nd echo images. (b) Example sets of ky-t 1D variable-density random undersampling masks for the eight echo times. The undersampling mask varied along the echo dimension, and the mask set was randomized at each iteration during network training to augment the training data.
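As an illustration of the sampling scheme in Fig. 2, the sketch below generates ky-t 1D variable-density random Cartesian masks that always retain 5% of the central ky lines and re-draw the remaining lines for each echo (and, for sampling augmentation, at each training iteration). The specific density weighting is an illustrative assumption, not the exact pattern used in this work.

```python
# Hedged sketch of a ky-t 1D variable-density random Cartesian mask generator.
import numpy as np

def variable_density_masks(n_ky, n_echoes, R=5, center_frac=0.05, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    masks = np.zeros((n_echoes, n_ky), dtype=bool)
    n_keep = n_ky // R                         # total ky lines kept per echo
    n_center = max(int(center_frac * n_ky), 1) # fully sampled central lines
    c0 = n_ky // 2 - n_center // 2
    for e in range(n_echoes):
        masks[e, c0:c0 + n_center] = True
        # variable-density weighting: higher probability near the k-space center
        ky = np.arange(n_ky)
        w = 1.0 / (1.0 + np.abs(ky - n_ky / 2) ** 2)
        w[masks[e]] = 0.0
        w /= w.sum()
        extra = rng.choice(n_ky, size=max(n_keep - n_center, 0),
                           replace=False, p=w)
        masks[e, extra] = True
    return masks  # (n_echoes, n_ky); broadcast along kx when applied
```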

Figure 3: Representative examples of T2 maps estimated from the different reconstruction methods at R=5 (top row) and R=8 (bottom row). MANTIS generated nearly artifact-free T2 maps with well-preserved sharpness and texture comparable to the reference T2 maps. The other methods generated suboptimal T2 maps with either reduced image sharpness or residual artifacts, as indicated by the white arrows.

Figure 4: Comparison of T2 (top row) and I0 (bottom row) maps reconstructed using MANTIS with different undersampling masks in the healthy volunteer. Although the image artifacts arising from the different undersampling masks differ dramatically, MANTIS was able to remove these heterogeneous artifacts and provide nearly artifact-free T2 and I0 maps regardless of the mask used.

Figure 5: Two representative examples demonstrating the performance of MANTIS for cartilage and meniscus lesion detection. (a) Results from a 67-year-old male patient with knee osteoarthritis and superficial cartilage degeneration on the medial femoral condyle and medial tibial plateau. (b) Results from a 59-year-old male patient with a tear of the posterior horn of the medial meniscus. MANTIS reconstructed high-quality T2 maps that allowed clear identification of the cartilage and meniscus lesions at both R=5 and R=8.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)