We propose a new deep neural network (Y-net) that can utilize images acquired with a different MR contrast for the reconstruction of down-sampled images. The k-space center of down-sampled T2-weighted images and the k-space edge of fully sampled T1-weighted images were combined through one Y-net, and the desired high-resolution T2-weighted images were generated by another Y-net. The proposed network not only improved spatial resolution but also suppressed the ringing artifacts caused by down-sampling to the k-space center. The developed technique could potentially accelerate multi-contrast MR imaging in routine clinical studies.
Introduction
There have been many attempts to reconstruct down-sampled MR images to reduce MRI scan time, such as compressed sensing and parallel imaging1,2. Recently, deep learning has been introduced as a powerful tool for image reconstruction. Among the many deep learning networks, the U-net convolutional network combined with a residual learning scheme is regarded as a state-of-the-art technique for image processing because of its fast convergence3,4. However, it is still difficult to reduce MR scan time dramatically using the convolutional network, because the reconstructed image becomes blurred or visually different from the fully sampled image, an effect that is more severe at higher acceleration factors. Normally, images with two or more different contrasts are acquired in routine MRI examinations for accurate diagnosis. It would therefore be beneficial to utilize a high-resolution image acquired with another contrast for the reconstruction of a down-sampled MR image. For instance, a down-sampled T2-weighted image can be reconstructed better using the information in a high-resolution T1-weighted image. In this study, we introduce a new deep residual network, Y-net, designed to take information from another contrast image as an additional input for the reconstruction of a down-sampled MR image. The performance of the proposed network was verified using T1- and T2-weighted images. In addition, we compared the reconstruction quality of Y-net with that of a conventional U-net through quantitative analysis.
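To illustrate the dual-input idea, the following is a minimal, hypothetical PyTorch sketch of a Y-shaped network: two convolutional encoder branches, one per contrast, merged by concatenation, with a residual connection from the down-sampled T2-weighted input. The layer widths, depth, and the class name YNetSketch are illustrative assumptions only and do not reproduce the exact architecture used in this work.

```python
# Hypothetical sketch of a "Y-shaped" dual-input residual network (not the
# authors' exact architecture): two encoder branches, one for each contrast,
# merged by concatenation and followed by a residual output.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, as in a basic U-net stage.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class YNetSketch(nn.Module):
    """Two image inputs (e.g. a down-sampled T2-weighted image and a
    fully sampled T1-weighted image), one output image; the channel
    count and depth are illustrative only."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc_t2 = conv_block(1, ch)     # branch for the down-sampled T2 image
        self.enc_t1 = conv_block(1, ch)     # branch for the high-resolution T1 image
        self.fuse = conv_block(2 * ch, ch)  # merge the two branches
        self.out = nn.Conv2d(ch, 1, kernel_size=1)

    def forward(self, t2_low, t1_full):
        feats = torch.cat([self.enc_t2(t2_low), self.enc_t1(t1_full)], dim=1)
        # Residual learning: the network predicts a correction to the T2 input.
        return t2_low + self.out(self.fuse(feats))

# Example forward pass with random tensors shaped (batch, channel, H, W).
net = YNetSketch()
y = net(torch.randn(1, 1, 256, 256), torch.randn(1, 1, 256, 256))
print(y.shape)  # torch.Size([1, 1, 256, 256])
```

In the pipeline described in this work, one such network combines the two k-space portions and a second network generates the final high-resolution T2-weighted image.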
Results and Discussion
As shown in Figs. 3 and 4, the detailed tissue information in the reconstructed image from Y-net was depicted better than that from U-net. Furthermore, Gibbs artifacts caused by down-sampling were suppressed better by Y-net. The NMSE of the reconstructed images from U-net and Y-net was 0.054±0.027 and 0.029±0.001, respectively. In addition, the SSIM of the reconstructed images from U-net and Y-net was 0.747±0.081 and 0.804±0.045, respectively. These quantitative analyses indicate that the reconstructed images from Y-net were closer to the fully sampled images than those from U-net, which was consistent with the visual observation. Combining a k-space center portion of one MR contrast with a k-space edge portion of another MR contrast would generate ringing artifacts because of the discontinuity between the two k-space portions. Moreover, motion can occur between the two scans of different contrasts. We used two Y-nets to resolve these issues: one for combining the two k-space portions and the other for generating the desired output. Deep learning could overcome the k-space discontinuity problem easily, as demonstrated in this study.
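For reference, the NMSE and SSIM values reported above can be computed as in the sketch below. This assumes magnitude images and the scikit-image SSIM implementation; the abstract does not state which implementation was used, so treat the details as illustrative.

```python
# Sketch of the quantitative comparison: NMSE and SSIM between a
# reconstructed image and the fully sampled reference image.
import numpy as np
from skimage.metrics import structural_similarity

def nmse(recon, reference):
    # Normalized mean squared error: ||recon - ref||^2 / ||ref||^2.
    return np.sum((recon - reference) ** 2) / np.sum(reference ** 2)

def evaluate(recon, reference):
    data_range = reference.max() - reference.min()
    return nmse(recon, reference), structural_similarity(recon, reference, data_range=data_range)

# Example with synthetic magnitude images (stand-ins for real reconstructions).
ref = np.random.rand(256, 256).astype(np.float32)
rec = ref + 0.05 * np.random.rand(256, 256).astype(np.float32)
print(evaluate(rec, ref))
```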
References
1. Lustig M, Donoho D, Pauly JM. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine 2007;58(6):1182-1195.
2. Griswold MA, Jakob PM, Heidemann RM, Nittka M, Jellus V, Wang J, Kiefer B, Haase A. Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magnetic Resonance in Medicine 2002;47(6):1202-1210.
3. Lee D, Yoo J, Ye JC. Deep residual learning for compressed sensing MRI. 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017); 2017.
4. Han YS, Yoo J, Ye JC. Deep learning with domain adaptation for accelerated projection reconstruction MR. arXiv preprint arXiv:1703.01135; 2017.
5. Zhang T, Pauly JM, Vasanawala SS, Lustig M. Coil compression for accelerated imaging with Cartesian sampling. Magnetic Resonance in Medicine 2013;69(2):571-582.