Deep learning (DL)-based image reconstruction methods have achieved promising results across multiple MRI applications. However, most approaches require large-scale, fully sampled ground-truth data for supervised training, and acquiring fully sampled data is often difficult or impossible, particularly for dynamic datasets. We present a DL framework for MRI reconstruction that does not require fully sampled data. We test the proposed method in two scenarios: retrospectively undersampled cine and prospectively undersampled abdominal DCE. Our unsupervised method produces reconstructions faster than compressed sensing while remaining non-inferior in quality. The proposed method can enable accelerated imaging and accurate reconstruction in applications where fully sampled data are unavailable.
Figure 1. (a) Overview of the framework in a supervised setting, using a conditional GAN, when fully sampled datasets are available.
(b) Overview of our proposed framework in the unsupervised setting. A sensing matrix composed of coil sensitivity maps, an FFT, and a randomized undersampling mask is applied to the generated image to simulate the imaging process. The discriminator takes simulated and observed measurements as inputs and tries to differentiate between them. The generator's loss combines the discriminator's feedback with penalties that reduce spatial and temporal variation.
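As a rough illustration of the measurement simulation and losses described in (b), the sketch below (PyTorch) applies coil sensitivity maps, an FFT, and a random undersampling mask to a generated image, and forms an adversarial generator loss with an added variation penalty. The function names, tensor layout, weight lam_tv, the hypothetical disc module, and the binary cross-entropy form of the adversarial loss are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def sense_forward(image, smaps, mask):
    """Simulate the imaging process: coil sensitivity maps -> 2D FFT -> undersampling mask.
    image: complex (batch, 1, y, x); smaps: complex (batch, coils, y, x); mask: 0/1 (batch, 1, y, x).
    (The temporal dimension is omitted here for brevity.)"""
    coil_images = smaps * image                          # project the image onto each coil
    kspace = torch.fft.fft2(coil_images, norm="ortho")   # Fourier encoding
    return mask * kspace                                 # retain only the sampled k-space locations

def variation_penalty(x):
    """Mean absolute finite differences over the non-batch, non-channel axes (a TV-like penalty)."""
    return sum(torch.diff(x, dim=d).abs().mean() for d in range(2, x.ndim))

def generator_loss(disc, simulated_meas, generated_image, lam_tv=1e-3):
    """Adversarial term (fool the discriminator) plus a variation penalty on the generated image.
    disc is a hypothetical module assumed to accept the (suitably re-formatted) measurements."""
    logits = disc(simulated_meas)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return adv + lam_tv * variation_penalty(generated_image.abs())

def discriminator_loss(disc, observed_meas, simulated_meas):
    """Train the discriminator to tell observed measurements from simulated ones."""
    real_logits = disc(observed_meas)
    fake_logits = disc(simulated_meas.detach())
    return (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
            + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
```

Because the discriminator only ever sees measurements (observed or simulated), no fully sampled reference image enters the training loop, which is what allows the generator to be trained without ground truth.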
Figure 2. Network architectures. All convolutional layers are 3D and operate on 2D-plus-time volumes.
(a) The generator architecture, an unrolled network based on the Iterative Shrinkage-Thresholding Algorithm that includes data-consistency steps (a sketch of one unrolled iteration follows this caption). The generator is trained in both the k-space and image domains.
(b) The discriminator architecture, which uses leaky ReLU activations so that small gradients from negative activations still backpropagate into the generator. The discriminator is trained only in the image domain.
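To make the unrolled structure in (a) and the activation choice in (b) concrete, here is a minimal PyTorch sketch of one ISTA-style iteration with a data-consistency step, alongside a small image-domain discriminator built from leaky ReLU layers. The layer widths, kernel sizes, fixed step size eta, and the residual CNN used as the learned shrinkage operator are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class UnrolledISTAStep(nn.Module):
    """One generator iteration: a data-consistency gradient step followed by a learned shrinkage CNN."""
    def __init__(self, features=64, eta=0.5):
        super().__init__()
        self.eta = eta                                   # step size for the data-consistency update
        self.shrink = nn.Sequential(                     # learned proximal/shrinkage operator (3D convs)
            nn.Conv3d(2, features, 3, padding=1), nn.ReLU(),
            nn.Conv3d(features, 2, 3, padding=1),
        )

    def forward(self, x, y, forward_op, adjoint_op):
        # Data consistency: pull the current estimate x toward the acquired k-space samples y.
        x = x - self.eta * adjoint_op(forward_op(x) - y)
        # Learned shrinkage on real/imaginary parts stacked as channels over (t, y, x).
        stacked = torch.stack([x.real, x.imag], dim=1)   # (batch, 2, t, y, x)
        stacked = stacked + self.shrink(stacked)         # residual refinement
        return torch.complex(stacked[:, 0], stacked[:, 1])

class Discriminator(nn.Module):
    """Small image-domain discriminator; LeakyReLU keeps a small gradient for negative activations."""
    def __init__(self, in_channels=2, features=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, features, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(features, 2 * features, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(2 * features, 1),                  # single real/fake logit
        )

    def forward(self, image):
        return self.net(image)
```

In a full unrolled generator, several such steps would be cascaded with shared or per-iteration weights, so the network alternates between enforcing agreement with the acquired k-space samples and applying the learned image-domain refinement.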
Figure 5. (a) Representative DCE images. The leftmost column shows the input zero-filled reconstruction, the middle column our generator's reconstruction, and the rightmost column the CS reconstruction. The generator improves on the input by recovering sharpness and restoring structure.
(b) Comparison of inference time per three-dimensional DCE volume (2D + time) between low-rank CS and our unsupervised GAN. Our method is approximately 2.98 times faster.