Cardiac MR imaging plays an important role in clinical diagnosis, but its long scan time limits wide application. To accelerate data acquisition, deep learning based methods have been applied to reconstruct undersampled images effectively. However, current deep convolutional neural network (CNN) based methods do not make full use of the hierarchical features from different convolutional layers, which limits their performance. In this work, we propose a cascaded residual dense network (C-RDN) for dynamic MR image reconstruction, in which both local and global features are fully exploited. On in vivo datasets, the proposed C-RDN achieves the best performance compared with iterative optimization methods and a state-of-the-art CNN method.
Theory and method
In this work, we propose a cascaded residual dense network for cardiac MR image reconstruction (Figure 1). C-RDN consists of a cascade of RDNs, each containing five major components: shallow feature extraction, residual dense blocks (RDBs), global feature fusion, global residual learning, and data consistency (DC). First, undersampled cardiac MR images are fed into the network for shallow feature extraction. Second, the shallow features pass through D RDBs for local feature fusion. The details of one RDB are shown in Figure 2. An RDB comprises dense connections, local feature fusion, and a local residual connection. Dense connections, i.e., direct connections from each convolutional layer to all subsequent layers, enhance the transmission of local features. All local features are concatenated and passed through a 1×1 convolutional layer to achieve local feature fusion. A residual connection is introduced in each RDB to further improve information propagation. Third, the residual dense features from the D RDBs are merged via global feature fusion (concatenation followed by a 1×1 convolution). The combination of local and global feature fusion allows the network to make full use of features at different levels. Fourth, a global residual connection combines the shallow features with the globally fused features. Finally, a data consistency layer [1] corrects the reconstructed images using the acquired k-space samples.
Experiment
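As an illustration of the retrospective undersampling and the hard data consistency correction described above, the following numpy sketch builds a 1D random Cartesian mask, undersamples k-space, and then replaces the k-space values of a candidate reconstruction at acquired locations with the measured samples. The image size, acceleration factor, and fully sampled central band are illustrative assumptions, and the zero-filled image stands in for a CNN output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-coil complex "fully sampled" image (illustrative size).
nx, ny = 64, 64
image = rng.standard_normal((nx, ny)) + 1j * rng.standard_normal((nx, ny))
k_full = np.fft.fft2(image)

# 1D random Cartesian mask: sample a subset of phase-encoding lines,
# keeping a fully sampled central band (parameters are assumptions,
# not the exact scheme of [3]).
acceleration = 4
n_low = 8
n_sampled = ny // acceleration
center = np.arange(ny // 2 - n_low // 2, ny // 2 + n_low // 2)
others = rng.choice(np.setdiff1d(np.arange(ny), center),
                    n_sampled - n_low, replace=False)
lines = np.union1d(center, others)
mask = np.zeros((nx, ny), dtype=bool)
mask[:, lines] = True  # same lines for every readout point -> 1D Cartesian

# Retrospective undersampling and zero-filled reconstruction.
k_under = k_full * mask
zero_filled = np.fft.ifft2(k_under)

# Hard data consistency (noiseless case of [1]): overwrite the candidate
# reconstruction's k-space at acquired locations with the measured samples.
cnn_output = zero_filled            # stand-in for a CNN reconstruction
k_cnn = np.fft.fft2(cnn_output)
k_dc = np.where(mask, k_under, k_cnn)
reconstruction = np.fft.ifft2(k_dc)
```

After this correction, the reconstruction is guaranteed to agree with the measurements at every sampled k-space location, regardless of what the network predicts there.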
We collected 101 fully sampled cardiac MR datasets on a 3T scanner (SIEMENS MAGNETOM Trio) with a T1-weighted FLASH sequence. Multi-coil data were combined into a single channel and then retrospectively undersampled using 1D random Cartesian masks [3]. After normalization and extraction, we obtained 17502 cardiac images, of which 15000, 2000, and 502 were used for training, validation, and testing, respectively. The models were implemented on an Ubuntu 16.04 LTS (64-bit) operating system with an Intel Xeon E5-2640 CPU and an NVIDIA TITAN Xp GPU (12 GB memory), using the open-source TensorFlow framework.
References
[1]. Schlemper, Jo, Jose Caballero, Joseph V. Hajnal, Anthony N. Price, and Daniel Rueckert. "A deep cascade of convolutional neural networks for dynamic MR image reconstruction." IEEE Transactions on Medical Imaging (2017). DOI: 10.1109/TMI.2017.2760978.
[2]. Qin, Chen, Joseph V. Hajnal, Daniel Rueckert, Jo Schlemper, Jose Caballero, and Anthony N. Price. "Convolutional recurrent neural networks for dynamic MR image reconstruction." IEEE Transactions on Medical Imaging (2018).
[3]. Jung, Hong, Jong Chul Ye, and Eung Yeop Kim. "Improved k–t BLAST and k–t SENSE using FOCUSS." Physics in Medicine & Biology 52, no. 11 (2007): 3201.
[4]. Lingala, Sajan Goud, Yue Hu, Edward DiBella, and Mathews Jacob. "Accelerated dynamic MRI exploiting sparsity and low-rank structure: k-t SLR." IEEE Transactions on Medical Imaging 30, no. 5 (2011): 1042-1054.
[5]. Otazo, Ricardo, Emmanuel Candès, and Daniel K. Sodickson. "Low‐rank plus sparse matrix decomposition for accelerated dynamic MRI with separation of background and dynamic components." Magnetic Resonance in Medicine 73, no. 3 (2015): 1125-1136.
[6]. Zhang, Yulun, Yapeng Tian, Yu Kong, Bineng Zhong, and Yun Fu. "Residual dense network for image super-resolution." In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2018.