Recently, convolutional neural network (CNN) based fast cardiac image reconstruction techniques have shown the potential to produce rapid, high-quality reconstructions from under-sampled data. However, the relationship between the k-space sampling strategy, image content, training process, and reconstruction performance has not been extensively studied. To address this, our study trained separate CNN-based cardiac image reconstruction models for different image content and various sampling patterns. We showed that better reconstruction results were achieved when training on mixed image content and concentrating more sampling energy at the center of k-space. Radial acquisition showed the lowest RMSE, suggesting that CNN performance may improve further with non-Cartesian acquisition.
Using deep convolutional neural networks (CNNs) for cardiac magnetic resonance (CMR) image reconstruction not only boosts reconstruction speed and simplifies parameter tuning, but also maintains high image quality1,2,3. However, the relationship between sampling strategy, training methods, and reconstruction performance has not been fully explored. In this work, we trained deep CNN models on cardiac DICOM images retrospectively undersampled with a variety of sampling patterns. Our goal was to evaluate reconstruction quality as a function of the training image content and the sampling strategy. This abstract aims to answer the following questions to provide practical guidance on sampling strategies and network training for clinical CMR image acquisition:
Question 1: For a specific sampling mask following a particular k-space energy distribution, is reconstruction performance dependent on the image content of the training data?
Question 2: For a particular k-space energy distribution pattern (e.g., Gaussian), is the reconstruction dependent on the specific choice of sampling mask, the number of fully sampled center lines, and the width of the sampling energy distribution?
Question 3: How do different k-space sampling patterns (Gaussian, uniform, radial) affect image reconstruction performance?
All training and test data were from the Kaggle Second Annual Data Science Bowl4, and all experiments were conducted on an NVIDIA Tesla P100 GPU. The CNN models1 used to answer each question are shown in Fig 1. The L1 norm was used as the loss function5,6. All reconstructions used an acceleration factor of 4 (40/160 lines), with a training set of 2400 images from 80 subjects and a testing set of 600 images from 20 different subjects. The SSIM and RMSE of each reconstruction technique were computed relative to the fully sampled "gold-standard" images and compared using a repeated measures mixed model, with sub-group comparisons using a Tukey correction for multiple comparisons in SAS 9.4.
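As a minimal sketch of the retrospective undersampling and RMSE evaluation described above (assuming a 1D Cartesian line mask applied to the 2D FFT of a magnitude image; the random image and mask here are placeholders, not the study data):

```python
import numpy as np

def undersample(image, mask):
    """Zero-filled reconstruction after keeping only the masked
    phase-encode lines of the image's centered k-space."""
    k = np.fft.fftshift(np.fft.fft2(image))    # centered k-space
    k_under = k * mask[:, None]                # retain selected rows
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k_under)))

def rmse(ref, est):
    """Root-mean-square error against the fully sampled reference."""
    return np.sqrt(np.mean((ref - est) ** 2))

# Toy example: 160 phase-encode lines, keep 40 (acceleration factor 4).
rng = np.random.default_rng(0)
image = rng.random((160, 160))
mask = np.zeros(160, dtype=bool)
mask[rng.choice(160, size=40, replace=False)] = True
zero_filled = undersample(image, mask)
error = rmse(image, zero_filled)
```

In the study, the zero-filled input would be passed to the CNN, and RMSE/SSIM would be computed on the network output rather than on the zero-filled image.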
Question 1 We trained models with short-axis (SA), 2-chamber (2-CH), or 4-chamber (4-CH) images separately, and with a combined dataset (COMB) of 800 randomly selected images from each of the SA, 2-CH, and 4-CH views. Each training and test procedure was conducted with a fixed Gaussian-distributed sampling mask (Fig 1a).
Question 2 Based on the results from Question 1, the COMB dataset was used to train models with 4 different Gaussian k-space energy distributions, varying the number of fully sampled center lines (8 or 16) and the standard deviation (SD) of the Gaussian distribution (16 or 80). Twenty different sampling masks following these energy densities were generated. CNNs were trained for each individual sampling mask, and for a random mixture of the 20 masks for each Gaussian k-space energy distribution pattern (Fig 1b).
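A minimal sketch of how such a Gaussian-density mask could be drawn (assuming 160 phase-encode lines, 40 kept lines for acceleration factor 4, a fully sampled center block, and the SD values above; function and parameter names are illustrative):

```python
import numpy as np

def gaussian_mask(n_lines=160, n_keep=40, n_center=8, sd=16, seed=0):
    """Sample a Cartesian line mask whose density follows a Gaussian
    centered on k-space, with a fully sampled center block."""
    rng = np.random.default_rng(seed)
    mask = np.zeros(n_lines, dtype=bool)
    center = n_lines // 2
    # fully sampled center lines
    mask[center - n_center // 2 : center + n_center // 2] = True
    # Gaussian sampling probability for the remaining lines
    lines = np.arange(n_lines)
    p = np.exp(-0.5 * ((lines - center) / sd) ** 2)
    p[mask] = 0.0           # center lines are already taken
    p /= p.sum()
    extra = rng.choice(n_lines, size=n_keep - int(mask.sum()),
                       replace=False, p=p)
    mask[extra] = True
    return mask
```

Drawing 20 masks with different seeds from the same (n_center, sd) pair gives one "energy distribution" family; a larger SD spreads energy toward the k-space periphery.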
Question 3 In addition to the Gaussian sampling distributions from Question 2, CNNs were trained for two types of uniform-density (UD) sampling patterns (8 or 16 fully sampled center lines) and for a golden-angle radial pattern with the same number of k-space lines. Radial data fidelity was enforced on the Cartesian k-space grid (Fig 1c). Based on the results from Question 2, the Gaussian densities were trained and tested using the random mixture of sampling masks.
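The golden-angle spoke ordering mentioned above can be sketched as follows (a standard construction, not code from the study: successive spokes are rotated by 180°/φ ≈ 111.25°, which gives near-uniform angular coverage for any number of spokes):

```python
import math

def golden_angle_spokes(n_spokes=40):
    """Spoke angles (degrees, modulo 180) for a golden-angle radial
    acquisition with n_spokes radial lines."""
    ga = 180.0 / ((1.0 + math.sqrt(5.0)) / 2.0)  # ~111.246 deg
    return [(i * ga) % 180.0 for i in range(n_spokes)]

angles = golden_angle_spokes(40)
```

With 40 spokes, this matches the line budget of the Cartesian masks at acceleration factor 4; in the study the radial samples were mapped onto the Cartesian grid before enforcing data fidelity.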
1. Schlemper J, Caballero J, Hajnal JV, Price AN, Rueckert D. A deep cascade of convolutional neural networks for dynamic MR image reconstruction. IEEE Transactions on Medical Imaging. 2018 Feb;37(2):491-503.
2. Zhou Z, Han F, Ghodrati V, Gao Y, Yang Y, Hu P. Parallel Imaging and Convolutional Neural Network Combined Fast Image Reconstruction for Low Latency Accelerated 2D Real-Time Imaging. Proc Intl Soc Mag Reson Med 2018;26:3373.
3. Han F, Zhou Z, Ghodrati V, Gao Y, Yang Y, Hu P. Single Breath-held, ECG-Free Cardiac CINE MRI using Parallel Imaging and Deep Learning Combined Image Reconstruction. Proc Intl Soc Mag Reson Med 2018;26:1048.
4. Kaggle Second Annual Data Science Bowl. https://www.kaggle.com/c/second-annual-data-science-bowl
5. Zhao H, Gallo O, Frosio I, Kautz J. Loss functions for image restoration with neural networks. IEEE Transactions on Computational Imaging. 2017 Mar;3(1):47-57.
6. Hammernik K, Knoll F, Sodickson D, Pock T. L2 or not L2: Impact of Loss Function Design for Deep Learning MRI Reconstruction. Proc Intl Soc Mag Reson Med 2017;25:0687.