Free-breathing 3D abdominal imaging is challenging because respiratory motion produces image blurring and ghosting artifacts. Our purpose is to employ a novel deep learning method based on a conditional generative adversarial network (GAN) to reconstruct undersampled radial 3D abdominal MRI. The network combines a generator G, consisting of 8 convolutional layers and 8 corresponding deconvolutional layers, with a discriminator D formed by 11 convolutional layers. The GAN-reconstructed images achieve quality similar to the ground-truth images, and the average reconstruction time is negligible. This method can therefore be adopted for a wide range of clinical applications.
Introduction
Free-breathing three-dimensional (3D) abdominal imaging is a challenging task for MRI, as respiratory motion severely degrades image quality. One of the most promising self-navigation techniques is the 3D golden-angle radial stack-of-stars (SOS) sequence [1], which offers advantages in speed and resolution while allowing free breathing. However, streaking artifacts are still clearly visible in the reconstructed images when undersampling is applied. This work employs a conditional generative adversarial network (GAN) to address this problem.
Methods
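As a concrete illustration (not taken from this work's code), the in-plane spoke angles of a golden-angle radial acquisition can be generated as below; retrospective undersampling then simply keeps the first spokes. The function name and spoke counts are only illustrative placeholders:

```python
import numpy as np

# Golden angle in degrees: 180 divided by the golden ratio, approx. 111.246
GOLDEN_ANGLE = 180.0 / ((1.0 + np.sqrt(5.0)) / 2.0)

def spoke_angles(n_spokes):
    """In-plane angle (degrees, modulo 180) of each radial spoke
    under golden-angle ordering."""
    return (np.arange(n_spokes) * GOLDEN_ANGLE) % 180.0

full = spoke_angles(451)    # fully sampled reference acquisition
under = spoke_angles(90)    # first 90 spokes: retrospective undersampling
```

Because consecutive golden-angle spokes are separated by an irrational fraction of 180°, any leading subset of spokes covers k-space nearly uniformly, which is what makes this retrospective undersampling strategy possible.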
Thirty healthy volunteers participated in our experiment. To build the dataset, reference and artifact-affected images were reconstructed from the fully sampled 451 golden-angle spokes and from the first 90 golden-angle spokes (an acceleration rate of 6.98), respectively. Figure 1 shows the overall framework of the GAN-based reconstruction architecture. The GAN structure is adopted from [2]. The generator G consists of 8 convolutional layers and 8 corresponding deconvolutional layers, and a pre-trained VGG network is used to compute the perceptual loss. The discriminator D is formed by 11 convolutional layers. The network loss function combines an adversarial loss with a content loss consisting of pixel-wise MSE and perceptual loss. In the training step, we fed the generator with patches from the artifact-affected images Xu; the network G then output the corresponding reconstructed patches Y such that the discriminator D could not distinguish them from the ground-truth Xt. In the testing step, only the generator was applied to obtain streaking-artifact-free images. Training required approximately 3 days on an NVIDIA GTX 1080 GPU, with settings kept constant at 2000 epochs and a batch size of 100. The model was implemented in TensorLayer on top of TensorFlow.
Results
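Reconstruction quality is quantified with SSIM (Table 1). As a rough illustration of the metric, a single-window SSIM over the whole image can be computed as follows; this is a simplification of the windowed SSIM typically reported, and the stabilizing constants are the standard textbook choices, not values stated in this work:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM between images x and y (arrays scaled to
    [0, data_range]). A simplification of the usual windowed SSIM."""
    c1 = (0.01 * data_range) ** 2   # stabilizes the luminance term
    c2 = (0.03 * data_range) ** 2   # stabilizes the contrast term
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images give an SSIM of 1, and the score decreases toward 0 as structural agreement degrades, which is why a higher SSIM for the GAN output than for the artifact-affected input indicates successful de-streaking.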
As shown in Figure 2, the strong streaking artifacts present in the zero-filled reconstructions are substantially reduced by the GAN-based reconstruction method. From Table 1, we observe that the GAN-based method yields significantly higher SSIM than the input images. Additionally, the average reconstruction time is below 1 second per image.
Discussion
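The de-streaking mapping is learned by minimizing the combined generator objective described in Methods: an adversarial loss plus a content loss of pixel-wise MSE and VGG-based perceptual loss. A minimal numpy sketch is given below; the weights w_mse, w_perc, and w_adv are illustrative placeholders, not values reported in this work:

```python
import numpy as np

def generator_loss(y, y_ref, feat, feat_ref, d_out,
                   w_mse=1.0, w_perc=1e-4, w_adv=1e-3):
    """Combined generator loss: pixel-wise MSE + perceptual (VGG-feature)
    MSE + a non-saturating adversarial term -log D(G(x)).
    d_out holds discriminator probabilities in (0, 1]."""
    mse = np.mean((y - y_ref) ** 2)               # pixel-wise content loss
    perceptual = np.mean((feat - feat_ref) ** 2)  # feature-space content loss
    adversarial = -np.mean(np.log(d_out + 1e-12)) # fool the discriminator
    return w_mse * mse + w_perc * perceptual + w_adv * adversarial
```

The content terms anchor the output to the reference anatomy, while the adversarial term pushes the generator toward outputs the discriminator cannot tell apart from ground truth.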
Visual inspection indicates that there is nearly no difference between the GAN-based reconstructions and the ground-truth images. The nonlinear relationship between the two image types (streaking-contaminated and streaking-free) is effectively captured by the large number of trainable parameters in the GAN.
Conclusion
In conclusion, our preliminary results demonstrate the feasibility of this method for removing streaking artifacts from undersampled free-breathing 3D abdominal MRI with negligible reconstruction time.
References
[1] Feng L, Axel L, Chandarana H, et al. XD-GRASP: golden-angle radial MRI with reconstruction of extra motion-state dimensions using compressed sensing. Magn Reson Med. 2016;75:775-788.
[2] Yang G, Yu S, Dong H, et al. DAGAN: deep de-aliasing generative adversarial networks for fast compressed sensing MRI reconstruction. IEEE Trans Med Imaging. 2018;37(6):1310-1321.