Multi-contrast, isotropic, high-resolution intracranial vessel wall imaging (VWI) enables direct detection and follow-up surveillance of intracranial vessel wall pathologies. The overall scan time can be substantially reduced by appropriate random undersampling; however, the image reconstruction process can be time consuming, which hinders clinical deployment of accelerated intracranial VWI. In this study, a generative adversarial network (GAN)-based compressed sensing method was developed for multi-contrast intracranial image reconstruction. Preliminary results demonstrate comparable or improved image quality for vessel wall delineation relative to the conventional reconstruction method, while providing a substantial reduction in reconstruction time.
MR experiments: Multi-contrast intracranial VWI datasets, including proton density (PD)-weighted and pre- and post-contrast T1-weighted 3D volumetric isotropic turbo spin echo acquisition (VISTA) scans, were acquired on a Philips Ingenia 3.0T MR scanner using a 32-channel head coil with FOV = 180×180×45 mm³, 0.5 mm isotropic resolution, NSA = 2, and 4× (PD) / 4.5× (T1) CUSTOM undersampling. Sixteen subjects were scanned after informed consent was obtained. Thirteen cases were used for training and validation, and the remaining three for testing. STEP reconstruction, which takes approximately 20 minutes per volume, was used as the ground truth.
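For orientation, the sketch below generates a generic variable-density Cartesian (ky-kz) undersampling mask at a nominal acceleration factor. It is purely illustrative and is not the CUSTOM / parametric variable-radius scheme of refs. 4-5, whose exact density function is not given here; the matrix size and density parameters are assumptions.

```python
# Illustrative variable-density ky-kz mask; NOT the CUSTOM scheme of refs. 4-5.
import numpy as np

def variable_density_mask(ny, nz, accel=4.0, center_frac=0.04, decay=3.0, seed=0):
    """Fully sample a small central region and sample outer k-space with a
    probability that decays with radius, scaled toward the target acceleration."""
    rng = np.random.default_rng(seed)
    ky, kz = np.meshgrid(np.linspace(-1, 1, ny), np.linspace(-1, 1, nz), indexing="ij")
    r = np.sqrt(ky**2 + kz**2)
    prob = (1.0 - np.clip(r, 0.0, 1.0)) ** decay          # denser near k-space center
    n_center = int((r < center_frac).sum())               # fully sampled center points
    n_target = int(ny * nz / accel)                        # total points to keep
    prob *= max(n_target - n_center, 0) / prob[r >= center_frac].sum()
    mask = rng.random((ny, nz)) < np.clip(prob, 0.0, 1.0)
    mask[r < center_frac] = True
    return mask

# Assumed phase-encode matrix for a 180x180x45 mm FOV at 0.5 mm resolution.
mask = variable_density_mask(ny=360, nz=90, accel=4.0)
print(f"effective acceleration ~ {mask.size / mask.sum():.2f}x")
```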
GANCS multi-contrast image reconstruction: Each undersampled, zero-filled 3D volume has a matrix size of 720×720×160. Slices were randomly shuffled and fed into the network together with their neighboring slices as a 2.5D input. The residual network (ResNet) block [8] was adopted as the basic unit, and 20 ResNet blocks were stacked to build the reconstruction network. Each ResNet block consists of two 2D convolutional layers with a small 3×3 kernel and 64 feature maps, and a rectified linear unit (ReLU) activation layer. The loss function combines equally weighted structural similarity (SSIM) [9] and L1 terms with an adversarial loss [7] to ensure high-quality output that closely approximates the ground truth.
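A minimal TensorFlow 2 / Keras sketch of the generator and content loss described above is shown below, assuming an EDSR-style block (ReLU between the two convolutions, per ref. 8) and three neighboring slices as the 2.5D input channels. The layer arrangement outside the stated 20 blocks, 3×3 kernels, and 64 feature maps, as well as the 2.5D channel count, are assumptions, and the adversarial term from the discriminator is omitted here.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

N_BLOCKS = 20      # 20 ResNet blocks, as stated in the text
N_FEATURES = 64    # 64 feature maps per 3x3 convolution
N_SLICES = 3       # assumed 2.5D input: center slice plus two neighbors as channels

def resnet_block(x):
    """Two 3x3 convolutions with a ReLU in between and a residual (skip) connection."""
    skip = x
    x = layers.Conv2D(N_FEATURES, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(N_FEATURES, 3, padding="same")(x)
    return layers.Add()([x, skip])

def build_generator():
    inp = layers.Input(shape=(None, None, N_SLICES))        # zero-filled 2.5D input
    x = layers.Conv2D(N_FEATURES, 3, padding="same")(inp)   # lift to feature space
    for _ in range(N_BLOCKS):
        x = resnet_block(x)
    out = layers.Conv2D(1, 3, padding="same")(x)             # single reconstructed slice
    return Model(inp, out)

def content_loss(y_true, y_pred, max_val=1.0):
    """Equally weighted SSIM [9] and L1 terms; the adversarial loss [7] is added
    separately during GAN training."""
    ssim_term = 1.0 - tf.reduce_mean(tf.image.ssim(y_true, y_pred, max_val=max_val))
    l1_term = tf.reduce_mean(tf.abs(y_true - y_pred))
    return 0.5 * ssim_term + 0.5 * l1_term
```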
Before training, we preprocessed the images by applying mean normalization. We then trained the model with the Adam optimizer using an initial learning rate of 5e-3. Training was performed in TensorFlow on an NVIDIA Tesla V100 GPU. To evaluate model performance, we calculated PSNR and SSIM between the ground truth and both the zero-filled input and the reconstruction result.
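The following sketch illustrates a training step and the PSNR/SSIM evaluation under these settings, reusing build_generator and content_loss from the sketch above. The exact form of the mean normalization, the adversarial weight, the hypothetical discriminator, and the center-slice indexing are assumptions for illustration, not the authors' code.

```python
import tensorflow as tf

generator = build_generator()                                  # from the sketch above
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-3)       # stated initial rate

def mean_normalize(volume):
    """Assumed preprocessing: zero-mean, unit-scale per volume
    (the text only states 'mean normalization')."""
    volume = tf.cast(volume, tf.float32)
    return (volume - tf.reduce_mean(volume)) / (tf.math.reduce_std(volume) + 1e-8)

@tf.function
def train_step(zero_filled, ground_truth, adv_weight=1e-3):
    with tf.GradientTape() as tape:
        recon = generator(zero_filled, training=True)
        loss = content_loss(ground_truth, recon)
        # The adversarial term would be added here, e.g. (hypothetical discriminator):
        # loss += adv_weight * generator_adversarial_loss(discriminator(recon))
    grads = tape.gradient(loss, generator.trainable_variables)
    optimizer.apply_gradients(zip(grads, generator.trainable_variables))
    return loss

def evaluate(zero_filled, ground_truth, max_val=1.0):
    """PSNR/SSIM of the zero-filled input and the network output vs. ground truth."""
    recon = generator(zero_filled, training=False)
    center = zero_filled[..., 1:2]                 # assumed center slice of the 2.5D stack
    return {
        "psnr_zero_filled": tf.reduce_mean(tf.image.psnr(ground_truth, center, max_val)),
        "psnr_recon": tf.reduce_mean(tf.image.psnr(ground_truth, recon, max_val)),
        "ssim_recon": tf.reduce_mean(tf.image.ssim(ground_truth, recon, max_val)),
    }
```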
1. Qiao Y, Steinman DA, Qin Q, Etesami M, Schär M, Astor BC, Wasserman BA. Intracranial arterial wall imaging using three-dimensional high isotropic resolution black blood MRI at 3.0 Tesla. J Magn Reson Imaging. 2011;34(1):22-30.
2. Qiao Y, Steinman DA, Qin Q, Etesami M, Schär M, Astor BC, Wasserman BA. Intracranial plaque enhancement in patients with cerebrovascular events on high-spatial-resolution MR images. Radiology. 2014;271(2):534-42.
3. Fan Z, Yang Q, Deng Z, Li Y, Bi X, Song S, Li D. Whole-brain intracranial vessel wall imaging at 3 Tesla using cerebrospinal fluid-attenuated T1-weighted 3D turbo spin echo. Magn Reson Med. 2017;77(3):1142-1150.
4. Balu N, Zhou Z, Hatsukami T, Mossa-Basha M, Yuan C. Accelerated Multi-Contrast High Isotropic Resolution 3D Intracranial Vessel Wall MRI Using a Tailored K-Space Undersampling and Partially Parallel Reconstruction Strategy. ISMRM 2017. p2790.
5. Zhou Z, Chen S, Sun A, Li Y, Li R, Yuan C. Optimized Parametric Variable Radius Sampling Scheme for 3D Cartesian K-Space Undersampling Pattern Design. ISMRM 2016. p1818.
6. Zhou Z, Wang J, Balu N, Li R, Yuan C. STEP: Self-supporting tailored k-space estimation for parallel imaging reconstruction. Magn Reson Med. 2016;75(2):750-761.
7. Mardani M, Gong E, Cheng JY, Vasanawala SS, Zaharchuk G, Xing L, Pauly JM. Deep Generative Adversarial Neural Networks for Compressive Sensing (GANCS) MRI. IEEE Trans Med Imaging. 2018 Jul 23. Epub ahead of print.
8. Lim B, Son S, Kim H, Nah S, Lee KM. Enhanced deep residual networks for single image super-resolution. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017, pp. 1132-1140.
9. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP. Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process. 2004;13(4):600-12.
10. Yang Y, Sun J, Li H, Xu Z. Deep ADMM-Net for Compressive Sensing MRI. NIPS 2016. p10-18.
11. Hammernik K, Klatzer T, Kobler E, Recht MP, Sodickson DK, Pock T, Knoll F. Learning a variational network for reconstruction of accelerated MRI data. Magn Reson Med. 2018;79(6):3055-3071.