
Generative Adversarial Networks based Compressed Sensing for Multi-contrast Intracranial Vessel Wall Imaging Acceleration
Niranjan Balu1, Long Wang2, Tao Zhang2, Zechen Zhou3, Enhao Gong2, Kristi Pimentel1, Mahmud Mossa-Basha1, Thomas Hatsukami1,4, and Chun Yuan1,5

1Department of Radiology, University of Washington, Seattle, WA, United States, 2Subtle Medical Inc., Menlo Park, CA, United States, 3Philips Research North America, Cambridge, MA, United States, 4Department of Surgery, Division of Vascular Surgery, University of Washington, Seattle, WA, United States, 5Center for Biomedical Imaging Research, Department of Biomedical Engineering, Tsinghua University, Beijing, China

Synopsis

Multi-contrast isotropic high-resolution intracranial vessel wall imaging (VWI) can enable direct detection and follow-up surveillance of intracranial vessel wall pathologies. The overall scan time can be substantially reduced by proper random undersampling. However, the image reconstruction process can be time-consuming, which hinders clinical deployment of accelerated intracranial VWI. In this study, a generative adversarial networks based compressed sensing method was developed for multi-contrast intracranial image reconstruction. The preliminary results demonstrate comparable or improved image quality for vessel wall delineation compared with the traditional image reconstruction method, while providing a significant reduction in reconstruction time.

Introduction

Stroke is a leading cause of death and morbidity worldwide, and intracranial atherosclerosis is an important contributor to the culprit lesions that cause stroke. High-resolution intracranial vessel wall imaging (VWI) is required for direct evaluation of the intracranial vessel wall1-3. Recently, a multi-contrast 0.5mm isotropic intracranial VWI protocol4 was optimized to identify different plaque components while covering the major intracranial branches; data acquisition was accelerated with CUSTOM5 variable-density random undersampling, enabling a 5- to 6-minute scan for each contrast weighting while reducing motion corruption. However, the singular value decomposition based STEP image reconstruction6 can take hours to reconstruct the multi-contrast volumetric images, posing a significant barrier to clinical translation. To address this issue, a recently developed deep learning and generative adversarial networks based compressed sensing (GANCS) method7 was further optimized and adapted in this study for multi-contrast intracranial image reconstruction, improving image quality and reducing the overall reconstruction time with an efficient GPU implementation.

Methods

MR experiments: Multi-contrast intracranial VWI datasets, including PD, pre- and post-contrast T1 weighted 3D volumetric isotropic turbo spin echo acquisition (VISTA) scans, were acquired on a Philips Ingenia 3.0T MR scanner using a 32-channel head coil with FOV=180x180x45mm3, isotropic 0.5mm resolution, NSA=2, and x4 (PD) / x4.5 (T1) CUSTOM undersampling. Sixteen subjects were scanned after obtaining informed consent. Thirteen cases were used for training and validation, and the remaining three were used for testing. STEP reconstruction, which takes ~20 minutes per volume, was used as the ground truth.
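For illustration only, the sketch below generates a generic variable-density random ky-kz undersampling mask and the corresponding zero-filled reconstruction. It is not the actual CUSTOM pattern5 or the scanner pipeline; the density law, matrix sizes, and function names are assumptions for demonstration.

```python
# Illustrative sketch only: a generic variable-density random ky-kz mask and a
# zero-filled reconstruction. This is NOT the CUSTOM design (ref. 5); the
# density law, sizes, and names below are assumptions for demonstration.
import numpy as np

def variable_density_mask(ny, nz, accel=4.0, power=3.0, seed=0):
    """Random mask whose sampling density decays with distance from the k-space center."""
    rng = np.random.default_rng(seed)
    ky, kz = np.meshgrid(np.linspace(-1, 1, ny), np.linspace(-1, 1, nz), indexing="ij")
    radius = np.sqrt(ky ** 2 + kz ** 2) / np.sqrt(2.0)
    density = (1.0 - radius) ** power                  # denser near the k-space center
    density *= (ny * nz / accel) / density.sum()       # scale to the target acceleration
    return rng.random((ny, nz)) < np.clip(density, 0.0, 1.0)

def zero_filled_recon(kspace, mask):
    """Zero-filled image: inverse FFT of the masked (undersampled) k-space."""
    return np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace * mask))))

mask = variable_density_mask(180, 40, accel=4.0)       # toy ky-kz grid for illustration
print("effective acceleration:", mask.size / mask.sum())
```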

GANCS multi-contrast image reconstruction: Each subsampled, zero-filled 3D volume has a matrix size of 720x720x160. Slices were randomly shuffled, and each slice, together with its neighboring slices, was fed into the network as a 2.5D input. We adopted the recurrent residual network (Resnet) block8 as the basic building unit and used 20 Resnet blocks to build the reconstruction network. Each Resnet block has two 2D convolutional layers with a small 3x3 kernel and 64 feature maps, and a rectified linear unit (ReLU) activation layer. The loss function combines equally weighted structural similarity (SSIM)9 and L1 losses with an adversarial loss7 to ensure a high-quality output that approximates the ground truth.
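As a concrete illustration of this architecture and loss, a minimal TensorFlow/Keras sketch is given below: a generator with 20 residual blocks (two 3x3 convolutions with 64 feature maps and a ReLU per block) operating on a 2.5D input, and a mixed SSIM + L1 + adversarial loss. The exact block internals, the number of neighboring slices, and the adversarial weight are assumptions rather than the authors' implementation.

```python
# Minimal TensorFlow/Keras sketch of the generator and mixed loss described in
# the text. Block internals, the 2.5D depth (n_slices) and the adversarial
# weight are assumptions, not the authors' exact implementation.
import tensorflow as tf
from tensorflow.keras import layers

def resnet_block(x, feats=64):
    """Residual block: two 3x3 convolutions (64 feature maps) with a ReLU, plus a skip connection."""
    y = layers.Conv2D(feats, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(feats, 3, padding="same")(y)
    return layers.Add()([x, y])

def build_generator(n_slices=3, n_blocks=20, feats=64):
    """2.5D input: the target slice stacked with its neighbors along the channel axis."""
    inp = layers.Input(shape=(None, None, n_slices))
    x = layers.Conv2D(feats, 3, padding="same")(inp)
    for _ in range(n_blocks):
        x = resnet_block(x, feats)
    out = layers.Conv2D(1, 3, padding="same")(x)       # reconstructed center slice
    return tf.keras.Model(inp, out)

def mixed_loss(y_true, y_pred, disc_logits_on_fake, adv_weight=0.01):
    """Equally weighted SSIM and L1 terms plus an adversarial (generator) term."""
    ssim = 1.0 - tf.reduce_mean(tf.image.ssim(y_true, y_pred, max_val=1.0))
    l1 = tf.reduce_mean(tf.abs(y_true - y_pred))
    adv = tf.reduce_mean(tf.keras.losses.binary_crossentropy(
        tf.ones_like(disc_logits_on_fake), disc_logits_on_fake, from_logits=True))
    return 0.5 * (ssim + l1) + adv_weight * adv
```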

Before training, the images were preprocessed with mean normalization. The model was then trained with the ADAM optimizer using an initial learning rate of 5e-3. Training was performed in TensorFlow on an NVIDIA Tesla V100 GPU. To evaluate model performance, we calculated PSNR and SSIM between the ground truth and both the zero-filled input and the reconstruction result.
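A training and evaluation loop consistent with these settings (mean normalization, ADAM with an initial learning rate of 5e-3, PSNR/SSIM evaluation) might look like the sketch below. The discriminator update, learning-rate schedule, and data pipeline are omitted, and the helper names are assumptions.

```python
# Training/evaluation sketch for the settings reported above; the discriminator
# update and data pipeline are omitted, and variable names are assumptions.
# Uses build_generator from the previous sketch.
import tensorflow as tf

generator = build_generator()
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-3)    # initial learning rate 5e-3

def mean_normalize(volume):
    """Mean normalization, as one plausible reading of the preprocessing step."""
    return volume / (tf.reduce_mean(volume) + 1e-8)

@tf.function
def train_step(zero_filled_25d, ground_truth):
    # Generator update with the SSIM + L1 part of the loss; the adversarial term
    # would additionally require a discriminator forward pass inside this tape.
    with tf.GradientTape() as tape:
        recon = generator(zero_filled_25d, training=True)
        ssim = 1.0 - tf.reduce_mean(tf.image.ssim(ground_truth, recon, max_val=1.0))
        l1 = tf.reduce_mean(tf.abs(ground_truth - recon))
        loss = 0.5 * (ssim + l1)
    grads = tape.gradient(loss, generator.trainable_variables)
    optimizer.apply_gradients(zip(grads, generator.trainable_variables))
    return loss

def evaluate(recon, ground_truth):
    """PSNR and SSIM of a reconstruction against the STEP ground truth."""
    psnr = tf.reduce_mean(tf.image.psnr(ground_truth, recon, max_val=1.0))
    ssim = tf.reduce_mean(tf.image.ssim(ground_truth, recon, max_val=1.0))
    return psnr, ssim
```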

Results

For the three testing cases, the reconstruction results achieved an approximately 0.04 SSIM improvement and a 2-4 dB increase in PSNR compared with the subsampled input (figure 2a & 2b). In addition, a high pixel-wise correlation coefficient (ρ=0.9911) and a low normalized root mean square error (12.86%) between the ground truth and the GANCS reconstruction were found across the different contrast weightings, indicating reasonable generalization capability of the trained GANCS model. The GANCS reconstructions provided comparable or improved delineation of vessel wall boundaries compared with the STEP method, and the plaques could be clearly identified on all contrast weightings (figures 3 and 4). Furthermore, the average reconstruction time for each 3D volume was reduced from ~20 minutes with STEP to ~30 seconds with GPU-accelerated GANCS (figure 2c).
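For reference, the pixel-wise correlation coefficient and normalized RMSE reported above could be computed along the lines of the sketch below; normalizing the RMSE by the ground-truth dynamic range is an assumption about the reported convention.

```python
# Sketch of the pixel-wise agreement metrics between STEP (ground truth) and
# GANCS volumes; the normalization convention for the RMSE is an assumption.
import numpy as np

def pixelwise_metrics(step_vol, gancs_vol):
    x = step_vol.ravel().astype(np.float64)
    y = gancs_vol.ravel().astype(np.float64)
    rho = np.corrcoef(x, y)[0, 1]                      # Pearson correlation coefficient
    nrmse = np.sqrt(np.mean((x - y) ** 2)) / (x.max() - x.min())
    return rho, nrmse
```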

Discussion and Conclusion

GANCS-based multi-contrast intracranial VWI reduces the image reconstruction time by a factor of ~40 while providing comparable or improved vessel wall delineation and plaque visualization relative to the traditional image reconstruction method. It also has the potential to further improve image quality by incorporating the multi-channel signal encoding model10,11 and to enable whole-brain coverage within a reasonable clinical acquisition and reconstruction time. This highly accelerated image reconstruction method provides a promising solution to promote clinical translation of multi-contrast intracranial VWI for stroke evaluation.

Acknowledgements

This study was supported by a grant from the National Institutes of Health (5R01NS092207).

References

1. Qiao Y, Steinman DA, Qin Q, Etesami M, Schär M, Astor BC, Wasserman BA. Intracranial arterial wall imaging using three-dimensional high isotropic resolution black blood MRI at 3.0 Tesla. J Magn Reson Imaging. 2011;34(1):22-30.

2. Qiao Y, Steinman DA, Qin Q, Etesami M, Schär M, Astor BC, Wasserman BA. Intracranial plaque enhancement in patients with cerebrovascular events on high-spatial-resolution MR images. Radiology. 2014;271(2):534-42.

3. Fan Z, Yang Q, Deng Z, Li Y, Bi X, Song S, Li D. Whole-brain intracranial vessel wall imaging at 3 Tesla using cerebrospinal fluid-attenuated T1-weighted 3D turbo spin echo. Magn Reson Med. 2017;77(3):1142-1150.

4. Balu N, Zhou Z, Hatsukami T, Mossa-Basha M, Yuan C. Accelerated Multi-Contrast High Isotropic Resolution 3D Intracranial Vessel Wall MRI Using a Tailored K-Space Undersampling and Partially Parallel Reconstruction Strategy. ISMRM 2017. p2790.

5. Zhou Z, Chen S, Sun A, Li Y, Li R, Yuan C. Optimized Parametric Variable Radius Sampling Scheme for 3D Cartesian K-Space Undersampling Pattern Design. ISMRM 2016. p1818.

6. Zhou Z, Wang J, Balu N, Li R, Yuan C. STEP: Self-supporting tailored k-space estimation for parallel imaging reconstruction. Magn Reson Med. 2016;75(2):750-761.

7. Mardani M, Gong E, Cheng JY, Vasanawala SS, Zaharchuk G, Xing L, Pauly JM. Deep Generative Adversarial Neural Networks for Compressive Sensing (GANCS) MRI. IEEE Trans Med Imaging. 2018 Jul 23 Epub.

8. Lim B, Son S, Kim H, Nah S, Lee KM. Enhanced deep residual networks for single image super-resolution. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017, pp. 1132-1140.

9. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP. Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process. 2004;13(4):600-12.

10. Yang Y, Sun J, Li H, Xu Z. Deep ADMM-Net for Compressive Sensing MRI. NIPS 2016. p10-18.

11. Hammernik K, Klatzer T, Kobler E, Recht MP, Sodickson DK, Pock T, Knoll F. Learning a variational network for reconstruction of accelerated MRI data. Magn Reson Med. 2018;79(6):3055-3071.

Figures

Figure 1: The architecture of the generative adversarial networks based compressed sensing framework for multi-contrast intracranial vessel wall image reconstruction. Each zero-filled slice, together with its neighboring slices, is fed into the reconstruction network as a 2.5D input, and 20 recurrent residual network (Resnet) blocks are used in this framework. Each Resnet block contains two 2D convolutional layers and a rectified linear unit (ReLU) activation layer. The loss function consists of equally weighted structural similarity (SSIM) and L1 losses, as well as an adversarial loss to ensure a high-quality output that approximates the ground truth.

Figure 2: Quantitative image quality metrics and reconstruction time measurements of the GANCS method on the testing datasets. Panels (a) and (b) demonstrate the consistent image quality improvement of the GANCS output over the zero-filled input in both structural similarity (SSIM) and PSNR for the different image contrasts. Panel (c) shows the average reconstruction time across the testing cases for each image contrast.

Figure 3: Multi-contrast image reconstruction comparison between STEP and GANCS on a coronal slice with plaque in the internal carotid artery. The left, middle, and right columns correspond to the PD, pre-contrast, and post-contrast T1 weighted 3D VISTA scans, and the rows from top to bottom show the zero-filled, STEP, and GANCS reconstructions, respectively. The images are cropped to the central area containing the diseased vessels; note the plaque region indicated by the red arrows.

Figure 4: Multi-contrast image reconstruction comparison between STEP and GANCS on an axial slice with plaque in the posterior communicating artery. The left, middle, and right columns correspond to the PD, pre-contrast, and post-contrast T1 weighted 3D VISTA scans, and the rows from top to bottom show the zero-filled, STEP, and GANCS reconstructions, respectively. The images are cropped to the central area containing the diseased vessels; note the plaque region indicated by the red arrows.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)
2077