
1975

Progressive Volumetrization for Data-Efficient Image Recovery in Accelerated Multi-Contrast MRI
Mahmut Yurt1,2, Muzaffer Ozbey1,2, Salman Ul Hassan Dar1,2, Berk Tinaz1,2,3, Kader Karlı Oğuz2,4, and Tolga Çukur1,2,5
1Department of Electrical and Electronics Engineering, Bilkent University, Ankara, Turkey, 2National Magnetic Resonance Research Center, Bilkent University, Ankara, Turkey, 3Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, United States, 4Department of Radiology, Hacettepe University, Ankara, Turkey, 5Neuroscience Program, Aysel Sabuncu Brain Research Center, Bilkent University, Ankara, Turkey

Synopsis

The gold-standard recovery models for accelerated multi-contrast MRI involve either volumetric or cross-sectional processing. Volumetric models offer elevated capture of global context, but may yield suboptimal training due to expanded model complexity. Cross-sectional models demonstrate improved training with reduced complexity, yet may suffer from loss of global consistency in the longitudinal dimension. We propose a novel progressively volumetrized generative model (ProvoGAN) for contextual learning of image recovery in accelerated multi-contrast MRI. ProvoGAN empowers capture of global and local context while maintaining lower model complexity by decomposing the target volumetric mapping into a cascade of cross-sectional mappings task-optimally ordered across rectilinear orientations.

Introduction

Magnetic resonance imaging (MRI) offers the versatility to acquire volumetric images under various tissue contrasts, but prolonged data acquisitions limit the quality and diversity of the images collected. A natural approach to mitigating this limitation is to recover diverse arrays of high-quality images from acquisitions undersampled across tissue contrasts1-4 or k-space5-8. Existing recovery methods process data either volumetrically2,3,5 or cross-sectionally1,4,6-8. Volumetric models offer advanced global consistency, yet they manifest increased model complexity that can yield suboptimal learning. Cross-sectional models offer reduced complexity with improved training, but they may suffer from loss of contextual consistency across the longitudinal dimension. Here we propose a novel progressively volumetrized model9, ProvoGAN, for data-efficient image recovery in accelerated multi-contrast MRI. ProvoGAN decomposes volumetric image recovery tasks into a sequence of cross-sectional mappings optimally ordered across rectilinear orientations (Fig. 1), effectively capturing global and local details while maintaining reduced model complexity9.

Methods

Progressively Volumetrized Generative Model for MR Image Recovery

First Progression: Given a progression sequence of the rectilinear orientations $(o_1, o_2, o_3)$, ProvoGAN first learns a cross-sectional mapping in $o_1$ via a generator ($G_{o_1}: x_i^{o_1} \rightarrow \hat{y}_i^{\,p_1,o_1}$) and a discriminator ($D_{o_1}: \{x_i^{o_1}, y_i^{o_1}\} \rightarrow [0,1]$ and $\{x_i^{o_1}, \hat{y}_i^{\,p_1,o_1}\} \rightarrow [0,1]$), where $x_i^{o_1}$, $y_i^{o_1}$ denote the $i$th cross-sections in $o_1$ of the source and target volumes, and $\hat{y}_i^{\,p_1,o_1}$ denotes the $i$th cross-section of the target volume in $o_1$ recovered in the first progression. $G_{o_1}$ and $D_{o_1}$ are trained with adversarial and pixel-wise losses:

$$\mathcal{L}_{p_1} = \underbrace{\mathbb{E}_{x_i^{o_1}, y_i^{o_1}}\!\left[\left| y_i^{o_1} - \hat{y}_i^{\,p_1,o_1} \right|\right]}_{\text{pixel-wise loss}} \; \underbrace{- \,\mathbb{E}_{x_i^{o_1}, y_i^{o_1}}\!\left[\left(1 - D_{o_1}(x_i^{o_1}, y_i^{o_1})\right)^2\right] - \mathbb{E}_{x_i^{o_1}}\!\left[D_{o_1}(x_i^{o_1}, \hat{y}_i^{\,p_1,o_1})^2\right]}_{\text{adversarial loss}}$$

Once $\mathcal{L}_{p_1}$ is optimized, the $I$ cross-sections in $o_1$ are separately recovered, and then reformatted with a concatenation block $f$ to generate the target volume:

$$\hat{Y}^{p_1} = f\!\left(\hat{y}_1^{\,p_1,o_1}, \hat{y}_2^{\,p_1,o_1}, \ldots, \hat{y}_I^{\,p_1,o_1}\right)$$

Second Progression: ProvoGAN then learns a separate recovery model in $o_2$ to progressively enhance global and local context: $\hat{y}_j^{\,p_2,o_2} = G_{o_2}(x_j^{o_2}, \hat{y}_j^{\,p_1,o_2})$, where $x_j^{o_2}$, $\hat{y}_j^{\,p_2,o_2}$ denote the $j$th cross-sections in $o_2$ of the source and recovered target volumes, and $\hat{y}_j^{\,p_1,o_2}$ denotes the $j$th cross-section in $o_2$ of the previously recovered volume, incorporated to further leverage contextual priors. $G_{o_2}$ and $D_{o_2}$ are again trained in an adversarial setup.
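The slice-and-reassemble machinery shared by the progressions can be sketched in a few lines of numpy; `volumetrize` and `recover_slice` are illustrative names rather than the study's implementation, with a trained generator such as $G_{o_1}$ standing in for `recover_slice`:

```python
import numpy as np

def volumetrize(volume, axis, recover_slice):
    """Apply a cross-sectional recovery function slice-by-slice along
    `axis`, then reassemble the recovered cross-sections into a volume
    (the role of the concatenation block f in the text)."""
    slices = np.moveaxis(volume, axis, 0)            # I cross-sections in o1
    recovered = [recover_slice(s) for s in slices]   # generator per slice
    return np.moveaxis(np.stack(recovered, axis=0), 0, axis)

# Toy check: an identity "generator" must reproduce the volume exactly.
vol = np.random.rand(16, 32, 24)
out = volumetrize(vol, axis=2, recover_slice=lambda s: s)
```

Later progressions would pass each source cross-section together with the matching cross-section of the previously recovered volume into the generator; the reassembly step is identical.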

Third Progression: ProvoGAN lastly performs a cross-sectional mapping in $o_3$ with a third generator $G_{o_3}$ receiving as input cross-sections of the source and previously recovered target volumes: $\hat{y}_k^{\,p_3,o_3} = G_{o_3}(x_k^{o_3}, \hat{y}_k^{\,p_2,o_3})$, where $x_k^{o_3}$, $\hat{y}_k^{\,p_3,o_3}$ denote the $k$th cross-sections in $o_3$ of the source and recovered target volumes. An adversarial setup is again used in this progression. The final output of ProvoGAN is then generated by concatenating the $K$ separately recovered cross-sections in $o_3$:

$$\hat{Y}^{p_3} = f\!\left(\hat{y}_1^{\,p_3,o_3}, \hat{y}_2^{\,p_3,o_3}, \ldots, \hat{y}_K^{\,p_3,o_3}\right)$$

where $\hat{Y}^{p_3}$ denotes the final output volume recovered by ProvoGAN. In MRI reconstruction, consistency of the recovered and acquired k-space coefficients is ensured as follows:

$$\mathcal{F}_u(\hat{Y}^{p_n}) := \mathcal{F}_u(X)$$

where $\mathcal{F}_u$ denotes the Fourier operator evaluated at the acquired k-space locations, $X$ denotes the source acquisition, and $n$ denotes the ongoing progression index.
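For reconstruction, the data-consistency projection amounts to overwriting the recovered k-space coefficients with the acquired ones at sampled locations. A minimal single-coil 2D sketch, assuming a binary sampling mask (function names are illustrative, not from the released code):

```python
import numpy as np

def data_consistency(recovered, acquired, mask):
    """Enforce F_u(Y_hat) := F_u(X): at sampled k-space locations
    (mask True) keep the acquired coefficients, elsewhere keep the
    recovered ones, then return to image space."""
    k_rec = np.fft.fft2(recovered)
    k_acq = np.fft.fft2(acquired)
    k_mix = np.where(mask, k_acq, k_rec)
    # Mixing generally breaks Hermitian symmetry, so the inverse FFT can
    # have a small imaginary component; keep the real part.
    return np.real(np.fft.ifft2(k_mix))
```

With the mask all True the output reduces to the acquired image; with it all False the recovered image passes through unchanged, which makes the projection easy to sanity-check.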


Implementation Details

Each cross-sectional model in ProvoGAN contained a generator with 3 conv-layers, 9 ResNet blocks, and 3 conv-layers in series, and a discriminator with 5 conv-layers. Training was continued for 100 epochs with a batch size of 1, using the ADAM optimizer ($\beta_1 = 0.5$, $\beta_2 = 0.999$). The learning rate was 0.0002 for the first 50 epochs and linearly decreased to 0 over the remaining epochs. Comprehensive experiments were performed on the IXI dataset10 (T1- and T2-weighted images of 52 subjects; training: 37, validation: 5, test: 10) to demonstrate the recovery quality of ProvoGAN on two synthesis tasks (T1→T2 and T2→T1) and two reconstruction tasks (T1 (R=4,8) → T1 (R=1) and T2 (R=4,8) → T2 (R=1)), with R denoting the acceleration rate. The optimal progression sequence of ProvoGAN was determined to be C→A→S for T1→T2, S→A→C for T2→T1, C→A→S for T1 (R=4), A→C→S for T1 (R=8), C→S→A for T2 (R=4), and A→C→S for T2 (R=8), where A denotes the axial, C the coronal, and S the sagittal orientation. ProvoGAN was evaluated against cross-sectional (sGAN) and volumetric (vGAN) models. The network architectures and hyperparameters of these models were adopted from a previous study successfully demonstrated for MRI recovery1, with the 2D conv-layers of sGAN replaced by 3D ones for vGAN. Separate sGAN models were trained for each individual orientation in synthesis (sGAN-A: axial, sGAN-C: coronal, sGAN-S: sagittal), whereas a single sGAN model was trained for the transverse (axial) orientation in reconstruction.
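The learning-rate schedule described above (constant for the first 50 epochs, then linear decay to 0 at epoch 100) can be expressed as a short helper; `learning_rate` is an illustrative name, not taken from the study's code:

```python
def learning_rate(epoch, base_lr=2e-4, constant_epochs=50, total_epochs=100):
    """Constant base_lr for the first `constant_epochs`, then linearly
    decayed to 0 by `total_epochs` (epoch is 0-indexed)."""
    if epoch < constant_epochs:
        return base_lr
    remaining = total_epochs - epoch
    return base_lr * remaining / (total_epochs - constant_epochs)
```

Note the schedule is continuous at the breakpoint: epoch 50 still yields the base rate, and the rate reaches exactly 0 at epoch 100.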


Results

We first performed demonstrations on the synthesis tasks (T1→T2 and T2→T1) in IXI. Table 1 lists the volumetric PSNR and SSIM measurements of the models under comparison, where ProvoGAN achieves 0.58 dB higher PSNR and 2.32% higher SSIM compared to the second-best method, on average. Representative results are displayed in Fig. 2 for T1-weighted image synthesis.

Next, we demonstrated the reconstruction quality of ProvoGAN in IXI (T1 (R=4,8) → T1 (R=1) and T2 (R=4,8) → T2 (R=1)). The PSNR and SSIM measurements reported in Table 2 for the reconstruction tasks indicate that ProvoGAN yields, on average, 1.78 dB higher PSNR and 4.61% higher SSIM compared to the second-best method. Representative results are displayed in Fig. 3 for T1-weighted image reconstruction.
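For reference, volumetric PSNR of the kind reported above can be computed as follows; this is a minimal sketch in which the peak is taken as the reference volume's maximum intensity, an assumed convention that the abstract does not specify:

```python
import numpy as np

def psnr_db(reference, recovered):
    """Peak signal-to-noise ratio in dB over an entire volume, with the
    peak taken as the reference's maximum intensity (assumed convention)."""
    reference = np.asarray(reference, dtype=float)
    recovered = np.asarray(recovered, dtype=float)
    mse = np.mean((reference - recovered) ** 2)
    peak = np.max(reference)
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: a uniform 0.1 error on a unit-peak volume gives MSE = 0.01,
# hence 10*log10(1/0.01) = 20 dB.
```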

The overall results suggest that sGAN suffers from suboptimal recovery in the longitudinal dimension due to loss of contextual consistency, whereas vGAN suffers from poor recovery of fine-structural details and deteriorated resolution due to its expanded model complexity and resulting suboptimal learning behaviour. Meanwhile, ProvoGAN captures global consistency in all rectilinear orientations via progressively volumetrized cross-sectional mappings, and offers enhanced recovery of fine-structural details owing to its reduced complexity.

Discussion

Here, we introduced a progressively volumetrized model (ProvoGAN) for image recovery in MRI. ProvoGAN decomposes volumetric image recovery tasks into a task-optimally ordered series of simpler cross-sectional mappings, which empowers it to leverage global contextual information and enhance fine-structural details in each orientation while maintaining reduced model complexity.

Conclusion

The proposed method holds great promise for advancing the practicality and utility of MR image recovery.

Acknowledgements

This work was supported in part by a TUBA GEBIP fellowship, by a TUBITAK 1001 Grant (118E256), by a BAGEP fellowship, and by NVIDIA with a GPU donation.

References

1. Dar SUH, Yurt M, Karacan L, Erdem A, Erdem E, Cukur T. Image Synthesis in Multi-Contrast MRI with Conditional Generative Adversarial Networks. IEEE Transactions on Medical Imaging. 2019;38(10):2375-2388.

2. Yu B, Zhou L, Wang L, Shi Y, Fripp J, Bourgeat P. Ea-GANs: edge-aware generative adversarial networks for cross-modality MR image synthesis. IEEE Transactions on Medical Imaging. 2019;38(7):1750–1762.

3. Yang H, Lu X, Wang SH, Lu Z, Yao J, Jiang Y, Qian P. Synthesizing multi-contrast MR images via novel 3D conditional variational auto-encoding GAN. Mobile Networks and Applications. 2020.

4. Lee D, Kim J, Moon W, Ye JC. Collagan: Collaborative GAN for missing image data imputation. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2019;2482–2491.

5. Malave MO, Baron CA, Koundinyan SP, Sandino CM, Ong F, Cheng JY, Nishimura DG. Reconstruction of undersampled 3D non-cartesian image-based navigators for coronary MRA using an unrolled deep learning model. Magnetic Resonance in Medicine. 2020;84(2):800–812.

6. Akcakaya M, Moeller S, Weingartner S, Ugurbil K. Scan-specific robust artificial-neural-networks for k-space interpolation (RAKI) reconstruction: Database-free deep learning for fast imaging. Magnetic Resonance in Medicine. 2019;81(1): 439–453.

7. Lee D, Yoo J, Tak S, Ye JC. Deep residual learning for accelerated MRI using magnitude and phase networks. IEEE Transactions on Biomedical Engineering. 2018;65(9):1985–1995.

8. Mardani M, Gong E, Cheng JY, Vasanawala S, Zaharchuk G, Alley M, Thakur N, Han S, Dally W, Pauly JM, Xing L. Deep generative adversarial neural networks for compressive sensing MRI. IEEE Transactions on Medical Imaging, 2019;38(1):167–179.

9. Yurt M, Ozbey M, Dar SUH, Tinaz B, Oguz KK, Cukur T. Progressively Volumetrized Deep Generative Models for Data-Efficient Contextual Learning of MR Image Recovery. arXiv:2011.13913, preprint. 2020.

10. https://brain-development.org/ixi-dataset/

Figures

Fig. 1: ProvoGAN performs a series of cross-sectional subtasks optimally ordered across the individual rectilinear orientations (Axial→Sagittal→Coronal illustrated here) to handle the aimed volumetric recovery task. Within a given subtask, the source-contrast volume is divided into cross-sections across the longitudinal dimension, and a cross-sectional mapping is learned to recover target cross-sections from source cross-sections, where the previous subtask's output (if available) is further incorporated to leverage contextual priors.


Fig. 2: ProvoGAN is demonstrated on IXI for T1-weighted image synthesis. Representative results from ProvoGAN, sGAN models (sGAN-A is trained axially, sGAN-C coronally, and sGAN-S sagittally), and vGAN are displayed for all rectilinear orientations (first row: axial, second row: coronal, third row: sagittal) together with reference images.

Fig. 3: ProvoGAN is demonstrated on IXI for T1-weighted single-coil image reconstruction with an acceleration ratio of 8. Representative results from ProvoGAN, sGAN (sGAN is trained through the transverse axial orientation), and vGAN are displayed for all rectilinear orientations (first two rows: axial, second two rows: coronal, and third two rows: sagittal) together with error maps, zero-filled images, and references.

Table 1: Synthesis quality of the proposed ProvoGAN model is demonstrated on IXI for T1-weighted and T2-weighted image synthesis tasks. PSNR and SSIM measurements are reported as mean ± std for ProvoGAN, sGAN models (sGAN-A is axially trained, sGAN-C is coronally trained, and sGAN-S is sagittally trained), and vGAN.

Table 2: Reconstruction quality of the proposed ProvoGAN model is demonstrated on IXI for T1-weighted and T2-weighted image reconstruction tasks for acceleration ratios of 4 and 8. PSNR and SSIM measurements are reported as mean ± std for ProvoGAN, sGAN (sGAN is trained in the transverse axial orientation), and vGAN.

Proc. Intl. Soc. Mag. Reson. Med. 29 (2021)