2915

3D Artificial Cerebral Blood Volume Generation from T1W Structural MRI
Vishwanatha Mitnala Rao1, Scott A Small2, and Jia Guo3
1Biomedical Engineering, Columbia University, Acton, MA, United States, 2Department of Neurology, Columbia University Medical Center, New York, NY, United States, 3Department of Psychiatry, Columbia University, New York, NY, United States

Synopsis

Keywords: Machine Learning/Artificial Intelligence, Brain

While gadolinium-based contrast agents are necessary to generate a quantitative mapping of brain metabolism, they are invasive and have unclear long-term side effects. As such, convolutional neural networks (CNNs) have been explored as a method to generate artificial cerebral blood volume (aCBV) maps from T1W structural MRI scans. However, prior implementations process MRI in 2D slices, which severely limits output resolution, slows production, and restricts utility. In this study, we propose a 3D CNN-transformer hybrid aCBV generation tool that outperforms both 2D and 3D implementations of the prior state-of-the-art model (PSNR: 29.46, P.R.: 0.836, SSIM: 0.875, S.R.: 0.681).

Introduction

Contrast enhancement of magnetic resonance imaging (MRI) scans using gadolinium-based contrast agents (GBCAs) is necessary to generate a quantitative mapping of brain metabolism.1 Specifically, GBCAs can generate high resolution cerebral blood volume (CBV) maps which have been used to isolate changes associated with aging and screen for diseases such as Alzheimer’s and schizophrenia.2,3,4

Despite their clinical relevance, GBCAs require intravenous administration, which adds further burden to patients and healthcare workers. Moreover, the long-term effects of GBCAs remain unclear; patients have already reported symptoms associated with GBCA exposure,5 and trace amounts of GBCAs are deposited not only within the brain but also in other regions such as the liver and skin6 for extended periods of time. As such, there remains a great need for a GBCA alternative.

Several studies have indicated that the delineation between blood vessels and brain tissue may be captured within T1W structural MRI scans.7,8 In fact, deep learning approaches have demonstrated reasonable success in generating artificial cerebral blood volume (aCBV) maps from T1W MRI.9,10 However, these prior works were built upon 2D frameworks, which severely limits their output resolution, lengthens production time, and restricts their utility within analytical MRI pipelines.

In this study, we introduce a novel 3D approach for aCBV generation using T1W structural MRI. In doing so, we adapt a hybrid 3D transformer-CNN architecture to this task and validate its performance against 2D and 3D implementations of the prior state-of-the-art.

Methods

We modeled our study to compare against the prior state-of-the-art results achieved by Liu et al.9 We used the same dataset, consisting of 598 healthy brain structural MRI scans. These were split into 326/93/179 scans for train/validation/test respectively. The data preprocessing pipeline is shown in Figure 1, and notably differs from the approach used in Liu et al.9
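The final step of the preprocessing pipeline (Figure 1) normalizes each scan by its top-1% white-matter intensity. A minimal NumPy sketch of that step is shown below; the function name `normalize_by_wm` and the choice of the mean of the brightest 1% of WM voxels as the normalizing statistic are illustrative assumptions, not the exact implementation used here.

```python
import numpy as np

def normalize_by_wm(volume, wm_mask, top_frac=0.01):
    """Scale a volume by the mean of its brightest `top_frac` of WM voxels.

    `wm_mask` is a boolean white-matter mask, e.g. from an FSL FAST
    segmentation (step 5 of the pipeline). Using the *mean* of the top 1%
    (rather than, say, a fixed percentile) is an assumption for illustration.
    """
    wm = np.sort(volume[wm_mask].ravel())          # WM intensities, ascending
    cutoff = int(np.ceil(len(wm) * (1.0 - top_frac)))
    return volume / wm[cutoff:].mean()             # divide by top-1% WM mean
```

Normalizing against white matter rather than the global maximum makes the scaling robust to bright vascular or artifactual voxels outside the tissue of interest.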

The model used in this study is a modified version of the 3D hybrid CNN-transformer architecture TABS,11 shown in Figure 2. TABS has previously outperformed Residual Attention U-Net (RAU-Net), the prior state-of-the-art model. We compared our TABS variant to the 2D RAU-Net results reported in Liu et al.9 Furthermore, we validated it against a 3D implementation of RAU-Net that follows the same network architecture parameters as the 2D RAU-Net. For the 3D implementations, we used a batch size of one, SGD as the optimization algorithm, an initial learning rate of 0.1, and the adaptive robust loss function described by Barron.12
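The general robust loss of Barron12 unifies common regression losses through a shape parameter α and a scale c. Its fixed-α form (for α ∉ {0, 2}) can be sketched as follows; the adaptive variant used in training additionally treats α as a learnable parameter, which this sketch omits.

```python
import numpy as np

def barron_loss(x, alpha=1.0, c=1.0):
    """General robust loss of Barron (CVPR 2019), fixed-shape form.

    rho(x, alpha, c) = |alpha-2|/alpha * (((x/c)^2 / |alpha-2| + 1)^(alpha/2) - 1)

    Valid for alpha not in {0, 2}; alpha=2 recovers (scaled) L2, alpha=1
    the Charbonnier / pseudo-Huber loss, alpha=-2 Geman-McClure.
    """
    sq = (x / c) ** 2
    b = abs(alpha - 2.0)
    return (b / alpha) * ((sq / b + 1.0) ** (alpha / 2.0) - 1.0)
```

With α = 1 and c = 1 this reduces to sqrt(x² + 1) − 1, i.e. a smooth L1-like loss, which illustrates how a single α sweeps between robust and quadratic behavior.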

To ensure consistency in our evaluation, we tracked the same metrics reported in Liu et al.9: Pearson correlation (P.R.), Spearman correlation (S.R.), SSIM, and PSNR. All metrics were calculated within the brain region.
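Restricting the metrics to the brain region amounts to evaluating only the masked voxels. A minimal sketch of the voxel-wise metrics (PSNR and the two correlations) is given below; the function name `masked_metrics` is hypothetical, and SSIM is omitted since it needs local spatial windows (e.g. skimage's `structural_similarity` with `full=True`, averaged over the mask).

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def masked_metrics(pred, target, mask, data_range=None):
    """PSNR, Pearson, and Spearman computed over brain voxels only.

    pred/target: 3D volumes; mask: boolean brain mask of the same shape.
    """
    p, t = pred[mask], target[mask]                # flatten to brain voxels
    if data_range is None:
        data_range = t.max() - t.min()             # dynamic range of target
    mse = np.mean((p - t) ** 2)
    psnr = 10.0 * np.log10(data_range ** 2 / mse)
    return psnr, pearsonr(p, t)[0], spearmanr(p, t)[0]
```

Evaluating inside the mask avoids inflating the scores with the large, trivially matched background region outside the skull.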

Results

The proposed 3D framework outperformed both the 2D and 3D implementations of RAU-Net in terms of P.R. and S.R. Moreover, it performed only marginally worse than the prior 2D state-of-the-art model in terms of SSIM and PSNR, while still outperforming the 3D implementation of RAU-Net. These results are illustrated and tabulated in Figures 3 and 4, respectively. Qualitative visualizations in Figure 5 convey the similarity between the predicted aCBV maps and the ground-truth gadolinium uptake for a sample scan across all views.

Discussion

This study investigated the performance and feasibility of a novel 3D deep learning approach for aCBV generation. The proposed framework outperformed the prior state-of-the-art approach for certain metrics and also outperformed a separate 3D implementation of the prior state-of-the-art across all metrics.

Our results first validate the ability of deep learning algorithms to generate aCBV from structural T1W MRI, an area with few prior studies. Furthermore, we are the first to propose a compelling 3D approach for aCBV generation. In general, 3D frameworks are more efficient than their 2D counterparts, since 2D aCBV generation requires post-processing to stitch the slices into a 3D output. For this reason, 3D frameworks are better positioned as an intermediary step within larger deep learning pipelines. This is especially pertinent for aCBV generation given that CBV often serves as a precursor to more in-depth functional analysis.2,3,4

While our model reaches state-of-the-art performance, we believe it can be further improved with higher quality training data. The data used for the 3D models were upsampled from the original 0.68x0.68x3 mm space, making aCBV generation theoretically harder than in the original 2D implementation. This difficulty was reflected in the lower performance of the 3D RAU-Net implementation. Nevertheless, our framework performed on par with, and in some respects better than, the prior state-of-the-art. We expect even higher performance when training this model on native 1x1x1 mm 3D MRI scans. Finally, we used a relatively small sample size for 3D CNN training, and our model would likely improve when trained on a larger dataset.

Conclusion

In this study, we propose a novel 3D deep learning framework for aCBV generation. Given the efficiency and performance of our approach, we believe it can serve as a preprocessing tool for future tasks that build on functional mapping.

Acknowledgements

No acknowledgement found.

References

  1. Lohrke, J., Frenzel, T., Endrikat, J., et al. (2016). 25 years of contrast-enhanced mri: developments, current challenges and future perspectives. Adv. Ther, 33, 1–28.

  2. Khan, U. A., Liu, L., Provenzano, F. A., Berman, D. E., et al. (2014). Molecular drivers and cortical spread of lateral entorhinal cortex dysfunction in preclinical Alzheimer's disease. Nature neuroscience, 17(2), 304-311.

  3. Schobel, S. A., Chaudhury, N. H., Khan, U. A., et al. (2013). Imaging patients with psychosis and a mouse model establishes a spreading pattern of hippocampal dysfunction and implicates glutamate as a driver. Neuron, 78(1), 81-93.

  4. Brickman, A. M., Khan, U. A., Provenzano, F. A., et al. (2014). Enhancing dentate gyrus function with dietary flavanols improves cognition in older adults. Nature neuroscience, 17(12), 1798-1803.

  5. Ramalho, M., Ramalho, J., Burke, L. M., & Semelka, R. C. (2017). Gadolinium retention and toxicity—an update. Advances in Chronic Kidney Disease, 24(3), 138-146.

  6. Guo, B. J., Yang, Z. L., & Zhang, L. J. (2018). Gadolinium deposition in brain: current scientific evidence and future perspectives. Frontiers in molecular neuroscience, 11, 335.

  7. Hasgall, P. A., Di Gennaro, F., Baumgartner, C., et al. (2015). IT'IS database for thermal and electromagnetic parameters of biological tissues, version 3.0.

  8. Small, S. A., Wu, E. X., Bartsch, D., et al. (2000). Imaging physiologic dysfunction of individual hippocampal subregions in humans and genetically modified mice. Neuron, 28(3), 653-664.

  9. Liu, C., Zhu, N., Sun, H., et al. (2022). Deep learning of MRI contrast enhancement for mapping cerebral blood volume from single-modal non-contrast scans of aging and Alzheimer's disease brains. Frontiers in Aging Neuroscience, 893.

  10. Sun, H., Liu, X., Feng, X., et al. (2020, April). Substituting gadolinium in brain MRI using DeepContrast. In 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI) (pp. 908-912). IEEE.

  11. Rao, V. M., Wan, Z., Ma, D. J., et al. (2022). Improving Across-Dataset Brain Tissue Segmentation Using Transformer. arXiv preprint, arXiv:2201.08741.

  12. Barron, J. T. (2019). A general and adaptive robust loss function. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 4331-4339).

  13. Zhang, Y., Brady, J.M. and Smith, S., (2000). Hidden Markov random field model for segmentation of brain MR image. In Medical Imaging 2000: Image Processing (Vol. 3979, pp. 1126-1137). International Society for Optics and Photonics.


Figures

Overview of preprocessing. T1W and T1W contrast enhanced (CE) pipelines shown. 1. Spatial alignment between T1W and T1W CE scans. 2. Brain extraction. 3. Skull stripping. 4. Bias field correction, registration to MNI152, upsampling to 1x1x1 mm. 5. Generate WM segmentation using FSL FAST.13 6. Normalization using top 1% white matter intensity.

Model architecture, based on a variant of the TABS architecture proposed in Rao et al.11

Illustration of model performance across all 4 metrics.

Tabulation of model performance across all 4 metrics.

Qualitative visualization of model performance, across all three views of the brain.

Proc. Intl. Soc. Mag. Reson. Med. 31 (2023)
DOI: https://doi.org/10.58530/2023/2915