2422

Do Simultaneous Morphological Inputs Matter for Deep Learning Enhancement of Ultra-low-dose Amyloid PET/MRI?
Kevin T. Chen1, Olalekan Adeyeri2, Tyler N Toueg3, Elizabeth Mormino3, Mehdi Khalighi1, and Greg Zaharchuk1
1Radiology, Stanford University, Stanford, CA, United States, 2Salem State University, Salem, MA, United States, 3Neurology and Neurological Sciences, Stanford University, Stanford, CA, United States

Synopsis

We have previously generated diagnostic-quality amyloid positron emission tomography (PET) images with deep learning enhancement of actual ultra-low-dose (~2% of the original) PET images and simultaneously acquired structural magnetic resonance imaging (MRI) inputs. Here, we investigate whether simultaneity is a requirement for such structural MRI inputs. If simultaneity is not required, the utility of MRI-assisted ultra-low-dose PET imaging would extend to data acquired on separate PET/computed tomography (CT) and standalone MRI machines.

Introduction

Positron emission tomography (PET) allows the interrogation of amyloid deposition in the brain, a hallmark of Alzheimer’s disease neuropathology1, while magnetic resonance imaging (MRI), with its exquisite soft tissue contrast, allows imaging of morphology-based features such as cortical atrophy, representative of neurodegeneration2. These complementary strengths allow MRI to assist in PET image processing and enhancement. While absolute quantification of radiotracer concentration is a strength of PET, the radioactivity associated with the radiotracers also presents a risk to participants, especially in vulnerable populations, limiting the scalability of large clinical longitudinal PET studies.
With the advent of deep learning (DL)-based machine learning methods such as convolutional neural networks (CNNs), we have previously used DL to generate diagnostic-quality amyloid PET images from actual ultra-low injected radiotracer doses and simultaneously acquired MRI inputs3. Here, we investigate the value of simultaneity in the acquisition of the MRI inputs, as well as the effect of different DL training methods in relation to these inputs. If non-simultaneous structural MRI inputs suffice, MRI-assisted ultra-low-dose PET imaging could be extended to data acquired on standalone PET/computed tomography (CT) and MRI machines.

Methods

PET/MR data acquisition
48 participants were recruited for the study. Recruitment was approved by the Stanford Institutional Review Board and all participants (or an authorized surrogate) provided written consent. 32 participants (19 female, 68.2±7.1 years) were used for pre-training the network; these received 328±32 MBq of the amyloid radiotracer [18F]-florbetaben. 16 participants (6 female, 71.4±8.7 years) were scanned with the ultra-low-dose protocol in two PET/MRI sessions, with 6.49±3.76 and 300±14 MBq [18F]-florbetaben injections, respectively. In both sessions, T1-, T2-, and T2 FLAIR-weighted MR images were acquired simultaneously with PET (90-110 minutes after injection) on an integrated PET/MR scanner with time-of-flight capabilities (SIGNA PET/MR, GE Healthcare) (Figure 1). 7 participants were scanned on the same day (ultra-low-dose protocol followed by the full-dose protocol), while the others were scanned on separate days (1- to 42-day interval, mean 19.6 days). All images were co-registered to the standard-dose PET image using FSL.
CNN implementation
Using the low-dose CNN architecture with multimodal PET/MRI inputs proposed in Chen et al. (Figure 2)4, the network was either trained from scratch (Method 1) or fine-tuned from the network pre-trained on the participants described in Chen et al.4, with either all layers of the U-Net (Method 2) or only the last layer (Method 3) fine-tuned using the actual ultra-low-dose datasets. 8-fold cross-validation was used to efficiently utilize all datasets (14 for training, 2 for testing per fold). For all methods, training was carried out twice: once using PET/MRI inputs from the same scanning session (simultaneous: S) and once using PET and MRI from different sessions (non-simultaneous: NS) (Figure 1).
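The 8-fold cross-validation split described above (16 ultra-low-dose participants, 2 held out for testing per fold) can be sketched as follows. This is a minimal illustration with placeholder participant IDs, not the study's actual training code:

```python
# Hypothetical sketch of the 8-fold cross-validation used here:
# 16 ultra-low-dose participants, 2 held out for testing per fold,
# the remaining 14 used for training. IDs are placeholders.

def make_folds(participant_ids, n_folds=8):
    """Partition participants into n_folds disjoint test sets;
    each fold trains on everyone not in its test set."""
    per_fold = len(participant_ids) // n_folds
    folds = []
    for k in range(n_folds):
        test = participant_ids[k * per_fold:(k + 1) * per_fold]
        train = [p for p in participant_ids if p not in test]
        folds.append({"train": train, "test": test})
    return folds

folds = make_folds(list(range(16)))  # 8 folds: 14 train / 2 test each
```

Each participant appears in exactly one test set, so every dataset contributes to evaluation while 14/16 of the data remain available for training in each fold.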
Data analysis
Using FreeSurfer, a brain mask derived from each subject's T1 image was used for voxel-based analyses. For each axial slice, the image quality of the enhanced and ultra-low-dose PET images within the brain was compared to the full-dose image using peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and root mean square error (RMSE). Paired t-tests at the p=0.05 level were used to test for significant differences between the metrics derived from NS vs. S inputs.
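The slice-wise metrics and paired test described above can be sketched as follows. This is a minimal NumPy/SciPy illustration on synthetic arrays; the actual analysis used FreeSurfer-derived brain masks, and SSIM (not shown) would come from a standard implementation such as scikit-image:

```python
import numpy as np
from scipy import stats

def rmse(ref, img, mask):
    """Root mean square error within the brain mask."""
    return np.sqrt(((img[mask] - ref[mask]) ** 2).mean())

def psnr(ref, img, mask):
    """Peak signal-to-noise ratio (dB) within the brain mask,
    using the reference's peak value as the signal peak."""
    return 20 * np.log10(ref[mask].max() / rmse(ref, img, mask))

# Synthetic example: per-slice metrics for two enhancement variants,
# then a paired t-test (as used for the S vs. NS comparison).
rng = np.random.default_rng(0)
full = rng.random((10, 64, 64))                  # stand-in "full-dose" slices
mask = np.ones_like(full, dtype=bool)            # stand-in brain mask
s_imgs = full + 0.01 * rng.standard_normal(full.shape)
ns_imgs = full + 0.01 * rng.standard_normal(full.shape)

psnr_s = [psnr(full[i], s_imgs[i], mask[i]) for i in range(10)]
psnr_ns = [psnr(full[i], ns_imgs[i], mask[i]) for i in range(10)]
t, p = stats.ttest_rel(psnr_s, psnr_ns)          # paired t-test, p=0.05 level
```

The paired design matters here: each slice yields one S and one NS metric value against the same full-dose reference, so `ttest_rel` rather than an unpaired test is the appropriate comparison.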

Results and Discussion

Qualitatively, all enhanced images showed markedly reduced noise compared to the ultra-low-dose image and resembled the ground-truth image (Figure 3). All three metrics showed significant image quality improvement (Figure 4) from the ultra-low-dose images to the enhanced images, and the differences in metrics between images enhanced from S vs. NS inputs were not statistically significant under paired t-tests (Method 1: PSNR: p=0.27, SSIM: p=0.26, RMSE: p=0.15; Method 2: PSNR: p=0.99, SSIM: p=0.62, RMSE: p=0.83; Method 3: PSNR: p=1.00, SSIM: p=0.79, RMSE: p=0.67). These results show that, for this population and our interval between scanning sessions, simultaneity is not a strict requirement for morphology-based MR-assisted ultra-low-dose PET enhancement. The p-values also show that metric values were more similar when more parameters were pre-trained and fixed (Method 3).

Conclusion

This work has shown that diagnostic-quality amyloid PET images can be generated by trained CNNs from ultra-low-dose PET with both S and NS multimodal MRI inputs, broadening the utility of ultra-low-dose amyloid PET imaging.

Acknowledgements

This project was made possible by the NIH grant P41-EB015891, the Stanford Alzheimer’s Disease Research Center, GE Healthcare, the Foundation of the ASNR, and Life Molecular Imaging.

References

1. Rowe CC, Villemagne VL. Brain amyloid imaging. J Nucl Med. 2011;52(11):1733-1740.

2. Dickerson BC, Bakkour A, Salat DH, et al. The cortical signature of Alzheimer's disease: regionally specific cortical thinning relates to symptom severity in very mild to mild AD dementia and is detectable in asymptomatic amyloid-positive individuals. Cereb Cortex. 2009;19(3):497-510.

3. Chen KT, Holley D, Halbert K, et al. Quantitative Assessment of Deep Learning-enhanced Actual Ultra-low-dose Amyloid PET/MR Imaging. J Nucl Med. 2020;61(Supplement 1):521.

4. Chen KT, Gong E, de Carvalho Macruz FB, et al. Ultra-Low-Dose (18)F-Florbetaben Amyloid PET Imaging Using Deep Learning with Multi-Contrast MRI Inputs. Radiology. 2019;290(3):649-656.

Figures

Figure 1. Sample ultra-low-dose protocol. The participants were scanned in two sessions and the two sets of MR images obtained from the sessions were used in separate neural network trainings to test the effect of simultaneous (S) vs. non-simultaneous (NS) inputs.

Figure 2. Low-dose CNN. The arrows denote computational operations and the tensors are denoted by boxes with the number of channels indicated above each box. Inputs, outputs, and the ground truth of the network are also indicated. The network was trained either from scratch (Method 1), from a pre-trained network using the same structure with all layers trainable (Method 2), or with just the last layer (red box) trainable (Method 3). NS: non-simultaneous, S: simultaneous

Figure 3. Representative amyloid PET images (top: amyloid negative, bottom: amyloid positive) with the corresponding T1 MR images. Both sets of CNN-enhanced ultra-low-dose PET images show greatly reduced noise compared to the ultra-low-dose PET image and resemble the standard-dose PET image. NS: non-simultaneous, S: simultaneous

Figure 4. Image quality metrics comparing the ultra-low-dose (LD) PET and the CNN-enhanced images to their corresponding ground truth standard-dose PET image. PSNR: peak signal-to-noise ratio, RMSE: root mean square error, SSIM: structural similarity, NS: non-simultaneous, S: simultaneous

Proc. Intl. Soc. Mag. Reson. Med. 29 (2021)