
Transfer Learning of an Ultra-low-dose Amyloid PET/MRI U-Net Across Scanner Models
Kevin T Chen1, Matti Schürer2, Jiahong Ouyang1, Enhao Gong1, Solveig Tiepolt2, Osama Sabri2, Greg Zaharchuk1, and Henryk Barthel2

1Stanford University, Stanford, CA, United States, 2University of Leipzig, Leipzig, Germany

Synopsis

Reducing the radiotracer dose of amyloid PET/MRI will increase the utility of this modality for early diagnosis and for multi-center trials of amyloid-clearing drugs. To do so, networks trained on data from one site will have to be applied to data acquired at other sites. In this project we have shown that, after fine-tuning, a network trained on PET/MRI data acquired on one scanner can produce diagnostic-quality images, with noise reduction and image quality improvement, for data acquired on another scanner.

Introduction

Simultaneous amyloid PET/MRI provides the opportunity for a “one-stop shop” imaging exam for the early diagnosis of dementia1. To increase the utility of this hybrid modality, we have previously trained deep learning networks to generate amyloid PET images of diagnostic value from PET/MRI scan protocols with markedly reduced injected radiotracer dose2. However, for our trained network (trained on an integrated PET/MRI scanner with time-of-flight capabilities: Signa PET/MRI, GE Healthcare; “Scanner 1”) to be useful for hybrid amyloid PET/MR imaging in multi-center studies, it will potentially have to be applied to data acquired on different scanner models and with different scan protocols. Using transfer learning techniques3, we aim to demonstrate the utility of this trained network on data acquired on another PET/MRI scanner (mMR, Siemens Healthineers; “Scanner 2”).

Materials and Methods

Data Acquisition: For the simultaneous acquisition of MRI and PET data, 39 participants (19 female; 67±8 years) were recruited for scanning on Scanner 1, yielding 40 datasets, while 40 other participants (23 female; 64±11 years) were scanned on Scanner 2. T1-weighted and T2-weighted MR images were acquired on both scanners, while T2 FLAIR was acquired on Scanner 1 only. The amyloid radiotracer 18F-florbetaben was injected into the subjects, and PET data were acquired simultaneously with the MRI data 90-110 minutes after injection. The raw list-mode PET data were reconstructed to produce the full-dose ground-truth image and were also either randomly undersampled by a factor of 100 (Scanner 1) or framed for 1 minute from the start of the PET acquisition (Scanner 2) and reconstructed to produce low-dose PET images (1% dose for Scanner 1, ~5% dose for Scanner 2).
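As an illustration of the Scanner 1 dose reduction, a 100-fold random undersampling of list-mode coincidence events might look like the sketch below. The array-based event representation and the function name are hypothetical; real list-mode data are vendor-specific binary streams.

```python
import numpy as np

def undersample_listmode(events: np.ndarray, factor: int = 100,
                         seed: int = 0) -> np.ndarray:
    """Randomly keep 1/factor of the list-mode coincidence events.

    `events` is a hypothetical (N, k) array of decoded coincidence
    records; keeping 1% of events simulates a 1% injected dose.
    """
    rng = np.random.default_rng(seed)
    keep = rng.random(len(events)) < 1.0 / factor
    return events[keep]
```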

U-net Implementation and Transfer Learning: We trained a convolutional neural network (“U-net”) with the proposed structure (Figure 1)4,5 using data from Scanner 1. The inputs of the network are the multi-contrast MR images (T1-weighted, T2-weighted, T2 FLAIR) and the low-dose PET image. 5-fold cross-validation (32 scans for training, 8 for testing per trained network) was used for training. For transfer learning to Scanner 2 data, we used a model trained on Scanner 1 data to initialize the training. T1-weighted images were used as inputs for the missing T2 FLAIR channel, and the ~5%-dose images were used as the low-dose input. A low learning rate of 0.0001 was used and the network was fine-tuned for 100 epochs; 5-fold cross-validation was also used during transfer learning.
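To make the fine-tuning step concrete, below is a minimal PyTorch sketch. The framework, the placeholder UNet body, the Adam optimizer, the L1 loss, the data loader, and the checkpoint filename are all assumptions; only the four-channel input, the T1w-for-FLAIR substitution, the 0.0001 learning rate, and the 100 epochs come from the text.

```python
import torch
import torch.nn as nn

class UNet(nn.Module):
    """Stand-in for the encoder-decoder of Figure 1; the real layer
    configuration is not reproduced here."""
    def __init__(self, in_ch: int = 4, out_ch: int = 1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

def fine_tune(model, scanner2_loader, epochs=100, lr=1e-4):
    """Fine-tune a Scanner 1 model on Scanner 2 data (learning rate and
    epoch count from the abstract; optimizer and loss are assumptions)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        for t1, t2, lowdose, fulldose in scanner2_loader:
            # Scanner 2 lacks T2 FLAIR, so the T1w image fills that channel.
            x = torch.cat([t1, t2, t1, lowdose], dim=1)
            optimizer.zero_grad()
            loss = loss_fn(model(x), fulldose)
            loss.backward()
            optimizer.step()
    return model

model = UNet(in_ch=4)  # channels: T1w, T2w, (T1w for FLAIR), low-dose PET
model.load_state_dict(torch.load("scanner1_unet.pt"))  # hypothetical Scanner 1 checkpoint
```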

Data Analysis and Reader Studies: T1-based brain masks, generated for each subject with the FreeSurfer software6, were used for voxel-based analyses. To compare the synthesized and low-dose PET images with the full-dose images, the image quality metrics peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and root mean square error (RMSE) were calculated in each axial slice of the volumes. Student’s t-tests were used to assess differences in the metrics across datasets. A certified physician read the reconstructed and synthesized images (randomized and anonymized) and rated the image quality (5-point scale; scores of 4 and 5 were defined as “good quality”) as well as the amyloid uptake status (positive/negative).
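A compact sketch of the per-slice metric computation is shown below, assuming scikit-image for PSNR and SSIM; the masking convention and the data_range choice are assumptions, not details stated in the abstract.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def slice_metrics(full, test, mask):
    """PSNR, SSIM, and RMSE for one axial slice inside the brain mask.

    `full`, `test` are 2D float arrays; `mask` is a 2D boolean brain mask.
    data_range is taken from the full-dose slice (an assumed convention).
    """
    rng = full[mask].max() - full[mask].min()
    psnr = peak_signal_noise_ratio(full * mask, test * mask, data_range=rng)
    ssim = structural_similarity(full * mask, test * mask, data_range=rng)
    rmse = np.sqrt(np.mean((full[mask] - test[mask]) ** 2))
    return psnr, ssim, rmse
```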

Results and Discussion

Qualitatively, the synthesized images (both before and after fine-tuning) from Scanner 2 data showed marked noise reduction, and the image quality improved further with fine-tuning on Scanner 2 data (Figure 2). After transfer learning, the image quality metrics also showed improvement over the low-dose images, at a level slightly greater than that achieved by the original trained network on Scanner 1 data (p<0.05 for all comparisons, Figure 3). The readings showed that the fine-tuned network produced a greater proportion of good-quality images than even the full-dose images (95% vs. 20%; 95% confidence interval [CI]: 83.5%-98.6% vs. 10.5%-34.8%; Figure 4). The synthesized images also showed high diagnostic accuracy against the full-dose reads (accuracy=90.0%; 95% CI: 77.0%-96.0%, for the full-dose vs. synthesized images both with and without fine-tuning). Most (65%) of the low-dose images were uninterpretable.
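For reference, the quoted proportion intervals are consistent with Wilson score intervals on the counts implied by the 40-scan reader study (38/40 and 8/40 rated good). The sketch below, using statsmodels (an illustrative choice, not necessarily the authors' software), reproduces them.

```python
from statsmodels.stats.proportion import proportion_confint

# Counts inferred from the reported proportions (95% and 20% of 40 scans).
for label, good in [("fine-tuned synthesized", 38), ("full-dose", 8)]:
    lo, hi = proportion_confint(good, 40, alpha=0.05, method="wilson")
    print(f"{label}: {good/40:.0%} (95% CI {lo:.1%}-{hi:.1%})")
# -> 95% (83.5%-98.6%) and 20% (10.5%-34.8%), matching the abstract.
```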

Conclusion

We have shown that, after fine-tuning, a network trained on PET/MRI data acquired on one scanner can produce diagnostic-quality images, with noise reduction and image quality improvement, for data acquired on another scanner. This can potentially overcome bias from the source of the training data and increase the utility of pre-trained neural networks for large multi-center medical imaging studies.

Acknowledgements

This project was made possible by the NIH grant P41-EB015891, GE Healthcare, the Michael J. Fox Foundation for Parkinson’s Research, the Stanford Alzheimer’s Disease Research Center, the Foundation of the ASNR, and Piramal Imaging.

References

1. Schütz L, et al. "Feasibility and acceptance of simultaneous amyloid PET/MRI." Eur J Nucl Med Mol Imaging. 2016 Nov;43(12):2236-2243.

2. Chen KT, et al. "Ultra-low-dose 18F-florbetaben Amyloid PET Imaging using Deep Learning with Multi-contrast MRI Inputs." Radiology 2018, in press.

3. Pan SJ, Yang Q. "A Survey on Transfer Learning." IEEE Trans Knowl Data Eng. 2010;22(10):1345-1359.

4. Ronneberger O, Fischer P, Brox T. "U-Net: Convolutional Networks for Biomedical Image Segmentation." arXiv:1505.04597 (2015).

5. Chen H, et al. "Low-Dose CT with a Residual Encoder-Decoder Convolutional Neural Network (RED-CNN)." arXiv:1702.00288 (2017).

6. Dale AM, Fischl B, Sereno MI. "Cortical surface-based analysis. I. Segmentation and surface reconstruction." Neuroimage. 1999;9(2):179-94.

Figures

Figure 1. A schematic of the U-net used in this work. The arrows denote computational operations and the tensors are denoted by boxes with the number of channels indicated above each box.

Figure 2. MRI and PET (overlaid on the MRI) images of representative subjects from both scanners. The synthesized PET images showed noise reduction relative to the low-dose images. The synthesized PET image from Scanner 2 data showed better resemblance to the full-dose image after fine-tuning the network.

Figure 3. Image quality metrics comparing the original low-dose image from both scanners with its synthesized counterpart(s) from the network.

Figure 4. Proportion of image quality scores for each image type as rated by a certified physician.
