Reducing the radiotracer dose in amyloid PET/MRI would increase the utility of this modality for early diagnosis and for multi-center trials of amyloid-clearing drugs. For dose reduction to be practical, networks trained on data from one site must generalize to data acquired at other sites. In this project we show that, after fine-tuning, a network trained on PET/MRI data acquired on one scanner can produce diagnostic-quality images, with reduced noise and improved image quality, from data acquired on another scanner.
Data Acquisition: For simultaneous acquisition of MRI and PET data, 39 participants (19 female; 67±8 years; 40 datasets) were scanned on Scanner 1 and another 40 participants (23 female; 64±11 years) on Scanner 2. T1-weighted and T2-weighted MR images were acquired on both scanners; T2 FLAIR was acquired on Scanner 1 only. The amyloid radiotracer 18F-florbetaben was injected, and PET data were acquired simultaneously with the MRI data 90-110 minutes after injection. The raw list-mode PET data were reconstructed to produce the full-dose ground-truth image and were also either randomly undersampled by a factor of 100 (Scanner 1) or framed to the first minute of the PET acquisition (Scanner 2) before reconstruction, producing low-dose PET images (1% and ~5% dose, respectively).
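The two low-dose simulation strategies above can be sketched on toy event data. This is a minimal numpy sketch, not the authors' reconstruction pipeline: real undersampling operates on scanner list-mode files, and the function names and uniform toy events here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def undersample_events(event_times, fraction=0.01, rng=rng):
    """Randomly keep a fraction of list-mode events
    (Scanner 1 style: factor-of-100 undersampling -> 1% dose)."""
    keep = rng.random(len(event_times)) < fraction
    return event_times[keep]

def frame_events(event_times, t_start=0.0, duration=60.0):
    """Keep only events from a short frame at the start of acquisition
    (Scanner 2 style: 1 minute of a 20-minute window -> ~5% dose)."""
    mask = (event_times >= t_start) & (event_times < t_start + duration)
    return event_times[mask]

# Toy example: 1e6 events spread uniformly over a 20-minute (1200 s) acquisition
events = rng.uniform(0, 1200, size=1_000_000)
low_dose_1 = undersample_events(events, fraction=0.01)  # ~1% of events
low_dose_2 = frame_events(events, duration=60.0)        # ~5% of events
```

The 1-minute frame retains roughly 60/1200 = 5% of the counts, which is why the two scanners' low-dose images correspond to 1% and ~5% dose levels.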
U-net Implementation and Transfer Learning: We trained a convolutional neural network ("U-net") with the proposed structure (Figure 1)4,5 using data from Scanner 1. The network inputs are the multi-contrast MR images (T1-weighted, T2-weighted, T2 FLAIR) and the low-dose PET image. 5-fold cross-validation (32 scans for training, 8 for testing per trained network) was used for training. For transfer learning to Scanner 2 data, a model trained on Scanner 1 data was used to initialize the training; T1-weighted images were substituted for the missing T2 FLAIR input channel, and the 5%-dose images served as the low-dose input. The network was fine-tuned for 100 epochs with a low learning rate of 0.0001, again using 5-fold cross-validation.
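The input handling described above, including the T1-for-FLAIR channel substitution on Scanner 2, can be sketched as follows. This is a hedged illustration: `stack_inputs` and the channel ordering are assumptions for clarity, not the authors' actual code.

```python
import numpy as np

def stack_inputs(t1, t2, pet_low, flair=None):
    """Stack the U-net input channels (multi-contrast MRI + low-dose PET).

    For Scanner 2, where no T2 FLAIR was acquired, the T1-weighted image
    is substituted for the missing FLAIR channel, as in the transfer-learning
    setup described above. Channel order is an illustrative assumption.
    """
    if flair is None:          # Scanner 2 case: FLAIR channel missing
        flair = t1
    return np.stack([t1, t2, flair, pet_low], axis=0)  # (channels, H, W)

# Toy slices: Scanner 2 input (no FLAIR available)
t1 = np.zeros((8, 8))
t2 = np.ones((8, 8))
pet_low = np.full((8, 8), 2.0)
x = stack_inputs(t1, t2, pet_low)
```

In the fine-tuning stage itself, the Scanner 1 weights initialize the optimizer and training proceeds for 100 epochs at a learning rate of 0.0001, so only the input preparation differs between the two scanners.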
Data Analysis and Reader Studies: T1-based brain masks of each subject, generated with the software FreeSurfer6, were used to restrict the voxel-based analyses to the brain. To compare the synthesized and low-dose PET images against the full-dose images, the image quality metrics peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and root-mean-square error (RMSE) were computed for each axial slice of the volumes. Student's t-tests were used to assess differences in these metrics across datasets. A certified physician read the reconstructed and synthesized images (randomized and anonymized) and rated the image quality on a 5-point scale (scores of 4 and 5 were defined as "good quality") as well as the amyloid uptake status (positive/negative).
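The per-slice metrics can be sketched as below. This is a simplified sketch under stated assumptions: SSIM is computed here from global slice statistics, whereas a windowed implementation (e.g. skimage.metrics.structural_similarity) was likely used in practice, and the choice of data range is an assumption.

```python
import numpy as np

def slice_metrics(pred, ref, mask, data_range=None):
    """PSNR, RMSE, and a global (non-windowed) SSIM for one axial slice,
    restricted to the brain mask. `data_range` defaults to the reference
    intensity range within the mask (an assumption for this sketch)."""
    p, r = pred[mask], ref[mask]
    if data_range is None:
        data_range = r.max() - r.min()
    rmse = np.sqrt(np.mean((p - r) ** 2))
    psnr = 20 * np.log10(data_range / rmse)
    # Standard SSIM stabilization constants (c1, c2) from the original formula
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_p, mu_r = p.mean(), r.mean()
    var_p, var_r = p.var(), r.var()
    cov = np.mean((p - mu_p) * (r - mu_r))
    ssim = ((2 * mu_p * mu_r + c1) * (2 * cov + c2)) / \
           ((mu_p ** 2 + mu_r ** 2 + c1) * (var_p + var_r + c2))
    return psnr, ssim, rmse

# Toy usage: a slice with a small constant bias relative to the reference
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
mask = np.ones_like(ref, dtype=bool)
psnr, ssim, rmse = slice_metrics(ref + 0.01, ref, mask)
```

Averaging these per-slice values over each volume, and comparing synthesized vs. low-dose images with paired t-tests, yields the quantitative results reported in the analysis.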
1. Schütz L, et al. "Feasibility and acceptance of simultaneous amyloid PET/MRI." Eur J Nucl Med Mol Imaging. 2016 Nov;43(12):2236-2243.
2. Chen KT, et al. "Ultra-low-dose 18F-florbetaben Amyloid PET Imaging using Deep Learning with Multi-contrast MRI Inputs." Radiology 2018, in press.
3. Pan SJ, Yang QA. "A Survey on Transfer Learning." IEEE Trans Knowl Data Eng 2010.
4. Ronneberger O, Fischer P, Brox T. "U-Net: Convolutional Networks for Biomedical Image Segmentation." arXiv:1505.04597 (2015).
5. Chen H, et al. "Low-Dose CT with a Residual Encoder-Decoder Convolutional Neural Network (RED-CNN)." arXiv:1702.00288 (2017).
6. Dale AM, Fischl B, Sereno MI. "Cortical surface-based analysis. I. Segmentation and surface reconstruction." Neuroimage. 1999;9(2):179-94.