4704

Ultra-low-dose Amyloid PET/MRI Reconstruction by Generative Adversarial Network
Jiahong Ouyang1, Kevin T. Chen2, Enhao Gong2, John Pauly2, and Greg Zaharchuk2

1Carnegie Mellon University, Pittsburgh, PA, United States, 2Stanford University, Stanford, CA, United States

Synopsis

Amyloid PET is widely used in the early diagnosis of dementia. However, injection of the radiotracer exposes the subject to radiation. We propose a novel method based on a Generative Adversarial Network (GAN) with perceptual loss to achieve diagnostic-quality PET images from ultra-low-dose PET images, with or without additional MR contrasts as inputs.

Introduction

Amyloid PET is widely used in the early diagnosis of dementia. A previous work1 used a U-Net to generate diagnostic amyloid PET images from ultra-low-dose PET with or without multi-contrast MR inputs. However, the synthesized images are blurry and miss details that are important for clinical diagnosis, especially when only PET is used as input. We propose a novel method based on a Generative Adversarial Network (GAN) to achieve higher-quality PET images from ultra-low-dose PET images, with or without additional MR contrasts as inputs.

Methods

Data Acquisition and Preprocessing: Data from 40 patients were acquired on a PET/MR scanner after injection of 330±30 MBq of the amyloid radiotracer 18F-florbetaben. The raw list-mode PET data were reconstructed as the standard-dose ground truth and randomly undersampled by a factor of 100 to reconstruct 1% ultra-low-dose PET images. T1-weighted, T2-weighted, and T2-FLAIR MR images were registered to the standard-dose PET. Each volume was of size 256 x 256 x 89 and was normalized by the mean value of its non-zero region. The top and bottom 20 slices, which carry little information, were removed. 4-fold cross-validation was used.
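The abstract does not give implementation details for the preprocessing step; a minimal sketch of the normalization and slice trimming described above, assuming a numpy volume of shape 256 x 256 x 89 (the function name `preprocess_volume` is our own):

```python
import numpy as np

def preprocess_volume(vol, n_trim=20):
    """Normalize a volume by the mean of its non-zero region, then
    drop the top and bottom n_trim axial slices, which carry little
    information.

    vol: array of shape (H, W, n_slices), e.g. (256, 256, 89).
    """
    nonzero = vol[vol != 0]
    if nonzero.size > 0:
        vol = vol / nonzero.mean()
    # Keep only the central axial slices.
    return vol[:, :, n_trim:vol.shape[2] - n_trim]
```

With 89 slices and 20 trimmed from each end, 49 central slices remain per volume.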

Architecture: The input to the network is a stack of neighboring low-dose PET slices plus one slice for each MR contrast. The network, shown in Figure 1, consists of three modules: a generator, a discriminator, and a task-specific network. An encoder-decoder network serves as the generator, mapping the input to the corresponding standard-dose image. A pixel-level L1 loss ensures that the synthesized image shares the global structure of the standard-dose image. A CNN-based discriminator evaluates whether standard-dose and synthesized images are real or fake via an adversarial loss. Feature matching2 was applied to reduce hallucinated structures and to stabilize training by forcing the generator to match the expected values of the features at intermediate layers of the discriminator. An amyloid status classifier pre-trained on the standard-dose datasets was included as the task-specific network: a perceptual loss3 computed on its features between image pairs ensures that the synthesized images share the pathological (amyloid-status-related) features of the standard-dose images.
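The abstract specifies four loss terms but not their weighting or exact forms; the composite generator objective can be sketched as below, assuming numpy arrays stand in for network outputs and the weights `w_l1`, `w_fm`, and `w_perc` are illustrative, not the authors' values:

```python
import numpy as np

def l1_loss(fake, real):
    # Pixel-level L1 term: keeps global structure close to standard-dose.
    return np.mean(np.abs(fake - real))

def feature_matching_loss(feats_fake, feats_real):
    # Match expected values of intermediate discriminator features
    # (Salimans et al.2) to reduce hallucinated structures.
    return sum(np.mean(np.abs(ff.mean(axis=0) - fr.mean(axis=0)))
               for ff, fr in zip(feats_fake, feats_real))

def perceptual_loss(task_feats_fake, task_feats_real):
    # L2 distance between features of the pre-trained amyloid status
    # classifier (the task-specific network).
    return sum(np.mean((ff - fr) ** 2)
               for ff, fr in zip(task_feats_fake, task_feats_real))

def generator_loss(fake, real, d_fake_score,
                   feats_fake, feats_real,
                   task_feats_fake, task_feats_real,
                   w_l1=100.0, w_fm=10.0, w_perc=10.0):
    # Non-saturating adversarial term: the generator tries to make the
    # discriminator assign high "real" probability to synthesized slices.
    eps = 1e-8
    adv = -np.mean(np.log(d_fake_score + eps))
    return (adv
            + w_l1 * l1_loss(fake, real)
            + w_fm * feature_matching_loss(feats_fake, feats_real)
            + w_perc * perceptual_loss(task_feats_fake, task_feats_real))
```

When the synthesized image matches the standard-dose image and the discriminator is fully fooled, every term vanishes, which is the intended optimum for the generator.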

Evaluation: Three image quality metrics were used: peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and mean squared error (MSE). For the proposed network with PET-only input, a radiologist scored image quality on a 1-5 scale and read the amyloid status (positive/negative) of each standard-dose and synthesized volume to test diagnostic consistency. We compared the proposed method with only low-dose PET as input against two models from the state-of-the-art method1: a PET-only model with only low-dose PET input, and a PET-MR model that additionally took multi-contrast MR as input.
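For reference, MSE and PSNR can be computed directly as below (SSIM requires a windowed computation and is typically taken from a library such as scikit-image); the choice of `data_range` here is one common convention and may differ from the authors':

```python
import numpy as np

def mse(pred, ref):
    # Mean squared error between synthesized and standard-dose images.
    return np.mean((pred - ref) ** 2)

def psnr(pred, ref, data_range=None):
    # PSNR in dB; data_range defaults to the dynamic range of the
    # reference (standard-dose) image.
    if data_range is None:
        data_range = ref.max() - ref.min()
    err = mse(pred, ref)
    if err == 0:
        return np.inf
    return 10.0 * np.log10(data_range ** 2 / err)
```

For example, a prediction offset from a unit-range reference by a constant 0.1 gives MSE 0.01 and PSNR 20 dB.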


Results

Qualitatively, the synthesized PET images from the two proposed models show remarkable improvement compared with the state-of-the-art method. Results are shown in Figure 2. Quantitatively, for the image metrics shown in Figure 3, the proposed PET-only model on average outperforms the state-of-the-art PET-only model by 1.35 dB in PSNR, 22.70% in MSE, and 2.15% in SSIM, and achieves results comparable to the state-of-the-art PET-MR model. For the image quality scores shown in Figure 4, the proposed PET-only model averaged 4.1, higher than the scores of 3.1 and 4.0 for the state-of-the-art PET-only and PET-MR models, respectively, and comparable to the standard-dose. For amyloid status diagnosis, only 2 of the 40 cases were misdiagnosed using results from the proposed PET-only model, compared with 5 and 4 for the two state-of-the-art models.

Discussion

Compared with the state-of-the-art method given the same input, the proposed method generates diagnostic synthesized images that are less blurry, contain more detail, and maintain more accurate structures. The image metrics and quality scores show that the image quality of the proposed method is superior to the state-of-the-art PET-only model and comparable to the PET-MR model. The amyloid status readings show that the radiologist made much more consistent diagnoses from the images produced by the proposed method, indicating that they retain more accurate pathological features.

Conclusion

Standard-dose amyloid PET images can be synthesized from ultra-low-dose (1%) amyloid PET images by a GAN. Both quantitative and qualitative evaluations demonstrate the superiority of the proposed method over the state-of-the-art model with the same inputs, and even over the model with additional MR inputs. Further reader studies will be conducted to evaluate the proposed PET-MR model.

Acknowledgements

This project was made possible by the NIH grant P41-EB015891, GE Healthcare, the Michael J. Fox Foundation for Parkinson’s Research, the Stanford Alzheimer’s Disease Research Center, and Piramal Imaging.

References

1. Chen et al., Ultra-low-dose 18F-florbetaben Amyloid PET Imaging using Deep Learning with Multi-contrast MRI Inputs, Radiology, 2018 (in press)

2. Salimans et al., Improved Techniques for Training GANs, arXiv:1606.03498, 2016.

3. Johnson et al., Perceptual Losses for Real-Time Style Transfer and Super-Resolution, arXiv:1603.08155, 2016.

Figures

Figure 1. Overview of the proposed method.

Figure 2. Comparison of the qualitative results between standard-dose, state-of-the-art and proposed PET-only and PET-MR models.

Figure 3. Comparison of three image metrics between the low-dose PET, the results from the state-of-the-art and proposed PET-only and PET-MR models.

Figure 4. Comparison of the image quality score between low-dose, standard-dose, state-of-the-art PET-MR and PET-only model, and the proposed PET-only model.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)