Hamed Yousefi1, Chunwei Ying2, Mahdjoub Hamdi2, Richard Laforest2, and Hongyu An2
1Imaging Science, Washington University in St. Louis, St. Louis, MO, United States, 2Washington University in St. Louis, St. Louis, MO, United States
Synopsis
Keywords: Machine Learning/Artificial Intelligence, Brain
In this study, dynamic brain PET denoising was performed using registered MRI and PET images reconstructed with the OSEM method. The fundamental challenge in applying a supervised learning approach is the lack of ground truth. The resulting spatiotemporal images improved image quality according to CNR, SNR, and CRC metrics while maintaining TACs close to those of the original raw PET. The method requires only one trained network, one set of matrices for a statistical temporal PCA model, and 4D dynamic PET data as input.
Introduction
Dynamic PET is a potent imaging technique that can evaluate the biological function of a living body1. These analyses are performed on time activity curves (TACs) at the regional and voxel levels. Many recent approaches have significantly improved image-denoising performance through convolutional neural networks (CNNs), and deblurring solutions rely on more complex architectures such as UNet, GAN, or CycleGAN2, 3. However, the black-box nature of AI models can make them less desirable in a clinical setting, because clinical end-users need understandable explanations4, 5. In this study, we merged a temporal denoising method using PCA, based on a mean orthonormalized atlas of principal components (PCs) over time, with a spatial kernel-based strategy using a 3D residual UNet (3D ResUNet). Finally, 4D simulated and real PET data are used to evaluate the TACs and image quality (see figure 1).
Method
Most deep learning-based techniques that rely only on spatial information generate blurry images; temporal information from 4D dynamic PET can mitigate this. In this work, a quadratic polynomial is fitted to each voxel's intensity waveform over time as a predefined curve-smoothing step. This temporal smoothing keeps the number of components small during the PCA implementation. Noise levels were assumed to vary with time and region, so image quality and TAC consistency may change depending on how many PCs are retained at any given time within each ROI. When using the region-temporal PCA approach to estimate the appropriate number of PCs, time and region must therefore be considered simultaneously. The optimal number of PCs is reached when the output TAC error is less than 5% relative to the original raw PET and the maximum SNR is achieved.
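The PC-selection rule described above (the fewest PCs whose reconstruction keeps the TAC error under 5% of the raw data) can be sketched as follows. This is a minimal illustration only, assuming TACs are stored as a voxels-by-frames array per ROI; the function names, the quadratic pre-smoothing step, and the SVD-based PCA are our assumptions, not the authors' implementation:

```python
import numpy as np

def quadratic_smooth(tacs, t):
    """Fit a quadratic polynomial to each voxel's TAC (rows) over frame times t."""
    coeffs = np.polyfit(t, tacs.T, deg=2)  # shape (3, n_voxels)
    return (coeffs[0][:, None] * t**2
            + coeffs[1][:, None] * t
            + coeffs[2][:, None])

def pca_denoise(tacs, max_err=0.05):
    """Reconstruct ROI TACs from the fewest PCs whose mean absolute
    reconstruction error stays under max_err relative to the raw TACs."""
    mean = tacs.mean(axis=0)
    centered = tacs - mean
    # SVD of the centered TAC matrix: rows of Vt are temporal PCs
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    for k in range(1, len(S) + 1):
        recon = mean + (U[:, :k] * S[:k]) @ Vt[:k]
        rel_err = np.abs(recon - tacs).mean() / (np.abs(tacs).mean() + 1e-12)
        if rel_err < max_err:
            return recon, k
    return recon, len(S)
```

In the full method the loop would also track SNR and stop at the PC count that maximizes it while satisfying the error bound; only the 5% error criterion is shown here for brevity.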
However, this temporal denoising across all voxels is time-consuming; a CNN block can replace the entire strategy. DL-TESLA uses a 3D ResUNet, which combines the advantages of the residual network ResNet and UNet, to estimate the denoised PET6. The training input consisted of two channels, registered MRI and 3D temporally sampled PET, and the target was the output of the conventional regional-temporal PCA. The denoised data were then used to generate the mean-PC statistical temporal model for the entire brain7, 8. Here, we assumed that healthy brain tissue activity responds identically across subjects exposed to the same radiotracer and imaging conditions. Orthonormalization was performed via Gram-Schmidt computation. Since these PCs (see figure 2.a) were obtained from the CNN-denoised data, they were assumed to serve as denoising components. The temporal-based approach maintains the contrast of the resulting images, as opposed to the oversmoothed output of the spatial-based ResUNet method. Finally, merging the spatial-based 3D ResUNet and the temporal-based PCA with a few PCs, as shown in figure 2.b, yields high-quality images while keeping the TAC error minimal.
Results
Imaging data consisted of 77 subjects with 4D FDG/AV45 brain PET and accompanying MRI. FLIRT registration and FreeSurfer segmentation were applied to the MRI. Relative scatter correction and ordered subset expectation maximization (OSEM) were used for reconstruction. The temporal reconstruction comprises 26 non-uniform list-mode frames from the moment of injection to 70 minutes post-injection.
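The results are assessed with SNR, CNR, and CRC over gray- and white-matter ROIs. The abstract does not spell out its exact formulas, so the sketch below uses common conventions (foreground and background ROIs passed as flat arrays of voxel values); treat all three definitions, especially the ratio form of the CRC, as our assumptions rather than the authors' implementation:

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio within a region of interest."""
    return roi.mean() / roi.std()

def cnr(fg, bg):
    """Contrast-to-noise ratio between foreground and background ROIs."""
    return abs(fg.mean() - bg.mean()) / bg.std()

def crc(fg, bg):
    """Contrast recovery, here expressed as the measured
    foreground-to-background mean ratio minus one (one common convention)."""
    return fg.mean() / bg.mean() - 1.0
```

Percentage improvements such as those reported below would then be computed from these values before and after denoising.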
The SNR, CNR, and CRC were examined in different ROIs to assess image quality. Additionally, the MSE between the processed and original images, as well as the TACs in each ROI, were compared. The white and gray matter of the cerebrum and cerebellum served as the foreground and background for the CRC and CNR criteria. As shown in figures 3 and 4, the proposed spatiotemporal method improves the raw PET images' CNR by 62.0±15.5% and 49.7±11.7% in the cerebellum and cerebrum, respectively. The results also show that the spatiotemporal approach improves the SNR in cerebellar WM and GM by 111.2±22.5% and 95.4±32.5%, and in cerebral WM and GM by 73.4±16.1% and 59.3±19.8%, respectively.
Discussion
The proposed 4D PET denoising method has two technical advantages: (1) the framework does not require pretraining on, or the creation of, training datasets from other static PET scans or different radiotracer doses; and (2) both image quality (contrast and SNR) and TAC consistency are taken into account in the generated denoised image. The mixed spatiotemporal method not only improves image quality under both SNR and contrast criteria but also lowers the mean squared error of the time activities in all ROIs (see figure 4).
Conclusion
The fundamental challenge in using a supervised learning approach is the lack of ground truth. We proposed a simple conventional temporal-regional PCA strategy for cleaning the data. Since implementing spline fitting on noisy data across all voxels required a significant amount of time, we substituted a CNN for this step. We then employed the temporal PCs derived from the CNN-denoised data to address the oversmoothing of the CNN output. In addition to increasing contrast by 6.3±3.0% and 9.4±7.6% in the cerebellum and cerebrum under the CRC criterion, adding temporally denoised images from the mean PCA model to the 3D ResUNet also produces results more consistent with the TACs than the deep learning-only method.
Acknowledgements
No acknowledgement found.
References
1. Hashimoto F, Ohba H, Ote K, Kakimoto A, Tsukada H, Ouchi Y. 4D deep image prior: dynamic PET image denoising using an unsupervised four-dimensional branch convolutional neural network. Physics in Medicine & Biology. 2021 Jan 13;66(1):015006.
2. Wang Y, Zhou L, Yu B, Wang L, Zu C, Lalush DS, Lin W, Wu X, Zhou J, Shen D. 3D auto-context-based locality adaptive multi-modality GANs for PET synthesis. IEEE Transactions on Medical Imaging. 2018 Nov 29;38(6):1328-39.
3. Zhao K, Zhou L, Gao S, Wang X, Wang Y, Zhao X, Wang H, Liu K, Zhu Y, Ye H. Study of low-dose PET image recovery using supervised learning with CycleGAN. PLoS ONE. 2020 Sep 4;15(9):e0238455.
4. Liu H, et al. PET image denoising using a deep-learning method for extremely obese patients. IEEE Transactions on Radiation and Plasma Medical Sciences. 2022 Sept;6(7):766-70. doi: 10.1109/TRPMS.2021.3131999.
5. Liu J, Malekzadeh M, Mirian N, Song TA, Liu C, Dutta J. Artificial intelligence-based image enhancement in PET imaging: noise reduction and resolution enhancement. PET Clinics. 2021 Oct 1;16(4):553-76.
6. Zhang J, Jiang Z, Dong J, Hou Y, Liu B. Attention gate resU-Net for automatic MRI brain tumor segmentation. IEEE Access. 2020 Mar 24;8:58533-45.
7. Yousefi H, Mohammadi F, Mirian N, Amini N. Tuberculosis bacilli identification: a novel feature extraction approach via statistical shape and color models. In 2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA) 2020 Dec 14 (pp. 366-371). IEEE.
8. Yousefi H, Fatehi M, Bahrami M, Zoroofi RA. 3D statistical shape models of radius bone for segmentation in multi-resolution MRI data sets. In 2014 21st Iranian Conference on Biomedical Engineering (ICBME) 2014 Nov 26 (pp. 246-251). IEEE.