
A Predictive Model for Chronic Low Back Pain Objective Diagnosis Exploiting Multi-Modal Brain [11C]-PBR28 PET/MR Radiomic Features
Angel Torrado-Carvajal1, Daniel S Albrecht1, Ken Chang1, Andrew L Beers1, Oluwaseun Akeju2, Minhae Kim1, Courtney Bergan1, Duncan J Hodkinson3, Robert R Edwards4, Yi Zhang2, Jacob M Hooker1, Vitaly Napadow1, Jayashree Kalpathy-Cramer1, and Marco L Loggia1

1Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA, United States, 2Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States, 3Department of Anesthesiology, Perioperative and Pain Medicine, Boston Children’s Hospital and Harvard Medical School, Boston, MA, United States, 4Department of Anesthesiology, Perioperative and Pain Medicine, Brigham and Women's Hospital and Harvard Medical School, Boston, MA, United States

Synopsis

Chronic pain affects more than 100 million individuals in the United States alone. However, our ability to diagnose and properly treat pain disorders is currently limited, in part due to the lack of reliable biomarkers. In this work, we present a predictive model for the classification of chronic low back pain (cLBP) patients using multi-modal brain [11C]-PBR28 PET/MR radiomic features extracted from structural, functional, and molecular imaging. Our results suggest that a PET/MR classifier (RFPET/MR) performs better than single-modality classifiers (RFPET and RFMR) in terms of AUC (p’s<0.01), accuracy (p’s<0.01), sensitivity (p’s<0.05), and specificity (p’s<0.01), highlighting the power of multi-modal over single-modality imaging.

Introduction

Despite the staggering prevalence and societal impact of chronic pain1, our ability to diagnose and properly treat pain disorders is limited, in part due to the lack of reliable biomarkers. Recent developments in artificial intelligence and their application to brain imaging, however, demonstrate great promise in aiding the search for pain biomarkers.

Radiomics, an approach so far applied mostly in oncology, uses data-characterization algorithms to extract large numbers of quantitative features from medical images, potentially uncovering characteristics that may be invisible to the naked eye2. In this work we assess the performance of a predictive model for the classification of chronic low back pain (cLBP) patients using multi-modal radiomic features extracted from structural (T1), functional (fractional amplitude of low-frequency fluctuations; fALFF), and molecular ([11C]-PBR28, a radioligand for the glial marker translocator protein, TSPO) imaging.

Methods

Datasets: 25 cLBP patients and 29 healthy controls underwent brain [11C]-PBR28 PET/MR imaging on a combined MR-PET system (Siemens Medical Solutions) consisting of a 3T Siemens TIM Trio with a BrainPET insert. MR imaging included a T1-weighted volume (TR/TE1/TE2/TE3/TE4=2530/1.64/3.5/5.36/7.22ms, flip angle=7°, voxel size=1x1x1mm, acquisition matrix=280x280x208) and a resting-state BOLD fMRI sequence (TR/TE=2000/30ms, flip angle=90°, voxel size=3.1x3.1x3mm, 37 slices).

Data Preprocessing: Standardized uptake values from [11C]-PBR28 data collected 60-90 min post-injection were normalized by the occipital cortex3 (SUVR). SUVR images were corrected for the Ala147Thr TSPO polymorphism (which predicts binding affinity to [11C]-PBR284), injected dose, and age. MRI bias correction was performed on the T1-weighted images. Resting-state BOLD data underwent standard pre-processing, and voxelwise fALFF maps were calculated for two bands which were found to be altered in chronic pain disorders5: slow-4 (0.027-0.073Hz) and slow-5 (0.01-0.027Hz).
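As an illustration of the two quantitative maps described above, a minimal Python sketch of SUVR normalization against an occipital pseudo-reference region and of a single-voxel fALFF computation is given below; the helper names, array shapes, and synthetic data are assumptions for illustration only, not the authors' actual preprocessing pipeline.

import numpy as np

def compute_suvr(suv, occipital_mask):
    # Normalize a voxelwise SUV map by the mean SUV in the occipital-cortex
    # pseudo-reference region, yielding SUVR
    return suv / suv[occipital_mask].mean()

def compute_falff(bold, tr, band=(0.027, 0.073)):
    # Fractional ALFF for one voxel: spectral amplitude summed over the target
    # band divided by amplitude summed over the full detectable frequency range
    freqs = np.fft.rfftfreq(len(bold), d=tr)
    amp = np.abs(np.fft.rfft(bold - bold.mean()))
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return amp[in_band].sum() / amp[1:].sum()  # skip the DC term

# Example: slow-4 fALFF for a synthetic time series sampled at TR = 2 s
ts = np.random.default_rng(0).standard_normal(240)
print(compute_falff(ts, tr=2.0, band=(0.027, 0.073)))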

Regions of Interest: We defined 21 regions of interest (ROIs) corresponding to brain functional areas thought to be involved in pain processing. These include dorsolateral (dLPFC) and medial prefrontal, anterior and posterior cingulate, anterior and posterior insular, primary and secondary somatosensory cortices, precuneus, the periaqueductal gray, thalamus, putamen and nucleus accumbens (Fig 1). ROIs were defined using a functionally-defined dLPFC label6, the Harvard-Oxford Cortical Atlas (remaining cortical structures), and the segmentation output from FSL-FAST (subcortical structures).
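As a rough sketch of atlas-based ROI definition, the snippet below uses nilearn's copy of the Harvard-Oxford cortical atlas to build a binary mask for one pain-related region; the atlas resolution, threshold, and chosen label are assumptions, and the functionally-defined dLPFC label and FSL-FAST segmentations used in the study are not reproduced here.

from nilearn import datasets, image

# Fetch the maximum-probability Harvard-Oxford cortical atlas (25% threshold)
atlas = datasets.fetch_atlas_harvard_oxford("cort-maxprob-thr25-1mm")
labels = atlas.labels  # list of region names; index 0 is "Background"

# Binary mask for one pain-related cortical region, e.g., the insular cortex
insula_idx = labels.index("Insular Cortex")
insula_mask = image.math_img(f"img == {insula_idx}", img=atlas.maps)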

Radiomic Feature Extraction: We calculated 140 shape, intensity, and gray-level co-occurrence matrix (GLCM) features7 using the feature extraction package QTIM_Tools.
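For readers unfamiliar with GLCM texture features, a minimal scikit-image sketch of Haralick-style feature computation on a 2-D ROI patch follows; the quantization level, offsets, and synthetic patch are illustrative assumptions and differ from the QTIM_Tools implementation used in the study.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi_values, levels=32):
    # Quantize ROI intensities to a fixed number of gray levels, build the
    # co-occurrence matrix, and return a few Haralick-style features
    bins = np.linspace(roi_values.min(), roi_values.max(), levels)
    q = (np.digitize(roi_values, bins) - 1).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("ASM", "energy", "correlation", "contrast")}

# Example on a synthetic 2-D patch standing in for an ROI slice
patch = np.random.default_rng(1).random((64, 64))
print(glcm_features(patch))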

Machine Learning Algorithm: Random forest (RF) classifiers were implemented with the scikit-learn python module8 and evaluated using repeated stratified 5-fold cross-validation. Individual features with a univariate area under the receiver operating characteristic (ROC) curve (AUC) ≥0.7 were fed into the RF classifiers, which were grown to 256 trees (estimators) with the number of features per decision tree set to the square root of the number of selected features. Classifier performance was assessed by the AUC from ROC curve analysis.
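A minimal scikit-learn sketch of this pipeline is shown below, assuming a subjects-by-features matrix X and binary labels y (cLBP=1, control=0); the synthetic data, the direction-agnostic univariate AUC filter, and the number of cross-validation repeats are assumptions not specified in the abstract.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(2)
X = rng.standard_normal((54, 140))   # 54 subjects x 140 radiomic features (synthetic)
y = np.array([1] * 25 + [0] * 29)    # 25 cLBP patients, 29 healthy controls

# Univariate filter: keep features whose single-feature AUC is >= 0.7
aucs = []
for j in range(X.shape[1]):
    a = roc_auc_score(y, X[:, j])
    aucs.append(max(a, 1 - a))       # direction-agnostic AUC
keep = np.array(aucs) >= 0.7
X_sel = X[:, keep] if keep.any() else X   # fall back to all features on this synthetic data

# Random forest with 256 trees and sqrt(n_features) per split,
# evaluated with repeated stratified 5-fold cross-validation
rf = RandomForestClassifier(n_estimators=256, max_features="sqrt", random_state=0)
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
scores = cross_val_score(rf, X_sel, y, cv=cv, scoring="roc_auc")
print(scores.mean(), scores.std())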

Statistical Analysis: An analysis of variance (ANOVA) and post-hoc pairwise comparisons using Tukey’s HSD test were performed to compare the performance (AUC, accuracy, sensitivity, and specificity) of classifiers using only PET (i.e., SUVR) features (RFPET), only MR (i.e., T1 and fALFF) features (RFMR), or all features (RFPET/MR). Statistical significance was set at p<0.05.
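As a sketch of this comparison, the snippet below assumes per-fold AUC values for the three classifiers are available as arrays; the numbers are synthetic placeholders, not the study's results.

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(3)
auc_pet = rng.normal(0.75, 0.05, 50)      # placeholder per-fold AUCs for RFPET
auc_mr = rng.normal(0.78, 0.05, 50)       # placeholder per-fold AUCs for RFMR
auc_petmr = rng.normal(0.85, 0.05, 50)    # placeholder per-fold AUCs for RFPET/MR

# One-way ANOVA across the three classifiers
print(f_oneway(auc_pet, auc_mr, auc_petmr))

# Post-hoc pairwise comparisons with Tukey's HSD at alpha = 0.05
scores = np.concatenate([auc_pet, auc_mr, auc_petmr])
groups = ["PET"] * 50 + ["MR"] * 50 + ["PET/MR"] * 50
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))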

Results

193 features were identified in univariate analyses as having an AUC≥0.7 (Fig 2). Among these, ASM-, energy-, and correlation-based features were the most represented, whereas only 4/193 morphology features and 22/193 intensity statistics proved salient. These features were used to train two single-modality classifiers, RFPET (27 PET features) and RFMR (166 MRI features), and a multimodal classifier, RFPET/MR (all 193 features). The RF classifiers were grown to 256 trees with the number of features per decision tree set to 5, 13, and 14, respectively. RFPET/MR performed better than RFPET and RFMR for AUC (p’s<0.01), accuracy (p’s<0.01), sensitivity (p’s<0.05), and specificity (p’s<0.01) (Fig 3 and Table 1).

Discussion

Our results suggest that radiomic features from multimodal [11C]-PBR28 PET/MR data may be used to classify individuals with or without cLBP. Further, RFPET/MR shows improved performance compared to RFPET and RFMR, highlighting the power of multi-modal over single-modality imaging. Research with larger sample sizes is needed to further validate the predictive value of these models, and the addition of other advanced MR modalities (e.g., diffusion, connectivity) may improve classifier performance.

Conclusion

This work demonstrates the promise of using radiomics, an approach so far mostly used in oncology, in conjunction with structural, functional and molecular brain imaging, for the classification of chronic pain patients. PET/MR radiomics is still in development, but its use may lead to a personalized pain management approach in the future. This work paves the way to the identification of objective biomarkers for pain, which would have significant clinical and societal implications.

Acknowledgements

Support: 1R21NS087472-01A1/1R01NS095937-01A1/W81XWH-14-1-0543/1R01NS094306-01A1 (MLL).

References

1. Harstall C, Ospina M. How prevalent is chronic pain? Pain Clinical Updates, 2003;11(2):1-4.

2. Gillies RJ, Kinahan PE, Hricak H. Radiomics: images are more than pictures, they are data. Radiology, 2015;278(2):563-577.

3. Albrecht DS, Normandin MD, Shcherbinin S, et al. Pseudoreference Regions for Glial Imaging with 11C-PBR28: Investigation in 2 Clinical Cohorts. J Nucl Med, 2018;59(1):107-114.

4. Owen DR, Yeo AJ, Gunn RN, et al. An 18-kDa translocator protein (TSPO) polymorphism explains differences in binding affinity of the PET radioligand PBR28. J Cereb Blood Flow Metab, 2012;32(1):1-5.

5. Hodkinson DJ, Wilcox SL, Veggeberg R, et al. Increased amplitude of thalamocortical low-frequency oscillations in patients with migraine. J Neurosci, 2016;36(30):8026-8036.

6. Yendiki A, Greve DN, Wallace S, et al. Multi-site characterization of an fMRI working memory paradigm: reliability of activation indices. Neuroimage, 2010;53(1):119-131.

7. Haralick RM, Shanmugam K, Dinstein IH. Textural features for image classification. IEEE T Syst Man Cyb, 1973;(6):610-621.

8. Pedregosa F, Varoquaux G, Gramfort A, et al. Scikit-learn: Machine learning in Python. J Mach Learn Res, 2011;12:2825-2830.

Figures

Figure 1. Representative subject showing various ROIs. Radiomic features were extracted for each modality (T1, fALFFslow-4, fALFFslow-5, and SUVR) from these ROIs, resulting in 11750 features per subject.

Figure 2. Mean AUC values for all PET (red), T1 (blue), fALFFslow-4 (orange), and fALFFslow-5 (green) features. Salient features (AUC≥0.7) are highlighted.

Figure 3. ROC curves for RFPET (left), RFMR (center), and RFPET/MR (right).

Table 1. Mean and standard deviation of the AUC, accuracy, sensitivity, and specificity for RFPET, RFMR and RFPET/MR.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019), 4265