0838

Evaluating multi-site rCBV consistency from DSC-MRI imaging protocols and post-processing software across the NCI Quantitative Imaging Network sites using a Digital Reference Object
Laura C. Bell1, Natenael Semmineh1, Hongyu An2, Cihat Eldeniz2, Richard Wahl2, Kathleen Schmainda3, Melissa Prah3, Bradley Erickson4, Panagiotis Korfiatis4, Chengyue Wu5, Anna Sorace5, Neal Rutledge5, Thomas Yankeelov5, Thomas Chenevert6, Dariya Malyarenko6, Yichu Liu7, Andrew Brenner7, Leland Hu8, Yuxiang Zhou8, Jerrold Boxerman9, Yi-Fen Yen10, Jayashree Kalpathy-Cramer10, Andrew Beers10, Mark Muzi11, Ananth Madhuranthakam12, Marco Pinho12, Brian Johnson12,13, and C. Chad Quarles1

1Barrow Neurological Institute, Phoenix, AZ, United States, 2Washington University in St. Louis, St. Louis, MO, United States, 3Medical College of Wisconsin, Milwaukee, WI, United States, 4Mayo Clinic, Rochester, MN, United States, 5The University of Texas at Austin, Austin, TX, United States, 6University of Michigan, Ann Arbor, MI, United States, 7University of Texas Health Science Center at San Antonio, San Antonio, TX, United States, 8Mayo Clinic, Scottsdale, AZ, United States, 9Rhode Island Hospital and Alpert Medical School of Brown University, Providence, RI, United States, 10Massachusetts General Hospital, Boston, MA, United States, 11University of Washington, Seattle, WA, United States, 12The University of Texas Southwestern, Dallas, TX, United States, 13Philips, Gainesville, FL, United States

Synopsis

Differences in imaging protocols (IPs) and post-processing methods (PMs) may influence relative cerebral blood volume (rCBV). Our goal was to leverage a dynamic susceptibility contrast (DSC) digital reference object (DRO) to characterize rCBV consistency across 12 sites, focusing on differences due to site-specific IPs and/or PMs. Our results indicate high agreement when a single center processes rCBV despite slight variations in the IP. However, substantial disagreement was observed when site-specific software was applied for rCBV measurements. These results have important implications for comparing DSC-MRI data across sites and trials, where PM variability could confound the use of rCBV as a biomarker of therapy response.

Introduction:

To aid in recent efforts to promote rCBV reproducibility, 12 sites within the NCI's Quantitative Imaging Network (QIN) investigated the current reproducibility of rCBV using a DSC digital reference object (DRO) that is representative of a wide range of glioma MR signals1. Using this DRO, multi-site consistency in rCBV was evaluated for varying permutations of imaging protocol (IP) parameters and post-processing methods (PMs).

Methods:

This challenge consisted of three phases to evaluate the influence of IP and/or PM on multi-site rCBV consistency across 12 participating sites of the QIN (Table 1). Phase I ("site IP w/constant PM") required each site to submit its clinical IP to the managing center (n = 20); site-specific IP DROs were then generated and processed by the managing center according to previously published post-processing steps2-3. Phase II ("constant IP w/site PM") required each site to process a "Standard Imaging Protocol" (SIP) recommended by the ASFNR4 using its software of choice (n = 17). Phase III ("site IP w/site PM") required each site to process its site-specific IP DROs using its software of choice (n = 25).

The DRO encompasses 10,000 tumor voxels simulated for both an intact BBB (Ktrans = 0) and a disrupted BBB (Ktrans > 0). Normal-appearing white matter (NAWM) voxels were also simulated with Ktrans = 0. Where necessary, we define references as follows: 1) rCBV from the disrupted-BBB DRO was compared to the intact-BBB DRO to assess accuracy, and 2) each DRO was compared to the SIP processed by the managing center to determine variability.

The intraclass correlation coefficient (ICC) and Lin's concordance correlation coefficient (LCCC) were calculated to evaluate consistency across the processed DROs and the accuracy of the leakage-correction algorithms for each DRO, respectively. The 95% limits of agreement (LOA) were extracted from a Bland-Altman analysis to quantify agreement between each site-specific DRO and its reference. The coefficient of variation (CV%) was also calculated.
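As a point of reference for the leakage-correction step, the sketch below illustrates the Boxerman-Schmainda-Weisskoff (BSW) model3 applied to simulated DSC signals. This is a minimal illustration, not the managing center's actual implementation: the array shapes, baseline window, echo time, and the use of the all-voxel mean as the reference curve are assumptions made for the example.

```python
import numpy as np

def delta_r2star(signal, te, n_baseline=20):
    """Convert DSC signal S(t) to deltaR2*(t) using the pre-bolus baseline S0."""
    s0 = signal[:, :n_baseline].mean(axis=1, keepdims=True)
    return -np.log(signal / s0) / te

def bsw_corrected_rcbv(dr2s, dt):
    """BSW leakage correction: fit each voxel curve as K1*xbar(t) - K2*int(xbar),
    where xbar(t) is the mean curve of non-enhancing tissue (here, the all-voxel
    mean as a stand-in). Corrected rCBV integrates the curve with the K2
    leakage term removed."""
    xbar = dr2s.mean(axis=0)                        # reference curve, shape (T,)
    xbar_int = np.cumsum(xbar) * dt                 # running integral of xbar
    A = np.stack([xbar, -xbar_int], axis=1)         # (T, 2) design matrix
    K, *_ = np.linalg.lstsq(A, dr2s.T, rcond=None)  # per-voxel [K1, K2]
    corrected = dr2s + np.outer(K[1], xbar_int)     # add back the leakage term
    return np.trapz(corrected, dx=dt, axis=1)       # rCBV = area under curve

# Stand-in for DRO signals: 10,000 voxels x 120 time points, TE = 30 ms, TR = 1.5 s
signal = 1.0 + np.random.rand(10000, 120)
rcbv = bsw_corrected_rcbv(delta_r2star(signal, te=0.030), dt=1.5)
```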

Results and Discussion:

Multiple IPs and PMs were reported by the sites (Table 1). A variety of software platforms were used: IB Neuro, nordicICE, PGUI, 3D Slicer, Philips IntelliSpace Portal (ISP), and in-house processing scripts. The accuracy of each processed rCBV map was first assessed, as shown in Figure 1. Phase I results indicate that 17 of 20 analyses demonstrated high rCBV accuracy when processed by the managing center, illustrating that most sites' IPs can accurately compute rCBV, most likely because most sites use a protocol similar to the SIP. In Phase II, 10 of 17 analyses demonstrated high rCBV accuracy when different software choices were used to process the SIP; these software choices included IB Neuro, nordicICE, and in-house processing. In Phase III, 12 of 25 combinations of site-specific IP and PM accurately computed rCBV.
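Accuracy in Figure 1 is quantified with the LCCC between each processed rCBV map and ground truth. Lin's coefficient has a closed form; a minimal sketch (array names are illustrative):

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two rCBV vectors."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

# e.g., lin_ccc(site_rcbv, truth_rcbv) > 0.8 counts as good agreement in Figure 1
```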

Agreement decreased when leakage was introduced, across all DROs. Phase I had the highest multi-site agreement in rCBV (ICCintact-BBB = 0.97; ICCdisrupted-BBB = 0.88). In contrast, both Phase II (ICCintact-BBB = 0.69; ICCdisrupted-BBB = 0.44) and Phase III (ICCintact-BBB = 0.64; ICCdisrupted-BBB = 0.38) showed poor agreement in rCBV across sites. These results indicate that the current inconsistency in rCBV is most likely due to post-processing methods. The poor agreement in rCBV from the intact-BBB simulations also indicates that variability in pre-processing (e.g., filtering, smoothing) affects rCBV.
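The abstract does not state which ICC form was used, so the sketch below computes a one-way random-effects ICC(1,1) across sites as one plausible choice; a two-way model would follow the same ANOVA pattern.

```python
import numpy as np

def icc_oneway(data):
    """One-way random-effects ICC(1,1); data is (n_voxels, n_sites)."""
    n, k = data.shape
    row_means = data.mean(axis=1)
    ms_between = k * ((row_means - data.mean()) ** 2).sum() / (n - 1)
    ms_within = ((data - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```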

For Phase I (Figure 2a), the majority of sites have narrow 95% LOA centered on the mean rCBV of the reference; the exceptions likely arise from differences in preload. Two sites showed larger 95% LOA because of differences in TE and injection dose. The LOA widen and exhibit bias for Phases II and III (Figs. 2b and 2c), when PMs are varied. The software platforms demonstrating the narrowest 95% LOA with no bias were IB Neuro, nordicICE, ISP's "model-free" option, and in-house processing.
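The 95% LOA in Figure 2 follow directly from the Bland-Altman definition; a minimal sketch, assuming paired voxel-wise rCBV vectors:

```python
import numpy as np

def bland_altman_loa(test, reference):
    """Bias and 95% limits of agreement between a site's rCBV map and the SIP."""
    diff = test - reference
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd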

Lastly, Figure 3 illustrates the CV% calculated over all voxels for each DRO as a function of rCBV. In general, the mean CV% increases with each phase as more freedom is allowed in both IPs and PMs. The greatest CV% occurs at low rCBV values, suggesting that standardization is necessary when voxel-wise analysis (versus hot-spot analysis) is performed to detect early therapeutic effects.
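A voxel-wise CV% curve like Figure 3 can be computed by binning voxels on their across-site mean rCBV; a sketch, with the bin count and array layout assumed for illustration:

```python
import numpy as np

def cv_by_rcbv_bin(maps, n_bins=20):
    """maps: (n_sites, n_voxels). Returns mean CV% per bin of mean rCBV."""
    mean_rcbv = maps.mean(axis=0)
    cv = 100 * maps.std(axis=0, ddof=1) / mean_rcbv   # voxel-wise CV%
    edges = np.linspace(mean_rcbv.min(), mean_rcbv.max(), n_bins + 1)
    idx = np.clip(np.digitize(mean_rcbv, edges) - 1, 0, n_bins - 1)
    return np.array([cv[idx == b].mean() if (idx == b).any() else np.nan
                     for b in range(n_bins)])
```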

Conclusion:

Although great efforts have been made to standardize DSC IPs, this study highlights poor rCBV agreement due to differences in processing platforms. It is critical that the DSC community establish qualification and validation criteria for both IPs and PMs based on the ground truth provided by the DRO.

Acknowledgements

NIH/NCI R01CA213158 (LCB, NS, CCQ)

NIH/NCI U01CA207091 (AJM, MCP)

NIH/NCI U01CA166104 and P01CA085878 (DM, TLC)

NIH/NCI U01CA142565 (CW, AGS, TEY, NR)

NIH/NCI U01CA176110 (KMS, MAP)

References

[1] Semmineh NB, Stokes AM, Bell LC, Boxerman JL, Quarles CC. A Population-Based Digital Reference Object (DRO) for Optimizing Dynamic Susceptibility Contrast (DSC)-MRI Methods for Clinical Trials. Tomography 2017;3:41–49.

[2] Semmineh N, Bell L, Stokes A, Hu L, Boxerman J, Quarles C. Optimization of Acquisition and Analysis Methods for Clinical Dynamic Susceptibility Contrast (DSC) MRI Using a Population-based Digital Reference Object. AJNR Am J Neuroradiol 2018.

[3] Boxerman JL, Schmainda KM, Weisskoff RM. Relative cerebral blood volume maps corrected for contrast agent extravasation significantly correlate with glioma tumor grade, whereas uncorrected maps do not. AJNR Am J Neuroradiol 2006;27:859–67.

[4] Welker K, Boxerman J, Kalnin A, et al. ASFNR recommendations for clinical performance of MR dynamic susceptibility contrast perfusion imaging of the brain. AJNR Am J Neuroradiol 2015;36:E41–51.

Figures

Table 1: Summary of participating teams’ imaging protocols (IP) and post-processing methods (PM). Large variations in both IP and PM were observed.

Figure 1: A bar plot of the LCCC for each rCBV map for site-specific IP w/constant PM (black), constant IP w/site-specific PM (medium gray), and site-specific IP w/site-specific PM (light gray). Each phase is sorted by LCCC from highest to lowest value. A horizontal line at LCCC = 0.8 marks the threshold for good agreement (LCCC > 0.8). Fewer than half of the DROs accurately represented ground truth when more freedom was allowed in IP and PM choices (as seen in the final phase).

Figure 2: Bland-Altman LOA against the SIP plotted for a) site-specific IP w/constant PM, b) constant IP w/site-specific PM, and c) site-specific IP w/site-specific PM. The vertical dashed line is the mean rCBV across 10,000 voxels for the SIP. When one center processes rCBV, tight LOA are seen, with slight biases in rCBV due to differences in IPs (Fig. 2a). However, wide LOA and large biases were observed due to differences in PMs (Figs. 2b-c).

Figure 3: The coefficient of variation (CV%) across all rCBV maps for each of the 10,000 voxels, plotted against the mean rCBV of the voxels for a) site-specific IP w/constant PM, b) constant IP w/site-specific PM, and c) site-specific IP w/site-specific PM. Results from Ktrans = 0 (light gray) and Ktrans > 0 (black) are included, with their mean CV% across all 10,000 voxels indicated by the horizontal lines. For all three phases, the largest variation in rCBV occurs at the low rCBV range for Ktrans > 0, and CV% increases with more variation in IP and PM.
