0814

Differential Diagnosis of Prostate Cancer and Benign Prostatic Hyperplasia Based on Prostate DCE-MRI by Using Deep Learning with Different Peritumoral Areas
Yang Zhang1,2, Weikang Li3, Zhao Zhang3, Yingnan Xue3, Yan-Lin Liu2, Peter Chang2, Daniel Chow2, Ke Nie1, Min-Ying Su2, and Qiong Ye3,4
1Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, NJ, United States, 2Department of Radiological Sciences, University of California, Irvine, CA, United States, 3Department of Radiology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China, 4High Magnetic Field Laboratory, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, China

Synopsis

A bi-directional Convolutional Long Short Term Memory (CLSTM) network was previously shown to be capable of differentiating prostate cancer from benign prostatic hyperplasia (BPH) based on DCE-MRI acquired with 40 time frames. The purpose of this work was to investigate the diagnostic value of peritumoral tissues. Several different methods were used to define the peritumoral tissue surrounding the lesion, and the resulting ROIs were used as the input to the diagnostic network. A total of 135 cases were analyzed, including 73 prostate cancers and 62 BPH. Based on 4-fold cross-validation, the region-growing-based ROI had the best performance, with a mean AUC of 0.89.

Introduction

Prostate cancer (PCa) is one of the most common malignant tumors in men [1]. The accurate diagnosis of PCa is a challenging task [2]. The distinction of PCa from benign conditions, including benign prostatic hyperplasia (BPH) and prostatitis, is critical for personalized management [3]. Currently, MRI of the prostate is evaluated by radiologists; however, the detection and diagnosis of PCa on MR images vary considerably [4]. Quantitative imaging features may provide additional information for the differentiation of benign and malignant lesions. Furthermore, deep learning using a convolutional neural network provides a fully automatic and efficient approach to integrate and analyze the detailed information contained in the tumor and the surrounding peritumoral tissue for diagnosis. Several studies have shown that artificial intelligence (AI) has promising potential for the diagnosis of prostate cancer [5-7]. The goal of this study was to evaluate the diagnostic performance of Convolutional Long Short Term Memory (CLSTM) networks when different peritumoral areas, extracted using different methods, were considered, and to compare the results for differentiating between prostate cancer and BPH.

Methods

The cases were collected from September 2014 to September 2019 from patients who underwent prostate multi-parametric MRI (mpMRI) examinations. Seventy-three patients with PCa confirmed by transrectal ultrasonography-guided prostate biopsy, who underwent subsequent radical prostatectomy, were included in this study. Sixty-two BPH patients had a PI-RADS v2 score ≤ 2 and negative biopsy findings obtained within 6 months of the MRI. MR examinations were carried out on a 3.0 T scanner (Achieva; Philips, The Netherlands) equipped with an abdominal sensitivity-encoding (SENSE) torso coil, without an endorectal coil. Four hours of fasting before the MR examination was required to suppress bowel peristalsis. During the acquisition, the contrast agent (Omniscan, GE; 0.5 mmol/ml) was injected at a dose of 0.2 ml/kg of body weight and a flow rate of 2 ml/s via a power injector (Spectris Solaris EP, Samedco Pvt Ltd) at the start of the sixth DCE time point, followed by a 20-ml saline flush. Figure 1 and Figure 2 show two case examples.

Only the DCE images were analyzed in this study. A total of 40 time frames were acquired, including 5 pre-contrast (F1-F5) and 35 post-contrast (F6-F40). Two radiologists outlined the whole prostate gland and the index suspicious lesion in consensus on DCE-MRI using ImageJ (NIH, USA). A subtraction map was generated by subtracting the averaged pre-contrast image (F1-F5) from F25. The outlined lesion ROIs on all slices were combined to generate a 3D tumor mask. For deep learning, a bi-directional CLSTM network was applied to differentiate between BPH and PCa patients by taking the change of signal intensity over time into account. The diagnostic performance was evaluated using 4-fold cross-validation, with the area under the ROC curve (AUC) as the metric.

To analyze the peritumoral information, nine different methods were used to generate ROIs (R1-R9) that contained different amounts of peritumoral tissue surrounding the tumor. R1 was the smallest bounding box covering the tumor ROI. R2 and R3 were obtained by enlarging the tumor ROI to 120% and 150% of its area. R4, R5, and R6 used 5-, 10-, and 15-pixel expansions from the tumor boundary. R7, R8, and R9 used region growing [8] with ±20%, ±30%, and ±40% of the average tumor intensity as the stopping criteria to locate the peritumoral tissue. For R7, the lower stopping criterion was 80% of the average intensity of the tumor and the higher stopping criterion was 120%; for R8, the lower and higher stopping criteria were 70% and 130%; and for R9, 60% and 140%. For comparison, the tumor alone without any peritumoral area was denoted as R0; in this case, the pixels outside the tumor within the bounding box were set to zero. Figures 3 and 4 show ROI examples from the two cases shown in Figures 1 and 2. After the peritumoral tissue was determined, the smallest bounding box covering all identified tissue was generated and used as the input to the CLSTM network, as sketched below.
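The ROI-expansion step can be summarized with a minimal sketch in Python (NumPy/SciPy), assuming a 2D binary tumor mask and the corresponding subtraction image as inputs; the slice-wise processing, function names, and connectivity handling are illustrative assumptions rather than the authors' actual implementation.

import numpy as np
from scipy import ndimage

def area_expansion(mask, target_ratio):
    # R2-R3: grow the tumor ROI by repeated 1-pixel dilation until its area
    # reaches target_ratio times the original tumor area.
    target = target_ratio * mask.sum()
    roi = mask.copy()
    while roi.sum() < target:
        roi = ndimage.binary_dilation(roi)
    return roi

def pixel_expansion(mask, n_pixels):
    # R4-R6: morphological dilation of the tumor mask by n_pixels.
    return ndimage.binary_dilation(mask, iterations=n_pixels)

def region_growing(img, mask, frac=0.2):
    # R7-R9: keep pixels connected to the tumor whose intensity falls within
    # (1 - frac, 1 + frac) of the mean tumor intensity on the subtraction map.
    mean_int = img[mask > 0].mean()
    lower, upper = (1.0 - frac) * mean_int, (1.0 + frac) * mean_int
    candidates = ((img >= lower) & (img <= upper)) | (mask > 0)
    labels, _ = ndimage.label(candidates)
    tumor_labels = np.unique(labels[mask > 0])
    return np.isin(labels, tumor_labels[tumor_labels > 0])

def bounding_box_input(img, roi_mask, zero_outside=False):
    # Crop the smallest bounding box covering the ROI; for R0 (tumor alone),
    # pixels outside the tumor are zeroed (zero_outside=True).
    rows, cols = np.where(roi_mask)
    r0, r1, c0, c1 = rows.min(), rows.max() + 1, cols.min(), cols.max() + 1
    patch = img[r0:r1, c0:c1].copy()
    if zero_outside:
        patch[~roi_mask[r0:r1, c0:c1]] = 0
    return patch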
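Likewise, a minimal sketch of a bi-directional CLSTM classifier is given below (Python, TensorFlow/Keras), assuming the cropped DCE series is resampled to 64x64 single-channel patches over 40 time frames; the layer sizes, input size, and training settings are assumptions for illustration, not the network reported here.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_bi_clstm(n_frames=40, size=64):
    # Input: a time series of cropped ROI patches, shape (time, H, W, channel).
    inp = layers.Input(shape=(n_frames, size, size, 1))
    # Bi-directional convolutional LSTM summarizes the enhancement kinetics
    # in both temporal directions into a single feature map.
    x = layers.Bidirectional(
        layers.ConvLSTM2D(16, kernel_size=3, padding="same",
                          return_sequences=False))(inp)
    x = layers.BatchNormalization()(x)
    x = layers.GlobalAveragePooling2D()(x)
    out = layers.Dense(1, activation="sigmoid")(x)  # probability of PCa vs. BPH
    return models.Model(inp, out)

model = build_bi_clstm()
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])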

Results

All results are shown in Table 1. When only the tumor without any peritumoral tissue (R0) was used as the input to the bi-directional CLSTM architecture, the AUC across the four folds was 0.72-0.89 (mean 0.84). When different amounts of peritumoral tissue were included using expanded ROIs as inputs, the AUC generally decreased as the ROI became bigger. For R2 and R3, the mean AUC was 0.85 for the 120% area expansion and decreased to 0.75 for the 150% expansion. For R4-R6, with 5-, 10-, and 15-pixel expansions, the mean AUC decreased from 0.87 to 0.85 to 0.74. The R7 ROI, obtained by region growing with ±20% as the stopping criterion, had the best performance, with an AUC of 0.81-0.94 (mean 0.89). For R8 and R9, obtained by region growing with ±30% and ±40% as the stopping criteria, the performance similarly worsened as the peritumoral area expanded, with mean AUCs decreasing to 0.77 and 0.65.
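For reference, the per-fold AUC values behind the reported ranges and means can be obtained with a generic 4-fold cross-validation loop such as the sketch below (Python, scikit-learn); the train_and_predict callable is a hypothetical placeholder for training the CLSTM on a fold and predicting probabilities on the held-out cases, not the authors' training code.

import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

def cross_validated_auc(train_and_predict, X, y, n_splits=4, seed=0):
    # Stratified folds keep the PCa/BPH ratio similar across folds (an assumption).
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    aucs = []
    for train_idx, test_idx in skf.split(X, y):
        probs = train_and_predict(X[train_idx], y[train_idx], X[test_idx])
        aucs.append(roc_auc_score(y[test_idx], probs))
    return np.array(aucs)  # summarize as the min-max range and mean over folds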

Discussion

In this study, we demonstrated that a recurrent network using CLSTM could differentiate between PCa and BPH by considering the change of signal intensity over the DCE series. Peritumoral tissue is known to contain information that may contribute to the diagnosis. We therefore performed a systematic analysis using nine different expansion methods to evaluate the peritumoral tissue outside the lesion ROI. When region growing with ±20% of the average intensity was used to determine the expansion, the diagnostic performance improved compared to using the tumor alone, with the mean AUC increasing from 0.84 to 0.89. Including peritumoral tissue outside the tumor ROI not only provides more information for the diagnosis but may also reduce uncertainty in the lesion ROI drawing. However, how much tissue should be included needs to be carefully determined. As shown in our results, when the ROI expanded further, the lesion information might be diluted by the inclusion of weakly enhancing tissue in the analysis. In conclusion, this study demonstrates that deep learning, with appropriate consideration of the peritumoral information, can be implemented to analyze DCE-MRI to differentiate between PCa and BPH.

Acknowledgements

No acknowledgement found.

References

[1] Siegel RL, Miller KD, Jemal A (2018) Cancer statistics, 2018. CA: A Cancer Journal for Clinicians 68:7-30.
[2] Weinreb JC, Barentsz JO, Choyke PL, Cornud F, Haider MA, Macura KJ, Margolis D, Schnall MD, Shtern F, Tempany CM (2016) PI-RADS Prostate Imaging–Reporting and Data System: 2015, Version 2. European Urology 69:16-40.
[3] Herold CJ, Lewin JS, Wibmer AG, Thrall JH, Krestin GP, Dixon AK, Schoenberg SO, Geckle RJ, Muellner A, Hricak H (2016) Imaging in the age of precision medicine: summary of the proceedings of the 10th Biannual Symposium of the International Society for Strategic Studies in Radiology. Radiology 279:226-238.
[4] Venderbos LD, Roobol MJ (2011) PSA-based prostate cancer screening: the role of active surveillance and informed and shared decision making. Asian Journal of Andrology 13:219.
[5] Le MH, Chen J, Wang L, Wang Z, Liu W, Cheng K-TT, Yang X (2017) Automated diagnosis of prostate cancer in multi-parametric MRI based on multimodal convolutional neural networks. Physics in Medicine & Biology 62:6497.
[6] Zhu Y, Wang L, Liu M, Qian C, Yousuf A, Oto A, Shen D (2017) MRI-based prostate cancer detection with high-level representation and hierarchical classification. Medical Physics 44:1028-1039.
[7] Yuan Y, Qin W, Buyyounouski M, Ibragimov B, Hancock S, Han B, Xing L (2019) Prostate cancer classification with multiparametric MRI transfer learning model. Medical Physics 46:756-765.
[8] Forsyth DA, Ponce J (2002) Computer Vision: A Modern Approach. Prentice Hall Professional Technical Reference.

Figures

Figure 1: A case example from a 72-year-old man with benign prostatic hyperplasia (tPSA=10.8 ng/ml). The lesion is manually outlined. (A) The first DCE time frame (pre-contrast image); (B) The 15th DCE time frame (post-contrast image); (C) The 40th DCE time frame (post-contrast image); (D) The DCE time intensity curve shows the persistent enhancement pattern.

Figure 2: A case example from an 83-year-old man with prostate cancer (tPSA=7.13 ng/ml, Gleason Score=4+5). The lesion is manually outlined. (A) The first DCE time frame (pre-contrast image); (B) The 15th DCE time frame (post-contrast image); (C) The 40th DCE time frame (post-contrast image); (D) The DCE time intensity curve shows the wash-out kinetic pattern.

Figure 3: Different ROIs from the case shown in Figure 1, containing different amounts of peritumoral tissue as the input for the diagnostic neural network. (A) Original image, (B) Tumor alone (R0), (C) 120% enlarged tumor area (R2), (D) 150% enlarged tumor area (R3), (E) 5-pixel expansion from the tumor boundary (R4), (F) 10-pixel expansion from the tumor boundary (R5), (G) region growing with ±20% average intensity as the stopping criteria (R7), (H) region growing with ±30% average intensity as the stopping criteria (R8).

Figure 4: Different ROIs from the case shown in Figure 2, containing different amounts of peritumoral tissue as the input for the diagnostic neural network. (A) Original image, (B) Tumor alone (R0), (C) 120% enlarged tumor area (R2), (D) 150% enlarged tumor area (R3), (E) 5-pixel expansion from the tumor boundary (R4), (F) 10-pixel expansion from the tumor boundary (R5), (G) region growing with ±20% average intensity as the stopping criteria (R7), (H) region growing with ±30% average intensity as the stopping criteria (R8).

Table 1: Results from different input ROIs

Proc. Intl. Soc. Mag. Reson. Med. 29 (2021)