Jinseong Jang1, Jeong Kon Kim2, Subeom Park1, Won Tae Kim1, Shin Uk Kang1, Myung Jae Lee1, and Dongmin Kim1
1AI R&D Center, JLK Inspection, Seoul, Republic of Korea, 2Department of Radiology, Asan Medical Center, Seoul, Republic of Korea
Synopsis
We used qualitative and quantitative parametric MRI in various deep convolutional neural networks for fully automatic detection of prostate cancer regions. The networks were compared against a pathology-map-based ground truth. The 3D convolutional neural networks achieved the highest performance in our experiments.
Introduction
Multi-parametric magnetic resonance imaging (mp-MRI) is widely used for the detection and staging of prostate cancer [1-3]. Comprehensive interpretation of T2-weighted (T2w) images, diffusion-weighted images (DWI), and dynamic contrast-enhanced MRI (DCE-MRI) can provide greater diagnostic accuracy than single-modal MRI [1-3]. However, visual inspection of mp-MRI requires a long training period and, moreover, may be inconsistent between or within observers. This limitation can be mitigated by computer-aided diagnosis tools such as support vector machines and multi-layer perceptrons [4-5]. In this paper, we propose convolutional neural network (CNN)-based automatic segmentation methods to detect prostate cancer in qualitative and quantitative mp-MRI. We compared the diagnostic performance of various 2D and 3D CNN methods using receiver operating characteristic (ROC) curves. The AUC value obtained with the 3D CNN method was higher than those of the 2D CNN methods.
Method
mp-MRI and pathology maps were obtained from 350 patients who underwent radical prostatectomy. The mp-MRI included multi-slice T2w images, DWI, and DCE-MRI. An apparent diffusion coefficient (ADC) map and high b-value DWI images at b = 1500 s/mm2 were calculated from the DWI images. In addition, four parametric maps were generated from the DCE-MRI: area under the time-intensity curve (iAUC), extracellular volume ratio reflecting vascular permeability (VE), wash-in rate (Wash-in), and peak enhancement ratio (MaxRelEnh).
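The DWI-derived maps above can be illustrated with a short sketch. Assuming the standard mono-exponential diffusion model S(b) = S0 · exp(-b · ADC) and two acquired b-values (the actual acquisition parameters are not given here), the ADC map and a synthetic high b-value image could be computed as:

```python
import numpy as np

def adc_map(s_low, s_high, b_low, b_high, eps=1e-6):
    """Voxel-wise ADC from two b-value images, assuming the
    mono-exponential decay S(b) = S0 * exp(-b * ADC)."""
    ratio = np.maximum(s_low, eps) / np.maximum(s_high, eps)
    return np.log(ratio) / (b_high - b_low)

def synthetic_dwi(s_low, b_low, adc, b_target=1500.0):
    """Extrapolate a high b-value image (e.g. b = 1500 s/mm^2)
    from a lower b-value image and the ADC map."""
    return s_low * np.exp(-(b_target - b_low) * adc)
```

This is only a minimal sketch; in practice noise floors and multi-b fitting would change the details.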
To set the ground truth for the training and validation processes, an experienced radiologist labelled cancer lesions greater than 5 mm in diameter by referring to the histological maps. The qualitative and quantitative images were used as input data to generate voxel-based probability maps for the prostate. T2w, ADC, b1500, iAUC, VE, Wash-in, and MaxRelEnh images were used as the inputs of our deep neural networks because they have been used to diagnose prostate cancer in clinical practice [6-7]. Figure 1 shows the acquisition process of the mp-MRI images.
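As a sketch of how the seven parametric maps could be combined into a single multi-channel network input, assuming all maps are already co-registered to the same voxel grid (the per-channel z-score normalization is our assumption, not stated in the abstract):

```python
import numpy as np

def build_input_volume(maps, eps=1e-6):
    """Stack co-registered parametric maps into one multi-channel
    volume of shape (C, D, H, W), z-scoring each channel so maps
    with very different intensity scales become comparable."""
    channels = []
    for m in maps:
        m = np.asarray(m, dtype=np.float32)
        channels.append((m - m.mean()) / (m.std() + eps))
    return np.stack(channels, axis=0)

# channel order assumed from the abstract:
# T2w, ADC, b1500, iAUC, VE, Wash-in, MaxRelEnh
```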
In the training process, the annotated ground-truth images were used as the target outputs of our deep neural networks. In the validation process, the images were used to evaluate the performance of the proposed networks.
Several pre-processing algorithms were applied to the mp-MRI images. First, the other mp-MRI images were registered to the high-quality, high-resolution T2w images. Next, prostate regions were segmented by our CNN-based deep neural network. Finally, a re-scaling algorithm was applied to the prostate region of the mp-MRI. The enlarged prostate images were used in our proposed deep neural networks for cancer segmentation. Figure 2 shows our pre-processing workflow.
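The final pre-processing step, cropping and enlarging the prostate region, might look like the following sketch. The fixed output grid and nearest-neighbour resampling are assumptions, since the abstract does not specify the re-scaling algorithm:

```python
import numpy as np

def crop_and_rescale(volume, mask, out_shape=(32, 96, 96)):
    """Crop a volume to the bounding box of the prostate mask and
    resample it onto a fixed grid with nearest-neighbour sampling
    (a stand-in for the unspecified re-scaling step)."""
    idx = np.argwhere(mask > 0)
    lo, hi = idx.min(axis=0), idx.max(axis=0) + 1
    crop = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    # pick the nearest source index for each target grid position
    coords = [np.round(np.linspace(0, s - 1, n)).astype(int)
              for s, n in zip(crop.shape, out_shape)]
    return crop[np.ix_(*coords)]
```

In practice a trilinear or spline interpolator would likely be used instead of nearest-neighbour sampling.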
As shown in Figure 3, we evaluated eleven convolutional neural networks [11-15] for prostate cancer segmentation. Each network included a multi-parametric input image processing layer. The mp-MRI and ground-truth images of 300 patients were used for training, and those of the remaining 50 patients for validation.
Results
For the quantitative validation of our networks, the mp-MRI images and ground truth of the mid-gland prostate were divided into four regions [8-10]. The segmentation results of our deep neural networks were compared with the divided ground-truth images. Sensitivity (true-positive rate) and specificity (true-negative rate) were calculated from this comparison. ROC curves and AUC values were then obtained by sweeping the cutoff value of the segmentation probability. Figure 4 shows the ROC curves of our proposed networks.
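The evaluation described above, sweeping cutoff values of the segmentation probability to obtain sensitivity/specificity pairs and the AUC, can be sketched as follows (the threshold grid is an assumption):

```python
import numpy as np

def roc_curve_points(prob, truth, thresholds=None):
    """Sensitivity (TPR) and 1 - specificity (FPR) over a sweep of
    probability cutoffs, as used to draw an ROC curve."""
    prob = np.ravel(prob)
    truth = np.ravel(truth).astype(bool)
    if thresholds is None:
        thresholds = np.linspace(1.0, 0.0, 101)
    tpr, fpr = [], []
    for t in thresholds:
        pred = prob >= t
        tp = np.sum(pred & truth)
        fn = np.sum(~pred & truth)
        fp = np.sum(pred & ~truth)
        tn = np.sum(~pred & ~truth)
        tpr.append(tp / max(tp + fn, 1))
        fpr.append(fp / max(fp + tn, 1))
    return np.array(fpr), np.array(tpr)

def auc(fpr, tpr):
    """Area under the ROC curve by the trapezoidal rule."""
    order = np.argsort(np.asarray(fpr), kind="stable")
    x, y = np.asarray(fpr)[order], np.asarray(tpr)[order]
    return float(np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]) / 2.0))
```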
Among the 2D networks, the AUC value of PCD was higher than that of the other networks. The PCD network processed the DCE parameters in a dedicated layer. Because the information in DCE-MRI differs from that of the other parametric images, separating the DCE-parameter processing from the other parametric MRI inputs led the PCD network to better prostate cancer detection.
The 3D networks, whose AUC value was 0.93, outperformed the 2D networks. Cancer tends to develop a three-dimensional shape in the prostate region. While 2D networks cannot account for the correlation between adjacent slices or the 3D shape of a cancer, 3D networks can.
Conclusion
In this paper, we used qualitative and quantitative parametric MRI in various deep neural networks for automatic detection of prostate cancer. Eleven convolutional neural networks were compared against a pathology-map-based ground truth. The 3D networks achieved the highest performance in our experiments.
Acknowledgements
No acknowledgement found.
References
[1] Weinreb, Jeffrey C., et al. "PI-RADS prostate imaging-reporting and data system: 2015, version 2." European Urology 69.1 (2016): 16-40.
[2] Ahmed, Hashim U., et al. "Diagnostic accuracy of multi-parametric MRI and TRUS biopsy in prostate cancer (PROMIS): a paired validating confirmatory study." The Lancet 389.10071 (2017): 815-822.
[3] Hamoen, Esther HJ, et al. "Use of the Prostate Imaging Reporting and Data System (PI-RADS) for prostate cancer detection with multiparametric magnetic resonance imaging: a diagnostic meta-analysis." European Urology 67.6 (2015): 1112-1121.
[4] Guo, Yanrong, Yaozong Gao, and Dinggang Shen. "Deformable MR prostate segmentation via deep feature learning and sparse patch matching." IEEE Transactions on Medical Imaging 35.4 (2016): 1077-1089.
[5] Yang, Xin, et al. "Co-trained convolutional neural networks for automated detection of prostate cancer in multi-parametric MRI." Medical Image Analysis 42 (2017): 212-227.
[6] Kim, Jeong Kon, et al. "Wash-in rate on the basis of dynamic contrast-enhanced MRI: usefulness for prostate cancer detection and localization." Journal of Magnetic Resonance Imaging 22.5 (2005): 639-646.
[7] Sung, Yu Sub, et al. "Dynamic contrast-enhanced MRI for oncology drug development." Journal of Magnetic Resonance Imaging 44.2 (2016): 251-264.
[8] Vargas, Hebert Alberto, et al. "Diffusion-weighted endorectal MR imaging at 3 T for prostate cancer: tumor detection and assessment of aggressiveness." Radiology 259.3 (2011): 775-784.
[9] Brock, M., et al. "Fusion of magnetic resonance imaging and real-time elastography to visualize prostate cancer: a prospective analysis using whole mount sections after radical prostatectomy." Ultraschall Med 36 (2015): 355-361.
[10] Lim, Hyun Kyung, et al. "Prostate cancer: apparent diffusion coefficient map with T2-weighted images for detection—a multireader study." Radiology 250.1 (2009): 145-151.
[11] Long, Jonathan, Evan Shelhamer, and Trevor Darrell. "Fully convolutional networks for semantic segmentation." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015.
[12] Wolterink, Jelmer M., et al. "Dilated convolutional neural networks for cardiovascular MR segmentation in congenital heart disease." Reconstruction, Segmentation, and Analysis of Medical Images. Springer, Cham, 2016. 95-102.
[13] Chen, Hao, et al. "VoxResNet: Deep voxelwise residual networks for brain segmentation from 3D MR images." NeuroImage (2017).
[14] Ronneberger, Olaf, Philipp Fischer, and Thomas Brox. "U-Net: Convolutional networks for biomedical image segmentation." International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2015.
[15] Li, Xiaomeng, et al. "H-DenseUNet: Hybrid densely connected UNet for liver and tumor segmentation from CT volumes." IEEE Transactions on Medical Imaging (2018).