Multi-parametric MR images have been shown to be effective in the non-invasive diagnosis of prostate cancer. Automated segmentation of the prostate eliminates the need for time-consuming manual annotation by a radiologist, improving the efficiency of extracting imaging features for the characterization of prostate tissues. In this work, we propose a fully automated cascaded deep learning architecture with residual blocks (Cascaded MRes-UNET) that segments the prostate gland and the peripheral zone (PZ) in one pass through the network. The network yields high Dice scores (mean = 0.91) against manual annotations from an experienced radiologist. The average difference in volume estimation is approximately 6% for the prostate gland and 3% for the peripheral zone.
Figure 2 shows the architecture of the proposed Modified Residual UNET (MRes-UNET). This architecture is a modified version of UNET5 with residual blocks within the analysis and synthesis paths. Furthermore, instead of the feature concatenations used in UNET, the proposed architecture uses feature addition. Finally, the residual blocks6,7 use 1x1 convolutions along the identity paths. Figure 3 shows the proposed fully automated cascaded architecture, which consists of two sequential MRes-UNETs. Given an input image, the first MRes-UNET predicts the mask for the prostate gland. The detected prostate region is extracted from the image and used as input to the second network, which predicts the PZ within the gland.
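A residual block of this kind can be sketched in Keras as follows. This is a minimal illustration, not the authors' exact configuration: the filter counts, the use of batch normalization, and the ReLU placement are assumptions; only the 1x1 convolution on the identity path and the element-wise feature addition are taken from the description above.

```python
from tensorflow.keras import layers

def residual_block(x, filters):
    """Residual block with a 1x1 convolution along the identity path."""
    # 1x1 convolution on the shortcut so channel counts match
    # before the element-wise addition.
    shortcut = layers.Conv2D(filters, kernel_size=1, padding="same")(x)

    # Main path: two 3x3 convolutions (normalization/activation
    # choices here are illustrative assumptions).
    y = layers.Conv2D(filters, kernel_size=3, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, kernel_size=3, padding="same")(y)
    y = layers.BatchNormalization()(y)

    # Feature addition rather than the concatenation used in UNET.
    y = layers.Add()([shortcut, y])
    return layers.Activation("relu")(y)
```

Stacking such blocks along the analysis and synthesis paths, and adding (rather than concatenating) encoder features into the decoder, gives the MRes-UNET structure; chaining two such networks, with the first network's predicted prostate region cropped and fed to the second, gives the cascade.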
3D T2-weighted Fast Spin Echo (FSE) images with 1 mm isotropic resolution (matrix size: 256x256) were acquired at 3T (Siemens Skyra) on a total of 73 patients screened for prostate biopsy. 65 subjects were used for training the cascaded network and 3 for validation. The remaining 5 subjects were held out as a test set to assess the generalization ability of the proposed technique. The prostate gland and PZ were annotated on axial T2-weighted images by experienced radiologists. Pre-processing consisted of cropping the images to 192x192 to reduce computational burden and normalizing each subject's data to zero mean and unit standard deviation. Data augmentation was performed using a combination of random rotations (-10°, 10°), in-plane translations, and horizontal flips, resulting in a four-fold increase to roughly 17,000 training images. The networks were trained in Keras8 with a tensorflow9 backend using the following parameters: weights = random initialization, loss = categorical cross-entropy, learning rate = 0.0005, batch size = 5, and epochs = 30. The performance of the proposed technique was evaluated using the Dice similarity coefficient, average volume difference, precision, and recall.
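The pre-processing and the Dice evaluation metric described above can be sketched in NumPy as follows; the function names are illustrative, and the center-crop placement is an assumption (the text specifies only the 192x192 output size and the per-subject z-score normalization).

```python
import numpy as np

def preprocess(volume, crop=192):
    """Crop 256x256 slices to crop x crop and z-score normalize per subject."""
    h, w = volume.shape[-2:]
    top, left = (h - crop) // 2, (w - crop) // 2  # center crop (assumed)
    volume = volume[..., top:top + crop, left:left + crop]
    # Per-subject normalization: zero mean, unit standard deviation.
    return (volume - volume.mean()) / volume.std()

def dice_coefficient(pred, truth):
    """Dice similarity coefficient 2|A & B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())
```

The average volume difference reported in the abstract follows directly from the same masks, e.g. `abs(pred.sum() - truth.sum()) / truth.sum()` scaled by the 1 mm isotropic voxel volume.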
1. Siegel R, Naishadham D, Jemal A. Cancer statistics, 2012. CA Cancer J Clin. 2012;62(1):10–29.
2. Kozlowski P, Chang SD, Jones EC, Berean KW, Chen H, Goldenberg SL. Combined diffusion-weighted and dynamic contrast-enhanced MRI for prostate cancer diagnosis: correlation with biopsy and histopathology. J Magn Reson Imaging. 2006;24(1).
3. MICCAI Grand Challenge: Prostate MR Image Segmentation. 2012.
4. Tian Z, Liu L, Fei B. Deep convolutional neural network for prostate MR segmentation. Int J Comput Assist Radiol Surg. 2018;13(11).
5. Ronneberger O, et al. U-Net: Convolutional Networks for Biomedical Image Segmentation. MICCAI, Springer, LNCS. 2015.
6. He K, Zhang X, Ren S, Sun J. Deep Residual Learning for Image Recognition. 2015. arXiv:1512.03385v1.
7. Guerrero R, et al. White matter hyperintensity and stroke lesion segmentation and differentiation using convolutional neural networks. NeuroImage: Clinical. 2018;17.
8. Chollet F, et al. Keras. 2015. https://keras.io
9. Abadi M, et al. TensorFlow: Large-scale machine learning on heterogeneous systems. 2015. Software available from tensorflow.org
10. Garvey B, et al. Clinical value of prostate segmentation and volume determination on MRI in benign prostatic hyperplasia. Diagn Interv Radiol. 2014;20(3).
11. Doluoglu OG, et al. The importance of prostate volume in prostate needle biopsy. Turk J Urol. 2013;39(2).