Automatic prostate segmentation in MR images is essential in many clinical applications. Generative adversarial networks (GANs) have recently gained interest due to their promising ability to generate images that are difficult to distinguish from real ones. In this paper, we propose an automatic and efficient algorithm based on a GAN to segment the prostate contour and make the segmented prostate shape more realistic. Our results show a mean segmentation accuracy of 90.3% ± 5.5% on the test dataset, indicating that the proposed strategy is feasible for segmentation of prostate MR images.
Image Acquisitions:
Clinical routine MRI was conducted on a cohort of 163 subjects, comprising healthy subjects, prostate cancer patients, and prostate hyperplasia patients. Imaging was performed on a 3.0 T Ingenia system (Philips Healthcare, the Netherlands) with a standard imaging protocol, acquiring T2-weighted, diffusion-weighted, and dynamic contrast-enhanced images. For each subject, 5 or 6 typical T2WI slices were selected to form the datasets (102 subjects with 600 typical slices as the training set; 51 subjects with 262 typical slices as the testing set). All manual segmentations in our datasets were outlined by two experts, each with more than five years of experience.
Algorithm description:
As illustrated in Figure 1, the proposed network was based on the Pix2Pix network [6] and contained two primary components: a generator (G) and a discriminator (D). The generator was a classical U-Net that produced a probability label map from the input images. The discriminator was a classical classification network fed with two kinds of inputs: original images concatenated with ground-truth label maps, and original images concatenated with predicted label maps synthesized by the generator. The generator was trained to produce outputs that the adversarially trained discriminator could not distinguish from the ground truth, whilst the discriminator was trained to detect the generator's "fakes" as well as possible.
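The conditional pairing described above can be sketched numerically. The following toy numpy example (the array sizes, the linear stand-in discriminator, and all variable names are illustrative assumptions, not the paper's actual network or loss weights) shows how the discriminator judges image–label pairs rather than label maps alone, and how the standard adversarial binary cross-entropy objective is formed:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy 4x4 single-channel "MR slice" and label maps (illustrative sizes only).
rng = np.random.default_rng(0)
image      = rng.random((4, 4))                          # input T2WI patch
gt_label   = (rng.random((4, 4)) > 0.5).astype(float)    # expert contour
pred_label = sigmoid(rng.standard_normal((4, 4)))        # generator output

# The discriminator sees the image CONCATENATED with a label map along the
# channel axis, so it scores (image, label) pairs, not label maps alone.
real_pair = np.stack([image, gt_label])    # shape (2, 4, 4) -> should score "real"
fake_pair = np.stack([image, pred_label])  # shape (2, 4, 4) -> should score "fake"

# Stand-in linear discriminator: D(pair) = sigmoid(sum(w * pair)).
w = rng.standard_normal(real_pair.shape)
D = lambda pair: sigmoid(np.sum(w * pair))

# Adversarial binary cross-entropy objective: D is trained to score real
# pairs high and fake pairs low; G is trained to make D score fakes high.
eps = 1e-12
d_loss = -(np.log(D(real_pair) + eps) + np.log(1.0 - D(fake_pair) + eps))
g_loss = -np.log(D(fake_pair) + eps)

print(real_pair.shape, fake_pair.shape)
print(d_loss, g_loss)
```

In a real training loop the two losses are minimized alternately with respect to the discriminator's and generator's parameters; in Pix2Pix the generator loss is additionally combined with a pixel-wise reconstruction term.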
Conclusion
In this paper, we presented a fully automatic segmentation method for the whole prostate region on T2WIs based on an adversarial image-to-image network. Experiments demonstrate that our method achieves satisfactory performance for segmentation of prostate MR images.
References:
[1] Villers, A., et al. "Current status of MRI for the diagnosis, staging and prognosis of prostate cancer: implications for focal therapy and active surveillance." Current Opinion in Urology 19.3 (2009): 274.
[2] Wang, Shijun, et al. "Computer aided-diagnosis of prostate cancer on multiparametric MRI: a technical review of current research." BioMed research international 2014 (2014).
[3] Zhao, K., et al. "Prostate cancer identification: quantitative analysis of T2-weighted MR images based on a back propagation artificial neural network model." Science China Life Sciences 58.7 (2015): 666-673.
[4] Ronneberger, Olaf, Philipp Fischer, and Thomas Brox. "U-net: Convolutional networks for biomedical image segmentation." International Conference on Medical image computing and computer-assisted intervention. Springer, Cham, 2015.
[5] Luc, Pauline, et al. "Semantic segmentation using adversarial networks." arXiv preprint arXiv:1611.08408 (2016).
[6] Isola, Phillip, et al. "Image-to-image translation with conditional adversarial networks." arXiv preprint (2017).
[7] Zhu, Qikui, et al. "Deeply-supervised CNN for prostate segmentation." International Joint Conference on Neural Networks (IJCNN). IEEE, 2017: 178-184.