
A fully automatic prostate segmentation method based on generative adversarial networks
Yi Zhu1, Rong Wei1, Ge Gao2, Jue Zhang1,3, and Xiaoying Wang2

1Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China, 2Peking University First Hospital, Beijing, China, 3College of Engineering, Peking University, Beijing, China

Synopsis

Automatic prostate segmentation in MR images is essential in many clinical applications. Generative adversarial networks (GANs) have recently gained interest due to their promising ability to generate images that are difficult to distinguish from real images. In this paper, we propose an automatic and efficient algorithm based on GANs to segment the prostate contour and make the segmented prostate shape more realistic. Our results show that the mean segmentation accuracy on the test dataset is 90.3% ± 5.5%, indicating that the proposed strategy is feasible for segmentation of prostate MR images.

Introduction

MRI is the most commonly used non-invasive technique for diagnosing prostate cancer [1], but interpreting these exams requires expertise and depends heavily on the personal experience of the radiologist. Computer-aided diagnosis (CAD) uses computer analysis methods to help radiologists detect and diagnose disease [2,3], and MR image segmentation plays an essential role in CAD applications. As a conventional fully convolutional neural network (FCN), U-Net provides a way to train the network end-to-end, which allows pixel-wise segmentation [4]. Generative adversarial networks (GANs) have recently gained popularity due to their promising ability to generate images that are difficult to distinguish from real images [5]. In this paper, combining the advantages of U-Net and GANs, we employ adversarial training to improve the performance of prostate MRI segmentation.

Method

Image Acquisitions:

Clinical routine MRI was performed on a cohort of 163 subjects, including healthy subjects, prostate cancer patients, and prostate hyperplasia patients. Imaging was performed on a 3.0 T Ingenia system (Philips Healthcare, the Netherlands) with a standard protocol, acquiring T2-weighted, diffusion-weighted, and dynamic contrast-enhanced images. For each subject, 5 or 6 typical T2WI slices were selected to form the datasets (102 subjects with 600 slices as the training set, and 51 subjects with 262 slices as the testing set). All manual segmentations in our datasets were outlined by two experts with more than five years of experience.

Algorithm description:

As illustrated in Figure 1, the proposed network follows the Pix2Pix network [6] and contains two primary components: a generator (G) and a discriminator (D). The generator is a classical U-Net that produces a probability label map from the input image. The discriminator is a classical classification network fed with two kinds of inputs: original images concatenated with ground-truth label maps, and original images concatenated with predicted label maps synthesized by the generator. The generator is trained to produce outputs that cannot be distinguished from the ground truth by the adversarially trained discriminator, whilst the discriminator is trained to do as well as possible at detecting the generator’s “fakes”.
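
To make this setup concrete, the following is a minimal PyTorch-style sketch of such a pairing: a small U-Net-like generator and a patch classifier that receives the T2WI image concatenated with either the expert label map or the generated one. Layer sizes, names, and the exact losses are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the adversarial segmentation setup described above
# (Pix2Pix-style generator/discriminator pair). Layer sizes and loss
# weighting are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class UNetGenerator(nn.Module):
    """Tiny U-Net-like generator: one T2WI channel in, one probability map out."""
    def __init__(self, ch=16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(2 * ch, ch, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(ch, 1, 1), nn.Sigmoid())

    def forward(self, x):
        e1 = self.enc1(x)                                      # full-resolution features
        e2 = self.enc2(e1)                                     # downsampled features
        return self.dec(torch.cat([self.up(e2), e1], dim=1))   # skip connection

class Discriminator(nn.Module):
    """Classifies (image, label map) pairs as real (expert) or fake (generated)."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, 2 * ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(2 * ch, 1, 4, stride=2, padding=1))       # patch-wise real/fake logits

    def forward(self, image, label):
        return self.net(torch.cat([image, label], dim=1))       # concatenate along channels

# One illustrative training step on a dummy batch of 64x64 slices.
G, D = UNetGenerator(), Discriminator()
adv_loss, seg_loss = nn.BCEWithLogitsLoss(), nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
image = torch.rand(4, 1, 64, 64)
gt = (torch.rand(4, 1, 64, 64) > 0.5).float()

# Discriminator step: push real pairs toward 1 and generated pairs toward 0.
real_logits = D(image, gt)
fake_logits = D(image, G(image).detach())
d_loss = (adv_loss(real_logits, torch.ones_like(real_logits))
          + adv_loss(fake_logits, torch.zeros_like(fake_logits)))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: fool the discriminator while matching the expert labels.
pred = G(image)
fake_logits = D(image, pred)
g_loss = adv_loss(fake_logits, torch.ones_like(fake_logits)) + seg_loss(pred, gt)
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```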

Result

Figure 2 illustrates the segmentations generated by the GAN and the classical U-Net on a T2WI image. The adversarial training better enforces spatial consistency among the class labels: it smooths and strengthens the class probabilities over the prostate area, and spurious class labels in small areas are removed. To evaluate our segmentation method, three traditional region-based metrics were used to compare the algorithm and the manual segmentation results: the Dice Similarity Coefficient (DSC), which measures the overlap of the two regions; the False Positive Rate (FPR), the ratio of segmented parts not overlapping the ground truth; and the False Negative Rate (FNR), the ratio of missed parts of the ground truth. The results in Table 1 indicate that the proposed method achieved better performance.
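
For reference, the three metrics can be computed from binary masks as in the short sketch below. This is the standard set-overlap formulation; the denominators for FPR and FNR are our assumption, since the abstract does not state them explicitly.

```python
# Sketch of the region-based evaluation metrics; the denominators for FPR and
# FNR (relative to the segmentation and the ground truth, respectively) are an
# assumed convention, not stated in the abstract.
import numpy as np

def region_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    """Return (DSC, FPR, FNR) for a binary prediction mask and ground-truth mask."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()      # overlap of the two regions
    fp = np.logical_and(pred, ~gt).sum()     # segmented parts not overlapping the ground truth
    fn = np.logical_and(~pred, gt).sum()     # missed parts of the ground truth
    dsc = 2.0 * tp / (pred.sum() + gt.sum() + eps)
    fpr = fp / (pred.sum() + eps)
    fnr = fn / (gt.sum() + eps)
    return dsc, fpr, fnr
```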

Discussion

The prostate gland occupies only a small part of the whole T2WI image, so the amounts of background (non-prostate pixels) and foreground (prostate pixels) are quite unbalanced. To address this imbalance, the DSC value is used as the loss function [7]. However, this sometimes leads to outputs with more than one target area, because other regions can have similar characteristics. In this paper, we used the generative adversarial network as a “variational” loss with adjustable parameters, and the segmentation results showed better spatial consistency, as seen in Fig. 2. We believe that when the inputs to the discriminator are generated pixel-wise label maps with more than one target area versus the ground truth, the real/fake classification task becomes easy for the discriminator. Thus, the adversarial training better enforces spatial consistency among the class labels and makes the prostate shape more realistic. However, for some prostate cancer patients whose prostate shape is greatly deformed, the accuracy of the outer contour segmentation declines correspondingly. This phenomenon may be caused by the latent shape constraint introduced by adversarial training. On the other hand, 3D segmentation could take advantage of the inherent dependency between the spatial locations of multiple slices, so segmentation accuracy could be further improved with a 3D extension of our method in the future.
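
The DSC-based loss referenced above is not spelled out in the abstract; a common soft-Dice formulation, and a hypothetical weighting against the adversarial term, might look like the sketch below.

```python
# Sketch of a soft Dice loss of the kind referenced above [7]; the exact
# formulation and the weight lambda_adv on the adversarial term are assumptions.
import torch

def soft_dice_loss(prob: torch.Tensor, gt: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """1 - soft DSC between predicted probabilities and a binary ground-truth mask."""
    inter = (prob * gt).sum(dim=(1, 2, 3))
    union = prob.sum(dim=(1, 2, 3)) + gt.sum(dim=(1, 2, 3))
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

# Hypothetical combined generator objective: the Dice term rewards overlap with
# the ground truth, while the adversarial term penalizes implausible (e.g. multi-blob)
# shapes that the discriminator finds easy to flag as fake.
# g_loss = soft_dice_loss(pred, gt) + lambda_adv * adversarial_term
```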

Conclusion

In this paper, we presented a fully automatic segmentation method for the whole prostate region on T2WIs based on an adversarial image-to-image network. Experiments demonstrate that our method achieves satisfactory performance in segmenting prostate MR images.

Acknowledgements

No acknowledgement found.

References

[1] Villers, A., et al. "Current status of MRI for the diagnosis, staging and prognosis of prostate cancer: implications for focal therapy and active surveillance." Current Opinion in Urology 19.3 (2009): 274.

[2] Wang, Shijun, et al. "Computer aided-diagnosis of prostate cancer on multiparametric MRI: a technical review of current research." BioMed research international 2014 (2014).

[3] Zhao, K., et al. "Prostate cancer identification: quantitative analysis of T2-weighted MR images based on a back propagation artificial neural network model." Science China Life Sciences 58.7 (2015): 666-673.

[4] Ronneberger, Olaf, Philipp Fischer, and Thomas Brox. "U-net: Convolutional networks for biomedical image segmentation." International Conference on Medical image computing and computer-assisted intervention. Springer, Cham, 2015.

[5] Luc, Pauline, et al. "Semantic segmentation using adversarial networks." arXiv preprint arXiv:1611.08408 (2016).

[6] Isola, Phillip, et al. "Image-to-image translation with conditional adversarial networks." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

[7] Zhu, Qikui, et al. "Deeply-supervised CNN for prostate segmentation." International Joint Conference on Neural Networks (IJCNN), IEEE, 2017: 178-184.

Figures

Fig. 1 Structure of the proposed GAN method.

Fig. 2 Results of segmentation: the red line indicates the result outlined by our method, the blue line the result from the classical U-Net, and the white line the result outlined by the experts. (The second row shows zoomed-in patches of the first row.)

Table 1. Segmentation accuracy for the 51 testing subjects, with Dice Similarity Coefficient (DSC), False Positive Rate (FPR), and False Negative Rate (FNR) as the evaluation metrics, reported as mean ± standard deviation.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019): 4742