Keywords: Multimodal, Cancer
Prostate Imaging Reporting and Data System (PI-RADS) on multiparametric MRI (mpMRI) provides fundamental guidelines for MRI interpretation but suffers from inter-reader variability. Deep learning networks show great promise in automatic lesion segmentation and classification, which helps to ease the burden on radiologists and reduce inter-reader variability. In this study, we propose MiniSegCaps, a novel multi-branch network for prostate cancer segmentation and PI-RADS classification on mpMRI, together with a graphical user interface (GUI) integrated into the clinical workflow for diagnostic report generation. Our model achieved the best performance in prostate cancer segmentation and PI-RADS classification compared with state-of-the-art methods.
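As a minimal sketch of how per-lesion segmentation and classification outputs could be assembled into a structured report entry for the GUI, the field names and the LesionFinding container below are hypothetical illustrations, not the published report schema:

    from dataclasses import dataclass

    @dataclass
    class LesionFinding:
        """One detected lesion: hypothetical container, not the paper's actual schema."""
        slice_index: int
        volume_mm3: float
        pirads: int  # PI-RADS category 1-5 predicted by the classification branch

    def format_report(patient_id: str, findings: list) -> str:
        """Assemble a plain-text summary from per-lesion model outputs."""
        lines = [f"Patient: {patient_id}", f"Lesions detected: {len(findings)}"]
        for i, f in enumerate(findings, start=1):
            lines.append(
                f"  Lesion {i}: slice {f.slice_index}, "
                f"volume {f.volume_mm3:.1f} mm^3, PI-RADS {f.pirads}"
            )
        return "\n".join(lines)

    # Example usage with dummy model outputs
    print(format_report("anon-001", [LesionFinding(slice_index=12, volume_mm3=430.0, pirads=4)]))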
Fig. 1. The overall pipeline of our work includes five main steps: 1) T2W and DWI images obtained from the clinical MRI scan session; 2) image preprocessing (registration, normalization, etc.); 3) zonal segmentation and cropping; 4) prostate cancer segmentation and classification; and 5) diagnostic report generation.
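A minimal sketch of steps 2 and 3, assuming the DWI series has already been resampled onto the T2W grid; the z-score normalization and the fixed 128 x 128 crop are illustrative choices, not the exact parameters used in the paper:

    import numpy as np

    def zscore_normalize(volume: np.ndarray) -> np.ndarray:
        """Per-volume intensity normalization to zero mean, unit variance."""
        return (volume - volume.mean()) / (volume.std() + 1e-8)

    def crop_around_mask(volume: np.ndarray, zone_mask: np.ndarray, size: int = 128) -> np.ndarray:
        """Crop a fixed in-plane window centred on the prostate zonal segmentation."""
        _, ys, xs = np.nonzero(zone_mask)
        cy, cx = int(ys.mean()), int(xs.mean())      # centroid of the prostate mask
        half = size // 2
        y0, x0 = max(cy - half, 0), max(cx - half, 0)
        return volume[:, y0:y0 + size, x0:x0 + size]

    # Example: normalize and crop a (slices, H, W) T2W volume with its zonal mask
    t2w = zscore_normalize(np.random.rand(24, 384, 384).astype(np.float32))
    mask = np.zeros_like(t2w)
    mask[:, 150:250, 160:260] = 1
    t2w_cropped = crop_around_mask(t2w, mask)        # -> (24, 128, 128)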
Fig. 2. Architecture of MiniSegCaps: 1) MiniSeg, a lightweight network serving as the backbone for lesion segmentation; 2) a capsule predictive branch producing the PI-RADS score; 3) CapsGRU, which exploits spatial information across adjacent slices. MiniSeg extracts convolutional features and produces multi-channel cancer segmentations; the features learned by the last downsampling block of MiniSeg are fed into the capsule branch for PI-RADS classification; taking the capsule feature stacks learned by PrimaryCaps as inputs, CapsGRU exploits inter-slice spatial information during training.
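A minimal PyTorch sketch of the multi-branch layout described above, with the MiniSeg backbone reduced to a generic encoder-decoder, the capsule branch approximated by a convolutional head (dynamic routing omitted), and CapsGRU approximated by a standard GRU over per-slice features; the layer sizes, two-channel input, and five-class PI-RADS head are assumptions, not the published configuration:

    import torch
    import torch.nn as nn

    class MiniSegCapsSketch(nn.Module):
        """Simplified three-branch layout: segmentation backbone, PI-RADS head, inter-slice GRU."""

        def __init__(self, in_ch: int = 2, n_pirads: int = 5):
            super().__init__()
            # Lightweight encoder-decoder standing in for the MiniSeg backbone
            self.encoder = nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(32, 1, 2, stride=2),   # lesion segmentation logits
            )
            # Capsule-style branch approximated by a conv head on the deepest features
            self.caps_branch = nn.Sequential(
                nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            # GRU over adjacent slices, standing in for CapsGRU
            self.gru = nn.GRU(input_size=32, hidden_size=32, batch_first=True)
            self.pirads_head = nn.Linear(32, n_pirads)

        def forward(self, x: torch.Tensor):
            # x: (batch, slices, channels, H, W); channels = stacked T2W + DWI
            b, s, c, h, w = x.shape
            feats = self.encoder(x.view(b * s, c, h, w))          # per-slice deep features
            seg_logits = self.decoder(feats).view(b, s, 1, h, w)  # per-slice lesion maps
            caps_feats = self.caps_branch(feats).view(b, s, -1)   # per-slice capsule-like vectors
            seq_out, _ = self.gru(caps_feats)                     # propagate info across slices
            pirads_logits = self.pirads_head(seq_out[:, -1])      # one PI-RADS score per volume
            return seg_logits, pirads_logits

    # Example: 1 patient, 8 slices, 2 channels (T2W + DWI), 128 x 128 crops
    seg, pirads = MiniSegCapsSketch()(torch.randn(1, 8, 2, 128, 128))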
Fig. 3. Visualization of lesion segmentation results across different cases. The yellow contours are the ground truth, and the red contours are the deep learning predictions.
Fig. 4. Visualization of lesion segmentation results on eight slices from one case. The yellow contours are the ground truth, and the red contours are predictions from MiniSegCaps without and with CapsGRU. MiniSegCaps with CapsGRU better delineates the prostate cancer contours across the slices of a case than the variant without CapsGRU.
Fig. 5. Visualization of benign nodule (e.g., benign prostatic hyperplasia, BPH) segmentation results across different cases. The green contours are the ground truth, and the blue contours are the deep learning predictions.