Deep Learning of MR Imaging Patterns in Prostate Cancer
Nelly Tan1, Noah Stier1, Steven Raman1, and Fabien Scalzo1

1UCLA, Los Angeles, CA, United States

Synopsis

This study demonstrates the feasibility of using deep learning to automatically characterize prostate cancer lesions. The translation and development of this method into a decision support tool may provide more objective criteria for clinicians during diagnosis.

Target Audience

Radiologists

Purpose

Prostate cancer is the second most common cancer among men worldwide, and its severity remains underestimated in about one third of patients. Determining severity with MRI is challenging because of the inherent heterogeneity of the prostate and the wide inter-patient variability. Radiologists usually rely on the shape, texture, and intensity of a region in the MR image to establish a diagnosis. However, the lack of standardization in this process may lead to undetected or underestimated cancer lesions. There is a clear need to quantify and characterize these lesions with a more systematic and generalizable approach. In this study, we introduce a machine learning method to identify the visual features, across several routinely acquired MRI parameters, that are associated with prostate cancer. Our model relies on deep learning, which has recently emerged as a groundbreaking and versatile machine learning technique for identifying complex patterns in images.

Methods

This retrospective, HIPAA-compliant study of 10 patients received IRB approval. All patients had biopsy-proven prostate cancer and underwent preoperative multiparametric prostate MRI before laparoscopic-assisted robotic prostatectomy. Clinical data (PSA, age, biopsy), multiparametric MRI (T1, T2, ADC, and DWI), and whole-mount pathological information (Gleason score, size, stage) were collected. Whole-mount histopathology was then performed, and tumor areas were manually outlined by a genitourinary pathologist. The MR images were used as input to the machine learning model, and the regions of tumor foci on whole-mount histology were used as its target output. Deep learning [1] was used to identify complex patterns emerging at imaging locations affected by cancer. The model was formalized as a convolutional neural network (CNN) [2], a hierarchical model in which each layer represents increasingly complex features. Learning was achieved by feeding input maps forward through alternating convolutional and pooling layers; during learning, the parameters were adjusted via back-propagation over labeled examples to maximize accuracy. The network thereby learned features that distinguish inputs belonging to the different output classes (i.e., healthy or cancerous tissue).
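As a rough illustration of this training procedure (a minimal sketch, not the authors' implementation), the following Python/PyTorch example feeds multiparametric patches through alternating convolutional and pooling layers and adjusts the parameters by back-propagation. The patch size, layer widths, optimizer settings, and synthetic data are all assumptions made for the example.

import torch
import torch.nn as nn

# Toy CNN with alternating convolutional and pooling layers, ending in a
# two-class (healthy vs. cancer) classifier; all sizes are illustrative.
model = nn.Sequential(
    nn.Conv2d(4, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 8 * 8, 2),
)

# Synthetic stand-ins for co-registered 32x32 patches of the 4 MRI maps
# (T1, T2, ADC, DWI); real labels would come from the pathologist's outlines.
patches = torch.randn(64, 4, 32, 32)
labels = torch.randint(0, 2, (64,))   # 0 = healthy, 1 = cancer

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(10):
    optimizer.zero_grad()
    loss = criterion(model(patches), labels)
    loss.backward()    # back-propagation over the labeled examples
    optimizer.step()   # adjust parameters to reduce the loss

In practice, training would iterate over mini-batches of patches sampled from the co-registered MRI volumes, with labels derived from the whole-mount histology outlines.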

Results

The mean age was 65 years and the mean PSA was 9.5 ng/ml. The preoperative TRUS Gleason score (GS) was 3+4 in 3/10 patients, 4+3 in 1/10, and 8-10 in 6/10. The average prostate volume was 25.2 cc (SD 9.8) and the average tumor diameter on MRI was 1.7 cm (SD 0.7). Preoperative mpMRI showed a single suspicious lesion with a PI-RADS score of 3 in 1 (10%) patient, 4 in 7 (70%) patients, and 5 in 2 (20%) patients. Four of 10 (40%) patients had stage T2 disease, 4/10 (40%) had T3a, and 2/10 (20%) had T3b disease. Postoperative whole-mount histopathology showed GS 3+4 in 5/10 (50%), 4+3 in 3/10 (30%), and 4+5 in 2/10 (20%) patients. The resulting CNN was composed of 4 convolutional layers (conv1 to conv4) interleaved with 4 pooling layers (pool1 to pool4). The first two pairs of layers (conv1, pool1, conv2, pool2) were specific to a single MRI parameter, while subsequent layers were pooled across modalities and thus captured complex associations across MRI maps.
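A minimal sketch of this multi-branch design, again with assumed layer widths and a hypothetical 32x32 patch size, is given below: the early layers process each MRI parameter separately, and the later layers operate on the fused feature maps.

import torch
import torch.nn as nn

class MultiParametricCNN(nn.Module):
    # conv1/pool1/conv2/pool2 are duplicated per MRI parameter; conv3/pool3
    # and conv4/pool4 are shared and see all modalities at once.
    def __init__(self, n_modalities=4, n_classes=2):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1),   # conv1
                nn.ReLU(),
                nn.MaxPool2d(2),                             # pool1
                nn.Conv2d(8, 16, kernel_size=3, padding=1),  # conv2
                nn.ReLU(),
                nn.MaxPool2d(2),                             # pool2
            )
            for _ in range(n_modalities)
        ])
        self.shared = nn.Sequential(
            nn.Conv2d(16 * n_modalities, 32, kernel_size=3, padding=1),  # conv3
            nn.ReLU(),
            nn.MaxPool2d(2),                                             # pool3
            nn.Conv2d(32, 64, kernel_size=3, padding=1),                 # conv4
            nn.ReLU(),
            nn.MaxPool2d(2),                                             # pool4
        )
        self.classifier = nn.Linear(64 * 2 * 2, n_classes)  # for 32x32 inputs

    def forward(self, x):
        # x: (batch, n_modalities, H, W), one channel per co-registered MRI map
        feats = [branch(x[:, i:i + 1]) for i, branch in enumerate(self.branches)]
        fused = torch.cat(feats, dim=1)   # fuse modalities by concatenation
        return self.classifier(self.shared(fused).flatten(1))

net = MultiParametricCNN()
logits = net(torch.randn(8, 4, 32, 32))   # 8 patches, 4 MRI maps each

Concatenation is only one plausible reading of "pooled across modalities"; averaging or max-pooling the per-branch feature maps would fit the description equally well.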

Discussion

The proposed framework demonstrated that deep learning can automatically discover imaging features associated with cancerous tissue in the prostate. Although the current study was limited by the low number of patients (10), we believe that the same formalism can be applied to a larger and more representative cohort. The low-level features learned in the first layer were observed to correspond to gradient filters, which typically detect edges, while higher-level features were related to more complex multimodal patterns and textures. These features could ultimately be highlighted during detection and provide valuable insight into why a specific image region was associated with a high likelihood of cancer. The approach could also play a role in the education of radiology trainees by drawing their attention to the specific image features that are predictive of cancer.
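To make such observations concrete, learned first-layer kernels can be visualized directly. The snippet below is a hypothetical example rather than part of the original study; it assumes the model variable from the earlier Methods sketch and uses matplotlib.

import matplotlib.pyplot as plt

# Display the 8 learned conv1 kernels for the first MRI map; gradient-like
# kernels act as edge detectors. Assumes `model` from the earlier sketch.
conv1_weights = model[0].weight.detach()         # shape: (8, 4, 3, 3)
fig, axes = plt.subplots(1, 8, figsize=(12, 2))
for i, ax in enumerate(axes):
    ax.imshow(conv1_weights[i, 0], cmap="gray")  # kernel for the first map
    ax.axis("off")
plt.show()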

Conclusion

This study demonstrates the feasibility of using deep learning to automatically characterize prostate cancer lesions. The translation and development of this method into a decision support tool may provide more objective criteria for clinicians during diagnosis.

Acknowledgements

No acknowledgement found.

References

[1] Y. Bengio, "Learning deep architectures for AI," Foundations and Trends in Machine Learning, vol. 2, no. 1, pp. 1-127, 2009.
[2] Y. LeCun, B. E. Boser, J. S. Denker, D. Henderson, R. Howard, W. E. Hubbard, and L. D. Jackel, "Handwritten digit recognition with a back-propagation network," in Advances in Neural Information Processing Systems, 1990, pp. 396-404.

