
Do we still need mathematical modeling in the age of deep learning? A case-study comparison of the Tofts model versus end-to-end deep learning in prostate cancer segmentation
Alessandro Guida1, Peter Q Lee2, Steve Patterson3, Thomas Trappenberg2, Chris V Bowen1,4, Steven Beyea1,4, Jennifer Merrimen5, Cheng Wang5, and Sharon E Clarke4

1Biomedical Translational Imaging Centre, Halifax, NS, Canada, 2Faculty of Computer Science, Dalhousie University, Halifax, NS, Canada, 3Nova Scotia Health Research Foundation, Halifax, NS, Canada, 4Department of Diagnostic Radiology, Dalhousie University, Halifax, NS, Canada, 5Department of Pathology, Dalhousie University, Halifax, NS, Canada

Synopsis

The rise in popularity of deep learning is revolutionizing the way biomedical images are acquired, processed, and analyzed. Until a few years ago, extracting high-level understanding from biomedical images was a process restricted to highly trained professionals and often required multidisciplinary collaborations. In this work, we present a study comparing the performance of a model trained end-to-end using a novel deep learning architecture against a model trained on the corresponding state-of-the-art mathematically engineered feature. The results show that end-to-end deep learning significantly outperforms the mathematical model, suggesting that feature engineering will play a less important role in the coming years.

Introduction

In recent years, there has been growing interest in applying deep learning techniques such as convolutional neural networks (CNNs) to radiological images. An important benefit of CNNs over more classical machine learning methods is that they can be trained "end-to-end": the initial phase requiring deep domain-specific knowledge and hand-crafted feature engineering is omitted. Instead, the solution to the problem is learned directly from the training dataset, and derived features of interest, such as texture, do not need to be anticipated or pre-computed. We present a study comparing two deep learning models tasked with prostate cancer (PCa) segmentation. One model is trained end-to-end using dynamic contrast-enhanced (DCE) magnetic resonance imaging, a volumetric time-series of images rapidly acquired after the patient is injected with a gadolinium-based contrast agent [1]. The concentration of contrast agent within the bloodstream over time may behave differently in abnormal tissue [2]. The second model is instead trained on the Tofts pharmacokinetic model [3], a method for analyzing DCE that fits scalar parameters, such as Ktrans, to the time-series. Specifically, Ktrans represents the volume transfer constant and reflects the rate of contrast agent efflux from the blood plasma into the extravascular extracellular space [2]. We hypothesize that machine learning models that spatially and temporally interpret the raw DCE time-series will segment PCa better than baseline models that use Ktrans.
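For context, the standard Tofts model [3] expresses the tissue concentration Ct(t) as the convolution Ct(t) = Ktrans ∫ Cp(τ) exp(−kep(t − τ)) dτ, with kep = Ktrans/ve, where Cp(t) is the arterial input function and ve is the extravascular extracellular volume fraction. The Python sketch below illustrates a per-voxel fit of this model; it assumes uniform temporal sampling and a known Cp(t), and the function names, starting values, and bounds are illustrative rather than a description of our actual processing pipeline.

    import numpy as np
    from scipy.optimize import curve_fit

    def tofts_ct(t, ktrans, ve, cp):
        """Standard Tofts model: tissue concentration Ct(t) as the
        convolution of the arterial input function Cp(t) with the
        impulse response Ktrans * exp(-kep * t)."""
        kep = ktrans / ve                 # efflux rate constant (min^-1)
        dt = t[1] - t[0]                  # assumes uniform temporal sampling
        residue = np.exp(-kep * t)
        # Discrete approximation of the convolution integral
        return ktrans * np.convolve(cp, residue)[: len(t)] * dt

    def fit_ktrans(t, ct, cp):
        """Fit Ktrans and ve to a single voxel's concentration-time curve."""
        popt, _ = curve_fit(
            lambda tt, ktrans, ve: tofts_ct(tt, ktrans, ve, cp),
            t, ct,
            p0=[0.1, 0.2],                     # illustrative starting values
            bounds=([0.0, 1e-3], [5.0, 1.0]),  # plausible physiologic ranges
        )
        return popt                            # (Ktrans [min^-1], ve)

Applying such a fit voxel-wise over the prostate yields the kind of Ktrans maps used as input to the baseline model described below.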

Methods

The training dataset is composed of 16 patients who were referred for prostate MRI and subsequently underwent radical prostatectomy. An abdominal radiologist manually annotated tumor regions according to PI-RADS v2 and histopathological sections provided by genitourinary pathologists. The annotated DCE images were then transformed into Ktrans maps. For each 3D volumetric image, 204 2D image slices were processed as input to the model. Two experiments were performed (Figure 1), both tasked with segmenting tumor regions from non-tumor regions in PCa images. The first uses U-net [4] with Ktrans maps as input. The second uses a novel architecture that combines the U-net and convGRU architectures (named UconvGRU) to perform the segmentation in both the spatial and temporal domains (Figure 2); this model is designed to process the raw DCE time-series images. The models were trained using a DICE loss [5]. Model evaluation was performed using leave-one-out cross-validation, and the overall score was calculated by averaging. Significance was tested with an exact one-sided Wilcoxon test.
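The full UconvGRU configuration is shown in Figure 2; as a rough, self-contained illustration of its two ingredients, the PyTorch sketch below shows a soft DICE loss and a minimal convolutional GRU cell. The channel counts, kernel size, and gating details here are assumptions for illustration and do not reproduce the published architecture.

    import torch
    from torch import nn

    def soft_dice_loss(pred, target, eps=1e-6):
        """Soft DICE loss on sigmoid outputs (cf. V-Net [5]); minimizing
        it maximizes overlap between predicted and ground-truth masks."""
        pred, target = pred.flatten(1), target.flatten(1)   # (batch, H*W)
        inter = (pred * target).sum(dim=1)
        dice = (2 * inter + eps) / (pred.sum(dim=1) + target.sum(dim=1) + eps)
        return 1.0 - dice.mean()

    class ConvGRUCell(nn.Module):
        """One convolutional GRU step: GRU gating implemented with 2D
        convolutions so the hidden state keeps its spatial structure
        across DCE time points."""
        def __init__(self, in_ch, hid_ch, kernel=3):
            super().__init__()
            pad = kernel // 2
            self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, kernel, padding=pad)
            self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, kernel, padding=pad)

        def forward(self, x, h):
            # Update (z) and reset (r) gates from the current input and state
            z, r = torch.sigmoid(self.gates(torch.cat([x, h], 1))).chunk(2, 1)
            h_new = torch.tanh(self.cand(torch.cat([x, r * h], 1)))
            return (1 - z) * h + z * h_new    # blend of old and candidate state

In a UconvGRU-style network, such a cell would be applied recurrently across the DCE time points (with h initialized to zeros), aggregating temporal dynamics into spatially structured feature maps that can then pass through U-net-style encoding and decoding stages.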

Results

The results of the tumor versus non-tumor segmentation task can be visualized as prediction heatmaps, as shown in Figure 3. As shown in Table 1, the model trained with DCE images significantly outperforms the model trained with Ktrans maps.
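For completeness, the paired comparison reported above can be reproduced in SciPy as sketched below; dice_dce and dice_ktrans are hypothetical variable names holding the 16 per-patient DICE scores from the leave-one-out evaluation (the actual values are not reproduced here).

    from scipy.stats import wilcoxon

    def compare_models(dice_dce, dice_ktrans):
        """One-sided Wilcoxon signed-rank test of whether the end-to-end
        DCE model scores higher than the Ktrans baseline. With 16 paired
        samples and no ties, SciPy uses the exact null distribution by
        default."""
        return wilcoxon(dice_dce, dice_ktrans, alternative="greater")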

Conclusions

Feature engineering relies on domain-specific knowledge about the data and the task to generate data transformations that condense predictive value. It played a critical role in the years preceding the deep learning revolution, because classical machine learning algorithms are "shallow" and lack the hypothesis-space depth required to learn complex tasks. Moreover, the spatial and temporal independence assumptions required to fit simple models grossly oversimplify the data. Deep learning is changing the way models are built in the biomedical field. The results of the work presented here indicate that an end-to-end approach, without any prior pharmacokinetic or physiological knowledge, can outperform models that rely on well-studied mathematically engineered features. The main advantage is that deep subject-specific knowledge is no longer required: given enough data, features can be learned directly from the raw data. Feature engineering will still find many applications, especially when problem complexity is very high and data availability is a limiting factor; but as data sharing and generative adversarial network (GAN)-synthesized images become more popular, we believe this need will eventually be replaced by fully end-to-end approaches.

Acknowledgements

This work received funding from the Canada Summer Jobs, Atlantic Innovation Fund grant, Brain Canada, NSERC discovery, GE Healthcare, Radiology Research Foundation, Nova Scotia Health Authority Research Fund, and the Nova Scotia Cooperative Education Incentive. Thanks are also extended to Nathan Murtha, Manjari Murthy, and Jessica Luedi for approaching patients in order to gain their consent for their scans to be included in this research, as well as Liette Connor and Ricardo Rendon for referring patients to the research study.

References

1. O’Connor, J. P. B. et al. Dynamic contrast-enhanced imaging techniques: CT and MRI. Br. J. Radiol. 84, S112–S120 (2011).

2. Weinreb, J. C. et al. PI-RADS Prostate Imaging – Reporting and Data System: 2015, Version 2. Eur. Urol. 69, 16–40 (2016).

3. Tofts, P. S. T1-weighted DCE Imaging Concepts: Modelling, Acquisition and Analysis.

4. Ronneberger, O., Fischer, P. & Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv:1505.04597 [cs] (2015).

5. Milletari, F., Navab, N. & Ahmadi, S.-A. V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation. arXiv:1606.04797 [cs] (2016).

Figures

Figure 1. Experimental design

Figure 2. UconvGRU network architecture designed to process raw DCE image input.

Figure 3. Sample probability maps, generated by the models under leave-one-out evaluation, overlaid on 2D slices of the prostate. The thin black outline indicates ground-truth tumor labels. Red indicates regions with high predicted cancer probability, while green indicates tissue predicted to be healthy.

Table 1. Model performance on the cancer segmentation task, evaluated with the DICE score (i.e., F1 score). Significance was tested with an exact one-sided Wilcoxon test.
