In this work, we demonstrate that images with pathologies can be transformed to their non-pathological, normal-appearing state using a cycle Generative Adversarial Network (cGAN). Potential applications of this work include surgical planning, radiation therapy planning, and longitudinal studies.
Introduction
In the presence of pathologies such as non-infiltrating tumors (e.g., gliomas, glioblastomas), the surrounding healthy tissue is compressed by the growing tumor. When planning therapy for these patients, and when studying tumor growth from baseline, it is very useful to know the brain structure prior to the presence of the tumor. A baseline non-pathological image of the patient also helps label the healthy surrounding tissue, so that critical functional regions can be spared during radiation therapy or surgery. Previous efforts have focused on biomechanical models or registration-based inversion methods to generate normative data [1,2]. However, these models rely on prior information about tumor location and require explicit delineation of the tumor ROI. In this work we explore the possibility of using Generative Adversarial Networks (GANs) to estimate normative data for a given set of pathological data. GANs are attractive for learning data style transfers since a GAN implicitly models the parametric form of the data distribution [3]. This relaxes the requirement that input and output data be exactly matched in one-to-one correspondence, which is critical in cases with pathology, where exactly matched data are nearly impossible to obtain. We show that, using a cycle-GAN (cGAN) [4], we can potentially transform a tumor pathology image into its normative baseline image without pathology. Preliminary results are presented for cases with brain tumors to generate normal-looking T2W brain images.
Subject Data: The data for this study came from two clinical cohorts: Cohort A was obtained from the TCGA database (N = 14, multiple vendors) [5,6] and Cohort B (N = 15) was obtained from another clinical site with GE MRI scanners (1.5T, 3.0T). An appropriate IRB approved all the studies.
Imaging Data: Only axial T2W protocol images from both cohorts were used for training and testing. T2W images were selected since they are ubiquitous in most clinical neuroimaging protocols. We sorted a total of 175 image slices with lesions present, as well as structurally similar-looking slices from normal cases (Fig. 1). Note that there is no one-to-one correspondence between these slices. Of these, 156 were used for training (26 cases, 10% validation) and 19 (3 cases) were used for testing the algorithm. Because the two domains are unpaired, training batches can be drawn from each set in an independent random order, as in the sketch below.
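A minimal sketch of such unpaired sampling, assuming the slices are stacked into NumPy arrays (the names `tumor_slices` and `normal_slices` are hypothetical, not from this study):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def unpaired_batches(tumor_slices, normal_slices, batch_size=4):
    """Yield batches drawn independently from each domain (no 1:1 pairing)."""
    i = rng.permutation(len(tumor_slices))   # shuffle pathological slices
    j = rng.permutation(len(normal_slices))  # shuffle normal slices separately
    n = min(len(i), len(j))
    for k in range(0, n - batch_size + 1, batch_size):
        yield (tumor_slices[i[k:k + batch_size]],
               normal_slices[j[k:k + batch_size]])
```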
cGAN Experiment: In this experiment, we used T2W images with lesions present as input to the generator and used a cycle-GAN (cGAN) [4] to generate the corresponding normative image as output (Fig. 2). The generator has five convolution layers (with ReLU activation), a merge layer, and tanh activation at the last layer. The discriminator has four convolution layers with ReLU activation. Mean absolute error was used as the loss for both the discriminator and the generator. The GAN was trained for 200 epochs, with convergence observed at around 60 epochs. The cGAN generator model was stored at each epoch, and the one with the best visual results on the validation cases was chosen for testing.
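For reference, the objective of the cycle-GAN formulation in [4] couples two adversarial terms with an L1 cycle-consistency term. With generators G: X → Y (pathological → normative) and F: Y → X and discriminators D_Y and D_X, the L1 norm in the cycle term corresponds to the mean absolute error used here, while the weight λ (not specified above) balances adversarial realism against cycle fidelity:

```latex
\mathcal{L}(G, F, D_X, D_Y) =
  \mathcal{L}_{\mathrm{GAN}}(G, D_Y, X, Y)
  + \mathcal{L}_{\mathrm{GAN}}(F, D_X, Y, X)
  + \lambda \, \mathcal{L}_{\mathrm{cyc}}(G, F),
\qquad
\mathcal{L}_{\mathrm{cyc}}(G, F) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)} \big[ \lVert F(G(x)) - x \rVert_1 \big]
  + \mathbb{E}_{y \sim p_{\mathrm{data}}(y)} \big[ \lVert G(F(y)) - y \rVert_1 \big].
```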
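A minimal Keras sketch of the networks described above, under one reading of the description: the layer counts, activations, and loss follow the text, while the filter counts, 3×3 kernels, single-channel 256×256 input, and the final 1-channel convolution after the merge are assumptions for illustration only.

```python
from tensorflow.keras import layers, Model, Input

def build_generator(shape=(256, 256, 1)):
    inp = Input(shape)
    x = inp
    for filters in (64, 128, 128, 128, 64):  # five conv layers with ReLU (per text)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Concatenate()([x, inp])       # merge layer (skip from input; assumed form)
    out = layers.Conv2D(1, 3, padding="same", activation="tanh")(x)  # tanh at last layer
    return Model(inp, out, name="generator")

def build_discriminator(shape=(256, 256, 1)):
    inp = Input(shape)
    x = inp
    for filters in (64, 128, 256, 512):      # four conv layers with ReLU (per text)
        x = layers.Conv2D(filters, 3, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2D(1, 3, padding="same")(x)  # patch-style real/fake map (assumed)
    return Model(inp, out, name="discriminator")

# Mean absolute error as the loss, as stated above; optimizer choice is assumed.
d = build_discriminator()
d.compile(optimizer="adam", loss="mae")
```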