2870

GAN based transformation of pathology data into its normative state for interventional neuro-therapy planning
Arathi Sreekumari1, Sandeep Kaushik1, Suresh Joel1, and Dattesh Shanbhag1

1GE Global Research, Bangalore, India

Synopsis

In this work, we demonstrate that images with pathologies can be transformed into their non-pathological, normal state using a cycle Generative Adversarial Network (cGAN). Potential applications of this work include surgical planning, radiation therapy planning, and longitudinal studies.

Introduction

In the presence of pathologies such as non-infiltrating tumors (e.g., gliomas, glioblastomas), the surrounding healthy tissue is compressed by the growing tumor. When planning therapy for these patients, and when studying tumor growth relative to baseline, it is very useful to know the brain structure prior to the appearance of the tumor. A baseline, non-pathological image of the patient also helps label the healthy surrounding tissue so that critical functional regions can be spared during radiation therapy or surgery. Previous efforts have focused on biomechanical models or registration-based inversion methods to generate normative data [1,2]. However, these models rely on prior information about tumor location and require explicit delineation of the tumor ROI. In this work, we explore the possibility of using Generative Adversarial Networks (GANs) to estimate normative data for a given set of pathological data. GANs are attractive for learning data style transfers since a GAN implicitly models the parametric form of the data distribution [3]. This relaxes the requirement that input and output data be exactly matched in one-to-one correspondence, which is critical for pathology cases, where exactly matched data are nearly impossible to obtain. We show that, using a cycle-GAN (cGAN) [4], we can potentially transform a tumor pathology image into its normative-state baseline image without pathology. Preliminary results are presented for brain tumor cases, generating normal-looking T2W brain images.

Experimental Details

Subject Data: The data for this study came from two clinical cohorts: Cohort A was obtained from the TCGA database (N = 14, multiple vendors) [5,6], and Cohort B (N = 15) was obtained from another clinical site with GE MRI scanners (1.5T, 3.0T). All studies were approved by the appropriate IRB.

Imaging Data: Only axial T2W images from both cohorts were used for training and testing. T2W images were selected since they are ubiquitous in clinical neuroimaging protocols. We curated a total of 175 image slices with lesions present, along with structurally similar-looking slices from normal cases (Fig. 1); note that there is no one-to-one correspondence between these slices. Of the 175 slices, 156 (26 cases, with 10% held out for validation) were used for training and 19 (3 cases) for testing the algorithm.
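
For illustration, a minimal Python sketch of the case-wise split described above is given below. The directory layout and file-naming convention are hypothetical assumptions; only the case and slice counts come from this abstract.

    # Hedged sketch of the case-wise data split described above. Directory
    # layout and file naming are hypothetical; only the counts
    # (26 train / 3 test cases, 10% validation) come from the abstract.
    import glob
    import random
    from collections import defaultdict

    # Assumed layout: data/pathology_t2w/<case_id>_<slice_idx>.png
    slices_by_case = defaultdict(list)
    for path in sorted(glob.glob("data/pathology_t2w/*.png")):
        case_id = path.split("/")[-1].split("_")[0]
        slices_by_case[case_id].append(path)

    cases = sorted(slices_by_case)
    random.seed(42)
    random.shuffle(cases)

    test_cases, train_cases = cases[:3], cases[3:]        # 3 test, 26 train cases
    train_slices = [s for c in train_cases for s in slices_by_case[c]]  # ~156 slices
    test_slices = [s for c in test_cases for s in slices_by_case[c]]    # ~19 slices

    n_val = int(0.10 * len(train_slices))                 # 10% held out for validation
    val_slices, train_slices = train_slices[:n_val], train_slices[n_val:]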

cGAN Experiment: In this experiment, we used T2W images with lesions present as the input to the generator and used a cycle-GAN [4] to generate the corresponding normative image as output (Fig. 2). The generator has five convolution layers (with ReLU activation), a merge layer, and tanh activation at the last layer. The discriminator has four convolution layers with ReLU activation. Mean absolute error (MAE) was used as the loss for both the discriminator and the generator. The network was trained for 200 epochs, with convergence observed at around 60 epochs. The generator model was stored after each epoch, and the model with the best visual results on the validation cases was chosen for testing.
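
To make the description concrete, a minimal sketch of such a generator and discriminator in Keras (tensorflow.keras) is shown below. Filter counts, kernel sizes, the input size, and the placement of the merge (skip) connection are assumptions; the abstract specifies only the layer counts, activations, and the MAE loss.

    # Minimal Keras sketch of the networks described above. Filter counts,
    # kernel sizes, input size, and the skip ("merge") placement are
    # assumptions; only layer counts, activations, and the MAE loss come
    # from the abstract.
    import tensorflow as tf
    from tensorflow.keras import layers, Model

    def build_generator(shape=(256, 256, 1)):
        inp = layers.Input(shape)
        x = inp
        for filters in (32, 64, 64, 32):            # conv layers 1-4, ReLU
            x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.Concatenate()([x, inp])          # merge layer (assumed skip connection)
        out = layers.Conv2D(1, 3, padding="same",
                            activation="tanh")(x)   # conv layer 5, tanh output
        return Model(inp, out, name="generator")

    def build_discriminator(shape=(256, 256, 1)):
        inp = layers.Input(shape)
        x = inp
        for filters in (32, 64, 64, 32):            # 4 conv layers, ReLU
            x = layers.Conv2D(filters, 3, strides=2, padding="same",
                              activation="relu")(x)
        out = layers.Conv2D(1, 3, padding="same")(x)  # patch-wise real/fake score
        return Model(inp, out, name="discriminator")

    discriminator = build_discriminator()
    discriminator.compile(optimizer="adam", loss="mae")  # MAE loss, per the abstract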

Results

Figure 3 shows preliminary experiments on transforming pathology MRI data into normative data with a GAN. In most cases, the tumor lesions (red arrows) are mitigated to a good extent (green arrows) in the cGAN-estimated normal images. In most cases, we also observe that normal tissue characteristics are preserved well in the cGAN-predicted normal images (yellow arrows). However, we do notice some shape distortions in the predicted images, especially at the edges (blue arrows in the predicted image, column 1), and we did observe some failures (red arrows in the cGAN-predicted images, row 2). The GAN was tested with a preliminary network architecture and default loss metrics; even so, the framework has a noticeable impact in generating reasonable baseline images. We expect that refining the network architecture of both the generator and the discriminator, optimizing the loss weights between the discriminator and generator networks, and adding more data will effectively address the false positives observed. The current cGAN model is trained for T2W contrast only; it will need to be tuned independently for other contrasts, or trained with data balanced across contrasts, to perform on other contrasts (e.g., contrast-enhanced T1W).

Conclusion

We demonstrate that a cycle-GAN can reasonably transform pathology T2W images into baseline, normal-like T2W images. The cGAN is a promising approach for generating normative data from pathology data without any external inputs. With further modifications to the cGAN architecture and hyper-parameters, as well as additional data, we should be able to further refine the results.

Acknowledgements

No acknowledgement found.

References

  1. Clatz O, et al. Realistic simulation of the 3-D growth of brain tumors in MR images coupling diffusion with biomechanical deformation. IEEE Trans Med Imaging. 2005.
  2. Ravishankar H, et al. Enhanced brain labeling by atlas registration in neuro-oncology using virtual tumor shrinking. ISMRM 2016, p. 4352.
  3. Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y. Generative adversarial nets. NIPS 2014.
  4. Zhu J-Y, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593 (2017).
  5. Scarpace L, Mikkelsen T, Cha S, Rao S, Tekchandani S, Gutman D, et al. Radiology data from The Cancer Genome Atlas Glioblastoma Multiforme [TCGA-GBM] collection. The Cancer Imaging Archive. 2016. http://doi.org/10.7937/K9/TCIA.2016.RNYFUYE9
  6. Pedano N, Flanders AE, Scarpace L, Mikkelsen T, Eschbacher JM, Hermes B, et al. Radiology data from The Cancer Genome Atlas Low Grade Glioma [TCGA-LGG] collection. The Cancer Imaging Archive. 2016. http://doi.org/10.7937/K9/TCIA.2016.L4LTD3TK

Figures

Figure 1: Basic workflow for baseline data generation from tumor data. The number indicates the training slice index. Note that there is no one-to-one match between input and output data, and mismatches may be present across slice locations. Care was taken to find a relatively similar-looking normal image for each pathology slice.

Figure 2: The experimental setup. The input to the generator network is a T2W image slice with pathology present (red arrow); the discriminator takes the generator output and the reference image as inputs. The generator changes its behavior according to changes in the discriminator loss. The cycle-GAN also reconstructs the input from the generated image and computes a cycle-consistency loss. The overall output of the generator is thus tuned by the discriminator to look similar to the reference image.
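
For readers who want the loss mechanics spelled out, a hedged sketch of one generator update in this scheme follows (pathology-to-normal generator G, normal-to-pathology generator F, discriminator D_normal, MAE losses as in the abstract). The cycle-loss weight and learning rate are assumptions, not values reported here.

    # Hedged sketch of one generator update combining adversarial and
    # cycle-consistency terms. G: pathology->normal, F: normal->pathology,
    # D_normal scores generated "normal" images. lambda_cyc and the learning
    # rate are assumed; the abstract reports only that MAE was used.
    import tensorflow as tf

    mae = tf.keras.losses.MeanAbsoluteError()
    opt_G = tf.keras.optimizers.Adam(2e-4)

    def generator_step(G, F, D_normal, x_pathology, lambda_cyc=10.0):
        with tf.GradientTape() as tape:
            fake_normal = G(x_pathology, training=True)
            score = D_normal(fake_normal, training=False)
            adv = mae(tf.ones_like(score), score)     # adversarial term (MAE)
            recon = F(fake_normal, training=True)     # cycle back toward the input
            cyc = mae(x_pathology, recon)             # cycle-consistency term
            loss = adv + lambda_cyc * cyc
        grads = tape.gradient(loss, G.trainable_variables)
        opt_G.apply_gradients(zip(grads, G.trainable_variables))
        return loss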

Figure 3: Sample examples of baseline image generation from pathology cases. Green arrows: good pathology-to-normal tissue generation. Red arrows: pathology. Yellow arrows: normal regions, highlighting that the cGAN preserves the texture of normal structures well. Blue arrows: changes in morphology at the edges.
