0879

3D Super-resolution Prostate MRI using Generative Adversarial Networks and unpaired data
Yucheng Liu1, Yulin Liu2, Daniel Litwiller3, Rami Vanguri4, Michael Zenkay Liu5, Richard Ha5, Hiram Shaish5, and Sachin Jambawalikar5

1Applied Physics and Applied Mathematics, Columbia University, New York, NY, United States, 2Information and Computer Engineering, Chung Yuan Christian University, Taoyuan, Taiwan, 3Global MR Applications and Workflow, GE Healthcare, New York, NY, United States, 4Data Science Institute, Columbia University, New York, NY, United States, 5Radiology, Columbia University Medical Center, New York, NY, United States

Synopsis

We developed a novel method to generate 3D isotropic super-resolution prostate MR images using a class of machine learning algorithms known as Generative Adversarial Networks (GANs). We use GANs to generate super-resolution images with 3D SVR image slices as inputs. Super-resolution is enforced by training the discriminator network to distinguish the output images from in-plane T2 FSE images. We use unpaired GANs, since slices of 3D SVR volumes do not usually have corresponding super-resolution images. The result is a continuous 3D volume with super-resolution in all three planes at an isotropic voxel size.

Introduction

Previous work has shown that 3D isotropic T2-weighted (T2w) prostate MRI yields diagnostic results comparable to 2D acquisitions in identifying and staging prostate cancer.1 Volumetric images may also find additional utility in guiding interventions, for example in ultrasound-MRI fusion biopsies.2 However, current 3D reconstruction techniques such as slice-to-volume or patch-to-volume methods suffer from an inevitable loss of high-frequency detail during reconstruction (Figure 1).3,4 Convolutional neural networks (CNNs) have been widely used to generate super-resolution (SR) images.5 However, CNN-based SR techniques require a high-resolution 3D isotropic ground-truth image paired with a downgraded low-resolution (LR) image, and CNNs are sensitive to the specific algorithm used to simulate the LR images. In this study, we used a cycleGAN architecture, which consists of two pairs of generators and discriminators, to train three independent networks on axial, coronal and sagittal T2w images together with resliced SVR volumes. 3D registration was performed to minimize anatomical misalignment during the reconstruction of SR slices into a single 3D volume. Finally, we applied the pre-trained networks to two cases of three-plane Single-Shot Fast Spin Echo (SSFSE) images and generated 3D SR volumes with potentially superior image quality compared with the in-plane SSFSE images as acquired.

Methods

Patient data: This study utilized 2D T2w FSE prostate images acquired from 346 patients included in the ProstateX challenge, with an in-plane resolution of 0.5x0.5 mm2 and a slice thickness of 3.6 mm.6 Isotropic 3D volumes were reconstructed from the three-view T2w images using slice-to-volume reconstruction (SVR) and patch-to-volume reconstruction (PVR), with a resulting isotropic voxel size of (0.75 mm)3. Single-Shot Fast Spin Echo (SSFSE; also known as HASTE or SSH-TSE) images were acquired under an IRB-approved protocol in three orthogonal planes at 3T (Discovery MR750, GE Healthcare), with in-plane and through-plane resolutions of 0.8 mm and 4 mm, respectively, and 50% overlap between adjacent slices. A volumetric HyperCube image (also known as 3D FSE/TSE, SPACE, or VISTA) was acquired in the same subject to serve as a high-resolution reference.

Training set: Images from 300 patients were used for training: training group A contained 79,459 slices of LR SVR images, and training group B contained 19,002 slices of HR FSE images. The remaining 46 patients, with 12,051 SVR slices and 2,725 FSE slices, were reserved for testing (Table 1).

DL model: SR images were generated using the cycleGAN proposed by Zhu et al.7 The cycleGAN contains two pairs of generators and discriminators trained on unpaired data. We added 50% dropout to prevent overfitting. The AtoB generator (GAB) learns to translate SVR slices to FSE-like slices, and the AtoB discriminator (DAB) learns to distinguish the generated output from real T2w slices. By discriminating the output of GAB, DAB acts as a loss function that provides a direction for improving GAB. Meanwhile, the GBA/DBA pair works in the reverse direction to provide cycle consistency, which ensures that GAB generates output consistent with the input slices. The network was trained for 100 epochs each on axial, coronal and sagittal slices.
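The interplay of adversarial and cycle-consistency terms described above can be sketched as follows. This is a minimal illustrative sketch in pure Python with toy 1-D "slices" and caller-supplied generator/discriminator callables; the actual work uses convolutional networks as in Zhu et al., and the function names (`cycle_gan_losses`, `l1`) and the least-squares adversarial form are assumptions for illustration, not the authors' implementation.

```python
def l1(a, b):
    """Mean absolute error between two equal-length intensity vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_gan_losses(x_a, x_b, g_ab, g_ba, d_a, d_b, lam=10.0):
    """Return (adversarial_loss, total_generator_loss) for one unpaired pair.

    g_ab translates domain A (SVR) -> B (FSE); g_ba translates the reverse.
    d_a / d_b are discriminators returning a realness score in [0, 1].
    lam weights the cycle-consistency penalty.
    """
    fake_b = g_ab(x_a)            # SVR slice rendered as an FSE-like slice
    fake_a = g_ba(x_b)
    # Least-squares adversarial term: generators want scores near 1
    adv = (1 - d_b(fake_b)) ** 2 + (1 - d_a(fake_a)) ** 2
    # Cycle consistency: A -> B -> A should reproduce the original input
    cyc = l1(g_ba(fake_b), x_a) + l1(g_ab(fake_a), x_b)
    return adv, adv + lam * cyc
```

With identity generators the cycle term vanishes, which is the intuition behind cycle consistency: translation must be invertible, so GAB cannot hallucinate arbitrary content.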

DL data test: The sequestered SVR volumes were resliced into three orthogonal stacks, and the pre-trained models were applied to each. The resulting SR stacks were registered and averaged to generate a single 3D SR volume. The test was applied to 46 ProstateX cases and 2 SSFSE cases, with FSE and HyperCube images as ground truth, respectively.
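The final fusion step, averaging the three registered orthogonal SR stacks into one volume, can be sketched as a voxel-wise mean. This is an illustrative pure-Python sketch on nested lists (the function name `fuse_volumes` is hypothetical); it assumes the stacks have already been registered and resampled onto a common isotropic grid, which in practice is done with a 3D registration tool.

```python
def fuse_volumes(volumes):
    """Voxel-wise mean of co-registered 3D volumes (nested lists, same shape).

    Each volume is indexed volumes[k][z][y][x]; the result has the same
    z/y/x shape as the inputs.
    """
    nz = len(volumes[0])
    ny = len(volumes[0][0])
    nx = len(volumes[0][0][0])
    n = len(volumes)
    return [[[sum(v[z][y][x] for v in volumes) / n
              for x in range(nx)]
             for y in range(ny)]
            for z in range(nz)]
```

Averaging after registration suppresses residual slice-direction artifacts that any single stack retains, at the cost of slight blurring where the registration is imperfect.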

Comparison analysis: T2w 2D images and 3D HyperCube volumes were used as the reference standard for in-plane and overall image-quality analysis, respectively. Root Mean Square Error (rMSE), Structural Similarity Index (SSIM) and Perceptual Index (PI)8 were used to quantitatively evaluate the reconstructed SR image quality.
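For reference, the first two metrics can be computed as below. This is a pure-Python sketch on flattened intensity lists; the single-window (global) SSIM shown here is a simplification of the usual sliding-window form, and the constants `c1`/`c2` assume 8-bit intensities ((0.01x255)^2 and (0.03x255)^2). PI is omitted because it requires learned no-reference quality models.

```python
import math

def rmse(ref, img):
    """Root mean square error between two equal-length intensity lists."""
    return math.sqrt(sum((r - i) ** 2 for r, i in zip(ref, img)) / len(ref))

def ssim_global(x, y, c1=6.5025, c2=58.5225):
    """Single-window (global) SSIM between two equal-length intensity lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n          # variance of x
    vy = sum((b - my) ** 2 for b in y) / n          # variance of y
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))
```

Identical images give an SSIM of 1.0 and an rMSE of 0; lower rMSE and higher SSIM indicate closer agreement with the reference standard.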

Results

The 2D T2-weighted images presented here have a dense in-plane resolution of 0.5x0.5 mm2 but a lower through-plane resolution of 3.6 mm. The reconstructed 3D T2w volumes have an isotropic voxel size of (0.75 mm)3, with reduced image quality compared to the 2D T2w images. CycleGAN SR reconstruction of the three LR image stacks yielded a single HR volume. Figure 3 displays, from left to right, the PVR and SVR images in axial view, the cycleGAN-reconstructed HR image, and the standard 3D FSE HR volume. The quantitative results for the SSFSE and T2w cases are shown in Table 2.

Discussion/Conclusion

Our proposed method generates an isotropic 3D SR volume with a voxel size of (0.75 mm)3 and superior image quality compared with SVR/PVR 3D images. Future work will include a qualitative assessment of normal and abnormal prostate glands by radiologists to verify the diagnostic value of the 3D SR volumes.


References

1. Westphalen, A. C., Noworolski, S. M., Harisinghani, M., Jhaveri, K. S., Raman, S. S., Rosenkrantz, A. B., Kurhanewicz, J. High-Resolution 3-T Endorectal Prostate MRI: A Multireader Study of Radiologist Preference and Perceived Interpretive Quality of 2D and 3D T2-Weighted Fast Spin-Echo MR Images. American Journal of Roentgenology. 2016; 206(1), 86-91. doi:10.2214/ajr.14.14065

2. Marks, L., Young, S., & Natarajan, S. MRI–ultrasound fusion for guidance of targeted prostate biopsy. Current Opinion in Urology, 2013; 23(1), 43-50. doi:10.1097/mou.0b013e32835ad3ee

3. Alansary, A., Rajchl, M., Mcdonagh, S. G., Murgasova, M., Damodaram, M., Lloyd, D. F., Kainz, B. PVR: Patch-to-Volume Reconstruction for Large Area Motion Correction of Fetal MRI. IEEE Transactions on Medical Imaging. 2017; 36(10), 2031-2044. doi:10.1109/tmi.2017.2737081

4. Cao, W., Lian, Y., Liu, D., Li, F., Zhu, P., & Zhou, Z. Rectal cancer restaging using 3D CUBE vs. 2D T2-weighted technique after neoadjuvant therapy: A diagnostic study. Gastroenterology Report. 2016; doi:10.1093/gastro/gow039

5. Mccann, M. T., Jin, K. H., & Unser, M. Convolutional Neural Networks for Inverse Problems in Imaging: A Review. IEEE Signal Processing Magazine. 2017; 34(6), 85-95. doi:10.1109/msp.2017.2739299

6. Litjens, G., Debats, O., Barentsz, J., Karssemeijer, N., & Huisman, H. "ProstateX Challenge data", The Cancer Imaging Archive. 2017; https://doi.org/10.7937/K9TCIA.2017.MURS5CL.

7. Zhu, J., Park, T., Isola, P., & Efros, A. A. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. 2017 IEEE International Conference on Computer Vision (ICCV). 2017; doi:10.1109/iccv.2017.244

8. Yochai Blau, Roey Mechrez, Radu Timofte, Tomer Michaeli, Lihi Zelnik-Manor. PIRM Challenge on Perceptual Image Super-resolution, arXiv:1809.07517v2. 2018.

Figures

Figure 1. Axial (A), Coronal (B) and Sagittal (C) view of 2D T2w FSE images from ProstateX; Axial (D), Coronal (E) and Sagittal (F) view of 3D Slice-to-volume images reconstructed from FSE images.

Figure 2. Workflow for super-resolution reconstruction

Figure 3. Prostate SSFSE images after Patch-to-volume reconstruction (A), Slice-to-volume reconstruction (B), cycleGAN super-resolution (C). HyperCube image as the reference image for the same subject (D).

Table 1. SVR images as low-resolution input were divided into a training group A and a testing group A. T2w FSE images as ground-truth were divided into a training group B and a testing group B. The number of slices for the Axial, Coronal and Sagittal network are listed here.


Table 2. Root Mean Square Error (rMSE), Structural Similarity Index (SSIM) and Perceptual Index (PI) values for the T2w FSE ProstateX and SSFSE cases. 1PI can only be computed on 2D images; we therefore compared the central slice of each volume.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)