We developed a novel method to generate 3D isotropic super-resolution (SR) prostate MR images using a class of machine learning algorithms known as generative adversarial networks (GANs). The GANs take slices of 3D slice-to-volume reconstruction (SVR) images as inputs, and super-resolution is enforced by training the discriminator network to distinguish the generated output from in-plane T2 FSE images. We use unpaired GANs because SVR slices do not usually have corresponding super-resolution counterparts. The result is a generated continuous 3D volume with super-resolution in all three planes at an isotropic voxel size.
Patient data: This study used 2D T2w FSE prostate images acquired from 346 patients in the ProstateX challenge,6 with an in-plane resolution of 0.5x0.5 mm2 and a slice thickness of 3.6 mm. Isotropic 3D volumes were reconstructed from the three-view T2w images using slice-to-volume reconstruction (SVR) and patch-to-volume reconstruction (PVR) techniques, yielding an isotropic voxel size of 0.75 mm3. Single-shot fast spin echo (SSFSE, also known as HASTE or SSH-TSE) images were acquired under IRB approval in three orthogonal planes at 3T (Discovery MR750, GE Healthcare), with in-plane and through-plane resolutions of 0.8 mm and 4 mm, respectively, and 50% overlap between adjacent slices. A volumetric HyperCube image (also known as 3D FSE/TSE, SPACE, or VISTA) was acquired in the same subjects to serve as a high-resolution reference.
Training set: Of the 346 patients, 300 were used for training. Training group A contains 79,459 slices of low-resolution (LR) SVR images; training group B contains 19,002 slices of high-resolution (HR) FSE images. The remaining 46 patients, with 12,051 SVR slices and 2,725 FSE slices, were reserved for testing (Table 1).
DL model: SR images are generated by the cycleGAN proposed by Zhu et al.7 The cycleGAN contains two generator-discriminator pairs trained on unpaired data; we added 50% dropout to prevent overfitting. The A-to-B generator (GAB) learns to translate SVR slices to FSE slices, and the A-to-B discriminator (DAB) learns to distinguish the generated output from real T2w slices. By discriminating the output of GAB, DAB acts as a loss function that provides the direction of improvement for GAB. Meanwhile, the GBA/DBA pair works in the reverse direction to provide cycle consistency, which ensures that GAB generates output faithful to the input slices. The networks were trained for 100 epochs each for axial, coronal, and sagittal slices.
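The cycleGAN objective described above can be sketched as follows. This is a minimal NumPy illustration of the loss structure only (least-squares adversarial terms plus L1 cycle consistency, as in Zhu et al.); the generators and discriminators here are hypothetical identity/constant placeholders standing in for the trained networks.

```python
import numpy as np

def lsgan_loss(d_out, target):
    # Least-squares adversarial loss: discriminator output vs. real/fake label
    return np.mean((d_out - target) ** 2)

def cycle_loss(x, x_reconstructed):
    # L1 cycle-consistency term, e.g. ||G_BA(G_AB(x)) - x||_1
    return np.mean(np.abs(x_reconstructed - x))

def cyclegan_generator_loss(x_a, x_b, G_AB, G_BA, D_A, D_B, lam=10.0):
    # Each generator tries to make its output look "real" (label 1) to
    # the discriminator of the target domain
    fake_b = G_AB(x_a)
    fake_a = G_BA(x_b)
    adv = lsgan_loss(D_B(fake_b), 1.0) + lsgan_loss(D_A(fake_a), 1.0)
    # Cycle consistency in both directions ties output back to the input
    cyc = cycle_loss(x_a, G_BA(fake_b)) + cycle_loss(x_b, G_AB(fake_a))
    return adv + lam * cyc

# Placeholder "networks": identity generators, constant discriminators
identity = lambda x: x
disc = lambda x: np.full(x.shape, 0.5)

x_a = np.random.rand(4, 4)
x_b = np.random.rand(4, 4)
loss = cyclegan_generator_loss(x_a, x_b, identity, identity, disc, disc)
```

With identity generators, the cycle terms vanish and only the adversarial terms remain, which makes the role of each component easy to inspect.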
DL data test: The sequestered SVR volumes were resliced into three orthogonal stacks, and the pre-trained models were applied. The resulting SR stacks were registered and averaged to generate a 3D SR volume. The test was applied to 46 cases from ProstateX and 2 cases of SSFSE images, with FSE images and HyperCube images as ground truth, respectively.
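The multi-orientation fusion step can be sketched as below: the volume is resliced along each axis, a per-slice model is applied, and the three resulting stacks are averaged voxel-wise. The `model` functions here are hypothetical identity placeholders for the trained per-orientation generators, and registration is omitted under the assumption that the stacks are already aligned in volume space.

```python
import numpy as np

def apply_slicewise(volume, model, axis):
    # Reslice the volume along `axis`, run the 2D model on each slice,
    # and stack the results back into a volume of the same shape
    slices = np.moveaxis(volume, axis, 0)
    out = np.stack([model(s) for s in slices], axis=0)
    return np.moveaxis(out, 0, axis)

def fuse_orthogonal(volume, models):
    # models: dict mapping axis index (0, 1, 2 for the three orthogonal
    # orientations by convention here) to a per-slice SR model
    stacks = [apply_slicewise(volume, models[ax], ax) for ax in (0, 1, 2)]
    # Voxel-wise average of the three orientation-specific SR volumes
    return np.mean(stacks, axis=0)

# With identity placeholders, fusion reproduces the input volume exactly
identity = lambda s: s
vol = np.random.rand(8, 8, 8)
fused = fuse_orthogonal(vol, {0: identity, 1: identity, 2: identity})
```

Averaging the three orientations is what restores through-plane detail: each stack contributes high resolution in its own in-plane directions.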
Comparison analysis: T2w 2D images and 3D HyperCube volumes were used as the reference standard for in-plane and overall image quality analysis, respectively. Root mean square error (rMSE), structural similarity index (SSIM), and perceptual index (PI)8 were used to quantitatively evaluate the quality of the reconstructed SR images.
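The first two metrics can be computed as sketched below. This is a minimal NumPy implementation of rMSE and the global (single-window) form of SSIM with the standard stabilizing constants, not the windowed implementation typically used in practice; PI is omitted since it relies on learned no-reference quality models.

```python
import numpy as np

def rmse(ref, img):
    # Root mean square error between reference and reconstructed images
    diff = ref.astype(float) - img.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def ssim_global(ref, img, data_range=1.0):
    # Global (single-window) SSIM with the usual constants
    # c1 = (0.01*L)^2 and c2 = (0.03*L)^2, where L is the dynamic range
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), img.mean()
    var_x, var_y = ref.var(), img.var()
    cov = ((ref - mu_x) * (img - mu_y)).mean()
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return float(num / den)

# Sanity check on synthetic data: a noisy copy should score worse
# than the reference compared against itself
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
noisy = np.clip(ref + 0.1 * rng.standard_normal((32, 32)), 0.0, 1.0)
```

For identical images rMSE is 0 and SSIM is 1; both degrade monotonically with added noise, which is what makes them useful as full-reference quality scores.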
1. Westphalen, A. C., Noworolski, S. M., Harisinghani, M., Jhaveri, K. S., Raman, S. S., Rosenkrantz, A. B., Kurhanewicz, J. High-Resolution 3-T Endorectal Prostate MRI: A Multireader Study of Radiologist Preference and Perceived Interpretive Quality of 2D and 3D T2-Weighted Fast Spin-Echo MR Images. American Journal of Roentgenology. 2016; 206(1), 86-91. doi:10.2214/ajr.14.14065
2. Marks, L., Young, S., & Natarajan, S. MRI–ultrasound fusion for guidance of targeted prostate biopsy. Current Opinion in Urology, 2013; 23(1), 43-50. doi:10.1097/mou.0b013e32835ad3ee
3. Alansary, A., Rajchl, M., Mcdonagh, S. G., Murgasova, M., Damodaram, M., Lloyd, D. F., Kainz, B. PVR: Patch-to-Volume Reconstruction for Large Area Motion Correction of Fetal MRI. IEEE Transactions on Medical Imaging. 2017; 36(10), 2031-2044. doi:10.1109/tmi.2017.2737081
4. Cao, W., Lian, Y., Liu, D., Li, F., Zhu, P., & Zhou, Z. Rectal cancer restaging using 3D CUBE vs. 2D T2-weighted technique after neoadjuvant therapy: A diagnostic study. Gastroenterology Report. 2016; doi:10.1093/gastro/gow039
5. Mccann, M. T., Jin, K. H., & Unser, M. Convolutional Neural Networks for Inverse Problems in Imaging: A Review. IEEE Signal Processing Magazine. 2017; 34(6), 85-95. doi:10.1109/msp.2017.2739299
6. Litjens, G., Debats, O., Barentsz, J., Karssemeijer, N., & Huisman, H. "ProstateX Challenge data". The Cancer Imaging Archive. 2017; https://doi.org/10.7937/K9TCIA.2017.MURS5CL
7. Zhu, J., Park, T., Isola, P., & Efros, A. A. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. 2017 IEEE International Conference on Computer Vision (ICCV). 2017; doi:10.1109/iccv.2017.244
8. Blau, Y., Mechrez, R., Timofte, R., Michaeli, T., & Zelnik-Manor, L. PIRM Challenge on Perceptual Image Super-resolution. arXiv:1809.07517v2. 2018.
Table 1. SVR images (low-resolution input) were divided into training group A and testing group A; T2w FSE images (ground truth) were divided into training group B and testing group B. The numbers of slices for the axial, coronal, and sagittal networks are listed.