4885

Enhance One-minute EPIMix Brain MRI Exams with Unsupervised Cycle-Consistent Generative Adversarial Network
Jiang Liu1, Enhao Gong2,3, Stefan Skare4, and Greg Zaharchuk2

1Tsinghua University, Beijing, China, 2Stanford University, Stanford, CA, United States, 3Subtle Medical Inc., Menlo Park, CA, United States, 4Karolinska Institutet, Stockholm, Sweden

Synopsis

Recently, a new one-minute multi-contrast echo-planar imaging (EPI)-based sequence (EPIMix) was proposed for brain magnetic resonance imaging (MRI). Despite its ultra-fast acquisition, EPIMix images suffer from lower signal-to-noise ratio (SNR) and resolution than standard scans. In this study, we tested whether an unsupervised deep learning framework could improve the image quality of EPIMix exams. We evaluated the proposed network on T2 and T2-FLAIR images and achieved promising qualitative results. These results suggest that deep learning could enable high image quality for ultra-fast EPIMix exams, which could have great clinical utility, especially for patients with acute disease.

Introduction

Recently, a new one-minute multi-contrast echo-planar imaging (EPI)-based sequence (EPIMix)1 was proposed for brain magnetic resonance imaging (MRI). Despite its ultra-fast acquisition, EPIMix images suffer from lower signal-to-noise ratio (SNR) and resolution. Because of EPI distortion, it is difficult to improve EPIMix image quality with conventional deep-learning-based denoising and super-resolution techniques, which require well-registered pairs of low-quality and high-quality images for training. In this study, we propose an unsupervised image enhancement framework that performs noise reduction and super-resolution simultaneously, without the need for paired training data. We tested the network on T2 and T2-FLAIR image enhancement and achieved promising qualitative results.

Method

① Dataset: The study was approved by the local IRB committee. MR acquisitions were performed on a 3T MR system (GE Healthcare, Milwaukee, WI). The MR protocol included 6 EPIMix sequences (T1-FLAIR, T2-w, diffusion-weighted (DWI), apparent diffusion coefficient (ADC), T2*w, T2-FLAIR) and two standard sequences (T2, T2-FLAIR). EPIMix images are 256 × 256 pixels, with pixel spacing of 0.9375 mm × 0.9375 mm. Standard images are 512 × 512 pixels, with pixel spacing of 0.4688 mm × 0.4688 mm. 50 patients were scanned; 40 cases from the cohort were randomly chosen for training and the rest were used for testing. Figure 1 shows an example of the dataset (1 patient, 1 slice).
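The 40/10 patient-level split described above can be sketched as a simple random partition; the patient identifiers and random seed below are hypothetical, since the abstract does not specify how cases were indexed:

```python
import random

# Hypothetical patient identifiers; the abstract only states a cohort of 50 patients.
patients = [f"patient_{i:02d}" for i in range(50)]

random.seed(42)  # arbitrary seed, for reproducibility of the split
random.shuffle(patients)
train_cases, test_cases = patients[:40], patients[40:]
```

Splitting at the patient level (rather than the slice level) keeps all slices of a given patient in the same partition, which avoids leakage between training and test sets.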

② Network architecture: We follow the CycleGAN2 framework shown in Figure 2, which consists of two generators and two discriminators. We design Generator F to learn to synthesize high-resolution standard T2 images from low-resolution, noisy EPIMix T2 images, which achieves the goal of image enhancement. We do not increase the resolution of EPIMix images before enhancement, in order to reduce computation. The generator architecture is adapted from the style-transfer network4 and the discriminators are 70 × 70 PatchGANs3.
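A minimal PyTorch sketch of this setup is given below. The layer counts and channel widths are illustrative assumptions in the spirit of the cited architectures, not the exact networks used in the study; note that Generator F has one more upsampling stage than downsampling stage, so a 256 × 256 EPIMix input is mapped to a 512 × 512 output:

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual block as used in style-transfer-type generators."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch))

    def forward(self, x):
        return x + self.body(x)

class GeneratorF(nn.Module):
    """Maps a 256x256 EPIMix image to a 512x512 enhanced image:
    2 downsampling stages, residual blocks, then 3 upsampling stages (net 2x)."""
    def __init__(self, in_ch=1, base=64, n_res=6):
        super().__init__()
        layers = [nn.Conv2d(in_ch, base, 7, padding=3),
                  nn.InstanceNorm2d(base), nn.ReLU(True)]
        ch = base
        for _ in range(2):  # downsample: 256 -> 128 -> 64
            layers += [nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1),
                       nn.InstanceNorm2d(ch * 2), nn.ReLU(True)]
            ch *= 2
        layers += [ResBlock(ch) for _ in range(n_res)]
        for _ in range(3):  # upsample: 64 -> 128 -> 256 -> 512
            layers += [nn.ConvTranspose2d(ch, ch // 2, 3, stride=2,
                                          padding=1, output_padding=1),
                       nn.InstanceNorm2d(ch // 2), nn.ReLU(True)]
            ch //= 2
        layers += [nn.Conv2d(ch, in_ch, 7, padding=3), nn.Tanh()]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

class PatchGANDiscriminator(nn.Module):
    """70x70 PatchGAN: outputs a map of per-patch real/fake logits."""
    def __init__(self, in_ch=1, base=64):
        super().__init__()
        layers = [nn.Conv2d(in_ch, base, 4, stride=2, padding=1),
                  nn.LeakyReLU(0.2, True)]
        ch = base
        for mult, stride in [(2, 2), (4, 2), (8, 1)]:
            layers += [nn.Conv2d(ch, base * mult, 4, stride=stride, padding=1),
                       nn.InstanceNorm2d(base * mult), nn.LeakyReLU(0.2, True)]
            ch = base * mult
        layers += [nn.Conv2d(ch, 1, 4, stride=1, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```

The second generator G (standard-to-EPIMix direction) would mirror F with one more downsampling stage than upsampling stage, so that the cycle X → F(X) → G(F(X)) returns to the original 256 × 256 grid.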

③ Loss function: The loss for the network consists of an adversarial loss LGAN, a cycle-consistency loss Lcyc, and an identity loss Lid2. Let X denote the domain of EPIMix T2 images and Y the domain of standard T2 images; let x and y denote samples from X and Y, respectively; and let A denote the discriminator for X and B the discriminator for Y. The adversarial loss LGAN for the mapping F: X→Y can be formulated as:

LGAN(F, B, X, Y) = Ey~p(y)[log B(y)] + Ex~p(x)[log(1 − B(F(x)))].

The overall loss function is L = LGAN(F, B, X, Y) + LGAN(G, A, Y, X) + λ1Lcyc(F, G) + λ2Lid(F, G), where we empirically set λ1 = 10 and λ2 = 5. The adversarial losses encourage realistic outputs that lie in the target domains, while the cycle-consistency and identity losses ensure that the outputs are faithful to the inputs, without hallucinated features.
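The loss terms above can be written schematically as follows. This is a sketch with numpy arrays standing in for network outputs (discriminator outputs as probabilities in (0, 1), L1 distances for the cycle and identity terms); in training, the generators and discriminators of course optimize the adversarial terms in opposite directions, and the resolution change of F complicates the identity term, so treat this purely as a reading of the formulas:

```python
import numpy as np

def adversarial_loss(disc_real, disc_fake):
    """E[log B(y)] + E[log(1 - B(F(x)))], with discriminator outputs in (0, 1)."""
    return np.mean(np.log(disc_real)) + np.mean(np.log(1.0 - disc_fake))

def cycle_loss(x, x_cyc, y, y_cyc):
    """L1 distance between inputs and round-trip reconstructions G(F(x)), F(G(y))."""
    return np.mean(np.abs(x - x_cyc)) + np.mean(np.abs(y - y_cyc))

def identity_loss(y, f_of_y, x, g_of_x):
    """Each generator applied to its own target domain should act as an identity."""
    return np.mean(np.abs(y - f_of_y)) + np.mean(np.abs(x - g_of_x))

def total_loss(lgan_f, lgan_g, lcyc, lid, lam1=10.0, lam2=5.0):
    """L = LGAN(F,B,X,Y) + LGAN(G,A,Y,X) + lam1*Lcyc + lam2*Lid."""
    return lgan_f + lgan_g + lam1 * lcyc + lam2 * lid
```

The relative weights (λ1 = 10, λ2 = 5) make the cycle-consistency term dominant, which is what anchors the generator outputs to the input content in the absence of paired supervision.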

Results

We demonstrate the effectiveness of the proposed method on T2 and T2-FLAIR images. Examples of enhanced images are shown in Figure 3. Reference images are taken from nearby positions in the standard image volumes for visual comparison. Enhanced and reference images are 512 × 512 pixels, while EPIMix images are 256 × 256 pixels and are enlarged via bicubic interpolation for uniform display in Figure 3. The results show that the enhanced images are sharper and more similar to the standard images than the original EPIMix images. In addition, the pathologies in the original images are well preserved (see Figure 3(d)).

Discussion and Conclusion

Traditional deep-learning-based image enhancement methods require paired low-quality and high-quality images for training, which is usually infeasible in medical imaging due to registration and acquisition-protocol issues. In this study, we proposed an unsupervised image enhancement framework that performs noise reduction and super-resolution simultaneously, without the need for paired training data. The results suggest that deep learning can help ultra-fast MRI scans produce high-quality images, which could have great clinical utility, especially for patients with acute disease. However, although the algorithm produces visually pleasing results, its performance is hard to quantify since no paired data are available for quantitative evaluation. A reader study is desired for further assessment of algorithm performance.

Acknowledgements

No acknowledgement found.

References

1. Skare, S., Sprenger, T., Norbeck, O., Rydén, H., Blomberg, L., Avventi, E., & Engström, M. (2018). A 1‐minute full brain MR exam using a multicontrast EPI sequence. Magnetic Resonance in Medicine, 79(6), 3045-3054.

2. Zhu, J. Y., Park, T., Isola, P., & Efros, A. A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint.

3. Isola, P., Zhu, J. Y., Zhou, T., & Efros, A. A. (2017). Image-to-image translation with conditional adversarial networks. arXiv preprint.

4. Johnson, J., Alahi, A., & Fei-Fei, L. (2016). Perceptual losses for real-time style transfer and super-resolution. European Conference on Computer Vision. Springer, Cham.

Figures

Figure 1. Example of the dataset (1 patient, 1 slice). (a)-(f) EPIMix images; (g) (h) standard images.

Figure 2. Unsupervised image enhancement framework. We design Generator F to learn to synthesize high-resolution standard images from low-resolution, noisy EPIMix images, which achieves the goal of image enhancement.

Figure 3. Examples of enhanced images from the test set. (a) (b): T2 images; (c) (d): T2-FLAIR images. Reference images are taken from nearby positions in the standard image volumes for visual comparison. Enhanced and reference images are 512 × 512 pixels, while EPIMix images are 256 × 256 pixels and are enlarged via bicubic interpolation for uniform display.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)