4888

Will a Convolutional Neural Network Trained for Non-contrast Water-Fat Separation Generalize to Post-Contrast Acquisitions?
James W Goldfarb1 and Jie Jane Cao2

1St. Francis Hospital, Roslyn, NY, United States, 2St. Francis Hospital, Roslyn, NY, United States

Synopsis

A deep learning CNN trained using precontrast images generalizes to post-contrast images, providing equivalent image quality with fewer swap artifacts. For widespread adoption of deep learning methods, it is important that they have the capability to generalize beyond training data for flexible usage. This work provides important evidence that magnetic resonance deep learning water-fat separation can be used in a variety of settings.

Purpose

To determine the performance of a deep learning water-fat separation method trained exclusively with pre-contrast dark-blood images and then applied to contrast-enhanced bright-blood images. The unexpected bright heart chambers and contrast agent-induced frequency shifts could affect water and fat identification and quantification.

Methods

This retrospective study used magnitude image data from a database of research cardiovascular images. 31 chronic myocardial infarction patients and 17 normal controls were included in this study, providing 1000 pre-contrast training acquisitions and 446 matched pre- and post-contrast acquisition pairs. All MR examinations were performed at 1.5 T (Magnetom Avanto, Siemens Healthineers, Erlangen, Germany) with the subject in the supine position and standard ("tune-up") magnetic field shim. The pulse sequence was a spoiled multiple gradient-echo sequence (1 slice per breath-hold, repetition time = 20 ms; 12 echo times, 2.4-15.5 ms (1.2 ms spacing), flip angle = 20°, bandwidth = 1860 Hz/pixel, in-plane spatial resolution = 2.3 x 1.7 mm, slice thickness = 8 mm). Images were acquired in multiple long-axis and contiguous short-axis planes before and after contrast agent administration. The post-contrast acquisition began 5 minutes after bolus contrast agent administration (0.15 mmol/kg gadopentetate dimeglumine, Magnevist, Bayer Healthcare, Wayne, NJ) and was followed by late gadolinium-enhancement (LGE) imaging.
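As an illustration of the acquisition described above, the sketch below simulates the echo-time series and a single-voxel two-component water-fat magnitude signal. It is not the authors' code; the single fat peak at -217 Hz (the approximate 3.4 ppm chemical shift at 1.5 T) and the tissue fractions and R2* value are illustrative assumptions.

```python
import numpy as np

# 12 echo times: first echo at 2.4 ms, 1.2 ms spacing (per the protocol above;
# the last echo is then ~15.6 ms, the reported 15.5 ms presumably reflects rounding)
n_echoes = 12
te = 2.4e-3 + 1.2e-3 * np.arange(n_echoes)  # seconds

def multiecho_magnitude(water, fat, r2star, df_fat=-217.0, te=te):
    """Magnitude of a two-component water-fat signal at each echo time.

    Assumes a single fat resonance offset df_fat (Hz) and mono-exponential
    R2* decay; a simplification of multi-peak fat models.
    """
    signal = (water + fat * np.exp(2j * np.pi * df_fat * te)) * np.exp(-r2star * te)
    return np.abs(signal)

# Example voxel: 70% water, 30% fat, R2* = 30 1/s
mag = multiecho_magnitude(water=0.7, fat=0.3, r2star=30.0)
print(mag.shape)  # (12,)
```

Magnitude-only data of this form (12 values per voxel) is what both the GraphCut reference method and the CNN operate on in this study.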

Conventional water-fat separation was performed via a multi-point fat-water separation with R2* using a graph cut field map estimation algorithm (GraphCut) [1] with the ISMRM water-fat Toolbox [2] to provide pre-contrast training data and post-contrast ground-truth for comparison to the deep learning method. A U-Net convolutional neural network (CNN) [3] was used for deep learning water-fat separation. The input to the CNN was the 12 pre-contrast magnitude images from the 12 echo times, presented as 12 channels. The output of the CNN was two images (water only and fat only). The implementation was realized using Keras 2.0 and TensorFlow 1.2 (both freely available software). Post-contrast water and fat images not used for training (n=446, [test set]) were "predicted" using the trained CNN. Images were assessed for water-fat swap artifacts and visualization of ischemic cardiomyopathy and intramyocardial fat deposition (fatty metaplasia). The structural similarity (SSIM) index and peak signal-to-noise ratio (PSNR) of all images were measured and used to study and compare the performance of the deep learning and GraphCut methods for pre- and post-contrast acquisitions. A p-value < 0.05 was regarded as statistically significant.
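The 12-channel-in, 2-channel-out U-Net setup described above can be sketched as follows. This is a minimal two-level sketch in modern tf.keras (the abstract used Keras 2.0/TensorFlow 1.2); the layer widths, depth, image size, and activations are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_unet(height=128, width=128, n_echoes=12, n_out=2, base=16):
    """Tiny U-Net: n_echoes magnitude images in, water and fat images out."""
    inp = layers.Input((height, width, n_echoes))
    # Encoder
    c1 = layers.Conv2D(base, 3, padding="same", activation="relu")(inp)
    c1 = layers.Conv2D(base, 3, padding="same", activation="relu")(c1)
    p1 = layers.MaxPooling2D()(c1)
    # Bottleneck
    c2 = layers.Conv2D(base * 2, 3, padding="same", activation="relu")(p1)
    c2 = layers.Conv2D(base * 2, 3, padding="same", activation="relu")(c2)
    # Decoder with skip connection (the defining U-Net feature)
    u1 = layers.UpSampling2D()(c2)
    u1 = layers.Concatenate()([u1, c1])
    c3 = layers.Conv2D(base, 3, padding="same", activation="relu")(u1)
    out = layers.Conv2D(n_out, 1, activation="linear")(c3)  # water, fat
    return Model(inp, out)

model = build_unet()
pred = model.predict(np.zeros((1, 128, 128, 12), dtype="float32"), verbose=0)
print(pred.shape)  # (1, 128, 128, 2)
```

Training such a network against GraphCut water/fat outputs (e.g. with a pixel-wise L1 or L2 loss) would mirror the setup described above; the loss function is not specified in the abstract.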

Results

Water-fat separation was visually comparable between deep learning and the conventional model-based method for post-contrast acquisitions (Figure 1). Image resolution was equivalent between methods, and both methods well depicted intramyocardial fat deposition (Figures 1 and 2) in chronic myocardial infarction. Swap artifacts were present in 12 of the pre-contrast and 3 of the post-contrast GraphCut separations, and in none of the deep learning separations. There were 41 post-contrast images showing fatty metaplasia with both the GraphCut and deep learning methods. SSIM and PSNR were not significantly different between the GraphCut and DeepWaterFat methods (Table 1).
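The SSIM and PSNR agreement measures reported above can be sketched in plain NumPy as follows; the authors' exact implementation is not specified, and the SSIM here uses a single global window rather than the usual local sliding-window average, as a simplification.

```python
import numpy as np

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference and test image."""
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    if data_range is None:
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(x, y, data_range, k1=0.01, k2=0.03):
    """Global-window SSIM (simplified: one window over the whole image)."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Example: a CNN prediction close to the GraphCut reference scores high
ref = np.random.rand(64, 64)
print(psnr(ref, ref + 0.01))     # high PSNR for a near-identical image
print(ssim_global(ref, ref, 1))  # 1.0 for identical images
```

In the study's comparison, the CNN-predicted water and fat images would play the role of `test`/`y` against the GraphCut outputs as `reference`/`x`.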

Conclusion

A deep learning CNN trained using pre-contrast images generalizes to post-contrast images, providing equivalent image quality with fewer swap artifacts. For widespread adoption of deep learning methods, it is important that they have the capability to generalize beyond training data for flexible usage. This work provides important evidence that magnetic resonance deep learning water-fat separation can be used in a variety of settings.

Acknowledgements

No acknowledgement found.

References

  1. Hernando D, Kellman P, Haldar JP, Liang ZP. Robust water/fat separation in the presence of large field inhomogeneities using a graph cut algorithm. Magn Reson Med 2010;63(1):79-90.
  2. Hu HH, Bornert P, Hernando D, Kellman P, Ma J, Reeder S, Sirlin C. ISMRM workshop on fat-water separation: insights, applications and progress in MRI. Magn Reson Med 2012;68(2):378-388.
  3. Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. In MICCAI Proceedings, 2015. Springer. p 234-241.

Figures

Figure 1. A comparison of pre- and post-contrast water-fat separation with the GraphCut and Deep Learning methods showed equivalent image quality and depiction of fatty metaplasia (arrows).

Figure 2. Comparison of pre- and post-contrast deep learning water-fat separation. Overlay of fat (yellow) and water (blue) images in (A) a normal volunteer, showing good fat identification outside of heart structures, and (B, C) chronic myocardial infarction patients, depicting fatty metaplasia (arrows).

Table 1. Structural Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR) were not significantly different between the GraphCut and deep learning methods, providing quantitative evidence that the deep learning method generalizes from pre- to post-contrast acquisitions.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)