A deep learning CNN trained using pre-contrast images generalizes to post-contrast images, providing equivalent image quality with fewer swap artifacts.
This retrospective study used magnitude image data from a database of research cardiovascular images. Thirty-one patients with chronic myocardial infarction and 17 normal controls were included, providing 1000 pre-contrast training acquisitions and 446 matched pre- and post-contrast acquisitions. All MR examinations were performed at 1.5 T (Magnetom Avanto, Siemens Healthineers, Erlangen, Germany) with the subject in the supine position and standard (“tune-up”) magnetic field shim. The pulse sequence was a spoiled multiple-gradient-echo sequence (1 slice per breath-hold; repetition time = 20 ms; 12 echo times, 2.4–15.5 ms with 1.2 ms spacing; flip angle = 20°; bandwidth = 1860 Hz/pixel; in-plane spatial resolution = 2.3 × 1.7 mm; slice thickness = 8 mm). Images were acquired in multiple long-axis and contiguous short-axis planes before and after contrast agent administration. The post-contrast acquisition commenced 5 minutes after bolus contrast agent administration (0.15 mmol/kg gadopentetate dimeglumine, Magnevist, Bayer Healthcare, Wayne, NJ) and was followed by late gadolinium enhancement (LGE) imaging.
Conventional water-fat separation was performed with a multi-point water-fat separation with R2* estimation, using a graph-cut field-map estimation algorithm (GraphCut) [1] from the ISMRM water-fat toolbox [2], to provide pre-contrast training data and post-contrast ground truth for comparison with the deep learning method. A U-Net convolutional neural network (CNN) [3] was used for deep learning water-fat separation. The input to the training algorithm was 12 pre-contrast magnitude images from the 12 echo times, supplied as 12 channels of the CNN; the output was two images (water only and fat only). The network was implemented using Keras 2.0 and TensorFlow 1.2 (both freely available software). Post-contrast water and fat images not used for training (n = 446; test set) were predicted using the trained CNN. Images were assessed for water-fat swap artifacts and for visualization of ischemic cardiomyopathy and intramyocardial fat deposition (fatty metaplasia). The structural similarity (SSIM) index and peak signal-to-noise ratio (PSNR) of all images were measured to compare the performance of the deep learning and GraphCut methods for pre- and post-contrast acquisitions. A p-value < 0.05 was regarded as statistically significant.
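The section above does not specify how SSIM and PSNR were computed; the following is a minimal NumPy sketch of the standard definitions, assuming images normalized to a common intensity range (here [0, 1]). Note that published SSIM implementations typically use a sliding Gaussian window rather than the single global window shown here; the function names `psnr` and `ssim_global` are illustrative, not from the study.

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(ref, img, data_range=1.0):
    """Global (single-window) SSIM index per Wang et al.; a windowed
    implementation would average this statistic over local patches."""
    x = ref.astype(np.float64)
    y = img.astype(np.float64)
    c1 = (0.01 * data_range) ** 2   # stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

In this context, `ref` would be a GraphCut water (or fat) image and `img` the corresponding CNN-predicted image, with the metrics aggregated over the 446 test acquisitions.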