
Deep Neural Network for Single-Point Dixon Imaging with Flexible Echo Time
Jong Bum Son1, Marion Elizabeth Scoggins2, Basak Erguvan Dogan3, Ken-Pin Hwang1, and Jingfei Ma1

1Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, United States, 2Department of Diagnostic Radiology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States, 3Department of Diagnostic Radiology, The University of Texas Southwestern Medical Center, Dallas, TX, United States

Synopsis

Dixon imaging generally requires multiple input images acquired at varying echo times for robust phase correction and water and fat separation. However, multi-echo Dixon imaging suffers from relatively long scan times, is more susceptible to motion-related artifacts, and is less flexible in the choice of scan parameters. Recently, it was reported that deep neural networks can help separate water and fat from two-point or multi-point Dixon images. In this work, we present a deep learning-based method that can achieve water and fat separation from a single image acquired at a flexible echo time and therefore can help alleviate the limitations of multi-point Dixon imaging.

INTRODUCTION

Dixon imaging generally requires multiple input images acquired at varying echo times for robust phase correction and water and fat separation.1-4 However, multi-echo Dixon imaging suffers from relatively long scan times, is generally more susceptible to motion-related artifacts, and is less flexible in the choice of scan parameters.5

Recently, it was reported that deep neural networks can help separate water and fat from two-point Dixon in-phase and out-of-phase images,6 or from multi-point Dixon images.7 In this work, we present a deep learning-based method that can achieve water and fat separation from a single image acquired at a flexible echo time and therefore can help alleviate the limitations of multi-point Dixon imaging.

METHODS

A total of 2,072 in vivo fast triple-echo Dixon (FTED)8 breast images were acquired from 37 patients on a 3.0T whole-body scanner (GE Healthcare, Waukesha, Wisconsin, USA). The images were randomly split by patient into two groups: 1,848 images from 33 patients for training, and 224 images from 4 patients, corresponding to only the first echo of the triple-echo Dixon acquisition, for testing. Scan parameters for the FTED acquisition were TE1/TE2/TE3 = 94.8/96.4/97.9 ms, TR = 6.3 s, NFE = 384, NPE = 224, NSLICE = 56, NCOIL = 8, slice thickness = 4 mm, FOV = 30 cm × 30 cm, RBW = ±250 kHz, NEX = 1, ETL = 13, ARC subsampling factor = 2, and scan time = 69 s. The water/fat phase difference of the first echo relative to the middle spin echo was approximately 140°.
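As a concrete illustration of the patient-wise split described above, the following MATLAB sketch partitions the 37 patients into training and testing groups; the variable names and the fixed random seed are assumptions added for reproducibility, not details from the original work.

    % Patient-wise random split (illustrative sketch; seed and names are assumed).
    rng(0);                                       % fix the random seed
    patientIDs    = 1:37;                         % 37 patients in total
    shuffledIDs   = patientIDs(randperm(numel(patientIDs)));
    trainPatients = shuffledIDs(1:33);            % 1,848 training images (33 patients x 56 slices)
    testPatients  = shuffledIDs(34:end);          % 224 first-echo test images (4 patients x 56 slices)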

The proposed deep neural network was based on U-Net9 and implemented in MATLAB (MathWorks, Natick, Massachusetts, USA), using the following training parameters: initial learning rate = 0.05, momentum = 0.9, mini-batch size = 16, and gradient threshold = 0.05. We modified the three output layers of U-Net (the final convolution, soft-max, and pixel-classification layers) to predict whether water or fat is dominant at each image pixel. The labeled images containing the water/fat dominance information were generated from the original FTED reconstruction.
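The network and training configuration described above can be sketched in MATLAB roughly as follows. This is a minimal sketch, assuming the unetLayers helper from the Computer Vision Toolbox, whose final 1×1 convolution, softmax, and pixel-classification layers correspond to the three output layers mentioned above; the datastore name is a placeholder, and the authors' exact implementation may differ.

    % Minimal sketch: two-class (water-/fat-dominant) U-Net and SGDM training
    % options matching the parameters listed above. "trainingData" is a
    % placeholder for a datastore pairing first-echo images with FTED-derived
    % dominance labels (e.g., a pixelLabelImageDatastore).
    imageSize  = [384 224 1];     % acquisition matrix (NFE x NPE), single channel assumed
    numClasses = 2;               % water-dominant vs. fat-dominant
    lgraph = unetLayers(imageSize, numClasses);

    options = trainingOptions('sgdm', ...
        'InitialLearnRate',  0.05, ...
        'Momentum',          0.9,  ...
        'MiniBatchSize',     16,   ...
        'GradientThreshold', 0.05);

    net = trainNetwork(trainingData, lgraph, options);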

With the trained deep neural network, the pixel-wise water/fat dominance was first estimated using only the image corresponding to the first echo of the triple-echo Dixon acquisition. Then, coil-by-coil phase correction (i.e., subtracting 140° from the image phase of the fat-dominant pixels) was performed using the estimated dominance information to separate the water and fat images. To evaluate the performance, we compared the final water and fat images from the proposed method with those from the original FTED reconstruction.8
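The separation step can be illustrated with the following MATLAB sketch for a single coil image. It assumes the flexible-echo-time signal model S = (W + F·exp(iθ))·exp(iφ) with θ ≈ 140° and a logical fat-dominance mask predicted by the network; the background-phase estimation is shown only schematically, so this is an illustration of the principle rather than the authors' exact FTED-style reconstruction.

    % Illustrative single-point water/fat separation for one coil image.
    % coilImage and fatMask are assumed inputs: the complex first-echo image
    % and the logical fat-dominance mask predicted by the network.
    theta = 140 * pi/180;                       % water/fat phase angle at this echo time

    % Phase correction: remove the ~140-degree water/fat offset from the
    % fat-dominant pixels so the remaining phase is the smooth background phase.
    phaseRef          = coilImage;
    phaseRef(fatMask) = phaseRef(fatMask) .* exp(-1i*theta);
    phi = angle(phaseRef);                      % in practice, estimated as a smooth
                                                % (e.g., low-pass filtered) field so
                                                % partial-volume pixels keep both species
    S0  = coilImage .* exp(-1i*phi);            % background phase removed

    % With S0 = W + F*exp(1i*theta) and W, F real-valued:
    F = imag(S0) ./ sin(theta);
    W = real(S0) - F .* cos(theta);

The per-coil water and fat images would then be combined across coils, as in a standard multi-coil reconstruction.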

RESULTS / DISCUSSION

Fig. 1 shows an example of successful water and fat separation by the proposed method and by FTED. For most of the testing images, the proposed deep neural network was able to generate clean water and fat separation using only the single image from the first echo. However, small areas of water/fat swapping were occasionally noted, mostly in edge slices, areas of signal wrap-around (Fig. 2a), or areas of very low SNR (Fig. 2b).

Although single-point Dixon imaging has been investigated previously and offers potential advantages over multi-echo Dixon imaging, it is generally considered less robust than multi-point Dixon imaging and therefore has not seen widespread clinical use. In our method, the deep neural network was trained with labeled water and fat images generated from multi-echo Dixon images, whereas the required input at inference is only a single image acquired at a flexible echo time.

Acknowledgements

Part of the research was conducted at the Center for Advanced Biomedical Imaging at The University of Texas MD Anderson Cancer Center (Houston, Texas, USA) with equipment support from GE Healthcare (Waukesha, Wisconsin, USA).

References

1. Dixon WT. Simple proton spectroscopic imaging. Radiology 1984;153:189–194.

2. Ma J, Son JB, and Hazle JD. An improved region growing algorithm for phase correction in MRI. Magn Reson Med 2015;76:519-529.

3. Reeder SB, Pineda AR, Wen Z, Shimakawa A, Yu H, Brittain JH, Gold GE, Beaulieu CH, and Pelc NJ. Iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL): application with fast spin-echo imaging. Magn Reson Med 2005; 54:636–644.

4. Zhang T, Chen Y, Bao S, Alley MT, Pauly JM, Hargreaves BA, and Vasanawala SS. Resolving phase ambiguity in dual-echo Dixon imaging using a projected power method. Magn Reson Med 2017;77:2066-2076.

5. Ma J. A single-point Dixon technique for fat-suppressed fast 3D gradient-echo imaging with a flexible echo time. J Magn Reson Imaging 2008;27(4):881-890.

6. Zhang T, Chen Y, Vasanawala S, and Bayram E. Dual echo water-fat separation using deep learning. In Proc of the 26th Annual Meeting of Intl Soc Mag Reson Med, Paris, France, 2018. (Abstract 5614)

7. Gong E, Zaharchuk G, and Pauly J. Improved multi-echo water-fat separation using deep learning. In Proc of the 25th Annual Meeting of Intl Soc Mag Reson Med, Honolulu, HI, USA, 2017. (Abstract 5657)

8. Son JB, Hwang KP, Madewell JE, Bayram E, Hazle JD, Low RN, and Ma J. A flexible fast spin echo triple-echo Dixon technique. Magn Reson Med 2017;77:1049-1057.

9. Ronneberger O, Fischer P, and Brox T. U-net: Convolutional networks for biomedical image segmentation. Med Image Comput Comput Assist Interv 2015:234-241.

Figures

Fig. 1. An example of successful water and fat separation by the proposed method compared with the original FTED. With the image from a single echo as its input, the deep neural network generates water and fat images nearly identical to those from FTED, which requires at least two input images.

Fig. 2. Two examples of water/fat swapping by the deep neural network: in an area with wrap-around signals (a), and in an area with low SNR (b).

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019) 4014