
3T to 7T MRI Synthesis via Deep Learning in Spatial-Wavelet Domains
Liangqiong Qu1, Shuai Wang 1, Yongqin Zhang2, Pew-Thian Yap1, and Dinggang Shen1

1Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States, 2School of Information Science and Technology, Northwest University, Xi'an, China

Synopsis

Ultra-high-field 7T MRI scanners, while producing images with exceptional anatomical detail, are cost-prohibitive and hence largely inaccessible. In this abstract, we propose a novel deep learning network to synthesize 7T T1-weighted images from their 3T counterparts. Our network jointly exploits the spatial and wavelet domains to facilitate the learning of coarse-to-fine image details.


Introduction

A good image synthesis method preserves details at different scales, from global image contrast to local anatomical details. However, many existing methods do not account for this multi-scale nature and learn mappings that yield images degraded by blurring or artifacts. In this abstract, we introduce a deep learning network that leverages wavelet transformations to effectively reconstruct multi-frequency image details. Our network fuses information from the spatial and wavelet domains using a novel wavelet-based affine transformation (WAT) layer.
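For concreteness, below is a minimal sketch of the kind of multi-level 2D wavelet decomposition that provides such multi-frequency information, written with PyWavelets; the Haar wavelet, the three decomposition levels, and the input size are illustrative assumptions rather than the settings used in this work.

```python
import numpy as np
import pywt

# Hypothetical 3T slice; in practice this would be a slice of the aligned volume.
slice_3t = np.random.rand(256, 256).astype(np.float32)

# Three-level decomposition: coeffs[0] is the coarsest approximation (LL)
# sub-band; coeffs[k] for k >= 1 is an (LH, HL, HH) detail tuple, ordered
# from coarse to fine. Coarser levels carry global contrast, finer levels
# carry local anatomical detail.
coeffs = pywt.wavedec2(slice_3t, wavelet="haar", level=3)

print("LL (coarsest):", coeffs[0].shape)      # (32, 32)
for k, (lh, hl, hh) in enumerate(coeffs[1:], 1):
    print(f"detail level {k}:", lh.shape)     # (32, 32), (64, 64), (128, 128)
```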

Method

Our goal is to synthesize 7T T1-weighted images from their 3T counterparts. As shown in Figure 1, our network, called WATNet, consists of a feature extraction branch and an image reconstruction branch. The feature extraction branch explicitly models the 3T-to-7T mapping in different frequency components under the guidance of several flexible WAT layers. The image reconstruction branch converts information from the joint spatial-wavelet domain back to the spatial domain to generate realistic 7T images.

The WAT layer endows the network with multi-level frequency information via a simple feature-wise affine transformation of the intermediate feature maps. Specifically, the WAT layer learns a set of affine parameters $$$\{\gamma_{c},\beta_{c}\}$$$ from the wavelet coefficients $$$W_c$$$ and applies an element-wise affine transformation to the intermediate feature maps $$$F_m$$$ as follows:

$$ {\textbf{WAT}}({F_{m}} \mid\gamma_{c},\beta_{c}) = \gamma_{c}\odot{F_{m}} + \beta_{c},$$

where $$$\odot$$$ indicates element-wise multiplication. The affine parameters $$$\gamma_{c}$$$ and $$$\beta_{c}$$$ are calculated as:

$$\gamma_{c} = {M}(W_c),$$

$$\beta_{c} = {N}(W_c),$$

where the mapping functions $$$M$$$ and $$$N$$$ are implemented as two convolution layers that can be optimized as part of the whole network.
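A minimal PyTorch sketch of a WAT layer consistent with the equations above is shown below. Here $$$M$$$ and $$$N$$$ are each realized as a single convolution; the channel counts and the 3×3 kernel size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class WATLayer(nn.Module):
    """Wavelet-based affine transformation: WAT(F_m | gamma_c, beta_c) = gamma_c * F_m + beta_c."""

    def __init__(self, wavelet_channels, feature_channels):
        super().__init__()
        # M and N: learnable mappings from wavelet coefficients W_c to the
        # affine parameters; optimized jointly with the rest of the network.
        self.M = nn.Conv2d(wavelet_channels, feature_channels, kernel_size=3, padding=1)
        self.N = nn.Conv2d(wavelet_channels, feature_channels, kernel_size=3, padding=1)

    def forward(self, F_m, W_c):
        gamma_c = self.M(W_c)          # gamma_c = M(W_c)
        beta_c = self.N(W_c)           # beta_c  = N(W_c)
        return gamma_c * F_m + beta_c  # element-wise affine transformation

# Example: modulate 64-channel feature maps with a 4-band (LL, LH, HL, HH)
# wavelet stack at the same spatial resolution.
wat = WATLayer(wavelet_channels=4, feature_channels=64)
F_m = torch.randn(1, 64, 64, 64)
W_c = torch.randn(1, 4, 64, 64)
print(wat(F_m, W_c).shape)  # torch.Size([1, 64, 64, 64])
```

Because the modulation is element-wise, the layer injects frequency guidance without changing the feature-map dimensions, so it can be inserted between existing convolutional blocks.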

Results

We compared WATNet with four state-of-the-art 7T image synthesis methods (MCCA1, RF2, CAAF3, and DDCR4) via leave-one-out cross-validation using 15 pairs of 3T and 7T T1-weighted images. The 3T images were acquired with a Siemens Magnetom Trio 3T scanner with voxel size 1×1×1 mm$$$^3$$$, TR = 1990 ms, TE = 2.16 ms, TI = 900 ms, and flip angle = 9°; the 7T images were acquired with a Siemens Magnetom 7T whole-body MR scanner with voxel size 0.65×0.65×0.65 mm$$$^3$$$, TR = 6000 ms, TE = 2.95 ms, TI = 800/2700 ms, and flip angle = 4°/4°. All images were first linearly aligned to the MNI standard space5 with FLIRT6, followed by bias field correction7 and skull stripping8 to remove intensity inhomogeneity and non-brain regions.

Figure 2 shows example 7T images synthesized by the compared methods, indicating that WATNet yields results that are significantly closer to the ground-truth 7T images. This can be attributed to the multi-scale learning capability of WATNet. Figure 3 shows boxplots of the PSNR and SSIM values computed between the synthesized images and the ground truth, confirming that the proposed WATNet achieves the best performance among all compared 7T image synthesis methods.
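For reference, PSNR and SSIM can be computed against the ground-truth 7T volume as in the sketch below, using scikit-image; the function and variable names are illustrative.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(synth_7t: np.ndarray, true_7t: np.ndarray):
    """Return (PSNR, SSIM) of a synthesized volume against the ground truth."""
    data_range = float(true_7t.max() - true_7t.min())
    psnr = peak_signal_noise_ratio(true_7t, synth_7t, data_range=data_range)
    ssim = structural_similarity(true_7t, synth_7t, data_range=data_range)
    return psnr, ssim
```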

Conclusion

The proposed WATNet, which efficiently leverages the complementary information of the dual spatial-wavelet domains, provides new insights for research on medical image synthesis. The WAT layer can be embedded into any existing medical image synthesis network and trained end-to-end with the original network to improve its ability to capture multi-level frequency information.

Acknowledgements

This work was supported by NIH grant EB006733.

References

[1] K. Bahrami, F. Shi, X. Zong, H. W. Shin, H. An, and D. Shen, "Hierarchical reconstruction of 7T-like images from 3T MRI using multi-level CCA and group sparsity," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015, pp. 659–666.

[2] K. Bahrami, I. Rekik, F. Shi, Y. Gao, and D. Shen, "7T-guided learning framework for improving the segmentation of 3T MR images," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2016, pp. 572–580.

[3] K. Bahrami, F. Shi, I. Rekik, and D. Shen, "Convolutional neural network for reconstruction of 7T-like images from 3T MRI using appearance and anatomical features," in Deep Learning and Data Labeling for Medical Applications. Springer, 2016, pp. 39–47.

[4] Y. Zhang, J.-Z. Cheng, L. Xiang, P.-T. Yap, and D. Shen, "Dual-domain cascaded regression for synthesizing 7T from 3T MRI," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2018, pp. 410–417.

[5] C. J. Holmes, R. Hoge, L. Collins, R. Woods, A. W. Toga, and A. C. Evans, "Enhancement of MR images using registration for signal averaging," Journal of Computer Assisted Tomography, vol. 22, no. 2, pp. 324–333, 1998.

[6] M. Jenkinson, P. Bannister, M. Brady, and S. Smith, "Improved optimization for the robust and accurate linear registration and motion correction of brain images," NeuroImage, vol. 17, no. 2, pp. 825–841, 2002.

[7] J. G. Sled, A. P. Zijdenbos, and A. C. Evans, "A nonparametric method for automatic correction of intensity nonuniformity in MRI data," IEEE Transactions on Medical Imaging, vol. 17, no. 1, pp. 87–97, 1998.

[8] F. Shi, Y. Fan, S. Tang, J. H. Gilmore, W. Lin, and D. Shen, "Neonatal brain image segmentation in longitudinal MRI studies," NeuroImage, vol. 49, no. 1, pp. 391–400, 2010.

Figures

Fig. 1: WATNet takes a 3T image and its wavelet coefficients at various levels and predicts the 7T image.

Fig. 2: 7T images synthesized by the compared methods, along with prediction error maps. PSNR and SSIM values are shown at the bottom.

Fig. 3: Boxplots of PSNR and SSIM values for different 7T image synthesis methods.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019) 4853