
Simultaneous Reconstruction of Multiple b-Value DWI Using a Joint Convolutional Neural Network
Chengyan Wang1, Yucheng Liang2, Yuan Wu1, Danni Yang2, and Yiping P. Du1

1Institute for Medical Imaging Technology (IMIT), School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China, 2Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, United States

Synopsis

This study presents a joint convolutional neural network (CNN) architecture for the simultaneous reconstruction of diffusion-weighted (DW) images with multiple b-values. The proposed joint-net extracts high-level anatomical correlations among multi-contrast images and corrects misalignment between images with a spatial transformation layer. Experimental results show that the proposed algorithm outperforms a single-image reconstruction network and a compressed sensing algorithm, with improved image quality. Training the joint-net is much more efficient than training an individual network for each b-value. In addition, combining data consistency with the joint-net enables accurate characterization of a brain tumor in a patient study.

Introduction

Diffusion-weighted (DW) imaging with multiple b-values is widely used for the characterization of tissue microstructure. However, single-shot DW imaging suffers from limited spatial resolution and low SNR. Acquiring images with multiple b-values is also time-consuming for applications such as diffusion tensor imaging (DTI) and diffusion spectrum imaging (DSI). A straightforward approach to these problems is to acquire partial k-space data for each image and reconstruct all of the images jointly.

Existing studies have applied joint total variation (TV) and group sparsity to the joint reconstruction of multi-contrast images1-5. However, these handcrafted features remain insufficient to capture the complex correlations between multi-contrast images. A convolutional neural network (CNN), on the other hand, is capable of extracting high-level anatomical correlations between multi-contrast images.

This study aims to 1) reconstruct multi-contrast images using a joint network; 2) improve reconstruction performance by extracting high-level anatomical correlations from images with multiple b-values; and 3) correct image misalignment caused by motion and imperfections of the imaging system.

Theory

Problem Formulation

We propose a joint reconstruction algorithm that exploits a priori information from a CNN: $$\begin{align}\left\{\widehat{x}_1,\widehat{x}_2, ... ,\widehat{x}_Q\right\} = \underset{\left\{x_1,x_2,...,x_Q\right\}}{argmin}\left\{\sum_{q=1}^{Q}w_q||\mathcal{F}_ux_q-y_q||_2^2+R(x_1,x_2,...,x_Q)\right\}, &&&&&&&&&&&& [1]\end{align} $$ where $$$\left\{x_1,x_2,...,x_Q\right\}$$$ is the set of multiple b-value DW images, $$$y_q$$$ denotes the acquired data for the q-th image, $$$\mathcal{F}_u$$$ the sparse sampling and Fourier transform operator, and $$$w_q$$$ the weighting parameters; R(·) is defined as a combination of an $$$\ell_{2}$$$-norm and a joint-TV term between $$$x_q$$$ and $$$x_{cnn,q}$$$: $$\begin{align}R(x_1,x_2,...,x_Q) = \alpha\sum_{q=1}^{Q}w_q||x_q-x_{cnn,q}||_2^2+\beta \sum_{q=1}^{Q}(w_q\sqrt{(\nabla x_q)^2+(\nabla x_{cnn,q})^2}), &&&&&&&&&&&& [2]\end{align} $$ where $$$x_{cnn,q}$$$ denotes the prior images predicted by the CNN, and $$$\alpha$$$ and $$$\beta$$$ are regularization parameters.
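
Under the assumption of single-coil Cartesian sampling, a minimal numpy sketch of the data-consistency step and of evaluating this objective might look as follows; all function and variable names here are illustrative, not the authors' implementation.

```python
import numpy as np

def data_consistency(x, y, mask):
    """Replace estimated k-space samples with the acquired data y
    at the sampled locations (single-coil Cartesian assumption)."""
    k = np.fft.fft2(x)
    k[mask] = y[mask]
    return np.fft.ifft2(k)

def joint_cost(xs, ys, masks, x_cnn, w, alpha, beta, eps=1e-12):
    """Evaluate the objective of Eqs. [1]-[2] for Q complex images."""
    cost = 0.0
    for q in range(len(xs)):
        k = np.fft.fft2(xs[q])
        # data fidelity ||F_u x_q - y_q||_2^2 over the sampled locations
        cost += w[q] * np.sum(np.abs(k[masks[q]] - ys[q][masks[q]]) ** 2)
        # L2 distance to the CNN prior image x_cnn,q
        cost += alpha * w[q] * np.sum(np.abs(xs[q] - x_cnn[q]) ** 2)
        # joint TV between x_q and the CNN prior (Eq. [2], discretized)
        gq = np.gradient(np.abs(xs[q]))
        gc = np.gradient(np.abs(x_cnn[q]))
        cost += beta * w[q] * np.sum(
            np.sqrt(gq[0]**2 + gq[1]**2 + gc[0]**2 + gc[1]**2 + eps))
    return cost
```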

CNN Architecture

The proposed joint-net (Figure 1) is designed based on the following observations: 1) object support boundaries exist regardless of image contrast; 2) a strong anatomical correlation exists across multi-contrast images; 3) the spatial resolution is identical for all contrasts because the images arise from the same spin population.

In the joint-net, a channel-wise auto-encoder is applied to each under-sampled image (Figure 1, red line). A high-resolution image is generated by combining features from both the under-sampled image and a high-resolution structural image. In addition to minimizing the $$$\ell_{2}$$$-loss between the output and the ground truth, a structural sparsity term is added to the loss function to encourage the network to learn joint features from the multiple b-value images:

$$\begin{align}Loss=\frac{1}{Q}\sum_{q=1}^{Q}||f_{cnn}(z_q;W)-\widetilde{x}_q||_2^2+ \lambda _w||\sum_{q=1}^{Q-1}(W_{q+1}-W_q)^2||_1,&&[3]\end{align}$$

where $$$\widetilde{x}_q$$$ denotes the q-th ground-truth image, $$$f_{cnn}(z_q;W)$$$ the CNN output, $$$z_q$$$ the image reconstructed from zero-padded $$$y_q$$$, and $$$W_q$$$ the weighting parameters of the corresponding channel.
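
A minimal PyTorch sketch of this loss, assuming the weights $$$W_q$$$ of each channel can be stacked into same-shaped tensors (a detail the abstract does not specify), might read:

```python
import torch

def joint_loss(outputs, targets, channel_weights, lam_w):
    """Sketch of Eq. [3]: per-channel L2 data term plus an L1 penalty
    on the summed squared differences of adjacent channel weights."""
    q = len(outputs)
    # (1/Q) * sum_q ||f_cnn(z_q; W) - x_q~||_2^2
    l2 = sum(torch.sum((o - t) ** 2) for o, t in zip(outputs, targets)) / q
    # structural sparsity term: || sum_q (W_{q+1} - W_q)^2 ||_1
    diff_sq = sum((channel_weights[i + 1] - channel_weights[i]) ** 2
                  for i in range(q - 1))
    return l2 + lam_w * torch.sum(torch.abs(diff_sq))
```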

With this design, the contrast of each image is preserved while anatomical information is shared between channels. To deal with misalignment between images, we added a spatial transformation layer6 after each concatenation layer.
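
For reference, a spatial transformation layer in the sense of Jaderberg et al.6 can be built from PyTorch's affine_grid/grid_sample; the localization network below is purely illustrative, since the abstract does not describe its architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformLayer(nn.Module):
    """Predicts a 2x3 affine transform from the input feature map and
    resamples the map accordingly (initialized to the identity)."""
    def __init__(self, channels):
        super().__init__()
        self.loc = nn.Sequential(
            nn.AdaptiveAvgPool2d(8),
            nn.Flatten(),
            nn.Linear(channels * 64, 32), nn.ReLU(),
            nn.Linear(32, 6),  # 6 affine parameters
        )
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)
```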

Methods

Data Acquisition

An open database containing DW images from 1113 subjects was used for training7,8. Imaging parameters were: TR/TE = 5520/89.5 ms, FOV = 210 × 180 mm2, voxel size = 1.25 × 1.25 mm2, and b-values of 0, 1000, 2000, and 3000 s/mm2. A patient with high-grade glioma (HGG) was scanned with the same imaging parameters as the training data.

CNN Training

Under-sampled data were simulated by randomly sampling the full k-space with sampling rates of 1/3, 1/4 and 1/5. Data were randomly grouped into a training set (80%) and a validation set (20%).
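
A generic way to simulate such random Cartesian undersampling is sketched below; keeping a fully sampled low-frequency band is our assumption, as the abstract only specifies random sampling at the given rates.

```python
import numpy as np

def undersample(kspace, rate, center_frac=0.08, seed=0):
    """Randomly retain k-space samples at the target rate, keeping a
    fully sampled center band of phase-encoding lines (assumption)."""
    rng = np.random.default_rng(seed)
    h, w = kspace.shape
    mask = rng.random((h, w)) < rate
    c = max(1, int(center_frac * h / 2))
    mask[h // 2 - c:h // 2 + c, :] = True
    return kspace * mask, mask
```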

Comparison Study

For comparison, a single-image network (U-net9) was trained separately for each DW image. A compressed sensing (CS) reconstruction10,11 was also performed. Apparent diffusion coefficient (ADC) maps were calculated from the multiple b-value images using a mono-exponential model.
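
For reference, the voxel-wise mono-exponential fit S(b) = S0·exp(-b·ADC) can be solved as a linear least-squares problem in log-signal space; this is a generic sketch, not the authors' code.

```python
import numpy as np

def fit_adc(images, bvals):
    """Fit log S = log S0 - b * ADC voxel-wise by least squares.

    images : list of 2D arrays, one per b-value
    bvals  : b-values in s/mm^2, e.g. [0, 1000, 2000, 3000]
    """
    bvals = np.asarray(bvals, dtype=float)
    s = np.stack(images, axis=-1).astype(float)
    logs = np.log(np.clip(s, 1e-6, None))
    A = np.stack([np.ones_like(bvals), -bvals], axis=1)  # design matrix
    coef, *_ = np.linalg.lstsq(A, logs.reshape(-1, len(bvals)).T, rcond=None)
    return coef[1].reshape(s.shape[:-1])  # ADC map in mm^2/s
```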

Results

The proposed joint-net outperformed single-image reconstruction, with significantly higher PSNR (b0: 42.00 vs. 38.03; b1000: 35.63 vs. 32.79; b2000: 36.15 vs. 33.29; b3000: 30.85 vs. 26.46). Image features were better depicted by the joint-net than by the single-image CNN and CS reconstruction at a sampling rate of 1/4 (Figure 2). Fewer blurring artifacts were observed in the ADC maps produced by the joint-net (Figure 3), and a slight denoising effect was observed in the high b-value images reconstructed jointly. In addition, the joint-net handled misalignment well owing to the spatial transformation layer (Figure 4).

Images from the patient with HGG are shown in Figure 5. Although the network was trained only on data from healthy subjects, it characterized the lesion well when combined with the data-consistency operation.

Discussion and Conclusion

The proposed joint reconstruction algorithm outperforms single-image reconstruction and CS reconstruction. The joint-net can be trained much more efficiently, in terms of training time, than individual networks for each b-value. In addition, the algorithm can be readily extended to parallel imaging with slight modifications to accommodate multi-coil data.

Acknowledgements

No acknowledgement found.

References

1. Majumdar A, Ward RK. Joint reconstruction of multiecho MR images using correlated sparsity. Magn Reson Imaging 2011;29:899-906.

2. Majumdar A, Ward RK. Accelerating multi-echo T2 weighted MR imaging: analysis prior group-sparse optimization. J Magn Reson 2011;210(1):90-97.

3. Bilgic B, Goyal V, Adalsteinsson E. Multi-contrast reconstruction with Bayesian compressed sensing. Magn Reson Med 2011;66:1601-1615.

4. Huang J, Chen C, Axel L. Fast multi-contrast MRI reconstruction. Magn Reson Imaging 2014;32(10):1344-1352.

5. Haldar JP, Liang ZP. Joint reconstruction of noisy high-resolution MR image sequences. In: Proc 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro; 2008:752-755.

6. Jaderberg M, Simonyan K, Zisserman A. Spatial transformer networks. In: Advances in Neural Information Processing Systems; 2015:2017-2025.

7. The Human Connectome Project. Available at: https://www.humanconnectome.org/.

8. Van Essen DC, Smith SM, Barch DM, Behrens TE, Yacoub E, Ugurbil K; WU-Minn HCP Consortium. The WU-Minn Human Connectome Project: an overview. Neuroimage 2013;80:62-79.

9. Ronneberger O, Fischer P, Brox T. U-net: convolutional networks for biomedical image segmentation. In: Proc International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer; 2015:234-241.

10. Donoho D. Compressed sensing. IEEE Trans Inf Theory 2006;52:1289-1306.

11. Doneva M, Börnert P, Eggers H, et al. Compressed sensing reconstruction for magnetic resonance parameter mapping. Magn Reson Med 2010;64(4):1114-1120.

Figures

Figure 1. Diagram of the proposed joint convolutional neural network.

Figure 2. Comparison of reconstruction performance for DW images at an under-sampling rate of 1/4. The proposed joint-net achieved the highest PSNR, compared with the single-image CNN and compressed sensing.

Figure 3. ADC maps corresponding to the DW images in Figure 2. In the zoomed areas, the ADC maps from the joint-net show fewer blurring artifacts than those from single-image reconstruction and compressed sensing reconstruction.

Figure 4. Reconstruction performance of the misaligned data using joint-net.

Figure 5. DW images from an HGG patient. The lesion is characterized well using the joint-net.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019) 0231