Zheng Zhong^{1}, Kanghyun Ryu^{1}, Jae Eun Song^{1}, Janhavi Singhal^{2}, Guangyu Dan^{3}, Kaibao Sun^{3}, Shreyas S. Vasanawala^{1}, and Xiaohong Joe Zhou^{3,4}

^{1}Radiology, Stanford University, Stanford, CA, United States, ^{2}Homestead High School, Cupertino, CA, United States, ^{3}Center for MR Research, University of Illinois at Chicago, Chicago, IL, United States, ^{4}Radiology, Neurosurgery and Bioengineering, University of Illinois at Chicago, Chicago, IL, United States

DWI can probe tissue microstructure in many disease processes over a broad range of b-values. When severe geometric distortion is present, non-single-shot EPI techniques can be used, but they introduce other issues such as lengthened acquisition times, which often require undersampling in k-space. Deep learning has been demonstrated to achieve many-fold undersampling, especially when highly redundant information is present. In this study, we applied a novel convolutional recurrent neural network (CRNN) to reconstruct highly undersampled (up to six-fold) multi-b-value, multi-direction DWI datasets by exploiting the information redundancy across b-values and diffusion gradient directions.

Multi-b-value DWI series exhibit similar image features (i.e., edges, anatomy) across different b-values and diffusion directions (Figure 1). A neural network trained to exploit this redundancy can effectively reconstruct highly undersampled k-space data. The formulation of the proposed CRNN-DWI is expressed as:

$$X_{rec}=f_{N}(f_{N-1}(...(f_{1}(X_{u})))), (1)$$

where $$$X_{rec}$$$ is the image series to be reconstructed, $$$X_{u}$$$ is the input image series from the direct Fourier transform of the under-sampled k-space data, $$$f_{i}$$$ is the network function at iteration $$$i$$$, including model parameters such as the weights and biases, and $$$N$$$ is the number of iterations.
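The composition in Eq. 1 amounts to passing the zero-filled input through a chain of learned iteration functions. A minimal numpy sketch of this unrolled structure is shown below; the array shape and the toy iteration functions are illustrative, not the authors' implementation:

```python
import numpy as np

def unrolled_recon(x_u, iterations):
    """Apply N iteration functions f_1..f_N in sequence to the
    zero-filled input image series x_u (Eq. 1). Each f_i maps an
    image series to an image series; here they are plain callables
    standing in for the learned network functions."""
    x = x_u
    for f in iterations:          # f_1 is applied first, f_N last
        x = f(x)
    return x

# Toy usage: a (b-values, height, width) series and two "iterations"
# that each halve the image intensities.
x_u = np.ones((4, 8, 8))
x_rec = unrolled_recon(x_u, [lambda x: 0.5 * x] * 2)
```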

During each iteration, the network function performs:

$$X_{rnn}^{(i)}=X_{rec}^{(i-1)}+CRNN(X_{rec}^{(i-1)}), (2a) $$

$$X_{rec}^{(i)}=DC(X_{rnn}^{(i)};{\bf{y}}), (2b)$$

where CRNN is the learnable box that consists of five layers (Figure 2A), DC represents the data-consistency operation, and $$$\bf{y}$$$ denotes the acquired under-sampled k-space data.
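A common choice for the DC operation in Eq. 2b is hard data consistency: the network estimate is transformed to k-space and the acquired samples are restored at the sampled locations. The sketch below assumes this standard form (single-coil, Cartesian mask); the abstract does not specify the exact DC variant used:

```python
import numpy as np

def data_consistency(x, y, mask):
    """Hard data consistency (a standard form of DC in Eq. 2b):
    transform the estimate x to k-space, overwrite the sampled
    locations with the acquired k-space data y, and return to
    image space."""
    k = np.fft.fft2(x, norm="ortho")
    k = np.where(mask, y, k)          # keep acquired samples exactly
    return np.fft.ifft2(k, norm="ortho")

# Toy check: with a fully sampled mask, DC returns the acquired image
# regardless of the network estimate.
img = np.random.rand(8, 8)
y = np.fft.fft2(img, norm="ortho")
mask = np.ones((8, 8), dtype=bool)
out = data_consistency(np.zeros((8, 8)), y, mask)
```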

Figure 2B shows the unfolded CRNN box, which consists of one CRNN-b-i layer followed by CRNN-i layers.

For the CRNN-i layer, let $$$H_{l}^{(i)}$$$ be the feature representation at layer $$$l$$$ and iteration step $$$i$$$, let $$$W_{c}$$$ and $$$W_{r}$$$ represent the filters of the input-to-hidden convolutions and the hidden-to-hidden recurrent convolutions evolving over iterations, respectively, and let $$$B_{l}$$$ denote a bias term. We then have:

$$H_{l}^{(i)}=ReLU(W_{c}*H_{l-1}^{(i)}+W_{r}*H_{l}^{(i-1)}+B_{l}). (3)$$
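Eq. 3 combines a convolution of the layer below with a recurrent convolution of the same layer's state from the previous iteration. A minimal single-channel numpy sketch is given below; the naive `conv2d_same` helper (actually a cross-correlation, which is what deep-learning "convolutions" compute) is an illustrative stand-in for the learned filters, not the authors' code:

```python
import numpy as np

def conv2d_same(x, w):
    """Naive single-channel 'same' cross-correlation, standing in
    for the learned filters W_c and W_r."""
    kh, kw = w.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * w)
    return out

def crnn_i(h_below, h_prev_iter, w_c, w_r, b):
    """Eq. 3: H_l^(i) = ReLU(W_c * H_{l-1}^(i) + W_r * H_l^(i-1) + B_l)."""
    pre = conv2d_same(h_below, w_c) + conv2d_same(h_prev_iter, w_r) + b
    return np.maximum(pre, 0.0)       # ReLU

# Toy usage with an identity kernel: 1 + (-2) + 0 = -1, clipped by ReLU.
w_id = np.zeros((3, 3)); w_id[1, 1] = 1.0
h = crnn_i(np.ones((4, 4)), -2 * np.ones((4, 4)), w_id, w_id, 0.0)
```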

In this layer, both the iteration and the b-value information are propagated. Specifically, for each b in the b-value series, the feature representation $$$H_{l, b}^{(i)}$$$ is formulated as (Figure 2C):

$${H_{l,b}^{(i)}}=\overrightarrow{H_{l,b}^{(i)}}+\overleftarrow{H_{l,b}^{(i)}}, (4a)$$

$$\overrightarrow{H_{l,b}^{(i)}}=ReLU(W_{c}*H_{l-1,b}^{(i)}+W_{b}*\overrightarrow{H_{l,b-1}^{(i)}}+W_{r}*H_{l,b}^{(i-1)}+\overrightarrow{B_{l}}), (4b)$$

$$\overleftarrow{H_{l,b}^{(i)}}=ReLU(W_{c}*H_{l-1,b}^{(i)}+W_{b}*\overleftarrow{H_{l,b+1}^{(i)}}+W_{r}*H_{l,b}^{(i-1)}+\overleftarrow{B_{l}}), (4c)$$

where $$$\overrightarrow{H_{l,b}^{(i)}}$$$ and $$$\overleftarrow{H_{l,b}^{(i)}}$$$ are the feature representation calculated along forward and backward directions, respectively, and other parameters are defined in Figure 2C.
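The bidirectional propagation of Eqs. 4a-4c can be summarized as a forward sweep (from low to high b-values) and a backward sweep (from high to low), whose outputs are summed. The sketch below replaces the convolutions with a scalar weight and omits the previous-iteration recurrence, purely to make the two sweeps explicit; shapes and the weight value are illustrative:

```python
import numpy as np

def bidirectional_b(h_below, w_b=0.5):
    """Sketch of Eqs. 4a-4c: propagate hidden states forward and
    backward along the b-value axis and sum the two directions.
    Convolutions are replaced by a scalar weight w_b for brevity."""
    n_b = h_below.shape[0]
    fwd = np.zeros_like(h_below, dtype=float)
    bwd = np.zeros_like(h_below, dtype=float)
    for b in range(n_b):                     # forward sweep: uses b-1
        prev = fwd[b - 1] if b > 0 else 0.0
        fwd[b] = np.maximum(h_below[b] + w_b * prev, 0.0)
    for b in range(n_b - 1, -1, -1):         # backward sweep: uses b+1
        nxt = bwd[b + 1] if b < n_b - 1 else 0.0
        bwd[b] = np.maximum(h_below[b] + w_b * nxt, 0.0)
    return fwd + bwd                         # Eq. 4a

# Toy usage on a series of three b-values of 2x2 feature maps.
out = bidirectional_b(np.ones((3, 2, 2)))
```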

With IRB approval, multi-b-value DWI data were acquired on a 3T GE MR750 scanner from ten subjects. The key acquisition parameters were: slice thickness=5mm, FOV=22cm×22cm, matrix=256×256, number of slices=25, and 14 b-values from 0 to 4000 s/mm^{2}.


**Figure 1:** A multi-b-value DWI dataset from a representative slice, showing the high degree of data redundancy among the images with different b-values and diffusion directions.

**Figure 2:** The structure of CRNN-DWI used in this study. (A): The detailed structure of the proposed network for each layer; (B): The unfolded structure of the proposed network for each iteration; and (C): The detailed structure of the CRNN-b-i layer. The green arrow ($$$W_{b}$$$) indicates the recurrent convolution along the b-value direction.

**Figure 3:** Representative images from the experiments with four-fold undersampling. The reconstructed images using CRNN-DWI outperformed 3D-CNN and were the closest to the ground truth for a broad range of b-values.

**Figure 4:** Representative images from the experiment with six-fold undersampling. The quality of the reconstructed images using CRNN-DWI is not as good as that with four-fold undersampling, yet still considerably outperformed 3D-CNN and zero-padding.

**Figure 5:** Representative trace-weighted images at b = 1000 s/mm^{2} (left) using different reconstruction methods and the corresponding signal decay curves (right) from two randomly selected ROIs (white matter and gray matter, as indicated by the blue areas). The trace-weighted images reconstructed using CRNN-DWI showed excellent image quality even with six-fold undersampling. The signal decay curves from CRNN-DWI agreed well with those from the fully sampled images, whereas the curves from zero-padding and 3D-CNN exhibited substantial deviations.

DOI: https://doi.org/10.58530/2022/0683