Diffusion tensor imaging (DTI) typically requires the acquisition of a large number of diffusion-weighted images (DWIs) for accurate fitting of the tensor model because of their low SNR. This abstract presents a deep learning method that generates FA color maps, showing the primary diffusion directions, from very few DWIs. The method uses a deep convolutional neural network to learn the nonlinear relationship between the DWIs and the FA color maps, bypassing the conventional DTI model. Experimental results show that the proposed method can generate FA color maps from only 6 DWIs with quality comparable to maps obtained from 270 DWIs by conventional tensor fitting.
In the deep learning network, the goal is to learn the nonlinear mapping F between input x and output y, represented as $$$y=F(x;Θ)$$$, where $$$Θ$$$ denotes the network parameters to be learned during training. The mapping is learned by minimizing a loss function between the network prediction and the corresponding ground truth over a set of training data. In this study, our goal is to learn the mapping from DWIs (input) to FA color maps (output) while bypassing the conventional tensor model. During the training stage, a reduced number of DWIs $$$x_i$$$ are used as the inputs and the corresponding true FA color maps $$$y_i$$$ (obtained by fitting all 288 images to the conventional tensor model) as the outputs. We learn the network parameters $$$Θ$$$ that minimize the loss function $$$L(Θ)=\frac{1}{n}\sum_{i=1}^{n}\|F(x_i;Θ)-y_i\|^2$$$ (1), which is the mean-square error (MSE) between the network output and the ground-truth FA color maps (n is the number of training samples). Our convolutional neural network (CNN) has ten weight layers; every layer except the last uses 64 filters with a kernel size of 3. In the testing stage, the acquired DWIs are fed into the CNN with the learned $$$Θ$$$ to generate the desired FA color maps $$$F(x_t;Θ)$$$. The DTI model is not needed during testing, thus avoiding model-fitting error; instead, the network learns the complex nonlinear relationship between the DWIs and the FA color maps, implicitly accounting for noise.
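For illustration only, the following is a minimal sketch of such a ten-weight-layer CNN and the MSE objective of Eq. (1), written in PyTorch. The abstract does not specify the activation functions, padding, optimizer, learning rate, or whether the convolutions are 2D or 3D, so those choices (ReLU, zero padding, Adam, 2D convolutions), the class and variable names, and the placeholder image sizes below are assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class DWItoFAColorCNN(nn.Module):
    """Ten-weight-layer CNN mapping a stack of DWIs (plus one non-DWI) to a 3-channel FA color map."""
    def __init__(self, n_input_channels=7, n_filters=64, n_layers=10):
        super().__init__()
        layers = [nn.Conv2d(n_input_channels, n_filters, kernel_size=3, padding=1),
                  nn.ReLU(inplace=True)]
        for _ in range(n_layers - 2):
            layers += [nn.Conv2d(n_filters, n_filters, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
        # Last weight layer maps to the 3 color channels (no nonlinearity).
        layers += [nn.Conv2d(n_filters, 3, kernel_size=3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

model = DWItoFAColorCNN(n_input_channels=7)   # e.g. 6 DWIs + 1 non-DWI as input channels
criterion = nn.MSELoss()                      # MSE loss of Eq. (1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # optimizer/lr assumed

# One training step on a placeholder batch of (DWI stack, ground-truth FA color map) pairs.
x = torch.randn(4, 7, 145, 174)               # placeholder slice size
y = torch.randn(4, 3, 145, 174)
loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```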
For validation, DWI data from five subjects were randomly selected from the Human Connectome Project (HCP) [3]. Each dataset includes 18 non-diffusion-weighted images and 270 DWIs acquired at three b-values (1000, 2000, and 3000 s/mm²) over 90 diffusion directions. Data from four subjects (120 slices per subject) were used for training (480 training samples), and data from the fifth subject were used for testing. FA color maps were estimated with both the conventional DTI model and the learned network from 36, 18, and 6 DWIs (corresponding to acquisition-time reduction factors of 7.5, 15, and 45, respectively) plus one non-DWI. The reduced DWIs were randomly and independently chosen from the 270 DWIs. Experiments were run on a workstation with an Intel Core i9-7980XE CPU (18 cores/36 threads at 4.2 GHz), two NVIDIA GTX 1080 Ti GPUs, and 64 GB of memory. Training takes around 20 hours; in contrast, generating each FA color map with the learned network takes only 0.005 seconds.
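A hedged sketch of how a reduced-DWI input could be assembled is shown below. The array shapes and the function name build_input are hypothetical; the only elements taken from the text are the random, independent selection of a DWI subset and the inclusion of one non-DWI as an additional input channel.

```python
import numpy as np

def build_input(non_dwis, dwis, n_dwi=6, rng=None):
    """Randomly select n_dwi of the available DWIs and stack them with one non-DWI
    to form the multi-channel network input (channels-first).

    non_dwis: array of shape (n_b0, H, W); dwis: array of shape (270, H, W)."""
    rng = np.random.default_rng(rng)
    idx = rng.choice(dwis.shape[0], size=n_dwi, replace=False)   # independent random subset
    b0 = non_dwis[rng.integers(non_dwis.shape[0])]               # one non-DWI
    return np.concatenate([b0[None], dwis[idx]], axis=0)         # shape: (n_dwi + 1, H, W)
```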
Figures 1, 2, and 3 show the FA maps, FA color maps, and MD maps, respectively, generated from 36, 18, and 6 DWIs using the proposed deep learning method and the conventional tensor model. Results obtained from all 288 images were used as the ground truth for comparison. The FA maps generated by the deep learning method remain very close to the ground truth even with only 6 DWIs, whereas the FA maps from the conventional DTI model are blurry and noisy. This is further confirmed by the normalized mean-squared error (NMSE) shown at the bottom left of each image.
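The abstract does not state the exact NMSE formula; a common definition, used here purely as an assumption, normalizes the squared error by the energy of the ground-truth map:

```python
import numpy as np

def nmse(prediction, ground_truth):
    """Normalized mean-squared error between a predicted and a reference FA color map."""
    return np.sum((prediction - ground_truth) ** 2) / np.sum(ground_truth ** 2)
```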
[1] Golkov, et al. q-Space Deep Learning: Twelve-Fold Shorter and Model-Free Diffusion MRI Scans. IEEE TMI, vol. 35, pp. 1344-1350, 2016.
[2] Gong et al. Efficient Reconstruction of Diffusion Kurtosis Imaging Based on a Hierarchical Convolutional Neural Network. Proc. Intl. Soc. Mag. Reson. Med. p. 1653, 2018.
[3] Van Essen, et al. The WU-Minn Human Connectome Project: an overview. NeuroImage, vol. 80, pp. 62–79, 2013.