
Optimal selection of diffusion-weighting gradient waveforms using compressed sensing and dictionary learning
Raphaël Malak Truffet1, Christian Barillot1, and Emmanuel Caruyer1

1Univ Rennes, CNRS, Inria, Inserm, IRISA - UMR 6074, VisAGeS - ERL U 1228, F-35000 Rennes, France

Synopsis

Acquisition sequences in diffusion MRI rely on the use of time-dependent magnetic field gradients. Each gradient waveform encodes a diffusion-weighted measurement; a large number of such measurements is necessary for the in vivo reconstruction of microstructure parameters. We propose here a method to select only a subset of the measurements while remaining able to predict the unseen data using compressed sensing. We learn a dictionary using a training dataset generated with Monte-Carlo simulations; we then compare two different heuristics for selecting the measurements used for the prediction. We found that an undersampling strategy limiting the redundancy of the measurements allows for a more accurate reconstruction than random undersampling at a similar sampling rate.

Purpose

Diffusion MRI enables the estimation of microstructure parameters such as axon diameter and density in white matter. A diffusion-weighted measurement relies on a spin-echo sequence with a diffusion-weighting gradient, whose waveform, intensity and direction constitute the many degrees of freedom describing the acquisition. A large number of such measurements may be needed for accurate parameter estimation, leading to long acquisition times that are often incompatible with in vivo imaging. In this work, we evaluate two strategies to reduce the number of measurements: starting from a collection of gradient waveforms, we first learn a dictionary, then select a subset of gradients (i) giving non-redundant signal responses or (ii) optimizing the sensing matrix properties for sparse reconstruction. Both approaches are evaluated and compared to random subsampling.

Data generation

Microstructure configurations

We generated synthetic signals using Monte-Carlo simulation as implemented in Camino4,5. Microstructure configurations follow the irregularly-packed-cylinders model with gamma-distributed radii, with an average radius in the range [0.5μm, 3μm], a shape parameter (of the gamma distribution) in the range [1.5, 9] and an intracellular volume fraction in the range [0.015, 0.8], for a total of 180 different sets of microstructure parameters. Finally, the data is augmented using a spherical harmonics representation of order L=6 to rotate each microstructure into 100 different orientations.

Gradient waveforms

We generated a set of 65 piecewise-constant gradient waveforms with fixed orientations; examples of generated waveforms are shown in Figure 4. The waveforms are then rotated into 40 directions, selected uniformly on the unit sphere2, which gives a total of 2600 candidate gradient waveforms.
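For illustration, the enumeration described in Figure 4 can be written as a minimal Python sketch (the normalization of gmax to 1 is an assumption made here for simplicity):

    import itertools
    import numpy as np

    GMAX = 1.0  # normalized maximum gradient amplitude (assumption)

    # All piecewise-constant profiles on 4 time steps with values in
    # {-gmax, 0, +gmax}, keeping those that reach +gmax at least once:
    # of the 3**4 = 81 candidates, 65 remain (cf. Figure 4). Each profile
    # is mirrored after the RF180 pulse to build the full waveform.
    profiles = [np.array(p)
                for p in itertools.product((-GMAX, 0.0, GMAX), repeat=4)
                if GMAX in p]
    assert len(profiles) == 65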

Methods

Dictionary learning

The reconstruction method is based on compressed sensing; we learned a dictionary using a training set made up of the signals corresponding to 20% of the generated microstructure configurations. The learning is performed with SPAMS6 and aims at solving $$\min_{D,x_i}\frac{1}{n}\sum_{i=1}^{n}\frac{1}{2}||y_i-Dx_i||_2^2+\lambda||x_i||_1$$ where $$$n$$$ is the number of signal vectors in the learning set, $$$y_i$$$ is the signal for microstructure $$$i$$$ and $$$\lambda$$$ is a regularization weight. For the learning phase, the regularization parameter $$$\lambda$$$ was fixed to 0.15, and the size (number of atoms) of the dictionary was set to 200. In what follows, we propose two methods using this dictionary to select a subset, $$$\Omega$$$, of the original measurements while remaining able to predict the unseen data.
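A minimal sketch of this learning step using the SPAMS Python bindings6; the placeholder data, the number of iterations and the layout of the training matrix are assumptions (in our setting, each column of Y would be one simulated signal sampled on the 2600 candidate waveforms, with 20% of the 180×100 signals, i.e. 3600 columns, used for training):

    import numpy as np
    import spams  # SPAMS Python bindings [6]

    # Hypothetical placeholder training matrix: rows index the 2600
    # candidate gradient waveforms, columns the 3600 training signals.
    rng = np.random.default_rng(0)
    Y = np.asfortranarray(rng.standard_normal((2600, 3600)))

    # Online dictionary learning solving
    #   min_{D,x_i} (1/n) sum_i 1/2 ||y_i - D x_i||_2^2 + lambda ||x_i||_1
    # with lambda = 0.15 and K = 200 atoms; D has shape (2600, 200).
    D = spams.trainDL(Y, K=200, lambda1=0.15, iter=500)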

Minimizing redundancy of gradient response

The first strategy consists in minimizing the correlation score $$f(\Omega)=\sum_{i,j\in\Omega}\left(\sum_{k}\tilde{D}_{i,k}\tilde{D}_{j,k}\right)^2$$ over subsets $$$\Omega$$$ of gradients, where $$$\tilde{D}$$$ is the dictionary $$$D$$$ with rows centered and scaled to unit variance, so that the inner sum is proportional to the correlation between rows $$$i$$$ and $$$j$$$. We use the following discrete, local optimization strategy (a code sketch is given after the list):

  • choose an initial subset Ω;
  • while the correlation score decreases: find the row $$$j$$$ in Ω most correlated with the other rows of Ω, and replace it with the gradient outside Ω least correlated with the remaining gradients;
  • return the subset Ω.
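A minimal Python sketch of this local search (the random initialization and the stopping details are assumptions; the rows of D index the candidate gradients):

    import numpy as np

    def standardize_rows(D):
        # Rows of D-tilde: centered and scaled to unit l2-norm, so the
        # inner product of two rows is their Pearson correlation.
        Dc = D - D.mean(axis=1, keepdims=True)
        return Dc / np.linalg.norm(Dc, axis=1, keepdims=True)

    def correlation_score(Dt, omega):
        # f(Omega): sum of squared pairwise correlations of rows in Omega.
        C = Dt[omega] @ Dt[omega].T
        return float((C ** 2).sum())

    def min_redundancy_selection(D, n_select, seed=0):
        Dt = standardize_rows(D)
        m = Dt.shape[0]
        rng = np.random.default_rng(seed)
        omega = set(rng.choice(m, size=n_select, replace=False).tolist())
        score = correlation_score(Dt, sorted(omega))
        while True:
            idx = sorted(omega)
            # Row of Omega most correlated with the other rows of Omega
            C2 = (Dt[idx] @ Dt[idx].T) ** 2
            np.fill_diagonal(C2, 0.0)
            worst = idx[int(C2.sum(axis=1).argmax())]
            rest = sorted(omega - {worst})
            # Candidate outside Omega least correlated with remaining rows
            cand = [g for g in range(m) if g not in omega]
            costs = ((Dt[rest] @ Dt[cand].T) ** 2).sum(axis=0)
            omega_new = set(rest) | {cand[int(costs.argmin())]}
            score_new = correlation_score(Dt, sorted(omega_new))
            if score_new >= score:  # stop when the score stops decreasing
                return sorted(omega)
            omega, score = omega_new, score_new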

Note that this discrete optimization problem can be relaxed: instead of a binary selection of the rows defining the set $$$\Omega$$$, we can assign a positive weight to each row. The proposed minimization algorithm gives a result close to the theoretical minimum of the relaxed problem (see Fig. 1).
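The exact form of the relaxation is not detailed here; one natural choice (an assumption in the sketch below) replaces binary membership in $$$\Omega$$$ with weights $$$w\in[0,1]^m$$$ summing to $$$|\Omega|$$$. Since the entrywise square of a correlation matrix is positive semidefinite (Schur product theorem), the relaxed problem is a convex quadratic program and its minimum is global, consistent with Fig. 1:

    import numpy as np
    from scipy.optimize import minimize

    def relaxed_lower_bound(Dt, n_select):
        # Q[i, j]: squared correlation between rows i and j of D-tilde
        Q = (Dt @ Dt.T) ** 2
        m = Q.shape[0]
        res = minimize(
            lambda w: w @ Q @ w,              # relaxed correlation score
            np.full(m, n_select / m),         # feasible starting point
            jac=lambda w: 2.0 * Q @ w,
            bounds=[(0.0, 1.0)] * m,
            constraints={"type": "eq", "fun": lambda w: w.sum() - n_select},
        )
        return res.fun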

Minimizing the coherence of the dictionary

In compressed sensing, it is known that a sensing matrix with low coherence provides better guarantees for sparse recovery1. We propose to choose $$$\Omega$$$ minimizing $$g(\Omega)=\max_{i\neq j}\frac{|\langle D_{\Omega,i},D_{\Omega,j}\rangle|}{||D_{\Omega,i}||_2||D_{\Omega,j}||_2}$$ where $$$D_{\Omega,i}$$$ is the column $$$i$$$ of $$$D_{\Omega}$$$, the restricted dictionary containing the rows whose indices are in $$$\Omega$$$. We approximately solve this problem with a local search similar to the one described in the previous section.
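A minimal sketch of this criterion (the same swap-based local search as above can then be run with $$$g$$$ in place of $$$f$$$):

    import numpy as np

    def coherence(D, omega):
        # g(Omega): largest absolute cosine similarity between two
        # distinct columns of the restricted dictionary D_Omega
        D_omega = D[sorted(omega)]
        Dn = D_omega / np.linalg.norm(D_omega, axis=0, keepdims=True)
        G = np.abs(Dn.T @ Dn)
        np.fill_diagonal(G, 0.0)  # ignore the trivial i = j terms
        return float(G.max())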

Performance evaluation

Using the testing dataset (the 80% of the original data not used for training), we first compute the sparse representation $$$x$$$ with the LARS-Lasso algorithm3 from $$$y_\Omega$$$, the subsampled data, and $$$D_\Omega$$$, the dictionary restricted to the selected rows. We then reconstruct the full signal $$$y=Dx$$$ and compute the root-mean-square deviation with respect to the ground-truth signal.
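A minimal sketch of this evaluation, using scikit-learn's LassoLars as a stand-in for the LARS-Lasso solver3 (its alpha parameter plays the role of $$$\lambda$$$, up to the normalization of the data-fidelity term):

    import numpy as np
    from sklearn.linear_model import LassoLars

    def reconstruct(D, omega, y_omega, lam):
        # Fit the sparse code on the subsampled rows only...
        model = LassoLars(alpha=lam, fit_intercept=False)
        model.fit(D[sorted(omega)], y_omega)
        x = model.coef_        # sparse representation
        return D @ x           # ...then predict the full signal

    def rmsd(y_true, y_pred):
        # Root-mean-square deviation between true and predicted signals
        return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))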

Results and discussion

We compare the two subsampling strategies to random undersampling for different subsampling factors (numbers of measurements) and regularization weights ($$$\lambda$$$), which balance the sparsity of the representation against data fidelity. In Figures 2 and 3, we plot the root-mean-square deviation for signals in the testing set. As expected, selecting the gradients to minimize the coherence leads to sparser representations; however, selecting gradients by minimizing the row correlation of the dictionary enables better prediction for a small number of samples.

We proposed a novel, data-driven method to address the problem of optimally selecting gradient waveforms for experiment design in microstructure-enabled diffusion MRI. We will extend this work to evaluate the impact of the proposed sampling schemes on the accuracy of estimated rotation-invariant microstructure parameters.

Acknowledgements

No acknowledgement found.

References

1. E. J. Candes and M. B. Wakin. "An Introduction To Compressive Sampling". In: IEEE Signal Processing Magazine 25.2 (Mar. 2008), pp. 21-30.

2. Emmanuel Caruyer et al. "Design of multishell sampling schemes with uniform coverage in diffusion MRI". In: Magnetic Resonance in Medicine 69.6 (Apr. 2013), pp. 1534-1540.

3. Bradley Efron et al. "Least angle regression". In: The Annals of statistics 32.2 (2004), pp. 407-499.

4. PA Cook et al. "Camino: open-source diffusion-MRI reconstruction and processing". In: 14th Scientific Meeting of the International Society for Magnetic Resonance in Medicine. Seattle, WA, USA, 2006, p. 2759.

5. Matt G Hall and Daniel C Alexander. "Convergence and parameter choice for Monte-Carlo simulations of diffusion MRI". In: IEEE transactions on medical imaging 28.9 (2009), pp. 1354-1364.

6. Julien Mairal et al. "Online learning for matrix factorization and sparse coding". In: Journal of Machine Learning Research 11 (2010), pp. 19-60.

Figures

Figure 1: Optimizing the correlation $$$f(\Omega)$$$ over subsets $$$\Omega$$$; we compare the optimal correlation obtained with the proposed discrete, local optimization to the theoretical lower bound obtained with the relaxed optimization problem. Note that since the theoretical lower bound is the result of a convex optimization problem, the minimum is a global minimum.

Figure 2: Fidelity of the reconstruction depending on $$$\lambda$$$. The number of gradients used for the reconstruction is fixed to 30. This figure shows that $$$\lambda$$$ must be chosen carefully. If it is too small, the LARS-Lasso algorithm over-fits: it gives too much importance to data fidelity and cannot generalize to gradient waveforms that were not used to compute the representation. Conversely, if it is too large, the algorithm finds a representation that is too sparse and does not contain enough information.

Figure 3: Fidelity of the reconstruction depending on the number of measurements used. $$$\lambda$$$ is set to 5E-5; the three undersampling strategies give a similar score when the number of gradients is higher than 25; for fewer measurements, the best technique is to minimize $$$f(\Omega)$$$, the row correlation of the restricted dictionary.


Figure 4: Examples of gradient waveforms used in the simulation. The gradient amplitude is defined on 4 time steps with 3 possible values (-gmax, 0, +gmax). A symmetry is used to replicate the gradient waveform after the RF180 pulse. Among the 3⁴=81 such waveforms, we keep only the 65 that reach +gmax at least once (discarding the 2⁴=16 waveforms taking values only in {-gmax, 0}).

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)
3487