QSM: fast selection of optimal regularization weights
Job Gijsbertus Bouwman1 and Peter R Seevinck1

1Image Sciences Institute, University Medical Center Utrecht, Utrecht, Netherlands

Synopsis

Quantitative Susceptibility Mapping reconstructions may benefit from L1-regularization and magnitude weighting; however, these iterative reconstruction methods are time-consuming. Recently, progress has been made in reducing the reconstruction times with Split Bregman iterations, allowing subject-specific regularization weights. Here a further reduction of the reconstruction time is reported, mostly based on accelerating the automatic selection of the optimal regularization parameter. The overall procedure reduces the computational load more than threefold, without accuracy loss. Reduced reconstruction times may help make QSM algorithms clinically feasible, or pave the way for the inclusion of more sophisticated regularization mechanisms.

Target Audience

Researchers interested in fast and accurate reconstruction of susceptibility distributions

Purpose

Accelerating the selection of optimal regularization weights for L1-regularized, magnitude-weighted QSM

Theory

Quantitative Susceptibility Mapping reconstructions may benefit from L1-regularization and magnitude weighting [1]; however, these iterative reconstruction methods are time-consuming. Recently, Split Bregman iterations have been proposed [2] for L1-regularized, magnitude-weighted QSM, minimizing:

$$ \frac{1}{2}||F^{-1}DF\chi-\phi||_2^2 + \lambda_1||WG\chi||_1 $$

with F the Fourier operator, $$$ D = \frac{1}{3}-\left(\frac{k_z}{k}\right)^2 $$$ the dipole function in k-space, φ the normalized field shift, W a mask on edges in the magnitude image, and G the gradient operator in three dimensions.
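For concreteness, a minimal numpy sketch of this forward model is given below; the grid size, voxel size, B0 along z, and the zeroing of the undefined k = 0 term are assumptions made for illustration, not details specified in this abstract.

```python
import numpy as np

def dipole_kernel(shape, voxel_size=(1.0, 1.0, 1.0)):
    """k-space dipole function D = 1/3 - (kz/k)^2, B0 assumed along z."""
    kx, ky, kz = [np.fft.fftfreq(n, d=d) for n, d in zip(shape, voxel_size)]
    KX, KY, KZ = np.meshgrid(kx, ky, kz, indexing="ij")
    k2 = KX**2 + KY**2 + KZ**2
    with np.errstate(divide="ignore", invalid="ignore"):
        D = 1.0 / 3.0 - KZ**2 / k2
    D[k2 == 0] = 0.0  # convention (assumed) for the undefined k = 0 term
    return D

def forward_field(chi, D):
    """phi = F^-1 D F chi : field shift predicted by a susceptibility map chi."""
    return np.real(np.fft.ifftn(D * np.fft.fftn(chi)))
```

Here, forward_field(chi, dipole_kernel(chi.shape)) would return the normalized field shift φ predicted by a susceptibility map χ.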

Prior to the final reconstruction, the optimal regularization weight λ1, which balances data consistency and smoothness, should be selected (fig 1). To this end, a series of reconstructions with different regularization weights is performed, of which only the norms of the residual and of the gradient of the reconstruction are used to construct an L-curve [3]; the point with the highest curvature corresponds to the optimal regularization parameter. Only the final reconstruction is carried out with magnitude weighting. For fast convergence of the L1-regularized reconstructions, the optimal value λ2 for the L2-regularized problem should also be determined, using an additional L-curve.
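As a sketch of the corner selection, assuming the residual norms ρ(λ) and regularization norms η(λ) from the parameter sweep are already available, the usual maximum-curvature criterion [3] can be evaluated on log-log coordinates; the finite-difference curvature estimate below is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def lcurve_corner(lams, rho, eta):
    """Pick the regularization weight at the point of maximum curvature of the
    L-curve (log residual norm vs. log regularization norm).
    lams is assumed sorted in increasing order."""
    x, y = np.log(np.asarray(rho)), np.log(np.asarray(eta))
    t = np.log(np.asarray(lams))                # parametrize the curve by log(lambda)
    dx, dy = np.gradient(x, t), np.gradient(y, t)
    d2x, d2y = np.gradient(dx, t), np.gradient(dy, t)
    curvature = (dx * d2y - dy * d2x) / (dx**2 + dy**2) ** 1.5
    return lams[np.argmax(curvature)], curvature
```

For example, lam_opt, _ = lcurve_corner(lams, rho, eta) would return the weight at the corner of the sweep.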

Methods

The general framework was followed as described in [1] and illustrated in fig 1. It was investigated: 1) whether an anti-aliasing buffer is needed in the construction of the L-curves, as it was hypothesized that both coordinates of the L-curve, the data-consistency norm and the regularization norm, are almost completely determined by the voxels in the region of interest; 2) whether the complete L2-curve could be constructed in the frequency domain (Parseval's theorem) from only one FFT, using only half the frequency components of the phase data (conjugate symmetry); 3) whether, for the L1-curve, two χ distributions with different regularization weights could be reconstructed simultaneously for the cost of one, by carrying the second reconstruction in the imaginary component of the phase distribution. This is justified by the purely real character of the field, the dipole and the susceptibility distribution, and by the linearity of the Fourier and gradient operations; only for the single non-linear operation of each iteration (soft thresholding) were the two datasets temporarily separated. Finally, it was investigated 4) whether, for the Split Bregman iterations, gradient operations in the spatial domain were faster than those in the frequency domain. A sketch illustrating approaches 2) and 3) is given after this paragraph.
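As a rough illustration of items 2) and 3), the sketch below shows how the residual norm can be evaluated directly in k-space via Parseval's theorem, and how the soft-thresholding step of a Split Bregman iteration can act separately on the real and imaginary channels when two reconstructions with different weights are carried in one complex array. The function names, FFT normalization and threshold arguments are assumptions for illustration; this is not the authors' implementation.

```python
import numpy as np

def residual_norm_kspace(Fchi, Fphi, D):
    """||F^-1 D F chi - phi||_2 evaluated in k-space (Parseval's theorem):
    with numpy's unnormalized FFT, sum|x|^2 = (1/N) sum|X|^2."""
    N = Fphi.size
    return np.linalg.norm(D * Fchi - Fphi) / np.sqrt(N)

def soft_threshold_paired(z, thr_a, thr_b):
    """Soft thresholding applied separately to the real channel (weight a) and
    the imaginary channel (weight b) of a paired reconstruction; the two
    datasets are only split for this single non-linear operation."""
    shrink = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    return shrink(z.real, thr_a) + 1j * shrink(z.imag, thr_b)
```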

Results

1) The point of maximal curvature in the L-curve was found to be invariant to the buffer size, so that a buffer of four voxels was sufficient, reducing calculation times almost twofold. L-curve construction was further accelerated, without any accuracy loss: 2) twentyfold for the L2-curve using the single-FFT approach, and 3) almost twofold for the L1-curve by processing the additional reconstruction in the imaginary channel. 4) Switching the order of the Fourier and gradient operations gave a 30% reduction in the final magnitude-weighted reconstruction. In total, the processing time was reduced more than threefold, without accuracy loss.

Discussion

Efficient reconstruction algorithms make QSM more suitable for (real-time) clinical usage, and may allow the incorporation of more sophisticated, time-consuming regularization tools. The accelerations proposed here serve these goals, yielding numerically equivalent results while reducing the total computational load more than threefold. As mentioned before [1], the efficiency of the L1-sweep can be further optimized by parallel computing.

Conclusion

The complete pipeline of optimally L1-regularized QSM with magnitude-weighting can be accelerated more than threefold without accuracy loss.

Acknowledgements

No acknowledgement found.

References

[1] Liu et al., MRM, 66(3):777-783 (2011).
[2] Bilgic et al., MRM, 72(5):1444-1459 (2014).
[3] Hansen PC, Computational Inverse Problems in Electrocardiology (2000).
[4] Cheng YCN et al., Phys. Med. Biol., 54 (2009).

Figures

Fig 1) Workflow of L1-regularized, magnitude-weighted QSM with the proposed accelerations: 1) L-curve construction without anti-aliasing buffer; 2) L2-curve construction based on a single FFT; 3) L1-curve construction with pairwise susceptibility reconstruction; 4) Split Bregman gradient operations in the spatial domain.

Fig 2) L-curves and curvature plots for the automatic parameter selection of the prior method (solid) and the proposed accelerated method (dashed), yielding equivalent parameter values for both the L2-curve (left) and the L1-curve (right). The L1-curve of the accelerated method is shifted to the left; its curvature, however, is the same.

Fig 3) Mid-coronal slice of the final L1-regularized, magnitude-weighted reconstruction for the existing method (left) and the proposed accelerated method (middle), and their (zero) difference (right).


