We introduce a new regularization approach, “Match Regularization”, and show that in tandem with Global Maxwell Tomography (GMT) it enables accurate, artifact-free volumetric estimation of electrical properties from noisy B1+ measurements. We demonstrated the new method on two numerical phantoms with completely different electrical property distributions, using clinically feasible SNR levels. The estimated electrical properties were accurate throughout the volume for both phantoms. Our results suggest that GMT with match regularization is robust to noise and could be employed to map electrical properties in phantom and in vivo experiments.
Inverse scattering based on magnetic resonance (MR) measurements has long been considered a potential avenue for the estimation of tissue electrical properties (EP), namely relative permittivity and conductivity. Current techniques fall into two categories: local1-3 and global4-9. Local techniques employ the differential form of Maxwell's equations and therefore require numerical derivatives, resulting in dramatic sensitivity to noise and inaccuracy in regions of high contrast. Global techniques employ iterative schemes based on the integral form of Maxwell's equations, which are less susceptible to noise. However, since the MR fields themselves are not sensitive to small perturbations in EP, noise can lead to several different local minima. Regularization is therefore critical for global techniques. In this work, we introduce a new regularization strategy, dubbed "Match Regularization", in which the regularizer is a parameterized nonlinear function tuned to the particular problem. Our goal is to demonstrate the efficacy of this new approach in enabling EP estimation with Global Maxwell Tomography (GMT)7-9 at clinically achievable signal-to-noise ratio (SNR).
GMT is a fully 3D technique, which minimizes the mismatch between simulated and measured B1+ maps using the following cost function:
$$f(\epsilon)=\frac{\sqrt{\sum_k\sum_n\|w_k\odot{w}_{n}\odot\delta_{kn}\|_2^2}}{\eta}=\frac{\sqrt{\sum_k\sum_n\left\lVert{w}_{k}\odot{w}_{n}\odot\left(\hat{b}^+_k\odot\overline{\hat{b}_n^+}-b_k^+\odot\overline{b_n^+}\right)\right\rVert_2^2}}{\eta}$$
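As a minimal sketch, the mismatch above can be evaluated as follows; the array shapes, the weight convention, and the function name are illustrative assumptions, not the GMT implementation:

```python
import numpy as np

def gmt_cost(b_sim, b_meas, w, eta=1.0):
    """Sketch of the GMT mismatch cost.

    b_sim, b_meas: complex arrays of shape (K, N), the simulated and
    measured B1+ maps for K channels over N voxels; w: real weights of
    shape (K, N); eta: normalization constant.
    """
    total = 0.0
    n_channels = b_sim.shape[0]
    for k in range(n_channels):
        for n in range(n_channels):
            # delta_kn from the equation above: mismatch of the
            # channel-pair products b_k * conj(b_n)
            delta = (b_sim[k] * np.conj(b_sim[n])
                     - b_meas[k] * np.conj(b_meas[n]))
            total += np.sum(np.abs(w[k] * w[n] * delta) ** 2)
    return np.sqrt(total) / eta
```

Note that the cost depends on the B1+ maps only through channel-pair products, so a phase factor common to all channels leaves it unchanged.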
We added a second term to the cost function, the new regularizer, which preserves high-contrast regions while penalizing low-contrast variations:
$$f_0(\epsilon)=\frac{1}{3N_s^{\frac{2}{3}}}\sum_{\vec{n}\in S}\left[\left(1-e^{\beta \left(\delta-\sqrt{|\Delta_x \epsilon|_{\vec{n}}^2+\delta^2}\right)}\right)+\left(1-e^{\beta\left(\delta-\sqrt{|\Delta_y \epsilon|_{\vec{n}}^2+\delta^2}\right)}\right)+\left(1-e^{\beta\left(\delta-\sqrt{|\Delta_z \epsilon|_{\vec{n}}^2+\delta^2}\right)}\right)\right]$$
$$$S$$$ is the set of indices comprising the scatterer and contains $$$N_s$$$ entries. $$$\Delta_\alpha$$$ denotes finite-differencing along dimension $$$\alpha$$$. $$$\beta$$$ is selected such that, for a given $$$\delta$$$ and critical jump $$$|\Delta \epsilon|^\ast$$$, the gradient of the regularizer reaches its maximum at the critical jump. For $$$|\Delta\epsilon|\ll\delta$$$, the regularizer behaves like $$$\text{L}_2$$$ total variation. For $$$\delta<|\Delta\epsilon|<|\Delta\epsilon|^\ast$$$, the regularizer behaves roughly like $$$\text{L}_1$$$ total variation. Lastly, for $$$|\Delta\epsilon|>|\Delta\epsilon|^\ast$$$, the gradient of the cost function vanishes, and the regularizer enters a regime resembling $$$\text{L}_0$$$ regularization.
The full cost function is $$$f(\epsilon) + \alpha \cdot f_0(\epsilon)$$$, where $$$\alpha$$$ is a constant weight.
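To make the regimes concrete, the following sketch evaluates $$$f_0$$$ on a voxel grid; the value of $$$\beta$$$, the grid size, and the zero-padding convention at the scatterer boundary are illustrative assumptions:

```python
import numpy as np

def match_regularizer(eps, mask, beta=10.0, delta=0.1):
    """Sketch of the match regularizer f0.

    eps: complex 3D array of EP values (epsilon); mask: boolean 3D
    array selecting the scatterer voxels (the set S); beta, delta:
    regularizer parameters (values here are assumed, not the paper's).
    """
    n_s = mask.sum()
    total = 0.0
    for axis in range(3):
        d = np.diff(eps, axis=axis)  # finite differences Delta_alpha
        # zero-pad so the difference array aligns with the voxel grid
        pad = [(0, 1) if a == axis else (0, 0) for a in range(3)]
        d = np.pad(d, pad)
        s = np.sqrt(np.abs(d) ** 2 + delta ** 2)
        total += np.sum((1.0 - np.exp(beta * (delta - s)))[mask])
    return total / (3.0 * n_s ** (2.0 / 3.0))
```

With these parameters, the per-edge penalty grows quadratically for jumps well below $$$\delta$$$, roughly linearly for intermediate jumps, and saturates near 1 for jumps beyond the critical value, so sharp tissue boundaries contribute no additional gradient.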
We used MARIE10,11 to iteratively simulate B1+ maps from the incident fields and updated guesses of the material properties. The adjoint formulation of MARIE was used to calculate co-gradients analytically. The first eight components of an "ultimate basis" generated via randomized SVD12 were employed as the incident fields of a hypothetical 8-channel transmit array, in order to decouple the performance of the algorithm from the quality of the excitation. We evaluated our technique on two numerical phantoms with asymmetric EP distributions: a four-compartment tissue-mimicking phantom and a torso-mimicking phantom. The latter was chosen because accurate estimation of its EP from noisy data had previously failed with local techniques2. We performed numerical GMT experiments using a homogeneous initial guess (i.e., the worst-case scenario) for different noise levels in the synthetic B1+ maps. We characterized noise in the data using the peak SNR (pSNR), defined as
$$\text{pSNR}_k=\frac{\|b_k^+\|_\infty}{\sigma}$$
where $$$\sigma$$$ is the standard deviation of the additive Gaussian noise. We believe this is the most robust SNR metric, since targeting a mean SNR (mSNR) instead could result in regions with unrealistically large SNR for multiple-channel transmission.
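For instance, synthetic noise at a target pSNR can be generated as in the sketch below; the per-component complex-noise convention and the function name are our assumptions:

```python
import numpy as np

def add_noise(b, psnr, rng):
    """Add complex Gaussian noise to one channel's B1+ map so that
    ||b||_inf / sigma equals the target pSNR, where sigma is the
    per-component standard deviation (an assumed convention)."""
    sigma = np.max(np.abs(b)) / psnr
    noise = sigma * (rng.standard_normal(b.shape)
                     + 1j * rng.standard_normal(b.shape))
    return b + noise
```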
Through trial and error, we were able to "match" our regularizer to the four-compartment phantom at 6mm resolution, with $$$\delta=0.1$$$, $$$|\Delta\epsilon|^\ast=1.5$$$ and $$$\alpha=10^{-2}$$$. Figure 1 shows the initial guess, estimated EP, ground truth, and absolute error for pSNR=50, corresponding to mSNR of 15-27.
The peak/mean error over the entire volume was 2.4/0.47% for relative permittivity and 4.6/1.0% for conductivity. For higher SNRs, errors were slightly smaller, whereas reconstructions were perfect at every voxel in the absence of noise (not shown). Figure 2 shows the results at 3mm resolution, reusing the same regularization parameters. The peak/mean error was 3.4/0.55% for relative permittivity and 8.3/1.4% for conductivity. For the torso-mimicking phantom (10mm resolution), we used pSNR=200, corresponding to mSNR of 13-28, and kept $$$\delta$$$ and $$$|\Delta\epsilon|^\ast$$$ the same as for the four-compartment phantom, but set $$$\alpha$$$ to $$$8\cdot{10}^{-4}$$$. Figure 3 shows the results after clustering the material properties9; the peak/mean error was 0.4/0.2% for relative permittivity and 0.36/0.28% for conductivity.