Bhairav Bipin Mehta1, Dan Ma1, and Mark Alan Griswold1
1Radiology, Case Western Reserve University, Cleveland, OH, United States
Synopsis
Motion is one of the biggest challenges in clinical MRI. The recently introduced Magnetic Resonance Fingerprinting (MRF) has been shown to be less sensitive to motion. However, it remains susceptible to patient motion occurring primarily in the early stages of the acquisition. In this study, we propose a novel reconstruction algorithm for MRF that uses robust fitting and image registration to decrease the motion sensitivity of MRF. The evaluation was performed in numerical phantoms with simulated rigid motion.
Introduction:
Motion artifacts are extremely common in clinical MRI and present one of its biggest challenges. The recently introduced Magnetic Resonance Fingerprinting (MRF) [1] has been shown to be less sensitive to motion than conventional imaging through the use of pattern matching. However, the template-matching-based pattern matching algorithm [1] used in most initial implementations is still susceptible to patient motion occurring primarily in the early stages of the acquisition. The goal of this study is to develop a reconstruction algorithm for MRF that is explicitly insensitive to imaging artifacts from motion. Here we present an algorithm based on robust fitting and image registration to estimate and compensate for motion.
Methods:
The pattern matching algorithm was reformulated as a least squares linear regression problem. In this framework, the fingerprint with the least root mean square fitting error (RMSE) was used as the matched fingerprint. To improve the robustness of the dictionary matching algorithm, we extended the iteratively re-weighted least squares (IRLS) fitting method, which is known to be robust with respect to outliers and errors [2]. The flow of the proposed motion-compensated robust fingerprinting (MORF) reconstruction algorithm is as follows:
1. Robust dictionary matching. Dictionary matching using robust regression is performed for each pixel within the region of interest. The signal evolution of each pixel is replaced by the matched fingerprint, appropriately scaled by the estimated proton density value (see the sketch after this list).
2. Parallel imaging sub-problem. For each individual timeframe, the following motion-compensated regularized SENSE [3] optimization problem was solved:
$${\bf m^*}=\arg \min_{\bf m}\parallel F_u C_i R^{-1} {\bf m}-{\bf d}\parallel_2+ \lambda_1 \psi({\bf m})$$
where $$${\bf m}$$$ is the motion-registered frame after coil combination, $$$R$$$ is the motion registration operator, $$${C_i}$$$ is the individual coil sensitivity profile, $$${\bf d}$$$ is the measured k-space data, $$$F_u$$$ is the undersampled non-uniform fast Fourier transform (nuFFT), $$${\lambda_1}$$$ is the regularization weight, and $$${\psi}$$$ is the regularization transform.
3. Motion estimation sub-problem. Timeframes with an error between $$${\bf m^*}$$$ and $$${\bf m}$$$ above a selected threshold (heuristically set to 9%) were considered motion corrupted. Displacement fields were estimated for the motion-corrupted frames using rigid-body image registration.
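As an illustration of step 1, the following is a minimal sketch of robust dictionary matching via IRLS for a single pixel, assuming real-valued signal evolutions and Huber weights; the weight function, tuning constant, and all names are illustrative assumptions, not specified in this work:

```python
import numpy as np

def robust_match(signal, dictionary, n_iters=10, c=1.345):
    """Match one pixel's signal evolution to a dictionary via IRLS.

    signal     : (T,) measured signal evolution for one pixel (real-valued here)
    dictionary : (N, T) simulated fingerprints, one atom per row
    Returns (index of best-matching atom, fitted scale ~ proton density).
    The Huber weight function and tuning constant c are illustrative choices.
    """
    best_idx, best_rmse, best_scale = -1, np.inf, 0.0
    for idx, atom in enumerate(dictionary):
        w = np.ones_like(signal)                       # start from ordinary least squares
        scale = 0.0
        for _ in range(n_iters):
            # weighted least-squares scale of this atom onto the measured signal
            scale = np.sum(w * atom * signal) / (np.sum(w * atom * atom) + 1e-12)
            r = signal - scale * atom                  # residuals
            s = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust residual scale (MAD)
            u = np.abs(r) / (c * s)
            w = np.where(u <= 1.0, 1.0, 1.0 / u)       # Huber weights down-weight outliers
        rmse = np.sqrt(np.mean((signal - scale * atom) ** 2))
        if rmse < best_rmse:                           # keep the atom with least RMSE
            best_idx, best_rmse, best_scale = idx, rmse, scale
    return best_idx, best_scale
```

Down-weighting high-residual timeframes is what allows the matching to tolerate motion-corrupted portions of the signal evolution; the matched, scaled fingerprint then replaces the pixel's signal evolution before the parallel imaging sub-problem.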
Steps 1 and 2 were repeated until convergence was achieved. Step 3 was empirically chosen to be performed on every even iteration after the first five. (The initial frames all show significant residual aliasing artifacts and are thus unsuitable for image registration.) After convergence, step 1 was performed as the final step of the reconstruction algorithm. Equation 1 was solved using a non-linear conjugate-gradient algorithm [4] with total variation as the regularization operator. In this study, image registration was performed using the built-in functions of MATLAB R2014b (The MathWorks, Inc., Natick, MA) with rigid-body motion as the registration transform.
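For concreteness, the outer loop described above can be sketched as follows; the iteration cap, warm-up length, and convergence test are placeholders, and the three steps are passed in as callables rather than implemented here:

```python
def run_morf(step1, step2, step3, converged, max_iters=20, warmup=5):
    """Sketch of the MORF outer loop (all numeric defaults are placeholders).

    step1 : robust dictionary matching (per-pixel IRLS fit)
    step2 : regularized SENSE solve of Eq. 1 via non-linear CG with TV
    step3 : rigid-body motion estimation for motion-corrupted frames
    """
    it = 0
    for it in range(1, max_iters + 1):
        step1()                          # step 1 every iteration
        step2()                          # step 2 every iteration
        if it > warmup and it % 2 == 0:
            step3()                      # step 3 on even iterations after the first five
        if converged():
            break
    step1()                              # final robust matching after convergence
    return it
```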
Numerical phantoms (Shepp-Logan and brain), generated using T1, T2, and PD maps from the MNI brain atlas [5], were used to evaluate the proposed algorithm. For assessment of the algorithm, rigid motion was simulated in the numerical phantoms. A rotation of 45° was introduced in the initial 215 of 1000 total frames (frames 1 to 20: tilt from 0° to 45°; frames 21 to 195: stationary at 45°; frames 196 to 215: tilt from 45° back to 0°). A variable density spiral (VDS) trajectory, with a fully sampled center of k-space and the edge of k-space undersampled by a factor of 48, was used for these simulations. The motion-corrupted fully sampled images were forward sampled using the nuFFT [6] to generate the undersampled (1 spiral leaf; R = 48) k-space measurements. For comparison, the simulated data were also reconstructed using the previously presented iterative multi-scale (IMS-MRF) reconstruction algorithm [7].
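For reference, the simulated rotation schedule used above can be written down per frame; the linear ramps are an assumption, since only the endpoints and frame ranges are specified:

```python
import numpy as np

def simulated_rotation_angles(n_frames=1000):
    """Per-frame in-plane rotation (degrees) for the simulated rigid motion."""
    angles = np.zeros(n_frames)
    angles[0:20] = np.linspace(0.0, 45.0, 20)     # frames 1-20: tilt from 0° to 45°
    angles[20:195] = 45.0                          # frames 21-195: stationary at 45°
    angles[195:215] = np.linspace(45.0, 0.0, 20)   # frames 196-215: tilt back to 0°
    return angles                                  # frames 216-1000 remain at 0°
```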
Results & Discussion:
Figures 1, 2, and 3 show the T1 and T2 estimation results from the simulations using the Shepp-Logan (Figure 1) and brain (Figures 2 and 3) phantoms. The maps estimated using IMS-MRF show residual motion artifacts (RMSE for Shepp-Logan: T1 = 61.09% and T2 = 31.77%; brain: T1 = 77.95% and T2 = 11.14%). In contrast, the maps estimated using MORF are closer to the ground truth and present minimal errors (RMSE for Shepp-Logan: T1 = 3.69% and T2 = 2.57%; brain: T1 = 8.07% and T2 = 4.70%), even though 21.5% of the data was motion corrupted during the early stages of the acquisition. Figure 4 shows the reconstructed, ground truth, and undersampled images corresponding to the frames with simulated motion. Figure 5 shows example signal evolution curves from a pixel affected by motion, illustrating the robustness of the proposed algorithm to motion and its capability to recover the signal.
Conclusion:
The proposed MORF reconstruction algorithm noticeably decreases the sensitivity of MRF to motion. The algorithm could also potentially be used on its own for motion estimation in applications such as assessment of cardiac function.
Acknowledgements
The authors would like to acknowledge funding from Siemens Healthcare and NIH grants 1R01EB016728 and 5R01EB017219.
References
1. Ma D, et al. Nature. 2013.
2. Holland P, et al. Communications in Statistics - Theory and Methods. 1977.
3. Pruessmann K, et al. MRM. 2001.
4. Lustig M, et al. MRM. 2007.
5. Aubert-Broche B, et al. NeuroImage. 2006.
6. Pipe JG. MATLAB nuFFT Toolbox.
7. Pierre E, et al. MRM. 2015.