Rüdiger Stirnberg1, Daniel Brenner1, Willem Huijbers1, Tobias Kober2,3,4, and Tony Stöcker1,5
1German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany, 2Advanced Clinical Imaging Technology, Siemens Healthcare, Lausanne, Switzerland, 3Department of Radiology, University Hospital Lausanne (CHUV), Lausanne, Switzerland, 4Department of Electrical Engineering (LTS5), Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland, 5Department of Physics and Astronomy, University of Bonn, Bonn, Germany
Synopsis
Accurate and precise head motion tracking has been shown to be feasible using multi-channel free-induction-decay (FID) signals, where positional information is supported by the spatial distribution of the receive coils. Until now, this required subject-specific calibration using simultaneously acquired FID signals and reference motion parameters, e.g. from an external device, while the subject performs controlled motion. In this study, we demonstrate successful calibration of FID navigators using motion parameters extracted from a resting-state fMRI scan without intentional motion. Additionally, extension of the calibration by principal component analysis of the FID data is shown to increase motion prediction accuracy and precision.
Target Audience
Neuroscientists and MR physicists interested in MR-based head motion navigation.
Purpose
To demonstrate successful calibration of free-induction-decay (FID) motion navigators1 using EPI data acquired under realistic conditions (i.e. without intentional motion), and to improve motion prediction by utilizing principal component analysis.
Methods
Recently, the accuracy and precision of head motion estimation by means of (spatially non-encoded) FID signals acquired with a multi-channel head coil were compared to those of a motion-tracking camera system2. It was found that the mapping between (rigid-body) motion parameters Y1(t)…Y6(t) and FID signals X1(t)…XN(t) can be estimated by linear regression on an individual basis (i.e. per subject):

[Eq. 1] Y = X·β + ε

(Y and ε: Nt×6 matrices; X: Nt×N matrix; Nt: number of time points; N: number of input signals; ε: residual errors)

The N×6 coefficient matrix β is subsequently multiplied with new FID input signals X’ to predict the corresponding motion Y’. Best results are achieved if both real and imaginary parts of the FID signals are included in X (optionally plus constant and linear terms with respect to time, as adopted here).
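For illustration, a minimal numpy sketch of this calibration and prediction step might look as follows; the function and variable names are hypothetical, and the design matrix combines real and imaginary FID parts with constant and linear time terms, as adopted here:

```python
import numpy as np

def build_design_matrix(fid, t):
    """Stack real and imaginary FID parts plus constant and linear
    time terms into the Nt x N regressor matrix X."""
    # fid: complex array of shape (Nt, n_channels); t: time points, shape (Nt,)
    return np.column_stack([fid.real, fid.imag, np.ones_like(t), t])

def calibrate(X, Y):
    """Solve Eq. 1, Y = X*beta + eps, for beta by least squares."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return beta  # shape (N, 6)

def predict(X_new, beta):
    """Predict motion Y' = X'*beta from new FID signals."""
    return X_new @ beta
```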
Experimental data were acquired using the 52 head elements of a 64-channel head coil on a MAGNETOM Prisma 3T system (Siemens Healthcare, Erlangen, Germany), yielding a total of N=52×2+2=106 input signals. To suppress the effect of signals with minor relevance (e.g. high redundancy) or even detrimental impact (e.g. low SNR), we propose to replace X in Eq. 1 by its first Npc<N principal components (truncated PCA), such that X ≈ U·S·Vᵀ =: T·Vᵀ (S: diagonal Npc×Npc singular value matrix). The N×Npc matrix V maps between the input signals X and the Nt×Npc principal component matrix T. Compared to Eq. 1, the proposed approach thus relies on calibration and prediction according to

[Eq. 2] Y = T·β + ε = X·V·β + ε.
In essence, instead of the original two-step procedure
1. solve Eq. 1 for β using calibration data X and Y
2. predict motion Y’=X’·β
we propose the following three-step procedure:
1. PCA on the calibration signals X to obtain V
2. solve Eq. 2 for β
3. predict motion Y’=X’·β’, where β’=V·β
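A minimal numpy sketch of this three-step procedure could look as follows; function and variable names are assumptions, and whether the signals are mean-centred before the SVD is not specified here, so centring is omitted in the sketch:

```python
import numpy as np

def calibrate_pca(X, Y, n_pc=24):
    """Three-step PC calibration (Eq. 2): truncated PCA on the
    calibration signals, regression on the PC scores, and folding
    the loadings back into the coefficients."""
    # Step 1: PCA via SVD, keeping the first n_pc components
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    V = Vt[:n_pc].T                      # N x Npc loading matrix
    T = X @ V                            # Nt x Npc PC scores (truncated U*S)
    # Step 2: solve Eq. 2, Y = T*beta + eps
    beta, *_ = np.linalg.lstsq(T, Y, rcond=None)
    # Step 3: beta' = V*beta, so prediction works directly on raw signals
    return V @ beta                      # shape (N, 6)

# Prediction is then identical to the original approach: Y' = X' @ beta'
```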
EXPERIMENT: A prototype implementation of a segmented 3D-EPI sequence3, capable of high acceleration using CAIPIRINHA4 sampling as described recently5,6, was modified to include one global excitation pulse (FA=5°) and FID readout (2.5 ms) once every volume TR of 500 ms (3 mm isotropic, FA=15°). Two scans of approximately 4:30 min duration each (550 volumes) were acquired. During the calibration scan, the subject looked at a fixation cross, while during the validation scan, the subject responded to visual and auditory stimuli by button presses. The subject was instructed to avoid movement. Reference motion parameters were estimated from the EPI data using FSL’s MCFLIRT7, assuming motion sensitivity comparable to or higher than that discussed by Tisdall et al.8. The raw FID signals were processed as described previously2. FID-based motion prediction was performed once according to Eq. 1 and once according to the new approach (Eq. 2). For the latter, the reduced number of PCs was set to Npc=24.
Results
Fig. 1 shows unintentional head motion parameters as extracted from the calibration and validation EPIs (Y), together with 26 of the 204 FID signals (X) as well as the first 6 of the 24 PCs (T=X·V). Fig. 2 shows FID-extracted motion parameters according to the original (FID prediction) and the novel (PC prediction) approach. For both methods, the mean absolute error (MAE) and the standard deviation of the error (STD) demonstrate relatively high prediction accuracy and precision2 compared to the ground-truth validation data. MAE and STD are smaller by a factor of 2-3 when using the PC approach. The scatter plots and the correlation coefficients, r, indicate a similar trend for each degree of freedom separately.
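As an illustration of how these figures of merit can be obtained, a short numpy sketch follows; array shapes and names are assumptions, with per-parameter metrics computed between predicted and reference (MCFLIRT) motion:

```python
import numpy as np

def error_metrics(Y_pred, Y_ref):
    """Per-parameter MAE, error STD and Pearson r between predicted
    and reference motion parameters (arrays of shape (Nt, 6))."""
    err = Y_pred - Y_ref
    mae = np.mean(np.abs(err), axis=0)
    std = np.std(err, axis=0)
    r = np.array([np.corrcoef(Y_pred[:, k], Y_ref[:, k])[0, 1]
                  for k in range(Y_ref.shape[1])])
    return mae, std, r
```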
Discussion
Both approaches perform relatively well considering that no intentional motion was performed during calibration. The novel PC approach is more accurate and precise and, although not shown here, also more robust against motion parameter peaks (cf. arrows in Fig. 1), as observed when limiting the calibration phase to the first 60 or 30 seconds. The high number of 52 head array channels used here may, however, be a prerequisite. A thorough group analysis across different degrees of motion is expected to confirm the single-subject findings.
Conclusion
We have shown that replacing raw FID signals by a reduced set of principal components can improve motion prediction based on multi-channel FID signals without requiring dedicated equipment. Furthermore, even unintentional motion was found to be suitable for calibration. FID navigators acquired during normal resting-state fMRI scans may thus be suitable to train subsequent motion tracking during anatomical scans.
Acknowledgements
RS wishes to thank Benedikt A. Poser for his help in enabling online 2D-CAIPIRINHA image reconstruction.
References
1. Kober et al. Magn. Reson. Med. 66, 2011
2. Babayeva et al. IEEE TMI 34, 2015
3. Poser et al. NeuroImage 51, 2010
4. Breuer et al. Magn. Reson. Med. 55, 2006
5. Narsude et al. ISMRM 2013
6. Zahneisen et al. Magn. Reson. Med. 74, 2015
7. Jenkinson et al. NeuroImage 17, 2002
8. Tisdall et al. Magn. Reson. Med. 68, 2012