Julian Maclaren1, Andre Kyme2,3, Murat Aksoy1, and Roland Bammer1
1Department of Radiology, Stanford University, Stanford, CA, United States, 2Department of Biomedical Engineering, University of California Davis, Davis, CA, United States, 3Brain and Mind Centre, University of Sydney, Sydney, Australia
Synopsis
Prospective motion correction based on optical tracking shows promise for improving image quality in MR brain imaging. To simplify this technique and expedite clinical deployment, it is desirable to avoid attaching markers to the patient’s head. Here we demonstrate proof-of-principle markerless tracking using an MR-compatible stereo camera and head coil configuration. We tested the method outside the MR environment using a 6-axis robot, capable of very accurate and repeatable (~20 µm) motion, to control a head phantom. Close agreement between our pose estimates and the applied motion suggests that accurate markerless tracking of the head is feasible in MRI.
Introduction and Purpose
Prospective motion correction has been shown to be effective in preventing motion artifacts in neuroimaging (1). Optical systems are a popular means of obtaining the required head tracking information during scanning (2–6). However, one challenge shared by all existing optical prospective motion correction methods is the need to attach markers to the subject. The aim of this work was to investigate the feasibility of markerless optical motion correction for neuroimaging using a realistic test platform with controlled motion.
Methods
Mechanical setup: The test platform (Figure 1) was built around a 6-axis robot arm (C3-A601ST, Epson America Inc.) capable of highly repeatable (20 μm) arbitrary motion in six degrees of freedom. An 8-channel head coil (Invivo) was securely mounted in front of the robot. A polycarbonate rig was designed and 3D printed to fit the head coil and rigidly hold two MR-compatible cameras normally used at our institution for marker-based optical head tracking. We used a mannequin head attached to the robot end effector to provide a realistically shaped surface for tracking. To achieve a skin-like surface, an image from Subject #5 of the MIT-CBCL face recognition database (7) was color-printed on paper and glued to the forehead (Figure 1b).
Calibration: Stereo camera calibration was performed by programming the robot to move a checkerboard marker, composed of an 8 x 7 grid of 4 mm squares (Figure 2), to 30 different positions within the field-of-view of both cameras. Intrinsic and extrinsic camera parameters were then computed using the Matlab Calibration Toolbox (8).
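For illustration only, a comparable stereo calibration could be scripted in Python with OpenCV rather than the Matlab toolbox used here; the sketch below is a rough outline under assumed inputs (image_pairs, image_size, and the helper name calibrate_stereo are placeholders), not the exact workflow of this work.

import cv2
import numpy as np

# Checkerboard: an 8 x 7 grid of 4 mm squares has 7 x 6 inner corners
PATTERN_SIZE = (7, 6)
SQUARE_SIZE_MM = 4.0

def calibrate_stereo(image_pairs, image_size):
    # image_pairs: list of (left, right) grayscale checkerboard images (~30 robot poses)
    # image_size: (width, height) of the camera images in pixels
    # Corner coordinates in the checkerboard frame (z = 0 plane), in mm
    objp = np.zeros((PATTERN_SIZE[0] * PATTERN_SIZE[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:PATTERN_SIZE[0], 0:PATTERN_SIZE[1]].T.reshape(-1, 2) * SQUARE_SIZE_MM

    obj_pts, img_pts_l, img_pts_r = [], [], []
    for left, right in image_pairs:
        ok_l, c_l = cv2.findChessboardCorners(left, PATTERN_SIZE)
        ok_r, c_r = cv2.findChessboardCorners(right, PATTERN_SIZE)
        if ok_l and ok_r:  # keep only board poses detected by both cameras
            obj_pts.append(objp)
            img_pts_l.append(c_l)
            img_pts_r.append(c_r)

    # Intrinsic parameters (camera matrix and distortion) for each camera
    _, K_l, d_l, _, _ = cv2.calibrateCamera(obj_pts, img_pts_l, image_size, None, None)
    _, K_r, d_r, _, _ = cv2.calibrateCamera(obj_pts, img_pts_r, image_size, None, None)

    # Extrinsic parameters: rotation R and translation T between the two cameras
    _, K_l, d_l, K_r, d_r, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, img_pts_l, img_pts_r, K_l, d_l, K_r, d_r, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return K_l, d_l, K_r, d_r, R, T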
Motion experiment: The robot was programmed to move between a series of fixed poses comprising translations in the head-feet direction and rotations about the z-axis (‘head shaking’). At each pose, synchronized images from the cameras were saved for offline motion processing.
Analysis: The feature-based pose estimation method reported previously for PET imaging of rats (9) was adapted for this experiment. In this method, features are detected and matched across multiple camera views to accumulate a database of head landmarks; pose is then estimated based on 3D-2D registration of the landmarks to features in each image. Estimates were compared to the ground truth motion applied by the robot.
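As a rough, hedged illustration of such a pipeline (not the implementation of reference 9; all function and variable names below are hypothetical), SIFT features from a reference stereo pair can be triangulated into a 3D landmark database, and each subsequent image can then be registered to that database with a RANSAC-based PnP solver:

import cv2
import numpy as np

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)

def build_landmark_database(ref_left, ref_right, mask, P_left, P_right):
    # Detect and match SIFT features in a reference stereo pair (mask is an
    # optional 8-bit mask limiting detection to the forehead region), then
    # triangulate the matches into 3D landmarks using the 3x4 projection
    # matrices P_left and P_right obtained from stereo calibration.
    kp_l, des_l = sift.detectAndCompute(ref_left, mask)
    kp_r, des_r = sift.detectAndCompute(ref_right, mask)
    matches = matcher.match(des_l, des_r)
    pts_l = np.float32([kp_l[m.queryIdx].pt for m in matches]).T  # 2 x N
    pts_r = np.float32([kp_r[m.trainIdx].pt for m in matches]).T
    X = cv2.triangulatePoints(P_left, P_right, pts_l, pts_r)      # 4 x N homogeneous
    landmarks_3d = (X[:3] / X[3]).T                               # N x 3
    descriptors = np.float32([des_l[m.queryIdx] for m in matches])
    return landmarks_3d, descriptors

def estimate_pose(frame, mask, landmarks_3d, descriptors, K, dist):
    # 3D-2D registration: match image features to the landmark database and
    # solve for the rigid-body pose (K, dist are the camera intrinsics).
    kp, des = sift.detectAndCompute(frame, mask)
    matches = matcher.match(descriptors, des)
    obj_pts = np.float32([landmarks_3d[m.queryIdx] for m in matches])
    img_pts = np.float32([kp[m.trainIdx].pt for m in matches])
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, dist)
    R, _ = cv2.Rodrigues(rvec)  # 3 x 3 rotation matrix; tvec is the translation
    return R, tvec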
Results
Figure 3 shows camera images of the mannequin head surface before (Fig. 3a) and after (Fig. 3b) masking and detection of SIFT (scale-invariant feature transform) features. Figures 4 and 5 compare measured translations and rotations with the ground truth motion provided by the robot. The mean absolute errors in pose estimation were 0.14 mm for translations and 0.23 degrees for rotations.
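The reported error metric corresponds to a mean absolute difference between the estimated and robot-applied pose parameters; a trivial sketch (variable names assumed):

import numpy as np

def mean_absolute_error(estimated, applied):
    # Mean absolute difference between estimated and robot-applied pose values
    # (e.g. translations in mm, or rotation angles in degrees).
    return float(np.mean(np.abs(np.asarray(estimated) - np.asarray(applied))))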
Discussion
The aim of this work was to demonstrate the feasibility of accurate head motion tracking without attached markers, using a test setup that is realistic and allows motion to be accurately specified. Initial results indicate that the geometric configuration and pose estimation algorithms can provide pose estimates suitable for sub-millimeter / sub-degree motion correction.
Since the head coil and camera setup used in this work are similar to the setup we currently use for in vivo marker-based prospective motion correction, translating the markerless method to in vivo applications would be straightforward. Other markerless tracking approaches based on structured light have been demonstrated previously for PET imaging (10). Structured light methods may require a larger field of view for accurate topological mapping of the head and are therefore less likely to be compatible with the enclosed head coils used in MRI; the approach presented here does not suffer from this problem. However, poses are currently computed offline, and for the method to be applied in prospective motion correction, computational efficiency must be improved to allow real-time pose processing with low latency.
Conclusion
We have tested a markerless head motion tracking technique using a realistic test platform that allows highly reproducible motion. Accurate motion tracking of a skin-like surface without the need for attached markers has been demonstrated.
Acknowledgements
NIH (2R01 EB002711, 5R01 EB008706, 5R01 EB011654), the Center of Advanced MR Technology at Stanford (P41 RR009784), and the Lucas Foundation. Kyme is supported by funding from the Education and Research Foundation, Society of Nuclear Medicine and Molecular Imaging, USA.
References
1. Maclaren J, Herbst M, Speck O, Zaitsev M. Prospective motion correction in brain imaging: A review. Magn Reson Med 2013;69:621–636. doi: 10.1002/mrm.24314.
2. Zaitsev M, Dold C, Sakas G, Hennig J, Speck O. Magnetic resonance imaging of freely moving objects: prospective real-time motion correction using an external optical motion tracking system. Neuroimage 2006;31:1038–1050.
3. Qin L, van Gelderen P, Derbyshire JA, Jin F, Lee J, de Zwart JA, Tao Y, Duyn JH. Prospective head-movement correction for high-resolution MRI using an in-bore optical tracking system. Magn Reson Med 2009;62:924–934. doi: 10.1002/mrm.22076.
4. Aksoy M, Forman C, Straka M, Skare S, Holdsworth S, Hornegger J, Bammer R. Real-time optical motion correction for diffusion tensor imaging. Magn Reson Med 2011;66:366–378. doi: 10.1002/mrm.22787.
5. Schulz J, Siegert T, Reimer E, Labadie C, Maclaren J, Herbst M, Zaitsev M, Turner R. An embedded optical tracking system for motion-corrected magnetic resonance imaging at 7T. Magn Reson Mater Phys Biol Med 2012:1–11. doi: 10.1007/s10334-012-0320-0.
6. Maclaren J, Armstrong BSR, Barrows RT, et al. Measurement and correction of microscopic head motion during magnetic resonance imaging of the brain. PLoS One 2012;7:e48088. doi: 10.1371/journal.pone.0048088.
7. Weyrauch B, Heisele B, Huang J, Blanz V. Component-based face recognition with 3D morphable models. 2004 Conf Comput Vis Pattern Recognit Workshops, 2004. doi: 10.1109/CVPR.2004.41.
8. Bouguet J-Y. Camera Calibration Toolbox for Matlab. 1999. http://www.vision.caltech.edu/bouguetj/calib_doc/
9. Kyme A, Se S, Meikle S, Angelis G, Ryder W, Popovic K, Yatigammana D, Fulton R. Markerless motion tracking of awake animals in positron emission tomography. IEEE Trans Med Imaging 2014;33:2180–2190. doi: 10.1109/TMI.2014.2332821.
10. Olesen OV, Paulsen RR, Højgaard L, Roed B, Larsen R. Motion tracking for medical imaging: A nonvisible structured light tracking approach. IEEE Trans Med Imaging 2012;31:79–87. doi: 10.1109/TMI.2011.2165157.