We present a framework that has the potential to capture non-rigid 3D motion at 50 Hz, thereby drastically accelerating state-of-the-art techniques. Our model directly and explicitly relates the motion-field to the k-space data and is independent of the spatial resolution, allowing for extremely high under-sampling. We illustrate proof-of-principle validations of our method through a simulation test and whole-brain 3D in-vivo measured data. Results show that 3D motion-fields can be reconstructed from extremely under-sampled k-space data consisting of as few as 64 points, enabling 3D motion estimation at unprecedented frame-rates.
Real-time estimation of three-dimensional human motion/deformation is crucial for applications such as MRI-guided radiotherapy and cardiac imaging. State-of-the-art dynamic MRI methods (e.g. navigators1, cardiac gating2 and low-rank compressed sensing3) aim at overcoming the inherently slow MR data-encoding rate but are still far from achieving real-time, high-resolution, non-gated 3D acquisitions.
Here, we present a framework that has the potential to capture non-rigid 3D motion at 50 Hz, thereby drastically accelerating state-of-the-art techniques. Our model directly and explicitly relates the motion-field to the $$$k$$$-space data and is independent of the spatial resolution, allowing for extremely high under-sampling.
Preliminary numerical and in-vivo experimental tests show that reconstruction of an affine 3D motion-field is possible with as few as 64 $$$k$$$-space points.
Given an MR-signal at time $$$t_{j+1}$$$ and a reference image at time $$$t_j$$$, we reconstruct the deformation-field $$$\boldsymbol{u}(\boldsymbol{r})$$$ by numerically solving a non-linear inverse problem (see Fig. 2). We illustrate proof-of-principle validations of our method through a simulation test and whole-brain 3D in-vivo measured data.
For the numerical test, we considered an affine transformation $$$\boldsymbol{u}(\boldsymbol{r})$$$, i.e. $$\boldsymbol{u}(\boldsymbol{r})=A\boldsymbol{r}+\boldsymbol{\tau},$$where $$$A\in\mathbb{R}^{3\times3}$$$ and $$$\boldsymbol{\tau}\in\mathbb{R}^3$$$. We employed a bSSFP sequence with a very short (sub-millisecond) 3D spiral readout trajectory (see Fig. 3). From the signal after motion (snapshot data) we reconstructed the affine transformation and compared the correspondingly deformed object with the ground truth.
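The idea can be sketched on a toy problem: parameterize the deformation by the 12 affine degrees of freedom $$$(A,\boldsymbol{\tau})$$$, warp the reference image, predict a handful of $$$k$$$-space samples, and fit the parameters by non-linear least squares. This is a minimal illustrative sketch, not the authors' implementation: the pull-back warp convention, the Cartesian low-frequency sampling pattern, and the choice of solver are all assumptions made here for the example.

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import least_squares

def warp_affine(ref, A, tau):
    """Pull-back warp on the voxel grid: I'(r) = I_ref(A r + tau).
    (Hypothetical convention chosen for this sketch.)"""
    n = ref.shape[0]
    ax = np.arange(n, dtype=float)
    grid = np.stack(np.meshgrid(ax, ax, ax, indexing="ij")).reshape(3, -1)
    coords = A @ grid + tau[:, None]
    return map_coordinates(ref, coords.reshape(3, n, n, n), order=1, mode="nearest")

def central_kspace(img, m=4):
    """The m^3 lowest spatial frequencies play the role of the 'snapshot'."""
    k = np.fft.fftshift(np.fft.fftn(img))
    c = img.shape[0] // 2
    s = slice(c - m // 2, c + m // 2)
    return k[s, s, s].ravel()

def residual(theta, ref, data):
    # theta[:9] perturbs A around the identity, theta[9:] is the translation.
    A = np.eye(3) + theta[:9].reshape(3, 3)
    d = central_kspace(warp_affine(ref, A, theta[9:])) - data
    return np.concatenate([d.real, d.imag])

# Toy problem: translate a cube by (1.0, -0.5, 0.5) voxels and recover the
# motion from only 64 k-space points.
n = 16
ref = np.zeros((n, n, n)); ref[5:11, 5:11, 5:11] = 1.0
tau_true = np.array([1.0, -0.5, 0.5])
data = central_kspace(warp_affine(ref, np.eye(3), tau_true))
sol = least_squares(residual, np.zeros(12), args=(ref, data), diff_step=1e-4)
print(np.round(sol.x[9:], 2))
```

Because the forward model is cheap (one interpolation and one small FFT per evaluation) and only 12 parameters are fitted, the minimization converges in well under a second, consistent with the reduced problem size the framework exploits.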
For the whole-brain 3D in-vivo test, we represented the motion-field by an affine transformation. We acquired one reference image and a total of seven snapshot $$$k$$$-spaces; the volunteer moved in between the scans. Data were acquired using a heavily under-sampled 3D steady-state SPGR sequence with Cartesian readout on a 1.5T MR system (Ingenia, Philips), TR/TE=8ms/3ms. Retrospective down-sampling resulted in a pre-motion reference image of size $$$34\times34\times34$$$ and $$$k$$$-space snapshots of size $$$4\times4\times4$$$ (i.e. 64 points acquired in $$$4\times4\times$$$TR=128ms). To obtain ground-truth images, we performed a 3D SPGR with a single-shot EPI readout (TR/TE=35ms/9ms) immediately after each snapshot scan.
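The retrospective down-sampling and the quoted acquisition time follow directly from the sequence parameters. A small sketch (assuming the snapshot is the central $$$4\times4\times4$$$ block of $$$k$$$-space and one Cartesian readout line per TR, which the abstract implies but does not spell out):

```python
import numpy as np

TR_MS = 8  # repetition time of the SPGR sequence (ms)
m = 4      # snapshot k-space size per dimension

# Retrospective down-sampling: keep only the central m^3 samples of the
# 34^3 reference k-space (illustrative stand-in for the actual pipeline).
img = np.random.default_rng(1).standard_normal((34, 34, 34))
k = np.fft.fftshift(np.fft.fftn(img))
c = 34 // 2
s = slice(c - m // 2, c + m // 2)
snap = k[s, s, s]

print(snap.size)            # 64 k-space points per snapshot
print(m * m * TR_MS, "ms")  # 4 x 4 readout lines x TR = 128 ms
```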
For the simulations, Fig. 3c shows that our model accurately predicts the true signal after motion. Fig. 4 shows the object transformed by the reconstructed motion-fields; the reconstruction achieves very high accuracy.
Results of the in-vivo experiment are reported in Fig. 5 and show good agreement between our reconstruction and the ground truth. We used a plain channel-wise sum of the raw MR-signal data for our snapshots, without any coil-sensitivity correction; this possibly explains the residual differences between our reconstruction and the ground truth.
We introduced a framework to reconstruct non-rigid 3D transformations from minimal MR-signal data. This study presented simulated and experimental proof-of-principle results, showing the potential of the method. In the future, we will employ fast single-shot spiral acquisitions and expect to achieve a high-resolution 3D acquisition protocol running at 50 frames per second (TR=20ms).
While head motion is well described by affine transformations, other organs (e.g. abdomen and/or heart) will require more local, non-rigid parameterizations for $$$\boldsymbol{u}$$$; splines and other compressed representations derived from existing 4D human models4 will be considered. Note that our proof-of-principle affine reconstructions were obtained with only 64 points of Cartesian $$$k$$$-space. More sophisticated transformations may require different trajectories and/or more $$$k$$$-space points, but should still fit within the targeted 50 Hz rate.
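One way such a compressed, spline-like parameterization could look (a hypothetical sketch, not the representation the authors will ultimately use): store displacements on a coarse control grid and interpolate them to the full voxel grid, so the number of unknowns stays small regardless of image resolution.

```python
import numpy as np
from scipy.ndimage import zoom

def displacement_from_control_grid(ctrl, n):
    """Upsample a coarse (3, m, m, m) control grid of displacements to a
    dense (3, n, n, n) field with cubic (spline) interpolation."""
    m = ctrl.shape[1]
    return np.stack([zoom(c, n / m, order=3, grid_mode=True, mode="nearest")
                     for c in ctrl])

# 3 x 4^3 = 192 unknowns parameterize a full 3 x 32^3 displacement field.
ctrl = np.zeros((3, 4, 4, 4))
ctrl[0, 2, 2, 2] = 1.0  # one control point pushes tissue along x
u = displacement_from_control_grid(ctrl, 32)
print(u.shape)  # (3, 32, 32, 32)
```

The control-grid coefficients would then replace the 12 affine parameters as the unknowns of the same non-linear inversion.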
The highly under-sampled data leads to a reduced minimization problem, which our initial Matlab implementation solved in less than a second. Real-time reconstruction should be attainable with a CUDA/GPU implementation.