Two-dimensional MRF is considered less sensitive to in-plane motion than conventional imaging techniques. However, challenges remain when scanning populations prone to rapid and extensive motion. Here, we propose a two-step 3D MRF procedure that corrects subject motion during reconstruction. In the first step, we reconstruct the data in small segments consisting of images of equal contrast and estimate the between-segment motion. In the second step, we apply the motion correction and use the corrected images for dictionary matching. This results in higher-quality reconstructed images and more precise quantitative maps.
3D MRF sequence
Data were acquired on a clinical 1.5 T scanner (HDx, GE Healthcare). In our 3D-MRF acquisition we sampled random radial directions at each frame (Fig. 1A) with constant TE/TR. Each segment of the data was acquired with a pattern of flip angles (Fig. 1B). Replicating this design across the acquisition yields segments of equal contrast with the same angular sampling density in 3D k-space. Combining the images from all segments produces a fully reconstructed image. We use a singular value decomposition to compress the acquired frames in k-space to singular values4. The final images form a 128×128×128 volume of (1.5 mm)³ voxels.
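The compression step can be sketched as follows. This is a minimal illustration rather than the authors' implementation; it assumes the temporal basis is derived from the simulated dictionary and that every frame samples the same number of k-space points, neither of which is specified in the text.

```python
import numpy as np

# Illustrative sketch of SVD compression of MRF frames: a low-rank temporal
# basis is taken from the simulated dictionary, and the acquired k-space frames
# are projected onto it to form a few "singular-value" data sets that are then
# reconstructed into singular-value images.
def temporal_basis(dictionary, rank=5):
    """dictionary: complex array of simulated fingerprints, shape (n_entries, n_frames)."""
    _, _, vh = np.linalg.svd(dictionary, full_matrices=False)
    return vh[:rank]                      # rows span the temporal subspace (rank, n_frames)

def compress_frames(kspace_frames, basis):
    """kspace_frames: complex array (n_frames, n_samples_per_frame)."""
    return basis @ kspace_frames          # (rank, n_samples_per_frame) compressed data
```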
Simulation
Data from a phantom were collected with a single-segment acquisition. Motion was simulated by shifting and rotating the trajectories before reconstruction. A time course of 3D volumes was generated, consisting of the motion-free reconstruction followed by the volumes reconstructed with simulated motion. The 3D motion correction algorithm implemented in AFNI was then applied5.
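As an illustration of how such motion can be imposed on a radial acquisition (a hedged sketch with hypothetical names, not the authors' code): rotating the object is equivalent to rotating the k-space trajectory, and translating it adds a linear phase to the sampled data.

```python
import numpy as np

# Sketch of simulating rigid motion before reconstruction: rotate the sampling
# directions and modulate the data with the linear phase corresponding to a
# translation. Names, shapes and sign conventions are illustrative assumptions.
def apply_rigid_motion(traj, data, rotation, shift_mm):
    """traj: (n_samples, 3) k-space coordinates in 1/mm; data: (n_samples,) complex;
    rotation: (3, 3) rotation matrix; shift_mm: (3,) translation in mm."""
    traj_rot = traj @ rotation.T                        # rotated sampling directions
    phase = np.exp(-2j * np.pi * traj_rot @ shift_mm)   # linear phase = translation
    return traj_rot, data * phase
```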
In vivo experiment
Two subjects participated in this study (cases A and B), each scanned twice with a 48-segment scheme. In the first acquisition, subjects were instructed to stay still, while in the second they moved their heads randomly. The processing pipeline for the data with deliberate motion was as follows. 1) Each acquisition was first reconstructed segment by segment to form a time series of 3D volumes. During reconstruction, the k-space was apodised with a 3D Gaussian (FWHM = 16 k-space points) to reduce noise and facilitate motion correction. 2) Between-segment motion was estimated from the magnitude data and the resulting transformation matrix was applied to the complex data of each segment. 3) MRF dictionary fitting was run on both the motion-corrected images and the uncorrected images with motion artifacts. Results were compared with the dataset acquired without voluntary motion.
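A minimal sketch of steps 1 and 2 follows. It assumes the Gaussian apodisation is applied to a gridded (Cartesian) k-space of each segment and that the rigid transform estimated from the magnitude images (e.g. by AFNI) is resampled in image space; function names are illustrative, not the authors' code.

```python
import numpy as np
from scipy import ndimage

def gaussian_apodisation(kspace, fwhm=16.0):
    """Apodise a complex 3D k-space (DC term at the centre) with a 3D Gaussian."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> standard deviation
    grids = np.meshgrid(*[np.arange(n) - n // 2 for n in kspace.shape], indexing="ij")
    weight = np.exp(-sum(g ** 2 for g in grids) / (2.0 * sigma ** 2))
    return kspace * weight

def apply_transform_to_complex(volume, matrix, offset):
    """Apply an estimated rigid transform to a complex segment volume by
    resampling the real and imaginary parts separately."""
    real = ndimage.affine_transform(volume.real, matrix, offset=offset, order=3)
    imag = ndimage.affine_transform(volume.imag, matrix, offset=offset, order=3)
    return real + 1j * imag
```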
Fig. 2 shows that the motion correction algorithm correctly estimates the simulated movements (Fig. 2A) and illustrates the effect of motion correction applied to the phantom data (Fig. 2B).
Fig. 3 shows the improvement of the in vivo data reconstructed with the motion correction algorithm. Not only are the anatomical landmarks of higher quality, but the T1 maps also contain more detail. In Fig. 4 we correlate the quantitative estimates between the datasets; in both cases, the corrected values are closer to the ground truth. To obtain a spatial map of the fitting error, we plot the correlation values between the dictionary and the acquired data as a map for case A (Fig. 5A). We observe that the correction increases the correlation values globally (Fig. 5), especially in areas strongly affected by motion.
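The correlation values in Fig. 5 can be understood through the matching step itself: with L2-normalised voxel time courses and dictionary entries, the reported value is the magnitude of the inner product with the best-matching entry. A hedged sketch with illustrative names:

```python
import numpy as np

# Sketch of template matching that also yields the per-voxel correlation map:
# normalise signals and dictionary entries, take the entry with the largest
# inner-product magnitude, and keep that magnitude as the correlation value.
def match_dictionary(signals, dictionary):
    """signals: (n_voxels, n_t) complex; dictionary: (n_entries, n_t) complex."""
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s = signals / np.linalg.norm(signals, axis=1, keepdims=True)
    inner = np.abs(s @ d.conj().T)              # (n_voxels, n_entries)
    best = inner.argmax(axis=1)                 # index of the best-matching entry
    corr = inner[np.arange(best.size), best]    # correlation value per voxel
    return best, corr
```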
Fig. 1A: Sequence diagram for 3D radial trajectories. Spokes were acquired with randomized angles (theta, phi).
Fig. 1B: Flip angle pattern illustrating the equal-contrast segments. An inversion pulse preceded each segment.
Fig. 2A: Simulated and estimated motion parameters.
Fig. 2B: Volumes before and after motion correction (images of the 1st singular value). The first image is the reference; the next two volumes contain only translation, followed by two with rotation only and two with both translation and rotation. Red circles highlight the effect of the applied correction.
Fig. 5A: Correlation values from dictionary matching for the middle axial slice of case A.
Fig. 5B: Histogram of the same parameter for the presented slice (red: before correction, green: after correction). Lines indicate the mean values.