Qing Zou^{1}, Luis A. Torres^{2}, Sean B. Fain^{1}, and Mathews Jacob^{1}

^{1}University of Iowa, Iowa City, IA, United States, ^{2}University of Wisconsin–Madison, Madison, WI, United States

UTE radial MRI methods are powerful tools for probing lung structure and function. However, directly using this scheme for high-resolution lung imaging is challenged by the long breath-holds it requires. While self-gating approaches that bin the data into different respiratory phases are promising, they do not allow functional imaging of the lung and are often sensitive to bulk motion. The main focus of this work is to introduce a novel motion-compensated manifold learning framework for functional and structural lung imaging. The proposed scheme is robust to bulk motion and enables high-resolution lung imaging in around 4 minutes.

The main focus of this abstract is to introduce a novel manifold learning approach for motion-compensated dynamic lung MRI. Deep manifold learning schemes, which have conceptual similarities with self-gating approaches, were recently introduced for motion-resolved dynamic MRI [2]. The main distinction of this work is the generalization to the motion-compensated setting. By enabling the combination of data from different motion phases, this approach can significantly improve the image quality compared to self-gating and previous manifold methods. More importantly, it is robust to bulk motion during the scan, which is a challenge for gating-based methods. The approach recovers the dynamic time series, which enables functional lung imaging, facilitating the estimation of lung and vessel contraction and diaphragm motion.

$$\mathcal{C}(\mathbf{z},\theta,f) = \sum_{t=1}^M\|\mathcal{A}_t(f_t)-\mathbf{b}_t\|^2 + \lambda_t\|\nabla_t\mathbf{z}_t\|.$$

Here, $$$\mathcal{A}_t$$$ denotes the multichannel non-uniform FFT forward model at time point $$$t$$$, and $$$\mathbf b_t$$$ are the corresponding measurements; $$$f_t = f\circ\Phi_t$$$ is the static template $$$f$$$ deformed by the motion field $$$\Phi_t = \mathcal G_{\theta}(\mathbf{z}_t)$$$. The second term is a smoothness penalty that encourages the latent vectors $$$\mathbf{z}_t$$$ to vary smoothly along the time direction.
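As a minimal numerical illustration of this cost (a sketch only: a random linear operator stands in for the multichannel NUFFT $$$\mathcal{A}_t$$$, the deformed frames $$$f_t$$$ are given directly rather than generated by the CNN, and all names are hypothetical), the data-fidelity and latent-smoothness terms can be evaluated as:

```python
import numpy as np

rng = np.random.default_rng(0)
M, n_latent, n_img, n_meas = 20, 1, 64, 32

# Hypothetical stand-ins: A_t is a random linear map in place of the
# multichannel NUFFT, f_frames are the deformed template volumes f_t,
# and b_t are noiseless measurements consistent with them.
A = rng.standard_normal((M, n_meas, n_img))
f_frames = rng.standard_normal((M, n_img))
b = np.einsum('tmi,ti->tm', A, f_frames)
z = rng.standard_normal((M, n_latent))   # latent vectors z_t
lam = 0.1                                # smoothness weight

def cost(z, f_frames):
    # Data-fidelity term: sum_t ||A_t(f_t) - b_t||^2
    resid = np.einsum('tmi,ti->tm', A, f_frames) - b
    data_term = np.sum(resid ** 2)
    # Temporal smoothness on the latents: lam * sum_t ||z_t - z_{t-1}||
    smooth_term = lam * np.sum(np.linalg.norm(np.diff(z, axis=0), axis=1))
    return data_term + smooth_term

print(cost(z, f_frames))  # data term vanishes here; only smoothness remains
```

In the actual scheme, the frames would be generated as $$$f_t = f\circ\mathcal G_\theta(\mathbf z_t)$$$, and the cost would be minimized jointly over $$$\theta$$$, $$$\mathbf z$$$, and $$$f$$$ by stochastic gradient descent.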

Post-learning, the image volume at each time instance is recovered as a deformed version of the static template, using the motion field generated by the CNN from the corresponding latent vector.
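A toy sketch of this recovery step (illustrative only: a 2-D template, nearest-neighbour sampling, and a hand-written motion field; the actual pipeline would use a differentiable 3-D interpolator driven by the CNN output):

```python
import numpy as np

def warp(template, phi):
    """Deform a 2-D template by a dense motion field phi of shape (H, W, 2),
    where phi[..., 0] and phi[..., 1] are per-pixel displacements along the
    row and column axes. Nearest-neighbour sampling, clipped at the borders."""
    H, W = template.shape
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing='ij')
    src_y = np.clip(np.rint(yy + phi[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.rint(xx + phi[..., 1]).astype(int), 0, W - 1)
    return template[src_y, src_x]

template = np.zeros((8, 8)); template[2, 2] = 1.0  # a single bright pixel
phi = np.zeros((8, 8, 2)); phi[..., 0] = 1.0       # sample one row down
frame = warp(template, phi)
print(np.argwhere(frame == 1.0))  # the bright pixel appears at (1, 2)
```

Each reconstructed frame is thus a warp of one template, so the entire dynamic series costs one template plus per-frame motion fields, rather than independent volumes.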

[1] L. Feng et al., XD-GRASP: Golden-angle radial MRI with reconstruction of extra motion-state dimensions using compressed sensing, MRM, 75(2):775-788, 2016.

[2] Q. Zou et al., Dynamic imaging using a deep generative SToRM (Gen-SToRM) model, IEEE-TMI, 40(11):3102-3112, 2021.

[3] X. Zhu et al., Iterative motion-compensation reconstruction ultra-short TE (iMoCo UTE) for high-resolution free-breathing pulmonary MRI, MRM, 83:1208–1221, 2020.

Figure 1: Proposed motion-compensated reconstruction
jointly learns the motion vectors $$$\Phi_t$$$ and the static image
template $$$f$$$ from the k-t space data. To regularize the motion fields, we model
$$$\Phi_t = \mathcal G_{\theta}(\mathbf{z}_t)$$$ as the outputs of a deep CNN generator
$$$\mathcal G_{\theta}$$$ whose weights are denoted by $$$\theta$$$, driven by
low-dimensional (e.g., 1-D in lung MRI) latent vectors. The parameters of the
CNN generator $$$\theta$$$, the latent vectors $$$\mathbf{z}_t$$$, and the template $$$f$$$ are
jointly estimated from the data.

Figure 2: Comparison of the image quality of the
reconstructions. We compare the reconstruction using the proposed method with the
reconstructions using iMoCo [3] and XD-GRASP [1] on the pre-contrast
dataset. We also show the post-contrast reconstruction, which has higher SNR and hence may serve as a reference.

Figure 3: Demonstration of the proposed scheme on the
post-contrast data from both the healthy subject and the fibrotic subject. Maximum intensity projections of three views are shown for each dataset.

Figure 4: Latent vectors and motion maps estimated by the proposed algorithm on the pre-contrast healthy subject. (a) Latent vectors of the entire dataset (right) and a zoom of the first 500 frames (left). (b) Estimated flow maps in the inhalation and exhalation phases, marked by the red and green lines in the latent-vector plots, respectively. The motion maps are obtained as a by-product of the reconstruction.

Figure 5 (animation): Robustness of the algorithm to bulk motion in the post-contrast
dataset acquired from a fibrotic subject. The left panel shows the deformed reconstructions at the time points indicated by the red line in the middle image, which shows the time profiles.
The right image shows the learned latent vectors, which drive the motion generator. Both the time
profiles and the latent vectors exhibit a sudden jump, indicating bulk motion; the reconstructed movie shows that it corresponds to a shoulder motion.

DOI: https://doi.org/10.58530/2022/0661