
High-resolution dynamic 3D UTE Lung MRI using motion-compensated manifold learning
Qing Zou1, Luis A. Torres2, Sean B. Fain1, and Mathews Jacob1
1University of Iowa, Iowa City, IA, United States, 2University of Wisconsin–Madison, Madison, WI, United States

Synopsis

UTE radial MRI methods are powerful tools for probing lung structure and function. However, the challenge in directly using this scheme for high-resolution lung imaging applications is the long breath-hold that would be needed. While self-gating approaches that bin the data into different respiratory phases are promising, they do not allow functional imaging of the lung and are often sensitive to bulk motion. The main focus of this work is to introduce a novel motion-compensated manifold learning framework for functional and structural lung imaging. The proposed scheme is robust to bulk motion and enables high-resolution lung imaging in around 4 minutes.

Purpose/Introduction

The reduced echo time in ultra-short echo time (UTE) radial 3D MRI methods significantly reduces the $$$T_2^*$$$ losses in the lung, enabling the imaging of fine structures within the lung; the non-ionizing nature of MR imaging and the ability to image lung function make this approach desirable over competing methods. The main challenge in high-resolution pulmonary applications is respiratory motion; the breath-hold duration severely restricts the achievable resolution in 3D applications. Free-breathing imaging using self-gating, which bins the radial data into different respiratory phases, has recently been shown to be a promising alternative [1]. However, a limitation of the self-gating approach is its reliance on the assumption of periodic tidal breathing, which makes it sensitive to blurring from bulk motion and unable to capture dynamic respiratory motion.

The main focus of this abstract is to introduce a novel manifold learning approach for motion-compensated dynamic lung MRI. Deep manifold learning schemes, which have conceptual similarities with self-gating approaches, have recently been introduced for motion-resolved dynamic MRI [2]. The main distinction of this work is the generalization to the motion-compensated setting. By enabling the combination of data from different motion phases, this approach can significantly improve image quality compared to self-gating and previous manifold methods. More importantly, this approach is robust to bulk motion during the scan, which is a challenge for gating-based methods. The approach recovers the dynamic time series, which enables functional lung imaging, facilitating the estimation of lung and vessel contraction and diaphragm motion.

Methods

The deformation fields at different time instants, denoted by $$$\Phi_t$$$, are assumed to lie on a smooth manifold; the fields are modeled as the output of a deep convolutional neural network (CNN) $$$\mathcal{G}_{\theta}$$$, whose weights are denoted by $$$\theta$$$, in response to low-dimensional, time-varying latent vectors: $$\Phi_t = \mathcal{G}_{\theta}(\mathbf z_t).$$ Each image in the time series $$$f_t$$$ is assumed to be a deformed version of a static image volume $$$f$$$, warped by the above motion fields (see Fig. 1) as $${f}_t = \mathcal{D}(f, \mathcal{G}_{\theta}(\mathbf{z}_t));$$ here $$$\mathcal D$$$ is an interpolation layer. The parameters of the CNN $$$\theta$$$, the image $$$f$$$, and the latent vectors $$$\mathbf{z}_t$$$ are jointly learned from the measured data of each subject in an unsupervised fashion by minimizing:
$$\mathcal{C}(\mathbf{z},\theta,{f}) = \sum_{t=1}^M||\mathcal{A}_t({f}_t)-\mathbf{b}_t||^2 + \lambda_t||\nabla_t\mathbf{z}_t||.$$
Here, $$$\mathcal{A}_t$$$ are the multichannel non-uniform FFT forward models for time point $$$t$$$, while $$$\mathbf b_t$$$ are the corresponding measurements. The second term is a smoothness penalty on the latent vectors $$$\mathbf{z}_t$$$ along the time direction, which encourages them to vary smoothly with time.
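The following PyTorch sketch is included only to make the above formulation concrete. The generator architecture and layer sizes, the real-valued representation of the template, the placeholder NUFFT forward operators `forward_ops`, the regularization weight, and the squared finite-difference smoothness surrogate are all illustrative assumptions and do not reproduce the exact implementation used in this work.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MotionGenerator(nn.Module):
    """CNN generator G_theta: maps a latent vector z_t to a 3D deformation field Phi_t."""

    def __init__(self, latent_dim=1, base_size=(4, 4, 4)):
        super().__init__()
        self.base_size = base_size
        self.fc = nn.Linear(latent_dim, 16 * base_size[0] * base_size[1] * base_size[2])
        # Transposed convolutions upsample the low-dimensional code into a dense
        # 3-channel (x, y, z) displacement field; layer sizes are illustrative only.
        self.net = nn.Sequential(
            nn.ConvTranspose3d(16, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose3d(16, 8, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose3d(8, 3, 4, stride=2, padding=1),
        )

    def forward(self, z):                                  # z: (batch, latent_dim)
        x = self.fc(z).view(-1, 16, *self.base_size)
        return self.net(x)                                 # (batch, 3, D, H, W) displacements


def deform(template, phi):
    """Interpolation layer D: warps the static template f with the field Phi_t.

    template is assumed to be a real-valued (1, 1, D, H, W) volume here; a complex
    image could instead be handled as two real channels.
    """
    b, _, d, h, w = phi.shape
    # Identity sampling grid in the normalized [-1, 1] coordinates used by grid_sample.
    zz, yy, xx = torch.meshgrid(
        torch.linspace(-1, 1, d), torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
        indexing="ij")
    identity = torch.stack((xx, yy, zz), dim=-1).to(phi.device)     # (D, H, W, 3)
    grid = identity[None] + phi.permute(0, 2, 3, 4, 1)              # add displacements
    return F.grid_sample(template.expand(b, -1, -1, -1, -1), grid, align_corners=True)


def cost(z, template, generator, forward_ops, kdata, lam=1e-2):
    """Cost C(z, theta, f): multichannel data consistency plus temporal smoothness on z.

    forward_ops[t] is assumed to be a callable implementing the NUFFT forward model
    A_t for the spokes of frame t; kdata[t] holds the corresponding measurements b_t.
    """
    loss = torch.zeros(())
    for t, (A_t, b_t) in enumerate(zip(forward_ops, kdata)):
        f_t = deform(template, generator(z[t:t + 1]))               # f_t = D(f, G_theta(z_t))
        loss = loss + ((A_t(f_t) - b_t).abs() ** 2).sum()           # ||A_t(f_t) - b_t||^2
    # Squared finite differences as a simple surrogate for the temporal smoothness penalty.
    loss = loss + lam * (z[1:] - z[:-1]).pow(2).sum()
    return loss
```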

After learning, the image volume at each time instant is recovered by deforming the static volume $$$f$$$ with the motion field generated by the CNN from the corresponding latent vector.
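Reusing the hypothetical `generator`, `deform`, and learned variables from the sketch above, this post-learning recovery amounts to one forward pass of the generator and one warp per frame:

```python
# Hypothetical post-learning recovery: regenerate the dynamic series frame by frame
# by warping the learned static template with the CNN-generated motion fields.
with torch.no_grad():
    frames = [deform(template, generator(z[t:t + 1])) for t in range(z.shape[0])]
    movie = torch.cat(frames, dim=0)    # (M, 1, D, H, W) motion-resolved volumes
```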

Results

The data were acquired on a GE 3T scanner with a 3D radial UTE sequence using variable-density readouts, which oversample the center of k-space, bit-reversed view ordering, and 32 channels. Pre- and post-contrast datasets were acquired from a healthy volunteer and a patient with fibrosis. The data were acquired with 90K radial spokes with TR ≈ 3.2 ms, corresponding to an approximately five-minute acquisition. In Fig. 2, we compare the image quality of the reconstructions from different approaches on the pre-contrast data; we also include the high-SNR post-contrast reconstruction obtained with the proposed scheme. We observe that the proposed scheme offers higher-SNR reconstructions that are less blurred than those of competing methods on the pre-contrast data. In Fig. 3, we show the motion-compensated reconstructions of the post-contrast datasets from both the healthy and the fibrotic subject. In Fig. 4, we show sample motion fields $$$\Phi_t$$$ for specific time instances, which are obtained as a by-product of the reconstruction. We noticed that the fibrotic subject moved the shoulder during the post-contrast scan; this bulk motion is captured by the latent vectors of the proposed scheme, as illustrated in Fig. 5, and is visible in the reconstructed movie.

Conclusion

We introduced an unsupervised motion-compensated manifold reconstruction scheme for free-breathing pulmonary MRI from highly undersampled measurements. The proposed scheme is observed to offer improved SNR and reduced blurring compared to competing methods. In addition, the proposed approach is observed to be robust to bulk motion during the scan.

Acknowledgements

This work is supported by NIH under Grants R01EB019961, R01AG067078-01A1 and R01HL126771.

References

[1] L. Feng et al., XD-GRASP: Golden-angle radial MRI with reconstruction of extra motion-state dimensions using compressed sensing, MRM, 75(2):775-788, 2016.

[2] Q. Zou et al., Dynamic imaging using a deep generative SToRM (Gen-SToRM) model, IEEE-TMI, 40(11):3102-3112, 2021.

[3] X. Zhu et al., Iterative motion-compensation reconstruction ultra-short TE (iMoCo UTE) for high-resolution free-breathing pulmonary MRI, MRM, 83:1208–1221, 2020.

Figures

Figure 1: Proposed motion-compensated reconstruction jointly learns the motion vectors $$$\Phi_t$$$ and the static image template $$$f$$$ from the k-t space data. To regularize the motion fields, we model $$$\Phi_t = \mathcal G_{\theta}(\mathbf{z}_t)$$$ as the outputs of a deep CNN generator $$$\mathcal G_{\theta}$$$ whose weights are denoted by $$$\theta$$$, driven by low-dimensional (e.g., 1-D in lung MRI) latent vectors. The parameters of the CNN generator $$$\theta$$$, the latent vectors $$$\mathbf{z}_t$$$, and the template $$$f$$$ are jointly estimated from the data.

Figure 2: Comparison of the image quality of the reconstructions. We compare the reconstruction using the proposed method with the reconstructions using iMoCo [3] and XD-GRASP [1] on the pre-contrast dataset. We also show the post-contrast reconstruction, which has higher SNR and hence may serve as a reference.

Figure 3: Demonstration of the proposed scheme on the post-contrast data from both the healthy subject and the fibrotic subject. Maximum intensity projections of three views are shown for each dataset.

Figure 4: Latent vectors and motion maps estimated using the proposed algorithm on the pre-contrast healthy subject. (a) Latent vectors of the entire dataset (right) and a zoom of the first 500 frames (left). (b) Estimated flow maps in the inhalation and exhalation phases, marked by the red and green lines in the latent vector plots, respectively. The motion maps are obtained as a by-product of the reconstruction.

Figure 5 (animation): Robustness of the algorithm to bulk motion in the post-contrast dataset acquired from a fibrotic subject. The left panel shows the deformed reconstructions at the time points indicated by the red line in the middle image. The middle image shows the time profiles. The right image shows the learned latent vectors, which drive the motion generator. Both the time profile and the latent vectors exhibit a sudden jump, indicating bulk motion; this corresponds to a shoulder motion seen in the reconstructed movie.

Proc. Intl. Soc. Mag. Reson. Med. 30 (2022)
0661
DOI: https://doi.org/10.58530/2022/0661