Richard James Adams^{1}, Yong Chen^{1}, Pew-Thian Yap^{2}, and Dan Ma^{1}

^{1}Case Western Reserve University, Cleveland, OH, United States, ^{2}University of North Carolina, Chapel Hill, NC, United States

Magnetic resonance fingerprinting (MRF) simultaneously quantifies multiple tissue properties. Deep learning accelerates MRF's tissue mapping time; however, previous deep learning MRF models are supervised, requiring ground-truth tissue property maps. Acquiring quality reference maps is challenging, especially as the number of tissue parameters increases. We propose a self-supervised model informed by the physical model of the MRF acquisition that requires no ground-truth references. Additionally, we construct a forward model that directly estimates the gradients of the Bloch equations. This approach is flexible enough to model MRF sequences with pseudo-randomized sequence designs for which no analytical model is available.

T1, T2, and proton density (M0) quantitative parameter maps are obtained from multi-slice 2D MRF scans of five healthy subjects. A total of 58 2D brain slices are used to simulate the time evolutions of an MRF-FISP scan with 480 time points via the Bloch equations. The flip angles and TRs are based on an automated sequence design.
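To illustrate how per-pixel time evolutions can be generated from a flip-angle/TR schedule, the sketch below uses a perfectly spoiled gradient-echo approximation of the Bloch equations. This is a deliberate simplification of a true FISP simulation (it ignores slice profile and transverse coherence pathways), and all names, timing values, and the TE parameter are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def simulate_signal(t1, t2, m0, flip_angles, trs, te=2.0):
    """Toy single-pixel simulation: a perfectly spoiled gradient-echo
    approximation of the Bloch equations (a simplification of a true
    FISP simulation). Times are in ms; flip angles are in radians."""
    mz = m0  # longitudinal magnetization starts at equilibrium
    signal = np.zeros(len(flip_angles))
    for i, (fa, tr) in enumerate(zip(flip_angles, trs)):
        # excitation tips part of Mz into the transverse plane; record the echo after T2 decay
        signal[i] = mz * np.sin(fa) * np.exp(-te / t2)
        # longitudinal recovery toward M0 during the remainder of the TR
        mz = m0 + (mz * np.cos(fa) - m0) * np.exp(-tr / t1)
    return signal
```

Simulating a pseudo-randomized 480-point schedule for two tissues with different T1/T2 yields distinct fingerprints, which is the property the dictionary-free matching relies on.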

The self-supervised model consists of two halves: an inverse model, which takes a simulated signal and predicts T1, T2, and M0 tissue parameters, and a forward model, which uses a physical model to reconstruct the signal's time evolution from the predicted tissue parameters. The model therefore both accepts and outputs time-domain signal data, so an L1-norm loss can be computed by direct comparison of the two, without any ground truth of tissue parameters. This loss is analytically back-propagated through the forward model via the chain rule to obtain the gradient dLoss/dParameter. PyTorch's Autograd feature then updates the inverse network's convolutional weights and biases based on the computed gradient. This network architecture is summarized in Figure 1.

The inverse model is a convolutional u-net that accepts a number of input channels equal to the number of time points in the signal evolution and outputs a number of channels corresponding to the tissue parameters of interest. Because neighboring pixels are convolved together, each pixel's predicted parameter values are related to its neighbors' values, imposing a spatial constraint on the inverse model's prediction.

Because no analytical signal model exists for MRF scans, the forward model is the same Bloch model used to simulate the original dataset, with partial derivatives computed with respect to T1, T2, and M0 for the inverse model's learning. Including this physical model reinforces the learning of the inverse model such that only tissue parameters close to the ground truths can be learned, as MRF has previously demonstrated that unique combinations of tissue parameters produce unique signal evolutions.
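The back-propagation step described above can be sketched numerically: the L1 loss gradient with respect to the signal is chained with the forward model's partial derivatives with respect to each tissue parameter. In this minimal sketch the Bloch-equation partials are approximated by central finite differences rather than the authors' analytic gradients, and the compact spoiled-gradient-echo forward model is an assumed stand-in for the full Bloch simulation; all names are illustrative.

```python
import numpy as np

def forward(t1, t2, m0, fas, trs, te=2.0):
    # compact perfectly-spoiled stand-in for the Bloch forward model (one pixel)
    mz, sig = m0, np.zeros(len(fas))
    for i, (fa, tr) in enumerate(zip(fas, trs)):
        sig[i] = mz * np.sin(fa) * np.exp(-te / t2)
        mz = m0 + (mz * np.cos(fa) - m0) * np.exp(-tr / t1)
    return sig

def l1_grad_wrt_params(params, target, fas, trs, eps=1e-4):
    """Chain rule through the forward model:
    dLoss/d(theta_k) = sum_t dLoss/dS_t * dS_t/d(theta_k).
    dS/d(theta) is computed by central finite differences here, standing in
    for the analytic Bloch-equation partial derivatives."""
    dl_ds = np.sign(forward(*params, fas, trs) - target)  # gradient of L1 loss w.r.t. signal
    grads = np.zeros(len(params))
    for k in range(len(params)):
        h = eps * max(1.0, abs(params[k]))                # parameter-scaled step
        hi, lo = list(params), list(params)
        hi[k] += h
        lo[k] -= h
        ds_dp = (forward(*hi, fas, trs) - forward(*lo, fas, trs)) / (2.0 * h)
        grads[k] = np.sum(dl_ds * ds_dp)                  # chain rule over time points
    return grads
```

In the full model, dLoss/dParameter computed this way is handed to PyTorch's Autograd, which continues the chain rule through the u-net's convolutional weights.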

46 slices from four subjects are designated as the training dataset, while 12 slices from a fifth subject are reserved for validation. Training data are fed through the model iteratively in batches of 32 patches, each patch 8x8 pixels; validation data are processed as entire 256x256-pixel slices. The inverse network is trained with an L1-norm loss function and the ADAM optimizer for 1 epoch at a constant learning rate of 10
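The patch-based batching above can be sketched as follows. This is a minimal illustration of drawing 8x8 spatial patches in batches of 32 from multi-slice signal data; the function name, tiling scheme, and array layout are assumptions for illustration, not the authors' code.

```python
import numpy as np

def patch_batches(volume, patch=8, batch=32, seed=0):
    """Yield training batches of spatial patches from simulated MRF data.
    volume: (n_slices, T, H, W) array of per-pixel time courses. Each batch
    has shape (batch, T, patch, patch), matching a u-net whose input
    channels equal the number of time points."""
    rng = np.random.default_rng(seed)
    n_slices, n_t, h, w = volume.shape
    # every top-left corner of a non-overlapping patch, over all slices
    corners = [(s, i, j) for s in range(n_slices)
               for i in range(0, h - patch + 1, patch)
               for j in range(0, w - patch + 1, patch)]
    rng.shuffle(corners)  # randomize patch order across slices each epoch
    for b in range(0, len(corners) - batch + 1, batch):
        yield np.stack([volume[s, :, i:i + patch, j:j + patch]
                        for s, i, j in corners[b:b + batch]])
```

At validation time, the fully convolutional u-net can instead be applied to a whole 256x256 slice, since convolutions are agnostic to spatial extent.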

1. Ma D, Gulani V, Seiberlich N, et al. Magnetic resonance fingerprinting. Nature. 2013;495(7440):187-192. doi:10.1038/nature11971

2. Fang Z, Chen Y, Liu M, et al. Deep learning for fast and spatially constrained tissue quantification from highly accelerated data in magnetic resonance fingerprinting. IEEE Trans Med Imaging. 2019;38(10):2364-2374.

3. Liu F, Kijowski R, El Fakhri G, Feng L. Magnetic resonance parameter mapping using model-guided self-supervised deep learning. Magn Reson Med. 2021;85:3211-3226.

4. Jordan S, Hu S, et al. Automated design of pulse sequences for magnetic resonance fingerprinting using physics-inspired optimization. Proc Natl Acad Sci USA. 2021;118(40):e2020516118.

A depiction of the model architecture. MRF time courses are accepted as input, which are processed into tissue parameters via a convolutional u-net before being reconstructed back into the signal domain via the forward model. The loss between input and output is back-propagated through the gradient of the forward model with respect to the tissue parameters.

Comparison of network-predicted tissue parameter values (A, D, G) to the ground truths used to simulate the signal data (B, E, H), with the percent error between the two (C, F, I). T1 performs well across the slice except for localized areas around the central ventricle, where some pixels have high T1 values related to partial voluming of CSF and white matter. T2 and proton density (M0, in arbitrary units [au]) similarly perform well.

DOI: https://doi.org/10.58530/2022/3472