4531

MR Fingerprinting SChedule Optimization NEtwork (MRF-SCONE)
Ouri Cohen1

1Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States

Synopsis

MR Fingerprinting schedule optimization can reduce scan times and improve accuracy, but it typically minimizes indirect metrics rather than the actual reconstruction error, because computing the reconstruction error at each iteration of the optimization is computationally prohibitive. Here we introduce a Deep Learning framework that overcomes these challenges and allows direct minimization of the reconstruction error. A proof-of-principle is demonstrated using simulations on a numerical brain phantom.

Introduction

Optimization of the MR fingerprinting (MRF) acquisition schedule can reduce scan times and improve accuracy. Schedules are typically optimized by minimizing a metric based on a dictionary generated from a representative set of tissue parameters. Examples of optimization metrics include the discrimination between tissue types [1] and the SNR efficiency [2]. However, the reconstruction error, which is the actual quantity of interest, is not directly minimized because reconstructing dictionary data for each candidate schedule is impractical, particularly for multi-parametric dictionaries. In previous work [3] we demonstrated the feasibility of reconstructing MRF data using a neural network trained on a sparse training set. Here we exploit this property to propose a fast, scalable Deep Learning schedule optimization framework that enables minimization of the actual reconstruction error, even for large dictionaries or many acquisition parameters.

Methods

An overview of the proposed approach is shown in Figure 1. A set of 3000 schedules was sampled from the acquisition parameter space (flip angle and repetition time in this work). For each schedule, a sparse set of 500 randomly selected tissue parameter combinations was used to create a dictionary of signal magnetizations using the Extended Phase Graph formalism [4]. This dictionary was used to train a neural network as previously described [3]. An additional, distinct 500-entry dictionary was also generated and served as the test data for the error calculation. The cost associated with each schedule was calculated from the reconstructed parameter maps as the root mean-square error (RMSE), although other suitable metrics may be used as well. The costs obtained were then used to train a second neural network (Figure 2) that learned a functional mapping between the schedules and the associated reconstruction error. Once trained, the network's feedforward process outputs the error for new schedules in milliseconds. This enables improved exploration of the schedule search-space when used in conjunction with an optimization solver. A proof-of-principle optimization was tested on a numerical brain phantom [5]. Network training was conducted on an NVIDIA P40 GPU (NVIDIA Inc., Santa Clara, CA) with 24 GB of memory. To optimize the schedule we used MATLAB's fmincon() function (MathWorks, Natick, MA) with the trained network as the objective function. The optimizer was initialized with a random acquisition schedule and allowed to run to convergence. To test the optimization, a simulated acquisition with an MRF-EPI pulse sequence [1] using the initial and optimized schedules was conducted with varying levels of white Gaussian noise injected into the data, corresponding to SNRs of 10-40 dB. The error in the reconstructed tissue maps obtained with each schedule was quantified for each SNR level.
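The cost-surrogate idea described above can be sketched in a few lines. The sketch below abstracts away the EPG simulation and the reconstruction network; the two-layer surrogate (its sizes, weights, and the flip-angle/TR ranges) is purely illustrative and is not the trained network from this work. It only demonstrates why a feedforward surrogate makes scoring thousands of candidate schedules cheap.

```python
import numpy as np

def rmse(true_maps, recon_maps):
    """Root mean-square error between true and reconstructed tissue maps."""
    true_maps = np.asarray(true_maps, dtype=float)
    recon_maps = np.asarray(recon_maps, dtype=float)
    return float(np.sqrt(np.mean((true_maps - recon_maps) ** 2)))

rng = np.random.default_rng(0)
N = 50  # schedule length; a schedule = N flip angles + N repetition times

class CostSurrogate:
    """Tiny MLP stand-in for the trained cost network: schedule -> predicted RMSE.
    Weights here are random for illustration; in the abstract this network is
    trained on (schedule, RMSE) pairs from the reconstruction step."""
    def __init__(self, n_in, n_hidden=64):
        self.W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.1, size=(n_hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, schedule):
        h = np.tanh(schedule @ self.W1 + self.b1)
        return float(np.abs(h @ self.W2 + self.b2))  # predicted error (non-negative)

surrogate = CostSurrogate(2 * N)

def random_schedule():
    # Concatenated flip-angle (deg) and TR (ms) trains; ranges are illustrative.
    return np.concatenate([rng.uniform(5, 70, N), rng.uniform(10, 20, N)])

# Each feedforward evaluation takes microseconds, so a solver can score
# thousands of candidate schedules without any dictionary reconstruction.
costs = [surrogate(random_schedule()) for _ in range(1000)]
```

Any gradient-based or derivative-free solver (fmincon in this work) can then treat the surrogate call as its objective function.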

Results

Optimization of the N=50 length schedule entailed ~40,000 objective function evaluations, which required only ~30 seconds with the trained network. The initial and optimized schedules are shown in Figure 3, and the resulting reconstruction error for the various SNR levels is shown in Figure 4. The optimized schedule yielded lower error for both T1 and T2 across all SNR levels, with a 4% shorter scan time.

Discussion/Conclusion

Conventional dictionary-based optimization suffers from exponential growth of the dictionary as the number of tissue parameters increases, unlike the approach proposed here. By leveraging the compact representation of neural networks, schedules with many acquisition parameters and/or tissue maps [6] can be optimized. Although demonstrated using a supervised learning framework, our approach is also suitable for a reinforcement learning framework, which is the focus of ongoing research.
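The exponential growth mentioned above can be made concrete: for an evenly gridded parameter space, the dictionary size is the product of the per-parameter grid sizes. The grid sizes in the toy count below are illustrative only, not those used in this work.

```python
# Dictionary entries for a gridded parameter space: the count is the product
# of the per-parameter grid sizes, so it grows exponentially with the number
# of tissue parameters. Grid sizes are illustrative only.
def dictionary_size(grid):
    size = 1
    for n in grid.values():
        size *= n
    return size

two_param = {"T1": 100, "T2": 100}
four_param = {"T1": 100, "T2": 100, "B1": 20, "B0": 20}

print(dictionary_size(two_param))   # 10,000 entries
print(dictionary_size(four_param))  # 4,000,000 entries
```

Adding just two more mapped parameters inflates the dictionary 400-fold here, which is why reconstructing full dictionaries inside an optimization loop quickly becomes infeasible.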

Acknowledgements

Memorial Sloan Kettering Cancer Center

References

[1] O. Cohen and M. S. Rosen, “Algorithm comparison for schedule optimization in MR fingerprinting,” Magn. Reson. Imaging, 2017.

[2] B. Zhao, J. P. Haldar, K. Setsompop, and L. L. Wald, “Optimal experiment design for magnetic resonance fingerprinting,” in Proc. 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2016, pp. 453–456.

[3] O. Cohen, B. Zhu, and M. S. Rosen, “MR fingerprinting deep reconstruction network (DRONE),” Magn. Reson. Med., 2018.

[4] M. Weigel, “Extended phase graphs: Dephasing, RF pulses, and echoes-pure and simple,” J. Magn. Reson. Imaging, vol. 41, no. 2, pp. 266–295, 2015.

[5] D. L. Collins et al., “Design and construction of a realistic digital brain phantom,” IEEE Trans. Med. Imaging, vol. 17, no. 3, pp. 463–468, 1998.

[6] O. Cohen, S. Huang, M. T. McMahon, M. S. Rosen, and C. T. Farrar, “Rapid and quantitative chemical exchange saturation transfer (CEST) imaging with magnetic resonance fingerprinting (MRF),” Magn. Reson. Med., 2017.


Figures

Figure 1: Overview of the proposed optimization scheme. For each randomly selected schedule a dataset is generated and used to train a neural network. A second dataset is used as test data for the trained network. The cost is defined as the error between the true and reconstructed tissue values using any suitable error metric.

Figure 2: The cost function network. The errors (cost) obtained in the initial step are used to train a second network that maps between the schedule space and the reconstruction error to allow an efficient search of that space.

Figure 3: Initial (random) and optimized FA and TR schedules obtained with the optimization network.

Figure 4: Percent error for the reconstructed T1 and T2 maps as a function of SNR for the initial (blue) and optimized (red) schedules. The optimized schedule resulted in lower error across all SNR levels despite the shorter scan time; the error could be reduced further with improved training.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)