
Estimation of B1 Map from a Single MR Image Using a Self-Attention Deep Neural Network
Yan Wu1, Yajun Ma2, Jiang Du2, and Lei Xing1
1Radiation Oncology, Stanford University, Stanford, CA, United States, 2Radiology, University of California San Diego, La Jolla, CA, United States

Synopsis

Inhomogeneity of the radiofrequency field (B1) is one of the main problems in quantitative MRI. Leveraging the unique capability of deep learning, we propose a data-driven strategy to derive a quantitative B1 map from a single qualitative MR image, without specific requirements on the weighting of the input image. B1 estimation is accomplished using a self-attention deep convolutional neural network, which makes efficient use of both local and non-local information. Without additional data acquisition, an accurate B1 map is obtained, which is useful for compensating field inhomogeneity in T1 mapping as well as for other applications.

INTRODUCTION

In quantitative MRI, inhomogeneity of the radiofrequency field (B1) is one of the main sources of error. Measuring the B1 map effectively compensates for field inhomogeneity; however, it requires additional scan time. In this study, we propose a deep learning-based strategy to estimate the B1 map from a single qualitative MR image, without specific requirements on the weighting of the input image. In this way, B1 can be estimated from an MR image acquired in routine clinical practice or biomedical research, without extra data acquisition. This lays a solid foundation for the derivation of quantitative T1 maps as well as for other applications. In this study, the method is validated mainly in cartilage MRI.

METHODS

To provide an end-to-end mapping from a single MR image to the corresponding B1 map, a convolutional neural network is employed. In training, the inputs are single images with a specific weighting (T1, T1r, or T2) acquired using an ultrashort echo time sequence 1-3, and the ground truth B1 maps are obtained using the widely adopted actual flip angle (AFI) method 4. The difference between the predicted map and the ground truth is backpropagated, and the network parameters are updated using the Adam optimizer. This iterative procedure continues until convergence is reached, as illustrated in Figure 1. For a test image acquired using the same imaging protocol, the B1 map is automatically generated by the established network model.
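The abstract specifies only the Adam optimizer and the AFI ground truth; the following is a minimal sketch of this training procedure, assuming PyTorch, an L1 loss, a default learning rate, and hypothetical `model` and `loader` objects. The ground-truth flip-angle computation follows the standard AFI relation of reference 4.

```python
import torch
import torch.nn as nn


def afi_b1_map(s1, s2, n, alpha_nominal):
    """Ground-truth B1 map from the actual flip angle (AFI) method (ref. 4).

    s1 and s2 are the steady-state signals of the two interleaved TRs with
    TR2 = n * TR1. The actual flip angle is
        alpha = arccos((r * n - 1) / (n - r)),  with r = s2 / s1,
    and B1 is the ratio of the actual to the nominal flip angle.
    """
    r = s2 / s1
    cos_alpha = torch.clamp((r * n - 1.0) / (n - r), -1.0, 1.0)
    return torch.acos(cos_alpha) / alpha_nominal


def train(model, loader, epochs=100, lr=1e-4):
    """Train the image-to-B1 regression network; loss and learning rate are assumptions."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.L1Loss()
    for _ in range(epochs):
        for image, b1_map in loader:        # single weighted image -> ground-truth B1 map
            pred = model(image)             # end-to-end prediction of the B1 map
            loss = criterion(pred, b1_map)  # difference to the AFI ground truth
            optimizer.zero_grad()
            loss.backward()                 # backpropagate the error
            optimizer.step()                # Adam update of the network parameters
    return model
```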
A special convolutional neural network is constructed for B1 estimation 5. The network has a hierarchical architecture composed of an encoder and a decoder, which enables feature extraction at multiple scales while enlarging the receptive field. A unique shortcut pattern is designed: global shortcuts (connecting the encoder path and the decoder path) compensate for details lost in down-sampling, and local shortcuts (forwarding the input of a hierarchical level within a single path to all subsequent convolutional blocks) facilitate residual learning.

A self-attention mechanism is incorporated into the network to make efficient use of non-local information 6-8. In self-attention, direct interactions are established between all voxels within a given image, and more attention is focused on regions that contain similar spatial information. A self-attention layer is integrated into every convolutional block, where the self-attention map is derived by attending to all positions in the feature map produced by the previous convolutional layer. The value at a given position of the attention map is determined by two factors: the relevance between the signal at the current position and that at every other position, defined by an embedded Gaussian function; and a representation of the feature value at the other position, given by a linear function. The corresponding weight matrices are learned during training. The proposed network is shown in Figure 2; a sketch of such an attention layer is given below.

Separate deep neural networks are trained for B1 estimation, each taking single input images with a specific weighting (T1, T1r, or T2). A total of 1,224 slice images from 51 subjects (including healthy volunteers and patients) are used for model training, and 120 images from 5 additional subjects are used for model testing.
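As a concrete illustration of the embedded-Gaussian self-attention layer described above, the following is a minimal 2D sketch in PyTorch. The 1x1 convolutional embeddings, the channel-reduction factor, and the learnable residual weight `gamma` are assumptions not specified in the abstract.

```python
import torch
import torch.nn as nn


class SelfAttention2d(nn.Module):
    """Embedded-Gaussian self-attention over all positions of a feature map.

    For each position i, the output aggregates all positions j:
        y_i = sum_j softmax_j( theta(x_i)^T phi(x_j) ) * g(x_j)
    where theta/phi define the relevance (embedded Gaussian) and g is the
    linear representation of the feature value; a residual connection follows.
    """

    def __init__(self, channels, reduction=8):
        super().__init__()
        inter = max(channels // reduction, 1)
        self.theta = nn.Conv2d(channels, inter, kernel_size=1)  # query embedding
        self.phi = nn.Conv2d(channels, inter, kernel_size=1)    # key embedding
        self.g = nn.Conv2d(channels, inter, kernel_size=1)      # value (linear function)
        self.out = nn.Conv2d(inter, channels, kernel_size=1)    # restore channel dimension
        self.gamma = nn.Parameter(torch.zeros(1))                # learnable residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2)                          # (b, c', h*w)
        k = self.phi(x).flatten(2)                            # (b, c', h*w)
        v = self.g(x).flatten(2)                              # (b, c', h*w)
        attn = torch.softmax(q.transpose(1, 2) @ k, dim=-1)   # (b, hw, hw) relevance map
        y = (v @ attn.transpose(1, 2)).view(b, -1, h, w)      # aggregate over all positions
        return x + self.gamma * self.out(y)                   # residual connection
```

In the proposed network, a layer of this kind is integrated into every convolutional block, so each position can draw on information from the entire feature map rather than only its local neighborhood.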

RESULTS

Using the established models, B1 maps are predicted from test images with various weightings. Figure 3 shows a representative case. The B1 maps estimated from the different input images all show high fidelity to the ground truth map displayed in the leftmost column. Quantitative evaluation results are summarized in Figure 4.

DISCUSSION

In conventional quantitative MRI approaches and MR fingerprinting, B1 maps have previously been estimated from variable-contrast images without extra data acquisition. However, to our knowledge, this is the first time that a B1 map has been estimated from a single MR image. This is accomplished by using deep learning to exploit the relationship between the B1 map and the MR image, which is inherently caused by the electrodynamic interaction between the transmitted radiofrequency field and the patient anatomy. Estimation of the B1 map is practically useful: it not only lays a solid foundation for accurate T1 mapping, but also provides information for other applications (e.g., the derivation of electrical properties tomography).

CONCLUSION

We present a deep learning-based strategy for the estimation of the B1 map. Using a properly trained deep learning model, the B1 map can be predicted from a single MR image with high accuracy.

Acknowledgements

This research is partially supported by NIH/NCI (1R01 CA176553), NIH/NIAMS (1R01 AR068987), NIH/NINDS (1R01 NS092650).

References

1. Y. J. Ma, W. Zhao, L. Wan, T. Guo, A. Searleman, H. Jang, et al., "Whole knee joint T1 values measured in vivo at 3T by combined 3D ultrashort echo time cones actual flip angle and variable flip angle methods," Magnetic Resonance in Medicine, vol. 81, pp. 1634-1644, 2019.

2. Y. J. Ma, M. Carl, A. Searleman, X. Lu, E. Chang, and J. Du, "3D adiabatic prepared ultrashort echo time cones sequence for whole knee imaging," Magnetic Resonance in Medicine, vol. 80, pp. 1429-1439, 2018.

3. J. Du, E. Diaz, M. Carl, W. Bae, C. B. Chung, and G. M. Bydder, "Ultrashort echo time imaging with bicomponent analysis," Magnetic Resonance in Medicine, vol. 67, pp. 645-649, 2012.

4. V. L. Yarnykh, "Actual flip-angle imaging in the pulsed steady state: A method for rapid three-dimensional mapping of the transmitted radiofrequency field," Magnetic Resonance in Medicine, vol. 57, pp. 192-200, 2007.

5. Y. Wu, Y. Ma, D. P. Capaldi, J. Liu, W. Zhao, J. Du, et al., "Incorporating prior knowledge via volumetric deep residual network to optimize the reconstruction of sparsely sampled MRI," Magnetic Resonance Imaging, 2019.

6. Y. Wu, Y. Ma, J. Liu, W. Zhao, J. Du, et al., "Self-attention convolutional neural network for improved MR image reconstruction," Information Sciences, vol. 490, pp. 317-328, 2019.

7. A. Vaswani, N. Shazeer, N. Parmar, et al., "Attention is all you need," in Advances in Neural Information Processing Systems, 2017.

8. H. Zhang, et al., "Self-attention generative adversarial networks," arXiv preprint arXiv:1805.08318, 2018.

Figures

Figure 1. The proposed strategy for B1 mapping. In training the deep neural network, a single image is used as the input, and the predicted B1 map approaches the ground truth map measured using a conventional actual flip angle approach.

Figure 2. The proposed self-attention convolutional neural network. (a) Hierarchical architecture of the deep convolutional neural network, which is composed of an encoder and a decoder with global and local shortcut connections established. (b) Composition of a convolutional block, in which a self-attention layer is integrated to make efficient use of non-local information. The attention value is determined by the relevance between the current voxel and another voxel as well as the feature value at another position.

Figure 3. Estimation of the B1 map from single qualitative images (with specific T1, T1r, or T2 weighting). The B1 maps predicted from the different input images all show high fidelity to the ground truth map displayed in the leftmost column.

Figure 4. Quantitative evaluation metrics for B1 estimation.

Proc. Intl. Soc. Mag. Reson. Med. 28 (2020)
3425