Kavitha Manickam1 and Jaganathan Vellagoundar1
1GE Healthcare, Bangalore, India
Synopsis
In MRI, automatic estimation of the main (B0) and RF (B1) field maps from scanned images can support daily quality assurance and field-corrected reconstruction. In this paper, a novel deep-learning-based approach is presented to estimate B0 and B1 maps from scanned images. A modified stacked convolutional encoder network with random skip connections is constructed. Two separate networks are used to estimate the B0 and B1 maps individually. The networks are trained and tested with phantom images. The results show that the estimated maps are comparable to the actual field maps. Automatic map estimation based on deep learning is a first step towards achieving daily quality assurance and field correction from the regular scanned images.
Introduction
In MRI, poor B0 and B1 field homogeneity degrades the quality of the scanned images. Field inhomogeneity leads to artifacts, signal loss and undesired effects on fat suppression1. For field-corrected MR reconstruction, an accurate estimate of the field maps must be available. To assess undesired field drifts in MR systems, periodic quality assurance is performed by scanning a specific phantom with a specific set of protocols2. Generally, this procedure is done on a daily basis for proper quality assurance. However, for better clinical use, it is desirable to have a continuous and automatic way of assessing quality from the scanned images on the fly. As a first step in that direction, we propose a deep learning (DL) based approach to extract the B0 and B1 maps from the scanned images. The maps can then be used to assess the field drifts of the MR system. In addition, the maps can also be used for field correction in MR images. This work is a preliminary study based on phantom images and a few pulse sequences. The proposed method can be extended further with clinical images and more pulse sequences.
Methods
In MRI, the acquired image Y is modeled as3,
$$Y = B \cdot X + N$$
where B is the intensity inhomogeneity field, X is the inhomogeneity-free image and N is additive noise (Rician distributed). The field B and the image X are multiplicative terms. For this study, the sources of field intensity inhomogeneity are assumed to be the static (main) field inhomogeneity and the RF field inhomogeneity, respectively. The objective of this work is to estimate the field map B given the acquired image Y. This is achieved using a DL network trained in a supervised fashion with previously acquired input images Y and the corresponding field maps B as targets. Two instances of a modified stacked convolutional encoder4 with random skip connections are constructed and trained separately for the B0 and B1 maps. For training, T1 FSE and T2 FSE images are taken as the input images, and the corresponding B0 and B1 maps are taken as the outputs/targets. The image location and the corresponding map location were kept the same. Six phantoms were considered for this study. For each phantom, 80 image slices and the corresponding field maps were acquired using a 1.5T MRI scanner (Optima, GE Healthcare). The slice thickness was kept at 10 mm. Figure 1 shows the scanned images and the corresponding field maps of the body TLT phantom, which was used as the test set. For training the network, only the phantom region was retained (cropped) in the B0 and B1 maps, to remove the noise outside the phantom region. In addition, a total of 8256 T1, T2, B0 and B1 maps were simulated using MATLAB and used for training.
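As a rough illustration of the multiplicative forward model and of how simulated training pairs can be generated, a minimal NumPy sketch is given below. The quadratic bias field, the disc phantom and the Gaussian noise stand-in for the Rician model are all hypothetical choices for illustration, not the MATLAB simulation used in this work:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_pair(size=64, noise_sigma=0.02):
    """Generate one (Y, B) training pair under the model Y = B * X + N.

    B: smooth multiplicative inhomogeneity field (low-order polynomial here).
    X: inhomogeneity-free image (a simple uniform disc phantom here).
    N: additive noise (Gaussian stand-in for the Rician model).
    """
    y, x = np.mgrid[0:size, 0:size] / (size - 1)  # coordinates in [0, 1]

    # Smooth bias field: gentle low-order variation around 1.0
    B = 1.0 + 0.3 * (x - 0.5) ** 2 - 0.2 * (y - 0.5)

    # Inhomogeneity-free image: uniform disc phantom
    X = (((x - 0.5) ** 2 + (y - 0.5) ** 2) < 0.16).astype(float)

    N = noise_sigma * rng.standard_normal((size, size))
    Y = B * X + N          # acquired image (network input)
    return Y, B            # input and supervised target field map

Y, B = simulate_pair()
```

In the setup described above, the acquired image Y is the network input and the corresponding field map B is the supervised target, with the target map cropped to the phantom region before training.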
Results
The total number of images used in training is 16,632, including both phantom scans and simulated maps. The database was split into training (80%) and test (20%) sets, with final training losses of 0.216 and 0.3996, respectively. Training was done on an NVIDIA 1080Ti workstation in 172 minutes; inference took 5 ms. Figures 2 and 3 show (cropped to the phantom ROI) an example of one of the test images of the body TLT phantom, comparing the output of the trained stacked autoencoder with the ground truth. The use of random skip connections in the stacked autoencoder improved the quality of the predicted maps. We calculated the SSIM (structural similarity index measure) to compare mean image quality over the test dataset; the average values for the Head TLT phantom are 0.6 for the B0 map and 0.9 for the B1 map, which shows that the predicted maps are on par with the ground truth. This preliminary study shows that the DL network can be used to identify system imperfections for daily QA or for post-processing correction.
Discussion
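For reference, a simplified whole-image form of the SSIM used for comparing predicted and ground-truth maps can be computed as below. This global (single-window) version with the standard constants and a data range of 1.0 is an illustrative sketch, not necessarily the exact windowed implementation used in this study:

```python
import numpy as np

def global_ssim(a, b, data_range=1.0):
    """Simplified whole-image SSIM between two maps a and b."""
    c1 = (0.01 * data_range) ** 2   # stabilizing constants from the
    c2 = (0.03 * data_range) ** 2   # standard SSIM definition
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)
    )

truth = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)
print(global_ssim(truth, truth))  # identical maps -> 1.0
```

An SSIM of 1.0 indicates identical maps; lower values reflect luminance, contrast and structural differences between the predicted and ground-truth maps.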
Visual comparison of the maps shown in Figures 2 and 3, together with the calculated SSIM values, shows that the estimated maps match the actual maps within the phantom region. A slight blurring is observed in the estimated images. The DL network needs to be trained with more images and tuned further to overcome this. Further, clinical images need to be acquired and used to train the network for automatic DQA and field-correction applications.
Conclusion
We presented a novel deep-learning-based approach to estimate B0 and B1 maps from scanned images. This is a preliminary study towards using the estimated B0 and B1 maps for an automated DQA workflow and field correction in MR images.
Acknowledgements
No acknowledgement found.
References
1. C. Barnet, J. Tsao, and K. P. Pruessmann. Efficient iterative reconstruction for MRI in strongly inhomogeneous B0. Proc. Intl. Soc. Magn. Res. Med. 2004; 347.
2. Juha I. Peltonen, Teemu Makela, et al. An automatic image processing workflow for daily magnetic resonance image quality assurance. Journal of Digital Imaging. 2017; 30: 163–171.
3. D. W. Shattuck, S. R. Sandor-Leahy, et al. Magnetic resonance image tissue classification using a partial volume model. Neuroimage. 2001; 13: 856–876.
4. Xiao-Jiao Mao, Chunhua Shen, et al. Image Restoration Using Convolutional Auto-encoders with Symmetric Skip Connections. Advances in Neural Information Processing Systems. 2016.