No more localizers: deep learning based slice prescription directly on calibration scans
Andre de Alm Maximo1, Chitresh Bhushan2, Dawei Gui3, Uday Patil4, and Dattesh D Shanbhag4
1GE Healthcare, Rio de Janeiro, Brazil, 2GE Global Research, Niskayuna, NY, United States, 3GE Healthcare, Waukesha, WI, United States, 4GE Healthcare, Bangalore, India

Synopsis

In this work, we demonstrate a novel automated MRI scan plane prescription workflow that uses pre-scan calibration scans to generate prescription planes for knee MRI planning. Using large-FOV, low-resolution 3D calibration data, we find the meniscus plane with very high accuracy (angle error = 0.76°, distance error = 0.07 mm). The approach obviates the need to acquire any localizer images, with potential benefits: (1) avoiding retakes caused by incorrect plane prescription; (2) reducing total scan time; and (3) easing the MRI scanning experience for both patient and technologist by enabling single-push scanning.

INTRODUCTION

Scan plane prescription for acquiring clinical high-resolution MRI data, whether manual or automated, is typically done using localizer images1,2. However, because localizers are acquired blindly, artifacts (wrap-around, missing structures and others) are common and often force the localizer to be reacquired. This is especially true for MSK exams, e.g. knee, shoulder and ankle (an internal study indicates 30-50% repeats). In this work, we demonstrate a novel automated MRI scan plane prescription workflow that combines pre-scan calibration data with deep learning to generate scan plane prescriptions for knee imaging. The obvious advantage is the ability to plan the MRI scan without any localizers, thereby reducing scan time and associated issues. Pre-scan calibration data are normally a large-FOV (~50 cm) 3D acquisition with extremely low resolution (7-10 mm isotropic), typically acquired for hardware calibration. Calibration images lack detailed anatomical information (see Figure 1) and are affected by RF coil intensity shading; it is therefore very hard, if not impossible, for a human to accurately prescribe high-resolution scan planes from calibration data. Our method employs a deep-learning-based neural network to perform scan plane prescription using calibration data only.
The meniscus plane is the primary plane for knee imaging, and the accuracy of knee imaging depends on its correct estimation. Hence, we demonstrate the concept by generating the meniscus plane prescription from pre-scan calibration data using deep learning.

METHODS

Subjects: Data were acquired in-house from 56 volunteers, with either the left or right knee scanned at random. Volunteers were asked to rotate and shift their knee and were then scanned again in the same sitting, resulting in multiple knee volumes per subject. All studies were approved by the appropriate IRB.

MRI scanner and acquisition: Data were acquired on a GE 3T SIGNA Premier MR scanner using dedicated knee and flex coils. The acquisition protocol was as follows:
a. Localizer images: tri-planar 2D SSFSE localizers, TE/TR = 80 ms/1120 ms, FA = 90°, in-plane resolution = 0.55 mm x 0.55 mm, slice thickness = 10 mm, matrix = 512x512, slices = 5.
b. Calibration images: 3D EFGRE axial calibration scan, TE/TR = 0.5 ms/1.4 ms, FA = 1°, averages = 2, in-plane resolution = 7.5 mm x 7.5 mm, slice thickness = 15 mm, acquisition matrix = 32x32x32, reconstruction matrix = 64x64x64.
Both localizers and calibration data were acquired after each change in subject knee position; the localizers were used to generate the ground-truth labels for training.

Ground-truth (GT) generation: GT data for the meniscus plane were manually labelled by a radiologist using the localizer data (in both the sagittal and coronal stacks). The GT mask was then projected from the tri-planar localizers onto the calibration scan data with an identity transform, assuming the subject did not move between the two acquisitions (see Figure 2).
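
For illustration, the sketch below shows one way such an identity-transform projection could be done with SimpleITK, resampling the localizer-space mask onto the calibration-scan grid in patient coordinates. This is an assumption-laden reconstruction, not the authors' code; the file names are hypothetical.

```python
# Minimal sketch: project a labelled meniscus-plane mask from localizer space
# onto the calibration-scan grid with an identity transform, assuming both
# series share the same patient coordinate frame (subject did not move).
import SimpleITK as sitk

localizer_mask = sitk.ReadImage("localizer_meniscus_mask.nii.gz")  # binary plane mask
calibration    = sitk.ReadImage("calibration_volume.nii.gz")       # 3D calibration scan

# Identity transform in physical (patient) space: the resampler maps each
# calibration voxel to the localizer grid via the origins, spacings and
# orientations stored in the image headers.
projected_mask = sitk.Resample(
    localizer_mask,
    calibration,                           # reference grid (size, spacing, direction)
    sitk.Transform(3, sitk.sitkIdentity),
    sitk.sitkNearestNeighbor,              # keep labels binary
    0,                                     # background value
    localizer_mask.GetPixelID(),
)

sitk.WriteImage(projected_mask, "calibration_meniscus_mask.nii.gz")
```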

Deep learning (DL) model generation:
a. Data strategy: A total of 667 calibration samples were obtained from the 56 subjects. This dataset was divided into training (526 volumes), validation (59 volumes) and testing (82 volumes) sets. The training and validation sets were augmented with image flips, noise and coil shading, yielding 13,800 volumes for training and 2,300 volumes for validation.
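
The following is an illustrative sketch, not the authors' pipeline, of the three stated augmentations (flips, additive noise, coil-shading) applied to a 32x32x32 calibration volume and its plane mask; the shading model and noise level are assumptions.

```python
import numpy as np

def augment(volume, mask, rng):
    """volume, mask: (32, 32, 32) arrays; returns one augmented pair."""
    # Random flip along a spatial axis (flip volume and mask together).
    axis = int(rng.integers(0, 3))
    if rng.random() < 0.5:
        volume, mask = np.flip(volume, axis), np.flip(mask, axis)

    # Additive Gaussian noise on the image only.
    volume = volume + rng.normal(0.0, 0.02 * volume.std(), volume.shape)

    # Simple multiplicative shading field: a smooth gradient mimicking
    # RF-coil intensity variation across the FOV (hypothetical model).
    z, y, x = np.meshgrid(*[np.linspace(-1, 1, s) for s in volume.shape],
                          indexing="ij")
    shading = 1.0 + 0.3 * (rng.random() * x + rng.random() * y + rng.random() * z)
    volume = volume * shading

    return volume.copy(), mask.copy()

rng = np.random.default_rng(0)
aug_vol, aug_mask = augment(np.zeros((32, 32, 32)), np.zeros((32, 32, 32)), rng)
```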

b. Deep learning model: Our method employs a 3D U-Net architecture3 (3 layers down and 3 layers up) for training the calibration-based meniscus plane model, with input and output size = 32x32x32, 16 initial filters, 3x3x3 filter size, 2x2x2 max-pooling, 3000 epochs, and Dice as both loss function and accuracy metric.
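
A minimal Keras sketch of a 3-level 3D U-Net matching the stated configuration (32x32x32 input/output, 16 initial filters, 3x3x3 convolutions, 2x2x2 max-pooling, Dice loss) is given below; it is an assumed reconstruction for illustration, not the authors' implementation, and the optimizer and filter progression are our choices.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    x = layers.Conv3D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv3D(filters, 3, padding="same", activation="relu")(x)

def dice_loss(y_true, y_pred, eps=1e-6):
    inter = tf.reduce_sum(y_true * y_pred)
    return 1.0 - (2.0 * inter + eps) / (tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + eps)

inp = layers.Input((32, 32, 32, 1))
skips, x = [], inp
for f in (16, 32, 64):                                   # 3 layers down
    x = conv_block(x, f)
    skips.append(x)
    x = layers.MaxPooling3D(2)(x)
x = conv_block(x, 128)                                   # bottleneck
for f, skip in zip((64, 32, 16), reversed(skips)):       # 3 layers up
    x = layers.Conv3DTranspose(f, 2, strides=2, padding="same")(x)
    x = layers.concatenate([x, skip])
    x = conv_block(x, f)
out = layers.Conv3D(1, 1, activation="sigmoid")(x)       # plane-mask probability

model = Model(inp, out)
model.compile(optimizer="adam", loss=dice_loss, metrics=["accuracy"])
```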

c. Evaluation metrics: In addition to Dice, we compute the mean absolute distance (MAD) and angle error between the GT plane and the DL-predicted plane. MAD < 1 mm and angle error < 3° are considered clinically acceptable. Evaluation statistics are reported for the test cases.
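
As a hedged sketch of how such MAD and angle errors could be computed (not necessarily the authors' exact definition), one can fit a plane to each mask's voxel coordinates in millimetres via PCA, then compare plane normals and point-to-plane distances:

```python
import numpy as np

def fit_plane(mask, spacing):
    """Return (centroid, unit normal) of a plane fitted to nonzero mask voxels."""
    pts = np.argwhere(mask > 0) * np.asarray(spacing)   # voxel indices -> mm
    centroid = pts.mean(axis=0)
    # Normal = direction of least variance (last right-singular vector).
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]

def plane_errors(gt_mask, pred_mask, spacing):
    c_gt, n_gt = fit_plane(gt_mask, spacing)
    _, n_pr = fit_plane(pred_mask, spacing)
    # Angle between plane normals, in degrees (sign-invariant).
    angle = np.degrees(np.arccos(np.clip(abs(float(n_gt @ n_pr)), 0.0, 1.0)))
    # Mean absolute distance of predicted-plane points to the GT plane.
    pts_pr = np.argwhere(pred_mask > 0) * np.asarray(spacing)
    mad = np.mean(np.abs((pts_pr - c_gt) @ n_gt))
    return mad, angle
```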

RESULTS and DISCUSSION

Figure 3 shows the validation accuracy curve for meniscus plane training, with good convergence around 97%. Figure 4 shows four test cases with top accuracy (DICE = 100%). Figure 5 shows a mean MAD error of 0.07 mm and a mean angle error of 0.76°, indicating excellent reproducibility of the meniscus plane vis-à-vis the higher-resolution localizers and falling well within the clinically acceptable limits. Although these results are good, they remain preliminary: the dataset is small (56 subjects) and carries certain biases (positional changes for re-scanning and data augmentation). We are currently working with a larger, more diverse dataset to ensure the approach generalizes.

CONCLUSION

The meniscus plane is the standard prescription plane for knee imaging, from which the other prescription planes are planned. Our results demonstrate that calibration-scan images can be used to infer this prescription plane, thereby removing the need for localizer images and truly enabling single-push scanning. The idea can be extended to other anatomical locations such as the shoulder and ankle, which also suffer from improper localizer setup and frequent retakes.

Acknowledgements

No acknowledgement found.

References

1. Lecouvet FE, Claus J, Schmitz P, Denolin V, Bos C, Vande Berg BC. Clinical evaluation of automated scan prescription of knee MR images. Journal of Magnetic Resonance Imaging: An Official Journal of the International Society for Magnetic Resonance in Medicine. 2009 Jan;29(1):141-5.

2. Shanbhag DD, et al. A generalized deep learning framework for multi-landmark intelligent slice placement using standard tri-planar 2D localizers. In Proceedings of ISMRM 2019, Montreal, Canada, p. 670.

3. Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention 2015 Oct 5 (pp. 234-241). Springer, Cham.

Figures

Figure 1. Comparison of calibration data (A), localizer image (B) and a high-resolution scan (C) of the knee anatomy. The loss of structural details in calibration data is evident.

Figure 2. Gold standard generation. The meniscus plane mask is labeled in localizer images (top) and projected into the pre-scan calibration data (bottom). Our method uses deep learning to segment these plane masks directly on calibration data.

Figure 3. Validation accuracy curve (Dice metric) during DL training. It indicates good convergence around 97% for the meniscus plane mask.

Figure 4. Examples of knee calibration-scan volumes with ground-truth labels (red) and predicted meniscus plane (green); pixels where the two overlap appear yellow. The deep-learning model achieved 100% accuracy in these four test cases, so only yellow pixels are visible. Due to the large FOV and small matrix size of the calibration scans, slice images are resized and empty slices are omitted for clarity.

Figure 5. Two evaluation metrics, MAD error (left) and angle error (right) with respect to ground-truth mask. Both MAD and angle errors are well within clinically acceptable limits.