Magnetic resonance imaging-guided linear particle accelerators use reconstructed images to adapt the radiation beam to the tumor location. Image-based approaches are relatively slow, so healthy tissue may be irradiated when the subject moves. This study investigates the use of convolutional neural networks (CNNs) to estimate rigid patient movements directly from a few acquired radial k-space lines. To this end, abrupt patient movements were simulated in image data of a head. Depending on the number of spokes acquired after the movement, the network quantified this motion precisely. These first results suggest that neural-network-based navigators can help accelerate beam guidance in radiotherapy.
Axial slices of a reconstructed T1-weighted Cartesian 3D turbo FLASH sequence were used as 2D digital phantoms. A golden-angle radial gradient-echo sequence was mimicked by converting the slices to their k-space representations and extracting 10 spokes, each containing 240 sampling points, from these k-space data using bilinear interpolation (see Figure 1).5 The images were translated by $$$(x,y)$$$ with $$$x,y\in[-5.00, 5.00]$$$ voxels and rotated by $$$\psi\in[-8.00^\circ, 8.00^\circ]$$$ about the center. The image transformation was applied between two spoke extractions to mimic a discontinuous in-plane movement of the patient's head. The 10 spokes form the rows of a 10$$$\times$$$240 matrix (referred to as a 'scan' below) and serve as training data for a CNN. Either the last 1, the last 2, or the last 4 rows are spokes recorded from the moved phantom ('moved spokes'); a sketch of this simulation is given below.
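For illustration, the spoke-extraction and movement-simulation steps might look as follows. This is a minimal sketch under stated assumptions, not the authors' implementation: a square image matrix, scipy-based bilinear interpolation and rigid transforms, and a single abrupt movement before spoke index `move_at` are all choices made here for clarity.

```python
# Minimal sketch of simulating golden-angle radial spokes from a Cartesian
# image, with one abrupt rigid movement in between spoke extractions.
# Grid conventions, function names, and transform order are assumptions.
import numpy as np
from scipy.ndimage import map_coordinates, rotate, shift

GOLDEN_ANGLE = np.deg2rad(111.246117975)  # 180 deg / golden ratio
N_SPOKES, N_SAMPLES = 10, 240

def extract_spoke(kspace, angle, n_samples=N_SAMPLES):
    """Sample one radial spoke from Cartesian k-space by bilinear interpolation."""
    n = kspace.shape[0]
    radii = np.linspace(-n / 2, n / 2, n_samples, endpoint=False)
    kx = n / 2 + radii * np.cos(angle)  # spoke coordinates on the Cartesian grid
    ky = n / 2 + radii * np.sin(angle)
    # order=1 -> bilinear; interpolate real and imaginary parts separately
    re = map_coordinates(kspace.real, [ky, kx], order=1)
    im = map_coordinates(kspace.imag, [ky, kx], order=1)
    return re + 1j * im

def simulate_scan(image, move_at, dx, dy, psi_deg):
    """Build a 10x240 'scan': spokes before index move_at see the static
    phantom, all later spokes see the rotated and translated phantom."""
    moved = shift(rotate(image, psi_deg, reshape=False, order=1),
                  (dy, dx), order=1)
    scan = np.empty((N_SPOKES, N_SAMPLES), dtype=complex)
    for i in range(N_SPOKES):
        frame = image if i < move_at else moved
        kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(frame)))
        scan[i] = extract_spoke(kspace, i * GOLDEN_ANGLE)
    return scan
```

With `move_at` set to 9, 8, or 6, the last 1, 2, or 4 rows become the 'moved spokes' described above.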
The CNN-based motion quantification was evaluated for two motion types (translation and rotation), three different numbers of 'moved spokes', and various phantoms, which were bundled into five slice stacks (A, B, C, D, and E); see Figure 2. For each combination, a set of 5,000 scans was generated using the settings described above. The scans within one set differ from one another in the degree of movement and the selected slice within the stack. A separate CNN with the architecture described in Figure 3 was trained and tested on each of these scan sets, as sketched below. For evaluation, a 5-fold cross-validation (80% training, 20% test) was performed using permutations of the training and test data sets.6 The motion quantification was evaluated by computing the absolute error between the induced movement and the movement estimated by the CNN.
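A minimal sketch of such a regression CNN and the 5-fold evaluation loop is given below. The authors' exact layer configuration is specified in their Figure 3 and is not reproduced here, so every architectural choice (channel counts, pooling, hidden size) and the use of real/imaginary parts as two input channels are assumptions made for illustration.

```python
# Illustrative regression CNN and 5-fold cross-validation; all layer
# choices are assumptions, not the architecture from Figure 3.
import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import KFold

class MotionNet(nn.Module):
    """Maps a 2-channel (real/imag) 10x240 scan to motion parameters."""
    def __init__(self, n_out):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((1, 2)),  # pool along the readout direction only
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((1, 2)),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 10 * 60, 64), nn.ReLU(),
            nn.Linear(64, n_out),  # n_out = 2 for (x, y), 1 for psi
        )

    def forward(self, x):
        return self.head(self.features(x))

def cross_validate(scans, targets, n_out, epochs=50):
    """5-fold cross-validation (80% train / 20% test) reporting the mean
    absolute error between induced and estimated motion per fold.

    scans:   complex array of shape (n_scans, 10, 240)
    targets: float array of shape (n_scans, n_out)
    """
    x = torch.tensor(np.stack([scans.real, scans.imag], axis=1),
                     dtype=torch.float32)
    y = torch.tensor(targets, dtype=torch.float32)
    folds = KFold(n_splits=5, shuffle=True).split(np.arange(len(x)))
    for fold, (tr, te) in enumerate(folds):
        model = MotionNet(n_out)
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(epochs):  # full-batch training, for brevity
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(x[tr]), y[tr])
            loss.backward()
            opt.step()
        with torch.no_grad():
            mae = (model(x[te]) - y[te]).abs().mean().item()
        print(f"fold {fold}: mean absolute error = {mae:.3f}")
```

The reported per-fold mean absolute error corresponds to the evaluation metric described above, computed between induced and estimated movement parameters.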
1 B. W. Raaymakers, I. M. Jürgenliemk-Schulz, G. H. Bol, M. Glitzner, A. N. T. J. Kotte, B. van Asselen, J. C. J. de Boer, J. J. Bluemink, S. L. Hackett, M. A. Moerland, et al. First patients treated with a 1.5 T MRI-Linac: clinical proof of concept of a high-precision, high-field MRI guided radiotherapy treatment. Physics in Medicine & Biology, 62(23):L41, 2017.
2 J. Yun, K. Wachowicz, M. Mackenzie, S. Rathee, D. Robinson, and B. G. Fallone. First demonstration of intrafractional tumor-tracked irradiation using 2D phantom MR images on a prototype Linac-MR. Medical Physics, 40(5), 2013.
3 B. Stemkens, R. H. N. Tijssen, B. D. de Senneville, J. J. W. Lagendijk, and C. A. T. van den Berg. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy. Physics in Medicine & Biology, 61(14):5335-5355, 2016.
4 Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436-444, 2015.
5 Z. Zhou, F. Han, L. Yan, D. J. J. Wang, and P. Hu. Golden-ratio rotated stack-of-stars acquisition for improved volumetric MRI. Magnetic Resonance in Medicine, 78(6):2290-2298, 2017.
6 J. D. Rodriguez, A. Perez, and J. A. Lozano. Sensitivity analysis of k-fold cross validation in prediction error estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(3):569-575, 2010.