MRI-guided radiotherapy using hybrid MR-Linac systems [1] requires MR images with high spatiotemporal resolution to guide the radiation beam in real time. Here, we investigate deep residual learning of radial undersampling artifacts [2,3] to shorten acquisition time while adding minimal reconstruction time through the fast forward evaluation of the network. Within 8-10 milliseconds, most streaking artifacts were removed for undersampling factors between R=4 and R=32 in the abdomen and brain, facilitating real-time tracking for MR-guided radiotherapy.
Data acquisition: Eight in vivo data sets (four brain, four abdominal) were acquired on a 1.5T MRI-RT scanner (Ingenia, Philips, Best, The Netherlands). Fully sampled radial data were acquired using a multi-slice 2D (M2D) balanced steady-state free precession (bSSFP) cine sequence (TR/TE = 4.6/2.3 ms, FA = 40°, resolution = 1.0x1.0x5.0 mm³, FOV = 256x256x100 mm³, 20 dynamics). Additionally, two 3D golden-angle stack-of-stars bSSFP data sets with fat suppression were acquired in the abdominal volunteers (TR/TE = 2.9/1.45 ms, FA = 40°, resolution = 1.7x1.7x4.0 mm³, FOV = 377x377x256 mm³, acquisition time = 3m14s) for prospective undersampling.
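For illustration only, the spoke orientations of a golden-angle radial acquisition can be generated as in the sketch below; the 111.246° increment is the standard golden angle and not a protocol parameter taken from the scans above.

```python
import numpy as np

# Standard golden angle used for radial MRI: 180° / golden ratio ≈ 111.246°.
GOLDEN_ANGLE_DEG = 180.0 * (np.sqrt(5.0) - 1.0) / 2.0

def golden_angle_spokes(n_spokes: int) -> np.ndarray:
    """Return spoke angles (radians, folded into [0, pi)) for a golden-angle
    radial acquisition with n_spokes successive spokes."""
    angles_deg = (np.arange(n_spokes) * GOLDEN_ANGLE_DEG) % 180.0
    return np.deg2rad(angles_deg)
```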
Reconstruction: Complex data were retrospectively undersampled in k-space by factors R=4 to R=32 and reconstructed using the non-uniform FFT [4]. Data were divided into training (80%) and test (20%) sets and normalized per scan. For each of the brain, abdominal M2D, and abdominal 3D data sets, the data of one volunteer were reserved for testing. Individual 2D slices served as network input. Data augmentation in the form of flipping and rotating was used to increase the size and variability of the training sets.
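A minimal sketch of the per-scan normalization and flip/rotate augmentation described above, assuming magnitude slices stored as NumPy arrays; this is an illustration, not the authors' code.

```python
import numpy as np

def normalise_per_scan(scan: np.ndarray) -> np.ndarray:
    """Scale one scan (stack of slices) to [0, 1] using its own extrema."""
    lo, hi = scan.min(), scan.max()
    return (scan - lo) / (hi - lo + 1e-12)

def augment(image: np.ndarray, target: np.ndarray):
    """Yield flipped and 90-degree-rotated copies of an (input, target) slice pair."""
    for flip in (None, 0, 1):          # no flip, flip rows, flip columns
        for k in range(4):             # rotations by 0/90/180/270 degrees
            img, tgt = image, target
            if flip is not None:
                img, tgt = np.flip(img, flip), np.flip(tgt, flip)
            yield np.rot90(img, k), np.rot90(tgt, k)
```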
Deep learning: A U-net was used for residual learning; the undersampling artifacts were estimated from the undersampled images and subsequently subtracted from them to obtain artifact-free images, since the topology of the streaking artifacts is simpler to learn than that of the artifact-free images [3]. A relatively shallow U-net was implemented in Keras with a TensorFlow backend (see Figure 1) [3]. Additional features were a hyperbolic tangent (Tanh) activation function, the Adam optimizer with a learning rate decay of 0.0001, and training batches of 8. Training and testing were performed on an Nvidia Tesla P100 GPU with 16 GB of memory.
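The abstract specifies only the Tanh activations, Adam with a 1e-4 learning-rate decay, and a batch size of 8; the sketch below shows one possible shallow residual U-net in Keras under those constraints. Layer widths, depth, the 1e-3 initial learning rate, and the MSE loss are assumptions, and the legacy Keras decay is emulated with an inverse-time schedule.

```python
from tensorflow.keras import layers, models, optimizers

def conv_block(x, n_filters):
    """Two 3x3 convolutions with Tanh activations."""
    x = layers.Conv2D(n_filters, 3, padding="same", activation="tanh")(x)
    return layers.Conv2D(n_filters, 3, padding="same", activation="tanh")(x)

def shallow_unet(input_shape=(256, 256, 1)):
    """Shallow U-net predicting the artifact image (residual) from the
    undersampled input. Widths/depth are illustrative assumptions."""
    inp = layers.Input(input_shape)
    e1 = conv_block(inp, 32)
    p1 = layers.MaxPooling2D(2)(e1)
    e2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D(2)(e2)
    b = conv_block(p2, 128)
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    d2 = conv_block(layers.Concatenate()([u2, e2]), 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(d2)
    d1 = conv_block(layers.Concatenate()([u1, e1]), 32)
    out = layers.Conv2D(1, 1, activation="linear")(d1)  # predicted artifact image

    # Inverse-time schedule reproduces the legacy Keras Adam decay of 1e-4 per step;
    # the 1e-3 initial learning rate is an assumption.
    lr = optimizers.schedules.InverseTimeDecay(1e-3, decay_steps=1, decay_rate=1e-4)
    model = models.Model(inp, out)
    model.compile(optimizer=optimizers.Adam(learning_rate=lr), loss="mse")
    return model
```

Training would then call model.fit on the augmented slice pairs with batch_size=8, matching the batch size reported above.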
Evaluation: Both the predicted artifact images and the calculated artifact-free images were compared with ground-truth images using the structural similarity index (SSIM) [5] and the information content weighted multi-scale SSIM (IW-SSIM) [6]. Additionally, the Fourier radial error spectrum plot (ESP) [7] was calculated between the calculated artifact-free and ground-truth images to gain insight into the error at different spatial frequencies. Lastly, deformable vector fields (DVFs) [8] were calculated by non-rigid registration between ground-truth images at the same location and compared with DVFs calculated after registering the network's output.
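A minimal sketch of the SSIM part of this evaluation, assuming the artifact-free estimate is obtained by subtracting the predicted artifact image from the undersampled input; the IW-SSIM, ESP, and DVF analyses are omitted here because they rely on tooling not specified in the abstract.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def evaluate_slice(undersampled: np.ndarray,
                   predicted_artifact: np.ndarray,
                   ground_truth: np.ndarray):
    """Subtract the predicted artifact image from the undersampled input and
    score the resulting artifact-free estimate against the fully sampled
    ground truth with SSIM."""
    artifact_free = undersampled - predicted_artifact
    score = ssim(ground_truth, artifact_free,
                 data_range=ground_truth.max() - ground_truth.min())
    return artifact_free, score
```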
[1] Raaymakers BW, et al. First patients treated with a 1.5T MRI-Linac: clinical proof of concept of a high-precision, high-field MRI guided radiotherapy treatment. PMB 2017;62:L41.
[2] Lee D, et al. Deep residual learning for compressed sensing MRI. IEEE ISBI 2017:15-18.
[3] Han Y, et al. Deep learning with domain adaptation for accelerated projection reconstruction MR. MRM 2018;80:1189-1205.
[4] Fessler JA, et al. Nonuniform fast Fourier transforms using min-max interpolation. IEEE T-SP 2003;51:560-574.
[5] Wang Z, et al. Image quality assessment: from error visibility to structural similarity. IEEE TIP 2004;13:600-612.
[6] Wang Z, et al. Information content weighting for perceptual image quality assessment. IEEE TIP 2011;20:1185-1198.
[7] Kim TH, et al. The Fourier radial error spectrum plot: a more nuanced quantitative evaluation of image reconstruction quality. IEEE ISBI 2018:61-64.
[8] Zachiu C, et al. An improved optical flow tracking technique for real-time MR-guided beam therapies in moving organs. PMB 2015;60:9003-9010.