4475

DCE-Abdominal MR Image Registration using Convolutional Neural Networks
Zachary Miller1 and Kevin Johnson1

1University of Wisconsin-Madison, Madison, WI, United States

Synopsis

Convolutional neural networks (CNNs) have had incredible success solving image segmentation problems. We explore whether CNNs can achieve a similar level of success on difficult image registration problems. To this end, we developed a modified U-net that removes respiratory motion while preserving contrast changes in free-breathing abdominal dynamic contrast-enhanced (DCE)-MRI. We then compared this network to a state-of-the-art iterative registration algorithm and demonstrate that our modified U-net outperforms iterative methods in both registration quality and speed (600 registrations in <1 s for the CNN vs. 2 hours for Elastix).

Introduction

Respiratory motion complicates abdominal MR imaging. A variety of techniques have been proposed that provide time-resolved acquisitions resistant to motion artifacts, allowing imaging during free breathing. However, the lack of fast, accurate deformable image registration makes it difficult to use these sequences for applications such as quantitative mapping in dynamic contrast-enhanced (DCE)-MRI or real-time motion tracking. State-of-the-art registration algorithms are slow on large datasets, are challenged by contrast changes over time, and tend to produce registered outputs with significant parenchymal distortion. In this work, we investigate the use of CNNs to register free-breathing DCE-MR datasets, a particularly challenging registration problem because the algorithm must untangle pixel-intensity changes due to contrast from changes due to motion.

Methods

DCE-MR dataset acquisition

In 2 healthy volunteers, free-breathing DCE-MR abdominal datasets were acquired using a motion-robust sequence consisting of sliding sagittal 2D slices. This sequence acquires images with short acquisition times, but the image time series is misaligned by respiratory motion. Imaging parameters: 0.1 mmol/kg gadobenate dimeglumine, image matrix 192×144×176, 20 time points, 3T GE scanner, 32-channel coil, golden-ratio Cartesian sampling, resolution = 1.5×1.9×4 mm³, TE/TR = 1.3/3.8 ms, flip(imaging) = 15°, flip(sat) = 8°, 2 (ky) × 2 (slice) parallel-imaging acceleration, 4 s temporal resolution, 8 s temporal footprint.

Synthetic Training Set Generation

Each collected DCE-MR dataset consists of 176 contrast sequences, with each sequence comprising 20 time points. The contrast sequences are completely misaligned and have no ground-truth deformation fields associated with them. Given these constraints, we built a synthetic training set by applying known deformations to the first time point of the 30 centrally located contrast sequences. Each known deformation consisted of a randomized affine transform followed by a randomized Gaussian-kernel-based non-linear transform, yielding pairs of reference and moving images with an associated ground-truth deformation field. Uncorrelated linear brightness changes of randomized magnitude were then applied five times to the moving and reference images to train the network to ignore time-dependent contrast changes. Validation data were synthesized identically using 10 central slices (Figure 1).
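The pipeline above can be sketched in a few lines of NumPy/SciPy. This is a minimal illustration, not the authors' code: `synth_pair` and all parameter ranges (rotation, shift, kernel width, brightness scale) are hypothetical, as the abstract does not report the exact settings, and for brevity only the non-linear component is returned as the "ground truth" field.

```python
import numpy as np
from scipy.ndimage import affine_transform, gaussian_filter, map_coordinates

def synth_pair(ref, rng, max_shift=5.0, max_rot=0.05, field_sigma=8.0, field_amp=3.0):
    """Build a (reference, moving, deformation) training triple from one 2D slice.
    All parameter values here are illustrative assumptions."""
    h, w = ref.shape
    # 1) randomized affine: small rotation + translation
    theta = rng.uniform(-max_rot, max_rot)
    mat = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    shift = rng.uniform(-max_shift, max_shift, size=2)
    moving = affine_transform(ref, mat, offset=shift, order=1)
    # 2) randomized Gaussian-kernel non-linear warp: smooth random displacements
    dy = gaussian_filter(rng.standard_normal((h, w)), field_sigma)
    dx = gaussian_filter(rng.standard_normal((h, w)), field_sigma)
    dy *= field_amp / (np.abs(dy).max() + 1e-8)
    dx *= field_amp / (np.abs(dx).max() + 1e-8)
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    moving = map_coordinates(moving, [yy + dy, xx + dx], order=1)
    # 3) uncorrelated linear brightness change, so the network learns to
    #    ignore intensity changes that are not motion
    moving = moving * rng.uniform(0.7, 1.3)
    # Simplification: return only the non-linear component as "ground truth";
    # in practice the composed affine + non-linear field would be stored.
    return ref, moving, np.stack([dy, dx])
```

A reference/moving/field triple generated this way would form one training sample; the linear brightness step would be repeated with different random magnitudes to expand the set, as described above.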

Neural Network Architecture and Training
We used a modified U-net architecture. Notable modifications were the use of cascaded convolution kernels (7×7 → 5×5 → 3×3) and PReLU activation functions.
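One encoder stage of such a network might look like the following PyTorch sketch. The channel counts, padding, and overall wiring are illustrative assumptions; the abstract does not publish the full architecture, and `CascadeBlock` is a hypothetical name.

```python
import torch
import torch.nn as nn

class CascadeBlock(nn.Module):
    """One stage of the modified U-net: cascaded 7x7 -> 5x5 -> 3x3
    convolutions, each followed by batch norm and a PReLU activation.
    Channel counts are illustrative, not from the abstract."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 7, padding=3),
            nn.BatchNorm2d(out_ch), nn.PReLU(),
            nn.Conv2d(out_ch, out_ch, 5, padding=2),
            nn.BatchNorm2d(out_ch), nn.PReLU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.PReLU(),
        )

    def forward(self, x):
        return self.block(x)

# Moving and reference images are stacked on the channel axis; a final
# 1x1 convolution stands in for the head that emits the x/y deformation field.
net = nn.Sequential(CascadeBlock(2, 16), nn.Conv2d(16, 2, 1))
```

In the full U-net these stages would be separated by downsampling/upsampling steps with skip connections, as in the original U-net design.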

Neural Network Testing
Test data comprised the full time series of contrast sequences from the 2 volunteers used to build the synthetic dataset. Overfitting was not a concern because the synthetic dataset for each volunteer was built only from the first time point in each contrast sequence, and the synthetic dataset could not replicate the nonlinearly varying spatial and temporal contrast changes observed in the test data. The quality of these image registrations was compared against Elastix, a state-of-the-art registration tool.

Data Analysis
Standard methods for assessing registration quality rely on either image-intensity differences or the Dice coefficient. Time-dependent contrast changes complicate the first approach, while the Dice coefficient does not assess parenchymal distortion. Registration quality was therefore assessed qualitatively (10 blinded raters, 3 example registrations) and quantitatively through principal component analysis (PCA).
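The PCA metric reported in the Results (fraction of image covariance captured by the first eigenvalue) can be computed as below. This is a minimal sketch assuming the registered series is stored as a (time, height, width) NumPy array; `first_pc_fraction` is a hypothetical helper name and the exact preprocessing the authors used is not reported.

```python
import numpy as np

def first_pc_fraction(series):
    """Fraction of temporal variance explained by the first principal component.
    series: (T, H, W) array of registered frames. Well-registered DCE data
    should vary mainly along a single contrast-enhancement mode, pushing
    this fraction toward 1; residual motion spreads variance across modes."""
    T = series.shape[0]
    X = series.reshape(T, -1).astype(float)
    X = X - X.mean(axis=0)            # remove the static anatomy
    cov = X @ X.T / X.shape[1]        # T x T covariance is cheap (T ~ 20 here)
    evals = np.linalg.eigvalsh(cov)[::-1]  # descending eigenvalues
    return evals[0] / evals.sum()
```

Applied to each slice's time series, this yields one number per slice, which can then be averaged per registration method as in Figure 5.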

Results

CNN-based image registration visually appears to exceed state-of-the-art iterative registration techniques in registration quality (Figures 3 and 4). Compared to Elastix, CNN-based registration consistently preserves internal features. Additionally, preliminary data show that trained readers, blinded to registration method, rank CNN registration outputs higher than Elastix or unregistered outputs (87% vs. 13% vs. 0% preference, respectively) in terms of motion removal and lack of parenchymal distortion (Figure 5). PCA demonstrates that the first eigenvalue accounted for 0.963 ± 0.010 of image covariance for the CNN output, compared to 0.947 ± 0.004 for Elastix and 0.901 ± 0.012 for the unregistered data.

Discussion and Conclusion

We demonstrate that CNN-based image registration of DCE-MR sequences using synthetically generated training data is not only feasible but capable of outperforming state-of-the-art registration techniques. These results hold despite significant differences between the synthetic dataset (random deformations with spatially linear brightness changes applied to the first time point of each sequence) and the contrast-sequence test data (periodic deformations with peristaltic gut motion and contrast changes that are nonlinear in space and time). We note, however, that these results come from two volunteers, and further work is needed to evaluate the network's efficacy in a wider range of subjects.

Acknowledgements

Research Support from GE Healthcare


Figures

Figure 1: Method for generating synthetic training sets from DCE-MR data. N=30 central slices with appreciable spatial variation at t=0 (pre-contrast time point) were used to initialize synthetic training-data generation per volunteer dataset. Two volunteer datasets in total were used to train the network. Validation data were synthesized using an identical procedure (N=10 slices at t=0). The contrast changes over time for each slice were used as test data.

Figure 2: Neural network architecture for DCE-MR image registration. The goal of the network is to map an image pair (moving and reference images) to a deformation field with x and y components. The network input is a pair of 96×96 reference and moving images. Inputs are passed through cascaded 7×7, 5×5, and 3×3 convolution kernels with batch normalization and PReLU activations, followed by a series of downsampling steps, further cascaded convolutions, and then upsampling to ultimately yield 96×96 deformation fields in x and y. L1 loss was used during training with Adamax optimization. Training time was 2 hours.
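The training setup this caption describes (L1 loss on the predicted deformation field, Adamax optimization) can be sketched as a single PyTorch optimization step. The placeholder network, learning rate, and batch size below are illustrative assumptions, not reported values.

```python
import torch
import torch.nn as nn

# Placeholder standing in for the modified U-net of Figure 2:
# 2 input channels (moving + reference), 2 output channels (x/y field).
net = nn.Conv2d(2, 2, 3, padding=1)
opt = torch.optim.Adamax(net.parameters(), lr=1e-3)  # lr is an assumption

pair = torch.randn(8, 2, 96, 96)    # stacked moving/reference images
truth = torch.randn(8, 2, 96, 96)   # known synthetic deformation fields

# One training step: L1 loss between predicted and ground-truth fields.
opt.zero_grad()
loss = nn.functional.l1_loss(net(pair), truth)
loss.backward()
opt.step()
```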

Figure 3: Unregistered, Elastix-registered, and CNN-registered contrast images from a single slice through ten time points are shown for volunteer 1. Residual liver motion and parenchymal distortions are used to assess registration quality. Although Elastix is arguably more effective than the CNN at removing liver motion, it does so at the cost of distorting the liver parenchyma; this can be observed as contrast-brightened vessels are pulled towards and pushed away from each other. The CNN shows minimal parenchymal distortion and removes the majority of liver motion. Note the running times: CNN <1 s, Elastix 600 s.

Figure 4: Unregistered, Elastix-registered, and CNN-registered contrast images from a single slice through ten time points are shown for volunteer 2.

Figure 5: Top: variance over time accounted for by the first principal component across 10 slices in 2 volunteers, by registration method, compared to unregistered image slices. Bottom: percentage preference of 10 medically trained readers by registration method across 3 example registrations.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)