Diagnostic applications often require the estimation of organ motion. Image registration enables motion estimation by computing deformation fields for an image pair. In this work, VoxelMorph, a framework for deep-learning-based diffeomorphic image registration, is used to register CINE cardiac MR images in the four-chamber view. Additionally, the framework is extended to a one-to-many registration that exploits the temporal information within a time-resolved MR scan. Registration performance, as well as the performance of a valve tracking application built on this approach, is evaluated. The results are comparable to a state-of-the-art registration method while noticeably reducing computation time.
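As an illustration of the difference between the pairwise and the one-to-many setting, the following sketch applies a generic registration network once per frame pair versus once per CINE series; register_pair and register_series are hypothetical placeholders and do not reflect the actual voxelmorph interface.

import numpy as np

def register_pair(moving, fixed):
    # placeholder for a trained network that returns a dense 2-D displacement field
    return np.zeros(moving.shape + (2,), dtype=np.float32)

def one_to_one_registration(cine):
    # cine: (T, H, W); register frame 0 to every later frame independently
    return [register_pair(cine[0], cine[t]) for t in range(1, cine.shape[0])]

def one_to_many_registration(cine, register_series):
    # a single forward pass over the stacked series (2-D + t), returning T-1
    # deformation fields that share temporal context across the whole scan
    return register_series(cine)

cine = np.random.rand(25, 192, 192).astype(np.float32)   # toy 25-frame series
fields = one_to_one_registration(cine)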
Figure 1: Architectures of the framework used for the one-to-one and the one-to-many registration. Subplot a) shows the standard VoxelMorph architecture, consisting of a 2-D U-Net, scaling-and-squaring layers and a spatial transformer layer. Subplot b) shows the architecture of the one-to-many extension, consisting of a 3-D U-Net, scaling-and-squaring layers and a spatial transformer layer.
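The caption names the two core components downstream of the U-Net: scaling-and-squaring layers that integrate the predicted stationary velocity field, and a spatial transformer that warps the moving image. A minimal, self-contained PyTorch sketch of these two operations is given below; the shapes, the number of squaring steps and the (x, y) channel convention are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def warp(image, disp):
    # image: (N, C, H, W); disp: (N, 2, H, W) displacement in pixels (x, y channel order assumed)
    n, _, h, w = disp.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0) + disp          # sampling locations
    # normalise to [-1, 1] as required by grid_sample
    grid_x = 2.0 * grid[:, 0] / (w - 1) - 1.0
    grid_y = 2.0 * grid[:, 1] / (h - 1) - 1.0
    sample_grid = torch.stack((grid_x, grid_y), dim=-1)                       # (N, H, W, 2)
    return F.grid_sample(image, sample_grid, align_corners=True)

def scaling_and_squaring(velocity, steps=7):
    # integrate a stationary velocity field by recursive composition of the flow
    disp = velocity / (2 ** steps)
    for _ in range(steps):
        disp = disp + warp(disp, disp)
    return disp

# toy usage
moving = torch.rand(1, 1, 64, 64)
velocity = 0.1 * torch.randn(1, 2, 64, 64)
flow = scaling_and_squaring(velocity)
moved = warp(moving, flow)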
Figure 2: Registration performance of the standard VoxelMorph method, the one-to-many extension and MrFtk. Subplot a) shows a table with the mean landmark distance error and the corresponding standard deviation for the first and the second mitral valve annulus landmark, as well as the average error over both landmarks. Subplot b) shows a boxplot of the landmark distance error for specific time frame intervals; the orange lines denote the median error values and the triangles the mean error values. Subplot c) shows the computation times of all evaluated methods.
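Assuming the landmark distance error in Figure 2 is the Euclidean distance between propagated and ground-truth mitral valve annulus landmarks, averaged over time frames, it could be computed as in this short sketch (array shapes are illustrative).

import numpy as np

def landmark_error(pred, gt):
    # pred, gt: (T, 2, 2) arrays -> T frames, 2 landmarks, (x, y) coordinates
    dists = np.linalg.norm(pred - gt, axis=-1)            # (T, 2) per-frame, per-landmark error
    per_landmark_mean = dists.mean(axis=0)                # mean error for landmarks 1 and 2
    per_landmark_std = dists.std(axis=0)
    return per_landmark_mean, per_landmark_std, dists.mean()

pred = np.random.rand(25, 2, 2) * 192
gt = pred + np.random.randn(25, 2, 2)
mean_lm, std_lm, overall = landmark_error(pred, gt)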
Figure 3: Registration results for the deep-learning-based method and MrFtk. The moving, fixed and moved images are displayed for both registration methods. The ground-truth landmarks of the mitral valve insertion points (red) are shown on the moving and the fixed image. Additionally, the landmarks propagated by the proposed framework (green) are shown on the fixed and the moved image. For better visualization, the moving, fixed and moved images as well as the deformation fields are cropped.
Figure 4: Integration of the registration approach into a valve tracking application. Deep-learning-based registration is used to compute deformation fields for a CINE scan. A localization CNN estimates the landmarks of the mitral valve insertion points in the first time frame. These landmarks are then propagated through the time series by spatial transformation with the computed deformation fields.
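A plausible sketch of this propagation step is given below. It assumes that each deformation field stores, for every pixel of the first time frame, the displacement in pixels into the corresponding later frame; this convention and the bilinear sampling are assumptions for illustration, not the authors' exact implementation.

import numpy as np

def sample_displacement(field, point):
    # field: (H, W, 2) displacement field; point: (x, y) in pixel coordinates
    x, y = point
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    h, w = field.shape[:2]
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    # bilinear interpolation of the 2-channel displacement
    return ((1 - dx) * (1 - dy) * field[y0, x0] + dx * (1 - dy) * field[y0, x1]
            + (1 - dx) * dy * field[y1, x0] + dx * dy * field[y1, x1])

def propagate_landmarks(landmarks_t0, fields):
    # landmarks_t0: list of (x, y) detected by the localization CNN in frame 0
    # fields: list of (H, W, 2) deformation fields, one per later time frame
    tracks = [list(landmarks_t0)]
    for field in fields:
        tracks.append([tuple(np.asarray(p) + sample_displacement(field, p))
                       for p in landmarks_t0])
    return tracks

# toy usage with identity (zero) deformation fields
fields = [np.zeros((192, 192, 2), dtype=np.float32) for _ in range(24)]
tracks = propagate_landmarks([(96.0, 60.0), (110.0, 62.0)], fields)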
Figure 5: Results of the valve tracking application using the proposed one-to-many registration approach. Subplots a)-c) show the atrioventricular plane displacement, the atrioventricular plane velocity and the annulus diameter, respectively. Subplot d) shows the mitral annular plane systolic excursion (MAPSE) error, and subplot e) additionally shows the average error over all metrics. All metrics are reported for the propagated and the ground-truth landmarks of the CAP test dataset.
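One way the curves in Figure 5 could be derived from the propagated landmark tracks is sketched below. The metric definitions (annulus diameter as the distance between the two insertion points, atrioventricular plane displacement as the motion of the annulus midpoint relative to frame 0, velocity as a finite difference over the frame duration, MAPSE as the peak displacement) are assumptions for illustration.

import numpy as np

def valve_metrics(tracks, frame_duration_s):
    # tracks: (T, 2, 2) propagated landmarks -> T frames, 2 landmarks, (x, y) in mm
    diameter = np.linalg.norm(tracks[:, 0] - tracks[:, 1], axis=-1)       # annulus diameter
    midpoint = tracks.mean(axis=1)                                        # annulus midpoint per frame
    displacement = np.linalg.norm(midpoint - midpoint[0], axis=-1)        # AV plane displacement
    velocity = np.gradient(displacement, frame_duration_s)                # AV plane velocity
    mapse = displacement.max()                                            # peak excursion (MAPSE-like)
    return diameter, displacement, velocity, mapse

tracks = np.random.rand(25, 2, 2) * 100.0
diameter, displacement, velocity, mapse = valve_metrics(tracks, frame_duration_s=0.04)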