Alper Ozan Turgut1, Matthew Van Houten2, Junyu Wang2, Xue Feng2, and Michael Salerno3
1School of Medicine, University of Virginia, Charlottesville, VA, United States, 2Biomedical Engineering, University of Virginia, Charlottesville, VA, United States, 3Department of Medicine and Radiology, Stanford University, Stanford, CA, United States
Synopsis
Contrast-enhanced cardiac magnetic resonance (CMR) stress perfusion imaging shows excellent utility in evaluating coronary artery disease1. Registering perfusion CMR image series is difficult due to the varying image contrast. Neural style transfer is a deep learning method used to transfer the "style" of one domain to another while preserving the content. Two neural style transfer networks were implemented in Python using TensorFlow and PyTorch. Each network was trained on three slice-matched patient profiles, and cine-like perfusion images were generated and registered. This method is compared to a KL-transform-based registration approach.
Introduction
Contrast-enhanced cardiac magnetic resonance (CMR) stress perfusion imaging shows excellent diagnostic and prognostic utility in evaluating coronary artery disease1. However, respiratory motion remains a challenge in achieving optimal perfusion quantification. Registration of CMR perfusion data is challenging due to the large contrast variations in the temporal series. Previous registration methods have relied on computationally expensive calculations of eigen-images using principal component analysis (PCA), either in a bulk or iterative fashion2,3.
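The eigen-image idea behind these PCA/KL-transform approaches can be illustrated with a minimal numpy sketch (an illustration of the general technique only, not the implementation used in references 2 and 3): a low-rank reconstruction of the temporal series suppresses frame-to-frame contrast changes, giving a more stable registration target.

```python
import numpy as np

def eigen_images(series, n_components=2):
    """KL transform (PCA) over the temporal dimension of a perfusion
    series: a low-rank reconstruction suppresses the rapid contrast
    changes between frames. series: (T, H, W) array."""
    T, H, W = series.shape
    X = series.reshape(T, -1)          # one row per temporal frame
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the (T, H*W) matrix; right singular vectors are eigen-images
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    low_rank = (U[:, :n_components] * S[:n_components]) @ Vt[:n_components] + mean
    return low_rank.reshape(T, H, W)
```

Keeping only the first few components retains the anatomy and slow contrast trend while discarding frame-specific contrast variation.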
Neural style transfer is a deep learning method that transfers the "style" of one domain to a target image while preserving the target image's overall content. Rather than using a traditional PCA approach to generate eigen-images with temporally unvarying contrast, which requires substantial computation time, a neural style transfer network can be trained to apply the more desirable style of another CMR sequence to a perfusion series. In our case, the unvarying contrast of cine can be applied to a perfusion image series to obtain flattened contrast while preserving the content of the perfusion series. This method has the potential for widespread application to other CMR problems.
Methods
Training of neural style transfer networks:
To achieve cine-like contrast for a perfusion series, two style transfer neural networks from the Berkeley Artificial Intelligence Research Lab, Cycle Generative Adversarial Network (CycleGAN) and Contrastive Unpaired Image-to-Image Translation (CUT), were implemented in Python using TensorFlow and PyTorch and compared4,5. CycleGAN is a traditional generative adversarial network augmented with a cycle loss, enforcing reciprocal domain style emulation while preserving the content of the original image. CUT takes a fundamentally different approach: it uses an InfoNCE loss to maximize mutual information between corresponding input and output patches, drawing on other patches in the image as contrastive negatives.
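The patch-wise contrastive objective can be sketched as follows. This is a minimal numpy illustration of the InfoNCE-over-patches idea, not the actual loss from the CUT codebase (which is implemented in PyTorch); the feature shapes and temperature value are assumptions.

```python
import numpy as np

def patch_nce_loss(feat_q, feat_k, temperature=0.07):
    """InfoNCE over patch features.
    feat_q: (N, C) features of N output patches.
    feat_k: (N, C) features of the spatially matching input patches.
    Row i of feat_k is the positive for row i of feat_q; every other
    row serves as a contrastive negative."""
    q = feat_q / np.linalg.norm(feat_q, axis=1, keepdims=True)
    k = feat_k / np.linalg.norm(feat_k, axis=1, keepdims=True)
    logits = q @ k.T / temperature                      # (N, N); diagonal = positives
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))               # cross-entropy, targets = diagonal
```

Minimizing this loss pushes each output patch toward its corresponding input patch in feature space, which is what lets CUT change the contrast "style" while keeping the anatomical content in place.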
CUT was implemented using PyTorch and CycleGAN using TensorFlow. Each network was trained on three slice-matched patient profiles. Total training time was 3.5 hours for CUT and 12 hours for CycleGAN. In initial experiments, CUT preserved the content of the perfusion image while applying the style of cine to a greater degree than CycleGAN (Figure 1). The trained CUT network was therefore chosen as the model to apply to a non-motion-corrected image series.
Generation of 'pseudo-cine' perfusion series and registration algorithm:
Five non-motion-corrected perfusion series from healthy volunteers were used as input to the trained CUT network. Generating a perfusion series in the style of cine (pseudo-cine) took only ~10 seconds per patient. A mid-temporal frame was selected as the reference frame, and the pseudo-cine image series was registered pairwise to this reference frame using non-rigid registration. The deformation fields generated in this step were then applied pairwise to the corresponding temporal frames of the original perfusion series, yielding the registered perfusion series (Figure 2).
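The two-stage logic (estimate motion on the contrast-flattened pseudo-cine series, then apply the same correction to the original perfusion frames) can be sketched as below. For brevity this stand-in uses simple phase-correlation translation rather than the non-rigid registration actually used (whose deformation fields would come from a dedicated package such as ANTs); the function names are illustrative.

```python
import numpy as np

def estimate_shift(ref, mov):
    """Phase correlation: the peak of the inverse FFT of the normalized
    cross-power spectrum gives the translation taking mov back to ref."""
    cps = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    cps /= np.abs(cps) + 1e-12
    corr = np.fft.ifft2(cps).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # wrap peak coordinates to signed shifts
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

def register_series(pseudo_cine, perfusion):
    """Estimate frame-to-reference motion on the contrast-flattened
    pseudo-cine series (T, H, W), then apply the same correction
    pairwise to the original perfusion frames."""
    ref = pseudo_cine[len(pseudo_cine) // 2]   # mid-temporal reference frame
    registered = np.empty_like(perfusion)
    for t in range(len(perfusion)):
        dy, dx = estimate_shift(ref, pseudo_cine[t])
        registered[t] = np.roll(perfusion[t], (dy, dx), axis=(0, 1))
    return registered
```

The key design point is that motion is never estimated on the raw perfusion frames, whose contrast varies over time, but only on the pseudo-cine series; the resulting transforms are then reused on the originals.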
Comparison to iterative KL-transform-based registration:
The KL-transform-based registration technique2 was implemented in MATLAB and compared to the style transfer registration technique. Registration quality was rated by a cardiologist blinded to the registration technique (5 = excellent, 1 = poor).
Results
The CUT registration algorithm shows promising motion correction, comparable to that of the KL-transform-based approach (Figures 3, 5). Clinical grading of motion correction demonstrated scores of 4.2 and 4.1 for the KL and CUT techniques, respectively (0.72). Figure 4 shows temporal profiles from the unregistered, KL, and CUT techniques. The CUT technique performed well both for abrupt breathing motion (patient 1) and free-breathing sinusoidal motion (patient 2), and showed less registration drift than the KL-based technique.
Discussion and Conclusion
The CUT registration approach shows promising results for motion correction. Registration of perfusion images is inherently difficult due to the dynamic contrast range, but the CUT approach alleviates this by generating a flattened-contrast image series using neural style transfer. It has minimal computational cost and requires training only a single neural network for fast generation of pseudo-cine series from any perfusion input. This method has the potential for widespread application to other CMR registration and segmentation problems.
Acknowledgements
The primary author would like to thank Nick Tustison for his advice and guidance on using the Advanced Normalization Tools (ANTs) package6 for non-rigid image registration.
References
1. T. Makela et al., "A review of cardiac image registration methods," IEEE Transactions on Medical Imaging, vol. 21, no. 9, pp. 1011-1021, Sept. 2002, doi: 10.1109/TMI.2002.804441.
2. Xue H, Brown LAE, Nielles-Vallespin S, Plein S, Kellman P. Automatic in-line quantitative myocardial perfusion mapping: Processing algorithm and implementation. Magn Reson Med. 2020; 83: 712-730. https://doi.org/10.1002/mrm.27954
3. C. Scannell, A. Villa, J. Lee, M. Breeuwer, and A. Chiribiri, "Robust Non-Rigid Motion Compensation of Free-Breathing Myocardial Perfusion MRI Data," IEEE Transactions on Medical Imaging, vol. 38, no. 8, pp. 1812-1820, Aug. 2019.
4. J.-Y. Zhu, T. Park, P. Isola, and A. Efros, "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks," arXiv:1703.10593 [cs], 2017.
5. T. Park, A. Efros, R. Zhang, and J.-Y. Zhu, "Contrastive Learning for Unpaired Image-to-Image Translation," arXiv:2007.15651 [cs], 2020.
6. Tustison NJ, Cook PA, Holbrook AJ, et al. The ANTsX ecosystem for quantitative biological and medical imaging. Sci Rep 11, 9068 (2021). https://doi.org/10.1038/s41598-021-87564-6