0832

Segmentation-aware adversarial synthesis for registration of histology to MRI
Matteo Mancini1, Yuankai Huo2, Bennett A. Landman2, and Juan Eugenio Iglesias1

1Centre for Medical Image Computing, Dept. of Med. Phys. and Biomed. Eng., University College London, London, United Kingdom, 2Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, United States

Synopsis

MRI-histology registration lays the ground for a new generation of high-resolution brain atlases. The task is challenging given the differences in contrast between the two modalities and the histology-related artefacts. We propose a dataset-specific, synthesis-based approach that uses a generative adversarial network to reduce the problem to intra-modality registration. Exploiting automatic segmentation data and cycle-consistency, the proposed architecture is suitable for small datasets. We show the advantages of this approach over canonical registration, both quantitatively and qualitatively, using data from the Allen Institute’s Human Brain Atlas.

Introduction

Registering histology to MRI opens the door to building new brain atlases on the basis of histology1, taking advantage of higher-resolution features and contrast tailored to the chosen dye. However, such a task is challenging given the profound differences in image contrast between the two modalities and the histology-related artefacts. One approach proposed to overcome the limitations of inter-modality registration is synthesis-based registration2: using machine learning, one modality (e.g. histology) is synthesised from the other (MRI) in order to reduce the problem to a simpler intra-modality registration. This approach is straightforward when the two modalities are perfectly aligned, but it becomes much harder with unpaired images, as is the case for histology and MRI. In this paper, we propose a generative adversarial network (GAN) able to synthesise MRI from histology and vice versa with very weak supervision, using automatic segmentation as a constraint.

Methods

We propose a GAN trained on unpaired histology sections and MRI slices, which extends the architectures of CycleGAN3 and SynSegNet4. Six subnetworks are used during training (Fig.1): two generators (9-block ResNet) to synthesise MRI from histology and vice versa; two discriminators (PatchGAN) to distinguish between real and fake MRI and between real and fake histology; and two segmentation networks (6-block ResNet) to segment every slice in each modality into 5 classes (background, grey matter, white matter, cerebellum grey matter, cerebellum white matter). Both segmentation subnetworks are pretrained on the MRI data, and during training the layers of the MRI segmentation network are frozen, since pretraining already provided full supervision. At each iteration of the training procedure, two symmetric cycles exist as in the CycleGAN architecture3 (Fig.1). The loss function used to train the network is the sum of seven components:

  • two adversarial loss functions:\[L_{GAN}(G_{1},D_{1},A,B)=E_{b\sim p(b)}[\log D_{1}(b)]+E_{a\sim p(a)}[\log(1-D_{1}(G_{1}(a)))]\]\[L_{GAN}(G_{2},D_{2},B,A)=E_{a\sim p(a)}[\log D_{2}(a)]+E_{b\sim p(b)}[\log(1-D_{2}(G_{2}(b)))]\]
  • one cycle-consistency loss function:\[L_{cyc}(G_{1},G_{2})=E_{a\sim p(a)}[\parallel G_{2}(G_{1}(a))-a\parallel_{1}]+E_{b\sim p(b)}[\parallel G_{1}(G_{2}(b))-b\parallel_{1}]\]
  • two segmentation loss functions4:\[L_{seg}(S_{1},G_{1},A)=-\sum_{i}m_{i}\cdot \log(S_{1}(G_{1}(a_{i})))\]\[L_{seg}(S_{2},G_{2},B)=-\sum_{i}m_{i}\cdot \log(S_{2}(G_{2}(b_{i})))\]
  • one gradient-consistency5 loss function:\[GC(A,B)=\frac{1}{2}\{NCC(\nabla_{x}A,\nabla_{x}B)+NCC(\nabla_{y}A,\nabla_{y}B)\}\]
  • one background-consistency loss function:\[L_{background}(G_{1},G_{2},A,B)=\sum_{i}|a_{i}-G_{1}(a_{i})|+\sum_{j}|b_{j}-G_{2}(b_{j})|,\quad\forall a_{i},b_{j}\ \text{segmented as background}\]
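For illustration, the gradient-consistency and background-consistency terms can be sketched in NumPy (function names and the `eps` stabiliser are our own; the actual training code presumably operates on GPU tensors inside the GAN loop):

```python
import numpy as np

def ncc(x, y, eps=1e-8):
    """Normalised cross-correlation between two equally shaped arrays."""
    x = x - x.mean()
    y = y - y.mean()
    return float((x * y).sum() / (np.sqrt((x ** 2).sum() * (y ** 2).sum()) + eps))

def gradient_consistency(a, b):
    """GC(A,B): mean NCC of the x- and y-gradients of two images."""
    gy_a, gx_a = np.gradient(a)  # np.gradient returns (d/dy, d/dx) for 2-D input
    gy_b, gx_b = np.gradient(b)
    return 0.5 * (ncc(gx_a, gx_b) + ncc(gy_a, gy_b))

def background_loss(real, fake, background_mask):
    """L1 penalty restricted to pixels segmented as background,
    discouraging the generator from altering empty regions."""
    return float(np.abs(real - fake)[background_mask].sum())
```

Identical images give a gradient consistency near 1, while contrast-inverted images give a value near -1, which is why the term rewards edges that line up across modalities.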

We introduced the two segmentation networks and the additional gradient-consistency and background-consistency constraints to overcome several issues observed with previous architectures: dataset-size-related overfitting and the resulting artefacts are tackled by the background-consistency loss; label permutations by the segmentation losses; fuzzy edges by the gradient-consistency loss.

The multimodal dataset (ex vivo MRI, histology, labelling) from the Allen Institute’s Human Brain Atlas6 was used as the training set for the synthesis in our experiments. To pretrain the segmentation subnetworks, an automated SPM segmentation7 was computed on the MRI slices and used as ground truth; in this way, the synthesis does not require any manual delineation. After training, the synthetic counterpart of each of the 106 brain slices was generated in each modality. The synthesised MRI for each histology section was then registered to the actual MRI slice, computing first an affine transformation and then a non-linear registration based on the sum of squared differences, using NiftyReg. As a comparison, histology was also registered directly to MRI without the aid of synthesis, using mutual information. For quantitative evaluation, 1271 landmarks were manually placed8 in both histology and MRI on 92 slices (mean number of landmarks per slice: 13.81; standard deviation: 4.43). For each landmark, we computed the registration error as the distance between the ground-truth position in the MRI and the position obtained with synthesis-based and with direct registration. Finally, as a further qualitative result, we used the synthesis-based registration to propagate the manual histological labels from the Allen dataset to the MRI volume.
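The landmark-based evaluation amounts to a per-landmark Euclidean distance followed by per-slice averaging; a minimal sketch (function names are hypothetical, not from the original pipeline):

```python
import numpy as np

def landmark_displacements(fixed_pts, warped_pts):
    """Euclidean distance (e.g. in mm) between ground-truth MRI landmarks
    and their registered positions; one value per landmark."""
    fixed = np.asarray(fixed_pts, dtype=float)
    warped = np.asarray(warped_pts, dtype=float)
    return np.linalg.norm(fixed - warped, axis=1)

def per_slice_average(displacements, slice_ids):
    """Average displacement for each slice (landmarks grouped by slice id)."""
    d = np.asarray(displacements, dtype=float)
    s = np.asarray(slice_ids)
    return {sid: float(d[s == sid].mean()) for sid in np.unique(s)}
```

The same displacement vectors can then be compared between the synthesis-based and direct registration, per landmark or per slice.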

Results

Fig.2 shows some examples of synthesis computed after training the network. Fig.3a-b compares the landmark displacements obtained with synthesis-based and direct registration: the average displacement per slice was significantly lower in the synthesis-based case (p=1.63e-10, one-tailed t-test). Finally, Fig.4 shows the histological labels registered to the MRI volume with the aid of synthesis.
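The per-slice comparison corresponds to a one-tailed paired t-test; a minimal NumPy version, shown here on synthetic numbers rather than the actual displacements:

```python
import numpy as np

def paired_t_one_tailed(x, y):
    """Paired t statistic for testing mean(x) < mean(y);
    a large negative value supports the one-tailed alternative."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(d.mean() / (d.std(ddof=1) / np.sqrt(d.size)))

# Synthetic per-slice average displacements (mm), for illustration only
synth = np.array([0.8, 1.0, 0.9, 1.1, 0.7])   # synthesis-based registration
direct = np.array([1.5, 1.8, 1.6, 2.0, 1.4])  # direct registration
t = paired_t_one_tailed(synth, direct)        # strongly negative here
```

The p-value follows from the t distribution with n-1 degrees of freedom (e.g. via scipy.stats); only the statistic itself is sketched above.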

Discussion

With the proposed architecture, we realised a dataset-specific, synthesis-based framework to aid inter-modality registration. The multiple constraints in the loss function prevented overfitting on the very small dataset and also limited some known artefacts affecting GANs3 (e.g. degenerate translations, label permutations). We observed better performance when synthesising histology from MRI than in the opposite direction, in line with the segmentation supervision being available for the MRI data.

Conclusion

We proposed a new architecture to tackle the problem of registering histology to MRI. Both the qualitative and quantitative results show visible improvements compared to classical non-linear registration. This approach will be used as part of a pipeline for building a high-resolution computational atlas of the human brain based on histology.

Acknowledgements

Research primarily supported by the European Research Council (ERC) through the Starting Grant No 677697 (project “BUNGEE-TOOLS”), awarded to JEI. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.

References

1. Amunts K, Lepage C, Borgeat L, et al. BigBrain: An ultrahigh-resolution 3D human brain model. Science 2013;340(6139):1472-1475.

2. Iglesias JE, Konukoglu E, Zikic D, et al. Is synthesizing MRI contrast useful for inter-modality analysis? Med Image Comput Comput Assist Interv 2013;16(1):631-638.

3. Zhu J, Park T, Isola P, et al. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. IEEE ICCV 2017.

4. Huo Y, Xu Z, Moon H, et al. SynSeg-Net: Synthetic Segmentation Without Target Modality Ground Truth. IEEE Trans Med Imag 2018; doi:10.1109/TMI.2018.2876633.

5. Hiasa Y, Otake Y, Takao M, et al. Cross-Modality Image Synthesis from Unpaired Data Using CycleGAN. Lecture Notes in Computer Science 2018;11037.

6. Ding S, Royall JJ, Sunkin SM, et al. Comprehensive cellular-resolution atlas of the adult human brain. J Comp Neurol 2016;524(16):3127-3481.

7. Ashburner J, Friston KJ. Unified segmentation. NeuroImage 2005;26(3):839-851.

8. Iglesias JE, Modat M, Peter L, et al. Joint registration and synthesis using a probabilistic model for alignment of MRI and histological sections. Medical Image Analysis 2018;50:127-144.

Figures

Block diagram of the network architecture showing the different subnetworks (G1: generator MRI->HISTO; G2: generator HISTO->MRI; D1: discriminator for HISTO; D2: discriminator for MRI; S1: segmentation for MRI; S2: segmentation for HISTO) and highlighting the two loops: the first synthesising histology from MRI, the second synthesising MRI from histology.

Several examples of MRI slices and histological sections and their synthetic counterparts.

Registration error in terms of manual landmark displacement (in millimetres), averaged for each slice (top) and shown for each individual landmark (bottom).

Coronal, axial and sagittal views of the MRI volume and the labelled data from histological sections registered using the proposed synthesis-based approach.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)