4722

Deep learning segmentation (AxonDeepSeg) to generate axonal-property map from ex vivo human optic chiasm using light microscopy
Thibault Tabarin1, Maria Morozova2,3, Carsten Jaeger2, Henriette Rush3, Markus Morawski3, Stefan Geyer2, and Siawoosh Mohammadi1

1Department of Neurophysics, University Medical Center Hamburg-Eppendorf, Hamburg, Germany, 2Department of Neurophysics, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany, 3Paul Flechsig Institute of Brain Research, University of Leipzig, Leipzig, Germany

Synopsis

Development of in-vivo histology using MRI requires validation strategies against gold-standard methods. Ex-vivo histology combined with microscopy could become such a strategy; however, comparing larger fields-of-view will require automatic segmentation of axons and myelin. State-of-the-art segmentation increasingly relies on deep learning (DL). In this work, we investigated the recently published AxonDeepSeg deep learning algorithm (ADS). We successfully applied ADS to light microscopy images of an optic chiasm sample, improved the segmentation of myelin to access the full properties of individual fibers, and finally created microstructural maps such as the histological g-ratio map.

Introduction

Biophysical models combined with quantitative MRI (aka in-vivo histology1) have the potential to non-invasively assess the microstructure of brain tissue (e.g. axonal diameter/density, g-ratio2). However, this approach requires validation against a gold standard, e.g. ex-vivo histology using microscopy3,4,5. To achieve a reliable comparison with MRI, several voxels of the MR images have to be covered; therefore, large fields-of-view of several cm³ are necessary. Such images, however, contain millions of axons, making manual segmentation unachievable. With recent developments in deep learning, efficient methods for automatic segmentation of biological samples have become available6,7. In this work we adapted the recently published algorithm AxonDeepSeg7 (ADS, developed for scanning electron microscopy (SEM)) to extract microstructural information about axons in the human optic chiasm from light microscopy (LM) data. Additionally, we implemented a novel method to segment individual fibers, making new microscopic axonal features (not accessible via the original ADS algorithm) available, such as g-ratio and diameter of individual fibers.

Methods

Samples: A human optic chiasm sample (figure 1) was obtained at autopsy with prior informed consent (48 hrs postmortem, multiorgan failure) and with approval by the responsible authorities. Following standard Brain Bank procedures, blocks were immersion-fixed in 3% paraformaldehyde + 1% glutaraldehyde in phosphate-buffered saline (PBS, pH 7.4) at 4°C. After incubation in 1% osmium tetroxide for 1 hour, the samples were contrasted (figure 1b). The resin blocks were sectioned at 1 μm (semi-thin sections) on an ultramicrotome (Reichert Ultracut S, Leica). Sections were stained with 1% toluidine blue and digitized with an AxioScan Z1 slide scanner (Zeiss, 40×, 0.95 NA objective, ~250 nm in-plane resolution) (figure 1c-d).

Analysis: The analysis (written in Python 2.7) was performed in four steps (figure 2), detailed below:

Preprocessing (step 1): We converted the original image (figure 1c-d) to grayscale and inverted it to comply with the input requirements of the ADS algorithm.
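A minimal sketch of this preprocessing step, assuming scikit-image is available; the file names and the CLAHE clip limit (CLAHE appears in the pipeline of figure 2) are illustrative placeholders rather than the exact values used in this work:

    # Preprocessing sketch (step 1): grayscale conversion, inversion, CLAHE.
    # File names and the clip limit are hypothetical placeholders.
    from skimage import io, color, exposure, util

    rgb = io.imread("chiasm_section.tif")          # RGB light-microscopy mosaic
    gray = color.rgb2gray(rgb)                     # grayscale image in [0, 1]
    inverted = util.invert(gray)                   # myelin sheaths become bright, as ADS expects
    enhanced = exposure.equalize_adapthist(inverted, clip_limit=0.03)  # CLAHE
    io.imsave("chiasm_preprocessed.tif", util.img_as_ubyte(enhanced))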

Full-slice segmentation using ADS (step 2): We modified the original implementation of ADS (version 0.4) to perform segmentation on the full slice. After preprocessing, we applied the U-Net6 (ADS architecture7) using the SEM model (weights provided by ADS). We applied ADS "on the fly": as patches were created, they were analyzed and the final prediction image was assembled directly (figure 3b-e). The result is a classification of the pixels into three classes: background, myelin and axon (figure 3d-f). At this stage the myelin class is connected (figure 4b), making identification of individual fibers impossible.
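The patch-wise strategy can be sketched as follows; segment_patch is a hypothetical stand-in for the ADS U-Net forward pass (the exact call depends on the ADS version), and border handling (padding or overlapping of edge patches) is omitted:

    # Patch-wise, "on the fly" full-slice segmentation sketch (step 2).
    import numpy as np

    def segment_full_slice(image, segment_patch, patch_size=512):
        """Tile the preprocessed slice, segment each patch and reassemble the
        3-class prediction (0 = background, 1 = myelin, 2 = axon)."""
        h, w = image.shape
        prediction = np.zeros((h, w), dtype=np.uint8)
        for y in range(0, h, patch_size):
            for x in range(0, w, patch_size):
                patch = image[y:y + patch_size, x:x + patch_size]
                # each patch is classified as soon as it is extracted, so the
                # full mosaic never has to be held in memory by the network
                prediction[y:y + patch_size, x:x + patch_size] = segment_patch(patch)
        return prediction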

Myelin segmentation (step 3): To efficiently separate individual myelin sheaths, we first skeletonized the myelin mask, then filled the enclosed structures to create markers (figure 4c-e), and finally performed a watershed segmentation9 on the fiber mask using these markers. Each fiber was thereby identified and labeled together with its corresponding myelin and axon parts (figure 4f-g).
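A sketch of this fiber-separation step, assuming scikit-image and SciPy; the negative distance transform used here as the watershed elevation map is our assumption (the text only states that a watershed on the fiber mask was performed with the filled-skeleton markers):

    # Individual-fiber separation sketch (step 3).
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.morphology import skeletonize
    from skimage.segmentation import watershed

    def separate_fibers(axon_mask, myelin_mask):
        """Label individual fibers (axon + myelin) within the connected myelin class."""
        fiber_mask = axon_mask | myelin_mask
        skeleton = skeletonize(myelin_mask)              # one-pixel-wide myelin contours
        filled = ndi.binary_fill_holes(skeleton)         # closed contours become solid blobs
        markers, _ = ndi.label(filled & ~skeleton)       # one marker per enclosed fiber
        elevation = -ndi.distance_transform_edt(fiber_mask)
        fiber_labels = watershed(elevation, markers, mask=fiber_mask)
        axon_labels = fiber_labels * axon_mask           # per-fiber axon part
        myelin_labels = fiber_labels * myelin_mask       # per-fiber myelin part
        return fiber_labels, axon_labels, myelin_labels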

Map creation (step 4): After myelin segmentation, the axon and myelin parts of each fiber are well defined, allowing us to calculate microscopic features (e.g. the g-ratio of individual fibers). We then downsampled the image using a grid (figure 5a-b), identified the fibers contained in each grid cell and calculated structural properties per cell (figure 5b-d). Three different maps (axon diameter, g-ratio, fiber density) are exemplified in figure 5e-g.
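A sketch of the per-fiber feature extraction and grid mapping, again assuming scikit-image; it is shown for the g-ratio map only, with a hypothetical grid-cell size in pixels (the 25×25 μm grid of figure 5 would have to be converted using the ~250 nm pixel size):

    # Per-fiber g-ratio and grid-cell map sketch (step 4).
    import numpy as np
    from skimage.measure import regionprops

    def g_ratio_map(fiber_labels, axon_labels, cell_px=100):
        """Mean per-fiber g-ratio within each grid cell (NaN where no fiber lies)."""
        axon_area = {p.label: p.area for p in regionprops(axon_labels)}
        ny = int(np.ceil(fiber_labels.shape[0] / float(cell_px)))
        nx = int(np.ceil(fiber_labels.shape[1] / float(cell_px)))
        sums, counts = np.zeros((ny, nx)), np.zeros((ny, nx))
        for fiber in regionprops(fiber_labels):
            a = axon_area.get(fiber.label, 0)
            if a == 0:
                continue
            g = np.sqrt(a / float(fiber.area))           # inner over outer equivalent diameter
            cy = int(fiber.centroid[0] // cell_px)
            cx = int(fiber.centroid[1] // cell_px)
            sums[cy, cx] += g
            counts[cy, cx] += 1
        return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)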

Accuracy testing: ADS performance was assessed with the DICE coefficient, using manual labels of four representative patches created by TT and MM.
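For reference, a minimal Dice-coefficient sketch for comparing an ADS mask with a manual label:

    # Dice similarity coefficient between a predicted and a manual binary mask.
    import numpy as np

    def dice(pred, truth):
        pred, truth = pred.astype(bool), truth.astype(bool)
        denom = pred.sum() + truth.sum()
        return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0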

Results & Discussion

Initial tests suggest that the preprocessing applied to the histological images was sufficient to reproduce an accuracy (~80%) similar to that reported by the ADS authors. Applying ADS "patch by patch on the fly" allowed us to process large images (> 5 mm) on a standard desktop computer without encountering memory problems. ADS classifies pixels into three classes (myelin, axon, background) but does not distinguish between different myelin sheaths (figure 4b); we therefore developed a myelin segmentation approach that identifies individual fibers and their respective axon and myelin parts. In addition to the properties obtainable via ADS (e.g. axon diameters and fiber densities, figure 5e-f), this new method estimates individual fiber properties such as the histological g-ratio, the ratio between an axon's inner and the fiber's outer diameter (figure 5g).
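Written out for a single fiber, with d and A the diameter and cross-sectional area of the axon and of the whole fiber (the area form assumes approximately circular cross-sections):

    g = \frac{d_{\mathrm{axon}}}{d_{\mathrm{fiber}}} = \sqrt{\frac{A_{\mathrm{axon}}}{A_{\mathrm{axon}} + A_{\mathrm{myelin}}}}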

Conclusion

We have successfully adapted ADS (trained on SEM) to LM data, although LM produces images with lower resolution (~250 nm) and contrast than SEM. Moreover, our novel method to identify individual fibers provides additional microstructural information for validating biophysical MRI models. Future work will focus on retraining the neural network on LM histological images to improve performance. Finally, histological sections across the entire diameter of the optic nerve and tract will be investigated in order to better assess their topographic variability10.

Acknowledgements

This work was supported by the German Research Foundation (DFG Priority Program 2041 "Computational Connectomics" [AL 1156/2-1; GE 2967/1-1; MO 2397/5-1; MO 2249/3-1] and the Emmy Noether Stipend MO 2397/4-1) and by the BMBF (01EW1711A and B) in the framework of ERA-NET NEURON. The authors would like to thank Aldo Zaimi for his useful help with AxonDeepSeg and Rene Werner's team for fruitful discussions.

References

1- Weiskopf N, Mohammadi S, et al. Advances in MRI-based computational neuroanatomy: from morphometry to in-vivo histology. Curr. Opin. Neurol., 2015;28:313-322

2- Mohammadi S, Carey D, et al. Whole-brain in-vivo measurements of the axonal g-ratio in a group of 37 healthy volunteers. Frontiers in Neuroscience, 2015;9:441

3- Kelm N, West KL, et al. Evaluation of diffusion kurtosis imaging in ex vivo hypomyelinated mouse brain. NeuroImage, 2016;124:612-626

4- Morawski M, Kirilina E et al. Developing 3D microscopy with CLARITY on human brain tissue: Towards a tool for informing and validating MRI-based histology. NeuroImage, 2018;182:417-428

5- Calamante F, Tournier J-D, et al. Super-resolution track-density imaging studies of mouse brain: comparison to histology. NeuroImage, 2012;59:286-296

6- Ronneberger O, Fischer P, et al. U-Net: Convolutional Networks for Biomedical Image Segmentation. MICCAI, 2015;234-241

7- Zaimi A, Wabartha M, et al. AxonDeepSeg: automatic axon and myelin segmentation from microscopy data using convolutional neural networks. Scientific Reports, 2018;8:3816

8- Zuiderveld K. Contrast Limited Adaptive Histogram Equalization. In: P. Heckbert: Graphics Gems IV, Academic Press 1994

9- Beucher S, Meyer F. The morphological approach to segmentation: the watershed transformation. In: Dougherty ER, ed. Mathematical Morphology in Image Processing. 1993:433-481

10- Jonas JB, Müller-Bergh JA, et al. Histomorphometry of the human optic nerve. Invest. Ophthalmol. Vis. Sci., 1990;31(4)

Figures

Figure 1: Macro- and microanatomy of the optic chiasm

a) Optic chiasm sample after dissection. A 3 mm long segment was dissected from the optic tract and further bisected sagittally. We studied sample E. b) Part of E embedded in Durcupan resin. c) Semi-thin section (1 μm) contrasted in 1% uranyl acetate, dehydrated in graded alcohols, embedded in Durcupan resin, stained with toluidine blue and finally scanned with an AxioScan Z1 slide scanner. d) Higher magnification of the boxed area in (c). Myelin sheaths are dark blue; axonal cytoplasm and surrounding neuropil (background) are light blue.


Figure 2: Analysis pipeline

1- Preprocessing. We converted the original RGB image (figure 1c) to grayscale, inverted it (myelin sheaths become white), and applied CLAHE8, a histogram-enhancement algorithm. 2- Full-slice segmentation using ADS (figure 3). Using ADS, we produced a prediction mask for each patch. 3- Myelin segmentation (figure 4). We identified individual myelin sheaths for each segmented fiber. 4- Map creation (figure 5). We applied a grid and calculated statistics for various measurements (e.g. axon diameter) in each grid element.


Figure 3: Full-slice segmentation using ADS (step 2)

a) Entire histological section with grid. b) Higher magnification of the boxed area in (a). c) Individual patch (512×512 pixels, converted to grayscale and inverted), preprocessed for ADS analysis. d) Prediction mask resulting from ADS, including 3 tissue classes: axon (yellow), myelin (green), and background (dark blue). e) Construction of the prediction mask from individual patches as reported in the original ADS paper. f) Final prediction mask.


Figure 4: Myelin segmentation (step 3)

a) Prediction mask of the entire section (figure 3f). b) Higher magnification of the boxed area in (a). c) Myelin-only prediction mask. d) Skeletonization of the myelin mask. e) Markers created by filling the closed contours in the skeletonized myelin mask. f) Myelin segmentation result based on watershed segmentation9 of the fiber mask (myelin + axon) shown in (b), with the markers created in (e) as seeds for individual fibers. Individual myelin sheaths are coded with random colors. g) Axon segmentation result with color labels matching the myelin sheaths in (f).


Figure 5: Microstructure map (step 4)

a) Prediction mask of the entire slice after step 3 (figure 4). b) Higher magnification of the boxed area in (a), overlaid with a 25×25 μm mesh grid. c) Histogram of the axon diameter distribution in the highlighted area (green box) in (b). d) Zoomed-in map corresponding to the area in (b); each pixel corresponds to the mean of the histogram in (c). e, f, g) Maps of the entire section for axon diameter (e), fiber density (f), and histological g-ratio (g).


Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)