Automatic in vivo detection of transplanted cells in MRI using transfer learning paradigm
Muhammad Jamal Afridi1, Arun Ross2, Steven Hoffman2, and Erik M Shapiro3

1Department of Radiology and Department of Computer Science, Michigan State University, East Lansing, MI, United States, 2Department of Computer Science, Michigan State University, East Lansing, MI, United States, 3Department of Radiology, Michigan State University, East Lansing, MI, United States

Synopsis

Despite advances in machine learning and computer vision, many MRI studies rely on tedious manual procedures for quantifying imaging features, e.g., cell numbers, contrast area, etc. Developing intelligent, automatic tools for quantifying imaging data requires large-scale data for training and tuning, which is challenging to obtain in the clinical arena. Here, we present an approach that obviates the need for large-scale data collection to develop an intelligent, automatic tool for single cell detection in MRI. Our strategy achieves 91.3% accuracy for in vivo cell detection in MRI despite using only 40% of the data for training.

Introduction

Despite advances in machine learning and computer vision, many MRI studies rely on tedious manual procedures for quantifying imaging features, e.g., cell numbers, contrast area, etc. Developing intelligent, automatic tools for quantifying imaging data requires large-scale data for training and tuning, which is challenging to obtain in the clinical arena. This limitation is one key reason for the lack of intelligent automation in many MRI applications. We therefore present a transfer learning approach that obviates the need for large-scale data collection to develop an intelligent, automatic tool for single cell detection in MRI. Our strategy achieves 91.3% accuracy for in vivo cell detection in MRI despite using only 40% of the data for training.

Hypothesis and background

Machine learning (ML) is a field of artificial intelligence in which computers learn a task from humans and then perform that task automatically. Traditional machine learning approaches learn only one concept (e.g., spot detection in MRI) and require a large amount of training data to learn that specific concept. Humans, however, do not require thousands of examples to learn a simple new concept, because we learn new concepts by relating them to previously acquired knowledge. We therefore hypothesize that an ML algorithm should quickly (i.e., with less data) learn the concept of spots in MRI once it already has related knowledge of other natural entities whose images are freely available on the internet. This is known as the transfer learning paradigm in machine learning. How this paradigm can be used for detecting transplanted cells in MRI, however, has not previously been demonstrated.
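
As an illustration of this paradigm, below is a minimal pre-train/fine-tune sketch in Python/Keras. It is not the exact network from this study; the architecture, layer sizes, and data names (aux_patches, aux_labels, mri_patches, spot_labels) are illustrative assumptions.

```python
# Minimal pre-train / fine-tune sketch of the transfer-learning paradigm.
# Stage 1 learns auxiliary concepts from natural-image patches; Stage 2
# transfers those features and fine-tunes on a few labeled MRI patches.
from tensorflow.keras import layers, models, optimizers

def build_patch_cnn(n_classes):
    """Small CNN that classifies 9x9 single-channel image patches."""
    return models.Sequential([
        layers.Conv2D(16, 3, activation="relu", padding="same",
                      input_shape=(9, 9, 1)),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])

# Stage 1: learn auxiliary concepts from 9x9 patches of natural images.
aux_net = build_patch_cnn(n_classes=10)
aux_net.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# aux_net.fit(aux_patches, aux_labels, epochs=10)   # illustrative data

# Stage 2: transfer the learned feature layers, then fine-tune the classifier
# on a small number of labeled MRI spot / non-spot patches.
spot_net = build_patch_cnn(n_classes=2)
for src, dst in zip(aux_net.layers[:-1], spot_net.layers[:-1]):
    dst.set_weights(src.get_weights())               # copy learned weights
spot_net.compile(optimizer=optimizers.Adam(1e-4),
                 loss="sparse_categorical_crossentropy")
# spot_net.fit(mri_patches, spot_labels, epochs=10)  # illustrative data
```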

Our Machine Learning Strategy and MRI data

MRI:

We performed in vivo MRI of 5 rat brains. Three rats had previously received intracardiac injections of MPIO-labeled MSCs, delivering cells to the brain; two rats were naïve. Each MRI was performed at a field strength of 7 T using a 3D FLASH sequence with TR/TE = 30/10 ms and 100 μm isotropic voxels.

Machine Learning Approach:

(1) Since an ML algorithm learns from humans, an expert manually labels (clicks) all the spots in the 5 MRI scans. Only a small fraction of one MRI is used for training; the labels in the remaining MRI are used to evaluate the performance of our approach.

(2) Each MRI is first converted into a set of 9x9 patches by segmenting the brain and then extracting superpixels [1], as shown in Fig. 1 (a sketch of this patch-extraction step is given after this list). Patches that contain an expert's ground-truth click are spot patches, whereas the remaining patches (with no ground truth) are non-spot patches.

(3) Next, the algorithm transforms relevant natural images of different entities, freely available on the internet, into sets of 9x9 image patches, as shown in Fig. 2 and Fig. 3, and learns to differentiate them from random images.

(4) A convolutional neural network based deep learning technique then analyzes the spot patches and non-spot patches and, by relating them to all of the previously learned concepts/models, learns the new concept/model of spots in MRI (illustrated by the pre-train/fine-tune sketch at the end of the previous section).

(5) The learned model is then evaluated against the manual ground truth in a different, unseen in vivo MRI scan.
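
A minimal sketch of the patch-extraction step (2) is given below. The study uses entropy-rate superpixels [1]; skimage's SLIC is used here only as a readily available stand-in, and names such as slice_img and brain_mask are illustrative.

```python
# Sketch of step (2): convert one brain-masked MRI slice into candidate 9x9
# patches, one per superpixel, centered on the superpixel's darkest pixel.
import numpy as np
from skimage.segmentation import slic

def extract_candidate_patches(slice_img, brain_mask, n_segments=2000, half=4):
    """Return a list of (row, col, 9x9 patch) for one brain-masked slice."""
    labels = slic(slice_img, n_segments=n_segments, mask=brain_mask,
                  channel_axis=None)          # SLIC as a stand-in for [1]
    patches = []
    for lab in np.unique(labels):
        if lab == 0:                          # 0 assumed to mark masked-out pixels
            continue
        rows, cols = np.nonzero(labels == lab)
        # darkest pixel of this superpixel is the candidate spot centre
        idx = np.argmin(slice_img[rows, cols])
        r, c = rows[idx], cols[idx]
        if (half <= r < slice_img.shape[0] - half and
                half <= c < slice_img.shape[1] - half):
            patches.append((r, c, slice_img[r - half:r + half + 1,
                                            c - half:c + half + 1]))
    return patches
```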

Results and Discussion

In vivo MRI of rats injected with labeled MSCs had dark spots distributed throughout the brain, whereas the control animals' scans did not show spots. Single cell distribution in the brain was verified by histology. We used the area under the curve (AUC) as a standard measure to evaluate the performance of the trained model, taking the manual spot definitions as ground truth. Fig. 4 shows the results for detecting spot patches in MRI in two different testing scenarios. With only 5% of the available training data, we achieve an improvement of up to 22% over the traditional approach used for cell detection in MRI, demonstrating the fast learning ability of our approach. We also observed that telescope images of supernovae were the most useful auxiliary source (see Fig. 5). Using the same MRI data and settings as [2], our approach achieves 91.3% accuracy with only 40% of their training data. This automatic approach can be extended to other related MRI studies, enabling automation despite the limited availability of data.
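
For concreteness, the patch-level AUC evaluation on an unseen scan could look like the sketch below, reusing the illustrative names from the earlier sketches (spot_net, test_patches, test_labels).

```python
# Sketch of the evaluation step: score every candidate patch from an unseen
# scan with the fine-tuned network and compute the area under the ROC curve
# against the expert's ground-truth clicks. All variable names are illustrative.
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate_spot_detector(model, test_patches, test_labels):
    """AUC of predicted spot probabilities against expert ground truth."""
    scores = model.predict(test_patches[..., np.newaxis])[:, 1]  # P(spot)
    return roc_auc_score(test_labels, scores)

# auc = evaluate_spot_detector(spot_net, test_patches, test_labels)
```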

Acknowledgements

No acknowledgement found.

References

[1] Liu M-Y, et al. Entropy rate superpixel segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2011.

[2] Afridi MJ, Liu X, Shapiro E, Ross A. Automatic in vivo cell detection in MRI. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015). Springer; 2015. p. 391-399.

Figures

Fig. 1: (A) In vivo MRI slice, (B) Brain segmentation and superpixel extraction [1], (C) Patch extraction. Based on the darkest pixel in each superpixel, we create a patch that can potentially contain a spot.

Fig. 2: Spot patches are generated from the MRI; then, based on the characteristics of these patches, freely available data from the internet are transformed. These transformed patches are used by the ML algorithm to learn auxiliary concepts (models) before learning about spots.

Fig. 3: Images of different entities transformed by our algorithm. Some of them also produce spot-like images (a white spot on a black background, or vice versa), allowing the algorithm to learn related concepts from non-MRI data. Note that our approach then automatically chooses the most relevant sources of additional images.

Fig. 4: As the number of training examples on the x-axis increases, the accuracy increases, but it is important to note that when data are limited, the improvement over the baseline (no transfer learning) is large. This makes our approach highly desirable for applications with limited data, such as MRI-based applications.

Fig. 5: Based on our analysis, learning from telescope images of supernovae provided the most beneficial background information for learning the new concept of spots in MRI. One can imagine supernovae as spots in the sky. Eye images containing the iris ranked second.


