Applications: Image Processing, Analysis & Interpretation
Daniel Rueckert1

1Imperial College London, United Kingdom

Synopsis

We will give an overview of the current state of the art in deep learning for medical imaging applications such as segmentation and classification. In particular, we will illustrate deep learning approaches for semantic image segmentation based on Convolutional Neural Networks (CNNs). We will also show how adversarial approaches can be used to train CNNs that are invariant to differences in the input data (e.g. different scanners and imaging protocols) and that do not require any labelled data from the test domain. Finally, we will show some applications of CNNs in the context of image classification.

Convolutional Neural Networks for Image Segmentation

This talk will give an overview of deep learning approaches for applications such as medical image segmentation and image classification. We will illustrate deep learning approaches for image segmentation by studying DeepMedic, a multi-scale 3D Convolutional Neural Network (CNN) that has been successfully used for challenging tasks such as brain lesion segmentation in multi-modal MR imaging [1]. Furthermore, we will show how improved segmentation performance can be achieved using ensembles of neural networks. We will introduce Ensembles of Multiple Models and Architectures (EMMA), which achieves robust performance by aggregating the predictions of a wide range of CNNs with different architectures and parameters [2]. EMMA can be seen as an unbiased, generic deep learning model and has been shown to yield excellent performance, winning first place in the BRATS 2017 competition among more than 50 participating teams.
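To make the ensembling idea behind EMMA concrete, the sketch below (written in Python with PyTorch; the model interfaces and the unweighted averaging scheme are simplifying assumptions, not the published implementation) averages the per-voxel class probabilities of several independently trained segmentation networks before taking the most likely label:

import torch

def emma_predict(models, image):
    # models: list of trained segmentation CNNs, each mapping a
    #         (1, C_in, D, H, W) MR volume to (1, n_classes, D, H, W) logits
    # image:  input volume as a (1, C_in, D, H, W) tensor
    probs = None
    with torch.no_grad():
        for model in models:
            p = torch.softmax(model(image), dim=1)   # per-voxel class probabilities
            probs = p if probs is None else probs + p
    probs = probs / len(models)                      # unweighted average over the ensemble
    return probs.argmax(dim=1)                       # final label map

Because the average is taken over heterogeneous architectures, no single model's bias dominates the final segmentation, which is what makes the aggregated prediction robust.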

Domain Adaptation for CNNs

Even though CNNs enable accurate automatic segmentation for a variety of medical imaging problems, the performance of these systems often degrades when they are applied to new data that differ from the training data, for example, due to variations in imaging protocols. Manually annotating new data for each test domain is not a feasible solution. We will show how unsupervised domain adaptation using adversarial neural networks can be used to train a segmentation method that is more invariant to differences in the input data and that does not require any annotations on the test domain [3]. Specifically, we learn domain-invariant features by learning to counter an adversarial network, which attempts to classify the domain of the input data by observing the activations of the segmentation network. We demonstrate the potential of such a method for brain lesion segmentation in MR images of patients with traumatic brain injuries, acquired using different scanners and imaging protocols. Using our unsupervised approach, we obtain segmentation accuracies that are close to the upper bound of supervised domain adaptation.
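The following sketch outlines this adversarial training signal in simplified form (Python/PyTorch; the module interfaces, the loss weighting and the names segmenter and domain_classifier are illustrative assumptions and differ from the implementation in [3]). A domain classifier is trained to tell source from target activations, while the segmentation network is additionally trained to confuse it, so that its features become domain-invariant:

import torch
import torch.nn.functional as F

def adversarial_step(segmenter, domain_classifier, seg_opt, dom_opt,
                     src_img, src_lbl, tgt_img, alpha=0.1):
    # Labelled images come from the source domain only; target-domain
    # images are unlabelled and enter only through the domain classifier.
    src_feat, src_logits = segmenter(src_img)        # intermediate features + voxel-wise logits (assumed interface)
    seg_loss = F.cross_entropy(src_logits, src_lbl)  # supervised loss on the source domain

    tgt_feat, _ = segmenter(tgt_img)
    feats = torch.cat([src_feat, tgt_feat], dim=0)
    domains = torch.cat([torch.zeros(src_feat.size(0), dtype=torch.long, device=feats.device),
                         torch.ones(tgt_feat.size(0), dtype=torch.long, device=feats.device)])

    # 1) update the domain classifier to recognise the domain from the activations
    dom_loss = F.cross_entropy(domain_classifier(feats.detach()), domains)
    dom_opt.zero_grad(); dom_loss.backward(); dom_opt.step()

    # 2) update the segmenter to segment well while *fooling* the domain classifier
    adv_loss = -F.cross_entropy(domain_classifier(feats), domains)
    seg_opt.zero_grad(); (seg_loss + alpha * adv_loss).backward(); seg_opt.step()

At convergence the domain classifier can no longer tell the two domains apart from the activations, which is the practical meaning of "domain-invariant features" in the paragraph above.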

Convolutional Neural Networks for Image Classification

In the final part of this talk, we will focus on how CNNs can be designed and employed for image classification. As an exemplar application, we will use the identification of fetal standard scan planes acquired from 2D ultrasound during pregnancy screening examinations. This is a highly complex recognition task that requires years of training for human observers. We will demonstrate that a method based on CNNs can automatically detect 13 fetal standard views in freehand 2D ultrasound data, as well as provide a localisation of the fetal structures and organs via a bounding box [4]. An important contribution is that the neural network learns to localise the target anatomy using weak supervision based on image-level labels only. The network architecture is designed to operate in real time while providing optimal output for the localisation task.
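As a rough sketch of the weak-supervision idea (Python/PyTorch; the layer sizes, the class name WeaklySupervisedLocaliser and the use of 13 planes plus a background class are illustrative assumptions rather than the SonoNet architecture of [4]), a classifier can keep one spatial score map per class and reduce it to an image-level prediction by global average pooling; training then needs only image-level labels, while the pre-pooling map of the predicted class provides the localisation:

import torch
import torch.nn as nn

class WeaklySupervisedLocaliser(nn.Module):
    def __init__(self, n_classes=14):                      # 13 standard planes + background (assumption)
        super().__init__()
        self.features = nn.Sequential(                      # small convolutional feature extractor
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.score = nn.Conv2d(64, n_classes, 1)            # one spatial score map per class

    def forward(self, x):                                   # x: (N, 1, H, W) ultrasound frame
        maps = self.score(self.features(x))                 # (N, n_classes, H/4, W/4) score maps
        logits = maps.mean(dim=(2, 3))                       # global average pooling -> image-level logits
        return logits, maps

Training uses a standard cross-entropy loss on the logits against the image-level label; at test time the score map of the winning class can be upsampled and thresholded to place an approximate bounding box around the fetal structure.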

Acknowledgements

We would like to thank the members of the BioMedIA group, Department of Computing, Imperial College London for their help.

References

[1] K. Kamnitsas, C. Ledig, V. F. J. Newcombe, J. P. Simpson, A. D. Kane, D. K. Menon, D. Rueckert and B. Glocker. Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Medical Image Analysis 36: 61-78, 2017.

[2] K. Kamnitsas, W. Bai, E. Ferrante, S. McDonagh, M. Sinclair, N. Pawlowski, M. Rajchl, M. Lee, B. Kainz, D. Rueckert and B. Glocker. Ensembles of Multiple Models and Architectures for Robust Brain Tumour Segmentation. Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries (BrainLes 2017).

[3] K. Kamnitsas, C. F. Baumgartner, C. Ledig, V. F. J. Newcombe, J. P. Simpson, A. D. Kane, D. K. Menon, A. V. Nori, A. Criminisi, D. Rueckert and B. Glocker. Unsupervised Domain Adaptation in Brain Lesion Segmentation with Adversarial Networks. Information Processing in Medical Imaging (IPMI): 597-609, 2017.

[4] C. F. Baumgartner, K. Kamnitsas, J. Matthew, T. P. Fletcher, S. Smith, L. M. Koch, B. Kainz and D. Rueckert. SonoNet: Real-Time Detection and Localisation of Fetal Standard Scan Planes in Freehand Ultrasound. IEEE Transactions on Medical Imaging, 36(11): 2204-2215, 2017.