DeepRad: An Accessible, Open-source Tool for Deep Learning in Medical Imaging
Jinnian Zhang1, Samuel A Hurley2, Varun Jog1, and Alan B McMillan2

1Electrical & Computer Engineering, University of Wisconsin, Madison, WI, United States, 2Radiology, University of Wisconsin, Madison, WI, United States

Synopsis

Deep learning has shown incredible potential as a powerful tool in medical imaging; however, it remains inaccessible to users who lack expertise in computer programming, machine learning, or data science, and existing deep learning tools have not been designed to be user friendly. We have developed a powerful, flexible, and easy-to-use software tool, tailored specifically to medical imaging, that enables biomedical researchers and physicians with limited programming skills to apply deep learning to many common tasks.

Purpose

Deep learning in medical imaging has shown incredible potential to improve the detection, classification, and diagnosis of disease1,2. However, these techniques remain largely inaccessible to users, such as physicians and researchers, who have limited expertise in computer programming, machine learning, or data science, and existing deep learning tools have not been designed to be user friendly. We have therefore developed a powerful, flexible, and easy-to-use software tool tailored specifically to medical imaging, which enables non-programmers and researchers with limited programming skills to utilize deep learning for many common tasks.

Methods

We propose a software tool, “DeepRad”, that functions as a graphical user interface (GUI) for deep learning with medical images. It is built using the Python programming language and the PyQt GUI toolkit, enabling cross-platform support for Windows, macOS, and Linux. The backend uses TensorFlow and Keras, popular and well-validated libraries for implementing a variety of deep learning neural networks. Software executables will be distributed on a project website, and the source code will be hosted in a public repository such as GitHub to ensure widespread availability. As shown in Fig. 1, four types of deep learning models are available: (1) classification of images within or between disease populations, (2) regression of a test score or other measure against an image, (3) segmentation of images to identify structures related to normal anatomy or pathology, and (4) image-to-image synthesis to generate new or summative images from whole images. Well-studied model architectures such as VGG, DenseNet, ResNet, Inception, and U-Net are made available to users for these tasks.
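The abstract does not include backend code, but as a rough illustration of how a user's architecture choice could be wired to the Keras backend, the minimal sketch below instantiates a named backbone from tf.keras.applications and attaches a classification head. The build_classifier function, its parameters, and its defaults are hypothetical illustrations, not DeepRad's actual API.

import tensorflow as tf

def build_classifier(architecture="DenseNet121", input_shape=(224, 224, 3),
                     num_classes=2):
    # Illustrative only: `architecture` is assumed to name one of the models
    # exposed in tf.keras.applications (e.g. VGG16, ResNet50, DenseNet121,
    # InceptionV3), matching the well-studied structures listed above.
    backbone_cls = getattr(tf.keras.applications, architecture)
    backbone = backbone_cls(include_top=False, weights=None,
                            input_shape=input_shape, pooling="avg")
    # Attach a simple softmax classification head to the pooled features.
    outputs = tf.keras.layers.Dense(num_classes,
                                    activation="softmax")(backbone.output)
    model = tf.keras.Model(inputs=backbone.input, outputs=outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

A GUI like DeepRad's could call such a function with the architecture name selected in the panel, which is one plausible reason for exposing only well-studied, pre-packaged model structures.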

Discussion/Conclusion

DeepRad has two main modes. In “Quick Use” mode, four common tasks are available: classification, segmentation, regression, and synthesis (Fig. 2). Fig. 3 shows the panel used for classification: researchers with limited programming skills need only follow the steps shown in the panel and click “Train”. As shown in Fig. 3, the status of the training process is updated in real time in the bottom-right text box, and the top-right plot shows how the loss changes at each epoch. For portability, all parameters in the panel are stored in a JSON configuration file that mirrors the GUI, as shown in Fig. 4, and a data augmentation dialog (Fig. 5) gives the user a straightforward way to specify augmentation during training. In the future, we will add features for more advanced customization, such as the design of neural networks, loss functions, and optimizers. We believe that researchers can create more powerful deep learning architectures, and that DeepRad can make the design process easier and more efficient.
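The exact schema of “train_config.json” (Fig. 4) is not given in the abstract, so the keys in the sketch below are hypothetical placeholders meant only to illustrate how such a JSON file can mirror the settings in the GUI panel.

import json

# Hypothetical contents of a "train_config.json" file; the real schema used
# by DeepRad is not shown, so these keys are illustrative placeholders.
example_config = """
{
  "task": "classification",
  "model": "DenseNet121",
  "epochs": 50,
  "batch_size": 16,
  "learning_rate": 0.001,
  "augmentation": {"rotation_range": 10, "horizontal_flip": true}
}
"""

config = json.loads(example_config)
print(config["task"], config["model"], config["epochs"], config["augmentation"])

The "augmentation" entry above mirrors options that Keras exposes through its ImageDataGenerator (e.g. rotation_range, horizontal_flip), which is one plausible way the dialog in Fig. 5 could map onto the backend.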

Acknowledgements

No acknowledgement found.

References

1. Wernick MN, Yang Y, Brankov JG, Yourganov G, Strother SC. Machine learning in medical imaging. IEEE Signal Process Mag. 2010 Jul;27(4):25–38. PMID: 25382956

2. Wang S, Summers RM. Machine learning and radiology. Med Image Anal. 2012 Jul;16(5):933–951. PMID: 22465077

Figures

Figure 1: Types of deep learning models

Figure 2: The four main tasks available in “Quick Use” mode

Figure 3: Training process of a neural network in the classification panel

Figure 4: The file “train_config.json” stores all parameters contained in the classification panel.

Figure 5: The data augmentation dialog provides the user a straightforward way to specify augmentation during training.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019), Abstract 4821