Automatic detection and classification of brain tumors were performed using deep learning based on conventional MRI and clinical information. A total of 441 patients were included: 202 patients with high-grade glioma and 239 patients with brain metastases. Classification was performed using a ResNet34 architecture, with FLAIR images, post-contrast T1-weighted images and the patients' clinical information as input. Classification achieved an accuracy of 89%, specificity of 91% and sensitivity of 86%. For lesion localization, the mean intersection over union (IoU) score was 0.64±0.17. Our results indicate the promising potential of a deep learning approach for automatic, non-invasive diagnosis of patients with brain tumors.
Introduction
Differentiation between high-grade glioma (HGG) and brain metastasis is highly important because the two diagnoses entail different treatment strategies. While MRI is the modality of choice for the assessment of patients with brain tumors, differentiation between HGG and solitary brain metastasis is highly challenging due to their similar appearance on conventional MRI (1,2). Deep learning is rapidly becoming the state of the art in medical image analysis, leading to enhanced performance in various medical applications (3). The aim of this study was to perform automatic detection and classification of brain tumors using deep learning, based on conventional MRI and clinical information.
Methods
Patients and MRI Protocol: A total of 441 patients were included in this study: 202 patients with HGG and 239 patients with brain metastases. Scans were performed on 1.5 and 3.0 Tesla General Electric, Siemens and Philips systems. All scans included FLAIR images and post-contrast 3D T1-weighted images (T1WI+C). Patients' clinical information included age (60±13 years), weight (75±15 kg) and gender.
Image analysis: Performed in MATLAB R2018a and the fast.ai framework (PyTorch environment). The analysis workflow is illustrated in Figure 1 and includes:
Database preparation: Included brain extraction, intensity standardization and resizing of the images to 224×224 pixels. To increase the size of our dataset, data augmentation was performed, including rotation, flipping and random lighting.
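The augmentations listed above can be sketched as follows; this is a minimal illustration in NumPy, not the actual fast.ai transform pipeline, and the rotation step and intensity range used here are assumptions:

```python
import numpy as np

def augment(img, rng):
    """Toy version of the augmentations described in the text:
    rotation, flipping and random lighting. `img` is a 2-D float array."""
    # random 90-degree rotation (a simple stand-in for small-angle rotation)
    img = np.rot90(img, k=int(rng.integers(0, 4)))
    # random horizontal flip
    if rng.random() < 0.5:
        img = img[:, ::-1]
    # random lighting: scale intensities by a factor near 1 (range is an assumption)
    return img * rng.uniform(0.9, 1.1)

rng = np.random.default_rng(0)
augmented = augment(np.ones((224, 224), dtype=np.float32), rng)
```

In practice, fast.ai's built-in transforms would handle these augmentations on the fly during training rather than pre-computing them.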
Data labeling: The ground truth for classification was the post-operative, biopsy-based tumor diagnosis. For localization, the tumor area was manually marked in each patient based on the T1WI+C images using in-house software, resulting in four bounding-box coordinates for each tumor.
Network architecture: A ResNet34-based deep convolutional network was used, with the data randomly split into 80% training and 20% validation sets. The input data comprised the FLAIR images, T1WI+C images and the patients' clinical information, which was "colored" into the images. Network performance was tested on three databases: one input layer (T1WI+C images only), two input layers (T1WI+C images and clinical data) and three input layers (FLAIR images, T1WI+C images and clinical information) (Figure 1.a). For network training, the adaptive moment estimation (Adam) (4) stochastic gradient descent optimizer was used with learning rates decreasing over multiple epochs, a batch size of 32 and an initial learning rate of 0.02.
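The text states that clinical information was "colored" into the images without specifying the encoding. One plausible sketch, purely as an assumption about how scalar values can be embedded into image input, appends an extra channel holding constant-intensity bands (the function name, band layout and normalization constants below are all illustrative, not the study's actual scheme):

```python
import numpy as np

def add_clinical_channel(image, age, weight, sex):
    """Append a channel that encodes clinical variables as constant bands.
    Normalization constants (100 years, 150 kg) are illustrative assumptions."""
    h, w = image.shape
    chan = np.zeros((h, w), dtype=np.float32)
    # split the channel into three horizontal bands, one per variable
    chan[: h // 3, :] = age / 100.0                 # age, roughly in [0, 1]
    chan[h // 3 : 2 * h // 3, :] = weight / 150.0   # weight, roughly in [0, 1]
    chan[2 * h // 3 :, :] = 1.0 if sex == "F" else 0.0
    return np.stack([image, chan])                  # shape: (2, H, W)

x = np.random.default_rng(0).random((224, 224)).astype(np.float32)
stacked = add_clinical_channel(x, age=60, weight=75, sex="F")
```

A network's first convolutional layer would then be widened to accept the extra channel; alternatives such as concatenating clinical features at the fully connected layer are equally common.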
Evaluation of the network's results: The performance of the network was evaluated for both detection and classification. Class activation maps (CAMs) were used for visual inspection of the generated results, highlighting the image areas that triggered activation of the network. Detection was evaluated using the intersection over union (IoU) score, reflecting the network's ability to predict bounding boxes that overlap the ground-truth bounding boxes. For classification, the overall per-patient accuracy, sensitivity and specificity were evaluated based on the mean softmax prediction scores used to assign each tumor to the HGG or brain-metastasis class.
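The IoU score used for evaluating detection follows the standard definition: the area of overlap between predicted and ground-truth boxes divided by the area of their union. A minimal sketch, assuming boxes given as (x1, y1, x2, y2) corner coordinates:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes, each (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # overlap rectangle; widths/heights clamp to zero when boxes are disjoint
    inter_w = max(0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

# identical boxes give IoU = 1.0; half-overlapping boxes give 1/3
perfect = iou((0, 0, 10, 10), (0, 0, 10, 10))
partial = iou((0, 0, 10, 10), (5, 0, 15, 10))
```

A mean IoU of 0.64, as reported below, thus corresponds to predicted boxes covering roughly two thirds of the combined predicted-plus-true area on average.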
Results
Lesion classification: In general, good classification and localization were obtained in the majority of cases. Figure 2 shows representative class activation maps highlighting the image areas that triggered activation of the network, along with the predicted class (and its associated confidence) versus the ground truth. Classification results for differentiation between HGG and brain metastasis were accuracy=89%, specificity=91% and sensitivity=86%.
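The reported accuracy, sensitivity and specificity follow the standard confusion-matrix definitions, with HGG treated here as the positive class (an assumption; the abstract does not state which class was positive). A minimal sketch with illustrative counts, not the study's actual confusion matrix:

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate: HGG correctly flagged
    specificity = tn / (tn + fp)   # true-negative rate: metastases correctly flagged
    return accuracy, sensitivity, specificity

# illustrative counts chosen to reproduce metrics of the reported magnitude
acc, sens, spec = classification_metrics(tp=86, tn=91, fp=9, fn=14)
```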
Lesion localization: The overall mean IoU was 0.64±0.17. Figure 3 shows representative tumor-localization results in four patients, demonstrating the high consistency between the automatic localization (white rectangles) and the ground-truth, manually defined bounding boxes (red rectangles).
Discussion & Conclusion
Automatic deep learning-based detection and classification of brain tumors were performed. The best classification results were obtained when both imaging data and clinical information were integrated as input. Our results indicate the promising potential of a deep learning approach for automatic, non-invasive diagnosis of patients with brain tumors. Additional efforts in this project focus on sub-classification of metastatic tumors by their tissue of origin (work in progress). The proposed method may have many applications beyond the scope of this study, such as tumor grading and identification of tumor mutation status, among others.