Hariharan Ravishankar1, Chitresh Bhushan2, Arathi Sreekumari1, and Dattesh D Shanbhag1
1GE Healthcare, Bangalore, India, 2GE Global Research, Niskayuna, NY, United States
Synopsis
Most of the advancements with deep learning have come from mapping reconstructed MRI images to outcomes (e.g., tumor segmentation, survival rate, pathology risk maps). In this work, we present methods for performing critical medical imaging tasks such as segmentation and classification directly from raw k-space data, without image reconstruction. We specifically demonstrate that, from k-space MRI data, we can perform hippocampus segmentation as well as detection of motion-affected scans with performance similar to that obtained from image data. We also demonstrate that such an approach is more resilient to localized artifacts (e.g., signal loss in the hippocampus due to metal).
Introduction
Deep learning has improved the reconstruction of MR images from raw data and has become an indispensable part of the workflow chain [1-4]. However, one question that remains unanswered is the "information loss" incurred by the reconstruction process and its effect on outcomes of interest. Previous efforts have seen improvements when raw data is translated to a "raw image" and convolutional neural networks (CNNs) are used to accomplish segmentation tasks [5,6]. In this work, we directly map raw MRI k-space data to clinical outcomes using the AUTOMAP construct [1], thereby bypassing the reconstruction steps altogether (Fig. 1). We demonstrate the efficacy of the method in two common scenarios: segmentation (hippocampus segmentation in the brain) and classification (head motion detection). We also assess the advantages of raw data-based outcomes by simulating artifacts in the image and evaluating the resulting hippocampus segmentation performance.
Methods
Subjects: MRI data came from three different clinical sites.
Cohort A: Used for hippocampus segmentation; 855 clinically negative subjects.
Cohort B: Used for motion detection; 441 patients (stroke, tumors, tremors) from two sites.
All studies were approved by the appropriate IRBs.
MRI Scanner and Acquisition:
Cohort A: GE 3T Discovery MR750 scanner, dedicated head coil, 2D SSFSE data.
Cohort B: 1.5T GE Signa HDxt MRI scanner, 8-channel brain and head-neck-spine coils, multiple MRI protocols (T1W, T1W+CE, T2W, FLAIR) and orientations.
Data for both Cohort A and Cohort B were acquired with a Cartesian k-space trajectory.
Ground-truth (GT):
A. Hippocampus segmentation: Ground-truth data were obtained by registering the MNI brain atlas to patient data using non-rigid registration.
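The abstract does not name the registration tool; as an illustrative sketch only, atlas-based label propagation could be done with ANTsPy, where the file names below are placeholders:

```python
import ants

# Placeholders for the patient volume, the MNI template, and its
# hippocampus label map (file names are not from the abstract).
patient = ants.image_read('patient.nii.gz')
atlas = ants.image_read('mni_template.nii.gz')
atlas_labels = ants.image_read('mni_hippocampus_labels.nii.gz')

# Non-rigid (SyN) registration of the atlas to the patient volume.
reg = ants.registration(fixed=patient, moving=atlas, type_of_transform='SyN')

# Propagate the atlas labels to patient space; nearest-neighbour
# interpolation keeps the label values discrete.
gt_mask = ants.apply_transforms(fixed=patient, moving=atlas_labels,
                                transformlist=reg['fwdtransforms'],
                                interpolator='nearestNeighbor')
ants.image_write(gt_mask, 'hippocampus_gt.nii.gz')
```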
B. Motion detection (see Figure 2): A trained radiologist labelled each MRI volume as belonging to:
a. Class-1: images with no motion artifacts present, or images with motion present but acceptable for diagnostic purposes; and
b. Class-2: images with motion artifacts present that are not acceptable for diagnostic purposes.
Raw k-space data: For all the experiments described below, we resampled the data to a 128x128 in-plane matrix. Raw data were then simulated by taking the Fourier transform of the magnitude data to obtain complex k-space data.
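A minimal numpy sketch of this simulation step (the use of fftshift to centre DC is our assumption; the abstract only states that complex k-space is obtained from the Fourier transform of the magnitude data):

```python
import numpy as np

def simulate_kspace(magnitude_image):
    # 2D FFT of the real-valued magnitude image yields complex k-space;
    # fftshift moves the DC component to the centre of the 128x128 grid.
    assert magnitude_image.shape == (128, 128)
    return np.fft.fftshift(np.fft.fft2(magnitude_image))

def kspace_to_network_input(kspace):
    # Real and imaginary parts stacked as two channels for the network.
    return np.stack([kspace.real, kspace.imag], axis=-1).astype(np.float32)
```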
Deep Learning (DL) Experiments
For all experiments, we used the AUTOMAP architecture, the UNet architecture, and a classification network as described in the literature [1,7,8]. Typical parameters: epochs = 100, optimizer = rmsprop, batch_size = 32. See Figures 2 and 3.
Train and test split:
A. Hippocampus segmentation: 30,000 training images and 1,000 test images.
B. Motion detection: 15,000 training images and 1,500 test images.
DL Experiments:
Experiment #1 (AUTOMAP-based image reconstruction): Input: raw k-space; output: MR image; mean-squared error (MSE) loss.
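For concreteness, a minimal Keras sketch of an AUTOMAP-style network in the spirit of Zhu et al. [1]; the layer widths and kernel sizes here are illustrative assumptions, not the exact published architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

N = 128  # in-plane matrix size used in this work

def automap_recon(n=N):
    # Flattened real+imaginary k-space in; fully connected layers learn
    # the domain transform, followed by convolutional refinement.
    inp = layers.Input(shape=(2 * n * n,))
    x = layers.Dense(n * n, activation='tanh')(inp)
    x = layers.Dense(n * n, activation='tanh')(x)
    x = layers.Reshape((n, n, 1))(x)
    x = layers.Conv2D(64, 5, padding='same', activation='relu')(x)
    x = layers.Conv2D(64, 5, padding='same', activation='relu')(x)
    out = layers.Conv2D(1, 7, padding='same')(x)
    return models.Model(inp, out)

automap = automap_recon()
# rmsprop optimizer and MSE loss, matching the stated Experiment #1 setup.
automap.compile(optimizer='rmsprop', loss='mse')
```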
Experiment #2 (image data to segmentation mask): 2D UNet segmentation with Dice loss. Input: MR image; output: hippocampus segmentation mask. Evaluated on test data reconstructed in Experiment #1.
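For reference, a standard soft Dice loss as commonly paired with UNet (a generic formulation; the abstract does not specify the exact variant used):

```python
import tensorflow as tf

def dice_loss(y_true, y_pred, smooth=1.0):
    # Soft Dice over flattened masks; `smooth` guards against empty masks.
    y_true_f = tf.reshape(y_true, [-1])
    y_pred_f = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true_f * y_pred_f)
    dice = (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true_f) + tf.reduce_sum(y_pred_f) + smooth)
    return 1.0 - dice
```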
Experiment #3 (raw data to segmentation mask): AUTOMAP + UNet image segmentation. Input: raw data; output: hippocampus segmentation mask (see Figure 3a).
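Conceptually, the raw-data-to-mask network chains the two sub-models end to end. A sketch under the assumption that both are Keras models, reusing automap_recon and dice_loss from the sketches above (build_unet here is a deliberately tiny stand-in for a full UNet [7]):

```python
from tensorflow.keras import layers, models

def build_unet(input_shape=(128, 128, 1)):
    # Toy single-skip UNet; a real implementation would be deeper [7].
    inp = layers.Input(shape=input_shape)
    c1 = layers.Conv2D(16, 3, padding='same', activation='relu')(inp)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(32, 3, padding='same', activation='relu')(p1)
    u1 = layers.Concatenate()([layers.UpSampling2D()(c2), c1])
    c3 = layers.Conv2D(16, 3, padding='same', activation='relu')(u1)
    out = layers.Conv2D(1, 1, activation='sigmoid')(c3)
    return models.Model(inp, out)

# Chain: flattened k-space -> AUTOMAP image -> UNet mask.
k_in = layers.Input(shape=(2 * 128 * 128,))
mask = build_unet()(automap_recon()(k_in))
raw_to_mask = models.Model(k_in, mask)
raw_to_mask.compile(optimizer='rmsprop', loss=dice_loss)
```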
Experiment #4 (impact of artifacts on hippocampus segmentation; see Figure 3b): Simulate a metal artifact by randomly modifying the signal intensity by 5% to 50% within the hippocampus mask, adding or subtracting it from the image data. Obtain raw k-space data from the artifact-corrupted images. Perform the segmentation task as described in Experiments #2 and #3.
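A numpy sketch of our reading of this corruption step (the exact perturbation model is not spelled out in the abstract, so the multiplicative form below is an assumption):

```python
import numpy as np

def add_metal_artifact(image, hippo_mask, rng=None):
    # Perturb intensities inside the hippocampus mask by a random
    # 5-50% fraction, randomly added or subtracted.
    rng = rng if rng is not None else np.random.default_rng()
    frac = rng.uniform(0.05, 0.50)
    sign = rng.choice([-1.0, 1.0])
    corrupted = image.astype(np.float32).copy()
    corrupted[hippo_mask > 0] *= (1.0 + sign * frac)
    # Re-derive complex k-space from the corrupted magnitude image.
    kspace = np.fft.fftshift(np.fft.fft2(corrupted))
    return corrupted, kspace
```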
Experiment #5 (image data to classification task): CNN-based classification network [8]. Input: image data; output: motion labels (Class-1, Class-2).
Experiment #6 (raw data to classification task): AUTOMAP followed by the classification network. Input: raw data; output: motion labels (Class-1, Class-2).
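The classification network follows reference [8], whose architecture is not reproduced in the abstract; as a generic stand-in only, a small binary CNN classifier for the two motion classes might look like this:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_motion_classifier(input_shape=(128, 128, 1)):
    # Generic small CNN; NOT the exact architecture of reference [8].
    inp = layers.Input(shape=input_shape)
    x = inp
    for filters in (16, 32, 64):
        x = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
        x = layers.MaxPooling2D()(x)
    x = layers.GlobalAveragePooling2D()(x)
    out = layers.Dense(1, activation='sigmoid')(x)  # Class-1 vs Class-2
    return models.Model(inp, out)

clf = build_motion_classifier()
clf.compile(optimizer='rmsprop', loss='binary_crossentropy',
            metrics=[tf.keras.metrics.AUC()])
```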
Evaluation Criteria: All evaluations were performed on test cases only.
Experiment #1: mean absolute intensity error (MAE);
Experiments #2, #3, and #4: Dice overlap;
Experiments #5 and #6: area under the ROC curve (AUC).
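For concreteness, minimal implementations of the three criteria (normalizing MAE by the reference intensity range to report a percentage is our assumption):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def mae_percent(reference, reconstruction):
    # Experiment #1: mean absolute intensity error as a percentage.
    return 100.0 * np.mean(np.abs(reference - reconstruction)) / np.ptp(reference)

def dice_overlap(mask_a, mask_b):
    # Experiments #2-#4: Dice overlap between binary masks.
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Experiments #5-#6: AUC via scikit-learn, e.g.
# auc = roc_auc_score(true_labels, predicted_scores)
```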
Results and Discussion
Overall, for Experiment #1, the MAE is ~2% (Figure 4A). The Dice overlap score for hippocampus segmentation is similar (~83%) for the image-based and raw data-based methods, even on AUTOMAP-reconstructed data (Figures 4B and 4C). Since the metal artifact is localized in the hippocampus region, the CNN-based UNet segmentation is affected by the artifact's signal-intensity drop (10%) and is unable to predict the hippocampus (Figure 5). In contrast, the raw data-based approach is still able to segment the hippocampus, since the artifact is not localized in k-space but spread out. The intermediate representation of the hippocampus segmentation manifold learnt by the raw-data approach (Figure 5) does not exactly represent brain anatomy but is more abstract. This abstract feature space makes raw data-based analytics more resilient to imaging artifacts when performing certain medical imaging tasks. Overall, image-based segmentation worked in only 13% of test cases with induced metal artifacts, while for raw data-based methods the success rate was ~55%. A signal-intensity drop >30% caused failure even with raw-data mapping.
For the head motion detection classification task, the AUC was 98% for Experiment #5, while for raw data-based classification (Experiment #6) it dropped marginally to ~96%.
Conclusion
In this work, we demonstrated that direct raw-data-to-outcome measures are possible with an AUTOMAP-based network construct. We observed nearly equivalent performance between image-based and raw data-based approaches for imaging tasks such as segmentation and classification. However, in the presence of localized imaging artifacts, raw data-based methods were much more adept at handling the corruption, thereby providing a more robust outcome measure than image-based methods.
Acknowledgements
No acknowledgement found.
References
1) Zhu B, et al. Image reconstruction by domain-transform manifold learning. Nature 2018;555(7697):487.
2) Hyun CM, et al. Deep learning for undersampled MRI reconstruction. Physics in Medicine and Biology 2018.
3) Chang PD, Liu MZ, Chow DS, Khy M, Filippi CG, Lupo J, Hess C. Deep learning for sparse MR reconstruction in glioma patients.
4) Schlemper J, et al. A deep cascade of convolutional neural networks for dynamic MR image reconstruction. IEEE Transactions on Medical Imaging 2018;37(2):491-503.
5) Schlemper J, et al. Cardiac MR segmentation from undersampled k-space using deep latent representation learning. In: Frangi A, Schnabel J, Davatzikos C, Alberola-López C, Fichtinger G (eds). Medical Image Computing and Computer Assisted Intervention (MICCAI) 2018. Lecture Notes in Computer Science, vol 11070. Springer, Cham.
6) Huang Q, Chen X, Metaxas D, Nadar MS. Brain segmentation from k-space with end-to-end recurrent attention network. In: Shen D, et al. (eds). Medical Image Computing and Computer Assisted Intervention (MICCAI) 2019. Lecture Notes in Computer Science, vol 11766. Springer, Cham.
7) Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2015. Lecture Notes in Computer Science, vol 9351:234-241. Springer.
8) Sreekumari A, Shanbhag D, Yeo D, Foo T, Pilitsis J, Polzin J, Patil U, Coblentz A, Kapadia A, Khinda J, Boutet A, Port J, Hancu I. American Journal of Neuroradiology 2019;40(2):217-223.