We have developed QC-Automator, a deep-learning-based automated quality control (QC) tool that detects artifacts in diffusion-weighted MRI (dMRI) data. This enables appropriate corrective steps to be taken at the pre-processing stage, improving data quality and ensuring that artifacts do not affect the results of subsequent image analysis. Our convolutional-neural-network-based tool detects motion, multiband interleaving, ghosting, susceptibility, herringbone, and chemical shift artifacts with 94–98% accuracy. It is robust and fast, and paves the way for efficient and effective artifact detection in large datasets.
1-Datasets:
We have created a dataset of ~14852 samples of artifacts (motion, multiband interleaving, ghosting, susceptibility, herringbone, and chemical shift) and ~100000 samples of artifact-free data, through manual inspection of three differently acquired dMRI datasets by two experts. Each sample was labeled according to the type of artifact present. Artifacts manifest differently across views: some are more distinguishable in the axial view than the sagittal, and vice versa. Axial slices were therefore used as samples for ghosting, susceptibility, herringbone, and chemical shift artifacts, while sagittal slices were used for motion and multiband interleaving artifacts. Figure 1 depicts the distribution of artifacts in our dataset. Figure 2 shows representative examples of the artifacts that can be detected.
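The assignment of each artifact type to the view in which it is most distinguishable can be captured as a simple lookup, sketched below; the names (`VIEW_FOR_ARTIFACT`, `view_for`) are illustrative and not taken from the original QC-Automator code.

```python
# Hypothetical mapping from artifact type to the slice orientation used
# for sampling, as described in the Datasets section.
VIEW_FOR_ARTIFACT = {
    "ghosting": "axial",
    "susceptibility": "axial",
    "herringbone": "axial",
    "chemical_shift": "axial",
    "motion": "sagittal",
    "multiband_interleaving": "sagittal",
}

def view_for(artifact: str) -> str:
    """Return the slice orientation used to sample a given artifact type."""
    return VIEW_FOR_ARTIFACT[artifact]
```

This mapping also determines which of the two classifiers (axial or sagittal) a given sample is routed to during training.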
2-Convolutional Neural Network Approach:
Because humans detect artifacts by identifying patterns in MRI data, deep learning tools, especially convolutional neural networks (CNNs), are well suited to this purpose. CNNs have a large number of parameters to optimize during training, which in turn requires large amounts of training data and computational power. To overcome this, we adopted a transfer learning approach [3], which consists of taking a classifier trained on one task and re-training a small number of parameters using a smaller amount of data to perform well on another task. We used the pre-trained VGG-Net, one of the top architectures in computer vision [4], as our base CNN. The top layer of the network was removed and replaced with a fully connected layer of 256 neurons, followed by a dense layer that performs the two-class classification (artifactual vs. artifact-free) using a softmax activation. All parameters were fixed except those in the newly added layers, which vastly reduced the number of parameters requiring training. We trained two classifiers: one for artifacts that manifest in the sagittal view and one for artifacts that manifest in the axial view. We used 80% of the data for training and 20% for testing. All slice images were zero-padded to make them square and replicated three times to fill the three input channels of the network; each image was scaled so that its intensities lay between 0 and 1. The classifiers were trained for 20 epochs using the RMSprop optimizer with a learning rate of 2e-4 and a cross-entropy loss function, and were implemented in Keras [5].
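The preprocessing and transfer-learning setup described above can be sketched as follows in Keras. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names (`preprocess_slice`, `build_artifact_classifier`) and the use of VGG16 specifically (the abstract says only "VGG-Net") are assumptions.

```python
# Minimal sketch of the slice preprocessing and frozen-base transfer
# learning classifier described in the text. Names are illustrative.
import numpy as np
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import VGG16

def preprocess_slice(slice_2d):
    """Zero-pad a 2D slice to a square, scale intensities to [0, 1],
    and replicate it across three channels for the VGG input."""
    h, w = slice_2d.shape
    size = max(h, w)
    padded = np.zeros((size, size), dtype=np.float32)
    top, left = (size - h) // 2, (size - w) // 2
    padded[top:top + h, left:left + w] = slice_2d
    rng = padded.max() - padded.min()
    if rng > 0:
        padded = (padded - padded.min()) / rng
    return np.stack([padded] * 3, axis=-1)

def build_artifact_classifier(input_shape=(224, 224, 3), weights="imagenet"):
    """VGG16 base with frozen weights, plus a 256-neuron fully connected
    layer and a 2-way softmax (artifactual vs. artifact-free)."""
    base = VGG16(weights=weights, include_top=False, input_shape=input_shape)
    base.trainable = False  # only the newly added layers are trained
    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer=optimizers.RMSprop(learning_rate=2e-4),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Two such models would be trained, one on sagittal samples and one on axial samples, each for 20 epochs on the 80% training split.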
[1] M. S. Graham, I. Drobnjak, and H. Zhang, "A supervised learning approach for diffusion MRI quality control with minimal training data," NeuroImage, 2018.
[2] C. Kelly, M. Pietsch, S. Counsell, and J.-D. Tournier, "Transfer learning and convolutional neural net fusion for motion artefact detection," in Proc. Intl. Soc. Mag. Reson. Med., 2016, pp. 1-2.
[3] S. Hoo-Chang, H. R. Roth, M. Gao, L. Lu, Z. Xu, I. Nogues, et al., "Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning," IEEE Transactions on Medical Imaging, vol. 35, p. 1285, 2016.
[4] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
[5] F. Chollet, et al., "Keras," 2015. https://github.com/fchollet/keras