Assessing the quality of MRI data is necessary and time-consuming. MRIQC automates this quantification by extracting a number of image quality measures (IQMs), and eases individual inspection through specialized visual reports. MRIQC supports T1w, fMRI and dMRI brain images. MRIQC is an easy-to-use, “plug-and-play” tool: it is multi-platform (running on the desktop as well as on high-performance computing clusters) and BIDS- and BIDS-Apps compliant. The automatically extracted IQMs can inform unbiased exclusion criteria in MRI analyses.
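As an illustration of the “plug-and-play” usage, the sketch below invokes MRIQC following the standard BIDS-Apps calling convention (input BIDS directory, output directory, analysis level). The dataset paths are hypothetical, and exact command-line options may vary across MRIQC versions; it assumes MRIQC is installed and available on the PATH.

```python
# Minimal sketch of running MRIQC as a BIDS-App from Python.
# Paths are hypothetical; options may differ between MRIQC versions.
import subprocess

bids_dir = "/data/ds001"                          # hypothetical BIDS dataset
output_dir = "/data/ds001/derivatives/mriqc"      # hypothetical output folder

# Participant level: compute IQMs and the individual reports.
subprocess.run(["mriqc", bids_dir, output_dir, "participant"], check=True)

# Group level: aggregate IQMs and build the group reports.
subprocess.run(["mriqc", bids_dir, output_dir, "group"], check=True)
```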
The primary building blocks of MRIQC are: 1) a preprocessing workflow; 2) the extraction of a set of IQMs from the minimally preprocessed derivatives of each individual image; and 3) a report-generation tool.
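The skeleton below illustrates how these three building blocks fit together. The function names are hypothetical placeholders, not MRIQC's actual API.

```python
# Hypothetical orchestration of the three building blocks; function names
# are illustrative placeholders, not MRIQC's actual API.
from typing import Dict, List


def preprocess(in_file: str) -> Dict[str, str]:
    """1) Minimal preprocessing: return paths to derivatives (masks, etc.)."""
    raise NotImplementedError  # e.g., a Nipype workflow (see sketches below)


def extract_iqms(derivatives: Dict[str, str]) -> Dict[str, float]:
    """2) Compute the IQMs from the minimally preprocessed derivatives."""
    raise NotImplementedError


def write_reports(all_iqms: List[Dict[str, float]]) -> None:
    """3) Generate one individual report per image and a group report."""
    raise NotImplementedError


def run(dataset: List[str]) -> None:
    all_iqms = []
    for in_file in dataset:
        derivatives = preprocess(in_file)
        all_iqms.append(extract_iqms(derivatives))
    write_reports(all_iqms)
```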
Anatomical preprocessing workflow - T1w images are corrected for intensity inhomogeneity, skull-stripped, segmented into brain tissues, and normalized to MNI space. Additionally, a mask containing only the air surrounding the head and an artifacts mask are computed.
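A minimal Nipype sketch approximating these anatomical steps with standard interfaces (N4 bias-field correction, BET, FAST, ANTs registration) is shown below. Input and template paths are hypothetical, and MRIQC's actual workflow uses its own selection of tools and parameters.

```python
# Sketch of the anatomical steps with off-the-shelf Nipype interfaces;
# MRIQC's actual workflow may use different tools and settings.
from nipype.pipeline import engine as pe
from nipype.interfaces import ants, fsl

inu = pe.Node(ants.N4BiasFieldCorrection(dimension=3), name="inu_correct")
inu.inputs.input_image = "sub-01_T1w.nii.gz"        # hypothetical input

skullstrip = pe.Node(fsl.BET(mask=True), name="skullstrip")
segment = pe.Node(fsl.FAST(segments=True), name="segment")

norm = pe.Node(ants.RegistrationSynQuick(), name="normalize")
norm.inputs.fixed_image = "MNI152_T1_1mm.nii.gz"    # hypothetical template path

wf = pe.Workflow(name="anat_preproc_sketch")
wf.connect([
    (inu, skullstrip, [("output_image", "in_file")]),
    (skullstrip, segment, [("out_file", "in_files")]),
    (skullstrip, norm, [("out_file", "moving_image")]),
])
# The air and artifact masks would be derived downstream, e.g. from the
# brain mask ("mask_file" output of BET) and the head contour.
# wf.run()
```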
Functional preprocessing workflow - fMRI images are corrected for head motion, skull-stripped, and normalized to the MNI template. Additionally, a mask for ghosting artifacts and a mask for tissue-less areas surrounding the head are computed.
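Analogously, the sketch below approximates the functional steps (head-motion correction, skull-stripping of the average BOLD image, spatial normalization) with standard Nipype interfaces. File names are hypothetical and the tool choices are illustrative rather than MRIQC's actual ones.

```python
# Sketch of the functional steps with off-the-shelf Nipype interfaces;
# tool choices and parameters are illustrative only.
from nipype.pipeline import engine as pe
from nipype.interfaces import ants, fsl

hmc = pe.Node(fsl.MCFLIRT(mean_vol=True, save_plots=True), name="hmc")
hmc.inputs.in_file = "sub-01_task-rest_bold.nii.gz"   # hypothetical input

skullstrip = pe.Node(fsl.BET(mask=True), name="skullstrip")

norm = pe.Node(ants.RegistrationSynQuick(), name="normalize")
norm.inputs.fixed_image = "MNI152_EPI_2mm.nii.gz"     # hypothetical template path

wf = pe.Workflow(name="func_preproc_sketch")
wf.connect([
    (hmc, skullstrip, [("mean_img", "in_file")]),
    (skullstrip, norm, [("out_file", "moving_image")]),
])
# The ghost and air masks would be computed downstream from the brain mask
# and the phase-encoding direction stored in the BIDS metadata.
# wf.run()
```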
Image Quality Measures (IQMs) - A battery of quality measures [2] is defined for both anatomical and functional images. Some IQMs are general quality estimators (e.g., the signal-to-noise ratio), and others are designed to identify specific artifacts.
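As an example of a general quality estimator, the snippet below computes a signal-to-noise ratio using one common definition (mean foreground intensity over the standard deviation of the air background). File names are hypothetical; the exact formulas implemented in MRIQC are documented online [2].

```python
# Illustrative SNR computation: mean signal within the brain mask divided by
# the standard deviation of the air background (one common definition).
import nibabel as nb
import numpy as np

img = nb.load("sub-01_T1w.nii.gz").get_fdata()                     # hypothetical paths
brain_mask = nb.load("sub-01_T1w_brainmask.nii.gz").get_fdata() > 0
air_mask = nb.load("sub-01_T1w_airmask.nii.gz").get_fdata() > 0

snr = img[brain_mask].mean() / img[air_mask].std()
print(f"SNR = {snr:.2f}")
```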
Visual reports - MRIQC generates one individual report per image and a group report per modality (anatomical or functional). The individual report for T1w images (Figure 1) includes several mosaic views of the original scan: the original intensities, a version with the intensities saturated to enhance artifacts, a zoomed-in view, etc. These plots are useful to check for movement of the head and the eyeballs, since the corresponding signal usually spills over the background along the phase-encoding direction. Moreover, ghosts and other artifacts are easier to spot in these “background” mosaics. The individual report for fMRI (Figure 2) includes mosaic views of several volumes of interest, such as the average BOLD signal and the temporal standard deviation map. Additionally, a panel combines plots to identify bulk intensity peaks and dropouts, spectral spikes, and confounds such as framewise displacement (FD) and DVARS, together with an fMRI “carpet plot” [3]. Once all the IQMs have been computed for every input image, a group report per modality is generated to summarize the whole sample (Figure 3). The user can navigate the colored distribution plots and click on particular subjects to open their corresponding individual reports.
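The sketch below illustrates the carpet-plot idea [3]: all in-mask voxel time series are standardized and displayed as a voxels-by-time image. File names are hypothetical, and MRIQC's report adds tissue ordering, confound traces (FD, DVARS) and interactivity on top of this basic display.

```python
# Minimal sketch of an fMRI "carpet plot": in-mask voxel time series shown as
# a (voxels x time) image after per-voxel standardization.
import matplotlib.pyplot as plt
import nibabel as nb
import numpy as np

bold = nb.load("sub-01_task-rest_bold.nii.gz").get_fdata()          # X x Y x Z x T
mask = nb.load("sub-01_task-rest_bold_brainmask.nii.gz").get_fdata() > 0

data = bold[mask]                                                   # (voxels, T)
data = (data - data.mean(axis=1, keepdims=True)) / (data.std(axis=1, keepdims=True) + 1e-6)

plt.imshow(data, cmap="gray", aspect="auto", vmin=-2, vmax=2)
plt.xlabel("time (TR)")
plt.ylabel("voxels")
plt.savefig("carpetplot.png", dpi=150)
```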
MRIQC started as a fork of the QAP (http://preprocessed-connectomes-project.org/quality-assessment-protocol/). We thank the QAP team, and particularly S. Giavasis and K. Somandepali, for their implementation of several of the IQMs used in MRIQC.
This work is funded by the Laura and John Arnold Foundation.
1. Alfaro-Almagro F, et al. UK Biobank. OHBM 2016, Geneva, Switzerland; 2016. p. 1877.
2. http://mriqc.rtfd.io/en/latest/measures.html
3. Power JD. NeuroImage. 2017. doi:10.1016/j.neuroimage.2016.08.009.
4. Gorgolewski KJ, et al. Nipype [Software]. 2016. doi:10.5281/zenodo.50186.
5. Gorgolewski KJ, et al. BIDS. Sci Data. 2016;3:160044. doi:10.1038/sdata.2016.44.
6. Gorgolewski KJ, et al. BIDS-Apps. bioRxiv. 2016. doi:10.1101/079145.
7. https://osf.io/haf97/