Performing quality control to detect image artifacts and data-processing errors is crucial in structural magnetic resonance imaging, especially in developmental studies. Currently, many studies rely on visual inspection by trained raters for quality control. The subjectivity of these manual procedures lessens comparability between studies, and with growing study sizes quality control is increasingly time consuming. In addition, both inter-rater and intra-rater variability of manual quality control are high and may lead to inclusion of poor-quality scans and exclusion of scans of usable quality. In the current study we present the Qoala-T tool, a free, easy-to-use supervised-learning model that reduces rater bias and misclassification in manual quality control of FreeSurfer-processed scans. First, we manually rated the quality of N = 784 FreeSurfer-processed T1-weighted scans acquired in three different waves of a longitudinal study. Different supervised-learning models were then compared on their ability to predict the manual quality ratings from FreeSurfer segmentation output. Results show that the Qoala-T tool using random forests is able to predict scan quality with both high sensitivity and specificity (mean area under the curve (AUC) = 0.98). In addition, the Qoala-T tool adequately predicted the quality of two novel, unseen datasets (total N = 872). Finally, analyses of age effects showed that younger participants were more likely to have lower scan quality, underlining that scan quality might confound findings attributed to age effects. These outcomes indicate that this procedure could further help to reduce variability related to manual quality control, thereby benefiting the comparability of data quality between studies.
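The core procedure described above amounts to supervised classification of FreeSurfer segmentation measures against manual quality labels. Qoala-T itself is distributed as R code; purely as an illustrative sketch, the snippet below reproduces the general idea in Python with scikit-learn, assuming a hypothetical table of FreeSurfer measures (e.g., exported with FreeSurfer's asegstats2table) joined with manual ratings. The file name and column names are placeholders, not part of the published tool.

```python
# Illustrative sketch only; Qoala-T itself is implemented in R.
# Assumes a hypothetical CSV with one row per scan: FreeSurfer
# segmentation measures as feature columns plus a manual QC label
# (1 = usable quality, 0 = exclude).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

data = pd.read_csv("freesurfer_measures_with_ratings.csv")  # hypothetical file
X = data.drop(columns=["subject_id", "manual_qc"])          # feature columns
y = data["manual_qc"]                                        # manual ratings

# Random forest classifier, the model family that performed best
# in the Qoala-T comparison of supervised-learning models.
clf = RandomForestClassifier(n_estimators=500, random_state=0)

# Cross-validated AUC, analogous to the paper's reported mean AUC of 0.98.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
auc_scores = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"Mean cross-validated AUC: {auc_scores.mean():.2f}")
```

A model trained this way can then be applied to new FreeSurfer output to flag scans for exclusion or re-inspection, mirroring how the paper validates Qoala-T on two unseen datasets.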

doi.org/10.1016/j.neuroimage.2019.01.014, hdl.handle.net/1765/131510

Klapwijk, E.T., van de Kamp, F., van der Meulen, M., & Peters, S. (2019). Qoala-T: A supervised-learning tool for quality control of FreeSurfer segmented MRI data. NeuroImage, 189, 116–129. doi:10.1016/j.neuroimage.2019.01.014