Clinical risk prediction models are increasingly developed and validated on multicenter datasets. In this article, we present a comprehensive framework for evaluating the predictive performance of prediction models at the center level and the population level, considering population-averaged predictions, center-specific predictions, and predictions assuming an average random center effect. We demonstrated in a simulation study that calibration slopes deviate from one not only because of over- or underfitting of patterns in the development dataset, but also as a result of the choice of model (standard versus mixed effects logistic regression), the type of predictions (marginal, conditional, or assuming an average random effect), and the level of model validation (center versus population). In particular, when the data are heavily clustered (ICC 20%), center-specific predictions offer the best predictive performance at both the population level and the center level. We recommend that the model reflect the data structure, while the level of model validation should reflect the research question.
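The three prediction types contrasted in the abstract can be made concrete for a random-intercept logistic model. The sketch below, a minimal illustration and not the authors' code, computes (a) a center-specific (conditional) prediction using a hypothetical center effect, (b) a prediction assuming an average (zero) random center effect, and (c) a population-averaged (marginal) prediction obtained by integrating the conditional probability over the random-intercept distribution via Gauss-Hermite quadrature. The linear predictor value and the center effect `u_j` are illustrative assumptions; the ICC-to-variance conversion uses the standard latent-scale relation ICC = sigma^2 / (sigma^2 + pi^2/3).

```python
import numpy as np

def expit(z):
    """Inverse logit."""
    return 1.0 / (1.0 + np.exp(-z))

def conditional_pred(lp, u_j):
    """Center-specific prediction: condition on center j's random intercept u_j."""
    return expit(lp + u_j)

def average_effect_pred(lp):
    """Prediction assuming an average (zero) random center effect."""
    return expit(lp)

def marginal_pred(lp, sigma, n_nodes=41):
    """Population-averaged prediction: E[expit(lp + u)] with u ~ N(0, sigma^2),
    approximated by probabilists' Gauss-Hermite quadrature."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
    return np.sum(weights * expit(lp + sigma * nodes)) / np.sqrt(2.0 * np.pi)

# Hypothetical example: ICC of 20% on the latent scale implies
# sigma^2 = ICC * (pi^2 / 3) / (1 - ICC).
sigma2 = 0.20 * (np.pi ** 2 / 3) / (1 - 0.20)
lp = 0.5  # assumed linear predictor (fixed effects part) for one patient

p_cond = conditional_pred(lp, u_j=0.8)        # for a center with u_j = 0.8
p_avg = average_effect_pred(lp)               # average random effect set to 0
p_marg = marginal_pred(lp, np.sqrt(sigma2))   # averaged over all centers
```

Because expit is nonlinear, the marginal prediction is not simply the conditional prediction at a zero random effect: averaging over the center distribution shrinks predictions toward 0.5, which is one reason calibration slopes differ across prediction types and validation levels.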

Additional Metadata
Keywords bias, calibration, clinical prediction model, discrimination, logistic regression, mixed model, predictive performance
Persistent URL dx.doi.org/10.1177/0962280216668555, hdl.handle.net/1765/106551
Journal Statistical Methods in Medical Research
Citation
Wynants, L., Vergouwe, Y., van Huffel, S., Timmerman, D., & Van Calster, B. (2018). Does ignoring clustering in multicenter data influence the performance of prediction models? A simulation study. Statistical Methods in Medical Research, 27(6), 1723–1736. doi:10.1177/0962280216668555