Objectives: Our aim was to improve meta-analysis methods for summarizing a prediction model's performance when individual participant data are available from multiple studies for external validation.

Study Design and Setting: We propose multivariate meta-analysis for jointly synthesizing calibration and discrimination performance while accounting for their correlation. The approach estimates a prediction model's average performance, the heterogeneity in performance across populations, and the probability of "good" performance in new populations, which allows different implementation strategies (e.g., recalibration) to be compared. We applied the approach to a diagnostic model for deep vein thrombosis (DVT) and a prognostic model for breast cancer mortality.

Results: In both examples, multivariate meta-analysis reveals that calibration performance is excellent on average but highly heterogeneous across populations unless the model's intercept (baseline hazard) is recalibrated. For the cancer model, the probability of "good" performance (defined by a C statistic ≥ 0.7 and a calibration slope between 0.9 and 1.1) in a new population was 0.67 with recalibration but 0.22 without. For the DVT model, even with recalibration, the probability of "good" performance was only 0.03.

Conclusion: Multivariate meta-analysis can be used to externally validate a prediction model's calibration and discrimination performance across multiple populations and to evaluate different implementation strategies.
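The core idea — jointly modeling two performance measures (calibration slope and C statistic) across validation studies, then computing the probability of "good" performance in a new population from the predictive distribution — can be sketched briefly. The example below uses entirely made-up study estimates and a crude method-of-moments estimate of the between-study covariance; the paper's actual analysis would use likelihood-based multivariate random-effects meta-analysis (e.g., REML), so treat this as an illustration of the logic only, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical study-level estimates from 5 validation studies:
# column 0 = calibration slope, column 1 = C statistic (illustrative values).
y = np.array([[0.95, 0.72],
              [1.05, 0.68],
              [0.80, 0.74],
              [1.10, 0.70],
              [0.90, 0.66]])
# Hypothetical within-study standard errors for each estimate.
se = np.array([[0.05, 0.02],
               [0.06, 0.03],
               [0.04, 0.02],
               [0.07, 0.03],
               [0.05, 0.02]])

mu = y.mean(axis=0)                        # pooled mean performance
between = np.cov(y.T, ddof=1)              # observed between-study covariance
within = np.diag((se ** 2).mean(axis=0))   # average within-study covariance

# Method-of-moments heterogeneity estimate, projected to the nearest
# positive semi-definite matrix via eigenvalue truncation.
w, v = np.linalg.eigh(between - within)
tau2 = v @ np.diag(np.clip(w, 0.0, None)) @ v.T

# Predictive distribution of performance in a new population, then the
# probability of "good" performance (slope in [0.9, 1.1] and C >= 0.7).
draws = rng.multivariate_normal(mu, tau2, size=100_000)
good = (np.abs(draws[:, 0] - 1.0) <= 0.1) & (draws[:, 1] >= 0.7)
p_good = good.mean()
print(f"P(good performance in a new population) = {p_good:.2f}")
```

Joint Monte Carlo simulation from the predictive distribution is what makes the criterion genuinely multivariate: because the two measures are correlated across studies, the joint probability generally differs from the product of the two marginal probabilities.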

Calibration, Discrimination, External validation, Heterogeneity, Individual participant data (IPD), Model comparison, Multivariate meta-analysis, Prognostic model, Risk prediction
dx.doi.org/10.1016/j.jclinepi.2015.05.009, hdl.handle.net/1765/82275
Journal of Clinical Epidemiology
Department of Medical Oncology

Snell, K. I. E., Hua, H., Debray, T. P., Ensor, J., Look, M. P., Moons, K. G. M., & Riley, R. D. (2016). Multivariate meta-analysis of individual participant data helped externally validate the performance and implementation of a prediction model. Journal of Clinical Epidemiology, 69, 40–50. doi:10.1016/j.jclinepi.2015.05.009