Objective: To assess whether the Prediction model Risk Of Bias ASsessment Tool (PROBAST) and a shorter version of this tool can identify clinical prediction models (CPMs) that perform poorly at external validation. Study Design and Setting: We evaluated risk of bias (ROB) on 102 CPMs from the Tufts CPM Registry, comparing PROBAST to a short form consisting of six PROBAST items anticipated to best identify high ROB. We then applied the short form to all CPMs in the Registry with at least one validation (n=556) and assessed the change in discrimination (dAUC) in external validation cohorts (n=1,147). Results: PROBAST classified 98/102 CPMs as high ROB. The short form identified 96 of these 98 as high ROB (98% sensitivity), with perfect specificity. In the full CPM Registry, 527 of 556 CPMs (95%) were classified as high ROB, 20 (3.6%) as low ROB, and 9 (1.6%) as unclear ROB. After full PROBAST assessment of all low and unclear ROB models, only one model with unclear ROB was reclassified to high ROB. The median change in discrimination was significantly smaller in low ROB models (dAUC -0.9%, IQR -6.2% to 4.2%) than in high ROB models (dAUC -11.7%, IQR -33.3% to 2.6%; P<0.001). Conclusion: High ROB is pervasive among published CPMs. It is associated with poor discriminative performance at validation, supporting the application of PROBAST or a shorter version in CPM reviews.
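As a minimal sketch of the screening metrics reported above, the snippet below recomputes the short form's sensitivity and specificity from the 2x2 agreement table implied by the abstract (96 of the 98 high-ROB models flagged; the 4 models not rated high ROB by full PROBAST all left unflagged). The function name and layout are illustrative, not taken from the paper.

```python
# Sketch: recompute short-form screening performance against full PROBAST
# using the counts reported in the abstract. Names are illustrative.

def screening_metrics(tp: int, fn: int, tn: int, fp: int) -> dict:
    """Sensitivity and specificity of the short form versus full PROBAST."""
    return {
        "sensitivity": tp / (tp + fn),  # high-ROB models correctly flagged
        "specificity": tn / (tn + fp),  # non-high-ROB models correctly passed
    }

if __name__ == "__main__":
    # 96 of 98 high-ROB models flagged; all 4 remaining models left unflagged.
    metrics = screening_metrics(tp=96, fn=2, tn=4, fp=0)
    print(f"Sensitivity: {metrics['sensitivity']:.0%}")  # ~98%
    print(f"Specificity: {metrics['specificity']:.0%}")  # 100%
```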

doi.org/10.1016/j.jclinepi.2021.06.017, hdl.handle.net/1765/136417
Journal of Clinical Epidemiology
Erasmus MC: University Medical Center Rotterdam

E. (Esmee) Venema, Benjamin S. Wessler, Jessica K. Paulus, Rehab Salah, Gowri Raman, Lester Y. Leung, … David M. Kent. (2021). Large-scale validation of the Prediction model Risk Of Bias ASsessment Tool (PROBAST) using a short form. Journal of Clinical Epidemiology, 138, 32–39. doi:10.1016/j.jclinepi.2021.06.017