Causal assessment of adverse effects continues to evolve through the medical product lifecycle and requires clinical judgment to integrate evidence from multiple sources, including observational studies. This study describes a probabilistic framework quantifying how much can be learned from an epidemiological study, and how that varies by study design, database, and prior beliefs. We integrate new observational evidence with existing clinical knowledge by estimating the probability that a drug-outcome pair represents a true association, as a function of the prior belief and the estimated strength of association from an observational study. An incident user cohort design with propensity score adjustment and a self-controlled case series design were applied in an automated fashion to 53 drug-outcome associations across a network of nine disparate databases covering over 100 million lives of patient-level data. When applying an incident user design to a large claims database for a drug with a moderate sample size, the performance of these methods suggests that an observed relative risk of less than 2 does not substantially shift prior expectations, and that a relative risk of greater than 5 is needed to revise a 10% prior belief in a true association to a greater than 90% posterior probability. Study design, database, and sample size can substantially impact the predictive performance of observational results. This Bayesian framework offers a conceptual approach to interpreting observational database studies, which, with further data to fit the model, offers the potential to become a practical tool for direct use by clinicians and the broader research community.
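The updating described above can be sketched on the odds scale, where posterior odds equal prior odds times a likelihood ratio. Note this is only an illustration of the arithmetic, not the paper's fitted model: the likelihood ratio attached to an observed relative risk must in practice be estimated from control drug-outcome pairs across designs and databases, and the value 81 below is simply the ratio required to move a 10% prior above a 90% posterior.

```python
def posterior_probability(prior, likelihood_ratio):
    """Bayes' rule on the odds scale:
    posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Hypothetical likelihood ratio for illustration only; real values
# would come from fitting the framework to known positive and
# negative control associations. A 10% prior has odds 1:9, so a
# likelihood ratio of 81 is needed to reach 9:1, i.e. 90%.
print(posterior_probability(0.10, 81.0))  # → 0.9
```

A weak result (likelihood ratio near 1) leaves the 10% prior essentially unchanged, mirroring the abstract's point that a relative risk below 2 does not substantially shift prior expectations.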

Statistics in Biopharmaceutical Research
Department of Medical Informatics

Ryan, P., Suchard, M., Schuemie, M., & Madigan, D. (2013). Learning From Epidemiology: Interpreting Observational Database Studies for the Effects of Medical Products. Statistics in Biopharmaceutical Research, 5(3), 170–179. doi:10.1080/19466315.2013.791638