In a recent BMJ article, the authors conducted a meta-analysis comparing estimated treatment effects from randomized trials with those derived from observational studies based on routinely collected data (RCD). They calculated a pooled relative odds ratio (ROR) of 1.31 (95% confidence interval [CI]: 1.03–1.65) and concluded that RCD studies systematically overestimated protective effects. However, their meta-analysis selectively inverted the results for some clinical questions so that all RCD estimates fell below 1. We evaluated the statistical properties of this pooled ROR and found that the selective inversion rule employed in the original meta-analysis can positively bias its estimate. We then repeated the random-effects meta-analysis using a different inversion rule and obtained an estimated ROR of 0.98 (0.78–1.23), indicating that the ROR depends strongly on the direction chosen for each comparison. As an alternative to the ROR, we calculated the observed proportion of clinical questions for which the RCD and trial CIs overlap, as well as the expected proportion assuming no systematic difference between the two study types. Out of 16 clinical questions, the 50% CIs overlapped for 8 (50%; 25% to 75%), compared with an expected overlap of 60% assuming no systematic difference between RCD studies and trials. Thus, there was little evidence of a systematic difference in effect estimates between RCD studies and randomized trials. Estimates of pooled RORs across distinct clinical questions are generally not interpretable and may be misleading.
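The claim that a selective inversion rule can positively bias the pooled ROR can be illustrated with a small simulation. The sketch below is not the authors' analysis code: the standard errors, the between-question spread, and the unweighted pooling of log-RORs (in place of a full inverse-variance random-effects model) are simplifying assumptions, and the data are generated with no systematic difference between RCD and trial estimates.

```python
# Simulation sketch: selective inversion of comparisons so that all RCD
# estimates fall below 1 biases the pooled ROR upward, even when RCD and
# trial estimates agree on average. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_questions, n_sims = 16, 2000
se_rcd, se_rct = 0.2, 0.2  # assumed standard errors of the log-ORs

rors_inverted, rors_fixed = [], []
for _ in range(n_sims):
    # Shared true log-ORs: no systematic RCD/trial difference by construction.
    true_log_or = rng.normal(0.0, 0.3, n_questions)
    log_or_rcd = true_log_or + rng.normal(0.0, se_rcd, n_questions)
    log_or_rct = true_log_or + rng.normal(0.0, se_rct, n_questions)

    # log-ROR for each question: log(OR_trial) - log(OR_RCD).
    log_ror = log_or_rct - log_or_rcd

    # Selective inversion: flip the comparison whenever the RCD estimate is
    # not protective (log-OR >= 0), forcing every RCD OR below 1.
    flip = np.where(log_or_rcd >= 0, -1.0, 1.0)

    # Pool with an unweighted mean of log-RORs (simplification; the paper
    # uses a random-effects meta-analysis).
    rors_inverted.append(np.exp(np.mean(flip * log_ror)))
    rors_fixed.append(np.exp(np.mean(log_ror)))  # fixed comparison direction

print(f"mean pooled ROR, selective inversion: {np.mean(rors_inverted):.2f}")
print(f"mean pooled ROR, fixed direction:     {np.mean(rors_fixed):.2f}")
```

Because the inversion is conditioned on the noisy RCD estimate itself, regression to the mean pushes the flipped log-RORs positive, so the pooled ROR under selective inversion lands noticeably above 1 while the fixed-direction pooling stays near 1.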