
      Sensitivity analysis for publication bias in meta‐analyses


          Summary

We propose sensitivity analyses for publication bias in meta‐analyses. We consider a publication process such that 'statistically significant' results are more likely to be published than negative or 'non‐significant' results by an unknown ratio, η. Our proposed methods also accommodate some plausible forms of selection based on a study's standard error. Using inverse probability weighting and robust estimation that accommodates non‐normal population effects, small meta‐analyses, and clustering, we develop sensitivity analyses that enable statements such as 'For publication bias to shift the observed point estimate to the null, "significant" results would need to be at least 30‐fold more likely to be published than negative or "non‐significant" results'. Comparable statements can be made regarding shifting to a chosen non‐null value or shifting the confidence interval. To aid interpretation, we describe empirical benchmarks for plausible values of η across disciplines. We show that a worst‐case meta‐analytic point estimate for maximal publication bias under the selection model can be obtained simply by conducting a standard meta‐analysis of only the negative and 'non‐significant' studies; this method sometimes indicates that no amount of such publication bias could 'explain away' the results. We illustrate the proposed methods using real meta‐analyses and provide an R package, PublicationBias.
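As a concrete illustration of the worst‐case result above, here is a minimal sketch in R using the metafor package; the full sensitivity analyses are implemented in the authors' PublicationBias package. The data frame dat with columns yi (point estimates) and vi (variances), and the two‐tailed 0.05 significance threshold, are illustrative assumptions rather than details prescribed by the paper.

# A minimal sketch of the worst-case estimate: a standard meta-analysis
# of only the negative and "non-significant" studies.
# Assumes a data frame `dat` with columns `yi` (estimates) and `vi`
# (variances); these names are illustrative.
library(metafor)

# Flag "affirmative" studies: positive and significant at two-tailed 0.05.
affirmative <- with(dat, yi / sqrt(vi) > qnorm(0.975))

# Worst-case pooled estimate under maximal publication bias:
# meta-analyze only the remaining studies.
worst_case <- rma(yi, vi, data = dat, subset = !affirmative)
worst_case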


Most cited references (51)


          Bias in meta-analysis detected by a simple, graphical test.

Funnel plots (plots of effect estimates against sample size) may be useful for detecting bias in meta-analyses. We examined whether a simple test of funnel plot asymmetry predicts discordance of results when meta-analyses are compared with large trials, and we assessed the prevalence of bias in published meta-analyses. A Medline search identified pairs consisting of a meta-analysis and a single large trial (concordance of results was assumed if effects were in the same direction and the meta-analytic estimate was within 30% of the trial's); we also analysed funnel plots from 37 meta-analyses identified by a hand search of four leading general medicine journals (1993-6) and from 38 meta-analyses in the second 1996 issue of the Cochrane Database of Systematic Reviews. The degree of funnel plot asymmetry was measured by the intercept from a regression of standard normal deviates against precision. Among the eight pairs of meta-analysis and large trial identified (five from cardiovascular medicine, one each from diabetic, geriatric, and perinatal medicine), four were concordant and four discordant; in every case, discordance was due to the meta-analysis showing the larger effect. Funnel plot asymmetry was present in three of the four discordant pairs but in none of the concordant pairs. Funnel plot asymmetry indicated bias in 14 (38%) of the journal meta-analyses and 5 (13%) of the Cochrane reviews. A simple analysis of funnel plots provides a useful test for the likely presence of bias in meta-analyses, but because the capacity to detect bias is limited when a meta-analysis is based on a small number of small trials, results from such analyses should be treated with considerable caution.
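As a concrete reading of the test described above, here is a minimal R sketch: regress each study's standard normal deviate (the estimate divided by its standard error) on its precision (the reciprocal of the standard error) and inspect the intercept. The vector names yi and sei are illustrative assumptions, not taken from the paper.

# A minimal sketch of the regression-based (Egger) asymmetry test.
# Assumes vectors `yi` (effect estimates) and `sei` (standard errors);
# the names are illustrative.
snd <- yi / sei        # standard normal deviates
precision <- 1 / sei   # precision
fit <- lm(snd ~ precision)
# An intercept far from zero suggests funnel plot asymmetry.
summary(fit)$coefficients["(Intercept)", ]

In practice, metafor's regtest() provides this test and several variants, so the hand-rolled regression is shown only for exposition.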

            Sensitivity Analysis in Observational Research: Introducing the E-Value.

            Sensitivity analysis is useful in assessing how robust an association is to potential unmeasured or uncontrolled confounding. This article introduces a new measure called the "E-value," which is related to the evidence for causality in observational studies that are potentially subject to confounding. The E-value is defined as the minimum strength of association, on the risk ratio scale, that an unmeasured confounder would need to have with both the treatment and the outcome to fully explain away a specific treatment-outcome association, conditional on the measured covariates. A large E-value implies that considerable unmeasured confounding would be needed to explain away an effect estimate. A small E-value implies little unmeasured confounding would be needed to explain away an effect estimate. The authors propose that in all observational studies intended to produce evidence for causality, the E-value be reported or some other sensitivity analysis be used. They suggest calculating the E-value for both the observed association estimate (after adjustments for measured confounders) and the limit of the confidence interval closest to the null. If this were to become standard practice, the ability of the scientific community to assess evidence from observational studies would improve considerably, and ultimately, science would be strengthened.
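Because the E-value for a risk ratio has a simple closed form, E = RR + sqrt(RR × (RR − 1)) for RR ≥ 1 (risk ratios below 1 are inverted first), a short function suffices. This sketch is illustrative, not code from the article:

# A minimal sketch of the E-value for an observed risk ratio:
# E = RR + sqrt(RR * (RR - 1)), after inverting risk ratios below 1.
e_value <- function(rr) {
  rr <- ifelse(rr < 1, 1 / rr, rr)  # orient away from the null
  rr + sqrt(rr * (rr - 1))
}

e_value(1.5)  # about 2.37

The same formula applied to the confidence limit closest to the null gives the E-value for the confidence interval, which the authors also suggest reporting.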

              Trim and fill: A simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis.

              We study recently developed nonparametric methods for estimating the number of missing studies that might exist in a meta-analysis and the effect that these studies might have had on its outcome. These are simple rank-based data augmentation techniques, which formalize the use of funnel plots. We show that they provide effective and relatively powerful tests for evaluating the existence of such publication bias. After adjusting for missing studies, we find that the point estimate of the overall effect size is approximately correct and coverage of the effect size confidence intervals is substantially improved, in many cases recovering the nominal confidence levels entirely. We illustrate the trim and fill method on existing meta-analyses of studies in clinical trials and psychometrics.
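For a concrete illustration, trim and fill is available in the metafor package as trimfill(); the sketch below assumes a data frame dat with columns yi and vi (illustrative names, not from the paper).

# A minimal sketch of trim and fill: impute the studies presumed missing
# because of publication bias, then re-estimate the pooled effect.
library(metafor)

fit <- rma(yi, vi, data = dat)  # standard random-effects meta-analysis
adjusted <- trimfill(fit)       # rank-based imputation of "missing" studies
adjusted                        # adjusted estimate, CI, and number imputed
funnel(adjusted)                # funnel plot including the imputed studies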

                Author and article information

                Contributors
                mmathur@stanford.edu
                Journal
J R Stat Soc Ser C Appl Stat
                10.1111/(ISSN)1467-9876
                RSSC
                Journal of the Royal Statistical Society. Series C, Applied Statistics
John Wiley and Sons Inc. (Hoboken)
ISSN 0035-9254 (print); ISSN 1467-9876 (online)
Published online 28 August 2020; issue November 2020
Volume 69, Issue 5 (doiID: 10.1111/rssc.v69.5), pages 1091-1119
                Affiliations
[ 1 ] Stanford University, Palo Alto, USA
[ 2 ] Harvard T.H. Chan School of Public Health, Boston, USA
                Author notes
[*] Address for correspondence: Maya B. Mathur, Quantitative Sciences Unit, Stanford University, 1701 Page Mill Road, Palo Alto, CA 94304, USA. E‐mail: mmathur@stanford.edu
                Article
RSSC12440
DOI: 10.1111/rssc.12440
PMC: 7590147
PMID: 33132447
© 2020 The Authors. Journal of the Royal Statistical Society: Series C (Applied Statistics) published by John Wiley & Sons Ltd on behalf of the Royal Statistical Society.

This is an open access article under the terms of the Creative Commons Attribution‐NonCommercial 4.0 License (http://creativecommons.org/licenses/by-nc/4.0/), which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes.

                Page count
                Figures: 10, Tables: 8, Pages: 29, Words: 14867
                Funding
Funded by: National Institutes of Health (Open Funder Registry 10.13039/100000002); Award ID: R01 CA222147
Funded by: John E. Fetzer Memorial Trust (Open Funder Registry 10.13039/100001107); Award ID: R2020‐16
                Categories
Original Article

Subject: Statistics
Keywords: file drawer, meta‐analysis, publication bias, sensitivity analysis
