
      Prominent medical journals often provide insufficient information to assess the validity of studies with negative results

      research-article


          Abstract

          Background

          Physicians reading the medical literature attempt to determine whether research studies are valid. However, articles with negative results may not provide sufficient information to allow physicians to properly assess validity.

          Methods

          We analyzed all original research articles with negative results published in 1997 in the weekly journals BMJ, JAMA, Lancet, and New England Journal of Medicine as well as those published in the 1997 and 1998 issues of the bimonthly Annals of Internal Medicine (N = 234). Our primary objective was to quantify the proportion of studies with negative results that comment on power and present confidence intervals. Secondary outcomes were to quantify the proportion of these studies with a specified effect size and a defined primary outcome. Stratified analyses by study design were also performed.

          Results

          Only 30% of the articles with negative results commented on power. The reporting of power (range: 15–52%) and confidence intervals (range: 55–81%) varied significantly among journals. Observational studies of etiology/risk factors addressed power less frequently (15%; 95% CI, 8–21%) than did clinical trials (56%; 95% CI, 46–67%; p < 0.001). While 87% of articles with power calculations specified an effect size the authors sought to detect, only a minority gave a rationale for that effect size. Just half of the studies with negative results clearly defined a primary outcome.
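The proportions reported above can be checked with a normal-approximation (Wald) confidence interval for a binomial proportion. The sketch below is illustrative: it assumes 70 of the 234 articles commented on power (70/234 ≈ 30%, matching the figure above); the subgroup counts behind the other intervals are not given here.

```python
import math

def wald_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a binomial proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p, p - z * se, p + z * se

# Illustrative: 70 of 234 articles commenting on power is roughly 30%.
p, lo, hi = wald_ci(70, 234)
print(f"{p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

The Wald interval is the simplest choice; Wilson or exact (Clopper–Pearson) intervals behave better for small counts or proportions near 0 or 1.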

          Conclusion

          Prominent medical journals often provide insufficient information to assess the validity of studies with negative results.

          Related collections

          Most cited references (31)


          The use of predicted confidence intervals when planning experiments and the misuse of power when interpreting results.

          Although there is a growing understanding of the importance of statistical power considerations when designing studies and of the value of confidence intervals when interpreting data, confusion exists about the reverse arrangement: the role of confidence intervals in study design and of power in interpretation. Confidence intervals should play an important role when setting sample size, and power should play no role once the data have been collected, but exactly the opposite procedure is widely practiced. In this commentary, we present the reasons why the calculation of power after a study is over is inappropriate and how confidence intervals can be used during both study design and study interpretation.
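The design-stage use of confidence intervals this reference advocates can be sketched as a sample-size calculation: pick the CI half-width d you are willing to tolerate and solve n = z²·p(1−p)/d². The numbers below are textbook illustrations, not figures from the paper.

```python
import math

def n_for_ci_halfwidth(p, d, z=1.96):
    """Sample size so a 95% CI for a proportion has half-width <= d."""
    return math.ceil(z**2 * p * (1 - p) / d**2)

# Worst-case p = 0.5 with a +/-5 percentage-point interval (illustrative).
print(n_for_ci_halfwidth(0.5, 0.05))  # 385
```

Planning for interval width in this way replaces the post hoc power calculation the authors criticize: once the data are in, the observed confidence interval itself conveys what the study could and could not rule out.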

            Statistical problems in the reporting of clinical trials. A survey of three medical journals.

            Reports of clinical trials often contain a wealth of data comparing treatments. This can lead to problems in interpretation, particularly when significance testing is used extensively. We examined 45 reports of comparative trials published in the British Medical Journal, the Lancet, or the New England Journal of Medicine to illustrate these statistical problems. The issues we considered included the analysis of multiple end points, the analysis of repeated measurements over time, subgroup analyses, trials of multiple treatments, and the overall number of significance tests in a trial report. Interpretation of large amounts of data is complicated by the common failure to specify in advance the intended size of a trial or statistical stopping rules for interim analyses. In addition, summaries or abstracts of trials tend to emphasize the more statistically significant end points. Overall, the reporting of clinical trials appears to be biased toward an exaggeration of treatment differences. Trials should have a clearer predefined policy for data analysis and reporting. In particular, a limited number of primary treatment comparisons should be specified in advance. The overuse of arbitrary significance levels (for example, P less than 0.05) is detrimental to good scientific reporting, and more emphasis should be given to the magnitude of treatment differences and to estimation methods such as confidence intervals.
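The multiplicity problem this survey describes (many significance tests in one trial report) can be illustrated with the familywise error rate for k independent tests at α = 0.05; the figures are a standard illustration, not data from the survey.

```python
def familywise_error(k, alpha=0.05):
    """Chance of at least one false positive across k independent tests."""
    return 1 - (1 - alpha) ** k

# With 20 end points tested at p < 0.05, a spurious "significant"
# result is more likely than not even when no true effect exists.
for k in (1, 5, 20):
    print(k, round(familywise_error(k), 3))
```

This is why the authors recommend specifying a limited number of primary comparisons in advance rather than testing every end point at the 0.05 level.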

              Use of the CONSORT Statement and Quality of Reports of Randomized Trials


                Author and article information

                Journal
                J Negat Results Biomed
                Journal of Negative Results in Biomedicine
                BioMed Central (London )
                1477-5751
                2002
                30 September 2002
                Volume: 1
                Article: 1
                Affiliations
                [1 ]Division of General Internal Medicine, University of Pittsburgh, 933W MUH, 200 Lothrop Street, Pittsburgh, PA 15213, USA
                [2 ]Division of General Internal Medicine, Johns Hopkins Bayview Medical Center, Johns Hopkins University School of Medicine, A6W, 4940 Eastern Avenue, Baltimore, MD 21224, USA
                [3 ]Division of General Internal Medicine, Vanderbilt University Medical Center, S-1121 MCN 2587, Nashville, TN 37232, USA
                Article
                1477-5751-1-1
                DOI: 10.1186/1477-5751-1-1
                PMCID: 131026
                PMID: 12437785
                Copyright © 2002 Hebert et al; licensee BioMed Central Ltd. This is an Open Access article: verbatim copying and redistribution of this article are permitted in all media for any purpose, provided this notice is preserved along with the article's original URL.
                History
                Received: 19 July 2002
                Accepted: 30 September 2002
                Categories
                Research

                Life sciences
