
      Quality assessment of observational studies in a drug-safety systematic review, comparison of two tools: the Newcastle–Ottawa Scale and the RTI item bank


          Abstract

          Background

          The study objective was to compare the Newcastle–Ottawa Scale (NOS) and the RTI item bank (RTI-IB) and estimate interrater agreement using the RTI-IB within a systematic review on the cardiovascular safety of glucose-lowering drugs.

          Methods

We tailored both tools and added four questions to the RTI-IB. Two reviewers assessed the quality of the 44 included studies with both tools (independently for the RTI-IB) and agreed on which responses conveyed low, unclear, or high risk of bias. For each of the 31 questions in the RTI-IB, observed interrater agreement was calculated as the percentage of studies given the same bias assessment by both reviewers; chance-adjusted interrater agreement was estimated with the first-order agreement coefficient (AC1) statistic.
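The two agreement measures described above can be sketched in a few lines of Python. This is a minimal illustration of percent agreement and Gwet's first-order agreement coefficient (AC1) for two raters and the three-level rating scale (low/unclear/high risk of bias) used in the Methods; it is not the authors' actual analysis code, and the function and variable names are illustrative.

```python
def observed_agreement(r1, r2):
    """Proportion of studies given the same rating by both reviewers."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def gwet_ac1(r1, r2, categories=("low", "unclear", "high")):
    """Gwet's AC1 for two raters over a fixed set of categories."""
    n = len(r1)
    q = len(categories)
    p_a = observed_agreement(r1, r2)
    # Chance agreement: p_e = (1/(q-1)) * sum_k pi_k * (1 - pi_k),
    # where pi_k is the mean marginal proportion of category k
    # across the two raters.
    p_e = 0.0
    for k in categories:
        pi_k = (r1.count(k) + r2.count(k)) / (2 * n)
        p_e += pi_k * (1 - pi_k)
    p_e /= (q - 1)
    return (p_a - p_e) / (1 - p_e)

# Hypothetical ratings for 6 studies on one RTI-IB question:
r1 = ["low", "low", "high", "unclear", "low", "high"]
r2 = ["low", "high", "high", "unclear", "low", "low"]
print(round(observed_agreement(r1, r2), 2), round(gwet_ac1(r1, r2), 2))  # 0.67 0.52
```

Unlike Cohen's kappa, AC1's chance-agreement term depends on how far each category's prevalence is from uniform, which makes it less sensitive to skewed marginal distributions; this is one reason it is used for risk-of-bias ratings, where most studies may fall in a single category.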

          Results

The NOS required less tailoring and was easier to use than the RTI-IB, but the RTI-IB produced a more thorough assessment. The RTI-IB includes most of the domains measured in the NOS. Median observed interrater agreement for the RTI-IB was 75% (25th percentile [p25] = 61%; p75 = 89%); the median AC1 statistic was 0.64 (p25 = 0.51; p75 = 0.86).

          Conclusion

          The RTI-IB facilitates a more complete quality assessment than the NOS but is more burdensome. The observed agreement and AC1 statistic in this study were higher than those reported by the RTI-IB’s developers.

          Related collections

Most cited references


          Pharmaceutical industry sponsorship and research outcome and quality: systematic review.

To investigate whether funding of drug studies by the pharmaceutical industry is associated with outcomes that are favourable to the funder, and whether the methods of trials funded by pharmaceutical companies differ from those of trials with other sources of support. Medline (January 1966 to December 2002) and Embase (January 1980 to December 2002) searches were supplemented with material identified in the references and in the authors' personal files. Data were independently abstracted by three of the authors, and disagreements were resolved by consensus.

30 studies were included. Research funded by drug companies was less likely to be published than research funded by other sources. Studies sponsored by pharmaceutical companies were more likely to have outcomes favouring the sponsor than were studies with other sponsors (odds ratio 4.05; 95% confidence interval 2.98 to 5.51; 18 comparisons). None of the 13 studies that analysed methods reported that studies funded by industry were of poorer quality.

Systematic bias favours products made by the company funding the research. Explanations include the selection of an inappropriate comparator to the product being investigated, and publication bias.

            Evaluating non-randomised intervention studies.

To consider methods and related evidence for evaluating bias in non-randomised intervention studies. Systematic reviews and methodological papers were identified from a search of electronic databases, handsearches of key medical journals, and contact with experts working in the field. New empirical studies were conducted using data from two large randomised clinical trials.

Three systematic reviews and new empirical investigations were conducted. The reviews considered, in regard to non-randomised studies: (1) the existing evidence of bias; (2) the content of quality assessment tools; and (3) the ways that study quality has been assessed and addressed. The empirical investigations were conducted by generating non-randomised studies from two large, multicentre randomised controlled trials (RCTs) and selectively resampling trial participants according to allocated treatment, centre, and period.

In the systematic reviews, eight studies compared results of randomised and non-randomised studies across multiple interventions using meta-epidemiological techniques. A total of 194 tools were identified that could be or had been used to assess non-randomised studies. Sixty tools covered at least five of six pre-specified internal validity domains. Fourteen tools covered three of four core items of particular importance for non-randomised studies. Six tools were thought suitable for use in systematic reviews. Of 511 systematic reviews that included non-randomised studies, only 169 (33%) assessed study quality. Sixty-nine reviews investigated the impact of quality on study results in a quantitative manner.

The new empirical studies estimated the bias associated with non-random allocation and found that it could lead to consistent over- or underestimation of treatment effects; the bias also increased variation in results for both historical and concurrent controls, owing to haphazard differences in case-mix between groups. The biases were large enough to lead studies falsely to conclude significant findings of benefit or harm. Four strategies for case-mix adjustment were evaluated: none adequately adjusted for bias in historically and concurrently controlled studies. Logistic regression on average increased bias. Propensity score methods performed better, but were not satisfactory in most situations. Detailed investigation revealed that adequate adjustment can only be achieved in the unrealistic situation where selection depends on a single factor.

Results of non-randomised studies sometimes, but not always, differ from results of randomised studies of the same intervention. Non-randomised studies may still give seriously misleading results when treated and control groups appear similar in key prognostic factors. Standard methods of case-mix adjustment do not guarantee removal of bias. Residual confounding may be high even when good prognostic data are available, and in some situations adjusted results may appear more biased than unadjusted results. Although many quality assessment tools exist and have been used for appraising non-randomised studies, most omit key quality domains. Healthcare policies based upon non-randomised studies, or on systematic reviews of non-randomised studies, may need re-evaluation if the uncertainty in the true evidence base was not fully appreciated when they were made. The inability of case-mix adjustment methods to compensate for selection bias, and our inability to identify non-randomised studies that are free of selection bias, indicate that non-randomised studies should only be undertaken when RCTs are infeasible or unethical.

Recommendations for further research include: applying the resampling methodology in other clinical areas to ascertain whether the biases described are typical; developing or refining existing quality assessment tools for non-randomised studies; investigating how quality assessments of non-randomised studies can be incorporated into reviews, and the implications of individual quality features for the interpretation of a review's results; examining the reasons for the apparent failure of case-mix adjustment methods; and further evaluating the role of the propensity score.

              On the bias produced by quality scores in meta-analysis, and a hierarchical view of proposed solutions.

Results from better quality studies should in some sense be more valid or more accurate than results from other studies, and as a consequence should tend to be distributed differently from the results of other studies. To date, however, quality scores have been poor predictors of study results. We discuss possible reasons and remedies for this problem.

It appears that 'quality' (whatever leads to more valid results) is of fairly high dimension and possibly non-additive and nonlinear, and that quality dimensions are highly application-specific and hard to measure from published information. Unfortunately, quality scores are often used to contrast, model, or modify meta-analysis results without regard to the aforementioned problems, as when used to directly modify the weights or contributions of individual studies in an ad hoc manner. Even if quality could be captured in one dimension, use of quality scores in summarization weights would produce biased estimates of effect. Only if this bias were more than offset by variance reduction would such use be justified.

From this perspective, quality weighting should be evaluated against formal bias-variance trade-off methods such as hierarchical (random-coefficient) meta-regression. Because it is unlikely that a low-dimensional appraisal will ever be adequate (especially across different applications), we argue that response-surface estimation based on quality items is preferable to quality weighting. Quality scores may be useful in the second stage of a hierarchical response-surface model, but only if the scores are reconstructed to maximize their correlation with bias.

                Author and article information

Journal
Clinical Epidemiology (Clin Epidemiol)
Dove Medical Press
ISSN: 1179-1349
Published: 10 October 2014
Volume 6: 359-368
                Affiliations
                [1 ]RTI Health Solutions, Barcelona, Spain
                [2 ]Drug Safety Research Unit, Southampton, UK
                [3 ]Associate Department of the School of Pharmacy and Biomedical Sciences, University of Portsmouth, Portsmouth, UK
                [4 ]RTI International, Research Triangle Park, NC, USA
                Author notes
Correspondence: Andrea V Margulis, RTI Health Solutions, Travessera de Gracia 56, Atico 1, Barcelona 08006, Spain, Tel +34 933 622 806 or +34 693 822 166, Fax +34 93 414 2610, Email amargulis@rti.org
Article
clep-6-359
DOI: 10.2147/CLEP.S66677
PMCID: 4199858
PMID: 25336990
                © 2014 Margulis et al. This work is published by Dove Medical Press Limited, and licensed under Creative Commons Attribution – Non Commercial (unported, v3.0) License

                The full terms of the License are available at http://creativecommons.org/licenses/by-nc/3.0/. Non-commercial uses of the work are permitted without any further permission from Dove Medical Press Limited, provided the work is properly attributed.

                Categories
                Original Research

Public health
Keywords: systematic review, meta-analysis, quality assessment, AC1
