
      The quality of systematic reviews about interventions for refractive error can be improved: a review of systematic reviews


          Abstract

          Background

          Systematic reviews should inform American Academy of Ophthalmology (AAO) Preferred Practice Pattern® (PPP) guidelines. The quality of systematic reviews related to the forthcoming PPP guideline, Refractive Errors & Refractive Surgery, is unknown. We sought to identify reliable systematic reviews to assist the AAO Refractive Errors & Refractive Surgery PPP.

          Methods

          Systematic reviews were eligible if they evaluated the effectiveness or safety of interventions included in the 2012 PPP Refractive Errors & Refractive Surgery. To identify potentially eligible systematic reviews, we searched the Cochrane Eyes and Vision United States Satellite database of systematic reviews. Two authors independently identified eligible reviews and abstracted information about the characteristics and quality of the reviews using the Systematic Review Data Repository. We classified systematic reviews as “reliable” when they (1) defined criteria for the selection of studies, (2) conducted comprehensive literature searches for eligible studies, (3) assessed the methodological quality (risk of bias) of the included studies, (4) used appropriate methods for meta-analyses (which we assessed only when meta-analyses were reported), and (5) presented conclusions that were supported by the evidence provided in the review.
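          The classification rule described above — a review counts as reliable only if it satisfies every criterion, with criterion (4) assessed only when a meta-analysis was reported — can be sketched as a simple predicate. This is an illustrative encoding only; the field and function names are hypothetical and are not taken from the authors' data-collection form.

```python
from dataclasses import dataclass

@dataclass
class ReviewAssessment:
    """One systematic review's assessment against the five criteria
    (field names are illustrative, not the authors' actual form)."""
    defined_eligibility_criteria: bool   # (1) criteria for selecting studies
    comprehensive_search: bool           # (2) comprehensive literature search
    assessed_risk_of_bias: bool          # (3) methodological quality assessed
    reported_meta_analysis: bool         # did the review report a meta-analysis?
    appropriate_meta_analysis_methods: bool  # (4) only relevant if one was reported
    conclusions_supported: bool          # (5) conclusions supported by the evidence

def is_reliable(r: ReviewAssessment) -> bool:
    """Reliable iff all applicable criteria hold; criterion (4) applies
    only when a meta-analysis was reported."""
    core = (r.defined_eligibility_criteria
            and r.comprehensive_search
            and r.assessed_risk_of_bias
            and r.conclusions_supported)
    if r.reported_meta_analysis:
        return core and r.appropriate_meta_analysis_methods
    return core
```

          Under this rule, a review without a meta-analysis is not penalized on criterion (4), which mirrors the parenthetical qualification in the Methods.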

          Results

          We identified 124 systematic reviews related to refractive error; 39 met our eligibility criteria, of which we classified 11 as reliable. Systematic reviews classified as unreliable did not define the criteria for selecting studies (5; 13%), did not assess methodological rigor (10; 26%), did not conduct comprehensive searches (17; 44%), or used inappropriate quantitative methods (3; 8%). The 11 reliable reviews were published between 2002 and 2016. They included 0 to 23 studies (median = 9) and analyzed 0 to 4696 participants (median = 666). Seven reliable reviews (64%) assessed surgical interventions.

          Conclusions

          Most systematic reviews of interventions for refractive error are of low methodological quality. Following widely accepted guidance, such as Cochrane or Institute of Medicine standards for conducting systematic reviews, would contribute to improved patient care and inform future research.

          Electronic supplementary material

          The online version of this article (10.1186/s12886-017-0561-9) contains supplementary material, which is available to authorized users.


          Most cited references (42)


          The hazards of scoring the quality of clinical trials for meta-analysis.

          Although it is widely recommended that clinical trials undergo some type of quality review, the number and variety of quality assessment scales that exist make it unclear how to achieve the best assessment. To determine whether the type of quality assessment scale used affects the conclusions of meta-analytic studies. Meta-analysis of 17 trials comparing low-molecular-weight heparin (LMWH) with standard heparin for prevention of postoperative thrombosis using 25 different scales to identify high-quality trials. The association between treatment effect and summary scores and the association with 3 key domains (concealment of treatment allocation, blinding of outcome assessment, and handling of withdrawals) were examined in regression models. Pooled relative risks of deep vein thrombosis with LMWH vs standard heparin in high-quality vs low-quality trials as determined by 25 quality scales. Pooled relative risks from high-quality trials ranged from 0.63 (95% confidence interval [CI], 0.44-0.90) to 0.90 (95% CI, 0.67-1.21) vs 0.52 (95% CI, 0.24-1.09) to 1.13 (95% CI, 0.70-1.82) for low-quality trials. For 6 scales, relative risks of high-quality trials were close to unity, indicating that LMWH was not significantly superior to standard heparin, whereas low-quality trials showed better protection with LMWH (P < .05). Seven scales showed the opposite: high-quality trials showed an effect, whereas low-quality trials did not. For the remaining 12 scales, effect estimates were similar in the 2 quality strata. In regression analysis, summary quality scores were not significantly associated with treatment effects. There was no significant association of treatment effects with allocation concealment and handling of withdrawals. Open outcome assessment, however, influenced effect size, with the effect of LMWH, on average, being exaggerated by 35% (95% CI, 1%-57%; P = .046). Our data indicate that the use of summary scores to identify trials of high quality is problematic. Relevant methodological aspects should be assessed individually and their influence on effect sizes explored.

            Assessing the quality of randomized trials: reliability of the Jadad scale.

            An instrument was developed and validated by Jadad et al. to assess the quality of clinical trials using studies from the pain literature. Our study determined the reliability of the Jadad scale and the effect of blinding on interrater agreement in another group of primary studies. Four raters independently assessed blinded and unblinded versions of 76 randomized trials. Interrater agreement was calculated among combinations of four raters for blinded and unblinded versions of the studies. A 4 × 2 × 2 repeated measures design was employed to evaluate the effect of blinding. The interrater agreement for the Jadad scale was poor (kappa 0.37 to 0.39), but agreement improved substantially (kappa 0.53 to 0.59) with removal of the third item (an explanation of withdrawals). Blinding did not significantly affect the Jadad scale scores. A more precise description of how to score the withdrawal item and careful conduct of a practice set of articles might improve interrater agreement. In contrast with the conclusions reached by Jadad, we were unable to demonstrate a significant effect of blinding on the quality scores.

              Systematic review adherence to methodological or reporting quality

              Background

              Guidelines for assessing methodological and reporting quality of systematic reviews (SRs) were developed to contribute to implementing evidence-based health care and the reduction of research waste. As SRs assessing a cohort of SRs are becoming more prevalent in the literature, and with the increased uptake of SR evidence for decision-making, the methodological quality and standard of reporting of SRs are of interest. The objective of this study is to evaluate SR adherence to the Quality of Reporting of Meta-analyses (QUOROM) and PRISMA reporting guidelines and the A Measurement Tool to Assess Systematic Reviews (AMSTAR) and Overview Quality Assessment Questionnaire (OQAQ) quality assessment tools as evaluated in methodological overviews.

              Methods

              The Cochrane Library, MEDLINE®, and EMBASE® databases were searched from January 1990 to October 2014. Title and abstract screening and full-text screening were conducted independently by two reviewers. Reports assessing the quality or reporting of a cohort of SRs of interventions using PRISMA, QUOROM, OQAQ, or AMSTAR were included. All results are reported as frequencies and percentages of reports and SRs, respectively.

              Results

              Of the 20,765 independent records retrieved from electronic searching, 1189 reports were reviewed for eligibility at full text, of which 56 reports (5371 SRs in total) evaluating the PRISMA, QUOROM, AMSTAR, and/or OQAQ tools were included. Notable items include the following: of the SRs using PRISMA, over 85% (1532/1741) provided a rationale for the review and less than 6% (102/1741) provided protocol information. For reports using QUOROM, only 9% (40/449) of SRs provided a trial flow diagram. However, 90% (402/449) described the explicit clinical problem and review rationale in the introduction section. Of reports using AMSTAR, 30% (534/1794) used duplicate study selection and data extraction. Conversely, 80% (1439/1794) of SRs provided study characteristics of included studies. In terms of OQAQ, 37% (499/1367) of the SRs assessed risk of bias (validity) in the included studies, while 80% (1112/1387) reported the criteria for study selection.

              Conclusions

              Although reporting guidelines and quality assessment tools exist, reporting and methodological quality of SRs are inconsistent. Mechanisms to improve adherence to established reporting guidelines and methodological assessment tools are needed to improve the quality of SRs.

              Electronic supplementary material

              The online version of this article (doi:10.1186/s13643-017-0527-2) contains supplementary material, which is available to authorized users.

                Author and article information

                Contributors
                evan.mayo-wilson@jhu.edu
                sueko715@gmail.com
                RCHUCK@montefiore.org
                tli19@jhu.edu
                Journal
                BMC Ophthalmology (BMC Ophthalmol)
                BioMed Central (London)
                ISSN: 1471-2415
                5 September 2017
                2017; 17: 164
                Affiliations
                [1] Department of Epidemiology, Johns Hopkins University Bloomberg School of Public Health, 615 North Wolfe Street, Baltimore, MD 21205, USA
                [2] Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Montefiore Medical Center, 3332 Rochambeau Avenue, Centennial, Room 306, New York, NY 10467, USA
                Author information
                http://orcid.org/0000-0001-6126-2459
                Article
                DOI: 10.1186/s12886-017-0561-9
                PMCID: PMC5584039
                PMID: 28870179
                © The Author(s). 2017

                Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

                History
                Received: 5 June 2017
                Accepted: 30 August 2017
                Funding
                Funded by: FundRef http://dx.doi.org/10.13039/100000053, National Eye Institute;
                Award ID: U01 EY020522
                Categories
                Research Article

                Ophthalmology & Optometry
                systematic review standards, refractive error, clinical guidelines, research waste
