
      The STARD Statement for Reporting Studies of Diagnostic Accuracy: Explanation and Elaboration


          Abstract

The quality of reporting of studies of diagnostic accuracy is less than optimal. Complete and accurate reporting is necessary to enable readers to assess the potential for bias in the study and to evaluate the generalisability of the results. A group of scientists and editors has developed the STARD (Standards for Reporting of Diagnostic Accuracy) statement to improve the quality of reporting of studies of diagnostic accuracy. The statement consists of a checklist of 25 items and a flow diagram that authors can use to ensure that all relevant information is present. This explanatory document aims to facilitate the use, understanding and dissemination of the checklist. The document contains a clarification of the meaning, rationale and optimal use of each item on the checklist, as well as a short summary of the available evidence on bias and applicability. The STARD statement, checklist, flowchart and this explanation and elaboration document should be useful resources to improve reporting of diagnostic accuracy studies. Complete and informative reporting can only lead to better decisions in healthcare.


Most cited references (65)


          Assessing the generalizability of prognostic information.

          Physicians are often asked to make prognostic assessments but often worry that their assessments will prove inaccurate. Prognostic systems were developed to enhance the accuracy of such assessments. This paper describes an approach for evaluating prognostic systems based on the accuracy (calibration and discrimination) and generalizability (reproducibility and transportability) of the system's predictions. Reproducibility is the ability to produce accurate predictions among patients not included in the development of the system but from the same population. Transportability is the ability to produce accurate predictions among patients drawn from a different but plausibly related population. On the basis of the observation that the generalizability of a prognostic system is commonly limited to a single historical period, geographic location, methodologic approach, disease spectrum, or follow-up interval, we describe a working hierarchy of the cumulative generalizability of prognostic systems. This approach is illustrated in a structured review of the Dukes and Jass staging systems for colon and rectal cancer and applied to a young man with colon cancer. Because it treats the development of the system as a "black box" and evaluates only the performance of the predictions, the approach can be applied to any system that generates predicted probabilities. Although the Dukes and Jass staging systems are discrete, the approach can also be applied to systems that generate continuous predictions and, with some modification, to systems that predict over multiple time periods. Like any scientific hypothesis, the generalizability of a prognostic system is established by being tested and being found accurate across increasingly diverse settings. The more numerous and diverse the settings in which the system is tested and found accurate, the more likely it will generalize to an untested setting.
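As a concrete illustration of the two accuracy components described above, the following Python sketch computes a concordance (c) statistic for discrimination and a simple observed-versus-expected comparison for calibration. The predicted probabilities and outcomes are hypothetical, and the paper does not prescribe these exact computations; this is only a minimal sketch of what the terms mean operationally.

```python
# Minimal sketch of discrimination and calibration for a prognostic system,
# using hypothetical predicted probabilities and observed outcomes.

preds = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]  # predicted risk of event
outcomes = [0, 0, 1, 0, 1, 0, 1, 1]                # observed event (1) or not (0)

def c_statistic(preds, outcomes):
    """Discrimination: fraction of event/non-event pairs in which the
    event case received the higher predicted risk (ties count 0.5)."""
    pairs = concordant = 0
    for pi, yi in zip(preds, outcomes):
        for pj, yj in zip(preds, outcomes):
            if yi == 1 and yj == 0:
                pairs += 1
                if pi > pj:
                    concordant += 1
                elif pi == pj:
                    concordant += 0.5
    return concordant / pairs

def calibration_in_the_large(preds, outcomes):
    """Calibration: mean predicted risk versus observed event rate."""
    return sum(preds) / len(preds), sum(outcomes) / len(outcomes)

print(f"c statistic: {c_statistic(preds, outcomes):.2f}")        # 0.81
expected, observed = calibration_in_the_large(preds, outcomes)
print(f"mean predicted {expected:.2f} vs observed {observed:.2f}")
```

Because both measures use only the predicted probabilities and outcomes, they treat the prognostic system as a "black box," exactly as the abstract describes; they can therefore be applied to any system that generates predictions, in any new setting where its transportability is being tested.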

            Problems of spectrum and bias in evaluating the efficacy of diagnostic tests.

            To determine why many diagnostic tests have proved to be valueless after optimistic introduction into medical practice, we reviewed a series of investigations and identified two major problems that can cause erroneous statistical results for the "sensitivity" and "specificity" indexes of diagnostic efficacy. Unless an appropriately broad spectrum is chosen for the diseased and nondiseased patients who comprise the study population, the diagnostic test may receive falsely high values for its "rule-in" and "rule-out" performances. Unless the interpretation of the test and the establishment of the true diagnosis are done independently, bias may falsely elevate the test's efficacy. Avoidance of these problems might have prevented the early optimism and subsequent disillusionment with the diagnostic value of two selected examples: the carcinoembryonic antigen and nitro-blue tetrazolium tests.
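To make the "sensitivity" and "specificity" indexes concrete, here is a minimal Python sketch of the underlying 2×2-table arithmetic. The counts are hypothetical, chosen only to illustrate how a narrowed patient spectrum can inflate both indexes in the way the abstract describes.

```python
# Sensitivity and specificity from a 2x2 diagnostic table.
# All counts below are hypothetical, for illustration only.

def sensitivity(tp: int, fn: int) -> float:
    """Proportion of diseased patients the test correctly rules in."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Proportion of nondiseased patients the test correctly rules out."""
    return tn / (tn + fp)

# Broad spectrum: includes early and mild disease, which the test often misses.
print(sensitivity(tp=60, fn=40))  # 0.60
print(specificity(tn=85, fp=15))  # 0.85

# Narrow spectrum: only advanced cases versus healthy volunteers.
# The same test now looks far better than it would perform in practice.
print(sensitivity(tp=95, fn=5))   # 0.95
print(specificity(tn=98, fp=2))   # 0.98
```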

              A consumer's guide to subgroup analyses.

              The extent to which a clinician should believe and act on the results of subgroup analyses of data from randomized trials or meta-analyses is controversial. Guidelines are provided in this paper for making these decisions. The strength of inference regarding a proposed difference in treatment effect among subgroups is dependent on the magnitude of the difference, the statistical significance of the difference, whether the hypothesis preceded or followed the analysis, whether the subgroup analysis was one of a small number of hypotheses tested, whether the difference was suggested by comparisons within or between studies, the consistency of the difference, and the existence of indirect evidence that supports the difference. Application of these guidelines will assist clinicians in making decisions regarding whether to base a treatment decision on overall results or on the results of a subgroup analysis.
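The statistical-significance criterion above is usually assessed with a test for interaction rather than by comparing within-subgroup p-values. The sketch below shows the standard z-test for the difference between two independent subgroup effect estimates; the effect sizes and standard errors are hypothetical, and this is one common approach rather than a procedure mandated by the paper.

```python
# Test for interaction between two subgroup treatment effects:
# z = (d1 - d2) / sqrt(se1^2 + se2^2), assuming independent estimates.
# Effect sizes and standard errors below are hypothetical.
import math

def interaction_z(d1, se1, d2, se2):
    """z statistic for the difference between two subgroup effects."""
    return (d1 - d2) / math.sqrt(se1**2 + se2**2)

def two_sided_p(z):
    """Two-sided p-value from the standard normal distribution."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Treatment effect looks larger in subgroup 1 than in subgroup 2 ...
z = interaction_z(d1=-0.30, se1=0.12, d2=-0.05, se2=0.15)
print(f"z = {z:.2f}, p = {two_sided_p(z):.2f}")  # z = -1.30, p = 0.19
# ... but the interaction test shows the difference could easily be chance.
```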

                Author and article information

Journal
Clinical Chemistry
American Association for Clinical Chemistry (AACC)
ISSN (print): 0009-9147
ISSN (electronic): 1530-8561
January 01 2003
Volume: 49
Issue: 1
Pages: 7-18
                Affiliations
                [1 ]Department of Clinical Epidemiology and Biostatistics, Academic Medical Center—University of Amsterdam, 1100 DE Amsterdam, The Netherlands
                [2 ]Department of Pathology, University of Virginia, Charlottesville, VA 22903
                [3 ]Clinical Chemistry, Washington, DC 20037
[4 ]Center for Statistical Sciences, Brown University, Providence, RI 02912
                [5 ]Centre for General Practice, University of Queensland, Herston QLD 4006, Australia
                [6 ]Department of Public Health & Community Medicine, University of Sydney, Sydney NSW 2006, Australia
[7 ]Chalmers Research Group, Ottawa, Ontario, K1N 6M4 Canada
                [8 ]Institute for Health Policy Studies, University of California, San Francisco, San Francisco, CA 94118
                [9 ]Journal of the American Medical Association, Chicago, IL 60610
                [10 ]Institute for Research in Extramural Medicine, Free University, 1081 BT Amsterdam, The Netherlands
Article
DOI: 10.1373/49.1.7
PMID: 12507954
© 2003
License: https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model

