
      High agreement but low kappa: II. Resolving the paradoxes

      Journal of Clinical Epidemiology
      Elsevier BV


Most cited references (11)


          Index for rating diagnostic tests.


            On the Methods of Measuring Association Between Two Attributes

G. Yule (1912)

              A proposed solution to the base rate problem in the kappa statistic.

Because it corrects for chance agreement, kappa (κ) is a useful statistic for calculating interrater concordance. However, kappa has been criticized because its computed value is a function not only of sensitivity and specificity, but also the prevalence, or base rate, of the illness of interest in the particular population under study. For example, it has been shown for a hypothetical case in which sensitivity and specificity remain constant at .95 each, that kappa falls from .81 to .14 when the prevalence drops from 50% to 1%. Thus, differing values of kappa may be entirely due to differences in prevalence. Calculation of agreement presents different problems depending on whether one is studying reliability or validity. We discuss quantification of agreement in the pure validity case, the pure reliability case, and those studies that fall somewhere between. As a way of minimizing the base rate problem, we propose a statistic for the quantification of agreement (the Y statistic), which can be related to kappa but which is completely independent of prevalence in the case of validity studies and relatively so in the case of reliability.
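The numbers quoted in this abstract can be reproduced directly. The sketch below assumes the standard two-rater reliability setup: each rater has the stated sensitivity and specificity against the true status, and the raters judge independently given that status (this conditional-independence model is our assumption, but it yields exactly the .81 and .14 quoted above). The `yule_y` function assumes the proposed Y statistic is Yule's coefficient of colligation from the 1912 paper cited among the references; the function names are ours.

```python
# Sketch of the kappa base-rate paradox described in the abstract.
# Assumption: two raters, each with the given sensitivity and specificity
# against the true status, rating independently given that status.

def kappa_from_rates(sens, spec, prev):
    """Cohen's kappa between two such raters at a given prevalence."""
    # Probability that a single rater calls a case "positive".
    q = prev * sens + (1 - prev) * (1 - spec)
    # Observed agreement: both positive or both negative, conditioned on truth.
    p_obs = (prev * (sens ** 2 + (1 - sens) ** 2)
             + (1 - prev) * (spec ** 2 + (1 - spec) ** 2))
    # Chance agreement from the identical marginals of the two raters.
    p_exp = q ** 2 + (1 - q) ** 2
    return (p_obs - p_exp) / (1 - p_exp)

def yule_y(sens, spec):
    """Yule's coefficient of colligation for a rater vs. the true status.
    The prevalence factor p(1-p) cancels from every cell product, so the
    result depends only on sensitivity and specificity."""
    root_ad = (sens * spec) ** 0.5
    root_bc = ((1 - sens) * (1 - spec)) ** 0.5
    return (root_ad - root_bc) / (root_ad + root_bc)

print(round(kappa_from_rates(0.95, 0.95, 0.50), 2))  # 0.81
print(round(kappa_from_rates(0.95, 0.95, 0.01), 2))  # 0.14
print(round(yule_y(0.95, 0.95), 2))                  # 0.9 at any prevalence
```

Note that the observed agreement `p_obs` is 0.905 at both prevalences; only the chance-agreement term moves, which is the whole paradox: kappa collapses while the raters agree just as often.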

                Author and article information

Journal
Journal of Clinical Epidemiology (Elsevier BV)
ISSN: 0895-4356
January 1990
Volume: 43
Issue: 6
Pages: 551-558

Article
DOI: 10.1016/0895-4356(90)90159-M
ScienceOpen ID: 28ec3515-7383-45d9-a3de-37ab3ec3a196
© 1990
License: http://www.elsevier.com/tdm/userlicense/1.0/
