Abstract
Since the introduction of Cohen's kappa as a chance-adjusted measure of agreement
between two observers, several "paradoxes" in its interpretation have been pointed
out. The difficulties occur because kappa not only measures agreement but is also
affected in complex ways by the presence of bias between observers and by the distributions
of data across the categories that are used ("prevalence"). In this paper, new indices
that provide independent measures of bias and prevalence, as well as of observed agreement,
are defined, and a simple formula is derived that expresses kappa in terms of these
three indices. When comparisons are made between agreement studies, it can be misleading
to report kappa values alone, and it is recommended that researchers also include
quantitative indicators of bias and prevalence.
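
To make the relationships described above concrete, the sketch below computes the three indices for a 2x2 agreement table and recovers kappa from them. It assumes the conventional definitions for cell proportions a (both observers positive), b and c (the two discordant cells), and d (both observers negative): observed agreement p_o = a + d, a bias index BI = b - c, and a prevalence index PI = a - d. The closed-form expression for kappa used here follows algebraically from these definitions; it is offered as an illustration and may differ in notation from the formula derived in the paper itself.

# Minimal sketch: bias index, prevalence index, and kappa for a 2x2 table.
# Cell counts: a = both observers "yes", b/c = discordant cells, d = both "no".
# Definitions assumed here: p_o = a + d, BI = b - c, PI = a - d (as proportions).

def agreement_indices(a, b, c, d):
    total = a + b + c + d
    a, b, c, d = (x / total for x in (a, b, c, d))  # normalise counts to proportions

    p_o = a + d                # observed agreement
    bias_index = b - c         # imbalance between the two discordant cells
    prevalence_index = a - d   # imbalance between the two concordant cells

    # Chance-expected agreement and Cohen's kappa computed directly ...
    p_e = (a + b) * (a + c) + (c + d) * (b + d)
    kappa_direct = (p_o - p_e) / (1 - p_e)

    # ... and kappa re-expressed in terms of p_o, BI and PI alone
    # (this identity follows algebraically from the definitions above).
    kappa_from_indices = ((2 * p_o - 1) + bias_index**2 - prevalence_index**2) / \
                         (1 + bias_index**2 - prevalence_index**2)

    return p_o, bias_index, prevalence_index, kappa_direct, kappa_from_indices

if __name__ == "__main__":
    # Hypothetical counts: 50 yes/yes, 10 yes/no, 5 no/yes, 35 no/no.
    print(agreement_indices(50, 10, 5, 35))

For the hypothetical counts above, both expressions return the same value (kappa of roughly 0.69), illustrating that writing kappa in terms of observed agreement, bias and prevalence is an exact re-expression rather than an approximation, and that the two auxiliary indices can be reported alongside kappa at no extra cost.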