Kappa Agreement

We find that the second case shows greater similarity between A and B than the first. This is because, although the percentage of agreement is the same, the percentage of agreement that would occur "by chance" is significantly higher in the first case (0.54 versus 0.46). Kappa is an index that compares the observed agreement with a baseline agreement. However, researchers should consider carefully whether Kappa's baseline agreement is relevant to their particular research question. Kappa's baseline is often described as the agreement due to chance, which is only partially correct. Kappa's baseline agreement is the agreement that would be expected from random allocation, given the quantities specified by the marginal totals of the square contingency table. Thus Kappa = 0 when the observed allocation is apparently random, regardless of the quantity disagreement constrained by the marginal totals. In many applications, however, investigators should be more interested in the quantity disagreement in the marginal totals than in the allocation disagreement described by the additional information on the diagonal of the square contingency table. For many applications, therefore, Kappa's baseline is more distracting than revealing.

Consider the following example: Zhao, Liu & Deng (2013) reviewed 22 intercoder reliability indices and found that every one of them makes unrealistic assumptions about coder behavior, which produces paradoxes and anomalies.
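To make the baseline concrete, the following is a minimal sketch of how Kappa can be computed from a square contingency table: the observed agreement p_o is the proportion of cases on the diagonal, and the baseline p_e is the agreement expected from random allocation given the marginal totals. The counts below are hypothetical and are not the two cases discussed above.

```python
def cohen_kappa(table):
    """Cohen's kappa from a square contingency table (list of rows of counts)."""
    k = len(table)
    n = sum(sum(row) for row in table)                    # total number of rated items
    p_o = sum(table[i][i] for i in range(k)) / n          # observed agreement (diagonal)
    row_totals = [sum(row) for row in table]
    col_totals = [sum(table[i][j] for i in range(k)) for j in range(k)]
    # Baseline: agreement expected from random allocation given the marginal totals
    p_e = sum(row_totals[i] * col_totals[i] for i in range(k)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 2x2 table for raters A and B (illustrative counts only)
table = [[45, 15],
         [15, 25]]
print(round(cohen_kappa(table), 3))   # p_o = 0.70, p_e = 0.52, kappa = 0.375
```

Note that Kappa depends only on how the diagonal compares with what the marginal totals predict, which is exactly why a random-looking allocation yields Kappa = 0 even when the coders agree closely on the category proportions.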

Krippendorff's α makes more of these assumptions, thereby producing more paradoxes and anomalies than any other index. Professor Krippendorff (2013) replied that "most of the authors' findings are artifacts of being misled by strange, almost conspiratorial uses of language." The commentary repeated Krippendorff's long-standing position that α is the standard measure of reliability and the only index qualified to perform this function (Hayes & Krippendorff, 2007; Krippendorff, 2004b, 2016). This paper continues that dialogue. We present a literature review to show that the scientific community, including Krippendorff, has defined intercoder reliability as intercoder agreement, and that Krippendorff's α, like all of its major competitors, was developed and claimed to measure intercoder agreement…
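For comparison with Kappa, here is a minimal sketch of Krippendorff's α for nominal data with two coders and no missing values, computed as α = 1 − D_o/D_e from a coincidence matrix; the coded units below are hypothetical and purely illustrative.

```python
from collections import Counter

def krippendorff_alpha_nominal(coder1, coder2):
    """Krippendorff's alpha for nominal data, two coders, no missing values.

    alpha = 1 - D_o / D_e, where D_o is the observed disagreement and D_e is
    the disagreement expected by chance, both taken from the coincidence matrix.
    """
    # Coincidence counts: each unit contributes both ordered pairs (a, b) and (b, a)
    pairs = Counter()
    for a, b in zip(coder1, coder2):
        pairs[(a, b)] += 1
        pairs[(b, a)] += 1

    n = sum(pairs.values())                 # total number of pairable values
    marginals = Counter()                   # pooled frequency of each category
    for (a, _), count in pairs.items():
        marginals[a] += count

    d_o = sum(c for (a, b), c in pairs.items() if a != b) / n
    d_e = sum(marginals[a] * marginals[b]
              for a in marginals for b in marginals if a != b) / (n * (n - 1))
    return 1.0 - d_o / d_e

# Hypothetical codes assigned by two coders to ten units (illustrative only)
c1 = ["x", "x", "y", "y", "x", "z", "z", "x", "y", "x"]
c2 = ["x", "x", "y", "x", "x", "z", "y", "x", "y", "x"]
print(round(krippendorff_alpha_nominal(c1, c2), 3))   # ≈ 0.675 for these codes
```

Unlike Kappa, α builds its expected disagreement from the values pooled across all coders rather than from each coder's separate marginals, one of the modelling choices at issue in the debate described above.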