Appendix B

 

Cohen’s kappa

Cohen’s kappa can be used to assess the reliability of an individual (item) code.

  • In the formula for kappa below, Pr(a) is the relative observed agreement among coders for a given item, and Pr(e) is the hypothetical probability of chance agreement, calculated from the observed data using the probabilities of each coder assigning each possible code category for that item (a small worked sketch follows this list).

\kappa = \frac{\Pr(a) - \Pr(e)}{1 - \Pr(e)}

  • If the coders are in complete agreement, kappa equals 1. If the observed agreement is no better than what would be expected by chance, kappa equals 0; if agreement is worse than chance, kappa is negative.
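
To make the calculation concrete, below is a minimal Python sketch that computes kappa for a single item coded by two coders. It is not part of the original appendix: the function name cohens_kappa and the example label lists are hypothetical and serve only to illustrate the formula above.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two equal-length sequences of item codes."""
    if len(coder_a) != len(coder_b) or not coder_a:
        raise ValueError("coder sequences must be non-empty and the same length")
    n = len(coder_a)

    # Pr(a): relative observed agreement between the two coders.
    pr_a = sum(a == b for a, b in zip(coder_a, coder_b)) / n

    # Pr(e): chance agreement, from each coder's marginal probability
    # of assigning each possible code category.
    counts_a = Counter(coder_a)
    counts_b = Counter(coder_b)
    categories = set(coder_a) | set(coder_b)
    pr_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)

    if pr_e == 1.0:  # both coders always used the same single category
        return 1.0
    return (pr_a - pr_e) / (1 - pr_e)

# Hypothetical example: two coders assign codes "1"/"2" to ten cases.
coder_1 = ["1", "1", "2", "2", "1", "2", "1", "1", "2", "2"]
coder_2 = ["1", "1", "2", "1", "1", "2", "1", "2", "2", "2"]
# Pr(a) = 0.8 and Pr(e) = 0.5, so kappa = (0.8 - 0.5) / (1 - 0.5) = 0.6
print(round(cohens_kappa(coder_1, coder_2), 3))
```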