User:Miranche/Cohen's Kappa

Cohen's kappa coefficient is a statistical measure of inter-rater agreement for qualitative (categorical) items. It is generally thought to be a more robust measure than simple percent agreement, since κ takes into account the possibility of agreement occurring by chance. However, some researchers have expressed concern over κ's tendency to take the observed categories' frequencies as givens, which can have the effect of underestimating agreement for a category that is also commonly used; for this reason, κ is considered an overly conservative measure of agreement.
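
The calculations below use the standard definition

    κ = (Pr(a) - Pr(e)) / (1 - Pr(e)),

where Pr(a) is the relative observed agreement among the raters and Pr(e) is the hypothetical probability of chance agreement, estimated from the data by assuming each rater assigns categories at random according to that rater's own observed frequencies. If the raters are in complete agreement then κ = 1; if there is no agreement beyond what would be expected by chance, κ = 0.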

Example

Suppose that you were analyzing data related to people applying for a grant. Each grant proposal was rated by two raters, and each rater said either "Yes" or "No" to the proposal. Suppose the data were as follows, where rows give rater A's ratings and columns give rater B's:

         B: Yes   B: No
A: Yes     20       5
A: No      10      15

In the notation from above, the observed proportionate agreement is Pr(a) = (20 + 15)/50 = 0.70.

To calculate Pr(e) (the probability of random agreement) we note that:

  • Rater A said "Yes" to 25 applicants and "No" to 25 applicants. Thus rater A said "Yes" 50% of the time.
  • Rater B said "Yes" to 30 applicants and "No" to 20 applicants. Thus rater B said "Yes" 60% of the time.

Therefore the probability that both raters would say "Yes" at random is 0.50 * 0.60 = 0.30, and the probability that both would say "No" at random is 0.50 * 0.40 = 0.20. The overall probability of chance agreement is thus Pr(e) = 0.30 + 0.20 = 0.50.

Applying the formula for Cohen's kappa, we get:

    κ = (Pr(a) - Pr(e)) / (1 - Pr(e)) = (0.70 - 0.50) / (1 - 0.50) = 0.40
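
The same arithmetic can be checked with a short script. The sketch below is plain Python with no external libraries; the variable names are illustrative, and the table is entered exactly as in the example above.

    # Cohen's kappa from the 2x2 table above (rows: rater A, columns: rater B).
    table = [[20, 5],    # A = Yes: (B = Yes, B = No)
             [10, 15]]   # A = No:  (B = Yes, B = No)

    n = sum(sum(row) for row in table)                 # 50 proposals in total
    pr_a = sum(table[i][i] for i in range(2)) / n      # observed agreement = 0.70

    row_totals = [sum(row) for row in table]           # rater A: 25 Yes, 25 No
    col_totals = [sum(col) for col in zip(*table)]     # rater B: 30 Yes, 20 No
    pr_e = sum(r * c for r, c in zip(row_totals, col_totals)) / n ** 2   # chance agreement = 0.50

    kappa = (pr_a - pr_e) / (1 - pr_e)                 # (0.70 - 0.50) / 0.50 = 0.40
    print(pr_a, pr_e, kappa)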

Online Calculator

http://www.graphpad.com/quickcalcs/kappa1.cfm
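
If a software check is preferred to a web calculator, scikit-learn's cohen_kappa_score reproduces the result; the minimal sketch below assumes scikit-learn is installed and expands the 2x2 table above into one rating per proposal.

    from sklearn.metrics import cohen_kappa_score

    # 20 proposals rated Yes/Yes, 5 Yes/No, 10 No/Yes, 15 No/No.
    rater_a = ["Yes"] * 25 + ["No"] * 25
    rater_b = ["Yes"] * 20 + ["No"] * 5 + ["Yes"] * 10 + ["No"] * 15

    print(cohen_kappa_score(rater_a, rater_b))  # approximately 0.40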