Mutual Information as a Measure of Intercoder Agreement
Description
When two raters classify a series of observations, it is useful to have an index of agreement that accounts for both the simple rate of agreement and the complexity of the rating task. Information theory provides a measure of the quantity of information in a list of classifications, which can be used to produce such an index. A normalized weighted mutual information index improves upon traditional intercoder agreement indices in several ways, chief among them: it requires no model of error generation before use; it makes comparison across experiments easier; and it rates agreement based on the distribution of agreement across categories, not just an overall agreement level.
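The quantity described above can be illustrated with a generic normalized mutual information computation between two coders' label lists. This is a minimal sketch only: the mutual information of the joint label distribution, normalized here by the mean of the two marginal entropies. The paper's specific weighting and normalization scheme is not given in this abstract and may differ.

```python
import math
from collections import Counter

def normalized_mutual_information(labels_a, labels_b):
    """Generic normalized mutual information between two raters' labels.

    Sketch only: normalizes MI by the mean of the two marginal
    entropies; the paper's exact weighting scheme may differ.
    """
    n = len(labels_a)
    joint = Counter(zip(labels_a, labels_b))   # joint label counts
    pa = Counter(labels_a)                     # rater A marginal counts
    pb = Counter(labels_b)                     # rater B marginal counts

    def entropy(counts):
        # Shannon entropy (bits) of an empirical distribution
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        mi += p_ab * math.log2(p_ab / ((pa[a] / n) * (pb[b] / n)))

    denom = (entropy(pa) + entropy(pb)) / 2
    return mi / denom if denom > 0 else 0.0
```

Two identical, non-degenerate label lists score 1.0; statistically independent ratings score 0.0, regardless of how many categories the task uses.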
Files
mutual-information-as-a-measure-of-intercoder-agreement.pdf