Assessment of interrater agreement using a binomial-based statistic: K(bin)
This project develops K(bin), a relatively simple, binomial-based statistic for assessing interrater agreement in which expected agreement is calculated a priori from the number of raters involved in the study and the number of categories on the rating tool. The statistic is logical in interpretation, easily calculated, stable for small sample sizes, and applicable over a wide range of possible combinations, from the simplest case of two raters using a binomial scale to multiple raters using a multiple-level scale. Tables of expected agreement values and tables of critical values for K(bin), which include power to detect three levels of the population parameter K for n from 2 to 30 and observed agreement $\ge$ .70, calculated at alpha = .05, .025, and .01, are included. An example is also included which describes the use of the tables for planning and evaluating an interrater reliability study using the statistic K(bin).
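The abstract does not reproduce the formula for K(bin) itself, but the idea it describes can be sketched: compute expected agreement a priori from the number of raters and categories, then correct observed agreement for chance. The sketch below is a minimal illustration under two assumptions not stated in the abstract: that chance expected agreement for r raters and c equiprobable categories is the probability that all raters independently pick the same category, P_e = (1/c)^(r-1), and that K(bin) takes the usual kappa-style form (P_o - P_e)/(1 - P_e). The function names are hypothetical.

```python
def expected_agreement(raters: int, categories: int) -> float:
    """Assumed chance probability that all raters independently agree,
    with equiprobable categories: c * (1/c)**r = (1/c)**(r - 1)."""
    return (1.0 / categories) ** (raters - 1)

def k_bin(observed_agreement: float, raters: int, categories: int) -> float:
    """Assumed kappa-style chance correction of observed agreement."""
    p_e = expected_agreement(raters, categories)
    return (observed_agreement - p_e) / (1.0 - p_e)

# Simplest case from the abstract: two raters on a binomial (2-category)
# scale; suppose observed agreement is 0.85.
print(expected_agreement(2, 2))  # 0.5
print(k_bin(0.85, 2, 2))         # 0.7 after chance correction
```

Under these assumptions, adding raters or categories lowers expected chance agreement, so the same observed agreement yields a larger chance-corrected value; the dissertation's tables serve the same role without hand calculation.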
DeFord, Linda LaMoyne Whitham, "Assessment of interrater agreement using a binomial-based statistic: K(bin)" (1996). Texas Medical Center Dissertations (via ProQuest). AAI1385349.