Kappa Statistic

The Kappa statistic measures how much a classifier's results differ from chance classification. (Kappa is a measure of agreement normalized for chance agreement.)

>       P(A) - P(E)
> K = ---------------
>        1 - P(E)
>
> Where P(A) is the percentage agreement (e.g., between your classifier and
> ground truth) and P(E) is the chance agreement.  K=1 indicates perfect
> agreement, K=0 indicates chance agreement.

P(A) is the observed agreement rate of the classifier, and P(E) is the agreement rate expected from chance classification.
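For a quick worked example (the numbers are invented for illustration): if the classifier agrees with the ground truth on 70% of instances, P(A) = 0.70, and the class marginals put chance agreement at P(E) = 0.50, then K = (0.70 - 0.50) / (1 - 0.50) = 0.40.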

K = 1 means the classifier's decisions agree perfectly with the ground truth, i.e., they are as far from chance classification as possible (the positive case); K = 0 means the classifier's decisions are indistinguishable from chance classification (i.e., the classifier adds no value).

K = -1 means the classifier's decisions are even worse than chance classification.
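To make the formula concrete, here is a minimal Python sketch that computes Cohen's kappa directly from a confusion matrix; the `cohens_kappa` helper and the 2x2 matrix below are hypothetical examples, not taken from Weka or any particular library.

```python
# Minimal sketch: Cohen's kappa from a square confusion matrix.
# Rows are ground-truth classes, columns are predicted classes.

def cohens_kappa(confusion):
    total = sum(sum(row) for row in confusion)
    # P(A): observed agreement = fraction of instances on the diagonal.
    p_a = sum(confusion[i][i] for i in range(len(confusion))) / total
    # P(E): chance agreement, from the row and column marginals.
    p_e = sum(
        (sum(confusion[i]) / total)                 # P(true class = i)
        * (sum(row[i] for row in confusion) / total)  # P(predicted = i)
        for i in range(len(confusion))
    )
    return (p_a - p_e) / (1 - p_e)

# Hypothetical 2x2 example: 50 instances, two classes.
matrix = [
    [20, 5],   # true class 0: 20 predicted as 0, 5 as 1
    [10, 15],  # true class 1: 10 predicted as 0, 15 as 1
]
print(cohens_kappa(matrix))  # P(A)=0.70, P(E)=0.50, so K ≈ 0.40
```

The printed value matches the worked example above: the diagonal gives P(A) = 0.70 and the marginal products give P(E) = 0.50.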

In general, the Kappa statistic tends to correlate positively with a classifier's AUC and accuracy, so the closer K is to 1, the better.

 

For more, see:

1. http://en.wikipedia.org/wiki/Cohen's_kappa

2. http://www.statistics.com/resources/glossary/k/kappa.php
