TP: number of positive samples predicted correctly (as positive)
FP: number of negative samples predicted incorrectly (as positive)
TN: number of negative samples predicted correctly (as negative)
FN: number of positive samples predicted incorrectly (as negative)
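For a binary problem these four counts can be read directly off scikit-learn's confusion_matrix. A minimal sketch, using made-up labels purely for illustration (1 = positive, 0 = negative):
>>> from sklearn.metrics import confusion_matrix
>>> y_true = [1, 1, 0, 0, 1, 0]   # hypothetical labels, for illustration only
>>> y_pred = [1, 0, 0, 1, 1, 0]
>>> tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()  # binary case flattens to tn, fp, fn, tp
>>> print(tp, fp, tn, fn)
2 1 2 1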
1. accuracy_score(y_true, y_pred)
Accuracy is the proportion of all predictions that are correct: right / all.
Example:
>>> from sklearn.metrics import accuracy_score
>>> y_pred = [0, 2, 1, 3]
>>> y_true = [0, 1, 2, 3]
>>> accuracy_score(y_true, y_pred)
0.5
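To make the right / all reading concrete, the same 0.5 can be reproduced by hand (continuing the session above; this is just a sketch, not part of the sklearn API):
>>> sum(p == t for p, t in zip(y_pred, y_true)) / len(y_true)  # 2 matches out of 4
0.5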
2. precision_score(y_true, y_pred, average='macro')
recall_score(y_true, y_pred, average='macro')
f1_score(y_true, y_pred, average='macro')
Precision: of all samples predicted as positive, the fraction that are truly positive; precision = TP / (TP + FP).
Recall: of all truly positive samples, the fraction that the model finds; recall = TP / (TP + FN).
F1-score: the harmonic mean of precision and recall, F1 = 2 * P * R / (P + R); it is high only when both precision and recall are high.
Macro averaging computes TP, FP, FN, precision, recall and F1-score for each class separately and then averages them across classes, giving the macro precision_score, macro recall_score and macro F1-score.
Example:
Sample index:    1, 2, 3, 4, 5, 6, 7, 8, 9
True class:      A, A, A, A, B, B, B, C, C
Predicted class: A, A, B, C, B, B, C, B, C
1) accuracy: (2 + 2 + 1) / 9 ≈ 0.556
2) macro precision, recall and F1-score
Per-class TP, FP and FN counts:
     A  B  C  Total
TP   2  2  1  5
FP   0  2  2  4
FN   2  1  1  4
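Instead of tallying these by hand, the same per-class counts can be read off a multiclass confusion matrix. A sketch using the nine labels above, encoded as plain Python string lists:
>>> import numpy as np
>>> from sklearn.metrics import confusion_matrix
>>> y_true = ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'C', 'C']
>>> y_pred = ['A', 'A', 'B', 'C', 'B', 'B', 'C', 'B', 'C']
>>> cm = confusion_matrix(y_true, y_pred, labels=['A', 'B', 'C'])  # rows = true class, columns = predicted class
>>> tp = np.diag(cm)            # diagonal: correctly predicted per class
>>> fp = cm.sum(axis=0) - tp    # column total minus diagonal
>>> fn = cm.sum(axis=1) - tp    # row total minus diagonal
>>> print(tp, fp, fn)
[2 2 1] [0 2 2] [2 1 1]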
Precision for class A: PA = 2/(2+0) = 1
Recall for class A:    RA = 2/(2+2) = 0.5
F1-score for class A:  FA = 2*(1*0.5)/(1+0.5) = 0.667
Precision for class B: PB = 2/(2+2) = 0.5
Recall for class B:    RB = 2/(2+1) = 0.667
F1-score for class B:  FB = 2*(0.5*0.667)/(0.5+0.667) ≈ 0.571
Precision for class C: PC = 1/(1+2) = 0.333
Recall for class C:    RC = 1/(1+1) = 0.5
F1-score for class C:  FC = 2*(0.333*0.5)/(0.333+0.5) ≈ 0.4
macro precision-score = (1 + 0.5 + 0.333)/3 ≈ 0.611
macro recall-score = (0.5 + 0.667 + 0.5)/3 ≈ 0.556
macro F1-score = (0.667 + 0.571 + 0.4)/3 ≈ 0.546
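The hand-computed numbers can be cross-checked against sklearn itself. A sketch reusing the same string-encoded labels as above (rounded to three decimals, the outputs should match 0.556, 0.611, 0.556 and 0.546):
>>> from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
>>> y_true = ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'C', 'C']
>>> y_pred = ['A', 'A', 'B', 'C', 'B', 'B', 'C', 'B', 'C']
>>> print(round(accuracy_score(y_true, y_pred), 3))            # 5/9
0.556
>>> print(round(precision_score(y_true, y_pred, average='macro'), 3))
0.611
>>> print(round(recall_score(y_true, y_pred, average='macro'), 3))
0.556
>>> print(round(f1_score(y_true, y_pred, average='macro'), 3))
0.546
Note that the macro F1 is the average of the per-class F1 values, not the F1 of the macro precision and macro recall.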