The metrics listed here are for classification problems; regression problems are evaluated with MSE, RMSE, and similar measures, which will be summarized in a later post.
Define a confusion matrix:

                     Predicted positive     Predicted negative
Actual positive      TP (true positive)     FN (false negative)
Actual negative      FP (false positive)    TN (true negative)

Accuracy:
The fraction of all predictions that are correct: Accuracy = (TP + TN) / (TP + FP + FN + TN)

Precision:
Among the samples the model predicts as positive, the fraction that are actually positive: Precision = TP / (TP + FP)

Recall:
Among the samples that are actually positive, the fraction the model finds, i.e., the true positive rate (TPR): Recall = TP / (TP + FN)

F1 score:
Recall and precision usually cannot both be raised in the same model; one tends to fall as the other rises, so their harmonic mean is used as a combined metric: F1 = 2 * Precision * Recall / (Precision + Recall)

Note on determining numerator and denominator (see the sketch after this note):
Numerator: for the true positive rate, true negative rate, false positive rate, and false negative rate, first decide "positive/negative" from the predicted label, then decide "true/false" by comparing the prediction against the actual label.
Denominator: for each of these rates, the denominator is the total count of the corresponding actual class, i.e., a row sum of the confusion matrix (actual positives for TPR and FNR, actual negatives for FPR and TNR).
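To make the cell bookkeeping concrete, here is a minimal sketch (the toy y_true/y_pred arrays are made up for illustration) that pulls TP/FP/FN/TN out of sklearn's confusion matrix and recomputes the metrics above by hand:
import numpy as np
from sklearn.metrics import confusion_matrix
# Toy labels, purely for illustration
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
# For binary labels (0, 1), ravel() returns the cells in the order tn, fp, fn, tp
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)        # = TPR; denominator is the actual positives (row sum)
fpr = fp / (fp + tn)           # denominator is the actual negatives (row sum)
f1 = 2 * precision * recall / (precision + recall)
print(accuracy, precision, recall, fpr, f1)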
# Confusion matrix plotting
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels

def plot_confusion_matrix(y_true, y_pred, classes,
                          normalize=False,
                          title=None,
                          cmap=plt.cm.Blues):
    if not title:
        if normalize:
            title = 'Normalized confusion matrix'
        else:
            title = 'Confusion matrix, without normalization'
    # Compute confusion matrix
    cm = confusion_matrix(y_true, y_pred)
    # Only use the labels that appear in the data
    classes = classes[unique_labels(y_true, y_pred)]
    if normalize:
        # Normalize each row (actual class) so it sums to 1
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
    fig, ax = plt.subplots()
    im = ax.imshow(cm, interpolation='nearest', cmap=cmap)
    ax.figure.colorbar(im, ax=ax)
    # Show all ticks and label them with the class names
    ax.set(xticks=np.arange(cm.shape[1]),
           yticks=np.arange(cm.shape[0]),
           xticklabels=classes, yticklabels=classes,
           title=title,
           ylabel='True label',
           xlabel='Predicted label')
    ax.set_ylim(len(classes) - 0.5, -0.5)
    # Rotate the tick labels and set their alignment
    plt.setp(ax.get_xticklabels(), rotation=45, ha="right",
             rotation_mode="anchor")
    # Annotate each cell with its count (or rate when normalized)
    fmt = '.2f' if normalize else 'd'
    thresh = cm.max() / 2.
    for i in range(cm.shape[0]):
        for j in range(cm.shape[1]):
            ax.text(j, i, format(cm[i, j], fmt),
                    ha="center", va="center",
                    color="white" if cm[i, j] > thresh else "black")
    fig.tight_layout()
    return ax
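The calls below assume y_test (true labels) and y_pre (predicted labels) already exist; as one possible way to produce them for a runnable demo (make_classification and LogisticRegression here are assumptions, not part of the original workflow):
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
# Synthetic binary data, only so the snippets below run end to end
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
y_pre = LogisticRegression().fit(X_train, y_train).predict(X_test)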
# y_test holds the true labels, y_pre the predicted labels; classes is an ndarray of string class names
class_names = np.array(["0", "1"])  # rename to match your own classes
plot_confusion_matrix(y_test, y_pre, classes=class_names, normalize=False)
from sklearn.metrics import accuracy_score, recall_score, f1_score, precision_score
acc = accuracy_score(y_test, y_pre)          # accuracy
precision = precision_score(y_test, y_pre)   # precision
recall = recall_score(y_test, y_pre)         # recall
f1 = f1_score(y_test, y_pre)                 # F1 score
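If you just need the numbers, sklearn's classification_report prints per-class precision, recall, and F1 in one call; a quick sketch on the same y_test/y_pre:
from sklearn.metrics import classification_report
# Per-class precision, recall, F1, and support in one table
print(classification_report(y_test, y_pre, target_names=["0", "1"]))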
The ROC curve plots the false positive rate (FPR) on the x-axis against the true positive rate (TPR) on the y-axis; AUC is the area under this curve (the area enclosed by the curve and the axes).
import numpy as np
import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn.metrics import auc

true = np.array([1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0])
pred = np.array([0.9, 0.3, 0.63, 0.3, 0.48, 0.6, 0.3, 0.5, 0.7, 0.6, 0.4, 0.2, 0.6, 0.5])  # predicted probability of class 1
# pos_label is the label treated as positive and must match the true labels:
# for binary labels (0, 1) or (-1, 1) pos_label is 1; for (1, 2) it is 2
FPR, TPR, threshold = metrics.roc_curve(true, pred, pos_label=1)
AUC = auc(FPR, TPR)
plt.plot(FPR, TPR, label='ROC curve (area = %0.2f)' % AUC, marker='o', color='b', linestyle='-')
plt.legend(loc='lower right')
plt.xlabel('FPR')
plt.ylabel('TPR')
plt.show()
Interpretation:
The smaller the FPR and the larger the TPR, the better the model, i.e., the closer the curve comes to the ideal point (0, 1) in the upper-left corner; equivalently, the larger the AUC, the better.
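As a sanity check, roc_auc_score computes the same area directly from labels and scores, without building the curve first (reusing the true/pred arrays above):
from sklearn.metrics import roc_auc_score
# Should match the AUC obtained from auc(FPR, TPR) above
print(roc_auc_score(true, pred))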