Evaluation Metrics for Classification Models

Evaluation metrics:

1. Accuracy
2. Precision
3. Recall
4. F1-score
5. AUC (area under the ROC curve)
Before getting into the evaluation metrics, you first need to understand something called the confusion matrix.
Confusion matrix:
True positive (TP): actually positive, predicted as positive
False positive (FP): actually negative, predicted as positive
False negative (FN): actually positive, predicted as negative
True negative (TN): actually negative, predicted as negative
	True positive rate: TPR = TP / (TP + FN)
		the proportion of all actual positive samples that are correctly predicted as positive;
		the higher this ratio, the better
	False positive rate: FPR = FP / (FP + TN)
		the proportion of all actual negative samples that are incorrectly predicted as positive
	Accuracy: (TP + TN) / (TP + FP + FN + TN), i.e. the proportion of all predictions that are correct
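As a quick illustration (not part of the original post), here is a minimal sketch of how these quantities could be computed with sklearn.metrics.confusion_matrix; the arrays y_true and y_pred below are made-up toy labels.

import numpy as np
from sklearn.metrics import confusion_matrix

# toy labels, purely for illustration
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])

# for binary labels {0, 1}, sklearn lays the matrix out as [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
tpr = tp / (tp + fn)                   # true positive rate
fpr = fp / (fp + tn)                   # false positive rate
acc = (tp + tn) / (tp + fp + fn + tn)  # accuracy
print('TPR', tpr, 'FPR', fpr, 'accuracy', acc)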
The APIs needed are:
from sklearn.metrics import recall_score  # recall
from sklearn.metrics import accuracy_score  # accuracy
from sklearn.metrics import f1_score

The program is as follows:

from sklearn.linear_model import LogisticRegression
import warnings
from sklearn.metrics import recall_score  # recall
from sklearn.metrics import accuracy_score  # accuracy
from sklearn.metrics import f1_score
warnings.filterwarnings("ignore")
import sklearn.datasets as dt
from sklearn.model_selection import train_test_split
feature = dt.load_breast_cancer()['data']
target = dt.load_breast_cancer()['target']
x_train, x_test, y_train, y_test = train_test_split(feature, target, train_size=0.8, random_state=2023)
# log = LogisticRegression()  # important hyperparameter: penalty, whether to use l1 or l2
# l = LogisticRegression(max_iter=1000, penalty='l2').fit(x_train, y_train)
l = LogisticRegression(max_iter=10000, penalty='l1', solver='liblinear').fit(x_train, y_train)
print('l', l.score(x_test, y_test))
print('recall', recall_score(y_test, l.predict(x_test)))
print('accuracy', accuracy_score(y_test, l.predict(x_test)))
print('f1-score', f1_score(y_test, l.predict(x_test)))

Experimental results:

l 0.9736842105263158
recall 0.9859154929577465
accuracy 0.9736842105263158
f1-score 0.979020979020979
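Note that accuracy_score measures accuracy, not precision; precision, TP / (TP + FP), comes from sklearn.metrics.precision_score. A minimal sketch, reusing the l, x_test and y_test from the program above:

from sklearn.metrics import precision_score

# precision = TP / (TP + FP): of the samples predicted positive, how many are actually positive
print('precision', precision_score(y_test, l.predict(x_test)))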

AUC:

AUC only applies to binary classification models and is a very commonly used metric. A classification model needs a threshold to decide the class; logistic regression's default threshold is 0.5. AUC is the area under the ROC curve.
The API needed: from sklearn.metrics import roc_auc_score
You also need the probabilities the model assigns to the positive class:
 l.predict_proba(x_test)[:,1]
from sklearn.linear_model import LogisticRegression
import warnings
from sklearn.metrics import roc_auc_score
from sklearn.metrics import recall_score  # recall
from sklearn.metrics import accuracy_score  # accuracy
from sklearn.metrics import f1_score
warnings.filterwarnings("ignore")
import sklearn.datasets as dt
from sklearn.model_selection import train_test_split
feature = dt.load_breast_cancer()['data']
target = dt.load_breast_cancer()['target']
x_train, x_test, y_train, y_test = train_test_split(feature, target, train_size=0.8, random_state=2023)
# log = LogisticRegression()  # important hyperparameter: penalty, whether to use l1 or l2
# l = LogisticRegression(max_iter=1000, penalty='l2').fit(x_train, y_train)
l = LogisticRegression(max_iter=10000, penalty='l1', solver='liblinear')
l.fit(x_train, y_train)
# print('l', l.score(x_test, y_test))
# print('recall', recall_score(y_test, l.predict(x_test)))
# print('accuracy', accuracy_score(y_test, l.predict(x_test)))
# print('f1-score', f1_score(y_test, l.predict(x_test)))
# probability that the model assigns each test sample to the positive class
y_score = l.predict_proba(x_test)[:, 1]
a = roc_auc_score(y_test, y_score)
print(a)

The result is as follows:
0.9983622666229938
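To make the threshold idea concrete, sklearn.metrics.roc_curve enumerates the (FPR, TPR) pairs obtained at every candidate threshold, and AUC is the area under that curve. A minimal sketch, assuming the y_test and y_score from the program above:

from sklearn.metrics import roc_curve, auc

# fpr/tpr at every candidate threshold on the predicted positive-class probabilities
fpr, tpr, thresholds = roc_curve(y_test, y_score)
print('number of thresholds:', len(thresholds))
print('AUC from the curve:', auc(fpr, tpr))  # should match roc_auc_score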
