KNN

1. Standard workflow

Import packages --> instantiate the model object (hyperparameter: k) --> split into training and test sets --> fit (train) the model --> evaluate --> tune parameters

1.1 Required imports

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"  # show the output of every expression, not just the last one

1.2 Instantiate the model

from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=3)

1.3 Load the data, using the breast cancer dataset as an example

from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
X = pd.DataFrame(cancer.data, columns=cancer.feature_names)  # model input is 2-D; ndarray or DataFrame both work, DataFrame is easier to inspect
y = cancer.target

1.4 Split the data, fit, predict, evaluate

# split the data
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)


# fit
knn.fit(X_train, y_train)

# predict
knn.predict(X_test)  # pass the samples you want to classify

# evaluate
knn.score(X_test, y_test)
print("knn.score(): \n{:.2f}".format(knn.score(X_test, y_test)))
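The steps above can be combined into one self-contained script (same breast cancer data and `random_state=0` as in the snippets above):

```python
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# load the data and split it exactly as above
cancer = load_breast_cancer()
X = pd.DataFrame(cancer.data, columns=cancer.feature_names)
y = cancer.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# instantiate, fit, and evaluate
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)
acc = knn.score(X_test, y_test)
print("test accuracy: {:.2f}".format(acc))
```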

2. Tuning the parameter K

train_acc = []
test_acc = []

# n_neighbors ranges from 2 to 30
neighbors_settings = range(2, 31)

for k in neighbors_settings:
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(X_train,y_train)
    train_acc.append(clf.score(X_train,y_train))
    test_acc.append(clf.score(X_test,y_test))
    
plt.plot(neighbors_settings,train_acc,label="training accuracy")
plt.plot(neighbors_settings, test_acc, label="test accuracy") 
plt.ylabel("Accuracy")
plt.xlabel("K") 
plt.legend()
# note: the random seed used for the split changes the shape of the learning curves
np.argmax(test_acc)  # index of the maximum; K starts at 2, so index 15 corresponds to K=17
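Rather than adding the offset by hand, the argmax index can be mapped back to K by indexing `neighbors_settings` directly. A small sketch with a hypothetical accuracy array (the peak at index 15 is made up for illustration):

```python
import numpy as np

neighbors_settings = range(2, 31)

# hypothetical accuracies for illustration only; index 15 is the peak
test_acc = np.zeros(len(neighbors_settings))
test_acc[15] = 0.95

# indexing the range avoids the manual "+2" offset
best_k = list(neighbors_settings)[int(np.argmax(test_acc))]
print(best_k)  # 17, since K starts at 2
```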

3. Cross-validation: a single knn.score estimate is unstable, so the chosen K is unstable too

3.1 Basic usage

from sklearn.model_selection import cross_val_score

scores = cross_val_score(knn, cancer.data, cancer.target, cv=5)  # 5-fold by default; arguments: model, X, y, number of folds
print("scores: {}".format(scores))

mean_score = scores.mean()
print("mean_scores: {:.2f}".format(mean_score))
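As an aside not in the original notes: GridSearchCV wraps the K loop and the cross-validation in one object, which is the idiomatic sklearn way to run the search in section 2 with CV scoring. A sketch:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier

cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    cancer.data, cancer.target, random_state=0)

# 5-fold CV over K = 2..30, the same search space as the learning curves
grid = GridSearchCV(KNeighborsClassifier(),
                    {"n_neighbors": list(range(2, 31))}, cv=5)
grid.fit(X_train, y_train)
print(grid.best_params_, "{:.2f}".format(grid.best_score_))
```

`grid.best_estimator_` is already refit on the full training set with the best K, so no manual rebuild is needed.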

3.2 Using cross-validation in the learning curve


train_acc = []
test_acc = []
cross_acc = []

# n_neighbors ranges from 2 to 30
neighbors_settings = range(2, 31)

for k in neighbors_settings:
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(X_train,y_train)
    train_acc.append(clf.score(X_train,y_train))
    test_acc.append(clf.score(X_test,y_test))
    cross_acc.append(cross_val_score(clf, cancer.data, cancer.target, cv=5).mean())
# cross-validation is better run on the split training set, since it has already been shuffled
    
plt.plot(neighbors_settings,train_acc,label="training accuracy")
plt.plot(neighbors_settings, test_acc, label="test accuracy") 
plt.plot(neighbors_settings, cross_acc, label="cross accuracy") 
plt.ylabel("Accuracy")
plt.xlabel("K")
plt.legend()

np.argmax(cross_acc)  # index of the maximum; K starts at 2, so index 11 corresponds to K=13

4. Normalization (0-1 / min-max scaling)

  • Formula: (x - min) / (max - min)
  • Solves the problem of one feature's scale dominating the result: e.g. with height and net worth as x and y, height contributes almost nothing to the distance
  • The result preserves the relative proportions within each feature
  • Syntax:

fit(self, X[, y]): learn the scaling rule (per-feature min and max)

transform(self, X): apply the learned rule to the data

fit_transform(self, X[, y]): the two steps combined into one
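The formula and the fit/transform split can be verified directly: MinMaxScaler's output matches a hand-computed (x - min) / (max - min) column by column. A minimal check on toy data:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# toy data: two features on very different scales
X = np.array([[1.0, 100.0],
              [2.0, 300.0],
              [3.0, 500.0]])

scaler = MinMaxScaler()
scaled = scaler.fit_transform(X)  # fit learns min/max, transform applies them

# hand-computed version of the same formula, per column
manual = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
print(np.allclose(scaled, manual))  # True
```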

4.1 Workflow

# import --> instantiate --> fit on the (already split) training set --> transform training and test sets separately
from sklearn.preprocessing import MinMaxScaler
minmax = MinMaxScaler()

# fit first to learn the training set's statistics (min, max, etc.), then scale with them; never fit on the test set
minmax.fit(X_train)  # fit only on the training set; the test set is transformed with the same rule
X_train_minmax = minmax.transform(X_train)  # ndarray
X_test_minmax = minmax.transform(X_test)

# or minmax.fit_transform(X_train) to do both steps at once
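To make the "fit only on the training set" rule automatic, sklearn's Pipeline can chain the scaler and the classifier, so each cross-validation fold re-fits the scaler on its own training portion. This is a sketch, not part of the original notes:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    cancer.data, cancer.target, random_state=0)

# the scaler is re-fit inside every CV fold, so no test-fold information leaks
pipe = Pipeline([("scale", MinMaxScaler()),
                 ("knn", KNeighborsClassifier(n_neighbors=3))])
scores = cross_val_score(pipe, X_train, y_train, cv=5)
print("mean CV accuracy: {:.2f}".format(scores.mean()))
```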

4.2 Training and tuning on the scaled data

# train and evaluate on the scaled data
# cross-validation inside the learning curve
train_acc = []
test_acc = []
cross_acc = []

# n_neighbors ranges from 2 to 30
neighbors_settings = range(2, 31)

for k in neighbors_settings:
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(X_train_minmax,y_train)
    train_acc.append(clf.score(X_train_minmax,y_train))
    test_acc.append(clf.score(X_test_minmax,y_test))
    cross_acc.append(cross_val_score(clf, X_train_minmax, y_train,cv=5).mean())
    
plt.plot(neighbors_settings,train_acc,label="training accuracy")
plt.plot(neighbors_settings, test_acc, label="test accuracy") 
plt.plot(neighbors_settings, cross_acc, label="cross accuracy") 
plt.ylabel("Accuracy")
plt.xlabel("K")
plt.legend()

Take the best score and its index

max_score = np.max(cross_acc)
max_index = np.argmax(cross_acc)  # add 2 to this index to get the best K, then refit to obtain the final model
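Putting the whole section together, the index can be mapped back to K and the final model refit on the scaled training set. A self-contained sketch of that last step:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import MinMaxScaler

cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    cancer.data, cancer.target, random_state=0)

# scale with statistics learned on the training set only
minmax = MinMaxScaler().fit(X_train)
X_train_minmax = minmax.transform(X_train)
X_test_minmax = minmax.transform(X_test)

# cross-validated accuracy for each K, as in section 4.2
cross_acc = [cross_val_score(KNeighborsClassifier(n_neighbors=k),
                             X_train_minmax, y_train, cv=5).mean()
             for k in range(2, 31)]

best_k = int(np.argmax(cross_acc)) + 2  # shift the index back to K
final = KNeighborsClassifier(n_neighbors=best_k).fit(X_train_minmax, y_train)
print(best_k, "{:.2f}".format(final.score(X_test_minmax, y_test)))
```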
