The K-Nearest Neighbors Algorithm

I. The K-Nearest Neighbors Algorithm

1. Classification

The algorithm considers an arbitrary number of neighbors (K of them) and assigns a label by "majority voting": each of the K nearest neighbors votes for its own class, and the most common class wins.


(Figure: classification with 3 nearest neighbors)
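To make the voting step concrete, here is a minimal plain-NumPy sketch (the toy points, labels, and query below are made up for illustration; scikit-learn's own implementation is shown in Section III):

import numpy as np

# Toy training data: 2-D points with binary labels (illustrative values)
X_train = np.array([[0.0, 0.0], [1.0, 0.5], [0.2, 0.8],
                    [3.0, 3.0], [3.5, 2.5], [2.8, 3.2]])
y_train = np.array([0, 0, 0, 1, 1, 1])
query = np.array([2.5, 2.5])
k = 3

# Euclidean distance from the query to every training point
dists = np.linalg.norm(X_train - query, axis=1)
# Indices of the k closest training points
nearest = np.argsort(dists)[:k]
# Majority vote among the k neighbors' labels
label = np.bincount(y_train[nearest]).argmax()
print(label)  # -> 1, since the 3 closest points all have label 1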

2. Regression

When multiple neighbors are used, the prediction is the mean of those neighbors' target values.

(Figure: regression with 3 nearest neighbors)
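The averaging step can be sketched the same way (again with made-up 1-D data; this is an illustration, not scikit-learn's implementation):

import numpy as np

# Toy 1-D training data with continuous targets (illustrative values)
X_train = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
y_train = np.array([0.1, 0.9, 2.1, 2.9, 4.2])
query = np.array([2.4])
k = 3

# Distances from the query to every training point
dists = np.linalg.norm(X_train - query, axis=1)
nearest = np.argsort(dists)[:k]
# The prediction is the mean target of the k nearest neighbors
prediction = y_train[nearest].mean()
print(prediction)  # mean of the targets at x = 1.0, 2.0, 3.0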

II. Analysis of the K-Nearest Neighbors Algorithm

1. Decision boundaries

Using fewer neighbors corresponds to higher model complexity (a more jagged decision boundary), while using more neighbors corresponds to lower model complexity (a smoother decision boundary).

(Figure: decision boundaries of K-nearest-neighbors models for different values of n_neighbors)

2. The relationship between model complexity and generalization

As the number of neighbors grows, the model becomes simpler and training-set accuracy falls accordingly; test-set accuracy is typically best at some intermediate number of neighbors, between overfitting (too few) and underfitting (too many).

(Figure: training and test accuracy on the breast cancer dataset as a function of n_neighbors)

3. Strengths, weaknesses, and parameters

  • The KNeighbors classifier has two important parameters: the number of neighbors and the metric used to measure distance between data points (see the sketch after this list).
  • Building a nearest-neighbors model is usually fast, but when the training set is large, prediction can be slow.
  • The algorithm tends to perform poorly on datasets with many features (hundreds or more).
  • It performs especially badly on datasets where most features are 0 most of the time (sparse datasets).
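Since the distance metric is one of the two key parameters, here is a minimal sketch of changing it; metric='manhattan' is just one of the metrics scikit-learn accepts, picked for illustration, and the split settings here are not the ones used in Section III:

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, random_state=0)

# The default is Minkowski distance with p=2 (Euclidean);
# metric='manhattan' switches to L1 distance instead
clf = KNeighborsClassifier(n_neighbors=3, metric='manhattan')
clf.fit(X_train, y_train)
print('Test set accuracy: {:.2f}'.format(clf.score(X_test, y_test)))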

III. Code Examples

1. Classification

import matplotlib.pyplot as plt
import mglearn
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.datasets import load_breast_cancer

# Illustrate 3-nearest-neighbors classification on a toy dataset
mglearn.plots.plot_knn_classification(n_neighbors=3)
plt.show()

# Load the forge dataset and split it into training and test sets
X, y = mglearn.datasets.make_forge()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Build and evaluate a 3-nearest-neighbors classifier
clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)
print('Test set predictions: {}'.format(clf.predict(X_test)))
print('Test set accuracy: {:.2f}'.format(clf.score(X_test, y_test)))

# Plot decision boundaries for different numbers of neighbors
fig, axes = plt.subplots(1, 3, figsize=(10, 3))
for n_neighbors, ax in zip([1, 3, 9], axes):
    clf = KNeighborsClassifier(n_neighbors=n_neighbors).fit(X, y)
    mglearn.plots.plot_2d_separator(clf, X, fill=True, eps=0.5, ax=ax, alpha=.4)
    mglearn.discrete_scatter(X[:, 0], X[:, 1], y, ax=ax)
    ax.set_title('{} neighbor(s)'.format(n_neighbors))
    ax.set_xlabel('feature 0')
    ax.set_ylabel('feature 1')
axes[0].legend(loc=3)
plt.show()

# A real dataset: the Wisconsin breast cancer data
cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    cancer.data, cancer.target, stratify=cancer.target, random_state=66)
training_accuracy = []
test_accuracy = []
neighbors_settings = range(1, 11)

# Record training and test accuracy as model complexity varies
for n_neighbors in neighbors_settings:
    clf = KNeighborsClassifier(n_neighbors=n_neighbors)
    clf.fit(X_train, y_train)
    training_accuracy.append(clf.score(X_train, y_train))
    test_accuracy.append(clf.score(X_test, y_test))
plt.plot(neighbors_settings, training_accuracy, label='training accuracy')
plt.plot(neighbors_settings, test_accuracy, label='test accuracy')
plt.xlabel('n_neighbors')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
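As a small follow-up sketch (it continues from the loop above and assumes its neighbors_settings and test_accuracy lists are still in scope), the best-performing n_neighbors on this particular split can be read off directly:

import numpy as np

# Index of the highest test accuracy; assumes the lists built in the loop above
best_k = neighbors_settings[int(np.argmax(test_accuracy))]
print('Best n_neighbors on this split: {}'.format(best_k))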

2. Regression

from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
import numpy as np
import matplotlib.pyplot as plt
import mglearn

# Illustrate 3-nearest-neighbors regression: predict the mean of the K neighbors' targets
mglearn.plots.plot_knn_regression(n_neighbors=3)
plt.show()

# Load the wave dataset and split it into training and test sets
X, y = mglearn.datasets.make_wave(n_samples=40)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Build and evaluate a 3-nearest-neighbors regressor
reg = KNeighborsRegressor(n_neighbors=3)
reg.fit(X_train, y_train)
print('Test set predictions:\n{}'.format(reg.predict(X_test)))
print('Test set R^2: {:.2f}'.format(reg.score(X_test, y_test)))

# Compare predictions for different numbers of neighbors
fig, axes = plt.subplots(1, 3, figsize=(15, 4))
line = np.linspace(-3, 3, 1000).reshape(-1, 1)
for n_neighbors, ax in zip([1, 3, 9], axes):
    reg = KNeighborsRegressor(n_neighbors=n_neighbors)
    reg.fit(X_train, y_train)
    ax.plot(line, reg.predict(line))
    ax.plot(X_train, y_train, '^', c=mglearn.cm2(0), markersize=8)
    ax.plot(X_test, y_test, 'v', c=mglearn.cm2(1), markersize=8)
    ax.set_title('{} neighbor(s)\n train score: {:.2f} test score: {:.2f}'.format(
        n_neighbors, reg.score(X_train, y_train),
        reg.score(X_test, y_test)))
    ax.set_xlabel('Feature')
    ax.set_ylabel('Target')

axes[0].legend(['Model predictions', 'Training data/target', 'Test data/target'], loc='best')

plt.show()
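One related option worth knowing (a parameter of the same scikit-learn class, not used in the original code above) is distance-weighted prediction, where closer neighbors contribute more to the average. A minimal sketch on the same dataset:

import mglearn
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

# Same wave dataset and split as above
X, y = mglearn.datasets.make_wave(n_samples=40)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# weights='distance' weights each neighbor by the inverse of its distance;
# the default, weights='uniform', is the plain mean used above
reg = KNeighborsRegressor(n_neighbors=3, weights='distance')
reg.fit(X_train, y_train)
print('Test set R^2: {:.2f}'.format(reg.score(X_test, y_test)))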
