The k-nearest neighbors algorithm classifies by measuring the distances between different feature values.
Pros: high accuracy, insensitive to outliers, no assumptions about the input data.
Cons: high computational complexity, high space complexity.
Applicable data types: numeric and nominal.
In pattern recognition, the k-nearest neighbors method (the k-NN algorithm) is a non-parametric statistical method used for classification and regression. In both cases, the input consists of the k closest training samples in the feature space (Feature Space).
In k-NN classification, the output is a class membership. An object is classified by a "majority vote" of its neighbors: the most common class among its k nearest neighbors (k is a positive integer, typically small) determines the class assigned to the object. If k = 1, the object is simply assigned the class of its single nearest neighbor.
In k-NN regression, the output is the property value of the object: the average of the values of its k nearest neighbors.
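To make the regression case concrete, here is a minimal sketch (the function and variable names are illustrative, not from any library) that predicts by averaging the values of the k nearest neighbors:
import numpy as np

def knn_regress(x, X_train, y_train, k):
    """Predict x's value as the mean of its k nearest neighbors' target values."""
    distances = np.sqrt(((X_train - x) ** 2).sum(axis=1))  # Euclidean distance to every training point
    nearest = np.argsort(distances)[:k]                    # indices of the k closest points
    return y_train[nearest].mean()                         # average their target values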
The nearest neighbors method classifies with a vector space model: cases of the same class are highly similar to one another, so the likely class of an unknown case can be estimated by computing its similarity to cases whose classes are known.
k-NN is a form of instance-based learning, a lazy learning method that approximates locally and defers all computation until after classification. The k-nearest neighbors algorithm is among the simplest of all machine learning algorithms.
For both classification and regression, weighting the neighbors is often useful, so that nearer neighbors contribute more than farther ones. For example, a common weighting scheme assigns each neighbor a weight of 1/d, where d is the distance to that neighbor (sketched below).
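A minimal sketch of that 1/d weighting applied to classification (names are illustrative; the small eps guards against division by zero when a neighbor coincides with the query point):
import numpy as np

def weighted_knn_classify(x, X_train, y_train, k, eps=1e-8):
    """Classify x by 1/d-weighted voting among its k nearest neighbors."""
    distances = np.sqrt(((X_train - x) ** 2).sum(axis=1))
    nearest = np.argsort(distances)[:k]
    votes = {}
    for i in nearest:
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + 1.0 / (distances[i] + eps)
    return max(votes, key=votes.get)  # the label with the largest total weight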
The neighbors are taken from a set of objects whose class (or, in regression, whose property value) is already known. Although no explicit training step is required, this set can be regarded as the algorithm's training set.
A drawback of the k-nearest neighbors algorithm is that it is very sensitive to the local structure of the data. The algorithm is in no way related to k-means, another popular machine learning technique, and should not be confused with it [1].
[1] k-nearest neighbors algorithm
For each point in the dataset whose class is unknown, perform the following steps in turn: compute the distance to every point with a known label, sort by increasing distance, take the k points nearest to the query, and return the class that occurs most often among them. The implementation:
import numpy as np
from collections import Counter
def kNN_classify(inX, dataSet, labels, k):
    dataSetSize = dataSet.shape[0]  # number of rows in the training data
    diffMat = np.tile(inX, (dataSetSize, 1)) - dataSet  # difference matrix between the query point and every known point
    sqDiffMat = diffMat**2  # square each entry to eliminate negative values
    sqDistances = sqDiffMat.sum(axis=1)  # squared distances, via the Pythagorean theorem
    distances = sqDistances**0.5  # actual distances
    sortedDistIndicies = np.argsort(distances)  # indices that sort the distances in ascending order
    topK_y = [labels[i] for i in sortedDistIndicies[:k]]  # labels of the k nearest points
    votes = Counter(topK_y)  # tally the labels
    return votes.most_common(1)[0][0]  # return the label with the largest count
# Raw data with two features
raw_data_x = [[3.393533211, 2.331273381],
[3.110073483, 1.781539638],
[1.343808831, 3.368360954],
[3.582294042, 4.679179110],
[2.280362439, 2.866990263],
[7.423436942, 4.696522875],
[5.745051997, 3.533989803],
[9.172168622, 2.511101045],
[7.792783481, 3.424088941],
[7.939820817, 0.791637231]]
# Labels for the raw data, two classes
raw_data_y = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
# Convert to ndarray
X_train = np.array(raw_data_x)
y_train = np.array(raw_data_y)
x = np.array([8.093607318, 3.365731514])  # the point to predict
predict_y = kNN_classify(x, X_train, y_train, 6)  # run the prediction
print("the result is:", predict_y)  # the query point is assigned to class "1"
the result is: 1
import matplotlib.pyplot as plt
'''Note how the points matching each condition are selected here: a boolean array of the same size is used to index the data array'''
plt.scatter(X_train[y_train==0, 0], X_train[y_train==0, 1], color='g')  # class-0 points
plt.scatter(X_train[y_train==1, 0], X_train[y_train==1, 1], color='r')  # class-1 points
plt.scatter(x[0], x[1], color='b')  # the point to predict
plt.show()
from sklearn.neighbors import KNeighborsClassifier
# The data is still the features and labels defined in section 3.2.
print("X_train: ", X_train)
print("y_train: ", y_train)
# 1. Initialize the kNN classifier
kNN_classifier = KNeighborsClassifier(n_neighbors=6)
# 2. fit
kNN_classifier.fit(X_train, y_train)
# 3. predict
y_predict = kNN_classifier.predict(x.reshape(1, -1))
print("result: ", y_predict[0])
X_train:
[[3.39353321 2.33127338]
[3.11007348 1.78153964]
[1.34380883 3.36836095]
[3.58229404 4.67917911]
[2.28036244 2.86699026]
[7.42343694 4.69652288]
[5.745052 3.5339898 ]
[9.17216862 2.51110105]
[7.79278348 3.42408894]
[7.93982082 0.79163723]]
y_train:
[0 0 0 0 0 1 1 1 1 1]
result: 1
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score
digits = datasets.load_digits()  # handwritten digits
X = digits.data                  # shape: (1797, 64)
y = digits.target                # labels
# Display one of the digits: 64 features, an 8x8 pixel image
plt.imshow(X[36].reshape(8, 8), cmap=matplotlib.cm.binary)  # the digit "0"
plt.show()
# Split the data into training and test sets; the test fraction is 0.2 and the random seed is 888
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=888)
# 1. Initialize the kNN classifier
kNN_classifier = KNeighborsClassifier(n_neighbors=3)  # the 3 closest neighbors, k=3
# 2. fit
kNN_classifier.fit(X_train, y_train)
# 3. predict
y_predict = kNN_classifier.predict(X_test)
# 4. Evaluate the result: classification accuracy = sum(y_predict == y_test)/len(y_predict)
score = accuracy_score(y_test, y_predict)
print("score:", score)
score: 0.9944444444444445
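The accuracy formula in the comment above is easy to verify by hand; the following computes the same fraction of correct predictions directly:
manual_score = np.sum(y_predict == y_test) / len(y_test)  # fraction of correct predictions
print("manual score:", manual_score)  # agrees with accuracy_score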
import numpy as np
from math import sqrt
from collections import Counter
class kNNClassifier:

    def __init__(self, k):
        """Initialize the kNN classifier."""
        assert k >= 1, "k must be valid"
        self.k = k
        self._X_train = None
        self._y_train = None

    def fit(self, X_train, y_train):
        """Train the kNN classifier on the training set X_train, y_train."""
        assert X_train.shape[0] == y_train.shape[0], \
            "the size of X_train must be equal to the size of y_train"
        assert self.k <= X_train.shape[0], \
            "the size of X_train must be at least k."
        self._X_train = X_train
        self._y_train = y_train
        return self  # recommended: returning self follows the sklearn convention, though the code also works without it

    def predict(self, X_predict):
        """Predict the label of every sample in X_predict."""
        assert self._X_train is not None and self._y_train is not None, \
            "must fit before predict"
        assert X_predict.shape[1] == self._X_train.shape[1], \
            "the feature number of X_predict must be equal to X_train"
        y_predict = [self._predict(x) for x in X_predict]
        return np.array(y_predict)

    def _predict(self, x):
        """Given a single sample x (a feature vector), return its predicted label."""
        assert x.shape[0] == self._X_train.shape[1], \
            "the feature number of x must be equal to X_train"
        distances = [sqrt(np.sum((x_train - x)**2)) for x_train in self._X_train]
        nearest = np.argsort(distances)
        topK_y = [self._y_train[i] for i in nearest[:self.k]]
        votes = Counter(topK_y)
        return votes.most_common(1)[0][0]

    def __repr__(self):
        return "KNN(k=%d)" % self.k
import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
digits = datasets.load_digits()  # handwritten digits
X = digits.data                  # shape: (1797, 64)
y = digits.target                # labels
# Split the data into training and test sets; the test fraction is 0.2 and the random seed is 888
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=888)
# 1. Initialize the kNN classifier
kNN_classifier = kNNClassifier(k=3)  # the 3 closest neighbors, k=3
# 2. fit
kNN_classifier.fit(X_train, y_train)
# 3. predict
y_predict = kNN_classifier.predict(X_test)
# 4. Evaluate the result: classification accuracy = sum(y_predict == y_test)/len(y_predict)
score = accuracy_score(y_test, y_predict)
print("score:", score)
score: 0.9916666666666667
References:
1. Sklearn data preprocessing: StandardScaler
2. sklearn.preprocessing.StandardScaler
import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
iris = datasets.load_iris()  # load the iris dataset
X = iris.data
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=666)
# Standardize to zero mean and unit variance
standardScaler = StandardScaler()
standardScaler.fit(X_train)
X_train = standardScaler.transform(X_train)
X_test = standardScaler.transform(X_test)
# kNN
knn_clf = KNeighborsClassifier(n_neighbors=3)
knn_clf.fit(X_train, y_train)
score = knn_clf.score(X_test, y_test)
print("score: ", score)
score: 1.0
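For intuition, StandardScaler learns each feature's mean and standard deviation from the training set and applies (x - mean) / std. A minimal sketch of the same transform done by hand, using the fitted scaler's mean_ and scale_ attributes (X_raw here is a hypothetical untransformed feature matrix):
mean = standardScaler.mean_      # per-feature means learned from the training set
std = standardScaler.scale_      # per-feature standard deviations
X_scaled = (X_raw - mean) / std  # equivalent to standardScaler.transform(X_raw)
Note that the test set is transformed with statistics learned from the training set, never with its own; otherwise information about the test set would leak into the model.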
Hyperparameters
In machine learning, a hyperparameter is a parameter whose value is set before the learning process begins. By contrast, the values of other parameters are derived through training. Different model-training algorithms require different hyperparameters, and some simple algorithms (such as ordinary least squares regression) require none. Given the hyperparameters, the training algorithm learns the parameters from the data. The same kind of machine learning model may need different hyperparameters to fit different data patterns, and they must be tuned so that the model solves the machine learning problem optimally. In practice, the hyperparameters are usually optimized to find a tuple of hyperparameters that yields an optimal model, one that minimizes a predefined loss function on given independent data [2].
[2] Hyperparameter (machine learning) https://en.wikipedia.org/wiki/Hyperparameter_(machine_learning)
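Before reaching for GridSearchCV below, the idea of hyperparameter search can be seen in a hand-rolled loop. A minimal sketch (assuming the standardized iris split from the previous section is still in scope) that tries several values of k and keeps the best:
from sklearn.neighbors import KNeighborsClassifier

best_score, best_k = 0.0, -1
for k in range(1, 11):  # candidate values of the hyperparameter k
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(X_train, y_train)
    score = clf.score(X_test, y_test)  # accuracy on the held-out set
    if score > best_score:
        best_score, best_k = score, k
print("best k:", best_k, "best score:", best_score)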
GridSearchCV
The name GridSearchCV can be split into two parts, GridSearch and CV: grid search and cross-validation. Both are easy to understand. Grid search searches over parameters: within a specified parameter range, it steps through the parameter values, trains a learner with each setting, and picks, out of all the settings, the parameters that give the highest accuracy on the validation set. It is, in essence, a train-and-compare process [3].
[3] Python machine learning notes: GridSearchCV (grid search), https://www.cnblogs.com/wj-1314/p/10422159.html
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn import datasets
digits = datasets.load_digits()  # handwritten digits dataset
X = digits.data
y = digits.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=666)
# Define the parameter grid to search
param_grid = [
{
'weights': ['uniform'],
'n_neighbors': [i for i in range(1, 11)]
},
{
'weights': ['distance'],
'n_neighbors': [i for i in range(1, 11)],
'p': [i for i in range(1, 6)]
}
]
# Define the classifier
knn_clf = KNeighborsClassifier()
# 1. define
grid_search = GridSearchCV(knn_clf, param_grid)
# 2. fit
grid_search.fit(X_train, y_train)  # a slow operation, roughly 3min~5min
# Show the results. The trailing underscore marks attributes computed by the program rather than supplied by the user.
print("grid_search.best_estimator_ :", grid_search.best_estimator_)
# evaluation
print("grid_search.best_score_ :", grid_search.best_score_)
# best parameters
print("grid_search.best_params_ :", grid_search.best_params_)
# retrieve the best classifier
knn_clf = grid_search.best_estimator_
# evaluate on the test set
score = knn_clf.score(X_test, y_test)
print("score: ", score)
grid_search.best_estimator_ : KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
           metric_params=None, n_jobs=1, n_neighbors=3, p=3,
           weights='distance')
grid_search.best_score_ : 0.9853862212943633
grid_search.best_params_ : {'n_neighbors': 3, 'p': 3, 'weights': 'distance'}
score: 0.9833333333333333