Reading notes on Machine Learning: Formula Derivation and Code Implementation (《机器学习:公式推导与代码实践》) by Lu Wei (鲁伟).
K-nearest neighbor (K-NN) is a classic supervised classification method. It assigns a new sample to a class according to the classes of the k training samples nearest to it. Unlike the other machine-learning methods covered earlier, K-NN has no explicit training phase in which labeled samples are used to fit a model; it classifies a new sample directly by computing its distance to every stored training sample. The three elements of K-NN are: the choice of K, the distance metric, and the classification decision rule.
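For reference (not part of the book's from-scratch implementation below), scikit-learn packages these three elements in KNeighborsClassifier; a minimal sketch on the same iris data used throughout this note:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X_iris, y_iris = load_iris(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X_iris, y_iris, test_size=0.3, random_state=13)
# n_neighbors is the K value; the default metric is the Euclidean distance
clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(Xtr, ytr)  # no real training happens: fit only stores the samples
print(clf.score(Xte, yte))  # mean accuracy on the held-out split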
The commonly used distance metrics are introduced below.
The Minkowski distance is defined as follows. Let $X$ be a set of samples with $m$ features; for any $x_i, x_j \in X$, write $x_i=(x_{1i},x_{2i},\ldots,x_{mi})^T$ and $x_j=(x_{1j},x_{2j},\ldots,x_{mj})^T$. The Minkowski distance between $x_i$ and $x_j$ is

$$d_{ij}=\left(\sum_{k=1}^{m}\left|x_{ki}-x_{kj}\right|^{p}\right)^{\frac{1}{p}},\quad p\geq 1.$$

When $p=1$, the Minkowski distance becomes the Manhattan distance:

$$d_{ij}=\sum_{k=1}^{m}\left|x_{ki}-x_{kj}\right|.$$

When $p=2$, it becomes the Euclidean distance:

$$d_{ij}=\left(\sum_{k=1}^{m}\left|x_{ki}-x_{kj}\right|^{2}\right)^{\frac{1}{2}}.$$

When $p=\infty$, it becomes the Chebyshev distance:

$$d_{ij}=\max_{k}\left|x_{ki}-x_{kj}\right|.$$
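As a quick numeric illustration (a sketch with made-up vectors, not from the book), the special cases can be computed directly with NumPy:

import numpy as np

def minkowski_distance(xi, xj, p):
    # General Minkowski distance between two feature vectors
    return np.sum(np.abs(xi - xj) ** p) ** (1 / p)

xi = np.array([1.0, 2.0, 3.0])
xj = np.array([4.0, 0.0, 3.0])
print(minkowski_distance(xi, xj, 1))    # Manhattan: 5.0
print(minkowski_distance(xi, xj, 2))    # Euclidean: ~3.61
print(np.max(np.abs(xi - xj)))          # Chebyshev (the p -> infinity limit): 3.0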
The Mahalanobis distance is a distance metric that takes the correlations between features into account. Given a sample set $X=(x_{ij})_{m\times n}$ with covariance matrix $S$, the Mahalanobis distance between samples $x_i$ and $x_j$ is

$$d_{ij}=\left[(x_{i}-x_{j})^{T}S^{-1}(x_{i}-x_{j})\right]^{\frac{1}{2}}.$$

When $S$ is the identity matrix, i.e. the features are mutually independent and each has unit variance, the Mahalanobis distance reduces to the Euclidean distance.
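A minimal sketch of the Mahalanobis distance (with made-up data; samples are in rows, hence rowvar=False when estimating the covariance matrix):

import numpy as np

# Five samples with two correlated features (made-up data)
X_demo = np.array([[1.0, 2.0], [2.0, 3.5], [3.0, 3.0], [4.0, 5.5], [5.0, 5.0]])
S = np.cov(X_demo, rowvar=False)        # 2x2 covariance matrix of the features
S_inv = np.linalg.inv(S)
diff = X_demo[0] - X_demo[1]
# d = [(x_i - x_j)^T S^{-1} (x_i - x_j)]^{1/2}
print(np.sqrt(diff @ S_inv @ diff))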
In practice, K-NN usually uses the Euclidean distance as the metric between instances.
The most intuitive description of K-NN is: given a training set, for each test instance find the K training instances closest to it; the test instance is assigned to whichever class holds the majority among those K neighbors.
The value of K has a major effect on the classification result. A small K makes the model sensitive to noise in the immediate neighborhood and prone to overfitting, while a large K smooths over local structure and can underfit (in the extreme, K equal to the training-set size always predicts the overall majority class). Since K should be neither too large nor too small, a suitable value is usually chosen by k-fold cross-validation (k-fold Cross Validation): split the training set into k folds, hold out each fold in turn as a validation set while training on the remaining k-1 folds, and pick the K value with the best average validation accuracy. This procedure is implemented in the code further below.
The classification decision rule is usually majority voting: the new sample is assigned to the class that occurs most often among the training samples in its K-neighborhood.
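The implementation below carries out this vote with collections.Counter; a tiny standalone example with made-up neighbor labels:

from collections import Counter

neighbor_labels = [2, 0, 2, 1, 2]   # labels of the 5 nearest neighbors (made up)
# most_common(1) returns [(label, count)] for the most frequent label
print(Counter(neighbor_labels).most_common(1)[0][0])   # -> 2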
# Import the required modules
import numpy as np
from collections import Counter
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.utils import shuffle
# Load the sklearn iris dataset
iris = datasets.load_iris()
# Shuffle the data and labels together
X, y = shuffle(iris.data, iris.target, random_state=13)
# Convert the data to float32
X = X.astype(np.float32)
# Simple train/test split with a 7:3 ratio
offset = int(X.shape[0] * 0.7)
X_train, y_train = X[:offset], y[:offset]
X_test, y_test = X[offset:], y[offset:]
# Reshape the labels into column vectors
y_train = y_train.reshape((-1, 1))
y_test = y_test.reshape((-1, 1))
# Print the shapes of the training and test sets
print('X_train=', X_train.shape)
print('X_test=', X_test.shape)
print('y_train=', y_train.shape)
print('y_test=', y_test.shape)
X_train= (105, 4)
X_test= (45, 4)
y_train= (105, 1)
y_test= (45, 1)
Define the Euclidean distance computation:
def compute_distances(X, X_train):
    '''
    Input:
        X: matrix of test instances
        X_train: matrix of training instances
    Output:
        dists: Euclidean distance matrix
    '''
    # Number of test instances
    num_test = X.shape[0]
    # Number of training instances
    num_train = X_train.shape[0]
    # Initialize the distance matrix with shape (num_test, num_train)
    dists = np.zeros((num_test, num_train))
    # Inner products between test and training instances
    M = np.dot(X, X_train.T)
    # Squared norms of the test instances
    te = np.square(X).sum(axis=1)
    # Squared norms of the training instances
    tr = np.square(X_train).sum(axis=1)
    # Euclidean distances; reshape te into a column so broadcasting yields a
    # (num_test, num_train) array, and clamp tiny negative values caused by
    # floating-point error before taking the square root
    dists = np.sqrt(np.maximum(-2 * M + tr + te.reshape((-1, 1)), 0))
    return dists
dists = compute_distances(X_test, X_train)
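The vectorized computation relies on the expansion $\|a-b\|^2=\|a\|^2-2\,a\cdot b+\|b\|^2$. As a sanity check (not in the original notes), it can be compared against a naive double loop:

def compute_distances_naive(X, X_train):
    # Straightforward two-loop Euclidean distance, for verification only
    dists = np.zeros((X.shape[0], X_train.shape[0]))
    for i in range(X.shape[0]):
        for j in range(X_train.shape[0]):
            dists[i, j] = np.sqrt(np.sum((X[i] - X_train[j]) ** 2))
    return dists

# The two implementations should agree up to floating-point error
print(np.allclose(dists, compute_distances_naive(X_test, X_train)))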
def predict_labels(y_train, dists, k=1):
    '''
    Input:
        y_train: training labels
        dists: Euclidean distance matrix between the test and training sets
        k: number of neighbors
    Output:
        y_pred: predicted labels for the test set
    '''
    # Number of test instances
    num_test = dists.shape[0]
    # Initialize the predictions
    y_pred = np.zeros(num_test)
    # Loop over the test instances
    for i in range(num_test):
        # Sort the i-th row of the distance matrix, take the resulting indices,
        # index the training labels with them, and flatten the result
        # (note how np.argsort is used here)
        labels = y_train[np.argsort(dists[i, :])].flatten()
        # Keep the labels of the k nearest neighbors
        closest_y = labels[0:k]
        # Count the occurrences of each label among the k nearest neighbors
        # (note the use of Counter from the collections module)
        c = Counter(closest_y)
        # Predict the class with the highest count
        y_pred[i] = c.most_common(1)[0][0]
    return y_pred
Take k=5 and predict on the test set; a sketch of this evaluation step follows.
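The notes omit the code for this step; the following is a minimal sketch, assuming accuracy is simply the fraction of matching labels:

y_test_pred = predict_labels(y_train, dists, k=5)
y_test_pred = y_test_pred.reshape((-1, 1))
# Fraction of test instances whose prediction matches the true label
accuracy = np.sum(y_test_pred == y_test) / y_test.shape[0]
print('accuracy =', accuracy)

This fixes k=5 by hand. To choose k more systematically, run 5-fold cross-validation over a list of candidate values: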
num_folds = 5
# Candidate k values
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
# Split the training data into folds
X_train_folds = np.array_split(X_train, num_folds)
# Split the training labels into folds
y_train_folds = np.array_split(y_train, num_folds)
k_to_accuracies = {}
# Iterate over the candidate k values
for k in k_choices:
    # Iterate over the five folds
    for fold in range(num_folds):
        # Hold out one fold of the training set as a validation set
        validation_X_test = X_train_folds[fold]
        validation_y_test = y_train_folds[fold]
        temp_X_train = np.concatenate(X_train_folds[:fold] + X_train_folds[fold + 1:])
        temp_y_train = np.concatenate(y_train_folds[:fold] + y_train_folds[fold + 1:])
        # Compute the distances
        temp_dists = compute_distances(validation_X_test, temp_X_train)
        temp_y_test_pred = predict_labels(temp_y_train, temp_dists, k=k)
        temp_y_test_pred = temp_y_test_pred.reshape((-1, 1))
        # Measure the classification accuracy on the held-out fold
        num_correct = np.sum(temp_y_test_pred == validation_y_test)
        num_test = validation_X_test.shape[0]
        accuracy = float(num_correct) / num_test
        k_to_accuracies[k] = k_to_accuracies.get(k, []) + [accuracy]
# Print the accuracy for each k value and each fold
for k in sorted(k_to_accuracies):
    for accuracy in k_to_accuracies[k]:
        print('k = %d, accuracy = %f' % (k, accuracy))
k = 1, accuracy = 0.904762
k = 1, accuracy = 1.000000
k = 1, accuracy = 0.952381
k = 1, accuracy = 0.857143
k = 1, accuracy = 0.952381
k = 3, accuracy = 0.857143
k = 3, accuracy = 1.000000
k = 3, accuracy = 0.952381
k = 3, accuracy = 0.857143
k = 3, accuracy = 0.952381
k = 5, accuracy = 0.857143
k = 5, accuracy = 1.000000
k = 5, accuracy = 0.952381
k = 5, accuracy = 0.904762
k = 5, accuracy = 0.952381
k = 8, accuracy = 0.904762
k = 8, accuracy = 1.000000
k = 8, accuracy = 0.952381
k = 8, accuracy = 0.904762
k = 8, accuracy = 0.952381
k = 10, accuracy = 0.952381
k = 10, accuracy = 1.000000
k = 10, accuracy = 0.952381
k = 10, accuracy = 0.904762
k = 10, accuracy = 0.952381
k = 12, accuracy = 0.952381
k = 12, accuracy = 1.000000
k = 12, accuracy = 0.952381
k = 12, accuracy = 0.857143
k = 12, accuracy = 0.952381
k = 15, accuracy = 0.952381
k = 15, accuracy = 1.000000
k = 15, accuracy = 0.952381
k = 15, accuracy = 0.857143
k = 15, accuracy = 0.952381
k = 20, accuracy = 0.952381
k = 20, accuracy = 1.000000
k = 20, accuracy = 0.952381
k = 20, accuracy = 0.761905
k = 20, accuracy = 0.952381
k = 50, accuracy = 1.000000
k = 50, accuracy = 1.000000
k = 50, accuracy = 0.904762
k = 50, accuracy = 0.761905
k = 50, accuracy = 0.904762
k = 100, accuracy = 0.285714
k = 100, accuracy = 0.380952
k = 100, accuracy = 0.333333
k = 100, accuracy = 0.238095
k = 100, accuracy = 0.190476
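The notes stop at the raw printout. One natural next step (an assumption, not from the original) is to average the five fold accuracies for each candidate, take the best, and plot the results with the matplotlib module imported earlier:

# Mean cross-validation accuracy for each candidate k (assumed follow-up step)
mean_accuracies = {k: np.mean(v) for k, v in k_to_accuracies.items()}
best_k = max(mean_accuracies, key=mean_accuracies.get)
print('best k =', best_k)
# Scatter plot of the per-fold accuracies against k
for k in k_choices:
    plt.scatter([k] * num_folds, k_to_accuracies[k])
plt.xlabel('k')
plt.ylabel('cross-validation accuracy')
plt.show()

The collapse at k=100 is easy to explain: each cross-validation training split holds only 84 of the 105 training samples, so with k=100 the "k nearest neighbors" are simply all 84 of them, every prediction degenerates to that split's overall majority class, and on three roughly balanced iris classes this yields the accuracies near 1/3 seen above.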