A First Look at Machine Learning
1). Four main paradigms: supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning.
2). Telling supervised learning and unsupervised learning apart:
whether learning is "supervised" depends on whether the input data carries labels. If the input data has labels, it is supervised learning; if not, it is unsupervised learning.
3). Supervised learning: regression (continuous targets) and classification (discrete targets).
Unsupervised learning: clustering.
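As a concrete illustration of the distinction above, here is a toy sketch (the arrays are made-up example data, not from any real dataset): a supervised task pairs the feature matrix with targets, which are discrete for classification and continuous for regression, while an unsupervised task sees only the features.

```python
import numpy as np

# Features for three samples (toy data)
X = np.array([[1.0, 2.0], [1.5, 1.8], [8.0, 8.0]])

# Supervised, classification: discrete labels, one per sample
y_class = np.array([0, 0, 1])

# Supervised, regression: continuous targets, one per sample
y_reg = np.array([3.2, 2.9, 15.7])

# Unsupervised (e.g. clustering): only X is given, no targets at all;
# the algorithm must discover structure in the features by itself
```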
The KNN classification algorithm:
K-Nearest Neighbor algorithm, KNN for short.
The idea: classify a sample by majority vote among its K nearest neighbors.
Example: predicting whether a tumor is benign or malignant (implemented by hand).
from matplotlib import pyplot as plt
import numpy as np

raw_data_X = [[3.393533211, 2.331273381],
              [3.110073483, 1.781539638],
              [1.343808831, 3.368360954],
              [3.582294042, 4.679179110],
              [2.280362439, 2.866990263],
              [7.423436942, 4.696522875],
              [5.745051997, 3.533989803],
              [9.172168622, 2.511101045],
              [7.792783481, 3.424088941],
              [7.939820817, 0.791637231]]
raw_data_y = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
X_train = np.array(raw_data_X)
y_train = np.array(raw_data_y)
Prediction
# Suppose a new sample x arrives; predict whether the tumor is malignant or benign
x = np.array([8.093607318, 3.365731514])
plt.scatter(X_train[y_train == 0, 0], X_train[y_train == 0, 1], color='r')  # benign (label 0)
plt.scatter(X_train[y_train == 1, 0], X_train[y_train == 1, 1], color='b')  # malignant (label 1)
plt.scatter(x[0], x[1], color='g')  # new sample x
plt.show()
Predict with the KNN algorithm:
from math import sqrt

# Compute the distance from x to all ten points, then pick the k closest.
# Equivalent loop version:
# distances = []
# for x_train in X_train:
#     d = sqrt(np.sum((x_train - x) ** 2))
#     distances.append(d)
distances = [sqrt(np.sum((x_train - x) ** 2)) for x_train in X_train]
distances
[4.812566907609877,
5.229270827235305,
6.749798999160064,
4.6986266144110695,
5.83460014556857,
1.4900114024329525,
2.354574897431513,
1.3761132675144652,
0.3064319992975,
2.5786840957478887]
nearest = np.argsort(distances)
nearest
array([8, 7, 5, 6, 9, 3, 0, 1, 4, 2], dtype=int64)
# Suppose we set K to 6
k = 6
top_k_y = [y_train[i] for i in nearest[:k]]
top_k_y
[1, 1, 1, 1, 1, 0]
# Tally the votes with Counter (convenient when there are many labels)
from collections import Counter
votes = Counter(top_k_y)
votes
Counter({1: 5, 0: 1})
# Return the n most common labels and their counts
votes.most_common(1)
[(1, 5)]
predict_y = votes.most_common(1)[0][0]
predict_y
1
Patient x is most likely to have a malignant tumor.
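The manual steps above (distance computation, sorting, and majority vote) can be collected into one reusable function. This is a minimal sketch of the same hand-rolled KNN; the function name `knn_predict` is chosen here for illustration and is not part of any library:

```python
from collections import Counter
from math import sqrt

import numpy as np

def knn_predict(X_train, y_train, x, k=6):
    """Predict the label of sample x by majority vote of its k nearest neighbors."""
    # Euclidean distance from x to every training sample
    distances = [sqrt(np.sum((x_train - x) ** 2)) for x_train in X_train]
    # Indices of training samples sorted by distance, nearest first
    nearest = np.argsort(distances)
    # Labels of the k nearest neighbors
    top_k_y = [y_train[i] for i in nearest[:k]]
    # The most common label among the neighbors wins the vote
    return Counter(top_k_y).most_common(1)[0][0]
```

For the dataset above, `knn_predict(X_train, y_train, x, k=6)` reproduces the step-by-step result of 1 (malignant).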