The perceptron class implemented in the previous article involves the following concepts:
1: Learning rate
The learning rate is a hyperparameter that controls how strongly the network weights are adjusted in response to the loss function; the lower its value, the more slowly the loss changes (see the short example after this list).
2: Number of iterations
The number of iterations (epochs) prevents the situation where learning keeps updating the weight values and never terminates. Alternatively, a threshold on the number of misclassified samples that is still acceptable can be used to exit the learning process.
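A small sketch of how the learning rate scales each weight update; the sample values and labels below are hypothetical and are not part of the class shown later:
import numpy as np

xi, target, predicted = np.array([5.0, 1.4]), 1, -1    # a hypothetical misclassified sample
for eta in (0.01, 0.1, 1.0):
    update = eta * (target - predicted)                # error signal scaled by the learning rate
    print(eta, update * xi)                            # smaller eta -> smaller step applied to the weights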
The Perceptron class mainly contains the following methods:
1: __init__, which initializes the learning rate and the number of iterations when an instance is created
def __init__(self, eta=0.01, n_iter=10):
    self.eta = eta
    self.n_iter = n_iter
2: predict, used during learning to update the class labels and, after the model has been trained, to predict the class labels of unseen data
def predict(self, X):
    """Return class label after unit step"""
    return np.where(self.net_input(X) >= 0.0, 1, -1)
The numpy.where(condition, x, y) function returns x wherever the condition holds and y otherwise.
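For example, applied to an array of net-input values (the numbers here are made up), non-negative entries map to 1 and negative entries to -1:
import numpy as np
print(np.where(np.array([0.3, -0.2]) >= 0.0, 1, -1))   # prints [ 1 -1]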
3: net_input, which computes the net input of the perceptron; it mainly calls the vector dot-product function, which takes two vectors and returns a scalar
def net_input(self, X):
    """Calculate net input"""
    return np.dot(X, self.w_[1:]) + self.w_[0]
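As a quick illustration with made-up numbers, the net input is just the dot product of the features with the feature weights plus the bias term stored in the first weight:
import numpy as np

w = np.array([0.5, 1.0, 2.0])          # w[0] is the bias, w[1:] are the feature weights
x = np.array([1.0, 0.5])               # one sample with two features
print(np.dot(x, w[1:]) + w[0])         # 1.0*1.0 + 0.5*2.0 + 0.5 -> prints 2.5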
4: fit, the perceptron learning method
def fit(self, X, y):
    """Fit training data.

    Parameters
    ----------
    X : {array-like}, shape = [n_samples, n_features]
        Training vectors, where n_samples is the number of samples
        and n_features is the number of features.
    y : array-like, shape = [n_samples]
        Target values.

    Returns
    -------
    self : object

    """
    self.w_ = np.zeros(1 + X.shape[1])
    self.errors_ = []

    for _ in range(self.n_iter):
        errors = 0
        for xi, target in zip(X, y):
            update = self.eta * (target - self.predict(xi))
            self.w_[1:] += update * xi
            self.w_[0] += update
            errors += int(update != 0.0)
        self.errors_.append(errors)
    return self
The fit method first initializes a weight vector w_, where w_[0] holds the bias (threshold) term; xi is one row of iris feature data and y holds the corresponding target class labels.
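To make the inner loop concrete, here is one hand-traced update with made-up numbers (not part of the class): for a misclassified sample the term (target - prediction) is ±2, so the weights move by 2*eta in the direction of the sample, while a correctly classified sample leaves them unchanged.
import numpy as np

eta = 0.1
w_ = np.zeros(1 + 2)                    # bias w_[0] plus one weight per feature, as in fit
xi, target = np.array([5.0, 1.4]), 1    # one hypothetical training row and its label
update = eta * (target - (-1))          # assume the current prediction was -1, so update = 0.2
w_[1:] += update * xi                   # feature weights become roughly [1.0, 0.28]
w_[0] += update                         # bias becomes 0.2
errors = int(update != 0.0)             # this sample counts as one misclassification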
The complete implementation of the Perceptron class is as follows:
# Perceptron.py
import numpy as np


class Perceptron(object):
    """Perceptron classifier.

    Parameters
    ----------
    eta : float
        Learning rate (between 0.0 and 1.0)
    n_iter : int
        Passes over the training dataset.

    Attributes
    ----------
    w_ : 1d-array
        Weights after fitting.
    errors_ : list
        Number of misclassifications in every epoch.

    """
    def __init__(self, eta=0.01, n_iter=10):
        self.eta = eta
        self.n_iter = n_iter

    def fit(self, X, y):
        """Fit training data.

        Parameters
        ----------
        X : {array-like}, shape = [n_samples, n_features]
            Training vectors, where n_samples is the number
            of samples and n_features is the number of
            features.
        y : array-like, shape = [n_samples]
            Target values.

        Returns
        -------
        self : object

        """
        self.w_ = np.zeros(1 + X.shape[1])
        self.errors_ = []

        for _ in range(self.n_iter):
            errors = 0
            for xi, target in zip(X, y):
                update = self.eta * (target - self.predict(xi))
                self.w_[1:] += update * xi
                self.w_[0] += update
                errors += int(update != 0.0)
            self.errors_.append(errors)
        return self

    def net_input(self, X):
        """Calculate net input"""
        return np.dot(X, self.w_[1:]) + self.w_[0]

    def predict(self, X):
        """Return class label after unit step"""
        return np.where(self.net_input(X) >= 0.0, 1, -1)
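Before moving on to the iris data, here is a quick sanity check of the class on a tiny, made-up, linearly separable dataset; all values are hypothetical and only illustrate the fit/predict workflow:
import numpy as np
from Perceptron import Perceptron

X_toy = np.array([[1.0, 1.0], [1.5, 0.5], [4.0, 4.0], [4.5, 3.5]])    # two well-separated clusters
y_toy = np.array([-1, -1, 1, 1])
ppn_toy = Perceptron(eta=0.1, n_iter=10).fit(X_toy, y_toy)
print(ppn_toy.errors_)                                        # misclassifications per epoch; should end at 0
print(ppn_toy.predict(np.array([[1.2, 0.8], [4.2, 3.8]])))    # should follow the two clusters: [-1  1]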
1: Following the steps of the previous article, load the iris data and import the Perceptron class.
Note: when importing the Perceptron class, the file Perceptron.py must be in the same directory in which python3.6 is started.
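If Perceptron.py is stored somewhere else, its directory can also be appended to the module search path before the import (the path below is only a placeholder):
>>> import sys
>>> sys.path.append('/path/to/perceptron_dir')   # placeholder: the directory that contains Perceptron.py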
Python 3.6.0rc2 (default, Dec 11 2019, 17:36:06)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tkinter
>>> import pandas as pd
>>> import matplotlib.pyplot as plt
>>> import numpy as np
>>> from Perceptron import Perceptron
>>> df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data', header=None)
>>> df.tail()
0 1 2 3 4
145 6.7 3.0 5.2 2.3 Iris-virginica
146 6.3 2.5 5.0 1.9 Iris-virginica
147 6.5 3.0 5.2 2.0 Iris-virginica
148 6.2 3.4 5.4 2.3 Iris-virginica
149 5.9 3.0 5.1 1.8 Iris-virginica
>>>
Then train the perceptron on the data:
>>> ppn = Perceptron(eta=0.1, n_iter=10)
>>> y = df.iloc[0:100, 4].values
>>> y = np.where(y == 'Iris-setosa', -1, 1)
>>> X = df.iloc[0:100, [0, 2]].values
>>> ppn.fit(X, y)
>>>
Plot the number of misclassified samples for each iteration:
>>> plt.plot(range(1, len(ppn.errors_) + 1), ppn.errors_, marker='o')
[<matplotlib.lines.Line2D object at 0x...>]
>>> plt.xlabel('Epochs')
Text(0.5, 0, 'Epochs')
>>> plt.ylabel('Number of misclassifications')
Text(0, 0.5, 'Number of misclassifications')
>>> plt.show()
The resulting figure is shown below.
As the plot shows, after six iterations the classifier has converged and can classify the training samples correctly.
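Once trained, the same predict method can classify new measurements. The two rows below are made-up feature values (column 0: sepal length, column 2: petal length, matching the two columns used for X); a correctly trained model would be expected to label the first sample -1 (setosa) and the second 1 (versicolor):
>>> ppn.predict(np.array([[5.0, 1.5], [6.5, 4.9]]))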