[Figure: images/suijisenlin.gif — random forest animation]
Wang Jingze's machine learning trick
What is ensemble learning (Voting Classifier)?
Apply several different models to the same data, then combine their predictions by some form of voting to pick the final result.
For example: is a newly released movie worth watching? You judge for yourself based on other people's reviews.
When choosing a supervised learning algorithm for everyday work:
in terms of out-of-the-box accuracy, ensemble methods (Random Forest, GBDT) are often considered second only to deep learning.
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
# noise parameter: 0.3
X, y = datasets.make_moons(n_samples=500, noise=0.3, random_state=42)
plt.scatter(X[y==0,0], X[y==0,1], alpha=0.3)
plt.scatter(X[y==1,0], X[y==1,1], alpha=0.3)
[Figure: scatter plot of the two moon-shaped classes (output_4_1.png)]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
from sklearn.neighbors import KNeighborsClassifier # KNN
knn_clf = KNeighborsClassifier()
knn_clf.fit(X_train, y_train)
knn_clf.score(X_test, y_test)
0.912
from sklearn.tree import DecisionTreeClassifier  # decision tree
dt_clf = DecisionTreeClassifier(random_state=666)
dt_clf.fit(X_train, y_train)
dt_clf.score(X_test, y_test)
0.864
from sklearn.linear_model import LogisticRegression  # logistic regression
log_clf = LogisticRegression(solver='lbfgs')
log_clf.fit(X_train, y_train)
log_clf.score(X_test, y_test)
0.864
y_predict1 = knn_clf.predict(X_test)
y_predict2 = dt_clf.predict(X_test)
y_predict3 = log_clf.predict(X_test)
y_predict3
array([1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1,
1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0,
0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0,
0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0,
1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1,
1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0], dtype=int64)
# Sum the three 0/1 predictions per sample; the possible sums are 0, 1, 2, 3
y_predict1 + y_predict2 + y_predict3
# sum 0 or 1 -> the majority voted 0
# sum 2 or 3 -> the majority voted 1
y_predict = ((y_predict1 + y_predict2 + y_predict3) >= 2).astype(int)  # np.int is removed in modern NumPy; use plain int
y_predict
array([1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1,
1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0,
0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0,
0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0,
1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1,
1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0])
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_predict)
0.92
It's recommended to run each algorithm separately first and tune its parameters, then combine the tuned models into the ensemble.
from sklearn.ensemble import VotingClassifier
voting_clf = VotingClassifier(
    estimators=[
        ('dt_clf', DecisionTreeClassifier(random_state=222)),
        ('knn_clf', KNeighborsClassifier()),
        ('log_clf', LogisticRegression(solver='lbfgs')),
    ],
    voting='hard'  # hard voting: majority rule on the predicted labels
)
voting_clf.fit(X_train, y_train)
VotingClassifier(estimators=[('dt_clf', DecisionTreeClassifier(random_state=222)), ('knn_clf', KNeighborsClassifier()), ('log_clf', LogisticRegression(solver='lbfgs'))], voting='hard')
voting_clf.predict(X_test)
array([1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1,
1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0,
0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0,
0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0,
1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1,
1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0], dtype=int64)
voting_clf.score(X_test, y_test)
0.928
The voting parameter: 'hard' vs 'soft'
Example: a dataset has 2 classes; five models are trained, and their predicted probabilities for a sample are:
Model 1: A 99%; B 1%
Model 2: A 49%; B 51%
Model 3: A 40%; B 60%
Model 4: A 90%; B 10%
Model 5: A 30%; B 70%
Hard voting counts labels: three of the five models predict B, so the ensemble outputs B. Soft voting averages the probabilities: A gets (0.99 + 0.49 + 0.40 + 0.90 + 0.30) / 5 = 61.6% versus 38.4% for B, so the ensemble outputs A — two highly confident votes for A outweigh three weak votes for B (see the sketch below).
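A minimal sketch of that arithmetic (the probability values are the hypothetical numbers from the example above, not model output):
import numpy as np
proba_A = np.array([0.99, 0.49, 0.40, 0.90, 0.30])  # each model's predicted probability for class A
print('hard voting:', 'A' if (proba_A > 0.5).sum() > len(proba_A) / 2 else 'B')  # counts labels -> B
print('soft voting:', 'A' if proba_A.mean() > 0.5 else 'B')  # mean probability 0.616 -> A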
Soft voting requires every model in the ensemble to be able to estimate class probabilities; otherwise it cannot be computed. kNN, decision trees, and logistic regression all can.
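Concretely, each classifier fitted above exposes sklearn's standard predict_proba method, returning one row per sample with a probability per class:
knn_clf.predict_proba(X_test[:3])   # kNN: fraction of neighbors in each class
dt_clf.predict_proba(X_test[:3])    # tree: class distribution in the reached leaf
log_clf.predict_proba(X_test[:3])   # logistic regression: sigmoid output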
voting_clf = VotingClassifier(
    estimators=[
        ('dt_clf', DecisionTreeClassifier(random_state=666)),
        ('knn_clf', KNeighborsClassifier()),
        ('log_clf', LogisticRegression(solver='lbfgs')),
    ],
    voting='soft',  # soft voting: average the predicted probabilities
)
voting_clf.fit(X_train, y_train)
voting_clf.score(X_test, y_test)
0.888
With the approach so far, there are too few distinct models available to combine. To push ensemble accuracy higher:
sub-models don't need high individual accuracy; as long as there are enough of them, the final model's accuracy can improve dramatically (see the computation below and the sketch that follows it).
For example, suppose each sub-model is only 60% accurate.
Note: each sub-model still needs to be better than random guessing (above 50% for two classes), or voting makes things worse.
0.6 ** 3 + 3 * 0.6 ** 2 * 0.4  # P(all 3 correct) + C(3,2) * P(exactly 2 of 3 correct)
0.648
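The same binomial argument extended to more sub-models shows why large ensembles help; a quick sketch, assuming scipy is available:
from scipy.stats import binom
# P(majority of n models correct) = P(X >= n // 2 + 1), where X ~ Binomial(n, 0.6)
for n in [3, 51, 501]:
    print(n, 1 - binom.cdf(n // 2, n, 0.6))  # roughly 0.648, 0.93, ~1.0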
How do we generate a large number of diverse sub-models?
Two sampling schemes:
Bagging: sampling with replacement (bootstrap); more common, since it can produce more distinct sub-models
Pasting: sampling without replacement
Here, Bagging-style ensemble learning uses only decision tree models:
sub-models differ because each is trained on different data, and decision trees, with their many possible splits and pruning options, produce more diverse models than other algorithms do.
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier
bagging_clf = BaggingClassifier(
    DecisionTreeClassifier(),  # the base model to ensemble
    n_estimators=500,          # number of sub-models
    max_samples=100,           # training samples drawn for each sub-model
    bootstrap=True             # sample with replacement
)
bagging_clf.fit(X_train, y_train)
bagging_clf.score(X_test, y_test)
0.92
In principle, more sub-models means higher accuracy, but training gets slower.
bagging_clf = BaggingClassifier(
    DecisionTreeClassifier(),  # the base model to ensemble
    n_estimators=5000,         # number of sub-models
    max_samples=100,           # training samples drawn for each sub-model
    bootstrap=True)            # sample with replacement
bagging_clf.fit(X_train, y_train)
bagging_clf.score(X_test, y_test)
0.912
Not the 99%+ the theory promises, because some sub-models fall below the average accuracy, and after this many draws from the same 500 samples many sub-models end up nearly identical.
Sampling with replacement also means some samples are never used for training:
on average about 37% of the samples are never drawn — for n samples, the chance that a given one is missed in all n draws is (1 - 1/n)^n, which approaches 1/e ≈ 0.368.
So no separate test set is needed: the never-drawn Out-of-Bag (OOB) samples can serve as the test set directly.
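A quick sketch verifying the ~37% figure on one simulated bootstrap draw (NumPy only):
import numpy as np
n = 500
rng = np.random.default_rng(42)
picked = rng.integers(0, n, size=n)            # one bootstrap sample: n draws with replacement
oob_fraction = 1 - len(np.unique(picked)) / n  # fraction of samples never drawn
print(oob_fraction, (1 - 1 / n) ** n)          # both close to 1/e ~ 0.368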
bagging_clf = BaggingClassifier(
    DecisionTreeClassifier(),
    n_estimators=500,
    max_samples=100,
    bootstrap=True,
    oob_score=True)  # track which samples stay out-of-bag and score on them
bagging_clf.fit(X, y)  # train on the full dataset; no held-out test set needed
bagging_clf.oob_score_  # accuracy measured on the out-of-bag samples
0.916
Bagging is very easy to parallelize:
each sub-model can be trained independently on its own CPU core, speeding things up.
Note: if training takes a long time and all cores are in use (n_jobs=-1), the machine can easily become unresponsive until training finishes.
%%time
bagging_clf = BaggingClassifier(
    DecisionTreeClassifier(),
    n_estimators=500,
    max_samples=100,
    bootstrap=True,
    oob_score=True)
bagging_clf.fit(X, y)
Wall time: 2.82 s
BaggingClassifier(base_estimator=DecisionTreeClassifier(), bootstrap=True, max_samples=100, n_estimators=500, oob_score=True)
%%time
bagging_clf = BaggingClassifier(
    DecisionTreeClassifier(),
    n_estimators=500,
    max_samples=100,
    bootstrap=True,
    oob_score=True,
    n_jobs=-1)  # use all CPU cores
bagging_clf.fit(X, y)
Beyond randomly sampling the training rows, there are more ways to inject randomness:
bootstrap_features samples the feature columns at random (Random Subspaces).
random_subspaces_clf = BaggingClassifier(
    DecisionTreeClassifier(),
    n_estimators=500,
    max_samples=500,          # effectively disable row sampling: every sub-model sees all 500 samples
    bootstrap=True,
    oob_score=True,
    max_features=1,           # draw 1 feature column at random (the data only has 2 features)
    bootstrap_features=True)  # sample features with replacement
random_subspaces_clf.fit(X, y)
random_subspaces_clf.oob_score_
0.834
random_patches_clf = BaggingClassifier(
    DecisionTreeClassifier(),
    n_estimators=500,
    max_samples=100,          # row sampling back on: 100 samples per sub-model (Random Patches)
    bootstrap=True,
    oob_score=True,
    max_features=1,           # draw 1 feature column at random (the data only has 2 features)
    bootstrap_features=True)  # sample features with replacement
random_patches_clf.fit(X, y)
random_patches_clf.oob_score_
0.856
Using decision trees with Bagging-style ensemble learning, as above, is exactly a Random Forest.
Besides assembling one by hand, sklearn ships a ready-made random forest class.
The random forest model combines decision trees with the Bagging classifier, so it accepts all the parameters of both DecisionTreeClassifier and BaggingClassifier.
from sklearn.ensemble import RandomForestClassifier
rf_clf = RandomForestClassifier(
    n_estimators=500,   # 500 trees
    oob_score=True,     # score on the out-of-bag samples
    random_state=666,   # fix the random seed
    # n_jobs=-1,        # uncomment to train on all CPU cores in parallel
)
rf_clf.fit(X, y)
RandomForestClassifier(bootstrap=True, n_estimators=500, oob_score=True, random_state=666)
rf_clf.oob_score_
0.896
Tuning the parameters improves accuracy:
rf_clf2 = RandomForestClassifier(
    n_estimators=500,
    max_leaf_nodes=16,  # cap the number of leaf nodes per tree
    oob_score=True,
    random_state=666,
    n_jobs=-1)
rf_clf2.fit(X, y)
rf_clf2.oob_score_
0.92
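A fitted forest also reports each feature's contribution to its splits, via sklearn's standard feature_importances_ attribute:
rf_clf2.feature_importances_  # one importance score per feature, summing to 1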
# help(RandomForestClassifier)
from sklearn.ensemble import BaggingRegressor
from sklearn.ensemble import RandomForestRegressor  # random forest for regression
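The regressors mirror the classifiers' API. A minimal sketch on made-up noisy sine data (the data here is illustrative, not from this notebook):
import numpy as np
rng = np.random.default_rng(0)
X_reg = rng.uniform(-3, 3, size=(200, 1))
y_reg = np.sin(X_reg.ravel()) + rng.normal(0, 0.1, size=200)  # noisy sine curve
rf_reg = RandomForestRegressor(n_estimators=100, oob_score=True, random_state=0)
rf_reg.fit(X_reg, y_reg)
rf_reg.oob_score_  # R^2 measured on the out-of-bag samples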
The ensemble methods above all belong to the Bagging family. The second big family is Boosting: sub-models are trained sequentially rather than independently, with each new sub-model concentrating on the samples its predecessors handled poorly. AdaBoost implements this by re-weighting the training samples after every round.
[Figure: AdaBoost illustration (images/adaboosting.png)]
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
ada_clf = AdaBoostClassifier(
    DecisionTreeClassifier(max_depth=2),  # the base learner
    n_estimators=500)
ada_clf.fit(X_train, y_train)
AdaBoostClassifier(algorithm='SAMME.R', base_estimator=DecisionTreeClassifier(max_depth=2), learning_rate=1.0, n_estimators=500)
ada_clf.score(X_test, y_test)
0.856
[Figure: Gradient Boosting illustration (images/gboosting.png)]
Gradient Boosting takes a different route: each new tree is fitted to the residual errors left by the ensemble so far, gradually reducing the overall error.
from sklearn.ensemble import GradientBoostingClassifier
# sklearn's Gradient Boosting uses decision trees as the base learner by default,
# so none needs to be passed in; just set the tree depth and the number of boosting stages
gb_clf = GradientBoostingClassifier(max_depth=3, n_estimators=30)
gb_clf.fit(X_train, y_train)
gb_clf.score(X_test, y_test)
0.92
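Because boosting adds one tree per stage, sklearn's staged_predict can replay test accuracy stage by stage to help choose n_estimators; a small sketch:
from sklearn.metrics import accuracy_score
stage_scores = [accuracy_score(y_test, y_pred) for y_pred in gb_clf.staged_predict(X_test)]
print(max(stage_scores), stage_scores.index(max(stage_scores)) + 1)  # best accuracy and the stage it first occurs at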
The algorithms above are the common ensemble methods that ship with scikit-learn.
In real work and in competitions, however, dedicated ensemble learning libraries are used more often:
other popular Boosting implementations such as XGBoost and LightGBM (both third-party implementations of GBDT-style Boosting) are widely used in data science and machine learning competitions.
They are not part of sklearn and must be installed separately.
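Both follow the sklearn estimator API, so they drop into the same workflow; a hedged sketch on the same moons split (assumes the packages are installed; the parameters and resulting scores are illustrative):
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier
xgb_clf = XGBClassifier(n_estimators=30, max_depth=3)   # XGBoost's sklearn-style classifier
xgb_clf.fit(X_train, y_train)
print(xgb_clf.score(X_test, y_test))
lgbm_clf = LGBMClassifier(n_estimators=30, max_depth=3)  # LightGBM's sklearn-style classifier
lgbm_clf.fit(X_train, y_train)
print(lgbm_clf.score(X_test, y_test))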