In scikit-learn, GradientBoostingClassifier is the classification class for GBDT and GradientBoostingRegressor is the regression class. The two take the same set of parameters, although the choices for some of them, such as the loss function loss, differ. As with Adaboost, we divide the important parameters into two groups: the parameters of the boosting framework itself, and the parameters of the weak learner, i.e. the CART regression tree.
First we look at the boosting-framework parameters. Since GradientBoostingClassifier and GradientBoostingRegressor share the vast majority of these, we discuss them together and point out the differences where they arise.
As for the weak-learner parameters: because GBDT uses CART regression trees, they essentially come from the decision-tree classes and closely mirror those of DecisionTreeClassifier and DecisionTreeRegressor.
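As a quick illustration of how the two groups come together, here is a minimal sketch; the values are arbitrary illustrative defaults, not recommendations, and loss='deviance' is the classifier-specific option (the regressor has its own loss choices):
from sklearn.ensemble import GradientBoostingClassifier

# Framework-level and tree-level parameters side by side (illustrative values).
clf = GradientBoostingClassifier(
    learning_rate=0.1,     # framework: shrinkage applied to each tree
    n_estimators=100,      # framework: number of boosting rounds
    subsample=0.8,         # framework: fraction of rows used per round
    loss='deviance',       # framework: classifier-specific loss option
    max_depth=3,           # tree: maximum depth of each CART tree
    min_samples_split=2,   # tree: min samples needed to split a node
    min_samples_leaf=1,    # tree: min samples required at a leaf
    max_features=None,     # tree: number of features considered per split
)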
First, we load the required libraries:
import pandas as pd
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn import metrics
# sklearn.grid_search is the legacy module (scikit-learn < 0.20); newer
# versions provide GridSearchCV in sklearn.model_selection instead.
from sklearn.grid_search import GridSearchCV
from sklearn.model_selection import train_test_split
import matplotlib.pylab as plt
%matplotlib inline
Next, we load the data and split it:
data = pd.read_csv('train_clearn.csv')    # cleaned feature matrix
titanic_train = pd.read_csv('train.csv')  # original file, used only for the label
source_x = data.loc[:, :]
source_y = titanic_train.loc[:, 'Survived']
# Hold-out split: 80% train, 20% test
train_x, test_x, train_y, test_y = train_test_split(source_x,
                                                    source_y,
                                                    train_size=0.8,
                                                    random_state=0)
We first fit the data with every parameter left at its default:
gbm0 = GradientBoostingClassifier(random_state=10)
gbm0.fit(train_x, train_y)
y_pred = gbm0.predict(test_x)
y_predprob = gbm0.predict_proba(test_x)[:,1]
print("Accuracy : %.4g" % metrics.accuracy_score(test_y, y_pred))
print("AUC Score (Train): %f" % metrics.roc_auc_score(test_y, y_predprob))
The output is as follows. The untuned fit is already decent; next we see how far tuning can improve the model's generalization ability.
Accuracy : 0.8045
AUC Score (Test): 0.880698
We start with the learning rate (learning_rate) and the number of iterations (n_estimators), which have to be tuned together: the smaller the step size, the more iterations are needed. A common strategy is to fix a moderately small learning rate and grid-search the iteration count. Here we set the initial learning rate to 0.1 and grid-search n_estimators:
param_test1 = {'n_estimators':list(range(20,110,10))}
gsearch1 = GridSearchCV(
    estimator=GradientBoostingClassifier(
        learning_rate=0.1, min_samples_split=300, min_samples_leaf=20,
        max_depth=8, max_features='sqrt', subsample=0.8, random_state=10),
    param_grid=param_test1, scoring='roc_auc', iid=False, cv=5)
gsearch1.fit(train_x,train_y)
gsearch1.grid_scores_, gsearch1.best_params_, gsearch1.best_score_
The output is as follows; the best iteration count is 90.
([mean: 0.84477, std: 0.04336, params: {'n_estimators': 20},
mean: 0.84839, std: 0.04286, params: {'n_estimators': 30},
mean: 0.84916, std: 0.04456, params: {'n_estimators': 40},
mean: 0.84976, std: 0.04375, params: {'n_estimators': 50},
mean: 0.85312, std: 0.04165, params: {'n_estimators': 60},
mean: 0.85369, std: 0.04137, params: {'n_estimators': 70},
mean: 0.85899, std: 0.03888, params: {'n_estimators': 80},
mean: 0.86063, std: 0.03773, params: {'n_estimators': 90},
mean: 0.85890, std: 0.04001, params: {'n_estimators': 100}],
{'n_estimators': 90},
0.8606284721122759)
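Note that sklearn.grid_search and the grid_scores_ attribute were removed in scikit-learn 0.20; on a modern version the equivalent search would look roughly like this (a sketch, reusing the train_x/train_y defined above):
from sklearn.model_selection import GridSearchCV  # modern location
from sklearn.ensemble import GradientBoostingClassifier
import pandas as pd

gsearch_modern = GridSearchCV(
    estimator=GradientBoostingClassifier(
        learning_rate=0.1, min_samples_split=300, min_samples_leaf=20,
        max_depth=8, max_features='sqrt', subsample=0.8, random_state=10),
    param_grid={'n_estimators': list(range(20, 110, 10))},
    scoring='roc_auc', cv=5)  # the iid parameter no longer exists
gsearch_modern.fit(train_x, train_y)
# cv_results_ replaces grid_scores_: per-candidate mean and std of CV AUC
print(pd.DataFrame(gsearch_modern.cv_results_)[
      ['params', 'mean_test_score', 'std_test_score']])
print(gsearch_modern.best_params_, gsearch_modern.best_score_)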
Having found a suitable iteration count, we turn to the decision-tree parameters. We first grid-search the maximum tree depth max_depth together with min_samples_split, the minimum number of samples required to split an internal node.
param_test2 = {'max_depth':list(range(2,10,2)), 'min_samples_split':list(range(50,150,10))}
gsearch2 = GridSearchCV(
    estimator=GradientBoostingClassifier(
        learning_rate=0.1, n_estimators=90, min_samples_leaf=20,
        max_features='sqrt', subsample=0.8, random_state=10),
    param_grid=param_test2, scoring='roc_auc', iid=False, cv=5)
gsearch2.fit(train_x,train_y)
gsearch2.grid_scores_, gsearch2.best_params_, gsearch2.best_score_
The output is as follows; the best maximum depth is 4, with min_samples_split of 110.
([mean: 0.86465, std: 0.03776, params: {'max_depth': 2, 'min_samples_split': 50},
mean: 0.86343, std: 0.03720, params: {'max_depth': 2, 'min_samples_split': 60},
mean: 0.86479, std: 0.03652, params: {'max_depth': 2, 'min_samples_split': 70},
mean: 0.86309, std: 0.03650, params: {'max_depth': 2, 'min_samples_split': 80},
mean: 0.86271, std: 0.03712, params: {'max_depth': 2, 'min_samples_split': 90},
mean: 0.86448, std: 0.03717, params: {'max_depth': 2, 'min_samples_split': 100},
mean: 0.86292, std: 0.03746, params: {'max_depth': 2, 'min_samples_split': 110},
mean: 0.86188, std: 0.03794, params: {'max_depth': 2, 'min_samples_split': 120},
mean: 0.86105, std: 0.03746, params: {'max_depth': 2, 'min_samples_split': 130},
mean: 0.86105, std: 0.03746, params: {'max_depth': 2, 'min_samples_split': 140},
mean: 0.85659, std: 0.03878, params: {'max_depth': 4, 'min_samples_split': 50},
mean: 0.85861, std: 0.03720, params: {'max_depth': 4, 'min_samples_split': 60},
mean: 0.85706, std: 0.03613, params: {'max_depth': 4, 'min_samples_split': 70},
mean: 0.85980, std: 0.03479, params: {'max_depth': 4, 'min_samples_split': 80},
mean: 0.86438, std: 0.03203, params: {'max_depth': 4, 'min_samples_split': 90},
mean: 0.85656, std: 0.03576, params: {'max_depth': 4, 'min_samples_split': 100},
mean: 0.86509, std: 0.03156, params: {'max_depth': 4, 'min_samples_split': 110},
mean: 0.86172, std: 0.03431, params: {'max_depth': 4, 'min_samples_split': 120},
mean: 0.86274, std: 0.03127, params: {'max_depth': 4, 'min_samples_split': 130},
mean: 0.86254, std: 0.03170, params: {'max_depth': 4, 'min_samples_split': 140},
mean: 0.84965, std: 0.03639, params: {'max_depth': 6, 'min_samples_split': 50},
mean: 0.85529, std: 0.03180, params: {'max_depth': 6, 'min_samples_split': 60},
mean: 0.85677, std: 0.03492, params: {'max_depth': 6, 'min_samples_split': 70},
mean: 0.85867, std: 0.03459, params: {'max_depth': 6, 'min_samples_split': 80},
mean: 0.85930, std: 0.03559, params: {'max_depth': 6, 'min_samples_split': 90},
mean: 0.86255, std: 0.03209, params: {'max_depth': 6, 'min_samples_split': 100},
mean: 0.86113, std: 0.03159, params: {'max_depth': 6, 'min_samples_split': 110},
mean: 0.86114, std: 0.03075, params: {'max_depth': 6, 'min_samples_split': 120},
mean: 0.85927, std: 0.03209, params: {'max_depth': 6, 'min_samples_split': 130},
mean: 0.85989, std: 0.03344, params: {'max_depth': 6, 'min_samples_split': 140},
mean: 0.85769, std: 0.03383, params: {'max_depth': 8, 'min_samples_split': 50},
mean: 0.85226, std: 0.03975, params: {'max_depth': 8, 'min_samples_split': 60},
mean: 0.85781, std: 0.03129, params: {'max_depth': 8, 'min_samples_split': 70},
mean: 0.85594, std: 0.03510, params: {'max_depth': 8, 'min_samples_split': 80},
mean: 0.85959, std: 0.03734, params: {'max_depth': 8, 'min_samples_split': 90},
mean: 0.86153, std: 0.03280, params: {'max_depth': 8, 'min_samples_split': 100},
mean: 0.86217, std: 0.03261, params: {'max_depth': 8, 'min_samples_split': 110},
mean: 0.86200, std: 0.03148, params: {'max_depth': 8, 'min_samples_split': 120},
mean: 0.85845, std: 0.03335, params: {'max_depth': 8, 'min_samples_split': 130},
mean: 0.86015, std: 0.03489, params: {'max_depth': 8, 'min_samples_split': 140}],
{'max_depth': 4, 'min_samples_split': 110},
0.8650923023336817)
A tree depth of 4 is a reasonable value, so we fix it. We cannot pin down min_samples_split yet, because it interacts with the other tree parameters. Next we tune min_samples_split jointly with min_samples_leaf, the minimum number of samples required at a leaf.
param_test3 = {'min_samples_split':list(range(100,300,50)), 'min_samples_leaf':list(range(10,50,5))}
gsearch3 = GridSearchCV(
    estimator=GradientBoostingClassifier(
        learning_rate=0.1, n_estimators=90, max_depth=4,
        max_features='sqrt', subsample=0.8, random_state=10),
    param_grid=param_test3, scoring='roc_auc', iid=False, cv=5)
gsearch3.fit(train_x,train_y)
gsearch3.grid_scores_, gsearch3.best_params_, gsearch3.best_score_
The output is as follows; the best min_samples_leaf is 15 and the best min_samples_split is 150.
([mean: 0.86129, std: 0.02999, params: {'min_samples_leaf': 10, 'min_samples_split': 100},
mean: 0.86209, std: 0.02903, params: {'min_samples_leaf': 10, 'min_samples_split': 150},
mean: 0.86268, std: 0.03289, params: {'min_samples_leaf': 10, 'min_samples_split': 200},
mean: 0.86138, std: 0.03741, params: {'min_samples_leaf': 10, 'min_samples_split': 250},
mean: 0.86056, std: 0.03236, params: {'min_samples_leaf': 15, 'min_samples_split': 100},
mean: 0.86635, std: 0.03091, params: {'min_samples_leaf': 15, 'min_samples_split': 150},
mean: 0.86512, std: 0.03507, params: {'min_samples_leaf': 15, 'min_samples_split': 200},
mean: 0.86227, std: 0.03938, params: {'min_samples_leaf': 15, 'min_samples_split': 250},
mean: 0.85656, std: 0.03576, params: {'min_samples_leaf': 20, 'min_samples_split': 100},
mean: 0.86423, std: 0.03237, params: {'min_samples_leaf': 20, 'min_samples_split': 150},
mean: 0.86162, std: 0.03662, params: {'min_samples_leaf': 20, 'min_samples_split': 200},
mean: 0.86145, std: 0.03945, params: {'min_samples_leaf': 20, 'min_samples_split': 250},
mean: 0.85594, std: 0.03259, params: {'min_samples_leaf': 25, 'min_samples_split': 100},
mean: 0.85750, std: 0.03426, params: {'min_samples_leaf': 25, 'min_samples_split': 150},
mean: 0.85895, std: 0.03968, params: {'min_samples_leaf': 25, 'min_samples_split': 200},
mean: 0.85581, std: 0.04150, params: {'min_samples_leaf': 25, 'min_samples_split': 250},
mean: 0.85937, std: 0.03380, params: {'min_samples_leaf': 30, 'min_samples_split': 100},
mean: 0.85792, std: 0.03247, params: {'min_samples_leaf': 30, 'min_samples_split': 150},
mean: 0.85723, std: 0.03633, params: {'min_samples_leaf': 30, 'min_samples_split': 200},
mean: 0.85514, std: 0.03842, params: {'min_samples_leaf': 30, 'min_samples_split': 250},
mean: 0.85774, std: 0.03341, params: {'min_samples_leaf': 35, 'min_samples_split': 100},
mean: 0.85655, std: 0.03558, params: {'min_samples_leaf': 35, 'min_samples_split': 150},
mean: 0.85707, std: 0.03838, params: {'min_samples_leaf': 35, 'min_samples_split': 200},
mean: 0.85261, std: 0.04124, params: {'min_samples_leaf': 35, 'min_samples_split': 250},
mean: 0.86045, std: 0.03303, params: {'min_samples_leaf': 40, 'min_samples_split': 100},
mean: 0.85548, std: 0.03708, params: {'min_samples_leaf': 40, 'min_samples_split': 150},
mean: 0.85620, std: 0.03833, params: {'min_samples_leaf': 40, 'min_samples_split': 200},
mean: 0.85193, std: 0.03854, params: {'min_samples_leaf': 40, 'min_samples_split': 250},
mean: 0.85958, std: 0.03422, params: {'min_samples_leaf': 45, 'min_samples_split': 100},
mean: 0.85553, std: 0.03468, params: {'min_samples_leaf': 45, 'min_samples_split': 150},
mean: 0.85817, std: 0.03889, params: {'min_samples_leaf': 45, 'min_samples_split': 200},
mean: 0.85150, std: 0.04096, params: {'min_samples_leaf': 45, 'min_samples_split': 250}],
{'min_samples_leaf': 15, 'min_samples_split': 150},
0.8663518940713297)
We can now put the tuned parameters into the GBDT class and refit the data to check the effect:
gbm1 = GradientBoostingClassifier(
    learning_rate=0.1, n_estimators=90, max_depth=4, min_samples_leaf=15,
    min_samples_split=150, max_features='sqrt', subsample=0.8, random_state=10)
gbm1.fit(train_x,train_y)
y_pred1 = gbm1.predict(test_x)
y_predprob1 = gbm1.predict_proba(test_x)[:,1]
print("Accuracy : %.4g" % metrics.accuracy_score(test_y, y_pred1))
print("AUC Score (Train): %f" % metrics.roc_auc_score(test_y, y_predprob1))
The output is as follows:
Accuracy : 0.8101
AUC Score (Test): 0.892227
Both Accuracy and AUC have improved, so the grid search has indeed sharpened the model. Next we grid-search the maximum number of features max_features:
param_test4 = {'max_features':list(range(2,14,2))}
gsearch4 = GridSearchCV(
    estimator=GradientBoostingClassifier(
        learning_rate=0.1, n_estimators=90, max_depth=4, min_samples_leaf=15,
        min_samples_split=150, subsample=0.8, random_state=10),
    param_grid=param_test4, scoring='roc_auc', iid=False, cv=5)
gsearch4.fit(train_x,train_y)
gsearch4.grid_scores_, gsearch4.best_params_, gsearch4.best_score_
The output is as follows; the search selects max_features=4.
([mean: 0.85981, std: 0.03986, params: {'max_features': 2},
mean: 0.86794, std: 0.02685, params: {'max_features': 4},
mean: 0.86334, std: 0.03244, params: {'max_features': 6},
mean: 0.86137, std: 0.03030, params: {'max_features': 8},
mean: 0.85977, std: 0.03542, params: {'max_features': 10},
mean: 0.85979, std: 0.03444, params: {'max_features': 12}],
{'max_features': 4},
0.8679439070256729)
Now we grid-search the subsampling fraction subsample. (Note that the base estimator below sets max_features=9 rather than the 4 selected by the previous search, and the later models keep that value.)
param_test5 = {'subsample':[0.6,0.7,0.75,0.8,0.85,0.9]}
gsearch5 = GridSearchCV(
    estimator=GradientBoostingClassifier(
        learning_rate=0.1, n_estimators=90, max_depth=4, min_samples_leaf=15,
        min_samples_split=150, max_features=9, random_state=10),
    param_grid=param_test5, scoring='roc_auc', iid=False, cv=5)
gsearch5.fit(train_x,train_y)
gsearch5.grid_scores_, gsearch5.best_params_, gsearch5.best_score_
The output is as follows; the best subsample fraction is 0.85.
([mean: 0.86052, std: 0.03611, params: {'subsample': 0.6},
mean: 0.86439, std: 0.03060, params: {'subsample': 0.7},
mean: 0.86344, std: 0.03568, params: {'subsample': 0.75},
mean: 0.86588, std: 0.03207, params: {'subsample': 0.8},
mean: 0.86704, std: 0.03273, params: {'subsample': 0.85},
mean: 0.86423, std: 0.02832, params: {'subsample': 0.9}],
{'subsample': 0.85},
0.8670370150477256)
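Because subsample is below 1, each round is trained on a random 85% of the rows (stochastic gradient boosting), and the fitted model exposes oob_improvement_, an out-of-bag estimate of how much each round improves the loss. A minimal sketch, reusing the base-estimator settings from gsearch5:
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

gbm_sub = GradientBoostingClassifier(
    learning_rate=0.1, n_estimators=90, max_depth=4, min_samples_leaf=15,
    min_samples_split=150, max_features=9, subsample=0.85, random_state=10)
gbm_sub.fit(train_x, train_y)
# Cumulative out-of-bag improvement per round; a flat tail suggests that
# additional boosting rounds would no longer help.
print(np.cumsum(gbm_sub.oob_improvement_)[-5:])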
We now have tuned values for all of the parameters above. To squeeze out more generalization ability, the usual move is to halve the learning rate and double the number of iterations in step; in the runs below only the learning rate is reduced, with n_estimators held at 90. Refit the model:
gbm2 = GradientBoostingClassifier(
    learning_rate=0.05, n_estimators=90, max_depth=4, min_samples_leaf=15,
    min_samples_split=150, max_features=9, subsample=0.85, random_state=10)
gbm2.fit(train_x,train_y)
y_pred2 = gbm2.predict(test_x)
y_predprob2 = gbm2.predict_proba(test_x)[:,1]
print("Accuracy : %.4g" % metrics.accuracy_score(test_y, y_pred2))
print("AUC Score (Train): %f" % metrics.roc_auc_score(test_y, y_predprob2))
The output is as follows:
Accuracy : 0.7989
AUC Score (Test): 0.896179
Accuracy is slightly lower than before while AUC improves. This is expected: halving the learning rate with the number of trees unchanged means each boosting round corrects less of the residual, so the model fits the training data less tightly, trading some fit for generalization.
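To watch this trade-off directly, GradientBoostingClassifier's staged_predict_proba yields predictions after every boosting round, so the test AUC can be tracked as trees are added. A minimal sketch, reusing train_x/test_x from above (the 180-tree budget is an illustrative choice, not one of the runs in this post):
import numpy as np
from sklearn import metrics
from sklearn.ensemble import GradientBoostingClassifier

for lr in (0.1, 0.05):
    model = GradientBoostingClassifier(
        learning_rate=lr, n_estimators=180, max_depth=4,
        min_samples_leaf=15, min_samples_split=150,
        max_features=9, subsample=0.85, random_state=10)
    model.fit(train_x, train_y)
    # One probability array per boosting round
    aucs = [metrics.roc_auc_score(test_y, proba[:, 1])
            for proba in model.staged_predict_proba(test_x)]
    best = int(np.argmax(aucs))
    print("lr=%.2f: best test AUC %.4f at %d trees" % (lr, aucs[best], best + 1))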
Next we shrink the learning rate by a further factor of 5, to 0.01, and refit:
gbm3 = GradientBoostingClassifier(
    learning_rate=0.01, n_estimators=90, max_depth=4, min_samples_leaf=15,
    min_samples_split=150, max_features=9, subsample=0.85, random_state=10)
gbm3.fit(train_x,train_y)
y_pred3 = gbm3.predict(test_x)
y_predprob3 = gbm3.predict_proba(test_x)[:,1]
print("Accuracy : %.4g" % metrics.accuracy_score(test_y, y_pred3))
print("AUC Score (Train): %f" % metrics.roc_auc_score(test_y, y_predprob3))
The output is as follows. Accuracy recovers a little, but the AUC slips: with the iteration budget fixed at 90 trees, an ever-smaller learning rate leaves the ensemble increasingly under-converged.
Accuracy : 0.8045
AUC Score (Test): 0.893808
Finally, we halve the learning rate once more, to 0.005, and refit:
gbm4 = GradientBoostingClassifier(
    learning_rate=0.005, n_estimators=90, max_depth=4, min_samples_leaf=15,
    min_samples_split=150, max_features=9, subsample=0.85, random_state=10)
gbm4.fit(train_x,train_y)
y_pred4 = gbm4.predict(test_x)
y_predprob4 = gbm4.predict_proba(test_x)[:,1]
print("Accuracy : %.4g" % metrics.accuracy_score(test_y, y_pred4))
print("AUC Score (Train): %f" % metrics.roc_auc_score(test_y, y_predprob4))
The output is as follows. The learning rate is now so small that, with only 90 trees, the model underfits and the AUC drops further. In other words, the learning rate should not be set too small without a matching increase in the number of iterations.
Accuracy : 0.8101
AUC Score (Test): 0.888406
Tip: when setting the range for a parameter search, start with a rough, coarse grid and then refine it around the best values found.
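For instance, having located n_estimators=90 on a coarse grid of step 10, one could re-search a finer grid around it. A sketch under the same legacy API as above (the range is a hypothetical refinement):
param_fine = {'n_estimators': list(range(82, 100, 2))}  # finer grid around 90
gsearch_fine = GridSearchCV(
    estimator=GradientBoostingClassifier(
        learning_rate=0.1, max_depth=4, min_samples_leaf=15,
        min_samples_split=150, max_features='sqrt',
        subsample=0.8, random_state=10),
    param_grid=param_fine, scoring='roc_auc', iid=False, cv=5)
gsearch_fine.fit(train_x, train_y)
print(gsearch_fine.best_params_, gsearch_fine.best_score_)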
This concludes our walkthrough of GBDT parameter tuning.