Automatic hyperparameter tuning with grid search in sklearn (suitable for small datasets)

Introduction to GridSearchCV:

GridSearchCV exists to automate parameter tuning: feed in the candidate parameters and it returns the best score and the best parameter combination. However, it is only practical for small datasets; once the data grows large it becomes very hard to get a result this way, and you have to be smarter. For large datasets a fast tuning approach is coordinate descent, which is essentially a greedy strategy: tune the parameter that currently has the largest impact on the model until it is optimal, then move on to the next most influential parameter, and so on until every parameter has been tuned. The drawback is that this may end up at a local rather than a global optimum, but it saves a great deal of time and effort, so given that huge advantage it is worth trying; the result can later be improved further with bagging. Back to sklearn: GridSearchCV systematically iterates over all parameter combinations and uses cross-validation to determine the best-performing one.
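As a rough sketch of this coordinate-descent style of tuning (the estimator and parameter ranges below are only illustrative, reusing the RandomForestClassifier settings mentioned later in this post), one can run several small grid searches in sequence, fixing each parameter at its best value before tuning the next:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

iris = load_iris()
X, y = iris.data, iris.target

# step 1: tune the parameter believed to matter most, everything else fixed
search1 = GridSearchCV(RandomForestClassifier(random_state=10),
                       {'n_estimators': range(10, 71, 10)},
                       scoring='accuracy', cv=5)
search1.fit(X, y)
best_n = search1.best_params_['n_estimators']

# step 2: freeze n_estimators at its best value, then tune the next parameter
search2 = GridSearchCV(RandomForestClassifier(n_estimators=best_n, random_state=10),
                       {'max_depth': range(2, 11, 2)},
                       scoring='accuracy', cv=5)
search2.fit(X, y)
print(search1.best_params_, search2.best_params_)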

Official GridSearchCV documentation: http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html

 

Commonly used parameters:

estimator: the estimator (classifier) to use, e.g. estimator=RandomForestClassifier(min_samples_split=100, min_samples_leaf=20, max_depth=8, max_features='sqrt', random_state=10); pass in every parameter except the ones you want to search over. Every estimator needs either a scoring parameter or a score method.
param_grid: a dict (or list of dicts) giving the candidate values of the parameters to optimize, e.g. param_grid=param_test1 with param_test1 = {'n_estimators': range(10, 71, 10)}.
scoring: the evaluation metric. Defaults to None, in which case the estimator's own score method is used; otherwise pass something like scoring='roc_auc' (which metric is appropriate depends on the model). It can be a string naming a scorer, or a callable with the signature scorer(estimator, X, y). The available scoring options are listed at http://scikit-learn.org/stable/modules/model_evaluation.html.


cv: cross-validation setting. Defaults to None, which in this scikit-learn version means 3-fold cross-validation (newer releases default to 5-fold); you can pass an integer number of folds or a generator/splitter that yields train/test splits.
refit: defaults to True. After the search finishes, the best parameter combination found by cross-validation is used to fit the estimator one more time on the full dataset (training and development data together), and this refitted model is the one kept for final evaluation and prediction.
iid: defaults to True. When True, the samples are assumed to be identically distributed across folds, and the reported score is the total over all samples rather than the average over the folds.
verbose: verbosity of the log output. 0: no output during training; 1: occasional output; >1: output for every sub-model.
n_jobs: number of parallel jobs. An integer sets the number of processes, -1 uses all CPU cores, and the default is 1.
pre_dispatch: the total number of jobs dispatched ahead of time. When n_jobs > 1 the data is copied for each dispatched job, which can cause out-of-memory errors; setting pre_dispatch caps the number of pre-dispatched jobs so the data is copied at most pre_dispatch times. A minimal construction that ties these parameters together is sketched right after this list.
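The values below are the illustrative ones from the estimator and param_grid descriptions above; X_train and y_train stand for your own training data:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

param_test1 = {'n_estimators': range(10, 71, 10)}
gsearch1 = GridSearchCV(
    estimator=RandomForestClassifier(min_samples_split=100, min_samples_leaf=20,
                                     max_depth=8, max_features='sqrt', random_state=10),
    param_grid=param_test1,
    scoring='roc_auc',   # or None to fall back to the estimator's own score method
    cv=5,                # 5-fold cross-validation
    refit=True,          # refit the best combination on the whole dataset afterwards
    verbose=1,           # print a little progress information
    n_jobs=-1)           # use all CPU cores
# gsearch1.fit(X_train, y_train)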

 

Commonly used methods and attributes:

grid.fit(): run the grid search
grid_scores_: the evaluation results for every parameter combination (deprecated since scikit-learn 0.18 in favor of the richer cv_results_, as the warning in the output below shows; see the sketch after this list)
best_params_: the parameter combination that achieved the best result
best_score_: the best score observed during the search
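Because grid_scores_ is deprecated, the same information can be read from cv_results_ once a search object has been fitted (grid here refers to the fitted GridSearchCV of the example that follows):

results = grid.cv_results_
for mean, std, params in zip(results['mean_test_score'],
                             results['std_test_score'],
                             results['params']):
    print('%.5f (+/-%.5f) for %r' % (mean, std, params))
print(grid.best_score_)        # best mean cross-validated score
print(grid.best_params_)       # the corresponding parameter combination
print(grid.best_estimator_)    # the refitted best model (available when refit=True)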

from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
import matplotlib.pyplot as plt   # only needed if you want to plot the scores afterwards
from sklearn.model_selection import GridSearchCV

iris = load_iris()
X = iris.data
y = iris.target

# candidate values for the two hyperparameters we want to tune
k_range = range(1, 31)
weights = ['uniform', 'distance']
param_grid = dict(n_neighbors=k_range, weights=weights)

knn = KNeighborsClassifier(n_neighbors=5)  # base estimator; the searched parameters override n_neighbors

grid = GridSearchCV(knn, param_grid, cv=10, scoring='accuracy')
grid.fit(X, y)
print(grid.grid_scores_)   # deprecated since 0.18, use grid.cv_results_ in newer versions
print(grid.best_score_)
print(grid.best_params_)

D:\python\ven\Scripts\python.exe D:/python/four.py
D:\python\ven\lib\site-packages\sklearn\model_selection\_search.py:761: DeprecationWarning: The grid_scores_ attribute was deprecated in version 0.18 in favor of the more elaborate cv_results_ attribute. The grid_scores_ attribute will not be available from 0.20
  DeprecationWarning)
[mean: 0.96000, std: 0.05333, params: {'n_neighbors': 1, 'weights': 'uniform'}, mean: 0.96000, std: 0.05333, params: {'n_neighbors': 1, 'weights': 'distance'}, mean: 0.95333, std: 0.05207, params: {'n_neighbors': 2, 'weights': 'uniform'}, mean: 0.96000, std: 0.05333, params: {'n_neighbors': 2, 'weights': 'distance'}, mean: 0.96667, std: 0.04472, params: {'n_neighbors': 3, 'weights': 'uniform'}, mean: 0.96667, std: 0.04472, params: {'n_neighbors': 3, 'weights': 'distance'}, mean: 0.96667, std: 0.04472, params: {'n_neighbors': 4, 'weights': 'uniform'}, mean: 0.96667, std: 0.04472, params: {'n_neighbors': 4, 'weights': 'distance'}, mean: 0.96667, std: 0.04472, params: {'n_neighbors': 5, 'weights': 'uniform'}, mean: 0.96667, std: 0.04472, params: {'n_neighbors': 5, 'weights': 'distance'}, mean: 0.96667, std: 0.04472, params: {'n_neighbors': 6, 'weights': 'uniform'}, mean: 0.96667, std: 0.04472, params: {'n_neighbors': 6, 'weights': 'distance'}, mean: 0.96667, std: 0.04472, params: {'n_neighbors': 7, 'weights': 'uniform'}, mean: 0.96667, std: 0.04472, params: {'n_neighbors': 7, 'weights': 'distance'}, mean: 0.96667, std: 0.04472, params: {'n_neighbors': 8, 'weights': 'uniform'}, mean: 0.96667, std: 0.04472, params: {'n_neighbors': 8, 'weights': 'distance'}, mean: 0.97333, std: 0.03266, params: {'n_neighbors': 9, 'weights': 'uniform'}, mean: 0.97333, std: 0.03266, params: {'n_neighbors': 9, 'weights': 'distance'}, mean: 0.96667, std: 0.04472, params: {'n_neighbors': 10, 'weights': 'uniform'}, mean: 0.97333, std: 0.03266, params: {'n_neighbors': 10, 'weights': 'distance'}, mean: 0.96667, std: 0.04472, params: {'n_neighbors': 11, 'weights': 'uniform'}, mean: 0.97333, std: 0.03266, params: {'n_neighbors': 11, 'weights': 'distance'}, mean: 0.97333, std: 0.03266, params: {'n_neighbors': 12, 'weights': 'uniform'}, mean: 0.97333, std: 0.04422, params: {'n_neighbors': 12, 'weights': 'distance'}, mean: 0.98000, std: 0.03055, params: {'n_neighbors': 13, 'weights': 'uniform'}, mean: 0.97333, std: 0.03266, params: {'n_neighbors': 13, 'weights': 'distance'}, mean: 0.97333, std: 0.04422, params: {'n_neighbors': 14, 'weights': 'uniform'}, mean: 0.97333, std: 0.03266, params: {'n_neighbors': 14, 'weights': 'distance'}, mean: 0.97333, std: 0.03266, params: {'n_neighbors': 15, 'weights': 'uniform'}, mean: 0.98000, std: 0.03055, params: {'n_neighbors': 15, 'weights': 'distance'}, mean: 0.97333, std: 0.03266, params: {'n_neighbors': 16, 'weights': 'uniform'}, mean: 0.97333, std: 0.03266, params: {'n_neighbors': 16, 'weights': 'distance'}, mean: 0.97333, std: 0.03266, params: {'n_neighbors': 17, 'weights': 'uniform'}, mean: 0.98000, std: 0.03055, params: {'n_neighbors': 17, 'weights': 'distance'}, mean: 0.98000, std: 0.03055, params: {'n_neighbors': 18, 'weights': 'uniform'}, mean: 0.97333, std: 0.03266, params: {'n_neighbors': 18, 'weights': 'distance'}, mean: 0.97333, std: 0.03266, params: {'n_neighbors': 19, 'weights': 'uniform'}, mean: 0.98000, std: 0.03055, params: {'n_neighbors': 19, 'weights': 'distance'}, mean: 0.98000, std: 0.03055, params: {'n_neighbors': 20, 'weights': 'uniform'}, mean: 0.96667, std: 0.04472, params: {'n_neighbors': 20, 'weights': 'distance'}, mean: 0.96667, std: 0.03333, params: {'n_neighbors': 21, 'weights': 'uniform'}, mean: 0.96667, std: 0.04472, params: {'n_neighbors': 21, 'weights': 'distance'}, mean: 0.96667, std: 0.03333, params: {'n_neighbors': 22, 'weights': 'uniform'}, mean: 0.96667, std: 0.04472, params: {'n_neighbors': 22, 'weights': 'distance'}, mean: 0.97333, std: 0.03266, 
params: {'n_neighbors': 23, 'weights': 'uniform'}, mean: 0.97333, std: 0.03266, params: {'n_neighbors': 23, 'weights': 'distance'}, mean: 0.96000, std: 0.04422, params: {'n_neighbors': 24, 'weights': 'uniform'}, mean: 0.97333, std: 0.03266, params: {'n_neighbors': 24, 'weights': 'distance'}, mean: 0.96667, std: 0.03333, params: {'n_neighbors': 25, 'weights': 'uniform'}, mean: 0.97333, std: 0.03266, params: {'n_neighbors': 25, 'weights': 'distance'}, mean: 0.96000, std: 0.04422, params: {'n_neighbors': 26, 'weights': 'uniform'}, mean: 0.96667, std: 0.04472, params: {'n_neighbors': 26, 'weights': 'distance'}, mean: 0.96667, std: 0.04472, params: {'n_neighbors': 27, 'weights': 'uniform'}, mean: 0.98000, std: 0.03055, params: {'n_neighbors': 27, 'weights': 'distance'}, mean: 0.95333, std: 0.04269, params: {'n_neighbors': 28, 'weights': 'uniform'}, mean: 0.97333, std: 0.03266, params: {'n_neighbors': 28, 'weights': 'distance'}, mean: 0.95333, std: 0.04269, params: {'n_neighbors': 29, 'weights': 'uniform'}, mean: 0.97333, std: 0.03266, params: {'n_neighbors': 29, 'weights': 'distance'}, mean: 0.95333, std: 0.04269, params: {'n_neighbors': 30, 'weights': 'uniform'}, mean: 0.96667, std: 0.03333, params: {'n_neighbors': 30, 'weights': 'distance'}]
0.98
{'n_neighbors': 13, 'weights': 'uniform'}

Process finished with exit code 0
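matplotlib is imported in the script above but never used. As a minimal sketch (appended to the script above, and assuming a scikit-learn version that already provides cv_results_), the mean cross-validated accuracy can be plotted against n_neighbors for each weighting scheme:

import numpy as np

res = grid.cv_results_
ks = np.asarray(res['param_n_neighbors'], dtype=int)   # n_neighbors value of each candidate
scores = res['mean_test_score']                        # mean CV accuracy of each candidate
for w in weights:
    mask = np.asarray(res['param_weights']) == w       # candidates using this weighting scheme
    plt.plot(ks[mask], scores[mask], marker='o', label=w)
plt.xlabel('n_neighbors')
plt.ylabel('mean cross-validated accuracy')
plt.legend()
plt.show()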

 

GridSearchCV can be computationally very expensive, because every parameter combination is evaluated with cross-validation.

RandomizedSearchCV can be used to reduce the computational cost:

from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV

iris = load_iris()
X = iris.data
y = iris.target

# candidate values for the two hyperparameters
k_range = range(1, 31)
weights = ['uniform', 'distance']
param_grid = dict(n_neighbors=k_range, weights=weights)

knn = KNeighborsClassifier(n_neighbors=5)

# grid = GridSearchCV(knn, param_grid, cv=10, scoring='accuracy')
# grid.fit(X, y)
# print(grid.grid_scores_)
# print(grid.best_score_)
# print(grid.best_params_)

# RandomizedSearchCV samples a fixed number of parameter settings (n_iter, default 10)
# instead of trying every combination
rand = RandomizedSearchCV(knn, param_grid, cv=10, scoring='accuracy')
rand.fit(X, y)
print(rand.grid_scores_)   # deprecated; use rand.cv_results_ in newer versions
print(rand.best_score_)
print(rand.best_params_)
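One caveat about the randomized search above: by default it evaluates only n_iter=10 randomly sampled parameter settings, which is exactly where the savings over the full grid come from, so it may miss the true optimum. Passing random_state makes the sampling reproducible, and numeric parameters can be drawn from a distribution (e.g. scipy.stats.randint) instead of being listed explicitly. A small sketch continuing the script above (the specific n_iter and random_state values here are illustrative):

from scipy.stats import randint

# draw n_neighbors from the discrete uniform distribution over [1, 30]
param_dist = {'n_neighbors': randint(1, 31), 'weights': ['uniform', 'distance']}
rand = RandomizedSearchCV(knn, param_dist, n_iter=10, cv=10,
                          scoring='accuracy', random_state=5)
rand.fit(X, y)
print(rand.best_score_, rand.best_params_)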

 
