Grid search is an exhaustive search over specified parameter values: the parameters of an estimator are optimized via cross-validation to find the best-performing learning algorithm.
In other words, the candidate values of each parameter are combined, and every possible combination is enumerated to form a "grid". Each combination is then used to train the SVM, and its performance is evaluated with cross-validation. After the fitting function has tried every parameter combination, it returns a suitable classifier, automatically adjusted to the best combination; the chosen parameter values can be read from `clf.best_params_`.
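As a minimal sketch of this workflow (the iris data, the `C`/`kernel` candidate values, and the 5-fold setting below are illustrative assumptions, not from the original article):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Candidate values for each parameter; their Cartesian product forms the "grid"
param_grid = {'C': [0.1, 1, 10], 'kernel': ['linear', 'rbf']}

# Every combination is trained and scored with 5-fold cross-validation
clf = GridSearchCV(SVC(), param_grid, cv=5)
clf.fit(X, y)

print(clf.best_params_)   # e.g. {'C': 10, 'kernel': 'rbf'}
```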
Cross-validation and grid search are two very important and fundamental concepts in machine learning, but they are not easy to understand and master when you are just getting started. When I began learning, my own grasp of them was shaky, so this article sorts out these two basic concepts.
Grid search (Grid Search) sounds rather grand, but put simply it means: you manually list the candidate values for every parameter you want to tune, and the program exhaustively runs the model with each combination of them. In a decision tree we often treat the maximum tree depth as the parameter to tune; in AdaBoost, the number of weak classifiers.
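Concretely, such grids are usually written as dictionaries mapping a parameter name to its candidate values; the value lists below are arbitrary examples, not from the article:

```python
# Hypothetical tuning grids; the candidate values are arbitrary examples
param_grid_tree = {'max_depth': [2, 4, 6, 8, 10]}   # decision tree: maximum depth
param_grid_ada = {'n_estimators': [50, 100, 200]}   # AdaBoost: number of weak classifiers
```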
To determine which of the manually specified candidate values is best, we need a reasonable scoring method (chosen according to the actual problem; it might be accuracy, F1-score, F-beta, precision, recall, and so on).
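In scikit-learn such a scoring method can be wrapped into a scorer object with `make_scorer`; the example below, including its beta value, is a sketch rather than part of the original article:

```python
from sklearn.metrics import make_scorer, accuracy_score, fbeta_score

# Plain accuracy as the tuning criterion
acc_scorer = make_scorer(accuracy_score)

# F-beta: beta < 1 weights precision more heavily, beta > 1 weights recall
fbeta_scorer = make_scorer(fbeta_score, beta=0.5)
```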
Even with a good scoring method, can a single run show that one parameter combination is better than another? Clearly not; that would be unrigorous. As our primary-school teachers taught us, we should take an average. This is where cross-validation comes in. Below, K-fold cross-validation is used as an example to introduce the concept.
In K-fold cross-validation, the training data is split into K folds of equal size; in each round, one fold is held out for validation while the model is trained on the remaining K-1 folds. The whole process is repeated K times, the K scores computed with the previously chosen scoring method are averaged, and the parameter combination with the highest average score is picked out. This completes the cross-validation process.
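A minimal sketch of this averaging, assuming scikit-learn's `cross_val_score`, the iris data, and K=5 (all illustrative choices):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# For each candidate max_depth, K=5 scores are produced (one per held-out fold)
# and averaged; the depth with the highest average wins
for depth in [2, 4, 6]:
    scores = cross_val_score(DecisionTreeClassifier(max_depth=depth), X, y, cv=5)
    print(depth, scores.mean())
```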
### Example

Below, a simple example (predicting whether annual income exceeds $50,000) illustrates the use of grid search and cross-validation. The dataset comes from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Census+Income).
```python
import numpy as np
import pandas as pd
from IPython.display import display
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import make_scorer, fbeta_score, accuracy_score
from sklearn.model_selection import GridSearchCV, KFold
%matplotlib inline

data = pd.read_csv("census.csv")

# Split the data into features and labels
income_raw = data['income']
features_raw = data.drop('income', axis=1)

# Show part of the data
# display(features_raw.head(n=1))

# capital-gain and capital-loss are highly skewed in the raw data,
# so apply a log transform
skewed = ['capital-gain', 'capital-loss']
features_raw[skewed] = data[skewed].apply(lambda x: np.log(x + 1))

# Normalize the numerical features so that all features are treated equally
scaler = MinMaxScaler()
numerical = ['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']
features_raw[numerical] = scaler.fit_transform(data[numerical])
# display(features_raw.head(n=1))

# One-hot encode the non-numerical features
features = pd.get_dummies(features_raw)
income = income_raw.replace(['>50K', '<=50K'], [1, 0])

# Split the dataset
X_train, X_test, y_train, y_test = train_test_split(features, income, test_size=0.2, random_state=0)

# Candidate classifiers (only the decision tree is tuned below)
# AdaBoost
from sklearn.ensemble import AdaBoostClassifier
clf_Ada = AdaBoostClassifier(random_state=0)
# Decision tree
from sklearn.tree import DecisionTreeClassifier
clf_Tree = DecisionTreeClassifier(random_state=0)
# KNN
from sklearn.neighbors import KNeighborsClassifier
clf_KNN = KNeighborsClassifier()
# SVM
from sklearn.svm import SVC
clf_svm = SVC(random_state=0)
# Logistic regression
from sklearn.linear_model import LogisticRegression
clf_log = LogisticRegression(random_state=0)
# Random forest
from sklearn.ensemble import RandomForestClassifier
clf_forest = RandomForestClassifier(random_state=0)
# GBDT
from sklearn.ensemble import GradientBoostingClassifier
clf_gbdt = GradientBoostingClassifier(random_state=0)
# GaussianNB
from sklearn.naive_bayes import GaussianNB
clf_NB = GaussianNB()

scorer = make_scorer(accuracy_score)

# Parameter tuning
kfold = KFold(n_splits=10)

# Decision tree
parameter_tree = {'max_depth': list(range(1, 10))}
grid = GridSearchCV(clf_Tree, parameter_tree, scoring=scorer, cv=kfold)
grid = grid.fit(X_train, y_train)

print("best score: {}".format(grid.best_score_))
display(pd.DataFrame(grid.cv_results_).T)
```
Output:

```
best score: 0.855737070514
```

Transposed `cv_results_` (columns 0 to 8 correspond to `max_depth` = 1 to 9):

|  | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|---|
mean_fit_time | 0.0562535 | 0.0692133 | 0.0885126 | 0.110233 | 0.128337 | 0.158719 | 0.17124 | 0.193637 | 0.223979 |
mean_score_time | 0.00240474 | 0.00228212 | 0.00221529 | 0.0026047 | 0.00226772 | 0.00254297 | 0.00231481 | 0.00246696 | 0.00256622 |
mean_test_score | 0.75114 | 0.823811 | 0.839345 | 0.839926 | 0.846671 | 0.852392 | 0.851508 | 0.853139 | 0.855737 |
mean_train_score | 0.75114 | 0.82421 | 0.839628 | 0.840503 | 0.847878 | 0.853329 | 0.855264 | 0.859202 | 0.863667 |
param_max_depth | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
params | {'max_depth': 1} | {'max_depth': 2} | {'max_depth': 3} | {'max_depth': 4} | {'max_depth': 5} | {'max_depth': 6} | {'max_depth': 7} | {'max_depth': 8} | {'max_depth': 9} |
rank_test_score | 9 | 8 | 7 | 6 | 5 | 3 | 4 | 2 | 1 |
split0_test_score | 0.760641 | 0.8267 | 0.843836 | 0.844666 | 0.851575 | 0.855721 | 0.855445 | 0.86042 | 0.859038 |
split0_train_score | 0.750084 | 0.823828 | 0.839184 | 0.83943 | 0.847538 | 0.852913 | 0.854295 | 0.859947 | 0.863233 |
split1_test_score | 0.758154 | 0.821172 | 0.839138 | 0.842454 | 0.845218 | 0.849641 | 0.847706 | 0.850746 | 0.852957 |
split1_train_score | 0.750361 | 0.824442 | 0.839706 | 0.845911 | 0.850088 | 0.854203 | 0.855831 | 0.861482 | 0.864984 |
split2_test_score | 0.754837 | 0.824212 | 0.840243 | 0.84052 | 0.8466 | 0.854616 | 0.854339 | 0.854063 | 0.856551 |
split2_train_score | 0.750729 | 0.824718 | 0.839031 | 0.839307 | 0.847323 | 0.852237 | 0.854203 | 0.859578 | 0.86397 |
split3_test_score | 0.73162 | 0.820619 | 0.838032 | 0.838308 | 0.8466 | 0.850746 | 0.848535 | 0.846877 | 0.852957 |
split3_train_score | 0.753309 | 0.824503 | 0.839829 | 0.840106 | 0.848337 | 0.853742 | 0.85537 | 0.858104 | 0.863171 |
split4_test_score | 0.746545 | 0.818684 | 0.83361 | 0.833886 | 0.83969 | 0.847982 | 0.845495 | 0.85047 | 0.848811 |
split4_train_score | 0.751651 | 0.824718 | 0.840321 | 0.840597 | 0.844897 | 0.853558 | 0.858319 | 0.861912 | 0.864922 |
split5_test_score | 0.754284 | 0.826147 | 0.844942 | 0.845218 | 0.854063 | 0.859038 | 0.85738 | 0.858209 | 0.861802 |
split5_train_score | 0.750791 | 0.823889 | 0.839061 | 0.839338 | 0.847323 | 0.852729 | 0.854111 | 0.856967 | 0.862741 |
split6_test_score | 0.754284 | 0.825318 | 0.838032 | 0.837756 | 0.845495 | 0.848535 | 0.848535 | 0.852128 | 0.857103 |
split6_train_score | 0.750791 | 0.823981 | 0.839829 | 0.840167 | 0.848429 | 0.853773 | 0.855647 | 0.857766 | 0.863141 |
split7_test_score | 0.749793 | 0.821399 | 0.835499 | 0.835499 | 0.844623 | 0.85264 | 0.852087 | 0.853746 | 0.85264 |
split7_train_score | 0.75129 | 0.824416 | 0.840111 | 0.840418 | 0.848372 | 0.853501 | 0.854945 | 0.860811 | 0.863882 |
split8_test_score | 0.753387 | 0.826375 | 0.838264 | 0.83854 | 0.84407 | 0.852087 | 0.852917 | 0.852364 | 0.858446 |
split8_train_score | 0.750891 | 0.823864 | 0.839803 | 0.84008 | 0.848372 | 0.853071 | 0.854945 | 0.857801 | 0.863391 |
split9_test_score | 0.747857 | 0.827481 | 0.841858 | 0.842411 | 0.84877 | 0.852917 | 0.85264 | 0.852364 | 0.857064 |
split9_train_score | 0.751505 | 0.823741 | 0.839404 | 0.839681 | 0.848096 | 0.853563 | 0.854975 | 0.857647 | 0.863237 |
std_fit_time | 0.0123583 | 0.00442788 | 0.00552026 | 0.00631691 | 0.0053195 | 0.0157011 | 0.00476991 | 0.00622854 | 0.0147429 |
std_score_time | 0.000529214 | 0.000467091 | 0.000355028 | 0.000760624 | 0.000460829 | 0.000504627 | 0.000446289 | 0.000445256 | 0.000449312 |
std_test_score | 0.00769898 | 0.00292464 | 0.00333118 | 0.00358776 | 0.00382496 | 0.00324406 | 0.00360414 | 0.00366389 | 0.00363761 |
std_train_score | 0.000855482 | 0.000366166 | 0.000418973 | 0.00185264 | 0.00124698 | 0.000553171 | 0.00116151 | 0.00168732 | 0.000726325 |
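As a follow-up sketch (not part of the original output, and reusing `grid`, `X_test`, and `y_test` from the code above): the winning parameter combination and the refitted best model can be read off the fitted grid and checked against the held-out test set.

```python
# Best parameter combination found by the search
print(grid.best_params_)   # here: {'max_depth': 9}, rank 1 in the table above

# best_estimator_ is the classifier refitted on all of X_train with the
# best parameters (GridSearchCV does this automatically, since refit=True by default)
best_clf = grid.best_estimator_
print(accuracy_score(y_test, best_clf.predict(X_test)))
```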