Commonly used sklearn features

Contents

1. Randomized search (RandomizedSearchCV)

2. Grid search (GridSearchCV)

3. Learning curve (learning_curve) parameters explained

4. KFold and StratifiedKFold



1. Randomized search (RandomizedSearchCV)

import xgboost
from sklearn.model_selection import RandomizedSearchCV
from sklearn.multioutput import MultiOutputRegressor

xgb = MultiOutputRegressor(xgboost.XGBRegressor())
# Multi-output model: prefix every searched parameter name with "estimator__"
params = {
    "estimator__max_depth": [j for j in range(2, 16, 2)],
    "estimator__learning_rate": [0.06, 0.08, 0.1, 0.12],
    "estimator__n_estimators": [i for i in range(40, 200, 20)],
}
xgb_best = RandomizedSearchCV(xgb, params, cv=3, n_jobs=-1)
xgb_best.fit(x_train_scaler, y_train)
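The snippet above assumes xgboost plus pre-scaled training data. Below is a self-contained sketch of the same pattern, substituting sklearn's own RandomForestRegressor for XGBRegressor and generated toy data (both are assumptions for illustration only); the "estimator__" prefix works the same way for any MultiOutputRegressor:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV
from sklearn.multioutput import MultiOutputRegressor

# Toy multi-output regression data (2 targets).
X, y = make_regression(n_samples=100, n_features=5, n_targets=2, random_state=0)

model = MultiOutputRegressor(RandomForestRegressor(random_state=0))
# Parameters of the wrapped estimator carry the "estimator__" prefix.
params = {
    "estimator__max_depth": [2, 4, 6],
    "estimator__n_estimators": [10, 20],
}
search = RandomizedSearchCV(model, params, n_iter=4, cv=3, random_state=0)
search.fit(X, y)

print(search.best_params_)  # best hyper-parameter combination found
```

RandomizedSearchCV samples `n_iter` candidates from the parameter space instead of trying every combination, which is why it is usually cheaper than GridSearchCV on large grids.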

2. Grid search (GridSearchCV)

import lightgbm
from sklearn.model_selection import GridSearchCV
from sklearn.multioutput import MultiOutputRegressor

estimators = MultiOutputRegressor(lightgbm.LGBMRegressor())
param_grid = {
    "estimator__learning_rate": [0.06, 0.08, 0.1, 0.12],
    "estimator__n_estimators": [i for i in range(40, 200, 20)],
}
gbm = GridSearchCV(estimators, param_grid, cv=5)
gbm.fit(x_train_scaler, y_train)

# Parameters of the best model
gbm.best_params_

3. Learning curve (learning_curve) parameters explained

A learning curve plots the training-set and cross-validation scores at a series of training-set sizes. It shows how the model performs on unseen data, which lets you judge whether the model suffers from high variance or high bias, and whether adding more training data would reduce overfitting.

import numpy as np
from sklearn.model_selection import learning_curve

# train_sizes must be array-like; a bare scalar such as 0.2 is not accepted.
train_sizes, train_scores, test_scores = learning_curve(
        estimator, X, y, cv=3, n_jobs=-1, train_sizes=np.linspace(0.1, 1.0, 5))
# estimator: the actual classifier (or regressor) fitted on the data
# X and y: the features and labels, respectively
# train_sizes (argument): the training-set sizes at which the curve is evaluated
# train_scores: training scores at each size, one column per CV fold
# test_scores: cross-validation scores at each size, one column per CV fold
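To make the return values concrete, the sketch below runs learning_curve on sklearn's bundled iris data with a DecisionTreeClassifier (both chosen here only to keep the example self-contained): each score array has one row per training size and one column per CV fold:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import learning_curve
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Five training sizes, from 10% to 100% of the CV training folds.
train_sizes, train_scores, test_scores = learning_curve(
    DecisionTreeClassifier(random_state=0), X, y,
    cv=3, train_sizes=np.linspace(0.1, 1.0, 5))

print(train_sizes)          # absolute number of samples at each curve point
print(train_scores.shape)   # (5, 3): 5 sizes x 3 CV folds
print(test_scores.shape)    # (5, 3)
```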

# Commonly wrapped in a helper function like this:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import learning_curve

def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
                        n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
    plt.figure()
    plt.title(title)
    if ylim is not None:
        plt.ylim(*ylim)
    plt.xlabel("Training examples")
    plt.ylabel("Score")
    train_sizes, train_scores, test_scores = learning_curve(
        estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)

    # Each row holds one score per CV fold; reduce to mean and standard deviation
    train_scores_mean = np.mean(train_scores, axis=1)
    train_scores_std = np.std(train_scores, axis=1)
    test_scores_mean = np.mean(test_scores, axis=1)
    test_scores_std = np.std(test_scores, axis=1)

    print(train_scores_mean)
    print(test_scores_mean)

    plt.grid()

    # Shaded band around each line: mean +/- one standard deviation
    plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
                     train_scores_mean + train_scores_std, alpha=0.1,
                     color="r")
    plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
                     test_scores_mean + test_scores_std, alpha=0.1, color="g")

    # Plot the mean scores; the x-axis is the training-set size
    plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
             label="Training score")
    plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
             label="Cross-validation score")

    plt.legend(loc="best")
    return plt

4. KFold and StratifiedKFold

For detailed parameters and examples, see the Zhihu post 每天一点sklearn之KFold(9.8).
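A minimal sketch of the difference (the data here is made up for illustration): KFold splits purely by index order, while StratifiedKFold additionally preserves the class ratio in every fold:

```python
import numpy as np
from sklearn.model_selection import KFold, StratifiedKFold

X = np.arange(12).reshape(12, 1)
y = np.array([0] * 8 + [1] * 4)   # imbalanced labels: 8 zeros, 4 ones

# Plain KFold ignores y, so a fold's class ratio can drift badly.
for train_idx, test_idx in KFold(n_splits=4).split(X):
    print("KFold test labels:          ", y[test_idx])

# StratifiedKFold keeps the overall 2:1 class ratio in every fold.
for train_idx, test_idx in StratifiedKFold(n_splits=4).split(X, y):
    print("StratifiedKFold test labels:", y[test_idx])
```

With the ordered labels above, plain KFold produces some all-zero test folds and one all-one fold, while every StratifiedKFold test fold contains two zeros and one one; this is why StratifiedKFold is the usual choice for classification with imbalanced classes.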
