LightGBM documentation (English): https://lightgbm.readthedocs.io/en/latest/index.html
LightGBM documentation (Chinese): https://lightgbm.apachecn.org/
For the ideas behind LightGBM, see 深入理解LightGBM: https://blog.csdn.net/program_developer/article/details/103838846
LightGBM is a fast, distributed, high-performance gradient boosting framework based on decision trees. It can be used for ranking, classification, regression, and other machine learning tasks.
The following takes the scikit-learn API class LGBMRegressor as an example.
# Scikit-learn API
class lightgbm.LGBMRegressor(boosting_type='gbdt', num_leaves=31, max_depth=-1,
learning_rate=0.1, n_estimators=10, max_bin=255, subsample_for_bin=200000, objective=None,
min_split_gain=0.0, min_child_weight=0.001, min_child_samples=20, subsample=1.0,
subsample_freq=1, colsample_bytree=1.0, reg_alpha=0.0, reg_lambda=0.0, random_state=None,
n_jobs=-1, silent=True, **kwargs)
boosting_type (string, optional (default='gbdt')) – 'gbdt', traditional Gradient Boosting Decision Tree. 'dart', Dropouts meet Multiple Additive Regression Trees. 'goss', Gradient-based One-Side Sampling. 'rf', Random Forest.
The default is usually a good choice.
num_leaves (int, optional (default=31)) – Maximum tree leaves for base learners.
Maximum number of leaves per base learner. LightGBM grows trees leaf-wise, so tree complexity is controlled mainly through num_leaves; its value should be kept below 2^(max_depth) (see the sketch after this parameter list).
max_depth (int, optional (default=-1)) – Maximum tree depth for base learners, -1 means no limit.
Maximum depth of each base learner. When the model overfits, lowering max_depth is a good first step.
learning_rate (float, optional (default=0.1)) – Boosting learning rate.
Step size of each boosting step. Commonly used values: 0.1, 0.001, 0.003.
n_estimators (int, optional (default=10)) – Number of boosted trees to fit.
Number of base learners (boosted trees).
max_bin (int, optional (default=255)) – Number of bucketed bins for feature values.
Maximum number of bins used to bucket feature values; this corresponds to the number of histogram bins k.
subsample_for_bin (int, optional (default=200000)) – Number of samples for constructing bins.
Number of samples used to construct the histogram bins.
objective (string, callable or None, optional (default=None)) – Specify the learning task and the corresponding learning objective or a custom objective function to be used. default: 'regression' for LGBMRegressor, 'binary' or 'multiclass' for LGBMClassifier, 'lambdarank' for LGBMRanker.
min_split_gain (= min_gain_to_split) (float, optional (default=0.)) – Minimum loss reduction required to make a further partition on a leaf node of the tree.
Minimum gain required to perform a split.
min_child_weight (= min_sum_hessian_in_leaf) (float, optional (default=1e-3)) – Minimum sum of instance weight (hessian) needed in a child (leaf).
Minimum sum of instance weights (hessians) in a leaf. If a split would produce a leaf whose hessian sum falls below this threshold, the split is not made; for linear objectives this roughly corresponds to a minimum number of samples per leaf. Larger values keep the model from fitting overly local patterns and help prevent overfitting, but a value that is too high leads to underfitting.
min_child_samples (= min_data_in_leaf) (int, optional (default=20)) – Minimum number of data needed in a child (leaf).
Minimum number of samples in a leaf; increasing it helps prevent overfitting.
subsample (= bagging_fraction) (float, optional (default=1.)) – Subsample ratio of the training instances.
Fraction of training instances sampled, without replacement, for each tree. Decreasing it makes the algorithm more conservative, helps avoid overfitting, and speeds up training, but a value that is too small can cause underfitting.
subsample_freq (= bagging_freq) (int, optional (default=1)) – Frequency of subsampling, <= 0 means disabled.
Bagging frequency: 0 disables bagging, and k means bagging is performed every k iterations.
colsample_bytree (= feature_fraction) (float, optional (default=1.)) – Subsample ratio of columns when constructing each tree.
Fraction of columns (features) randomly sampled for each tree. Lowering it helps prevent overfitting and speeds up training. Typical values are 0.5–1 within the range (0, 1]; around 0.8 is a common choice.
reg_alpha (= lambda_l1) (float, optional (default=0.)) – L1 regularization term on weights.
Weight of the L1 regularization term; larger values make the model more conservative, helping to prevent overfitting and improve generalization.
reg_lambda (= lambda_l2) (float, optional (default=0.)) – L2 regularization term on weights.
Weight of the L2 regularization term; larger values make the model more conservative, helping to prevent overfitting and improve generalization.
random_state (int or None, optional (default=None)) – Random number seed. Will use default seeds in C++ code if set to None.
Computers cannot produce truly random numbers, only pseudo-random ones, i.e. numbers generated by a deterministic rule; using the same random seed therefore reproduces the same random sequence and the same results.
n_jobs (int, optional (default=-1)) – Number of parallel threads.
Number of threads: trees can be built and predictions computed in parallel across CPU cores. Communication overhead means k threads rarely give a full k-fold speed-up, but parallelism still helps considerably when many trees, or complex trees, need to be built.
silent (bool, optional (default=True)) – Whether to print messages while running boosting.
Whether to print progress messages during training.
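As a rough illustration of the num_leaves / max_depth relationship mentioned above, here is a minimal sketch; the specific numbers are assumptions chosen only for illustration:
from lightgbm import LGBMRegressor
# Illustrative values only: with max_depth = 6 a fully grown tree has at most
# 2**6 = 64 leaves, so num_leaves is kept below that bound
max_depth = 6
num_leaves = 50
assert num_leaves < 2 ** max_depth
model = LGBMRegressor(max_depth=max_depth, num_leaves=num_leaves)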
Parameter tuning for leaf-wise (best-first) trees
For faster training speed
For better accuracy
Deal with over-fitting (a sketch follows below)
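As a hedged illustration of the over-fitting direction, the settings below are only an assumed starting point built from the parameters discussed above, not an official recipe:
from lightgbm import LGBMRegressor
# Assumed, illustrative values that push each knob toward less over-fitting
conservative_model = LGBMRegressor(
    num_leaves=20,           # fewer leaves -> simpler trees
    max_depth=5,             # cap tree depth
    min_child_samples=50,    # require more data per leaf
    subsample=0.8,           # row subsampling (bagging_fraction)
    subsample_freq=1,        # perform bagging every iteration
    colsample_bytree=0.8,    # column subsampling (feature_fraction)
    reg_alpha=0.1,           # L1 regularization
    reg_lambda=0.1)          # L2 regularization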
XGBoost and LightGBM parameters and tuning: https://www.jianshu.com/p/1100e333fcab
A summary of the LightGBM algorithm: https://blog.csdn.net/weixin_39807102/article/details/81912566
Notes on tuning LightGBM: https://blog.csdn.net/u012735708/article/details/83749703
Automatic LightGBM tuning with Hyperopt: https://zhuanlan.zhihu.com/p/52660316
Parameter tuning with Hyperopt: https://www.jianshu.com/p/35eed1567463
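Following the Hyperopt links above, a minimal sketch of automatic tuning might look like this; the search space, the 3-fold cross-validated accuracy objective, and the number of evaluations are assumptions for illustration, not a prescription:
import numpy as np
from hyperopt import fmin, tpe, hp
from lightgbm import LGBMClassifier
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score

iris = load_iris()

def objective(space):
    # Hyperopt minimizes the objective, so return the negative
    # cross-validated accuracy of one candidate parameter set
    model = LGBMClassifier(
        learning_rate=space['learning_rate'],
        num_leaves=int(space['num_leaves']),
        n_estimators=100)
    score = cross_val_score(model, iris.data, iris.target, cv=3).mean()
    return -score

# Assumed, illustrative search space
space = {
    'learning_rate': hp.loguniform('learning_rate', np.log(0.01), np.log(0.3)),
    'num_leaves': hp.quniform('num_leaves', 15, 63, 1),
}
best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=30)
print(best)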
LightGBM native API
import lightgbm as lgb
from sklearn import datasets
from sklearn.model_selection import train_test_split
import numpy as np
from sklearn.metrics import accuracy_score
# Load the data
iris = datasets.load_iris()
# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.3)
# Convert to LightGBM's Dataset format
train_data = lgb.Dataset(X_train, label=y_train)
validation_data = lgb.Dataset(X_test, label=y_test)
# Put the parameters in a dict
params = {
    'learning_rate': 0.1,
    'lambda_l1': 0.1,
    'lambda_l2': 0.2,
    'max_depth': 4,
    'objective': 'multiclass',
    'num_class': 3,
}
# Train the model
gbm = lgb.train(params, train_data, valid_sets=[validation_data])
# Predict: with a multiclass objective, predict() returns one probability per class,
# so take the argmax over classes to recover the predicted labels
y_pred = np.argmax(gbm.predict(X_test), axis=1)
# Evaluate
print(accuracy_score(y_test, y_pred))
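The trained Booster from the native API can also be persisted; save_model and lgb.Booster(model_file=...) are standard LightGBM calls, and the file name here is only an example:
# Save the trained booster to a text file and load it back
gbm.save_model('lgb_model.txt')
loaded_gbm = lgb.Booster(model_file='lgb_model.txt')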
scikit-learn API
from lightgbm import LGBMRegressor, early_stopping
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
import joblib  # sklearn.externals.joblib was removed from newer scikit-learn versions
# Load the data
iris = load_iris()
data = iris.data
target = iris.target
# Split into training and test data
X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.2)
# Train the model
gbm = LGBMRegressor(objective='regression', num_leaves=31, learning_rate=0.05, n_estimators=20)
gbm.fit(X_train, y_train, eval_set=[(X_test, y_test)], eval_metric='l1',
        callbacks=[early_stopping(stopping_rounds=5)])  # replaces the old early_stopping_rounds argument
# Save the model
joblib.dump(gbm, 'loan_model.pkl')
# Load the model
gbm = joblib.load('loan_model.pkl')
# Predict
y_pred = gbm.predict(X_test, num_iteration=gbm.best_iteration_)
# Evaluate
print('The rmse of prediction is:', mean_squared_error(y_test, y_pred) ** 0.5)
# Feature importances
print('Feature importances:', list(gbm.feature_importances_))
# Grid search for parameter tuning
estimator = LGBMRegressor(num_leaves=31)
param_grid = {
    'learning_rate': [0.01, 0.1, 1],
    'n_estimators': [20, 40]
}
gbm = GridSearchCV(estimator, param_grid)
gbm.fit(X_train, y_train)
print('Best parameters found by grid search are:', gbm.best_params_)
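Note that GridSearchCV refits the model with the best parameter combination on the full training data by default (refit=True), so gbm.best_estimator_ can be used directly for prediction afterwards.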