Ensemble Models (3): The Core Principles of XgBoost and a Python Implementation

The Core Principles of XgBoost and a Python Implementation

      • 1 Basic idea
        • 1.1 Deriving and optimizing the objective function
        • 1.2 Splitting internal nodes
      • 2. Summary
      • 3. Python implementation
        • 3.1 Implementing the base learner
        • 3.2 Implementing the XgBoost regressor
        • 3.3 Implementing the XgBoost classifier

Preface: the implementation code in this article is mainly intended to help with understanding the algorithm; if you spot any mistakes, corrections are welcome.

1 Basic idea

First of all, XgBoost is also a boosted-tree model; compared with traditional GBDT it introduces a number of optimizations. In traditional GBDT, when we train the model of round t we fit the residuals of the first t-1 models on the training set, so that the strong learner's predictions move closer to the true values. XgBoost, LightGBM, CatBoost and the GBDT discussed earlier all share the same core training idea, the forward stagewise algorithm: each round learns exactly one model; the differences lie in how the loss function is optimized and improved.
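
To make the forward stagewise idea concrete, here is a minimal sketch of the additive training loop (my own illustration; fit_one_tree is a hypothetical placeholder for whatever procedure fits the next base learner, which section 3 implements in full):

import numpy as np

def boost(X, y, fit_one_tree, n_rounds=50, lr=0.1):
    y_pred = np.zeros(len(y))                  # F_0(x) = 0
    learners = []
    for t in range(n_rounds):
        f_t = fit_one_tree(X, y, y_pred)       # learn exactly one new model this round
        learners.append(f_t)
        y_pred = y_pred + lr * f_t.predict(X)  # F_t(x) = F_{t-1}(x) + lr * f_t(x)
    return learners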

1.1 Deriving and optimizing the objective function

In GBDT the residual is approximated with the gradient; XgBoost goes one step further and also uses the second derivative to achieve better accuracy. The optimization proceeds as follows:

(0) First assume the base learner has the form $f_k(x)=w_{q(x)}$, where $w_{q(x)}$ is the output value of leaf node $q$ and $q(x)$ maps sample $x$ to the leaf of the $k$-th tree in which it ends up. The strong learner is $F(x)=\sum_{k=1}^K f_k(x)$, and the prediction after round $t$ is $\hat{y}_i^{(t)}=\sum_{k=1}^t f_k(x_i)$. Following the forward stagewise idea, the $t$-th base learner is learned by minimizing the objective $Obj=\sum_{i=1}^n l(y_i,\hat{y}_i)+\sum_{k=1}^K\Omega(f_k)$, where $l(y_i,\hat{y}_i)$ is the loss function and $\sum_{k=1}^K\Omega(f_k)$ is the regularization term describing the complexity of the trees: the smaller it is, the lower the complexity and the better the model generalizes.

A note on the complexity term $\sum_{k=1}^K\Omega(f_k)$: XgBoost's tree complexity has two parts, the number of leaf nodes (L1 regularization) and the square of the leaf output values $w$ (L2 regularization): $\Omega(f)=\gamma T+\frac{1}{2}\lambda\lVert w\rVert^2$, where $T$ is the number of leaves, $w$ the leaf output values, and $\gamma$ and $\lambda$ are hyperparameters we set in advance to keep the number of leaves and the magnitude of the outputs from growing too large, i.e. to avoid overfitting.
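
As a concrete illustration (my own sketch, not part of the article's implementation), the complexity of a single tree can be computed directly from its leaf output values:

import numpy as np

def tree_complexity(leaf_values, gamma, reg_lambda):
    """Omega(f) = gamma * T + 0.5 * lambda * sum(w_j^2)."""
    T = len(leaf_values)  # number of leaves
    return gamma * T + 0.5 * reg_lambda * np.sum(np.square(leaf_values))

# e.g. a tree with 3 leaves outputting [0.5, -1.2, 2.0] and gamma=1, lambda=1
# has complexity 3 + 0.5 * (0.25 + 1.44 + 4.0) = 5.845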

(1) Now, following the forward stagewise idea, we learn the base learner of round t. The first $t-1$ models have already been trained, so the prediction $\hat{y}_i^{(t-1)}$ of the first $t-1$ rounds is known, and so is the complexity of the first $t-1$ trees, which we write as the constant $constant$. The objective for round t is therefore:
$$Obj^{(t)}=\sum_{i=1}^n l\big(y_i,\hat{y}_i^{(t-1)}+f_t(x_i)\big)+\Omega(f_t)+constant$$
(2) Optimizing this objective directly would be complicated, but a complicated function can be Taylor-expanded into a polynomial, and a polynomial is easy to differentiate. Concretely, we expand $l(y_i,\hat{y}_i^{(t-1)}+f_t(x_i))$ around $\hat{y}_i^{(t-1)}$ and keep terms up to second order, so the objective becomes:
$$Obj^{(t)}=\sum_{i=1}^n\Big[l(y_i,\hat{y}_i^{(t-1)})+g_i f_t(x_i)+\frac{1}{2}h_i f_t^2(x_i)\Big]+\Omega(f_t)+constant$$
where $g_i=\frac{\partial l(y_i,\hat{y}_i^{(t-1)})}{\partial\hat{y}_i^{(t-1)}}$ and $h_i=\frac{\partial^2 l(y_i,\hat{y}_i^{(t-1)})}{\partial(\hat{y}_i^{(t-1)})^2}$ are the first- and second-order partial derivatives of the loss with respect to the previous round's prediction.
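
For instance, with the squared-error loss $l=(y_i-\hat{y}_i)^2$ these derivatives are $g_i=-2(y_i-\hat{y}_i)$ and $h_i=2$. The sketch below (my own illustration, mirroring the cal_G_H function of section 3.1) computes them per sample for the two losses used later:

import numpy as np

def grad_hess(y_true, y_pred, loss='squarederror'):
    # per-sample first and second derivatives of the loss w.r.t. the previous prediction
    if loss == 'squarederror':              # l = (y - y_hat)^2
        g = -2 * (y_true - y_pred)
        h = 2 * np.ones_like(y_true, dtype=float)
    elif loss == 'logloss':                 # y_pred is a raw score; p = sigmoid(y_pred)
        p = 1 / (1 + np.exp(-y_pred))
        g = p - y_true
        h = p * (1 - p)
    return g, h
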
(3) Since $l(y_i,\hat{y}_i^{(t-1)})$ and $constant$ in the objective above are known constants that do not affect the optimization, we drop them to simplify the expression, leaving:
$$Obj^{(t)}=\sum_{i=1}^n\Big[g_i f_t(x_i)+\frac{1}{2}h_i f_t^2(x_i)\Big]+\Omega(f_t)$$
Substituting the expressions for $\Omega(f)$ and $f_t(x)$, the objective becomes:
$$Obj^{(t)}=\sum_{i=1}^n\Big[g_i w_{q(x_i)}+\frac{1}{2}h_i w^2_{q(x_i)}\Big]+\gamma T+\frac{1}{2}\lambda\sum_{j=1}^T w_j^2=\sum_{j=1}^T\Big[\Big(\sum_{i\in I_j}g_i\Big)w_j+\frac{1}{2}\Big(\sum_{i\in I_j}h_i+\lambda\Big)w^2_j\Big]+\gamma T$$
where $I_j$ is the set of indices of the samples that fall into leaf $j$, and $w_j$ is the output value of leaf $j$. (A quick note on this last step: originally we sum the per-sample contributions; since each sample ultimately lands in exactly one leaf, we can instead sum over leaves, aggregating within each leaf the contributions $\sum_{i\in I_j}$ of all samples that fall into it.)
(4) Now let $G_j=\sum_{i\in I_j}g_i$ and $H_j=\sum_{i\in I_j}h_i$, so the objective becomes:
$$Obj^{(t)}=\sum_{j=1}^T\Big[G_j w_j+\frac{1}{2}(H_j+\lambda)w^2_j\Big]+\gamma T$$
The objective now has a very simple form: setting its derivative with respect to $w_j$ to zero gives the optimal model $f_t(x)$ of round t, whose leaf output values are:
$$w^*_j=-\frac{G_j}{H_j+\lambda}$$
Substituting $w^*$ back into the objective gives its minimum value:
$$Obj^{(t)}_{min}=-\frac{1}{2}\sum_{j=1}^T\frac{G^2_j}{H_j+\lambda}+\gamma T$$
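
Once the samples of each leaf are known, the optimal leaf weights and the minimal objective follow directly from $G_j$ and $H_j$; a minimal sketch (my own illustration):

import numpy as np

def leaf_weight(G_j, H_j, reg_lambda):
    return -G_j / (H_j + reg_lambda)    # w*_j = -G_j / (H_j + lambda)

def tree_objective(G, H, reg_lambda, gamma):
    # G, H: arrays holding the per-leaf sums of g_i and h_i; the tree has len(G) leaves
    G, H = np.asarray(G, dtype=float), np.asarray(H, dtype=float)
    return -0.5 * np.sum(G**2 / (H + reg_lambda)) + gamma * len(G)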

1.2 Splitting internal nodes

We now know the optimal output of a leaf, but how are the internal nodes of a base learner split? The basic idea is the same as for a regression tree: pick the feature and split point with the highest gain as the best split for the current node. The gain is the objective (i.e. loss) value of the node before splitting minus the objective values of the left and right children after splitting:
$$Gain=\frac{1}{2}\Big[\frac{G^2_R}{H_R+\lambda}+\frac{G^2_L}{H_L+\lambda}-\frac{(G_R+G_L)^2}{H_R+H_L+\lambda}\Big]-\gamma$$
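
A tiny worked example (numbers chosen arbitrarily for illustration): with $\lambda=1$ and $\gamma=0$, a candidate split that puts $G_L=-4$, $H_L=4$ on the left and $G_R=6$, $H_R=6$ on the right yields a positive gain, so it lowers the objective:

def split_gain(G_L, H_L, G_R, H_R, reg_lambda, gamma):
    left  = G_L**2 / (H_L + reg_lambda)
    right = G_R**2 / (H_R + reg_lambda)
    joint = (G_L + G_R)**2 / (H_L + H_R + reg_lambda)
    return 0.5 * (left + right - joint) - gamma

# split_gain(-4, 4, 6, 6, reg_lambda=1, gamma=0) ≈ 3.99
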
In addition, XgBoost uses a pre-sorting mechanism. The basic idea is that, when evaluating the gains of the different values of a feature, the values of that feature are sorted first, with each sample's first and second derivatives ordered accordingly; the gain of each candidate value can then be computed incrementally, simply by adding the derivatives of the samples at that value to $G_L$ (and $H_L$). A sketch of this single-pass scan is given below.
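
This is my own illustration rather than the article's code (the implementation in section 3.1 deliberately skips pre-sorting): sort once by the feature value, then sweep left to right while accumulating $G_L$ and $H_L$, so that every candidate split is scored in constant time.

import numpy as np

def best_split_presorted(x, g, h, reg_lambda=1.0, gamma=0.0):
    # x: one feature column; g, h: per-sample first/second derivatives of the loss
    order = np.argsort(x)
    x, g, h = x[order], g[order], h[order]
    G, H = g.sum(), h.sum()
    G_L = H_L = 0.0
    best_gain, best_split = -np.inf, None
    for i in range(len(x) - 1):
        G_L += g[i]; H_L += h[i]                 # move sample i to the left side
        if x[i] == x[i + 1]:                     # cannot split between equal feature values
            continue
        G_R, H_R = G - G_L, H - H_L
        gain = 0.5 * (G_L**2 / (H_L + reg_lambda) + G_R**2 / (H_R + reg_lambda)
                      - G**2 / (H + reg_lambda)) - gamma
        if gain > best_gain:
            best_gain, best_split = gain, (x[i] + x[i + 1]) / 2
    return best_split, best_gain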

2. Summary

  1. Compared with traditional GBDT, XgBoost uses both the first and the second derivative when optimizing the objective function;
  2. XgBoost adds a regularization term to the objective, which helps prevent overfitting;
  3. It uses pre-sorting for split finding, and a learning rate (shrinkage) is also applied to reduce overfitting (a learning rate can be used in GBDT as well).

3. Python implementation

3.1 Implementing the base learner

I did not implement pre-sorting here, since for a pure-Python implementation it makes little practical difference.

import pandas as pd
import numpy as np
import pygraphviz as pgv

'''Build a regression tree; the node-splitting criterion and the leaf output values are both determined by the loss function'''

# Compute the first- and second-derivative sums of the loss at the current (round t-1) model
# To support another loss function, only this derivative computation needs to change
def cal_G_H(y_true:np.array,y_pred:np.array,loss='squarederror'):
    if loss == 'squarederror':
        G = np.sum(-2*(y_true - y_pred))
        H = np.sum(np.ones(len(y_true))*2)
    elif loss == 'logloss':
        exp_y_pred = np.exp(y_pred)
        G = np.sum(1-y_true-1/(1+exp_y_pred))
        H = np.sum(exp_y_pred/((1+exp_y_pred)**2))
    return G,H

# Compute the gain of the current split
# (the term subtracted at the end plays the role of the per-leaf penalty gamma from the derivation;
#  this implementation passes it in through the reg_alpha argument)
def cal_Gain(G_L,G_R,H_L,H_R,reg_alpha,reg_lambda):
    return (G_L**2/(H_L+reg_lambda)+G_R**2/(H_R+reg_lambda)-(G_L+G_R)**2/((H_L+H_R)+reg_lambda))/2-reg_alpha

# Select the best split feature and split point
def select_best_feature(data:pd.DataFrame,y_true:np.array,y_pred:np.array,reg_alpha=0,reg_lambda=1,loss='squarederror'):
    features = data.columns.tolist()
    best_feat = '' # best split feature
    best_split = -1 # best split point
    max_gain = -np.inf # gain of the best feature/split found so far
    G, H = cal_G_H(y_true, y_pred, loss) # first- and second-derivative sums over all samples before splitting
    for feat in features:
        feat_vals = sorted(data[feat].unique())
        split_vals = [(feat_vals[i]+feat_vals[i+1])/2 for i in np.arange(len(feat_vals)-1)]
        for val in split_vals:
            L_index = data[feat]<val # boolean mask of the samples that go to the left child
            G_L, H_L = cal_G_H(y_true[L_index], y_pred[L_index], loss) # derivative sums of the left-child samples
            cur_gain = cal_Gain(G_L,G-G_L,H_L,H-H_L,reg_alpha,reg_lambda)
            if cur_gain>max_gain:
                max_gain = cur_gain
                best_feat = feat
                best_split = val
    return best_feat, best_split,max_gain

# Return the optimal leaf output value, i.e. the one that minimizes the loss
def cal_best_w(y_true:np.array,y_pred:np.array,reg_lambda,loss='squarederror'):
    G_j, H_j = cal_G_H(y_true,y_pred,loss)
    return -G_j/(H_j+reg_lambda)

# Build the regression tree
def build_treeRegressor(data:pd.DataFrame,y_true:np.array,y_pred:np.array,cur_depth=0,max_depth=3,min_samples_leaf=1,
                        gamma=1,reg_alpha=0,reg_lambda=0,loss='squarederror'):
    '''
    :param data: training set
    :param y_true: ground-truth values
    :param y_pred: predictions of the current model
    :param cur_depth: current depth
    :param max_depth: maximum depth of the tree
    :param min_samples_leaf: minimum number of samples in a leaf
    :param gamma: minimum gain required to make a split
    :param reg_alpha: L1 regularization parameter
    :param reg_lambda: L2 regularization parameter
    :param loss: the loss function to use
    :return: the tree model
    '''
    tree = {}
    # Stop splitting once the maximum depth of the tree is reached
    if cur_depth>=max_depth:
        return {'isLeaf':True,'val':cal_best_w(y_true,y_pred,reg_lambda,loss)}
    best_feat, best_split, max_gain = select_best_feature(data,y_true,y_pred,reg_alpha,reg_lambda,loss)
    # print(best_feat, best_split, max_gain)
    # Do not split if the gain of the best split is below the threshold
    if max_gain < gamma:
        return {'isLeaf': True, 'val': cal_best_w(y_true, y_pred, reg_lambda, loss)}
    L_tree_index = data[best_feat]<best_split
    R_tree_index = data[best_feat]>=best_split
    # Stop splitting if the left or right child would contain fewer samples than min_samples_leaf
    if L_tree_index.sum()<min_samples_leaf or R_tree_index.sum()<min_samples_leaf:
        return {'isLeaf':True,'val':cal_best_w(y_true,y_pred,reg_lambda,loss)}

    tree['isLeaf'] = False
    tree['best_feat'] = best_feat
    tree['best_split'] = best_split
    tree['l_tree'] = build_treeRegressor(data[L_tree_index],y_true[L_tree_index],y_pred[L_tree_index],cur_depth+1,
                                         max_depth,min_samples_leaf,gamma,reg_alpha,reg_lambda,loss)
    tree['r_tree'] = build_treeRegressor(data[R_tree_index],y_true[R_tree_index],y_pred[R_tree_index],cur_depth+1,
                                         max_depth,min_samples_leaf,gamma,reg_alpha,reg_lambda,loss)

    return tree

def predict(tree: dict, data: pd.DataFrame):
    y_pred = np.zeros(len(data))
    for i in np.arange(len(data)):
        tmp_tree = tree
        while not tmp_tree['isLeaf']:
            cur_feat = tmp_tree['best_feat']
            split_val = tmp_tree['best_split']
            if data[cur_feat].iloc[i] < split_val:  # use '<' to match the split rule used when building the tree
                tmp_tree = tmp_tree['l_tree']
            else:
                tmp_tree = tmp_tree['r_tree']
        y_pred[i] = tmp_tree['val']

    return y_pred

def plotTree(A, tree: dict, father_node, depth, label):
    # If the current node is the root
    if depth == 1:
        A.add_node(father_node)
        # If the root is also a leaf, i.e. the tree is just a stump
        if tree['isLeaf']:
            A.add_edge(father_node,tree['val'],label=label)
            return
        else:
            plotTree(A,tree['l_tree'], father_node,depth+1,'<=')
            plotTree(A,tree['r_tree'], father_node,depth+1,'>')
            return
    if tree['isLeaf']:
        A.add_edge(father_node, tree['val'], label=label)
        return
    A.add_edge(father_node, tree['best_feat']+':'+str(tree['best_split']), label=label)
    plotTree(A,tree['l_tree'], tree['best_feat']+':'+str(tree['best_split']), depth+1,'<=')
    plotTree(A,tree['r_tree'], tree['best_feat']+':'+str(tree['best_split']), depth+1,'>')
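
A small usage sketch of this module (the toy data and column names are made up for illustration; save the file as XgBoost/treeRegressor.py if you want the imports in the next two sections to work):

import numpy as np
import pandas as pd

# toy data in which the target roughly follows feature 'x1'
data = pd.DataFrame({'x1': [1., 2., 3., 4., 5., 6.], 'x2': [5., 3., 8., 1., 2., 9.]})
y_true = np.array([1.2, 1.9, 3.1, 3.9, 5.2, 6.1])
y_pred = np.zeros(len(y_true))   # prediction of the "previous rounds" (round 0: all zeros)

tree = build_treeRegressor(data, y_true, y_pred, max_depth=2, gamma=1e-7, reg_lambda=1)
print(predict(tree, data))       # leaf outputs that approximate the residuals y_true - y_pred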

3.2 Implementing the XgBoost regressor

This section implements the regression ensemble and tries it on the Boston house-price data shipped with sklearn.

import numpy as np
import pandas as pd
from XgBoost import treeRegressor
import pygraphviz as pgv
from sklearn.model_selection import train_test_split

'''Build an xgboost regression model using trees as base learners'''

def build_xgboostRegressor(data:pd.DataFrame,y_true:np.array,n=3,max_depth=3,min_samples_leaf=1,gamma=1,
                           reg_alpha=0,reg_lambda=0,loss='squarederror',lr=0.1):
    y_pred = np.zeros(len(data)) # initialize the ensemble prediction to zero
    xgboostRegressor = []
    for i in np.arange(n):
        fn = treeRegressor.build_treeRegressor(data,y_true,y_pred,0,max_depth,min_samples_leaf,gamma,reg_alpha,
                                               reg_lambda,loss)
        xgboostRegressor.append(fn)
        if i==0:
            y_pred += treeRegressor.predict(fn, data) # the first tree is added without shrinkage
        else:
            y_pred += lr*treeRegressor.predict(fn, data) # later trees are shrunk by the learning rate
        # print(y_pred)

    return xgboostRegressor

def predict(xgboostRegressors, data:pd.DataFrame,lr=0.1):
    y_pred = np.zeros(len(data))
    for i,tree in enumerate(xgboostRegressors):
        # y_pred += treeRegressor.predict(tree, data)
        if i==0:
            y_pred += treeRegressor.predict(tree, data)
        else:
            y_pred += lr*treeRegressor.predict(tree, data)
    return y_pred

if __name__ == '__main__':
    from sklearn import datasets
    from sklearn.metrics import mean_squared_error, mean_absolute_error
    X, y = datasets.load_boston(return_X_y=True) # note: load_boston was removed from recent scikit-learn releases; substitute another regression dataset if needed
    X_train, X_test, y_train, y_test = train_test_split(X, y,test_size=0.2, shuffle=True, random_state=2020)
    print('train: {} test: {}'.format(X_train.shape, X_test.shape))
    X_train_df = pd.DataFrame(X_train)
    X_test_df = pd.DataFrame(X_test)
    xgboostRegressors = build_xgboostRegressor(X_train_df, y_train,lr=0.3, gamma=1e-7,max_depth=6,
                                               min_samples_leaf=4, n=50, reg_lambda=1)

    y_pred_train = predict(xgboostRegressors, X_train_df,lr=0.3)
    y_pred_test = predict(xgboostRegressors, X_test_df,lr=0.3)
    print('train mse:{} mae:{}'.format(mean_squared_error(y_train,y_pred_train),mean_absolute_error(y_train,y_pred_train)))
    print('test mse:{} mae:{}'.format(mean_squared_error(y_test,y_pred_test),mean_absolute_error(y_test,y_pred_test)))

Output:

    train mse: 0.01542057648392986 mae: 0.09086041601082673
    test mse: 16.13920446278822 mae: 2.5409239663769387

3.3 Implementing the XgBoost classifier

This section implements the XgBoost classifier and tests it on the breast cancer data shipped with sklearn.

import numpy as np
import pandas as pd
from XgBoost import treeRegressor

'''Build an xgboost classifier'''

def build_xgboostClassifier(data:pd.DataFrame, y_true:np.array,n=3,lr=0.1,max_depth=3,min_samples_leaf=1,gamma=1e-7,
                           reg_alpha=0,reg_lambda=0,loss='logloss'):
    if loss == 'logloss':
        f0 = np.log(np.sum(y_true)/np.sum(1-y_true)) # initialize with the constant score (the log-odds of the positive class) that minimizes the loss
    y_pred = np.ones(len(y_true))*f0
    xgboostClassifiers = []
    xgboostClassifiers.append(f0)
    for i in np.arange(n-1):
        tree = treeRegressor.build_treeRegressor(data,y_true,y_pred,0,max_depth,min_samples_leaf,
                                                 gamma,reg_alpha,reg_lambda,loss)
        y_pred += lr * treeRegressor.predict(tree, data)
        xgboostClassifiers.append(tree)

    return xgboostClassifiers

def predict(xgboostClassifiers, data:pd.DataFrame, lr=0.1):
    fm = xgboostClassifiers[0] # the first element is the constant initial score f0
    fm = np.ones(len(data))*fm
    for i in np.arange(len(xgboostClassifiers)-1):
        # print(treeRegressor.predict(xgboostClassifiers[i+1],data))
        fm += lr*treeRegressor.predict(xgboostClassifiers[i+1],data)

    y_pred_prob = 1 / (1 + np.exp(-fm)) # sigmoid turns the raw score into a probability
    y_pred_prob[y_pred_prob > 0.5] = 1  # threshold at 0.5 to obtain class labels
    y_pred_prob[y_pred_prob <= 0.5] = 0
    print(y_pred_prob)
    return y_pred_prob

if __name__ == '__main__':
    from sklearn import datasets
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import precision_score, accuracy_score, recall_score

    X, y = datasets.load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=True, random_state=2020)
    print('train: {} test: {}'.format(X_train.shape, X_test.shape))
    print(np.sum(y_train))
    X_train_df = pd.DataFrame(X_train)
    X_test_df = pd.DataFrame(X_test)

    xgboostClassifiers = build_xgboostClassifier(X_train_df, y_train, lr=1, gamma=1e-7, max_depth=6, min_samples_leaf=3,
                                               n=10, reg_lambda=1)

    y_pred_train = predict(xgboostClassifiers, X_train_df, lr=1)
    y_pred_test = predict(xgboostClassifiers, X_test_df, lr=1)
    print('train acc:{} precision:{} recall:{}'.format(accuracy_score(y_train, y_pred_train),
                                       precision_score(y_train, y_pred_train),
                                       recall_score(y_train,y_pred_train)))
    print('test acc:{} precision:{} recall:{}'.format(accuracy_score(y_test, y_pred_test),
                                      precision_score(y_test, y_pred_test),
                                      recall_score(y_test,y_pred_test)))

Output:

    train acc: 0.8593406593406593 precision: 0.819718309859155 recall: 1.0
    test acc: 0.8157894736842105 precision: 0.7586206896551724 recall: 1.0
