Ensemble learning combines weak models, built from different algorithms, from the same algorithm with different parameters, or from different datasets, into a single strong model. Expressed as an additive model,
$$H(\boldsymbol x;\Theta)=\sum_{\tau}\alpha_{\tau}h(\boldsymbol x;\theta_{\tau}),\quad\Theta=\arg\min_{\alpha,\theta}\Bbb E_{\mathcal D}\left[L\left(y,\sum_{\tau}\alpha_{\tau}h(\boldsymbol x;\theta_{\tau})\right)\right]$$
Suppose the ensemble combines $M$ mutually independent weak classifiers, each with error rate $\epsilon<0.5$. Under majority voting, the error rate of the ensemble classifier $H$ (with $f$ the true labeling function) is
$$P(H(\boldsymbol x)\neq f(\boldsymbol x))=\sum_{m=0}^{\lfloor M/2\rfloor}\Bbb C_M^m(1-\epsilon)^m\epsilon^{M-m}\leq\exp\left(-\frac{1}{2}M(1-2\epsilon)^2\right)$$
As the number of base classifiers $M$ grows, the ensemble's error rate decreases exponentially.
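The bound is easy to verify numerically. Below is a minimal sketch (the `ensemble_error` helper and the choice $\epsilon=0.4$ are illustrative, not from the original text) comparing the exact majority-vote error with the exponential bound:

```python
import math

def ensemble_error(M, eps):
    """P(majority vote errs) = P(at most floor(M/2) of M independent base models are correct)."""
    return sum(math.comb(M, m) * (1 - eps)**m * eps**(M - m) for m in range(M // 2 + 1))

eps = 0.4
for M in (1, 11, 51, 101):
    bound = math.exp(-0.5 * M * (1 - 2 * eps)**2)
    print(f"M={M:3d}  error={ensemble_error(M, eps):.4f}  bound={bound:.4f}")
```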
Bagging
Base models have no strong mutual dependencies and can be generated in parallel. They are usually strong models, so Bagging suits high-variance, low-bias (overfitting) models, since it focuses on reducing variance; for example, a fully grown decision tree in which every training sample ends up in its own leaf node has a training error of 0.
Random forest is the classic Bagging algorithm. It uses CART decision trees as base models and injects randomness into both the sample set and the feature set each tree is trained on: every tree is fit on a bootstrap sample of the data, and candidate features are randomly subsampled at each split.
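As a sketch of the idea (hypothetical data shapes and hyperparameters; not from the original text), Bagging with randomized feature subsets can be written as:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def bagging_fit(X, y, n_trees=100):
    """Fit n_trees CART models, each on a bootstrap sample, with random feature subsets per split."""
    models = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(y), size=len(y))          # bootstrap: sample with replacement
        tree = DecisionTreeClassifier(max_features="sqrt")  # random feature subset at each split
        models.append(tree.fit(X[idx], y[idx]))
    return models

def bagging_predict(models, X):
    votes = np.stack([m.predict(X) for m in models])
    return np.sign(votes.sum(axis=0))                       # majority vote, labels in {-1, +1}
```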
Boosting
Base models have strong mutual dependencies and are generated sequentially; each new base model improves the ensemble built so far, and base models are typically weak learners whose accuracy is only slightly better than random guessing.
Each new base model is trained using the results of previous iterations and then added to the ensemble. Because jointly optimizing all base models is computationally hard, a greedy approach solves for one base model's parameters at a time. Boosting's forward additive model and its stagewise solution are
$$H_t(\boldsymbol x)=H_{t-1}(\boldsymbol x)+\alpha_th(\boldsymbol x;\theta_t),\quad (\alpha_t,\theta_t)=\arg\min_{\alpha,\theta}\Bbb E_{\mathcal D_t}[L(y,H_{t-1}(\boldsymbol x)+\alpha h(\boldsymbol x;\theta))]$$
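To make the greedy stagewise idea concrete: with squared loss, each step reduces to fitting the next base model to the current residuals. The sketch below (function name and hyperparameters are mine, assuming regression trees as base models) illustrates this; it is not AdaBoost itself, which is derived next.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def forward_stagewise(X, y, T=100, lr=0.1):
    """Greedy forward additive modeling with squared loss: each h_t fits the residuals y - H_{t-1}(x)."""
    pred = np.zeros(len(y))
    models = []
    for _ in range(T):
        h = DecisionTreeRegressor(max_depth=2).fit(X, y - pred)  # fit current residuals
        pred += lr * h.predict(X)   # alpha_t fixed to a small learning rate for simplicity
        models.append(h)
    return models
```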
CART
The base model is usually a CART decision tree, represented as
$$T(\boldsymbol x;\Theta)=\sum_{j=1}^J\gamma_j\Bbb I(\boldsymbol x\in R_j)$$
where $R_j$ and $\gamma_j$ are the region corresponding to the $j$-th leaf node and that node's output value, respectively.
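In one dimension the regions are just intervals, so the representation can be illustrated directly (the split points and leaf values below are hypothetical):

```python
import numpy as np

splits = np.array([2.5, 5.5])        # boundaries of R_1 = (-inf, 2.5], R_2 = (2.5, 5.5], R_3 = (5.5, inf)
gammas = np.array([1.0, -1.0, 1.0])  # leaf output gamma_j for each region

def tree_predict(x):
    """T(x) = sum_j gamma_j * I(x in R_j): find the region containing x, return its constant output."""
    return gammas[np.searchsorted(splits, x)]

print(tree_predict(np.array([1.0, 4.0, 7.0])))  # -> [ 1. -1.  1.]
```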
AdaBoost
Adaptive Boosting (AdaBoost) is the progenitor of the Boosting family; it combines multiple weak models into one strong model:
$$H(\boldsymbol x;\Theta)=\sum_{\tau}\alpha_{\tau}h(\boldsymbol x;\theta_{\tau})$$
where $h_\tau$ and $\alpha_\tau$ denote the base model obtained at iteration $\tau$ and its weight within the ensemble.
AdaBoost is a special case of the forward stagewise algorithm. Its basic idea: after each round, increase the weights of the samples the current base model misclassifies so that the next base model focuses on them, then combine all base models by a weighted vote.
This post explains AdaBoost's learning process through theoretical derivation, answering the following questions:

- Why is AdaBoost equivalent to a forward additive model under exponential loss?
- How is the base model $h_t$ trained at step $t$?
- How is the ensemble weight $\alpha_t$ of the base model at step $t$ computed?
The equivalence of binary-classification AdaBoost to a forward additive model with exponential loss was discovered only after the algorithm was proposed; it is a typical case of the idea preceding the theory. Learning base models by repeatedly adjusting the sample distribution turns out to be exactly what minimizing the exponential loss prescribes. The stagewise update is
$$H_t(\boldsymbol x)=H_{t-1}(\boldsymbol x)+\alpha_th_t(\boldsymbol x)$$
Let $\mathcal D$ denote the distribution over the training samples; the initial distribution is usually uniform:
$$\mathcal D=(w_1,\cdots,w_m),\quad w_i=\frac{1}{m}$$
For a binary classification model with $y=\pm 1$ trained by minimizing exponential loss, the base model $h_t$ obtained at step $t$ should be such that, once the current ensemble $H_{t-1}$ incorporates $h_t$, the exponential loss over the original sample distribution is minimized. The objective is therefore
$$\begin{aligned} (\alpha_t,h_t) &=\arg\min_{\alpha,h}\Bbb E_{\mathcal D}[\exp(-y(H_{t-1}(\boldsymbol x)+\alpha h(\boldsymbol x)))]\\ &=\arg\min_{\alpha,h}\sum_iw_i\exp(-y_iH_{t-1}(\boldsymbol x_i))\cdot\exp(-\alpha y_ih(\boldsymbol x_i))\\ &=\arg\min_{\alpha,h}\sum_iw_i^{(t)}\exp(-\alpha y_ih(\boldsymbol x_i)) \end{aligned}$$
where $w_i^{(t)}$ can be viewed as the weight of observation $i$ at iteration $t$:
$$w_i^{(t)}=w_i\exp(-y_iH_{t-1}(\boldsymbol x_i))$$
This weight formula is equivalent to the recurrence
$$w_i^{(t+1)}=w_i^{(t)}\exp(-y_i\alpha_th_t(\boldsymbol x_i))$$
In practice the sample distribution is updated through this recurrence (renormalized by a factor $Z_t$ so the weights remain a distribution, as in the worked example below), which explains why AdaBoost adjusts the sample distribution after every iteration.
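In code, one round of the distribution update is a couple of lines (a minimal sketch; the array names are illustrative, with labels and predictions in $\{-1,+1\}$):

```python
import numpy as np

def update_weights(w, alpha_t, y, pred):
    """Apply w_i^(t+1) = w_i^(t) * exp(-alpha_t * y_i * h_t(x_i)), then renormalize by Z_t."""
    w = w * np.exp(-alpha_t * y * pred)
    return w / w.sum()
```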
How is the base model $h_t$ trained at step $t$?
Fixing $\alpha_t$ and assuming $\alpha_t>0$, the optimal $h_t$ satisfies (using the first-order approximation $e^{-v}\approx 1-v$, which preserves the minimizer since $y_ih(\boldsymbol x_i)\in\{-1,+1\}$)
$$\begin{aligned} h_t &=\arg\min_h\sum_iw_i^{(t)}\exp(-y_ih(\boldsymbol x_i)) \approx\arg\min_h\sum_iw_i^{(t)}(1-y_ih(\boldsymbol x_i))\\ &=\arg\max_h\Bbb E_{\mathcal D_t}[yh(\boldsymbol x)]\\ &=\arg\max_h\left[h(\boldsymbol x)P(y=1\mid\boldsymbol x,\mathcal D_t)-h(\boldsymbol x)P(y=-1\mid\boldsymbol x,\mathcal D_t)\right] \end{aligned}$$
Clearly, the optimal solution is
$$h(\boldsymbol x)=\begin{cases}1,&P(y=1\mid\boldsymbol x,\mathcal D_t)>P(y=-1\mid\boldsymbol x,\mathcal D_t)\\-1,&\text{otherwise}\end{cases}$$
Hence the optimization objective of the base model (usually a CART decision tree) is
$$h_t=\arg\min_h\sum_i w_i^{(t)}\Bbb I(y_i\neq h(\boldsymbol x_i)),\quad h(\boldsymbol x)\in\{-1,+1\}$$
Learning the base model therefore amounts to training a binary CART decision tree on the weighted sample set.
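With scikit-learn this is one call: passing the current weights as `sample_weight` makes the tree (approximately, through its impurity criterion) minimize the weighted misclassification objective above. A minimal sketch, using the table data from the worked example below:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

X = np.arange(10).reshape(-1, 1)
y = np.array([1, 1, 1, -1, -1, -1, 1, 1, 1, -1])
w = np.full(10, 0.1)                        # uniform initial weights

stump = DecisionTreeClassifier(max_depth=1) # depth-1 CART = a threshold stump
stump.fit(X, y, sample_weight=w)
print(stump.predict(X))                     # predictions in {-1, +1}
```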
How is the ensemble weight $\alpha_t$ of the base model at step $t$ computed?
Fixing $h_t$, the optimal $\alpha_t$ satisfies
$$\alpha_t=\arg\min_{\alpha}\sum_iw_i^{(t)}\exp(-\alpha y_ih_t(\boldsymbol x_i))$$
Since $y_ih_t(\boldsymbol x_i)\in\{-1,+1\}$, the objective splits into $e^{-\alpha}\sum_{y_i=h_t(\boldsymbol x_i)}w_i^{(t)}+e^{\alpha}\sum_{y_i\neq h_t(\boldsymbol x_i)}w_i^{(t)}$; taking the derivative with respect to $\alpha$ and setting it to zero gives
$$\alpha_t=\frac{1}{2}\ln\frac{1-\epsilon_t}{\epsilon_t},\quad \epsilon_t=\frac{\sum_iw_i^{(t)}\Bbb I(y_i\neq h_t(\boldsymbol x_i))}{\sum_iw_i^{(t)}}$$
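Putting the three pieces together gives the full algorithm. The sketch below (function names and the early-stopping choice are mine, not from the original text) uses depth-1 CART stumps as base models:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost(X, y, T):
    """AdaBoost for labels y in {-1, +1} with CART stumps as weak learners."""
    m = len(y)
    w = np.full(m, 1.0 / m)                     # uniform initial distribution
    models, alphas = [], []
    for _ in range(T):
        h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = h.predict(X)
        eps = w[pred != y].sum() / w.sum()      # weighted error rate epsilon_t
        if eps >= 0.5:                          # no longer better than random guessing
            break
        alpha = 0.5 * np.log((1 - eps) / max(eps, 1e-12))
        w = w * np.exp(-alpha * y * pred)       # the weight recurrence
        w /= w.sum()                            # normalize by Z_t
        models.append(h)
        alphas.append(alpha)
    return models, alphas

def adaboost_predict(models, alphas, X):
    """H(x) = sign(sum_t alpha_t * h_t(x))."""
    scores = sum(a * h.predict(X) for h, a in zip(models, alphas))
    return np.sign(scores)
```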
Worked example. Given the training data in the table below, suppose each weak classifier is a threshold stump of the form $x<v$ or $x>v$, with the threshold $v$ chosen to minimize the classification error on the training set.
| No. | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| $x$ | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
| $y$ | 1 | 1 | 1 | -1 | -1 | -1 | 1 | 1 | 1 | -1 |
Solution: initialize the sample weight distribution
$$\mathcal D_1=(w_{11}, w_{12}, \cdots,w_{1,10}), \quad w_{1i}=0.1, \quad i=1,2,\cdots,10$$
i. $m=1$: train the weak learner, compute the base classifier's weight, and update the sample distribution.
The threshold $v=2.5$ gives the base classifier $h_1(x)$ with the lowest classification error, so
$$h_1(x)=\begin{cases}1,&x<2.5\\-1,&x>2.5\end{cases}$$
The classification error of $h_1(x)$ on distribution $\mathcal D_1$ is
$$e_1=\sum_{i=1}^{10}w_{1i}\Bbb I(y_i\neq h_1(x_i))=0.3$$
Compute the weight coefficient of $h_1$:
$$\alpha_1=\frac{1}{2}\ln\frac{1-e_1}{e_1}=0.4236$$
Update the sample weight distribution:
$$\mathcal D_2=(w_{21}, \cdots, w_{2i}, \cdots,w_{2,10}), \quad w_{2i}=\frac{w_{1i}\,e^{-\alpha_1y_ih_1(x_i)}}{Z_1}, \quad i=1,2,\cdots,10$$
where $Z_1=\sum_iw_{1i}\,e^{-\alpha_1y_ih_1(x_i)}$ is the normalization factor. This gives
$$\mathcal D_2=(0.071, 0.071, 0.071, 0.071, 0.071, 0.071, 0.167, 0.167, 0.167, 0.071)$$
ii. $m=2$: train the weak learner, compute the base classifier's weight, and update the sample distribution.
The threshold $v=8.5$ gives the base classifier $h_2(x)$ with the lowest classification error, so
$$h_2(x)=\begin{cases}1,&x<8.5\\-1,&x>8.5\end{cases}$$
The classification error of $h_2(x)$ on distribution $\mathcal D_2$ is
$$e_2=\sum_{i=1}^{10}w_{2i}\Bbb I(y_i\neq h_2(x_i))=0.2143$$
Compute the weight coefficient of $h_2$:
$$\alpha_2=\frac{1}{2}\ln\frac{1-e_2}{e_2}=0.6496$$
Update the sample weight distribution, giving
$$\mathcal D_3=(0.046, 0.046, 0.046, 0.167, 0.167, 0.167, 0.106, 0.106, 0.106, 0.046)$$
iii. $m=3$: train the weak learner, compute the base classifier's weight, and update the sample distribution.
The threshold $v=5.5$ gives the base classifier $h_3(x)$ with the lowest classification error (note the orientation is flipped relative to $h_1$ and $h_2$), so
$$h_3(x)=\begin{cases}1,&x>5.5\\-1,&x<5.5\end{cases}$$
The classification error of $h_3(x)$ on distribution $\mathcal D_3$ is
$$e_3=\sum_{i=1}^{10}w_{3i}\Bbb I(y_i\neq h_3(x_i))=0.1820$$
Compute the weight coefficient of $h_3$:
$$\alpha_3=\frac{1}{2}\ln\frac{1-e_3}{e_3}=0.7514$$
Update the sample weight distribution, giving
$$\mathcal D_4=(0.125,0.125,0.125,0.102,0.102,0.102,0.065,0.065,0.065,0.125)$$
iv. Construct the strong classifier
$$\text{sign}[H_3(x)]=\text{sign}[0.4236\,h_1(x)+0.6496\,h_2(x)+0.7514\,h_3(x)]$$
which misclassifies none of the 10 training samples.
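Running the `adaboost` sketch from above on this table reproduces the hand computation (assuming the stump learner picks the same thresholds, which it does here):

```python
X = np.arange(10).reshape(-1, 1)
y = np.array([1, 1, 1, -1, -1, -1, 1, 1, 1, -1])

models, alphas = adaboost(X, y, T=3)
print(np.round(alphas, 4))                               # expect about [0.4236, 0.6496, 0.7514]
print((adaboost_predict(models, alphas, X) == y).all())  # True: zero training error
```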