Derivation of Softmax and Its Loss-Function Gradient

Deriving the gradient of the loss function for the softmax activation

  • Introduction to the softmax function
  • The softmax loss function: cross-entropy
  • Derivative of the softmax
  • Derivative of the loss function

Introduction to the Softmax Function

In deep learning, multi-class classification problems typically use the softmax function as the output activation: it maps the outputs of several neurons into the interval (0, 1), so they can be interpreted as probabilities over the classes.
I had come across softmax while studying machine learning but had never worked through the derivation carefully. Recently, while taking the CS224n course, I had to derive it myself. Several derivations I found online left me confused in places, but I eventually worked out the softmax gradient, so I am recording my notes and the full derivation here.
First, the softmax function:
$$Softmax(\theta_i) = \frac{e^{\theta_i}}{\sum_k e^{\theta_k}}$$
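A minimal NumPy sketch of this formula (the max-subtraction is a standard numerical-stability trick, not part of the derivation itself):

```python
import numpy as np

def softmax(theta):
    """Softmax(theta_i) = exp(theta_i) / sum_k exp(theta_k)."""
    theta = np.asarray(theta, dtype=float)
    # Subtracting the max keeps exp() from overflowing; the result is unchanged.
    e = np.exp(theta - theta.max())
    return e / e.sum()

print(softmax([1.0, 2.0, 3.0]))  # ≈ [0.090, 0.245, 0.665], sums to 1
```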

The Softmax Loss Function: Cross-Entropy

$$Loss = -\sum_i y_i \, \ln a_i$$
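A small sketch of this loss, assuming `y` is a one-hot label vector and `a` is the softmax output (the `eps` guard against log(0) is an implementation detail, not part of the formula):

```python
import numpy as np

def cross_entropy(y, a, eps=1e-12):
    """Loss = -sum_i y_i * ln(a_i)."""
    y = np.asarray(y, dtype=float)
    a = np.asarray(a, dtype=float)
    return -np.sum(y * np.log(a + eps))

# For a one-hot y this reduces to -ln(a_true):
print(cross_entropy([0, 1, 0], [0.2, 0.7, 0.1]))  # ≈ -ln(0.7) ≈ 0.357
```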

Derivative of the Softmax

The input to the softmax, $\theta$, is a vector with $k$ elements, and each element produces one softmax output. Every softmax output therefore has to be differentiated with respect to every $\theta_j$, so the derivative of $S$ with respect to $\theta$ is a $k \times k$ Jacobian matrix:
$$\begin{bmatrix}
\frac{\partial S_0}{\partial \theta_0} & \frac{\partial S_1}{\partial \theta_0} & \dots & \frac{\partial S_k}{\partial \theta_0} \\
\frac{\partial S_0}{\partial \theta_1} & \frac{\partial S_1}{\partial \theta_1} & \dots & \frac{\partial S_k}{\partial \theta_1} \\
\dots & \dots & \dots & \dots \\
\frac{\partial S_0}{\partial \theta_k} & \frac{\partial S_1}{\partial \theta_k} & \dots & \frac{\partial S_k}{\partial \theta_k}
\end{bmatrix}$$

Substituting the softmax formula, the partial derivative to compute is:
$$\frac{\partial S_i}{\partial \theta_j}=\frac{\partial \frac{e^{\theta_i}}{\sum e^{\theta}}}{\partial \theta_j}$$
Let $u = e^{\theta_i}$ and $v = \sum e^{\theta}$. By the quotient rule, the derivative of $\frac{u}{v}$ is $\frac{u'v - uv'}{v^2}$, so
$$\begin{aligned}
\frac{\partial S_i}{\partial \theta_j} &= \frac{\partial \frac{e^{\theta_i}}{\sum e^{\theta}}}{\partial \theta_j}\\
&= \frac{\frac{\partial e^{\theta_i}}{\partial \theta_j}\sum e^{\theta} - e^{\theta_i}\frac{\partial \sum e^{\theta}}{\partial \theta_j}}{\left(\sum e^{\theta}\right)^2}
\end{aligned}$$
At this point we need to treat the cases $i = j$ and $i \ne j$ separately.
When $i = j$:
$$\begin{aligned}
\frac{\partial e^{\theta_i}}{\partial \theta_j} &= e^{\theta_i}\\
\frac{\partial \sum e^{\theta}}{\partial \theta_j} &= e^{\theta_i}\\
\frac{\partial S_i}{\partial \theta_j} &= \frac{e^{\theta_i}\sum e^{\theta} - e^{\theta_i}e^{\theta_i}}{\left(\sum e^{\theta}\right)^2}\\
&= \frac{e^{\theta_i}}{\sum e^{\theta}}\cdot\frac{\sum e^{\theta} - e^{\theta_i}}{\sum e^{\theta}}\\
&= S_i\,(1 - S_i)
\end{aligned}$$
When $i \ne j$:
$$\begin{aligned}
\frac{\partial e^{\theta_i}}{\partial \theta_j} &= 0\\
\frac{\partial \sum e^{\theta}}{\partial \theta_j} &= e^{\theta_j}\\
\frac{\partial S_i}{\partial \theta_j} &= \frac{0\cdot\sum e^{\theta} - e^{\theta_i}e^{\theta_j}}{\left(\sum e^{\theta}\right)^2}\\
&= -\frac{e^{\theta_i}}{\sum e^{\theta}}\cdot\frac{e^{\theta_j}}{\sum e^{\theta}}\\
&= -S_i\, S_j
\end{aligned}$$
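The two cases combine into the matrix $\mathrm{diag}(S) - S S^{\top}$. A short NumPy sketch of that Jacobian; the finite-difference check at the end is only an illustration, not part of the derivation:

```python
import numpy as np

def softmax(theta):
    e = np.exp(theta - np.max(theta))
    return e / e.sum()

def softmax_jacobian(theta):
    """J[i, j] = dS_i/dtheta_j = S_i*(1 - S_i) if i == j else -S_i*S_j."""
    s = softmax(theta)
    return np.diag(s) - np.outer(s, s)

theta = np.array([0.5, 1.5, -0.3])
J = softmax_jacobian(theta)

# Finite-difference check of one entry, dS_0/dtheta_1:
h = 1e-6
num = (softmax(theta + h * np.eye(3)[1])[0] - softmax(theta)[0]) / h
print(np.isclose(J[0, 1], num))  # True
```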

Derivative of the Loss Function

By the chain rule, the derivative of the loss with respect to $\theta_i$ can be decomposed as:
$$\begin{aligned}
\frac{\partial L}{\partial \theta_i} &= \sum_j \frac{\partial L}{\partial S_j}\cdot\frac{\partial S_j}{\partial \theta_i}\\
&= \sum_j \frac{\partial\left(-y_j \ln S_j\right)}{\partial S_j}\cdot\frac{\partial S_j}{\partial \theta_i}\\
&= -\sum_j \frac{y_j}{S_j}\cdot\frac{\partial S_j}{\partial \theta_i}
\end{aligned}$$
Substituting the softmax derivatives found above, and splitting the sum into the $j = i$ term and the $j \ne i$ terms:
$$\begin{aligned}
\frac{\partial L}{\partial \theta_i} &= -\frac{y_i}{S_i}\cdot S_i(1 - S_i) - \sum_{j\ne i}\frac{y_j}{S_j}\cdot(-S_j S_i)\\
&= -y_i(1 - S_i) + \sum_{j\ne i} y_j S_i\\
&= S_i\sum_j y_j - y_i
\end{aligned}$$
Since the labels $y_j$ sum to 1 (they form a one-hot or probability vector), the final result is
$$\frac{\partial L}{\partial \theta_i} = S_i - y_i$$
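A sketch that checks this shortcut numerically, assuming a one-hot `y` (the variable names and test values are illustrative only):

```python
import numpy as np

def softmax(theta):
    e = np.exp(theta - np.max(theta))
    return e / e.sum()

def loss(theta, y):
    return -np.sum(y * np.log(softmax(theta)))

theta = np.array([0.2, -1.0, 0.7])
y = np.array([0.0, 1.0, 0.0])   # one-hot label

# Shortcut from the derivation: dL/dtheta = S - y
grad_shortcut = softmax(theta) - y

# Central-difference numerical gradient for comparison
h = 1e-6
grad_numeric = np.array([
    (loss(theta + h * e_i, y) - loss(theta - h * e_i, y)) / (2 * h)
    for e_i in np.eye(len(theta))
])

print(np.allclose(grad_shortcut, grad_numeric, atol=1e-5))  # True
```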
