This article gives the definition of the softmax function and derives its gradient for backpropagation.
For companion code, see the article:
Python 和 PyTorch 对比实现 softmax 及其反向传播 (comparing Python and PyTorch implementations of softmax and its backward pass)
Series index:
https://blog.csdn.net/oBrightLamp/article/details/85067981
The softmax function is commonly used in the output layer of multi-class classification models.
It is defined as:
$$
s_{i} = \frac{e^{x_{i}}}{\sum_{t=1}^{k} e^{x_{t}}}, \qquad
\sum_{t=1}^{k} e^{x_{t}} = e^{x_{1}} + e^{x_{2}} + e^{x_{3}} + \cdots + e^{x_{k}}, \qquad
i = 1, 2, 3, \cdots, k
$$
When implementing softmax in code, the exponentials $e^{x_i}$ can become extremely large and overflow.
The usual remedy is to multiply both the numerator and the denominator by a constant $C$, transforming the expression into:
$$
s_{i} = \frac{C e^{x_{i}}}{C \sum_{t=1}^{k} e^{x_{t}}}
      = \frac{e^{x_{i} + \log C}}{\sum_{t=1}^{k} e^{x_{t} + \log C}}
      = \frac{e^{x_{i} - m}}{\sum_{t=1}^{k} e^{x_{t} - m}}, \qquad
m = -\log C = \max(x_{i})
$$
The value of $C$ can be chosen freely without affecting the result. Here $m$ is taken to be the maximum of the $x_i$, which shifts the largest input to 0 so that every exponent is non-positive.
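The shifted formula above can be sketched in NumPy as follows (a minimal illustration, not the companion article's code; the function name `softmax` is our own):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax: subtract max(x) before exponentiating."""
    m = np.max(x)              # m = max(x_i), so every exponent is <= 0
    e = np.exp(x - m)          # e^{x_i - m} can no longer overflow
    return e / np.sum(e)

# A naive np.exp(x) would overflow here; the shifted version stays finite.
x = np.array([1000.0, 1001.0, 1002.0])
s = softmax(x)
print(s, s.sum())              # probabilities in (0, 1) summing to 1
```

Because $C$ cancels, `softmax(x)` gives the same result as `softmax(x - c)` for any constant `c`.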
Consider a softmax transformation:
$$
x = (x_1, x_2, x_3, \cdots, x_k), \qquad s = \mathrm{softmax}(x)
$$
Compute the derivative of $s_1$ with respect to $x_1$:
$$
s_{1} = \frac{e^{x_{1}}}{\sum_{t=1}^{k} e^{x_{t}}} = \frac{e^{x_{1}}}{sum}
$$
$$
sum = \sum_{t=1}^{k} e^{x_{t}} = e^{x_{1}} + \sum_{t=2}^{k} e^{x_{t}}
$$
$$
\frac{\partial\, sum}{\partial x_{1}} = \frac{\partial \sum_{t=1}^{k} e^{x_{t}}}{\partial x_{1}} = e^{x_{1}}
$$
$$
\frac{\partial s_{1}}{\partial x_{1}}
= \frac{e^{x_{1}} \cdot sum - e^{x_{1}} \cdot \frac{\partial\, sum}{\partial x_{1}}}{sum^{2}}
= \frac{e^{x_{1}} \cdot sum - e^{x_{1}} \cdot e^{x_{1}}}{sum^{2}}
= s_{1} - s_{1}^{2}
$$
Since $x_2$ appears in the denominator, it also affects the gradient of $s_1$. Compute the derivative of $s_1$ with respect to $x_2$:
$$
\frac{\partial s_{1}}{\partial x_{2}}
= \frac{0 \cdot sum - e^{x_{1}} \cdot \frac{\partial\, sum}{\partial x_{2}}}{sum^{2}}
= \frac{-e^{x_{1}} \cdot e^{x_{2}}}{sum^{2}}
= -s_{1} s_{2}
$$
By the same reasoning, in general:
$$
\frac{\partial s_{i}}{\partial x_{j}} =
\left\{
\begin{array}{ll}
s_{i} - s_{i}^{2}, & i = j\\
-s_{i} s_{j}, & i \neq j
\end{array}
\right.
$$
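The two cases above can be collected into the Jacobian matrix $\partial s_i/\partial x_j = s_i \delta_{ij} - s_i s_j$ and checked numerically. A NumPy sketch (the helper names are our own):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def softmax_jacobian(s):
    """J[i, j] = ds_i/dx_j: s_i - s_i^2 on the diagonal, -s_i*s_j elsewhere."""
    return np.diag(s) - np.outer(s, s)

x = np.array([0.1, 0.5, -0.3, 2.0])
s = softmax(x)
J = softmax_jacobian(s)

# Compare each column against a central finite difference.
eps = 1e-6
num = np.zeros_like(J)
for j in range(len(x)):
    d = np.zeros_like(x)
    d[j] = eps
    num[:, j] = (softmax(x + d) - softmax(x - d)) / (2 * eps)
print(np.allclose(J, num, atol=1e-6))
```

Note that the Jacobian is symmetric, since $-s_i s_j = -s_j s_i$.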
Now consider an input vector $x$ that is normalized by softmax to give $s$, which is then propagated forward to produce a scalar error value $e$. We want the gradient of $e$ with respect to $x$.
$$
x = (x_1, x_2, x_3, \cdots, x_k), \qquad s = \mathrm{softmax}(x), \qquad e = \mathrm{forward}(s)
$$
Derivation:
$$
\nabla e_{(s)} = \left(\frac{\partial e}{\partial s_1}, \frac{\partial e}{\partial s_2}, \frac{\partial e}{\partial s_3}, \cdots, \frac{\partial e}{\partial s_k}\right)
$$
$$
\frac{\partial e}{\partial x_i}
= \frac{\partial e}{\partial s_1}\frac{\partial s_1}{\partial x_i}
+ \frac{\partial e}{\partial s_2}\frac{\partial s_2}{\partial x_i}
+ \frac{\partial e}{\partial s_3}\frac{\partial s_3}{\partial x_i}
+ \cdots
+ \frac{\partial e}{\partial s_k}\frac{\partial s_k}{\partial x_i}
$$
Expanding $\partial e/\partial x_i$ for every $i$ gives the gradient vector of $e$ with respect to $x$:
$$
\nabla e_{(x)} = \left(\frac{\partial e}{\partial s_1}, \frac{\partial e}{\partial s_2}, \frac{\partial e}{\partial s_3}, \cdots, \frac{\partial e}{\partial s_k}\right)
\begin{pmatrix}
\partial s_{1}/\partial x_{1} & \partial s_{1}/\partial x_{2} & \cdots & \partial s_{1}/\partial x_{k}\\
\partial s_{2}/\partial x_{1} & \partial s_{2}/\partial x_{2} & \cdots & \partial s_{2}/\partial x_{k}\\
\vdots & \vdots & \ddots & \vdots \\
\partial s_{k}/\partial x_{1} & \partial s_{k}/\partial x_{2} & \cdots & \partial s_{k}/\partial x_{k}
\end{pmatrix}
$$
$$
= \nabla e_{(s)}
\begin{pmatrix}
-s_{1}s_{1} + s_{1} & -s_{1}s_{2} & \cdots & -s_{1}s_{k} \\
-s_{2}s_{1} & -s_{2}s_{2} + s_{2} & \cdots & -s_{2}s_{k} \\
\vdots & \vdots & \ddots & \vdots \\
-s_{k}s_{1} & -s_{k}s_{2} & \cdots & -s_{k}s_{k} + s_{k}
\end{pmatrix}
$$
All the $\partial e/\partial s_i$ values are known: they are the error gradients propagated back from the upstream forward computation, so $\nabla e_{(s)}$ is known as well.
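Multiplying out the row vector against the Jacobian gives a closed form that avoids materializing the $k \times k$ matrix: $\partial e/\partial x_i = s_i \left(\partial e/\partial s_i - \sum_t \frac{\partial e}{\partial s_t} s_t\right)$. A NumPy sketch of this backward step (function names are our own):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def softmax_backward(s, grad_s):
    """Vector-Jacobian product grad_s @ (diag(s) - s s^T) in O(k) memory:
    grad_x_i = s_i * (grad_s_i - sum_t grad_s_t * s_t)."""
    return s * (grad_s - np.dot(grad_s, s))

x = np.array([1.0, 2.0, 3.0])
s = softmax(x)
grad_s = np.array([0.3, -0.2, 0.5])          # upstream gradient, arbitrary here

# Agrees with the explicit matrix product from the derivation above.
J = np.diag(s) - np.outer(s, s)
print(np.allclose(softmax_backward(s, grad_s), grad_s @ J))
```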
Continuing the example above, observe what happens when the entries in one column of the softmax Jacobian are summed:
$$
\sum_{t=1}^{k} \frac{\partial s_{t}}{\partial x_{i}}
= (s_i - s_i^2) + \sum_{t \neq i} (-s_t s_i)
= s_i - s_i \sum_{t=1}^{k} s_t
= 0
$$
If every element of the gradient of $e$ with respect to $s$ equals the same constant $a$:
$$
\frac{\partial e}{\partial s_{i}} \equiv a
$$
then
$$
\nabla e_{(x)} \equiv 0
$$
In other words, if the upstream gradient is uniform, softmax propagates no error gradient at all.
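Both facts are easy to confirm numerically: each column of the Jacobian sums to 0, so a constant upstream gradient is annihilated. A small NumPy check (an illustrative sketch):

```python
import numpy as np

x = np.array([0.5, -1.0, 2.0])
e = np.exp(x - np.max(x))
s = e / e.sum()
J = np.diag(s) - np.outer(s, s)          # J[t, i] = ds_t/dx_i

print(np.allclose(J.sum(axis=0), 0.0))   # every column sums to 0

a = 3.7                                  # any constant works
grad_s = np.full_like(s, a)              # uniform upstream gradient
print(np.allclose(grad_s @ J, 0.0))      # nothing is propagated
```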
For example, take the cross-entropy loss:
$$
e = \mathrm{forward}(s) = -\sum_{i=1}^{k} y_{i} \log(s_{i})
$$
$$
\nabla e_{(s)} = \left(-\frac{y_1}{s_1},\, -\frac{y_2}{s_2},\, \cdots,\, -\frac{y_k}{s_k}\right)
$$
The upstream gradient is uniform exactly when
$$
\frac{y_i}{s_i} \equiv a
$$
which implies
$$
\frac{s_i}{s_j} = \frac{y_i}{y_j}
$$
That is, the gradient stops flowing when the probability distribution $s$ has converged to the distribution proportional to $y$.
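For the cross-entropy example, the full chain rule collapses to the well-known shortcut $\nabla e_{(x)} = s - y$ whenever $\sum_i y_i = 1$, which can be verified against the explicit Jacobian product. A NumPy sketch (names are our own):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

x = np.array([0.2, 1.3, -0.7])
y = np.array([0.0, 1.0, 0.0])        # one-hot target, so sum(y) == 1
s = softmax(x)

grad_s = -y / s                      # d e / d s_i = -y_i / s_i
J = np.diag(s) - np.outer(s, s)      # J[i, j] = ds_i/dx_j
grad_x = grad_s @ J

print(np.allclose(grad_x, s - y))    # softmax + cross-entropy shortcut
```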
End of article.