The Softmax Function Explained and Its Gradient Derivation for Error Backpropagation

Abstract

This article gives the definition of the softmax function and derives its gradient for backpropagation.

Related

For the companion code, see the article:

Comparing Python and PyTorch Implementations of softmax and Its Backpropagation

Series index:
https://blog.csdn.net/oBrightLamp/article/details/85067981

Main Text

1. Definition

The softmax function is commonly used in the output layer of multi-class classification problems.
It is defined as:
$$
s_{i} = \frac{e^{x_{i}}}{\sum_{t = 1}^{k} e^{x_{t}}} \\
\sum_{t = 1}^{k} e^{x_{t}} = e^{x_{1}} + e^{x_{2}} + e^{x_{3}} + \cdots + e^{x_{k}} \\
i = 1, 2, 3, \cdots, k
$$

When implementing the softmax computation in code, the exponentials $e^{x_i}$ can become very large and overflow.
A common remedy is to multiply both the numerator and the denominator by a constant $C$, which transforms the expression into:

$$
s_{i} = \frac{C e^{x_{i}}}{C \sum_{t = 1}^{k} e^{x_{t}}}
      = \frac{e^{x_{i} + \log C}}{\sum_{t = 1}^{k} e^{x_{t} + \log C}}
      = \frac{e^{x_{i} - m}}{\sum_{t = 1}^{k} e^{x_{t} - m}} \\
m = -\log C = \max(x_{i})
$$

The value of $C$ can be chosen freely without affecting the result. Here $m$ is taken to be the maximum of the $x_i$, which shifts the largest input value to 0.
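
Below is a minimal NumPy sketch of this numerically stable softmax (the use of NumPy and the function name are illustrative assumptions, not the companion article's exact code):

```python
import numpy as np

def softmax(x):
    # Subtract the maximum so the largest exponent is exp(0) = 1;
    # this avoids overflow and leaves the result unchanged.
    x = x - np.max(x)
    e = np.exp(x)
    return e / e.sum()

x = np.array([1.0, 2.0, 3.0, 4.0])
s = softmax(x)
print(s)           # probabilities
print(s.sum())     # 1.0
```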

2. Gradient Derivation

Consider a softmax transformation:
$$
x = (x_1, x_2, x_3, \cdots, x_k) \\
s = softmax(x)
$$
Take the derivative of $s_1$ with respect to $x_1$:
$$
s_{1} = \frac{e^{x_{1}}}{\sum_{t = 1}^{k} e^{x_{t}}} = \frac{e^{x_{1}}}{sum} \\
sum = \sum_{t = 1}^{k} e^{x_{t}} = e^{x_{1}} + \sum_{t = 2}^{k} e^{x_{t}} \\
\frac{\partial sum}{\partial x_{1}} = \frac{\partial \sum_{t = 1}^{k} e^{x_{t}}}{\partial x_{1}} = e^{x_{1}} \\
\frac{\partial s_{1}}{\partial x_{1}}
= \frac{e^{x_{1}} \cdot sum - e^{x_{1}} \cdot \frac{\partial sum}{\partial x_{1}}}{sum^{2}}
= \frac{e^{x_{1}} \cdot sum - e^{x_{1}} \cdot e^{x_{1}}}{sum^{2}}
= s_{1} - s_{1}^{2}
$$
Because $x_2$ appears in the denominator, it also affects the gradient of $s_1$. Take the derivative of $s_1$ with respect to $x_2$:
$$
\frac{\partial s_{1}}{\partial x_{2}}
= \frac{0 \cdot sum - e^{x_{1}} \cdot \frac{\partial sum}{\partial x_{2}}}{sum^{2}}
= \frac{-e^{x_{1}} \cdot e^{x_{2}}}{sum^{2}}
= -s_{1} s_{2}
$$
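
As a sanity check, the two derivatives above can be compared against finite differences (an illustrative sketch; the test vector and eps are arbitrary choices):

```python
import numpy as np

def softmax(x):
    x = x - np.max(x)
    e = np.exp(x)
    return e / e.sum()

x = np.array([1.0, 2.0, 3.0])
s = softmax(x)
eps = 1e-6

# Numerical d s_1 / d x_1 versus the analytic value s_1 - s_1^2
x1 = x.copy(); x1[0] += eps
print((softmax(x1)[0] - s[0]) / eps, s[0] - s[0] ** 2)

# Numerical d s_1 / d x_2 versus the analytic value -s_1 * s_2
x2 = x.copy(); x2[1] += eps
print((softmax(x2)[0] - s[0]) / eps, -s[0] * s[1])
```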
More generally, the same derivation gives:
$$
\frac{\partial s_{i}}{\partial x_{j}} =
\left\{
\begin{array}{rr}
-s_{i}^{2} + s_{i}, & i = j \\
-s_{i} s_{j}, & i \neq j
\end{array}
\right.
$$
Expanding these entries gives the gradient matrix of softmax:
$$
\nabla s_{(x)} =
\begin{pmatrix}
\partial s_{1}/\partial x_{1} & \partial s_{1}/\partial x_{2} & \cdots & \partial s_{1}/\partial x_{k} \\
\partial s_{2}/\partial x_{1} & \partial s_{2}/\partial x_{2} & \cdots & \partial s_{2}/\partial x_{k} \\
\vdots & \vdots & \ddots & \vdots \\
\partial s_{k}/\partial x_{1} & \partial s_{k}/\partial x_{2} & \cdots & \partial s_{k}/\partial x_{k}
\end{pmatrix}
=
\begin{pmatrix}
-s_{1}s_{1} + s_{1} & -s_{1}s_{2} & \cdots & -s_{1}s_{k} \\
-s_{2}s_{1} & -s_{2}s_{2} + s_{2} & \cdots & -s_{2}s_{k} \\
\vdots & \vdots & \ddots & \vdots \\
-s_{k}s_{1} & -s_{k}s_{2} & \cdots & -s_{k}s_{k} + s_{k}
\end{pmatrix}
$$
This is a Jacobian matrix.
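
The matrix above can also be written compactly as $\mathrm{diag}(s) - s s^{\top}$. Below is a small NumPy sketch that builds it from a softmax output (function names are illustrative assumptions):

```python
import numpy as np

def softmax(x):
    x = x - np.max(x)
    e = np.exp(x)
    return e / e.sum()

def softmax_jacobian(s):
    # Entries: s_i - s_i^2 on the diagonal, -s_i * s_j elsewhere,
    # i.e. diag(s) - s s^T.
    return np.diag(s) - np.outer(s, s)

s = softmax(np.array([1.0, 2.0, 3.0]))
J = softmax_jacobian(s)
print(J)
```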

3. Backpropagation

Consider an input vector $x$ that is normalized by the softmax function into a vector $s$, which is then propagated forward to produce an error value (a scalar $e$). We want the gradient of $e$ with respect to $x$.
$$
x = (x_1, x_2, x_3, \cdots, x_k) \\
s = softmax(x) \\
e = forward(s)
$$
Derivation:
$$
\nabla e_{(s)} = \left( \frac{\partial e}{\partial s_1}, \frac{\partial e}{\partial s_2}, \frac{\partial e}{\partial s_3}, \cdots, \frac{\partial e}{\partial s_k} \right) \\
\frac{\partial e}{\partial x_i}
= \frac{\partial e}{\partial s_1} \frac{\partial s_1}{\partial x_i}
+ \frac{\partial e}{\partial s_2} \frac{\partial s_2}{\partial x_i}
+ \frac{\partial e}{\partial s_3} \frac{\partial s_3}{\partial x_i}
+ \cdots
+ \frac{\partial e}{\partial s_k} \frac{\partial s_k}{\partial x_i}
$$
Writing out $\partial e/\partial x_i$ for every $i$ gives the gradient vector of $e$ with respect to $x$:
$$
\nabla e_{(x)} = \left( \frac{\partial e}{\partial s_1}, \frac{\partial e}{\partial s_2}, \frac{\partial e}{\partial s_3}, \cdots, \frac{\partial e}{\partial s_k} \right)
\begin{pmatrix}
\partial s_{1}/\partial x_{1} & \partial s_{1}/\partial x_{2} & \cdots & \partial s_{1}/\partial x_{k} \\
\partial s_{2}/\partial x_{1} & \partial s_{2}/\partial x_{2} & \cdots & \partial s_{2}/\partial x_{k} \\
\vdots & \vdots & \ddots & \vdots \\
\partial s_{k}/\partial x_{1} & \partial s_{k}/\partial x_{2} & \cdots & \partial s_{k}/\partial x_{k}
\end{pmatrix} \\
= \nabla e_{(s)}
\begin{pmatrix}
-s_{1}s_{1} + s_{1} & -s_{1}s_{2} & \cdots & -s_{1}s_{k} \\
-s_{2}s_{1} & -s_{2}s_{2} + s_{2} & \cdots & -s_{2}s_{k} \\
\vdots & \vdots & \ddots & \vdots \\
-s_{k}s_{1} & -s_{k}s_{2} & \cdots & -s_{k}s_{k} + s_{k}
\end{pmatrix}
$$
All of the $\partial e/\partial s_i$ values are known: they are the error gradients propagated back from the upstream forward computation, so $\nabla e_{(s)}$ is also known.
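
A minimal NumPy sketch of this backward pass (softmax_backward is a hypothetical helper name, and the upstream gradient grad_s is made up for illustration). It uses the algebraically equivalent vector form $s \odot (\nabla e_{(s)} - \nabla e_{(s)} \cdot s)$ so the full Jacobian never has to be built:

```python
import numpy as np

def softmax(x):
    x = x - np.max(x)
    e = np.exp(x)
    return e / e.sum()

def softmax_backward(grad_s, s):
    # grad_x = grad_s @ (diag(s) - s s^T), computed without forming the Jacobian:
    # grad_x_i = s_i * (grad_s_i - sum_j grad_s_j * s_j)
    return s * (grad_s - np.dot(grad_s, s))

x = np.array([1.0, 2.0, 3.0])
s = softmax(x)
grad_s = np.array([0.3, -0.1, 0.5])              # example upstream gradient d e / d s_i
grad_x = softmax_backward(grad_s, s)
print(grad_x)
print(grad_s @ (np.diag(s) - np.outer(s, s)))    # same result via the explicit Jacobian
```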

4. Interesting Properties

4.1 Relative Error

Continuing from the example above, observe that in the gradient matrix of softmax, the elements of any one column sum to zero (the outputs always satisfy $\sum_t s_t = 1$, so the derivative of that sum with respect to any $x_i$ vanishes):
$$
\sum_{t = 1}^{k} \frac{\partial s_{t}}{\partial x_{i}} = 0
$$
If every element of the gradient vector of $e$ with respect to $s$ is identically equal to some real number $a$:
$$
\frac{\partial e}{\partial s_{i}} \equiv a
$$
then
$$
\nabla e_{(x)} \equiv 0
$$
That is, if the upstream gradient is uniform, no error gradient is propagated downstream.
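
A quick numerical check of this property (the input vector and the constant a = 0.7 are arbitrary illustrative choices):

```python
import numpy as np

x = np.array([0.5, 1.5, -2.0, 3.0])
s = np.exp(x - x.max()); s /= s.sum()        # softmax
grad_s = np.full_like(s, 0.7)                # uniform upstream gradient, a = 0.7
grad_x = s * (grad_s - np.dot(grad_s, s))    # backward formula from section 3
print(grad_x)                                # ~[0, 0, 0, 0] up to floating-point error
```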

4.2 Convergence Property

If:
$$
e = forward(s) = -\sum_{i = 1}^{k} y_{i} \log(s_{i}) \\
\nabla e_{(s)} = \left( -\frac{y_1}{s_1}, -\frac{y_2}{s_2}, \cdots, -\frac{y_k}{s_k} \right) \\
\frac{y_i}{s_i} \equiv a
$$
then:
$$
\frac{s_i}{s_j} = \frac{y_i}{y_j}
$$
The probability distribution $s_i$ has converged to a distribution proportional to $y_i$. By the property in 4.1, the upstream gradient $\nabla e_{(s)}$ is then uniform (every component equals $-a$), so no error gradient flows back through the softmax layer.
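
A small check of this convergence case under an assumed target distribution y (chosen arbitrarily here): when $s$ is proportional to $y$, the cross-entropy gradient that flows back through softmax is zero:

```python
import numpy as np

y = np.array([0.1, 0.2, 0.3, 0.4])           # example target distribution
s = y.copy()                                 # s proportional to y, so y_i / s_i is constant
grad_s = -y / s                              # gradient of the cross-entropy loss w.r.t. s
grad_x = s * (grad_s - np.dot(grad_s, s))    # backward formula from section 3
print(grad_x)                                # ~[0, 0, 0, 0]: no gradient reaches x
```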

End of article.
