Matrix Derivatives Explained

  • Vector, Matrix, and Tensor Derivatives
    • Derivative of a Vector with Respect to a Vector
    • Derivative of a Vector with Respect to a Matrix
    • Derivative of a Matrix with Respect to a Matrix
    • Using the Chain Rule
    • Summary

Vector, Matrix, and Tensor Derivatives

Reference: http://cs231n.stanford.edu/vecDerivs.pdf

Derivative of a Vector with Respect to a Vector

How do we differentiate $y = Wx$ with respect to $x$? Here:

  • $y: C \times 1$
  • $W: C \times D$
  • $x: D \times 1$

To build intuition, first compute a special case such as $\frac{\partial y_7}{\partial x_3}$. $y_7$ can be written as
$$y_7 = \sum_{j=1}^{D} W_{7,j} x_j = W_{7,1} x_1 + W_{7,2} x_2 + W_{7,3} x_3 + \cdots$$
so $\frac{\partial y_7}{\partial x_3} = W_{7,3}$. More generally, $\frac{\partial y}{\partial x} = W$.

PS: Under this convention, the derivative of a scalar with respect to a vector has shape $1 \times n$, while the derivative of a vector with respect to a scalar has shape $n \times 1$.
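
As a quick sanity check, the result $\frac{\partial y}{\partial x} = W$ can be verified with finite differences. The sketch below is illustrative only; the shapes `C`, `D` and the random seed are arbitrary choices, not values from the note.

```python
# Minimal finite-difference check that the Jacobian of y = Wx w.r.t. x is W.
import numpy as np

C, D = 5, 3                      # arbitrary dimensions for the check
rng = np.random.default_rng(0)
W = rng.standard_normal((C, D))
x = rng.standard_normal(D)

eps = 1e-6
jacobian = np.zeros((C, D))
for j in range(D):
    dx = np.zeros(D)
    dx[j] = eps
    # central-difference approximation of the j-th Jacobian column, dy/dx_j
    jacobian[:, j] = (W @ (x + dx) - W @ (x - dx)) / (2 * eps)

assert np.allclose(jacobian, W, atol=1e-5)
print("dy/dx matches W")
```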

Derivative of a Vector with Respect to a Matrix

Given $y = xW$, how do we compute $\frac{\partial y}{\partial W}$? Here:

  • $y: 1 \times C$
  • $W: D \times C$
  • $x: 1 \times D$

Again, start with a special case: $\frac{\partial y_3}{\partial W_{78}}$. First,
$$y_3 = x_1 W_{13} + x_2 W_{23} + \dots + x_D W_{D3}$$
Since $W_{78}$ does not appear in this sum, $\frac{\partial y_3}{\partial W_{78}} = 0$. Looking further, we find that
$$\frac{\partial y_j}{\partial W_{ij}} = x_i$$
Now define the tensor $F_{i,j,k} = \frac{\partial y_i}{\partial W_{jk}}$, so that
$$F_{i,j,i} = x_j$$
All other entries of the tensor $F$ are zero, so we can define a two-dimensional matrix $G_{i,j} = F_{i,j,i}$ to represent the result of $\frac{\partial y}{\partial W}$.


PS:Representing the important part of derivative arrays in a compact way is critical to efficient implementations of neural networks.
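
To make the compact representation concrete, here is a small numerical sketch (not part of the original note; dimensions and names are arbitrary) that builds the full three-dimensional array $F_{i,j,k} = \frac{\partial y_i}{\partial W_{jk}}$ for $y = xW$ and confirms that the only nonzero entries are $F_{i,j,i} = x_j$:

```python
# Finite-difference construction of F[i, j, k] = dy_i / dW_{j,k} for y = xW.
import numpy as np

D, C = 4, 3                      # arbitrary dimensions for the check
rng = np.random.default_rng(1)
x = rng.standard_normal(D)
W = rng.standard_normal((D, C))

eps = 1e-6
F = np.zeros((C, D, C))
for j in range(D):
    for k in range(C):
        dW = np.zeros((D, C))
        dW[j, k] = eps
        F[:, j, k] = (x @ (W + dW) - x @ (W - dW)) / (2 * eps)

# The slice F[i, :, i] equals x for every output index i ...
for i in range(C):
    assert np.allclose(F[i, :, i], x, atol=1e-5)
# ... and every other entry is zero, so the matrix G[i, j] = F[i, j, i]
# captures everything the full tensor contains.
F_rest = F.copy()
for i in range(C):
    F_rest[i, :, i] = 0.0
assert np.allclose(F_rest, 0.0, atol=1e-5)
print("only F[i, j, i] = x_j is nonzero")
```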

Derivative of a Matrix with Respect to a Matrix

Given $Y = XW$, how do we compute $\frac{\partial Y_{a,b}}{\partial X_{c,d}}$? Here:

  • $Y: n \times C$
  • $W: D \times C$
  • $X: n \times D$

Expanding as before:
$$Y_{i,j} = \sum_{k=1}^{D} X_{i,k} W_{k,j}$$
we have
$$\frac{\partial Y_{i,j}}{\partial X_{i,k}} = W_{k,j} \tag{1}$$
and therefore
$$\frac{\partial Y_{a,b}}{\partial X_{c,d}} = \begin{cases} W_{d,b}, & \text{when } a = c \\ 0, & \text{when } a \neq c \end{cases}$$
We can observe that:

  1. All the values of $\frac{\partial Y_{a,b}}{\partial X_{c,d}}$ are already contained in $W$.
  2. $\frac{\partial Y_{i,j}}{\partial X_{i,k}}$ does not involve the row index shared by $X$ and $Y$.
  3. In fact, the matrix W holds all of these partials as it is; we just have to remember to index into it according to Equation 1 to obtain the specific partial derivative that we want.
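
The case analysis above can also be checked numerically. The sketch below (illustrative only; `n`, `D`, `C` are arbitrary) perturbs one entry $X_{c,d}$ at a time and compares the resulting change in $Y$ against Equation 1:

```python
# Finite-difference check of Equation 1: for Y = XW,
# dY_{a,b}/dX_{c,d} = W_{d,b} when a == c, and 0 otherwise.
import numpy as np

n, D, C = 2, 4, 3                # arbitrary dimensions for the check
rng = np.random.default_rng(2)
X = rng.standard_normal((n, D))
W = rng.standard_normal((D, C))

eps = 1e-6
for c in range(n):
    for d in range(D):
        dX = np.zeros((n, D))
        dX[c, d] = eps
        dY = ((X + dX) @ W - (X - dX) @ W) / (2 * eps)   # dY[a, b] = dY_{a,b}/dX_{c,d}
        expected = np.zeros((n, C))
        expected[c, :] = W[d, :]                         # only row a == c changes
        assert np.allclose(dY, expected, atol=1e-5)
print("dY_{a,b}/dX_{c,d} = W_{d,b} * [a == c] verified")
```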

Using the Chain Rule

Given $y = Vm$, where $m = Wx$, how do we compute $\frac{\partial y}{\partial x}$?

Again, start with a single entry:
$$\begin{aligned} \frac{\partial y_i}{\partial x_j} &= \frac{\partial y_i}{\partial m}\frac{\partial m}{\partial x_j} \\ &= \sum_{k=1}^{M} \frac{\partial y_i}{\partial m_k}\frac{\partial m_k}{\partial x_j} \\ &= \sum_{k=1}^{M} V_{i,k} W_{k,j} \\ &= V_{i,:} W_{:,j} \end{aligned}$$
Therefore
$$\frac{\partial y}{\partial x} = VW$$
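
Again, a finite-difference check (illustrative dimensions only, chosen here for the sketch) confirms that the Jacobian of $y = V(Wx)$ with respect to $x$ is the matrix product $VW$:

```python
# Finite-difference check that d(V(Wx))/dx = VW.
import numpy as np

K, M, D = 5, 4, 3                # arbitrary sizes: y is K, m is M, x is D
rng = np.random.default_rng(3)
V = rng.standard_normal((K, M))
W = rng.standard_normal((M, D))
x = rng.standard_normal(D)

eps = 1e-6
jacobian = np.zeros((K, D))
for j in range(D):
    dx = np.zeros(D)
    dx[j] = eps
    jacobian[:, j] = (V @ (W @ (x + dx)) - V @ (W @ (x - dx))) / (2 * eps)

assert np.allclose(jacobian, V @ W, atol=1e-5)
print("dy/dx matches VW")
```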

Summary

  1. To obtain the final derivative, it often helps to first work out an intermediate, entry-wise result: for example, compute $\frac{\partial y_i}{\partial x_j}$ first, and then assemble $\frac{\partial y}{\partial x}$.
