Derivative Formulas for Deep Metric Learning

Derivatives of the Normalized Vector

Derivative of the vector norm: let $\bm{x}\in \mathbb{R}^{d\times 1}$ be a column vector, $||\bm{x}||$ its Euclidean norm, and $\hat{\bm{x}}$ the L2-normalized vector, which has unit length.

$$\frac{\partial}{\partial \bm{x}}\left(||\bm{x}||\right)=\frac{\partial }{\partial \bm{x}}\sqrt{\bm{x}^T \bm{x}}=\frac{1}{2}\frac{2\bm{x}^T}{\sqrt{\bm{x}^T \bm{x}}}=\frac{\bm{x}^T}{||\bm{x}||}=\hat{\bm{x}}^T$$
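As a quick sanity check, the formula can be compared against a finite-difference gradient. The sketch below is a minimal numerical verification using numpy; the dimension `d = 5` and the random seed are arbitrary choices, not part of the derivation.

```python
import numpy as np

# Analytic gradient of ||x|| w.r.t. x is x_hat (the L2-normalized vector).
rng = np.random.default_rng(0)
d = 5
x = rng.normal(size=d)

analytic = x / np.linalg.norm(x)  # x_hat

# Central finite differences, one coordinate at a time.
eps = 1e-6
numeric = np.array([
    (np.linalg.norm(x + eps * np.eye(d)[i]) - np.linalg.norm(x - eps * np.eye(d)[i])) / (2 * eps)
    for i in range(d)
])

print(np.max(np.abs(analytic - numeric)))  # should be on the order of 1e-10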

Derivative of the normalized vector:
$$\frac{\partial}{\partial \bm{x}}(\hat{\bm{x}})=\frac{\partial}{\partial \bm{x}}\left(\frac{\bm{x}}{||\bm{x}||}\right)=\frac{\partial}{\partial \bm{x}}\left(\bm{x}\,\frac{1}{||\bm{x}||}\right)\\
=\frac{1}{||\bm{x}||}\frac{\partial \bm{x}}{\partial \bm{x}}+\bm{x}\,\frac{\partial}{\partial \bm{x}}\left(\frac{1}{||\bm{x}||}\right)\\
=\frac{I}{||\bm{x}||}-\bm{x}\,\frac{\hat{\bm{x}}^T}{||\bm{x}||^2}\\
=\frac{||\bm{x}||I-\bm{x}\hat{\bm{x}}^T}{||\bm{x}||^2}\\
=\frac{I-\hat{\bm{x}}\hat{\bm{x}}^T}{||\bm{x}||}$$
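The resulting Jacobian $(I-\hat{\bm{x}}\hat{\bm{x}}^T)/||\bm{x}||$ can likewise be verified numerically. The following sketch builds a finite-difference Jacobian column by column and compares it with the closed form; again, the dimension and seed are illustrative assumptions.

```python
import numpy as np

# Analytic Jacobian of x_hat = x / ||x|| is (I - x_hat x_hat^T) / ||x||.
rng = np.random.default_rng(1)
d = 5
x = rng.normal(size=d)
norm = np.linalg.norm(x)
x_hat = x / norm

analytic = (np.eye(d) - np.outer(x_hat, x_hat)) / norm

# Finite-difference Jacobian: column j holds d(x_hat) / d(x_j).
eps = 1e-6
numeric = np.zeros((d, d))
for j in range(d):
    e = np.eye(d)[j]
    f_plus = (x + eps * e) / np.linalg.norm(x + eps * e)
    f_minus = (x - eps * e) / np.linalg.norm(x - eps * e)
    numeric[:, j] = (f_plus - f_minus) / (2 * eps)

print(np.max(np.abs(analytic - numeric)))  # should be on the order of 1e-10
```

Note that the Jacobian is a projection onto the subspace orthogonal to $\hat{\bm{x}}$, scaled by $1/||\bm{x}||$: gradients flowing back through an L2 normalization layer lose their component along $\bm{x}$ itself.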
