An Extremum Problem in Machine Learning

Find the minimizer of the quadratic function $f(x) = \frac{1}{2}x^TPx + q^Tx + r$, where $P$ is a symmetric matrix.

Solution:

1. Compute the derivative of $f(x)$:

$$
\begin{aligned}
f(x+\Delta x) - f(x)
&= \tfrac{1}{2}(x+\Delta x)^TP(x+\Delta x) + q^T(x+\Delta x) + r - \tfrac{1}{2}x^TPx - q^Tx - r \\
&= \tfrac{1}{2}(x^T+\Delta x^T)P(x+\Delta x) + q^T(x+\Delta x) + r - \tfrac{1}{2}x^TPx - q^Tx - r \\
&= \tfrac{1}{2}(x^TPx + x^TP\Delta x + \Delta x^TPx + \Delta x^TP\Delta x) + q^T(x+\Delta x) + r - \tfrac{1}{2}x^TPx - q^Tx - r \\
&= \tfrac{1}{2}(x^TP\Delta x + \Delta x^TPx + \Delta x^TP\Delta x) + q^T\Delta x \\
&= x^TP\Delta x + q^T\Delta x + \tfrac{1}{2}\Delta x^TP\Delta x
\end{aligned}
\tag{1.1}
$$
Note that since $P$ is symmetric, the two cross terms above are equal scalars: $x^TP\Delta x = (x^TP\Delta x)^T = \Delta x^TP^Tx = \Delta x^TPx$. This is what allows them to be combined in the last step of (1.1).
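As a quick numerical illustration of this symmetry step, here is a minimal NumPy sketch (the dimension and random seed are arbitrary choices for illustration, not from the original):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random symmetric matrix P = A + A^T.
A = rng.standard_normal((3, 3))
P = A + A.T

x = rng.standard_normal(3)
dx = rng.standard_normal(3)

# For symmetric P the two cross terms in (1.1) are equal scalars,
# so (1/2)(x^T P dx + dx^T P x) collapses to x^T P dx.
print(np.isclose(x @ P @ dx, dx @ P @ x))  # True
```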

Dividing the increment by $\Delta x$ (treating the increment of every component as the same scalar $\Delta x$; see the note at the end) gives

$$
\frac{f(x+\Delta x) - f(x)}{\Delta x} = x^TP + q^T + \tfrac{1}{2}\Delta x^TP,
$$

and the last term vanishes as $\Delta x \to 0$, so the derivative, written as a row vector, is $x^TP + q^T$.

Transposing this row vector (and using $P^T = P$) gives:

$$
Df(x) = Px + q
$$
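As a sanity check on $Df(x) = Px + q$, one can compare it against central finite differences; a minimal sketch (the dimension, seed, and step size $\varepsilon$ are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
P = A + A.T                       # symmetric P
q = rng.standard_normal(n)
r = 0.5

def f(x):
    return 0.5 * x @ P @ x + q @ x + r

x = rng.standard_normal(n)
grad_analytic = P @ x + q

# Central differences, one coordinate at a time.
eps = 1e-6
grad_numeric = np.zeros(n)
for i in range(n):
    e = np.zeros(n)
    e[i] = eps
    grad_numeric[i] = (f(x + e) - f(x - e)) / (2 * eps)

print(np.allclose(grad_analytic, grad_numeric, atol=1e-5))  # True
```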

2. Set the derivative to zero (the first-order condition for an extremum):

$$
x = -P^{-1}q
$$

This assumes $P$ is invertible; when $P$ is positive definite, this stationary point is the unique minimizer.
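A minimal NumPy sketch of the closed-form solution (the problem data here are randomly generated for illustration, with $P$ constructed to be symmetric positive definite so the stationary point is indeed the minimum). Solving the linear system $Px = -q$ is preferable to forming $P^{-1}$ explicitly, for both speed and numerical stability:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))
P = A @ A.T + n * np.eye(n)       # symmetric positive definite
q = rng.standard_normal(n)

# Minimizer x* = -P^{-1} q, computed without forming the inverse.
x_star = np.linalg.solve(P, -q)

# The gradient P x + q vanishes at the minimizer.
print(np.allclose(P @ x_star + q, 0.0))  # True
```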

Note: in the componentwise calculation used here, the increment $\Delta x$ is taken to be the same scalar for every component of the column vector, i.e. $\Delta x_1 = \Delta x_2 = \Delta x$, as the expansions below show.

For example, consider the following quadratic forms:

$$
\begin{aligned}
\begin{pmatrix} x_1 & x_2 \end{pmatrix}
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
\begin{pmatrix} \Delta x_1 \\ \Delta x_2 \end{pmatrix}
&= \begin{pmatrix} ax_1 + cx_2 & bx_1 + dx_2 \end{pmatrix}
\begin{pmatrix} \Delta x_1 \\ \Delta x_2 \end{pmatrix} \\
&= ax_1\Delta x_1 + cx_2\Delta x_1 + bx_1\Delta x_2 + dx_2\Delta x_2 \\
&= ax_1\Delta x + cx_2\Delta x + bx_1\Delta x + dx_2\Delta x
\quad {\color{red}(\text{note: } \Delta x_1 = \Delta x_2)}
\end{aligned}
$$

$$
\begin{aligned}
\begin{pmatrix} \Delta x_1 & \Delta x_2 \end{pmatrix}
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
&= \begin{pmatrix} a\Delta x_1 + c\Delta x_2 & b\Delta x_1 + d\Delta x_2 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \\
&= a\Delta x_1 x_1 + c\Delta x_2 x_1 + b\Delta x_1 x_2 + d\Delta x_2 x_2 \\
&= a\Delta x\,x_1 + c\Delta x\,x_1 + b\Delta x\,x_2 + d\Delta x\,x_2
\end{aligned}
$$
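The two expansions above, and the fact that they coincide exactly when the matrix is symmetric, can also be checked symbolically; a minimal SymPy sketch, with variable names mirroring the text:

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
x1, x2, dx1, dx2 = sp.symbols('x1 x2 dx1 dx2')

M = sp.Matrix([[a, b], [c, d]])
x = sp.Matrix([x1, x2])
dx = sp.Matrix([dx1, dx2])

# Expand both orderings of the quadratic form from the text.
xT_M_dx = sp.expand((x.T * M * dx)[0, 0])
dxT_M_x = sp.expand((dx.T * M * x)[0, 0])

# They differ in general, but coincide when M is symmetric (c = b).
diff = sp.expand(xT_M_dx - dxT_M_x)
print(diff)             # nonzero in general
print(diff.subs(c, b))  # 0
```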

Also note that this matrix differentiation is taken with respect to the column vector $x$ as a whole, not element by element with respect to the matrix.
