AI Tutorial - Math Foundations Course 1.7 - Optimization Methods, Part 1: Optimization Scenarios and Approach

Optimization Scenarios

In some scenarios there are many variables and the problem is large, making it hard to handle with calculus directly. Such problems can be tackled with optimization.

$P_1(x)=0$
$P_2(x)=0$
$\vdots$
$P_m(x)=0$

With $x=\begin{bmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{bmatrix}$, this is a (linear) equation-solving problem, not an optimization problem.

In the optimization setting, by contrast, you might have, say, 3 unknowns that are required to satisfy 1000 equations, and so on.
The hardest part is converting a problem that is not already an optimization problem into one.

Ex:
$P_1(x)=0 \leftrightarrow P_1^2(x)=0$
$P_2(x)=0 \leftrightarrow P_2^2(x)=0$
$\vdots$
$P_m(x)=0 \leftrightarrow P_m^2(x)=0$

linear equations $\rightarrow$ quadratic equations

$\sum_{i=1}^m P_i^2(x)=0$

In practice this amounts to finding the minimum of $f(x)=\sum_{i=1}^m P_i^2(x)$, i.e., solving $\min f(x)$.
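As a rough illustration (not part of the original notes), the sketch below sets up exactly this objective for an overdetermined toy system: 1000 linear equations in 3 unknowns. The names `A`, `b`, `f` and all data values are made up for the example.

```python
import numpy as np

# Hypothetical toy problem: m = 1000 linear equations P_i(x) = a_i.x - b_i = 0
# in n = 3 unknowns. The system has no exact solution, so we minimize
# f(x) = sum_i P_i(x)^2 = ||Ax - b||^2 instead.
rng = np.random.default_rng(0)
m, n = 1000, 3
A = rng.normal(size=(m, n))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.normal(size=m)   # small noise makes the system inconsistent

def f(x):
    """Objective f(x) = sum of squared residuals P_i(x)^2."""
    r = A @ x - b
    return r @ r

# For this particular (quadratic) f the minimizer is the least-squares solution,
# which numpy can compute directly; a general f would need the iterative
# algorithm described in the next section.
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x_star, f(x_star))
```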

Basic optimization problem:

minimize f(x) for $x \in R^n$

Assumption: f(x) is twice continuously differentiable.

In addition, if some of the equations are considered unimportant and others extremely important, weights can be used:

$f(x)=\sum_{i=1}^m w_i P_i^2(x), \ \ \ \ w_i>0$

After obtaining a result, the weights are adjusted again. This is, in fact, the kind of problem that artificial intelligence addresses.
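A minimal sketch of the weighted objective follows, again with made-up residuals and weights; the functions `residuals` and `weighted_f` are hypothetical names for illustration only.

```python
import numpy as np

def residuals(x):
    # Hypothetical system: P_1(x) = x1 + x2 - 3 = 0,  P_2(x) = x1 - x2 - 1 = 0
    return np.array([x[0] + x[1] - 3.0, x[0] - x[1] - 1.0])

def weighted_f(x, w):
    # f(x) = sum_i w_i * P_i(x)^2, with every w_i > 0
    r = residuals(x)
    return np.sum(w * r**2)

w = np.array([10.0, 1.0])                       # treat equation 1 as more important
print(weighted_f(np.array([2.0, 1.0]), w))      # exact solution (2, 1) -> objective 0.0
```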

Approach

Basic structure of a numerical algorithm for minimizing f(x)

Step 1: pick a point you consider reasonable (or generate one at random), choose a value $\varepsilon$ representing what you would consider a satisfactory accuracy, and set an iteration counter k = 0.

Step 1: choose an initial point $x_0$, set a convergence tolerance $\varepsilon$, and set a counter k = 0.

Step 2 (the most critical): determine the direction.

Step 2: determine a search direction $d_k$ for reducing f(x) from the point $x_k$.

Step 3 (the second most critical): determine the step size.

Step 3: determine a step size $\alpha_k$ such that $f(x_k+\alpha d_k)$ is minimized over $\alpha \geq 0$, and construct $x_{k+1} = x_k + \alpha_k d_k$.

Step 4: check how far the step moved. If the step is short enough, stop immediately.

Step 4: if $||\alpha_k d_k|| < \varepsilon$, stop and output the solution $x_{k+1}$; else set k := k + 1 and repeat from Step 2.

comments:

a) Steps 2 and 3 are the key steps of an optimization algorithm.
b) Different ways to accomplish Step 2 lead to different algorithms.
c) Step 3 is a one-dimensional optimization problem and it is often called a line search step.
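To make the four-step structure concrete, here is a minimal sketch that uses steepest descent ($d_k = -\bigtriangledown f(x_k)$) for Step 2 and a crude backtracking line search for Step 3. The finite-difference gradient, the test function, and all parameter values are my own illustrative choices, not prescribed by the notes.

```python
import numpy as np

def grad(f, x, h=1e-6):
    """Central finite-difference approximation of the gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def minimize(f, x0, eps=1e-6, max_iter=1000):
    x = np.asarray(x0, dtype=float)            # Step 1: initial point, tolerance eps, k = 0
    for k in range(max_iter):
        d = -grad(f, x)                        # Step 2: a direction that reduces f
        alpha = 1.0                            # Step 3: backtracking line search
        while f(x + alpha * d) > f(x) and alpha > 1e-12:
            alpha *= 0.5
        step = alpha * d
        x = x + step                           # x_{k+1} = x_k + alpha_k d_k
        if np.linalg.norm(step) < eps:         # Step 4: stop when the step is small enough
            break
    return x

f = lambda x: (x[0] - 1.0)**2 + 10.0 * (x[1] + 2.0)**2
print(minimize(f, [0.0, 0.0]))                 # approaches [1, -2]
```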

First-order partial derivatives (first-order condition):

First-order necessary condition: if $x^*$ is a minimum point (minimizer), then $\bigtriangledown f(x^*)=0$. In other words,
if $x^*$ is a minimum point, then it must be a stationary point.

$\bigtriangledown f=\begin{bmatrix} \frac{\partial f}{\partial x_1}\\ \vdots\\ \frac{\partial f}{\partial x_n} \end{bmatrix}$
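As a quick check of this condition on a hypothetical function $f(x) = (x_1-1)^2 + 10(x_2+2)^2$ (my own example, not from the notes), the gradient vanishes exactly at $x^* = (1, -2)$:

```python
import numpy as np

# grad f = [2(x1 - 1), 20(x2 + 2)] for the hypothetical f above
def grad_f(x):
    return np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)])

print(grad_f(np.array([1.0, -2.0])))   # [0. 0.]   -> x* = (1, -2) is a stationary point
print(grad_f(np.array([0.0, 0.0])))    # [-2. 40.] -> not stationary, so not a minimizer
```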

Matrix of second-order partial derivatives:

Second-order sufficient condition: if $\bigtriangledown f(x^*)=0$ and $H(x^*)$ is a positive definite matrix, i.e., $H(x^*)>0$,
then $x^*$ is a minimum point (minimizer).

$H(x)=\begin{bmatrix} \frac{\partial^2 f}{\partial x_1^2} & \frac{\partial^2 f}{\partial x_1\partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_1\partial x_n}\\ \frac{\partial^2 f}{\partial x_2\partial x_1} & \frac{\partial^2 f}{\partial x_2^2} & \cdots & \frac{\partial^2 f}{\partial x_2\partial x_n}\\ \vdots & \vdots & \ddots & \vdots\\ \frac{\partial^2 f}{\partial x_n\partial x_1} & \frac{\partial^2 f}{\partial x_n\partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_n^2} \end{bmatrix}$

The entry in row i, column j is $\frac{\partial^2 f}{\partial x_i \partial x_j}$.

The matrix of second-order partial derivatives is symmetric; it is also called the Hessian matrix.

When n = 1, $H(x) = \frac{d^2 f}{dx^2}$.

Relationship between the Hessian matrix and the gradient:

$H(x) = \bigtriangledown(\bigtriangledown^T f(x))$
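Continuing the same hypothetical example $f(x) = (x_1-1)^2 + 10(x_2+2)^2$, the Hessian is the constant matrix diag(2, 20); checking its eigenvalues is one standard way to verify the positive definiteness required by the second-order sufficient condition.

```python
import numpy as np

# Hessian of the hypothetical f(x) = (x1 - 1)^2 + 10*(x2 + 2)^2:
# entry (i, j) is d^2 f / dx_i dx_j, so H is constant and symmetric here.
H = np.array([[2.0, 0.0],
              [0.0, 20.0]])

# H(x*) > 0 (positive definite) iff all eigenvalues are positive.
eigvals = np.linalg.eigvalsh(H)
print(eigvals, bool(np.all(eigvals > 0)))   # [ 2. 20.] True -> x* = (1, -2) is a minimizer
```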
