This is the last post in the Newton's method series: the Levenberg-Marquardt (LM) algorithm, also known as the damped Newton method.

Recall the Gauss-Newton method from the previous post:
$$f(\mathbf{x_0}+\mathbf{\Delta x}) = f(\mathbf{x_0}) + J(\mathbf{x_0})\mathbf{\Delta x}$$

$$\min_{\mathbf{\Delta x}} \frac{1}{2}\|f(\mathbf{x})\|^2 = \min_{\mathbf{\Delta x}} \frac{1}{2}\big(f(\mathbf{x_0})+J(\mathbf{x_0})\mathbf{\Delta x}\big)^T\big(f(\mathbf{x_0})+J(\mathbf{x_0})\mathbf{\Delta x}\big)$$

$$J^TJ\,\mathbf{\Delta x} = -J^Tf$$
The Gauss-Newton method replaces the Hessian $H$ of Newton's method with the first-order quantity $J^TJ$, thereby avoiding the computation of second-order derivatives. However, $J^TJ$ is not guaranteed to be positive definite; it may be singular or ill-conditioned, so inverting it is numerically unstable, and the iteration suffers in both stability and convergence.
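To make this concrete, here is a minimal numpy sketch (the numbers are made up for illustration): with a single scalar residual, as in the code at the end of this post, $J$ is $1\times 2$, so $J^TJ$ is only rank-1 and the plain Gauss-Newton system has no reliable solution.

import numpy as np

# Toy example: one scalar residual f with a 1x2 Jacobian J (illustrative values).
J = np.array([[1.0, 2.0]])
f = np.array([[0.5]])
JTJ = J.T @ J                         # 2x2 but rank 1, hence singular
print(np.linalg.matrix_rank(JTJ))     # prints 1
# np.linalg.solve(JTJ, -J.T @ f)      # raises LinAlgError: Singular matrix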
To improve the stability of the iteration, a damping term (sometimes called a regularization term in machine learning) is added to the objective to limit the step size:
$$\min_{\mathbf{\Delta x}} \left(\frac{1}{2}\|f(\mathbf{x})\|^2+\frac{1}{2}\lambda\,\mathbf{\Delta x}^T\mathbf{\Delta x}\right) = \min_{\mathbf{\Delta x}} \left(\frac{1}{2}\big(f(\mathbf{x_0})+J(\mathbf{x_0})\mathbf{\Delta x}\big)^T\big(f(\mathbf{x_0})+J(\mathbf{x_0})\mathbf{\Delta x}\big) + \frac{1}{2}\lambda\,\mathbf{\Delta x}^T\mathbf{\Delta x}\right)$$
The term $\frac{1}{2}\lambda\,\mathbf{\Delta x}^T\mathbf{\Delta x}$ appended to $\frac{1}{2}\|f(\mathbf{x})\|^2$ penalizes the objective whenever the computed $\mathbf{\Delta x}$ is large.

As in the Gauss-Newton method, setting the first-order derivative to zero yields the iteration formula:
$$J^TJ\,\mathbf{\Delta x} + \lambda\,\mathbf{\Delta x} = -J^Tf$$

$$(J^TJ+\lambda I)\,\mathbf{\Delta x} = -J^Tf$$

$$\mathbf{\Delta x} = -(J^TJ+\lambda I)^{-1}J^Tf$$
As can be seen, the damped Newton method replaces $H$ with $J^TJ+\lambda I$, which avoids the second-order derivative computation and, at the same time, guarantees that the matrix is invertible.
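Continuing the toy numbers from the sketch above, the damped system is solvable for any $\lambda > 0$:

import numpy as np

J = np.array([[1.0, 2.0]])    # same made-up 1x2 Jacobian as before
f = np.array([[0.5]])
lam = 1.0
# J^T J + lambda * I is symmetric positive definite for lam > 0, so the step always exists
dx = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ f)
print(dx)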
In the penalty term above, $\lambda$ controls how strongly $\mathbf{\Delta x}$ affects the step. When $f(\mathbf{x_0}+\mathbf{\Delta x})$ is well approximated by the first-order Taylor expansion $f(\mathbf{x_0})+J(\mathbf{x_0})\mathbf{\Delta x}$, the step can be larger to speed up convergence; when the approximation is poor, the step should be smaller to improve stability.

This suggests the following strategy for adjusting the damping coefficient:
$$\rho = \frac{f(\mathbf{x_0}+\mathbf{\Delta x})-f(\mathbf{x_0})}{J(\mathbf{x_0})\mathbf{\Delta x}}$$

$$\text{if}\quad \rho > \frac{3}{4}: \quad \lambda = \frac{1}{2}\lambda$$

$$\text{if}\quad \rho < \frac{1}{2}: \quad \lambda = 2\lambda$$
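Written as code, the rule above looks as follows (a sketch only; the helper name is made up, and the full program at the end of this post uses 0.25/0.75 as its thresholds instead):

def update_lambda(rho, lam):
    # good agreement between F and its linear model: damp less, allow larger steps
    if rho > 0.75:
        lam = 0.5 * lam
    # poor agreement: damp more, force smaller steps
    if rho < 0.5:
        lam = 2.0 * lam
    return lam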
The LM algorithm can also be derived from the trust-region point of view:
$$\min_{\mathbf{\Delta x}} \frac{1}{2}\|f(\mathbf{x})\|^2 \qquad s.t.\quad \mathbf{\Delta x}^T\mathbf{\Delta x} \le d$$
Here the step size is controlled by bounding the length of $\mathbf{\Delta x}$. The constrained problem can then be converted into an unconstrained one using a Lagrange multiplier:
$$\frac{1}{2}\|f(\mathbf{x_0})+J(\mathbf{x_0})\mathbf{\Delta x}\|^2+\frac{1}{2}\lambda\big(\mathbf{\Delta x}^T\mathbf{\Delta x} - d\big)$$
Solving this problem rigorously involves the KKT conditions and the associated convergence analysis; the resulting iteration formula has the same form as the damped Newton step above.
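Concretely, setting the derivative of the Lagrangian with respect to $\mathbf{\Delta x}$ to zero gives

$$J^T\big(f(\mathbf{x_0})+J\,\mathbf{\Delta x}\big) + \lambda\,\mathbf{\Delta x} = 0 \quad\Longrightarrow\quad (J^TJ+\lambda I)\,\mathbf{\Delta x} = -J^Tf$$

which is exactly the damped Newton step.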
In the trust-region formulation, $\lambda$ is produced by solving the Lagrangian problem; what we tune instead is the trust-region radius $d$ of $\mathbf{\Delta x}$:
$$\rho = \frac{f(\mathbf{x_0}+\mathbf{\Delta x})-f(\mathbf{x_0})}{J(\mathbf{x_0})\mathbf{\Delta x}}$$

$$\text{if}\quad \rho > \frac{3}{4}: \quad d = 2d$$

$$\text{if}\quad \rho < \frac{1}{2}: \quad d = \frac{1}{2}d$$
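As a sketch mirroring the $\lambda$ rule earlier (the helper name is made up), the radius update can be written as:

def update_radius(rho, d):
    # the linear model predicts the true decrease well: enlarge the trust region
    if rho > 0.75:
        d = 2.0 * d
    # the prediction is poor: shrink the trust region
    if rho < 0.5:
        d = 0.5 * d
    return d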
Finally, the overall flow of the LM algorithm: at each iteration, compute the damped step from $(J^TJ+\lambda I)\,\mathbf{\Delta x} = -J^Tf$, evaluate the gain ratio $\rho$, adjust $\lambda$ accordingly, and stop once $\mathbf{\Delta x}$ is small enough.

Below is the LM algorithm implemented with the damped-Newton formulation:
import numpy as np
import scipy.optimize
import time
import math


def partial_derivate_xy(x, y, F):
    # numerical partial derivatives of F by forward differences
    dx = (F(x + 0.001, y) - F(x, y)) / 0.001
    dy = (F(x, y + 0.001) - F(x, y)) / 0.001
    return dx, dy


def non_linear_func(x, y):
    fxy = 0.5 * (x ** 2 + y ** 2)
    return fxy


def non_linear_func_2(x, y):
    fxy = x * x + 2 * y * y + 2 * x * y + 3 * x - y - 2
    return fxy


def non_linear_func_3(x, y):
    fxy = 0.5 * (x ** 2 - y ** 2)
    return fxy


def non_linear_func_4(x, y):
    fxy = x ** 4 + 2 * y ** 4 + 3 * x ** 2 * y ** 2 + 4 * x * y ** 2 + x * y + x + 2 * y + 0.5
    return fxy


def non_linear_func_5(x, y):
    fxy = math.exp(x) + math.exp(0.5 * y) + x
    return fxy


def non_linear_func_5_least_square(x, y):
    fxy = math.pow(math.exp(x) + math.exp(0.5 * y) + x, 2)
    return fxy


def damping_newton(x, y, F, l):
    # one LM / damped-Newton step, treating F itself as the scalar residual f
    dx, dy = partial_derivate_xy(x, y, F)
    fx = F(x, y)
    grad = np.array([[dx], [dy]])                  # J^T (2x1)
    H = np.matmul(grad, grad.T) + l * np.eye(2)    # J^T J + lambda * I
    g = -grad * fx                                 # -J^T f
    vec_delta = np.matmul(np.linalg.inv(H), g)     # solve (J^T J + lambda I) delta = -J^T f
    vec_opt = np.array([[x], [y]]) + vec_delta
    x_opt = vec_opt[0][0]
    y_opt = vec_opt[1][0]
    # gain ratio: actual change of F versus the change predicted by the linear model J * delta
    rho = (F(x_opt, y_opt) - F(x, y)) / np.matmul(grad.T, vec_delta).item()
    if rho < 0.25:
        l *= 2     # poor agreement: increase damping
    if rho > 0.75:
        l *= 0.5   # good agreement: decrease damping
    return x_opt, y_opt, vec_delta, l


def optimizer(x0, y0, F, th=0.0001):
    x = x0
    y = y0
    counter = 0
    l = 1
    while True:
        x_opt, y_opt, vec_delta, l = damping_newton(x, y, F, l)
        if np.linalg.norm(vec_delta) < th:
            break
        x = x_opt
        y = y_opt
        counter = counter + 1
        print('iter: {}'.format(counter), 'optimized (x, y) = ({}, {})'.format(x, y), 'lambda: {}'.format(l))
    return x, y


def verify_min(x, y, F):
    # sample a small neighbourhood of (x, y) and count points with a larger function value
    fx = F(x, y)
    deltax = np.linspace(-0.1, 0.1, 100)
    deltay = np.linspace(-0.1, 0.1, 100)
    x_range = x + deltax
    y_range = y + deltay
    counter = 0
    for i in range(100):
        for j in range(100):
            f_range = F(x_range[i], y_range[j])
            f_delta = fx - f_range
            if f_delta < 0:
                counter += 1
    print('counter: {}'.format(counter))


if __name__ == '__main__':
    x0 = 2.
    y0 = 2.
    start = time.time()
    # treat non_linear_func_5 as the residual, i.e. minimize 0.5 * non_linear_func_5(x, y)^2
    result_x, result_y = optimizer(x0, y0, non_linear_func_5)
    end = time.time()
    print(result_x, result_y, 'cost time: {}'.format(end - start))
    # the gradient of the squared objective should be close to zero at the solution
    print(partial_derivate_xy(result_x, result_y, non_linear_func_5_least_square))
    verify_min(result_x, result_y, non_linear_func_5_least_square)
    # scipyRes = scipy.optimize.fmin_cg(scipyF, np.array([0, 0]))
    # print(scipyRes)
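As a quick sanity check (not part of the original code; the starting point and the choice of SciPy routine are just one option), the result can be compared against a derivative-free SciPy optimizer applied to the same squared objective:

import numpy as np
import scipy.optimize

# Assumes non_linear_func_5_least_square from the code above is in scope.
res = scipy.optimize.minimize(
    lambda v: non_linear_func_5_least_square(v[0], v[1]),
    np.array([2.0, 2.0]),
    method='Nelder-Mead')
print(res.x, res.fun)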