From the RLS Algorithm to the Kalman Filter I

This earlier post of mine covers the direct motivation behind the RLS algorithm:
https://blog.csdn.net/ZLH_HHHH/article/details/89061839
Mathematically, it is equivalent to the least-squares method.
This post picks up where that article on the RLS algorithm left off.

The Meaning of the $P$ Matrix

Recall the original definition of the objective function:
$$
\begin{aligned}
J &= E\Big[\big(Y(k)-H(k)\hat x(k)\big)^T\big(Y(k)-H(k)\hat x(k)\big)\Big] \\
  &= E\Big[\big(H(k)x+v(k)-H(k)\hat x(k)\big)^T\big(H(k)x+v(k)-H(k)\hat x(k)\big)\Big]
\end{aligned}
$$
A note on notation: $v(k)$ is the measurement noise, which is not observable (and must be distinguished from the estimation error). It is generally assumed to be zero-mean, $E(v)=0$, i.e. the measurements are unbiased.
Expanding the square, the cross terms vanish in expectation because $E(v)=0$, and $E\big(v^T(k)v(k)\big)$ is a constant independent of $\hat x(k)$; dropping it, minimizing $J$ amounts to minimizing
$$
J=E\Big[\big(\hat x(k)-x\big)^T H^T(k)H(k)\big(\hat x(k)-x\big)\Big]
$$
In other words, minimizing $J$ is equivalent to minimizing
$$
J'=E\Big[\big(\hat x(k)-x\big)^T\big(\hat x(k)-x\big)\Big]
$$
Minimizing this expression means each new estimate moves closer to the true $x$.
Let
$$
A(k)=E\Big[\big(\hat x(k)-x\big)\big(\hat x(k)-x\big)^T\Big],
$$
so that $J'=\mathrm{Tr}\big(A(k)\big)$.
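As a quick sanity check of the identity $J'=\mathrm{Tr}\big(A(k)\big)$, here is a minimal numpy sketch; the synthetic error distribution is my own toy assumption, not part of the derivation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic estimation errors e = x_hat - x (3-dimensional, zero-mean).
errors = rng.normal(size=(100_000, 3))

# A = E[e e^T]: empirical error covariance matrix.
A = errors.T @ errors / len(errors)

# J' = E[e^T e]: empirical mean squared error.
J_prime = np.mean(np.sum(errors**2, axis=1))

# Tr(E[e e^T]) = E[e^T e], so these agree up to sampling noise.
print(np.trace(A), J_prime)
```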
Minimizing $J'$ therefore yields the same final result as minimizing $J$.
Now consider computing the optimal gain $K$:
$$
\begin{aligned}
\hat x(k)-x &= \hat x(k-1)+K(k)\big(y(k)-h(k)\hat x(k-1)\big)-x \\
&= \hat x(k-1)+K(k)\big(h(k)x+v(k)-h(k)\hat x(k-1)\big)-x \\
&= \big(I-K(k)h(k)\big)\big(\hat x(k-1)-x\big)+K(k)v(k)
\end{aligned}
$$

Substituting this into the definition of $A(k)$, the cross terms drop out because $v(k)$ is zero-mean and uncorrelated with the previous estimation error, leaving
$$
A(k)=\big(I-K(k)h(k)\big)A(k-1)\big(I-K(k)h(k)\big)^T+K(k)R(k)K^T(k),
$$
where $R(k)=E\big(v^2(k)\big)$.
To compute the optimal gain, set the derivative of the trace with respect to $K(k)$ to zero:
$$
\frac{\partial\,\mathrm{Tr}\big(A(k)\big)}{\partial K(k)}=-2h(k)A(k-1)+2h(k)A(k-1)h^T(k)K^T(k)+2R(k)K^T(k)=0
$$
$$
-h(k)A(k-1)+\big(h(k)A(k-1)h^T(k)+R(k)\big)K^T(k)=0
$$
Therefore:
$$
K(k)=\frac{A(k-1)h^T(k)}{R(k)+h(k)A(k-1)h^T(k)}
$$
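To make the recursion concrete, here is a minimal Python sketch of one update step using exactly this gain together with the covariance recursion above; the function name and the toy data model are my own choices, not from the original post:

```python
import numpy as np

def rls_update(x_hat, A, h, y, R):
    """One RLS update for a scalar measurement y = h @ x + v, Var(v) = R.

    x_hat : (n,)   current estimate
    A     : (n,n)  error covariance E[(x_hat - x)(x_hat - x)^T]
    h     : (n,)   measurement row vector
    """
    # Optimal gain K = A h^T / (R + h A h^T); the denominator is scalar here.
    K = A @ h / (R + h @ A @ h)
    # Correct the estimate with the innovation y - h @ x_hat.
    x_hat = x_hat + K * (y - h @ x_hat)
    # Covariance recursion: (I - K h) A (I - K h)^T + K R K^T.
    I_Kh = np.eye(len(x_hat)) - np.outer(K, h)
    A = I_Kh @ A @ I_Kh.T + R * np.outer(K, K)
    return x_hat, A

# Toy usage: estimate a fixed 2-d parameter from noisy scalar measurements.
rng = np.random.default_rng(1)
x_true = np.array([2.0, -1.0])
x_hat, A, R = np.zeros(2), 100.0 * np.eye(2), 0.5
for _ in range(200):
    h = rng.normal(size=2)
    y = h @ x_true + rng.normal(scale=np.sqrt(R))
    x_hat, A = rls_update(x_hat, A, h, y, R)
print(x_hat)  # approaches x_true as measurements accumulate
```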
Clearly, this update rule coincides with weighted least squares, since
$$
K(k)=P(k)h^T(k)R^{-1}(k).
$$
Substituting $P$ for $A$ in the recursion, and using $\big(I-K(k)h(k)\big)P(k-1)=P(k)$ from the RLS recursion:
$$
\begin{aligned}
&\big(I-K(k)h(k)\big)P(k-1)\big(I-K(k)h(k)\big)^T+K(k)R(k)K^T(k) \\
&= P(k)-P(k)h^T(k)K^T(k)+P(k)h^T(k)R^{-1}(k)R(k)K^T(k) \\
&= P(k)
\end{aligned}
$$
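Here is a small numerical check (random values of my own choosing) of the two identities used above: $K(k)=P(k)h^T(k)R^{-1}(k)$, and the symmetric form of the recursion reproducing the same $P(k)$:

```python
import numpy as np

rng = np.random.default_rng(2)

# Random symmetric positive-definite P(k-1), row vector h, noise variance R.
M = rng.normal(size=(3, 3))
P_prev = M @ M.T + np.eye(3)
h = rng.normal(size=3)
R = 0.7

# Gain in terms of P(k-1), then P(k) via the simple recursion.
K = P_prev @ h / (R + h @ P_prev @ h)
P = (np.eye(3) - np.outer(K, h)) @ P_prev

# Check K == P(k) h^T R^{-1}.
print(np.allclose(K, P @ h / R))

# Check the symmetric (Joseph-form) recursion gives the same P(k).
I_Kh = np.eye(3) - np.outer(K, h)
P_sym = I_Kh @ P_prev @ I_Kh.T + R * np.outer(K, K)
print(np.allclose(P, P_sym))
```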
Recall that minimizing $J'$ also minimizes $J$; this gives good reason to regard $P(k)$ as the covariance matrix of $\hat x(k)$, i.e.
$$
P(k)=\big(H^T(k)H(k)\big)^{-1}=E\Big[\big(\hat x(k)-x\big)\big(\hat x(k)-x\big)^T\Big]
$$
(the identification is exact when the noise variance is normalized to $R=1$; in general $P(k)$ equals the error covariance up to that factor).
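This identification can be checked empirically. Below is a Monte Carlo sketch, with a design matrix and parameters that are arbitrary choices of mine, comparing $\big(H^T H\big)^{-1}$ against the empirical covariance of the least-squares estimate under unit-variance noise:

```python
import numpy as np

rng = np.random.default_rng(3)

# Fixed design H and true parameter x; unit-variance noise so that
# P = (H^T H)^{-1} is exactly the covariance of the LS estimate.
H = rng.normal(size=(50, 2))
x_true = np.array([1.0, 3.0])
P = np.linalg.inv(H.T @ H)

# Monte Carlo: repeat the experiment and collect LS estimates.
estimates = []
for _ in range(20_000):
    y = H @ x_true + rng.normal(size=50)  # v ~ N(0, 1)
    estimates.append(np.linalg.lstsq(H, y, rcond=None)[0])
errors = np.array(estimates) - x_true

# Empirical E[(x_hat - x)(x_hat - x)^T] vs. (H^T H)^{-1}.
print(P)
print(errors.T @ errors / len(errors))
```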
