The RLS algorithm has a fairly high computational load, which limits its use in practice, so many fast algorithms have been developed. Two common ones are the order-recursive adaptive filter (Lattice-Based RLS) and the fast transversal filter (Fast Transversal RLS). All fast algorithms exploit structural properties of the input data to achieve lower computational complexity, and they follow a similar pattern: forward and backward prediction filters are their basic building blocks.
For the prediction process, the a posteriori forward prediction error is
$$\varepsilon_f(k,i+1)=x(k)-\bm{w}^T_f(k,i+1)\bm{x}(k-1,i+1)$$
where
$$\bm{w}_f(k,i+1)=[w_{f0}(k),w_{f1}(k),w_{f2}(k),\dots,w_{fi}(k)]^T$$
$$\bm{x}(k-1,i+1)=[x(k-1),x(k-2),\dots,x(k-i-1)]^T$$
Define the weighted error vector
$$\bm{\varepsilon}_f(k,i+1)=\widehat{\bm{x}}(k)-\bm{X}^T(k-1,i+1)\bm{w}_f(k,i+1)$$
where
$$\widehat{\bm{x}}(k)=[x(k),\lambda^{1/2}x(k-1),\lambda x(k-2),\dots,\lambda^{k/2}x(0)]^T$$
In matrix form,
$$\bm{\varepsilon}_f(k,i+1)=\bm{X}^T(k,i+2)\begin{bmatrix} 1 \\ -\bm{w}_f(k,i+1) \end{bmatrix}$$
The weighted squared-error cost is
$$\begin{aligned}\xi^d_f(k,i+1)&=\bm{\varepsilon}_f^T(k,i+1)\bm{\varepsilon}_f(k,i+1)\\&=\sum_{l=0}^k \lambda^{k-l}\left[x(l)-\bm{x}^T(l-1,i+1)\bm{w}_f(k,i+1)\right]^2\end{aligned}$$
Setting its derivative with respect to $\bm{w}_f(k,i+1)$ to zero gives
$$\begin{aligned}\bm{w}_f(k,i+1)&=\left[ \sum_{l=0}^k \lambda^{k-l}\bm{x}(l-1,i+1)\bm{x}^T(l-1,i+1) \right]^{-1}\sum_{l=0}^k \lambda^{k-l} \bm{x}(l-1,i+1)x(l)\\&=\left[\bm{X}(k-1,i+1)\bm{X}^T(k-1,i+1)\right]^{-1}\bm{X}(k-1,i+1)\widehat{\bm{x}}(k)\\&=\bm{R}^{-1}_{Df}(k-1,i+1)\bm{p}_{Df}(k,i+1)\end{aligned}$$
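The normal equations above can be solved directly by accumulating the deterministic correlation matrix $\bm{R}_{Df}$ and cross-correlation vector $\bm{p}_{Df}$ from the data. A minimal sketch (the function name and the tiny ridge term guarding against a singular matrix are our own choices, not from the text):

```python
import numpy as np

def forward_predictor(x, k, order, lam=0.99):
    """Order-`order` forward predictor w_f = R_Df^{-1} p_Df at time k,
    built by direct exponentially weighted least squares."""
    R = np.zeros((order, order))
    p = np.zeros(order)
    for l in range(k + 1):
        # x(l-1, order) = [x(l-1), ..., x(l-order)]^T, zero before time 0
        xv = np.array([x[l - m] if l - m >= 0 else 0.0
                       for m in range(1, order + 1)])
        wgt = lam ** (k - l)
        R += wgt * np.outer(xv, xv)
        p += wgt * xv * x[l]
    # small ridge term for numerical safety (our addition)
    return np.linalg.solve(R + 1e-10 * np.eye(order), p)
```

This direct solve costs $O(k\,i^2)$ per time step, which is exactly the burden the fast order-recursive and transversal algorithms below avoid.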
Substituting the optimal coefficients back in gives the minimum weighted squared error
$$\begin{aligned}\xi^d_{f_{min}}(k,i+1)&=\sum_{l=0}^k\lambda^{k-l}x(l)\left[x(l)-\bm{x}^T(l-1,i+1) \bm{w}_f(k,i+1)\right]\\&=\sum_{l=0}^k\lambda^{k-l}x^2(l)-\bm{p}_{Df}^T(k,i+1)\bm{w}_f(k,i+1)\\&=\sigma^2_f(k)-\bm{w}_f^T(k,i+1)\bm{p}_{Df}(k,i+1)\end{aligned}$$
Combining the two results above yields
$$\bm{R}_D(k,i+2)=\begin{bmatrix} \sigma^2_f(k) & \bm{p}^T_{Df}(k,i+1) \\ \bm{p}_{Df}(k,i+1) & \bm{R}_{Df}(k-1,i+1) \end{bmatrix}$$
$$\bm{R}_D(k,i+2)\begin{bmatrix} 1 \\ -\bm{w}_{f}(k,i+1) \end{bmatrix}=\begin{bmatrix} \xi^d_{f_{min}}(k,i+1) \\ 0 \end{bmatrix}$$
Similarly, the backward prediction model satisfies
$$\bm{R}_D(k,i+2)\begin{bmatrix} -\bm{w}_{b}(k,i+1) \\ 1 \end{bmatrix}=\begin{bmatrix} 0 \\ \xi^d_{b_{min}}(k,i+1) \end{bmatrix}$$
Introducing a new variable $\delta$,
$$\bm{R}_D(k,i+2)\begin{bmatrix} 1 \\ -\bm{w}_{f}(k,i) \\ 0 \end{bmatrix}=\begin{bmatrix} \xi^d_{f_{min}}(k,i) \\ 0 \\ \delta_f(k,i) \end{bmatrix}$$
where
$$\delta_f(k,i)=\sum_{l=0}^k \lambda^{k-l}\varepsilon_f(l,i)x(l-i-1)$$
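The structure of this augmented system is easy to verify numerically: build $\bm{R}_D(k,i+2)$ from the data, solve for the order-$i$ forward predictor from its sub-blocks, and apply the matrix to the padded coefficient vector. A sketch (function and variable names are ours):

```python
import numpy as np

def forward_identity(x, k, i, lam=0.99):
    """Return R_D(k,i+2) @ [1, -w_f(k,i), 0]; by the identity above this
    should be [xi_fmin(k,i), 0, ..., 0, delta_f(k,i)]."""
    n = i + 2
    R = np.zeros((n, n))
    for l in range(k + 1):
        # x(l, i+2) = [x(l), x(l-1), ..., x(l-i-1)]^T, zero before time 0
        xv = np.array([x[l - m] if l - m >= 0 else 0.0 for m in range(n)])
        R += lam ** (k - l) * np.outer(xv, xv)
    # order-i forward predictor from the sub-blocks: R_Df(k-1,i) w_f = p_Df(k,i)
    w_f = np.linalg.solve(R[1:i + 1, 1:i + 1], R[1:i + 1, 0])
    return R @ np.concatenate(([1.0], -w_f, [0.0]))
```

The middle $i$ entries vanish by the normal equations (orthogonality), leaving only the minimum error energy on top and $\delta_f(k,i)$ at the bottom.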
Similarly,
$$\bm{R}_D(k,i+2)\begin{bmatrix} 0 \\ -\bm{w}_{b}(k-1,i) \\ 1 \end{bmatrix}=\begin{bmatrix} \delta_b(k,i) \\ 0 \\ \xi^d_{b_{min}}(k,i) \end{bmatrix}$$
where
$$\delta_b(k,i)=\sum_{l=0}^k \lambda^{k-l}\varepsilon_b(l-1,i)x(l)$$
The minimum backward a posteriori error energy obeys the order recursion
$$\xi_{b_{min}}^d(k,i+1)=\xi_{b_{min}}^d(k-1,i)-\frac{\delta^2(k,i)}{\xi_{f_{min}}^d(k,i)}$$
and the backward predictor coefficients obey
$$\bm{w}_b(k,i+1)=\begin{bmatrix} 0 \\ \bm{w}_{b}(k-1,i) \end{bmatrix}-\frac{\delta(k,i)}{\xi_{f_{min}}^d(k,i)}\begin{bmatrix} -1 \\ \bm{w}_{f}(k,i) \end{bmatrix}$$
Similarly, the minimum forward error energy obeys
$$\xi_{f_{min}}^d(k,i+1)=\xi_{f_{min}}^d(k,i)-\frac{\delta^2(k,i)}{\xi_{b_{min}}^d(k-1,i)}$$
and the forward predictor coefficients obey
$$\bm{w}_f(k,i+1)=\begin{bmatrix} \bm{w}_{f}(k,i) \\ 0 \end{bmatrix}-\frac{\delta(k,i)}{\xi_{b_{min}}^d(k-1,i)}\begin{bmatrix} \bm{w}_{b}(k-1,i) \\ -1 \end{bmatrix}$$
The a posteriori forward error satisfies the order update
$$\varepsilon_f(k,i+1)=\varepsilon_f(k,i)-\kappa_f(k,i)\varepsilon_b(k-1,i)$$
where $\kappa_f(k,i)$ is the forward reflection coefficient
$$\kappa_f(k,i)=\frac{\delta(k,i)}{\xi_{b_{min}}^d(k-1,i)}$$
The a posteriori backward prediction error satisfies
$$\varepsilon_b(k,i+1)=\varepsilon_b(k-1,i)-\kappa_b(k,i)\varepsilon_f(k,i)$$
where $\kappa_b(k,i)$ is the backward reflection coefficient
$$\kappa_b(k,i)=\frac{\delta(k,i)}{\xi_{f_{min}}^d(k,i)}$$
At order zero (initialization),
$$\varepsilon_b(k,0)=\varepsilon_f(k,0)=x(k)$$
$$\xi_{f_{min}}^d(k,0)=\xi_{b_{min}}^d(k,0)=x^2(k)+\lambda\xi_{f_{min}}^d(k-1,0)$$
The recursion for $\delta$ is
$$\delta(k,i)=\lambda\delta(k-1,i)+\frac{\varepsilon_b(k-1,i)\varepsilon_f(k,i)}{\gamma(k-1,i)}$$
where
$$\gamma(k,i+1)=\gamma(k,i)-\frac{\varepsilon^2_b(k,i)}{\xi^d_{b_{min}}(k,i)}$$
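The equations above define one complete lattice stage. As a sketch, the order update at time $k$ from order $i$ to $i+1$ can be written as a pure function (argument names are ours, with `_prev` marking time $k-1$ quantities):

```python
def lattice_stage(ef_i, eb_i, eb_prev_i, delta_prev, gamma_prev_i, gamma_i,
                  xi_f_i, xi_b_i, xi_b_prev_i, lam=0.99):
    """One lattice order update (time k, order i -> i+1), following the
    recursions above. Returns the order-(i+1) quantities."""
    delta = lam * delta_prev + eb_prev_i * ef_i / gamma_prev_i    # δ(k,i)
    gamma_next = gamma_i - eb_i ** 2 / xi_b_i                     # γ(k,i+1)
    kappa_f = delta / xi_b_prev_i    # forward reflection coefficient
    kappa_b = delta / xi_f_i         # backward reflection coefficient
    ef_next = ef_i - kappa_f * eb_prev_i          # ε_f(k,i+1)
    eb_next = eb_prev_i - kappa_b * ef_i          # ε_b(k,i+1)
    xi_f_next = xi_f_i - delta ** 2 / xi_b_prev_i # ξ_fmin(k,i+1)
    xi_b_next = xi_b_prev_i - delta ** 2 / xi_f_i # ξ_bmin(k,i+1)
    return ef_next, eb_next, delta, gamma_next, xi_f_next, xi_b_next
```

Each stage needs only a handful of scalar operations, which is where the $O(N)$ per-sample cost of the lattice algorithm comes from.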
For the filtering (joint-process) section, the weighted squared error is
$$\xi^d(k,i+1)=\sum^k_{l=0}\lambda^{k-l}\varepsilon^2(l,i+1)$$
Correspondingly,
$$\delta_D(k,i)=\lambda\delta_D(k-1,i)+\frac{\varepsilon(k,i)\varepsilon_b(k,i)}{\gamma(k,i)}$$
The feedforward multiplier is
$$v_i(k)=\frac{\delta_D(k,i)}{\xi^d_{b_{min}}(k,i)}$$
and the a posteriori output error is
$$\varepsilon(k,i+1)=\varepsilon(k,i)-v_i(k)\varepsilon_b(k,i)$$
This completes the order-recursive (lattice) RLS algorithm.
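Putting the recursions together, a minimal NumPy sketch of the a posteriori lattice RLS follows. We assume the standard soft-constraint initialization $\xi^d(-1,i)=\epsilon$ and $\gamma(k,0)=1$ (implicit in the derivation above); all function and variable names are our own:

```python
import numpy as np

def lattice_rls(x, d, N, lam=0.99, eps=1e-2):
    """Order-recursive (lattice) a posteriori RLS with N stages.
    Returns the a posteriori output errors eps(k, N+1)."""
    xi_f = np.full(N + 1, eps)     # xi_fmin(k-1, i), soft-constraint init
    xi_b = np.full(N + 1, eps)     # xi_bmin(k-1, i)
    eb_prev = np.zeros(N + 1)      # eps_b(k-1, i)
    gamma_prev = np.ones(N + 1)    # gamma(k-1, i)
    delta = np.zeros(N)            # delta(k-1, i)
    delta_D = np.zeros(N + 1)      # delta_D(k-1, i)
    out = np.zeros(len(x))
    for k in range(len(x)):
        ef = np.zeros(N + 1); eb = np.zeros(N + 1); gamma = np.ones(N + 1)
        xi_f_c = np.zeros(N + 1); xi_b_c = np.zeros(N + 1)
        ef[0] = eb[0] = x[k]                            # order-0 init
        xi_f_c[0] = xi_b_c[0] = x[k] ** 2 + lam * xi_f[0]
        for i in range(N):                              # lattice stages
            delta[i] = lam * delta[i] + eb_prev[i] * ef[i] / gamma_prev[i]
            gamma[i + 1] = gamma[i] - eb[i] ** 2 / xi_b_c[i]
            kappa_b = delta[i] / xi_f_c[i]              # backward refl. coeff.
            kappa_f = delta[i] / xi_b[i]                # forward refl. coeff.
            eb[i + 1] = eb_prev[i] - kappa_b * ef[i]
            ef[i + 1] = ef[i] - kappa_f * eb_prev[i]
            xi_b_c[i + 1] = xi_b[i] - delta[i] ** 2 / xi_f_c[i]
            xi_f_c[i + 1] = xi_f_c[i] - delta[i] ** 2 / xi_b[i]
        e = d[k]                                        # joint-process section
        for i in range(N + 1):
            delta_D[i] = lam * delta_D[i] + e * eb[i] / gamma[i]
            e -= (delta_D[i] / xi_b_c[i]) * eb[i]       # v_i(k) * eps_b(k,i)
        out[k] = e
        xi_f, xi_b, eb_prev, gamma_prev = xi_f_c, xi_b_c, eb, gamma
    return out
```

On a noiseless system-identification task the a posteriori output error decays rapidly toward zero, without ever forming an $N\times N$ matrix.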
Turning to the fast transversal (FTF) algorithm: the a posteriori forward prediction error of an order-$N$ predictor is
$$\begin{aligned}\varepsilon_f(k,N)&=x(k)-\bm{w}^T_f(k,N)\bm{x}(k-1,N)\\&=\bm{x}^T(k,N+1)\begin{bmatrix} 1 \\ -\bm{w}_f(k,N) \end{bmatrix}\end{aligned}$$
The a priori and a posteriori forward prediction errors are related by
$$e_f(k,N)=\frac{\varepsilon_f(k,N)}{\gamma(k-1,N)}$$
where
$$\gamma(k,N+1)=\frac{\lambda\xi^d_{f_{min}}(k-1,N)}{\xi^d_{f_{min}}(k,N)}\gamma(k-1,N)$$
The forward minimum weighted squared error is
$$\xi^d_{f_{min}}(k,N)=\lambda\xi^d_{f_{min}}(k-1,N)+e_f(k,N)\varepsilon_f(k,N)$$
The forward predictor coefficients are updated by
$$\bm{w}_f(k,N)=\bm{w}_f(k-1,N)+\phi(k-1,N)e_f(k,N)$$
where
$$\phi(k,N+1)=\begin{bmatrix} 0 \\ \phi(k-1,N) \end{bmatrix}+\frac{1}{\xi^d_{f_{min}}(k,N)}\begin{bmatrix} 1 \\ -\bm{w}_f(k,N) \end{bmatrix}\varepsilon_f(k,N)$$
Using $\widehat\phi(k,N+1)$ in place of $\phi(k,N+1)$ gives
$$\widehat\phi(k,N+1)=\begin{bmatrix} 0 \\ \widehat\phi(k-1,N) \end{bmatrix}+\frac{1}{\lambda\xi^d_{f_{min}}(k-1,N)}\begin{bmatrix} 1 \\ -\bm{w}_f(k-1,N) \end{bmatrix}e_f(k,N)$$
so the forward predictor update can be rewritten as
$$\bm{w}_f(k,N)=\bm{w}_f(k-1,N)+\widehat\phi(k-1,N)\varepsilon_f(k,N)$$
The a priori and a posteriori backward prediction errors are related by
$$\varepsilon_b(k,N)=e_b(k,N)\gamma(k,N)$$
and the conversion factors satisfy
$$\frac{\gamma(k,N+1)}{\gamma(k,N)}=\frac{\lambda\xi^d_{b_{min}}(k-1,N)}{\xi^d_{b_{min}}(k,N)}$$
The a priori backward prediction error is
$$e_b(k,N)=\lambda\xi^d_{b_{min}}(k-1,N)\widehat\phi_{N+1}(k,N+1)$$
In FTF, the conversion factor can be updated by
$$\gamma^{-1}(k,N)=\gamma^{-1}(k,N+1)-\widehat\phi_{N+1}(k,N+1)e_b(k,N)$$
The backward minimum weighted squared error is
$$\xi^d_{b_{min}}(k,N)=\lambda\xi^d_{b_{min}}(k-1,N)+e_b(k,N)\varepsilon_b(k,N)$$
The backward predictor coefficients are then updated by
$$\bm{w}_b(k,N)=\bm{w}_b(k-1,N)+\widehat\phi(k,N)\varepsilon_b(k,N)$$
where
$$\begin{bmatrix} \widehat\phi(k,N) \\ 0 \end{bmatrix}=\widehat\phi(k,N+1)-\widehat\phi_{N+1}(k,N+1)\begin{bmatrix} -\bm{w}_b(k-1,N) \\ 1 \end{bmatrix}$$
The a priori filtering error is
$$e(k,N)=d(k)-\bm{w}^T(k-1,N)\bm{x}(k,N)$$
so the a posteriori error is
$$\varepsilon(k,N)=e(k,N)\gamma(k,N)$$
The filter coefficients are updated by
$$\bm{w}(k,N)=\bm{w}(k-1,N)+\widehat\phi(k,N)\varepsilon(k,N)$$
This completes the fast transversal RLS algorithm.
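Collecting the equations above, here is a minimal NumPy sketch of the (unstabilized) FTF recursion, with the soft-constraint initialization $\xi^d_{f_{min}}(-1)=\xi^d_{b_{min}}(-1)=\epsilon$; the function and variable names are ours. The unstabilized form is known to be numerically fragile over very long runs, so this is illustrative only:

```python
import numpy as np

def ftf_rls(x, d, N, lam=0.99, eps=1e-2):
    """Fast transversal RLS (unstabilized sketch of the recursions above).
    Returns the a priori output errors e(k,N) and the final coefficients w."""
    wf = np.zeros(N); wb = np.zeros(N); w = np.zeros(N)
    phi = np.zeros(N)                  # normalized gain phi_hat(k-1, N)
    gamma = 1.0                        # conversion factor gamma(k-1, N)
    xi_f = eps; xi_b = eps             # xi_fmin(k-1,N), xi_bmin(k-1,N)
    xbuf = np.zeros(N + 1)             # [x(k), x(k-1), ..., x(k-N)]
    out = np.zeros(len(x))
    for k in range(len(x)):
        xbuf = np.roll(xbuf, 1); xbuf[0] = x[k]
        x_now, x_prev = xbuf[:N], xbuf[1:]       # x(k,N) and x(k-1,N)
        # forward prediction section
        e_f = x[k] - wf @ x_prev                 # a priori forward error
        eps_f = e_f * gamma                      # eps_f(k,N) = e_f * gamma(k-1,N)
        xi_f_new = lam * xi_f + e_f * eps_f
        wf_new = wf + phi * eps_f
        phi_ext = np.concatenate(([0.0], phi)) \
            + (e_f / (lam * xi_f)) * np.concatenate(([1.0], -wf))  # phi_hat(k,N+1)
        gamma_ext = (lam * xi_f / xi_f_new) * gamma                # gamma(k,N+1)
        # backward prediction section
        e_b = lam * xi_b * phi_ext[-1]           # a priori backward error
        gamma = 1.0 / (1.0 / gamma_ext - phi_ext[-1] * e_b)       # gamma(k,N)
        eps_b = e_b * gamma
        xi_b = lam * xi_b + e_b * eps_b
        phi = phi_ext[:-1] + phi_ext[-1] * wb    # phi_hat(k,N)
        wb = wb + phi * eps_b
        # joint-process (filtering) section
        e = d[k] - w @ x_now                     # a priori output error
        w = w + phi * (e * gamma)
        out[k] = e
        wf, xi_f = wf_new, xi_f_new
    return out, w
```

The per-sample cost is $O(N)$: every step is a vector add or scale, with no matrix-vector products, which is the whole point of the fast transversal structure.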