C++ Implementations of Various RLS Adaptive Filtering Algorithms


Header file:

/*
 * Copyright (c) 2008-2011 Zhang Ming (M. Zhang), [email protected]
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License as published by the
 * Free Software Foundation, either version 2 or any later version.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *
 * 1. Redistributions of source code must retain the above copyright notice,
 *    this list of conditions and the following disclaimer.
 *
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * This program is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
 * more details. A copy of the GNU General Public License is available at:
 * http://www.fsf.org/licensing/licenses
 */


/*****************************************************************************
 *                                   rls.h
 *
 * Recursive Least Square Filter.
 *
 * The RLS adaptive filter recursively finds the filter coefficients that
 * minimize a weighted linear least squares cost function relating to the
 * input signals. This is in contrast to other algorithms, such as the LMS,
 * that aim to reduce the mean square error.
 *
 * The input signal of the RLS adaptive filter is considered deterministic,
 * while for the LMS and similar algorithms it is considered stochastic.
 * Compared to most of its competitors, the RLS exhibits extremely fast
 * convergence. However, this benefit comes at the cost of high computational
 * complexity, and potentially poor tracking performance when the filter to
 * be estimated changes.
 *
 * This file includes five commonly used RLS algorithms:
 * conventional RLS (rls),       stabilized fast transversal RLS (sftrls),
 * lattice RLS (lrls),           error feedback lattice RLS (eflrls),
 * QR-based RLS (qrrls).
 *
 * Zhang Ming, 2010-10, Xi'an Jiaotong University.
 *****************************************************************************/


#ifndef RLS_H
#define RLS_H


#include 
#include 


namespace splab
{

    template <typename Type>
    Type rls( const Type&, const Type&, Vector<Type>&,
              const Type&, const Type& );

    template <typename Type>
    Type sftrls( const Type&, const Type&, Vector<Type>&,
                 const Type&, const Type&, const string& );

    template <typename Type>
    Type lrls( const Type&, const Type&, Vector<Type>&,
               const Type&, const Type&, const string& );

    template <typename Type>
    Type eflrls( const Type&, const Type&, Vector<Type>&,
                 const Type&, const Type&, const string& );

    template <typename Type>
    Type qrrls( const Type&, const Type&, Vector<Type>&,
                const Type&, const string& );


    #include <rls-impl.h>

}
// namespace splab


#endif
// RLS_H
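For reference, the exponentially weighted least-squares cost mentioned in the header comment, and the recursion realized by the conventional rls() routine in the implementation file below, take the standard textbook form:

\[
J(n) \;=\; \sum_{i=1}^{n} \lambda^{\,n-i}\,\bigl[\,d(i) - \mathbf{w}^{T}\mathbf{x}(i)\,\bigr]^{2}
\]

\[
e(n) = d(n) - \mathbf{w}^{T}(n\!-\!1)\,\mathbf{x}(n),
\qquad
\mathbf{k}(n) = \frac{\mathbf{R}^{-1}(n\!-\!1)\,\mathbf{x}(n)}
                     {\lambda + \mathbf{x}^{T}(n)\,\mathbf{R}^{-1}(n\!-\!1)\,\mathbf{x}(n)}
\]

\[
\mathbf{R}^{-1}(n) = \frac{1}{\lambda}\Bigl[\mathbf{R}^{-1}(n\!-\!1) - \mathbf{k}(n)\,\mathbf{x}^{T}(n)\,\mathbf{R}^{-1}(n\!-\!1)\Bigr],
\qquad
\mathbf{w}(n) = \mathbf{w}(n\!-\!1) + e(n)\,\mathbf{k}(n)
\]

In the code below, e(n) is the a priori error ak, k(n) is the gain vector vP, R^{-1}(n) is invR (initialized to (1/delta)*I), and w(n) is the weight vector wn.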

Implementation file:

/*
 * Copyright (c) 2008-2011 Zhang Ming (M. Zhang), [email protected]
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License as published by the
 * Free Software Foundation, either version 2 or any later version.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *
 * 1. Redistributions of source code must retain the above copyright notice,
 *    this list of conditions and the following disclaimer.
 *
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * This program is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
 * more details. A copy of the GNU General Public License is available at:
 * http://www.fsf.org/licensing/licenses
 */


/*****************************************************************************
 *                              rls-impl.h
 *
 * Implementation for RLS Filter.
 *
 * Zhang Ming, 2010-10, Xi'an Jiaotong University.
 *****************************************************************************/


/**
 * The conventional RLS algorithm. The parameter "lambda" is the forgetting
 * factor: the smaller "lambda" is, the smaller the contribution of previous
 * samples. This makes the filter more sensitive to recent samples, which
 * means more fluctuation in the filter coefficients. The suggested range is
 * [0.8, 1.0]. The parameter "delta" is the value used to initialize the
 * inverse of the autocorrelation matrix of the input signal; it can be
 * chosen as an estimate of the input signal power.
 */
template <typename Type>
Type rls( const Type &xk, const Type &dk, Vector<Type> &wn,
          const Type &lambda, const Type &delta )
{
    assert( Type(0.8) <= lambda );
    assert( lambda <= Type(1.0) );

    int filterLen = wn.size();
    Vector<Type> vP(filterLen);
    Vector<Type> vQ(filterLen);

    static Vector<Type> xn(filterLen);
    static Matrix<Type> invR = eye( filterLen, Type(1.0/delta) );

    // update the input signal buffer
    for( int i=filterLen; i>1; --i )
        xn(i) = xn(i-1);
    xn(1) = xk;

    // priori error
    Type ak = dk - dotProd(wn,xn);

    vQ = invR * xn;
    vP = vQ / (lambda+dotProd(vQ,xn));
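    // vQ above is R^{-1}(n-1) x(n); vP is the resulting (Kalman) gain vector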

    // update the inverse of the autocorrelation matrix
    invR = (invR - multTr(vQ,vP)) / lambda;

    // update Weight Vector
    wn += ak * vP;
    //    wn += ak * (invR*xn);

    return dotProd(wn,xn);
}


/**
 * Stabilized Fast Transversal RLS.
 */
template <typename Type>
Type sftrls( const Type &xk, const Type &dk, Vector<Type> &wn,
             const Type &lambda, const Type &epsilon,
             const string &training )
{
    int filterLen = wn.size(),
        L = wn.size()-1;

    assert( Type(1.0-1.0/(2*L+2)) <= lambda );
    assert( lambda <= Type(1.0) );

    static Vector<Type> xn(filterLen), xnPrev(filterLen);

    const Type  k1 = Type(1.5),
                k2 = Type(2.5),
                k3 = Type(1.0);

    // initialization
    Type    e, ep,
            ef, efp,
            eb1, eb2, ebp1, ebp2, ebp31, ebp32, ebp33;

    static Type gamma = 1,
                xiBmin = epsilon,
                xiFminInv = 1/epsilon;
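    // gamma is the a priori / a posteriori error conversion factor; xiBmin and
    // xiFminInv hold the minimum backward prediction error energy and the
    // inverse of the minimum forward prediction error energy, respectively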

    Vector<Type> phiExt(L+2);

    static Vector<Type> phi(filterLen),
                        wf(filterLen), wb(filterLen);

    // update the input signal buffer
    xnPrev = xn;
    for( int i=1; i<=L; ++i )
        xn[i] = xnPrev[i-1];
    xn[0] = xk;

    if( training == "on" )
    {
        // forward prediction error
        efp = xk - dotProd(wf,xnPrev);
        ef  = gamma * efp;

        phiExt[0] = efp * xiFminInv/lambda;
        // for( int i=0; i   ... (the remainder of sftrls is missing from the source)
}


/**
 * Lattice RLS.
 */
template <typename Type>
Type lrls( const Type &xk, const Type &dk, Vector<Type> &vn,
           const Type &lambda, const Type &epsilon,
           const string &training )
{
    assert( Type(0.8) <= lambda );
    assert( lambda <= Type(1.0) );

    int filterLen = vn.size(),
        L = filterLen-1;

    // initialization
    Vector<Type>    gamma(filterLen),
                    eb(filterLen),
                    kb(L), kf(L),
                    xiBmin(filterLen), xiFmin(filterLen);

    static Vector<Type> delta(L), deltaD(filterLen),
                        gammaOld(filterLen,Type(1.0)),
                        ebOld(filterLen),
                        xiBminOld(filterLen,epsilon),
                        xiFminOld(filterLen,epsilon);

    // zero-order initialization
    gamma[0] = 1;
    xiBmin[0]= xk*xk + lambda*xiFminOld[0];
    xiFmin[0] = xiBmin[0];

    Type e = dk;
    Type ef = xk;
    eb[0] = xk;
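    // e is the joint-process (output) error; ef and eb are the forward and
    // backward prediction errors propagated through the lattice stages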

    // for( int j=0; j   ... (the remainder of lrls is missing from the source)
}


/**
 * Error Feedback Lattice RLS.
 */
template <typename Type>
Type eflrls( const Type &xk, const Type &dk, Vector<Type> &vn,
             const Type &lambda, const Type &epsilon,
             const string &training )
{
    assert( Type(0.8) <= lambda );
    assert( lambda <= Type(1.0) );

    int filterLen = vn.size(),
        L = filterLen-1;

    // initialization
    Vector<Type>    gamma(filterLen),
                    eb(filterLen),
                    xiBmin(filterLen), xiFmin(filterLen);

    static Vector<Type> delta(L), deltaD(filterLen),
                        gammaOld(filterLen,Type(1.0)),
                        ebOld(filterLen),
                        kb(L), kf(L),
                        xiBminOld2(filterLen,epsilon),
                        xiBminOld(filterLen,epsilon),
                        xiFminOld(filterLen,epsilon);

    // zero-order initialization
    gamma[0] = 1;
    xiBmin[0]= xk*xk + lambda*xiFminOld[0];
    xiFmin[0] = xiBmin[0];

    Type tmp = 0;
    Type e = dk;
    Type ef = xk;
    eb[0] = xk;

    // for( int j=0; j   ... (the remainder of eflrls is missing from the source)
}


/**
 * QR-Decomposition-Based RLS. Note that this routine takes the square root of
 * the forgetting factor ("lambdaSqrt") as its parameter.
 */
template <typename Type>
Type qrrls( const Type &xk, const Type &dk, Vector<Type> &wn,
            const Type &lambdaSqrt, const string &training )
{
    int filterLen = wn.size(),
		fL1 = filterLen+1;

    // initialization
	static int k = 1;

	Type	dp, ep,
            gammap,
			c, cosTheta, sinTheta,
			tmp;

    static Type  sx1;

    Vector<Type> xp(filterLen);

    static Vector<Type> xn(filterLen), dn(filterLen), dq2p(filterLen);

    static Matrix<Type> Up(filterLen,filterLen);
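    // Up holds the lambda-weighted triangular factor of the input data matrix;
    // dq2p is the desired-signal vector rotated by the same Givens rotations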

    // update the input signal buffer
    for( int i=filterLen; i>1; --i )
        xn(i) = xn(i-1);
    xn(1) = xk;

    if( training == "on" )
    {
        // initialization during the first filterLen iterations
        if( k <= filterLen )
        {
            // update Up
            for( int i=filterLen; i>1; --i )
                for( int j=1; j<=filterLen; ++j )
                    Up(i,j) = lambdaSqrt * Up(i-1,j);
            for( int j=1; j<=filterLen; ++j )
                    Up(1,j) = lambdaSqrt * xn(j);

            dn(k) = dk;
            for( int i=filterLen; i>1; --i )
                dq2p(i) = lambdaSqrt * dq2p(i-1);
            dq2p(1) = lambdaSqrt * dn(k);

            if( k == 1 )
            {
                sx1 = xk;
                // guard against dividing by a (near-)zero first input sample
                if( abs(sx1) < Type(1.0e-6) )
                    sx1 = Type(1.0);
            }

            // new Weight Vector
            wn(1) = dn(1) / sx1;
            if( k > 1 )
                for( int i=2; i<=k; ++i )
                {
                    tmp = 0;
                    for( int j=2; j<=i; ++j )
                        tmp += xn(k-j+1)*wn(i-j+1);

                    wn(i) = (-tmp+dn(i)) / sx1;
                }

            ep = dk - dotProd(wn,xn);

            k++;
        }
        else
        {
            xp = xn;
            gammap = 1;
            dp = dk;

            // Givens rotation
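            // each rotation zeroes one element of the new input row xp against
            // the stored triangular factor Up; gammap accumulates the product
            // of the cosines, which is used below to recover the a priori error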
            for( int i=1; i<=filterLen; ++i )
            {
                c = sqrt( Up(i,fL1-i)*Up(i,fL1-i) + xp(fL1-i)*xp(fL1-i) );
                cosTheta = Up(i,fL1-i) / c;
                sinTheta = xp(fL1-i) / c;
                gammap *= cosTheta;

                for( int j=1; j<=filterLen; ++j )
                {
                    tmp = xp(j);
                    xp(j) = cosTheta*tmp - sinTheta*Up(i,j);
                    Up(i,j) = sinTheta*tmp + cosTheta*Up(i,j);
                }

                tmp = dp;
                dp = cosTheta*tmp - sinTheta*dq2p(i);
                dq2p(i) = sinTheta*tmp + cosTheta*dq2p(i);
            }

            ep = dp / gammap;

            // new Weight Vector
            wn(1) = dq2p(filterLen) / Up(filterLen,1);
            for( int i=2; i<=filterLen; ++i )
            {
                tmp = 0;
                for( int j=2; j<=i; ++j )
                    tmp += Up(fL1-i,i-j+1) * wn(i-j+1);
                wn(i) = (-tmp+dq2p(fL1-i)) / Up(fL1-i,i);
            }

            // updating internal variables
            Up *= lambdaSqrt;
            dq2p *= lambdaSqrt;
        }

        return dk-ep;
    }
    else
        return dotProd(wn,xn);
}
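A note before the test code: each routine above keeps its filter state (the input delay line, invR, the lattice variables, and so on) in function-local static variables, so one routine supports exactly one filter per template instantiation and must be called once per input sample. Below is a minimal calling sketch for the conventional rls(), assuming the splab Vector header and the rls.h above; the signal and constants here are illustrative and not taken from the original test program.

#include <cmath>
#include <iostream>
#include <vector.h>     // splab Vector (header name assumed from the SP++ library)
#include <rls.h>

using namespace std;
using namespace splab;

int main()
{
    const int N = 200;              // number of samples
    const int order = 1;            // first order  ->  two filter taps

    Vector<float> wn(order+1);      // adaptive weights
    wn = 0.0f;                      // start from zero
    float lambda = 0.99f;           // forgetting factor, in [0.8, 1.0]
    float delta  = 1.0f;            // invR is initialized to (1/delta)*I

    for( int k=0; k<N; ++k )
    {
        float dk = sin( 0.1f*k );               // desired signal
        float xk = sin( 0.1f*k - 0.5f );        // observed, phase-shifted input
        float yk = rls( xk, dk, wn, lambda, delta );    // one adaptation step

        if( k >= N-5 )                          // print the last few iterations
            cout << xk << "\t" << dk << "\t" << yk << endl;
    }

    return 0;
}

Because of the static state, two independent rls<float> filters cannot run in the same program; this is worth keeping in mind when reusing these routines.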

Test code:

/*****************************************************************************
 *                                   rls_test.cpp
 *
 * RLS adaptive filter testing.
 *
 * Zhang Ming, 2010-10, Xi'an Jiaotong University.
 *****************************************************************************/


#define BOUNDS_CHECK

#include 
#include 
#include 
#include 
#include 
#include 


using namespace std;
using namespace splab;


typedef float   Type;
const   int     N = 1000;
const   int     orderRls = 1;
const   int     orderLrls = 16;
const   int     orderTrls = 12;
const   int     orderQrrls = 8;
const   int     sysLen = 8;
const   int     dispNumber = 10;


int main()
{
    int start = max(0,N-dispNumber);
    Vector<Type>    dn(N), xn(N), yn(N), sn(N), rn(N), en(N),
                    hn(sysLen+1), gn(orderLrls+1), wn(orderRls+1);
    Type lambda, delta, eps;


    cout << "/**************  Conventional RLS  <--->  Waveform Tracking \
*************/" << endl << endl;
    // for( int k=0; k   ... (the waveform-tracking test code is missing from the source)

    cout << "/**************  Lattice RLS  <--->  Channel Equalization \
***************/" << endl << endl;
    for( int k=0; k<=sysLen; ++k )
        hn[k] = Type( 0.1 * pow(0.5,k) );
    dn = randn( 37, Type(0.0), Type(1.0), N );
    xn = wkeep( conv(dn,hn), N, "left" );
    lambda = Type(0.99), eps = Type(0.1);

    wn.resize(orderLrls);
    wn = Type(0.0);
    // for( int k=0; k   ... (part of the channel-equalization test code is missing from the source)

    Vector<Type> Delta(orderLrls+1);
    Delta[(orderLrls+1)/2] = Type(1.0);
    for( int k=0; k<=orderLrls; ++k )
//        gn[k] = lrls( Delta[k], Type(0.0), wn, lambda, eps, "off" );
        gn[k] = eflrls( Delta[k], Type(0.0), wn, lambda, eps, "off" );
    cout << setiosflags(ios::fixed) << setprecision(4);
    cout << "The original system:   " << hn << endl;
    cout << "The inverse system:   " << gn << endl;
    cout << "The cascade system:   " << conv( gn, hn ) << endl << endl;
//

    cout << "/************  Transversal RLS  <--->  System Identification \
************/" << endl << endl;
    Vector<Type> sys(8);
    sys[0] = Type(0.1);   sys[1] = Type(0.3);   sys[2] = Type(0.0);   sys[3] = Type(-0.2);
    sys[4] = Type(-0.4);  sys[5] = Type(-0.7);  sys[6] = Type(-0.4);  sys[7] = Type(-0.2);
    xn = randn( 37, Type(0.0), Type(1.0), N );
    dn = wkeep( conv(xn,sys), N, "left" );
    lambda = Type(0.99), eps = Type(1.0);
    wn.resize(orderTrls);
    wn = Type(0.0);

    // for( int k=0; k   ... (the system-identification test code is missing from the source)

    cout << "/***************  QR Based RLS  <--->  Signal Enhancement \
 ***************/" << endl << endl;
    sn = sin( linspace( Type(0.0), Type(4*TWOPI), N ) );
    rn = randn( 37, Type(0.0), Type(1.0), N );
    dn = sn + rn;
    int delay = orderQrrls/2;
    // for( int i=0; i   ... (the rest of the signal-enhancement test code is missing from the source)

    return 0;
}

Program output:

/**************  Conventional RLS  <--->  Waveform Tracking *************/

The last 10 iterations result:

observed        desired         output          adaptive filter

-0.9010         0.4339          0.4339          -0.7975 1.2790
-0.9010         -0.4339         -0.4339         -0.7975 1.2790
-0.2225         -0.9749         -0.9749         -0.7975 1.2790
0.6235          -0.7818         -0.7818         -0.7975 1.2790
1.0000          0.0000          0.0000          -0.7975 1.2790
0.6235          0.7818          0.7818          -0.7975 1.2790
-0.2225         0.9749          0.9749          -0.7975 1.2790
-0.9010         0.4339          0.4339          -0.7975 1.2790
-0.9010         -0.4339         -0.4339         -0.7975 1.2790
-0.2225         -0.9749         -0.9749         -0.7975 1.2790

The theoretical optimal filter is:              -0.7972 1.2788


/**************  Lattice RLS  <--->  Channel Equalization ***************/

The original system:   size: 9 by 1
0.1000
0.0500
0.0250
0.0125
0.0063
0.0031
0.0016
0.0008
0.0004

The inverse system:   size: 17 by 1
0.6038
-0.0026
-0.0018
-0.0012
0.0008
0.0011
0.0011
-0.0009
8.7646
-5.0002
0.0000
0.0015
-0.0007
-0.0004
0.0014
0.0027
0.0048

The cascade system:   size: 25 by 1
0.0604
0.0299
0.0148
0.0073
0.0037
0.0020
0.0011
0.0005
0.8767
-0.0618
-0.0309
-0.0153
-0.0077
-0.0039
-0.0018
-0.0006
0.0002
-0.0016
0.0002
0.0001
0.0000
0.0000
0.0000
0.0000
0.0000


/************  Transversal RLS  <--->  System Identification ************/

The last 10 iterations result:

input signal    original system output    identified system output

1.5173                  -1.7206                 -1.7206
-0.9741                 -1.3146                 -1.3146
-1.4056                 -2.1204                 -2.1204
-0.9319                 -2.3165                 -2.3165
-0.5763                 -2.1748                 -2.1748
0.4665                  -1.2228                 -1.2228
0.5959                  0.7623                  0.7623
0.5354                  1.7904                  1.7904
-0.4990                 1.6573                  1.6573
-1.1519                 0.4866                  0.4866

The unit impulse response of original system:   size: 8 by 1
0.1000
0.3000
0.0000
-0.2000
-0.4000
-0.7000
-0.4000
-0.2000

The unit impulse response of identified system:   size: 12 by 1
0.1000
0.3000
0.0000
-0.2000
-0.4000
-0.7000
-0.4000
-0.2000
0.0000
0.0000
-0.0000
-0.0000


/***************  QR Based RLS  <--->  Signal Enhancement  ***************/

The last 10 iterations result:

noised signal           enhanced signal         original signal

1.2928                  -0.2263                 -0.2245
-1.1740                 -0.1998                 -0.1999
-1.5808                 -0.1748                 -0.1752
-1.0823                 -0.1469                 -0.1504
-0.7017                 -0.1039                 -0.1255
0.3660                  -0.0745                 -0.1005
0.5205                  -0.0628                 -0.0754
0.4851                  -0.0439                 -0.0503
-0.5242                 -0.0220                 -0.0252
-1.1519                 0.0058                  0.0000

The SNR before denoising is:   17.5084 dB
The SNR after denoising is:    121.1680 dB


Process returned 0 (0x0)   execution time : 0.187 s
Press any key to continue.

Reprinted from: https://my.oschina.net/zmjerry/blog/8531
