Differences between OpenCV 2.2 and 2.4.4: cvSetCaptureProperty and CvGaussBGModel (Gaussian background modeling)

In OpenCV 2.2, cvSetCaptureProperty and cvGetCaptureProperty position a video by keyframe, so the frame you get is not accurate. Newer versions of OpenCV fix this bug and seek by frame number; in my tests, both 2.4.2 and 2.4.4 retrieve the requested frame quite accurately.
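
For reference, here is a minimal sketch of seeking to a given frame with the C API; the file name "test.avi" and the target frame 100 are only placeholders. Under 2.2 the seek lands on the nearest keyframe, while under 2.4.2/2.4.4 it lands on the requested frame.

#include <stdio.h>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    CvCapture* capture = cvCreateFileCapture("test.avi");
    if (!capture)
        return -1;

    // jump to frame 100, read the position back, then decode that frame
    cvSetCaptureProperty(capture, CV_CAP_PROP_POS_FRAMES, 100);
    double pos = cvGetCaptureProperty(capture, CV_CAP_PROP_POS_FRAMES);
    IplImage* frame = cvQueryFrame(capture);   // owned by the capture, do not release

    printf("positioned at frame %.0f, frame %s\n", pos, frame ? "decoded" : "not decoded");
    cvReleaseCapture(&capture);
    return 0;
}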

With that solved, a new problem appeared. Our project involves foreground/background separation on video, and the program originally used CvGaussBGModel from OpenCV 2.2, which is deprecated in newer versions, even though CvGaussBGModel still dominates online discussions of Gaussian mixture background extraction. The replacements are cv::BackgroundSubtractorMOG and cv::BackgroundSubtractorMOG2. MOG stands for Mixture of Gaussians, so the old and new classes do essentially the same job; cv::BackgroundSubtractorMOG2 is said to be an implementation that pushes the Gaussian mixture model to its limits. Below we look at the differences between the two.

In OpenCV 2.2, CvGaussBGModel is defined in \opencv2\video\background_segm.hpp. By 2.4, CvGaussBGModel is no longer part of the main API and has been moved to the legacy module, which means that code using the original CvGaussBGModel may still compile but will not run properly; the 2.4 background_segm.hpp only declares the newer foreground/background segmentation classes.
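
If old code has to keep using the C structures under 2.4, the legacy module must be pulled in explicitly. A minimal sketch of the relevant includes, assuming OpenCV 2.4 was built with the legacy module:

#include <opencv2/video/background_segm.hpp>   // new C++ API: BackgroundSubtractor, BackgroundSubtractorMOG, BackgroundSubtractorMOG2
#include <opencv2/legacy/legacy.hpp>           // old C API: CvGaussBGModel, cvCreateGaussianBGModel, cvUpdateBGStatModel, ...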

BackgroundSubtractor is the base class for foreground/background segmentation; BackgroundSubtractorMOG and BackgroundSubtractorMOG2 both derive from it. In the 2.4.4 background_segm.hpp, the references behind these two methods are described as follows:

/*!
 Gaussian Mixture-based Background/Foreground Segmentation Algorithm

 The class implements the following algorithm:
 "An improved adaptive background mixture model for real-time tracking with shadow detection"
 P. KaewTraKulPong and R. Bowden,
 Proc. 2nd European Workshop on Advanced Video-Based Surveillance Systems, 2001."
 http://personal.ee.surrey.ac.uk/Personal/R.Bowden/publications/avbs01/avbs01.pdf

*/
class CV_EXPORTS_W BackgroundSubtractorMOG : public BackgroundSubtractor
{


};


/*!
 The class implements the following algorithm:
 "Improved adaptive Gausian mixture model for background subtraction"
 Z.Zivkovic
 International Conference Pattern Recognition, UK, August, 2004.
 http://www.zoranz.net/Publications/zivkovic2004ICPR.pdf
*/
class CV_EXPORTS BackgroundSubtractorMOG2 : public BackgroundSubtractor
{
public:
    //! the default constructor
    BackgroundSubtractorMOG2();
    //! the full constructor that takes the length of the history, the number of gaussian mixtures, the background ratio parameter and the noise strength
    BackgroundSubtractorMOG2(int history,  float varThreshold, bool bShadowDetection=true);
    //! the destructor
    virtual ~BackgroundSubtractorMOG2();
    //! the update operator
    virtual void operator()(InputArray image, OutputArray fgmask, double learningRate=-1);

    //! computes a background image which is the mean of all background gaussians
    virtual void getBackgroundImage(OutputArray backgroundImage) const;

    //! re-initialization method
    virtual void initialize(Size frameSize, int frameType);

    virtual AlgorithmInfo* info() const;

protected:
    Size frameSize;
    int frameType;
    Mat bgmodel;
    Mat bgmodelUsedModes;//keep track of number of modes per pixel
    int nframes;
    int history;
    int nmixtures;
    //! here it is the maximum allowed number of mixture components.
    //! Actual number is determined dynamically per pixel
    double varThreshold;
    // threshold on the squared Mahalanobis distance to decide if it is well described
    // by the background model or not. Related to Cthr from the paper.
    // This does not influence the update of the background. A typical value could be 4 sigma
    // and that is varThreshold=4*4=16; Corresponds to Tb in the paper.

    /////////////////////////
    // less important parameters - things you might change but be careful
    ////////////////////////
    float backgroundRatio;
    // corresponds to fTB=1-cf from the paper
    // TB - threshold when the component becomes significant enough to be included into
    // the background model. It is the TB=1-cf from the paper. So I use cf=0.1 => TB=0.9.
    // For alpha=0.001 it means that the mode should exist for approximately 105 frames before
    // it is considered foreground
    // float noiseSigma;
    float varThresholdGen;
    //corresponds to Tg - threshold on the squared Mahalan. dist. to decide
    //when a sample is close to the existing components. If it is not close
    //to any a new component will be generated. I use 3 sigma => Tg=3*3=9.
    //Smaller Tg leads to more generated components and higher Tg might
    //lead to a small number of components but they can grow too large
    float fVarInit;
    float fVarMin;
    float fVarMax;
    //initial variance  for the newly generated components.
    //It will influence the speed of adaptation. A good guess should be made.
    //A simple way is to estimate the typical standard deviation from the images.
    //I used here 10 as a reasonable value
    // min and max can be used to further control the variance
    float fCT;//CT - complexity reduction prior
    //this is related to the number of samples needed to accept that a component
    //actually exists. We use CT=0.05 of all the samples. By setting CT=0 you get
    //the standard Stauffer&Grimson algorithm (maybe not exact but very similar)

    //shadow detection parameters
    bool bShadowDetection;//default 1 - do shadow detection
    unsigned char nShadowDetection;//do shadow detection - insert this value as the detection result - 127 default value
    float fTau;
    // Tau - shadow threshold. The shadow is detected if the pixel is darker
    //version of the background. Tau is a threshold on how much darker the shadow can be.
    //Tau= 0.5 means that if pixel is more than 2 times darker then it is not shadow
    //See: Prati,Mikic,Trivedi,Cucchiara,"Detecting Moving Shadows...",IEEE PAMI,2003.
};


Most of the programs found online use the GaussBGModel. Which algorithm cvUpdateBGStatModel uses for the update depends on whether the model was created with cvCreateGaussianBGModel or cvCreateFGDStatModel; the parameters of both models are adjustable. GaussBGModel (mixture / single Gaussian model) is implemented in cvbgfg_gaussmix.cpp and is based on KaewTraKulPong and R. Bowden, "An Improved Adaptive Background Mixture Model for Real-time Tracking with Shadow Detection", AVSS 2001. FGDStatModel is implemented in cvbgfg_acmmm2003.cpp and is based on Liyuan Li, Weimin Huang, Irene Y.H. Gu, and Qi Tian, "Foreground Object Detection from Videos Containing Complex Background", ACM MM 2003. With GaussBGModel, the third argument of cvUpdateBGStatModel() is the learning rate, i.e. cvUpdateGaussianBGModel(IplImage* curr_frame, CvGaussBGModel* bg_model, double learningRate); with FGDStatModel the update goes through cvUpdateFGDStatModel(IplImage* curr_frame, CvFGDStatModel* model, double), whose third parameter is unused.
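
Since only GaussBGModel gets a full example below, here is a minimal sketch of the FGDStatModel path (pFrame stands for the current IplImage* frame from the capture loop; default parameters are used):

CvBGStatModel* fgd_model = cvCreateFGDStatModel(pFrame, NULL);   // NULL -> default CvFGDStatModelParams

// for every subsequent frame:
cvUpdateBGStatModel(pFrame, fgd_model, -1);     // the third argument is ignored for FGDStatModel
// fgd_model->foreground and fgd_model->background now hold the current mask and background image

cvReleaseBGStatModel(&fgd_model);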

Usage of GaussBGModel:

CvGaussBGModel* bg_model = NULL;

CvGaussBGStatModelParams params;
params.win_size = 2000;          // number of frames in the initial learning phase; learning rate a = 1/win_size
params.bg_threshold = 0.7;       // threshold for matching a pixel against one of the Gaussian components
params.weight_init = 0.05;       // initial weight of a newly created Gaussian component
params.variance_init = 30;       // initial variance of a newly created Gaussian component
params.minArea = 15.f;           // denoising: detected regions smaller than minArea are discarded as noise
params.n_gauss = 5;              // K = number of Gaussians in the mixture
params.std_threshold = 2.5;      // threshold (in standard deviations) for deciding whether a pixel is background

bg_model = (CvGaussBGModel*)cvCreateGaussianBGModel(pFrame, &params);   // build the model from the first frame

cvSmooth(pFrame, pFrame, CV_GAUSSIAN, 3, 0, 0, 0);

cvUpdateBGStatModel(pFrame, (CvBGStatModel*)bg_model, -0.00001);        // update the background model
cvCopy(bg_model->foreground, pFrImg, 0);
cvCopy(bg_model->background, pBkImg, 0);

cvErode(pFrImg, pFrImg, 0, 1);
cvDilate(pFrImg, pFrImg, 0, 3);

cvShowImage("video", pFrame);
cvShowImage("foreground", pFrImg);
cvCopy(pFrImg, FirstImg, 0);

cvReleaseBGStatModel((CvBGStatModel**)&bg_model);

Usage of BackgroundSubtractorMOG2:

cv::VideoCapture capture;
capture.open(videoFile);            // videoFile: path to the input video

cv::BackgroundSubtractorMOG2 mog;

cv::Mat foreground;
cv::Mat background;

cv::Mat frame;
long frameNo = 0;
while (capture.read(frame))
{
    ++frameNo;

    // detect the moving foreground and update the background model
    mog(frame, foreground, 0.001);

    // erode, then dilate, to remove small noise blobs from the mask
    cv::erode(foreground, foreground, cv::Mat());
    cv::dilate(foreground, foreground, cv::Mat());

    mog.getBackgroundImage(background);   // current background image

    cv::imshow("video", frame);
    cv::imshow("foreground", foreground);
    cv::imshow("background", background);

    if (cv::waitKey(10) >= 0)             // needed so the windows actually refresh
        break;
}

As you can see, the new code lives in the cv namespace, and instead of IplImage the input and output are cv::Mat, which makes programming much more convenient. If you are porting from an old version you also need to convert between IplImage and Mat; see the two references below, followed by a short conversion sketch:

How to convert between cv::Mat, CvMat and IplImage

Conversion between CvMat, Mat and IplImage, explained with examples
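
For the common cases, a short conversion sketch (OpenCV 2.4.x APIs; "frame.jpg" is a placeholder):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

IplImage* ipl = cvLoadImage("frame.jpg");      // image loaded through the old C API

cv::Mat wrapped(ipl);                          // Mat header over the IplImage pixels, no copy
cv::Mat copied = cv::cvarrToMat(ipl, true);    // deep copy, independent of ipl

IplImage iplHeader = copied;                   // IplImage header over the Mat pixels, no copy
// &iplHeader can be passed to old C functions; 'copied' still owns the data

cvReleaseImage(&ipl);                          // 'wrapped' must not be used after this point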


One more small programming tip: Visual Studio 2012 reportedly has a debugger window that shows the image behind a Mat or IplImage at the current point of execution, which is very exciting!


REFERENCE:

Gaussian background modeling with the old 2.2 API: http://hi.baidu.com/lin65505578/item/bab3e490cd0c9d15924f41dd

Gaussian background modeling results with the new 2.4 API: http://blog.csdn.net/loadstar_kun/article/details/8548253
