SLAM: Dissecting the RANSAC Algorithm

The RANSAC algorithm comes up constantly when dealing with feature mismatches. No algorithm, large or small, is understood without study, so today let's explore this classic algorithm together!

I have already covered this material in the SLAM knowledge section; it is pulled out here on its own. If you still have questions, please leave a comment.


An introductory blog post on RANSAC: https://blog.csdn.net/robinhjwy/article/details/79174914

Another blog post that is also very helpful, for reference: https://blog.csdn.net/YunlinWang/article/details/78147026

An example of the RANSAC algorithm in ORB-SLAM2: https://blog.csdn.net/qq_18661939/article/details/52077491

 

Algorithm pseudocode——

The algorithm in pseudocode form:
Input:
data —— a set of observations
model —— a model that can be fitted to the data
n —— the minimum number of data points required to fit the model
k —— the number of iterations the algorithm runs
t —— a threshold for deciding whether a data point fits the model
d —— the number of points required to assert that the model fits the data well
Output:
best_model —— the model parameters that best fit the data (null if no good model is found)
best_consensus_set —— the data points from which the model was estimated
best_error —— the error of the estimated model with respect to the data

iterations = 0
best_model = null
best_consensus_set = null
best_error = infinity
while ( iterations < k )
    maybe_inliers = n points selected at random from the data
    maybe_model = model parameters fitted to maybe_inliers
    consensus_set = maybe_inliers

    for ( every point in the data not in maybe_inliers )
        if ( the point fits maybe_model with an error smaller than t )
            add the point to consensus_set
    if ( the number of elements in consensus_set is greater than d )
        a good model has been found; now test how good it is
        better_model = model parameters fitted to all points in consensus_set
        this_error = a measure of how well better_model fits these points
        if ( this_error < best_error )
            a better model than any before; keep it until an even better one appears
            best_model = better_model
            best_consensus_set = consensus_set
            best_error = this_error
    increment iterations
return best_model, best_consensus_set, best_error
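The pseudocode above can be turned into a small runnable sketch. The following is a minimal C++ example that fits a line y = a*x + b with RANSAC; every name and numeric setting in it is illustrative and not taken from any SLAM codebase:

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>
#include <vector>

struct Pt { double x, y; };

// Exact fit of y = a*x + b through two points; fails for vertical lines.
static bool fitTwoPoints(const Pt &p, const Pt &q, double &a, double &b) {
    if (std::fabs(q.x - p.x) < 1e-12) return false;
    a = (q.y - p.y) / (q.x - p.x);
    b = p.y - a * p.x;
    return true;
}

// The RANSAC loop from the pseudocode: k iterations, inlier threshold t,
// and a model is accepted only when it has at least d inliers.
static bool ransacLine(const std::vector<Pt> &data, int k, double t, int d,
                       double &bestA, double &bestB) {
    bool found = false;
    size_t bestInliers = 0;
    for (int it = 0; it < k; ++it) {
        // maybe_inliers: n = 2 points chosen at random
        const Pt &p = data[std::rand() % data.size()];
        const Pt &q = data[std::rand() % data.size()];
        double a, b;
        if (!fitTwoPoints(p, q, a, b)) continue;  // degenerate sample
        // consensus_set: count every point whose residual is below t
        size_t inliers = 0;
        for (const Pt &pt : data)
            if (std::fabs(a * pt.x + b - pt.y) < t) ++inliers;
        // keep the model if it is good enough and better than the best so far
        if (inliers >= static_cast<size_t>(d) && inliers > bestInliers) {
            bestInliers = inliers;
            bestA = a;
            bestB = b;
            found = true;
        }
    }
    return found;
}
```

For brevity this sketch skips the refit step (the better_model computed from the full consensus set); a faithful implementation would re-estimate with least squares over the consensus set, as the code later in this article does.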
 

TODO: this part still needs revision

 

Pros and cons——

The advantage of RANSAC is that it estimates model parameters robustly: it can estimate high-accuracy parameters even from a data set that contains a large fraction of outliers. One disadvantage is that there is no upper bound on the number of iterations needed to compute the parameters; if an upper bound on the iterations is imposed, the result obtained may not be optimal, and may even be wrong. RANSAC produces a credible model only with a certain probability, and this probability grows with the number of iterations. Another disadvantage is that it requires problem-specific thresholds to be set.
RANSAC can only estimate a single model from a given data set; if two (or more) model instances are present, RANSAC cannot find the other models.
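The relationship between confidence and iteration count can be made precise: with inlier ratio w and n points per minimal sample, the probability that at least one of k samples is all-inlier is 1 - (1 - w^n)^k, so k = ln(1 - p) / ln(1 - w^n) iterations suffice for confidence p. A small sketch (the numbers used below are illustrative):

```cpp
#include <cmath>

// Number of RANSAC iterations needed so that, with probability p, at least
// one random sample of n points is drawn entirely from the inliers,
// assuming an inlier ratio of w in the data.
static int ransacIterations(double p, double w, int n) {
    return static_cast<int>(std::ceil(std::log(1.0 - p) / std::log(1.0 - std::pow(w, n))));
}
```

With half the data as outliers (w = 0.5), a 2-point line model needs only 17 iterations for 99% confidence, while an 8-point model needs 1177 — which is why minimal parameterizations matter so much in practice.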

 

RANSAC code implementation——

(https://www.cnblogs.com/weizc/p/5257496.html)

The code below fits a straight line to 2D points.

#include <math.h>
#include "LineParamEstimator.h"

LineParamEstimator::LineParamEstimator(double delta) : m_deltaSquared(delta*delta) {}

// Exact fit: the line through two points, stored as a unit normal (nx, ny)
// plus a point on the line.
void LineParamEstimator::estimate(std::vector<Point2D *> &data,
                                  std::vector<double> &parameters)
{
    parameters.clear();
    if(data.size()<2)
        return;
    double nx = data[1]->y - data[0]->y;
    double ny = data[0]->x - data[1]->x; // if the line's slope is k, the normal's slope is -1/k
    double norm = sqrt(nx*nx + ny*ny);

    parameters.push_back(nx/norm);
    parameters.push_back(ny/norm);
    parameters.push_back(data[0]->x);
    parameters.push_back(data[0]->y);
}
  
  
void LineParamEstimator::leastSquaresEstimate(std::vector<Point2D *> &data,
                                              std::vector<double> &parameters)
{
    double meanX, meanY, nx, ny, norm;
    double covMat11, covMat12, covMat21, covMat22; // the entries of the symmetric covariance matrix
    int i, dataSize = data.size();

    parameters.clear();
    if(data.size()<2)
        return;

    meanX = meanY = 0.0;
    covMat11 = covMat12 = covMat21 = covMat22 = 0;
    for(i=0; i<dataSize; i++) {
        meanX += data[i]->x;
        meanY += data[i]->y;

        covMat11 += data[i]->x * data[i]->x;
        covMat12 += data[i]->x * data[i]->y;
        covMat22 += data[i]->y * data[i]->y;
    }

    meanX/=dataSize;
    meanY/=dataSize;

    covMat11 -= dataSize*meanX*meanX;
    covMat12 -= dataSize*meanX*meanY;
    covMat22 -= dataSize*meanY*meanY;
    covMat21 = covMat12;

    if(covMat11<1e-12) {
        nx = 1.0;
        ny = 0.0;
    }
    else {     // lamda1 is the largest eigenvalue of the covariance matrix
               // and is used to compute the eigenvector corresponding to the smallest
               // eigenvalue, which isn't computed explicitly.
        double lamda1 = (covMat11 + covMat22 + sqrt((covMat11-covMat22)*(covMat11-covMat22) + 4*covMat12*covMat12)) / 2.0;
        nx = -covMat12;
        ny = lamda1 - covMat22;
        norm = sqrt(nx*nx + ny*ny);
        nx/=norm;
        ny/=norm;
    }
    parameters.push_back(nx);
    parameters.push_back(ny);
    parameters.push_back(meanX);
    parameters.push_back(meanY);
}
  
  
bool LineParamEstimator::agree(std::vector<double> &parameters, Point2D &data)
{
    double signedDistance = parameters[0]*(data.x-parameters[2]) + parameters[1]*(data.y-parameters[3]);
    return ((signedDistance*signedDistance) < m_deltaSquared);
}

----------------------------------------------------------------------------------------------------------------------------------------------------------------

template<class T, class S>
double Ransac<T,S>::compute(std::vector<S> &parameters,
                            ParameterEsitmator<T,S> *paramEstimator,
                            std::vector<T> &data,
                            int numForEstimate)
{
    std::vector<T *> leastSquaresEstimateData;
    int numDataObjects = data.size();
    int numVotesForBest = -1;
    int *arr = new int[numForEstimate]; // numForEstimate is the minimum number of points needed to fit the model; for the line in this example it is 2
    short *curVotes = new short[numDataObjects];  // one if data[i] agrees with the current model, otherwise zero
    short *bestVotes = new short[numDataObjects]; // one if data[i] agrees with the best model, otherwise zero

    // there are fewer data objects than the minimum required for an exact fit
    if(numDataObjects < numForEstimate)
        return 0;

    // Enumerate every possible line and keep the one with the smallest error. For a
    // 100-point line fit this takes about 100*99*0.5 = 4950 fits, which is clearly
    // expensive; in practice random subsets are sampled instead.
    computeAllChoices(paramEstimator, data, numForEstimate,
                      bestVotes, curVotes, numVotesForBest, 0, data.size(), numForEstimate, 0, arr);

    // compute the least squares estimate using the largest subset
    for(int j=0; j<numDataObjects; j++) {
        if(bestVotes[j])
            leastSquaresEstimateData.push_back(&(data[j]));
    }
    paramEstimator->leastSquaresEstimate(leastSquaresEstimateData, parameters);

    delete [] arr;
    delete [] bestVotes;
    delete [] curVotes;

    return (double)leastSquaresEstimateData.size()/(double)numDataObjects;
}

 

Some blog posts about RANSAC online can be misleading; make sure you truly understand the procedure.
