Study Notes on RANSAC, ICP, and Graph Optimization with g2o

RANSAC

Random Sample Consensus (RANdom SAmple Consensus, RANSAC)
Reference: RANSAC WIKI
RANSAC iteratively estimates the parameters of a mathematical model from a set of observed data that contains outliers. It is a non-deterministic algorithm, in the sense that it produces a reasonable result only with a certain probability; allowing more iterations increases this probability.

NOTE: the data are divided into outliers and inliers.
Unlike ordinary fitting (e.g. the simple least squares method), which uses all of the data, RANSAC separates out the outliers and estimates the model from the inliers only. This improves accuracy, but the algorithm is sensitive to its parameters, so good parameter values have to be chosen.

Procedure:

The input to the RANSAC algorithm is a set of observed data values, a way of fitting some kind of model to the observations, and some confidence parameters. RANSAC achieves its goal by repeating the following steps:
1. Select a random subset of the original data. Call this subset the hypothetical inliers.
2. A model is fitted to the set of hypothetical inliers.
3. All other data are then tested against the fitted model. Those points that fit the estimated model well, according to some model-specific loss function, are considered as part of the consensus set.
4. The estimated model is reasonably good if sufficiently many points have been classified as part of the consensus set.
5. Afterwards, the model may be improved by reestimating it using all members of the consensus set.
The set of inliers obtained for the fitting model is called the consensus set. RANSAC iteratively repeats these steps until the consensus set obtained in some iteration has enough inliers.

In other words:
1. Select a random subset S of the original data as the hypothetical inliers.
2. Fit a model M to this subset.
3. Test the remaining data (the complement of S) against M: the samples whose error with respect to M is below some threshold t, together with S, form a set S*. S* is taken as the inlier set; it constitutes the consensus set of S. In the algorithm this is simply: if a point's error is below the threshold, add it to the consensus set.
4. If the number of points in S* is at least d, the model parameters are considered correct, and a new model M* is re-estimated from S*; then a new random subset S is drawn and the process above is repeated.

Algorithm:
Given:
data – a set of observed data points
model – a model that can be fitted to data points
n – the minimum number of data values required to fit the model
k – the maximum number of iterations allowed in the algorithm
t – a threshold value for determining when a data point fits a model
d – the number of close data values required to assert that a model fits well to data

Return:
bestfit – model parameters which best fit the data (or null if no good model is found)

iterations = 0
bestfit = null
besterr = something really large
while iterations < k {
    maybeinliers = n randomly selected values from data
    maybemodel = model parameters fitted to maybeinliers
    alsoinliers = empty set
    for every point in data not in maybeinliers {
        if point fits maybemodel with an error smaller than t
             add point to alsoinliers
    }
    if the number of elements in alsoinliers is > d {
        % this implies that we may have found a good model
        % now test how good it is
        bettermodel = model parameters fitted to all points in maybeinliers and alsoinliers
        thiserr = a measure of how well bettermodel fits these points
        if thiserr < besterr {
            bestfit = bettermodel
            besterr = thiserr
        }
    }
    increment iterations
}
return bestfit
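
To ground the pseudocode, here is a minimal, self-contained C++ sketch (not from the original notes) that applies RANSAC to fit a 2D line y = a*x + b. The minimal sample size is n = 2, and the values of k, t, and d passed in main() are illustrative assumptions; the variable names mirror the pseudocode above.

#include <cmath>
#include <cstddef>
#include <iostream>
#include <limits>
#include <random>
#include <vector>

struct Point { double x, y; };
struct Line  { double a, b; };  // the model: y = a*x + b

// Fit the minimal model (n = 2): the line through two sample points.
Line fitTwoPoints(const Point& p, const Point& q) {
    double a = (q.y - p.y) / (q.x - p.x);
    return {a, p.y - a * p.x};
}

// Least-squares refit over the whole consensus set (step 5 above).
Line fitLeastSquares(const std::vector<Point>& pts) {
    double sx = 0, sy = 0, sxx = 0, sxy = 0, n = double(pts.size());
    for (const Point& p : pts) { sx += p.x; sy += p.y; sxx += p.x * p.x; sxy += p.x * p.y; }
    double a = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    return {a, (sy - a * sx) / n};
}

double residual(const Line& m, const Point& p) { return std::abs(m.a * p.x + m.b - p.y); }

// k: iterations, t: inlier threshold, d: minimum consensus size.
Line ransacLine(const std::vector<Point>& data, int k, double t, std::size_t d) {
    std::mt19937 rng(42);
    std::uniform_int_distribution<std::size_t> pick(0, data.size() - 1);
    Line bestfit{0, 0};
    double besterr = std::numeric_limits<double>::max();
    for (int iter = 0; iter < k; ++iter) {
        std::size_t i = pick(rng), j = pick(rng);
        if (i == j || data[i].x == data[j].x) continue;   // degenerate sample
        Line maybemodel = fitTwoPoints(data[i], data[j]);
        std::vector<Point> consensus;                     // maybeinliers + alsoinliers
        for (const Point& p : data)
            if (residual(maybemodel, p) < t) consensus.push_back(p);
        if (consensus.size() > d) {                       // enough support: refit and score
            Line bettermodel = fitLeastSquares(consensus);
            double thiserr = 0;
            for (const Point& p : consensus) thiserr += residual(bettermodel, p);
            thiserr /= double(consensus.size());
            if (thiserr < besterr) { besterr = thiserr; bestfit = bettermodel; }
        }
    }
    return bestfit;
}

int main() {
    // Ten points exactly on y = 2x + 1, plus two gross outliers.
    std::vector<Point> data;
    for (int i = 0; i < 10; ++i) data.push_back({double(i), 2.0 * i + 1.0});
    data.push_back({3.0, 40.0});
    data.push_back({7.0, -25.0});
    Line m = ransacLine(data, /*k=*/200, /*t=*/0.5, /*d=*/6);
    std::cout << "a = " << m.a << ", b = " << m.b << "\n";  // expect a ~ 2, b ~ 1
}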

t and d have to be set according to the specific model, but the number of iterations k can be derived theoretically.
Suppose each iteration samples n points, and we want the probability of success after k iterations to be p. Let
w = number of inliers in data / number of points in data
Usually only a rough value of w is available. w^n is the probability that all n selected points are inliers; 1 − w^n is the probability that at least one of the n points is not an inlier; and (1 − w^n)^k is the probability that in k repetitions we never draw n points that are all inliers, i.e. the failure probability 1 − p.
Therefore: p = 1 − (1 − w^n)^k, which solved for k gives k = log(1 − p) / log(1 − w^n).

So if we want a high success probability, say p = 0.99: with n fixed, a larger k gives a larger p; with w fixed, a larger n requires a larger k. Since w is usually unknown, it is better to choose a small n.
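
As a small worked example of the formula above (the values of p, w, and n are assumptions chosen for illustration):

// Computes the required RANSAC iteration count k from p, w, and n.
#include <cmath>
#include <iostream>

int main() {
    double p = 0.99;  // desired probability of drawing at least one all-inlier sample
    double w = 0.5;   // estimated inlier ratio
    int    n = 2;     // minimal sample size for the model (e.g. 2 points for a 2D line)
    double k = std::log(1.0 - p) / std::log(1.0 - std::pow(w, n));
    std::cout << "k >= " << std::ceil(k) << " iterations\n";  // prints: k >= 17
}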

For a more robust choice, the standard deviation of k (or a multiple of it) can be added to the computed k.

----------

ICP

Reference: ICP WIKI
ICP (Iterative Closest Point): minimizes the difference between two point clouds.

Overview: one point cloud (the reference, or target) is kept fixed, while the other (the source) is transformed to best match it; the transform (rotation and translation) between them is revised iteratively until the error is minimized.

Input: the two point clouds, an initial estimate of the transform between them, and criteria for stopping the iterations.
Output: the refined transform.

Procedure:
1. For each point in the source point cloud, find the closest point in the reference (target) point cloud.
2. Estimate the rotation and translation that best align each source point to its match, by minimizing a mean squared error cost.
3. Apply the estimated transformation to the source point cloud.
4. Iterate (re-associate the points and repeat); a usage sketch follows below.
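
As a concrete illustration, here is a minimal sketch using PCL's pcl::IterativeClosestPoint, one common implementation of this loop. PCL is not mentioned in the original notes, and the toy data and parameter values below are assumptions:

#include <iostream>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/registration/icp.h>

int main() {
    pcl::PointCloud<pcl::PointXYZ>::Ptr source(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PointCloud<pcl::PointXYZ>::Ptr target(new pcl::PointCloud<pcl::PointXYZ>);
    // Toy data: a 4x4x4 grid; the target is the source shifted by 0.1 along x.
    for (float x = 0.0f; x <= 0.75f; x += 0.25f)
        for (float y = 0.0f; y <= 0.75f; y += 0.25f)
            for (float z = 0.0f; z <= 0.75f; z += 0.25f) {
                source->push_back(pcl::PointXYZ(x, y, z));
                target->push_back(pcl::PointXYZ(x + 0.1f, y, z));
            }
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(source);               // the cloud that will be moved
    icp.setInputTarget(target);               // the cloud that stays fixed
    icp.setMaximumIterations(50);             // stopping criterion: iteration cap
    icp.setTransformationEpsilon(1e-8);       // stopping criterion: transform change
    pcl::PointCloud<pcl::PointXYZ> aligned;
    icp.align(aligned);                       // runs the match / estimate / apply loop
    if (icp.hasConverged())
        std::cout << icp.getFinalTransformation() << std::endl;  // the 4x4 transform
    return 0;
}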

Graph Optimization Theory

Reference: a graph optimization tutorial.
A graph is a structure made up of vertices (Vertex) and edges (Edge), and graph theory is the study of graphs. We write a graph as G = {V, E}, where V is the vertex set and E is the edge set. A vertex can represent a state variable, and an edge represents a measurement connecting state variables. An edge that connects more than two vertices makes the graph a hypergraph.
Most of the time we solve the optimization iteratively. Starting from an initial value x0, we repeatedly look, near the current value, for a direction in which the objective function decreases (the negative gradient), then take a step along that direction so the function value drops a little. Iterating like this, we can in principle find a local minimum of any (differentiable) objective.

Procedure:
Choose the types of nodes and edges for your graph, and decide how they are parameterized;
Add the actual nodes and edges to the graph;
Choose an initial value and start iterating;
In each iteration, compute the Jacobian and Hessian matrices for the current estimate;
Solve the sparse linear system H_k Δx = −b_k to obtain the update direction;
Keep iterating with GN (Gauss-Newton) or LM (Levenberg-Marquardt); when the iterations finish, return the optimized values (a minimal numeric sketch follows below).
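
To show what such an iteration looks like in code, here is a minimal Gauss-Newton sketch (not from the original notes) that fits the parameters a, b of the curve y = exp(a*x + b) to synthetic data. It assumes the Eigen library; the ground truth, data, and initial value are illustrative assumptions. Note that GN needs a reasonable initial value, so we start near the optimum.

#include <Eigen/Dense>
#include <cmath>
#include <iostream>
#include <vector>

int main() {
    // Synthetic measurements from ground truth a = 1.0, b = 2.0.
    std::vector<double> xs, ys;
    for (int i = 0; i < 20; ++i) {
        double x = i / 20.0;
        xs.push_back(x);
        ys.push_back(std::exp(1.0 * x + 2.0));
    }
    Eigen::Vector2d est(0.8, 1.6);  // initial value x0 (must be reasonable for GN)
    for (int iter = 0; iter < 20; ++iter) {
        Eigen::Matrix2d H = Eigen::Matrix2d::Zero();  // approximate Hessian J^T * J
        Eigen::Vector2d g = Eigen::Vector2d::Zero();  // gradient-side term b = J^T * e
        for (std::size_t i = 0; i < xs.size(); ++i) {
            double pred = std::exp(est(0) * xs[i] + est(1));
            double e = ys[i] - pred;                    // residual of one "edge"
            Eigen::Vector2d J(-xs[i] * pred, -pred);    // Jacobian de/d(a,b)
            H += J * J.transpose();
            g += J * e;
        }
        Eigen::Vector2d dx = H.ldlt().solve(-g);        // solve H * dx = -b
        est += dx;                                      // take the step
        if (dx.norm() < 1e-10) break;                   // converged
    }
    std::cout << "a = " << est(0) << ", b = " << est(1) << "\n";  // ~1.0, ~2.0
}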

g2o

General Graph Optimization (g2o)
Procedure for choosing an optimization method:
1. Choose a linear equation solver from PCG, CSparse, or Cholmod; the actual implementations live in the g2o/solvers folder.
2. Choose a BlockSolver.
3. Choose an iteration strategy from GN, LM, or Dogleg (a construction sketch follows below).
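
A hedged sketch of wiring these three choices together in C++ is shown below. The class names are real g2o types, but constructor signatures differ between g2o releases; this follows the std::unique_ptr-based API of recent versions, and BlockSolver_6_3 is just one common choice:

#include <memory>
#include <g2o/core/sparse_optimizer.h>
#include <g2o/core/block_solver.h>
#include <g2o/core/optimization_algorithm_levenberg.h>
#include <g2o/solvers/csparse/linear_solver_csparse.h>

int main() {
    using BlockSolver = g2o::BlockSolver_6_3;  // 6-DoF poses, 3-DoF landmarks
    // 1. Linear solver (PCG / CSparse / Cholmod) -- CSparse chosen here.
    auto linearSolver =
        std::make_unique<g2o::LinearSolverCSparse<BlockSolver::PoseMatrixType>>();
    // 2. Block solver wrapping the linear solver.
    auto blockSolver = std::make_unique<BlockSolver>(std::move(linearSolver));
    // 3. Iteration strategy (GN / LM / Dogleg) -- Levenberg-Marquardt chosen here.
    auto* algorithm =
        new g2o::OptimizationAlgorithmLevenberg(std::move(blockSolver));

    g2o::SparseOptimizer optimizer;
    optimizer.setAlgorithm(algorithm);   // the optimizer takes over the algorithm
    // ... add vertices (states) and edges (measurements) here, then:
    // optimizer.initializeOptimization();
    // optimizer.optimize(100);  // iteration count (assumed value)
    return 0;
}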
