Summary of the PCL ICP Algorithm

ICP (Iterative Closest Point): a point cloud registration algorithm

The classic ICP algorithm (see Chen and Medioni [2] and Besl and McKay [1]), compared with the Kabsch algorithm, is much more widely used for image and environment registration. See also the k-d tree algorithm, which is commonly used to accelerate the correspondence search. The workflow is roughly as follows:
Input: the reference (model) point cloud, the source (data) point cloud, an initial estimate of the transformation (optional, used to speed up registration), and a convergence or stopping criterion.
1. Find correspondences between the reference point cloud and the source point cloud.
2. Estimate a 4x4 transformation matrix, containing rotation and translation, by a least-squares fit that minimizes the mean squared distance between corresponding points
(i.e., the transform that best aligns the transformed source cloud with the reference cloud).
3. Apply the estimated transformation to the source point cloud.
4. Iterate the steps above until the stopping criterion is met.
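To make the loop concrete, here is a minimal, self-contained Python sketch of the iteration (my own illustration, not PCL's implementation). It uses scipy's k-d tree for the correspondence search and a Kabsch-style SVD fit for the transform estimate; names such as icp_2d are illustrative.

import numpy as np
from scipy.spatial import cKDTree

def icp_2d(source, target, max_iters=50, tol=1e-6):
    """Align source (N x 2) onto target (M x 2); returns R (2x2) and t (2,)."""
    src = source.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    tree = cKDTree(target)                       # k-d tree over the reference cloud
    prev_err = np.inf
    for _ in range(max_iters):
        # 1. correspondences: nearest reference point for every source point
        dists, idx = tree.query(src)
        matched = target[idx]
        # 2. least-squares rotation/translation (Kabsch-style, via SVD)
        src_c, tgt_c = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_c).T @ (matched - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = tgt_c - R @ src_c
        # 3. apply the transform to the source cloud
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        # 4. iterate until the mean error stops improving
        err = dists.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total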

Below is a small example of registering a model point cloud and a data point cloud in 2D (in a 2D coordinate system the rotation matrix is 2x2 and the translation is 2x1, giving an [R|T] matrix of shape 2x3).
[Figure 1: 2D point cloud registration example. Retrieved from https://blog.csdn.net/tercel_zhang/article/details/79713644]

The result is as follows:
[Figure 2: registration result]
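For reference, applying a 2x3 [R|T] matrix to 2D points can be written with homogeneous coordinates, as in the short numpy sketch below (all values and names are illustrative):

import numpy as np

theta = np.deg2rad(30.0)                        # example rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = np.array([[1.0], [2.0]])                    # example translation (2x1)
RT = np.hstack([R, T])                          # the 2x3 [R|T] matrix

points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])    # N x 2 points
points_h = np.hstack([points, np.ones((len(points), 1))])  # N x 3 homogeneous
transformed = points_h @ RT.T                               # N x 2 result
print(transformed)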

  1. First, establish for every point in the reference cloud its relation to the data cloud (a point-to-point correspondence). If two points are close enough to each other, they are taken as a point pair. For this step we can use the nearest neighbor module in sklearn, which builds on the idea of the K-NN algorithm: put simply, the points within a certain distance of a query point determine how that point is matched (or classified). In practice, weights can also be applied (radius, class, ...). The sklearn docstring is reproduced below, followed by a short usage sketch for building point pairs.

[Figure 3: K-nearest neighbors illustration. Retrieved from https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm]

class NearestNeighbors(NeighborsBase, KNeighborsMixin,
                       RadiusNeighborsMixin, UnsupervisedMixin):
    """Unsupervised learner for implementing neighbor searches.

    Read more in the :ref:`User Guide `.

    Parameters
    ----------
    n_neighbors : int, optional (default = 5)
        Number of neighbors to use by default for :meth:`kneighbors` queries.

    radius : float, optional (default = 1.0)
        Range of parameter space to use by default for :meth:`radius_neighbors`
        queries.

    algorithm : {'auto', 'ball_tree', 'kd_tree', 'brute'}, optional
        Algorithm used to compute the nearest neighbors:

        - 'ball_tree' will use :class:`BallTree`
        - 'kd_tree' will use :class:`KDTree`
        - 'brute' will use a brute-force search.
        - 'auto' will attempt to decide the most appropriate algorithm
          based on the values passed to :meth:`fit` method.

        Note: fitting on sparse input will override the setting of
        this parameter, using brute force.

    leaf_size : int, optional (default = 30)
        Leaf size passed to BallTree or KDTree.  This can affect the
        speed of the construction and query, as well as the memory
        required to store the tree.  The optimal value depends on the
        nature of the problem.

    metric : string or callable, default 'minkowski'
        metric to use for distance computation. Any metric from scikit-learn
        or scipy.spatial.distance can be used.

        If metric is a callable function, it is called on each
        pair of instances (rows) and the resulting value recorded. The callable
        should take two arrays as input and return one value indicating the
        distance between them. This works for Scipy's metrics, but is less
        efficient than passing the metric name as a string.

        Distance matrices are not supported.

        Valid values for metric are:

        - from scikit-learn: ['cityblock', 'cosine', 'euclidean', 'l1', 'l2',
          'manhattan']

        - from scipy.spatial.distance: ['braycurtis', 'canberra', 'chebyshev',
          'correlation', 'dice', 'hamming', 'jaccard', 'kulsinski',
          'mahalanobis', 'minkowski', 'rogerstanimoto', 'russellrao',
          'seuclidean', 'sokalmichener', 'sokalsneath', 'sqeuclidean',
          'yule']

        See the documentation for scipy.spatial.distance for details on these
        metrics.

    p : integer, optional (default = 2)
        Parameter for the Minkowski metric from
        sklearn.metrics.pairwise.pairwise_distances. When p = 1, this is
        equivalent to using manhattan_distance (l1), and euclidean_distance
        (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used.

    metric_params : dict, optional (default = None)
        Additional keyword arguments for the metric function.

    n_jobs : int or None, optional (default=None)
        The number of parallel jobs to run for neighbors search.
        ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.
        ``-1`` means using all processors. See :term:`Glossary `
        for more details.

    Examples
    --------
      >>> import numpy as np
      >>> from sklearn.neighbors import NearestNeighbors
      >>> samples = [[0, 0, 2], [1, 0, 0], [0, 0, 1]]

      >>> neigh = NearestNeighbors(2, 0.4)
      >>> neigh.fit(samples)  #doctest: +ELLIPSIS
      NearestNeighbors(...)

      >>> neigh.kneighbors([[0, 0, 1.3]], 2, return_distance=False)
      ... #doctest: +ELLIPSIS
      array([[2, 0]]...)

      >>> nbrs = neigh.radius_neighbors([[0, 0, 1.3]], 0.4, return_distance=False)
      >>> np.asarray(nbrs[0][0])
      array(2)

    See also
    --------
    KNeighborsClassifier
    RadiusNeighborsClassifier
    KNeighborsRegressor
    RadiusNeighborsRegressor
    BallTree

    Notes
    -----
    See :ref:`Nearest Neighbors ` in the online documentation
    for a discussion of the choice of ``algorithm`` and ``leaf_size``.

    https://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm
    """
  2. From the collected point pairs we can compute the translations in X and Y and the rotation about the centroid, i.e., the [R|T] matrix (shape 2x3) mentioned above. The method is as follows: compute the mean of the reference points and the mean of the points to be registered, then obtain the rotation angle from the centered cross-covariance sums (via atan2) and, from it, the X and Y translations. A usage example follows the code below.
import math

def point_based_matching(point_pairs):
    """Closed-form 2D least-squares fit: returns (rot_angle, translation_x,
    translation_y) that maps each (x, y) onto its paired (xp, yp)."""
    n = len(point_pairs)
    if n == 0:
        return None, None, None

    # means of the two point sets
    x_mean = y_mean = xp_mean = yp_mean = 0.0
    for (x, y), (xp, yp) in point_pairs:
        x_mean += x
        y_mean += y
        xp_mean += xp
        yp_mean += yp
    x_mean /= n
    y_mean /= n
    xp_mean /= n
    yp_mean /= n

    # cross-covariance sums of the centered coordinates
    s_x_xp = s_y_yp = s_x_yp = s_y_xp = 0.0
    for (x, y), (xp, yp) in point_pairs:
        s_x_xp += (x - x_mean) * (xp - xp_mean)
        s_y_yp += (y - y_mean) * (yp - yp_mean)
        s_x_yp += (x - x_mean) * (yp - yp_mean)
        s_y_xp += (y - y_mean) * (xp - xp_mean)

    # rotation angle and translation that minimize the squared error
    rot_angle = math.atan2(s_x_yp - s_y_xp, s_x_xp + s_y_yp)
    translation_x = xp_mean - (x_mean * math.cos(rot_angle) - y_mean * math.sin(rot_angle))
    translation_y = yp_mean - (x_mean * math.sin(rot_angle) + y_mean * math.cos(rot_angle))

    return rot_angle, translation_x, translation_y
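A quick check of the function (illustrative values only): recover a known rotation and translation from three point pairs.

import math

# points rotated by 30 degrees and shifted by (1, 2)
theta = math.radians(30.0)
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
dst = [(x * math.cos(theta) - y * math.sin(theta) + 1.0,
        x * math.sin(theta) + y * math.cos(theta) + 2.0) for (x, y) in src]

pairs = list(zip(src, dst))
rot_angle, tx, ty = point_based_matching(pairs)
print(math.degrees(rot_angle), tx, ty)   # ~30.0, ~1.0, ~2.0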

SDKs or APIs used in practice:
Point Cloud Library (PCL): https://en.wikipedia.org/wiki/PCL_(Point_Cloud_Library);
VTK;
ITK;
Open3D;

[1] Besl, Paul J.; McKay, N. D. (1992). "A Method for Registration of 3-D Shapes". IEEE Transactions on Pattern Analysis and Machine Intelligence. 14 (2): 239–256. doi:10.1109/34.121791.
[2] Chen, Yang; Medioni, Gerard (1991). "Object modelling by registration of multiple range images". Image and Vision Computing. 10 (3): 145–155. doi:10.1016/0262-8856(92)90066-C.

With 2D registration in place, we can move on to registering 3D point clouds. A sketch of the 3D case follows below.
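As a starting point for the 3D case, here is a hedged sketch using Open3D's point-to-point ICP (assuming a recent Open3D release where the registration API lives under o3d.pipelines.registration; the file names and the distance threshold are placeholders):

import numpy as np
import open3d as o3d

# placeholder file names; replace with your own clouds
source = o3d.io.read_point_cloud("source.pcd")
target = o3d.io.read_point_cloud("target.pcd")

threshold = 0.02                 # max correspondence distance (placeholder)
trans_init = np.eye(4)           # initial 4x4 transformation estimate

result = o3d.pipelines.registration.registration_icp(
    source, target, threshold, trans_init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

print(result.transformation)     # the estimated 4x4 rigid transform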
