I first learned about Hector SLAM from the handheld laser mapping demo at https://www.youtube.com/embed/F8pdObV_df4; it drew me in and made me want to see how the method actually works. There is a paper describing the algorithm, "A Flexible and Scalable SLAM System with Full 3D Motion Estimation", and open-source code at https://github.com/tu-darmstadt-ros-pkg/hector_slam. Reading the paper alongside the code, this post mainly shares my understanding of Section IV of the paper, the 2D SLAM part.
At initialization, the first laser scan is taken as the first frame and written directly into the map. At time t, when a new scan arrives, we want to match it against the map built up to time t-1, so the scan first has to be transformed into the grid map. For example, when a laser point Pm is transformed into the grid map, we want it to land in an "occupied" cell (gray); if at time t every laser point lands in an occupied cell, the match has succeeded.
To measure whether a transformed laser point corresponds to an occupied cell, and to what degree, bilinear interpolation over the four surrounding grid values M(P00) through M(P11) is used to compute M(Pm); the result lies in [0, 1] and represents the probability of being occupied.
The corresponding implementation:
Eigen::Vector3f interpMapValueWithDerivatives(const Eigen::Vector2f& coords)
{
  //check if coords are within map limits.
  if (concreteGridMap->pointOutOfMapBounds(coords)){
    return Eigen::Vector3f(0.0f, 0.0f, 0.0f);
  }

  //map coords are always positive, floor them by casting to int
  Eigen::Vector2i indMin(coords.cast<int>());

  //get factors for bilinear interpolation
  Eigen::Vector2f factors(coords - indMin.cast<float>());

  int sizeX = concreteGridMap->getSizeX();
  int index = indMin[1] * sizeX + indMin[0];

  // get grid values for the 4 grid points surrounding the current coords. Check cached data first; if not contained,
  // filter gridPoint with gaussian and store in cache.
  // intensities[] holds the occupancy probabilities of the 4 cells surrounding coords.
  if (!cacheMethod.containsCachedData(index, intensities[0])) {
    intensities[0] = getUnfilteredGridPoint(index);
    cacheMethod.cacheData(index, intensities[0]);
  }

  ++index;
  if (!cacheMethod.containsCachedData(index, intensities[1])) {
    intensities[1] = getUnfilteredGridPoint(index);
    cacheMethod.cacheData(index, intensities[1]);
  }

  // move up one row in the grid
  index += sizeX - 1;
  if (!cacheMethod.containsCachedData(index, intensities[2])) {
    intensities[2] = getUnfilteredGridPoint(index);
    cacheMethod.cacheData(index, intensities[2]);
  }

  ++index;
  if (!cacheMethod.containsCachedData(index, intensities[3])) {
    intensities[3] = getUnfilteredGridPoint(index);
    cacheMethod.cacheData(index, intensities[3]);
  }

  float dx1 = intensities[0] - intensities[1];
  float dx2 = intensities[2] - intensities[3];

  float dy1 = intensities[0] - intensities[2];
  float dy2 = intensities[1] - intensities[3];

  float xFacInv = (1.0f - factors[0]);
  float yFacInv = (1.0f - factors[1]);

  // first component: the bilinearly interpolated occupancy value M(Pm)
  return Eigen::Vector3f(
    ((intensities[0] * xFacInv + intensities[1] * factors[0]) * (yFacInv)) +
    ((intensities[2] * xFacInv + intensities[3] * factors[0]) * (factors[1])),
    // gradient of the bilinear surface: dM/dx is the x-difference blended along y,
    // dM/dy the y-difference blended along x (the minus signs undo dx1/dy1's ordering)
    -((dx1 * yFacInv) + (dx2 * factors[1])),
    -((dy1 * xFacInv) + (dy2 * factors[0]))
  );
}
For each laser point transformed into the grid, we want M(Pm) to be as large as possible (tending to 1). After receiving the scan at time t, what is computed is the pose increment from t-1 to t: the matcher iterates near the t-1 pose to find the best alignment. The objective function is

ξ* = argmin_ξ Σ_i [1 − M(S_i(ξ))]²

where S_i(ξ) transforms laser endpoint i into the map frame using the pose ξ.
The corresponding implementation:
void getCompleteHessianDerivs(const Eigen::Vector3f& pose, const DataContainer& dataPoints, Eigen::Matrix3f& H, Eigen::Vector3f& dTr)
{
  int size = dataPoints.getSize();

  Eigen::Affine2f transform(getTransformForState(pose));

  float sinRot = sin(pose[2]);
  float cosRot = cos(pose[2]);

  H = Eigen::Matrix3f::Zero();
  dTr = Eigen::Vector3f::Zero();

  for (int i = 0; i < size; ++i) {
    const Eigen::Vector2f& currPoint (dataPoints.getVecEntry(i));

    // transformedPointData = (M(S_i(xi)), dM/dx, dM/dy) at the transformed point
    Eigen::Vector3f transformedPointData(interpMapValueWithDerivatives(transform * currPoint));

    // residual 1 - M(S_i(xi)): zero when the point lands on a fully occupied cell
    float funVal = 1.0f - transformedPointData[0];

    dTr[0] += transformedPointData[1] * funVal;
    dTr[1] += transformedPointData[2] * funVal;

    // chain rule: dM/dtheta = gradM . d(S_i)/dtheta
    float rotDeriv = ((-sinRot * currPoint.x() - cosRot * currPoint.y()) * transformedPointData[1] + (cosRot * currPoint.x() - sinRot * currPoint.y()) * transformedPointData[2]);

    dTr[2] += rotDeriv * funVal;

    // accumulate the Gauss-Newton approximate Hessian H = sum J^T J
    H(0, 0) += util::sqr(transformedPointData[1]);
    H(1, 1) += util::sqr(transformedPointData[2]);
    H(2, 2) += util::sqr(rotDeriv);

    H(0, 1) += transformedPointData[1] * transformedPointData[2];
    H(0, 2) += transformedPointData[1] * rotDeriv;
    H(1, 2) += transformedPointData[2] * rotDeriv;
  }

  // fill in the lower triangle by symmetry
  H(1, 0) = H(0, 1);
  H(2, 0) = H(0, 2);
  H(2, 1) = H(1, 2);
}
getCompleteHessianDerivs is then called to obtain the pose increment:
bool estimateTransformationLogLh(Eigen::Vector3f& estimate, ConcreteOccGridMapUtil& gridMapUtil, const DataContainer& dataPoints)
{
  // register the new scan against the existing map: accumulate H and dTr
  gridMapUtil.getCompleteHessianDerivs(estimate, dataPoints, H, dTr);

  if ((H(0, 0) != 0.0f) && (H(1, 1) != 0.0f)) {
    // Gauss-Newton step: searchDir = H^-1 * dTr
    Eigen::Vector3f searchDir (H.inverse() * dTr);

    // limit the angular increment to 0.2 rad per iteration
    if (searchDir[2] > 0.2f) {
      searchDir[2] = 0.2f;
      std::cout << "SearchDir angle change too large\n";
    } else if (searchDir[2] < -0.2f) {
      searchDir[2] = -0.2f;
      std::cout << "SearchDir angle change too large\n";
    }

    updateEstimatedPose(estimate, searchDir); // estimate += searchDir;
    return true;
  }
  return false;
}
Iterating the pose increment:
A multi-resolution map pyramid is used to reduce the risk of the iteration getting stuck in a local extremum. The implementation is as follows, where mapContainer holds the multi-resolution maps; by default the matcher runs 3 iterations on the lower-resolution maps and 5 iterations on the highest-resolution map, mapContainer[0].
virtual Eigen::Vector3f matchData(const Eigen::Vector3f& beginEstimateWorld, const DataContainer& dataContainer, Eigen::Matrix3f& covMatrix)
{
  size_t size = mapContainer.size();

  Eigen::Vector3f tmp(beginEstimateWorld);

  // match from the coarsest map down to the finest
  for (int index = size - 1; index >= 0; --index){
    if (index == 0){
      // finest map: use the raw scan, 5 iterations
      tmp = (mapContainer[index].matchData(tmp, dataContainer, covMatrix, 5));
    } else {
      // coarser maps: scale the scan to this resolution, 3 iterations
      dataContainers[index-1].setFrom(dataContainer, static_cast<float>(1.0 / pow(2.0, static_cast<double>(index))));
      tmp = (mapContainer[index].matchData(tmp, dataContainers[index-1], covMatrix, 3));
    }
  }

  return tmp;
}
The overall Hector algorithm is very direct: it simply "aligns" the laser points with the existing map.
Remaining issues:
1. The bilinear interpolation is, in theory, not continuous everywhere: during the iteration Pm can step outside the square spanned by P00..P11, where the interpolated surface has kinks. Google's cartographer improves on this with a smoother, higher-order interpolation of the grid: https://github.com/googlecartographer/cartographer.
2. There is no ability to correct the map: once the map goes wrong, all subsequent matching goes wrong with it.
If you have questions, you are welcome to join the discussion group (mobile robot navigation and control QQ group: 199938556); please update your member note after joining.