LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping
The author, Tixiao Shan, published LeGO-LOAM in 2018 while he was a PhD student at Stevens Institute of Technology; after graduating in 2019 he joined MIT as a research associate (envious!). LIO-SAM is essentially an extended version of LeGO-LOAM: it adds an IMU preintegration factor and a GPS factor, removes the scan-to-scan matching stage, and describes the design motivation and details of LeGO-LOAM's scan-to-map matching in more depth. (Summary borrowed from the Zhihu article 【论文阅读38】LIO-SAM.) The paper has been accepted at IROS 2020. As a high-accuracy open-source SLAM system that combines IMU, lidar and GPS in a fairly small code base, it is well worth studying.
Contents
Paper, code, and data
Installation and running
Paper analysis
Code annotations
ROS node graph
Function overview
imageProjection.cpp
featureExtraction.cpp
imuPreintegration.cpp
mapOptmization.cpp
Paper: https://github.com/TixiaoShan/LIO-SAM/blob/master/config/doc/paper.pdf
Code: https://github.com/TixiaoShan/LIO-SAM
Data: https://pan.baidu.com/s/1-sAB_cNlYPqTjDuaFgz9pg (extraction code: ejmu). The walk bag runs with the default config; the other two bags need the changes described below.
ROS dependencies
sudo apt-get install -y ros-kinetic-navigation
sudo apt-get install -y ros-kinetic-robot-localization
sudo apt-get install -y ros-kinetic-robot-state-publisher
gtsam 4.0.2 (if GitHub is slow, use the Gitee mirror https://gitee.com/eminbogen/gtsam/tags)
wget -O ~/Downloads/gtsam.zip https://github.com/borglab/gtsam/archive/4.0.2.zip
cd ~/Downloads/ && unzip gtsam.zip -d ~/Downloads/
cd ~/Downloads/gtsam-4.0.2/
mkdir build && cd build
cmake -DGTSAM_BUILD_WITH_MARCH_NATIVE=OFF ..
sudo make install -j8
Install LIO-SAM (likewise, Gitee mirror: https://gitee.com/eminbogen/LIOSAM)
cd ~/catkin_ws/src
git clone https://github.com/TixiaoShan/LIO-SAM.git
cd ..
catkin_make
Running the walk bag does not require changing params.yaml. For the other two bags, modify the topics as well as extrinsicRPY and extrinsicRot. To save PCD maps, set savePCD to true and set the save path. Then:
sudo gedit /opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/nodeprocess.py
and increase the _TIMEOUT_SIGINT value, so the node has enough time to write the PCD files on shutdown.
The params.yaml configuration in detail:
lio_sam:
# Topics
pointCloudTopic: "points_raw" # Point cloud data
imuTopic: "imu_correct" # IMU data
odomTopic: "odometry/imu" # IMU preintegration odometry, same frequency as IMU
gpsTopic: "odometry/gpsz" # GPS odometry topic from navsat, see module_navsat.launch file
# GPS Settings
useImuHeadingInitialization: false # if using GPS data, set to "true"
useGpsElevation: false # if GPS elevation is bad, set to "false"
gpsCovThreshold: 2.0 # m^2, threshold for using GPS data
poseCovThreshold: 25.0 # m^2, threshold for using GPS data
# Export settings
savePCD: true # https://github.com/TixiaoShan/LIO-SAM/issues/3
savePCDDirectory: "/data/lio/" # in your home folder, starts and ends with "/". Warning: the code deletes "LOAM" folder then recreates it. See "mapOptimization" for implementation
# Sensor Settings
N_SCAN: 16 # number of lidar channel (i.e., 16, 32, 64, 128)
Horizon_SCAN: 1800 # lidar horizontal resolution (Velodyne:1800, Ouster:512,1024,2048)
timeField: "time" # point timestamp field, Velodyne - "time", Ouster - "t"
downsampleRate: 1 # default: 1. Downsample your data if too many points. i.e., 16 = 64 / 4, 16 = 16 / 1
# IMU Settings
imuAccNoise: 3.9939570888238808e-03
imuGyrNoise: 1.5636343949698187e-03
imuAccBiasN: 6.4356659353532566e-05
imuGyrBiasN: 3.5640318696367613e-05
imuGravity: 9.80511
# Extrinsics (lidar -> IMU)
extrinsicTrans: [0.0, 0.0, 0.0]
extrinsicRPY: [1, 0, 0,
0, 1, 0,
0, 0, 1]
extrinsicRot: [1, 0, 0,
0, 1, 0,
0, 0, 1]
# extrinsicRPY: [1, 0, 0,
# 0, 1, 0,
# 0, 0, 1]
# LOAM feature threshold
edgeThreshold: 1.0
surfThreshold: 0.1
edgeFeatureMinValidNum: 10
surfFeatureMinValidNum: 100
# voxel filter params
odometrySurfLeafSize: 0.4 # default: 0.4
mappingCornerLeafSize: 0.2 # default: 0.2
mappingSurfLeafSize: 0.4 # default: 0.4
# robot motion constraint (in case you are using a 2D robot)
z_tollerance: 1000 # meters
rotation_tollerance: 1000 # radians
# CPU Params
numberOfCores: 4 # number of cores for mapping optimization
mappingProcessInterval: 0.15 # seconds, regulate mapping frequency
# Surrounding map
surroundingkeyframeAddingDistThreshold: 1.0 # meters, regulate keyframe adding threshold
surroundingkeyframeAddingAngleThreshold: 0.2 # radians, regulate keyframe adding threshold
surroundingKeyframeDensity: 2.0 # meters, downsample surrounding keyframe poses
surroundingKeyframeSearchRadius: 50.0 # meters, within n meters scan-to-map optimization (when loop closure disabled)
# Loop closure
loopClosureEnableFlag: false
surroundingKeyframeSize: 25 # submap size (when loop closure enabled)
historyKeyframeSearchRadius: 15.0 # meters, key frame that is within n meters from current pose will be considered for loop closure
historyKeyframeSearchTimeDiff: 30.0 # seconds, key frame that is n seconds older will be considered for loop closure
historyKeyframeSearchNum: 25 # number of history key frames that will be fused into a submap for loop closure
historyKeyframeFitnessScore: 0.3 # icp threshold, the smaller the better alignment
# Visualization
globalMapVisualizationSearchRadius: 1000.0 # meters, global map visualization radius
globalMapVisualizationPoseDensity: 10.0 # meters, global map visualization keyframe density
globalMapVisualizationLeafSize: 1.0 # meters, global map visualization cloud density
# Navsat (convert GPS coordinates to Cartesian)
navsat:
frequency: 50
wait_for_datum: false
delay: 0.0
magnetic_declination_radians: 0
yaw_offset: 0
zero_altitude: true
broadcast_utm_transform: false
broadcast_utm_transform_as_parent_frame: false
publish_filtered_gps: false
# EKF for Navsat
ekf_gps:
publish_tf: false
map_frame: map
odom_frame: odom
base_link_frame: base_link
world_frame: odom
frequency: 50
two_d_mode: false
sensor_timeout: 0.01
# -------------------------------------
# External IMU:
# -------------------------------------
imu0: imu_correct
# make sure the input is aligned with ROS REP105. "imu_correct" is manually transformed by myself. EKF can also transform the data using tf between your imu and base_link
imu0_config: [false, false, false,
true, true, true,
false, false, false,
false, false, true,
true, true, true]
imu0_differential: false
imu0_queue_size: 50
imu0_remove_gravitational_acceleration: true
# -------------------------------------
# Odometry (From Navsat):
# -------------------------------------
odom0: odometry/gps
odom0_config: [true, true, true,
false, false, false,
false, false, false,
false, false, false,
false, false, false]
odom0_differential: false
odom0_queue_size: 10
# x y z r p y x_dot y_dot z_dot r_dot p_dot y_dot x_ddot y_ddot z_ddot
process_noise_covariance: [ 1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 10.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0.03, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0.03, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0.1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0.25, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0.25, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0.04, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.5, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.015]
Judging by ground flatness, how well wall surfaces coincide, and how thin the walls come out, the mapping quality is very good.
The paper argues that the LOAM family of methods has a few problems:
1. They store their data in a global voxel map
2. It is difficult to perform loop closure detection
3. They cannot incorporate other absolute measurements (GPS, compass, etc.)
4. When the voxel map becomes dense, the efficiency of online optimization drops
The authors therefore formulate lidar SLAM as a factor graph and introduce four kinds of factors:
1. IMU preintegration factor
2. Lidar odometry factor
3. GPS factor
4. Loop closure factor
For the IMU preintegration factor, the derivation follows the usual angular-velocity / acceleration / velocity / position relations, essentially of the form s = vt + ½at², as sketched below.
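A minimal sketch of those kinematic relations (the same ones the paper preintegrates between lidar keyframes), with R the orientation, g gravity, b the biases and n the measurement noise:
$$\mathbf{v}_{t+\Delta t} = \mathbf{v}_t + g\,\Delta t + \mathbf{R}_t(\hat{\mathbf{a}}_t - \mathbf{b}^a_t - \mathbf{n}^a_t)\,\Delta t$$
$$\mathbf{p}_{t+\Delta t} = \mathbf{p}_t + \mathbf{v}_t\,\Delta t + \tfrac{1}{2}g\,\Delta t^2 + \tfrac{1}{2}\mathbf{R}_t(\hat{\mathbf{a}}_t - \mathbf{b}^a_t - \mathbf{n}^a_t)\,\Delta t^2$$
$$\mathbf{R}_{t+\Delta t} = \mathbf{R}_t\,\exp\big((\hat{\boldsymbol\omega}_t - \mathbf{b}^\omega_t - \mathbf{n}^\omega_t)\,\Delta t\big)$$
Preintegrating these between two lidar keyframes gives relative motion terms Δp, Δv, ΔR that form the IMU factor, and the biases become additional variables in the graph.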
For the lidar odometry factor there are three steps:
1. Roughly compute the relative pose between the new frame and the previous keyframe, and use a threshold on that motion to decide whether the frame becomes a keyframe; the five most recent keyframes are merged into a voxelized local point-cloud map. Here e and p denote edge and planar features, M is the merged map, F is the cloud of a single keyframe, and the union ∪ performs the merging.
2. Match edge features to edge features and planar features to planar features.
3. The optimization itself is standard: point-to-line and point-to-plane distances are minimized to solve for the transform (both residuals are sketched below).
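A sketch of the two residuals in their usual LOAM form: for an edge point p matched to the map line through edge points a and b, and for a planar point p matched to the plane through a, b and c,
$$d_e = \frac{\lVert (\mathbf{p}-\mathbf{a}) \times (\mathbf{p}-\mathbf{b}) \rVert}{\lVert \mathbf{a}-\mathbf{b} \rVert}, \qquad d_p = \frac{\lvert (\mathbf{p}-\mathbf{a}) \cdot \big((\mathbf{a}-\mathbf{b}) \times (\mathbf{a}-\mathbf{c})\big) \rvert}{\lVert (\mathbf{a}-\mathbf{b}) \times (\mathbf{a}-\mathbf{c}) \rVert}$$
Minimizing the sum of these distances over all matched features yields the relative pose between the keyframe and the local map.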
For the GPS factor the authors propose:
1. GPS measurements are linearly interpolated to the lidar odometry timestamps.
2. There is no need to keep adding GPS factors continuously.
3. A GPS factor is only added when the estimated position covariance grows larger than the covariance of the received GPS position.
4. The GPS z (altitude) measurement is not very trustworthy.
The covariance used here is read from entries 0, 7 and 14 of the covariance array; per the ROS message convention these are simply the x, y, z measurement variances. Because the GPS/navsat stream here runs at 50 Hz, many measurements arrive in a short time, so a variance can be computed. A small sketch of this gating follows.
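A minimal C++ sketch of that gating with illustrative names (the actual logic lives in mapOptmization.cpp); gpsCovThreshold and poseCovThreshold correspond to the params.yaml entries above:
#include <nav_msgs/Odometry.h>
// Sketch only: decide whether a GPS measurement should be added as a factor.
// Entries 0, 7, 14 of the 6x6 pose covariance are the x, y, z variances (ROS convention).
bool shouldAddGpsFactor(const nav_msgs::Odometry &gps,
                        float poseCovX, float poseCovY,      // covariance of the current estimate
                        float gpsCovThreshold, float poseCovThreshold)
{
    float noiseX = gps.pose.covariance[0];
    float noiseY = gps.pose.covariance[7];
    if (noiseX > gpsCovThreshold || noiseY > gpsCovThreshold)
        return false;                                        // the GPS fix itself is too noisy
    // only correct once the estimated pose has drifted beyond its threshold
    return poseCovX > poseCovThreshold || poseCovY > poseCovThreshold;
}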
For the loop closure factor the authors note:
1. A simple but effective loop closure detection method based on Euclidean distance is used (a candidate-search sketch follows this list).
2. The loop closure factor is particularly useful for correcting drift in the robot's altitude, because the GPS altitude measurement is very inaccurate.
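A minimal sketch of that Euclidean-distance candidate search, assuming a kd-tree over the keyframe positions (names are illustrative; the corresponding function in the code is detectLoopClosure()):
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <cmath>
#include <vector>
// Returns the index of an older keyframe lying within searchRadius meters of the newest
// keyframe, provided the two differ in time by more than minTimeDiff seconds; -1 if none.
int findLoopCandidate(const pcl::PointCloud<pcl::PointXYZI>::Ptr &keyPoses,  // xyz = keyframe positions
                      const std::vector<double> &keyTimes,
                      double searchRadius, double minTimeDiff)
{
    if (keyPoses->empty()) return -1;
    pcl::KdTreeFLANN<pcl::PointXYZI> kdtree;
    kdtree.setInputCloud(keyPoses);
    std::vector<int> ids;
    std::vector<float> sqDists;
    int latest = static_cast<int>(keyPoses->size()) - 1;
    kdtree.radiusSearch(keyPoses->points[latest], searchRadius, ids, sqDists);
    for (int id : ids)
        if (std::abs(keyTimes[id] - keyTimes[latest]) > minTimeDiff)
            return id;  // spatially close but old enough -> loop closure candidate
    return -1;
}
The detected pair is then aligned with ICP (performLoopClosure()) and, if the fitness score is good enough, added to the factor graph as a loop closure factor.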
Results
1. Accuracy comparison.
2. Runtime: regions with many features take longer even with fewer scans; the Park sequence takes less time than Amsterdam (left plot: Park).
3. In the code the author comments that when motion is slow, correcting the cloud for motion (positional deskew) brings little benefit over not correcting it.
imageProjection.cpp:
PointXYZIRT: point type with xyz, intensity, ring (laser channel) and per-point time; registered so it can be used with the PCL library.
Main ImageProjection(): subscribes to the lidar messages from the rosbag, the IMU values, and the odometry coming from imuPreintegration; initializes and resets memory and parameters
void allocateMemory(): allocates the input cloud, the full cloud and the extracted cloud (note how pointColInd and pointRange are defined)
void resetParameters(): clears the input cloud, the extracted cloud, the per-point ranges and the IMU buffers
void imuHandler(const sensor_msgs::Imu::ConstPtr &imuMsg): just pushes the message into a queue
void odometryHandler(const nav_msgs::Odometry::ConstPtr &odometryMsg): just pushes the message into a queue
void cloudHandler(const sensor_msgs::PointCloud2ConstPtr &laserCloudMsg): builds the range image and deskews the points; extracts the cloud from the range image, publishes it, and resets the parameters
bool cachePointCloud(const sensor_msgs::PointCloud2ConstPtr &laserCloudMsg): pops the cached cloud and checks that the frame has ring and time fields
bool deskewInfo(): checks that the IMU/odometry queues cover the scan's time span, then calls imuDeskewInfo() and odomDeskewInfo()
void imuDeskewInfo(): integrates the IMU angular rates into rotation increments
void odomDeskewInfo(): reads the odometry data and decides from the covariance whether to trust it
void findRotation(double pointTime, float *rotXCur, float *rotYCur, float *rotZCur): interpolates the rotations from imuDeskewInfo for each point according to its position within the scan
void findPosition(double relTime, float *posXCur, float *posYCur, float *posZCur): at higher speeds, translates the point by the corresponding fraction of the scan displacement
PointType deskewPoint(PointType *point, double relTime): deskews each point according to its timestamp
void projectPointCloud(): projects the point cloud into a range image
void cloudExtraction(): drops ring-edge points and points that are too far away
void publishClouds(): ROS output
featureExtraction.cpp:
struct smoothness_t: curvature value and point index
struct by_value: comparison functor for sorting by curvature
Main FeatureExtraction(): receives the deskewed input cloud from imageProjection and publishes the feature points together with the input cloud; initializes parameters
void initializationValue(): allocates curvature storage, the downsampling filter, the feature clouds and the neighbor-picked labels
void laserCloudInfoHandler(const lio_sam::cloud_infoConstPtr &msgIn): cloud callback that drives everything below
void calculateSmoothness(): computes the curvature of every point
void markOccludedPoints(): discards the far side of occlusion boundaries and points whose neighbors on both sides differ too much in range
void extractFeatures(): extracts the feature points
void publishFeatureCloud(): ROS output
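The curvature in calculateSmoothness() is the LOAM-style roughness built from the ten neighboring ranges on the same ring (visible in the code further below); written out for point i with range r_i:
$$c_i = \Big( \sum_{j=i-5,\; j \neq i}^{i+5} r_j \;-\; 10\, r_i \Big)^2$$
Points whose c_i exceeds edgeThreshold become edge candidates, and points below surfThreshold become planar candidates.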
imuPreintegration.cpp:
Main IMUPreintegration(): subscribes to the IMU data and to the odometry coming from mapOptimization; publishes the IMU odometry and path; sets up the preintegration and gtsam parameters
void resetOptimization(): resets the gtsam optimizer and factor graph
void resetParams(): resets the initial flags: first frame, whether an optimization has run, system initialized
void odometryHandler(const nav_msgs::Odometry::ConstPtr &odomMsg): receives the odometry from mapOptimization and runs the optimization
bool failureDetection(const gtsam::Vector3 &velCur, const gtsam::imuBias::ConstantBias &biasCur): checks whether the optimized velocity or bias has grown too large
void imuHandler(const sensor_msgs::Imu::ConstPtr &imu_raw): receives the raw IMU data, runs a short gtsam preintegration to predict position/velocity/bias, and publishes the short-term trajectory
mapOptmization.cpp:
struct PointXYZIRPYT: position, Euler angles and timestamp
Main mapOptimization(): subscribes to the lidar/feature info coming from featureExtraction and to the dataset's GPS odometry; publishes poses, surrounding points, the path, history frames, ICP frames and local frames; defines the filter parameters and allocates memory
void allocateMemory(): history and local key poses, edge/planar clouds, feature values, covariances, covariance flags, downsampled clouds, KD-trees, the latest keyframe cloud and the nearby history-frame cloud
void pointAssociateToMap(PointType const *const pi, PointType *const po): transforms a point into the world (map) frame
gtsam::Pose3 pclPointTogtsamPose3(PointTypePose thisPoint): PCL pose to gtsam format
gtsam::Pose3 trans2gtsamPose(float transformIn[]): array to gtsam format
Eigen::Affine3f pclPointToAffine3f(PointTypePose thisPoint): PCL pose to an affine transform (built from the Euler angles)
Eigen::Affine3f trans2Affine3f(float transformIn[]): array to an affine transform
PointTypePose trans2PointTypePose(float transformIn[]): array to PCL pose format
Three threads (see the main() sketch after this list):
1. Data handling
void gpsHandler(const nav_msgs::Odometry::ConstPtr &gpsMsg): pushes GPS data into a queue
void laserCloudInfoHandler(const lio_sam::cloud_infoConstPtr &msgIn): main handler that runs the functions below at the mapping interval
void updateInitialGuess(): updates the initial pose guess
void extractSurroundingKeyFrames(): extracts the nearby keyframes that will form the local map
void downsampleCurrentScan(): downsamples the current scan
void scan2MapOptimization(): builds point-to-plane and point-to-line residuals and optimizes them with Gauss-Newton
void saveKeyFramesAndFactor(): adds factors to gtsam
void correctPoses(): updates the poses
void publishOdometry(): ROS output
void publishFrames(): ROS output
2. Loop closure
void loopClosureThread(): periodically calls performLoopClosure() to run loop closure detection
void performLoopClosure(): runs ICP between the two frames and feeds the result into the gtsam optimization
bool detectLoopClosure(int *latestID, int *closestID): finds the nearby history frames and the latest keyframe
3. Visualization
void visualizeGlobalMapThread(): repeatedly calls publishGlobalMap(); when ROS shuts down it leaves the loop and writes the PCD files
void publishGlobalMap(): publishes the nearby point cloud map
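The three threads above map onto mapOptmization's main() roughly like this (a condensed sketch based on the upstream source; data handling runs in the ROS callbacks, the other two run as std::thread):
#include <thread>
int main(int argc, char** argv)
{
    ros::init(argc, argv, "lio_sam");
    mapOptimization MO;
    ROS_INFO("\033[1;32m----> Map Optimization Started.\033[0m");
    // loop closure and global-map visualization each run in their own thread,
    // while the subscriber callbacks (data handling) run under ros::spin()
    std::thread loopthread(&mapOptimization::loopClosureThread, &MO);
    std::thread visualizeMapThread(&mapOptimization::visualizeGlobalMapThread, &MO);
    ros::spin();
    loopthread.join();
    visualizeMapThread.join();
    return 0;
}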
#include "utility.h"
#include "lio_sam/cloud_info.h"
// Velodyne point structure: PCL_ADD_POINT4D gives xyz plus padding, intensity is the return intensity,
// ring is the laser channel index, time the per-point timestamp; EIGEN_MAKE_ALIGNED_OPERATOR_NEW enforces memory alignment
struct PointXYZIRT
{
PCL_ADD_POINT4D
PCL_ADD_INTENSITY;
uint16_t ring;
float time;
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
} EIGEN_ALIGN16;
POINT_CLOUD_REGISTER_POINT_STRUCT (PointXYZIRT,
(float, x, x) (float, y, y) (float, z, z) (float, intensity, intensity)
(uint16_t, ring, ring) (float, time, time)
)
// Ouster
// struct PointXYZIRT {
// PCL_ADD_POINT4D;
// float intensity;
// uint32_t t;
// uint16_t reflectivity;
// uint8_t ring;
// uint16_t noise;
// uint32_t range;
// EIGEN_MAKE_ALIGNED_OPERATOR_NEW
// }EIGEN_ALIGN16;
// POINT_CLOUD_REGISTER_POINT_STRUCT(PointXYZIRT,
// (float, x, x) (float, y, y) (float, z, z) (float, intensity, intensity)
// (uint32_t, t, t) (uint16_t, reflectivity, reflectivity)
// (uint8_t, ring, ring) (uint16_t, noise, noise) (uint32_t, range, range)
// )
//IMU queue length; the author's IMU is a Microstrain 3DM-GX5-25 running at 500 Hz; adjust when using other datasets
const int queueLength = 500;
//inherits the parameters loaded by ParamServer
class ImageProjection : public ParamServer
{
private:
std::mutex imuLock;
std::mutex odoLock;
ros::Subscriber subLaserCloud;
ros::Publisher pubLaserCloud;
//publishes the deskewed cloud (points that are not too close to the sensor)
ros::Publisher pubExtractedCloud;
//publishes the cloudInfo message defined below
ros::Publisher pubLaserCloudInfo;
ros::Subscriber subImu;
std::deque<sensor_msgs::Imu> imuQueue;
ros::Subscriber subOdom;
std::deque<nav_msgs::Odometry> odomQueue;
std::deque<sensor_msgs::PointCloud2> cloudQueue;
sensor_msgs::PointCloud2 currentCloudMsg;
double *imuTime = new double[queueLength];
double *imuRotX = new double[queueLength];
double *imuRotY = new double[queueLength];
double *imuRotZ = new double[queueLength];
int imuPointerCur;
bool firstPointFlag;
Eigen::Affine3f transStartInverse;
pcl::PointCloud<PointXYZIRT>::Ptr laserCloudIn;
pcl::PointCloud<PointType>::Ptr fullCloud;
pcl::PointCloud<PointType>::Ptr extractedCloud;
int deskewFlag;
cv::Mat rangeMat;
bool odomDeskewFlag;
float odomIncreX;
float odomIncreY;
float odomIncreZ;
//contains flags for whether the imu and odom data are usable, the lidar pose guess, each point's column index, and each point's range
lio_sam::cloud_info cloudInfo;
double timeScanCur;
double timeScanNext;
std_msgs::Header cloudHeader;
public:
ImageProjection():
deskewFlag(0)
{
subImu = nh.subscribe(imuTopic, 2000, &ImageProjection::imuHandler, this, ros::TransportHints().tcpNoDelay());
subOdom = nh.subscribe(odomTopic, 2000, &ImageProjection::odometryHandler, this, ros::TransportHints().tcpNoDelay());
subLaserCloud = nh.subscribe(pointCloudTopic, 5, &ImageProjection::cloudHandler, this, ros::TransportHints().tcpNoDelay());
pubExtractedCloud = nh.advertise<sensor_msgs::PointCloud2> ("lio_sam/deskew/cloud_deskewed", 1);
pubLaserCloudInfo = nh.advertise<lio_sam::cloud_info> ("lio_sam/deskew/cloud_info", 10);
//allocate memory, reset parameters
allocateMemory();
resetParameters();
//only print PCL errors to the console
pcl::console::setVerbosityLevel(pcl::console::L_ERROR);
}
void allocateMemory()
{
laserCloudIn.reset(new pcl::PointCloud<PointXYZIRT>());
fullCloud.reset(new pcl::PointCloud<PointType>());
extractedCloud.reset(new pcl::PointCloud<PointType>());
fullCloud->points.resize(N_SCAN*Horizon_SCAN);
cloudInfo.startRingIndex.assign(N_SCAN, 0);
cloudInfo.endRingIndex.assign(N_SCAN, 0);
//per the msg definition this is N_SCAN*Horizon_SCAN = 16*1800
cloudInfo.pointColInd.assign(N_SCAN*Horizon_SCAN, 0);
cloudInfo.pointRange.assign(N_SCAN*Horizon_SCAN, 0);
resetParameters();
}
void resetParameters()
{
laserCloudIn->clear();
extractedCloud->clear();
// reset range matrix for range image projection; every cell defaults to FLT_MAX
rangeMat = cv::Mat(N_SCAN, Horizon_SCAN, CV_32F, cv::Scalar::all(FLT_MAX));
imuPointerCur = 0;
firstPointFlag = true;
odomDeskewFlag = false;
for (int i = 0; i < queueLength; ++i)
{
imuTime[i] = 0;
imuRotX[i] = 0;
imuRotY[i] = 0;
imuRotZ[i] = 0;
}
}
~ImageProjection(){}
void imuHandler(const sensor_msgs::Imu::ConstPtr& imuMsg)
{
sensor_msgs::Imu thisImu = imuConverter(*imuMsg);
std::lock_guard<std::mutex> lock1(imuLock);
imuQueue.push_back(thisImu);
// debug IMU data
// cout << std::setprecision(6);
// cout << "IMU acc: " << endl;
// cout << "x: " << thisImu.linear_acceleration.x <<
// ", y: " << thisImu.linear_acceleration.y <<
// ", z: " << thisImu.linear_acceleration.z << endl;
// cout << "IMU gyro: " << endl;
// cout << "x: " << thisImu.angular_velocity.x <<
// ", y: " << thisImu.angular_velocity.y <<
// ", z: " << thisImu.angular_velocity.z << endl;
// double imuRoll, imuPitch, imuYaw;
// tf::Quaternion orientation;
// tf::quaternionMsgToTF(thisImu.orientation, orientation);
// tf::Matrix3x3(orientation).getRPY(imuRoll, imuPitch, imuYaw);
// cout << "IMU roll pitch yaw: " << endl;
// cout << "roll: " << imuRoll << ", pitch: " << imuPitch << ", yaw: " << imuYaw << endl << endl;
}
void odometryHandler(const nav_msgs::Odometry::ConstPtr& odometryMsg)
{
std::lock_guard<std::mutex> lock2(odoLock);
odomQueue.push_back(*odometryMsg);
}
void cloudHandler(const sensor_msgs::PointCloud2ConstPtr& laserCloudMsg)
{
//check that at least 2 clouds are cached and that the ring and time fields exist
if (!cachePointCloud(laserCloudMsg))
return;
//requires: the IMU queue is not empty, its first timestamp is earlier than the current scan, and its last timestamp is later than the next scan
if (!deskewInfo())
return;
projectPointCloud();
cloudExtraction();
publishClouds();
resetParameters();
}
//check that at least 2 clouds are cached and that the ring and time fields exist
bool cachePointCloud(const sensor_msgs::PointCloud2ConstPtr& laserCloudMsg)
{
// cache point cloud
cloudQueue.push_back(*laserCloudMsg);
if (cloudQueue.size() <= 2)
return false;
else
{
//take the front of the queue as timeScanCur, pop it, and use the next cloud's stamp as timeScanNext; both are compared against the IMU timestamps later
currentCloudMsg = cloudQueue.front();
cloudQueue.pop_front();
cloudHeader = currentCloudMsg.header;
timeScanCur = cloudHeader.stamp.toSec();
timeScanNext = cloudQueue.front().header.stamp.toSec();
}
// convert cloud
pcl::fromROSMsg(currentCloudMsg, *laserCloudIn);
// check dense flag
if (laserCloudIn->is_dense == false)
{
ROS_ERROR("Point cloud is not in dense format, please remove NaN points first!");
ros::shutdown();
}
// check ring channel
static int ringFlag = 0;
if (ringFlag == 0)
{
ringFlag = -1;
for (int i = 0; i < (int)currentCloudMsg.fields.size(); ++i)
{
if (currentCloudMsg.fields[i].name == "ring")
{
ringFlag = 1;
break;
}
}
if (ringFlag == -1)
{
ROS_ERROR("Point cloud ring channel not available, please configure your point cloud data!");
ros::shutdown();
}
}
// check point time
if (deskewFlag == 0)
{
deskewFlag = -1;
for (int i = 0; i < (int)currentCloudMsg.fields.size(); ++i)
{
if (currentCloudMsg.fields[i].name == timeField)
{
deskewFlag = 1;
break;
}
}
if (deskewFlag == -1)
ROS_WARN("Point cloud timestamp not available, deskew function disabled, system will drift significantly!");
}
return true;
}
//requires: the IMU queue is not empty, its first timestamp is earlier than the current scan, and its last timestamp is later than the next scan
bool deskewInfo()
{
std::lock_guard<std::mutex> lock1(imuLock);
std::lock_guard<std::mutex> lock2(odoLock);
// make sure IMU data available for the scan
if (imuQueue.empty() || imuQueue.front().header.stamp.toSec() > timeScanCur || imuQueue.back().header.stamp.toSec() < timeScanNext)
{
ROS_DEBUG("Waiting for IMU data ...");
return false;
}
imuDeskewInfo();
odomDeskewInfo();
return true;
}
//integrate the IMU angular rates into rotation increments covering this scan, and decide whether the IMU data is usable (enough samples)
void imuDeskewInfo()
{
cloudInfo.imuAvailable = false;
//drop IMU messages older than 0.01 s before the current scan
while (!imuQueue.empty())
{
if (imuQueue.front().header.stamp.toSec() < timeScanCur - 0.01)
imuQueue.pop_front();
else
break;
}
if (imuQueue.empty())
return;
imuPointerCur = 0;
for (int i = 0; i < (int)imuQueue.size(); ++i)
{
sensor_msgs::Imu thisImuMsg = imuQueue[i];
double currentImuTime = thisImuMsg.header.stamp.toSec();
//use IMU samples before the scan start for the initial RPY; stop once the sample is more than 0.01 s past the next scan
// get roll, pitch, and yaw estimation for this scan
if (currentImuTime <= timeScanCur)
imuRPY2rosRPY(&thisImuMsg, &cloudInfo.imuRollInit, &cloudInfo.imuPitchInit, &cloudInfo.imuYawInit);
if (currentImuTime > timeScanNext + 0.01)
break;
//first sample: initialize the rotation to zero
if (imuPointerCur == 0){
imuRotX[0] = 0;
imuRotY[0] = 0;
imuRotZ[0] = 0;
imuTime[0] = currentImuTime;
++imuPointerCur;
continue;
}
// get angular velocity
double angular_x, angular_y, angular_z;
imuAngular2rosAngular(&thisImuMsg, &angular_x, &angular_y, &angular_z);
// integrate rotation: accumulate angular_rate * dt
double timeDiff = currentImuTime - imuTime[imuPointerCur-1];
imuRotX[imuPointerCur] = imuRotX[imuPointerCur-1] + angular_x * timeDiff;
imuRotY[imuPointerCur] = imuRotY[imuPointerCur-1] + angular_y * timeDiff;
imuRotZ[imuPointerCur] = imuRotZ[imuPointerCur-1] + angular_z * timeDiff;
imuTime[imuPointerCur] = currentImuTime;
++imuPointerCur;
}
//undo the final increment of the loop counter
--imuPointerCur;
//if nothing was integrated just return; otherwise mark the IMU info as usable
if (imuPointerCur <= 0)
return;
cloudInfo.imuAvailable = true;
}
//reads the imu-preintegration odometry; its pose is stored in cloudInfo as the initial guess for mapOptimization
void odomDeskewInfo()
{
cloudInfo.odomAvailable = false;
while (!odomQueue.empty())
{
if (odomQueue.front().header.stamp.toSec() < timeScanCur - 0.01)
odomQueue.pop_front();
else
break;
}
if (odomQueue.empty())
return;
if (odomQueue.front().header.stamp.toSec() > timeScanCur)
return;
// get start odometry at the beginning of the scan
nav_msgs::Odometry startOdomMsg;
for (int i = 0; i < (int)odomQueue.size(); ++i)
{
startOdomMsg = odomQueue[i];
if (ROS_TIME(&startOdomMsg) < timeScanCur)
continue;
else
break;
}
tf::Quaternion orientation;
tf::quaternionMsgToTF(startOdomMsg.pose.pose.orientation, orientation);
double roll, pitch, yaw;
tf::Matrix3x3(orientation).getRPY(roll, pitch, yaw);
// Initial guess used in mapOptimization
cloudInfo.initialGuessX = startOdomMsg.pose.pose.position.x;
cloudInfo.initialGuessY = startOdomMsg.pose.pose.position.y;
cloudInfo.initialGuessZ = startOdomMsg.pose.pose.position.z;
cloudInfo.initialGuessRoll = roll;
cloudInfo.initialGuessPitch = pitch;
cloudInfo.initialGuessYaw = yaw;
cloudInfo.imuPreintegrationResetId = round(startOdomMsg.pose.covariance[0]);
cloudInfo.odomAvailable = true;
// get end odometry at the end of the scan
odomDeskewFlag = false;
if (odomQueue.back().header.stamp.toSec() < timeScanNext)
return;
nav_msgs::Odometry endOdomMsg;
for (int i = 0; i < (int)odomQueue.size(); ++i)
{
endOdomMsg = odomQueue[i];
if (ROS_TIME(&endOdomMsg) < timeScanNext)
continue;
else
break;
}
if (int(round(startOdomMsg.pose.covariance[0])) != int(round(endOdomMsg.pose.covariance[0])))
return;
Eigen::Affine3f transBegin = pcl::getTransformation(startOdomMsg.pose.pose.position.x, startOdomMsg.pose.pose.position.y, startOdomMsg.pose.pose.position.z, roll, pitch, yaw);
tf::quaternionMsgToTF(endOdomMsg.pose.pose.orientation, orientation);
tf::Matrix3x3(orientation).getRPY(roll, pitch, yaw);
Eigen::Affine3f transEnd = pcl::getTransformation(endOdomMsg.pose.pose.position.x, endOdomMsg.pose.pose.position.y, endOdomMsg.pose.pose.position.z, roll, pitch, yaw);
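// relative lidar motion over this scan: T_begin^-1 * T_end; its translation part (odomIncreX/Y/Z)
// is what findPosition() would use for positional deskew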
Eigen::Affine3f transBt = transBegin.inverse() * transEnd;
float rollIncre, pitchIncre, yawIncre;
pcl::getTranslationAndEulerAngles(transBt, odomIncreX, odomIncreY, odomIncreZ, rollIncre, pitchIncre, yawIncre);
odomDeskewFlag = true;
}
//interpolate the rotation increment for a point from its timestamp
void findRotation(double pointTime, float *rotXCur, float *rotYCur, float *rotZCur)
{
*rotXCur = 0; *rotYCur = 0; *rotZCur = 0;
int imuPointerFront = 0;
//stop either when imuPointerFront reaches the number of integrated IMU samples (imuPointerCur)
//or when the IMU timestamp passes the point's timestamp (this runs once per point; a static index would be cheaper)
while (imuPointerFront < imuPointerCur)
{
if (pointTime < imuTime[imuPointerFront])
break;
++imuPointerFront;
}
//if the point is newer than every integrated IMU sample, or the index is still 0, just take that sample's rotation
if (pointTime > imuTime[imuPointerFront] || imuPointerFront == 0)
{
*rotXCur = imuRotX[imuPointerFront];
*rotYCur = imuRotY[imuPointerFront];
*rotZCur = imuRotZ[imuPointerFront];
}
else {
int imuPointerBack = imuPointerFront - 1;
//interpolation ratios: where the point's timestamp falls between the two neighboring IMU samples
double ratioFront = (pointTime - imuTime[imuPointerBack]) / (imuTime[imuPointerFront] - imuTime[imuPointerBack]);
double ratioBack = (imuTime[imuPointerFront] - pointTime) / (imuTime[imuPointerFront] - imuTime[imuPointerBack]);
//blend the two rotation increments by those ratios
*rotXCur = imuRotX[imuPointerFront] * ratioFront + imuRotX[imuPointerBack] * ratioBack;
*rotYCur = imuRotY[imuPointerFront] * ratioFront + imuRotY[imuPointerBack] * ratioBack;
*rotZCur = imuRotZ[imuPointerFront] * ratioFront + imuRotZ[imuPointerBack] * ratioBack;
}
}
void findPosition(double relTime, float *posXCur, float *posYCur, float *posZCur)
{
*posXCur = 0; *posYCur = 0; *posZCur = 0;
// If the sensor moves relatively slow, like walking speed, positional deskew seems to have little benefits. Thus code below is commented.
// if (cloudInfo.odomAvailable == false || odomDeskewFlag == false)
// return;
//
// float ratio = relTime / (timeScanNext - timeScanCur);
//
// *posXCur = ratio * odomIncreX;
// *posYCur = ratio * odomIncreY;
// *posZCur = ratio * odomIncreZ;
}
//PointType is pcl::PointXYZI
PointType deskewPoint(PointType *point, double relTime)
{
//if there is no per-point time field or the IMU info is unavailable, return the point unchanged
if (deskewFlag == -1 || cloudInfo.imuAvailable == false)
return *point;
//the point's absolute time is the scan start time plus relTime (laserCloudIn->points[i].time below)
double pointTime = timeScanCur + relTime;
//interpolate the IMU-derived rotation and the position increment for this timestamp
float rotXCur, rotYCur, rotZCur;
findRotation(pointTime, &rotXCur, &rotYCur, &rotZCur);
float posXCur, posYCur, posZCur;
findPosition(relTime, &posXCur, &posYCur, &posZCur);
//firstPointFlag is set in resetParameters(), which cloudHandler() calls for every scan,
//so this computes the inverse of the lidar pose at the first point of the scan
if (firstPointFlag == true)
{
transStartInverse = (pcl::getTransformation(posXCur, posYCur, posZCur, rotXCur, rotYCur, rotZCur)).inverse();
firstPointFlag = false;
}
// transform points to start
//lidar pose at the time the current point was measured
Eigen::Affine3f transFinal = pcl::getTransformation(posXCur, posYCur, posZCur, rotXCur, rotYCur, rotZCur);
//pose of the lidar at this point relative to the scan's first point = (first-point pose)^-1 × (current-point pose)
Eigen::Affine3f transBt = transStartInverse * transFinal;
//apply this relative transform to move the point into the frame of the scan's first point
PointType newPoint;
newPoint.x = transBt(0,0) * point->x + transBt(0,1) * point->y + transBt(0,2) * point->z + transBt(0,3);
newPoint.y = transBt(1,0) * point->x + transBt(1,1) * point->y + transBt(1,2) * point->z + transBt(1,3);
newPoint.z = transBt(2,0) * point->x + transBt(2,1) * point->y + transBt(2,2) * point->z + transBt(2,3);
newPoint.intensity = point->intensity;
return newPoint;
}
void projectPointCloud()
{
int cloudSize = laserCloudIn->points.size();
// range image projection
for (int i = 0; i < cloudSize; ++i)
{
PointType thisPoint;
thisPoint.x = laserCloudIn->points[i].x;
thisPoint.y = laserCloudIn->points[i].y;
thisPoint.z = laserCloudIn->points[i].z;
thisPoint.intensity = laserCloudIn->points[i].intensity;
int rowIdn = laserCloudIn->points[i].ring;
//the row index must be an integer in [0, N_SCAN)
if (rowIdn < 0 || rowIdn >= N_SCAN)
continue;
if (rowIdn % downsampleRate != 0)
continue;
//horizontal angle (azimuth) of the point
float horizonAngle = atan2(thisPoint.x, thisPoint.y) * 180 / M_PI;
//each ring has Horizon_SCAN (1800) points spread over 360 degrees
static float ang_res_x = 360.0/float(Horizon_SCAN);
int columnIdn = -round((horizonAngle-90.0)/ang_res_x) + Horizon_SCAN/2;
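// points along the +x axis (horizonAngle = 90 deg) land in the center column; the wrap-around below keeps the full 360 deg sweep inside [0, Horizon_SCAN)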
if (columnIdn >= Horizon_SCAN)
columnIdn -= Horizon_SCAN;
//skip if the column index is out of range
if (columnIdn < 0 || columnIdn >= Horizon_SCAN)
continue;
//range of the point from the sensor
float range = pointDistance(thisPoint);
if (range < 1.0)
continue;
//skip if this range-image cell has already been filled
if (rangeMat.at<float>(rowIdn, columnIdn) != FLT_MAX)
continue;
// for the amsterdam dataset
// if (range < 6.0 && rowIdn <= 7 && (columnIdn >= 1600 || columnIdn <= 200))
// continue;
// if (thisPoint.z < -2.0)
// continue;
thisPoint = deskewPoint(&thisPoint, laserCloudIn->points[i].time); // Velodyne
// thisPoint = deskewPoint(&thisPoint, (float)laserCloudIn->points[i].t / 1000000000.0); // Ouster
//store the range of the (deskewed) point
rangeMat.at<float>(rowIdn, columnIdn) = pointDistance(thisPoint);
int index = columnIdn + rowIdn * Horizon_SCAN;
fullCloud->points[index] = thisPoint;
}
}
void cloudExtraction()
{
int count = 0;
// extract segmented cloud for lidar odometry
for (int i = 0; i < N_SCAN; ++i)
{
cloudInfo.startRingIndex[i] = count - 1 + 5;
for (int j = 0; j < Horizon_SCAN; ++j)
{
if (rangeMat.at<float>(i,j) != FLT_MAX)
{
// mark the points' column index for marking occlusion later
cloudInfo.pointColInd[count] = j;
// save range info
cloudInfo.pointRange[count] = rangeMat.at<float>(i,j);
// save extracted cloud
extractedCloud->push_back(fullCloud->points[j + i*Horizon_SCAN]);
// size of extracted cloud
++count;
}
}
cloudInfo.endRingIndex[i] = count -1 - 5;
}
}
void publishClouds()
{
cloudInfo.header = cloudHeader;
//publishCloud is defined in utility.h
cloudInfo.cloud_deskewed = publishCloud(&pubExtractedCloud, extractedCloud, cloudHeader.stamp, "base_link");
pubLaserCloudInfo.publish(cloudInfo);
}
};
int main(int argc, char** argv)
{
ros::init(argc, argv, "lio_sam");
ImageProjection IP;
ROS_INFO("\033[1;32m----> Image Projection Started.\033[0m");
ros::MultiThreadedSpinner spinner(3);
spinner.spin();
return 0;
}
#include "utility.h"
#include "lio_sam/cloud_info.h"
//curvature value and point index
struct smoothness_t{
float value;
size_t ind;
};
//comparator for sorting by curvature
struct by_value{
bool operator()(smoothness_t const &left, smoothness_t const &right) {
return left.value < right.value;
}
};
class FeatureExtraction : public ParamServer
{
public:
ros::Subscriber subLaserCloudInfo;
ros::Publisher pubLaserCloudInfo;
ros::Publisher pubCornerPoints;
ros::Publisher pubSurfacePoints;
pcl::PointCloud<PointType>::Ptr extractedCloud;
pcl::PointCloud<PointType>::Ptr cornerCloud;
pcl::PointCloud<PointType>::Ptr surfaceCloud;
pcl::VoxelGrid<PointType> downSizeFilter;
lio_sam::cloud_info cloudInfo;
std_msgs::Header cloudHeader;
//curvature storage
std::vector<smoothness_t> cloudSmoothness;
float *cloudCurvature;
int *cloudNeighborPicked;
int *cloudLabel;
FeatureExtraction()
{
subLaserCloudInfo = nh.subscribe("lio_sam/deskew/cloud_info", 10, &FeatureExtraction::laserCloudInfoHandler, this, ros::TransportHints().tcpNoDelay());
pubLaserCloudInfo = nh.advertise<lio_sam::cloud_info> ("lio_sam/feature/cloud_info", 10);
pubCornerPoints = nh.advertise<sensor_msgs::PointCloud2>("lio_sam/feature/cloud_corner", 1);
pubSurfacePoints = nh.advertise<sensor_msgs::PointCloud2>("lio_sam/feature/cloud_surface", 1);
initializationValue();
}
void initializationValue()
{
cloudSmoothness.resize(N_SCAN*Horizon_SCAN);
downSizeFilter.setLeafSize(odometrySurfLeafSize, odometrySurfLeafSize, odometrySurfLeafSize);
extractedCloud.reset(new pcl::PointCloud<PointType>());
cornerCloud.reset(new pcl::PointCloud<PointType>());
surfaceCloud.reset(new pcl::PointCloud<PointType>());
cloudCurvature = new float[N_SCAN*Horizon_SCAN];
cloudNeighborPicked = new int[N_SCAN*Horizon_SCAN];
cloudLabel = new int[N_SCAN*Horizon_SCAN];
}
void laserCloudInfoHandler(const lio_sam::cloud_infoConstPtr& msgIn)
{
cloudInfo = *msgIn; // new cloud info
cloudHeader = msgIn->header; // new cloud header
pcl::fromROSMsg(msgIn->cloud_deskewed, *extractedCloud); // new cloud for extraction
//curvature from the ten neighboring points
calculateSmoothness();
//if two neighboring points differ strongly in range, mark them (and their neighbors) as unavailable for feature extraction
markOccludedPoints();
//extract the edge and planar points of the scan
extractFeatures();
publishFeatureCloud();
}
//curvature from the ten neighboring points
void calculateSmoothness()
{
int cloudSize = extractedCloud->points.size();
for (int i = 5; i < cloudSize - 5; i++)
{
float diffRange = cloudInfo.pointRange[i-5] + cloudInfo.pointRange[i-4]
+ cloudInfo.pointRange[i-3] + cloudInfo.pointRange[i-2]
+ cloudInfo.pointRange[i-1] - cloudInfo.pointRange[i] * 10
+ cloudInfo.pointRange[i+1] + cloudInfo.pointRange[i+2]
+ cloudInfo.pointRange[i+3] + cloudInfo.pointRange[i+4]
+ cloudInfo.pointRange[i+5];
cloudCurvature[i] = diffRange*diffRange;//diffX * diffX + diffY * diffY + diffZ * diffZ;
cloudNeighborPicked[i] = 0;
cloudLabel[i] = 0;
// cloudSmoothness for sorting
cloudSmoothness[i].value = cloudCurvature[i];
cloudSmoothness[i].ind = i;
}
}
//if two neighboring points differ strongly in range, mark them (and their neighbors) as unavailable for feature extraction
void markOccludedPoints()
{
int cloudSize = extractedCloud->points.size();
// mark occluded points and parallel beam points
for (int i = 5; i < cloudSize - 6; ++i)
{
// occluded points
float depth1 = cloudInfo.pointRange[i];
float depth2 = cloudInfo.pointRange[i+1];
int columnDiff = std::abs(int(cloudInfo.pointColInd[i+1] - cloudInfo.pointColInd[i]));
if (columnDiff < 10){
// 10 pixel diff in range image
if (depth1 - depth2 > 0.3){
cloudNeighborPicked[i - 5] = 1;
cloudNeighborPicked[i - 4] = 1;
cloudNeighborPicked[i - 3] = 1;
cloudNeighborPicked[i - 2] = 1;
cloudNeighborPicked[i - 1] = 1;
cloudNeighborPicked[i] = 1;
}else if (depth2 - depth1 > 0.3){
cloudNeighborPicked[i + 1] = 1;
cloudNeighborPicked[i + 2] = 1;
cloudNeighborPicked[i + 3] = 1;
cloudNeighborPicked[i + 4] = 1;
cloudNeighborPicked[i + 5] = 1;
cloudNeighborPicked[i + 6] = 1;
}
}
// parallel beam
float diff1 = std::abs(float(cloudInfo.pointRange[i-1] - cloudInfo.pointRange[i]));
float diff2 = std::abs(float(cloudInfo.pointRange[i+1] - cloudInfo.pointRange[i]));
if (diff1 > 0.02 * cloudInfo.pointRange[i] && diff2 > 0.02 * cloudInfo.pointRange[i])
cloudNeighborPicked[i] = 1;
}
}
//extract the edge and planar points of the scan
void extractFeatures()
{
cornerCloud->clear();
surfaceCloud->clear();
pcl::PointCloud<PointType>::Ptr surfaceCloudScan(new pcl::PointCloud<PointType>());
pcl::PointCloud<PointType>::Ptr surfaceCloudScanDS(new pcl::PointCloud<PointType>());
//as in LOAM, each ring is split into six segments; each segment contributes up to 20 edge points and all of its planar points
for (int i = 0; i < N_SCAN; i++)
{
surfaceCloudScan->clear();
for (int j = 0; j < 6; j++)
{
//start and end indices of this segment
int sp = (cloudInfo.startRingIndex[i] * (6 - j) + cloudInfo.endRingIndex[i] * j) / 6;
int ep = (cloudInfo.startRingIndex[i] * (5 - j) + cloudInfo.endRingIndex[i] * (j + 1)) / 6 - 1;
if (sp >= ep)
continue;
//sort the segment by curvature
std::sort(cloudSmoothness.begin()+sp, cloudSmoothness.begin()+ep, by_value());
int largestPickedNum = 0;
for (int k = ep; k >= sp; k--)
{
int ind = cloudSmoothness[k].ind;
//if the point is still pickable and its curvature exceeds edgeThreshold, keep up to 20 edge points per segment;
//neighbors are then marked as picked (unless the column gap is large) so the features stay evenly distributed
if (cloudNeighborPicked[ind] == 0 && cloudCurvature[ind] > edgeThreshold)
{
largestPickedNum++;
if (largestPickedNum <= 20){
cloudLabel[ind] = 1;
cornerCloud->push_back(extractedCloud->points[ind]);
} else {
break;
}
cloudNeighborPicked[ind] = 1;
for (int l = 1; l <= 5; l++)
{
int columnDiff = std::abs(int(cloudInfo.pointColInd[ind + l] - cloudInfo.pointColInd[ind + l - 1]));
if (columnDiff > 10)
break;
cloudNeighborPicked[ind + l] = 1;
}
for (int l = -1; l >= -5; l--)
{
int columnDiff = std::abs(int(cloudInfo.pointColInd[ind + l] - cloudInfo.pointColInd[ind + l + 1]));
if (columnDiff > 10)
break;
cloudNeighborPicked[ind + l] = 1;
}
}
}
//if the point is still pickable and its curvature is below surfThreshold it becomes a planar point;
//neighbors are then marked as picked (unless the column gap is large) so the features stay evenly distributed
for (int k = sp; k <= ep; k++)
{
int ind = cloudSmoothness[k].ind;
if (cloudNeighborPicked[ind] == 0 && cloudCurvature[ind] < surfThreshold)
{
cloudLabel[ind] = -1;
cloudNeighborPicked[ind] = 1;
for (int l = 1; l <= 5; l++) {
int columnDiff = std::abs(int(cloudInfo.pointColInd[ind + l] - cloudInfo.pointColInd[ind + l - 1]));
if (columnDiff > 10)
break;
cloudNeighborPicked[ind + l] = 1;
}
for (int l = -1; l >= -5; l--) {
int columnDiff = std::abs(int(cloudInfo.pointColInd[ind + l] - cloudInfo.pointColInd[ind + l + 1]));
if (columnDiff > 10)
break;
cloudNeighborPicked[ind + l] = 1;
}
}
}
for (int k = sp; k <= ep; k++)
{
if (cloudLabel[k] <= 0){
surfaceCloudScan->push_back(extractedCloud->points[k]);
}
}
}
surfaceCloudScanDS->clear();
downSizeFilter.setInputCloud(surfaceCloudScan);
downSizeFilter.filter(*surfaceCloudScanDS);
*surfaceCloud += *surfaceCloudScanDS;
}
}
void freeCloudInfoMemory()
{
cloudInfo.startRingIndex.clear();
cloudInfo.endRingIndex.clear();
cloudInfo.pointColInd.clear();
cloudInfo.pointRange.clear();
}
void publishFeatureCloud()
{
// free cloud info memory
freeCloudInfoMemory();
// save newly extracted features
cloudInfo.cloud_corner = publishCloud(&pubCornerPoints, cornerCloud, cloudHeader.stamp, "base_link");
cloudInfo.cloud_surface = publishCloud(&pubSurfacePoints, surfaceCloud, cloudHeader.stamp, "base_link");
// publish to mapOptimization
pubLaserCloudInfo.publish(cloudInfo);
}
};
int main(int argc, char** argv)
{
ros::init(argc, argv, "lio_sam");
FeatureExtraction FE;
ROS_INFO("\033[1;32m----> Feature Extraction Started.\033[0m");
ros::spin();
return 0;
}
I found a blog whose annotations of the remaining files are better than mine, so the comments below are quoted from it.
#include "utility.h"
#include <gtsam/geometry/Rot3.h>
#include <gtsam/geometry/Pose3.h>
#include <gtsam/slam/PriorFactor.h>
#include <gtsam/slam/BetweenFactor.h>
#include <gtsam/navigation/GPSFactor.h>
#include <gtsam/navigation/ImuFactor.h>
#include <gtsam/navigation/CombinedImuFactor.h>
#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/nonlinear/LevenbergMarquardtOptimizer.h>
#include <gtsam/nonlinear/Marginals.h>
#include <gtsam/nonlinear/Values.h>
#include <gtsam/inference/Symbol.h>
#include <gtsam/nonlinear/ISAM2.h>
using gtsam::symbol_shorthand::X; // Pose3 (x,y,z,r,p,y)
using gtsam::symbol_shorthand::V; // Vel (xdot,ydot,zdot)
using gtsam::symbol_shorthand::B; // Bias (ax,ay,az,gx,gy,gz)
class IMUPreintegration : public ParamServer {
public:
IMUPreintegration() {
// subscribers: the IMU data and the lidar odometry from mapOptimization
// all of the logic lives in the two callbacks; the two streams are coupled: the lidar odometry corrects the preintegration,
// and the optimized preintegration in turn predicts a high-rate odometry
subImu = nh.subscribe(imuTopic, 2000, &IMUPreintegration::imuHandler, this,
ros::TransportHints().tcpNoDelay());
subOdometry = nh.subscribe("lio_sam/mapping/odometry", 5,
&IMUPreintegration::odometryHandler, this,
ros::TransportHints().tcpNoDelay());
// publishers: the fused IMU path and the odometry predicted from the optimized preintegration
pubImuOdometry = nh.advertise<nav_msgs::Odometry>(odomTopic, 2000);
pubImuPath = nh.advertise<nav_msgs::Path>("lio_sam/imu/path", 1);
map_to_odom = tf::Transform(tf::createQuaternionFromRPY(0, 0, 0), tf::Vector3(0, 0, 0));
// gtsam parameter setup used by the preintegration
boost::shared_ptr<gtsam::PreintegrationParams> p = gtsam::PreintegrationParams::MakeSharedU(imuGravity);
p->accelerometerCovariance = gtsam::Matrix33::Identity(3, 3) * pow(imuAccNoise, 2); // acc white noise in continuous
p->gyroscopeCovariance = gtsam::Matrix33::Identity(3, 3) * pow(imuGyrNoise, 2); // gyro white noise in continuous
p->integrationCovariance =
gtsam::Matrix33::Identity(3, 3) * pow(1e-4, 2); // error committed in integrating position from velocities
gtsam::imuBias::ConstantBias
prior_imu_bias((gtsam::Vector(6) << 0, 0, 0, 0, 0, 0).finished());; // assume zero initial bias
priorPoseNoise = gtsam::noiseModel::Diagonal::Sigmas((gtsam::Vector(6)
<< 1e-2, 1e-2, 1e-2, 1e-2, 1e-2, 1e-2).finished()); // rad,rad,rad,m, m, m
priorVelNoise = gtsam::noiseModel::Isotropic::Sigma(3, 1e2); // m/s
priorBiasNoise = gtsam::noiseModel::Isotropic::Sigma(6, 1e-3); // 1e-2 ~ 1e-3 seems to be good
correctionNoise = gtsam::noiseModel::Isotropic::Sigma(6, 1e-2); // meter
noiseModelBetweenBias =
(gtsam::Vector(6) << imuAccBiasN, imuAccBiasN, imuAccBiasN, imuGyrBiasN, imuGyrBiasN, imuGyrBiasN).finished();
// two preintegrators: one serving the IMU message thread, one serving the optimization
imuIntegratorImu_ = new gtsam::PreintegratedImuMeasurements(p, prior_imu_bias); // setting up the IMU integration for IMU message thread
imuIntegratorOpt_ =new gtsam::PreintegratedImuMeasurements(p, prior_imu_bias); // setting up the IMU integration for optimization
}
void resetOptimization() {
// reset the gtsam optimizer and graph
gtsam::ISAM2Params optParameters;
optParameters.relinearizeThreshold = 0.1;
optParameters.relinearizeSkip = 1;
optimizer = gtsam::ISAM2(optParameters);
gtsam::NonlinearFactorGraph newGraphFactors;
graphFactors = newGraphFactors;
gtsam::Values NewGraphValues;
graphValues = NewGraphValues;
}
void resetParams() {
lastImuT_imu = -1;
doneFirstOpt = false;
systemInitialized = false;
}
void odometryHandler(const nav_msgs::Odometry::ConstPtr &odomMsg) {
double currentCorrectionTime = ROS_TIME(odomMsg);
// make sure we have imu data to integrate
// the two callbacks cooperate: the imu callback requires that one optimization
// has completed here before it starts predicting
if (imuQueOpt.empty())
return;
// extract the pose from the lidar odometry message
float p_x = odomMsg->pose.pose.position.x;
float p_y = odomMsg->pose.pose.position.y;
float p_z = odomMsg->pose.pose.position.z;
float r_x = odomMsg->pose.pose.orientation.x;
float r_y = odomMsg->pose.pose.orientation.y;
float r_z = odomMsg->pose.pose.orientation.z;
float r_w = odomMsg->pose.pose.orientation.w;
int currentResetId = round(odomMsg->pose.covariance[0]);
gtsam::Pose3 lidarPose = gtsam::Pose3(gtsam::Rot3::Quaternion(r_w, r_x, r_y, r_z),
gtsam::Point3(p_x, p_y, p_z));
// correction pose jumped, reset imu pre-integration
if (currentResetId != imuPreintegrationResetId) {
resetParams();
imuPreintegrationResetId = currentResetId;
}
// 0. initialize system
// the first frame initializes the system
if (systemInitialized == false) {
resetOptimization(); // reset the optimizer and graph
// pop old IMU message
// drop older imu data: we only need the lidar odometry timestamp to lie inside the imu queue,
// which is cheap because the imu is high-rate
while (!imuQueOpt.empty()) {
if (ROS_TIME(&imuQueOpt.front()) < currentCorrectionTime - delta_t) {
lastImuT_opt = ROS_TIME(&imuQueOpt.front());
imuQueOpt.pop_front();
} else
break;
}
// initial pose
prevPose_ = lidarPose.compose(lidar2Imu); // transform the lidar odometry into the imu frame
//PriorFactors for pose, velocity and bias
//adding priors is essentially a prerequisite for anchoring the graph optimization
gtsam::PriorFactor<gtsam::Pose3> priorPose(X(0), prevPose_, priorPoseNoise);
graphFactors.add(priorPose);
// initial velocity
prevVel_ = gtsam::Vector3(0, 0, 0);
gtsam::PriorFactor<gtsam::Vector3> priorVel(V(0), prevVel_, priorVelNoise);
graphFactors.add(priorVel);
// initial bias
prevBias_ = gtsam::imuBias::ConstantBias();
gtsam::PriorFactor<gtsam::imuBias::ConstantBias> priorBias(B(0), prevBias_, priorBiasNoise);
graphFactors.add(priorBias);
// add values
// besides the factors, the graph needs initial estimates for the variable nodes
graphValues.insert(X(0), prevPose_);
graphValues.insert(V(0), prevVel_);
graphValues.insert(B(0), prevBias_);
// optimize once
optimizer.update(graphFactors, graphValues);
graphFactors.resize(0);
graphValues.clear(); // the staging graph and values are cleared because ISAM2 keeps them internally after update()
// reset the preintegrators with the prior bias
imuIntegratorImu_->resetIntegrationAndSetBias(prevBias_);
imuIntegratorOpt_->resetIntegrationAndSetBias(prevBias_);
key = 1; // node counter
systemInitialized = true;
return;
}
// reset graph for speed
// once key reaches 100 the whole graph is rebuilt
// to bound the computation, keeping the latest noise values as the new priors
if (key == 100) {
// get updated noise before reset
gtsam::noiseModel::Gaussian::shared_ptr updatedPoseNoise = gtsam::noiseModel::Gaussian::Covariance(
optimizer.marginalCovariance(X(key - 1)));
gtsam::noiseModel::Gaussian::shared_ptr updatedVelNoise = gtsam::noiseModel::Gaussian::Covariance(
optimizer.marginalCovariance(V(key - 1)));
gtsam::noiseModel::Gaussian::shared_ptr updatedBiasNoise = gtsam::noiseModel::Gaussian::Covariance(
optimizer.marginalCovariance(B(key - 1)));
// reset graph and parameters
resetOptimization();
// after the reset, re-initialize as before; the only difference is the updated noise models
// add pose
gtsam::PriorFactor<gtsam::Pose3> priorPose(X(0), prevPose_, updatedPoseNoise);
graphFactors.add(priorPose);
// add velocity
gtsam::PriorFactor<gtsam::Vector3> priorVel(V(0), prevVel_, updatedVelNoise);
graphFactors.add(priorVel);
// add bias
gtsam::PriorFactor<gtsam::imuBias::ConstantBias> priorBias(B(0), prevBias_, updatedBiasNoise);
graphFactors.add(priorBias);
// add values
graphValues.insert(X(0), prevPose_);
graphValues.insert(V(0), prevVel_);
graphValues.insert(B(0), prevBias_);
// optimize once
// effectively a re-initialization
optimizer.update(graphFactors, graphValues);
graphFactors.resize(0);
graphValues.clear();
key = 1;
}
// 1. integrate imu data and optimize
// the main optimization flow starts here
while (!imuQueOpt.empty()) {
// pop and integrate the imu data that arrived between the two optimizations
sensor_msgs::Imu *thisImu = &imuQueOpt.front(); // oldest imu message in the queue
double imuTime = ROS_TIME(thisImu);
if (imuTime < currentCorrectionTime - delta_t) {
// dt is 1/500 s for the first sample, afterwards the gap between consecutive imu messages
double dt = (lastImuT_opt < 0) ? (1.0 / 500.0) : (imuTime - lastImuT_opt);
// preintegrate this measurement (acceleration and angular velocity);
// the orientation of the required 9-axis imu is never used here, only in the map optimization
imuIntegratorOpt_->integrateMeasurement(
gtsam::Vector3(thisImu->linear_acceleration.x, thisImu->linear_acceleration.y,
thisImu->linear_acceleration.z),
gtsam::Vector3(thisImu->angular_velocity.x, thisImu->angular_velocity.y,
thisImu->angular_velocity.z), dt);
//remember this message's timestamp before popping it
lastImuT_opt = imuTime;
imuQueOpt.pop_front();
} else
break;
}
// add imu factor to graph
// once the imu data between the two lidar frames has been preintegrated, add the imu factor to the graph
const gtsam::PreintegratedImuMeasurements
&preint_imu = dynamic_cast<const gtsam::PreintegratedImuMeasurements &>(*imuIntegratorOpt_);
gtsam::ImuFactor imu_factor(X(key - 1), V(key - 1), X(key), V(key), B(key - 1), preint_imu);
graphFactors.add(imu_factor);
// add imu bias between factor
graphFactors.add(
gtsam::BetweenFactor<gtsam::imuBias::ConstantBias>(B(key - 1), B(key), gtsam::imuBias::ConstantBias(),
gtsam::noiseModel::Diagonal::Sigmas(
sqrt(imuIntegratorOpt_->deltaTij()) *
noiseModelBetweenBias)));
// add pose factor
// this is the lidar odometry factor of the paper's factor graph, sitting alongside the imu factor
gtsam::Pose3 curPose = lidarPose.compose(lidar2Imu);
gtsam::PriorFactor<gtsam::Pose3> pose_factor(X(key), curPose, correctionNoise);
graphFactors.add(pose_factor);
// insert predicted values
// i.e. the x_k, v_k, b_k nodes of the factor graph
gtsam::NavState propState_ = imuIntegratorOpt_->predict(prevState_, prevBias_);
graphValues.insert(X(key), propState_.pose());
graphValues.insert(V(key), propState_.v());
graphValues.insert(B(key), prevBias_);
// optimize, then reset the staging containers
optimizer.update(graphFactors, graphValues);
optimizer.update();
graphFactors.resize(0);
graphValues.clear();
// Overwrite the beginning of the preintegration for the next step.
// overwrite the previous state with this optimization's result, ready for the next round
gtsam::Values result = optimizer.calculateEstimate();
prevPose_ = result.at<gtsam::Pose3>(X(key));
prevVel_ = result.at<gtsam::Vector3>(V(key));
prevState_ = gtsam::NavState(prevPose_, prevVel_);
prevBias_ = result.at<gtsam::imuBias::ConstantBias>(B(key));
// Reset the optimization preintegration object.
imuIntegratorOpt_->resetIntegrationAndSetBias(prevBias_);
// check optimization
// if the result looks like a failure, reset the parameters
if (failureDetection(prevVel_, prevBias_)) {
resetParams();
return;
}
// 2. after optimization, re-propagate imu odometry preintegration
// to stay real-time, imuIntegratorImu_ picks up the newest bias right after every odometry-triggered optimization
// and re-propagates the buffered imu measurements (imuQueImu) with it, which keeps the prediction both fast and accurate
prevStateOdom = prevState_;
prevBiasOdom = prevBias_;
// first pop imu message older than current correction data
// drop imu messages older than the correction time
double lastImuQT = -1;
while (!imuQueImu.empty() && ROS_TIME(&imuQueImu.front()) < currentCorrectionTime - delta_t) {
lastImuQT = ROS_TIME(&imuQueImu.front());
imuQueImu.pop_front();
}
// re-propagate
if (!imuQueImu.empty()) {
// reset the bias with the newly optimized value
imuIntegratorImu_->resetIntegrationAndSetBias(prevBiasOdom);
// integrate imu message from the beginning of this optimization
// same preintegration as before over imuQueImu; the only difference is the bias updated in the line above
for (int i = 0; i < (int) imuQueImu.size(); ++i) {
sensor_msgs::Imu *thisImu = &imuQueImu[i];
double imuTime = ROS_TIME(thisImu); // timestamp
double dt = (lastImuQT < 0) ? (1.0 / 500.0) : (imuTime - lastImuQT);
// preintegrate
imuIntegratorImu_->integrateMeasurement(
gtsam::Vector3(thisImu->linear_acceleration.x, thisImu->linear_acceleration.y,
thisImu->linear_acceleration.z),
gtsam::Vector3(thisImu->angular_velocity.x, thisImu->angular_velocity.y,
thisImu->angular_velocity.z), dt);
lastImuQT = imuTime;
}
}
++key;
doneFirstOpt = true;
}
bool failureDetection(const gtsam::Vector3 &velCur, const gtsam::imuBias::ConstantBias &biasCur) {
// failure detection for the preintegration: report the error so the caller can reset the integrator
Eigen::Vector3f vel(velCur.x(), velCur.y(), velCur.z());
if (vel.norm() > 10) {
ROS_WARN("Large velocity, reset IMU-preintegration!");
return true;
}
Eigen::Vector3f ba(biasCur.accelerometer().x(), biasCur.accelerometer().y(), biasCur.accelerometer().z());
Eigen::Vector3f bg(biasCur.gyroscope().x(), biasCur.gyroscope().y(), biasCur.gyroscope().z());
if (ba.norm() > 0.1 || bg.norm() > 0.1) {
ROS_WARN("Large bias, reset IMU-preintegration!");
return true;
}
return false;
}
void imuHandler(const sensor_msgs::Imu::ConstPtr &imu_raw) {
// rotate the imu data into the lidar (base_link) frame
sensor_msgs::Imu thisImu = imuConverter(*imu_raw);
// publish static tf map->odom
tfMap2Odom.sendTransform(tf::StampedTransform(map_to_odom, thisImu.header.stamp, "map", "odom"));
// the two deques hold imu data for the optimization thread and for the high-rate prediction
imuQueOpt.push_back(thisImu);
imuQueImu.push_back(thisImu);
// require that odometryHandler has completed at least one optimization before predicting here
if (doneFirstOpt == false)
return;
// time step: 1/500 s the first time, afterwards the gap between consecutive imuTime stamps
double imuTime = ROS_TIME(&thisImu);
double dt = (lastImuT_imu < 0) ? (1.0 / 500.0) : (imuTime - lastImuT_imu);
lastImuT_imu = imuTime;
// integrate this single imu message
imuIntegratorImu_->integrateMeasurement(gtsam::Vector3(thisImu.linear_acceleration.x, thisImu.linear_acceleration.y,
thisImu.linear_acceleration.z),
gtsam::Vector3(thisImu.angular_velocity.x,
thisImu.angular_velocity.y,
thisImu.angular_velocity.z), dt);
// predict odometry
// predict the odom from the preintegration result
gtsam::NavState currentState = imuIntegratorImu_->predict(prevStateOdom, prevBiasOdom);
// publish the new odometry
nav_msgs::Odometry odometry;
odometry.header.stamp = thisImu.header.stamp;
odometry.header.frame_id = "odom";
odometry.child_frame_id = "odom_imu";
// transform the imu pose to the lidar frame
// the prediction currentState gives the imu pose; composing with the imu-to-lidar extrinsics gives the lidar pose
gtsam::Pose3 imuPose = gtsam::Pose3(currentState.quaternion(), currentState.position());
gtsam::Pose3 lidarPose = imuPose.compose(imu2Lidar);
odometry.pose.pose.position.x = lidarPose.translation().x();
odometry.pose.pose.position.y = lidarPose.translation().y();
odometry.pose.pose.position.z = lidarPose.translation().z();
odometry.pose.pose.orientation.x = lidarPose.rotation().toQuaternion().x();
odometry.pose.pose.orientation.y = lidarPose.rotation().toQuaternion().y();
odometry.pose.pose.orientation.z = lidarPose.rotation().toQuaternion().z();
odometry.pose.pose.orientation.w = lidarPose.rotation().toQuaternion().w();
odometry.twist.twist.linear.x = currentState.velocity().x();
odometry.twist.twist.linear.y = currentState.velocity().y();
odometry.twist.twist.linear.z = currentState.velocity().z();
odometry.twist.twist.angular.x = thisImu.angular_velocity.x + prevBiasOdom.gyroscope().x();
odometry.twist.twist.angular.y = thisImu.angular_velocity.y + prevBiasOdom.gyroscope().y();
odometry.twist.twist.angular.z = thisImu.angular_velocity.z + prevBiasOdom.gyroscope().z();
odometry.pose.covariance[0] = double(imuPreintegrationResetId);
pubImuOdometry.publish(odometry);
// publish imu path
// the predicted imu path, keeping only the last 3 s of poses
static nav_msgs::Path imuPath;
static double last_path_time = -1;
if (imuTime - last_path_time > 0.1) {
last_path_time = imuTime;
geometry_msgs::PoseStamped pose_stamped;
pose_stamped.header.stamp = thisImu.header.stamp;
pose_stamped.header.frame_id = "odom";
pose_stamped.pose = odometry.pose.pose;
imuPath.poses.push_back(pose_stamped);
while (!imuPath.poses.empty() &&
abs(imuPath.poses.front().header.stamp.toSec() - imuPath.poses.back().header.stamp.toSec()) > 3.0)
imuPath.poses.erase(imuPath.poses.begin());
if (pubImuPath.getNumSubscribers() != 0) {
imuPath.header.stamp = thisImu.header.stamp;
imuPath.header.frame_id = "odom";
pubImuPath.publish(imuPath);
}
}
// publish transformation
// broadcast the odom->base_link transform
tf::Transform tCur;
tf::poseMsgToTF(odometry.pose.pose, tCur);
tf::StampedTransform odom_2_baselink = tf::StampedTransform(tCur, thisImu.header.stamp, "odom", "base_link");
tfOdom2BaseLink.sendTransform(odom_2_baselink);
}
public:
ros::Subscriber subImu;
ros::Subscriber subOdometry;
ros::Publisher pubImuOdometry;
ros::Publisher pubImuPath;
// map -> odom
tf::Transform map_to_odom;
tf::TransformBroadcaster tfMap2Odom;
// odom -> base_link
tf::TransformBroadcaster tfOdom2BaseLink;
bool systemInitialized = false;
gtsam::noiseModel::Diagonal::shared_ptr priorPoseNoise;
gtsam::noiseModel::Diagonal::shared_ptr priorVelNoise;
gtsam::noiseModel::Diagonal::shared_ptr priorBiasNoise;
gtsam::noiseModel::Diagonal::shared_ptr correctionNoise;
gtsam::Vector noiseModelBetweenBias;
gtsam::PreintegratedImuMeasurements *imuIntegratorOpt_;
gtsam::PreintegratedImuMeasurements *imuIntegratorImu_;
std::deque<sensor_msgs::Imu> imuQueOpt;
std::deque<sensor_msgs::Imu> imuQueImu;
gtsam::Pose3 prevPose_;
gtsam::Vector3 prevVel_;
gtsam::NavState prevState_;
gtsam::imuBias::ConstantBias prevBias_;
gtsam::NavState prevStateOdom;
gtsam::imuBias::ConstantBias prevBiasOdom;
bool doneFirstOpt = false;
double lastImuT_imu = -1;
double lastImuT_opt = -1;
gtsam::ISAM2 optimizer;
gtsam::NonlinearFactorGraph graphFactors;
gtsam::Values graphValues;
const double delta_t = 0;
int key = 1;
int imuPreintegrationResetId = 0;
// lidar->imu extrinsics
gtsam::Pose3 imu2Lidar = gtsam::Pose3(gtsam::Rot3(1, 0, 0, 0),
gtsam::Point3(-extTrans.x(), -extTrans.y(), -extTrans.z()));
gtsam::Pose3 lidar2Imu = gtsam::Pose3(gtsam::Rot3(1, 0, 0, 0),
gtsam::Point3(extTrans.x(), extTrans.y(), extTrans.z()));;
};
int main(int argc, char **argv) {
ros::init(argc, argv, "roboat_loam");
IMUPreintegration ImuP;
ROS_INFO("\033[1;32m----> IMU Preintegration Started.\033[0m");
ros::spin();
return 0;
}
#include "utility.h"
#include "lio_sam/cloud_info.h"
#include <gtsam/geometry/Rot3.h>
#include <gtsam/geometry/Pose3.h>
#include <gtsam/slam/PriorFactor.h>
#include <gtsam/slam/BetweenFactor.h>
#include <gtsam/navigation/GPSFactor.h>
#include <gtsam/navigation/ImuFactor.h>
#include <gtsam/navigation/CombinedImuFactor.h>
#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/nonlinear/LevenbergMarquardtOptimizer.h>
#include <gtsam/nonlinear/Marginals.h>
#include <gtsam/nonlinear/Values.h>
#include <gtsam/inference/Symbol.h>
#include <gtsam/nonlinear/ISAM2.h>
using namespace gtsam;
using symbol_shorthand::X; // Pose3 (x,y,z,r,p,y)
using symbol_shorthand::V; // Vel (xdot,ydot,zdot)
using symbol_shorthand::B; // Bias (ax,ay,az,gx,gy,gz)
using symbol_shorthand::G; // GPS pose
/*
* A point cloud type that has 6D pose info ([x,y,z,roll,pitch,yaw] intensity is time stamp)
*/
struct PointXYZIRPYT {
PCL_ADD_POINT4D
PCL_ADD_INTENSITY; // preferred way of adding a XYZ+padding
float roll;
float pitch;
float yaw;
double time;
EIGEN_MAKE_ALIGNED_OPERATOR_NEW // make sure our new allocators are aligned
} EIGEN_ALIGN16; // enforce SSE padding for correct memory alignment
POINT_CLOUD_REGISTER_POINT_STRUCT (PointXYZIRPYT,
(float, x, x)(float, y, y)
(float, z, z)(float, intensity, intensity)
(float, roll, roll)(float, pitch, pitch)(float, yaw, yaw)
(double, time, time))
typedef PointXYZIRPYT PointTypePose;
class mapOptimization : public ParamServer {
public:
mapOptimization() {
ISAM2Params parameters;
parameters.relinearizeThreshold = 0.1;
parameters.relinearizeSkip = 1;
isam = new ISAM2(parameters);
// subscribers: mainly the classified cloud_info and the GPS, used for back-end optimization and loop closure;
// note that the GPS arrives as a nav_msgs::Odometry message, produced by the robot_localization node fusing imu and gps
// the callbacks only push the data into queues
subLaserCloudInfo = nh.subscribe("lio_sam/feature/cloud_info", 1,
&mapOptimization::laserCloudInfoHandler, this,
ros::TransportHints().tcpNoDelay());
subGPS = nh.subscribe(gpsTopic, 200, &mapOptimization::gpsHandler, this,
ros::TransportHints().tcpNoDelay());
// publishers: odometry and related topics
pubKeyPoses = nh.advertise<sensor_msgs::PointCloud2>("lio_sam/mapping/trajectory", 1);
pubLaserCloudSurround = nh.advertise<sensor_msgs::PointCloud2>("lio_sam/mapping/map_global", 1); // global map
pubOdomAftMappedROS = nh.advertise<nav_msgs::Odometry>("lio_sam/mapping/odometry", 1); // optimized odom
pubPath = nh.advertise<nav_msgs::Path>("lio_sam/mapping/path", 1);
// history frames used by loop closure
pubHistoryKeyFrames = nh.advertise<sensor_msgs::PointCloud2>("lio_sam/mapping/icp_loop_closure_history_cloud", 1);
pubIcpKeyFrames = nh.advertise<sensor_msgs::PointCloud2>("lio_sam/mapping/icp_loop_closure_corrected_cloud", 1);
// local map
pubRecentKeyFrames = nh.advertise<sensor_msgs::PointCloud2>("lio_sam/mapping/map_local", 1);
pubRecentKeyFrame = nh.advertise<sensor_msgs::PointCloud2>("lio_sam/mapping/cloud_registered", 1);
pubCloudRegisteredRaw = nh.advertise<sensor_msgs::PointCloud2>("lio_sam/mapping/cloud_registered_raw", 1);
// downsampling filters for the different feature types
downSizeFilterCorner.setLeafSize(mappingCornerLeafSize, mappingCornerLeafSize, mappingCornerLeafSize);
downSizeFilterSurf.setLeafSize(mappingSurfLeafSize, mappingSurfLeafSize, mappingSurfLeafSize);
downSizeFilterICP.setLeafSize(mappingSurfLeafSize, mappingSurfLeafSize, mappingSurfLeafSize);
downSizeFilterSurroundingKeyPoses.setLeafSize(surroundingKeyframeDensity, surroundingKeyframeDensity,
surroundingKeyframeDensity); // for surrounding key poses of scan-to-map optimization
allocateMemory();
}
void allocateMemory() {
// allocate containers and initialize parameters
cloudKeyPoses3D.reset(new pcl::PointCloud<PointType>());
cloudKeyPoses6D.reset(new pcl::PointCloud<PointTypePose>());
kdtreeSurroundingKeyPoses.reset(new pcl::KdTreeFLANN<PointType>());
kdtreeHistoryKeyPoses.reset(new pcl::KdTreeFLANN<PointType>());
laserCloudCornerLast.reset(new pcl::PointCloud<PointType>()); // corner feature set from odoOptimization
laserCloudSurfLast.reset(new pcl::PointCloud<PointType>()); // surf feature set from odoOptimization
laserCloudCornerLastDS.reset(
new pcl::PointCloud<PointType>()); // downsampled corner feature set from odoOptimization
laserCloudSurfLastDS.reset(
new pcl::PointCloud<PointType>()); // downsampled surf feature set from odoOptimization
laserCloudOri.reset(new pcl::PointCloud<PointType>());
coeffSel.reset(new pcl::PointCloud<PointType>());
laserCloudOriCornerVec.resize(N_SCAN * Horizon_SCAN);
coeffSelCornerVec.resize(N_SCAN * Horizon_SCAN);
laserCloudOriCornerFlag.resize(N_SCAN * Horizon_SCAN);
laserCloudOriSurfVec.resize(N_SCAN * Horizon_SCAN);
coeffSelSurfVec.resize(N_SCAN * Horizon_SCAN);
laserCloudOriSurfFlag.resize(N_SCAN * Horizon_SCAN);
std::fill(laserCloudOriCornerFlag.begin(), laserCloudOriCornerFlag.end(), false);
std::fill(laserCloudOriSurfFlag.begin(), laserCloudOriSurfFlag.end(), false);
laserCloudCornerFromMap.reset(new pcl::PointCloud<PointType>());
laserCloudSurfFromMap.reset(new pcl::PointCloud<PointType>());
laserCloudCornerFromMapDS.reset(new pcl::PointCloud<PointType>());
laserCloudSurfFromMapDS.reset(new pcl::PointCloud<PointType>());
kdtreeCornerFromMap.reset(new pcl::KdTreeFLANN<PointType>());
kdtreeSurfFromMap.reset(new pcl::KdTreeFLANN<PointType>());
latestKeyFrameCloud.reset(new pcl::PointCloud<PointType>());
nearHistoryKeyFrameCloud.reset(new pcl::PointCloud<PointType>());
for (int i = 0; i < 6; ++i) {
transformTobeMapped[i] = 0;
}
matP.setZero();
}
void laserCloudInfoHandler(const lio_sam::cloud_infoConstPtr &msgIn) {
// extract time stamp
timeLaserInfoStamp = msgIn->header.stamp;
timeLaserCloudInfoLast = msgIn->header.stamp.toSec();
// extract info and feature cloud
// the corner and surface clouds
cloudInfo = *msgIn;
pcl::fromROSMsg(msgIn->cloud_corner, *laserCloudCornerLast);
pcl::fromROSMsg(msgIn->cloud_surface, *laserCloudSurfLast);
std::lock_guard<std::mutex> lock(mtx);
// update the map at most every mappingProcessInterval (0.15 s)
if (timeLaserCloudInfoLast - timeLastProcessing >= mappingProcessInterval) {
timeLastProcessing = timeLaserCloudInfoLast;
updateInitialGuess(); // initial pose guess from the imu preintegration odometry
extractSurroundingKeyFrames(); // extract the nearby keyframes that form the local map
downsampleCurrentScan(); // downsample the corner cloud and surface cloud with their leaf sizes
scan2MapOptimization(); // build point-to-plane and point-to-line residuals and optimize them with Gauss-Newton
saveKeyFramesAndFactor(); // add factors and save the key poses
correctPoses(); // update the poses
// publish odom
publishOdometry(); // publish the incrementally smoothed odom
publishFrames(); // publish keyframe clouds and related topics
}
}
void gpsHandler(const nav_msgs::Odometry::ConstPtr &gpsMsg) {
gpsQueue.push_back(*gpsMsg);
}
void pointAssociateToMap(PointType const *const pi, PointType *const po) {
po->x = transPointAssociateToMap(0, 0) * pi->x + transPointAssociateToMap(0, 1) * pi->y +
transPointAssociateToMap(0, 2) * pi->z + transPointAssociateToMap(0, 3);
po->y = transPointAssociateToMap(1, 0) * pi->x + transPointAssociateToMap(1, 1) * pi->y +
transPointAssociateToMap(1, 2) * pi->z + transPointAssociateToMap(1, 3);
po->z = transPointAssociateToMap(2, 0) * pi->x + transPointAssociateToMap(2, 1) * pi->y +
transPointAssociateToMap(2, 2) * pi->z + transPointAssociateToMap(2, 3);
po->intensity = pi->intensity;
}
pcl::PointCloud<PointType>::Ptr
transformPointCloud(pcl::PointCloud<PointType>::Ptr cloudIn, PointTypePose *transformIn) {
pcl::PointCloud<PointType>::Ptr cloudOut(new pcl::PointCloud<PointType>());
PointType *pointFrom;
int cloudSize = cloudIn->size();
cloudOut->resize(cloudSize);
Eigen::Affine3f transCur = pcl::getTransformation(transformIn->x, transformIn->y, transformIn->z,
transformIn->roll, transformIn->pitch, transformIn->yaw);
for (int i = 0; i < cloudSize; ++i) {
pointFrom = &cloudIn->points[i];
cloudOut->points[i].x =
transCur(0, 0) * pointFrom->x + transCur(0, 1) * pointFrom->y + transCur(0, 2) * pointFrom->z +
transCur(0, 3);
cloudOut->points[i].y =
transCur(1, 0) * pointFrom->x + transCur(1, 1) * pointFrom->y + transCur(1, 2) * pointFrom->z +
transCur(1, 3);
cloudOut->points[i].z =
transCur(2, 0) * pointFrom->x + transCur(2, 1) * pointFrom->y + transCur(2, 2) * pointFrom->z +
transCur(2, 3);
cloudOut->points[i].intensity = pointFrom->intensity;
}
return cloudOut;
}
gtsam::Pose3 pclPointTogtsamPose3(PointTypePose thisPoint) {
return gtsam::Pose3(gtsam::Rot3::RzRyRx(double(thisPoint.roll), double(thisPoint.pitch), double(thisPoint.yaw)),
gtsam::Point3(double(thisPoint.x), double(thisPoint.y), double(thisPoint.z)));
}
gtsam::Pose3 trans2gtsamPose(float transformIn[]) {
return gtsam::Pose3(gtsam::Rot3::RzRyRx(transformIn[0], transformIn[1], transformIn[2]),
gtsam::Point3(transformIn[3], transformIn[4], transformIn[5]));
}
Eigen::Affine3f pclPointToAffine3f(PointTypePose thisPoint) {
return pcl::getTransformation(thisPoint.x, thisPoint.y, thisPoint.z, thisPoint.roll, thisPoint.pitch,
thisPoint.yaw);
}
Eigen::Affine3f trans2Affine3f(float transformIn[]) {
return pcl::getTransformation(transformIn[3], transformIn[4], transformIn[5], transformIn[0], transformIn[1],
transformIn[2]);
}
PointTypePose trans2PointTypePose(float transformIn[]) {
PointTypePose thisPose6D;
thisPose6D.x = transformIn[3];
thisPose6D.y = transformIn[4];
thisPose6D.z = transformIn[5];
thisPose6D.roll = transformIn[0];
thisPose6D.pitch = transformIn[1];
thisPose6D.yaw = transformIn[2];
return thisPose6D;
}
void visualizeGlobalMapThread() {
// publish the global map at a fixed rate (0.2 Hz)
ros::Rate rate(0.2);
while (ros::ok()) {
rate.sleep();
publishGlobalMap();
}
// once ROS shuts down, save the maps below (only if savePCD is enabled)
if (savePCD == false)
return;
cout << "****************************************************" << endl;
cout << "Saving map to pcd files ..." << endl;
// create directory and remove old files;
savePCDDirectory = std::getenv("HOME") + savePCDDirectory;
int unused = system((std::string("exec rm -r ") + savePCDDirectory).c_str());
unused = system((std::string("mkdir ") + savePCDDirectory).c_str());
// save key frame transformations
pcl::io::savePCDFileASCII(savePCDDirectory + "trajectory.pcd", *cloudKeyPoses3D);
pcl::io::savePCDFileASCII(savePCDDirectory + "transformations.pcd", *cloudKeyPoses6D);
// extract global point cloud map
pcl::PointCloud<PointType>::Ptr globalCornerCloud(new pcl::PointCloud<PointType>());
pcl::PointCloud<PointType>::Ptr globalCornerCloudDS(new pcl::PointCloud<PointType>());
pcl::PointCloud<PointType>::Ptr globalSurfCloud(new pcl::PointCloud<PointType>());
pcl::PointCloud<PointType>::Ptr globalSurfCloudDS(new pcl::PointCloud<PointType>());
pcl::PointCloud<PointType>::Ptr globalMapCloud(new pcl::PointCloud<PointType>());
for (int i = 0; i < cloudKeyPoses3D->size(); i++) {
*globalCornerCloud += *transformPointCloud(cornerCloudKeyFrames[i], &cloudKeyPoses6D->points[i]);
*globalSurfCloud += *transformPointCloud(surfCloudKeyFrames[i], &cloudKeyPoses6D->points[i]);
cout << "\r" << std::flush << "Processing feature cloud " << i << " of " << cloudKeyPoses6D->size()
<< " ...";
}
// down-sample and save corner cloud
downSizeFilterCorner.setInputCloud(globalCornerCloud);
downSizeFilterCorner.filter(*globalCornerCloudDS);
pcl::io::savePCDFileASCII(savePCDDirectory + "cloudCorner.pcd", *globalCornerCloudDS);
// down-sample and save surf cloud
downSizeFilterSurf.setInputCloud(globalSurfCloud);
downSizeFilterSurf.filter(*globalSurfCloudDS);
pcl::io::savePCDFileASCII(savePCDDirectory + "cloudSurf.pcd", *globalSurfCloudDS);
// down-sample and save global point cloud map
*globalMapCloud += *globalCornerCloud;
*globalMapCloud += *globalSurfCloud;
pcl::io::savePCDFileASCII(savePCDDirectory + "cloudGlobal.pcd", *globalMapCloud);
cout << "****************************************************" << endl;
cout << "Saving map to pcd files completed" << endl;
}
void publishGlobalMap() {
if (pubLaserCloudSurround.getNumSubscribers() == 0)
return;
// cloudKeyPoses3D stores the key-frame poses
if (cloudKeyPoses3D->points.empty() == true)
return;
pcl::KdTreeFLANN<PointType>::Ptr kdtreeGlobalMap(new pcl::KdTreeFLANN<PointType>());
pcl::PointCloud<PointType>::Ptr globalMapKeyPoses(new pcl::PointCloud<PointType>());
pcl::PointCloud<PointType>::Ptr globalMapKeyPosesDS(new pcl::PointCloud<PointType>());
pcl::PointCloud<PointType>::Ptr globalMapKeyFrames(new pcl::PointCloud<PointType>());
pcl::PointCloud<PointType>::Ptr globalMapKeyFramesDS(new pcl::PointCloud<PointType>());
// kd-tree to find near key frames to visualize
std::vector<int> pointSearchIndGlobalMap;
std::vector<float> pointSearchSqDisGlobalMap;
// search near key frames to visualize
mtx.lock();
kdtreeGlobalMap->setInputCloud(cloudKeyPoses3D);
kdtreeGlobalMap->radiusSearch(cloudKeyPoses3D->back(), globalMapVisualizationSearchRadius,
pointSearchIndGlobalMap, pointSearchSqDisGlobalMap, 0);
mtx.unlock();
// collect the nearby key-frame poses and publish their clouds
for (int i = 0; i < pointSearchIndGlobalMap.size(); ++i)
globalMapKeyPoses->push_back(cloudKeyPoses3D->points[pointSearchIndGlobalMap[i]]);
// downsample near selected key frames
pcl::VoxelGrid<PointType> downSizeFilterGlobalMapKeyPoses; // for global map visualization
downSizeFilterGlobalMapKeyPoses.setLeafSize(globalMapVisualizationPoseDensity,
globalMapVisualizationPoseDensity,
globalMapVisualizationPoseDensity); // for global map visualization
downSizeFilterGlobalMapKeyPoses.setInputCloud(globalMapKeyPoses);
downSizeFilterGlobalMapKeyPoses.filter(*globalMapKeyPosesDS);
// extract visualized and downsampled key frames
for (int i = 0; i < globalMapKeyPosesDS->size(); ++i) {
if (pointDistance(globalMapKeyPosesDS->points[i], cloudKeyPoses3D->back()) >
globalMapVisualizationSearchRadius)
continue;
int thisKeyInd = (int) globalMapKeyPosesDS->points[i].intensity;
*globalMapKeyFrames += *transformPointCloud(cornerCloudKeyFrames[thisKeyInd],
&cloudKeyPoses6D->points[thisKeyInd]);
*globalMapKeyFrames += *transformPointCloud(surfCloudKeyFrames[thisKeyInd],
&cloudKeyPoses6D->points[thisKeyInd]);
}
// downsample visualized points
pcl::VoxelGrid<PointType> downSizeFilterGlobalMapKeyFrames; // for global map visualization
downSizeFilterGlobalMapKeyFrames.setLeafSize(globalMapVisualizationLeafSize, globalMapVisualizationLeafSize,
globalMapVisualizationLeafSize); // for global map visualization
downSizeFilterGlobalMapKeyFrames.setInputCloud(globalMapKeyFrames);
downSizeFilterGlobalMapKeyFrames.filter(*globalMapKeyFramesDS);
publishCloud(&pubLaserCloudSurround, globalMapKeyFramesDS, timeLaserInfoStamp, "odom");
}
void loopClosureThread() {
// only run loop detection when loop closure is enabled
if (loopClosureEnableFlag == false)
return;
// perform loop closure detection at a fixed rate (0.2 Hz)
ros::Rate rate(0.2);
while (ros::ok()) {
rate.sleep();
performLoopClosure();
}
}
bool detectLoopClosure(int *latestID, int *closestID) {
int latestFrameIDLoopCloure;
int closestHistoryFrameID;
latestKeyFrameCloud->clear();
nearHistoryKeyFrameCloud->clear();
std::lock_guard<std::mutex> lock(mtx);
// find the closest history key frame
std::vector<int> pointSearchIndLoop;
std::vector<float> pointSearchSqDisLoop;
kdtreeHistoryKeyPoses->setInputCloud(cloudKeyPoses3D);
kdtreeHistoryKeyPoses->radiusSearch(cloudKeyPoses3D->back(), historyKeyframeSearchRadius, pointSearchIndLoop,
pointSearchSqDisLoop, 0);
// the candidate must also be far enough away in time from the current frame
closestHistoryFrameID = -1;
for (int i = 0; i < pointSearchIndLoop.size(); ++i) {
int id = pointSearchIndLoop[i];
if (abs(cloudKeyPoses6D->points[id].time - timeLaserCloudInfoLast) > historyKeyframeSearchTimeDiff) {
closestHistoryFrameID = id;
break;
}
}
if (closestHistoryFrameID == -1)
return false;
if (cloudKeyPoses3D->size() - 1 == closestHistoryFrameID)
return false;
// save latest key frames
latestFrameIDLoopCloure = cloudKeyPoses3D->size() - 1;
*latestKeyFrameCloud += *transformPointCloud(cornerCloudKeyFrames[latestFrameIDLoopCloure],
&cloudKeyPoses6D->points[latestFrameIDLoopCloure]);
*latestKeyFrameCloud += *transformPointCloud(surfCloudKeyFrames[latestFrameIDLoopCloure],
&cloudKeyPoses6D->points[latestFrameIDLoopCloure]);
// save history near key frames
bool nearFrameAvailable = false;
for (int j = -historyKeyframeSearchNum; j <= historyKeyframeSearchNum; ++j) {
if (closestHistoryFrameID + j < 0 || closestHistoryFrameID + j > latestFrameIDLoopCloure)
continue;
*nearHistoryKeyFrameCloud += *transformPointCloud(cornerCloudKeyFrames[closestHistoryFrameID + j],
&cloudKeyPoses6D->points[closestHistoryFrameID + j]);
*nearHistoryKeyFrameCloud += *transformPointCloud(surfCloudKeyFrames[closestHistoryFrameID + j],
&cloudKeyPoses6D->points[closestHistoryFrameID + j]);
nearFrameAvailable = true;
}
if (nearFrameAvailable == false)
return false;
*latestID = latestFrameIDLoopCloure;
*closestID = closestHistoryFrameID;
return true;
}
void performLoopClosure() {
if (cloudKeyPoses3D->points.empty() == true)
return;
int latestFrameIDLoopCloure; // id of the latest key frame in the key-frame queue
int closestHistoryFrameID; // id of the matched history key frame
if (detectLoopClosure(&latestFrameIDLoopCloure, &closestHistoryFrameID) == false)
return;
// a loop candidate was found: register the two clouds with ICP to obtain the correction transform
// ICP Settings
pcl::IterativeClosestPoint<PointType, PointType> icp;
icp.setMaxCorrespondenceDistance(100);
icp.setMaximumIterations(100);
icp.setTransformationEpsilon(1e-6);
icp.setEuclideanFitnessEpsilon(1e-6);
icp.setRANSACIterations(0);
// Downsample map cloud
pcl::PointCloud<PointType>::Ptr cloud_temp(new pcl::PointCloud<PointType>());
downSizeFilterICP.setInputCloud(nearHistoryKeyFrameCloud);
downSizeFilterICP.filter(*cloud_temp);
*nearHistoryKeyFrameCloud = *cloud_temp;
// publish history near key frames
publishCloud(&pubHistoryKeyFrames, nearHistoryKeyFrameCloud, timeLaserInfoStamp, "odom");
// Align clouds: match the latest key frame against the history local map
icp.setInputSource(latestKeyFrameCloud);
icp.setInputTarget(nearHistoryKeyFrameCloud);
pcl::PointCloud<PointType>::Ptr unused_result(new pcl::PointCloud<PointType>());
icp.align(*unused_result);
// use the ICP fitness score to decide whether the registration succeeded
// std::cout << "ICP converg flag:" << icp.hasConverged() << ". Fitness score: " << icp.getFitnessScore() << std::endl;
if (icp.hasConverged() == false || icp.getFitnessScore() > historyKeyframeFitnessScore)
return;
// publish corrected cloud
if (pubIcpKeyFrames.getNumSubscribers() != 0) {
pcl::PointCloud<PointType>::Ptr closed_cloud(new pcl::PointCloud<PointType>());
pcl::transformPointCloud(*latestKeyFrameCloud, *closed_cloud, icp.getFinalTransformation());
publishCloud(&pubIcpKeyFrames, closed_cloud, timeLaserInfoStamp, "odom");
}
// Get pose transformation
float x, y, z, roll, pitch, yaw;
Eigen::Affine3f correctionLidarFrame;
// transform between the two frames estimated by ICP
correctionLidarFrame = icp.getFinalTransformation();
// transform from world origin to wrong pose
Eigen::Affine3f tWrong = pclPointToAffine3f(cloudKeyPoses6D->points[latestFrameIDLoopCloure]);
// transform from world origin to corrected pose
Eigen::Affine3f tCorrect =
correctionLidarFrame * tWrong;// pre-multiplying -> successive rotation about a fixed frame
pcl::getTranslationAndEulerAngles(tCorrect, x, y, z, roll, pitch, yaw);
// add the loop-closure constraint to the gtsam graph
gtsam::Pose3 poseFrom = Pose3(Rot3::RzRyRx(roll, pitch, yaw), Point3(x, y, z));
gtsam::Pose3 poseTo = pclPointTogtsamPose3(cloudKeyPoses6D->points[closestHistoryFrameID]);
gtsam::Vector Vector6(6);
float noiseScore = icp.getFitnessScore();
Vector6 << noiseScore, noiseScore, noiseScore, noiseScore, noiseScore, noiseScore;
noiseModel::Diagonal::shared_ptr constraintNoise = noiseModel::Diagonal::Variances(Vector6);
// Add pose constraint
std::lock_guard<std::mutex> lock(mtx);
gtSAMgraph.add(BetweenFactor<Pose3>(latestFrameIDLoopCloure, closestHistoryFrameID, poseFrom.between(poseTo),
constraintNoise));
isam->update(gtSAMgraph);
isam->update();
isam->update();
isam->update();
isam->update();
isam->update();
gtSAMgraph.resize(0);
aLoopIsClosed = true;
}
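A compact restatement of the pose bookkeeping above (using T for homogeneous transforms): ICP runs in the map frame, so its result is the correction that moves the drifted latest key-frame pose onto the history sub-map, and the loop factor stores the corrected relative pose between the two key frames:

$$
T_{\text{correct}} = T_{\text{icp}}\,T_{\text{wrong}},\qquad
\Delta T_{\text{loop}} = T_{\text{correct}}^{-1}\,T_{\text{closest}}
$$

Here T_wrong is the current (drifted) pose of the latest key frame, T_closest the pose of the matched history key frame, and ΔT_loop is exactly the `poseFrom.between(poseTo)` value passed to the BetweenFactor.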
void updateInitialGuess() {
// update the initial pose guess; the source can be the preintegration odometry or the previous pose, stored in transformTobeMapped
// initialization
if (cloudKeyPoses3D->points.empty()) {
// the first scan has arrived
transformTobeMapped[0] = cloudInfo.imuRollInit;
transformTobeMapped[1] = cloudInfo.imuPitchInit;
transformTobeMapped[2] = cloudInfo.imuYawInit;
if (!useImuHeadingInitialization)
transformTobeMapped[2] = 0;
// record the initial transform
lastImuTransformation = pcl::getTransformation(0, 0, 0, cloudInfo.imuRollInit, cloudInfo.imuPitchInit,
cloudInfo.imuYawInit); // save imu before return;
return;
}
// use imu pre-integration estimation for pose guess
// if the preintegration odometry is available, use it as the initial guess; imuPreintegration.cpp keeps optimizing and publishing this odometry for every IMU sample
if (cloudInfo.odomAvailable == true && cloudInfo.imuPreintegrationResetId == imuPreintegrationResetId) {
transformTobeMapped[0] = cloudInfo.initialGuessRoll;
transformTobeMapped[1] = cloudInfo.initialGuessPitch;
transformTobeMapped[2] = cloudInfo.initialGuessYaw;
transformTobeMapped[3] = cloudInfo.initialGuessX;
transformTobeMapped[4] = cloudInfo.initialGuessY;
transformTobeMapped[5] = cloudInfo.initialGuessZ;
lastImuTransformation = pcl::getTransformation(0, 0, 0, cloudInfo.imuRollInit, cloudInfo.imuPitchInit,
cloudInfo.imuYawInit); // save imu before return;
return;
}
// use imu incremental estimation for pose guess (only rotation)
// otherwise, if a raw IMU orientation is available, apply only the rotation increment accumulated since the last frame
if (cloudInfo.imuAvailable == true) {
Eigen::Affine3f transBack = pcl::getTransformation(0, 0, 0, cloudInfo.imuRollInit, cloudInfo.imuPitchInit,
cloudInfo.imuYawInit);
Eigen::Affine3f transIncre = lastImuTransformation.inverse() * transBack;
Eigen::Affine3f transTobe = trans2Affine3f(transformTobeMapped);
Eigen::Affine3f transFinal = transTobe * transIncre;
pcl::getTranslationAndEulerAngles(transFinal, transformTobeMapped[3], transformTobeMapped[4],
transformTobeMapped[5],
transformTobeMapped[0], transformTobeMapped[1], transformTobeMapped[2]);
lastImuTransformation = pcl::getTransformation(0, 0, 0, cloudInfo.imuRollInit, cloudInfo.imuPitchInit,
cloudInfo.imuYawInit); // save imu before return;
return;
}
}
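The last branch can be written compactly: only the rotation increment measured by the IMU since the previous frame is applied to the previous optimized pose (the increment has zero translation because both transforms are built with x = y = z = 0):

$$
T_{\text{guess}} = T_{\text{prev}}\,\bigl(R^{\text{imu}}_{\text{last}}\bigr)^{-1} R^{\text{imu}}_{\text{curr}}
$$

with R^imu denoting the rotation-only transforms built from imuRollInit / imuPitchInit / imuYawInit.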
void extractForLoopClosure() {
// collect the most recent key-frame poses (used when loop closure is enabled)
pcl::PointCloud<PointType>::Ptr cloudToExtract(new pcl::PointCloud<PointType>());
int numPoses = cloudKeyPoses3D->size();
for (int i = numPoses - 1; i >= 0; --i) {
if (cloudToExtract->size() <= surroundingKeyframeSize)
cloudToExtract->push_back(cloudKeyPoses3D->points[i]);
else
break;
}
extractCloud(cloudToExtract);
}
void extractNearby() {
// extract nearby key frames (corner and surface) around the latest pose in cloudKeyPoses3D
pcl::PointCloud<PointType>::Ptr surroundingKeyPoses(new pcl::PointCloud<PointType>());
pcl::PointCloud<PointType>::Ptr surroundingKeyPosesDS(new pcl::PointCloud<PointType>());
std::vector<int> pointSearchInd;
std::vector<float> pointSearchSqDis;
// extract all the nearby key poses (within surroundingKeyframeSearchRadius, 50 m by default) and downsample them
kdtreeSurroundingKeyPoses->setInputCloud(cloudKeyPoses3D); // create kd-tree
kdtreeSurroundingKeyPoses->radiusSearch(cloudKeyPoses3D->back(), (double) surroundingKeyframeSearchRadius,
pointSearchInd, pointSearchSqDis);
// add every key-frame pose that satisfies the radius check to surroundingKeyPoses
for (int i = 0; i < pointSearchInd.size(); ++i) {
int id = pointSearchInd[i];
surroundingKeyPoses->push_back(cloudKeyPoses3D->points[id]);
}
downSizeFilterSurroundingKeyPoses.setInputCloud(surroundingKeyPoses);
downSizeFilterSurroundingKeyPoses.filter(*surroundingKeyPosesDS);
// also extract some latest key frames in case the robot rotates in one position
// also add key frames from the last 10 s to surroundingKeyPosesDS
int numPoses = cloudKeyPoses3D->size();
for (int i = numPoses - 1; i >= 0; --i) {
// include every pose from the last 10 seconds
if (timeLaserCloudInfoLast - cloudKeyPoses6D->points[i].time < 10.0)
surroundingKeyPosesDS->push_back(cloudKeyPoses3D->points[i]);
else
break;
}
extractCloud(surroundingKeyPosesDS);
}
void extractCloud(pcl::PointCloud::Ptr cloudToExtract) {
// assemble the local map from the key-frame clouds at the given poses
std::vector<pcl::PointCloud<PointType>> laserCloudCornerSurroundingVec;
std::vector<pcl::PointCloud<PointType>> laserCloudSurfSurroundingVec;
laserCloudCornerSurroundingVec.resize(cloudToExtract->size());
laserCloudSurfSurroundingVec.resize(cloudToExtract->size());
// extract surrounding map
#pragma omp parallel for num_threads(numberOfCores)
for (int i = 0; i < cloudToExtract->size(); ++i) {
// for each pose, skip it if it is too far from the current position
if (pointDistance(cloudToExtract->points[i], cloudKeyPoses3D->back()) > surroundingKeyframeSearchRadius)
continue;
int thisKeyInd = (int) cloudToExtract->points[i].intensity;
laserCloudCornerSurroundingVec[i] = *transformPointCloud(cornerCloudKeyFrames[thisKeyInd],
&cloudKeyPoses6D->points[thisKeyInd]);
laserCloudSurfSurroundingVec[i] = *transformPointCloud(surfCloudKeyFrames[thisKeyInd],
&cloudKeyPoses6D->points[thisKeyInd]);
}
// fuse the map
// build the local map
laserCloudCornerFromMap->clear();
laserCloudSurfFromMap->clear();
for (int i = 0; i < cloudToExtract->size(); ++i) {
*laserCloudCornerFromMap += laserCloudCornerSurroundingVec[i];
*laserCloudSurfFromMap += laserCloudSurfSurroundingVec[i];
}
// Downsample the surrounding corner key frames (or map)
downSizeFilterCorner.setInputCloud(laserCloudCornerFromMap);
downSizeFilterCorner.filter(*laserCloudCornerFromMapDS);
laserCloudCornerFromMapDSNum = laserCloudCornerFromMapDS->size();
// Downsample the surrounding surf key frames (or map)
downSizeFilterSurf.setInputCloud(laserCloudSurfFromMap);
downSizeFilterSurf.filter(*laserCloudSurfFromMapDS);
laserCloudSurfFromMapDSNum = laserCloudSurfFromMapDS->size();
}
void extractSurroundingKeyFrames() {
if (cloudKeyPoses3D->points.empty() == true)
return;
// when loop closure is enabled, extract the most recent key frames; otherwise extract nearby key frames
// on the first call loopClosureEnableFlag = false, so nearby key frames are extracted directly
if (loopClosureEnableFlag == true) {
extractForLoopClosure();
} else {
extractNearby();
}
}
void downsampleCurrentScan() {
// Downsample cloud from current scan
laserCloudCornerLastDS->clear();
downSizeFilterCorner.setInputCloud(laserCloudCornerLast);
downSizeFilterCorner.filter(*laserCloudCornerLastDS);
laserCloudCornerLastDSNum = laserCloudCornerLastDS->size();
laserCloudSurfLastDS->clear();
downSizeFilterSurf.setInputCloud(laserCloudSurfLast);
downSizeFilterSurf.filter(*laserCloudSurfLastDS);
laserCloudSurfLastDSNum = laserCloudSurfLastDS->size();
}
void updatePointAssociateToMap() {
// cache the transform that maps points of the current scan into the map frame
transPointAssociateToMap = trans2Affine3f(transformTobeMapped);
}
void cornerOptimization() {
updatePointAssociateToMap(); // cache the scan-to-map transform
// for every corner point, build a point-to-line constraint
#pragma omp parallel for num_threads(numberOfCores)
for (int i = 0; i < laserCloudCornerLastDSNum; i++) {
PointType pointOri, pointSel, coeff;
std::vector<int> pointSearchInd;
std::vector<float> pointSearchSqDis;
// search the 5 nearest neighbors of the current point in the local corner map
pointOri = laserCloudCornerLastDS->points[i];
pointAssociateToMap(&pointOri, &pointSel);
kdtreeCornerFromMap->nearestKSearch(pointSel, 5, pointSearchInd, pointSearchSqDis);
cv::Mat matA1(3, 3, CV_32F, cv::Scalar::all(0));
cv::Mat matD1(1, 3, CV_32F, cv::Scalar::all(0));
cv::Mat matV1(3, 3, CV_32F, cv::Scalar::all(0));
// only proceed if all 5 neighbors are within 1 m of the query point
if (pointSearchSqDis[4] < 1.0) {
float cx = 0, cy = 0, cz = 0;
for (int j = 0; j < 5; j++) {
cx += laserCloudCornerFromMapDS->points[pointSearchInd[j]].x;
cy += laserCloudCornerFromMapDS->points[pointSearchInd[j]].y;
cz += laserCloudCornerFromMapDS->points[pointSearchInd[j]].z;
}
// centroid (arithmetic mean) of the 5 neighbors
cx /= 5;
cy /= 5;
cz /= 5;
// covariance of the 5 neighbors
float a11 = 0, a12 = 0, a13 = 0, a22 = 0, a23 = 0, a33 = 0;
for (int j = 0; j < 5; j++) {
float ax = laserCloudCornerFromMapDS->points[pointSearchInd[j]].x - cx;
float ay = laserCloudCornerFromMapDS->points[pointSearchInd[j]].y - cy;
float az = laserCloudCornerFromMapDS->points[pointSearchInd[j]].z - cz;
a11 += ax * ax;
a12 += ax * ay;
a13 += ax * az;
a22 += ay * ay;
a23 += ay * az;
a33 += az * az;
}
a11 /= 5;
a12 /= 5;
a13 /= 5;
a22 /= 5;
a23 /= 5;
a33 /= 5;
matA1.at<float>(0, 0) = a11;
matA1.at<float>(0, 1) = a12;
matA1.at<float>(0, 2) = a13;
matA1.at<float>(1, 0) = a12;
matA1.at<float>(1, 1) = a22;
matA1.at<float>(1, 2) = a23;
matA1.at<float>(2, 0) = a13;
matA1.at<float>(2, 1) = a23;
matA1.at<float>(2, 2) = a33;
// eigen-decompose the covariance matrix: eigenvalues in matD1, eigenvectors in matV1
cv::eigen(matA1, matD1, matV1);
// if one eigenvalue is much larger than the other two, the neighbors form a line
if (matD1.at<float>(0, 0) > 3 * matD1.at<float>(0, 1)) {
float x0 = pointSel.x;
float y0 = pointSel.y;
float z0 = pointSel.z;
float x1 = cx + 0.1 * matV1.at<float>(0, 0);
float y1 = cy + 0.1 * matV1.at<float>(0, 1);
float z1 = cz + 0.1 * matV1.at<float>(0, 2);
float x2 = cx - 0.1 * matV1.at<float>(0, 0);
float y2 = cy - 0.1 * matV1.at<float>(0, 1);
float z2 = cz - 0.1 * matV1.at<float>(0, 2);
// as in the odometry, compute the distance from the point to the line
float a012 = sqrt(((x0 - x1) * (y0 - y2) - (x0 - x2) * (y0 - y1)) *
((x0 - x1) * (y0 - y2) - (x0 - x2) * (y0 - y1))
+ ((x0 - x1) * (z0 - z2) - (x0 - x2) * (z0 - z1)) *
((x0 - x1) * (z0 - z2) - (x0 - x2) * (z0 - z1))
+ ((y0 - y1) * (z0 - z2) - (y0 - y2) * (z0 - z1)) *
((y0 - y1) * (z0 - z2) - (y0 - y2) * (z0 - z1)));
float l12 = sqrt((x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2) + (z1 - z2) * (z1 - z2));
float la = ((y1 - y2) * ((x0 - x1) * (y0 - y2) - (x0 - x2) * (y0 - y1))
+ (z1 - z2) * ((x0 - x1) * (z0 - z2) - (x0 - x2) * (z0 - z1))) / a012 / l12;
float lb = -((x1 - x2) * ((x0 - x1) * (y0 - y2) - (x0 - x2) * (y0 - y1))
- (z1 - z2) * ((y0 - y1) * (z0 - z2) - (y0 - y2) * (z0 - z1))) / a012 / l12;
float lc = -((x1 - x2) * ((x0 - x1) * (z0 - z2) - (x0 - x2) * (z0 - z1))
+ (y1 - y2) * ((y0 - y1) * (z0 - z2) - (y0 - y2) * (z0 - z1))) / a012 / l12;
float ld2 = a012 / l12;
// a simple robust weighting function designed by the author to down-weight large residuals
float s = 1 - 0.9 * fabs(ld2);
coeff.x = s * la;
coeff.y = s * lb;
coeff.z = s * lc;
coeff.intensity = s * ld2;
// s decides whether the point and its coefficients are added to laserCloudOri and coeffSel below
if (s > 0.1) {
laserCloudOriCornerVec[i] = pointOri;
coeffSelCornerVec[i] = coeff;
laserCloudOriCornerFlag[i] = true;
}
}
}
}
}
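The long expressions above are the standard LOAM point-to-edge residual. With p0 the transformed scan point and p1, p2 two points on the fitted line (centroid ± 0.1 times the principal eigenvector), the distance and weight are:

$$
d_e = \frac{\lVert (p_0-p_1)\times(p_0-p_2)\rVert}{\lVert p_1-p_2\rVert},\qquad s = 1 - 0.9\,\lvert d_e\rvert
$$

a012 is the numerator, l12 the denominator, (la, lb, lc) is the direction of the gradient of d_e with respect to p0, and a point is kept only when s > 0.1.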
void surfOptimization() {
updatePointAssociateToMap();
#pragma omp parallel for num_threads(numberOfCores)
for (int i = 0; i < laserCloudSurfLastDSNum; i++) {
PointType pointOri, pointSel, coeff;
std::vector<int> pointSearchInd;
std::vector<float> pointSearchSqDis;
// find the 5 nearest neighbors and fit a plane to them
pointOri = laserCloudSurfLastDS->points[i];
pointAssociateToMap(&pointOri, &pointSel);
kdtreeSurfFromMap->nearestKSearch(pointSel, 5, pointSearchInd, pointSearchSqDis);
Eigen::Matrix<float, 5, 3> matA0;
Eigen::Matrix<float, 5, 1> matB0;
Eigen::Vector3f matX0;
matA0.setZero(); // 5x3 matrix holding the 5 neighbor points
matB0.fill(-1);
matX0.setZero();
// only consider neighbors within 1.0 m
if (pointSearchSqDis[4] < 1.0) {
for (int j = 0; j < 5; j++) {
matA0(j, 0) = laserCloudSurfFromMapDS->points[pointSearchInd[j]].x;
matA0(j, 1) = laserCloudSurfFromMapDS->points[pointSearchInd[j]].y;
matA0(j, 2) = laserCloudSurfFromMapDS->points[pointSearchInd[j]].z;
}
// solve for the normal of the plane through the points in matA0
matX0 = matA0.colPivHouseholderQr().solve(matB0);
// plane parameters: ax + by + cz + d = 0
float pa = matX0(0, 0);
float pb = matX0(1, 0);
float pc = matX0(2, 0);
float pd = 1;
float ps = sqrt(pa * pa + pb * pb + pc * pc);
pa /= ps;
pb /= ps;
pc /= ps;
pd /= ps;
// verify the fit: every neighbor must lie within the error bound of the plane
bool planeValid = true;
for (int j = 0; j < 5; j++) {
if (fabs(pa * laserCloudSurfFromMapDS->points[pointSearchInd[j]].x +
pb * laserCloudSurfFromMapDS->points[pointSearchInd[j]].y +
pc * laserCloudSurfFromMapDS->points[pointSearchInd[j]].z + pd) > 0.2) {
planeValid = false;
break;
}
}
// the plane is valid
if (planeValid) {
float pd2 = pa * pointSel.x + pb * pointSel.y + pc * pointSel.z + pd;
float s = 1 - 0.9 * fabs(pd2) / sqrt(sqrt(pointSel.x * pointSel.x
+ pointSel.y * pointSel.y + pointSel.z * pointSel.z));
coeff.x = s * pa;
coeff.y = s * pb;
coeff.z = s * pc;
coeff.intensity = s * pd2;
// if the weight is acceptable, store the point in laserCloudOri and its coefficient vector in coeffSel
if (s > 0.1) {
laserCloudOriSurfVec[i] = pointOri;
coeffSelSurfVec[i] = coeff;
laserCloudOriSurfFlag[i] = true;
}
}
}
}
}
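Likewise for the surface residual: the plane through the five neighbors is fit by least squares, and the residual is the point-to-plane distance with a range-dependent weight:

$$
A\,[a\;b\;c]^{\top} = -\mathbf{1},\qquad
d_p = a x_0 + b y_0 + c z_0 + d,\qquad
s = 1 - \frac{0.9\,\lvert d_p\rvert}{\sqrt{\lVert p_0\rVert}}
$$

where A is the 5x3 matrix of neighbor coordinates, the solution (a, b, c, d = 1) is normalized so that (a, b, c) becomes a unit normal, and the plane is rejected if any neighbor lies more than 0.2 m from it.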
void combineOptimizationCoeffs() {
// gather the corner and surface residuals into laserCloudOri and coeffSel for the joint optimization
// combine corner coeffs
for (int i = 0; i < laserCloudCornerLastDSNum; ++i) {
if (laserCloudOriCornerFlag[i] == true) {
laserCloudOri->push_back(laserCloudOriCornerVec[i]);
coeffSel->push_back(coeffSelCornerVec[i]);
}
}
// combine surf coeffs
for (int i = 0; i < laserCloudSurfLastDSNum; ++i) {
if (laserCloudOriSurfFlag[i] == true) {
laserCloudOri->push_back(laserCloudOriSurfVec[i]);
coeffSel->push_back(coeffSelSurfVec[i]);
}
}
// reset flags for the next iteration
std::fill(laserCloudOriCornerFlag.begin(), laserCloudOriCornerFlag.end(), false);
std::fill(laserCloudOriSurfFlag.begin(), laserCloudOriSurfFlag.end(), false);
}
bool LMOptimization(int iterCount) {
// This optimization is from the original loam_velodyne by Ji Zhang, need to cope with coordinate transformation
// lidar <- camera --- camera <- lidar
// x = z --- x = y
// y = x --- y = z
// z = y --- z = x
// roll = yaw --- roll = pitch
// pitch = roll --- pitch = yaw
// yaw = pitch --- yaw = roll
// Gauss-Newton optimization, following LOAM
// lidar -> camera
float srx = sin(transformTobeMapped[1]);
float crx = cos(transformTobeMapped[1]);
float sry = sin(transformTobeMapped[2]);
float cry = cos(transformTobeMapped[2]);
float srz = sin(transformTobeMapped[0]);
float crz = cos(transformTobeMapped[0]);
// at least 50 selected feature points are required; on the first iteration the eigenvalues of AtA are checked (threshold 100) and, if degenerate, the update is projected with matX = matP * matX2
int laserCloudSelNum = laserCloudOri->size();
if (laserCloudSelNum < 50) {
return false;
}
cv::Mat matA(laserCloudSelNum, 6, CV_32F, cv::Scalar::all(0));
cv::Mat matAt(6, laserCloudSelNum, CV_32F, cv::Scalar::all(0));
cv::Mat matAtA(6, 6, CV_32F, cv::Scalar::all(0));
cv::Mat matB(laserCloudSelNum, 1, CV_32F, cv::Scalar::all(0));
cv::Mat matAtB(6, 1, CV_32F, cv::Scalar::all(0));
cv::Mat matX(6, 1, CV_32F, cv::Scalar::all(0));
cv::Mat matP(6, 6, CV_32F, cv::Scalar::all(0));
PointType pointOri, coeff;
for (int i = 0; i < laserCloudSelNum; i++) {
// lidar -> camera
pointOri.x = laserCloudOri->points[i].y;
pointOri.y = laserCloudOri->points[i].z;
pointOri.z = laserCloudOri->points[i].x;
// lidar -> camera
coeff.x = coeffSel->points[i].y;
coeff.y = coeffSel->points[i].z;
coeff.z = coeffSel->points[i].x;
coeff.intensity = coeffSel->points[i].intensity;
// in camera
// Jacobian of the residual with respect to the rotation (camera convention)
float arx = (crx * sry * srz * pointOri.x + crx * crz * sry * pointOri.y - srx * sry * pointOri.z) * coeff.x
+ (-srx * srz * pointOri.x - crz * srx * pointOri.y - crx * pointOri.z) * coeff.y
+ (crx * cry * srz * pointOri.x + crx * cry * crz * pointOri.y - cry * srx * pointOri.z) *
coeff.z;
float ary = ((cry * srx * srz - crz * sry) * pointOri.x
+ (sry * srz + cry * crz * srx) * pointOri.y + crx * cry * pointOri.z) * coeff.x
+ ((-cry * crz - srx * sry * srz) * pointOri.x
+ (cry * srz - crz * srx * sry) * pointOri.y - crx * sry * pointOri.z) * coeff.z;
float arz =
((crz * srx * sry - cry * srz) * pointOri.x + (-cry * crz - srx * sry * srz) * pointOri.y) * coeff.x
+ (crx * crz * pointOri.x - crx * srz * pointOri.y) * coeff.y
+
((sry * srz + cry * crz * srx) * pointOri.x + (crz * sry - cry * srx * srz) * pointOri.y) * coeff.z;
// lidar -> camera
matA.at<float>(i, 0) = arz;
matA.at<float>(i, 1) = arx;
matA.at<float>(i, 2) = ary;
matA.at<float>(i, 3) = coeff.z;
matA.at<float>(i, 4) = coeff.x;
matA.at<float>(i, 5) = coeff.y;
matB.at<float>(i, 0) = -coeff.intensity;
}
cv::transpose(matA, matAt);
matAtA = matAt * matA;
matAtB = matAt * matB;
cv::solve(matAtA, matAtB, matX, cv::DECOMP_QR);
if (iterCount == 0) {
cv::Mat matE(1, 6, CV_32F, cv::Scalar::all(0));
cv::Mat matV(6, 6, CV_32F, cv::Scalar::all(0));
cv::Mat matV2(6, 6, CV_32F, cv::Scalar::all(0));
cv::eigen(matAtA, matE, matV);
matV.copyTo(matV2);
isDegenerate = false;
float eignThre[6] = {100, 100, 100, 100, 100, 100};
for (int i = 5; i >= 0; i--) {
if (matE.at<float>(0, i) < eignThre[i]) {
for (int j = 0; j < 6; j++) {
matV2.at<float>(i, j) = 0;
}
isDegenerate = true;
} else {
break;
}
}
matP = matV.inv() * matV2;
}
if (isDegenerate) {
cv::Mat matX2(6, 1, CV_32F, cv::Scalar::all(0));
matX.copyTo(matX2);
matX = matP * matX2;
}
transformTobeMapped[0] += matX.at<float>(0, 0);
transformTobeMapped[1] += matX.at<float>(1, 0);
transformTobeMapped[2] += matX.at<float>(2, 0);
transformTobeMapped[3] += matX.at<float>(3, 0);
transformTobeMapped[4] += matX.at<float>(4, 0);
transformTobeMapped[5] += matX.at<float>(5, 0);
float deltaR = sqrt(
pow(pcl::rad2deg(matX.at<float>(0, 0)), 2) +
pow(pcl::rad2deg(matX.at<float>(1, 0)), 2) +
pow(pcl::rad2deg(matX.at<float>(2, 0)), 2));
float deltaT = sqrt(
pow(matX.at<float>(3, 0) * 100, 2) +
pow(matX.at<float>(4, 0) * 100, 2) +
pow(matX.at<float>(5, 0) * 100, 2));
// the optimization is considered converged when the rotation update is below 0.05 degrees and the translation update is below 0.05 cm
if (deltaR < 0.05 && deltaT < 0.05) {
return true; // converged
}
return false; // keep optimizing
}
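Each call above performs one Gauss-Newton step on the 6-DoF pose. On the first iteration the system is also checked for degeneracy: eigen-directions of AᵀA with eigenvalue below 100 are zeroed in V', and later updates are projected so the state is not changed along those unobservable directions (the same trick used in LOAM):

$$
A^{\top}A\,\Delta x = A^{\top}b,\qquad \Delta x \leftarrow V^{-1}V'\,\Delta x
$$

with b = -r the stacked negative residuals. Convergence is declared once the rotation update drops below 0.05 degrees and the translation update below 0.05 cm.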
void scan2MapOptimization() {
if (cloudKeyPoses3D->points.empty())
return;
// only optimize when enough edge and planar features are available
if (laserCloudCornerLastDSNum > edgeFeatureMinValidNum && laserCloudSurfLastDSNum > surfFeatureMinValidNum) {
// build kd-trees over the corner and surface local maps
kdtreeCornerFromMap->setInputCloud(laserCloudCornerFromMapDS);
kdtreeSurfFromMap->setInputCloud(laserCloudSurfFromMapDS);
// run up to 30 optimization iterations
for (int iterCount = 0; iterCount < 30; iterCount++) {
laserCloudOri->clear();
coeffSel->clear();
// point-to-line and point-to-plane residuals (written slightly differently from A-LOAM)
cornerOptimization();
surfOptimization();
// combine both kinds of residuals
combineOptimizationCoeffs();
// one Gauss-Newton step; stop early once converged
if (LMOptimization(iterCount) == true)
break;
}
// fuse the result with the IMU roll/pitch and apply the motion constraints
transformUpdate();
} else {
ROS_WARN("Not enough features! Only %d edge and %d planar features available.", laserCloudCornerLastDSNum,
laserCloudSurfLastDSNum);
}
}
void transformUpdate() {
// if the IMU orientation is available, fuse its roll and pitch into transformTobeMapped
if (cloudInfo.imuAvailable == true) {
if (std::abs(cloudInfo.imuPitchInit) < 1.4) {
double imuWeight = 0.05;
tf::Quaternion imuQuaternion;
tf::Quaternion transformQuaternion;
double rollMid, pitchMid, yawMid;
// slerp roll
transformQuaternion.setRPY(transformTobeMapped[0], 0, 0);
imuQuaternion.setRPY(cloudInfo.imuRollInit, 0, 0);
// spherical linear interpolation with weight 0.05 toward the IMU
tf::Matrix3x3(transformQuaternion.slerp(imuQuaternion, imuWeight)).getRPY(rollMid, pitchMid, yawMid);
transformTobeMapped[0] = rollMid;
// slerp pitch
transformQuaternion.setRPY(0, transformTobeMapped[1], 0);
imuQuaternion.setRPY(0, cloudInfo.imuPitchInit, 0);
tf::Matrix3x3(transformQuaternion.slerp(imuQuaternion, imuWeight)).getRPY(rollMid, pitchMid, yawMid);
transformTobeMapped[1] = pitchMid;
}
}
transformTobeMapped[0] = constraintTransformation(transformTobeMapped[0], rotation_tollerance);
transformTobeMapped[1] = constraintTransformation(transformTobeMapped[1], rotation_tollerance);
transformTobeMapped[5] = constraintTransformation(transformTobeMapped[5], z_tollerance);
}
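The IMU fusion above is a fixed-weight spherical interpolation on roll and pitch only (yaw is left untouched, presumably because gravity gives the IMU no absolute yaw reference here):

$$
\theta_{\text{fused}} = \mathrm{slerp}\bigl(\theta_{\text{lidar}},\,\theta_{\text{imu}};\,w = 0.05\bigr),\qquad \theta \in \{\text{roll},\,\text{pitch}\}
$$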
float constraintTransformation(float value, float limit) {
if (value < -limit)
value = -limit;
if (value > limit)
value = limit;
return value;
}
bool saveFrame() {
if (cloudKeyPoses3D->points.empty())
return true;
Eigen::Affine3f transStart = pclPointToAffine3f(cloudKeyPoses6D->back());
Eigen::Affine3f transFinal = pcl::getTransformation(transformTobeMapped[3], transformTobeMapped[4],
transformTobeMapped[5],
transformTobeMapped[0], transformTobeMapped[1],
transformTobeMapped[2]);
Eigen::Affine3f transBetween = transStart.inverse() * transFinal;
float x, y, z, roll, pitch, yaw;
pcl::getTranslationAndEulerAngles(transBetween, x, y, z, roll, pitch, yaw);
if (abs(roll) < surroundingkeyframeAddingAngleThreshold &&
abs(pitch) < surroundingkeyframeAddingAngleThreshold &&
abs(yaw) < surroundingkeyframeAddingAngleThreshold &&
sqrt(x * x + y * y + z * z) < surroundingkeyframeAddingDistThreshold)
return false;
return true;
}
void addOdomFactor() {
if (cloudKeyPoses3D->points.empty()) {
// for the first key frame, initialize the graph with a prior factor
noiseModel::Diagonal::shared_ptr priorNoise = noiseModel::Diagonal::Variances(
(Vector(6) << 1e-2, 1e-2, M_PI * M_PI, 1e8, 1e8, 1e8).finished()); // rad*rad, meter*meter
// prior factor
gtSAMgraph.add(PriorFactor<Pose3>(0, trans2gtsamPose(transformTobeMapped), priorNoise));
initialEstimate.insert(0, trans2gtsamPose(transformTobeMapped));
} else {
noiseModel::Diagonal::shared_ptr odometryNoise = noiseModel::Diagonal::Variances(
(Vector(6) << 1e-6, 1e-6, 1e-6, 1e-4, 1e-4, 1e-4).finished());
gtsam::Pose3 poseFrom = pclPointTogtsamPose3(cloudKeyPoses6D->points.back());
gtsam::Pose3 poseTo = trans2gtsamPose(transformTobeMapped);
// between (binary) factor connecting the previous and the new key pose
gtSAMgraph.add(
BetweenFactor<Pose3>(cloudKeyPoses3D->size() - 1, cloudKeyPoses3D->size(), poseFrom.between(poseTo),
odometryNoise));
initialEstimate.insert(cloudKeyPoses3D->size(), poseTo); // insert the initial value
}
}
void addGPSFactor() {
if (gpsQueue.empty())
return;
// wait for system initialized and settles down
if (cloudKeyPoses3D->points.empty())
return;
else {
if (pointDistance(cloudKeyPoses3D->front(), cloudKeyPoses3D->back()) < 5.0)
return;
}
// pose covariance small, no need to correct
if (poseCovariance(3, 3) < poseCovThreshold && poseCovariance(4, 4) < poseCovThreshold)
return;
// a GPS factor is only added when the pose covariance is large
while (!gpsQueue.empty()) {
// align timestamps
if (gpsQueue.front().header.stamp.toSec() < timeLaserCloudInfoLast - 0.2) {
// message too old
gpsQueue.pop_front();
} else if (gpsQueue.front().header.stamp.toSec() > timeLaserCloudInfoLast + 0.2) {
// message too new
break;
} else {
nav_msgs::Odometry thisGPS = gpsQueue.front();
gpsQueue.pop_front();
// GPS too noisy, skip
float noise_x = thisGPS.pose.covariance[0];
float noise_y = thisGPS.pose.covariance[7];
float noise_z = thisGPS.pose.covariance[14];
if (noise_x > gpsCovThreshold || noise_y > gpsCovThreshold)
continue;
float gps_x = thisGPS.pose.pose.position.x;
float gps_y = thisGPS.pose.pose.position.y;
float gps_z = thisGPS.pose.pose.position.z;
if (!useGpsElevation) {
gps_z = transformTobeMapped[5]; // GPS elevation is usually unreliable
noise_z = 0.01;
}
// GPS not properly initialized (0,0,0)
if (abs(gps_x) < 1e-6 && abs(gps_y) < 1e-6)
continue;
// add the GPS factor
gtsam::Vector Vector3(3);
Vector3 << noise_x, noise_y, noise_z;
noiseModel::Diagonal::shared_ptr gps_noise = noiseModel::Diagonal::Variances(Vector3); // noise model
gtsam::GPSFactor gps_factor(cloudKeyPoses3D->size(), gtsam::Point3(gps_x, gps_y, gps_z), gps_noise);
gtSAMgraph.add(gps_factor);
aLoopIsClosed = true;
break;
}
}
}
void saveKeyFramesAndFactor() {
if (saveFrame() == false)
return;
// add the factors and save the key frame
// odom factor
addOdomFactor();
// gps factor
addGPSFactor();
// cout << "****************************************************" << endl;
// gtSAMgraph.print("GTSAM Graph:\n");
// update iSAM
isam->update(gtSAMgraph, initialEstimate);
isam->update();
// update multiple-times till converge
if (aLoopIsClosed == true) {
isam->update();
isam->update();
isam->update();
isam->update();
isam->update();
}
gtSAMgraph.resize(0);
initialEstimate.clear();
//save key poses
PointType thisPose3D;
PointTypePose thisPose6D;
Pose3 latestEstimate;
// latest optimized pose
isamCurrentEstimate = isam->calculateEstimate();
latestEstimate = isamCurrentEstimate.at<Pose3>(isamCurrentEstimate.size() - 1);
// cout << "****************************************************" << endl;
// isamCurrentEstimate.print("Current estimate: ");
// append the new key pose to cloudKeyPoses3D and cloudKeyPoses6D
thisPose3D.x = latestEstimate.translation().x();
thisPose3D.y = latestEstimate.translation().y();
thisPose3D.z = latestEstimate.translation().z();
thisPose3D.intensity = cloudKeyPoses3D->size(); // this can be used as index
cloudKeyPoses3D->push_back(thisPose3D);
thisPose6D.x = thisPose3D.x;
thisPose6D.y = thisPose3D.y;
thisPose6D.z = thisPose3D.z;
thisPose6D.intensity = thisPose3D.intensity; // this can be used as index
thisPose6D.roll = latestEstimate.rotation().roll();
thisPose6D.pitch = latestEstimate.rotation().pitch();
thisPose6D.yaw = latestEstimate.rotation().yaw();
thisPose6D.time = timeLaserCloudInfoLast;
cloudKeyPoses6D->push_back(thisPose6D);
// cout << "****************************************************" << endl;
// cout << "Pose covariance:" << endl;
// cout << isam->marginalCovariance(isamCurrentEstimate.size()-1) << endl << endl;
// marginal covariance of the latest pose
poseCovariance = isam->marginalCovariance(isamCurrentEstimate.size() - 1);
// save updated transform
transformTobeMapped[0] = latestEstimate.rotation().roll();
transformTobeMapped[1] = latestEstimate.rotation().pitch();
transformTobeMapped[2] = latestEstimate.rotation().yaw();
transformTobeMapped[3] = latestEstimate.translation().x();
transformTobeMapped[4] = latestEstimate.translation().y();
transformTobeMapped[5] = latestEstimate.translation().z();
// save all the received edge and surf points
pcl::PointCloud<PointType>::Ptr thisCornerKeyFrame(new pcl::PointCloud<PointType>());
pcl::PointCloud<PointType>::Ptr thisSurfKeyFrame(new pcl::PointCloud<PointType>());
pcl::copyPointCloud(*laserCloudCornerLastDS, *thisCornerKeyFrame);
pcl::copyPointCloud(*laserCloudSurfLastDS, *thisSurfKeyFrame);
// save key frame cloud
cornerCloudKeyFrames.push_back(thisCornerKeyFrame);
surfCloudKeyFrames.push_back(thisSurfKeyFrame);
// save path for visualization
updatePath(thisPose6D);
}
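To make the iSAM2 bookkeeping above easier to follow, here is a minimal, self-contained sketch (not LIO-SAM code; the values and noise models are purely illustrative) that adds a prior factor, an odometry between factor, and a GPS factor to an ISAM2 instance and then reads back the latest estimate, mirroring the update / resize(0) / clear() cycle used in saveKeyFramesAndFactor:

#include <cmath>
#include <gtsam/geometry/Pose3.h>
#include <gtsam/navigation/GPSFactor.h>
#include <gtsam/nonlinear/ISAM2.h>
#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/nonlinear/Values.h>
#include <gtsam/slam/BetweenFactor.h>
#include <gtsam/slam/PriorFactor.h>

using namespace gtsam;

int main() {
    ISAM2 isam;                   // incremental smoother
    NonlinearFactorGraph graph;   // new factors since the last update
    Values initial;               // initial guesses for new variables

    // key 0: prior factor, analogous to addOdomFactor() for the first key frame
    auto priorNoise = noiseModel::Diagonal::Variances(
            (Vector(6) << 1e-2, 1e-2, M_PI * M_PI, 1e8, 1e8, 1e8).finished());
    graph.add(PriorFactor<Pose3>(0, Pose3(), priorNoise));
    initial.insert(0, Pose3());

    // key 1: between factor with a measured relative motion of 1 m along x
    auto odomNoise = noiseModel::Diagonal::Variances(
            (Vector(6) << 1e-6, 1e-6, 1e-6, 1e-4, 1e-4, 1e-4).finished());
    Pose3 delta(Rot3(), Point3(1.0, 0.0, 0.0));
    graph.add(BetweenFactor<Pose3>(0, 1, delta, odomNoise));
    initial.insert(1, delta);

    // unary GPS factor on key 1 (position-only measurement)
    auto gpsNoise = noiseModel::Diagonal::Variances((Vector(3) << 1.0, 1.0, 1.0).finished());
    graph.add(GPSFactor(1, Point3(1.0, 0.0, 0.0), gpsNoise));

    // incorporate the new factors, then clear them for the next round,
    // mirroring isam->update(...), gtSAMgraph.resize(0) and initialEstimate.clear()
    isam.update(graph, initial);
    isam.update();
    graph.resize(0);
    initial.clear();

    Pose3 latest = isam.calculateEstimate<Pose3>(1);
    latest.print("latest key pose:\n");
    return 0;
}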
void correctPoses() {
if (cloudKeyPoses3D->points.empty())
return;
if (aLoopIsClosed == true) {
// clear path
globalPath.poses.clear();
// update key poses with the optimized estimates
int numPoses = isamCurrentEstimate.size();
for (int i = 0; i < numPoses; ++i) {
cloudKeyPoses3D->points[i].x = isamCurrentEstimate.at<Pose3>(i).translation().x();
cloudKeyPoses3D->points[i].y = isamCurrentEstimate.at<Pose3>(i).translation().y();
cloudKeyPoses3D->points[i].z = isamCurrentEstimate.at<Pose3>(i).translation().z();
cloudKeyPoses6D->points[i].x = cloudKeyPoses3D->points[i].x;
cloudKeyPoses6D->points[i].y = cloudKeyPoses3D->points[i].y;
cloudKeyPoses6D->points[i].z = cloudKeyPoses3D->points[i].z;
cloudKeyPoses6D->points[i].roll = isamCurrentEstimate.at<Pose3>(i).rotation().roll();
cloudKeyPoses6D->points[i].pitch = isamCurrentEstimate.at<Pose3>(i).rotation().pitch();
cloudKeyPoses6D->points[i].yaw = isamCurrentEstimate.at<Pose3>(i).rotation().yaw();
updatePath(cloudKeyPoses6D->points[i]);
}
aLoopIsClosed = false;
// ID for resetting IMU pre-integration
++imuPreintegrationResetId;
}
}
void updatePath(const PointTypePose &pose_in) {
geometry_msgs::PoseStamped pose_stamped;
pose_stamped.header.stamp = timeLaserInfoStamp;
pose_stamped.header.frame_id = "odom";
pose_stamped.pose.position.x = pose_in.x;
pose_stamped.pose.position.y = pose_in.y;
pose_stamped.pose.position.z = pose_in.z;
tf::Quaternion q = tf::createQuaternionFromRPY(pose_in.roll, pose_in.pitch, pose_in.yaw);
pose_stamped.pose.orientation.x = q.x();
pose_stamped.pose.orientation.y = q.y();
pose_stamped.pose.orientation.z = q.z();
pose_stamped.pose.orientation.w = q.w();
globalPath.poses.push_back(pose_stamped);
}
void publishOdometry() {
// Publish odometry for ROS
nav_msgs::Odometry laserOdometryROS;
laserOdometryROS.header.stamp = timeLaserInfoStamp;
laserOdometryROS.header.frame_id = "odom";
laserOdometryROS.child_frame_id = "odom_mapping";
laserOdometryROS.pose.pose.position.x = transformTobeMapped[3];
laserOdometryROS.pose.pose.position.y = transformTobeMapped[4];
laserOdometryROS.pose.pose.position.z = transformTobeMapped[5];
laserOdometryROS.pose.pose.orientation = tf::createQuaternionMsgFromRollPitchYaw(transformTobeMapped[0],
transformTobeMapped[1],
transformTobeMapped[2]);
laserOdometryROS.pose.covariance[0] = double(imuPreintegrationResetId);
pubOdomAftMappedROS.publish(laserOdometryROS);
}
void publishFrames() {
if (cloudKeyPoses3D->points.empty())
return;
// publish key poses
publishCloud(&pubKeyPoses, cloudKeyPoses3D, timeLaserInfoStamp, "odom");
// Publish surrounding key frames
publishCloud(&pubRecentKeyFrames, laserCloudSurfFromMapDS, timeLaserInfoStamp, "odom");
// publish registered key frame
if (pubRecentKeyFrame.getNumSubscribers() != 0) {
pcl::PointCloud<PointType>::Ptr cloudOut(new pcl::PointCloud<PointType>());
PointTypePose thisPose6D = trans2PointTypePose(transformTobeMapped);
*cloudOut += *transformPointCloud(laserCloudCornerLastDS, &thisPose6D);
*cloudOut += *transformPointCloud(laserCloudSurfLastDS, &thisPose6D);
publishCloud(&pubRecentKeyFrame, cloudOut, timeLaserInfoStamp, "odom");
}
// publish registered high-res raw cloud
if (pubCloudRegisteredRaw.getNumSubscribers() != 0) {
pcl::PointCloud<PointType>::Ptr cloudOut(new pcl::PointCloud<PointType>());
pcl::fromROSMsg(cloudInfo.cloud_deskewed, *cloudOut);
PointTypePose thisPose6D = trans2PointTypePose(transformTobeMapped);
*cloudOut = *transformPointCloud(cloudOut, &thisPose6D);
publishCloud(&pubCloudRegisteredRaw, cloudOut, timeLaserInfoStamp, "odom");
}
// publish path
if (pubPath.getNumSubscribers() != 0) {
globalPath.header.stamp = timeLaserInfoStamp;
globalPath.header.frame_id = "odom";
pubPath.publish(globalPath);
}
}
public:
// gtsam
NonlinearFactorGraph gtSAMgraph;
Values initialEstimate;
Values optimizedEstimate;
ISAM2 *isam;
Values isamCurrentEstimate;
Eigen::MatrixXd poseCovariance;
ros::Publisher pubLaserCloudSurround;
ros::Publisher pubOdomAftMappedROS;
ros::Publisher pubKeyPoses;
ros::Publisher pubPath;
ros::Publisher pubHistoryKeyFrames;
ros::Publisher pubIcpKeyFrames;
ros::Publisher pubRecentKeyFrames;
ros::Publisher pubRecentKeyFrame;
ros::Publisher pubCloudRegisteredRaw;
ros::Subscriber subLaserCloudInfo;
ros::Subscriber subGPS;
std::deque<nav_msgs::Odometry> gpsQueue;
lio_sam::cloud_info cloudInfo;
vector<pcl::PointCloud<PointType>::Ptr> cornerCloudKeyFrames;
vector<pcl::PointCloud<PointType>::Ptr> surfCloudKeyFrames;
pcl::PointCloud<PointType>::Ptr cloudKeyPoses3D;
pcl::PointCloud<PointTypePose>::Ptr cloudKeyPoses6D;
pcl::PointCloud<PointType>::Ptr laserCloudCornerLast; // corner feature set from odoOptimization
pcl::PointCloud<PointType>::Ptr laserCloudSurfLast; // surf feature set from odoOptimization
pcl::PointCloud<PointType>::Ptr laserCloudCornerLastDS; // downsampled corner feature set from odoOptimization
pcl::PointCloud<PointType>::Ptr laserCloudSurfLastDS; // downsampled surf feature set from odoOptimization
pcl::PointCloud<PointType>::Ptr laserCloudOri;
pcl::PointCloud<PointType>::Ptr coeffSel;
std::vector<PointType> laserCloudOriCornerVec; // corner point holder for parallel computation
std::vector<PointType> coeffSelCornerVec;
std::vector<bool> laserCloudOriCornerFlag;
std::vector<PointType> laserCloudOriSurfVec; // surf point holder for parallel computation
std::vector<PointType> coeffSelSurfVec;
std::vector<bool> laserCloudOriSurfFlag;
pcl::PointCloud<PointType>::Ptr laserCloudCornerFromMap;
pcl::PointCloud<PointType>::Ptr laserCloudSurfFromMap;
pcl::PointCloud<PointType>::Ptr laserCloudCornerFromMapDS;
pcl::PointCloud<PointType>::Ptr laserCloudSurfFromMapDS;
pcl::KdTreeFLANN<PointType>::Ptr kdtreeCornerFromMap;
pcl::KdTreeFLANN<PointType>::Ptr kdtreeSurfFromMap;
pcl::KdTreeFLANN<PointType>::Ptr kdtreeSurroundingKeyPoses;
pcl::KdTreeFLANN<PointType>::Ptr kdtreeHistoryKeyPoses;
pcl::PointCloud<PointType>::Ptr latestKeyFrameCloud;
pcl::PointCloud<PointType>::Ptr nearHistoryKeyFrameCloud;
pcl::VoxelGrid<PointType> downSizeFilterCorner;
pcl::VoxelGrid<PointType> downSizeFilterSurf;
pcl::VoxelGrid<PointType> downSizeFilterICP;
pcl::VoxelGrid<PointType> downSizeFilterSurroundingKeyPoses; // for surrounding key poses of scan-to-map optimization
ros::Time timeLaserInfoStamp;
double timeLaserCloudInfoLast;
float transformTobeMapped[6];
std::mutex mtx;
double timeLastProcessing = -1;
bool isDegenerate = false;
Eigen::Matrix<float, 6, 6> matP;
int laserCloudCornerFromMapDSNum = 0;
int laserCloudSurfFromMapDSNum = 0;
int laserCloudCornerLastDSNum = 0;
int laserCloudSurfLastDSNum = 0;
bool aLoopIsClosed = false;
int imuPreintegrationResetId = 0;
nav_msgs::Path globalPath;
Eigen::Affine3f transPointAssociateToMap;
Eigen::Affine3f lastImuTransformation;
};
int main(int argc, char **argv) {
ros::init(argc, argv, "lio_sam");
mapOptimization MO;
ROS_INFO("\033[1;32m----> Map Optimization Started.\033[0m");
// two background threads: one performs loop detection and adds constraint edges at a fixed rate, the other publishes and saves the global map
std::thread loopthread(&mapOptimization::loopClosureThread, &MO);
std::thread visualizeMapThread(&mapOptimization::visualizeGlobalMapThread, &MO);
// keep processing callbacks until shutdown
ros::spin();
// join the background threads before exiting
loopthread.join();
visualizeMapThread.join();
return 0;
}