R3Live Series Study (2): Reading the FAST-LIO Source Code

In the previous post we noted that Livox lidars are a valuable addition to the 3D lidar field, and that loam-livox performs reasonably well when handled gently, but it struggles to keep high accuracy under bumpy, aggressive motion. Combining lidar with an IMU has therefore become increasingly popular in recent years.

FAST-LIO is a tightly coupled mapping method built on an iterated error-state Kalman filter (IEKF). Its robustness is a clear step up from loam-livox, and it can often hold its pose estimate even under very aggressive motion. The paper is "FAST-LIO: A Fast, Robust LiDAR-inertial Odometry Package by Tightly-Coupled Iterated Kalman Filter". The source code runs as a single process and is very compact and clear, written in a broad-strokes style quite unlike the dense, meticulous style of commercial-grade software such as colmap.

The handling of incoming lidar messages is similar to loam-livox: the point cloud is split into corner (edge) points and surface points, and the intensity and curvature fields of the PCL point type are reused to store the reflectivity (for rejecting bad points) and the per-point timestamp, respectively; the corner and surface point sets are then collected. This will not be repeated here.
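As a minimal sketch of that timestamp convention (an illustration only, not the actual preprocessing code; the point type mirrors PCL's PointXYZINormal used by FAST-LIO), the per-point offset is stored in milliseconds, which is why the undistortion code later divides curvature by 1000:

#include <pcl/point_types.h>

// Hypothetical helper: stash per-point metadata in spare PCL fields.
void tag_point(pcl::PointXYZINormal &pt, float reflectivity,
               double point_time, double scan_begin_time)
{
  pt.intensity = reflectivity;                                    // reflectivity, used to reject bad points
  pt.curvature = float((point_time - scan_begin_time) * 1000.0);  // time offset within the scan, in milliseconds
}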

1. IMU Preprocessing and Motion Compensation

The IMU messages serve two purposes: state estimation and point-cloud de-skewing. In loam-livox, de-skewing simply extrapolates the current pose linearly from the previous estimate and interpolates by timestamp; FAST-LIO instead integrates the IMU and interpolates along that trajectory. The IMU is initialized from the first several frames of data (20 by default), much like msckf-vio. See IMU_Processing.hpp:

//Static initialization over the first 20 frames: the mean acceleration at rest gives the gravity vector, the mean angular velocity at rest gives the gyroscope bias, and the scatter of the measurements initializes the covariance
//The IMU state is then registered in the kf state
IMU_init(meas, kf_state, init_iter_num);
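The gist of IMU_init can be sketched as follows (a simplified illustration of the averaging idea only; the real function also maintains running covariances and writes the result into kf_state):

#include <deque>
#include <Eigen/Core>
#include <sensor_msgs/Imu.h>

// Simplified sketch of the static initialization (illustrative, not the real IMU_init).
void imu_static_init(const std::deque<sensor_msgs::Imu::ConstPtr> &imu_buf,
                     Eigen::Vector3d &mean_acc, Eigen::Vector3d &mean_gyr)
{
  mean_acc.setZero();
  mean_gyr.setZero();
  int N = 0;
  for (const auto &imu : imu_buf)   // first ~20 frames, sensor at rest
  {
    Eigen::Vector3d acc(imu->linear_acceleration.x, imu->linear_acceleration.y, imu->linear_acceleration.z);
    Eigen::Vector3d gyr(imu->angular_velocity.x, imu->angular_velocity.y, imu->angular_velocity.z);
    mean_acc += (acc - mean_acc) / double(N + 1);  // running mean of acceleration -> gravity direction and magnitude
    mean_gyr += (gyr - mean_gyr) / double(N + 1);  // running mean of angular rate  -> gyroscope bias
    ++N;
  }
  // The filter state is then seeded with roughly: grav = -mean_acc / mean_acc.norm() * G_m_s2, bg = mean_gyr.
}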

To understand the IMU state it helps to start from the paper, beginning with the newly defined ⊞ / ⊟ operators. Roughly, state components that live in a vector space are added and subtracted directly, while the rotation part combines a rotation matrix with an axis-angle vector through the exponential map (Rodrigues' formula).

[Image 1: the ⊞ / ⊟ operator definitions from the paper]
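Written out (my reading of the paper's definition), the operators act as

R \boxplus \mathbf{r} = R\,\mathrm{Exp}(\mathbf{r}), \qquad R_1 \boxminus R_2 = \mathrm{Log}\!\left(R_2^{T} R_1\right), \qquad \mathbf{a} \boxplus \mathbf{b} = \mathbf{a} + \mathbf{b}, \qquad \mathbf{a} \boxminus \mathbf{b} = \mathbf{a} - \mathbf{b}

where Exp/Log are the exponential and logarithm maps of SO(3), i.e. Rodrigues' formula and its inverse.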

Equation (3) of the paper defines the six components of the IMU state, and Eq. (1) gives their derivatives, which also yield the discrete-time transition (Jacobian) matrices of the IMU model.

[Image 2: the state definition and its continuous dynamics (Eqs. 1-3)]
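For reference, the state, input and the discrete propagation of Eq. (4) are roughly the following (paraphrased from the paper; worth checking against the original):

\mathbf{x} \doteq \left[\, {}^{G}R_{I},\ {}^{G}\mathbf{p}_{I},\ {}^{G}\mathbf{v}_{I},\ \mathbf{b}_{\boldsymbol\omega},\ \mathbf{b}_{\mathbf{a}},\ {}^{G}\mathbf{g} \,\right], \qquad \mathbf{u} \doteq \left[\, \boldsymbol\omega_m,\ \mathbf{a}_m \,\right]

\mathbf{f}(\mathbf{x},\mathbf{u},\mathbf{w}) = \begin{bmatrix} \boldsymbol\omega_m - \mathbf{b}_{\boldsymbol\omega} - \mathbf{n}_{\boldsymbol\omega} \\ {}^{G}\mathbf{v}_{I} \\ {}^{G}R_{I}(\mathbf{a}_m - \mathbf{b}_{\mathbf{a}} - \mathbf{n}_{\mathbf{a}}) + {}^{G}\mathbf{g} \\ \mathbf{n}_{\mathbf{b}\boldsymbol\omega} \\ \mathbf{n}_{\mathbf{b}\mathbf{a}} \\ \mathbf{0} \end{bmatrix}, \qquad \widehat{\mathbf{x}}_{i+1} = \widehat{\mathbf{x}}_{i} \boxplus \big(\Delta t\, \mathbf{f}(\widehat{\mathbf{x}}_{i},\mathbf{u}_i,\mathbf{0})\big)

which is exactly what kf_state.predict(dt, Q, in) evaluates in the code below.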

In the discrete-time processing, every incoming IMU frame triggers one forward propagation of the state, i.e. a prior prediction. The ikfom toolkit code that implements this is fairly involved (beyond me for now), so let us read it from the paper instead:

As Eq. (4) shows, within one propagation period the 0-th prediction is the previous optimal estimate, and each subsequent step accumulates the discrete model on top of the previous one. The error between prediction and truth (Eq. 5) is the quantity to be estimated; expanding this error to first order (the standard EKF linearization) yields Eqs. (22) and (23), and the transition matrices for the error and for the noise are obtained by differentiating Eq. (5).

[Image 3]

 

[Image 4]

The prior (propagation) update of the covariance matrix performed after each IMU frame therefore follows Eq. (8).
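In symbols, the error-state propagation and the covariance prior of Eq. (8) read (again paraphrased from the paper):

\widetilde{\mathbf{x}}_{i+1} \doteq \mathbf{x}_{i+1} \boxminus \widehat{\mathbf{x}}_{i+1} \simeq \mathbf{F}_{\widetilde{\mathbf{x}}}\,\widetilde{\mathbf{x}}_{i} + \mathbf{F}_{\mathbf{w}}\,\mathbf{w}_{i}, \qquad \widehat{P}_{i+1} = \mathbf{F}_{\widetilde{\mathbf{x}}}\,\widehat{P}_{i}\,\mathbf{F}_{\widetilde{\mathbf{x}}}^{T} + \mathbf{F}_{\mathbf{w}}\,Q\,\mathbf{F}_{\mathbf{w}}^{T}

where Q is the process-noise covariance assembled from cov_gyr, cov_acc, cov_bias_gyr and cov_bias_acc in the code below.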

 

For point-cloud de-skewing, we need the predicted states at the end of each discrete time segment within the lidar period; each point is transformed into the IMU frame using the state at its own timestamp, compensated there, and then transformed back into the lidar frame.

[Image 5]
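Concretely, for a point L p_j sampled at time t_j within the scan, the compensation applied on the P_compensate line below is (with I T_L the lidar-IMU extrinsics, T_{I_j} the IMU pose at t_j and T_{I_k} the pose at scan end; my transcription, so check the signs against the code):

{}^{L_k}\mathbf{p}_j = {}^{I}T_{L}^{-1}\; T_{I_k}^{-1}\; T_{I_j}\; {}^{I}T_{L}\; {}^{L}\mathbf{p}_j = {}^{I}R_{L}^{T}\Big( {}^{G}R_{I_k}^{T}\big( {}^{G}R_{I_j}({}^{I}R_{L}\,{}^{L}\mathbf{p}_j + {}^{I}\mathbf{t}_{L}) + {}^{G}\mathbf{p}_{I_j} - {}^{G}\mathbf{p}_{I_k}\big) - {}^{I}\mathbf{t}_{L}\Big)

where {}^{G}R_{I_j} and {}^{G}\mathbf{p}_{I_j} are obtained by integrating forward from the stored IMUpose entries with Exp(ω dt) and p + v dt + ½ a dt².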

The de-skewing function can then be read as follows:

void ImuProcess::UndistortPcl(const MeasureGroup &meas, esekfom::esekf<state_ikfom, 12, input_ikfom> &kf_state, PointCloudXYZI &pcl_out)
{
  auto v_imu = meas.imu;
  v_imu.push_front(last_imu_);

  //IMU data within one lidar period
  const double &imu_beg_time = v_imu.front()->header.stamp.toSec();
  const double &imu_end_time = v_imu.back()->header.stamp.toSec();
  const double &pcl_beg_time = meas.lidar_beg_time;
  
  //sort the point cloud by the curvature field (the per-point timestamp) set earlier
  pcl_out = *(meas.lidar);
  sort(pcl_out.points.begin(), pcl_out.points.end(), time_list);
  const double &pcl_end_time = pcl_beg_time + pcl_out.points.back().curvature / double(1000);
  
  state_ikfom imu_state = kf_state.get_x();
  IMUpose.clear();

  //save: time offset, acceleration, angular velocity, linear velocity, position, attitude (the last three are predicted values)
  IMUpose.push_back(set_pose6d(0.0, acc_s_last, angvel_last, imu_state.vel, imu_state.pos, imu_state.rot.toRotationMatrix()));

  V3D angvel_avr, acc_avr, acc_imu, vel_imu, pos_imu;
  M3D R_imu;

  double dt = 0;

  input_ikfom in;
  for (auto it_imu = v_imu.begin(); it_imu < (v_imu.end() - 1); it_imu++)
  {
    auto &&head = *(it_imu);
    auto &&tail = *(it_imu + 1);
    
    //pick IMU frames that cover the lidar timestamps
    if (tail->header.stamp.toSec() < last_lidar_end_time_)    continue;
    
    angvel_avr<<0.5 * (head->angular_velocity.x + tail->angular_velocity.x),
                0.5 * (head->angular_velocity.y + tail->angular_velocity.y),
                0.5 * (head->angular_velocity.z + tail->angular_velocity.z);
    acc_avr   <<0.5 * (head->linear_acceleration.x + tail->linear_acceleration.x),
                0.5 * (head->linear_acceleration.y + tail->linear_acceleration.y),
                0.5 * (head->linear_acceleration.z + tail->linear_acceleration.z);

    // fout_imu << setw(10) << head->header.stamp.toSec() - first_lidar_time << " " << angvel_avr.transpose() << " " << acc_avr.transpose() << endl;

    //rescale the accelerometer reading by the measured gravity magnitude (unit normalization)
    acc_avr     = acc_avr * G_m_s2 / mean_acc.norm(); // - state_inout.ba;

    if(head->header.stamp.toSec() < last_lidar_end_time_)
    {
      dt = tail->header.stamp.toSec() - last_lidar_end_time_;
      // dt = tail->header.stamp.toSec() - pcl_beg_time;
    }
    else
    {
      dt = tail->header.stamp.toSec() - head->header.stamp.toSec();
    }
    
    in.acc = acc_avr;
    in.gyro = angvel_avr;
    Q.block<3, 3>(0, 0).diagonal() = cov_gyr;
    Q.block<3, 3>(3, 3).diagonal() = cov_acc;
    Q.block<3, 3>(6, 6).diagonal() = cov_bias_gyr;
    Q.block<3, 3>(9, 9).diagonal() = cov_bias_acc;

    //each new IMU frame triggers one forward propagation (prior update)
    kf_state.predict(dt, Q, in);

    /* save the poses at each IMU measurements */
    imu_state = kf_state.get_x();
    angvel_last = angvel_avr - imu_state.bg;
    acc_s_last  = imu_state.rot * (acc_avr - imu_state.ba);
    for(int i=0; i<3; i++)
    {
      acc_s_last[i] += imu_state.grav[i];
    }
    double &&offs_t = tail->header.stamp.toSec() - pcl_beg_time;
    IMUpose.push_back(set_pose6d(offs_t, acc_s_last, angvel_last, imu_state.vel, imu_state.pos, imu_state.rot.toRotationMatrix()));
  }

  /*** calculated the pos and attitude prediction at the frame-end ***/
  double note = pcl_end_time > imu_end_time ? 1.0 : -1.0;
  dt = note * (pcl_end_time - imu_end_time);

  //make sure the lidar and IMU processing times are aligned
  kf_state.predict(dt, Q, in);
  
  imu_state = kf_state.get_x();
  last_imu_ = meas.imu.back();
  last_lidar_end_time_ = pcl_end_time;

  auto it_pcl = pcl_out.points.end() - 1;

  //walk backwards through the IMU poses of this period
  for (auto it_kp = IMUpose.end() - 1; it_kp != IMUpose.begin(); it_kp--)
  {
    auto head = it_kp - 1;
    auto tail = it_kp;

    //pose and linear velocity are taken from the previous IMU frame (head); acceleration and angular velocity from the next one (tail)
    R_imu<<MAT_FROM_ARRAY(head->rot);
    vel_imu<<VEC_FROM_ARRAY(head->vel);
    pos_imu<<VEC_FROM_ARRAY(head->pos);
    acc_imu<<VEC_FROM_ARRAY(tail->acc);
    angvel_avr<<VEC_FROM_ARRAY(tail->gyr);

    //process the lidar points whose timestamps fall within this IMU interval
    for(; it_pcl->curvature / double(1000) > head->offset_time; it_pcl --)
    {
      //time offset of the point relative to the head of the IMU interval
      dt = it_pcl->curvature / double(1000) - head->offset_time;

      //interpolate according to the timestamp difference
      //IMU pose at the time of this point
      M3D R_i(R_imu * Exp(angvel_avr, dt));
      V3D P_i(it_pcl->x, it_pcl->y, it_pcl->z);
      V3D T_ei(pos_imu + vel_imu * dt + 0.5 * acc_imu * dt * dt - imu_state.pos);
      
      //transform the lidar point into the IMU frame via the extrinsics (Rp+t), undo the motion back to the scan-end pose in the IMU frame, then transform back into the lidar frame
      //very compact, though not strictly a linear interpolation
      V3D P_compensate = imu_state.offset_R_L_I.conjugate() * (imu_state.rot.conjugate() * (R_i * (imu_state.offset_R_L_I * P_i + imu_state.offset_T_L_I) + T_ei) - imu_state.offset_T_L_I);
      
      //write back the de-skewed point
      it_pcl->x = P_compensate(0);
      it_pcl->y = P_compensate(1);
      it_pcl->z = P_compensate(2);

      if (it_pcl == pcl_out.points.begin()) break;
    }
  }
}

2. State Estimation

Back to the paper: each feature point (surface or corner), after being transformed into the world frame, has a residual against the corresponding feature in the existing map as in Eq. (12) (scan-to-map matching); here G_j encodes the direction of the map's line or plane feature. As for the measurement, i.e. the points of the current scan, the true value is the measured value minus the measurement noise, which is familiar from the KF family.
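For a plane feature, which is what the code below actually uses, the residual of Eq. (12) is essentially the signed point-to-plane distance (my paraphrase of the paper's notation):

z_j = \mathbf{u}_j^{T}\Big( {}^{G}R_{I}\,({}^{I}R_{L}\,{}^{L}\mathbf{p}_j + {}^{I}\mathbf{t}_{L}) + {}^{G}\mathbf{p}_{I} - {}^{G}\mathbf{q}_j \Big)

with u_j the unit normal of the fitted plane and G q_j a point on it; in the code this appears as pd2, the plane equation evaluated at the world-frame point.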

Combining the above gives a relation between the true point, the measurement, the state and the noise that is exactly zero for the true state and noise; expanding it to first order around the current estimate gives the measurement, a first-order term in the error state, and the noise (Eq. 14). Expanding around the (near-)zero error keeps the linearization error small, which is the main advantage of an error-state KF. The first relation is also what we differentiate to obtain the Jacobian H in Eq. (14).

From the relation between the true state and the prediction, the difference between the truth and the k-th iterate equals a first-order expansion of the error at the k-th iterate; the resulting Eq. (16) is the prior update of the error covariance.

Putting these together yields the overall error cost with two parts: the difference between the true state and the current iterate (the prior from the first part), and the difference between the measurement and the current iterate (from Eq. 14). Eq. (17) is in fact the maximum a posteriori estimate over two Gaussians, the prior and the observation. Substituting Eq. (15) into Eq. (17) and differentiating the cost with respect to the error starts the iterative optimization: each iteration recomputes the gain K and uses it to obtain the posterior of the error, until the change of the error drops below a threshold.

When the iteration terminates, the covariance matrix is updated and becomes the prior for the next round.
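The resulting iterated update, which update_iterated_dyn_share_modified implements below, can be summarized as (paraphrasing Eqs. 14-19; note the code stores the residual with a minus sign in ekfom_data.h, and the extra Jacobian that maps the error between tangent spaces is what the res_temp_SO3 / res_temp_S2 blocks handle):

K = \big(H^{T}R^{-1}H + P^{-1}\big)^{-1} H^{T} R^{-1}, \qquad \delta\mathbf{x} = -K\,\mathbf{z}^{\kappa} - (I - K H)\,(\widehat{\mathbf{x}}^{\kappa} \boxminus \widehat{\mathbf{x}}), \qquad \widehat{\mathbf{x}}^{\kappa+1} = \widehat{\mathbf{x}}^{\kappa} \boxplus \delta\mathbf{x}

and, once converged, \bar{\mathbf{x}} = \widehat{\mathbf{x}}^{\kappa+1} and \bar{P} = (I - K H)\,P.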

With that, the two key calls in the main loop of laserMapping.cpp, state initialization and state update, are not hard to follow, although I have not fully digested every detail, and the macro-heavy style of build_manifold.hpp does not make it easier -_-!

kf.init_dyn_share(get_f, df_dx, df_dw, h_share_model, NUM_MAX_ITERATIONS, epsi);
kf.update_iterated_dyn_share_modified(LASER_POINT_COV, solve_H_time);
//build the measurement terms (residuals) and the Jacobian of the measurement function
void h_share_model(state_ikfom &s, esekfom::dyn_share_datastruct<double> &ekfom_data)
{
    double match_start = omp_get_wtime();
    laserCloudOri->clear(); 
    corr_normvect->clear(); 
    total_residual = 0.0; 

    //compute a residual for each downsampled feature point
    for (int i = 0; i < feats_down_size; i++)
    {
        PointType &point_body  = feats_down_body->points[i]; 
        PointType &point_world = feats_down_world->points[i]; 

        //transform the point into the world frame
        V3D p_body(point_body.x, point_body.y, point_body.z);
        V3D p_global(s.rot * (s.offset_R_L_I*p_body + s.offset_T_L_I) + s.pos);
        point_world.x = p_global(0);
        point_world.y = p_global(1);
        point_world.z = p_global(2);
        point_world.intensity = point_body.intensity;

        vector<float> pointSearchSqDis(NUM_MATCH_POINTS);

        auto &points_near = Nearest_Points[i];

        if (ekfom_data.converge)
        {
            //search the nearest neighbors of the feature point in the map built so far
            ikdtree.Nearest_Search(point_world, NUM_MATCH_POINTS, points_near, pointSearchSqDis);
            point_selected_surf[i] = points_near.size() < NUM_MATCH_POINTS ? false : pointSearchSqDis[NUM_MATCH_POINTS - 1] > 5 ? false : true;
        }

        if (!point_selected_surf[i]) continue;

        VF(4) pabcd;
        point_selected_surf[i] = false;

        //fit the plane ax+by+cz+d=0 and compute the point-to-plane distance
        if (esti_plane(pabcd, points_near, 0.1f))
        {
            float pd2 = pabcd(0) * point_world.x + pabcd(1) * point_world.y + pabcd(2) * point_world.z + pabcd(3);
            float s = 1 - 0.9 * fabs(pd2) / sqrt(p_body.norm());

            //empirical threshold: accept the point as a valid residual
            if (s > 0.9)
            {
                point_selected_surf[i] = true;
                normvec->points[i].x = pabcd(0);
                normvec->points[i].y = pabcd(1);
                normvec->points[i].z = pabcd(2);
                normvec->points[i].intensity = pd2;
                res_last[i] = abs(pd2);
            }
        }
    }
    
    effct_feat_num = 0;

    for (int i = 0; i < feats_down_size; i++)
    {
        if (point_selected_surf[i])
        {
            laserCloudOri->points[effct_feat_num] = feats_down_body->points[i];
            corr_normvect->points[effct_feat_num] = normvec->points[i];
            total_residual += res_last[i];
            effct_feat_num ++;
        }
    }

    res_mean_last = total_residual / effct_feat_num;
    match_time  += omp_get_wtime() - match_start;
    double solve_start_  = omp_get_wtime();
    
    /*** Computation of Measurement Jacobian matrix H and measurement vector ***/
    ekfom_data.h_x = MatrixXd::Zero(effct_feat_num, 12); //23
    ekfom_data.h.resize(effct_feat_num);

    //Jacobian of the measurement with respect to the error state, as in Eqs. (14), (12) and (13) of the paper
    for (int i = 0; i < effct_feat_num; i++)
    {
        const PointType &laser_p  = laserCloudOri->points[i];
        V3D point_this_be(laser_p.x, laser_p.y, laser_p.z);
        M3D point_be_crossmat;
        point_be_crossmat << SKEW_SYM_MATRX(point_this_be);
        V3D point_this = s.offset_R_L_I * point_this_be + s.offset_T_L_I;//in the IMU frame
        M3D point_crossmat;
        point_crossmat<<SKEW_SYM_MATRX(point_this);
        const PointType &norm_p = corr_normvect->points[i];
        V3D norm_vec(norm_p.x, norm_p.y, norm_p.z);

        /*** calculate the Measurement Jacobian matrix H ***/
        V3D C(s.rot.conjugate() *norm_vec);
        V3D A(point_crossmat * C);
        if (extrinsic_est_en)
        {
            V3D B(point_be_crossmat * s.offset_R_L_I.conjugate() * C); //s.rot.conjugate()*norm_vec);
            ekfom_data.h_x.block<1, 12>(i,0) << norm_p.x, norm_p.y, norm_p.z, VEC_FROM_ARRAY(A), VEC_FROM_ARRAY(B), VEC_FROM_ARRAY(C);
        }
        else
        {
            ekfom_data.h_x.block<1, 12>(i,0) << norm_p.x, norm_p.y, norm_p.z, VEC_FROM_ARRAY(A), 0.0, 0.0, 0.0, 0.0, 0.0, 0.0;
        }

        /*** Measurement: distance to the closest surface/corner ***/
        ekfom_data.h(i) = -norm_p.intensity;//point-to-plane distance
    }
    solve_time += omp_get_wtime() - solve_start_;
}
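For reference, the 1×12 row written into ekfom_data.h_x above corresponds to the following blocks, obtained by differentiating the point-to-plane residual with p_I = R_LI · p_L + t_LI (my own derivation; check the signs against the paper):

\frac{\partial h}{\partial {}^{G}\mathbf{p}_{I}} = \mathbf{u}^{T}, \qquad \frac{\partial h}{\partial \delta\boldsymbol\theta} = -\mathbf{u}^{T}\,{}^{G}R_{I}\,[\mathbf{p}_I]_{\times} = \mathbf{A}^{T}, \qquad \frac{\partial h}{\partial \delta\boldsymbol\theta_{LI}} = -\mathbf{u}^{T}\,{}^{G}R_{I}\,{}^{I}R_{L}\,[{}^{L}\mathbf{p}]_{\times} = \mathbf{B}^{T}, \qquad \frac{\partial h}{\partial {}^{I}\mathbf{t}_{L}} = \mathbf{u}^{T}\,{}^{G}R_{I} = \mathbf{C}^{T}

where A = [p_I]_× Rᵀu, B = [L p]_× R_LIᵀ Rᵀu and C = Rᵀu are exactly the vectors computed in the loop; the identity [a]_×ᵀ = −[a]_× accounts for the sign flip.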
//iterated error state EKF update modified for one specific system.
//implements the iterated error-state KF: updates the state and the covariance matrix
void update_iterated_dyn_share_modified(double R, double &solve_time) 
{
	dyn_share_datastruct<scalar_type> dyn_share;
	dyn_share.valid = true;
	dyn_share.converge = true;
	int t = 0;

	//the state and covariance at the start of the iteration are the optimal estimates from the previous update
	state x_propagated = x_;
	cov P_propagated = P_;

	int dof_Measurement; 
	
	Matrix<scalar_type, n, 1> K_h;
	Matrix<scalar_type, n, n> K_x;
	
	vectorized_state dx_new = vectorized_state::Zero();
	for(int i=-1; i<maximum_iter; i++)
	{
		dyn_share.valid = true;
		h_dyn_share(x_, dyn_share);

	#ifdef USE_sparse
		spMt h_x_ = dyn_share.h_x.sparseView();
	#else
		Eigen::Matrix<scalar_type, Eigen::Dynamic, 12> h_x_ = dyn_share.h_x;
	#endif	
		double solve_start = omp_get_wtime();
		dof_Measurement = h_x_.rows();//number of measurements (constraints)
		vectorized_state dx;
		x_.boxminus(dx, x_propagated);
		dx_new = dx;
		
		if(! dyn_share.valid)
		{
			continue; 
		}
		
		//the covariance is not carried across iterations; only the prior is updated inside the loop, and the posterior is written once on the final exit
		P_ = P_propagated;
		
		//this whole block computes the prior update of the covariance, roughly P = (J^-1)*P*(J^-T), as in Eqs. (16)-(18) of the paper
		Matrix<scalar_type, 3, 3> res_temp_SO3;
		MTK::vect<3, scalar_type> seg_SO3;
		for (std::vector<std::pair<int, int> >::iterator it = x_.SO3_state.begin(); it != x_.SO3_state.end(); it++) {
			int idx = (*it).first;
			int dim = (*it).second;
			for(int i = 0; i < 3; i++){
				seg_SO3(i) = dx(idx+i);
			}

			res_temp_SO3 = MTK::A_matrix(seg_SO3).transpose();
			dx_new.template block<3, 1>(idx, 0) = res_temp_SO3 * dx_new.template block<3, 1>(idx, 0);
			for(int i = 0; i < n; i++){
				P_. template block<3, 1>(idx, i) = res_temp_SO3 * (P_. template block<3, 1>(idx, i));	
			}
			for(int i = 0; i < n; i++){
				P_. template block<1, 3>(i, idx) =(P_. template block<1, 3>(i, idx)) *  res_temp_SO3.transpose();	
			}
		}

		Matrix<scalar_type, 2, 2> res_temp_S2;
		MTK::vect<2, scalar_type> seg_S2;
		for (std::vector<std::pair<int, int> >::iterator it = x_.S2_state.begin(); it != x_.S2_state.end(); it++) {
			int idx = (*it).first;
			int dim = (*it).second;
			for(int i = 0; i < 2; i++){
				seg_S2(i) = dx(idx + i);
			}

			Eigen::Matrix<scalar_type, 2, 3> Nx;
			Eigen::Matrix<scalar_type, 3, 2> Mx;
			x_.S2_Nx_yy(Nx, idx);
			x_propagated.S2_Mx(Mx, seg_S2, idx);
			res_temp_S2 = Nx * Mx; 
			dx_new.template block<2, 1>(idx, 0) = res_temp_S2 * dx_new.template block<2, 1>(idx, 0);
			for(int i = 0; i < n; i++){
				P_. template block<2, 1>(idx, i) = res_temp_S2 * (P_. template block<2, 1>(idx, i));	
			}
			for(int i = 0; i < n; i++){
				P_. template block<1, 2>(i, idx) = (P_. template block<1, 2>(i, idx)) * res_temp_S2.transpose();
			}
		}
		
		if(n > dof_Measurement)
		{
			Eigen::Matrix<scalar_type, Eigen::Dynamic, Eigen::Dynamic> h_x_cur = Eigen::Matrix<scalar_type, Eigen::Dynamic, Eigen::Dynamic>::Zero(dof_Measurement, n);
			h_x_cur.topLeftCorner(dof_Measurement, 12) = h_x_;
			
			//the gain K is recomputed in every iteration, as in Eq. (18) of the paper
			Matrix<scalar_type, n, Eigen::Dynamic> K_ = P_ * h_x_cur.transpose() * (h_x_cur * P_ * h_x_cur.transpose()/R + Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic>::Identity(dof_Measurement, dof_Measurement)).inverse()/R;
			K_h = K_ * dyn_share.h;
			K_x = K_ * h_x_cur;
		}
		else
		{
		#ifdef USE_sparse
			//avoid the large matrix inversion: K is computed via the sparse decomposition of Eq. (20) in the paper
			spMt A = h_x_.transpose() * h_x_;
			cov P_temp = (P_/R).inverse();
			P_temp. template block<12, 12>(0, 0) += A;
			P_temp = P_temp.inverse();
			
			K_ = P_temp. template block<n, 12>(0, 0) * h_x_.transpose();
			K_x = cov::Zero();
			K_x. template block<n, 12>(0, 0) = P_inv. template block<n, 12>(0, 0) * HTH;
			
		#else
			cov P_temp = (P_/R).inverse();

			Eigen::Matrix<scalar_type, 12, 12> HTH = h_x_.transpose() * h_x_; 
			P_temp. template block<12, 12>(0, 0) += HTH;
			
			cov P_inv = P_temp.inverse();

			K_h = P_inv. template block<n, 12>(0, 0) * h_x_.transpose() * dyn_share.h;
			
			K_x.setZero(); // = cov::Zero();
			K_x. template block<n, 12>(0, 0) = P_inv. template block<n, 12>(0, 0) * HTH;
		#endif 
		}

		//since this is an iterated error-state KF, what we obtain here is the optimal estimate of the error!
		Matrix<scalar_type, n, 1> dx_ = K_h + (K_x - Matrix<scalar_type, n, n>::Identity()) * dx_new; 
		state x_before = x_;
		
		x_.boxplus(dx_);//apply the optimal error estimate to the state

		dyn_share.converge = true;

		//convergence is declared when every component of the estimated error falls below its threshold
		for(int i = 0; i < n ; i++)
		{
			if(std::fabs(dx_[i]) > limit[i])
			{
				dyn_share.converge = false;
				break;
			}
		}
		if(dyn_share.converge) t++;
		
		if(!t && i == maximum_iter - 2)
		{
			dyn_share.converge = true;
		}

		//when the iteration ends, update the posterior covariance, roughly P = (I - K*H)*P, as in Eq. (19) of the paper
		if(t > 1 || i == maximum_iter - 1)
		{
			L_ = P_;

			Matrix<scalar_type, 3, 3> res_temp_SO3;
			MTK::vect<3, scalar_type> seg_SO3;
			for(typename std::vector<std::pair<int, int> >::iterator it = x_.SO3_state.begin(); it != x_.SO3_state.end(); it++) {
				int idx = (*it).first;
				for(int i = 0; i < 3; i++){
					seg_SO3(i) = dx_(i + idx);
				}
				res_temp_SO3 = MTK::A_matrix(seg_SO3).transpose();
				for(int i = 0; i < n; i++){
					L_. template block<3, 1>(idx, i) = res_temp_SO3 * (P_. template block<3, 1>(idx, i)); 
				}
				
				for(int i = 0; i < 12; i++){
					K_x. template block<3, 1>(idx, i) = res_temp_SO3 * (K_x. template block<3, 1>(idx, i));
				}
				
				for(int i = 0; i < n; i++){
					L_. template block<1, 3>(i, idx) = (L_. template block<1, 3>(i, idx)) * res_temp_SO3.transpose();
					P_. template block<1, 3>(i, idx) = (P_. template block<1, 3>(i, idx)) * res_temp_SO3.transpose();
				}
			}

			Matrix<scalar_type, 2, 2> res_temp_S2;
			MTK::vect<2, scalar_type> seg_S2;
			for(typename std::vector<std::pair<int, int> >::iterator it = x_.S2_state.begin(); it != x_.S2_state.end(); it++) {
				int idx = (*it).first;

				for(int i = 0; i < 2; i++){
					seg_S2(i) = dx_(i + idx);
				}

				Eigen::Matrix<scalar_type, 2, 3> Nx;
				Eigen::Matrix<scalar_type, 3, 2> Mx;
				x_.S2_Nx_yy(Nx, idx);
				x_propagated.S2_Mx(Mx, seg_S2, idx);
				res_temp_S2 = Nx * Mx; 
				for(int i = 0; i < n; i++){
					L_. template block<2, 1>(idx, i) = res_temp_S2 * (P_. template block<2, 1>(idx, i)); 
				}
				
				for(int i = 0; i < 12; i++){
					K_x. template block<2, 1>(idx, i) = res_temp_S2 * (K_x. template block<2, 1>(idx, i));
				}
				
				for(int i = 0; i < n; i++){
					L_. template block<1, 2>(i, idx) = (L_. template block<1, 2>(i, idx)) * res_temp_S2.transpose();
					P_. template block<1, 2>(i, idx) = (P_. template block<1, 2>(i, idx)) * res_temp_S2.transpose();
				}
			}

			P_ = L_ - K_x.template block<n, 12>(0, 0) * P_.template block<12, n>(0, 0);
			
			solve_time += omp_get_wtime() - solve_start;
			return;
		}
		solve_time += omp_get_wtime() - solve_start;
	}
}
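The two branches above compute the same gain in algebraically equivalent forms; by the matrix inversion lemma (this is the point of Eq. (20): invert an n×n matrix in the state dimension instead of an m×m one in the measurement dimension when there are many points):

K = P H^{T}\big(H P H^{T} + R I\big)^{-1} = \big(H^{T}R^{-1}H + P^{-1}\big)^{-1} H^{T} R^{-1}

In the code R is a single scalar (LASER_POINT_COV), so the second form reduces to inverting (P_/R).inverse() + HᵀH restricted to the 12 columns that H actually touches.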

Once the optimal state estimate is obtained, the point cloud is transformed from the lidar frame into the global frame, yielding the incrementally accumulated map and quite good mapping results. Throughout, only the IMU stream and the raw point cloud are used, each contributing its own information, which achieves the goal of tightly coupled pose estimation.
