Cartographer Source Code Reading 02: local_trajectory_builder_2d.cc (1) - Front-End Matching

Continuing from the previous post, this article reads through local_trajectory_builder_2d.cc. This file implements the entry points that feed laser scans, IMU, and odometry data into the pipeline, and with them the front-end (local) matching.
The code is at cartographer/mapping/internal/2d/local_trajectory_builder_2d.cc.

1. Adding range data: AddRangeData

1.1 AddRangeData

In global_trajectory_builder, this function is called when the incoming lidar data is matched against the submap:

// Submap matching
std::unique_ptr<typename LocalTrajectoryBuilder::MatchingResult>
        matching_result = local_trajectory_builder_->AddRangeData(
            sensor_id, timed_point_cloud_data);

This is the function that adds a new range scan. Its main responsibilities are:

  • Synchronizing data between multiple sensors (chiefly multiple lidars)
  • Interpolating with the extrapolator (pose extrapolator) by timestamp to obtain a pose for every laser point
  • Removing motion distortion
  • Filtering out invalid data (collecting the miss point set)
  • Calling AddAccumulatedRangeData() to perform the matching and insertion
std::unique_ptr<LocalTrajectoryBuilder2D::MatchingResult>
LocalTrajectoryBuilder2D::AddRangeData(
        const std::string& sensor_id,
        const sensor::TimedPointCloudData& unsynchronized_data)
{
    // Synchronize data from multiple range sensors; with a single sensor this is effectively a pass-through.
    auto synchronized_data =
            range_data_collator_.AddRangeData(sensor_id, unsynchronized_data);

    if (synchronized_data.ranges.empty())
    {
        LOG(INFO) << "Range data collator filling buffer.";
        return nullptr;
    }

    // Timestamp of the scan: the point cloud's capture time, also used as the reference when initializing the PoseExtrapolator.
    const common::Time& time = synchronized_data.time;

    // Initialize extrapolator now if we do not ever use an IMU.
    if (!options_.use_imu_data())
    {
        InitializeExtrapolator(time);
    }

    if (extrapolator_ == nullptr)
    {
        // Until we've initialized the extrapolator with our first IMU message, we
        // cannot compute the orientation of the rangefinder.
        LOG(INFO) << "Extrapolator not yet initialized.";
        return nullptr;
    }

    CHECK(!synchronized_data.ranges.empty());
    // TODO(gaschler): Check if this can strictly be 0.
    // Check that the relative time of the last point is <= 0; normally it is exactly 0.
    CHECK_LE(synchronized_data.ranges.back().point_time.time, 0.f);
	
    // Timestamp of the first laser point: the cloud's capture time plus the first point's recorded relative time.
    const common::Time time_first_point =
            time +
            common::FromSeconds(synchronized_data.ranges.front().point_time.time);

    // If this time is earlier than the extrapolator's latest pose time, the
    // PoseExtrapolator had not yet been initialized when the first point was captured.
    if (time_first_point < extrapolator_->GetLastPoseTime())
    {
        LOG(INFO) << "Extrapolator is still initializing.";
        return nullptr;
    }

    // Use the PoseExtrapolator to interpolate a pose for every point; this is what removes the motion distortion.
    std::vector<transform::Rigid3f> range_data_poses;
    // Reserve memory for the pose vector.
    range_data_poses.reserve(synchronized_data.ranges.size());
    bool warned = false;

    // Iterate over every laser point.
    for (const auto& range : synchronized_data.ranges)
    {
        // Timestamp of this laser point: the capture time of each individual point in the cloud.
        common::Time time_point = time + common::FromSeconds(range.point_time.time);

        if (time_point < extrapolator_->GetLastExtrapolatedTime())
        {
            if (!warned)
            {
                LOG(ERROR)
                        << "Timestamp of individual range data point jumps backwards from "
                        << extrapolator_->GetLastExtrapolatedTime() << " to " << time_point;
                warned = true;
            }

            // Clamp backwards-jumping timestamps to the last extrapolated time.
            time_point = extrapolator_->GetLastExtrapolatedTime();
        }

        // Extrapolate the pose for this point.
        range_data_poses.push_back(
                    extrapolator_->ExtrapolatePose(time_point).cast<float>());
    }

    if (num_accumulated_ == 0)
    {
        // 'accumulated_range_data_.origin' is uninitialized until the last
        // accumulation.
        accumulated_range_data_ = sensor::RangeData{{}, {}, {}};
    }

    // Drop any returns below the minimum range and convert returns beyond the
    // maximum range into misses.
    // Accumulate the data; the per-point poses computed above undo the motion distortion here.
    for (size_t i = 0; i < synchronized_data.ranges.size(); ++i)
    {
        // The i-th laser point (with its relative time).
        const sensor::TimedRangefinderPoint& hit =
                synchronized_data.ranges[i].point_time;

        // Sensor origin in the local frame.
        const Eigen::Vector3f origin_in_local =
                range_data_poses[i] *
                synchronized_data.origins.at(synchronized_data.ranges[i].origin_index);

        // Hit point in the local frame.
        sensor::RangefinderPoint hit_in_local =
                range_data_poses[i] * sensor::ToRangefinderPoint(hit);

        // Length of the laser ray.
        const Eigen::Vector3f delta = hit_in_local.position - origin_in_local;
        const float range = delta.norm();

        // Validate by range: classify the point as a hit or a miss by its distance.
        if (range >= options_.min_range())
        {
            if (range <= options_.max_range())
            {
                accumulated_range_data_.returns.push_back(hit_in_local);
            }
            else
            {
                hit_in_local.position =
                        origin_in_local +
                        options_.missing_data_ray_length() / range * delta;
                // Collect the shortened ray endpoint into the miss point set.
                accumulated_range_data_.misses.push_back(hit_in_local);
            }
        }
    }

    // One more scan accumulated.
    ++num_accumulated_;

    // Once enough data has accumulated, run the matching; e.g. one could require
    // 5 scans per match. The default num_accumulated_range_data is 1.
    if (num_accumulated_ >= options_.num_accumulated_range_data())
    {
        // Compute the time elapsed since the last sensor frame.
        const common::Time current_sensor_time = synchronized_data.time;
        absl::optional<common::Duration> sensor_duration;
        if (last_sensor_time_.has_value())
        {
            sensor_duration = current_sensor_time - last_sensor_time_.value();
        }

        last_sensor_time_ = current_sensor_time;
        num_accumulated_ = 0;

        // Estimate the gravity direction, needed to project the point cloud onto the horizontal plane.
        const transform::Rigid3d gravity_alignment = transform::Rigid3d::Rotation(
                    extrapolator_->EstimateGravityOrientation(time));

        // TODO(gaschler): This assumes that 'range_data_poses.back()' is at time
        // 'time'.
        // Origin of the accumulated data.
        accumulated_range_data_.origin = range_data_poses.back().translation();

        // Finally hand off to AddAccumulatedRangeData.
        return AddAccumulatedRangeData(
                    time,
                    TransformToGravityAlignedFrameAndFilter(  // project the cloud to horizontal and filter
                        gravity_alignment.cast<float>() * range_data_poses.back().inverse(),
                        accumulated_range_data_),
                    gravity_alignment, sensor_duration);
    }
    return nullptr;
}
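The min_range / max_range bookkeeping in the loop above can be sketched in isolation. The following is a simplified illustration, not Cartographer's actual code; the types and parameters are hypothetical stand-ins for `options_.min_range()`, `options_.max_range()`, and `options_.missing_data_ray_length()`:

```cpp
#include <array>
#include <cmath>
#include <string>

struct Classified {
  std::string label;  // "drop", "return", or "miss"
  double x, y, z;     // resulting point position
};

// Classify one laser point against the sensor origin, mirroring the
// min_range / max_range / missing_data_ray_length logic above.
Classified ClassifyRange(const std::array<double, 3>& origin,
                         const std::array<double, 3>& hit,
                         double min_range, double max_range,
                         double missing_data_ray_length) {
  const double dx = hit[0] - origin[0];
  const double dy = hit[1] - origin[1];
  const double dz = hit[2] - origin[2];
  const double range = std::sqrt(dx * dx + dy * dy + dz * dz);
  if (range < min_range) {
    return {"drop", hit[0], hit[1], hit[2]};  // too close: discarded entirely
  }
  if (range <= max_range) {
    return {"return", hit[0], hit[1], hit[2]};  // a genuine hit
  }
  // Too far: shorten the ray to missing_data_ray_length and record a miss.
  const double scale = missing_data_ray_length / range;
  return {"miss", origin[0] + scale * dx, origin[1] + scale * dy,
          origin[2] + scale * dz};
}
```

Note that points below min_range vanish entirely, while points beyond max_range still carry information: the ray passed through free space, so a shortened endpoint is kept as a miss.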

As the final lines show, TransformToGravityAlignedFrameAndFilter() projects the point cloud onto the horizontal plane and filters it; the filtered, undistorted cloud is then passed into AddAccumulatedRangeData().

1.2 TransformToGravityAlignedFrameAndFilter

Transforms the data into the gravity-aligned frame (i.e. the horizontal plane) and applies a voxel filter.

sensor::RangeData LocalTrajectoryBuilder2D::TransformToGravityAlignedFrameAndFilter(
        const transform::Rigid3f& transform_to_gravity_aligned_frame,
        const sensor::RangeData& range_data) const
{
    // Project the cloud into the gravity-aligned frame and crop it to a given z range.
    const sensor::RangeData cropped =
            sensor::CropRangeData(sensor::TransformRangeData(
                                      range_data, transform_to_gravity_aligned_frame),
                                  options_.min_z(), options_.max_z());  // defaults: -0.8 to 2

    // Voxel-filter the projected cloud to remove redundant points.
    return sensor::RangeData{
        cropped.origin,
        sensor::VoxelFilter(options_.voxel_filter_size()).Filter(cropped.returns),
        sensor::VoxelFilter(options_.voxel_filter_size()).Filter(cropped.misses)};
}
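The idea behind sensor::VoxelFilter can be illustrated with a toy version: partition space into cubes of side `voxel_filter_size` and keep at most one point per cube. This is only a sketch of the concept (Cartographer's real filter differs in implementation details), with hypothetical `Point` and `ToyVoxelFilter` names:

```cpp
#include <cmath>
#include <set>
#include <tuple>
#include <vector>

struct Point {
  double x, y, z;
};

// Keep at most one point per cubic cell of side `voxel_size` (the first
// one encountered); later points falling in an occupied cell are dropped.
std::vector<Point> ToyVoxelFilter(const std::vector<Point>& points,
                                  double voxel_size) {
  std::set<std::tuple<long, long, long>> occupied;
  std::vector<Point> filtered;
  for (const Point& p : points) {
    const auto cell = std::make_tuple(
        static_cast<long>(std::floor(p.x / voxel_size)),
        static_cast<long>(std::floor(p.y / voxel_size)),
        static_cast<long>(std::floor(p.z / voxel_size)));
    if (occupied.insert(cell).second) {
      filtered.push_back(p);  // first point in this voxel: keep it
    }
  }
  return filtered;
}
```

This thins out dense clusters while preserving the overall shape of the scan, which is why it is applied to both the returns and the misses above.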

1.3 AddAccumulatedRangeData

AddRangeData() calls AddAccumulatedRangeData() to match and insert the laser point cloud, returning the matching result. It in turn calls ScanMatch() (scan matching) and InsertIntoSubmap() (insertion into the active submaps).

// Performs the actual scan matching; calls ScanMatch and InsertIntoSubmap.
std::unique_ptr<LocalTrajectoryBuilder2D::MatchingResult>
LocalTrajectoryBuilder2D::AddAccumulatedRangeData(
        const common::Time time,
        const sensor::RangeData& gravity_aligned_range_data,
        const transform::Rigid3d& gravity_alignment,
        const absl::optional<common::Duration>& sensor_duration)
{
    if (gravity_aligned_range_data.returns.empty())
    {
        LOG(WARNING) << "Dropped empty horizontal range data.";
        return nullptr;
    }

    // Computes a gravity aligned pose prediction.
    // Predict the 6-DoF pose (i.e. not yet projected onto the horizontal plane).
    const transform::Rigid3d non_gravity_aligned_pose_prediction =
            extrapolator_->ExtrapolatePose(time);

    // Project the 6-DoF pose above down to a 3-DoF (2D) pose, used as the initial
    // guess for scan matching. Without odometry, the pose obtained from
    // scan-to-scan matching can serve as this initial estimate.
    const transform::Rigid2d pose_prediction = transform::Project2D(
                non_gravity_aligned_pose_prediction * gravity_alignment.inverse());

    // Apply an adaptive voxel filter.
    const sensor::PointCloud& filtered_gravity_aligned_point_cloud =
            sensor::AdaptiveVoxelFilter(options_.adaptive_voxel_filter_options())
            .Filter(gravity_aligned_range_data.returns);

    if (filtered_gravity_aligned_point_cloud.empty())
    {
        return nullptr;
    }

    // local map frame <- gravity-aligned frame
    // Run ScanMatch: match the scan against the active submap to find the optimal
    // pose at which to insert the scan into the submap.
    std::unique_ptr<transform::Rigid2d> pose_estimate_2d =
            ScanMatch(time, pose_prediction, filtered_gravity_aligned_point_cloud);

    if (pose_estimate_2d == nullptr)
    {
        LOG(WARNING) << "Scan matching failed.";
        return nullptr;
    }

    // Lift the pose back to 6 DoF.
    const transform::Rigid3d pose_estimate =
            transform::Embed3D(*pose_estimate_2d) * gravity_alignment;

    // Update the extrapolator with the optimized pose.
    extrapolator_->AddPose(time, pose_estimate);

    // Transform the point cloud by the estimated pose, i.e. into the local
    // submap frame. This is a matrix multiplication applied to every point.
    sensor::RangeData range_data_in_local =
            TransformRangeData(gravity_aligned_range_data,
                               transform::Embed3D(pose_estimate_2d->cast<float>()));

    // Insert the cloud into the submaps and return the insertion result.
    std::unique_ptr<InsertionResult> insertion_result = InsertIntoSubmap(
                time, range_data_in_local, filtered_gravity_aligned_point_cloud,
                pose_estimate, gravity_alignment.rotation());

    // Return the matching result (latency/metrics bookkeeping omitted here).
    return absl::make_unique<MatchingResult>(
                MatchingResult{time, pose_estimate, std::move(range_data_in_local),
                               std::move(insertion_result)});
}
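In 2D, the final transform into the submap frame reduces to rotating each gravity-aligned point by the estimated yaw and translating by the estimated position, with z passing through unchanged. A minimal sketch of this step, using hypothetical `Pose2` and `Point3` helper types rather than Cartographer's transform classes:

```cpp
#include <cmath>

struct Pose2 {
  double x, y, theta;  // 2D translation plus yaw
};

struct Point3 {
  double x, y, z;
};

// Rotate a gravity-aligned point by the estimated yaw and translate it by
// the estimated position; z is unchanged, as when a 2D pose is embedded
// into 3D before being applied to the cloud.
Point3 TransformPoint(const Pose2& pose, const Point3& p) {
  const double c = std::cos(pose.theta);
  const double s = std::sin(pose.theta);
  return {pose.x + c * p.x - s * p.y,
          pose.y + s * p.x + c * p.y,
          p.z};
}
```

Applying this to every return and miss is what TransformRangeData does above with Embed3D(pose_estimate_2d->cast<float>()).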

1.4 Summary

This file contains the core of the front-end matching:

1.1 AddRangeData: synchronizes data from different sensors, removes motion distortion, filters out unusable points, and calls AddAccumulatedRangeData().
1.2 TransformToGravityAlignedFrameAndFilter: projects the point cloud onto the horizontal plane, applies voxel filtering, and returns the processed cloud; it is invoked inside 1.1 on the way into 1.3.
1.3 AddAccumulatedRangeData: the pose extrapolator predicts an initial pose from the point-cloud timestamp -> the cloud is voxel-filtered and ScanMatch is called to obtain the optimal scan pose -> the extrapolator is updated with this optimal pose -> the cloud is transformed into the submap frame and InsertIntoSubmap inserts it.

Section 1.3 thus calls ScanMatch for scan matching and InsertIntoSubmap for inserting the cloud into the submaps; together with the pose extrapolator (extrapolator), these three components will be read next.
