cartographer 3D Point Cloud Mapping Tutorial


This article was originally written by XRBLS. Reposting is welcome, but please keep this copyright notice. Any questions can be asked via WeChat: jintianiloveu

For autonomous driving and robotics, mapping is a crucial step. It can be done with a 3D point cloud, or with the single-line lidar found in robot vacuums. This post records in detail the mapping workflow using cartographer. The whole process is simple and straightforward. First, install a few dependencies.

cartographer dependencies

Install Ceres Solver:

git clone https://github.com/BlueWhaleRobot/ceres-solver.git
cd ceres-solver
mkdir build
cd build
cmake ..
make -j
sudo make install

Install protobuf 3:

git clone https://github.com/google/protobuf.git
cd protobuf
git checkout v3.6.1
mkdir build
cd build
cmake  \
  -DCMAKE_POSITION_INDEPENDENT_CODE=ON \
  -DCMAKE_BUILD_TYPE=Release \
  -Dprotobuf_BUILD_TESTS=OFF \
  ../cmake
make -j 2
sudo make install

Install cartographer. Note: do not build cartographer with plain cmake + make; generate Ninja build files and build with ninja. Also, the master branch is reportedly broken (who knows what Google's engineers are up to), so check out the release-1.0 version.

git clone https://github.com/googlecartographer/cartographer.git
cd cartographer
git checkout release-1.0
mkdir build
cd build
cmake .. -G Ninja
ninja
CTEST_OUTPUT_ON_FAILURE=1 ninja test
sudo ninja install

You can actually use the official cartographer here; the BlueWhale Robot fork (the one used for ceres above) works too. The differences between them are something to look into another day.

Finally, install the ROS wrapper:

mkdir catkin_ws
cd catkin_ws
wstool init src
wstool merge -t src https://raw.githubusercontent.com/googlecartographer/cartographer_ros/master/cartographer_ros.rosinstall
wstool update -t src

catkin_make_isolated --install --use-ninja
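
If the build above fails because of missing system packages, resolving them with rosdep before running catkin_make_isolated usually helps; the workspace also has to be sourced afterwards so that roslaunch can find the cartographer_ros launch files (these commands follow the official cartographer_ros build instructions):

# run before catkin_make_isolated if packages are missing
sudo rosdep init     # only needed once per machine
rosdep update
rosdep install --from-paths src --ignore-src --rosdistro=${ROS_DISTRO} -y

# run after the build so roslaunch can find the workspace
source install_isolated/setup.bash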

Testing

OK, with all the dependencies installed, let's run a test. You can test offline by downloading the 2D Deutsches Museum rosbag; running it through cartographer first produces a pbstream file, and then cartographer's own node converts the pbstream into a map.
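
For reference, downloading the demo bag and producing the pbstream offline can be done like this (the URL and launch file are the ones from the official cartographer_ros documentation; adjust paths to your setup):

wget -P ~/Downloads https://storage.googleapis.com/cartographer-public-data/bags/backpack_2d/cartographer_paper_deutsches_museum.bag
# writes cartographer_paper_deutsches_museum.bag.pbstream next to the bag
roslaunch cartographer_ros offline_backpack_2d.launch bag_filenames:=${HOME}/Downloads/cartographer_paper_deutsches_museum.bag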

rosrun cartographer_ros cartographer_pbstream_to_ros_map  -map_filestem=./aa_map -pbstream_filename=/media/jintain/wd/ros/slam/rosbags/cartographer_paper_deutsches_museum.bag.pbstream
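
This writes an aa_map.pgm / aa_map.yaml pair in the current directory, which can be checked quickly with map_server, for example:

rosrun map_server map_server aa_map.yaml   # then add a Map display in rviz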

This offline workflow turns an offline-computed pbstream result into the map files you need. The main point of this tutorial, however, is to summarize how to build a map from your own data, online, and then save the result.

Building a 3D map from your own lidar data with cartographer

Step by step, assuming cartographer is already installed.

  1. Write your own launch file: start cartographer, load the configuration files, remap the topics, provide the urdf pose, etc.

    First, a few files need to be prepared: a) demo_my_rslidar_3d.launch: the top-level launch file; b) my_rslidar_3d.launch: the configuration launch file (topic remaps, etc.).

    Of these, my_rslidar_3d.launch is the most important; a sketch of what it typically contains is given after the summary below.

    In other words: specify the urdf (pose transforms), publish the robot state, remap the topics, and load the configuration files. The sensor data cartographer needs here comes in three kinds: odometry (odom), IMU data, and lidar data.
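
    A minimal sketch of my_rslidar_3d.launch, modeled on the backpack_3d.launch example shipped with cartographer_ros (the urdf/Lua file names and the topic names are assumptions and must match your robot):

    <launch>
      <!-- urdf with the static transforms between base_link, the IMU and the rslidar -->
      <param name="robot_description"
        textfile="$(find cartographer_ros)/urdf/my_rslidar_3d.urdf" />

      <!-- publish the transforms defined in the urdf -->
      <node name="robot_state_publisher" pkg="robot_state_publisher"
        type="robot_state_publisher" />

      <!-- the SLAM node itself, loading the Lua configuration -->
      <node name="cartographer_node" pkg="cartographer_ros"
          type="cartographer_node" args="
              -configuration_directory $(find cartographer_ros)/configuration_files
              -configuration_basename my_rslidar_3d.lua"
          output="screen">
        <!-- remap cartographer's inputs to the topics your sensors actually publish -->
        <remap from="points2" to="/rslidar_points" />
        <remap from="imu" to="/imu/data" />
        <remap from="odom" to="/odom" />
      </node>

      <!-- optional: occupancy grid for visualization in rviz -->
      <node name="cartographer_occupancy_grid_node" pkg="cartographer_ros"
          type="cartographer_occupancy_grid_node" args="-resolution 0.05" />
    </launch>

    The demo_my_rslidar_3d.launch wrapper then typically just includes this file plus an rviz node (and, for recorded data, a rosbag play node), the way demo_backpack_3d.launch does.

    The my_rslidar_3d.lua configuration referenced above is also not shown here; a sketch based on the backpack_3d.lua example from cartographer_ros (frame names are assumptions, except laserbase_link, which reappears in the assets-writer config later; if your cartographer_ros version expects a different option set, copy its backpack_3d.lua and adapt that instead):

    -- my_rslidar_3d.lua (sketch)
    include "map_builder.lua"
    include "trajectory_builder.lua"

    options = {
      map_builder = MAP_BUILDER,
      trajectory_builder = TRAJECTORY_BUILDER,
      map_frame = "map",
      tracking_frame = "laserbase_link",   -- frame cartographer tracks (often the IMU frame)
      published_frame = "base_link",
      odom_frame = "odom",
      provide_odom_frame = true,
      publish_frame_projected_to_2d = false,
      use_odometry = true,                 -- the robot publishes wheel odometry
      use_nav_sat = false,
      use_landmarks = false,
      num_laser_scans = 0,
      num_multi_echo_laser_scans = 0,
      num_subdivisions_per_laser_scan = 1,
      num_point_clouds = 1,                -- one 3D lidar, subscribed on the points2 topic
      lookup_transform_timeout_sec = 0.2,
      submap_publish_period_sec = 0.3,
      pose_publish_period_sec = 5e-3,
      trajectory_publish_period_sec = 30e-3,
      rangefinder_sampling_ratio = 1.,
      odometry_sampling_ratio = 1.,
      fixed_frame_pose_sampling_ratio = 1.,
      imu_sampling_ratio = 1.,
      landmarks_sampling_ratio = 1.,
    }

    MAP_BUILDER.use_trajectory_builder_3d = true
    MAP_BUILDER.num_background_threads = 7
    TRAJECTORY_BUILDER_3D.num_accumulated_range_data = 1  -- one full sweep per PointCloud2 message

    return options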

  2. Launch cartographer and start mapping

    This step is simple: just launch the demo launch file from above.
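
    For example (assuming the launch and configuration files above are installed where roslaunch and cartographer can find them, and that the sensor data is replayed from the bag used later in this post):

    roslaunch cartographer_ros demo_my_rslidar_3d.launch
    # in a second terminal, replay the recorded data (use_sim_time is usually set by the demo launch)
    rosbag play --clock 2018-08-11-13-20-34.bag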

  3. Save the mapping state

    When mapping online, the state is not saved to disk automatically; you have to call services to save it. After the rosbag finishes playing, call a service to write the data locally.

    First run rosrun rqt_service_caller rqt_service_caller and call the /finish_trajectory service to finish mapping.

    Then call /write_state to write the state out. Once it succeeds, test_3d.pbstream can be found in the ~/.ros directory (a rather well-hidden place).

    Note that both calls above are made from the same rqt window, so this can be done visually.
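
    If you prefer the command line to rqt, the same two calls look roughly like this (service signatures as in cartographer_ros release-1.0; newer versions add an include_unfinished_submaps field to /write_state):

    # finish trajectory 0, then serialize the pose graph;
    # a relative filename ends up in the node's working directory, which is ~/.ros by default
    rosservice call /finish_trajectory 0
    rosservice call /write_state "{filename: 'test_3d.pbstream'}"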

    Once you have the pbstream file, it can be further exported as a 3D point cloud in PLY format.

    At this point cartographer can be shut down, since we already have the pbstream file; but the pbstream still has to be converted into a 3D PLY point cloud file:

    roslaunch cartographer_ros assets_writer_my_rslidar_3d.launch bag_filenames:=`pwd`/../rosbags/2018-08-11-13-20-34.bag pose_graph_filename:=`pwd`/../rosbags/test_3d.pbstream
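
    The assets_writer_my_rslidar_3d.launch file used here is not shown in this post; a sketch modeled on the assets_writer_backpack_3d.launch shipped with cartographer_ros (the urdf and Lua file names are assumptions) looks roughly like this:

    <launch>
      <arg name="bag_filenames"/>
      <arg name="pose_graph_filename"/>

      <node name="cartographer_assets_writer" pkg="cartographer_ros" required="true"
          type="cartographer_assets_writer" args="
              -configuration_directory $(find cartographer_ros)/configuration_files
              -configuration_basename assets_writer_my_rslidar_3d.lua
              -urdf_filename $(find cartographer_ros)/urdf/my_rslidar_3d.urdf
              -bag_filenames $(arg bag_filenames)
              -pose_graph_filename $(arg pose_graph_filename)"
          output="screen">
      </node>
    </launch>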
    

Visualizing the saved point cloud

Since the final output is in PLY format, the next step is to convert it to PCD and visualize it with PCL to see what the final point cloud map looks like. First convert the PLY to PCD:

pcl_ply2pcd 2018-08-11-13-20-34.bag_points.ply test_3d.pcd

Then the following code makes it easy to visualize the PCD:

#include <iostream>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
// #include <pcl/visualization/pcl_visualizer.h>
#include <pcl/visualization/cloud_viewer.h>

int
main (int argc, char** argv)
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);

  if (pcl::io::loadPCDFile<pcl::PointXYZ> ("../ct_lx_rmp.pcd", *cloud) == -1)  // load the file
  {
    PCL_ERROR ("Couldn't read file ../ct_lx_rmp.pcd \n");
    return (-1);
  }
  std::cout << "Loaded "
            << cloud->width * cloud->height
            << " data points from ../ct_lx_rmp.pcd with the following fields: "
            << std::endl;
  for (size_t i = 0; i < cloud->points.size (); ++i)
    std::cout << "    " << cloud->points[i].x
              << " "    << cloud->points[i].y
              << " "    << cloud->points[i].z << std::endl;


  // visualize it
  pcl::visualization::CloudViewer viewer ("ff");
  viewer.showCloud (cloud);
  while (!viewer.wasStopped ())
  {
  }
  return (0);
}

CMakeLists.txt:

cmake_minimum_required(VERSION 2.8 FATAL_ERROR)

project(pcd_read)

find_package(PCL REQUIRED)

include_directories(${PCL_INCLUDE_DIRS})
link_directories(${PCL_LIBRARY_DIRS})
add_definitions(${PCL_DEFINITIONS})

add_executable (pcd_read pcd_read.cpp)
target_link_libraries (pcd_read ${PCL_LIBRARIES})
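
A typical out-of-source build and run (assuming the viewer source above is saved as pcd_read.cpp next to this CMakeLists.txt):

mkdir build && cd build
cmake ..
make
./pcd_read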

Of course, if the ply2pcd tool is not available, you can compile ply2pcd.cpp yourself:

#include <pcl/io/pcd_io.h>
#include <pcl/io/ply_io.h>
#include <pcl/console/print.h>
#include <pcl/console/parse.h>
#include <pcl/console/time.h>

using namespace pcl;
using namespace pcl::io;
using namespace pcl::console;

void
printHelp (int, char **argv)
{
  print_error ("Syntax is: %s [-format 0|1] input.ply output.pcd\n", argv[0]);
}

bool
loadCloud (const std::string &filename, pcl::PCLPointCloud2 &cloud)
{
  TicToc tt;
  print_highlight ("Loading "); print_value ("%s ", filename.c_str ());

  pcl::PLYReader reader;
  tt.tic ();
  if (reader.read (filename, cloud) < 0)
    return (false);
  print_info ("[done, "); print_value ("%g", tt.toc ()); print_info (" ms : "); print_value ("%d", cloud.width * cloud.height); print_info (" points]\n");
  print_info ("Available dimensions: "); print_value ("%s\n", pcl::getFieldsList (cloud).c_str ());

  return (true);
}

void
saveCloud (const std::string &filename, const pcl::PCLPointCloud2 &cloud, bool format)
{
  TicToc tt;
  tt.tic ();

  print_highlight ("Saving "); print_value ("%s ", filename.c_str ());
  
  pcl::PCDWriter writer;
  writer.write (filename, cloud, Eigen::Vector4f::Zero (), Eigen::Quaternionf::Identity (), format);
  
  print_info ("[done, "); print_value ("%g", tt.toc ()); print_info (" ms : "); print_value ("%d", cloud.width * cloud.height); print_info (" points]\n");
}

/* ---[ */
int
main (int argc, char** argv)
{
  print_info ("Convert a PLY file to PCD format. For more information, use: %s -h\n", argv[0]);

  if (argc < 3)
  {
    printHelp (argc, argv);
    return (-1);
  }

  // Parse the command line arguments for .pcd and .ply files
  std::vector<int> pcd_file_indices = parse_file_extension_argument (argc, argv, ".pcd");
  std::vector<int> ply_file_indices = parse_file_extension_argument (argc, argv, ".ply");
  if (pcd_file_indices.size () != 1 || ply_file_indices.size () != 1)
  {
    print_error ("Need one input PLY file and one output PCD file.\n");
    return (-1);
  }

  // Command line parsing
  bool format = true;
  parse_argument (argc, argv, "-format", format);
  print_info ("PCD output format: "); print_value ("%s\n", (format ? "binary" : "ascii"));

  // Load the first file
  pcl::PCLPointCloud2 cloud;
  if (!loadCloud (argv[ply_file_indices[0]], cloud)) 
    return (-1);

  // Convert to PCD and save
  saveCloud (argv[pcd_file_indices[0]], cloud, format);

  return (0);
}
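
One simple way to build it is to add a second target to the CMakeLists.txt shown earlier:

add_executable (ply2pcd ply2pcd.cpp)
target_link_libraries (ply2pcd ${PCL_LIBRARIES})

Then run it the same way as pcl_ply2pcd above, e.g. ./ply2pcd 2018-08-11-13-20-34.bag_points.ply test_3d.pcd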

This gives a very intuitive view of the resulting point cloud.

Key points when mapping with your own point clouds

Looking closely, a few important factors stand out: you need a urdf; you need to know the transforms of the lidar and IMU relative to base_link; and when converting the saved state to PLY, you have to give the assets writer a configuration file describing your robot. What does that configuration file look like?

Let's look at the .lua configuration file the assets writer needs:

-- Copyright 2016 The Cartographer Authors
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
-- You may obtain a copy of the License at
--
--      http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.

VOXEL_SIZE = 5e-2

include "transform.lua"

options = {
  tracking_frame = "laserbase_link",
  pipeline = {
    {
      action = "min_max_range_filter",
      min_range = 1.,
      max_range = 20.,
    },
    {
      action = "dump_num_points",
    },
    {
      action = "fixed_ratio_sampler",
      sampling_ratio = 0.01,
    },

    {
      action = "write_ply",
      filename = "points.ply"
    },

    -- Gray X-Rays. These only use geometry to color pixels.
    {
      action = "write_xray_image",
      voxel_size = VOXEL_SIZE,
      filename = "xray_yz_all",
      transform = YZ_TRANSFORM,
    },
    {
      action = "write_xray_image",
      voxel_size = VOXEL_SIZE,
      filename = "xray_xy_all",
      transform = XY_TRANSFORM,
    },
    {
      action = "write_xray_image",
      voxel_size = VOXEL_SIZE,
      filename = "xray_xz_all",
      transform = XZ_TRANSFORM,
    },

    -- Now we recolor our points by frame and write another batch of X-Rays. It
    -- is visible in them what was seen by the horizontal and the vertical
    -- laser.
    {
      action = "color_points",
      frame_id = "rslidar",
      color = { 255., 0., 0. },
    },

    {
      action = "write_xray_image",
      voxel_size = VOXEL_SIZE,
      filename = "xray_yz_all_color",
      transform = YZ_TRANSFORM,
    },
    {
      action = "write_xray_image",
      voxel_size = VOXEL_SIZE,
      filename = "xray_xy_all_color",
      transform = XY_TRANSFORM,
    },
    {
      action = "write_xray_image",
      voxel_size = VOXEL_SIZE,
      filename = "xray_xz_all_color",
      transform = XZ_TRANSFORM,
    },
  }
}

return options

In fact there are only a couple of things to change here, mainly the frame ids (tracking_frame and the frame_id used for coloring), and that's about it. If you use an rslidar, i.e. a RoboSense lidar, this file can basically be used as-is.
