Obstacle-Layer Avoidance with a Stereo/Depth Camera

1. Getting the depth camera point cloud

The point cloud is normally published by the camera's ROS driver. Taking the RealSense D435i as an example, change the enable_pointcloud argument in rs_rgbd.launch or rs_camera.launch from false to true:

<arg name="enable_pointcloud" default="true"/>

Point cloud topic: /camera/depth/color/points
Refresh rate: around 10 Hz

2. Point cloud filtering

The raw camera point cloud fluctuates quite a bit, and for obstacle avoidance we usually only care about a limited distance and angular range, so the cloud needs to be filtered; a voxel filter is used here. We also filter along the y axis, i.e., keep only a certain height band, to crop out points that hit the ground. Voxel filter: realsense_cloud_voxel_filtering.cpp

#include <ros/ros.h>
#include <pcl_ros/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>
#include <stdio.h>

typedef pcl::PointCloud<pcl::PointXYZ> PointCloud;
typedef pcl::PointXYZ PointXYZ;

class FilterAndPublish
{
    public:
        FilterAndPublish()
        {
            printf("Made object\n");
            pub = nh.advertise<PointCloud> ("/points_filtered", 1);
            sub = nh.subscribe<PointCloud>("/camera/depth/color/points", 1, &FilterAndPublish::callback, this);
            this->thresh = 15; // This is the minimum number of points that have to occupy a voxel in order for it to survive the downsample.
        }

        void callback(const PointCloud::ConstPtr& msg)
        {
            PointCloud::Ptr cloud (new PointCloud);
            PointCloud::Ptr cloud_filtered (new PointCloud);
            *cloud = *msg;

            // What to do here: 
            // 1. Take cloud and put it in a voxel grid while restricting the bounding box
            // 2. Go through the voxels and remove all points in a voxel that has less than this.thresh points
            // 3. Publish resulting cloud

            pcl::VoxelGrid<PointXYZ> vox;
            vox.setInputCloud(cloud);
            // The leaf size is essentially the voxel edge length. Note that this value affects what a good threshold value would be.
            vox.setLeafSize(0.05f, 0.05f, 0.05f);
            // Crop along y (height in the camera's optical frame) so ground hits and far-away garbage don't even have to be considered.
            vox.setFilterFieldName("y");
            vox.setFilterLimits(-1.0, 1.0);
            // The line below is perhaps the most important, as it reduces ghost points.
            vox.setMinimumPointsNumberPerVoxel(this->thresh);
            vox.filter(*cloud_filtered);
            
            pub.publish (cloud_filtered);
        }

    private:
        ros::NodeHandle nh;
        ros::Publisher pub;
        ros::Subscriber sub;
        int thresh;

};


int main(int argc, char** argv)
{
    ros::init(argc, argv, "pcl_filtering");
    FilterAndPublish f;
    ros::spin();
    return 0;
}

The node subscribes to /camera/depth/color/points and publishes /points_filtered.

3. Point cloud to laser scan

Convert the cropped point cloud into laser scan data with the pointcloud_to_laserscan package (melodic branch: pointcloud_to_laserscan_melodic). A few parameters in its launch file need attention. sample_node.launch:



<?xml version="1.0"?>
<launch>
    <node pkg="pointcloud_to_laserscan" type="pointcloud_to_laserscan_node" name="pointcloud_to_laserscan">
        <remap from="cloud_in" to="/points_filtered"/>
        <remap from="scan" to="/camera/scan"/>
        <rosparam>
            target_frame: camera_link # leave empty to output the scan in the cloud's own frame
            transform_tolerance: 0.01
            min_height: -0.15
            max_height: 0.7

            angle_min: -1.5708 # -M_PI/2
            angle_max: 1.5708 # M_PI/2
            angle_increment: 0.0087 # M_PI/360.0
            scan_time: 0.3333
            range_min: 0.25
            range_max: 3.0
            use_inf: true
            inf_epsilon: 1.0

            concurrency_level: 1
        </rosparam>
    </node>
</launch>

The node subscribes to /points_filtered and publishes /camera/scan.

4. Plugin configuration

(1) costmap_common_params.yaml

camera_layer:
  enabled: true
  max_obstacle_height: 2.0
  mark_threshold: 0
  combination_method: 1
  inflation_radius: 0.5 #1.5
  track_unknown_space: true 
  obstacle_range: 2.5 # obstacles are only marked within this range
  raytrace_range: 3.0 # obstacles no longer observed within this range are cleared
  observation_sources: laser_scan_sensor
  laser_scan_sensor:
    data_type: LaserScan
    topic: camera/scan
    marking: true
    clearing: true
    inf_is_valid: true

(2) global_costmap_params.yaml

global_costmap:
  global_frame: map
  robot_base_frame: base_link
  update_frequency: 1.0
  publish_frequency: 1.0
  static_map: true
  transform_tolerance: 2.0 # 0.5
  track_unknown_space: true
  resolution: 0.05
  plugins:
    - { name: static_layer, type: "costmap_2d::StaticLayer" }
    - { name: obstacle_layer, type: "costmap_2d::VoxelLayer" }
    - { name: camera_layer, type: "costmap_2d::VoxelLayer" }
    # - { name: range_sensor_layer, type: "range_sensor_layer::RangeSensorLayer" }
    # - { name: vectormap_layer, type: "vectormap_layer::VectorMapLayer" }
    - { name: global_inflation_layer, type: "costmap_2d::InflationLayer" }
    # - { name: polygon_obstacle_layer, type: "polygon_obstacle_layer_namespace::PolygonObstacleLayer" }

(3) local_costmap_params.yaml

local_costmap:
  global_frame: odom
  robot_base_frame: base_link
  update_frequency: 7.0
  publish_frequency: 7.0
  static_map: false # the local costmap should not use the static map, since that would hinder local obstacle avoidance, so static_map is set to false here
  rolling_window: true
  track_unknown_space: true
  width: 6.0
  height: 6.0
  resolution: 0.05
  transform_tolerance: 0.5
  plugins:
    - { name: static_layer, type: "costmap_2d::StaticLayer" }
    - { name: obstacle_layer, type: "costmap_2d::VoxelLayer" }
    - { name: camera_layer, type: "costmap_2d::VoxelLayer" }

    # - { name: range_sensor_layer, type: "range_sensor_layer::RangeSensorLayer" }
    # - { name: vectormap_layer, type: "vectormap_layer::VectorMapLayer" }
    - { name: local_inflation_layer, type: "costmap_2d::InflationLayer" }
    # - { name: polygon_obstacle_layer, type: "polygon_obstacle_layer_namespace::PolygonObstacleLayer" }

5. Verification

rospack plugins --attrib=plugin costmap_2d

If camera_layer can be detected, the setup succeeded. You can also inspect the result in rviz.
