ROS: Converting Depth Images to Laser Scans and Point Clouds

The depth-image-to-laser-scan conversion is implemented in the ROS package depthimage_to_laserscan; this article walks through its computation. Converting a point cloud to a laser scan follows the same idea: project the points that fall within a given height band onto the scan plane.

 

1. Depth image to laser scan

The principle is shown in Figure (1). For any given depth-image point m(u, v, z), the conversion to a laser reading proceeds in three steps:

a. Transform the depth-image point m(u, v, z) into the point M(x, y, z) in the depth-camera coordinate frame.

b. Compute the angle ∠AOC between the rays AO and CO:

θ = atan(x / z)

c. Map ∠AOC into the corresponding laser bin. Given the scan's angular range [α, β], subdivided into N beams, the scan can be stored in an array laser[N]. The index n at which point M lands in laser is:

n = N(θ − α) / (β − α)

The value of laser[n] is the distance r from the camera's optical center O to C, the projection of M onto the scanning (x-z) plane:

laser[n] = r = |OC| = √(x² + z²)

[Figure (1): geometry of the depth-image-to-laser-scan conversion]
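Before turning to the package source, here is a minimal single-pixel sketch of steps a-c. The names fx and cx are hypothetical intrinsics (focal length and principal point in pixels), not taken from the package:

#include <cmath>

// Convert one depth pixel (u, depth) into a scan bin index and range.
// fx, cx: hypothetical camera intrinsics; alpha, beta, N: scan parameters as in the text.
int pixel_to_laser(int u, double depth, double fx, double cx,
                   double alpha, double beta, int N, double& range)
{
  double x = (u - cx) * depth / fx;                    // step a: camera-frame x (and z = depth)
  double theta = std::atan2(x, depth);                 // step b: theta = atan(x / z)
  int n = (int)(N * (theta - alpha) / (beta - alpha)); // step c: bin index
  range = std::sqrt(x * x + depth * depth);            // laser[n] = sqrt(x^2 + z^2)
  return n;
}

The actual implementation in depthimage_to_laserscan is: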



  template<typename T>
  void convert(const sensor_msgs::ImageConstPtr& depth_msg, const image_geometry::PinholeCameraModel& cam_model,
               const sensor_msgs::LaserScanPtr& scan_msg, const int& scan_height) const
  {
    // Use correct principal point from calibration
    float center_x = cam_model.cx();
    float center_y = cam_model.cy();

    // Combine unit conversion (if necessary) with scaling by focal length for computing (X,Y)
    double unit_scaling = depthimage_to_laserscan::DepthTraits<T>::toMeters( T(1) );
    float constant_x = unit_scaling / cam_model.fx();
    float constant_y = unit_scaling / cam_model.fy();

    const T* depth_row = reinterpret_cast<const T*>(&depth_msg->data[0]);
    int row_step = depth_msg->step / sizeof(T);

    int offset = (int)(cam_model.cy() - scan_height / 2);
    depth_row += offset * row_step; // Offset to center of image

    for (int v = offset; v < offset + scan_height; v++, depth_row += row_step)
    {
      for (int u = 0; u < (int)depth_msg->width; u++) // Loop over each pixel in row
      {
        T depth = depth_row[u];

        double r = depth; // Assign to pass through NaNs and Infs
        double th = -atan2((double)(u - center_x) * constant_x, unit_scaling); // Atan2(x, z), but depth divides out
        int index = (th - scan_msg->angle_min) / scan_msg->angle_increment;

        if (depthimage_to_laserscan::DepthTraits<T>::valid(depth)) // Not NaN or Inf
        {
          // Calculate in XYZ
          double x = (u - center_x) * depth * constant_x;
          double z = depthimage_to_laserscan::DepthTraits<T>::toMeters(depth);

          // Calculate actual distance
          r = sqrt(pow(x, 2.0) + pow(z, 2.0));
        }

        // Determine if this point should be used.
        if (use_point(r, scan_msg->ranges[index], scan_msg->range_min, scan_msg->range_max))
        {
          scan_msg->ranges[index] = r;
        }
      }
    }
  }

  image_geometry::PinholeCameraModel cam_model_; ///< image_geometry helper class for managing sensor_msgs/CameraInfo messages.

  float scan_time_; ///< Stores the time between scans.
  float range_min_; ///< Stores the current minimum range to use.
  float range_max_; ///< Stores the current maximum range to use.
  int scan_height_; ///< Number of pixel rows to use when producing a laserscan from an area.
  std::string output_frame_id_; ///< Output frame_id for each laserscan. This is likely NOT the camera's frame_id.
};
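Two details in convert() are worth noting. First, th is computed as -atan2((u - center_x) * constant_x, unit_scaling): since x = (u - center_x) · depth · constant_x and z = depth · unit_scaling, the depth cancels out of the ratio x/z, so the bin index can be computed even for NaN/Inf depth readings; the leading minus sign reconciles the image's x direction with the scan's angle convention. Second, because scan_height image rows are collapsed into a single scan line, use_point decides whether a candidate range replaces the value already stored in its bin. A sketch of that selection rule (paraphrased here, not the verbatim package source):

#include <cmath>

// Keep a new reading only if it is valid, within [range_min, range_max],
// and closer than what is already stored in the bin.
bool use_point(float new_value, float old_value, float range_min, float range_max)
{
  bool new_finite = std::isfinite(new_value);
  bool old_finite = std::isfinite(old_value);

  // Both readings invalid: prefer an Inf over a NaN (it carries more information).
  if (!new_finite && !old_finite)
    return !std::isnan(new_value);

  // Discard new readings that are invalid or outside the scanner's limits.
  if (!new_finite || new_value < range_min || new_value > range_max)
    return false;

  // Keep the closer of two valid readings.
  return !old_finite || new_value < old_value;
}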



2. Depth image to point cloud

First, it helps to understand how a world coordinate maps onto the image. Consider the projection of a world point M(Xw, Yw, Zw) to an image point m(u, v), illustrated below:

[Figure (2): mapping of a world point M(Xw, Yw, Zw) to an image point m(u, v)]

For the detailed derivation, refer to any camera-calibration reference; it is not repeated here. Formally:

z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} =
\begin{bmatrix} f/dx & 0 & u_0 \\ 0 & f/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} R & T \end{bmatrix}
\begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}

where (u, v) is a point in the image coordinate system; (u_0, v_0) is the principal point (image center); (x_w, y_w, z_w) is a 3D point in the world frame; z_c is the z value in the camera frame, i.e. the distance from the target to the camera; and R, T are the 3×3 rotation matrix and 3×1 translation vector of the extrinsic matrix.

Setting the extrinsics: the world origin is taken to coincide with the camera origin, with no rotation or translation, so

R = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad
T = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}.

Note that because the camera and world frames coincide, the same object has the same depth in both, i.e. z_c = z_w. The formula therefore simplifies to:

z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} =
\begin{bmatrix} f/dx & 0 & u_0 \\ 0 & f/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} x_w \\ y_w \\ z_c \\ 1 \end{bmatrix}

From this transformation, the world point [x_w, y_w, z_w]^T corresponding to an image point [u, v]^T is recovered as:

\begin{aligned}
x_w &= z_c (u - u_0) \, dx / f \\
y_w &= z_c (v - v_0) \, dy / f \\
z_w &= z_c
\end{aligned}
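A direct transcription of these formulas into code might look like the following sketch, where fx = f/dx, fy = f/dy, u0, v0 are hypothetical intrinsics (the package reads them from the camera model instead):

// Back-project an image pixel (u, v) with camera-frame depth z_c into world
// coordinates, assuming the world and camera frames coincide.
struct Point3 { double x, y, z; };

Point3 back_project(double u, double v, double z_c,
                    double fx, double fy, double u0, double v0)
{
  return { z_c * (u - u0) / fx,   // x_w = z_c (u - u0) dx / f
           z_c * (v - v0) / fy,   // y_w = z_c (v - v0) dy / f
           z_c };                 // z_w = z_c
}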


For the implementation, see https://github.com/ros-perception/image_pipeline/blob/indigo/depth_image_proc/src/nodelets/point_cloud_xyzrgb.cpp:

template<typename T>
void PointCloudXyzrgbNodelet::convert(const sensor_msgs::ImageConstPtr& depth_msg,
                                      const sensor_msgs::ImageConstPtr& rgb_msg,
                                      const PointCloud::Ptr& cloud_msg,
                                      int red_offset, int green_offset, int blue_offset, int color_step)
{
  // Use correct principal point from calibration
  float center_x = model_.cx();
  float center_y = model_.cy();

  // Combine unit conversion (if necessary) with scaling by focal length for computing (X,Y)
  double unit_scaling = DepthTraits<T>::toMeters( T(1) );
  float constant_x = unit_scaling / model_.fx();
  float constant_y = unit_scaling / model_.fy();
  float bad_point = std::numeric_limits<float>::quiet_NaN();
  
  const T* depth_row = reinterpret_cast<const T*>(&depth_msg->data[0]);
  int row_step = depth_msg->step / sizeof(T);
  const uint8_t* rgb = &rgb_msg->data[0];
  int rgb_skip = rgb_msg->step - rgb_msg->width * color_step;

  sensor_msgs::PointCloud2Iterator<float> iter_x(*cloud_msg, "x");
  sensor_msgs::PointCloud2Iterator<float> iter_y(*cloud_msg, "y");
  sensor_msgs::PointCloud2Iterator<float> iter_z(*cloud_msg, "z");
  sensor_msgs::PointCloud2Iterator<uint8_t> iter_r(*cloud_msg, "r");
  sensor_msgs::PointCloud2Iterator<uint8_t> iter_g(*cloud_msg, "g");
  sensor_msgs::PointCloud2Iterator<uint8_t> iter_b(*cloud_msg, "b");
  sensor_msgs::PointCloud2Iterator<uint8_t> iter_a(*cloud_msg, "a");

  for (int v = 0; v < int(cloud_msg->height); ++v, depth_row += row_step, rgb += rgb_skip)
  {
    for (int u = 0; u < int(cloud_msg->width); ++u, rgb += color_step, ++iter_x, ++iter_y, ++iter_z, ++iter_a, ++iter_r, ++iter_g, ++iter_b)
    {
      T depth = depth_row[u];

      // Check for invalid measurements
      if (!DepthTraits<T>::valid(depth))
      {
        *iter_x = *iter_y = *iter_z = bad_point;
      }
      else
      {
        // Fill in XYZ
        *iter_x = (u - center_x) * depth * constant_x;
        *iter_y = (v - center_y) * depth * constant_y;
        *iter_z = DepthTraits<T>::toMeters(depth);
      }

      // Fill in color
      *iter_a = 255;
      *iter_r = rgb[red_offset];
      *iter_g = rgb[green_offset];
      *iter_b = rgb[blue_offset];
    }
  }
}

} // namespace depth_image_proc
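The red_offset, green_offset, blue_offset and color_step arguments describe the byte layout of the RGB image. The caller derives them from rgb_msg->encoding; a sketch of that dispatch (paraphrased from the nodelet, see point_cloud_xyzrgb.cpp for the exact code):

// Pick the color layout according to the image encoding.
int red_offset, green_offset, blue_offset, color_step;
if (rgb_msg->encoding == sensor_msgs::image_encodings::RGB8) {
  red_offset = 0; green_offset = 1; blue_offset = 2; color_step = 3;
} else if (rgb_msg->encoding == sensor_msgs::image_encodings::BGR8) {
  red_offset = 2; green_offset = 1; blue_offset = 0; color_step = 3;
} else if (rgb_msg->encoding == sensor_msgs::image_encodings::MONO8) {
  // Grayscale: the single channel is reused for all three colors.
  red_offset = 0; green_offset = 0; blue_offset = 0; color_step = 1;
}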


