Joint Extrinsic Calibration of a LiDAR and a Monocular Camera Based on MATLAB

1. Background

        Our team is currently working with an OEM on an L4 robotaxi project, using an 80-beam LiDAR and a monocular camera. To exploit the respective strengths of the LiDAR and the camera, we designed a fusion algorithm, which requires the joint LiDAR-to-camera extrinsic parameters.

        We initially used Autoware's CalibrationToolKit for the joint extrinsic calibration, but the results were never satisfactory. Analysis traced the main cause to the step of selecting the point-cloud data that determines the checkerboard pose: the points inside the red circle were not strictly coplanar. I later came across a survey of joint extrinsic calibration tools at https://zhuanlan.zhihu.com/p/404762012, which covers MATLAB's module for this task, and restarted the calibration work on that basis. Other references: https://blog.csdn.net/Wolf_Young/article/details/115975157 and https://blog.csdn.net/qq_27339501/article/details/110224436

2. Using MATLAB's Lidar and Camera Calibration Module

        The MATLAB calibration tool shares the same underlying principle as CalibrationToolKit: first determine the pose of the checkerboard in the pixel coordinate frame, then determine its pose in the 3D point-cloud frame, and finally compute the joint LiDAR-to-camera extrinsics from the camera intrinsics (e.g., via OpenCV's PnP solver).
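        Concretely, the quantity being estimated is the rigid transform (R, t) that maps a point P_L in the LiDAR frame into the camera frame; together with the intrinsic matrix K it projects to a pixel (u, v). This is standard pinhole-camera math, sketched here for reference:

\[
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
  = K \left( R\,P_L + t \right),
\qquad
K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
\]

        With the checkerboard corners located in both the image and the point cloud, (R, t) can be recovered by a PnP-style solver such as OpenCV's solvePnP.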

        By comparison, the MATLAB tool is much more of a turnkey solution: once the .pcd point clouds and images have been collected, calling the MATLAB example code (with minor adjustments) computes the extrinsics automatically, whereas CalibrationToolKit requires a series of fairly tedious manual operations such as adjusting the point cloud, and in my actual development the results it produced were very poor.

2.1 Calibration Procedure and Code

        Preparation and caveats: you need a checkerboard without a white border, and MATLAB recommends that the square counts along the two sides be one odd and one even. Do not stand too far from the board when capturing, ideally within 5-10 m; otherwise the camera intrinsic calibration becomes inaccurate and the checkerboard edges cannot be detected automatically later on. Capture the board in several positions to improve accuracy. Both the camera intrinsics and the joint extrinsics produced by the calibration must be transposed before downstream use (i.e., in the fusion algorithm).
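        The transposition caveat comes from MATLAB's row-vector convention ([x y z] * M) versus the column-vector convention (M * [x; y; z]) used by OpenCV. As a minimal sketch, assuming cameraParams is the cameraParameters object produced in Step 3 below:

% MATLAB stores the intrinsic matrix in row-vector convention,
% i.e. as the transpose of the usual 3x3 K used by OpenCV.
K_opencv = cameraParams.IntrinsicMatrix';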

[Figure 1]

        Step 1: Record a .bag file (e.g., with rosbag record), making sure the camera and LiDAR timestamps are consistent;

        Step 2: Convert the .bag contents into image files and .pcd (point cloud) files with the following code:

#include <iostream>
#include <sstream>
#include <string>
#include <vector>

#include <stdio.h>
#include <termios.h>
#include <unistd.h>

#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/PointCloud2.h>
#include <cv_bridge/cv_bridge.h>

#include <opencv2/opencv.hpp>

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/io/pcd_io.h>
#include <pcl/filters/crop_box.h>
#include <pcl/filters/extract_indices.h>
#include <pcl_conversions/pcl_conversions.h>

using namespace std;

int c;
int count_img=0;
int count_pcd=0;


ros::Subscriber sub_img;
ros::Subscriber sub_velo;
ros::Publisher pub_img;
ros::Publisher pub_velo;


int getch()
{
  static struct termios oldt, newt;
  tcgetattr( STDIN_FILENO, &oldt);           // save old settings
  newt = oldt;
  newt.c_lflag &= ~(ICANON);                 // disable buffering
  tcsetattr( STDIN_FILENO, TCSANOW, &newt);  // apply new settings
  int c = getchar();  // read one character without waiting for Enter
  tcsetattr( STDIN_FILENO, TCSANOW, &oldt);  // restore old settings
  return c;
}

void img_cb(const sensor_msgs::Image::ConstPtr& image)
{
	cout << "Image call back ..." << endl;
	stringstream ss;
	ss << "/media/enjoy/新加卷/PCD_Image_get/image/" << count_img << ".jpg" << endl;
	
	cv_bridge::CvImagePtr cv_ptr;
	try
	{
		cv_ptr = cv_bridge::toCvCopy(image,"bgr8");
	}
	catch(cv_bridge::Exception& e)
	{
		ROS_ERROR("cv_bridge exception: %s", e.what());
		return; // do not touch cv_ptr if the conversion failed
	}
	
	if (c == 'i') // flag to take an input sample
	{
		pub_img.publish(image);
		cout << "I got image  !" << endl;
	}
	
	else if (c == 'o')
	{
		ROS_INFO("Saved this Image sample!");
		cv::imwrite(ss.str(),cv_ptr->image);
		count_img++;	
	}
}

void velo_cb(const sensor_msgs::PointCloud2::ConstPtr& in_cloud)
{
	pcl::PointCloud<pcl::PointXYZI>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZI>);
	pcl::fromROSMsg(*in_cloud, *cloud);
	
	cout << "Velodyne call back ..." << endl;
	
	stringstream ss;
	ss << "/media/enjoy/新加卷/PCD_Image_get/pcd/" << count_pcd << ".pcd";
	if (c == 'i') // flag to take an input sample
	{
		pub_velo.publish(in_cloud);
		cout << "I got velodyne  !" << endl;
	}
	else if (c == 'o')
	{
		ROS_INFO("Saved this velodyne sample!");
		pcl::PCDWriter writer;

		// Crop a box in front of the vehicle that contains the
		// checkerboard, to cut down the number of points.
		std::vector<int> indices;
		pcl::CropBox<pcl::PointXYZI> roof(true);
		roof.setMin(Eigen::Vector4f(0, -3, -6, 1));
		roof.setMax(Eigen::Vector4f(15, 3, 6, 1));
		roof.setInputCloud(cloud);
		roof.filter(indices);

		// Collect the indices of the points inside the box as inliers.
		pcl::PointIndices::Ptr inliers{new pcl::PointIndices};
		for (int point : indices)
		{
			inliers->indices.push_back(point);
		}

		// Extract the inlier points; setNegative(false) keeps the points
		// inside the box, true would keep everything outside it.
		pcl::ExtractIndices<pcl::PointXYZI> extract;
		extract.setInputCloud(cloud);
		extract.setIndices(inliers);
		extract.setNegative(false);
		extract.filter(*cloud);

		writer.write(ss.str(), *cloud, false);
		count_pcd++;
	}
}
int main(int argc,char** argv)
{
	ros::init(argc,argv,"capture_node");
	ros::NodeHandle nh("~");
	
	
	sub_img = nh.subscribe("/Image_topic", 1, img_cb);
	sub_velo = nh.subscribe("/middle/rslidar_points", 1, velo_cb);
	pub_img = nh.advertise<sensor_msgs::Image>("/calib/Image", 10);
	pub_velo = nh.advertise<sensor_msgs::PointCloud2>("/calib/pointcloud", 10);
	
	while(ros::ok())
	{
		std::cout << " Now, press an appropriate key ... " << std::endl;
		std::cout << " 'i' to take an input sample" << std::endl;
		std::cout << " 'o' to save an input sample" << std::endl;
		std::cout << " 'e' to end the calibration process" << std::endl;
		//c = getchar();
		c = getch();
		
		if (c == 'e')
		{
			ros::shutdown();
		}
		
		ros::spinOnce();
	}
	return 0;
}

        The code above uses pcl::CropBox to trim the point cloud, keeping only the part of the space in front of the vehicle where the checkerboard stands (which speeds up later computation). Reference for the code: https://blog.csdn.net/ChuiGeDaQiQiu/article/details/117648182

        Step 3: Calibrate the camera intrinsics with MATLAB's Camera Calibrator, using the images obtained in Step 2 as input, and save the resulting parameters as a .mat file.
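        If you prefer a script to the Camera Calibrator app, a minimal sketch of the intrinsic calibration is below; the paths and square size mirror the values used in Step 4, and the variable name cameraParams matches what the Step 4 script loads:

% Scripted intrinsic calibration (alternative to the Camera Calibrator app).
% Assumes the checkerboard is detected in every image.
imds = imageDatastore('D:\PCD_Image_get\image');
[imagePoints, boardSize] = detectCheckerboardPoints(imds.Files);
squareSize = 108; % checkerboard square size in millimeters
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
I = imread(imds.Files{1});
cameraParams = estimateCameraParameters(imagePoints, worldPoints, ...
    'ImageSize', [size(I, 1), size(I, 2)]);
save(fullfile('D:\PCD_Image_get\image', 'matlab1.mat'), 'cameraParams');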

[Figure 2]

        Step 4: Run the following MATLAB code on the images, .pcd files, and .mat file (camera intrinsics) from the steps above to obtain the joint extrinsics automatically:

clc;clear all;close all;
imagePath = 'D:\PCD_Image_get\image';
ptCloudPath = 'D:\PCD_Image_get\pcd';
cameraParamsPath = fullfile(imagePath, 'matlab1.mat');

intrinsic = load(cameraParamsPath); % Load camera intrinsics
imds = imageDatastore(imagePath); % Load images using imageDatastore
pcds = fileDatastore(ptCloudPath, 'ReadFcn', @pcread); % Load point cloud files

imageFileNames = imds.Files;
ptCloudFileNames = pcds.Files;

squareSize = 108; % Square size of the checkerboard, in millimeters

% Set random seed to generate reproducible results.
rng('default'); 

[imageCorners3d, checkerboardDimension, dataUsed] = ...
    estimateCheckerboardCorners3d(imageFileNames, intrinsic.cameraParams, squareSize);
imageFileNames = imageFileNames(dataUsed); % Remove image files that are not used

% Display Checkerboard corners
% helperShowImageCorners(imageCorners3d, imageFileNames, intrinsic.cameraParams);

% Extract ROI from the detected image corners
roi = helperComputeROI(imageCorners3d, 5);

% Filter point cloud files corresponding to the detected images
ptCloudFileNames = ptCloudFileNames(dataUsed);
[lidarCheckerboardPlanes, framesUsed, indices] = detectRectangularPlanePoints(ptCloudFileNames,checkerboardDimension, ...
    'DimensionTolerance',0.5,'RemoveGround', true);
% [lidarCheckerboardPlanes, framesUsed, indices] = detectRectangularPlanePoints(ptCloudFileNames, checkerboardDimension);
%helperShowLidarCorners(ptCloudFileNames, indices);
% Remove ptCloud files that are not used
ptCloudFileNames = ptCloudFileNames(framesUsed);
% Remove image files 
imageFileNames = imageFileNames(framesUsed);
% Remove 3D corners from images
imageCorners3d = imageCorners3d(:, :, framesUsed);

helperShowLidarCorners(ptCloudFileNames, indices);

[tform, errors] = estimateLidarCameraTransform(lidarCheckerboardPlanes, ...
    imageCorners3d, 'CameraIntrinsic', intrinsic.cameraParams);

% visualize the lidar and the image data fused together
helperFuseLidarCamera(imageFileNames, ptCloudFileNames, indices, ...
    intrinsic.cameraParams, tform);

% Visualize the errors. They include:
% 1. Translation error: mean difference between the centroid of the
%    checkerboard corners in the lidar and the projected corners in 3-D
%    from the image.
% 2. Rotation error: mean difference between the normals of the
%    checkerboard in the point cloud and the projected corners in 3-D
%    from the image.
% 3. Reprojection error: mean difference between the centroid of the
%    image corners and the projected lidar corners on the image.
helperShowError(errors)

% Results
% After calibration, check for data with high calibration errors and
% re-run the calibration (keep frames whose rotation error is below the mean).
inlierIdx = errors.RotationError < mean(errors.RotationError);
[newTform, newErrors] = estimateLidarCameraTransform(lidarCheckerboardPlanes(inlierIdx), ...
    imageCorners3d(:, :, inlierIdx), 'CameraIntrinsic', intrinsic.cameraParams);
helperShowError(newErrors);
    
% Save the result
% dlmwrite('calib_result.txt',newTform.T)
 

        Source of the above code and the official documentation for MATLAB's joint extrinsic calibration tool:

https://ww2.mathworks.cn/help/lidar/ug/lidar-and-camera-calibration.html

        Important note: calling the board-detection function detectRectangularPlanePoints directly often fails to find the checkerboard in the point cloud and throws various errors. Inspecting MATLAB's definition of this function shows that if 'DimensionTolerance' (the tolerance on the detected board dimensions) is not specified, it defaults to 0.01. Such a tight tolerance frequently prevents the board pose from being detected; to raise the detection success rate, increase this parameter as needed (i.e., allow a larger error in the detected board edges), as done in the code above.

[Figure 3]

[Figure 4]

        The calibration results are shown in the figures above, and the accuracy is quite satisfactory. Save the generated tform data: it holds the rotation matrix (transpose it before use, per the note in 2.1) and the translation vector of the joint extrinsics.
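        As a minimal sketch of that conversion, assuming tform is the rigid3d object returned by estimateLidarCameraTransform (which, like the rest of MATLAB, uses the row-vector convention [x y z 1] * tform.T):

% Transpose to the column-vector convention P_cam = R * P_lidar + t.
R = tform.Rotation';      % 3x3 rotation matrix
t = tform.Translation';   % 3x1 translation vector

        These are the rotation matrix and translation vector that the verification program in Step 5 hard-codes.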

        Step 5: Verify the calibration result with OpenCV using the following code:

#include <iostream>
#include <string>
#include <vector>

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/io/pcd_io.h>
#include <pcl/filters/filter.h>

#include <opencv2/opencv.hpp>
int main(int argc, char** argv)
{
  // read an image and a point cloud
  cv::Mat image_origin = cv::imread("/home/enjoy/points2image/0.jpg");
  pcl::PointCloud<pcl::PointXYZI>::Ptr cloud_origin(new pcl::PointCloud<pcl::PointXYZI>);
  pcl::PointCloud<pcl::PointXYZI>::Ptr cloud_withoutNAN(new pcl::PointCloud<pcl::PointXYZI>);
  pcl::io::loadPCDFile("/home/enjoy/points2image/0.pcd", *cloud_origin);
  std::vector<int> indices;
  pcl::removeNaNFromPointCloud(*cloud_origin, *cloud_withoutNAN, indices);

  // Keep only points in front of the lidar (x > 0); points behind the
  // sensor cannot project into the camera image.
  std::vector<cv::Point3f> pts_3d;
  for (size_t i = 0; i < cloud_withoutNAN->size(); ++i)
  {
    const pcl::PointXYZI& point_3d = cloud_withoutNAN->points[i];
    if (point_3d.x > 0)
    {
      pts_3d.emplace_back(point_3d.x, point_3d.y, point_3d.z);
    }
  }
  // Joint extrinsics from the MATLAB calibration: rotation matrix and
  // translation vector (transposed from tform, see above).
  cv::Mat R_matrix = (cv::Mat_<double>(3, 3) <<
      -0.0248081563169757, -0.999577342547861,   0.0151555813187337,
      -0.0102513534023755, -0.0149050838374972, -0.999836360725702,
       0.999639667647084,  -0.0249594619483961, -0.00987725294170718);
  cv::Mat r_vec;
  cv::Rodrigues(R_matrix, r_vec); // rotation vector for projectPoints

  // Camera intrinsics and distortion coefficients from the MATLAB calibration
  double fx = 2399.33875598524, fy = 2408.73780160170;
  double cx = 1007.26495794494, cy = 560.636410205530;
  double k1 = -0.528079600928251, k2 = 0.243825655125111, k3 = 0;
  double p1 = 0, p2 = 0;
  cv::Mat camera_matrix = (cv::Mat_<double>(3, 3) << fx, 0.0, cx, 0.0, fy, cy, 0.0, 0.0, 1.0);
  cv::Mat distortion_coeff = (cv::Mat_<double>(1, 5) << k1, k2, p1, p2, k3);
  cv::Mat t_vec = (cv::Mat_<double>(3, 1) << -0.0159063387558948, -0.472938103973732, 0.505238210760195);
  // project 3d-points into image view
  std::vector<cv::Point2f> pts_2d;
  cv::projectPoints(pts_3d, r_vec, t_vec, camera_matrix, distortion_coeff, pts_2d);
  cv::Mat image_project = image_origin.clone();
  int image_rows = image_origin.rows;
  int image_cols = image_origin.cols;

  // Paint each projected lidar point red in the image copy.
  for (size_t i = 0; i < pts_2d.size(); ++i)
  {
    cv::Point2f point_2d = pts_2d[i];
    int x = static_cast<int>(point_2d.x);
    int y = static_cast<int>(point_2d.y);
    if (x >= 0 && x < image_cols && y >= 0 && y < image_rows)
    {
      image_project.at<cv::Vec3b>(y, x) = cv::Vec3b(0, 0, 255);
    }
  }

  cv::imshow("origin image", image_origin);
  cv::imshow("project image", image_project);
  cv::imwrite("/home/enjoy/points2image/image_origin.jpg", image_origin);
  cv::imwrite("/home/enjoy/points2image/image_project.jpg", image_project);
  cv::waitKey(1000000);

  return 0;
}

 Reference: https://my.oschina.net/u/4414208/blog/3720172

 

 Finally, my thanks to the authors of the posts linked above for the inspiration and help during this development work.
