RealSense D455 calibration + running VINS-Mono

Running reference: https://blog.csdn.net/qq_41839222/article/details/86552367
Calibration reference: https://blog.csdn.net/xiaoxiaoyikesu/article/details/105646064

Install kalibr: https://github.com/ethz-asl/kalibr/wiki/installation
Problem 1: installing the dependency with sudo pip install python-igraph --upgrade fails with:

Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-HgfXHE/python-igraph/

I tried many approaches; what finally worked was installing it with apt-get. (Also, it apparently cannot be installed under Python 3.7 — I had to deactivate Anaconda and install it in the system's default Python 2.7.)

sudo apt-get install python-igraph

Problem 2: while building the kalibr sources, downloading suitesparse took extremely long.
Workaround: download the matching release from https://github.com/jluttine/suitesparse/releases.
After extracting the suitesparse archive, the folder is named "suitesparse-4.2.1"; rename it to "SuiteSparse" and put it under "<ROS workspace>/build/kalibr/suitesparse/suitesparse_src-prefix/src/suitesparse_src".
(Based on a comment by 走火入魔 under https://blog.csdn.net/u010003609/article/details/104715475)

Problem 3: building kalibr fails with:
Errors << aslam_cv_python:make /home/yasaburo3/project/kalibr-workspace/logs/aslam_cv_python/build.make.000.log
/usr/include/boost/python/detail/destroy.hpp:20:9: error: ‘Eigen::MatrixBase::~MatrixBase() [with Derived = Eigen::Matrix]’ is protected within this context
p->~T();
Reference: https://github.com/ethz-asl/kalibr/issues/396 (screenshot omitted)
It looks like a newer Eigen version changed a member that used to be public to protected. After modifying the code as suggested in that answer, the error went away. (I would guess that switching back to an older Eigen version might also work.)

After installing code_utils and imu_utils, following the calibration article linked above, I obtained the following IMU calibration result:

%YAML:1.0
---
type: IMU
name: d455
Gyr:
   unit: " rad/s"
   avg-axis:
      gyr_n: 2.4619970724548633e-03
      gyr_w: 3.3803458569191471e-05
   x-axis:
      gyr_n: 1.7344090512858039e-03
      gyr_w: 3.0020282041206071e-05
   y-axis:
      gyr_n: 3.8089577973404579e-03
      gyr_w: 5.9955836202839161e-05
   z-axis:
      gyr_n: 1.8426243687383285e-03
      gyr_w: 1.1434257463529169e-05
Acc:
   unit: " m/s^2"
   avg-axis:
      acc_n: 2.3490554641910726e-02
      acc_w: 7.1659587470836986e-04
   x-axis:
      acc_n: 2.0733442788182205e-02
      acc_w: 6.7470235021600312e-04
   y-axis:
      acc_n: 2.2826384734378355e-02
      acc_w: 9.0457613257271044e-04
   z-axis:
      acc_n: 2.6911836403171623e-02
      acc_w: 5.7050914133639622e-04

Reference: https://blog.csdn.net/Duke_Star/article/details/115942252

For the camera intrinsics I simply used the parameters factory-calibrated by RealSense:

rostopic echo /camera/color/camera_info

Intrinsic matrix K: (screenshot of the rostopic echo output omitted)
Alternatively, the camera intrinsics and the camera-to-IMU extrinsics can be obtained with the following program (reference: https://www.freesion.com/article/47171367483/), without using kalibr:

#include <librealsense2/rs.hpp> // RealSense Cross Platform API
#include <opencv2/opencv.hpp>
#include <iostream>
#include <cstring>
#include <cstdio>

int main(int argc, char * argv[]) try
{
  rs2::log_to_console(RS2_LOG_SEVERITY_ERROR);

  // Create a librealsense context for managing devices
  rs2::context ctx;
  auto devs = ctx.query_devices();                  // get the device list
  int device_num = devs.size();
  std::cout << "device num: " << device_num << std::endl; // number of devices

  // Only one device is connected here, so query device 0.
  // An rs2::error is thrown here when no device is connected.
  rs2::device dev = devs[0];
  // Every device has a unique serial number; with several devices connected,
  // this number identifies the one you want to start.
  char serial_number[100] = {0};
  strcpy(serial_number, dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER));
  printf("serial_number: %s\n", serial_number);

  // Configuration object for the depth and color streams read from the pipeline
  rs2::config cfg;
  // Color stream: 640x480, BGR format, 30 fps.
  // By default any available device is configured; to target a specific
  // device, pass its index in the device list:
  // int indx = 0; // the 0th device
  // cfg.enable_stream(RS2_STREAM_COLOR, indx, 640, 480, RS2_FORMAT_BGR8, 30);
  cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_BGR8, 30);
  // Depth stream: 640x480, Z16 format, 30 fps
  cfg.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);

  // Create the RealSense pipeline, which wraps the actual camera device
  rs2::pipeline pipe;
  pipe.start(cfg); // start the pipeline with the given configuration

  rs2::frameset data;
  data = pipe.wait_for_frames(); // wait for one frameset (default timeout 5 s)

  rs2::depth_frame depth       = data.get_depth_frame(); // depth image
  rs2::video_frame color       = data.get_color_frame(); // color image
  rs2::stream_profile dprofile = depth.get_profile();
  rs2::stream_profile cprofile = color.get_profile();

  // Color camera intrinsics
  rs2::video_stream_profile cvsprofile(cprofile);
  rs2_intrinsics color_intrin = cvsprofile.get_intrinsics();
  std::cout << "\ncolor intrinsics: ";
  std::cout << color_intrin.width << "  " << color_intrin.height << "  ";
  std::cout << color_intrin.ppx << "  " << color_intrin.ppy << "  ";
  std::cout << color_intrin.fx << "  " << color_intrin.fy << std::endl;
  std::cout << "coeffs: ";
  for (auto value : color_intrin.coeffs)
    std::cout << value << "  ";
  std::cout << std::endl;
  std::cout << "distortion model: " << color_intrin.model << std::endl;

  // Depth camera intrinsics
  rs2::video_stream_profile dvsprofile(dprofile);
  rs2_intrinsics depth_intrin = dvsprofile.get_intrinsics();
  std::cout << "\ndepth intrinsics: ";
  std::cout << depth_intrin.width << "  " << depth_intrin.height << "  ";
  std::cout << depth_intrin.ppx << "  " << depth_intrin.ppy << "  ";
  std::cout << depth_intrin.fx << "  " << depth_intrin.fy << std::endl;
  std::cout << "coeffs: ";
  for (auto value : depth_intrin.coeffs)
    std::cout << value << "  ";
  std::cout << std::endl;
  std::cout << "distortion model: " << depth_intrin.model << std::endl;

  // Extrinsics of the depth camera relative to the color camera,
  // i.e. the transform P_color = R * P_depth + T.
  // Note: rs2_extrinsics stores the rotation in column-major order,
  // so each line printed below is a column of R.
  rs2_extrinsics extrin = dprofile.get_extrinsics_to(cprofile);
  std::cout << "\nextrinsics of depth camera to color camera: \nrotation: " << std::endl;
  for (int i = 0; i < 3; ++i) {
    for (int j = 0; j < 3; ++j) {
      float value = extrin.rotation[3 * i + j];
      std::cout << value << "  ";
    }
    std::cout << std::endl;
  }
  std::cout << std::endl;

  std::cout << "translation: ";
  for (auto value : extrin.translation)
    std::cout << value << "  ";
  std::cout << std::endl;

  while (1)
  {
    // Wait for one frameset (default timeout 5 s)
    data = pipe.wait_for_frames();

    rs2::depth_frame depth  = data.get_depth_frame(); // depth image
    rs2::video_frame color  = data.get_color_frame(); // color image
    int color_width  = color.get_width();
    int color_height = color.get_height();

    int depth_width  = depth.get_width();
    int depth_height = depth.get_height();

    if (!color || !depth) break; // exit if no data arrives

    // Wrap the color and depth buffers as OpenCV matrices
    cv::Mat image(cv::Size(color_width, color_height), CV_8UC3, (void*)color.get_data(), cv::Mat::AUTO_STEP);
    cv::Mat depthmat(cv::Size(depth_width, depth_height), CV_16U, (void*)depth.get_data(), cv::Mat::AUTO_STEP);

    // Display
    cv::imshow("image", image);
    cv::imshow("depth", depthmat);
    cv::waitKey(1);
  }
  return EXIT_SUCCESS;
}
catch (const rs2::error & e)
{
    // Exception raised by the camera device
    std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n    " << e.what() << std::endl;
    return EXIT_FAILURE;
}
catch (const std::exception & e)
{
    std::cerr << "Other error : " << e.what() << std::endl;
    return EXIT_FAILURE;
}

CMakeLists.txt:

cmake_minimum_required(VERSION 3.4.1)
project(calib)
set(CMAKE_CXX_FLAGS "-std=c++11")

find_package(OpenCV REQUIRED)
find_package(realsense2 REQUIRED)

include_directories(${OpenCV_INCLUDE_DIRS})
include_directories(${realsense2_INCLUDE_DIRS})

add_executable(calib calib.cpp)
target_link_libraries(calib
    ${OpenCV_LIBS}
    ${realsense2_LIBRARY}
    )   

The resulting parameters are as follows:

%YAML:1.0

#common parameters
imu_topic: "/camera/imu"
image_topic: "/camera/color/image_raw"
output_path: "/home/tony-ws1/output/"

#camera calibration 
model_type: PINHOLE
camera_name: camera
image_width: 640
image_height: 480
distortion_parameters:
   k1: -5.58381e-02
   k2: 6.70407e-02
   p1: -2.42111e-04
   p2: -2.12417e-02
projection_parameters:
   fx: 3.7958282470703125e+02
   fy: 3.793941955566406e+02
   cx: 3.1927392578125e+02
   cy: 2.471514129638672e+02

# Extrinsic parameter between IMU and Camera.
estimate_extrinsic: 0   # 0  Have accurate extrinsic parameters. We will trust the following imu^R_cam, imu^T_cam; don't change them.
                        # 1  Have an initial guess about the extrinsic parameters. We will optimize around your initial guess.
                        # 2  Don't know anything about the extrinsic parameters. You don't need to give R,T. We will try to calibrate them. Do some rotation movement at the beginning.
#If you choose 0 or 1, you should write down the following matrix.
#Rotation from camera frame to imu frame, imu^R_cam
extrinsicRotation: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [0.999998, 0.000727753, -0.0018542, 
          -0.000731598, 0.999998, -0.00207414, 
          0.00185269, 0.00207549, 0.999996]
#Translation from camera frame to imu frame, imu^T_cam
extrinsicTranslation: !!opencv-matrix
   rows: 3
   cols: 1
   dt: d
   data: [-0.0591689, -6.45117e-05, 0.000440176]

#feature tracker parameters
max_cnt: 150            # max feature number in feature tracking
min_dist: 25            # min distance between two features
freq: 10                # frequency (Hz) at which the tracking result is published. At least 10 Hz for good estimation. If set to 0, the frequency will be the same as the raw image
F_threshold: 1.0        # ransac threshold (pixel)
show_track: 1           # publish tracking image as topic
equalize: 0             # if the image is too dark or too bright, turn on equalize to find enough features
fisheye: 0              # if using a fisheye lens, turn this on. A circular mask will be loaded to remove noisy points at the image edges

#optimization parameters
max_solver_time: 0.04   # max solver iteration time (s), to guarantee real time
max_num_iterations: 8   # max solver iterations, to guarantee real time
keyframe_parallax: 10.0 # keyframe selection threshold (pixel)

#imu parameters       The more accurate the parameters you provide, the better the performance
acc_n: 2.3490554641910726e-02   # accelerometer measurement noise standard deviation.       #0.2
gyr_n: 2.4619970724548633e-03   # gyroscope measurement noise standard deviation.           #0.05
acc_w: 7.1659587470836986e-04   # accelerometer bias random walk noise standard deviation.  #0.02
gyr_w: 3.3803458569191471e-05   # gyroscope bias random walk noise standard deviation.      #4.0e-5
g_norm: 9.794       # gravity magnitude in Shanghai

After that, everything is ready to run:

roslaunch realsense2_camera rs_camera.launch 
roslaunch vins_estimator realsense_color.launch 
roslaunch vins_estimator vins_rviz.launch

(Screenshot of the running result omitted)
The result is still not particularly good. In my first experiment the drift was very severe; this time it was less so, possibly because I moved more slowly. When there are too few feature points the visual odometry fails and motion is estimated from the IMU alone, which makes the error very large. Also, the IMU-camera extrinsics were not calibrated with kalibr but estimated automatically by the program, which may be another source of error; I will try calibrating them with kalibr later.
