At first I struggled with ZED2 calibration for quite a while: the results I got from the kalibr toolbox were never good, with reprojection errors as high as 5-10 pixels.
Later I found that someone had asked about exactly this issue on GitHub:
https://github.com/stereolabs/zed-ros-wrapper/issues/620
It turns out that the /zed2/zed_node/left/image_rect_gray topic publishes images that the ZED2 has already undistorted and rectified, so there is no need to calibrate the distortion parameters yourself: the camera parameters can be read directly from the corresponding topics, and the results are better than what I got with my own calibration.
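For example, the rectified intrinsics (fx, fy, cx, cy) can be read straight from the camera_info topic that the wrapper publishes alongside the image topic; one message is enough:
rostopic echo -n 1 /zed2/zed_node/left/camera_info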
When the ZED2 is first connected, the driver reports a Camera-IMU Transform, i.e. the axis transformation between one camera (usually the left one) and the IMU. This is a system value; it is still better to calibrate it yourself.
For the detailed calibration procedure, see: ZED2 stereo camera + IMU calibration.
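If you just want to inspect the factory Camera-IMU transform that the wrapper reports, you can also query it from TF. The frame names below are the ones my version of zed-ros-wrapper publishes and may differ on other versions:
rosrun tf tf_echo zed2_left_camera_frame zed2_imu_link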
Fill in the parameter file zed2_stereo_config.yaml according to the calibration output. This extrinsic turns out to have a large influence on the result: if it is set incorrectly, the estimate diverges after a few minutes.
%YAML:1.0
#common parameters
#support: 1 imu 1 cam; 1 imu 2 cam; 2 cam;
imu: 1
num_of_cam: 2
imu_topic: "/zed2/zed_node/imu/data"
image0_topic: "/zed2/zed_node/left/image_rect_gray"
image1_topic: "/zed2/zed_node/right/image_rect_gray"
output_path: "~"
cam0_calib: "cam0.yaml"
cam1_calib: "cam1.yaml"
image_width: 1280
image_height: 720
# Extrinsic parameter between IMU and Camera.
estimate_extrinsic: 1 # 0: have accurate extrinsic parameters; we will trust the following imu^R_cam, imu^T_cam and not change them.
# 1: have an initial guess of the extrinsic parameters; we will optimize around your initial guess.
body_T_cam0: !!opencv-matrix
rows: 4
cols: 4
dt: d
data: [0.00481607, -0.9999862, -0.00209716, 0.01817731,
0.01004909, 0.00214548, -0.9999472, -0.01173207,
0.99993791, 0.00479474, 0.01005929, -0.04475461,
0, 0, 0, 1]
body_T_cam1: !!opencv-matrix
rows: 4
cols: 4
dt: d
data: [0.00595301, -0.99997993, -0.00216885, -0.10199108,
0.00965226, 0.00222625, -0.99995094, -0.0121086,
0.9999357, 0.00593179, 0.00966532, -0.04648173,
0, 0, 0, 1]
#Multiple thread support
multiple_thread: 1
#feature tracker parameters
max_cnt: 150 # max feature number in feature tracking
min_dist: 30 # min distance between two features
freq: 10 # frequency (Hz) at which the tracking result is published. At least 10 Hz for good estimation. If set to 0, the frequency is the same as the raw image
F_threshold: 1.0 # ransac threshold (pixel)
show_track: 1 # publish tracking image as topic
flow_back: 1 # perform forward and backward optical flow to improve feature tracking accuracy
#optimization parameters
max_solver_time: 0.04 # max solver iteration time (in seconds), to guarantee real time
max_num_iterations: 8 # max solver iterations, to guarantee real time
keyframe_parallax: 10.0 # keyframe selection threshold (pixel)
#imu parameters: the more accurate the parameters you provide, the better the performance
acc_n: 0.1 # accelerometer measurement noise standard deviation.
gyr_n: 0.01 # gyroscope measurement noise standard deviation.
acc_w: 0.001 # accelerometer bias random walk noise standard deviation.
gyr_w: 0.0001 # gyroscope bias random walk noise standard deviation.
g_norm: 9.81007 # gravity magnitude
#unsynchronization parameters
estimate_td: 0 # online estimate time offset between camera and imu
td: 0.0 # initial value of time offset. unit: s. read image clock + td = real image clock (IMU clock)
#loop closure parameters
load_previous_pose_graph: 0 # load and reuse previous pose graph; load from 'pose_graph_save_path'
pose_graph_save_path: "~/output/pose_graph/" # save and load path
save_image: 1 # save images in the pose graph for visualization purposes; you can disable this by setting it to 0
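The cam0.yaml and cam1.yaml files referenced above describe each camera in the camodocal format that VINS-Fusion expects. Since the rectified topics are used, the distortion terms are all zero. A minimal sketch of cam0.yaml, using the intrinsics my unit reports in camera_info (the same values as in the ORB_SLAM3 config further down; substitute your own):
%YAML:1.0
---
model_type: PINHOLE
camera_name: camera
image_width: 1280
image_height: 720
distortion_parameters:
   k1: 0.0
   k2: 0.0
   p1: 0.0
   p2: 0.0
projection_parameters:
   fx: 528.3009033203125
   fy: 528.3009033203125
   cx: 632.7931518554688
   cy: 372.5525817871094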
Run the following commands:
roscore
roslaunch vins vins_rviz.launch
rosrun vins vins_node ~/catkin_ws/src/VINS-Fusion/config/zed/zed2_stereo_config.yaml
You can either run on the live ZED2 camera or play back a recorded bag:
roslaunch zed_wrapper zed2.launch
rosbag play mydata.bag
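If you want to record your own bag first, the two rectified image topics plus the IMU topic are all the config above needs:
rosbag record -O mydata.bag /zed2/zed_node/left/image_rect_gray /zed2/zed_node/right/image_rect_gray /zed2/zed_node/imu/data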
To add the loop-closure module:
rosrun loop_fusion loop_fusion_node ~/catkin_ws/src/VINS-Fusion/config/zed/zed2_stereo_config.yaml
To add loosely coupled GPS fusion:
rosrun global_fusion global_fusion_node ~/catkin_ws/src/VINS-Fusion/config/zed/zed2_stereo_config.yaml
Through some trial and error I found that this node also needs the yaml config file to run; I have not figured out exactly why.
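As far as I can tell from the source, the stock global_fusion node subscribes to GPS fixes (sensor_msgs/NavSatFix) on the /gps topic, so a receiver publishing under another name has to be remapped; /my_gps/fix below is only a placeholder:
rosrun global_fusion global_fusion_node ~/catkin_ws/src/VINS-Fusion/config/zed/zed2_stereo_config.yaml /gps:=/my_gps/fix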
Likewise, to run ORB_SLAM3 you first need to set up the calibration file test_stereo_imu.yaml:
%YAML:1.0
#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------
Camera.type: "PinHole"
# Camera calibration and distortion parameters (OpenCV) (equal for both cameras after stereo rectification)
Camera.fx: 528.3009033203125
Camera.fy: 528.3009033203125
Camera.cx: 632.7931518554688
Camera.cy: 372.5525817871094
# the rectified topics are used, so the distortion parameters are set to zero
Camera.k1: 0.0
Camera.k2: 0.0
Camera.p1: 0.0
Camera.p2: 0.0
Camera.width: 1280
Camera.height: 720
# Camera frames per second
Camera.fps: 15.0
# stereo baseline times fx
Camera.bf: 63.396108984375
# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1
# Close/Far threshold. Baseline times.
ThDepth: 40.0 # 35
# Transformation from camera 0 to body-frame (imu)
# i.e. from the left camera to the IMU coordinate frame
Tbc: !!opencv-matrix
rows: 4
cols: 4
dt: f
data: [ 0.0055827285742915, 0.0128040922714603, 0.9999024394223516, 0.0285440762197234,
-0.9999801332587812, 0.0029981004108222, 0.0055447706603969, -0.1038871459045697,
-0.0029268121592544, -0.9999135295689473, 0.0128205754767047, -0.0063514683297355,
0.0000000000000000, -0.0000000000000000, -0.0000000000000000, 1.0000000000000000]
# IMU noise
# values can be taken from the zed-examples project: tutorials/tutorial 7 - sensor data/
IMU.NoiseGyro: 0.007 # 1.6968e-04
IMU.NoiseAcc: 0.0016 # 2.0000e-3
IMU.GyroWalk: 0.0019474
IMU.AccWalk: 0.0002509 # 3.0000e-3
IMU.Frequency: 400
#--------------------------------------------------------------------------------------------
# Stereo Rectification. Only if you need to pre-rectify the images.
# Camera.fx, .fy, etc must be the same as in LEFT.P
#--------------------------------------------------------------------------------------------
LEFT.height: 720
LEFT.width: 1280
LEFT.D: !!opencv-matrix
rows: 1
cols: 5
dt: d
data: [0, 0, 0, 0, 0]
LEFT.K: !!opencv-matrix
rows: 3
cols: 3
dt: d
data: [528.3009033203125, 0.0, 632.7931518554688, 0.0, 528.3009033203125, 372.5525817871094, 0.0, 0.0, 1.0]
LEFT.R: !!opencv-matrix
rows: 3
cols: 3
dt: d
data: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
LEFT.Rf: !!opencv-matrix
rows: 3
cols: 3
dt: f
data: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
LEFT.P: !!opencv-matrix
rows: 3
cols: 4
dt: d
data: [528.3009033203125, 0.0, 632.7931518554688, 0.0, 0.0, 528.3009033203125, 372.5525817871094, 0.0, 0.0, 0.0, 1.0, 0.0]
RIGHT.height: 720
RIGHT.width: 1280
RIGHT.D: !!opencv-matrix
rows: 1
cols: 5
dt: d
data: [0, 0, 0, 0, 0]
RIGHT.K: !!opencv-matrix
rows: 3
cols: 3
dt: d
data: [528.3009033203125, 0.0, 632.7931518554688, 0.0, 528.3009033203125, 372.5525817871094, 0.0, 0.0, 1.0]
RIGHT.R: !!opencv-matrix
rows: 3
cols: 3
dt: d
data: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
RIGHT.P: !!opencv-matrix
rows: 3
cols: 4
dt: d
data: [528.3009033203125, 0.0, 632.7931518554688, -63.47084045410156, 0.0, 528.3009033203125, 372.5525817871094, 0.0, 0.0, 0.0, 1.0, 0.0]
#--------------------------------------------------------------------------------------------
# ORB Parameters
#--------------------------------------------------------------------------------------------
# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 1200
# ORB Extractor: Scale factor between levels in the scale pyramid
ORBextractor.scaleFactor: 1.2
# ORB Extractor: Number of levels in the scale pyramid
ORBextractor.nLevels: 8
# ORB Extractor: Fast threshold
# Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
# Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
# You can lower these values if your images have low contrast
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7
#--------------------------------------------------------------------------------------------
# Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.05
Viewer.KeyFrameLineWidth: 1
Viewer.GraphLineWidth: 0.9
Viewer.PointSize: 2
Viewer.CameraSize: 0.08
Viewer.CameraLineWidth: 3
Viewer.ViewpointX: 0
Viewer.ViewpointY: -0.7
Viewer.ViewpointZ: -1.8
Viewer.ViewpointF: 500
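Before launching, it is worth checking that the IMU topic really publishes at the rate set in IMU.Frequency (400 here):
rostopic hz /zed2/zed_node/imu/data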
Start the program with the following command:
rosrun ORB_SLAM3 Stereo_Inertial Vocabulary/ORBvoc.txt \
Examples/zed2/test_stereo_imu.yaml true \
/camera/left/image_raw:=/zed2/zed_node/left/image_rect_gray \
/camera/right/image_raw:=/zed2/zed_node/right/image_rect_gray \
/imu:=/zed2/zed_node/imu/data
Make sure the remapped topics match what is actually being published; alternatively, you can change the topic names directly in the source code.
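A quick way to see which topics are actually available before adjusting the remappings:
rostopic list | grep zed2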
For how to fill in the yaml parameter file, these blog posts are useful references:
Rectification for stereo cameras
Stereo camera calibration and ORB-SLAM2 stereo parameters explained
MYNTEYE camera calibration + running ORB_SLAM3