In dynamic environments, using ArUco markers as landmarks can help a robot localize quickly. This post organizes the main workflow, which I had worked through before but never written up. Everything here is based on the ROS framework. Recent tests suggest the ArUco detection accuracy is a real issue, so there is still some work to do before this is practical.
I am using a USB wide-angle lens, somewhere between a pinhole camera and a fisheye camera. First, open the camera with the ROS package usb_cam:
https://github.com/ros-drivers/usb_cam
The published topics are /usb_cam/image_raw and /usb_cam/camera_info.
Because the camera has not been calibrated yet, /usb_cam/camera_info is empty.
This is my roslaunch file. When you plug in a USB camera, Linux generally exposes it as /dev/video0, /dev/video1, and so on.
<launch>
  <node name="usb_cam" pkg="usb_cam" type="usb_cam_node" output="screen" >
    <param name="video_device" value="/dev/video1" />
    <param name="image_width" value="640" />
    <param name="image_height" value="480" />
    <param name="pixel_format" value="yuyv" />
    <param name="camera_frame_id" value="usb_cam" />
    <param name="io_method" value="mmap"/>
  </node>
  <node name="image_view" pkg="image_view" type="image_view" respawn="false" output="screen">
    <remap from="image" to="/usb_cam/image_raw"/>
    <param name="autosize" value="true" />
  </node>
</launch>
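If you are not sure which /dev/videoN is your camera, v4l2-ctl from the v4l-utils package can list the connected devices (a quick check, not from the original post):

v4l2-ctl --list-devices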
Camera calibration is covered in many articles; see for example:
https://blog.csdn.net/qq_36804363/article/details/89269776
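For reference, calibration is typically run with the camera_calibration package. A sketch, assuming an 8x6 inner-corner checkerboard with 108 mm squares; adjust --size and --square to match your own board:

rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.108 image:=/usb_cam/image_raw camera:=/usb_cam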
In the calibration GUI, save writes the results to the temporary directory /tmp; after commit, a head_camera.yaml file is saved under ~/.ros/camera_info, with content like this:
image_width: 640
image_height: 480
camera_name: head_camera
camera_matrix:
  rows: 3
  cols: 3
  data: [502.693541683172, 0, 340.0840028509343, 0, 505.255225296267, 230.8081850657802, 0, 0, 1]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [0.1159532663569074, -0.1333633000304404, 0.003771834918850461, 0.001535879744852717, 0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [1, 0, 0, 0, 1, 0, 0, 0, 1]
projection_matrix:
  rows: 3
  cols: 4
  data: [516.1822509765625, 0, 341.2437019169374, 0, 0, 518.700439453125, 232.4051810117744, 0, 0, 0, 1, 0]
Here camera_matrix corresponds to K in camera_info, distortion_coefficients to D, rectification_matrix to R, and projection_matrix to P. As for how they are used, the sketch below gives an idea.
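A minimal sketch (my own illustration, not from any package) of how K and D are typically consumed to undistort an image with OpenCV, using the numbers from head_camera.yaml above:

// Sketch: feed K and D from head_camera.yaml into cv::undistort by hand.
#include <opencv2/opencv.hpp>

int main()
{
  // camera_matrix -> K (3x3)
  cv::Mat K = (cv::Mat_<double>(3, 3) <<
      502.693541683172, 0.0,              340.0840028509343,
      0.0,              505.255225296267, 230.8081850657802,
      0.0,              0.0,              1.0);
  // distortion_coefficients -> D (k1, k2, p1, p2, k3), plumb_bob model
  cv::Mat D = (cv::Mat_<double>(1, 5) <<
      0.1159532663569074, -0.1333633000304404,
      0.003771834918850461, 0.001535879744852717, 0.0);

  cv::Mat raw = cv::imread("raw.png");  // hypothetical input frame
  cv::Mat rect;
  cv::undistort(raw, rect, K, D);       // the same operation image_proc performs
  cv::imwrite("rect.png", rect);
  return 0;
}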
Here is a Baidu Netdisk download link for a checkerboard pattern:
https://pan.baidu.com/s/1FheRrbMVq1bdOEWwCGSY9w extraction code: 5vfy
With the distortion parameters in hand, we want to rectify the distorted image. How? See the article "ROS 公用包学习解析<四> image_proc, depth_image_proc" for reference. It takes one line in the terminal, where ROS_NAMESPACE is your usb_cam namespace:
ROS_NAMESPACE=usb_cam rosrun image_proc image_proc
ROS then publishes the rectified image on /usb_cam/image_rect.
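To confirm the rectified stream is actually flowing, you can check its rate (a quick sanity check, not from the original post):

rostopic hz /usb_cam/image_rect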
Rectifying the image is not always necessary: some packages only need the raw image plus the distortion parameters (camera_info), while others require the rectified image.
Next we need the trio of extrinsic-calibration tools from the Megvii Research 3D group:
1-CamLaserCalibraTool, for calibrating the extrinsics between camera and laser (by Dr. He). Works well~
2-CamOdomCalibraTool, for calibrating the extrinsics between camera and odom.
3-OdomLaserCalibraTool, for calibrating the extrinsics between laser and odom (for a differential-drive robot, that is the midpoint between the two driven wheels) (by Dr. He). Works well~
I used tools 1 and 3 to calibrate camera-to-laser and laser-to-odom; chaining those two transforms then gives the camera-to-odom extrinsics for free, as the sketch below shows.
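A sketch of the composition (my own illustration): T_odom_laser would come from tool 3 and T_laser_cam from tool 1. The T_odom_laser numbers below are hypothetical placeholders, while T_laser_cam reuses the rpy/xyz that ends up in the URDF further down.

// Sketch: chain the two calibrated extrinsics to get camera-to-odom.
#include <Eigen/Geometry>
#include <iostream>

int main()
{
  // Hypothetical laser-to-odom extrinsics; substitute your own results.
  Eigen::Isometry3d T_odom_laser = Eigen::Isometry3d::Identity();
  T_odom_laser.translate(Eigen::Vector3d(0.2, 0.0, 0.3));

  // Camera-to-laser extrinsics; URDF rpy applies as Rz(yaw)*Ry(pitch)*Rx(roll).
  Eigen::Isometry3d T_laser_cam = Eigen::Isometry3d::Identity();
  T_laser_cam.translate(Eigen::Vector3d(0.140066, -0.00735856, -0.182245));
  T_laser_cam.rotate(Eigen::AngleAxisd(-1.56192, Eigen::Vector3d::UnitZ()) *
                     Eigen::AngleAxisd(-0.0609987, Eigen::Vector3d::UnitY()) *
                     Eigen::AngleAxisd(-1.47493, Eigen::Vector3d::UnitX()));

  // Composition: a point in the camera frame maps to odom through the laser.
  Eigen::Isometry3d T_odom_cam = T_odom_laser * T_laser_cam;
  std::cout << T_odom_cam.matrix() << std::endl;
  return 0;
}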
For calibration 3 (laser-odom), just follow the README and modify the config accordingly.
For calibration 1 (camera-laser), find a reasonably large board (about 40 cm x 40 cm) and configure calibra_config.yaml as required. Below is mine. Dr. He's tool needs the distortion parameters and the raw image as input. I used a single AprilTag because I printed on A4 paper, which is small; a single tag probably localizes a bit better (I have not tested multi-tag boards).
%YAML:1.0
#common parameters
savePath: "/home/*yourname*/catkin_ws_cali_cam_lidar/src/log/"
bag_path: "/home/*yourname*/Downloads/ros_bag/okagv_ros_bag/2021-03-02/2021-03-02-11-09-05.bag"
scan_topic_name: "/scan"
img_topic_name: "/usb_cam/image_raw"

# tag info
tag_type: 2       # 1: kalibr_tag, 2: apriltag
tag_size: 0.160   # tag size, unit meter
tag_spacing: 0.3  # tag spacing, only used in kalibr_tag. For details, please see kalibr tag description.
black_border: 1   # if you use kalibr_tag, black_border = 2; if you use apriltag, black_border = 1

#camera calibration
model_type: PINHOLE
camera_name: camera
image_width: 640
image_height: 480
distortion_parameters:
  k1: -0.32623831
  k2: 0.090618678
  p1: 0.000139543
  p2: 0.0
projection_parameters:
  fx: 356.6336273
  fy: 358.2610314
  cx: 325.5064935
  cy: 237.6865029
Then write the calibration results into the URDF file, copying them over field by field:
<joint name="camera_link_joint" type="fixed">
<parent link="laser_link" />
<child link="usb_cam" />
<origin rpy="-1.47493 -0.0609987 -1.56192" xyz="0.140066 -0.00735856 -0.182245" />
</joint>
</robot>
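Once the URDF is loaded (e.g. via robot_state_publisher), the static transform can be sanity-checked with tf, assuming the frame names above:

rosrun tf tf_echo laser_link usb_cam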
Taking aruco_ros as an example, modify simple_single.cpp:
void image_callback(const sensor_msgs::ImageConstPtr& msg)
{
  /*
  if ((image_pub.getNumSubscribers() == 0) &&
      (debug_pub.getNumSubscribers() == 0) &&
      (pose_pub.getNumSubscribers() == 0) &&
      (transform_pub.getNumSubscribers() == 0) &&
      (position_pub.getNumSubscribers() == 0) &&
      (marker_pub.getNumSubscribers() == 0) &&
      (pixel_pub.getNumSubscribers() == 0))
  {
    ROS_DEBUG("No subscribers, not looking for aruco markers");
    return;
  }
  */
  static tf::TransformBroadcaster br;
  if (cam_info_received)
  {
    ros::Time curr_stamp = msg->header.stamp;
    cv_bridge::CvImagePtr cv_ptr;
    try
    {
      cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::RGB8);
      inImage = cv_ptr->image;

      // detection results will go into "markers"
      markers.clear();
      // Ok, let's detect
      mDetector.detect(inImage, markers, camParam, marker_size, false);

      // for each marker, draw info and its boundaries in the image
      cartographer_ros_msgs::LandmarkList apriltag_LandMarkList;
      for (size_t i = 0; i < markers.size(); ++i)
      {
        // only publishing the selected marker
        if (markers[i].id == marker_id)
        {
          tf::Transform transform = aruco_ros::arucoMarker2Tf(markers[i], rotate_marker_axis_);
          tf::StampedTransform cameraToReference;
          cameraToReference.setIdentity();

          if (reference_frame != camera_frame)
          {
            getTransform(reference_frame, camera_frame, cameraToReference);
          }

          transform = static_cast<tf::Transform>(cameraToReference)
                      * static_cast<tf::Transform>(rightToLeft)
                      * transform;

          tf::StampedTransform stampedTransform(transform, curr_stamp,
                                                reference_frame, marker_frame);
          br.sendTransform(stampedTransform);

          geometry_msgs::PoseStamped poseMsg;
          tf::poseTFToMsg(transform, poseMsg.pose);
          poseMsg.header.frame_id = reference_frame;
          poseMsg.header.stamp = curr_stamp;
          pose_pub.publish(poseMsg);

          geometry_msgs::TransformStamped transformMsg;
          tf::transformStampedTFToMsg(stampedTransform, transformMsg);
          transform_pub.publish(transformMsg);

          geometry_msgs::Vector3Stamped positionMsg;
          positionMsg.header = transformMsg.header;
          positionMsg.vector = transformMsg.transform.translation;
          position_pub.publish(positionMsg);

          geometry_msgs::PointStamped pixelMsg;
          pixelMsg.header = transformMsg.header;
          pixelMsg.point.x = markers[i].getCenter().x;
          pixelMsg.point.y = markers[i].getCenter().y;
          pixelMsg.point.z = 0;
          pixel_pub.publish(pixelMsg);

          // Publish rviz marker representing the ArUco marker patch
          visualization_msgs::Marker visMarker;
          visMarker.header = transformMsg.header;
          visMarker.id = 1;
          visMarker.type = visualization_msgs::Marker::CUBE;
          visMarker.action = visualization_msgs::Marker::ADD;
          visMarker.pose = poseMsg.pose;
          visMarker.scale.x = marker_size;
          visMarker.scale.y = 0.001;
          visMarker.scale.z = marker_size;
          visMarker.color.r = 1.0;
          visMarker.color.g = 0;
          visMarker.color.b = 0;
          visMarker.color.a = 1.0;
          visMarker.lifetime = ros::Duration(3.0);
          marker_pub.publish(visMarker);

          // Added for cartographer: wrap the marker pose as a landmark entry
          cartographer_ros_msgs::LandmarkEntry apriltag_LandMarkEntry;
          apriltag_LandMarkEntry.id = std::to_string(marker_id);
          apriltag_LandMarkEntry.tracking_from_landmark_transform = poseMsg.pose;
          apriltag_LandMarkEntry.translation_weight = 1.0;
          apriltag_LandMarkEntry.rotation_weight = 1.0;
          apriltag_LandMarkEntry.type = "apriltag";
          apriltag_LandMarkList.landmarks.push_back(apriltag_LandMarkEntry);
        }
        // but drawing all the detected markers
        markers[i].draw(inImage, cv::Scalar(0, 0, 255), 2);
      }

      // Added for cartographer: publish the landmark list
      if (apriltag_LandMarkList.landmarks.size() != 0)
      {
        apriltag_LandMarkList.header = msg->header;
        apriltag_landmark_publisher_.publish(apriltag_LandMarkList);
      }

      // draw a 3d cube in each marker if there is 3d info
      if (camParam.isValid() && marker_size > 0)
      {
        for (size_t i = 0; i < markers.size(); ++i)
        {
          CvDrawingUtils::draw3dAxis(inImage, markers[i], camParam);
        }
      }

      if (image_pub.getNumSubscribers() > 0)
      {
        // show input with augmented information
        cv_bridge::CvImage out_msg;
        out_msg.header.stamp = curr_stamp;
        out_msg.encoding = sensor_msgs::image_encodings::RGB8;
        out_msg.image = inImage;
        image_pub.publish(out_msg.toImageMsg());
      }

      if (debug_pub.getNumSubscribers() > 0)
      {
        // show also the internal image resulting from the threshold operation
        cv_bridge::CvImage debug_msg;
        debug_msg.header.stamp = curr_stamp;
        debug_msg.encoding = sensor_msgs::image_encodings::MONO8;
        debug_msg.image = mDetector.getThresholdedImage();
        debug_pub.publish(debug_msg.toImageMsg());
      }
    }
    catch (cv_bridge::Exception& e)
    {
      ROS_ERROR("cv_bridge exception: %s", e.what());
      return;
    }
  }
}
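One detail the snippet glosses over: apriltag_landmark_publisher_ is not part of upstream simple_single.cpp, so it has to be declared and advertised yourself. A sketch; the topic name "landmark" is my assumption (see the note at the end of the post):

// Sketch: declare the landmark publisher next to the node's existing ones.
#include <cartographer_ros_msgs/LandmarkEntry.h>
#include <cartographer_ros_msgs/LandmarkList.h>

ros::Publisher apriltag_landmark_publisher_;

// ...and advertise it in the node's setup code:
apriltag_landmark_publisher_ =
    nh.advertise<cartographer_ros_msgs::LandmarkList>("landmark", 10);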
Taking apriltag_ros as an example: using the tags as landmarks turned out not to be ideal; the detection distance and viewing angle probably have to be limited to improve the 3D pose accuracy (a sketch of such a filter follows the code). Posting the modification to continuous_detector.cpp here anyway:
void ContinuousDetector::imageCallback(
    const sensor_msgs::ImageConstPtr& image_rect,
    const sensor_msgs::CameraInfoConstPtr& camera_info)
{
  // Lazy updates:
  // When there are no subscribers _and_ when tf is not published,
  // skip detection.
  if (tag_detections_publisher_.getNumSubscribers() == 0 &&
      tag_detections_image_publisher_.getNumSubscribers() == 0 &&
      !tag_detector_->get_publish_tf())
  {
    // ROS_INFO_STREAM("No subscribers and no tf publishing, skip processing.");
    return;
  }

  // Convert ROS's sensor_msgs::Image to cv_bridge::CvImagePtr in order to run
  // AprilTag 2 on the image
  try
  {
    cv_image_ = cv_bridge::toCvCopy(image_rect, image_rect->encoding);
  }
  catch (cv_bridge::Exception& e)
  {
    ROS_ERROR("cv_bridge exception: %s", e.what());
    return;
  }

  // Run AprilTag 2 once and publish the detections (the original snippet
  // called detectTags() twice, running detection twice per frame)
  AprilTagDetectionArray tag_detection_array =
      tag_detector_->detectTags(cv_image_, camera_info);
  tag_detections_publisher_.publish(tag_detection_array);

  // Added for cartographer: wrap every detection as a landmark entry
  cartographer_ros_msgs::LandmarkList apriltag_LandMarkList;
  for (size_t i = 0; i < tag_detection_array.detections.size(); i++)
  {
    AprilTagDetection item = tag_detection_array.detections[i];

    cartographer_ros_msgs::LandmarkEntry apriltag_LandMarkEntry;
    apriltag_LandMarkEntry.id = std::to_string(item.id[0]);
    apriltag_LandMarkEntry.tracking_from_landmark_transform.position = item.pose.pose.pose.position;
    apriltag_LandMarkEntry.tracking_from_landmark_transform.orientation = item.pose.pose.pose.orientation;
    apriltag_LandMarkEntry.translation_weight = 1.0;
    apriltag_LandMarkEntry.rotation_weight = 1.0;
    apriltag_LandMarkEntry.type = "apriltag";
    apriltag_LandMarkList.landmarks.push_back(apriltag_LandMarkEntry);
  }

  if (apriltag_LandMarkList.landmarks.size() != 0)
  {
    apriltag_LandMarkList.header = tag_detection_array.header;
    apriltag_landmark_publisher_.publish(apriltag_LandMarkList);
  }

  // Publish the camera image overlaid by outlines of the detected tags and
  // their payload values
  if (draw_tag_detections_image_)
  {
    tag_detector_->drawDetections(cv_image_);
    tag_detections_image_publisher_.publish(cv_image_->toImageMsg());
  }
}
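As mentioned above, limiting the detection distance and viewing angle should improve the usable 3D accuracy. A minimal gate I would try (my own illustration with hypothetical thresholds, not part of apriltag_ros), applied to item.pose.pose.pose before pushing the landmark entry:

// Sketch: reject detections that are too far away or seen at a grazing
// angle, since AprilTag depth/orientation degrades quickly there.
#include <cmath>
#include <geometry_msgs/Pose.h>
#include <tf2/LinearMath/Matrix3x3.h>
#include <tf2/LinearMath/Quaternion.h>

// Hypothetical thresholds; tune them on your own data.
static constexpr double kMaxRange   = 2.0;   // meters
static constexpr double kMaxViewDeg = 45.0;  // degrees off frontal

bool acceptDetection(const geometry_msgs::Pose& tag_in_cam)
{
  // Reject tags that are too far away: depth error grows with distance.
  const auto& p = tag_in_cam.position;
  const double range = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
  if (range > kMaxRange)
    return false;

  // Reject tags seen at a grazing angle: compare the tag normal (its +z
  // axis) with the camera's viewing ray toward the tag center.
  tf2::Quaternion q(tag_in_cam.orientation.x, tag_in_cam.orientation.y,
                    tag_in_cam.orientation.z, tag_in_cam.orientation.w);
  tf2::Vector3 tag_normal = tf2::Matrix3x3(q).getColumn(2);
  tf2::Vector3 view_ray(p.x, p.y, p.z);
  view_ray.normalize();
  const double cos_angle = std::fabs(tag_normal.dot(view_ray));
  return cos_angle > std::cos(kMaxViewDeg * M_PI / 180.0);
}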
That's it. Following cartographer's landmark demo, everything should now run.
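One last hedged note: as far as I know, cartographer_ros only consumes landmarks when they are enabled in the trajectory options and published on the topic it subscribes to (named "landmark" by default), which is why the publishers sketched above use that name. In the .lua options, something like:

-- Sketch, mirroring the cartographer landmark demo (my assumption)
options = {
  -- ... existing options ...
  use_landmarks = true,
}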