1. Once the build is finished, ORB_SLAM is ready to use. According to the instructions, first start roscore,
then open another terminal and run: rosrun ORB_SLAM ORB_SLAM PATH_TO_VOCABULARY PATH_TO_SETTINGS_FILE
PATH_TO_VOCABULARY: the vocabulary is a tree-structured model that ORB-SLAM uses mainly for loop-closure detection. Strictly speaking it should be generated offline for each dataset, but since imaging conditions are usually similar, the same vocabulary file can be reused across different image datasets (it acts like a database file that makes it easy to store and quickly look up visual feature information). Although it is a TXT file, opening it shows nothing but a long list of numbers. (https://answers.cosrobotics.org/question/184/guan-yu-orb_slamyun-xing-yi-ji-rgb-d-datasetde-xiang-ji-can-shu-de-wen-ti/)
PATH_TO_SETTINGS_FILE: this one is easy to understand: it contains the camera's intrinsic parameters.
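For reference, the settings file is an OpenCV-style YAML and is typically read with cv::FileStorage. A minimal sketch of how such intrinsics can be read (the key names such as Camera.fx follow the usual ORB_SLAM Settings.yaml layout and are an assumption here, not taken from this post):
// Sketch only: read pinhole intrinsics from a Settings.yaml-style file.
#include <iostream>
#include <opencv2/core/core.hpp>

int main(int argc, char** argv)
{
    if (argc < 2)
    {
        std::cerr << "usage: read_settings PATH_TO_SETTINGS_FILE" << std::endl;
        return 1;
    }

    cv::FileStorage fSettings(argv[1], cv::FileStorage::READ);
    if (!fSettings.isOpened())
    {
        std::cerr << "failed to open " << argv[1] << std::endl;
        return 1;
    }

    // Focal lengths and principal point (assumed key names)
    float fx = fSettings["Camera.fx"];
    float fy = fSettings["Camera.fy"];
    float cx = fSettings["Camera.cx"];
    float cy = fSettings["Camera.cy"];

    std::cout << "fx=" << fx << " fy=" << fy
              << " cx=" << cx << " cy=" << cy << std::endl;
    return 0;
}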
Then we run the command as shown above.
Let's check the topics of the ORB_SLAM node:
salm@salm:~$ rostopic list -v
Published topics:
* /ORB_SLAM/Map [visualization_msgs/Marker] 1 publisher
* /ORB_SLAM/Frame [sensor_msgs/Image] 1 publisher
* /rosout [rosgraph_msgs/Log] 1 publisher
* /tf [tf2_msgs/TFMessage] 1 publisher
* /rosout_agg [rosgraph_msgs/Log] 1 publisher
Subscribed topics:
* /camera/image_raw [sensor_msgs/Image] 1 subscriber
* /rosout [rosgraph_msgs/Log] 1 subscriber
(1) The node subscribes to the /camera/image_raw topic. The image frames processed by the ORB_SLAM node are published on the topic /ORB_SLAM/Frame, which can be viewed with the image_view package:
rosrun image_view image_view image:=/ORB_SLAM/Frame _autosize:=true
(2) The map built by the ORB_SLAM node is published on the topic /ORB_SLAM/Map, while the current camera pose and the global map origin are published through tf as the frames /ORB_SLAM/Camera and /ORB_SLAM/World. The map can be viewed by running rviz:
rosrun rviz rviz -d Data/rviz.rviz
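Besides viewing it in rviz, the camera pose can also be read programmatically from tf. A minimal sketch (assuming a package that depends on tf, and using the frame names /ORB_SLAM/World and /ORB_SLAM/Camera mentioned above; the node name is hypothetical):
#include <ros/ros.h>
#include <tf/transform_listener.h>

int main(int argc, char** argv)
{
    ros::init(argc, argv, "orb_slam_pose_listener");  // hypothetical node name
    ros::NodeHandle nh;

    tf::TransformListener listener;
    ros::Rate rate(10.0);
    while (nh.ok())
    {
        tf::StampedTransform transform;
        try
        {
            // Pose of the /ORB_SLAM/Camera frame expressed in /ORB_SLAM/World
            listener.lookupTransform("/ORB_SLAM/World", "/ORB_SLAM/Camera",
                                     ros::Time(0), transform);
            ROS_INFO("camera position: x=%.3f y=%.3f z=%.3f",
                     transform.getOrigin().x(),
                     transform.getOrigin().y(),
                     transform.getOrigin().z());
        }
        catch (tf::TransformException& ex)
        {
            ROS_WARN("%s", ex.what());
        }
        rate.sleep();
    }
    return 0;
}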
All of the results above can only be seen once something is publishing /camera/image_raw, so where do we get the /camera/image_raw topic from?
So first let's try publishing a classic still image on that topic and feeding it to the ORB_SLAM node:
(1) First enter your own workspace catkin_ws, then go into src and create the package with the catkin_create_pkg command.
References: http://wiki.ros.org/image_transport/Tutorials
http://wiki.ros.org/ROS/Tutorials/WritingPublisherSubscriber%28c%2B%2B%29
$ cd catkin_ws/
$ cd src/
$ catkin_create_pkg learning_image_transport image_transport cv_bridge
$ cd ..
$ catkin_make
$ cd src/learning_image_transport/src
$ vi my_publisher.cpp
and enter the following:
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <opencv2/highgui/highgui.hpp>
#include <cv_bridge/cv_bridge.h> // very useful package for converting between ROS and OpenCV images

int main(int argc, char** argv)
{
  ros::init(argc, argv, "image_publisher");
  ros::NodeHandle nh;
  image_transport::ImageTransport it(nh);
  image_transport::Publisher pub = it.advertise("camera/image_raw", 1); // the topic published here must match the one the ORB_SLAM node subscribes to

  //cv::Mat image = cv::imread(argv[1], CV_LOAD_IMAGE_COLOR);
  cv::Mat image = cv::imread("/home/salm/myopencv/lena.jpg", 1); // put the path of the image you want to load here
  cv::waitKey(30);
  sensor_msgs::ImagePtr msg = cv_bridge::CvImage(std_msgs::Header(), "bgr8", image).toImageMsg(); // wrap the OpenCV image into a ROS sensor_msgs/Image message

  ros::Rate loop_rate(5);
  while (nh.ok())
  {
    pub.publish(msg);
    ros::spinOnce();
    loop_rate.sleep();
  }
}
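As a side note, the commented-out cv::imread(argv[1], CV_LOAD_IMAGE_COLOR) line shows the alternative of passing the image path on the command line instead of hard-coding it, e.g. (the path below is only a placeholder):
$ rosrun learning_image_transport my_publisher /path/to/your_image.jpg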
Then add the following to the package's CMakeLists.txt:
add_executable(my_publisher src/my_publisher.cpp)
target_link_libraries(my_publisher ${catkin_LIBRARIES} ${OpenCV_LIBRARIES})
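One note: ${OpenCV_LIBRARIES} is only defined if OpenCV has been located first, so the CMakeLists.txt should also contain lines like the following (a minimal sketch, assuming a standard OpenCV installation):
find_package(OpenCV REQUIRED)
include_directories(${catkin_INCLUDE_DIRS} ${OpenCV_INCLUDE_DIRS})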
Run catkin_make again, and then check the result:
$ rosrun learning_image_transport my_publisher
$ rosrun ORB_SLAM ORB_SLAM Data/ORBvoc.txt Data/Settings.yaml
$ rosrun image_view image_view image:=/ORB_SLAM/Frame _autosize:=true
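If no frames appear, a quick sanity check is to confirm that the image is actually being published at the expected rate (about 5 Hz with the loop rate above):
$ rostopic hz /camera/image_raw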
This is the result I got. Since I am learning and implementing at the same time, I don't yet know which algorithms are used inside or exactly why it behaves this way; that is exactly why I keep studying.
(2) Now let's publish a dynamic topic instead: feed frames captured by a camera to ORB_SLAM and see the result.
As before, still in the package's src directory: $ vi image_converter.cpp
and enter the following:
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/imgproc/imgproc.hpp> // OpenCV image processing module
#include <opencv2/highgui/highgui.hpp> // OpenCV GUI (HighGUI) module
static const std::string OPENCV_WINDOW = "Image window"; // name of the OpenCV display window

class ImageConverter
{
  ros::NodeHandle nh_;                    // ROS node handle
  image_transport::ImageTransport it_;    // used to create image publishers and subscribers
  image_transport::Subscriber image_sub_; // input image subscriber
  image_transport::Publisher image_pub_;  // output image publisher

public:
  ImageConverter()
    : it_(nh_)
  {
    // Subscribe to the input video feed and publish the output video feed
    image_sub_ = it_.subscribe("/usb_cam/image_raw", 1,
      &ImageConverter::imageCb, this);
    //image_pub_ = it_.advertise("/image_converter/output_video", 1);
    image_pub_ = it_.advertise("/camera/image_raw", 1);

    cv::namedWindow(OPENCV_WINDOW); // OpenCV HighGUI call to create a display window on start-up
  }

  ~ImageConverter()
  {
    cv::destroyWindow(OPENCV_WINDOW); // destroy the display window on shutdown
  }

  void imageCb(const sensor_msgs::ImageConstPtr& msg)
  {
    cv_bridge::CvImagePtr cv_ptr;
    try
    {
      cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::BGR8);
    }
    catch (cv_bridge::Exception& e)
    {
      ROS_ERROR("cv_bridge exception: %s", e.what());
      return;
    }

    cv::imshow(OPENCV_WINDOW, cv_ptr->image);
    cv::waitKey(3);

    // Output modified video stream
    image_pub_.publish(cv_ptr->toImageMsg());
  }
};

int main(int argc, char** argv)
{
  ros::init(argc, argv, "image_converter");
  ImageConverter ic;
  ros::spin();
  return 0;
}
Likewise, add the following to CMakeLists.txt:
add_executable(image_converter src/image_converter.cpp)
target_link_libraries(image_converter ${catkin_LIBRARIES} ${OpenCV_LIBRARIES})
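Incidentally, instead of relaying the images through this extra node, standard ROS topic remapping should also let ORB_SLAM subscribe to the camera stream directly (assuming the node resolves /camera/image_raw through the normal name-remapping mechanism):
$ rosrun ORB_SLAM ORB_SLAM Data/ORBvoc.txt Data/Settings.yaml /camera/image_raw:=/usb_cam/image_raw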
Of course, this requires that the computer has a camera and that the usb_cam driver has been installed: https://github.com/bosch-ros-pkg/usb_cam
Then run the following commands:
$ roslaunch usb_cam usb_cam-test.launch
(You may hit some problems when starting the camera; for example, if you see "Webcam: expected picture but didn't get it...", change the "pixel_format" parameter in the launch file; possible values include mjpeg and yuyv.)
$ rosrun learning_image_transport image_converter
$ rosrun ORB_SLAM ORB_SLAM Data/ORBvoc.txt Data/Settings.yaml
$ rosrun image_view image_view image:=/ORB_SLAM/Frame _autosize:=true
All rights reserved. Please credit the source when reposting: http://www.cnblogs.com/li-yao7758258/p/5912663.html