In practice we usually capture images with a camera, so this time the goal is to stream camera images in ROS. I am using a USB webcam that works without installing any extra driver. First, list the USB devices currently connected to the computer:
~$ lsusb
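If you also want to know which device node the camera was assigned (on a typical Ubuntu setup), you can list the video devices; the number at the end of the node name (e.g. /dev/video0) is the index we will pass to the publisher node later:
~$ ls /dev/video*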
If your camera does need a driver, you can refer to the article I cited: http://blog.csdn.net/yake827/article/details/44983093
Below I describe the procedure step by step (as a reference for beginners; experienced users can skip ahead).
First, create a workspace:
~$ mkdir -p ~/ros_ws/src
~$ cd ~/ros_ws/
~$ catkin_make
~$ source devel/setup.bash
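Note that sourcing devel/setup.bash only affects the current terminal. To avoid repeating it in every new terminal, you can append it to your ~/.bashrc (this assumes the workspace path used above):
~$ echo "source ~/ros_ws/devel/setup.bash" >> ~/.bashrc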
Then create a package:
~$ cd ~/ros_ws/src
~$ catkin_create_pkg learning_image_transport roscpp std_msgs cv_bridge image_transport sensor_msgs
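catkin_create_pkg records these dependencies in the generated package.xml. The relevant entries look roughly like this (a sketch; the exact tag names depend on the package.xml format version, and the corresponding run/exec dependencies are generated alongside them):
<build_depend>roscpp</build_depend>
<build_depend>std_msgs</build_depend>
<build_depend>cv_bridge</build_depend>
<build_depend>image_transport</build_depend>
<build_depend>sensor_msgs</build_depend>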
Next, create two .cpp files in the src directory of the learning_image_transport package:
~$ cd ~/ros_ws/src/learning_image_transport/src/
~$ gedit my_publisher.cpp
Copy the following code into it:
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <opencv2/highgui/highgui.hpp>
#include <cv_bridge/cv_bridge.h>
#include <sstream> // for converting the command line parameter to integer
int main(int argc, char** argv)
{
  // Check if a video source has been passed as a parameter
  if(argv[1] == NULL)
  {
    ROS_INFO("Usage: my_publisher <video device index>\n");
    return 1;
  }
  ros::init(argc, argv, "image_publisher");
  ros::NodeHandle nh;
  image_transport::ImageTransport it(nh);
  image_transport::Publisher pub = it.advertise("camera/image", 1);
  // Convert the command line parameter for the video device index to an integer
  std::istringstream video_sourceCmd(argv[1]);
  int video_source;
  // Check if it is indeed a number
  if(!(video_sourceCmd >> video_source))
  {
    ROS_INFO("argv[1] is not a valid device index\n");
    return 1;
  }
  cv::VideoCapture cap(video_source);
  // Check if the video device can be opened with the given index
  if(!cap.isOpened())
  {
    ROS_INFO("cannot open video device\n");
    return 1;
  }
  cv::Mat frame;
  sensor_msgs::ImagePtr msg;
  ros::Rate loop_rate(5);
  while (nh.ok()) {
    cap >> frame;
    // Check that the grabbed frame actually has content
    if(!frame.empty()) {
      msg = cv_bridge::CvImage(std_msgs::Header(), "bgr8", frame).toImageMsg();
      pub.publish(msg);
      // cv::waitKey(1);
    }
    ros::spinOnce();
    loop_rate.sleep();
  }
}
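The loop above publishes at a hard-coded 5 Hz via ros::Rate(5). If you prefer to make the rate configurable, a minimal sketch using a private ROS parameter (a hypothetical ~fps parameter, not part of the original code) would replace the ros::Rate line with:
  ros::NodeHandle pnh("~");   // private namespace for this node's parameters
  int fps;
  pnh.param("fps", fps, 5);   // read ~fps, fall back to 5 Hz
  ros::Rate loop_rate(fps);
You could then start the node with, for example, rosrun learning_image_transport my_publisher 0 _fps:=15.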
Save the file, then create my_subscriber.cpp:
~$ gedit my_subscriber.cpp
Copy in the following code:
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <opencv2/highgui/highgui.hpp>
#include <cv_bridge/cv_bridge.h>
void imageCallback(const sensor_msgs::ImageConstPtr& msg)
{
  try
  {
    cv::imshow("view", cv_bridge::toCvShare(msg, "bgr8")->image);
    // cv::waitKey(30);
  }
  catch (cv_bridge::Exception& e)
  {
    ROS_ERROR("Could not convert from '%s' to 'bgr8'.", msg->encoding.c_str());
  }
}

int main(int argc, char **argv)
{
  ros::init(argc, argv, "image_listener");
  ros::NodeHandle nh;
  cv::namedWindow("view");
  cv::startWindowThread();
  image_transport::ImageTransport it(nh);
  image_transport::Subscriber sub = it.subscribe("camera/image", 1, imageCallback);
  ros::spin();
  cv::destroyWindow("view");
}
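A side note on the design: because both nodes go through image_transport rather than a plain ros::Publisher/ros::Subscriber, the camera/image topic is automatically offered in additional transports (e.g. camera/image/compressed) when the corresponding plugins such as compressed_image_transport are installed, and the subscriber can pick one at run time via its private image_transport parameter, for example:
~$ rosrun learning_image_transport my_subscriber _image_transport:=compressed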
Next, the packages involved and OpenCV have to be declared in the package's CMakeLists.txt.
Go back to the package directory:
~$ cd ~/ros_ws/src/learning_image_transport/
~$ gedit CMakeLists.txt
Add the following lines:
find_package(OpenCV REQUIRED)
# add the publisher example
add_executable(my_publisher src/my_publisher.cpp)
target_link_libraries(my_publisher ${catkin_LIBRARIES} ${OpenCV_LIBRARIES})
# add the subscriber example
add_executable(my_subscriber src/my_subscriber.cpp)
target_link_libraries(my_subscriber ${catkin_LIBRARIES} ${OpenCV_LIBRARIES})
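For orientation, after these additions the relevant part of CMakeLists.txt looks roughly like this (a sketch; the find_package(catkin ...) and catkin_package() lines were already generated by catkin_create_pkg, and adding ${OpenCV_INCLUDE_DIRS} to include_directories does not hurt):
find_package(catkin REQUIRED COMPONENTS
  roscpp std_msgs cv_bridge image_transport sensor_msgs)
find_package(OpenCV REQUIRED)
catkin_package()
include_directories(${catkin_INCLUDE_DIRS} ${OpenCV_INCLUDE_DIRS})
# publisher and subscriber executables
add_executable(my_publisher src/my_publisher.cpp)
target_link_libraries(my_publisher ${catkin_LIBRARIES} ${OpenCV_LIBRARIES})
add_executable(my_subscriber src/my_subscriber.cpp)
target_link_libraries(my_subscriber ${catkin_LIBRARIES} ${OpenCV_LIBRARIES})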
Now build the package:
~$ cd ~/ros_ws/
~$ catkin_make
Next, run the programs. First start ROS:
~$ roscore
Run the my_publisher node (if rosrun cannot find the package, source devel/setup.bash in that terminal first):
~$ rosrun learning_image_transport my_publisher 0
At this point you should see the camera's indicator light turn on. The argument 0 selects the default camera; if several cameras are connected, the second one is index 1, and so on.
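To double-check that frames are actually flowing, you can inspect the topic from another terminal with the standard rostopic tool; given the 5 Hz loop rate above, rostopic hz should report roughly 5 messages per second:
~$ rostopic list
~$ rostopic hz /camera/image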
Next, run the my_subscriber node to receive the images:
~$ rosrun learning_image_transport my_subscriber
If there are no errors, a window showing the live camera image will pop up.
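As an alternative to the custom subscriber, you can also view the stream with the stock image_view node (assuming the image_view package is installed):
~$ rosrun image_view image_view image:=/camera/image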