Below are several ways to drive a USB camera in ROS (most webcams support the UVC standard):
1 Using the uvc_camera package from the ROS package repository
1.1 Check the camera
lsusb
-------------------------------------
The output looks like this:
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 007: ID 046d:082b Logitech, Inc. Webcam C170
Bus 001 Device 006: ID 0461:4e2a Primax Electronics, Ltd
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
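Besides lsusb, you can also confirm that the kernel has created a video device node for the camera; if the v4l-utils package is installed, v4l2-ctl can list the devices by name:
ls /dev/video*
v4l2-ctl --list-devices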
1.2 Install the uvc_camera package
sudo apt-get install ros-kinetic-uvc-camera
(Substitute your own ROS distribution name for kinetic if you are on a different release.)
1.3 Install the image-related packages
sudo apt-get install ros-kinetic-image-*
sudo apt-get install ros-kinetic-rqt-image-view
1.4 Run the uvc_camera node
rosrun uvc_camera uvc_camera_node
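The node typically opens /dev/video0 by default. If your camera is a different device, or you want another resolution or frame rate, private parameters can be set on the command line. The parameter names below (device, width, height, fps) are assumptions based on the uvc_camera package and should be verified against your installed version:
rosrun uvc_camera uvc_camera_node _device:=/dev/video1 _width:=640 _height:=480 _fps:=30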
1.5 View the image
(1) View the image with the image_view node
rosrun image_view image_view image:=/image_raw
-------------------------------------
Note: the trailing option "image:=/image_raw" is a remapping that tells image_view which topic from the topic list to display as an image.
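If you are unsure of the topic name, you can first list the topics the camera node publishes and check that images are actually arriving (rostopic is part of the standard ROS command-line tools):
rostopic list
rostopic hz /image_raw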
(2) Check with the rqt_image_view node
rqt_image_view image:=/image_raw
(3) View with rviz
rviz
Add an Image display, then change the value of [Image] → [Image Topic] to "/image_raw".
The package installed with apt-get appears to contain only the executables, with no launch files or node source code, so to change parameters I built my own uvc_camera package.
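If you do wrap uvc_camera in your own package, a launch file is a convenient place to set the node's parameters. The following is a minimal sketch, assuming the node reads the private parameters device, width, height and fps; check these names against your version of the package:
<launch>
  <node name="uvc_camera" pkg="uvc_camera" type="uvc_camera_node" output="screen">
    <!-- Parameter names assumed from the uvc_camera package; verify against your version -->
    <param name="device" value="/dev/video0" />
    <param name="width" value="640" />
    <param name="height" value="480" />
    <param name="fps" value="30" />
  </node>
</launch>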
2 Using the usb_cam package
2.1 Install the usb_cam package
sudo apt-get install ros-kinetic-usb-cam
2.2 Run the launch file
roslaunch usb_cam usb_cam-test.launch
-------------------------------------
Note: the launch files are installed under /opt/ros/kinetic/share/usb_cam; you can find usb_cam-test.launch there and modify its parameters.
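The parameters most often changed in that launch file are the device and image format settings. The snippet below is a sketch of the relevant node block; parameter names such as video_device, image_width, image_height and pixel_format come from the usb_cam package documentation, so reconcile them with the launch file you actually have:
<node name="usb_cam" pkg="usb_cam" type="usb_cam_node" output="screen">
  <!-- Parameter names from the usb_cam package; check them against your installed launch file -->
  <param name="video_device" value="/dev/video0" />
  <param name="image_width" value="640" />
  <param name="image_height" value="480" />
  <param name="pixel_format" value="yuyv" />
  <param name="camera_frame_id" value="usb_cam" />
</node>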
3 Driving the USB camera with OpenCV
First, create a workspace:
$ mkdir -p ~/ros_ws/src
$ cd ~/ros_ws/
$ catkin_make
$ source devel/setup.bash
Then create a package:
$ cd ~/ros_ws/src
$ catkin_create_pkg learning_image_transport roscpp std_msgs cv_bridge image_transport sensor_msgs
Next, create two cpp files in the src directory of the learning_image_transport package. Start with my_publisher.cpp:
$ cd ~/ros_ws/src/learning_image_transport/src/
$ gedit my_publisher.cpp
Copy the following code into it:
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <opencv2/highgui/highgui.hpp>
#include <cv_bridge/cv_bridge.h>
#include <sstream> // for converting the command line parameter to integer
int main(int argc, char** argv)
{
  // Check if a video source has been passed as a parameter
  if(argv[1] == NULL)
  {
    ROS_INFO("argv[1]=NULL: please pass the video device index\n");
    return 1;
  }

  ros::init(argc, argv, "image_publisher");
  ros::NodeHandle nh;
  image_transport::ImageTransport it(nh);
  image_transport::Publisher pub = it.advertise("camera/image", 1);

  // Convert the command line parameter for the video device to an integer
  std::istringstream video_sourceCmd(argv[1]);
  int video_source;
  // Check that it is indeed a number
  if(!(video_sourceCmd >> video_source))
  {
    ROS_INFO("argv[1] is not a valid device index\n");
    return 1;
  }

  cv::VideoCapture cap(video_source);
  // Check that the video device can be opened with the given index
  if(!cap.isOpened())
  {
    ROS_INFO("cannot open video device\n");
    return 1;
  }

  cv::Mat frame;
  sensor_msgs::ImagePtr msg;
  ros::Rate loop_rate(5);

  while (nh.ok())
  {
    cap >> frame;
    // Check that the grabbed frame actually has content
    if(!frame.empty())
    {
      msg = cv_bridge::CvImage(std_msgs::Header(), "bgr8", frame).toImageMsg();
      pub.publish(msg);
    }
    ros::spinOnce();
    loop_rate.sleep();
  }
}
After saving, continue by creating my_subscriber.cpp:
$ gedit my_subscriber.cpp
Copy in the following code:
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <opencv2/highgui/highgui.hpp>
#include <cv_bridge/cv_bridge.h>
void imageCallback(const sensor_msgs::ImageConstPtr& msg)
{
  try
  {
    cv::imshow("view", cv_bridge::toCvShare(msg, "bgr8")->image);
    cv::waitKey(30);
  }
  catch (cv_bridge::Exception& e)
  {
    ROS_ERROR("Could not convert from '%s' to 'bgr8'.", msg->encoding.c_str());
  }
}

int main(int argc, char **argv)
{
  ros::init(argc, argv, "image_listener");
  ros::NodeHandle nh;
  cv::namedWindow("view");
  cv::startWindowThread();
  image_transport::ImageTransport it(nh);
  image_transport::Subscriber sub = it.subscribe("camera/image", 1, imageCallback);
  ros::spin();
  cv::destroyWindow("view");
}
Next, declare the packages involved and OpenCV in CMakeLists.txt. Go back to the package directory:
$ cd ~/ros_ws/src/learning_image_transport/
$ gedit CMakeLists.txt
Add the following lines:
find_package(OpenCV REQUIRED)
# add the publisher example
add_executable(my_publisher src/my_publisher.cpp)
target_link_libraries(my_publisher ${catkin_LIBRARIES} ${OpenCV_LIBRARIES})
# add the subscriber example
add_executable(my_subscriber src/my_subscriber.cpp)
target_link_libraries(my_subscriber ${catkin_LIBRARIES} ${OpenCV_LIBRARIES})
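Note that CMakeLists.txt must also find the catkin components and set the include directories. If the package was created with catkin_create_pkg as above, lines like the following are mostly generated already near the top of the file; this is only a sketch, so add or uncomment just what is missing:
find_package(catkin REQUIRED COMPONENTS roscpp std_msgs cv_bridge image_transport sensor_msgs)
include_directories(${catkin_INCLUDE_DIRS} ${OpenCV_INCLUDE_DIRS})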
Now build the package:
$ cd ~/ros_ws/
$ catkin_make
Next, run the programs. First start the ROS master:
$ roscore
Run the my_publisher node (if the node cannot be found, run source devel/setup.bash first):
$ rosrun learning_image_transport my_publisher 0
At this point the camera's LED should light up. The argument 0 refers to the default camera; if there are multiple cameras, the second one is 1, and so on.
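To confirm that frames are actually being published, you can check the topic and its rate with the standard ROS command-line tools:
$ rostopic list
$ rostopic hz /camera/image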
Then run the my_subscriber node to receive the images:
$ rosrun learning_image_transport my_subscriber
If there are no errors, an image window named "view" will pop up showing the live camera feed.
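You can also view the published images without the custom subscriber by using the rqt_image_view tool from section 1, remapped to this node's topic:
$ rqt_image_view image:=/camera/image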