In the previous articles I described how to set up OpenCV 3.4.1 on Ubuntu 16.04 and worked through its basics by following the official OpenCV tutorials. Most of the images used in those experiments, however, came either from the internet or from a laptop webcam. More complex tasks call for a dedicated vision sensor, so this post records how to use the Kinect 2.0 RGB-D sensor on Ubuntu 16.04.
On Windows, Kinect 2.0 has an official driver and an SDK that supports application development. On Linux there is no official support, so we have to find the corresponding open-source driver on GitHub and install and debug it ourselves.
The open-source Kinect driver packages live under the OpenKinect organization: https://github.com/OpenKinect
It hosts two driver projects: libfreenect for the first-generation Kinect and libfreenect2 for the second-generation Kinect; this article uses the latter. The instructions below are for Ubuntu; for other systems, see the detailed documentation at https://github.com/OpenKinect/libfreenect2.
One caveat from the upstream README: virtual machines likely do not work, because USB 3.0 isochronous transfer is quite delicate.
Since we only care about Ubuntu here, only the requirements and installation steps for Ubuntu are listed:
cd ~  # choose where to clone the sources; the home directory is used here, but any other directory works
git clone https://github.com/OpenKinect/libfreenect2.git
cd libfreenect2
sudo apt-get install build-essential cmake pkg-config
sudo apt-get install libusb-1.0-0-dev
sudo apt-get install libturbojpeg libjpeg-turbo8-dev
sudo apt-get install libglfw3-dev
sudo apt-get install libopenni2-dev
mkdir build && cd build
cmake .. -DCMAKE_INSTALL_PREFIX=/usr/local/  # install prefix changed from the location suggested in the upstream README
make
sudo make install  # install as root; installing with normal user privileges sometimes fails
sudo cp ../platform/linux/udev/90-kinect2.rules /etc/udev/rules.d/
./bin/Protonect  # reconnect the Kinect, then run this test program
sudo apt-get install openni2-utils && sudo make install-openni2 && NiViewer2  # optional OpenNI2 test
When reconnecting the device, simply unplugging and replugging the USB cable may not be enough. The most reliable method is to power the Kinect off and back on; the USB connector itself does not need to be unplugged during the power cycle.
If all of the steps above go through without problems, you should see output similar to the following:
Running the optional test command produces a similar image, but in full screen; press ESC to exit (no screenshot included here). At this point the driver package is installed successfully. The next part, which is the one that really needs attention, is how to acquire data from the sensor in your own code.
Reading Kinect data programmatically on Ubuntu relies on the library functions provided by the driver package. The following explains in detail how to acquire and process the data in a program. First, these are the headers that need to be included:
#include <libfreenect2/libfreenect2.hpp>
#include <libfreenect2/frame_listener_impl.h>
#include <libfreenect2/registration.h>
#include <libfreenect2/packet_pipeline.h>
#include <libfreenect2/logger.h> // for log output; can be omitted depending on your needs
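Since libfreenect2/logger.h is listed as optional, here is a minimal sketch of what it is for: placing the following call once at the top of main(), before the device is opened, routes the library's internal log messages to the console (the Debug level here is just an example; use a lower level such as Logger::Info if it is too verbose).
// Optional: enable libfreenect2's console logger (call once, before opening the device).
libfreenect2::setGlobalLogger(
    libfreenect2::createConsoleLogger(libfreenect2::Logger::Debug));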
The implementation below is based on the code at the following URL:
http://answers.opencv.org/question/76468/opencvkinect-onekinect-for-windows-v2linuxlibfreenect2/
This step is required in every case. For a single sensor, the following code can be used; for convenience the objects are defined here as global variables:
libfreenect2::Freenect2 freenect2;
libfreenect2::Freenect2Device *dev = 0;
libfreenect2::PacketPipeline *pipeline = 0;
//! [discovery]
if(freenect2.enumerateDevices() == 0)
{
std::cout << "no device connected!" << std::endl;
return -1;
}
string serial = freenect2.getDefaultDeviceSerialNumber();
if(serial == "") return -1;
cout<<"The serial number is :"<//! [discovery]
enum Processor { cl, gl, cpu }; // which packet pipeline to use: OpenCL, OpenGL, or CPU
int depthProcessor = Processor::cl;
if(depthProcessor == Processor::cpu)
{
if(!pipeline)
{
//! [pipeline]
pipeline = new libfreenect2::CpuPacketPipeline();
//! [pipeline]
}
}
else
if (depthProcessor == Processor::gl)
{
#ifdef LIBFREENECT2_WITH_OPENGL_SUPPORT
if(!pipeline)
pipeline = new libfreenect2::OpenGLPacketPipeline();
#else
std::cout << "OpenGL pipeline is not supported!" << std::endl;
#endif
}
else
if (depthProcessor == Processor::cl)
{
#ifdef LIBFREENECT2_WITH_OPENCL_SUPPORT
if(!pipeline)
pipeline = new libfreenect2::OpenCLPacketPipeline();
#else
std::cout << "OpenCL pipeline is not supported!" << std::endl;
#endif
}
if(pipeline)
{
//! [open]
dev = freenect2.openDevice(serial, pipeline);
//! [open]
}
else
{
dev = freenect2.openDevice(serial);
}
if(dev == 0)
{
std::cout << "failure opening device!" << std::endl;
return -1;
}
//! [listeners]
libfreenect2::SyncMultiFrameListener listener(libfreenect2::Frame::Color |libfreenect2::Frame::Depth |libfreenect2::Frame::Ir);
libfreenect2::FrameMap frames;
dev->setColorFrameListener(&listener);
dev->setIrAndDepthFrameListener(&listener);
//! [listeners]
//! [start]
dev->start();
std::cout << "device serial: " << dev->getSerialNumber() << std::endl;
std::cout << "device firmware: " << dev->getFirmwareVersion() << std::endl;
//! [start]
libfreenect2::Registration* registration = new libfreenect2::Registration(dev->getIrCameraParams(), dev->getColorCameraParams());
libfreenect2::Frame undistorted(512, 424, 4), registered(512, 424, 4), depth2rgb(1920, 1080 + 2, 4); // 4 bytes per pixel; the big depth frame must be 1920x1082 as required by Registration::apply()
Mat rgbmat, depthmat, irmat, depthmatUndistorted, rgbd, rgbd2;
while(!(waitKey(1)==27))
{
listener.waitForNewFrame(frames);
libfreenect2::Frame *rgb = frames[libfreenect2::Frame::Color];
libfreenect2::Frame *ir = frames[libfreenect2::Frame::Ir];
libfreenect2::Frame *depth = frames[libfreenect2::Frame::Depth];
//! [loop start]
rgbmat = cv::Mat(rgb->height, rgb->width, CV_8UC4, rgb->data);
irmat = cv::Mat(ir->height, ir->width, CV_32FC1, ir->data);
depthmat = cv::Mat(depth->height, depth->width, CV_32FC1, depth->data);
cv::imshow("rgb", rgbmat);
cv::imshow("ir", irmat / 4500.0f);
cv::imshow("depth", depthmat / 4500.0f);
// //! [registration]
// registration->apply(rgb, depth, &undistorted, &registered, true, &depth2rgb);
// //! [registration]
// cv::Mat(undistorted.height, undistorted.width, CV_32FC1, undistorted.data).copyTo(depthmatUndistorted);
// cv::Mat(registered.height, registered.width, CV_8UC4, registered.data).copyTo(rgbd);
// cv::Mat(depth2rgb.height, depth2rgb.width, CV_32FC1, depth2rgb.data).copyTo(rgbd2);
// cv::imshow("undistorted", depthmatUndistorted / 4500.0f);
// cv::imshow("registered", rgbd);
// cv::imshow("depth2RGB", rgbd2 / 4500.0f);
//! [loop end]
listener.release(frames);
}
//! [loop end]
//! [stop]
dev->stop();  // stop the data streams
dev->close(); // close the device
//! [stop]
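If the registration block above is enabled, the undistorted and registered frames can also be turned into a colored point cloud. The sketch below is not part of the original example; it only relies on libfreenect2's Registration::getPointXYZRGB(), which returns the 3D coordinates of one depth pixel (in meters, in the depth camera frame) together with its packed color. The helper forEachPoint and its callback are hypothetical names used for illustration:
#include <cmath>
#include <libfreenect2/registration.h>

// Walk the 512x424 registered depth image and hand every valid 3D point
// (x, y, z in meters, depth-camera frame) plus its packed RGB value to a callback.
template <typename Fn>
void forEachPoint(libfreenect2::Registration* registration,
                  const libfreenect2::Frame& undistorted,
                  const libfreenect2::Frame& registered,
                  Fn&& fn)
{
    float x, y, z, rgb;
    for (int r = 0; r < static_cast<int>(undistorted.height); ++r)
        for (int c = 0; c < static_cast<int>(undistorted.width); ++c)
        {
            registration->getPointXYZRGB(&undistorted, &registered, r, c, x, y, z, rgb);
            if (!std::isfinite(z))
                continue; // invalid depth measurement
            fn(x, y, z, rgb);
        }
}
It would be called right after registration->apply(...), e.g. forEachPoint(registration, undistorted, registered, [](float x, float y, float z, float rgb){ /* push into a point cloud */ });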
Putting everything together, the complete kinect.cpp looks like this:
#include <iostream>
#include <stdio.h>
#include <iomanip>
#include <time.h>
#include <signal.h>
#include <opencv2/opencv.hpp>
#include <libfreenect2/libfreenect2.hpp>
#include <libfreenect2/frame_listener_impl.h>
#include <libfreenect2/registration.h>
#include <libfreenect2/packet_pipeline.h>
#include <libfreenect2/logger.h>
using namespace std;
using namespace cv;
enum Processor { cl, gl, cpu };
int main(int argc, char *argv[])
{
//! [context]
libfreenect2::Freenect2 freenect2;
libfreenect2::Freenect2Device *dev = nullptr;
libfreenect2::PacketPipeline *pipeline = nullptr;
//! [context]
//! [discovery]
if(freenect2.enumerateDevices() == 0)
{
std::cout << "no device connected!" << std::endl;
return -1;
}
string serial = freenect2.getDefaultDeviceSerialNumber();
if(serial == "") return -1;
cout<<"The serial number is :"<//! [discovery]
int depthProcessor = Processor::cl;
if(depthProcessor == Processor::cpu)
{
if(!pipeline)
{
//! [pipeline]
pipeline = new libfreenect2::CpuPacketPipeline();
//! [pipeline]
}
}
else
if (depthProcessor == Processor::gl)
{
#ifdef LIBFREENECT2_WITH_OPENGL_SUPPORT
if(!pipeline)
pipeline = new libfreenect2::OpenGLPacketPipeline();
#else
std::cout << "OpenGL pipeline is not supported!" << std::endl;
#endif
}
else
if (depthProcessor == Processor::cl)
{
#ifdef LIBFREENECT2_WITH_OPENCL_SUPPORT
if(!pipeline)
pipeline = new libfreenect2::OpenCLPacketPipeline();
#else
std::cout << "OpenCL pipeline is not supported!" << std::endl;
#endif
}
if(pipeline)
{
//! [open]
dev = freenect2.openDevice(serial, pipeline);
//! [open]
}
else
{
dev = freenect2.openDevice(serial);
}
if(dev == 0)
{
std::cout << "failure opening device!" << std::endl;
return -1;
}
//! [listeners]
libfreenect2::SyncMultiFrameListener listener(libfreenect2::Frame::Color |libfreenect2::Frame::Depth |libfreenect2::Frame::Ir);
libfreenect2::FrameMap frames;
dev->setColorFrameListener(&listener);
dev->setIrAndDepthFrameListener(&listener);
//! [listeners]
//! [start]
dev->start();
std::cout << "device serial: " << dev->getSerialNumber() << std::endl;
std::cout << "device firmware: " << dev->getFirmwareVersion() << std::endl;
//! [start]
libfreenect2::Registration* registration = new libfreenect2::Registration(dev->getIrCameraParams(), dev->getColorCameraParams());
libfreenect2::Frame undistorted(512, 424, 4), registered(512, 424, 4), depth2rgb(1920, 1080 + 2, 4);
Mat rgbmat, depthmat, irmat, depthmatUndistorted, rgbd, rgbd2;
while(!(waitKey(1)==27))
{
listener.waitForNewFrame(frames);
libfreenect2::Frame *rgb = frames[libfreenect2::Frame::Color];
libfreenect2::Frame *ir = frames[libfreenect2::Frame::Ir];
libfreenect2::Frame *depth = frames[libfreenect2::Frame::Depth];
//! [loop start]
rgbmat = cv::Mat(rgb->height, rgb->width, CV_8UC4, rgb->data);
irmat = cv::Mat(ir->height, ir->width, CV_32FC1, ir->data);
depthmat = cv::Mat(depth->height, depth->width, CV_32FC1, depth->data);
cv::imshow("rgb", rgbmat);
cv::imshow("ir", irmat / 4500.0f);
cv::imshow("depth", depthmat / 4500.0f);
// //! [registration]
// registration->apply(rgb, depth, &undistorted, &registered, true, &depth2rgb);
// //! [registration]
// cv::Mat(undistorted.height, undistorted.width, CV_32FC1, undistorted.data).copyTo(depthmatUndistorted);
// cv::Mat(registered.height, registered.width, CV_8UC4, registered.data).copyTo(rgbd);
// cv::Mat(depth2rgb.height, depth2rgb.width, CV_32FC1, depth2rgb.data).copyTo(rgbd2);
// cv::imshow("undistorted", depthmatUndistorted / 4500.0f);
// cv::imshow("registered", rgbd);
// cv::imshow("depth2RGB", rgbd2 / 4500.0f);
//! [loop end]
listener.release(frames);
}
//! [loop end]
//! [stop]
dev->stop();
dev->close();
//! [stop]
delete registration;
std::cout << "Goodbye World!" << std::endl;
return 0;
}
The CMakeLists.txt used to build this program is as follows:
# cmake needs this line
cmake_minimum_required(VERSION 2.8)
# Enable C++11
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED TRUE)
# Define project name
project(kinect)
# Find OpenCV, you may need to set OpenCV_DIR variable
# to the absolute path to the directory containing OpenCVConfig.cmake file
# via the command line or GUI
find_package(OpenCV REQUIRED)
find_package(PkgConfig REQUIRED)
find_package(freenect2 REQUIRED)
include_directories( ${OpenCV_INCLUDE_DIRS} )
include_directories( ${freenect2_INCLUDE_DIRS} )
# Declare the executable target built from your sources
add_executable(kinect kinect.cpp)
# Link your application against freenect2 and the OpenCV libraries
target_link_libraries(kinect ${freenect2_LIBRARIES} ${OpenCV_LIBS})
Getting from installing the Kinect driver, through programming against the installed driver package, to converting the data into OpenCV's Mat format involved quite a few pitfalls. The driver's installation and usage instructions are really not very clear, so I had to dig through a lot of material and piece this code together bit by bit. Later I found the code at the reference URL above, and after studying it I felt much more confident about what I had assembled.
Then there was the CMakeLists file, which I did not understand well either, so it also took a long time. In particular, I had initially installed the driver package exactly as described on GitHub, where the command is cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/freenect2, which installs everything under the home directory. The libraries installed there need extra environment variables set or they cannot be found at build time, and even after much fiddling there were still problems. In the end I changed the install prefix myself, as shown earlier, and that finally resolved the build failures.
Thinking back on it, it was painful. Why not just use Windows? It is less hassle, there is far more documentation, and once Visual Studio is set up everything just works. Anyway, here is a screenshot of the result, with the small window at the bottom right covering my face (^_^)
Looking at it now, the pitfalls on the programming side have finally been dealt with!
Kinect support in ROS is set up by installing the IAI Kinect2 package, which provides the tools and libraries needed to connect ROS and the Kinect.
The IAI Kinect2 package provides the following components: a calibration tool for the Kinect v2 (kinect2_calibration), a library for depth registration (kinect2_registration), the bridge between libfreenect2 and ROS (kinect2_bridge), and a viewer for the images and point clouds (kinect2_viewer).
As these components suggest, using the IAI Kinect2 package requires libfreenect2 to be installed on the system, which is exactly the driver package we installed in the first step.
cd ~/catkin_ws/src/  # enter the src directory of the ROS catkin workspace
git clone https://github.com/code-iai/iai_kinect2.git  # clone the source package into the current directory
cd iai_kinect2
rosdep install -r --from-paths .
cd ~/catkin_ws
catkin build
After the build above finishes, run the source command so that the newly built packages are visible to the current ROS session:
source devel/setup.bash
Next, start the bridge with the launch command:
roslaunch kinect2_bridge kinect2_bridge.launch
PS: remember to open another terminal and start roscore before launching the bridge.
You should then see information like that shown below,
indicating that the service is up and waiting for clients to connect.
Finally, run the corresponding ROS viewer node:
rosrun kinect2_viewer kinect2_viewer
which brings up a view like this:
OK! With that, the bridge between ROS and the Kinect is up and running as well.
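To consume the Kinect data in your own node instead of the provided viewer, you can subscribe to the image topics published by kinect2_bridge. The following is only a minimal sketch: the topic name /kinect2/qhd/image_color_rect is the usual default, but check rostopic list on your system, and the node and window names here are made up for this example.
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/opencv.hpp>

// Convert each incoming color image to cv::Mat and display it.
void colorCallback(const sensor_msgs::ImageConstPtr& msg)
{
    cv::Mat color = cv_bridge::toCvCopy(msg, "bgr8")->image;
    cv::imshow("kinect2 color", color);
    cv::waitKey(1);
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "kinect2_color_listener"); // hypothetical node name
    ros::NodeHandle nh;
    // Topic name assumed from the default kinect2_bridge output; verify with `rostopic list`.
    ros::Subscriber sub = nh.subscribe("/kinect2/qhd/image_color_rect", 1, colorCallback);
    ros::spin();
    return 0;
}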