This rgbd-slam code is adapted from Gao Xiang's code; for details you can refer to the following tutorial series on ncnynl:
https://www.ncnynl.com/category/rgbd-slam/
Below I describe my implementation in seven steps, along with the bugs you may run into on the way and how to fix them.
This is a second-hand Kinect bought on Taobao for a little over 300 RMB, and it works fine. You may want to test the camera with its SDK under Windows first to make sure nothing is wrong with it.
The round window on the left is the infrared projector, the one in the middle is an ordinary rgb camera, and the one on the right is the infrared camera. The rgb camera captures the color image; the infrared camera produces the depth image from the structured light emitted by the infrared projector. Our vslam uses only the color and depth images obtained from the camera.
opencv3 (opencv4 is not complete enough yet; some modules are still missing)
https://www.cnblogs.com/arkenstone/p/6490017.html
pcl1.8
https://www.cnblogs.com/lifeofershisui/p/9037829.html
Pay particular attention to step 4 below.
Installing and configuring pcl on ubuntu16.04
1. Install dependencies
sudo apt-get update
sudo apt-get install git build-essential linux-libc-dev
sudo apt-get install cmake cmake-gui
sudo apt-get install libusb-1.0-0-dev libusb-dev libudev-dev
sudo apt-get install mpi-default-dev openmpi-bin openmpi-common
sudo apt-get install libflann1.8 libflann-dev
sudo apt-get install libeigen3-dev
sudo apt-get install libboost-all-dev
sudo apt-get install libvtk5.10-qt4 libvtk5.10 libvtk5-dev
sudo apt-get install libqhull* libgtest-dev
sudo apt-get install freeglut3-dev pkg-config
sudo apt-get install libxmu-dev libxi-dev
sudo apt-get install mono-complete
sudo apt-get install qt-sdk openjdk-8-jdk openjdk-8-jre
# not sure whether the install order below matters
sudo apt-get install libproj-dev
sudo apt-get install libpcl-dev pcl-tools
2. Download and build pcl
git clone https://github.com/PointCloudLibrary/pcl.git
cd pcl
mkdir release
cd release
cmake -DCMAKE_BUILD_TYPE=None -DCMAKE_INSTALL_PREFIX=/usr -DBUILD_GPU=ON -DBUILD_apps=ON -DBUILD_examples=ON ..
make
sudo make install
3. Install PCLVisualizer
sudo apt-get install libopenni-dev
sudo apt-get install libopenni2-dev
4. A bug you will hit when a project uses the pcl library on ubuntu16.04 (!!!)
Fix: in CMakeLists.txt, add list(REMOVE_ITEM PCL_LIBRARIES "vtkproj4") before the line include_directories( ${PCL_INCLUDE_DIRS} )
eigen
https://www.cnblogs.com/newneul/p/8256803.html
https://blog.csdn.net/fsfengqingyangheihei/article/details/72897176
g2o
https://blog.csdn.net/slzlincent/article/details/86555166
https://blog.csdn.net/qq_33591712/article/details/82807814
When viewing the camera trajectory with g2o_viewer, you may run into the following problem.
Fix: https://blog.csdn.net/sinat_34156619/article/details/86489199
You might think you can just plug the Kinect into the computer and it will open. You are too naive.
https://blog.csdn.net/u013453604/article/details/48013959
Install openni, NITE and Sensor, then have a test with NiViewer.
If it opens, congratulations my friend, you get to leave work early today.
Go clone it yourself:
https://github.com/gaoxiang12/rgbd-slam-tutorial-gx
Once the download succeeds, enter the directory:
cd "part VII"
mkdir build
cd build
cmake ..
make
If the build fails, there are generally two causes: 1. the header files and libraries cannot be found, or 2. the wrong header files and libraries are found.
For 1, either the software was not actually installed successfully, or the environment variables are not configured.
If you are sure the software really did install successfully, go look for the library's headers in /usr/include and /usr/local/include,
and for the library files in /usr/lib and /usr/local/lib; Ubuntu libraries usually end up in these directories (personal experience; I cannot rule out other locations).
Then add them to the CMakeLists.txt under the project's src:
Header files
include_directories( <header directory> )
e.g. include_directories( /usr/local/include/g2o )
Library files
TARGET_LINK_LIBRARIES( <executable> <library path and file name> )
e.g. TARGET_LINK_LIBRARIES( main /usr/lib/libOpenNI.so )
It is worth getting to know cmake a bit; it is a very practical tool for C++ development, and Gao Xiang's vslam book (14 Lectures on Visual SLAM) has a short introduction to its usage.
For 2, this one is more of a headache: you may have installed the wrong version of something. The problems vary a lot, and the practical fix is to copy the error message from the terminal and search for it.
Even after the code builds, it cannot run yet: you still need a dataset to test with.
Dataset download: http://yun.baidu.com/s/1i33uvw5
With the dataset in place, change the dataset directory in the code and you can test.
Run bin/slam
When it finishes, view the result with pcl_viewer result.pcd
Quite a sense of achievement, right? The software part is done.
I will only give a rough overview here; to go a bit deeper, see the vslam tutorials below:
https://www.jianshu.com/p/ae307655b51b
https://www.ncnynl.com/category/rgbd-slam/
To go deeper still, read Gao Xiang's 14 Lectures on Visual SLAM.
vslam has four main parts: the front end, the back end, loop closure detection, and mapping.
Front end: the vision part. It computes the change in camera position and orientation from the difference between two images; there are roughly two families of methods, feature matching and optical flow.
Back end: the pose changes computed by the front end accumulate error over time; the back end uses nonlinear optimization to reduce that error.
Loop closure detection: the camera sometimes sees a target it has seen before, but it does not know it has returned to a previous location. Loop closure detection compares the current image against earlier ones; if the similarity is high, the camera concludes it is back at a previous position. This prevents the same object from being mapped in two different places.
Mapping: a subset of the front end's frames are selected as keyframes, and the map is built from those (if you built the map from every frame, memory would blow up quickly; the program would not survive a minute). Mapping usually produces a point cloud, which is intuitive and looks good; another option is the octree map, whose advantage is a very small footprint. Neither kind of map can be used directly for autonomous robot navigation, though, because the robot cannot tell from them which places are blocked and impassable.
main.c
#include <stdlib.h>
#include <iostream>
#include <string>
#include <XnCppWrapper.h> //the secret to opening the Kinect is here
#include "opencv2/opencv.hpp"
using namespace std;
using namespace cv;
void CheckOpenNIError( XnStatus result, string status )
{
if( result != XN_STATUS_OK )
cerr << status << " Error: " << xnGetStatusString( result ) << endl;
}
int main( int argc, char** argv )
{
XnStatus result = XN_STATUS_OK;
xn::DepthMetaData depthMD;
xn::ImageMetaData imageMD;
#define height 480
#define width 640
//OpenCV
IplImage* imgDepth16u=cvCreateImage(cvSize(width,height),IPL_DEPTH_16U,1);
IplImage* imgRGB8u=cvCreateImage(cvSize(width,height),IPL_DEPTH_8U,3);
IplImage* depthShow=cvCreateImage(cvSize(width,height),IPL_DEPTH_8U,1);
IplImage* imageShow=cvCreateImage(cvSize(width,height),IPL_DEPTH_8U,3);
cvNamedWindow("depth",1);
cvNamedWindow("image",1);
char key=0;
//【2】
// context
xn::Context context;
result = context.Init();
CheckOpenNIError( result, "initialize context" );
// creategenerator
xn::DepthGenerator depthGenerator;
result = depthGenerator.Create( context );
CheckOpenNIError( result, "Create depth generator" );
xn::ImageGenerator imageGenerator;
result = imageGenerator.Create( context );
CheckOpenNIError( result, "Create image generator" );
//【3】
//map mode
XnMapOutputMode mapMode;
mapMode.nXRes = width;
mapMode.nYRes = height;
mapMode.nFPS = 30;
result = depthGenerator.SetMapOutputMode( mapMode );
result = imageGenerator.SetMapOutputMode( mapMode );
//【4】
// correct view port
depthGenerator.GetAlternativeViewPointCap().SetViewPoint( imageGenerator );
//【5】
//read data
result = context.StartGeneratingAll();
//【6】
result = context.WaitNoneUpdateAll();
cout<<"init OK!"<<endl;
// display loop (truncated in the original post; restored following the referenced OpenNI tutorial)
while( (key!=27) && !(result = context.WaitNoneUpdateAll()) )
{
//get meta data
depthGenerator.GetMetaData(depthMD);
imageGenerator.GetMetaData(imageMD);
//copy the raw 16-bit depth buffer and scale it down for display
memcpy(imgDepth16u->imageData, depthMD.Data(), 640*480*2);
cvConvertScale(imgDepth16u, depthShow, 255/4096.0, 0);
//copy the rgb buffer and swap channels for OpenCV's BGR order
memcpy(imgRGB8u->imageData, imageMD.Data(), 640*480*3);
cvCvtColor(imgRGB8u, imageShow, CV_RGB2BGR);
cvShowImage("depth", depthShow);
cvShowImage("image", imageShow);
key = cvWaitKey(20);
}
//clean up
cvDestroyWindow("depth");
cvDestroyWindow("image");
context.StopGeneratingAll();
context.Release();
return 0;
}
CMakelists.txt
cmake_minimum_required(VERSION 2.6)
project(openkinect)
find_package( OpenCV REQUIRED )
message( ${OpenCV_INCLUDE_DIRS} )
include_directories( ${OpenCV_INCLUDE_DIRS} )
include_directories( /usr/include/ni )# openni header directory
add_executable(openkinect main.cpp)
target_link_libraries( openkinect ${OpenCV_LIBS} /usr/lib/libOpenNI.so )# openni library file
install(TARGETS openkinect RUNTIME DESTINATION bin)
mkdir build
cd build
cmake ..
make
Run ./openkinect
Go into the sample code's part V; the front-end mapping code is visualOdometry.cpp under src.
Open this file and modify it: replace reading images from the dataset with reading them from the Kinect. Here is my modified code.
visualOdometry.cpp
/*************************************************************************
> File Name: rgbd-slam-tutorial-gx/part V/src/visualOdometry.cpp
> Author: xiang gao
> Mail: [email protected]
> Created Time: 2015年08月01日 星期六 15时35分42秒
************************************************************************/
#include <iostream>
#include <fstream>
#include <sstream>
using namespace std;
#include "slamBase.h"
using namespace cv;
#include <pcl/visualization/cloud_viewer.h>
#include <XnCppWrapper.h>
// read one frame of data for a given index
FRAME readFrame( int index, ParameterReader& pd );
// measure the magnitude of a motion
double normofTransform( cv::Mat rvec, cv::Mat tvec );
void CheckOpenNIError( XnStatus result, string status )
{
if( result != XN_STATUS_OK )
cerr << status << " Error: " << xnGetStatusString( result ) << endl;
}
FRAME myReadFrame(int index,
xn::DepthMetaData& depthMD,
xn::ImageMetaData& imageMD,
xn::DepthGenerator& depthGenerator,
xn::ImageGenerator& imageGenerator
)
{
FRAME f;
//get meta data
depthGenerator.GetMetaData(depthMD);
imageGenerator.GetMetaData(imageMD);
if(depthMD.Data()!=NULL && imageMD.Data()!=NULL)
{
//method [1]: wrap the raw buffers in a Mat
//convert DepthMetaData to Mat
unsigned short* p = (unsigned short*) depthMD.Data();
Mat depthMat1(480,640,CV_16UC1,p); // depth pixels are unsigned 16-bit
Mat depthMatShow1(480,640,CV_8UC1);
convertScaleAbs(depthMat1,depthMatShow1,255/4096.0);// this step matters: scale depth down to 8 bits for display
normalize(depthMatShow1,depthMatShow1,255,0,CV_MINMAX);
imshow("testDepthMat",depthMatShow1);
//convert ImageMetaData to Mat
uchar *q = (uchar *) imageMD.Data();
Mat rgbMat1(480,640,CV_8UC3,q);
Mat rgbMatShow1;
cvtColor(rgbMat1,rgbMatShow1,CV_RGB2BGR);
imshow("testColorMat",rgbMatShow1);
// note: read the data in the same order it is generated
f.rgb = rgbMatShow1;
f.depth = depthMat1.clone(); // clone: the raw buffer is owned by OpenNI and will be overwritten
f.frameID = index;
return f;
}
else cout<<"error"<<endl;
return f;
}

int main( int argc, char** argv )
{
// OpenNI initialization as in main.c, then the part V visual-odometry loop.
// (This section was truncated in the original post; restored following Gao
// Xiang's part V code, so details may differ from what I actually ran.)
XnStatus rc = XN_STATUS_OK;
xn::DepthMetaData depthMD;
xn::ImageMetaData imageMD;
xn::Context context;
rc = context.Init();
CheckOpenNIError( rc, "initialize context" );
xn::DepthGenerator depthGenerator;
rc = depthGenerator.Create( context );
xn::ImageGenerator imageGenerator;
rc = imageGenerator.Create( context );
XnMapOutputMode mapMode;
mapMode.nXRes = 640; mapMode.nYRes = 480; mapMode.nFPS = 30;
depthGenerator.SetMapOutputMode( mapMode );
imageGenerator.SetMapOutputMode( mapMode );
depthGenerator.GetAlternativeViewPointCap().SetViewPoint( imageGenerator );
context.StartGeneratingAll();
context.WaitNoneUpdateAll();

ParameterReader pd;
int currIndex = 0;
FRAME lastFrame = myReadFrame( currIndex, depthMD, imageMD, depthGenerator, imageGenerator );
string detector = pd.getData( "detector" );
string descriptor = pd.getData( "descriptor" );
CAMERA_INTRINSIC_PARAMETERS camera = getDefaultCamera();
computeKeyPointsAndDesp( lastFrame, detector, descriptor );
PointCloud::Ptr cloud = image2PointCloud( lastFrame.rgb, lastFrame.depth, camera );
pcl::visualization::CloudViewer viewer("viewer");
int min_inliers = atoi( pd.getData("min_inliers").c_str() );
double max_norm = atof( pd.getData("max_norm").c_str() );
char key = 0;
while( (key = waitKey(20)) != 27 )
{
currIndex++;
FRAME currFrame = myReadFrame( currIndex, depthMD, imageMD, depthGenerator, imageGenerator );
computeKeyPointsAndDesp( currFrame, detector, descriptor );
// solve pnp between the last frame and the current one
RESULT_OF_PNP result = estimateMotion( lastFrame, currFrame, camera );
if ( result.inliers < min_inliers ) // too few inliers: drop this frame
continue;
// is the motion too large?
double norm = normofTransform(result.rvec, result.tvec);
if ( norm >= max_norm )
{
currIndex++;
continue;
}
Eigen::Isometry3d T = cvMat2Eigen( result.rvec, result.tvec );
cout<<"T="<<T.matrix()<<endl;
cloud = joinPointCloud( cloud, currFrame, T, camera );
viewer.showCloud( cloud );
lastFrame = currFrame;
}
pcl::io::savePCDFile( "data/result.pcd", *cloud );
return 0;
}

FRAME readFrame( int index, ParameterReader& pd )
{
FRAME f;
string rgbDir   =   pd.getData("rgb_dir");
string depthDir =   pd.getData("depth_dir");
string rgbExt   =   pd.getData("rgb_extension");
string depthExt =   pd.getData("depth_extension");
stringstream ss;
ss<<rgbDir<<index<<rgbExt;
string filename;
ss>>filename;
f.rgb = cv::imread( filename );
ss.clear();
filename.clear();
ss<<depthDir<<index<<depthExt;
ss>>filename;
f.depth = cv::imread( filename, -1 );
return f;
}
double normofTransform( cv::Mat rvec, cv::Mat tvec )
{
return fabs(min(cv::norm(rvec), 2*M_PI-cv::norm(rvec)))+ fabs(cv::norm(tvec));
}
Just add the openni header and library file paths to the CMakeLists.txt under src:
# add an executable target
ADD_EXECUTABLE( main main.cpp )
# PCL dependency
FIND_PACKAGE( PCL REQUIRED COMPONENTS common io visualization filters )
list(REMOVE_ITEM PCL_LIBRARIES "vtkproj4") # use this in Ubuntu 16.04
# OpenCV dependency
FIND_PACKAGE( OpenCV REQUIRED )
# add header and library paths
ADD_DEFINITIONS( ${PCL_DEFINITIONS} )
INCLUDE_DIRECTORIES( ${PCL_INCLUDE_DIRS} )
LINK_LIBRARIES( ${PCL_LIBRARY_DIRS} )
# INCLUDE_DIRECTORIES( ${PROJECT_SOURCE_DIR}/include )
include_directories( /usr/include/ni )
#ADD_EXECUTABLE( generate_pointcloud generatePointCloud.cpp )
#TARGET_LINK_LIBRARIES( generate_pointcloud ${OpenCV_LIBS}
# ${PCL_LIBRARIES} )
ADD_LIBRARY( slambase slamBase.cpp )
TARGET_LINK_LIBRARIES( slambase
${OpenCV_LIBS}
${PCL_LIBRARIES} )
#ADD_EXECUTABLE( detectFeatures detectFeatures.cpp )
#TARGET_LINK_LIBRARIES( detectFeatures
# slambase
# ${OpenCV_LIBS}
# ${PCL_LIBRARIES} )
#ADD_EXECUTABLE( joinPointCloud joinPointCloud.cpp)
#TARGET_LINK_LIBRARIES( joinPointCloud
# slambase
# ${OpenCV_LIBS}
# ${PCL_LIBRARIES} )
ADD_EXECUTABLE( visualOdometry visualOdometry.cpp)
TARGET_LINK_LIBRARIES( visualOdometry
slambase
${OpenCV_LIBS}
${PCL_LIBRARIES}
/usr/lib/libOpenNI.so )
Build and run it directly.
Move the camera slowly and you can watch the pcl point cloud being built up.
pcl_viewer data/result.pcd
The result is not bad: you can see a programmer hard at work.
But you will find that after running for a while the memory blows up, and when you swing the camera back there is clearly visible drift. Both are front-end problems, so next we add the back end and loop closure detection.
Go into the sample code's part VII.
slam.cpp
/*************************************************************************
> File Name: rgbd-slam-tutorial-gx/part VII/src/slam.cpp
> Author: xiang gao
> Mail: [email protected]
> Created Time: Sat Aug 15 15:35:42 2015
* add g2o slam end to visual odometry
* add keyframe and simple loop closure
************************************************************************/
#include <iostream>
#include <fstream>
#include <sstream>
using namespace std;
#include "slamBase.h"
// g2o headers
#include <g2o/types/slam3d/types_slam3d.h>
#include <g2o/core/sparse_optimizer.h>
#include <g2o/core/block_solver.h>
#include <g2o/core/factory.h>
#include <g2o/core/optimization_algorithm_factory.h>
#include <g2o/core/optimization_algorithm_gauss_newton.h>
#include <g2o/solvers/eigen/linear_solver_eigen.h>
#include <g2o/core/robust_kernel.h>
#include <g2o/core/robust_kernel_impl.h>
#include <g2o/core/optimization_algorithm_levenberg.h>
// pcl filters used when stitching the map
#include <pcl/filters/voxel_grid.h>
#include <pcl/filters/passthrough.h>
using namespace cv;
#define XN_PLATFORM XN_PLATFORM_LINUX_X86
#define linux 1
//without the two defines above, the header below fails to compile
#include <XnCppWrapper.h>
// put the g2o typedefs up front
typedef g2o::BlockSolver_6_3 SlamBlockSolver;
typedef g2o::LinearSolverEigen< SlamBlockSolver::PoseMatrixType > SlamLinearSolver;
// read one frame of data for a given index
FRAME readFrame( int index, ParameterReader& pd );
// estimate the magnitude of a motion
double normofTransform( cv::Mat rvec, cv::Mat tvec );
// result of checking two frames against each other
enum CHECK_RESULT {NOT_MATCHED=0, TOO_FAR_AWAY, TOO_CLOSE, KEYFRAME};
// function declarations
CHECK_RESULT checkKeyframes( FRAME& f1, FRAME& f2, g2o::SparseOptimizer& opti, bool is_loops=false );
// detect loops among nearby frames
void checkNearbyLoops( vector<FRAME>& frames, FRAME& currFrame, g2o::SparseOptimizer& opti );
// detect loops among randomly chosen frames
void checkRandomLoops( vector<FRAME>& frames, FRAME& currFrame, g2o::SparseOptimizer& opti );
void CheckOpenNIError( XnStatus result, string status )
{
if( result != XN_STATUS_OK )
cerr << status << " Error: " << xnGetStatusString( result ) << endl;
}
xn::DepthMetaData depthMD;
xn::ImageMetaData imageMD;
xn::DepthGenerator depthGenerator;
xn::ImageGenerator imageGenerator;
FRAME myReadFrame(int index)
{
FRAME f;
//get meta data
depthGenerator.GetMetaData(depthMD);
imageGenerator.GetMetaData(imageMD);
if(depthMD.Data()!=NULL)
{
//method [1]: wrap the raw buffers in a Mat
//convert DepthMetaData to Mat
unsigned short* p = (unsigned short*) depthMD.Data();
Mat depthMat1(480,640,CV_16UC1,p); // depth pixels are unsigned 16-bit
Mat depthMatShow1;
convertScaleAbs(depthMat1,depthMatShow1,255/4096.0);// this step matters: scale depth down to 8 bits for display
normalize(depthMatShow1,depthMatShow1,255,0,CV_MINMAX);
imshow("testDepthMat",depthMatShow1);// showing the images adds a little delay, which lets the data actually be read out
//convert ImageMetaData to Mat
uchar *q = (uchar *) imageMD.Data();
Mat rgbMat1(480,640,CV_8UC3,q);
Mat rgbMatShow1(480,640,CV_8UC3);
cvtColor(rgbMat1,rgbMatShow1,CV_RGB2BGR);
imshow("testColorMat",rgbMatShow1);
f.depth = depthMat1.clone();
f.rgb = rgbMatShow1;
f.frameID = index;
return f;
}
cout<<"error"<<endl;
return f;
}

int main( int argc, char** argv )
{
// OpenNI initialization, same as in main.c: context, generators, 640x480@30fps,
// depth-to-color viewpoint alignment. (Truncated in the original post; restored
// following Gao Xiang's part VII code adapted to the Kinect.)
XnStatus rc = XN_STATUS_OK;
xn::Context context;
rc = context.Init();
CheckOpenNIError( rc, "initialize context" );
rc = depthGenerator.Create( context );
rc = imageGenerator.Create( context );
XnMapOutputMode mapMode;
mapMode.nXRes = 640; mapMode.nYRes = 480; mapMode.nFPS = 30;
depthGenerator.SetMapOutputMode( mapMode );
imageGenerator.SetMapOutputMode( mapMode );
depthGenerator.GetAlternativeViewPointCap().SetViewPoint( imageGenerator );
context.StartGeneratingAll();
context.WaitNoneUpdateAll();

ParameterReader pd;
// all keyframes go in here
vector< FRAME > keyframes;
// initialize
cout<<"Initializing ..."<<endl;
int currIndex = 0; // current frame index
FRAME currFrame = myReadFrame( currIndex ); // first frame
string detector = pd.getData( "detector" );
string descriptor = pd.getData( "descriptor" );
CAMERA_INTRINSIC_PARAMETERS camera = getDefaultCamera();
computeKeyPointsAndDesp( currFrame, detector, descriptor );

// set up the g2o solver
SlamLinearSolver* linearSolver = new SlamLinearSolver();
linearSolver->setBlockOrdering( false );
SlamBlockSolver* blockSolver = new SlamBlockSolver( linearSolver );
g2o::OptimizationAlgorithmLevenberg* solver = new g2o::OptimizationAlgorithmLevenberg( blockSolver );
g2o::SparseOptimizer globalOptimizer; // this is the thing we ultimately use
globalOptimizer.setAlgorithm( solver );
// no debug output, please
globalOptimizer.setVerbose( false );
// add the first vertex to globalOptimizer
g2o::VertexSE3* v = new g2o::VertexSE3();
v->setId( currIndex );
v->setEstimate( Eigen::Isometry3d::Identity() ); // the estimate is the identity
v->setFixed( true ); // fix the first vertex; it does not need optimizing
globalOptimizer.addVertex( v );
keyframes.push_back( currFrame );
double keyframe_threshold = atof( pd.getData("keyframe_threshold").c_str() );
bool check_loop_closure = pd.getData("check_loop_closure")==string("yes");
cout<<"init OK"<<endl;
// the main loop below was truncated in the original post; restored following
// Gao Xiang's part VII slam.cpp, adapted to read frames from the Kinect
char key = 0;
while( (key = waitKey(20)) != 27 )
{
currIndex++;
FRAME f = myReadFrame( currIndex ); // grab a new frame from the Kinect
computeKeyPointsAndDesp( f, detector, descriptor ); // extract features
CHECK_RESULT r = checkKeyframes( keyframes.back(), f, globalOptimizer ); // match against the last keyframe
switch (r)
{
case NOT_MATCHED:
cout<<"Not enough inliers."<<endl;
break;
case TOO_FAR_AWAY:
cout<<"Too far away, may be an error."<<endl;
break;
case TOO_CLOSE:
cout<<"Too close, not a keyframe."<<endl;
break;
case KEYFRAME:
cout<<"This is a new keyframe."<<endl;
// loop closure
if (check_loop_closure)
{
checkNearbyLoops( keyframes, f, globalOptimizer );
checkRandomLoops( keyframes, f, globalOptimizer );
}
keyframes.push_back( f );
break;
default:
break;
}
}

// optimize the pose graph
cout<<"optimizing pose graph, vertices: "<<globalOptimizer.vertices().size()<<endl;
globalOptimizer.save("./result_before.g2o");
globalOptimizer.initializeOptimization();
globalOptimizer.optimize( 100 ); // more iterations are fine too
globalOptimizer.save( "./result_after.g2o" );
cout<<"Optimization done."<<endl;

// stitch the keyframes into a point-cloud map
cout<<"saving the point cloud map..."<<endl;
PointCloud::Ptr output ( new PointCloud() ); // the global map
PointCloud::Ptr tmp ( new PointCloud() );
pcl::VoxelGrid<PointT> voxel; // grid filter to set the map resolution
pcl::PassThrough<PointT> pass; // z-interval filter: the rgbd camera's valid depth range is limited, so drop points too far away
pass.setFilterFieldName("z");
pass.setFilterLimits( 0.0, 6.0 ); // drop anything beyond 6 m
double gridsize = atof( pd.getData( "voxel_grid" ).c_str() ); // resolution is adjustable in parameters.txt
voxel.setLeafSize( gridsize, gridsize, gridsize );
for (size_t i=0; i<keyframes.size(); i++)
{
// fetch this frame's optimized pose out of g2o
g2o::VertexSE3* vertex = dynamic_cast<g2o::VertexSE3*>(globalOptimizer.vertex( keyframes[i].frameID ));
Eigen::Isometry3d pose = vertex->estimate(); // the optimized pose of this frame
PointCloud::Ptr newCloud = image2PointCloud( keyframes[i].rgb, keyframes[i].depth, camera ); // convert this frame to a point cloud
// filtering
voxel.setInputCloud( newCloud );
voxel.filter( *tmp );
pass.setInputCloud( tmp );
pass.filter( *newCloud );
// transform the point cloud and add it to the global map
pcl::transformPointCloud( *newCloud, *tmp, pose.matrix() );
*output += *tmp;
tmp->clear();
newCloud->clear();
}
voxel.setInputCloud( output );
voxel.filter( *tmp );
//save
pcl::io::savePCDFile( "./result.pcd", *tmp );
cout<<"Final map is saved."<<endl;
globalOptimizer.clear();
return 0;
}

FRAME readFrame( int index, ParameterReader& pd )
{
FRAME f;
string rgbDir   =   pd.getData("rgb_dir");
string depthDir =   pd.getData("depth_dir");
string rgbExt   =   pd.getData("rgb_extension");
string depthExt =   pd.getData("depth_extension");
stringstream ss;
ss<<rgbDir<<index<<rgbExt;
string filename;
ss>>filename;
f.rgb = cv::imread( filename );
ss.clear();
filename.clear();
ss<<depthDir<<index<<depthExt;
ss>>filename;
f.depth = cv::imread( filename, -1 );
f.frameID = index;
return f;
}
double normofTransform( cv::Mat rvec, cv::Mat tvec )
{
return fabs(min(cv::norm(rvec), 2*M_PI-cv::norm(rvec)))+ fabs(cv::norm(tvec));
}
CHECK_RESULT checkKeyframes( FRAME& f1, FRAME& f2, g2o::SparseOptimizer& opti, bool is_loops)
{
static ParameterReader pd;
static int min_inliers = atoi( pd.getData("min_inliers").c_str() );
static double max_norm = atof( pd.getData("max_norm").c_str() );
static double keyframe_threshold = atof( pd.getData("keyframe_threshold").c_str() );
static double max_norm_lp = atof( pd.getData("max_norm_lp").c_str() );
static CAMERA_INTRINSIC_PARAMETERS camera = getDefaultCamera();
// compare f1 and f2
RESULT_OF_PNP result = estimateMotion( f1, f2, camera );
if ( result.inliers < min_inliers ) //too few inliers: drop this frame
return NOT_MATCHED;
// is the motion too large?
double norm = normofTransform(result.rvec, result.tvec);
if ( is_loops == false )
{
if ( norm >= max_norm )
return TOO_FAR_AWAY; // too far away, may be error
}
else
{
if ( norm >= max_norm_lp)
return TOO_FAR_AWAY;
}
if ( norm <= keyframe_threshold )
return TOO_CLOSE; // too adjacent frame
// add this vertex, and the edge linking it to the previous frame, to g2o
// vertex part
// a vertex only needs its id
if (is_loops == false)
{
g2o::VertexSE3 *v = new g2o::VertexSE3();
v->setId( f2.frameID );
v->setEstimate( Eigen::Isometry3d::Identity() );
opti.addVertex(v);
}
// edge part
g2o::EdgeSE3* edge = new g2o::EdgeSE3();
// ids of the two vertices this edge connects
edge->setVertex( 0, opti.vertex(f1.frameID ));
edge->setVertex( 1, opti.vertex(f2.frameID ));
edge->setRobustKernel( new g2o::RobustKernelHuber() );
// information matrix
Eigen::Matrix< double, 6,6 > information = Eigen::Matrix< double, 6,6 >::Identity();
// the information matrix is the inverse of the covariance; it encodes our prior estimate of the edge's precision
// the pose is 6D, so it is a 6x6 matrix; assume position and angle estimates each have precision 0.1 and are independent
// then the covariance is diagonal with entries 0.01, and the information matrix has entries of 100
information(0,0) = information(1,1) = information(2,2) = 100;
information(3,3) = information(4,4) = information(5,5) = 100;
// you can also set the angle entries larger, to say the angle estimate is more accurate
edge->setInformation( information );
// the edge's measurement is the pnp result
Eigen::Isometry3d T = cvMat2Eigen( result.rvec, result.tvec );
//edge->setMeasurement( T );
edge->setMeasurement( T.inverse() );
// add the edge to the graph
opti.addEdge(edge);
return KEYFRAME;
}
void checkNearbyLoops( vector<FRAME>& frames, FRAME& currFrame, g2o::SparseOptimizer& opti )
{
static ParameterReader pd;
static int nearby_loops = atoi( pd.getData("nearby_loops").c_str() );
// simply test currFrame against the last few frames
if ( frames.size() <= nearby_loops )
{
// not enough keyframes: check every one
for (size_t i=0; i<frames.size(); i++)
{
checkKeyframes( frames[i], currFrame, opti, true );
}
}
else
{
// check the last nearby_loops frames
for (size_t i = frames.size()-nearby_loops; i<frames.size(); i++)
{
checkKeyframes( frames[i], currFrame, opti, true );
}
}
}

void checkRandomLoops( vector<FRAME>& frames, FRAME& currFrame, g2o::SparseOptimizer& opti )
{
static ParameterReader pd;
static int random_loops = atoi( pd.getData("random_loops").c_str() );
srand( (unsigned int) time(NULL) );
// randomly pick some frames to check
if ( frames.size() <= random_loops )
{
// not enough keyframes: check every one
for (size_t i=0; i<frames.size(); i++)
{
checkKeyframes( frames[i], currFrame, opti, true );
}
}
else
{
// randomly select random_loops frames
for (int i=0; i<random_loops; i++)
{
int index = rand()%frames.size();
checkKeyframes( frames[index], currFrame, opti, true );
}
}
}
The code adds keyframe selection and loop closure detection. It first uses pnp to compute the pose change between the current frame and the last keyframe and uses that to initialize g2o, then adds g2o edges through loop closure detection. When the scan finishes, g2o optimization runs, and the mapping quality is clearly better.
CMakeLists.txt
# add an executable target
ADD_EXECUTABLE( main main.cpp )
# PCL dependency
FIND_PACKAGE( PCL REQUIRED COMPONENTS common io visualization filters )
list(REMOVE_ITEM PCL_LIBRARIES "vtkproj4") # use this in Ubuntu 16.04
# OpenCV dependency
FIND_PACKAGE( OpenCV REQUIRED )
include_directories( ${OpenCV_INCLUDE_DIRS} )
# add header and library paths
ADD_DEFINITIONS( ${PCL_DEFINITIONS} )
INCLUDE_DIRECTORIES( ${PCL_INCLUDE_DIRS} )
LINK_LIBRARIES( ${PCL_LIBRARY_DIRS} )
include_directories( /usr/include/ni )
# g2o dependency
# g2o is not a commonly packaged library, so we add its FindG2O.cmake module
LIST( APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake_modules )
SET( G2O_ROOT /usr/local/include/g2o )
#FIND_PACKAGE( G2O REQUIRED )
# CSparse
#FIND_PACKAGE( CSparse REQUIRED )
INCLUDE_DIRECTORIES( ${G2O_INCLUDE_DIR} ${CSPARSE_INCLUDE_DIR} )
ADD_EXECUTABLE( generate_pointcloud generatePointCloud.cpp )
TARGET_LINK_LIBRARIES( generate_pointcloud ${OpenCV_LIBS}
${PCL_LIBRARIES} )
ADD_LIBRARY( slambase slamBase.cpp )
TARGET_LINK_LIBRARIES( slambase
${OpenCV_LIBS}
${PCL_LIBRARIES} )
#ADD_EXECUTABLE( detectFeatures detectFeatures.cpp )
#TARGET_LINK_LIBRARIES( detectFeatures
# slambase
# ${OpenCV_LIBS}
# ${PCL_LIBRARIES} )
#ADD_EXECUTABLE( joinPointCloud joinPointCloud.cpp)
#TARGET_LINK_LIBRARIES( joinPointCloud
# slambase
# ${OpenCV_LIBS}
# ${PCL_LIBRARIES} )
#ADD_EXECUTABLE( visualOdometry visualOdometry.cpp)
#TARGET_LINK_LIBRARIES( visualOdometry
# slambase
# ${OpenCV_LIBS}
# ${PCL_LIBRARIES} )
#ADD_EXECUTABLE( slamEnd slamEnd.cpp )
#TARGET_LINK_LIBRARIES( slamEnd
# slambase
# ${OpenCV_LIBS}
# ${PCL_LIBRARIES}
# g2o_core g2o_types_slam3d g2o_solver_csparse g2o_stuff g2o_csparse_extension ${CSPARSE_LIBRARY})
ADD_EXECUTABLE( slam slam.cpp )
TARGET_LINK_LIBRARIES( slam
slambase
${OpenCV_LIBS}
${PCL_LIBRARIES}
g2o_core g2o_types_slam3d g2o_solver_csparse g2o_stuff g2o_csparse_extension ${CSPARSE_LIBRARY} /usr/lib/libOpenNI.so )
View the optimized result with pcl_viewer.
The result is clearly better, and the details are fairly sharp.
ROS also has packages that implement rgbd-slam:
https://www.ncnynl.com/archives/201709/1991.html
Start the core:
roscore
Start the depth camera:
roslaunch freenect_launch freenect.launch depth_registration:=true
Start vslam mapping:
roslaunch rtabmap_ros rtabmap.launch rtabmap_args:="--delete_db_on_start"
This works quite well, and the real-time performance is decent too.