ORB-SLAM2: Building a Dense Point Cloud Online (Part 1)

In this post (credit where it is due: most of the material is collected from resources across the web rather than original work of mine; the original knowledge comes from the many researchers in this area) we read data from a depth camera in a ROS environment and, building on the ORB-SLAM2 framework, construct a point cloud map and an octree map (OctoMap) online. The post has three parts: Part 1 explains how to add a point-cloud-mapping thread to ORB-SLAM2; Part 2 explains how to build a point cloud map online from a depth camera under ROS; Part 3 explains how to convert the point cloud map into an octree map online. Note that two repositories are involved: ORB_SLAM2_PointCloud [2], our modification of ORB_SLAM2, and the ROS node orbslam2_pointcloud [3] that runs it. Both are hosted on Gitee for download.

1、ORB_SLAM2_PointCloud

ORB-SLAM2 keeps only sparse feature points in its map, but users of RGB-D cameras usually want a dense point cloud map instead. Fortunately, ORB-SLAM2 has a clean structure, so the basic idea is simple: add a single extra thread that maintains the point cloud map, hand every keyframe produced by ORB-SLAM2 to that thread, and let it generate the cloud from those keyframes. Dr. Gao Xiang did exactly this early on and published the result on GitHub [1].
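The heart of that thread is straightforward: for every keyframe, back-project the depth image through the camera intrinsics, color the points from the RGB image, transform them into the world frame with the keyframe pose, and fold the result into the global cloud through a voxel filter so the map stays compact. The sketch below illustrates this core step. It assumes the depth image has already been converted to float meters and the color image is 8-bit BGR; the function name and signature are illustrative, not the exact API of [1] or [2]:

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>
#include <opencv2/core/core.hpp>
#include <Eigen/Geometry>

typedef pcl::PointXYZRGBA PointT;
typedef pcl::PointCloud<PointT> PointCloud;

// Back-project one keyframe (color + depth + camera-to-world pose Twc)
// into a world-frame point cloud.
PointCloud::Ptr generatePointCloud(const cv::Mat &color, const cv::Mat &depth,
                                   const Eigen::Isometry3d &Twc,
                                   float fx, float fy, float cx, float cy)
{
    PointCloud::Ptr cloud(new PointCloud);
    for(int v = 0; v < depth.rows; v += 3)           // subsample pixels for speed
        for(int u = 0; u < depth.cols; u += 3)
        {
            float d = depth.at<float>(v, u);         // depth in meters (assumed CV_32F)
            if(d < 0.01f || d > 10.0f) continue;     // drop invalid / far readings
            Eigen::Vector3d p((u - cx) * d / fx,     // back-project the pixel
                              (v - cy) * d / fy,
                              d);
            Eigen::Vector3d pw = Twc * p;            // camera frame -> world frame
            PointT pt;
            pt.x = pw[0]; pt.y = pw[1]; pt.z = pw[2];
            cv::Vec3b c = color.at<cv::Vec3b>(v, u); // BGR channel order (assumed)
            pt.b = c[0]; pt.g = c[1]; pt.r = c[2];
            cloud->push_back(pt);
        }
    // keep the accumulated map compact: 1 cm voxel grid
    PointCloud::Ptr filtered(new PointCloud);
    pcl::VoxelGrid<PointT> voxel;
    voxel.setLeafSize(0.01f, 0.01f, 0.01f);
    voxel.setInputCloud(cloud);
    voxel.filter(*filtered);
    return filtered;
}

The mapping thread then simply waits on a condition variable, pops each new keyframe, calls a function like this, and appends the result to the global map.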

1.1 Installing the code

The modifications are essentially the same in both versions: you can clone Dr. Gao's version directly, or grab the version I use from [2]. After downloading the code, first install the dependencies following the ORB-SLAM2 installation instructions. Because of the added point-cloud thread you also need the PCL library (see my other post on installing PCL). Then build the code exactly the way the original ORB_SLAM2 is built.

1.2 Running on a dataset

Launch it exactly as you would the original ORB-SLAM2, using the command below (here with a TUM RGB-D dataset sequence); you should see something like the following:

./Examples/RGB-D/rgbd_tum Vocabulary/ORBvoc.txt Examples/RGB-D/TUM1.yaml ~/dataset/rgbd_dataset_freiburg1_room/  ~/dataset/rgbd_dataset_freiburg1_room/associate.txt

[Figure 1: ORB_SLAM2_PointCloud running on the TUM fr1/room sequence]
When the run finishes, a file named "resultPointCloudFile.pcd" is written to your home directory.
[Figure 2: the generated resultPointCloudFile.pcd in the home directory]
You can inspect this file from a new terminal with the viewer that ships with PCL:

 pcl_viewer resultPointCloudFile.pcd 

[Figure 3: the saved point cloud opened in pcl_viewer]

2、Running ORB_SLAM2_PointCloud under ROS

The plan is as follows: copy the library built from the modified ORB_SLAM2_PointCloud into a ROS workspace and create a ROS node that reads the camera data and calls the ORB_SLAM2_PointCloud interface, so that an RGB-D camera builds the map in real time. Through a newly added accessor function, the ROS node fetches the point cloud map generated by the point-cloud thread and publishes it on a topic for other nodes to consume. The code is available at [3].
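The accessor itself can be very small: the point-cloud thread keeps the global map behind a mutex and hands out a snapshot on request. A sketch of what such an interface might look like (class and member names here are assumptions, not necessarily those used in [2]):

// Inside the point-cloud mapping class (sketch): hand out the current map.
void PointCloudMapping::getGlobalCloudMap(pcl::PointCloud<pcl::PointXYZRGBA>::Ptr &outputMap)
{
    std::unique_lock<std::mutex> lck(mGlobalMapMutex); // also held while inserting keyframes
    outputMap = mGlobalMap;                            // share ownership of the cloud
}

// Forwarded through ORB_SLAM2::System, so the ROS node below can call
// orbslam->getPointCloudMap(outputMap);
void ORB_SLAM2::System::getPointCloudMap(pcl::PointCloud<pcl::PointXYZRGBA>::Ptr &outputMap)
{
    mpPointCloudMapping->getGlobalCloudMap(outputMap);
}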

2.1 Creating the ROS package

Create a package named orbslam2_pointcloud in your ROS workspace, copy the library built from ORB_SLAM2_PointCloud into it, and copy all the headers into the package's include directory.
Concretely:
1. Copy libORB_SLAM2.so into the orbslam2_pointcloud ROS package, overwriting the file in its lib directory.
2. Copy the headers from ORB_SLAM2_PointCloud into the orbslam2_pointcloud ROS package, overwriting the files in its include directory.
3. Create a config folder and write the camera intrinsics in ORB_SLAM2's parameter file format (I use the Orbbec Astra RGB-D camera here); in practice you just copy an existing ORB_SLAM2 configuration file and edit the intrinsics.
[Figure 4: camera parameter file for the Astra camera]

%YAML:1.0

#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------

# Camera calibration and distortion parameters (OpenCV) 
Camera.fx: 515.2888
Camera.fy: 517.6610
Camera.cx: 317.9098
Camera.cy: 241.5734

Camera.k1:  0.039128
Camera.k2:  -0.136553
Camera.p1:  -0.000142
Camera.p2:  -0.000765
Camera.k3:  0

Camera.width: 640
Camera.height: 480

# Camera frames per second 
Camera.fps: 30.0

# IR projector baseline times fx (aprox.)
Camera.bf: 40.0

# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1

# Close/Far threshold. Baseline times.
ThDepth: 50.0

# Depthmap values factor
DepthMapFactor: 1.0

#--------------------------------------------------------------------------------------------
# ORB Parameters
#--------------------------------------------------------------------------------------------

# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 1000

# ORB Extractor: Scale factor between levels in the scale pyramid 	
ORBextractor.scaleFactor: 1.2

# ORB Extractor: Number of levels in the scale pyramid	
ORBextractor.nLevels: 8

# ORB Extractor: Fast threshold
# Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
# Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
# You can lower these values if your images have low contrast			
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7

#--------------------------------------------------------------------------------------------
# Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.05
Viewer.KeyFrameLineWidth: 1
Viewer.GraphLineWidth: 0.9
Viewer.PointSize:2
Viewer.CameraSize: 0.08
Viewer.CameraLineWidth: 3
Viewer.ViewpointX: 0
Viewer.ViewpointY: -0.7
Viewer.ViewpointZ: -1.8
Viewer.ViewpointF: 500
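
With the library and headers in place, the package's CMakeLists.txt only needs to find the usual dependencies and link the executable against the copied libraries. A minimal fragment might look like this (target names and paths are assumptions; adapt them to your own tree):

cmake_minimum_required(VERSION 2.8)
project(orbslam2_pointcloud)

find_package(catkin REQUIRED COMPONENTS
  roscpp sensor_msgs cv_bridge image_transport message_filters pcl_conversions)
find_package(OpenCV REQUIRED)
find_package(PCL REQUIRED)
find_package(Eigen3 REQUIRED)

catkin_package()

include_directories(include ${catkin_INCLUDE_DIRS} ${OpenCV_INCLUDE_DIRS}
                    ${PCL_INCLUDE_DIRS} ${EIGEN3_INCLUDE_DIR})

add_executable(orbslam2_pointcloud_node src/orbslam2_pointcloud_node.cpp)
target_link_libraries(orbslam2_pointcloud_node
  ${catkin_LIBRARIES} ${OpenCV_LIBS} ${PCL_LIBRARIES}
  ${PROJECT_SOURCE_DIR}/lib/libORB_SLAM2.so    # copied in step 1
  ${PROJECT_SOURCE_DIR}/lib/libDBoW2.so
  ${PROJECT_SOURCE_DIR}/lib/libg2o.so)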

2.2 Writing the ROS node

This step reads the camera's color and depth images and feeds them to the ORB_SLAM2 entry point (orbslam->TrackRGBD(color, depth, ros::Time::now().toSec());). The color topic and the depth topic have to be received in sync, so we use ROS's soft synchronization mechanism, message_filters. The main code is listed below; it too is adapted from Dr. Gao Xiang's code.

#include <iostream>
#include <string>
#include <vector>
#include <sstream>
#include <chrono>
#include <thread>
#include <mutex>

#include <ros/ros.h>
#include <ros/spinner.h>
#include <sensor_msgs/CameraInfo.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/PointCloud2.h>

#include <cv_bridge/cv_bridge.h>

#include <image_transport/image_transport.h>
#include <image_transport/subscriber_filter.h>

#include <message_filters/subscriber.h>
#include <message_filters/synchronizer.h>
#include <message_filters/sync_policies/exact_time.h>
#include <message_filters/sync_policies/approximate_time.h>

#include <boost/bind.hpp>

#include <opencv2/opencv.hpp>

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>
#include <pcl_conversions/pcl_conversions.h>

#include "orbslam2_pointcloud/System.h"

using namespace std;

enum ORBSLAM_MODE
{
    Mono = 0,
    RGBD ,
    STEREO
};
		  
class Receiver
{
    public:
                  enum Mode
                  {
		      NOTHING = 0,
		      IMAGE ,
		      DEPTH,
		      BOTH,
		      CLOUD
                  };

    private:
                  std::mutex lock;

                  //const std::string topicColor, topicDepth,cameraParamFile , orbVocFile; 
                  bool mbuseExact, mbuseCompressed;
		  bool mborbshow;
		  char mborbmode;
		  
		  Mode memode;
		  bool mbupdateImage, mbupdateCloud; // state flags
		  bool mbview,mbsave,mbrunning,mbpublishPointCloud;
	 
                 
                  const size_t queueSize;

                  cv::Mat mcolor, mdepth;
                  cv::Mat cameraMatrixColor, cameraMatrixDepth;
                  cv::Mat lookupX, lookupY;

                  typedef message_filters::sync_policies::ExactTime<sensor_msgs::Image, sensor_msgs::Image, sensor_msgs::CameraInfo, sensor_msgs::CameraInfo> ExactSyncPolicy;
                  typedef message_filters::sync_policies::ApproximateTime<sensor_msgs::Image, sensor_msgs::Image, sensor_msgs::CameraInfo, sensor_msgs::CameraInfo> ApproximateSyncPolicy;

                  ros::NodeHandle nh;
                  ros::AsyncSpinner spinner;
		  ros::Publisher pub_pointcloud;
		  
                  image_transport::ImageTransport it;
                  image_transport::SubscriberFilter *subImageColor, *subImageDepth;
                  message_filters::Subscriber<sensor_msgs::CameraInfo> *subCameraInfoColor, *subCameraInfoDepth;

                  message_filters::Synchronizer<ExactSyncPolicy> *syncExact;
                  message_filters::Synchronizer<ApproximateSyncPolicy> *syncApproximate;

                  std::thread imageViewerThread;
                  

                  pcl::PointCloud<pcl::PointXYZRGBA>::Ptr outputMap;
                  pcl::PCDWriter writer;
                  std::ostringstream oss;
                  std::vector<int> params;

                  // the ORB-SLAM2 system instance
                  ORB_SLAM2::System *orbslam = nullptr;

public:
                  Receiver(const std::string &topicColor, const std::string &topicDepth,
			   const std::string &cameraParamFile,const std::string orbVocFile,
			   const char orbmode, const bool orbshow)
                      :queueSize(5),nh("~"), spinner(0), it(nh)
                  {
		    
		    mbpublishPointCloud = false;
		    memode =NOTHING;
		    mbupdateImage = false;
		    mbupdateCloud= false;
		    mbsave= false;
		    mbrunning= true;
		    
		    mbuseExact = false;      // use approximate time synchronization by default
		    mbuseCompressed = false;
		    
		    mborbmode =orbmode;
		    mborbshow = orbshow;
		    
                    cameraMatrixColor = cv::Mat::zeros(3, 3, CV_64F); // color camera intrinsic matrix
                    cameraMatrixDepth = cv::Mat::zeros(3, 3, CV_64F);
                    /*
                    params.push_back(cv::IMWRITE_JPEG_QUALITY);
                    params.push_back(100);
                    params.push_back(cv::IMWRITE_PNG_COMPRESSION);
                    params.push_back(1);
                    params.push_back(cv::IMWRITE_PNG_STRATEGY);
                    params.push_back(cv::IMWRITE_PNG_STRATEGY_RLE);
                    params.push_back(0);

                    cameraMatrixColor.at<double>(0,0) = 628.6055; // Intel RealSense intrinsics
                    cameraMatrixColor.at<double>(0,2) = 322.5793;
                    cameraMatrixColor.at<double>(1,1) = 634.6200;
                    cameraMatrixColor.at<double>(1,2) = 261.3863;
                    cout << "cameraMatrixColor: " << cameraMatrixColor << endl;
                    */
					
		   if(mborbmode == RGBD)
		   {
		     orbslam = new ORB_SLAM2::System(orbVocFile,cameraParamFile,ORB_SLAM2::System::RGBD,mborbshow);
		     mbpublishPointCloud = true;
		   }
		   else if(mborbmode == Mono)
		   {
		     orbslam = new ORB_SLAM2::System(orbVocFile,cameraParamFile,ORB_SLAM2::System::MONOCULAR,mborbshow);
		   }
		   else if(mborbmode == STEREO)
		   {
		     orbslam = new ORB_SLAM2::System(orbVocFile,cameraParamFile,ORB_SLAM2::System::STEREO,mborbshow);
		   }
		   else
		     ;
                     
                    pub_pointcloud = nh.advertise<sensor_msgs::PointCloud2>("PointCloudOutput", 10); // advertise the point cloud topic
				  
		    
		    std::string topicCameraInfoColor = topicColor.substr(0, topicColor.rfind('/')) + "/camera_info";
		    std::string topicCameraInfoDepth = topicDepth.substr(0, topicDepth.rfind('/')) + "/camera_info";

		    image_transport::TransportHints hints(mbuseCompressed ? "compressed" : "raw");
		    subImageColor = new image_transport::SubscriberFilter(it, topicColor, queueSize, hints);
		    subImageDepth = new image_transport::SubscriberFilter(it, topicDepth, queueSize, hints);
		    subCameraInfoColor = new message_filters::Subscriber<sensor_msgs::CameraInfo>(nh, topicCameraInfoColor, queueSize);
		    subCameraInfoDepth = new message_filters::Subscriber<sensor_msgs::CameraInfo>(nh, topicCameraInfoDepth, queueSize);

		    cout<<"topicColor: "<<topicColor<<endl;
		    cout<<"topicDepth: "<<topicDepth<<endl;
		    cout<<"subCameraInfoColor: "<<topicCameraInfoColor<<endl;
		    cout<<"subCameraInfoDepth: "<<topicCameraInfoDepth<<endl;
		    cout<<"cameraParamFile: "<<cameraParamFile<<endl<<endl;

		    if(mbuseExact)
		    {
			syncExact = new message_filters::Synchronizer<ExactSyncPolicy>(ExactSyncPolicy(queueSize), *subImageColor, *subImageDepth, *subCameraInfoColor, *subCameraInfoDepth);
			syncExact->registerCallback(boost::bind(&Receiver::callback, this, _1, _2, _3, _4));
		    }
		    else
		    {
			syncApproximate = new message_filters::Synchronizer<ApproximateSyncPolicy>(ApproximateSyncPolicy(queueSize), *subImageColor, *subImageDepth, *subCameraInfoColor, *subCameraInfoDepth);
			syncApproximate->registerCallback(boost::bind(&Receiver::callback, this, _1, _2, _3, _4));
		    }
		
	            
                  }

                  ~Receiver()
                  {
		      if(orbslam)
		      {
			orbslam->Shutdown();
			delete orbslam;
		      }
                  }

                  void run()
                  {
		      start();
		      stop();
                  }


private:
void start()
{
	spinner.start();

	std::chrono::milliseconds duration(100);
	while(!mbupdateImage || !mbupdateCloud)
	{
	    if(!ros::ok())
	    {
		    return;
	    }
	    std::this_thread::sleep_for(duration);
	    cout<<" wait publisher publish the topic .... "<<endl;
	}

	cv::Mat color, depth, depthDisp, combined; // local variables
	
	std::chrono::time_point<std::chrono::high_resolution_clock> start, now; 
	double fps = 0;
	size_t frameCount = 0;
	std::ostringstream oss;
	
	const cv::Point pos(5, 15); // position of the on-screen status text
	const cv::Scalar colorText = CV_RGB(255, 255, 255);
	const double sizeText = 0.5;
	const int lineText = 1;
	const int font = cv::FONT_HERSHEY_SIMPLEX;
	
	cout << "starting..."<<endl;;
	
	start = std::chrono::high_resolution_clock::now();
	
	
	for(; mbrunning && ros::ok();)
	{
		 
			if(mbupdateImage) // feed the latest frame to ORB-SLAM
			{
			    lock.lock();
			    color = this->mcolor;
			    depth = this->mdepth;
			    mbupdateImage = false;
			    lock.unlock();
			    if(orbslam)
			    {
			       if(mborbmode == RGBD)
				{
				  orbslam->TrackRGBD(color,depth,ros::Time::now().toSec()); 
				}
				else if(mborbmode == Mono)
				{
				  orbslam->TrackMonocular(color,ros::Time::now().toSec());
				}
				else if(mborbmode == STEREO)
				{
				  orbslam->TrackStereo(color,depth,ros::Time::now().toSec());
				}
				else
				  ;
		   
				    
			    }
			    
			    ++frameCount; // frame-rate bookkeeping
			    now = std::chrono::high_resolution_clock::now();
			    double elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(now - start).count() / 1000.0;
			    if(elapsed >= 1.0)
			    {
				    fps = frameCount / elapsed;
				    oss.str("");
				    oss << "fps: " << fps << " ( " << elapsed / frameCount * 1000.0 << " ms)";
				    start = now;
				    frameCount = 0;
			    
			    }
			    
			    if(mbpublishPointCloud)
			    {
				orbslam->getPointCloudMap(outputMap);
				pointcloudpublish(*outputMap);
			    }
			    mbview  = true;
			}
			
			if(mbview) // the ORB tracking update comes first, display second
			{
			    mbview = false;
			    
			    
			    if(memode == NOTHING )
			    {
				;
			    }
			   else if(memode == IMAGE )
			    {
				cv::putText(color, oss.str(), pos, font, sizeText, colorText, lineText, CV_AA);
				cv::imshow("color image", color);
				cv::waitKey(1);
			    }
			      else if(memode == DEPTH )
			    {
				dispDepth(depth, depthDisp, 12000.0f); // colorize the raw 16-bit depth (max range assumed ~12 m)
				cv::putText(depthDisp, oss.str(), pos, font, sizeText, colorText, lineText, CV_AA);
				cv::imshow("depth image",depthDisp);
			    }
			      else if(memode == BOTH )
			    {
				dispDepth(depth, depthDisp, 12000.0f); // colorize the depth first (max range assumed ~12 m)
				combine(color, depthDisp, combined);   // combine() expects two CV_8UC3 images
				cv::putText(combined, oss.str(), pos, font, sizeText, colorText, lineText, CV_AA);
				cv::imshow("combined image",combined);
			    }
			      else if(memode == CLOUD )
			    {
				//generatePointCloud(depth, color, cloud);					
				//cloudViewer();
			    }
			}
  
		  
	}

	 
}

void stop()
{
		cv::destroyAllWindows(); // close the display windows
		cv::waitKey(100);
		spinner.stop();

		if(mbuseExact)
		{
			delete syncExact;
		}
		else
		{
			delete syncApproximate;
		}

		delete subImageColor;
		delete subImageDepth;
		delete subCameraInfoColor;
		delete subCameraInfoDepth;

		mbrunning = false;
		if(memode == BOTH)
		{
			imageViewerThread.join();
		}
}

void callback(const sensor_msgs::Image::ConstPtr imageColor, const sensor_msgs::Image::ConstPtr imageDepth,
	      const sensor_msgs::CameraInfo::ConstPtr cameraInfoColor, const sensor_msgs::CameraInfo::ConstPtr cameraInfoDepth)
 {
        cv::Mat color, depth;

        //readCameraInfo(cameraInfoColor, cameraMatrixColor);
        //readCameraInfo(cameraInfoDepth, cameraMatrixDepth);
        //readImage(imageColor, color);
        //readImage(imageDepth, depth);
        cv_bridge::CvImageConstPtr pCvImage;

        pCvImage = cv_bridge::toCvShare(imageColor, "rgb8");
        pCvImage->image.copyTo(color);
        pCvImage = cv_bridge::toCvShare(imageDepth, imageDepth->encoding);
        pCvImage->image.copyTo(depth);
        // IR image input
        if(color.type() == CV_16U)
        {
              cv::Mat tmp;
              color.convertTo(tmp, CV_8U, 0.02);
              cv::cvtColor(tmp, color, CV_GRAY2BGR);
        }

        lock.lock();
        this->mcolor = color;
        this->mdepth = depth;
        mbupdateImage = true;
        mbupdateCloud = true;
        lock.unlock();
 }

  void readImage(const sensor_msgs::Image::ConstPtr msgImage, cv::Mat &image) const
  {
            cv_bridge::CvImageConstPtr pCvImage;
            pCvImage = cv_bridge::toCvShare(msgImage, msgImage->encoding);
            pCvImage->image.copyTo(image);
  }

  void readCameraInfo(const sensor_msgs::CameraInfo::ConstPtr cameraInfo, cv::Mat &cameraMatrix) const
  {
        double *itC = cameraMatrix.ptr<double>(0, 0);
        for(size_t i = 0; i < 9; ++i, ++itC)
        {
            *itC = cameraInfo->K[i];
        }
  }
 
/**
 * @brief Map a depth image to a false-color image for display
 */
  void dispDepth(const cv::Mat &in, cv::Mat &out, const float maxValue)
  {
    cv::Mat tmp = cv::Mat(in.rows, in.cols, CV_8U);
    const uint32_t maxInt = 255;

  //  #pragma omp parallel for
    for(int r = 0; r < in.rows; ++r)
    {
      const uint16_t *itI = in.ptr<uint16_t>(r);
      uint8_t *itO = tmp.ptr<uint8_t>(r);

      for(int c = 0; c < in.cols; ++c, ++itI, ++itO)
      {
        *itO = (uint8_t)std::min((*itI * maxInt / maxValue), 255.0f);
      }
    }

    cv::applyColorMap(tmp, out, cv::COLORMAP_JET);
  }
/**
 * @brief Overlay the depth visualization on the color image
 */
  void combine(const cv::Mat &inC, const cv::Mat &inD, cv::Mat &out)
  {
    out = cv::Mat(inC.rows, inC.cols, CV_8UC3);

 //   #pragma omp parallel for
    for(int r = 0; r < inC.rows; ++r)
    {
      const cv::Vec3b
      *itC = inC.ptr<cv::Vec3b>(r),
       *itD = inD.ptr<cv::Vec3b>(r);
      cv::Vec3b *itO = out.ptr<cv::Vec3b>(r);

      for(int c = 0; c < inC.cols; ++c, ++itC, ++itD, ++itO)
      {
        itO->val[0] = (itC->val[0] + itD->val[0]) >> 1;
        itO->val[1] = (itC->val[1] + itD->val[1]) >> 1;
        itO->val[2] = (itC->val[2] + itD->val[2]) >> 1;
      }
    }
  }
void pointcloudpublish(const pcl::PointCloud<pcl::PointXYZRGBA> &inputMap)
 {
	sensor_msgs::PointCloud2 output; // the ROS message to publish
	toROSMsg(inputMap, output);      // convert the PCL cloud to a ROS PointCloud2
	output.header.stamp = ros::Time::now();
	output.header.frame_id = "world";
	pub_pointcloud.publish(output);
 }

};

int main(int argc, char **argv)
{
    std::string topicColor, topicDepth, cameraParamFile;
    ros::init(argc, argv, "orbslam_ros", ros::init_options::AnonymousName);
    if(!ros::ok())
    {
	    return 0;
    }   
    if(ros::param::get("~topicColor" ,topicColor))
	    ;
    else
	    topicColor = "/kinect2/qhd/image_color_rect";

    if(ros::param::get("~topicDepth" ,topicDepth))
	    ;
    else
	    topicDepth = "/kinect2/qhd/image_depth_rect";
    if(ros::param::get("~cameraParamFile" ,cameraParamFile))
	    ;
    else
	cameraParamFile= "/home/crp/catkin_ws/src/orbslam2_ros/kinect2_qhd.yaml" ;
 
    string orbVoc = "/home/crp/catkin_ws/src/SLAM/orbslam2_pointcloud/ORBvoc.txt" ;
    bool orbmode = RGBD;
    bool orbshow = true;
    Receiver receiver(topicColor, topicDepth, cameraParamFile,orbVoc,orbmode, orbshow);
    cout<<"starting receiver..."<<endl;
    receiver.run();
    ros::shutdown();
    return 0;
}

2.3 Building and running

Start the camera node:

roslaunch astra_launch astra.launch

Start the SLAM node:

roslaunch orbslam2_pointcloud astra.launch

Note: when you switch to a new machine, you must rebuild the ORB_SLAM2_PointCloud package on that machine and copy the freshly built libORB_SLAM2.so, libDBoW2.so, and libg2o.so over the three shared libraries in the ROS package; otherwise the build fails with a long list of undefined references. The copy step looks roughly like the commands below.
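
The source paths here are where the ORB_SLAM2 build scripts place the libraries; the destination lib directory is assumed to match the package layout used above:

cd ~/catkin_ws/src/SLAM/ORB_SLAM2_PointCloud
./build.sh
cp lib/libORB_SLAM2.so              ~/catkin_ws/src/SLAM/orbslam2_pointcloud/lib/
cp Thirdparty/DBoW2/lib/libDBoW2.so ~/catkin_ws/src/SLAM/orbslam2_pointcloud/lib/
cp Thirdparty/g2o/lib/libg2o.so     ~/catkin_ws/src/SLAM/orbslam2_pointcloud/lib/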

3、Publishing the Octomap

Start the octomap server in ROS so that it subscribes to a fixed point cloud topic and converts it into an Octomap. Concretely, create a launch file named octomapTransform.launch with the following content:

<?xml version="1.0"?>
<launch>
  <node pkg="octomap_server" type="octomap_server_node" name="octomap_server">

    <!-- resolution of the octree in meters -->
    <param name="resolution" value="0.05" />

    <!-- name of the fixed frame, needs to be "/map" for SLAM -->
    <param name="frame_id" type="string" value="/world" />

    <!-- max range / depth resolution of the kinect in meter -->
    <param name="sensor_model/max_range" value="50.0" />
    <param name="latch" value="true" />

    <!-- max/min height for occupancy map, should be in meters -->
    <param name="pointcloud_max_z" value="1000" />
    <param name="pointcloud_min_z" value="0" />

    <!-- topic from where pointcloud2 messages are subscribed -->
    <remap from="/cloud_in" to="/orbslam2_ros/LocaPointCloud" />

  </node>
</launch>
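
With the SLAM node running, start the conversion and open rviz (assuming the launch file was saved into the orbslam2_pointcloud package):

roslaunch orbslam2_pointcloud octomapTransform.launch
rosrun rviz rviz

In rviz, add an OccupancyGrid display (from octomap_rviz_plugins) subscribed to /octomap_full to see the octree map. Also make sure the topic remapped to /cloud_in matches the point cloud topic your node actually advertises; the node in section 2.2 publishes PointCloudOutput under its private namespace.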

[Figure: OctoMap built online from the ORB-SLAM2 point cloud]

For how to install and launch Octomap, see my other post (Displaying Octomap in real time under ROS).


[1] Dr. Gao Xiang's open-source code: https://github.com/gaoxiang12/ORBSLAM2_with_pointcloud_map
[2] My modified code: https://gitee.com/cenruping/ORB_SLAM2_PointCloud
[3] The corresponding ROS node: https://gitee.com/cenruping/orbslam2_pointcloud

If you found this post helpful, please give it a like. O(∩_∩)O

You are welcome to discuss in the comments ([email protected])
