Host system: Jetpack 5.0.2, Ubuntu 20.04 LTS
Note: to avoid pitfalls, be sure to set up every component with the exact versions listed in this guide.
1. Pangolin 0.5
Repository: https://github.com/stevenlovegrove/Pangolin
Installation commands:
# Install dependencies
sudo apt install libglew-dev
sudo apt install cmake
sudo apt install libpython2.7-dev
# Optional dependencies are listed on GitHub; I did not install them, so they are not shown here
# Download and build
git clone https://github.com/stevenlovegrove/Pangolin.git
cd Pangolin
mkdir build
cd build
cmake ..
cmake --build .
# Do not forget to install at the end
sudo make install
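Before moving on, it is worth confirming that the install step actually put the library where later builds will look for it. A minimal sanity check, assuming the default /usr/local install prefix:

```shell
# Optional sanity check (assumes the default /usr/local install prefix):
# confirm the Pangolin library and headers were installed.
ls /usr/local/lib | grep -i pangolin || echo "library not found - did sudo make install succeed?"
ls /usr/local/include | grep -i pangolin || echo "headers not found - did sudo make install succeed?"
```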
Possible error 1:
error: ‘AV_PIX_FMT_XVMC_MPEG2_MC’ was not declared in this scope
Solution: in Pangolin/CMakeModules/FindFFMPEG.cmake, at lines 63-64, replace
sizeof(AVFormatContext::max_analyze_duration);
}" HAVE_FFMPEG_MAX_ANALYZE_DURATION2
with
sizeof(AVFormatContext::max_analyze_duration);
}" HAVE_FFMPEG_MAX_ANALYZE_DURATION
Then, in Pangolin/src/video/drivers/ffmpeg.cpp, add the following above line 37 (namespace pangolin):
#define CODEC_FLAG_GLOBAL_HEADER AV_CODEC_FLAG_GLOBAL_HEADER
At lines 78-79, replace
TEST_PIX_FMT_RETURN(XVMC_MPEG2_MC);
TEST_PIX_FMT_RETURN(XVMC_MPEG2_IDCT);
with
#ifdef FF_API_XVMC
TEST_PIX_FMT_RETURN(XVMC_MPEG2_MC);
TEST_PIX_FMT_RETURN(XVMC_MPEG2_IDCT);
#endif
At lines 101-105, replace
TEST_PIX_FMT_RETURN(VDPAU_H264);
TEST_PIX_FMT_RETURN(VDPAU_MPEG1);
TEST_PIX_FMT_RETURN(VDPAU_MPEG2);
TEST_PIX_FMT_RETURN(VDPAU_WMV3);
TEST_PIX_FMT_RETURN(VDPAU_VC1);
with
#ifdef FF_API_VDPAU
TEST_PIX_FMT_RETURN(VDPAU_H264);
TEST_PIX_FMT_RETURN(VDPAU_MPEG1);
TEST_PIX_FMT_RETURN(VDPAU_MPEG2);
TEST_PIX_FMT_RETURN(VDPAU_WMV3);
TEST_PIX_FMT_RETURN(VDPAU_VC1);
#endif
At line 127, replace
TEST_PIX_FMT_RETURN(VDPAU_MPEG4);
with
#ifdef FF_API_VDPAU
TEST_PIX_FMT_RETURN(VDPAU_MPEG4);
#endif
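If you would rather not make these edits by hand, the XVMC guard can be applied with sed. The sketch below runs against a stand-in file so it can be tried safely; to patch for real, point SRC at Pangolin/src/video/drivers/ffmpeg.cpp (the VDPAU blocks can be guarded the same way with FF_API_VDPAU):

```shell
# Sketch: wrap the XVMC lines in an #ifdef FF_API_XVMC guard with sed.
# Demonstrated on a stand-in file; to patch for real, set SRC to
# Pangolin/src/video/drivers/ffmpeg.cpp instead.
SRC=/tmp/ffmpeg_xvmc_demo.cpp
cat > "$SRC" <<'EOF'
    TEST_PIX_FMT_RETURN(XVMC_MPEG2_MC);
    TEST_PIX_FMT_RETURN(XVMC_MPEG2_IDCT);
EOF
sed -i '/XVMC_MPEG2_MC/i #ifdef FF_API_XVMC' "$SRC"    # open the guard before the first XVMC line
sed -i '/XVMC_MPEG2_IDCT/a #endif' "$SRC"              # close it after the last XVMC line
cat "$SRC"
```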
Possible error 2:
error: ‘AVFMT_RAWPICTURE’ was not declared in this scope
Solution: add the following at the top of Pangolin/include/pangolin/video/drivers/ffmpeg.h:
#define AV_CODEC_FLAG_GLOBAL_HEADER (1 << 22)
#define CODEC_FLAG_GLOBAL_HEADER AV_CODEC_FLAG_GLOBAL_HEADER
#define AVFMT_RAWPICTURE 0x0020
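This edit, too, can be scripted by prepending the defines to the header. A sketch on a stand-in file; set HDR to the real Pangolin/include/pangolin/video/drivers/ffmpeg.h to apply it:

```shell
# Sketch: prepend the three compatibility defines to the header.
# Demonstrated on a stand-in file; to patch for real, set HDR to
# Pangolin/include/pangolin/video/drivers/ffmpeg.h instead.
HDR=/tmp/ffmpeg_header_demo.h
printf '#pragma once\n' > "$HDR"          # stand-in for the real header body
cat > /tmp/ffmpeg_defines.tmp <<'EOF'
#define AV_CODEC_FLAG_GLOBAL_HEADER (1 << 22)
#define CODEC_FLAG_GLOBAL_HEADER AV_CODEC_FLAG_GLOBAL_HEADER
#define AVFMT_RAWPICTURE 0x0020
EOF
cat /tmp/ffmpeg_defines.tmp "$HDR" > /tmp/ffmpeg_patched.tmp
mv /tmp/ffmpeg_patched.tmp "$HDR"
head -n 4 "$HDR"
```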
2. OpenCV 4.2.0
Since Jetpack 5.0.2 ships with OpenCV 4.5.5, the first step is to uninstall it.
Because OpenCV 4.5.5 was not installed by us, the uninstall methods commonly found online do not work. After trying several approaches, the one that worked best for me was to delete every OpenCV-related item under /usr:
whereis opencv4
The command above prints the OpenCV installation paths; go into each path and delete the OpenCV-related items:
# Delete directories
sudo rm -rf <opencv-related directory>
# Delete files
sudo rm <opencv-related file>
After all the related items are removed, check with:
pkg-config --modversion opencv4
# If no version information is printed, `OpenCV` has been uninstalled completely
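Since the rm commands above take explicit paths, it helps to see what is actually on disk before deleting anything. A cautious listing sketch; the directories below are typical Ubuntu locations and should be adjusted to whatever whereis reported on your machine:

```shell
# List likely OpenCV leftovers before deleting anything by hand.
# These directories are typical locations on Ubuntu; adjust them to
# whatever `whereis opencv4` reported on your machine.
for d in /usr/include /usr/lib /usr/local/include /usr/local/lib /usr/share; do
    find "$d" -maxdepth 3 -iname '*opencv*' 2>/dev/null
done
echo "review the list above before running any sudo rm"
```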
For OpenCV 4.2.0, installing the Ubuntu packages via apt is recommended:
sudo apt update
sudo apt install libopencv-dev python3-opencv
After installation, check the version again:
pkg-config --modversion opencv4
4.2.0
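As a second, independent check, the Python bindings can report the version too. A defensive one-liner that prints a message instead of erroring out when cv2 is absent:

```shell
# Cross-check the OpenCV version through the Python bindings as well.
# Falls back to a message instead of an error if cv2 is not importable.
python3 -c "import cv2; print(cv2.__version__)" 2>/dev/null \
    || echo "cv2 not importable - was python3-opencv installed?"
```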
3. ROS2 (Foxy)
The FishROS one-click installer is recommended (it really is excellent):
wget http://fishros.com/install -O fishros && . fishros
Press Enter and follow the prompts to make your selections.
4. ORB-SLAM3 (for Ubuntu 20.04)
Repository: https://github.com/zang09/ORB-SLAM3-STEREO-FIXED
git clone https://github.com/zang09/ORB-SLAM3-STEREO-FIXED.git ORB_SLAM3
cd ORB_SLAM3
chmod +x build.sh
./build.sh
The build may take quite a while. For our use case, you can open the CMakeLists.txt in the ORB_SLAM3 directory and comment out the compilation of all the examples to speed things up.
5. ORB-SLAM3-ROS2
Repository: https://github.com/zang09/ORB_SLAM3_ROS2
mkdir -p ~/orbslam_ws/src
cd ~/orbslam_ws/src
git clone https://github.com/zang09/ORB_SLAM3_ROS2.git orbslam3_ros2
In orbslam3_ros2/CMakeLists.txt, replace "/opt/ros/foxy/lib/python3.8/site-packages/" with your own path; on a standard Foxy install it is already this path, so usually no change is needed:
# You should set the PYTHONPATH to your own python site-packages path
set(ENV{PYTHONPATH} "/opt/ros/foxy/lib/python3.8/site-packages/")
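If you are unsure whether that path matches your machine, you can ask Python directly. A quick sketch; note that the /opt/ros/foxy path will only exist on a machine with Foxy installed:

```shell
# Ask the default python3 which site-packages directory it uses, as a
# reference point when checking the PYTHONPATH line above.
python3 -c "import sysconfig; print(sysconfig.get_paths()['purelib'])"
# On a machine with Foxy installed, the ROS path should also exist:
ls -d /opt/ros/foxy/lib/python3.8/site-packages/ 2>/dev/null \
    || echo "ROS Foxy site-packages not found at the default path"
```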
In orbslam3_ros2/CMakeModules/FindORB_SLAM3.cmake, replace "~/Install/ORB_SLAM/ORB_SLAM3" with the path of the ORB-SLAM3 folder from step 4:
# To help the search ORB_SLAM3_ROOT_DIR environment variable as the path to ORB_SLAM3 root folder
# e.g. `set( ORB_SLAM3_ROOT_DIR=~/ORB_SLAM3) `
set(ORB_SLAM3_ROOT_DIR "~/Install/ORB_SLAM/ORB_SLAM3")
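The replacement can also be done with a single sed call. A sketch on a stand-in file; ~/ORB_SLAM3 is an assumed location for your step-4 clone, so substitute your real path:

```shell
# Sketch: point ORB_SLAM3_ROOT_DIR at your actual checkout with one sed call.
# Demonstrated on a stand-in file; to patch for real, set F to
# orbslam3_ros2/CMakeModules/FindORB_SLAM3.cmake. The target ~/ORB_SLAM3
# is an assumption about where your step-4 clone lives.
F=/tmp/FindORB_SLAM3_demo.cmake
echo 'set(ORB_SLAM3_ROOT_DIR "~/Install/ORB_SLAM/ORB_SLAM3")' > "$F"
sed -i 's|~/Install/ORB_SLAM/ORB_SLAM3|'"$HOME"'/ORB_SLAM3|' "$F"
cat "$F"
```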
Then start the build:
cd ~/orbslam_ws
colcon build --symlink-install --packages-select orbslam3
Possible problem: sophus/se3.hpp cannot be found.
Solution: go back to the ORB-SLAM3 folder from step 4 and install the bundled Sophus from its build directory:
cd <ORB_SLAM3_ROOT_DIR>/Thirdparty/Sophus/build
sudo make install
This guide uses a ZED2 camera as the vision sensor.
1. Install the ZED SDK
SDK download: https://www.stereolabs.com/developers/release/
sudo chmod +x ZED_SDK_Tegra_L4T35.1_v3.7.7.run
# Run the ZED SDK installer
./ZED_SDK_Tegra_L4T35.1_v3.7.7.run
Verify that the installation succeeded:
cd /usr/local/zed/tools
# Run one of the tools bundled with the SDK
./ZED_Depth_Viewer
If you can see an image, the installation succeeded. In my testing it worked at first and then stopped working later; if the same happens to you, do not panic: run the tool below instead, and if that one works, the SDK is fine:
./ZED_Explorer
2. Install zed-ros2-wrapper (a colcon package)
Repository: https://github.com/stereolabs/zed-ros2-wrapper
mkdir -p zed_ws/src
cd zed_ws/src
git clone --recursive https://github.com/stereolabs/zed-ros2-wrapper.git
# Download the image_common patch needed for Ubuntu 20.04
git clone https://github.com/ros-perception/image_common.git --branch 3.0.0
cd ..
rosdep install --from-paths src --ignore-src -r -y
# Be sure to use the --symlink-install option
colcon build --symlink-install
echo "source $(pwd)/install/local_setup.bash" >> ~/.bashrc
source ~/.bashrc
sudo apt remove ros-foxy-image-transport-plugins ros-foxy-compressed-depth-image-transport ros-foxy-compressed-image-transport
A problem the official docs warn about: if "rosdep" is missing you can install it with:
sudo apt-get install python-rosdep python-rosinstall-generator python-vcstool python-rosinstall build-essential
I never ran into this problem myself.
Also note: after all the commands have finished, you must restart the terminal for the changes to take effect.
1. Launch the ZED node
ZED:
ros2 launch zed_wrapper zed.launch.py
ZED Mini:
ros2 launch zed_wrapper zedm.launch.py
ZED2:
ros2 launch zed_wrapper zed2.launch.py
ZED2i:
ros2 launch zed_wrapper zed2i.launch.py
I only tested the zed2 launch file, but the other devices should behave the same way.
2. Change the subscribed topic names in ORB-SLAM3-ROS2
# List the topics currently being published
ros2 topic list
Then change the topic names subscribed in the ORB-SLAM3-ROS2 source to the ones published by the ZED node (the stereo-inertial example is shown below; the other modes work the same way):
subImu_ = this->create_subscription<ImuMsg>("zed2/zed_node/imu/data_raw", 1000, std::bind(&StereoInertialNode::GrabImu, this, _1));
subImgLeft_ = this->create_subscription<ImageMsg>("zed2/zed_node/left_raw/image_raw_color", 100, std::bind(&StereoInertialNode::GrabImageLeft, this, _1));
subImgRight_ = this->create_subscription<ImageMsg>("zed2/zed_node/right_raw/image_raw_color", 100, std::bind(&StereoInertialNode::GrabImageRight, this, _1));
After making the changes, remember to rebuild:
cd ~/orbslam_ws
colcon build --symlink-install
Next, prepare the camera configuration file zed2.yaml. The one given below is for the ZED2! (This configuration runs the mono and stereo modes successfully; stereo-inertial mode is still being tested.)
Remember: configuration files are not interchangeable between different cameras.
%YAML:1.0
#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
# These are taken from the Camera Information in Python API of ZED
#--------------------------------------------------------------------------------------------
Camera.type: "PinHole"
# Camera calibration and distortion parameters (OpenCV)
Camera.fx: 524.650390625
Camera.fy: 524.650390625
Camera.cx: 636.3681030273438
Camera.cy: 359.5227966308594
Camera.k1: 0.0
Camera.k2: 0.0
Camera.k3: 0.0
Camera.p1: 0.0
Camera.p2: 0.0
Camera.bFishEye: 0
Camera.width: 1280
Camera.height: 720
# Camera frames per second
Camera.fps: 10.0
# stereo baseline in meters times fx
Camera.bf: 62.92245016 # baseline 0.1199321517944336 m * fx 524.65039
# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1
# Close/Far threshold. Baseline times.
ThDepth: 45.0
DepthMapFactor: 2500.0
# Transformation from camera to body-frame (imu)
# Taken from ROS after the camera is opened (ROS DEFAULT COORDINATE SYSTEM)
# # kalibr imu2cam
# Tbc: !!opencv-matrix
# rows: 4
# cols: 4
# dt: f
# data: [ 0.00486182, -0.99997499, -0.00513631, 0.02426753,
# -0.00410027, 0.00511639, -0.9999785, 0.00009415,
# 0.99997978, 0.00488277, -0.00407529, -0.02679442,
# 0.0, 0.0, 0.0, 1.0, ]
# kalibr cam2imu
Tbc: !!opencv-matrix
rows: 4
cols: 4
dt: f
data: [ 0.00486182, -0.00410027, 0.99997978, 0.02667628,
-0.99997499, 0.00511639, 0.00488277, 0.02439728,
-0.00513631, -0.9999785, -0.00407529, 0.0001096,
0.0, 0.0, 0.0, 1.0, ]
# IMU noise
IMU.NoiseGyro: 0.007000000216066837 #0.028 #0.007000000216066837
IMU.NoiseAcc: 0.016 #0.0015999999595806003 #0.006 #0.0015999999595806003
IMU.GyroWalk: 0.0019474000437185168 #0.008 #0.0019474000437185168
IMU.AccWalk: 0.0002508999896235764 #0.001 #0.0002508999896235764
IMU.Frequency: 100
#--------------------------------------------------------------------------------------------
# ORB Parameters
#--------------------------------------------------------------------------------------------
# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 1000
# ORB Extractor: Scale factor between levels in the scale pyramid
ORBextractor.scaleFactor: 1.2
# ORB Extractor: Number of levels in the scale pyramid
ORBextractor.nLevels: 8
# ORB Extractor: Fast threshold
# Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
# Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
# You can lower these values if your images have low contrast
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7
#--------------------------------------------------------------------------------------------
# Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.05
Viewer.KeyFrameLineWidth: 1
Viewer.GraphLineWidth: 0.9
Viewer.PointSize: 2
Viewer.CameraSize: 0.08
Viewer.CameraLineWidth: 3
Viewer.ViewpointX: 0
Viewer.ViewpointY: -0.7
Viewer.ViewpointZ: -1.8
Viewer.ViewpointF: 500
PointCloudMapping.Resolution: 0.01
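The Camera.bf entry above is the stereo baseline in meters multiplied by fx (the ZED2 baseline is about 120 mm). A quick arithmetic check of the value used:

```shell
# Sanity-check Camera.bf = stereo baseline (m) * fx for the values above:
# the ZED2 baseline is 119.9321517944336 mm, i.e. 0.1199321517944336 m.
python3 -c "print(0.1199321517944336 * 524.650390625)"
```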
Once everything is done, start ORB-SLAM3-ROS2:
source ~/orbslam_ws/install/local_setup.bash
MONO mode:
ros2 run orbslam3 mono PATH_TO_VOCABULARY PATH_TO_YAML_CONFIG_FILE
STEREO mode:
ros2 run orbslam3 stereo PATH_TO_VOCABULARY PATH_TO_YAML_CONFIG_FILE BOOL_RECTIFY
RGBD mode (I have not tried this one; feel free to experiment):
ros2 run orbslam3 rgbd PATH_TO_VOCABULARY PATH_TO_YAML_CONFIG_FILE
STEREO-INERTIAL mode:
ros2 run orbslam3 stereo-inertial PATH_TO_VOCABULARY PATH_TO_YAML_CONFIG_FILE BOOL_RECTIFY [BOOL_EQUALIZE]
Where:
PATH_TO_VOCABULARY is orbslam3_ros2/vocabulary/ORBvoc.txt
PATH_TO_YAML_CONFIG_FILE is the path of the zed2.yaml file prepared above
Note: this guide collects material from many sources around the web and quotes a number of blog posts; I can no longer tell exactly which parts came from where, so individual citations are omitted.
Addendum (September 28):
The STEREO-INERTIAL mode source code does not time-align the left and right images produced by real hardware (such as a ZED2 camera), so running ORB-SLAM3 on the data published by a ZED2 keeps failing with a "big time difference" error and then exits.
To fix this, modify stereo-inertial-node.cpp, stereo-inertial-node.hpp, and CMakeLists.txt as follows:
stereo-inertial-node.cpp
#include "stereo-inertial-node.hpp"
#include <opencv2/core/core.hpp>
using std::placeholders::_1;
StereoInertialNode::StereoInertialNode(ORB_SLAM3::System *SLAM, const string &strSettingsFile, const string &strDoRectify, const string &strDoEqual) :
    Node("ORB_SLAM3_ROS2"),
    SLAM_(SLAM)
{
    stringstream ss_rec(strDoRectify);
    ss_rec >> boolalpha >> doRectify_;
    stringstream ss_eq(strDoEqual);
    ss_eq >> boolalpha >> doEqual_;
    bClahe_ = doEqual_;
    std::cout << "Rectify: " << doRectify_ << std::endl;
    std::cout << "Equal: " << doEqual_ << std::endl;

    if (doRectify_)
    {
        // Load settings related to stereo calibration
        cv::FileStorage fsSettings(strSettingsFile, cv::FileStorage::READ);
        if (!fsSettings.isOpened())
        {
            cerr << "ERROR: Wrong path to settings" << endl;
            assert(0);
        }

        cv::Mat K_l, K_r, P_l, P_r, R_l, R_r, D_l, D_r;
        fsSettings["LEFT.K"] >> K_l;
        fsSettings["RIGHT.K"] >> K_r;
        fsSettings["LEFT.P"] >> P_l;
        fsSettings["RIGHT.P"] >> P_r;
        fsSettings["LEFT.R"] >> R_l;
        fsSettings["RIGHT.R"] >> R_r;
        fsSettings["LEFT.D"] >> D_l;
        fsSettings["RIGHT.D"] >> D_r;
        int rows_l = fsSettings["LEFT.height"];
        int cols_l = fsSettings["LEFT.width"];
        int rows_r = fsSettings["RIGHT.height"];
        int cols_r = fsSettings["RIGHT.width"];
        if (K_l.empty() || K_r.empty() || P_l.empty() || P_r.empty() || R_l.empty() || R_r.empty() || D_l.empty() || D_r.empty() ||
            rows_l == 0 || rows_r == 0 || cols_l == 0 || cols_r == 0)
        {
            cerr << "ERROR: Calibration parameters to rectify stereo are missing!" << endl;
            assert(0);
        }
        cv::initUndistortRectifyMap(K_l, D_l, R_l, P_l.rowRange(0, 3).colRange(0, 3), cv::Size(cols_l, rows_l), CV_32F, M1l_, M2l_);
        cv::initUndistortRectifyMap(K_r, D_r, R_r, P_r.rowRange(0, 3).colRange(0, 3), cv::Size(cols_r, rows_r), CV_32F, M1r_, M2r_);
    }

    subImu_ = this->create_subscription<ImuMsg>("imu", 1000, std::bind(&StereoInertialNode::GrabImu, this, _1));
    subImgLeft_ = std::make_shared<message_filters::Subscriber<ImageMsg> >(shared_ptr<rclcpp::Node>(this), "camera/left");
    subImgRight_ = std::make_shared<message_filters::Subscriber<ImageMsg> >(shared_ptr<rclcpp::Node>(this), "camera/right");
    syncApproximate = std::make_shared<message_filters::Synchronizer<approximate_sync_policy> >(approximate_sync_policy(10), *subImgLeft_, *subImgRight_);
    syncApproximate->registerCallback(&StereoInertialNode::GrabImage, this);
    syncThread_ = new std::thread(&StereoInertialNode::SyncWithImu, this);
}
StereoInertialNode::~StereoInertialNode()
{
    // Delete sync thread
    syncThread_->join();
    delete syncThread_;
    // Stop all threads
    SLAM_->Shutdown();
    // Save camera trajectory
    SLAM_->SaveKeyFrameTrajectoryTUM("KeyFrameTrajectory.txt");
}

void StereoInertialNode::GrabImu(const ImuMsg::SharedPtr msg)
{
    bufMutex_.lock();
    imuBuf_.push(msg);
    bufMutex_.unlock();
}

void StereoInertialNode::GrabImage(const ImageMsg::SharedPtr msgLeft, const ImageMsg::SharedPtr msgRight)
{
    bufMutexLeft_.lock();
    if (!imgLeftBuf_.empty())
        imgLeftBuf_.pop();
    imgLeftBuf_.push(msgLeft);
    bufMutexLeft_.unlock();

    bufMutexRight_.lock();
    if (!imgRightBuf_.empty())
        imgRightBuf_.pop();
    imgRightBuf_.push(msgRight);
    bufMutexRight_.unlock();
}
cv::Mat StereoInertialNode::GetImage(const ImageMsg::SharedPtr msg)
{
    // Copy the ros image message to cv::Mat.
    cv_bridge::CvImageConstPtr cv_ptr;
    try
    {
        cv_ptr = cv_bridge::toCvShare(msg, sensor_msgs::image_encodings::MONO8);
    }
    catch (cv_bridge::Exception &e)
    {
        RCLCPP_ERROR(this->get_logger(), "cv_bridge exception: %s", e.what());
    }

    if (cv_ptr->image.type() == 0)
    {
        return cv_ptr->image.clone();
    }
    else
    {
        std::cerr << "Error image type" << std::endl;
        return cv_ptr->image.clone();
    }
}
void StereoInertialNode::SyncWithImu()
{
    // const double maxTimeDiff = 0.01;
    while (1)
    {
        cv::Mat imLeft, imRight;
        double tImLeft = 0, tImRight = 0;
        if (!imgLeftBuf_.empty() && !imgRightBuf_.empty() && !imuBuf_.empty())
        {
            tImLeft = Utility::StampToSec(imgLeftBuf_.front()->header.stamp);
            tImRight = Utility::StampToSec(imgRightBuf_.front()->header.stamp);

            if (tImLeft > Utility::StampToSec(imuBuf_.back()->header.stamp))
                continue;

            bufMutexLeft_.lock();
            imLeft = GetImage(imgLeftBuf_.front());
            imgLeftBuf_.pop();
            bufMutexLeft_.unlock();

            bufMutexRight_.lock();
            imRight = GetImage(imgRightBuf_.front());
            imgRightBuf_.pop();
            bufMutexRight_.unlock();

            vector<ORB_SLAM3::IMU::Point> vImuMeas;
            bufMutex_.lock();
            if (!imuBuf_.empty())
            {
                // Load imu measurements from buffer
                vImuMeas.clear();
                while (!imuBuf_.empty() && Utility::StampToSec(imuBuf_.front()->header.stamp) <= tImLeft)
                {
                    double t = Utility::StampToSec(imuBuf_.front()->header.stamp);
                    cv::Point3f acc(imuBuf_.front()->linear_acceleration.x, imuBuf_.front()->linear_acceleration.y, imuBuf_.front()->linear_acceleration.z);
                    cv::Point3f gyr(imuBuf_.front()->angular_velocity.x, imuBuf_.front()->angular_velocity.y, imuBuf_.front()->angular_velocity.z);
                    vImuMeas.push_back(ORB_SLAM3::IMU::Point(acc, gyr, t));
                    imuBuf_.pop();
                }
            }
            bufMutex_.unlock();

            if (bClahe_)
            {
                clahe_->apply(imLeft, imLeft);
                clahe_->apply(imRight, imRight);
            }

            if (doRectify_)
            {
                cv::remap(imLeft, imLeft, M1l_, M2l_, cv::INTER_LINEAR);
                cv::remap(imRight, imRight, M1r_, M2r_, cv::INTER_LINEAR);
            }

            SLAM_->TrackStereo(imLeft, imRight, tImLeft, vImuMeas);

            std::chrono::milliseconds tSleep(1);
            std::this_thread::sleep_for(tSleep);
        }
    }
}
stereo-inertial-node.hpp
#ifndef __STEREO_INERTIAL_NODE_HPP__
#define __STEREO_INERTIAL_NODE_HPP__
#include "rclcpp/rclcpp.hpp"
#include "sensor_msgs/msg/image.hpp"
#include "sensor_msgs/msg/imu.hpp"
#include "message_filters/subscriber.h"
#include "message_filters/synchronizer.h"
#include "message_filters/sync_policies/approximate_time.h"
#include <cv_bridge/cv_bridge.h>
#include "System.h"
#include "Frame.h"
#include "Map.h"
#include "Tracking.h"
#include "utility.hpp"
using ImuMsg = sensor_msgs::msg::Imu;
using ImageMsg = sensor_msgs::msg::Image;
class StereoInertialNode : public rclcpp::Node
{
public:
    StereoInertialNode(ORB_SLAM3::System* pSLAM, const string &strSettingsFile, const string &strDoRectify, const string &strDoEqual);
    ~StereoInertialNode();

private:
    void GrabImu(const ImuMsg::SharedPtr msg);
    // void GrabImageLeft(const ImageMsg::SharedPtr msgLeft);
    // void GrabImageRight(const ImageMsg::SharedPtr msgRight);
    cv::Mat GetImage(const ImageMsg::SharedPtr msg);
    void SyncWithImu();
    void GrabImage(const ImageMsg::SharedPtr msgLeft, const ImageMsg::SharedPtr msgRight);

    typedef message_filters::sync_policies::ApproximateTime<sensor_msgs::msg::Image, sensor_msgs::msg::Image> approximate_sync_policy;

    rclcpp::Subscription<ImuMsg>::SharedPtr subImu_;
    std::shared_ptr<message_filters::Subscriber<sensor_msgs::msg::Image> > subImgLeft_;
    std::shared_ptr<message_filters::Subscriber<sensor_msgs::msg::Image> > subImgRight_;
    std::shared_ptr<message_filters::Synchronizer<approximate_sync_policy> > syncApproximate;

    ORB_SLAM3::System *SLAM_;
    std::thread *syncThread_;

    // IMU
    queue<ImuMsg::SharedPtr> imuBuf_;
    std::mutex bufMutex_;

    // Image
    queue<ImageMsg::SharedPtr> imgLeftBuf_, imgRightBuf_;
    std::mutex bufMutexLeft_, bufMutexRight_;

    bool doRectify_;
    bool doEqual_;
    cv::Mat M1l_, M2l_, M1r_, M2r_;
    bool bClahe_;
    cv::Ptr<cv::CLAHE> clahe_ = cv::createCLAHE(3.0, cv::Size(8, 8));
};
#endif
CMakeLists.txt
cmake_minimum_required(VERSION 3.5)
project(orbslam3)
# You should set the PYTHONPATH to your own python site-packages path
set(ENV{PYTHONPATH} "/opt/ros/foxy/lib/python3.8/site-packages/")
set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} ${CMAKE_CURRENT_SOURCE_DIR}/CMakeModules)
# Default to C++14
if(NOT CMAKE_CXX_STANDARD)
  set(CMAKE_CXX_STANDARD 14)
endif()

if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
  add_compile_options(-Wall -Wextra -Wpedantic)
endif()
find_package(ament_cmake REQUIRED)
find_package(rclcpp REQUIRED)
find_package(sensor_msgs REQUIRED)
find_package(cv_bridge REQUIRED)
find_package(message_filters REQUIRED)
find_package(Sophus REQUIRED)
find_package(Pangolin REQUIRED)
find_package(ORB_SLAM3 REQUIRED)
include_directories(
  include
  ${ORB_SLAM3_ROOT_DIR}/include
  ${ORB_SLAM3_ROOT_DIR}/include/CameraModels
)

link_directories(
  include
)

add_executable(mono
  src/monocular/mono.cpp
  src/monocular/monocular-slam-node.cpp
)
ament_target_dependencies(mono rclcpp sensor_msgs cv_bridge ORB_SLAM3 Pangolin)

add_executable(rgbd
  src/rgbd/rgbd.cpp
  src/rgbd/rgbd-slam-node.cpp
)
ament_target_dependencies(rgbd rclcpp sensor_msgs cv_bridge message_filters ORB_SLAM3 Pangolin)

add_executable(stereo
  src/stereo/stereo.cpp
  src/stereo/stereo-slam-node.cpp
)
ament_target_dependencies(stereo rclcpp sensor_msgs cv_bridge message_filters ORB_SLAM3 Pangolin)

add_executable(stereo-inertial
  src/stereo-inertial/stereo-inertial.cpp
  src/stereo-inertial/stereo-inertial-node.cpp
)
ament_target_dependencies(stereo-inertial rclcpp sensor_msgs cv_bridge message_filters ORB_SLAM3 Pangolin)

install(TARGETS mono rgbd stereo stereo-inertial
  DESTINATION lib/${PROJECT_NAME})
# Install launch files.
#install(DIRECTORY launch config vocabulary
# DESTINATION share/${PROJECT_NAME}/)
ament_package()
After the modifications, rebuild once more:
cd ~/orbslam_ws
colcon build --symlink-install
and the stereo-inertial mode will run.