Compared with Pangolin: the downside is that it is not easy to add controls (buttons, sliders, etc.); the upside is that it is simple and convenient for debugging.
Let's start with a minimal example program that creates a window and displays a coordinate system:
//create the visualization window
viz::Viz3d window1("window1");
//construct a coordinate-system widget and show it in the window
window1.showWidget("Coordinate", viz::WCoordinateSystem());
//start the event loop so the window stays open
window1.spin();
The first statement creates the window.
Very simple: the type is Viz3d and the parameter is the window name.
The second statement shows a widget in the window.
In the viz module, everything displayed inside a window is a widget (Widget); here showWidget() is called to display a widget in the window.
Let's look at the definition of showWidget():
/** @brief Shows a widget in the window.
@param id A unique id for the widget.
@param widget The widget to be displayed in the window.
@param pose Pose of the widget.
*/
void showWidget(const String &id, const Widget &widget, const Affine3d &pose = Affine3d::Identity());
Now the three parameters:
id: this gives the widget a unique name that is used later to address this particular widget; it is not, as one might assume, a label shown next to the widget in the window (if it were something like an axis label you would see it, but when you run the program the window shows only the axes and no name). A small sketch of how the id is reused follows this list.
widget: naturally, the widget to display. Here viz::WCoordinateSystem() creates a coordinate-system widget on the spot; the WCoordinateSystem class derives from Widget3D, so in essence it is also a widget and can be shown in the window.
There is also a pose parameter with a default value. The example does not pass it, so the default is used; Affine3d is covered later.
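To make the role of the id concrete, here is a minimal sketch (the widget name and pose values are chosen arbitrarily) of how the same id is used afterwards to move or remove the widget:
// the id passed to showWidget() is the handle used to address the widget later on
window1.showWidget("Coordinate", viz::WCoordinateSystem());
// reposition the widget registered under this id (here: translate it by 1 along x)
window1.setWidgetPose("Coordinate", Affine3d(Vec3d(0, 0, 0), Vec3d(1, 0, 0)));
// or remove it from the window entirely
window1.removeWidget("Coordinate");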
The third statement keeps the window open.
spin() starts an event loop that runs forever. The practical effect is that the picture stays on screen; without this line the window would flash for a moment and then disappear.
Two things are worth noting about spin():
First, spin() really does block right there; the statements after it do not run until you press the q or e key.
Second, there is a variant: void spinOnce(int time = 1, bool force_redraw = false); It runs the event loop for time milliseconds, i.e. how long this statement pauses; I have not figured out what the second parameter force_redraw is for, and in my tests it made no visible difference. An example program using this function appears later.
OK, here is what this simplest window looks like when run: [CMakeLists: link against libopencv_viz.so]
viz::Viz3d myWindow("Coordinate Frame"); /*create a visualization window*/
myWindow.showWidget("Coordinate Widget", viz::WCoordinateSystem());/*axis-to-color mapping of the window's coordinate system: XYZ -> RGB;*/
XYZ -> RGB (right-handed system: X red -> right, Y green -> up, Z blue -> outward);
#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/viz.hpp>
using namespace std;
using namespace cv;
// EXP 2: rotate the coordinate system about the Y axis and show it in a loop
int main()
{
viz::Viz3d window("window");
//create a 1x3 rotation vector
Mat rvec = Mat::zeros(1, 3, CV_32F);//declared outside the loop
//the essence of the animation: adjust the widget pose and redraw in a loop, as long as the window has not been stopped (i.e. q or e has not been pressed)
while (!window.wasStopped()) {
rvec.at<float>(0, 0) = 0.f;
rvec.at<float>(0, 1) += CV_PI * 0.01f; // rotation about the Y axis
rvec.at<float>(0, 2) = 0.f;
Mat rmat;
//Rodrigues formula: convert the rotation (Rodrigues) vector into a rotation matrix
Rodrigues(rvec, rmat);
//build an affine-transform pose; for now treat this type as OpenCV's pose type: two Mat parameters, one rotation and one translation
Affine3f pose(rmat, Vec3f(0, 0, 0));
//this is the core statement that makes the whole visualization window animate:
//in plain terms, the loop keeps adjusting the pose of the coordinate-system widget above, which produces the animation
//the widget ID is used here to indicate that it is the coordinate system whose pose is adjusted
window.showWidget("Coordinate", viz::WCoordinateSystem(), pose);
//control how long each frame stays on screen; changing time changes how fast it spins, since it is essentially the display time of each pose
window.spinOnce(1, false);
}
return 0;
}
OK, now let's add something:
viz::Viz3d window("window");
window.showWidget("Coordinate", viz::WCoordinateSystem());
viz::WPlane plane;
window.showWidget("plane", plane);
window.spin();
WPlane is the plane class, also derived from Widget3D. Here we create a plane widget (leaving all of its parameters at the default values) and add it to the window; the result looks like this:
OK, there is now an extra white square plane.
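If the defaults are not what you want, WPlane can also be given an explicit size and color when it is constructed; a small sketch, with an arbitrary 2x2 size and red color:
//create a 2x2 plane drawn in red instead of the default 1x1 white plane
viz::WPlane plane(Size2d(2.0, 2.0), viz::Color::red());
window.showWidget("plane", plane);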
int main()
{
// viz::Viz3d window("window");
// window.showWidget("Coordinate", viz::WCoordinateSystem());
// viz::WPlane plane;
// window.showWidget("plane", plane);
// window.spin();
viz::Viz3d window("window");
window.showWidget("Coordinate", viz::WCoordinateSystem());
viz::WPlane plane; //create the plane
window.showWidget("plane", plane);//add the plane to the window under the ID "plane"
Mat rvec = Mat::zeros(1, 3, CV_32F); //create a 1x3 rotation vector
//the essence of the animation: adjust the widget pose and redraw in a loop, as long as the window has not been stopped (i.e. q or e has not been pressed)
while(!window.wasStopped())
{
rvec.at<float>(0, 0) = 0.f;
rvec.at<float>(0, 1) += CV_PI * 0.01f;
rvec.at<float>(0, 2) = 0.f;
Mat rmat;
//Rodrigues formula: convert the rotation (Rodrigues) vector into a rotation matrix
Rodrigues(rvec, rmat);
//build an affine-transform pose; two parameters, one rotation and one translation
Affine3f pose(rmat, Vec3f(0, 0, 0));
//this is the core statement that makes the whole visualization window animate:
//the loop keeps adjusting the pose of the plane widget above, which produces the animation
//the plane's ID is used here to indicate that it is the plane whose pose is adjusted
window.setWidgetPose("plane", pose);
//control how long each frame stays on screen; changing time changes how fast the plane spins, since it is essentially the display time of each pose
window.spinOnce(1, false);
}
return 0;
}
The main difference is the loop added below.
We define a rotation vector rvec, call Rodrigues() to convert it into the rotation matrix rmat, build a pose from rmat, and finally use setWidgetPose() with that pose to adjust the widget's pose; running this in a loop makes the plane appear to rotate.
A word about the Affine3f pose parameter: Affine3 is a 3D affine transformation class. Its constructors, from the source:
Affine3();
//! Augmented affine matrix
Affine3(const Mat4& affine);
//! Rotation matrix
Affine3(const Mat3& R, const Vec3& t = Vec3::all(0));
//! Rodrigues vector
Affine3(const Vec3& rvec, const Vec3& t = Vec3::all(0));
//! Combines all constructors above. Supports 4x4, 4x3, 3x3, 1x3, 3x1 sizes of data matrix
explicit Affine3(const Mat& data, const Vec3& t = Vec3::all(0));
//! From 16th element array
explicit Affine3(const float_type* vals);
The default constructor needs no explanation. From the other constructors it is easy to see that this class is simply a transform T, used here to represent a pose. It can be constructed in several ways: directly from a 4x4 matrix, from a rotation vector plus a translation vector, from a rotation matrix plus a translation vector, through the open-ended constructor (where the rotation part is whatever matrix you pass in as a cv::Mat), or from a 16-element array. The example above uses the open-ended constructor.
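To make the list above concrete, here is a small sketch of the different construction routes (identity rotations and arbitrary translations, purely for illustration):
//from a rotation matrix plus a translation vector
Matx33f R = Matx33f::eye();
Affine3f pose_from_R(R, Vec3f(1.f, 0.f, 0.f));
//from a Rodrigues (rotation) vector plus a translation vector
Affine3f pose_from_rvec(Vec3f(0.f, float(CV_PI / 4), 0.f), Vec3f(0.f, 0.f, 1.f));
//from a full 4x4 augmented matrix
Matx44f T44 = Matx44f::eye();
Affine3f pose_from_T(T44);
//the open-ended constructor used in the example: the rotation part passed as a cv::Mat
Mat rmat = Mat::eye(3, 3, CV_32F);
Affine3f pose_from_mat(rmat, Vec3f(0.f, 0.f, 0.f));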
Once the pose is built, the loop plus window.setWidgetPose("plane", pose); keeps updating the widget's pose data, and the picture comes alive. Since an animation cannot be embedded here, a screenshot has to do: the white plane rotates around the green (upward) coordinate axis:
Drawing a single 3D point:
// EXP 4: display a single 3D point
int main(){
viz::Viz3d myWindow("Coordinate Frame"); /*create a visualization window*/
myWindow.showWidget("Coordinate Widget", viz::WCoordinateSystem());/*axis-to-color mapping XYZ -> RGB; right-handed system*/
Mat cloud0(1, 1, CV_32FC3);
cloud0.at<Vec3f>(0, 0)[0] = 4.0f;
cloud0.at<Vec3f>(0, 0)[1] = 4.0f;
cloud0.at<Vec3f>(0, 0)[2] = 4.0f;
viz::WCloud cloud_widget(cloud0, viz::Color::green());
myWindow.showWidget("simple_cloud", cloud_widget);
myWindow.spin();
return 0;
}
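Before loading a cloud from a file, note that WCloud also accepts an in-memory container of points, e.g. a std::vector of Point3f; a minimal sketch with three arbitrary points:
//a tiny cloud of three hand-written points, shown in green
vector<Point3f> pts;
pts.push_back(Point3f(1.f, 0.f, 0.f));
pts.push_back(Point3f(0.f, 1.f, 0.f));
pts.push_back(Point3f(0.f, 0.f, 1.f));
viz::WCloud vec_cloud_widget(pts, viz::Color::green());
myWindow.showWidget("vector_cloud", vec_cloud_widget);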
Drawing a point cloud with many 3D points
// EXP 4: display a multi-point 3D cloud; read a .ply file and convert it to a Mat
#include <iostream>
#include <fstream>
#include <opencv2/viz.hpp>
using namespace cv;
using namespace std;
static void help()
{
cout
<< "--------------------------------------------------------------------------" << endl
<< "This program shows how to use makeTransformToGlobal() to compute required pose,"
<< "how to use makeCameraPose and Viz3d::setViewerPose. You can observe the scene "
<< "from camera point of view (C) or global point of view (G)" << endl
<< "Usage:" << endl
<< "./transformations [ G | C ]" << endl
<< endl;
}
/* Learning goals
This example covers three things:
how to use makeTransformToGlobal() to compute the required pose
how to use makeCameraPose() and Viz3d::setViewerPose()
how to visualize a camera position with coordinate axes and a frustum
gluLookAt reference: https://blog.csdn.net/blues1021/article/details/51496427
*/
static Mat cvcloud_load()
{
Mat cloud(1, 1889, CV_32FC3); // a CV_32FC3 Mat used as the point-cloud storage
ifstream ifs("../bunny.ply");
string str;
for(size_t i = 0; i < 12; ++i)
getline(ifs, str);
Point3f* data = cloud.ptr<Point3f>();
float dummy1, dummy2;
for(size_t i = 0; i < 1889; ++i)
ifs >> data[i].x >> data[i].y >> data[i].z >> dummy1 >> dummy2;
cloud *= 5.0f;
return cloud;
}
int main(int argn, char **argv)
{
help();
if (argn < 2)
{
cout << "Missing arguments." << endl;
return 1;
}
bool camera_pov = (argv[1][0] == 'C');
// bool camera_pov = false;
viz::Viz3d myWindow("Coordinate Frame"); /*create a visualization window*/
myWindow.showWidget("Coordinate Widget", viz::WCoordinateSystem());/*axis-to-color mapping XYZ -> RGB; right-handed system*/
Vec3f cam_pos(3.0f,3.0f,3.0f), cam_focal_point(3.0f,3.0f,2.0f), cam_y_dir(-1.0f,0.0f,0.0f);
Affine3f cam_pose = viz::makeCameraPose(cam_pos, cam_focal_point, cam_y_dir);/*get the camera pose from the camera position, focal point and y direction*/
Affine3f transform = viz::makeTransformToGlobal(Vec3f(0.0f,-1.0f,0.0f), Vec3f(-1.0f,0.0f,0.0f), Vec3f(0.0f,0.0f,-1.0f), cam_pos);/*knowing the camera coordinate axes, get the transform to the global frame*/
Mat bunny_cloud = cvcloud_load(); /*create the cloud data from the bunny.ply file*/
viz::WCloud cloud_widget(bunny_cloud, viz::Color::green()); /*cloud widget, colored green*/
Affine3f cloud_pose = Affine3f().translate(Vec3f(0.0f,0.0f,3.0f)); // pose of the cloud in the camera frame: translated 3.0 along z
Affine3f cloud_pose_global = transform * cloud_pose;/*given the pose in the camera frame, compute the global pose*/
if (!camera_pov) // if the viewpoint is the global one, also visualize the camera coordinate frame and frustum
{
viz::WCameraPosition cpw(0.5); // Coordinate axes
viz::WCameraPosition cpw_frustum(Vec2f(0.889484, 0.523599)); // Camera frustum
myWindow.showWidget("CPW", cpw, cam_pose);
myWindow.showWidget("CPW_FRUSTUM", cpw_frustum, cam_pose);
}
myWindow.showWidget("bunny", cloud_widget, cloud_pose_global);/*visualize the cloud widget with the estimated global pose*/ /*id, cloud widget, pose*/
if (camera_pov) // if the viewpoint is the camera's, set the viewer pose to cam_pose
myWindow.setViewerPose(cam_pose);
myWindow.spin();
return 0;
}
1. The result seen from the camera's point of view
2. The result seen from the global point of view
The viz module mainly uses the Affine3f affine transformation to handle spatial transforms. The following example rotates a 3D point cloud, given in the world frame, around the z axis of the camera frame:
#include <iostream>
#include <fstream>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/viz.hpp>
using namespace std;
using namespace cv;
// load a ply file
// http://graphics.stanford.edu/data/3Dscanrep/
Mat cvcloud_load()
{
Mat cloud(1, 1889, CV_32FC3);
ifstream ifs("/home/ubuntu/Qt_program/slam/cv_viz/bunny.ply");
string str;
for(size_t i = 0; i < 12; ++i)
getline(ifs, str);
Point3f* data = cloud.ptr<Point3f>();
float dummy1, dummy2;
for(size_t i = 0; i < 1889; ++i)
ifs >> data[i].x >> data[i].y >> data[i].z >> dummy1 >> dummy2;
cloud *= 5.0f;
return cloud;
}
int main()
{
// step 1. construct window
viz::Viz3d window("mywindow");
window.showWidget("Coordinate Widget", viz::WCoordinateSystem());
// step 2. set the camera pose
Vec3f cam_position(3.0f, 3.0f, -3.0f), cam_focal_point(3.f, 3.f, -4.0f), cam_y_direc(-1.0f,0.0f,0.0f);
Affine3f cam_pose = viz::makeCameraPose(cam_position, cam_focal_point, cam_y_direc);
Affine3f transform = viz::makeTransformToGlobal(Vec3f(0.0f,-1.0f, 0.0f), Vec3f(-1.0f, 0.0f, 0.0f), Vec3f(0.0f, 0.0f, -1.0f), cam_position);
Mat bunny = cvcloud_load();
viz::WCloud bunny_cloud(bunny,viz::Color::green());
double z = 0.0f;
Affine3f cloud_pose_global;
while(!window.wasStopped())
{
z += CV_PI*0.01f;
cloud_pose_global = transform.inv()*Affine3f(Vec3f(0.0, 0.0, z), Vec3f(0.0, 0.0, 2.0))*Affine3f::Identity();
window.showWidget("bunny_cloud", bunny_cloud, cloud_pose_global);
// step 3. To show camera and frustum by pose
// scale is 0.5
viz::WCameraPosition camera(0.5);
// show the frustum by intrinsic matrix
viz::WCameraPosition camera_frustum(Matx33f(3.1,0,0.1,0,3.2,0.2,0,0,1));
window.showWidget("Camera", camera, cam_pose);
window.showWidget("Camera_frustum", camera_frustum, cam_pose);
window.spinOnce(1, true);
}
return 0;
}
CMakeLists.txt
cmake_minimum_required(VERSION 2.8)
set(CMAKE_CXX_COMPILER "g++")
set_property(GLOBAL PROPERTY USE_FOLDERS ON)
list(APPEND CMAKE_CXX_FLAGS "-O3 -DEBUG -ffast-math -Wall -pthread -fopenmp -std=c++11") #-DNDEBUG
project(test_viz3d)
# OpenCV
find_package(OpenCV 3.3.0 REQUIRED)
file(GLOB_RECURSE VTK_LIBRARIES "/usr/local/lib/libopencv_viz.so") # todo ++
include_directories(${OpenCV_INCLUDE_DIRS})
message("OpenCV_INCLUDE_DIRS: ${OpenCV_INCLUDE_DIRS}")
message("OpenCV_LIBS: ${OpenCV_LIBS}")
## VTK # todo --
#find_package(VTK REQUIRED)
#include(${VTK_USE_FILE})
#message("VTK_USE_FILE: ${VTK_USE_FILE}")
#message("VTK_LIBRARIES: ${VTK_LIBRARIES}")
# PROJECT
include_directories(include)
aux_source_directory(src DIR_SRCS)
add_executable(${PROJECT_NAME} ${DIR_SRCS})
target_link_libraries(${PROJECT_NAME} ${OpenCV_LIBS} ${VTK_LIBRARIES}) # todo ++
// EXP 6_2: display line segments, i.e. a trajectory: drawing a SLAM trajectory with opencv viz https://www.cnblogs.com/winslam/p/9244598.html
/*
Drawing with OpenCV viz: first create a window, then show the widgets you want to add.
The catch is that a widget only shows its current state in each frame and its history is not kept for you, so in every loop iteration you must add not only the widget's current state but also all of its historical states (the n-th iteration instantiates n widgets);
*/
#include <iostream>
#include <fstream>
#include <vector>
#include <string>
#include <Eigen/Geometry>
#include <opencv2/viz.hpp>
using namespace std;
using namespace cv;
int main()
{
ifstream fin("../data/KeyFrameTrajectory.txt");
if (!fin)
{
cerr << "error in opening the file!" << endl;
return 0;
}
// visualization
cv::viz::Viz3d vis("Visual Odometry");
cv::viz::WCoordinateSystem world_coor(1.0), camera_coor(0.5);
vis.setBackgroundColor(cv::viz::Color::black());// set the background color
// draw the trace
int i = 0;
Point3d point_begin(0.0, 0.0, 0.0);
Point3d point_end;
//cv::viz::WLine wline(cv::Point3f(0, 0, 0), (100, 100, 100), cv::Scalar(0, 0, 255));
cv::Point3d cam_pos(0, -1.0, -1.0), cam_focal_point(0, 0, 0), cam_y_dir(0, 1, 0);
cv::Affine3d cam_pose = cv::viz::makeCameraPose(cam_pos, cam_focal_point, cam_y_dir);
vis.setViewerPose(cam_pose);
world_coor.setRenderingProperty(cv::viz::LINE_WIDTH, 1.0); // ??
camera_coor.setRenderingProperty(cv::viz::LINE_WIDTH, 5.0);// ??
vis.showWidget("World", world_coor);
vis.showWidget("Camera", camera_coor);
vector<cv::viz::Color> count_color; // (unused in this example)
vector<Eigen::Isometry3d, Eigen::aligned_allocator<Eigen::Isometry3d>> poses;
vector<vector<double>> vecs;
for (int i = 0;; i++)
{
if (fin.eof())
{
cout << "read over " << endl;
break;
}
vector<double> vec(8, 0); // holds the eight values of one line
for (auto & d : vec)
{
fin >> d;// read one value into each element
}
vecs.push_back(vec);
}
if (false)
{
for (int j = 0; j < vecs.size(); j++)
{
for (int i = 1; i < 8; i++)// print the pose values
{
cout << vecs[j][i] << " ";
}
cout << endl;
}
}
for (int j = 0; j < vecs.size(); j++)
{
Eigen::Quaterniond q(vecs[j][7], vecs[j][4], vecs[j][5], vecs[j][6]);
Eigen::Isometry3d T(q);
T.pretranslate(Eigen::Vector3d(vecs[j][1], vecs[j][2], vecs[j][3]));
poses.push_back(T);
cv::Affine3d M(
cv::Affine3d::Mat3(
T(0, 0), T(0, 1), T(0, 2),
T(1, 0), T(1, 1), T(1, 2),
T(2, 0), T(2, 1), T(2, 2)
),
cv::Affine3d::Vec3(
T.translation()(0, 0),
T.translation()(1, 0),
T.translation()(2, 0)
)
);
//cout << "x = " << T.translation()(0, 0) << " z = " << T.translation()(1, 0) << endl;
//*******************************************************************************************
// draw the trajectory
//*******************************************************************************************
vector<viz::WLine> lines;
point_end = Point3d( // update the end point
T.translation()(0, 0),
T.translation()(1, 0),
T.translation()(2, 0)
);
viz::WLine line(point_begin, point_end, cv::viz::Color::green());
lines.push_back(line); // collect the lines from the first to the current one
// in each iteration, draw the lines from the first to the current one
for (vector<viz::WLine>::iterator iter = lines.begin(); iter != lines.end(); iter++)
{
string id = to_string(i); // each widget needs a unique id: on every iteration you add not only the current widget but also all historical ones (the n-th iteration instantiates the n-th widget)
vis.showWidget(id, *iter);
i++;
}
point_begin = point_end; // update the start point
vis.setWidgetPose("Camera", M);
vis.spinOnce(1, false);
}
vis.saveScreenshot("KeyFrameTrajectory.png");
return 1;
}
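As an aside, instead of managing one WLine widget per segment by hand, the viz module also ships a trajectory widget that takes the whole list of poses at once. A minimal sketch (the vector name is made up, and it is assumed to hold the cv::Affine3d poses, e.g. the M values computed in the loop above):
//collect the per-frame cv::Affine3d poses into one vector while reading the file
std::vector<cv::Affine3d> poses_affine;
// ... fill poses_affine ...
//PATH draws the connecting polyline; FRAMES would draw a small coordinate frame at each pose, BOTH draws both
viz::WTrajectory traj(poses_affine, viz::WTrajectory::PATH, 1.0, viz::Color::green());
vis.showWidget("trajectory", traj);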
1. A standalone coordinate frame (the commented WCoordinateSystem widget and the WCameraPosition axes below look the same; with no pose passed, the widget coincides with the viewer coordinate frame)
// exp1: camera_pose
int main() {
viz::Viz3d window("mywindow"); // step 1. construct window /*create a visualization window*/
// window.showWidget("Coordinate Widget", viz::WCoordinateSystem());/*axis-to-color mapping XYZ -> RGB; right-handed system*/
viz::WCameraPosition camera(0.5); //viz::WCameraPosition cpw(0.5); // Coordinate axes
window.showWidget("Camera", camera); //
window.spin();
return 0;
}
2. Coordinate axes plus a pose, plus the white frustum
viz::WCameraPosition camera_frustum(Matx33f(3.1, 0, 0.1, 0, 3.2, 0.2, 0, 0, 1)); //white frustum built from an intrinsic matrix
// viz::WCameraPosition camera_frustum(Vec2f(0.889484, 0.523599)); //camera_frustum/cpw_frustum
window.showWidget("Camera_frustum", camera_frustum, cam_pose);
Code: https://github.com/xiaopengsu/cv_viz
Pangolin is a lightweight library that wraps OpenGL for input/output and video display. It can be used for visualization in 3D vision and 3D navigation, accepts various types of video input, and can record the video and input data for debugging.
Here we use Pangolin to draw the camera trajectory (including its orientation).
Dataset layout, data.csv:
#timestamp, p_RS_R_x [m], p_RS_R_y [m], p_RS_R_z [m], q_RS_w [], q_RS_x [], q_RS_y [], q_RS_z [], v_RS_R_x [m s^-1], v_RS_R_y [m s^-1], v_RS_R_z [m s^-1], b_w_RS_S_x [rad s^-1], b_w_RS_S_y [rad s^-1], b_w_RS_S_z [rad s^-1], b_a_RS_S_x [m s^-2], b_a_RS_S_y [m s^-2], b_a_RS_S_z [m s^-2]
1403636580838555648,4.688319,-1.786938,0.783338,0.534108,-0.153029,-0.827383,-0.082152,-0.027876,0.033207,0.800006,-0.003172,0.021267,0.078502,-0.025266,0.136696,0.075593
1403636580843555328,4.688177,-1.786770,0.787350,0.534640,-0.152990,-0.826976,-0.082863,-0.029272,0.033992,0.804771,-0.003172,0.021267,0.078502,-0.025266,0.136696,0.075593
1403636580848555520,4.688028,-1.786598,0.791382,0.535178,-0.152945,-0.826562,-0.083605,-0.030043,0.034999,0.808240,-0.003172,0.021267,0.078502,-0.025266,0.136696,0.075593
1403636580853555456,4.687878,-1.786421,0.795429,0.535715,-0.152884,-0.826146,-0.084391,-0.030230,0.035853,0.810462,-0.003172,0.021267,0.078502,-0.025266,0.136696,0.075593
1403636580858555648,4.687727,-1.786240,0.799484,0.536244,-0.152821,-0.825731,-0.085213,-0.029905,0.036316,0.811406,-0.003172,0.021267,0.078502,-0.025266,0.136696,0.075593
1403636580863555328,4.687579,-1.786059,0.803540,0.536768,-0.152768,-0.825314,-0.086049,-0.029255,0.036089,0.811225,-0.003172,0.021267,0.078502,-0.025266,0.136696,0.075593
1403636580868555520,4.687435,-1.785881,0.807594,0.537289,-0.152725,-0.824896,-0.086890,-0.028469,0.035167,0.810357,-0.003172,0.021267,0.078502,-0.025266,0.136696,0.075593
1403636580873555456,4.687295,-1.785709,0.811642,0.537804,-0.152680,-0.824481,-0.087725,-0.027620,0.033777,0.808910,-0.003172,0.021267,0.078502,-0.025266,0.136696,0.075593
1403636580878555648,4.687158,-1.785544,0.815682,0.538317,-0.152627,-0.824067,-0.088553,-0.026953,0.031990,0.806951,-0.003172,0.021267,0.078502,-0.025266,0.136696,0.075593
The first eight fields are the timestamp, x, y, z, and the four quaternion components q_w, q_x, q_y, q_z.
#include <iostream>
#include <fstream>
#include <sstream>
#include <unistd.h>
#include <sophus/se3.h>
#include <pangolin/pangolin.h>
using namespace std;
typedef vector<Sophus::SE3, Eigen::aligned_allocator<Sophus::SE3>> VecSE3;
typedef vector<Eigen::Vector3d, Eigen::aligned_allocator<Eigen::Vector3d>> VecVec3d;
string file = "../data/KeyFrameTrajectory.txt"; //string file = "./data.csv";
void Draw(const VecSE3 &poses);
int main(int argc, char **argv)
{
//read the poses
VecSE3 poses;
VecVec3d points;
ifstream fin(file);//位姿
string lineStr;
int j = 0;
while(getline(fin,lineStr))// one line at a time
{
j+=1;// keep one pose out of every 100
if (j%100 != 0 )
continue;
//cout << lineStr << endl;
stringstream intg(lineStr); // parse one line
vector<double> data(8, 0); // only the first eight values are used
int i = 0;
while (intg && i < 8)
{
intg >> data[i];
i += 1;
}
poses.push_back(Sophus::SE3(
//eigen.tuxfamily.org/dox-devel/classEigen_1_1Quaternion.html
Eigen::Quaterniond(data[4], data[5], data[6], data[7]),// quaternion
Eigen::Vector3d(data[1], data[2], data[3])// translation
));
//cout << data[1] << " " << data[2] << " " << data[3] << " " << data[4] << endl;
}
Draw(poses);
return 0;
}
void Draw(const VecSE3 &poses)
{
if (poses.empty()) {
cerr << "poses are empty!" << endl;
return;
}
// create the pangolin window and set up the viewing camera (window size and view parameters here are typical values; adjust as needed)
pangolin::CreateWindowAndBind("Trajectory Viewer", 1024, 768);
glEnable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
pangolin::OpenGlRenderState s_cam(
pangolin::ProjectionMatrix(1024, 768, 500, 500, 512, 389, 0.1, 1000),
pangolin::ModelViewLookAt(0, -0.1, -1.8, 0, 0, 0, 0.0, -1.0, 0.0)
);
pangolin::View &d_cam = pangolin::CreateDisplay()
.SetBounds(0.0, 1.0, 0.0, 1.0, -1024.0f / 768.0f)
.SetHandler(new pangolin::Handler3D(s_cam));
while (!pangolin::ShouldQuit())
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
d_cam.Activate(s_cam);
// draw each pose as a small camera wireframe
for (size_t i = 0; i < poses.size(); i++)
{
glPushMatrix();
Eigen::Matrix4f m = poses[i].matrix().cast<float>();
glMultMatrixf((GLfloat *) m.data());
const float w = 0.25;
const float h = w*0.75;
const float z = w*0.6;
glColor3f(1, 0, 0);
glLineWidth(2);
glBegin(GL_LINES);
//draw the camera model (a wireframe pyramid)
glVertex3f(0, 0, 0);
glVertex3f(w,h,z);
glVertex3f(0, 0, 0);
glVertex3f(w,-h,z);
glVertex3f(0, 0, 0);
glVertex3f(-w,-h,z);
glVertex3f(0, 0, 0);
glVertex3f(-w,h,z);
glVertex3f(w,h,z);
glVertex3f(w,-h,z);
glVertex3f(-w,h,z);
glVertex3f(-w,-h,z);
glVertex3f(-w,h,z);
glVertex3f(w,h,z);
glVertex3f(-w,-h,z);
glVertex3f(w,-h,z);
glEnd();
glPopMatrix();
}
//draw the trajectory
glLineWidth(2);
for (size_t i = 0; i < poses.size() - 1; i++)
{
glColor3f(1 - (float) i / poses.size(), 0.0f, (float) i / poses.size());
glBegin(GL_LINES);
auto p1 = poses[i], p2 = poses[i + 1];
glVertex3d(p1.translation()[0], p1.translation()[1], p1.translation()[2]);
glVertex3d(p2.translation()[0], p2.translation()[1], p2.translation()[2]);
glEnd();
}
pangolin::FinishFrame();
usleep(5000); // sleep 5 ms
}
}
CMakeLists.txt
cmake_minimum_required( VERSION 2.8 )
project( show )
set( CMAKE_BUILD_TYPE "Release" )
set( CMAKE_CXX_FLAGS "-std=c++11 -O3" )
list( APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake_modules )
# find G2O, Cholmod and eigen3
find_package( G2O REQUIRED )
find_package( Cholmod )
include_directories(
${G2O_INCLUDE_DIRS} ${CHOLMOD_INCLUDE_DIR}
"/usr/include/eigen3"
)
# sophus
find_package( Sophus REQUIRED )
include_directories( ${Sophus_INCLUDE_DIRS} )
find_package( Pangolin REQUIRED)
include_directories( ${Pangolin_INCLUDE_DIRS} )
add_executable( draw draw.cpp )
target_link_libraries( draw
${CHOLMOD_LIBRARIES}
${Sophus_LIBRARIES}
${Pangolin_LIBRARIES}
)
In addition, the cmake_modules directory (with the FindG2O.cmake and similar find scripts) needs to be added to the project.
Result of running:
// create a GUI window named "Main" with size 640x480
pangolin::CreateWindowAndBind("Main",640,480); /*the parameters are, in order, the window name, width and height*/
// enable depth testing
glEnable(GL_DEPTH_TEST); /*enable depth testing so that pangolin only draws the pixels of faces oriented towards the camera, avoiding confusing overlap; it should be enabled in any 3D visualization.*/
// create an observer camera (render state)
pangolin::OpenGlRenderState s_cam( // the parameters are the image width and height of the observer camera, its four intrinsics, and the nearest and farthest viewing distances
pangolin::ProjectionMatrix(640,480,420,420,320,320,0.2,100), /*ProjectionMatrix(int w, int h, fu, fv, u0, v0, zNear, zFar)*/
pangolin::ModelViewLookAt(2,0,2, 0,0,0, pangolin::AxisY)/*ModelViewLookAt(double x, double y, double z,double lx, double ly, double lz, AxisDirection Up)*/
); // the parameters are the camera position and the viewpoint the camera looks at (usually set at the origin)
// create the interactive view
pangolin::Handler3D handler(s_cam); // handler for interacting with the camera view
pangolin::View& d_cam = pangolin::CreateDisplay() /*create an interactive view used to display what the camera set up above "sees"*/
.SetBounds(0.0, 1.0, 0.0, 1.0, -640.0f/480.0f)/*the first four SetBounds() parameters give the extent of the view within the window (bottom, top, left, right); they can be relative coordinates (0-1) or absolute ones (using an Attach object).*/
.SetHandler(&handler);
while( !pangolin::ShouldQuit() )
{
// clear the color and depth buffers
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
d_cam.Activate(s_cam);/*clear the color and depth buffers with glClear and activate the view set up earlier (otherwise the window keeps the previous frame's drawing, a "multiple exposure" effect that is usually unwanted).*/
//
// // draw a cube at the origin
// pangolin::glDrawColouredCube();
// draw the coordinate axes
glLineWidth(3);
glBegin ( GL_LINES );
glColor3f ( 0.8f,0.f,0.f );
glVertex3f( -1,-1,-1 );
glVertex3f( 0,-1,-1 );
glColor3f( 0.f,0.8f,0.f);
glVertex3f( -1,-1,-1 );
glVertex3f( -1,0,-1 );
glColor3f( 0.2f,0.2f,1.f);
glVertex3f( -1,-1,-1 );
glVertex3f( -1,-1,0 );
glEnd();
// run the frame loop to advance the window events
pangolin::FinishFrame(); /*refresh the window with FinishFrame*/
}
// create the window
pangolin::CreateWindowAndBind("Main",640,480);
// enable depth testing
glEnable(GL_DEPTH_TEST);
// create a camera
pangolin::OpenGlRenderState s_cam(
pangolin::ProjectionMatrix(640,480,420,420,320,240,0.1,1000),
pangolin::ModelViewLookAt(-0,0.5,-3, 0,0,0, pangolin::AxisY)
);
// split the window
const int UI_WIDTH = 180;
// the right side shows the 3D viewport
pangolin::View& d_cam = pangolin::CreateDisplay()
.SetBounds(0.0, 1.0, pangolin::Attach::Pix(UI_WIDTH), 1.0, -640.0f/480.0f)
.SetHandler(new pangolin::Handler3D(s_cam));
// the left side holds the control panel
pangolin::CreatePanel("ui")
.SetBounds(0.0, 1.0, 0.0, pangolin::Attach::Pix(UI_WIDTH));
// create the control widgets on the panel; in Pangolin every control is a pangolin::Var
pangolin::Var<bool> A_Button("ui.a_button", false, false); // button
pangolin::Var<bool> A_Checkbox("ui.a_checkbox", false, true); // checkbox
pangolin::Var<bool> B_Checkbox("ui.b_checkbox", false, false); // checkbox
pangolin::Var<double> Double_Slider("ui.a_slider", 3, 0, 5); // double slider
pangolin::Var<int> Int_Slider("ui.b_slider", 2, 0, 5); // int slider
pangolin::Var<std::string> A_string("ui.a_string", "Hello Pangolin");
pangolin::Var<bool> SAVE_IMG("ui.save_img", false, false); // button
pangolin::Var<bool> SAVE_WIN("ui.save_win", false, false); // button
pangolin::Var<bool> RECORD_WIN("ui.record_win", false, false); // button
pangolin::Var<std::function<void(void)>> reset("ui.Reset", SampleMethod); // function button
// bind keyboard shortcuts
// Demonstration of how we can register a keyboard hook to alter a Var
pangolin::RegisterKeyPressCallback(pangolin::PANGO_CTRL + 'b', pangolin::SetVarFunctor<double>("ui.a_slider", 3.5));
// Demonstration of how we can register a keyboard hook to trigger a method
pangolin::RegisterKeyPressCallback(pangolin::PANGO_CTRL + 'r', SampleMethod);
// Default hooks for exiting (Esc) and fullscreen (tab).
while( !pangolin::ShouldQuit() )
{
// Clear entire screen
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// callbacks of the individual controls
if(pangolin::Pushed(A_Button))
std::cout << "Push button A." << std::endl;
if(A_Checkbox)
Int_Slider = Double_Slider;
// save the whole window
if( pangolin::Pushed(SAVE_WIN) )
pangolin::SaveWindowOnRender("window");
// save just the view
if( pangolin::Pushed(SAVE_IMG) )
d_cam.SaveOnRender("cube");
// record the window to a video
if( pangolin::Pushed(RECORD_WIN) )
pangolin::DisplayBase().RecordOnRender("ffmpeg:[fps=50,bps=8388608,unique_filename]//screencap.avi");
d_cam.Activate(s_cam);
// glColor3f(1.0,0.0,1.0);
pangolin::glDrawColouredCube();
// Swap frames and Process Events
pangolin::FinishFrame();
}
// create the window
pangolin::CreateWindowAndBind("MultiImage", 752, 480);
// enable depth testing
glEnable(GL_DEPTH_TEST);
// set up the camera
pangolin::OpenGlRenderState s_cam(
pangolin::ProjectionMatrix(752, 480, 420, 420, 320, 320, 0.1, 1000),
pangolin::ModelViewLookAt(-2, 0, -2, 0, 0, 0, pangolin::AxisY)
);
// ---------- create the three views ---------- //
pangolin::View& d_cam = pangolin::Display("cam")
.SetBounds(0., 1., 0., 1., -752/480.)
.SetHandler(new pangolin::Handler3D(s_cam));
pangolin::View& cv_img_1 = pangolin::Display("image_1")
.SetBounds(2/3.0f, 1.0f, 0., 1/3.0f, 752/480.)
.SetLock(pangolin::LockLeft, pangolin::LockTop);
pangolin::View& cv_img_2 = pangolin::Display("image_2")
.SetBounds(0., 1/3.0f, 2/3.0f, 1.0, 752/480.)
.SetLock(pangolin::LockRight, pangolin::LockBottom);
// create GlTexture containers used to load the images
pangolin::GlTexture imgTexture1(752, 480, GL_RGB, false, 0, GL_BGR, GL_UNSIGNED_BYTE);
pangolin::GlTexture imgTexture2(752, 480, GL_RGB, false, 0, GL_BGR, GL_UNSIGNED_BYTE);
while(!pangolin::ShouldQuit()){
// clear the color and depth buffers
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// activate the camera view
d_cam.Activate(s_cam);
// add a cube
glColor3f(1.0f, 1.0f, 1.0f);
pangolin::glDrawColouredCube();
// read the images from files
cv::Mat img1 = cv::imread("../data/1.png"); /*images from the EuRoC dataset are shown live in the top-left and bottom-right corners of the window*/
cv::Mat img2 = cv::imread("../data/2.png");
// upload the images to the GPU
imgTexture1.Upload(img1.data, GL_BGR, GL_UNSIGNED_BYTE);
imgTexture2.Upload(img2.data, GL_BGR, GL_UNSIGNED_BYTE);
// display the images
cv_img_1.Activate();
glColor3f(1.0f, 1.0f, 1.0f); // set the default drawing color; for showing an image it also works without this
imgTexture1.RenderToViewportFlipY(); // the Y axis has to be flipped, otherwise the image appears upside down
cv_img_2.Activate();
glColor3f(1.0f, 1.0f, 1.0f); // set the default drawing color; for showing an image it also works without this
imgTexture2.RenderToViewportFlipY();
pangolin::FinishFrame();
}
Code: https://github.com/xiaopengsu/slam_pangolin
My own SLAM 3D display program for comparison: https://github.com/xiaopengsu/myslam_ch9_04_rgbd_vo