Projecting 3D points to 2D from camera parameters in OpenCV + computing camera pose with solvePnP + 2D to 3D + camera calibration (calibration and registration: obtain the intrinsics and extrinsics of the depth and color cameras, then register the two)

1. Projecting 3D points to 2D in OpenCV using camera parameters. Straight to the code:

Input: 3D coordinates + rotation and translation matrices (the camera pose) + camera intrinsics (including the distortion coefficients)

Output: 2D coordinates

(The projection function, cv::projectPoints, maps 3D points to 2D image points using the camera parameters, including the distortion coefficients.)

2. Be clear about what R and t actually mean.

The i-th row of R gives, in world coordinates, the unit vector along the i-th axis of the camera frame;
the i-th column of R gives, in camera coordinates, the unit vector along the i-th axis of the world frame;
t is the position of the world origin expressed in camera coordinates;
-R^T * t is the position of the camera origin expressed in world coordinates.

#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
#include <vector>

std::vector<cv::Point3d> Generate3DPoints();

int main(int argc, char* argv[])
{
    // Generate the 3D points to be projected
    std::vector<cv::Point3d> objectPoints = Generate3DPoints();
    std::vector<cv::Point2d> imagePoints;

    cv::Mat intrinsicMat(3, 3, cv::DataType<double>::type); // Intrinsic matrix
    intrinsicMat.at<double>(0, 0) = 1.6415318549788924e+003; // fx
    intrinsicMat.at<double>(1, 0) = 0;
    intrinsicMat.at<double>(2, 0) = 0;
    intrinsicMat.at<double>(0, 1) = 0;
    intrinsicMat.at<double>(1, 1) = 1.7067753507885654e+003; // fy
    intrinsicMat.at<double>(2, 1) = 0;
    intrinsicMat.at<double>(0, 2) = 5.3262822453148601e+002; // cx
    intrinsicMat.at<double>(1, 2) = 3.8095355839052968e+002; // cy
    intrinsicMat.at<double>(2, 2) = 1;

    cv::Mat rVec(3, 1, cv::DataType<double>::type); // Rotation vector (Rodrigues)
    rVec.at<double>(0) = -3.9277902400761393e-002;
    rVec.at<double>(1) = 3.7803824407602084e-002;
    rVec.at<double>(2) = 2.6445674487856268e-002;

    cv::Mat tVec(3, 1, cv::DataType<double>::type); // Translation vector
    tVec.at<double>(0) = 2.1158489381208221e+000;
    tVec.at<double>(1) = -7.6847683212704716e+000;
    tVec.at<double>(2) = 2.6169795190294256e+001;

    cv::Mat distCoeffs(5, 1, cv::DataType<double>::type); // Distortion coefficients (k1, k2, p1, p2, k3)
    distCoeffs.at<double>(0) = -7.9134632415085826e-001;
    distCoeffs.at<double>(1) = 1.5623584435644169e+000;
    distCoeffs.at<double>(2) = -3.3916502741726508e-002;
    distCoeffs.at<double>(3) = -1.3921577146136694e-002;
    distCoeffs.at<double>(4) = 1.1430734623697941e+002;

    std::cout << "Intrinsic matrix: " << intrinsicMat << std::endl << std::endl;
    std::cout << "Rotation vector: " << rVec << std::endl << std::endl;
    std::cout << "Translation vector: " << tVec << std::endl << std::endl;
    std::cout << "Distortion coefficients: " << distCoeffs << std::endl << std::endl;

    cv::projectPoints(objectPoints, rVec, tVec, intrinsicMat, distCoeffs, imagePoints);

    for (unsigned int i = 0; i < imagePoints.size(); ++i)
        std::cout << "Projected 2D point: " << imagePoints[i] << std::endl;

    std::cout << "Press any key to exit.";
    std::cin.ignore();
    std::cin.get();
    return 0;
}

std::vector<cv::Point3d> Generate3DPoints()
{
    // Seven corners of a unit cube centered at the origin
    std::vector<cv::Point3d> points;
    points.push_back(cv::Point3d( .5,  .5, -.5));
    points.push_back(cv::Point3d( .5,  .5,  .5));
    points.push_back(cv::Point3d(-.5,  .5,  .5));
    points.push_back(cv::Point3d(-.5,  .5, -.5));
    points.push_back(cv::Point3d( .5, -.5, -.5));
    points.push_back(cv::Point3d(-.5, -.5, -.5));
    points.push_back(cv::Point3d(-.5, -.5,  .5));

    for (unsigned int i = 0; i < points.size(); ++i)
        std::cout << points[i] << std::endl << std::endl;

    return points;
}

2. Computing the camera pose with OpenCV's solvePnP function: http://blog.csdn.net/aptx704610875/article/details/48915149

or http://www.cnblogs.com/gaoxiang12/p/4659805.html

Input: 3D coordinates + 2D coordinates + camera intrinsics (including the distortion coefficients)

Output: camera pose (rotation and translation matrices)

3. An excellent blog post: http://www.cnblogs.com/gaoxiang12/p/4652478.html (it also covers point-cloud stitching / 3D reconstruction from RGB-D images, in great detail!)

(It works with 3D point clouds, so besides OpenCV it also uses PCL.)

Theory + code for going from 2D to 3D:

Input: 2D coordinates, camera intrinsics (including the distortion coefficients), camera pose

Output: 3D coordinates

(Sections 1-3 boil down to one fact: with the camera intrinsics known, any two of {3D coordinates, 2D coordinates, camera pose} determine the third.)

4. To capture a 3D model with a Kinect v1, an essential first step is calibrating the cameras (the depth camera and the color camera are not at the same position, so there is an offset between them):

Calibration and registration of the Kinect v1 depth image and the RGB camera:

http://blog.csdn.net/aichipmunk/article/details/9264703

(Additional notes:

① To save the captured depth and color images you can use OpenNI code, or modify the PCL KinFu code; for OpenNI, see my post on setting up PCL.
② Once the depth and color images are captured, you can also calibrate with the MATLAB toolbox: http://blog.csdn.net/wangxiaokun671903/article/details/38925553 (this yields the intrinsics and extrinsics of both the depth and color cameras).
③ With the intrinsics and extrinsics in hand, use steps 1-3 above to register the depth and color images.)
