1. Projecting 3D points to 2D points in OpenCV, given the camera parameters. Code below:
Input: 3D coordinates + rotation and translation matrices (the so-called camera pose) + camera intrinsics (including the distortion coefficients)
Output: 2D coordinates
(1. The projection function: projects 3D points to 2D points using the camera parameters (including the distortion coefficients).
2. Be clear about what R and t actually mean.
Row i of R is the unit vector along the i-th axis of the camera coordinate system, expressed in world coordinates;
column i of R is the unit vector along the i-th axis of the world coordinate system, expressed in camera coordinates;
t is the position of the world origin in camera coordinates;
-Rᵀ * t is the position of the camera origin (the camera center) in world coordinates.)
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
#include <vector>

std::vector<cv::Point3d> Generate3DPoints();

int main(int argc, char* argv[])
{
    // Read 3D points
    std::vector<cv::Point3d> objectPoints = Generate3DPoints();
    std::vector<cv::Point2d> imagePoints;

    // Intrinsic matrix.  (The numeric calibration values in the original
    // post were lost in formatting; the numbers below are placeholders --
    // substitute your own calibration.)
    cv::Mat intrisicMat(3, 3, cv::DataType<double>::type);
    intrisicMat.at<double>(0, 0) = 800; // fx
    intrisicMat.at<double>(1, 0) = 0;
    intrisicMat.at<double>(2, 0) = 0;
    intrisicMat.at<double>(0, 1) = 0;
    intrisicMat.at<double>(1, 1) = 800; // fy
    intrisicMat.at<double>(2, 1) = 0;
    intrisicMat.at<double>(0, 2) = 320; // cx
    intrisicMat.at<double>(1, 2) = 240; // cy
    intrisicMat.at<double>(2, 2) = 1;

    cv::Mat rVec(3, 1, cv::DataType<double>::type); // Rotation vector
    rVec.at<double>(0) = 0;
    rVec.at<double>(1) = 0;
    rVec.at<double>(2) = 0;

    cv::Mat tVec(3, 1, cv::DataType<double>::type); // Translation vector
    tVec.at<double>(0) = 0;
    tVec.at<double>(1) = 0;
    tVec.at<double>(2) = 10;

    cv::Mat distCoeffs(5, 1, cv::DataType<double>::type); // Distortion coefficients
    distCoeffs.at<double>(0) = 0; // k1
    distCoeffs.at<double>(1) = 0; // k2
    distCoeffs.at<double>(2) = 0; // p1
    distCoeffs.at<double>(3) = 0; // p2
    distCoeffs.at<double>(4) = 0; // k3

    std::cout << "Intrisic matrix: " << intrisicMat << std::endl << std::endl;
    std::cout << "Rotation vector: " << rVec << std::endl << std::endl;
    std::cout << "Translation vector: " << tVec << std::endl << std::endl;
    std::cout << "Distortion coef: " << distCoeffs << std::endl << std::endl;

    cv::projectPoints(objectPoints, rVec, tVec, intrisicMat, distCoeffs, imagePoints);

    std::cout << "Image points: " << std::endl;
    for (unsigned int i = 0; i < imagePoints.size(); ++i)
        std::cout << imagePoints[i] << std::endl;

    std::cout << "Press any key to exit.";
    std::cin.ignore();
    std::cin.get();
    return 0;
}
std::vector<cv::Point3d> Generate3DPoints()
{
    std::vector<cv::Point3d> points;
    double x, y, z;

    x = .5; y = .5; z = -.5;
    points.push_back(cv::Point3d(x, y, z));
    x = .5; y = .5; z = .5;
    points.push_back(cv::Point3d(x, y, z));
    x = -.5; y = .5; z = .5;
    points.push_back(cv::Point3d(x, y, z));
    x = -.5; y = .5; z = -.5;
    points.push_back(cv::Point3d(x, y, z));
    x = .5; y = -.5; z = -.5;
    points.push_back(cv::Point3d(x, y, z));
    x = -.5; y = -.5; z = -.5;
    points.push_back(cv::Point3d(x, y, z));
    x = -.5; y = -.5; z = .5;
    points.push_back(cv::Point3d(x, y, z));

    for (unsigned int i = 0; i < points.size(); ++i)
    {
        std::cout << points[i] << std::endl << std::endl;
    }

    return points;
}
2. Computing the camera pose with OpenCV's solvePnP function: http://blog.csdn.net/aptx704610875/article/details/48915149
or http://www.cnblogs.com/gaoxiang12/p/4659805.html
Input: 3D coordinates, 2D coordinates + camera intrinsics (including the distortion coefficients)
Output: camera pose (rotation and translation matrices)
3. An excellent blog post: http://www.cnblogs.com/gaoxiang12/p/4652478.html (it also covers point-cloud stitching, i.e. 3D reconstruction, from RGB-D images, in great detail!)
(It involves 3D point clouds, so besides OpenCV it also uses PCL.)
Theory + code for going from 2D to 3D:
Input: 2D coordinates, camera intrinsics (including the distortion coefficients), camera pose
Output: 3D coordinates
(Sections 1-3 are really one relationship between three quantities: the 3D coordinates, the 2D coordinates, and the camera pose (with the camera intrinsics known). Given any two of them, you can solve for the third!)
4. When capturing 3D models with a Kinect v1, a crucial first step is calibrating the cameras (the depth camera and the color camera sit at different positions, so their views are offset):
Calibration and registration of the Kinect v1 depth map and the RGB camera:
http://blog.csdn.net/aichipmunk/article/details/9264703