Computing World Coordinates from Pixel Coordinates

Principle

The figure below shows the pinhole camera model (figure and formulas adapted from the official OpenCV documentation).


Four coordinate systems are involved:

  1. World coordinate system: its origin can be chosen as needed; it locates objects in space in physical length units such as mm, and a point is written as (X_W, Y_W, Z_W).
  2. Camera coordinate system: its origin is the camera's optical center (the pinhole in the pinhole model); the z axis coincides with the optical axis and points forward out of the camera (perpendicular to the image plane), and the positive x and y axes are parallel to those of the world coordinate system; units are physical lengths such as mm, and a point is written as (X_C, Y_C, Z_C).
  3. Image physical coordinate system (also called the image-plane coordinate system): pixel positions expressed in physical length units such as mm; its origin is the intersection of the optical axis with the image plane. This is the o-xy system in the figure, and a point is written as (x, y).
  4. Pixel coordinate system: its origin is the top-left corner of the image and its unit is the pixel; it has an explicit range given by the pixel width and height of the full image, and a point is written as (u, v).

The following equation describes the transformation between (u, v), (x, y), (X_C, Y_C, Z_C) and (X_W, Y_W, Z_W):
z\begin{bmatrix}u \\ v \\ 1 \end{bmatrix}= \begin{bmatrix}1/d_x&0&c_x\\0&1/d_y&c_y\\0&0&1 \end{bmatrix} \begin{bmatrix}f&0&0\\ 0&f&0\\ 0&0&1 \end{bmatrix} \begin{bmatrix}r_{11}&r_{12}&r_{13}&t_1\\ r_{21}&r_{22}&r_{23}&t_2\\ r_{31}&r_{32}&r_{33}&t_3 \end{bmatrix} \begin{bmatrix}X_W \\ Y_W \\Z_W \\ 1\end{bmatrix}
In the equation above, d_x and d_y are the physical size of one pixel, obtained by dividing the sensor dimensions by the pixel count. For example, a 2928.384 µm × 2205.216 µm sensor with a resolution of 2592 × 1944 has pixels of roughly 1.13 µm.
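The pixel-pitch arithmetic above can be checked with a few lines of Python (all values are the ones quoted in the text):

```python
# Deriving the per-pixel size d_x, d_y from the sensor dimensions
# and resolution quoted above (units are micrometres).
sensor_w_um, sensor_h_um = 2928.384, 2205.216
res_w, res_h = 2592, 1944

d_x = sensor_w_um / res_w   # physical width of one pixel in um
d_y = sensor_h_um / res_h   # physical height of one pixel in um

print(round(d_x, 3), round(d_y, 3))
```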
f is the focal length. In the figure, similar triangles relate the object point P = (X, Y, Z) and its image p = (x, y):

\frac{x}{X}=\frac{y}{Y}=\frac{f}{z}

that is, x = fX/z and y = fY/z. So the larger f is, the larger x and y become; the larger z is, the smaller x and y become.
c_x and c_y are the coordinates of the principal point (the image center) in the pixel coordinate system.
To find the world-coordinate position corresponding to a pixel, both the camera intrinsics and extrinsics are needed; the intrinsics can be obtained by calibration, and the extrinsics can be set manually.
Step 1: transform pixel coordinates into the camera coordinate system:

z\begin{bmatrix}u \\ v \\ 1\end{bmatrix}=K\begin{bmatrix}X_C \\ Y_C \\ Z_C\end{bmatrix}

Multiplying both sides by K^{-1} gives:

\begin{bmatrix}X_C \\ Y_C \\ Z_C\end{bmatrix}=zK^{-1}\begin{bmatrix}u \\ v \\ 1\end{bmatrix}

Step 2: transform from the camera coordinate system to the world coordinate system:

\begin{bmatrix}X_C \\ Y_C \\ Z_C\end{bmatrix}=R\begin{bmatrix}X_W \\ Y_W \\ Z_W\end{bmatrix}+T

Multiplying the equation by R^{-1} gives:

\begin{bmatrix}X_W \\ Y_W \\ Z_W\end{bmatrix}=R^{-1}\begin{bmatrix}X_C \\ Y_C \\ Z_C\end{bmatrix}-R^{-1}T=zR^{-1}K^{-1}\begin{bmatrix}u \\ v \\ 1\end{bmatrix}-R^{-1}T

For points on a calibration plane we have Z_W = 0, so the unknown depth z can be solved from the third row: z = (R^{-1}T)_3 / (R^{-1}K^{-1}[u\ v\ 1]^T)_3. This is the scale factor computed in the code.
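The two-step derivation can be sketched numerically with NumPy. The intrinsic matrix K, rotation R, and translation t below are made-up values (not from the text), and the target point is assumed to lie on the world plane Z_W = 0:

```python
import numpy as np

# Synthetic camera: made-up intrinsics, plane parallel to the image plane.
K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([[10.0], [20.0], [500.0]])

# Forward projection of a known world point on Z_W = 0 (creates a test pixel).
Pw = np.array([[30.0], [60.0], [0.0]])
Pc = R @ Pw + t                     # world -> camera
uv1 = K @ Pc / Pc[2, 0]             # camera -> pixel (divide by depth)

# Step 1: pixel -> camera ray, K^-1 [u, v, 1]^T
ray = np.linalg.inv(K) @ uv1
# Step 2: solve for the depth z that puts the point on Z_W = 0, then
# map back to world coordinates: z R^-1 K^-1 [u,v,1]^T - R^-1 T
invR_t = np.linalg.inv(R) @ t
invR_ray = np.linalg.inv(R) @ ray
z = invR_t[2, 0] / invR_ray[2, 0]
Pw_rec = z * invR_ray - invR_t

print(np.allclose(Pw_rec, Pw))      # the original world point is recovered
```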
Code

Given the camera intrinsics, the rotation vector, the translation vector and a set of pixel coordinates, the following function computes the corresponding world points.
Note that the translation vector may need to be transposed: a 1×3 row vector must become a 3×1 column vector before the 3×3 matrix multiplication is valid.
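The shape pitfall mentioned above can be sketched in NumPy (the values here are arbitrary):

```python
import numpy as np

# A 3x3 matrix can only be multiplied with a 3x1 column vector, so a
# 1x3 translation row vector must be transposed first.
invR = np.eye(3)
t_row = np.array([[134.0, 132.0, 200.0]])    # shape (1, 3): row vector

t_col = t_row.T                              # shape (3, 1): column vector
result = invR @ t_col                        # valid: (3,3) @ (3,1) -> (3,1)
print(result.shape)
```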

void cameraToWorld(InputArray cameraMatrix, InputArray rV, InputArray tV, vector<Point2f> imgPoints, vector<Point3f> &worldPoints)
{
    Mat invK64, invK;
    invK64 = cameraMatrix.getMat().inv();
    invK64.convertTo(invK, CV_32F);
    Mat r, t, rMat;
    rV.getMat().convertTo(r, CV_32F);
    tV.getMat().convertTo(t, CV_32F);
    Rodrigues(r, rMat);

    // compute invR * T
    Mat invR = rMat.inv();
    Mat transPlaneToCam;
    if(t.size() == Size(1, 3)){        // t is already a 3x1 column vector
        transPlaneToCam = invR * t;
    }
    else if(t.size() == Size(3, 1)){   // t is a 1x3 row vector: transpose first
        transPlaneToCam = invR * t.t();
    }
    else{
        return;
    }

    int npoints = (int)imgPoints.size();
    for (int j = 0; j < npoints; ++j){
        Mat coords(3, 1, CV_32F);
        Point3f pt;
        coords.at<float>(0, 0) = imgPoints[j].x;
        coords.at<float>(1, 0) = imgPoints[j].y;
        coords.at<float>(2, 0) = 1.0f;
        //[x,y,z] = invK * [u,v,1]
        Mat worldPtCam = invK * coords;
        //invR * [x,y,z]
        Mat worldPtPlane = invR * worldPtCam;
        //depth z_c from the plane constraint Z_W = 0
        float scale = transPlaneToCam.at<float>(2) / worldPtPlane.at<float>(2);
        //z_c * invR * invK * [u,v,1]
        Mat scale_worldPtPlane = scale * worldPtPlane;
        //[X,Y,Z] = z_c * invR * invK * [u,v,1] - invR * T
        Mat worldPtPlaneReproject = scale_worldPtPlane - transPlaneToCam;
        pt.x = worldPtPlaneReproject.at<float>(0);
        pt.y = worldPtPlaneReproject.at<float>(1);
        //Z_W is 0 on the plane; set to 1 here to mark valid points
        pt.z = 1.0f;
        worldPoints.push_back(pt);
    }
}

    def cameraToWorld(self, cameraMatrix, r, t, imgPoints):
        invK = np.asmatrix(cameraMatrix).I
        rMat, _ = cv2.Rodrigues(r)                      # rotation vector -> 3x3 matrix
        # compute invR * T
        invR = np.asmatrix(rMat).I                      # 3x3
        transPlaneToCam = np.dot(invR, np.asmatrix(t))  # 3x3 dot 3x1 = 3x1
        worldpt = []
        coords = np.zeros((3, 1), dtype=np.float64)

        for imgpt in imgPoints:
            coords[0][0] = imgpt[0][0]
            coords[1][0] = imgpt[0][1]
            coords[2][0] = 1.0
            # [x,y,z] = invK * [u,v,1]
            worldPtCam = np.dot(invK, coords)           # 3x3 dot 3x1 = 3x1
            # invR * [x,y,z]
            worldPtPlane = np.dot(invR, worldPtCam)     # 3x3 dot 3x1 = 3x1
            # depth z_c from the plane constraint Z_W = 0
            scale = transPlaneToCam[2][0] / worldPtPlane[2][0]
            # z_c * invR * invK * [u,v,1]
            scale_worldPtPlane = np.multiply(scale, worldPtPlane)
            # [X,Y,Z] = z_c * invR * invK * [u,v,1] - invR * T
            worldPtPlaneReproject = np.asmatrix(scale_worldPtPlane) - np.asmatrix(transPlaneToCam)  # 3x1 - 3x1 = 3x1
            pt = np.zeros((3, 1), dtype=np.float64)
            pt[0][0] = worldPtPlaneReproject[0][0]
            pt[1][0] = worldPtPlaneReproject[1][0]
            pt[2][0] = 0                                # points lie on Z_W = 0
            worldpt.append(pt.T.tolist())
        return worldpt

Verification

First, generate pixel points with projectPoints:

Vec3f eulerAngles;                      // Euler angles
vector<Mat> translation_vectors;        /* translation vector for each image */
Mat rotationMatrix = eulerAnglesToRotationMatrix(eulerAngles);
*pR_matrix = rotationMatrix;
cvRodrigues2(pR_matrix, pnew_vec, 0);   // rotation matrix -> rotation vector
Mat mat_tmp(pnew_vec->rows, pnew_vec->cols, pnew_vec->type, pnew_vec->data.fl);
cv::Mat distortion_coeffs1 = cv::Mat(1, 5, CV_32FC1, cv::Scalar::all(0)); /* the 5 distortion coefficients: k1, k2, p1, p2, k3 */
projectPoints(tempPointSet, mat_tmp, translation_vectors[i], intrinsic_matrix, distortion_coeffs1, image_points2);

The following Euler angles are used:

[0, 0, 0]   // Euler angles: the orientation of the plane relative to the camera
rotation vector: [0, 0, 0]

The corresponding translation vector places the world origin at an offset of x = 134 mm, y = 132 mm, z = 200 mm relative to the image-plane origin.

Original: [134.0870803179094, 132.7580766544178, 200.3789038923399]
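The eulerAnglesToRotationMatrix helper used above is not shown in the text; a minimal sketch, assuming a Z-Y-X convention with angles in radians (the author's exact convention may differ):

```python
import numpy as np

def euler_to_rotation_matrix(theta):
    """Z-Y-X Euler angles (radians) -> 3x3 rotation matrix.

    Hypothetical stand-in for the eulerAnglesToRotationMatrix helper
    used in the verification code above.
    """
    rx, ry, rz = theta
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(rx), -np.sin(rx)],
                   [0, np.sin(rx), np.cos(rx)]])
    Ry = np.array([[np.cos(ry), 0, np.sin(ry)],
                   [0, 1, 0],
                   [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0],
                   [np.sin(rz), np.cos(rz), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

# Euler angles [0, 0, 0] give the identity matrix, which matches the
# rotation vector [0, 0, 0] quoted above.
print(np.allclose(euler_to_rotation_matrix([0.0, 0.0, 0.0]), np.eye(3)))
```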

Generate the world coordinate points:

        Size board_size = Size(11, 8);
        Size square_size = Size(30, 30);
        vector<Point3f> tempPointSet;
        for (int j = 0; j < board_size.height; j++) {
            for (int i = 0; i < board_size.width; i++) {
                Point3f tempPoint;
                tempPoint.x = i * square_size.width;
                tempPoint.y = j * square_size.height;
                tempPoint.z = 0;
                tempPointSet.push_back(tempPoint);
            }
        }

The corresponding pixel points computed by projectPoints are:

projectPoints(tempPointSet, mat_tmp, translation_vectors[i], intrinsic_matrix, distortion_coeffs1, image_points2);
cout << "projected pixel points:\n" << image_points2 << endl;

[1194.8174, 1074.1355;
 1285.1735, 1074.1355;
 1375.5295, 1074.1355;
 1465.8856, 1074.1355;
 1556.2417, 1074.1355;
 1646.5978, 1074.1355;
 1736.9539, 1074.1355;
 1827.3099, 1074.1355;
 1917.666, 1074.1355;
 2008.0221, 1074.1355;
 2098.3782, 1074.1355;
 1194.8174, 1164.5713;
 1285.1735, 1164.5713;
 1375.5295, 1164.5713;
 1465.8856, 1164.5713;
 1556.2417, 1164.5713;
 1646.5978, 1164.5713;
 1736.9539, 1164.5713;
 1827.3099, 1164.5713;
 1917.666, 1164.5713;
 2008.0221, 1164.5713;
 2098.3782, 1164.5713;
 1194.8174, 1255.0072;
 1285.1735, 1255.0072;
 1375.5295, 1255.0072;
 1465.8856, 1255.0072;
 1556.2417, 1255.0072;
 1646.5978, 1255.0072;
 1736.9539, 1255.0072;
 1827.3099, 1255.0072;
 1917.666, 1255.0072;
 2008.0221, 1255.0072;
 2098.3782, 1255.0072;
 1194.8174, 1345.443;
 1285.1735, 1345.443;
 1375.5295, 1345.443;
 1465.8856, 1345.443;
 1556.2417, 1345.443;
 1646.5978, 1345.443;
 1736.9539, 1345.443;
 1827.3099, 1345.443;
 1917.666, 1345.443;
 2008.0221, 1345.443;
 2098.3782, 1345.443;
 1194.8174, 1435.8789;
 1285.1735, 1435.8789;
 1375.5295, 1435.8789;
 1465.8856, 1435.8789;
 1556.2417, 1435.8789;
 1646.5978, 1435.8789;
 1736.9539, 1435.8789;
 1827.3099, 1435.8789;
 1917.666, 1435.8789;
 2008.0221, 1435.8789;
 2098.3782, 1435.8789;
 1194.8174, 1526.3147;
 1285.1735, 1526.3147;
 1375.5295, 1526.3147;
 1465.8856, 1526.3147;
 1556.2417, 1526.3147;
 1646.5978, 1526.3147;
 1736.9539, 1526.3147;
 1827.3099, 1526.3147;
 1917.666, 1526.3147;
 2008.0221, 1526.3147;
 2098.3782, 1526.3147;
 1194.8174, 1616.7506;
 1285.1735, 1616.7506;
 1375.5295, 1616.7506;
 1465.8856, 1616.7506;
 1556.2417, 1616.7506;
 1646.5978, 1616.7506;
 1736.9539, 1616.7506;
 1827.3099, 1616.7506;
 1917.666, 1616.7506;
 2008.0221, 1616.7506;
 2098.3782, 1616.7506;
 1194.8174, 1707.1864;
 1285.1735, 1707.1864;
 1375.5295, 1707.1864;
 1465.8856, 1707.1864;
 1556.2417, 1707.1864;
 1646.5978, 1707.1864;
 1736.9539, 1707.1864;
 1827.3099, 1707.1864;
 1917.666, 1707.1864;
 2008.0221, 1707.1864;
 2098.3782, 1707.1864]

The world coordinate points recovered by the function are:

vector<Point3f> worldPoint;
cameraToWorld(intrinsic_matrix, mat_tmp, translation_vec_tmp, image_points2, worldPoint);
cout << "computed world points:\n" << worldPoint << endl;

[0, 0, 1;
 30, 0, 1;
 60, 0, 1;
 90.000015, 0, 1;
 120.00002, 0, 1;
 149.99995, 0, 1;
 179.99998, 0, 1;
 209.99998, 0, 1;
 239.99998, 0, 1;
 270, 0, 1;
 300, 0, 1;
 0, 29.999985, 1;
 30, 29.999985, 1;
 60, 29.999985, 1;
 90.000015, 29.999985, 1;
 120.00002, 29.999985, 1;
 149.99995, 29.999985, 1;
 179.99998, 29.999985, 1;
 209.99998, 29.999985, 1;
 239.99998, 29.999985, 1;
 270, 29.999985, 1;
 300, 29.999985, 1;
 0, 60.000015, 1;
 30, 60.000015, 1;
 60, 60.000015, 1;
 90.000015, 60.000015, 1;
 120.00002, 60.000015, 1;
 149.99995, 60.000015, 1;
 179.99998, 60.000015, 1;
 209.99998, 60.000015, 1;
 239.99998, 60.000015, 1;
 270, 60.000015, 1;
 300, 60.000015, 1;
 0, 89.999969, 1;
 30, 89.999969, 1;
 60, 89.999969, 1;
 90.000015, 89.999969, 1;
 120.00002, 89.999969, 1;
 149.99995, 89.999969, 1;
 179.99998, 89.999969, 1;
 209.99998, 89.999969, 1;
 239.99998, 89.999969, 1;
 270, 89.999969, 1;
 300, 89.999969, 1;
 0, 120.00002, 1;
 30, 120.00002, 1;
 60, 120.00002, 1;
 90.000015, 120.00002, 1;
 120.00002, 120.00002, 1;
 149.99995, 120.00002, 1;
 179.99998, 120.00002, 1;
 209.99998, 120.00002, 1;
 239.99998, 120.00002, 1;
 270, 120.00002, 1;
 300, 120.00002, 1;
 0, 149.99998, 1;
 30, 149.99998, 1;
 60, 149.99998, 1;
 90.000015, 149.99998, 1;
 120.00002, 149.99998, 1;
 149.99995, 149.99998, 1;
 179.99998, 149.99998, 1;
 209.99998, 149.99998, 1;
 239.99998, 149.99998, 1;
 270, 149.99998, 1;
 300, 149.99998, 1;
 0, 179.99998, 1;
 30, 179.99998, 1;
 60, 179.99998, 1;
 90.000015, 179.99998, 1;
 120.00002, 179.99998, 1;
 149.99995, 179.99998, 1;
 179.99998, 179.99998, 1;
 209.99998, 179.99998, 1;
 239.99998, 179.99998, 1;
 270, 179.99998, 1;
 300, 179.99998, 1;
 0, 209.99998, 1;
 30, 209.99998, 1;
 60, 209.99998, 1;
 90.000015, 209.99998, 1;
 120.00002, 209.99998, 1;
 149.99995, 209.99998, 1;
 179.99998, 209.99998, 1;
 209.99998, 209.99998, 1;
 239.99998, 209.99998, 1;
 270, 209.99998, 1;
 300, 209.99998, 1]

Comparing against the 11×8 grid of 30 mm squares used to generate the world points, the results match almost exactly.
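The same round-trip check can be reproduced without OpenCV. A NumPy-only sketch, using the translation quoted above and R = I (Euler angles [0, 0, 0]); the intrinsic matrix is made up, since the text does not list the calibrated one:

```python
import numpy as np

# Project an 11x8 grid of 30 mm squares with a synthetic intrinsic
# matrix, then back-project each pixel and compare with the grid.
K = np.array([[3000.0, 0.0, 2000.0],   # made-up intrinsics
              [0.0, 3000.0, 1500.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([[134.0870803179094], [132.7580766544178], [200.3789038923399]])

grid = np.array([[i * 30.0, j * 30.0, 0.0]
                 for j in range(8) for i in range(11)])   # 88 world points

recovered = []
for Pw in grid:
    Pc = R @ Pw.reshape(3, 1) + t
    uv1 = K @ Pc / Pc[2, 0]                # forward projection to pixels
    ray = np.linalg.inv(K) @ uv1           # back-projection, step 1
    invR_t = np.linalg.inv(R) @ t
    invR_ray = np.linalg.inv(R) @ ray
    z = invR_t[2, 0] / invR_ray[2, 0]      # plane constraint Z_W = 0
    recovered.append((z * invR_ray - invR_t).ravel())

print(np.allclose(np.array(recovered), grid, atol=1e-6))
```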
