[3D Reconstruction] Camera Calibration (Zhang Zhengyou's Method)

The purpose of camera calibration is to solve for the camera's intrinsic and extrinsic parameters.

Solving for the projection matrix M

[Figure 1]

By setting up a special calibration scene, we can obtain many pairs of world coordinates (P_{1},P_{2} ... P_{n}) and their corresponding image coordinates (p_{1},p_{2} ... p_{n}).

From camera geometry we know that p = K[R\ T]P = MP, where M is a 3×4 matrix. Let

M = \begin{bmatrix} m_{1}\\m_{2} \\ m_{3} \end{bmatrix}        p = MP \rightarrow p_{i} =\begin{bmatrix} u_{i}\\v_{i} \end{bmatrix}= \begin{bmatrix} \frac{m_{1}P_{i}}{m_{3}P_{i}} \\ \frac{m_{2}P_{i}}{m_{3}P_{i}} \end{bmatrix}\rightarrow \left\{\begin{matrix} m_{1}P_{i}-u_{i}(m_{3}P_{i} )=0\\ m_{2}P_{i}-v_{i}(m_{3}P_{i} )=0 \end{matrix}\right.

Each point pair yields two equations. M contains 11 unknowns (it is only defined up to scale), so at least 6 point pairs are needed; m_{1},m_{2},m_{3} are then obtained by solving Pm=0 in the least-squares sense. Note that for a homogeneous system Ax=0, if x is a solution then \lambda x is also a solution for any \lambda \neq 0, so we add the constraint ||x||=1; as a result, the value we solve for differs from the true value by a constant factor.

[Figure 2]
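As a concrete illustration of this step, here is a minimal sketch (the matrix and function names are mine) of solving the homogeneous system under the constraint ||m||=1 with OpenCV's SVD, assuming the 2n×12 coefficient matrix has already been stacked from the point pairs as above:

#include <opencv2/core.hpp>

// Minimal sketch: solve P_mat * m = 0 subject to ||m|| = 1.
// P_mat is the 2n x 12 matrix stacked from the n point pairs (two rows per pair).
cv::Mat solveProjectionMatrix(const cv::Mat &P_mat) {
    cv::Mat W, U, VT;
    cv::SVD::compute(P_mat, W, U, VT, cv::SVD::FULL_UV);
    // The minimizer of ||P_mat * m|| with ||m|| = 1 is the right singular vector
    // associated with the smallest singular value, i.e. the last row of VT.
    cv::Mat m = VT.row(VT.rows - 1);
    return m.reshape(0, 3).clone(); // reshape the 12-vector into the 3x4 matrix M (up to scale)
}

This is the same trick that the CalcH code further below uses for the homography.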

Solving for the intrinsic matrix K

[Figure 3]

[Figure 4]

R is a rotation matrix, so \left \| r_{i}\right \| = 1, with r_{i}\cdot r_{j}=0 \ (i\neq j) and r_{i}\cdot r_{j}=1 \ (i= j).

\left \| \rho a_{3} \right \| = \left \| r_{3} \right \|=1 \rightarrow \rho=\frac{\pm 1}{\left \| a_{3} \right \|}

\left\{\begin{matrix} \rho^{2}(a_3\cdot a_{2}) = v_{0} \\ \rho^{2}(a_3\cdot a_{1}) = u_{0} \end{matrix}\right.

Likewise, because R is a rotation matrix, r_{i}\times r_{j} = r_{k} for cyclic (i, j, k) and r_{i}\times r_{i} = 0, so

\left\{\begin{matrix} \rho^{2}(a_{1}\times a_{3}) = -\alpha r_{2} - \alpha \cot \theta \, r_{1} \\ \rho^{2}(a_{2}\times a_{3}) = \frac{\beta }{\sin \theta} r_{1} \end{matrix}\right.\rightarrow \left\{\begin{matrix} \rho^{2}||a_{1}\times a_{3}|| =\alpha /\sin \theta \\ \rho^{2}||a_{2}\times a_{3}|| =\beta /\sin \theta \end{matrix}\right.

cos \theta = -\frac{(a_{1} \times a_{3}) \cdot(a_{2} \times a_{3})}{||a_{1} \times a_{3}|| \cdot ||a_{2} \times a_{3}||}

\left\{\begin{matrix} \alpha = \rho^2||a_{1} \times a_{3}|| sin \theta\\ \beta = \rho^2||a_{2} \times a_{3}|| sin \theta \end{matrix}\right.

This recovers all of the intrinsic parameters.
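As a small sketch of how these formulas translate into code (assuming M has already been estimated and the three rows of its left 3×3 block are available as cv::Vec3d values; all names here are illustrative):

#include <cmath>
#include <iostream>
#include <opencv2/core.hpp>

// Sketch: recover the intrinsics from the rows a1, a2, a3 of the 3x3 block of the
// estimated projection matrix M = [A b], following the formulas above.
void intrinsicsFromProjection(const cv::Vec3d &a1, const cv::Vec3d &a2, const cv::Vec3d &a3) {
    double rho = 1.0 / cv::norm(a3);              // |rho| = 1 / ||a3|| (taking the + sign)
    double u0  = rho * rho * a1.dot(a3);          // u0 = rho^2 (a1 . a3)
    double v0  = rho * rho * a2.dot(a3);          // v0 = rho^2 (a2 . a3)
    cv::Vec3d a1xa3 = a1.cross(a3), a2xa3 = a2.cross(a3);
    double cos_theta = -a1xa3.dot(a2xa3) / (cv::norm(a1xa3) * cv::norm(a2xa3));
    double theta = std::acos(cos_theta);
    double alpha = rho * rho * cv::norm(a1xa3) * std::sin(theta);
    double beta  = rho * rho * cv::norm(a2xa3) * std::sin(theta);
    std::cout << "u0=" << u0 << " v0=" << v0 << " theta=" << theta
              << " alpha=" << alpha << " beta=" << beta << std::endl;
}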

Solving for the extrinsic parameters R, T

\rho[A \ b] = K[R \ T]

\left\{\begin{matrix} \rho A = KR\rightarrow R=\rho K^{-1}A \\\rho b = KT \rightarrow T=\rho K^{-1}b \end{matrix}\right.
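A correspondingly brief sketch (assuming K, the left 3×3 block A of M and its last column b are already available as CV_64F matrices, and \rho has been computed as above; the function name is mine):

#include <opencv2/core.hpp>

// Sketch: recover the extrinsics from M = [A | b] once K and rho are known.
void extrinsicsFromProjection(const cv::Mat &K, const cv::Mat &A, const cv::Mat &b,
                              double rho, cv::Mat &R, cv::Mat &T) {
    cv::Mat K_inv;
    cv::invert(K, K_inv);
    R = rho * K_inv * A;   // R = rho * K^{-1} A
    T = rho * K_inv * b;   // T = rho * K^{-1} b
}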

Solving for the distortion parameters

p_i = MP_i gives the coordinates in the ideal, distortion-free case. The model below effectively rescales the ideal point, where d is the distance of the point from the image center; the larger d is, the stronger the distortion. If k_p > 0 (so \lambda > 1), the ideal point gets pulled inward, which corresponds to barrel distortion; if k_p < 0 (so \lambda < 1), the ideal point gets pushed outward, which corresponds to pincushion distortion.

[Figure 5]

[Figure 6]

\left\{\begin{matrix} m_1P_i-u_i \lambda m_3P_i=0\\ m_2P_i-v_i \lambda m_3P_i=0 \end{matrix}\right.\rightarrow Qm=0

Since Q contains \lambda, this is no longer a linear model, and the optimum can only be found iteratively.

Zhang Zhengyou's calibration method

Next we walk through Zhang's calibration method together with the corresponding code.

GitHub - ldx-star/Zhangzhenyou-Calibration

Zhang's method does not require a 3D calibration object; a planar calibration board is sufficient.

Step 1: Obtain the corresponding world-coordinate and pixel-coordinate point pairs

Zhang's method places the world coordinate frame on the board plane and assumes Z = 0, so when extracting world coordinates we only need (X, Y).

Each square of our chessboard is 15 mm wide. The first corner is taken as (0, 0), the next corner along the axis is one square size away, i.e. (0, square_size), and so on; the pixel coordinates are simply the corresponding corner locations detected in the image.

bool CamCalibration::getKeyPoints() {
    auto chessBoards = chessBoards_;
    const float square_size = square_size_;
    auto &points_3d_vec = points_3d_vec_;
    auto &points_2d_vec = points_2d_vec_;

    const int row = row_;
    const int col = col_;
    // Collect the world coordinates
    for (int i = 0; i < chessBoards.size(); i++) {
        std::vector<cv::Point2f> points_3d; // world coordinates for one image
        for (int r = 0; r < row; r++) {
            for (int c = 0; c < col; c++) {
                cv::Point2f point;
                point.x = r * square_size;
                point.y = c * square_size;
                points_3d.push_back(point);
            }
        }
        points_3d_vec.push_back(points_3d);
    }
    // Collect the pixel coordinates: use OpenCV to detect the chessboard corners
    for (auto img: chessBoards) {
        std::vector<cv::Point2f> points_2d;

        bool found_flag = cv::findChessboardCorners(img, cv::Size(col, row), points_2d, cv::CALIB_CB_ADAPTIVE_THRESH +
                                                                                        cv::CALIB_CB_NORMALIZE_IMAGE); //cv::Size(col,row)
        if (!found_flag) {
            std::cerr << "found chess board corner failed";
            return false;
        }
        // Termination criteria for the sub-pixel corner refinement
        cv::TermCriteria criteria = cv::TermCriteria(cv::TermCriteria::MAX_ITER + cv::TermCriteria::EPS, 40, 0.001);
        cv::cornerSubPix(img, points_2d, cv::Size(5, 5), cv::Size(-1, -1), criteria);

        //display
//        cv::cvtColor(img,img,cv::COLOR_GRAY2BGR);
//        cv::drawChessboardCorners(img, cv::Size(col, row), points_2d, found_flag);
//        cv::namedWindow("corner img", cv::WINDOW_NORMAL);
//        cv::resizeWindow("corner img", img.cols / 2, img.rows / 2);
//        cv::imshow("corner img", img);
//        cv::waitKey(300);

        points_2d_vec.push_back(points_2d);
    }
    std::cout << "getKeyPoints succeed" << std::endl;
    return true;
}

Step 2: Compute the homography matrix H

We saw earlier the mapping from world coordinates to pixel coordinates:

s \begin{bmatrix} u\\v \\ 1 \end{bmatrix}= A[R\ T]\begin{bmatrix} X\\Y \\ Z \\ 1 \end{bmatrix}=A[r_{1},r_{2},r_{3},t]\begin{bmatrix} X\\Y \\ Z \\ 1 \end{bmatrix}

To stay consistent with the paper, we switch to the paper's notation: A is the intrinsic matrix (previously written K), and s is the scale factor (previously written \rho).

Since Z = 0,

s \begin{bmatrix} u\\v \\ 1 \end{bmatrix}=A[r_{1},r_{2},r_{3},t]\begin{bmatrix} X\\Y \\ 0 \\ 1 \end{bmatrix}=A[r_{1},r_{2},t]\begin{bmatrix} X\\Y \\ 1 \end{bmatrix}

The homography matrix is

H=A[r_{1},r_{2},t]

In the same way as before,

H = \begin{bmatrix} h_{1}\\h_{2} \\ h_{3} \end{bmatrix}        p = HP \rightarrow p_{i} =\begin{bmatrix} u_{i}\\v_{i} \end{bmatrix}= \begin{bmatrix} \frac{h_{1}P_{i}}{h_{3}P_{i}} \\ \frac{h_{2}P_{i}}{h_{3}P_{i}} \end{bmatrix}\rightarrow \left\{\begin{matrix} h_{1}P_{i}-u_{i}(h_{3}P_{i} )=0\\ h_{2}P_{i}-v_{i}(h_{3}P_{i} )=0 \end{matrix}\right.

H can then be obtained by least squares.

To make the computation numerically stable, the data should first be normalized (a Z-score style standardization):

x^{'}= (x-mean_x)/variance_x \\ y^{'}= (y-mean_y)/variance_y

In matrix form:

\begin{bmatrix} x^{'}\\y^{'} \\ 1 \end{bmatrix} = \begin{bmatrix} 1/variance_x&0 &-mean_x/variance_x \\ 0 & 1/variance_y & -mean_y/variance_y \\0&0&1\end{bmatrix} \begin{bmatrix} x\\y \\ 1 \end{bmatrix}

We need to keep the normalization matrices so that the result can be mapped back after solving for H':

N_1p = H^{'}N_2P\rightarrow p = (N_1^{-1}H^{'}N_2)P\rightarrow p = HP

/**
 * Z-score style normalization (zero mean, unit spread)
 * @param points        input data points
 * @param normal_points output: normalized data points
 * @param normT         output: 3x3 normalization matrix
 */
void CamCalibration::Normalize(const std::vector<cv::Point2f> &points, std::vector<cv::Point2f> &normal_points,
                               cv::Mat &normT) {
    // Compute the mean
    float mean_x = 0;
    float mean_y = 0;
    for (const auto &point: points) {
        mean_x += point.x;
        mean_y += point.y;
    }
    mean_x /= points.size();
    mean_y /= points.size();
    // Compute the spread (square root of the summed squared deviations)
    float variance_x = 0;
    float variance_y = 0;
    for (const auto &point: points) {
        float tmp_x = pow(point.x - mean_x, 2);
        float tmp_y = pow(point.y - mean_y, 2);
        variance_x += tmp_x;
        variance_y += tmp_y;
    }
    variance_x = sqrt(variance_x);
    variance_y = sqrt(variance_y);

    for (const auto &point: points) {
        cv::Point2f p;
        p.x = (point.x - mean_x) / variance_x;
        p.y = (point.y - mean_y) / variance_y;
        normal_points.push_back(p);
    }
    // Fill the 3x3 normalization matrix (see the matrix form above); it must be allocated first
    normT = cv::Mat::eye(3, 3, CV_32F);
    normT.at<float>(0, 0) = 1 / variance_x;
    normT.at<float>(0, 2) = -mean_x / variance_x;
    normT.at<float>(1, 1) = 1 / variance_y;
    normT.at<float>(1, 2) = -mean_y / variance_y;
}

/**
 * Compute the homography matrix for each calibration image
 */
void CamCalibration::CalcH() {
    const auto &points_3d_vec = points_3d_vec_;
    const auto &points_2d_vec = points_2d_vec_;
    for (int i = 0; i < points_2d_vec.size(); i++) {
        // World and pixel coordinates of one image
        const auto &points_3d = points_3d_vec[i];
        const auto &points_2d = points_2d_vec[i];
        std::vector<cv::Point2f> normal_points_3d, normal_points_2d;
        cv::Mat normT_3d, normT_2d;
        Normalize(points_3d, normal_points_3d, normT_3d);
        Normalize(points_2d, normal_points_2d, normT_2d);

        cv::Mat H = cv::Mat::eye(3, 3, CV_32F);

        int corner_size = normal_points_2d.size();
        if (corner_size < 4) {
            std::cerr << "corner size < 4";
            exit(-1);
        }

        cv::Mat A(corner_size * 2, 9, CV_32F, cv::Scalar(0));
        for (int j = 0; j < corner_size; j++) {
            // Use the normalized coordinates to build the coefficient matrix
            const cv::Point2f &point_3d = normal_points_3d[j];
            const cv::Point2f &point_2d = normal_points_2d[j];
            A.at<float>(j * 2, 0) = point_3d.x;
            A.at<float>(j * 2, 1) = point_3d.y;
            A.at<float>(j * 2, 2) = 1;
            A.at<float>(j * 2, 3) = 0;
            A.at<float>(j * 2, 4) = 0;
            A.at<float>(j * 2, 5) = 0;
            A.at<float>(j * 2, 6) = -point_2d.x * point_3d.x;
            A.at<float>(j * 2, 7) = -point_2d.x * point_3d.y;
            A.at<float>(j * 2, 8) = -point_2d.x;

            A.at<float>(j * 2 + 1, 0) = 0;
            A.at<float>(j * 2 + 1, 1) = 0;
            A.at<float>(j * 2 + 1, 2) = 0;
            A.at<float>(j * 2 + 1, 3) = point_3d.x;
            A.at<float>(j * 2 + 1, 4) = point_3d.y;
            A.at<float>(j * 2 + 1, 5) = 1;
            A.at<float>(j * 2 + 1, 6) = -point_2d.y * point_3d.x;
            A.at<float>(j * 2 + 1, 7) = -point_2d.y * point_3d.y;
            A.at<float>(j * 2 + 1, 8) = -point_2d.y;
        }
        cv::Mat U, W, VT;                                                    // A =UWV^T
        cv::SVD::compute(A, W, U, VT,
                         cv::SVD::MODIFY_A | cv::SVD::FULL_UV); // Eigen returns V (eigenvectors as columns); OpenCV returns VT, so the rows are the singular vectors
        H = VT.row(8).reshape(0, 3);
        // H = inv_N2 * H * N3
        cv::Mat normT_2d_inv;
        cv::invert(normT_2d, normT_2d_inv);
        H = normT_2d_inv * H * normT_3d;
        H_vec_.push_back(H);
    }
}

Step 3: Compute the intrinsic matrix A

[Figure 7]

H=[h_1 \ h_2 \ h_3] = \lambda A[r_1 \ r_2 \ t]\rightarrow [A^{-1}h_1 \ A^{-1}h_2 \ A^{-1}h_3] = \lambda [r_1 \ r_2 \ t]

Since r_1\perp r_2 and ||r_1||=||r_2||=1,

(A^{-1}h_1) \cdot(A^{-1}h_2) = (A^{-1}h_1) ^T(A^{-1}h_2)=h_1^TA^{-T}A^{-1}h_2=0

h_1^TA^{-T}A^{-1}h_1 - h_2^TA^{-T}A^{-1}h_2 =0

[Figure 8]

B is a symmetric matrix, so only its upper triangle needs to be stored:

b=[B_{11}\ B_{12}\ B_{22}\ B_{13}\ B_{23}\ B_{33}]^T

h_i^TBh_j = \begin{bmatrix} h_{i1} &h_{i2} &h_{i3} \end{bmatrix} \begin{bmatrix} B_{11} &B_{12} &B_{13} \\ B_{21} &B_{22} &B_{23} \\ B_{31}&B_{32} &B_{33} \end{bmatrix} \begin{bmatrix} h_{j1}\\h_{j2} \\ h_{j3} \end{bmatrix} \\\\ =h_{i1}B_{11}h_{j1} + h_{i2}B_{12}h_{j1}+h_{i1}B_{12}h_{j2}+h_{i3}B_{13}h_{j1}+h_{i1}B_{13}h_{j3}+h_{i2}B_{22}h_{j2}+h_{i3}B_{23}h_{j2}+h_{i2}B_{23}h_{j3}+h_{i3}B_{33}h_{j3} \\ \\ =V_{ij}^Tb \\ \\ V_{ij} = \begin{bmatrix} h_{i1}h_{j1} &h_{i1}h_{j2}+h_{i2}h_{j1} & h_{i2}h_{j2}&h_{i1}h_{j3}+h_{i3}h_{j1} &h_{i2}h_{j3}+h_{i3}h_{j2} &h_{i3}h_{j3} \end{bmatrix}^T

\begin{bmatrix} V_{12}^T\\ (V_{11}-V_{22})^T \end{bmatrix}b=0

Once B is solved, the parameters of the intrinsic matrix can be extracted. Note that the B we solve for is only determined up to scale: the true B differs from the estimate by a factor, B=\lambda B^{'}.

[Figure 9]
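The repository computes A in its own routine (not shown here); the following is only a rough sketch of what such a step can look like, not the author's exact code: stack the two constraints from every homography into a 2n×6 matrix V, solve Vb=0 by SVD, and then read the intrinsics off B using the closed-form expressions from Zhang's paper.

#include <cmath>
#include <vector>
#include <opencv2/core.hpp>

// Rough sketch (illustrative): estimate the intrinsic matrix A from the per-image
// homographies, following Zhang's closed-form solution.
static cv::Vec<double, 6> vij(const cv::Mat &H, int i, int j) {
    // h_i denotes the i-th column of H (zero-based here); see the V_ij formula above.
    auto h = [&](int r, int c) { return H.at<double>(r, c); };
    return {h(0, i) * h(0, j),
            h(0, i) * h(1, j) + h(1, i) * h(0, j),
            h(1, i) * h(1, j),
            h(2, i) * h(0, j) + h(0, i) * h(2, j),
            h(2, i) * h(1, j) + h(1, i) * h(2, j),
            h(2, i) * h(2, j)};
}

cv::Mat calcIntrinsics(const std::vector<cv::Mat> &H_vec) {
    cv::Mat V = cv::Mat::zeros(2 * (int) H_vec.size(), 6, CV_64F);
    for (int k = 0; k < (int) H_vec.size(); k++) {
        cv::Mat H;
        H_vec[k].convertTo(H, CV_64F);
        cv::Vec<double, 6> v12 = vij(H, 0, 1);                    // v12^T b = 0
        cv::Vec<double, 6> v11_22 = vij(H, 0, 0) - vij(H, 1, 1);  // (v11 - v22)^T b = 0
        for (int c = 0; c < 6; c++) {
            V.at<double>(2 * k, c) = v12[c];
            V.at<double>(2 * k + 1, c) = v11_22[c];
        }
    }
    // Solve Vb = 0 with ||b|| = 1: the last row of VT is the solution.
    cv::Mat W, U, VT;
    cv::SVD::compute(V, W, U, VT, cv::SVD::FULL_UV);
    int last = VT.rows - 1;
    double B11 = VT.at<double>(last, 0), B12 = VT.at<double>(last, 1), B22 = VT.at<double>(last, 2),
           B13 = VT.at<double>(last, 3), B23 = VT.at<double>(last, 4), B33 = VT.at<double>(last, 5);
    // Closed-form extraction of the intrinsics (as given in Zhang's paper).
    double v0 = (B12 * B13 - B11 * B23) / (B11 * B22 - B12 * B12);
    double lambda = B33 - (B13 * B13 + v0 * (B12 * B13 - B11 * B23)) / B11;
    double alpha = std::sqrt(lambda / B11);
    double beta = std::sqrt(lambda * B11 / (B11 * B22 - B12 * B12));
    double gamma = -B12 * alpha * alpha * beta / lambda;
    double u0 = gamma * v0 / beta - B13 * alpha * alpha / lambda;
    cv::Mat_<double> A = (cv::Mat_<double>(3, 3) << alpha, gamma, u0,
                                                    0,     beta,  v0,
                                                    0,     0,     1);
    return A;
}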

Step 4: Compute the extrinsic parameters R, T

[A^{-1}h_1 \ A^{-1}h_2 \ A^{-1}h_3] = \lambda [r_1 \ r_2 \ t]\rightarrow \left\{\begin{matrix} r_1 = \frac{1}{\lambda}A^{-1}h_1\\\\r_2 = \frac{1}{\lambda} A^{-1}h_2 \\\\ r_3 = r_1\times r_2 \\\\ t = \frac{1}{\lambda} A^{-1}h_3 \end{matrix}\right.

Because of noise, the R we solve for does not necessarily satisfy the rotation-matrix property R^{T}R=I.

We therefore look for the closest valid rotation matrix:

\underset{R}{min}\begin{Vmatrix} R-Q \end{Vmatrix}^2_{F} \ ,R^TR=I,Q = \begin{bmatrix} r_1 & r_2 & r_3 \end{bmatrix}

\begin{Vmatrix} R-Q \end{Vmatrix}^{2}_F = trace((R-Q)^T(R-Q))\\=3+trace(Q^TQ)-2trace(R^TQ)

The whole optimization therefore reduces to maximizing trace(R^TQ).

trace(R^TQ) = trace(R^TUSV^T) = trace(V^TR^TUS), where Q = USV^T is the SVD of Q.

Let Z = V^TR^TU. Since Z is orthonormal and S is diagonal with non-negative entries, trace(ZS) is maximized when Z=I, i.e. when R = UV^T.

void CamCalibration::CalcRT() {
    const auto &K = K_;
    const auto &H_vec = H_vec_;
    auto &R_vec = R_vec_;
    auto &t_vec = t_vec_;

    cv::Mat K_inverse;
    cv::invert(K, K_inverse);
    K_inverse.convertTo(K_inverse, CV_64F); // work in double precision throughout

    for (const auto &H: H_vec) {
        // H from CalcH is stored as CV_32F; convert it so the matrix types match
        cv::Mat H64;
        H.convertTo(H64, CV_64F);
        cv::Mat M = K_inverse * H64;

        cv::Vec3d r1(M.at<double>(0, 0), M.at<double>(1, 0), M.at<double>(2, 0));
        cv::Vec3d r2(M.at<double>(0, 1), M.at<double>(1, 1), M.at<double>(2, 1));
        cv::Vec3d r3 = r1.cross(r2);
        cv::Mat Q = cv::Mat::eye(3, 3, CV_64F);

        Q.at<double>(0, 0) = r1(0);
        Q.at<double>(1, 0) = r1(1);
        Q.at<double>(2, 0) = r1(2);
        Q.at<double>(0, 1) = r2(0);
        Q.at<double>(1, 1) = r2(1);
        Q.at<double>(2, 1) = r2(2);
        Q.at<double>(0, 2) = r3(0);
        Q.at<double>(1, 2) = r3(1);
        Q.at<double>(2, 2) = r3(2);
        cv::Mat normQ;
        cv::normalize(Q, normQ);

        // Nearest rotation matrix: R = U * V^T from the SVD of Q
        cv::Mat U, W, VT;
        cv::SVD::compute(normQ, W, U, VT, cv::SVD::MODIFY_A | cv::SVD::FULL_UV);
        cv::Mat R = U * VT;

        R_vec.push_back(R);

        // t = (1/lambda) K^{-1} h3, with lambda = ||K^{-1} h1|| (see the formula above)
        double lambda = cv::norm(r1);
        cv::Mat t = M.col(2) / lambda;
        t_vec.push_back(t);
    }
}

Step 5: Compute the distortion parameters

Zhang's method models only radial distortion; the distortion formula is:

\hat{x}=x(1+k_1r^2+k_2r^4) \\ \hat{y}=y(1+k_1r^2+k_2r^4)

(x,y) and (\hat{x},\hat{y}) are the ideal and the distorted normalized image coordinates respectively, and r is the distance of the point from the optical axis in normalized image coordinates \ (r^2 = x^2 + y^2); the larger the distance, the stronger the distortion.

\begin{bmatrix} \hat{u}\\\hat{v}\\1 \end{bmatrix} = \begin{bmatrix} \alpha & \gamma & u_0\\ 0& \beta & v_0 \\ 0&0 &1 \end{bmatrix}\begin{bmatrix} \hat{x}\\\hat{y}\\1 \end{bmatrix} \rightarrow \left\{\begin{matrix} \hat{u}=\alpha \hat{x} + \gamma \hat{y} + u_0 \\ \\ \hat{v}= \beta \hat{y} + v_0 \end{matrix}\right.

\begin{bmatrix} u\\v\\1 \end{bmatrix} = \begin{bmatrix} \alpha & \gamma & u_0\\ 0& \beta & v_0 \\ 0&0 &1 \end{bmatrix}\begin{bmatrix}x\\y\\1 \end{bmatrix} \rightarrow \left\{\begin{matrix}u=\alpha x + \gamma y + u_0 \\ \\ v= \beta y + v_0 \end{matrix}\right.

The two pixel axes are in practice very close to perpendicular, so \gamma is taken as 0, and therefore

\left\{\begin{matrix} \hat{u}=\alpha \hat{x} + u_0\\ \hat{v}= \beta \hat{y} + v_0\\ u=\alpha x + u_0 \\ v = \beta y +v_0 \end{matrix}\right.\rightarrow \left\{\begin{matrix} \hat{u} = u_0+\frac{u-u_0}{x}\hat{x} \\ \\ \hat{v} = v_0+\frac{v-v_0}{y}\hat{y} \end{matrix}\right.

\left\{\begin{matrix} \hat{u} = u+(u-u_0)(k_1r^2+k_2r^4)\\ \hat{v} = v+(v-v_0)(k_1r^2+k_2r^4) \end{matrix}\right.

\begin{bmatrix} (u-u_0)r^2 &(u-u_0)r^4 \\ (v-v_0)r^2 &(v-v_0)r^4 \end{bmatrix}\begin{bmatrix} k_1\\k_2 \end{bmatrix}=\begin{bmatrix} \hat{u}-u\\ \hat{v}-v \end{bmatrix}

Dk=d

k can then be obtained by least squares:

k = (D^TD)^{-1}D^Td

void CamCalibration::CalDistCoeff() {
    std::vector<double> r2_vec;
    std::vector<cv::Point2d> ideal_point_vec;

    // Use a single image for this computation
    const cv::Mat &K = K_;
    const cv::Mat &R = R_vec_[0];
    const cv::Mat &t = t_vec_[0];
    auto points_3d = points_3d_vec_[0];
    auto &dist_coeff = dist_coeff_;
    for (const auto &point_3d: points_3d) {
        cv::Mat p_3d = (cv::Mat_<double>(3, 1) << point_3d.x, point_3d.y, 0);
        // World coordinates to camera coordinates
        cv::Mat p_cam = R * p_3d + t;
        // Convert to inhomogeneous (normalized image) coordinates
        p_cam.at<double>(0, 0) = p_cam.at<double>(0, 0) / p_cam.at<double>(2, 0);
        p_cam.at<double>(1, 0) = p_cam.at<double>(1, 0) / p_cam.at<double>(2, 0);
        p_cam.at<double>(2, 0) = 1;
        double x = p_cam.at<double>(0, 0);
        double y = p_cam.at<double>(1, 0);
        double r2 = x * x + y * y;
        r2_vec.push_back(r2);

        // Camera coordinates to pixel coordinates
        cv::Mat p_pix = K * p_cam;
        ideal_point_vec.emplace_back(p_pix.at<double>(0, 0), p_pix.at<double>(1, 0));
    }
    const std::vector<cv::Point2f> &dist_point_vec = points_2d_vec_[0];
    double u0 = K.at<double>(0, 2);
    double v0 = K.at<double>(1, 2);

    cv::Mat D = cv::Mat::zeros((int) ideal_point_vec.size() * 2, 2, CV_64F);
    cv::Mat d = cv::Mat::zeros((int) ideal_point_vec.size() * 2, 1, CV_64F);
    for (int i = 0; i < (int) ideal_point_vec.size(); i++) {
        double r2 = r2_vec[i];
        const auto &ideal_p = ideal_point_vec[i];
        const auto &dist_p = dist_point_vec[i];
        D.at<double>(i * 2, 0) = (ideal_p.x - u0) * r2;
        D.at<double>(i * 2, 1) = (ideal_p.x - u0) * r2 * r2;
        D.at<double>(i * 2 + 1, 0) = (ideal_p.y - v0) * r2;
        D.at<double>(i * 2 + 1, 1) = (ideal_p.y - v0) * r2 * r2;
        d.at<double>(2 * i, 0) = dist_p.x - ideal_p.x;
        d.at<double>(2 * i + 1, 0) = dist_p.y - ideal_p.y;
    }
    cv::Mat DT;
    cv::transpose(D, DT);
//    std::cout << "D:" << D << std::endl;

    cv::Mat DTD_inverse;
    cv::invert(DT * D, DTD_inverse);
//    std::cout << "DTD_inv:" << DTD_inverse << std::endl;

    dist_coeff = DTD_inverse * DT * d;
    std::cout << "distort coeff: " << dist_coeff.at(0, 0) << ", " << dist_coeff.at(1, 0) << std::endl;
}

This completes Zhang's camera calibration. Of course, the parameters obtained above were each estimated separately under ideal assumptions, so in practice they should be refined jointly by maximum-likelihood estimation (minimizing the reprojection error); we leave that part to the reader.
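For reference, here is a sketch (not part of the repository) of the reprojection-error objective that such a maximum-likelihood refinement would minimize with a nonlinear optimizer such as Levenberg-Marquardt; the function and parameter names are illustrative:

#include <vector>
#include <opencv2/core.hpp>

// Sketch: total squared reprojection error over all images and corners.
// K: intrinsics, k1/k2: radial distortion, R_vec/t_vec: per-image extrinsics,
// points_3d_vec / points_2d_vec: per-image board corners and detected pixel corners.
double reprojectionError(const cv::Mat &K, double k1, double k2,
                         const std::vector<cv::Mat> &R_vec, const std::vector<cv::Mat> &t_vec,
                         const std::vector<std::vector<cv::Point2f>> &points_3d_vec,
                         const std::vector<std::vector<cv::Point2f>> &points_2d_vec) {
    double err = 0;
    for (size_t i = 0; i < points_3d_vec.size(); i++) {
        for (size_t j = 0; j < points_3d_vec[i].size(); j++) {
            const auto &P = points_3d_vec[i][j];
            cv::Mat p3d = (cv::Mat_<double>(3, 1) << P.x, P.y, 0);
            cv::Mat pc = R_vec[i] * p3d + t_vec[i];              // world -> camera
            double x = pc.at<double>(0) / pc.at<double>(2);      // normalized image coords
            double y = pc.at<double>(1) / pc.at<double>(2);
            double r2 = x * x + y * y;
            double s = 1 + k1 * r2 + k2 * r2 * r2;               // radial distortion factor
            double xd = x * s, yd = y * s;
            double u = K.at<double>(0, 0) * xd + K.at<double>(0, 1) * yd + K.at<double>(0, 2);
            double v = K.at<double>(1, 1) * yd + K.at<double>(1, 2);
            double du = u - points_2d_vec[i][j].x, dv = v - points_2d_vec[i][j].y;
            err += du * du + dv * dv;
        }
    }
    return err;
}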

References:

计算机视觉之三维重建(深入浅出SfM与SLAM核心算法)——2.摄像机标定, Bilibili

张正友标定论文的解读和C++代码编写, CSDN blog

Z. Zhang, "A Flexible New Technique for Camera Calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000.
