Preface:
Camera calibration is the first step in target localization and tracking for robot vision: the camera's intrinsic and extrinsic parameters are calibrated with a calibration board, and the subsequent localization and recognition work builds on that result. This post walks through a camera calibration experiment.
1. Camera Calibration Steps
OpenCV calibrates with a checkerboard pattern, as shown in the figure below. To calibrate the camera we need a set of 3D points together with their corresponding 2D image points. On the black-and-white checkerboard, the 2D image points are easy to obtain with corner detection. What about the 3D points in the real world? In our setup the camera stays fixed while the checkerboard is moved to different positions and photographed, so we need the (X, Y, Z) value of every corner. The simple way out is to define the plane of the checkerboard as the XY plane, i.e. Z = 0. Since the square size of the board is known, say 30 mm, the corner coordinates can be defined as (0,0,0), (30,0,0), (60,0,0), ..., with the results in mm.
The 3D points are called object points and the 2D image points are called image points.
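As a small sketch of this idea (assuming a board with 9*6 inner corners and a 30 mm square size; adjust both to your own board), the object points can be generated with NumPy as follows:

import numpy as np

square_size = 30.0  # mm, assumed square size of the board
nx, ny = 9, 6       # assumed number of inner corners per row and column
objp = np.zeros((nx * ny, 3), np.float32)
objp[:, :2] = np.mgrid[0:nx, 0:ny].T.reshape(-1, 2) * square_size
# objp now holds (0,0,0), (30,0,0), (60,0,0), ... with Z fixed at 0

The same array can be reused for every calibration image, since the board geometry does not change.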
To find the checkerboard pattern we use the OpenCV function cv2.findChessboardCorners(). We also have to tell it the pattern size we are using, e.g. an 8*8 or 5*5 board; a pattern with different corner counts in the x and y directions is recommended. In the experiment below we use a 10*7 checkerboard with 20 mm squares, i.e. 9*6 inner corners. If the function detects the pattern, it returns the corresponding corners along with true. Of course not every image will yield the pattern, so several images are used for calibration. Instead of a checkerboard, a circle grid can also be used; the corresponding function is cv2.findCirclesGrid().
Once the corners are found, cv2.cornerSubPix() refines them to more accurate sub-pixel coordinates, and cv2.drawChessboardCorners() draws them on the image for display.
The steps above give us the 3D points used for calibration and their corresponding 2D image points. We then call cv2.calibrateCamera(), which returns the calibration result (the overall reprojection error), the camera intrinsic matrix, the distortion coefficients, and the rotation and translation vectors.
Finally, the newly obtained intrinsic matrix and distortion coefficients can be used to undistort images.
2. Overview of the Relevant APIs
2.1 The cv2.findChessboardCorners() function
We use findChessboardCorners to extract the corners. Here "corners" means specifically the inner corners of the calibration board, the ones that do not touch the board's edge. The function prototype is as follows:
findChessboardCorners(...)
findChessboardCorners(image, patternSize[, corners[, flags]]) -> retval, corners
. @brief Finds the positions of internal corners of the chessboard.
.
. @param image Source chessboard view. It must be an 8-bit grayscale or color image.
. @param patternSize Number of inner corners per a chessboard row and column
. ( patternSize = cvSize(points_per_row,points_per_colum) = cvSize(columns,rows) ).
. @param corners Output array of detected corners.
. @param flags Various operation flags that can be zero or a combination of the following values:
. - **CALIB_CB_ADAPTIVE_THRESH** Use adaptive thresholding to convert the image to black
. and white, rather than a fixed threshold level (computed from the average image brightness).
. - **CALIB_CB_NORMALIZE_IMAGE** Normalize the image gamma with equalizeHist before
. applying fixed or adaptive thresholding.
. - **CALIB_CB_FILTER_QUADS** Use additional criteria (like contour area, perimeter,
. square-like shape) to filter out false quads extracted at the contour retrieval stage.
. - **CALIB_CB_FAST_CHECK** Run a fast check on the image that looks for chessboard corners,
. and shortcut the call if none is found. This can drastically speed up the call in the
. degenerate condition when no chessboard is observed.
.
. The function attempts to determine whether the input image is a view of the chessboard pattern and
. locate the internal chessboard corners. The function returns a non-zero value if all of the corners
. are found and they are placed in a certain order (row by row, left to right in every row).
. Otherwise, if the function fails to find all the corners or reorder them, it returns 0. For example,
. a regular chessboard has 8 x 8 squares and 7 x 7 internal corners, that is, points where the black
. squares touch each other. The detected coordinates are approximate, and to determine their positions
. more accurately, the function calls cornerSubPix. You also may use the function cornerSubPix with
. different parameters if returned coordinates are not accurate enough.
.
. Sample usage of detecting and drawing chessboard corners: :
. @code
. Size patternsize(8,6); //interior number of corners
. Mat gray = ....; //source image
. vector<Point2f> corners; //this will be filled by the detected corners
.
. //CALIB_CB_FAST_CHECK saves a lot of time on images
. //that do not contain any chessboard corners
. bool patternfound = findChessboardCorners(gray, patternsize, corners,
. CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_NORMALIZE_IMAGE
. + CALIB_CB_FAST_CHECK);
.
. if(patternfound)
. cornerSubPix(gray, corners, Size(11, 11), Size(-1, -1),
. TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
.
. drawChessboardCorners(img, patternsize, Mat(corners), patternfound);
. @endcode
. @note The function requires white space (like a square-thick border, the wider the better) around
. the board to make the detection more robust in various environments. Otherwise, if there is no
. border and the background is dark, the outer black squares cannot be segmented properly and so the
. square grouping and ordering algorithm fails.
The first parameter, image, is the captured checkerboard image (a Mat); it must be an 8-bit grayscale or color image.
The second parameter, patternSize, is the number of inner corners per board row and column; in general the two counts should differ, so the later calibration code can tell the orientation of the board.
The third parameter, corners, stores the image coordinates of the detected inner corners, usually as an array.
The fourth parameter, flags, selects how the inner corners are searched for; it has a default value.
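A minimal Python usage sketch (the file name 'camera_cal/calibration2.jpg' and the 9*6 pattern size are assumptions for illustration; the flags mirror the C++ sample above):

import cv2

img = cv2.imread('camera_cal/calibration2.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
flags = cv2.CALIB_CB_ADAPTIVE_THRESH + cv2.CALIB_CB_NORMALIZE_IMAGE + cv2.CALIB_CB_FAST_CHECK
# ret is True only when all 9*6 inner corners are found
ret, corners = cv2.findChessboardCorners(gray, (9, 6), flags=flags)
print(ret, None if corners is None else corners.shape)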
2.2 The cv2.cornerSubPix() function
To improve calibration accuracy, the initially extracted corners are refined to sub-pixel precision, which reduces the calibration error. The commonly used function is cornerSubPix; its prototype is as follows:
cornerSubPix(...)
cornerSubPix(image, corners, winSize, zeroZone, criteria) -> corners
. @brief Refines the corner locations.
.
. The function iterates to find the sub-pixel accurate location of corners or radial saddle points, as
. shown on the figure below.
.
. ![image](pics/cornersubpix.png)
.
. Sub-pixel accurate corner locator is based on the observation that every vector from the center \f$q\f$
. to a point \f$p\f$ located within a neighborhood of \f$q\f$ is orthogonal to the image gradient at \f$p\f$
. subject to image and measurement noise. Consider the expression:
.
. \f[\epsilon _i = {DI_{p_i}}^T \cdot (q - p_i)\f]
.
. where \f${DI_{p_i}}\f$ is an image gradient at one of the points \f$p_i\f$ in a neighborhood of \f$q\f$ . The
. value of \f$q\f$ is to be found so that \f$\epsilon_i\f$ is minimized. A system of equations may be set up
. with \f$\epsilon_i\f$ set to zero:
.
. \f[\sum _i(DI_{p_i} \cdot {DI_{p_i}}^T) \cdot q - \sum _i(DI_{p_i} \cdot {DI_{p_i}}^T \cdot p_i)\f]
.
. where the gradients are summed within a neighborhood ("search window") of \f$q\f$ . Calling the first
. gradient term \f$G\f$ and the second gradient term \f$b\f$ gives:
.
. \f[q = G^{-1} \cdot b\f]
.
. The algorithm sets the center of the neighborhood window at this new center \f$q\f$ and then iterates
. until the center stays within a set threshold.
.
. @param image Input image.
. @param corners Initial coordinates of the input corners and refined coordinates provided for
. output.
. @param winSize Half of the side length of the search window. For example, if winSize=Size(5,5) ,
. then a \f$5*2+1 \times 5*2+1 = 11 \times 11\f$ search window is used.
. @param zeroZone Half of the size of the dead region in the middle of the search zone over which
. the summation in the formula below is not done. It is used sometimes to avoid possible
. singularities of the autocorrelation matrix. The value of (-1,-1) indicates that there is no such
. a size.
. @param criteria Criteria for termination of the iterative process of corner refinement. That is,
. the process of corner position refinement stops either after criteria.maxCount iterations or when
. the corner position moves by less than criteria.epsilon on some iteration.
The first parameter, image, is the input image; an 8-bit grayscale image works best and is detected more efficiently.
The second parameter, corners, holds the initial corner coordinates and also serves as the output for the refined sub-pixel positions, so it must contain floating-point data.
The third parameter, winSize, is half of the side length of the search window.
The fourth parameter, zeroZone, is half the size of the dead region at the center of the search zone, over which the summation is not performed; it is used to avoid possible singularities of the autocorrelation matrix. A value of (-1,-1) means there is no dead region.
The fifth parameter, criteria, defines the termination condition of the iterative corner refinement; it can combine a maximum iteration count with a corner-accuracy threshold.
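Continuing the findChessboardCorners sketch above, a minimal Python call might look like this (the (5, 5) half window, which yields an 11 x 11 search window, and the termination values 30 / 0.001 are assumptions):

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
if ret:
    # refine the coarse corners on the grayscale image; (-1, -1) disables the dead region
    corners = cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1), criteria)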
2.3 The drawChessboardCorners() function
drawChessboardCorners renders the corners that were successfully detected. Its prototype is:
drawChessboardCorners(...)
drawChessboardCorners(image, patternSize, corners, patternWasFound) -> image
. @brief Renders the detected chessboard corners.
.
. @param image Destination image. It must be an 8-bit color image.
. @param patternSize Number of inner corners per a chessboard row and column
. (patternSize = cv::Size(points_per_row,points_per_column)).
. @param corners Array of detected corners, the output of findChessboardCorners.
. @param patternWasFound Parameter indicating whether the complete board was found or not. The
. return value of findChessboardCorners should be passed here.
.
. The function draws individual chessboard corners detected either as red circles if the board was not
. found, or as colored corners connected with lines if the board was found.
The first parameter, image, is the destination image on which the corners are drawn; it must be an 8-bit color image.
The second parameter, patternSize, is the number of inner corners per board row and column.
The third parameter, corners, is the array of detected corners, i.e. the output of findChessboardCorners (or of cornerSubPix after refinement).
The fourth parameter, patternWasFound, is a flag indicating whether the complete set of inner corners was detected: true means the board was fully detected and the function connects all inner corners with colored lines as one pattern; false means some inner corners were missed and the function only marks the detected ones with (red) circles.
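Continuing the same sketch, drawing the detected corners from Python could look like this (drawing on the original color image img, not on the grayscale copy, per the 8-bit color requirement above):

cv2.drawChessboardCorners(img, (9, 6), corners, ret)
cv2.imshow('corners', img)
cv2.waitKey(0)
cv2.destroyAllWindows()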
2.4 The cv2.calibrateCamera() function
Once the image coordinates of the inner corners of the calibration images have been collected, calibrateCamera performs the calibration and computes the camera intrinsic and extrinsic parameters. Its prototype is as follows:
calibrateCamera(...)
calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix, distCoeffs[, rvecs[, tvecs[, flags[, criteria]]]]) -> retval, cameraMatrix, distCoeffs, rvecs, tvecs
. @overload double calibrateCamera( InputArrayOfArrays objectPoints,
. InputArrayOfArrays imagePoints, Size imageSize,
. InputOutputArray cameraMatrix, InputOutputArray distCoeffs,
. OutputArrayOfArrays rvecs, OutputArrayOfArrays tvecs,
. OutputArray stdDeviations, OutputArray perViewErrors,
. int flags = 0, TermCriteria criteria = TermCriteria(
. TermCriteria::COUNT + TermCriteria::EPS, 30, DBL_EPSILON) )
The first parameter, objectPoints, holds the 3D points in the world coordinate system. The world coordinate of every inner corner must be computed (initialized) from the size of a single black/white square of the board.
The second parameter, imagePoints, holds the image coordinates corresponding to each inner corner.
The third parameter, imageSize, is the pixel size of the images; it is needed when computing the intrinsic parameters and the distortion coefficients.
The fourth parameter, cameraMatrix, is the camera intrinsic matrix.
The fifth parameter, distCoeffs, is the distortion coefficient vector.
The sixth parameter, rvecs, holds the rotation vectors.
The seventh parameter, tvecs, holds the translation vectors.
The eighth parameter, flags, selects the algorithm used during calibration. The following values are available:
CV_CALIB_USE_INTRINSIC_GUESS: when this flag is set, cameraMatrix must already contain estimates of fx, fy, u0, v0. Otherwise (u0, v0) is initialized to the image center and fx, fy are estimated by least squares.
CV_CALIB_FIX_PRINCIPAL_POINT: the principal point is not changed during the optimization. When CV_CALIB_USE_INTRINSIC_GUESS is also set, the principal point stays at the center or at the supplied initial value.
CV_CALIB_FIX_ASPECT_RATIO: the ratio fx/fy is fixed and only fy is treated as a free variable in the optimization. When CV_CALIB_USE_INTRINSIC_GUESS is not set, the input values of fx and fy are ignored; only their ratio is used in the computation.
CV_CALIB_ZERO_TANGENT_DIST: the tangential distortion coefficients (p1, p2) are set to zero.
CV_CALIB_FIX_K1, ..., CV_CALIB_FIX_K6: the corresponding radial distortion coefficient is kept unchanged during the optimization.
CV_CALIB_RATIONAL_MODEL: the coefficients k4, k5 and k6 are also estimated. If the flag is not set, only the other 5 distortion coefficients are computed.
The ninth parameter, criteria, sets the termination condition of the optimization.
Before calling this function, the world coordinates of every inner corner on the board must be initialized. The calibration produces the camera intrinsic matrix cameraMatrix and the 5 distortion coefficients distCoeffs; in addition, every image gets its own rotation vector and translation vector.
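A minimal Python call, assuming objp_list and corners_list have been collected as in the demo code of section 3 and gray is the last grayscale calibration image:

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objp_list, corners_list, gray.shape[::-1], None, None)
print('RMS reprojection error:', ret)             # ret is the overall RMS error, not a bool
print('intrinsic matrix:\n', mtx)
print('distortion coefficients:', dist.ravel())   # k1, k2, p1, p2, k3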
2.5 The cv2.undistort() function
With the intrinsic matrix and distortion coefficients obtained from calibration, the image can be corrected for lens distortion using the undistort function, whose prototype is as follows:
undistort(...)
undistort(src, cameraMatrix, distCoeffs[, dst[, newCameraMatrix]]) -> dst
. @brief Transforms an image to compensate for lens distortion.
.
. The function transforms an image to compensate radial and tangential lens distortion.
.
. The function is simply a combination of cv::initUndistortRectifyMap (with unity R ) and cv::remap
. (with bilinear interpolation). See the former function for details of the transformation being
. performed.
.
. Those pixels in the destination image, for which there is no correspondent pixels in the source
. image, are filled with zeros (black color).
.
. A particular subset of the source image that will be visible in the corrected image can be regulated
. by newCameraMatrix. You can use cv::getOptimalNewCameraMatrix to compute the appropriate
. newCameraMatrix depending on your requirements.
.
. The camera matrix and the distortion parameters can be determined using cv::calibrateCamera. If
. the resolution of images is different from the resolution used at the calibration stage, \f$f_x,
. f_y, c_x\f$ and \f$c_y\f$ need to be scaled accordingly, while the distortion coefficients remain
. the same.
.
. @param src Input (distorted) image.
. @param dst Output (corrected) image that has the same size and type as src .
. @param cameraMatrix Input camera matrix \f$A = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$ .
. @param distCoeffs Input vector of distortion coefficients
. \f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\f$
. of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.
. @param newCameraMatrix Camera matrix of the distorted image. By default, it is the same as
. cameraMatrix but you may additionally scale and shift the result by using a different matrix.
The first parameter, src, is the distorted input image.
The second parameter, cameraMatrix, is the camera intrinsic matrix obtained above.
The third parameter, distCoeffs, holds the camera distortion coefficients obtained above.
The fourth parameter, dst, is the corrected output image, with the same size and type as the input image.
The fifth parameter, newCameraMatrix, is by default the same as cameraMatrix.
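A minimal Python sketch of undistortion; passing mtx directly as newCameraMatrix (as the demo code below does) also works, while cv2.getOptimalNewCameraMatrix, mentioned in the docstring above, lets you control how much of the source image stays visible:

h, w = img.shape[:2]
new_mtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
dst = cv2.undistort(img, mtx, dist, None, new_mtx)
x, y, rw, rh = roi
dst = dst[y:y + rh, x:x + rw]   # optionally crop to the valid region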
3. Demo Code
We write the following demo code:
#coding:utf-8
import numpy as np
import cv2
import matplotlib.pyplot as plt
import pickle


def calibrate_camera():
    # Map each calibration image to the number of chessboard inner corners it contains
    objp_dict = {
        1: (9, 5),
        2: (9, 6),
        3: (9, 6),
        4: (9, 6),
        5: (9, 6),
        6: (9, 6),
        7: (9, 6),
        8: (9, 6),
        9: (9, 6),
        10: (9, 6),
        11: (9, 6),
        12: (9, 6),
        13: (9, 6),
        14: (9, 6),
        15: (9, 6),
        16: (9, 6),
        17: (9, 6),
        18: (9, 6),
        19: (9, 6),
        20: (9, 6),
    }
    # Object point and corner lists used for calibration
    objp_list = []     # 3D points in the world coordinate system
    corners_list = []  # 2D points in the image plane
    # Go through all images and find the corners
    for k in objp_dict.keys():
        nx, ny = objp_dict[k]
        objp = np.zeros((nx*ny, 3), np.float32)
        objp[:, :2] = np.mgrid[0:nx, 0:ny].T.reshape(-1, 2)
        # For each board, take its number of inner corners, i.e. nx * ny,
        # and store the 3D coordinates of all inner corners in one array.
        # The world coordinate system is attached to the board, so Z is 0
        # for every point and only X and Y need to be filled in.
        # print(objp) gives, in part:
        # [[0. 0. 0.]
        #  [1. 0. 0.]
        #  [2. 0. 0.]
        #  [3. 0. 0.]
        #  [4. 0. 0.]
        #  [5. 0. 0.]
        #  [6. 0. 0.]
        #  [7. 0. 0.]
        #  [8. 0. 0.]
        #  [0. 1. 0.]
        #  [1. 1. 0.]
        #  [2. 1. 0.]
        #  ...
        fname = 'camera_cal/calibration%s.jpg' % str(k)
        img = cv2.imread(fname)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        ret, corners = cv2.findChessboardCorners(gray, (nx, ny), None)
        if ret == True:
            objp_list.append(objp)
            corners2 = cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1),
                                        criteria=(cv2.TERM_CRITERIA_MAX_ITER | cv2.TERM_CRITERIA_EPS, 30, 0.001))
            # Refine the initial corners to sub-pixel accuracy. criteria sets the
            # stopping rule: at most 30 iterations or an accuracy of 0.001.
            if corners2.any():
                # keep the refined corners if refinement produced a non-empty result
                corners_list.append(corners2)
            else:
                corners_list.append(corners)
            # # Draw and display the corners
            # cv2.drawChessboardCorners(img, (nx, ny), corners, ret)
            # cv2.imshow('img', img)
            # cv2.waitKey(5000)
            # print('Found corners for %s' % fname)
        # else:
        #     print('Warning: ret = %s for %s' % (ret, fname))
        #
        # cv2.destroyAllWindows()
    # Calibration
    img = cv2.imread('test_images/straight_lines1.jpg')
    img_size = (img.shape[1], img.shape[0])
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objp_list, corners_list, img_size, None, None)
    # print("ret:", ret)        # overall RMS reprojection error (a float, not a bool)
    # print("mtx:\n", mtx)      # intrinsic matrix
    # print("dist:\n", dist)    # distortion coefficients = (k_1, k_2, p_1, p_2, k_3)
    # print("rvecs:\n", rvecs)  # rotation vectors (extrinsics)
    # print("tvecs:\n", tvecs)  # translation vectors (extrinsics)
    return mtx, dist


if __name__ == '__main__':
    mtx, dist = calibrate_camera()
    save_dict = {'mtx': mtx, 'dist': dist}
    with open('calibrate_camera.p', 'wb') as f:
        pickle.dump(save_dict, f)
    # pickle provides simple persistence: objects can be stored on disk as files.
    # The pickle module can only be used from Python; almost every Python data type
    # (lists, dicts, sets, classes, ...) can be serialized with it.
    # Pickled data is not human-readable.
    # pickle.dump(obj, file[, protocol]) serializes obj and writes the stream to the file object.
    # protocol selects the serialization mode: 0 (the default) is text form, 1 and 2 are binary.
    # Example calibration image
    img = cv2.imread('camera_cal/calibration5.jpg')
    cv2.imshow("original", img)
    dst = cv2.undistort(img, mtx, dist, None, mtx)
    print(help(cv2.undistort))
    cv2.imshow("undistorted", dst)
    cv2.imwrite('example_images/undistort_calibration.png', dst)
    cv2.waitKey()
The checkerboard images used are shown below: