The functions in this section use a so-called pinhole camera model. In this model, a scene view is formed by projecting 3D points into the image plane using a perspective transformation.

s \, m' = A \, [R|t] \, M'

or

s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}

where:
- (X, Y, Z) are the coordinates of a 3D point in the world coordinate space
- (u, v) are the coordinates of the projection point in pixels
- A is a camera matrix, or a matrix of intrinsic parameters
- (c_x, c_y) is a principal point that is usually at the image center
- f_x, f_y are the focal lengths expressed in pixel units.
Thus, if an image from the camera is scaled by a factor, all of these parameters should be scaled (multiplied/divided, respectively) by the same factor. The matrix of intrinsic parameters does not depend on the scene viewed. So, once estimated, it can be re-used as long as the focal length is fixed (in case of a zoom lens). The joint rotation-translation matrix [R|t] is called a matrix of extrinsic parameters. It is used to describe the camera motion around a static scene, or vice versa, rigid motion of an object in front of a still camera. That is, [R|t] translates coordinates of a point (X, Y, Z) to a coordinate system fixed with respect to the camera. The transformation above is equivalent to the following (when z \ne 0):

\begin{bmatrix} x \\ y \\ z \end{bmatrix} = R \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + t

x' = x/z, \quad y' = y/z

u = f_x x' + c_x, \quad v = f_y y' + c_y
The following figure illustrates the pinhole camera model.
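As an illustration, the projection chain above can be sketched in plain Python. The camera parameters and pose below are made-up values for illustration only, not taken from any real calibration:

```python
def project_point(X, Y, Z, R, t, fx, fy, cx, cy):
    """Project a world-space 3D point to pixel coordinates.

    [x y z]^T = R [X Y Z]^T + t   (world -> camera coordinates)
    x' = x/z, y' = y/z            (perspective division, requires z != 0)
    u = fx*x' + cx, v = fy*y' + cy
    """
    x = R[0][0] * X + R[0][1] * Y + R[0][2] * Z + t[0]
    y = R[1][0] * X + R[1][1] * Y + R[1][2] * Z + t[1]
    z = R[2][0] * X + R[2][1] * Y + R[2][2] * Z + t[2]
    if z == 0:
        raise ValueError("point lies in the camera plane (z == 0)")
    xp, yp = x / z, y / z
    return fx * xp + cx, fy * yp + cy

# Identity rotation, camera 5 units along Z away from the world origin
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.0, 0.0, 5.0]
u, v = project_point(1.0, 2.0, 0.0, R, t,
                     fx=800.0, fy=800.0, cx=320.0, cy=240.0)
print(u, v)  # 480.0 560.0
```

With z = 5, the normalized coordinates are x' = 0.2 and y' = 0.4, so the pixel coordinates are 800*0.2 + 320 = 480 and 800*0.4 + 240 = 560.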
Real lenses usually have some distortion, mostly radial distortion and slight tangential distortion. So, the above model is extended as:

x' = x/z, \quad y' = y/z

x'' = x' \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} + 2 p_1 x' y' + p_2 (r^2 + 2 x'^2)

y'' = y' \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} + p_1 (r^2 + 2 y'^2) + 2 p_2 x' y'

where r^2 = x'^2 + y'^2, and then

u = f_x x'' + c_x, \quad v = f_y y'' + c_y

k_1, k_2, k_3, k_4, k_5, and k_6 are radial distortion coefficients. p_1 and p_2 are tangential distortion coefficients. Higher-order coefficients are not considered in OpenCV.
The next figure shows two common types of radial distortion: barrel distortion (typically k_1 > 0) and pincushion distortion (typically k_1 < 0).
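A minimal sketch of this distortion model in plain Python, keeping only k_1, k_2, p_1, p_2 (the remaining coefficients set to zero, so the rational denominator reduces to 1); the coefficient values are illustrative only:

```python
def distort(xp, yp, k1, k2, p1, p2):
    """Apply radial + tangential distortion to normalized coordinates (x', y').

    Implements the model above with k3 = k4 = k5 = k6 = 0.
    """
    r2 = xp * xp + yp * yp                        # r^2 = x'^2 + y'^2
    radial = 1 + k1 * r2 + k2 * r2 * r2           # radial factor
    x2 = xp * radial + 2 * p1 * xp * yp + p2 * (r2 + 2 * xp * xp)
    y2 = yp * radial + p1 * (r2 + 2 * yp * yp) + 2 * p2 * xp * yp
    return x2, y2

# A point on the x-axis at r = 0.5 is shifted radially by the k1 term
x2, y2 = distort(0.5, 0.0, k1=0.1, k2=0.0, p1=0.0, p2=0.0)
print(x2, y2)  # x2 is approximately 0.5125, y2 stays 0.0
```

The distorted normalized coordinates (x'', y'') would then be mapped to pixels with u = f_x x'' + c_x and v = f_y y'' + c_y, exactly as in the undistorted case.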
In the functions below the coefficients are passed or returned as

(k_1, k_2, p_1, p_2 [, k_3 [, k_4, k_5, k_6]])

vector. That is, if the vector contains four elements, it means that k_3 = 0. The distortion coefficients do not depend on the scene viewed. Thus, they also belong to the intrinsic camera parameters. And they remain the same regardless of the captured image resolution. If, for example, a camera has been calibrated on images of 320 x 240 resolution, absolutely the same distortion coefficients can be used for 640 x 480 images from the same camera while f_x, f_y, c_x, and c_y need to be scaled appropriately.
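For example, the intrinsic rescaling described above can be sketched as follows (the parameter values are illustrative, not from a real calibration):

```python
def scale_camera_matrix(fx, fy, cx, cy, sx, sy):
    """Rescale intrinsics for a new image resolution.

    sx, sy are the width/height scale factors between the calibration
    resolution and the new resolution. The distortion coefficients
    (k1, k2, p1, p2, ...) are deliberately not part of this function:
    they do not change with resolution.
    """
    return fx * sx, fy * sy, cx * sx, cy * sy

# Calibrated at 320x240, reused at 640x480: both axes scale by 2
fx, fy, cx, cy = scale_camera_matrix(400.0, 400.0, 159.5, 119.5,
                                     640 / 320, 480 / 240)
print(fx, fy, cx, cy)  # 800.0 800.0 319.0 239.0
```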
The functions below use the above model to do the following:
- Project 3D points to the image plane given intrinsic and extrinsic parameters.
- Compute extrinsic parameters given intrinsic parameters, a few 3D points, and their projections.
- Estimate intrinsic and extrinsic camera parameters from several views of a known calibration pattern (every view is described by several 3D-2D point correspondences).
- Estimate the relative position and orientation of the stereo camera “heads” and compute the rectification transformation that makes the camera optical axes parallel.
Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern.
C++: double calibrateCamera(InputArrayOfArrays objectPoints, InputArrayOfArrays imagePoints, Size imageSize, InputOutputArray cameraMatrix, InputOutputArray distCoeffs, OutputArrayOfArrays rvecs, OutputArrayOfArrays tvecs, int flags=0, TermCriteria criteria=TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 30, DBL_EPSILON))
Python: cv2.calibrateCamera(objectPoints, imagePoints, imageSize[, cameraMatrix[, distCoeffs[, rvecs[, tvecs[, flags[, criteria]]]]]]) → retval, cameraMatrix, distCoeffs, rvecs, tvecs
C: double cvCalibrateCamera2(const CvMat* object_points, const CvMat* image_points, const CvMat* point_counts, CvSize image_size, CvMat* camera_matrix, CvMat* distortion_coeffs, CvMat* rotation_vectors=NULL, CvMat* translation_vectors=NULL, int flags=0, CvTermCriteria term_crit=cvTermCriteria(CV_TERMCRIT_ITER+CV_TERMCRIT_EPS, 30, DBL_EPSILON))
Python: cv.CalibrateCamera2(objectPoints, imagePoints, pointCounts, imageSize, cameraMatrix, distCoeffs, rvecs, tvecs, flags=0) → None
The function estimates the intrinsic camera parameters and extrinsic parameters for each of the views. The algorithm is based on [Zhang2000] and [BouguetMCT]. The coordinates of 3D object points and their corresponding 2D projections in each view must be specified. That may be achieved by using an object with a known geometry and easily detectable feature points. Such an object is called a calibration rig or calibration pattern, and OpenCV has built-in support for a chessboard as a calibration rig (see findChessboardCorners()
). Currently, initialization of intrinsic parameters (when CV_CALIB_USE_INTRINSIC_GUESS
is not set) is only implemented for planar calibration patterns (where Z-coordinates of the object points must be all zeros). 3D calibration rigs can also be used as long as initial cameraMatrix
is provided.
The algorithm performs the following steps:

1. Compute the initial intrinsic parameters (the option only available for planar calibration patterns) or read them from the input parameters. The distortion coefficients are all set to zeros initially unless some of CV_CALIB_FIX_K? are specified.
2. Estimate the initial camera pose as if the intrinsic parameters have been already known. This is done using solvePnP().
3. Run the global Levenberg-Marquardt optimization algorithm to minimize the reprojection error, that is, the total sum of squared distances between the observed feature points imagePoints and the projected (using the current estimates for camera parameters and the poses) object points objectPoints. See projectPoints() for details.

The function returns the final re-projection error.
Note
If you use a non-square (=non-NxN) grid and findChessboardCorners() for calibration, and calibrateCamera returns bad values (zero distortion coefficients, an image center very far from (w/2-0.5, h/2-0.5), and/or large differences between f_x and f_y (ratios of 10:1 or more)), then you have probably used patternSize=cvSize(rows,cols) instead of using patternSize=cvSize(cols,rows) in findChessboardCorners().
See also
findChessboardCorners(), solvePnP(), initCameraMatrix2D(), stereoCalibrate(), undistort()