Supplementing OpenCV's transform-model estimation functions: the rigid transform (Rigid Transform) and the affine transform (Affine Transform)

OpenCV provides a rich set of functions for estimating geometric transform models and for applying those transforms to images.

The library already ships solvers for two image transform models (getAffineTransform for the affine transform and getPerspectiveTransform for the perspective transform), as well as routines such as the inversion of an affine transform, and it resamples images through functions such as warpAffine and warpPerspective.

These two transforms fall short in many applications, for two reasons:

1) Both are solved only for the exactly determined case: the affine transform uses 3 point pairs to solve for its 6 parameters, and the perspective transform uses 4 point pairs to solve for its 8, so neither function can compute a least-squares solution from a larger set of point pairs.

2) No solver is provided for the rigid transform model, which is frequently needed in practice.


Two additions follow:

1) Least-squares solutions for the affine and perspective transform models

1.1 Least-squares estimation of the affine transform model

// Affine model: dst.x = a1*src.x + b1*src.y + c1
//               dst.y = a2*src.x + b2*src.y + c2
struct TAffineTrans2D
{
	double a1, b1, c1;
	double a2, b2, c2;
};

// Least-squares estimate of a 2D affine transform from pointsNum (>= 3) point
// pairs. Builds the overdetermined system A * x = L, two rows per point pair,
// and solves it with cv::solve using SVD decomposition.
void estimateAffine2D(const cv::Point2f* srcPoints, const cv::Point2f* dstPoints, int pointsNum, TAffineTrans2D& transform)
{
	std::vector<double> vecL(pointsNum * 2);
	std::vector<double> matA(pointsNum * 2 * 6);

	double* Li = vecL.data();
	double* Ai = matA.data();

	for (int i = 0; i < pointsNum; ++i)
	{
		// Row for the x equation: [x, y, 1, 0, 0, 0] . [a1 b1 c1 a2 b2 c2]^T = dst.x
		Ai[0] = srcPoints[i].x;
		Ai[1] = srcPoints[i].y;
		Ai[2] = 1.0;
		Ai[3] = 0.0;
		Ai[4] = 0.0;
		Ai[5] = 0.0;
		Li[0] = dstPoints[i].x;
		Ai += 6;

		// Row for the y equation: [0, 0, 0, x, y, 1] . [a1 b1 c1 a2 b2 c2]^T = dst.y
		Ai[0] = 0.0;
		Ai[1] = 0.0;
		Ai[2] = 0.0;
		Ai[3] = srcPoints[i].x;
		Ai[4] = srcPoints[i].y;
		Ai[5] = 1.0;
		Li[1] = dstPoints[i].y;
		Ai += 6;

		Li += 2;
	}

	cv::Mat cvMatA(pointsNum * 2, 6, CV_64FC1, matA.data());
	cv::Mat cvMatL(pointsNum * 2, 1, CV_64FC1, vecL.data());

	cv::Mat cvMatRes(6, 1, CV_64FC1);
	cv::solve(cvMatA, cvMatL, cvMatRes, cv::DECOMP_SVD);

	const double* datRes = cvMatRes.ptr<double>(0);
	transform.a1 = datRes[0];
	transform.b1 = datRes[1];
	transform.c1 = datRes[2];

	transform.a2 = datRes[3];
	transform.b2 = datRes[4];
	transform.c2 = datRes[5];
}

1.2 Least-squares estimation of the perspective transform model

This can be done with cv::findHomography(), so it is not repeated here.

2) Estimating the rigid transform model

// Rigid model: dst = R * src + T, with R a 2x2 rotation matrix stored
// row-major in matR and the translation in (X, Y).
struct TRigidTrans2D
{
	double matR[4];

	double X;
	double Y;
};


// Least-squares estimate of a 2D rigid transform (rotation + translation)
// from pointsNum (>= 2) point pairs, via the SVD-based Kabsch algorithm:
// center both point sets, form the cross-covariance S = src_c * dst_c^T,
// take its SVD S = U * W * Vt, then R = V * diag(1, det(U * Vt)) * U^T.
void estimateRigid2D(const cv::Point2f* srcPoints, const cv::Point2f* dstPoints, int pointsNum, TRigidTrans2D& transform)
{
	double srcSumX = 0.0, srcSumY = 0.0;
	double dstSumX = 0.0, dstSumY = 0.0;

	for (int i = 0; i < pointsNum; ++i)
	{
		srcSumX += srcPoints[i].x;
		srcSumY += srcPoints[i].y;

		dstSumX += dstPoints[i].x;
		dstSumY += dstPoints[i].y;
	}

	// Centroids of both point sets.
	cv::Point2d centerSrc(srcSumX / pointsNum, srcSumY / pointsNum);
	cv::Point2d centerDst(dstSumX / pointsNum, dstSumY / pointsNum);

	// Centered coordinates, one point per column.
	cv::Mat srcMat(2, pointsNum, CV_64FC1);
	cv::Mat dstMat(2, pointsNum, CV_64FC1);

	for (int i = 0; i < pointsNum; ++i)
	{
		srcMat.at<double>(0, i) = srcPoints[i].x - centerSrc.x;
		srcMat.at<double>(1, i) = srcPoints[i].y - centerSrc.y;

		dstMat.at<double>(0, i) = dstPoints[i].x - centerDst.x;
		dstMat.at<double>(1, i) = dstPoints[i].y - centerDst.y;
	}

	// Cross-covariance matrix and its SVD.
	// Note: cv::SVDecomp returns the right singular vectors already transposed.
	cv::Mat matS = srcMat * dstMat.t();

	cv::Mat matW, matU, matVt;
	cv::SVDecomp(matS, matW, matU, matVt);

	// det is +1 for a proper rotation and -1 when a reflection must be corrected.
	double det = cv::determinant(matU * matVt);

	double datM[] = {1, 0, 0, det};
	cv::Mat matM(2, 2, CV_64FC1, datM);

	cv::Mat matR = matVt.t() * matM * matU.t();

	memcpy(transform.matR, matR.data, sizeof(double) * 4);

	// The translation maps the source centroid onto the destination centroid.
	const double* datR = transform.matR;
	transform.X = centerDst.x - (centerSrc.x * datR[0] + centerSrc.y * datR[1]);
	transform.Y = centerDst.y - (centerSrc.x * datR[2] + centerSrc.y * datR[3]);
}

