Feature Detection, Feature Matching, and Image Stitching with OpenCV 3.1 (Part 4)

In the previous part we learned OpenCV's two built-in functions findHomography and perspectiveTransform, and fixed some version-compatibility problems in the official OpenCV tutorial code. With the resulting homography matrix H, we now have the projective mapping between the two images: we can warp every pixel of one image into the other image's coordinate frame, then overlay the warped image onto the other one to get a first-pass stitching result.
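To make the mapping concrete, here is a minimal sketch (my own addition; warpPoint is a hypothetical helper, not part of the original program) of what the homography does to a single pixel. warpPerspective applies exactly this transform to the whole image (internally it works backwards from each destination pixel and interpolates):

	// Sketch (added for illustration): apply the 3x3 homography H
	// (CV_64F, as returned by findHomography) to one point p,
	// giving its position in the other image's coordinate frame.
	Point2f warpPoint(const Mat& H, const Point2f& p)
	{
		// homogeneous coordinates: [x', y', w']^T = H * [x, y, 1]^T
		double x = H.at<double>(0, 0) * p.x + H.at<double>(0, 1) * p.y + H.at<double>(0, 2);
		double y = H.at<double>(1, 0) * p.x + H.at<double>(1, 1) * p.y + H.at<double>(1, 2);
		double w = H.at<double>(2, 0) * p.x + H.at<double>(2, 1) * p.y + H.at<double>(2, 2);
		return Point2f(float(x / w), float(y / w)); // divide out the projective scale
	}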

First, by default we warp the right-hand image and overlay it onto the left-hand one (the reverse works the same way, but a few code details have to be adjusted, such as the size of the warped output image). The function we use is warpPerspective:

void warpPerspective(InputArray src, OutputArray dst, InputArray M, Size dsize, int flags=INTER_LINEAR, int borderMode=BORDER_CONSTANT, const Scalar& borderValue=Scalar())

The first parameter src is the input image to be warped, and the second, dst, is the warped output; the third is the homography matrix H, and the fourth is the size of the output image; the remaining parameters can be left at their defaults. The fourth parameter deserves attention: since the image being warped is the right one, the width (cols) of the output should be the maximum of the x coordinates of the right image's right-top and right-bottom corners after they have been projected into the left image's frame. For the height we take the left image's height. In code:

warpPerspective(img1, imageTransform1, H, Size(MAX(scene_corners[1].x, scene_corners[2].x), img2.rows));

So before running this line we must first compute the coordinates of the right image's four corners as projected into the left image, i.e. the values that feed the Size parameter. The code (adapted from the OpenCV tutorial source) is:

	// warp the image by direct projection
	std::vector<Point2f> obj_corners(4); // the four corners of the right image (img1)
	obj_corners[0] = Point2f(0, 0);
	obj_corners[1] = Point2f(img1.cols, 0);
	obj_corners[2] = Point2f(img1.cols, img1.rows);
	obj_corners[3] = Point2f(0, img1.rows);
	std::vector<Point2f> scene_corners(4); // the corners projected into the left image's frame

	perspectiveTransform(obj_corners, scene_corners, H); // project the right image's corners into the left image

	cout << "left_top:" << scene_corners[0].x << " " << scene_corners[0].y << endl;
	cout << "right_top:" << scene_corners[1].x << " " << scene_corners[1].y << endl;
	cout << "right_bottom:" << scene_corners[2].x << " " << scene_corners[2].y << endl;
	cout << "left_bottom:" << scene_corners[3].x << " " << scene_corners[3].y << endl;

	Mat imageTransform1, imageTransform2;
	warpPerspective(img1, imageTransform1, H, Size(MAX(scene_corners[1].x, scene_corners[2].x), img2.rows));
	imshow("warped right image", imageTransform1);
	imwrite("trans1.jpg", imageTransform1);

The result of running this:

[Figure 1]
[Figure 2]
[Figure 3]
Next we overlay the left image onto dst, which calls for the function copyTo. This function cost me an entire day… because my habit until now had been to read images directly as grayscale, like this:

Mat img_object = imread("box.png", IMREAD_GRAYSCALE);

If the source file is itself grayscale, copyTo does not complain when the file is read normally; but if the image is loaded with the grayscale flag, the copy fails at runtime with a memory error. I did not dig into the root cause at the time; the most likely culprit is a channel mismatch: IMREAD_GRAYSCALE yields a single-channel CV_8UC1 Mat, while the destination canvas below is CV_8UC3, and copyTo into a region of the canvas needs matching types. I simply dropped the grayscale flag from imread, which keeps the program running stably (a possible workaround is sketched below).
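For completeness, here is a minimal sketch (my own addition, based on the channel-mismatch explanation above, not code from the original program) that keeps grayscale input usable by converting it to three channels before the copy:

	// Sketch (added for illustration): make a grayscale image compatible
	// with a CV_8UC3 canvas before copyTo.
	Mat gray = imread("box.png", IMREAD_GRAYSCALE); // CV_8UC1, single channel
	Mat color;
	cvtColor(gray, color, COLOR_GRAY2BGR);          // CV_8UC1 -> CV_8UC3
	// color can now be copied into a CV_8UC3 canvas via copyTo

With the types consistent, the copy-and-overlay code of this part is: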

	int dst_width = imageTransform1.cols; // width of the stitched image
	int dst_height = img2.rows; // height of the left image
	cout << dst_height << endl << dst_width << endl;
	Mat dst(dst_height, dst_width, CV_8UC3);
	dst.setTo(0);
	imageTransform1.copyTo(dst(Rect(0, 0, imageTransform1.cols, imageTransform1.rows)));
	img2.copyTo(dst(Rect(0, 0, img2.cols, img2.rows)));
	imshow("b_dst", dst);

The result:

[Figure 4]
As the figure above shows, the stitching result is far from ideal, for several reasons:

  1. the two images differ in brightness;
  2. the images carry some angular (perspective) distortion;
  3. there is a visible seam at the junction.

Here we take a weighted-fusion approach, blending every pixel of the overlap region from left to right: each pixel in the overlap is treated as a weighted sum of the left image and the warped right image, with the left image weighted most at the leftmost column and the right image weighted most at the rightmost column. Concretely, the left image's weight alpha starts at 1 on the left boundary of the overlap and falls off linearly to 0 as the pixel moves right, i.e. alpha = 1 - (j - start) / overlapWidth, and each channel is blended as dst = alpha * left + (1 - alpha) * right (start and overlapWidth are start and DoubleWidth in the code). Re-stitching the left image and the warped right image this way gives the result shown below; the whole image now looks very natural.
The code for this part:

	// img2: left image; imageTransform1: warped right image
	// weighted blending across the seam
	double start = MIN(scene_corners[0].x, scene_corners[3].x); // left boundary of the overlap region
	double DoubleWidth = img2.cols - start; // width of the overlap region (it ends at the left image's right edge, img2.cols)
	double alpha = 1; // weight of the left image's pixel
	for (int i = 0; i < dst.rows; i++)
	{
		uchar* theLeft = img2.ptr<uchar>(i); // pointer to the start of row i
		uchar* theRight = imageTransform1.ptr<uchar>(i);
		uchar* d = dst.ptr<uchar>(i);
		for (int j = (int)start; j < img2.cols; j++)
		{
			// where the warped right image is black (no data), keep the left pixel only
			if (theRight[j * 3] == 0 && theRight[j * 3 + 1] == 0 && theRight[j * 3 + 2] == 0)
				alpha = 1;
			else
				alpha = (DoubleWidth - (j - start)) / DoubleWidth; // falls linearly from 1 to 0
			d[j * 3] = theLeft[j * 3] * alpha + theRight[j * 3] * (1 - alpha);
			d[j * 3 + 1] = theLeft[j * 3 + 1] * alpha + theRight[j * 3 + 1] * (1 - alpha);
			d[j * 3 + 2] = theLeft[j * 3 + 2] * alpha + theRight[j * 3 + 2] * (1 - alpha);
		}
	}
	imshow("after seam blending", dst);

The result:

[Figure 5]

Pretty nice, I'd say. This part is finally done!!!
Confetti ✿✿ヽ(°▽°)ノ✿ !!!

Finally, the complete code:

//TODO: warp the image using the known homography matrix,
//compute the projected coordinates of the four corners,
//copy and overlay the images, treat the seam,
//composite the final image
//2019.3.7

#include <iostream>
#include <stdio.h>
#include "opencv2/core.hpp"
#include "opencv2/features2d.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/calib3d.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/xfeatures2d.hpp"
using namespace cv;
using namespace std;
using namespace cv::xfeatures2d;

/* @function main */
int main(int argc, char** argv)
{
	Mat img1 = imread("ll.jpg");
	Mat img2 = imread("rr.jpg");
	imshow("ll", img1);
	imshow("rr", img2);
	if (!img1.data || !img2.data)
	{
		std::cout << " --(!) Error reading images " << std::endl; return -1;
	}
	//-- Step 1: Detect the keypoints using SURF Detector
	int minHessian = 2000;
	Ptr<SURF> detector = SURF::create(minHessian);
	std::vector<KeyPoint> keypoints_object, keypoints_scene;
	detector->detect(img1, keypoints_object);
	detector->detect(img2, keypoints_scene);
	//-- Step 2: Calculate descriptors (feature vectors)
	Ptr<SURF>extractor = SURF::create();
	Mat descriptors_object, descriptors_scene;
	extractor->compute(img1, keypoints_object, descriptors_object);
	extractor->compute(img2, keypoints_scene, descriptors_scene);
	//-- Step 3: Matching descriptor vectors using FLANN matcher
	FlannBasedMatcher matcher;
	std::vector< DMatch > matches;
	matcher.match(descriptors_object, descriptors_scene, matches);
	double max_dist = 0; double min_dist = 100;
	//-- Quick calculation of max and min distances between keypoints
	for (int i = 0; i < descriptors_object.rows; i++)
	{
		double dist = matches[i].distance;
		if (dist < min_dist) min_dist = dist;
		if (dist > max_dist) max_dist = dist;
	}
	//-- Draw only "good" matches (i.e. whose distance is less than 3*min_dist )
	std::vector< DMatch > good_matches;
	for (int i = 0; i < descriptors_object.rows; i++)
	{
		if (matches[i].distance < 3 * min_dist)
		{
			good_matches.push_back(matches[i]);
		}
	}
	Mat img_matches;
	drawMatches(img1, keypoints_object, img2, keypoints_scene,
		good_matches, img_matches, Scalar::all(-1), Scalar::all(-1), std::vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
	//-- Localize the object
	std::vector<Point2f> obj;
	std::vector<Point2f> scene;
	for (int i = 0; i < good_matches.size(); i++)
	{
		//-- Get the keypoints from the good matches
		obj.push_back(keypoints_object[good_matches[i].queryIdx].pt);
		scene.push_back(keypoints_scene[good_matches[i].trainIdx].pt);
	}
	Mat H = findHomography(obj, scene, RANSAC);
	std::cout << "单应性矩阵为:\n" << H << std::endl;

	// warp the image by direct projection
	std::vector<Point2f> obj_corners(4); // the four corners of the right image (img1)
	obj_corners[0] = Point2f(0, 0);
	obj_corners[1] = Point2f(img1.cols, 0);
	obj_corners[2] = Point2f(img1.cols, img1.rows);
	obj_corners[3] = Point2f(0, img1.rows);
	std::vector<Point2f> scene_corners(4); // the corners projected into the left image's frame

	perspectiveTransform(obj_corners, scene_corners, H); // project the right image's corners into the left image

	cout << "left_top:" << scene_corners[0].x << " " << scene_corners[0].y << endl;
	cout << "right_top:" << scene_corners[1].x << " " << scene_corners[1].y << endl;
	cout << "right_bottom:" << scene_corners[2].x << " " << scene_corners[2].y << endl;
	cout << "left_bottom:" << scene_corners[3].x << " " << scene_corners[3].y << endl;

	Mat imageTransform1, imageTransform2;
	warpPerspective(img1, imageTransform1, H, Size(MAX(scene_corners[1].x, scene_corners[2].x), img2.rows));
	imshow("warped right image", imageTransform1);
	imwrite("trans1.jpg", imageTransform1);
	int dst_width = imageTransform1.cols; // width of the stitched image
	int dst_height = img2.rows; // height of the left image
	cout << dst_height << endl << dst_width << endl;
	Mat dst(dst_height, dst_width, CV_8UC3);
	dst.setTo(0);
	imageTransform1.copyTo(dst(Rect(0, 0, imageTransform1.cols, imageTransform1.rows)));
	img2.copyTo(dst(Rect(0, 0, img2.cols, img2.rows)));
	imshow("b_dst", dst);
	//-- Show detected matches
	imshow("Good Matches & Object detection", img_matches);
	// img2: left image; imageTransform1: warped right image
	// weighted blending across the seam
	double start = MIN(scene_corners[0].x, scene_corners[3].x); // left boundary of the overlap region
	double DoubleWidth = img2.cols - start; // width of the overlap region (it ends at the left image's right edge, img2.cols)
	double alpha = 1; // weight of the left image's pixel
	for (int i = 0; i < dst.rows; i++)
	{
		uchar* theLeft = img2.ptr<uchar>(i); // pointer to the start of row i
		uchar* theRight = imageTransform1.ptr<uchar>(i);
		uchar* d = dst.ptr<uchar>(i);
		for (int j = (int)start; j < img2.cols; j++)
		{
			// where the warped right image is black (no data), keep the left pixel only
			if (theRight[j * 3] == 0 && theRight[j * 3 + 1] == 0 && theRight[j * 3 + 2] == 0)
				alpha = 1;
			else
				alpha = (DoubleWidth - (j - start)) / DoubleWidth; // falls linearly from 1 to 0
			d[j * 3] = theLeft[j * 3] * alpha + theRight[j * 3] * (1 - alpha);
			d[j * 3 + 1] = theLeft[j * 3 + 1] * alpha + theRight[j * 3 + 1] * (1 - alpha);
			d[j * 3 + 2] = theLeft[j * 3 + 2] * alpha + theRight[j * 3 + 2] * (1 - alpha);
		}
	}
	imshow("after seam blending", dst);
	waitKey(0);
	return 0;
}

