This article shows how to use OpenCV to extract scale-invariant feature transform (SIFT) features, which localize accurately, from two images, and how to establish feature correspondences between the two images with a matching method.
A simple MFC demo program was built for interactive use; its interface is shown below. First select the paths of the two images to be matched, then choose the feature type to extract (SIFT, SURF, ORB, etc.) and the matching method (Brute Force, Flann Based, etc.). Clicking "Show matched pairs" displays the accurately matched SIFT correspondences between the two images. Clicking "Save matching result" writes the pixel coordinates of the matched SIFT pairs to a file at a chosen path.
(1) Read the image file paths from the MFC EditBrowse Control;
CString selectedPath1, selectedPath2;
GetDlgItemText(IDC_MFCEDITBROWSE_Image1, selectedPath1);
GetDlgItemText(IDC_MFCEDITBROWSE_Image2, selectedPath2);
(2) Convert the CString paths to cv::String for use as input to imread;
USES_CONVERSION;
cv::String cvStr1 = W2A(selectedPath1);
cv::String cvStr2 = W2A(selectedPath2);
Mat img1 = imread(cvStr1);
Mat img2 = imread(cvStr2);
(3) Check that both images were read successfully, and convert color images to grayscale.
if (img1.empty() || img2.empty())
{
MessageBox(_T("Reading error"));
return;
}
if (img1.channels() > 1)
{
cvtColor(img1, img1, COLOR_BGR2GRAY);
}
if (img2.channels() > 1)
{
cvtColor(img2, img2, COLOR_BGR2GRAY);
}
Use the SIFT detector provided by OpenCV to extract each image's keypoints;
Ptr<SIFT> sift = SIFT::create();
vector<KeyPoint> keypoints1, keypoints2;
sift->detect(img1, keypoints1);
sift->detect(img2, keypoints2);
Compute the corresponding descriptors from each image's keypoints.
Mat descriptors1, descriptors2;
sift->compute(img1, keypoints1, descriptors1);
sift->compute(img2, keypoints2, descriptors2);
(1) Using the keypoints and descriptors extracted from the two images, find coarsely matched SIFT pairs;
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("FlannBased");
vector<vector<DMatch> > matchPoints;
vector<Mat> trainDesc(1, descriptors1);
matcher->add(trainDesc);
matcher->train();
matcher->knnMatch(descriptors2, matchPoints, 2);
(2) Apply Lowe's ratio test to reject false matches: a candidate pair is kept only if its nearest-neighbor distance is less than 0.6 times the second-nearest distance;
int trainIndex, queryIndex;
vector<DMatch> candiMatchPoints;
vector<Point2f> candiPts1, candiPts2;
vector<KeyPoint> candiKeyPts1, candiKeyPts2;
for (size_t i = 0; i < matchPoints.size(); i++)
{
// knnMatch can return fewer than 2 neighbors; guard before the ratio test
if (matchPoints[i].size() >= 2 && matchPoints[i][0].distance < 0.6 * matchPoints[i][1].distance)
{
candiMatchPoints.push_back(matchPoints[i][0]);
trainIndex = matchPoints[i][0].trainIdx;
queryIndex = matchPoints[i][0].queryIdx;
candiKeyPts1.push_back(keypoints1[trainIndex]);
candiKeyPts2.push_back(keypoints2[queryIndex]);
candiPts1.push_back(keypoints1[trainIndex].pt);
candiPts2.push_back(keypoints2[queryIndex].pt);
}
}
(3) Use random sample consensus (RANSAC) to further remove mismatched pairs;
vector<uchar> ransacStatus;
// Note: findFundamentalMat with FM_RANSAC requires at least 8 point pairs
Mat funMat = findFundamentalMat(candiPts1, candiPts2, ransacStatus, FM_RANSAC);
(4) Record the final set of correctly matched pairs for display and saving.
int index = 0;
vector<Point2f> matchPts1, matchPts2;
vector<DMatch> goodMatchPoints;
vector<KeyPoint> goodKeyPts1, goodKeyPts2;
for (size_t i = 0; i < candiMatchPoints.size(); i++)
{
if (ransacStatus[i] != 0)
{
goodKeyPts1.push_back(candiKeyPts1[i]);
goodKeyPts2.push_back(candiKeyPts2[i]);
candiMatchPoints[i].queryIdx = index;
candiMatchPoints[i].trainIdx = index;
goodMatchPoints.push_back(candiMatchPoints[i]);
matchPts1.push_back(candiKeyPts1[i].pt);
matchPts2.push_back(candiKeyPts2[i].pt);
index++;
}
}
Display the feature-matching result on the two images after false matches have been removed.
Mat img;
drawMatches(img1, goodKeyPts1, img2, goodKeyPts2, goodMatchPoints, img);
namedWindow("SIFT + Flann Based", WINDOW_NORMAL);
resizeWindow("SIFT + Flann Based", 1600, 600);
imshow("SIFT + Flann Based", img);
waitKey(0);
Two photos taken with a phone from different viewpoints were loaded into the program; the feature extraction and matching results are shown below. The point correspondences between the two views are quite accurate.
(1) Feature correspondences between the two images:
[1] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60: 91-110 (2004).
[2] SIFT principle reference: https://blog.csdn.net/zddblog/article/details/7521424
[3] OpenCV implementation reference: https://blog.csdn.net/hellohake/article/details/104930117