Judging from the raw feature-point matching results, the quality is still quite poor. If we used matches like these directly for image stitching or object tracking, the outcome would certainly be very bad. We therefore need to filter the matches further and keep only the good ones, "separating the wheat from the chaff" as it were. Here we use Lowe's ratio test to select the good matches.
To eliminate keypoints that have no valid correspondence due to occlusion or background clutter, Lowe, the author of SIFT, proposed comparing the nearest-neighbor distance against the second-nearest-neighbor distance: take a SIFT keypoint in one image and find the two keypoints in the other image with the smallest Euclidean distances to it. If the nearest distance divided by the second-nearest distance gives a ratio below some threshold T, accept the pair as a match. The reasoning is that for a false match, the high dimensionality of the feature space means many other false matches lie at similar distances, so its ratio tends to be high. Clearly, lowering the threshold T reduces the number of SIFT matches but makes them more stable, and vice versa.
Lowe recommends a ratio threshold of 0.8, but the author's experiments on many image pairs with arbitrary scale, rotation, and brightness changes show that values between 0.4 and 0.6 work best: below 0.4 very few matches survive, while above 0.6 many false matches remain. The suggested guidelines for choosing ratio are:
ratio = 0.4: for matching that requires high accuracy;
ratio = 0.6: for matching that requires a larger number of match points;
ratio = 0.5: for the general case.
SURF feature-matching demo with Lowe's ratio filtering:
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/nonfree/nonfree.hpp"
#include "opencv2/legacy/legacy.hpp"
#include <iostream>
using namespace cv;
using namespace std;
int main()
{
Mat image01 = imread("g2.jpg", 1);
Mat image02 = imread("g4.jpg", 1);
imshow("p2", image01);
imshow("p1", image02);
// Convert to grayscale (imread loads BGR, so use CV_BGR2GRAY)
Mat image1, image2;
cvtColor(image01, image1, CV_BGR2GRAY);
cvtColor(image02, image2, CV_BGR2GRAY);
// Detect feature points
SurfFeatureDetector surfDetector(2000); // Hessian threshold: a larger value yields fewer but more reliable keypoints
vector<KeyPoint> keyPoint1, keyPoint2;
surfDetector.detect(image1, keyPoint1);
surfDetector.detect(image2, keyPoint2);
// Compute descriptors for the keypoints, in preparation for matching below
SurfDescriptorExtractor SurfDescriptor;
Mat imageDesc1, imageDesc2;
SurfDescriptor.compute(image1, keyPoint1, imageDesc1);
SurfDescriptor.compute(image2, keyPoint2, imageDesc2);
FlannBasedMatcher matcher;
vector<vector<DMatch> > matchePoints;
vector<DMatch> GoodMatchePoints;
vector<Mat> train_desc(1, imageDesc1);
matcher.add(train_desc);
matcher.train();
matcher.knnMatch(imageDesc2, matchePoints, 2);
cout << "total match points: " << matchePoints.size() << endl;
// Lowe's ratio test: keep only the good matches
for (int i = 0; i < matchePoints.size(); i++)
{
if (matchePoints[i][0].distance < 0.6 * matchePoints[i][1].distance)
{
GoodMatchePoints.push_back(matchePoints[i][0]);
}
}
Mat first_match;
drawMatches(image02, keyPoint2, image01, keyPoint1, GoodMatchePoints, first_match);
imshow("first_match", first_match);
waitKey();
return 0;
}
Applying the Lowe filtering to our robot-following code:
// Match with FLANN
FlannBasedMatcher matcher;
//vector<DMatch> matches;
//matcher.match(image1, image2, matches); // matching done
//min_dist, max_dist: {matches[0], matches[end - 1]}
vector<vector<DMatch> > matches;
vector<DMatch> GoodMatches;
vector<Mat> train_desc(1, image1);
matcher.add(train_desc);
matcher.train();
matcher.knnMatch(image2, matches, 2); // coarse matching
//cout << "total match points: " << matchePoints.size() << endl;
// Lowe's ratio test: filter the coarse matches down to good ones
/*
Of the two nearest keypoints, accept the pair as a match only if the nearest
distance divided by the second-nearest distance gives a ratio below some
threshold T. For a false match, the high dimensionality of the feature space
means many other false matches lie at similar distances, so its ratio tends
to be high. Lowering T yields fewer but more stable matches, and vice versa.
*/
for (int i = 0; i < matches.size(); i++)
{
if (matches[i][0].distance < 0.6 * matches[i][1].distance)
{
GoodMatches.push_back(matches[i][0]);
}
}
sort(GoodMatches.begin(), GoodMatches.end()); // sort by distance (DMatch::operator< compares distance)
vector<DMatch> good_matches;
int ptsPairs = std::min(50, (int)(matches.size() * 0.15));
for (int i = 0; i < ptsPairs; i++) // keep the best ptsPairs matches
{
good_matches.push_back(GoodMatches[i]);
}
double averagematch_dist = .0;
for(int i = 0; i < ptsPairs; i++)
{
averagematch_dist = averagematch_dist + good_matches[i].distance / ptsPairs;
}
//averagematch = 0.5 * (good_matches[0].distance + good_matches[ptsPairs].distance);
if (ptsPairs == 0) // guard: no matches retained, good_matches[0] would be out of range
    continue;
// Reject this frame (inside the tracking loop) if the best or worst retained
// match is too far in absolute terms, or deviates too much from the average.
if (!(good_matches[0].distance < 1.0 &&
    good_matches[ptsPairs - 1].distance < 1.0 &&
    (good_matches[0].distance - averagematch_dist < 0.5) &&
    (good_matches[ptsPairs - 1].distance - averagematch_dist < 0.5)))
    continue;
The robot here is currently being used by the classmates working on SLAM for UWB following and navigation, so I have not been able to test this yet. I don't know how well it performs in practice, or whether it runs correctly at all.