In this section we show how to use the FlannBasedMatcher interface to perform fast, efficient matching with FLANN (the Fast Library for Approximate Nearest Neighbors).
In the OpenCV source you can find the FlannBasedMatcher class:
As you can see, FlannBasedMatcher also inherits from DescriptorMatcher, and matching is again done mainly through the match method inherited from DescriptorMatcher.
DescriptorMatcher::match() finds the best match for each descriptor in the query set.
Its two overloads:
void DescriptorMatcher::match(const Mat& queryDescriptors, const Mat& trainDescriptors, vector<DMatch>& matches, const Mat& mask=Mat())
void DescriptorMatcher::match(const Mat& queryDescriptors, vector<DMatch>& matches, const vector<Mat>& masks=vector<Mat>())
The third parameter is a set of masks: masks[i] specifies which matches are permitted between the query descriptors and the training descriptors of the i-th image, trainDescCollection[i].
Example code:
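Before the full program, it may help to see what match() computes. The sketch below is plain C++ with no OpenCV; SimpleMatch and bruteForceMatch are illustrative names, not OpenCV API. It pairs each query descriptor with its single nearest training descriptor, which is exactly the result shape match() returns; FLANN merely replaces the exhaustive inner loop with an approximate kd-tree search.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

// Illustrative stand-in for cv::DMatch: query index, train index, distance.
struct SimpleMatch {
    std::size_t queryIdx;
    std::size_t trainIdx;
    float distance;
};

// Squared Euclidean distance between two descriptors of equal length.
static float sqDist(const std::vector<float>& a, const std::vector<float>& b) {
    float s = 0.f;
    for (std::size_t i = 0; i < a.size(); ++i) {
        float d = a[i] - b[i];
        s += d * d;
    }
    return s;
}

// For each query descriptor, keep the single closest training descriptor,
// mirroring what DescriptorMatcher::match() returns (one DMatch per query).
std::vector<SimpleMatch> bruteForceMatch(
        const std::vector<std::vector<float>>& query,
        const std::vector<std::vector<float>>& train) {
    std::vector<SimpleMatch> matches;
    for (std::size_t q = 0; q < query.size(); ++q) {
        std::size_t best = 0;
        float bestDist = std::numeric_limits<float>::max();
        for (std::size_t t = 0; t < train.size(); ++t) {
            float d = sqDist(query[q], train[t]);
            if (d < bestDist) { bestDist = d; best = t; }
        }
        matches.push_back({q, best, std::sqrt(bestDist)});
    }
    return matches;
}
```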
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <cstdio>
using namespace std;
using namespace cv;
int main(int argc, char **argv)
{
Mat image1 = imread("../1.png", 1);
Mat image2 = imread("../2.png", 1);
if(!image1.data || !image2.data) {
printf("Bad image path!\n");
return -1;
}
int numFeatures = 300; // SIFT::create's first argument is nfeatures, the number of best features to retain
Ptr<SIFT> detector = SIFT::create(numFeatures);
std::vector<KeyPoint> keypoints1, keypoints2;
detector->detect(image1, keypoints1);
detector->detect(image2, keypoints2);
Ptr<SIFT> extractor = SIFT::create();
Mat descriptors1, descriptors2;
extractor->compute(image1, keypoints1, descriptors1);
extractor->compute(image2, keypoints2, descriptors2);
//Match the descriptor vectors with the FLANN-based matcher
Ptr<FlannBasedMatcher> matcher = FlannBasedMatcher::create();
std::vector<DMatch> matches;
matcher->match(descriptors1, descriptors2, matches);
double max_dist = 0;
double min_dist = 100;
//Quickly find the max and min distances between matched keypoints
for(int i = 0; i < descriptors1.rows; i++)
{
double dist = matches[i].distance;
if(dist < min_dist) min_dist = dist ;
if(dist > max_dist) max_dist = dist ;
}
//Print the distance range
printf("Max distance: %f\n", max_dist);
printf("Min distance: %f\n", min_dist);
//Keep only the matches whose distance is below 2 * min_dist
std::vector<DMatch> good_matches;
for(int i = 0; i < descriptors1.rows; i++)
{
if(matches[i].distance < 2 * min_dist)
{
good_matches.push_back(matches[i]);
}
}
//Draw the matches that passed the filter
Mat img_matches;
drawMatches(imgage1, keypoints1, imgage2, keypoints2, good_matches, img_matches, Scalar::all(-1), Scalar::all(-1), vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
//Print the surviving match pairs
for(size_t i = 0; i < good_matches.size(); i++)
{
printf("Good match [%zu]: keypoint 1: %d ---- keypoint 2: %d\n", i, good_matches[i].queryIdx, good_matches[i].trainIdx);
}
//Show the result
imshow("Matches", img_matches);
// waitKey(0);
while (char (waitKey(1)) != 'q'){}
return 0;
}
CMakeLists.txt:
cmake_minimum_required(VERSION 3.6)
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_BUILD_TYPE "Debug")
find_package(OpenCV REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})
add_executable(open 1.cpp)
target_link_libraries(open ${OpenCV_LIBS})
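The 2 * min_dist filter in the program above can be isolated as a small helper. This is a sketch on plain floats with no OpenCV (filterByMinDist is an illustrative name):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Keep only distances below 2 * (the smallest distance seen). This mirrors
// the good_matches loop in the example; the factor 2 is an empirical choice,
// not a fixed rule.
std::vector<float> filterByMinDist(const std::vector<float>& distances) {
    if (distances.empty()) return {};
    float minDist = *std::min_element(distances.begin(), distances.end());
    std::vector<float> good;
    for (float d : distances)
        if (d < 2.f * minDist)
            good.push_back(d);
    return good;
}
```

One caveat: if the best match is perfect (min_dist == 0), the strict comparison keeps nothing, which is why OpenCV's own tutorials often use something like max(2 * min_dist, small_floor) instead.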
Extract the keypoints and descriptors with SIFT, and match with FLANN.
Implementation:
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp" // cvtColor
#include <iostream>
using namespace std;
using namespace cv;
int main()
{
//Change the console font color (Windows only)
// system("color 6F");
//[1] Load the image, show it, and convert it to grayscale
Mat trainImage = imread("../1.png"), trainImage_gray;
imshow("Original", trainImage);
cvtColor(trainImage, trainImage_gray, COLOR_BGR2GRAY); // 6 is COLOR_BGR2GRAY's numeric value
//[2] Detect SIFT keypoints and compute the training image's descriptors
vector<KeyPoint> train_keypoints;
Mat trainDescriptor;
int numFeatures = 80; // SIFT::create's first argument is nfeatures, the number of best features to retain
Ptr<SIFT> detector = SIFT::create(numFeatures);
detector->detect(trainImage_gray, train_keypoints);
Ptr<SIFT> extractor = SIFT::create();
extractor->compute(trainImage_gray, train_keypoints, trainDescriptor);
//[3] Create the FLANN-based descriptor matcher
FlannBasedMatcher matcher;
vector<Mat> train_desc_collection(1, trainDescriptor); // a one-element training collection holding the descriptor matrix
matcher.add(train_desc_collection);
matcher.train();
//[4] Open the camera
VideoCapture cap(0);
unsigned int frameCount = 0; // frame counter
//[5] Loop until 'q' is pressed
while (char (waitKey(1)) != 'q')
{
//<1> Per-frame setup
int64 time0 = getTickCount();
Mat captureImage, captureImage_gray;
cap >> captureImage; // grab a frame into captureImage
if(captureImage.empty()) {
continue;
}
//<2> Convert the frame to grayscale
cvtColor(captureImage, captureImage_gray, COLOR_BGR2GRAY);
//<3> Detect SIFT keypoints and compute the test image's descriptors
vector<KeyPoint> test_keypoint;
Mat testDescriptor;
detector->detect(captureImage_gray, test_keypoint);
extractor->compute(captureImage_gray, test_keypoint, testDescriptor);
//<4> Match the training and test descriptors
vector<vector<DMatch>> matches;
matcher.knnMatch(testDescriptor, matches, 2);
//<5> Keep good matches via Lowe's ratio test
vector<DMatch> goodMatches;
for(unsigned int i = 0; i < matches.size(); i++)
{
if(matches[i][0].distance < 0.6 * matches[i][1].distance)
goodMatches.push_back(matches[i][0]);
}
//<6> Draw the matches and show the window
Mat dstImage;
drawMatches(captureImage, test_keypoint, trainImage, train_keypoints, goodMatches, dstImage);
imshow("Matches", dstImage);
//<7> Print the frame rate
cout << "Current frame rate: " << getTickFrequency() / (getTickCount() - time0) << endl;
}
return 0;
}
CMakeLists.txt:
cmake_minimum_required(VERSION 3.6)
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_BUILD_TYPE "Debug")
find_package(OpenCV REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})
add_executable(open 1.cpp)
target_link_libraries(open ${OpenCV_LIBS})
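The ratio test used in step <5> above can be stated independently of OpenCV. For each query descriptor, knnMatch(..., k = 2) yields the distances to the two nearest training neighbors, and the match is kept only when the best is significantly closer than the second best. A sketch (NeighborPair and ratioTest are illustrative names; the 0.6 threshold matches the code above, while Lowe's original paper suggests around 0.8):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// A (bestDist, secondBestDist) pair per query descriptor, as produced by
// knnMatch with k = 2. Names are illustrative, not OpenCV API.
using NeighborPair = std::pair<float, float>;

// Lowe's ratio test: accept a match only if the nearest neighbor is
// significantly closer than the second nearest.
std::vector<bool> ratioTest(const std::vector<NeighborPair>& pairs,
                            float ratio = 0.6f) {
    std::vector<bool> keep;
    for (const NeighborPair& p : pairs)
        keep.push_back(p.first < ratio * p.second);
    return keep;
}
```

The intuition behind the design: a distinctive keypoint has one clearly best match, so its nearest neighbor is much closer than the runner-up; ambiguous keypoints (repeated texture, background clutter) have two near-equal neighbors and get discarded.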
Problems encountered while building and running:
The detector and extractor must be used through smart pointers (Ptr<SIFT>) rather than constructed directly.
Fix: add the missing header that declares cvtColor. Without it, only the legacy cvCvtColor() is available, but that function takes a CvMat, while the variables here are C++ cv::Mat objects, so cvtColor() must be used.
Replacing the old CV_BGR2GRAY constant with its numeric value 6 also works, but the cleaner fix is the new-style enum cv::COLOR_BGR2GRAY.
Extra: camera-related commands.
The camorama and cheese tools display the video captured by a webcam (cheese -d <device> selects the device). lsusb lists the models of attached USB cameras, and ls /dev/video* shows which camera device nodes are available.
Comparison | SIFT | SURF |
---|---|---|
Scale space | Convolve the image with DoG at different scales | Convolve the original image with box filters of different sizes |
Keypoint detection | Non-maximum suppression first, then discard low-contrast points, then remove edge responses via the Hessian matrix | Use the Hessian to find candidate points first, then apply non-maximum suppression |
Orientation | Histogram of gradient orientations over a square region; the peak direction gives the orientation (multiple orientations possible) | Over a circular region, sum the x and y Haar wavelet responses within sliding sectors; pick the sector direction with the largest magnitude |
Descriptor | A 16×16 patch split into 4×4 cells; gradient orientation and magnitude at each sample point form an 8-bin histogram per cell | A 20×20 patch split into 4×4 subregions with 5×5 sample points each; record the sums of dx, dy, \|dx\|, and \|dy\| per subregion |
In theory, SURF is about three times as fast as SIFT.
On top of FLANN feature matching, a homography can be used to locate a known object: the findHomography function computes the transform from the matched keypoints, and the perspectiveTransform function then maps a set of points through it.
$$ s_i \begin{bmatrix} x'_i \\ y'_i \\ 1 \end{bmatrix} \sim H \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix} $$
findHomography finds and returns the perspective transform H between the source and destination point sets:
Mat findHomography(InputArray srcPoints, InputArray dstPoints, int method=0, double ransacReprojThreshold=3, OutputArray mask=noArray())
Flag | Meaning |
---|---|
0 | Regular method using all points |
CV_RANSAC | RANSAC-based robust method |
CV_LMEDS | Least-median-of-squares robust method |
(In OpenCV 3/4 these flags are spelled cv::RANSAC and cv::LMEDS.)
perspectiveTransform applies a perspective matrix transform to a vector of points:
void perspectiveTransform(InputArray src, OutputArray dst, InputArray m)
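Per point, perspectiveTransform computes the homogeneous matrix product followed by a perspective divide. A minimal sketch without OpenCV (applyHomography is an illustrative name; H is stored row-major):

```cpp
#include <cassert>
#include <cmath>

// Apply a 3x3 homography H (row-major) to point (x, y), as
// perspectiveTransform does per point: lift to homogeneous coordinates,
// multiply by H, then divide by the third component.
void applyHomography(const double H[9], double x, double y,
                     double& xOut, double& yOut) {
    double xh = H[0] * x + H[1] * y + H[2];
    double yh = H[3] * x + H[4] * y + H[5];
    double w  = H[6] * x + H[7] * y + H[8];
    xOut = xh / w;  // perspective divide
    yOut = yh / w;
}
```

This also explains the ransacReprojThreshold parameter of findHomography: a point pair is an inlier when the distance between the mapped source point and the destination point is below that threshold.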
Example code:
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/calib3d/calib3d.hpp" // findHomography
#include "opencv2/opencv.hpp" // pulls in imgproc.hpp, needed by line()
#include <iostream>
using namespace std;
using namespace cv;
int main()
{
//[1] Load the images
Mat srcImage1 = imread("../1.jpg", 1);
Mat srcImage2 = imread("../2.jpg", 1);
if(!srcImage1.data || !srcImage2.data) {
printf("Failed to read the images!\n");
return -1;
}
//[2] Detect keypoints with SIFT
int numFeatures = 80; // SIFT::create's first argument is nfeatures, the number of best features to retain
Ptr<SIFT> detector = SIFT::create(numFeatures); // the feature detector
std::vector<KeyPoint> keypoints1, keypoints2;
//[3] Call detect to find the SIFT keypoints and store them in vectors
detector->detect(srcImage1, keypoints1);
detector->detect(srcImage2, keypoints2);
//[4] Compute the descriptors (feature vectors)
Ptr<SIFT> extractor = SIFT::create();
Mat descriptors1, descriptors2;
extractor->compute(srcImage1, keypoints1, descriptors1);
extractor->compute(srcImage2, keypoints2, descriptors2);
//[5] Match with a FLANN-based matcher
// (BruteForce-Hamming only suits binary descriptors such as ORB, not float SIFT descriptors)
// Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming");
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("FlannBased");
std::vector<DMatch> matches;
//Match the descriptors of the two images
matcher->match(descriptors1, descriptors2, matches);
double max_dist = 0, min_dist = 100;
//[6] Compute the max and min distances between matched keypoints
for(int i = 0; i < descriptors1.rows; i++)
{
double dist = matches[i].distance;
if(dist < min_dist) min_dist = dist;
if(dist> max_dist) max_dist = dist;
}
cout << "max dist: " << max_dist << endl;
cout << "min dist: " << min_dist << endl;
//[7] Keep the match pairs whose distance is below 3 * min_dist
std::vector<DMatch> good_matches;
for(int i = 0; i < descriptors1.rows; i++)
{
if(matches[i].distance < 3*min_dist)
{
good_matches.push_back(matches[i]);
}
}
//[8] Draw the matched keypoints
Mat imageMatches;
drawMatches(srcImage1, keypoints1, srcImage2, keypoints2, good_matches, imageMatches, Scalar::all(-1), Scalar::all(-1), vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
//Point sets for the homography
vector<Point2f> obj;
vector<Point2f> scene;
//Collect the keypoint coordinates from the good matches
for(unsigned int i = 0; i < good_matches.size(); i++)
{
obj.push_back(keypoints1[good_matches[i].queryIdx].pt);
scene.push_back(keypoints2[good_matches[i].trainIdx].pt);
}
Mat H = findHomography(obj, scene, RANSAC); // compute the perspective transform (RANSAC == 8)
//Corners of the source image, to be mapped into the scene
vector<Point2f> obj_corners(4);
obj_corners[0] = cv::Point(0, 0);
obj_corners[1] = cv::Point(srcImage1.cols, 0);
obj_corners[2] = cv::Point(srcImage1.cols, srcImage1.rows);
obj_corners[3] = cv::Point(0, srcImage1.rows);
vector<Point2f> scene_corners(4);
//Perspective transform
perspectiveTransform(obj_corners, scene_corners, H);
//Draw lines between the mapped corners (shifted right by srcImage1.cols, past the first image in the side-by-side canvas)
line(imageMatches, scene_corners[0] + Point2f(static_cast<float>(srcImage1.cols), 0), scene_corners[1] + Point2f(static_cast<float>(srcImage1.cols), 0), Scalar(255, 0, 123), 4);
line(imageMatches, scene_corners[1] + Point2f(static_cast<float>(srcImage1.cols), 0), scene_corners[2] + Point2f(static_cast<float>(srcImage1.cols), 0), Scalar(255, 0, 123), 4);
line(imageMatches, scene_corners[2] + Point2f(static_cast<float>(srcImage1.cols), 0), scene_corners[3] + Point2f(static_cast<float>(srcImage1.cols), 0), Scalar(255, 0, 123), 4);
line(imageMatches, scene_corners[3] + Point2f(static_cast<float>(srcImage1.cols), 0), scene_corners[0] + Point2f(static_cast<float>(srcImage1.cols), 0), Scalar(255, 0, 123), 4);
//[9] Show the result
imshow("Matches", imageMatches);
//waitKey(0);
while(char(waitKey(1)) != 'q') {}
return 0;
}