OpenCV image feature point extraction and matching (Part 1)
The usual workflow for feature point extraction and matching in OpenCV is: extract feature points, generate descriptors for them, and then match the descriptors. OpenCV provides three classes covering these steps, one each for feature extraction, descriptor generation, and descriptor matching: FeatureDetector, DescriptorExtractor, and DescriptorMatcher. Subclasses derived from these three base classes implement the various feature extraction, description, and matching algorithms.
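As a quick illustration of this three-class pipeline, here is a minimal sketch that goes from detection to matching using the name-based factory functions. ORB is used here only because it does not depend on the nonfree module, and the image file names are placeholders, not files from this article.

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>
using namespace cv;

int main()
{
    // Placeholder file names; load as grayscale
    Mat img1 = imread("img1.png", CV_LOAD_IMAGE_GRAYSCALE);
    Mat img2 = imread("img2.png", CV_LOAD_IMAGE_GRAYSCALE);

    // Step 1: detect keypoints
    Ptr<FeatureDetector> detector = FeatureDetector::create("ORB");
    std::vector<KeyPoint> kpts1, kpts2;
    detector->detect(img1, kpts1);
    detector->detect(img2, kpts2);

    // Step 2: compute descriptors for the detected keypoints
    Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("ORB");
    Mat desc1, desc2;
    extractor->compute(img1, kpts1, desc1);
    extractor->compute(img2, kpts2, desc2);

    // Step 3: match descriptors (Hamming distance for binary ORB descriptors)
    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming");
    std::vector<DMatch> matches;
    matcher->match(desc1, desc2, matches);
    return 0;
}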
First, the feature extraction base class FeatureDetector, which implements 2D image feature detection. It derives from the Algorithm class, which appears to be the common wrapper for a large number of OpenCV algorithms. The (abridged) declaration of FeatureDetector is as follows:
class CV_EXPORTS FeatureDetector
{
public:
    virtual ~FeatureDetector();

    void detect( const Mat& image, vector<KeyPoint>& keypoints,
                 const Mat& mask=Mat() ) const;
    void detect( const vector<Mat>& images,
                 vector<vector<KeyPoint> >& keypoints,
                 const vector<Mat>& masks=vector<Mat>() ) const;

    virtual void read(const FileNode&);
    virtual void write(FileStorage&) const;

    static Ptr<FeatureDetector> create( const string& detectorType );

protected:
    ...
};

By defining a FeatureDetector object through the static member function create, a detector can be selected by name, so that many detection methods are available through one interface. The factory function is:
Ptr<FeatureDetector> FeatureDetector::create(const string& detectorType)

The supported detector names mainly include the following (a short usage sketch follows the list):
"FAST"—FastFeatureDetector;
"STAR"—StarFeatureDetector;
"SIFT"— SiftFeatureDetector;
"SURF"—SurfFeatureDetector;
"ORB" — OrbFeatureDetecotr;
"MSER"—MserFeatureDetector;
"GFTT"— GoodFeatureDetector;
"HARRIS"—GoodFeatureToTrackDetector;
"Dense" —DenseFeatureDetector;
"SimpleBlob"—SimpleBlobDectector;
Composite names are also supported: the name of an adapter ("Grid"—GridAdaptedFeatureDetector, "Pyramid"—PyramidAdaptedFeatureDetector) concatenated with one of the detector names above, for example "GridFAST" or "PyramidSTAR".
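For example, the following sketch (the file name is a placeholder) shows how the string passed to create selects the detector, including the adapter combinations:

// The string decides which algorithm is built.
cv::Mat image = cv::imread("scene.png", 0);   // hypothetical file, 0 = load as grayscale

cv::Ptr<cv::FeatureDetector> fastDet  = cv::FeatureDetector::create("FAST");
cv::Ptr<cv::FeatureDetector> gridFast = cv::FeatureDetector::create("GridFAST");    // GridAdaptedFeatureDetector around FAST
cv::Ptr<cv::FeatureDetector> pyrStar  = cv::FeatureDetector::create("PyramidSTAR"); // PyramidAdaptedFeatureDetector around STAR

std::vector<cv::KeyPoint> keypoints;
gridFast->detect(image, keypoints);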
FeatureDetector also has subclasses corresponding to the individual detection algorithms: FastFeatureDetector, MserFeatureDetector, StarFeatureDetector, SiftFeatureDetector, SurfFeatureDetector, OrbFeatureDetector, SimpleBlobDetector, and so on.
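Alternatively, a concrete subclass can be constructed directly, which exposes its algorithm-specific parameters instead of the defaults used by create. A small sketch, reusing the image loaded above:

// Constructing the subclass directly exposes the algorithm-specific parameters
// that the name-based factory leaves at their defaults.
cv::FastFeatureDetector fastDetector(40, true);   // threshold = 40, non-max suppression on
std::vector<cv::KeyPoint> fastKeypoints;
fastDetector.detect(image, fastKeypoints);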
Header files: in OpenCV 2.4.9, to extract SIFT or SURF features you should include the header <opencv2/nonfree/features2d.hpp>. This header declares two classes, SIFT and SURF. The (abridged) source is:
class CV_EXPORTS_W SIFT : public Feature2D
{
public:
    CV_WRAP explicit SIFT( int nfeatures=0, int nOctaveLayers=3,
                           double contrastThreshold=0.04, double edgeThreshold=10,
                           double sigma=1.6);

    //! returns the descriptor size in floats (128)
    CV_WRAP int descriptorSize() const;
    //! returns the descriptor type
    CV_WRAP int descriptorType() const;

    //! finds the keypoints using SIFT algorithm
    void operator()(InputArray img, InputArray mask,
                    vector<KeyPoint>& keypoints) const;
    //! finds the keypoints and computes descriptors for them using SIFT algorithm.
    //! Optionally it can compute descriptors for the user-provided keypoints
    void operator()(InputArray img, InputArray mask,
                    vector<KeyPoint>& keypoints,
                    OutputArray descriptors,
                    bool useProvidedKeypoints=false) const;

    AlgorithmInfo* info() const;

    void buildGaussianPyramid( const Mat& base, vector<Mat>& pyr, int nOctaves ) const;
    void buildDoGPyramid( const vector<Mat>& pyr, vector<Mat>& dogpyr ) const;
    void findScaleSpaceExtrema( const vector<Mat>& gauss_pyr, const vector<Mat>& dog_pyr,
                                vector<KeyPoint>& keypoints ) const;

protected:
    ...
};

typedef SIFT SiftFeatureDetector;
typedef SIFT SiftDescriptorExtractor;

As the declaration shows, SIFT derives from Feature2D, and Feature2D in turn derives from both FeatureDetector and DescriptorExtractor. An object of a Feature2D subclass can therefore call the FeatureDetector member functions to extract feature points and the DescriptorExtractor member functions to generate descriptors. The two typedefs at the end also show that an object declared through the SIFT class plays both roles: feature detection and descriptor generation.
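To make this concrete, here is a minimal sketch (the file name is just a placeholder) of a single SIFT object used both through operator() and through the inherited base-class interface:

cv::Mat img = cv::imread("box.png", 0);   // 0 = load as grayscale
cv::SIFT sift;                            // default parameters

std::vector<cv::KeyPoint> keypoints;
cv::Mat descriptors;

// One call detects the keypoints and computes their descriptors
sift(img, cv::Mat(), keypoints, descriptors);

// The same object also works through the inherited base-class interface
sift.detect(img, keypoints);
sift.compute(img, keypoints, descriptors);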
Note: probably because of OpenCV version differences, some online references (including the user guide shipped with OpenCV) say that to extract SIFT and SURF features you must include <opencv2/nonfree/nonfree.hpp> and call initModule_nonfree(); at the start of the program. In my experiments, even with <opencv2/nonfree/nonfree.hpp> included, VC++ still could not resolve initModule_nonfree(). Looking at the source shows that <opencv2/nonfree/nonfree.hpp> simply includes <opencv2/nonfree/features2d.hpp> and additionally declares the function bool initModule_nonfree():
#include "opencv2/nonfree/features2d.hpp" namespace cv { CV_EXPORTS_W bool initModule_nonfree(); }我在opencv2.4.9中直接添加头文件 <opencv2/nonfree/feature2d.hpp>即可提取并检测sift和surf特征点。
The SURF class is used in the same way as SIFT. The ORB detector can likewise be used by declaring an object directly, because there is also an ORB class derived from Feature2D. The other detectors do not seem to offer this, but each has a corresponding subclass that performs detection on its own. A short sketch of SURF and ORB used this way follows.
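A hedged sketch of that usage, reusing the grayscale img loaded in the SIFT sketch above:

// SURF: same pattern as SIFT (also in the nonfree module)
cv::SURF surf(400.0);                     // hessianThreshold = 400, remaining parameters default
std::vector<cv::KeyPoint> surfKeypoints;
cv::Mat surfDescriptors;
surf(img, cv::Mat(), surfKeypoints, surfDescriptors);

// ORB also derives from Feature2D, so the same call pattern works without nonfree
cv::ORB orb;                              // default: up to 500 features
std::vector<cv::KeyPoint> orbKeypoints;
cv::Mat orbDescriptors;
orb(img, cv::Mat(), orbKeypoints, orbDescriptors);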
Below is a program that performs SIFT feature detection and matching on two images:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/nonfree/features2d.hpp>
#include <iostream>

using namespace std;
using namespace cv;

int main(int argc, char* argv[])
{
    // Matcher for the float-valued SIFT descriptors: brute force with L2 distance
    Ptr<DescriptorMatcher> siftMatcher = DescriptorMatcher::create("BruteForce");
    SiftFeatureDetector siftDetector;

    Mat img1 = imread("box.png");
    Mat img2 = imread("box_in_scene.png");
    if (img1.empty() || img2.empty())
    {
        cout << "Could not load the input images." << endl;
        return -1;
    }

    // Detect SIFT keypoints in both images
    vector<KeyPoint> keypoints1, keypoints2;
    siftDetector.detect(img1, keypoints1);
    siftDetector.detect(img2, keypoints2);
    cout << "Number of detected keypoints img1:" << keypoints1.size()
         << " points. --- img2:" << keypoints2.size() << " points." << endl;

    // Compute a 128-dimensional SIFT descriptor for each keypoint
    SiftDescriptorExtractor siftExtractor;
    Mat descriptor1, descriptor2;
    siftExtractor.compute(img1, keypoints1, descriptor1);
    siftExtractor.compute(img2, keypoints2, descriptor2);
    cout << "Number of Descriptors1:" << descriptor1.rows << endl;
    cout << "Number of Descriptors2:" << descriptor2.rows << endl;
    cout << "Dimension of SIFT Descriptors:" << descriptor1.cols << endl;

    // Visualize the keypoints
    Mat imgkey1, imgkey2;
    drawKeypoints(img1, keypoints1, imgkey1, Scalar::all(-1));
    drawKeypoints(img2, keypoints2, imgkey2, Scalar::all(-1));
    imshow("box", imgkey1);
    imshow("box_in_scene", imgkey2);

    // Match the descriptors and draw the result
    vector<DMatch> matches;
    siftMatcher->match(descriptor1, descriptor2, matches, Mat());
    Mat imgmatches;
    drawMatches(img1, keypoints1, img2, keypoints2, matches, imgmatches,
                Scalar::all(-1), Scalar::all(-1), vector<char>(),
                DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
    imshow("Match Results:", imgmatches);

    waitKey(0);
    return 0;
}
Matching result:
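The raw nearest-neighbour matches drawn above usually contain a fair number of outliers. A common refinement, sketched below under the assumption that the variables from the program above are still in scope, is to keep only matches whose distance is close to the smallest one before calling drawMatches (the factor 3 is just an illustrative threshold):

// Find the smallest match distance
double minDist = 1e9;
for (size_t i = 0; i < matches.size(); ++i)
    if (matches[i].distance < minDist)
        minDist = matches[i].distance;

// Keep only matches whose distance is at most 3x the best one
vector<DMatch> goodMatches;
for (size_t i = 0; i < matches.size(); ++i)
    if (matches[i].distance <= 3 * minDist)
        goodMatches.push_back(matches[i]);

Mat imgGoodMatches;
drawMatches(img1, keypoints1, img2, keypoints2, goodMatches, imgGoodMatches,
            Scalar::all(-1), Scalar::all(-1), vector<char>(),
            DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
imshow("Filtered Matches", imgGoodMatches);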
The results above were obtained with OpenCV 2.4.9 + VS2010 + Windows 7. Given my limited experience, mistakes are inevitable; corrections are welcome so we can improve together!