1. OpenCV Study Notes (27) KAZE Algorithm Principles and Source Code Analysis (I): Nonlinear Diffusion Filtering
2. OpenCV Study Notes (28) KAZE Algorithm Principles and Source Code Analysis (II): Building the Nonlinear Scale Space
3. OpenCV Study Notes (29) KAZE Algorithm Principles and Source Code Analysis (III): Feature Detection and Description
4. OpenCV Study Notes (30) KAZE Algorithm Principles and Source Code Analysis (IV): Performance Analysis and Comparison of KAZE Features
5. OpenCV Study Notes (31) KAZE Algorithm Principles and Source Code Analysis (V): Performance Optimization of KAZE and Comparison with SIFT
==================================================================================================
1. Paper: http://www.robesafe.com/personal/pablo.alcantarilla/papers/Alcantarilla12eccv.pdf
2. Project page: http://www.robesafe.com/personal/pablo.alcantarilla/kaze.html
3. Author's code: http://www.robesafe.com/personal/pablo.alcantarilla/code/kaze_features_1_4.tar
(requires the Boost library; its timing functions are also rather cumbersome to use and can be replaced with OpenCV's cv::getTickCount)
4. Benchmark by Computer Vision Talks: http://computer-vision-talks.com/2013/03/porting-kaze-features-to-opencv/
5. Ievgen Khvedchenia, the author of Computer Vision Talks, integrated KAZE into OpenCV's cv::Feature2D class, but his version requires recompiling OpenCV and does not support adjusting the algorithm parameters or filtering keypoints with a mask: https://github.com/BloodAxe/opencv/tree/kaze-features
6. I extracted KAZE from Ievgen's repository and wrapped it in a class derived from cv::Feature2D; it needs no recompilation of OpenCV and supports parameter adjustment and mask filtering: https://github.com/yuhuazou/kaze_opencv (updated 2013-03-28 with optimizations to the KAZE code)
7. A Matlab interface wrapping version 1.0 of the KAZE code: https://github.com/vlfeat/vlbenchmarks/blob/unstable/%2BlocalFeatures/Kaze.m
==================================================================================================
KAZE feature detection roughly proceeds in the following three steps (a usage sketch of the corresponding library calls follows the list):
1) Build a nonlinear scale space using the AOS scheme and variable conductance diffusion ([4,5]).
2) Detect interest points as the local maxima (over 3×3 neighborhoods) of the scale-normalized determinant of the Hessian, computed across the nonlinear scale space.
3) Compute the dominant orientation of each keypoint and extract a scale- and rotation-invariant descriptor from first-order image derivatives.
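To make the call order concrete, here is a minimal sketch of driving the three steps with the original KAZE library. The method names follow the 1.x sources linked above, but the header name, constructor and option fields may differ between versions, so treat this as an illustration rather than a drop-in snippet:

#include <opencv2/opencv.hpp>
#include "kaze.h"                          // KAZE class and toptions struct from the library above

int main()
{
    // Load the image and convert it to a normalized 32-bit float image,
    // which is what the nonlinear diffusion code operates on
    cv::Mat img = cv::imread("test.png", 0);    // grayscale
    cv::Mat img32;
    img.convertTo(img32, CV_32F, 1.0/255.0);

    toptions options;                      // KAZE parameter struct; set the image size and any
                                           // other fields your version of the library requires

    KAZE evolution(options);               // Allocate_Memory_Evolution() is called here or must
                                           // be called explicitly, depending on the version

    evolution.Create_Nonlinear_Scale_Space(img32);   // step 1: nonlinear scale space (this post)

    std::vector<cv::KeyPoint> kpts;
    evolution.Feature_Detection(kpts);               // step 2: Hessian-based keypoint detection

    cv::Mat desc;
    evolution.Feature_Description(kpts, desc);       // step 3: orientation + descriptors

    return 0;
}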
The scale space used by KAZE is organized much like SIFT's: scale levels increase logarithmically over O octaves, each containing S sub-levels. Unlike SIFT, however, where each new octave is obtained by downsampling, every level in KAZE keeps the full resolution of the original image. Octaves and sub-levels are indexed by o and s and map to the scale parameter σ through:
σi(o, s) = σ0 · 2^(o + s/S),  with o ∈ [0, ..., O−1], s ∈ [0, ..., S−1], i ∈ [0, ..., N−1]
Here σ0 is the base value of the scale parameter and N = O·S is the total number of images in the scale space. As explained in the previous post, the nonlinear diffusion model evolves in time, so the scale parameter σi, given in pixels, has to be converted to time units. In the Gaussian scale space, convolving an image with a Gaussian of standard deviation σ (in pixels) is equivalent to filtering the image for a time t = σ²/2. This yields the mapping from the scale parameter σi to time units:
ti = σi²/2,  i ∈ {0, ..., N}
ti is called the evolution time. Note that this mapping is used only to obtain a set of evolution times from which the nonlinear scale space is built; in general, the filtered image at time ti in the nonlinear scale space does not correspond to convolving the original image with a Gaussian of standard deviation σi. The two spaces coincide only when the conductivity function g is identically 1 (i.e., g is a constant function), in which case the nonlinear scale space reduces to the Gaussian scale space. Moreover, as the scale level increases, the conductivity value at most pixels tends towards a constant, except at edge pixels that correspond to object boundaries.
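As a quick numerical illustration of the σ-to-time mapping, the standalone sketch below prints σi and ti for every (octave, sublevel) pair. The values σ0 = 1.6, O = 4 and S = 4 are assumed here as typical defaults of the KAZE code; the two formulas are exactly what Allocate_Memory_Evolution below computes for esigma and etime:

#include <cmath>
#include <cstdio>

int main()
{
    const float soffset    = 1.6f;   // base scale sigma_0 (assumed default)
    const int   omax       = 4;      // number of octaves O (assumed default)
    const int   nsublevels = 4;      // sub-levels S per octave (assumed default)

    for (int o = 0; o < omax; o++) {
        for (int s = 0; s < nsublevels; s++) {
            // sigma_i = sigma_0 * 2^(o + s/S), in pixels
            float esigma = soffset * powf(2.0f, (float)s / (float)nsublevels + o);
            // evolution time t_i = sigma_i^2 / 2
            float etime = 0.5f * esigma * esigma;
            printf("octave %d, sublevel %d: sigma = %.3f px, t = %.3f\n", o, s, esigma, etime);
        }
    }
    return 0;
}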
Given an input image, KAZE first smooths it with a Gaussian; it then computes the gradient histogram of the image to obtain the contrast parameter k. With the set of evolution times above, all images of the nonlinear scale space are obtained via the AOS scheme:
L^(i+1) = [ I − (t_(i+1) − t_i) · Σ_l A_l(L^i) ]^(−1) · L^i
where the sum runs over the coordinate directions (rows and columns) handled by the AOS solver described in the previous post.
In the implementation, each level of the scale space is represented by the following struct, tevolution:
typedef struct
{
    cv::Mat Lx, Ly;           // First order spatial derivatives
    cv::Mat Lxx, Lxy, Lyy;    // Second order spatial derivatives
    cv::Mat Lflow;            // Diffusivity image
    cv::Mat Lt;               // Evolution image
    cv::Mat Lsmooth;          // Smoothed image
    cv::Mat Lstep;            // Evolution step update (!! not actually used !!)
    cv::Mat Ldet;             // Detector response
    float etime;              // Evolution time
    float esigma;             // Evolution sigma. For linear diffusion t = sigma^2 / 2
    float octave;             // Image octave
    float sublevel;           // Image sublevel in each octave
    int sigma_size;           // Integer esigma. For computing the feature detector responses
} tevolution;
The struct is initialized as follows:
//*******************************************************************************
//*******************************************************************************

/**
 * @brief This method allocates the memory for the nonlinear diffusion evolution
 */
void KAZE::Allocate_Memory_Evolution(void)
{
    // Allocate the dimension of the matrices for the evolution
    for( int i = 0; i <= omax-1; i++ )
    {
        for( int j = 0; j <= nsublevels-1; j++ )
        {
            tevolution aux;
            aux.Lx = cv::Mat::zeros(img_height,img_width,CV_32F);
            aux.Ly = cv::Mat::zeros(img_height,img_width,CV_32F);
            aux.Lxx = cv::Mat::zeros(img_height,img_width,CV_32F);
            aux.Lxy = cv::Mat::zeros(img_height,img_width,CV_32F);
            aux.Lyy = cv::Mat::zeros(img_height,img_width,CV_32F);
            aux.Lflow = cv::Mat::zeros(img_height,img_width,CV_32F);
            aux.Lt = cv::Mat::zeros(img_height,img_width,CV_32F);
            aux.Lsmooth = cv::Mat::zeros(img_height,img_width,CV_32F);
            aux.Lstep = cv::Mat::zeros(img_height,img_width,CV_32F);
            aux.Ldet = cv::Mat::zeros(img_height,img_width,CV_32F);
            aux.esigma = soffset*pow((float)2.0,(float)(j)/(float)(nsublevels) + i);
            aux.etime = 0.5*(aux.esigma*aux.esigma);
            aux.sigma_size = fRound(aux.esigma);
            aux.octave = i;
            aux.sublevel = j;
            evolution.push_back(aux);
        }
    }

    // Allocate memory for the auxiliary variables that are used in the AOS scheme
    Ltx = cv::Mat::zeros(img_width,img_height,CV_32F);
    Lty = cv::Mat::zeros(img_height,img_width,CV_32F);
    px = cv::Mat::zeros(img_height,img_width,CV_32F);
    py = cv::Mat::zeros(img_height,img_width,CV_32F);
    ax = cv::Mat::zeros(img_height,img_width,CV_32F);
    ay = cv::Mat::zeros(img_height,img_width,CV_32F);
    bx = cv::Mat::zeros(img_height-1,img_width,CV_32F);
    by = cv::Mat::zeros(img_height-1,img_width,CV_32F);
    qr = cv::Mat::zeros(img_height-1,img_width,CV_32F);
    qc = cv::Mat::zeros(img_height,img_width-1,CV_32F);
}
Note that esigma, etime, sigma_size, octave and sublevel are all fixed at initialization and never change afterwards. Once this initialization is done, the nonlinear scale space itself is built by the following function:
//*******************************************************************************
//*******************************************************************************

/**
 * @brief This method creates the nonlinear scale space for a given image
 * @param img Input image for which the nonlinear scale space needs to be created
 * @return 0 if the nonlinear scale space was created successfully. -1 otherwise
 */
int KAZE::Create_Nonlinear_Scale_Space(const cv::Mat &img)
{
    if( verbosity == true )
    {
        std::cout << "\n> Creating nonlinear scale space." << std::endl;
    }

    double t2 = 0.0, t1 = 0.0;

    if( evolution.size() == 0 )
    {
        std::cout << "---> Error generating the nonlinear scale space!!" << std::endl;
        std::cout << "---> Firstly you need to call KAZE::Allocate_Memory_Evolution()" << std::endl;
        return -1;
    }

    int64 start_t1 = cv::getTickCount();

    // Copy the original image to the first level of the evolution
    if( verbosity == true )
    {
        std::cout << "-> Perform the Gaussian smoothing." << std::endl;
    }

    img.copyTo(evolution[0].Lt);
    Gaussian_2D_Convolution(evolution[0].Lt,evolution[0].Lt,0,0,soffset);
    Gaussian_2D_Convolution(evolution[0].Lt,evolution[0].Lsmooth,0,0,sderivatives);

    // Firstly compute the kcontrast factor
    Compute_KContrast(evolution[0].Lt,KCONTRAST_PERCENTILE);

    t2 = cv::getTickCount();
    tkcontrast = 1000.0 * (t2 - start_t1) / cv::getTickFrequency();

    if( verbosity == true )
    {
        std::cout << "-> Computed K-contrast factor. Execution time (ms): " << tkcontrast << std::endl;
        std::cout << "-> Now computing the nonlinear scale space!!" << std::endl;
    }

    // Now generate the rest of evolution levels
    for( unsigned int i = 1; i < evolution.size(); i++ )
    {
        Gaussian_2D_Convolution(evolution[i-1].Lt,evolution[i].Lsmooth,0,0,sderivatives);

        // Compute the Gaussian derivatives Lx and Ly
        Image_Derivatives_Scharr(evolution[i].Lsmooth,evolution[i].Lx,1,0);
        Image_Derivatives_Scharr(evolution[i].Lsmooth,evolution[i].Ly,0,1);

        // Compute the conductivity equation
        if( diffusivity == 0 )
        {
            PM_G1(evolution[i].Lsmooth,evolution[i].Lflow,evolution[i].Lx,evolution[i].Ly,kcontrast);
        }
        else if( diffusivity == 1 )
        {
            PM_G2(evolution[i].Lsmooth,evolution[i].Lflow,evolution[i].Lx,evolution[i].Ly,kcontrast);
        }
        else if( diffusivity == 2 )
        {
            Weickert_Diffusivity(evolution[i].Lsmooth,evolution[i].Lflow,evolution[i].Lx,evolution[i].Ly,kcontrast);
        }

        // Perform the evolution step with AOS
#if HAVE_THREADING_SUPPORT
        AOS_Step_Scalar_Parallel(evolution[i].Lt,evolution[i-1].Lt,evolution[i].Lflow,evolution[i].etime-evolution[i-1].etime);
#else
        AOS_Step_Scalar(evolution[i].Lt,evolution[i-1].Lt,evolution[i].Lflow,evolution[i].etime-evolution[i-1].etime);
#endif

        if( verbosity == true )
        {
            std::cout << "--> Computed image evolution step " << i
                      << " Evolution time: " << evolution[i].etime
                      << " Sigma: " << evolution[i].esigma << std::endl;
        }
    }

    t2 = cv::getTickCount();
    tnlscale = 1000.0*(t2-start_t1) / cv::getTickFrequency();

    if( verbosity == true )
    {
        std::cout << "> Computed the nonlinear scale space. Execution time (ms): " << tnlscale << std::endl;
    }

    return 0;
}
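After Create_Nonlinear_Scale_Space returns, each entry of the evolution vector holds one filtered image Lt together with its scale parameters. The following debugging sketch simply iterates over the levels and displays them; it assumes access to the evolution member (e.g. from inside the class, or through a small accessor you add yourself):

// Inspect each level of the nonlinear scale space (debugging sketch)
for (size_t i = 0; i < evolution.size(); i++)
{
    std::cout << "level " << i
              << "  octave "   << evolution[i].octave
              << "  sublevel " << evolution[i].sublevel
              << "  sigma "    << evolution[i].esigma
              << "  t "        << evolution[i].etime << std::endl;

    // Lt is CV_32F; normalize it to 8 bits for display
    cv::Mat vis;
    cv::normalize(evolution[i].Lt, vis, 0, 255, cv::NORM_MINMAX, CV_8U);
    cv::imshow("KAZE scale space", vis);
    cv::waitKey(0);
}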
The computation of the contrast factor k, the conductivity functions g, and the AOS solver called in the function above were already covered in the previous post on nonlinear diffusion filtering. Image derivatives/gradients are computed with the Scharr filter, which approximates rotation invariance better than the Sobel filter. The convolution and derivative functions involved are listed below:
//*************************************************************************************
//*************************************************************************************

/**
 * @brief This function smoothes an image with a Gaussian kernel
 * @param src Input image
 * @param dst Output image
 * @param ksize_x Kernel size in X-direction (horizontal)
 * @param ksize_y Kernel size in Y-direction (vertical)
 * @param sigma Kernel standard deviation
 */
void Gaussian_2D_Convolution(const cv::Mat &src, cv::Mat &dst, unsigned int ksize_x,
                             unsigned int ksize_y, float sigma)
{
    // Compute an appropriate kernel size according to the specified sigma
    if( sigma > ksize_x || sigma > ksize_y || ksize_x == 0 || ksize_y == 0 )
    {
        ksize_x = ceil(2.0*(1.0 + (sigma-0.8)/(0.3)));
        ksize_y = ksize_x;
    }

    // The kernel size must be an odd number
    if( (ksize_x % 2) == 0 )
    {
        ksize_x += 1;
    }

    if( (ksize_y % 2) == 0 )
    {
        ksize_y += 1;
    }

    // Perform the Gaussian Smoothing with border replication
    cv::GaussianBlur(src,dst,cv::Size(ksize_x,ksize_y),sigma,sigma,cv::BORDER_REPLICATE);
}

//*************************************************************************************
//*************************************************************************************

/**
 * @brief This function computes image derivatives with the Scharr kernel
 * @param src Input image
 * @param dst Output image
 * @param xorder Derivative order in X-direction (horizontal)
 * @param yorder Derivative order in Y-direction (vertical)
 * @note The Scharr operator approximates rotation invariance better than
 * other stencils such as Sobel. See Weickert and Scharr,
 * A Scheme for Coherence-Enhancing Diffusion Filtering with Optimized Rotation Invariance,
 * Journal of Visual Communication and Image Representation 2002
 */
void Image_Derivatives_Scharr(const cv::Mat &src, cv::Mat &dst,
                              unsigned int xorder, unsigned int yorder)
{
    // Compute Scharr filter
    cv::Scharr(src,dst,CV_32F,xorder,yorder,1,0,cv::BORDER_DEFAULT);
}
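For example, the two helpers above can be used on their own as follows (a standalone sketch; the image path and the sigma value are just placeholders):

cv::Mat img = cv::imread("test.png", 0);     // load as grayscale
cv::Mat img32, smooth, lx, ly;
img.convertTo(img32, CV_32F, 1.0/255.0);     // the KAZE code works on normalized float images

// Smooth with sigma = 1.6; passing 0 for the kernel sizes lets the
// function derive an appropriate (odd) kernel size from sigma
Gaussian_2D_Convolution(img32, smooth, 0, 0, 1.6f);

// First-order derivatives of the smoothed image with the Scharr stencil
Image_Derivatives_Scharr(smooth, lx, 1, 0);
Image_Derivatives_Scharr(smooth, ly, 0, 1);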
That covers how the nonlinear scale space is constructed and implemented. The next post will look at how KAZE keypoints are detected and described.
To be continued...
Ref:
[4] http://wenku.baidu.com/view/d9dffc34f111f18583d05a6f.html
[5] http://erie.nlm.nih.gov/~yoo/pubs/94-058.pdf