opencv HOG

OpenCV 2.0 ships with a pedestrian detection example based on the method first presented by Navneet Dalal (INRIA, France) at CVPR 2005.
I have been studying it recently; below are my notes, which I hope we can discuss and improve together.
1. Installing OpenCV 2.0 under VC 2008 Express. You can use 2.1 directly instead, which avoids having to build with CMake and the build errors that can come with it.
      This is the foundation for everything else. Thanks to the forum moderator for the reference: http://www.opencv.org.cn/index.php/VC_2008_Express下安装OpenCV2.0
2. Trying out the sample program
At a DOS prompt, change to C:\OpenCV2.0\samples\c and run: peopledetect.exe filename.jpg
where filename.jpg is the image file to be detected.
3. Building the program
Create a console project and add peopledetect.cpp from C:\OpenCV2.0\samples\c to it; configure it as in step 1. It builds successfully, but strangely the EXE generated in DEBUG mode crashes at runtime.
After switching to RELEASE mode and rebuilding, the generated EXE runs fine.
4. Brief notes on the program code
1) getDefaultPeopleDetector() returns the 3780-dimensional detector (105 blocks with 4 histograms each and 9 bins per histogram: 3,780 values). Why 105 blocks? With a 64×128 window, 16×16 blocks and an 8-pixel block stride, there are (64−16)/8+1 = 7 block positions horizontally and (128−16)/8+1 = 15 vertically, so 7×15 = 105 blocks of 4×9 = 36 values each.
2) cv::HOGDescriptor hog; constructs the object and initializes its members:
winSize(64,128), blockSize(16,16), blockStride(8,8),
cellSize(8,8), nbins(9), derivAperture(1), winSigma(-1),
histogramNormType(L2Hys), L2HysThreshold(0.2), gammaCorrection(true)
3) Call detectMultiScale(img, found, 0, cv::Size(8,8), cv::Size(24,16), 1.05, 2);
  The parameters are: the image to scan, the returned list of detections, the hit threshold (hitThreshold), the window stride (winStride), the image padding margin, the scale factor, and the grouping threshold (groupThreshold). Experimenting with these on one particular image: changing 0 to 0.01 detects nothing while 0.001 works; changing 1.05 to 1.1 fails while 1.06 works; changing 2 to 1 works but 0.8 and below fail; (24,16) can be changed to (0,0), and (32,32) also works. A minimal usage sketch follows below.
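Putting steps 1) to 3) together, here is a minimal end-to-end usage sketch (with OpenCV 2.2 and later the single header below works; on 2.0/2.1 the include layout differs):

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("filename.jpg");  // image to scan
    if (img.empty())
        return -1;
    cv::HOGDescriptor hog;                     // default 64x128 person window
    hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector()); // 3780-dim detector
    std::vector<cv::Rect> found;
    // image, results, hitThreshold, winStride, padding, scale factor, groupThreshold
    hog.detectMultiScale(img, found, 0, cv::Size(8,8), cv::Size(24,16), 1.05, 2);
    for (size_t i = 0; i < found.size(); i++)
        cv::rectangle(img, found[i], cv::Scalar(0, 255, 0), 2); // draw each detection
    cv::imshow("people", img);
    cv::waitKey(0);
    return 0;
}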
Internally, detectMultiScale works as follows:
(1) Compute the number of pyramid levels (levels).
Taking a 530×402 image as an example: lg(402/128)/lg 1.05 ≈ 23.4, so 24 levels are used (the height ratio 402/128 is the limiting one here; 128 is the window height and 1.05 the scale factor).
(2) Loop over the levels; at each level:
HOGThreadData& tdata = threadData[getThreadNum()]; // per-thread scratch data
Mat smallerImg(sz, img.type(), tdata.smallerImgBuf.data); // rescaled image in the thread buffer
    and then call the core function
detect(smallerImg, tdata.locations, hitThreshold, winStride, padding);
whose parameters are: the image at this scale, the returned list of detections, the hit threshold, the window stride, and the padding margin.
Inside detect:
(a) Compute the padded image size paddedImgSize.
(b) Construct HOGCache cache(this, img, padding, padding, nwindows == 0, cacheStride); its constructor runs HOGCache::init, which computes the gradients (descriptor->computeGradient), the number of blocks per window (105), and the number of values per block (36).
    (c) Compute the number of windows nwindows. Taking the first level as an example: ((530+32×2−64)/8+1) × ((402+32×2−128)/8+1) = 67×43 = 2881 windows, where (32,32) is the padding used here ((24,16) also works) and 8 is the winStride.
(d) Loop over every window; for each window:
loop over the 105 blocks. For each block, compute and normalize the HOG feature via getBlock, then accumulate the products of its 36 values with the corresponding 36 detector coefficients; if the total over all 105 blocks satisfies s >= hitThreshold, the window is reported as a detection. A sketch of this loop follows below.
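In simplified C++, the per-window scoring described in (d) amounts to the following (a sketch of the logic, not the actual OpenCV source; the input layout is my assumption):

#include <vector>

// 'blockHists' holds the 105 normalized 36-value block histograms of one
// window; 'detector' holds the 3780 SVM weights and 'rho' the SVM bias.
bool windowHit(const std::vector<std::vector<float> >& blockHists,
               const std::vector<float>& detector,
               float rho, double hitThreshold)
{
    double score = -rho;                    // linear SVM score starts at minus the bias
    for (int b = 0; b < 105; b++)           // 105 blocks per 64x128 window
        for (int k = 0; k < 36; k++)        // 36 values per block
            score += blockHists[b][k] * detector[b * 36 + k]; // dot-product term
    return score >= hitThreshold;           // detection if the sum passes the threshold
}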
4) I think that covers the main structure, but many details still need to be worked out.
5. The algorithm flow as described in the original thesis
Figure 5.5 on page 78 of NavneetDalalThesis.pdf describes the complete object detection algorithm.
The first 2 steps are initialization and were basically covered above. The last 2 steps are as follows.
For each scale S_i = [S_s, S_s*S_r, . . . , S_n]
(a) Rescale the input image using bilinear interpolation
(b) Extract features (Fig. 4.12) and densely scan the scaled image with stride N_s for object/non-object detections
(c) Push all detections with t(w_i) > c to a list
Non-maximum suppression
(a) Represent each detection as a point y_i in 3-D position and scale space
(b) Using (5.9), compute the uncertainty matrices H_i for each point
(c) Compute the mean shift vector (5.7) iteratively for each point in the list until it converges to a mode
(d) The list of all of the modes gives the final fused detections
(e) For each mode compute the bounding box from the final centre point and scale

The following excerpts are selected from NavneetDalalThesis.pdf; I have pulled out the important parts. The original section numbers are kept to make them easy to look up.

4. Histogram of Oriented Gradients Based Encoding of Images
Default Detector.
As a yardstick for the purpose of comparison, throughout this section we compare results to our
default detector which has the following properties: input image in RGB colour space (without
any gamma correction); image gradient computed by applying [−1, 0, 1] filter along x- and y-axis
with no smoothing; linear gradient voting into 9 orientation bins in 0°–180°; 16×16 pixel
blocks containing 2×2 cells of 8×8 pixels; Gaussian block windowing with σ = 8 pixels; L2-Hys
(Lowe-style clipped L2 norm) block normalisation; blocks spaced with a stride of 8 pixels (hence
4-fold coverage of each cell); 64×128 detection window; and linear SVM classifier. We often
quote the performance at 10^−4 false positives per window (FPPW), the maximum false positive
rate that we consider to be useful for a real detector given that 10^3–10^4 windows are tested for
each image.
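These thesis defaults map directly onto the OpenCV HOGDescriptor parameters listed in section 4 above; an explicit construction looks roughly like this (a sketch against the OpenCV 2.x constructor):

#include <opencv2/objdetect/objdetect.hpp>

// The thesis default detector expressed as OpenCV HOGDescriptor parameters:
cv::HOGDescriptor hog(
    cv::Size(64, 128),   // winSize:     64x128 detection window
    cv::Size(16, 16),    // blockSize:   2x2 cells of 8x8 pixels
    cv::Size(8, 8),      // blockStride: 8-pixel stride, hence 4-fold cell coverage
    cv::Size(8, 8),      // cellSize:    8x8 pixel cells
    9);                  // nbins:       9 orientation bins over 0-180 degrees

Note one mismatch: the thesis default uses no gamma correction, while the OpenCV default constructor (section 4 above) sets gammaCorrection(true).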
4.3.2 Gradient Computation
The simple [−1, 0, 1] masks give the best performance.
4.3.3 Spatial / Orientation Binning
Each pixel contributes a weighted vote for orientation based on the orientation of the gradient element centred on it.
The votes are accumulated into orientation bins over local spatial regions that we call cells.
To reduce aliasing, votes are interpolated trilinearly between the neighbouring bin centres in both orientation and position.
Details of the trilinear interpolation voting procedure are presented in Appendix D.
The vote is a function of the gradient magnitude at the pixel, either the magnitude itself, its square, its
square root, or a clipped form of the magnitude representing soft presence/absence of an edge at the pixel. In practice, using the magnitude itself gives the best results.
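As a rough illustration of 4.3.2 and 4.3.3, the per-pixel gradient and orientation bin can be computed as follows (a simplified sketch: single-channel image, no trilinear interpolation; pixelVote is a hypothetical helper, not an OpenCV function):

#include <cmath>

// Gradient magnitude and orientation bin for interior pixel (x, y) of a
// single-channel image with row stride 'step', using the [-1, 0, 1] masks
// and 9 bins over 0-180 degrees. Trilinear interpolation is omitted here.
void pixelVote(const unsigned char* img, int step, int x, int y,
               float& magnitude, int& bin)
{
    float dx = (float)img[y * step + (x + 1)] - img[y * step + (x - 1)];
    float dy = (float)img[(y + 1) * step + x] - img[(y - 1) * step + x];
    magnitude = std::sqrt(dx * dx + dy * dy);   // the vote weight: the magnitude itself
    float angle = std::atan2(dy, dx) * 180.0f / 3.14159265f; // degrees in [-180, 180]
    if (angle < 0.0f) angle += 180.0f;          // fold into [0, 180)
    bin = (int)(angle / 20.0f) % 9;             // 9 bins of 20 degrees each
}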
4.3.4 Block Normalisation Schemes and Descriptor Overlap
Good normalisation is critical, and including overlap significantly improves the performance.
Figure 4.4(d) shows that L2-Hys, L2-norm and L1-sqrt all perform equally well for the person detector.
For other classes such as cars and motorbikes, L1-sqrt gives the best results.
4.3.5 Descriptor Blocks
R-HOG.
For human detection, 3×3 cell blocks of 6×6 pixel cells perform best with 10.4% miss-rate
at 10^−4 FPPW. Our standard 2×2 cell blocks of 8×8 pixel cells are a close second.
We find 2×2 and 3×3 cell blocks work best.
4.3.6 Detector Window and Context
Our 64×128 detection window includes about 16 pixels of margin around the person on all four
sides.
4.3.7 Classifier
By default we use a soft (C=0.01) linear SVM trained with SVMLight [Joachims 1999]. We modified
SVMLight to reduce memory usage for problems with large dense descriptor vectors.
---------------------------------
5. Multi-Scale Object Localisation
The detector scans the image with a detection window at all positions and scales, running the classifier in each window and fusing multiple overlapping detections to yield the final object detections.
We represent detections using kernel density estimation (KDE) in 3-D position and scale space. KDE is a data-driven process where continuous densities are evaluated by applying a smoothing kernel to observed data points. The bandwidth of the smoothing kernel defines the local neighbourhood. The detection scores are incorporated by weighting the observed detection points by their score values while computing the density estimate. Thus KDE naturally incorporates the first two criteria. The overlap criterion follows from the fact that detections at very different scales or positions are far off in 3-D position and scale space, and are thus not smoothed together. The modes (maxima) of the density estimate correspond to the positions and scales of final detections.
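Concretely, the weighted density estimate described above has the form (my notation, summarising the paragraph rather than quoting the thesis):

    f(y) ∝ sum_i t(w_i) K_Hi(y − y_i)

where K_Hi is a smoothing kernel with bandwidth matrix H_i centred on detection point y_i, and the score weight t(w_i) is defined further below.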
Let x_i = [x_i, y_i] and s'_i denote the detection position and scale, respectively, for the i-th detection.
The detections are represented in 3-D position and scale space as y = [x, y, s], where s = log(s').
The variable-bandwidth mean shift vector is defined in equation (5.7).
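The equation itself is not reproduced in these notes, but in the standard variable-bandwidth mean shift formulation (which the thesis builds on; this is my reconstruction, not a verbatim quote of (5.7)), the mean shift vector at a point y is

    m(y) = H_h(y) sum_i [ w_i(y) H_i^(-1) y_i ] − y,   with   H_h(y) = ( sum_i w_i(y) H_i^(-1) )^(-1)

where each weight w_i(y) combines the detection score t(w_i) with a Gaussian of the Mahalanobis distance (y − y_i)^T H_i^(-1) (y − y_i). Iterating y ← y + m(y) moves a point uphill until it converges to a mode.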

For each of the n points, the mean shift based iterative procedure is guaranteed to converge to a mode.
Detection Uncertainty Matrix H_i.
One key input to the above mode detection algorithm is the amount of uncertainty H_i to be associated with each point. We assume isosymmetric covariances, i.e. the H_i's are diagonal matrices.
Let diag[H] represent the 3 diagonal elements of H. We use scale-dependent covariance matrices such that

    diag[H_i] = [(exp(s_i) σ_x)^2, (exp(s_i) σ_y)^2, σ_s^2]    (5.9)

where σ_x, σ_y and σ_s are user-supplied smoothing values.

The term t(w_i) provides the weight for each detection. For linear SVMs we usually use a threshold of c = 0.
The smoothing parameters σ_x, σ_y and σ_s used in the non-maximum suppression stage can have a significant impact on performance, so proper evaluation is necessary. For all of the results here, unless otherwise noted, a scale ratio of 1.05, a stride of 8 pixels, and σ_x = 8, σ_y = 16, σ_s = log(1.3) are used as default values.
A scale ratio of 1.01 gives the best performance, but significantly slows the overall process.
Scale smoothing of log(1.3)–log(1.6) gives good performance for most object classes.
We group these mode candidates using a proximity measure. The final location is the mode corresponding to the highest density.
----------------------------------------------------
Appendix A. INRIA Static Person Data Set
The (centred and normalised) positive windows are supplied by the user, and the initial set of negatives is created once and for all by randomly sampling negative images. A preliminary classifier is first trained using these. Second, the preliminary detector is used to exhaustively scan the negative training images for hard examples (false positives). The classifier is then re-trained using this augmented training set (user-supplied positives, initial negatives and hard examples) to produce the final detector.
INRIA Static Person Data Set
As images of people are highly variable, to learn an effective classifier the positive training examples need to be properly normalized and centred to minimize the variance among them. For this we manually annotated all upright people in the original images.
The image regions belonging to the annotations were cropped and rescaled to 64×128 pixel image windows. On average the subjects' height is 96 pixels in these normalised windows, to allow for an approximately 16-pixel margin on each side. In practice we leave a further 16-pixel margin around each side of the image window to ensure that flow and gradients can be computed without boundary effects. The margins were added by appropriately expanding the annotations on each side before cropping the image regions.

//<------------------------ The above is excerpted from Dalal's PhD thesis

For more about the INRIA Person Dataset, see the following link:
http://pascal.inrialpes.fr/data/human/
Original Images
            Folders 'Train' and 'Test' correspond, respectively, to original training and test images. Both folders have three sub-folders: (a) 'pos' (positive training or test images), (b) 'neg' (negative training or test images), and (c) 'annotations' (annotation files for positive images in PASCAL Challenge format).
