[Introduction to OpenCV3 Programming: Study Notes] Chapter 2: Conceptual Preparation Before Setting Out

Chapter 2: Conceptual Preparation Before Setting Out

Table of Contents

  • Chapter 2: Conceptual Preparation Before Setting Out
    • Preface
    • 2.1 Guided Tour of the Official OpenCV Sample Programs
      • 2.1.1 Color Object Tracking: Camshift
      • 2.1.2 Optical Flow: optical flow
      • 2.1.3 Point Tracking: lkdemo
      • 2.1.4 Face Recognition: objectDetection
    • 2.2 The Charm of Open Source: Compiling the OpenCV Source Code
      • 2.2.1 Downloading and Installing CMake
      • 2.2.2 Using CMake to Generate a Solution for the OpenCV Source
      • 2.2.3 Compiling the OpenCV Source Code
    • 2.3 Understanding the "opencv.hpp" Header File
    • 2.4 Naming Conventions
      • 2.4.1 Naming Conventions Used in the Book's Examples
      • 2.4.2 Hungarian Notation
    • 2.5 Demystifying the argc and argv Parameters
      • 2.5.1 First Look at argc and argv in main
      • 2.5.2 The Exact Meaning of argc and argv
      • 2.5.3 Several Ways to Write main in Visual Studio
      • 2.5.4 Summary
    • 2.6 A Brief Look at the Formatted Output Function printf()
      • 2.6.1 Formatted Output: the printf() Function
      • Sample Program: printf Usage Example
    • 2.7 Displaying the Current OpenCV Version Automatically
    • 2.8 Chapter Summary

Preface

Notes series

Reference book: Introduction to OpenCV3 Programming (OpenCV3编程入门)

Author: Mao Xingyun (毛星云)

Publisher: Publishing House of Electronics Industry

Publication date: 2015-02

These notes are for my own reference and may not apply universally.

All code in these notes is OpenCV + Qt code, not developed with VS; please do not confuse the two.

2.1 Guided Tour of the Official OpenCV Sample Programs

In the OpenCV installation directory you can find the official sample code that ships with OpenCV, located at

...\opencv\sources\samples\cpp
[Figure 1]
The directory ...\opencv\sources\samples\cpp\tutorial_code additionally holds the companion sample programs for the official tutorials, organized by OpenCV module. They are ideal for learning: beginners can look up what they need, study category by category, and master the modules one at a time.
[Figure 2]

2.1.1 Color Object Tracking: Camshift

This program tracks a target in the video stream read from the camera, based on the hue spectrum of the region selected with the mouse.

It mainly uses the CamShift algorithm ("Continuously Adaptive Mean-SHIFT"), an improvement on the MeanShift algorithm known as the continuously adaptive MeanShift.

Location of the sample program: ...\opencv\sources\samples\cpp\camshiftdemo.cpp

After rebuilding with Qt, the code looks as follows.

File: main.cpp

#include "mainwindow.h"
#include <QApplication>
#include "opencv2/core/utility.hpp"
#include "opencv2/video/tracking.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/videoio.hpp"
#include "opencv2/highgui.hpp"
#include <iostream>
#include <ctype.h>

using namespace cv;
using namespace std;

Mat image;

bool backprojMode = false;
bool selectObject = false;
int trackObject = 0;
bool showHist = true;
Point origin;
Rect selection;
int vmin = 10, vmax = 256, smin = 30;

// User draws box around object to track. This triggers CAMShift to start tracking
static void onMouse( int event, int x, int y, int, void* )
{
    if( selectObject )
    {
        selection.x = MIN(x, origin.x);
        selection.y = MIN(y, origin.y);
        selection.width = std::abs(x - origin.x);
        selection.height = std::abs(y - origin.y);

        selection &= Rect(0, 0, image.cols, image.rows);
    }

    switch( event )
    {
    case EVENT_LBUTTONDOWN:
        origin = Point(x,y);
        selection = Rect(x,y,0,0);
        selectObject = true;
        break;
    case EVENT_LBUTTONUP:
        selectObject = false;
        if( selection.width > 0 && selection.height > 0 )
            trackObject = -1;   // Set up CAMShift properties in main() loop
        break;
    }
}

string hot_keys =
    "\n\nHot keys: \n"
    "\tESC - quit the program\n"
    "\tc - stop the tracking\n"
    "\tb - switch to/from backprojection view\n"
    "\th - show/hide object histogram\n"
    "\tp - pause video\n"
    "To initialize tracking, select the object with mouse\n";

static void help(const char* argv0)
{
    cout << "\nThis is a demo that shows mean-shift based tracking\n"
            "You select a colored object, such as your face, and it tracks it.\n"
            "This reads from the video camera (0 by default, or the camera number the user enters)\n"
            "Usage: \n\t";
    cout << argv0 << " [camera number]\n";
    cout << hot_keys;
}

const char* keys =
{
    "{help h | | show help message}{@camera_number| 0 | camera number}"
};


int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    VideoCapture cap;
        Rect trackWindow;
        int hsize = 16;
        float hranges[] = {0,180};
        const float* phranges = hranges;
        CommandLineParser parser(argc, argv, keys);
        if (parser.has("help"))
        {
            help(*argv);
            return 0;
        }
        int camNum = parser.get<int>(0);
        cap.open(camNum);

        if( !cap.isOpened() )
        {
            help(*argv);
            cout << "***Could not initialize capturing...***\n";
            cout << "Current parameter's value: \n";
            parser.printMessage();
            return -1;
        }
        cout << hot_keys;
        namedWindow( "Histogram", 0 );
        namedWindow( "CamShift Demo", 0 );
        setMouseCallback( "CamShift Demo", onMouse, 0 );
        createTrackbar( "Vmin", "CamShift Demo", &vmin, 256, 0 );
        createTrackbar( "Vmax", "CamShift Demo", &vmax, 256, 0 );
        createTrackbar( "Smin", "CamShift Demo", &smin, 256, 0 );

        Mat frame, hsv, hue, mask, hist, histimg = Mat::zeros(200, 320, CV_8UC3), backproj;
        bool paused = false;

        for(;;)
        {
            if( !paused )
            {
                cap >> frame;
                if( frame.empty() )
                    break;
            }

            frame.copyTo(image);

            if( !paused )
            {
                cvtColor(image, hsv, COLOR_BGR2HSV);

                if( trackObject )
                {
                    int _vmin = vmin, _vmax = vmax;

                    inRange(hsv, Scalar(0, smin, MIN(_vmin,_vmax)),
                            Scalar(180, 256, MAX(_vmin, _vmax)), mask);
                    int ch[] = {0, 0};
                    hue.create(hsv.size(), hsv.depth());
                    mixChannels(&hsv, 1, &hue, 1, ch, 1);

                    if( trackObject < 0 )
                    {
                        // Object has been selected by user, set up CAMShift search properties once
                        Mat roi(hue, selection), maskroi(mask, selection);
                        calcHist(&roi, 1, 0, maskroi, hist, 1, &hsize, &phranges);
                        normalize(hist, hist, 0, 255, NORM_MINMAX);

                        trackWindow = selection;
                        trackObject = 1; // Don't set up again, unless user selects new ROI

                        histimg = Scalar::all(0);
                        int binW = histimg.cols / hsize;
                        Mat buf(1, hsize, CV_8UC3);
                        for( int i = 0; i < hsize; i++ )
                            buf.at<Vec3b>(i) = Vec3b(saturate_cast<uchar>(i*180./hsize), 255, 255);
                        cvtColor(buf, buf, COLOR_HSV2BGR);

                        for( int i = 0; i < hsize; i++ )
                        {
                            int val = saturate_cast<int>(hist.at<float>(i)*histimg.rows/255);
                            rectangle( histimg, Point(i*binW,histimg.rows),
                                       Point((i+1)*binW,histimg.rows - val),
                                       Scalar(buf.at<Vec3b>(i)), -1, 8 );
                        }
                    }

                    // Perform CAMShift
                    calcBackProject(&hue, 1, 0, hist, backproj, &phranges);
                    backproj &= mask;
                    RotatedRect trackBox = CamShift(backproj, trackWindow,
                                       TermCriteria( TermCriteria::EPS | TermCriteria::COUNT, 10, 1 ));
                    if( trackWindow.area() <= 1 )
                    {
                        int cols = backproj.cols, rows = backproj.rows, r = (MIN(cols, rows) + 5)/6;
                        trackWindow = Rect(trackWindow.x - r, trackWindow.y - r,
                                           trackWindow.x + r, trackWindow.y + r) &
                                      Rect(0, 0, cols, rows);
                    }

                    if( backprojMode )
                        cvtColor( backproj, image, COLOR_GRAY2BGR );
                    ellipse( image, trackBox, Scalar(0,0,255), 3, LINE_AA );
                }
            }
            else if( trackObject < 0 )
                paused = false;

            if( selectObject && selection.width > 0 && selection.height > 0 )
            {
                Mat roi(image, selection);
                bitwise_not(roi, roi);
            }

            imshow( "CamShift Demo", image );
            imshow( "Histogram", histimg );

            char c = (char)waitKey(10);
            if( c == 27 )
                break;
            switch(c)
            {
            case 'b':
                backprojMode = !backprojMode;
                break;
            case 'c':
                trackObject = 0;
                histimg = Scalar::all(0);
                break;
            case 'h':
                showHist = !showHist;
                if( !showHist )
                    destroyWindow( "Histogram" );
                else
                    namedWindow( "Histogram", 1 );
                break;
            case 'p':
                paused = !paused;
                break;
            default:
                ;
            }
        }

        return 0;
}

The program runs as follows:

[1] Screenshot of the color object tracking

[Figure 3]
[2] The corresponding histogram analysis
[Figure 4]

2.1.2 Optical Flow: optical flow

The optical flow method is one of the most important techniques in motion image analysis today.

Optical flow describes the motion velocity of patterns in a time-varying image: when an object moves, the corresponding brightness pattern in the image moves with it.

This apparent motion of the image brightness pattern is the optical flow.

Optical flow expresses how the image changes; because it carries information about the target's motion, an observer can use it to determine how the target is moving.

Location of the sample program: ...\opencv\sources\samples\cpp\tutorial_code\video\optical_flow\optical_flow.cpp

After rebuilding with Qt, the code looks as follows.

File: main.cpp

#include "mainwindow.h"

#include <QApplication>

#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/video.hpp>

using namespace cv;
using namespace std;

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    const string about =
            "This sample demonstrates Lucas-Kanade Optical Flow calculation.\n"
            "The example file can be downloaded from:\n"
            "  https://www.bogotobogo.com/python/OpenCV_Python/images/mean_shift_tracking/slow_traffic_small.mp4";
        const string keys =
            "{ h help |      | print this help message }"
            "{ @image | vtest.avi | path to image file }";
        CommandLineParser parser(argc, argv, keys);
        parser.about(about);
        if (parser.has("help"))
        {
            parser.printMessage();
            return 0;
        }
        string filename = samples::findFile(parser.get<string>("@image"));
        if (!parser.check())
        {
            parser.printErrors();
            return 0;
        }

        VideoCapture capture(filename);
        if (!capture.isOpened()){
            //error in opening the video input
            cerr << "Unable to open file!" << endl;
            return 0;
        }

        // Create some random colors
        vector<Scalar> colors;
        RNG rng;
        for(int i = 0; i < 100; i++)
        {
            int r = rng.uniform(0, 256);
            int g = rng.uniform(0, 256);
            int b = rng.uniform(0, 256);
            colors.push_back(Scalar(r,g,b));
        }

        Mat old_frame, old_gray;
        vector<Point2f> p0, p1;

        // Take first frame and find corners in it
        capture >> old_frame;
        cvtColor(old_frame, old_gray, COLOR_BGR2GRAY);
        goodFeaturesToTrack(old_gray, p0, 100, 0.3, 7, Mat(), 7, false, 0.04);

        // Create a mask image for drawing purposes
        Mat mask = Mat::zeros(old_frame.size(), old_frame.type());

        while(true){
            Mat frame, frame_gray;

            capture >> frame;
            if (frame.empty())
                break;
            cvtColor(frame, frame_gray, COLOR_BGR2GRAY);

            // calculate optical flow
            vector<uchar> status;
            vector<float> err;
            TermCriteria criteria = TermCriteria((TermCriteria::COUNT) + (TermCriteria::EPS), 10, 0.03);
            calcOpticalFlowPyrLK(old_gray, frame_gray, p0, p1, status, err, Size(15,15), 2, criteria);

            vector<Point2f> good_new;
            for(uint i = 0; i < p0.size(); i++)
            {
                // Select good points
                if(status[i] == 1) {
                    good_new.push_back(p1[i]);
                    // draw the tracks
                    line(mask,p1[i], p0[i], colors[i], 2);
                    circle(frame, p1[i], 5, colors[i], -1);
                }
            }
            Mat img;
            add(frame, mask, img);

            imshow("Frame", img);

            int keyboard = waitKey(30);
            if (keyboard == 'q' || keyboard == 27)
                break;

            // Now update the previous frame and previous points
            old_gray = frame_gray.clone();
            p0 = good_new;
        }
}

The program runs as follows:

[1] Optical flow tracking result

[Figure 5]

2.1.3 Point Tracking: lkdemo

In the ...\opencv\sources\samples\cpp directory (the actual path may vary with your OpenCV version and installation settings), find the file lkdemo.cpp.

[Figure: lkdemo.cpp in the samples directory]

After porting to Qt, the code is as follows.

File: main.cpp

#include <QCoreApplication>

#include "opencv2/video/tracking.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/videoio.hpp"
#include "opencv2/highgui.hpp"

#include <iostream>
#include <ctype.h>

using namespace cv;
using namespace std;

static void help()
{
    // print a welcome message, and the OpenCV version
    cout << "\nThis is a demo of Lukas-Kanade optical flow lkdemo(),\n"
            "Using OpenCV version " << CV_VERSION << endl;
    cout << "\nIt uses camera by default, but you can provide a path to video as an argument.\n";
    cout << "\nHot keys: \n"
            "\tESC - quit the program\n"
            "\tr - auto-initialize tracking\n"
            "\tc - delete all the points\n"
            "\tn - switch the \"night\" mode on/off\n"
            "To add/remove a feature point click it\n" << endl;
}

Point2f point;
bool addRemovePt = false;

static void onMouse( int event, int x, int y, int /*flags*/, void* /*param*/ )
{
    if( event == EVENT_LBUTTONDOWN )
    {
        point = Point2f((float)x, (float)y);
        addRemovePt = true;
    }
}

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    VideoCapture cap;
        TermCriteria termcrit(TermCriteria::COUNT|TermCriteria::EPS,20,0.03);
        Size subPixWinSize(10,10), winSize(31,31);

        const int MAX_COUNT = 500;
        bool needToInit = false;
        bool nightMode = false;

        help();
        cv::CommandLineParser parser(argc, argv, "{@input|0|}");
        string input = parser.get<string>("@input");

        if( input.size() == 1 && isdigit(input[0]) )
            cap.open(input[0] - '0');
        else
            cap.open(input);

        if( !cap.isOpened() )
        {
            cout << "Could not initialize capturing...\n";
            return 0;
        }

        namedWindow( "LK Demo", 1 );
        setMouseCallback( "LK Demo", onMouse, 0 );

        Mat gray, prevGray, image, frame;
        vector<Point2f> points[2];

        for(;;)
        {
            cap >> frame;
            if( frame.empty() )
                break;

            frame.copyTo(image);
            cvtColor(image, gray, COLOR_BGR2GRAY);

            if( nightMode )
                image = Scalar::all(0);

            if( needToInit )
            {
                // automatic initialization
                goodFeaturesToTrack(gray, points[1], MAX_COUNT, 0.01, 10, Mat(), 3, 3, 0, 0.04);
                cornerSubPix(gray, points[1], subPixWinSize, Size(-1,-1), termcrit);
                addRemovePt = false;
            }
            else if( !points[0].empty() )
            {
                vector<uchar> status;
                vector<float> err;
                if(prevGray.empty())
                    gray.copyTo(prevGray);
                calcOpticalFlowPyrLK(prevGray, gray, points[0], points[1], status, err, winSize,
                                     3, termcrit, 0, 0.001);
                size_t i, k;
                for( i = k = 0; i < points[1].size(); i++ )
                {
                    if( addRemovePt )
                    {
                        if( norm(point - points[1][i]) <= 5 )
                        {
                            addRemovePt = false;
                            continue;
                        }
                    }

                    if( !status[i] )
                        continue;

                    points[1][k++] = points[1][i];
                    circle( image, points[1][i], 3, Scalar(0,255,0), -1, 8);
                }
                points[1].resize(k);
            }

            if( addRemovePt && points[1].size() < (size_t)MAX_COUNT )
            {
                vector<Point2f> tmp;
                tmp.push_back(point);
                cornerSubPix( gray, tmp, winSize, Size(-1,-1), termcrit);
                points[1].push_back(tmp[0]);
                addRemovePt = false;
            }

            needToInit = false;
            imshow("LK Demo", image);

            char c = (char)waitKey(10);
            if( c == 27 )
                break;
            switch( c )
            {
            case 'r':
                needToInit = true;
                break;
            case 'c':
                points[0].clear();
                points[1].clear();
                break;
            case 'n':
                nightMode = !nightMode;
                break;
            }

            std::swap(points[1], points[0]);
            cv::swap(prevGray, gray);
        }

    return a.exec();
}

The program runs as follows:

[1] Point tracking result

[Figure 6]

2.1.4 Face Recognition: objectDetection

Face recognition is one of the most important applications of image processing and OpenCV, and the OpenCV project has a dedicated tutorial and code explaining how to implement it.

The sample program here uses the objdetect module to detect faces in the camera's video stream; it lives under ...\opencv\sources\samples\cpp\tutorial_code\objectDetection.

Note: for the program to run correctly, you must copy the two files "haarcascade_eye_tree_eyeglasses.xml" and "haarcascade_frontalface_alt.xml" from ...\opencv\sources\data\haarcascades into the project folder.

After porting to Qt, the code is as follows.

File: main.cpp

#include <QCoreApplication>

#include "opencv2/objdetect.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/videoio.hpp"
#include <iostream>

using namespace std;
using namespace cv;

/** Function Headers */
void detectAndDisplay( Mat frame );

/** Global variables */
CascadeClassifier face_cascade;
CascadeClassifier eyes_cascade;

/** @function main */

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    CommandLineParser parser(argc, argv,
                                "{help h||}"
                                "{face_cascade|data/haarcascades/haarcascade_frontalface_alt.xml|Path to face cascade.}"
                                "{eyes_cascade|data/haarcascades/haarcascade_eye_tree_eyeglasses.xml|Path to eyes cascade.}"
                                "{camera|0|Camera device number.}");

       parser.about( "\nThis program demonstrates using the cv::CascadeClassifier class to detect objects (Face + eyes) in a video stream.\n"
                     "You can use Haar or LBP features.\n\n" );
       parser.printMessage();

       String face_cascade_name = samples::findFile( parser.get<String>("face_cascade") );
       String eyes_cascade_name = samples::findFile( parser.get<String>("eyes_cascade") );

       //-- 1. Load the cascades
       if( !face_cascade.load( face_cascade_name ) )
       {
           cout << "--(!)Error loading face cascade\n";
           return -1;
       };
       if( !eyes_cascade.load( eyes_cascade_name ) )
       {
           cout << "--(!)Error loading eyes cascade\n";
           return -1;
       };

       int camera_device = parser.get<int>("camera");
       VideoCapture capture;
       //-- 2. Read the video stream
       capture.open( camera_device );
       if ( ! capture.isOpened() )
       {
           cout << "--(!)Error opening video capture\n";
           return -1;
       }

       Mat frame;
       while ( capture.read(frame) )
       {
           if( frame.empty() )
           {
               cout << "--(!) No captured frame -- Break!\n";
               break;
           }

           //-- 3. Apply the classifier to the frame
           detectAndDisplay( frame );

           if( waitKey(10) == 27 )
           {
               break; // escape
           }
       }

    return a.exec();
}


/** @function detectAndDisplay */
void detectAndDisplay( Mat frame )
{
    Mat frame_gray;
    cvtColor( frame, frame_gray, COLOR_BGR2GRAY );
    equalizeHist( frame_gray, frame_gray );

    //-- Detect faces
    std::vector<Rect> faces;
    face_cascade.detectMultiScale( frame_gray, faces );

    for ( size_t i = 0; i < faces.size(); i++ )
    {
        Point center( faces[i].x + faces[i].width/2, faces[i].y + faces[i].height/2 );
        ellipse( frame, center, Size( faces[i].width/2, faces[i].height/2 ), 0, 0, 360, Scalar( 255, 0, 255 ), 4 );

        Mat faceROI = frame_gray( faces[i] );

        //-- In each face, detect eyes
        std::vector<Rect> eyes;
        eyes_cascade.detectMultiScale( faceROI, eyes );

        for ( size_t j = 0; j < eyes.size(); j++ )
        {
            Point eye_center( faces[i].x + eyes[j].x + eyes[j].width/2, faces[i].y + eyes[j].y + eyes[j].height/2 );
            int radius = cvRound( (eyes[j].width + eyes[j].height)*0.25 );
            circle( frame, eye_center, radius, Scalar( 255, 0, 0 ), 4 );
        }
    }

    //-- Show what you got
    imshow( "Capture - Face detection", frame );
}

The program runs as follows:

[1] Face detection result
[Figure 7]

2.2 The Charm of Open Source: Compiling the OpenCV Source Code

This section is skipped here; for the details of compiling OpenCV, refer to the book or to the following blog posts:

Qt-OpenCV开发环境搭建(史上最详细) (Qt-OpenCV development environment setup, the most detailed guide)

拜小白教你Qt5.8.0+OpenCV3.2.0配置教程(详细版) (Qt 5.8.0 + OpenCV 3.2.0 configuration tutorial, detailed edition)

The build-and-install steps are much the same everywhere, so I will not repeat them. Most problems encountered during compilation are already explained in the two blog posts above; read them patiently, match them against what you see on your own machine, and above all do not rush.

2.2.1 Downloading and Installing CMake

2.2.2 Using CMake to Generate a Solution for the OpenCV Source

2.2.3 Compiling the OpenCV Source Code

2.3 Understanding the "opencv.hpp" Header File

In any OpenCV program, using Go to Definition on the line #include <opencv2/opencv.hpp> shows that the header is defined roughly as follows:

#ifndef OPENCV_ALL_HPP
#define OPENCV_ALL_HPP

// File that defines what modules where included during the build of OpenCV
// These are purely the defines of the correct HAVE_OPENCV_modulename values
#include "opencv2/opencv_modules.hpp"

// Then the list of defines is checked to include the correct headers
// Core library is always included --> without no OpenCV functionality available
#include "opencv2/core.hpp"

// Then the optional modules are checked
#ifdef HAVE_OPENCV_CALIB3D
#include "opencv2/calib3d.hpp"
#endif
#ifdef HAVE_OPENCV_FEATURES2D
#include "opencv2/features2d.hpp"
#endif
#ifdef HAVE_OPENCV_DNN
#include "opencv2/dnn.hpp"
#endif
#ifdef HAVE_OPENCV_FLANN
#include "opencv2/flann.hpp"
#endif
#ifdef HAVE_OPENCV_HIGHGUI
#include "opencv2/highgui.hpp"
#endif
#ifdef HAVE_OPENCV_IMGCODECS
#include "opencv2/imgcodecs.hpp"
#endif
#ifdef HAVE_OPENCV_IMGPROC
#include "opencv2/imgproc.hpp"
#endif
#ifdef HAVE_OPENCV_ML
#include "opencv2/ml.hpp"
#endif
#ifdef HAVE_OPENCV_OBJDETECT
#include "opencv2/objdetect.hpp"
#endif
#ifdef HAVE_OPENCV_PHOTO
#include "opencv2/photo.hpp"
#endif
#ifdef HAVE_OPENCV_SHAPE
#include "opencv2/shape.hpp"
#endif
#ifdef HAVE_OPENCV_STITCHING
#include "opencv2/stitching.hpp"
#endif
#ifdef HAVE_OPENCV_SUPERRES
#include "opencv2/superres.hpp"
#endif
#ifdef HAVE_OPENCV_VIDEO
#include "opencv2/video.hpp"
#endif
#ifdef HAVE_OPENCV_VIDEOIO
#include "opencv2/videoio.hpp"
#endif
#ifdef HAVE_OPENCV_VIDEOSTAB
#include "opencv2/videostab.hpp"
#endif
#ifdef HAVE_OPENCV_VIZ
#include "opencv2/viz.hpp"
#endif

// Finally CUDA specific entries are checked and added
#ifdef HAVE_OPENCV_CUDAARITHM
#include "opencv2/cudaarithm.hpp"
#endif
#ifdef HAVE_OPENCV_CUDABGSEGM
#include "opencv2/cudabgsegm.hpp"
#endif
#ifdef HAVE_OPENCV_CUDACODEC
#include "opencv2/cudacodec.hpp"
#endif
#ifdef HAVE_OPENCV_CUDAFEATURES2D
#include "opencv2/cudafeatures2d.hpp"
#endif
#ifdef HAVE_OPENCV_CUDAFILTERS
#include "opencv2/cudafilters.hpp"
#endif
#ifdef HAVE_OPENCV_CUDAIMGPROC
#include "opencv2/cudaimgproc.hpp"
#endif
#ifdef HAVE_OPENCV_CUDAOBJDETECT
#include "opencv2/cudaobjdetect.hpp"
#endif
#ifdef HAVE_OPENCV_CUDAOPTFLOW
#include "opencv2/cudaoptflow.hpp"
#endif
#ifdef HAVE_OPENCV_CUDASTEREO
#include "opencv2/cudastereo.hpp"
#endif
#ifdef HAVE_OPENCV_CUDAWARPING
#include "opencv2/cudawarping.hpp"
#endif

#endif

As you can see, the header opencv2/opencv.hpp already includes the headers of all OpenCV modules. In principle, therefore, when writing OpenCV code you only need to add this single header:

#include <opencv2/opencv.hpp>

which keeps the code a bit leaner.

As a beginner, though, I still include each module's own header by default when writing code.

2.4 Naming Conventions

2.4.1 Naming Conventions Used in the Book's Examples

These conventions come from the naming rules in Code Complete (2nd Edition), with slight modifications:

| Description | Example |
| --- | --- |
| Class names use mixed case with the first letter capitalized | ClassName |
| Type definitions, including enums and typedefs, use mixed case with the first letter capitalized | TypeName |
| Enum types use mixed case and are additionally written in the plural | EnumeratedTypes |
| Local variables use mixed case with a lowercase first letter; the name should be independent of the underlying data type and reflect what the variable represents | localVariable |
| Routine parameters use mixed case with each word's first letter capitalized; the name should be independent of the underlying data type and reflect what the variable represents | RoutineParameter |
| Member variables visible to a class's routines (and only to them) take the m_ prefix | m_ClassVariable |
| Global variable names take the g_ prefix | g_GlobalVariable |
| Named constants are all uppercase | CONSTANT |
| Macros are all uppercase, with words separated by "_" | SCREEN_WIDTH |
| Enum members take a singular-form prefix that reflects their base type, e.g. Color_Red, Color_Blue | Base_EnumeratedType |

2.4.2 Hungarian Notation

Hungarian notation:

  • Basic principle: variable name = attribute + type + object description

  • Variable naming conventions:

| Prefix | Type | Description | Example |
| --- | --- | --- | --- |
| ch | char | 8-bit character | chGrade |
| ch | TCHAR | 16-bit character if _UNICODE is defined | chName |
| b | BOOL | Boolean value | bEnable |
| n | int | integer (size depends on the OS) | nLength |
| n | UINT | unsigned value (size depends on the OS) | nHeight |
| w | WORD | 16-bit unsigned value | wPos |
| l | LONG | 32-bit signed integer | lOffset |
| dw | DWORD | 32-bit unsigned integer | dwRange |
| p | * | pointer | pDoc |
| lp | FAR* | far pointer | lpszName |
| lpsz | LPSTR | 32-bit string pointer | lpszName |
| lpsz | LPCSTR | 32-bit constant string pointer | lpszName |
| lpsz | LPCTSTR | 32-bit constant string pointer if _UNICODE is defined | lpszName |
| h | handle | handle to a Windows object | hWnd |
| lpfn | callback | far pointer to a CALLBACK function | lpfnName |

  • Common keyword combinations:

| Meaning | Keyword |
| --- | --- |
| maximum | Max |
| minimum | Min |
| initialize | Init |
| temporary variable | T (or Temp) |
| source object | Src |
| destination object | Dst |

2.5 Demystifying the argc and argv Parameters

2.5.1 First Look at argc and argv in main

The "arg" in argc and argv means "argument",

as in arguments, argument counter, and argument vector.

Specifically:

  • argc is an integer that counts the command-line arguments passed to main when the program is run
  • argv[] is an array of strings: an array of pointers, each element pointing to one argument string

2.5.2 The Exact Meaning of argc and argv

In fact, main(int argc, char *argv[], char **env) is the standard form on UNIX and Linux.

  • argc, of type int
    • an integer
    • counts the command-line arguments passed to main when the program runs
    • defaults to 1 in VS
  • argv[], of type char*
    • an array of strings
    • an array of pointers, each element pointing to one argument string
    • its elements mean the following:
      • argv[0] points to the full path of the running program
      • argv[1] points to the first string after the program name on the command line
      • argv[2] points to the second string after the program name
      • argv[3] points to the third string after the program name
      • argv[argc] is NULL
  • env, of type char**
    • an array of strings
    • each element of env[] is a string of the form ENVVAR=value
      • ENVVAR is an environment variable
      • value is its value
    • OpenCV rarely uses this parameter

2.5.3 Several Ways to Write main in Visual Studio

All three of the following forms are legal in the VS compiler.

[1] main returning int, with parameters

int main(int argc, char** argv)
{
    // the body may or may not use argc and argv
    .....
    return 1;
}

[2] main returning int, without parameters

int main()
{
    .....
    return 1;
}

[3] main returning void, without parameters (accepted by VS, though not standard C++)

void main()
{
    .....
}

2.5.4 Summary

  • int argc is the number of command-line strings
  • char* argv[] holds the command-line argument strings

2.6 A Brief Look at the Formatted Output Function printf()

This section is skipped.

2.6.1 Formatted Output: the printf() Function

Sample Program: printf Usage Example

2.7 Displaying the Current OpenCV Version Automatically

CV_VERSION is a macro that identifies the current OpenCV version, so the following code can report which version is in use.

[1] Using the printf function

printf("\tCurrent OpenCV version: %s", CV_VERSION);

(My original line was missing the comma before CV_VERSION; written as "%s"CV_VERSION, the macro is spliced onto the format string and %s is left without an argument, which is why it did not print properly. It was not a Qt issue.)

[2] Using cout

cout << "\tCurrent OpenCV version: " << CV_VERSION;

[3] Using qDebug in Qt

qDebug() << "\tCurrent OpenCV version: " << CV_VERSION;

2.8 Chapter Summary

  • Browsing some of the official OpenCV sample code showed me roughly what OpenCV can do
  • The book also covers the CMake steps for building the OpenCV source, but since I had already learned this elsewhere I only skimmed that part
  • Took a brief look at the opencv.hpp header
  • Code naming conventions: Hungarian notation
  • argc and argv
  • Displaying the OpenCV version
