Face Detection Based on OpenCV

For background on the OpenCV library itself, readers can look it up online; there's no primer here. This article just uses the open-source OpenCV library to implement face detection, which is the foundation for later features such as beautification, stickers, and "learn to meow together".

www.opencv.org.cn/forum.php:中…

opencv.org/releases.ht…

  1. On Mac, first install OpenCV: simply `brew install opencv`. Once it finishes, you can find the installation under /usr/local/Cellar/opencv/3.4.3. Also download the latest Android package from opencv.org/releases.ht… — it will be needed later when integrating with Android, and you can refer to the demos inside. The trained model ships in the unpacked archive at OpenCV-android-sdk/sdk/etc/lbpcascades/lbpcascade_frontalface.xml.

  2. OpenCV provides two kinds of trained cascade models; here we use lbpcascade_frontalface.xml. For the LBP algorithm and feature-vector extraction, see baike.baidu.com/item/lbp/66…

  3. We implement this in CLion first and then port it to Android, because in CLion a single function call brings up the PC's camera preview, which is convenient for debugging.

    cmake_minimum_required(VERSION 3.10)
    
    project(OpencvFace)
    
    set(CMAKE_BUILD_TYPE Debug)
    
    set(CMAKE_CXX_STANDARD 11)
    
    # Point these at the version brew actually installed (step 1 above used 3.4.3)
    set(Opencv_include "/usr/local/Cellar/opencv/3.4.2/include")
    
    set(Opencv_lib "/usr/local/Cellar/opencv/3.4.2/lib")
    
    # Bring in the header files
    include_directories(${Opencv_include})
    
    # Bring in the library files
    link_directories(${Opencv_lib})
    
    add_executable(OpencvFace FaceTracking.cpp FaceTracking.h)
    
    set(OpenCV_LIBS ippicv opencv_line_descriptor ippiw opencv_ml ittnotify opencv_objdetect libjpeg-turbo opencv_optflow libprotobuf opencv_phase_unwrapping libwebp opencv_photo opencv_aruco opencv_plot opencv_bgsegm opencv_reg opencv_bioinspired opencv_rgbd opencv_calib3d opencv_saliency opencv_ccalib opencv_shape opencv_core opencv_stereo opencv_datasets opencv_stitching opencv_dnn opencv_structured_light opencv_dnn_objdetect opencv_superres opencv_dpm opencv_surface_matching opencv_face opencv_tracking opencv_features2d opencv_video opencv_flann opencv_videoio opencv_fuzzy  opencv_videostab opencv_hfs opencv_xfeatures2d opencv_highgui opencv_ximgproc opencv_img_hash opencv_xobjdetect opencv_imgcodecs opencv_xphoto opencv_imgproc)
    
    target_link_libraries(OpencvFace ${OpenCV_LIBS})
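As an aside, if the Homebrew-installed OpenCV ships CMake package config files (recent formulae do install OpenCVConfig.cmake), the hard-coded paths and the exhaustive library list above can be replaced with `find_package`. This is an untested sketch of that alternative:

```cmake
cmake_minimum_required(VERSION 3.10)
project(OpencvFace)
set(CMAKE_CXX_STANDARD 11)

# Locates headers and libraries via the OpenCVConfig.cmake installed by brew
find_package(OpenCV REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})

add_executable(OpencvFace FaceTracking.cpp FaceTracking.h)
target_link_libraries(OpencvFace ${OpenCV_LIBS})
```

Here `${OpenCV_LIBS}` is populated by the config file itself, so you don't have to enumerate every module by hand.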

All of the static libraries are linked in here; I didn't know in advance which ones would be needed, so I imported them all. You could also build them into shared libraries instead, but I haven't done that here. On to the code:

    #include <opencv2/opencv.hpp>
    #include <vector>
    
    using namespace cv;
    
    // CascadeDetectorAdapter wraps a CascadeClassifier behind the
    // DetectionBasedTracker::IDetector interface, as in OpenCV's
    // dbt_face_detection.cpp sample (defined here in FaceTracking.h).
    
    int main() {
        // Smart pointer to a classifier loaded from the LBP model
        Ptr<CascadeClassifier> classifier = makePtr<CascadeClassifier>(
                "/Users/xiuchengyin/Downloads/FFmpeg_Third_Jar/OpenCV-android-sdk/sdk/etc/lbpcascades/lbpcascade_frontalface.xml");
        // Create a tracking adapter around it
        Ptr<DetectionBasedTracker::IDetector> mainDetector =
                makePtr<CascadeDetectorAdapter>(classifier);
    
        Ptr<CascadeClassifier> classifier1 = makePtr<CascadeClassifier>(
                "/Users/xiuchengyin/Downloads/FFmpeg_Third_Jar/OpenCV-android-sdk/sdk/etc/lbpcascades/lbpcascade_frontalface.xml");
        Ptr<DetectionBasedTracker::IDetector> trackingDetector =
                makePtr<CascadeDetectorAdapter>(classifier1);
    
        // The tracker we will actually use
        DetectionBasedTracker::Parameters DetectorParams;
        Ptr<DetectionBasedTracker> tracker =
                makePtr<DetectionBasedTracker>(mainDetector, trackingDetector, DetectorParams);
        // Start the tracker
        tracker->run();
    
        VideoCapture capture(0);
        Mat img;
        Mat gray;
    
        while (1) {
            capture >> img;
            // img's color space is BGR; early computers mostly used BGR, not RGB
            cvtColor(img, gray, COLOR_BGR2GRAY);
            // Boost contrast (histogram equalization)
            equalizeHist(gray, gray);
            std::vector<Rect> faces;
            // Locate the N faces
            tracker->process(gray);
            tracker->getObjects(faces);
            //classifier->detectMultiScale(gray, faces);
            for (Rect face : faces) {
                // Draw a rectangle; Scalar channels are given in b, g, r order
                rectangle(img, face, Scalar(255, 0, 255));
            }
            imshow("Camera", img);
            // Wait up to 30 ms for a key press before grabbing the next frame
            waitKey(30);
        }
        tracker->stop();
        return 0;
    }

There isn't much code and it's all API calls; let me walk through a few key points:

img's color space is BGR; on early computers BGR, rather than RGB, was the mainstream channel order. Full-color BGR covers an enormous number of colors, so the data is large and slow to process; converting to grayscale drops the color, reduces noise, and shrinks what we have to store and process:

    cvtColor(img, gray, COLOR_BGR2GRAY);

Create the tracker (Tracker) with two classifier adapters, each wrapping the LBP model shipped with the library. Call tracker->process(gray) to run detection, then tracker->getObjects(faces) to fetch the results into faces, and finally draw the rectangles:

    for (Rect face : faces) {
        // Draw a rectangle
        // Scalar channels are given in b, g, r(, a) order
        rectangle(img, face, Scalar(255, 0, 255));
    }

imshow() can only bring up a preview on the PC; it cannot drive a phone's Camera preview. That is exactly why it's convenient to debug on the PC first.

Actually, the library also ships a trained model for eyes, which isn't used here; locating them works much the same way. There are also more precise algorithms for facial landmarks, which we'll bring in later when implementing beautification and other effects.

Today's post is fairly short; just download the source code (opencv face detection) and run it.

As for the IDE (CLion), I used an open-source method I found online, and it works well — not sure whether it's quite appropriate to post it here, haha.
