C# Face Recognition Library

ViewFaceCore: a face recognition library for .NET

This is a face recognition library for the .NET platform, built on SeetaFace6.
It is extremely simple to use.
It targets .NET Standard 2.0.
It is published on NuGet, so you can integrate it into your project in one step.
It is free for commercial use.

⭐ Open Source

License: Apache-2.0
GitHub: ViewFaceCore
Your star is much appreciated!

1. Example

Example project: WinForm camera face detection
Example project in action:

(Screenshot: the WinForm demo detecting faces from a camera feed.)

 

2. Usage

Integrate face recognition into your project in one minute.

1. Create your .NET application

.NET Standard >= 2.0
.NET Core >= 2.0
.NET Framework >= 4.6.1


2. Install ViewFaceCore from NuGet

  • Author: View
  • Version >= 0.1.1

The NuGet package automatically adds the native C++ libraries it depends on, together with the smallest set of recognition models.
If you need models for other scenarios, download the SeetaFace6 model files.

3. Write your code

  • Write it yourself following the reference in section 3 below, or
  • Refer to the following code

A simple example:

static void Main()
{
    // Initialize the face recognition class and set the log callback.
    ViewFace viewFace = new ViewFace((str) => { Debug.WriteLine(str); });
    viewFace.DetectorSetting = new DetectorSetting() { FaceSize = 20, MaxWidth = 2000, MaxHeight = 2000, Threshold = 0.5 };

    // The lightweight recognition model is used by default. If you need higher accuracy,
    // switch to Normal mode and put the required model files into the "model" folder of the output directory.
    viewFace.FaceType = FaceType.Normal;
    // 5 face landmarks are used by default. Changing this is not recommended unless you use the mask model.
    viewFace.MarkType = MarkType.Light;

    #region Recognize the old photo
    float[] oldEigenValues;
    // Load the photo from a file (or from a video frame, etc.).
    Bitmap oldImg = (Bitmap)Image.FromFile(@"C:\Users\yangw\OneDrive\图片\Camera Roll\IMG_20181103_142707.jpg" /* old photo path */);
    var oldFaces = viewFace.FaceDetector(oldImg); // detect the faces in the image (confidence, position, size)
    if (oldFaces.Length > 0) // faces detected
    {
        { // print face info
            Console.WriteLine($"Number of faces detected: {oldFaces.Length}. Face info:\n");
            Console.WriteLine($"No.\tConfidence\tX\tY\tWidth\tHeight");
            for (int i = 0; i < oldFaces.Length; i++)
            {
                Console.WriteLine($"{i + 1}\t{oldFaces[i].Score}\t{oldFaces[i].Location.X}\t{oldFaces[i].Location.Y}\t{oldFaces[i].Location.Width}\t{oldFaces[i].Location.Height}");
            }
            Console.WriteLine();
        }
        var oldPoints = viewFace.FaceMark(oldImg, oldFaces[0]); // get the landmarks of the first face
        oldEigenValues = viewFace.Extract(oldImg, oldPoints); // extract the feature vector for those landmarks
    }
    else { oldEigenValues = new float[0]; /* no face detected */ }
    #endregion

    #region Recognize the new photo
    float[] newEigenValues;
    // Load the photo from a file (or from a video frame, etc.).
    Bitmap newImg = (Bitmap)Image.FromFile(@"C:\Users\yangw\OneDrive\图片\Camera Roll\IMG_20181129_224339.jpg" /* new photo path */);
    var newFaces = viewFace.FaceDetector(newImg); // detect the faces in the image (confidence, position, size)
    if (newFaces.Length > 0) // faces detected
    {
        { // print face info
            Console.WriteLine($"Number of faces detected: {newFaces.Length}. Face info:\n");
            Console.WriteLine($"No.\tConfidence\tX\tY\tWidth\tHeight");
            for (int i = 0; i < newFaces.Length; i++)
            {
                Console.WriteLine($"{i + 1}\t{newFaces[i].Score}\t{newFaces[i].Location.X}\t{newFaces[i].Location.Y}\t{newFaces[i].Location.Width}\t{newFaces[i].Location.Height}");
            }
            Console.WriteLine();
        }
        var newPoints = viewFace.FaceMark(newImg, newFaces[0]); // get the landmarks of the first face
        newEigenValues = viewFace.Extract(newImg, newPoints); // extract the feature vector for those landmarks
    }
    else { newEigenValues = new float[0]; /* no face detected */ }
    #endregion

    try
    {
        // Compare the two feature vectors to decide whether they belong to the same person.
        float similarity = viewFace.Similarity(oldEigenValues, newEigenValues);
        Console.WriteLine($"Threshold = {Face.Threshold[viewFace.FaceType]}\tSimilarity = {similarity}");
        Console.WriteLine($"Same person: {viewFace.IsSelf(similarity)}");
    }
    catch (Exception e)
    { Console.WriteLine(e); }

    Console.ReadKey();
}
ViewFaceCore usage example

 

3. Reference

Namespace: ViewFaceCore.Sharp — the namespace containing the face recognition class

  • Properties:

Property         Type             Description                                                     Default
ModelPath        string           Gets or sets the model path [do not change unless necessary]    ./model/
FaceType         FaceType         Gets or sets the face type                                      FaceType.Light
MarkType         MarkType         Gets or sets the face landmark type                             MarkType.Light
DetectorSetting  DetectorSetting  Gets or sets the face detector settings                         new DetectorSetting()

  • Methods:

 

using System.Drawing;
using ViewFaceCore.Sharp;
using ViewFaceCore.Sharp.Model;

// Detects the faces in the bitmap and returns their information.
FaceInfo[] FaceDetector(Bitmap);

// Returns the landmark coordinates for the face described by the given FaceInfo.
FaceMarkPoint[] FaceMark(Bitmap, FaceInfo);

// Extracts the face feature vector.
float[] Extract(Bitmap, FaceMarkPoint[]);

// Calculates the similarity of two feature vectors.
float Similarity(float[], float[]);

// Decides from a similarity value whether it is the same person.
bool IsSelf(float);

 

4. Implementation

This project was inspired by the SeetaFaceEngine.NET project.

At its core, this project still calls the SeetaFace C++ libraries to perform face recognition. The related libraries I had encountered were inconvenient to use and built on an old version of SeetaFace, which is why I decided to develop my own.

To make the library easy to consume, it is shipped as a NuGet package that bundles all required dependencies together with the smallest recognition models. Using it is as simple as installing the package, writing your code, and running; no extra steps are needed.

First, a look at SeetaFace: it has been updated to v3 (v6 is v3; the predecessor project mentioned above was based on v1). The latest version is not yet open source, but it is free for commercial use. Guided by earlier experience, the SeetaFace6 documentation, and the predecessor project, I did the following.

1. Wrapped the SeetaFace6 interfaces in a C-style exported C++ API.

So far, the basic interfaces of a face recognition system are implemented: face detection, landmark extraction, feature extraction, and feature comparison. These four interfaces are enough to implement a complete recognition and verification pipeline.
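The four interfaces map onto a simple pipeline: detect faces, locate landmarks, extract a feature vector per face, and compare two feature vectors against a threshold. SeetaFace6 performs the comparison internally; purely as an illustration (the cosine metric and the 0.62 threshold below are assumptions for the sketch, not values taken from SeetaFace), the final comparison step often looks like this:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Illustrative sketch only: SeetaFace6 compares features inside
// FaceRecognizer::CalculateSimilarity. Cosine similarity is a common
// choice for comparing two face feature vectors.
float CosineSimilarity(const std::vector<float>& a, const std::vector<float>& b) {
    assert(a.size() == b.size() && !a.empty());
    double dot = 0, normA = 0, normB = 0;
    for (size_t i = 0; i < a.size(); ++i) {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
    }
    return static_cast<float>(dot / (std::sqrt(normA) * std::sqrt(normB)));
}

// Mirrors the IsSelf decision: "same person" when the similarity
// exceeds a model-specific threshold (0.62 here is a made-up value).
bool IsSameFace(float similarity, float threshold = 0.62f) {
    return similarity > threshold;
}
```

Two vectors pointing in the same direction score 1.0 regardless of magnitude, which is why feature vectors from different models (different lengths, different scales) must never be compared against each other.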

  • The C++ wrapper code is as follows:
#include "seeta/FaceDetector.h"
#include "seeta/FaceLandmarker.h"
#include "seeta/FaceRecognizer.h"

#include <string>
#include <vector>
#include <memory>
#include <ctime>

#define View_Api extern "C" __declspec(dllexport)

using namespace std;

typedef void(__stdcall* LogCallBack)(const char* logText);

string modelPath = "./model/"; // directory containing the model files
LogCallBack logger = NULL;     // log callback function

// Write a log line
void WriteLog(string str) { if (logger != NULL) { logger(str.c_str()); } }

void WriteMessage(string functionName, string message) { WriteLog(functionName + "\t Message:" + message); }
void WriteModelName(string functionName, string modelName) { WriteLog(functionName + "\t Model.Name:" + modelName); }
void WriteRunTime(string functionName, int start) { WriteLog(functionName + "\t Run.Time:" + to_string(clock() - start) + " ms"); }
void WriteError(string functionName, const std::exception& e) { WriteLog(functionName + "\t Error:" + e.what()); }

// Register the log callback function
View_Api void V_SetLogFunction(LogCallBack writeLog)
{
    logger = writeLog;
    WriteMessage(__FUNCDNAME__, "Succeeded.");
}

// Set the face model directory
View_Api void V_SetModelPath(const char* path)
{
    modelPath = path;
    WriteMessage(__FUNCDNAME__, "Model.Path:" + modelPath);
}
// Get the face model directory
View_Api bool V_GetModelPath(char** path)
{
    try
    {
#pragma warning(disable:4996)
        strcpy(*path, modelPath.c_str());

        return true;
    }
    catch (const std::exception& e)
    {
        WriteError(__FUNCDNAME__, e);
        return false;
    }
}

seeta::FaceDetector* v_faceDetector = NULL;

// Face detection results of the last call
static SeetaFaceInfoArray detectorInfos;
// Face detector: runs detection and returns the number of faces found
View_Api int V_DetectorSize(unsigned char* imgData, int width, int height, int channels, double faceSize = 20, double threshold = 0.9, double maxWidth = 2000, double maxHeight = 2000, int type = 0)
{
    try {
        clock_t start = clock();

        SeetaImageData img = { width, height, channels, imgData };
        if (v_faceDetector == NULL) {
            seeta::ModelSetting setting;
            setting.set_device(SEETA_DEVICE_CPU);
            string modelName = "face_detector.csta";
            switch (type)
            {
            case 1: modelName = "mask_detector.csta"; break;
            }
            setting.append(modelPath + modelName);
            WriteModelName(__FUNCDNAME__, modelName);
            v_faceDetector = new seeta::FaceDetector(setting);
        }

        if (faceSize != 20) { v_faceDetector->set(seeta::FaceDetector::Property::PROPERTY_MIN_FACE_SIZE, faceSize); }
        if (threshold != 0.9) { v_faceDetector->set(seeta::FaceDetector::Property::PROPERTY_THRESHOLD, threshold); }
        if (maxWidth != 2000) { v_faceDetector->set(seeta::FaceDetector::Property::PROPERTY_MAX_IMAGE_WIDTH, maxWidth); }
        if (maxHeight != 2000) { v_faceDetector->set(seeta::FaceDetector::Property::PROPERTY_MAX_IMAGE_HEIGHT, maxHeight); }

        auto infos = v_faceDetector->detect(img);
        detectorInfos = infos;

        WriteRunTime("V_Detector", start); // this call performs the whole detection, so the timing is reported under the detector's name
        return infos.size;
    }
    catch (const std::exception& e)
    {
        WriteError(__FUNCDNAME__, e);
        return -1;
    }
}
// Face detector: copies the detection results into the caller-allocated arrays
View_Api bool V_Detector(float* score, int* x, int* y, int* width, int* height)
{
    try
    {
        for (int i = 0; i < detectorInfos.size; i++, detectorInfos.data++)
        {
            *score = detectorInfos.data->score;
            *x = detectorInfos.data->pos.x;
            *y = detectorInfos.data->pos.y;
            *width = detectorInfos.data->pos.width;
            *height = detectorInfos.data->pos.height;
            score++, x++, y++, width++, height++;
        }
        detectorInfos.data = NULL;
        detectorInfos.size = 0;

        // Not timed: this method only copies the data produced by V_DetectorSize.
        return true;
    }
    catch (const std::exception& e)
    {
        WriteError(__FUNCDNAME__, e);
        return false;
    }
}


seeta::FaceLandmarker* v_faceLandmarker = NULL;
// Number of face landmarks
View_Api int V_FaceMarkSize(int type = 0)
{
    try
    {
        clock_t start = clock();

        if (v_faceLandmarker == NULL) {
            seeta::ModelSetting setting;
            setting.set_device(SEETA_DEVICE_CPU);
            string modelName = "face_landmarker_pts68.csta";
            switch (type)
            {
            case 1: modelName = "face_landmarker_mask_pts5.csta"; break;
            case 2: modelName = "face_landmarker_pts5.csta"; break;
            }
            setting.append(modelPath + modelName);
            WriteModelName(__FUNCDNAME__, modelName);
            v_faceLandmarker = new seeta::FaceLandmarker(setting);
        }
        int size = v_faceLandmarker->number();

        WriteRunTime(__FUNCDNAME__, start);
        return size;
    }
    catch (const std::exception& e)
    {
        WriteError(__FUNCDNAME__, e);
        return -1;
    }
}
// Face landmarks
View_Api bool V_FaceMark(unsigned char* imgData, int width, int height, int channels, int x, int y, int fWidth, int fHeight, double* pointX, double* pointY, int type = 0)
{
    try
    {
        clock_t start = clock();

        SeetaImageData img = { width, height, channels, imgData };
        SeetaRect face = { x, y, fWidth, fHeight };
        if (v_faceLandmarker == NULL) {
            seeta::ModelSetting setting;
            setting.set_device(SEETA_DEVICE_CPU);
            string modelName = "face_landmarker_pts68.csta";
            switch (type)
            {
            case 1: modelName = "face_landmarker_mask_pts5.csta"; break;
            case 2: modelName = "face_landmarker_pts5.csta"; break;
            }
            setting.append(modelPath + modelName);
            WriteModelName(__FUNCDNAME__, modelName);
            v_faceLandmarker = new seeta::FaceLandmarker(setting);
        }
        std::vector<SeetaPointF> _points = v_faceLandmarker->mark(img, face);

        if (!_points.empty()) {
            for (auto iter = _points.begin(); iter != _points.end(); iter++)
            {
                *pointX = (*iter).x;
                *pointY = (*iter).y;
                pointX++;
                pointY++;
            }

            WriteRunTime(__FUNCDNAME__, start);
            return true;
        }
        else { return false; }
    }
    catch (const std::exception& e)
    {
        WriteError(__FUNCDNAME__, e);
        return false;
    }
}

seeta::FaceRecognizer* v_faceRecognizer = NULL;
// Length of the face feature vector
View_Api int V_ExtractSize(int type = 0)
{
    try
    {
        clock_t start = clock();

        if (v_faceRecognizer == NULL) {
            seeta::ModelSetting setting;
            setting.set_id(0);
            setting.set_device(SEETA_DEVICE_CPU);
            string modelName = "face_recognizer.csta";
            switch (type)
            {
            case 1: modelName = "face_recognizer_mask.csta"; break;
            case 2: modelName = "face_recognizer_light.csta"; break;
            }
            setting.append(modelPath + modelName);
            WriteModelName(__FUNCDNAME__, modelName);
            v_faceRecognizer = new seeta::FaceRecognizer(setting);
        }
        int length = v_faceRecognizer->GetExtractFeatureSize();

        WriteRunTime(__FUNCDNAME__, start);
        return length;
    }
    catch (const std::exception& e)
    {
        WriteError(__FUNCDNAME__, e);
        return -1;
    }
}
// Extract the face feature vector
View_Api bool V_Extract(unsigned char* imgData, int width, int height, int channels, SeetaPointF* points, float* features, int type = 0)
{
    try
    {
        clock_t start = clock();

        SeetaImageData img = { width, height, channels, imgData };
        if (v_faceRecognizer == NULL) {
            seeta::ModelSetting setting;
            setting.set_id(0);
            setting.set_device(SEETA_DEVICE_CPU);
            string modelName = "face_recognizer.csta";
            switch (type)
            {
            case 1: modelName = "face_recognizer_mask.csta"; break;
            case 2: modelName = "face_recognizer_light.csta"; break;
            }
            setting.append(modelPath + modelName);
            WriteModelName(__FUNCDNAME__, modelName);
            v_faceRecognizer = new seeta::FaceRecognizer(setting);
        }
        int length = v_faceRecognizer->GetExtractFeatureSize();
        std::shared_ptr<float> _features(new float[v_faceRecognizer->GetExtractFeatureSize()], std::default_delete<float[]>());
        v_faceRecognizer->Extract(img, points, _features.get());

        for (int i = 0; i < length; i++)
        {
            *features = _features.get()[i];
            features++;
        }

        WriteRunTime(__FUNCDNAME__, start);
        return true;
    }
    catch (const std::exception& e)
    {
        WriteError(__FUNCDNAME__, e);
        return false;
    }
}
// Calculate the similarity of two face feature vectors
View_Api float V_CalculateSimilarity(float* leftFeatures, float* rightFeatures, int type = 0)
{
    try
    {
        clock_t start = clock();

        if (v_faceRecognizer == NULL) {
            seeta::ModelSetting setting;
            setting.set_id(0);
            setting.set_device(SEETA_DEVICE_CPU);
            string modelName = "face_recognizer.csta";
            switch (type)
            {
            case 1: modelName = "face_recognizer_mask.csta"; break;
            case 2: modelName = "face_recognizer_light.csta"; break;
            }
            setting.append(modelPath + modelName);
            WriteModelName(__FUNCDNAME__, modelName);
            v_faceRecognizer = new seeta::FaceRecognizer(setting);
        }

        auto similarity = v_faceRecognizer->CalculateSimilarity(leftFeatures, rightFeatures);
        WriteMessage(__FUNCDNAME__, "Similarity = " + to_string(similarity));
        WriteRunTime(__FUNCDNAME__, start);
        return similarity;
    }
    catch (const std::exception& e)
    {
        WriteError(__FUNCDNAME__, e);
        return -1;
    }
}

// Release resources
View_Api void V_Dispose()
{
    if (v_faceDetector != NULL) delete v_faceDetector;
    if (v_faceLandmarker != NULL) delete v_faceLandmarker;
    if (v_faceRecognizer != NULL) delete v_faceRecognizer;
}
C++ wrapper layer
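One pattern in the wrapper above is worth isolating: logging is routed through a function pointer that the host registers once (V_SetLogFunction), after which every native log line is forwarded to the managed side. The same pattern in a stand-alone sketch, with the SeetaFace parts mocked away (every name with a _Mock suffix is invented for this sketch):

```cpp
#include <string>

// The same callback shape as the wrapper's LogCallBack typedef.
typedef void (*LogCallBack)(const char* logText);

static LogCallBack g_logger = nullptr;  // set once by the host
static std::string g_lastLine;          // captured by the demo callback

// Mirrors V_SetLogFunction: remember the host's callback.
void V_SetLogFunction_Mock(LogCallBack writeLog) { g_logger = writeLog; }

// Mirrors WriteLog: forward a line only if a callback is registered.
void WriteLog_Mock(const std::string& s) {
    if (g_logger != nullptr) g_logger(s.c_str());
}
```

A non-capturing lambda converts to the required function pointer, which is essentially what the C# delegate marshalling does for the real wrapper.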

2. Imported the above interfaces into C#.

Because the C++ side is built separately for the x86 and x64 CPU architectures, the C# layer also has to be wrapped per architecture.

using System.Runtime.InteropServices;
using System.Text;
using ViewFaceCore.Sharp.Model;

namespace ViewFaceCore.Plus
{
    /// <summary>
    /// Log callback function.
    /// </summary>
    public delegate void LogCallBack(string logText);

    class ViewFacePlus64
    {
        const string LibraryPath = @"FaceLibraries\x64\ViewFace.dll";

        /// <summary>
        /// Sets the log callback function (used for log printing).
        /// </summary>
        [DllImport(LibraryPath, EntryPoint = "V_SetLogFunction", CallingConvention = CallingConvention.Cdecl)]
        public static extern void SetLogFunction(LogCallBack writeLog);

        /// <summary>
        /// Sets the face model directory.
        /// </summary>
        [DllImport(LibraryPath, EntryPoint = "V_SetModelPath", CallingConvention = CallingConvention.Cdecl)]
        private extern static void SetModelPath(byte[] path);
        /// <summary>
        /// Sets the face model directory.
        /// </summary>
        public static void SetModelPath(string path) => SetModelPath(Encoding.UTF8.GetBytes(path));

        /// <summary>
        /// Releases the resources in use.
        /// </summary>
        [DllImport(LibraryPath, EntryPoint = "V_Dispose", CallingConvention = CallingConvention.Cdecl)]
        public extern static void ViewDispose();

        /// <summary>
        /// Gets the face model directory.
        /// </summary>
        [DllImport(LibraryPath, EntryPoint = "V_GetModelPath", CallingConvention = CallingConvention.Cdecl)]
        private extern static bool GetModelPathEx(ref string path);
        /// <summary>
        /// Gets the face model directory.
        /// </summary>
        public static string GetModelPath() { string path = string.Empty; GetModelPathEx(ref path); return path; }

        /// <summary>
        /// Runs face detection and returns the number of faces detected.
        /// <para>faceSize: the minimum face size, a common detector concept; default 20, in pixels.
        /// It is closely tied to detector performance, mainly speed: SeetaFace trains its detector with
        /// bounding-box regression, so with a minimum face of 80 the image can be downscaled to 1/4 of
        /// its original size, up to 16x faster than with a minimum face of 20. Set it as large as your
        /// application allows.</para>
        /// <para>threshold: detector threshold, default 0.9, valid range [0, 1]. It rarely needs tuning
        /// outside extreme cases; lower values miss fewer faces but raise the false-detection rate.</para>
        /// <para>maxWidth: the maximum detectable image width; default 2000.</para>
        /// <para>maxHeight: the maximum detectable image height; default 2000.</para>
        /// </summary>
        [DllImport(LibraryPath, EntryPoint = "V_DetectorSize", CallingConvention = CallingConvention.Cdecl)]
        public extern static int DetectorSize(byte[] imgData, int width, int height, int channels, double faceSize = 20, double threshold = 0.9, double maxWidth = 2000, double maxHeight = 2000, int type = 0);
        /// <summary>
        /// Face detector: fills the confidence, position and size arrays.
        /// DetectorSize must be called first.
        /// </summary>
        [DllImport(LibraryPath, EntryPoint = "V_Detector", CallingConvention = CallingConvention.Cdecl)]
        public extern static bool Detector(float[] score, int[] x, int[] y, int[] width, int[] height);

        /// <summary>
        /// Number of face landmarks.
        /// </summary>
        [DllImport(LibraryPath, EntryPoint = "V_FaceMarkSize", CallingConvention = CallingConvention.Cdecl)]
        public extern static int FaceMarkSize(int type = 0);
        /// <summary>
        /// Face landmarks.
        /// </summary>
        [DllImport(LibraryPath, EntryPoint = "V_FaceMark", CallingConvention = CallingConvention.Cdecl)]
        public extern static bool FaceMark(byte[] imgData, int width, int height, int channels, int x, int y, int fWidth, int fHeight, double[] pointX, double[] pointY, int type = 0);

        /// <summary>
        /// Extracts the face feature vector.
        /// </summary>
        [DllImport(LibraryPath, EntryPoint = "V_Extract", CallingConvention = CallingConvention.Cdecl)]
        public extern static bool Extract(byte[] imgData, int width, int height, int channels, FaceMarkPoint[] points, float[] features, int type = 0);
        /// <summary>
        /// Length of the feature vector.
        /// </summary>
        [DllImport(LibraryPath, EntryPoint = "V_ExtractSize", CallingConvention = CallingConvention.Cdecl)]
        public extern static int ExtractSize(int type = 0);

        /// <summary>
        /// Calculates the similarity of two feature vectors.
        /// </summary>
        [DllImport(LibraryPath, EntryPoint = "V_CalculateSimilarity", CallingConvention = CallingConvention.Cdecl)]
        public extern static float Similarity(float[] leftFeatures, float[] rightFeatures, int type = 0);
    }

    class ViewFacePlus32
    {
        const string LibraryPath = @"FaceLibraries\x86\ViewFace.dll";

        // Identical to ViewFacePlus64 except for LibraryPath; see that class for parameter details.

        /// <summary>
        /// Sets the log callback function (used for log printing).
        /// </summary>
        [DllImport(LibraryPath, EntryPoint = "V_SetLogFunction", CallingConvention = CallingConvention.Cdecl)]
        public static extern void SetLogFunction(LogCallBack writeLog);

        /// <summary>
        /// Sets the face model directory.
        /// </summary>
        [DllImport(LibraryPath, EntryPoint = "V_SetModelPath", CallingConvention = CallingConvention.Cdecl)]
        private extern static void SetModelPath(byte[] path);
        /// <summary>
        /// Sets the face model directory.
        /// </summary>
        public static void SetModelPath(string path) => SetModelPath(Encoding.UTF8.GetBytes(path));

        /// <summary>
        /// Releases the resources in use.
        /// </summary>
        [DllImport(LibraryPath, EntryPoint = "V_Dispose", CallingConvention = CallingConvention.Cdecl)]
        public extern static void ViewDispose();

        /// <summary>
        /// Gets the face model directory.
        /// </summary>
        [DllImport(LibraryPath, EntryPoint = "V_GetModelPath", CallingConvention = CallingConvention.Cdecl)]
        private extern static bool GetModelPathEx(ref string path);
        /// <summary>
        /// Gets the face model directory.
        /// </summary>
        public static string GetModelPath() { string path = string.Empty; GetModelPathEx(ref path); return path; }

        /// <summary>
        /// Runs face detection and returns the number of faces detected.
        /// </summary>
        [DllImport(LibraryPath, EntryPoint = "V_DetectorSize", CallingConvention = CallingConvention.Cdecl)]
        public extern static int DetectorSize(byte[] imgData, int width, int height, int channels, double faceSize = 20, double threshold = 0.9, double maxWidth = 2000, double maxHeight = 2000, int type = 0);
        /// <summary>
        /// Face detector: fills the confidence, position and size arrays.
        /// DetectorSize must be called first.
        /// </summary>
        [DllImport(LibraryPath, EntryPoint = "V_Detector", CallingConvention = CallingConvention.Cdecl)]
        public extern static bool Detector(float[] score, int[] x, int[] y, int[] width, int[] height);

        /// <summary>
        /// Number of face landmarks.
        /// </summary>
        [DllImport(LibraryPath, EntryPoint = "V_FaceMarkSize", CallingConvention = CallingConvention.Cdecl)]
        public extern static int FaceMarkSize(int type = 0);
        /// <summary>
        /// Face landmarks.
        /// </summary>
        [DllImport(LibraryPath, EntryPoint = "V_FaceMark", CallingConvention = CallingConvention.Cdecl)]
        public extern static bool FaceMark(byte[] imgData, int width, int height, int channels, int x, int y, int fWidth, int fHeight, double[] pointX, double[] pointY, int type = 0);

        /// <summary>
        /// Extracts the face feature vector.
        /// </summary>
        [DllImport(LibraryPath, EntryPoint = "V_Extract", CallingConvention = CallingConvention.Cdecl)]
        public extern static bool Extract(byte[] imgData, int width, int height, int channels, FaceMarkPoint[] points, float[] features, int type = 0);
        /// <summary>
        /// Length of the feature vector.
        /// </summary>
        [DllImport(LibraryPath, EntryPoint = "V_ExtractSize", CallingConvention = CallingConvention.Cdecl)]
        public extern static int ExtractSize(int type = 0);

        /// <summary>
        /// Calculates the similarity of two feature vectors.
        /// </summary>
        [DllImport(LibraryPath, EntryPoint = "V_CalculateSimilarity", CallingConvention = CallingConvention.Cdecl)]
        public extern static float Similarity(float[] leftFeatures, float[] rightFeatures, int type = 0);
    }
}
C# interop layer
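One detail of this interop layer deserves a note: detection is split into two calls, DetectorSize (runs detection and returns the face count) and Detector (fills caller-allocated arrays), because the managed side cannot size the output buffers before it knows how many faces exist. The protocol in isolation, with the detection itself mocked (every _Mock name is invented for this sketch):

```cpp
#include <vector>

// Stand-in for the native detectorInfos kept between the two calls.
static std::vector<float> g_scores;

// Mirrors V_DetectorSize: run detection, keep results, return the count.
int DetectorSize_Mock() {
    g_scores = {0.97f, 0.88f};  // pretend two faces were found
    return static_cast<int>(g_scores.size());
}

// Mirrors V_Detector: copy results into the caller-allocated array.
bool Detector_Mock(float* score) {
    for (float s : g_scores) { *score++ = s; }
    g_scores.clear();           // results are consumed exactly once
    return true;
}
```

The caller allocates an array of exactly the returned size and hands it over, which is why the wrapper resets the native result state after the copy: a second Detector call without a fresh DetectorSize would have nothing to deliver.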

3. Wrapped everything in an object-oriented C# class.

Because C# projects default to AnyCPU, this layer adds an architecture check to simplify consumption: when you reference the library, no changes are needed on your side.

Also, because P/Invoke imports read differently from ordinary C# and the data conversion and marshalling are cumbersome, the library hides the C# interop layer and exposes a further simplification in the object-oriented style everyone is familiar with.

/// <summary>
/// Face recognition class.
/// </summary>
public class ViewFace
{
    bool Platform64 { get; set; } = false;

    // ctor
    /// <summary>
    /// Initializes the face recognition class with the default model directory.
    /// </summary>
    public ViewFace() : this("./model/") { }
    /// <summary>
    /// Initializes the face recognition class with the specified model directory.
    /// </summary>
    /// <param name="modelPath">model directory</param>
    public ViewFace(string modelPath)
    {
        Platform64 = IntPtr.Size == 8;
        if (Platform64)
        { ViewFacePlus64.SetModelPath(modelPath); }
        else
        { ViewFacePlus32.SetModelPath(modelPath); }
    }
    /// <summary>
    /// Initializes the face recognition class with the specified log callback.
    /// </summary>
    /// <param name="action">log callback function</param>
    public ViewFace(LogCallBack action) : this("./model/", action) { }
    /// <summary>
    /// Initializes the face recognition class with the specified model directory and log callback.
    /// </summary>
    /// <param name="modelPath">model directory</param>
    /// <param name="action">log callback function</param>
    public ViewFace(string modelPath, LogCallBack action) : this(modelPath)
    {
        if (Platform64)
        { ViewFacePlus64.SetLogFunction(action); }
        else
        { ViewFacePlus32.SetLogFunction(action); }
    }

    // public properties
    /// <summary>
    /// Gets or sets the model path.
    /// </summary>
    public string ModelPath
    {
        get
        {
            if (Platform64)
            { return ViewFacePlus64.GetModelPath(); }
            else
            { return ViewFacePlus32.GetModelPath(); }
        }
        set
        {
            if (Platform64)
            { ViewFacePlus64.SetModelPath(value); }
            else
            { ViewFacePlus32.SetModelPath(value); }
        }
    }
    /// <summary>
    /// Gets or sets the face type. Affects FaceDetector, Extract and Similarity.
    /// </summary>
    public FaceType FaceType { get; set; } = FaceType.Light;
    /// <summary>
    /// Gets or sets the face landmark type. Affects FaceMark.
    /// </summary>
    public MarkType MarkType { get; set; } = MarkType.Light;
    /// <summary>
    /// Gets or sets the face detector settings.
    /// </summary>
    public DetectorSetting DetectorSetting { get; set; } = new DetectorSetting();


    // public methods
    /// <summary>
    /// Detects the faces in the bitmap and returns their information.
    /// The required model file depends on the current FaceType.
    /// </summary>
    /// <param name="bitmap">image containing faces</param>
    public FaceInfo[] FaceDetector(Bitmap bitmap)
    {
        byte[] bgr = ImageSet.Get24BGRFromBitmap(bitmap, out int width, out int height, out int channels);
        int size;
        if (Platform64)
        { size = ViewFacePlus64.DetectorSize(bgr, width, height, channels, DetectorSetting.FaceSize, DetectorSetting.Threshold, DetectorSetting.MaxWidth, DetectorSetting.MaxHeight, (int)FaceType); }
        else
        { size = ViewFacePlus32.DetectorSize(bgr, width, height, channels, DetectorSetting.FaceSize, DetectorSetting.Threshold, DetectorSetting.MaxWidth, DetectorSetting.MaxHeight, (int)FaceType); }
        float[] _score = new float[size];
        int[] _x = new int[size];
        int[] _y = new int[size];
        int[] _width = new int[size];
        int[] _height = new int[size];
        if (Platform64)
        { _ = ViewFacePlus64.Detector(_score, _x, _y, _width, _height); }
        else
        { _ = ViewFacePlus32.Detector(_score, _x, _y, _width, _height); }
        List<FaceInfo> infos = new List<FaceInfo>();
        for (int i = 0; i < size; i++)
        {
            infos.Add(new FaceInfo() { Score = _score[i], Location = new FaceRect() { X = _x[i], Y = _y[i], Width = _width[i], Height = _height[i] } });
        }
        return infos.ToArray();
    }

    /// <summary>
    /// Returns the landmark coordinates of the specified face in the bitmap.
    /// The required model file depends on the current MarkType.
    /// </summary>
    /// <param name="bitmap">image containing faces</param>
    /// <param name="info">the face to process</param>
    public FaceMarkPoint[] FaceMark(Bitmap bitmap, FaceInfo info)
    {
        byte[] bgr = ImageSet.Get24BGRFromBitmap(bitmap, out int width, out int height, out int channels);
        int size;
        if (Platform64)
        { size = ViewFacePlus64.FaceMarkSize((int)MarkType); }
        else
        { size = ViewFacePlus32.FaceMarkSize((int)MarkType); }
        double[] _pointX = new double[size];
        double[] _pointY = new double[size];
        bool val;
        if (Platform64)
        { val = ViewFacePlus64.FaceMark(bgr, width, height, channels, info.Location.X, info.Location.Y, info.Location.Width, info.Location.Height, _pointX, _pointY, (int)MarkType); }
        else
        { val = ViewFacePlus32.FaceMark(bgr, width, height, channels, info.Location.X, info.Location.Y, info.Location.Width, info.Location.Height, _pointX, _pointY, (int)MarkType); }
        if (val)
        {
            List<FaceMarkPoint> points = new List<FaceMarkPoint>();
            for (int i = 0; i < size; i++)
            { points.Add(new FaceMarkPoint() { X = _pointX[i], Y = _pointY[i] }); }
            return points.ToArray();
        }
        else
        { throw new Exception("Failed to get the face landmarks."); }
    }

    /// <summary>
    /// Extracts the face feature vector.
    /// The required model file depends on the current FaceType.
    /// </summary>
    public float[] Extract(Bitmap bitmap, FaceMarkPoint[] points)
    {
        byte[] bgr = ImageSet.Get24BGRFromBitmap(bitmap, out int width, out int height, out int channels);
        float[] features;
        if (Platform64)
        { features = new float[ViewFacePlus64.ExtractSize((int)FaceType)]; }
        else
        { features = new float[ViewFacePlus32.ExtractSize((int)FaceType)]; }

        if (Platform64)
        { ViewFacePlus64.Extract(bgr, width, height, channels, points, features, (int)FaceType); }
        else
        { ViewFacePlus32.Extract(bgr, width, height, channels, points, features, (int)FaceType); }
        return features;
    }

    /// <summary>
    /// Calculates the similarity of two feature vectors.
    /// Only feature vectors extracted with the same FaceType are comparable.
    /// </summary>
    public float Similarity(float[] leftFeatures, float[] rightFeatures)
    {
        if (leftFeatures.Length == 0 || rightFeatures.Length == 0)
            throw new ArgumentNullException(nameof(leftFeatures), "The arguments must not be empty.");
        if (leftFeatures.Length != rightFeatures.Length)
            throw new ArgumentException("The two arguments must have the same length.");


        if (Platform64)
        { return ViewFacePlus64.Similarity(leftFeatures, rightFeatures, (int)FaceType); }
        else
        { return ViewFacePlus32.Similarity(leftFeatures, rightFeatures, (int)FaceType); }
    }

    /// <summary>
    /// Decides from a similarity value whether it is the same person.
    /// </summary>
    /// <param name="similarity">similarity</param>
    public bool IsSelf(float similarity) => similarity > Face.Threshold[FaceType];

    /// <summary>
    /// Releases resources.
    /// </summary>
    ~ViewFace()
    {
        if (Platform64)
        { ViewFacePlus64.ViewDispose(); }
        else
        { ViewFacePlus32.ViewDispose(); }
    }
}
C# object-oriented layer
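The wrapper frees its native detector, landmarker and recognizer in V_Dispose, triggered here from the C# finalizer ~ViewFace(). In pure C++ the same ownership rule is usually expressed with RAII rather than an explicit dispose call; a minimal sketch (the Engine type is a made-up stand-in for the seeta:: classes):

```cpp
// Made-up stand-in for seeta::FaceDetector / FaceLandmarker / FaceRecognizer.
struct Engine {
    static int alive;            // counts live native objects
    Engine() { ++alive; }
    ~Engine() { --alive; }
};
int Engine::alive = 0;

// Owns its engine; the destructor plays the role of V_Dispose, so the
// native object is freed deterministically when the handle leaves scope.
struct ViewFaceHandle {
    Engine* detector = new Engine();
    ViewFaceHandle() = default;
    ViewFaceHandle(const ViewFaceHandle&) = delete;             // prevent double-delete
    ViewFaceHandle& operator=(const ViewFaceHandle&) = delete;
    ~ViewFaceHandle() { delete detector; }
};
```

Unlike a C# finalizer, which runs at an unspecified time on the GC's finalizer thread, the RAII destructor runs at a deterministic point, which is why the managed wrapper could also expose IDisposable for eager cleanup.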

 

5. Maybe…

  • Many SeetaFace6 features are not implemented in this project yet, so maybe I will:

    Remember my GitHub password and keep updating…
    Delete the repository and disappear…

  • If you run into problems while using it, you may:

    Report a bug on GitHub…
    Send me an email
