Reposted from: https://blog.csdn.net/qq_34106574/article/details/84022449
(1) Download the source from http://dlib.net/ and generate a build project with CMake. You can enable the GPU option, which requires CUDA 9.0 support; after clicking Generate, use Open Project and build with Visual Studio (a typical command-line equivalent is sketched below).
Once the build finishes, create your own project, add the folder that contains the dlib folder to the #include search path, and link the resulting static library into the project; it will then compile.
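As a sketch, the command-line equivalent of this CMake step might look like the following (the source directory name and the generator string are assumptions for a VS 2015 x64 setup; DLIB_USE_CUDA is dlib's CMake option for CUDA builds):

cd dlib-19.16        (replace with your extracted dlib source directory)
mkdir build
cd build
cmake .. -G "Visual Studio 14 2015 Win64" -DDLIB_USE_CUDA=1
cmake --build . --config Release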
(2) Alternatively, add the downloaded source directly to your project: including the file from the dlib/all directory lets dlib be compiled together with your project,
although this is slower. The concrete steps:
(1) To build on Windows with Visual Studio 2015 or newer, just create an empty console project, then add dlib/all/source.cpp to it.
(2) Add the folder containing the dlib folder to the #include search path. You can then compile an example program by adding it to the project.
(3) If you need to read libjpeg and libpng image files: in Visual Studio, the simplest way to give dlib jpeg and png support is to
add all of the libjpeg, libpng, and zlib source files from the dlib/external folder to the project, and to define the DLIB_PNG_SUPPORT and DLIB_JPEG_SUPPORT preprocessor directives for the whole project (a minimal usage sketch follows).
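As a minimal sketch of what those two defines enable (the image path is only an example; the defines must be set project-wide so that dlib's own source files see them too):

#include <dlib/image_io.h>
#include <dlib/matrix.h>

// Minimal sketch: with DLIB_JPEG_SUPPORT and DLIB_PNG_SUPPORT defined
// project-wide, dlib::load_image() can read .jpg and .png files directly.
int main()
{
    dlib::matrix<dlib::rgb_pixel> img;
    dlib::load_image(img, "faces/bald_guys.jpg"); // example path (assumption)
    return 0;
}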
(1) Below is the face recognition example program provided by dlib. The algorithm achieves 99.38% face recognition accuracy on the LFW benchmark, and the official site also documents the training procedure. It uses an embedding network to map each face into a 128-dimensional space; whether two images show the same person is decided by comparing the two vectors, with the empirical threshold that a distance below 0.6 means the same person (a minimal sketch of this rule follows the download links).
Download the face landmark regression model:
shape_predictor_5_face_landmarks.dat
and the deep face feature extraction model:
dlib_face_recognition_resnet_model_v1.dat
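The decision rule can be sketched as follows (the function name is ours; the arguments are the 128-D descriptors produced by the network in the example below):

#include <dlib/matrix.h>

// Sketch: returns true when two 128-D face descriptors are judged to belong
// to the same person, using the 0.6 distance threshold the model was trained
// to satisfy.
bool same_person(const dlib::matrix<float,0,1>& a,
                 const dlib::matrix<float,0,1>& b)
{
    return dlib::length(a - b) < 0.6;
}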
#include <dlib/dnn.h>
#include <dlib/gui_widgets.h>
#include <dlib/clustering.h>
#include <dlib/string.h>
#include <dlib/image_io.h>
#include <dlib/image_processing/frontal_face_detector.h>
using namespace dlib;
using namespace std;
// ----------------------------------------------------------------------------------------
// The next bit of code defines a ResNet network. It's basically copied
// and pasted from the dnn_imagenet_ex.cpp example, except we replaced the loss
// layer with loss_metric and made the network somewhat smaller. Go read the introductory
// dlib DNN examples to learn what all this stuff means.
//
// Also, the dnn_metric_learning_on_images_ex.cpp example shows how to train this network.
// The dlib_face_recognition_resnet_model_v1 model used by this example was trained using
// essentially the code shown in dnn_metric_learning_on_images_ex.cpp except the
// mini-batches were made larger (35x15 instead of 5x5), the iterations without progress
// was set to 10000, and the training dataset consisted of about 3 million images instead of
// 55. Also, the input layer was locked to images of size 150.
template <template <int,template<typename>class,int,typename> class block, int N, template<typename>class BN, typename SUBNET>
using residual = add_prev1<block<N,BN,1,tag1<SUBNET>>>;

template <template <int,template<typename>class,int,typename> class block, int N, template<typename>class BN, typename SUBNET>
using residual_down = add_prev2<avg_pool<2,2,2,2,skip1<tag2<block<N,BN,2,tag1<SUBNET>>>>>>;

template <int N, template <typename> class BN, int stride, typename SUBNET>
using block  = BN<con<N,3,3,1,1,relu<BN<con<N,3,3,stride,stride,SUBNET>>>>>;

template <int N, typename SUBNET> using ares      = relu<residual<block,N,affine,SUBNET>>;
template <int N, typename SUBNET> using ares_down = relu<residual_down<block,N,affine,SUBNET>>;

template <typename SUBNET> using alevel0 = ares_down<256,SUBNET>;
template <typename SUBNET> using alevel1 = ares<256,ares<256,ares_down<256,SUBNET>>>;
template <typename SUBNET> using alevel2 = ares<128,ares<128,ares_down<128,SUBNET>>>;
template <typename SUBNET> using alevel3 = ares<64,ares<64,ares<64,ares_down<64,SUBNET>>>>;
template <typename SUBNET> using alevel4 = ares<32,ares<32,ares<32,SUBNET>>>;

using anet_type = loss_metric<fc_no_bias<128,avg_pool_everything<
                            alevel0<
                            alevel1<
                            alevel2<
                            alevel3<
                            alevel4<
                            max_pool<3,3,2,2,relu<affine<con<32,7,7,2,2,
                            input_rgb_image_sized<150>
                            >>>>>>>>>>>>;
// ----------------------------------------------------------------------------------------
std::vector<matrix<rgb_pixel>> jitter_image(
    const matrix<rgb_pixel>& img
);
// ----------------------------------------------------------------------------------------
int main(int argc, char** argv) try
{
if (argc != 2)
{
cout << "Run this example by invoking it like this: " << endl;
cout << " ./dnn_face_recognition_ex faces/bald_guys.jpg" << endl;
cout << endl;
cout << "You will also need to get the face landmarking model file as well as " << endl;
cout << "the face recognition model file. Download and then decompress these files from: " << endl;
cout << "http://dlib.net/files/shape_predictor_5_face_landmarks.dat.bz2" << endl;
cout << "http://dlib.net/files/dlib_face_recognition_resnet_model_v1.dat.bz2" << endl;
cout << endl;
return 1;
}
// The first thing we are going to do is load all our models. First, since we need to
// find faces in the image we will need a face detector:
frontal_face_detector detector = get_frontal_face_detector();
// We will also use a face landmarking model to align faces to a standard pose: (see face_landmark_detection_ex.cpp for an introduction)
shape_predictor sp;
deserialize("shape_predictor_5_face_landmarks.dat") >> sp;
// And finally we load the DNN responsible for face recognition.
anet_type net;
deserialize("dlib_face_recognition_resnet_model_v1.dat") >> net;
matrix<rgb_pixel> img;
load_image(img, argv[1]);
// Display the raw image on the screen
image_window win(img);
// Run the face detector on the image of our action heroes, and for each face extract a
// copy that has been normalized to 150x150 pixels in size and appropriately rotated
// and centered.
std::vector<matrix<rgb_pixel>> faces;
for (auto face : detector(img))
{
auto shape = sp(img, face);
matrix<rgb_pixel> face_chip;
extract_image_chip(img, get_face_chip_details(shape,150,0.25), face_chip);
faces.push_back(move(face_chip));
// Also put some boxes on the faces so we can see that the detector is finding
// them.
win.add_overlay(face);
}
if (faces.size() == 0)
{
cout << "No faces found in image!" << endl;
return 1;
}
// This call asks the DNN to convert each face image in faces into a 128D vector.
// In this 128D vector space, images from the same person will be close to each other
// but vectors from different people will be far apart. So we can use these vectors to
// identify if a pair of images are from the same person or from different people.
std::vector<matrix<float,0,1>> face_descriptors = net(faces);
// In particular, one simple thing we can do is face clustering. This next bit of code
// creates a graph of connected faces and then uses the Chinese whispers graph clustering
// algorithm to identify how many people there are and which faces belong to whom.
std::vector<sample_pair> edges;
for (size_t i = 0; i < face_descriptors.size(); ++i)
{
for (size_t j = i; j < face_descriptors.size(); ++j)
{
// Faces are connected in the graph if they are close enough. Here we check if
// the distance between two face descriptors is less than 0.6, which is the
// decision threshold the network was trained to use. Although you can
// certainly use any other threshold you find useful.
if (length(face_descriptors[i]-face_descriptors[j]) < 0.6)
edges.push_back(sample_pair(i,j));
}
}
std::vector<unsigned long> labels;
const auto num_clusters = chinese_whispers(edges, labels);
// This will correctly indicate that there are 4 people in the image.
cout << "number of people found in the image: "<< num_clusters << endl;
// Now let's display the face clustering results on the screen. You will see that it
// correctly grouped all the faces.
std::vector<image_window> win_clusters(num_clusters);
for (size_t cluster_id = 0; cluster_id < num_clusters; ++cluster_id)
{
std::vector<matrix<rgb_pixel>> temp;
for (size_t j = 0; j < labels.size(); ++j)
{
if (cluster_id == labels[j])
temp.push_back(faces[j]);
}
win_clusters[cluster_id].set_title("face cluster " + cast_to_string(cluster_id));
win_clusters[cluster_id].set_image(tile_images(temp));
}
// Finally, let's print one of the face descriptors to the screen.
cout << "face descriptor for one face: " << trans(face_descriptors[0]) << endl;
// It should also be noted that face recognition accuracy can be improved if jittering
// is used when creating face descriptors. In particular, to get 99.38% on the LFW
// benchmark you need to use the jitter_image() routine to compute the descriptors,
// like so:
matrix<float,0,1> face_descriptor = mean(mat(net(jitter_image(faces[0]))));
cout << "jittered face descriptor for one face: " << trans(face_descriptor) << endl;
// If you use the model without jittering, as we did when clustering the bald guys, it
// gets an accuracy of 99.13% on the LFW benchmark. So jittering makes the whole
// procedure a little more accurate but makes face descriptor calculation slower.
cout << "hit enter to terminate" << endl;
cin.get();
}
catch (std::exception& e)
{
cout << e.what() << endl;
}
// ----------------------------------------------------------------------------------------
std::vector<matrix<rgb_pixel>> jitter_image(
    const matrix<rgb_pixel>& img
)
{
// All this function does is make 100 copies of img, all slightly jittered by being
// zoomed, rotated, and translated a little bit differently. They are also randomly
// mirrored left to right.
thread_local dlib::rand rnd;
std::vector<matrix<rgb_pixel>> crops;
for (int i = 0; i < 100; ++i)
crops.push_back(jitter_image(img,rnd));
return crops;
}
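To run the compiled example, keep the two .dat model files in the working directory and pass an image on the command line, as the program's own usage message shows:

./dnn_face_recognition_ex faces/bald_guys.jpg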
(2) Test results
Test image and clustering-result screenshots: see the original post linked above.