Face Recognition (1): Detecting Faces in a Still Image with Core Image

Version History

Version  Date
V1.0     2018.01.31

Preface

Face recognition is one branch of image recognition technology and is widely used in many fields. In the next few posts we will look at several face recognition techniques.

Face Detection Based on Core Image

Core Image is a native framework provided by Apple. It includes a face detection API (CIDetector) that can locate faces in an image, including multiple faces in the same picture.
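As a quick preview (a minimal sketch only; the full, runnable view controller follows in the next section, and image here stands for whatever UIImage you want to scan), the core of the API is creating a CIDetector of type CIDetectorTypeFace and asking it for features:

#import <CoreImage/CoreImage.h>

// Wrap the UIImage in a CIImage, since CIDetector works on CIImage
CIImage *ciImage = [CIImage imageWithCGImage:image.CGImage];
// Create a face detector; CIDetectorAccuracyHigh trades speed for accuracy
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];
// Each detected face comes back as a CIFaceFeature in this array
NSArray *features = [detector featuresInImage:ciImage];
NSLog(@"Detected %lu face(s)", (unsigned long)features.count);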


Code

Let's take a look at the code.

#import "ViewController.h"
#import <CoreImage/CoreImage.h>

@interface ViewController ()

@property (nonatomic, strong) UIImageView *pictureImageView;

@end

@implementation ViewController

#pragma mark - Override Base Function

- (void)viewDidLoad
{
    [super viewDidLoad];
    
    [self initUI];
    [self detectFaceWithImage];
}

#pragma mark - Object Private Function

- (void)initUI
{
    UIImageView *pictureImageView = [[UIImageView alloc] initWithFrame:self.view.bounds];
    pictureImageView.contentMode = UIViewContentModeScaleAspectFit;
    pictureImageView.image = [UIImage imageNamed:@"face"];
    self.pictureImageView = pictureImageView;
    [self.view addSubview:pictureImageView];
}

- (void)detectFaceWithImage
{
    UIImage *image = [UIImage imageNamed:@"face"];
    // Detection accuracy: choose between CIDetectorAccuracyHigh (more accurate but slower) and CIDetectorAccuracyLow (faster but less accurate); here we use CIDetectorAccuracyHigh for better accuracy
    NSDictionary *opts = [NSDictionary dictionaryWithObject:
                          CIDetectorAccuracyHigh forKey:CIDetectorAccuracy];
    // Convert the UIImage to a CIImage
    CIImage *faceImage = [CIImage imageWithCGImage:image.CGImage];
    CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:opts];
    // Detect the faces; each one comes back as a CIFaceFeature
    NSArray *features = [faceDetector featuresInImage:faceImage];
    // Get the image size
    CGSize inputImageSize = [faceImage extent].size;
    // Mirror about the y-axis (Core Image's origin is the bottom-left corner)
    CGAffineTransform transform = CGAffineTransformScale(CGAffineTransformIdentity, 1, -1);
    // Shift up by the image height so the origin matches UIKit's top-left corner
    transform = CGAffineTransformTranslate(transform, 0, -inputImageSize.height);
    
    // Iterate over all detected faces
    for (CIFaceFeature *faceFeature in features){
        // Convert the face bounds into UIKit coordinates
        CGRect faceViewBounds = CGRectApplyAffineTransform(faceFeature.bounds, transform);
        CGSize viewSize = self.pictureImageView.bounds.size;
        CGFloat scale = MIN(viewSize.width / inputImageSize.width,
                            viewSize.height / inputImageSize.height);
        CGFloat offsetX = (viewSize.width - inputImageSize.width * scale) / 2;
        CGFloat offsetY = (viewSize.height - inputImageSize.height * scale) / 2;
        // Scale from image pixels to the image view's size (aspect-fit)
        CGAffineTransform scaleTransform = CGAffineTransformMakeScale(scale, scale);
        // Apply the scale, then the aspect-fit offsets
        faceViewBounds = CGRectApplyAffineTransform(faceViewBounds, scaleTransform);
        faceViewBounds.origin.x += offsetX;
        faceViewBounds.origin.y += offsetY;
        
        // Outline the face area
        UIView* faceView = [[UIView alloc] initWithFrame:faceViewBounds];
        faceView.layer.borderWidth = 2;
        faceView.layer.borderColor = [[UIColor redColor] CGColor];
        [self.pictureImageView addSubview:faceView];
        
        // Check whether a left eye position was found
        if(faceFeature.hasLeftEyePosition){
            NSLog(@"Left eye detected");
        }
        // Check whether a right eye position was found
        if(faceFeature.hasRightEyePosition){
            NSLog(@"Right eye detected");
        }
        // Check whether a mouth position was found
        if(faceFeature.hasMouthPosition){
            NSLog(@"Mouth detected");
        }
    }
}

@end

Here is the console output:

2018-01-30 23:27:22.402634+0800 JJFaceDetector_demo1[4535:1334965] Left eye detected
2018-01-30 23:27:22.402767+0800 JJFaceDetector_demo1[4535:1334965] Right eye detected
2018-01-30 23:27:22.402811+0800 JJFaceDetector_demo1[4535:1334965] Mouth detected

2018-01-30 23:27:22.402992+0800 JJFaceDetector_demo1[4535:1334965] Left eye detected
2018-01-30 23:27:22.403023+0800 JJFaceDetector_demo1[4535:1334965] Right eye detected
2018-01-30 23:27:22.403136+0800 JJFaceDetector_demo1[4535:1334965] Mouth detected

2018-01-30 23:27:22.403321+0800 JJFaceDetector_demo1[4535:1334965] Left eye detected
2018-01-30 23:27:22.403364+0800 JJFaceDetector_demo1[4535:1334965] Right eye detected
2018-01-30 23:27:22.403393+0800 JJFaceDetector_demo1[4535:1334965] Mouth detected

2018-01-30 23:27:22.403466+0800 JJFaceDetector_demo1[4535:1334965] Left eye detected
2018-01-30 23:27:22.403494+0800 JJFaceDetector_demo1[4535:1334965] Right eye detected
2018-01-30 23:27:22.403520+0800 JJFaceDetector_demo1[4535:1334965] Mouth detected

2018-01-30 23:27:22.403590+0800 JJFaceDetector_demo1[4535:1334965] Left eye detected
2018-01-30 23:27:22.403617+0800 JJFaceDetector_demo1[4535:1334965] Right eye detected
2018-01-30 23:27:22.403641+0800 JJFaceDetector_demo1[4535:1334965] Mouth detected

Result

Now let's look at the detection result.

(Screenshot: the sample image with a red rectangle drawn around each detected face)

As the screenshot shows, all of the faces in the picture are detected accurately.

A few points to note:

  • The UIView coordinate system and the Core Image coordinate system are different: UIKit's origin is the top-left corner, while Core Image's is the bottom-left. An affine transform (CGAffineTransform) is therefore needed to convert Core Image coordinates into UIKit coordinates. The same conversion applies to point-valued properties such as leftEyePosition (see the sketch after this snippet).
// Mirror about the y-axis
CGAffineTransform transform = CGAffineTransformScale(CGAffineTransformIdentity, 1, -1);
// Shift up by the image height
transform = CGAffineTransformTranslate(transform, 0, -inputImageSize.height);
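Under the same assumptions (a minimal sketch meant to run inside the for-loop of detectFaceWithImage above, reusing its transform, scale, offsetX and offsetY values; the 6-point marker size is an arbitrary choice), the feature points can be converted and drawn the same way:

if (faceFeature.hasLeftEyePosition) {
    // Flip the Core Image point into UIKit's top-left-origin coordinates
    CGPoint eyePoint = CGPointApplyAffineTransform(faceFeature.leftEyePosition, transform);
    // Map from image pixels into the aspect-fit image view's coordinate space
    eyePoint = CGPointMake(eyePoint.x * scale + offsetX, eyePoint.y * scale + offsetY);
    // Mark the eye with a small green dot
    UIView *eyeMarker = [[UIView alloc] initWithFrame:CGRectMake(eyePoint.x - 3, eyePoint.y - 3, 6, 6)];
    eyeMarker.backgroundColor = [UIColor greenColor];
    eyeMarker.layer.cornerRadius = 3;
    [self.pictureImageView addSubview:eyeMarker];
}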

Afterword

That's it for this post; more to come in the next one.

