Core Image Study, Part 1: Face Detection

Apple's built-in Core Image framework can detect faces (locating them in an image, not identifying who they are), and it does so efficiently; in my experience it gives better results than using OpenCV for this. Beyond face detection, Core Image can also apply rendering effects, which makes it very handy, but material on it is fairly scarce. This project follows a Keynote session from WWDC 2012.

Here is the rough flow.
First, load a local image:

    UIImage* image = [UIImage imageNamed:@"face"];

Then convert it to a CIImage:

    CIImage* ciimage = [CIImage imageWithCGImage:image.CGImage];

Next, create a CIDetector instance to run the detection:

    NSDictionary* opts = @{CIDetectorAccuracy : CIDetectorAccuracyHigh};
    CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:opts];

Calling featuresInImage: on the detector returns an array of results (it can find multiple faces in one image):

    NSArray* features = [detector featuresInImage:ciimage];
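
By default the detector assumes faces are upright in the image. For photos that carry EXIF orientation (typical of camera shots), the options variant of the call lets you pass that orientation along. A small sketch; the value @6 below is an assumption (a portrait camera shot), and in real code you would derive it from image.imageOrientation:

```objectivec
// Sketch: tell the detector how the image is rotated (EXIF values 1-8).
// @6 is an assumed example value; derive it from image.imageOrientation in practice.
NSDictionary* featureOpts = @{CIDetectorImageOrientation : @6};
NSArray* features = [detector featuresInImage:ciimage options:featureOpts];
```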

Next, loop over the results and draw each face's bounds along with the positions of the eyes and mouth (testImage here is the UIImageView displaying the photo):

    for (CIFaceFeature *faceFeature in features) {

        CGFloat faceWidth = testImage.bounds.size.width / 4;

        // Outline the face
        UIView* faceView = [[UIView alloc] initWithFrame:[self verticalFlipFromRect:faceFeature.bounds inSize:image.size toSize:testImage.bounds.size]];
        faceView.layer.borderWidth = 1;
        faceView.layer.borderColor = [[UIColor redColor] CGColor];
        [testImage addSubview:faceView];

        // Mark the left eye
        if (faceFeature.hasLeftEyePosition) {
            UIView* leftEyeView = [[UIView alloc] initWithFrame:
                                   CGRectMake(0, 0, faceWidth*0.3, faceWidth*0.3)];
            [leftEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]];
            [leftEyeView setCenter:[self verticalFlipFromPoint:faceFeature.leftEyePosition inSize:image.size toSize:testImage.bounds.size]];
            leftEyeView.layer.cornerRadius = faceWidth*0.15;
            [testImage addSubview:leftEyeView];
        }

        // Mark the right eye
        if (faceFeature.hasRightEyePosition) {
            UIView* rightEyeView = [[UIView alloc] initWithFrame:
                                    CGRectMake(0, 0, faceWidth*0.3, faceWidth*0.3)];
            [rightEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]];
            [rightEyeView setCenter:[self verticalFlipFromPoint:faceFeature.rightEyePosition inSize:image.size toSize:testImage.bounds.size]];
            rightEyeView.layer.cornerRadius = faceWidth*0.15;
            [testImage addSubview:rightEyeView];
        }

        // Mark the mouth (the frame origin doesn't matter; setCenter: repositions it)
        if (faceFeature.hasMouthPosition) {
            UIView* mouth = [[UIView alloc] initWithFrame:
                             CGRectMake(0, 0, faceWidth*0.4, faceWidth*0.4)];
            [mouth setBackgroundColor:[[UIColor greenColor] colorWithAlphaComponent:0.3]];
            [mouth setCenter:[self verticalFlipFromPoint:faceFeature.mouthPosition inSize:image.size toSize:testImage.bounds.size]];
            mouth.layer.cornerRadius = faceWidth*0.2;
            [testImage addSubview:mouth];
        }
    }

One thing to watch out for: the detection results do not use the normal UIKit coordinate system. Core Image puts the origin at the bottom-left corner, with y increasing upward, so each result position has to be flipped vertically (and scaled from image coordinates to view coordinates):

    - (CGRect)verticalFlipFromRect:(CGRect)originalRect inSize:(CGSize)originalSize toSize:(CGSize)finalSize {
        CGRect finalRect = originalRect;
        // Flip: Core Image's origin is bottom-left, UIKit's is top-left
        finalRect.origin.y = originalSize.height - finalRect.origin.y - finalRect.size.height;
        // Scale from image coordinates to view coordinates
        CGFloat hRate = finalSize.width / originalSize.width;
        CGFloat vRate = finalSize.height / originalSize.height;
        finalRect.origin.x *= hRate;
        finalRect.origin.y *= vRate;
        finalRect.size.width *= hRate;
        finalRect.size.height *= vRate;
        return finalRect;
    }

    - (CGPoint)verticalFlipFromPoint:(CGPoint)originalPoint inSize:(CGSize)originalSize toSize:(CGSize)finalSize {
        CGPoint finalPoint = originalPoint;
        finalPoint.y = originalSize.height - finalPoint.y;
        CGFloat hRate = finalSize.width / originalSize.width;
        CGFloat vRate = finalSize.height / originalSize.height;
        finalPoint.x *= hRate;
        finalPoint.y *= vRate;
        return finalPoint;
    }
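
To make the mapping concrete, here is a small worked example of verticalFlipFromRect:inSize:toSize: with arbitrary numbers: a face detected at (100, 50) with size 80×80 in a 400×300 image, displayed in a 200×150 view:

```objectivec
// Core Image reports {100, 50, 80, 80} in a 400x300 image (origin bottom-left).
// Step 1, flip:  y = 300 - 50 - 80 = 170   →  {100, 170, 80, 80} (origin top-left)
// Step 2, scale to a 200x150 view (both rates are 0.5):  {50, 85, 40, 40}
CGRect viewRect = [self verticalFlipFromRect:CGRectMake(100, 50, 80, 80)
                                      inSize:CGSizeMake(400, 300)
                                      toSize:CGSizeMake(200, 150)];
// viewRect.origin is now (50, 85) and its size is 40x40
```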

The full details are on GitHub, where the project also does face detection on a live video feed.
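
The live-video version boils down to running the same detector on every captured frame. A rough sketch of the capture callback, assuming an AVCaptureSession is already set up with an AVCaptureVideoDataOutput whose sample-buffer delegate is self, and that self.detector is an assumed CIDetector property created once up front (creating a detector per frame would be far too slow); drawing the overlays is omitted:

```objectivec
// AVCaptureVideoDataOutputSampleBufferDelegate: called once per video frame
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    // Wrap the frame's pixel buffer in a CIImage (no pixel copy is made)
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage* frame = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    NSArray* features = [self.detector featuresInImage:frame];

    // UIKit work must happen on the main thread
    dispatch_async(dispatch_get_main_queue(), ^{
        // ...reposition the overlay views using `features`...
    });
}
```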
