iOS: Extracting Face Landmarks from an Image with Vision

1. The image must first be scaled to a suitable width and height. The landmark points come back in the coordinate space of the detected image itself, and Vision knows nothing about the UIImageView's contentMode, so if the displayed size and the detected size differ, the drawn points will be offset. Vision is a machine-vision framework built on top of Core ML; the difference is that Vision already bundles ready-made, pre-trained model objects that can be used directly. If you need other Core ML models, you can download them from Apple's website; the range of things they can recognize is very large.

2. Code implementation

1. Scale the image
// Scale the image to the given width, preserving the aspect ratio (UIImage category method)
- (UIImage *)scaleImage:(CGFloat)width {
    CGFloat height = self.size.height * width / self.size.width;

    UIGraphicsBeginImageContext(CGSizeMake(width, height));
    [self drawInRect:CGRectMake(0, 0, width, height)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Re-encode as JPEG to shrink the data before detection
    NSData *tempData = UIImageJPEGRepresentation(result, 0.5);
    return [UIImage imageWithData:tempData];
}
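A minimal usage sketch, assuming the method above lives in a UIImage category (the category and asset names here are hypothetical) and that we scale to the hosting view's width so detected points can be mapped 1:1 onto the view:

// Hypothetical category declaration, e.g. in UIImage+Scale.h
@interface UIImage (Scale)
- (UIImage *)scaleImage:(CGFloat)width;
@end

// Usage: match the image width to the view that will display it
UIImage *original = [UIImage imageNamed:@"face"]; // hypothetical asset name
UIImage *scaled = [original scaleImage:self.frame.size.width];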
2. Create the UIImageView. This only exists so that the displayed image and the points Vision detects line up: Vision reports positions in the image's own coordinate space, so the image has to be scaled (using its width or height as the reference) to match the view before the points will correspond.
- (void)createImageView {
    // The image was already scaled to the view's width, so its height can be used directly
    UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, self.frame.size.width, self.m_image.size.height)];
    self.m_imageView = imageView;
    self.m_imageView.image = self.m_image;
    self.m_imageView.backgroundColor = [UIColor redColor];
    [self addSubview:imageView];
}
3. Detect the face landmark positions in the image

1. Create a VNImageRequestHandler
2. Create the request, a VNDetectFaceLandmarksRequest
VNDetectFaceLandmarksRequest is the request for face landmarks. Several similar prepackaged requests exist, e.g. VNDetectFaceRectanglesRequest (detects face bounding boxes) and VNDetectRectanglesRequest (detects rectangles), among others; they are driven the same way, as sketched after the code below.

3. Have the VNImageRequestHandler perform the VNDetectFaceLandmarksRequest
4. Handle the returned data inside the VNDetectFaceLandmarksRequest's completion block
- (void)detectLandmarks {
    // Detect the exact image being displayed so the points line up with the view
    self.m_detectImage = self.m_imageView.image;
    CIImage *faceCIImage = [[CIImage alloc] initWithImage:self.m_detectImage];

    // 1. Create the VNImageRequestHandler
    VNImageRequestHandler *vnRequestHandler = [[VNImageRequestHandler alloc] initWithCIImage:faceCIImage options:@{}];

    __weak VisionImageView *weakSelf = self;
    // 2. Create the VNDetectFaceLandmarksRequest
    VNDetectFaceLandmarksRequest *faceRequest = [[VNDetectFaceLandmarksRequest alloc] initWithCompletionHandler:^(VNRequest * _Nonnull request, NSError * _Nullable error) {
        // 4. Process the detected results
        [weakSelf faceLandmarks:request.results];
    }];

    // 3. Have the VNImageRequestHandler perform the request
    NSError *error = nil;
    if (![vnRequestHandler performRequests:@[faceRequest] error:&error]) {
        NSLog(@"performRequests failed: %@", error);
    }
}
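The other prepackaged requests mentioned above are used the same way. Below is a minimal sketch for VNDetectFaceRectanglesRequest, which returns only the face bounding boxes; since performRequests:error: runs synchronously, the sketch dispatches it off the main queue:

// Minimal sketch: detect only face rectangles (no landmarks)
- (void)detectFaceRects:(CIImage *)ciImage {
    VNDetectFaceRectanglesRequest *rectRequest = [[VNDetectFaceRectanglesRequest alloc] initWithCompletionHandler:^(VNRequest * _Nonnull request, NSError * _Nullable error) {
        for (VNFaceObservation *face in request.results) {
            // boundingBox is normalized, origin at the bottom-left
            NSLog(@"face rect == %@", NSStringFromCGRect(face.boundingBox));
        }
    }];
    VNImageRequestHandler *handler = [[VNImageRequestHandler alloc] initWithCIImage:ciImage options:@{}];
    // performRequests:error: is synchronous, so keep it off the main queue
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
        NSError *error = nil;
        if (![handler performRequests:@[rectRequest] error:&error]) {
            NSLog(@"face rectangle detection failed: %@", error);
        }
    });
}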
4. Processing the results
Note:
VNFaceObservation *face
Each face object represents one detected face.
@property (readonly, nonatomic, assign) CGRect boundingBox;
Each face has a bounding rectangle; the face rectangle is given in normalized (texture) coordinates.

@property (readonly, nonatomic, strong, nullable) VNFaceLandmarks2D *landmarks;
VNFaceLandmarks2D *landmarks = face.landmarks;
landmarks is the property that gathers all of the face's key information.

landmarks.leftEye is the data for one specific facial region:
@property (readonly, nullable) VNFaceLandmarkRegion2D *leftEye;
The landmark point coordinates are normalized coordinates relative to the face's bounding rectangle.

To compute a landmark point's normalized coordinates in the whole image, you must combine it with the boundingBox, because the landmark coordinates are relative to the face rectangle; a sketch of this conversion follows.
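A minimal sketch of that combination, mapping a landmark point from bounding-box-relative normalized coordinates into whole-image normalized coordinates (both use a bottom-left origin); VNFaceLandmarkRegion2D also offers pointsInImageOfSize: if you want pixel coordinates directly:

// Map a landmark point (normalized, relative to the face boundingBox)
// into image-normalized coordinates (origin bottom-left, range 0-1)
static CGPoint LandmarkPointInImage(CGPoint point, CGRect boundingBox) {
    return CGPointMake(boundingBox.origin.x + point.x * boundingBox.size.width,
                       boundingBox.origin.y + point.y * boundingBox.size.height);
}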

Texture coordinates: bottom-left (0,0), bottom-right (1,0), top-left (0,1), top-right (1,1); both x and y range from 0 to 1, and the origin (0,0) is the bottom-left corner, unlike the iOS screen origin (top-left). If you want to render with OpenGL ES, say to magnify an eye, you additionally have to turn the texture coordinates into the corresponding OpenGL ES coordinate range.

OpenGL ES vertex coordinates: x ranges from -1 to 1 and y from -1 to 1; bottom-left (-1,-1), bottom-right (1,-1), top-left (-1,1), top-right (1,1). A conversion sketch follows.
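Because both systems keep the same bottom-left orientation, the conversion is just a linear remap from [0, 1] to [-1, 1]; a minimal sketch:

// Map a texture coordinate (0..1, origin bottom-left) to an
// OpenGL ES vertex coordinate (-1..1, same orientation)
static CGPoint TexCoordToGLVertex(CGPoint tex) {
    // e.g. (0,0) -> (-1,-1), (0.5,0.5) -> (0,0), (1,1) -> (1,1)
    return CGPointMake(tex.x * 2.0 - 1.0, tex.y * 2.0 - 1.0);
}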

The forehead area also has key points; they are simply not drawn in this example, and some of the key points coincide.

In this example, CALayer is only used to draw a rectangle the size of the face's CGRect plus small layers at the landmark positions, all added onto the view's layer.

// Process the results once detection succeeds
- (void)faceLandmarks:(NSArray *)faces{
    // Remove the red dot/rectangle layers from the previous pass
    for (CALayer *layer in self.landmarksLayers) {
        [layer removeFromSuperlayer];
    }
    [self.landmarksLayers removeAllObjects];
    // There may be more than one face
    [faces enumerateObjectsUsingBlock:^(VNFaceObservation *face, NSUInteger idx, BOOL * _Nonnull stop) {
        
        /*
         * face: a VNFaceObservation; it carries the landmarks positions,
         * the boundingBox (face size), and so on
         */

        // Get this face's landmarks
        VNFaceLandmarks2D *landmarks = face.landmarks;
        // An array to collect the key regions
        NSMutableArray *face_landmarks = [NSMutableArray array];

        // landmarks is an object with leftEye, rightEye, nose, noseCrest, etc.;
        // add whichever regions you need. The region properties are nullable,
        // so skip any that are missing.
        void (^addRegion)(VNFaceLandmarkRegion2D *) = ^(VNFaceLandmarkRegion2D *region) {
            if (region != nil) {
                [face_landmarks addObject:region];
            }
        };
        addRegion(landmarks.faceContour);
        addRegion(landmarks.leftEye);
        addRegion(landmarks.rightEye);
        addRegion(landmarks.leftEyebrow);
        addRegion(landmarks.rightEyebrow);
        addRegion(landmarks.outerLips);
        addRegion(landmarks.innerLips);
        addRegion(landmarks.nose);
        addRegion(landmarks.noseCrest);
        addRegion(landmarks.medianLine);
        addRegion(landmarks.leftPupil);
        addRegion(landmarks.rightPupil);
        
        VNFaceLandmarkRegion2D *leftPupilLandmarks = landmarks.leftPupil;
        VNFaceLandmarkRegion2D *rightPupilLandmarks = landmarks.rightPupil;

        CGPoint leftPupil = CGPointZero;
        CGPoint rightPupil = CGPointZero;

        for (NSUInteger i = 0; i < leftPupilLandmarks.pointCount; i++) {
            // Read out the point (the pupil region normally holds a single point)
            leftPupil = leftPupilLandmarks.normalizedPoints[i];
            NSLog(@"leftPupil point == %@", NSStringFromCGPoint(leftPupil));
        }

        for (NSUInteger i = 0; i < rightPupilLandmarks.pointCount; i++) {
            // Read out the point
            rightPupil = rightPupilLandmarks.normalizedPoints[i];
            NSLog(@"rightPupil point == %@", NSStringFromCGPoint(rightPupil));
        }
        
        NSLog(@"\n");
        // Array of face contour points
        NSMutableArray * faceContours = [[NSMutableArray alloc] initWithCapacity:0];
        for (NSUInteger i = 0; i < landmarks.faceContour.pointCount; i++) {
            CGPoint point = landmarks.faceContour.normalizedPoints[i];
            NSLog(@"faceContour point == %@", NSStringFromCGPoint(point));
            [faceContours addObject:NSStringFromCGPoint(point)];
        }
        
        NSLog(@"\n");
        for (NSUInteger i = 0; i < landmarks.medianLine.pointCount; i++) {
            CGPoint point = landmarks.medianLine.normalizedPoints[i];
            NSLog(@"medianLine point == %@", NSStringFromCGPoint(point));
        }

        dispatch_async(dispatch_get_main_queue(), ^{
            // Hand the key data to the delegate on the main queue
            NSLog(@"self.m_delegate == %@", self.m_delegate);
            [self.m_delegate sendFaceDataWithLeftPupil:leftPupil andRightPupil:rightPupil andFaceBoundingBox:face.boundingBox andFaceContours:faceContours];
        });
        

        // Convert the normalized boundingBox (origin bottom-left) into this
        // view's coordinate space (origin top-left), flipping the y axis
        CGRect oldRect = face.boundingBox;
        CGFloat w = oldRect.size.width * self.bounds.size.width;
        CGFloat h = oldRect.size.height * self.bounds.size.height;
        CGFloat x = oldRect.origin.x * self.bounds.size.width;
        CGFloat y = self.bounds.size.height - (oldRect.origin.y * self.bounds.size.height) - h;

        // Add the face rectangle
        CALayer *testLayer = [[CALayer alloc] init];
        testLayer.borderWidth = 1;
        testLayer.cornerRadius = 3;
        testLayer.borderColor = [UIColor redColor].CGColor;
        testLayer.frame = CGRectMake(x, y, w, h);
        [self.layer addSublayer:testLayer];

        [self.landmarksLayers addObject:testLayer];

        NSLog(@"boundingBox == %@", NSStringFromCGRect(face.boundingBox));
        // Walk every landmark region
        [face_landmarks enumerateObjectsUsingBlock:^(VNFaceLandmarkRegion2D *obj, NSUInteger idx, BOOL * _Nonnull stop) {
            // obj is one facial region; iterate all of its points
            for (NSUInteger i = 0; i < obj.pointCount; i++) {
                // Points are normalized relative to the boundingBox; map them
                // into the face rect (x, y, w, h) computed above, flipping y
                CGPoint point = obj.normalizedPoints[i];
                CGFloat px = x + point.x * w;
                CGFloat py = y + (1 - point.y) * h;

                // Draw a small red dot layer at the landmark position
                CALayer *dotLayer = [[CALayer alloc] init];
                dotLayer.backgroundColor = [UIColor redColor].CGColor;
                dotLayer.cornerRadius = 1.5;
                dotLayer.frame = CGRectMake(px - 1.5, py - 1.5, 3, 3);
                [self.layer addSublayer:dotLayer];

                [self.landmarksLayers addObject:dotLayer];
            }
        }];
    }];
}
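For reference, the delegate invoked above would be declared along these lines; the protocol name is hypothetical, and the selector and parameter types are taken from the call site:

// Hypothetical protocol name; the method signature matches the call above
@protocol VisionImageViewDelegate <NSObject>
- (void)sendFaceDataWithLeftPupil:(CGPoint)leftPupil
                    andRightPupil:(CGPoint)rightPupil
               andFaceBoundingBox:(CGRect)boundingBox
                  andFaceContours:(NSArray *)faceContours;
@end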
5. Related headers
#import <UIKit/UIKit.h>
#import <Vision/Vision.h>
#import <CoreImage/CoreImage.h>
#import <CoreML/CoreML.h>
6. Result
[Image: Vision人脸识别关键点绘制.jpg — the face rectangle and landmark points drawn over the photo]
