Let's start with a screenshot of the end result:
Preface
This article assumes some OpenGL background. If you are not yet familiar with OpenGL, I recommend learning the basics first; here are a few websites and blogs I used when learning OpenGL, as starting points for newcomers:
1. https://learnopengl.com (in my opinion the best site for OpenGL beginners; the material is explained very clearly, and it's where I took my own first steps into the OpenGL world. If your English isn't great, read the Chinese translation at https://learnopengl-cn.github.io)
2. https://www.raywenderlich.com/3664/opengl-tutorial-for-ios-opengl-es-2-0 (many of you will recognize the domain; a very well-written introductory tutorial from raywenderlich)
3. https://www.jianshu.com/nb/2430724 (an OpenGL tutorial series from iOS developer 影大神, covering beginner to advanced topics; you'll learn a lot from it)
Main Content
The whole process breaks down into roughly three steps:
1. Use AVFoundation to drive the camera, capture the video stream, and obtain frames
2. Use the Core Image framework to check whether a captured frame contains a face
3. Render the result to the screen with OpenGL
I. Capturing video from the camera
//Create the capture session; every camera operation is coordinated through it
self.captureSession = [[AVCaptureSession alloc] init];
[self.captureSession setSessionPreset:AVCaptureSessionPresetHigh];
//Get an available capture device; here we default to the back camera
AVCaptureDevice *captureDevice = nil;
NSArray *captureDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in captureDevices) {
if (device.position == AVCaptureDevicePositionBack) {
captureDevice = device;
break;
}
}
//Create the device input and add it to the session
self.captureDeviceInput = [[AVCaptureDeviceInput alloc] initWithDevice:captureDevice error:nil];
if ([self.captureSession canAddInput:self.captureDeviceInput]) {
[self.captureSession addInput:self.captureDeviceInput];
}
//Create the video data output and add it to the session
self.captureDeviceOutput = [[AVCaptureVideoDataOutput alloc] init];
[self.captureDeviceOutput setAlwaysDiscardsLateVideoFrames:YES];
//Create the dispatch queue on which sample buffers will be delivered
processQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
//Set the output pixel format and the sample buffer delegate (`delegate` is whatever object implements the callback below)
[self.captureDeviceOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
[self.captureDeviceOutput setSampleBufferDelegate:delegate queue:processQueue];
if ([self.captureSession canAddOutput:self.captureDeviceOutput]) {
[self.captureSession addOutput:self.captureDeviceOutput];
}
//Get the video connection and set its orientation
AVCaptureConnection *captureConnection = [self.captureDeviceOutput connectionWithMediaType:AVMediaTypeVideo];
[captureConnection setVideoOrientation:AVCaptureVideoOrientationPortrait];
//Start capturing
[self.captureSession startRunning];
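One practical note not covered above: starting with iOS 10 the app must declare NSCameraUsageDescription in its Info.plist, and it's a good idea to request camera access before starting the session, for example:
//Request camera permission before starting the capture session
[AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
    if (!granted) {
        NSLog(@"Camera access was denied");
    }
}];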
We receive the output video frames in the delegate method:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
//Render the video frame into the view
[self.faceDetectionView displayPixelBuffer:pixelBuffer];
}
II. Detecting faces in the image
Implementing face detection on iOS is not hard. Apple added the Core Image framework in iOS 5.0, and its CIDetector class can detect faces. The core code is as follows:
CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer];
//Detection accuracy: choose between CIDetectorAccuracyLow and CIDetectorAccuracyHigh; higher accuracy takes correspondingly longer
NSString *accuracy = CIDetectorAccuracyLow;
NSDictionary *options = [NSDictionary dictionaryWithObject: accuracy forKey:CIDetectorAccuracy];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:options];
NSArray *featuresArray = [detector featuresInImage:ciImage options:nil];
The returned featuresArray is the detection result: an array of CIFaceFeature objects, which we can use to decide whether the frame contains a face.
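In the demo this detection is wrapped in the LYFaceDetector class used later in displayPixelBuffer:. Its exact implementation isn't listed in this post; a minimal, synchronous sketch of such a wrapper (the real LYFaceDetector may differ, for example by throttling or running detection asynchronously) might look like this:
//Sketch only; the real LYFaceDetector in the demo may differ
@interface LYFaceDetector : NSObject
+ (void)detectCVPixelBuffer:(CVPixelBufferRef)pixelBuffer
          completionHandler:(void (^)(CIFaceFeature *result, CIImage *ciImage))handler;
@end

@implementation LYFaceDetector
+ (void)detectCVPixelBuffer:(CVPixelBufferRef)pixelBuffer
          completionHandler:(void (^)(CIFaceFeature *result, CIImage *ciImage))handler {
    CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer];
    //Reuse one detector instead of creating a new one for every frame
    static CIDetector *detector = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        NSDictionary *options = @{CIDetectorAccuracy : CIDetectorAccuracyLow};
        detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:options];
    });
    NSArray *features = [detector featuresInImage:ciImage options:nil];
    //Only the first detected face is reported, matching the simplification used later in this post
    handler(features.firstObject, ciImage);
}
@end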
III. Rendering the raw video frame and the face-position sticker with OpenGL
This is the core part. The typical OpenGL flow for rendering and displaying video frames is:
- Set the view's backing layer class to CAEAGLLayer
+ (Class)layerClass {
return [CAEAGLLayer class];
}
- Configure the layer properties when the view is initialized
- (void)setupLayer {
self.eaglLayer = (CAEAGLLayer *)self.layer;
self.eaglLayer.drawableProperties = @{kEAGLDrawablePropertyRetainedBacking : @(NO),kEAGLDrawablePropertyColorFormat : kEAGLColorFormatRGBA8};
self.eaglLayer.opaque = true;
self.contentScaleFactor = [UIScreen mainScreen].scale;
}
- Initialize an EAGLContext and make it the current context. The EAGLContext can be thought of as the rendering context, similar in role to a CGContextRef
self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
if (![EAGLContext setCurrentContext:self.context]) {
NSLog(@"failed to setCurrentContext");
exit(1);
}
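One thing to set up alongside the context is the CVOpenGLESTextureCache that displayPixelBuffer: (shown later) reads from via _videoTextureCache. The post doesn't show where it is created; a typical call, made right after the context becomes current, would be:
//Create the texture cache used to map CVPixelBuffers onto OpenGL ES textures
CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, self.context, NULL, &_videoTextureCache);
if (err != kCVReturnSuccess) {
    NSLog(@"Error at CVOpenGLESTextureCacheCreate %d", err);
}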
- Initialize the vertex shader and fragment shader. Because we draw both the raw video frame and the sticker, using blending, we create two separate shader manager objects: one to render the raw video frame and one to render the sticker texture
- (void)loadShaders {
//Be sure to set the viewport, otherwise nothing will show up on screen
CGFloat scale = [UIScreen mainScreen].scale;
glViewport(self.frame.origin.x * scale, self.frame.origin.y * scale, self.frame.size.width * scale, self.frame.size.height * scale);
//shaderManager renders the raw video frames
self.shaderManager = [[LYShaderManager alloc] initWithVertexShaderFileName:@"FaceDetectionShader" fragmentFileName:@"FaceDetectionShader"];
//Vertex position attribute
glViewAttributes[ATTRIB_VERTEX] = [self.shaderManager getAttributeLocation:"aPosition"];
//Texture coordinate attribute
glViewAttributes[ATTRIB_TEXCOORD] = [self.shaderManager getAttributeLocation:"aTexCoordinate"];
//Y (luma) plane sampler
glViewUniforms[UNIFORM_Y] = [self.shaderManager getUniformLocation:"SamplerY"];
//UV (chroma) plane sampler
glViewUniforms[UNIFORM_UV] = [self.shaderManager getUniformLocation:"SamplerUV"];
//Rotation matrix uniform; the raw video frame comes in flipped, so we transform it
glViewUniforms[UNIFORM_ROTATE_MATRIX] = [self.shaderManager getUniformLocation:"rotateMatrix"];
//textureManager loads and renders the sticker texture
self.textureManager = [[LYShaderManager alloc] initWithVertexShaderFileName:@"FaceTextureShader" fragmentFileName:@"FaceTextureShader"];
glViewAttributes[ATTRIB_TEMP_VERTEX] = [self.textureManager getAttributeLocation:"aPosition"];
glViewAttributes[ATTRIB_TEMP_TEXCOORD] = [self.textureManager getAttributeLocation:"aTexCoordinate"];
glViewUniforms[UNIFORM_TEMP_INPUT_IMG_TEXTURE] = [self.textureManager getUniformLocation:"inputTexture"];
}
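LYShaderManager itself isn't listed in this post; judging from how it is used, getAttributeLocation: and getUniformLocation: are presumably thin wrappers over glGetAttribLocation/glGetUniformLocation on the compiled and linked program. A minimal sketch, assuming the program handle is stored in a _program ivar:
//Sketch only; the real LYShaderManager in the demo may differ
- (GLint)getAttributeLocation:(const char *)name {
    //Look up an attribute slot in the linked program
    return glGetAttribLocation(_program, name);
}
- (GLint)getUniformLocation:(const char *)name {
    //Look up a uniform slot in the linked program
    return glGetUniformLocation(_program, name);
}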
The raw video frames here are in YUV format; for a detailed explanation of why YUV is used, see the Baidu Baike entry on YUV.
- Initialize the renderbuffer. Loosely speaking, you can think of the renderbuffer as the "graphics card" here: any color or texture we want to display has to go through it before it can appear on screen
- (void)setupRenderBuffer {
glGenRenderbuffers(1, &_renderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, _renderBuffer);
//Allocate the renderbuffer's storage from the layer; once drawing is done we just present the renderbuffer stored on the layer
[self.context renderbufferStorage:GL_RENDERBUFFER fromDrawable:self.eaglLayer];
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &_backingWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &_backingHeight);
}
- Initialize the framebuffer, which the renderbuffer is attached to. The framebuffer plays a key role in OpenGL rendering; continuing the analogy, it's like the slot the graphics card plugs into, and without it the "card" has nowhere to go. A framebuffer can have not only a renderbuffer attached but also a 2D texture, which is mainly used for multi-filter processing. I won't go into multi-filter chaining here; if you're interested, see OpenGL ES实践教程(七)多滤镜叠加处理 or GPUImage's multi-filter processing logic, which is where I learned how multiple filters and framebuffers work (but I digress...)
- (void)setupFrameBuffer {
glGenFramebuffers(1, &_frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, _frameBuffer);
//Attach the renderbuffer to the framebuffer as its color attachment
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, _renderBuffer);
}
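Before drawing into it, it's also worth checking that the framebuffer is complete. This check isn't in the snippet above; it's simply good practice:
//Verify the framebuffer is usable before rendering into it
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
    NSLog(@"Failed to make complete framebuffer object: %x", status);
}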
- Convert the sticker image we want to use into texture data, ready to be blended in once a face position is detected
- (GLuint)setupTexture:(NSString *)fileName {
CGImageRef spriteImage = [UIImage imageNamed:fileName].CGImage;
if (!spriteImage) {
NSLog(@"Failed to load image %@", fileName);
exit(1);
}
size_t width = CGImageGetWidth(spriteImage);
size_t height = CGImageGetHeight(spriteImage);
GLubyte *spriteData = (GLubyte *)calloc(width * height * 4, sizeof(GLubyte));
CGContextRef context = CGBitmapContextCreate(spriteData, width, height, 8, width * 4, CGImageGetColorSpace(spriteImage), kCGImageAlphaPremultipliedLast);
//The texture coordinate origin is at the bottom-left, so flip the image vertically
CGContextTranslateCTM(context, 0, height);
CGContextScaleCTM (context, 1.0, -1.0);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), spriteImage);
CGContextRelease(context);
GLuint texture;
glActiveTexture(GL_TEXTURE2);
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int32_t)width, (int32_t)height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
free(spriteData);
return texture;
}
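The returned texture name is kept around and reused every frame when drawing the sticker; in the demo it presumably ends up in the _myTexture ivar used later, along these lines (the image name here is only a placeholder):
//"face_sticker" is a placeholder; use whatever sticker image your project bundles
_myTexture = [self setupTexture:@"face_sticker"];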
- The core of this post: render the video frame and detect whether there is a face; if there is, compute the face position, convert the coordinates, and render the sticker on top. Straight to the code:
- (void)displayPixelBuffer:(CVPixelBufferRef)pixelBuffer {
if (pixelBuffer != NULL) {
int width = (int)CVPixelBufferGetWidth(pixelBuffer);
int height = (int)CVPixelBufferGetHeight(pixelBuffer);
if (!_videoTextureCache) {
NSLog(@"NO Video Texture Cache");
return;
}
//Make sure the current context is the right one, otherwise things will go wrong
if ([EAGLContext currentContext] != _context) {
[EAGLContext setCurrentContext:_context];
}
[self cleanUpTextures];
glActiveTexture(GL_TEXTURE0);
//The following Y-plane and UV-plane texture handling is adapted from GPUImage
CVReturn err;
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
_videoTextureCache,
pixelBuffer,
NULL,
GL_TEXTURE_2D,
GL_RED_EXT,
width,
height,
GL_RED_EXT,
GL_UNSIGNED_BYTE,
0,
&_lumaTexture);
if (err) {
NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
}
glBindTexture(CVOpenGLESTextureGetTarget(_lumaTexture), CVOpenGLESTextureGetName(_lumaTexture));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// UV-plane.
glActiveTexture(GL_TEXTURE1);
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
_videoTextureCache,
pixelBuffer,
NULL,
GL_TEXTURE_2D,
GL_RG_EXT,
width / 2,
height / 2,
GL_RG_EXT,
GL_UNSIGNED_BYTE,
1,
&_chromaTexture);
if (err) {
NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
}
glBindTexture(CVOpenGLESTextureGetTarget(_chromaTexture), CVOpenGLESTextureGetName(_chromaTexture));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
//Adjust the viewport
glViewport(0, 0, _backingWidth, _backingHeight);
}
//Set up blending
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
glClearColor(0, 0, 0, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
//Bind the FBO
glBindFramebuffer(GL_FRAMEBUFFER, _frameBuffer);
[self.shaderManager useProgram];
glUniform1i(glViewUniforms[UNIFORM_Y], 0);
glUniform1i(glViewUniforms[UNIFORM_UV], 1);
//The raw video frame's coordinate system differs from the texture coordinate system; rotate it so the rendered video comes out upright
glUniformMatrix4fv(glViewUniforms[UNIFORM_ROTATE_MATRIX], 1, GL_FALSE, GLKMatrix4MakeXRotation(M_PI).m);
//Vertex coordinates: bottom-left is (-1, -1), top-right is (1, 1)
GLfloat quadVertexData[] = {
-1, -1,
1, -1 ,
-1, 1,
1, 1,
};
// Update the vertex data
glVertexAttribPointer(glViewAttributes[ATTRIB_VERTEX], 2, GL_FLOAT, 0, 0, quadVertexData);
glEnableVertexAttribArray(glViewAttributes[ATTRIB_VERTEX]);
GLfloat quadTextureData[] = { // standard texture coordinates
0, 0,
1, 0,
0, 1,
1, 1
};
//Enable the texture coordinate attribute and pass in the texture coordinate data
glVertexAttribPointer(glViewAttributes[ATTRIB_TEXCOORD], 2, GL_FLOAT, GL_FALSE, 0, quadTextureData);
glEnableVertexAttribArray(glViewAttributes[ATTRIB_TEXCOORD]);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
//Check whether the frame contains a face; if so, render the sticker at the face's position
[LYFaceDetector detectCVPixelBuffer:pixelBuffer completionHandler:^(CIFaceFeature *result, CIImage *ciImage) {
if (result) {
[self renderTempTexture:result ciImage:ciImage];
}
}];
glBindRenderbuffer(GL_RENDERBUFFER, _renderBuffer);
if ([EAGLContext currentContext] == _context) {
[_context presentRenderbuffer:GL_RENDERBUFFER];
}
}
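The cleanUpTextures call near the top of displayPixelBuffer: isn't listed in this post; it releases the previous frame's luma/chroma textures and flushes the texture cache, typically something like this:
- (void)cleanUpTextures {
    //Release the textures created from the previous pixel buffer
    if (_lumaTexture) {
        CFRelease(_lumaTexture);
        _lumaTexture = NULL;
    }
    if (_chromaTexture) {
        CFRelease(_chromaTexture);
        _chromaTexture = NULL;
    }
    //Flush the texture cache so its internal resources can be recycled
    CVOpenGLESTextureCacheFlush(_videoTextureCache, 0);
}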
- Convert the face coordinates and compute the sticker's render coordinates (for simplicity, only a single CIFaceFeature containing a face is returned here)
- (void)renderTempTexture:(CIFaceFeature *)faceFeature ciImage:(CIImage *)ciImage {
dispatch_semaphore_wait(_lock, DISPATCH_TIME_FOREVER);
//Get the image size
CGSize ciImageSize = [ciImage extent].size;
//Build the transform: Core Image's origin is at the bottom-left, UIKit's at the top-left, so flip vertically
CGAffineTransform transform = CGAffineTransformScale(CGAffineTransformIdentity, 1, -1);
transform = CGAffineTransformTranslate(transform,0,-ciImageSize.height);
// Map from image coordinates to view coordinates
CGSize viewSize =self.layer.bounds.size;
CGFloat scale = MIN(viewSize.width / ciImageSize.width,viewSize.height / ciImageSize.height);
CGFloat offsetX = (viewSize.width - ciImageSize.width * scale) / 2;
CGFloat offsetY = (viewSize.height - ciImageSize.height * scale) / 2;
// Scale
CGAffineTransform scaleTransform = CGAffineTransformMakeScale(scale, scale);
//Get the face's frame (flipped into UIKit coordinates)
CGRect faceViewBounds = CGRectApplyAffineTransform(faceFeature.bounds, transform);
// Apply the scale correction
faceViewBounds = CGRectApplyAffineTransform(faceViewBounds,scaleTransform);
faceViewBounds.origin.x += offsetX;
faceViewBounds.origin.y += offsetY;
//Use the program associated with the sticker texture
[self.textureManager useProgram];
//Bind the sticker texture
glBindTexture(GL_TEXTURE_2D, _myTexture);
//Tell the shader which texture unit holds the sticker (unit 2, activated in setupTexture:)
glUniform1i(glViewUniforms[UNIFORM_TEMP_INPUT_IMG_TEXTURE], 2);
CGFloat midX = CGRectGetMidX(self.layer.bounds);
CGFloat midY = CGRectGetMidY(self.layer.bounds);
CGFloat originX = CGRectGetMinX(faceViewBounds);
CGFloat originY = CGRectGetMinY(faceViewBounds);
CGFloat maxX = CGRectGetMaxX(faceViewBounds);
CGFloat maxY = CGRectGetMaxY(faceViewBounds);
//Sticker vertices: convert the face rect from view coordinates to normalized device coordinates
GLfloat minVertexX = (originX - midX) / midX;
GLfloat minVertexY = (midY - maxY) / midY;
GLfloat maxVertexX = (maxX - midX) / midX;
GLfloat maxVertexY = (midY - originY) / midY;
GLfloat quadData[] = {
minVertexX, minVertexY,
maxVertexX, minVertexY,
minVertexX, maxVertexY,
maxVertexX, maxVertexY,
};
//Enable the sticker vertex attribute and pass in the vertex data
glVertexAttribPointer(glViewAttributes[ATTRIB_TEMP_VERTEX], 2, GL_FLOAT, GL_FALSE, 0, quadData);
glEnableVertexAttribArray(glViewAttributes[ATTRIB_TEMP_VERTEX]);
GLfloat quadTextureData[] = { // standard texture coordinates
0, 0,
1, 0,
0, 1,
1, 1
};
//Enable the sticker texture coordinate attribute and pass in the texture coordinate data
glVertexAttribPointer(glViewAttributes[ATTRIB_TEMP_TEXCOORD], 2, GL_FLOAT, GL_FALSE, 0, quadTextureData);
glEnableVertexAttribArray(glViewAttributes[ATTRIB_TEMP_TEXCOORD]);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
dispatch_semaphore_signal(_lock);
}
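One detail the snippet glosses over is _lock, which is presumably a dispatch_semaphore_t created once during setup so that the sticker drawing is serialized with the rest of the rendering:
//Assumed setup, not shown in the post
_lock = dispatch_semaphore_create(1);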
In fact, the CIFaceFeature objects returned by Core Image provide a lot of information, including the face bounds, whether the left and right eyes are open and where they are, the mouth position, and so on. So if we want more detailed stickers, we can convert the eye and mouth positions in the same way and render whatever sticker we like at the corresponding coordinates.
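For example, the relevant CIFaceFeature properties can be read like this (all in Core Image's bottom-left-origin image coordinates, so they need the same transform applied above):
if (faceFeature.hasLeftEyePosition) {
    //Position of the left eye in image coordinates
    CGPoint leftEyePoint = faceFeature.leftEyePosition;
}
if (faceFeature.hasMouthPosition) {
    //Position of the mouth in image coordinates
    CGPoint mouthPoint = faceFeature.mouthPosition;
}
//Whether each eye appears closed (meaningful when CIDetectorEyeBlink is passed in the detection options)
BOOL leftEyeClosed = faceFeature.leftEyeClosed;
BOOL rightEyeClosed = faceFeature.rightEyeClosed;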
Additional Notes
This post only walks through the overall implementation of face-detection stickers; many optimizations are not covered. If you need this in a real project, consider using GPUImage + Core Image, or OpenCV, for better results and different requirements. Here is another post explaining face stickers built on GPUImage: 传送门.
The demo is on GitHub: LYFaceDetection
If you have better suggestions or a better approach, feel free to share!
References:
OpenGL ES实践教程(八)blend混合与shader混合
iOS CoreImage (人脸检测/ 换背景/ 抠图 /贴纸/ 实时视频滤镜)