The previous chapters covered how to draw an image and how to flip one; this chapter focuses on using OpenGL ES to draw and present YUV data.
As for the YUV format itself, I won't go into much detail here; a quick search turns up plenty of material. I've picked a few well-written articles for you, so if you're still unsure about the format, have a look at these:
YUV 格式详解-史上最全
CSDN YUV格式到底是什么?
色彩空间模型:RGB、YUV科普
First, a look at the result;
Here we'll use NV12 (that is, 420f), the format you'll encounter most often in practice, to do the drawing.
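Before diving in, it helps to keep NV12's memory layout in mind (a quick reference sketch, not tied to any particular code):
/// NV12 (420f, kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) is bi-planar:
///   plane 0: Y  (luma),   width x height samples, 1 byte per pixel
///   plane 1: UV (chroma), (width/2) x (height/2) samples, U and V interleaved (2 bytes per sample)
///
/// e.g. a 4x4 frame is laid out as:
///   Y Y Y Y
///   Y Y Y Y
///   Y Y Y Y
///   Y Y Y Y
///   U V U V
///   U V U V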
The idea is this: take an mp4 file, load and play it with AVPlayer, attach an AVPlayerItemVideoOutput to the playerItem, grab the video frames (the pixelBuffer data) at 30 frames per second, and then draw each frame's pixelBuffer with OpenGL ES.
1. Set up AVPlayer and AVPlayerItemVideoOutput to get video frames
Configuring the AVPlayerItem and initializing the AVPlayer to load the video resource is routine, so I won't dwell on it. We also set up a timer that grabs the currently playing video frame at 30 FPS. Enough talk, here's the code; most iOS developers should have no trouble following it:
- (void)initParams {
    /// Set up an AVPlayerItemVideoOutput to pull real-time video frame data from the AVPlayerItem.
    /// The pixel format is kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, i.e. YUV 420f.
    NSDictionary *pixelBufferAttribute = @{(id)kCVPixelBufferPixelFormatTypeKey:@(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)};
    AVPlayerItemVideoOutput *videoOutput = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:pixelBufferAttribute];
    _output = videoOutput;
    /// Load the video resource.
    NSString *path = [[NSBundle mainBundle] pathForResource:@"download" ofType:@"mp4"];
    NSURL *pathURL = [NSURL fileURLWithPath:path];
    AVPlayerItem *item = [AVPlayerItem playerItemWithURL:pathURL];
    [item addOutput:_output];
    _resourceURL = pathURL;
    /// Initialize the player.
    [self playWithItem:item];
    /// Start playback and kick off a timer to grab the current video frame.
    [self playPlayer];
}
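For completeness, the two helpers called above might look like this (a minimal sketch: playWithItem and playPlayer appear in the code above, but their bodies here are my assumption):
- (void)playWithItem:(AVPlayerItem *)item {
    /// Wrap the item in a player; the item already carries the video output.
    _player = [AVPlayer playerWithPlayerItem:item];
}

- (void)playPlayer {
    [_player play];
    /// Kick off the 30 FPS frame-grabbing timer shown next.
    [self startTimer];
}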
Create a 30 FPS timer to fetch video frames once the player is playing. You can use a dispatch_source_t, a CADisplayLink, or similar (a CADisplayLink variant is sketched after this block):
- (void)startTimer {
    [self stoptimer];
    /// 30 frames per second
    NSUInteger FPS = 30;
    dispatch_queue_t _queue = dispatch_queue_create("com.render.statistics", NULL);
    dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, _queue);
    dispatch_time_t start = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0 * NSEC_PER_SEC));
    uint64_t interval = (uint64_t)(1.0 / FPS * NSEC_PER_SEC);
    dispatch_source_set_timer(timer, start, interval, 0);
    __weak typeof(self) weakSelf = self;
    dispatch_source_set_event_handler(timer, ^{
        [weakSelf _tick];
    });
    dispatch_resume(timer);
    _timer = timer;
}

- (void)stoptimer {
    if (_timer) dispatch_source_cancel(_timer);
    _timer = nil;
}
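If you'd rather drive the frame grabbing from the display refresh instead of a GCD timer, a CADisplayLink variant looks like this (a sketch; _displayLink is an assumed ivar mirroring _timer above):
- (void)startDisplayLink {
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self selector:@selector(_displayLinkTick:)];
    /// Cap callbacks at 30 per second (iOS 10+).
    link.preferredFramesPerSecond = 30;
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
    _displayLink = link;
}

- (void)_displayLinkTick:(CADisplayLink *)link {
    [self _tick];
}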
- (CVPixelBufferRef)_copyTextureFromPlayItem:(AVPlayerItem *)item {
    AVPlayerItemVideoOutput *output = _output;
    AVAsset *asset = item.asset;
    CMTime time = item.currentTime;
    float offset = time.value * 1.0f / time.timescale;
    float duration = asset.duration.value * 1.0f / asset.duration.timescale;
    /// Pause once playback reaches the end of the file
    /// (>= rather than == since two floats rarely compare exactly equal).
    if (offset >= duration) {
        [self pausePlayer];
        return NULL;
    }
    CVPixelBufferRef pixelBuffer = [output copyPixelBufferForItemTime:time itemTimeForDisplay:nil];
    return pixelBuffer;
}
- (void)_tick {
    AVPlayer *player = _player;
    CVPixelBufferRef pixelBuffer = [self _copyTextureFromPlayItem:player.currentItem];
    /// Hand the pixelBuffer over to the custom YUVView for drawing.
    self.renderView.pixelBuffer = pixelBuffer;
    if (pixelBuffer) {
        CFRelease(pixelBuffer);
    }
}
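One refinement worth knowing about: AVPlayerItemVideoOutput can report whether a new frame is actually ready, so you can skip ticks that would just copy the same frame again. A variant of _tick using that API (hasNewPixelBufferForItemTime: and itemTimeForHostTime: are standard AVFoundation; the rest mirrors the code above) might look like this:
- (void)_tickCheckingForNewFrame {
    CMTime time = [_output itemTimeForHostTime:CACurrentMediaTime()];
    /// Nothing new since the last copy: skip this tick.
    if (![_output hasNewPixelBufferForItemTime:time]) {
        return;
    }
    CVPixelBufferRef pixelBuffer = [_output copyPixelBufferForItemTime:time itemTimeForDisplay:nil];
    self.renderView.pixelBuffer = pixelBuffer;
    if (pixelBuffer) {
        CFRelease(pixelBuffer);
    }
}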
2. Set up the CAEAGLLayer and initialize the required configuration
The code so far gets us a YUV 420f pixelBuffer; the code below is the actual drawing process.
As in the previous chapters, we need a custom UIView whose backing layer is a CAEAGLLayer (override +layerClass to return it), or a CAEAGLLayer used directly:
- (instancetype)initWithFrame:(CGRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        // 1. set up the layer
        [self setupLayer];
        // 2. set up the OpenGL context
        [self setupContext];
        // 3. load the shaders
        [self loadShaders];
        // 4. set up the framebuffer
        [self setupFrameBuffer];
    }
    return self;
}

+ (Class)layerClass {
    return [CAEAGLLayer class];
}
The flow is the same as drawing a single image:
- set up the layer
- set up the OpenGL context
- load the shaders (vertex shader, fragment shader)
- set up the framebuffer and pass in the vertex coordinates and texture coordinates
3. The vertex shader
const NSString *vertexShader = @" \
    attribute vec4 position; \
    attribute vec2 texCoord; \
    uniform float preferredRotation; \
    varying vec2 texCoordVarying; \
    void main() \
    { \
        mat4 rotationMatrix = mat4(cos(preferredRotation), -sin(preferredRotation), 0.0, 0.0, \
                                   sin(preferredRotation),  cos(preferredRotation), 0.0, 0.0, \
                                   0.0, 0.0, 1.0, 0.0, \
                                   0.0, 0.0, 0.0, 1.0); \
        gl_Position = position * rotationMatrix; \
        texCoordVarying = texCoord; \
    } \
";
- position: the vertex coordinate
- texCoord: the texture coordinate
- preferredRotation: the rotation angle, in radians
- texCoordVarying: the varying passed through to the fragment shader
Remember method 1 from the previous article, OpenGLES学习 ---- (3)图片翻转, about flipping an image? This rotation matrix solves the same problem: the UIKit coordinate system and the OpenGL coordinate system don't match (the video image ends up flipped), so without it the rendered video would be upside down. A worked check follows below.
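To see why this works, plug in 180 degrees: cos(π) = -1 and sin(π) = 0, so the matrix collapses to a sign flip of x and y (a worked check of the shader above, nothing new):
/// preferredRotation = π:
/// mat4(-1.0,  0.0, 0.0, 0.0,
///       0.0, -1.0, 0.0, 0.0,
///       0.0,  0.0, 1.0, 0.0,
///       0.0,  0.0, 0.0, 1.0)
/// gl_Position = position * rotationMatrix turns (x, y) into (-x, -y),
/// i.e. the frame is rotated 180 degrees, which cancels the upside-down image.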
4. The fragment shader
const NSString *fragmentShader = @" \
    precision mediump float; \
    varying highp vec2 texCoordVarying; \
    uniform sampler2D SamplerY; \
    uniform sampler2D SamplerUV; \
    uniform mat3 colorConversionMatrix; \
    void main() \
    { \
        mediump vec3 yuv; \
        lowp vec3 rgb; \
        yuv.x = (texture2D(SamplerY, texCoordVarying).r - (16.0/255.0)); \
        yuv.yz = (texture2D(SamplerUV, texCoordVarying).rg - vec2(0.5, 0.5)); \
        rgb = colorConversionMatrix * yuv; \
        gl_FragColor = vec4(rgb, 1); \
    } \
";
- texCoordVarying: the texture coordinate
- SamplerY: the Y plane (the luma, i.e. the black-and-white image)
- SamplerUV: the UV plane (the chroma, i.e. the color)
- colorConversionMatrix: the YUV-to-RGB conversion matrix
After sampling the YUV values, a matrix multiplication converts them to RGB, and the result is assigned to the built-in variable gl_FragColor, which is how the frame gets drawn. A quick numeric check follows.
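As a sanity check of that math, take a pure white video-range pixel, Y' = 235 and U = V = 128, and run it through the BT.601 matrix from the next section:
/// yuv.x = 235/255 - 16/255      ≈ 0.8588
/// yuv.y = yuv.z = 128/255 - 0.5 ≈ 0.0
/// rgb   = colorConversionMatrix * yuv
///       ≈ (1.164 * 0.8588, 1.164 * 0.8588, 1.164 * 0.8588)
///       ≈ (1.0, 1.0, 1.0)   -> white, as expected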
5. BT.601 and BT.709
// BT.601, the standard for SDTV.
// (Column-major, as expected by glUniformMatrix3fv with transpose = GL_FALSE.)
static const GLfloat kColorConversion601[] = {
    1.164,  1.164, 1.164,
    0.0,   -0.392, 2.017,
    1.596, -0.813, 0.0,
};

// BT.709, the standard for HDTV.
static const GLfloat kColorConversion709[] = {
    1.164,  1.164, 1.164,
    0.0,   -0.213, 2.112,
    1.793, -0.533, 0.0,
};
Briefly, why are there both BT.601 and BT.709? In essence they define two different color spaces: BT.601 was specified for standard-definition video and BT.709 for high-definition video. The RGB data a camera captures is converted to YUV according to the BT.601 or BT.709 standard, so when we go the other way and restore the YUV data to RGB, we have to use the matching conversion matrix.
BT601和BT709到底什么关系
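One caveat: the two matrices above (together with the 16/255 offset in the fragment shader) are written for video-range data, while this article requests full-range buffers (420f). Strictly speaking, full-range data should skip the 16/255 offset and use a full-range matrix; the constants below follow my recollection of Apple's AVBasicVideoOutput sample, so verify them against your own reference before relying on them:
// BT.601 full range; with this matrix, drop the "- (16.0/255.0)" term in the shader.
static const GLfloat kColorConversion601FullRange[] = {
    1.0,    1.0,    1.0,
    0.0,   -0.343,  1.765,
    1.4,   -0.711,  0.0,
};
In practice the video-range matrices only cost a slight contrast shift on full-range content, which is why the code in this article still looks fine on screen.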
6. Extract the Y data and the UV data from the pixelBuffer
After getting the CVPixelBufferRef, call CVPixelBufferGetPlaneCount() to check whether the buffer's plane count is 2; if it is, this is a bi-planar YUV buffer.
Call CVBufferGetAttachment() with the key kCVImageBufferYCbCrMatrixKey to read the pixelBuffer's color space and decide whether the conversion should use the BT.601 or the BT.709 matrix.
The important step is CVOpenGLESTextureCacheCreateTextureFromImage(), which turns the Y plane and the UV plane into OpenGL textures.
From the vertex shader and fragment shader walkthroughs above, we know we now have to bind the Y texture and the UV texture to their texture units:
GLint samplerY = glGetUniformLocation(_myProgram, "SamplerY");
GLint samplerUV = glGetUniformLocation(_myProgram, "SamplerUV");
glUniform1i(samplerY, 0);   // texture unit 0 -> Y
glUniform1i(samplerUV, 1);  // texture unit 1 -> UV
Then assign the color conversion matrix:
GLint colorConversionMatrix = glGetUniformLocation(_myProgram, "colorConversionMatrix");
glUniformMatrix3fv(colorConversionMatrix, 1, GL_FALSE, _preferredConversion);
float radians = 180 * 3.14159f / 180.0f;
Here we rotate by 180 degrees: the angle is converted to radians and assigned to the preferredRotation uniform.
The full method looks roughly like this:
- (void)setPixelBuffer:(CVPixelBufferRef)pixelBuffer {
    if (!pixelBuffer) {
        return;
    }
    if (_pixelBuffer) {
        CFRelease(_pixelBuffer);
    }
    _pixelBuffer = CVPixelBufferRetain(pixelBuffer);
    [self ensureCurentContext];
    uint32_t width = (uint32_t)CVPixelBufferGetWidth(_pixelBuffer);
    uint32_t height = (uint32_t)CVPixelBufferGetHeight(_pixelBuffer);
    size_t planeCount = CVPixelBufferGetPlaneCount(_pixelBuffer);
    /// Match the source pixelBuffer's color space: BT.601 or BT.709.
    CFTypeRef colorAttachments = CVBufferGetAttachment(_pixelBuffer, kCVImageBufferYCbCrMatrixKey, NULL);
    if (CFStringCompare(colorAttachments, kCVImageBufferYCbCrMatrix_ITU_R_601_4, 0) == kCFCompareEqualTo) {
        _preferredConversion = kColorConversion601;
    } else {
        _preferredConversion = kColorConversion709;
    }
    /// Note: creating the texture cache on every frame works, but in production
    /// you would create it once and reuse it.
    CVOpenGLESTextureCacheRef _videoTextureCache;
    CVReturn error = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, _myContext, NULL, &_videoTextureCache);
    if (error != kCVReturnSuccess) {
        NSLog(@"CVOpenGLESTextureCacheCreate error %d", error);
        return;
    }
    /// Create the Y (luma) texture on texture unit 0.
    glActiveTexture(GL_TEXTURE0);
    error = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                         _videoTextureCache,
                                                         _pixelBuffer,
                                                         NULL,
                                                         GL_TEXTURE_2D,
                                                         GL_RED_EXT,
                                                         width,
                                                         height,
                                                         GL_RED_EXT,
                                                         GL_UNSIGNED_BYTE,
                                                         0,
                                                         &_lumaTexture);
    if (error) {
        NSLog(@"error for CVOpenGLESTextureCacheCreateTextureFromImage %d", error);
    }
    glBindTexture(CVOpenGLESTextureGetTarget(_lumaTexture), CVOpenGLESTextureGetName(_lumaTexture));
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    if (planeCount == 2) {
        /// Create the UV (chroma) texture on texture unit 1.
        /// The chroma plane is half the luma plane's size in both dimensions.
        glActiveTexture(GL_TEXTURE1);
        error = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                             _videoTextureCache,
                                                             _pixelBuffer,
                                                             NULL,
                                                             GL_TEXTURE_2D,
                                                             GL_RG_EXT,
                                                             width / 2,
                                                             height / 2,
                                                             GL_RG_EXT,
                                                             GL_UNSIGNED_BYTE,
                                                             1,
                                                             &_chromaTexture);
        if (error) {
            NSLog(@"error for CVOpenGLESTextureCacheCreateTextureFromImage %d", error);
        }
        glBindTexture(CVOpenGLESTextureGetTarget(_chromaTexture), CVOpenGLESTextureGetName(_chromaTexture));
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    }
    glDisable(GL_DEPTH_TEST);
    glBindFramebuffer(GL_FRAMEBUFFER, _frameBuffer);
    glViewport(0, 0, _width, _height);
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);   /// actually clear; glClearColor alone only sets the color
    glUseProgram(_myProgram);
    GLint samplerY = glGetUniformLocation(_myProgram, "SamplerY");
    GLint samplerUV = glGetUniformLocation(_myProgram, "SamplerUV");
    GLint colorConversionMatrix = glGetUniformLocation(_myProgram, "colorConversionMatrix");
    GLint rotation = glGetUniformLocation(_myProgram, "preferredRotation");
    /// Rotation angle: 180 degrees, in radians.
    float radians = 180 * 3.14159f / 180.0f;
    /// Point the samplers at texture unit 0 (Y) and unit 1 (UV).
    glUniform1i(samplerY, 0);
    glUniform1i(samplerUV, 1);
    glUniform1f(rotation, radians);
    /// Upload the color conversion matrix (BT.601 / BT.709).
    glUniformMatrix3fv(colorConversionMatrix, 1, GL_FALSE, _preferredConversion);
    /// Draw.
    glDrawArrays(GL_TRIANGLES, 0, 6);
    [_myContext presentRenderbuffer:GL_RENDERBUFFER];
    /// Release the per-frame textures and flush the cache.
    [self cleanUpTextures];
    CVOpenGLESTextureCacheFlush(_videoTextureCache, 0);
    if (_videoTextureCache) {
        CFRelease(_videoTextureCache);
    }
}
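cleanUpTextures isn't shown above; a minimal version simply releases the two per-frame texture references (a sketch, assuming _lumaTexture and _chromaTexture are the ivars used in setPixelBuffer: above):
- (void)cleanUpTextures {
    if (_lumaTexture) {
        CFRelease(_lumaTexture);
        _lumaTexture = NULL;
    }
    if (_chromaTexture) {
        CFRelease(_chromaTexture);
        _chromaTexture = NULL;
    }
}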
Due to space constraints, this article can't include all of the code; it only shows the core pieces and the overall approach. If anything is unclear, have a look at the source.
Source: https://github.com/hunter858/OpenGL_Study
https://github.com/hunter858/OpenGL_Study/OpenGL014-YUV视频绘制