Learning OpenGL ES ---- Rendering RGB Video (7)

The previous chapter covered rendering YUV video data; this article covers drawing RGB-format video frames. In terms of difficulty, RGB is actually the simpler case. This post also takes the opportunity to optimize the demo: the previous demo drew directly from vertex data, while this one uses indexed drawing to reduce the number of vertices.

First, as usual, a screenshot of the result:


(Screenshot: 可可爱爱.png)

The approach:

The approach is the same as for the YUV video: take an mp4 file, load and play it with AVPlayer, attach an AVPlayerItemVideoOutput to the playerItem, pull 30 video frames per second from it (the pixelBuffer data), and then draw each pixelBuffer with OpenGL ES.

The key difference

What changes is the kCVPixelBufferPixelFormatTypeKey configuration, which becomes kCVPixelFormatType_32BGRA, because this time the data we want is 32-bit BGRA rather than YUV420f.

1. Set up AVPlayer and AVPlayerItemVideoOutput to fetch video frames

- (void)initParams {
    
    /// Create the AVPlayerItemVideoOutput used to pull real-time video frames from the AVPlayerItem
    /// The pixel format is set to kCVPixelFormatType_32BGRA
    NSDictionary *pixelBufferAttribute = @{(id)kCVPixelBufferPixelFormatTypeKey:@(kCVPixelFormatType_32BGRA)};
    AVPlayerItemVideoOutput *videoOutput = [[AVPlayerItemVideoOutput alloc]initWithPixelBufferAttributes:pixelBufferAttribute];
    _output = videoOutput;
    
    /// Load the video asset
    NSString *path = [[NSBundle mainBundle] pathForResource:@"download" ofType:@"mp4"];
    NSURL *pathURL = [NSURL fileURLWithPath:path];
    AVPlayerItem *item = [AVPlayerItem playerItemWithURL:pathURL];
    [item addOutput:_output];
    _resourceURL = pathURL;

    /// Initialize the player
    [self playWithItem:item];
    /// Start playback and start a timer that pulls the current video frame
    [self playPlayer];
}

Create a 30 FPS timer that, once the player is playing, fetches the 32BGRA CVPixelBufferRef for the current time; either a dispatch_source_t or a CADisplayLink works here.

- (void)startTimer {
    [self stopTimer];
    /// 30 frames per second
    NSUInteger FPS = 30;
    dispatch_queue_t queue = dispatch_queue_create("com.render.statistics", NULL);
    dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue);
    dispatch_time_t start = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0 * NSEC_PER_SEC));
    uint64_t interval = (uint64_t)(1.0 / FPS * NSEC_PER_SEC);
    
    dispatch_source_set_timer(timer, start, interval, 0);
    
    __weak typeof(self) weakSelf = self;
    dispatch_source_set_event_handler(timer, ^{
        [weakSelf _tick];
    });
    dispatch_resume(timer);
    _timer = timer;
}

- (void)stopTimer {
    if (_timer) dispatch_source_cancel(_timer);
    _timer = nil;
}


- (CVPixelBufferRef)_copyTextureFromPlayItem:(AVPlayerItem *)item {
    AVPlayerItemVideoOutput *output = _output;
    
    AVAsset *asset = item.asset;
    CMTime time = item.currentTime;
    float offset = time.value * 1.0f / time.timescale;
    float duration = asset.duration.value * 1.0f / asset.duration.timescale;
    /// Exact float equality rarely fires; treat "at or past the end" as finished
    if (offset >= duration) {
        [self pausePlayer];
        return NULL;
    }
    CVPixelBufferRef pixelBuffer = [output copyPixelBufferForItemTime:time itemTimeForDisplay:nil];
    return pixelBuffer;
}


2. Set up the CAEAGLLayer and initialize the required configuration

The code so far gets us a BGRA pixelBuffer; the code below is the actual drawing.
As in the previous demo, we subclass UIView and override +layerClass so the view is backed by a CAEAGLLayer:

- (instancetype)initWithFrame:(CGRect)frame{
    self = [super initWithFrame:frame];
    if (self) {
        //1. Set up the layer
        [self setupLayer];
        
        //2. Set up the GL context
        [self setupContext];
        
        //3. Load the shaders
        [self loadShaders];
        
        //4. Set up the framebuffer
        [self setupFrameBuffer];
    }
    return self;
}

+(Class)layerClass
{
    return [CAEAGLLayer class];
}


3. Vertex coordinates and index data

GLfloat Vertex[]  = {
    -1.0f, 1.0f,     0.0f, 1.0f, // top-left A
    1.0f, 1.0f,      1.0f, 1.0f, // top-right B
    1.0f, -1.0f,     1.0f, 0.0f, // bottom-right C
    -1.0f, -1.0f,    0.0f, 0.0f, // bottom-left D
};

GLuint elementIndex[] =
{
    0, 3, 2,
    0, 2, 1,
};

A few words on indexed drawing.
Previously, a rectangle took 6 vertex entries to draw its 2 triangles.
In the figure below, the four vertices A, B, C, D correspond to indices 0, 1, 2, 3.
Now the vertex array only needs 4 entries; the 2 triangles are drawn through the index list, in the order A→D→C and then A→C→B.

The figure below helps illustrate this:

(Figure: image.png)

4. The vertex shader

- position: the vertex position;
- texCoord: the texture coordinate;
- preferredRotation: the rotation in radians;
- texCoordVarying: the texture coordinate passed on to the fragment shader.

NSString *const vertexShader = @"                                                           \
attribute vec4 position;                                                                    \
attribute vec4 texCoord;                                                                    \
uniform float preferredRotation;                                                            \
varying vec2 texCoordVarying;                                                               \
void main()                                                                                 \
{                                                                                           \
    mat4 rotationMatrix = mat4(cos(preferredRotation), -sin(preferredRotation), 0.0, 0.0,   \
                               sin(preferredRotation),  cos(preferredRotation), 0.0, 0.0,   \
                               0.0,                        0.0, 1.0, 0.0,                   \
                               0.0,                        0.0, 0.0, 1.0);                  \
    gl_Position = position * rotationMatrix;                                                \
    texCoordVarying = texCoord.xy;                                                          \
}                                                                                           \
";

5. The fragment shader

- texture: the RGB texture (the image data);
- texCoordVarying: the texture coordinate.

NSString *const fragmentShader = @"                                        \
varying highp vec2 texCoordVarying;                                        \
uniform sampler2D texture;                                                 \
void main()                                                                \
{                                                                          \
    gl_FragColor = texture2D(texture, texCoordVarying);                    \
}                                                                          \
";

6. Processing the pixelBuffer

The key step is obtaining the RGB image data as a texture via the CVOpenGLESTextureCacheCreateTextureFromImage() function.

From the shader walkthrough above we know that, unlike the YUV version with its separate Y and UV textures, here only a single RGB texture needs to be bound to its sampler uniform:

GLuint textureUniform = glGetUniformLocation(_program, "texture");
glUniform1i(textureUniform, 0);

A rotation of 180° is converted to radians and assigned to the uniform preferredRotation:

GLint rotation = glGetUniformLocation(_program, "preferredRotation");
float radius = 180 * 3.14159f / 180.0f;
glUniform1f(rotation, radius);

Indexed drawing uses glDrawElements(), passing the primitive type, the number of indices, the index data type, and a pointer to the index data:
glDrawElements(GL_TRIANGLES, sizeof(elementIndex)/sizeof(elementIndex[0]), GL_UNSIGNED_INT, elementIndex);

- (void)setPixelBuffer:(CVPixelBufferRef)pixelBuffer{
   
   if (!pixelBuffer) {
       return;
   }
   if (_pixelBuffer) {
       CFRelease(_pixelBuffer);
   }
   _pixelBuffer = CVPixelBufferRetain(pixelBuffer);
   [self ensureCurentContext];
   
   uint32_t width = (int)CVPixelBufferGetWidth(_pixelBuffer);
   uint32_t height = (int)CVPixelBufferGetHeight(_pixelBuffer);

   
   CVOpenGLESTextureCacheRef _videoTextureCache;
   /// Note: creating the texture cache on every frame is wasteful; in a real app, create it once and reuse it
   CVReturn error = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, _eaglContext, NULL, &_videoTextureCache);
   if (error != kCVReturnSuccess) {
       NSLog(@"CVOpenGLESTextureCacheCreate error %d", error);
       return;
   }
   
   glActiveTexture(GL_TEXTURE0);
   /// Get the RGBA texture
   error = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                        _videoTextureCache,
                                                        _pixelBuffer,
                                                        NULL,
                                                        GL_TEXTURE_2D,
                                                        GL_RGBA,
                                                        width,
                                                        height,
                                                        GL_BGRA,
                                                        GL_UNSIGNED_BYTE,
                                                        0,
                                                        &_rgbTexture);
   if (error != kCVReturnSuccess) {
       NSLog(@"error for CVOpenGLESTextureCacheCreateTextureFromImage %d", error);
   }
   glBindTexture(CVOpenGLESTextureGetTarget(_rgbTexture), CVOpenGLESTextureGetName(_rgbTexture));
   glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
   glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
   glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
   glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

   glDisable(GL_DEPTH_TEST);
   glBindFramebuffer(GL_FRAMEBUFFER, _frameBuffer);
   glViewport(0, 0, _width, _height);
   glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
   glClear(GL_COLOR_BUFFER_BIT);
   glUseProgram(_program);
   GLuint textureUniform = glGetUniformLocation(_program, "texture");
   
   
   /// Uniform for the rotation
   GLint rotation = glGetUniformLocation(_program, "preferredRotation");
   
   /// Rotation angle: 180° in radians
   float radius = 180 * 3.14159f / 180.0f;
   
   /// Point the sampler uniform at texture unit 0, i.e. the RGB texture
   glUniform1i(textureUniform, 0);

   /// Set the rotation uniform on the current program
   glUniform1f(rotation, radius);
   /// Draw; GL_UNSIGNED_INT indices require OES_element_index_uint on ES 2.0
   glDrawElements(GL_TRIANGLES, sizeof(elementIndex)/sizeof(elementIndex[0]), GL_UNSIGNED_INT, elementIndex);

   [_eaglContext presentRenderbuffer:GL_RENDERBUFFER];
   
   /// Clean up the textures and release memory
   [self cleanUpTextures];
   CVOpenGLESTextureCacheFlush(_videoTextureCache, 0);
   if(_videoTextureCache) {
       CFRelease(_videoTextureCache);
   }
}

For reasons of length this post only shows the core code and the overall approach; if anything is unclear, the full source is available here:
Source: https://github.com/hunter858/OpenGL_Study/OpenGL015-RGB视频绘制
