Rendering Video with Metal

Before reading this article, please first read "Rendering the Camera with Metal" and "RGB and YUV Color Encoding".

First, the functionality this example implements:
use AVAssetReader to load a video and render it to an MTKView frame by frame.

Let's look at the core code.

1. Code in the .metal file:

#include <metal_stdlib>
using namespace metal;

#import "CQVideo.h"

typedef struct {
    float4 clipSpacePosition [[position]]; // [[position]] marks this as the clip-space vertex position
    float2 textureCoordinate; // texture coordinate
} RasterizerData;

vertex RasterizerData vertexShaderVideo(uint vertexIndex [[vertex_id]],
                                        constant CQVideoVertex *vertexArray [[buffer(CQVVertexInputIndexVertices)]]) {
    RasterizerData out;
    out.clipSpacePosition = vertexArray[vertexIndex].position;
    out.textureCoordinate = vertexArray[vertexIndex].textureCoordinate;
    return out;
}

fragment float4 fragmentShaderVideo(RasterizerData in [[stage_in]],
                                    texture2d<float> textureY [[texture(CQVFragmentTextureIndexTextureY)]],
                                    texture2d<float> textureUV [[texture(CQVFragmentTextureIndexTextureUV)]],
                                    constant CQConvertMatrix *convertMatrix [[buffer(CQVFragmentBufferIndexMatrix)]]) {
    
    constexpr sampler textureSampler (filter::linear);
    float y = textureY.sample(textureSampler, in.textureCoordinate).r;
    float2 uv = textureUV.sample(textureSampler, in.textureCoordinate).rg;
    float3 yuv = float3(y, uv);
    //convert YUV to RGB
    float3 rgb = convertMatrix->matrix * (yuv + convertMatrix->offset);
    return float4(rgb, 1.0);
}

That is all the code in the .metal file. A brief walkthrough of the fragment function:

  • textureSampler: the texture sampler.
  • textureY: the texture passed in from the Objective-C side has a single 8-bit normalized unsigned integer component (MTLPixelFormatR8Unorm), so reading the r component gives the Y value for that texel.
  • textureUV: the texture passed in from the Objective-C side has two 8-bit normalized unsigned integer components (MTLPixelFormatRG8Unorm), so reading the rg components gives the UV values for that texel.
  • convertMatrix: the matrix (with its offset) that converts YUV to RGB.
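
The fragment function's conversion can be checked on the CPU with a small C sketch (a minimal illustration, not part of the project's code). It mirrors the shader's `rgb = matrix * (yuv + offset)` with BT.601 full-range coefficients; note that each braced `simd_float3` in the article's matrices is a column, and the offset `{0.0, -0.5, -0.5}` is an assumption for full-range input:

```c
#include <assert.h>
#include <math.h>

// BT.601 full-range YUV -> RGB, mirroring the fragment shader's
// rgb = matrix * (yuv + offset). Columns of the 3x3 matrix:
//   c0 = {1.0, 1.0, 1.0}, c1 = {0.0, -0.343, 1.765}, c2 = {1.4, -0.711, 0.0}
// Full-range offset assumed here: {0.0, -0.5, -0.5}.
static void yuv_to_rgb_601_full(float y, float u, float v, float rgb[3]) {
    float yp = y;          // full-range luma needs no 16/255 offset
    float up = u - 0.5f;   // center chroma around zero
    float vp = v - 0.5f;
    rgb[0] = 1.0f * yp + 0.0f   * up + 1.4f   * vp; // R
    rgb[1] = 1.0f * yp - 0.343f * up - 0.711f * vp; // G
    rgb[2] = 1.0f * yp + 1.765f * up + 0.0f   * vp; // B
}
```

A neutral sample (Y = 0.5, U = V = 0.5) maps to mid gray (0.5, 0.5, 0.5), which is a quick sanity check that the matrix columns and the offset are consistent.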

Some of the data types above are defined in our own CQVideo.h header so that the Metal shader and the Objective-C code can share them. Here is that file:

#ifndef CQVideo_h
#define CQVideo_h
#include <simd/simd.h>

typedef struct {
    vector_float4 position;
    vector_float2 textureCoordinate;
} CQVideoVertex;

typedef struct {
    matrix_float3x3 matrix;//3x3 conversion matrix
    vector_float3 offset;//offset
} CQConvertMatrix;//conversion-matrix struct

typedef enum CQVVertexInputIndex {
    CQVVertexInputIndexVertices = 0,
} CQVVertexInputIndex;//vertex input indices

typedef enum CQVFragmentBufferIndex {
    CQVFragmentBufferIndexMatrix = 0,
} CQVFragmentBufferIndex;//fragment buffer indices

typedef enum CQVFragmentTextureIndex {
    CQVFragmentTextureIndexTextureY     = 0,//Y texture
    CQVFragmentTextureIndexTextureUV    = 1,//UV texture
} CQVFragmentTextureIndex;//fragment texture indices

#endif /* CQVideo_h */

2. Code in CQMetalVideoVC.m

First, an overview of the whole flow:

  • 1. setupMTKView: configure the MTKView.
  • 2. setupAsset: load the video asset.
  • 3. setupPipeline: build the render pipeline.
  • 4. setupVertex: set up the vertex data.
  • 5. setupMatrix: set up the conversion matrix.

These five steps are how the author breaks down the preparation for drawing.
There is one final step:

  • 6. Drawing, which naturally happens in the MTKView delegate method drawInMTKView:.

Let's look at each step in detail.

2.1 setupMTKView
    self.mtkView = [[MTKView alloc] initWithFrame:self.view.bounds];
    self.mtkView.device = MTLCreateSystemDefaultDevice();
    if (!self.mtkView.device) {
        NSLog(@"Metal is not supported on this device");
        return;
    }
    self.view = self.mtkView;
    self.mtkView.delegate = self;
    self.viewportSize = (vector_uint2){self.mtkView.drawableSize.width, self.mtkView.drawableSize.height};

Nothing new here; this is the standard setup.

2.2 setupAsset: loading the video asset
NSURL *url = [[NSBundle mainBundle] URLForResource:@"recoder" withExtension:@"mp4"];
self.reader = [[CQAssetReader alloc] initWithUrl:url];
    
CVMetalTextureCacheCreate(NULL, NULL, self.mtkView.device, NULL, &_textureCache);
  • CVMetalTextureCacheCreate creates _textureCache, a CVMetalTextureCacheRef. Core Video uses this high-speed texture cache as a fast CPU/GPU channel for reading texture data.

This uses the custom class CQAssetReader. Let's look at it in detail:

#pragma mark - readBuffer
- (CMSampleBufferRef)readBuffer {
    [lock lock];
    CMSampleBufferRef sampleBufferRef = nil;
    if (assetReaderTrackOutput) {
        //copy the next sample buffer's contents into sampleBufferRef
        sampleBufferRef = [assetReaderTrackOutput copyNextSampleBuffer];
    }

    if (assetReader && assetReader.status == AVAssetReaderStatusCompleted) {
        //the reader has finished the file: tear it down and set it up again so the video loops
        assetReaderTrackOutput = nil;
        assetReader = nil;
        [self setupAsset];
    }
    [lock unlock];
    return sampleBufferRef;
}
- (void)setupAsset {
    //defaults to NO; YES asks the asset to provide precise duration and timing
    NSDictionary *options = @{AVURLAssetPreferPreciseDurationAndTimingKey:@(YES)};
    AVURLAsset *inputAsset = [[AVURLAsset alloc] initWithURL:videoUrl options:options];
    
    __weak typeof(self) weakSelf = self;
    NSString *key = @"tracks";
    //asynchronously load the values of any of the specified keys that have not been loaded yet,
    //so that later access to the asset's tracks property will not block.
    [inputAsset loadValuesAsynchronouslyForKeys:@[key] completionHandler:^{
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            __strong typeof(weakSelf) strongSelf = weakSelf;
            if (!strongSelf) return;
            NSError *error;
            AVKeyValueStatus status = [inputAsset statusOfValueForKey:key error:&error];
            if (status != AVKeyValueStatusLoaded) {
                NSLog(@"Asset error: %@", error);
                return;
            }
            [strongSelf processAsset:inputAsset];
        });
    }];
}

- (void)processAsset:(AVAsset *)asset {
    [lock lock];
    NSError *error;
    assetReader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
    if (error != nil) {
        NSLog(@"Reader error: %@", error);
    }
    //kCVPixelBufferPixelFormatTypeKey: the pixel format.
    //kCVPixelFormatType_420YpCbCr8BiPlanarFullRange: biplanar 4:2:0 YUV, full range
    //kCVPixelFormatType_32BGRA would make iOS convert YUV to BGRA internally
    NSDictionary *options = @{(id)kCVPixelBufferPixelFormatTypeKey:@(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)};
    AVAssetTrack *assetTrack = [asset tracksWithMediaType:AVMediaTypeVideo].firstObject;
    assetReaderTrackOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:assetTrack outputSettings:options];
    //whether sample data is copied before the buffer is vended.
    //YES: the output always vends copied data that you are free to modify; NO avoids the copy.
    assetReaderTrackOutput.alwaysCopiesSampleData = NO;
    
    [assetReader addOutput:assetReaderTrackOutput];
    
    BOOL start = [assetReader startReading];
    if (start == NO) {
        NSLog(@"Error reading from file at URL: %@", asset);
    }
    [lock unlock];
}

The author drew a diagram for this code, which you can use as a reference:

2.3 setupPipeline: building the pipeline
    id<MTLLibrary> defaultLibrary = [self.mtkView.device newDefaultLibrary];
    id<MTLFunction> vertexFunc = [defaultLibrary newFunctionWithName:@"vertexShaderVideo"];
    id<MTLFunction> fragmentFunc = [defaultLibrary newFunctionWithName:@"fragmentShaderVideo"];
    
    MTLRenderPipelineDescriptor *renderPipelineDescriptor = [[MTLRenderPipelineDescriptor alloc] init];
    renderPipelineDescriptor.vertexFunction = vertexFunc;
    renderPipelineDescriptor.fragmentFunction = fragmentFunc;
    renderPipelineDescriptor.colorAttachments[0].pixelFormat = self.mtkView.colorPixelFormat;
    
    self.renderPipelineState = [self.mtkView.device newRenderPipelineStateWithDescriptor:renderPipelineDescriptor error:NULL];
    self.commandQueue = [self.mtkView.device newCommandQueue];
2.4 setupVertex: vertex data
    //note: the quad spans [-1, 1] in both axes so the video fills the whole view
    static const CQVideoVertex quadVertices[] =
    {   // vertex position x, y, z, w;    texture coordinate x, y;
        { {  1.0, -1.0, 0.0, 1.0 },  { 1.f, 1.f } },
        { { -1.0, -1.0, 0.0, 1.0 },  { 0.f, 1.f } },
        { { -1.0,  1.0, 0.0, 1.0 },  { 0.f, 0.f } },
        
        { {  1.0, -1.0, 0.0, 1.0 },  { 1.f, 1.f } },
        { { -1.0,  1.0, 0.0, 1.0 },  { 0.f, 0.f } },
        { {  1.0,  1.0, 0.0, 1.0 },  { 1.f, 0.f } },
    };
    
    //create the vertex buffer
    self.vertices = [self.mtkView.device newBufferWithBytes:quadVertices
                                                     length:sizeof(quadVertices)
                                                    options:MTLResourceStorageModeShared];
    //compute the vertex count
    self.verticesNum = sizeof(quadVertices) / sizeof(CQVideoVertex);
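
Why do the texture coordinates look vertically flipped relative to the positions? Metal's clip space has +y pointing up, while texture coordinates have their origin at the top-left with v pointing down. The mapping the quad uses can be written as a standalone C sketch (the function names are mine, purely for illustration):

```c
#include <assert.h>
#include <math.h>

// Map a clip-space coordinate in [-1, 1] to a texture coordinate in [0, 1].
// u grows with x, but v runs opposite to y, because the texture origin
// sits at the top-left while clip space's +y points up.
static float clip_to_u(float x) { return (x + 1.0f) / 2.0f; }
static float clip_to_v(float y) { return (1.0f - y) / 2.0f; }
```

Checking against the table above: position (1, -1) carries texture coordinate (1, 1), and (-1, 1) carries (0, 0), exactly what these formulas produce.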
2.5 setupMatrix: the conversion matrix
    //1. conversion matrices
    //note: matrix_float3x3 is built from column vectors, so each simd_float3 below is a column
    // BT.601 video range, the standard for SDTV.
    matrix_float3x3 kColorConversion601DefaultMatrix = (matrix_float3x3){
        (simd_float3){1.164,  1.164, 1.164},
        (simd_float3){0.0, -0.392, 2.017},
        (simd_float3){1.596, -0.813,   0.0},
    };
    
    // BT.601 full range
    matrix_float3x3 kColorConversion601FullRangeMatrix = (matrix_float3x3){
        (simd_float3){1.0,    1.0,    1.0},
        (simd_float3){0.0,    -0.343, 1.765},
        (simd_float3){1.4,    -0.711, 0.0},
    };
    
    // BT.709 video range, the standard for HDTV.
    matrix_float3x3 kColorConversion709DefaultMatrix = (matrix_float3x3){
        (simd_float3){1.164,  1.164, 1.164},
        (simd_float3){0.0, -0.213, 2.112},
        (simd_float3){1.793, -0.533,   0.0},
    };
    
    //2. offset; full-range Y uses the whole 0-255 range, so no 16/255 luma offset is needed
    //(a -16/255 luma term belongs with the video-range matrices)
    vector_float3 kColorConversion601FullRangeOffset = (vector_float3){0.0, -0.5, -0.5};
    
    //3. build the conversion-matrix struct
    CQConvertMatrix matrix;
    matrix.matrix = kColorConversion601FullRangeMatrix;
    matrix.offset = kColorConversion601FullRangeOffset;
    
    //create the conversion-matrix buffer
    self.convertMatrix = [self.mtkView.device newBufferWithBytes:&matrix
                                                          length:sizeof(CQConvertMatrix)
                                                         options:MTLResourceStorageModeShared];

All the preparation is done. Now we can draw.

2.6 Drawing
- (void)drawInMTKView:(nonnull MTKView *)view {
    id<MTLCommandBuffer> commandBuffer = [self.commandQueue commandBuffer];
    MTLRenderPassDescriptor *renderPassDescriptor = view.currentRenderPassDescriptor;
    CMSampleBufferRef sampleBufferRef = [self.reader readBuffer];
    if (renderPassDescriptor && sampleBufferRef) {
        //set the clear color of the render pass's color attachment (the background color)
        renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColorMake(0.0, 0.5, 0.5, 1.0);
        id<MTLRenderCommandEncoder> commandEncoder = [commandBuffer renderCommandEncoderWithDescriptor:renderPassDescriptor];
        MTLViewport viewport = {0.0, 0.0, _viewportSize.x, _viewportSize.y, -1.0, 1.0};
        [commandEncoder setViewport:viewport];
        [commandEncoder setRenderPipelineState:self.renderPipelineState];
        [commandEncoder setVertexBuffer:self.vertices offset:0 atIndex:CQVVertexInputIndexVertices];
        [self setupTextureWithEncoder:commandEncoder buffer:sampleBufferRef];
        
        [commandEncoder setFragmentBuffer:self.convertMatrix offset:0 atIndex:CQVFragmentBufferIndexMatrix];
        [commandEncoder drawPrimitives:MTLPrimitiveTypeTriangle vertexStart:0 vertexCount:self.verticesNum];
        [commandEncoder endEncoding];
        [commandBuffer presentDrawable:view.currentDrawable];
    }
    [commandBuffer commit];
}
  • CMSampleBufferRef sampleBufferRef = [self.reader readBuffer];
    Each frame, this reads the video's next CMSampleBufferRef.
  • Then we set the viewport, the render pipeline state, the vertex data, the textures, and the conversion matrix.
    After all of that we draw, end encoding, present the drawable, and commit the command buffer.

Now let's look at how the Y and UV textures are set up:

- (void)setupTextureWithEncoder:(id<MTLRenderCommandEncoder>)encoder buffer:(CMSampleBufferRef)sampleBuffer {
    //get the CVPixelBuffer from the CMSampleBuffer
    CVPixelBufferRef pixelBufferRef = CMSampleBufferGetImageBuffer(sampleBuffer);
    id<MTLTexture> textureY = nil;
    id<MTLTexture> textureUV = nil;
    
    {//textureY setup
        size_t width = CVPixelBufferGetWidthOfPlane(pixelBufferRef, 0);
        size_t height = CVPixelBufferGetHeightOfPlane(pixelBufferRef, 0);
        //pixel format: an ordinary format with one 8-bit normalized unsigned integer component
        MTLPixelFormat pixelFormat = MTLPixelFormatR8Unorm;
        
        //create a Core Video Metal texture
        CVMetalTextureRef texture = NULL;
        CVReturn status = CVMetalTextureCacheCreateTextureFromImage(NULL, self.textureCache, pixelBufferRef, NULL, pixelFormat, width, height, 0, &texture);
        if (status == kCVReturnSuccess) {
            textureY = CVMetalTextureGetTexture(texture);
            CFRelease(texture);
        }
    }
    
    {//textureUV setup
        size_t width = CVPixelBufferGetWidthOfPlane(pixelBufferRef, 1);
        size_t height = CVPixelBufferGetHeightOfPlane(pixelBufferRef, 1);
        MTLPixelFormat pixelFormat = MTLPixelFormatRG8Unorm;
        CVMetalTextureRef texture = NULL;
        CVReturn status = CVMetalTextureCacheCreateTextureFromImage(NULL, self.textureCache, pixelBufferRef, NULL, pixelFormat, width, height, 1, &texture);
        if (status == kCVReturnSuccess) {
            textureUV = CVMetalTextureGetTexture(texture);
            CFRelease(texture);
        }
    }
    
    if(textureY != nil && textureUV != nil) {
        [encoder setFragmentTexture:textureY atIndex:CQVFragmentTextureIndexTextureY];
        [encoder setFragmentTexture:textureUV atIndex:CQVFragmentTextureIndexTextureUV];
    }
    CFRelease(sampleBuffer);
}
  • Get the CVPixelBufferRef pixel buffer from the CMSampleBufferRef.
  • Get the width and height of the plane at each plane index of the pixel buffer.
  • Choose the pixel formats: MTLPixelFormatR8Unorm for the Y texture and MTLPixelFormatRG8Unorm for the UV texture.
  • Create a Core Video Metal texture for each plane.
  • Obtain MTLTexture objects from them.
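
The per-plane width/height queries matter because the biplanar 4:2:0 format requested earlier stores full-resolution Y in plane 0 and interleaved UV at half resolution in both dimensions in plane 1. A small C sketch of that geometry (my own helper, just to illustrate the layout; odd dimensions are rounded up, which is one common convention):

```c
#include <assert.h>
#include <stddef.h>

// Plane dimensions for a biplanar 4:2:0 (NV12-style) frame.
// Plane 0: Y, one component per pixel, full resolution.
// Plane 1: interleaved CbCr, two components per sample, half resolution.
typedef struct { size_t width, height; } PlaneSize;

static PlaneSize nv12_plane_size(size_t frame_w, size_t frame_h, int plane) {
    PlaneSize s;
    if (plane == 0) {               // Y plane
        s.width  = frame_w;
        s.height = frame_h;
    } else {                        // UV plane: chroma subsampled 2x2
        s.width  = (frame_w + 1) / 2;
        s.height = (frame_h + 1) / 2;
    }
    return s;
}
```

So for a 1920x1080 frame, the Y texture is 1920x1080 (R8Unorm) and the UV texture is 960x540 (RG8Unorm), matching the two CVMetalTextureCacheCreateTextureFromImage calls above.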

This code uses Core Video and Core Media APIs; here is a flow diagram:

Two Core Video functions are used here:
CVMetalTextureCacheCreate()
CVMetalTextureCacheCreateTextureFromImage()
Their parameters are explained in detail in the "Rendering the Camera with Metal" article.
