Getting an iOS CVPixelBuffer from an ffmpeg AVFrame

In an earlier article, 《在iOS端使用AVSampleBufferDisplayLayer进行视频渲染》, I mentioned that AVSampleBufferDisplayLayer, new in iOS 8.0, can be used to render video. If decoding is done with ffmpeg, what comes out of the decoder is an AVFrame, which then has to be converted into a CVPixelBuffer before it can be handed to AVSampleBufferDisplayLayer for rendering.

How is this conversion done? As follows:

- (void)dispatchAVFrame:(AVFrame *)frame{
    if (!frame || !frame->data[0]) {
        return;
    }

    CVReturn theError;
    if (!self.pixelBufferPool) {
        NSMutableDictionary *attributes = [NSMutableDictionary dictionary];
        [attributes setObject:@(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey];
        [attributes setObject:@(frame->width) forKey:(NSString *)kCVPixelBufferWidthKey];
        [attributes setObject:@(frame->height) forKey:(NSString *)kCVPixelBufferHeightKey];
        [attributes setObject:@(frame->linesize[0]) forKey:(NSString *)kCVPixelBufferBytesPerRowAlignmentKey];
        [attributes setObject:@{} forKey:(NSString *)kCVPixelBufferIOSurfacePropertiesKey];
        theError = CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL, (__bridge CFDictionaryRef)attributes, &_pixelBufferPool);
        if (theError != kCVReturnSuccess) {
            NSLog(@"CVPixelBufferPoolCreate failed: %d", theError);
            return;
        }
    }

    CVPixelBufferRef pixelBuffer = NULL;
    theError = CVPixelBufferPoolCreatePixelBuffer(NULL, self.pixelBufferPool, &pixelBuffer);
    if (theError != kCVReturnSuccess) {
        NSLog(@"CVPixelBufferPoolCreatePixelBuffer failed: %d", theError);
        return;
    }

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    // Copy the Y plane row by row: the pool may pad each row beyond frame->linesize[0],
    // so a single bulk memcpy of bytesPerRowY * height could read past the frame data.
    size_t bytesPerRowY = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
    uint8_t *dstY = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    for (int row = 0; row < frame->height; row++) {
        memcpy(dstY + row * bytesPerRowY,
               frame->data[0] + row * frame->linesize[0],
               MIN((size_t)frame->linesize[0], bytesPerRowY));
    }

    // Copy the interleaved UV plane (NV12): half as many rows as the Y plane.
    size_t bytesPerRowUV = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
    uint8_t *dstUV = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    for (int row = 0; row < frame->height / 2; row++) {
        memcpy(dstUV + row * bytesPerRowUV,
               frame->data[1] + row * frame->linesize[1],
               MIN((size_t)frame->linesize[1], bytesPerRowUV));
    }

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    [self dispatchPixelBuffer:pixelBuffer];
    // Balance the +1 retain from the pool; assumes -dispatchPixelBuffer: retains what it needs.
    CVPixelBufferRelease(pixelBuffer);
}
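Before moving on to other pixel formats, here is a minimal sketch of how this method might be driven from an FFmpeg decode loop. `codecCtx` and `packet` are assumed to be set up elsewhere, and error handling is trimmed to the success path:

    AVFrame *frame = av_frame_alloc();
    if (avcodec_send_packet(codecCtx, packet) == 0) {
        // One packet can yield zero or more decoded frames; drain them all.
        while (avcodec_receive_frame(codecCtx, frame) == 0) {
            [self dispatchAVFrame:frame];
        }
    }
    av_frame_free(&frame);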
The code above assumes the YUV data in the AVFrame is NV12. If the AVFrame is YUV420P instead, every byte of frame->data[1] and frame->data[2] has to be interleaved into plane 1 of the pixelBuffer, i.e. the planar uuuu and vvvv layouts become uvuvuvuv, as follows:

        // In YUV420P the chroma planes have height/2 rows, so the U plane
        // holds linesize[1] * height / 2 bytes; dstData holds U and V interleaved.
        uint32_t size = frame->linesize[1] * frame->height / 2;
        uint8_t *dstData = (uint8_t *)malloc(2 * size);
        for (int i = 0; i < 2 * size; i++) {
            if (i % 2 == 0) {
                dstData[i] = frame->data[1][i / 2];   // U sample
            } else {
                dstData[i] = frame->data[2][i / 2];   // V sample
            }
        }
With that done, all that remains is to copy the contents of dstData into plane 1 of the pixelBuffer.
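Concretely, that copy might look like the sketch below, which stands in for the NV12 UV copy in dispatchAVFrame:. It assumes it runs while pixelBuffer's base address is still locked and that dstData came from the malloc in the previous snippet; copying row by row guards against the pool's plane-1 stride differing from 2 * frame->linesize[1]:

    size_t bytesPerRowUV = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
    uint8_t *dstUV = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    size_t srcStrideUV = 2 * frame->linesize[1];  // width of one interleaved UV row
    for (int row = 0; row < frame->height / 2; row++) {
        memcpy(dstUV + row * bytesPerRowUV,
               dstData + row * srcStrideUV,
               MIN(srcStrideUV, bytesPerRowUV));
    }
    free(dstData);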

The reason for using a CVPixelBufferPool here is that constructing a brand-new CVPixelBuffer for every AVFrame is CPU-intensive and slow; the pool recycles the underlying buffers, so the allocation cost is paid only once.
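One caveat that follows from caching the pool: its width and height attributes are fixed when it is created, so if the stream's dimensions change mid-playback the pool has to be torn down and rebuilt. A minimal sketch, assuming hypothetical `_poolWidth`/`_poolHeight` ivars that record the dimensions the pool was created with:

    // Hypothetical guard, run before the pool-creation branch in dispatchAVFrame:.
    // _poolWidth/_poolHeight are assumed ivars, updated whenever the pool is created.
    if (self.pixelBufferPool &&
        (frame->width != _poolWidth || frame->height != _poolHeight)) {
        CVPixelBufferPoolRelease(_pixelBufferPool);
        _pixelBufferPool = NULL;   // the next frame recreates it with the new attributes
    }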

