Advanced iOS Audio/Video Programming: Reading and Writing the YUV Image in the CVImageBufferRef from the VideoToolbox Decode Callback

Author: 熊皮皮

Original link: http://www.jianshu.com/p/dac9857b34d0

This article documents how to read and write the YUV (or RGB) data carried by the CVImageBufferRef passed to VideoToolbox's VTDecompressionOutputCallbackRecord decode callback. CVPixelBufferRef is simply a typedef of CVImageBufferRef, so the two are handled identically, as the header shows:

```objc
// CVPixelBuffer.h
typedef CVImageBufferRef CVPixelBufferRef;
```

1. Reading a CVImageBufferRef (CVPixelBufferRef)

In the decode callback, the frame data is delivered as a CVImageBufferRef. To process the pixel data further, you must access the memory that actually stores the pixels.

The buffers produced by VideoToolbox decoding are not directly accessible to the CPU; you must first lock them with CVPixelBufferLockBaseAddress() before reading them from main memory, otherwise calls such as CVPixelBufferGetBaseAddressOfPlane() return NULL or invalid values. The CVImageBuffer -> CIImage -> UIImage path, however, works without explicitly locking the base address:

```objc
// CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly); // optional for this path
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
CIContext *temporaryContext = [CIContext contextWithOptions:nil];
CGImageRef videoImage = [temporaryContext
                         createCGImage:ciImage
                         fromRect:CGRectMake(0, 0,
                                             CVPixelBufferGetWidth(imageBuffer),
                                             CVPixelBufferGetHeight(imageBuffer))];
UIImage *image = [[UIImage alloc] initWithCGImage:videoImage];
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
CGImageRelease(videoImage);
// CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
```

CVPixelBufferIsPlanar tells you whether the pixels are stored planar or chunky. If planar, CVPixelBufferGetPlaneCount returns the number of YUV planes, usually two: Y occupies its own plane, and whether U and V share one is determined by the pixel format requested through destinationImageBufferAttributes when the decode session is created with VTDecompressionSessionCreate (this format may differ from the source video's pixel format). Each plane can be treated as a table of rows and columns, with pixels filled in row order. The discussion below assumes planar storage.
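For reference, here is a minimal sketch of requesting a specific decoded pixel format through destinationImageBufferAttributes; the formatDescription (a CMVideoFormatDescriptionRef built from the stream's parameter sets) and callbackRecord are assumed to exist already:

```objc
// Sketch: ask Video Toolbox to decode into NV12 ('420v') buffers.
// Assumes formatDescription and callbackRecord were created elsewhere.
NSDictionary *destinationAttributes = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange), // '420v'
    (id)kCVPixelBufferIOSurfacePropertiesKey : @{}
};
VTDecompressionSessionRef session = NULL;
OSStatus status = VTDecompressionSessionCreate(kCFAllocatorDefault,
                                               formatDescription,
                                               NULL, // let the system pick the decoder
                                               (__bridge CFDictionaryRef)destinationAttributes,
                                               &callbackRecord,
                                               &session);
```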

CVPixelBufferGetPlaneCount returns the number of planes in the pixel buffer, and CVPixelBufferGetBaseAddressOfPlane(index) then returns the base address of each plane, typically the Y, U, and V channels (whether U and V are separate depends on the decode session, as described above). For a planar buffer, CVPixelBufferGetBaseAddress instead returns a pointer to a CVPlanarPixelBufferInfo structure, defined as follows:

```objc
/*
  Planar pixel buffers have the following descriptor at their base address.
  Clients should generally use CVPixelBufferGetBaseAddressOfPlane,
  CVPixelBufferGetBytesPerRowOfPlane, etc. instead of accessing it directly.
*/
struct CVPlanarComponentInfo {
    int32_t  offset;   /* offset from main base address to base address of this plane, big-endian */
    uint32_t rowBytes; /* bytes per row of this plane, big-endian */
};
typedef struct CVPlanarComponentInfo CVPlanarComponentInfo;

struct CVPlanarPixelBufferInfo {
    CVPlanarComponentInfo componentInfo[1];
};
typedef struct CVPlanarPixelBufferInfo CVPlanarPixelBufferInfo;

struct CVPlanarPixelBufferInfo_YCbCrPlanar {
    CVPlanarComponentInfo componentInfoY;
    CVPlanarComponentInfo componentInfoCb;
    CVPlanarComponentInfo componentInfoCr;
};
typedef struct CVPlanarPixelBufferInfo_YCbCrPlanar CVPlanarPixelBufferInfo_YCbCrPlanar;

struct CVPlanarPixelBufferInfo_YCbCrBiPlanar {
    CVPlanarComponentInfo componentInfoY;
    CVPlanarComponentInfo componentInfoCbCr;
};
typedef struct CVPlanarPixelBufferInfo_YCbCrBiPlanar CVPlanarPixelBufferInfo_YCbCrBiPlanar;
```

CVPixelBufferGetPixelFormatType returns the pixel format; read the data accordingly. For YUV420SP (NV12), for example, extracting all the U samples into a separate buffer means stepping through the interleaved CbCr plane row by row, honoring the plane's stride.
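As a concrete illustration, here is a minimal sketch, assuming imageBuffer is a valid 420YpCbCr8BiPlanar (NV12) CVPixelBufferRef:

```objc
// Sketch: extract all U (Cb) samples from the interleaved CbCr plane of an
// NV12 buffer. Assumes imageBuffer is a 420YpCbCr8BiPlanar CVPixelBufferRef.
CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

size_t chromaWidth  = CVPixelBufferGetWidthOfPlane(imageBuffer, 1);
size_t chromaHeight = CVPixelBufferGetHeightOfPlane(imageBuffer, 1);
size_t chromaStride = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1); // may exceed 2 * chromaWidth
uint8_t *cbcrBase   = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);

uint8_t *uBuffer = (uint8_t *)malloc(chromaWidth * chromaHeight);
for (size_t row = 0; row < chromaHeight; row++) {
    const uint8_t *src = cbcrBase + row * chromaStride; // advance by stride, not width
    for (size_t col = 0; col < chromaWidth; col++) {
        uBuffer[row * chromaWidth + col] = src[col * 2]; // CbCrCbCr... -> keep each Cb
    }
}

CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
// ... use uBuffer, then free(uBuffer);
```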

2. Writing to a CVImageBufferRef (CVPixelBufferRef)

The following code copies data into the Y and UV planes:

```objc
NSDictionary *pixelAttributes = @{(id)kCVPixelBufferIOSurfacePropertiesKey : @{}};
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                      width,
                                      height,
                                      kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                      (__bridge CFDictionaryRef)pixelAttributes,
                                      &pixelBuffer);
if (result != kCVReturnSuccess) {
    NSLog(@"Unable to create cvpixelbuffer %d", result);
}

CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *yDestPlane = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
// NOTE: memcpy assumes bytesPerRow == width; copy row by row if the plane is padded.
memcpy(yDestPlane, yPlane, width * height);
uint8_t *uvDestPlane = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
memcpy(uvDestPlane, uvPlane, numberOfElementsForChroma);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

CIImage *coreImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CVPixelBufferRelease(pixelBuffer);
```

Creating a CIImage with -[CIImage imageWithCVPixelBuffer:] as above runs into the following problems on real devices such as the iPad Air 2 and iPhone 6 Plus:

1. With kCVPixelFormatType_420YpCbCr8PlanarFullRange, the call fails with "[CIImage initWithCVPixelBuffer:options:] failed because its pixel format f420 is not supported.", i.e. a CIImage cannot be created from a YUV420P (planar) CVPixelBuffer.

In testing, with a source video in yuvj420p(pc, bt709), if VTDecompressionSessionCreate is called without a kCVPixelBufferPixelFormatTypeKey entry in destinationImageBufferAttributes, the CVImageBufferRef that Video Toolbox decodes comes out as f420.

If destinationImageBufferAttributes requests kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, the decoded image buffer is 420v; creating the destination YUV buffer with pixel format f420 then triggers the failure above, because data copied in the bi-planar 420v layout does not match f420's tri-planar layout, so CIImage creation fails.

2. The format of the buffer produced by CVPixelBufferCreate is determined by its pixelFormatType parameter, not by a pixel format passed via kCVPixelBufferPixelFormatTypeKey in the pixelBufferAttributes dictionary (pixelAttributes in the code above).

3. The CVPixelBufferPool memory pool

Creating your own CVPixelBufferPool and allocating CVPixelBuffers from it makes it easy to release a CVPixelBuffer incorrectly or leak it; the ijkplayer capture below shows such a CVPixelBuffer leak.

[Figure] CVPixelBuffer leak: the buffer's retain count is still nonzero when the last reference ends, so the memory is never released.

Creating CVPixelBuffers yourself, on the other hand, easily leads to runaway memory usage: in the capture below, repeated CVPixelBufferCreate calls for 960x480 YUV420SP buffers accumulate to more than 700 MB, and with asynchronous decoding and no cap on outstanding buffers the app will crash.

[Figure] Memory occupied by CVPixelBufferCreate.

If you want to avoid creating either a CVPixelBufferPool or CVPixelBuffers yourself, a convenient shortcut is to reuse the CVPixelBuffer handed to the decode callback, which removes the memory-consumption worry entirely. The precondition is that the modified pixel data stays within the width and height of the original.

For a decode -> image processing -> encode pipeline whose processed image differs in size from the original, a reliable alternative is to obtain a CVPixelBufferPool when the encoder is created and let the system manage the CVPixelBuffers, as sketched below.
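A minimal sketch of that approach, assuming an already-configured VTCompressionSessionRef named compressionSession and a frame timestamp pts:

```objc
// Sketch: allocate the output frame from the encoder's own pixel buffer pool
// so Video Toolbox manages the buffer's lifetime. compressionSession and pts
// are assumed to exist.
CVPixelBufferPoolRef pool = VTCompressionSessionGetPixelBufferPool(compressionSession);
CVPixelBufferRef processedBuffer = NULL;
CVReturn err = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &processedBuffer);
if (err == kCVReturnSuccess) {
    // ... write the processed pixels into processedBuffer, then hand it to the encoder.
    VTCompressionSessionEncodeFrame(compressionSession, processedBuffer,
                                    pts, kCMTimeInvalid, NULL, NULL, NULL);
    CVPixelBufferRelease(processedBuffer);
}
```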

One further note on image processing: whether Video Toolbox was asked for FullRange or VideoRange, the decoded YUV420SP data showed incorrect colors in parts of the image after processing; having Video Toolbox output yuv420p instead and then processing produced no color anomalies.

References and further reading

Create CVPixelBuffer from YUV with IOSurface backed
