iOS: Converting a UIImage (RGB) to a CVPixelBufferRef (YUV)

I've recently been working on an image-recognition project that involves YUV data.
In production, the app pulls CVPixelBufferRefs out of video sample buffers and analyzes them. To make testing easier, I simulate video frames with still images.
One problem came up in the process: the video samples use kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange (a YUV format), while images are RGB, so a conversion step is needed in between.

First, the conversion chain:
UIImage --> CGImageRef --> CVImageBufferRef (CVPixelBufferRef)
Here CVPixelBufferRef is simply a typedef alias of CVImageBufferRef.

1. Converting the UIImage to a CGImageRef

UIImage *image = [UIImage imageNamed:@"test.png"];
CGImageRef imageRef = [image CGImage];

This step is the simplest: a single system API call does it.

2. Extracting the pixel data from the CGImageRef

CGDataProviderRef provider = CGImageGetDataProvider(imageRef);
CFDataRef pixelData = CGDataProviderCopyData(provider);
const unsigned char *data = CFDataGetBytePtr(pixelData);
    
size_t bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
NSLog(@"bitsPerPixel:%lu",bitsPerPixel);
size_t bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
NSLog(@"bitsPerComponent:%lu",bitsPerComponent);
    
NSLog(@"\n");
    
size_t frameWidth = CGImageGetWidth(imageRef);
NSLog(@"frameWidth:%lu",frameWidth);
size_t frameHeight = CGImageGetHeight(imageRef);
NSLog(@"frameHeight:%lu",frameHeight);
size_t bytesPerRow = CGImageGetBytesPerRow(imageRef);
NSLog(@"bytesPerRow:%lu bytesPerRow/4:%lu",bytesPerRow,bytesPerRow/4);

// Note: `data` points into pixelData's backing store, so CFRelease(pixelData)
// must wait until the conversion loop in step 3 has finished reading it.

Here, data points at the raw pixel bytes. The image is stored in RGBA order, 4 bytes per pixel, which is why bytesPerRow divided by 4 matches frameWidth for this image. (In general, bytesPerRow may include row padding, so it is safer not to assume the two are always equal.)

3. Building the CVPixelBufferRef

NSDictionary *options = @{(id)kCVPixelBufferIOSurfacePropertiesKey : @{}};
CVPixelBufferRef pixelBuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, frameWidth, frameHeight, kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, (__bridge CFDictionaryRef)(options), &pixelBuffer);
NSParameterAssert(status == kCVReturnSuccess && pixelBuffer != NULL);
    
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    
NSLog(@"\n");
    
size_t width = CVPixelBufferGetWidth(pixelBuffer);
NSLog(@"width:%lu",width);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
NSLog(@"height:%lu",height);
size_t bpr = CVPixelBufferGetBytesPerRow(pixelBuffer);
NSLog(@"bpr:%lu",bpr);
    
NSLog(@"\n");
    
size_t wh = width * height;
NSLog(@"wh:%lu\n",wh);
size_t size = CVPixelBufferGetDataSize(pixelBuffer);
NSLog(@"size:%lu",size);
size_t count = CVPixelBufferGetPlaneCount(pixelBuffer);
NSLog(@"count:%lu\n",count);
    
NSLog(@"\n");
    
size_t width0 = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0);
NSLog(@"width0:%lu",width0);
size_t height0 = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0);
NSLog(@"height0:%lu",height0);
size_t bpr0 = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
NSLog(@"bpr0:%lu",bpr0);
    
NSLog(@"\n");
    
size_t width1 = CVPixelBufferGetWidthOfPlane(pixelBuffer, 1);
NSLog(@"width1:%lu",width1);
size_t height1 = CVPixelBufferGetHeightOfPlane(pixelBuffer, 1);
NSLog(@"height1:%lu",height1);
size_t bpr1 = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
NSLog(@"bpr1:%lu",bpr1);
    
// Scratch buffers: wh bytes of luma, plus wh/2 bytes of interleaved UV (NV12).
unsigned char *bufY = malloc(wh);
unsigned char *bufUV = malloc(wh/2);

size_t offset,p;

int r,g,b,y,u,v;
int a=255;
for (int row = 0; row < height; ++row) {
  for (int col = 0; col < width; ++col) {
    // Pixel index; the source RGBA data is 4 bytes per pixel.
    offset = ((width * row) + col);
    p = offset*4;
    // Read the RGBA components (alpha is read but not used).
    r = data[p + 0];
    g = data[p + 1];
    b = data[p + 2];
    a = data[p + 3];
    // Full-range BT.601 RGB -> YCbCr.
    y = 0.299*r + 0.587*g + 0.114*b;
    u = -0.1687*r - 0.3313*g + 0.5*b + 128;
    v = 0.5*r - 0.4187*g - 0.0813*b + 128;
    // NV12: full-resolution Y, plus U/V interleaved at quarter
    // resolution (each 2x2 pixel block shares one U and one V).
    bufY[offset] = y;
    bufUV[(row/2)*width + (col/2)*2] = u;
    bufUV[(row/2)*width + (col/2)*2 + 1] = v;
  }
}
uint8_t *yPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
// Pre-fill with 0x80 so any padding bytes hold a neutral value.
memset(yPlane, 0x80, height0 * bpr0);
for (int row = 0; row < height0; ++row) {
    // Copy one row of luma; bpr0 may be wider than width0 due to padding.
    memcpy(yPlane + row * bpr0, bufY + row * width0, width0);
}

uint8_t *uvPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
memset(uvPlane, 0x80, height1 * bpr1);
for (int row = 0; row < height1; ++row) {
    // Each UV row holds width1 interleaved U/V pairs, i.e. width1*2 bytes.
    memcpy(uvPlane + row * bpr1, bufUV + row * width1 * 2, width1 * 2);
}

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
free(bufY);
free(bufUV);

The YUV data produced above is in NV12 layout: a full-resolution Y plane followed by a half-resolution plane of interleaved CbCr pairs. One caveat: the coefficients used are the full-range BT.601 ones, while kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange nominally expects luma in 16-235 and chroma in 16-240. For strictly correct video-range output, scale the values into those ranges, or create the buffer as kCVPixelFormatType_420YpCbCr8BiPlanarFullRange instead.
