A CVPixelBuffer and an OpenGL ES texture can be converted into each other, because they can share the same cache buffer.
A CVPixelBuffer can be thought of as living in CPU memory, while a texture lives in GPU memory. Only GPU memory requires a GL context, so creating a CVPixelBuffer needs no GL context and the buffer can be passed around freely. For example, after decoding video or capturing from the camera you get a CVPixelBuffer, which you can then hand to a GL context to generate a texture. How the texture is generated depends on whether the buffer is in a YUV or an RGB format.
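The sharing goes through a CVOpenGLESTextureCache. A minimal sketch of creating one (assuming context is the EAGLContext you render with; the variable names here are illustrative):
// Create the texture cache once per GL context and reuse it for every frame.
CVOpenGLESTextureCacheRef videoTextureCache = NULL;
CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, context, NULL, &videoTextureCache);
if (err != kCVReturnSuccess) {
    NSLog(@"CVOpenGLESTextureCacheCreate failed: %d", err);
}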
When the CVPixelBuffer is in a YUV format, convert it to textures like this:
// Luminance texture from plane 0 of the CVPixelBuffer. GL_LUMINANCE (single channel)
// is deprecated in OpenGL ES 3.0; use GL_RED_EXT instead.
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, videoTextureCache, pixelBuffer, NULL, GL_TEXTURE_2D, GL_LUMINANCE, width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, 0 /* plane 0 */, &luminanceTextureRef);
// Chrominance texture from plane 1. GL_LUMINANCE_ALPHA is likewise deprecated
// in OpenGL ES 3.0; use GL_RG_EXT instead.
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, videoTextureCache, pixelBuffer, NULL, GL_TEXTURE_2D, GL_LUMINANCE_ALPHA, width / 2, height / 2, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, 1 /* plane 1 */, &chrominanceTextureRef);
The luminanceTextureRef and chrominanceTextureRef textures can then be combined through a YUV-to-RGB conversion matrix into a single RGB texture, which is easier to work with downstream.
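The conversion itself usually happens in a fragment shader. A sketch using the BT.601 full-range matrix (which matrix applies depends on the buffer's color space; this assumes the GL_RED_EXT/GL_RG_EXT variants, so chroma is sampled from .rg):
// Fragment shader: sample Y and CbCr, then apply the BT.601 full-range matrix.
static const char *kYUVToRGBFragmentShader =
    "varying highp vec2 textureCoordinate;\n"
    "uniform sampler2D luminanceTexture;\n"
    "uniform sampler2D chrominanceTexture;\n"
    "void main() {\n"
    "    mediump vec3 yuv;\n"
    "    yuv.x  = texture2D(luminanceTexture, textureCoordinate).r;\n"
    "    yuv.yz = texture2D(chrominanceTexture, textureCoordinate).rg - 0.5;\n"
    "    lowp vec3 rgb = mat3(1.0,    1.0,    1.0,\n"
    "                         0.0,   -0.344,  1.772,\n"
    "                         1.402, -0.714,  0.0) * yuv;\n"
    "    gl_FragColor = vec4(rgb, 1.0);\n"
    "}";
Note that with GL_LUMINANCE/GL_LUMINANCE_ALPHA textures the chroma lands in .ra instead of .rg.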
Going the other way: given an RGB texture, how do you get a CVPixelBuffer?
Draw the RGB texture into a framebuffer whose color-attachment texture shares a cache buffer with a CVPixelBuffer (set up the same way as above); once the draw completes, that CVPixelBuffer holds the pixels.
Note that rendering RGB into a YUV-format CVPixelBuffer works differently from rendering into an RGB-format one.
For a YUV CVPixelBuffer you need two framebuffers, one bound to the luma texture (plane 0) and one to the chroma texture (plane 1), just as above. You then draw twice, once into each framebuffer, to fill a complete YUV buffer; a single draw only fills one plane. A consolidated sketch follows the steps below.
// 1. Create a CVPixelBuffer with format kCVPixelFormatType_420YpCbCr8BiPlanarFullRange.
// The attributes dictionary should include kCVPixelBufferIOSurfacePropertiesKey,
// otherwise the texture cache cannot share the buffer.
CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, (__bridge CFDictionaryRef)attributes, &self->pixelBuffer);
Remember to release it when done, via CVPixelBufferRelease() or CFRelease().
// 2. Share the buffer: Y-plane texture.
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCache, self->pixelBuffer, NULL, GL_TEXTURE_2D, GL_RED_EXT, width, height, GL_RED_EXT, GL_UNSIGNED_BYTE, 0 /* Y plane */, &self->yPlaneTextureRef);
// 3. Attach it to the framebuffer's color attachment.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(self->yPlaneTextureRef), 0);
Drawing into this framebuffer only fills the Y plane; the CVPixelBuffer is still incomplete, so the UV plane has to be drawn as well.
// 4. Share the buffer: UV-plane texture.
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCache, self->pixelBuffer, NULL, GL_TEXTURE_2D, GL_RG_EXT, width / 2, height / 2, GL_RG_EXT, GL_UNSIGNED_BYTE, 1 /* UV plane */, &self->uvPlaneTextureRef);
// 5. Attach it to the second framebuffer's color attachment.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(self->uvPlaneTextureRef), 0);
Drawing into this framebuffer fills the UV plane; the CVPixelBuffer now has both its Y and UV planes and is complete.
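Putting the two passes together, a sketch of the draw loop (drawQuad, rgbToYProgram, and rgbToUVProgram are hypothetical helpers standing in for your actual quad rendering and RGB-to-Y / RGB-to-UV shader programs):
GLuint planeFBOs[2];
glGenFramebuffers(2, planeFBOs);

// Pass 1: render the Y plane at full resolution.
glBindFramebuffer(GL_FRAMEBUFFER, planeFBOs[0]);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(self->yPlaneTextureRef), 0);
glViewport(0, 0, width, height);
drawQuad(rgbToYProgram);      // hypothetical: shader computes Y from the RGB texture

// Pass 2: render the CbCr plane at half resolution.
glBindFramebuffer(GL_FRAMEBUFFER, planeFBOs[1]);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(self->uvPlaneTextureRef), 0);
glViewport(0, 0, width / 2, height / 2);
drawQuad(rgbToUVProgram);     // hypothetical: shader writes CbCr into the RG channels

glFinish();                   // wait for the GPU before reading the CVPixelBuffer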
For an RGB CVPixelBuffer a single framebuffer is enough; bind one texture that shares a cache buffer with the CVPixelBuffer:
// 1. Create a CVPixelBuffer with format kCVPixelFormatType_32BGRA (again with
// kCVPixelBufferIOSurfacePropertiesKey in the attributes).
CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32BGRA, (__bridge CFDictionaryRef)attributes, &self->pixelBuffer);
// 2. Share the buffer; this is where the RGB path differs from YUV:
// internal format GL_RGBA, source format GL_BGRA, a single plane (index 0).
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCache, self->pixelBuffer, NULL, GL_TEXTURE_2D, GL_RGBA, width, height, GL_BGRA, GL_UNSIGNED_BYTE, 0, &self->renderTexture);
// 3. Attach it to the framebuffer's color attachment.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(self->renderTexture), 0);
With these three steps the framebuffer is set up; draw the texture into it and the CVPixelBuffer holds the result.
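One caveat: GL rendering is asynchronous, so synchronize before any consumer touches the buffer. A sketch of reading the result back on the CPU:
glFinish();  // make sure the draw has actually completed

CVPixelBufferLockBaseAddress(self->pixelBuffer, kCVPixelBufferLock_ReadOnly);
void *bgraBytes = CVPixelBufferGetBaseAddress(self->pixelBuffer);
// ... read the BGRA bytes, e.g. hand them to an encoder ...
CVPixelBufferUnlockBaseAddress(self->pixelBuffer, kCVPixelBufferLock_ReadOnly);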
Other CVPixelBuffer operations
CVPixelBuffer->CIImage->CGImage
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:cvPixelBuffer];
// ciImage.CGImage is usually NULL for buffer-backed images; render through a CIContext.
CGImageRef cgImage = [[CIContext contextWithOptions:nil] createCGImage:ciImage fromRect:ciImage.extent];
CVPixelBuffer->CGImage
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
void *baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);   // base address of the pixels
size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
size_t bufferSize = CVPixelBufferGetDataSize(pixelBuffer);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();  // build the color space
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, baseAddress, bufferSize, NULL);
CGImageRef cgImage = CGImageCreate(width, height, 8, 32, bytesPerRow, rgbColorSpace, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little, provider, NULL, true, kCGRenderingIntentDefault);
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
CIImage->CVPixelBuffer
_context = [[CIContext alloc] init];
[_context render:ciImage toCVPixelBuffer:cvPixelBuffer];
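A slightly fuller sketch that also creates the destination buffer (the IOSurface attribute keeps the buffer usable with a texture cache later; the names are illustrative):
NSDictionary *attrs = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
CVPixelBufferRef dstBuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, (size_t)ciImage.extent.size.width, (size_t)ciImage.extent.size.height, kCVPixelFormatType_32BGRA, (__bridge CFDictionaryRef)attrs, &dstBuffer);

CIContext *context = [[CIContext alloc] init];  // create once and reuse; CIContext is expensive
[context render:ciImage toCVPixelBuffer:dstBuffer];
// ... use dstBuffer ...
CVPixelBufferRelease(dstBuffer);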
// Pixel format types
RGB :
kCVPixelFormatType_32BGRA = 'BGRA',
kCVPixelFormatType_32ABGR = 'ABGR',
kCVPixelFormatType_32RGBA = 'RGBA',
NV12 :
kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange = '420v', // luma=[16,235] chroma=[16,240]
kCVPixelFormatType_420YpCbCr8BiPlanarFullRange = '420f',  // luma=[0,255] chroma=[1,255]
semi-planar (the layout NV12 uses): all Y samples are stored contiguously first, then U and V are stored interleaved in a second plane;
planar: Y, U, and V are each stored in their own separate plane.
YUV420P :
kCVPixelFormatType_420YpCbCr8Planar = 'y420'
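To check which of these formats a given buffer actually uses, decode the FourCC at runtime. A small sketch:
// Print a CVPixelBuffer's pixel format as a readable FourCC, e.g. "420f" or "BGRA".
OSType fmt = CVPixelBufferGetPixelFormatType(pixelBuffer);
char fourCC[5] = { (char)(fmt >> 24), (char)(fmt >> 16), (char)(fmt >> 8), (char)fmt, 0 };
NSLog(@"pixel format: %s, planes: %zu", fourCC, CVPixelBufferGetPlaneCount(pixelBuffer));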