After reading through the RAC, Masonry, and AFNetworking sources, let's relax a little and look at another classic, widely used third-party framework: GPUImage, which applies filter effects to images and video and keeps that processing fast by running it on the GPU.
The photo below was produced by applying GPUImage's sepia filter; it is essentially the same operation the filter apps on your phone perform.
The GPUImage framework is a BSD-licensed iOS library that lets you apply GPU-accelerated filters and other effects to images, live camera video, and movies. Compared with Core Image (part of iOS 5.0), GPUImage allows you to write your own custom filters, supports deployment back to iOS 4.0, and has a simpler interface. However, it currently lacks some of Core Image's more advanced features, such as face detection.
For massively parallel operations like processing images or live video frames, the GPU has a clear performance advantage over the CPU. On an iPhone 4, a simple image filter can run more than 100 times faster on the GPU than its CPU-based equivalent. However, running custom filters on the GPU requires a lot of code to set up and maintain an OpenGL ES 2.0 rendering target for those filters. The framework's author created a sample project to do exactly this and found that a great deal of boilerplate code was needed, so he assembled this framework, which encapsulates many of the common tasks you run into when processing images and video and spares you from worrying about the OpenGL ES 2.0 underpinnings.
When handling video, this framework compares favorably to Core Image: on an iPhone 4 it takes only 2.5 ms to upload a frame from the camera, apply a gamma filter, and display it, versus 106 ms for the same operation with Core Image. CPU-based processing takes 460 ms, making GPUImage 40 times faster than Core Image and 184 times faster than CPU-bound processing on that hardware. On an iPhone 4S, GPUImage is only 4 times faster than Core Image and 102 times faster than CPU-bound processing in this scenario. For more complex operations, though, such as larger-radius Gaussian blurs, Core Image currently outperforms GPUImage.
For this framework we will only analyze how it is used with still images; the other input types work much the same way.
UIImage *image = [UIImage imageNamed:@"UNADJUSTEDNONRAW_thumb_cf"];

// Wrap the UIImage in a GPUImagePicture so it can act as a filter source
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:image];

// Create the filters (note: only the sepia filter is wired into the chain below)
GPUImageSepiaFilter *stillImageFilter = [[GPUImageSepiaFilter alloc] init];
GPUImageBrightnessFilter *brightnessFilter = [[GPUImageBrightnessFilter alloc] init];
brightnessFilter.brightness = 0.9;
GPUImageExposureFilter *exposureFilter = [[GPUImageExposureFilter alloc] init];
exposureFilter.exposure = 4.0;

// Attach the sepia filter to the source, render, and read the result back
[stillImageSource addTarget:stillImageFilter];
[stillImageFilter useNextFrameForImageCapture];
[stillImageSource processImage];
UIImage *currentFilteredVideoFrame = [stillImageFilter imageFromCurrentFramebuffer];
self.showImg.image = currentFilteredVideoFrame;
This code breaks down into roughly the following steps:
1. To apply rendering effects to an image, it must first be converted into a GPUImagePicture; only then can filters act on it.
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:image];
2. Let's step into the source to see exactly how GPUImagePicture is initialized.
- (id)initWithImage:(UIImage *)newImageSource;
{
    if (!(self = [self initWithImage:newImageSource smoothlyScaleOutput:NO]))
    {
        return nil;
    }
    return self;
}

- (id)initWithImage:(UIImage *)newImageSource smoothlyScaleOutput:(BOOL)smoothlyScaleOutput;
{
    return [self initWithCGImage:[newImageSource CGImage] smoothlyScaleOutput:smoothlyScaleOutput];
}

- (id)initWithCGImage:(CGImageRef)newImageSource smoothlyScaleOutput:(BOOL)smoothlyScaleOutput;
{
    return [self initWithCGImage:newImageSource smoothlyScaleOutput:smoothlyScaleOutput removePremultiplication:NO];
}
Note the conversion happening here: the UIImage is first converted to its underlying Quartz (C-type) image via [newImageSource CGImage].
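One caveat worth flagging: a UIImage only has a CGImage backing when it was created from bitmap data. If it was created from a CIImage, its CGImage property returns NULL, and the initializer chain above would then be handed an empty image. A minimal defensive check (the variable name is just illustrative):

UIImage *source = [UIImage imageNamed:@"UNADJUSTEDNONRAW_thumb_cf"];
if (source.CGImage == NULL) {
    // CIImage-backed UIImages have no CGImage; GPUImagePicture's size
    // assertion would fail, so rasterize the image into a bitmap first.
    NSLog(@"Image has no CGImage backing, cannot hand it to GPUImagePicture directly.");
}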
The code that comes next may well drive you crazy. It is longer than anything we have seen so far, but as before we will only analyze the branches that are actually taken.
- (id)initWithCGImage:(CGImageRef)newImageSource smoothlyScaleOutput:(BOOL)smoothlyScaleOutput removePremultiplication:(BOOL)removePremultiplication;
{
    if (!(self = [super init]))
    {
        return nil;
    }

    hasProcessedImage = NO;
    self.shouldSmoothlyScaleOutput = smoothlyScaleOutput;
    imageUpdateSemaphore = dispatch_semaphore_create(0);
    dispatch_semaphore_signal(imageUpdateSemaphore);

    // TODO: Dispatch this whole thing asynchronously to move image loading off main thread
    CGFloat widthOfImage = CGImageGetWidth(newImageSource);
    CGFloat heightOfImage = CGImageGetHeight(newImageSource);

    // If passed an empty image reference, CGContextDrawImage will fail in future versions of the SDK.
    NSAssert( widthOfImage > 0 && heightOfImage > 0, @"Passed image must not be empty - it should be at least 1px tall and wide");

    pixelSizeOfImage = CGSizeMake(widthOfImage, heightOfImage);
    CGSize pixelSizeToUseForTexture = pixelSizeOfImage;

    BOOL shouldRedrawUsingCoreGraphics = NO;

    // For now, deal with images larger than the maximum texture size by resizing to be within that limit
    CGSize scaledImageSizeToFitOnGPU = [GPUImageContext sizeThatFitsWithinATextureForSize:pixelSizeOfImage];
    if (!CGSizeEqualToSize(scaledImageSizeToFitOnGPU, pixelSizeOfImage))
    {
        pixelSizeOfImage = scaledImageSizeToFitOnGPU;
        pixelSizeToUseForTexture = pixelSizeOfImage;
        shouldRedrawUsingCoreGraphics = YES;
    }

    if (self.shouldSmoothlyScaleOutput)
    {
        // In order to use mipmaps, you need to provide power-of-two textures, so convert to the next largest power of two and stretch to fill
        CGFloat powerClosestToWidth = ceil(log2(pixelSizeOfImage.width));
        CGFloat powerClosestToHeight = ceil(log2(pixelSizeOfImage.height));
        pixelSizeToUseForTexture = CGSizeMake(pow(2.0, powerClosestToWidth), pow(2.0, powerClosestToHeight));
        shouldRedrawUsingCoreGraphics = YES;
    }

    GLubyte *imageData = NULL;
    CFDataRef dataFromImageDataProvider = NULL;
    GLenum format = GL_BGRA;
    BOOL isLitteEndian = YES;
    BOOL alphaFirst = NO;
    BOOL premultiplied = NO;

    if (!shouldRedrawUsingCoreGraphics) {
        /* Check that the memory layout is compatible with GL, as we cannot use glPixelStore to
         * tell GL about the memory layout with GLES.
         */
        if (CGImageGetBytesPerRow(newImageSource) != CGImageGetWidth(newImageSource) * 4 ||
            CGImageGetBitsPerPixel(newImageSource) != 32 ||
            CGImageGetBitsPerComponent(newImageSource) != 8)
        {
            shouldRedrawUsingCoreGraphics = YES;
        } else {
            /* Check that the bitmap pixel format is compatible with GL */
            CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(newImageSource);
            if ((bitmapInfo & kCGBitmapFloatComponents) != 0) {
                /* We don't support float components for use directly in GL */
                shouldRedrawUsingCoreGraphics = YES;
            } else {
                CGBitmapInfo byteOrderInfo = bitmapInfo & kCGBitmapByteOrderMask;
                if (byteOrderInfo == kCGBitmapByteOrder32Little) {
                    /* Little endian, for alpha-first we can use this bitmap directly in GL */
                    CGImageAlphaInfo alphaInfo = bitmapInfo & kCGBitmapAlphaInfoMask;
                    if (alphaInfo != kCGImageAlphaPremultipliedFirst && alphaInfo != kCGImageAlphaFirst &&
                        alphaInfo != kCGImageAlphaNoneSkipFirst) {
                        shouldRedrawUsingCoreGraphics = YES;
                    }
                } else if (byteOrderInfo == kCGBitmapByteOrderDefault || byteOrderInfo == kCGBitmapByteOrder32Big) {
                    isLitteEndian = NO;
                    /* Big endian, for alpha-last we can use this bitmap directly in GL */
                    CGImageAlphaInfo alphaInfo = bitmapInfo & kCGBitmapAlphaInfoMask;
                    if (alphaInfo != kCGImageAlphaPremultipliedLast && alphaInfo != kCGImageAlphaLast &&
                        alphaInfo != kCGImageAlphaNoneSkipLast) {
                        shouldRedrawUsingCoreGraphics = YES;
                    } else {
                        /* Can access directly using GL_RGBA pixel format */
                        premultiplied = alphaInfo == kCGImageAlphaPremultipliedLast || alphaInfo == kCGImageAlphaPremultipliedLast;
                        alphaFirst = alphaInfo == kCGImageAlphaFirst || alphaInfo == kCGImageAlphaPremultipliedFirst;
                        format = GL_RGBA;
                    }
                }
            }
        }
    }

    //    CFAbsoluteTime elapsedTime, startTime = CFAbsoluteTimeGetCurrent();

    if (shouldRedrawUsingCoreGraphics)
    {
        // For resized or incompatible image: redraw
        imageData = (GLubyte *) calloc(1, (int)pixelSizeToUseForTexture.width * (int)pixelSizeToUseForTexture.height * 4);

        CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();

        CGContextRef imageContext = CGBitmapContextCreate(imageData, (size_t)pixelSizeToUseForTexture.width, (size_t)pixelSizeToUseForTexture.height, 8, (size_t)pixelSizeToUseForTexture.width * 4, genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        //        CGContextSetBlendMode(imageContext, kCGBlendModeCopy); // From Technical Q&A QA1708: http://developer.apple.com/library/ios/#qa/qa1708/_index.html
        CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, pixelSizeToUseForTexture.width, pixelSizeToUseForTexture.height), newImageSource);
        CGContextRelease(imageContext);
        CGColorSpaceRelease(genericRGBColorspace);
        isLitteEndian = YES;
        alphaFirst = YES;
        premultiplied = YES;
    }
    else
    {
        // Access the raw image bytes directly
        dataFromImageDataProvider = CGDataProviderCopyData(CGImageGetDataProvider(newImageSource));
        imageData = (GLubyte *)CFDataGetBytePtr(dataFromImageDataProvider);
    }

    if (removePremultiplication && premultiplied) {
        NSUInteger totalNumberOfPixels = round(pixelSizeToUseForTexture.width * pixelSizeToUseForTexture.height);
        uint32_t *pixelP = (uint32_t *)imageData;
        uint32_t pixel;
        CGFloat srcR, srcG, srcB, srcA;

        for (NSUInteger idx=0; idx<totalNumberOfPixels; idx++, pixelP++) {
            pixel = isLitteEndian ? CFSwapInt32LittleToHost(*pixelP) : CFSwapInt32BigToHost(*pixelP);

            if (alphaFirst) {
                srcA = (CGFloat)((pixel & 0xff000000) >> 24) / 255.0f;
            }
            else {
                srcA = (CGFloat)(pixel & 0x000000ff) / 255.0f;
                pixel >>= 8;
            }

            srcR = (CGFloat)((pixel & 0x00ff0000) >> 16) / 255.0f;
            srcG = (CGFloat)((pixel & 0x0000ff00) >> 8) / 255.0f;
            srcB = (CGFloat)(pixel & 0x000000ff) / 255.0f;

            srcR /= srcA; srcG /= srcA; srcB /= srcA;

            pixel = (uint32_t)(srcR * 255.0) << 16;
            pixel |= (uint32_t)(srcG * 255.0) << 8;
            pixel |= (uint32_t)(srcB * 255.0);

            if (alphaFirst) {
                pixel |= (uint32_t)(srcA * 255.0) << 24;
            }
            else {
                pixel <<= 8;
                pixel |= (uint32_t)(srcA * 255.0);
            }
            *pixelP = isLitteEndian ? CFSwapInt32HostToLittle(pixel) : CFSwapInt32HostToBig(pixel);
        }
    }
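    // [Annotation, not part of the GPUImage source] The loop above converts
    // premultiplied alpha back to straight alpha. In premultiplied storage
    // each color channel holds channel * alpha: half-transparent pure red
    // (R=255, A=128) is stored as R=128, and dividing the stored value by
    // alpha (128/255) recovers the original 255. The division assumes
    // alpha > 0; a fully transparent pixel has no color left to recover.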
    //    elapsedTime = (CFAbsoluteTimeGetCurrent() - startTime) * 1000.0;
    //    NSLog(@"Core Graphics drawing time: %f", elapsedTime);

    //    CGFloat currentRedTotal = 0.0f, currentGreenTotal = 0.0f, currentBlueTotal = 0.0f, currentAlphaTotal = 0.0f;
    //    NSUInteger totalNumberOfPixels = round(pixelSizeToUseForTexture.width * pixelSizeToUseForTexture.height);
    //
    //    for (NSUInteger currentPixel = 0; currentPixel < totalNumberOfPixels; currentPixel++)
    //    {
    //        currentBlueTotal += (CGFloat)imageData[(currentPixel * 4)] / 255.0f;
    //        currentGreenTotal += (CGFloat)imageData[(currentPixel * 4) + 1] / 255.0f;
    //        currentRedTotal += (CGFloat)imageData[(currentPixel * 4 + 2)] / 255.0f;
    //        currentAlphaTotal += (CGFloat)imageData[(currentPixel * 4) + 3] / 255.0f;
    //    }
    //
    //    NSLog(@"Debug, average input image red: %f, green: %f, blue: %f, alpha: %f", currentRedTotal / (CGFloat)totalNumberOfPixels, currentGreenTotal / (CGFloat)totalNumberOfPixels, currentBlueTotal / (CGFloat)totalNumberOfPixels, currentAlphaTotal / (CGFloat)totalNumberOfPixels);

    runSynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext useImageProcessingContext];

        outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:pixelSizeToUseForTexture onlyTexture:YES];
        [outputFramebuffer disableReferenceCounting];

        glBindTexture(GL_TEXTURE_2D, [outputFramebuffer texture]);
        if (self.shouldSmoothlyScaleOutput)
        {
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        }
        // no need to use self.outputTextureOptions here since pictures need this texture formats and type
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)pixelSizeToUseForTexture.width, (int)pixelSizeToUseForTexture.height, 0, format, GL_UNSIGNED_BYTE, imageData);

        if (self.shouldSmoothlyScaleOutput)
        {
            glGenerateMipmap(GL_TEXTURE_2D);
        }
        glBindTexture(GL_TEXTURE_2D, 0);
    });
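    // [Annotation, not part of the GPUImage source] glTexImage2D copies the
    // pixel bytes from client memory into the GL texture, so once the block
    // above has run, imageData is no longer needed. That is why the method
    // can safely free() it (or CFRelease the backing CFData) right below.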
    if (shouldRedrawUsingCoreGraphics)
    {
        free(imageData);
    }
    else
    {
        if (dataFromImageDataProvider)
        {
            CFRelease(dataFromImageDataProvider);
        }
    }

    return self;
}
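Before re-tracing the method step by step, one small detail from its opening lines deserves a note: imageUpdateSemaphore is created with a value of 0 and then immediately signaled, which turns it into a binary semaphore, in effect a non-blocking lock around image updates. A minimal sketch of the pattern (the variable name here is illustrative, not GPUImage's):

dispatch_semaphore_t updateLock = dispatch_semaphore_create(0);
dispatch_semaphore_signal(updateLock); // value is now 1: "unlocked"

if (dispatch_semaphore_wait(updateLock, DISPATCH_TIME_NOW) != 0) {
    // a previous update is still in flight, so skip this one
} else {
    // ... perform the update ...
    dispatch_semaphore_signal(updateLock); // "unlock" again
}

GPUImagePicture appears to use this so that overlapping processImage calls are skipped rather than queued up while a previous frame is still being processed.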
3. Read the image dimensions

CGFloat widthOfImage = CGImageGetWidth(newImageSource);
CGFloat heightOfImage = CGImageGetHeight(newImageSource);
pixelSizeOfImage = CGSizeMake(widthOfImage, heightOfImage);
BOOL shouldRedrawUsingCoreGraphics = NO;
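One thing that is easy to trip over here: CGImageGetWidth/CGImageGetHeight report pixels, while UIImage's size property reports points. For a @2x asset the two differ by the screen scale, which you can verify with a quick probe (purely illustrative):

UIImage *img = [UIImage imageNamed:@"UNADJUSTEDNONRAW_thumb_cf"];
NSLog(@"UIImage size in points: %@", NSStringFromCGSize(img.size));
NSLog(@"CGImage size in pixels: %zu x %zu",
      CGImageGetWidth(img.CGImage), CGImageGetHeight(img.CGImage));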
4. Resize the image

As the comment in the source says: for now, images larger than the maximum texture size are dealt with by resizing them to fit within that limit.

CGSize scaledImageSizeToFitOnGPU = [GPUImageContext sizeThatFitsWithinATextureForSize:pixelSizeOfImage];
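sizeThatFitsWithinATextureForSize: lives in GPUImageContext; conceptually it clamps a size to the GPU's maximum texture dimension while preserving the aspect ratio. A rough sketch of that idea (a paraphrase, not the actual GPUImage implementation):

static CGSize FitWithinMaxTexture(CGSize inputSize, CGFloat maxTextureSize)
{
    // Already small enough: keep the original pixel size
    if (inputSize.width <= maxTextureSize && inputSize.height <= maxTextureSize) {
        return inputSize;
    }
    // Otherwise scale the longer edge down to the limit, keeping aspect ratio
    CGFloat scale = maxTextureSize / MAX(inputSize.width, inputSize.height);
    return CGSizeMake(floor(inputSize.width * scale), floor(inputSize.height * scale));
}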
5. Check that the memory layout is compatible with GL

Again quoting the source comment: the memory layout has to be checked for GL compatibility, because with GLES we cannot use glPixelStore to tell GL about the memory layout.
if (!shouldRedrawUsingCoreGraphics) {
    /* Check that the memory layout is compatible with GL, as we cannot use glPixelStore to
     * tell GL about the memory layout with GLES.
     */
    if (CGImageGetBytesPerRow(newImageSource) != CGImageGetWidth(newImageSource) * 4 ||
        CGImageGetBitsPerPixel(newImageSource) != 32 ||
        CGImageGetBitsPerComponent(newImageSource) != 8)
    {
        shouldRedrawUsingCoreGraphics = YES;
    } else {
        /* Check that the bitmap pixel format is compatible with GL */
        CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(newImageSource);
        if ((bitmapInfo & kCGBitmapFloatComponents) != 0) {
            /* We don't support float components for use directly in GL */
            shouldRedrawUsingCoreGraphics = YES;
        } else {
            CGBitmapInfo byteOrderInfo = bitmapInfo & kCGBitmapByteOrderMask;
            if (byteOrderInfo == kCGBitmapByteOrder32Little) {
                /* Little endian, for alpha-first we can use this bitmap directly in GL */
                CGImageAlphaInfo alphaInfo = bitmapInfo & kCGBitmapAlphaInfoMask;
                if (alphaInfo != kCGImageAlphaPremultipliedFirst && alphaInfo != kCGImageAlphaFirst &&
                    alphaInfo != kCGImageAlphaNoneSkipFirst) {
                    shouldRedrawUsingCoreGraphics = YES;
                }
            } else if (byteOrderInfo == kCGBitmapByteOrderDefault || byteOrderInfo == kCGBitmapByteOrder32Big) {
                isLitteEndian = NO;
                /* Big endian, for alpha-last we can use this bitmap directly in GL */
                CGImageAlphaInfo alphaInfo = bitmapInfo & kCGBitmapAlphaInfoMask;
                if (alphaInfo != kCGImageAlphaPremultipliedLast && alphaInfo != kCGImageAlphaLast &&
                    alphaInfo != kCGImageAlphaNoneSkipLast) {
                    shouldRedrawUsingCoreGraphics = YES;
                } else {
                    /* Can access directly using GL_RGBA pixel format */
                    premultiplied = alphaInfo == kCGImageAlphaPremultipliedLast || alphaInfo == kCGImageAlphaPremultipliedLast;
                    alphaFirst = alphaInfo == kCGImageAlphaFirst || alphaInfo == kCGImageAlphaPremultipliedFirst;
                    format = GL_RGBA;
                }
            }
        }
    }
}
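If you are curious which branch your own images take, a quick probe like this (purely illustrative) prints the values the checks above look at:

CGImageRef cg = image.CGImage;
NSLog(@"bitsPerPixel=%zu bitsPerComponent=%zu bytesPerRow=%zu bitmapInfo=0x%x",
      CGImageGetBitsPerPixel(cg), CGImageGetBitsPerComponent(cg),
      CGImageGetBytesPerRow(cg), (unsigned)CGImageGetBitmapInfo(cg));

In practice, PNGs loaded through imageNamed: usually come back as 32 bits per pixel with premultiplied alpha, so they often pass these checks and skip the Core Graphics redraw.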
When the layout is GL-compatible, the raw bytes can be used without a redraw:

// Access the raw image bytes directly
dataFromImageDataProvider = CGDataProviderCopyData(CGImageGetDataProvider(newImageSource));
imageData = (GLubyte *)CFDataGetBytePtr(dataFromImageDataProvider);
What remains is the handling of the GPU texture itself: the pixels are uploaded with glTexImage2D, and only afterwards is the buffer freed (or the CFData released, since the bytes returned by CFDataGetBytePtr are only valid while that CFData is alive). The key takeaway from this absurdly long stretch of code is simply this: the UIImage is first converted to a CGImage, and the CGImage is then turned into a GPUImagePicture.
Next, rendering operations are applied to the GPUImagePicture.
GPUImageSepiaFilter *stillImageFilter = [[GPUImageSepiaFilter alloc] init];
GPUImageBrightnessFilter *brightnessFilter = [[GPUImageBrightnessFilter alloc] init];
brightnessFilter.brightness = 0.9;
GPUImageExposureFilter *exposureFilter = [[GPUImageExposureFilter alloc] init];
exposureFilter.exposure = 4.0;
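As written, the brightness and exposure filters are created but never attached to anything; only the sepia filter is part of the chain. If you wanted all three applied in series, targets can be chained, along these lines (a sketch, not the only way to wire it):

[stillImageSource addTarget:stillImageFilter];   // sepia first
[stillImageFilter addTarget:brightnessFilter];   // then brightness
[brightnessFilter addTarget:exposureFilter];     // then exposure
[exposureFilter useNextFrameForImageCapture];    // capture at the end of the chain
[stillImageSource processImage];
UIImage *processed = [exposureFilter imageFromCurrentFramebuffer];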
At this point we have a first impression of how GPUImage is used, and we have skimmed what GPUImagePicture does internally. We did not analyze every detail, but we now know the overall flow; over the next few posts we will go through the filters one by one to see how a Filter actually performs its rendering.
Since GPUImage leans heavily on image-processing knowledge, we will be selective about what we cover, focusing on the underlying ideas and on how the framework is put together.