Converting between UIImage and raw RGB data in iOS
Touching the image data in iOS
Get data from an image
This direction is fairly simple: based on the existing image's properties, create a CGBitmapContext backed by a pointer you can access directly, then draw the image into that context to obtain the raw data.
Code:
CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(srcImg.CGImage);
CGColorSpaceRef colorRef = CGColorSpaceCreateDeviceRGB();
size_t width = static_cast<size_t>(srcImg.size.width);
size_t height = static_cast<size_t>(srcImg.size.height);
// Get source image data: 4 bytes per pixel
uint8_t *imageData = (uint8_t *) malloc(width * height * 4);
CGContextRef imageContext = CGBitmapContextCreate(imageData,
                                                  width, height,
                                                  8, width * 4,
                                                  colorRef, (CGBitmapInfo)alphaInfo);
CGContextDrawImage(imageContext, CGRectMake(0, 0, width, height), srcImg.CGImage);
CGContextRelease(imageContext);
CGColorSpaceRelease(colorRef);
Once you have the pointer, you can manipulate the pixel data directly.
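For instance, a minimal sketch of such a manipulation, assuming the buffer above came out in an RGBA (alpha-last) layout, might clear the red channel like this:
// A minimal sketch, assuming imageData holds width * height pixels of
// 4 bytes each in R G B A order (see the alphaInfo discussion below).
for (size_t y = 0; y < height; ++y) {
    for (size_t x = 0; x < width; ++x) {
        uint8_t *pixel = imageData + (y * width + x) * 4;
        pixel[0] = 0;   // R; pixel[1] = G, pixel[2] = B, pixel[3] = A
    }
}
// free(imageData) when the buffer is no longer needed.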
One thing to watch out for: alphaInfo. The images I deal with usually carry transparency, but I have run into both RGBA and ARGB layouts, so you need to check which one this value indicates. The possible values are listed below (a small sketch of checking this follows the enum):
typedef CF_ENUM(uint32_t, CGImageAlphaInfo) {
    kCGImageAlphaNone,               /* For example, RGB. */
    kCGImageAlphaPremultipliedLast,  /* For example, premultiplied RGBA */
    kCGImageAlphaPremultipliedFirst, /* For example, premultiplied ARGB */
    kCGImageAlphaLast,               /* For example, non-premultiplied RGBA */
    kCGImageAlphaFirst,              /* For example, non-premultiplied ARGB */
    kCGImageAlphaNoneSkipLast,       /* For example, RGBX. */
    kCGImageAlphaNoneSkipFirst,      /* For example, XRGB. */
    kCGImageAlphaOnly                /* No color data, alpha data only */
};
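The check could look roughly like the following; the variable names are mine, and the CGBitmapInfo byte-order flags are ignored for simplicity:
// A small sketch of deciding the byte layout from alphaInfo.
BOOL alphaFirst = (alphaInfo == kCGImageAlphaPremultipliedFirst ||
                   alphaInfo == kCGImageAlphaFirst ||
                   alphaInfo == kCGImageAlphaNoneSkipFirst);
// alphaFirst means each pixel is A/X R G B; otherwise it is R G B A/X.
size_t alphaOffset = alphaFirst ? 0 : 3;
size_t redOffset   = alphaFirst ? 1 : 0;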
Generally our projects have the Xcode option that compresses PNG images turned on, so the image we get is usually premultiplied; that is, the RGB values have already been multiplied by the alpha value. Keep this in mind when using the data for OpenGL textures; a rough sketch of un-premultiplying the buffer follows.
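This is only a sketch, assuming the RGBA (alpha-last) layout from above; note that fully transparent pixels cannot be recovered and the integer division is approximate:
// A rough sketch: un-premultiply an RGBA buffer in place
// (e.g. before uploading it as a straight-alpha OpenGL texture).
for (size_t i = 0; i < width * height; ++i) {
    uint8_t *p = imageData + i * 4;
    uint8_t a = p[3];
    if (a != 0 && a != 255) {
        p[0] = (uint8_t)((p[0] * 255 + a / 2) / a);
        p[1] = (uint8_t)((p[1] * 255 + a / 2) / a);
        p[2] = (uint8_t)((p[2] * 255 + a / 2) / a);
    }
}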
Create an image from raw data
1) Create a bitmap context and get an image from it
size_t bitsPerComponent = 8;
size_t bitsPerPixel = 32;
size_t bytesPerRow = static_cast<size_t>(4 * outWidth);
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
// set the alpha mode to premultiplied RGBA
CGBitmapInfo bitmapInfo = (CGBitmapInfo)kCGImageAlphaPremultipliedLast;
////
// This approach is simpler and does not involve a coordinate flip.
// You can use either approach,
// but UIGraphicsBeginImageContext() is the more modern one.
////
CGContextRef cgBitmapCtx = CGBitmapContextCreate(outData,
                                                 static_cast<size_t>(outWidth),
                                                 static_cast<size_t>(outHeight),
                                                 bitsPerComponent,
                                                 bytesPerRow,
                                                 colorSpaceRef,
                                                 bitmapInfo);
CGImageRef cgImg = CGBitmapContextCreateImage(cgBitmapCtx);
UIImage *retImg = [UIImage imageWithCGImage:cgImg];
CGImageRelease(cgImg);
CGContextRelease(cgBitmapCtx);
CGColorSpaceRelease(colorSpaceRef);
free(outData);
The key function is CGBitmapContextCreate: the data pointer outData is passed directly as an initialization parameter, so the context is backed by the correct data from the start, and a CGImage can then be obtained from it directly.
2) CGDataProvider --> CGImage --> UIImage
The idea of this approach is to create the CGImage directly with the CGImageCreate() function.
size_t bitsPerComponent = 8;
size_t bitsPerPixel = 32;
size_t bytesPerRow = static_cast<size_t>(4 * outWidth);
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
// set the alpha mode to premultiplied RGBA
CGBitmapInfo bitmapInfo = (CGBitmapInfo)kCGImageAlphaPremultipliedLast;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, outData, outDataLength, NULL);
CGImageRef imageRef = CGImageCreate(static_cast<size_t>(outWidth),
                                    static_cast<size_t>(outHeight),
                                    bitsPerComponent, bitsPerPixel, bytesPerRow,
                                    colorSpaceRef, bitmapInfo, provider,
                                    NULL, NO, renderingIntent);
UIImage *retImage1 = [UIImage imageWithCGImage:imageRef];
// The image retains the provider and color space, so both can be released here;
// release imageRef with CGImageRelease() once it is no longer needed
// (it is still used by the snippet below).
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpaceRef);
No callback is needed when creating the provider, so NULL is passed directly. Creating the CGImage requires a detailed set of configuration parameters; some are the same as when creating a CGBitmapContext, and the rest are defaults or features we don't use, such as the decode array. Once we have the CGImageRef we can get a UIImage object directly, but I noticed that a colleague of mine wrote the following code:
UIGraphicsBeginImageContext(outSize);
// equivalently: UIGraphicsBeginImageContextWithOptions(outSize, NO, 1.0f);
CGContextRef cgCtx = UIGraphicsGetCurrentContext();
CGContextSetBlendMode(cgCtx, kCGBlendModeCopy);
CGContextDrawImage(cgCtx, CGRectMake(0.0, 0.0, outWidth, outHeight), imageRef);
UIImage *retImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
What this code does is create a CGContext, draw the CGImage into the current context, and then obtain a UIImage from it. It is essentially equivalent to [UIImage imageWithCGImage:imageRef]. I don't know why my colleague wrote it this way; perhaps the simpler method has some pitfall that I just haven't run into yet. So I'm noting this code down for later use.
Regarding the UIGraphicsBeginImageContext() function, Apple's documentation says:
iOS Note: iOS applications should use the function UIGraphicsBeginImageContextWithOptions instead of using the low-level Quartz functions described here. If your application creates an offscreen bitmap using Quartz, the coordinate system used by bitmap graphics context is the default Quartz coordinate system. In contrast, if your application creates an image context by calling the function UIGraphicsBeginImageContextWithOptions, UIKit applies the same transformation to the context’s coordinate system as it does to a UIView object’s graphics context. This allows your application to use the same drawing code for either without having to worry about different coordinate systems. Although your application can manually adjust the coordinate transformation matrix to achieve the correct results, in practice, there is no performance benefit to doing so.
The low-level Quartz functions mentioned above are exactly the CGBitmapContextCreate family of functions we used earlier. With the newer function, the bitmap context that gets created is bound to the current graphics state for us; we still call CGContextDrawImage() to draw the CGImage into that context and then retrieve the UIImage. Overall the principle is the same as before, but this is the more modern, recommended way to do it.
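Putting the pieces together, the whole raw-data-to-UIImage path through the UIKit context could be wrapped roughly as below. The function name and the assumption of tightly packed, premultiplied RGBA input are mine, not from the original code, and as the comment in the first method hints, this path may flip the image vertically, so check the output orientation.
// A sketch: raw premultiplied RGBA bytes -> CGImage -> UIImage via UIKit.
// rgbaData must point to width * height * 4 bytes and stay valid until this returns.
static UIImage *UIImageFromRGBA(const uint8_t *rgbaData, size_t width, size_t height)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, rgbaData,
                                                              width * height * 4, NULL);
    CGImageRef cgImage = CGImageCreate(width, height, 8, 32, width * 4,
                                       colorSpace,
                                       (CGBitmapInfo)kCGImageAlphaPremultipliedLast,
                                       provider, NULL, NO, kCGRenderingIntentDefault);

    // Draw through a UIKit image context, as in the snippet above.
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height), NO, 1.0f);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(ctx, kCGBlendModeCopy);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CGImageRelease(cgImage);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return result;
}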
References
Quartz 2D Programming Guide
How do I create a CGImage with RGB data?
Converting RGB data into a bitmap in Objective-C++ Cocoa
CGImage to UIImage doesn't work