This article is a translation of GPUImage's README.md, kept purely as a personal study note. If anything is inaccurate, corrections are welcome.
The GPUImage framework is a BSD-licensed iOS library that lets you apply GPU-accelerated filters and other effects to images and video.
Compared to Core Image (introduced in iOS 5.0), GPUImage lets you write your own custom filters, supports deployment back to iOS 4.0, and has a much simpler interface.
However, GPUImage currently lacks some of Core Image's more advanced features, such as facial detection.
For massively parallel operations like processing images or video frames, GPUs have a significant performance advantage over CPUs. On an iPhone 4, a simple image filter can run more than 100 times faster on the GPU than the equivalent CPU-based filter.
However, running custom filters on the GPU requires a fair amount of code to set up and maintain an OpenGL ES 2.0 rendering target. There is a sample project that does this here: http://www.sunsetlakesoftware.com/2010/10/22/gpu-accelerated-video-processing-mac-and-ios
I found that there was a lot of boilerplate code to write when building that sample, so I packaged those pieces into this framework, which encapsulates many of the common tasks you'll encounter when processing images and video, so that you don't have to care about the OpenGL ES 2.0 underpinnings.
For video, I compared GPUImage's performance against Core Image: taking a frame from the camera on an iPhone 4, applying a gamma filter, and displaying the result.
For this same operation, GPUImage takes 2.5 ms, Core Image takes 106 ms, and CPU-based processing takes 460 ms, making GPUImage 40X faster than Core Image and 184X faster than the CPU.
On an iPhone 4S, for the same operation, GPUImage is 4X faster than Core Image and 102X faster than the CPU.
However, for more complex operations such as Gaussian blurs at larger radii, Core Image currently outperforms GPUImage.
BSD license: see License.txt for the full terms.
GPUImage uses OpenGL ES 2.0 shaders to perform image and video processing far faster than CPU-bound routines could, while hiding the complexity of the OpenGL ES API behind a simple Objective-C interface.
You supply input sources (still images, movies, live camera frames, and so on), wire them into a chain of filters, and read the processed result back out as a UIImage for display or save it to disk; this filter chain is the OpenGL ES processing pipeline.
Source objects inherit from GPUImageOutput and supply frames of image or video; they include GPUImageVideoCamera (for live video), GPUImageStillCamera (for photos), GPUImagePicture (for still images), and
GPUImageMovie (for movies). A source uploads still-image frames to OpenGL ES as textures, then hands those textures off to the next step in the chain.
Filters and other subsequent stages in the chain conform to the GPUImageInput protocol, which lets them take the texture supplied or processed by the previous stage and do something with it. Objects one step further along the chain are considered targets, and processing can be branched by adding multiple targets to a single output or filter.
For example, an app that takes in live video from the camera, converts that video to a sepia tone, and displays it on screen would use a chain like this: GPUImageVideoCamera -> GPUImageSepiaFilter -> GPUImageView.
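A minimal sketch of that chain in code (assuming filteredView is a GPUImageView already in your view hierarchy, and that the camera is retained, e.g. in a property, so that capture continues):
GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];
[videoCamera addTarget:sepiaFilter];
[sepiaFilter addTarget:filteredView]; // filteredView: a GPUImageView assumed to exist
[videoCamera startCameraCapture];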
Note: if you want to use GPUImage in a Swift project, you must use the steps described under "Adding this as a framework" rather than the steps below, because Swift needs third-party code to be packaged as modules.
Once you have the latest source code for GPUImage, it's fairly straightforward to add it to your application.
To make sure the framework's headers can be found, go to your project's build settings and set the Header Search Paths to the relative path from your application to the framework or GPUImage source directory.
Make this header search path recursive.
(Translator's note: I personally recommend installing via CocoaPods, e.g. by adding pod 'GPUImage' to your Podfile.)
Once installed, just #import "GPUImage.h" and you can use GPUImage in your project.
Note: if you run into the error "Unknown class GPUImageView in Interface Builder" or the like when trying to build an interface with Interface Builder, you need to add -ObjC to the Other Linker Flags in your project's build settings.
(Translator's note: iOS 4.x is rarely encountered any more, but the following passage is translated here for completeness.)
Also, if you want to deploy to iOS 4.x, Xcode (4.3) requires that you weak-link the Core Video framework in your final application, or you'll see crashes with the message
"Symbol not found: _CVOpenGLESTextureCacheCreate" when you submit to the App Store or distribute ad hoc builds.
To do this, go to your project's Build Phases -> Link Binary With Libraries and change CoreVideo.framework from Required to Optional.
GPUImage is built with ARC; if you want to use it within a manual-reference-counted application targeting iOS 4.x, you'll need to add -fobjc-arc to your project's Other Linker Flags.
If you don't want to include GPUImage as a dependency in your Xcode project, you can build a universal static library for the iOS Simulator or device yourself. To do this, run build.sh at the command line; the resulting library and header files will be located at build/Release-iphone.
You can also change the iOS SDK version by editing the IOSSDK_VER variable in build.sh (all available versions can be listed by running xcodebuild -showsdks).
Xcode 6 and iOS 8 support the use of full frameworks, as does the Mac, which greatly simplifies the process of adding GPUImage to your application. The recommended approach is to drag the .xcodeproj project file into your application's project (the same way you would when embedding it as a static library).
For your application's target, go to the Build Phases tab of the target build settings. Under the Target Dependencies grouping, add GPUImageFramework (not GPUImage, which builds the static library) on iOS,
or GPUImage on the Mac. Under Link Binary With Libraries, add GPUImage.framework.
This should cause GPUImage to build as a framework. Under Xcode 6, this will also build as a module, which allows you to use it in Swift projects. After the steps above, you can simply import GPUImage to pull it in.
Next, you need to add a new Copy Files build phase, set its Destination to Frameworks, and add the GPUImage.framework build product to it. This bundles the framework into your application (otherwise you'll see the runtime error "dyld: Library not loaded: @rpath/GPUImage.framework/GPUImage").
The documentation is generated from header comments using appledoc. To build it, switch to the Documentation scheme in Xcode.
Make sure "APPLEDOC_PATH" (under Build Settings -> User-Defined) points to an appledoc binary, which is available on GitHub or through Homebrew.
It will also build a .docset file, which you can view with your favorite documentation tool.
Filtering live video on an iOS device looks like the following:
GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
GPUImageFilter *customFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromFile:@"CustomShader"];
GPUImageView *filteredVideoView = [[GPUImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, viewWidth, viewHeight)];
// Add the view somewhere so it's visible
[videoCamera addTarget:customFilter];
[customFilter addTarget:filteredVideoView];
[videoCamera startCameraCapture];
This code:
1. Sets up a video source using the iOS device's back-facing camera, capturing 640x480 video. The video is captured in portrait mode, which means the landscape-mounted camera sensor's frames need to be rotated before display.
2. Creates a custom filter from the fragment shader in the file CustomShader.fsh, which takes in the camera's video frames in the chain. The filtered frames are finally displayed on screen by a UIView capable of presenting the filtered OpenGL ES texture, which a GPUImageView provides.
3. If the aspect ratio of the source video differs from that of the view, the video will be stretched or cropped to fit; you control this by setting the GPUImageView's fillMode property (see the sketch after this list).
4. For blends and other chains that take in more than one image, you create multiple outputs and add a single filter as a target of both of those outputs. The order in which the outputs are added as targets matters, because it directly affects the order in which the input images are blended or otherwise processed (also shown in the sketch below).
5. If you want to record audio from the microphone while capturing, set the camera's audioEncodingTarget to your movie writer, like so:
videoCamera.audioEncodingTarget = movieWriter;
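For items 3 and 4 above, a minimal sketch; kGPUImageFillModePreserveAspectRatioAndFill is one of the GPUImageView fill-mode constants, and overlayPicture stands in for a hypothetical second source (e.g. a GPUImagePicture) feeding a two-input blend filter:
filteredVideoView.fillMode = kGPUImageFillModePreserveAspectRatioAndFill;
GPUImageDissolveBlendFilter *blendFilter = [[GPUImageDissolveBlendFilter alloc] init];
[videoCamera addTarget:blendFilter];    // added first, so this becomes the blend's first input
[overlayPicture addTarget:blendFilter]; // added second, so this becomes the blend's second input
[blendFilter addTarget:filteredVideoView];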
Capturing and filtering a still photo works similarly to the live video above, using a GPUImageStillCamera:
stillCamera = [[GPUImageStillCamera alloc] init]; // keep stillCamera in a property or instance variable
stillCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
filter = [[GPUImageGammaFilter alloc] init];
[stillCamera addTarget:filter];
GPUImageView *filterView = (GPUImageView *)self.view;
[filter addTarget:filterView];
[stillCamera startCameraCapture];
This gives you a live, filtered preview of the still camera's feed. Note that this preview is only available on iOS 4.3 and above, so keep that in mind when setting your deployment target.
Once you want to capture a photo, you use a completion block like the following:
[stillCamera capturePhotoProcessedUpToFilter:filter withCompletionHandler:^(UIImage *processedImage, NSError *error){
NSData *dataForJPEGFile = UIImageJPEGRepresentation(processedImage, 0.8);
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSError *error2 = nil; // renamed to avoid redefining the block's error parameter
if (![dataForJPEGFile writeToFile:[documentsDirectory stringByAppendingPathComponent:@"FilteredPhoto.jpg"] options:NSAtomicWrite error:&error2])
{
return;
}
}];
This captures a full-size photo processed through the same filter as the preview, and saves it as a JPEG in the application's documents directory.
Note that on older devices (anything before the iPhone 4S, iPad 2, or Retina iPad), GPUImage currently can't process images wider or taller than 2048 pixels, due to texture size limitations.
This means that the iPhone 4, whose camera produces photos larger than 2048 pixels on a side, can't capture and filter photos this way.
A tiling mechanism is being implemented to work around this. All other devices should be able to capture and filter photos using this method.
There are a couple of ways to process a still image. The easiest is to create a still image source object and manually build a filter chain:
UIImage *inputImage = [UIImage imageNamed:@"Lambeau.jpg"];
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:inputImage];
GPUImageSepiaFilter *stillImageFilter = [[GPUImageSepiaFilter alloc] init];
[stillImageSource addTarget:stillImageFilter];
[stillImageFilter useNextFrameForImageCapture];
[stillImageSource processImage];
UIImage *currentFilteredVideoFrame = [stillImageFilter imageFromCurrentFramebuffer];
Note that for a manual capture of an image from a filter, you need to call -useNextFrameForImageCapture to tell the filter that you'll be grabbing an image from it later.
By default, GPUImage reuses framebuffers within filters to conserve resources, so if you need to hold on to a filter's framebuffer for manual image capture, you have to say so ahead of time.
For single filter effects, you can do this even more simply:
GPUImageSepiaFilter *stillImageFilter2 = [[GPUImageSepiaFilter alloc] init];
UIImage *quickFilteredImage = [stillImageFilter2 imageByFilteringImage:inputImage];
One significant advantage of GPUImage over Core Image on iOS (as of iOS 5.0) is the ability to write your own custom image and video processing filters. These filters are supplied as OpenGL ES 2.0 fragment shaders, written in the C-like OpenGL Shading Language.
A custom filter is initialized like this:
GPUImageFilter *customFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromFile:@"CustomShader"];
where the fragment shader lives in a file with the .fsh extension. Alternatively, if you don't want to ship your fragment shaders in your application bundle, you can use -initWithFragmentShaderFromString:
to supply the fragment shader as a string.
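For example, a minimal sketch of the string-based initializer, using the SHADER_STRING convenience macro from GPUImageFilter.h to define a simple pass-through shader inline:
NSString *passthroughShader = SHADER_STRING
(
 varying highp vec2 textureCoordinate;
 uniform sampler2D inputImageTexture;
 void main()
 {
     gl_FragColor = texture2D(inputImageTexture, textureCoordinate);
 }
);
GPUImageFilter *passthroughFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromString:passthroughShader];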
Fragment shaders perform their calculations for each pixel to be rendered at that filter stage. They do this in the OpenGL Shading Language (GLSL), a C-like language with additions specific to 2-D and 3-D graphics.
An example is the following sepia-tone filter shader:
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
lowp vec4 outputColor;
outputColor.r = (textureColor.r * 0.393) + (textureColor.g * 0.769) + (textureColor.b * 0.189);
outputColor.g = (textureColor.r * 0.349) + (textureColor.g * 0.686) + (textureColor.b * 0.168);
outputColor.b = (textureColor.r * 0.272) + (textureColor.g * 0.534) + (textureColor.b * 0.131);
outputColor.a = 1.0;
gl_FragColor = outputColor;
}
For an image filter to be usable within the GPUImage framework, the first two lines above are required: they take in the textureCoordinate varying (the current coordinate within the texture, normalized to 1.0) and the inputImageTexture uniform (the actual input image frame texture).
The rest of the shader grabs the color of the pixel at this coordinate in the input texture, manipulates it to produce a sepia tone, and writes the resulting pixel color out for the next stage of the processing pipeline.
One thing to note when adding fragment shaders to your Xcode project is that Xcode thinks they are source code files. To work around this, move your shader from the Compile Sources build phase to Copy Bundle Resources so that it gets included in your application bundle.
Movies can be loaded into the framework via the GPUImageMovie class, filtered, and then written out using a GPUImageMovieWriter.
(Translator's note: the performance caveats below barely matter on recent generations of iPhone and iPad.)
GPUImageMovieWriter is also extremely fast: it can filter the live 640x480 video coming off an iPhone 4's camera in realtime, so an already-filtered video source can be fed straight into it. Currently, GPUImageMovieWriter is fast enough to record live 720p video at up to 20 FPS on the iPhone 4, and both 720p and 1080p video on the iPhone 4S and the new iPad.
The following example shows how to load a sample movie, pass it through a pixellation filter, and record the result to disk as a 480x640 h.264 movie:
movieFile = [[GPUImageMovie alloc] initWithURL:sampleURL];
pixellateFilter = [[GPUImagePixellateFilter alloc] init];
[movieFile addTarget:pixellateFilter];
NSString *pathToMovie = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/Movie.m4v"];
unlink([pathToMovie UTF8String]);
NSURL *movieURL = [NSURL fileURLWithPath:pathToMovie];
movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(480.0, 640.0)];
[pixellateFilter addTarget:movieWriter];
movieWriter.shouldPassthroughAudio = YES;
movieFile.audioEncodingTarget = movieWriter;
[movieFile enableSynchronizedEncodingUsingMovieWriter:movieWriter];
[movieWriter startRecording];
[movieFile startProcessing];
Once recording is finished, you need to remove the movie recorder from the filter chain and close off the recording with code like the following:
[pixellateFilter removeTarget:movieWriter];
[movieWriter finishRecording];
A movie won't be usable until it has been finished off, so if the process is interrupted before this point, the recording will be lost.
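Since you usually can't predict when a movie file will finish processing, one way to handle teardown is from the writer's completionBlock, sketched here on the assumption that the completionBlock property of GPUImageMovieWriter is available in your version of the framework:
__weak GPUImageMovieWriter *weakWriter = movieWriter;
[movieWriter setCompletionBlock:^{
    // Runs once the source movie has finished feeding frames to the writer.
    [pixellateFilter removeTarget:weakWriter];
    [weakWriter finishRecording];
}];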
GPUImage can also export and import textures from OpenGL ES through its GPUImageTextureOutput and GPUImageTextureInput classes, respectively.
This lets you record a movie from an OpenGL ES scene that is rendered to a framebuffer object with a bound texture, or filter video or images and then feed them into OpenGL ES as a texture to be displayed in the scene.
The one caution with this approach is that the textures used in these processes must be shared between GPUImage's OpenGL ES context and any other context via a share group or something similar.
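A minimal sketch of the import direction, assuming textureID is a texture created in an OpenGL ES context that shares a group with GPUImage's context, and that its size is known:
GPUImageTextureInput *textureInput = [[GPUImageTextureInput alloc] initWithTexture:textureID size:CGSizeMake(1024.0, 768.0)];
GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];
[textureInput addTarget:sepiaFilter];
// Push the texture's current contents through the filter chain for this frame.
[textureInput processTextureWithFrameTime:kCMTimeZero];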
There are currently 125 built-in filters, falling into categories such as color adjustments, image processing, blending modes, and visual effects:
GPUImageBrightnessFilter: Adjusts the brightness of the image
GPUImageExposureFilter: Adjusts the exposure of the image
GPUImageContrastFilter: Adjusts the contrast of the image
GPUImageSaturationFilter: Adjusts the saturation of an image
GPUImageGammaFilter: Adjusts the gamma of an image
GPUImageLevelsFilter: Photoshop-like levels adjustment. The min, max, minOut and maxOut parameters are floats in the range [0, 1]. If you have parameters from Photoshop in the range [0, 255] you must first convert them to be [0, 1]. The gamma/mid parameter is a float >= 0. This matches the value from Photoshop. If you want to apply levels to RGB as well as individual channels you need to use this filter twice - first for the individual channels and then for all channels.
GPUImageColorMatrixFilter: Transforms the colors of an image by applying a matrix to them
GPUImageRGBFilter: Adjusts the individual RGB channels of an image
GPUImageHueFilter: Adjusts the hue of an image
GPUImageWhiteBalanceFilter: Adjusts the white balance of an image.
GPUImageToneCurveFilter: Adjusts the colors of an image based on spline curves for each color channel.
GPUImageHighlightShadowFilter: Adjusts the shadows and highlights of an image
GPUImageLookupFilter: Uses an RGB color lookup image to remap the colors in an image. First, use your favourite photo editing application to apply a filter to lookup.png from GPUImage/framework/Resources. For this to work properly each pixel color must not depend on other pixels (e.g. blur will not work). If you need a more complex filter you can create as many lookup tables as required. Once ready, use your new lookup.png file as a second input for GPUImageLookupFilter. (See the sketch after this list for a usage example.)
GPUImageAmatorkaFilter: A photo filter based on a Photoshop action by Amatorka: http://amatorka.deviantart.com/art/Amatorka-Action-2-121069631 . If you want to use this effect you have to add lookup_amatorka.png from the GPUImage Resources folder to your application bundle.
GPUImageMissEtikateFilter: A photo filter based on a Photoshop action by Miss Etikate: http://miss-etikate.deviantart.com/art/Photoshop-Action-15-120151961 . If you want to use this effect you have to add lookup_miss_etikate.png from the GPUImage Resources folder to your application bundle.
GPUImageSoftEleganceFilter: Another lookup-based color remapping filter. If you want to use this effect you have to add lookup_soft_elegance_1.png and lookup_soft_elegance_2.png from the GPUImage Resources folder to your application bundle.
GPUImageColorInvertFilter: Inverts the colors of an image
GPUImageGrayscaleFilter: Converts an image to grayscale (a slightly faster implementation of the saturation filter, without the ability to vary the color contribution)
GPUImageMonochromeFilter: Converts the image to a single-color version, based on the luminance of each pixel
GPUImageFalseColorFilter: Uses the luminance of the image to mix between two user-specified colors
GPUImageHazeFilter: Used to add or remove haze (similar to a UV filter)
GPUImageSepiaFilter: Simple sepia tone filter
GPUImageOpacityFilter: Adjusts the alpha channel of the incoming image
GPUImageSolidColorGenerator: This outputs a generated image with a solid color. You need to define the image size using -forceProcessingAtSize:
GPUImageLuminanceThresholdFilter: Pixels with a luminance above the threshold will appear white, and those below will be black
GPUImageAdaptiveThresholdFilter: Determines the local luminance around a pixel, then turns the pixel black if it is below that local luminance and white if above. This can be useful for picking out text under varying lighting conditions.
GPUImageAverageLuminanceThresholdFilter: This applies a thresholding operation where the threshold is continually adjusted based on the average luminance of the scene.
GPUImageHistogramFilter: This analyzes the incoming image and creates an output histogram with the frequency at which each color value occurs. The output of this filter is a 3-pixel-high, 256-pixel-wide image with the center (vertical) pixels containing pixels that correspond to the frequency at which various color values occurred. Each color value occupies one of the 256 width positions, from 0 on the left to 255 on the right. This histogram can be generated for individual color channels (kGPUImageHistogramRed, kGPUImageHistogramGreen, kGPUImageHistogramBlue), the luminance of the image (kGPUImageHistogramLuminance), or for all three color channels at once (kGPUImageHistogramRGB).
GPUImageHistogramGenerator: This is a special filter, in that it’s primarily intended to work with the GPUImageHistogramFilter. It generates an output representation of the color histograms generated by GPUImageHistogramFilter, but it could be repurposed to display other kinds of values. It takes in an image and looks at the center (vertical) pixels. It then plots the numerical values of the RGB components in separate colored graphs in an output texture. You may need to force a size for this filter in order to make its output visible.
GPUImageAverageColor: This processes an input image and determines the average color of the scene, by averaging the RGBA components for each pixel in the image. A reduction process is used to progressively downsample the source image on the GPU, followed by a short averaging calculation on the CPU. The output from this filter is meaningless, but you need to set the colorAverageProcessingFinishedBlock property to a block that takes in four color components and a frame time and does something with them.
GPUImageLuminosity: Like the GPUImageAverageColor, this reduces an image to its average luminosity. You need to set the luminosityProcessingFinishedBlock to handle the output of this filter, which just returns a luminosity value and a frame time.
GPUImageChromaKeyFilter: For a given color in the image, sets the alpha channel to 0. This is similar to the GPUImageChromaKeyBlendFilter, only instead of blending in a second image for a matching color this doesn’t take in a second image and just turns a given color transparent.
GPUImageTransformFilter: This applies an arbitrary 2-D or 3-D transformation to an image
GPUImageCropFilter: This crops an image to a specific region, then passes only that region on to the next stage in the filter
GPUImageLanczosResamplingFilter: This lets you up- or downsample an image using Lanczos resampling, which results in noticeably better quality than the standard linear or trilinear interpolation. Simply use -forceProcessingAtSize: to set the target output resolution for the filter, and the image will be resampled for that new size.
GPUImageSharpenFilter: Sharpens the image
GPUImageUnsharpMaskFilter: Applies an unsharp mask
GPUImageGaussianBlurFilter: A hardware-optimized, variable-radius Gaussian blur
GPUImageBoxBlurFilter: A hardware-optimized, variable-radius box blur
GPUImageSingleComponentGaussianBlurFilter: A modification of the GPUImageGaussianBlurFilter that operates only on the red component
GPUImageGaussianSelectiveBlurFilter: A Gaussian blur that preserves focus within a circular region
GPUImageGaussianBlurPositionFilter: The inverse of the GPUImageGaussianSelectiveBlurFilter, applying the blur only within a certain circle
GPUImageiOSBlurFilter: An attempt to replicate the background blur used on iOS 7 in places like the control center.
GPUImageMedianFilter: Takes the median value of the three color components, over a 3x3 area
GPUImageBilateralFilter: A bilateral blur, which tries to blur similar color values while preserving sharp edges
GPUImageTiltShiftFilter: A simulated tilt shift lens effect
GPUImage3x3ConvolutionFilter: Runs a 3x3 convolution kernel against the image
GPUImageSobelEdgeDetectionFilter: Sobel edge detection, with edges highlighted in white
GPUImagePrewittEdgeDetectionFilter: Prewitt edge detection, with edges highlighted in white
GPUImageThresholdEdgeDetectionFilter: Performs Sobel edge detection, but applies a threshold instead of giving gradual strength values
GPUImageCannyEdgeDetectionFilter: This uses the full Canny process to highlight one-pixel-wide edges
GPUImageHarrisCornerDetectionFilter: Runs the Harris corner detection algorithm on an input image, and produces an image with those corner points as white pixels and everything else black. The cornersDetectedBlock can be set, and you will be provided with a list of corners (in normalized 0..1 X, Y coordinates) within that callback for whatever additional operations you want to perform.
GPUImageNobleCornerDetectionFilter: Runs the Noble variant on the Harris corner detector. It behaves as described above for the Harris detector.
GPUImageShiTomasiCornerDetectionFilter: Runs the Shi-Tomasi feature detector. It behaves as described above for the Harris detector.
GPUImageNonMaximumSuppressionFilter: Currently used only as part of the Harris corner detection filter, this will sample a 1-pixel box around each pixel and determine if the center pixel’s red channel is the maximum in that area. If it is, it stays. If not, it is set to 0 for all color components.
GPUImageXYDerivativeFilter: An internal component within the Harris corner detection filter, this calculates the squared difference between the pixels to the left and right of this one, the squared difference of the pixels above and below this one, and the product of those two differences.
GPUImageCrosshairGenerator: This draws a series of crosshairs on an image, most often used for identifying machine vision features. It does not take in a standard image like other filters, but a series of points in its -renderCrosshairsFromArray:count: method, which does the actual drawing. You will need to force this filter to render at the particular output size you need.
GPUImageDilationFilter: This performs an image dilation operation, where the maximum intensity of the red channel in a rectangular neighborhood is used for the intensity of this pixel. The radius of the rectangular area to sample over is specified on initialization, with a range of 1-4 pixels. This is intended for use with grayscale images, and it expands bright regions.
GPUImageRGBDilationFilter: This is the same as the GPUImageDilationFilter, except that this acts on all color channels, not just the red channel.
GPUImageErosionFilter: This performs an image erosion operation, where the minimum intensity of the red channel in a rectangular neighborhood is used for the intensity of this pixel. The radius of the rectangular area to sample over is specified on initialization, with a range of 1-4 pixels. This is intended for use with grayscale images, and it expands dark regions.
GPUImageRGBErosionFilter: This is the same as the GPUImageErosionFilter, except that this acts on all color channels, not just the red channel.
GPUImageOpeningFilter: This performs an erosion on the red channel of an image, followed by a dilation of the same radius. The radius is set on initialization, with a range of 1-4 pixels. This filters out smaller bright regions.
GPUImageRGBOpeningFilter: This is the same as the GPUImageOpeningFilter, except that this acts on all color channels, not just the red channel.
GPUImageClosingFilter: This performs a dilation on the red channel of an image, followed by an erosion of the same radius. The radius is set on initialization, with a range of 1-4 pixels. This filters out smaller dark regions.
GPUImageRGBClosingFilter: This is the same as the GPUImageClosingFilter, except that this acts on all color channels, not just the red channel.
GPUImageLocalBinaryPatternFilter: This performs a comparison of intensity of the red channel of the 8 surrounding pixels and that of the central one, encoding the comparison results in a bit string that becomes this pixel intensity. The least-significant bit is the top-right comparison, going counterclockwise to end at the right comparison as the most significant bit.
GPUImageLowPassFilter: This applies a low pass filter to incoming video frames. This basically accumulates a weighted rolling average of previous frames with the current ones as they come in. This can be used to denoise video, add motion blur, or be used to create a high pass filter.
GPUImageHighPassFilter: This applies a high pass filter to incoming video frames. This is the inverse of the low pass filter, showing the difference between the current frame and the weighted rolling average of previous ones. This is most useful for motion detection.
GPUImageMotionDetector: This is a motion detector based on a high-pass filter. You set the motionDetectionBlock and on every incoming frame it will give you the centroid of any detected movement in the scene (in normalized X,Y coordinates) as well as an intensity of motion for the scene.
GPUImageHoughTransformLineDetector: Detects lines in the image using a Hough transform into parallel coordinate space. This approach is based entirely on the PC lines process developed by the Graph@FIT research group at the Brno University of Technology and described in their publications: M. Dubská, J. Havel, and A. Herout. Real-Time Detection of Lines using Parallel Coordinates and OpenGL. Proceedings of SCCG 2011, Bratislava, SK, p. 7 (http://medusa.fit.vutbr.cz/public/data/papers/2011-SCCG-Dubska-Real-Time-Line-Detection-Using-PC-and-OpenGL.pdf) and M. Dubská, J. Havel, and A. Herout. PClines — Line detection using parallel coordinates. 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), p. 1489- 1494 (http://medusa.fit.vutbr.cz/public/data/papers/2011-CVPR-Dubska-PClines.pdf).
GPUImageLineGenerator: A helper class that generates lines which can overlay the scene. The color of these lines can be adjusted using -setLineColorRed:green:blue:
GPUImageMotionBlurFilter: Applies a directional motion blur to an image
GPUImageZoomBlurFilter: Applies a zoom blur radiating from a central point in the image
GPUImageChromaKeyBlendFilter: Selectively replaces a color in the first image with the second image
GPUImageDissolveBlendFilter: Applies a dissolve blend of two images
GPUImageMultiplyBlendFilter: Applies a multiply blend of two images
GPUImageAddBlendFilter: Applies an additive blend of two images
GPUImageSubtractBlendFilter: Applies a subtractive blend of two images
GPUImageDivideBlendFilter: Applies a division blend of two images
GPUImageOverlayBlendFilter: Applies an overlay blend of two images
GPUImageDarkenBlendFilter: Blends two images by taking the minimum value of each color component between the images
GPUImageLightenBlendFilter: Blends two images by taking the maximum value of each color component between the images
GPUImageColorBurnBlendFilter: Applies a color burn blend of two images
GPUImageColorDodgeBlendFilter: Applies a color dodge blend of two images
GPUImageScreenBlendFilter: Applies a screen blend of two images
GPUImageExclusionBlendFilter: Applies an exclusion blend of two images
GPUImageDifferenceBlendFilter: Applies a difference blend of two images
GPUImageHardLightBlendFilter: Applies a hard light blend of two images
GPUImageSoftLightBlendFilter: Applies a soft light blend of two images
GPUImageAlphaBlendFilter: Blends the second image over the first, based on the second’s alpha channel
GPUImageSourceOverBlendFilter: Applies a source over blend of two images
GPUImageNormalBlendFilter: Applies a normal blend of two images
GPUImageColorBlendFilter: Applies a color blend of two images
GPUImageHueBlendFilter: Applies a hue blend of two images
GPUImageSaturationBlendFilter: Applies a saturation blend of two images
GPUImageLuminosityBlendFilter: Applies a luminosity blend of two images
GPUImageLinearBurnBlendFilter: Applies a linear burn blend of two images
GPUImagePoissonBlendFilter: Applies a Poisson blend of two images
GPUImageMaskFilter: Masks one image using another
GPUImagePixellateFilter: Applies a pixellation effect on an image or video
GPUImagePolarPixellateFilter: Applies a pixellation effect on an image or video, based on polar coordinates instead of Cartesian ones
GPUImagePolkaDotFilter: Breaks an image up into colored dots within a regular grid
GPUImageHalftoneFilter: Applies a halftone effect to an image, like news print
GPUImageCrosshatchFilter: This converts an image into a black-and-white crosshatch pattern
GPUImageSketchFilter: Converts video to look like a sketch. This is just the Sobel edge detection filter with the colors inverted
GPUImageThresholdSketchFilter: Same as the sketch filter, only the edges are thresholded instead of being grayscale
GPUImageToonFilter: This uses Sobel edge detection to place a black border around objects, and then it quantizes the colors present in the image to give a cartoon-like quality to the image.
GPUImageSmoothToonFilter: This uses a similar process as the GPUImageToonFilter, only it precedes the toon effect with a Gaussian blur to smooth out noise.
GPUImageEmbossFilter: Applies an embossing effect on the image
GPUImagePosterizeFilter: This reduces the color dynamic range into the number of steps specified, leading to a cartoon-like simple shading of the image.
GPUImageSwirlFilter: Creates a swirl distortion on the image
GPUImageBulgeDistortionFilter: Creates a bulge distortion on the image
GPUImagePinchDistortionFilter: Creates a pinch distortion of the image
GPUImageStretchDistortionFilter: Creates a stretch distortion of the image
GPUImageSphereRefractionFilter: Simulates the refraction through a glass sphere
GPUImageGlassSphereFilter: Same as the GPUImageSphereRefractionFilter, only the image is not inverted and there’s a little bit of frosting at the edges of the glass
GPUImageVignetteFilter: Performs a vignetting effect, fading out the image at the edges
GPUImageKuwaharaFilter: Kuwahara image abstraction, drawn from the work of Kyprianidis et al. in their publication "Anisotropic Kuwahara Filtering on the GPU" within the GPU Pro collection. This produces an oil-painting-like image, but it is extremely computationally expensive, so it can take seconds to render a frame on an iPad 2. This might be best used for still images.
GPUImageKuwaharaRadius3Filter: A modified version of the Kuwahara filter, optimized to work over just a radius of three pixels
GPUImagePerlinNoiseFilter: Generates an image full of Perlin noise
GPUImageCGAColorspaceFilter: Simulates the colorspace of a CGA monitor
GPUImageMosaicFilter: This filter takes an input tileset; the tiles must ascend in luminance. It looks at the input image and replaces each display tile with an input tile according to the luminance of that tile. The idea was to replicate the ASCII video filters seen in other apps, but the tileset can be anything.
GPUImageJFAVoronoiFilter: Generates a Voronoi map, for use in a later stage.
GPUImageVoronoiConsumerFilter: Takes in the Voronoi map, and uses that to filter an incoming image.
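As an example of the two-input setup described for GPUImageLookupFilter above, a minimal sketch, assuming your edited lookup.png has been added to the application bundle (the source image name is hypothetical):
UIImage *inputImage = [UIImage imageNamed:@"sample.jpg"]; // hypothetical image name
GPUImagePicture *sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage];
GPUImagePicture *lookupPicture = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"lookup.png"]];
GPUImageLookupFilter *lookupFilter = [[GPUImageLookupFilter alloc] init];
[sourcePicture addTarget:lookupFilter]; // first input: the image whose colors are remapped
[lookupPicture addTarget:lookupFilter]; // second input: the lookup table
[lookupFilter useNextFrameForImageCapture];
[sourcePicture processImage];
[lookupPicture processImage];
UIImage *remappedImage = [lookupFilter imageFromCurrentFramebuffer];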
You can also easily write your own custom filters using the C-like OpenGL Shading Language, as described above.
Several sample applications are bundled with the framework source. Most are compatible with both iPhone and iPad-class devices. They attempt to show off various aspects of the framework and should be used as the best examples of the API while the framework is under development. These include:
SimpleImageFilter: A bundled JPEG image is loaded into the application at launch, a filter is applied to it, and the result is rendered to the screen. Additionally, this sample shows two ways of taking in an image, filtering it, and saving it to disk.
SimpleVideoFilter: A pixellate filter is applied to a live video stream, with a UISlider control that lets you adjust the pixel size on the live video.
SimpleVideoFileFilter: A movie file is loaded from disk, an unsharp mask filter is applied to it, and the filtered result is re-encoded as another movie.
MultiViewFilterExample: From a single camera feed, four views are populated with realtime filters applied to the camera feed. One is just the straight camera video, one is a preprogrammed sepia tone, and two are custom filters based on shader programs.
FilterShowcase: This demonstrates every filter supplied with GPUImage.
BenchmarkSuite: This is used to test the performance of the overall framework by testing it against CPU-bound routines and Core Image. Benchmarks involving still images and video are run against all three, with results displayed in-application.
CubeExample: This demonstrates the ability of GPUImage to interact with OpenGL ES rendering. Frames are captured from the camera, a sepia filter is applied to them, and then they are fed into a texture to be applied to the face of a cube you can rotate with your finger. This cube in turn is rendered to a texture-backed framebuffer object, and that texture is fed back into GPUImage to have a pixellation filter applied to it before rendering to screen.
In other words, the path of this application is camera -> sepia tone filter -> cube -> pixellation filter -> display.
ColorObjectTracking: A version of my ColorTracking example from http://www.sunsetlakesoftware.com/2010/10/22/gpu-accelerated-video-processing-mac-and-ios ported across to use GPUImage, this application uses color in a scene to track objects from a live camera feed. The four views you can switch between include the raw camera feed, the camera feed with pixels matching the color threshold in white, the processed video where positions are encoded as colors within the pixels passing the threshold test, and finally the live video feed with a dot that tracks the selected color. Tapping the screen changes the color to track to match the color of the pixels under your finger. Tapping and dragging on the screen makes the color threshold more or less forgiving. This is most obvious on the second, color thresholding view.
Currently, all processing for the color averaging in the last step is done on the CPU, so this part is extremely slow.