Yes, I'm finally starting to write about the GPUImage framework! First up: working through the official introduction. Even with translation tools I'll have to chew through it, so it looks like I'll be getting up early to study English again...
I know most of you probably won't have the patience to read the introduction, but to really understand a framework, you should. So let CC do that work for you~
If I've mistranslated anything, feel free to call it out~
To improve my English reading skills, I'll be writing up every English document I read in this format.
GPUImage download link
First, an introduction to GPUImage!
The GPUImage framework is a BSD-licensed iOS library that lets you apply GPU-accelerated filters and other effects to images, live camera video, and movies. In comparison to Core Image (part of iOS 5.0), GPUImage allows you to write your own custom filters, supports deployment to iOS 4.0, and has a simpler interface. However, it currently lacks some of the more advanced features of Core Image, such as facial detection.
For massively parallel operations like processing images or live video frames, GPUs have some significant performance advantages over CPUs. On an iPhone 4, a simple image filter can be over 100 times faster to perform on the GPU than an equivalent CPU-based filter.
However, running custom filters on the GPU requires a lot of code to set up and maintain an OpenGL ES 2.0 rendering target for these filters. I created a sample project to do this:
Sample project link
and found that there was a lot of boilerplate code I had to write in its creation. Therefore, I put together this framework that encapsulates a lot of the common tasks you'll encounter when processing images and video and made it so that you don't need to care about the OpenGL ES 2.0 underpinnings.
This framework compares favorably to Core Image when handling video, taking only 2.5 ms on an iPhone 4 to upload a frame from the camera, apply a gamma filter, and display, versus 106 ms for the same operation using Core Image. CPU-based processing takes 460 ms, making GPUImage 40X faster than Core Image for this operation on this hardware, and 184X faster than CPU-bound processing. On an iPhone 4S, GPUImage is only 4X faster than Core Image for this case, and 102X faster than CPU-bound processing. However, for more complex operations like Gaussian blurs at larger radii, Core Image currently outpaces GPUImage.
Technical requirements
OpenGL ES 2.0: Applications using this will not run on the original iPhone, iPhone 3G, and 1st and 2nd generation iPod touches
iOS 4.1 as a deployment target (4.0 didn't have some extensions needed for movie reading). iOS 4.3 is needed as a deployment target if you wish to show live video previews when taking a still photo.
iOS 5.0 SDK to build
Devices must have a camera to use camera-related functionality (obviously)
The framework uses automatic reference counting (ARC), but should support projects using both ARC and manual reference counting if added as a subproject as explained below. For manual reference counting applications targeting iOS 4.x, you'll need to add -fobjc-arc to the Other Linker Flags for your application project.
General architecture
GPUImage uses OpenGL ES 2.0 shaders to perform image and video manipulation much faster than could be done in CPU-bound routines. However, it hides the complexity of interacting with the OpenGL ES API in a simplified Objective-C interface. This interface lets you define input sources for images and video, attach filters in a chain, and send the resulting processed image or video to the screen, to a UIImage, or to a movie on disk.
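To make that concrete, here is a minimal sketch of the simplest path, filtering a single still image into a UIImage. The asset name is made up, and the readback method name follows the era of the API this introduction describes, so verify it against the framework version you are using.

#import <GPUImage/GPUImage.h>

// Minimal sketch: still image -> sepia filter -> UIImage.
// "sample.jpg" is a hypothetical asset name.
UIImage *inputImage = [UIImage imageNamed:@"sample.jpg"];

GPUImagePicture *stillSource = [[GPUImagePicture alloc] initWithImage:inputImage];
GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];

[stillSource addTarget:sepiaFilter];  // attach the filter as a target of the source
[stillSource processImage];           // upload the image as a texture and run the chain

// Read the processed texture back out as a UIImage
// (method name from this era of the API; newer releases may differ).
UIImage *filteredImage = [sepiaFilter imageFromCurrentlyProcessedOutput];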
Images or frames of video are uploaded from source objects, which are subclasses of GPUImageOutput. These include GPUImageVideoCamera (for live video from an iOS camera), GPUImageStillCamera (for taking photos with the camera), GPUImagePicture (for still images), and GPUImageMovie (for movies). Source objects upload still image frames to OpenGL ES as textures, then hand those textures off to the next objects in the processing chain.
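As a quick illustration of one of these sources, the sketch below uses GPUImageMovie to feed frames from a bundled movie file through a filter and on to a view. The file name is hypothetical, error handling is omitted, and it is assumed to run inside a view controller.

// Hedged sketch: a movie file as the source of a processing chain.
// "clip.m4v" is a hypothetical bundled file.
NSURL *movieURL = [[NSBundle mainBundle] URLForResource:@"clip" withExtension:@"m4v"];
GPUImageMovie *movieSource = [[GPUImageMovie alloc] initWithURL:movieURL];

GPUImageGrayscaleFilter *grayFilter = [[GPUImageGrayscaleFilter alloc] init];
GPUImageView *movieView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
[self.view addSubview:movieView];

[movieSource addTarget:grayFilter];  // movie frames flow into the filter...
[grayFilter addTarget:movieView];    // ...and the filtered frames on to the view

[movieSource startProcessing];       // begin decoding and uploading frames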
Filters and other subsequent elements in the chain conform to the GPUImageInput protocol, which lets them take in the supplied or processed texture from the previous link in the chain and do something with it. Objects one step further down the chain are considered targets, and processing can be branched by adding multiple targets to a single output or filter.
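Branching in practice looks like the sketch below: a single camera output is added as the source of two different filters, each rendering into its own view. The two views, sepiaView and pixellateView, are assumed to be existing GPUImageView instances.

GPUImageVideoCamera *camera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];

GPUImageSepiaFilter *sepia = [[GPUImageSepiaFilter alloc] init];
GPUImagePixellateFilter *pixellate = [[GPUImagePixellateFilter alloc] init];

[camera addTarget:sepia];      // branch 1: camera -> sepia
[camera addTarget:pixellate];  // branch 2: camera -> pixellate

[sepia addTarget:sepiaView];          // assumed existing GPUImageView
[pixellate addTarget:pixellateView];  // assumed existing GPUImageView

[camera startCameraCapture];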
For example, an application that takes in live video from the camera, converts that video to a sepia tone, then displays the video onscreen would set up a chain looking something like the following:
GPUImageVideoCamera -> GPUImageSepiaFilter -> GPUImageView
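In code, that chain comes out as the sketch below. It assumes it runs in a view controller whose view contains a GPUImageView named filteredVideoView.

GPUImageVideoCamera *videoCamera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];

[videoCamera addTarget:sepiaFilter];        // camera -> sepia
[sepiaFilter addTarget:filteredVideoView];  // sepia -> on-screen view

[videoCamera startCameraCapture];           // start streaming frames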
If you enjoyed the read, please give it a like, friends. That way you'll be notified when the article updates~