One of the best ways to update an application with a tired two-dimensional (2D) graphical user interface (GUI) is to update its legacy look and feel with some three-dimensional (3D) effects to get more of an Apple* iPhone*–like user experience. By exploiting the Khronos* OpenGL* ES accelerator on Intel® Atom™ processors, you can make such a change without degrading the responsiveness of the UI. But rewriting a 2D application from scratch to use OpenGL* ES is usually not practical. Instead, update your 2D application to use a combination of 2D and 3D rendering by making OpenGL* ES coexist with the legacy 2D application programming interface (API) you already use. This way, your 2D GUI can still be rendered by the legacy 2D API, but then be animated in 3D with transition effects that OpenGL* ES handles well, like rotation, scaling, blending, and lighting effects.
Even when a new application is built on OpenGL* ES from the start, 2D objects—such as GUI widgets and text fonts—are often required that OpenGL* ES does not provide, so mixing 2D and 3D APIs makes more sense than you might think. In fact, the combination of 2D and 3D rendering is powerful, especially when using an application processor that offers accelerators for both, like Intel® Atom™ processors. The trick is to make them play together nicely.
2D and 3D are really different paradigms with important architectural design trade-offs that developers must weigh to work around the limitations of OpenGL* ES on embedded systems, where efficient use of limited resources is essential for a responsive user experience. This article details and contrasts several proven solutions for combining OpenGL* ES with legacy 2D APIs that work on most embedded systems, including Linux* and Google Android. The architectural trade-offs of each approach are explained, and some important pitfalls are identified. These concepts work with either OpenGL* ES 1.1 or 2.0 on embedded Linux systems, with or without a windowing system, such as X11, Qt, or Android. Some code examples are specific to Android, which supports OpenGL* ES through both its framework API and the Native Development Kit (NDK). The framework API supports OpenGL* ES 2.0 beginning with Android 2.2.
Typical legacy applications build 2D screen images piece by piece using BitBlt operations through a 2D API, which may be accelerated by a BitBlt engine. BitBlts typically involve raster operations, transparency, brushes, clipping rectangles, and other features that do not map well to OpenGL* ES. Even worse, BitBlts are typically layered heavily. There may be hundreds of BitBlts to construct a typical screen in a 2D GUI. Also, a typical screen update usually only renders the pixels that have actually changed. In contrast, OpenGL* ES always renders screen frames whole. If your application relies on a 2D API to render GUI widgets such as buttons, scroll bars, icons, and text fonts, don’t plan on moving to OpenGL* ES exclusively, because OpenGL* ES doesn’t provide those elements.
Some examples of the most widely used legacy 2D APIs on Linux* systems are Cairo, GTK+, Motif, FreeType, Pango, DirectFB, and Qt Frameworks—although there are many more. These APIs are used for rendering scalable vector graphics (SVGs), BitBlts, text fonts, GUI widget components, windows, or some combination. All of these APIs produce 2D images that OpenGL* ES can animate on a Linux* or Android platform, but a mechanism is needed to exchange images between these 2D and 3D APIs efficiently.
Think of your legacy application as producing 2D images in which each screen update through the 2D API produces a new 2D image. These images can then be copied into an OpenGL* ES texture to allow OpenGL* ES to display it on the screen. OpenGL* ES can then animate the movement of the entire image as a texture by applying a transform to create a transition effect from one screen image to the next. This animated transition effect can be a rotation, scale up or down, translate, fade in or out, or any combination. The geometry for the texture to achieve these effects can be as simple as a pair of triangles to form a rectangle that matches the shape of the display. The time duration of animated effects is typically just a fraction of a second—just long enough for the user to visualize the animation and provide a 3D experience but not long enough to impede the UI. When animations are complete, the cycle repeats, with the 2D API providing the next texture image to load. OpenGL* ES is efficient at animating textures after they have been loaded, because the 3D accelerator actually does most of the work.
The code example in Listing 1 shows the major steps required to initialize OpenGL* ES 2.0 to perform an animated transition (scale up and rotate) of a 2D texture image. First, a GL Shading Language ES shader program is selected for use by the ShaderHandle, and the locations of its uniforms are retrieved. Next, two matrices are created for a simple perspective projection: the projection matrix and the model view matrix. Then, a texture is created and loaded with a 2D image using the conventional glTexImage2D() method. The image will be mapped onto a pair of triangles that form a rectangle, so the pointers to the vertex and texture coordinate attributes are passed to the shader with glVertexAttribPointer(). These same arrays for the triangle pair are reused for every frame of the animation, but the position of the rectangle on the display is recalculated in each iteration of the loop.
The loop begins by clearing the frame buffer to black. Then, the fModelViewMatrix is recalculated for the next frame of the animation and passed to the shader with glUniformMatrix4fv(). The same fProjectionMatrix is used for every frame. Finally, the call to glDrawArrays() initiates the rendering of the texture image onto the triangle pair by the OpenGL* ES accelerator. The call to eglSwapBuffers() makes the new rendered frame visible on the display.
Listing 1. Example of an animated texture transition
#include "GLES2/gl2.h"
// Define the vertices for a rectangle comprised of two triangles.
const GLfloat fPositions[] =
{
-1.0f, -1.0f,
1.0f, -1.0f,
1.0f, 1.0f,
-1.0f, 1.0f,
};
// Define the coordinates for mapping the texture onto the triangle pair.
const GLfloat fTexCoords[] =
{
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f,
0.0f, 0.0f
};
// Initialize interface with the shader program.
GLuint ShaderHandle; // Assumes a shader program has already been compiled and linked
glUseProgram(ShaderHandle);
GLint ModelViewMatrixLocation = glGetUniformLocation(ShaderHandle, "ModelViewMatrix");
GLint ProjectionMatrixLocation = glGetUniformLocation(ShaderHandle, "ProjectionMatrix");
GLint TextureLocation = glGetUniformLocation(ShaderHandle, "Texture");
// Initialize the projection and model view matrices.
GLfloat fProjectionMatrix[16];
GLfloat fModelViewMatrix[16];
Identity(fProjectionMatrix);
Frustum(fProjectionMatrix, -0.5f, 0.5f, -0.5f, 0.5f, 1.0f, 100.0f);
Identity(fModelViewMatrix);
glUniformMatrix4fv(ProjectionMatrixLocation, 1, 0, fProjectionMatrix);
glUniformMatrix4fv(ModelViewMatrixLocation, 1, 0, fModelViewMatrix);
// Create and load a texture image the conventional way.
GLuint TextureHandle;
glGenTextures(1, &TextureHandle);
glBindTexture(GL_TEXTURE_2D, TextureHandle);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, Width, Height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pImage);
glUniform1i(TextureLocation, 0);
// Initialize pointers to the vertices and texture coordinates of the triangle fan.
// VERTEX and TEXCOORD are the attribute locations bound in the shader program.
glEnableVertexAttribArray(VERTEX);
glEnableVertexAttribArray(TEXCOORD);
glVertexAttribPointer(VERTEX, 2, GL_FLOAT, 0, 0, &fPositions[0]);
glVertexAttribPointer(TEXCOORD, 2, GL_FLOAT, 0, 0, &fTexCoords[0]);
// Animation loop which scales and rotates the texture and maps it to the triangle pair.
for (float fAnimationTime = 0.0f; fAnimationTime < 1.0f; fAnimationTime += 0.01f)
{
// Clear the frame buffer to black.
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Create matrix to scale and rotate the texture image.
Identity(fModelViewMatrix);
Translate(fModelViewMatrix, 0.0f, 0.0f, -2.0f);
Scale(fModelViewMatrix, fAnimationTime, fAnimationTime, 1.0f);
Rotate(fModelViewMatrix, fAnimationTime * 360.0f, 0.0f, 0.0f, 1.0f);
glUniformMatrix4fv(ModelViewMatrixLocation, 1, 0, fModelViewMatrix);
// Render and display the new frame.
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
eglSwapBuffers(EglDisplayHandle, EglSurfaceHandle);
}
When adding OpenGL* ES to a legacy 2D application, the main point of contention is ownership of the frame buffer. The frame buffer is special, because it is the memory area that is actually shown on the display (as opposed to off-screen buffers). OpenGL* ES expects to acquire the frame buffer from the EGL* driver, which acquires pointers to the frame buffer from the Linux* frame buffer device. Your 2D API is probably rendering to that same frame buffer memory allocated from that same Linux frame buffer device. That is the conflict. Android is a good example of this: It owns the frame buffer. Android has 2D features that are already integrated with OpenGL* ES, which is great if you are writing a new app from scratch. Otherwise, the challenge is to make your legacy 2D API run without owning the frame buffer.
Note that the term frame buffer is really a convenient over-simplification, because there are typically several frame buffers to prevent screen tearing artifacts, plus depth buffers, and so on. The actual number of frame buffers available typically depends on the amount of memory allocated in your system for that purpose when the Linux kernel boots. I conveniently refer to all of these buffers as the frame buffer; regardless of how many frame buffers you actually have, most graphics APIs consider all of them to be their private property to use at the exclusion of all other graphics APIs, 2D or otherwise. Making OpenGL* ES coexist with a 2D API requires resolving this basic conflict of frame buffer ownership.
The solution is either to make your 2D API share the real frame buffer with OpenGL* ES or to redirect the rendering of one of the two APIs into an off-screen (fake) frame buffer, which the other API can then read and use. OpenGL* ES can be made to share the frame buffer. So, the first question to ask is, will your 2D API share the frame buffer? In other words, can your application’s legacy 2D rendering be redirected into an off-screen buffer that OpenGL* ES can then read and use as a texture image? If not, your choice of rendering architecture is limited to the first option in the next section.
There are three solutions to this problem, each with its associated trade-offs. They are:
OpenGL* ES Rendering Through an Existing 2D GUI
You can configure OpenGL* ES never to render directly to the real frame buffer but rather to an off-screen buffer (Figure 1). Then, the 2D API must copy each frame to the frame buffer with a BitBlt operation. This approach is usually the easiest to implement, because the legacy GUI retains ownership of the frame buffer and operates without modification. The disadvantage of this approach is that the copy operation slows the 3D rendering somewhat. The loss in 3D performance should be acceptable on systems with a BitBlt accelerator, because that accelerator can be used to perform the operation. Note that this is how OpenGL* ES works in typical windowing system environments like X11 or Qt when it is restricted to rendering into a window that is smaller than the display. The off-screen buffer to which OpenGL* ES renders will either be a frame buffer object (FBO), a pixel buffer, or a pixmap.
Figure 1. 3D rendering through a 2D API
Rendering an Existing 2D GUI Through OpenGL* ES
The opposite solution is to give OpenGL* ES ownership of the frame buffer and adapt the legacy 2D GUI to render through OpenGL* ES (Figure 2). This means that every time the GUI alters a 2D image, the image must be updated on the screen by copying it into an OpenGL* ES texture. Obviously, this approach reduces the performance of the 2D GUI but maximizes 3D rendering performance. This option represents the design when a legacy 2D GUI is ported to Android, because OpenGL* ES has ownership of the frame buffer and provides acceleration of both 2D and 3D. With this design, it is critical that you use fast texture-loading capabilities, such as the EGL* image extension, to minimize the loss of performance, because loading texture images into OpenGL* ES is inherently a slow operation.
Figure 2. 2D rendering through OpenGL ES
Using a Shared Frame Buffer for 2D and 3D Rendering
It is possible to have the best 2D and 3D rendering performance without sacrificing either: The trick is to allow the two APIs to share direct access to the frame buffer. This approach requires configuring OpenGL* ES and the 2D API for the same frame buffer (Figure 3). If OpenGL* ES is running in a different execution thread than the 2D GUI, you must use a mutex to control which API has ownership of the frame buffer at any particular time to prevent one from rendering over the other.
Figure 3. Sharing the frame buffer for rendering
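When the two renderers run in separate threads, the mutex that arbitrates frame buffer ownership can be as simple as the following sketch. The function names are illustrative, and pthreads is assumed.

```c
#include <pthread.h>

static pthread_mutex_t FrameBufferMutex = PTHREAD_MUTEX_INITIALIZER;

// Each renderer must hold the mutex for the full duration of its frame,
// from the first write to the frame buffer through the buffer flip.
void AcquireFrameBuffer(void)
{
    pthread_mutex_lock(&FrameBufferMutex);
}

void ReleaseFrameBuffer(void)
{
    pthread_mutex_unlock(&FrameBufferMutex);
}
```

The OpenGL* ES thread brackets its glDrawArrays() and eglSwapBuffers() calls with the same acquire/release pair, so neither API ever renders over a frame the other is still producing.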
This approach also requires paying particular attention to how the two APIs advance the frame buffer display sequence. With OpenGL* ES, the frame buffer typically consists of three actual frames, so that the 3D accelerator can always render the next frame while the previous frame is displayed without tearing artifacts. OpenGL* ES applications advance the frame buffer display sequence by calling eglSwapBuffers(), which in turn calls the Linux* frame buffer device through the FBIOPAN_DISPLAY ioctl(). For a 2D API to share the same set of frame buffers, it too must call the same frame buffer device. It is critical that the presentation order of the frame buffers be maintained when rendering is switched between the 2D and 3D APIs, or an API will periodically render to a buffer that is currently displayed (the front buffer), which causes ugly rendering artifacts. The solution is always to read the current value of the yoffset parameter to determine which frame buffer is currently displayed before rendering the next frame. The EGL* driver’s eglSwapBuffers() method already does this for OpenGL* ES rendering, so your 2D rendering must do the same, as shown in Listing 2.
Listing 2. Example of advancing the frame buffer display sequence
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/fb.h>
struct fb_var_screeninfo varinfo;
// Open the linux frame buffer device.
int fbDeviceHandle = open("/dev/fb0", O_RDWR);
// Get the variable screen information from the fb device.
ioctl(fbDeviceHandle, FBIOGET_VSCREENINFO, &varinfo);
// Determine which framebuffer is currently displayed by the EGL.
int FrameIndex = varinfo.yoffset / FrameHeight;
// Advance to the next framebuffer.
if (++FrameIndex > 2)
FrameIndex = 0;
// Flip displayed framebuffer to display new rendering.
varinfo.xoffset = 0;
varinfo.yoffset = FrameIndex * FrameHeight;
ioctl(fbDeviceHandle, FBIOPAN_DISPLAY, &varinfo);
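The buffer-advance arithmetic from Listing 2 can be isolated into a small helper, which makes the wrap-around logic easy to verify on its own. The name NextYOffset is illustrative.

```c
// Given the yoffset currently displayed, return the yoffset of the next
// buffer to render into, wrapping around after frameCount buffers.
// frameHeight is the height of one frame buffer in lines.
int NextYOffset(int currentYOffset, int frameHeight, int frameCount)
{
    int index = currentYOffset / frameHeight;
    if (++index >= frameCount)
        index = 0;
    return index * frameHeight;
}
```

With triple buffering on a 480-line display, the sequence of offsets cycles 0, 480, 960, 0, and so on.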
Exact terminology is important here, because the Khronos Group has defined several ways for OpenGL* ES to render into off-screen buffers. The most widely used solution is the FBO with an attached texture. Pixel buffers (or pbuffers) are obsolete and have performance problems. Pixmaps are useful if your 2D API is compatible with your EGL* driver, which is usually not the case. Android does support pixmaps, however, under the name native GraphicBuffers; they are the preferred way to exchange 2D images between Android and OpenGL* ES.
Attaching a texture to an FBO is typically done to implement render-to-texture techniques, where the rendered output from OpenGL* ES is reused as a texture for the finished scene, such as a reflection or mirror effect. But it is also useful for passing rendered 3D frames to your 2D API, because you can retrieve the address of the texture map using the EGL* image extension. If you can obtain the physical address of a texture buffer, you can use an accelerated 2D API to BitBlt rendered frames between the 2D and 3D APIs quickly. But even a memcpy() is still typically faster than using glTexImage2D() to load textures.
Another approach worth mentioning is to use an FBO with an attached render buffer and the glReadPixels() method to copy the rendered frames. However, the performance of glReadPixels() will be poor unless it is accelerated by your OpenGL* ES driver.
Typically, it’s a good idea to use compression and mipmaps when creating textures for OpenGL* ES. However, that advice applies to static images; both are far too expensive for dynamic images. The texture compression algorithms implemented in 3D accelerators are asymmetrical, meaning that it is much more compute intensive to compress an image than to decompress the same image. So, for good performance when loading dynamic images, use an uncompressed common RGB format without mipmaps, such as RGB_565 (16 bit), RGB_888 (24 bit), or ARGB_8888 (32 bit).
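If your 2D API produces 24-bit RGB_888 images but you want to load compact 16-bit RGB_565 textures (with GL_UNSIGNED_SHORT_5_6_5), the conversion is straightforward bit packing. A minimal sketch, assuming tightly packed source pixels; the function names are illustrative.

```c
#include <stdint.h>
#include <stddef.h>

// Pack one 8-bit-per-channel pixel into 16-bit RGB_565.
static inline uint16_t PackRGB565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

// Convert a tightly packed RGB_888 image into an RGB_565 buffer.
void ConvertRGB888ToRGB565(const uint8_t *src, uint16_t *dst, size_t pixelCount)
{
    for (size_t i = 0; i < pixelCount; i++)
    {
        dst[i] = PackRGB565(src[0], src[1], src[2]);
        src += 3;
    }
}
```

The dst buffer can then be passed directly to glTexImage2D() with format GL_RGB and type GL_UNSIGNED_SHORT_5_6_5.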
A common problem is that your 2D API might use a nonstandard pixel format that is not supported directly by OpenGL* ES. You can usually handle this issue for images coming into OpenGL* ES 2.0 as textures by writing a custom pixel shader that swaps the red, green, blue, or alpha pixel color components as needed, as shown in Listing 3.
Listing 3. Example fragment shader to convert pixel formats
precision mediump float;
uniform sampler2D Texture;
varying vec2 TexCoord;
void main()
{
vec3 color_rgb = texture2D(Texture, TexCoord).bgr; // Swap the red and blue components
gl_FragColor = vec4(color_rgb, 1.0); // Append an opaque alpha component
}
However, custom shader code cannot change the format with which OpenGL* ES renders its output, so if that is being directed into an FBO to be read by your 2D API, it must be able to handle one of the output formats that the OpenGL* ES driver supports—either RGB_565 (16 bit) or ARGB_8888 (32 bit).
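When a shader swizzle is not available — for example, under OpenGL* ES 1.1, which has no programmable pipeline — the same component swap can be done on the CPU before the image is loaded as a texture. A sketch for 32-bit pixels that swaps the low and third bytes of each pixel, converting between BGRA and RGBA component orders; the function name is illustrative.

```c
#include <stdint.h>
#include <stddef.h>

// Swap the low and third bytes of each 32-bit pixel in place.
// This converts between BGRA and RGBA component orders in either direction.
void SwapRedBlue32(uint32_t *pixels, size_t pixelCount)
{
    for (size_t i = 0; i < pixelCount; i++)
    {
        uint32_t p = pixels[i];
        pixels[i] = (p & 0xFF00FF00u)          // keep the second and fourth bytes
                  | ((p >> 16) & 0x000000FFu)  // third byte moves down
                  | ((p & 0x000000FFu) << 16); // low byte moves up
    }
}
```

This costs one pass over the image, so it is best reserved for images that change infrequently.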
Figure 4 illustrates the preferred mechanisms for exchanging 2D images between a 2D API and OpenGL* ES. An EGL* image is allocated and associated with each texture so that pointers to the texture buffers can be obtained. These pointers can then be used to transfer rendered images between the APIs with either a software copy or an accelerated BitBlt. OpenGL* ES can render into a texture that is attached to an FBO. This is an off-screen buffer that can also be read through its associated EGL* image.
Figure 4. Exchanging images between a 2D API and OpenGL* ES
The conventional way to copy an image into a texture is with either the glTexImage2D() or glTexSubImage2D() methods, but these methods are slow because of how they convert the format of the image data as it is copied. They are really intended for loading static images, not dynamic ones. Moving images between OpenGL* ES textures and another graphics API quickly requires direct access to the memory in which the texture image is stored. Ideally, the image should be copied by an accelerated 2D BitBlt, but that requires the physical address of the image. Otherwise, you can use a memcpy() instead, which only requires the virtual address of the image.
The EGL* image extension is an extension to the EGL* standard defined by the Khronos Group that provides the virtual or physical addresses of an OpenGL* ES texture. With these addresses, images can be copied to or from OpenGL* ES textures quickly. This technique is so fast that it is possible to stream uncompressed video into OpenGL* ES, but doing so typically requires converting the pixels from the YUV to RGB color space, which is beyond the scope of this article.
The official name of the EGL* image extension is GL_OES_EGL_image. It is widely supported on most platforms, including Android. To confirm which extensions are available on any platform, use the functions provided in Listing 4 to return strings that list all of the available extensions by name for your OpenGL* ES and EGL* drivers.
Listing 4. Checking for available OpenGL* ES and EGL* extensions
glGetString(GL_EXTENSIONS);
eglQueryString(eglGetCurrentDisplay(), EGL_EXTENSIONS);
The header file eglext.h defines the names of the rendering surface types that the EGL* and OpenGL* ES drivers for your platform support. Table 1 provides a summary of the EGL* image surface types that are available for Android. Note that Android lists support for the EGL_KHR_image_pixmap extension, but it is actually the EGL_NATIVE_BUFFER_ANDROID surface type that you must use, not EGL_NATIVE_PIXMAP_KHR.
Table 1. Surface types for EGL* images on Android
Extension | Surface type
EGL_NATIVE_PIXMAP_KHR | Pixmap surface (not available on Android)
EGL_GL_TEXTURE_2D_KHR | Conventional 2D texture
EGL_GL_TEXTURE_3D_KHR | Conventional 3D texture
EGL_GL_RENDERBUFFER_KHR | Render buffer surface for glReadPixels()
EGL_NATIVE_BUFFER_ANDROID | For Android’s native graphics API
The code in Listing 5 shows how to use the EGL* image extension in two ways. First, on the Android platform, a native GraphicBuffer surface is created and locked. This buffer can be accessed for rendering while it is locked. When this buffer is unlocked, it can be imported into a new EGL* image with the ClientBufferAddress parameter to eglCreateImageKHR(). This EGL* image is then bound to GL_TEXTURE_2D with glEGLImageTargetTexture2DOES(), to be used as any texture can be used in OpenGL* ES. This is accomplished without ever copying the image, because the native GraphicBuffer and the OpenGL* ES texture actually share the same image data. This example demonstrates how images can be exchanged quickly between OpenGL* ES and Android or any 2D API on the Android platform. Note that the GraphicBuffer class is only available in the Android framework API, not the NDK.
If you are not using Android, you can still import images into OpenGL* ES textures in the same way. Set the ClientBufferAddress to point to your image data, and set the SurfaceType as EGL_GL_TEXTURE_2D_KHR. Refer to your eglext.h include file for a complete list of the surface types that are available on your platform. Use eglQuerySurface() to obtain the address, pitch (stride), and origin of the new EGL* image buffer after it is created. Be sure to use eglGetError() after each call to the EGL* to check for any returned errors.
Listing 5. Example of using the EGL* image extension with Android
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2ext.h>
#ifdef ANDROID
GraphicBuffer * pGraphicBuffer = new GraphicBuffer(ImageWidth, ImageHeight,
    PIXEL_FORMAT_RGB_565,
    GraphicBuffer::USAGE_SW_WRITE_OFTEN | GraphicBuffer::USAGE_HW_TEXTURE);
// Lock the buffer to get a pointer
unsigned char * pBitmap = NULL;
pGraphicBuffer->lock(GraphicBuffer::USAGE_SW_WRITE_OFTEN, (void **)&pBitmap);
// Write 2D image to pBitmap
// Unlock to allow OpenGL ES to use it
pGraphicBuffer->unlock();
EGLClientBuffer ClientBufferAddress = pGraphicBuffer->getNativeBuffer();
EGLint SurfaceType = EGL_NATIVE_BUFFER_ANDROID;
#else
EGLint SurfaceType = EGL_GL_TEXTURE_2D_KHR;
#endif
// Make an EGL Image at the same address of the native client buffer
EGLDisplay eglDisplayHandle = eglGetDisplay(EGL_DEFAULT_DISPLAY);
// Create an EGL Image with these attributes
EGLint eglImageAttributes[] = {EGL_WIDTH, ImageWidth, EGL_HEIGHT, ImageHeight,
    EGL_MATCH_FORMAT_KHR, EGL_FORMAT_RGB_565_KHR,
    EGL_IMAGE_PRESERVED_KHR, EGL_TRUE, EGL_NONE};
EGLImageKHR eglImageHandle = eglCreateImageKHR(eglDisplayHandle, EGL_NO_CONTEXT, SurfaceType, ClientBufferAddress, eglImageAttributes);
// Create a texture and bind it to GL_TEXTURE_2D
GLuint TextureHandle;
glGenTextures(1, &TextureHandle);
glBindTexture(GL_TEXTURE_2D, TextureHandle);
// Attach the EGL Image to the same texture
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, eglImageHandle);
// Get the address and pitch (stride) of the new texture image
EGLint BitmapAddress, BitmapPitch, BitmapOrigin;
eglQuerySurface(eglDisplayHandle, eglImageHandle, EGL_BITMAP_POINTER_KHR, &BitmapAddress);
eglQuerySurface(eglDisplayHandle, eglImageHandle, EGL_BITMAP_PITCH_KHR, &BitmapPitch);
eglQuerySurface(eglDisplayHandle, eglImageHandle, EGL_BITMAP_ORIGIN_KHR, &BitmapOrigin);
// Check for errors after each call to the EGL
if (eglGetError() != EGL_SUCCESS)
break;
// Delete the EGL Image to free the memory when done
eglDestroyImageKHR(eglDisplayHandle, eglImageHandle);
One of the best ways to update an application with a tired 2D GUI is to exploit the accelerated OpenGL* ES features of Android on the Intel® Atom™ platform. Even though 2D and 3D are really different paradigms, the combination of the two is powerful. The trick is to make them cooperate by either sharing the frame buffer or sharing images through textures and the EGL* image extension. Use of this extension with OpenGL* ES is essential for achieving a good user experience, because the conventional method of loading textures with glTexImage2D() is too slow for dynamic images. Fortunately, this extension is well supported on most embedded platforms today, including Android.
Clay D. Montgomery is a leading developer of drivers and apps for OpenGL* on embedded systems. His experience includes the design of graphics accelerator hardware, graphics drivers, APIs, and OpenGL* applications across many platforms at STB Systems, VLSI Technology, Philips Semiconductors, Nokia, Texas Instruments, AMX, and as an independent consultant. He was instrumental in the development of some of the first OpenGL* ES, OpenVG*, and SVG drivers and applications for the Freescale i.MX and TI OMAP platforms and the Vivante, AMD, and PowerVR graphics cores. He has developed and taught workshops on OpenGL* ES development on embedded Linux and represented several companies in the Khronos Group.