1. Frame buffer: generally, the block of memory holding the image currently being rendered. Rendering can target the display screen, a file, a single frame of an AVI, or a texture.
The frame buffer is the memory of the graphics display device, which means the image is displayed on your screen.
OpenGL does not render (draw) these primitives directly on the screen. Instead, rendering is done in a buffer, which is later swapped to the screen. We refer to these two buffers as the front (the screen) and back color buffers. By default, OpenGL commands are rendered into the back buffer, and when you call glutSwapBuffers (or your operating system–specific buffer swap function), the front and back buffers are swapped so that you can see the rendering results. You can, however, render directly into the front buffer if you want (see glDrawBuffer).
When you do single-buffered rendering, it is important to call either glFlush or glFinish whenever you want to see the results actually drawn to screen. A buffer swap implicitly performs a flush of the pipeline and waits for rendering to complete before the swap actually occurs.
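A minimal sketch of this double-buffered pattern using GLUT; the window title and the empty display routine are placeholders:

#include <GL/glut.h>

/* Render into the back buffer, then swap it to the front.  The swap
   implicitly flushes the pipeline, so no explicit glFlush is needed. */
void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    /* ... issue rendering commands here ... */
    glutSwapBuffers();   /* the back buffer becomes visible */
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);  /* request front and back buffers */
    glutCreateWindow("double buffering");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}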
2. In 3D, the horizontal and vertical scaling factors must be kept in proportion, or the image will be distorted. In general, this ratio should match the aspect ratio (width to height) of the window in which the image is displayed.
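A sketch of the usual way to keep the two in step, assuming a GLUT-style reshape callback; the 45-degree field of view and the near/far planes are arbitrary choices:

#include <GL/gl.h>
#include <GL/glu.h>

/* Registered with glutReshapeFunc; called whenever the window resizes. */
void reshape(int width, int height)
{
    if (height == 0) height = 1;          /* avoid dividing by zero */
    glViewport(0, 0, width, height);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    /* The projection's aspect ratio tracks the window's, so the image
       is not stretched in either direction. */
    gluPerspective(45.0, (GLdouble)width / (GLdouble)height, 1.0, 100.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}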
3. object space = modeling space = local space (model transformation)
camera space = eye space
clip space = canonical view volume space (clip transformation)
4. Two principal tasks are required to create an image of a three-dimensional scene: modeling and rendering. The modeling task generates a model, which is the description of an object that is going to be used by the graphics system. Models must be created for every object in a scene; they should accurately capture the geometric shape and appearance of the object. Some or all of this task commonly occurs when the application is being developed, by creating and storing model descriptions as part of the application’s data.
The second task, rendering, takes models as input and generates pixel values for the final image. OpenGL is principally concerned with object rendering; it does not provide explicit support for creating object models. The model input data is left to the application to provide. The OpenGL architecture is focused primarily on rendering polygonal models; it doesn’t directly support more complex object descriptions, such as implicit surfaces.
To provide accurate rendering of a model’s appearance or surface shading, the modeler may also have to determine color values, shading normals, and texture coordinates for the model’s vertices and faces.
5. To smoothly shade an object, a given vertex normal should be used by all polygons that share that vertex. Ideally, this vertex normal is the same as the surface normal at the corresponding point on the original surface. However, if the true surface normal isn’t available, the simplest way to approximate one is to add all the (normalized) normals from the common facets and then renormalize the result (Gouraud, 1971). This provides reasonable results for surfaces that are fairly smooth, but does not look good for surfaces with sharp edges.
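A sketch of this averaging scheme; the Vec3/Triangle mesh layout is made up for illustration:

#include <math.h>

typedef struct { float x, y, z; } Vec3;
typedef struct { int v[3]; } Triangle;   /* indices into the vertex array */

static Vec3 v3sub(Vec3 a, Vec3 b) { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static Vec3 v3cross(Vec3 a, Vec3 b) {
    Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}
static Vec3 v3normalize(Vec3 v) {
    float len = sqrtf(v.x*v.x + v.y*v.y + v.z*v.z);
    if (len > 0.0f) { v.x /= len; v.y /= len; v.z /= len; }
    return v;
}

/* For each vertex, sum the (normalized) normals of every facet sharing
   it, then renormalize the sum (Gouraud, 1971). */
void smooth_vertex_normals(const Vec3 *pos, int nverts,
                           const Triangle *tris, int ntris, Vec3 *out)
{
    for (int i = 0; i < nverts; i++) { Vec3 z = { 0, 0, 0 }; out[i] = z; }
    for (int t = 0; t < ntris; t++) {
        Vec3 n = v3normalize(v3cross(v3sub(pos[tris[t].v[1]], pos[tris[t].v[0]]),
                                     v3sub(pos[tris[t].v[2]], pos[tris[t].v[0]])));
        for (int k = 0; k < 3; k++) {      /* accumulate onto each shared vertex */
            out[tris[t].v[k]].x += n.x;
            out[tris[t].v[k]].y += n.y;
            out[tris[t].v[k]].z += n.z;
        }
    }
    for (int i = 0; i < nverts; i++) out[i] = v3normalize(out[i]);
}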
6. Since the polygon winding may be used to cull back- or front-facing triangles, for performance reasons it is important that models be made consistent: a polygon wound inconsistently with its neighbors should have its vertex order reversed. A good way to accomplish this is to find all common edges and verify that neighboring polygons traverse each shared edge in opposite order.
To ensure that the rewound model is oriented properly (i.e., all polygons are wound so that their front faces are on the outside surface of the object), the algorithm begins by choosing and properly orienting the seed polygon. One way to do this is to find the geometric center of the object: compute the object’s bounding box, then compute its midpoint. Next, select a vertex that is the maximum distance from the center point and compute a (normalized) out vector from the center point to this vertex. One of the polygons using that vertex is chosen as the seed. Compute the normal of the seed polygon, then compute the dot product of the normal with the out vector. A positive result indicates that the seed is oriented correctly; a negative result indicates that the polygon’s normal faces inward. If the seed polygon is backward, reverse its winding before using it to rewind the rest of the model.
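A sketch of just the seed-orientation test from the paragraph above; the names are hypothetical:

typedef struct { float x, y, z; } Vec3;

static float v3dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* 'center' is the midpoint of the object's bounding box; 'far_vertex'
   is the vertex farthest from it; 'seed_normal' is the seed polygon's
   facet normal.  A positive dot product means the seed faces outward. */
int seed_faces_outward(Vec3 seed_normal, Vec3 center, Vec3 far_vertex)
{
    Vec3 out = { far_vertex.x - center.x,
                 far_vertex.y - center.y,
                 far_vertex.z - center.z };
    return v3dot(seed_normal, out) > 0.0f;
}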
7. Vertex buffer objects
In OpenGL 1.5, vertex buffer objects were added to the specification to enable the same server-side placement optimizations that are used with display lists. Vertex buffer objects allow the application to allocate vertex data storage that is managed by the OpenGL implementation and can be allocated from accelerator memory. The application can store vertex data to the buffer using an explicit transfer command (glBufferData) or by mapping the buffer (glMapBuffer). The vertex buffer data can also be examined by the application, allowing dynamic modification of the data, though access may be slower if the buffer storage resides in accelerator memory. Having dynamic read-write access allows geometric data to be modified each frame, without requiring the application to maintain a separate copy of the data or explicitly copy it to and from the accelerator. Vertex buffer objects are used with the vertex array drawing commands by binding a vertex buffer object to the appropriate array binding point (vertex, color, normal, texture coordinate) using the array pointer commands (for example, glNormalPointer). When an array has a bound buffer object, the array pointer is interpreted relative to the buffer object’s storage rather than as an application memory address.
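A minimal sketch of this usage with the OpenGL 1.5 entry points (error handling omitted; the triangle data is made up, and the headers are assumed to expose the 1.5 buffer API):

#include <GL/gl.h>

static const GLfloat verts[] = {
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
     0.0f,  1.0f, 0.0f,
};
static GLuint vbo;

void init_vbo(void)
{
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    /* Explicit transfer of vertex data into GL-managed storage. */
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
}

void draw_vbo(void)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    /* With a buffer bound, the "pointer" is a byte offset into the
       buffer object, not an application memory address. */
    glVertexPointer(3, GL_FLOAT, 0, (const GLvoid *)0);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}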
8. The OpenGL transformation pipeline can be thought of as a series of Cartesian coordinate spaces connected by transformations that can be directly set by the application. Five spaces are used: object space, which starts with the application’s coordinates; eye space, where the scene is assembled; clip space, which defines the geometry that will be visible in the scene; NDC space, the canonical space resulting from perspective division; and window space, which maps to the framebuffer’s pixel locations.
The pipeline begins with texture, vertex, and light position coordinates, along with normal vectors, sent down from the application. These untransformed values are said to be in object space. If the application has enabled the generation of object space texture coordinates, they are created here from untransformed vertex positions.
The modelview matrix is typically used to assemble a series of objects into a coherent scene viewed from a particular vantage point.
An important use of the modelview matrix is modifying the parameters of OpenGL light sources. When a light position is issued using the glLight() command, the position or direction of the light is transformed by the current modelview matrix before being stored. The transformed position is used in the lighting computations until it’s updated with a new call to glLight().
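A sketch showing the consequence: issuing glLightfv before or after the viewing transform fixes the light to the eye or to the world, respectively (the light numbers and positions are arbitrary):

#include <GL/gl.h>
#include <GL/glu.h>

void place_lights(void)
{
    static const GLfloat headlight[] = { 0.0f, 0.0f,  0.0f, 1.0f };
    static const GLfloat world_pos[] = { 10.0f, 10.0f, 0.0f, 1.0f };

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    /* Modelview is identity: the light stays at the eye ("headlight"). */
    glLightfv(GL_LIGHT0, GL_POSITION, headlight);

    gluLookAt(0.0, 0.0, 5.0,  0.0, 0.0, 0.0,  0.0, 1.0, 0.0);
    /* Modelview now holds the viewing transform: this light is fixed
       at (10, 10, 0) in world coordinates. */
    glLightfv(GL_LIGHT1, GL_POSITION, world_pos);
}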
The eye space coordinate system is where object lighting is applied and eye-space texture coordinate generation occurs. OpenGL makes certain assumptions about eye space. The viewer position is defined to be at the origin of the eye-space coordinate system. The direction of view is assumed to be the negative z-axis, and the viewer’s up direction is the positive y-axis.
Normals are consumed by the pipeline in eye space. If lighting is enabled, they are used by the lighting equation—along with eye position and light positions—to modify the current vertex color. The projection transform transforms the remaining vertex and texture coordinates into clip space. If the projection transform has perspective elements in it, the w values of the transformed vertices are modified.
If new vertices are generated as a result of clipping, the new vertices will have texture coordinates and colors interpolated to match the new vertex positions. The exact shape of the view volume depends on the type of projection transform; a perspective transformation results in a frustum (a pyramid with the tip cut off), while an orthographic projection creates a parallelepiped volume.
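For comparison, setting up the two view-volume shapes (the volume bounds here are arbitrary):

#include <GL/gl.h>

void set_projection(int perspective)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    if (perspective)
        glFrustum(-1.0, 1.0, -1.0, 1.0, 1.0, 100.0);  /* frustum-shaped volume */
    else
        glOrtho(-1.0, 1.0, -1.0, 1.0, 1.0, 100.0);    /* parallelepiped volume */
    glMatrixMode(GL_MODELVIEW);
}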
9. Render: Rendering is the act of taking a geometric description of a three-dimensional object and turning it into an image of that object onscreen.
10. Perspective: Perspective refers to the angles between lines that lend the illusion of three dimensions.
11. You expect the front of an object to obscure the back of the object from view. For solid surfaces, we call this hidden surface removal.
12. This technique of applying an image to a polygon to supply additional detail is called texture mapping. The image you supply is called a texture, and the individual elements of the texture are called texels. Finally, the process of stretching or compressing the texels over the surface of an object is called filtering.
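A sketch of choosing the filters used when texels are stretched (magnified) or compressed (minified); it assumes a 2D texture is already bound:

#include <GL/gl.h>

void set_texture_filtering(void)
{
    /* Minification: the texture is compressed (many texels per pixel). */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    /* Magnification: the texture is stretched (one texel spans pixels). */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}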
13. Blending is the combination of colors or objects on the screen. By varying the amount each object is blended with the scene, you can make objects look transparent, so that you see both the object and what is behind it (such as glass or a ghost image).
14. By carefully blending the lines with the background color, you can eliminate the jagged edges and give the lines a smooth appearance. This blending technique is called antialiasing. Simply put, the colors of the pixels along an object’s edges and on either side of them are blended, and the blended pixels replace the originals, softening the object’s outline and eliminating the jaggies.
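A sketch of the classic blend-based line antialiasing setup in fixed-function OpenGL:

#include <GL/gl.h>

void enable_line_smoothing(void)
{
    glEnable(GL_BLEND);
    /* Blend line edges with the background by coverage-derived alpha. */
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_LINE_SMOOTH);
    glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
}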
15. With both immediate mode and retained mode, new commands have no effect on rendering commands that have already been executed.
16. A window is measured physically in terms of pixels. Before you can start plotting points, lines, and shapes in a window, you must tell OpenGL how to translate specified coordinate pairs into screen coordinates. You do this by specifying the region of Cartesian space that occupies the window; this region is known as the clipping region.
17. Viewports: Mapping Drawing Coordinates to Window Coordinates. Rarely will your clipping area’s width and height exactly match the width and height of the window in pixels. The coordinate system must therefore be mapped from logical Cartesian coordinates to physical screen pixel coordinates. This mapping is specified by a setting known as the viewport. The viewport is the region within the window’s client area that is used for drawing the clipping area; it simply maps the clipping area to a region of the window. Usually, the viewport is defined as the entire window, but this is not strictly necessary; for instance, you might want to draw only in the lower half of the window.
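A sketch of a reshape handler that maps a fixed clipping area onto only the lower half of the window, as mentioned above (the clipping-area bounds are arbitrary):

#include <GL/gl.h>
#include <GL/glu.h>

void reshape_lower_half(int width, int height)
{
    /* Draw only into the lower half of the window's client area. */
    glViewport(0, 0, width, height / 2);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    /* The 2D clipping area; the viewport maps it to the pixels above. */
    gluOrtho2D(-100.0, 100.0, -100.0, 100.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}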
18. The Vertex—A Position in Space. A vertex is nothing more than a coordinate in 2D or 3D space.
19. Primitives are one- or two-dimensional entities or surfaces such as points, lines, and polygons (a flat, multisided shape) that are assembled in 3D space to create 3D objects.
20. OpenGL is a procedural rather than a descriptive graphics API. Instead of describing the scene and how it should appear, the programmer actually prescribes the steps necessary to achieve a certain appearance or effect. These “steps” involve calls to the many OpenGL commands. These commands are used to draw graphics primitives such as points, lines, and polygons in three dimensions. In addition, OpenGL supports lighting and shading, texture mapping, blending, transparency, animation, and many other special effects and capabilities.
21. OpenGL data types