OpenGL SuperBible 7th Edition Excerpts



OpenGL SuperBible Study Notes
Chapter 5
Data

1. void glCreateBuffers(GLsizei n, GLuint* buffers);
buffers is the address of the variable or variables that will be used to store the names of the buffer objects.

2. void glBindBuffer(GLenum target, GLuint buffer);

3. The functions that are used to allocate memory using a buffer object are glBufferStorage() and glNamedBufferStorage(). Their prototypes are
void glBufferStorage(GLenum target, GLsizeiptr size, const void* data, GLbitfield flags);
void glNamedBufferStorage(GLuint buffer, GLsizeiptr size, const void* data, GLbitfield flags);
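
A minimal sketch combining items 1-3, assuming a current OpenGL 4.5 context (the size and flag choices are illustrative, not from the book):

GLuint buffer;
glCreateBuffers(1, &buffer);                  // reserve one buffer name
glBindBuffer(GL_ARRAY_BUFFER, buffer);        // bind it to a target
glNamedBufferStorage(buffer,                  // allocate its data store
                     1024 * 1024,             // one megabyte
                     NULL,                    // no initial data
                     GL_DYNAMIC_STORAGE_BIT); // allow later updates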

4. To be clear, the contents of the buffer object's data store can be changed, but its size or usage flags may not.

5. There are a handful of ways to get data into the buffer object.

6. Had we instead supplied a pointer to some data, that data would have been used to initialize the buffer object. Using this pointer, however, allows us to set only the initial data to be stored in the buffer.

7. void glBufferSubData(GLenum target, GLintptr offset, GLsizeiptr size, const GLvoid* data);
void glNamedBufferSubData(GLuint buffer, GLintptr offset, GLsizeiptr size, const void* data);
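
A short sketch of item 7, uploading three illustrative vertices to the start of the buffer allocated above:

static const float vertices[] = {
     0.25f, -0.25f, 0.5f,
    -0.25f, -0.25f, 0.5f,
     0.25f,  0.25f, 0.5f,
};
glNamedBufferSubData(buffer, 0, sizeof(vertices), vertices);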

8. void* glMapBuffer(GLenum target, GLenum access);
void* glMapNamedBuffer(GLuint buffer, GLenum access);
There are two versions: one that affects the buffer bound to one of the targets of the current context, and one that operates directly on a buffer whose name you specify.

9. If you map a buffer, you can simply read the contents of the file directly into the mapped buffer.

10. void* glMapBufferRange(GLenum target, GLintptr offset, GLsizeiptr length, GLbitfield access);
void* glMapNamedBufferRange(GLuint buffer, GLintptr offset, GLsizeiptr length, GLbitfield access);
These functions, rather than mapping the entire buffer object, map only a specific range of the buffer object.

11. However, because of the additional control and stronger contract provided by glMapBufferRange() and glMapNamedBufferRange(), it is generally preferred to call these functions rather than glMapBuffer() or glMapNamedBuffer().
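
A sketch of items 8-11, reusing the buffer and vertices from the sketches above; GL_MAP_INVALIDATE_RANGE_BIT is one reasonable flag choice, not the only one (memcpy requires <cstring>):

void* ptr = glMapNamedBufferRange(buffer, 0, sizeof(vertices),
                                  GL_MAP_WRITE_BIT |
                                  GL_MAP_INVALIDATE_RANGE_BIT);
memcpy(ptr, vertices, sizeof(vertices)); // write through the mapping
glUnmapNamedBuffer(buffer);              // release the mapping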


12. glClearBufferSubData -- glClearNamedBufferSubData
glCopyBufferSubData -- glCopyNamedBufferSubData
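
A sketch of item 12's named (DSA) forms; src and dst are hypothetical buffer names:

const GLuint zero = 0;
glClearNamedBufferSubData(dst, GL_R32UI, 0, 256,  // zero 256 bytes of dst
                          GL_RED_INTEGER, GL_UNSIGNED_INT, &zero);
glCopyNamedBufferSubData(src, dst, 0, 0, 256);    // then copy over them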

13. To tell OpenGL which buffer object our data is in and where in that buffer object the data resides, we use the glVertexArrayVertexBuffer() function to bind a buffer to one of the vertex buffer bindings. We use the glVertexArrayAttribFormat() function to describe the layout and format of the data, and finally we enable automatic filling of the attribute by calling glEnableVertexArrayAttrib().
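
A sketch of item 13 for a hypothetical VAO named vao and the buffer from above; the glVertexArrayAttribBinding call is made explicit here even though attribute 0 maps to binding 0 by default:

glVertexArrayVertexBuffer(vao, 0, buffer, 0, 3 * sizeof(float)); // binding 0
glVertexArrayAttribFormat(vao, 0, 3, GL_FLOAT, GL_FALSE, 0);     // layout/format
glVertexArrayAttribBinding(vao, 0, 0);   // route attribute 0 to binding 0
glEnableVertexArrayAttrib(vao, 0);       // enable automatic filling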

14. OpenGL allows you to combine a group of uniforms into a uniform block and store the whole block in a buffer object.

15. To tell OpenGL that you want to use the standard layout, you need to declare the uniform block with a layout qualifier.
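
A sketch of items 14-15: a uniform block declared with the std140 layout qualifier (block and member names are illustrative):

static const char* block_decl = R"(
    layout(std140) uniform TransformBlock
    {
        float scale;
        vec3  translation;
        mat4  projection;
    };
)";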




Chapter 4
Math for 3D Graphics

1. A vector is first, and most simply, a direction from the origin toward a point in space.

2. Normalizing a vector scales it such that its length becomes 1 and the vector is then said to be normalized.

3. The w coordinate is added to make the vector homogeneous but is typically set to 1.0.

4. The dot product between two (three-component) vectors returns a scalar (just one value) that is the cosine of the angle between the two vectors scaled by the product of their lengths.

5. The cross product between two vectors is a third vector that is perpendicular to the plane in which the first two vectors lie.
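
A sketch of items 2, 4, and 5 for three-component vectors, using a plain struct rather than the book's vmath library:

#include <cmath>

struct vec3 { float x, y, z; };

float dot(const vec3& a, const vec3& b)      // = |a||b|cos(angle)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

vec3 cross(const vec3& a, const vec3& b)     // perpendicular to both inputs
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

vec3 normalize(const vec3& v)                // scale length to 1
{
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}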

6. A scalar is just an ordinary single number used to represent a magnitude or a specific quantity.

7. Multiplying a point (represented by a vector) by a matrix (representing a transformation) yields a new transformed point (another vector).

8. We refer to the projection whenever we want to describe the type of transformation (orthographic or perspective) that occurs during vertex processing, but projection is only one of the types of transformations that occur in OpenGL.

9. Model space -- World space -- View space -- Clip space -- Normalized device coordinate (NDC) space -- window space

10. In object space, positions of vertices are interpreted relative to a local origin.

11. Once in world space, all objects exist in a common frame. Often, this is the space in which lighting and physics calculations are performed.

12. Clearly, if the resulting w component of a clip space coordinate is 1.0, then clip space and NDC space become identical.

13. Gimbal lock occurs when a rotation by one angle reorients one of the axes to be aligned with another of the axes.

14. A sequence of rotations can be represented by a series of quaternions multiplied together, producing a single resulting quaternion that encodes the whole lot in one go.

15. Once your vertices are in view space, we need to get them into clip space, which we do by applying our projection matrix, which may represent a perspective or orthographic projection.

16. Thus, the integer part of t determines the curve segment along which we are interpolating and the fractional part of t is used to interpolate along that segment.
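
A sketch of item 16 with an illustrative parameter value:

float t = 3.75f;                    // global curve parameter
int   segment = (int)t;             // segment index: 3
float f = t - (float)segment;       // local interpolation factor: 0.75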

Chapter 3
Following the Pipeline

1. In GLSL, the mechanism for getting data in and out of shaders is to declare global variables with the in and out storage qualifiers.

2. Vertex attributes are how vertex data is introduced into the OpenGL pipeline.

3. void glVertexAttrib4fv(GLuint index, const GLfloat* v);
the parameter index is used to reference the attribute and v is a pointer to the new data to put into the attribute.
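
A sketch of item 3, setting attribute 0 to an illustrative value:

GLfloat attrib[] = { 0.5f, 0.0f, 0.0f, 0.0f };
glVertexAttrib4fv(0, attrib);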

4. Anything you write to an output variable in one shader is sent to a similarly named variable declared with the in keyword in the subsequent stage.

5. To achieve this, we can group together a number of variables into an interface block.
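
A sketch of item 5 as it might appear in a vertex shader; the VS_OUT/vs_out names follow the book's convention:

static const char* vs_block = R"(
    out VS_OUT
    {
        vec4 color;   // picked up by a matching 'in' block downstream
    } vs_out;
)";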

6. Tessellation is the process of breaking a high-order primitive (which is known as a patch in OpenGL) into many smaller, simpler primitives such as triangles for rendering.

7. Logically, the tessellation phase sits directly after the vertex shading stage in the OpenGL pipeline and is made up of three parts: the tessellation control shader, the fixed-function tessellation engine, and the tessellation evaluation shader.

8. The tessellation control shader takes its input from the vertex shader and is primarily responsible for two things: the determination of the level of tessellation that will be sent to the tessellation engine, and the generation of data that will be sent to the tessellation evaluation shader that is run after tessellation has occurred.

9. void glPatchParameteri(GLenum pname, GLint value);
pname set to GL_PATCH_VERTICES and value set to the number of control points that will be used to construct each patch.
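
A sketch of item 9; three control points per patch is also the default:

glPatchParameteri(GL_PATCH_VERTICES, 3);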

10. That is, vertices are used as control points and the result of the vertex shader is passed in batches to the tessellation control shader as its input.

11. The output tessellation factors are written to the gl_TessLevelInner and gl_TessLevelOuter built-in output variables, whereas any other data that is passed down the pipeline is written to user-defined output variables (those declared using the out keyword, or the special built-in gl_out array) as normal.

12. The built-in variable gl_InvocationID is used as an index into the gl_in and gl_out arrays.
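
A minimal tessellation control shader sketch covering items 11-13; the constant tessellation factor of 5.0 is illustrative:

static const char* tcs_source = R"(
    #version 450 core
    layout (vertices = 3) out;
    void main(void)
    {
        if (gl_InvocationID == 0)      // set factors once per patch
        {
            gl_TessLevelInner[0] = 5.0;
            gl_TessLevelOuter[0] = 5.0;
            gl_TessLevelOuter[1] = 5.0;
            gl_TessLevelOuter[2] = 5.0;
        }
        gl_out[gl_InvocationID].gl_Position =
            gl_in[gl_InvocationID].gl_Position;
    }
)";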

13. Before the tessellation engine receives a patch, the tessellation control shader processes the incoming control points and sets tessellation factors that are used to break down the patch.

14. At the beginning of the shader is a layout qualifier that sets the tessellation mode.

15. The first is gl_TessCoord, which is the barycentric coordinate of the vertex generated by the tessellator.
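
A minimal tessellation evaluation shader sketch for items 14-15; the layout qualifier selects triangle mode, and gl_TessCoord weights the patch's three control points:

static const char* tes_source = R"(
    #version 450 core
    layout (triangles, equal_spacing, cw) in;
    void main(void)
    {
        gl_Position = (gl_TessCoord.x * gl_in[0].gl_Position) +
                      (gl_TessCoord.y * gl_in[1].gl_Position) +
                      (gl_TessCoord.z * gl_in[2].gl_Position);
    }
)";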

16. void glPolygonMode(GLenum face, GLenum mode);
The face parameter specifies which type of polygons we want to affect.
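
A sketch of item 16, drawing both sides of all polygons as wireframe:

glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);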

17. The geometry shader runs once per primitive and has access to all of the input vertex data for all of the vertices that make up the primitive being processed.

18. Geometry shaders, in contrast, include two functions -- EmitVertex() and EndPrimitive() -- that explicitly produce vertices that are sent to primitive assembly and rasterization.
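
A minimal geometry shader sketch for items 17-18, passing a triangle through unchanged:

static const char* gs_source = R"(
    #version 450 core
    layout (triangles) in;
    layout (triangle_strip, max_vertices = 3) out;
    void main(void)
    {
        for (int i = 0; i < gl_in.length(); i++)
        {
            gl_Position = gl_in[i].gl_Position;
            EmitVertex();      // explicitly produce one vertex
        }
        EndPrimitive();        // finish the output triangle
    }
)";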

19. The homogeneous coordinate system is used in projective geometry because much of the math ends up being simpler in homogeneous coordinate space than it does in regular Cartesian space.

20. After the projective division, the resulting position is in normalized device space.

21. void glViewport(GLint x, GLint y, GLsizei width, GLsizei height);
void glDepthRange(GLdouble nearVal, GLdouble farVal);

22. The sense of this computation can be reversed by calling glFrontFace() with dir set to either GL_CW or GL_CCW.

23. To turn on culling, call glEnable() with cap set to GL_CULL_FACE.

24. To change which types of triangles are culled, call glCullFace() with face set to GL_FRONT, GL_BACK, or GL_FRONT_AND_BACK.
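
A sketch of items 22-24; GL_CCW and GL_BACK are also the defaults:

glEnable(GL_CULL_FACE);   // turn culling on
glFrontFace(GL_CCW);      // counterclockwise winding is front-facing
glCullFace(GL_BACK);      // discard back-facing triangles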

25. Rasterization is the process of determining which fragments might be covered by a primitive such as a line or a triangle.

26. This stage is responsible for determining the color of each fragment before it is sent to the framebuffer for possible composition into the window.

27. In a real-world application, the fragment shader would normally be substantially more complex and be responsible for performing calculations related to lighting, applying materials, and even determining the depth of the fragment.

28. In short, OpenGL is capable of using a wide range of functions that take components of the output of your fragment shader and of the current content of the framebuffer and calculate new values that are written back to the framebuffer.

29. Each compute shader operates on a single unit of work known as a work item; these items are, in turn, collected together into small groups called local workgroups.

30. ARB extensions are an official part of OpenGL because they are approved by the OpenGL governing body, the Architecture Review Board (ARB).

31. const GLubyte* glGetStringi(GLenum name, GLuint index);
you should pass GL_EXTENSIONS as the name parameter, and a value between 0 and one less than the number of supported extensions in index.
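
A sketch of item 31, enumerating every supported extension string:

GLint num_extensions = 0;
glGetIntegerv(GL_NUM_EXTENSIONS, &num_extensions);
for (GLint i = 0; i < num_extensions; i++)
{
    const GLubyte* ext = glGetStringi(GL_EXTENSIONS, i);
    // ... compare ext against the extension name you need ...
}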

Chapter 2
Our First OpenGL Program

1. void glClearBufferfv(GLenum buffer, GLint drawBuffer, const GLfloat* value);
tells OpenGL to clear the buffer specified by its first parameter to the value specified in its third parameter.
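
A sketch of item 1, clearing the first color buffer to red:

static const GLfloat red[] = { 1.0f, 0.0f, 0.0f, 1.0f };
glClearBufferfv(GL_COLOR, 0, red);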

2. The source code for your shader is placed into a shader object and compiled, and then multiple shader objects can be linked together to form a program object.

3. All variables that start with gl_ are part of OpenGL and connect shaders to each other or to the various parts of fixed functionality in OpenGL.

4. glCreateShader -- glShaderSource -- glCompileShader -- glCreateProgram -- glAttachShader -- glLinkProgram -- glDeleteShader.
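
A condensed sketch of item 4's sequence; vs_src and fs_src are assumed to hold GLSL source strings:

GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vs_src, NULL);
glCompileShader(vs);

GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &fs_src, NULL);
glCompileShader(fs);

GLuint program = glCreateProgram();
glAttachShader(program, vs);
glAttachShader(program, fs);
glLinkProgram(program);

glDeleteShader(vs);   // shader objects can go once the program is linked
glDeleteShader(fs);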

5. One final thing that we need to do before we can draw anything is to create a vertex array object (VAO), which is an object that represents the vertex fetch stage of the OpenGL pipeline and is used to supply input to the vertex shader.

6. void glCreateVertexArrays(GLsizei n, GLuint* arrays);
void glBindVertexArray(GLuint array);
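
A sketch of items 5-6, creating a VAO and making it current:

GLuint vao;
glCreateVertexArrays(1, &vao);
glBindVertexArray(vao);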

7. void glPointSize(GLfloat size);
sets the diameter of the point in pixels to the value you specify in size.

8. The gl_VertexID input starts counting from the value given by the first parameter of glDrawArrays() and counts upward one vertex at a time for count vertices.
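
A sketch of item 8: here gl_VertexID runs from 0 (the first parameter) through 2 across the three vertices of one triangle:

glDrawArrays(GL_TRIANGLES, 0, 3);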

Chapter 1
Introduction

1. OpenGL is an interface that your application can use to access and control the graphics subsystem of the device on which it runs.

2. Through a combination of pipelining and parallelism, the incredible performance of modern graphics processors is realized.

3. The goal of OpenGL is to provide an abstraction layer between your application and the underlying graphics subsystem, which is often a hardware accelerator made up of one or more custom, high-performance processors with dedicated memory, display outputs, and so on.

4. Current GPUs consist of a large number of small programmable processors called shader cores that run mini-programs called shaders.

5. Vertex Fetch -> Vertex Shader -> Tessellation Control Shader -> Tessellation -> Tessellation Evaluation Shader -> Geometry Shader -> Rasterization -> Fragment Shader -> Framebuffer Operations.

6. The first is the modern, core profile, which removes a number of legacy features, leaving only those that are truly accelerated by current graphics hardware.

7. The fundamental unit of rendering in OpenGL is known as the primitive. OpenGL supports many types of primitives, but the three basic renderable primitive types are points, lines, and triangles.

8. The rasterizer is dedicated hardware that converts the three-dimensional representation of a triangle into a series of pixels that need to be drawn onto the screen.

9. The graphics pipeline is broken down into two major parts. The first part, often known as the front end, processes vertices and primitives, eventually forming them into the points, lines, and triangles that will be handed off to the rasterizer. This is known as primitive assembly. After going through the rasterizer, the geometry has been converted from what is essentially a vector representation into a large number of independent pixels. These are handed off to the back end, which includes depth and stencil testing, fragment shading, blending, and updating of the output image.
