with WebGL. That is, after retrieving the HTML element by using getElementById() and the id of the element, you write a message to the element in JavaScript.
In this sample program, you modify the following element to show a message such as "near: 0.0, far: 0.5":
12   <p id="nearFar">The near and far values are displayed here.</p>
This element is retrieved at line 27 in OrthoView.js using getElementById() as before. To retrieve it, you specify the string ('nearFar') that was bound to id at line 12 in the HTML file, as follows:
26 // Retrieve nearFar element
27 var nf = document.getElementById('nearFar');
Once you retrieve the element into the variable nf (actually, nf is a JavaScript object),
you just need to change the content of this element. This is straightforward and uses the
innerHTML property of the object. For example, if you write:
nf.innerHTML = 'Good Morning, Marisuke-san!';
You will see the message "Good Morning, Marisuke-san!" on the web page. You can also insert HTML tags in the message. For example, 'Good Morning, <b>Marisuke</b>-san!' will highlight "Marisuke" in bold.
In OrthoView.js, you use the following code to display the current near and far values. These values are stored in the global variables g_near and g_far declared at line 117. When displaying them, they are formatted using Math.round() as follows:
139 // Display the current near and far values
140 nf.innerHTML = 'near: ' + Math.round(g_near*100)/100 + ', far: ' +
➥Math.round(g_far*100)/100;
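Multiplying by 100 before rounding and dividing by 100 afterward keeps two decimal places. As a quick check (the values here are hypothetical, not from the sample program):

Math.round(0.123 * 100) / 100   // 0.12
Math.round(0.456 * 100) / 100   // 0.46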
The Processing Flow of the Vertex Shader
As you can see with the following code, the processing flow in the vertex shader is almost
the same as that in LookAtRotatedTriangles.js except that the uniform variable name
( u_ProjMatrix ) at line 6 was changed. This variable holds the matrix used to set the
viewing volume. So you just need to multiply the matrix ( u_ProjMatrix ) by the vertex
coordinates to set the viewing volume at line 9:
2 // Vertex shader program
3 var VSHADER_SOURCE =
...
6 'uniform mat4 u_ProjMatrix;\n' +
7 'varying vec4 v_Color;\n' +
8 'void main() {\n' +
9 ' gl_Position = u_ProjMatrix * a_Position;\n' +
10 ' v_Color = a_Color;\n' +
11 '}\n';
Line 62 registers the event handler for the arrow key press. Note that nf is passed as the last argument to the handler to allow it to access the <p> element. The event handler uses the key press to determine the new contents of the element, which are written in draw(), called from the handler:
61 // Register the event handler to be called on key press
62 document.onkeydown = function(ev) { keydown(ev, gl, n, u_ProjMatrix,
➥projMatrix, nf); };
The keydown() at line 121 identifies which arrow key is pressed and then modifies the
value of g_near and g_far before calling draw() at line 127. Line 117 defines g_near and
g_far , which are used by the setOrtho() method. These are defined as global variables
because they are used in both keydown() and draw() :
116 // The distances to the near and far clipping plane
117 var g_near = 0.0, g_far = 0.5;
118 function keydown(ev, gl, n, u_ProjMatrix, projMatrix, nf) {
119 switch(ev.keyCode) {
120 case 39: g_near += 0.01; break; // The right arrow key was pressed
...
123 case 40: g_far -= 0.01; break; // The down arrow key was pressed
124 default: return; // Prevent the unnecessary drawing
125 }
126
127 draw(gl, n, u_ProjMatrix, projMatrix, nf);
128 }
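The lines elided above (121 and 122) handle the remaining two arrow keys. A minimal sketch of the complete switch, assuming the left arrow (keyCode 37) decreases near and the up arrow (keyCode 38) increases far, might look like this (the two assumed cases are illustrative, not quoted from OrthoView.js):

function keydown(ev, gl, n, u_ProjMatrix, projMatrix, nf) {
  switch (ev.keyCode) {
    case 39: g_near += 0.01; break;  // right arrow
    case 37: g_near -= 0.01; break;  // left arrow (assumed)
    case 38: g_far  += 0.01; break;  // up arrow (assumed)
    case 40: g_far  -= 0.01; break;  // down arrow
    default: return;                 // skip unnecessary redrawing
  }
  draw(gl, n, u_ProjMatrix, projMatrix, nf);
}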
Let’s examine the function draw() . The processing flow of draw() , defined at line 130, is
the same as in LookAtTrianglesWithKeys.js except for changing the message on the web
page at line 140:
130 function draw(gl, n, u_ProjMatrix, projMatrix, nf) {
131 // Set the viewing volume
132 projMatrix.setOrtho(-1.0, 1.0, -1.0, 1.0, g_near, g_far);
133
134 // Set the projection matrix to u_ProjMatrix variable
135 gl.uniformMatrix4fv(u_ProjMatrix, false, projMatrix.elements);
...
139 // Display the current near and far values
140 nf.innerHTML = 'near: ' + Math.round(g_near * 100)/100 + ', far: ' +
➥Math.round(g_far*100)/100;
141
142 gl.drawArrays(gl.TRIANGLES, 0, n); // Draw the triangles
143 }
Line 132 calculates the matrix for the viewing volume ( projMatrix ) and passes it to
u_ProjMatrix at line 135. Line 140 displays the current near and far values on the web
page. Finally, at line 142, the triangles are drawn.
Changing Near or Far
When you run this program and increase the near value (right-arrow key), the display will
change, as shown in Figure 7.14 .
Figure 7.14 Increase the near value using the right arrow key
By default, near is 0.0, so all three triangles are displayed. Next, when you increase near
using the right arrow key, the blue triangle (the front triangle) disappears because the
viewing volume moves past it, as shown in Figure 7.15 . This result is shown as the middle
figure in Figure 7.14 .
Figure 7.15 The blue triangle went outside the viewing volume
Again, if you continue to increase near by pressing the right arrow key, when near becomes
larger than 0.2, the near plane moves past the yellow triangle, so it is outside the viewing
volume and disappears. This leaves only the green triangle (the right figure in Figure 7.14 ).
At this point, if you use the left arrow key to decrease near so it becomes less than 0.2, the
yellow triangle becomes visible again. Alternatively, if you keep on increasing near , the
green triangle will also disappear, leaving the black canvas.
As you can imagine, the behavior when you alter the far value is similar. As shown in
Figure 7.16 , when far becomes less than 0.4, the back triangle (the green one) will disap-
pear. Again, if you keep decreasing far , only the blue triangle will remain.
Figure 7.16 Decrease the far value using the down arrow key
This example should clarify the role of the viewing volume. Essentially, for any object you
want to display, you need to place it inside the viewing volume.
Restoring the Clipped Parts of the Triangles
(LookAtTrianglesWithKeys_ViewVolume.js)
In LookAtTrianglesWithKeys , when you kept pressing the arrow keys, part of the triangle
is clipped, as shown in Figure 7.17 . From the previous discussion, it’s clear this is because
some part went outside the viewing volume. In this section, you will modify the sample
program to display the triangle correctly by setting the appropriate viewing volume.
Figure 7.17 A part of the triangle is clipped.
As you can see from the figure, the corner of the triangle farthest from the eye point is clipped. Obviously, the far clipping plane is too close to the eye point, so you need to move the far clipping plane farther out than the current one. To achieve this, you can modify the arguments of the viewing volume so that left=-1.0, right=1.0, bottom=-1.0, top=1.0, near=0.0, and far=2.0.
You will use two matrices in this program: the matrix that sets the viewing volume (the
orthographic projection matrix), and the matrix that sets the eye point and the line of
sight (view matrix). Because setOrtho() sets the viewing volume from the eye point, you
need to set the position of the eye point and then set the viewing volume. Consequently,
you will multiply the view matrix by the vertex coordinates to get the vertex coordi-
nates, which are “viewed from the eye position” first, and then multiply the orthographic
projection matrix by the coordinates. You can calculate them as shown in Equation 7.3 .
Equation 7.3
〈orthographic projection matrix〉 × 〈view matrix〉 × 〈vertex coordinates〉
This can be implemented in the vertex shader, as shown in Listing 7.7 .
Listing 7.7 LookAtTrianglesWithKeys_ViewVolume.js
1 // LookAtTrianglesWithKeys_ViewVolume.js
2 // Vertex shader program
3 var VSHADER_SOURCE =
4 'attribute vec4 a_Position;\n' +
5 'attribute vec4 a_Color;\n' +
6 'uniform mat4 u_ViewMatrix;\n' +
7 'uniform mat4 u_ProjMatrix;\n' +
8 'varying vec4 v_Color;\n' +
9 'void main() {\n' +
10 ' gl_Position = u_ProjMatrix * u_ViewMatrix * a_Position;\n' +
11 ' v_Color = a_Color;\n' +
12 '}\n';
...
24 function main() {
...
51 // Get the storage locations of u_ViewMatrix and u_ProjMatrix
52 var u_ViewMatrix = gl.getUniformLocation(gl.program,'u_ViewMatrix');
53 var u_ProjMatrix = gl.getUniformLocation(gl.program,'u_ProjMatrix');
...
59 // Create the matrix to specify the view matrix
60 var viewMatrix = new Matrix4();
61 // Register the event handler to be called on key press
62 document.onkeydown = function(ev) { keydown(ev, gl, n, u_ViewMatrix,
➥viewMatrix); };
63
64 // Create the matrix to specify the viewing volume and pass it to u_ProjMatrix
65 var projMatrix = new Matrix4();
66 projMatrix.setOrtho(-1.0, 1.0, -1.0, 1.0, 0.0, 2.0);
67 gl.uniformMatrix4fv(u_ProjMatrix, false, projMatrix.elements);
68
69 draw(gl, n, u_ViewMatrix, viewMatrix); // Draw the triangles
70 }
Line 66 calculates the orthographic projection matrix ( projMatrix ) by modifying far from
1.0 to 2.0. The result matrix is passed to u_ProjMatrix in the vertex shader at line 67.
A uniform variable is used because the elements in the matrix are uniform for all vertex
coordinates. If you run this sample program and move the eye point as before, you can
see that the triangle no longer gets clipped (see Figure 7.18 ).
Figure 7.18 LookAtTrianglesWithKeys_ViewVolume
Experimenting with the Sample Program
As we explained in the section "Specify the Viewing Volume," if the aspect ratio of the <canvas> is different from that of the near clipping plane, distorted objects are displayed.
Let’s explore this. First, in OrthoView_halfSize (based on Listing 7.7 ), you reduce the
current size of the near clipping plane to half while keeping its aspect ratio:
projMatrix.setOrtho(-0.5, 0.5, -0.5, 0.5, 0, 0.5);
The result is shown on the left of Figure 7.19. As you can see, the triangles appear twice as large as those of the previous sample because the size of the <canvas> is the same as before. Note that the parts of the triangles outside the near clipping plane are clipped.
Figure 7.19 Modify the size of the near clipping plane
In OrthoView_halfWidth , you reduce only the width of the near clipping plane by chang-
ing the first two arguments in setOrtho() as follows:
projMatrix.setOrtho(-0.3, 0.3, -1.0, 1.0, 0.0, 0.5);
You can see the results on the right side of Figure 7.19. This is because the near clipping plane is horizontally reduced and then horizontally extended (and thus distorted) to fit the square-shaped <canvas> when the plane is displayed.
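A rough way to quantify the distortion (a sketch, assuming setOrtho() maps x from [left, right] and y from [bottom, top] onto the [-1, 1] clip-space range):

var xScale = 2 / (0.3 - (-0.3));   // ≈ 3.33, horizontal scale factor
var yScale = 2 / (1.0 - (-1.0));   // = 1.0, vertical scale factor
// x is scaled roughly 3.3 times more than y, so the triangles
// appear stretched horizontally on the square <canvas>.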
Specifying the Visible Range Using a Quadrangular Pyramid
Figure 7.20 shows a tree-lined road scene. In this picture, all the trees on the left and right
sides are approximately of the same height, but the farther back they are, the smaller
they look. Equally, the building in the distance appears smaller than the trees that are
closer to the viewer, even though the building is actually taller than the trees. This effect
of distant objects looking smaller gives the feeling of depth. Although our eyes perceive
reality in this way, it’s interesting to notice that children’s drawings rarely show this kind
of perspective.
Figure 7.20 Tree-lined road
In the case of the box-shaped viewing volume explained in the previous section, identi-
cally sized triangles are drawn the same size, regardless of their distance from the eye
point. To overcome this constraint, you can use the quadrangular pyramid viewing
volume, which allows you to give this sense of depth, as seen in Figure 7.20 .
Here you construct the sample program PerspectiveView , which sets a quadrangular
pyramid viewing volume that points along the negative z-axis from the eye point set at
(0, 0, 5). Figure 7.21 shows a screen shot of PerspectiveView and the location of each
triangle.
Figure 7.21 PerspectiveView; location of each triangle
As can be seen from the figure on the right, three identically sized triangles are positioned
on the right and left sides along the coordinate’s axes, in a way similar to the tree-lined
road. By using a quadrangular pyramid viewing volume, WebGL can automatically display
remote objects as if they are smaller, thus achieving the sense of depth. This is shown in
the left side of the figure.
To really notice the change in size, as in the real world, the objects need to be located at a substantial distance. For example, when looking at the box, to actually make the background area look smaller than the foreground area, the box needs to have considerable depth. So this time, you will use a slightly more distant position (0, 0, 5) than the default value (0, 0, 0) for the eye point.
Setting the Quadrangular Pyramid Viewing Volume
The quadrangular pyramid viewing volume is shaped as shown in Figure 7.22 . Just like
the box-shaped configuration, the viewing volume is set at the eye point along the line of
sight, and objects located between the far and near clipping planes are displayed. Objects
positioned outside the viewing volume are not shown, while those straddling the bound-
ary will only have parts located inside the viewing volume visible.
Figure 7.22 Quadrangular pyramid viewing volume
Regardless of whether it is a quadrangular pyramid or a box, you set the viewing volume
using matrices, but the arguments differ. The Matrix4 ’s method setPerspective() is used
to configure the quadrangular pyramid viewing volume.
Matrix4.setPerspective(fov, aspect, near, far)
Calculate the matrix (the perspective projection matrix) that defines the viewing volume specified by its arguments, and store it in Matrix4. However, the near value must be less than the far value.
Parameters    fov         Specifies the field of view (the angle formed by the top and bottom planes). It must be greater than 0.
              aspect      Specifies the aspect ratio of the near plane (width/height).
              near, far   Specify the distances to the near and far clipping planes along the line of sight (near > 0 and far > 0).
Return value  None
The matrix that sets the quadrangular pyramid viewing volume is called the perspective
projection matrix .
Note that the specification for the near plane is different from that of the box type: the second argument, aspect, represents the aspect ratio of the near plane. For example, if we set the height to 100 and the width to 200, the aspect ratio is 2.0.
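As a quick check of that arithmetic (the numbers are only for illustration):

var aspect = 200 / 100;   // width / height = 2.0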
The positioning of the triangles with regard to the viewing volume we are using is illus-
trated in Figure 7.23 . It is specified by near =1.0, far =100, aspect =1.0 (the same aspect ratio
as the canvas), and fov =30.0.
Figure 7.23 The positions of the triangles with respect to the quadrangular pyramid viewing
volume
The basic processing flow is similar to that of LookAtTrianglesWithKeys_ViewVolume.js in
the previous section. So let’s take a look at the sample program.
Sample Program (PerspectiveView.js)
The sample program is detailed in Listing 7.8 .
Listing 7.8 PerspectiveView.js
1 // PerspectiveView.js
2 // Vertex shader program
3 var VSHADER_SOURCE =
4 'attribute vec4 a_Position;\n' +
5 'attribute vec4 a_Color;\n' +
6 'uniform mat4 u_ViewMatrix;\n' +
7 'uniform mat4 u_ProjMatrix;\n' +
8 'varying vec4 v_Color;\n' +
9 'void main() {\n' +
10 ' gl_Position = u_ProjMatrix * u_ViewMatrix * a_Position;\n' +
11 ' v_Color = a_Color;\n' +
12 '}\n';
...
24 function main() {
...
41 // Set the vertex coordinates and color (blue triangle is in front)
42 var n = initVertexBuffers(gl);
...
51 // Get the storage locations of u_ViewMatrix and u_ProjMatrix
52 var u_ViewMatrix = gl.getUniformLocation(gl.program,'u_ViewMatrix');
53 var u_ProjMatrix = gl.getUniformLocation(gl.program,'u_ProjMatrix');
...
59 var viewMatrix = new Matrix4(); // The view matrix
60 var projMatrix = new Matrix4(); // The projection matrix
61
62 // Calculate the view and projection matrix
63 viewMatrix.setLookAt(0, 0, 5, 0, 0, -100, 0, 1, 0);
64 projMatrix.setPerspective(30, canvas.width/canvas.height, 1, 100);
65 // Pass The view matrix and projection matrix to u_ViewMatrix and u_ProjMatrix
66 gl.uniformMatrix4fv(u_ViewMatrix, false, viewMatrix.elements);
67 gl.uniformMatrix4fv(u_ProjMatrix, false, projMatrix.elements);
...
72 // Draw the rectangles
73 gl.drawArrays(gl.TRIANGLES, 0, n);
74 }
75
76 function initVertexBuffers(gl) {
77 var verticesColors = new Float32Array([
78 // Three triangles on the right side
79 0.75, 1.0, -4.0, 0.4, 1.0, 0.4, // The green triangle in back
80 0.25, -1.0, -4.0, 0.4, 1.0, 0.4,
81 1.25, -1.0, -4.0, 1.0, 0.4, 0.4,
82
83 0.75, 1.0, -2.0, 1.0, 1.0, 0.4, // The yellow triangle in middle
84 0.25, -1.0, -2.0, 1.0, 1.0, 0.4,
85 1.25, -1.0, -2.0, 1.0, 0.4, 0.4,
86
87 0.75, 1.0, 0.0, 0.4, 0.4, 1.0, // The blue triangle in front
88 0.25, -1.0, 0.0, 0.4, 0.4, 1.0,
89 1.25, -1.0, 0.0, 1.0, 0.4, 0.4,
90
91 // Three triangles on the left side
92 -0.75, 1.0, -4.0, 0.4, 1.0, 0.4, // The green triangle in back
93 -1.25, -1.0, -4.0, 0.4, 1.0, 0.4,
94 -0.25, -1.0, -4.0, 1.0, 0.4, 0.4,
95
96 -0.75, 1.0, -2.0, 1.0, 1.0, 0.4, // The yellow triangle in middle
97 -1.25, -1.0, -2.0, 1.0, 1.0, 0.4,
98 -0.25, -1.0, -2.0, 1.0, 0.4, 0.4,
99
100 -0.75, 1.0, 0.0, 0.4, 0.4, 1.0, // The blue triangle in front
101 -1.25, -1.0, 0.0, 0.4, 0.4, 1.0,
102 -0.25, -1.0, 0.0, 1.0, 0.4, 0.4,
103 ]);
104 var n = 18; // Three vertices per triangle * 6
...
138 return n;
139 }
The vertex and fragment shaders are completely identical (including the names of the
variables) to the ones used in LookAtTriangles_ViewVolume.js .
The processing flow of main() in JavaScript is also similar. Calling initVertexBuffers()
at line 42 writes the vertex coordinates and colors of the six triangles to be displayed into
the buffer object. In initVertexBuffers() , the vertex coordinates and colors for the six
triangles are specified: three triangles positioned on the right side from line 79 and three
triangles positioned on the left side from line 92. As a result, the number of vertices to be
drawn at line 104 is changed to 18 (3×6=18, to handle six triangles).
At lines 52 and 53 in main() , the locations of the uniform variables that store the view
matrix and perspective projection matrix are retrieved. Then at line 59 and 60, the vari-
ables used to hold the matrices are created.
At line 63, the view matrix is calculated, with the eye point set at (0, 0, 5), the line of
sight set along the z-axis in the negative direction, and the up direction set along the
y-axis in the positive direction. Finally at line 64, the projection matrix is set up using a
quadrangular pyramid viewing volume:
64 projMatrix.setPerspective(30, canvas.width/canvas.height, 1, 100);
The second argument, aspect (the horizontal-to-vertical ratio of the near plane), is derived from the <canvas> width and height (its width and height properties), so any modification of the <canvas> aspect ratio doesn't lead to distortion of the objects displayed.
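The following minimal sketch illustrates the same idea when the canvas size changes at runtime. The updateProjection() helper is hypothetical (not part of the sample program), and it assumes the application changes canvas.width and canvas.height elsewhere:

function updateProjection(gl, canvas, u_ProjMatrix, projMatrix) {
  gl.viewport(0, 0, canvas.width, canvas.height);   // match the drawing buffer
  projMatrix.setPerspective(30, canvas.width / canvas.height, 1, 100);
  gl.uniformMatrix4fv(u_ProjMatrix, false, projMatrix.elements);
  // ...then redraw the scene.
}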
Next, as the view and perspective projection matrices are available, you pass them to the
appropriate uniform variables at lines 66 and 67. Finally, you draw the triangles at line 73,
and upon execution you get a result including perspective similar to that shown in Figure
7.20 .
Finally, one aspect touched on earlier but not fully explained is why matrices are used to
set the viewing volume. Without using mathematics, let’s explore that a little.
The Role of the Projection Matrix
Let’s start by examining the perspective projection matrix. Looking at the screen shot of
PerspectiveView in Figure 7.24 , you can see that, after applying the projection matrix, the
objects in the distance are altered in two ways.
Figure 7.24 PerspectiveView
First, the farther away the triangles are, the smaller they appear. Second, the triangles are
parallel shifted so they look as if they are positioned inward toward the line of sight. In
comparison to the identically sized triangles that are laid out as shown on the left side
of Figure 7.25 , the following two transformations have been applied: (1) triangles farther
from the viewer are scaled down (transformed) in proportion to the distance from the
viewer, and (2) the triangles are then transformed to be shifted toward the line of sight, as
illustrated on the right side of Figure 7.25 . These two transformations, shown on the right
side of Figure 7.25 , enable the effect you see in the photograph scene shown in Figure
7.20 .
Figure 7.25 Conceptual rendering of the perspective projection transformation
This means that the specification of the viewing volume can be represented as a combina-
tion of transformations, such as the scaling or translation of geometric shapes and objects,
in accordance with the shape of the viewing volume. The Matrix4 object’s method
setPerspective() automatically calculates this transformation matrix from the arguments
of the specified viewing volume. The elements of the matrix are discussed in Appendix C ,
“Projection Matrices.” If you are interested in the mathematical explanation of the coordi-
nate transform related to the viewing volume, please refer to the book Computer Graphics .
To put it another way, the transformation associated with the perspective projection trans-
forms the quadrangular pyramid viewing volume into a box-shaped viewing volume (right
part of Figure 7.25 ).
Note that the perspective projection matrix does not perform all the work needed for this transformation to generate the required optical effect. Rather, it performs the preliminary preparation that is required by the post vertex shader processing, where the actual processing is done. If you are interested in this, please refer to Appendix D, "WebGL/OpenGL: Left or Right Handed?"
The projection matrix, combined with the model matrix and the view matrix, is able
to handle all the necessary geometric transformations (translation, rotation, scaling) for
achieving the different optical effects. The following section will explore how to combine
these matrices to do that using a simple example.
Using All the Matrices (Model Matrix, View Matrix, and Projection Matrix)
One of the issues with PerspectiveView.js is the amount of code needed to set up the
vertex coordinates and the color data. Because we only have to deal with six triangles in
this case, it’s still manageable, but it could get messy if the number of triangles increased.
Fortunately, there is an effective drawing technique to handle this problem.
If you take a close look at the triangles, you will notice that the configuration is identical
to that in Figure 7.26 , where the dashed triangles are shifted along the x-axis in the posi-
tive (0.75) and negative (–0.75) directions, respectively.
z
-0.75
0.75
x
y
Figure 7.26 Drawing after translation
Taking advantage of this, it is possible to draw the triangles in PerspectiveView in the
following way:
1. Prepare the vertex coordinates data of the three triangles that are laid out centered
along the z-axis.
2. Translate the original triangles by 0.75 units along the x-axis, and draw them.
3. Translate the original triangles by –0.75 units along the x-axis, and draw them.
Now let's try to use this approach in some sample code (PerspectiveView_mvp). In the original PerspectiveView program, the projection and view matrices were used to specify the viewer's viewpoint and viewing volume; in PerspectiveView_mvp, the model matrix is also used, to perform the translation of the triangles.
At this point, it’s worthwhile to review the actions these matrices perform. To do that,
let’s refer to LookAtTriangles , which you wrote earlier to allow the viewer to look at a
rotated triangle from a specific location. At that time, you used this expression, which is
identical to Equation 7.1 :
〈view matrix〉 × 〈model matrix〉 × 〈vertex coordinates〉
Building on that, in LookAtTriangles_ViewVolume, which correctly displays the clipped triangle, you used the following expression, which, if "projection matrix" is taken to mean either an orthographic or a perspective projection matrix, is identical to Equation 7.3:
〈projection matrix〉 × 〈view matrix〉 × 〈vertex coordinates〉
You can infer the following from these two expressions:
Equation 7.4
〈projection matrix〉 × 〈view matrix〉 × 〈model matrix〉 × 〈vertex coordinates〉
This expression shows that, in WebGL, you can calculate the final vertex coordinates
by using three types of matrices: the model matrix, the view matrix, and the projection
matrix.
This can be understood by considering that Equation 7.1 is identical to Equation 7.4 , in
which the projection matrix becomes the identity matrix, and Equation 7.3 is identical
to Equation 7.4 , whose model matrix is turned into the identity matrix. As explained in
Chapter 4 , the identity matrix behaves for matrix multiplication like the scalar 1 does with
scalar multiplication. Multiplying by the identity matrix has no effect on the other matrix.
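As a quick illustration of that point, here is a sketch assuming the book's Matrix4 library (a newly created Matrix4 is the identity matrix) and assuming projMatrix and viewMatrix are Matrix4 objects that have already been set up as in the samples above:

var identity = new Matrix4();      // initialized to the identity matrix
var mvpMatrix = new Matrix4();
// With no model transformation, these two lines give the same result:
mvpMatrix.set(projMatrix).multiply(viewMatrix);                      // Equation 7.3
mvpMatrix.set(projMatrix).multiply(viewMatrix).multiply(identity);   // Equation 7.4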
So let’s construct the sample program using Equation 7.4 .
Sample Program (PerspectiveView_mvp.js)
PerspectiveView_mvp.js is shown in Listing 7.9 . The basic processing flow is similar to
that of PerspectiveView.js . The only difference is the modification of the calculation in
the vertex shader (line 11) to implement Equation 7.4 , and the passing of the additional
matrix ( u_ModelMatrix ) used for the calculation.
Listing 7.9 PerspectiveView_mvp.js
1 // PerspectiveView_mvp.js
2 // Vertex shader program
3 var VSHADER_SOURCE =
4 'attribute vec4 a_Position;\n' +
5 'attribute vec4 a_Color;\n' +
6 'uniform mat4 u_ModelMatrix;\n' +
7 'uniform mat4 u_ViewMatrix;\n' +
8 'uniform mat4 u_ProjMatrix;\n' +
9 'varying vec4 v_Color;\n' +
10 'void main() {\n' +
11 ' gl_Position = u_ProjMatrix * u_ViewMatrix * u_ModelMatrix * a_Position;\n' +
12 ' v_Color = a_Color;\n' +
13 '}\n';
...
25 function main() {
...
42 // Set the vertex coordinates and color (blue triangle is in front)
43 var n = initVertexBuffers(gl);
...
52 // Get the storage locations of u_ModelMatrix, u_ViewMatrix, and u_ProjMatrix.
53 var u_ModelMatrix = gl.getUniformLocation(gl.program, 'u_ModelMatrix');
54 var u_ViewMatrix = gl.getUniformLocation(gl.program,'u_ViewMatrix');
55 var u_ProjMatrix = gl.getUniformLocation(gl.program,'u_ProjMatrix');
...
61 var modelMatrix = new Matrix4(); // Model matrix
62 var viewMatrix = new Matrix4(); // View matrix
63 var projMatrix = new Matrix4(); // Projection matrix
64
65 // Calculate the model matrix, view matrix, and projection matrix
66 modelMatrix.setTranslate(0.75, 0, 0); // Translate 0.75 units
67 viewMatrix.setLookAt(0, 0, 5, 0, 0, -100, 0, 1, 0);
68 projMatrix.setPerspective(30, canvas.width/canvas.height, 1, 100);
69 // Pass the model, view, and projection matrix to uniform variables.
70 gl.uniformMatrix4fv(u_ModelMatrix, false, modelMatrix.elements);
71 gl.uniformMatrix4fv(u_ViewMatrix, false, viewMatrix.elements);
72 gl.uniformMatrix4fv(u_ProjMatrix, false, projMatrix.elements);
73
74 gl.clear(gl.COLOR_BUFFER_BIT);// clear
75
76 gl.drawArrays(gl.TRIANGLES, 0, n); // Draw triangles on right
77
78 // Prepare the model matrix for another pair of triangles
79 modelMatrix.setTranslate(-0.75, 0, 0); // Translate -0.75
80 // Modify only the model matrix
81 gl.uniformMatrix4fv(u_ModelMatrix, false, modelMatrix.elements);
82
83 gl.drawArrays(gl.TRIANGLES, 0, n);// Draw triangles on left
84 }
85
86 function initVertexBuffers(gl) {
87 var verticesColors = new Float32Array([
88 // Vertex coordinates and color
89 0.0, 1.0, -4.0, 0.4, 1.0, 0.4, // The back green triangle
90 -0.5, -1.0, -4.0, 0.4, 1.0, 0.4,
91 0.5, -1.0, -4.0, 1.0, 0.4, 0.4,
92
93 0.0, 1.0, -2.0, 1.0, 1.0, 0.4, // The middle yellow triangle
94 -0.5, -1.0, -2.0, 1.0, 1.0, 0.4,
95 0.5, -1.0, -2.0, 1.0, 0.4, 0.4,
96
97 0.0, 1.0, 0.0, 0.4, 0.4, 1.0, // The front blue triangle
98 -0.5, -1.0, 0.0, 0.4, 0.4, 1.0,
99 0.5, -1.0, 0.0, 1.0, 0.4, 0.4,
100 ]);
...
135 return n;
136 }
This time, you need to pass the model matrix to the vertex shader, so u_ModelMatrix is added at line 6. The matrix is used at line 11, which implements Equation 7.4:
11 ' gl_Position = u_ProjMatrix * u_ViewMatrix * u_ModelMatrix * a_Position;\n' +
Next, main() in JavaScript calls initVertexBuffers() at line 43. In this function, the
vertex coordinates of the triangles to be passed to the buffer object are defined (line 87).
This time, you are handling the vertex coordinates of three triangles centered along the
z-axis instead of the six triangles used in PerspectiveView.js . As mentioned before, this is
because you will use the three triangles in conjunction with a translation.
At line 53, the storage location of u_ModelMatrix in the vertex shader is obtained. At line 61, the Matrix4 object (modelMatrix) whose contents will be passed to the uniform variable is created, and at line 66, the matrix is calculated. First, this matrix will translate the triangles by 0.75 units along the x-axis:
65 // Calculate the view matrix and the projection matrix
66 modelMatrix.setTranslate(0.75, 0, 0); // Translate 0.75
...
70 gl.uniformMatrix4fv(u_ModelMatrix, false, modelMatrix.elements);
...
76 gl.drawArrays(gl.TRIANGLES, 0, n); // Draw a triangle
The matrix calculations, apart from the model matrix at line 66, are the same as in
PerspectiveView.js . The model matrix is passed to u_ModelMatrix at line 70 and used to
draw the right side row of triangles (line 76).
In a similar manner, to translate the row of triangles for the left side by -0.75 units along the x-axis, the model matrix is calculated again at line 79. Because the view matrix and projection matrix are unchanged, you only need to reassign the model matrix to its uniform variable (line 81). Once the matrix is set up, you perform the draw operation at line 83 with gl.drawArrays():
78 // Prepare the model matrix for another pair of triangles
79 modelMatrix.setTranslate(-0.75, 0, 0); // Translate -0.75
80 // Modify only the model matrix
81 gl.uniformMatrix4fv(u_ModelMatrix, false, modelMatrix.elements);
82
83 gl.drawArrays(gl.TRIANGLES, 0, n); // Draw triangles on left
As you have seen, this approach allows you to draw two sets of triangles from a single set
of triangle data, which reduces the number of vertices needed but increases the number
of calls to gl.drawArrays() . The choice of which approach to use for better performance
depends on the application and the WebGL implementation.
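As a rough sketch of this reuse pattern (the offsets array and loop are illustrative and not part of the sample program; modelMatrix, u_ModelMatrix, gl, and n are the names used in PerspectiveView_mvp.js), the same vertex buffer can be drawn any number of times at the cost of one gl.drawArrays() call per copy:

var offsets = [0.75, -0.75];              // x offsets for each copy (illustrative)
for (var i = 0; i < offsets.length; i++) {
  modelMatrix.setTranslate(offsets[i], 0, 0);
  gl.uniformMatrix4fv(u_ModelMatrix, false, modelMatrix.elements);
  gl.drawArrays(gl.TRIANGLES, 0, n);      // one draw call per copy
}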
Experimenting with the Sample Program
In PerspectiveView_mvp, you calculated 〈projection matrix〉 × 〈view matrix〉 × 〈model matrix〉 directly inside the vertex shader. This product is the same for all the vertices, so
there is no need to recalculate it inside the shader for each vertex. It can be computed in
advance inside the JavaScript code, as it was in LookAtRotatedTriangles_mvMatrix earlier
in the chapter, allowing a single matrix to be passed to the vertex shader. This matrix is
called the model view projection matrix , and the name of the variable that passes it is
u_MvpMatrix. The sample program used to show this is PerspectiveView_mvpMatrix, in
which the vertex shader is modified as shown next and, as you can see, is significantly
simpler:
1 // PerspectiveView_mvpMatrix.js
2 // Vertex shader program
3 var VSHADER_SOURCE =
4 'attribute vec4 a_Position;\n' +
5 'attribute vec4 a_Color;\n' +
6 'uniform mat4 u_MvpMatrix;\n' +
7 'varying vec4 v_Color;\n' +
8 'void main() {\n' +
9 ' gl_Position = u_MvpMatrix * a_Position;\n' +
10 ' v_Color = a_Color;\n' +
11 '}\n';
In the JavaScript main(), the storage location of u_MvpMatrix is retrieved at line 51, and the matrices used to calculate the matrix stored in the uniform variable are created starting at line 57:
50 // Get the storage location of u_MvpMatrix
51 var u_MvpMatrix = gl.getUniformLocation(gl.program, 'u_MvpMatrix');
...
57 var modelMatrix = new Matrix4(); // The model matrix
58 var viewMatrix = new Matrix4(); // The view matrix
59 var projMatrix = new Matrix4(); // The projection matrix
60 var mvpMatrix = new Matrix4(); // The model view projection matrix
61
62 // Calculate the model, view, and projection matrices
63 modelMatrix.setTranslate(0.75, 0, 0);
64 viewMatrix.setLookAt(0, 0, 5, 0, 0, -100, 0, 1, 0);
65 projMatrix.setPerspective(30, canvas.width/canvas.height, 1, 100);
66 // Calculate the model view projection matrix
67 mvpMatrix.set(projMatrix).multiply(viewMatrix).multiply(modelMatrix);
68 // Pass the model view projection matrix to u_MvpMatrix
69 gl.uniformMatrix4fv(u_MvpMatrix, false, mvpMatrix.elements);
...
73 gl.drawArrays(gl.TRIANGLES, 0, n); // Draw a rectangle
74
75 // Prepare the model matrix for another pair of triangles
76 modelMatrix.setTranslate(-0.75, 0, 0);
77 // Calculate the model view projection matrix
78 mvpMatrix.set(projMatrix).multiply(viewMatrix).multiply(modelMatrix);
79 // Pass the model view projection matrix to u_MvpMatrix
80 gl.uniformMatrix4fv(u_MvpMatrix, false, mvpMatrix.elements);
81
82 gl.drawArrays(gl.TRIANGLES, 0, n); // Draw a rectangle
83 }
The critical calculation is carried out at line 67. The projection matrix (projMatrix) is first copied into mvpMatrix using the set() variant of the method; mvpMatrix is then multiplied by the view matrix (viewMatrix) and then by the model matrix (modelMatrix), with the result accumulating in mvpMatrix. This result is assigned to u_MvpMatrix at line 69, and the triangles on the right side are drawn at line 73. Similarly, the model view projection matrix for the triangles on the left side is calculated at line 78. It is then passed to u_MvpMatrix at line 80, and the triangles are drawn at line 82.
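If the chained call is hard to read, the following sketch performs the same steps one at a time (assuming, as in the book's matrix library, that each Matrix4 method modifies the object in place and returns it):

mvpMatrix.set(projMatrix);         // mvpMatrix = projMatrix
mvpMatrix.multiply(viewMatrix);    // mvpMatrix = projMatrix * viewMatrix
mvpMatrix.multiply(modelMatrix);   // mvpMatrix = projMatrix * viewMatrix * modelMatrix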
With this information, you are now able to write code that moves the eye point, sets the
viewing volume, and allows you to view three-dimensional objects from various angles.
Additionally, you have learned how to deal with clipping that resulted in partially missing
objects. However, one potential problem remains. As you move the eye point to a differ-
ent location, it’s possible for objects in the foreground to be hidden by objects in the
background. Let’s look at how this problem comes about.
Correctly Handling Foreground and Background Objects
In the real world, if you place two boxes on a desk as shown in Figure 7.27 , the fore-
ground box partially hides the background one.
Figure 7.27 The front object partially hides the back object
Looking at the sample programs constructed so far, such as the screen shot of
PerspectiveView (refer to Figure 7.21 ), the green triangle located at the back is partially
hidden by the yellow and blue triangles. It looks as if WebGL, being designed for display-
ing 3D objects, has naturally figured out the correct order.
However, that is unfortunately not the case. By default, WebGL, to accelerate the drawing
process, draws objects in the order of the vertices specified inside the buffer object. Up
until now, you have always arranged the order of the vertices so that the objects located
in the background are drawn first, thus resulting in a natural rendering.
For example, in PerspectiveView_mvpMatrix.js, you specified the coordinates and colors of the triangles in the following order. Note the z coordinates (the third number on each line):
var verticesColors = new Float32Array([
// vertex coordinates and color
0.0, 1.0, -4.0 , 0.4, 1.0, 0.4, // The green one at the back
-0.5, -1.0, -4.0 , 0.4, 1.0, 0.4,
0.5, -1.0, -4.0 , 1.0, 0.4, 0.4,
0.0, 1.0, -2.0 , 1.0, 1.0, 0.4, // The yellow one in the middle
-0.5, -1.0, -2.0 , 1.0, 1.0, 0.4,
0.5, -1.0, -2.0 , 1.0, 0.4, 0.4,
0.0, 1.0, 0.0 , 0.4, 0.4, 1.0, // The blue one in the front
-0.5, -1.0, 0.0 , 0.4, 0.4, 1.0,
0.5, -1.0, 0.0 , 1.0, 0.4, 0.4,
]);
WebGL draws the triangles in the order in which you specified the vertices (that is,
the green triangle [back], then the yellow triangle [middle], and finally the blue triangle
[front]). This ensures that objects closer to the eye point cover those farther away, as seen
in Figure 7.13 .
To verify this, let’s modify the order in which the triangles are specified by first drawing
the blue triangle in the front, then the yellow triangle in the middle, and finally the green
triangle at the back:
var verticesColors = new Float32Array([
// vertex coordinates and color
0.0, 1.0, 0.0, 0.4, 0.4, 1.0, // The blue one in the front
-0.5, -1.0, 0.0, 0.4, 0.4, 1.0,
0.5, -1.0, 0.0, 1.0, 0.4, 0.4,
0.0, 1.0, -2.0, 1.0, 1.0, 0.4, // The yellow one in the middle
-0.5, -1.0, -2.0, 1.0, 1.0, 0.4,
0.5, -1.0, -2.0, 1.0, 0.4, 0.4,
0.0, 1.0, -4.0, 0.4, 1.0, 0.4, // The green one at the back
-0.5, -1.0, -4.0, 0.4, 1.0, 0.4,
0.5, -1.0, -4.0, 1.0, 0.4, 0.4,
]);
When you run this, you’ll see the green triangle, which is supposed to be located at the
back, has been drawn at the front (see Figure 7.28 ).
Figure 7.28 The green triangle in the back is displayed at the front
Drawing objects in the specified order, the default behavior in WebGL, can be quite
efficient when the sequence can be determined beforehand and the scene doesn’t
subsequently change. However, when you examine the object from various directions by
moving the eye point, it is impossible to decide the drawing order in advance.
Hidden Surface Removal
To cope with this problem, WebGL provides a hidden surface removal function. This
function eliminates surfaces hidden behind foreground objects, allowing you to draw the
scene so that the objects in the back are properly hidden by those in front, regardless of
the specified vertex order. This function is already embedded in WebGL and simply needs
to be enabled.
Enabling hidden surface removal and preparing WebGL to use it requires the following
two steps:
1. Enabling the hidden surface removal function
gl.enable(gl.DEPTH_TEST);
2. Clearing the depth buffer used for the hidden surface removal before drawing
gl.clear(gl.DEPTH_BUFFER_BIT);
The function gl.enable() , used in step 1, actually enables various functions in WebGL.
gl.enable(cap)
Enable the function specified by cap (capability).
Parameters    cap    Specifies the function to be enabled:
                     gl.DEPTH_TEST             Hidden surface removal (2)
                     gl.BLEND                  Blending (see Chapter 9, "Hierarchical Objects")
                     gl.POLYGON_OFFSET_FILL    Polygon offset (see the next section), and so on (3)
Return value  None
Errors        INVALID_ENUM   None of the acceptable values is specified in cap

2 A "DEPTH_TEST" in the hidden surface removal function might sound strange, but actually its name comes from the fact that it decides which objects to draw in the foreground by verifying (TEST) the depth (DEPTH) of each object.

3 Although not covered in this book, you can also specify gl.CULL_FACE, gl.DITHER, gl.SAMPLE_ALPHA_TO_COVERAGE, gl.SAMPLE_COVERAGE, gl.SCISSOR_TEST, and gl.STENCIL_TEST. See the book OpenGL Programming Guide for more information on these.
The depth buffer cleared in the gl.clear() statement (step 2) is a buffer used internally to remove hidden surfaces. While WebGL draws objects and geometric shapes in the color buffer displayed on the <canvas>, hidden surface removal requires the depth (from the eye point) of each geometrical shape and object. The depth buffer holds this information (see Figure 7.29). The depth direction is the same as the z-axis direction, so it is sometimes called the z-buffer.
Figure 7.29 Depth buffer used in hidden surface removal
Because the depth buffer is used whenever a drawing command is issued, it must be
cleared before any drawing operation; otherwise, you will see incorrect results. You specify
the depth buffer using gl.DEPTH_BUFFER_BIT and proceed as follows to clear it:
gl.clear(gl.DEPTH_BUFFER_BIT);
Up until now, you only cleared the color buffer. Because you now need to also clear the
depth buffer, you can clear both buffers simultaneously by taking the bitwise or (|) of
gl.COLOR_BUFFER_BIT (which represents the color buffer) and gl.DEPTH_BUFFER_BIT
(which represents the depth buffer) and specifying it as an argument to gl.clear() :
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
You can use the bitwise or operation this way whenever you need to clear both buffers at
the same time.
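Putting the two steps together, a typical setup looks like the following sketch (drawScene() is only a placeholder for whatever drawing code your program uses):

gl.enable(gl.DEPTH_TEST);   // enable hidden surface removal once, during setup

// Before every redraw:
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);  // clear color and depth buffers
drawScene();                                          // then issue the drawing commands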
To disable the function you enabled with gl.enable() , you use gl.disable() .
gl.disable(cap)
Disable the function specified by cap (capability).
Parameters cap Same as gl.enable() .
Return value None
Errors INVALID_ENUM None of the acceptable values is specified in cap
Sample Program (DepthBuffer.js)
Let’s add the hidden surface removal methods from (1) and (2) to PerspectiveView_
mvpMatrix.js and change the name to DepthBuffer.js . Note that the order of the vertex
coordinates specified inside the buffer object is not changed, so you will draw from
front to back the blue, yellow, and green triangles. The result is identical to that of the
PerspectiveView_mvpMatrix . We detail the program in Listing 7.10 .
Listing 7.10 DepthBuffer.js
1 // DepthBuffer.js
...
23 function main() {
...
41 var n = initVertexBuffers(gl);
...
47 // Specify the color for clearing
48 gl.clearColor(0, 0, 0, 1);
49 // Enable the hidden surface removal function
50 gl.enable(gl.DEPTH_TEST);
...
73 // Clear the color and depth buffer
74 gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
75
76 gl.drawArrays(gl.TRIANGLES, 0, n); // Draw triangles
...
85 gl.drawArrays(gl.TRIANGLES, 0, n); // Draw triangles
86 }
87
88 function initVertexBuffers(gl) {
89 var verticesColors = new Float32Array([
90 // Vertex coordinates and color
91 0.0, 1.0, 0.0, 0.4, 0.4, 1.0, // The blue triangle in front
92 -0.5, -1.0, 0.0, 0.4, 0.4, 1.0,
93 0.5, -1.0, 0.0, 1.0, 0.4, 0.4,
94
95 0.0, 1.0, -2.0, 1.0, 1.0, 0.4, // The yellow triangle in middle
96 -0.5, -1.0, -2.0, 1.0, 1.0, 0.4,
97 0.5, -1.0, -2.0, 1.0, 0.4, 0.4,
98
99 0.0, 1.0, -4.0, 0.4, 1.0, 0.4, // The green triangle in back
100 -0.5, -1.0, -4.0, 0.4, 1.0, 0.4,
101 0.5, -1.0, -4.0, 1.0, 0.4, 0.4,
102 ]);
103 var n = 9;
...
137 return n;
138 }
If you run DepthBuffer, you can see that hidden surface removal is performed and that
objects placed at the back are hidden by objects located at the front. This demonstrates
that the hidden surface removal function can eliminate the hidden surfaces regardless of
the position of the eye point. Equally, this also shows that in anything but a trivial 3D
scene, you will always need to enable hidden surface removal and systematically clear the
depth buffer before any drawing operation.
You should note that hidden surface removal requires you to correctly set up the viewing
volume. If you fail to do this (use WebGL in its default configuration), you are likely
to see incorrect results. You can specify either a box or a quadrangular pyramid for the
viewing volume.
Z Fighting
Hidden surface removal is a sophisticated and powerful feature of WebGL that correctly
handles most of the cases where surfaces need to be removed. However, it fails when two
geometrical shapes or objects are located at extremely close positions and results in the
display looking a little unnatural. This phenomenon is known as Z fighting and is illus-
trated in Figure 7.30 . Here, we draw two triangles sharing the same z coordinate.
Figure 7.30 Visual artifact generated by Z fighting (the left side)
Z fighting occurs because of the limited precision of the depth buffer, which means the system is unable to assess which object is in front and which is behind. Technically, when handling 3D models, you could avoid this by paying thorough attention to the z coordinates' values at the model creation stage; however, implementing this workaround would prove to be unrealistic when dealing with the animation of several objects.
To help resolve this problem, WebGL provides a feature known as the polygon offset .
This works by automatically adding an offset to the z coordinate, whose value is a func-
tion of each object’s inclination with respect to the viewer’s line of sight. You only need
to add two lines of code to enable this function.
1. Enabling the polygon offset function:
gl.enable(gl.POLYGON_OFFSET_FILL);
2. Specifying the parameter used to calculate the offset (before drawing):
gl.polygonOffset(1.0, 1.0);
The same method that enabled the hidden surface removal function is used, but with a
different parameter. The details for gl.polygonOffset() are shown here.
gl.polygonOffset(factor, units)
Specify the offset to be added to the z coordinate of each vertex drawn afterward.
The offset is calculated with the formula m * factor + r * units, where m represents the inclination of the triangle with respect to the line of sight, and r is the smallest difference between two z coordinate values that the hardware can distinguish.
Return value None
Errors None
Let’s look at the program Zfighting , which uses the polygon offset to reduce z fighting
(see Listing 7.11 ).
Listing 7.11 Zfighting.js
1 // Zfighting.js
...
23 function main() {
...
69 // Enable the polygon offset function
70 gl.enable(gl.POLYGON_OFFSET_FILL);
71 // Draw a rectangle
72 gl.drawArrays(gl.TRIANGLES, 0, n/2); // The green triangle
73 gl.polygonOffset(1.0, 1.0); // Set the polygon offset
74 gl.drawArrays(gl.TRIANGLES, n/2, n/2); // The yellow triangle
75 }
76
77 function initVertexBuffers(gl) {
78 var verticesColors = new Float32Array([
79 // Vertex coordinates and color
80 0.0, 2.5, -5.0 , 0.0, 1.0, 0.0, // The green triangle
81 -2.5, -2.5, -5.0 , 0.0, 1.0, 0.0,
82 2.5, -2.5, -5.0 , 1.0, 0.0, 0.0,
83
84 0.0, 3.0, -5.0 , 1.0, 0.0, 0.0, // The yellow triangle
85 -3.0, -3.0, -5.0 , 1.0, 1.0, 0.0,
86 3.0, -3.0, -5.0 , 1.0, 1.0, 0.0,
87 ]);
88 var n = 6;
If you look at the program from line 80, you can see that the z coordinate for each vertex
is set to –5.0, so z fighting should occur.
Within the rest of the code, the polygon offset function is enabled at line 70. After
that, the green and yellow triangles are drawn at lines 72 and 74. For ease of reading,
the program uses only one buffer object, so gl.drawArrays() requires the second and
third arguments to be correctly set. The second argument represents the number of the
vertex to start from, while the third argument gives the number of vertices to be drawn.
Once the green triangle has been drawn, the polygon offset parameter is set using gl.polygonOffset(). Subsequently, all the vertices drawn will have their z coordinate offset.
If you load this program, you will see the two triangles drawn correctly with no z fighting effects, as in the right side of Figure 7.30. If you now comment out line 73 and reload the program, you will notice that z fighting occurs and looks similar to the left side of Figure 7.30.
Hello Cube
So far, the explanation of various WebGL features has been illustrated using simple trian-
gles. You now have enough understanding of the basics to draw 3D objects. Let’s start by
drawing the cube shown in Figure 7.31 . (The coordinates for each vertex are shown on the
right side.) The program used is called HelloCube , in which the eight vertices that define
the cube are specified using the following colors: white, magenta (bright reddish-violet),
red, yellow, green, cyan (bright blue), blue, and black. As was explained in Chapter 5 ,
“Using Colors and Texture Images,” because colors between the vertices are interpolated,
the resulting cube is shaded with an attractive color gradient (actually a “color solid,” an
analog of the two-dimensional “color wheel”).
Figure 7.31 HelloCube and its vertex coordinates
Let’s consider the case where you would like to draw the cube like this with the command
you’ve been relying upon until now: gl.drawArrays() . In this case, you need to draw
using one of the following modes: gl.TRIANGLES , gl.TRIANGLE_STRIP , or gl.TRIANGLE_FAN .
The simplest and most straightforward method would be to draw each face with two triangles. In other words, you can draw a face defined by four vertices (v0, v1, v2, v3) using two triangles defined by the two sets of three vertices (v0, v1, v2) and (v0, v2, v3), respectively, and repeat the same process for all the other faces. In this case, the vertex coordinates specified inside the buffer object would be these:
var vertices = new Float32Array([
1.0, 1.0, 1.0, -1.0, 1.0, 1.0, -1.0, -1.0, 1.0, // v0, v1, v2
1.0, 1.0, 1.0, -1.0, -1.0, 1.0, 1.0, -1.0, 1.0, // v0, v2, v3
1.0, 1.0, 1.0, 1.0, -1.0, 1.0, 1.0, -1.0, -1.0, // v0, v3, v4
...
]);
Because one face is made up of two triangles, you need to know the coordinates of six
vertices to define it. There are six faces, so a total of 6×6 = 36 vertices are necessary. After
having specified the coordinates of each of the 36 vertices, write them in the buffer
object and then call gl.drawArrays(gl.TRIANGLES, 0, 36) , which draws the cube. This
approach requires that you specify and handle 36 vertices, although the cube actually only
requires 8 unique vertices because several triangles share common vertices.
You could, however, take a more frugal approach by drawing a single face with gl.TRIANGLE_FAN. Because gl.TRIANGLE_FAN allows you to draw a face defined by the 4-vertex set (v0, v1, v2, v3), you end up only having to deal with a total of 4×6=24 vertices (4). However, you now need to call gl.drawArrays() separately for each face (six faces). So, each of these two approaches has both advantages and drawbacks, but neither seems ideal.
4 You can cut down on the number of vertices using this kind of representation. It decreases the
number of necessary vertices to 14, which can be drawn with gl.TRIANGLE_STRIP .
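To make the gl.TRIANGLE_FAN approach from the previous paragraph concrete, here is a rough sketch, assuming the buffer object holds the four vertices of each face in consecutive groups of four:

// One gl.drawArrays() call per face; 4 vertices per face, 6 faces.
for (var face = 0; face < 6; face++) {
  gl.drawArrays(gl.TRIANGLE_FAN, face * 4, 4);
}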
As you would expect, WebGL has a solution: gl.drawElements() . It’s an alternative way to
directly draw a three-dimensional object in WebGL, with a minimum of vertices. To use
this method, you will need the vertex coordinates of the entire object, which you will use
to explicitly describe how you want WebGL to draw the shape (the cube).
If we break our cube (see the right side of Figure 7.31 ) into vertices that constitute trian-
gles, we get the structure shown in Figure 7.32 . Looking at the left side of the figure,
you can see that Cube points to a Faces list, which, as the name implies, shows that the
cube is split into six faces: front, right, left, top, bottom, and back. In turn, each face is
composed of two triangles picked up from the Triangles list. The numbers in the Triangles
list represent the indices assigned to the Coordinate list. The vertex coordinates’ indices
are numbered in order starting from zero.
Figure 7.32 The associations of the faces that make up the cube, triangles, vertex
coordinates, and colors
This approach results in a data structure that describes the way the object (a cube) can be
built from its vertex and color data.
Drawing the Object with Indices and Vertex Coordinates
So far, you have been using gl.drawArrays() to draw vertices. However, WebGL supports an alternative approach, gl.drawElements(), whose use looks similar to that of gl.drawArrays() but which has some advantages that we'll explain later. First, let's look at how to use gl.drawElements(). You specify the indices in a buffer object bound not to gl.ARRAY_BUFFER but to gl.ELEMENT_ARRAY_BUFFER (introduced in the explanation of the buffer object in Chapter 4). The key difference is that gl.ELEMENT_ARRAY_BUFFER handles data structured by the indices.
gl.drawElements(mode, count, type, offset)
Executes the shader and draws the geometric shape in the specified mode using the
indices specified in the buffer object bound to gl.ELEMENT_ARRAY_BUFFER .
Parameters mode Specifies the type of shape to be drawn (refer to Figure
3.17 ).
The following symbolic constants are accepted:
gl.POINTS, gl.LINE_STRIP, gl.LINE_LOOP, gl.LINES,
gl.TRIANGLE_STRIP, gl.TRIANGLE_FAN , or gl.TRIANGLES
count Number of indices to be drawn (integer).
type Specifies the index data type: gl.UNSIGNED_BYTE or gl.UNSIGNED_SHORT (5)
offset Specifies the offset in bytes in the index array where you
want to start rendering.
Return value None
Errors INVALID_ENUM mode is none of the preceding values.
INVALID_VALUE A negative value is specified for count or offset
Writing indices to the buffer object bound to gl.ELEMENT_ARRAY_BUFFER is done in the
same way you write the vertex information to the buffer object with gl.drawArrays() .
That is to say, you use gl.bindBuffer() and gl.bufferData() , but the only difference is
that the first argument, target , is set to gl.ELEMENT_ARRAY_BUFFER . Let’s take a look at the
sample program.
Sample Program (HelloCube.js)
The sample program is shown in Listing 7.12 . The vertex and fragment shaders set a quad-
rangular pyramid viewing volume and perform a perspective projection transformation
like PerspectiveView_mvpMatrix.js. It's important to understand that gl.drawElements()
doesn’t do anything special. The vertex shader simply transforms the vertex coordinates,
and the fragment shader sets the color passed by the varying variable to gl_FragColor .
The key difference from the previous programs comes down to the processing of the
buffer object in initVertexBuffers() .
5 Even if type doesn't correspond to the type (Uint8Array or Uint16Array) of the data array specified in gl.ELEMENT_ARRAY_BUFFER, no error is returned. However, if, for example, you specify the indices with a Uint16Array and set type to gl.UNSIGNED_BYTE, in some cases the object might not be completely displayed.
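In practice, the index type should simply match the typed array you used, and gl.UNSIGNED_BYTE only covers indices 0 through 255. A small sketch of both combinations follows (the index values and the indexBuffer variable are illustrative; it assumes the buffer object has been created and the attribute setup is already done):

// Up to 256 distinct vertices: 1-byte indices
var indices8 = new Uint8Array([0, 1, 2, 0, 2, 3]);
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices8, gl.STATIC_DRAW);
gl.drawElements(gl.TRIANGLES, indices8.length, gl.UNSIGNED_BYTE, 0);

// Larger meshes: 2-byte indices
var indices16 = new Uint16Array([0, 1, 2, 0, 2, 3]);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices16, gl.STATIC_DRAW);
gl.drawElements(gl.TRIANGLES, indices16.length, gl.UNSIGNED_SHORT, 0);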
Listing 7.12 HelloCube.js
1 // HelloCube.js
2 // Vertex shader program
3 var VSHADER_SOURCE =
...
8 'void main() {\n' +
9 ' gl_Position = u_MvpMatrix * a_Position;\n' +
10 ' v_Color = a_Color;\n' +
11 '}\n';
12
13 // Fragment shader program
14 var FSHADER_SOURCE =
...
19 'void main() {\n' +
20 ' gl_FragColor = v_Color;\n' +
21 '}\n';
22
23 function main() {
...
40 // Set the vertex coordinates and color
41 var n = initVertexBuffers(gl);
...
47 // Set the clear color and enable the hidden surface removal
48 gl.clearColor(0.0, 0.0, 0.0, 1.0);
49 gl.enable(gl.DEPTH_TEST);
...
58 // Set the eye point and the viewing volume
59 var mvpMatrix = new Matrix4();
60 mvpMatrix.setPerspective(30, 1, 1, 100);
61 mvpMatrix.lookAt(3, 3, 7, 0, 0, 0, 0, 1, 0);
62
63 // Pass the model view projection matrix to u_MvpMatrix
64 gl.uniformMatrix4fv(u_MvpMatrix, false, mvpMatrix.elements);
65
66 // Clear the color and depth buffer
67 gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
68
69 // Draw the cube
70 gl.drawElements(gl.TRIANGLES, n, gl.UNSIGNED_BYTE, 0);
71 }
72
73 function initVertexBuffers(gl) {
...
82 var verticesColors = new Float32Array([
83 // Vertex coordinates and color
84 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, // v0 White
85 -1.0, 1.0, 1.0, 1.0, 0.0, 1.0, // v1 Magenta
86 -1.0, -1.0, 1.0, 1.0, 0.0, 0.0, // v2 Red
...
91 -1.0, -1.0, -1.0, 0.0, 0.0, 0.0 // v7 Black
92 ]);
93
94 // Indices of the vertices
95 var indices = new Uint8Array([
96 0, 1, 2, 0, 2, 3, // front
97 0, 3, 4, 0, 4, 5, // right
98 0, 5, 6, 0, 6, 1, // up
99 1, 6, 7, 1, 7, 2, // left
100 7, 4, 3, 7, 3, 2, // down
101 4, 7, 6, 4, 6, 5 // back
102 ]);
103
104 // Create a buffer object
105 var vertexColorBuffer = gl.createBuffer();
106 var indexBuffer = gl.createBuffer();
...
111 // Write the vertex coordinates and color to the buffer object
112 gl.bindBuffer(gl.ARRAY_BUFFER, vertexColorBuffer);
113 gl.bufferData(gl.ARRAY_BUFFER, verticesColors, gl.STATIC_DRAW);
114
115 var FSIZE = verticesColors.BYTES_PER_ELEMENT;
116 // Assign the buffer object to a_Position and enable it
117 var a_Position = gl.getAttribLocation(gl.program, 'a_Position');
...
122 gl.vertexAttribPointer(a_Position, 3, gl.FLOAT, false, FSIZE * 6, 0);
123 gl.enableVertexAttribArray(a_Position);
124 // Assign the buffer object to a_Color and enable it
125 var a_Color = gl.getAttribLocation(gl.program, 'a_Color');
...
130 gl.vertexAttribPointer(a_Color, 3, gl.FLOAT, false, FSIZE * 6, FSIZE * 3);
131 gl.enableVertexAttribArray(a_Color);
132
133 // Write the indices to the buffer object
134 gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
135 gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);
136
137 return indices.length;
138 }
The processing flow in the JavaScript main() is the same as in ProjectiveView_mvpMatrix.
js , but let’s quickly review it. After having written the vertex data in the buffer object
through a call to initVertexBuffers() at line 41, you enable the hidden surface removal
function at line 49. This is necessary to allow WebGL to correctly draw the cube, taking
into consideration the relationship between the front and the back faces.
You set the eye point and the viewing volume from line 59 to line 61 and pass the model
view projection matrix to the vertex shader’s uniform variable u_MvpMatrix .
At line 67, you clear the color and depth buffers and then draw the cube using
gl.drawElements() at line 70. The use of gl.drawElements() in this program is the
main difference from ProjectiveView_mvpMatrix.js , so let's take a look at that.
Writing Vertex Coordinates, Colors, and Indices to the Buffer Object
The method to assign the vertex coordinates and the color information to the attribute
variable using the buffer object in initVertexBuffers() is unchanged. This time, because
you won’t necessarily use the vertex information in the order specified in the object
buffer, you need to additionally specify in which order you will use it. For that you will
use the vertex order specified in verticesColors as indices. In short, the vertex informa-
tion specified first in the buffer object will be set to index 0, the vertex information speci-
fied in second place in the buffer object will be set to index 1, and so on. Here, we show
the part of the program that specifies the indices in initVertexBuffers() :
73 function initVertexBuffers(gl) {
...
82 var verticesColors = new Float32Array([
83 // Vertex coordinates and color
84 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, // v0 White
85 -1.0, 1.0, 1.0, 1.0, 0.0, 1.0, // v1 Magenta
...
91 -1.0, -1.0, -1.0, 0.0, 0.0, 0.0 // v7 Black
92 ]);
93
94 // Indices of the vertex coordinates
95 var indices = new Uint8Array([
96 0, 1, 2, 0, 2, 3, // front
97 0, 3, 4, 0, 4, 5, // right
98 0, 5, 6, 0, 6, 1, // up
99 1, 6, 7, 1, 7, 2, // left
100 7, 4, 3, 7, 3, 2, // down
101 4, 7, 6, 4, 6, 5 // back
102 ]);
103
104 // Create a buffer object
105 var vertexColorBuffer = gl.createBuffer();
106 var indexBuffer = gl.createBuffer();
...
133 // Write the indices to the buffer object
134 gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
135 gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);
136
137 return indices.length;
138 }
As you may have noticed, at line 106, you create the buffer object ( indexBuffer ) in which to write the indices. These indices are stored in the array indices at line 95. Because the indices are small integers (0, 1, 2, ...), you use an integer typed array, Uint8Array (unsigned 8-bit integer). If the model has more than 256 vertices, so that index values no longer fit into 8 bits, use Uint16Array instead. The content of this array is the triangle list of Figure 7.33 , where each group of three indices points to the three vertex coordinates of one triangle. Generally, this index list doesn't need to be created manually because the 3D modeling tools introduced in the next chapter usually generate it along with the vertex information.
Figure 7.33 Contents of gl.ELEMENT_ARRAY_BUFFER and gl.ARRAY_BUFFER (the triangle index list on the left and the coordinates and colors of vertices v0 to v7 on the right)
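As a side note, and only as a hedged sketch, if a model has more than 256 vertices the index values no longer fit into a Uint8Array ; in that case the typed array and the type argument of gl.drawElements() are changed together (the variable names below are illustrative):

var indices = new Uint16Array(largeIndexList);   // largeIndexList: an ordinary JavaScript array of indices
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);
gl.drawElements(gl.TRIANGLES, indices.length, gl.UNSIGNED_SHORT, 0);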
The setup for the specified indices is performed at lines 134 and 135. This is similar to the
way buffer objects have been written previously, with the difference that the first argu-
ment is modified to gl.ELEMENT_ARRAY_BUFFER . This is to let the WebGL system know that
the contents of the buffer are indices.
Once executed, the internal state of the WebGL system is as detailed in Figure 7.34 .
Figure 7.34 gl.ELEMENT_ARRAY_BUFFER and gl.ARRAY_BUFFER (the internal state of the WebGL system once the index buffer object and the vertex buffer object have been written)
Once set up, the call to gl.drawElements() at line 70 draws the cube:
69 // Draw the cube
70 gl.drawElements(gl.TRIANGLES, n, gl.UNSIGNED_BYTE, 0);
You should note that the second argument of gl.drawElements() , the number of indices,
represents the number of vertices involved in the drawing, but it is not necessarily identical
to the number of vertex coordinates written to gl.ARRAY_BUFFER .
When you call gl.drawElements() , the indices are extracted from the buffer object
( indexBuffer ) bound to gl.ELEMENT_ARRAY_BUFFER , while the associated vertex informa-
tion is retrieved from the buffer object ( vertexColorBuffer ) bound to gl.ARRAY_BUFFER .
All these pieces of information are then passed to the attribute variable. The process is
repeated for each index, and then the whole cube gets drawn by a single call to
gl.drawElements() . With this approach, because you refer to the vertex information
through indices, you can recycle the vertex information. Although gl.drawElements()
allows you to curb memory usage by sharing the vertex information, the cost is a process
to convert the indices to vertex information (that is, a level of indirection). This means
that the choice between gl.drawElements() and gl.drawArrays() , because they both
have pros and cons, will eventually depend on the system implementation.
At this stage, although it's clear that gl.drawElements() is an efficient way to draw 3D shapes, one key feature is missing: there is no way to control the color of each face independently, so the cube can only be drawn as shown in Figure 7.31 .
For example, let’s consider the case where you would like to modify the color of each face
of the cube, as shown in Figure 7.35 , or map textures to the faces. You need to know the
color or texture information for each face, yet you cannot implement this with the combi-
nation of indices, triangle list, and vertex coordinates shown in Figure 7.33 .
Figure 7.35 Cube with differently colored faces
In the following section, we will examine how to address this problem and specify the
color information for each face.
Adding Color to Each Face of a Cube
As discussed before, you can only pass per-vertex information to the vertex shader. This
implies that you need to pass the face’s color and the vertices of the triangles as vertex
information to the vertex shader. For instance, to draw the “front” face in blue, made up
of v0, v1, v2, and v3 ( Figure 7.33 ), you need to specify the same blue color for each of the
vertices.
However, as you may have noticed, v0 is also shared by the “right” and “top” faces as well
as the “front” face. Therefore, if you specify the color blue for the vertices that form the
“front” face, you are then unable to choose a different color for those vertices that also
belong to another face. To cope with this problem, although this might not seem as efficient, you must create duplicate entries for the shared vertices in the vertex coordinates listing, as illustrated in Figure 7.36 . In doing so, common vertices with identical coordinates are handled as separate entities in the face's triangle list (see footnote 6).
6 If you break down all the faces into triangles and draw using gl.drawArrays() , you have
to process 6 vertices * 6 faces = 36 vertices, so the difference between gl.drawArrays() and
gl.drawElements() in memory usage is negligible. This is because a cube or a cuboid is a special
3D object whose faces meet perpendicularly; therefore, each vertex needs to have three colors (one per face it belongs to).
However, in the case of complex 3D models, specifying several colors to a single vertex would be rare.
Figure 7.36 The faces that constitute the cube, the triangles, and the relationship between vertex coordinates (configured so that you can choose a different color for each face)
When opting for such a configuration, the contents of the index list, which consists of
the face’s triangle list, will differ from face to face, thus allowing you to modify the color
for each face. This approach can also be used if you want to map a texture to each face.
You would need to specify the texture coordinates for each vertex, but you can actually
deal with this by rewriting the color list ( Figure 7.36 ) as texture coordinates. The sample
program in the section “Rotate Object” in Chapter 10 covers this approach in more detail.
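As a rough sketch only, replacing the color list with texture coordinates could look like the following. It assumes a vertex shader that declares attribute vec2 a_TexCoord , which is not part of the programs in this chapter:

// Two texture coordinates per vertex instead of three color components
var texCoords = new Float32Array([
  1.0, 1.0,   0.0, 1.0,   0.0, 0.0,   1.0, 0.0,   // front face (v0-v3)
  // ... the remaining five faces would follow the same pattern
]);
var texCoordBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, texCoordBuffer);
gl.bufferData(gl.ARRAY_BUFFER, texCoords, gl.STATIC_DRAW);
var a_TexCoord = gl.getAttribLocation(gl.program, 'a_TexCoord');
gl.vertexAttribPointer(a_TexCoord, 2, gl.FLOAT, false, 0, 0);  // 2 components
gl.enableVertexAttribArray(a_TexCoord);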
Let’s take a look at the sample program ColoredCube , which displays a cube with each face
painted a different color. The screen shot of ColoredCube is identical to Figure 7.35 .
Sample Program (ColoredCube.js)
The sample program is shown in Listing 7.13 . Because the only difference from
HelloCube.js is the method of storing vertex information into the buffer object, let’s look
in more detail at the code related to initVertexBuffers() . The main differences from
HelloCube.js are
• In HelloCube.js , the vertex coordinates and color are stored in a single buffer
object, but because this makes the array unwieldy, the program has been modified so
that they are now stored in separate buffer objects.
• The respective contents of the vertex array (which stores the vertex coordinates), the
color array (which stores the color information), and the index array (which stores
the indices) are modified in accordance with the configuration described in Figure
7.36 (lines 83, 92, and 101).
• To keep the sample program as compact as possible, the function
initArrayBuffer() is defined, which bundles the buffer object creation, binding,
writing of data, and enabling (lines 116, 119, and 129).
As you examine the program, take note of how the second bullet is implemented to match
the structure shown in Figure 7.36 .
Listing 7.13 ColoredCube.js
1 // ColoredCube.js
...
23 function main() {
...
40 // Set the vertex information
41 var n = initVertexBuffers(gl);
...
69 // Draw the cube
70 gl.drawElements(gl.TRIANGLES, n, gl.UNSIGNED_BYTE, 0);
71 }
72
73 function initVertexBuffers(gl) {
...
83 var vertices = new Float32Array([ // Vertex coordinates
84 1.0, 1.0, 1.0, -1.0, 1.0, 1.0, -1.0,-1.0, 1.0, 1.0,-1.0, 1.0,
85 1.0, 1.0, 1.0, 1.0,-1.0, 1.0, 1.0,-1.0,-1.0, 1.0, 1.0,-1.0,
86 1.0, 1.0, 1.0, 1.0, 1.0,-1.0, -1.0, 1.0,-1.0, -1.0, 1.0, 1.0,
...
89 1.0,-1.0,-1.0, -1.0,-1.0,-1.0, -1.0, 1.0,-1.0, 1.0, 1.0,-1.0
90 ]);
91
92 var colors = new Float32Array([ // Colors
93 0.4, 0.4, 1.0, 0.4, 0.4, 1.0, 0.4, 0.4, 1.0, 0.4, 0.4, 1.0,
94 0.4, 1.0, 0.4, 0.4, 1.0, 0.4, 0.4, 1.0, 0.4, 0.4, 1.0, 0.4,
95 1.0, 0.4, 0.4, 1.0, 0.4, 0.4, 1.0, 0.4, 0.4, 1.0, 0.4, 0.4,
...
98 0.4, 1.0, 1.0, 0.4, 1.0, 1.0, 0.4, 1.0, 1.0, 0.4, 1.0, 1.0
99 ]);
100
101 var indices = new Uint8Array([ // Indices of the vertices
102 0, 1, 2, 0, 2, 3, // front
103 4, 5, 6, 4, 6, 7, // right
104 8, 9,10, 8,10,11, // up
...
107 20,21,22, 20,22,23 // back
108 ]);
109
110 // Create a buffer object
111 var indexBuffer = gl.createBuffer();
...
115 // Write the vertex coordinates and color to the buffer object
116 if (!initArrayBuffer(gl, vertices, 3, gl.FLOAT, 'a_Position'))
117 return -1;
118
119 if (!initArrayBuffer(gl, colors, 3, gl.FLOAT, 'a_Color'))
120 return -1;
...
122 // Write the indices to the buffer object
123 gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
124 gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);
125
126 return indices.length;
127 }
128
129 function initArrayBuffer(gl, data, num, type, attribute) {
130 var buffer = gl.createBuffer(); // Create a buffer object
...
135 // Write data into the buffer object
136 gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
137 gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW);
138 // Assign the buffer object to the attribute variable
139 var a_attribute = gl.getAttribLocation (gl.program, attribute);
...
144 gl.vertexAttribPointer(a_attribute, num, type, false, 0, 0);
145 // Enable the assignment of the buffer object to the attribute variable
146 gl.enableVertexAttribArray(a_attribute);
147
148 return true;
149 }
Experimenting with the Sample Program
In ColoredCube , you specify a different color for each face. So what happens when you
choose an identical color for all the faces? For example, let’s try to set the color infor-
mation in ColoredCube.js ’s colors array to “white,” as shown next. We will call this
program ColoredCube_singleColor.js :
1 // ColoredCube_singleColor.js
...
92 var colors = new Float32Array([
93 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
94 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,
...
98 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0
99 ]);
When you execute the program, you see an output like the screenshot shown in Figure
7.37 . One result of using a single color is that it becomes difficult to actually recognize
the cube. Up until now you could differentiate each face because they were differently
colored; therefore, you could recognize the whole shape as a solid. However, when you
switch to a single color, you lose this three-dimensional impression.
Figure 7.37 Cube with its faces being identically colored
In contrast, in the real world, when you put a white box on a table, you can identify it as
a solid (see Figure 7.38 ). This is because each face, although the same white color, presents
a slightly different appearance because each is lit slightly differently. In ColoredCube_
singleColor , such an effect is not programmed, so the cube is hard to recognize. We will
explore how to correctly light 3D scenes in the next chapter.
Figure 7.38 White box in the real world
Summary
In this chapter, through the introduction of the depth information, you have examined
setting the viewer’s eye point and viewing volume, looked at how to draw real 3D objects,
and briefly examined the local and world coordinate system. Many of the examples were
similar to those previously explained for the two-dimensional world, except for the intro-
duction of the z-axis to handle depth information.
The next chapter explains how to light 3D scenes and how to draw and manipulate
three-dimensional shapes with complex structures. We will also return to the function
initShaders() , which has hidden a number of complex issues that you now have enough
understanding to explore.
Chapter 8
Lighting Objects
This chapter focuses on lighting objects, looking at different light sources and
their effects on the 3D scene. Lighting is essential if you want to create realistic
3D scenes because it helps to give the scene a sense of depth.
The following key points are discussed in this chapter:
• Shading, shadows, and different types of light sources including point,
directional, and ambient
• Reflection of light in the 3D scene and the two main types: diffuse and
ambient
• The details of shading and how to implement the effect of light to make
objects, such as the pure white cube in the previous chapter, look three-
dimensional
By the end of this chapter, you will have all the knowledge you need to create
lighted 3D scenes populated with both simple and complex 3D objects.
Lighting 3D Objects
When light hits an object in the real world, part of the light is reflected by the
surface of the object. Only after this reflected light enters your eyes can you see
the object and distinguish its color. For example, a white box reflects white light
which, when it enters your eyes, allows you to tell that the box is white.
In the real world, two important phenomena occur when light hits an object
(see Figure 8.1 ):
• Depending on the light source and direction, surface color is shaded.
• Depending on the light source and direction, objects “cast” shadows on the ground
or the floor.
Figure 8.1 Shading and shadowing
In the real world, you usually notice shadows, but you quite often don’t notice shading,
which gives 3D objects their feeling of depth. Shading is subtle but always present. As
shown in Figure 8.1 , even the surfaces of a pure white cube are distinguishable because each
surface is shaded differently by light. As you can see, the surfaces hit by more light are
brighter, and the surfaces hit by less light are darker, or more shaded. These differences
allow you to distinguish each surface and ensure that the cube looks cubic.
In 3D graphics, the term shading (see footnote 1) is used to describe the process that re-creates this
phenomenon where the colors differ from surface to surface due to light. The other
phenomenon, that the shadow of an object falls on the floor or ground, is re-created using
a process called shadowing . This section discusses shading. Shadowing is discussed in
Chapter 10 , which focuses on a set of useful techniques that build on your basic knowl-
edge of WebGL.
1 Shading is so critical to 3D graphics that the core language, GLSL ES, is a shader language, the
OpenGL ES Shading Language. The original purpose of shaders was to re-create the phenomena of
shading.
When discussing shading, you need to consider two things:
• The type of light source that is emitting light
• How the light is reflected from surfaces of an object and enters the eye
Before we begin to program, let’s look at different types of light sources and how light is
reflected from different surfaces.
Types of Light Source
When light illuminates an object, a light source emits the light. In the real world, light
sources are divided into two main categories: directional light , which is something like
the sun that emits light naturally, and point light , which is something like a light bulb
that emits light artificially. In addition, there is ambient light that represents indirect
light (that is, light emitted from all light sources and reflected by walls or other objects); see Figure 8.2 . In 3D graphics, there are additional types of light sources. For example,
there is a spot light representing flashlights, headlamps, and so on. However, in this book,
we don’t address these more specialized light sources. Refer to the book OpenGL ES 2.0
Programming Guide for further information on these specialized light sources.
Figure 8.2 Directional light, point light, and ambient light
Focusing on the three main types of light source covered in this book:
Directional light: A directional light represents a light source whose light rays are paral-
lel. It is a model of light whose source is considered to be at an infinite distance, such
as the sun. Because of the distance travelled, the rays are effectively parallel by the time
they reach the earth. This light source is considered the simplest, and because its rays are
parallel can be specified using only direction and color.
Point light: A point light represents a light source that emits light in all directions from one single point. It is a model of light that can be used to represent light bulbs, lamps, flames, and so on. This light source is specified by its position and color (see footnote 2). However, the light direction is determined from the position of the light source and the position at which the light strikes a surface. As such, its direction can change considerably within the scene.
Ambient light: Ambient light (indirect light) is a model of light that is emitted from the other light sources (directional or point), reflected by other objects such as walls, and reaches objects indirectly. It represents light that illuminates an object from all directions with the same intensity (see footnote 3). For example, if you open the refrigerator door at night, the entire kitchen becomes slightly lighter. This is the effect of the ambient light. Ambient light does not have a position or a direction and is specified only by its color.
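As a hedged illustration only (the object layout is not part of any sample program in this book), the information needed for each type of light source could be kept in JavaScript like this before being passed to the shaders:

var directionalLight = { color: [1.0, 1.0, 1.0], direction: [0.5, 3.0, 4.0] }; // direction and color
var pointLight       = { color: [1.0, 1.0, 1.0], position:  [2.0, 3.0, 4.0] }; // position and color
var ambientLight     = { color: [0.2, 0.2, 0.2] };                             // color only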
Now that you know the types of light sources that illuminate objects, let’s discuss how
light is reflected by the surface of an object and then enters the eye.
Types of Reflected Light
How light is reflected by the surface of an object and thus what color the surface will
become is determined by two things: the type of the light and the type of surface of the
object. Information about the type of light includes its color and direction. Information
about the surface includes its color and orientation.
When calculating reflection from a surface, there are two main types: diffuse reflection
and environment (or ambient ) reflection . The remainder of this section describes how to
calculate the color due to reflection using the two pieces of information described earlier.
There is a little bit of math to be considered, but it’s not complicated.
Diffuse Reflection
Diffuse reflection is the reflection of light from a directional light or a point light. In
diffuse reflection, the light is reflected (scattered) equally in all directions from where
it hits (see Figure 8.3 ). If a surface is perfectly smooth like a mirror, all incoming light
is reflected; however, most surfaces are rough like paper, rock, or plastic. In such cases,
the light is scattered in random directions from the rough surface. Diffuse reflection is a
model of this phenomenon.
2 This type of light actually attenuates; that is, it is strong near the source and becomes weaker farther
from the source. For the sake of simplicity of the description and sample programs, light is treated
as nonattenuating in this book. For attenuation, please refer to the book OpenGL ES 2.0 Programming
Guide .
3 In fact, ambient light is the combination of light emitted from light sources and reflected by various surfaces. It is approximated in this way because it would otherwise need complicated calculations to take into account all the many light sources and how and where they are reflected.
Figure 8.3 Diffuse reflection (the reflection differs by light direction)
In diffuse reflection, the color of the surface is determined by the color and the direction
of light and the base color and orientation of the surface. The angle between the light
direction and the orientation of the surface is defined by the angle formed by the light
direction and the direction “perpendicular” to the surface. Calling this angle θ , the surface
color by diffuse reflection is calculated using the following formula.
Equation 8.1
〈surface color by diffuse reflection〉 = 〈light color〉 × 〈base color of surface〉 × cos θ
where 〈light color〉 is the color of light emitted from a directional light or a point light. The multiplication with the 〈base color of surface〉 is performed for each RGB component of the color. Because light by diffuse reflection is scattered equally in all directions from where it hits, the intensity of the reflected light at a certain position is the same from any angle (see Figure 8.4 ).
Figure 8.4 The intensity of light at a given position is the same from any angle
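The following is a minimal sketch of Equation 8.1 in plain JavaScript, applied per RGB component; the values of lightColor , baseColor , and theta are illustrative and match the worked examples later in this section:

var lightColor = [1.0, 1.0, 1.0];                 // white light
var baseColor  = [1.0, 0.0, 0.0];                 // red surface
var theta      = Math.PI / 3;                     // 60 degrees
var cosTheta   = Math.max(Math.cos(theta), 0.0);  // clamp negative contributions
var diffuse = [                                   // Equation 8.1, per component
  lightColor[0] * baseColor[0] * cosTheta,
  lightColor[1] * baseColor[1] * cosTheta,
  lightColor[2] * baseColor[2] * cosTheta
];                                                // (0.5, 0.0, 0.0): dark red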
Ambient Reflection
Ambient reflection is the reflection with respect to the ambient light (the indirect light described earlier). In ambient reflec-
tion, the light is reflected at the same angle as its incoming angle. Because an ambient
light illuminates an object equally from all directions with the same intensity, its bright-
ness is the same at any position (see Figure 8.5 ). It can be approximated as follows.
Equation 8.2
〈surface color by ambient reflection〉 = 〈light color〉 × 〈base color of surface〉
where 〈light color〉 is the color of the light emitted from the other light sources (that is, the ambient light).
Figure 8.5 Ambient reflection (the same at any position)
When both diffuse reflection and ambient reflection are present, the color of the surface is calculated by adding the two together, as follows.

Equation 8.3
〈surface color by diffuse and ambient reflection〉 = 〈surface color by diffuse reflection〉 + 〈surface color by ambient reflection〉
Note that it is not required to always use both light sources, or use the formulas exactly
as mentioned here. You are free to modify each formula to achieve the effect you require
when showing the object.
Now let’s construct some sample programs that perform shading (shading and coloring
the surfaces of an object by placing a light source at an appropriate position). First let’s try
to implement shading due to directional light and its diffuse reflection.
Shading Due to Directional Light and Its Diffuse Reflection
As described in the previous section, surface color is determined by light direction and the
orientation of the surface it strikes when considering diffuse reflection. The calculation of
the color due to directional light is easy because its direction is constant. The formula for
calculating the color of a surface by diffuse reflection ( Equation 8.1 ) is shown again here:
〈surface color by diffuse reflection〉 = 〈light color〉 × 〈base color of surface〉 × cos θ
The following three pieces of information are used:
• The color of the light source (directional light)
• The base color of the surface
• The angle ( θ ) between the light and the surface
The color of a light source may be white, such as sunlight, or other colors, such as the
orange of lighting in road tunnels. As you know, it can be represented by RGB. White
light such as sunlight has an RGB value of (1.0, 1.0, 1.0). The base color of a surface
means the color that the surface was originally defined to have, such as red or blue. To
calculate the color of a surface, you need to apply the formula for each of the three RGB
components; the calculation is performed three times.
For example, assume that the light emitted from a light source is white (1.0, 1.0, 1.0), and
the base color of the surface is red (1.0, 0.0, 0.0). From Equation 8.1 , when θ is 0.0 (that is,
when the light hits perpendicularly), cos θ becomes 1.0. Because the R component of the
light source is 1.0, the R component of the base surface color is 1.0, and the cos θ is 1.0,
the R component of the surface color by diffuse reflection is calculated as follows:
R = 1.0 * 1.0 * 1.0 = 1.0
The G and B components are also calculated in the same way, as follows:
G = 1.0 * 0.0 * 1.0 = 0.0
B = 1.0 * 0.0 * 1.0 = 0.0
From these calculations, when white light hits perpendicularly on a red surface, the
surface color by diffuse reflection turns out to be (1.0, 0.0, 0.0), or red. This is consistent
with real-world experience. Conversely, when the color of the light source is red and the
base color of a surface is white, the result is the same.
Let’s now consider the case when θ is 90 degrees, or when the light does not hit the
surface at all. From your real-world experience, you know that in this case the surface will
appear black. Let’s validate this. Because cos θ is 0 when θ is 90 degrees, and anything
multiplied by zero is zero, the result of the formula is 0 for R, G, and B; that is, the surface
color becomes (0.0, 0.0, 0.0), or black, as expected. Equally, when θ is 60 degrees, you’d
expect that a small amount of light falling on a red surface would result in a darker red
color, and because cos θ is 0.5, the surface color is (0.5, 0.0, 0.0), which is dark red, as
expected.
These simple examples have given you a good idea of how to calculate surface color due
to diffuse reflection. To allow you to factor in directional light, let’s transform the preced-
ing formula to make it easy to handle so you can then explore how to draw a cube lit by
directional light.
Calculating Diffuse Reflection Using the Light Direction and the
Orientation of a Surface
In the previous examples, an arbitrary value for θ was chosen. However, typically it is
complicated to get the angle θ between the light direction and the orientation of a surface.
For example, when creating a model, the angle at which light hits each surface cannot
be determined in advance. In contrast, the orientation of each surface can be determined
regardless of where light hits from. Because the light direction is also determined when
its light source is determined, it seems convenient to try to use these two pieces of
information.
Fortunately, mathematics tells us that cos θ is derived by calculating the dot product of
the light direction and the orientation of a surface. Because the dot product is so often
used, GLSL ES provides a function to calculate it (see footnote 4). (More details can be found in Appendix B , "Built-In Functions of GLSL ES 1.0.") When representing the dot product by " · ", cos θ is defined as follows:

cos θ = 〈light direction〉 · 〈orientation of a surface〉
From this, Equation 8.1 can be transformed into the following Equation 8.4 :

Equation 8.4
〈surface color by diffuse reflection〉 = 〈light color〉 × 〈base color of surface〉 × (〈light direction〉 · 〈orientation of a surface〉)
Here, there are two points to be considered: the length of the vectors and the light direction. First, the length of the vectors that represent the light direction and the orientation of the surface, such as (2.0, 2.0, 1.0), must be 1.0 (see footnote 5), or the color of the surface may become too dark or too bright. Adjusting the components of a vector so that its length becomes 1.0 is called normalization (see footnote 6). GLSL ES provides functions for normalizing vectors that you can use directly.
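As a minimal sketch, the following plain JavaScript mirrors what the GLSL ES built-ins normalize() and dot() compute (the GLSL ES functions themselves are used in the shaders later in this chapter):

function normalize(v) {                           // make the vector 1.0 in length
  var len = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
  return [v[0] / len, v[1] / len, v[2] / len];
}
function dot(a, b) {                              // dot product of two vectors
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}
var lightDir = normalize([2.0, 2.0, 1.0]);        // becomes (2/3, 2/3, 1/3)
var normal   = [0.0, 1.0, 0.0];                   // already 1.0 in length
var cosTheta = dot(lightDir, normal);             // cos θ for Equation 8.4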
The second point to consider concerns the light direction for the reflected light. The light
direction is the opposite direction from that which the light rays travel (see Figure 8.6 ).
4 Mathematically, the dot product of two vectors n and l is written as n · l = |n| × |l| × cos θ, where | | means the length of the vector. From this equation, you can see that when the lengths of n and l are 1.0, the dot product is equal to cos θ. If n is (nx, ny, nz) and l is (lx, ly, lz), then n · l = nx * lx + ny * ly + nz * lz, which follows from the law of cosines.
5 If the components of the vector n are (nx, ny, nz), its length is: length of n = |n| = sqrt(nx² + ny² + nz²).
6 Normalized n is (nx/m, ny/m, nz/m), where m is the length of n. For the vector (2.0, 2.0, 1.0) above, m = |n| = sqrt(4.0 + 4.0 + 1.0) = sqrt(9) = 3, so it is normalized into (2.0/3.0, 2.0/3.0, 1.0/3.0).
Figure 8.6 The light direction is from the reflecting surface to the light source
Because we aren’t using an angle to specify the orientation of the surface, we need another
mechanism to do that. The solution is to use normal vectors.
The Orientation of a Surface: What Is the Normal?
The orientation of a surface is specified by the direction perpendicular to the surface and
is called a normal or a normal vector . This direction is represented by a triple of numbers (nx, ny, nz), which gives the direction of a line from the origin (0, 0, 0) to (nx, ny, nz). For example, the direction of the normal (1, 0, 0) is the positive direction of the x-axis, and the direction of the normal (0, 0, 1) is the positive direction of the z-axis.
When considering surfaces and their normals, two properties are important for our
discussion.
A Surface Has Two Normals
Because a surface has a front face and a back face, each side has its own normal; that is,
the surface has two normals. For example, the surface perpendicular to the z-axis has a
front face that is facing toward the positive direction of the z-axis and a back face that is
facing the negative direction of the z-axis, as shown in Figure 8.7 . Their normals are (0, 0,
1) and (0, 0, –1), respectively.
Figure 8.7 Normals (a surface defined by v0 to v3 has the two normals (0, 0, 1) and (0, 0, -1))
In 3D graphics, these two faces are distinguished by the order in which the vertices are
specified when drawing the surface. When you draw a surface specifying vertices in the
order v0, v1, v2, and v3 (see footnote 7), the front face is the one whose vertices are arranged in a clockwise fashion when you look along the direction of the normal of the face (same as the
right-handed rule determining the positive direction of rotation in Chapter 3 , “Drawing
and Transforming Triangles”). So in Figure 8.7 , the front face has the normal (0, 0, –1) as
in the right side of the figure.
The Same Orientation Has the Same Normal
Because a normal just represents direction, surfaces with the same orientation have the
same normal regardless of the position of the surfaces.
If there is more than one surface with the same orientation placed at different positions,
the normals of these surfaces are identical. For example, the normals of a surface perpen-
dicular to the z-axis, whose center is placed at (10, 98, 9), are still (0, 0, 1) and (0, 0, –1).
They are the same as when it is positioned at the origin (see Figure 8.8 ).
Figure 8.8 If the orientation of the surface is the same, the normal is identical regardless of its position
The left side of Figure 8.9 shows the normals that are used in the sample programs in this
section. Normals are labeled using, for example “n(0, 1, 0)” as in this figure.
7 Actually, this surface is composed of two triangles: a triangle drawn in the order v0, v1, and v2, and a
triangle drawn in the order v0, v2, and v3.
Figure 8.9 Normals of the surfaces of a cube (left: the normals of each face, such as n(1, 0, 0) and n(0, 1, 0); right: the per-vertex normals, not all displayed)
Once you have calculated the normals for a surface, the next task is to pass that data to
the shader programs. In the previous chapter, you passed color data for a surface to the
shader as “per-vertex data.” You can pass normal data using the same approach: as per-
vertex data stored in a buffer object. In this section, as shown in Figure 8.9 (right side),
the normal data is specified for each vertex, and in this case there are three normals per vertex, just as there are three colors specified per vertex (see footnote 8).
Now let’s construct a sample program LightedCube that displays a red cube lit by a white
directional light. The result is shown in Figure 8.10 .
Figure 8.10 LightedCube
8 Cubes or cuboids are simple but special objects whose three surfaces are connected perpendicularly.
They have three different normals per vertex. On the other hand, smooth objects such as game
characters have one normal per vertex.
Sample Program (LightedCube.js)
The sample program is shown in Listing 8.1 . It is based on ColoredCube from the previous
chapter, so the basic processing flow of this program is the same as ColoredCube .
As you can see from Listing 8.1 , the vertex shader has been significantly modified so that
it calculates Equation 8.4 . In addition, the normal data is added in initVertexBuffers()
defined at line 89, so that it can be passed to the attribute variable a_Normal . The fragment shader is the same as in ColoredCube and is unmodified; it is reproduced so that you can see that no additional fragment processing is needed.
Listing 8.1 LightedCube.js
1 // LightedCube.js
2 // Vertex shader program
3 var VSHADER_SOURCE =
4 'attribute vec4 a_Position;\n' +
5 'attribute vec4 a_Color;\n' +
6 'attribute vec4 a_Normal;\n' + // Normal
7 'uniform mat4 u_MvpMatrix;\n' +
8 'uniform vec3 u_LightColor;\n' + // Light color
9 'uniform vec3 u_LightDirection;\n' + // world coordinate, normalized
10 'varying vec4 v_Color;\n' +
11 'void main() {\n' +
12 ' gl_Position = u_MvpMatrix * a_Position ;\n' +
13 // Make the length of the normal 1.0
14 ' vec3 normal = normalize(vec3(a_Normal));\n' +
15 // Dot product of light direction and orientation of a surface
16 ' float nDotL = max(dot(u_LightDirection, normal), 0.0);\n' +
17 // Calculate the color due to diffuse reflection
18 ' vec3 diffuse = u_LightColor * vec3(a_Color) * nDotL;\n' +
19 ' v_Color = vec4(diffuse, a_Color.a);\n' +
20 '}\n';
21
22 // Fragment shader program
...
28 'void main() {\n' +
29 ' gl_FragColor = v_Color;\n' +
30 '}\n';
31
32 function main() {
...
49 // Set the vertex coordinates, the color, and the normal
50 var n = initVertexBuffers(gl);
...
61 var u_MvpMatrix = gl.getUniformLocation(gl.program, 'u_MvpMatrix');
62 var u_LightColor = gl.getUniformLocation(gl.program, 'u_LightColor');
63 var u_LightDirection = gl.getUniformLocation(gl.program, 'u_LightDirection');
...
69 // Set the light color (white)
70 gl.uniform3f(u_LightColor, 1.0, 1.0, 1.0);
71 // Set the light direction (in the world coordinate)
72 var lightDirection = new Vector3([0.5, 3.0, 4.0]);
73 lightDirection.normalize(); // Normalize
74 gl.uniform3fv(u_LightDirection, lightDirection.elements);
75
76 // Calculate the view projection matrix
77 var mvpMatrix = new Matrix4(); // Model view projection matrix
78 mvpMatrix.setPerspective(30, canvas.width/canvas.height, 1, 100);
79 mvpMatrix.lookAt(3, 3, 7, 0, 0, 0, 0, 1, 0);
80 // Pass the model view projection matrix to the variable u_MvpMatrix
81 gl.uniformMatrix4fv(u_MvpMatrix, false, mvpMatrix.elements);
...
86 gl.drawElements(gl.TRIANGLES, n, gl.UNSIGNED_BYTE, 0);// Draw a cube
87 }
88
89 function initVertexBuffers(gl) {
...
98 var vertices = new Float32Array([ // Vertices
99 1.0, 1.0, 1.0, -1.0, 1.0, 1.0, -1.0,-1.0, 1.0, 1.0,-1.0, 1.0,
100 1.0, 1.0, 1.0, 1.0,-1.0, 1.0, 1.0,-1.0,-1.0, 1.0, 1.0,-1.0,
...
104 1.0,-1.0,-1.0, -1.0,-1.0,-1.0, -1.0, 1.0,-1.0, 1.0, 1.0,-1.0
105 ]);
...
117
118 var normals = new Float32Array([ // Normals
119 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0,
120 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0,
...
124 0.0, 0.0,-1.0, 0.0, 0.0,-1.0, 0.0, 0.0,-1.0, 0.0, 0.0,-1.0
125 ]);
...
140 if(!initArrayBuffer(gl,'a_Normal', normals, 3, gl.FLOAT)) return -1;
...
154 return indices.length;
155 }
As a reminder, here is the calculation that the vertex shader performs ( Equation 8.4 ):
〈surface color by diffuse reflection〉 = 〈light color〉 × 〈base color of surface〉 × (〈light direction〉 · 〈orientation of a surface〉)
You can see that four pieces of information are needed to calculate this equation: (1) light
color, (2) a surface base color, (3) light direction, and (4) surface orientation. In addition,
the 〈light direction〉 and the 〈orientation of a surface〉 must be normalized (1.0 in length).
Processing in the Vertex Shader
From the four pieces of information necessary for Equation 8.4 , the base color of a
surface is passed as a_Color at line 5 in the following code, and the surface orientation is
passed as a_Normal at line 6. The light color is passed using u_LightColor at line 8, and
the light direction is passed as u_LightDirection at line 9. You should note that only
u_LightDirection is passed in the world coordinate system (see footnote 9) and has been normalized in
the JavaScript code for ease of handling. This avoids the overhead of normalizing it every
time it’s used in the vertex shader:
4 'attribute vec4 a_Position;\n' +
5 'attribute vec4 a_Color;\n' + <-(2) surface base color
6 'attribute vec4 a_Normal;\n' + // Normal <-(4) surface orientation
7 'uniform mat4 u_MvpMatrix;\n' +
8 'uniform vec3 u_LightColor;\n' + // Light color <-(1)
9 'uniform vec3 u_LightDirection;\n' + // world coordinate,normalized <-(3)
10 'varying vec4 v_Color;\n' +
11 'void main() {\n' +
12 ' gl_Position = u_MvpMatrix * a_Position ;\n' +
13 // Make the length of the normal 1.0
14 ' vec3 normal = normalize(vec3(a_Normal));\n' +
15 // Dot product of light direction and orientation of a surface
16 ' float nDotL = max(dot(u_LightDirection, normal), 0.0);\n' +
17 // Calculate the color due to diffuse reflection
18 ' vec3 diffuse = u_LightColor * vec3(a_Color) * nDotL;\n' +
19 ' v_Color = vec4(diffuse, a_Color.a);\n' +
20 '}\n';
Once the necessary information is available, you can carry out the calculation. First, the
vertex shader normalizes the vector at line 14. Technically, because the normal used in
this sample program is 1.0 in length, this process is not necessary. However, it is good
practice, so it is performed here:
9 In this book, the light effect with shading is calculated in the world coordinate system (see Appendix
G , “World Coordinate System Versus Local Coordinate System”) because it is simpler to program and
more intuitive with respect to the light direction. It is also safe to calculate it in the view coordinate
system but more complex.
14 ' vec3 normal = normalize(vec3(a_Normal));\n' +
Although a_Normal is of type vec4 , a normal represents a direction and uses only the x, y,
and z components. So you extract these components by converting with vec3() and then normalize. If you
pass the normal using a type vec3 , this process is not necessary. However, it is passed as
a type vec4 in this code because a vec4 will be needed when we extend the code for the
next example. We will explain the details in a later sample program. As you can see, GLSL
ES provides normalize() , a built-in function to normalize a vector specified as its argu-
ment. In the program, the normalized normal is stored in the variable normal for use later.
Next, you need to calculate the dot product 〈light direction〉 · 〈orientation of a surface〉 from Equation 8.4 . The light direction is stored in u_LightDirection . Because it is already normalized, you can use it as is. The orientation of the surface is the normal that was normalized at line 14. The dot product " · " can then be calculated using the built-in function dot() , which again is provided by GLSL ES and returns the dot product of the two vectors specified as its arguments. That is, calling dot(u_LightDirection, normal) performs 〈light direction〉 · 〈orientation of a surface〉. This calculation is performed at line 16.
16 ' float nDotL = max(dot(u_LightDirection, normal), 0.0);\n' +
Once the dot product is calculated, if the result is positive, it is assigned to nDotL . If it is
negative then 0.0 is assigned. The function max() used here is a GLSL ES built-in function
that returns the greater value from its two arguments.
A negative dot product means that θ in cos θ is more than 90 degrees. Because θ is the
angle between the light direction and the surface orientation, a value of θ greater than 90
degrees means that light hits the surface on its back face (see Figure 8.11 ). This is the same
as no light hitting the front face, so 0.0 is assigned to nDotL .
Figure 8.11 A normal and light in the case where θ is greater than 90 degrees
Now that the preparation is completed, you can calculate Equation 8.4 . This is performed
at line 18, which is a direct implementation of Equation 8.4 . a_Color , which is of type
vec4 and holds the RGBA values, is converted to a vec3 (using vec3() ) because its transparency
(alpha value) is not used in lighting.
In fact, transparency of an object’s surface has a significant effect on the color of the
surface. However, because the calculation of the light passing through an object is compli-
cated, we ignore transparency and don’t use the alpha value in this program:
18 ' vec3 diffuse = u_LightColor * vec3(a_Color) * nDotL;\n' +
Once calculated, the result, diffuse , is assigned to the varying variable v_Color at line 19.
Because v_Color is of type vec4 , diffuse is converted to a vec4 by appending the alpha component of a_Color :
19 ' v_Color = vec4(diffuse, a_Color.a);\n' +
The result of the processing steps above is that a color, depending on the direction of
the vertex’s normal, is calculated, passed to the fragment shader, and assigned to gl_
FragColor . In this case, because you use a directional light, vertices that make up the same
surface are the same color, so each surface will be a solid color.
That completes the vertex shader code. Let’s now take a look at how the JavaScript
program passes the data needed for Equation 8.4 to the vertex shader.
Processing in the JavaScript Program
The light color ( u_LightColor ) and the light direction ( u_LightDirection ) are passed to
the vertex shader from the JavaScript program. Because the light color is white (1.0, 1.0,
1.0), it is simply written to u_LightColor using gl.uniform3f() :
69 // Set the light color (white)
70 gl.uniform3f(u_LightColor, 1.0, 1.0, 1.0);
The next step is to set up the light direction, which must be passed after normalization, as
discussed before. You can normalize it with the normalize() function for Vector3 objects
that is provided in cuon-matrix.js . Usage is simple: Create the Vector3 object that speci-
fies the vector you want to normalize as its argument (line 72), and invoke the normal-
ize() method on the object. Note that the notation in JavaScript is different from that of
GLSL ES:
71 // Set the light direction (in the world coordinate)
72 var lightDirection = new Vector3([0.5, 3.0, 4.0]);
73 lightDirection.normalize(); // Normalize
74 gl.uniform3fv(u_LightDirection, lightDirection.elements);
The result is stored in the elements property of the object in an array of type
Float32Array and then assigned to u_LightDirection using gl.uniform3fv() (line 74).
Finally, the normal data is written in initVertexBuffers() , defined at line 89. Actual
normal data is stored in the array normals at line 118 per vertex along with the color
data, as in ColoredCube.js . Data is assigned to a_Normal in the vertex shader by invoking
initArrayBuffer() at line 140:
140 if(!initArrayBuffer(gl, 'a_Normal', normals, 3, gl.FLOAT)) return -1;
initArrayBuffer() , which was also used in ColoredCube , assigns the array specified by
the third argument ( normals ) to the attribute variable that has the name specified by the
second argument ( a_Normal ).
Add Shading Due to Ambient Light
Although at this stage you have successfully added lighting to the scene, as you can see
from Figure 8.10 , when you run LightedCube , the cube is a little different from a box in the real world. In particular, the surfaces on the opposite side from the light source appear almost black and are not clearly visible. You can see this problem more clearly if you animate
the cube. Try the sample program LightedCube_animation (see Figure 8.12 ) to see the
problem more clearly.
Figure 8.12 The result of LightedCube_animation
Although the scene is correctly lit as the result of Equation 8.4 , our real-world experience
tells us that something isn’t right. It is unusual to see such a sharp effect because, in the
real world, surfaces such as the back face of the cube are also lit by diffuse or reflected
light. The ambient light described in the previous section represents this indirect light
and can be used to make the scene more lifelike. Let’s add that to the scene and see if the
effect is more realistic. Because ambient light models the light that hits an object from all
directions with constant intensity, the surface color due to the reflection is determined
only by the light color and the base color of the surface. The formula that calculates this
was shown as Equation 8.2 . Let’s see it again:
〈surface color by ambient reflection〉 = 〈light color〉 × 〈base color of surface〉
Let’s try to add the color due to ambient light described by this formula to the sample
program LightedCube . To do this, use Equation 8.3 shown here:
〈surface color by diffuse and ambient reflection〉 = 〈surface color by diffuse reflection〉 + 〈surface color by ambient reflection〉
Ambient light is weak because it is the light reflected by other objects like the walls. For
example, if the ambient light color is (0.2, 0.2, 0.2) and the base color of a surface is red,
or (1.0, 0.0, 0.0), then, from Equation 8.2 , the surface color due to the ambient light is
(0.2, 0.0, 0.0). For example, if there is a white box in a blue room—that is, the base color
of the surface is (1.0, 1.0, 1.0) and the ambient light is (0.0, 0.0, 0.2)—the color becomes
slightly blue (0.0, 0.0, 0.2).
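A minimal sketch of Equations 8.2 and 8.3 in plain JavaScript follows; the values match the examples above, and diffuse is assumed to hold the result of Equation 8.1 computed earlier:

var ambientLight = [0.2, 0.2, 0.2];
var baseColor    = [1.0, 0.0, 0.0];            // red surface
var ambient = [                                // Equation 8.2, per RGB component
  ambientLight[0] * baseColor[0],
  ambientLight[1] * baseColor[1],
  ambientLight[2] * baseColor[2]
];                                             // (0.2, 0.0, 0.0)
var surfaceColor = [                           // Equation 8.3: diffuse + ambient
  diffuse[0] + ambient[0],
  diffuse[1] + ambient[1],
  diffuse[2] + ambient[2]
];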
Let’s implement the effect of ambient reflection in the sample program LightedCube_
ambient , which results in the cube shown in Figure 8.13 . You can see that the surfaces that the light does not directly hit are now also slightly colored, and the result more closely resembles a cube in the real world.
Figure 8.13 LightedCube_ambient
Sample Program (LightedCube_ambient.js)
Listing 8.2 illustrates the sample program. Because it is almost the same as LightedCube ,
only the modified parts are shown.
Listing 8.2 LightedCube_ambient.js
1 // LightedCube_ambient.js
2 // Vertex shader program
...
8 'uniform vec3 u_LightColor;\n' + // Light color
9 'uniform vec3 u_LightDirection;\n' + // World coordinate, normalized
10 'uniform vec3 u_AmbientLight;\n' + // Color of an ambient light
11 'varying vec4 v_Color;\n' +
12 'void main() {\n' +
...
16 // The dot product of the light direction and the normal
17 ' float nDotL = max(dot(lightDirection, normal), 0.0);\n' +
18 // Calculate the color due to diffuse reflection
19 ' vec3 diffuse = u_LightColor * a_Color.rgb * nDotL;\n' +
20 // Calculate the color due to ambient reflection
21 ' vec3 ambient = u_AmbientLight * a_Color.rgb;\n' +
22 // Add surface colors due to diffuse and ambient reflection
23 ' v_Color = vec4(diffuse + ambient, a_Color.a);\n' +
24 '}\n';
...
36 function main() {
...
64 // Get the storage locations of uniform variables and so on
...
68 var u_AmbientLight = gl.getUniformLocation(gl.program, 'u_AmbientLight');
...
80 // Set the ambient light
81 gl.uniform3f(u_AmbientLight, 0.2, 0.2, 0.2);
...
95 }
u_AmbientLight at line 10 is added to the vertex shader to pass in the color of ambient
light. After Equation 8.2 is calculated using it and the base color of the surface ( a_Color ),
the result is stored in the variable ambient (line 21). Now that both diffuse and ambient
are determined, the surface color is calculated at line 23 using Equation 8.3 . The result is
passed to v_Color , just like in LightedCube , and the surface is painted with this color.
As you can see, this program simply adds ambient at line 23, causing the whole cube to
become brighter. This implements the effect of the ambient light hitting an object equally
from all directions.
The examples so far have been able to handle static objects. However, because objects are
likely to move within a scene, or the viewpoint changes, you have to be able to handle
such transformations. As you will recall from Chapter 4 , “More Transformations and Basic
Animation,” an object can be translated, scaled, or rotated using coordinate transforma-
tions. These transformations may also change the normal direction and require a recalcu-
lation of lighting as the scene changes. Let’s take a look at how to achieve that.
Lighting the Translated-Rotated Object
The program LightedTranslatedRotatedCube uses a directional light source to light a cube
that is rotated 90 degrees clockwise around the z-axis and translated 0.9 units in the y-axis
direction. As in LightedCube_ambient from the previous section, the sample uses a directional light with both diffuse reflection and ambient reflection, but it also rotates and translates the cube. The result is shown in Figure 8.14 .
Figure 8.14 LightedTranslatedRotatedCube
You saw in the previous section that the normal direction may change when coordinate
transformations are applied. Figure 8.15 shows some examples of that. The leftmost figure
in Figure 8.15 shows the cube used in this sample program looking along the negative
direction of the z-axis. Only the normal (1, 0, 0), which points in the positive direction of the x-axis, is shown. Let's apply some coordinate transformations to this figure; the results are the three figures on the right.
Figure 8.15 The changes of the normal direction due to coordinate transformations (from left: the original normal (1, 0, 0); (1) translation along the y-axis keeps the normal (1, 0, 0); (2) rotation by 45 degrees changes it to (1, 1, 0); (3) scaling by 2 along the y-axis changes it to (1, 0.5, 0))
You can see the following from Figure 8.15 :
• The normal direction is not changed by a translation because the orientation of the
object does not change.
• The normal direction is changed by a rotation according to the orientation of the
object.
• Scaling has a more complicated effect on the normal. As you can see, the object in the rightmost figure is rotated 45 degrees and then scaled by a factor of two only along the y-axis. In this case, the normal direction is changed because the orientation of the surface changes. On the other hand, if an object is scaled equally along all axes, the normal direction is not changed. Finally, even if an object is scaled unequally, the normal direction may not change. For example, when the leftmost figure (the original normal) is scaled two times only in the y-axis direction, the normal direction does not change.
Obviously, the calculation of the normal under various transformations is complex, partic-
ularly when dealing with scaling. However, a mathematical technique can help.
The Magic Matrix: Inverse Transpose Matrix
As described in Chapter 4 , the matrix that performs a coordinate transformation on an
object is called a model matrix. The normal direction can be calculated by multiplying the
normal by the inverse transpose matrix of a model matrix. The inverse transpose matrix
is the matrix that transposes the inverse of a matrix.
The inverse of the matrix M is the matrix R, where both R*M and M*R become the iden-
tity matrix. The term transpose means the operation that exchanges rows and columns of
a matrix. The details of this are explained in Appendix E , “The Inverse Transpose Matrix.”
For our purposes, it can be summarized simply using the following rule:
Rule: You can calculate the normal direction if you multiply the normal by the
inverse transpose of the model matrix.
The inverse transpose matrix is calculated as follows:
1. Invert the original matrix.
2. Transpose the resulting matrix.
This can be carried out using convenient methods supported by the Matrix4 object (see
Table 8.1 ).
Table 8.1 Matrix4 Methods for an Inverse Transpose Matrix
Method Description
Matrix4.setInverseOf(m) Calculates the inverse of the matrix stored in m and stores the
result in the Matrix4 object, where m is a Matrix4 object
Matrix4.transpose() Transposes the matrix stored in the Matrix4 object and writes
the result back into the Matrix4 object
Assuming that a model matrix is stored in modelMatrix , which is a Matrix4 object, the
following code snippet will get its inverse transpose matrix. The result is stored in the vari-
able named normalMatrix , because it performs the coordinate transformation of a normal:
var normalMatrix = new Matrix4();
// Calculate the model matrix
...
// Calculate the matrix to transform normal according to the model matrix
normalMatrix.setInverseOf(modelMatrix);
normalMatrix.transpose();
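To see the rule at work, the following sketch reproduces the rightmost case of Figure 8.15: the normal (1, 0, 0) on a surface that is rotated 45 degrees around the z-axis and then scaled by 2 along the y-axis. It assumes the Vector3 type and the Matrix4.multiplyVector3() helper from cuon-matrix.js; if your copy of the library lacks multiplyVector3(), you can check the same numbers by hand.

var modelMatrix = new Matrix4();
modelMatrix.setScale(1.0, 2.0, 1.0);    // (3) scale by 2 along the y-axis
modelMatrix.rotate(45, 0.0, 0.0, 1.0);  // (2) rotate 45 degrees around the z-axis

var normal = new Vector3([1.0, 0.0, 0.0]);  // the original normal

// Wrong: transforming the normal with the model matrix itself
var wrong = modelMatrix.multiplyVector3(normal);
// wrong is parallel to (1, 2, 0), which is no longer perpendicular to the surface

// Right: transforming the normal with the inverse transpose of the model matrix
var normalMatrix = new Matrix4();
normalMatrix.setInverseOf(modelMatrix);
normalMatrix.transpose();
var right = normalMatrix.multiplyVector3(normal);
// right is parallel to (1, 0.5, 0), matching the rightmost case in Figure 8.15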
Now let’s see the program LightedTranslatedRotatedCube.js that lights the cube, which
is rotated 90 degrees clockwise around the z-axis and translated 0.9 along the y-axis, all
using directional light. You’ll use the cube that was transformed by the model matrix in
LightedCube_ambient from the previous section.
Sample Program (LightedTranslatedRotatedCube.js)
Listing 8.3 shows the sample program. The changes from LightedCube_ambient are that
u_NormalMatrix is added (line 8) to pass the matrix for coordinate transformation of the
normal to the vertex shader, and the normal is transformed at line 16 using this matrix.
u_NormalMatrix is calculated within the JavaScript.
Listing 8.3 LightedTranslatedRotatedCube.js
1 // LightedTranslatedRotatedCube.js
2 // Vertex shader program
3 var VSHADER_SOURCE =
...
6 'attribute vec4 a_Normal;\n' +
7 'uniform mat4 u_MvpMatrix;\n' +
8 'uniform mat4 u_NormalMatrix;\n'+ // Transformation matrix of normal
9 'uniform vec3 u_LightColor;\n' + // Light color
10 'uniform vec3 u_LightDirection;\n' + // World coordinate, normalized
11 'uniform vec3 u_AmbientLight;\n' + // Ambient light color
12 'varying vec4 v_Color;\n' +
13 'void main() {\n' +
14 ' gl_Position = u_MvpMatrix * a_Position;\n' +
15 // Recalculate normal with normal matrix and make its length 1.0
16 ' vec3 normal = normalize(vec3(u_NormalMatrix * a_Normal));\n' +
17 // The dot product of the light direction and the normal
18 ' float nDotL = max(dot(u_LightDirection, normal), 0.0);\n' +
19 // Calculate the color due to diffuse reflection
20 ' vec3 diffuse = u_LightColor * a_Color.rgb * nDotL;\n' +
21 // Calculate the color due to ambient reflection
22 ' vec3 ambient = u_AmbientLight * a_Color.rgb;\n' +
23 // Add the surface colors due to diffuse and ambient reflection
24 ' v_Color = vec4(diffuse + ambient, a_Color.a);\n' +
25 '}\n';
...
37 function main() {
...
65 // Get the storage locations of uniform variables and so on
66 var u_MvpMatrix = gl.getUniformLocation(gl.program, 'u_MvpMatrix');
67 var u_NormalMatrix = gl.getUniformLocation(gl.program, 'u_NormalMatrix');
...
85 var modelMatrix = new Matrix4(); // Model matrix
86 var mvpMatrix = new Matrix4(); // Model view projection matrix
87 var normalMatrix = new Matrix4(); // Transformation matrix for normal
88
89 // Calculate the model matrix
90 modelMatrix.setTranslate(0, 0.9, 0); // Translate to y-axis direction
91 modelMatrix.rotate(90, 0, 0, 1); // Rotate around the z-axis
92 // Calculate the view projection matrix
93 mvpMatrix.setPerspective(30, canvas.width/canvas.height, 1, 100);
94 mvpMatrix.lookAt(-7, 2.5, 6, 0, 0, 0, 0, 1, 0);
95 mvpMatrix.multiply(modelMatrix);
96 // Pass the model view projection matrix to u_MvpMatrix
97 gl.uniformMatrix4fv(u_MvpMatrix, false, mvpMatrix.elements);
98
99 // Calculate matrix to transform normal based on the model matrix
100 normalMatrix.setInverseOf(modelMatrix);
101 normalMatrix.transpose();
102 // Pass the transformation matrix for normal to u_NormalMatrix
103 gl.uniformMatrix4fv(u_NormalMatrix, false, normalMatrix.elements);
...
110 }
The processing in the vertex shader is almost the same as in LightedCube_ambient . The
difference, in line with the preceding rule, is that you multiply a_Normal by the inverse
transpose of the model matrix at line 16 instead of using it as-is:
15 // Recalculate normal with normal matrix and make its length 1.0
16 ' vec3 normal = normalize(vec3(u_NormalMatrix * a_Normal));\n' +
Because a_Normal was passed as type vec4, you can multiply it by u_NormalMatrix, which
is of type mat4. You only need the x, y, and z components of the result of the multiplication,
so the result is converted into type vec3 with vec3(). It is also possible to use .xyz as before
and write (u_NormalMatrix * a_Normal).xyz; however, vec3() is used here for
simplicity. Now that you understand how the shader calculates the normal direction
resulting from the rotation and translation of the object, let’s move on to the explanation
of the JavaScript program. The key point here is the calculation of the matrix that will be
passed to u_NormalMatrix in the vertex shader.
u_NormalMatrix is the inverse transpose of the model matrix, so the model matrix is first
calculated at lines 90 and 91. Because this program rotates an object around the z-axis
and translates it in the y-axis direction, you can use the setTranslate() and rotate()
methods of a Matrix4 object as described in Chapter 4 . It is at lines 100 and 101 that the
inverse transpose matrix is actually calculated. It is passed to u_NormalMatrix in the vertex
shader at line 103, in the same way as mvpMatrix at line 97. The second argument of
gl.uniformMatrix4fv() specifies whether to transpose the matrix (Chapter 3):
99 // Calculate matrix to transform normal based on the model matrix
100 normalMatrix.setInverseOf(modelMatrix);
101 normalMatrix.transpose();
102 // Pass the normal transformation matrix to u_NormalMatrix
103 gl.uniformMatrix4fv(u_NormalMatrix, false, normalMatrix.elements);
When run, the output is similar to Figure 8.14 . As you can see, the shading is the same as
LightedCube_ambient with the cube translated in the y-axis direction. That is because (1)
the translation doesn’t change the normal direction, (2) neither does the rotation by 90
degrees, because the rotation simply switches the surfaces of the cube, (3) the light direc-
tion of the directional light does not change regardless of the position of the object, and
(4) diffuse reflection reflects the light in all directions with equal intensity.
You now have a good understanding of the basics of how to implement light and shade in
3D graphics. Let’s build on this by exploring another type of light source: the point light.
Using a Point Light Object
In contrast to a directional light, the direction of the light from a point light source differs
at each position in the 3D scene (see Figure 8.16 ). So, when calculating shading, you need
to calculate the light direction at the specific position on the surface where the light hits.
Figure 8.16 The direction of a point light varies by position
In the previous sample programs, you calculated the color at each vertex by passing the
normal and the light direction for each vertex. You will use the same approach here, but
because the light direction changes, you need to pass the position of the light source and
then calculate the light direction at each vertex position.
Here, you construct the sample program PointLightedCube that displays a red cube lit
with white light from a point light source. We again use diffuse reflection and ambient
reflection. The result is shown in Figure 8.17 , which is a version of LightedCube_ambient
from the previous section but now lit with a point light.
Figure 8.17 PointLightedCube
Sample Program (PointLightedCube.js)
Listing 8.4 shows the sample program in which only the vertex shader is changed from
LightedCube_ambient . The variable u_ModelMatrix for passing the model matrix and the
variable u_LightPosition representing the light position are added. Note that because you
use a point light in this program, you will use the light position instead of the light direc-
tion. Also, to make the effect easier to see, we have enlarged the cube.
Listing 8.4 PointLightedCube.js
1 // PointLightedCube.js
2 // Vertex shader program
3 var VSHADER_SOURCE =
4 'attribute vec4 a_Position;\n' +
...
8 'uniform mat4 u_ModelMatrix;\n' + // Model matrix
9 'uniform mat4 u_NormalMatrix;\n' + // Transformation matrix of normal
10 'uniform vec3 u_LightColor;\n' + // Light color
11 'uniform vec3 u_LightPosition;\n' + // Position of the light source (in the
➥world coordinate system)
12 'uniform vec3 u_AmbientLight;\n' + // Ambient light color
13 'varying vec4 v_Color;\n' +
14 'void main() {\n' +
15 ' gl_Position = u_MvpMatrix * a_Position;\n' +
16 // Recalculate normal with normal matrix and make its length 1.0
17 ' vec3 normal = normalize(vec3(u_NormalMatrix * a_Normal));\n' +
18 // Calculate the world coordinate of the vertex
19 ' vec4 vertexPosition = u_ModelMatrix * a_Position;\n' +
20 // Calculate the light direction and make it 1.0 in length
21 ' vec3 lightDirection = normalize(u_LightPosition - vec3(vertexPosition));\n' +
22 // The dot product of the light direction and the normal
23 ' float nDotL = max(dot( lightDirection, normal), 0.0);\n' +
24 // Calculate the color due to diffuse reflection
25 ' vec3 diffuse = u_LightColor * a_Color.rgb * nDotL;\n' +
26 // Calculate the color due to ambient reflection
27 ' vec3 ambient = u_AmbientLight * a_Color.rgb;\n' +
28 // Add surface colors due to diffuse and ambient reflection
29 ' v_Color = vec4(diffuse + ambient, a_Color.a);\n' +
30 '}\n';
...
42 function main() {
...
70 // Get the storage locations of uniform variables and so on
71 var u_ModelMatrix = gl.getUniformLocation(gl.program, 'u_ModelMatrix');
...
74 var u_LightColor = gl.getUniformLocation(gl.program,'u_LightColor');
75 var u_LightPosition = gl.getUniformLocation(gl.program, 'u_LightPosition');
...
82 // Set the light color (white)
83 gl.uniform3f(u_LightColor, 1.0, 1.0, 1.0);
84 // Set the position of the light source (in the world coordinate)
85 gl.uniform3f(u_LightPosition, 0.0, 3.0, 4.0);
...
89 var modelMatrix = new Matrix4(); // Model matrix
90 var mvpMatrix = new Matrix4(); // Model view projection matrix
91 var normalMatrix = new Matrix4(); // Transformation matrix for normal
92
93 // Calculate the model matrix
94 modelMatrix.setRotate(90, 0, 1, 0); // Rotate around the y-axis
95 // Pass the model matrix to u_ModelMatrix
96 gl.uniformMatrix4fv(u_ModelMatrix, false, modelMatrix.elements);
...
The key differences in the processing within the vertex shader are at lines 19 and 21. At
line 19, you transform the vertex coordinates into world coordinates in order to calculate
the light direction at the vertex coordinates. Because a point light emits light in all direc-
tions from its position, the light direction at a vertex is the result of subtracting the vertex
position from the light source position. Because the light position is passed to the variable
u_LightPosition using world coordinates at line 11, you also have to convert the vertex
coordinates into world coordinates to calculate the light direction. The light direction
is then calculated at line 21. Note that it is normalized with normalize() so that it will
be 1.0 in length. Using the resulting light direction ( lightDirection ), the dot product is
calculated at line 23 and then the surface color at each vertex is calculated based on this
light direction.
If you run this program, you will see a more realistic result, as shown in Figure 8.17 .
Although this result is more realistic, a closer look reveals an artifact: There are unnatural
lines of shade on the cube’s surface (see Figure 8.18 ). You can see this more easily if the
cube rotates as it does when you load PointLightedCube_animation .
Figure 8.18 The unnatural appearance when processing the point light at each vertex
This comes about because of the interpolation process discussed in Chapter 5 , “Using
Colors and Texture Images.” As you will remember, the WebGL system interpolates the
colors between vertices based on the colors you supply at the vertices. However, because
the direction of light from a point light source varies by position, to shade naturally you
have to calculate the color at every position the light hits instead of just at each vertex.
You can see this problem more clearly using a sphere illuminated by a point light, as
shown in Figure 8.19 .
[Figure labels: left sphere, per-vertex calculation; right sphere, per-position calculation]
Figure 8.19 The spheres illuminated by a point light
As you can see, the border between the brighter parts and darker parts is unnatural in the
left figure. If the effect is hard to see on the page, the left figure is PointLightedSphere ,
and the right is PointLightedSphere_perFragment . We will describe how to draw them
correctly in the next section.
More Realistic Shading: Calculating the Color per Fragment
At first glance, it may seem daunting to have to calculate the color at every position on a
cube surface where the light hits. However, essentially it means calculating the color per
fragment, so the power of the fragment shader can now be used.
The sample program you will use is PointLightedCube_perFragment , and its result is
shown in Figure 8.20 .
Figure 8.20 PointLightedCube_perFragment
Sample Program (PointLightedCube_perFragment.js)
The sample program, which is based on PointLightedCube.js , is shown in Listing 8.5 .
Only the shader code has been modified and, as you can see, there is less processing in the
vertex shader and more processing in the fragment shader.
Listing 8.5 PointLightedCube_perFragment.js
1 // PointLightedCube_perFragment.js
2 // Vertex shader program
3 var VSHADER_SOURCE =
4 'attribute vec4 a_Position;\n' +
...
8 'uniform mat4 u_ModelMatrix;\n' + // Model matrix
9 'uniform mat4 u_NormalMatrix;\n' + // Transformation matrix of normal
10 'varying vec4 v_Color;\n' +
11 'varying vec3 v_Normal;\n' +
12 'varying vec3 v_Position;\n' +
13 'void main() {\n' +
14 ' gl_Position = u_MvpMatrix * a_Position;\n' +
15 // Calculate the vertex position in the world coordinate
16 ' v_Position = vec3(u_ModelMatrix * a_Position);\n' +
17 ' v_Normal = normalize(vec3(u_NormalMatrix * a_Normal));\n' +
18 ' v_Color = a_Color;\n' +
19 '}\n';
20
21 // Fragment shader program
22 var FSHADER_SOURCE =
...
26 'uniform vec3 u_LightColor;\n' + // Light color
27 'uniform vec3 u_LightPosition;\n' + // Position of the light source
28 'uniform vec3 u_AmbientLight;\n' + // Ambient light color
29 'varying vec3 v_Normal;\n' +
30 'varying vec3 v_Position;\n' +
31 'varying vec4 v_Color;\n' +
32 'void main() {\n' +
33 // Normalize normal because it's interpolated and not 1.0 (length)
34 ' vec3 normal = normalize(v_Normal);\n' +
35 // Calculate the light direction and make it 1.0 in length
36 ' vec3 lightDirection = normalize(u_LightPosition - v_Position);\n' +
37 // The dot product of the light direction and the normal
38 ' float nDotL = max(dot( lightDirection, normal), 0.0);\n' +
39 // Calculate the final color from diffuse and ambient reflection
40 ' vec3 diffuse = u_LightColor * v_Color.rgb * nDotL;\n' +
41 ' vec3 ambient = u_AmbientLight * v_Color.rgb;\n' +
42 ' gl_FragColor = vec4(diffuse + ambient, v_Color.a);\n' +
43 '}\n';
To calculate the color per fragment when light hits, you need (1) the position of the frag-
ment in the world coordinate system and (2) the normal direction at the fragment posi-
tion. You can utilize interpolation ( Chapter 5 ) to obtain these values per fragment by just
calculating them per vertex in the vertex shader and passing them via varying variables to
the fragment shader.
These calculations are performed at lines 16 and 17, respectively, in the vertex shader. At
line 16, the vertex position in world coordinates is calculated by multiplying each vertex
coordinate by the model matrix. After assigning the vertex position to the varying vari-
able v_Position , it will be interpolated between vertices and passed to the corresponding
variable ( v_Position ) in the fragment shader as the world coordinate of the fragment. The
normal calculation at line 17 is carried out for the same purpose. (In this sample program, the
normalization at line 17 is not strictly necessary because all the normals passed to a_Normal
already have a length of 1.0; we normalize them anyway as good programming practice, so the
code is more generic.) By assigning the result to v_Normal , it is also interpolated and passed to
the corresponding variable ( v_Normal ) in the fragment shader as the normal of the fragment.
Processing in the fragment shader is the same as that in the vertex shader of
PointLightedCube.js . First, at line 34, the interpolated normal passed from the vertex
shader is normalized. Its length may not be 1.0 anymore because of the interpolation.
Next, at line 36, the light direction is calculated and normalized. Using these results, the
dot product of the light direction and the normal is calculated at line 38. The colors due
to the diffuse reflection and ambient reflection are calculated at lines 40 and 41 and added
to get the fragment color, which is assigned to gl_FragColor at line 42.
If you have more than one light source, after calculating the color due to diffuse reflec-
tion and ambient reflection for each light source, you can obtain the final fragment color
by adding all the colors. In other words, you only have to calculate Equation 8.3 as many
times as the number of light sources.
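For example, a fragment shader that handles two point lights could sum their diffuse contributions before adding a single ambient term, as in the following sketch. The uniform names u_LightColor0, u_LightPosition0, u_LightColor1, and u_LightPosition1 are made up for this example and would need matching getUniformLocation() and uniform3f() calls in the JavaScript.

var FSHADER_SOURCE =
  '#ifdef GL_ES\n' +
  'precision mediump float;\n' +
  '#endif\n' +
  'uniform vec3 u_LightColor0;\n' +    // Color of the first point light
  'uniform vec3 u_LightPosition0;\n' + // Position of the first point light
  'uniform vec3 u_LightColor1;\n' +    // Color of the second point light
  'uniform vec3 u_LightPosition1;\n' + // Position of the second point light
  'uniform vec3 u_AmbientLight;\n' +   // Ambient light color
  'varying vec3 v_Normal;\n' +
  'varying vec3 v_Position;\n' +
  'varying vec4 v_Color;\n' +
  'void main() {\n' +
  '  vec3 normal = normalize(v_Normal);\n' +
  // Diffuse term contributed by the first light
  '  vec3 dir0 = normalize(u_LightPosition0 - v_Position);\n' +
  '  vec3 diffuse0 = u_LightColor0 * v_Color.rgb * max(dot(dir0, normal), 0.0);\n' +
  // Diffuse term contributed by the second light
  '  vec3 dir1 = normalize(u_LightPosition1 - v_Position);\n' +
  '  vec3 diffuse1 = u_LightColor1 * v_Color.rgb * max(dot(dir1, normal), 0.0);\n' +
  '  vec3 ambient = u_AmbientLight * v_Color.rgb;\n' +
  // Add the contributions of all the light sources
  '  gl_FragColor = vec4(diffuse0 + diffuse1 + ambient, v_Color.a);\n' +
  '}\n';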
Summary
This chapter explored how to light a 3D scene, the different types of light used, and how
light is reflected and diffused through the scene. Using this knowledge, you then imple-
mented the effects of different light sources to illuminate a 3D object and examined
various shading techniques to improve the realism of the objects. As you have seen, a
mastery of lighting is essential to adding realism to 3D scenes, which can appear flat and
uninteresting if they’re not correctly lit.
This chapter is the final one that describes the core features and how to program
with WebGL. Once you’ve read it, you will have mastered the basics of WebGL
and will have enough knowledge to create realistic and interactive 3D scenes.
This chapter focuses on hierarchical objects, which are important because
they allow you to progress beyond single objects like cubes or blocks to more
complex objects that you can use for game characters, robots, and even humans.
The following key points are discussed in this chapter:
• Modeling complex connected structures such as a robot arm using a hierar-
chical structure.
• Drawing and manipulating hierarchical objects made up of multiple
simpler objects.
• Combining model and rotation matrices to mimic joints such as elbow or
wrist joints.
• Internally implementing initShaders() , which you’ve used but not exam-
ined so far.
By the end of this chapter, you will have all the knowledge you need to create
compelling 3D scenes populated by both simple and complex 3D objects.
Chapter 9
Hierarchical Objects
Drawing and Manipulating Objects Composed of
Other Objects
Until now, we have described how to translate and rotate a single object, such as a two-
dimensional triangle or a three-dimensional cube. But many of the objects in 3D graphics,
game characters, robots, and so on, consist of more than one object (or segment). For a
simple example, a robot arm is shown in Figure 9.1 . As you can see, this consists of multi-
ple boxes. The program name is MultiJointModel . First, let’s load the program and experi-
ment by pressing the arrow, x, z, c, and v keys to understand what you will construct in
the following sections.
Figure 9.1 A robot arm consisting of multiple objects
One of the key issues when drawing an object consisting of multiple objects (segments)
is that you have to program to avoid conflicts when the segments move. This section will
explore this issue by describing how to draw and manipulate a robot arm that consists
of multiple segments. First, let’s consider the structure of the human body from the
shoulder to the fingertips to understand how to model our robot arm. An arm consists of
multiple segments, such as the upper arm, lower arm, palm, and fingers, each of which is
connected by a joint, as shown on the left of Figure 9.2 .
[Figure labels, left: shoulder joint, upper arm, elbow joint, lower arm, wrist joint, palm, finger1, finger2; middle: rotate the shoulder joint; right: rotate the wrist joint]
Figure 9.2 The structure and movement from the arm to the fingers
Each segment moves around a joint as follows:
• When you move the upper arm by rotating around the shoulder joint, depending
on the upper arm movement, the lower arm, palm, and fingers move (the middle of
Figure 9.2 ) accordingly.
• When you move the lower arm using an elbow joint, the palm and fingers move but
the upper arm does not.
• When you move the palm using the wrist joint, both palm and fingers move but the
upper and lower arm do not (the right of Figure 9.2 ).
• When you move fingers, the upper arm, lower arm, and palm do not move.
To summarize, when you move a segment, the segments located below it move, while the
segments located above are not affected. In addition, all movement, including twisting, is
actually rotation around a joint.
Hierarchical Structure
The typical method used to draw and manipulate the object with such features is to draw
each part object (such as a box) in the order of the object’s hierarchical structure from
upper to lower, applying each model matrix (rotation matrix) at every joint. For example,
in Figure 9.2 , shoulder, elbow, wrist, and finger joints all have respective rotation matrices.
It is important to note that, unlike humans or robots, segments in 3D graphics are not
physically joined. So if you rotate only the object corresponding to the upper arm
at the shoulder joint, the lower parts will be left behind. When you rotate the shoulder
joint, you should explicitly make the lower parts follow the movement. To do this, you
need to rotate the lower elbow and wrist joints through the same angle that you rotate the
shoulder joint.
It is straightforward to program the rotation of one segment so that it propagates to the
lower segments: you simply use the same model matrix when drawing the lower segments.
For example, when you rotate a shoulder joint through 30 degrees
using one model matrix, you can draw the lower elbow and wrist joints rotated through
30 degrees using the same model matrix (see Figure 9.3 ). Thus, by changing only the angle
of the shoulder rotation, the lower segments are automatically rotated to follow the move-
ment of the shoulder joint.
Figure 9.3 The lower segments following the rotation of the upper segment
For more complex cases, such as when you want to rotate the elbow joint 10 degrees after
rotating the shoulder joint 30 degrees, you rotate the elbow joint using a model matrix that
adds a further 10-degree rotation to the shoulder-joint model matrix. This matrix is
calculated by multiplying the shoulder-joint model matrix by a 10-degree rotation matrix;
we refer to the result as the “elbow-joint model matrix.” The parts below the elbow will
follow the movement of the elbow when drawn using this elbow-joint model matrix.
By programming in such a way, the upper segments are not affected by rotation of the
lower segments. Thus, the upper segments will not move no matter how much the lower
segments move.
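In terms of the Matrix4 object, that propagation can be sketched as follows. The 30- and 10-degree angles come from the example above; upperArmLength and the drawing steps are only illustrative placeholders, not code from the sample programs.

var upperArmLength = 10.0;          // Illustrative length of the upper arm
var modelMatrix = new Matrix4();

// Shoulder joint: rotate the whole arm 30 degrees around the z-axis
modelMatrix.setRotate(30, 0.0, 0.0, 1.0);
// ... draw the upper arm using modelMatrix ...

// Elbow joint: starting from the shoulder matrix, move to the elbow and
// rotate 10 degrees more; this is the "elbow-joint model matrix"
modelMatrix.translate(0.0, upperArmLength, 0.0);
modelMatrix.rotate(10, 0.0, 0.0, 1.0);
// ... draw the lower arm (and every segment below it) using modelMatrix ...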
Now that you have a good understanding of the principles involved when moving multi-
segment objects, let’s look at a sample program.
Single Joint Model
Let’s begin with a simple single joint model. You will construct the program JointModel
that draws a robot arm consisting of two parts that can be manipulated with the arrow
keys. The screen shot and the hierarchy structure are shown on the left and right of Figure
9.4 , respectively. This robot arm consists of arm1 and arm2, which are joined by joint1.
You should imagine that the arm is raised above the shoulder and that arm1 is the upper
part and arm2 the lower part. When you add the hand later, it will become clearer.
[Figure labels, left: arm1 rotating around the y-axis, arm2 rotating around the z-axis; right: the hierarchy arm1, joint1, arm2]
Figure 9.4 JointModel and the hierarchy structure used in the program
If you run the program, you will see that arm1 is rotated around the y-axis using the right
and left arrow keys, and joint1 is rotated around the z-axis with the up and down arrow
keys ( Figure 9.5 ). When pressing the down arrow key, joint1 is rotated and arm2 leans
forward, as shown on the left of Figure 9.5 . Then if you press the right arrow key, arm1 is
rotated, as shown on the right of Figure 9.5 .
Figure 9.5 The display change when pressing the arrow keys in JointModel
As you can see, the movement of arm2 by rotation of joint1 does not affect arm1. In
contrast, arm2 is rotated if you rotate arm1.
Sample Program (JointModel.js)
JointModel.js is shown in Listing 9.1 . The actual vertex shader is a little complicated
because of the shading process and has been removed from the listing here to save space.
However, if you are interested in how the lessons learned in the earlier part of the chapter
are applied, please look at the full listing available by downloading the examples from the
book website. The lighting used is a directional light source and simplified diffuse reflec-
tion, which makes the robot arm look more three-dimensional. However, as you can see,
there are no special lighting calculations needed for this joint model, and all the code
required to draw and manipulate the joint model is in the JavaScript program.
Listing 9.1 JointModel.js
1 // JointModel.js
2 // Vertex shader program
3 var VSHADER_SOURCE =
4 'attribute vec4 a_Position;\n' +
5 'attribute vec4 a_Normal;\n' +
6 'uniform mat4 u_MvpMatrix;\n' +
...
9 'void main() {\n' +
10 ' gl_Position = u_MvpMatrix * a_Position;\n' +
11 // Shading calculation to make the arm look three-dimensional
...
17 '}\n';
...
29 function main() {
...
46 // Set the vertex coordinate.
47 var n = initVertexBuffers(gl);
...
57 // Get the storage locations of uniform variables
58 var u_MvpMatrix = gl.getUniformLocation(gl.program, 'u_MvpMatrix');
59 var u_NormalMatrix = gl.getUniformLocation(gl.program, 'u_NormalMatrix');
...
65 // Calculate the view projection matrix
66 var viewProjMatrix = new Matrix4();
67 viewProjMatrix.setPerspective(50.0, canvas.width / canvas.height, 1.0, 100.0);
68 viewProjMatrix.lookAt(20.0, 10.0, 30.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
69
70 // Register the event handler to be called when keys are pressed
71 document.onkeydown = function(ev){ keydown(ev, gl, n, viewProjMatrix,
➥u_MvpMatrix, u_NormalMatrix); };
72 // Draw robot arm
73 draw(gl, n, viewProjMatrix, u_MvpMatrix, u_NormalMatrix);
74 }
75
76 var ANGLE_STEP = 3.0; // The increments of rotation angle (degrees)
77 var g_arm1Angle = 90.0; // The rotation angle of arm1 (degrees)
78 var g_joint1Angle = 0.0; // The rotation angle of joint1 (degrees)
79
80 function keydown(ev, gl, n, viewProjMatrix, u_MvpMatrix, u_NormalMatrix) {
81 switch (ev.keyCode) {
82 case 38: // Up arrow key -> positive rotation of joint1 (z-axis)
83 if (g_joint1Angle < 135.0) g_joint1Angle += ANGLE_STEP;
84 break;
85 case 40: // Down arrow key -> negative rotation of joint1 (z-axis)
86 if (g_joint1Angle > -135.0) g_joint1Angle -= ANGLE_STEP;
87 break;
...
91 case 37: // Left arrow key -> negative rotation of arm1 (y-axis)
92 g_arm1Angle = (g_arm1Angle - ANGLE_STEP) % 360;
93 break;
94 default: return;
95 }
96 // Draw the robot arm
97 draw(gl, n, viewProjMatrix, u_MvpMatrix, u_NormalMatrix);
98 }
99
100 function initVertexBuffers(gl) {
101 // Vertex coordinates
...
148 }
...
174 // Coordinate transformation matrix
175 var g_modelMatrix = new Matrix4(), g_mvpMatrix = new Matrix4();
176
177 function draw(gl, n, viewProjMatrix, u_MvpMatrix, u_NormalMatrix) {
...
181 // Arm1
182 var arm1Length = 10.0; // Length of arm1
183 g_modelMatrix.setTranslate(0.0, -12.0, 0.0);
184 g_modelMatrix.rotate(g_arm1Angle, 0.0, 1.0, 0.0); // Rotate y-axis
185 drawBox(gl, n, viewProjMatrix, u_MvpMatrix, u_NormalMatrix); // Draw
186
187 // Arm2
188 g_modelMatrix.translate(0.0, arm1Length, 0.0); // Move to joint1
189 g_modelMatrix.rotate(g_joint1Angle, 0.0, 0.0, 1.0);// Rotate z-axis
190 g_modelMatrix.scale(1.3, 1.0, 1.3); // Make it a little thicker
191 drawBox(gl, n, viewProjMatrix, u_MvpMatrix, u_NormalMatrix); // Draw
192 }
193
194 var g_normalMatrix = new Matrix4(); // Transformation matrix for normal
195
196 // Draw a cube
197 function drawBox(gl, n, viewProjMatrix, u_MvpMatrix, u_NormalMatrix) {
198 //Calculate the model view project matrix and pass it to u_MvpMatrix
199 g_mvpMatrix.set(viewProjMatrix);
200 g_mvpMatrix.multiply(g_modelMatrix);
201 gl.uniformMatrix4fv(u_MvpMatrix, false, g_mvpMatrix.elements);
202 // Calculate the normal transformation matrix and pass it to u_NormalMatrix
203 g_normalMatrix.setInverseOf(g_modelMatrix);
204 g_normalMatrix.transpose();
205 gl.uniformMatrix4fv(u_NormalMatrix, false, g_normalMatrix.elements);
206 // Draw
207 gl.drawElements(gl.TRIANGLES, n, gl.UNSIGNED_BYTE, 0);
208 }
The function main() from line 29 follows the same structure as before, with the first major
difference being the initVertexBuffers() function call at line 47. In initVertexBuf-
fers() , the vertex data for arm1 and arm2 are written into the appropriate buffer objects.
Until now, you’ve been using cubes, with each side being 2.0 in length and the origin
at the center of the cube. Now, to better model the arm, you will use a cuboid like that
shown on the left side of Figure 9.6. The cuboid has its origin at the center of the bottom
surface, is 3.0 units by 3.0 units at its base, and is 10.0 units in height. By setting the origin at the center of the
bottom surface, its rotation around the z-axis is the same as that of joint1 in Figure 9.5 ,
making it convenient to program. Both arm1 and arm2 are drawn using this cuboid.
[Figure labels: left, the 3.0 x 3.0 x 10.0 cuboid used for the robot arm, with its origin at the center of the bottom surface and the x-, y-, and z-axes marked; right, the previous cube with vertices v0(1, 1, 1) through v7(-1, -1, -1)]
Figure 9.6 A cuboid for drawing the robot arm
From lines 66 to 68, a view projection matrix ( viewProjMatrix ) is calculated with the
specified viewing volume, the eye position, and the view direction.
Because the robot arm in this program is moved by using the arrow keys, the event
handler keydown() is registered at line 71:
70 // Register the event handler to be called when keys are pressed
71 document.onkeydown = function(ev){ keydown(ev, gl, n, viewProjMatrix,
➥u_MvpMatrix, u_NormalMatrix); };
72 // Draw the robot arm
73 draw(gl, n, viewProjMatrix, u_MvpMatrix, u_NormalMatrix);
The keydown() function itself is defined at line 80. Before that, at lines 76, 77, and 78, the
global variables used in keydown() are defined:
76 var ANGLE_STEP = 3.0; // The increments of rotation angle (degrees)
77 var g_arm1Angle = 90.0; // The rotation angle of arm1 (degrees)
78 var g_joint1Angle = 0.0; // The rotation angle of joint1 (degrees)
79
80 function keydown(ev, gl, n, viewProjMatrix, u_MvpMatrix, u_NormalMatrix) {
81 switch (ev.keyCode) {
82 case 38: // Up arrow key -> the positive rotation of joint1 (z-axis)
83 if (g_joint1Angle < 135.0) g_joint1Angle += ANGLE_STEP;
84 break;
...
88 case 39: // Right arrow key -> the positive rotation of arm1 (y-axis)
89 g_arm1Angle = (g_arm1Angle + ANGLE_STEP) % 360;
90 break;
...
95 }
96 // Draw the robot arm
97 draw(gl, n, viewProjMatrix, u_MvpMatrix, u_NormalMatrix);
98 }
ANGLE_STEP at line 76 is used to control how many degrees arm1 and joint1 are rotated
each time the arrow keys are pressed and is set at 3.0 degrees. g_arm1Angle (line 77) and
g_joint1Angle (line 78) are variables that store the current rotation angle of arm1 and
joint1, respectively (see Figure 9.7 ).
Figure 9.7 g_joint1Angle and g_arm1Angle
The keydown() function , from line 80, increases or decreases the value of the rotation
angle of arm1 ( g_arm1Angle ) or joint1 ( g_joint1Angle ) by ANGLE_STEP , according to which
key is pressed. joint1 can only be rotated through the range from –135 degrees to 135
degrees so that arm2 does not interfere with arm1. Then the whole robot arm is drawn at
line 97 using the function draw() .
Draw the Hierarchical Structure (draw())
The draw() function draws the robotic arm according to its hierarchical structure and is
defined at line 177. Two global variables, g_modelMatrix and g_mvpMatrix , are created at
line 175 and will be used in both draw() and drawBox() :
174 // Coordinate transformation matrix
175 var g_modelMatrix = new Matrix4(), g_mvpMatrix = new Matrix4();
176
177 function draw(gl, n, viewProjMatrix, u_MvpMatrix, u_NormalMatrix) {
...
181 // Arm1
182 var arm1Length = 10.0; // Length of arm1
183 g_modelMatrix.setTranslate(0.0, -12.0, 0.0);
184 g_modelMatrix.rotate(g_arm1Angle, 0.0, 1.0, 0.0); // Rotate y-axis
185 drawBox(gl, n, viewProjMatrix, u_MvpMatrix, u_NormalMatrix); // Draw
186
187 // Arm2
188 g_modelMatrix.translate(0.0, arm1Length, 0.0); // Move to joint1
189 g_modelMatrix.rotate(g_joint1Angle, 0.0, 0.0, 1.0); // Rotate z-axis
190 g_modelMatrix.scale(1.3, 1.0, 1.3); // Make it a little thicker
191 drawBox(gl, n, viewProjMatrix, u_MvpMatrix, u_NormalMatrix); // Draw
192 }
As you can see, draw() draws the segments by using drawBox() , starting with the upper
part (arm1) followed by the lower part (arm2).
When drawing each part, the same process is repeated: (1) translation ( setTranslate() ,
translate() ), (2) rotation ( rotate() ), and (3) drawing the part ( drawBox() ).
When drawing a hierarchical model performing a rotation, typically you will process from
upper to lower in the order of (1) translation, (2) rotation, and (3) drawing segments.
arm1 is translated to (0.0, –12.0, 0.0) with setTranslate() at line 183 to move to an easily
visible position. Because this arm is rotated around the y-axis, its model matrix ( g_model-
Matrix ) is multiplied by the rotation matrix around the y-axis at line 184. g_arm1Angle
is used here. Once arm1’s coordinate transformation has been completed, you then draw
using the drawBox() function.
Because arm2 is connected to the tip of arm1, as shown in Figure 9.7 , it has to be drawn
from the tip of arm1. This can be achieved by translating it along the y-axis in the posi-
tive direction by the length of arm1 ( arm1Length ) and applying the translation to the
model matrix, which is used when drawing arm1 ( g_modelMatrix ).
This is done as shown in line 188, where the second argument of translate() is
arm1Length . Also notice that the method uses translate() rather than setTranslate()
because arm2 is drawn at the tip of arm1:
187 // Arm2
188 g_modelMatrix.translate(0.0, arm1Length, 0.0); // Move to joint1
189 g_modelMatrix.rotate(g_joint1Angle, 0.0, 0.0, 1.0); // Rotate z-axis
190 g_modelMatrix.scale(1.3, 1.0, 1.3); // Make it a little thicker
191 drawBox(gl, n, viewProjMatrix, u_MvpMatrix, u_NormalMatrix); // Draw
Line 189 handles the rotation of arm2 which, as can be seen, uses g_joint1Angle . You
make arm2 a little thicker at line 190 by scaling it along the x and z direction. This makes
it easier to distinguish between the two arm segments but is not essential to the robotic
arm’s movement.
Now, by updating g_arm1Angle and g_joint1Angle in keydown() as described in the previ-
ous section and then invoking draw() , arm1 is rotated by g_arm1Angle and arm2 is, in
addition, rotated by g_joint1Angle .
The drawBox() function is quite simple. It calculates the model view projection matrix at
lines 199 and 200 and passes it to the u_MvpMatrix variable at line 201. Then it calculates the
normal transformation matrix for shading from the model matrix at lines 203 and 204, sets it to
u_NormalMatrix at line 205, and draws the cuboid in Figure 9.6 at line 207.
This basic approach, although used here for only a single joint, can be used for any
complex hierarchical models simply by repeating the process steps used earlier.
Obviously, our simple robot arm, although modeled on a human arm, is more like a
skeleton than a real arm. A more realistic model of a real arm would require the skin to
be modeled, which is beyond the scope of this book. Please refer to the OpenGL ES 2.0
Programming Guide for more information about skinning.
A Multijoint Model
Here, you will extend JointModel to create MultiJointModel , which draws a multijoint
robot arm consisting of two arm segments, a palm, and two fingers, all of which you can
manipulate using the keyboard. As shown in Figure 9.8 , we call the arm extending from
the base arm1, the next segment arm2, and the joint between the two arms joint1. There
is a palm at the tip of arm2. The joint between arm2 and the palm is called joint2. The
two fingers attached at the end of the palm are respectively finger1 and finger2.
[Figure labels, left: base; arm1, rotating around the y-axis; joint1, rotating around the z-axis; arm2; joint2 and palm, rotating around the y-axis; finger1 and finger2, rotating around the x-axis; right: the hierarchy base, arm1, joint1, arm2, joint2, palm, finger1, finger2]
Figure 9.8 The hierarchical structure of MultiJointModel
Manipulation of arm1 and joint1 using the arrow keys is the same as JointModel . In addi-
tion, you can rotate joint2 (wrist) with the X and Z keys and move (rotate) the two fingers
with the C and V keys. The variables controlling the rotation angle of each part are shown
in Figure 9.9 .
[Figure labels: g_arm1Angle, g_joint1Angle, g_joint2Angle, g_joint3Angle]
Figure 9.9 The variables controlling the rotation of segments
Sample Program (MultiJointModel.js)
This program is similar to JointModel , except for extensions to keydown() to handle the
additional control keys, and draw() , which draws the extended hierarchical structure. First
let’s look at keydown() in Listing 9.2 .
Listing 9.2 MultiJointModel.js (Code for Key Processing)
1 // MultiJointModel.js
...
76 var ANGLE_STEP = 3.0; // The increments of rotation angle (degrees)
77 var g_arm1Angle = 90.0; // The rotation angle of arm1 (degrees)
78 var g_joint1Angle = 45.0; // The rotation angle of joint1 (degrees)
79 var g_joint2Angle = 0.0; // The rotation angle of joint2 (degrees)
80 var g_joint3Angle = 0.0; // The rotation angle of joint3 (degrees)
81
82 function keydown(ev, gl, n, viewProjMatrix, u_MvpMatrix, u_NormalMatrix) {
83 switch (ev.keyCode) {
84 case 40: // Down arrow key -> positive rotation of joint1 (z-axis)
...
95 break;
96 case 90: // Z key -> the positive rotation of joint2
97 g_joint2Angle = (g_joint2Angle + ANGLE_STEP) % 360;
98 break;
99 case 88: // X key -> the negative rotation of joint2
100 g_joint2Angle = (g_joint2Angle - ANGLE_STEP) % 360;
101 break;
102 case 86: // V key -> the positive rotation of joint3
103 if (g_joint3Angle < 60.0) g_joint3Angle = (g_joint3Angle +
➥ANGLE_STEP) % 360;
104 break;
105 case 67: // C key -> the negative rotation of joint3
106 if (g_joint3Angle > -60.0) g_joint3Angle = (g_joint3Angle -
➥ANGLE_STEP) % 360;
107 break;
108 default: return;
109 }
110 // Draw the robot arm
111 draw(gl, n, viewProjMatrix, u_MvpMatrix, u_NormalMatrix);
112 }
keydown() is basically the same as that of JointModel , but in addition to changing
g_arm1Angle and g_joint1Angle based on key presses, it processes the Z, X, V, and C
keys at lines 96, 99, 102, and 105. These key presses change g_joint2Angle , which is the
rotation angle of joint2, and g_joint3Angle , which is the rotation angle of joint3, respec-
tively. After changing them, it calls draw() at line 111 to draw the hierarchy structure.
Let’s take a look at draw() in Listing 9.3 .
Although you are using the same cuboid for the base, arm1, arm2, palm, finger1, and
finger2, the segments are different in width, height, and depth. To make it easy to draw
these segments, let’s extend drawBox() with three more arguments than that used in the
single-joint model:
function drawBox(gl, n, width, height, depth, viewProjMatrix, u_MvpMatrix, u_NormalMatrix)
By specifying the width, height, and depth using the third to fifth argument, this function
draws a cuboid of the specified size with its origin at the center of the bottom surface.
Listing 9.3 MultiJointModel.js (Code for Drawing the Hierarchy Structure)
188 // Coordinate transformation matrix
189 var g_modelMatrix = new Matrix4(), g_mvpMatrix = new Matrix4();
190
191 function draw(gl, n, viewProjMatrix, u_MvpMatrix, u_NormalMatrix) {
192 // Clear color buffer and depth buffer
193 gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
194
195 // Draw a base
196 var baseHeight = 2.0;
197 g_modelMatrix.setTranslate(0.0, -12.0, 0.0);
198 drawBox(gl, n, 10.0, baseHeight, 10.0, viewProjMatrix, u_MvpMatrix,
➥u_NormalMatrix);
199
200 // Arm1
201 var arm1Length = 10.0;
202 g_modelMatrix.translate(0.0, baseHeight, 0.0); // Move onto the base
203 g_modelMatrix.rotate(g_arm1Angle, 0.0, 1.0, 0.0); // Rotation
204 drawBox(gl, n, 3.0, arm1Length, 3.0, viewProjMatrix, u_MvpMatrix,
➥u_NormalMatrix); // Draw
205
206 // Arm2
...
212 // A palm
213 var palmLength = 2.0;
...
218 // Move to the center of the tip of the palm
219 g_modelMatrix.translate(0.0, palmLength, 0.0);
220
221 // Draw finger1
222 pushMatrix(g_modelMatrix);
223 g_modelMatrix.translate(0.0, 0.0, 2.0);
224 g_modelMatrix.rotate(g_joint3Angle, 1.0, 0.0, 0.0); // Rotation
225 drawBox(gl, n, 1.0, 2.0, 1.0, viewProjMatrix, u_MvpMatrix, u_NormalMatrix);
226 g_modelMatrix = popMatrix();
227
228 // Draw finger2
229 g_modelMatrix.translate(0.0, 0.0, -2.0);
230 g_modelMatrix.rotate(-g_joint3Angle, 1.0, 0.0, 0.0); // Rotation
231 drawBox(gl, n, 1.0, 2.0, 1.0, viewProjMatrix, u_MvpMatrix, u_NormalMatrix);
232 }
233
234 var g_matrixStack = []; // Array for storing a matrix
235 function pushMatrix(m) { // Store the specified matrix to the array
236 var m2 = new Matrix4(m);
237 g_matrixStack.push(m2);
238 }
239
240 function popMatrix() { // Retrieve the matrix from the array
241 return g_matrixStack.pop();
242 }
The draw() function operates in the same way as in JointModel ; that is, each part
is handled following the order of (1) translation, (2) rotation, and (3) draw (using
drawBox() ). First, because the base is not rotated, after moving to the appropriate posi-
tion at line 197, it draws a base there with drawBox() . The third to fifth arguments of
drawBox() specify a width of 10, height of 2, and depth of 10, which cause a flat stand to
be drawn.
The arm1, arm2, and palm are each drawn following the same order of (1) translation,
(2) rotation, and (3) draw and by moving down the object hierarchy toward the lower
level in the same manner as JointModel .
The main difference in this sample program is the drawing of finger1 and finger2 from
line 222. Because they do not have a parent-child relationship, a little more care is needed.
In particular, you have to pay attention to the contents of the model matrix. First, let’s
look at finger1, whose position is translated 2.0 along the z-axis direction from the center
of the tip of the palm and rotated around the x-axis. finger1 can be drawn in the order of
(1) translating, (2) rotating, and (3) drawing segments as before. The program is as follows:
g_modelMatrix.translate(0.0, 0.0, 2.0);
g_modelMatrix.rotate(g_joint3Angle, 1.0, 0.0, 0.0); // Rotation
drawBox(gl, n, 1.0, 2.0, 1.0, viewProjMatrix, u_MvpMatrix, u_NormalMatrix);
Next, looking at finger2, if you follow the same procedure a problem occurs. finger2’s
intended position is a translation of –2.0 units along the z-axis direction from the center
of the tip of the palm and rotated around the x-axis. However, because the model matrix
has changed, if you draw finger2, it will be drawn at the tip of finger1.
Clearly, the solution is to restore the model matrix to its state before finger1 was drawn. A
simple way to achieve this is to store the model matrix before drawing finger1 and retriev-
ing it after drawing finger1. This is actually done at lines 222 and 226 and uses the func-
tions pushMatrix() and popMatrix() to store the specified matrix and retrieve it. At line
222, you store the model matrix specified as pushMatrix() ’s argument ( g_modelMatrix ).
Then, after drawing finger1 at lines 223 to 225, you retrieve the old model matrix at line
226, with popMatrix() , and assign it to g_modelMatrix . Now, because the model matrix
has reverted back, you can draw finger2 in the same way as before.
pushMatrix() and popMatrix() are shown next. pushMatrix() stores the matrix specified
as its argument in an array named g_matrixStack at line 234. popMatrix() retrieves the
matrix stored in g_matrixStack and returns it:
234 var g_matrixStack = []; // Array for storing matrices
235 function pushMatrix(m) { // Store the specified matrix
236 var m2 = new Matrix4(m);
237 g_matrixStack.push(m2);
238 }
239
240 function popMatrix() { // Retrieve a matrix from the array
241 return g_matrixStack.pop();
242 }
This approach can be used to draw an arbitrarily long robot arm. It will scale when new
segments are added to the hierarchy. You only need to use pushMatrix() and popMatrix()
when the hierarchy structure is a sibling relation, not a parent-child relation.
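For example, if you wanted to add a third sibling at the tip of the palm, say a thumb, you would save the palm-tip matrix before drawing it and restore it afterward, just as for finger1. The translation offset, the size, and the reuse of g_joint3Angle below are made up purely for illustration:

// Draw a hypothetical third sibling (a thumb) at the tip of the palm
pushMatrix(g_modelMatrix);                           // Save the palm-tip matrix
g_modelMatrix.translate(1.5, 0.0, 0.0);              // Offset sideways from the palm tip
g_modelMatrix.rotate(g_joint3Angle, 1.0, 0.0, 0.0);  // Rotate around the x-axis
drawBox(gl, n, 1.0, 2.0, 1.0, viewProjMatrix, u_MvpMatrix, u_NormalMatrix);
g_modelMatrix = popMatrix();                         // Restore it for the next sibling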
Draw Segments (drawBox())
Finally, let’s take a look at drawBox() , which draws the segments of the robot arm using
the following arguments:
247 function drawBox(gl, n, width, height, depth, viewProjMatrix, u_MvpMatrix,
➥u_NormalMatrix) {
The third to fifth arguments, width, height, and depth, specify the width, height, and depth
of the cuboid being drawn. As for the remaining arguments, viewProjMatrix is a view projection
matrix, and u_MvpMatrix and u_NormalMatrix are the arguments for setting the coordinate trans-
formation matrices to the corresponding uniform variables in the vertex shader, just like
JointModel.js . The model view projection matrix is passed to u_MvpMatrix , and the
matrix for transforming the coordinates of the normal, described in the previous section,
is passed to u_NormalMatrix .
The three-dimensional object used here, unlike JointModel, is a cube whose side is 1.0
unit long. Its origin is located at the center of the bottom surface so that you can easily
rotate the arms, the palm, and the fingers. The function drawBox() is shown here:
244 var g_normalMatrix = new Matrix4();// Transformation matrix for normal
245
246 // Draw a cuboid
247 function drawBox(gl, n, width, height, depth, viewProjMatrix,
➥u_MvpMatrix, u_NormalMatrix) {
248 pushMatrix(g_modelMatrix); // Save the model matrix
249 // Scale a cube and draw
250 g_modelMatrix.scale(width, height, depth);
251 // Calculate model view project matrix and pass it to u_MvpMatrix
252 g_mvpMatrix.set(viewProjMatrix);
253 g_mvpMatrix.multiply(g_modelMatrix);
254 gl.uniformMatrix4fv(u_MvpMatrix, false, g_mvpMatrix.elements);
255 // Calculate transformation matrix for normals and pass it to u_NormalMatrix
...
259 // Draw
260 gl.drawElements(gl.TRIANGLES, n, gl.UNSIGNED_BYTE, 0);
261 g_modelMatrix = popMatrix(); // Retrieve the model matrix
262 }
As you can see, the model matrix is multiplied by a scaling matrix at line 250 so that
the cube will be drawn with the size specified by width , height, and depth . Note that you
store the model matrix at line 248 and retrieve it at line 261 using pushMatrix() and
popMatrix() . Otherwise, when you draw arm2 after arm1, the scaling used for arm1 is left
in the model matrix and affects the drawing of arm2. By retrieving the model matrix at
line 261, which is saved at line 248, the model matrix reverts to the state before scaling
was applied at line 250.
As you can see, the use of pushMatrix() and popMatrix() adds an extra degree of
complexity but allows you to specify only one set of vertex coordinates and use scaling
to create different cuboids. The alternative approach, using multiple objects specified by
different sets of vertices, is also possible. Let’s take a look at how you would program that.
Draw Segments (drawSegment())
In this section, we will explain how to draw segments by switching between buffer objects
in which the vertex coordinates representing the shape of each segment are stored.
Normally, you would need to specify the vertex coordinates, the normal, and the indices
for each segment. However, in this example, because all segments are cuboids, you can
share the normals and indices and simply specify the vertices for each segment. For each
segment (the base, arm1, arm2, palm, and fingers), the vertices are stored in their respective
buffer objects, which are then switched when drawing the arm parts.
shows the sample program.
Listing 9.4 MultiJointModel_segment.js
1 // MultiJointModel_segment.js
...
29 function main() {
...
47 var n = initVertexBuffers(gl);
...
57 // Get the storage locations of attribute and uniform variables
58 var a_Position = gl.getAttribLocation(gl.program, 'a_Position');
...
74 draw(gl, n, viewProjMatrix, a_Position, u_MvpMatrix, u_NormalMatrix);
75 }
...
115 var g_baseBuffer = null; // Buffer object for a base
116 var g_arm1Buffer = null; // Buffer object for arm1
117 var g_arm2Buffer = null; // Buffer object for arm2
118 var g_palmBuffer = null; // Buffer object for a palm
119 var g_fingerBuffer = null; // Buffer object for fingers
120
121 function initVertexBuffers(gl){
122 // Vertex coordinate (Coordinates of cuboids for all segments)
123 var vertices_base = new Float32Array([ // Base(10x2x10)
124 5.0, 2.0, 5.0, -5.0, 2.0, 5.0, -5.0, 0.0, 5.0, 5.0, 0.0, 5.0,
125 5.0, 2.0, 5.0, 5.0, 0.0, 5.0, 5.0, 0.0,-5.0, 5.0, 2.0,-5.0,
...
129 5.0, 0.0,-5.0, -5.0, 0.0,-5.0, -5.0, 2.0,-5.0, 5.0, 2.0,-5.0
130 ]);
131
132 var vertices_arm1 = new Float32Array([ // Arm1(3x10x3)
133 1.5, 10.0, 1.5, -1.5, 10.0, 1.5, -1.5, 0.0, 1.5, 1.5, 0.0, 1.5,
134 1.5, 10.0, 1.5, 1.5, 0.0, 1.5, 1.5, 0.0,-1.5, 1.5, 10.0,-1.5,
...
138 1.5, 0.0,-1.5, -1.5, 0.0,-1.5, -1.5, 10.0,-1.5, 1.5, 10.0,-1.5
139 ]);
...
159 var vertices_finger = new Float32Array([ // Fingers(1x2x1)
...
166 ]);
167
168 // normals
169 var normals = new Float32Array([
...
176 ]);
177
178 // Indices of vertices
179 var indices = new Uint8Array([
180 0, 1, 2, 0, 2, 3, // front
181 4, 5, 6, 4, 6, 7, // right
...
185 20,21,22, 20,22,23 // back
186 ]);
187
188 // Write coords to buffers, but don't assign to attribute variables
189 g_baseBuffer = initArrayBufferForLaterUse(gl, vertices_base, 3, gl.FLOAT);
190 g_arm1Buffer = initArrayBufferForLaterUse(gl, vertices_arm1, 3, gl.FLOAT);
...
193 g_fingerBuffer = initArrayBufferForLaterUse(gl, vertices_finger, 3, gl.FLOAT);
...
196 // Write normals to a buffer, assign it to a_Normal, and enable it
197 if (!initArrayBuffer(gl, 'a_Normal', normals, 3, gl.FLOAT)) return null;
198
199 // Write indices to a buffer
200 var indexBuffer = gl.createBuffer();
...
205 gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
206 gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);
207
208 return indices.length;
209 }
...
255 function draw(gl, n, viewProjMatrix, a_Position, u_MvpMatrix, u_NormalMatrix) {
...
259 // Draw a base
260 var baseHeight = 2.0;
261 g_modelMatrix.setTranslate(0.0, -12.0, 0.0);
262 drawSegment(gl, n, g_baseBuffer, viewProjMatrix, a_Position,
➥u_MvpMatrix, u_NormalMatrix);
263
264 // Arm1
265 var arm1Length = 10.0;
266 g_modelMatrix.translate(0.0, baseHeight, 0.0); // Move to the tip of the base
267 g_modelMatrix.rotate(g_arm1Angle, 0.0, 1.0, 0.0); // Rotate y-axis
268 drawSegment(gl, n, g_arm1Buffer, viewProjMatrix, a_Position,
➥u_MvpMatrix, u_NormalMatrix);
269
270 // Arm2
...
292 // Finger2
...
295 drawSegment(gl, n, g_fingerBuffer, viewProjMatrix, a_Position,
➥u_MvpMatrix, u_NormalMatrix);
296 }
...
310 // Draw segments
311 function drawSegment(gl, n, buffer, viewProjMatrix, a_Position,
➥u_MvpMatrix, u_NormalMatrix) {
312 gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
313 // Assign the buffer object to the attribute variable
314 gl.vertexAttribPointer(a_Position, buffer.num, buffer.type, false, 0, 0);
315 // Enable the assignment
316 gl.enableVertexAttribArray(a_Position);
317
318 // Calculate the model view project matrix and set it to u_MvpMatrix
...
322 // Calculate matrix for normal and pass it to u_NormalMatrix
...
327 gl.drawElements(gl.TRIANGLES, n, gl.UNSIGNED_BYTE, 0);
328 }
The key points in this program are (1) creating the separate buffer objects that contain the
vertex coordinates for each segment, (2) before drawing each segment, assigning the corre-
sponding buffer object to the attribute variable a_Position , and (3) enabling the buffer
and then drawing the segment.
The main() function from line 29 in the JavaScript code follows the same steps as before.
Switching between buffers for the different segments is added to initVertexBuffers() ,
called at line 47. The storage location of a_Position is retrieved at line 58, and then draw()
is called at line 74.
Let's examine initVertexBuffers(), defined at line 121. Lines 115 to 119 declare the buffer
objects as global variables, used to store the vertex coordinates of each segment. Within
the function, one of the main differences from MultiJointModel.js is the definition of
the vertex coordinates from line 123. Because you are not using a single cuboid transformed
differently for the different segments, you need to define the vertex coordinates
for all the parts separately (for example, the base ( vertices_base ) at line 123 and arm1
( vertices_arm1 ) at line 132). The buffer objects for each part are created by the calls to
initArrayBufferForLaterUse() at lines 189 to 193. This function is shown here:
211 function initArrayBufferForLaterUse(gl, data, num, type){
212 var buffer = gl.createBuffer(); // Create a buffer object
...
217 // Write data to the buffer object
218 gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
219 gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW);
220
221 // Store information to assign it to attribute variable later
222 buffer.num = num;
223 buffer.type = type;
224
225 return buffer;
226 }
initArrayBufferForLaterUse() simply creates a buffer object at line 212 and writes data
to it at lines 218 and 219. Notice that assigning it to an attribute variable
( gl.vertexAttribPointer() ) and enabling the assignment ( gl.enableVertexAttribArray() ) are not
done within the function but later, just before drawing. To assign the buffer object to the
attribute variable a_Position later, the data needed is stored as properties of the buffer
object at lines 222 and 223.
Here you take advantage of an interesting feature of JavaScript that allows you to freely
add new properties to an object and assign data to them. You can do this simply by
appending .propertyName to the object name and assigning a value. Using this
feature, you store the number of items in the num property (line 222), and the type in the
type property (line 223). Of course, you can access the contents of the newly made prop-
erties using the same name. Note, you must be careful when referring to properties created
in this way, because JavaScript gives no error indications even if you misspell only one
character in the property name. Equally, be aware that, although convenient, appending
properties has a performance overhead. A better approach, user-defined types, is explained
in Chapter 10 , “Advanced Techniques,” but let’s stick with this approach for now.
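As a small illustration of the warning above, consider the following sketch (the variable names are made up for this example and are not taken from the sample program):

var buffer = gl.createBuffer();  // an ordinary buffer object
buffer.num = 3;                  // append a new property and assign a value to it
buffer.type = gl.FLOAT;          // append another property

console.log(buffer.num);         // -> 3
console.log(buffer.nun);         // -> undefined; the misspelling is not reported as an error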
Finally, the draw() function, invoked at line 255, is the same as used in MultiJointModel
in terms of drawing parts according to the hierarchical structure, but it’s different in
terms of using drawSegment() to draw each segment. In particular, the third argument of
drawSegment() , shown next, is the buffer object in which the vertex coordinates of the
parts are stored.
262 drawSegment(gl, n, g_baseBuffer, viewProjMatrix, a_Position, u_MvpMatrix,
➥u_NormalMatrix);
This function is defined at line 311 and operates as follows. It assigns a buffer object to
the attribute variable a_Position and enables it at lines 312 to 316 before drawing at line
327. Here, num and type , which are just stored as buffer object properties, are used.
310 // Draw segments
311 function drawSegment(gl, n, buffer, viewProjMatrix, a_Position,
➥u_MvpMatrix, u_NormalMatrix) {
312 gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
313 // Assign the buffer object to the attribute variable
314 gl.vertexAttribPointer(a_Position, buffer.num, buffer.type, false, 0, 0);
315 // Enable the assignment
316 gl.enableVertexAttribArray(a_Position);
317
318 // Calculate model view project matrix and set it to u_MvpMatrix
...
322 // Calculate transformation matrix for normal and set it to u_NormalMatrix
...
327 gl.drawElements(gl.TRIANGLES, n, gl.UNSIGNED_BYTE, 0);
328 }
This time you don’t need to scale objects with the model matrix because you have
prepared the vertex coordinates per part, so there is no need to store and retrieve the
matrix. Therefore, pushMatrix() and popMatrix() are not necessary.
Shader and Program Objects: The Role of initShaders()
Finally, before we wrap up this chapter, let’s examine one of the convenience functions
defined for this book: initShaders() . This function has been used in all the sample
programs and has hidden quite a lot of complex detail about setting up and using shaders.
We have deliberately left this explanation to the end of this chapter to ensure you have a
good understanding of the basics of WebGL before tackling some of these complex details.
We should note that it’s not actually necessary to master these details. For some readers
it will be sufficient to simply reuse the initShaders() function we supply and skip this
section. However, for those who are interested, let’s take a look.
initShaders() carries out the routine work to make shaders available in WebGL. It
consists of seven steps:
1. Create shader objects ( gl.createShader() ).
2. Store the shader programs (to avoid confusion, we refer to them as “source code”) in
the shader objects ( gl.shaderSource() ).
3. Compile the shader objects ( gl.compileShader() ).
4. Create a program object ( gl.createProgram() ).
5. Attach the shader objects to the program object ( gl.attachShader() ).
6. Link the program object ( gl.linkProgram() ).
7. Tell the WebGL system the program object to be used ( gl.useProgram() ).
Each step is simple but when combined can appear complex, so let’s take a look at them
one by one. First, as you know from earlier, two types of objects are necessary to use
shaders: shader objects and program objects.
Shader object A shader object manages a vertex shader or a fragment shader. One
shader object is created per shader.
Program object A program object is a container that manages the shader objects. A vertex
shader object and a fragment shader object (two shader objects in total)
must be attached to a program object in WebGL.
The relationship between a program object and shader objects is shown in Figure 9.10 .
Figure 9.10 The relationship between a program object and shader objects
Using this information, let’s discuss the preceding seven steps sequentially.
Create Shader Objects (gl.createShader())
All shader objects have to be created with a call to gl.createShader() before using them.
gl.createShader(type)
Create a shader of the specified type .
Parameters type Specifies the type of shader object to be created: either gl.VERTEX_SHADER (a vertex shader) or gl.FRAGMENT_SHADER (a fragment shader).
Return value Non-null The created shader object.
null The creation of the shader object failed.
Errors INVALID_ENUM The specified type is none of the above.
gl.createShader() creates a vertex shader or a fragment shader according to the specified
type . If you do not need the shader any more, you can delete it with gl.deleteShader() .
gl.deleteShader(shader)
Delete the shader object.
Parameters shader Specifies the shader object to be deleted.
Return value None
Errors None
Note that the specified shader object is not deleted immediately if it is still in use (that is,
it is attached to a program object using gl.attachShader() , which is discussed in a few
pages). The shader object specified as an argument of gl.deleteShader() will be deleted
when a program object no longer uses it.
Store the Shader Source Code in the Shader Objects (gl.shaderSource())
A shader object has storage to store the shader source code (written as a string in the
JavaScript program or in the separate file; see Appendix F , “Loading Shader Programs from
Files”). You use gl.shaderSource() to store the source code in a shader object.
gl.shaderSource(shader, source)
Store the source code specified by source in the shader object specified by shader . If any
source code was previously stored in the shader object, it is replaced by new source code.
Parameters shader Specifies the shader object in which the program is stored.
source Specifies the shader source code (string)
Return value None
Errors None
Compile Shader Objects (gl.compileShader())
After storing the shader source code in the shader object, you have to compile it so that
it can be used in the WebGL system. Unlike JavaScript, and like C or C++, shaders need
to be compiled before use. In this process, the source code stored in a shader object is
compiled to executable format (binary) and kept in the WebGL system. Use
gl.compileShader() to compile. Note, if you replace the source code in the shader object with a call
to gl.shaderSource() after compiling, the compiled binary kept in the shader object is
not replaced. You have to recompile it explicitly.
gl.compileShader(shader)
Compile the source code stored in the shader object specified by shader .
Parameters shader Specifies the shader object in which the source code to be
compiled is stored.
Return Value None
Errors None
When executing gl.compileShader() , it is possible that a compilation error occurs due to
mistakes in the source code. You can check for such errors, as well as the status of the
shader object, using gl.getShaderParameter() .
gl.getShaderParameter(shader, pname)
Get the information specified by pname from the shader object specified by shader .
Parameters shader Specifies the shader object.
pname Specifies the information to get from the shader:
gl.SHADER_TYPE , gl.DELETE_STATUS , or
gl.COMPILE_STATUS .
Return value The following, depending on pname :
gl.SHADER_TYPE The type of shader ( gl.VERTEX_SHADER or gl.FRAGMENT_SHADER )
gl.DELETE_STATUS Whether the deletion has succeeded ( true or false )
gl.COMPILE_STATUS Whether the compilation has succeeded ( true or false )
Errors INVALID_ENUM pname is none of the above values.
To check whether the compilation succeeded, you can call gl.getShaderParameter() with
gl.COMPILE_STATUS specified in pname .
If the compilation has failed, gl.getShaderParameter() returns false , and the error infor-
mation is written in the information log for the shader in the WebGL system. This infor-
mation can be retrieved with gl.getShaderInfoLog() .
gl.getShaderInfoLog(shader)
Retrieve the information log from the shader object specified by shader .
Parameters shader Specifies the shader object from which the information log is
retrieved.
Return value non-null The string containing the logged information .
null An error was generated.
Errors None
Although the exact details of the logged information are implementation specific, almost
all WebGL systems return error messages containing the line numbers where the compiler
has detected the errors in the program. For example, assume that you compiled a fragment
shader program as follows:
var FSHADER_SOURCE =
'void main() {\n' +
' gl.FragColor = vec4(1.0, 0.0, 0.0, 1.0);\n' +
'}\n';
Because the second line is incorrect in this case ( gl. must be gl_ ), the error messages
displayed in the JavaScript console of Chrome will be similar to those shown in Figure 9.11 .
Figure 9.11 A compile error in a shader
The first message indicates that gl at line 2 is undeclared.
failed to compile shader: ERROR: 0:2: 'gl' : undeclared identifier
cuon-utils.js:88
The reference to cuon-utils.js:88 on the right means that the error has been detected in
gl.getShaderInfoLog() , which was invoked at line 88 of the cuon-utils.js file, where
initShaders() is defined.
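Although the corresponding error-handling lines are elided from Listing 9.7 later in this chapter, the check that produces this kind of message follows a common pattern, roughly as in the following sketch (not the verbatim code of cuon-utils.js ):

gl.compileShader(shader);
var compiled = gl.getShaderParameter(shader, gl.COMPILE_STATUS);
if (!compiled) {
  var error = gl.getShaderInfoLog(shader);  // retrieve the compile error log
  console.log('failed to compile shader: ' + error);
  gl.deleteShader(shader);                  // the failed shader object is no longer needed
}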
Create a Program Object (gl.createProgram())
As mentioned before, a program object is a container to store the shader objects and is
created by gl.createProgram() . You are already familiar with this program object because
it is the object you pass as the first argument of gl.getAttribLocation() and
gl.getUniformLocation() .
gl.createProgram()
Create a program object.
Parameters None
Return value non-null The newly created program object.
null Failed to create a program object.
Errors None
A program object can be deleted by using gl.deleteProgram() .
gl.deleteProgram(program)
Delete the program object specified by program . If the program object is not referred to
from anywhere, it is deleted immediately. Otherwise, it will be deleted when it is no
longer referred to.
Parameters program Specifies the program object to be deleted.
Return value None
Errors None
Once the program object has been created, you attach the two shader objects to it.
Attach the Shader Objects to the Program Object (gl.attachShader())
Because you always need two shaders in WebGL—a vertex shader and a fragment shader—
you must attach both of them to the program object with gl.attachShader() .
gl.attachShader(program, shader)
Attach the shader object specified by shader to the program object specified by program .
Parameters program Specifies the program object.
shader Specifies the shader object to be attached to
program .
Return value None
Errors INVALID_OPERATION shader has already been attached to program .
It is not necessary to compile the shader object or to store any source code in it before it is attached to the program
object. You can detach the shader object with gl.detachShader() .
gl.detachShader(program, shader)
Detach the shader object specified by shader from the program object specified by
program .
Parameters program Specifies the program object.
shader Specifies the shader object to be detached from
program.
Return value None
Errors INVALID_OPERATION shader is not attached to program .
Link the Program Object (gl.linkProgram())
After attaching shader objects to a program object, you need to link the shader objects.
You use gl.linkProgram() to link the shader objects in the program object.
gl.linkProgram(program)
Link the program object specified by program.
Parameters program Specifies the program object to be linked.
Return value None
Errors None
During linking, various constraints of the WebGL system are checked: (1) when varying
variables are declared in a vertex shader, whether varying variables with the same names
and types are declared in a fragment shader, (2) whether a vertex shader has written data
to varying variables used in a fragment shader, (3) when the same uniform variables
are used in both a vertex shader and a fragment shader, whether their types and names
match, (4) whether the numbers of attribute variables, uniform variables, and varying
variables do not exceed their upper limits, and so on.
After linking the program object, it is always good programming practice to check whether
it succeeded. The result of linking can be confirmed with gl.getProgramParameter() .
gl.getProgramParameter(program, pname)
Return information about pname for the program object specified by program . The return
value differs depending on pname .
Parameters program Specifies the program object.
pname Specifies any one of gl.DELETE_STATUS , gl.LINK_STATUS , gl.VALIDATE_STATUS , gl.ATTACHED_SHADERS , gl.ACTIVE_ATTRIBUTES , or gl.ACTIVE_UNIFORMS .
Return value Depending on pname , the following values can be returned:
gl.DELETE_STATUS Whether the program has been deleted ( true or false )
gl.LINK_STATUS Whether the program was linked successfully ( true or false )
gl.VALIDATE_STATUS Whether the program was validated successfully ( true or false )¹
gl.ATTACHED_SHADERS The number of attached shader objects
gl.ACTIVE_ATTRIBUTES The number of attribute variables in the vertex shader
gl.ACTIVE_UNIFORMS The number of uniform variables
Errors INVALID_ENUM pname is none of the above values.
If linking succeeded, you are returned an executable program object. Otherwise, you can
get the information about the linking from the information log of the program object
with gl.getProgramInfoLog() .
gl.getProgramInfoLog(program)
Retrieve the information log from the program object specified by program .
Parameters program Specifies the program object from which the information log is
retrieved.
Return value The string containing the logged information
Errors None
1 A program object may fail to execute even if it was linked successfully, such as if no texture units are
set for the sampler. This can only be detected when drawing, not when linking. Because this check
takes time, check for these errors only when debugging and turn off otherwise.
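Putting gl.linkProgram() , gl.getProgramParameter() , and gl.getProgramInfoLog() together, a typical link check looks roughly like the following sketch (not the verbatim code of cuon-utils.js ; vertexShader and fragmentShader are assumed to be the attached shader objects):

gl.linkProgram(program);
var linked = gl.getProgramParameter(program, gl.LINK_STATUS);
if (!linked) {
  var error = gl.getProgramInfoLog(program);  // retrieve the link error log
  console.log('failed to link program: ' + error);
  gl.deleteProgram(program);                  // clean up the failed objects
  gl.deleteShader(vertexShader);
  gl.deleteShader(fragmentShader);
}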
Tell the WebGL System Which Program Object to Use (gl.useProgram())
The last step is to tell the WebGL system which program object to use when drawing by
making a call to gl.useProgram() .
gl.useProgram(program)
Tell the WebGL system that the program object specified by program will be used.
Parameters program Specifies the program object to be used.
Return value None
Errors None
One powerful feature of this function is that you can use it during drawing to switch
between multiple shaders prepared in advance. This will be discussed and used in
Chapter 10 .
With this final step, the preparation for drawing with the shaders is finished. As you have
seen, initShaders() hides quite a lot of detail and can be safely used without worrying
about this detail. Essentially, once executed, the vertex and fragment shaders are set up
and can be used with calls to gl.drawArrays() or gl.drawElements() .
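For reference, the sample programs in this book call it at the start of main() in the following way, where VSHADER_SOURCE and FSHADER_SOURCE are the shader source strings defined at the top of each sample:

if (!initShaders(gl, VSHADER_SOURCE, FSHADER_SOURCE)) {
  console.log('Failed to initialize shaders.');
  return;
}
// After this call, gl.program holds the linked program object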
Now that you have an understanding of the steps and appropriate WebGL functions used
in initShaders() , let’s take a look at the program flow of initShaders() as defined in
cuon-utils.js .
The Program Flow of initShaders()
initShaders() is composed of two main functions: createProgram() , which creates a
linked program object, and loadShader() , called from createProgram() , which creates the
compiled shader objects. Both are defined in cuon-utils.js . Here, you will work through
initShaders() in order from the top (see Listing 9.5 ). Note that in contrast to the normal
code samples used in the book, the comments in this code are in the JavaDoc form, which
is used in the convenience libraries.
Listing 9.5 initShaders()
1 // cuon-utils.js
2 /**
3 * Create a program object and make current
4 * @param gl GL context
5 * @param vshader a vertex shader program (string)
6 * @param fshader a fragment shader program (string)
7 * @return true, if the program object was created and successfully made current
8 */
9 function initShaders(gl, vshader, fshader) {
10 var program = createProgram(gl, vshader, fshader);
...
16 gl.useProgram(program);
17 gl.program = program;
18
19 return true;
20 }
First, initShaders() creates a linked program object with createProgram() at line 10 and
tells the WebGL system to use the program object at line 16. Then it sets the program
object to the property named program of the gl object.
Next, look at createProgram() in Listing 9.6 .
Listing 9.6 createProgram()
22 /**
23 * Create the linked program object
24 * @param gl GL context
25 * @param vshader a vertex shader program(string)
26 * @param fshader a fragment shader program(string)
27 * @return created program object, or null if the creation has failed.
28 */
29 function createProgram(gl, vshader, fshader) {
30 // Create shader objects
31 var vertexShader = loadShader(gl, gl.VERTEX_SHADER, vshader);
32 var fragmentShader = loadShader(gl, gl.FRAGMENT_SHADER, fshader);
...
37 // Create a program object
38 var program = gl.createProgram();
...
43 // Attach the shader objects
44 gl.attachShader(program, vertexShader);
45 gl.attachShader(program, fragmentShader);
46
47 // Link the program object
48 gl.linkProgram(program);
49
50 // Check the result of linking
51 var linked = gl.getProgramParameter(program, gl.LINK_STATUS);
...
60 return program;
61 }
The function createProgram() creates the shader objects for the vertex and the frag-
ment shaders, which are loaded using loadShader() at lines 31 and 32. The shader
object returned from loadShader() contains the stored shader source code and its compiled version.
The program object, to which the shader objects created here will be attached, is created
at line 38, and the vertex and fragment shader objects are attached at lines 44 and 45.
Then createProgram() links the program object at line 48 and checks the result at line 51.
If the linking has succeeded, it returns the program object at line 60.
Finally, let’s look at loadShader() ( Listing 9.7 ) which was invoked at lines 31 and 32 from
within createProgram() .
Listing 9.7 loadShader()
63 /**
64 * Create a shader object
65 * @param gl GL context
66 * @param type the type of the shader object to be created
67 * @param source a source code of a shader (string)
68 * @return created shader object, or null if the creation has failed.
69 */
70 function loadShader(gl, type, source) {
71 // Create a shader object
72 var shader = gl.createShader(type);
...
78 // Set source codes of the shader
79 gl.shaderSource(shader, source);
80
81 // Compile the shader
82 gl.compileShader(shader);
83
84 // Check the result of compilation
85 var compiled = gl.getShaderParameter(shader, gl.COMPILE_STATUS);
...
93 return shader;
94 }
First, loadShader() creates a shader object at line 72. It associates the source code with the
object at line 79 and compiles it at line 82. Finally, it checks the result of compilation
at line 85 and, if no errors have occurred, returns the shader object to createProgram() ,
which attaches it to the program object.
Summary
This chapter is the final one to explore basic features of WebGL. It looked at how to draw
and manipulate complex 3D objects composed of multiple segments organized in a hier-
archical structure. This technique is important for understanding how to use simple 3D
objects like cubes or blocks to build up more complex objects like robots or game charac-
ters. In addition, you looked at one of the most complex convenience functions we have
provided for this book, initShaders() , which has been treated as a black box up until
now. You saw the details of how shader objects are created and managed by program
objects, so you have a better sense of the internal structure of shaders and how WebGL
manages them through program objects.
At this stage you have a full understanding of WebGL and are capable of writing your
own complex 3D scenes using the expressive power of WebGL. In the next chapter, we
will outline various advanced techniques used in 3D graphics and leverage what you have
learned so far to show how WebGL can support these techniques.
Chapter 10
Advanced Techniques
This chapter includes a “grab-bag” of interesting techniques that you should find useful
for creating your WebGL applications. The techniques are mostly stand-alone, so you
can select and read any section that interests you. Where there are dependencies, they
are clearly identified. The explanations in this chapter are terse in order to include as
many techniques as possible. However, the sample programs on the website include
comprehensive comments, so please refer to them as well.
Rotate an Object with the Mouse
When creating WebGL applications, sometimes you want users to be able to control 3D
objects with the mouse. In this section, you construct a sample program RotateObject ,
which allows users to rotate a cube by dragging it with the mouse. To make the
program simple, it uses a cube, but the basic method is applicable to any object. Figure
10.1 shows a screen shot of the cube that has a texture image mapped onto it.
Figure 10.1 A screen shot of RotateObject
How to Implement Object Rotation
Rotating a 3D object is simply the application of a technique you’ve already studied for
2D objects—transforming the vertex coordinates by using the model view projection
matrix. The process requires you to create a rotation matrix based on the mouse move-
ment, change the model view projection matrix, and then transform the coordinates by
using the matrix.
You can obtain the amount of mouse movement by simply recording the position where
the mouse is initially clicked and then subtracting that position from the new position as
the mouse moves. Clearly, an event handler will be needed to calculate the mouse move-
ment, and then this will be converted into an angle that will rotate the object. Let’s take a
look at the sample program.
Sample Program (RotateObject.js)
Listing 10.1 shows the sample program. As you can see, the shaders do not do anything
special. Line 9 in the vertex shader transforms the vertex coordinates by using the model
view projection matrix, and line 10 passes the texture coordinates to the fragment shader, which maps the texture image onto the cube.
Listing 10.1 RotateObject.js
1 // RotateObject.js
2 // Vertex shader program
3 var VSHADER_SOURCE =
...
8 'void main() {\n' +
9 ' gl_Position = u_MvpMatrix * a_Position;\n' +
10 ' v_TexCoord = a_TexCoord;\n' +
11 '}\n';
...
24 function main() {
...
42 var n = initVertexBuffers(gl);
...
61 viewProjMatrix.setPerspective(30.0, canvas.width / canvas.height,
➥1.0, 100.0);
62 viewProjMatrix.lookAt(3.0, 3.0, 7.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
63
64 // Register the event handler
65 var currentAngle = [0.0, 0.0]; // [x-axis, y-axis] degrees
66 initEventHandlers(canvas, currentAngle);
...
74 var tick = function() { // Start drawing
75 draw(gl, n, viewProjMatrix, u_MvpMatrix, currentAngle);
76 requestAnimationFrame(tick, canvas);
77 };
78 tick();
79 }
...
138 function initEventHandlers(canvas, currentAngle) {
139 var dragging = false; // Dragging or not
140 var lastX = -1, lastY = -1; // Last position of the mouse
141
142 canvas.onmousedown = function(ev) { // Mouse is pressed
143 var x = ev.clientX, y = ev.clientY;
144 // Start dragging if a mouse is in <canvas>
145 var rect = ev.target.getBoundingClientRect();
146 if (rect.left <= x && x < rect.right && rect.top <= y && y < rect.bottom) {
147 lastX = x; lastY = y;
148 dragging = true;
149 }
150 };
151 // Mouse is released
152 canvas.onmouseup = function(ev) { dragging = false; };
153
154 canvas.onmousemove = function(ev) { // Mouse is moved
155 var x = ev.clientX, y = ev.clientY;
156 if (dragging) {
157 var factor = 100/canvas.height; // The rotation ratio
158 var dx = factor * (x - lastX);
159 var dy = factor * (y - lastY);
160 // Limit x-axis rotation angle to -90 to 90 degrees
161 currentAngle[0] = Math.max(Math.min(currentAngle[0] + dy, 90.0), -90.0);
162 currentAngle[1] = currentAngle[1] + dx;
163 }
164 lastX = x, lastY = y;
165 };
166 }
167
168 var g_MvpMatrix = new Matrix4(); // The model view projection matrix
169 function draw(gl, n, viewProjMatrix, u_MvpMatrix, currentAngle) {
170 // Calculate the model view projection matrix
171 g_MvpMatrix.set(viewProjMatrix);
172 g_MvpMatrix.rotate(currentAngle[0], 1.0, 0.0, 0.0); // x-axis
173 g_MvpMatrix.rotate(currentAngle[1], 0.0, 1.0, 0.0); // y-axis
174 gl.uniformMatrix4fv(u_MvpMatrix, false, g_MvpMatrix.elements);
175
176 gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
177 gl.drawElements(gl.TRIANGLES, n, gl.UNSIGNED_BYTE, 0);
178 }
At lines 61 and 62 of main() in JavaScript, the view projection matrix is calculated in
advance. You will have to change the model matrix on-the-fly according to the amount of
mouse movement.
The code from line 65 registers the event handlers, a key part of this sample program. The
variable currentAngle is initialized at line 65 and used to hold the current rotation angle.
Here, it is an array because it needs to handle two rotation angles around the x-axis and
y-axis. The actual registration of the event handlers is done inside initEventHandlers() ,
called at line 66. The cube is then drawn repeatedly by the function tick() , defined from line 74.
initEventHandlers() is defined at line 138. The code from line 142 handles mouse down,
the code from line 152 handles mouse up, and the code from line 154 handles the mouse
movement.
The processing when the mouse button is first pushed at line 142 is simple. Line 146
checks whether the mouse has been pressed inside the <canvas> element. If it is inside the
<canvas>, line 147 saves that position in lastX and lastY . Then the variable dragging ,
which indicates dragging has begun, is set to true at line 148.
The processing of the mouse button release at line 152 is simple. Because this indicates
that dragging is done, the code simply sets the variable dragging back to false .
The processing from line 154 is the critical part and tracks the movement of the mouse.
Line 156 checks whether dragging is taking place and, if it is, lines 158 and 159 calculate
how far the mouse has moved, storing the results in dx and dy . These values are scaled, using
factor , which is a function of the canvas size. Once the distance dragged has been calcu-
lated, it can be used to determine the new angle by directly adding to the current angles
at line 161 and 162. The code limits rotation from –90 to +90 degrees simply to show the
technique; you are free to remove this. Because the mouse has moved, its position is saved
in lastX and lastY .
Once you have successfully transformed the movement of the mouse into a rotation
angle, you can let the rotation matrix handle the updates and draw the results using
tick() . These operations are done at lines 172 and 173.
This quick review of a technique to calculate the rotation angle is only one approach.
Others, such as placing virtual track balls around the object, are described in detail in the
book 3D User Interfaces .
Select an Object
When your application requires users to be able to control 3D objects interactively, you
will need a technique to allow users to select objects. There are many uses of this tech-
nique, such as selecting a 3D button created by a 3D model instead of the conventional
2D GUI button, or selecting a photo among multiple photos in a 3D scene.
Selecting a 3D object is generally more complex than selecting a 2D one because of the
mathematics required to determine if the mouse is over a nonregular shape. However,
you can use a simple trick, shown in the sample program, to avoid that complexity. In
this sample, PickObject , the user can click a rotating cube, which causes a message to be
displayed (see Figure 10.2 ). First, run the sample program and experiment with it for a
while to get the feeling of how it works.
Figure 10.2 PickObject
Figure 10.2 shows the message displayed when the cube is clicked. The message says,
“The cube was selected!” Also check what happens when you click the black part of the
background.
How to Implement Object Selection
This program goes through the following steps to check whether the cube was clicked:
1. When the mouse is pressed, draw the cube with a single color “red” (see the middle
of Figure 10.3 ).
2. Read the pixel value (color) of the selected point.
3. Redraw the cube with its original color (right in Figure 10.3 ).
4. If the color of the pixel is red, display, “The cube was selected!”
When the cube is drawn with a single color (red in this case), you can quickly see which
part of the drawing area the cube occupies. After reading the pixel value at the position
of the mouse pointer when the mouse is clicked, you can determine that the mouse was
above the cube if the pixel color is red.
Figure 10.3 The object drawn at the point of mouse pressing
To ensure that the viewer doesn’t see the cube flash red, you need to draw and redraw in
the same function. Let’s take a look at the actual sample program.
Sample Program (PickObject.js)
Listing 10.2 shows the sample program. The processing in this sample mainly takes place
in the vertex shader. To implement step 1, you must inform the vertex shader that the
mouse has been clicked so that it draws the cube red. The variable u_Clicked transmits
this information and is declared at line 7 in the vertex shader. When the mouse is pressed,
u_Clicked is set to true in the JavaScript and tested at line 11. If true , the color red is
assigned to v_Color ; if not, the color of the cube ( a_Color ) is directly assigned to v_Color .
This turns the cube red when the mouse is pressed.
Listing 10.2 PickObject.js
1 // PickObject.js
2 // Vertex shader program
3 var VSHADER_SOURCE =
...
6 'uniform mat4 u_MvpMatrix;\n' +
7 'uniform bool u_Clicked;\n' + // Mouse is pressed
8 'varying vec4 v_Color;\n' +
9 'void main() {\n' +
10 ' gl_Position = u_MvpMatrix * a_Position;\n' +
11 ' if (u_Clicked) {\n' + // Draw in red if mouse is pressed <-(1)
12 ' v_Color = vec4(1.0, 0.0, 0.0, 1.0);\n' +
13 ' } else {\n' +
14 ' v_Color = a_Color;\n' +
15 ' }\n' +
16 '}\n';
17
18 // Fragment shader program
...
25 ' gl_FragColor = v_Color;\n' +
...
30 function main() {
...
60 var u_Clicked = gl.getUniformLocation(gl.program, 'u_Clicked');
...
71 gl.uniform1i(u_Clicked, 0); // Pass false to u_Clicked
72
73 var currentAngle = 0.0; // Current rotation angle
74 // Register the event handler
75 canvas.onmousedown = function(ev) { // Mouse is pressed
76 var x = ev.clientX, y = ev.clientY;
77 var rect = ev.target.getBoundingClientRect();
78 if (rect.left <= x && x < rect.right && rect.top <= y && y < rect.bottom) {
79 // Check if it is on object
80 var x_in_canvas = x - rect.left, y_in_canvas = rect.bottom - y;
81 var picked = check(gl, n, x_in_canvas, y_in_canvas, currentAngle,
➥u_Clicked, viewProjMatrix, u_MvpMatrix);
82 if (picked) alert('The cube was selected! '); <-(4)
83 }
84 }
...
92 }
...
147 function check(gl, n, x, y, currentAngle, u_Clicked, viewProjMatrix,
➥u_MvpMatrix) {
148 var picked = false;
149 gl.uniform1i(u_Clicked, 1); // Draw the cube with red
150 draw(gl, n, currentAngle, viewProjMatrix, u_MvpMatrix);
151 // Read pixel at the clicked position
152 var pixels = new Uint8Array(4); // Array for storing the pixels
153 gl.readPixels(x, y, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixels); <-(2)
154
155 if (pixels[0] == 255) // The mouse is on the cube if pixels[0] is 255
156 picked = true;
157
158 gl.uniform1i(u_Clicked, 0); // Pass false to u_Clicked: redraw cube
159 draw(gl, n, currentAngle, viewProjMatrix, u_MvpMatrix); // <-(3)
160
161 return picked;
162 }
Let’s take a look from line 30 of main() in JavaScript. Line 60 obtains the storage location
for u_Clicked , and line 71 assigns the initial value of u_Clicked to be false .
Line 75 registers the event handler to be called when the mouse has been clicked. This
event handler function does a sanity check to see if the clicked position is inside the
<canvas> element at line 78. If it is, it calls check() at line 81. This function checks
whether the position, specified by the third and fourth arguments, is on the cube (see next
paragraph). If so, it returns true , which causes a message to be displayed at line 82.
The function check() begins from line 147. This function processes steps (2) and (3) from
the previous section together. Line 149 informs the vertex shader that the click event has
occurred by passing 1 ( true ) to u_Clicked . Then line 150 draws the cube with the current
rotation angle. Because u_Clicked is true , the cube is drawn in red. Then the pixel value
of the clicked position is read from the color buffer at line 153. The following shows the
gl.readPixels() function used here.
gl.readPixels(x, y, width, height, format, type, pixels)
Read a block of pixels from the color buffer¹ and store it to the array pixels . x , y , width , and height define the block as a rectangle.
Parameters x, y Specify the position of the first pixel that is read from the
buffer.
width, height Specify the dimensions of the pixel rectangle.
format Specifies the format of the pixel data. gl.RGBA must be
specified.
type Specifies the data type of the pixel data. gl.UNSIGNED_BYTE
must be specified.
pixels Specifies the typed array ( Uint8Array ) for storing the pixel
data.
Return value None
Errors INVALID_VALUE: pixels is null . Either width or height is negative.
INVALID_OPERATION: pixels is not large enough to store the pixel data.
INVALID_ENUM: format or type is none of the above values.
The pixel value that was read is stored in the pixels array. This array is defined at line
152, and the R, G, B, and A values are stored in pixels[0] , pixels[1] , pixels[2] , and
pixels[3] , respectively. Because, in this sample program, you know that the only colors
used are red for the cube and black for the background, you can see if the mouse is on
the cube by checking the values for pixels[0] . This is done at line 155, and if it is red, it
changes picked to true .
1 If a framebuffer object is bound to gl.FRAMEBUFFER , this method reads the pixel values from the
object. We explain the object in the later section “Use What You’ve Drawn as a Texture Image.”
Then line 158 sets u_Clicked to false and redraws the cube at line 159. This turns the
cube back to its original color. Line 161 returns picked as the return value.
Note, if at this point you call any function that returns control to the browser, such as
alert() , the content of the color buffer will be displayed on the <canvas> at that point.
For example, if you execute alert('The cube was selected!') at line 156, the red cube
will be displayed when you click the cube.
This approach, although simple, can handle more than one object by assigning differ-
ent colors to each object. For example, red, blue, and green are enough if there are three
objects. For larger numbers of objects, you can use individual bits. Because there are 8 bits
for each component in RGBA, you can represent 255 objects just by using the R compo-
nent. However, if the 3D objects are complex or the drawing area is large, it will take
some time to process the selection of objects. To overcome this disadvantage, you can use
simplified models to select objects or shrink the drawing area. In such cases, you can use
the framebuffer object, which will be explained in the section “Use What You’ve Drawn as
a Texture Image” later in this chapter.
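A minimal sketch of this idea follows; the pickObjectAt() helper, the drawForPicking callback, and the fragment shader line mentioned in the comment are hypothetical and not part of PickObject.js . Each object is given a number from 1 to 255, drawn with that number encoded in the R component during the pick pass, and the number is then decoded from the pixel that is read back.

// Pick pass: each object with ID id is drawn so that
// gl_FragColor = vec4(id / 255.0, 0.0, 0.0, 1.0)
function pickObjectAt(gl, x, y, objects, drawForPicking) {
  for (var i = 0; i < objects.length; i++) {
    drawForPicking(gl, objects[i], i + 1);  // ID 0 is reserved for the background
  }
  var pixels = new Uint8Array(4);
  gl.readPixels(x, y, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
  var id = pixels[0];                       // the R component holds the object ID
  return id > 0 ? objects[id - 1] : null;   // null means the background was clicked
}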
Select the Face of the Object
You can also apply the method explained in the previous section to select a particular
face of an object. Let’s customize PickObject to build PickFace , a program that turns the
selected face white. Figure 10.4 shows PickFace .
Figure 10.4 PickFace
PickFace is easy once you understand how PickObject works. PickObject drew the cube
in red when the mouse was clicked, resulting in the object’s display area in the color
buffer being red. By reading the pixel value of the clicked point and seeing if the color
of the pixel at the position was red, the program could determine if the object had been
selected. PickFace goes one step further and inserts the information of which face has
been selected into the color buffer. Here, you will insert the information in the alpha
component of the RGBA value. Let’s take a look at the sample program.
Sample Program (PickFace.js)
PickFace.js is shown in Listing 10.3 . Some parts, such as the fragment shader, are
omitted for brevity.
Listing 10.3 PickFace.js
1 // PickFace.js
2 // Vertex shader program
3 var VSHADER_SOURCE =
4 'attribute vec4 a_Position;\n' +
5 'attribute vec4 a_Color;\n' +
6 'attribute float a_Face;\n' + // Surface number (Cannot use int)
7 'uniform mat4 u_MvpMatrix;\n' +
8 'uniform int u_PickedFace;\n' + // Surface number of selected face
9 'varying vec4 v_Color;\n' +
10 'void main() {\n' +
11 ' gl_Position = u_MvpMatrix * a_Position;\n' +
12 ' int face = int(a_Face);\n' + // Convert to int
13 ' vec3 color = (face == u_PickedFace) ? vec3(1.0):a_Color.rgb;\n'+
14 ' if(u_PickedFace == 0) {\n' + // Insert face number into alpha
15 ' v_Color = vec4(color, a_Face/255.0);\n' +
16 ' } else {\n' +
17 ' v_Color = vec4(color, a_Color.a);\n' +
18 ' }\n' +
19 '}\n';
...
33 function main() {
...
50 // Set vertex information
51 var n = initVertexBuffers(gl);
...
74 // Initialize selected surface
75 gl.uniform1i(u_PickedFace, -1);
76
77 var currentAngle = 0.0; // Current rotation angle (degrees)
78 // Register event handlers
79 canvas.onmousedown = function(ev) { // Mouse is pressed
80 var x = ev.clientX, y = ev.clientY;
81 var rect = ev.target.getBoundingClientRect();
82 if (rect.left <= x && x < rect.right && rect.top <= y && y < rect.bottom) {
83 // If clicked position is inside the <canvas>, update the face
84 var x_in_canvas = x - rect.left, y_in_canvas = rect.bottom - y;
85 var face = checkFace(gl, n, x_in_canvas, y_in_canvas,
➥currentAngle, u_PickedFace, viewProjMatrix, u_MvpMatrix);
86 gl.uniform1i(u_PickedFace, face); // Pass the surface number
87 draw(gl, n, currentAngle, viewProjMatrix, u_MvpMatrix);
88 }
89 }
...
99 function initVertexBuffers(gl) {
...
109 var vertices = new Float32Array([ // Vertex coordinates
110 1.0, 1.0, 1.0, -1.0, 1.0, 1.0, -1.0,-1.0, 1.0, 1.0,-1.0, 1.0,
111 1.0, 1.0, 1.0, 1.0,-1.0, 1.0, 1.0,-1.0,-1.0, 1.0, 1.0,-1.0,
...
115 1.0,-1.0,-1.0, -1.0,-1.0,-1.0, -1.0, 1.0,-1.0, 1.0, 1.0,-1.0
116 ]);
...
127 var faces = new Uint8Array([ // Surface number
128 1, 1, 1, 1, // v0-v1-v2-v3 Front
129 2, 2, 2, 2, // v0-v3-v4-v5 Right
...
133 6, 6, 6, 6, // v4-v7-v6-v5 Depth
134 ]);
...
154 if (!initArrayBuffer(gl, faces, gl.UNSIGNED_BYTE, 1,
➥'a_Face')) return -1; // Surface Information
...
164 }
165
166 function checkFace(gl, n, x, y, currentAngle, u_PickedFace, viewProjMatrix,
➥u_MvpMatrix) {
167 var pixels = new Uint8Array(4); // Array for storing the pixel
168 gl.uniform1i(u_PickedFace, 0); // Write surface number into alpha
169 draw(gl, n, currentAngle, viewProjMatrix, u_MvpMatrix);
170 // Read the pixels at (x, y). pixels[3] is the surface number
171 gl.readPixels(x, y, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
172
173 return pixels[3];
174 }
Let’s take a look from the vertex shader. a_Face at line 6 is the attribute variable used to
pass the surface number, which is then “coded” into the alpha value when the mouse
is clicked. The surface numbers are set up in initVertexBuffers() defined at line 99
and simply map vertices to a surface. Lines 127 onward define these mappings. So, for
example, vertices v0-v1-v2-v3 define a surface that is numbered 1, vertices v0-v3-v4-v5 are
numbered 2, and so on. Because each vertex needs a number to pass to the vertex shader,
there are four 1s written at line 128 to represent the first face.
If a face is already selected, u_PickedFace at line 8 informs the vertex shader of the
selected face number, allowing the shader to switch the way it draws the face based on
this information.
Line 12 converts a_Face , the surface number that is a float type, into an int type because
an int type cannot be used in the attribute variables ( Chapter 6 , “The OpenGL ES Shading
Language [GLSL ES]”). If the selected surface number is the same as the surface number
currently being manipulated, white is assigned to color at line 13. Otherwise, the original
surface color is assigned. If the mouse has been clicked (that is, u_PickedFace is set to 0),
the a_Face value is inserted into the alpha value and the cube is drawn (line 15).
Now, by passing 0 into u_PickedFace when the mouse is clicked, the cube is drawn with
an alpha value set to the surface number. u_PickedFace is initialized to –1 at line 75.
There is no surface with the number –1 (refer to the faces array at line 127), so the cube is
initially drawn without surfaces selected.
Let’s take a look at the essential processing of the event handler. u_PickedFace is passed
as an argument to checkFace() at line 85, which returns the surface number of the picked
face. checkFace() is defined at line 166. At line 168, 0 is passed to u_PickedFace to tell the vertex
shader that the mouse has been clicked. When draw() is called in the next line, the
surface number is inserted into the alpha value and the object is redrawn. Line 171 checks
the pixel value of the clicked point, and line 173 retrieves the inserted surface number
by using pixels[3] . (It is the alpha value, so the subscript is 3.) This surface number is
returned to the main code and then used at lines 86 and 87 to draw the cube. The vertex
shader handles the rest of the processing, as described earlier.
HUD (Head Up Display)
The Head Up Display, originally developed for aircraft, is a transparent display that pres-
ents data without requiring users to look away from their usual viewpoints. A similar
effect can be achieved in 3D graphics and used to overlay textual information on the 3D
scene. Here, you will construct a sample program that will display a diagram and some
information on top of the 3D graphics (HUD), as you can see in Figure 10.5 .
Figure 10.5 HUD
The goal of the program is to draw a triangle and some simple information about the 3D
scene, including the current rotation angle of the cube (from PickObject ) that will change
as the cube rotates.
How to Implement a HUD
This HUD effect can be implemented using HTML and the canvas function without
WebGL. This is done as follows:
1. In the HTML file, prepare a <canvas> to draw the 3D graphics using WebGL and
another <canvas> to draw the HUD using the 2D canvas functions. In other words,
prepare two <canvas> elements and place the HUD <canvas> on top of the WebGL <canvas>.
2. Draw the 3D graphics using the WebGL API on the <canvas> for WebGL.
3. Draw the HUD using the canvas functions on the <canvas> for the HUD.
As you can see, this is extremely simple and shows the power of WebGL and its ability to
mix 2D and 3D graphics within the browser. Let’s take a look at the sample program.
Sample Program (HUD.html)
Because we need to make changes to the HTML file to add the extra canvas, we show HUD.
html in Listing 10.4 ; the additions are the style attributes and the second <canvas> element at line 12.
Listing 10.4 HUD.html
1 <!DOCTYPE html>
2 <html lang="en">
...
8 <body onload="main()">
9   <canvas id="webgl" width="400" height="400" style="position: absolute; z-index: 0">
10    Please use a browser that supports "canvas"
11  </canvas>
12  <canvas id="hud" width="400" height="400" style="position: absolute; z-index: 1"></canvas>
...
18  <script src="HUD.js"></script>
19 </body>
20 </html>
The style attribute, used to define how an element looks or how it is arranged, allows you
to place the HUD canvas on top of the WebGL canvas. Style information is composed of
the property name and the value separated with a : as seen at line 9: style="position:
absolute" . Multiple style elements are separated with ; .
In this example, you use position , which specifies how the element is placed, and the
z-index , which specifies the hierarchical relationship.
You can specify the position of an element in absolute coordinates if you use absolute
for the position value. Because no explicit position is given here, all the elements with this
attribute are placed at the same position. z-index specifies the order in which elements
are displayed when multiple elements are at the same position. The element with the
larger number is displayed over the one with the smaller number. In this case, the
z-index of the <canvas> for the HUD at line 12 is 1.
The result of this code is two <canvas> elements placed at the same location, with
the <canvas> that displays the HUD on top of the <canvas> that displays the WebGL output.
Conveniently, the background of the canvas element is transparent by default, so the
WebGL canvas can be seen through the HUD canvas. Anything that is drawn on the HUD
canvas will appear over the 3D objects and create the effect of a HUD.
Sample Program (HUD.js)
Next, let’s take a look at HUD.js in Listing 10.5 . There are two changes made compared to
PickObject.js :
1. Retrieve the rendering context for the <canvas> for the HUD and use it to
draw.
2. Register the mouse-click event handler on the <canvas> for the HUD
and not on the <canvas> for WebGL.
Step 1 simply uses the source code used in Chapter 2 , "Your First Step with WebGL," to
draw a triangle onto the <canvas>. Step 2 is required to ensure that mouse click informa-
tion is passed to the HUD canvas rather than the WebGL canvas. The vertex shader and
fragment shader are the same as PickObject.js .
Listing 10.5 HUD.js
1 // HUD.js
...
30 function main() {
31 // Retrieve <canvas> element
32 var canvas = document.getElementById('webgl');
33 var hud = document.getElementById('hud');
...
40 // Get the rendering context for WebGL
41 var gl = getWebGLContext(canvas);
42 // Get the rendering context for 2DCG
43 var ctx = hud.getContext('2d');
...
82 // Register the event handler
83 hud.onmousedown = function(ev) { // Mouse is pressed
...
89 check(gl, n, x_in_canvas, y_in_canvas, currentAngle, u_Clicked,
➥viewProjMatrix, u_MvpMatrix);
...
91 }
92
93 var tick = function() { // Start drawing
94 currentAngle = animate(currentAngle);
95 draw2D(ctx, currentAngle); // Draw 2D
96 draw(gl, n, currentAngle, viewProjMatrix, u_MvpMatrix);
97 requestAnimationFrame(tick, canvas);
98 };
99 tick();
100 }
...
184 function draw2D(ctx, currentAngle) {
185 ctx.clearRect(0, 0, 400, 400); // Clear
186 // Draw triangle with white lines
187 ctx.beginPath(); // Start drawing
188 ctx.moveTo(120, 10); ctx.lineTo(200, 150); ctx.lineTo(40, 150);
189 ctx.closePath();
190 ctx.strokeStyle = 'rgba(255, 255, 255, 1)'; // Set the line color
191 ctx.stroke(); // Draw triangle with white lines
192 // Draw white letters
193 ctx.font = '18px "Times New Roman"';
194 ctx.fillStyle = 'rgba(255, 255, 255, 1)'; // Set the letter color
195 ctx.fillText('HUD: Head Up Display', 40, 180);
196 ctx.fillText('Triangle is drawn by Hud API.', 40, 200);
197 ctx.fillText('Cube is drawn by WebGL API.', 40, 220);
198 ctx.fillText('Current Angle: '+ Math.floor(currentAngle), 40, 240);
199 }
Because the processing flow of the program is straightforward, let’s take a look from
main() at line 30. First, line 33 obtains the <canvas> element for the HUD. This is used to
get the drawing context for the 2D graphics ( Chapter 2 ) at line 43, which is used to draw
the HUD. You register the mouse-click event handler for the HUD canvas ( hud ) instead of
the WebGL canvas in PickObject.js . This is because the event goes to the HUD canvas,
which is placed on top of the WebGL canvas.
The code from line 93 handles the animation and uses draw2D() , added at line 95, to draw
the HUD information.
draw2D() is defined at line 184 and takes two parameters: ctx , the context to draw on the
HUD canvas, and the current rotation angle, currentAngle . Line 185 clears the HUD canvas
using the clearRect() method, which takes the upper-left corner, the width, and the
height of the rectangle to clear. Lines 187 to 191 draw the triangle; unlike drawing
a rectangle as explained in Chapter 2 , this requires that you first define the path (outline) of
the triangle. These lines define the path, set the line color, and stroke the triangle.
Lines 193 onward specify the text color and font and then use fillText() , which speci-
fies the letters to draw as the first parameter and the x and y coordinates to draw as the
second and third parameters, to actually write the text. Line 198 displays the current
rotation angle and uses Math.floor() to truncate the numbers below the decimal point.
Line 185 clears the canvas because the displayed value (rotation angle) changes at each
drawing.
Display a 3D Object on a Web Page (3DoverWeb)
Displaying a 3D object on a web page is simple with WebGL and the inverse of the HUD
example. In this case, the WebGL canvas is on top of the web page, and the canvas is set
to transparent. Figure 10.6 shows 3DoverWeb .
Figure 10.6 3DoverWeb²
3DoverWeb.js is based on PickObject.js with almost no changes. The only change is that
the alpha value of the clear color is changed from 1.0 to 0.0 at line 55.
55 gl.clearColor(0.0, 0.0, 0.0, 0.0 );
By making the alpha value 0.0, the background of the WebGL canvas becomes transpar-
ent, and you can see the web page behind the WebGL <canvas>. You can also experiment
with the alpha value; any value other than 1.0 changes the transparency and makes the
web page more or less visible.
Fog (Atmospheric Effect)
In 3D graphics, the term fog is used to describe the effect that makes a distant object seem
hazy. The term describes objects in any medium, so objects underwater can also have a
2 The sentences on the web page on the background are from the book The Design of Design (by
Frederick P. Brooks Jr, Pearson).
fog effect applied. Here, you construct a sample program Fog that realizes the fog effect.
Figure 10.7 shows a screen shot. You can adjust the density of the fog with the up/down
arrow keys. Try running the sample program and experiment with the effect.
Figure 10.7 Fog
How to Implement Fog
There are various ways to calculate fog, but here you will use a linear computation ( linear
fog ) because the calculation is easy. The linear fog method determines the density of the
fog by setting the starting point (the distance where the object starts to become hazy) and
the end point (where the object is completely obscured). The density of the fog between
these points is changed linearly. Note that the end point is not where the fog ends; rather,
it is where the fog becomes so dense that it obscures all objects. We will call how clearly
we can see the object the fog factor ; it is calculated, in the case of linear fog, as follows:
Equation 10.1
fog factor = (end point - distance from eye point) / (end point - starting point)
where (starting point) ≤ (distance from eye point) ≤ (end point)
When the fog factor is 1.0, you can see the object completely, and if it is 0.0, you cannot see
it at all (see Figure 10.8 ). The fog factor is 1.0 when the (distance from eye point)
< (starting point) , and 0.0 when (end point) < (distance from eye point) .
Figure 10.8 Fog factor (the fog factor is 1.0 up to the start distance and falls linearly to 0.0 at the end distance from the eye point)
You can calculate the color of a fragment based on the fog factor, as follows in Equation
10.2 .
Equation 10.2
fragment color = surface color × fog factor + fog color × (1 - fog factor)
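Purely to illustrate these two equations, the same calculation can be written as ordinary JavaScript; the helper below is hypothetical and only useful for checking values by hand, because in Fog.js the calculation is performed in the shaders.

// Linear fog factor (Equation 10.1) and fogged color (Equation 10.2)
function foggedColor(surfaceColor, fogColor, dist, startPoint, endPoint) {
  var fogFactor = (endPoint - dist) / (endPoint - startPoint);
  fogFactor = Math.min(Math.max(fogFactor, 0.0), 1.0);  // clamp to [0.0, 1.0]
  return [
    surfaceColor[0] * fogFactor + fogColor[0] * (1.0 - fogFactor),
    surfaceColor[1] * fogFactor + fogColor[1] * (1.0 - fogFactor),
    surfaceColor[2] * fogFactor + fogColor[2] * (1.0 - fogFactor)
  ];
}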
Now, let’s take a look at the sample program.
Sample Program (Fog.js)
The sample program is shown in Listing 10.6 . Here, you (1) calculate the distance of the
object (vertex) from the eye point in the vertex shader, and based on that, you (2) calcu-
late the fog factor and the color of the object based on the fog factor in the fragment
shader. Note that this program specifies the position of the eye point with the world
coordinate system (see Appendix G , “World Coordinate System Versus Local Coordinate
System”) so the fog calculation takes place in the world coordinate system.
Listing 10.6 Fog.js
1 // Fog.js
2 // Vertex shader program
3 var VSHADER_SOURCE =
...
7 'uniform mat4 u_ModelMatrix;\n' +
8 'uniform vec4 u_Eye;\n' + // The eye point (world coordinates)
9 'varying vec4 v_Color;\n' +
10 'varying float v_Dist;\n' +
11 'void main() {\n' +
12 ' gl_Position = u_MvpMatrix * a_Position;\n' +
13 ' v_Color = a_Color;\n' +
14 // Calculate the distance to each vertex from eye point <-(1)
15 ' v_Dist = distance(u_ModelMatrix * a_Position, u_Eye);\n' +
16 '}\n';
17
18 // Fragment shader program
19 var FSHADER_SOURCE =
...
23 'uniform vec3 u_FogColor;\n' + // Color of Fog
24 'uniform vec2 u_FogDist;\n' + // Fog starting point and end point
25 'varying vec4 v_Color;\n' +
26 'varying float v_Dist;\n' +
27 'void main() {\n' +
28 // Calculate the fog factor <-(2)
29 ' float fogFactor = clamp((u_FogDist.y - v_Dist) / (u_FogDist.y -
➥u_FogDist.x), 0.0, 1.0);\n' +
30 // u_FogColor * (1 - fogFactor) + v_Color * fogFactor
31 ' vec3 color = mix(u_FogColor, vec3(v_Color), fogFactor);\n' +
32 ' gl_FragColor = vec4(color, v_Color.a);\n' +
33 '}\n';
34
35 function main() {
...
53 var n = initVertexBuffers(gl);
...
59 // Color of fog
60 var fogColor = new Float32Array([0.137, 0.231, 0.423]);
61 // Distance of fog [fog starts, fog completely covers object]
62 var fogDist = new Float32Array([55, 80]);
63 // Position of eye point (world coordinates)
64 var eye = new Float32Array([25, 65, 35]);
...
76 // Pass fog color, distances, and eye point to uniform variable
77 gl.uniform3fv(u_FogColor, fogColor); // Fog color
78 gl.uniform2fv(u_FogDist, fogDist); // Starting point and end point
79 gl.uniform4fv(u_Eye, eye); // Eye point
80
81 // Set clear color and enable hidden surface removal function
82 gl.clearColor( fogColor[0], fogColor[1], fogColor[2] , 1.0);
...
93 mvpMatrix.lookAt( eye[0], eye[1], eye[2] , 0, 2, 0, 0, 1, 0);
...
97 document.onkeydown = function(ev){ keydown(ev, gl, n, u_FogDist, fogDist); };
...
The calculation of the distance from the eye point to the vertex, done by the vertex
shader, is straightforward. You simply transform the vertex coordinates to the world coor-
dinates using the model matrix and then call the built-in function distance() with the
position of the eye point (world coordinates) and the vertex coordinates. The distance()
function calculates the distance between two coordinates specified by the arguments. This
calculation takes place at line 15, and the result is then written to the v_Dist variable and
passed to the fragment shader.
The fragment shader calculates the fogged color of the object using Equations 10.1 and
10.2 . The fog color, fog starting point, and fog end point, which are needed to calculate
the fogged color, are passed in the uniform variables u_FogColor and u_FogDist at lines 23
and 24. u_FogDist.x is the starting point, and u_FogDist.y is the end point.
The fog factor is calculated at line 29 using Equation 10.1 . The clamp() function is a built-
in function; if the value specified by the first parameter is outside the range specified by
the second and third parameters ([0.0, 1.0] in this case), it will fix the value to one within
the range. In other words, the value is fixed to 0.0 if the value is smaller than 0.0, and 1.0
if the value is larger than 1.0. If the value is within the range, the value is unchanged.
Line 31 is the calculation of the fragment color using the fog factor. This implements
Equation 10.2 and uses a built-in function, mix() , which calculates x*(1-z)+y*z, where x is
the first parameter, y is the second, and z is the third.
The processing in JavaScript’s main() function from line 35 sets up the values necessary
for calculating the fog in the appropriate uniform variables.
You should note that there are many types of fog calculations other than linear fog, for
example exponential fog, used in OpenGL (see the book OpenGL Programming Guide ). You
can implement these fog calculations using the same approach, just changing the calcula-
tion method in the fragment shader.
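For example, exponential fog computes the fog factor as exp(-density × distance). Assuming a uniform variable u_FogDensity were added to the fragment shader (it is not part of Fog.js ), line 29 could be changed roughly as follows:

'uniform float u_FogDensity;\n' +   // hypothetical fog density uniform
...
' float fogFactor = clamp(exp(-u_FogDensity * v_Dist), 0.0, 1.0);\n' +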
Use the w Value (Fog_w.js)
Because the distance calculation within the shader can affect performance, an alternative method lets you cheaply approximate the distance from the eye point to the vertex by using the w value of the coordinates produced by the model view projection transformation, that is, the coordinates assigned to gl_Position. The fourth component, w, of gl_Position, which you haven't used explicitly before, is the z value of each vertex in the view coordinate system multiplied by –1. The eye point is the origin of the view coordinate system, and the viewing direction is the negative z direction, so z is a negative value. The w value, which is this z value multiplied by –1, can therefore be used as an approximation of the distance.
If you reimplement the calculation in the vertex shader using w, as shown in Listing 10.7 ,
the fog effect will work as before.
Listing 10.7 Fog_w.js
1 // Fog_w.js
2 // Vertex shader program
3 var VSHADER_SOURCE =
...
7 'varying vec4 v_Color;\n' +
8 'varying float v_Dist;\n' +
9 'void main() {\n' +
10 ' gl_Position = u_MvpMatrix * a_Position;\n' +
11 ' v_Color = a_Color;\n' +
12 // Use the negative z value of vertex in view coordinate system
13 ' v_Dist = gl_Position.w;\n' +
14 '}\n';
Make a Rounded Point
In Chapter 2 , you constructed a sample program that draws a point to help you under-
stand the basics of shaders. However, to allow you to focus on the operation of the
shaders, the point displayed wasn’t “round” but actually “square,” which is simpler to
draw. In this section, you construct a sample program, RoundedPoint , which draws a
round point (see Figure 10.9 ).
Figure 10.9 A screen shot of RoundedPoint
How to Implement a Rounded Point
To draw a “round” point, you just have to make the “rectangle” point round. This can be
achieved using the rasterization process that takes place between the vertex shader and
the fragment shader and was explained in Chapter 5 , “Using Colors and Texture Images.”
This rasterization process generates a rectangle consisting of multiple fragments, and each
fragment is passed to the fragment shader. If you draw these fragments as-is, a rectangle
will be displayed. So you just need to modify the fragment shader to draw only the frag-
ments inside the circle, as shown in Figure 10.10 .
Figure 10.10 Discarding fragments to turn a rectangle into a circle
To achieve this, you need to know the position of each fragment created during rasteriza-
tion. In Chapter 5 , you saw a sample program that uses the built-in variable gl_FragCoord
to pass (input) the data to the fragment shader. In addition to this, there is one more
built-in variable gl_PointCoord , which is suitable for drawing a round point (see Table
10.1 ).
Table 10.1 Built-In Variables of Fragment Shader (Input)
Type and Name of Variable Description
vec4 gl_FragCoord Window coordinates of fragment
vec4 gl_PointCoord Position of fragment in the drawn point (0.0 to 1.0)
gl_PointCoord gives the position of each fragment taken from the range (0.0, 0.0) to (1.0,
1.0), as shown in Figure 10.11 . To make the rectangle round, you simply have to discard
the fragments outside the circle centered at (0.5, 0.5) with radius 0.5. You can use the
discard statement to discard these fragments.
Figure 10.11 Coordinates of gl_PointCoord
Sample Program (RoundedPoint.js)
The sample program is shown in Listing 10.8 . This is derived from MultiPoint.js , which
was used in Chapter 4 , “More Transformations and Basic Animation,” to draw multiple
points. The only difference is in the fragment shader. The vertex shader is also shown for
reference.
Listing 10.8 RoundedPoint.js
1 // RoundedPoint.js
2 // Vertex shader program
3 var VSHADER_SOURCE =
4 'attribute vec4 a_Position;\n' +
5 'void main() {\n' +
6 ' gl_Position = a_Position;\n' +
7 ' gl_PointSize = 10.0;\n' +
8 '}\n';
9
10 // Fragment shader program
11 var FSHADER_SOURCE =
...
15 'void main() {\n' + // Center coordinate is (0.5, 0.5)
16 ' float dist = distance(gl_PointCoord, vec2(0.5, 0.5));\n' +
17 ' if(dist < 0.5) {\n' + // Radius is 0.5
18 ' gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);\n' +
19 ' } else { discard; }\n' +
20 '}\n';
21
22 function main() {
...
53 gl.drawArrays(gl.POINTS, 0, n);
54 }
The key difference is the calculation, starting at line 16, which determines whether a frag-
ment should be discarded. gl_PointCoord holds the fragment's position (specified in the range 0.0 to 1.0), and the center point is (0.5, 0.5). Therefore, to make a rectangular point round, you have to do the following:
1. Calculate the distance from the center (0.5, 0.5) to each fragment.
2. Draw those fragments for which the distance is less than 0.5.
In RoundedPoint.js , the distance calculation takes place at line 16. Here, you just have to
calculate the distance between the center point (0.5, 0.5) and gl_PointCoord . Because the
gl_PointCoord is a vec2 type, you need to pass (0.5, 0.5) to distance() as a vec2 .
Once you have calculated the distance from the center, it is used at line 17 to check
whether the distance is less than 0.5 (in other words, whether the fragment is inside the circle). If the fragment is inside the circle, it is drawn, so line 18 uses gl_FragColor to set the draw color. Otherwise, at line 19, the discard statement causes WebGL to automatically throw away the fragment.
Alpha Blending
The alpha value controls the transparency of drawn objects. If you specify 0.5 as the alpha
value, the object becomes semi-transparent, allowing anything drawn underneath it to be
partially visible. As the alpha value approaches 0, more of the background objects appear.
If you try this yourself, you’ll actually see that as you decrease the alpha value, WebGL
objects become white. This is because WebGL’s default behavior is to use the same alpha
value for both objects and the <canvas>. In the sample programs, the web page behind the <canvas> is white, so this shows through.
Let’s construct a sample program that shows how to use alpha blending to get the desired
effect. The function that allows the use of the alpha value is called an alpha blending (or
simply blending ) function . This function is already built into WebGL, so you just need to
enable it to tell WebGL to start to use the alpha values supplied.
How to Implement Alpha Blending
You’ll need the following two steps to enable and use the alpha blending function.
1. Enable the alpha blending function:
gl.enable(gl.BLEND);
2. Specify the blending function:
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
The blending function will be explained later, so let’s try using the sample program. Here,
we will reuse LookAtTrianglesWithKey_ViewVolume described in Chapter 7 , “Toward the
3D World.” As shown in Figure 10.12 , this program draws three triangles and allows the
position of the eye point to be changed using the arrow key.
Figure 10.12 A screen shot of LookAtTrianglesWithKeys_ViewVolume
Let’s add the code for steps 1 and 2, specify 0.4 as the alpha value of the color of the
triangles, and call the resulting program LookAtBlendedTriangles . Figure 10.13 shows the
effect when run. As you can see, all triangles became semitransparent, and you are able to
see the triangles behind. When you move the eye point with the arrow key, you can see
that the blending is continuously taking place.
Figure 10.13 A screen shot of LookAtBlendedTriangles
Let’s look at the sample program.
Sample Program (LookAtBlendedTriangles.js)
LookAtBlendedTriangles.js is shown in Listing 10.9 . The code that has changed is in
lines 51 to 54, and the alpha value (0.4) is added to the definition of color information
in initVertexBuffers() at lines 81 to 91. Accordingly, the size and stride parameters have
changed for gl.vertexAttribPointer() .
Listing 10.9 LookAtBlendedTriangles.js
1 // LookAtBlendedTriangles.js
2 // LookAtTrianglesWithKey_ViewVolume.js is the original
...
25 function main() {
...
43 var n = initVertexBuffers(gl);
...
51 // Enable alpha blending
52 gl.enable (gl.BLEND);
53 // Set blending function
54 gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
...
75 draw(gl, n, u_ViewMatrix, viewMatrix);
76 }
77
78 function initVertexBuffers(gl) {
79 var verticesColors = new Float32Array([
80 // Vertex coordinates and color(RGBA)
81     0.0,  0.5, -0.4,  0.4, 1.0, 0.4, 0.4,
82    -0.5, -0.5, -0.4,  0.4, 1.0, 0.4, 0.4,
...
91     0.5, -0.5,  0.0,  1.0, 0.4, 0.4, 0.4,
92 ]);
93 var n = 9;
...
127 return n;
128 }
Blending Function
Let’s explore the blending function gl.blendFunc() to understand how this can be used
to achieve the blending effect. You need two colors for blending: the color to blend
(source color) and the color to be blended (destination color). For example, when you
draw one triangle on top of the other, the color of the triangle already drawn is the desti-
nation color, and the color of the triangle drawn on top is the source color.
gl.blendFunc(src_factor, dst_factor)
Specify the method to blend the source color and the destination color. The blended
color is calculated as follows:
color (RGB) = source color × src_factor + destination color × dst_factor
Parameters src_factor Specifies the multiplier for the source color ( Table 10.2 ).
dst_factor Specifies the multiplier for the destination color ( Table
10.2 ).
Return value None
Errors INVALID_ENUM src_factor or dst_factor is not one of the values in Table 10.2.
Table 10.2 Constant Values that Can Be Specified as src_factor and dst_factor
Constant Multiplicand for R Multiplicand for G Multiplicand for B
gl.ZERO 0.0 0.0 0.0
gl.ONE 1.0 1.0 1.0
gl.SRC_COLOR Rs Gs Bs
gl.ONE_MINUS_SRC_COLOR (1 – Rs) (1 – Gs) (1 – Bs)
gl.DST_COLOR Rd Gd Bd
gl.ONE_MINUS_DST_COLOR (1 – Rd) (1 – Gd) (1 – Bd)
gl.SRC_ALPHA As As As
gl.ONE_MINUS_SRC_ALPHA (1 – As) (1 – As) (1 – As)
gl.DST_ALPHA Ad Ad Ad
gl.ONE_MINUS_DST_ALPHA (1 – Ad) (1 – Ad) (1 – Ad)
gl.SRC_ALPHA_SATURATE min(As, Ad) min(As, Ad) min(As, Ad)
3 gl.CONSTANT_COLOR, gl.ONE_MINUS_CONSTANT_COLOR, gl.CONSTANT_ALPHA, and gl.ONE_MINUS_CONSTANT_ALPHA are removed from OpenGL.
(Rs,Gs,Bs,As) is the source color and (Rd,Gd,Bd,Ad) is the destination color.
In the sample program, you used the following:
54 gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
For example, if the source color is semitransparent green (0.0, 1.0, 0.0, 0.4) and the desti-
nation color is yellow (1.0, 1.0, 0.0, 1.0), src_factor becomes the alpha value 0.4 and dst_
factor becomes (1 – 0.4)=0.6. The calculation is shown in Figure 10.14 .
source color × src_factor + destination color × dst_factor = color (RGB)

  source color      (0.0, 1.0, 0.0) × src_factor (0.4, 0.4, 0.4) = (0.0, 0.4, 0.0)
  destination color (1.0, 1.0, 0.0) × dst_factor (0.6, 0.6, 0.6) = (0.6, 0.6, 0.0)
  blended color     (0.0 + 0.6, 0.4 + 0.6, 0.0 + 0.0)            = (0.6, 1.0, 0.0)

Figure 10.14 Calculation of gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA)
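If you want to check this arithmetic yourself, the same blend can be reproduced in plain JavaScript (this is just arithmetic for illustration, not part of the sample program):

// Reproducing the calculation in Figure 10.14
var src = [0.0, 1.0, 0.0];    // RGB of the semitransparent green source color
var dst = [1.0, 1.0, 0.0];    // RGB of the yellow destination color
var srcFactor = 0.4;          // the source alpha
var dstFactor = 1.0 - srcFactor;
var blended = src.map(function(s, i) { return s * srcFactor + dst[i] * dstFactor; });
console.log(blended);         // [0.6, 1.0, 0.0]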
You can experiment with the other possible parameter values for src_factor and dst_factor, but one combination that is often used is additive blending. Because it is a simple addition, the result becomes brighter than the original value, which makes it useful for effects such as indicators or the light from an explosion:
gl.blendFunc(gl.SRC_ALPHA, gl.ONE);
Alpha Blend 3D Objects (BlendedCube.js)
Let’s now explore the effects of alpha blending on a representative 3D object, a cube, by
making it semitransparent. You will reuse the ColoredCube sample program from Chapter
7 to create BlendedCube , which adds the two steps needed for blending (see Listing 10.10 ).
Listing 10.10 BlendedCube.js
1 // BlendedCube.js
...
47 // Set the clear color and enable the depth test
48 gl.clearColor(0.0, 0.0, 0.0, 1.0);
49 gl.enable(gl.DEPTH_TEST);
50 // Enable alpha blending
51 gl.enable (gl.BLEND);
52 // Set blending function
53 gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
Unfortunately, if you run this program as-is, you won’t see the expected result (right side
of Figure 10.15 ); rather, you will see something similar to the left side, which is no differ-
ent from the original ColoredCube used in Chapter 7 .
Figure 10.15 BlendedCube
This is because of the hidden surface removal function enabled at line 49. Blending only
takes place on the drawn surfaces. When the hidden surface removal function is enabled,
the hidden surfaces are not drawn, so there is no other surface to be blended with.
Therefore, you don’t see the blending effect as expected. To solve this problem, you can
simply comment out line 49 that enables the hidden surface removal function.
48 gl.clearColor(0.0, 0.0, 0.0, 1.0);
49 // gl.enable(gl.DEPTH_TEST);
50 // Enable alpha blending
51 gl.enable (gl.BLEND);
How to Draw When Alpha Values Coexist
This is a quick solution, but it’s not very satisfactory because, as we’ve seen in Chapter 7 ,
hidden surface removal is often needed to correctly draw a 3D scene.
You can overcome this problem by drawing objects while turning the hidden surface
removal function on and off.
1. Enable the hidden surface removal function.
gl.enable(gl.DEPTH_TEST);
2. Draw all the opaque objects (whose alpha values are 1.0).
3. Make the depth buffer ( Chapter 7 ), which is used in the hidden surface removal,
read-only.
gl.depthMask(false);
4. Draw all the transparent objects (whose alpha values are smaller than 1.0). Note,
they should be sorted by the depth order and drawn back to front.
5. Make the depth buffer readable and writable.
gl.depthMask(true);
If you completely disable the hidden surface removal function, when there are transpar-
ent objects behind opaque objects, the transparent object will not be hidden behind the
opaque objects. So you need to control that with gl.depthMask() . gl.depthMask() has the
following specification.
gl.depthMask(mask)
Enable or disable writing into the depth buffer.
Parameters mask Specifies whether the depth buffer is enabled for writing. If mask
is false , depth buffer writing is disabled.
Return value None
Errors None
The depth buffer was briefly introduced in Chapter 7 . The z values of fragments (which
are normalized to a value between 0.0 and 1.0) are written into the buffer. For example,
say there are two triangles on top of each other and you draw from the triangle on top.
First, the z value of the triangle on top is written into the depth buffer. Then, when the
triangle on bottom is drawn, the hidden surface removal function compares the z value
of its fragment that is going to be drawn, with the z value already written in the depth
buffer. Then only when the z value of the fragment that is going to be drawn is smaller
than the existing value in the buffer (that is, when it’s closer to the eye point) will the
fragment be drawn into the color buffer. This approach ensures that hidden surface
removal is achieved. Therefore, after drawing, the z value of the fragment of the surface
that can be seen from the eye point is left in the depth buffer.
Opaque objects are drawn into the color buffer in the correct order by removing the
hidden surfaces in the processing of steps 1 and 2, and the z value that represents the
order is written in the depth buffer. Transparent objects are drawn into the color buffer
using that z value in steps 3, 4, and 5, so the hidden surfaces of the transparent objects
behind the opaque objects will be removed. This results in the correct image being shown
where both objects coexist.
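Putting the five steps together, the drawing code might be organized as in the following sketch; drawOpaqueObjects() and drawTransparentObjects() are placeholder names for your own drawing routines and are not part of the sample programs:

gl.enable(gl.DEPTH_TEST);        // Step 1: enable hidden surface removal
drawOpaqueObjects(gl);           // Step 2: draw all objects whose alpha is 1.0
gl.depthMask(false);             // Step 3: make the depth buffer read-only
drawTransparentObjects(gl);      // Step 4: draw transparent objects, sorted back to front
gl.depthMask(true);              // Step 5: make the depth buffer writable again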
Switching Shaders
The sample programs in this book draw using a single vertex shader and a single fragment
shader. If all objects can be drawn with the same shaders, there is no problem. However,
if you want to change the drawing method for each object, you need to add significant
complexity to the shaders to achieve multiple effects. A solution is to prepare more than
one shader and then switch between these shaders as required. Here, you construct a
sample program, ProgramObject , which draws a cube colored with a single color and
another cube with a texture image. Figure 10.16 shows a screen shot.
Figure 10.16 A screen shot of ProgramObject
This program is also an example of the shading of an object with a texture image.
How to Implement Switching Shaders
The shaders can be switched easily by creating program objects, as explained in Chapter 8 ,
“Lighting Objects,” and switching them before drawing. Switching is carried out using the
function gl.useProgram() . Because you are explicitly manipulating shader objects, you
cannot use the convenience function initShaders() . However, you can use the function
createProgram() in cuon-utils.js , which is called from initShaders() .
The following is the processing flow of the sample program. It performs the same proce-
dure twice, so it looks long, but the essential code is simple:
1. Prepare the shaders to draw an object shaded with a single color.
2. Prepare the shaders to draw an object with a texture image.
3. Create a program object that has the shaders from step 1 with createProgram() .
4. Create a program object that has the shaders from step 2 with createProgram() .
5. Specify the program object created by step 3 with gl.useProgram() .
6. Enable the buffer object after assigning it to the attribute variables.
7. Draw a cube (drawn in a single color).
8. Specify the program object created in step 4 using gl.useProgram() .
9. Enable the buffer object after assigning it to the attribute variables.
10. Draw a cube (texture is pasted).
Now let’s look at the sample program.
Sample Program (ProgramObject.js)
The key program code for steps 1 to 4 is shown in Listing 10.11 . Two types of vertex
shader and fragment shader are prepared: SOLID_VSHADER_SOURCE (line 3) and SOLID_
FSHADER_SOURCE (line 19) to draw an object in a single color, and TEXTURE_VSHADER_SOURCE
(line 29) and TEXTURE_FSHADER_SOURCE (line 46) to draw an object with a texture image.
Because the focus here is on how to switch the program objects, the contents of the
shaders are omitted.
Listing 10.11 ProgramObject.js (Processes for Steps 1 to 4)
1 // ProgramObject.js
2 // Vertex shader for single color drawing <- (1)
3 var SOLID_VSHADER_SOURCE =
...
18 // Fragment shader for single color drawing
19 var SOLID_FSHADER_SOURCE =
...
28 // Vertex shader for texture drawing <- (2)
29 var TEXTURE_VSHADER_SOURCE =
...
45 // Fragment shader for texture drawing
46 var TEXTURE_FSHADER_SOURCE =
...
58 function main() {
...
69 // Initialize shaders
70 var solidProgram = createProgram (gl, SOLID_VSHADER_SOURCE,
➥SOLID_FSHADER_SOURCE); <- (3)
71 var texProgram = createProgram (gl, TEXTURE_VSHADER_SOURCE,
➥TEXTURE_FSHADER_SOURCE); <- (4)
...
77 // Get the variables in the program object for single color drawing
78 solidProgram.a_Position = gl.getAttribLocation(solidProgram, 'a_Position');
79 solidProgram.a_Normal = gl.getAttribLocation(solidProgram, 'a_Normal');
...
83 // Get the storage location of attribute/uniform variables
84 texProgram.a_Position = gl.getAttribLocation(texProgram, 'a_Position');
85 texProgram.a_Normal = gl.getAttribLocation(texProgram, 'a_Normal');
...
89 texProgram.u_Sampler = gl.getUniformLocation(texProgram, 'u_Sampler');
...
99 // Set vertex information
100 var cube = initVertexBuffers(gl, solidProgram);
...
106 // Set texture
107 var texture = initTextures(gl, texProgram);
...
122 // Start drawing
123 var currentAngle = 0.0; // Current rotation angle (degrees)
124 var tick = function() {
125 currentAngle = animate(currentAngle); // Update rotation angle
...
128 // Draw a cube in single color
129 drawSolidCube(gl, solidProgram, cube, -2.0, currentAngle, viewProjMatrix);
130 // Draw a cube with texture
131 drawTexCube(gl, texProgram, cube, texture, 2.0, currentAngle,
➥viewProjMatrix);
132
133 window.requestAnimationFrame(tick, canvas);
134 };
135 tick();
136 }
137
138 function initVertexBuffers(gl, program) {
...
148 var vertices = new Float32Array([ // Vertex coordinates
149 1.0, 1.0, 1.0, -1.0, 1.0, 1.0, -1.0,-1.0, 1.0, 1.0,-1.0, 1.0,
150 1.0, 1.0, 1.0, 1.0,-1.0, 1.0, 1.0,-1.0,-1.0, 1.0, 1.0,-1.0,
...
154 1.0,-1.0,-1.0, -1.0,-1.0,-1.0, -1.0, 1.0,-1.0, 1.0, 1.0,-1.0
155 ]);
156
157 var normals = new Float32Array([ // Normal
...
164 ]);
165
166 var texCoords = new Float32Array([ // Texture coordinates
...
173 ]);
174
175 var indices = new Uint8Array([ // Indices for vertices
...
182 ]);
183
184 var o = new Object(); // Use Object to return buffer objects
185
186 // Write vertex information to buffer object
187 o.vertexBuffer = initArrayBufferForLaterUse(gl, vertices, 3, gl.FLOAT);
188 o.normalBuffer = initArrayBufferForLaterUse(gl, normals, 3, gl.FLOAT);
189 o.texCoordBuffer = initArrayBufferForLaterUse(gl, texCoords, 2, gl.FLOAT);
190 o.indexBuffer = initElementArrayBufferForLaterUse(gl, indices,
➥gl.UNSIGNED_BYTE);
...
193 o.numIndices = indices.length;
...
199 return o;
200 }
Starting with the main() function in JavaScript, you first create a program object for
each shader with createProgram() at lines 70 and 71. The arguments of createProgram() are the same as those of initShaders(), and the return value is the program object.
You save each program object in solidProgram and texProgram . Then you retrieve the
storage location of the attribute and uniform variables for each shader at lines 78 to 89.
You will store them in the corresponding properties of the program object, as you did in
MultiJointModel_segment.js . Again, you leverage JavaScript’s ability to freely append a
new property of any type to an object.
The vertex information is then stored in the buffer object by initVertexBuffers() at line
100. You need (1) vertex coordinates, (2) the normals, and (3) indices for the shader to
draw objects in a single color. In addition, for the shader to draw objects with a texture
image, you need the texture coordinates. The processing in initVertexBuffers() handles
this and binds the correct buffer object to the corresponding attribute variables when the
program object is switched.
initVertexBuffers() prepares the vertex coordinates from line 148, normals from line
157, texture coordinates from line 166, and index arrays from line 175. Line 184 creates an object (o) of type Object, and the buffer objects are then stored in its properties (lines 187 to 190). You could maintain each buffer object as a global variable, but that would introduce too many variables and make the program hard to follow. By using properties, you can conveniently manage all four buffer objects through the single object o. (To keep the explanation simple, a plain Object is used here; it is better programming practice to define a dedicated user-defined type for managing the information about a buffer object and use it to manage the four buffers.)
You use initArrayBufferForLaterUse() , explained in MultiJointModel_segment.js , to
create each buffer object. This function writes vertex information into the buffer object
but does not assign it to the attribute variables. You use the buffer object name as its
property name to make it easier to understand. Line 199 returns the object o as the return
value.
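initArrayBufferForLaterUse() itself is not shown in this excerpt; based on the description above, it does something like the following (a sketch, not the actual listing):

function initArrayBufferForLaterUse(gl, data, num, type) {
  var buffer = gl.createBuffer();             // Create a buffer object
  if (!buffer) return null;
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);     // Write the vertex information into it
  gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW);
  buffer.num = num;                           // Remember the information that
  buffer.type = type;                         // gl.vertexAttribPointer() will need later
  return buffer;                              // Assignment to an attribute happens later
}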
Once back in main() in JavaScript, the texture image is set up in initTextures() at line
107, and then everything is ready to allow you to draw the two cube objects. First, you
draw a single color cube using drawSolidCube() at line 129, and then you draw a cube
with a texture image by using drawTexCube() at line 131. Listing 10.12 shows the latter
half of the steps, steps 5 through 10.
Listing 10.12 ProgramObject.js (Processes for Steps 5 through 10)
236 function drawSolidCube(gl, program, o, x, angle, viewProjMatrix) {
237 gl.useProgram(program); // Tell this program object is used <-(5)
238
239 // Assign the buffer objects and enable the assignment <-(6)
240 initAttributeVariable(gl, program.a_Position, o.vertexBuffer);
241 initAttributeVariable(gl, program.a_Normal, o.normalBuffer);
242 gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, o.indexBuffer);
243
244 drawCube(gl, program, o, x, angle, viewProjMatrix); // Draw <-(7)
245 }
246
247 function drawTexCube(gl, program, o, texture, x, angle, viewProjMatrix) {
248 gl.useProgram(program); // Tell this program object is used <-(8)
249
250 // Assign the buffer objects and enable the assignment <-(9)
251 initAttributeVariable(gl, program.a_Position, o.vertexBuffer);
252 initAttributeVariable(gl, program.a_Normal, o.normalBuffer);
253 initAttributeVariable(gl, program.a_TexCoord, o.texCoordBuffer);
254 gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, o.indexBuffer);
255
256 // Bind texture object to texture unit 0
257 gl.activeTexture(gl.TEXTURE0);
258 gl.bindTexture(gl.TEXTURE_2D, texture);
259
260 drawCube(gl, program, o, x, angle, viewProjMatrix); // Draw <-(10)
261 }
262
263 // Assign the buffer objects and enable the assignment
264 function initAttributeVariable(gl, a_attribute, buffer) {
265 gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
266 gl.vertexAttribPointer(a_attribute, buffer.num, buffer.type, false, 0, 0);
267 gl.enableVertexAttribArray(a_attribute);
268 }
...
275 function drawCube(gl, program, o, x, angle, viewProjMatrix) {
276 // Calculate a model matrix
...
281 // Calculate transformation matrix for normal
...
286 // Calculate a model view projection matrix
...
291 gl.drawElements(gl.TRIANGLES, o.numIndices, o.indexBuffer.type, 0);
292 }
drawSolidCube() is defined at line 236 and uses gl.useProgram() at line 237 to tell
the WebGL system that you will use the program (program object, solidProgram )
specified by the argument. Then you can draw using solidProgram . The buffer objects
for vertex coordinates and normals are assigned to attribute variables and enabled by
initAttributeVariable() at lines 240 and 241. This function is defined at line 264. Line
242 binds the buffer object for the indices to gl.ELEMENT_ARRAY_BUFFER . With everything
set up, you then call drawCube() at line 244, which uses gl.drawElements() at line 291 to
perform the draw operation.
drawTexCube() , defined at line 247, follows the same steps as drawSolidCube() . Line 253
is added to assign the buffer object for texture coordinates to the attribute variables, and
lines 257 and 258 are added to bind the texture object to the texture unit 0. The actual
drawing is performed in drawCube() , just like drawSolidCube() .
Once you’ve mastered this basic technique, you can use it to switch between any number
of shader programs. This way you can use a variety of different drawing effects in a single
scene.
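Stripped of the per-object details, the pattern is simply the following (a sketch using the names from this sample; buffer setup and the draw calls are omitted):

// At initialization: one program object per drawing method
var solidProgram = createProgram(gl, SOLID_VSHADER_SOURCE, SOLID_FSHADER_SOURCE);
var texProgram = createProgram(gl, TEXTURE_VSHADER_SOURCE, TEXTURE_FSHADER_SOURCE);

// For each object, each frame: switch programs, rebind that object's buffers, then draw
gl.useProgram(solidProgram);   // ...assign buffers to solidProgram's attributes and draw
gl.useProgram(texProgram);     // ...assign buffers to texProgram's attributes and draw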
Use What You’ve Drawn as a Texture Image
One simple but powerful technique is to draw some 3D objects and then use the result-
ing image as a texture image for another 3D object. Essentially, if you can use the content
you’ve drawn as a texture image, you are able to generate images on-the-fly. This means
you do not need to download images from the network, and you can apply special effects
(such as motion blur and depth of field) before displaying the image. You can also use
this technique for shadowing, which will be explained in the next section. Here, you will
construct a sample program, FramebufferObject , which maps a rotating cube drawn with
WebGL to a rectangle as a texture image. Figure 10.17 shows a screen shot.
Figure 10.17 FramebufferObject
If you actually run the program, you can see a rotating cube, textured with an image of a summer sky, mapped onto the rectangle as its texture. Significantly, the image of the cube that
is pasted on the rectangle is not a movie prepared in advance but a rotating cube drawn
by WebGL in real time. This is quite powerful, so let’s take a look at what WebGL must do
to achieve this.
Framebuffer Object and Renderbuffer Object
By default, the WebGL system draws using a color buffer and, when using the hidden
surface removal function, a depth buffer. The final image is kept in the color buffer.
The framebuffer object is an alternative mechanism you can use instead of a color buffer
or a depth buffer ( Figure 10.18 ). Unlike a color buffer, the content drawn in a framebuffer
object is not directly displayed on the <canvas>. Therefore, you can use it if you want to
perform different types of processing before displaying the drawn content. Or you can use
it as a texture image. Such a technique is often referred to as offscreen drawing .
Figure 10.18 Framebuffer object
The framebuffer object has the structure shown in Figure 10.19 and supports substitutes
for the color buffer and the depth buffer. As you can see, drawing is not carried out in
the framebuffer itself, but in the drawing areas of the objects that the framebuffer points
to. These objects are attached to the framebuffer using its attachment function. A color
attachment specifies the destination for drawing to be a replacement for the color buffer.
A depth attachment and a stencil attachment specify the replacements for the depth
buffer and stencil buffer.
Figure 10.19 Framebuffer object, texture object, renderbuffer object
WebGL supports two types of objects that can be used to draw objects within: the texture
object that you saw in Chapter 5 , and the renderbuffer object . With the texture object,
the content drawn into the texture object can be used as a texture image. The render-
buffer object is a more general-purpose drawing area, allowing a variety of data types to
be written.
How to Implement Using a Drawn Object as a Texture
When you want to use the content drawn into a framebuffer object as a texture object,
you actually need to use the content drawn into the color buffer for the texture object.
Because you also want to remove the hidden surfaces for drawing, you will set up the
framebuffer object as shown in Figure 10.20 .
Figure 10.20 Configuration of framebuffer object when using drawn content as a texture (the size of each drawing area must be identical)
The following eight steps are needed for realizing this configuration. These processes are
similar to the process for the buffer object. Step 2 was explained in Chapter 5 , so there are
essentially seven new processes:
1. Create a framebuffer object ( gl.createFramebuffer() ).
2. Create a texture object and set its size and parameters ( gl.createTexture() ,
gl.bindTexture() , gl.texImage2D() , gl.texParameteri() ).
3. Create a renderbuffer object ( gl.createRenderbuffer() ).
4. Bind the renderbuffer object to the target and set its size ( gl.bindRenderbuffer() ,
gl.renderbufferStorage() ).
5. Attach the texture object to the color attachment of the framebuffer object
( gl.bindFramebuffer() , gl.framebufferTexture2D() ).
6. Attach the renderbuffer object to the depth attachment of the framebuffer object
( gl.framebufferRenderbuffer() ).
7. Check whether the framebuffer object is configured correctly ( gl.checkFramebuffer-
Status() ).
8. Draw using the framebuffer object ( gl.bindFramebuffer() ).
Now let’s look at the sample program. The numbers in the sample program indicate the
code used to implement the steps.
Sample Program (FramebufferObject.js)
Steps 1 to 7 of FramebufferObject.js are shown in Listing 10.13 .
Listing 10.13 FramebufferObject.js (Processes for Steps 1 to 7)
1 // FramebufferObject.js
...
24 // Size of offscreen
25 var OFFSCREEN_WIDTH = 256;
26 var OFFSCREEN_HEIGHT = 256;
27
28 function main() {
...
55 // Set vertex information
56 var cube = initVertexBuffersForCube(gl);
57 var plane = initVertexBuffersForPlane(gl);
...
64 var texture = initTextures(gl);
...
70 // Initialize framebuffer object (FBO)
71 var fbo = initFramebufferObject(gl);
...
80   var viewProjMatrix = new Matrix4();   // For color buffer
81 viewProjMatrix.setPerspective(30, canvas.width/canvas.height, 1.0, 100.0);
82 viewProjMatrix.lookAt(0.0, 0.0, 7.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
83
84 var viewProjMatrixFBO = new Matrix4(); // For FBO
85 viewProjMatrixFBO.setPerspective(30.0, OFFSCREEN_WIDTH/OFFSCREEN_HEIGHT,
➥1.0, 100.0);
86 viewProjMatrixFBO.lookAt(0.0, 2.0, 7.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
...
92 draw(gl, canvas, fbo, plane, cube, currentAngle, texture, viewProjMatrix,
➥viewProjMatrixFBO);
...
96 }
...
263 function initFramebufferObject(gl) {
264 var framebuffer, texture, depthBuffer;
...
274 // Create a framebuffer object (FBO) <-(1)
275 framebuffer = gl.createFramebuffer();
...
281 // Create a texture object and set its size and parameters <-(2)
282 texture = gl.createTexture(); // Create a texture object
...
287 gl.bindTexture(gl.TEXTURE_2D, texture);
288 gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, OFFSCREEN_WIDTH,
➥OFFSCREEN_HEIGHT, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
289 gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
290 framebuffer.texture = texture; // Store the texture object
291
292 // Create a renderbuffer object and set its size and parameters
293 depthBuffer = gl.createRenderbuffer(); // Create a renderbuffer <-(3)
...
298 gl.bindRenderbuffer(gl.RENDERBUFFER, depthBuffer); <-(4)
299 gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16,
➥OFFSCREEN_WIDTH, OFFSCREEN_HEIGHT);
300
301 // Attach the texture and the renderbuffer object to the FBO
302 gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
303 gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
➥gl.TEXTURE_2D, texture, 0); <-(5)
304 gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
➥gl.RENDERBUFFER, depthBuffer); <-(6)
305
306 // Check whether FBO is configured correctly <-(7)
307 var e = gl.checkFramebufferStatus(gl.FRAMEBUFFER);
308 if (e !== gl.FRAMEBUFFER_COMPLETE) {
309 console.log('Framebuffer object is incomplete: ' + e.toString());
310 return error();
311 }
312
...
319 return framebuffer;
320 }
The vertex shader and fragment shader are omitted because this sample program uses the
same shaders as TexturedQuad.js in Chapter 5 , which pasted a texture image on a rect-
angle. The sample program in this section draws two objects: a cube and a rectangle. Just
as you did in ProgramObject.js in the previous section, you assign multiple buffer objects
needed for drawing each object as properties of an Object object. Then you store the
object to the variables cube and plane . You will use them for drawing by assigning each
buffer in the object to the attribute variable.
The key point of this program is the initialization of the framebuffer object by init-
FramebufferObject() at line 71. The initialized framebuffer object is stored in a variable
fbo and passed as the third argument of draw() at line 92. You’ll return to the function
draw() later. For now let’s examine initFramebufferObject() , at line 263, step by step.
This function performs steps 1 to 7. The view projection matrix for the framebuffer object
is prepared separately at line 84 because it is different from the one used for a color buffer.
Create Framebuffer Object (gl.createFramebuffer())
You must create a framebuffer object before you can use it. The sample program creates it
at line 275:
275 framebuffer = gl.createFramebuffer();
You will use gl.createFramebuffer() to create the framebuffer object.
gl.createFramebuffer()
Create a framebuffer object.
Parameters None
Return value non-null The newly created framebuffer object.
null Failed to create a framebuffer object.
Errors None
You use gl.deleteFramebuffer() to delete the created framebuffer object.
gl.deleteFramebuffer(framebuffer)
Delete a framebuffer object.
Parameters framebuffer Specifies the framebuffer object to be deleted.
Return value None
Errors None
Once you have created the framebuffer object, you need to attach a texture object to the
color attachment and a renderbuffer object to the depth attachment in the framebuffer
object. Let’s start by creating the texture object for the color attachment.
Create Texture Object and Set Its Size and Parameters
You have already seen how to create a texture object and set up its parameters
( gl.TEXTURE_MIN_FILTER ) in Chapter 5 . You should note that its width and height are
OFFSCREEN_WIDTH and OFFSCREEN_HEIGHT , respectively. The size is smaller than that of the <canvas> to make the drawing process faster.
282 texture = gl.createTexture(); // Create a texture object
...
287 gl.bindTexture(gl.TEXTURE_2D, texture);
288 gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, OFFSCREEN_WIDTH, OFFSCREEN_HEIGHT, 0,
➥gl.RGBA, gl.UNSIGNED_BYTE, null );
289 gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
290 framebuffer.texture = texture; // Store the texture object
The gl.texImage2D() at line 288 allocates a drawing area in a texture object. You can allo-
cate a drawing area by specifying null to the last argument, which is used to specify an
Image object. You will use this texture object later, so store it in framebuffer.texture at
line 290.
That completes the preparation for a texture object that is attached to the color attach-
ment. Next, you need to create a renderbuffer object for the depth buffer.
Create Renderbuffer Object (gl.createRenderbuffer())
Like texture objects, you need to create a renderbuffer object before using it. The sample
program does this at line 293.
293 depthBuffer = gl.createRenderbuffer(); // Create a renderbuffer
You use gl.createRenderbuffer() to create the renderbuffer object.
gl.createRenderbuffer()
Create a renderbuffer object.
Parameters None
Return value Non-null The newly created renderbuffer object.
Null Failed to create a renderbuffer object.
Errors None
You use gl.deleteRenderbuffer() to delete the created renderbuffer object.
gl.deleteRenderbuffer(renderbuffer)
Delete a renderbuffer object.
Parameters renderbuffer Specifies the renderbuffer object to be deleted.
Return value None
Errors None
The created renderbuffer object is used as a depth buffer here, so you store it in a variable
named depthBuffer .
Bind Renderbuffer Object to Target and Set Size
(gl.bindRenderbuffer(), gl.renderbufferStorage())
When using the created renderbuffer object, you need to bind the renderbuffer object to a
target and perform the operation on that target.
298 gl.bindRenderbuffer(gl.RENDERBUFFER, depthBuffer);
299 gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16,
➥OFFSCREEN_WIDTH, OFFSCREEN_HEIGHT);
The renderbuffer object is bound to a target with gl.bindRenderbuffer() .
gl.bindRenderbuffer(target, renderbuffer)
Bind the renderbuffer object specified by renderbuffer to target . If null is specified as
renderbuffer , the renderbuffer is unbound from the target .
Parameters target Must be gl.RENDERBUFFER.
renderbuffer Specifies the renderbuffer object.
Return value None
Errors INVALID_ENUM target is not gl.RENDERBUFFER
When the binding is complete, you can set the format, width, and height of the render-
buffer object by using gl.renderbufferStorage() . You must set the same width and
height as the texture object that is used as the color attachment.
gl.renderbufferStorage(target, internalformat, width, height)
Create and initialize a renderbuffer object’s data store.
Parameters target Must be gl.RENDERBUFFER.
internalformat Specifies the format of the renderbuffer.
  gl.DEPTH_COMPONENT16   The renderbuffer is used as a depth buffer.
  gl.STENCIL_INDEX8      The renderbuffer is used as a stencil buffer.
  gl.RGBA4, gl.RGB5_A1,  The renderbuffer is used as a color buffer. gl.RGBA4 (each
  gl.RGB565              RGBA component has 4, 4, 4, and 4 bits, respectively),
                         gl.RGB5_A1 (each RGB component has 5 bits, and A has 1 bit),
                         gl.RGB565 (each RGB component has 5, 6, and 5 bits,
                         respectively).
width, height Specifies the width and height of the renderbuffer in
pixels.
Return value None
Errors INVALID_ENUM target is not gl.RENDERBUFFER, or internalformat is not one of the preceding values.
INVALID_OPERATION No renderbuffer is bound to target .
The preparations of the texture object and renderbuffer object of the framebuffer object
are now complete. At this stage, you can use the object for offscreen drawing.
Set Texture Object to Framebuffer Object (gl.bindFramebuffer(),
gl.framebufferTexture2D())
You use a framebuffer object in the same way you use a renderbuffer object: You need to
bind it to a target and operate on the target, not the framebuffer object itself.
302 gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer); // Bind to target
303 gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D,
➥texture, 0);
A framebuffer object is bound to a target with gl.bindFramebuffer() .
gl.bindFramebuffer(target, framebuffer)
Bind a framebuffer object to a target. If framebuffer is null , the binding is broken.
Parameters target Must be gl.FRAMEBUFFER.
framebuffer Specify the framebuffer object.
Return value None
Errors INVALID_ENUM target is not gl.FRAMEBUFFER
Once the framebuffer object is bound to target, you can use the target to attach a texture object to the framebuffer object. In this sample, you will use the texture object instead of a color buffer, so you attach the texture object to the color attachment of the framebuffer.
You can assign the texture object to the framebuffer object with gl.framebufferTexture2D().
gl.framebufferTexture2D(target, attachment, textarget, texture,
level)
Attach a texture object specified by texture to the framebuffer object bound by target.
Parameters target Must be gl.FRAMEBUFFER .
attachment Specifies the attachment point of the framebuffer.
gl.COLOR_ATTACHMENT0 texture is used as a color buffer
gl.DEPTH_ATTACHMENT texture is used as a depth buffer
textarget Specifies the first argument of gl.texImage2D()
( gl.TEXTURE_2D , or one of the cube-map face targets such as gl.TEXTURE_CUBE_MAP_POSITIVE_X ).
texture Specifies a texture object to attach to the frame-
buffer attachment point.
level Specifies 0 (if you use a MIPMAP in texture , you
should specify its level).
Return value None
Errors INVALID_ENUM target is not gl.FRAMEBUFFER . attachment
or textarget is none of the preceding values.
INVALID_VALUE level is not valid.
INVALID_OPERATION No framebuffer object is bound to target.
The 0 in the gl.COLOR_ATTACHMENT0 used for the attachment parameter is because a frame-
buffer object in OpenGL, the basis of WebGL, can hold multiple color attachments
( gl.COLOR_ATTACHMENT0 , gl.COLOR_ATTACHMENT1 , gl.COLOR_ATTACHMENT2 ...). However,
WebGL can use just one of them.
Once the color attachment has been attached to the framebuffer object, you need to
assign a renderbuffer object as a depth attachment. This follows a similar process.
Set Renderbuffer Object to Framebuffer Object
(gl.framebufferRenderbuffer())
You will use gl.framebufferRenderbuffer() to attach a renderbuffer object to a frame-
buffer object. You need a depth buffer because this sample program will remove hidden
surfaces. So the depth attachment needs to be attached.
304 gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
gl.RENDERBUFFER, depthBuffer);
gl.framebufferRenderbuffer(target, attachment, renderbuffertarget,
renderbuffer)
Attach a renderbuffer object specified by renderbuffer to the framebuffer object bound by
target.
Parameters target Must be gl.FRAMEBUFFER .
attachment Specifies the attachment point of the framebuffer.
gl.COLOR_ATTACHMENT0 renderbuffer is used as a color buffer.
gl.DEPTH_ATTACHMENT renderbuffer is used as a depth buffer.
gl.STENCIL_ATTACHMENT renderbuffer is used as a stencil buffer.
renderbuffertarget Must be gl.RENDERBUFFER.
renderbuffer Specifies a renderbuffer object to attach to the
framebuffer attachment point
Return value None
Errors INVALID_ENUM target is not a gl.FRAMEBUFFER . attachment is
none of the above values. renderbuffertarget is
not gl.RENDERBUFFER .
Now that you’ve completed the preparation of the color attachment and depth attach-
ment to the framebuffer object, you are ready to draw. But before that, let’s check that the
configuration of the framebuffer object is correct.
Check Configuration of Framebuffer Object
(gl.checkFramebufferStatus())
Obviously, when you use a framebuffer that is not correctly configured, an error occurs. As
you have seen in the past few sections, preparing a texture object and renderbuffer object
that are needed to configure the framebuffer object is a complex process that sometimes
generates mistakes. You can check whether the created framebuffer object is configured
correctly and is available with gl.checkFramebufferStatus() .
307 var e = gl.checkFramebufferStatus(gl.FRAMEBUFFER); <- (7)
308 if (gl.FRAMEBUFFER_COMPLETE !== e) {
309 console.log('Frame buffer object is incomplete:' + e.toString());
310 return error();
311 }
The following shows the specification of gl.checkFramebufferStatus() .
gl.checkFramebufferStatus(target)
Check the completeness status of a framebuffer bound to target.
Parameters target Must be gl.FRAMEBUFFER.
Return value 0 target is not gl.FRAMEBUFFER.
  Others:
  gl.FRAMEBUFFER_COMPLETE                        The framebuffer object is configured correctly.
  gl.FRAMEBUFFER_INCOMPLETE_ATTACHMENT           One of the framebuffer attachment points is incomplete (the attachment is not sufficient, or the texture object or renderbuffer object is invalid).
  gl.FRAMEBUFFER_INCOMPLETE_DIMENSIONS           The width or height of the attached texture object or renderbuffer object differs.
  gl.FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT   The framebuffer does not have at least one valid attachment.
Errors INVALID_ENUM target is not gl.FRAMEBUFFER .
That completes the preparation of the framebuffer object. Let’s now take a look at the
draw() function.
Draw Using the Framebuffer Object
Listing 10.14 shows draw() . It switches the drawing destination to fbo (the framebuffer)
and draws a cube in the texture object. Then drawTexturedPlane() uses the texture object
to draw a rectangle to the color buffer.
Listing 10.14 FramebufferObject.js (Process of (8))
321 function draw(gl, canvas, fbo, plane, cube, angle, texture, viewProjMatrix,
➥viewProjMatrixFBO) {
322 gl.bindFramebuffer(gl.FRAMEBUFFER, fbo); <-(8)
323 gl.viewport(0, 0, OFFSCREEN_WIDTH, OFFSCREEN_HEIGHT); // For FBO
324
325 gl.clearColor(0.2, 0.2, 0.4, 1.0); // Color is slightly changed
326 gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT); // Clear FBO
327 // Draw the cube
328 drawTexturedCube(gl, gl.program, cube, angle, texture, viewProjMatrixFBO);
329 // Change the drawing destination to color buffer
330 gl.bindFramebuffer(gl.FRAMEBUFFER, null);
331   // Set the size of the viewport back to that of the <canvas>
332 gl.viewport(0, 0, canvas.width, canvas.height);
333 gl.clearColor(0.0, 0.0, 0.0, 1.0);
334 gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
335 // Draw the plane
336 drawTexturedPlane(gl, gl.program, plane, angle, fbo.texture , viewProjMatrix);
337 }
Line 322 switches the drawing destination to the framebuffer object using gl.bindFrame-
buffer() . As a result, draw operations using gl.drawArrays() or gl.drawElements() are
performed for the framebuffer object. Line 323 uses gl.viewport() to specify the drawing area in the buffer (an offscreen area).
gl.viewport(x, y, width, height)
Set the viewport where gl.drawArrays() or gl.drawElements() draws. In WebGL, x and y are specified in the <canvas> coordinate system.
Parameters x, y Specify the lower-left corner of the viewport rectangle (in
pixels).
width, height Specify the width and height of the viewport (in pixels).
Return value None
Errors None
Line 326 clears the texture image and the depth buffer bound to the framebuffer object.
When a cube is drawn at line 328, it is drawn in the texture image. To make it easier to
see the result, the clear color at line 325 is changed to a purplish blue from black. The
result of this is that the cube has been drawn into the texture buffer and is now available
for use as a texture image. The next step is to draw a rectangle ( plane ) using this texture
image. In this case, because you want to draw in the color buffer, you need to set the
drawing destination back to the color buffer. This is done at line 330 by specifying null
for the second argument of gl.bindFramebuffer() (that is, cancelling the binding). Then
line 336 draws the plane . You should note that fbo.texture is passed as the texture argu-
ment and used to map the drawn content to the rectangle. You will notice that in this
sample program, the texture image is mapped onto the back side of the rectangle. This is
because WebGL, by default, draws both sides of a polygon. You can eliminate the back
face drawing by enabling the culling function using gl.enable(gl.CULL_FACE) , which
increases the drawing speed (ideally making it twice as fast).
Display Shadows
Chapter 8 explained shading, which is one of the phenomena that occur when light hits an object. We briefly mentioned shadowing, another such phenomenon, but didn't explain how to implement it. Let's take a look at that now. There are several methods to realize shadowing,
but we will explain a method that uses a shadow map (depth map). This method is quite
expressive and used in a variety of computer graphics situations and even in special effects
in movies.
How to Implement Shadows
The shadow map method is based on the idea that the sun cannot see the shadow of
objects. Essentially, it works by considering the viewer’s eye point to be at the same posi-
tion as the light source and determining what can be seen from that point. All the objects
you can see would appear to be in the light. Anything behind those objects would be in
shadow. With this method, you can use the distance to the objects (in fact, you will use
the z value, which is the depth value) from the light source to judge whether the objects
are visible. As you can see in Figure 10.21, where two points, P1 and P2, lie on the same line from the light source, P2 is in shadow because the distance from the light source to P2 is longer than the distance to P1.
Figure 10.21 Theory of shadow map
You need two pairs of shaders for this process: (1) a pair of shaders that calculate the
distance from the light source to the objects, and (2) a pair of shaders that draws the
shadow using the calculated distance. Then you need a method to pass the distance data
from the light source calculated in the first pair of shaders to the second pair of shaders.
You can use a texture image for this purpose. This texture image is called the shadow
map , so this method is called shadow mapping . The shadow mapping technique consists
of the following two processes:
1. Move the eye point to the position of the light source and draw objects from there.
Because the fragments drawn from the position are hit by the light, you write the
distances from the light source to each fragment in the texture image (shadow map).
2. Move the eye point back to the position from which you want to view the objects
and draw them from there. Compare the distance from the light source to the frag-
ments drawn in this step and the distance recorded in the shadow map from step
1. If the former distance is greater, you can draw the fragment as in shadow (in the
darker color).
You will use the framebuffer object in step 1 to save the distance in the texture image.
Therefore, the configuration of the framebuffer object used here is the same as that of
FramebufferObject.js in Figure 10.20 . You also need to switch pairs of shaders between
steps 1 and 2 using the technique you learned in the section “Switching Shaders,” earlier
in this chapter. Now let’s take a look at the sample program Shadow . Figure 10.22 shows
a screen shot where you can see a shadow of the red triangle cast onto the slanted white
rectangle.
Figure 10.22 Shadow
Sample Program (Shadow.js)
The key aspects of shadowing take place in the shaders, which are shown in Listing 10.15 .
Listing 10.15 Shadow.js (Shader part)
1 // Shadow.js
2 // Vertex shader program to generate a shadow map
3 var SHADOW_VSHADER_SOURCE =
...
6 'void main() {\n' +
7 ' gl_Position = u_MvpMatrix * a_Position;\n' +
8 '}\n';
9
10 // Fragment shader program for creating a shadow map
11 var SHADOW_FSHADER_SOURCE =
...
15 'void main() {\n' +
16 ' gl_FragColor = vec4(gl_FragCoord.z, 0.0, 0.0, 0.0);\n' + <-(1)
17 '}\n';
18
19 // Vertex shader program for regular drawing
20 var VSHADER_SOURCE =
...
23 'uniform mat4 u_MvpMatrix;\n' +
24 'uniform mat4 u_MvpMatrixFromLight;\n' +
25 'varying vec4 v_PositionFromLight;\n' +
26 'varying vec4 v_Color;\n' +
27 'void main() {\n' +
28 ' gl_Position = u_MvpMatrix * a_Position;\n' +
29 ' v_PositionFromLight = u_MvpMatrixFromLight * a_Position;\n' +
30 ' v_Color = a_Color;\n' +
31 '}\n';
32
33 // Fragment shader program for regular drawing
34 var FSHADER_SOURCE =
...
38 'uniform sampler2D u_ShadowMap;\n' +
39 'varying vec4 v_PositionFromLight;\n' +
40 'varying vec4 v_Color;\n' +
41 'void main() {\n' +
42 ' vec3 shadowCoord =(v_PositionFromLight.xyz/v_PositionFromLight.w)
➥/ 2.0 + 0.5;\n' +
43 ' vec4 rgbaDepth = texture2D(u_ShadowMap, shadowCoord.xy);\n' +
44 ' float depth = rgbaDepth.r;\n' + // Retrieve the z value from R
45 ' float visibility = (shadowCoord.z > depth + 0.005) ? 0.7:1.0;\n'+ <-(2)
46 ' gl_FragColor = vec4(v_Color.rgb * visibility, v_Color.a);\n' +
47 '}\n';
Step 1 is performed in the shader responsible for the shadow map, defined from lines 3 to
17. You just switch the drawing destination to the framebuffer object, pass a model view
projection matrix in which an eye point is located at a light source to u_MvpMatrix , and
draw the objects. This results in the distance from the light source to the fragments being
written into the texture map (shadow map) attached to the framebuffer object. The vertex
shader at line 7 just multiplies the model view projection matrix by the vertex coordinates
to calculate this distance. The fragment shader is more complex and needs to calculate the
distance from the light source to the drawn fragments. For this purpose, you can utilize
the built-in variable gl_FragCoord of the fragment shader used in Chapter 5 .
gl_FragCoord is a vec4 type built-in variable that contains the coordinates of each frag-
ment. gl_FragCoord.x and gl_FragCoord.y represent the position of the fragment on the screen, and gl_FragCoord.z contains the normalized z value in the range of [0, 1]. This is calculated as (gl_Position.z / gl_Position.w) / 2.0 + 0.5. (See Section 2.12 of the OpenGL ES
2.0 specification for further details.) gl_FragCoord.z is specified in the range of 0.0 to 1.0,
with 0.0 representing the fragments on the near clipping plane and 1.0 representing those
on the far clipping plane. This value is written into the R (red) component value (any
component could be used) in the shadow map at line 16.
16 ' gl_FragColor = vec4(gl_FragCoord.z, 0.0, 0.0, 0.0);\n' + <-(1)
Subsequently, the z value for each fragment drawn from the eye point placed at the light
source is written into the shadow map. This shadow map is passed to u_ShadowMap at
line 38.
For step 2, you need to draw the objects again after resetting the drawing destination
to the color buffer and moving the eye point to its original position. After drawing the
objects, you decide a fragment color by comparing the z value of the fragment with
that stored in the shadow map. This is done in the normal shaders from lines 20 to 47.
u_MvpMatrix is the model view projection matrix where the eye point is placed at the orig-
inal position and uMvpMatrixFromLight , which was used to create the shadow map, is the
model view projection matrix where the eye point is moved to the light source. The main
task of the vertex shader defined at line 20 is calculating the coordinates of each fragment
from the light source and passing them to the fragment shader (line 29) to obtain the z
value of each fragment from the light source.
The fragment shader uses the coordinates to calculate the z value (line 42). As mentioned,
the shadow map contains the value of (gl_Position.z/gl_Position.w)/2.0 + 0.5. So
you could simply calculate the z value to compare with the value in the shadow map
by (v_PositionFromLight.z/v_PositionFromLight.w)/2.0+0.5 . However, because you
need to get the texel value from the shadow map, line 42 does a little extra work using the same kind of operation. To compare against the value in the shadow map, you need the texel whose texture coordinates correspond to the coordinates (v_PositionFromLight.x, v_PositionFromLight.y). As you know,
v_PositionFromLight.x and v_PositionFromLight.y are the x and y coordinates in the
WebGL coordinate system (see Figure 2.18 in Chapter 2 ), and they range from –1.0 to 1.0.
On the other hand, the texture coordinates s and t in the shadow map range from 0.0 to
1.0 (see Figure 5.20 in Chapter 5 ). So, you need to convert the x and y coordinates to the s
and t coordinates. You can also do this with the same expression to calculate the z value.
That is:
The texture coordinate s is (v_PositionFromLight.x/v_PositionFromLight.w)/2.0 + 0.5 .
The texture coordinate t is (v_PositionFromLight.y/v_PositionFromLight.w)/2.0 + 0.5 .
See also Section 2.12 of the OpenGL ES 2.0 specification for further details about this calculation.5 Both conversions use the same type of calculation as the z value and can be achieved in one line, as shown at line 42:
5 www.khronos.org/registry/gles/specs/2.0/es_full_spec_2.0.25.pdf
42 ' vec3 shadowCoord =(v_PositionFromLight.xyz/v_PositionFromLight.w)
➥/ 2.0 + 0.5;\n' +
43 ' vec4 rgbaDepth = texture2D(u_ShadowMap, shadowCoord.xy);\n' +
44 ' float depth = rgbaDepth.r;\n' + // Retrieve the z value from R
You retrieve the value from the shadow map at lines 43 and 44. Only the R value is
retrieved using rgbaDepth.r at line 44 because you wrote it into the R component at line
16. Line 45 checks whether that fragment is in the shadow. When the position of the
fragment is determined to be greater than the depth (that is, shadowCoord.z > depth) , a
value of 0.7 is stored in visibility . The visibility is used at line 46 to draw the shadow
with a darker color:
45 ' float visibility = (shadowCoord.z > depth + 0.005) ? 0.7:1.0;\n'+
46 ' gl_FragColor = vec4(v_Color.rgb * visibility, v_Color.a);\n' +
Line 45 adds a small offset of 0.005 to the depth value. To understand why this is needed,
try running the sample program without this number. You will see a striped pattern as
shown in Figure 10.23 , referred to as the Mach band .
Figure 10.23 Striped pattern
The value of 0.005 is added to suppress the stripe pattern. The stripe pattern occurs
because of the precision of the numbers you can store in the RGBA components. It’s a
little complex, but it’s worth understanding because this problem occurs elsewhere in 3D
graphics. The z value of the shadow map is stored in the R component of RGBA in the
texture map, which is an 8-bit number. This means that the precision of R is lower than
its comparison target ( shadowCoord.z ), which is of type float . For example, let the z value
simply be 0.1234567. If you represent the value using 8 bits, in other words using 256
possibilities, you can represent the value with a precision of 1/256 (=0.00390625). So you can
represent 0.1234567 as follows:
0.1234567 / (1 / 256) = 31.6049152
The fractional part cannot be represented in 8 bits, so only 31 is stored. When you divide 31 by 256, you obtain 0.12109375 which, as you can see, is smaller than the original value (0.1234567). This means that even if a fragment is at exactly the same position, the z value stored in the shadow map becomes smaller than the z value in shadowCoord.z. As a result, depending on the position of the fragment, shadowCoord.z becomes larger than the value in the shadow map, and the fragment incorrectly shadows itself, producing the stripe pattern. Because this error is never larger than the precision of the R value, 1/256 (=0.00390625), adding a small offset such as 0.005 to the stored depth stops the stripe pattern from appearing. Note that any offset greater than 1/256 will work; 0.005 was chosen because it is 1/256 plus a small margin.
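To see the arithmetic concretely, the following few lines of plain JavaScript (not part of the sample program) reproduce the 8-bit rounding described above and show why the offset is needed:

var z = 0.1234567;                        // z value of the fragment (a float)
var stored = Math.floor(z * 256) / 256;   // the best an 8-bit R component can do
console.log(stored);                      // 0.12109375, smaller than z
console.log(z > stored);                  // true: the fragment would shadow itself
console.log(z > stored + 0.005);          // false: the 0.005 offset suppresses the stripes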
Next, let's look at the JavaScript program that passes the data to the shaders (see Listing 10.16), with a focus on the transformation matrices being passed. To draw the shadow clearly, the size of the texture map used for offscreen rendering, defined at line 49, is larger than that of the <canvas> element.
Listing 10.16 Shadow.js (JavaScript Part)
49 var OFFSCREEN_WIDTH = 1024, OFFSCREEN_HEIGHT = 1024;
50 var LIGHT_X = 0, LIGHT_Y = 7, LIGHT_Z = 2;
51
52 function main() {
...
63 // Initialize shaders for generating a shadow map
64 var shadowProgram = createProgram(gl, SHADOW_VSHADER_SOURCE,
➥SHADOW_FSHADER_SOURCE);
...
72 // Initialize shaders for regular drawing
73 var normalProgram = createProgram(gl, VSHADER_SOURCE, FSHADER_SOURCE);
...
85 // Set vertex information
86 var triangle = initVertexBuffersForTriangle(gl);
87 var plane = initVertexBuffersForPlane(gl);
...
93 // Initialize a framebuffer object (FBO)
94 var fbo = initFramebufferObject(gl);
...
99 gl.activeTexture(gl.TEXTURE0); // Set a texture object to the texture unit
100 gl.bindTexture(gl.TEXTURE_2D, fbo.texture);
...
106 var viewProjMatrixFromLight = new Matrix4(); // For the shadow map
107 viewProjMatrixFromLight.setPerspective(70.0,
➥OFFSCREEN_WIDTH/OFFSCREEN_HEIGHT, 1.0, 100.0);
108 viewProjMatrixFromLight.lookAt(LIGHT_X, LIGHT_Y, LIGHT_Z, 0.0, 0.0, 0.0, 0.0,
➥1.0, 0.0);
109
110 var viewProjMatrix = new Matrix4(); // For regular drawing
111 viewProjMatrix.setPerspective(45, canvas.width/canvas.height, 1.0, 100.0);
112 viewProjMatrix.lookAt(0.0, 7.0, 9.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
113
114 var currentAngle = 0.0; // Current rotation angle [degrees]
115 var mvpMatrixFromLight_t = new Matrix4(); // For triangle
116 var mvpMatrixFromLight_p = new Matrix4(); // For plane
117 var tick = function() {
118 currentAngle = animate(currentAngle);
119 // Change the drawing destination to FBO
120 gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
...
124 gl.useProgram(shadowProgram); // For generating a shadow map
125 // Draw the triangle and the plane (for generating a shadow map)
126 drawTriangle(gl, shadowProgram, triangle, currentAngle,
➥viewProjMatrixFromLight);
127 mvpMatrixFromLight_t.set(g_mvpMatrix); // Used later
128 drawPlane(gl, shadowProgram, plane, viewProjMatrixFromLight);
129 mvpMatrixFromLight_p.set(g_mvpMatrix); // Used later
130 // Change the drawing destination to color buffer
131 gl.bindFramebuffer(gl.FRAMEBUFFER, null);
...
135 gl.useProgram(normalProgram); // For regular drawing
136 gl.uniform1i(normalProgram.u_ShadowMap, 0); // Pass gl.TEXTURE0
137 // Draw the triangle and plane (for regular drawing)
138 gl.uniformMatrix4fv(normalProgram.u_MvpMatrixFromLight, false,
➥mvpMatrixFromLight_t.elements);
139 drawTriangle(gl, normalProgram, triangle, currentAngle, viewProjMatrix);
140 gl.uniformMatrix4fv(normalProgram.u_MvpMatrixFromLight, false,
➥mvpMatrixFromLight_p.elements);
141 drawPlane(gl, normalProgram, plane, viewProjMatrix);
142
143 window.requestAnimationFrame(tick, canvas);
144 };
145 tick();
146 }
Let’s look at the main() function from line 52 in the JavaScript program. Line 64
initializes the shaders for generating the shadow map. Line 73 initializes the shaders
for normal drawing. Lines 86 and 87, which set up the vertex information and
initFramebufferObject() at line 94, are the same as in FramebufferObject.js. Line 94
prepares a framebuffer object, which contains the texture object for a shadow map. Lines
99 and 100 make texture unit 0 active and bind the texture object attached to the framebuffer (the shadow map) to it. This texture unit number is later passed to u_ShadowMap in the shaders for regular drawing (line 136).
Lines 106 to 108 prepare a view projection matrix to generate a shadow map. The key
point is that the first three arguments (that is, the position of an eye point) at line 108 are
specified as the position of the light source. Lines 110 to 112 prepare the view projection
matrix from the eye point where you want to view the scene.
Finally, you draw the triangle and plane using all the preceding information. First you
generate the shadow map, so you switch the drawing destination to the framebuffer
object at line 120. You draw the objects by using the shaders for generating a shadow map
( shadowProgram ) at lines 126 and 128. You should note that lines 127 and 129 save the
model view projection matrices from the light source. Then the shadow map is generated,
and you use it to draw shadows with the code from line 135. Line 136 passes the map to
the fragment shader. Lines 138 and 140 pass the model view projection matrices saved at
line 127 and 129, respectively, to u_MvpMatrixFromLight .
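drawTriangle() and drawPlane() themselves are omitted from Listing 10.16. To make the role of g_mvpMatrix (saved at lines 127 and 129) clearer, here is a rough sketch of what such a helper does; the rotation axis, the draw call details, and the assumption that the object's buffers are already bound are simplifications, not the book's exact code:

var g_modelMatrix = new Matrix4();  // Model matrix (assumed to be a global)
var g_mvpMatrix = new Matrix4();    // Model view projection matrix read at lines 127 and 129

function drawTriangle(gl, program, triangle, angle, viewProjMatrix) {
  // Rotate the triangle around the y-axis by the current angle
  g_modelMatrix.setRotate(angle, 0.0, 1.0, 0.0);
  // Combine the view projection matrix passed in (from the light or from the eye)
  // with the model matrix and pass the result to u_MvpMatrix
  g_mvpMatrix.set(viewProjMatrix);
  g_mvpMatrix.multiply(g_modelMatrix);
  gl.uniformMatrix4fv(program.u_MvpMatrix, false, g_mvpMatrix.elements);
  // Draw the triangle (its buffer objects are assumed to be bound already)
  gl.drawElements(gl.TRIANGLES, triangle.numIndices, gl.UNSIGNED_BYTE, 0);
}

Because the same g_mvpMatrix is reused for both objects and both passes, lines 127 and 129 must copy it with set() immediately after each shadow-map draw.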
Increasing Precision
Although you’ve successfully calculated the shadow and drawn the scene with the shadow
included, the example code is only able to handle situations in which the light source is
close to the object. To see this, let’s change the y coordinate of the light source position
to 40:
50 var LIGHT_X = 0, LIGHT_Y = 40, LIGHT_Z = 2;
If you run the modified sample program, you can see that the shadow is not displayed, as in the left side of Figure 10.24. Obviously, you want the shadow to be displayed correctly, as in the figure on the right.

Figure 10.24 The shadow is not displayed
The reason the shadow is no longer displayed when the distance from the light source to the object is increased is that the R component of the texture map, which has only 8 bits of precision, can no longer store gl_FragCoord.z accurately enough to tell the triangle and the plane apart. A simple solution to
this problem is to use not just the R component but the B, G, and A components. In other
words, you save the value separately in 4 bytes. There is a routine procedure to do this, so
let’s see the sample program. Only the fragment shader is changed.
Sample Program (Shadow_highp.js)
Listing 10.17 shows the fragment shader of Shadow_highp.js . You can see that the
processing to handle the z value is more complex than that in Shadow.js .
Listing 10.17 Shadow_highp.js
1 // Shadow_highp.js
...
10 // Fragment shader program for creating a shadow map
11 var SHADOW_FSHADER_SOURCE =
...
15 'void main() {\n' +
16 ' const vec4 bitShift = vec4(1.0, 256.0, 256.0 * 256.0, 256.0 * 256.0 *
➥256.0);\n' +
17 ' const vec4 bitMask = vec4(1.0/256.0, 1.0/256.0, 1.0/256.0, 0.0);\n' +
18 ' vec4 rgbaDepth = fract(gl_FragCoord.z * bitShift);\n' +
19 ' rgbaDepth -= rgbaDepth.gbaa * bitMask;\n' +
20 ' gl_FragColor = rgbaDepth;\n' +
21 '}\n';
...
37 // Fragment shader program for regular drawing
38 var FSHADER_SOURCE =
...
45 // Recalculate the z value from the rgba
46 'float unpackDepth(const in vec4 rgbaDepth) {\n' +
47 ' const vec4 bitShift = vec4(1.0, 1.0/256.0, 1.0/(256.0 * 256.0),
➥1.0/(256.0 * 256.0 * 256.0));\n' +
48 ' float depth = dot(rgbaDepth, bitShift);\n' +
49 ' return depth;\n' +
50 '}\n' +
51 'void main() {\n' +
52 ' vec3 shadowCoord = (v_PositionFromLight.xyz /
➥v_PositionFromLight.w)/2.0 + 0.5;\n' +
53 ' vec4 rgbaDepth = texture2D(u_ShadowMap, shadowCoord.xy);\n' +
54 ' float depth = unpackDepth(rgbaDepth);\n' + // Recalculate the z
55 ' float visibility = (shadowCoord.z > depth + 0.0015)? 0.7:1.0;\n'+
56 ' gl_FragColor = vec4(v_Color.rgb * visibility, v_Color.a);\n' +
57 '}\n';
The code that splits gl_FragCoord.z into 4 bytes (RGBA) is from lines 16 to 19. Because 1 byte can resolve steps of 1/256, you store the part of the value that is a multiple of 1/256 in R, the part between 1/256 and 1/(256*256) in G, the part between 1/(256*256) and 1/(256*256*256) in B, and the remainder in A. Line 18 calculates each value
and stores it in the RGBA components, respectively. It can be written in one line using a
vec4 data type. The function fract() is a built-in function that returns the fractional part of its argument (discarding the integer part). Each value in the vec4 calculated at line 18 still has more precision than 1 byte, so line 19 discards the part that does not fit in 1 byte. By assigning this result to gl_FragColor at line 20, you save the z value across all four RGBA components and achieve higher precision.
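The packing and unpacking can be checked outside the shader as well. The following plain JavaScript mirrors lines 16 to 19 and unpackDepth() for a single value; it is only a sketch for experimentation and is not part of Shadow_highp.js:

function fract(x) { return x - Math.floor(x); }   // same behavior as GLSL fract()

// Split a z value in [0, 1) into four components (mirrors lines 16 to 19)
function packDepth(z) {
  var bitShift = [1.0, 256.0, 256.0 * 256.0, 256.0 * 256.0 * 256.0];
  var rgba = bitShift.map(function (s) { return fract(z * s); });
  // Remove from each component the part that the next component already stores
  rgba[0] -= rgba[1] / 256.0;
  rgba[1] -= rgba[2] / 256.0;
  rgba[2] -= rgba[3] / 256.0;
  return rgba;   // in the real shader each component is then quantized to 8 bits
}

// Recover the z value (mirrors unpackDepth() at lines 46 to 50)
function unpackDepth(rgba) {
  return rgba[0] + rgba[1] / 256.0 +
         rgba[2] / (256.0 * 256.0) + rgba[3] / (256.0 * 256.0 * 256.0);
}

console.log(unpackDepth(packDepth(0.1234567)));   // prints 0.1234567 (within floating-point error)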
unpackDepth() at line 54 reads out the z value from the RGBA. This function is defined at
line 46. Line 48 performs the following calculation to convert the RGBA value to the origi-
nal z value. As you can see, the calculation is the same as the inner product, so you use
dot() at line 48.
depth = rgbaDepth.r * 1.0 + rgbaDepth.g * (1.0 / 256.0)
      + rgbaDepth.b * (1.0 / (256.0 * 256.0)) + rgbaDepth.a * (1.0 / (256.0 * 256.0 * 256.0))
Now you have retrieved the distance (z value) successfully, so you just have to draw the
shadow by comparing the distance with shadowCoord.z at line 55. In this case, 0.0015
is used as the offset for suppressing the error (the stripe pattern), instead of 0.005. This is because the precision of the z value is now that of a medium-precision float (that is, its precision is 2^-10 = 0.000976563, as shown in Table 6.15 in Chapter 6). So you add a little margin to it and choose 0.0015 as the value. After that, the shadow can be drawn correctly.
Load and Display 3D Models
In the previous chapters, you drew 3D objects by specifying their vertex coordinates
and color information by hand and stored them in arrays of type Float32Array in
the JavaScript program. However, as mentioned earlier in the book, in most cases you
will actually read the vertex coordinates and color information from 3D model files
constructed by a 3D modeling tool.
In this section, you construct a sample program that reads a 3D model constructed using
a 3D modeling tool. For this example, we use the Blender modeling tool,6 which is a popular tool with a free version available. Blender is able to export 3D model files using
the well-known OBJ format, which is text based and easy to read, understand, and parse.
OBJ is a geometry definition file format originally developed by Wavefront Technologies.
This file format is open and has been adopted by other 3D graphics vendors. Although
this means it is reasonably well known and used, it also means that there are a number
of variations in the format. To simplify the example code, we have made a number of
assumptions, such as not using textures. However, the example gives you a good under-
standing of how to read model data into your programs and provides a basis for you to
begin experimentation. The approach taken in the example code is designed to be reason-
ably generic and can be used for other text-based formats.
Start Blender and create a cube like that shown in Figure 10.25 . The color of one face of
this cube is orange, and the other faces are red. Then export the model to a file named
cube.obj . (You can find an example of it in the resources directory with the sample
programs.) Let’s take a look at cube.obj , which, because it is a text file, can be opened
with a simple text editor.
Figure 10.25 Blender, 3D modeling tool
6. www.blender.org/
Figure 10.26 shows the contents of cube.obj . Line numbers have been added to help with
the explanation and would not normally be in the file.
1 # Blender v2.60 (sub 0) OBJ File: ''
2 # www.blender.org
3 mtllib cube.mtl
4 o Cube
5 v 1.000000 -1.000000 -1.000000
6 v 1.000000 -1.000000 1.000000
7 v -1.000000 -1.000000 1.000000
8 v -1.000000 -1.000000 -1.000000
9 v 1.000000 1.000000 -1.000000
10 v 1.000000 1.000000 1.000000
11 v -1.000000 1.000000 1.000000
12 v -1.000000 1.000000 -1.000000
13 usemtl Material
14 f 1 2 3 4
15 f 5 8 7 6
16 f 2 6 7 3
17 f 3 7 8 4
18 f 5 1 4 8
19 usemtl Material.001
20 f 1 5 6 2
Figure 10.26 cube.obj
Once the model file has been created by the modeling tool, your program needs to read
the data and store it in the same type of data structures that you’ve used before. The
following steps are required:
1. Prepare the array ( vertices ) of type Float32Array and read the vertex coordinates of
the model from the file into the array.
2. Prepare the array ( colors ) of type Float32Array and read the colors of the model
from the file into the array.
3. Prepare the array ( normals ) of type Float32Array and read the normals of the model from the file into the array.
4. Prepare the array ( indices ) of type Uint16Array (or Uint8Array ) and read the indices
of the vertices that specify the triangles that make up the model from the file into
the array.
5. Write the data read during steps 1 through 4 into the buffer object and then draw
the model using gl.drawElements() .
So in this case, you read the data described in cube.obj (shown in Figure 10.26 ) in the
appropriate arrays and then draw the model in step 5. Reading data from the file requires
understanding the format of the file cube.obj (referred to as the OBJ file).
The OBJ File Format
An OBJ file is made up of several sections,7 including vertex positions, face definitions,
and material definitions. There may be multiple vertices, normals, and faces within their
sections:
• Lines beginning with a hash character (#) are comments. Lines 1 and 2 in Figure
10.26 are comments generated by Blender describing its version number and origin.
The remaining lines define the 3D model.
• Line 3 references an external materials file. The OBJ format maintains the material
information of the model in an external material file called an MTL file.
mtllib <material file name>
Here, it specifies that the materials file is cube.mtl.
• Line 4 specifies the named object in the following format:
o <object name>
This sample program does not use this information.
• Lines 5 to 12 define vertex positions in the following format using (x,y,z[,w]) coordi-
nates, where w is optional and defaults to 1.0.
v x y z [w]
In this example, it has eight vertices because the model is a standard cube.
• Lines 13 to 20 specify a material and the faces that use the material. Line 13 specifies
the material name, as defined in the MTL file referenced at line 3, using the following format:
usemtl <material name>
• The following lines, 14 to 18, define faces of the model and the material to be
applied to them. Faces are defined using lists of vertex, texture, and normal indices.
f v1 v2 v3 v4 ...
v1, v2, v3, ... are the vertex indices starting from 1 and matching the correspond-
ing vertex elements of the previously defined vertex list. This sample program handles vertices and normals (a short stand-alone parsing sketch follows this list). Figure 10.26 does not contain normals, but if a face has normals, the following format would be used:
f v1//vn1 v2//vn2 v3//vn3 ...
vn1, vn2, vn3, ... are the normal indices starting from 1.
7 See http://en.wikipedia.org/wiki/Wavefront_.obj_file
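Here is the stand-alone parsing sketch promised above. It splits one face line into vertex and normal indices and is deliberately simpler than the parseFace() used by OBJViewer.js (examined later); the -1 marker for a missing normal mirrors the nIdx >= 0 check you will see in getDrawingInfo():

// Parse one "f ..." line such as "f 1 2 3 4" or "f 1//1 2//2 3//3"
function parseFaceLine(line) {
  var words = line.trim().split(/\s+/);   // ["f", "1//1", "2//2", ...]
  var vIndices = [], nIndices = [];
  for (var i = 1; i < words.length; i++) {
    var parts = words[i].split('/');      // "v", "v/vt", or "v//vn"
    vIndices.push(parseInt(parts[0]) - 1);                    // OBJ indices start at 1
    nIndices.push(parts[2] ? parseInt(parts[2]) - 1 : -1);    // -1 means no normal given
  }
  return { vIndices: vIndices, nIndices: nIndices };
}

console.log(parseFaceLine('f 1 2 3 4'));        // vIndices: [0, 1, 2, 3], nIndices: [-1, -1, -1, -1]
console.log(parseFaceLine('f 1//2 3//2 4//2')); // vIndices: [0, 2, 3],    nIndices: [1, 1, 1]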
The MTL File Format
The MTL file may define multiple materials. Figure 10.27 shows cube.mtl .
1 # Blender MTL File: ''
2 # Material Count: 2
3 newmtl Material
4 Ka 0.000000 0.000000 0.000000
5 Kd 1.000000 0.000000 0.000000
6 Ks 0.000000 0.000000 0.000000
7 Ns 96.078431
8 Ni 1.000000
9 d 1.000000
10 illum 0
11 newmtl Material.001
12 Ka 0.000000 0.000000 0.000000
13 Kd 1.000000 0.450000 0.000000
14 Ks 0.000000 0.000000 0.000000
15 Ns 96.078431
16 Ni 1.000000
17 d 1.000000
18 illum 0
Figure 10.27 cube.mtl
• Lines 1 and 2 are comments that Blender generates.
• Each new material (from line 3) starts with the newmtl command:
newmtl <material name>
The material name given here is the name that is used in the OBJ file.
• Lines 4 to 6 define the ambient, diffuse, and specular color using Ka , Kd , and Ks ,
respectively. Color definitions are in RGB format, where each component is between
0 and 1. This sample program uses only diffuse color.
• Line 7 specifies the weight of the specular color using Ns . Line 8 specifies the optical
density for the surface using Ni . Line 9 specifies transparency using d . Line 10 speci-
fies the illumination model using illum. The sample program does not use these items of information.
Given this understanding of the structure of the OBJ and MTL files, you have to extract
the vertex coordinates, colors, normals, and indices describing a face from the file, write
them into the buffer objects, and draw with gl.drawElements() . The OBJ file may not
have the information on normals, but you can calculate them from the vertex coordinates
that make up a face by using a “cross product.”8
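Written as JavaScript, the cross-product calculation from footnote 8 looks like this (a helper sketch; OBJViewer.js computes face normals with its own internal function):

// Compute a (non-normalized) face normal from three vertices of a triangle
function calcNormal(v0, v1, v2) {
  var x1 = v1.x - v0.x, y1 = v1.y - v0.y, z1 = v1.z - v0.z;   // vector from v0 to v1
  var x2 = v2.x - v1.x, y2 = v2.y - v1.y, z2 = v2.z - v1.z;   // vector from v1 to v2
  return {                                                    // cross product of the two vectors
    x: y1 * z2 - z1 * y2,
    y: z1 * x2 - x1 * z2,
    z: x1 * y2 - y1 * x2
  };
}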
Let’s look at the sample program.
8 If the vertices of a triangle are v0, v1, and v2, the vector from v0 to v1 is (x1, y1, z1), and the vector from v1 to v2 is (x2, y2, z2), then the cross product is defined as (y1*z2 – z1*y2, z1*x2 – x1*z2, x1*y2 – y1*x2). The result will be the normal for the triangle. (See the book 3D Math Primer for Graphics and Game Development.)
Sample Program (OBJViewer.js)
The basic steps are as follows: (1) prepare an empty buffer object, (2) read an OBJ file (an
MTL file), (3) parse it, (4) write the results into the buffer object, and (5) draw. These steps
are implemented as shown in Listing 10.18 .
Listing 10.18 OBJViewer.js
1 // OBJViewer.js
...
28 function main() {
...
40 if (!initShaders(gl, VSHADER_SOURCE, FSHADER_SOURCE)) {
41 console.log('Failed to initialize shaders.');
42 return;
43 }
...
49 // Get the storage locations of attribute and uniform variables
50 var program = gl.program;
51 program.a_Position = gl.getAttribLocation(program, 'a_Position');
52 program.a_Normal = gl.getAttribLocation(program, 'a_Normal');
53 program.a_Color = gl.getAttribLocation(program, 'a_Color');
...
63 // Prepare empty buffer objects for vertex coordinates, colors, and normals
64 var model = initVertexBuffers(gl, program);
...
75 // Start reading the OBJ file
76 readOBJFile('../resources/cube.obj', gl, model, 60, true);
...
81 draw(gl, gl.program, currentAngle, viewProjMatrix, model);
...
85 }
86
87 // Create a buffer object and perform the initial configuration
88 function initVertexBuffers(gl, program) {
89 var o = new Object();
90 o.vertexBuffer = createEmptyArrayBuffer(gl, program.a_Position, 3, gl.FLOAT);
91 o.normalBuffer = createEmptyArrayBuffer(gl, program.a_Normal, 3, gl.FLOAT);
92 o.colorBuffer = createEmptyArrayBuffer(gl, program.a_Color, 4, gl.FLOAT);
93 o.indexBuffer = gl.createBuffer();
...
98 return o;
99 }
100
101 // Create a buffer object, assign it to attribute variables, and enable the
➥assignment
102 function createEmptyArrayBuffer(gl, a_attribute, num, type) {
103 var buffer = gl.createBuffer(); // Create a buffer object
...
108 gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
109 gl.vertexAttribPointer(a_attribute, num, type, false, 0, 0);
110 gl.enableVertexAttribArray(a_attribute); // Enable the assignment
111
112 return buffer;
113 }
114
115 // Read a file
116 function readOBJFile(fileName, gl, model, scale, reverse) {
117 var request = new XMLHttpRequest();
118
119 request.onreadystatechange = function() {
120 if (request.readyState === 4 && request.status !== 404) {
121 onReadOBJFile(request.responseText, fileName, gl, model, scale, reverse);
122 }
123 }
124 request.open('GET', fileName, true); // Create a request to get file
125 request.send(); // Send the request
126 }
127
128 var g_objDoc = null; // The information of OBJ file
129 var g_drawingInfo = null; // The information for drawing 3D model
130
131 // OBJ file has been read
132 function onReadOBJFile(fileString, fileName, gl, o, scale, reverse) {
133 var objDoc = new OBJDoc(fileName); // Create a OBJDoc object
134 var result = objDoc.parse(fileString, scale, reverse);
135 if (!result) {
136 g_objDoc = null; g_drawingInfo = null;
137 console.log("OBJ file parsing error.");
138 return;
139 }
140 g_objDoc = objDoc;
141 }
Within the JavaScript, the processing in initVertexBuffers() , called at line 64, has been
changed. The function simply prepares an empty buffer object for the vertex coordinates,
colors, and normals for the 3D model to be displayed. After parsing the OBJ file, the infor-
mation corresponding to each buffer object will be written in the object.
The initVertexBuffers() function at line 88 creates the appropriate empty buffer objects
at lines 90 to 92 using createEmptyArrayBuffer() and assigns them to an attribute vari-
able. This function is defined at line 102 and, as you can see, creates a buffer object
(line 103), assigns it to an attribute variable (line 109), and enables the assignment (line
110), but it does not write any data. After these buffer objects are stored in model at line 64, the preparation of the buffer objects is complete. The next step is to read the OBJ
file contents into this buffer, which takes place at line 76 using readOBJFile() . The first
argument is the location of the file (URL), the second one is gl , and the third one is the
Object object ( model ) that packages the buffer objects. The tasks carried out by this func-
tion are similar to those when loading a texture image using the Image object and are
shown here:
(2.1) Create an XMLHttpRequest object (line 117).
(2.2) Register the event handler to be called when the loading of the file is completed
(line 119).
(2.3) Create a request to acquire the file using the open() method (line 124).
(2.4) Send the request to acquire the file (line 125).
Line 117 creates the XMLHttpRequest object, which sends an HTTP request to a web server.
Line 119 is the registration of the event handler that will be called after the browser has
loaded the file. Line 124 creates the request to acquire the file using the open() method.
Because you are requesting a file, the first argument is GET , and the second one is the URL
for the file. The last one specifies whether or not the request is asynchronous. Finally, line
125 uses the send() method to send the request to the web server to get the file.9
Once the browser has loaded the file, the event handler at line 119 is called. Line 120
checks the result of the load request. If the readyState property is 4, the loading process has completed; if the status property is then anything other than 404, the file was retrieved successfully. A status of 404 is the same as “404 Not Found,” which is displayed when you try to display a web page that does not exist. When the file has been loaded successfully, onReadOBJFile() is called, which is defined at line 132 and takes six arguments. The first argument receives responseText, which contains the contents of the loaded file as one string. An OBJDoc object is created at line 133, which will be used, via the parse() method, to extract the results in a form that WebGL can easily use. The details will be explained next. Line 140 assigns objDoc, which contains the parsing result, to g_objDoc for rendering the model later.
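Listing 10.18 does not show how the program decides that both the OBJ file and its MTL files have finished loading before drawing. A plausible sketch of the per-frame check, built on the g_objDoc and g_drawingInfo globals from lines 128 and 129 and the complete flag set during MTL parsing, is shown here; the function names isMTLComplete() and updateModel() are assumptions, not names from the sample program:

// Returns true once every MTL file referenced by the OBJ file has been read
function isMTLComplete(objDoc) {
  for (var i = 0; i < objDoc.mtls.length; i++) {
    if (!objDoc.mtls[i].complete) return false;
  }
  return true;
}

// Called once per frame before drawing (the surrounding animation loop is omitted)
function updateModel(gl, model) {
  if (g_drawingInfo === null && g_objDoc !== null && isMTLComplete(g_objDoc)) {
    // Parsing has finished: write the results into the buffer objects
    g_drawingInfo = onReadComplete(gl, model, g_objDoc);
  }
  return g_drawingInfo !== null;   // drawing is possible only after this becomes true
}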
9 Note: When you want to run the sample programs that use external files in Chrome from your local disk, you should add the option --allow-file-access-from-files to Chrome. This is for security reasons: Chrome, by default, does not allow access to local files such as ../resources/cube.obj. For Firefox, the equivalent parameter, set via about:config, is security.fileuri.strict_origin_policy, which should be set to false. Remember to set it back, because leaving local file access enabled opens a security loophole.
User-Defined Object
Before proceeding to the explanation of the remaining code of OBJViewer.js , you need to
understand how to create your own (user-defined) objects in JavaScript. OBJViewer.js uses
user-defined objects to parse an OBJ file. In JavaScript, you can create user-defined objects
which, once created, are treated in the same way as built-in objects like Array and Date .
The following is the StringParser object used in OBJViewer.js . The key aspects are how
to define a constructor to create a user-defined object and how to add methods to the
object. The constructor is a special method that is called when creating an object with
new . The following is the constructor for the StringParser object:
595 // Constructor
596 var StringParser = function(str) {
597 this.str; // Store the string specified by the argument
598 this.index; // Position in the string to be processed
599 this.init(str);
600 }
You can define the constructor with the anonymous function (see Chapter 2 ). Its param-
eter is the one that will be specified when creating the object with new . Lines 597 and
598 are the declaration of properties that can be used for this new object type, similar to
properties like the length property of Array. You can define the property by writing the
keyword this followed by . and the property name. Line 599 then calls init() , an initial-
ization method that has been defined for this user-defined object.
Let’s take a look at init() . You can add a method to the object by writing the method
name after the keyword prototype . The body of the method is also defined using an
anonymous function:
601 // Initialize StringParser object
602 StringParser.prototype.init = function(str) {
603 this.str = str;
604 this.index = 0;
605 }
What is convenient here is that you can access the property that is defined in the
constructor from the method. The this.str at line 603 refers to this.str defined at line
597 in the constructor. The this.index at line 604 refers to this.index at line 598 in the
constructor. Let's try using this StringParser object:
var sp = new StringParser('Tomorrow is another day.');
alert(sp.str); // "Tomorrow is another day." is displayed.
sp.str = 'Quo Vadis'; // The content of str is changed to "Quo Vadis".
alert(sp.str); // "Quo Vadis" is displayed
sp.init('Cinderella, tonight?');
alert(sp.str); // "Cinderella, tonight?" is displayed
Let's look at another method, skipDelimiters(), which skips the delimiters (tab, space, (, ), or the double quotation mark) in a string:
608 StringParser.prototype.skipDelimiters = function() {
609 for(var i = this.index, len = this.str.length; i < len; i++) {
610 var c = this.str.charAt(i);
611 // Skip TAB, Space, (, ), and "
612 if (c == '\t'|| c == ' ' || c == '(' || c == ')' || c == '"') continue;
613 break;
614 }
615 this.index = i;
616 }
The charAt() method at line 610 is supported by the String object that manages a string
and retrieves the character specified by the argument from the string.
Now let’s look at the parser code in OBJViewer.js .
Sample Program (Parser Code in OBJViewer.js)
OBJViewer.js parses the content of an OBJ file line by line and converts it to the structure
shown in Figure 10.28 . Each box in Figure 10.28 is a user-defined object. Although the
parser code in OBJViewer.js looks quite complex, the core parsing process is simple. The
complexity comes because it is repeated several times. Let’s take a look at the core process-
ing, which once you understand will allow you to understand the whole process.
Figure 10.28 The internal structure after parsing an OBJ file: an OBJDoc holds mtls (MTLDoc objects, each with a complete flag and materials, where each Material has a name and a Color with r, g, b, and a), objects (OBJObject objects, each with a name and faces, where each Face has a materialName, vIndices, and nIndices), vertices (Vertex objects with x, y, and z), and normals (Normal objects with x, y, and z)
Listing 10.19 shows the basic code of OBJViewer.js .
Listing 10.19 OBJViewer.js (Parser Part)
214 // OBJDoc object
215 // Constructor
216 var OBJDoc = function(fileName) {
217 this.fileName = fileName;
218 this.mtls = new Array(0); // Initialize the property for MTL
219 this.objects = new Array(0); // Initialize the property for Object
220 this.vertices = new Array(0); // Initialize the property for Vertex
221 this.normals = new Array(0); // Initialize the property for Normal
222 }
223
224 // Parsing the OBJ file
225 OBJDoc.prototype.parse = function(fileString, scale, reverse) {
226 var lines = fileString.split('\n'); // Break up into lines
227 lines.push(null); // Append null
228 var index = 0; // Initialize index of line
229
230 var currentObject = null;
231 var currentMaterialName = "";
232
233 // Parse line by line
234 var line; // A string in the line to be parsed
235 var sp = new StringParser(); // Create StringParser
236 while ((line = lines[index++]) != null) {
237 sp.init(line); // init StringParser
238 var command = sp.getWord(); // Get command
239 if(command == null) continue; // check null command
240
241 switch(command){
242 case '#':
243 continue; // Skip comments
244 case 'mtllib': // Read Material chunk
245 var path = this.parseMtllib(sp, this.fileName);
246 var mtl = new MTLDoc(); // Create MTL instance
247 this.mtls.push(mtl);
248 var request = new XMLHttpRequest();
249 request.onreadystatechange = function() {
250 if (request.readyState == 4) {
251 if (request.status != 404) {
252 onReadMTLFile(request.responseText, mtl);
253 }else{
254 mtl.complete = true;
255 }
256 }
257 }
258 request.open('GET', path, true); // Create a request to get file
259 request.send(); // Send the request
260 continue; // Go to the next line
261 case 'o':
262 case 'g': // Read Object name
263 var object = this.parseObjectName(sp);
264 this.objects.push(object);
265 currentObject = object;
266 continue; // Go to the next line
267 case 'v': // Read vertex
268 var vertex = this.parseVertex(sp, scale);
269 this.vertices.push(vertex);
270 continue; // Go to the next line
271 case 'vn': // Read normal
272 var normal = this.parseNormal(sp);
273 this.normals.push(normal);
274 continue; // Go to the next line
275 case 'usemtl': // Read Material name
276 currentMaterialName = this.parseUsemtl(sp);
277 continue; // Go to the next line
278 case 'f': // Read face
279 var face = this.parseFace(sp, currentMaterialName, this.vertices,
➥reverse);
280 currentObject.addFace(face);
281 continue; // Go to the next line
282 }
283 }
284
285 return true;
286 }
Lines 216 to 222 define the constructor for the OBJDoc object, which consists of five prop-
erties that will be parsed and set up. The actual parsing is done in the parse() method at
line 225. The content of the OBJ file is passed as one string to the argument fileString of
the parse() method and then split into manageable pieces using the split() method.
This method splits a string into pieces delimited by the characters specified as the argu-
ment. As you can see at line 226, the argument specifies “\n” (new line), so each line is
stored in this.line s as an array. null is appended at the end of the array at line 227 to
make it easy to find the end of the array. this.index indicates how many lines have been
parsed and is initialized to 0 at line 228.
You have already seen the StringParser object, which is created at line 235, in the previ-
ous section. This object is used for parsing the content of the line.
Now you are ready to start parsing the OBJ file. Each line is stored in line using this.
lines[this.index++] at line 236. Line 237 writes the line to sp ( StringParser ). Line 238
gets the first word of the line using sp.getWord() and stores it in command . You use the
methods shown in Table 10.3 , where “word” in the table indicates a string surrounded by
a delimiter (tab, space, (, ), or ”).
Table 10.3 Method that StringParser Supports
Method Description
StringParser.init(str) Initialize StringParser to be able to parse str.
StringParser.getWord() Get a word.
StringParser.skipToNext-
Word()
Skip to the beginning of the next word.
Load and Display 3D Models
427
Method Description
StringParser.getInt() Get a word and convert it to an integer number.
StringParser.getFloat() Get a word and convert it to a floating point number.
The switch statement at line 241 checks the command to determine how to process the
following lines in the OBJ file.
If the command is # (line 242), the line is a comment. Line 243 skips it using continue .
If the command is mtllib (line 244), the line is a reference to an MTL file. Line 245 gener-
ates the path to the file. Line 246 creates an MTLDoc object for storing the material infor-
mation in the MTL file, and line 247 stores it in this.mtls . Then lines 248 to 259 read the
file in the same way that you read an OBJ file. The MTL file is parsed by onReadMTLFile(), which is called when it has been loaded; a sketch of what that function does is shown next.
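onReadMTLFile() itself is not listed in this section. A plausible sketch, using the materials array and complete flag of MTLDoc shown in Figure 10.28 , is given below; the Material constructor signature is an assumption:

// Parse the MTL file text and record each material's name and diffuse (Kd) color (sketch)
function onReadMTLFile(fileString, mtl) {
  var lines = fileString.split('\n');
  var name = "";
  for (var i = 0; i < lines.length; i++) {
    var sp = new StringParser(lines[i]);
    var command = sp.getWord();
    if (command === 'newmtl') {
      name = sp.getWord();               // material name referenced by usemtl in the OBJ file
    } else if (command === 'Kd' && name !== "") {
      var r = sp.getFloat(), g = sp.getFloat(), b = sp.getFloat();
      mtl.materials.push(new Material(name, r, g, b, 1.0));   // assumed constructor
      name = "";
    }
  }
  mtl.complete = true;                   // this MTL file has been fully parsed
}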
If the command is o (line 261) or g (line 262), it indicates a named object or group. Line
263 parses the line and returns the result as an OBJObject. This object is stored in this.objects at line 264 and in currentObject at line 265.
If the command is v, the line is a vertex position. Line 268 parses (x, y, z) and returns the result as a Vertex object. This object is stored in this.vertices at line 269.
If the command is f, the line is a face definition. Line 279 parses it and returns the result as a Face object, which is added to currentObject at line 280. Let's take
a look at parseVertex() , which is shown in Listing 10.20 .
Listing 10.20 OBJViewer.js (parseVertex())
302 OBJDoc.prototype.parseVertex = function(sp, scale) {
303 var x = sp.getFloat() * scale;
304 var y = sp.getFloat() * scale;
305 var z = sp.getFloat() * scale;
306 return (new Vertex(x, y, z));
307 }
Line 303 retrieves the x value from the line using sp.getFloat() . A scaling factor is
applied when the model is too small or large. After retrieving the three coordinates, line
306 creates a Vertex object using x, y, and z and returns it.
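The remaining parse helpers called from the switch statement follow the same pattern as parseVertex(). A plausible sketch (the constructors' exact signatures are assumptions based on Figure 10.28 ):

OBJDoc.prototype.parseNormal = function(sp) {        // 'vn' lines
  var x = sp.getFloat();
  var y = sp.getFloat();
  var z = sp.getFloat();
  return (new Normal(x, y, z));
};

OBJDoc.prototype.parseUsemtl = function(sp) {        // 'usemtl' lines
  return sp.getWord();       // the material name applied to the following faces
};

OBJDoc.prototype.parseObjectName = function(sp) {    // 'o' and 'g' lines
  return (new OBJObject(sp.getWord()));
};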
Once the OBJ file and MTL files have been fully parsed, the arrays for the vertex coordi-
nates, colors, normals, and indices are created from the structure shown in Figure 10.28 .
Then onReadComplete() is called to write them into the buffer object (see Listing 10.21 ).
Listing 10.21 OBJViewer.js (onReadComplete())
176 // OBJ File has been read completely
177 function onReadComplete(gl, model, objDoc) {
178 // Acquire the vertex coordinates and colors from OBJ file
179 var drawingInfo = objDoc.getDrawingInfo();
180
181 // Write the data into the buffer object
182 gl.bindBuffer(gl.ARRAY_BUFFER, model.vertexBuffer);
183 gl.bufferData(gl.ARRAY_BUFFER, drawingInfo.vertices,gl.STATIC_DRAW);
184
185 gl.bindBuffer(gl.ARRAY_BUFFER, model.normalBuffer);
186 gl.bufferData(gl.ARRAY_BUFFER, drawingInfo.normals, gl.STATIC_DRAW);
187
188 gl.bindBuffer(gl.ARRAY_BUFFER, model.colorBuffer);
189 gl.bufferData(gl.ARRAY_BUFFER, drawingInfo.colors, gl.STATIC_DRAW);
190
191 // Write the indices to the buffer object
192 gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, model.indexBuffer);
193 gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, drawingInfo.indices, gl.STATIC_DRAW);
194
195 return drawingInfo;
196 }
This function is straightforward. Line 179 retrieves the drawing information from objDoc, which contains the results of parsing the OBJ file. Lines 183, 186,
189, and 193 write vertices, normals, colors, and indices into the respective buffer objects.
The function getDrawingInfo() at line 451 retrieves the vertices, normals, colors, and
indices from the objDoc and is shown in Listing 10.22 .
Listing 10.22 OBJViewer.js (Retrieving the Drawing Information)
450 // Retrieve the information for drawing 3D model
451 OBJDoc.prototype.getDrawingInfo = function() {
452 // Create an array for vertex coordinates, normals, colors, and indices
453 var numIndices = 0;
454 for (var i = 0; i < this.objects.length; i++){
455 numIndices += this.objects[i].numIndices;
456 }
457 var numVertices = numIndices;
458 var vertices = new Float32Array(numVertices * 3);
459 var normals = new Float32Array(numVertices * 3);
460 var colors = new Float32Array(numVertices * 4);
461 var indices = new Uint16Array(numIndices);
462
463 // Set vertex, normal, and color
464 var index_indices = 0;
465 for (var i = 0; i < this.objects.length; i++){
466 var object = this.objects[i];
467 for (var j = 0; j < object.faces.length; j++){
468 var face = object.faces[j];
469 var color = this.findColor(face.materialName);
470 var faceNormal = face.normal;
471 for (var k = 0; k < face.vIndices.length; k++){
472 // Set index
473 indices[index_indices] = index_indices;
474 // Copy vertex
475 var vIdx = face.vIndices[k];
476 var vertex = this.vertices[vIdx];
477 vertices[index_indices * 3 + 0] = vertex.x;
478 vertices[index_indices * 3 + 1] = vertex.y;
479 vertices[index_indices * 3 + 2] = vertex.z;
480 // Copy color
481 colors[index_indices * 4 + 0] = color.r;
482 colors[index_indices * 4 + 1] = color.g;
483 colors[index_indices * 4 + 2] = color.b;
484 colors[index_indices * 4 + 3] = color.a;
485 // Copy normal
486 var nIdx = face.nIndices[k];
487 if(nIdx >= 0){
488 var normal = this.normals[nIdx];
489 normals[index_indices * 3 + 0] = normal.x;
490 normals[index_indices * 3 + 1] = normal.y;
491 normals[index_indices * 3 + 2] = normal.z;
492 }else{
493 normals[index_indices * 3 + 0] = faceNormal.x;
494 normals[index_indices * 3 + 1] = faceNormal.y;
495 normals[index_indices * 3 + 2] = faceNormal.z;
496 }
497 index_indices++;
498 }
499 }
500 }
501
502 return new DrawingInfo(vertices, normals, colors, indices);
503 };
Line 454 calculates the number of indices using a for loop. Then lines 458 to 461 create
typed arrays for storing vertices, normals, colors, and indices that are assigned to the
appropriate buffer objects. The size of each array is determined by the number of indices
at line 454.
The program traverses the OBJObject objects and its Face objects in the order shown in
Figure 10.28 and stores the information in the arrays vertices , colors , and indices .
The for statement at line 465 loops, extracting each OBJObject one by one from the result
of the earlier parsing. The for statement at line 467 does the same for each Face object
that makes up the OBJObject and performs the following steps for each Face :
1. Line 469 finds the color of the Face using materialName and stores the color in color. Line 470 stores the normal of the face in faceNormal for later use.
2. The for statement at line 471 loops over the vertex indices of the face, storing each vertex position in vertices (lines 477 to 479) and storing the r, g, b, and a components of the color in colors (lines 481 to 484). The code from line 486 handles normals. OBJ files may or may not contain normals, so line 487 checks for that. If a normal is found in the OBJ file, lines 488 to 491 retrieve it and store it in normals; otherwise, lines 493 to 495 store the face normal that this program calculated.
Once you complete these steps for all OBJObjects , you are ready to draw. Line 502 returns
the information for drawing the model in a DrawingInfo object, which manages the
vertex information that has to be written in the buffer object, as described previously.
Although this has been, by necessity, a rapid explanation, at this stage you should under-
stand how the contents of the OBJ file can be read in, parsed, and displayed with WebGL.
If you want to read multiple model files in a single scene, you would repeat the preceding
processes. There are several other models stored as OBJ files in the resources directory of
the sample programs, which you can look at and experiment with to confirm your under-
standing (see Figure 10.29 ).
Figure 10.29 Various 3D models
Handling Lost Context
WebGL uses the underlying graphics hardware, which is a shared resource managed by the
operating system. There are several situations where this resource can be “taken away,”
resulting in information stored within the graphics hardware being lost. These include
situations when another program takes over the hardware or when the machine hiber-
nates. When this happens, information that WebGL uses to draw correctly, its “context,”
can be lost. A good example is when you run a WebGL program on a notebook PC or
smart phone and it enters hibernation mode. Often, an error message is displayed before
the machine hibernates. When the machine awakes after you press the power button,
the system returns to its original state, but the browser that is running the WebGL program may display nothing on the screen, as on the right side of Figure 10.30. Because the back-
ground color of the web page that this sample program draws is white, the web browser
shows a completely white screen.
Figure 10.30 WebGL program stops after returning from hibernation mode (left: before hibernation; right: after hibernation)
For example, if you are running RotatingTriangle , the following message may be
displayed on the console:
WebGL error CONTEXT_LOST_WEBGL in uniformMatrix4fv([object WebGLUniformLocation,
false, [object Float32Array]]
This indicates that the error occurred when the program called gl.uniformMatrix4fv() either before the system entered hibernation mode or on return from hibernation. The error message will differ slightly depending on what the program was trying
to do at the time of hibernation. In this section, we will explain how to deal with this
problem.
How to Implement Handling Lost Context
As previously discussed, context can be lost for any number of reasons. However, WebGL
supports two events to indicate state changes within the system: a context lost event
( webglcontextlost ) and a context restore event ( webglcontextrestored ). See Table 10.4 .
Table 10.4 The Context Events
Event Description
webglcontextlost Occurs when the rendering context for WebGL is lost
webglcontextrestored Occurs when the browser completes a reset of the WebGL system
When the context lost event occurs, the rendering context acquired by getWebGLContext()
(that is gl in the sample programs) becomes invalid, and any operations carried out using
the gl context are invalidated. These processes include creating buffer objects and texture
objects, initializing shaders, setting the clear color, and more. After the browser resets the
WebGL system, the context restore event is generated, and your program needs to redo
these operations. The other variables in your JavaScript program are not affected and can
be used as normal.
Before taking a look at the sample program, you need to use the addEventListener()
method of the <canvas> element to register the event handlers for the context lost event and the context restore event. This is because the <canvas> element does not support a specific property that you can use to register context event handlers. Remember that in previous examples you used the onmousedown property of <canvas> to register the event handler for the
mouse event.
canvas.addEventListener(type, handler, useCapture)
Register the event handler specified by handler with the <canvas> element.
Parameters type Specifies the name of the event to listen for (string).
handler Specifies the event handler to be called when the event
occurs. This function is called with one argument (event
object).
useCapture Specifies whether the event needs to be captured or not
(boolean). If true , the event is not dispatched to other
elements. If false , the event is dispatched to others.
Return value None
Sample Program (RotatingTriangle_contextLost.js)
In this section, you will construct a sample program, RotatingTriangle_contextLost ,
which modifies RotatingTriangle to make it possible to deal with the context lost event
(shown in Figure 10.30 ). The sample program is shown in Listing 10.23 .
Listing 10.23 RotatingTriangle_contextLost.js
1 // RotatingTriangle_contextLost.js
...
16 function main() {
17 // Retrieve <canvas> element
18 var canvas = document.getElementById('webgl');
19
20 // Register event handler for context lost and restored events
21 canvas.addEventListener('webglcontextlost', contextLost, false);
22 canvas.addEventListener('webglcontextrestored', function(ev)
➥{ start(canvas); }, false);
23
24 start(canvas); // Perform WebGL-related processes
25 }
...
29 // Current rotation angle
30 var g_currentAngle = 0.0; // Changed from local variable to global
31 var g_requestID; // The return value of requestAnimationFrame()
32
33 function start(canvas) {
34 // Get the rendering context for WebGL
35 var gl = getWebGLContext(canvas);
...
41 // Initialize shaders
42 if (!initShaders(gl, VSHADER_SOURCE, FSHADER_SOURCE)) {
...
45 }
46
47 var n = initVertexBuffers(gl); // Set vertex coordinates
...
55 // Get storage location of u_ModelMatrix
56 var u_ModelMatrix = gl.getUniformLocation(gl.program, 'u_ModelMatrix');
...
62 var modelMatrix = new Matrix4(); // Create a model matrix
63
64 var tick = function() { // Start drawing
65 g_currentAngle = animate(g_currentAngle); // Update rotation angle
66 draw(gl, n, g_currentAngle, modelMatrix, u_ModelMatrix);
67 g_requestID = requestAnimationFrame(tick, canvas);
68 };
69 tick();
70 }
71
72 function contextLost(ev) { // Event handler for context lost event
73 cancelAnimationFrame(g_requestID); // Stop animation
74 ev.preventDefault(); // Prevent the default behavior
75 }
The processing of the context lost event has no implications for the shaders, so let’s focus
on the main() function in the JavaScript program starting at line 16. Line 21 registers the
event handler for the context lost event, and line 22 registers the event handler for the
context restore event. The main() function ends by calling the function start() at
line 24.
The start() function, defined at line 33, contains the same steps as in RotatingTriangle.js. These are the processes you have to redo when the context lost event occurs. There are
two changes from RotatingTriangle.js to handle lost context.
First, the current rotation angle, at line 65, is stored in a global variable g_currentAngle
(line 30) instead of a local variable. This allows you to draw the triangle using the angle
held in the global variable when a context restore event occurs. Line 67 stores the return
value of requestAnimationFrame() in the global variable g_requestID (line 31). This is
used to cancel the registration of the function when the context lost event occurs.
Let’s take a look at the actual event handlers. The event handler for the context lost event,
contextLost() , is defined at line 72 and has only two lines. Line 73 cancels the regis-
tration of the function used to carry out the animation, ensuring no further attempt at
drawing is made until the context is correctly restored. Then at Line 74 you prevent the
browser’s default behavior for this event. This is because, by default, the browser doesn’t
generate the context restore event. However, in our case, the event is needed, so you must
prevent this default behavior.
The event handler for the context restore event is straightforward and makes a call to
start() , which rebuilds the WebGL context. This is carried out by registering the event
handler at line 22, which calls start() by using an anonymous function.
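During development you do not have to wait for a real hibernation to exercise these handlers. Most browsers expose the WEBGL_lose_context extension, which lets you trigger both events on demand; a small sketch (it assumes you have a reference to the rendering context gl, for example from the console):

// Simulate losing and then restoring the WebGL context to test the event handlers
var ext = gl.getExtension('WEBGL_lose_context');
if (ext) {
  ext.loseContext();                    // fires 'webglcontextlost' on the <canvas> element
  setTimeout(function() {
    ext.restoreContext();               // fires 'webglcontextrestored' shortly afterward
  }, 1000);
}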
Note that when a context lost event occurs, the following warning is always displayed on the console:
WARNING: WebGL content on the page might have caused the graphics card to reset
By implementing these handlers for the lost context events, your WebGL applications will
be able to deal with situations where the WebGL context is lost.
Summary
This chapter explained a number of miscellaneous techniques that are useful to know
when creating WebGL applications. Due to space limitations, the explanations have been
kept brief but contain sufficient information for you to master and use the techniques
in your own WebGL applications. Although there are many more techniques you could
learn, we have chosen these because they will help you begin to apply the lessons in this
book to building your own 3D applications.
As you have seen, WebGL is a powerful tool for creating 3D applications and one that is
capable of creating sophisticated and visually stunning 3D graphics. Our aim in this book
has been to provide you with a step-by-step introduction to the basics of WebGL and give
you a strong enough foundation on which to begin building your own WebGL applica-
tions and exploring further. There are many other resources available to help you in that
exploration. However, our hope is that as you begin to venture out and explore WebGL
yourself, you will return to this book and find it valuable as a reference and guide as you
build your knowledge.
Appendix A
No Need to Swap Buffers in WebGL
For those of you with some experience in developing OpenGL applications on PCs, you may have
noticed that none of the examples in this book seem to swap color buffers, which is something
that most OpenGL implementations require.
As you know, OpenGL uses two buffers: a “front” color buffer and a “back” color buffer with the
contents of the front color buffer being displayed on the screen. Usually, when you draw some-
thing using OpenGL, it is drawn into the back color buffer. When you want to actually display
something, you need to copy the contents of the back buffer to the front buffer to cause it to be
displayed. If you were to draw directly into the front buffer, you would see visual artifacts (such
as flickers) because the screen was being updated before you had finalized the data in the buffer.
To support this dual-buffer approach, OpenGL provides a mechanism to swap the back buffer
and the front buffer. In some systems this is automatic; in others, explicit calls to swap buffers,
such as glutSwapBuffers() or eglSwapBuffers() , are needed after drawing into the back buffer.
For example, a typical OpenGL application has the following user-defined “display” function:
void display(void) {
// Clear color buffer and depth buffer
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
draw(); // Draw something
glutSwapBuffers(); // Swap color buffers
}
In contrast, WebGL relies on the browser to automatically manage the display update, reliev-
ing you of the need to do it explicitly in your applications. Referring to Figure A.1 (which is the
same as Figure 2.10 ), when WebGL applications draw something in the color buffer, the browser
detects the drawing and displays the content on the screen. Therefore, WebGL supports only one
color buffer.
Figure A.1 The processing flow from executing a JavaScript program to displaying the result in a browser (the browser runs the JavaScript program on load; WebGL-related methods drive the vertex shader and fragment shader, whose per-vertex and per-fragment operations render into the color buffer; the browser then displays the color buffer)
This approach works, because as seen in the sample programs in this book, all WebGL
programs are executed in the browser by executing the JavaScript in the form of a method
invocation from the browser.
Because the programs are not independently executed, the browser has a chance to
check whether the content of the color buffer was modified after the JavaScript program
executes and exits. If the contents have been modified, the browser is responsible for
ensuring it is displayed on the screen.
For example, in HelloPoint1 , we execute the JavaScript function ( main() ) from the HTML
file ( HelloPoint1.html ) as follows:

<body onload="main()">

This causes the browser to execute the JavaScript function main() after loading the <body> element. Within main(), the draw operation modifies the color buffer.
main(){
...
// Draw a point
gl.drawArrays(gl.POINTS, 0, 1);
}
When main() exits, control returns to the browser that called the function. The
browser then checks the content of the color buffer, and if anything has been changed,
causes it to be displayed. One useful side effect of this approach is that the browser
handles combining the color buffer with the rest of the web page, allowing you to
combine 3D graphics with your web pages. Note that HelloPoint1 shows only the <canvas>
element on the page, because HelloPoint1.html contains no elements other
than the <canvas> element.
This implies that if you call methods that return control to the browser, such as alert()
or confirm() , the browser may then display the contents of the color buffer to the screen.
This may not be what you expect, so take care when using these methods in your WebGL
programs.
The browser behaves in the same way when JavaScript draws something in an event
handler. This is because the event handler is also called from the browser, and then the
control is returned to the browser after the handler exits.
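For example, here is a sketch of redrawing inside a key-press handler. The variables gl and n are assumed to have been set up beforehand, as in the book's samples; nothing special is needed to make the result appear, because the browser presents the color buffer after the handler returns:

document.onkeydown = function(ev) {
  gl.clear(gl.COLOR_BUFFER_BIT);     // modify the color buffer...
  gl.drawArrays(gl.TRIANGLES, 0, n); // ...by drawing into it...
  // ...and simply return; the browser then displays the updated buffer
};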
Appendix B
Built-In Functions of GLSL ES 1.0
This appendix details all the built-in functions supported by GLSL ES 1.0, including
many that are not explained in this book but which are often used in programming
shaders.
Note that, in all but texture lookup functions, the operations on vector or matrix argu-
ments are carried out component-wise. For example,
vec2 deg = vec2(60, 80);
vec2 rad = radians(deg);
In this example, the components of the variable rad are assigned the values of 60 and 80
degrees converted to radians, respectively.
Angle and Trigonometry Functions
Syntax Description
float radians(float degree )
vec2 radians(vec2 degree )
vec3 radians(vec3 degree )
vec4 radians(vec4 degree )
Converts degrees to radians; that is, π * degree /180.
float degrees(float radian )
vec2 degrees(vec2 radian )
vec3 degrees(vec3 radian )
vec4 degrees(vec4 radian )
Converts radians to degrees; that is, 180 * radian /π.
float sin(float angle )
vec2 sin(vec2 angle )
vec3 sin(vec3 angle )
vec4 sin(vec4 angle )
The standard trigonometric sine function. angle is in
radians.
The range of the return value is [–1, 1].
float cos(float angle )
vec2 cos(vec2 angle )
vec3 cos(vec3 angle )
vec4 cos(vec4 angle )
The standard trigonometric cosine function. angle is in
radians.
The range of the return value is [–1, 1].
float tan(float angle )
vec2 tan(vec2 angle )
vec3 tan(vec3 angle )
vec4 tan(vec4 angle )
The standard trigonometric tangent function. angle is in
radians.
float asin(float x )
vec2 asin(vec2 x )
vec3 asin(vec3 x )
vec4 asin(vec4 x )
Arc sine. Returns an angle (in radians) whose sine is
x . The range of the return value is [–π/2, π/2]. Results
are undefined if x < –1 or x > +1.
float acos(float x )
vec2 acos(vec2 x )
vec3 acos(vec3 x )
vec4 acos(vec4 x )
Arc cosine. Returns an angle (in radians) whose cosine
is x . The range of the return value is [0, π]. Results are
undefined if x < –1 or x > +1.
float atan(float y , float x )
vec2 atan(vec2 y , vec2 x )
vec3 atan(vec3 y , vec3 x )
vec4 atan(vec4 y , vec4 x )
Arc tangent. Returns an angle (in radians) whose
tangent is y / x . The signs of x and y are used to deter-
mine what quadrant the angle is in. The range of the
return value is [–π, π]. Results are undefined if x and y
are both 0.
Note, for vectors, this is a component-wise operation.
float atan(float y _over_ x )
vec2 atan(vec2 y _over_ x )
vec3 atan(vec3 y _over_ x )
vec4 atan(vec4 y _over_ x )
Arc tangent. Returns an angle whose tangent is y_
over_x . The range of the return value is [–π/2, π/2].
Note, for vectors, this is a component-wise operation.
Exponential Functions
Syntax Description
float pow(float x , float y )
vec2 pow(vec2 x , vec2 y )
vec3 pow(vec3 x , vec3 y )
vec4 pow(vec4 x , vec4 y )
Returns x raised to the power of y ; that is, x^y .
Results are undefined if x < 0.
Results are undefined if x = 0 and y ≤ 0.
Note, for vectors, this is a component-wise operation.
float exp(float x )
vec2 exp(vec2 x )
vec3 exp(vec3 x )
vec4 exp(vec4 x )
Returns the natural exponentiation of x ; that is, e^x .
float log(float x )
vec2 log(vec2 x )
vec3 log(vec3 x )
vec4 log(vec4 x )
Returns the natural logarithm of x ; that is, returns the value
y that satisfies the equation x = e^y . Results are undefined
if x ≤ 0.
float exp2(float x )
vec2 exp2(vec2 x )
vec3 exp2(vec3 x )
vec4 exp2(vec4 x )
Returns 2 raised to the power of x ; that is, 2^x .
float log2(float x )
vec2 log2(vec2 x )
vec3 log2(vec3 x )
vec4 log2(vec4 x )
Returns the base 2 logarithm of x ; that is, returns the value
y that satisfies the equation x = 2^y .
Results are undefined if x ≤ 0.
float sqrt(float x )
vec2 sqrt(vec2 x )
vec3 sqrt(vec3 x )
vec4 sqrt(vec4 x )
Returns √ x .
Results are undefined if x < 0.
float inversesqrt(float x )
vec2 inversesqrt(vec2 x )
vec3 inversesqrt(vec3 x )
vec4 inversesqrt(vec4 x )
Returns 1/√ x .
Results are undefined if x ≤ 0.
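As an illustration of how the exponential functions are typically used, the following fragment shader sketch sharpens a brightness value with pow(). The varying name v_Intensity is an assumption made for this example, not something defined elsewhere in this book:

precision mediump float;
varying float v_Intensity;  // assumed: a brightness value in the range [0, 1]
void main() {
  // Raise the intensity to the 4th power to sharpen its falloff
  float i = pow(clamp(v_Intensity, 0.0, 1.0), 4.0);
  gl_FragColor = vec4(i, i, i, 1.0);
}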
Common Functions
Syntax Description
float abs(float x )
vec2 abs(vec2 x )
vec3 abs(vec3 x )
vec4 abs(vec4 x )
Returns the non-negative value of x without
regard to its sign; that is, x if x ≥ 0, otherwise it
returns – x .
float sign(float x )
vec2 sign(vec2 x )
vec3 sign(vec3 x )
vec4 sign(vec4 x )
Returns 1.0 if x > 0, 0.0 if x = 0, or –1.0 if
x < 0.
float floor(float x )
vec2 floor(vec2 x )
vec3 floor(vec3 x )
vec4 floor(vec4 x )
Returns a value equal to the nearest integer that
is less than or equal to x.
float ceil(float x )
vec2 ceil(vec2 x )
vec3 ceil(vec3 x )
vec4 ceil(vec4 x )
Returns a value equal to the nearest integer that
is greater than or equal to x .
float fract(float x )
vec2 fract(vec2 x )
vec3 fract(vec3 x )
vec4 fract(vec4 x )
Returns the fractional part of x ; that is,
x – floor ( x ).
float mod(float x , float y )
vec2 mod(vec2 x , vec2 y )
vec3 mod(vec3 x , vec3 y )
vec4 mod(vec4 x , vec4 y )
vec2 mod(vec2 x , float y )
vec3 mod(vec3 x , float y )
vec4 mod(vec4 x , float y )
Modulus (modulo). Returns the remainder of the
division of x by y; that is, ( x – y * floor ( x / y )).
Given two positive numbers x and y, mod(x, y) is
the remainder of the division of x by y.
Note, for vectors, this is a component-wise
operation.
float min(float x , float y )
vec2 min(vec2 x , vec2 y )
vec3 min(vec3 x , vec3 y )
vec4 min(vec4 x , vec4 y )
vec2 min(vec2 x , float y )
vec3 min(vec3 x , float y )
vec4 min(vec4 x , float y )
Returns the smallest value; that is, y if y < x ,
otherwise it returns x .
Note, for vectors, this is a component-wise
operation.
float max(float x , float y )
vec2 max(vec2 x , vec2 y )
vec3 max(vec3 x , vec3 y )
vec4 max(vec4 x , vec4 y )
vec2 max(vec2 x , float y )
vec3 max(vec3 x , float y )
vec4 max(vec4 x , float y )
Returns the largest value; that is, y if x < y ,
otherwise it returns x .
Note, for vectors, this is a component-wise
operation.
float clamp(float x , float minVal ,
float maxVal )
vec2 clamp(vec2 x , vec2 minVal ,
vec2 maxVal )
vec3 clamp(vec3 x , vec3 minVal ,
vec3 maxVal )
vec4 clamp(vec4 x , vec4 minVal ,
vec4 maxVal )
vec2 clamp(vec2 x , float minVal ,
float maxVal )
vec3 clamp(vec3 x , float minVal ,
float maxVal )
vec4 clamp(vec4 x , float minVal ,
float maxVal )
Constrains x to lie between minVal and maxVal;
that is, returns min (max ( x , minVal ), maxVal ).
Results are undefined if minVal > maxVal .
float mix(float x , float y, float a )
vec2 mix(vec2 x , vec2 y, float a )
vec3 mix(vec3 x , vec3 y, float a )
vec4 mix(vec4 x , vec4 y, float a )
vec2 mix(vec2 x , float y, vec2 a )
vec3 mix(vec3 x , float y, vec3 a )
vec4 mix(vec4 x , float y, vec4 a )
vec2 mix(vec2 x , vec2 y, vec2 a )
vec3 mix(vec3 x , vec3 y, vec3 a )
vec4 mix(vec4 x , vec4 y, vec4 a )
Returns the linear blend of x and y ; that is, x *
(1– a ) + y * a.
float step(float edge , float x )
vec2 step(vec2 edge , vec2 x )
vec3 step(vec3 edge , vec3 x )
vec4 step(vec4 edge , vec4 x )
vec2 step(float edge , vec2 x )
vec3 step(float edge , vec3 x )
vec4 step(float edge , vec4 x )
Generates a step function by comparing two
values; that is, returns 0.0 if x < edge , otherwise
it returns 1.0.
float smoothstep(float edge0 ,
float edge1 , float x )
vec2 smoothstep(vec2 edge0 ,
vec2 edge1 , vec2 x )
vec3 smoothstep(vec3 edge0 ,
vec3 edge1 , vec3 x )
vec4 smoothstep(vec4 edge0 ,
vec4 edge1 , vec4 x )
Hermite interpolation.
Returns 0.0 if x ≤ edge0 and 1.0 if x ≥ edge1
and performs smooth Hermite interpolation
between 0 and 1 when edge0 < x < edge1 . This
is equivalent to:
// genType is float, vec2, vec3, or vec4
genType t;
t = clamp (( x – edge0 ) / ( edge1 – edge0 ), 0, 1);
return t * t * (3 – 2 * t);
Results are undefined if edge0 ≥ edge1 .
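As an illustration of how these functions work together, the following fragment shader sketch blends two colors with smoothstep() and mix(). The varying name v_Height is an assumption for this example only:

precision mediump float;
varying float v_Height;  // assumed: a per-fragment scalar in the range [0, 1]
void main() {
  // 0.0 below 0.2, 1.0 above 0.8, and a smooth Hermite blend in between
  float t = smoothstep(0.2, 0.8, v_Height);
  // Blend from blue to red as t goes from 0.0 to 1.0
  vec3 color = mix(vec3(0.0, 0.0, 1.0), vec3(1.0, 0.0, 0.0), t);
  gl_FragColor = vec4(color, 1.0);
}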
The following functions do not operate component-wise; which components of their
arguments are used depends on the particular function.
Geometric Functions
Syntax Description
float length(float x )
float length(vec2 x )
float length(vec3 x )
float length(vec4 x )
Returns the length of vector x .
float distance(float p0 , float p1 )
float distance(vec2 p0 , vec2 p1 )
float distance(vec3 p0 , vec3 p1 )
float distance(vec4 p0 , vec4 p1 )
Returns the distance between p0 and p1 ; that is,
length ( p0 – p1 ).
float dot(float x , float y )
float dot(vec2 x , vec2 y )
float dot(vec3 x , vec3 y )
float dot(vec4 x , vec4 y )
Returns the dot product of x and y , in case of
vec3, x [0]* y [0]+ x [1]* y [1]+ x [2]* y [2].
vec3 cross(vec3 x , vec3 y ) Returns the cross product of x and y , in case of
vec3,
result[0] = x [1]* y [2] - y [1]* x [2]
result[1] = x [2]* y [0] - y [2]* x [0]
result[2] = x [0]* y [1] - y [0]* x [1]
float normalize(float x )
vec2 normalize(vec2 x )
vec3 normalize(vec3 x )
vec4 normalize(vec4 x )
Returns a vector in the same direction as x but
with a length of 1; that is, x /length( x ).
float faceforward(float N , float I ,
float Nref )
vec2 faceforward(vec2 N , vec2 I ,
vec2 Nref )
vec3 faceforward(vec3 N , vec3 I ,
vec3 Nref )
vec4 faceforward(vec4 N , vec4 I ,
vec4 Nref )
Reverse the normal. Adjust the vector N according to
the incident vector I and the reference vector Nref .
If dot( Nref , I ) < 0 return N , otherwise return – N .
float reflect(float I , float N )
vec2 reflect(vec2 I , vec2 N )
vec3 reflect(vec3 I , vec3 N )
vec4 reflect(vec4 I , vec4 N )
Calculate reflection vector. For the incident vector
I and surface orientation N , returns the reflection
direction: I – 2 * dot( N , I ) * N
N must already be normalized to achieve the
desired result.
float refract(float I , float N ,
float eta )
vec2 refract(vec2 I , vec2 N , float
eta )
vec3 refract(vec3 I , vec3 N , float
eta )
vec4 refract(vec4 I , vec4 N , float
eta )
Calculate the refraction vector; that is, the change in the
direction of light as it passes from one medium into
another. For the incident vector
I and surface normal N , and the ratio of indices of
refraction eta , return the refraction vector using the
following:
k = 1.0 – eta * eta * (1.0 – dot( N , I ) * dot( N , I ))
if (k < 0.0)
// genType is float, vec2, vec3, or vec4
return genType(0.0)
else
return eta * I - ( eta * dot( N , I ) + sqrt(k)) * N
The input parameters for the incident vector I and
the surface normal N must already be normalized.
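The geometric functions are the workhorses of lighting calculations. The following fragment shader sketch computes simple diffuse and specular terms with normalize(), dot(), and reflect(); the varying names are assumptions for this example and are not tied to any sample in this book:

precision mediump float;
varying vec3 v_Normal;    // assumed: interpolated surface normal
varying vec3 v_LightDir;  // assumed: direction from the surface toward the light
varying vec3 v_EyeDir;    // assumed: direction from the surface toward the eye
void main() {
  vec3 n = normalize(v_Normal);
  vec3 l = normalize(v_LightDir);
  float diffuse = max(dot(n, l), 0.0);
  // reflect() expects the incident vector, that is, from the light toward the surface
  vec3 r = reflect(-l, n);
  float specular = pow(max(dot(r, normalize(v_EyeDir)), 0.0), 16.0);
  gl_FragColor = vec4(vec3(diffuse + specular), 1.0);
}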
Matrix Functions
Syntax Description
mat2 matrixCompMult(mat2 x , mat2 y )
mat3 matrixCompMult(mat3 x , mat3 y )
mat4 matrixCompMult(mat4 x , mat4 y )
Multiply matrix x by matrix y component-wise; that
is, if result = matrixCompMult( x , y ) then
result[i][j] = x [i][j] * y [i][j].
Vector Functions
Syntax Description
bvec2 lessThan(vec2 x , vec2 y )
bvec3 lessThan(vec3 x , vec3 y )
bvec4 lessThan(vec4 x , vec4 y )
bvec2 lessThan(ivec2 x , ivec2 y )
bvec3 lessThan(ivec3 x , ivec3 y )
bvec4 lessThan(ivec4 x , ivec4 y )
Return the component-wise comparison of
x < y .
bvec2 lessThanEqual(vec2 x , vec2 y )
bvec3 lessThanEqual(vec3 x , vec3 y )
bvec4 lessThanEqual(vec4 x , vec4 y )
bvec2 lessThanEqual(ivec2 x , ivec2 y )
bvec3 lessThanEqual(ivec3 x , ivec3 y )
bvec4 lessThanEqual(ivec4 x , ivec4 y )
Return the component-wise comparison of
x ≤ y .
bvec2 greaterThan(vec2 x , vec2 y )
bvec3 greaterThan(vec3 x , vec3 y )
bvec4 greaterThan(vec4 x , vec4 y )
bvec2 greaterThan(ivec2 x , ivec2 y )
bvec3 greaterThan(ivec3 x , ivec3 y )
bvec4 greaterThan(ivec4 x , ivec4 y )
Return the component-wise comparison of
x > y .
bvec2 greaterThanEqual(vec2 x , vec2 y )
bvec3 greaterThanEqual(vec3 x , vec3 y )
bvec4 greaterThanEqual(vec4 x , vec4 y )
bvec2 greaterThanEqual(ivec2 x , ivec2 y )
bvec3 greaterThanEqual(ivec3 x , ivec3 y )
bvec4 greaterThanEqual(ivec4 x , ivec4 y )
Return the component-wise comparison of
x ≥ y .
bvec2 equal(vec2 x , vec2 y )
bvec3 equal(vec3 x , vec3 y )
bvec4 equal(vec4 x , vec4 y )
bvec2 equal(ivec2 x , ivec2 y )
bvec3 equal(ivec3 x , ivec3 y )
bvec4 equal(ivec4 x , ivec4 y )
Return the component-wise comparison of
x == y .
bvec2 notEqual(vec2 x , vec2 y )
bvec3 notEqual(vec3 x , vec3 y )
bvec4 notEqual(vec4 x , vec4 y )
bvec2 notEqual(ivec2 x , ivec2 y )
bvec3 notEqual(ivec3 x , ivec3 y )
bvec4 notEqual(ivec4 x , ivec4 y )
Return the component-wise comparison of
x != y .
bool any(bvec2 x )
bool any(bvec3 x )
bool any(bvec4 x )
Return true if any component of x is true .
bool all(bvec2 x )
bool all(bvec3 x )
bool all(bvec4 x )
Return true only if all components of x are
true .
bvec2 not(bvec2 x )
bvec3 not(bvec3 x )
bvec4 not(bvec4 x )
Return the component-wise logical comple-
ment of x .
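The relational vector functions are often combined with any() or all() to make a decision about a whole vector at once. A fragment shader sketch, with v_Color assumed to be an interpolated vertex color:

precision mediump float;
varying vec4 v_Color;  // assumed: interpolated vertex color
void main() {
  // true for each component of v_Color.rgb that falls below the threshold
  bvec3 dark = lessThan(v_Color.rgb, vec3(0.05));
  if (all(dark)) discard;  // drop the fragment if every component is dark
  gl_FragColor = v_Color;
}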
Texture Lookup Functions
Syntax Description
vec4 texture2D(
sampler2D sampler , vec2 coord )
vec4 texture2D(
sampler2D sampler , vec2 coord ,
float bias )
vec4 texture2DProj(
sampler2D sampler , vec3 coord )
vec4 texture2DProj(
sampler2D sampler , vec3 coord ,
float bias )
vec4 texture2DProj(
sampler2D sampler , vec4 coord )
vec4 texture2DProj(
sampler2D sampler , vec4 coord ,
float bias )
vec4 texture2DLod(
sampler2D sampler , vec2 coord ,
float lod )
vec4 texture2DProjLod(
sampler2D sampler , vec3 coord ,
float lod )
vec4 texture2DProjLod(
sampler2D sampler , vec4 coord ,
float lod )
Use the texture coordinate coord
to read out texel values in the
2D texture currently bound to
sampler . For the projective (Proj)
versions, the texture coordinate
( coord .s, coord .t) is divided by
the last component of coord .
The third component of coord
is ignored for the vec4 coord
variant. The bias parameter
is only available in fragment
shaders. It specifies the value
to add to the current lod when a
MIPMAP texture is bound to
sampler .
vec4 textureCube(
samplerCube sampler , vec3 coord )
vec4 textureCube(
samplerCube sampler , vec3 coord ,
float bias )
vec4 textureCubeLod(
samplerCube sampler , vec3 coord ,
float lod )
Use the texture coordinate
coord to read out a texel from
the cube map texture currently
bound to sampler . The direction
of coord is used to select the
face from the cube map texture.
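A minimal use of texture2D() in a fragment shader looks like the following sketch. The uniform and varying names are assumptions; the application is responsible for binding a texture and setting u_Sampler to the appropriate texture unit:

precision mediump float;
uniform sampler2D u_Sampler;  // assumed: set by the application to a texture unit
varying vec2 v_TexCoord;      // assumed: interpolated texture coordinate
void main() {
  gl_FragColor = texture2D(u_Sampler, v_TexCoord);
}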
Appendix C
Projection Matrices
Orthogonal Projection Matrix
The following matrix is created by Matrix4.setOrtho(left , right , bottom , top , near , far) .
\[
\begin{bmatrix}
\dfrac{2}{\mathit{right}-\mathit{left}} & 0 & 0 & -\dfrac{\mathit{right}+\mathit{left}}{\mathit{right}-\mathit{left}} \\
0 & \dfrac{2}{\mathit{top}-\mathit{bottom}} & 0 & -\dfrac{\mathit{top}+\mathit{bottom}}{\mathit{top}-\mathit{bottom}} \\
0 & 0 & -\dfrac{2}{\mathit{far}-\mathit{near}} & -\dfrac{\mathit{far}+\mathit{near}}{\mathit{far}-\mathit{near}} \\
0 & 0 & 0 & 1
\end{bmatrix}
\]
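If you want to see where the numbers go, the following JavaScript sketch builds the same matrix by hand in the column-major order that gl.uniformMatrix4fv() expects. The function name makeOrtho() is illustrative only; the samples in this book use Matrix4.setOrtho() instead:

// Illustrative sketch: the orthographic projection matrix in column-major order
function makeOrtho(left, right, bottom, top, near, far) {
  var rl = right - left, tb = top - bottom, fn = far - near;
  return new Float32Array([
    2 / rl, 0,      0,       0,   // column 0
    0,      2 / tb, 0,       0,   // column 1
    0,      0,     -2 / fn,  0,   // column 2
    -(right + left) / rl, -(top + bottom) / tb, -(far + near) / fn, 1  // column 3
  ]);
}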
Perspective Projection Matrix
The following matrix is created by Matrix4.setPerspective(fov , aspect , near , far) .
\[
\begin{bmatrix}
\dfrac{1}{\mathit{aspect}\cdot\tan(\mathit{fov}/2)} & 0 & 0 & 0 \\
0 & \dfrac{1}{\tan(\mathit{fov}/2)} & 0 & 0 \\
0 & 0 & -\dfrac{\mathit{far}+\mathit{near}}{\mathit{far}-\mathit{near}} & -\dfrac{2\cdot\mathit{far}\cdot\mathit{near}}{\mathit{far}-\mathit{near}} \\
0 & 0 & -1 & 0
\end{bmatrix}
\]
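Similarly, here is a JavaScript sketch of the perspective matrix, again in column-major order. The function name makePerspective() is illustrative; fov is assumed to be in degrees, as with Matrix4.setPerspective():

// Illustrative sketch: the perspective projection matrix in column-major order
function makePerspective(fov, aspect, near, far) {
  var f = 1.0 / Math.tan(fov * Math.PI / 360.0);  // 1 / tan(fov / 2)
  var fn = far - near;
  return new Float32Array([
    f / aspect, 0, 0, 0,                      // column 0
    0,          f, 0, 0,                      // column 1
    0, 0, -(far + near) / fn, -1,             // column 2
    0, 0, -2 * far * near / fn, 0             // column 3
  ]);
}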
Appendix D
WebGL/OpenGL: Left or Right Handed?
In Chapter 2 , “Your First Step with WebGL,” the coordinate system of WebGL was
introduced as a right-handed system. However, you will probably come across tuto-
rials and other material on the web that contradict this. In this appendix, you’ll
learn the “real” coordinate systems used by WebGL by examining what will happen
when something is drawn using WebGL’s default settings. Because WebGL is based
on OpenGL, what you learn is equally applicable to OpenGL. You should read this
appendix after reading Chapter 7 , “Toward the 3D World,” because it refers back to
sample programs and explanations in that chapter.
Let’s start by referring to the “font of all knowledge”: the original specification.
Specifically, the authorized specification of OpenGL ES 2.0, which is the base specifi-
cation of WebGL, published by the Khronos Group,¹ states in Appendix B:
7. The GL does not force left- or right-handedness on any of its coordinate systems.
If this is the case, and WebGL is agnostic about handedness, then why do many books
and tutorials, and in fact this book, describe WebGL as right handed? Essentially, it’s
a convention. When you are developing your applications, you need to decide which
coordinate system you are using and stick with it. That’s true for your applications,
but it’s also true for the many libraries that have been developed to help people use
WebGL (and OpenGL). Many of those libraries choose to adopt the right-handed
convention, so over time it becomes the accepted convention and then becomes
synonymous with the GL itself, leading people to believe that the GL is right handed.
So why the confusion? If everybody accepts the same convention, there shouldn’t be
a problem. That’s true, but the complication arises because WebGL (and OpenGL) at
certain times requires the GL to choose a handedness to carry out its operations, a
default behavior if you will, and that default isn’t always right handed!
1. www.khronos.org/registry/gles/specs/2.0/es_cm_spec_2.0.24.pdf
In this appendix, we explore the default behavior of WebGL to give you a clearer under-
standing of the issue and how to factor this into your own applications.
To begin the exploration of WebGL’s default behavior, let’s construct a sample program
CoordinateSystem as a test bed for experimentation. We'll use this program to go back to
first principles, starting with the simplest method of drawing triangles and then adding
features to explore how WebGL draws multiple objects. The goal of our sample program is
to draw a blue triangle at –0.1 on the z-axis and then a red triangle at –0.5 on the z-axis.
Figure D.1 shows the triangles, their z coordinates, and colors.
Figure D.1 The triangles used in this appendix and their colors
As this appendix will show, to achieve our relatively modest goal, we actually have to get
a number of interacting features to work together, including the basic drawing, hidden
surface removal, and viewing volume. Unless all three are set up correctly, you will get
unexpected results when drawing, which can lead to confusion about left and right
handedness.
Sample Program CoordinateSystem.js
Listing D.1 shows CoordinateSystem.js . The code for error processing and some
comments have been removed to allow all lines in the program to be shown in a limited
space, but as you can see, it is a complete program.
Listing D.1 CoordinateSystem
1 // CoordinateSystem.js
2 // Vertex shader program
3 var VSHADER_SOURCE =
4 'attribute vec4 a_Position;\n' +
5 'attribute vec4 a_Color;\n' +
6 'varying vec4 v_Color;\n' +
7 'void main() {\n' +
8 ' gl_Position = a_Position;\n' +
9 ' v_Color = a_Color;\n' +
10 '}\n';
11
12 // Fragment shader program
13 var FSHADER_SOURCE =
14 '#ifdef GL_ES\n' +
15 'precision mediump float;\n' +
16 '#endif\n' +
17 'varying vec4 v_Color;\n' +
18 'void main() {\n' +
19 ' gl_FragColor = v_Color;\n' +
20 '}\n';
21
22 function main() {
23 var canvas = document.getElementById('webgl'); // Retrieve
24 var gl = getWebGLContext(canvas); // Get the context for WebGL
25 initShaders(gl, VSHADER_SOURCE, FSHADER_SOURCE);// Initialize shaders
26 var n = initVertexBuffers(gl); // Set vertex coordinates and colors
27
28 gl.clearColor(0.0, 0.0, 0.0, 1.0); // Specify the clear color
29 gl.clear(gl.COLOR_BUFFER_BIT); // Clear
30 gl.drawArrays(gl.TRIANGLES, 0, n); // Draw the triangles
31 }
32
33 function initVertexBuffers(gl) {
34 var pc = new Float32Array([ // Vertex coordinates and color
35 0.0, 0.5, -0.1, 0.0, 0.0, 1.0, // The blue triangle in front
36 -0.5, -0.5, -0.1, 0.0, 0.0, 1.0,
37 0.5, -0.5, -0.1, 1.0, 1.0, 0.0,
38
39 0.5, 0.4, -0.5, 1.0, 1.0, 0.0, // The red triangle behind
40 -0.5, 0.4, -0.5, 1.0, 0.0, 0.0,
41 0.0, -0.6, -0.5, 1.0, 0.0, 0.0,
42 ]);
43 var numVertex = 3; var numColor = 3; var n = 6;
44
45 // Create a buffer object and write data to it
46 var pcbuffer = gl.createBuffer();
47 gl.bindBuffer(gl.ARRAY_BUFFER, pcbuffer);
48 gl.bufferData(gl.ARRAY_BUFFER, pc, gl.STATIC_DRAW);
49
50 var FSIZE = pc.BYTES_PER_ELEMENT; // The number of byte
51 var STRIDE = numVertex + numColor; // Calculate the stride
52
53 // Assign the vertex coordinates to attribute variable and enable it
54 var a_Position = gl.getAttribLocation(gl.program, 'a_Position');
55 gl.vertexAttribPointer(a_Position, numVertex, gl.FLOAT, false, FSIZE *
➥STRIDE, 0);
56 gl.enableVertexAttribArray(a_Position);
57
58 // Assign the vertex colors to attribute variable and enable it
59 var a_Color = gl.getAttribLocation(gl.program, 'a_Color');
60 gl.vertexAttribPointer(a_Color, numColor, gl.FLOAT, false, FSIZE *
➥STRIDE, FSIZE * numVertex);
61 gl.enableVertexAttribArray(a_Color);
62
63 return n;
64 }
When the sample program is run, it produces the output shown in Figure D.2 . Although
it’s not easy to see in black and white (remember, you can run these examples in your
browser from the book’s website), the red triangle is in front of the blue triangle. This is
the opposite of what you might expect because lines 34 to 42 specify the vertex coordi-
nates of the blue triangle before those of the red triangle.
Figure D.2 CoordinateSystem
However, as explained in Chapter 7 , this is actually correct. What is happening is that
WebGL is first drawing the blue triangle, because its vertex coordinates are specified first,
and then it’s drawing the red triangle over the blue triangle. This is a little like oil paint-
ing; once you lay down a layer of paint, anything painted on top has to overwrite the
paint below.
For many newcomers to WebGL, this can be counterintuitive. Because WebGL is a system
for drawing 3D graphics, you’d expect it to “do the right thing” and draw the red triangle
behind the blue one. However, by default WebGL draws in the order specified in the
application code, regardless of the position on the z-axis. If you want WebGL to “do the
right thing,” you are required to enable the Hidden Surface Removal feature discussed in
Chapter 7 . As you saw in Chapter 7 , Hidden Surface Removal tells WebGL to be smart
about the 3D scene and to remove surfaces that are actually hidden. In our case, this
should deal with the red triangle problem because in the 3D scene, most of the red trian-
gle is hidden behind the blue one.
Hidden Surface Removal and the Clip Coordinate
System
Let’s turn on Hidden Surface Removal in our sample program and examine its effect. To
do that, enable the function using gl.enable(gl.DEPTH_TEST) , clear the depth buffer, and
then draw the triangles. First, you add the following at line 27:
27 gl.enable(gl.DEPTH_TEST);
Then you modify line 29 as follows:
29 gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
Now if you rerun the program after making these changes, you’d expect to see the
problem resolved and the blue triangle in front of the red one. However, what you actu-
ally see is that the red triangle is still in front. Again, although it’s difficult to see in black
and white, Figure D.3 shows the result.
Figure D.3 CoordinateSystem using the hidden surface removal function
This is unexpected and is part of the confusion surrounding WebGL’s left versus right
handedness. We have correctly programmed our example based on the belief that WebGL
is right handed, but it seems to be that WebGL is either telling us that –0.5 is located
in front of –0.1 on the z-axis or that WebGL does in fact use the left-handed coordinate
system, where the positive direction of the z-axis points into the screen ( Figure D.4 ).
x
z
y
Figure D.4 The left-handed coordinate system
The Clip Coordinate System and the Viewing Volume
So our application example follows the convention that WebGL is right handed, but
our program clearly shows a left-handed system is in place. What’s the explanation?
Essentially, hidden surface removal, when enabled, uses the clip coordinate system (see
Figure G.5 in Appendix G ), which itself uses the “left-handed” coordinate system, not the
right-handed one.
In WebGL (OpenGL), hidden surface removal is performed using the value of gl_
Position , the coordinates produced by the vertex shader. As you can see at line 8 in
the vertex shader in Listing D.1 , a_Position is directly assigned to gl_Position in
CoordinateSystem.js . This means that the z coordinate of the red triangle is passed as
–0.5 and that of the blue one is passed as –0.1 to the clip coordinate system (the left-
handed coordinate system). As you know, the positive direction of the z-axis in the left-
handed coordinate system points into the screen, so the smaller value of the z coordinate
(–0.5) is located in front of the bigger one (–0.1). Therefore, it is the right behavior for the
WebGL system to display the red triangle in front of the blue one in this situation.
This obviously contradicts the explanation in Chapter 3 (that WebGL uses the right-
handed coordinate system). So how do we achieve our goal of having the red triangle
displayed behind the blue triangle, and what does this tell us about WebGL’s default
behaviors? Until now, the program hasn’t considered the viewing volume that needs to be
set up correctly for Hidden Surface Removal to work with our coordinate system. When
used correctly, the viewing volume requires that the near clipping plane be located in
front of the far clipping plane (that is near < far ). However, the values of near and far are
the distance from the eye point toward the direction of line of sight and can take any
value. Therefore, it is possible to specify a value of far that is actually smaller than that of
near or even use negative values. (The negative values means the distance from the eye
point toward the opposite direction of line of sight.) Obviously, the values set for near and
far depend on whether we are assuming a right- or left-handed coordinate system.
Returning to the sample program, after setting the viewing volume correctly, let’s
carry out the hidden surface removal. Listing D.2 shows only the differences from
CoordinateSystem.js .
Listing D.2 CoordinateSystem_viewVolume.js
1 // CoordinateSystem_viewVolume.js
2 // Vertex shader program
3 var VSHADER_SOURCE =
4 'attribute vec4 a_Position;\n' +
5 'attribute vec4 a_Color;\n' +
6 'uniform mat4 u_MvpMatrix;\n' +
7 'varying vec4 v_Color;\n' +
8 'void main() {\n' +
9 'gl_Position = u_MvpMatrix * a_Position;\n' +
10 'v_Color = a_Color;\n' +
11 '}\n';
...
23 function main() {
...
29 gl.enable(gl.DEPTH_TEST); // Enable hidden surface removal function
30 gl.clearColor(0.0, 0.0, 0.0, 1.0); // Set the clear color
31 // Get the storage location of u_MvpMatrix
32 var u_MvpMatrix = gl.getUniformLocation(gl.program, 'u_MvpMatrix');
33
34 var mvpMatrix = new Matrix4();
35 mvpMatrix.setOrtho(-1, 1, -1, 1, 0, 1); // Set the viewing volume
36 // Pass the view matrix to u_MvpMatrix
37 gl.uniformMatrix4fv(u_MvpMatrix, false, mvpMatrix.elements);
38
39 gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
40 gl.drawArrays(gl.TRIANGLES, 0, n); // Draw the triangle
41 }
Once you run this sample program, you can see the result shown in Figure D.5 , in which
the blue triangle is displayed in front of the red one.
Figure D.5 CoordinateSystem_viewVolume
The critical change is that the uniform variable ( u_MvpMatrix ) for passing a view matrix
was added to the vertex shader. It was multiplied by a_Position , and then its result was
assigned to gl_Position . Although we used the setOrtho() method to specify the viewing
volume, setPerspective() would have the same effect.
What Is Correct?
Let’s compare the process of the vertex shader in CoordinateSystem.js with that in
CoordinateSystem_viewVolume.js .
Line 8 in CoordinateSystem.js :
8 ' gl_Position = a_Position;\n' +
became line 9 in CoordinateSystem_viewVolume.js :
9 ' gl_Position = u_MvpMatrix * a_Position;\n' +
As you can see, in CoordinateSystem_viewVolume.js , which displays the order of triangles
as was intended, the transformation matrix (in this case, a view matrix) is multiplied by
a vertex coordinate. To understand this operation, let’s examine how to rewrite line 8
in CoordinateSystem.js into the form 'matrix * vertex coordinates', just like line 9 in
CoordinateSystem_viewVolume.js .
Line 8 assigns the vertex coordinate ( a_Position ) to gl_Position directly. To ensure that
the matrix multiplication operation has no effect, the matrix must have the following
elements (that is, it must be the identity matrix):
\[
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\]
Therefore, line 8 in CoordinateSystem.js actually has the same effect as passing the iden-
tity matrix to u_MvpMatrix in line 9 in CoordinateSystem_viewVolume.js . In essence, this
matrix is controlling the default behavior of WebGL.
To understand this behavior better, let’s clarify what is happening if the projection matrix
is the identity matrix. You can understand this by using the matrix in Appendix C (see
Figure D.6 ) and the identity matrix to find left , right , top , bottom , near , and far .
\[
\begin{bmatrix}
\dfrac{2}{\mathit{right}-\mathit{left}} & 0 & 0 & -\dfrac{\mathit{right}+\mathit{left}}{\mathit{right}-\mathit{left}} \\
0 & \dfrac{2}{\mathit{top}-\mathit{bottom}} & 0 & -\dfrac{\mathit{top}+\mathit{bottom}}{\mathit{top}-\mathit{bottom}} \\
0 & 0 & -\dfrac{2}{\mathit{far}-\mathit{near}} & -\dfrac{\mathit{far}+\mathit{near}}{\mathit{far}-\mathit{near}} \\
0 & 0 & 0 & 1
\end{bmatrix}
\]
Figure D.6 The projection matrix generated by setOrtho()
In this case, right – left = 2 and right + left = 0, which resolves to left = –1, right = 1. Equally,
far – near = –2 and far + near = 0, resolving to near = 1 and far = –1. That is:
left = -1, right = 1, bottom = -1, top = 1, near = 1, and far = -1
Using these parameters to setOrtho() as follows:
mvpMatrix.setOrtho(-1, 1, -1, 1, 1, -1);
results in near being greater than far . This means that the far clipping plane is placed in
front of the near clipping plane along the direction of the line of sight (see Figure D.7 ).
Figure D.7 The viewing volume created by the identity matrix
If you specify the viewing volume by yourself, you will observe the same phenomenon
when you specify near > far to setOrtho() . That is, WebGL (OpenGL) follows the right-
handed coordinate system when you specify the viewing volume in this way.
Then look at the matrix representing the viewing volume in which the objects are
displayed correctly:
mvpMatrix.setOrtho(-1, 1, -1, 1, -1, 1);
This method generates the following projection matrix:
\[
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\]
You will recognize that this matrix is a scaling matrix described in Chapter 4 , “More
Transformations and Basic Animation.” That is the matrix generated by setScale(1, 1,
-1) . You should note that the scaling factor of the z-axis is –1, meaning that the sign
of the z coordinates will be reversed. So this matrix transforms the conventional right-
handed coordinate system used in this book (and assumed by most WebGL libraries) to
the left-handed coordinate system used in the clip coordinate system by reversing the z
coordinates.
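You can confirm this with a few lines of JavaScript using the book's Matrix4 type; this sketch simply prints the elements of the generated matrix (the expected output is noted in the comment):

// Sketch: inspect the matrix produced by setOrtho(-1, 1, -1, 1, -1, 1)
var m = new Matrix4();
m.setOrtho(-1, 1, -1, 1, -1, 1);
// Expected elements (column-major): 1,0,0,0, 0,1,0,0, 0,0,-1,0, 0,0,0,1
console.log(m.elements);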
Summary
In summary, we know from the specification that WebGL doesn’t enforce either right
or left handedness. We have seen that many WebGL libraries and applications adopt
the convention that WebGL is right handed, as do we in this book. When WebGL’s
default behavior contradicts this (for example, when working in clip-space where it uses
a left-handed coordinate system), we can compensate programmatically, by reversing,
for example, the z coordinates. This allows us to continue to follow the convention that
WebGL is right handed. However, as previously stated, it’s only a convention. It’s one
that most people follow, but one that will occasionally trip you up if you aren’t aware of
WebGL’s default behaviors and how to handle them.
Appendix E
The Inverse Transpose Matrix
The inverse transpose matrix, previously introduced in Chapter 8 , “Lighting Objects,” is
the matrix obtained by taking the inverse of a given matrix and then transposing it. As shown
in Figure E.1 , the direction of the normal vector of an object may change depending on the
coordinate transformation applied. However, if you use the inverse transpose of the
model matrix, you do not need to worry about this change in your calculations.
Figure E.1 The direction of normal vector changes along with the coordinate transformation
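As a reminder, the pattern from Chapter 8 for computing this matrix with the book's Matrix4 type is sketched below; modelMatrix and the uniform location u_NormalMatrix are assumed to have been set up elsewhere:

// Sketch: compute the inverse transpose of the model matrix for transforming normals
var normalMatrix = new Matrix4();
normalMatrix.setInverseOf(modelMatrix);  // invert the model matrix...
normalMatrix.transpose();                // ...and then transpose the result
// Pass the result to the vertex shader (u_NormalMatrix is an assumed uniform location)
gl.uniformMatrix4fv(u_NormalMatrix, false, normalMatrix.elements);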
In Chapter 8 , you saw how to use the inverse transpose of the model matrix to trans-
form normals. However, there are actually some cases where you can also determine
the normal vector direction with the model matrix. For instance, when rotating, you
can determine the direction of the normal vector by multiplying the normal vector by
the rotation matrix. When calculating the direction of the normal vector, whether you
resort to the model matrix itself or its inverse transpose depends on which transform