Why a normalization cube map is used in bump mapping

Bump mapping relies on modulating the surface color by the dot product of 
a unit-length light direction and a unit-length surface normal, 
so you want your fragment program to work with unit-length light 
directions. 
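
As a reminder of why unit length matters, here is a minimal C++ sketch of the diffuse factor that bump mapping modulates per pixel; the Vec3 type and function names are purely illustrative, not part of any particular engine or shading language.

```cpp
#include <algorithm>
#include <array>

using Vec3 = std::array<float, 3>;

float Dot(const Vec3& a, const Vec3& b)
{
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// dot(N, L) equals the cosine of the angle between the normal and the light
// direction only when both vectors are unit length, which is why the fragment
// program needs normalized inputs.
float DiffuseFactor(const Vec3& unitNormal, const Vec3& unitLightDir)
{
    return std::max(Dot(unitNormal, unitLightDir), 0.0f);
}
```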

The light directions are either specified as per-vertex data or 
generated per vertex. The typical example is a point light: the 
vertex program computes P - V as the light direction, where P is the 
light position and V is the vertex position. You can normalize to 
U = (P - V)/Length(P - V) in the vertex program, but this costs you some 
GPU cycles. Moreover, the rasterizer interpolates these per-vertex 
unit-length vectors, generating per-pixel directions that are usually 
not unit length. 
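
To make the interpolation issue concrete, here is a small stand-alone C++ sketch (illustrative host-side code, not an actual vertex program) that builds two per-vertex unit-length light directions from a hypothetical point light and shows that their interpolated value is shorter than unit length.

```cpp
#include <array>
#include <cmath>
#include <cstdio>

using Vec3 = std::array<float, 3>;

Vec3 Sub(const Vec3& a, const Vec3& b) { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
float Length(const Vec3& v) { return std::sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]); }
Vec3 Scale(const Vec3& v, float s) { return {v[0]*s, v[1]*s, v[2]*s}; }
Vec3 Normalize(const Vec3& v) { return Scale(v, 1.0f / Length(v)); }

int main()
{
    // Hypothetical light position P and two vertex positions V on one edge.
    Vec3 P  = {10.0f, 10.0f, 0.0f};
    Vec3 V0 = { 0.0f,  0.0f, 0.0f};
    Vec3 V1 = { 5.0f,  0.0f, 0.0f};

    // Per-vertex unit-length light directions U = (P - V)/Length(P - V),
    // as a vertex program would output them.
    Vec3 u0 = Normalize(Sub(P, V0));
    Vec3 u1 = Normalize(Sub(P, V1));

    // The rasterizer linearly interpolates the vertex outputs; halfway along
    // the edge the interpolated direction is shorter than unit length.
    Vec3 mid = Scale({u0[0]+u1[0], u0[1]+u1[1], u0[2]+u1[2]}, 0.5f);
    std::printf("interpolated length = %f\n", Length(mid));  // prints < 1
    return 0;
}
```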

When the fragment program gets one of these light directions, 
you can normalize it (assuming a powerful enough shader model) 
using an inverse-sqrt operation. This is a per-pixel cost, which 
is expensive. The normalization cube map is a cheap alternative 
that instead uses a texture lookup to produce a unit-length light 
direction. [If you have bilinear filtering enabled for the cube map, 
there is an interpolation in the lookup, producing a non-unit-length 
vector, but for a cube map of sufficiently high resolution this is 
not really noticeable. With nearest filtering, you will get a 
unit-length result.] 
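
For reference, here is a minimal CPU-side sketch of how such a normalization cube map could be filled, assuming an RGB8 texture that stores 0.5*n + 0.5 for the unit direction n and the usual OpenGL cube map face conventions; the function names are made up for illustration. The fragment program then samples this cube map with the interpolated light direction and recovers a unit vector as 2*texel - 1 instead of calling normalize().

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 Normalize(Vec3 v)
{
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return {v.x/len, v.y/len, v.z/len};
}

// Map a face index and texel coordinates (s,t in [-1,1]) to the direction
// that texel represents, using the +X,-X,+Y,-Y,+Z,-Z face ordering.
static Vec3 TexelToDirection(int face, float s, float t)
{
    switch (face)
    {
    case 0:  return { 1.0f,   -t,   -s};  // +X
    case 1:  return {-1.0f,   -t,    s};  // -X
    case 2:  return {    s, 1.0f,    t};  // +Y
    case 3:  return {    s,-1.0f,   -t};  // -Y
    case 4:  return {    s,   -t, 1.0f};  // +Z
    default: return {   -s,   -t,-1.0f};  // -Z
    }
}

// Returns six faces of size-by-size RGB8 texels, ready to upload as a cube map.
std::vector<std::vector<std::uint8_t>> MakeNormalizationCubeMap(int size)
{
    std::vector<std::vector<std::uint8_t>> faces(6);
    for (int face = 0; face < 6; ++face)
    {
        faces[face].resize(3 * size * size);
        for (int y = 0; y < size; ++y)
        {
            for (int x = 0; x < size; ++x)
            {
                float s = 2.0f * (x + 0.5f) / size - 1.0f;
                float t = 2.0f * (y + 0.5f) / size - 1.0f;
                Vec3 n = Normalize(TexelToDirection(face, s, t));
                // Encode the unit vector into [0,255] per channel.
                std::uint8_t* p = &faces[face][3 * (y * size + x)];
                p[0] = static_cast<std::uint8_t>(255.0f * (0.5f * n.x + 0.5f));
                p[1] = static_cast<std::uint8_t>(255.0f * (0.5f * n.y + 0.5f));
                p[2] = static_cast<std::uint8_t>(255.0f * (0.5f * n.z + 0.5f));
            }
        }
    }
    return faces;
}
```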

When you use a normalization cube map, there is no need to normalize 
P - V in the vertex program. So overall, you avoid the inverse-sqrt 
operations in both the vertex and fragment programs. 



http://www.groupsrv.com/computers/about300243.html
