Linear or non-linear shadow maps?


Quote:
Original post by RobMaddison
I understand that, for aliasing mitigation, it might be beneficial to use linear depth maps as opposed to non-linear depth maps when rendering to a shadowmap floating point texture, but how is that normally achieved? Is it as simple as converting the regular float depth value to a 16.16 fixed point float or something like that?


First, some background on depth buffers. The "problem" with depth buffers is that with a perspective projection, the depth value you get by dividing z by w increases non-linearly from 0 to 1 as you move from the near-clip plane to the far-clip plane. So most of your viewable range ends up mapped into the [0.9, 1.0] portion of the depth range, which gives you a very uneven distribution of precision in your depth buffer. See this and this for more info.
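
To make the non-linearity concrete, here's a minimal sketch of the post-projection depth curve for a standard D3D-style perspective projection (the near/far values below are just example numbers, not anything from the original post):

```hlsl
// Post-projection depth (z/w) for a standard D3D perspective projection,
// where z is view-space depth: depth(z) = (f / (f - n)) * (1 - n / z)
float ProjectedDepth(float z, float nearClip, float farClip)
{
    return (farClip / (farClip - nearClip)) * (1.0f - nearClip / z);
}

// With nearClip = 1 and farClip = 1000:
//   ProjectedDepth(2.0)  ~= 0.5005  -> half the depth range is used up 1 unit in
//   ProjectedDepth(10.0) ~= 0.9009  -> 90% of the range covers the first ~1% of the scene
```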

Now for shadow maps, whether this is an issue depends on how you're storing your shadow map and also on the type of light source. For lights where you use a perspective projection to generate the shadow map (usually spot and point lights), this can be a problem if you store z/w in your shadow map. If you're manually writing the depth to a floating-point render target the problem is even worse: floating-point values have more precision close to 0 than close to 1, and this compounds with the perspective projection problem to give you even more error. But for this particular case you have a very simple remedy: just store something other than z/w. For instance, you can store the world-space distance from the light position to the pixel position, or you can store view-space z (a sketch follows below). For directional lights an orthographic projection is typically used, and with an orthographic projection your z/w value increases linearly, so you won't have precision distribution problems.
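
Here's a minimal sketch of the "store something other than z/w" approach for a spot light, writing normalized world-space distance to an R32_FLOAT render target. The constant and matrix names are assumptions for illustration, not any particular engine's API:

```hlsl
// Shadow map generation pass: store normalized world-space distance to the
// light instead of z/w, so precision is distributed evenly over the range.
float4x4 World;            // assumed per-object world matrix
float4x4 LightViewProj;    // assumed light view * projection matrix
float3   LightPosWS;       // light position in world space
float    LightRange;       // far clip of the light, used to normalize

struct VSOutput
{
    float4 PositionCS : SV_Position;
    float3 PositionWS : TEXCOORD0;
};

VSOutput ShadowMapVS(float4 positionOS : POSITION)
{
    VSOutput output;
    float4 positionWS = mul(positionOS, World);
    output.PositionWS = positionWS.xyz;
    output.PositionCS = mul(positionWS, LightViewProj);
    return output;
}

// Written to an R32_FLOAT render target; the stored value is linear in distance.
float ShadowMapPS(VSOutput input) : SV_Target
{
    return length(input.PositionWS - LightPosWS) / LightRange;
}
```

In the scene pass you'd compute the same length(pixelPosWS - LightPosWS) / LightRange for the shaded pixel, subtract a small bias, and compare against the sampled value.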

Where things get tricky is if you don't write your shadow map values into a render target, and instead just directly sample the depth buffer. This is very common in commercial games, since it saves you from having to write to a render target in your shadow map generation step. This saves bandwidth and memory, and most modern GPUs also have some sort of "fast-z" mode that lets them operate more quickly when only z values are being written. In these cases you can't directly control the z value being written to your depth buffer unless you output depth from your pixel shader, and doing that is a pretty bad idea since it disables early-z optimizations. Also, if you try to muck with the z and w values in your vertex shader, you'll usually end up breaking some combination of rasterization, early-z, or z compression. The only remedy I know of for this situation is to use a floating-point depth buffer (if it's available) and flip the near and far planes in your projection.
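
A minimal sketch of what "flipping the planes" means, using a D3D-style left-handed, row-vector perspective matrix (the function is mine, and in practice you'd build this on the CPU; it's just the standard projection with near and far swapped):

```hlsl
// Reversed-z perspective projection: the near plane maps to depth 1 and the
// far plane to depth 0, so the float format's extra precision near 0 lands
// at the far end, roughly cancelling the hyperbolic z/w distribution.
// Requires a floating-point depth buffer, clearing depth to 0 instead of 1,
// and a GREATER (or GREATER_EQUAL) depth test.
float4x4 PerspectiveFovReversedZ(float fovY, float aspect, float nearClip, float farClip)
{
    float yScale = 1.0f / tan(0.5f * fovY);
    float xScale = yScale / aspect;
    float m33 = nearClip / (nearClip - farClip);            // maps z = near -> 1
    float m43 = -farClip * nearClip / (nearClip - farClip); // maps z = far  -> 0

    return float4x4(xScale, 0.0f,   0.0f, 0.0f,
                    0.0f,   yScale, 0.0f, 0.0f,
                    0.0f,   0.0f,   m33,  1.0f,
                    0.0f,   0.0f,   m43,  0.0f);
}
```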

Quote:
Original post by RobMaddison
Secondly, I've read in several places that there is some hardware-assistance when dealing with shadowmaps - could someone please explain what this assistance is exactly?

Thanks in advance


This typically refers to Nvidia's hardware PCF extension, which has been around for a long time. Basically it performs 2x2 PCF on the depth values for you in the texture unit, so that you don't have to write pixel shader code for doing the 4 samples, the 4 compares, and the filtering. It's still available as a "driver hack" in D3D9, but in D3D10/11 it's no longer necessary since the API has generalized support for performing a compare operation during a texture fetch. ATI actually supports the D3D9 hack on their most recent hardware.
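
In D3D10/11 HLSL the generalized version looks something like this (a minimal sketch; the resource names and register assignments are assumptions):

```hlsl
// D3D10/11 comparison sampling: the texture unit performs the depth compare,
// and with a COMPARISON_MIN_MAG_LINEAR_MIP_POINT filter it also does the
// 2x2 PCF, just like the old Nvidia hardware PCF path.
Texture2D<float>       ShadowMap     : register(t0);
SamplerComparisonState ShadowSampler : register(s0); // comparison func: LESS_EQUAL

float SampleShadowPCF(float3 shadowPos) // xy = shadow map UV, z = receiver depth
{
    // Returns the bilinearly filtered result of comparing shadowPos.z
    // against the 2x2 footprint of depth values around shadowPos.xy.
    return ShadowMap.SampleCmpLevelZero(ShadowSampler, shadowPos.xy, shadowPos.z);
}
```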
