[ZZ] RGBM and RGBE encoding for HDR

 

Deferred lighting separates lighting from geometry rendering and turns lighting into a completely image-space technique. This is very different from forward rendering. Early hardware limited us to a maximum of eight lights per object at a time, and everything (light settings, materials, textures) had to be set up through render states before drawing. As hardware became more powerful, we could do additive lighting, one light per pass, with all passes blended together, so there is seemingly no limit on the number of lights. But if we put too much lighting calculation into the pixel shader, the situation gets worse: because of the z-test, many pixels that have already run the expensive lighting calculation are later discarded. Avoiding this wasted work is exactly the advantage of deferred lighting.

In deferred lighting, a light's cost is driven mainly by the screen area it covers; all lighting is per-pixel and all surfaces are lit consistently (every object uses the same lighting equation); and lights benefit from fast hardware Z-reject. On the other hand, some disadvantages come along: a large frame-buffer footprint is required; fill rate can become very high in some situations (many lights shading the full screen); it is difficult to support multiple lighting equations; transparent objects are very hard to handle; and the hardware requirements are high.
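
To make the "image-space" idea concrete, here is a minimal CPU-side sketch: once the scene has been rasterized into a G-buffer, shading is just a loop over pixels and lights. All names here (GBufferTexel, PointLight, shadeTexel) and the simple Lambert/linear-falloff model are illustrative assumptions, not code from any particular engine.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b)    { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  sub(Vec3 a, Vec3 b)    { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static Vec3  scale(Vec3 a, float s) { return {a.x*s, a.y*s, a.z*s}; }
static Vec3  add(Vec3 a, Vec3 b)    { return {a.x+b.x, a.y+b.y, a.z+b.z}; }

struct GBufferTexel { Vec3 position, normal, albedo; };   // one texel of the G-buffer
struct PointLight   { Vec3 position, color; float radius; };

// Simple Lambert shading for one G-buffer texel. The cost depends only on the
// number of shaded pixels and lights, never on how many triangles wrote the texel.
Vec3 shadeTexel(const GBufferTexel& g, const std::vector<PointLight>& lights) {
    Vec3 lit = {0, 0, 0};
    for (const PointLight& l : lights) {
        Vec3 toLight = sub(l.position, g.position);
        float dist = std::sqrt(dot(toLight, toLight));
        if (dist >= l.radius || dist <= 0.0f) continue;   // outside the light volume
        Vec3 dir = scale(toLight, 1.0f / dist);
        float ndotl = std::max(0.0f, dot(g.normal, dir));
        float atten = 1.0f - dist / l.radius;             // simple linear falloff
        lit = add(lit, scale(l.color, ndotl * atten));
    }
    return { lit.x * g.albedo.x, lit.y * g.albedo.y, lit.z * g.albedo.z };
}
```

In a real renderer this loop runs on the GPU by drawing light volumes or full-screen quads, but the cost structure is the same: pixels covered times lights, independent of scene complexity.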

Deferred lighting is well suited to photo-realistic rendering: it can achieve more complex visual effects and higher visual quality than traditional forward rendering while still keeping the frame rate high. This is the main reason why many recent console and video games use it so widely.

 

G-Buffers

The key to deferred lighting is the G-buffer and its contents. Which surface parameters we store depends on the lighting equation. For example, for direct lighting with the Phong/Blinn model we usually need the diffuse color, normal, position, diffuse coefficient, specular color, specular coefficient, and so on. The choice of parameters is up to you, and some can be dropped: you could skip the specular terms, and instead of using a dedicated buffer for the surface position you can store only the depth buffer and reconstruct the position from the screen-space depth value (the Nvidia site has an HDR deferred rendering sample that does this). Below is a comparison between two implementations I downloaded from Beyond3D and Nvidia [HDR Deferred Shading].
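
The depth-reconstruction trick mentioned above can be sketched as follows. The names (reconstructViewPosition, invProj) are mine, and the sketch assumes a D3D-style [0,1] device depth, [0,1] UVs with the origin at the top-left, and a column-major inverse projection matrix; other engines use different conventions.

```cpp
#include <array>

struct Vec4 { float x, y, z, w; };
using Mat4 = std::array<std::array<float, 4>, 4>;   // column-major: m[col][row]

static Vec4 mul(const Mat4& m, const Vec4& v) {
    return {
        m[0][0]*v.x + m[1][0]*v.y + m[2][0]*v.z + m[3][0]*v.w,
        m[0][1]*v.x + m[1][1]*v.y + m[2][1]*v.z + m[3][1]*v.w,
        m[0][2]*v.x + m[1][2]*v.y + m[2][2]*v.z + m[3][2]*v.w,
        m[0][3]*v.x + m[1][3]*v.y + m[2][3]*v.z + m[3][3]*v.w,
    };
}

// Unproject a pixel back to view space: build clip-space coordinates from
// (uv, stored depth), apply the inverse projection matrix, then divide by w.
Vec4 reconstructViewPosition(float u, float v, float depth, const Mat4& invProj) {
    Vec4 clip = { u * 2.0f - 1.0f, (1.0f - v) * 2.0f - 1.0f, depth, 1.0f };
    Vec4 view = mul(invProj, clip);
    return { view.x / view.w, view.y / view.w, view.z / view.w, 1.0f };
}
```

Storing only depth saves a full render target at the cost of a few extra shader instructions in the lighting pass.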
 
After the G-buffers are created, each light's contribution is computed and accumulated into the light buffer. The accumulated value may exceed 1; you can either ignore this and let the hardware saturate the result, or use a high dynamic range buffer to hold it. If you use an HDR buffer, you need some way to make those HDR values display correctly on an LDR monitor, which is where tone mapping comes in.
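
Here is a minimal sketch of one common tone-mapping choice, the Reinhard operator, assuming the light buffer holds linear HDR values. The operator, the exposure parameter, and the 2.2 display gamma are typical assumptions, not what any particular sample uses.

```cpp
#include <cmath>

struct Color { float r, g, b; };

// Compress linear HDR radiance into displayable [0,1) and apply display gamma.
Color reinhardToneMap(Color hdr, float exposure = 1.0f) {
    float r = hdr.r * exposure, g = hdr.g * exposure, b = hdr.b * exposure;
    Color ldr = { r / (1.0f + r), g / (1.0f + g), b / (1.0f + b) };  // Reinhard: x / (1 + x)
    ldr.r = std::pow(ldr.r, 1.0f / 2.2f);   // gamma correction for a 2.2 monitor
    ldr.g = std::pow(ldr.g, 1.0f / 2.2f);
    ldr.b = std::pow(ldr.b, 1.0f / 2.2f);
    return ldr;
}
```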

 

High Dynamic Range (HDR) 
Using full HDR buffers can sometimes waste too much memory. Instead, we can encode the HDR value (RGB) and store it in an LDR buffer (RGBA). Many encoding methods can be found on the Internet; RGBM and RGBE are the usual choices. Weaker hardware like the Wii tends to prefer RGBM encoding, while more powerful devices like the PS3 and Xbox 360 tend to prefer RGBE encoding.
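
Below is a sketch of both encodings. The exact conventions vary between engines: the RGBM range constant and whether encoding is done in linear or gamma space differ, and the RGBE code follows Ward's shared-exponent format (as used by Radiance .hdr files). The names and constants are illustrative choices.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

struct RGB  { float r, g, b; };
struct RGBM { float r, g, b, m; };   // RGB in [0,1] plus a shared multiplier in alpha

// RGBM: store color / (M * range) in RGB and the multiplier M in alpha.
// 'range' caps the maximum representable intensity (6 is a common choice).
RGBM encodeRGBM(RGB c, float range = 6.0f) {
    float maxc = std::max({c.r, c.g, c.b, 1e-6f});
    float m = std::min(maxc / range, 1.0f);
    m = std::ceil(m * 255.0f) / 255.0f;           // quantize M so decode never clips
    float s = 1.0f / (m * range);
    return { c.r * s, c.g * s, c.b * s, m };
}

RGB decodeRGBM(RGBM e, float range = 6.0f) {
    float s = e.m * range;
    return { e.r * s, e.g * s, e.b * s };
}

// RGBE: three mantissa bytes plus a shared, biased exponent byte.
void encodeRGBE(RGB c, std::uint8_t out[4]) {
    float maxc = std::max({c.r, c.g, c.b});
    if (maxc < 1e-32f) { out[0] = out[1] = out[2] = out[3] = 0; return; }
    int e;
    float scale = std::frexp(maxc, &e) * 256.0f / maxc;  // maxc = mantissa * 2^e
    out[0] = static_cast<std::uint8_t>(c.r * scale);
    out[1] = static_cast<std::uint8_t>(c.g * scale);
    out[2] = static_cast<std::uint8_t>(c.b * scale);
    out[3] = static_cast<std::uint8_t>(e + 128);          // biased exponent
}

RGB decodeRGBE(const std::uint8_t in[4]) {
    if (in[3] == 0) return {0, 0, 0};
    float f = std::ldexp(1.0f, static_cast<int>(in[3]) - (128 + 8));
    return { in[0] * f, in[1] * f, in[2] * f };
}
```

In practice you encode when writing the HDR result into the LDR render target and decode when reading it back, e.g. in the tone-mapping or bloom pass. RGBM clamps anything above its range, while RGBE keeps a much larger range at the cost of blending no longer being meaningful on the encoded values.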

 

Transparency Objects 
There is no cheap solution for transparent objects in deferred lighting. One option is to fall back to forward rendering for them after the deferred pass; another is to blend them in with a pixel shader during the post-process phase.

 

Light Optimization 
Since the cost of deferred lighting scales with the screen area covered, the best optimization is to minimize that covered area. For a directional light, a full-screen light shader is used. For volume lights such as point and spot lights, a convex light volume that tightly encloses the light is rasterized to find the minimal clipped screen area (a sphere for a point light, a frustum for a spot light). We must make sure that the light volume is not clipped by the near and far clip planes at the same time; otherwise there will be lighting holes.
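
For a point light, the bounding sphere radius can be derived from its attenuation so that the light volume only covers pixels the light can visibly affect. This sketch assumes the common constant/linear/quadratic attenuation model and a brightness cutoff; the function name and the 1/256 threshold are illustrative choices, not from the original article.

```cpp
#include <cmath>

// Solve intensity / (kc + kl*d + kq*d^2) = cutoff for d (the larger root),
// i.e. the distance at which the light's contribution drops below 'cutoff'.
float pointLightRadius(float intensity, float kc, float kl, float kq,
                       float cutoff = 1.0f / 256.0f) {
    float c = kc - intensity / cutoff;
    float disc = kl * kl - 4.0f * kq * c;
    if (kq <= 0.0f || disc < 0.0f) return 0.0f;
    return (-kl + std::sqrt(disc)) / (2.0f * kq);
}
```

The resulting radius is used to scale the sphere mesh drawn for the light, keeping its rasterized screen footprint as small as possible.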

Another method is to use the stencil buffer to mark the lit screen area more precisely.

Reposted from: https://www.cnblogs.com/kylegui/p/3871181.html
