Test from wolf96
The uncorrected blur produces inaccurate colors and loses a large amount of detail
One mistake most of us have made at some point is to perform image blurring or downsizing (or any other linear pixel operation) on photographs without taking gamma into account. And that's because photographs are encoded in gamma space for best display quality. Ideally your image blurring algorithm should preserve the image brightness intact and only blur the detail in it; in other words, if the blur kernel is going to be energy conserving, it will have to operate in linear space rather than gamma space. So, when blurring a photograph, one should de-gamma the image pixels before accumulating them, and then apply gamma back to the result after averaging, for display. This, of course, is something we don't always do, because it's a pain and most of the time slow if done by hand (thankfully most GPU hardware these days can do this for you if you choose the correct internal format for your textures).
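To see concretely why averaging in gamma space darkens an image, consider averaging a pure black pixel and a pure white pixel. The sketch below uses a plain 2.2 power curve as an approximation of the exact piecewise sRGB transfer function (the function names are illustrative, not from the original article):

#include <math.h>

/* Averaging encoded values directly: the result 0.5 displays as roughly
   0.5^2.2 = 0.22 of the physical brightness, far too dark. */
float averageWrong( float a, float b )
{
    return 0.5f*(a+b);
}

/* De-gamma first, average in linear space, then re-encode for display:
   the result displays as exactly half brightness, as expected. */
float averageRight( float a, float b )
{
    float lin = 0.5f*( powf(a,2.2f) + powf(b,2.2f) );
    return powf( lin, 1.0f/2.2f );
}

For black and white inputs, averageWrong returns 0.5 while averageRight returns about 0.73, which is the correct encoded value for 50% physical brightness.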
Wrong way to do a box blur:

vec3 col = vec3(0.0);
for( int j=-w; j<=w; j++ )
for( int i=-w; i<=w; i++ )
{
    col += src[x+i,y+j];
}
dst[x,y] = col / ((2*w+1)*(2*w+1));
Correct way to do a box blur:

vec3 col = vec3(0.0);
for( int j=-w; j<=w; j++ )
for( int i=-w; i<=w; i++ )
{
    col += DeGamma( src[x+i,y+j] );
}
dst[x,y] = Gamma( col / ((2*w+1)*(2*w+1)) );
So, remember: if you are on the GPU, use sRGB formats for textures that contain photographic material and are ready to be displayed as they are. If you are on the CPU, remember to apply the inverse gamma correction yourself. You probably want to do this with a 256-entry LUT that stores the de-gammaed values as 16-bit fixed point integers (mapping 8 bit to 8 bit would give you a loss of quality, since the whole point of storing images in gamma space is exactly to improve perceptual quantization quality).
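Building that LUT is a one-time loop. The sketch below (names are illustrative) maps each 8-bit gamma-encoded value to a 16-bit fixed point linear value in 0..65535, again assuming a 2.2 power curve as an approximation of sRGB:

#include <math.h>

/* 256-entry de-gamma table: 8-bit encoded value -> 16-bit linear value */
unsigned short gDeGammaLUT[256];

void initDeGammaLUT( void )
{
    for( int i=0; i<256; i++ )
    {
        float lin = powf( (float)i/255.0f, 2.2f );
        gDeGammaLUT[i] = (unsigned short)( lin*65535.0f + 0.5f );
    }
}

A blur inner loop then replaces the powf call with a table fetch, accumulating the 16-bit linear values in a wide integer or float before re-applying gamma at the end.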
http://iquilezles.org/www/articles/gamma/gamma.htm