[Translation] SG Series Part 1: A Brief (and Incomplete) History of Baked Lighting Representations


Original article link

A Brief (and Incomplete) History of Baked Lighting Representations

This is part 1 of a series on Spherical Gaussians and their applications for pre-computed lighting. You can find the other articles here:

Part 1 – A Brief (and Incomplete) History of Baked Lighting Representations

Part 2 – Spherical Gaussians 101

Part 3 – Diffuse Lighting From an SG Light Source

Part 4 – Specular Lighting From an SG Light Source

Part 5 – Approximating Radiance and Irradiance With SG’s

Part 6 – Step Into The Baking Lab

For part 1 of this series, I’m going to provide some background material for our research into Spherical Gaussians. The main purpose is to cover some of the alternatives to the approach we used for The Order: 1886, and also to help you understand why we decided to pursue Spherical Gaussians. The main emphasis is going to be on discussing what exactly we store in pre-baked lightmaps and probes, and how that data is used to compute diffuse or specular lighting. If you’re already familiar with the concepts of pre-computing radiance or irradiance and approximating them using basis functions like the HL2 basis or Spherical Harmonics, then you will probably want to skip to the next article.

Translator’s note: throughout this series, “SG” is used as shorthand for Spherical Gaussian.

Before we get started, here’s a quick glossary of the terms I use in the formulas:

  • Lo – the outgoing radiance (lighting) towards the viewer

  • Li – the incoming radiance (lighting) hitting the surface

  • O – the direction pointing towards the viewer (often denoted as “V” in shader code dealing with lighting)

  • i – the direction pointing towards the incoming radiance hitting the surface (often denoted as “L” in shader code dealing with lighting)

  • n – the direction of the surface normal

  • x – the 3D location of the surface point

  • ∫Ω – integral about the hemisphere

  • θi – the angle between the surface normal and the incoming radiance direction

  • θo – the angle between the surface normal and the outgoing direction towards the viewer

  • f(i, O) – the BRDF of the surface

Translator’s note: radiance is radiant power per unit area per unit solid angle, while irradiance is radiant power per unit area, with no restriction on solid angle.

The Olden Days – Storing Irradiance

Games have used pre-computed lightmaps for almost as long as they have been using shaded 3D graphics, and they’re still quite popular in 2016. The idea is simple: pre-compute a lighting value for every texel, then sample those lighting values at runtime to determine the final appearance of a surface. It’s a simple concept to grasp, but there are some details you might not think about if you’re just learning how they work. For instance, what exactly does it mean to store “lighting” in a texture? What exact value are we computing, anyway? In the early days the value fetched from the lightmap was simply multiplied with the material’s diffuse albedo color (typically done with fixed-function texture stages), and then directly output to the screen. Ignoring the issue of gamma correction and sRGB transfer functions for the moment, we can work backwards from this simple description to describe this old-school approach in terms of the rendering equation. This might seem like a bit of a pointless exercise, but I think it helps build a solid base that we can use to discuss more advanced techniques.

Translator’s note: in practice, lightmapped meshes store a dedicated second set of UVs per vertex (the “lightmap UVs”), which are used to sample the baked lighting texture at runtime.

So we know that our lightmap contains a single fixed color per-texel, and we apply it the same way regardless of the viewing direction for a given pixel. This implies that we’re using a simple Lambertian diffuse BRDF, since it lacks any sort of view-dependence. Recall that we compute the outgoing radiance for a single point using the following integral:

Lo(x, O) = ∫Ω f(i, O) Li(x, i) cos(θi) dωi

If we substitute the standard diffuse BRDF of

f(i, O) = Cdiffuse / π

for our BRDF (where Cdiffuse is the diffuse albedo of the surface), then we get the following:

Lo(x, O) = (Cdiffuse / π) ∫Ω Li(x, i) cos(θi) dωi

On the right side we see that we can pull the constant terms out of the integral (the constant term is actually the entire BRDF!), and what we’re left with lines up nicely with how we handle lightmaps: the expensive integral part is pre-computed per-texel, and then the constant term is applied at runtime per-pixel. The “integral part” is actually computing the incident irradiance, which lets us finally identify the quantity being stored in the lightmap: it’s irradiance! In practice, however, most games would not apply the 1 / π term at runtime, since it would have been impractical to do so. Instead, let’s assume that the 1 / π was “baked” into the lightmap, since it’s constant for all surfaces (unlike the diffuse albedo, which we consider to be spatially varying). In that case, we’re actually storing a reflectance value that takes the BRDF into account. So if we wanted to be precise, we would say that it contains “the diffuse reflectance of a surface with Cdiffuse = 1.0”, AKA the maximum possible outgoing radiance for a surface with a diffuse BRDF.

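To make that split concrete, here’s a minimal sketch (in Python, with illustrative function names that are not from the article) of the two halves: a Monte Carlo estimate of the hemispherical integral at bake time, and the single multiply that’s left for runtime.

```python
import math
import random

def sample_hemisphere(n_samples, rng):
    """Uniformly sample unit directions on the upper hemisphere (z >= 0)."""
    dirs = []
    for _ in range(n_samples):
        z = rng.random()                  # for a uniform sphere, z is uniform
        phi = 2.0 * math.pi * rng.random()
        r = math.sqrt(max(0.0, 1.0 - z * z))
        dirs.append((r * math.cos(phi), r * math.sin(phi), z))
    return dirs

def bake_texel(incoming_radiance, n_samples=100_000, seed=0):
    """Monte Carlo estimate of (1/pi) * Integrate(Li * cos(theta)) over the
    hemisphere around the normal (+z here). The 1/pi -- the whole Lambertian
    BRDF except the albedo -- is folded into the bake, so the stored value is
    the outgoing radiance a surface with Cdiffuse = 1.0 would have."""
    rng = random.Random(seed)
    pdf = 1.0 / (2.0 * math.pi)           # uniform hemisphere pdf
    total = 0.0
    for d in sample_hemisphere(n_samples, rng):
        total += incoming_radiance(d) * d[2] / pdf   # d[2] == cos(theta)
    return (total / n_samples) / math.pi

def shade(lightmap_value, diffuse_albedo):
    """The runtime half: one multiply per pixel, like the old
    fixed-function texture stages."""
    return lightmap_value * diffuse_albedo

# A constant "sky" of radiance 1.0 has irradiance pi, so the baked value
# should come out near pi / pi = 1.0 and shading simply returns the albedo.
baked = bake_texel(lambda d: 1.0)
result = shade(baked, 0.5)
```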

Light Map: Meet Normal Map

One of the key concepts of lightmapping is the idea of reconstructing the final surface appearance using data that’s stored at different rates in the spatial domain. Or in simpler words, we store lightmaps using one texel density while combining them with albedo maps that have a different (usually higher) density. This lets us retain the appearance of high-frequency details without actually computing irradiance integrals per-pixel. But what if we want to take this concept a step further? What if we also want the irradiance itself to vary in response to texture maps, and not just the diffuse albedo? By the early 2000’s normal maps were starting to see common use for this purpose, however they were generally only used when computing the contribution from punctual light sources. Normal maps were no help with light maps that only stored a single (scaled) irradiance value, which meant that pure ambient lighting would look very flat compared to areas using dynamic lighting:


Areas in direct lighting (on the right) have a varying appearance due to a normal map, but areas in shadow (on the left) have no variation due to being lit by a baked lightmap containing only a single irradiance value.


To make lightmaps work with normal mapping, we need to stop storing a single value and instead somehow store a distribution of irradiance values for every texel. Normal maps contain a range of normal directions, where those directions are generally restricted to the hemisphere around a point’s surface normal. So if we want our lightmap to store irradiance values for all possible normal map values, then it must contain a distribution of irradiance that’s defined for that same hemisphere. One of the earliest and simplest examples of such a distribution was used by Half-Life 2[1], and was referred to as Radiosity Normal Mapping[2]:


Image from “Shading in Valve’s Source Engine”, SIGGRAPH 2006

Valve essentially modified their lightmap baker to compute 3 values instead of 1, with each value computed by projecting the irradiance signal onto one of the corresponding orthogonal basis vectors in the above image. At runtime, the irradiance value used for shading would be computed by blending the 3 lightmap values based on the cosine of the angle between the normal map direction and the 3 basis directions (which is cheaply computed using a dot product). This allowed them to effectively vary the irradiance based on the normal map direction, thus avoiding the “flat ambient” problem described above.

Translator’s note: the projection works because irradiance is a function of direction, so it can be projected onto directional basis functions like any other spherical signal. The three-way cosine blend is only an approximation of the true hemispherical distribution, though, so some energy and directionality is inevitably lost relative to ground truth.
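A small sketch of the runtime blend described above. The three tangent-space basis directions are the published HL2 constants; the weighting shown (normalized clamped cosine) is the simplest form of the blend the text describes, not necessarily Valve’s exact shader.

```python
import math

# The three orthonormal Half-Life 2 basis directions in tangent space,
# where +z is the geometric surface normal.
HL2_BASIS = [
    (-1.0 / math.sqrt(6.0),  1.0 / math.sqrt(2.0), 1.0 / math.sqrt(3.0)),
    (-1.0 / math.sqrt(6.0), -1.0 / math.sqrt(2.0), 1.0 / math.sqrt(3.0)),
    ( math.sqrt(2.0 / 3.0),  0.0,                  1.0 / math.sqrt(3.0)),
]

def hl2_irradiance(lightmap_values, normal_ts):
    """Blend the 3 baked lightmap values by the clamped cosine between the
    normal-map direction and each basis direction (a dot product each),
    normalized so the weights sum to 1."""
    weights = [max(0.0, sum(n * b for n, b in zip(normal_ts, basis)))
               for basis in HL2_BASIS]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, lightmap_values)) / total

# A flat tangent-space normal (0, 0, 1) makes the same angle with all three
# basis directions, so it returns the average of the three baked values.
flat = hl2_irradiance([1.0, 2.0, 3.0], (0.0, 0.0, 1.0))
```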

While this worked for their static geometry, there still remained the issue of applying pre-computed lighting to dynamic objects and characters. Some early games (such as the original Quake) used tricks like sampling the lightmap value at a character’s feet, and using that value to compute ambient lighting for the entire mesh. Other games didn’t even do that much, and would just apply dynamic lights combined with a global ambient term. Valve decided to take a more sophisticated approach that extended their hemispherical lightmap basis into a full spherical basis formed by 6 orthogonal basis vectors:


Image from “Shading in Valve’s Source Engine”, SIGGRAPH 2006

The basis vectors coincided with the 6 face directions of a unit cube, which led Valve to call this basis the “Ambient Cube”. By projecting irradiance in all directions around a point in space (instead of a hemisphere surrounding a surface normal) onto their basis functions, a dynamic mesh could sample irradiance for any normal direction and use it to compute diffuse lighting. This type of representation is often referred to as a lighting probe, or often just “probe” for short.

Translator’s note: projecting onto opposite directions does not simply produce negated values. Each face of the Ambient Cube stores the irradiance for the hemisphere surrounding that face’s direction, so the +X and −X values (for example) integrate different halves of the sphere and are independent of each other.
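The ambient cube lookup can be sketched in a few lines. The squared-normal-component weighting below follows the Source engine course notes; the six-value ordering is an assumption for this example.

```python
def ambient_cube(cube, n):
    """Evaluate an ambient cube for unit normal n = (nx, ny, nz).
    'cube' holds 6 irradiance values ordered [+x, -x, +y, -y, +z, -z].
    Each axis contributes the face the normal points toward, weighted by
    the squared normal component, so the weights always sum to 1."""
    nx, ny, nz = n
    return (nx * nx * (cube[0] if nx >= 0.0 else cube[1]) +
            ny * ny * (cube[2] if ny >= 0.0 else cube[3]) +
            nz * nz * (cube[4] if nz >= 0.0 else cube[5]))

# Normals aligned with an axis return that face's value exactly.
up = ambient_cube([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], (0.0, 0.0, 1.0))
down = ambient_cube([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], (0.0, 0.0, -1.0))
```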

Going Specular

With Valve’s basis we can combine normal maps and light maps to get diffuse lighting that can vary in response to high-frequency normal maps. So what’s next? For added realism we would ideally like to support more complex BRDF’s, including view-dependent specular BRDF’s. Half-Life 2 handled environment specular by pre-generating cubemaps at hand-placed probe locations, which is still a common approach used by modern games (albeit with the addition of pre-filtering[3] used to approximate the response from a microfacet BRDF). However the large memory footprint of cubemaps limits the practical density of specular probes, which can naturally lead to issues caused by incorrect parallax or disocclusion.


A combination of incorrect parallax and disocclusion when using a pre-filtered environment as a source for environment specular. Notice the bright edges on the sphere, which are actually caused by the sphere reflecting itself!

With that in mind it would be nice to be able to get some sort of specular response out of our lightmaps, even if only for a subset of materials. But if that is our goal, then our approach of storing an irradiance distribution starts to become a hindrance. Recall from earlier that with a diffuse BRDF we were able to completely pull the BRDF out of the irradiance integral, since the Lambertian diffuse BRDF is just a constant term. This is no longer the case even with a simple specular BRDF, whose value varies depending on both the viewing direction as well as the incident lighting direction.


If you’re working with the Half-Life 2 basis (or something similar), a tempting option might be to compute a specular term as if the 3 basis directions were directional lights. If you think about what this means, it’s basically what you get if you decide to say “screw it” and pull the specular BRDF out of the irradiance integral. So instead of Integrate(BRDF * Lighting * cos(theta)), you’re doing BRDF * Integrate(Lighting * cos(theta)). This will definitely give you something, and it’s perhaps a lot better than nothing. But you’ll also effectively lose out on a ton of your specular response, since you’ll only get specular when your viewing direction appropriately lines up with your basis directions according to the BRDF slice. To show you what I mean by this, here’s a comparison:


The top image shows a path-traced rendering of a green wall being lit by direct sun lighting. The middle image shows the indirect specular component of the top image, with exposure increased by 4x. The bottom image shows the resulting specular from treating the HL2 basis directions as directional lights.

Hopefully these images clearly show the problem that I’m describing. In the bottom image, you get specular reflections that look just like they came from a few point lights, since that’s effectively what you’re simulating. Meanwhile in the middle image with proper environment reflections, you can see that the entire green wall effectively acts as an area light, and you get very broad specular reflections across the entire floor. In general the problem tends to be less noticeable, though, as roughness increases, since higher roughness naturally results in broader, less-defined reflections that are harder to notice.


Let’s Try Spherical Harmonics

If we want to do better, we must instead find a way to store a radiance distribution and then efficiently integrate it against our BRDF. It’s at this point that we turn to spherical harmonics. Spherical harmonics (SH for short) have become a popular tool for real-time graphics, typically as a way to store an approximation of indirect lighting at discrete probe locations. I’m not going to go into the full specifics of SH since that could easily fill an entire article[4] on its own. If you have no experience with SH, the key thing to know about them is that they basically let you approximate a function defined on a sphere using a handful of coefficients (typically either 4 or 9 floats per RGB channel). It’s sort-of as if you had a compact cubemap, where you can take a direction vector and get back a value associated with that direction. The big catch is that you can only represent very low-frequency (fuzzy) signals with lower-order SH, which can limit what sort of things you can do with it. You can project detailed, high-frequency signals onto SH if you want to, but the resulting projection will be very blurry. Here’s an example showing what an HDR environment map looks like projected onto L2 SH, which requires 27 coefficients for RGB:

Translator’s note: “L2” means the SH series is truncated at band l = 2, which gives 9 coefficients per color channel (27 for RGB).


The top image is an HDR environment map containing incoming radiance values about a sphere, while the bottom image shows the result of projecting that environment onto L2 spherical harmonics.

In the case of irradiance, SH can work pretty well since it’s naturally low-frequency. The integration of incoming radiance against the cosine term effectively acts as a low-pass filter, which makes it a suitable candidate for approximation with SH. So if we project irradiance onto SH for every probe location or lightmap texel, we can now do an SH “lookup” (which is basically a few computations followed by a dot product with the coefficients) to get the irradiance in any direction on the sphere. This means we can get spatial variation from albedo and normal maps just like with the HL2 basis!


It also turns out that SH is pretty useful for computing irradiance from input radiance, since we can do it really cheaply. In fact, it’s so cheap that it can be done at runtime by folding it into the SH lookup process. The reason it’s so cheap is because SH is effectively a frequency-domain representation of the signal, and when you’re in the frequency domain convolutions can be done with simple multiplication. In the spatial domain, convolution with a cubemap is an N^2 operation involving many samples from an input radiance cubemap. If you’re interested in the full details, the process was described in Ravi Ramamoorthi’s seminal paper[5] from 2001, with derivations provided in another article[6].

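Here’s a small sketch of that frequency-domain convolution, using the standard real L2 SH basis constants and the per-band clamped-cosine factors from Ramamoorthi’s paper[5] (A0 = π, A1 = 2π/3, A2 = π/4). The function names are illustrative, not from the article.

```python
import math

def sh_basis_l2(d):
    """The 9 real spherical harmonics basis functions (bands 0-2),
    evaluated for a unit direction d = (x, y, z)."""
    x, y, z = d
    return [
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z,
        0.546274 * (x * x - y * y),
    ]

# Per-band scale factors for convolving with the clamped-cosine kernel.
COS_KERNEL = [math.pi] + [2.0 * math.pi / 3.0] * 3 + [math.pi / 4.0] * 5

def irradiance_from_radiance_sh(radiance_sh, normal):
    """Convolution in the frequency domain is per-coefficient multiplication,
    so turning radiance SH into irradiance and evaluating it in the normal
    direction is just a scale plus a 9-element dot product."""
    return sum(a * c * y for a, c, y in
               zip(COS_KERNEL, radiance_sh, sh_basis_l2(normal)))

# Sanity check: constant unit radiance projects to a single DC coefficient
# of 4*pi*Y00, and its irradiance should come out as pi in every direction.
dc = 4.0 * math.pi * 0.282095
e = irradiance_from_radiance_sh([dc] + [0.0] * 8, (0.0, 0.0, 1.0))
```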

The Stanford Bunny model being lit with diffuse lighting from an L2 spherical harmonics probe

So we’ve established that SH works for approximating irradiance, and that we can convert from radiance to irradiance at runtime. But what does this have to do with specular? By storing an approximation of radiance instead of irradiance in our probes or lightmaps (albeit, a very blurry version of radiance), we now have the signal that we need to integrate our specular BRDF against in order to produce specular reflections. All we need is an SH representation of our BRDF, and we’re a dot product away from environment specular! The only problem we have to solve is how to actually get an SH representation of our BRDF.


Unfortunately a microfacet specular BRDF is quite a bit more complicated than a Lambertian diffuse BRDF, which makes our lives more difficult. For diffuse lighting we only needed to worry about the cosine lobe, which has the same shape regardless of the material or viewing direction. However a specular lobe will vary in shape and intensity depending on the viewing direction, material roughness, and the fresnel term at zero incidence (AKA F0). If all else fails, we can always use monte-carlo techniques to pre-compute the coefficients and store the result in a lookup texture. At first it may seem like we need to parameterize our lookup table on 4 terms, since the viewing direction is two-dimensional.


However we can drop a dimension if we follow in the footsteps[7] of the intrepid engineers at Bungie, who used a neat trick for their SH specular implementation in Halo 3[8]. The key insight that they shared was that the specular lobe shape doesn’t actually change as the viewer rotates around the local Z axis of the shading point (AKA the surface normal). It actually only changes based on the viewing angle, which is the angle between the view vector and the local Z axis of the surface. If we exploit this knowledge, we can pre-compute the coefficients for the set of possible viewing directions that are aligned with the local X axis. Then at runtime, we can rotate the coefficients so that the resulting lobe lines up with the actual viewing direction. Here’s an image to show you what I mean:


Rotating a specular lobe from the X axis to its actual location based on the viewing direction, which is helpful for pre-computing the SH coefficients into a lookup texture

So in this image the checkerboard is the surface being shaded, and the red, green and blue arrows are the local X, Y, and Z axes of the surface. The transparent lobe represents the specular lobe that we precomputed for a viewpoint that’s aligned with the X axis, but has the same viewing angle. The blue arrow shows how we can rotate the specular lobe from its original position to the actual position of the lobe based on the current viewing position, giving us the desired specular response. Here’s a comparison showing what it looks like in action:

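The rotation step of this trick is cheap because a rotation about the local Z axis mixes only the (m, −m) coefficient pairs within each SH band. Here’s a sketch of just that piece (a full implementation would also need the precomputed lobe coefficients, parameterized by viewing angle and roughness); the coefficient ordering is an assumption stated in the comments.

```python
import math

def rotate_sh_z(c, phi):
    """Rotate a set of 9 L2 SH coefficients by angle phi about the z axis.
    Assumed coefficient ordering: [Y00, Y1-1(~y), Y10(~z), Y11(~x),
    Y2-2(~xy), Y2-1(~yz), Y20, Y21(~xz), Y22(~x^2-y^2)].
    Under a z rotation each (m, -m) pair rotates by m*phi and the m = 0
    terms are untouched -- far cheaper than a general SH rotation."""
    out = list(c)
    def rotate_pair(ip, im, angle):
        s, co = math.sin(angle), math.cos(angle)
        out[ip] = co * c[ip] - s * c[im]
        out[im] = s * c[ip] + co * c[im]
    rotate_pair(3, 1, phi)        # band 1: x / y pair
    rotate_pair(7, 5, phi)        # band 2: xz / yz pair
    rotate_pair(8, 4, 2.0 * phi)  # band 2: (x^2 - y^2) / xy rotates twice as fast
    return out

# A lobe projected along +x, rotated 90 degrees about z, ends up along +y.
lobe_x = [0.0] * 9
lobe_x[3] = 1.0
lobe_y = rotate_sh_z(lobe_x, math.pi / 2.0)
```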

The top image is a scene rendered with a path tracer. The middle image shows the indirect specular as rendered by a path tracer, with exposure increased 4x. The bottom image shows the indirect specular term computed using an L2 SH lightmap, also with exposure increased by 4x.

Not too bad, eh? Or at least…not too bad as long as we’re willing to store 27 coefficients per lightmap texel, and we’re only concerned with rough materials. The comparison image used a GGX α parameter of 0.39, which is fairly rough.


One common issue with SH is a phenomenon known as “ringing”, which is described in Peter-Pike Sloan’s Stupid Spherical Harmonics Tricks[9]. Ringing artifacts tend to show up when you have a very intense light source on one side of the sphere. When this happens, the SH projection will naturally result in negative lobes on the opposite side of the sphere, which can result in very low (or even negative!) values when evaluated. It’s generally not too much of an issue for 2D lightmaps, since lightmaps are only concerned with the incoming radiance for a hemisphere surrounding the surface normal. However ringing artifacts often show up in probes, which store radiance or irradiance about the entire sphere. The solution suggested by Peter-Pike Sloan is to apply a windowing function to the SH coefficients, which will filter out the ringing artifacts. However the windowing will also introduce additional blurring, which may remove high-frequency components from the original signal being projected. The following image shows how ringing artifacts manifest when using SH to compute irradiance from an environment with a bright area light, and also shows how windowing affects the final result:


A sphere with a Lambertian diffuse BRDF being lit by a lighting environment with a strong area light source. The left image shows the ground-truth result of using monte-carlo integration. The middle image shows the result of projecting radiance onto L2 SH, and then computing irradiance. The right image shows the result of applying a windowing function to the L2 SH coefficients before computing irradiance.

Translator’s note: in the middle image, note the anomalously bright region at the far end of the shadow; this is a typical ringing artifact.
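Windowing itself is just a per-band attenuation of the coefficients. The raised-cosine (Hann) window below is one possible choice among the kernels discussed in Sloan’s paper, and the parameter w is an illustrative tuning knob, not a value from the article.

```python
import math

# Band index for each of the 9 L2 SH coefficients.
SH_BANDS = [0, 1, 1, 1, 2, 2, 2, 2, 2]

def window_sh(coeffs, w):
    """Attenuate each SH band l by a raised-cosine (Hann) window
    h(l) = 0.5 * (1 + cos(pi * l / w)). Band 0 is untouched; higher
    bands are damped more, trading ringing for extra blur. Smaller w
    means stronger filtering."""
    return [c * 0.5 * (1.0 + math.cos(math.pi * l / w))
            for c, l in zip(coeffs, SH_BANDS)]

# With w = 3: h(0) = 1.0, h(1) = 0.75, h(2) = 0.25.
windowed = window_sh([1.0] * 9, 3.0)
```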


References

[1] Shading in Valve’s Source Engine (SIGGRAPH 2006) – http://www.valvesoftware.com/publications/2006/SIGGRAPH06_Course_ShadingInValvesSourceEngine.pdf

[2] Half Life 2 / Valve Source Shading – http://www2.ati.com/developer/gdc/D3DTutorial10_Half-Life2_Shading.pdf

[3] Real Shading in Unreal Engine 4 – http://blog.selfshadow.com/publications/s2013-shading-course/karis/s2013_pbs_epic_notes_v2.pdf

[4] Spherical Harmonic Lighting: The Gritty Details – https://basesandframes.files.wordpress.com/2016/05/spherical_harmonic_lighting_gritty_details_green_2003.pdf

[5] An Efficient Representation for Irradiance Environment Maps – https://cseweb.ucsd.edu/~ravir/papers/envmap/

[6] On the Relationship between Radiance and Irradiance: Determining the illumination from images of a convex Lambertian object – https://cseweb.ucsd.edu/~ravir/papers/invlamb/

[7] The Lighting and Material of Halo 3 (Slides) – https://developer.amd.com/wordpress/media/2012/10/S2008-Chen-Lighting_and_Material_of_Halo3.pdf

[8] The Lighting and Material of Halo 3 (Course Notes) – http://developer.amd.com/wordpress/media/2013/01/Chapter01-Chen-Lighting_and_Material_of_Halo3.pdf

[9] Stupid Spherical Harmonics Tricks – http://www.ppsloan.org/publications/StupidSH36.pdf
