Ray March Volumetric Clouds

Volume Raymarching

The basic concept behind volumetric rendering is to evaluate rays of light as they pass through a volume. This generally means returning an Opacity and a Color for each pixel that intersects the volume. If your volume is an analytical function you can probably calculate the result directly, but if your volume is stored in a texture, you will need to take multiple steps through the volume, looking up the texture at each step. This can be broken down into two parts:

1) Opacity (Light Absorption)

2) Color (Illumination, Scattering)

Opacity Sampling

To generate an opacity for a volume, the density or thickness at each visible point must be known. If the volume is assumed to have a constant density and color, all that is needed is the total length of each ray before it hits an opaque occluder. For simple untextured fog, this is just the Scene Depth which gets remapped using a standard function: D3DFOG_EXP.  This function is defined as:

F = 1 / e^(t * d).

Where t is the distance traveled through some media and d is the density of the media. This is how cheap unlit fog has been calculated in games for quite some time. This comes from the Beer-Lambert law which defines transmittance through a volume of particles as:

Transmittance = e ^ (-t * d).

These may look similar, because they are exactly the same thing. Note that x^(-y) is the same as 1/(x^y), so the Exponential Fog function is really just an applied version of the Beer-Lambert law. To understand how these functions apply to volumetrics, we can point to an equation from an old paper by Drebin [1]. It describes how much light will exit a voxel in the ray direction as it passes through it. It is designed to return an accurate color for a volume having a unique color at every voxel:

Cout(v) = Cin(v) * (1 - Opacity(x)) + Color(x) * Opacity(x)

Cin(v) is the light color before it passes through the voxel, Cout(v) is the color after passing through it. This states that as a ray of light passes through a volume, at every voxel, the color of the light will be multiplied by the inverse opacity of the current voxel to simulate absorption, and the color of the current voxel times the opacity of the current voxel will be added to simulate scattering. This equation can be used as is, as long as the volume is traced back to front. If we track a variable for Transmittance that is initialized to 1, the volume can be traced in either direction. Transmittance can be thought of as the inverse of opacity.
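
Written as a minimal front-to-back sketch (VoxelColor and VoxelOpacity here are hypothetical per-voxel lookups, not part of the final shader):

// Front-to-back compositing using a transmittance accumulator
float transmittance = 1;
float3 accumcolor = 0;
for (int i = 0; i < NumVoxels; i++)
{
    accumcolor += VoxelColor(i) * VoxelOpacity(i) * transmittance; // scattering
    transmittance *= 1 - VoxelOpacity(i);                          // absorption
}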

This is where Exp, or the e^x function, comes into play. It is similar to compound interest on a bank account: the more often interest is compounded, the more money is earned, but only up to a certain point, and that point is defined by e (compounding n times at a rate of 1/n converges towards e as n grows). The same effect is found when comparing the results of integrating density over a volume. The more steps that are taken, the more the final result will converge on a solution defined by the function Exp, or e raised to some power. This is where the Beer-Lambert Law as well as the D3DFOG_EXP function come from.

The math we have explored so far gives us some hints about how to proceed to build a custom volume renderer. We know we need to figure out the thickness of the volume at each point. This thickness value can then be used with an exponential density function to approximate how much light the volume would block.

To sample the density of our volume, several steps are taken along each ray passing through the volume and the value of the volume texture is read at each point. This example shows an imagined volume texture of a sphere. The camera rays show the result of sampling the volume at regular intervals to measure the distance traveled within the media:

If the ray is inside the media during a step, the step length is added to an accumulation variable. If the ray is outside of the media during a step, nothing is accumulated during that step. At the end of this, for each pixel, we have a value describing how far the camera ray traveled while inside of the media in the volume texture. Because the distance is also multiplied by the opacity at each point, the final distance returned represents Linear Density.

That distance is represented in the above example as the yellow line between the yellow dots. Note that when low step counts are used like in the above example, the distances may not match the actual content very well and slicing artifacts become visible. These kinds of artifacts and solutions will be described in more detail further on.

At this point we are just accumulating linear values and returning a linear distance at the end. In order to make this look volumetric, we use an exponential function to remap the final value. The standard Direct3D exponential fog function D3DFOG_EXP mentioned above works well for this.
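
A minimal sketch of that remap, assuming a user-tweakable Density parameter:

float transmittance = exp(-accumdist * Density); // D3DFOG_EXP-style falloff
float FinalOpacity = 1 - transmittance;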

Example Opacity-Only Ray March

It is possible to do all of the ray marching code in the custom node, but that requires nested function calls, which requires multiple custom nodes. Custom nodes get auto-named by the translator, which means you have to call them assuming you know the order the compiler will add them (i.e., CustomExpression0, 1, 2...). The compiler can end up renaming the functions just because new ones were added or because of changes to how they are hooked up between various material pins.

To make this part a bit easier, I have added a PseudoVolumeTexture function into the common.usf file. Simply download and overwrite the common.usf located in Engine\Shaders. You can do this with the editor running and it should work immediately. This is basically just repeated code from the previous post on pseudo volume textures. Having this function greatly simplifies the raymarching code and it can just be swapped for a standard 3D texture sample when a future version of UE4 adds support. If you do not have one of the versions below, download one of them and just copy the last 2 functions into your version. I suggest using 4.13.2 for now over 4.14 until the 4.14.1 version is released. I will go into that at the very end.

common.usf (UE4.14):

https://www.dropbox.com/s/1ee9630r6fqbese/Common.usf?dl=0

common.usf (UE4.13.2):

https://www.dropbox.com/s/bagvoru81yc3aij/Common.usf?dl=0

Example Volume Texture of Smoke Ball:

https://www.dropbox.com/s/9h98z1mlhp1yw55/T_Volume_Wisp_01.tga?dl=0

RayMarching Code:

float numFrames = XYFrames * XYFrames;
float accumdist = 0;
float3 localcamvec = normalize( mul(Parameters.CameraVector, Primitive.WorldToLocal) );
float StepSize = 1 / MaxSteps;

for (int i = 0; i < MaxSteps; i++)
{
    // read the density stored in the pseudo volume texture at the current position
    float cursample = PseudoVolumeTexture(Tex, TexSampler, saturate(CurPos), XYFrames, numFrames).r;
    // accumulate linear density, weighted by the step length
    accumdist += cursample * StepSize;
    // advance the ray into the volume, away from the camera
    CurPos += -localcamvec * StepSize;
}

return accumdist;

This simple code advances a ray through a specified volume texture over a distance of 0-1 in texture space and returns the linear density of the particulates traveled through. It is by no means complete and is missing crucial details. Some bits will be added to the code later and some of the details will be provided in the form of material nodes.

This allows you to control the number of steps and frame layout you want to use. 

In this simplified example, the node BoundingBoxBased_0-1_UVW is used because it's an easy way to get a local 0-1 starting position. It works with box or sphere meshes, but it is not what we will end up using by the end of this, for reasons that will soon become apparent.

Here is what this should look like if you put it on StaticMesh'/Engine/EditorMeshes/EditorCube.EditorCube' with 64 steps:

A random volumetric puffball, neat! But let's not get too excited yet. With the above 64 steps, the result looks pretty smooth. With 32 steps, strange slicing artifacts appear:

These artifacts betray the box geometry used to render the material. They are a kind of moiré pattern that results from tracing the volume texture starting at exactly the surface of the box intersection. Doing that causes the sampling pattern to continue the box shape, which produces that pattern. By snapping the start positions to view-aligned planes, the artifacts can be reduced.

This is an example of emulating a geometric slicing approach using only the pixel shader. It still has slicing artifacts in motion but they are far less noticeable and do not betray the box geometry which is key. Additional sampling improvements can be had with low step counts by introducing temporal jitter. More on that later. Here is the additional code to align the samples. 

// Plane Alignment
// Get the object scale factor.
// NOTE: This assumes the volume will only be UNIFORMLY scaled. Non-uniform scale would require tons of little changes.
float scale = length( TransformLocalVectorToWorld(Parameters, float3(1.00000000,0.00000000,0.00000000)).xyz);
float worldstepsize = scale * Primitive.LocalObjectBoundsMax.x * 2 / MaxSteps;
float camdist = length( ResolvedView.WorldCameraOrigin - GetObjectWorldPosition(Parameters) );
float planeoffset = GetScreenPosition(Parameters).w / worldstepsize;
float actoroffset = camdist / worldstepsize;
planeoffset = frac( planeoffset - actoroffset );
float3 localcamvec = normalize( mul(Parameters.CameraVector, Primitive.WorldToLocal) );
float3 offsetvec = localcamvec * StepSize * planeoffset;
return float4( offsetvec, planeoffset * worldstepsize );

Notice that both the depth and the actor position are accounted for. That stabilizes the slices relative to the actor, so there is no movement as the camera moves towards or away. I put this into another custom node for now. It will help to keep the setup part of the code separate from the core raymarching code so that other primitives like spheres can be added more easily. This is not a nested custom node since the value is used directly and only once; it is never called specifically by other custom nodes.

The next task is to control the step count more carefully. You may have noticed that the code so far is saturating the ray position to keep it inside the 0-1 space. That means whenever the tracer hits the edge of the box, it continues to waste time checking the volume. It also will never trace the full corner to corner distance of the volume since the trace distance is limited to 1, and the corner to corner distance of the volume is 1.732. This just happens to not be a problem in the example volume so far because the content is roundish. One way to fix this is by checking to see if the ray exits the volume during the loop, but a solution like that is not ideal because it adds to the overhead of the loop and that should be kept as simple as possible. A better solution is to pre-calculate the number of steps that fit.

It helps to use a simple primitive like a box or a sphere so that you can use simple math to determine thickness. While spheres may be the more performant shape due to covering fewer screen pixels, boxes let us display the entire content of volume textures and tend to be more flexible when distorting the volume. For now we will just deal with using a box. Here is how we precalculate the steps for a box. The world->local transforms allow the mesh to move. Note that this actually changes a few things about how we calculate the above plane alignment, so I just rolled the above code into this. Now the function returns the local Ray Entry Position and Thickness directly:

//bring vectors into local space to support object transforms
float3 localcampos = mul(float4( ResolvedView.WorldCameraOrigin, 1.00000000), (Primitive.WorldToLocal)).xyz;
float3 localcamvec = -normalize( mul(Parameters.CameraVector, Primitive.WorldToLocal) );

//make camera position 0-1
localcampos = (localcampos / (Primitive.LocalObjectBoundsMax.x * 2)) + 0.5;

// ray-box (slab) intersection against the 0-1 bounds
float3 invraydir = 1 / localcamvec;
float3 firstintersections = (0 - localcampos) * invraydir;
float3 secondintersections = (1 - localcampos) * invraydir;
float3 closest = min(firstintersections, secondintersections);
float3 furthest = max(firstintersections, secondintersections);
float t0 = max(closest.x, max(closest.y, closest.z));    // entry distance
float t1 = min(furthest.x, min(furthest.y, furthest.z)); // exit distance

// snap the trace start to view aligned planes
float planeoffset = 1 - frac( ( t0 - length(localcampos - 0.5) ) * MaxSteps );
t0 += (planeoffset / MaxSteps) * PlaneAlignment;
t0 = max(0, t0);

float boxthickness = max(0, t1 - t0);
float3 entrypos = localcampos + (max(0, t0) * localcamvec);
return float4( entrypos, boxthickness );

The node marked "Ray Entry" hooks to the CurPos input on the main ray marching node. The parameter Plane Alignment allows toggling the alignment on and off.

Note that parts of the code now assume that you are using a Box static mesh that has its pivot at the center of the box and not on the floor of the box.

Sorting

So far we have been using the local position of the geometry to easily start a trace from the outside, but that won't let the camera go inside the volume. To support going inside, we can instead use the Ray Entry Position output from the already solved box intersection above, and then flip the faces of the polygons on the box geometry so they face inwards.  This works because we know where the ray would have intersected the outside of the volume and we also know how long the ray will travel through the volume.

Flipping the faces and using the intersection will allow the camera to go inside the volume but it will not make objects sort correctly. Any object inside the cube will appear to draw completely on top of the volume. To solve that, we just need to take the localized scene depth into account when calculating the ray distance within the box. This requires a few new lines to be added to the setup function:

float scale = length( TransformLocalVectorToWorld(Parameters, float3(1.00000000,0.00000000,0.00000000)).xyz);
float localscenedepth = CalcSceneDepth(ScreenAlignedPosition(GetScreenPosition(Parameters)));
float3 camerafwd = mul(float3(0.00000000,0.00000000,1.00000000), ResolvedView.ViewToTranslatedWorld);
localscenedepth /= (Primitive.LocalObjectBoundsMax.x * 2 * scale);
localscenedepth /= abs( dot( camerafwd, Parameters.CameraVector ) );

//this line goes just before the line: t0 = max(0, t0);
t1 = min(t1, localscenedepth);

Now, in the material settings, Disable Depth Test should be set to true in order to gain control over how the material blends with the scene. Sorting with other translucent objects will be done on a per-object basis and we won't have much control over that, but at least we can solve sorting with opaque objects. While in the material settings, also change the blend mode to AlphaComposite to avoid the edge blending artifacts that occur with translucency. Also make sure the material is set to unlit.

Now we can generate accurate sorting with opaque geometry by adding one Scene Depth lookup. This automatically causes the ray marcher to return the correct opacity because we are stopping the ray from accumulating beyond the scene depth. There is still one artifact to fix though. Because we are stopping the ray march using whole step sizes, we will see stair-step artifacts where opaque geometry intersects the volume:

Fixing those slicing artifacts requires just one additional step. We track how many steps would have fit up to the scene depth and then take one final step sized to fit the remainder. That assures we end up taking a final sample right at the depth location, which smooths out those seams. In order to keep the main tracing loop as simple as possible, we do this outside of the main loop as an additional density/shadow pass.
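
A sketch of how that remainder can be derived in the setup node (the exact names and placement are assumptions; the FinalStep value is consumed by the updated shadow code later in this post):

float stepcount = boxthickness * MaxSteps; // fractional number of steps that fit before the depth/exit
float FinalStep = frac(stepcount);         // leftover fraction of a step, used for the final sample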

The resulting blend with opaque objects appears accurate as objects move and the view direction changes:

https://youtu.be/0kzmFcmV3Ag

So far we have a fairly functional density-only ray marcher. As you can see, the core ray marching part of the shader is probably the simplest part. Handling the tracing behavior for different primitives, along with the sampling and sorting problems, are the tricky bits.

Light Sampling

To render convincingly lit volumes, the behavior of light transport must be modeled. As rays of light pass through a volume, a certain amount of that light will be absorbed and scattered by the particulates in the volume. Absorption is how much light energy is lost to the volume and scattering is how much light is reflected out. The ratio of Absorption (A) to Scattering (S) determines the diffuse brightness of the particulates [shopf2007].

In this case, we are only going to care about one kind of scattering for simplicity and performance reasons: Out-Scattering. That is basically how much of the light that hits the volume will be reflected back out isotropically or diffusely. In-Scattering refers to light bouncing around from within the volume, which is generally too expensive to do in real time, but it can be decently approximated by blurring the results of the Out-Scattering. To know the out-scattering at a given point, it must be known how much light energy was lost due to absorption as the photons reached that point from the light source, as well as how much energy will then be lost heading towards the eye back out of the volume.

There are a number of techniques to calculate these values, but this post will deal primarily with the brute force method of performing a nested ray march towards the light from each density sample. This method is quite expensive as it means the cost of the shader will be DensitySteps * ShadowSteps, or N*M. It is also by far the easiest and most flexible to implement.

The above example shows nested shadow samples being traced from each density sample originating from a single camera ray. Note that only density samples that are inside of the volume media have to perform the shadow samples, and the shadow loop can quit early if a ray reaches the volume border, or if the shadow density exceeds a threshold where close to full absorption has occurred. These few things can reduce the drastic N * M situation a bit.

At each sample, the density is taken and used to determine how much light that sample can scatter back out. That also affects how much transmittance will decrease for the next iteration. The shader then shoots rays towards the light and sees how much of the potential light energy made it to that point. Thus, the visible light transmitted from the point to the camera is controlled by the total photon path length through the volume and the scattering coefficient of the point itself. This process can still be described by the prior formula from Drebin, 1988 [1]:

Cout(v) = Cin(v) * (1 - Opacity(x)) + Color(x) * Opacity(x)

But the above formula only describes a single light path to the camera. To be able to propagate light from out-scattering as well as calculate volume opacity, we need to recreate that iterative ray sample at each sample location, towards the light. Let's define a few basic functions which describe our lighting calculations.

Linear Density is defined at each point x along the ray as simply Opacity * Density Parameter. The parameter allows user tweaking of the density but will be dropped from the equations for simplicity from here on out, as it could also be pre-multiplied into the volume opacity.

Linear Density is accumulated along a ray from point x to point x' like this:
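
LinearDensity(x, x') = ∫[x → x'] Opacity(s) ds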

Thus, Transmittance over the length of a ray from point x to x' is defined as:
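
Transmittance(x, x') = e^( -LinearDensity(x, x') )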

This is how we calculated the density for the density-only ray march started above. To add lighting, we now need to account for the light scattering and absorption at each point along the ray. This involves nesting a bunch of these terms. At a point x within the volume, the amount of out-scattering that makes it to that point from a light in direction w is equal to:
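
OutScattering(x, w) = e^( -LinearDensity(x, l) )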

Where w is the light direction and l is a point outside the volume towards the negative light direction. The term -LinearDensity(x,l) represents the linear density accumulated from point x towards the light until the volume boundary is reached, which represents the amount of particulate that would absorb light. Note that this is still only the value for the amount of light visible at that point; it does not yet account for the fraction of that light absorbed based on the opacity of the sample. For that, the OutScattering term gets multiplied by Opacity(x). It also does not account for further transmission loss as that light exits back out of the volume. To account for that loss, the transmittance from the camera to the point x must be determined.

We can make a modified function TotalOutScattering(x', w) which describes how much out-scattering is visible along a ray w from point x  to point x', rather than just describing it for a single point:
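
TotalOutScattering(x', w) = ∫[x → x'] OS(s, w) * T(s, x') ds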

Note that OS and T are short for the OutScattering and Transmittance terms above. OS should also be multiplied by Opacity(s), which I forgot to add but may recreate the expression later. This function will return the total scattering from all points along a view ray through the volume. It is actually a few nested integrals, which is too nasty to bother writing out in the expanded form, so we might as well start dealing with the code itself. Terms like OutScattering are implied to be multiplied by light color and diffuse color at the beginning.

Traditionally you may see this equation written as Radiance (L) in other papers but I have excluded that because for radiance you also account for the amount of background color transmitted into the volume which is basically just SceneColor * FinalOpacity. We won't add that into the math here for reasons that I somewhat arbitrarily decided upon:

1) We aren't going to blend the background color like that. Instead we will just use the AlphaComposite blend mode and plug in our opacity.

2) We aren't actually going to be blurring or scattering the background color which is why I am not going to bother talking about that term too much. For much more detail on the full math, see Shopf [2]. Much of the math on this page is based on equations from that page but I have attempted to make them more artist friendly by using real words instead of greek symbols and explaining the relationships in more simplified ways.

Example Shadowed Volume Code

float numFrames = XYFrames * XYFrames;
float curdensity = 0;
float transmittance = 1;
float3 localcamvec = normalize( mul(Parameters.CameraVector, Primitive.WorldToLocal) ) * StepSize;
float shadowstepsize = 1 / ShadowSteps;
// pre-multiply the step sizes outside of the loops
LightVector *= shadowstepsize;
ShadowDensity *= shadowstepsize;
Density *= StepSize;
float3 lightenergy = 0;

for (int i = 0; i < MaxSteps; i++)
{
    float cursample = PseudoVolumeTexture(Tex, TexSampler, saturate(CurPos), XYFrames, numFrames).r;

    //Sample Light Absorption and Scattering
    if( cursample > 0.001 )
    {
        float3 lpos = CurPos;
        float shadowdist = 0;

        // nested march towards the light, accumulating linear density
        for (int s = 0; s < ShadowSteps; s++)
        {
            lpos += LightVector;
            float lsample = PseudoVolumeTexture(Tex, TexSampler, saturate(lpos), XYFrames, numFrames).r;
            shadowdist += lsample;
        }

        curdensity = saturate(cursample * Density);
        float shadowterm = exp(-shadowdist * ShadowDensity);
        float3 absorbedlight = shadowterm * curdensity;
        lightenergy += absorbedlight * transmittance;
        transmittance *= 1 - curdensity;
    }

    CurPos -= localcamvec;
}

return float4( lightenergy, transmittance );

As you can see, just adding basic shadowing adds quite a lot of complexity to the simple density only tracer we started with.

Notice that in this version, the cameravector and lightvector get pre-multiplied by their respective stepsize in the beginning, outside of the loop. That is because shadow tracing makes the shader much more expensive so we want to move as many operations outside of the loops as possible (especially the inner loop).

In the current form, the shader code above is still very slow. We did add one optimization: the shader only evaluates a voxel if it has an opacity > 0.001. This can potentially save a lot of time if our volume texture has a lot of empty space, but it won't help at all if the whole volume is written to. We need more optimizations to make this shader practical.

The biggest problem with the above version is that it is going to run all shadow steps for all density samples. So if we used something like 64 density steps and 64 shadow steps, that would be 4096 samples. Because our pseudovolume function requires 2 lookups, that means our shader would be doing 8192 texture lookups per pixel! That is pretty bad, but we can optimize it significantly by quitting early if either the ray leaves the volume or full absorption is reached.

The first part can be handled by checking if the ray has left the volume at each shadow iteration. That would be something like:

if(lpos.x > 1 || lpos.x < 0 || lpos.y > 1 || lpos.y < 0 || lpos.z > 1 || lpos.z < 0) break;

While a check like that works, it turns out to be pretty slow since the shadow loop runs so many times. I have also tried precalculating the number of shadow steps before each shadow loop instead, very similar to how I precalculated the number of density iterations for a box shape. Surprisingly that turned out to be the slowest method. The fastest method I have found so far to early-terminate the shadow loop is with this simple box test math:

float3 shadowboxtest = floor( 0.5 + ( abs( 0.5 - lpos ) ) );
float exitshadowbox = shadowboxtest.x + shadowboxtest.y + shadowboxtest.z;
if (exitshadowbox >= 1) break;

The next bit we need to add is early termination based on an absorption threshold. Typically this means you quit the shadow loop once the transmittance is below some small number such as 0.001. The larger this threshold, the more artifacts will appear so this value should be tweaked to be as large as is visually acceptable.

If we wrote the shadow marching loop by just multiplying the light transmittance by the inverse opacity at each point, then we would implicitly know the transmittance at every iteration, and checking for the threshold would be as simple as checking:

if( transmittance < threshold) break;

But notice that we are not actually calculating transmittance during shadow iterations. We are accumulating linear density just like in our first density-only example. This is in an effort to make the shadow loop as cheap as possible, since doing a single add for each shadow accumulation is much cheaper than doing two multiplies and a 1-x which would otherwise be required. This just means we need to use some math to determine our shadow threshold in terms of a distance rather than a transmission value.

To do that, we simply invert the final transmittance term, which is calculated as e^(-t * d). So we want to determine for what value of t transmittance would be less than our threshold. Thankfully this is exactly what the function log(x) does. The default base of log is e; it returns an answer to the question "e raised to what power equals x?". So if we want to know at what value of t the transmittance would be less than 0.001, we can calculate:

DistanceThreshold = -log(0.001) / d;

Assuming the user defined density d = 1,  this would give us a linear accumulation value of 6.907755 needed to reach 0.001 transmittance. We add this to our shader code with the line:

float shadowthresh = -log(ShadowThreshold) / ShadowDensity;

Where ShadowThreshold is a user defined transmittance threshold and ShadowDensity is a user defined shadow density multiplier. This line needs to go after the line that multiplies ShadowDensity by shadowstepsize, above the loops.

Updated Shadow Code:

Adding in the shadow exit and transmittance thresholds, as well as the final partial step evaluation outside of the loop (which also has to perform the same shadow steps) yields this code:

float numFrames = XYFrames * XYFrames;
float accumdist = 0;
float curdensity = 0;
float transmittance = 1;
float3 localcamvec = normalize( mul(Parameters.CameraVector, Primitive.WorldToLocal) ) * StepSize;
float shadowstepsize = 1 / ShadowSteps;
LightVector *= shadowstepsize;
ShadowDensity *= shadowstepsize;
Density *= StepSize;
float3 lightenergy = 0;
// convert the user transmittance threshold to a linear density threshold
float shadowthresh = -log(ShadowThreshold) / ShadowDensity;

for (int i = 0; i < MaxSteps; i++)
{
    float cursample = PseudoVolumeTexture(Tex, TexSampler, saturate(CurPos), XYFrames, numFrames).r;

    //Sample Light Absorption and Scattering
    if( cursample > 0.001 )
    {
        float3 lpos = CurPos;
        float shadowdist = 0;

        for (int s = 0; s < ShadowSteps; s++)
        {
            lpos += LightVector;
            float lsample = PseudoVolumeTexture(Tex, TexSampler, saturate(lpos), XYFrames, numFrames).r;
            // exit early if the shadow ray leaves the 0-1 box or reaches the density threshold
            float3 shadowboxtest = floor( 0.5 + ( abs( 0.5 - lpos ) ) );
            float exitshadowbox = shadowboxtest.x + shadowboxtest.y + shadowboxtest.z;
            shadowdist += lsample;
            if (shadowdist > shadowthresh || exitshadowbox >= 1) break;
        }

        curdensity = saturate(cursample * Density);
        float shadowterm = exp(-shadowdist * ShadowDensity);
        float3 absorbedlight = shadowterm * curdensity;
        lightenergy += absorbedlight * transmittance;
        transmittance *= 1 - curdensity;
    }

    CurPos -= localcamvec;
}

// one final partial step so the last sample lands exactly at the depth/exit position
CurPos += localcamvec * (1 - FinalStep);
float cursample = PseudoVolumeTexture(Tex, TexSampler, saturate(CurPos), XYFrames, numFrames).r;

//Sample Light Absorption and Scattering
if( cursample > 0.001 )
{
    float3 lpos = CurPos;
    float shadowdist = 0;

    for (int s = 0; s < ShadowSteps; s++)
    {
        lpos += LightVector;
        float lsample = PseudoVolumeTexture(Tex, TexSampler, saturate(lpos), XYFrames, numFrames).r;
        float3 shadowboxtest = floor( 0.5 + ( abs( 0.5 - lpos ) ) );
        float exitshadowbox = shadowboxtest.x + shadowboxtest.y + shadowboxtest.z;
        shadowdist += lsample;
        if (shadowdist > shadowthresh || exitshadowbox >= 1) break;
    }

    curdensity = saturate(cursample) * Density;
    float shadowterm = exp(-shadowdist * ShadowDensity);
    float3 absorbedlight = shadowterm * curdensity;
    lightenergy += absorbedlight * transmittance;
    transmittance *= 1 - curdensity;
}

return float4( lightenergy, transmittance );

Now we have a functioning translucent volume ray marcher that can self-shadow from one directional light. The above shadow steps would have to be repeated for each additional light supported. The code can easily support point lights in addition to directional lights by calculating inverse squared falloff in addition to each shadow term, but the vector from CurPos to the light must be calculated at each density sample.
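
As a sketch of what that per-sample change could look like inside the main density loop (LightPosLocal, a light position brought into the 0-1 local texture space, is an assumed input, and the names are hypothetical):

float3 tolight = LightPosLocal - CurPos;                 // per-sample vector to the point light
float lightdistsq = max(dot(tolight, tolight), 0.0001);  // avoid divide by zero
float3 LightVector = normalize(tolight) * shadowstepsize;
// ...run the shadow loop as before, then attenuate by inverse squared falloff:
float3 absorbedlight = exp(-shadowdist * ShadowDensity) * curdensity / lightdistsq;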

Ambient Light

So far we have only been dealing with Out-Scattering contributed from a single light. This generally will not look very good: if the light is fully shadowed, the volume will appear flat in the shadow. Usually some kind of ambient light term is added to address this. There are lots of ways to handle the ambient light. One way is to pre-calculate the ambience inside of the volume texture, like deep shadow maps. The downside to that approach is you won't be able to rotate and instance the volumes, as the ambient light would remain fixed. A realtime approach is to cast a few sparse rays up from each voxel to estimate overhead shadowing. This can be done with one additional offset sample, but the results get better with each additional averaged sample.

Another reason to favor a dynamic ambient term over a prebaked one is if you are planning to procedurally stack multiple volume textures. One example of this is described in the Horizon Zero Dawn cloud paper [3]. In this paper, one volume texture describes the macro shape of unique detail over an entire area and a second tiling volume texture is used to modulate the density of the base volume. An approach like this is very powerful as volume rendering techniques are currently limited by resolution. Applying blend modulation is a great way to create the appearance of more detail, but it means methods that precalculate lighting will not match the new details that arise from the combination of volume textures.
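
As a rough sketch of that kind of blend modulation (DetailTex, DetailTiling and ErosionStrength are assumed inputs, not the exact Horizon Zero Dawn formulation):

float basesample = PseudoVolumeTexture(Tex, TexSampler, saturate(CurPos), XYFrames, numFrames).r;
float detail = PseudoVolumeTexture(DetailTex, TexSampler, frac(CurPos * DetailTiling), XYFrames, numFrames).r;
float cursample = saturate(basesample - detail * ErosionStrength); // the tiling detail volume erodes the base density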

Here is how we take three additional offset samples to estimate overhead ambient occlusion. This can go just after the transmittance multiply in the main loop:

//Sky Lighting
shadowdist = 0;
lpos = CurPos + float3(0, 0, 0.05);
float lsample = PseudoVolumeTexture(Tex, TexSampler, saturate(lpos), XYFrames, numFrames).r;
shadowdist += lsample;
lpos = CurPos + float3(0, 0, 0.1);
lsample = PseudoVolumeTexture(Tex, TexSampler, saturate(lpos), XYFrames, numFrames).r;
shadowdist += lsample;
lpos = CurPos + float3(0, 0, 0.2);
lsample = PseudoVolumeTexture(Tex, TexSampler, saturate(lpos), XYFrames, numFrames).r;
shadowdist += lsample;
//shadowterm = exp(-shadowdist * AmbientDensity);
//absorbedlight = exp(-shadowdist * AmbientDensity) * curdensity;
lightenergy += exp(-shadowdist * AmbientDensity) * curdensity * SkyColor * transmittance;

The two commented out terms were just an attempt to reduce the number of temporaries used. The same can be done to all of the code.

Light Extinction Color

Notice that we are only applying the LightColor to the shadow term once per density sample. Doing it in this way does not allow the scattering to change color with depth. The scattering from clouds in real life is mostly Mie scattering, which scatters all light wavelengths equally, so the single color scatter is not bad for clouds. Still, colored extinction can emulate extinction spectra in liquids, sunset IBL response or artistic effects just by replacing the ShadowDensity parameter with a V3. You divide the Shadow Density by the color you want it to show:
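
For example, with ShadowDensity promoted to a float3 (ExtinctionColor is an assumed user parameter):

float3 ShadowDensity3 = ShadowDensity / ExtinctionColor; // extinction is stronger where the channel is darker
float3 shadowterm = exp(-shadowdist * ShadowDensity3);   // shadowterm and absorbedlight become float3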

Here is what the entire material should look like now:

Notice a phase function was added to the light color (that function exists in engine\content but is not exposed to the function library). It was done this way, rather than on the output side of the ray marcher, so that the phase function could be applied to just the directional light and not affect the ambient light.
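
The engine version is not exposed, but a standard Henyey-Greenstein phase function, sketched here, gives the same kind of directional scattering lobe (g and the vectors below are assumed parameters):

float PhaseHG(float g, float costheta)
{
    // g in (-1, 1): positive for forward scattering, negative for back scattering
    return (1 - g * g) / (4 * 3.14159265 * pow(1 + g * g - 2 * g * costheta, 1.5));
}
// e.g. something like: LightColor *= PhaseHG(0.3, dot(LightDirWS, -CameraVectorWS));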

Additional Shadowing Options

It is possible to add support for various shadowing methods, such as the custom per-object depth based shadow maps discussed in a previous post. While a solution like that can work here, depth based shadowmaps do not look great for volumetrics because the shadow will be crisp without performing expensive custom blurring (and remember we are already inside of a crazy expensive nested loop).

I have only experimented so far with enabling Distance Field Shadows. Distance field shadows are nice for volumetrics because the shadows can be made soft without extra cost. The downside is that looking up the global distance fields many times for volumetric purposes is extremely expensive, and the resolution of the distance fields themselves is not great. Only try this if you have a 980+ level GPU.

Adding distance field shadows also requires passing in or re-computing the world space light vector, preferably outside of the loop:

float3 LightVectorWS = normalize( mul( LightVector, Primitive.LocalToWorld));

Then inside of the main loop, just after the shadow steps:

float3 dfpos = 2 * (CurPos - 0.5) * Primitive.LocalObjectBoundsMax.x;
dfpos = TransformLocalPositionToWorld(Parameters, dfpos).xyz;
float dftracedist = 1;
float dfshadow = 1;
float curdist = 0;
float DistanceAlongCone = 0;

for (int d = 0; d < DFSteps; d++)
{
    DistanceAlongCone += curdist;
    curdist = GetDistanceToNearestSurfaceGlobal(dfpos.xyz);
    // widen the cone with distance to soften the shadow
    float SphereSize = DistanceAlongCone * LightTangent;
    dfshadow = min( saturate(curdist / SphereSize), dfshadow );
    dfpos.xyz += LightVectorWS * dftracedist * curdist;
    dftracedist *= 1.0001;
}

Then the term dfshadow gets multiplied by the absorbed light.
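
In other words, the existing line in the main loop would become something like:

float3 absorbedlight = shadowterm * curdensity * dfshadow;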

Temporal Jitter

Sometimes slicing artifacts will show up even with high step counts and other times the resolution of the volume texture itself can cause artifacts. When low step counts are used, still images can be improved by using the plane snapping described above, but camera motion will still show the slicing artifacts as the slices rotate. Temporal Jitter basically randomly moves around the starting locations every frame and smooths the result. It generally works well unless you have moving objects in front of the jittered surface.

In the past I used the DitherTemporalAA material function to do this, but there is a cheaper and better way now, thanks to Marc Olano's improved pseudorandom functions that were added to UE4 in 4.12. It boils down to these three lines (note that localcamvec has been pre-multiplied by step size at this point):

int3 randpos = int3(Parameters.SvPosition.xy, View.StateFrameIndexMod8);
float rand = float(Rand3DPCG16(randpos).x) / 0xffff;
CurPos += localcamvec * rand * Jitter;

https://youtu.be/KTdj9nzZJWo

Final Notes

Earlier I suggested using 4.13.2 since 4.14 introduced a regression that prevents the material compiler from sharing instructions between pins. So connecting the opacity and emissive color means the entire raymarch function is done twice. One workaround in 4.14 is to use 1.0 for opacity and then use the opacity to lerp between emissive and scene color.
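
A sketch of one way to express that workaround (the ray marcher's alpha output here is transmittance, and SceneColor is assumed to come from a SceneColor node):

// Opacity pin = 1.0; composite manually into Emissive Color instead
float3 Emissive = raymarch.rgb + SceneColor.rgb * raymarch.a;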

(I had more notes, but it turns out this blog template limits the post length and simply omits things beyond that point, so I will add more information in a followup post. It won't even let me fit all of the references.)

Citations:

[1]: Drebin, R. A., Carpenter, L., and Hanrahan, P. Volume Rendering. In SIGGRAPH '88: Proceedings of the 15th Annual Conference on Computer Graphics and Interactive Techniques (1988), pp. 65–74.
