【Unity3D】GrabPass Solutions Under URP

  Both GrabPass and alpha blending produce a rendered object whose color contains the background objects behind it. The difference is that alpha blending can only blend a pixel against whatever earlier draw calls left at that same pixel, whereas GrabPass can read the colors of other pixels while shading the current one, which is what makes effects like distortion and refraction possible.
  However, the URP pipeline does not support GrabPass. The official samples suggest the Opaque Texture as a replacement, but as the name implies, the Opaque Texture is an RT captured after the opaque objects have been rendered and contains no transparent objects, so we need to implement a GrabTexture feature ourselves.
  Note: the Opaque Texture is actually copied right before transparent objects are rendered, that is, after the skybox, and the CopyColor even happens after BeforeRenderingTransparents.
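  For reference, the Opaque Texture is driven by the "Opaque Texture" toggle on the pipeline asset, and shaders read it through the global _CameraOpaqueTexture. Below is a minimal sketch, assuming the URP 7.x API, of checking or forcing that toggle from code; the class name is just an illustration:

using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public static class OpaqueTextureCheck
{
    // Makes sure the pipeline asset copies the opaque color buffer each frame, which is what
    // exposes _CameraOpaqueTexture to shaders. As noted above, the copy happens before
    // transparents, so it is only a partial GrabPass substitute.
    public static void EnsureOpaqueTextureEnabled()
    {
        var urpAsset = GraphicsSettings.renderPipelineAsset as UniversalRenderPipelineAsset;
        if (urpAsset != null && !urpAsset.supportsCameraOpaqueTexture)
            urpAsset.supportsCameraOpaqueTexture = true; // same effect as ticking "Opaque Texture" on the asset
    }
}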
  Fully matching GrabPass is probably impossible. Unity's built-in GrabPass can blit the screen in the draw call immediately before the object's own draw call, but we have no way to crack open a render queue: transparent objects, for example, are rendered as one block, and we cannot insert a blit in the middle of it.
  Callbacks like OnRenderImage and OnPreRender/OnPostRender are not worth pursuing either; leaving aside whether URP even supports them, even if it did, their control over where in the frame things happen is limited.
  One level above those in terms of control is the CommandBuffer. Its granularity sits between render queues: it can, for example, insert a few rendering commands after the transparent pass or before post-processing.
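  As a minimal illustration (not part of the final solution), this is what choosing such an insertion point looks like in the Built-in pipeline; URP expresses the same idea with the renderPassEvent of a ScriptableRenderPass, which the RenderFeatures later in this article rely on. The class name and event choice here are only examples:

using UnityEngine;
using UnityEngine.Rendering;

[RequireComponent(typeof(Camera))]
public class CommandBufferHookExample : MonoBehaviour
{
    private CommandBuffer cmd;

    private void OnEnable()
    {
        cmd = new CommandBuffer { name = "Example Hook" };
        // Anything recorded into cmd runs at this point of every frame,
        // e.g. AfterForwardAlpha = after transparents, BeforeImageEffects = before post-processing.
        GetComponent<Camera>().AddCommandBuffer(CameraEvent.BeforeImageEffects, cmd);
    }

    private void OnDisable()
    {
        GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.BeforeImageEffects, cmd);
        cmd.Release();
    }
}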

The earlier replacement scheme in Unity 2018

  Under the 2018 Built-in pipeline I replaced GrabPass this way (the reason being that GrabPass was said to be expensive at the time). Every object that needed a grab carried a script that registered itself with a global singleton manager and disabled its own particle Renderer component. The manager assembled a CommandBuffer that first blitted the screen into a texture and then drew the objects that needed the GrabTexture. The drawback: when the rendered objects are transparent (in practice most objects that need a grab live in the transparent queue, and they are mostly particles), their sorting is no longer handled by Unity, so you may have to re-sort them yourself. On scene switches you also have to carefully maintain the manager: the registered Renderers (objects added and removed), the Cameras (multi-camera setups) and the CommandBuffers (re-ordering).
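  A rough sketch of that scheme follows, assuming a single camera and using hypothetical names such as GrabCommandManager and _GrabTexture; the real setup also had to handle multiple cameras and scene switches:

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Rendering;

[RequireComponent(typeof(Camera))]
public class GrabCommandManager : MonoBehaviour
{
    private readonly List<Renderer> grabRenderers = new List<Renderer>();
    private CommandBuffer cmd;
    private Camera targetCamera;
    private int grabTexId;

    public void Register(Renderer r)
    {
        if (!grabRenderers.Contains(r))
            grabRenderers.Add(r);
        r.enabled = false; // Unity stops drawing it; the CommandBuffer draws it instead
        RebuildCommandBuffer();
    }

    public void Unregister(Renderer r)
    {
        grabRenderers.Remove(r);
        RebuildCommandBuffer();
    }

    private void OnEnable()
    {
        targetCamera = GetComponent<Camera>();
        grabTexId = Shader.PropertyToID("_GrabTexture");
        cmd = new CommandBuffer { name = "Manual GrabPass" };
        // Run after all transparents so the grabbed image already contains them.
        targetCamera.AddCommandBuffer(CameraEvent.AfterForwardAlpha, cmd);
        RebuildCommandBuffer();
    }

    private void OnDisable()
    {
        targetCamera.RemoveCommandBuffer(CameraEvent.AfterForwardAlpha, cmd);
        cmd.Release();
        cmd = null;
    }

    private void RebuildCommandBuffer()
    {
        if (cmd == null)
            return;
        cmd.Clear();
        cmd.GetTemporaryRT(grabTexId, -1, -1, 0, FilterMode.Bilinear);
        cmd.Blit(BuiltinRenderTextureType.CurrentActive, grabTexId); // grab the current screen
        cmd.SetGlobalTexture("_GrabTexture", grabTexId);
        cmd.SetRenderTarget(BuiltinRenderTextureType.CameraTarget);  // draw back onto the camera target
        // Sorting is now our responsibility: Unity no longer orders these draws.
        foreach (var r in grabRenderers)
            cmd.DrawRenderer(r, r.sharedMaterial);
        cmd.ReleaseTemporaryRT(grabTexId);
    }
}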

The GrabPass scheme under 2019 URP

  At first I carried the scheme above over, except that under URP Camera.AddCommandBuffer cannot correctly add the CommandBuffer into the render queue, so I used a RenderFeature to place the CommandBuffer at the desired position between render queues.
  Unfortunately, the effects that needed the grab still did not get rendered. After some digging it turned out that CommandBuffer.DrawRenderer cannot render a particle's ParticleSystemRenderer; a Google search showed others hitting the same issue, while MeshRenderer and SpriteRenderer render fine. Asking Unity staff, I learned that this is a long-standing Unity bug, fixed in 2018.4, but because the system may be reimplemented with DOTS in the future, it was left unfixed in 2019 for the time being.
  Unity staff suggested solving it with multiple cameras, which matched our own thinking.

The multi-camera solution

  The idea is to add an effect camera for every camera that needs to render GrabTexture effects or objects; apart from its culling mask, the effect camera's parameters are identical to those of the main camera that spawned it.
  Layers keep the main camera from rendering these objects; before post-processing a CommandBuffer blits the screen into a texture, and the effect camera's culling mask is set so that it renders only the objects that need the GrabTexture.
  Note that the multi-camera flow under URP differs from Built-in. Under Built-in, camera depth controls the order in which cameras render; under URP you use the Camera Stack: set the main camera's type to Base, set the cameras rendered after it to Overlay, and add the Overlay cameras to the Base camera's Camera Stack.
This was my script at the time:

using UnityEngine;
using UnityEngine.Rendering.Universal;
using UnityEngine.SceneManagement;

[RequireComponent(typeof(Camera))]
public class AddEffectCamera : MonoBehaviour
{
    public LayerMask EffectLayer;

    private Camera selfCamera;
    private Camera effectCamera;

    private LayerMask originSelfCameraLayerMask;

    private bool AttachToMainCamera = false;
    private bool Attached = false;
    private Scene preScene;

    private void Awake()
    {
        selfCamera = GetComponent<Camera>();
        var selfCameraData = selfCamera.GetUniversalAdditionalCameraData();
        originSelfCameraLayerMask = selfCamera.cullingMask;
        selfCamera.cullingMask &= (~EffectLayer);

        if (effectCamera == null)
        {
            GameObject go = new GameObject(name + "'s Effect Camera");
            go.transform.parent = transform;
            go.transform.localPosition = Vector3.zero;
            go.transform.localRotation = Quaternion.identity;
            go.transform.localScale = Vector3.one;
            effectCamera = go.AddComponent<Camera>();
            var cameraData = effectCamera.GetUniversalAdditionalCameraData();
            cameraData.renderType = CameraRenderType.Overlay;
            cameraData.renderShadows = false;
            cameraData.clearDepth = false; // read-only; requires modifying the URP source to add a setter

            effectCamera.cullingMask = EffectLayer;
            effectCamera.orthographic = selfCamera.orthographic;
            effectCamera.useOcclusionCulling = false;

            if (selfCameraData.renderType == CameraRenderType.Base) // the main camera's effect camera goes straight to the front of the stack
                selfCameraData.cameraStack.Insert(0, effectCamera);
            else // other cameras may need to handle scene switches, so check again in Update
                AttachToMainCamera = true;
        }

        preScene = SceneManager.GetActiveScene();
    }

    private void Update()
    {
        // If the scene has changed, re-attach to the main camera
        Scene currScene = SceneManager.GetActiveScene();
        if (currScene != preScene)
        {
            preScene = currScene;
            Attached = false;
        }
        if(AttachToMainCamera && !Attached && Camera.main != null)
        {
            var mainCameraData = Camera.main.GetUniversalAdditionalCameraData();
            int parentIndex = mainCameraData.cameraStack.FindIndex((Camera camera) => camera == selfCamera);
            if(parentIndex != -1)
            {
                mainCameraData.cameraStack.Insert(parentIndex + 1, effectCamera);
                Attached = true;
            }
        }

        effectCamera.fieldOfView = selfCamera.fieldOfView;
        effectCamera.nearClipPlane = selfCamera.nearClipPlane;
        effectCamera.farClipPlane = selfCamera.farClipPlane;
        effectCamera.orthographicSize = selfCamera.orthographicSize;

        selfCamera.cullingMask = (selfCamera.cullingMask & ~EffectLayer);
        effectCamera.cullingMask = EffectLayer;
    }

    public void SetEffectLayerMask(LayerMask effectLayer)
    {
        EffectLayer = effectLayer;
    }
}

  Besides that, we need a RenderFeature. It grabs an RT before each camera's post-processing; because that RT is handed to particles outside the pass, it is not released at the end of Execute, but released and re-allocated the next time the pass runs:

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class CustomGrabPassFeature : ScriptableRendererFeature
{
    [System.Serializable]
    public class Setting
    {
        public string textureName = "_GrabTexture";
        [Range(0, 1)]public float sampleDown = 0.5f;
        public RenderPassEvent passEvent = RenderPassEvent.AfterRenderingPostProcessing;
        public bool useBlitMat = true;
    }
    public Setting settings = new Setting();

    class BlitPass : ScriptableRenderPass
    {
        Setting setting;
        Material BlitMat;
        //RenderTexture rt;
        //RenderTargetIdentifier colorBuffer;

        public BlitPass(Setting setting)
        {
            this.setting = setting;
            BlitMat = new Material(Shader.Find("Hidden/Universal Render Pipeline/Blit"));
        }

        ///public void Setup(RenderTargetIdentifier colorBuffer) => this.colorBuffer = colorBuffer;

        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            if (renderingData.cameraData.camera.cameraType == CameraType.SceneView)
                return;

            CommandBuffer cmd = CommandBufferPool.Get("Grab " + setting.textureName);

            RenderTextureDescriptor desc = renderingData.cameraData.cameraTargetDescriptor;
            int width = (int)(desc.width * setting.sampleDown);
            int height = (int)(desc.height * setting.sampleDown);

            int textureId = Shader.PropertyToID(setting.textureName);
            //cmd.GetTemporaryRT(textureId, desc);

            cmd.ReleaseTemporaryRT(textureId);
            cmd.GetTemporaryRT(textureId, width, height, 0, FilterMode.Bilinear, RenderTextureFormat.ARGB32);

            
            if(setting.useBlitMat)
                cmd.Blit(renderingData.cameraData.targetTexture, textureId, BlitMat, 0);
            else
                cmd.Blit(renderingData.cameraData.targetTexture, textureId);

            //Debug.LogWarning("Blit Camera: " + renderingData.cameraData.camera.name +  " to " + setting.textureName);

            context.ExecuteCommandBuffer(cmd);

            //cmd.ReleaseTemporaryRT(textureId);
            CommandBufferPool.Release(cmd);
        }

        
    }

    BlitPass mBlitPass;

    public override void Create()
    {
        mBlitPass = new BlitPass(settings) { renderPassEvent = settings.passEvent};
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        //mBlitPass.Setup(renderer.cameraColorTarget);
        renderer.EnqueuePass(mBlitPass);
    }

}

  A few pitfalls. The official docs state that any temporary texture not explicitly released is deleted once the camera finishes rendering, but what we need here is cross-camera rendering; in practice the temporarily allocated RT can still be fetched afterwards, and its contents are neither cleared nor overwritten.
  With only the Game view present, the cameras execute one after another correctly, but if a SceneView camera is interleaved in between, the rendering may break.
  Also, when CommandBuffer.Blit captures into an RT of a different size, for example a destination that is 1/4 the size of the source camera, only the top-left quarter of the picture gets captured. The overload of CommandBuffer.Blit that takes a Texture as the source does not have this problem, but whether a Texture is passed also affects when the Blit actually runs, so downsampling is not recommended.
  The advantage of this approach is that rendering the particles is left entirely to Unity: we don't have to think about sorting, only about where to capture the frame.
  The drawback is the added complexity of camera management. Since nearly every camera (except shadow cameras) goes through the RenderFeature, every extra camera added for some other purpose has to be handled with care, and when something goes wrong the whole setup has to be traced through again.

The ScriptableRenderContext.DrawRenderers solution

  The reason for the multi-camera workaround above was that CommandBuffer cannot render particle systems. But URP itself clearly can render particles, and experiments show that ScriptableRenderContext, i.e. the context parameter passed into the pass's Execute, can render particle systems through its DrawRenderers method. This is much like the multi-camera scheme: both rely on Unity's own machinery to render the objects, so Unity takes care of sorting and culling for us.

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class CustomGrabPassFeature : ScriptableRendererFeature
{
    [System.Serializable]
    public class Setting
    {
        public string textureName = "_ScreenGrabTexture";
        //[Range(0, 1)]public float sampleDown = 0.5f;
        public RenderPassEvent passEvent = RenderPassEvent.AfterRenderingTransparents;
        //shader Tag -> LightMode
        public List<string> shaderTagIdList = new List<string>();
        //public bool useBlitMat = true;
    }
    public Setting settings = new Setting();

    class BlitPass : ScriptableRenderPass
    {
        Setting setting;
        //Material BlitMat;
        RenderTargetHandle mRT = RenderTargetHandle.CameraTarget;

        RenderStateBlock renderStateBlock;
        FilteringSettings mFilteringSettings;

        public List<ShaderTagId> shaderTagIdList = new List<ShaderTagId>();

        public BlitPass(Setting setting)
        {
            this.setting = setting;
            //BlitMat = new Material(Shader.Find("Hidden/Universal Render Pipeline/Blit"));
            mRT.Init(setting.textureName);

            renderStateBlock = new RenderStateBlock(RenderStateMask.Nothing);
            mFilteringSettings = new FilteringSettings(RenderQueueRange.transparent);

            foreach (string tag in setting.shaderTagIdList)
            {
                shaderTagIdList.Add(new ShaderTagId(tag));
            }
        }

        public override void Configure(CommandBuffer cmd, RenderTextureDescriptor cameraTextureDescriptor)
        {
            cameraTextureDescriptor.depthBufferBits = 0;
            //cameraTextureDescriptor.width = (int)(cameraTextureDescriptor.width * setting.sampleDown);
            //cameraTextureDescriptor.height = (int)(cameraTextureDescriptor.height * setting.sampleDown);
            cameraTextureDescriptor.width = (int)(cameraTextureDescriptor.width);
            cameraTextureDescriptor.height = (int)(cameraTextureDescriptor.height);
            //ARGB32 values are limited to 0-1, which breaks post effects such as Bloom, so keep the back buffer's RGB111110Float format
            //cameraTextureDescriptor.colorFormat = RenderTextureFormat.ARGB32;

            cmd.GetTemporaryRT(mRT.id, cameraTextureDescriptor, FilterMode.Bilinear);
            //if(setting.useBlitMat)
            //{
            //    cmd.Blit(colorAttachment, mRT.Identifier(), BlitMat, 0);
            //}
            //else
            cmd.Blit(colorAttachment, mRT.Identifier());
            
            cmd.SetGlobalTexture(setting.textureName, mRT.Identifier());
        }

        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {

            CommandBuffer drawCMD = CommandBufferPool.Get("Draw " + setting.textureName);
            using (new ProfilingSample(drawCMD, "Draw " + setting.textureName))
            {
                context.ExecuteCommandBuffer(drawCMD);
                drawCMD.Clear();
                drawCMD.SetRenderTarget(colorAttachment);

                var drawSettings = CreateDrawingSettings(shaderTagIdList, ref renderingData, SortingCriteria.CommonTransparent);
                //mFilteringSettings.layerMask = renderingData.cameraData.camera.cullingMask;

                context.DrawRenderers(renderingData.cullResults, ref drawSettings, ref mFilteringSettings, ref renderStateBlock);

            }
            context.ExecuteCommandBuffer(drawCMD);
            CommandBufferPool.Release(drawCMD);
        }

        public override void FrameCleanup(CommandBuffer cmd)
        {
            cmd.ReleaseTemporaryRT(mRT.id);
        }
    }



    BlitPass mBlitPass;

    public override void Create()
    {
        mBlitPass = new BlitPass(settings) { renderPassEvent = settings.passEvent};
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        //Without this check, an empty shaderTagIdList keeps throwing warnings
        if (settings.shaderTagIdList == null || settings.shaderTagIdList.Count == 0)
            return;
        renderer.EnqueuePass(mBlitPass);
    }
}

  Only shaders whose LightMode tag appears in this feature's shaderTagIdList get rendered; beyond that, DrawRenderers can also control the queue range, the sorting criteria and so on.
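  As a configuration sketch (these values are normally filled in on the Renderer asset in the Inspector, and "CustomGrab" is a hypothetical LightMode tag that the grab/refraction shaders would declare in their pass), listing only a dedicated tag keeps ordinary transparent objects out of the grab draw:

using System.Collections.Generic;
using UnityEngine.Rendering.Universal;

public static class GrabFeatureConfigExample
{
    // Only shaders whose pass declares Tags { "LightMode" = "CustomGrab" } would be
    // collected by a feature configured with these settings.
    public static CustomGrabPassFeature.Setting Example()
    {
        return new CustomGrabPassFeature.Setting
        {
            textureName = "_ScreenGrabTexture",
            passEvent = RenderPassEvent.AfterRenderingTransparents,
            shaderTagIdList = new List<string> { "CustomGrab" }
        };
    }
}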
  Compared with hand-assembling the render queue in 2018 and with the multi-camera scheme, this approach is far simpler: no render sorting to worry about (for the managed grab objects, that is; transparent objects outside the pass still need it), no multi-scene handling, and so on.
  If it has a weakness, it is downsampling the grabbed texture; at least I could not get it to work, and whenever I tried passing a material to the Blit call, all sorts of errors showed up.
