Advanced post-processing

Introduction

This tutorial describes an advanced method for post-processing in Godot. Notably, it explains how to write a post-processing shader that uses the depth buffer. You should already be familiar with post-processing, and in particular with the methods introduced in the custom post-processing tutorial.

In the previous post-processing tutorial, we rendered the scene to a Viewport and then rendered the Viewport in a ViewportContainer to the main scene. One limitation of this method is that we could not access the depth buffer because the depth buffer is only available in spatial shaders and Viewports do not maintain depth information.

Fullscreen quad

In the custom post-processing tutorial, we covered how to use a Viewport to make custom post-processing effects. There are two main drawbacks of using a Viewport:

  1. We cannot access the depth buffer.
  2. The effect of the post-processing shader is not visible in the editor.

To work around the limitation on accessing the depth buffer, use a MeshInstance with a QuadMesh primitive. This allows us to use a spatial shader and to access the depth texture of the scene. Next, use a vertex shader to make the quad always cover the screen so that the post-processing effect is applied at all times, including in the editor.

First, create a new MeshInstance and set its mesh to a QuadMesh. This creates a quad centered at position (0, 0, 0) with a width and height of 1. Set the width and height to 2. Right now, the quad occupies a position in world space at the origin; however, we want it to move with the camera so that it always covers the entire screen. To do this, we will bypass the coordinate transforms that translate the vertex positions through the different coordinate spaces and treat the vertices as if they were already in clip space.
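The same setup can also be done from a script instead of the editor. This is only a sketch, assuming a Godot 3.x scene where the script is attached to a Spatial node; the node layout is a placeholder:

```gdscript
extends Spatial

func _ready():
    # Build a fullscreen quad: a QuadMesh of size 2x2,
    # matching the -1..1 range of clip space.
    var quad = QuadMesh.new()
    quad.size = Vector2(2, 2)
    var mesh_instance = MeshInstance.new()
    mesh_instance.mesh = quad
    add_child(mesh_instance)
```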

The vertex shader expects coordinates to be output in clip space, which are coordinates ranging from -1 at the left and bottom of the screen to 1 at the top and right of the screen. This is why the QuadMesh needs to have height and width of 2. Godot handles the transform from model to view space to clip space behind the scenes, so we need to nullify the effects of Godot’s transformations. We do this by setting the POSITION built-in to our desired position. POSITION bypasses the built-in transformations and sets the vertex position directly.

```glsl
shader_type spatial;

void vertex() {
  POSITION = vec4(VERTEX, 1.0);
}
```

Even with this vertex shader, the quad keeps disappearing. This is due to frustum culling, which is done on the CPU. Frustum culling uses the camera matrix and the AABBs of Meshes to determine if the Mesh will be visible before passing it to the GPU. The CPU has no knowledge of what we are doing with the vertices, so it assumes the coordinates specified refer to world positions, not clip space positions, which results in Godot culling the quad when we turn away from the center of the scene. In order to keep the quad from being culled, there are a few options:

  1. Add the QuadMesh as a child of the camera, so the camera always points at it.
  2. Set the geometry property extra_cull_margin on the QuadMesh as large as possible.

The second option ensures that the quad is visible in the editor, while the first option guarantees that it will still be visible even if the camera moves outside the cull margin. You can also use both options.
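The cull margin can also be set from code. A minimal sketch of the second option, assuming the script is attached to the MeshInstance itself; the exact value is arbitrary, as long as it is larger than the distances in your scene:

```gdscript
extends MeshInstance

func _ready():
    # Make the cull margin very large so the quad is never
    # frustum-culled. The value here is an arbitrary large number.
    extra_cull_margin = 16384.0
```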

Depth texture

To read from the depth texture, perform a texture lookup using texture() and the uniform variable DEPTH_TEXTURE.

```glsl
float depth = texture(DEPTH_TEXTURE, SCREEN_UV).x;
```

Note

Similar to accessing the screen texture, accessing the depth texture is only possible when reading from the current viewport. The depth texture cannot be accessed from another viewport to which you have rendered.

DEPTH_TEXTURE returns a value between 0 and 1 and is nonlinear. When displaying depth directly from DEPTH_TEXTURE, everything will look almost white unless it is very close. This is because the depth buffer stores objects closer to the camera using more bits than objects further away, so most of the detail in the depth buffer is found close to the camera. In order to make the depth value align with world or model coordinates, we need to linearize the value. The z value becomes nonlinear when we apply the projection matrix to the vertex position, so to linearize it we multiply it by the inverse of the projection matrix, which in Godot is accessible with the variable INV_PROJECTION_MATRIX.

Firstly, take the screen space coordinates and transform them into normalized device coordinates (NDC). NDC run from -1 to 1, similar to clip space coordinates. Reconstruct the NDC using SCREEN_UV for the x and y axis, and the depth value for z.

```glsl
void fragment() {
  float depth = texture(DEPTH_TEXTURE, SCREEN_UV).x;
  vec3 ndc = vec3(SCREEN_UV, depth) * 2.0 - 1.0;
}
```

Convert NDC to view space by multiplying the NDC by INV_PROJECTION_MATRIX. Recall that view space gives positions relative to the camera, so the z value will give us the distance to the point.

```glsl
void fragment() {
  ...
  vec4 view = INV_PROJECTION_MATRIX * vec4(ndc, 1.0);
  view.xyz /= view.w;
  float linear_depth = -view.z;
}
```

Because the camera is facing the negative z direction, the position will have a negative z value. In order to get a usable depth value, we have to negate view.z.
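Putting the pieces so far together, a complete shader that visualizes the linearized depth might look like the sketch below. The division by 50.0 is an arbitrary assumption, chosen only so that depths up to roughly 50 units map into the displayable 0-1 range:

```glsl
shader_type spatial;
render_mode unshaded;

void vertex() {
  // Treat the vertices as clip space coordinates so the quad
  // always covers the screen.
  POSITION = vec4(VERTEX, 1.0);
}

void fragment() {
  float depth = texture(DEPTH_TEXTURE, SCREEN_UV).x;
  vec3 ndc = vec3(SCREEN_UV, depth) * 2.0 - 1.0;
  vec4 view = INV_PROJECTION_MATRIX * vec4(ndc, 1.0);
  view.xyz /= view.w;
  float linear_depth = -view.z;
  // Arbitrary scale: depths near the camera show as dark,
  // depths around 50 units as white.
  ALBEDO = vec3(linear_depth / 50.0);
}
```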

The world position can be constructed from the depth buffer using the following code. Note that the CAMERA_MATRIX is needed to transform the position from view space into world space, so it needs to be passed to the fragment shader with a varying.

```glsl
varying mat4 CAMERA;

void vertex() {
  CAMERA = CAMERA_MATRIX;
}

void fragment() {
  ...
  vec4 world = CAMERA * INV_PROJECTION_MATRIX * vec4(ndc, 1.0);
  vec3 world_position = world.xyz / world.w;
}
```
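As one possible use of the reconstructed position, the fragment shader could tint everything below a given world height. This is only an illustration; the colors and the height threshold of 0.0 are arbitrary choices:

```glsl
void fragment() {
  float depth = texture(DEPTH_TEXTURE, SCREEN_UV).x;
  vec3 ndc = vec3(SCREEN_UV, depth) * 2.0 - 1.0;
  vec4 world = CAMERA * INV_PROJECTION_MATRIX * vec4(ndc, 1.0);
  vec3 world_position = world.xyz / world.w;
  // Blue below world height 0.0, white above (arbitrary threshold).
  ALBEDO = mix(vec3(0.2, 0.4, 0.8), vec3(1.0), step(0.0, world_position.y));
}
```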

An optimization

You can use a single large triangle instead of a fullscreen quad. The reason for this is explained here. However, the benefit is very small and only worthwhile when running especially complex fragment shaders.

Set the Mesh in the MeshInstance to an ArrayMesh. An ArrayMesh is a tool that allows you to conveniently construct a Mesh from arrays of vertices, normals, colors, etc.

Now, attach a script to the MeshInstance and use the following code:

```gdscript
extends MeshInstance

func _ready():
    # Create a single triangle out of vertices.
    var verts = PoolVector3Array()
    verts.append(Vector3(-1.0, -1.0, 0.0))
    verts.append(Vector3(-1.0, 3.0, 0.0))
    verts.append(Vector3(3.0, -1.0, 0.0))

    # Create an array of arrays.
    # This could contain normals, colors, UVs, etc.
    var mesh_array = []
    mesh_array.resize(Mesh.ARRAY_MAX) # Required size for the ArrayMesh array.
    mesh_array[Mesh.ARRAY_VERTEX] = verts # Position of the vertex array in the ArrayMesh array.

    # Create the mesh from mesh_array.
    mesh.add_surface_from_arrays(Mesh.PRIMITIVE_TRIANGLES, mesh_array)
```

Note

The triangle is specified in normalized device coordinates. Recall that NDC run from -1 to 1 in both the x and y directions. This makes the screen 2 units wide and 2 units tall. In order to cover the entire screen with a single triangle, use a triangle that is 4 units wide and 4 units tall, double its height and width.

Assign the same vertex shader from above, and everything should look exactly the same.

One drawback of using an ArrayMesh instead of a QuadMesh is that the ArrayMesh is not visible in the editor because the triangle is not constructed until the scene is run. To get around that, construct a single triangle Mesh in a modeling program and use it in the MeshInstance instead.