【Unity shader】Water rendering basics: the underwater see-through effect

This is the last article in the water rendering basics series: seeing underwater objects through the water surface, with a depth effect.

1. Build a simple demonstration scene

Let's just set up a small scene.
Add a water plane, give it the UV-distorted water material from the previous articles, and add transparency settings.


SubShader
{
    Tags { "RenderType" = "Transparent" "Queue" = "Transparent" }
    LOD 100

    Pass
    {
        //Tags {"LightMode" = "ForwardBase"}

        ZWrite Off
        Blend SrcAlpha OneMinusSrcAlpha
        //....... for the returned color, add a parameter that controls transparency
    }
    // note that the FallBack also needs to be commented out
}

2. Implement a water depth effect based on fog

Water absorbs light, so real water is not completely transparent. Moreover, water absorbs different frequencies of light at different rates, with blue light being absorbed the least.
Therefore, the deeper an object sits, the more blue-tinted it appears.
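As a side illustration (not the approach used in this article, which blends toward a single fog color below), wavelength-dependent absorption is often modeled as a per-channel exponential falloff. A minimal sketch, with made-up coefficients where red is absorbed fastest and blue slowest:

// Hypothetical sketch of per-channel absorption (Beer-Lambert style).
// The coefficients are example values, not from this article.
float3 AbsorbLight (float3 backgroundColor, float waterDepth)
{
    const float3 absorption = float3(0.45, 0.15, 0.05); // red fastest, blue slowest
    return backgroundColor * exp(-absorption * waterDepth);
}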

Of course we could simply apply a global fog, but here it is better to apply the fog calculation only to the water body.

Starting here, we add a cginc file for the underwater calculations, containing a function that returns the color of the underwater fragment.
Note that Unity provides no menu entry for creating a .cginc file; on Windows you can create a .txt file in the project folder and then change the file extension.

// in the shader
#include "LookingThroughWater.cginc"

// in LookingThroughWater.cginc
float3 ColorBelowWater ()
{
    // for now, just return black
    return 0;
}

// multiply the returned color by the result of ColorBelowWater(), and set the alpha to 1
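Putting those two comments together, the fragment shader's return might look like this (a sketch; col, _BaseColor, diffuse and specular are the names used by the shader built up in the earlier articles):

// in frag shader (sketch, names follow the earlier articles)
return fixed4((col * _BaseColor + diffuse + specular) * ColorBelowWater(), 1);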

With this simplest version of the underwater color calculation, we get a petroleum-like fluid.
To calculate the depth fog, we first need the camera's depth texture. (Make sure the camera actually generates one, e.g. via Camera.depthTextureMode.)

// in xxx.cginc
sampler2D _CameraDepthTexture;

In addition, inside the shader we need the corresponding screen-space coordinates.

Add screenPos to the surface shader:

struct Input
{
    float2 uv_MainTex;
    float4 screenPos;
};

void surf (Input IN, inout SurfaceOutputStandard o)
{
    …

    o.Albedo = ColorBelowWater(IN.screenPos);
    o.Alpha = 1;
}

In the unlit shader, we can add the VPOS semantic to the fragment shader to bring in screen-space coordinates.

Since VPOS and SV_POSITION cannot coexist in the same v2f structure, we must remove SV_POSITION from the original v2f and output it separately as an out parameter of the vertex shader.

struct v2f
{
    float2 uv : TEXCOORD0;
    //......
    // float4 vertex : SV_POSITION;
};

v2f vert (appdata_tan v, out float4 vertex : SV_POSITION)
{
    v2f o;
    vertex = UnityObjectToClipPos(v.vertex);
    //.......

    return o;
}

fixed4 frag (v2f i, UNITY_VPOS_TYPE screenPos : VPOS) : SV_Target
{
    //....
}

Alternatively, we can compute the screen-space coordinates ourselves with ComputeScreenPos, in which case there is no need to use VPOS or to remove the SV_POSITION member from the v2f structure.

v2f vert (appdata_tan v)
{
    v2f o;

    o.vertex = UnityObjectToClipPos(v.vertex);
    o.screenpos = ComputeScreenPos(o.vertex);

    //.......

    return o;
}

2.1 Get the distance from the underwater fragment to the water surface

The logic is not difficult: subtracting the depth of the water-surface fragment from the depth of the underwater fragment behind it gives the thickness of the water body for that fragment.

float3 ColorBelowWater (float4 screenPos)
{
    float2 uv = screenPos.xy / screenPos.w;

    float backgroundDepth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv));
    float surfaceDepth = UNITY_Z_0_FAR_FROM_CLIPSPACE(screenPos.z);

    float depthDifference = backgroundDepth - surfaceDepth;

    // dividing by 20 spreads out the depth levels; this constant can be read as the maximum depth
    // all such constants are best tuned to the depth of the actual scene
    return depthDifference / 20;
}

Note that the fragment shader's return value here is temporarily replaced with the raw result of ColorBelowWater().
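In the unlit shader, that temporary debug return might look like this (sketch):

// in frag shader -- temporarily visualize the raw depth result
return fixed4(ColorBelowWater(i.screenpos), 1);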
If we get an upside-down black-and-white result at this point, it may be because the platform's depth texture v coordinate runs top to bottom. In that case we need to flip the uv in the v dimension.

//in xxxx.cginc
float4 _CameraDepthTexture_TexelSize;

float3 ColorBelowWater (float4 screenPos)
{
    float2 uv = screenPos.xy / screenPos.w;
    #if UNITY_UV_STARTS_AT_TOP
        if (_CameraDepthTexture_TexelSize.y < 0) {
            uv.y = 1 - uv.y;
        }
    #endif

    //.........
}

2.2 Obtain the underwater rendering frame buffer

With the depth calculation solved, a new problem appears: if we simply multiply the existing color result by the depth value, it cannot correctly reflect the color of the objects under the water.

//in frag shader
return fixed4((col * _BaseColor + diffuse + specular) * ColorBelowWater(i.screenpos), _AlphaScale);

Of course, we could be clever and turn down the alpha value to dilute the black, but that directly destroys the depth effect.
Therefore, we need to interpolate between the color of the objects rendered behind the water and the depth-based fog result.

Because the original water shader only computes the color of the water surface itself, this blending cannot be done in a single pass.

We add a separate GrabPass to store the rendered results of the other objects in advance. Since transparent objects are rendered after opaque ones, the grab will capture the opaque objects; if you need other objects in the grab, pay attention to the rendering order.

According to the Unity documentation, GrabPass can only capture the frame buffer, and it can be invoked in two ways: when no target texture is given, the result is stored in _GrabTexture; when you want to specify a named texture for the grab result, the name is given in double quotes.
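The two forms side by side:

// form 1: no name -- every object using this pass grabs into the shared _GrabTexture
GrabPass { }

// form 2: a quoted name -- the result is stored in that texture, and the
// (expensive) grab is performed only once per frame for all objects that use it
GrabPass { "_WaterBackground" }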

SubShader
{
    Tags { "RenderType" = "Transparent" "Queue" = "Transparent" }
    LOD 100

    // add a GrabPass that stores the rendered background objects in _WaterBackground
    // ahead of time, for the interpolated color blending later on
    GrabPass { "_WaterBackground" }

    Pass
    {
        // water rendering pass
    }
}

//in xxx.cginc
sampler2D _WaterBackground;

float3 ColorBelowWater (float4 screenPos)
{
    float2 uv = screenPos.xy / screenPos.w;
    #if UNITY_UV_STARTS_AT_TOP
        if (_CameraDepthTexture_TexelSize.y < 0) {
            uv.y = 1 - uv.y;
        }
    #endif

    float3 backgroundColor = tex2D(_WaterBackground, uv).rgb;

    return backgroundColor;
}

You can see that even with alpha at full opacity, we now get a see-through effect.

2.3 Blend the grabbed background with the water depth fog

Two new fog effect-related parameters are added to the properties:

_WaterFogColor ("Water Fog Color", Color) = (0, 0, 0, 0)
_WaterFogDensity ("Water Fog Density", Range(0, 2)) = 0.1
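For the cginc code below to compile, matching uniform declarations are needed, with the same names as the properties:

// in xxx.cginc
float3 _WaterFogColor;
float _WaterFogDensity;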

We blend the background color toward the water-bottom color (the fog color), with an interpolation factor that combines fog density and depth.

//in xxx.cginc
// update ColorBelowWater( )
float3 backgroundColor = tex2D(_WaterBackground, uv).rgb;
float fogFactor = exp2(-_WaterFogDensity * depthDifference);

return lerp(_WaterFogColor, backgroundColor, fogFactor);

Now we can use _WaterFogDensity to control the scattering strength of the water and create a sense of depth in the water body.
Then adjust the _AlphaScale value. Clearly we no longer need a parameter that controls the transparency of the output color, but rather one that controls how strongly the depth-based transparency is applied.

The original _AlphaScale was mainly used to control the transparency of the fragment shader's color output.
Now we change it so that it scales the final fogFactor, and we fix the alpha (w) value returned by the fragment shader to 1.

//in .cginc ColorBelowWater
return lerp(_WaterFogColor, backgroundColor, fogFactor * _AlphaScale);
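Correspondingly, the fragment shader now returns a fixed alpha of 1 (sketch, same names as the earlier snippet):

// in frag shader -- alpha is fixed to 1; transparency now comes entirely
// from the depth-based blending inside ColorBelowWater()
return fixed4((col * _BaseColor + diffuse + specular) * ColorBelowWater(i.screenpos), 1);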

Unlike adjusting _WaterFogDensity, adjusting _AlphaScale mainly controls how much of the depth blending takes effect.

3. Implement the distortion of underwater objects

As everyday experience tells us, underwater objects appear distorted when there are waves: the edges of submerged objects (like the fish in the demo scene) shift to some extent along with the water waves.
The implementation logic is not complicated: offset the UV used to sample the underwater color along the water-wave normal (its x and z components).

// a new parameter _RefractionStrength controls how far the uv used to sample
// the underwater color is offset
float3 ColorBelowWater (float4 screenPos, float3 worldNormal)
{
    float2 uvOffset = worldNormal.xz * _RefractionStrength;
    float2 uv = (screenPos.xy + uvOffset) / screenPos.w;
    //.........
}
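_RefractionStrength is declared like the other parameters; the range and default below are assumptions to tune to taste:

// in Properties (the range is an example)
_RefractionStrength ("Refraction Strength", Range(0, 1)) = 0.25

// in xxx.cginc
float _RefractionStrength;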

(Screenshot: comparison of the scene before and after adding the underwater distortion.)

3.1 Correct the false distortion of objects above the water

Looking at the code above, since the UV offset is applied unconditionally, objects that are not underwater also get color distortion wherever the water surface overlaps them on screen.
The fix is simple: after a depth test, we apply the uv offset only to fragments that are actually underwater. For fragments above the water we fall back to the original uv, and note that depthDifference must then be recalculated.

float3 ColorBelowWater (float4 screenPos, float3 worldNormal)
{
    //...... add an originUV that stores the uv before the offset

    if (depthDifference < 0)
    {
        uv = originUV;
        #if UNITY_UV_STARTS_AT_TOP
            if (_CameraDepthTexture_TexelSize.y < 0) {
                uv.y = 1 - uv.y;
            }
        #endif

        // sampling the color buffer with the pre-offset uv while keeping the depth
        // difference from the offset uv would mismatch depth and color, so we
        // resample backgroundDepth and recompute depthDifference here
        backgroundDepth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv));
        depthDifference = backgroundDepth - surfaceDepth;
    }

    float3 backgroundColor = tex2D(_WaterBackground, uv).rgb;
    float fogFactor = exp2(-_WaterFogDensity * depthDifference);

    return lerp(_WaterFogColor, backgroundColor, fogFactor * _AlphaScale);
}


4. Implement water surface waves

After adding the underwater refraction, the water surface shading looks quite choppy, yet the surface geometry is still as flat as a mirror, which clearly contradicts common sense.
Finally, on top of the previous rendering, we sample the flow map in the vertex shader, take the length of the flow vector, and use it to raise the vertex along the y direction.

Note that a 100x100 plane is used here, so there are enough vertices to displace. In addition, tex2D cannot be used for texture sampling in the vertex shader; we must use tex2Dlod to sample the flow map, which takes a float4 uv.

We set the third and fourth components of the uv to 0 (the fourth component is the mip level).


v2f vert (appdata_tan v)
{
    v2f o;
    o.uv = TRANSFORM_TEX(v.texcoord, _MainTex);

    // sample the flow map at mip level 0 and remap from [0,1] to [-1,1]
    float2 flowVec = tex2Dlod(_FlowMap, float4(o.uv + _Time.y * _Speed, 0.0, 0.0)).rg;
    flowVec = flowVec * 2 - 1;

    // raise the vertex along y by the length of the flow vector
    o.vertex = UnityObjectToClipPos(v.vertex.xyz + float3(0.0, length(flowVec) * _HeightScale, 0.0));
    o.screenpos = ComputeScreenPos(o.vertex);

    float3 worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
    float3 worldNormal = UnityObjectToWorldNormal(v.normal);
    float3 worldTangent = UnityObjectToWorldDir(v.tangent.xyz);
    float3 worldBiTangent = cross(worldNormal, worldTangent) * v.tangent.w;

    o.t2w_0 = float4(worldTangent.x, worldBiTangent.x, worldNormal.x, worldPos.x);
    o.t2w_1 = float4(worldTangent.y, worldBiTangent.y, worldNormal.y, worldPos.y);
    o.t2w_2 = float4(worldTangent.z, worldBiTangent.z, worldNormal.z, worldPos.z);

    return o;
}
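For completeness, the wave code above assumes declarations along these lines (names taken from the code; treat them as a sketch):

// assumed matching declarations for the vertex wave animation
sampler2D _FlowMap;
float4 _MainTex_ST;   // used by TRANSFORM_TEX(v.texcoord, _MainTex)
float _Speed;
float _HeightScale;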

In this way, the water body and the edges of objects immersed in it fluctuate together over time, which looks much more realistic.

Origin: blog.csdn.net/misaka12807/article/details/132594033