I'm currently working on a voxel raymarching shader where I render an inverse box with a 3D texture which holds the voxel data.
Everything works quite well: I can raymarch against the voxels in local space and get all outputs like color, normal, depth, position etc.

However, using a spatial shader in Godot, I'm not able to set the VERTEX built-in to the actual world position I've calculated in the fragment shader code (the compiler states that "Constants can not be modified").

I've looked at the actual compiled shader via RenderDoc and to me it seems that it should be fairly easy to change the "vertex" variable, but the shader editor just won't let me do this.

So my question is: is there anything I can do here?
Do I have to change the actual engine code so VERTEX becomes a parameter rather than a constant?

Some (incomplete) code for reference:

void fragment() {
	RaymarchHit hit;

	// note: VIEW_MATRIX * MODEL_MATRIX actually lands in view space,
	// which is what PROJECTION_MATRIX expects further down
	mat4 LocalToWorld = VIEW_MATRIX * MODEL_MATRIX;
	mat4 WorldToLocal = inverse(MODEL_MATRIX);

	// interpolated vertex position, brought from view space back to world space
	vec3 vertexPosWS = (INV_VIEW_MATRIX * vec4(VERTEX, 1.0)).xyz;

	// camera position in voxel space = ray origin
	vec3 cameraPosLS = TransformWorldToObject(WorldToLocal, CAMERA_POSITION_WORLD);
	vec3 rayOrigin = cameraPosLS * VoxelScale + VoxelBoxSize * 0.5;

	// ray direction from the camera through this fragment
	vec3 cameraDirWS = normalize(vertexPosWS - CAMERA_POSITION_WORLD);
	vec3 rayDir = TransformWorldToObjectNormal(WorldToLocal, cameraDirWS);
	vec3 albedo = vec3(0.0);
	vec3 normal = vec3(0.0);
	float depth = 0.0;

	if (RaymarchLoop(
		rayOrigin,
		rayDir,
		VoxelBoxSize,
		DataTexture,
		hit)) {

		float invVoxelScale = 1.0 / VoxelScale;
		vec3 localPos = (hit.position - VoxelBoxSize * 0.5) * invVoxelScale;
		vec3 worldPos = (LocalToWorld * vec4(localPos, 1.0)).xyz;

		vec4 projPos = PROJECTION_MATRIX * vec4(worldPos, 1.0);

		depth = projPos.z / projPos.w; // depth of the voxel hit, not of the box surface
		albedo = hit.color;
		normal = mat3(LocalToWorld) * hit.normal;

		// Something like this, might be transformed to another space first but you get the point:
		// VERTEX = worldPos;
		// VIEW = -normalize(worldPos);
	} else {
		discard;
	}

	ALBEDO = albedo;
	NORMAL = normal;
	DEPTH = depth;
}
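
By the way, RaymarchLoop itself isn't shown here. It's essentially a standard 3D DDA (Amanatides & Woo) over the voxel texture; a simplified sketch of the idea (my actual version also clips the ray to the box first and guards against zero components in the ray direction):

struct RaymarchHit {
	vec3 position;
	vec3 normal;
	vec3 color;
};

bool RaymarchLoop(vec3 origin, vec3 dir, vec3 boxSize, sampler3D voxels, out RaymarchHit hit) {
	ivec3 cell = ivec3(floor(origin));
	ivec3 stepDir = ivec3(sign(dir));
	vec3 invDir = 1.0 / dir; // assumes no zero components in dir
	vec3 tDelta = abs(invDir);
	vec3 tMax = (vec3(cell) + max(sign(dir), vec3(0.0)) - origin) * invDir;
	vec3 faceNormal = vec3(0.0);

	for (int i = 0; i < 256; i++) {
		// stop once the ray leaves the voxel volume
		if (any(lessThan(cell, ivec3(0))) || any(greaterThanEqual(cell, ivec3(boxSize)))) {
			return false;
		}
		vec4 c = texelFetch(voxels, cell, 0);
		if (c.a > 0.0) {
			hit.position = vec3(cell) + vec3(0.5); // voxel center, in voxel space
			hit.color = c.rgb;
			hit.normal = faceNormal; // face through which the ray entered
			return true;
		}
		// advance to the next cell along the axis with the smallest tMax
		if (tMax.x < tMax.y && tMax.x < tMax.z) {
			cell.x += stepDir.x; tMax.x += tDelta.x; faceNormal = vec3(-float(stepDir.x), 0.0, 0.0);
		} else if (tMax.y < tMax.z) {
			cell.y += stepDir.y; tMax.y += tDelta.y; faceNormal = vec3(0.0, -float(stepDir.y), 0.0);
		} else {
			cell.z += stepDir.z; tMax.z += tDelta.z; faceNormal = vec3(0.0, 0.0, -float(stepDir.z));
		}
	}
	return false;
}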

    lunatix VERTEX is a varying. It makes no sense to change it after the vertex() function has been executed; you can only affect it in the vertex() function. In fact, that's its main purpose. What exactly do you need this for?

    lunatix you should be able to make a parameter, put THAT value into VERTEX, and then manipulate that parameter externally through code.
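
    Something along these lines, off the top of my head (untested sketch; my_offset is just an example uniform name):

    shader_type spatial;

    uniform vec3 my_offset; // set externally: material.set_shader_parameter("my_offset", ...)

    void vertex() {
    	VERTEX += my_offset; // VERTEX is writable here, in vertex()
    }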

    Ah I get it now. You're trying to displace the pixels, not the vertices at all.
    Sorry I'm not knowledgeable about that, but maybe this will help you

    • xyz replied to this.

      award Hm, I think the OP actually may not be aware that shader code can have the vertex() function. But we can't be sure until we hear from them.

      Sorry that it took so long, I simply did not find the time to reply.

      I don't want to modify the vertices, I want to modify the fragment's actual 3D position, or displace the pixels like @award wrote.
      Let me explain once again what I'm basically doing:

      1. Import a 3D texture where each data point is the color of a voxel (or empty, when alpha is zero) like this one

      2. Render a box the size of the voxel model (like 20x40x10) with flipped faces and the raymarching shader applied to it (see the side note after this list)

      3. The vertex shader just transforms the vertices of the box

      4. The fragment shader will trace a ray from the camera position in the direction of the 3D pixel coordinate
        If it hits a voxel along the ray's path, that hit is the actual position, depth, color and normal of the pixel.
        If no voxel is hit, discard that fragment.

      5. Use a depth prepass for depth testing

      6. Use the voxel position instead of the vertex position from the shader to calculate shadows, lights etc.
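
      Side note on step 2: the manually flipped faces aren't strictly required; culling front faces in the shader gives the same inside-out box. A minimal sketch, assuming a plain BoxMesh:

      shader_type spatial;
      render_mode cull_front; // render only the inside faces, so the volume still draws when the camera enters the box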

      So, I've managed to get everything working except the shadows because I'm not able to tell the final shader to use my displaced position from the raymarching algorithm.
      I've extracted the final shader via RenderDoc and it looks like it should in theory be possible:

      void fragment_shader(SceneData scene_data)
      {
          uint instance_index = instance_index_interp;
          vec3 vertex = vertex_interp; 
          vec3 eye_offset = vec3(0.0);
          vec3 view = -normalize(vertex_interp);
          vec3 albedo = vec3(1.0);
          // ... more variables
          // ... source from gdshader file
          albedo = m_albedo;
          normal = m_normal;
          gl_FragDepth = m_depth;
          // and here I could do:
          vertex = worldPos;
          view = -normalize(vertex);

      But without this, the shadow will be applied to the box just like this:

      So I hope this makes it more clear now 🙂
      Also, here is a very interesting presentation of how the game Teardown does it:

      • xyz replied to this.

        lunatix I don't want to modify the vertices

        Why ask how to manipulate vertex positions then?

        Because VERTEX is being replaced with the variable vertex; it is literally named "vertex", as you can see in the extracted shader code: vec3 vertex = vertex_interp;

        • xyz replied to this.

          lunatix Then this is just some regular custom variable. It has nothing to do with Godot's VERTEX built-in.

          VERTEX is a placeholder in a gdshader and will be replaced with "vertex" when transpiling from Godot's shader language to actual GLSL.
          And vertex is a variable of type vec3, by default assigned to vertex_interp.

          So in my opinion the name is a little misleading; something like position would fit the fragment shader better.
          However, that is not important at all. My shader needs to modify this variable, but because it's defined as a constant in the gdshader language, one just can't write to it without getting a compile (transpile) error, even though it should be possible.

          I've already forked the repo and am playing around with it because, from looking at "scene_forward_clustered.glsl", it should be fairly straightforward to make it overridable in a gdshader.

          Excerpt from "scene_forward_clustered.glsl":

          void fragment_shader(in SceneData scene_data) {
          	uint instance_index = instance_index_interp;
          
          	//lay out everything, whatever is unused is optimized away anyway
          	vec3 vertex = vertex_interp;

          From "shader_types.h":

          shader_modes[RS::SHADER_SPATIAL].functions["fragment"].built_ins["VERTEX"] = constt(ShaderLanguage::TYPE_VEC3);
          shader_modes[RS::SHADER_SPATIAL].functions["fragment"].built_ins["VIEW"] = constt(ShaderLanguage::TYPE_VEC3);

          From "scene_shader_forward_clustered.cpp":

          actions.renames["VERTEX"] = "vertex";
          actions.renames["VIEW"] = "view";

          So, in short, "VERTEX" directly translated to "vertex".
          It's just a misleading variable name and should not be marked as const rather than a parameter so fragment shaders can actually manipulate the interpolated fragment position to archieve certain effects.

          And I think I've now answered my own question: I looked at the code and simply do not see any way of achieving this except forking the project and either using my customized version or creating a PR to Godot and hopefully getting it approved.

            Here is a version with "constt(...)" removed from the two lines, which shows correct shadows:

            And this one is without setting VERTEX and VIEW:

            lunatix If you aren't modifying the actual vertex (used by the vertex shader), can't you just copy 'vertex' to a second, non-const variable and do what you want with it then?

            vec3 vertex2 = vertex;

            lunatix I think you're misunderstanding how VERTEX works in Godot's shading language. Although it has the same name, it's not the same thing in the vertex() function and in the fragment() function.

            In vertex() it can be read from and written to. In fragment() you can only read it. Its initial input value in the vertex() function is the vertex position in object space. You can choose to alter it or not.

            Between the execution of your vertex() and fragment() functions, Godot will transform it from object space to camera space. It will also interpolate this value for each pixel, between the three vertices that comprise the currently rasterized triangle. It makes no sense to modify this value in fragment(). It's the result of interpolating values that the fragment shader can't even read, let alone alter.

            You also can't affect the position of a pixel. Shaders simply don't operate this way. The pixel is always fixed, and the fragment() function is executed in isolation for each fixed pixel, with no knowledge whatsoever of any other pixel.

            In pre-GLSL 1.5 terminology VERTEX is a varying; in modern terminology it's an out in the vertex shader and an in in the fragment shader. And the name is completely appropriate.
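
            A minimal gdshader illustration of the difference (the numbers are made up, just to show what compiles and what doesn't):

            void vertex() {
            	VERTEX.y += 1.0; // fine: VERTEX is read/write here, in object space
            }

            void fragment() {
            	vec3 p = VERTEX; // fine: the interpolated view-space position, read-only
            	// VERTEX = p;   // error: "Constants can not be modified"
            }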

            No, I don't misunderstand how this works or how a vertex is interpolated between the vertex and fragment shader, but it's still a good explanation.
            But I finally got what you're actually trying to tell me, and I think we are talking past each other, so let me clarify some things:

            I know that I can't alter the value of a varying sent (and interpolated) from vertex to fragment shader.
            I also know that you can't modify the position of a fragment.

            But I can interpret a specific fragment however I like, for example, manipulate its color, lighting, shadow etc.
            And that's exactly what I want to achieve: I want to tell the code that runs after my custom shader what position it should use for the shadow calculations.

            I don't want to modify the vertex varying, I'm just searching for a way to tell the shader what my raymarching algorithm calculated.

            So what I basically want is a POSITION parameter which can be read and written to, like this:

            // Code before custom shader function
            vec3 vertex = vertex_interp;
            vec3 view = -normalize(vertex);
            
            #if POSITION_USED
               vec3 position = vertex_interp;
            #endif
            
            // custom shader function
            vec3 displacedPos = raymarch(...);
            POSITION = displacedPos;
            
            // Code after custom shader:
            #if POSITION_USED
               view = -normalize(position);
            #endif
            
            // shadow calculation
            float shadow = 1.0;

            if (directional_lights.data[i].shadow_opacity > 0.001) {
                float depth_z = -position.z; // <-- use position here instead of vertex
                vec3 light_dir = directional_lights.data[i].direction;
                // this also looks fishy, as the fragment shader allows writing to NORMAL but normal_interp is used here
                vec3 base_normal_bias = normalize(normal_interp) * (1.0 - max(0.0, dot(light_dir, -normalize(normal_interp))));

            I hope I made it crystal clear now what I want to achieve and what my problem actually is.
            I'm not good at explaining things in an academic way; I'm better at practically showing what I want via code and images, so that may be why my intentions were hard to understand.

            P.S.: VERTEX is still being directly replaced with vertex, so what I'm actually writing to after transpiling is the vec3 vertex variable, not the vertex_interp varying coming from the vertex stage.

            • xyz replied to this.

              lunatix I want to tell the code being executed after my custom shader what position it should use for the shadow calculations.

              You can't tell it that. Shadow mapping doesn't work like that. It doesn't use the pixels rendered in your fragment shader. At all. Any shadows that your voxels cast on other voxels in your volume are the responsibility of your raymarcher. And any shadows they cast onto other objects can only be properly calculated if the shadow map renderer runs the raymarcher again (from the light's viewpoint) when rendering the shadow depth texture, and writes the raymarch results into it. That depth is what needs to be displaced, not the pixels' world positions in your fragment shader. Those can't affect the shadow map. The engine could not use this information for anything further down the pipeline. That's why it doesn't exist as an output from the fragment function.

              I think we're still not on the same page, but that's okay; I won't discuss this any further because I already solved it on my own with my custom fork.
              The fork just makes it possible to manipulate the "vertex" and the geometric normal in the custom shader code, and makes sure that, if they are used, some calculations are repeated afterwards to keep the multiview stuff and the view vector aligned.

              And just to prove it, here is a screenshot and code snippet on how to use it:

              void fragment() {
              	RaymarchHit hit;

              	mat4 LocalToWorld = VIEW_MATRIX * MODEL_MATRIX;
              	mat4 WorldToLocal = inverse(MODEL_MATRIX);

              	vec3 cameraPosLS = TransformWorldToObject(WorldToLocal, CAMERA_POSITION_WORLD);
              	vec3 rayOrigin = cameraPosLS * VoxelScale + VoxelBoxSize * 0.5;
              	vec3 vertexPosWS = (INV_VIEW_MATRIX * vec4(VERTEX, 1.0)).xyz;

              	vec3 cameraDirWS = normalize(vertexPosWS - CAMERA_POSITION_WORLD);
              	vec3 rayDir = TransformWorldToObjectNormal(WorldToLocal, cameraDirWS);

              	if (!RaymarchLoop(rayOrigin, rayDir, VoxelBoxSize, DataTexture, hit)) {
              		discard;
              	}

              	float invVoxelScale = 1.0 / VoxelScale;
              	vec3 localPos = (hit.position - VoxelBoxSize * 0.5) * invVoxelScale;
              	vec3 worldPos = (LocalToWorld * vec4(localPos, 1.0)).xyz;
              	vec4 projPos = PROJECTION_MATRIX * vec4(worldPos, 1.0);

              	ALBEDO = hit.color;
              	NORMAL = mat3(LocalToWorld) * hit.normal;
              	DEPTH = projPos.z / projPos.w;
              	VERTEX_OUT = worldPos; // this one is new
              	NORMAL_GEOM = normalize(hit.normal); // this one is new
              }

              So this shader is now able to fake 3D voxel geometry by just rendering an inverse box and raymarching against a 3D texture.
              It outputs a custom depth to gl_FragDepth and manipulates parameters in the fragment stage to make shadow mapping work (because shadow mapping uses the z coordinate of the vertex parameter).

              And if you still don't believe me, here is a PR in my fork which shows the modifications I had to do: https://github.com/Lunatix89/godot/pull/1/files

              • xyz replied to this.

                Note that while you can't modify the (x,y) position of a fragment, you can modify its depth value.

                out float DEPTH

                Custom depth value (0..1). If DEPTH is being written to in any shader branch, then you are responsible for setting the DEPTH for all other branches. Otherwise, the graphics API will leave them uninitialized.

                https://docs.godotengine.org/en/stable/tutorials/shaders/shader_reference/spatial_shader.html#doc-spatial-shader
                (Search for 'depth')
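
                A minimal illustration of that rule (the condition is just a stand-in for "did the raymarch hit?"):

                void fragment() {
                	if (FRAGCOORD.x > 100.0) {
                		DEPTH = 0.5; // DEPTH written in this branch...
                	} else {
                		discard; // ...so every other branch must set DEPTH or discard
                	}
                }

                Your shader already satisfies this, by the way, since its miss branch discards.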

                lunatix So this shader is now able to fake 3D voxel geometry by just rendering an inverse box and raymarching against a 3D texture.
                It outputs a custom depth to gl_FragDepth and manipulates parameters in the fragment stage to make shadow mapping work (because shadow mapping uses the z coordinate of the vertex parameter).

                I doubt you can get correct-looking shadows without rendering the volume into the shadow map. So that might be the intervention worth a fork.

                I'd like to see your solution in motion: a rotating volume, a changing directional light direction or a couple of moving omni lights, and a 3D texture that has some see-through holes.

                I'll take a closer look at this in the next few days and will post an update.
                I had the same shader in Unity and it worked pretty well; however, I always used the deferred rendering path, and forward only for transparents.

                • xyz replied to this.