• Godot Help › Shaders
  • Help with compute shaders. Using structs in a buffer that contain arrays.

xyz Yeah good point. I was expecting it to not really be fast due to iterating over each float again. I will look into the PackedVector3Array. Thanks for all the help!


    HuskyDreaming I just noticed you actually need Vector3 arrays, so best to immediately convert to PackedVector3Array (edited my previous post)

    HuskyDreaming Here's the function. Didn't test it but it should work:

    func decode_to_vector3_array(buffer: PackedByteArray) -> PackedVector3Array:
    	var size = buffer.size() / 12 # number of Vector3s (3 floats × 4 bytes each)
    	# bytes_to_var() expects a serialized Variant: a 4-byte type id
    	# (36 = PackedVector3Array), a 4-byte element count, then the raw data
    	var header = PackedInt32Array([36, size]).to_byte_array()
    	return bytes_to_var(header + buffer)

      xyz You're saying I just need to change the decoding part? Can I keep the buffer inside the compute shader the same as it is? And by passing in 8 bytes, do you mean updating the storage buffer size to be 4 * 3 * points_array_size * 8?


        HuskyDreaming This has nothing to do with the shader or the buffer. It's just a fast way to convert packed byte data (the shader's output in this case) into GDScript vectors. The function doesn't modify the original buffer; it only reads from it. Prepending those 8 bytes creates a copy that's passed to bytes_to_var(). So yeah, everything else stays the same.
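To illustrate what that prepended header accomplishes: bytes_to_var() sees a 4-byte type id (36 = PackedVector3Array), a 4-byte element count, and then the untouched float data. Here's a rough Python analogue of the decoding step (the `decode_vec3s` helper is hypothetical, not a Godot API) that unpacks the same tightly packed 12-byte-per-vector layout:

```python
import struct

def decode_vec3s(buffer: bytes) -> list[tuple[float, float, float]]:
    """Read a tightly packed buffer of little-endian float32 triples."""
    count = len(buffer) // 12  # each vector is 3 x 4-byte floats
    floats = struct.unpack(f"<{count * 3}f", buffer)
    return [tuple(floats[i:i + 3]) for i in range(0, count * 3, 3)]

# Two vectors packed back to back, no padding:
data = struct.pack("<6f", 1.0, 2.0, 3.0, 4.0, 5.0, 6.0)
print(decode_vec3s(data))  # → [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)]
```

In Godot, the GDScript function above does the same job in one bytes_to_var() call instead of looping over floats.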

          xyz That makes sense. Thank you, the decode function above works great. I am doing more testing at the moment.

          xyz I seem to have run into an issue; for some reason the data I am getting back is incorrect.

          This is the data I expect:

          [(-0.032415, -0.840178, -0.541341), (-0.034749, -0.813649, -0.580316), (-0.037005, -0.785307, -0.617999), (-0.039179, -0.755216, -0.654305), (-0.041266, -0.723441, -0.689152), (-0.04326, -0.690054, -0.722464), (-0.045159, -0.65513, -0.754166), (-0.046956, -0.618745, -0.784187), (-0.048649, -0.580982, -0.812461)]

          This is the data I am receiving:
          [(-0.032414, -0.84018, -0.541338), (0, -0.032414, -0.84018), (-0.541338, 0, -0.032414), (-0.84018, -0.541338, 0), (-0.032414, -0.84018, -0.541338), (0, -0.032414, -0.84018), (-0.541338, 0, -0.032414), (-0.84018, -0.541338, 0), (-0.032414, -0.84018, -0.541338)]

          Notice how the 0 shifts over every time? o.O

          Edit: both arrays are 9 in size, just so you know

            HuskyDreaming The decoding is correct, I can confirm this. However, I am wondering if it's something to do with how the shader is written.

            Well, it looks like an incorrect calculation in the shader. How did you calculate the expected data?

            2 months later

            layout(set = 0, binding = 5, std430)

            The ‘std430’ layout aligns a vec3 to 16 bytes, the same as a vec4, so each array element carries four bytes of padding. That's why the data you actually received shifts a 0 for every vec3: the padding is inserted automatically and gets read back as part of the next vector.
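The shifting zero can be reproduced outside Godot. This Python sketch (illustrative only) writes vectors with the 16-byte std430 array stride, then reads them back assuming a tight 12-byte stride, which is exactly the mismatch in the thread:

```python
import struct

# Shader side: each vec3 is written with a padding float (16-byte stride).
vecs = [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0), (7.0, 8.0, 9.0)]
padded = b"".join(struct.pack("<4f", *v, 0.0) for v in vecs)

# Decoder side: buffer is (wrongly) read as tightly packed 12-byte vec3s.
tight = [struct.unpack_from("<3f", padded, off) for off in range(0, 36, 12)]
print(tight)  # → [(1.0, 2.0, 3.0), (0.0, 4.0, 5.0), (6.0, 0.0, 7.0)]
```

The padding 0 drifts one slot to the left with each vector read, matching the output posted above.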

            For correct data, you should use vec4 instead of vec3 on both the GDScript side and the compute shader side.
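On the decoding side, one way the vec4 fix plays out is to read the buffer with its real 16-byte stride and discard the fourth component. A Python sketch (the `decode_vec4_buffer` helper is hypothetical, shown only to demonstrate the stride):

```python
import struct

def decode_vec4_buffer(buffer: bytes) -> list[tuple[float, float, float]]:
    """Read 16-byte-stride vec4 data, keeping only the xyz components."""
    count = len(buffer) // 16
    return [struct.unpack_from("<3f", buffer, i * 16) for i in range(count)]

# Two vec4s as the shader would write them (w component unused):
padded = struct.pack("<8f", 1.0, 2.0, 3.0, 0.0, 4.0, 5.0, 6.0, 0.0)
print(decode_vec4_buffer(padded))  # → [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)]
```

In Godot terms this would mean sizing the storage buffer for 4 floats per point and stepping 16 bytes per element when converting back to Vector3s.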
