I would like to cast a ray and get the color of the texture pixel at the point on the mesh where the ray hits.

I realize this can be done with viewports, but I'm hoping not to use that approach; viewports are a performance problem for us. Also, I don't need the shaded/lit pixel as it appears in the final rendering. I would like the original pixel color value from the source texture.

I expect the GPU can have some differences in exactly how the texture is rendered. A close approximation would be fine.

So I am wondering whether there is a procedure to implement this from GDScript? Essentially I need to invert the texture mapping.

Many Thanks

    Hi,
    I've also thought about this a while ago. If you cannot or will not use a viewport to capture a frame from the raycaster's viewpoint with special shaders on every object, there is no easy/performant way to achieve this right now.

    Currently a raycast only tells you which shape has been hit and at what world position. You do not get the specific triangle, and therefore you can't get the UV coordinates (which you would need to look up the color of that texel). In theory you could calculate the texel, but the steps to calculate it would be overwhelmingly slow.

    So, take this as a NO.

    But what do you mean by "I need to invert the mapping of the texture"?

    zaphara What's your final goal with this? Maybe there's an alternative way of achieving it.

    The default screen is a viewport. You can access the root viewport, get the texture, and read from it. But this will be slow and not a good idea if you need to do it often (as it locks the viewport).

    Also, to clarify the goal of the project: the user will have distance sensors on a robot that they can point at any target. The ray cast gives the range, and if I could also give the color, it would open up interesting options. But small viewports seem to slow down our frame rate quite a lot, so I have been wondering if there is a faster alternative.

    I should also mention that perhaps we are just using the Viewport incorrectly, and maybe that is the correct approach if we can make it faster. We add a small 16x2 viewport for a camera on our robots, and this alone causes a decent drop in frame rate.

    Then we access the texture to read the data with code like this:

    var im = viewport.get_texture().get_data() # copy the render target into a CPU-side Image
    im.convert(Image.FORMAT_RGB8) # normalize the pixel format
    var data = im.get_data() # raw byte buffer of the pixels

    On some machines the get_texture() access is also costly (but not on all machines) - I assume this has to do with different GPU access rules and locking up the Viewport (as mentioned above).

    But the first problem is that the mere existence of a small second Viewport seems quite costly - regardless of whether we access the data or not. If I could make a tiny viewport work well, I could use it as a single-pixel reader as well.

    This may also be relevant for performance: the distance sensor is usually hitting a static object and may stay on the same mesh and texture for a relatively long time, so perhaps I can exploit this.

      You'll need to read from a viewport texture at some point; I don't think there is any way around it. I think the best bet is making a copy of the texture (from the root viewport) when the player starts the scan. Then you can lock and read from that copy without affecting performance. The copy does have some cost, so you don't want to do it every frame, but if you do it once when the scan starts (or copy again when the laser moves a certain number of pixels), I think it should be fine.

      I don't think you have to convert the image format, though you might have to flip it. So get the root viewport, get its texture, and make an Image from it by copying it. Then use get_pixel() to read the color at the mouse position (or the screen position of the laser collision).
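
      Roughly like this in Godot 3 GDScript (screen_pos here just stands in for wherever you get the laser's screen position):

      # Copy the root viewport's render target into a CPU-side Image.
      var img = get_viewport().get_texture().get_data()
      img.flip_y() # viewport textures come back vertically flipped in GLES
      img.lock() # per-pixel access requires locking in Godot 3
      var color = img.get_pixelv(screen_pos)
      img.unlock()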

        zaphara You didn't describe your final goal. Why exactly do you need that pixel color? But if you insist on doing it like this, you can achieve it with a little bit of computational geometry. It's actually quite simple, but it involves iterating through the collider's mesh data, so for performance reasons you'd probably need to write it in C++ if you need to do it every frame.

        Maintain your convex collider's mesh data (along with the corresponding UVs), and when the ray hits a collider, iterate through all of the collider's triangles to determine which one was actually hit. Then interpolate the UVs of that triangle's vertices to get the hit point's UV position.
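
        A rough sketch of that in GDScript, assuming the hit point has already been transformed into the mesh's local space and the triangles have been pulled out as flat (de-indexed) vertex/UV arrays - the names here are only illustrative:

        # Find the triangle containing hit_local and return its interpolated UV.
        func uv_at_point(hit_local: Vector3, verts: PoolVector3Array, uvs: PoolVector2Array) -> Vector2:
            for i in range(0, verts.size(), 3):
                var a = verts[i]
                var b = verts[i + 1]
                var c = verts[i + 2]
                var v0 = b - a
                var v1 = c - a
                var v2 = hit_local - a
                var n = v0.cross(v1)
                if n.length_squared() < 0.000001:
                    continue # skip degenerate triangles
                # Skip triangles whose plane the hit point doesn't lie on.
                if abs(n.normalized().dot(v2)) > 0.001:
                    continue
                # Barycentric coordinates of the hit point in triangle abc.
                var d00 = v0.dot(v0)
                var d01 = v0.dot(v1)
                var d11 = v1.dot(v1)
                var d20 = v2.dot(v0)
                var d21 = v2.dot(v1)
                var denom = d00 * d11 - d01 * d01
                var v = (d11 * d20 - d01 * d21) / denom
                var w = (d00 * d21 - d01 * d20) / denom
                var u = 1.0 - v - w
                # Small tolerance, since the physics hit sits right on the surface.
                if u >= -0.001 and v >= -0.001 and w >= -0.001:
                    return uvs[i] * u + uvs[i + 1] * v + uvs[i + 2] * w
            return Vector2.ZERO # no containing triangle found

        The arrays can come from ArrayMesh.surface_get_arrays(); the returned UV then gets scaled by the texture size and fed to Image.get_pixel().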

          cybereality Thanks - I think, as you said, only grabbing the texture when needed can be the winning approach. The ray hits a new collider, and only then do I obtain the target texture and do any other preparation I can on that collider. This optimization might be why the manual calculation could end up faster than keeping a second viewport lined up on the target.

          To clarify, the ray sensors will be pointing at things that might not be visible to the user in the main viewport at that time. So I either need a second viewport with a camera lined up on the ray, or a manual calculation based on the ray result. The robot might have several of these sensors all active at the same time.

          xyz See above I have described the goals briefly. But more specifically - a line might be painted on the floor. The user's robot may have a few color sensors and use this to track the line to guide the robot. Or they might read a wall color and use that as a clue for an upcoming obstacle.

          I said the original texture value would be best, but I can make it work with the Viewport (lit) value as well. We give the world some lighting for better visuals, but the shadows and variation can create difficulties for the students, so the original raw texture would be fine.

          I'll try the calculation and see how it goes. I'll definitely move this to C++ later if it is useful and works well - maybe I'll find I need native methods anyway for the mesh data. I am hoping to prototype in GDScript first. We may migrate to Godot 4 soon as well, which may help us.

          Thank you for all the responses. I feel confident now that this is at least worth trying. Exactly what the performance cost will be is unclear until I measure it.

            You can render another full camera at quarter resolution (half width, half height) and it should perform okay. I've done up to 3 viewports at once (full resolution) but above that is too much.

            zaphara One thing I did that improves performance a lot was to index the faces of all meshes in an in-memory custom hash keyed by position in space. When you need to get the pixel, you know where the collision happened in space, so you can use that coordinate to look up the nearby faces in world space from the index, then just run the UV calculation to get the pixel you want. It's pretty fast; you don't even need C++.
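
            The index can look roughly like this (a sketch with illustrative names; faces are bucketed by the grid cell of their centroid, so large faces spanning several cells would need to be inserted into each overlapping cell):

            var cell_size = 1.0
            var face_index = {} # Vector3 cell key -> Array of face ids

            func _cell_of(p: Vector3) -> Vector3:
                return Vector3(floor(p.x / cell_size), floor(p.y / cell_size), floor(p.z / cell_size))

            func index_face(face_id: int, centroid: Vector3) -> void:
                var key = _cell_of(centroid)
                if not face_index.has(key):
                    face_index[key] = []
                face_index[key].append(face_id)

            func faces_near(world_pos: Vector3) -> Array:
                # Only the faces bucketed in the hit cell need the exact triangle test.
                return face_index.get(_cell_of(world_pos), [])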

              IceBergyn Just an observation: that technique can take a lot of processing time to create the index, depending on your game. But you can build it at the start of the game, process it in a thread while the player is in the menu, or build it while the scene is loading and show a progress bar or something.

              zaphara See above I have described the goals briefly. But more specifically - a line might be painted on the floor. The user's robot may have a few color sensors and use this to track the line to guide the robot. Or they might read a wall color and use that as a clue for an upcoming obstacle.

              Yeah but why do you need textures/colors for that? Can't you just use a distinct collider for every such situation?

                xyz the color sensor is my end goal (not a means) - I want my students to use it as they would with real robots. They are not learning about Godot (they won't know what Godot is).

                  Working on this I realized our artists have switched pretty much all of our meshes to multiple surfaces with different albedo colors instead of using textures (makes it much easier for me).

                  So using the forum post above - I was able to get it all working fairly easily - I can scan the surfaces, find which one has a face matching the collision point, and read off the color.

                  For those who may be interested: I was losing about 200-300 µs on the whole calculation, which requires the MeshDataTool.

                  But after optimizing so that I cache the last mesh, face, and surface (which usually don't change), this was reduced to about 50 µs, which is quite acceptable. I can have maybe 10-20 of these sensors running at 30 Hz without much impact on performance.
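
                  The cached lookup is roughly this shape (a sketch with illustrative names; it assumes SpatialMaterial surfaces and a hypothetical _surface_contains_point() helper that does the face test via MeshDataTool):

                  var _last_mesh = null
                  var _last_surface = -1

                  func color_at_hit(mesh: ArrayMesh, hit_local: Vector3) -> Color:
                      # Fast path: the sensor usually stays on the same mesh and surface.
                      if mesh == _last_mesh and _last_surface >= 0:
                          if _surface_contains_point(mesh, _last_surface, hit_local):
                              return mesh.surface_get_material(_last_surface).albedo_color
                      # Slow path: scan every surface for a face matching the hit point.
                      for s in range(mesh.get_surface_count()):
                          if _surface_contains_point(mesh, s, hit_local):
                              _last_mesh = mesh
                              _last_surface = s
                              return mesh.surface_get_material(s).albedo_color
                      return Color.black # no surface matched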

                  We often use simple primitives with solid color - obviously those are much faster and the mesh size will be an issue as well.

                  So for now I am running well - in the next few weeks I will solve two additional problems:

                  (1) instead of requiring an exact point-to-face match, I'll search for the nearest best-fit face (see the sketch after this list). This will let me get a decent color approximation when I use a simple collision shape such as a cube but the visual mesh has small differences. Eventually I think I can solve this exactly by running a second ray trace against the exact mesh once I detect a hit on the approximate collision. I keep the simple collision because the objects are part of rigid bodies.

                  (2) Make it work for a textured mesh (the original goal). If I get this working I'll post some performance numbers here.
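
                  For (1), a cheap first cut of the nearest-face search could compare face centroids instead of computing true point-to-triangle distances (a sketch; the helper name is illustrative):

                  func nearest_face(mdt: MeshDataTool, hit_local: Vector3) -> int:
                      # Approximate the nearest face by centroid distance to the hit point.
                      var best_face = -1
                      var best_dist = INF
                      for f in range(mdt.get_face_count()):
                          var a = mdt.get_vertex(mdt.get_face_vertex(f, 0))
                          var b = mdt.get_vertex(mdt.get_face_vertex(f, 1))
                          var c = mdt.get_vertex(mdt.get_face_vertex(f, 2))
                          var d = ((a + b + c) / 3.0).distance_squared_to(hit_local)
                          if d < best_dist:
                              best_dist = d
                              best_face = f
                      return best_face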

                    zaphara Right. Then writing a C++ extension that returns UV coordinates from a point on the mesh would probably be the way to go.

                    If you happen to implement this, it'd be nice to release it to the public as a plugin.