zaphara You didn't describe your final goal. Why exactly do you need that pixel color? But if you insist on doing it like this, you can achieve it with a little bit of computational geometry. It's actually quite simple, but it involves iterating through the collider mesh data, so for performance reasons you'd probably need to write it in C++ if you need to do it every frame.

Maintain your convex collider mesh data (along with the corresponding UVs) and, when the ray hits a collider, iterate through all of the collider's triangles to determine which one was actually hit. Then interpolate the UVs of that triangle's vertices to get the hit point's UV position.
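
To make the interpolation step concrete, here is a minimal GDScript (Godot 3.x) sketch. Geometry.ray_intersects_triangle() is a built-in that does the per-triangle hit test; the rest is standard barycentric math. Treat it as an illustration, not a drop-in solution:

    # Returns the interpolated UV at the ray/triangle hit, or null on a miss.
    func triangle_uv_at(origin, dir, a, b, c, uv_a, uv_b, uv_c):
        var hit = Geometry.ray_intersects_triangle(origin, dir, a, b, c)
        if hit == null:
            return null
        # Barycentric coordinates of the hit point within triangle (a, b, c).
        var v0 = b - a
        var v1 = c - a
        var v2 = hit - a
        var d00 = v0.dot(v0)
        var d01 = v0.dot(v1)
        var d11 = v1.dot(v1)
        var d20 = v2.dot(v0)
        var d21 = v2.dot(v1)
        var denom = d00 * d11 - d01 * d01
        var v = (d11 * d20 - d01 * d21) / denom
        var w = (d00 * d21 - d01 * d20) / denom
        return uv_a * (1.0 - v - w) + uv_b * v + uv_c * w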

    cybereality Thanks - I think, as you said, only fetching the texture when needed could be the winning approach. The ray hits a new collider, and then I obtain the target texture and do any other preparation I can on that collider. This optimization may be why calculating it manually ends up faster than using a second viewport lined up on the target.

    To clarify, the ray sensors will be pointing at things that might not be visible to the user in the main viewport at that time. So I either need a second viewport with a camera lined up on the ray, or a manual calculation based on the ray result. The robot might have several of these sensors all active at the same time.

    xyz See above - I have described the goals briefly. But more specifically: a line might be painted on the floor. The user's robot may have a few color sensors and use them to track the line and guide the robot. Or it might read a wall color and use that as a clue for an upcoming obstacle.

    I said the original texture value is best, but I can make it work with the viewport value (lit) as well. We give the world some lighting for better visuals, but the shadows and variation can create difficulties for the students, so the original raw texture would be fine.

    I'll try the calculation and see how it goes. I'll definitely move this to C++ later if it is useful and works well - maybe I'll find I need native methods anyway for the mesh data. I am hoping to prototype in GDScript first. We may also migrate to Godot 4 soon, which may help us.

    Thank you for all the responses. I feel confident now that this is at least worth trying. Exactly what the performance cost will be is unclear until I measure it.

      You can render another full camera at quarter resolution (half width, half height) and it should perform okay. I've done up to 3 viewports at once (full resolution) but above that is too much.
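
      For reference, a rough Godot 3.x sketch of that setup; the node layout and read-back are just one way to do it, and a Camera added to a Viewport becomes its active camera automatically:

          var vp = Viewport.new()

          func _ready():
              vp.size = OS.window_size / 2  # half width, half height = a quarter of the pixels
              vp.render_target_update_mode = Viewport.UPDATE_ALWAYS
              vp.add_child(Camera.new())
              add_child(vp)

          func read_back():
              # Pull the rendered result back to the CPU (it comes back flipped).
              var img = vp.get_texture().get_data()
              img.flip_y()
              return img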

      zaphara One thing I did that improved performance a lot was to index the faces of all meshes in an in-memory spatial hash keyed by position in space. When you need the pixel you already know where the collision happened, so you can use that coordinate to look up the nearby faces in world space from the index, and then run the UV calculation to get the pixel you want pretty fast. You don't even need C++.
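
      A minimal sketch of that spatial hash in GDScript; the cell size and the face record are assumptions to be tuned for your scene:

          const CELL_SIZE = 1.0   # cell edge length; tune to your scene's scale
          var face_index = {}     # Dictionary: Vector3 cell -> Array of face ids

          func _cell(p):
              return Vector3(floor(p.x / CELL_SIZE), floor(p.y / CELL_SIZE), floor(p.z / CELL_SIZE))

          func index_face(face_id, a, b, c):
              # Register each face under the cell of its centroid. Large faces
              # would need every cell their bounding box touches.
              var cell = _cell((a + b + c) / 3.0)
              if not face_index.has(cell):
                  face_index[cell] = []
              face_index[cell].append(face_id)

          func faces_near(hit_position):
              # Candidate faces around the collision point; also check the
              # neighboring cells if a hit can land near a cell boundary.
              return face_index.get(_cell(hit_position), [])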

        IceBergyn Just an observation: depending on your game, that technique can take a lot of processing time to build the index. But you can do it at the start of the game, run it on a thread while the player is in the menu, or build it while the scene is loading and show a progress bar or something.
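
        For example, a hedged Godot 3.x sketch of the threaded variant (the index-building body is left out):

            var index_thread = Thread.new()

            func _ready():
                # Build the index in the background, e.g. while the player is in the menu.
                index_thread.start(self, "_build_face_index")

            func _build_face_index(_userdata):
                pass  # walk every mesh here and fill the spatial hash

            func _exit_tree():
                index_thread.wait_to_finish()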

        zaphara See above - I have described the goals briefly. But more specifically: a line might be painted on the floor. The user's robot may have a few color sensors and use them to track the line and guide the robot. Or it might read a wall color and use that as a clue for an upcoming obstacle.

        Yeah but why do you need textures/colors for that? Can't you just use a distinct collider for every such situation?

          xyz The color sensor is my end goal (not a means) - I want my students to use it as they would with real robots. They are not learning about Godot (they won't know what Godot is).

            Working on this, I realized our artists have switched pretty much all of our meshes to multiple surfaces with different albedo colors instead of using textures (which makes it much easier for me).

            So using the forum post above, I was able to get it all working fairly easily - I can scan the surfaces, find which one has a face matching the collision point, and read off the color.

            For those who may be interested: I was losing about 200-300 µs on the whole calculation, which requires the MeshDataTool.

            But after optimizing so that I cache the last mesh, surface, and face (which usually don't change between readings), this was reduced to about 50 µs, which is quite acceptable. I can have maybe 10-20 of these sensors running at 30 Hz without much impact on performance.
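
            For the curious, a sketch of roughly what that lookup does (the function names are mine, and it assumes the mesh is an ArrayMesh with SpatialMaterial surfaces):

                var cached = null  # last matching {mesh, mdt, face}; a sensor rarely changes face

                func sensor_hit_color(mi, hit_global):
                    var p = mi.global_transform.affine_inverse().xform(hit_global)  # to local space
                    # Fast path: reuse the last surface/face if the point still lies on it.
                    if cached != null and cached.mesh == mi.mesh and face_contains(cached.mdt, cached.face, p):
                        return surface_albedo(cached.mdt)
                    for s in range(mi.mesh.get_surface_count()):
                        var mdt = MeshDataTool.new()
                        if mdt.create_from_surface(mi.mesh, s) != OK:
                            continue
                        for f in range(mdt.get_face_count()):
                            if face_contains(mdt, f, p):
                                cached = {mesh = mi.mesh, mdt = mdt, face = f}
                                return surface_albedo(mdt)
                    return Color.black

                func face_contains(mdt, f, p):
                    var a = mdt.get_vertex(mdt.get_face_vertex(f, 0))
                    var b = mdt.get_vertex(mdt.get_face_vertex(f, 1))
                    var c = mdt.get_vertex(mdt.get_face_vertex(f, 2))
                    var plane = Plane(a, b, c)
                    if abs(plane.distance_to(p)) > 0.01:  # not on this face's plane
                        return false
                    var off = plane.normal * 0.1
                    return Geometry.segment_intersects_triangle(p + off, p - off, a, b, c) != null

                func surface_albedo(mdt):
                    var mat = mdt.get_material()
                    return mat.albedo_color if mat is SpatialMaterial else Color.black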

            We often use simple primitives with a solid color - obviously those are much faster, and mesh size will be an issue as well.

            So for now I am running well - in the next few weeks I will solve two additional problems:

            (1) Instead of requiring an exact point-to-face match, I'll search for the nearest best-fit face (see the sketch after this list). This will let me get a decent color approximation when I use a simple collision shape such as a cube while the visual mesh has small differences. Eventually I think I can solve this exactly by running a second ray trace against the exact mesh once I detect a hit on the approximate collision. I keep the collision simple because the objects are part of rigid bodies.

            (2) Make it work for a textured mesh (the original goal). If I get this working I'll post some performance numbers here.
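
            For (1), the nearest-fit search could start as simple as a centroid-distance scan - my rough approximation of the idea, not a measured solution:

                func nearest_face(mdt, p):
                    # Centroid distance is a cheap stand-in for true point-to-triangle distance.
                    var best_face = -1
                    var best_dist = INF
                    for f in range(mdt.get_face_count()):
                        var a = mdt.get_vertex(mdt.get_face_vertex(f, 0))
                        var b = mdt.get_vertex(mdt.get_face_vertex(f, 1))
                        var c = mdt.get_vertex(mdt.get_face_vertex(f, 2))
                        var d = p.distance_to((a + b + c) / 3.0)
                        if d < best_dist:
                            best_dist = d
                            best_face = f
                    return best_face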

              zaphara
              Right. Then writing a C++ extension that returns UV coordinates from a point on the mesh would probably be the way to go.

              If you happen to implement this, it'd be nice to release it to the public as a plugin.

              zaphara Working on this, I realized our artists have switched pretty much all of our meshes to multiple surfaces with different albedo colors instead of using textures (which makes it much easier for me).

              So using the forum post above, I was able to get it all working fairly easily - I can scan the surfaces, find which one has a face matching the collision point, and read off the color.

              In that case it may be better to copy mesh data into colliders and let the engine do the rest.

                xyz I call create_trimesh_collision() to get the corresponding collision for my visual mesh.

                When I call intersect_ray I get result.shape which, together with result.collider, tells me which MeshInstance I hit. But then I have to use the MeshDataTool to figure out the surface and face.
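
                For context, this is the query in question (Godot 3.x); sensor_origin and sensor_dir are placeholder values:

                    var sensor_origin = Vector3.ZERO    # placeholder ray start
                    var sensor_dir = Vector3.FORWARD    # placeholder ray direction

                    func _physics_process(_delta):
                        var space = get_world().direct_space_state
                        var result = space.intersect_ray(sensor_origin, sensor_origin + sensor_dir * 100.0)
                        if result:
                            var hit_pos = result.position   # global-space hit point
                            var body = result.collider      # the PhysicsBody that was hit
                            var shape_idx = result.shape    # index of the shape within that body
                            # With create_trimesh_collision() the StaticBody is a child
                            # of the MeshInstance, so the mesh is on body.get_parent().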

                Is there any way to directly get the surface index from the intersect_ray call? This would be very useful.
                Also, is there any way to directly get the face index?

                I have assumed this is not available directly.

                Many thanks for your help.

                  zaphara You can make a separate collider for each surface. Surface identification will then be implicit. But I don't think the engine can get you the triangle.

                    xyz Thanks - this could be an interesting approach. Would definitely make the calculation trivial. I'll need to look into exactly how I would separate the surfaces.

                      zaphara I'll need to look into exactly how I would separate the surfaces.

                      Query the mesh using Mesh::surface_get_arrays() and use the returned data to build convex mesh colliders.
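
                      Something along these lines (a sketch, assuming the mesh is an ArrayMesh and that intersect_ray's shape index follows the child order of the collision shapes):

                          func add_surface_colliders(mi):
                              var body = StaticBody.new()
                              mi.add_child(body)
                              for s in range(mi.mesh.get_surface_count()):
                                  var arrays = mi.mesh.surface_get_arrays(s)
                                  var shape = ConvexPolygonShape.new()
                                  # Note: this is the convex hull of surface s's vertices.
                                  shape.points = arrays[ArrayMesh.ARRAY_VERTEX]
                                  var cs = CollisionShape.new()
                                  cs.shape = shape
                                  body.add_child(cs)
                              # result.shape from intersect_ray then maps directly to the surface index.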

                        xyz Thanks - performance is adequate for the moment, so I will probably improve this in a few weeks. But these are all great suggestions.