Hey y'all,
I wanted to experiment with the new barycentric coordinate function coming in 4.2, but I seem to be getting odd results with meshes imported from Blender. In a new project I'm using the code from the example in pull request #71233, with a small modification that lets me control the traveling object with the arrow keys instead of an AnimationPlayer. This appears to work correctly with the track mesh provided in the example:

To double-check my interpretation of the code, I made the same modification to the example project, and it also works.
However, upon trying a different track mesh imported from Blender (3.4.0), I get erratic behavior:

Leading up the ramp, the traveling object begins to jitter and doesn't align with the surface. When traveling to the left, it slowly begins to angle to the right even though the track mesh is flat...

It seems the normal data is bad? I tried exporting the mesh in different file formats (.glTF 2.0, .escn, and .obj), but the behavior was the same or worse regardless. I also tried triangulating the mesh in Blender with preserved normals, and made sure smooth shading was enabled. To make things more confusing, when the barycentric function is disabled, so we're using just the face normal of the track mesh, we do get the correct direction and the traveling object aligns properly (though without the smooth interpolation, of course):

Granted, I'm fairly new to Godot (coming from an Unreal background) and am likely missing an import/export setting or not preparing the mesh properly. Any help or suggestions on how to fix this would be deeply appreciated, thanks!

Still haven't managed to make any headway on this. I've reduced the geometry of the new track mesh down to something similar to the track mesh in the example, and I even imported the example's mesh into Blender to compare the normals; everything seems the same.
Example's:

My Mesh:

With the reduced geometry, the erroneous behavior is even more apparent:

And again, when disabling the barycentric function so we're only using the face normals, the traveling object correctly orients itself with the track.

Because everything works with the example's mesh, I'm not inclined to believe the issue is with the code itself. Nevertheless, here's what it looks like:
Traveling Object:

	extends CharacterBody3D

	var target_velocity = Vector3.ZERO
	var speed = 4

	@export_category("Settings")
	@export var use_bary_coords = false

	@onready var ray_cast: RayCast3D = $RayCast3D

	func _ready():
		print("READY")

	func _physics_process(_delta: float) -> void:
		if ray_cast.is_colliding():
			var other: CollisionObject3D = ray_cast.get_collider()
			# Snap to just above the surface and align to the collision (face) normal.
			position.y = ray_cast.get_collision_point().y + 0.1
			align_up_direction(ray_cast.get_collision_normal())

			if other.is_in_group("mesh_colliders") and use_bary_coords:
				# Interpolate the three vertex normals with the barycentric weights
				# of the collision point, then realign to the smoothed normal.
				var vertices: Array[Vector3] = other.get_vertex_positions_at_face_index(ray_cast.get_collision_face_index())
				var vertex_normals: Array[Vector3] = other.get_vertex_normals_at_face_index(ray_cast.get_collision_face_index())
				var bary_coords: Vector3 = Geometry3D.get_triangle_barycentric_coords(ray_cast.get_collision_point(), vertices[0], vertices[1], vertices[2])
				var up_normal: Vector3 = (vertex_normals[0] * bary_coords.x) + (vertex_normals[1] * bary_coords.y) + (vertex_normals[2] * bary_coords.z)
				up_normal = up_normal.normalized()
				align_up_direction(up_normal)
				print(up_normal)

		target_velocity.z = int(Input.is_action_pressed("ui_left")) - int(Input.is_action_pressed("ui_right"))
		velocity = target_velocity * speed
		move_and_slide()

	func align_up_direction(up_normal: Vector3) -> void:
		var new_basis: Basis = transform.basis
		new_basis.y = up_normal
		new_basis.x = -basis.z.cross(basis.y)
		new_basis = new_basis.orthonormalized()
		basis = new_basis

Track Object:

	extends StaticBody3D
	class_name TRACKOBJ_testoval

	@onready var mesh: Mesh = $basic_Flat_003.mesh
	var mesh_data: MeshDataTool

	func _ready() -> void:
		# Build a MeshDataTool from the first surface so faces can be queried by index.
		mesh_data = MeshDataTool.new()
		mesh_data.create_from_surface(mesh, 0)

	func get_vertex_normals_at_face_index(index: int) -> Array[Vector3]:
		var normals: Array[Vector3] = []
		for i in range(3):
			normals.append(mesh_data.get_vertex_normal(mesh_data.get_face_vertex(index, i)))
		return normals

	func get_vertex_positions_at_face_index(index: int) -> Array[Vector3]:
		var vertices: Array[Vector3] = []
		for i in range(3):
			vertices.append(mesh_data.get_vertex(mesh_data.get_face_vertex(index, i)))
		return vertices

Maybe it's possible that the vertices are incorrectly ordered in the new track mesh? Not sure how to check that other than drawing debug geometry based on the normals and vertices arrays (rough sketch below). Honestly I'm kinda spitballing at this point, so if anyone else wants to take a crack at it, the project file is attached below.
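For reference, here's roughly what I mean by drawing debug geometry, in case anyone tries it (untested sketch; since the queried positions are in mesh space, the debug node would need to match the mesh node's transform):

	# Untested sketch: draw a short line along each vertex normal of the hit face.
	var debug_mesh := ImmediateMesh.new()

	func _ready() -> void:
		var mi := MeshInstance3D.new()
		mi.mesh = debug_mesh
		add_child(mi)

	func draw_face_normals(vertices: Array[Vector3], normals: Array[Vector3]) -> void:
		debug_mesh.clear_surfaces()
		debug_mesh.surface_begin(Mesh.PRIMITIVE_LINES)
		for i in 3:
			debug_mesh.surface_add_vertex(vertices[i])
			debug_mesh.surface_add_vertex(vertices[i] + normals[i] * 0.25)
		debug_mesh.surface_end()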


    Scylla-Leeezard Harden the side edges in Blender. The normals on the mesh look smoothed even across 90 deg edges, so the three vertex normals of each triangle will differ greatly, and the interpolation from barycentric weights will cause the hit normal to fluctuate a lot as you traverse each triangle.

      xyz My apologies, I'm not sure I understand. By "harden", are you referring to the setting in the bevel modifier? I tried converting the track mesh into a plane by removing the side geometry, and then setting the normals from the faces, so now they should be perfectly perpendicular to the mesh:

      This is the result:

      Side note: exporting the mesh with flat shading causes the traveling object to behave as if the barycentric function is disabled - which is expected, since with flat shading all three vertex normals equal the face normal, so the interpolation just returns it. So it would appear the issue lies somewhere in how the normals are being smoothed.

        2 months later

        In Blender, right-click the object and select Shade Smooth. This makes the vertex normals average among their neighbors instead of just pointing in the same direction as the face normal.

        In your code, you need to adapt the script to set the arrow's Node3D.transform.basis to align with the new normal, which involves taking cross products.

        If you don't understand the math, you can try asking ChatGPT to explain it 🙂

        In fact, you can make an F-Zero-type player controller with just face normals, without barycentric vertex normals. Start by getting that working; once you have it figured out, you can throw in the barycentrically calculated normal - the average of the three vertex normals weighted by your position on the triangle - which will only smooth out an already functioning wall-riding player controller.

        Hope this helps.

        3 months later

        @Scylla-Leeezard - Were you able to resolve this issue? May I ask that you share the solution with us?

        I'm experiencing the same problem, despite hardening the side edges and smoothing the shading in Blender. The moment my character comes in contact with my track and starts using barycentric coordinates to determine gravity, I get extremely erratic results, none of which seem to correlate with the expected normal vector.

        3 months later

        Aye, back to share a solution to this. Turns out the barycentric coordinate function doesn't factor any transformations into its results. Meaning if you translate, rotate, or scale the mesh you're raycasting against, the vertices become "desynced": we and the raycast see the modified mesh, while the bary function believes it's unmodified, returning what appears to us as corrupted data.

        So the fix is to zero out the mesh's transform. Here's it working:

        If you need your mesh to be rotated, translated, etc., then you're gonna need to do that in Blender, not in Godot.
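        For anyone else hitting this, you can sanity-check it in the track's _ready() (node name taken from my earlier snippet; just a quick sketch):

            # The bary results only match the raycast if the mesh node isn't transformed.
            assert($basic_Flat_003.transform.is_equal_approx(Transform3D.IDENTITY))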

        Also, don't forget to shade smooth the mesh. Otherwise bary will return the equivalent of the face normal and you'll get snapping.
        'Ight, I'm off to port my F-Zero clone from UE5 to Godot lmao

          Scylla-Leeezard If you need your mesh to be rotated, translated, etc., then you're gonna need to do that in Blender, not in Godot.

          Or just transform the result back from the mesh's local space to global space.

            Scylla-Leeezard Thanks for the reply! Better late than never.

            That would explain a lot, if it ignores mesh transformations. I take it you could grab the scale/rotation from the node and apply those to the function's result to get the "correct" gravity vector? (Unless that's what xyz is already referring to. I may be too green to know the difference.)


              chetbeigemeister Geometry3D.get_triangle_barycentric_coords() is a generalized function that doesn't know anything about transformations. It just returns barycentric interpolation coords for a given point in a triangle. It's the responsibility of the caller to put all arguments into the same transformation space; otherwise the results won't be correct if the mesh instance has any non-identity transform.

              The original code made the mistake of taking the triangle coordinates in mesh/object space while keeping the interpolated point position (gotten from raycasting) in global space.

              If you need the normal in global space you can either:
              1) transform the triangle vertices from mesh space to global space prior to passing them to the bary function, using the mesh instance's global_transform, and then transform the vertex normals from mesh space to global space prior to interpolating them to get the final normal;
              Or
              2) transform the raycast point from global space to mesh space, get the barycentric coordinates in mesh space, and interpolate the normal in mesh space as well. Then transform the final normal from mesh space to global space, again using the mesh instance's global_transform.
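              A rough sketch of option 2 in GDScript (mesh_node is a placeholder for whatever MeshInstance3D the MeshDataTool data came from; vertices and vertex_normals are the mesh-space arrays from the helper functions above):

                  # Bring the raycast point into mesh space, interpolate there,
                  # then take the resulting normal back to global space.
                  var xform: Transform3D = mesh_node.global_transform
                  var local_point: Vector3 = xform.affine_inverse() * ray_cast.get_collision_point()
                  var bary: Vector3 = Geometry3D.get_triangle_barycentric_coords(local_point, vertices[0], vertices[1], vertices[2])
                  var local_normal: Vector3 = (vertex_normals[0] * bary.x + vertex_normals[1] * bary.y + vertex_normals[2] * bary.z).normalized()
                  # Only the basis for the normal; renormalize in case of scaling.
                  var up_normal: Vector3 = (xform.basis * local_normal).normalized()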

                xyz Assuming the result is for calculating the direction of gravity (meaning it's normalized, so position/scale is irrelevant), would it be adequate to simply take the Euler rotation of the mesh to transform the result from get_triangle_barycentric_coords()?

                Apologies if this is just a rehash of what you're already saying. Some of this is going over my head, so I'm trying to dumb it down for myself.


                  chetbeigemeister would it be adequate to simply take the Euler rotation of the mesh to transform the result from get_triangle_barycentric_coords()?

                  No, you need the whole transformation matrix. But this is trivial: you just multiply the vertex by the mesh node's global_transform, or by its inverse to transform back.
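                  In GDScript terms (mesh_node again being a placeholder for the mesh instance):

                      var global_v: Vector3 = mesh_node.global_transform * local_v  # mesh space -> global
                      var local_p: Vector3 = mesh_node.global_transform.affine_inverse() * global_p  # global -> mesh space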

                  4 months later

                  xyz Hello! I am so sorry to necropost; however, you are the only person I've been able to find giving good advice on this topic.

                  I'm getting similar results to OP's (mainly post 4) using mostly similar code. I've experimented with your answer, but I cannot seem to figure out where exactly you would convert between local and global spaces. I've tried to think about it logically, but I can't seem to get it right.

                  1. Could one just multiply the vertices in the for loop by global_transform and leave it at that? For example: var vertices: Array[Vector3] = other.get_vertex_positions_at_face_index(ray_cast.get_collision_face_index()) * global_transform

                  2. When you say "transform vertex normals from mesh space to global space prior to interpolating them to get the final normal," are you saying you could do the same thing as with the vertices array (multiply the normals by global_transform during iteration)? I think what I'm not understanding is the "prior to interpolation" part - as in, which step in the process that would be. Wouldn't that be when OP calls align_up_direction and passes it the normal?


                    paftdunk Does it work as expected when the mesh has identity transforms (i.e. zero translation and rotation and (1,1,1) scaling)?

                      xyz Yes it does!

                      To give you an idea of what is working, here is the core code - adapted from OP's code, but with my global-space changes.

                      You can see I multiply each vertex by GlobalTransform, and I also multiply it with upNormal when I pass it to SetUpDirection().

                      Vector3[] vertices = new Vector3[3];
                      Vector3[] normals = new Vector3[3];
                      
                      for (int i = 0; i < 3; i++)
                      {
                          vertices[i] = meshData.GetVertex(meshData.GetFaceVertex(normRay.GetCollisionFaceIndex(), i)) * GlobalTransform;
                          normals[i] = meshData.GetVertexNormal(meshData.GetFaceVertex(normRay.GetCollisionFaceIndex(), i));
                      }
                      
                      Vector3 baryCoords = Geometry3D.GetTriangleBarycentricCoords(normRay.GetCollisionPoint(), vertices[0], vertices[1], vertices[2]);
                      
                      Vector3 upNormal = (normals[0] * baryCoords.X) + (normals[1] * baryCoords.Y) + (normals[2] * baryCoords.Z);
                      upNormal = upNormal.Normalized();
                      SetUpDirection(upNormal * GlobalTransform, delta);

                      In SetUpDirection(), you can see I'm using GlobalTransform as well.

                        private void SetUpDirection(Vector3 upNormal, double delta)
                        {
                            Transform3D normTransform = GlobalTransform;
                            normTransform.Basis.Y = upNormal;
                            normTransform.Basis.X = -normTransform.Basis.Z.Cross(normTransform.Basis.Y);
                            normTransform = normTransform.Orthonormalized();
                            GlobalTransform = GlobalTransform.InterpolateWith(normTransform, .5f);
                        }

                        paftdunk Matrix multiplication is not commutative; the order of operands is important. To transform a vertex by a matrix, the order should be matrix * vertex. Your code is doing vertex * matrix, which is equivalent to multiplying with a transposed matrix and results in the inverse transformation (if the matrix is orthonormal).

                        Also, to transform a normal properly you need to nullify the translation part of the matrix, or use only the 3x3 submatrix aka the basis, i.e. matrix.basis * normal_vector. If there is uniform (proportional) scaling in the basis, you either need to orthonormalize the basis prior to multiplication or normalize the resulting normal vector. If there is non-proportional scaling in the matrix, you need to use the transposed inverse of the actual matrix; since reversing the operand order multiplies by the transposed matrix, you can do normal_vector * matrix.basis.inverse().
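                        In GDScript, the cases look something like this (sketch; mesh_node is a placeholder for the mesh instance and n_local a mesh-space normal):

                            var b: Basis = mesh_node.global_transform.basis
                            # Rotation only (orthonormal basis): the basis alone transforms the normal.
                            var n_global: Vector3 = b * n_local
                            # Uniform scaling: transform, then renormalize.
                            n_global = (b * n_local).normalized()
                            # Non-uniform scaling: multiply by the transposed inverse.
                            # vector * basis multiplies by the transposed basis, so this is the inverse transpose.
                            n_global = (n_local * b.inverse()).normalized()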

                          xyz That makes sense! Matrix math is fairly new to me, so this is certainly trial-by-fire learning.

                          I'm getting slightly different results now that I have updated the logic - in particular, now that I'm correctly transforming the vertices. On its own, this is working great.
                          vertices[i] = GlobalTransform * meshData.GetVertex(meshData.GetFaceVertex(normRay.GetCollisionFaceIndex(), i));

                          I believe I am MOSTLY following what you are saying about the normal transformation (I have no scaling). This is what I did:
                          normals[i] = GlobalTransform.Basis * meshData.GetVertexNormal(meshData.GetFaceVertex(normRay.GetCollisionFaceIndex(), i));

                          I now get the result below. I am trying to understand the mathematics of the issue here. The mesh I am moving on ramps up early, so I was thinking there may be an issue with the vertex transformation at that point. Once it turned around and started shaking erratically, I figured it is something to do with the normals calculation.

                          If you happen to have any ideas, could you explain mathematically what would cause that shaking? I've racked my brain, but I cannot think of what it would be. (As a side note, I move the moving mesh with my keyboard by updating its position on the z-axis.)


                            paftdunk Post a better image and the complete code; it's hard to see what's happening there. Remove the interpolation in SetUpDirection for now - first make it work instantly by just assigning the final transform you calculated. Interpolation may introduce other problems, and you interpolate barycentric coords anyway, so it should be smooth (on smooth surfaces) without that last interpolation. Always try to isolate the issue into a minimum of code. Oh, and you need a second cross product there when calculating the final basis to ensure the wanted orthogonality.
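                            Something along these lines (GDScript sketch only; it assumes the new up vector is never parallel to the current forward axis):

                                func align_up_direction(up_normal: Vector3) -> void:
                                    var y: Vector3 = up_normal.normalized()
                                    # First cross product: X perpendicular to the new Y and the current Z.
                                    var x: Vector3 = y.cross(basis.z).normalized()
                                    # Second cross product: rebuild Z so all three axes are mutually orthogonal.
                                    var z: Vector3 = x.cross(y)
                                    basis = Basis(x, y, z)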