Hey y'all,
I wanted to experiment with the new barycentric coordinate function coming in 4.2, but I seem to be getting some odd results with meshes imported from Blender. In a new project I'm using the code from the example in pull request #71233, but with a small modification that lets me control the traveling object with the arrow keys instead of an AnimationPlayer. This appears to work correctly with the track mesh provided in the example:

To double check my interpretation of the code, I made the same modification to the example project and it also works.
However upon trying a different track mesh imported from Blender (3.4.0), I get erratic behavior:

Leading up the ramp, the traveling object begins to jitter and doesn't align with the surface. When traveling to the left, it slowly begins to angle to the right even though the track mesh is flat...

It seems the normal data is bad? I tried exporting the mesh in different file formats (.glTF 2.0, .escn, and .obj), but regardless the behavior was the same or worse. I also tried triangulating the mesh in Blender with preserved normals, and made sure to have smooth shading enabled. To make things more confusing, when the barycentric function is disabled so we're using just the face normal of the track mesh, we do get the correct direction and the traveling object aligns properly (though without the smooth interpolation of course):

Granted, I'm fairly new to Godot (coming from an Unreal background) and am likely missing an import/export setting or not preparing the mesh properly. Any help or suggestions on how to fix this would be deeply appreciated, thanks!

Still haven't managed to make any headway on this. I've reduced the geometry of the new track mesh down to something similar to the track mesh in the example, and I even imported the example's mesh into Blender to compare the normals; everything seems the same.
Example's:

My Mesh:

With the reduced geometry the erroneous behavior is even more apparent:

And again, when disabling the barycentric function so we're only using the face normals, the traveling object orients correctly to the track.

Because everything works with the example's mesh, I'm not inclined to believe the issue is with the code itself. Nevertheless, here's what it looks like:
Traveling Object:

	extends CharacterBody3D

	var target_velocity = Vector3.ZERO
	var speed = 4

	@export_category("Settings")
	@export var use_bary_coords = false

	@onready var ray_cast: RayCast3D = $RayCast3D

	func _ready():
		print("READY")

	func _physics_process(_delta: float) -> void:
		if ray_cast.is_colliding():
			var other: CollisionObject3D = ray_cast.get_collider()
			position.y = ray_cast.get_collision_point().y + 0.1
			align_up_direction(ray_cast.get_collision_normal())

			if other.is_in_group("mesh_colliders") and use_bary_coords:
				var vertices: Array[Vector3] = other.get_vertex_positions_at_face_index(ray_cast.get_collision_face_index())
				var vertex_normals: Array[Vector3] = other.get_vertex_normals_at_face_index(ray_cast.get_collision_face_index())
				var bary_coords: Vector3 = Geometry3D.get_triangle_barycentric_coords(ray_cast.get_collision_point(), vertices[0], vertices[1], vertices[2])
				# Blend the three vertex normals with the barycentric weights.
				var up_normal: Vector3 = (vertex_normals[0] * bary_coords.x) + (vertex_normals[1] * bary_coords.y) + (vertex_normals[2] * bary_coords.z)
				up_normal = up_normal.normalized()
				align_up_direction(up_normal)
				print(up_normal)

		target_velocity.z = int(Input.is_action_pressed("ui_left")) - int(Input.is_action_pressed("ui_right"))
		velocity = target_velocity * speed
		move_and_slide()

	func align_up_direction(up_normal: Vector3) -> void:
		var new_basis: Basis = transform.basis
		new_basis.y = up_normal
		new_basis.x = -basis.z.cross(basis.y)
		new_basis = new_basis.orthonormalized()
		basis = new_basis

Track Object:

	extends StaticBody3D
	class_name TRACKOBJ_testoval


	@onready var mesh: Mesh = $basic_Flat_003.mesh
	var mesh_data: MeshDataTool

	func _ready() -> void:
		# Cache the surface data so faces can be queried at raycast time.
		mesh_data = MeshDataTool.new()
		mesh_data.create_from_surface(mesh, 0)


	func get_vertex_normals_at_face_index(index: int) -> Array[Vector3]:
		var normals: Array[Vector3] = []
		for i in range(3):
			normals.append(mesh_data.get_vertex_normal(mesh_data.get_face_vertex(index, i)))
		return normals

	func get_vertex_positions_at_face_index(index: int) -> Array[Vector3]:
		var vertices: Array[Vector3] = []
		for i in range(3):
			vertices.append(mesh_data.get_vertex(mesh_data.get_face_vertex(index, i)))
		return vertices

Maybe it's possible that the vertices are incorrectly ordered in the new track mesh? Not sure how to check that other than drawing debug geometry based on the normals and vertices arrays. Honestly I'm kinda spitballing at this point; if anyone else wants to take a crack at it, here's the project file:

    Scylla-Leeezard Harden the side edges in Blender. Normals on the mesh look smoothed even for 90 deg edges, so 3 vertex normals for each triangle will differ greatly and the interpolation from barycentric weights will cause the hit normal to fluctuate a lot as you traverse each triangle.

      xyz My apologies, not sure if I understand. By harden the normals, are you referring to the setting in the bevel modifier? I tried converting the track mesh into a plane by removing the side geometry, and then setting the normals from the faces, so now they should be perfectly perpendicular to the mesh:

      This is the result:

      Side note: exporting the mesh with flat shading causes the traveling object to behave as if the barycentric function is disabled - which is expected. So it would appear the issue lies somewhere in how the normals are being smoothed.

        2 months later

        In Blender, right-click the object and select smooth shading. This will make the vertex normals average among their neighbors instead of just pointing in the same direction as the face normal.

        In your code, you need to adapt the script to set the arrow's Node3D.transform.basis to align with the new normal, which involves taking cross products.

        If you don't understand the math, you can try asking ChatGPT to explain it 🙂

        In fact, you can make an F-Zero-type player controller with just face normals, without using barycentric vertex normals. Start by making that, and once you have it figured out, throw in the barycentric-calculated normal (the weighted average of the three vertex normals according to your position on the triangle), which will only smooth out an already functioning wall-riding player controller.

        Hope this helps.

        3 months later

        @Scylla-Leeezard - Were you able to resolve this issue? May I ask that you share the solution with us?

        I'm experiencing the same problem, despite hardening the side edges and smoothing the shading in Blender. The moment my character comes in contact with my track and starts using barycentric coordinates to determine gravity, I get extremely erratic results. None of which seem to correlate to the expected normal vector.

        3 months later

        Aye, back to share a solution to this. Turns out the barycentric coordinate function doesn't factor any transformations into its results. Meaning if you translate, rotate, or scale the mesh you're raycasting against, the vertices become "desynced": we and the raycast see the modified mesh, while the bary function believes it's unmodified, returning what appears to us as corrupted data.

        So the fix is to zero out the mesh's transform. Here's it working:

        If you need your mesh to be rotated, translated, etc., then you're gonna need to do that in Blender, not in Godot.

        Also, don't forget to shade smooth the mesh. Otherwise bary will return the equivalent of the face normal and you'll get snapping.
        'Ight, I'm off to port my F-Zero clone from UE5 to Godot lmao
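
        A quick way to catch this case (a minimal sketch; `other` is the raycast's collider, as in the script above):

        	# The triangle data fed to the bary function is in mesh space, so any
        	# non-identity transform on the collider will "desync" it as described.
        	if not other.global_transform.is_equal_approx(Transform3D.IDENTITY):
        		push_warning("Collider transform isn't identity; bary results will be off.")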

          Scylla-Leeezard If you need your mesh to be rotated, translated, etc., then you're gonna need to do that in Blender, not in Godot.

          Or just transform the returned coordinate back from the mesh's local space to global space.

            Scylla-Leeezard Thanks for the reply! Better late than never.

            That would explain a lot, if it ignores mesh transformations. I take it you could grab the scale/rotation from the node and apply those to the function's result to get the "correct" gravity vector? (Unless that's what xyz is already referring to. I may be too green to know the difference.)

              chetbeigemeister Geometry3D.get_triangle_barycentric_coords() is a generalized function that doesn't know anything about transformations. It just returns barycentric interpolation coords for a given point in a triangle. It's the responsibility of the caller to put all arguments into the same transformation space. Otherwise the results won't be correct if the mesh instance has any non-identity transforms.

              The original code made a mistake of taking the triangle coordinates in mesh/object space while keeping the interpolated point position (gotten from raycasting) in global space.

              If you need the normal in global space you can either (see the sketch after this list):
              1) transform the triangle vertices from mesh space to global space prior to passing them to the bary function, using the mesh instance's global_transform, and then transform the vertex normals from mesh space to global space prior to interpolating them to get the final normal.
              Or
              2) transform the raycast point from global space to mesh space, get the barycentric coordinates in mesh space, and interpolate the normal in mesh space as well. Then transform the final normal from mesh space to global space, again using the mesh instance's global_transform.
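
              A minimal GDScript sketch of option 1, adapted from the snippet in the first post (assuming `other` is the collider with the helper functions from earlier, and that its MeshInstance3D child has an identity local transform, so the body's global_transform stands in for the mesh's):

              	var face: int = ray_cast.get_collision_face_index()
              	var xform: Transform3D = other.global_transform
              	var vertices: Array[Vector3] = other.get_vertex_positions_at_face_index(face)
              	var vertex_normals: Array[Vector3] = other.get_vertex_normals_at_face_index(face)
              	for i in 3:
              		vertices[i] = xform * vertices[i]  # mesh space -> global space
              		vertex_normals[i] = (xform.basis * vertex_normals[i]).normalized()
              	var bary: Vector3 = Geometry3D.get_triangle_barycentric_coords(ray_cast.get_collision_point(), vertices[0], vertices[1], vertices[2])
              	var up_normal: Vector3 = (vertex_normals[0] * bary.x + vertex_normals[1] * bary.y + vertex_normals[2] * bary.z).normalized()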

                xyz Assuming the result is for calculating the direction of gravity (meaning it's normalized, so position/scale is irrelevant), would it be adequate to simply take the Euler rotation of the mesh to transform the result from get_triangle_barycentric_coords()?

                Apologies if this is just a rehash of what you're already saying. Some of this is going over my head; I'm trying to dumb it down for myself.

                  chetbeigemeister would it be adequate to simply take the Euler rotation of the mesh to transform the result from get_triangle_barycentric_coords()?

                  No, you need the whole transformation matrix. But this is trivial: you just multiply the vertex with the mesh node's global_transform, or with its inverse to transform back (see the example below).
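
                  For example, in GDScript (a sketch; `mesh_node` stands for whatever node carries the transform):

                  	var v_global: Vector3 = mesh_node.global_transform * v_local  # mesh space -> global
                  	var v_local_again: Vector3 = mesh_node.global_transform.affine_inverse() * v_global  # global -> mesh space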

                  4 months later

                  xyz Hello! I am so sorry to necropost; however, you are the only person I've been able to find giving good advice on this topic.

                  I'm getting similar results (mainly post 4) as OP using mostly similar code. I've experimented with your answer and I cannot seem to figure out where exactly you would convert between local and global spaces. I've tried to think about it logically, but I can't seem to get it right.

                  1. Could one just multiply the vertices in the for loop by global_transform and leave it at that? For example: var vertices: Array[Vector3] = other.get_vertex_positions_at_face_index(ray_cast.get_collision_face_index()) * global_transform

                  2. When you say "transform vertex normals from mesh space to global space prior to interpolating them to get the final normal," are you saying you could do the same thing as with the vertices array (multiply the normals by global_transform during iteration)? I think what I'm not understanding is the "prior to interpolation" part, as in, which step in the process that would be. Wouldn't that be when OP calls and passes the normal to align_up_direction?

                    paftdunk Does it work as expected when the mesh has identity transforms (i.e. zero translation and rotation and (1,1,1) scaling)?

                      xyz Yes it does!

                      To give you an idea of what is working, here is the core code. It's adapted from OP's code, but with the global-space changes.

                      You can see I multiply each vertex by GlobalTransform, and I also multiply upNormal by it when I pass it to SetUpDirection().

                      Vector3[] vertices = new Vector3[3];
                      Vector3[] normals = new Vector3[3];
                      
                      for (int i = 0; i < 3; i++)
                      {
                          vertices[i] = meshData.GetVertex(meshData.GetFaceVertex(normRay.GetCollisionFaceIndex(), i)) * GlobalTransform;
                          normals[i] = meshData.GetVertexNormal(meshData.GetFaceVertex(normRay.GetCollisionFaceIndex(), i));
                      }
                      
                      Vector3 baryCoords = Geometry3D.GetTriangleBarycentricCoords(normRay.GetCollisionPoint(), vertices[0], vertices[1], vertices[2]);
                      
                      Vector3 upNormal = (normals[0] * baryCoords.X) + (normals[1] * baryCoords.Y) + (normals[2] * baryCoords.Z);
                      upNormal = upNormal.Normalized();
                      SetUpDirection(upNormal * GlobalTransform, delta);

                      In SetUpDirection(), you can see I'm using GlobalTransform as well.

                        private void SetUpDirection(Vector3 upNormal, double delta)
                        {
                            Transform3D normTransform = GlobalTransform;
                            normTransform.Basis.Y = upNormal;
                            normTransform.Basis.X = -normTransform.Basis.Z.Cross(normTransform.Basis.Y);
                            normTransform = normTransform.Orthonormalized();
                            GlobalTransform = GlobalTransform.InterpolateWith(normTransform, .5f);
                        }

                        paftdunk Matrix multiplication is not commutative. The order of operands is important. To transform a vertex by a matrix, the order should be matrix * vertex. Your code is doing vertex * matrix, which is equivalent to multiplying with a transposed matrix and results in the inverse transformation (if the matrix is orthonormal).

                        Also, to transform a normal properly you need to nullify the translation part of the matrix, or use only the 3x3 submatrix, aka the basis. So you can do matrix.basis * normal_vector. If there is proportional scaling in the basis you either need to orthonormalize the basis prior to the multiplication or normalize the resulting normal vector. If there is non-proportional scaling in the matrix, you need to use the transposed inverse of the actual matrix. Since Godot's transform class doesn't implement a transpose function, you can do normal_vector * matrix.basis.inverse(), where reversing the operand order effectively transposes the matrix.
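
                        In GDScript terms, a sketch of both cases (`mesh_node` and `n_local` are placeholder names; the same operators exist on the C# Basis/Transform3D types):

                        	# Rotation / uniform scale only: rotate with the basis, then renormalize.
                        	var n_global: Vector3 = (mesh_node.global_transform.basis * n_local).normalized()

                        	# Non-proportional scale: use the inverse-transpose of the basis.
                        	var n_global_scaled: Vector3 = (mesh_node.global_transform.basis.inverse().transposed() * n_local).normalized()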

                          xyz That makes sense! Matrix math is fairly new to me, so this is certainly trial-by-fire learning.

                          I'm getting slightly different results now that I have updated the logic. In particular, correctly transforming the vertices. On its own, this is working great.
                          vertices[i] = GlobalTransform * meshData.GetVertex(meshData.GetFaceVertex(normRay.GetCollisionFaceIndex(), i));

                          I believe I am MOSTLY following what you are saying about the normal transformation (I have no scaling). This is what I did:
                          normals[i] = GlobalTransform.Basis * meshData.GetVertexNormal(meshData.GetFaceVertex(normRay.GetCollisionFaceIndex(), i));

                          I now get the result below. I'm trying to understand the mathematics of the issue here. The mesh I am moving ramps up early, so I was thinking there may be an issue with the vertex transformation at that point. Once it turned around and started shaking erratically, I figured it's something to do with the normals calculation.

                          If you happen to have any ideas, could you explain mathematically what would cause that shaking? I've racked my brain but I can't think of what it would be (as a side note, I move the moving mesh with my keyboard by updating the position on the z-axis).

                            paftdunk Post a better image and complete code. Hard to see what's happening there. Remove the interpolation in SetUpDirection for now. First make it work instantly; just assign the final transform you calculated. Interpolation may introduce other problems. You interpolate barycentric coords anyway, so it should work smoothly (when on smooth surfaces) without that last interpolation. Always try to isolate the issue into a minimum of code. Oh, and you need a second cross product there when calculating the final basis to ensure the wanted orthogonality.

                              xyz Good point - I did notice that the documentation for getting the barycentric coords mentions that it is already interpolating, so thank you for confirming that. I have removed that - getting SLIGHTLY better looking results.

                              Could you explain the need for the second cross product?

                              Here is a video:

                              Here is the full code:

                              private void SetUpDirection(Vector3 upNormal, double delta)
                              {
                                  Transform3D normTransform = GlobalTransform;
                                  normTransform.Basis.Y = upNormal;
                                  normTransform.Basis.X = -normTransform.Basis.Z.Cross(normTransform.Basis.Y);
                                  normTransform = normTransform.Orthonormalized();
                                  GlobalTransform = normTransform;
                              }
                              
                              public override void _PhysicsProcess(double delta)
                              {
                                  if (Input.IsActionPressed("move_test"))
                                  {
                                      Vector3 newPos = GlobalPosition;
                                      newPos.Z -= 6.0f * (float)delta;
                                      GlobalPosition = newPos;
                                  }
                              
                                  if (Input.IsActionPressed("move_test2"))
                                  {
                                      Vector3 newPos = GlobalPosition;
                                      newPos.Z += 6.0f * (float)delta;
                                      GlobalPosition = newPos;
                                  }
                              
                                  if (normRay.IsColliding())
                                  {
                              
                                      CollisionObject3D hit = (CollisionObject3D)normRay.GetCollider();
                              
                                      Vector3 newPos = GlobalPosition;
                                      newPos.Y = normRay.GetCollisionPoint().Y + .1f;
                              
                                      GlobalPosition = newPos;
                              
                                      SetUpDirection(normRay.GetCollisionNormal(), delta);
                              
                                      if (hit.IsInGroup("AlignmentTest"))
                                      {
                                          MeshInstance3D meshInstance = hit.GetNode<MeshInstance3D>("Mesh");
                                          Mesh mesh = meshInstance.Mesh;
                              
                                          meshData.CreateFromSurface((ArrayMesh)mesh, 0);
                              
                                          Vector3[] vertices = new Vector3[3];
                                          Vector3[] normals = new Vector3[3];
                              
                                          for (int i = 0; i < 3; i++)
                                          {
                                              vertices[i] = GlobalTransform * meshData.GetVertex(meshData.GetFaceVertex(normRay.GetCollisionFaceIndex(), i));
                                              normals[i] = GlobalTransform.Basis * meshData.GetVertexNormal(meshData.GetFaceVertex(normRay.GetCollisionFaceIndex(), i));
                                          }
                              
                                          Vector3 baryCoords = Geometry3D.GetTriangleBarycentricCoords(normRay.GetCollisionPoint(), vertices[0], vertices[1], vertices[2]);
                              
                                          Vector3 upNormal = (normals[0] * baryCoords.X) + (normals[1] * baryCoords.Y) + (normals[2] * baryCoords.Z);
                                          upNormal = upNormal.Normalized();
                              
                                          SetUpDirection(upNormal, delta);
                                      }
                                  }
                              }

                                paftdunk Could you explain the need for the second cross product?

                                I explained it. It ensures the orthogonality of the basis vectors. orthonormalized() does that as well, but it does not guarantee the order in which it does the cross products. So if your basis vectors are not perpendicular when you call orthonormalized(), the function will have to change the direction of some of them to make them perpendicular, but you don't know which ones will be changed. It may happen to be y, which is your interpolated normal, and you don't want that to be changed. So better do it yourself using two cross products, so that you end up with an orthogonal basis and use orthonormalized() only to normalize the vector lengths.
                                So (sketched in code below):
                                y = normal
                                x = y cross z (or, as you've put it, -(z cross y))
                                z = x cross y
                                orthonormalize
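
                                In GDScript, a minimal sketch of that recipe applied to the earlier align_up_direction() (the same idea carries over to the C# version):

                                	func align_up_direction(up_normal: Vector3) -> void:
                                		var b: Basis = global_transform.basis
                                		b.y = up_normal          # keep the interpolated normal as y
                                		b.x = b.y.cross(b.z)     # first cross: x perpendicular to y and the old z
                                		b.z = b.x.cross(b.y)     # second cross: z perpendicular to x and y
                                		global_transform.basis = b.orthonormalized()  # now only normalizes lengths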

                                  xyz I gotcha now, thank you! I've learned a lot math-wise through you, so I appreciate that. I'm going to continue reading/practicing with it, so maybe one day I'll be able to tackle this, since I just cannot get it working.

                                  I'm going to go back to the drawing board and figure out a new approach to this problem.

                                    paftdunk Can you post a minimal reproduction project? The whole thing is straightforward; there shouldn't be anything hard or mysterious about it. You likely have a bug somewhere in your transformation calculations.

                                      xyz Sure thing, here's a simplified project.

                                      normalsdemo.zip
                                      4MB

                                      Also, it's not that I find it too difficult necessarily; it's that there's a lot of theory behind this that I don't quite understand, which makes it hard for me to think through it thoroughly - that's all.

                                        paftdunk Normals/vertices need to be transformed from the collider's space to global space, not from the skate's space, which is what you're currently doing. That doesn't make much sense. So the matrix/basis you're multiplying them with needs to be the collider's, not the skate's.

                                        You copypasted my pseudocode without thinking. I used GlobalTransform as a generalization, hoping it was understood that this is the GlobalTransform of whichever object the triangles belong to.

                                          xyz Wow, I feel dumb. This makes sense and is what I mean by trying to do things I do not understand, and just copying pseudocode in a desperate attempt to do so. 🙂

                                          Thank you so much again. I have one more thing; otherwise I'm going to ask for your PayPal and pay you for your time.

                                          I assume this isn't a "bug", but here's what I am working with now (code is the same, I just corrected the GlobalTransform to hit.GlobalTransform):

                                          On the first ramp, you can see that it either goes through the mesh OR, if I move it slowly enough, shoots very quickly ahead and off the track. The collision faces look good to me, but I'm wondering if maybe it's because of the sharpness of the angle moving up?

                                          I would assume this is related: when I get close to an edge, the board starts to rotate 90 degrees. I get that the edge face is perpendicular to the current one; however, I'm not sure why it'd rotate along the y-axis and not the x in this case. What would cause this, either on my end or mathematically?

                                          Again, thank you for your time. You are awesome.

                                            paftdunk Well, you're trying to move the skate only in the global horizontal plane. That will seemingly work only on near-horizontal surfaces; the steeper it gets, the glitchier it becomes. Since you're changing the skate's orientation in 3D space, it needs to move "forward" from its own point of view. This "forward" changes as you rotate the basis. So to go "forward" you need to move it along its own z basis vector. Ditto for y when "vertically" constraining to a collider: you need to move along the normal (or the y basis), not along the global y axis. If done properly, the skate should go smoothly around a sphere without glitches.

                                            Hard edges will obviously be a problem that needs some additional care, but that's for another topic.
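
                                            A GDScript sketch of that basis-relative movement (names like `ray_cast` and `speed` are placeholders; the same idea applies to the C# above):

                                            	# Move along the node's own axes, not the global ones.
                                            	var up: Vector3 = global_transform.basis.y
                                            	var forward: Vector3 = -global_transform.basis.z
                                            	global_position = ray_cast.get_collision_point() + up * 0.1  # hug the surface
                                            	global_position += forward * speed * delta  # advance "forward"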

                                              xyz AWESOME. I'm able to fully traverse a sphere now!! This should be my last question to put everything together. I understand using the basis to move (as you said, we need to take the rotation into account).

                                              Here's my code (I'm using CharacterBody3D in my actual project, but the code is mostly the same as in the demo project, just using move_and_slide() to handle movement):

                                              Vector3 velocity = Vector3.Zero;
                                              
                                              // Handle rotation based off of user input
                                              Basis newBasis = Transform.Basis.Rotated(Transform.Basis.Y, turnInput * TurnRate * (float)delta);
                                              Transform = new Transform3D(newBasis, Transform.Origin);
                                              
                                              ....bary normal collision code etc....
                                              
                                              // Forward movement
                                              velocity = -Transform.Basis.Z * forwardSpeed;
                                              Velocity = velocity;
                                              MoveAndSlide();

                                              I believe I implemented the forward movement correctly. The one thing I can't figure out is how to do the same on the y-axis, especially after setting the velocity to -basis.z * speed. I'm not exactly sure how to do this with the basis.

                                              Would it be something like creating a new Transform3D and setting the y basis from the up normal and the position in the world, like basis.y = normal...? I think I'm just struggling to imagine what the implementation would look like in code; I can't think of what I'd set the basis to or how I'd use it in the movement calculations. All I can think to do is position.y = normal.get_collision_point() + ...

                                              After this I'll have everything I need, so I'll work to figure out the hard edges etc. on my own (will start new thread if that happens) 🙂

                                                paftdunk Just directly add some offset along the current y basis to the position prior to doing MoveAndSlide(). It's one line of code. You can also do it along the bary-interpolated normal, as that'll be the same vector after you align your basis with the normal.

                                                And use GlobalTransform instead of Transform. Ditto for position. The two may coincide in a simple setup, but as the setup grows they can become different. Use the global ones to prevent future gotchas.

                                                  xyz So - I adapted this to try to account for gravity, because I was getting poor results because of the collider. I'm not too concerned about this as I'm just going to fake it later on w/ animations, so gravity is not really needed - it was just an idea.

                                                  if (!IsOnFloor())
                                                  {
                                                      velocity -= GlobalTransform.Basis.Y * gravity * (float)delta;
                                                  }

                                                  Would you be able to pseudocode the offsetting of a basis, just in general? I'm getting tripped up by the fact that you can't perform certain operations (+ and -) between a Basis and Vectors or floats.

                                                    paftdunk GlobalPosition = GlobalTransform.Basis.Y * offset;
                                                    It has nothing to do with altering the basis or the velocity. Simply nudge the position. Or just alter the skate's hierarchy by putting an empty 3D node at the root, parenting the mesh to it, and positioning the mesh a little bit up. That way you can put the root node at the exact rayhit point.

                                                    Bringing gravity into this would require some changes to the whole approach. You first need to decide what exact type of movement this should be: gravity/inertia-based simulation or something more stylized. The approach you started with would facilitate the latter.

                                                      xyz I am going with the stylized approach, which is why I'm so hellbent on figuring this out, haha. I know generally how I'm going to attack it; it's just the fundamentals here.

                                                      As far as the offset with the basis, I'm having a little trouble with that.

                                                      When I do GlobalPosition = GlobalTransform.Basis.Y * offset, the CharacterBody ends up near the origin of the scene, offset on the y-axis by my offset value. I've attempted to multiply the entire Basis.Y vector with another vector (trying different values for X, Y, Z) and with a single float.

                                                      Thinking through this: with the velocity vector, for example, I'm multiplying the speed in the forward direction; however, that isn't setting the position directly. But by doing GlobalPosition = GlobalTransform.Basis.Y, I'm NOT taking the current position into account, just Basis.Y, hence it ends up at (0, 1, 0) (since Basis.Y would be (0, 1, 0)).

                                                      What am I missing as far as how the offset SHOULD be structured? Especially in the case of getting the collision point.

                                                        paftdunk I made a typo. It's GlobalPosition += GlobalTransform.Basis.Y * offset.
                                                        Without += the whole thing wouldn't be an offset from the current position 🙂

                                                          xyz That makes more sense! But I still wonder - wouldn't that just cause the node to constantly move up on its y-axis/basis every single frame, meaning it'd float away? I ask because I feel that I may not be fully thinking this through.

                                                            paftdunk No, because it follows the line where you assign the rayhit point to the position. We're discussing this line in that context. You can do it all in one line. The gist is:

                                                            position = rayhit_position + offset_along_y_basis

                                                            or:

                                                            position = rayhit_position
                                                            position += offset_along_y_basis
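
                                                            In GDScript that gist is one line (`offset` being whatever small hover distance you want):

                                                            	global_position = ray_cast.get_collision_point() + global_transform.basis.y * offset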

                                                              xyz Look at this thing go!

                                                              I think I can take it from here 🙂

                                                              Thank you for all your help again!

                                                                xyz Don't hate me... but I'm back. JUST for some polishing.

                                                                I'm building some obstacles for my demo. This looks GREAT imo, but there's just that little bounce/jitter when moving from face to face.

                                                                Two questions - Do you think this is mainly because of the mesh itself (is it not smooth enough)? If not, do you have any tips for how I can minimize this in code (any additional interpolation, etc.)?

                                                                  paftdunk I only see jitter when transitioning between objects, not between faces. Is that what you mean? This is understandable, because you can only bary-interpolate when on the same object. To minimize it, don't set the normal directly, but continually interpolate from the current normal towards the wanted normal a little bit each frame.
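
                                                                  A sketch of that per-frame easing in GDScript (assuming a `current_up` member that persists between frames and the align helper from earlier; the 10.0 rate is a placeholder to tune):

                                                                  	var current_up: Vector3 = Vector3.UP

                                                                  	func _align_smoothly(target_normal: Vector3, delta: float) -> void:
                                                                  		# slerp keeps the vector unit-length while easing it toward the target.
                                                                  		current_up = current_up.slerp(target_normal, minf(10.0 * delta, 1.0))
                                                                  		align_up_direction(current_up)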