@Scylla-Leeezard - Were you able to resolve this issue? May I ask that you share the solution with us?
I'm experiencing the same problem, despite hardening the side edges and smoothing the shading in Blender. The moment my character comes in contact with my track and starts using barycentric coordinates to determine gravity, I get extremely erratic results, none of which seem to correlate to the expected normal vector.
Aye, back to share a solution to this. Turns out the barycentric coordinate function doesn't factor any transformations into its results. Meaning if you translate, rotate, or scale the mesh you're raycasting against, the vertices become "desynced". We and the raycast see the modified mesh, while the bary function believes it's unmodified, returning what appears to us as corrupted data.
So the fix is to zero out the mesh's transform. Here's it working:
If you need your mesh to be rotated, translated, etc., then you're gonna need to do that in Blender, not in Godot.
Also, don't forget to shade smooth the mesh. Otherwise bary will return the equivalent of the face normal and you'll get snapping.
'Ight, I'm off to port my F-Zero clone from UE5 to Godot lmao
Scylla-Leeezard Thanks for the reply! Better late than never.
That would explain a lot, if it ignores mesh transformations. I take it you could grab the scale/rotation from the node and apply those to the function's result to get the "correct" gravity vector? (Unless that's what xyz is already referring to. I may be too green to know the difference.)
chetbeigemeister Geometry3D.get_triangle_barycentric_coords() is a generalized function that doesn't know anything about transformations. It just returns barycentric interpolation coords for a given point in a triangle. It's the responsibility of the caller to put all arguments into the same transformation space. Otherwise the results won't be correct if the mesh instance has any non-identity transforms.
The original code made a mistake of taking the triangle coordinates in mesh/object space while keeping the interpolated point position (gotten from raycasting) in global space.
If you need the normal in global space you can either:
1) transform the triangle vertices from mesh space to global space prior to passing them to bary function, using mesh instance's global_transform. And then transform vertex normals from mesh space to global space prior to interpolating them to get the final normal.
Or
2) transform the raycast point from global space to mesh space, get barycentric coordinates in the mesh space and interpolate the normal in the mesh space as well. Then transform the final normal from mesh space to global space, again using mesh instance's global_transform.
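To make option 2 concrete, here's a minimal plain-Python sketch (not Godot code; the transform is modeled as a 3x3 basis plus an origin, similar to Godot's Transform3D, and the triangle, normals, and hit point are made-up example data):

```python
def mat_vec(m, v):
    # column-vector convention: matrix * vector
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

def transposed(m):
    return tuple(tuple(m[c][r] for c in range(3)) for r in range(3))

def to_mesh_space(basis, origin, p_global):
    # inverse of an orthonormal transform: basis^T * (p - origin)
    d = tuple(p_global[i] - origin[i] for i in range(3))
    return mat_vec(transposed(basis), d)

def barycentric(p, a, b, c):
    # barycentric coords of p in triangle abc (p assumed to lie in the triangle's plane)
    v0 = tuple(b[i] - a[i] for i in range(3))
    v1 = tuple(c[i] - a[i] for i in range(3))
    v2 = tuple(p[i] - a[i] for i in range(3))
    dot = lambda x, y: sum(x[i] * y[i] for i in range(3))
    d00, d01, d11 = dot(v0, v0), dot(v0, v1), dot(v1, v1)
    d20, d21 = dot(v2, v0), dot(v2, v1)
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return (1.0 - v - w, v, w)

# Example data: mesh rotated 90 degrees around Z and moved to x = 5
basis = ((0.0, -1.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0))
origin = (5.0, 0.0, 0.0)

tri = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))  # mesh space
normals = ((0.0, 0.0, 1.0),) * 3                           # mesh space, already smooth-shaded

hit_global = (4.75, 0.25, 0.0)                # raycast hit, global space
hit_mesh = to_mesh_space(basis, origin, hit_global)
u, v, w = barycentric(hit_mesh, *tri)         # all arguments now live in mesh space
n_mesh = tuple(u * normals[0][i] + v * normals[1][i] + w * normals[2][i] for i in range(3))
n_global = mat_vec(basis, n_mesh)             # finally rotate the normal back to global space
```

In Godot the equivalent of `to_mesh_space` would presumably be `global_transform.affine_inverse() * hit_point`; the sketch just makes the space bookkeeping explicit.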
xyz Assuming the result is for calculating direction of gravity (meaning it's normalized, so position/scale is irrelevant) would it be adequate to simply take the Euler rotation of the mesh to transform the result from get_triangle_barycentric_coords()?
Apologies if this is just a rehash of what you're already saying. Some of this is going over my head; I'm trying to dumb it down for myself.
chetbeigemeister would it be adequate to simply take the Euler rotation of the mesh to transform the result from get_triangle_barycentric_coords()?
No, you need the whole transformation matrix. But this is trivial, you just multiply the vertex with mesh node's global_transform or its inverse to transform back.
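As a sanity check of that last point, here's a tiny plain-Python sketch (not Godot code, made-up vertex) showing that multiplying by a rotation matrix and then by its inverse, which for a pure rotation is the transpose, round-trips a vertex between spaces:

```python
import math

def mat_vec(m, v):
    # column-vector convention: matrix * vector
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

def transposed(m):
    return tuple(tuple(m[c][r] for c in range(3)) for r in range(3))

def rot_y(angle):
    c, s = math.cos(angle), math.sin(angle)
    return ((c, 0.0, s), (0.0, 1.0, 0.0), (-s, 0.0, c))

R = rot_y(math.pi / 3)
v_mesh = (1.0, 2.0, 3.0)
v_global = mat_vec(R, v_mesh)              # mesh -> global
v_back = mat_vec(transposed(R), v_global)  # global -> mesh; a rotation's inverse is its transpose
```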
xyz Hello! I am so sorry to necropost; however, you are the only person I've been able to find giving good advice on this topic.
I'm getting similar results (mainly post 4) as OP using mostly similar code. I've experimented with your answer and I cannot seem to figure out where exactly you would convert between local and global spaces. I've tried to think about it logically, but I can't seem to get it right.
Could one just multiply the vertices in the for loop by global_transform and leave it that? For example: var vertices: Array[Vector3] = other.get_vertex_positions_at_face_index(ray_cast.get_collision_face_index()) * global_transform
When you say "transform vertex normals from mesh space to global space prior to interpolating them to get the final normal," are you saying that you could do the same thing as the vertices array (multiply the normals during iteration by global_transform)? I think what I'm not understanding is the "prior to interpolation" part - as in, what step in the process that would be. Wouldn't that be when OP calls and passes the normal align_up_direction?
paftdunk Matrix multiplication is not commutative. The order of operands is important. To transform a vertex by a matrix the order should be matrix * vertex. Your code is doing vertex * matrix. This is equivalent to multiplying with a transposed matrix which would result in inverse transformation (if the matrix is orthonormal).
Also, to transform a normal properly you need to nullify the translation part of the matrix, or use only the 3x3 submatrix aka the basis. So you can do matrix.basis * normal_vector. If there is some proportional scaling in the basis, you either need to orthonormalize the basis prior to multiplication or normalize the resulting normal vector. If there is non-proportional scaling in the matrix, you need to use the transposed inverse of the actual matrix. Since Godot's transform class doesn't implement a transpose function, you can do normal_vector * matrix.basis.affine_inverse(), which now reverses the operand order to transpose the matrix.
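A plain-Python sketch of why the naive basis multiply breaks under non-proportional scaling (made-up surface data; the tangent lies in the surface, so any correct normal must stay perpendicular to it after the deformation):

```python
def mat_vec(m, v):
    # column-vector convention: matrix * vector
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

def dot(a, b):
    return sum(a[i] * b[i] for i in range(3))

S = ((2.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))        # non-proportional scale
S_inv_T = ((0.5, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))  # transpose of S's inverse

tangent = (1.0, -1.0, 0.0)  # lies in the surface
normal = (1.0, 1.0, 0.0)    # perpendicular to the tangent before scaling

t_scaled = mat_vec(S, tangent)        # the deformed surface direction
n_naive = mat_vec(S, normal)          # wrong: no longer perpendicular to the surface
n_correct = mat_vec(S_inv_T, normal)  # right: still perpendicular
```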
xyz That makes sense! Matrix math is fairly new to me, so this is certainly trial-by-fire learning.
I'm getting slightly different results now that I have updated the logic. In particular, correctly transforming the vertices. On its own, this is working great. vertices[i] = GlobalTransform * meshData.GetVertex(meshData.GetFaceVertex(normRay.GetCollisionFaceIndex(), i));
I believe I am MOSTLY following what you are saying about the normal transformation (I have no scaling). This is what I did: normals[i] = GlobalTransform.Basis * meshData.GetVertexNormal(meshData.GetFaceVertex(normRay.GetCollisionFaceIndex(), i));
I now get the below result. I am trying to understand the mathematics of this issue here. The mesh I am moving ramps up early, so I was thinking at that point there may be an issue with the vertex transformation. Once it turned around and started shaking erratically, I figured this is something to do with the normals calculation.
If you happen to have any ideas, could you explain mathematically what would cause that shaking? Racked my brain but I cannot think of what that would be (as a side note, I move the moving mesh with my keyboard by updating the position on the z-axis).
paftdunk Post a better image and complete code. Hard to see what's happening there. Remove the interpolation in SetUpDirection for now. First make it work instantly. Just assign the final transform you calculated. Interpolation may introduce other problems. You interpolate barycentric coords anyway, so it should work smoothly (when on smooth surfaces) without that last interpolation. Always try to isolate the issue into a minimum of code. Oh, and you need a second cross product there when calculating the final basis to ensure the wanted orthogonality.
xyz Good point - I did notice that the documentation for getting the barycentric coords mentions that it is already interpolating, so thank you for confirming that. I have removed that - getting SLIGHTLY better looking results.
Could you explain the need for the second cross product?
paftdunk Could you explain the need for the second cross product?
I explained it. It ensures the orthogonality of the basis vectors. orthonormalized() does that as well, but it does not guarantee the order in which it does the cross products. So if your basis vectors are not perpendicular when calling orthonormalized(), the function will have to change the direction of some of them to make them perpendicular, but you don't know which ones will be changed. It may happen to be y, which is your interpolated normal. You don't want that to be changed. So better do it yourself using two cross products, so that you end up with orthogonal vectors, and use orthonormalized() only to normalize the vector lengths.
So:
y = normal
x = y cross z (or as you've put it, -(z cross y))
z = x cross y
orthonormalize
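Those four steps, sketched in plain Python (hypothetical `y` and forward vectors, not Godot code; the point is that `y` is never touched):

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(a[i] * b[i] for i in range(3))

def normalized(v):
    l = sum(c * c for c in v) ** 0.5
    return tuple(c / l for c in v)

y = normalized((0.0, 1.0, 0.2))  # interpolated surface normal: the one vector to preserve
z_hint = (0.3, 0.1, 1.0)         # current forward direction, not yet perpendicular to y

x = normalized(cross(y, z_hint))  # perpendicular to both y and the hint
z = normalized(cross(x, y))       # second cross: x, y, z now mutually perpendicular
```

After the two crosses, orthonormalization only has to fix vector lengths (already handled here), never directions, so the interpolated normal survives intact.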
xyz I gotcha now, thank you! I've learned a lot math-wise through you, so I appreciate that and I'm going to continue reading/practicing with it so maybe one day I'll be able to tackle this since I just cannot get it working.
I'm going to go back to the drawing board and figure out a new approach to this problem.
paftdunk Can you post a minimal reproduction project? The whole thing is straightforward, there shouldn't be anything hard or mysterious about it. You likely have a bug somewhere in your transformation calculations.
Also, it's not that I find it too difficult necessarily, it's the fact that there's a lot of theory behind this that I don't quite understand and it is making it hard for me to think through it thoroughly - that's all.
paftdunk Normals/vertices need to be transformed from collider's space to global space. Not from skate's space which you're currently doing. That doesn't make much sense. So the matrix/basis you're multiplying them with needs to be collider's, not skate's.
You copypasted my pseudocode without thinking. I used GlobalTransform as a generalization, hoping we understood that this is the GlobalTransform of whichever object the triangles belong to.
xyz Wow, I feel dumb. This makes sense and is what I mean by trying to do things I do not understand, and just copying pseudocode in a desperate attempt to do so.
Thank you so much again. I have one more thing otherwise I'm going to ask for your PayPal and pay you for your time.
I assume this isn't a "bug", but here's what I am working with now (code is the same, I just corrected the GlobalTransform to hit.GlobalTransform):
On the first ramp, you can see that it either goes through the mesh OR moves very quickly ahead and off track if I move it slow enough. The collision faces look good to me - but I'm wondering maybe if it is because of the sharpness of the angle moving up?
I would assume this is related: when I get close to an edge, the board starts to rotate 90 degrees. I get that the edge face is perpendicular to the current one; however, I'm not sure why it'd rotate along the y-axis and not the x in this case. What would cause this, either on my end or mathematically?
Again, thank you for your time. You are awesome.