Godot Help › 3D
How to translate a World Coordinate to UV Coordinate?

cybereality The use case is painting a mesh's splat map at runtime - for example, say you have a dirty car and you gradually scrub the dirt off of it (the dirt is a texture blended with the car's base texture, and the splat map defines the alpha/amount).

It's working now - I can paint the splat map as I wanted - but unfortunately, as I said, only if the model is not rotated.

As for making this into a shader, I would have no idea how to do that - I tried to understand https://github.com/Bauxitedev/godot-texture-painter and https://github.com/RodZill4/godot-material-spray, shaders that do precisely that, but it's out of my league - I've only made two or three shaders in my life and they are copy-pastes from many sources. I'm definitely not a VFX person. 😅

But considering that this is easily achievable in Unity and UE (I already made a PoC in both and they work), the logic should work in Godot as well, as long as the world-to-UV translation/transformation is figured out.

alfredbaudisch

global_transform.xform(v1) should take into account rotation as well, so I'm not quite sure why that doesn't work.
However, you can still use global_transform.basis.xform(v1) afterwards to apply the rotation.
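
For reference, a small, untested sketch of the difference between the two calls (Godot 3.x; the function and sample vectors are purely illustrative):

    func to_world(mesh_instance):
    	# Transform.xform applies rotation, scale AND translation - use it for points.
    	var world_point = mesh_instance.global_transform.xform(Vector3(1, 0, 0))
    	# Basis.xform applies only rotation and scale - use it for directions such as normals.
    	var world_normal = mesh_instance.global_transform.basis.xform(Vector3(0, 1, 0))
    	return [world_point, world_normal]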

    This would be the transform code:

    v1 = mesh_instance.transform.xform_inv(v1)

    However, MeshDataTool is super slow. It is okay for generating things at the start, but it won't work for real-time deformation. You likely need to do this in a shader (it would also be much less code). I don't have time to test anything today, though. But one way you can do it is to draw into a texture (in GDScript) and have the texture bound as a uniform. The texture could be a grayscale heightmap, or whatever you want, and then in the shader you can sample it and convert it to a normal map, or a displacement map (if you want to move vertices), or whatever, it's not a lot of code.
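
    To make the draw-into-a-texture suggestion concrete, here is a minimal, untested sketch (Godot 3.x). It assumes the mesh uses a ShaderMaterial with a sampler2D uniform named splat_map - the uniform name, texture size and crude square brush are all illustrative:

    extends MeshInstance
    
    # Painted at runtime and bound to the material as a uniform.
    var splat_image = Image.new()
    var splat_texture = ImageTexture.new()
    
    func _ready():
    	splat_image.create(512, 512, false, Image.FORMAT_L8)
    	splat_image.fill(Color.black)
    	splat_texture.create_from_image(splat_image)
    	material_override.set_shader_param("splat_map", splat_texture)
    
    func paint_at(uv, brush_radius = 8):
    	# Convert the UV (0..1) to pixel coordinates and stamp a square brush.
    	var center = Vector2(uv.x * splat_image.get_width(), uv.y * splat_image.get_height())
    	splat_image.lock()
    	for x in range(-brush_radius, brush_radius + 1):
    		for y in range(-brush_radius, brush_radius + 1):
    			var px = center + Vector2(x, y)
    			if px.x >= 0 and px.y >= 0 and px.x < splat_image.get_width() and px.y < splat_image.get_height():
    				splat_image.set_pixel(int(px.x), int(px.y), Color.white)
    	splat_image.unlock()
    	splat_texture.set_data(splat_image)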

      cybereality

      I'm not using MeshDataTool for deformation. I am using it to map WORLD COORDINATES to the mesh's UV Coordinates.

      Drawing on a texture and blending all the maps is easy peasy, and it's actually the second step of the process (which I already do).

      The hard part is figuring out where in the texture to draw (which is what this topic is all about). Example: I click on the character's hand - which Vector2 of the character's UV does that correspond to, so that I can MAP THAT COORDINATE to the right part of the texture when painting? Once I can draw there, the shader blends the base texture with whatever other map (splat, displacement, etc.).

      And as I said, even this is working now, but only as long as nothing is rotated. So there's one last piece of the puzzle to solve: how to make World to UV work when the mesh is rotated in the world?

      Real example

      It's better to illustrate with an image. This image is from what I did in UE. The shader is extremely simple: the Base Texture lerps with a Dirt Texture, and the Alpha that decides the amount of dirt is a splat map painted during the game.
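
      In Godot's shading language, a rough equivalent of that material would be something like the sketch below (hedged and untested; the uniform names are illustrative, and the splat map is assumed to be the texture painted at runtime):

      shader_type spatial;
      
      uniform sampler2D base_texture;
      uniform sampler2D dirt_texture;
      uniform sampler2D splat_map; // red channel = amount of dirt, painted at runtime
      
      void fragment() {
      	vec3 base = texture(base_texture, UV).rgb;
      	vec3 dirt = texture(dirt_texture, UV).rgb;
      	float amount = texture(splat_map, UV).r;
      	ALBEDO = mix(base, dirt, amount);
      }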

      To know WHERE TO PAINT in the splat map I use the function linked at the top of this post, "FindWorldCollisionInUV": when I click anywhere, I pass the raycast's Vector3 to FindWorldCollisionInUV and get back a UV Vector2, and that's where I paint white in the splat map. The magic is that UE's FindWorldCollisionInUV manages to RETURN A PRECISE UV COORDINATE FROM A WORLD VECTOR3 for moving meshes, skeletal meshes, static meshes, etc.

      So the trick is how to make a "FindWorldCollisionInUV" with Godot. It's almost there, as I posted before: https://godotforums.org/d/30491-how-to-translate-a-world-coordinate-to-uv-coordinate/2

      I see, okay. That makes sense. I don't see why the model orientation or position should matter, though. You just need to convert the raycast into the correct space before you do the collision.

        Solved! I also needed to transform the mesh normals to world space when comparing them against the raycast normal.

        So instead of just: meshtool.get_face_normal(idx), I'm now doing mesh_instance.global_transform.basis.xform(meshtool.get_face_normal(idx)).

        Working code

        Extremely messy and unoptimized code, since it comes straight from this final experiment before being refactored for actual use, but for anyone looking to convert a world coordinate into a UV coordinate for any mesh, the code with all the logic is below.

        Usage: cast a ray at any mesh and, with the result, call get_uv_coords(raycast.position, raycast.normal) - a sketch of wiring this up to a mouse click follows the code.

        extends Node
        
        var meshtool
        var mesh
        var mesh_instance
        
        var transform_vertex_to_global = true
        
        func set_mesh(_mesh_instance):
        	# Prepare a MeshDataTool for surface 0 of the given MeshInstance
        	# (create_from_surface expects an ArrayMesh).
        	mesh_instance = _mesh_instance
        	mesh = _mesh_instance.mesh
        	
        	meshtool = MeshDataTool.new()
        	meshtool.create_from_surface(mesh, 0)
        
        # Extracts position data for this triangle
        func _get_triangle_data(datatool, p1i, p2i, p3i):	
        	var p1 = datatool.get_vertex(p1i)
        	var p2 = datatool.get_vertex(p2i)
        	var p3 = datatool.get_vertex(p3i)
        	
        	return [p1, p2, p3]
        
        func equals_with_epsilon(v1, v2, epsilon):
        	# Treat two vectors as equal if they are within epsilon of each other.
        	return v1.distance_to(v2) < epsilon
        
        func get_face(point, normal, epsilon = 0.2):
        	# Find the face whose world-space normal matches the raycast normal
        	# and whose triangle contains the hit point.
        	for idx in range(meshtool.get_face_count()):
        		var world_normal = mesh_instance.global_transform.basis.xform(meshtool.get_face_normal(idx))
        		
        		if !equals_with_epsilon(world_normal, normal, epsilon):
        			continue
        		# Normal is the same-ish, so we need to check if the point is on this face
        		var v1 = meshtool.get_vertex(meshtool.get_face_vertex(idx, 0))
        		var v2 = meshtool.get_vertex(meshtool.get_face_vertex(idx, 1))
        		var v3 = meshtool.get_vertex(meshtool.get_face_vertex(idx, 2))
        		
        		if transform_vertex_to_global:
        			v1 = mesh_instance.global_transform.xform(v1)
        			v2 = mesh_instance.global_transform.xform(v2)
        			v3 = mesh_instance.global_transform.xform(v3)
        
        		if is_point_in_triangle(point, v1, v2, v3):
        			return idx	
        	return null
        
        func barycentric(P, A, B, C):
        	# Returns barycentric co-ordinates of point P in triangle ABC,
        	# using ratios of 3x3 determinants (assumes the triangle's plane
        	# does not pass through the origin, so det(A, B, C) != 0).
        	var mat1 = Basis(A, B, C)
        	var det = mat1.determinant()
        	var mat2 = Basis(P, B, C)
        	var factor_alpha = mat2.determinant()
        	var mat3 = Basis(P, C, A)
        	var factor_beta = mat3.determinant()
        	var alpha = factor_alpha / det;
        	var beta = factor_beta / det;
        	var gamma = 1.0 - alpha - beta;
        	return Vector3(alpha, beta, gamma)
        	
        func cart2bary(p : Vector3, a : Vector3, b : Vector3, c: Vector3) -> Vector3:
        	var v0 := b - a
        	var v1 := c - a
        	var v2 := p - a
        	var d00 := v0.dot(v0)
        	var d01 := v0.dot(v1)
        	var d11 := v1.dot(v1)
        	var d20 := v2.dot(v0)
        	var d21 := v2.dot(v1)
        	var denom := d00 * d11 - d01 * d01
        	var v = (d11 * d20 - d01 * d21) / denom
        	var w = (d00 * d21 - d01 * d20) / denom
        	var u = 1.0 - v - w
        	return Vector3(u, v, w)
        
        func transfer_point(from : Basis, to : Basis, point : Vector3) -> Vector3:
        	return (to * from.inverse()).xform(point)
        	
        func bary2cart(a : Vector3, b : Vector3, c: Vector3, barycentric: Vector3) -> Vector3:
        	return barycentric.x * a + barycentric.y * b + barycentric.z * c
        	
        func is_point_in_triangle(point, v1, v2, v3):
        	var bc = barycentric(point, v1, v2, v3)
        	
        	if bc.x < 0 or bc.x > 1:
        		return false
        	if bc.y < 0 or bc.y > 1:
        		return false
        	if bc.z < 0 or bc.z > 1:
        		return false
        	return true
        
        func get_uv_coords(point, normal, transform = true):
        	# Gets the uv coordinates on the mesh given a point on the mesh and normal
        	# these values can be obtained from a raycast
        	transform_vertex_to_global = transform
        	
        	var face = get_face(point, normal)
        	if face == null:
        		return null
        	var v1 = meshtool.get_vertex(meshtool.get_face_vertex(face, 0))
        	var v2 = meshtool.get_vertex(meshtool.get_face_vertex(face, 1))
        	var v3 = meshtool.get_vertex(meshtool.get_face_vertex(face, 2))
        		
        	if transform_vertex_to_global:
        		v1 = mesh_instance.global_transform.xform(v1)
        		v2 = mesh_instance.global_transform.xform(v2)
        		v3 = mesh_instance.global_transform.xform(v3)
        		
        	var bc = barycentric(point, v1, v2, v3)
        	var uv1 = meshtool.get_vertex_uv(meshtool.get_face_vertex(face, 0))
        	var uv2 = meshtool.get_vertex_uv(meshtool.get_face_vertex(face, 1))
        	var uv3 = meshtool.get_vertex_uv(meshtool.get_face_vertex(face, 2))
        	return (uv1 * bc.x) + (uv2 * bc.y) + (uv3 * bc.z)
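
        For completeness, a hedged sketch of how a mouse click could drive this script (untested; the UvMapper node path, the paint_at call and resolving the MeshInstance from the ray's collider are illustrative assumptions, not part of the code above):

        extends Spatial
        
        onready var uv_mapper = $UvMapper # an instance of the script above
        
        func _unhandled_input(event):
        	if event is InputEventMouseButton and event.pressed and event.button_index == BUTTON_LEFT:
        		# Shoot a ray from the camera through the clicked pixel.
        		var camera = get_viewport().get_camera()
        		var from = camera.project_ray_origin(event.position)
        		var to = from + camera.project_ray_normal(event.position) * 1000.0
        		var hit = get_world().direct_space_state.intersect_ray(from, to)
        		if hit and hit.collider is MeshInstance:
        			uv_mapper.set_mesh(hit.collider)
        			var uv = uv_mapper.get_uv_coords(hit.position, hit.normal)
        			if uv != null:
        				paint_at(uv) # e.g. stamp white into the splat map at this UV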

        cybereality For some reason correcting the space of the ray does not work for me. It only works when converting the space of the vertices and normals, while keeping the raycast world-aligned.
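
        For anyone curious what "correcting the space of the ray" could look like with the script above, here is a hedged, untested sketch. Only the hit point is moved into the mesh's local space, because get_face above always converts face normals to world space, so the ray normal stays as-is:

        var local_point = mesh_instance.global_transform.affine_inverse().xform(raycast.position)
        # Passing false skips the vertex-to-global transform, so the containment
        # test and barycentric weights are computed in local space instead.
        var uv = get_uv_coords(local_point, raycast.normal, false)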

        Anyway, now that I have it fully working, I'll try to publish a little open-source project showing the usages. 🥳

        That's really cool. I still wonder if there is a simpler way, but if it works it works.

          cybereality I would love to see another implementation that does this and does not rely on MeshDataTool so heavily. It works and it fulfills the purpose, but it does not scale well 🙂

          Right now I can walk around meshes anywhere in the world and paint them individually.

          On another note, a completely opposite way of doing this was created by RodZill4: https://github.com/RodZill4/godot-material-spray, but there are a lot more steps involved and there's no flexibility in the scene (it's also Viewport + Shader based) - it's meant to be used only for standalone painting.

          His implementation is at least twice as light as mine, but I couldn't find a way to make it usable in a game scenario - it was actually the first thing I tried.

          I think it would be a win if a viewport + shader solution were to be found, instead of using Viewport + MeshDataTool + Shader as I'm doing.
