Hi, I'm making a grappling hook mechanic for an FPS game, and I wanted to add aim assist for targeting it. I did it by casting a bunch of rays from the camera, kind of like this:
And it works fine:
But when I move the camera the point jumps around a bit, and sometimes it misses even though it should hit. It also feels a bit hacky to do it this way. I saw that there is a ShapeCast3D node, but if I use it as is, casting a sphere for example, the amount of assist changes depending on how close or far the point is from the camera, and I want it to stay the same size on screen. So is it possible to basically shapecast a cone (an expanding sphere) to detect the point that is closest to the screen center?
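For reference, here is a minimal GDScript sketch of the kind of multi-raycast sampling described above, not the poster's actual code. The function name `find_grapple_point`, the constants, and the ring-of-rays layout are made up for illustration, and it assumes the script sits on a Camera3D and is called during `_physics_process()`:

```gdscript
extends Camera3D

# Illustrative values: assist radius in screen pixels and max grapple range.
const ASSIST_RADIUS_PX := 60.0
const MAX_RANGE := 100.0
const RING_SAMPLES := 8

# Cast a ray through the screen centre plus a ring of rays around it, then
# return the hit whose screen position is closest to the centre (empty
# Dictionary if nothing was hit). Call this from _physics_process().
func find_grapple_point() -> Dictionary:
    var space := get_world_3d().direct_space_state
    var centre := get_viewport().get_visible_rect().size * 0.5
    var offsets: Array[Vector2] = [Vector2.ZERO]
    for i in RING_SAMPLES:
        offsets.append(Vector2.from_angle(TAU * i / RING_SAMPLES) * ASSIST_RADIUS_PX)

    var best := {}
    var best_dist := INF
    for offset in offsets:
        var screen_point := centre + offset
        var from := project_ray_origin(screen_point)
        var to := from + project_ray_normal(screen_point) * MAX_RANGE
        var hit := space.intersect_ray(PhysicsRayQueryParameters3D.create(from, to))
        if hit.is_empty():
            continue
        # Prefer the hit that lands closest to the screen centre when
        # projected back, so the assist feels the same at any distance.
        var dist := centre.distance_to(unproject_position(hit["position"]))
        if dist < best_dist:
            best_dist = dist
            best = hit
    return best
```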
Shapecasting an expanding circle/sphere from the camera into a 3D scene
Is casting multiple rays all the time expensive on the CPU? I mean, can I leave a ray shooting out to infinity, wait for a collision, and then just read the data from it when I actually want to use it?
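That pattern is essentially what a persistent RayCast3D node gives you: while enabled it re-checks its collision every physics frame, and you only read the cached result when you need it. A minimal sketch, assuming a RayCast3D child node; the node path and the very long target_position standing in for "infinity" are just for illustration:

```gdscript
extends Node3D

# Assumes a RayCast3D child node. While `enabled` is true it re-checks its
# collision every physics frame, so reading the result later is just a lookup.
@onready var ray: RayCast3D = $RayCast3D

func _ready() -> void:
    ray.enabled = true
    # There is no real "infinity"; a very long target_position stands in for it.
    ray.target_position = Vector3(0, 0, -10_000.0)

func get_grapple_target() -> Variant:
    # Read whenever you actually need it; null if the ray hits nothing right now.
    return ray.get_collision_point() if ray.is_colliding() else null
```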
kuligs2: "Is casting multiple rays all the time expensive on the CPU?"
Not particularly, but it really depends on how complex the scene is (in terms of colliders). If optimization is needed in this particular case, a cone or pyramid collider can be used to determine whether any rays need to be cast at all.
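A rough, untested sketch of how that broad-phase check could look (node setup, constants, and function names are all illustrative): build a crude cone as a ConvexPolygonShape3D with its apex at the camera and its radius growing with distance, do a one-off intersect_shape() overlap query against it, and only cast the precise rays when it reports something.

```gdscript
extends Camera3D

const MAX_RANGE := 100.0
const ASSIST_HALF_ANGLE_DEG := 5.0  # constant angular (screen-space) size

var _cone := ConvexPolygonShape3D.new()

func _ready() -> void:
    # Apex at the camera, ring of points at MAX_RANGE along -Z (camera forward).
    var radius := MAX_RANGE * tan(deg_to_rad(ASSIST_HALF_ANGLE_DEG))
    var points := PackedVector3Array([Vector3.ZERO])
    for i in 8:
        var a := TAU * i / 8.0
        points.append(Vector3(cos(a) * radius, sin(a) * radius, -MAX_RANGE))
    _cone.points = points

# Cheap broad phase (call during _physics_process): returns true if anything
# overlaps the assist cone. Only then pay for the precise per-ray queries.
func anything_in_assist_cone() -> bool:
    var params := PhysicsShapeQueryParameters3D.new()
    params.shape = _cone
    params.transform = global_transform
    var space := get_world_3d().direct_space_state
    return not space.intersect_shape(params, 1).is_empty()
```

Since the cone widens linearly with distance, it also keeps the assist region a constant apparent size on screen, which is what the "expanding sphere" in the original question was getting at.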
xyz, that's what I thought: first you "project" a cone-shaped collider, and then you send the ray where it collides, or something like that? I'm speaking in general terms, of course; the exact use case is different in each situation.
TL;DR: from what I understand, a moving, "shape-shifting" collider is cheaper than casting multiple rays at runtime?