I'm working on a 3D project. There is a player, which is a ball you move with WASD. There is a floor of hexagons that grow taller when they are far from the player and shrink back down to floor level when the player is near. Growth and shrinkage are achieved by lerping. I want about 1000 hexagons on screen, and I want the ones in the player's area, about 250 of them, lerping to their desired height. Right now the game is laggy with only 177 hexagons on screen. Is this too much to handle with GDScript? Are there any possible optimizations?
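To give a rough idea, each hexagon currently does something like this every frame (simplified sketch, not my exact code; the node path and the numbers are just placeholders):

```gdscript
extends MeshInstance

# Placeholder values; the real thresholds and node path are different.
export var near_height := 0.2
export var far_height := 4.0
export var near_radius := 6.0

onready var _player: Spatial = get_node("../Player")

func _process(delta: float) -> void:
    var dist := translation.distance_to(_player.global_transform.origin)
    var target: float = far_height if dist > near_radius else near_height
    # Smoothly approach the desired height each frame.
    scale.y = lerp(scale.y, target, 5.0 * delta)
```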

Have you tried sprite batching? My guess, based on the hexagon count, is that rendering could be causing the slowdown rather than code.

As far as optimizing GDScript goes, it should be the same workflow as optimizing any dynamic programming language: avoid unnecessary loops, cache calculations (and other values) that are used routinely, etc. You can also mix C# or C++ with GDScript if you need to optimize certain code sections for extra speed.
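As a trivial illustration of the caching point (the names here are made up), fetch the player's position once per frame rather than once per hexagon:

```gdscript
extends Spatial

onready var _player: Spatial = $Player
var _hexagons := []   # filled elsewhere with the hexagon nodes

func _process(_delta: float) -> void:
    var player_pos := _player.global_transform.origin  # fetched once per frame
    for hex in _hexagons:
        hex.update_height(player_pos)  # hypothetical per-hexagon method
```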

Are you sure it is the lerping and not the rendering? Have you done any profiling?

I'm working on a procedural game that requires lots of unique mesh rendering, and I found that once I got even a remotely large number of meshes generated, the node system effectively killed my project's performance.

I had to access the VisualServer manually and effectively keep a few meshes that I would add surfaces to and remove them from. This took my game from lagging at around 400 chunks (think Minecraft-level poly counts) to running fine on my integrated graphics with over 16,000 chunks.

If you are rendering each hexagon as its own node, this is worth considering.
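In my case I merged geometry into a few big meshes, but even just creating the instances through the VisualServer instead of as nodes already cuts a lot of overhead. A rough Godot 3.x sketch of that simpler variant (the exported mesh, grid size and spacing are placeholders):

```gdscript
extends Spatial

export(Mesh) var hex_mesh   # the hexagon prism mesh, set in the inspector
var _instances := []

func _ready() -> void:
    var scenario := get_world().scenario
    for x in range(32):
        for z in range(32):
            # One VisualServer instance per hexagon: no Node, no _process.
            var rid := VisualServer.instance_create()
            VisualServer.instance_set_base(rid, hex_mesh.get_rid())
            VisualServer.instance_set_scenario(rid, scenario)
            var xform := Transform(Basis(), Vector3(x * 2.0, 0.0, z * 2.0))
            VisualServer.instance_set_transform(rid, xform)
            _instances.append(rid)

func _exit_tree() -> void:
    for rid in _instances:
        VisualServer.free_rid(rid)
```

To raise or lower a single hexagon you would then call VisualServer.instance_set_transform() again, e.g. with a basis scaled on Y like Transform(Basis().scaled(Vector3(1, height, 1)), origin).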

If it is indeed the lerping itself that is causing all this slowdown, prefer storage over processing if at all possible, even if you have to store a gradient of z values or something to adjust your hexagons by.
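For example (made-up curve and sizes), you could build a table of target heights per ring distance once and just index into it every frame instead of recomputing anything:

```gdscript
extends Node

# Precompute the target height for every possible ring distance once,
# then only look values up at runtime.
const MAX_RING := 20
var _height_by_ring := []

func _ready() -> void:
    for ring in range(MAX_RING + 1):
        _height_by_ring.append(clamp(ring - 5, 0, 8) * 0.5)

func target_height(ring_distance: int) -> float:
    return _height_by_ring[int(min(ring_distance, MAX_RING))]
```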

Also, use built-in functions (such as lerp) over calculating it yourself, as these are faster. I assume the math behind those functions gets calculated natively rather than having to be interpreted. That is just a guess, though.

If this isn't doable you could always pop the lerping onto another thread so it isn't lagging the rest of the game.
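A rough sketch of that for Godot 3.x; the arrays, the weight and the exit flag are all assumptions, and you would still have to apply the resulting heights to the meshes on the main thread:

```gdscript
extends Node

var _thread := Thread.new()
var _mutex := Mutex.new()
var _running := true
var _heights := []   # current height per hexagon
var _targets := []   # desired height per hexagon

func _ready() -> void:
    _thread.start(self, "_lerp_worker")

func _lerp_worker(_userdata) -> void:
    while _running:
        _mutex.lock()
        for i in range(_heights.size()):
            _heights[i] = lerp(_heights[i], _targets[i], 0.1)
        _mutex.unlock()
        OS.delay_msec(16)  # roughly once per frame

func _exit_tree() -> void:
    _running = false
    _thread.wait_to_finish()
```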

@Binsk said: Are you sure it is the lerping and not the rendering? Have you done any profiling?

Yes, I did. It's the lerping.

I had to access the VisualServer manually and effectively keep a few meshes that I would add surfaces to and remove them from. This took my game from lagging at around 400 chunks (think Minecraft-level poly counts) to running fine on my integrated graphics with over 16,000 chunks.

Mm, I'll see if I can find information on that.

If it is indeed the lerping itself that is causing all this slowdown, prefer storage over processing if at all possible, even if you have to store a gradient of z values or something to adjust your hexagons by.

I don't understand. What would be storage, and what processing?

Also, use built-in functions (such as lerp) over calculating it yourself, as these are faster. I assume the math behind those functions gets calculated natively rather than having to be interpreted. That is just a guess, though.

Thanks, I use the built-in lerp.

Thanks for the answer.

It seems to me there's no reason GDScript shouldn't be able to handle 1,000 lerps per frame, let alone 250. I've been working on something very complex in GDScript, with many more calculations than that.
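If you want to double-check, a throwaway timing loop like this (Godot 3.x) will tell you what 1,000 built-in lerps actually cost per frame on your machine:

```gdscript
extends Node

# Throwaway measurement: print how long 1,000 lerp calls take each frame.
func _process(_delta: float) -> void:
    var start := OS.get_ticks_usec()
    var value := 0.0
    for _i in range(1000):
        value = lerp(value, 10.0, 0.05)
    print("1000 lerps: %d usec" % (OS.get_ticks_usec() - start))
```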

I can't imagine why it would cause such an issue. But I am looking forward to the performance improvements coming in 4.0; it seems they mostly apply in places where types are defined.

2 years later