• 2D
  • Any 2d Performance Optimizations?

While writing my own simple engine (on a break from Unity), I saw Godot and started playing with it. So far I am a big fan of how it's designed; but the performance of a simple 2D scene (release build) with nothing except a single image (the default icon) and a text label (to show the framerate) is about 500 FPS in the default window and 250 FPS in fullscreen. The same thing thrown together with SFML or similar is about 1900 FPS in the same size window and 950 FPS in fullscreen.

I understand the nature of a game engine, and that there is a lot more going on behind the scenes; but that is a considerable trade-off. Is there anything I can do to increase performance and close this gap somewhat?
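For reference, the entire test amounts to something like this (Godot 3.x GDScript; the node name is my own setup, and V-Sync has to be off or the framerate is capped to the monitor's refresh rate):

```gdscript
extends Node2D

# Minimal benchmark scene: one Sprite (the default icon) plus a Label
# that shows the framerate.
onready var fps_label: Label = $FPSLabel

func _ready() -> void:
    # Uncap the framerate; with V-Sync on, FPS is clamped to the refresh rate.
    OS.vsync_enabled = false

func _process(_delta: float) -> void:
    fps_label.text = str(Engine.get_frames_per_second())
```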

Which rendering backend are you using (GLES3 or GLES2)? The GLES2 renderer in Godot 3.2.3 features batching, but the GLES3 renderer doesn't. The GLES3 renderer in 3.2.4beta1 and later supports batching, though. You can give 3.2.4rc3 a try :)

Also, the GLES3 renderer has a higher base cost than the GLES2 renderer. It will never be as fast in very simple scenes like the one you created. However, it will often scale better to complex scenes.
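If you want to pin the backend (and, on 3.2.4, toggle batching) without going through the editor UI, these are roughly the relevant project.godot entries. The setting paths below are from the 3.2.x line; double-check them against your version's docs:

```
; project.godot excerpt (Godot 3.2.x; verify the paths in your version)
[rendering]

; Rendering backend: "GLES2" or "GLES3"
quality/driver/driver_name="GLES2"

; 2D batching (GLES2 in 3.2.3, GLES3 from the 3.2.4 betas)
batching/options/use_batching=true
```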

With a single image there isn't much to batch, but I tried both of the renderers with no discernible difference (the release candidate didn't improve anything either).

It might be that as the scene scales up, the base cost gets absorbed and the gap closes considerably; but I already sunk months into Unity before deciding that it's not going to work for me, and I don't look forward to investing that time in another engine in the hope that it will give me the performance I am looking for by the end. It's already dangerously close to the minimum framerate I would expect from a simple 2D game on this hardware.

Is there a 2D example floating around that demonstrates the performance the engine would likely achieve with a full 2D game?

Godot has a large number of configuration options; are there any tweaks I can make to increase performance (to lower that base cost)? For example, I do not need a physics engine, I just need basic 2D collision detection.
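To illustrate, the collision detection I need amounts to overlap tests along these lines (a sketch in Godot 3.x GDScript; Rect2.intersects() is the only engine call relied on, and the entities array and its fields are hypothetical):

```gdscript
extends Node2D

# Plain AABB overlap checks with Rect2 -- no physics bodies, no PhysicsServer.
var entities := []  # each element: {"rect": Rect2, "name": String}

func find_collisions(mover_rect: Rect2) -> Array:
    var hits := []
    for e in entities:
        if mover_rect.intersects(e.rect):
            hits.append(e.name)
    return hits
```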

@latreides said:

It might be that as the scene scales up, the base cost gets absorbed and the gap closes considerably; but I already sunk months into Unity before deciding that it's not going to work for me

you could try rpg maker :trollface:

@Zelta said: you could try rpg maker :trollface: Please no!

On a serious note, if there are no real optimizations right now, I will push on with my own engine and revisit Godot on the next game. :)

@latreides said: Is there a 2D example floating around that demonstrates the performance the engine would likely achieve with a full 2D game?

Well, you could search GitHub for some projects; for 2D, the first one that caught my attention was this: https://github.com/akien-mga/dynadungeons

A Bomberman clone. The last update is from Jan 2018, but it does mention making it compatible with 3.0.


Here's another one: https://github.com/godot-mega-man/Mega-Man-Engine

Should be compatible with at least 3.2.2

@latreides said: On a serious note, if there are no real optimizations right now, I will push on with my own engine and revisit Godot on the next game. :)

I just hope you have tested an exported game and not in the editor. rofl.

Anyway, when you finish your engine (which is more efficient than Godot or Unity, of course), is it going to be free? Or will you sell it to Kadokawa?

@Zelta said: I just hope you have tested an exported game and not in the editor. rofl. I did; it didn't really change performance by any measurable amount. Neither did Debug vs. Release, likely because there is nothing really happening.

@Zelta said: Anyway, when you finish your engine (which is more efficient than Godot or Unity, of course), is it going to be free? Or will you sell it to Kadokawa? I am sorry if I didn't make it clear, but my custom engine is just a code-only engine designed specifically for my project; while I keep the framework pretty generic, it's not something that would be very useful outside this project, and it is certainly not a replacement for, or competition to, Unity or Godot.

So, a major update: the disparity might not be as large as I first thought. In fact, there might not be a disparity at all.

I do not understand it one bit; but I figured I would update this post so that my original frame rate observations are not taken by passersby as proof that "Godot is so slow" just because some random person said so.

Here is the situation. My computer (like most) has two graphics units: the integrated GPU (Intel) and the discrete GPU (nVidia). Wait, that conclusion you just jumped to? It's wrong. Follow me a moment, because it gets strange.

For reasons I don't understand, the nVidia chipset (which the system defaults to with Godot) runs my simple static image build at ~550 FPS (reported by Godot). The Intel chipset (considerably slower) runs the same test at 2100 FPS (reported by Godot). Yes, that's right, the slower Intel chipset is performing about 4x faster than the nVidia.

As luck would have it, my engine (for some reason) was defaulting to the Intel chipset, which is why it appeared to be 4x faster. It doesn't make any sense; it's like the world is backward, and I am still trying to figure out what is going on; but it doesn't look like my initial observations are valid anymore.
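One thing that has helped me sanity-check the numbers: at these framerates, FPS exaggerates differences, so I switched to logging frame time instead. 550 FPS is about 1.82 ms per frame and 2100 FPS is about 0.48 ms, so the whole gap is roughly 1.3 ms of per-frame overhead. A minimal sketch of how I measure it (Godot 3.x GDScript):

```gdscript
extends Node

# Logs the average frame time once per second; at high framerates,
# milliseconds per frame is easier to reason about than FPS.
var _frames := 0
var _accum := 0.0

func _process(delta: float) -> void:
    _frames += 1
    _accum += delta
    if _accum >= 1.0:
        print("avg frame time: %.3f ms" % (1000.0 * _accum / _frames))
        _frames = 0
        _accum = 0.0
```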

@latreides said: For reasons I don't understand, the nVidia chipset (which the system defaults to with Godot) runs my simple static image build at ~550 FPS (reported by Godot). The Intel chipset (considerably slower) runs the same test at 2100 FPS (reported by Godot). Yes, that's right, the slower Intel chipset is performing about 4x faster than the nVidia.

Intel's not actually as bad at hardware as people think. They have their own advantages; for example, they have the most efficient geometry shader implementation. Their GPUs just tend to be small, so far...

In this case, however, your NVidia GPU might well be configured to prioritize energy efficiency. There should be a per-application setting in the NV control panel.

@Megalomaniak said: In this case, however, your NVidia GPU might well be configured to prioritize energy efficiency. There should be a per-application setting in the NV control panel.

There are no settings like that in the nVidia Control Panel (there isn't much there at all, to be honest), and the Windows graphics settings show this:

I am still trying to figure out why this happens, how to solve it, and whether it's an isolated incident. My biggest takeaway, and the reason for the follow-up post, was to update my original observation with the new information: a simple scene in Godot is not dramatically slower than the same scene in a custom-built engine.

The Nvidia Control Panel should be accessible via right-click on the Windows desktop. If you can't see it, you might need to install the latest driver supplied by Nvidia themselves via their website.

edit: this should be the current latest for Windows 10 64-bit: https://www.nvidia.com/en-gb/drivers/results/171025/

Once you've opened the control panel, navigate to Manage 3D settings, then on the right pick the tab that says Program settings, select or add the executable you want to modify the properties for, then find Power management mode and set it to Prefer maximum performance.

https://edge.alluremedia.com.au/m/l/2018/03/nvidia3d.png

However, if you don't care about power saving/battery life, then in that Windows panel you show in your post you can just set it to the High performance option, save, and forget it, so to speak.

@Megalomaniak said: The Nvidia Control Panel should be accessible via right-click on the Windows desktop. If you can't see it, you might need to install the latest driver supplied by Nvidia themselves via their website.

The nVidia control panel has no options except a single deprecated one (a global and per-app override to select Intel or nVidia).

The option in the Windows graphics settings just switches between the two; it doesn't change how the nVidia behaves.

I do not think the nVidia is set to any lower-power or non-performance mode, because it performs as expected relative to the Intel in the few benchmarks and other graphically intense applications I have tested.

My best guess (and it's purely a guess at this point) is that it's an issue with the test scene being so simple. The Intel chipset is slower, so it gives the CPU a bit more breathing room to process; but the nVidia is so much quicker that the CPU constantly has to work, and it can't keep up.

Although, I only get about 37% CPU utilization when using the nVidia and 64% when using the Intel (both at the same GHz); so that doesn't quite add up either.

There are also factors such as driver overhead to consider. Still, that Program Settings tab should be of at least some interest, since it should let you assign the choice of GPU per application.

Though again, if your driver was acquired via Windows Update, then I'd still try the one from Nvidia. The one from Windows Update, IIRC, might be a cut-down version, since Microsoft wanted it that way.

@Megalomaniak said: There are also factors such as driver overhead to consider. Still, that Program Settings tab should be of at least some interest, since it should let you assign the choice of GPU per application. I can assign the GPU no problem (though it's better to do it in the Windows dialog, as per the message in the Nvidia Control Panel); that's how I have been moving the apps back and forth to test.

@Megalomaniak said: Though again, if your driver was acquired via Windows Update, then I'd still try the one from Nvidia. The one from Windows Update, IIRC, might be a cut-down version, since Microsoft wanted it that way.
The driver is fine as far as I can tell (and I have tried the one from Nvidia, with no real difference). 3DMark gives the Nvidia a score of about 3300 and the Intel about 700, and many other graphically intensive workloads also run considerably better on the Nvidia.

The only games/applications where I have noticed this oddity are Godot and my own engine, both with a minimal sample.

I'm not saying the driver isn't working; I'm saying there should be settings to more minutely configure per-application overrides. The fact that those are missing is what seriously concerns me.

@Megalomaniak said: I'm not saying the driver isn't working; I'm saying there should be settings to more minutely configure per-application overrides. The fact that those are missing is what seriously concerns me.

Yeah, there are more options in the official driver (like the one you mentioned), but changing them doesn't have any impact at all. I am not sure if they're just being ignored, or if the Nvidia chipset isn't being pushed hard enough to actually notice a difference.

5 days later

@latreides said: For reasons I don't understand, the nVidia chipset (which the system defaults to with Godot) runs my simple static image build at ~550 FPS (reported by Godot). The Intel chipset (considerably slower) runs the same test at 2100 FPS (reported by Godot). Yes, that's right, the slower Intel chipset is performing about 4x faster than the nVidia.

My guess is that in this particular situation, the CPU-GPU communication required for NVIDIA Optimus to work is more expensive than using the Intel IGP directly (as it's always the IGP that actually drives the laptop display).