How to measure precise packet times?
@xyz ok, got it
```gdscript
# main.gd
extends Control

@onready var udp_ping_server: ServerNode = $UdpPingServer
@onready var udp_ping_client: ClientNode = $UdpPingClient


# Called when the node enters the scene tree for the first time.
func _ready() -> void:
	pass # Replace with function body.


# Called every frame. 'delta' is the elapsed time since the previous frame.
func _process(delta: float) -> void:
	# Client part: read the echoed timestamp and compute the round-trip time.
	if udp_ping_client.udp.get_available_packet_count() > 0:
		var ms = Time.get_ticks_msec()
		var packet_ms = udp_ping_client.udp.get_packet().get_string_from_utf8().to_int()
		var latency = ms - packet_ms
		#var fps = Engine.get_frames_per_second()
		#var engine_time = 1000.0 / fps
		#var real_latency = latency - engine_time
		#print("tick: %s ms" % ms)
		print("latency: %s ms" % latency)
		#print("latency: %s ms" % real_latency)

	# Server part: accept new connections and echo every packet back.
	udp_ping_server.server.poll() # Important!
	if udp_ping_server.server.is_connection_available():
		var peer: PacketPeerUDP = udp_ping_server.server.take_connection()
		var packet = peer.get_packet()
		peer.put_packet(packet)
		# Keep a reference so we can keep contacting the remote peer.
		udp_ping_server.peers.append(peer)
	for i in range(udp_ping_server.peers.size()):
		if udp_ping_server.peers[i].get_available_packet_count() > 0:
			var packet = udp_ping_server.peers[i].get_packet()
			udp_ping_server.peers[i].put_packet(packet)
```
Output:
```
Godot Engine v4.3.dev6.official.89850d553 - https://godotengine.org
Vulkan 1.3.277 - Forward+ - Using Device #0: NVIDIA - NVIDIA GeForce RTX 3090
ClientNode connected
Timer started!
ServerNode listenning
latency: 6 ms
latency: 1 ms
latency: 1 ms
latency: 1 ms
latency: 1 ms
latency: 1 ms
--- Debugging process stopped ---
```
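For context, the ClientNode and ServerNode scripts aren't shown in the thread; here is a hedged sketch of what the client side might look like, inferred from the `udp` member and the "Timer started!" line above. The address, port, and ping interval are assumptions, not values from the thread:

```gdscript
# client_node.gd — hypothetical ClientNode; sends a timestamp ping on a Timer.
class_name ClientNode
extends Node

var udp := PacketPeerUDP.new()

func _ready() -> void:
	udp.connect_to_host("127.0.0.1", 4242) # assumed address and port
	print("ClientNode connected")
	var timer := Timer.new()
	timer.wait_time = 0.5 # assumed ping interval in seconds
	timer.timeout.connect(_ping)
	add_child(timer) # the timer must be in the tree before it can start
	timer.start()
	print("Timer started!")

func _ping() -> void:
	# Send the current tick time; the server echoes it back unchanged.
	udp.put_packet(str(Time.get_ticks_msec()).to_utf8_buffer())
```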
@kuligs2 You currently have two nodes:
UdpPingClient
UdpPingServer
Add a third node at the end so you get:
UdpPingClient
UdpPingServer
UdpPingClient2
Attach a new script to UdpPingClient2 and do the receiving part of the client code in that node, i.e. move the _process() function from UdpPingClient to UdpPingClient2.
Why do that? To force the receiving client code to execute after the server code in the same frame. And why do that? So you can see that there is no actual network latency there. What you perceive as latency is caused by executing the ping-receiving code in the next frame rather than in the frame you pinged the server, due to the order of "server" and "client" _process() execution.
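For illustration, a minimal sketch of what the UdpPingClient2 script could look like — the node path and the `udp` member are assumptions carried over from main.gd above, not code from the thread:

```gdscript
# udp_ping_client2.gd — hypothetical receiver script on UdpPingClient2
extends Node

# Assumes UdpPingClient is a sibling node exposing its PacketPeerUDP as `udp`.
@onready var udp_ping_client: ClientNode = $"../UdpPingClient"

func _process(_delta: float) -> void:
	# Because this node sits last in the scene tree, this runs after the
	# server's echo code within the same frame.
	while udp_ping_client.udp.get_available_packet_count() > 0:
		var sent_ms = udp_ping_client.udp.get_packet().get_string_from_utf8().to_int()
		print("latency: %s ms" % (Time.get_ticks_msec() - sent_ms))
```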
well, actually idk.. maybe my PC is too fast for this.. I mean, I tested this before on a garbage can compared to my gaming rig..
The old method returns me:
```
Godot Engine v4.2.1.stable.official.b09f793f5 - https://godotengine.org
Vulkan API 1.3.277 - Forward+ - Using Vulkan Device #0: NVIDIA - NVIDIA GeForce RTX 3090
ServerNode listenning
ClientNode connected
Timer started!
latency: 5 ms
latency: 3 ms
latency: 3 ms
latency: 2 ms
latency: 3 ms
latency: 2 ms
latency: 2 ms
latency: 3 ms
--- Debugging process stopped ---
```
But the new one gives 1 ms.. man.. gotta wait for tomorrow when I get back to work and test this on that garbage PC
> kuligs2: ok, got it

Ah, finally. That looks more like the actual latency. Although if you did it properly and ran it in a single instance, your "latency" printout should be near 0.
EDIT: Hm, are you again doing it in a script attached to Main? Not good, as this script will be executed before the server code. Remember, _process() functions are executed from top to bottom as the nodes are displayed in the scene tree.
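A quick way to see that order for yourself — a tiny probe script (my own sketch, not from the thread) you can attach to Main and to each ping node:

```gdscript
# process_order_probe.gd — hypothetical probe; attach a copy to each node of interest.
extends Node

func _process(_delta: float) -> void:
	# Prints one line per node per frame, in scene-tree top-to-bottom order
	# (a parent's _process() runs before its children's).
	print("%s._process()" % name)
```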
ok so I did a test on the laptop.. yes, it works like you said..
Old method: [screenshot]
New method: [screenshot]
ok so I wrote a client app, just the client part, and I'm getting bad pings. The server is running on the other machine..
Terminal ping gives me these numbers: [screenshot]
Running the server on the laptop and the client on my power PC:
```
Godot Engine v4.3.dev6.official.89850d553 - https://godotengine.org
Vulkan 1.3.277 - Forward+ - Using Device #0: NVIDIA - NVIDIA GeForce RTX 3090
ClientNode connected
Timer started!
latency: 8 ms
latency: 5 ms
latency: 53 ms
latency: 86 ms
latency: 19 ms
latency: 36 ms
latency: 53 ms
latency: 67 ms
latency: 97 ms
latency: 8 ms
latency: 56 ms
latency: 70 ms
latency: 97 ms
--- Debugging process stopped ---
```
yikes :(
@kuligs2 With separate apps, this would not be a problem. As I said, the problem is only due to how you structured this dummy self-pinging code.
You can structure the code in multiple ways. The important thing for the self-pinging app is to ensure that all 3 parts are executed in the proper order inside the same frame:
1. the client pings the server
2. the server pings back
3. the client receives the pingback

Your original code was performing 1 and 2 in the current frame and 3 in the next frame, because of how you structured the chunks of your code to execute.
The takeaway is to understand how and when each chunk of your code is actually executed inside a single frame's processing step.
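For illustration, here is one way to keep all three steps in order inside a single _process() — a sketch reusing the names from main.gd above; pinging every frame instead of on a Timer is my simplification, and same-frame delivery is only a practical expectation on loopback, not a guarantee:

```gdscript
func _process(_delta: float) -> void:
	# 1. The client pings the server with the current tick time.
	udp_ping_client.udp.put_packet(str(Time.get_ticks_msec()).to_utf8_buffer())

	# 2. The server polls, accepts new connections, and echoes packets back.
	udp_ping_server.server.poll()
	if udp_ping_server.server.is_connection_available():
		udp_ping_server.peers.append(udp_ping_server.server.take_connection())
	for peer in udp_ping_server.peers:
		while peer.get_available_packet_count() > 0:
			peer.put_packet(peer.get_packet())

	# 3. The client reads the echo sent earlier in this same frame.
	while udp_ping_client.udp.get_available_packet_count() > 0:
		var sent_ms = udp_ping_client.udp.get_packet().get_string_from_utf8().to_int()
		print("latency: %s ms" % (Time.get_ticks_msec() - sent_ms))
```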
@xyz Server on laptop - vsync disabled:
```
Godot Engine v4.3.dev6.official.89850d553 - https://godotengine.org
Vulkan 1.3.277 - Forward+ - Using Device #0: NVIDIA - NVIDIA GeForce RTX 3090
ClientNode connected
Timer started!
latency: 66 ms
latency: 88 ms
latency: 6 ms
latency: 6 ms
latency: 6 ms
latency: 6 ms
latency: 5 ms
latency: 4 ms
latency: 5 ms
latency: 5 ms
latency: 5 ms
latency: 4 ms
latency: 48 ms
latency: 73 ms
latency: 94 ms
latency: 14 ms
latency: 38 ms
latency: 5 ms
latency: 87 ms
latency: 8 ms
latency: 6 ms
latency: 55 ms
latency: 77 ms
latency: 103 ms
latency: 26 ms
latency: 50 ms
latency: 6 ms
latency: 94 ms
latency: 5 ms
latency: 42 ms
latency: 5 ms
latency: 92 ms
latency: 107 ms
latency: 36 ms
latency: 6 ms
latency: 67 ms
latency: 67 ms
latency: 5 ms
--- Debugging process stopped ---
```
vsync enabled:
```
Godot Engine v4.3.dev6.official.89850d553 - https://godotengine.org
Vulkan 1.3.277 - Forward+ - Using Device #0: NVIDIA - NVIDIA GeForce RTX 3090
ClientNode connected
Timer started!
latency: 28 ms
latency: 58 ms
latency: 78 ms
latency: 111 ms
latency: 24 ms
latency: 42 ms
latency: 72 ms
latency: 88 ms
latency: 122 ms
latency: 38 ms
latency: 69 ms
latency: 103 ms
latency: 114 ms
latency: 39 ms
latency: 53 ms
latency: 86 ms
latency: 103 ms
latency: 36 ms
latency: 53 ms
latency: 13 ms
latency: 97 ms
latency: 114 ms
latency: 47 ms
latency: 63 ms
latency: 100 ms
latency: 117 ms
latency: 52 ms
latency: 66 ms
latency: 100 ms
latency: 111 ms
latency: 28 ms
latency: 61 ms
latency: 78 ms
latency: 109 ms
latency: 25 ms
latency: 59 ms
latency: 72 ms
latency: 89 ms
latency: 25 ms
latency: 41 ms
--- Debugging process stopped ---
```
@xyz Both with vsync disabled:
```
Godot Engine v4.3.dev6.official.89850d553 - https://godotengine.org
Vulkan 1.3.277 - Forward+ - Using Device #0: NVIDIA - NVIDIA GeForce RTX 3090
ClientNode connected
Timer started!
latency: 171 ms
latency: 192 ms
latency: 103 ms
latency: 26 ms
latency: 51 ms
latency: 74 ms
latency: 98 ms
latency: 19 ms
latency: 47 ms
latency: 69 ms
latency: 91 ms
latency: 115 ms
latency: 36 ms
latency: 60 ms
latency: 84 ms
latency: 108 ms
latency: 29 ms
latency: 53 ms
latency: 78 ms
latency: 102 ms
latency: 23 ms
latency: 47 ms
latency: 71 ms
latency: 95 ms
latency: 16 ms
--- Debugging process stopped ---
```
Swapped machines:
```
Godot Engine v4.3.beta2.official.b75f0485b - https://godotengine.org
Vulkan 1.3.278 - Forward+ - Using Device #0: Intel - Intel(R) UHD Graphics 620 (WHL GT2)
ClientNode connected
Timer started!
latency: 8 ms
latency: 8 ms
latency: 8 ms
latency: 10 ms
latency: 8 ms
latency: 5 ms
latency: 9 ms
latency: 7 ms
latency: 8 ms
latency: 9 ms
latency: 8 ms
latency: 14 ms
latency: 14 ms
latency: 14 ms
latency: 14 ms
latency: 14 ms
latency: 14 ms
latency: 7 ms
latency: 14 ms
latency: 14 ms
latency: 14 ms
latency: 7 ms
latency: 7 ms
latency: 7 ms
--- Debugging process stopped ---
```
> kuligs2: But how would that change anything?
The frequency of polling won't be tied to node processing, i.e. it'd be fully independent of the framerate. It'll run (almost) as fast as possible. The other alternative to try would be to run the server code as a standalone script instead of inside the main game loop.
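For illustration, the standalone-script route could look roughly like this — a sketch, not tested code from the thread; the port is an assumption, and scripts launched with -s must extend MainLoop (or SceneTree):

```gdscript
# udp_echo_server.gd — hypothetical standalone echo server.
# Run with: godot --headless -s udp_echo_server.gd
extends MainLoop

var server := UDPServer.new()
var peers: Array[PacketPeerUDP] = []

func _initialize() -> void:
	server.listen(4242) # assumed port
	print("Standalone echo server listening")

func _process(_delta: float) -> bool:
	server.poll()
	if server.is_connection_available():
		peers.append(server.take_connection())
	for peer in peers:
		while peer.get_available_packet_count() > 0:
			peer.put_packet(peer.get_packet())
	return false # returning true would quit the loop
```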
> kuligs2: man, you're talking in riddles, or maybe you're just sarcastic, I can't tell..
No, I'm just assuming you're knowledgeable enough. If you're capable of writing a UDP server, you should be able to understand what "put it in a thread" means.
Any code inside a _process() function acts as part of the body of the engine's main game loop. So basically the engine does this with your _process() function under the hood:
```gdscript
while running:            # invisible
	_process()
	wait_for_next_frame() # invisible
```
The problem is that the execution of the body of this "invisible loop" is delayed so that it executes exactly once per frame. If you want it to run faster, you'll need the code in _process() to run in a loop that's decoupled from the engine's main loop (runs in a thread), or to completely replace the main loop (runs as a custom standalone script instead of the default loop):
```gdscript
func my_thread():
	while running:
		pass # do stuff that was previously done in _process()
```
The above loop will now iterate your code independently of the engine's frame rate. Each loop iteration won't wait until the frame is finished; it will loop immediately, assuming you launched that function as a thread or as a main-loop replacement.
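To make the thread variant concrete, here is a minimal sketch — the node setup, port, and the plain bool stop flag are my assumptions, not code from this thread:

```gdscript
# threaded_echo_server.gd — hypothetical node running the echo server in a
# Thread, decoupled from the frame rate.
extends Node

var thread := Thread.new()
var running := true # plain flag; fine for a simple demo like this

func _ready() -> void:
	thread.start(_server_loop)

func _server_loop() -> void:
	var server := UDPServer.new()
	server.listen(4242) # assumed port
	var peers: Array[PacketPeerUDP] = []
	while running:
		server.poll()
		if server.is_connection_available():
			peers.append(server.take_connection())
		for peer in peers:
			while peer.get_available_packet_count() > 0:
				peer.put_packet(peer.get_packet())
		OS.delay_msec(1) # yield briefly so the loop doesn't peg a core

func _exit_tree() -> void:
	running = false
	thread.wait_to_finish()
```

Each iteration starts as soon as the previous one finishes, so echoes go out within roughly a millisecond of arriving instead of waiting for the next frame.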