Quick maths leads me to this - MEASURED_MS / (1000 / FPS) = frames of latency -> 17 / (1000 / 60) ≈ 1.02 frames, i.e. roughly one frame.. but not sure
How to measure precise packet times?
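Roughly what I mean, written out (just a sketch; the 17 ms / 60 FPS numbers are the example above, and Time.get_ticks_usec() would give sub-millisecond timestamps):

```gdscript
# Sketch only: convert a measured round trip into frames of latency,
# and timestamp with microseconds instead of milliseconds.
func frames_of_latency(measured_ms: float, fps: float) -> float:
    var frame_time_ms := 1000.0 / fps   # ~16.7 ms per frame at 60 FPS
    return measured_ms / frame_time_ms  # 17 / 16.7 ≈ 1.02 frames

func timestamp_example() -> void:
    var sent_usec := Time.get_ticks_usec()  # microseconds since engine start
    # ... put_packet(), wait for the echo ...
    var rtt_ms := (Time.get_ticks_usec() - sent_usec) / 1000.0
    print("round trip: %.3f ms" % rtt_ms)
```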
xyz What do you mean? My server and client are separate nodes.
```gdscript
# client_node.gd
class_name ClientNode
extends Node

var udp := PacketPeerUDP.new()
var connected = false
var timer = Timer.new()

func _ready():
    var error = udp.connect_to_host("127.0.0.1", 4242)
    add_child(timer)
    timer.wait_time = 1
    timer.timeout.connect(_on_timeout)
    if error != OK:
        print("ClientNode didnt connect")
    else:
        print("ClientNode connected")
        timer.start()
        print("Timer started!")

func _process(delta):
    if udp.get_available_packet_count() > 0:
        var ut = Time.get_unix_time_from_system()
        var ms = Time.get_ticks_msec()
        var packet_ms = udp.get_packet().get_string_from_utf8().to_int()
        var latency = ms - packet_ms
        print("tick: %s ms" % ms)
        print("latency: %s ms" % latency)
        connected = true

func _on_timeout():
    var ut = Time.get_unix_time_from_system()
    var ms = str(Time.get_ticks_msec()) #str(int(ut * 1000))
    udp.put_packet(ms.to_utf8_buffer())
```
```gdscript
# server_node.gd
class_name ServerNode
extends Node

var server := UDPServer.new()
var peers = []

func _ready():
    var error = server.listen(4242)
    if error != OK:
        print("ServerNode didnt listten")
    else:
        print("ServerNode listenning")

func _process(delta):
    server.poll() # Important!
    if server.is_connection_available():
        var peer: PacketPeerUDP = server.take_connection()
        var packet = peer.get_packet()
        #print("Accepted peer: %s:%s" % [peer.get_packet_ip(), peer.get_packet_port()])
        #print("Received data: %s" % [packet.get_string_from_utf8()])
        # Reply so it knows we received the message.
        peer.put_packet(packet)
        # Keep a reference so we can keep contacting the remote peer.
        peers.append(peer)
    for i in range(0, peers.size()):
        if peers[i].get_available_packet_count() > 0:
            var packet = peers[i].get_packet()
            peers[i].put_packet(packet)
        pass # Do something with the connected peers.
```
```gdscript
# main.gd
extends Control

# Called when the node enters the scene tree for the first time.
func _ready() -> void:
    pass # Replace with function body.

# Called every frame. 'delta' is the elapsed time since the previous frame.
func _process(delta: float) -> void:
    pass
```
kuligs2 It's not about nodes. It's about what happens, and when, within a single frame's _process() functions. Separate your client code so that the put_packet() part happens before the server part and the get_packet() part happens after. This should make it clearer where that one frame of "latency" in your benchmark is coming from.
xyz I'm sorry, I'm still not following.
The client node starts a timer.
_on_timeout -> put_packet.
Then on the server node, in _process(), I listen for available packets. If there's a packet, I put_packet() it back to the peer.
On the client, in _process(), I listen for available packets. If there's a packet, I get the current time and print the difference.
I'm not sure how I can listen for a packet outside the _process() function?
xyz How do I specify when _process() is called in another node? Do I add the "client" node as a child of the server node?
But this won't fix the problem when I'm doing it with multiple machines: on one machine I would have the client process and on the other the server process.. the resulting measured time would still be wrong..
Or maybe I don't understand something obvious?
kuligs2 Your "latency" is caused solely by doing it in a single scene tree instance, and it's caused by the order of execution of _process() functions.
Their order of execution is from top to bottom, as the nodes are displayed in the tree.
So just try what I suggested, to see if the printed "latency" is brought to 0. Then we can discuss it further.
xyz I've put the code in the main scene script:
```gdscript
# main.gd
extends Control

@onready var udp_ping_server: ServerNode = $UdpPingServer
@onready var udp_ping_client: ClientNode = $UdpPingClient

# Called when the node enters the scene tree for the first time.
func _ready() -> void:
    pass # Replace with function body.

# Called every frame. 'delta' is the elapsed time since the previous frame.
func _process(delta: float) -> void:
    if udp_ping_client.udp.get_available_packet_count() > 0:
        var ut = Time.get_unix_time_from_system()
        var ms = Time.get_ticks_msec()
        var packet_ms = udp_ping_client.udp.get_packet().get_string_from_utf8().to_int()
        var fps = Engine.get_frames_per_second()
        var latency = ms - packet_ms
        var engine_time = 1000 / fps
        var real_latency = latency - engine_time
        #print("tick: %s ms" % ms)
        print("latency: %s ms" % latency)
        #print("latency: %s ms" % real_latency)
    pass
```
which is at the top of the tree, and the latency is now doubled.
Usually it was below 20 ms, now it's 34 ms.
Also, I have tested this with client/server apps, which is why I created this example to prove that this is a problem. I have another app that ran as a server and a client on different machines. The latency times were exactly the same; it didn't matter whether I was running it in the same app or on different machines.
For context, this is what the devs said about it: https://github.com/godotengine/godot/issues/93805
EDIT:
Tried switching the nodes around, that didn't help either.
kuligs2 It's 100% the order-of-execution problem. The main node will still run before the server node, so you haven't changed anything in that respect. You need the client's receiving code to run after the server code.
Don't be stubborn and try doing what I suggested. You may learn something
xyz ok, got it
```gdscript
# main.gd
extends Control

@onready var udp_ping_server: ServerNode = $UdpPingServer
@onready var udp_ping_client: ClientNode = $UdpPingClient

# Called when the node enters the scene tree for the first time.
func _ready() -> void:
    pass # Replace with function body.

# Called every frame. 'delta' is the elapsed time since the previous frame.
func _process(delta: float) -> void:
    if udp_ping_client.udp.get_available_packet_count() > 0:
        var ut = Time.get_unix_time_from_system()
        var ms = Time.get_ticks_msec()
        var packet_ms = udp_ping_client.udp.get_packet().get_string_from_utf8().to_int()
        var fps = Engine.get_frames_per_second()
        var latency = ms - packet_ms
        var engine_time = 1000 / fps
        var real_latency = latency - engine_time
        #print("tick: %s ms" % ms)
        print("latency: %s ms" % latency)
        #print("latency: %s ms" % real_latency)

    udp_ping_server.server.poll() # Important!
    if udp_ping_server.server.is_connection_available():
        var peer: PacketPeerUDP = udp_ping_server.server.take_connection()
        var packet = peer.get_packet()
        peer.put_packet(packet)
        # Keep a reference so we can keep contacting the remote peer.
        udp_ping_server.peers.append(peer)
    for i in range(0, udp_ping_server.peers.size()):
        if udp_ping_server.peers[i].get_available_packet_count() > 0:
            var packet = udp_ping_server.peers[i].get_packet()
            udp_ping_server.peers[i].put_packet(packet)
        pass # Do something with the connected peers.
    pass
```
output:
```
Godot Engine v4.3.dev6.official.89850d553 - https://godotengine.org
Vulkan 1.3.277 - Forward+ - Using Device #0: NVIDIA - NVIDIA GeForce RTX 3090
ClientNode connected
Timer started!
ServerNode listenning
latency: 6 ms
latency: 1 ms
latency: 1 ms
latency: 1 ms
latency: 1 ms
latency: 1 ms
--- Debugging process stopped ---
```
kuligs2 You currently have two nodes:
- UdpPingClient
- UdpPingServer

Add a third node at the end so you get:
- UdpPingClient
- UdpPingServer
- UdpPingClient2

Attach a new script to UdpPingClient2 and do the receiving part of the client code in that node, i.e. move the _process() function from UdpPingClient to UdpPingClient2.
Why do that? To force the receiving client code to execute after the server code in the same frame. And why do that? So you can see that there is no actual network latency there. What you perceive as latency is caused by executing the ping-receiving code in the next frame rather than in the frame you pinged the server, due to the order of "server" and "client" _process() execution.
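Something along these lines for the new node (an untested sketch; the $"../UdpPingClient" path assumes UdpPingClient2 sits next to UdpPingClient under the same parent):

```gdscript
# udp_ping_client_2.gd -- sketch, receiving half of the client only.
extends Node

@onready var ping_client: ClientNode = $"../UdpPingClient"

func _process(_delta):
    # Because this node sits below the server node in the tree, this runs
    # after the server has already echoed the packet in the same frame.
    if ping_client.udp.get_available_packet_count() > 0:
        var ms = Time.get_ticks_msec()
        var packet_ms = ping_client.udp.get_packet().get_string_from_utf8().to_int()
        print("latency: %s ms" % (ms - packet_ms))
```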
Well, actually I don't know.. maybe my PC is too fast for this.. I mean, I tested this before on a garbage can of a machine, compared to my gaming rig..
The old method gives me:
```
Godot Engine v4.2.1.stable.official.b09f793f5 - https://godotengine.org
Vulkan API 1.3.277 - Forward+ - Using Vulkan Device #0: NVIDIA - NVIDIA GeForce RTX 3090
ServerNode listenning
ClientNode connected
Timer started!
latency: 5 ms
latency: 3 ms
latency: 3 ms
latency: 2 ms
latency: 3 ms
latency: 2 ms
latency: 2 ms
latency: 3 ms
--- Debugging process stopped ---
```
But the new one gives 1 ms.. man.. I'll have to wait for tomorrow when I get back to work and test this on that garbage PC.
kuligs2 ok, got it
Ah, finally. That looks more like the actual latency. Although if you did it properly and ran it in a single instance, your "latency" printout should be near 0.
EDIT: Hm, are you again doing it in a script attached to Main? Not good, as that script will be executed before the server code. Remember, _process() functions are executed from top to bottom, as the nodes are displayed in the scene tree.
OK, so I did a test on the laptop.. yes, it works like you said..
Old method:
New method:
OK, so I wrote a client app, just the client part, and I'm getting bad pings.
The server is running on another machine..
Terminal ping gives me these numbers:
Running the server on the laptop and the client on my powerful PC:
```
Godot Engine v4.3.dev6.official.89850d553 - https://godotengine.org
Vulkan 1.3.277 - Forward+ - Using Device #0: NVIDIA - NVIDIA GeForce RTX 3090
ClientNode connected
Timer started!
latency: 8 ms
latency: 5 ms
latency: 53 ms
latency: 86 ms
latency: 19 ms
latency: 36 ms
latency: 53 ms
latency: 67 ms
latency: 97 ms
latency: 8 ms
latency: 56 ms
latency: 70 ms
latency: 97 ms
--- Debugging process stopped ---
```
yikes :(
kuligs2 With separate apps, this would not be a problem. As I said, the problem is only due to how you structured this dummy self-pinging code.
You can structure the code in multiple ways. The important thing for the self-pinging app is to ensure that all 3 parts are executed in the proper order inside the same frame:
- client pings server
- server pings back
- client receives the pingback.
Your original code was performing 1 and 2 in the current frame and 3 in the next frame. And that was because of how you structured the chunks of your code to execute.
The takeaway message is to take care to understand how and when each chunk of your code is actually executed inside a single frame's processing step.
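If you'd rather not rely on where the nodes sit in the tree, you can also set each node's process_priority explicitly (lower values have their _process() called earlier). A rough sketch, assuming the UdpPingClient / UdpPingServer / UdpPingClient2 setup from above:

```gdscript
# main.gd -- sketch only. Forces the per-frame order: send, echo, receive.
extends Control

func _ready() -> void:
    $UdpPingClient.process_priority = 0   # client sends the ping first
    $UdpPingServer.process_priority = 1   # server echoes it back, same frame
    $UdpPingClient2.process_priority = 2  # client receives the echo last, same frame
```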