@lws-team I have both a server and a client implemented using libwebsockets. My client receives requests every 5ms, and for every packet received on the client side, the client application sends a response. However, sometimes I notice that the ring buffer becomes full, leaving no space available. This situation persists for 12-13 seconds. During this time, the server sends a WebSocket ping, but the client responds with a masked WebSocket pong only after 12 seconds. As a result, the server closes the connection with the client due to the delayed WebSocket pong.
Does this mean that, due to the ring buffer being full, the client is unable to send the masked WebSocket pong? Is there any way to disable WebSocket pings from the server, as I am already handling ping messages at the application level?
You can indicate that the connection is valid (and so no need to provoke it with a PING) using this
```c
/*
 * lws_validity_confirmed() - reset the validity timer for a network connection
 *
 * \param wsi: the connection that saw traffic proving the connection valid
 *
 * Network connections are subject to intervals defined by the context, the
 * vhost if server connections, or the client connect info if a client
 * connection. If the connection goes longer than the specified time since
 * last observing traffic that can only happen if traffic is passing in both
 * directions, then lws will try to create a PING transaction on the network
 * connection.
 *
 * If the connection reaches the specified `.secs_since_valid_hangup` time
 * still without any proof of validity, the connection will be closed.
 *
 * If the PONG comes, or user code observes traffic that satisfies the proof
 * that both directions are passing traffic to the peer and calls this api,
 * the connection validity timer is reset and the scheme repeats.
 */
LWS_VISIBLE LWS_EXTERN void
lws_validity_confirmed(struct lws *wsi);
```
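As a rough sketch of how the pieces fit together (the interval values and the callback name here are illustrative, not recommendations): the PING/hangup intervals come from a retry/idle policy attached at connect time, and `lws_validity_confirmed()` is called whenever application traffic already proves the link is alive, so lws never needs to issue its own PING:

```c
#include <libwebsockets.h>

/* Illustrative retry/idle policy: how long lws waits without proof of
 * validity before issuing a PING, and before hanging up entirely. */
static const lws_retry_bo_t retry_policy = {
	.secs_since_valid_ping   = 30, /* example: PING after 30s of silence */
	.secs_since_valid_hangup = 60, /* example: close after 60s unproven */
};

/* At connection setup, attach the policy, e.g. for a client:
 *
 *   struct lws_client_connect_info i;
 *   memset(&i, 0, sizeof(i));
 *   i.retry_and_idle_policy = &retry_policy;
 *
 * Then, in the protocol callback, reset the validity timer whenever
 * the app-level ping/pong already proves both directions work: */
static int
callback_example(struct lws *wsi, enum lws_callback_reasons reason,
		 void *user, void *in, size_t len)
{
	switch (reason) {
	case LWS_CALLBACK_CLIENT_RECEIVE:
		/* application-level traffic is proof of validity */
		lws_validity_confirmed(wsi);
		break;
	default:
		break;
	}

	return lws_callback_http_dummy(wsi, reason, user, in, len);
}
```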
It's more interesting to understand what delays the PONG... are you sending huge lumps of data? If so, lws will block sending anything else (including PONGs) until the large chunk has finished sending in the background.
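One common way to avoid starving control frames like PONG is to never hand lws one huge buffer, but to write bounded fragments from the writeable callback and re-request writability until the message is done. A minimal sketch (the chunk size and the `per_session` state struct are hypothetical; the payload is assumed to sit after `LWS_PRE` bytes of headroom as lws requires):

```c
#include <libwebsockets.h>

#define CHUNK 4096 /* illustrative per-writeable fragment size */

/* Hypothetical per-connection state tracking the pending payload. */
struct per_session {
	unsigned char *data; /* points past an LWS_PRE-padded buffer */
	size_t len, sent;
};

/* Call from LWS_CALLBACK_CLIENT_WRITEABLE. */
static int
send_next_fragment(struct lws *wsi, struct per_session *pss)
{
	size_t n = pss->len - pss->sent;
	int flags;

	if (!n)
		return 0;

	if (n > CHUNK)
		n = CHUNK;

	/* Mark first / continuation / final fragments so the peer can
	 * reassemble the logical message. */
	flags = lws_write_ws_flags(LWS_WRITE_BINARY, pss->sent == 0,
				   pss->sent + n == pss->len);

	if (lws_write(wsi, pss->data + pss->sent, n,
		      (enum lws_write_protocol)flags) < (int)n)
		return -1;

	pss->sent += n;
	if (pss->sent < pss->len)
		lws_callback_on_writable(wsi); /* come back for the rest */

	return 0;
}
```

Between fragments the event loop regains control, so lws can interleave its own control frames instead of being stuck behind one monolithic send.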