I did this

While reviewing the libcurl code I noticed this:

libcurl automatically responds to incoming WebSocket PINGs with an immediate PONG reply. Here is the code block executed by curl_ws_recv() when a PING is received: Link
/* auto-respond to PINGs, only works for single-frame payloads atm */
size_t bytes;
infof(data, "WS: auto-respond to PING with a PONG");
/* send back the exact same content as a PONG */
*err = curl_ws_send(data, buf, buflen, &bytes, 0, CURLWS_PONG);
if(*err)
  return -1;
I'm seeing two issues with the way this is handled, both caused by the possibility that curl_ws_send() might not completely send the data in a single call.

An application that uses the WebSocket connection in a "duplex" fashion might perform the following sequence of operations:
1. First it calls curl_ws_send(), but either gets CURLE_AGAIN or gets CURLE_OK with only partially sent data.
2. Then the application does not wait for the send to complete, but instead calls curl_ws_recv().
3. During curl_ws_recv() an incoming PING frame is detected by the auto-pong logic, so it immediately tries to send the PONG reply. Now the curl_ws_send() call from within curl_ws_recv() will break the encoder state, because curl_ws_send() is not able to handle (let alone detect) two competing send operations.
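For illustration, here is a minimal sketch of that call sequence (error handling trimmed; the handle setup, function name and message variables are assumptions for illustration, not from the libcurl sources):

#include <curl/curl.h>

/* 'easy' is assumed to be a connected WebSocket handle
   (CURLOPT_CONNECT_ONLY set to 2L) */
static void duplex_step(CURL *easy, const char *msg, size_t msglen)
{
  size_t sent = 0;
  /* 1) try to send application data; this may return CURLE_AGAIN or
        transmit only part of the frame */
  CURLcode result = curl_ws_send(easy, msg, msglen, &sent, 0, CURLWS_TEXT);

  /* 2) the application does not wait for the send to complete... */
  char buf[1024];
  size_t nread = 0;
  const struct curl_ws_frame *meta;
  result = curl_ws_recv(easy, buf, sizeof(buf), &nread, &meta);
  /* 3) ...so if a PING arrives here, curl_ws_recv() internally calls
        curl_ws_send(..., CURLWS_PONG) while the send side may still
        hold a half-written frame, breaking the encoder state */
  (void)result;
}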
If the curl_ws_send() call in curl_ws_recv() fails to send the complete PONG frame at once, this remains undetected, because the return value for such a case is CURLE_OK (02edae5) and the bytes output parameter is not checked here. (I'm not sure whether this problem might have already been fixed through 3e64569 somehow?)
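To make the second issue concrete: the auto-pong block would at minimum have to compare bytes against buflen. A sketch of the missing check, reusing the variables from the quoted block above (this is the shape of a check, not actual libcurl code):

size_t bytes;
*err = curl_ws_send(data, buf, buflen, &bytes, 0, CURLWS_PONG);
if(*err)
  return -1;
if(bytes != buflen) {
  /* the PONG was only partially written; today this case is silently
     treated as success, so the rest of the frame is never sent */
  *err = CURLE_AGAIN; /* or: remember the state and resume later */
  return -1;
}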
I expected the following
Potential fix
To fix both problems in a backwards-compatible way and not risk dropping any PONG replies, a possible solution (sketched in code after this list) seems to be:
- Before sending the PONG, curl_ws_recv() must detect whether it is in the first situation (i.e. curl_ws_send() is currently busy sending application data). If yes: return CURLE_AGAIN from curl_ws_recv() and put curl_ws_recv() into a "wait state", where it always returns CURLE_AGAIN until curl_ws_send() is available again. Then continue with sending the PONG. (Note that the application might accidentally congest curl_ws_send() in such a way that this "wait state" is never left.)
- After attempting to send the PONG frame, curl_ws_recv() must check whether it is in the second situation (i.e. the PONG frame was only partially sent). If yes: return CURLE_AGAIN from curl_ws_recv() and put curl_ws_recv() into a "wait state", where it always returns CURLE_AGAIN until curl_ws_send() has finished sending the PONG frame. Additionally, curl_ws_send() must be put into a "wait state" where it performs the PONG sending and returns CURLE_AGAIN on calls with other data from the application. (To reliably distinguish between calls made by the application and internal calls, a new flag might be useful/necessary?)
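A rough sketch of that guard logic follows. All names in it (ws_state, send_busy, pong_pending, ws_auto_pong) are hypothetical and do not exist in libcurl today; it only illustrates the proposed "wait state" handling:

#include <stdbool.h>
#include <curl/curl.h>

/* hypothetical per-connection state */
struct ws_state {
  bool send_busy;    /* an application frame is partially sent */
  bool pong_pending; /* an auto-PONG is partially sent */
};

/* would be called from within curl_ws_recv() when a PING arrives */
static CURLcode ws_auto_pong(CURL *easy, struct ws_state *st,
                             const char *buf, size_t buflen)
{
  size_t bytes = 0;
  if(st->send_busy)
    return CURLE_AGAIN;  /* situation 1: the send side is busy */

  CURLcode result = curl_ws_send(easy, buf, buflen, &bytes, 0, CURLWS_PONG);
  if(result)
    return result;
  if(bytes < buflen) {
    /* situation 2: remember the unfinished PONG; curl_ws_send() must now
       finish it first and answer CURLE_AGAIN to new application data */
    st->pong_pending = true;
    return CURLE_AGAIN;
  }
  return CURLE_OK;
}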
Disabling auto-pong reply
#12220 / #16744 add an option to disable the auto-pong reply feature. This is the solution we are looking to use in our application!

Of course this doesn't fix the issue itself, and in particular does not help existing applications that rely on the feature today.
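For reference, a minimal usage sketch, assuming #16744 lands as proposed with a CURLWS_NOAUTOPONG bit for CURLOPT_WS_OPTIONS (not available in the 8.12.1 release this issue was filed against):

#include <curl/curl.h>

int main(void)
{
  CURL *easy = curl_easy_init();
  if(!easy)
    return 1;
  curl_easy_setopt(easy, CURLOPT_URL, "wss://example.com/socket");
  curl_easy_setopt(easy, CURLOPT_CONNECT_ONLY, 2L); /* WebSocket mode */
  /* with auto-pong disabled, incoming PING frames are surfaced to the
     application, which replies with curl_ws_send(..., CURLWS_PONG) at a
     point where no other send operation is in flight */
  curl_easy_setopt(easy, CURLOPT_WS_OPTIONS, (long)CURLWS_NOAUTOPONG);
  curl_easy_perform(easy);
  curl_easy_cleanup(easy);
  return 0;
}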
curl/libcurl version
master (d4f9788, curl 8.12.1)
operating system
N/A
EDIT: Added reference to #16744