SCTP will lose data if a stream is closed 'too soon' after the last blocking write call. · Issue #361 · pion/sctp
SCTP will lose data if a stream is closed 'too soon' after the last blocking write call. #361

Open
coridonhenshaw opened this issue Jan 25, 2025 · 1 comment

@coridonhenshaw

Summary

SCTP will lose data if a stream is closed 'too soon' after the last blocking write call.

The stream.Close() API does not flush transmission buffers before returning, nor is there any obvious way to manually flush a stream's transmission buffer before calling Close(). As a result, any data still buffered for transmission will be lost, and "use of closed network connection" errors will surface from lower layers of the communications stack, such as client.Shutdown() and (udp)conn.Close().

Motivation

Data loss is a bad thing. SCTP should honor the implied intent of sctp.ReliabilityTypeReliable when terminating streams by sending all pending data to the peer before closing the stream.

Describe alternatives you've considered

There does not seem to be any robust way to conduct an orderly shutdown of a stream without cooperation from the peer to confirm that it has received all expected data (a sketch of such a handshake follows).
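
For illustration, such a peer handshake might look like the sketch below. This is not a pion/sctp facility: the "DONE" and "ACK" markers and the closeWhenAcked helper are a hypothetical application-level convention that both peers would have to agree on (io.ReadFull is from the standard io package).

// closeWhenAcked is a hypothetical helper: it tells the peer that no
// more data is coming, waits for an application-level acknowledgement,
// and only then closes the stream.
func closeWhenAcked(stream *sctp.Stream) error {
	// Hypothetical end-of-data marker agreed on by both peers.
	if _, err := stream.Write([]byte("DONE")); err != nil {
		return err
	}
	// Block until the peer confirms it has read everything.
	ack := make([]byte, 3)
	if _, err := io.ReadFull(stream, ack); err != nil {
		return err
	}
	// The peer has confirmed receipt; closing is now safe.
	return stream.Close()
}

// The receiver, after reading the "DONE" marker, would reply with:
//   _, _ = stream.Write([]byte("ACK"))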

Blocking until an OnBufferedAmountLow() handler reports 0 bytes in the send buffer only moves the issue down to the underlying transport. Closing a stream may succeed without error, but data will still be lost by lower layers of the stack and a "use of closed network connection" error will still result.
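
For reference, that drain-then-close attempt looks roughly like the sketch below. SetBufferedAmountLowThreshold, OnBufferedAmountLow, and BufferedAmount are existing pion/sctp stream methods; the channel plumbing and the waitForDrain name are illustrative only.

// waitForDrain blocks until the stream's send buffer is empty. Even
// when this succeeds, Close() can still race with in-flight data at
// lower layers of the stack, as described above.
func waitForDrain(stream *sctp.Stream) {
	drained := make(chan struct{}, 1)
	stream.SetBufferedAmountLowThreshold(0)
	stream.OnBufferedAmountLow(func() {
		select {
		case drained <- struct{}{}:
		default:
		}
	})
	// Re-check after registering the callback so a buffer that drained
	// in the meantime does not leave us blocked forever.
	if stream.BufferedAmount() > 0 {
		<-drained
	}
}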

Example

Based on the pinger/ponger examples. Error checking removed for brevity.

Send:

package main

import (
	"log"
	"math/rand"
	"net"

	"github.com/pion/logging"
	"github.com/pion/sctp"
)

func main() {
	conn, err := net.Dial("udp", "127.0.0.1:9899")

	config := sctp.Config{
		NetConn:       conn,
		LoggerFactory: logging.NewDefaultLoggerFactory(),
		BlockWrite:    true, // Write blocks until the data is buffered for sending
	}
	a, err := sctp.Client(config)

	stream, err := a.OpenStream(0, sctp.PayloadTypeWebRTCString)

	stream.SetReliabilityParams(false, sctp.ReliabilityTypeReliable, 10)

	const Size = 256
	const Iter = (Size * 1024 * 1024) / 65536 // sends 4096 blocks of 64 KiB each
	for i := 0; i < Iter; i++ {
		pingMsg := make([]byte, 65536)
		rand.Read(pingMsg)
		_, err = stream.Write(pingMsg)
	}

	// One of these (usually conn.Close()) will return a
	// "use of closed network connection" error.
	err = stream.Close()
	if err != nil {
		log.Panic(err)
	}
	err = a.Close()
	if err != nil {
		log.Panic(err)
	}
	err = conn.Close()
	if err != nil {
		log.Panic(err)
	}
}

Receive:

package main

import (
	"fmt"
	"net"

	"github.com/pion/logging"
	"github.com/pion/sctp"
)

func main() {
	addr := net.UDPAddr{
		IP:   net.IPv4(0, 0, 0, 0),
		Port: 9899,
	}
	conn, _ := net.ListenUDP("udp", &addr)
	config := sctp.Config{
		// disconnectedPacketConn is the net.Conn wrapper from the
		// pion/sctp ponger example (definition omitted here).
		NetConn:       &disconnectedPacketConn{pConn: conn},
		LoggerFactory: logging.NewDefaultLoggerFactory(),
	}
	a, _ := sctp.Server(config)
	stream, _ := a.AcceptStream()
	stream.SetReliabilityParams(false, sctp.ReliabilityTypeReliable, 0)
	var pongSeqNum int
	for {
		buff := make([]byte, 65536)
		n, _ := stream.Read(buff)

		fmt.Printf("received: %v, count: %v\n", n, pongSeqNum)
		pongSeqNum++
	}
}

This code typically stops, as a result of the send process ending, at block 4093 (zero-based count) out of 4094; block 4094 is lost.

@sirzooro
Contributor
sirzooro commented Mar 6, 2025

The opposite expectation, an ungraceful close that drops all enqueued data when the channel is closed, is also valid. It allows an application that sends a realtime data stream via a data channel to better handle network contention, in particular to avoid the high memory and CPU usage caused by large amounts of enqueued data waiting for (re)transmission. See also #357.
