Proxy Streams: individual GRPC streams to the backend for connection or subscription #680
Proposed changes
Proposing an experimental feature called Proxy Streams.
Proxy streams allow pushing data to an individual client channel subscription (and optionally to the connection itself) directly from your application backend over a unidirectional or bidirectional GRPC stream.
The stream is established between Centrifugo and your application backend as soon as a user subscribes to a channel (or connects to Centrifugo). The scheme may be useful when you want to generate individual streams that should only live while the client is connected or subscribed to a channel.
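To make the idea more concrete, below is a minimal Go sketch of what the backend side of a unidirectional subscription stream could look like. Everything named `proxyproto` (the generated package, `SubscribeStream` service, `SubscribeRequest`, `StreamPush` and their fields) is an assumption introduced for illustration only, not a finalized API:

```go
// Hypothetical backend handler for a unidirectional subscription stream.
// The proxyproto package and its types are assumed to be generated from an
// illustrative .proto file – they are not part of any finalized Centrifugo API.
package main

import (
	"encoding/json"
	"log"
	"net"
	"time"

	"google.golang.org/grpc"

	proxyproto "example.com/yourapp/proxyproto" // hypothetical generated stubs
)

type streamServer struct {
	proxyproto.UnimplementedSubscribeStreamServer // hypothetical base type
}

// SubscribeUnidirectional would be called by Centrifugo when a client subscribes
// to a channel configured to use the stream proxy. Everything sent on the stream
// is delivered to that individual subscription only. The stream lives while the
// client stays subscribed: when the client unsubscribes or disconnects, the
// stream is closed and the handler's context is cancelled.
func (s *streamServer) SubscribeUnidirectional(
	req *proxyproto.SubscribeRequest,
	stream proxyproto.SubscribeStream_SubscribeUnidirectionalServer,
) error {
	log.Printf("stream started: user=%s channel=%s", req.User, req.Channel)

	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()

	for {
		select {
		case <-stream.Context().Done():
			// Client unsubscribed or disconnected – the stream is over.
			return nil
		case t := <-ticker.C:
			data, _ := json.Marshal(map[string]string{"time": t.Format(time.RFC3339)})
			if err := stream.Send(&proxyproto.StreamPush{Data: data}); err != nil {
				return err
			}
		}
	}
}

func main() {
	lis, err := net.Listen("tcp", ":12000")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	proxyproto.RegisterSubscribeStreamServer(srv, &streamServer{})
	log.Fatal(srv.Serve(lis))
}
```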
In this case Centrifugo plays the role of a WebSocket-to-GRPC streaming proxy: it keeps numerous real-time connections from the application's clients and establishes GRPC streams to the backend, multiplexing them over a pool of HTTP/2 connections (the transport used by GRPC).
This may be configured similarly to our connect or subscribe proxies. For a connection-level stream proxy we open a unidirectional or bidirectional GRPC stream for the lifetime of the connection; for a subscription-level stream proxy we open one for the lifetime of the subscription. In both cases the backend can close the stream at any point.
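For the bidirectional case, the handler could look like the sketch below, extending the illustrative `streamServer` and `proxyproto` types from the example above: data sent by the client on the subscription arrives via `Recv`, and anything written with `Send` goes back to that single subscription. This is only one possible shape of the contract, not a committed design:

```go
// Hypothetical bidirectional variant: the backend both receives messages sent
// by the client on the subscription and pushes data back over the same stream.
func (s *streamServer) SubscribeBidirectional(
	stream proxyproto.SubscribeStream_SubscribeBidirectionalServer,
) error {
	for {
		msg, err := stream.Recv() // blocks until the client sends or the stream ends
		if err != nil {
			// io.EOF or a transport error: subscription ended, stop the stream.
			return nil
		}
		// Echo the received payload back to this individual subscription.
		if err := stream.Send(&proxyproto.StreamPush{Data: msg.Data}); err != nil {
			return err
		}
	}
}
```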
The downside of the idea is the number of additional goroutines needed for each such stream. Every stream costs 3 extra goroutines, so a WebSocket connection with 5 streams requires 2 + 5*3 = 17 goroutines. This means more CPU and more memory, which is what prevents me from proceeding right away. On the other hand, the approach should scale horizontally (scaling GRPC on the backend is not straightforward, but definitely possible), and not every use case involves millions of concurrent GRPC streams.
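As a rough back-of-the-envelope illustration of the concern (the numbers below are assumptions, not measurements): 100k concurrent connections with 5 streams each and 3 goroutines per stream means 100000 * 5 * 3 = 1.5M extra goroutines; at an assumed few kilobytes of stack per goroutine this adds up to several gigabytes of memory before counting stream buffers and scheduler overhead.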