# Mempool Lanes: introduce QoS to the mempool [tracking issue] #2803
## Comments
Please see cosmos/ibc-go#6159 while doing the research here. Glad to provide IBC-side context or background if that would help.
@adizere will do as part of due diligence on alternative approaches (added the link to the corresponding task above)
This is an interesting proposal. I wanted to flag, however, that it doesn't serve Penumbra's needs, and while I can't speak on behalf of other users, I am unconvinced that it solves the problem in general either. Specifically, while the proposal allows an application to specify a few statically-designated lanes and assign transactions to those lanes, it provides no way to prioritize transactions WITHIN those lanes. But this is the actual problem faced by an application receiving more transactions than there is execution capacity, which is the context where this problem needs to be solved in the first place.

Penumbra is designed so that every transaction has a chain-determined base fee and a user-specified transaction fee. The transaction fee minus the base fee is the proposer tip, and transactions should be priority ordered by the proposer tip. This is not possible to implement with the proposed design, which only allows defining a small number of lanes (since each lane linearly replicates internal Comet data structures) and assigning transactions to them, but does not allow specifying a transaction ordering within each lane. How is the app supposed to assign transactions to lanes to respect fee priority order? It cannot create one lane for every possible fee value, and it cannot statically quantize fee amounts into "priority groups" because the distribution of fee amounts is not known a priori.

The notes describe a FCFS ordering as a desirable goal. This is both a conceptual and practical mistake. Conceptually, FCFS is not a transaction ordering, because different nodes may receive transactions in different orders. An ordering must be a property of a set of transactions, i.e., using information in the transactions themselves. Practically, it is also a mistake because FCFS allows abusive users to spam low-value transactions without consequence.

For example, while this proposal would allow an app to define an IBC lane to prioritize IBC packets, this would do absolutely nothing about the IBC spam currently ongoing on deployed chains, because it does not allow any prioritization within a lane. In other words, it is only useful for enshrined, permissioned transaction submission, and not useful for transactions made by users of the chain. Every other blockchain that has users and therefore experiences congestion has some kind of fee market (emergent or otherwise) that allows transactions to be sorted by fee. Comet should do the same. At the very least, this should be described in the alternatives section.
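To make the requested within-lane ordering concrete, here is a minimal Go sketch of tip-based priority ordering as the comment above describes it (transaction fee minus base fee, sorted descending). All names (`Tx`, `Tip`, `OrderByTip`) are illustrative, not Penumbra or CometBFT APIs; the point is only that the ordering is derived from the transactions themselves, so every node computes the same order.

```go
package main

import (
	"fmt"
	"sort"
)

// Tx is a hypothetical transaction shape for illustration only.
type Tx struct {
	ID      string
	Fee     uint64 // user-specified transaction fee
	BaseFee uint64 // chain-determined base fee
}

// Tip is the proposer tip: transaction fee minus base fee.
func (t Tx) Tip() uint64 {
	if t.Fee < t.BaseFee {
		return 0 // an underpaying tx carries no tip
	}
	return t.Fee - t.BaseFee
}

// OrderByTip sorts transactions by descending proposer tip. Because the key
// is a property of the transaction set itself (unlike FCFS arrival order),
// all nodes agree on the resulting order.
func OrderByTip(txs []Tx) {
	sort.SliceStable(txs, func(i, j int) bool {
		return txs[i].Tip() > txs[j].Tip()
	})
}

func main() {
	txs := []Tx{
		{ID: "a", Fee: 100, BaseFee: 90}, // tip 10
		{ID: "b", Fee: 150, BaseFee: 90}, // tip 60
		{ID: "c", Fee: 95, BaseFee: 90},  // tip 5
	}
	OrderByTip(txs)
	for _, tx := range txs {
		fmt.Println(tx.ID, tx.Tip())
	}
}
```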
Thanks for the feedback @hdevalence! I think what you want is something like the deprecated priority mempool, where each tx has a priority? One of the main requirements for this proposal was to respect the (best-effort) FIFO ordering because it's needed by IBC, so in this initial design all lanes will use the same scheduling algorithm for transaction dissemination and for reaping. That doesn't mean that in the future we can't change how each lane prioritizes its transactions by customizing the scheduling logic per lane, probably with a callback defined in the application.
With this proposal, the application can define a high-priority lane for IBC restricted to small and medium transactions, and another low-priority lane for large transactions (IBC or not). I know it's not the final solution to the spamming problem, but it helps.
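The app-side classification described above could look roughly like the following Go sketch. The lane names, the `ClassifyTx` helper, and the 4 KiB size cutoff are all assumptions made up for illustration; the proposal leaves lane definitions to the application and does not fix these values.

```go
package main

import "fmt"

// Lane is a hypothetical lane identifier; ADR-118 leaves lane definition
// to the application.
type Lane int

const (
	LaneHighPriorityIBC Lane = iota // small and medium IBC txs
	LaneLowPriority                 // large txs, IBC or not
)

// maxHighPriorityBytes is an assumed size cutoff, not taken from the proposal.
const maxHighPriorityBytes = 4096

// ClassifyTx sketches the classification from the comment above: IBC txs at
// or under the size cutoff go to the high-priority lane, everything else to
// the low-priority lane.
func ClassifyTx(isIBC bool, sizeBytes int) Lane {
	if isIBC && sizeBytes <= maxHighPriorityBytes {
		return LaneHighPriorityIBC
	}
	return LaneLowPriority
}

func main() {
	fmt.Println(ClassifyTx(true, 1024)) // small IBC tx  -> high-priority lane
	fmt.Println(ClassifyTx(true, 9000)) // large IBC tx  -> low-priority lane
	fmt.Println(ClassifyTx(false, 512)) // non-IBC tx    -> low-priority lane
}
```

Note that, as the earlier comment points out, a static rule like this cannot express fee-based ordering within a lane; it only routes traffic between lanes.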
Really appreciate the insights @hdevalence, in particular the perspective from Penumbra, which helps us better understand the application requirements. I wonder if the concern about within-lane prioritization (or the roughly equivalent concern of a fee market) goes in the direction of #1112. If so, I would love to have your input on that discussion; if not, then I'll open another issue dedicated to within-lane prioritization work to gather requirements.
## Feature Request

### Summary

In order to enable better management of transaction traffic at the P2P level, we need to allow the app to classify transactions. For that, we introduce the concept of Quality of Service (QoS), well known in IP networks, which we realize as mempool lanes. ADR-118 describes this feature in more detail.
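To illustrate the lane idea, here is a minimal Go sketch (not the ADR-118 design) of a mempool holding one FIFO queue per lane and reaping higher-priority lanes first. The `LanedMempool` type, its methods, and the scheduling policy are all assumptions for illustration; the actual data structures and scheduling algorithm are specified in ADR-118.

```go
package main

import "fmt"

// LanedMempool is an illustrative toy, not the CometBFT implementation:
// one FIFO queue per lane, lanes reaped in a fixed priority order.
type LanedMempool struct {
	lanes map[string][]string // lane name -> FIFO queue of txs
	order []string            // lane names, highest priority first
}

func NewLanedMempool(lanesByPriority ...string) *LanedMempool {
	return &LanedMempool{
		lanes: make(map[string][]string),
		order: lanesByPriority,
	}
}

// Add appends a tx to the back of its lane's queue.
func (m *LanedMempool) Add(lane, tx string) {
	m.lanes[lane] = append(m.lanes[lane], tx)
}

// Reap collects up to max txs, visiting lanes in priority order; within a
// lane, txs keep their (best-effort) FIFO arrival order, the property that
// IBC relies on.
func (m *LanedMempool) Reap(max int) []string {
	var out []string
	for _, lane := range m.order {
		for _, tx := range m.lanes[lane] {
			if len(out) == max {
				return out
			}
			out = append(out, tx)
		}
	}
	return out
}

func main() {
	m := NewLanedMempool("ibc", "default")
	m.Add("default", "d1")
	m.Add("ibc", "i1")
	m.Add("default", "d2")
	m.Add("ibc", "i2")
	fmt.Println(m.Reap(3)) // [i1 i2 d1]
}
```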
This issue will track:

### Teams interested

### Execution Steps
As with previous sizeable features, the design and implementation work will be carried out in a feature branch (`feature/mempool-qos`, bifurcating from `main`) to avoid the risk of last-minute serious problems. A feature branch enables the (management) decision to defer shipping a feature in a new release if it is considered not ready.

If we need our users to validate our design/implementation while this feature is in progress, we will probably need to maintain some temporary branch off a released version (likely `v0.38.x`) that reflects our work in `feature/mempool-qos`. We will work out the details as we start creating PRs with code.

#### Initial steps

- `feature/mempool-qos`
#### Other related pending improvements on mempool

- Remove `TxsAvailable` and `CreateEmptyBlocks` (not actually needed)
- (laundry list)

#### Design: core work
These tasks are mostly sequential, although some of them can be done in parallel.

- `main` #3499
- Finalize spec/design document (ADR or move elsewhere) [7 days]

Initial estimate (this section): 12 days
#### Implementation

- `TxsFront` and `TxsWaitChan` in `CListMempool` that leak implementation details #3303
- `TryAddTx` to mempool reactor #3296. Implemented in Dog's code: take it from there
- `kvstore` and `e2e` apps with lanes #3503
- `Info` out of Handshake (likely) to `NewNode`
- `CListMempool` #3484

Initial estimate (this section): 50 days
#### Final steps

- Merge `feature/mempool-qos` back into `main` when the implementation is validated (200 node run, without report)
- `feature/mempool-qos` on top of `v0.38.x`, `v0.38.x-experimental`, or `v1.x-experimental` [1 day]
- `v1.x-experimental` #4179
- Deal with the work done off `v0.38.x` (merge or squash it into `v0.38.x-experimental`) [7 days]
- `v1.x-experimental`. Cosmos SDK will integrate Lanes from that branch ([Feature]: Mempool lanes cosmos/cosmos-sdk#21780). For the moment we won't backport to `v0.38.x` or `v0.38.x-experimental` unless there are interested users especially requiring these versions.

Initial estimate (this section): 12 days
Additional changes after merging to main:

- `load_max_seconds` to manifest + optional flags to `load` command #4175 (required for sending tx load on DigitalOcean testnets)
command #4175 (required for sending tx load on DigitalOcean testnets)The text was updated successfully, but these errors were encountered: