blockstore: Adjust rocksdb threadpool config #6166
Conversation
A previous change wired up the ability to configure rocksdb threadpool sizes through BlockstoreOptions. That change kept the threadpool sizes the same, but unintentionally limited how many concurrent flushes and compactions could be scheduled in each pool. The result was reduced parallelism with mostly idle threadpools. This change restores the previous level of allowed parallelism in the compact and flush threadpools.
I'm also hoping to get some runtime with this patch from some people who have seen issues with v2.2. Assuming I have correctly identified the issue, we will need this for v2.2 as well. I see two options:
Are there any non-obvious concerns related to backporting this PR? It seems to be (mostly) reverting us to v2.1 behavior.
No, not really. I'm technically using a different function to modify the config, but I have confidence it is doing the same thing that it was before via the
Yeah, more or less. We also leave the CLI args hooked up, which aren't in v2.1.
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@ Coverage Diff @@
##           master   #6166   +/-  ##
=====================================
- Coverage    82.8%   82.8%   -0.1%
=====================================
  Files         842     842
  Lines      377696  377690     -6
=====================================
- Hits       313061  313048    -13
- Misses      64635   64642     +7
Backports to the beta branch are to be avoided unless absolutely necessary for fixing bugs, security issues, and perf regressions. Changes intended for backport should be structured such that a minimum effective diff can be committed separately from any refactoring, plumbing, cleanup, etc. that is not strictly necessary to achieve the goal. Any of the latter should go only into master and ride the normal stabilization schedule. Exceptions include CI/metrics changes, CLI improvements, and documentation updates on a case-by-case basis.
Backported as anza-xyz#6179 (cherry picked from commit 93df405). Co-authored-by: steviez <steven@anza.xyz>
Problem
A previous change wired up the ability to configure rocksdb threadpool sizes through BlockstoreOptions. That change kept the threadpool sizes the same, but unintentionally limited how many concurrent flushes and compactions could be scheduled in each pool. The result was reduced parallelism with mostly idle threadpools. The previous change is #4214.
Summary of Changes
This change restores the previous level of allowed parallelism in the compact and flush threadpools.
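To make the distinction concrete, here is a rough sketch of the two separate knobs involved, written against the rust-rocksdb crate API. This is an illustration of the concept, not the actual PR diff; the function name `blockstore_env_options` and the pool-sizing arithmetic are assumptions for the example.

```rust
// Sketch (assumption, not the real agave code): rocksdb has two distinct
// knobs that this PR distinguishes between.
use rocksdb::{Env, Options};

fn blockstore_env_options(num_threads: i32) -> Options {
    let mut env = Env::new().expect("failed to create rocksdb Env");
    // Knob 1: the sizes of the shared background threadpools.
    // Low-priority pool handles compactions, high-priority pool handles flushes.
    env.set_background_threads(num_threads);
    env.set_high_priority_background_threads(num_threads / 4);

    let mut opts = Options::default();
    opts.set_env(&env);
    // Knob 2: how many background jobs may be *scheduled* in parallel.
    // If this is smaller than the pool sizes above, the extra threads
    // in the pools simply sit idle -- the bug described in this PR.
    opts.set_max_background_jobs(num_threads);
    opts
}
```

The key point is that shrinking knob 2 without shrinking knob 1 silently wastes threads, which matches the "mostly idle threadpools" symptom described above.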
Who Is Affected
With current MNB load, I believe a node had to have both of the following in order to experience a noticeable issue:
--enable-rpc-transaction-history
--limit-ledger-size
Item 1 introduces two extra columns that increase the total amount of data written to the Blockstore by 2-3x. Item 2 leads to elevated I/O to clean up those two extra columns (see #6017 for more details).
Testing
With the current tip of master, I put a breakpoint here, and observed the following (noting that the pool sizes scale with N threads):
You may notice that max_compaction changes in this PR vs. v2.1. In rocks, the size of the threadpool and the number of jobs that can be scheduled in parallel are separate numbers. On this box with <= v2.1, we'd create 32 threads for compaction but only allow up to 24 jobs to be run in parallel.
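The idle-thread effect described above reduces to simple arithmetic. A minimal, self-contained sketch (the helper name `idle_threads` is invented for illustration):

```rust
// Sketch: when the cap on schedulable jobs is smaller than the pool,
// the difference is the number of threads that can never be busy.
fn idle_threads(pool_size: u32, max_parallel_jobs: u32) -> u32 {
    pool_size.saturating_sub(max_parallel_jobs)
}

fn main() {
    // Numbers from the discussion above: 32 compaction threads, 24 max jobs.
    let pool = 32;
    let max_jobs = 24;
    // 32 - 24 = 8 threads sit permanently idle in the compaction pool.
    println!(
        "{} of {} compaction threads can never be scheduled",
        idle_threads(pool, max_jobs),
        pool
    );
}
```

This is why restoring the job cap to match the pool size recovers the lost parallelism without changing the number of threads created.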