8000 add cluster flushslot command. by wuranxx · Pull Request #1384 · valkey-io/valkey · GitHub

add cluster flushslot command. #1384

Open · wants to merge 17 commits into base: unstable from cluster-flush-slot
Conversation

wuranxx
@wuranxx wuranxx commented Dec 3, 2024

add cluster flushslot command. #1133

background

We want to implement the ability to SYNC or ASYNC free all data from a single slot. This is useful in two cases:

  1. When a primary loses ownership of a slot, it will temporarily freeze while it synchronously frees the slot.
  2. When doing resharding, you may prefer to forcibly delete all the data from a slot and assign it to another node.

behavior

  1. Add CLUSTER FLUSHSLOT <slot> [ASYNC|SYNC] command.

  2. Modify the function signature of delKeysInSlot, adding a lazy parameter to decide whether to delete keys SYNC or ASYNC.

codecov bot commented Dec 4, 2024

Codecov Report

Attention: Patch coverage is 88.00000% with 3 lines in your changes missing coverage. Please review.

Project coverage is 71.06%. Comparing base (c9c49b4) to head (ddfa7ff).
Report is 5 commits behind head on unstable.

Files with missing lines Patch % Lines
src/cluster.c 81.81% 2 Missing ⚠️
src/cluster_legacy.c 92.85% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff            @@
##           unstable    #1384   +/-   ##
=========================================
  Coverage     71.05%   71.06%           
=========================================
  Files           123      123           
  Lines         66106    66134   +28     
=========================================
+ Hits          46971    46997   +26     
- Misses        19135    19137    +2     
Files with missing lines Coverage Δ
src/commands.def 100.00% <ø> (ø)
src/cluster_legacy.c 86.55% <92.85%> (-0.01%) ⬇️
src/cluster.c 89.90% <81.81%> (-0.13%) ⬇️

... and 15 files with indirect coverage changes


@wuranxx wuranxx changed the title WIP: add cluster flushslot command. add cluster flushslot command. Dec 5, 2024
@wuranxx
Author
wuranxx commented Dec 5, 2024

Please help review this PR. Thank you. @hwware @madolson @PingXie @zuiderkwast

@hwware
Member
hwware commented Dec 5, 2024

There is a lot of information in issue #1133. To save reviewers some energy, could you please update the top comment to describe the background for why we want this new command? You could reference PR #1392.

Thanks a lot

Member
@PingXie PingXie left a comment


LGTM overall. Thanks @wuranxx!

addReplyErrorObject(c, shared.syntaxerr);
return;
}
}
Member


we should fail the c->argc > 4 case I think

Author


At the call site of clusterCommandFlushslot, I already check the number of parameters. Is it still necessary to validate the parameter count again within the function?

 else if (!strcasecmp(c->argv[1]->ptr, "flushslot") && (c->argc == 3 || c->argc == 4)) {
        /* CLUSTER FLUSHSLOT <slot> [ASYNC|SYNC] */
        clusterCommandFlushslot(c);
    }

Contributor


It's a bit weird that the arity check is split into multiple places. But I don't mind too much because it's like that for many commands already.

But I think this should be in cluster.c instead of cluster_legacy.c. Cluster_legacy is about the cluster bus, and we would replace this file if we replace the cluster bus for cluster V2. At least, that's the idea. This command doesn't use the cluster bus at all, so I think it should be in cluster.c and be called directly from clusterCommand().

@wuranxx wuranxx force-pushed the cluster-flush-slot branch from c66cb6a to ab74efb Compare December 6, 2024 08:24
@wuranxx wuranxx changed the title add cluster flushslot command. WIP: add cluster flushslot command. Dec 6, 2024
@wuranxx
Author
wuranxx commented Dec 6, 2024

Before I rebased onto the latest unstable (old commit: 32f7541), the Tcl tests and manual command tests ran fine locally.

After I rebased onto the latest unstable (6df376d), the command failed when executed locally.

It is possible that changes in the code during this period caused the inconsistent behavior. Until I confirm the cause, I have marked this PR as WIP (Work In Progress).

@wuranxx wuranxx force-pushed the cluster-flush-slot branch from ab74efb to 4db74af Compare January 22, 2025 03:20
@wuranxx wuranxx changed the title WIP: add cluster flushslot command. add cluster flushslot command. Jan 22, 2025
@wuranxx
Author
wuranxx commented Jan 22, 2025

> Before I rebased onto the latest unstable (old commit: 32f7541), the Tcl tests and manual command tests ran fine locally. After I rebased onto the latest unstable (6df376d), the command failed when executed locally. It is possible that changes in the code during this period caused the inconsistent behavior. Until I confirm the cause, I have marked this PR as WIP (Work In Progress).

Fixed. For details, see #1399.

@zuiderkwast
Contributor

@enjoy-binbin @murphyjacob4 Do you want this command for atomic slot migration, for example when a slot migration is cancelled, the importing node can send CLUSTER FLUSHSLOT to its replicas to clean up the slot?

@murphyjacob4
Contributor

I think we will have a protocol for cancelling the slot migration, and this will trigger the drop for all slot ranges being migrated locally. But we should be able to use the async delKeysInSlot functionality to implement this

@zuiderkwast
Contributor

> I think we will have a protocol for cancelling the slot migration, and this will trigger the drop for all slot ranges being migrated locally. But we should be able to use the async delKeysInSlot functionality to implement this

@murphyjacob4 The replicas are not involved in the slot migration protocol IIUC. So when a primary cleans up a cancelled slot import, do you just replicate a lot of DEL commands to the replicas? Replicating a single FLUSHSLOT may save some work and bandwidth...

@murphyjacob4
Contributor

> do you just replicate a lot of DEL commands to the replicas? Replicating a single FLUSHSLOT may save some work and bandwidth...

Yeah that's a good idea. Currently, this is what delKeysInSlot does, and it could result in a lot of bandwidth for big slots. Disabling the replication of the individual keys in delKeysInSlot and replacing with a single FLUSHSLOT makes sense to me. This path is also executed in the dirty slot case, so it won't just be an improvement for atomic slot migration (although I anticipate it will be executed more frequently there).

Contributor
@zuiderkwast zuiderkwast left a comment


Hi @wuranxx and sorry for the delay.

It looks straightforward, but there are some details we should think about a bit more. I'm not actually sure what's best to do in all cases, but see my comments below.

addReplyErrorObject(c, shared.syntaxerr);
return;
}
}
Contributor


It's a bit weird that the arity check is split into multiple places. But I don't mind too much because it's like that for many commands already.

But I think this should be in cluster.c instead of cluster_legacy.c. Cluster_legacy is about the cluster bus, and we would replace this file if we replace the cluster bus for cluster V2. At least, that's the idea. This command doesn't use the cluster bus at all, so I think it should be in cluster.c and be called directly from clusterCommand().

dbDelete(&server.db[0], key);
propagateDeletion(&server.db[0], key, server.lazyfree_lazy_server_del);
propagateDeletion(&server.db[0], key, lazy);
signalModifiedKey(NULL, &server.db[0], key);
/* The keys are not actually logically deleted from the database, just moved to another node.
* The modules needs to know that these keys are no longer available locally, so just send the
Contributor


This code comment about keyspace notifications:

        /* The keys are not actually logically deleted from the database, just moved to another node.
         * The modules needs to know that these keys are no longer available locally, so just send the
         * keyspace notification to the modules, but not to clients. */

delKeysInSlot is used for slot migrations, which are considered not deleting the keys but just migrating them to another node.

This is not necessarily the case for CLUSTER FLUSHSLOT. I think we should treat CLUSTER FLUSHSLOT more like FLUSHDB. For FLUSHDB, what are the keyspace notifications sent? A lot of DEL events or one FLUSHDB event?

Maybe we shouldn't use delKeysInSlot here, or use some extra argument or flag to indicate if it's a migration or if we're just deleting the keys.

Author


> For FLUSHDB, what are the keyspace notifications sent? A lot of DEL events or one FLUSHDB event?

In my test, flushdb does not trigger any event:
192.168.0.1:2774> set k2 1
OK
192.168.0.1:2774> flushdb
OK
192.168.0.1:2774> PSUBSCRIBE __key*@0__:*
Reading messages... (press Ctrl-C to quit)
1) "psubscribe"
2) "__key*@0__:*"
3) (integer) 1
1) "pmessage"
2) "__key*@0__:*"
3) "__keyspace@0__:k2"
4) "set"
1) "pmessage"
2) "__key*@0__:*"
3) "__keyevent@0__:set"
4) "k2"
> Maybe we shouldn't use delKeysInSlot here, or use some extra argument or flag to indicate if it's a migration or if we're just deleting the keys.

Yes, we should add a flag to mark whether the operation is a migration or a plain delete.

If we want it to behave like flushdb, we should write a new method to implement the flushslot logic.

Contributor


> In my test, flushdb does not trigger any event.

That's surprising, but OK. I guess flushslot should behave the same as flushdb. I'm open to other suggestions though. If this was a bad design for flushdb, then we can do something else for flushslot.

> If we want it to behave like flushdb, we should write a new method to implement the flushslot logic.

Let's do it!

Author


Okay, I'll try to rewrite the flushslot as per the flushdb implementation.

if (lazy) {
dbAsyncDelete(&server.db[0], key);
} else {
dbSyncDelete(&server.db[0], key);
Contributor


Here, the deleted keys are replicated simply as multiple DEL commands. For comparison, FLUSHDB is replicated as a single FLUSHDB command, which is much less traffic than many DEL commands. If we want to replicate FLUSHSLOT as a single FLUSHSLOT command, then we should probably check that the replicas support this. We already have the replica's version stored in c->repl_data->replica_version (REPLCONF VERSION is sent by the replica since 8.0), so we could check that it's >= 8.1 for all connected replicas. It's also fine to skip this; we can do it later as a separate optimization.

Contributor
@zuiderkwast zuiderkwast Apr 3, 2025


Actually I think we don't have to worry about old replicas. This is a new feature and users should use new features only after all nodes have been upgraded.

Contributor


For atomic slot migration, we will need to send some FLUSHSLOT commands implicitly:

  1. If an import doesn't succeed (it is failed or cancelled), we need to flush the contents of the staged slots on the replicas.
  2. If there is an import to a target primary and FLUSHDB is run on that target, the target will need to break this out into FLUSHSLOT commands to avoid flushing the importing slots.
  3. If there is an export happening from some source primary and FLUSHDB is run, we need to break this out into FLUSHSLOT commands (so that the target only flushes the impacted importing slots).

Plus, in the case that a dirty slot is found, we currently delete it by sending individual key deletions. It would make sense to create the FLUSHSLOT command implicitly to improve replication traffic.

Due to the cases of implicit FLUSHSLOT propagation, I would suggest that the delKeysInSlot helper include logic for deciding whether to propagate individual deletions or a single FLUSHSLOT based on the capabilities of the replicas. Many customers may be running a cross-version deployment for some time while they do rolling upgrades.

@wuranxx wuranxx force-pushed the cluster-flush-slot branch from 4db74af to 0d751a2 Compare April 12, 2025 08:52
@wuranxx
Author
wuranxx commented Apr 12, 2025

I added a "command" scenario to delKeysInSlot. In the command scenario, delKeysInSlot does not propagate per-key DEL commands, but it still notifies del events.

flushslot now just propagates a single flushslot command to the AOF and replicas.

please help review this PR, thank you! @zuiderkwast @PingXie

Member
@PingXie PingXie left a comment


Overall LGTM. We should also add official documentation for this new command.

R 0 CLUSTER FLUSHSLOT $key_slot ASYNC
assert_equal [R 0 CLUSTER COUNTKEYSINSLOT $key_slot] 0
}
}
Member


Can you add a test case to validate the command is indeed replicated as-is, as opposed to the key deletions getting replicated?

Author


Of course. Added.

@zuiderkwast zuiderkwast added needs-doc-pr This change needs to update a documentation page. Remove label once doc PR is open. and removed documentation labels Apr 14, 2025
@hwware hwware force-pushed the cluster-flush-slot branch 3 times, most recently from 12e3d55 to 1744f75 Compare April 14, 2025 16:45
@hwware hwware force-pushed the cluster-flush-slot branch from 0b478f3 to eab7f7c Compare April 24, 2025 15:08
wuranxx added 4 commits April 28, 2025 20:48
Signed-off-by: wuranxx <wuranxx@foxmail.com>
Signed-off-by: hwware <wen.hui.ware@gmail.com>
Signed-off-by: wuranxx <wuranxx@foxmail.com>
Signed-off-by: hwware <wen.hui.ware@gmail.com>
Signed-off-by: wuranxx <wuranxx@foxmail.com>
Signed-off-by: hwware <wen.hui.ware@gmail.com>
Signed-off-by: wuranxx <wuranxx@foxmail.com>
Signed-off-by: hwware <wen.hui.ware@gmail.com>
@wuranxx wuranxx force-pushed the cluster-flush-slot branch from eab7f7c to cf527a6 Compare April 28, 2025 12:53
wuranxx and others added 13 commits April 29, 2025 17:25
2. add command scene to delKeysInSlot. In command sence, delKeysInSlot will not propagate del command, and notify del event.

Signed-off-by: wuranxx <wuranxx@foxmail.com>
Signed-off-by: hwware <wen.hui.ware@gmail.com>
Signed-off-by: hwware <wen.hui.ware@gmail.com>
Signed-off-by: hwware <wen.hui.ware@gmail.com>
Signed-off-by: hwware <wen.hui.ware@gmail.com>
Co-authored-by: Ping Xie <pingxie@outlook.com>
Signed-off-by: wuranxx <155042264+wuranxx@users.noreply.github.com>
Signed-off-by: hwware <wen.hui.ware@gmail.com>
Co-authored-by: Ping Xie <pingxie@outlook.com>
Signed-off-by: wuranxx <155042264+wuranxx@users.noreply.github.com>
Signed-off-by: hwware <wen.hui.ware@gmail.com>
Co-authored-by: Ping Xie <pingxie@outlook.com>
Signed-off-by: wuranxx <155042264+wuranxx@users.noreply.github.com>
Signed-off-by: hwware <wen.hui.ware@gmail.com>
Co-authored-by: Ping Xie <pingxie@outlook.com>
Signed-off-by: wuranxx <155042264+wuranxx@users.noreply.github.com>
Signed-off-by: hwware <wen.hui.ware@gmail.com>
Co-authored-by: Ping Xie <pingxie@outlook.com>
Signed-off-by: wuranxx <155042264+wuranxx@users.noreply.github.com>
Signed-off-by: hwware <wen.hui.ware@gmail.com>
Co-authored-by: Ping Xie <pingxie@outlook.com>
Signed-off-by: wuranxx <155042264+wuranxx@users.noreply.github.com>
Signed-off-by: hwware <wen.hui.ware@gmail.com>
fix `commands.def`.

Signed-off-by: wuranxx <wuranxx@foxmail.com>
Signed-off-by: wuranxx <wuranxx@foxmail.com>
@wuranxx wuranxx force-pushed the cluster-flush-slot branch from cf527a6 to ddfa7ff Compare April 29, 2025 09:26
Labels
needs-doc-pr This change needs to update a documentation page. Remove label once doc PR is open.