Get requests should go via a raft log too #1
Hello, first of all, many, many thanks for the feedback. And no man, not discouraged; on the contrary, I'm encouraged by this. I love solving these kinds of problems, that's why I'm doing this (and I hope it will be full time, not a side project 😄). Yes, you are totally right, a
No, definitely not, first for the reasons you described, but also because I don't want everything to go through the leader: it puts the burden on that one node and increases latency (even though this can happen anyway if a system using imdgo is write-heavy).
Here I simply rely fully on the underlying Hashicorp Raft implementation. On system startup all of the nodes have to be known; adding a node to a running cluster isn't supported at the moment. When you have a new node, you update your config and redeploy your instances, and that way the cluster is bootstrapped with the additional nodes (see the sketch below). Not ideal, I know, but I'm taking this one small step at a time 😄 For now, I'm gonna update the readme to remove the misleading CP property, which I violate. Again, thank you very much, man, for taking the time to check this out, much appreciated!!
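For illustration, a minimal sketch of that fixed-membership bootstrap using the hashicorp/raft API (which imdgo builds on): every node starts with the full member list from its config and calls BootstrapCluster once on a fresh cluster. The no-op FSM, the in-memory stores/transport, and all names here are assumptions for the sketch, not imdgo's actual code.

```go
package main

import (
	"errors"
	"io"

	"github.com/hashicorp/raft"
)

// noopFSM is a placeholder state machine; a real store would apply
// commands to its in-memory map here.
type noopFSM struct{}

func (noopFSM) Apply(*raft.Log) interface{}         { return nil }
func (noopFSM) Snapshot() (raft.FSMSnapshot, error) { return nil, errors.New("not implemented") }
func (noopFSM) Restore(io.ReadCloser) error         { return nil }

func main() {
	// Every node knows the full member list up front (from its config),
	// so the cluster can be bootstrapped with all servers at once.
	members := []raft.Server{
		{ID: raft.ServerID("node-1"), Address: raft.ServerAddress("10.0.0.1:7000")},
		{ID: raft.ServerID("node-2"), Address: raft.ServerAddress("10.0.0.2:7000")},
		{ID: raft.ServerID("node-3"), Address: raft.ServerAddress("10.0.0.3:7000")},
	}

	conf := raft.DefaultConfig()
	conf.LocalID = members[0].ID

	// In-memory stores and transport keep the sketch self-contained;
	// a real deployment would use persistent stores and a TCP transport.
	logs := raft.NewInmemStore()
	stable := raft.NewInmemStore()
	snaps := raft.NewInmemSnapshotStore()
	_, trans := raft.NewInmemTransport(members[0].Address)

	r, err := raft.NewRaft(conf, noopFSM{}, logs, stable, snaps, trans)
	if err != nil {
		panic(err)
	}

	// BootstrapCluster is only called on a fresh cluster; adding a node
	// later would mean updating the config and redeploying, as described above.
	if err := r.BootstrapCluster(raft.Configuration{Servers: members}).Error(); err != nil {
		panic(err)
	}
}
```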
Cool! A fixed membership is a good choice for an educational project! Otherwise it gets hairy really quickly :) Regarding consistency guarantees: the problem with local-only reads is that they are not even sequentially consistent. Local reads can yield indefinitely stale results. Consider a scenario where a follower, say n2, is partitioned away or has fallen behind: with local-only reads, n2 keeps serving its old values no matter how far the rest of the cluster has progressed. This is OK for many applications and the performance benefits are obvious, so some systems allow configurable guarantees. Hazelcast internally uses 3 different modes for read-only requests, see: https://github.com/hazelcast/hazelcast/blob/14ef0dbefa4a6fa00a50a8873aeea02ec6c3f5e4/hazelcast/src/main/java/com/hazelcast/cp/internal/raft/QueryPolicy.java#L23-L54 (a rough Go equivalent is sketched below).
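As a rough illustration of such configurable guarantees, here is what a per-read policy could look like in Go. The names loosely mirror Hazelcast's LINEARIZABLE / LEADER_LOCAL / ANY_LOCAL query policies; nothing like this exists in imdgo, it's purely a hypothetical sketch.

```go
package store

// ReadPolicy lets the caller choose the consistency/latency trade-off
// per read, in the spirit of Hazelcast's QueryPolicy. All names here
// are hypothetical.
type ReadPolicy int

const (
	// ReadLinearizable runs the read through Raft (or an equivalent
	// quorum check), so it never observes stale data.
	ReadLinearizable ReadPolicy = iota
	// ReadLeaderLocal serves the read from the leader's local state
	// without consulting a quorum; it can be stale across leader changes.
	ReadLeaderLocal
	// ReadAnyLocal serves the read from any node's local state; cheapest,
	// but can be indefinitely stale, as in the scenario above.
	ReadAnyLocal
)
```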
Yes, yes, I understand, you're totally right!! Thx, this is the ticket on top of my backlog :) Can I ask just this: with this setup, what is the formal consistency model here (I admit, consistency models are still a bit tricky for me)?
That's a good question. Read-Your-Writes is a fairly basic property: it's what users implicitly expect, and many consistency models are stronger than that. This is a good intro: https://jepsen.io/consistency (the picture is clickable)
Yep, Jepsen is my reference on this 😄
100%, I'm gonna ensure that for starters, and the rest will stay the same for now. If I got everything right, ensuring Read-Your-Writes could make this sequentially consistent: causality is ensured by Raft, and according to Jepsen, PRAM is exactly equivalent to the combination of read-your-writes, monotonic reads, and monotonic writes; and we already have that (that is, will have that). Excellent, this is getting more and more interesting 😄 Thx a lot, man!!
👀
Hi, this looks like a fun project. I am really not a golang dev, but I took a quick peek and here are some quick observations:
A follower might be behind the leader, so when you do a get() on a follower you might get a stale value -> this breaks linearizability -> the system is no longer CP (in terms of CAP) -> you just violated the CP promise from the readme 😢
So a naive solution could be to always ask the leader, right? The problem is that a leader might be partitioned away, a new leader gets elected without the original leader knowing, and the system makes progress, again without the old leader knowing. So the old leader might give you a stale reply, and this again breaks linearizability.
The simplest solution is to do GET requests via the full Raft consensus (a sketch of this follows below). This avoids stale replies and maintains linearizability. 💪
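A minimal sketch of that approach, assuming the hashicorp/raft API that imdgo uses: the read is encoded as a command, appended to the log with Apply, and answered by the FSM only after the entry commits. The command format, the Store wrapper, and the FSM's handling of a "get" command are assumptions for the sketch, not imdgo's actual implementation.

```go
package store

import (
	"encoding/json"
	"time"

	"github.com/hashicorp/raft"
)

// Store is a minimal wrapper around a Raft node; the field and type
// names are illustrative, not imdgo's actual types.
type Store struct {
	raft *raft.Raft
}

// Get routes a read through the Raft log: the command is replicated and
// committed before the FSM answers, so the reply cannot be stale.
func (s *Store) Get(key string) (interface{}, error) {
	// Hypothetical wire format: the FSM is assumed to recognize an "op": "get" command.
	cmd, err := json.Marshal(map[string]string{"op": "get", "key": key})
	if err != nil {
		return nil, err
	}

	// Apply appends the command to the log; the future resolves once the
	// entry is committed and applied to this node's FSM.
	f := s.raft.Apply(cmd, 10*time.Second)
	if err := f.Error(); err != nil {
		return nil, err // not the leader, timed out, etc.
	}

	// Whatever the FSM's Apply returns is surfaced as the future's Response.
	return f.Response(), nil
}
```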
There is an obvious downside: The raft log gets populated just by doing read-only requests. Not great.
There are some optimization techniques; I believe some of them are documented in the Raft dissertation. I hope I didn't misremember that!
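One optimization in that spirit, sketched under the assumption of the hashicorp/raft API: the leader first confirms with a quorum that it is still the leader (hashicorp/raft exposes VerifyLeader for this) and only then serves the read from its local state, so no log entry is appended for the read. The helper names below are hypothetical.

```go
package store

import "github.com/hashicorp/raft"

// leaderLocalGet sketches a read that avoids a log write: confirm
// leadership with a quorum, then answer from local state. readLocal
// stands in for the store's in-memory lookup (hypothetical).
func leaderLocalGet(r *raft.Raft, readLocal func(key string) interface{}, key string) (interface{}, error) {
	// VerifyLeader resolves only if this node can confirm with a quorum
	// that it is still the leader, ruling out the stale-old-leader case.
	if err := r.VerifyLeader().Error(); err != nil {
		return nil, err
	}
	return readLocal(key), nil // local lookup, no entry appended to the log
}
```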
I didn't check how server addition/removal works, but it's also a very tricky area with potential for subtle bugs.
Please don't get discouraged by ☝️, I'm glad to see projects in this area! Happy Rafting!