outbound: Cache balancers within profile stack #641
Conversation
This change updates and extends the diagnostic stack checks that we use during development. No functional changes.
The `HasDestination` trait isn't particularly useful, as it's basically just `AsRef<Addr>`. This change updates the `GetRoutes` signature to use an `AsRef<Addr>` bound instead, and it updates the inbound target type to store an `Addr` instead of an `Option<NameAddr>` (so that `Target` will be suitable for this in an upcoming change).
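As a rough illustration of that point, here's a minimal sketch with stand-in types (not the proxy's real `Addr`/`Target` definitions): exposing the destination via `AsRef<Addr>` makes a dedicated `HasDestination`-style trait unnecessary.

```rust
use std::net::SocketAddr;

// Stand-in types for illustration only; the proxy's real types differ.
#[derive(Clone, Debug)]
pub struct Addr(pub SocketAddr);

pub struct Target {
    pub addr: Addr,
}

// Exposing the destination via `AsRef` replaces a one-method trait.
impl AsRef<Addr> for Target {
    fn as_ref(&self) -> &Addr {
        &self.addr
    }
}

// A `GetRoutes`-style signature can then bound on `AsRef<Addr>` directly.
pub fn routes_key<T: AsRef<Addr>>(target: &T) -> &Addr {
    target.as_ref()
}
```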
We need tower-rs/tower@ad348d8 for an upcoming change, so this PR updates the tower dependency in anticipation of that change. Note that the `Balance` constructor has changed.
this looks great to me so far, nice job on the latency reduction!
```rust
let rng = SmallRng::from_entropy();
layer::mk(move |inner| NewSplit {
    inner,
    rng: rng.clone(),
```
should we be doing something like the second example here? i'll admit i'm not too familiar w/ `rand`'s APIs...
@hawkw is that the right link? if so, i'm not sure what you're suggesting...
whoops, it is not --- i meant to link to here: https://docs.rs/rand/0.7.3/rand/rngs/struct.SmallRng.html#examples
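For reference, my understanding is that the second example on that page seeds a `SmallRng` from the (cached, per-thread) `thread_rng()` rather than pulling OS entropy directly; roughly (a sketch against rand 0.7):

```rust
use rand::rngs::SmallRng;
use rand::{thread_rng, SeedableRng};

fn main() {
    // Seed a cheap, fast SmallRng from the thread-local RNG instead of
    // requesting entropy from the operating system directly.
    let mut rng = thread_rng();
    let small_rng = SmallRng::from_rng(&mut rng)
        .expect("seeding from thread_rng should not fail");
    let _ = small_rng;
}
```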
```rust
let mut update = None;
while let Poll::Ready(Some(up)) = self.rx.poll_recv_ref(cx) {
    update = Some(up.clone());
}
```
as above, not sure if we need to do this...
If the ready cache isn't ready, then we want to be notified when the profile changes, so we have to drive the receiver to Pending. In general, we should always drive it to Pending so we're sure we've observed the latest state.
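As a generic illustration of that pattern (a sketch using `futures::Stream`, not the proxy's actual receiver type): drain the stream until it returns `Pending`, keeping only the newest value, so the task is guaranteed to be woken for the next update.

```rust
use std::task::{Context, Poll};

use futures::{Stream, StreamExt};

/// Drains `rx` until it returns `Pending`, keeping only the newest item.
/// Returning only after observing `Pending` guarantees that the current
/// task's waker is registered for the *next* update.
fn poll_latest<S>(rx: &mut S, cx: &mut Context<'_>) -> Option<S::Item>
where
    S: Stream + Unpin,
{
    let mut latest = None;
    while let Poll::Ready(Some(update)) = rx.poll_next_unpin(cx) {
        latest = Some(update);
    }
    latest
}
```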
In an upcoming change, we'd like to do service profile discovery within the TCP-accept stack (and not the HTTP routing stack). But, today, service profile discovery is tightly coupled to the HTTP middleware implementations. This change splits the service profile layers (profile discovery, HTTP request profiles, and traffic splitting) into several layers, so that the discovery logic is decoupled from the HTTP-specific request-routing middleware.

This change removes the balancer cache and the balancer-specific buffer so that balancers are owned by the split layer. The buffer has been moved outside of the split layer to drive all balancers in a split and to make the split cloneable (for the retry middleware). All of this is cached under the profile cache.

Breaking changes:

This likely breaks support for "external" service profiles, where the proxy resolves service profiles for domains that it cannot resolve via the resolver. This feature is not enabled by default and is considered experimental. We'll have to take care to at least document this in upcoming releases.

Side-effects:

* Latency improvements under concurrency
* Compile time improved by 20-25% (integration tests in CI: ~20m => ~15m)
looks lovely, ship it!
This release includes several major changes to the proxy's behavior:

- Service profile lookups are now necessary and fundamental to outbound discovery for HTTP traffic. That is, if a service profile lookup is rejected, endpoint discovery will not be performed; and endpoint discovery must succeed for all destinations that are permitted by service profiles. This simplifies caching and buffering to reduce latency (especially under concurrency).
- Service discovery is now performed for all TCP traffic, and connections are balanced over endpoints according to connection latency.
- This enables mTLS for **all** meshed connections; not just HTTP.
- Outbound TCP metrics are now hydrated with endpoint-specific labels.

---

* outbound: Cache balancers within profile stack (linkerd/linkerd2-proxy#641)
* outbound: Remove unused error type (linkerd/linkerd2-proxy#648)
* Eliminate the ConnectAddr trait (linkerd/linkerd2-proxy#649)
* profiles: Do not rely on tuples as stack targets (linkerd/linkerd2-proxy#650)
* proxy-http: Remove unneeded boilerplate (linkerd/linkerd2-proxy#651)
* outbound: Clarify Http target types (linkerd/linkerd2-proxy#653)
* outbound: TCP discovery and load balancing (linkerd/linkerd2-proxy#652)
* metrics: Add endpoint labels to outbound TCP metrics (linkerd/linkerd2-proxy#654)