v1.9 backports 2022-06-13 #20180
Conversation
/test-backport-1.9
[ upstream commit f439177 ]

This commit fixes a bug where the keys of the forward map inside the DNS cache were never removed, causing the map to grow forever. By contrast, the reverse map keys were being deleted. For both the forward and reverse maps (which are both maps whose values are another map), the inner map keys were being deleted. In other words, the delete on the outer map key was missing for the forward map.

In addition to fixing the bug, this commit expands the unit test coverage to assert, after any deletes (entries expiring or GC), that the forward and reverse maps contain what we expect.

Particularly, in an environment where many unique DNS lookups (unique FQDNs) are being done, this forward map could grow quite large over time, especially for a long-lived workload (endpoint). This fixes this memory-leak-like bug.

Fixes: cf387ce ("fqdn: Introduce TTL-aware cache for DNS retention")
Fixes: f6ce522 ("FQDN: Added garbage collector functions.")
Signed-off-by: Chris Tarazi <chris@isovalent.com>
Signed-off-by: Maciej Kwiek <maciej@isovalent.com>
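The leak pattern described above can be sketched in isolation. This is a minimal illustration, not Cilium's actual cache code: the type and field names (`cache`, `entry`, `gc`) are hypothetical stand-ins for a nested map where only inner keys were being deleted.

```go
package main

import "fmt"

// entry mimics a cached DNS record that can expire.
type entry struct{ expired bool }

// cache mimics the forward map shape: FQDN -> IP -> entry.
// (Illustrative names, not Cilium's real types.)
type cache map[string]map[string]*entry

// gc removes expired entries. Deleting only the inner keys leaves
// empty inner maps behind, so the outer FQDN keys accumulate forever;
// the fix is the additional delete on the outer map below.
func (c cache) gc() {
	for fqdn, ips := range c {
		for ip, e := range ips {
			if e.expired {
				delete(ips, ip) // inner delete: was already happening
			}
		}
		if len(ips) == 0 {
			delete(c, fqdn) // outer delete: the step that was missing
		}
	}
}

func main() {
	c := cache{"example.com": {"1.2.3.4": {expired: true}}}
	c.gc()
	fmt.Println(len(c)) // 0: the outer key is gone once its inner map empties
}
```

Deleting map entries during a `range` is well-defined in Go, so the single-pass cleanup above is safe; without the outer delete, `len(c)` would stay 1 with an empty inner map, which is exactly the unbounded growth under many unique FQDNs.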
[ upstream commit 4bcd7d2 ]

When a service matcher LRP and the selected backend pods are deployed first, we previously didn't check whether the LRP frontend information (aka clusterIP) was available. This led to an agent panic. The frontend information is populated only when the LRP-selected service event is received. This issue isn't hit when the selected service was deployed prior to the LRP or backend pod.

Reported-by: Karsten Nielsen
Signed-off-by: Aditi Ghag <aditi@cilium.io>
Signed-off-by: Maciej Kwiek <maciej@isovalent.com>
Signed-off-by: Paul Chaignon <paul@cilium.io>
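The ordering hazard behind this panic can be sketched as a nil-guard on not-yet-populated state. This is a hypothetical illustration, assuming made-up types (`lrpConfig`, `frontend`, `applyLRP`) rather than Cilium's real LRP structures:

```go
package main

import "fmt"

// frontend holds the clusterIP filled in by the service event.
type frontend struct{ clusterIP string }

// lrpConfig mimics an LRP whose frontend is populated only once the
// selected-service event arrives. (Illustrative, not Cilium's types.)
type lrpConfig struct {
	fe *frontend
}

// applyLRP guards against a nil frontend: if the backend pods are seen
// before the service, skip processing instead of dereferencing nil,
// which is the kind of unguarded access that caused the agent panic.
func applyLRP(cfg *lrpConfig) (string, bool) {
	if cfg.fe == nil {
		return "", false // frontend not yet known; retry on service event
	}
	return cfg.fe.clusterIP, true
}

func main() {
	// Pods/LRP arrive first: no frontend yet, so we must not proceed.
	ip, ok := applyLRP(&lrpConfig{})
	fmt.Println(ok, ip)

	// Service event has populated the frontend: safe to use.
	ip, ok = applyLRP(&lrpConfig{fe: &frontend{clusterIP: "10.0.0.1"}})
	fmt.Println(ok, ip)
}
```

The guard makes event ordering irrelevant: whichever of the LRP, pods, or service is observed first, the frontend is only dereferenced after the service event fills it in.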
4929538 to 79d2c68
/test-backport-1.9

Job 'Cilium-PR-K8s-1.17-kernel-4.9' failed:

Test Name
Failure Output

If it is a flake and a GitHub issue doesn't already exist to track it, comment

Job 'Cilium-PR-Runtime-4.9' failed:

Test Name
Failure Output

If it is a flake and a GitHub issue doesn't already exist to track it, comment
LGTM, thanks!
/test-1.17-4.9

edit: all tests seem to have failed at an early stage (https://jenkins.cilium.io/job/Cilium-PR-K8s-1.17-kernel-4.9/737/testReport/), so running tests again.
This fails on:

Will try to see if I can reproduce locally.
/test-runtime-4.9

Was not able to reproduce the same failure locally, so retrying.
We have ACKs from authors, tests are green, marking as ready to merge. |
Should we file an issue for the new Runtime flake? |
PRs skipped due to conflicts:
Once this PR is merged, you can update the PR labels via:
or with