successfulJobsHistoryLimit Applies Namespace-Wide Instead of Per Schedule · Issue #1053 · k8up-io/k8up
Description

It appears that successfulJobsHistoryLimit is applied across all Schedules within the same namespace, rather than being scoped to the Backups created by each individual Schedule.
Additional Context
When creating two Schedules in separate namespaces, the expected behavior occurs (each Schedule retains its own backups properly).
failedJobsHistoryLimit may also be affected by this issue.
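To inspect which Schedule each Backup belongs to, a listing like the one below can help. This is a sketch that assumes K8up sets an ownerReference on each Backup pointing at the Schedule that created it, and it uses `demo` as a placeholder namespace:

```sh
# List Backup objects alongside their owning Schedule and creation time.
kubectl get backups.k8up.io -n demo \
  -o custom-columns='NAME:.metadata.name,OWNER:.metadata.ownerReferences[0].name,CREATED:.metadata.creationTimestamp'
```

If cleanup were scoped per Schedule, the OWNER column would show two retained Backups for each Schedule once the limit kicks in.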
Logs
Expected Behavior
Each Schedule should retain up to successfulJobsHistoryLimit backups independently, meaning we should see four backups in total (two per Schedule) from the steps listed below.
Steps To Reproduce
1. Create two backup Schedules within the same namespace, each selecting different pods via label selectors (a sketch of such manifests follows this list).
2. Set successfulJobsHistoryLimit to 2 on both Schedules.
3. Observe that, after several runs, only the two most recent Backups are retained across both Schedules, rather than two per Schedule (for a total of four).
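For reference, a minimal sketch of the reproduction manifests, assuming an S3 backend; the namespace, names, secret references, and bucket values are placeholders, and the labelSelectors field should be checked against the K8up Schedule CRD reference for this version:

```yaml
apiVersion: k8up.io/v1
kind: Schedule
metadata:
  name: schedule-a
  namespace: demo              # placeholder namespace
spec:
  # Expectation: this limit applies only to Backups created by this Schedule.
  successfulJobsHistoryLimit: 2
  backend:
    repoPasswordSecretRef:
      name: backup-repo
      key: password
    s3:                        # illustrative backend; any supported backend works
      endpoint: http://minio:9000
      bucket: backups-a
      accessKeyIDSecretRef:
        name: minio-credentials
        key: username
      secretAccessKeySecretRef:
        name: minio-credentials
        key: password
  backup:
    schedule: '*/5 * * * *'
    labelSelectors:            # selects only pods labeled app: app-a
      - matchLabels:
          app: app-a
---
# schedule-b is identical except for its name, bucket, and selector
# (app: app-b), so each Schedule should keep its own two Backups.
```

After a few runs, `kubectl get backups.k8up.io -n demo` would be expected to show four Backups if the limit were scoped per Schedule; with v2.12.0 it shows only two.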
Version of K8up
v2.12.0
Version of Kubernetes
v1.31
Distribution of Kubernetes
k3s