Update kubevirt, cdi and velero plugin versions in drenv #2102
base: main
Conversation
Upgrading velero should be done separately and tested with the velero.yaml environment. We have an issue for this: #1489
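For example, a minimal sketch of exercising that environment (the envs/velero.yaml path and the drenv invocation are assumptions, not verified against this repo):

cd test
drenv start envs/velero.yaml    # bring up the velero test environment
# run the velero checks, then tear the environment down
drenv delete envs/velero.yaml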
"--plugins=quay.io/nirsof/velero-plugin-for-aws:v1.10.0", | ||
"--image=quay.io/prd/velero:v1.16.1", | ||
"--plugins=quay.io/prd/velero-plugin-for-aws:v1.12.0", | ||
"--plugins=quay.io/kubevirt/kubevirt-velero-plugin:v0.8.0", |
I guess this is maintained by kubevirt?
Do we need any other change for the plugin?
No change required; the velero and velero-plugin-for-aws images were built from source, since the docker.io images were not accessible from drenv.
How did you build them? We don't want to introduce issues via an incorrect build process. The current images are taken from dockerhub and repackaged in a new multi-arch image, so we run exactly the same image that the velero developers pushed to dockerhub.
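One way to do that kind of repackaging, as a sketch (not necessarily how the nirsof images were produced; assumes skopeo is available and that the upstream image lives at docker.io/velero/velero):

# copy the upstream manifest with all architectures intact
skopeo copy --all \
    docker://docker.io/velero/velero:v1.16.1 \
    docker://quay.io/prd/velero:v1.16.1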
These were built from the latest Velero source code: https://velero.io/docs/v1.10/build-from-source/
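For context, the linked docs describe a make-based container build; roughly (exact targets and variables may differ between Velero versions, so treat this as a sketch):

git clone https://github.com/vmware-tanzu/velero.git
cd velero
git checkout v1.16.1
# REGISTRY and VERSION control the resulting image tag per the docs
REGISTRY=quay.io/prd VERSION=v1.16.1 make container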
"--image=quay.io/nirsof/velero:v1.14.0", | ||
"--plugins=quay.io/nirsof/velero-plugin-for-aws:v1.10.0", | ||
"--image=quay.io/prd/velero:v1.16.1", | ||
"--plugins=quay.io/prd/velero-plugin-for-aws:v1.12.0", |
We need to move these images to quay.io/ramendr, but using them from your repo is fine for now.
However, the images need to be multi-arch, since we also use them on macOS. The images in my repo are multi-arch.
OK, I will rebuild the images as multi-arch.
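A minimal multi-arch rebuild sketch using docker buildx (a generic example, not Velero's own make tooling; the platform list and tag are assumptions):

docker buildx create --use
docker buildx build \
    --platform linux/amd64,linux/arm64 \
    --tag quay.io/prd/velero:v1.16.1 \
    --push .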
@@ -14,7 +14,7 @@ NAMESPACE = "cdi"


 def deploy(cluster):
     print("Deploying cdi operator")
-    path = cache.get("operator", "addons/cdi-operator-1.60.2.yaml")
+    path = cache.get("operator", "addons/cdi-operator-1.62.0.yaml")
This change is required but not enough. What you are testing actually uses the current cdi and kubevirt versions: the current operator/kustomization.yaml and cr/kustomization.yaml are unchanged, so the same version is cached under a new name. This does not update anything and it breaks the cache on the CI runner.
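For illustration, the cdi bump needs to touch all of these files, not just the start script; a sketch of the edits (paths taken from the grep output later in this review, exact commands are an assumption):

# bump the CDI version in the kustomizations, cache and start scripts
sed -i 's/1\.60\.2/1.62.0/' \
    test/addons/cdi/operator/kustomization.yaml \
    test/addons/cdi/cr/kustomization.yaml \
    test/addons/cdi/cache \
    test/addons/cdi/start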
For an example kubevirt update, see #1491.
This is only a draft PR; my testing is in progress.
The manual test is complete, and I am hitting issues automating the same with drenv.
Here is the result of my manual test with these version updates:
# oc describe backup test-backup -n velero
Name: test-backup
Namespace: velero
Labels: ramendr.openshift.io/created-by-ramen=true
ramendr.openshift.io/owner-name=vm-cirros-dr
ramendr.openshift.io/owner-namespace-name=ramen-ops
velero.io/storage-location=default
velero.kubevirt.io/metadataBackup=true
Annotations: ramendr.openshift.io/vrg-generation: 2
velero.io/resource-timeout: 10m0s
velero.io/source-cluster-k8s-gitversion: v1.31.0
velero.io/source-cluster-k8s-major-version: 1
velero.io/source-cluster-k8s-minor-version: 31
API Version: velero.io/v1
Kind: Backup
Metadata:
Creation Timestamp: 2025-06-19T05:59:51Z
Generation: 6
Managed Fields:
API Version: velero.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:kubectl.kubernetes.io/last-applied-configuration:
f:ramendr.openshift.io/vrg-generation:
f:velero.io/resource-timeout:
f:velero.io/source-cluster-k8s-gitversion:
f:velero.io/source-cluster-k8s-major-version:
f:velero.io/source-cluster-k8s-minor-version:
f:labels:
.:
f:ramendr.openshift.io/created-by-ramen:
f:ramendr.openshift.io/owner-name:
f:ramendr.openshift.io/owner-namespace-name:
f:velero.io/storage-location:
f:velero.kubevirt.io/metadataBackup:
f:spec:
.:
f:csiSnapshotTimeout:
f:defaultVolumesToFsBackup:
f:defaultVolumesToRestic:
f:excludedResources:
f:hooks:
f:includeClusterResources:
f:includedNamespaces:
f:itemOperationTimeout:
f:labelSelector:
f:metadata:
f:snapshotMoveData:
f:snapshotVolumes:
f:storageLocation:
f:ttl:
Manager: kubectl-client-side-apply
Operation: Update
Time: 2025-06-19T05:59:51Z
API Version: velero.io/v1
Fields Type: FieldsV1
fieldsV1:
f:status:
.:
f:completionTimestamp:
f:expiration:
f:formatVersion:
f:hookStatus:
f:phase:
f:progress:
.:
f:itemsBackedUp:
f:totalItems:
f:startTimestamp:
f:version:
Manager: velero-server
Operation: Update
Time: 2025-06-19T05:59:53Z
Resource Version: 2421698
UID: b063e89b-e89c-4bb9-a087-b52a223f8ed5
Spec:
Csi Snapshot Timeout: 10m0s
Default Volumes To Fs Backup: false
Default Volumes To Restic: false
Excluded Resources:
events
event.events.k8s.io
persistentvolumes
replicaset
persistentvolumeclaims
pods
volumereplications.replication.storage.openshift.io
replicationsources.volsync.backube
replicationdestinations.volsync.backube
PersistentVolumeClaims
PersistentVolumes
Hooks:
Include Cluster Resources: true
Included Namespaces:
kubevirt-test
Item Operation Timeout: 4h0m0s
Label Selector:
Match Expressions:
Key: ramendr.openshift.io/k8s-resource-selector
Operator: In
Values:
protected-by-ramendr
Key: ramendr.openshift.io/created-by-ramen
Operator: NotIn
Values:
true
Metadata:
Snapshot Move Data: false
Snapshot Volumes: false
Storage Location: default
Ttl: 720h0m0s
Status:
Completion Timestamp: 2025-06-19T05:59:53Z
Expiration: 2025-07-19T05:59:51Z
Format Version: 1.1.0
Hook Status:
Phase: Completed
Progress:
Items Backed Up: 5
Total Items: 5
Start Timestamp: 2025-06-19T05:59:51Z
Version: 1
Events: <none>
Thanks for providing the reference PR for the previous version upgrade.
Signed-off-by: pruthvitd <prd@redhat.com>
Force-pushed from 4d63516 to e4e90be.
Please split the velero upgrade change from the kubevirt and cdi changes into another PR. We want to test the new velero version separately, and it does not depend on kubevirt and cdi.
Kubevirt and CDI do not depend on each other, so we can upgrade them separately; ideally we have a commit for each component.
Adding the velero plugin should ideally be in a new commit: it is a new component we did not use before, and upgrading kubevirt/cdi/velero does not depend on it.
Issues in the current PR (a combined fix sketch for the kubevirt and virtctl items follows the grep output below):
Missing the kubevirt operator upgrade:
% git grep 1.3.1 addons/kubevirt/
addons/kubevirt/cache:cache.refresh("operator", "addons/kubevirt-operator-1.3.1.yaml")
addons/kubevirt/cache:cache.refresh("cr", "addons/kubevirt-cr-1.3.1.yaml")
addons/kubevirt/cr/kustomization.yaml: - https://github.com/kubevirt/kubevirt/releases/download/v1.3.1/kubevirt-cr.yaml
addons/kubevirt/operator/kustomization.yaml: - https://github.com/kubevirt/kubevirt/releases/download/v1.3.1/kubevirt-operator.yaml
addons/kubevirt/start: path = cache.get("operator", "addons/kubevirt-operator-1.3.1.yaml")
addons/kubevirt/start: path = cache.get("cr", "addons/kubevirt-cr-1.3.1.yaml")
Missing the cdi operator upgrade:
% git grep 1.60.2 test/addons/cdi/
test/addons/cdi/cache:cache.refresh("operator", "addons/cdi-operator-1.60.2.yaml")
test/addons/cdi/cache:cache.refresh("cr", "addons/cdi-cr-1.60.2.yaml")
test/addons/cdi/cr/kustomization.yaml: - https://github.com/kubevirt/containerized-data-importer/releases/download/v1.60.2/cdi-cr.yaml
test/addons/cdi/operator/kustomization.yaml: - https://github.com/kubevirt/containerized-data-importer/releases/download/v1.60.2/cdi-operator.yaml
test/addons/cdi/start: path = cache.get("operator", "addons/cdi-operator-1.60.2.yaml")
test/addons/cdi/start: path = cache.get("cr", "addons/cdi-cr-1.60.2.yaml")
Missing the virtctl upgrade, which should match the kubevirt version.
% git grep virtctl
docs/user-quick-start.md:1. Install the `virtctl` tool.
docs/user-quick-start.md: curl -L -o virtctl https://github.com/kubevirt/kubevirt/releases/download/v1.3.0/virtctl-v1.3.0-linux-amd64
docs/user-quick-start.md: sudo install virtctl /usr/local/bin
docs/user-quick-start.md: rm virtctl
docs/user-quick-start.md: [virtctl install](https://kubevirt.io/quickstart_minikube/#virtctl)
hack/check-drenv-prereqs.sh:commands=("minikube" "kubectl" "clusteradm" "subctl" "velero" "helm" "virtctl"
test/README.md:1. Install the `virtctl` tool
test/README.md: curl -L -o virtctl https://github.com/kubevirt/kubevirt/releases/download/v1.3.0/virtctl-v1.3.0-linux-amd64
test/README.md: sudo install virtctl /usr/local/bin
test/README.md: rm virtctl
test/README.md: [virtctl install](https://kubevirt.io/quickstart_minikube/#virtctl)
test/README.md: virtctl
test/addons/kubevirt/test:from drenv import virtctl
test/addons/kubevirt/test: out = virtctl.ssh(
test/drenv/virtctl.py: Run ssh command via virtctl.
test/drenv/virtctl.py: cmd = ["virtctl", "ssh"]
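As referenced above, a combined sketch for the kubevirt and virtctl items (the target kubevirt version is not stated in this PR, so KUBEVIRT_VERSION below is a placeholder; paths come from the grep output above and are assumed relative to the repo root):

KUBEVIRT_VERSION=1.x.y   # placeholder: the version this PR targets, without the leading "v"
# kubevirt operator/CR kustomizations plus the cache and start scripts
sed -i "s/1\.3\.1/${KUBEVIRT_VERSION}/" \
    test/addons/kubevirt/operator/kustomization.yaml \
    test/addons/kubevirt/cr/kustomization.yaml \
    test/addons/kubevirt/cache \
    test/addons/kubevirt/start
# virtctl install instructions in the docs should reference the same release
sed -i "s/v1\.3\.0/v${KUBEVIRT_VERSION}/" docs/user-quick-start.md test/README.md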