OSD migration does not respect do-not-reconcile label #15988
Open
@sp98

Description

Is this a bug report or feature request?

  • Bug Report

Deviation from expected behavior:

  • OSD migration deletes OSD deployments that have the do-not-reconcile label

Expected behavior:

  • The OSD migration process should skip OSDs that have the do-not-reconcile label

How to reproduce it (minimal and precise):

  • Create a rook-ceph cluster with OSDs on PVCs, without encryption.

  • Add the do-not-reconcile label to the OSD deployments.

  • Enable encryption as a day-2 operation: set storageClassDeviceSets[0].encrypted: true and set storageClassDeviceSet.migration.confirmation: "yes-really-migrate-osds" (see the sketch after this list).

  • The OSD deployments should not be migrated or updated due to the do-not-reconcile label, but the OSD migration process still migrates these OSDs.
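
A minimal repro sketch in shell, assuming the default rook-ceph namespace, a CephCluster named rook-ceph, the app=rook-ceph-osd deployment selector, and the ceph.rook.io/do-not-reconcile label key; the exact spec path for the migration confirmation is taken from the issue text above and may differ by Rook version:

    # Hypothetical sketch -- label key, names, and spec paths are assumptions;
    # verify them against your Rook version before running.

    # 1. Label every OSD deployment so the operator skips reconciling it.
    for d in $(kubectl -n rook-ceph get deploy -l app=rook-ceph-osd -o name); do
      kubectl -n rook-ceph label "$d" ceph.rook.io/do-not-reconcile=true
    done

    # 2. Enable encryption as a day-2 change and confirm the migration,
    #    e.g. via: kubectl -n rook-ceph edit cephcluster rook-ceph
    #
    #    spec:
    #      storage:
    #        storageClassDeviceSets:
    #        - name: ocs-deviceset-localblock-0
    #          encrypted: true                          # was false
    #        migration:
    #          confirmation: "yes-really-migrate-osds"  # assumed spec path

With these steps, the operator is expected to leave the labeled deployments alone; the logs below show it detecting the label and then migrating the OSDs anyway.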

File(s) to submit:

  • Cluster CR (custom resource), typically called cluster.yaml, if necessary

Logs to submit:

  • Operator's logs, if necessary

  • Crashing pod(s) logs, if necessary

    To get logs, use kubectl -n <namespace> logs <pod name>
    When pasting logs, always surround them with backticks or use the insert-code button in the GitHub UI.
    Read the GitHub documentation if you need help.
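
For example, a sketch assuming the operator runs in the default rook-ceph namespace (pod and deployment names vary per cluster):

    # Operator logs (namespace assumed to be "rook-ceph").
    kubectl -n rook-ceph logs deploy/rook-ceph-operator > operator.log

    # Logs from a crashing pod, including the previous container instance.
    kubectl -n rook-ceph logs <pod name> --previous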

Logs (note: the operator detects all three OSDs as marked to skip reconcile, yet still deletes the OSD deployment for migration):

2025-06-10 06:25:08.535368 I | ceph-spec: found "0" "ceph-osd-id" pod to skip reconcile
2025-06-10 06:25:08.535379 I | ceph-spec: found "1" "ceph-osd-id" pod to skip reconcile
2025-06-10 06:25:08.535387 I | ceph-spec: found "2" "ceph-osd-id" pod to skip reconcile
2025-06-10 06:25:08.535394 I | op-osd: osd migration is requested
2025-06-10 06:25:09.216773 I | op-osd: migration is required for OSD.0 due to change in encryption settings from false to true in storageClassDeviceSet "ocs-deviceset-localblock-0"
2025-06-10 06:25:09.216796 I | op-osd: migration is required for OSD.1 due to change in encryption settings from false to true in storageClassDeviceSet "ocs-deviceset-localblock-0"
2025-06-10 06:25:09.216808 I | op-osd: migration is required for OSD.2 due to change in encryption settings from false to true in storageClassDeviceSet "ocs-deviceset-localblock-0"
2025-06-10 06:25:09.216817 I | op-osd: deleting OSD.1 deployment for migration 
2025-06-10 06:25:09.216826 I | op-osd: removing the OSD deployment "rook-ceph-osd-1"

Cluster Status to submit:

  • Output of kubectl commands, if necessary

    To get the health of the cluster, use kubectl rook-ceph health
    To get the status of the cluster, use kubectl rook-ceph ceph status
    For more details, see the Rook kubectl plugin documentation.

Environment:

  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Cloud provider or hardware configuration:
  • Rook version (use rook version inside of a Rook Pod):
  • Storage backend version (e.g. for ceph do ceph -v):
  • Kubernetes version (use kubectl version):
  • Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift):
  • Storage backend status (e.g. for Ceph use ceph health in the Rook Ceph toolbox):
