Description
Is there any way to enable the RBD mirroring feature between different k8s clusters?
I deployed two k8s clusters in host networking mode by providing `spec.network.provider: host`
in the CephCluster CRD. I found discussion https://github.com/rook/rook/discussions/11488, but there is no answer on how to implement it.
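For reference, the host-networking part of my CephCluster looks roughly like this (the name/namespace match the pods shown below; other fields omitted):

```yaml
# Fragment of the CephCluster CR; only the networking part shown.
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  network:
    # Run mons/OSDs/mgrs on the host network so the daemons bind node IPs.
    provider: host
```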
I also tried using a NodePort service to allow access to the mon ports.
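The NodePort service I tried was roughly the following sketch; the service name and port numbers are illustrative, while the selector labels are the ones Rook puts on mon pods:

```yaml
# Hypothetical NodePort service exposing mon "a" outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mon-a-external
  namespace: rook-ceph
spec:
  type: NodePort
  selector:
    app: rook-ceph-mon
    mon: a
  ports:
    - name: msgr2
      port: 3300
      targetPort: 3300
      nodePort: 31300   # illustrative nodePort
    - name: msgr1
      port: 6789
      targetPort: 6789
      nodePort: 31789   # illustrative nodePort
```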
I also tried replacing the internal mon IP addresses in the token JSON with the external host IPs, but the operator still can't import the token and fails with this error:
```
cephclient: mirroring status check interval for "test" is "1m0s"
2025-05-27 07:47:09.373839 I | cephclient: create rbd-mirror bootstrap peer token for pool "test"
2025-05-27 07:47:09.424275 I | cephclient: successfully created rbd-mirror bootstrap peer token for pool "test"
2025-05-27 07:47:09.445607 I | cephclient: add rbd-mirror bootstrap peer token for pool "test"
2025-05-27 07:47:24.447095 I | exec: exec timeout waiting for process rbd to return. Sending interrupt signal to the process
2025-05-27 07:47:24.449871 E | ceph-block-pool-controller: failed to reconcile CephBlockPool "rook-ceph/test". failed to add ceph rbd mirror peer: failed to import bootstrap peer token: failed to add rbd-mirror peer token for pool "test". : signal: interrupt
```
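The token edit I attempted can be sketched like this. The `fsid`, `key`, and all addresses below are made-up placeholders; a real token comes from the bootstrap-peer secret created by Rook:

```shell
# Hypothetical bootstrap peer token: base64-encoded JSON with fsid, client_id,
# key, and mon_host. All values here are placeholders for illustration.
TOKEN=$(printf '%s' '{"fsid":"b5f7b1a2-0000-0000-0000-000000000000","client_id":"rbd-mirror-peer","key":"AQDplaceholderkey==","mon_host":"10.108.0.5:6789"}' | base64 -w0)

# Decode, replace the internal mon address with the externally reachable
# node IP / NodePort, and re-encode before importing it on the peer cluster.
PATCHED=$(printf '%s' "$TOKEN" | base64 -d \
  | sed 's/10\.108\.0\.5:6789/203.0.113.10:31789/' \
  | base64 -w0)

# Show the edited token contents.
printf '%s' "$PATCHED" | base64 -d
```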
```
kubectl logs rook-ceph-rbd-mirror-a-7bc98b9c97-lnbp5 -n rook-ceph
Defaulted container "rbd-mirror" out of: rbd-mirror, log-collector, chown-container-data-dir (init)
debug 2025-05-24T17:54:59.345+0000 7fd5e9392640 0 set uid:gid to 167:167 (ceph:ceph)
debug 2025-05-24T17:54:59.345+0000 7fd5e9392640 0 ceph version 18.2.4 (e7ad5345525c7aa95470c26863873b581076945d) reef (stable), process rbd-mirror, pid 12
debug 2025-05-24T17:54:59.349+0000 7fd5e9392640 1 mgrc service_daemon_register rbd-mirror.419456 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.4 (e7ad5345525c7aa95470c26863873b581076945d) reef (stable),ceph_version_short=18.2.4,container_hostname=pool-fzlqy2yjy-tf9yx,container_image=quay.io/ceph/ceph:v18.2.4,cpu=DO-Premium-AMD,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=pool-fzlqy2yjy-tf9yx,id=a,instance_id=419456,kernel_description=#1 SMP PREEMPT_DYNAMIC Debian 6.1.129-1 (2025-03-06),kernel_version=6.1.0-32-amd64,mem_swap_kb=0,mem_total_kb=8131772,os=Linux,pod_name=rook-ceph-rbd-mirror-a-7bc98b9c97-lnbp5,pod_namespace=rook-ceph}
debug 2025-05-25T00:09:59.615+0000 7fd5e632a640 -1 Fail to open '/proc/103/cmdline' error = (2) No such file or directory
debug 2025-05-25T00:09:59.615+0000 7fd5e632a640 -1 received signal: Hangup from (PID: 103) UID: 0
debug 2025-05-25T00:09:59.615+0000 7fd5e632a640 -1 Fail to open '/proc/104/cmdline' error = (2) No such file or directory
debug 2025-05-25T00:09:59.615+0000 7fd5e632a640 -1 received signal: Hangup from (PID: 104) UID: 0
debug 2025-05-26T00:10:00.470+0000 7fd5e632a640 -1 received signal: Hangup from (PID: 299) UID: 0
debug 2025-05-26T00:10:00.474+0000 7fd5e632a640 -1 Fail to open '/proc/300/cmdline' error = (2) No such file or directory
debug 2025-05-26T00:10:00.474+0000 7fd5e632a640 -1 received signal: Hangup from (PID: 300) UID: 0
debug 2025-05-27T00:10:01.318+0000 7fd5e632a640 -1 Fail to open '/proc/495/cmdline' error = (2) No such file or directory
debug 2025-05-27T00:10:01.318+0000 7fd5e632a640 -1 received signal: Hangup from (PID: 495) UID: 0
debug 2025-05-27T00:10:01.322+0000 7fd5e632a640 -1 received signal: Hangup from (PID: 496) UID: 0
```
All pods are running properly:
```
kubectl get pods -n rook-ceph -o wide
NAME                                                             READY   STATUS      RESTARTS     AGE     IP             NODE
csi-cephfsplugin-fj6z7                                           3/3     Running     1 (9d ago)   9d      10.108.0.5     pool-fzlqy2yjy-tf9yo
csi-cephfsplugin-provisioner-64ff4dcc86-4vnr5                    6/6     Running     0            9d      10.151.0.61    pool-fzlqy2yjy-tf9yj
csi-cephfsplugin-provisioner-64ff4dcc86-fswqx                    6/6     Running     0            9d      10.151.0.219   pool-fzlqy2yjy-tf9yx
csi-cephfsplugin-trrsn                                           3/3     Running     0            9d      10.108.0.3     pool-fzlqy2yjy-tf9yj
csi-cephfsplugin-xsspc                                           3/3     Running     1 (9d ago)   9d      10.108.0.4     pool-fzlqy2yjy-tf9yx
csi-rbdplugin-45vpn                                              3/3     Running     1 (9d ago)   9d      10.108.0.4     pool-fzlqy2yjy-tf9yx
csi-rbdplugin-mnf4f                                              3/3     Running     1 (9d ago)   9d      10.108.0.3     pool-fzlqy2yjy-tf9yj
csi-rbdplugin-provisioner-7cbd54db94-mndtv                       6/6     Running     0            9d      10.151.0.49    pool-fzlqy2yjy-tf9yj
csi-rbdplugin-provisioner-7cbd54db94-trms9                       6/6     Running     2 (9d ago)   9d      10.151.1.21    pool-fzlqy2yjy-tf9yo
csi-rbdplugin-w6wvp                                              3/3     Running     1 (9d ago)   9d      10.108.0.5     pool-fzlqy2yjy-tf9yo
rook-ceph-crashcollector-pool-fzlqy2yjy-tf9yj-6fd549d6dd-6q4r8   1/1     Running     0            9d      10.108.0.3     pool-fzlqy2yjy-tf9yj
rook-ceph-crashcollector-pool-fzlqy2yjy-tf9yo-54687456df-4hbgb   1/1     Running     0            2d13h   10.108.0.5     pool-fzlqy2yjy-tf9yo
rook-ceph-crashcollector-pool-fzlqy2yjy-tf9yx-69c89b49c6-mrc8v   1/1     Running     0            2d13h   10.108.0.4     pool-fzlqy2yjy-tf9yx
rook-ceph-exporter-pool-fzlqy2yjy-tf9yj-6c4fff5c8d-m9mpl         1/1     Running     0            9d      10.108.0.3     pool-fzlqy2yjy-tf9yj
rook-ceph-exporter-pool-fzlqy2yjy-tf9yo-6c88cd6b9-rzrtx          1/1     Running     0            2d13h   10.108.0.5     pool-fzlqy2yjy-tf9yo
rook-ceph-exporter-pool-fzlqy2yjy-tf9yx-5b898c58fb-gx2jg         1/1     Running     0            2d13h   10.108.0.4     pool-fzlqy2yjy-tf9yx
rook-ceph-mgr-a-b65548d4b-cm977                                  3/3     Running     0            9d      10.108.0.3     pool-fzlqy2yjy-tf9yj
rook-ceph-mgr-b-847fdff8c7-wcr9f                                 3/3     Running     0            9d      10.108.0.5     pool-fzlqy2yjy-tf9yo
rook-ceph-mon-a-87f7c6f6c-p6wwf                                  2/2     Running     0            9d      10.108.0.5     pool-fzlqy2yjy-tf9yo
rook-ceph-mon-b-7d5694fd65-5l279                                 2/2     Running     0            9d      10.108.0.3     pool-fzlqy2yjy-tf9yj
rook-ceph-mon-c-79d587644b-6xxdg                                 2/2     Running     0            9d      10.108.0.4     pool-fzlqy2yjy-tf9yx
rook-ceph-operator-d8d769dbc-d72pw                               1/1     Running     0            4d5h    10.151.0.237   pool-fzlqy2yjy-tf9yx
rook-ceph-osd-0-7b7479cc9c-8wzrk                                 2/2     Running     0            9d      10.108.0.3     pool-fzlqy2yjy-tf9yj
rook-ceph-osd-1-76474b46b6-ldsfz                                 2/2     Running     0            9d      10.108.0.4     pool-fzlqy2yjy-tf9yx
rook-ceph-osd-2-6cd569c4dc-6gx6m                                 2/2     Running     0            9d      10.108.0.5     pool-fzlqy2yjy-tf9yo
rook-ceph-osd-prepare-pool-fzlqy2yjy-tf9yj-jscdf                 0/1     Completed   0            4d5h    10.151.0.46    pool-fzlqy2yjy-tf9yj
rook-ceph-osd-prepare-pool-fzlqy2yjy-tf9yo-rv2d9                 0/1     Completed   0            4d5h    10.151.1.14    pool-fzlqy2yjy-tf9yo
rook-ceph-osd-prepare-pool-fzlqy2yjy-tf9yx-gzq6p                 0/1     Completed   0            4d5h    10.151.0.140   pool-fzlqy2yjy-tf9yx
rook-ceph-rbd-mirror-a-7bc98b9c97-lnbp5                          2/2     Running     0            2d13h   10.108.0.4     pool-fzlqy2yjy-tf9yx
rook-ceph-tools-7b75b967db-vzxqk                                 1/1     Running     0            9d      10.151.0.78
```
I followed all the steps described in the docs and added the peer information to the CephBlockPool and CephRBDMirror CRDs, but I still can't figure out where the problem is. What additional actions do I have to take?
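For completeness, the mirroring-related CRs I applied look roughly like this. The peer secret name is a placeholder for the secret holding the other cluster's bootstrap token; the pool name `test` and mirror daemon `a` match the logs above:

```yaml
# Pool with mirroring enabled; the peer secret name is illustrative.
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: test
  namespace: rook-ceph
spec:
  replicated:
    size: 3
  mirroring:
    enabled: true
    mode: image
    peers:
      secretNames:
        - cluster-b-peer-token   # placeholder secret name
---
# One rbd-mirror daemon, matching pod rook-ceph-rbd-mirror-a above.
apiVersion: ceph.rook.io/v1
kind: CephRBDMirror
metadata:
  name: a
  namespace: rook-ceph
spec:
  count: 1
```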