Description
Is there an existing issue for this?
- I have searched the existing issues
What happened?
While upgrading our clusters from 1.14.7 (the latest patch version at the time) to 1.15.1, new Cilium pods running on bare-metal worker nodes failed to start. Routing device discovery, which worked in 1.14.x, is now unable to auto-detect the proper device.
I would guess this change was introduced by the new device manager in 03ad61b. This is not much of an issue for us, as the --direct-routing-device flag fixed it.
However, the device discovery failure led to a SIGSEGV, which I assume is not expected behavior. We also tried 1.15.2 and 1.15.3, with the exact same error. The upgrade date was 11/Apr/24; we haven't found any related bug backport yet.
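For reference, the workaround amounts to pinning the device explicitly instead of relying on auto-detection. In our setup the agent reads its options from the kube-system/cilium-config ConfigMap (see --config-sources in the log below), so the flag maps to a ConfigMap key; the device name here is a placeholder for the node's actual uplink interface:

```yaml
# Excerpt of the kube-system/cilium-config ConfigMap (sketch).
# "eth0" is a placeholder; use the interface that carries node traffic.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cilium-config
  namespace: kube-system
data:
  direct-routing-device: "eth0"   # maps to the agent flag --direct-routing-device
```

After changing the ConfigMap, the Cilium agent pods need a restart to pick up the new value.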
Log sample from failed worker agent pod:
level=fatal msg="failed to start: daemon creation failed: failed to detect devices: unable to determine direct routing device. Use --direct-routing-device to specify it" subsys=daemon
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1dfa041]
goroutine 460 [running]:
github.com/cilium/cilium/pkg/node.(*LocalNodeStore).Observe(0x3f4cfb0?, {0x3f4d0c8?, 0xc0026ff950?}, 0x707365520d01009c?, 0x706f725065736e6f?)
<autogenerated>:1 +0x21
github.com/cilium/cilium/pkg/stream.First[...]({_, _}, {_, _})
/go/src/github.com/cilium/cilium/pkg/stream/sinks.go:25 +0x191
github.com/cilium/cilium/pkg/node.(*LocalNodeStore).Get(...)
/go/src/github.com/cilium/cilium/pkg/node/local_node_store.go:145
github.com/cilium/cilium/pkg/node.getLocalNode()
/go/src/github.com/cilium/cilium/pkg/node/address.go:47 +0xb4
github.com/cilium/cilium/pkg/node.GetIPv4()
/go/src/github.com/cilium/cilium/pkg/node/address.go:221 +0x25
github.com/cilium/cilium/pkg/proxy/logger.NewLogRecord({0x393d57b, 0x7}, 0x0?, {0xc001c87bb0, 0x4, 0xc001649b46?})
/go/src/github.com/cilium/cilium/pkg/proxy/logger/logger.go:85 +0x158
github.com/cilium/cilium/daemon/cmd.(*Daemon).notifyOnDNSMsg(0xc0013b6000, {0xd981c2?, 0xc001ef7500?, 0x60bada0?}, 0xc0013cdc00, {0xc0028ca6f0, 0x12}, 0x2, {0xc001649a90, 0x10}, ...)
/go/src/github.com/cilium/cilium/daemon/cmd/fqdn.go:463 +0x1a5c
github.com/cilium/cilium/pkg/fqdn/dnsproxy.(*DNSProxy).ServeDNS(0xc0025d3d40, {0x3f60c48, 0xc00266cb00}, 0xc00157f680)
/go/src/github.com/cilium/cilium/pkg/fqdn/dnsproxy/proxy.go:978 +0x18f6
github.com/cilium/dns.(*Server).serveDNS(0xc001f47f00, {0xc001d1cc00, 0x75, 0x200}, 0xc00266cb00)
/go/src/github.com/cilium/cilium/vendor/github.com/cilium/dns/server.go:653 +0x382
github.com/cilium/dns.(*Server).serveUDPPacket(0xc001f47f00, 0xc00171e048?, {0xc001d1cc00, 0x75, 0x200}, {0x3f5a368?, 0xc0016fe1d8}, 0xc000c88690, {0x0, 0x0})
/go/src/github.com/cilium/cilium/vendor/github.com/cilium/dns/server.go:605 +0x1cc
github.com/cilium/dns.(*Server).serveUDP.func2()
/go/src/github.com/cilium/cilium/vendor/github.com/cilium/dns/server.go:531 +0x49
created by github.com/cilium/dns.(*Server).serveUDP in goroutine 407
/go/src/github.com/cilium/cilium/vendor/github.com/cilium/dns/server.go:530 +0x436
Cilium Version
cilium-cli: v0.15.22 compiled with go1.21.6 on linux/amd64
cilium image (default): v1.15.0
cilium image (stable): v1.15.3
cilium image (running): 1.14.7
Kernel Version
Linux k8s-woker01 5.15.0-67-generic #74~20.04.1-Ubuntu SMP Wed Feb 22 14:52:34 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Kubernetes Version
1.28.2
Regression
Possibly in device discovery, but I'm not sure whether this is due to a misconfiguration on our side. This issue is mostly meant to report the SIGSEGV.
Sysdump
Relevant part
Detected Cilium features: map[cidr-match-nodes:Disabled cni-chaining:Disabled:none enable-envoy-config:Disabled enable-gateway-api:Disabled enable-ipv4-egress-gateway:Disabled endpoint-routes:Disabled ingress-controller:Disabled ipv4:Enabled ipv6:Disabled mutual-auth-spiffe:Disabled wireguard-encapsulate:Disabled]
Relevant log output
level=info msg="Memory available for map entries (0.003% of 539625480192B): 1349063700B" subsys=config
level=info msg="option bpf-ct-global-tcp-max set by dynamic sizing to 4733556" subsys=config
level=info msg="option bpf-ct-global-any-max set by dynamic sizing to 2366778" subsys=config
level=info msg="option bpf-nat-global-max set by dynamic sizing to 4733556" subsys=config
level=info msg="option bpf-neigh-global-max set by dynamic sizing to 4733556" subsys=config
level=info msg="option bpf-sock-rev-map-max set by dynamic sizing to 2366778" subsys=config
level=info msg=" --agent-health-port='9879'" subsys=daemon
level=info msg=" --agent-labels=''" subsys=daemon
level=info msg=" --agent-liveness-update-interval='1s'" subsys=daemon
level=info msg=" --agent-not-ready-taint-key='node.cilium.io/agent-not-ready'" subsys=daemon
level=info msg=" --allocator-list-timeout='3m0s'" subsys=daemon
level=info msg=" --allow-icmp-frag-needed='true'" subsys=daemon
level=info msg=" --allow-localhost='auto'" subsys=daemon
level=info msg=" --annotate-k8s-node='false'" subsys=daemon
level=info msg=" --api-rate-limit=''" subsys=daemon
level=info msg=" --arping-refresh-period='30s'" subsys=daemon
level=info msg=" --auto-create-cilium-node-resource='true'" subsys=daemon
level=info msg=" --auto-direct-node-routes='false'" subsys=daemon
level=info msg=" --bgp-announce-lb-ip='false'" subsys=daemon
level=info msg=" --bgp-announce-pod-cidr='false'" subsys=daemon
level=info msg=" --bgp-config-path='/var/lib/cilium/bgp/config.yaml'" subsys=daemon
level=info msg=" --bpf-auth-map-max='524288'" subsys=daemon
level=info msg=" --bpf-ct-global-any-max='262144'" subsys=daemon
level=info msg=" --bpf-ct-global-tcp-max='524288'" subsys=daemon
level=info msg=" --bpf-ct-timeout-regular-any='1m0s'" subsys=daemon
level=info msg=" --bpf-ct-timeout-regular-tcp='2h13m20s'" subsys=daemon
level=info msg=" --bpf-ct-timeout-regular-tcp-fin='10s'" subsys=daemon
level=info msg=" --bpf-ct-timeout-regular-tcp-syn='1m0s'" subsys=daemon
level=info msg=" --bpf-ct-timeout-service-any='1m0s'" subsys=daemon
level=info msg=" --bpf-ct-timeout-service-tcp='2h13m20s'" subsys=daemon
level=info msg=" --bpf-ct-timeout-service-tcp-grace='1m0s'" subsys=daemon
level=info msg=" --bpf-filter-priority='1'" subsys=daemon
level=info msg=" --bpf-fragments-map-max='8192'" subsys=daemon
level=info msg=" --bpf-lb-acceleration='disabled'" subsys=daemon
level=info msg=" --bpf-lb-affinity-map-max='0'" subsys=daemon
level=info msg=" --bpf-lb-algorithm='random'" subsys=daemon
level=info msg=" --bpf-lb-dsr-dispatch='opt'" subsys=daemon
level=info msg=" --bpf-lb-dsr-l4-xlate='frontend'" subsys=daemon
level=info msg=" --bpf-lb-external-clusterip='false'" subsys=daemon
level=info msg=" --bpf-lb-maglev-hash-seed='[redacted]'" subsys=daemon
level=info msg=" --bpf-lb-maglev-map-max='0'" subsys=daemon
level=info msg=" --bpf-lb-maglev-table-size='16381'" subsys=daemon
level=info msg=" --bpf-lb-map-max='65536'" subsys=daemon
level=info msg=" --bpf-lb-mode='snat'" subsys=daemon
level=info msg=" --bpf-lb-rev-nat-map-max='0'" subsys=daemon
level=info msg=" --bpf-lb-rss-ipv4-src-cidr=''" subsys=daemon
level=info msg=" --bpf-lb-rss-ipv6-src-cidr=''" subsys=daemon
level=info msg=" --bpf-lb-service-backend-map-max='0'" subsys=daemon
level=info msg=" --bpf-lb-service-map-max='0'" subsys=daemon
level=info msg=" --bpf-lb-sock='false'" subsys=daemon
level=info msg=" --bpf-lb-sock-hostns-" subsys=daemon
level=info msg=" --bpf-lb-source-range-map-max='0'" subsys=daemon
level=info msg=" --bpf-map-dynamic-size-ratio='0.0025'" subsys=daemon
level=info msg=" --bpf-map-event-buffers=''" subsys=daemon
level=info msg=" --bpf-nat-global-max='524288'" subsys=daemon
level=info msg=" --bpf-neigh-global-max='524288'" subsys=daemon
level=info msg=" --bpf-policy-map-full-reconciliation-interval='15m0s'" subsys=daemon
level=info msg=" --bpf-policy-map-max='16384'" subsys=daemon
level=info msg=" --bpf-root='/sys/fs/bpf'" subsys=daemon
level=info msg=" --bpf-sock-rev-map-max='262144'" subsys=daemon
level=info msg=" --bypass-ip-availability-upon-restore='false'" subsys=daemon
level=info msg=" --certificates-directory='/var/run/cilium/certs'" subsys=daemon
level=info msg=" --cflags=''" subsys=daemon
level=info msg=" --cgroup-root='/run/cilium/cgroupv2'" subsys=daemon
level=info msg=" --cilium-endpoint-gc-interval='5m0s'" subsys=daemon
level=info msg=" --cluster-health-port='4240'" subsys=daemon
level=info msg=" --cluster-id='1'" subsys=daemon
level=info msg=" --cluster-name='region01'" subsys=daemon
level=info msg=" --clustermesh-config='/var/lib/cilium/clustermesh/'" subsys=daemon
level=info msg=" --clustermesh-ip-identities-sync-timeout='1m0s'" subsys=daemon
level=info msg=" --cmdref=''" subsys=daemon
level=info msg=" --cni-chaining-mode='none'" subsys=daemon
level=info msg=" --cni-chaining-target=''" subsys=daemon
level=info msg=" --cni-exclusive='true'" subsys=daemon
level=info msg=" --cni-external-routing='false'" subsys=daemon
level=info msg=" --cni-log-file='/var/run/cilium/cilium-cni.log'" subsys=daemon
level=info msg=" --cni-uninstall='true'" subsys=daemon
level=info msg=" --config=''" subsys=daemon
level=info msg=" --config-dir='/tmp/cilium/config-map'" subsys=daemon
level=info msg=" --config-sources='config-map:kube-system/cilium-config'" subsys=daemon
level=info msg=" --conntrack-gc-interval='0s'" subsys=daemon
level=info msg=" --conntrack-gc-max-interval='0s'" subsys=daemon
level=info msg=" --controller-group-metrics='write-cni-file,sync-host-ips,sync-lb-maps-with-k8s-services'" subsys=daemon
level=info msg=" --crd-wait-timeout='5m0s'" subsys=daemon
level=info msg=" --custom-cni-conf='false'" subsys=daemon
level=info msg=" --datapath-mode='veth'" subsys=daemon
level=info msg=" --debug='false'" subsys=daemon
level=info msg=" --debug-verbose=''" subsys=daemon
level=info msg=" --derive-masquerade-ip-addr-from-device=''" subsys=daemon
level=info msg=" --devices=''" subsys=daemon
level=info msg=" --direct-routing-device=''" subsys=daemon
level=info msg=" --disable-endpoint-crd='false'" subsys=daemon
level=info msg=" --disable-envoy-version-check='false'" subsys=daemon
level=info msg=" --disable-iptables-feeder-rules=''" subsys=daemon
level=info msg=" --dns-max-ips-per-restored-rule='1000'" subsys=daemon
level=info msg=" --dns-policy-unload-on-shutdown='false'" subsys=daemon
level=info msg=" --dnsproxy-concurrency-limit='0'" subsys=daemon
level=info msg=" --dnsproxy-concurrency-processing-grace-period='0s'" subsys=daemon
level=info msg=" --dnsproxy-enable-transparent-mode='true'" subsys=daemon
level=info msg=" --dnsproxy-lock-count='131'" subsys=daemon
level=info msg=" --dnsproxy-lock-timeout='500ms'" subsys=daemon
level=info msg=" --egress-gateway-policy-map-max='16384'" subsys=daemon
level=info msg=" --egress-gateway-reconciliation-trigger-interval='1s'" subsys=daemon
level=info msg=" --egress-masquerade-interfaces=''" subsys=daemon
level=info msg=" --egress-multi-home-ip-rule-compat='false'" subsys=daemon
level=info msg=" --enable-auto-protect-node-port-range='true'" subsys=daemon
level=info msg=" --enable-bandwidth-manager='false'" subsys=daemon
level=info msg=" --enable-bbr='false'" subsys=daemon
level=info msg=" --enable-bgp-control-plane='false'" subsys=daemon
level=info msg=" --enable-bpf-clock-probe='false'" subsys=daemon
level=info msg=" --enable-bpf-masquerade='false'" subsys=daemon
level=info msg=" --enable-bpf-tproxy='false'" subsys=daemon
level=info msg=" --enable-cilium-api-server-access='*'" subsys=daemon
level=info msg=" --enable-cilium-endpoint-slice='false'" subsys=daemon
level=info msg=" --enable-cilium-health-api-server-access='*'" subsys=daemon
level=info msg=" --enable-custom-calls='false'" subsys=daemon
level=info msg=" --enable-encryption-strict-mode='false'" subsys=daemon
level=info msg=" --enable-endpoint-health-checking='true'" subsys=daemon
level=info msg=" --enable-endpoint-routes='false'" subsys=daemon
level=info msg=" --enable-envoy-config='false'" subsys=daemon
level=info msg=" --enable-external-ips='false'" subsys=daemon
level=info msg=" --enable-health-check-loadbalancer-ip='false'" subsys=daemon
level=info msg=" --enable-health-check-nodeport='true'" subsys=daemon
level=info msg=" --enable-health-checking='true'" subsys=daemon
level=info msg=" --enable-high-scale-ipcache='false'" subsys=daemon
level=info msg=" --enable-host-firewall='false'" subsys=daemon
level=info msg=" --enable-host-legacy-routing='false'" subsys=daemon
level=info msg=" --enable-host-port='false'" subsys=daemon
level=info msg=" --enable-hubble='true'" subsys=daemon
level=info msg=" --enable-hubble-recorder-api='true'" subsys=daemon
level=info msg=" --enable-icmp-rules='true'" subsys=daemon
level=info msg=" --enable-identity-mark='true'" subsys=daemon
level=info msg=" --enable-ip-masq-agent='false'" subsys=daemon
level=info msg=" --enable-ipsec='false'" subsys=daemon
level=info msg=" --enable-ipsec-key-watcher='true'" subsys=daemon
level=info msg=" --enable-ipv4='true'" subsys=daemon
level=info msg=" --enable-ipv4-big-tcp='false'" subsys=daemon
level=info msg=" --enable-ipv4-egress-gateway='false'" subsys=daemon
level=info msg=" --enable-ipv4-fragment-tracking='true'" subsys=daemon
level=info msg=" --enable-ipv4-masquerade='true'" subsys=daemon
level=info msg=" --enable-ipv6='false'" subsys=daemon
level=info msg=" --enable-ipv6-big-tcp='false'" subsys=daemon
level=info msg=" --enable-ipv6-masquerade='true'" subsys=daemon
level=info msg=" --enable-ipv6-ndp='false'" subsys=daemon
level=info msg=" --enable-k8s='true'" subsys=daemon
level=info msg=" --enable-k8s-api-discovery='false'" subsys=daemon
level=info msg=" --enable-k8s-endpoint-slice='true'" subsys=daemon
level=info msg=" --enable-k8s-networkpolicy='true'" subsys=daemon
level=info msg=" --enable-k8s-terminating-endpoint='true'" subsys=daemon
level=info msg=" --enable-l2-announcements='false'" subsys=daemon
level=info msg=" --enable-l2-neigh-discovery='true'" subsys=daemon
level=info msg=" --enable-l2-pod-announcements='false'" subsys=daemon
level=info msg=" --enable-l7-proxy='true'" subsys=daemon
level=info msg=" --enable-local-node-route='true'" subsys=daemon
level=info msg=" --enable-local-redirect-policy='false'" subsys=daemon
level=info msg=" --enable-masquerade-to-route-source='false'" subsys=daemon
level=info msg=" --enable-metrics='true'" subsys=daemon
level=info msg=" --enable-mke='false'" subsys=daemon
level=info msg=" --enable-monitor='true'" subsys=daemon
level=info msg=" --enable-nat46x64-gateway='false'" subsys=daemon
level=info msg=" --enable-node-port='false'" subsys=daemon
level=info msg=" --enable-pmtu-discovery='false'" subsys=daemon
level=info msg=" --enable-policy='default'" subsys=daemon
level=info msg=" --enable-recorder='false'" subsys=daemon
level=info msg=" --enable-remote-node-identity='true'" subsys=daemon
level=info msg=" --enable-runtime-device-detection='false'" subsys=daemon
level=info msg=" --enable-sctp='false'" subsys=daemon
level=info msg=" --enable-service-topology='true'" subsys=daemon
level=info msg=" --enable-session-affinity='false'" subsys=daemon
level=info msg=" --enable-srv6='false'" subsys=daemon
level=info msg=" --enable-stale-cilium-endpoint-cleanup='true'" subsys=daemon
level=info msg=" --enable-svc-source-range-check='true'" subsys=daemon
level=info msg=" --enable-tracing='false'" subsys=daemon
level=info msg=" --enable-unreachable-routes='false'" subsys=daemon
level=info msg=" --enable-vtep='false'" subsys=daemon
level=info msg=" --enable-well-known-identities='false'" subsys=daemon
level=info msg=" --enable-wireguard='false'" subsys=daemon
level=info msg=" --enable-wireguard-userspace-fallback='false'" subsys=daemon
level=info msg=" --enable-xdp-prefilter='false'" subsys=daemon
level=info msg=" --enable-xt-socket-fallback='true'" subsys=daemon
level=info msg=" --encrypt-interface=''" subsys=daemon
level=info msg=" --encrypt-node='false'" subsys=daemon
level=info msg=" --encryption-strict-mode-allow-remote-node-identities='false'" subsys=daemon
level=info msg=" --encryption-strict-mode-cidr=''" subsys=daemon
level=info msg=" --endpoint-bpf-prog-watchdog-interval='30s'" subsys=daemon
level=info msg=" --endpoint-gc-interval='5m0s'" subsys=daemon
level=info msg=" --endpoint-queue-size='25'" subsys=daemon
level=info msg=" --endpoint-status=''" subsys=daemon
level=info msg=" --envoy-config-timeout='2m0s'" subsys=daemon
level=info msg=" --envoy-log=''" subsys=daemon
level=info msg=" --exclude-local-address=''" subsys=daemon
level=info msg=" --external-envoy-proxy='false'" subsys=daemon
level=info msg=" --fixed-identity-mapping=''" subsys=daemon
level=info msg=" --fqdn-regex-compile-lru-size='1024'" subsys=daemon
level=info msg=" --gops-port='9890'" subsys=daemon
level=info msg=" --http-403-msg=''" subsys=daemon
level=info msg=" --http-idle-timeout='0'" subsys=daemon
level=info msg=" --http-max-grpc-timeout='0'" subsys=daemon
level=info msg=" --http-normalize-path='true'" subsys=daemon
level=info msg=" --http-request-timeout='3600'" subsys=daemon
level=info msg=" --http-retry-count='3'" subsys=daemon
level=info msg=" --http-retry-timeout='0'" subsys=daemon
level=info msg=" --hubble-disable-tls='false'" subsys=daemon
level=info msg=" --hubble-event-buffer-capacity='4095'" subsys=daemon
level=info msg=" --hubble-event-queue-size='0'" subsys=daemon
level=info msg=" --hubble-export-allowlist=''" subsys=daemon
level=info msg=" --hubble-export-denylist=''" subsys=daemon
level=info msg=" --hubble-export-fieldmask=''" subsys=daemon
level=info msg=" --hubble-export-file-compress='false'" subsys=daemon
level=info msg=" --hubble-export-file-max-backups='5'" subsys=daemon
level=info msg=" --hubble-export-file-max-size-mb='10'" subsys=daemon
level=info msg=" --hubble-export-file-path=''" subsys=daemon
level=info msg=" --hubble-flowlogs-config-path=''" subsys=daemon
level=info msg=" --hubble-listen-address=':4244'" subsys=daemon
level=info msg=" --hubble-metrics=''" subsys=daemon
level=info msg=" --hubble-metrics-server=''" subsys=daemon
level=info msg=" --hubble-monitor-events=''" subsys=daemon
level=info msg=" --hubble-prefer-ipv6='false'" subsys=daemon
level=info msg=" --hubble-recorder-sink-queue-size='1024'" subsys=daemon
level=info msg=" --hubble-recorder-storage-path='/var/run/cilium/pcaps'" subsys=daemon
level=info msg=" --hubble-redact-enabled='false'" subsys=daemon
level=info msg=" --hubble-redact-http-headers-allow=''" subsys=daemon
level=info msg=" --hubble-redact-http-headers-deny=''" subsys=daemon
level=info msg=" --hubble-redact-http-urlquery='false'" subsys=daemon
level=info msg=" --hubble-redact-http-userinfo='true'" subsys=daemon
level=info msg=" --hubble-redact-kafka-apikey='false'" subsys=daemon
level=info msg=" --hubble-skip-unknown-cgroup-ids='true'" subsys=daemon
level=info msg=" --hubble-socket-path='/var/run/cilium/hubble.sock'" subsys=daemon
level=info msg=" --hubble-tls-cert-file='/var/lib/cilium/tls/hubble/server.crt'" subsys=daemon
level=info msg=" --hubble-tls-client-ca-files='/var/lib/cilium/tls/hubble/client-ca.crt'" subsys=daemon
level=info msg=" --hubble-tls-key-file='/var/lib/cilium/tls/hubble/server.key'" subsys=daemon
level=info msg=" --identity-allocation-mode='crd'" subsys=daemon
level=info msg=" --identity-change-grace-period='5s'" subsys=daemon
level=info msg=" --identity-gc-interval='15m0s'" subsys=daemon
level=info msg=" --identity-heartbeat-timeout='30m0s'" subsys=daemon
level=info msg=" --identity-restore-grace-period='10m0s'" subsys=daemon
level=info msg=" --install-egress-gateway-routes='false'" subsys=daemon
level=info msg=" --install-iptables-rules='true'" subsys=daemon
level=info msg=" --install-no-conntrack-iptables-rules='false'" subsys=daemon
level=info msg=" --ip-allocation-timeout='2m0s'" subsys=daemon
level=info msg=" --ip-masq-agent-config-path='/etc/config/ip-masq-agent'" subsys=daemon
level=info msg=" --ipam='kubernetes'" subsys=daemon
level=info msg=" --ipam-cilium-node-update-rate='15s'" subsys=daemon
level=info msg=" --ipam-default-ip-pool='default'" subsys=daemon
level=info msg=" --ipam-multi-pool-pre-allocation=''" subsys=daemon
level=info msg=" --ipsec-key-file=''" subsys=daemon
level=info msg=" --ipsec-key-rotation-duration='5m0s'" subsys=daemon
level=info msg=" --iptables-lock-timeout='5s'" subsys=daemon
level=info msg=" --iptables-random-fully='false'" subsys=daemon
level=info msg=" --ipv4-native-routing-cidr=''" subsys=daemon
level=info msg=" --ipv4-node='auto'" subsys=daemon
level=info msg=" --ipv4-pod-subnets=''" subsys=daemon
level=info msg=" --ipv4-range='auto'" subsys=daemon
level=info msg=" --ipv4-service-loopback-address='[redacted]'" subsys=daemon
level=info msg=" --ipv4-service-range='auto'" subsys=daemon
level=info msg=" --ipv6-cluster-alloc-cidr='f00d::/64'" subsys=daemon
level=info msg=" --ipv6-mcast-device=''" subsys=daemon
level=info msg=" --ipv6-native-routing-cidr=''" subsys=daemon
level=info msg=" --ipv6-node='auto'" subsys=daemon
level=info msg=" --ipv6-pod-subnets=''" subsys=daemon
level=info msg=" --ipv6-range='auto'" subsys=daemon
level=info msg=" --ipv6-service-range='auto'" subsys=daemon
level=info msg=" --join-cluster='false'" subsys=daemon
level=info msg=" --k8s-api-server='https://[redacted]:6443'" subsys=daemon
level=info msg=" --k8s-client-burst='20'" subsys=daemon
level=info msg=" --k8s-client-qps='10'" subsys=daemon
level=info msg=" --k8s-heartbeat-timeout='30s'" subsys=daemon
level=info msg=" --k8s-kubeconfig-path=''" subsys=daemon
level=info msg=" --k8s-namespace='kube-system'" subsys=daemon
level=info msg=" --k8s-require-ipv4-pod-cidr='false'" subsys=daemon
level=info msg=" --k8s-require-ipv6-pod-cidr='false'" subsys=daemon
level=info msg=" --k8s-service-cache-size='128'" subsys=daemon
level=info msg=" --k8s-service-proxy-name=''" subsys=daemon
level=info msg=" --k8s-sync-timeout='3m0s'" subsys=daemon
level=info msg=" --k8s-watcher-endpoint-selector='metadata.name!=kube-scheduler,metadata.name!=kube-controller-manager,metadata.name!=etcd-operator,metadata.name!=gcp-controller-manager'" subsys=daemon
level=info msg=" --keep-config='false'" subsys=daemon
level=info msg=" --kube-proxy-replacement='true'" subsys=daemon
level=info msg=" --kube-proxy-replacement-healthz-bind-address='0.0.0.0:10256'" subsys=daemon
level=info msg=" --kvstore=''" subsys=daemon
level=info msg=" --kvstore-connectivity-timeout='2m0s'" subsys=daemon
level=info msg=" --kvstore-lease-ttl='15m0s'" subsys=daemon
level=info msg=" --kvstore-max-consecutive-quorum-errors='2'" subsys=daemon
level=info msg=" --kvstore-opt=''" subsys=daemon
level=info msg=" --kvstore-periodic-sync='5m0s'" subsys=daemon
level=info msg=" --l2-announcements-lease-duration='15s'" subsys=daemon
level=info msg=" --l2-announcements-renew-deadline='5s'" subsys=daemon
level=info msg=" --l2-announcements-retry-period='2s'" subsys=daemon
level=info msg=" --l2-pod-announcements-interface=''" subsys=daemon
level=info msg=" --label-prefix-file=''" subsys=daemon
level=info msg=" --labels=''" subsys=daemon
level=info msg=" --legacy-turn-off-k8s-event-handover='false'" subsys=daemon
level=info msg=" --lib-dir='/var/lib/cilium'" subsys=daemon
level=info msg=" --local-max-addr-scope='252'" subsys=daemon
level=info msg=" --local-router-ipv4=''" subsys=daemon
level=info msg=" --local-router-ipv6=''" subsys=daemon
level=info msg=" --log-driver=''" subsys=daemon
level=info msg=" --log-opt=''" subsys=daemon
level=info msg=" --log-system-load='false'" subsys=daemon
level=info msg=" --max-connected-clusters='255'" subsys=daemon
level=info msg=" --max-controller-interval='0'" subsys=daemon
level=info msg=" --max-internal-timer-delay='0s'" subsys=daemon
level=info msg=" --mesh-auth-enabled='true'" subsys=daemon
level=info msg=" --mesh-auth-gc-interval='5m0s'" subsys=daemon
level=info msg=" --mesh-auth-mutual-connect-timeout='5s'" subsys=daemon
level=info msg=" --mesh-auth-mutual-listener-port='0'" subsys=daemon
level=info msg=" --mesh-auth-queue-size='1024'" subsys=daemon
level=info msg=" --mesh-auth-rotated-identities-queue-size='1024'" subsys=daemon
level=info msg=" --mesh-auth-signal-backoff-duration='1s'" subsys=daemon
level=info msg=" --mesh-auth-spiffe-trust-domain='spiffe.cilium'" subsys=daemon
level=info msg=" --mesh-auth-spire-admin-socket=''" subsys=daemon
level=info msg=" --metrics=''" subsys=daemon
level=info msg=" --mke-cgroup-mount=''" subsys=daemon
level=info msg=" --monitor-aggregation='medium'" subsys=daemon
level=info msg=" --monitor-aggregation-flags='all'" subsys=daemon
level=info msg=" --monitor-aggregation-interval='5s'" subsys=daemon
level=info msg=" --monitor-queue-size='0'" subsys=daemon
level=info msg=" --mtu='0'" subsys=daemon
level=info msg=" --node-encryption-opt-out-labels='node-role.kubernetes.io/control-plane'" subsys=daemon
level=info msg=" --node-port-acceleration='disabled'" subsys=daemon
level=info msg=" --node-port-algorithm='random'" subsys=daemon
level=info msg=" --node-port-bind-protection='true'" subsys=daemon
level=info msg=" --node-port-mode='snat'" subsys=daemon
level=info msg=" --node-port-range='30000,32767'" subsys=daemon
level=info msg=" --nodeport-addresses=''" subsys=daemon
level=info msg=" --nodes-gc-interval='5m0s'" subsys=daemon
level=info msg=" --operator-api-serve-addr='127.0.0.1:9234'" subsys=daemon
level=info msg=" --operator-prometheus-serve-addr=':9963'" subsys=daemon
level=info msg=" --policy-audit-mode='false'" subsys=daemon
level=info msg=" --policy-cidr-match-mode=''" subsys=daemon
level=info msg=" --policy-queue-size='100'" subsys=daemon
level=info msg=" --policy-trigger-interval='1s'" subsys=daemon
level=info msg=" --pprof='false'" subsys=daemon
level=info msg=" --pprof-address='localhost'" subsys=daemon
level=info msg=" --pprof-port='6060'" subsys=daemon
level=info msg=" --preallocate-bpf-maps='false'" subsys=daemon
level=info msg=" --prepend-iptables-chains='true'" subsys=daemon
level=info msg=" --procfs='/host/proc'" subsys=daemon
level=info msg=" --prometheus-serve-addr=':9962'" subsys=daemon
level=info msg=" --proxy-connect-timeout='2'" subsys=daemon
level=info msg=" --proxy-gid='1337'" subsys=daemon
level=info msg=" --proxy-idle-timeout-seconds='60'" subsys=daemon
level=info msg=" --proxy-max-connection-duration-seconds='0'" subsys=daemon
level=info msg=" --proxy-max-requests-per-connection='0'" subsys=daemon
level=info msg=" --proxy-prometheus-port='9964'" subsys=daemon
level=info msg=" --read-cni-conf=''" subsys=daemon
level=info msg=" --remove-cilium-node-taints='true'" subsys=daemon
level=info msg=" --restore='true'" subsys=daemon
level=info msg=" --route-metric='0'" subsys=daemon
level=info msg=" --routing-mode='tunnel'" subsys=daemon
level=info msg=" --service-no-backend-response='reject'" subsys=daemon
level=info msg=" --set-cilium-is-up-condition='true'" subsys=daemon
level=info msg=" --set-cilium-node-taints='true'" subsys=daemon
level=info msg=" --sidecar-istio-proxy-image='cilium/istio_proxy'" subsys=daemon
level=info msg=" --skip-cnp-status-startup-clean='false'" subsys=daemon
level=info msg=" --socket-path='/var/run/cilium/cilium.sock'" subsys=daemon
level=info msg=" --srv6-encap-mode='reduced'" subsys=daemon
level=info msg=" --state-dir='/var/run/cilium'" subsys=daemon
level=info msg=" --synchronize-k8s-nodes='true'" subsys=daemon
level=info msg=" --tofqdns-dns-reject-response-code='refused'" subsys=daemon
level=info msg=" --tofqdns-enable-dns-compression='true'" subsys=daemon
level=info msg=" --tofqdns-endpoint-max-ip-per-hostname='50'" subsys=daemon
level=info msg=" --tofqdns-idle-connection-grace-period='0s'" subsys=daemon
level=info msg=" --tofqdns-max-deferred-connection-deletes='10000'" subsys=daemon
level=info msg=" --tofqdns-min-ttl='0'" subsys=daemon
level=info msg=" --tofqdns-pre-cache=''" subsys=daemon
level=info msg=" --tofqdns-proxy-port='0'" subsys=daemon
level=info msg=" --tofqdns-proxy-response-max-delay='100ms'" subsys=daemon
level=info msg=" --trace-payloadlen='128'" subsys=daemon
level=info msg=" --trace-sock='true'" subsys=daemon
level=info msg=" --tunnel-port='0'" subsys=daemon
level=info msg=" --tunnel-protocol='vxlan'" subsys=daemon
level=info msg=" --unmanaged-pod-watcher-interval='15'" subsys=daemon
level=info msg=" --use-cilium-internal-ip-for-ipsec='false'" subsys=daemon
level=info msg=" --version='false'" subsys=daemon
level=info msg=" --vlan-bpf-bypass=''" subsys=daemon
level=info msg=" --vtep-cidr=''" subsys=daemon
level=info msg=" --vtep-endpoint=''" subsys=daemon
level=info msg=" --vtep-mac=''" subsys=daemon
level=info msg=" --vtep-mask=''" subsys=daemon
level=info msg=" --wireguard-persistent-keepalive='0s'" subsys=daemon
level=info msg=" --write-cni-conf-when-ready='/host/etc/cni/net.d/05-cilium.conflist'" subsys=daemon
level=info msg=" _ _ _" subsys=daemon
level=info msg=" ___|_| |_|_ _ _____" subsys=daemon
level=info msg="| _| | | | | | |" subsys=daemon
level=info msg="|___|_|_|_|___|_|_|_|" subsys=daemon
level=info msg="Cilium 1.15.1 a368c8f0 2024-02-14T22:16:57+00:00 go version go1.21.6 linux/amd64" subsys=daemon
level=info msg="clang (10.0.0) and kernel (5.15.0) versions: OK!" subsys=linux-datapath
level=info msg="Kernel config file not found: if the agent fails to start, check the system requirements at https://docs.cilium.io/en/stable/operations/system_requirements" subsys=probes
level=info msg="Detected mounted BPF filesystem at /sys/fs/bpf" subsys=bpf
level=info msg="Mounted cgroupv2 filesystem at /run/cilium/cgroupv2" subsys=cgroups
level=info msg="Parsing base label prefixes from default label list" subsys=labels-filter
level=info msg="Parsing additional label prefixes from user inputs: []" subsys=labels-filter
level=info msg="Final label prefixes to be used for identity evaluation:" subsys=labels-filter
level=info msg=" - reserved:.*" subsys=labels-filter
level=info msg=" - :io\\.kubernetes\\.pod\\.namespace" subsys=labels-filter
level=info msg=" - :io\\.cilium\\.k8s\\.namespace\\.labels" subsys=labels-filter
level=info msg=" - :app\\.kubernetes\\.io" subsys=labels-filter
level=info msg=" - !:io\\.kubernetes" subsys=labels-filter
level=info msg=" - !:kubernetes\\.io" subsys=labels-filter
level=info msg=" - !:statefulset\\.kubernetes\\.io/pod-name" subsys=labels-filter
level=info msg=" - !:apps\\.kubernetes\\.io/pod-index" subsys=labels-filter
level=info msg=" - !:batch\\.kubernetes\\.io/job-completion-index" subsys=labels-filter
level=info msg=" - !:.*beta\\.kubernetes\\.io" subsys=labels-filter
level=info msg=" - !:k8s\\.io" subsys=labels-filter
level=info msg=" - !:pod-template-generation" subsys=labels-filter
level=info msg=" - !:pod-template-hash" subsys=labels-filter
level=info msg=" - !:controller-revision-hash" subsys=labels-filter
level=info msg=" - !:annotation.*" subsys=labels-filter
level=info msg=" - !:etcd_node" subsys=labels-filter
level=info msg=Invoked duration=1.129637ms function="pprof.glob..func1 (pkg/pprof/cell.go:50)" subsys=hive
level=info msg=Invoked duration="68.624µs" function="gops.registerGopsHooks (pkg/gops/cell.go:38)" subsys=hive
level=info msg=Invoked duration=1.348098ms function="metrics.glob..func1 (pkg/metrics/cell.go:13)" subsys=hive
level=info msg=Invoked duration="40.776µs" function="metricsmap.RegisterCollector (pkg/maps/metricsmap/metricsmap.go:281)" subsys=hive
level=info msg="Spire Delegate API Client is disabled as no socket path is configured" subsys=spire-delegate
level=info msg="Mutual authentication handler is disabled as no port is configured" subsys=auth
level=info msg=Invoked duration=105.85733ms function="cmd.configureAPIServer (cmd/cells.go:215)" subsys=hive
level=info msg=Invoked duration="26.824µs" function="cmd.unlockAfterAPIServer (cmd/deletion_queue.go:113)" subsys=hive
level=info msg=Invoked duration="57.712µs" function="controller.Init (pkg/controller/cell.go:67)" subsys=hive
level=info msg=Invoked duration="193.473µs" function="endpointcleanup.registerCleanup (pkg/endpointcleanup/cleanup.go:66)" subsys=hive
level=info msg=Invoked duration="20.189µs" function="cmd.glob..func3 (cmd/daemon_main.go:1612)" subsys=hive
level=info msg=Invoked duration="98.672µs" function="cmd.registerEndpointBPFProgWatchdog (cmd/watchdogs.go:57)" subsys=hive
level=info msg=Invoked duration="48.051µs" function="envoy.registerEnvoyVersionCheck (pkg/envoy/cell.go:132)" subsys=hive
level=info msg=Invoked duration="10.521µs" function="bgpv1.glob..func1 (pkg/bgpv1/cell.go:71)" subsys=hive
level=info msg=Invoked duration="95.815µs" function="cmd.registerDeviceReloader (cmd/device-reloader.go:48)" subsys=hive
level=info msg=Invoked duration="23.2µs" function="utime.initUtimeSync (pkg/datapath/linux/utime/cell.go:31)" subsys=hive
level=info msg=Invoked duration="85.983µs" function="agentliveness.newAgentLivenessUpdater (pkg/datapath/agentliveness/agent_liveness.go:43)" subsys=hive
level=info msg=Invoked duration="38.598µs" function="statedb.RegisterTable[...] (pkg/statedb/db.go:121)" subsys=hive
level=info msg=Invoked duration="81.453µs" function="l2responder.NewL2ResponderReconciler (pkg/datapath/l2responder/l2responder.go:72)" subsys=hive
level=info msg=Invoked duration="75.551µs" function="garp.newGARPProcessor (pkg/datapath/garp/processor.go:27)" subsys=hive
level=info msg=Invoked duration="11.869µs" function="bigtcp.glob..func1 (pkg/datapath/linux/bigtcp/bigtcp.go:58)" subsys=hive
level=info msg=Invoked duration="11.223µs" function="linux.glob..func1 (pkg/datapath/linux/devices_controller.go:62)" subsys=hive
level=info msg=Invoked duration="69.556µs" function="ipcache.glob..func3 (pkg/datapath/ipcache/cell.go:25)" subsys=hive
level=info msg=Starting subsys=hive
level=info msg="Started gops server" address="127.0.0.1:9890" subsys=gops
level=info msg="Start hook executed" duration="354.034µs" function="gops.registerGopsHooks.func1 (pkg/gops/cell.go:43)" subsys=hive
level=info msg="Start hook executed" duration="3.521µs" function="metrics.NewRegistry.func1 (pkg/metrics/registry.go:86)" subsys=hive
level=info msg="Establishing connection to apiserver" host="https://[redacted]:6443" subsys=k8s-client
level=info msg="Serving prometheus metrics on :9962" subsys=metrics
level=info msg="Connected to apiserver" subsys=k8s-client
level=info msg="Start hook executed" duration=21.799913ms function="client.(*compositeClientset).onStart" subsys=hive
level=info msg="Start hook executed" duration="69.243µs" function="authmap.newAuthMap.func1 (pkg/maps/authmap/cell.go:27)" subsys=hive
level=info msg="Start hook executed" duration="38.315µs" function="configmap.newMap.func1 (pkg/maps/configmap/cell.go:23)" subsys=hive
level=info msg="Start hook executed" duration="141.443µs" function="signalmap.newMap.func1 (pkg/maps/signalmap/cell.go:44)" subsys=hive
level=info msg="Start hook executed" duration="32.296µs" function="nodemap.newNodeMap.func1 (pkg/maps/nodemap/cell.go:23)" subsys=hive
level=info msg="Start hook executed" duration="74.328µs" function="eventsmap.newEventsMap.func1 (pkg/maps/eventsmap/cell.go:35)" subsys=hive
level=info msg="Start hook executed" duration="22.493µs" function="*resource.resource[*v1.Node].Start" subsys=hive
level=info msg="Start hook executed" duration="3.977µs" function="*resource.resource[*cilium.io/v2.CiliumNode].Start" subsys=hive
level=info msg="Using autogenerated IPv4 allocation range" subsys=node v4Prefix=[redacted]/16
level=info msg="Start hook executed" duration=7.620083ms function="node.NewLocalNodeStore.func1 (pkg/node/local_node_store.go:95)" subsys=hive
level=info msg="Start hook executed" duration="6.877µs" function="*statedb.DB.Start" subsys=hive
level=info msg="Start hook executed" duration="16.495µs" function="hive.New.func1.2 (pkg/hive/hive.go:105)" subsys=hive
level=info msg="Start hook executed" duration="4.54µs" function="*cell.reporterHooks.Start" subsys=hive
level=info msg="Devices changed" devices="[vlan01 vlan02]" subsys=devices-controller
level=info msg="Start hook executed" duration=4.285317ms function="*linux.devicesController.Start" subsys=hive
level=info msg="Node addresses updated" device=bridge01 node-addresses="10.1.2.3 (bridge01)" subsys=node-address
level=info msg="Node addresses updated" device=vlan01 node-addresses="[redacted] (vlan01)" subsys=node-address
level=info msg="Node addresses updated" device=vlan02 node-addresses="[redacted] (vlan02)" subsys=node-address
level=info msg="Node addresses updated" device=cilium_host node-addresses="[redacted] (cilium_host), [redacted] (cilium_host)" subsys=node-address
level=info msg="Start hook executed" duration="258.188µs" function="tables.(*nodeAddressController).register.func1 (pkg/datapath/tables/node_address.go:210)" subsys=hive
level=info msg="Start hook executed" duration="276.829µs" function="*bandwidth.manager.Start" subsys=hive
level=info msg="Start hook executed" duration="667.972µs" function="modules.(*Manager).Start" subsys=hive
level=info msg="Start hook executed" duration=3.86294ms function="*iptables.Manager.Start" subsys=hive
level=info msg="Start hook executed" duration="5.955µs" function="*cell.reporterHooks.Start" subsys=hive
level=info msg="Start hook executed" duration="28.367µs" function="endpointmanager.newDefaultEndpointManager.func1 (pkg/endpointmanager/cell.go:216)" subsys=hive
level=info msg="Start hook executed" duration="21.12µs" function="cmd.newPolicyTrifecta.func1 (cmd/policy.go:130)" subsys=hive
level=info msg="Start hook executed" duration="3.34µs" function="*resource.resource[*cilium.io/v2.CiliumEgressGatewayPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration="1.848µs" function="*resource.resource[*cilium.io/v2.CiliumNode].Start" subsys=hive
level=info msg="Start hook executed" duration="2.26µs" function="*resource.resource[*types.CiliumEndpoint].Start" subsys=hive
level=info msg="Restored 54 node IDs from the BPF map" subsys=linux-datapath
level=info msg="Start hook executed" duration="481.83µs" function="datapath.newDatapath.func1 (pkg/datapath/cells.go:170)" subsys=hive
level=info msg="Start hook executed" duration="7.738µs" function="*resource.resource[*v1.Service].Start" subsys=hive
level=info msg="Start hook executed" duration="1.889µs" function="*resource.resource[*k8s.Endpoints].Start" subsys=hive
level=info msg="Start hook executed" duration="2.542µs" function="*resource.resource[*v1.Pod].Start" subsys=hive
level=info msg="Start hook executed" duration="1.467µs" function="*resource.resource[*v1.Namespace].Start" subsys=hive
level=info msg="Start hook executed" duration="3.744µs" function="*resource.resource[*v1.NetworkPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration="2.086µs" function="*resource.resource[*cilium.io/v2.CiliumNetworkPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration="2.244µs" function="*resource.resource[*cilium.io/v2.CiliumClusterwideNetworkPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration="2.403µs" function="*resource.resource[*cilium.io/v2alpha1.CiliumCIDRGroup].Start" subsys=hive
level=info msg="Start hook executed" duration="2.749µs" function="*resource.resource[*cilium.io/v2alpha1.CiliumEndpointSlice].Start" subsys=hive
level=info msg="Start hook executed" duration="2.147µs" function="*cell.reporterHooks.Start" subsys=hive
level=info msg="Start hook executed" duration="19.849µs" function="*manager.manager.Start" subsys=hive
level=info msg="Start hook executed" duration="102.486µs" function="*cni.cniConfigManager.Start" subsys=hive
level=info msg="Generating CNI configuration file with mode none" subsys=cni-config
level=info msg="Start hook executed" duration="26.925µs" function="k8s.newServiceCache.func1 (pkg/k8s/service_cache.go:144)" subsys=hive
level=info msg="Start hook executed" duration="99.177µs" function="*common.ClusterMesh.Start" subsys=hive
level=info msg="Serving cilium node monitor v1.2 API at unix:///var/run/cilium/monitor1_2.sock" subsys=monitor-agent
level=info msg="Start hook executed" duration="208.263µs" function="agent.newMonitorAgent.func1 (pkg/monitor/agent/cell.go:61)" subsys=hive
level=info msg="Start hook executed" duration="3.421µs" function="*resource.resource[*cilium.io/v2alpha1.CiliumL2AnnouncementPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration="2.987µs" function="*cell.reporterHooks.Start" subsys=hive
level=info msg="Start hook executed" duration="8.083µs" function="*job.group.Start" subsys=hive
level=info msg="Start hook executed" duration="65.808µs" function="envoy.newEnvoyAccessLogServer.func1 (pkg/envoy/cell.go:107)" subsys=hive
level=info msg="Envoy: Starting access log server listening on /var/run/cilium/envoy/sockets/access_log.sock" subsys=envoy-manager
level=info msg="Start hook executed" duration="29.351µs" function="envoy.newArtifactCopier.func1 (pkg/envoy/cell.go:178)" subsys=hive
level=info msg="Start hook executed" duration="161.155µs" function="envoy.newEnvoyXDSServer.func1 (pkg/envoy/cell.go:65)" subsys=hive
level=info msg="Start hook executed" duration="2.242µs" function="*cell.reporterHooks.Start" subsys=hive
level=info msg="Envoy: Starting xDS gRPC server listening on /var/run/cilium/envoy/sockets/xds.sock" subsys=envoy-manager
level=info msg="Start hook executed" duration=35.120097ms function="signal.provideSignalManager.func1 (pkg/signal/cell.go:25)" subsys=hive
level=info msg="Datapath signal listener running" subsys=signal
level=info msg="Start hook executed" duration=2.01692ms function="auth.registerAuthManager.func1 (pkg/auth/cell.go:112)" subsys=hive
level=info msg="Start hook executed" duration="10.07µs" function="auth.registerGCJobs.func1 (pkg/auth/cell.go:162)" subsys=hive
level=info msg="Start hook executed" duration="17.281µs" function="*job.group.Start" subsys=hive
level=info msg="Start hook executed" duration="1.951µs" function="*cell.reporterHooks.Start" subsys=hive
level=info msg="Start hook executed" duration="245.117µs" function="bigtcp.newBIGTCP.func1 (pkg/datapath/linux/bigtcp/bigtcp.go:240)" subsys=hive
level=info msg="Start hook executed" duration="7.403µs" function="*cell.reporterHooks.Start" subsys=hive
level=info msg="Start hook executed" duration="423.931µs" function="*ipsec.keyCustodian.Start" subsys=hive
level=info msg="Start hook executed" duration="1.583µs" function="*job.group.Start" subsys=hive
level=info msg="Inheriting MTU from external network interface" device=bridge01 ipAddr=10.1.2.3 mtu=1500 subsys=mtu
level=info msg="Start hook executed" duration=2.22117ms function="mtu.newForCell.func1 (pkg/mtu/cell.go:40)" subsys=hive
level=info msg="Auto-enabling \"enable-node-port\", \"enable-external-ips\", \"bpf-lb-sock\", \"enable-host-port\", \"enable-session-affinity\" features" subsys=daemon
level=info msg="Cgroup metadata manager is enabled" subsys=cgroup-manager
level=info msg="Removed map pin at /sys/fs/bpf/tc/globals/cilium_ipcache, recreating and re-pinning map cilium_ipcache" file-path=/sys/fs/bpf/tc/globals/cilium_ipcache name=cilium_ipcache subsys=bpf
level=info msg="Removed map pin at /sys/fs/bpf/tc/globals/cilium_tunnel_map, recreating and re-pinning map cilium_tunnel_map" file-path=/sys/fs/bpf/tc/globals/cilium_tunnel_map name=cilium_tunnel_map subsys=bpf
level=info msg="Restored services from maps" failedServices=0 restoredServices=460 subsys=service
level=info msg="Restored backends from maps" failedBackends=0 restoredBackends=504 skippedBackends=0 subsys=service
level=info msg="Reading old endpoints..." subsys=daemon
level=info msg="Reusing previous DNS proxy port: 43657" subsys=daemon
level=info msg="Waiting until all Cilium CRDs are available" subsys=k8s
level=info msg="All Cilium CRDs have been found and are available" subsys=k8s
level=info msg="Retrieved node information from kubernetes node" nodeName=k8s-woker01 subsys=daemon
level=info msg="Received own node information from API server" ipAddr.ipv4=10.1.2.3 ipAddr.ipv6="<nil>" k8sNodeIP=10.1.2.3 labels="map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-woker01 kubernetes.io/os:linux node-role.kubernetes.io/node: openebs.io/nodeid:k8s-woker01 openebs.io/nodename:k8s-woker01 topology.kubernetes.io/region:region01 topology.kubernetes.io/zone:zone01 topology.rbd.csi.ceph.com/region:region01 topology.rbd.csi.ceph.com/zone:zone01]" nodeName=k8s-woker01 subsys=daemon v4Prefix=[redacted]/24 v6Prefix="[redacted]::/64"
level=info msg="k8s mode: Allowing localhost to reach local endpoints" subsys=daemon
level=error msg="Start hook failed" error="daemon creation failed: failed to detect devices: unable to determine direct routing device. Use --direct-routing-device to specify it" function="cmd.newDaemonPromise.func1 (cmd/daemon_main.go:1685)" subsys=hive
level=info msg=Stopping subsys=hive
level=info msg="Stop hook executed" duration="8.541µs" function="*job.group.Stop" subsys=hive
level=info msg="Stop hook executed" duration=504ns function="*ipsec.keyCustodian.Stop" subsys=hive
level=info msg="Stop hook executed" duration="19.056µs" function="*cell.reporterHooks.Stop" subsys=hive
level=info msg="Stop hook executed" duration="11.684µs" function="*cell.reporterHooks.Stop" subsys=hive
level=error msg="Observer job stopped with an error" error="context canceled" func="auth.(*AuthManager).handleAuthRequest" name="auth request-authentication" subsys=auth
level=error msg="Observer job stopped with an error" error="context canceled" func="auth.(*authMapGarbageCollector).handleIdentityChange" name="auth gc-identity-events" subsys=auth
level=info msg="Stop hook executed" duration="207.211µs" function="*job.group.Stop" subsys=hive
level=info msg="Stop hook executed" duration="5.997µs" function="auth.registerGCJobs.func2 (pkg/auth/cell.go:167)" subsys=hive
level=info msg="Datapath signal listener exiting" subsys=signal
level=info msg="Datapath signal listener done" subsys=signal
level=info msg="Stop hook executed" duration="605.517µs" function="signal.provideSignalManager.func2 (pkg/signal/cell.go:28)" subsys=hive
level=info msg="Stop hook executed" duration="32.526µs" function="*cell.reporterHooks.Stop" subsys=hive
level=info msg="Stop hook executed" duration="85.123µs" function="envoy.newEnvoyXDSServer.func2 (pkg/envoy/cell.go:71)" subsys=hive
level=info msg="Stop hook executed" duration="2.012µs" function="envoy.newEnvoyAccessLogServer.func2 (pkg/envoy/cell.go:113)" subsys=hive
level=info msg="Stop hook executed" duration="9.08µs" function="*job.group.Stop" subsys=hive
level=info msg="Stop hook executed" duration="46.948µs" function="*cell.reporterHooks.Stop" subsys=hive
level=info msg="Stop hook executed" duration="17.22µs" function="*resource.resource[*cilium.io/v2alpha1.CiliumL2AnnouncementPolicy].Stop" subsys=hive
level=info msg="Stop hook executed" duration="1.85µs" function="agent.newMonitorAgent.func2 (pkg/monitor/agent/cell.go:91)" subsys=hive
level=info msg="Stop hook executed" duration=5.918054ms function="*common.ClusterMesh.Stop" subsys=hive
level=info msg="Stop hook executed" duration="14.558µs" function="k8s.newServiceCache.func2 (pkg/k8s/service_cache.go:161)" subsys=hive
level=info msg="Stop hook executed" duration="85.985µs" function="*cni.cniConfigManager.Stop" subsys=hive
level=info msg="Stop hook executed" duration="15.804µs" function="*manager.manager.Stop" subsys=hive
level=info msg="Stop hook executed" duration="29.193µs" function="*cell.reporterHooks.Stop" subsys=hive
level=info msg="Stop hook executed" duration="8.035µs" function="*resource.resource[*cilium.io/v2alpha1.CiliumEndpointSlice].Stop" subsys=hive
level=info msg="Stop hook executed" duration="5.164µs" function="*resource.resource[*cilium.io/v2alpha1.CiliumCIDRGroup].Stop" subsys=hive
level=info msg="Stop hook executed" duration="7.771µs" function="*resource.resource[*cilium.io/v2.CiliumClusterwideNetworkPolicy].Stop" subsys=hive
level=info msg="Stop hook executed" duration="7.757µs" function="*resource.resource[*cilium.io/v2.CiliumNetworkPolicy].Stop" subsys=hive
level=info msg="Stop hook executed" duration="7.825µs" function="*resource.resource[*v1.NetworkPolicy].Stop" subsys=hive
level=info msg="Stop hook executed" duration="7.717µs" function="*resource.resource[*v1.Namespace].Stop" subsys=hive
level=info msg="Stop hook executed" duration="7.849µs" function="*resource.resource[*v1.Pod].Stop" subsys=hive
level=info msg="Stop hook executed" duration="7.146µs" function="*resource.resource[*k8s.Endpoints].Stop" subsys=hive
level=info msg="Stop hook executed" duration="8.428µs" function="*resource.resource[*v1.Service].Stop" subsys=hive
level=info msg="Stop hook executed" duration="7.579µs" function="*resource.resource[*types.CiliumEndpoint].Stop" subsys=hive
level=info msg="Stop hook executed" duration="7.839µs" function="*resource.resource[*cilium.io/v2.CiliumNode].Stop" subsys=hive
level=info msg="Stop hook executed" duration="7.942µs" function="*resource.resource[*cilium.io/v2.CiliumEgressGatewayPolicy].Stop" subsys=hive
level=error msg="Close() called without calling InitIdentityAllocator() first" subsys=identity-cache
level=info msg="Stop hook executed" duration="58.987µs" function="cmd.newPolicyTrifecta.func2 (cmd/policy.go:134)" subsys=hive
level=info msg="Stop hook executed" duration="10.672µs" function="endpointmanager.newDefaultEndpointManager.func2 (pkg/endpointmanager/cell.go:220)" subsys=hive
level=info msg="Stop hook executed" duration="20.416µs" function="*cell.reporterHooks.Stop" subsys=hive
level=info msg="Stop hook executed" duration=694ns function="*iptables.Manager.Stop" subsys=hive
level=info msg="Stop hook executed" duration=699ns function="*bandwidth.manager.Stop" subsys=hive
level=info msg="Stop hook executed" duration="76.838µs" function=job.Group.Stop subsys=hive
level=info msg="Stop hook executed" duration="4.852µs" function="*linux.devicesController.Stop" subsys=hive
level=info msg="Stop hook executed" duration="15.214µs" function="*cell.reporterHooks.Stop" subsys=hive
level=info msg="Stop hook executed" duration="1.667µs" function="hive.New.func1.3 (pkg/hive/hive.go:112)" subsys=hive
level=info msg="Stop hook executed" duration="13.697µs" function="*statedb.DB.Stop" subsys=hive
level=info msg="Stop hook executed" duration="25.473µs" function="node.NewLocalNodeStore.func2 (pkg/node/local_node_store.go:121)" subsys=hive
level=info msg="Stop hook executed" duration="164.287µs" function="*resource.resource[*cilium.io/v2.CiliumNode].Stop" subsys=hive
level=info msg="Stop hook executed" duration="70.432µs" function="*resource.resource[*v1.Node].Stop" subsys=hive
level=info msg="Stop hook executed" duration=785ns function="eventsmap.newEventsMap.func2 (pkg/maps/eventsmap/cell.go:45)" subsys=hive
level=info msg="Stop hook executed" duration="12.529µs" function="nodemap.newNodeMap.func2 (pkg/maps/nodemap/cell.go:26)" subsys=hive
level=info msg="Stop hook executed" duration="15.009µs" function="signalmap.newMap.func2 (pkg/maps/signalmap/cell.go:47)" subsys=hive
level=info msg="Stop hook executed" duration="8.255µs" function="configmap.newMap.func2 (pkg/maps/configmap/cell.go:26)" subsys=hive
level=info msg="Stop hook executed" duration="4.141µs" function="authmap.newAuthMap.func2 (pkg/maps/authmap/cell.go:30)" subsys=hive
level=info msg="Stop hook executed" duration="18.41µs" function="client.(*compositeClientset).onStop" subsys=hive
level=info msg="Stop hook executed" duration="56.143µs" function="metrics.NewRegistry.func2 (pkg/metrics/registry.go:96)" subsys=hive
level=info msg="Stopped gops server" address="127.0.0.1:9890" subsys=gops
level=info msg="Stop hook executed" duration="152.786µs" function="gops.registerGopsHooks.func2 (pkg/gops/cell.go:50)" subsys=hive
level=fatal msg="failed to start: daemon creation failed: failed to detect devices: unable to determine direct routing device. Use --direct-routing-device to specify it" subsys=daemon
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1dee7c1]
goroutine 469 [running]:
github.com/cilium/cilium/pkg/node.(*LocalNodeStore).Observe(0x3f36470?, {0x3f36588?, 0xc0009bbbd0?}, 0xff020104d5ff0f00?, 0x8aff010c0100d6?)
<autogenerated>:1 +0x21
github.com/cilium/cilium/pkg/stream.First[...]({_, _}, {_, _})
/go/src/github.com/cilium/cilium/pkg/stream/sinks.go:25 +0x191
github.com/cilium/cilium/pkg/node.(*LocalNodeStore).Get(...)
/go/src/github.com/cilium/cilium/pkg/node/local_node_store.go:145
github.com/cilium/cilium/pkg/node.getLocalNode()
/go/src/github.com/cilium/cilium/pkg/node/address.go:47 +0x9f
github.com/cilium/cilium/pkg/node.GetIPv4()
/go/src/github.com/cilium/cilium/pkg/node/address.go:221 +0x25
github.com/cilium/cilium/pkg/proxy/logger.NewLogRecord({0x3928cce, 0x7}, 0x0?, {0xc00202dbb0, 0x4, 0xc000730be6?})
/go/src/github.com/cilium/cilium/pkg/proxy/logger/logger.go:85 +0x158
github.com/cilium/cilium/daemon/cmd.(*Daemon).notifyOnDNSMsg(0xc000c56000, {0xd90e62?, 0xc001fd0500?, 0x6094be0?}, 0xc000d24e00, {0xc001fee108, 0x12}, 0x2, {0xc000730b30, 0x10}, ...)
/go/src/github.com/cilium/cilium/daemon/cmd/fqdn.go:463 +0x1a5c
github.com/cilium/cilium/pkg/fqdn/dnsproxy.(*DNSProxy).ServeDNS(0xc0027264e0, {0x3f4a128, 0xc001faa700}, 0xc001010bd0)
/go/src/github.com/cilium/cilium/pkg/fqdn/dnsproxy/proxy.go:978 +0x18f6
github.com/cilium/dns.(*Server).serveDNS(0xc000a6d500, {0xc001b22200, 0x55, 0x200}, 0xc001faa700)
/go/src/github.com/cilium/cilium/vendor/github.com/cilium/dns/server.go:653 +0x382
github.com/cilium/dns.(*Server).serveUDPPacket(0xc000a6d500, 0xc0014c2028?, {0xc001b22200, 0x55, 0x200}, {0x3f437e8?, 0xc000dd5690}, 0xc000735c10, {0x0, 0x0})
/go/src/github.com/cilium/cilium/vendor/github.com/cilium/dns/server.go:605 +0x1cc
github.com/cilium/dns.(*Server).serveUDP.func2()
/go/src/github.com/cilium/cilium/vendor/github.com/cilium/dns/server.go:531 +0x49
created by github.com/cilium/dns.(*Server).serveUDP in goroutine 313
/go/src/github.com/cilium/cilium/vendor/github.com/cilium/dns/server.go:530 +0x436
Anything else?
We are running Cilium in direct routing mode without kube-proxy.
Cilium Users Document
- Are you a user of Cilium? Please add yourself to the Users doc
Code of Conduct
- I agree to follow this project's Code of Conduct