Should cilium-envoy tolerations differ from cilium-agent? #31149
Closed

Description

@joestringer

I noticed in a recent CI run that there were three instances of the cilium-envoy Pod and only two of the main cilium Pod:

	 kube-system         cilium-envoy-4kmmg                           1/1     Running    0          4m27s   172.18.0.3   kind-worker2         <none>           <none>
	 kube-system         cilium-envoy-kf7xl                           1/1     Running    0          4m27s   172.18.0.4   kind-worker          <none>           <none>
	 kube-system         cilium-envoy-zjphl                           1/1     Running    0          4m27s   172.18.0.2   kind-control-plane   <none>           <none>
	 kube-system         cilium-node-init-42whq                       1/1     Running    0          4m27s   172.18.0.3   kind-worker2         <none>           <none>
	 kube-system         cilium-node-init-q5m5s                       1/1     Running    0          4m27s   172.18.0.2   kind-control-plane   <none>           <none>
	 kube-system         cilium-node-init-xgskv                       1/1     Running    0          4m27s   172.18.0.4   kind-worker          <none>           <none>
	 kube-system         cilium-operator-f6bc44d98-l75dk              1/1     Running    0          4m27s   172.18.0.3   kind-worker2         <none>           <none>
	 kube-system         cilium-operator-f6bc44d98-sjmv2              1/1     Running    0          4m27s   172.18.0.4   kind-worker          <none>           <none>
	 kube-system         cilium-qv62g                                 0/1     Init:5/7   0          4m27s   172.18.0.4   kind-worker          <none>           <none>
	 kube-system         cilium-xjpqz                                 1/1     Running    0          4m27s   172.18.0.2   kind-control-plane   <none>           <none>
	 kube-system         coredns-76f75df574-kfxd7                     1/1     Running    0          7m35s   10.0.1.121   kind-control-plane   <none>           <none>
	 kube-system         coredns-76f75df574-pjhfd                     1/1     Running    0          7m35s   10.0.1.242   kind-control-plane   <none>           <none>
	 kube-system         etcd-kind-control-plane                      1/1     Running    0          9m18s   172.18.0.2   kind-control-plane   <none>           <none>
	 kube-system         kube-apiserver-kind-control-plane            1/1     Running    0          9m18s   172.18.0.2   kind-control-plane   <none>           <none>
	 kube-system         kube-controller-manager-kind-control-plane   1/1     Running    0          9m18s   172.18.0.2   kind-control-plane   <none>           <none>
	 kube-system         kube-scheduler-kind-control-plane            1/1     Running    0          9m18s   172.18.0.2   kind-control-plane   <none>           <none>
	 kube-system         log-gatherer-bmp7p                           1/1     Running    0          8m48s   172.18.0.3   kind-worker2         <none>           <none>
	 kube-system         log-gatherer-jgmnp                           1/1     Running    0          8m48s   172.18.0.4   kind-worker          <none>           <none>
	 kube-system         log-gatherer-zzb4b                           1/1     Running    0          8m48s   172.18.0.2   kind-control-plane   <none>           <none>

It seems the tolerations are configured separately, but should cilium-envoy run on nodes where the cilium agent isn't running?
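
For reference, a quick way to compare the effective tolerations of the two DaemonSets is to query them directly. This is a minimal sketch, assuming the default kube-system namespace and the DaemonSet names visible in the output above (cilium and cilium-envoy):

	 # Print each DaemonSet's tolerations for a side-by-side comparison.
	 kubectl -n kube-system get daemonset cilium \
	   -o jsonpath='{.spec.template.spec.tolerations}{"\n"}'
	 kubectl -n kube-system get daemonset cilium-envoy \
	   -o jsonpath='{.spec.template.spec.tolerations}{"\n"}'

If the two are meant to schedule identically, aligning the cilium-envoy DaemonSet's tolerations with the agent's via the Helm values would presumably close the gap, but whether they should differ at all is exactly the question here.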

Metadata

Labels

area/servicemesh: GH issues or PRs regarding servicemesh
kind/bug: This is a bug in the Cilium logic.
