CI: Suite-k8s-1.21.K8sAgentIstioTest Istio Bookinfo Demo Tests bookinfo inter-service connectivity #23174
Closed as not planned
@ldelossa

Description

Test Name

Suite-k8s-1.21.K8sAgentIstioTest Istio Bookinfo Demo Tests bookinfo inter-service connectivity

Failure Output

FAIL: unable to retrieve all nodes with 'kubectl get nodes -o json | jq '.items | length'': Exitcode: -1

Stack Trace

/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:453
unable to retrieve all nodes with 'kubectl get nodes -o json | jq '.items | length'': Exitcode: -1 
Err: signal: killed
Stdout:
 	 2
	 
Stderr:
 	 

/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:635
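
Note on the failure signature: `Exitcode: -1` together with `Err: signal: killed` is what Go's `os/exec` reports when a command is killed because its context deadline expired, even though the command had already written the expected node count (`2`) to stdout. A minimal sketch of that mechanism follows; this is not the actual helper in `test/ginkgo-ext`/`test/helpers`, and the timeout value and `bash -c` wrapper are assumptions for illustration only.

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Bound the kubectl call with a context; the 10s value is an assumption,
	// not the timeout the harness actually uses.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Same pipeline as in the failure output, wrapped in a shell so the jq
	// pipe works from a single exec call.
	cmd := exec.CommandContext(ctx, "bash", "-c",
		`kubectl get nodes -o json | jq '.items | length'`)
	out, err := cmd.CombinedOutput()

	// When the deadline fires the process is killed: ExitCode() reports -1
	// and the error reads "signal: killed", even if stdout already contains
	// the node count.
	fmt.Printf("Exitcode: %d\n", cmd.ProcessState.ExitCode())
	if err != nil {
		fmt.Printf("Err: %v\n", err)
	}
	fmt.Printf("Stdout:\n%s\n", out)
}
```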

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 2
Number of "level=error" in logs: 1
Number of "level=warning" in logs: 2
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Network status error received, restarting client connections
error retrieving resource lock kube-system/cilium-operator-resource-lock: Get \
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
⚠️  Number of "level=warning" in logs: 6
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 3 errors/warnings:
Unable to get node resource
Waiting for k8s node information
Key allocation attempt failed
Cilium pods: [cilium-mdw2b cilium-mlqz7]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
grafana-5747bcc8f9-kq4lh      false     false
prometheus-655fb888d7-462nk   false     false
coredns-69b675786c-7zzf6      false     false
Cilium agent 'cilium-mdw2b': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 26 Failed 0
Cilium agent 'cilium-mlqz7': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 22 Failed 0
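
For context, the per-component counters above come from scanning the captured logs for a fixed set of substrings. A rough, self-contained sketch of that kind of scan is below; it is illustrative only, and the file name `cilium.log` and the pattern list are assumptions rather than the harness's actual configuration.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Substrings counted in the summary above; the list here is assumed for
	// illustration.
	patterns := []string{
		"context deadline exceeded",
		"level=error",
		"level=warning",
		"Cilium API handler panicked",
		"Goroutine took lock for more than",
	}
	counts := make(map[string]int, len(patterns))

	f, err := os.Open("cilium.log") // assumed path to a captured log
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// Count how many lines contain each pattern.
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := scanner.Text()
		for _, p := range patterns {
			if strings.Contains(line, p) {
				counts[p]++
			}
		}
	}

	for _, p := range patterns {
		fmt.Printf("Number of %q in logs: %d\n", p, counts[p])
	}
}
```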

Standard Error

11:03:37 STEP: Running BeforeAll block for EntireTestsuite K8sAgentIstioTest
11:03:37 STEP: Ensuring the namespace kube-system exists
11:03:37 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
11:03:37 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
11:03:38 STEP: Downloading cilium-istioctl
11:03:39 STEP: Installing Cilium
11:03:40 STEP: Waiting for Cilium to become ready
11:03:57 STEP: Restarting unmanaged pods coredns-69b675786c-x2r8g in namespace kube-system
11:03:57 STEP: Validating if Kubernetes DNS is deployed
11:03:57 STEP: Checking if deployment is ready
11:03:57 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
11:03:57 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
11:03:57 STEP: Waiting for Kubernetes DNS to become operational
11:03:57 STEP: Checking if deployment is ready
11:03:57 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
11:03:58 STEP: Checking if deployment is ready
11:03:58 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
11:03:59 STEP: Checking if deployment is ready
11:03:59 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
11:04:00 STEP: Checking if deployment is ready
11:04:00 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
11:04:01 STEP: Checking if deployment is ready
11:04:01 STEP: Checking if kube-dns service is plumbed correctly
11:04:01 STEP: Checking if pods have identity
11:04:01 STEP: Checking if DNS can resolve
11:04:05 STEP: Validating Cilium Installation
11:04:05 STEP: Performing Cilium controllers preflight check
11:04:05 STEP: Performing Cilium health check
11:04:05 STEP: Checking whether host EP regenerated
11:04:05 STEP: Performing Cilium status preflight check
11:04:13 STEP: Performing Cilium service preflight check
11:04:13 STEP: Performing K8s service preflight check
11:04:19 STEP: Waiting for cilium-operator to be ready
FAIL: unable to retrieve all nodes with 'kubectl get nodes -o json | jq '.items | length'': Exitcode: -1 
Err: signal: killed
Stdout:
 	 2
	 
Stderr:
 	 

11:04:30 STEP: Running JustAfterEach block for EntireTestsuite K8sAgentIstioTest
===================== TEST FAILED =====================
11:04:31 STEP: Running AfterFailed block for EntireTestsuite K8sAgentIstioTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE           NAME                               READY   STATUS             RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 cilium-monitoring   grafana-5747bcc8f9-kq4lh           0/1     Running            0          59s     10.0.1.232      k8s2   <none>           <none>
	 cilium-monitoring   prometheus-655fb888d7-462nk        1/1     Running            0          59s     10.0.1.54       k8s2   <none>           <none>
	 kube-system         cilium-mdw2b                       1/1     Running            0          56s     192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-mlqz7                       1/1     Running            0          56s     192.168.56.11   k8s1   <none>           <none>
	 kube-system         cilium-operator-5b6d689947-9xghv   1/1     Running            1          56s     192.168.56.11   k8s1   <none>           <none>
	 kube-system         cilium-operator-5b6d689947-vvdnf   1/1     Running            0          56s     192.168.56.12   k8s2   <none>           <none>
	 kube-system         coredns-69b675786c-7zzf6           1/1     Running            0          39s     10.0.0.230      k8s1   <none>           <none>
	 kube-system         etcd-k8s1                          1/1     Running            0          4m50s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-apiserver-k8s1                1/1     Running            0          4m59s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-controller-manager-k8s1       0/1     CrashLoopBackOff   2          4m50s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-proxy-58mck                   1/1     Running            0          2m57s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-proxy-7rn5m                   1/1     Running            0          103s    192.168.56.12   k8s2   <none>           <none>
	 kube-system         kube-scheduler-k8s1                0/1     CrashLoopBackOff   2          4m50s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-hj6ng                 1/1     Running            0          63s     192.168.56.12   k8s2   <none>           <none>
	 kube-system         log-gatherer-wwzf9                 1/1     Running            0          63s     192.168.56.11   k8s1   <none>           <none>
	 kube-system         registry-adder-fn969               1/1     Running            0          101s    192.168.56.11   k8s1   <none>           <none>
	 kube-system         registry-adder-wsln5               1/1     Running            0          101s    192.168.56.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-mdw2b cilium-mlqz7]
cmd: kubectl exec -n kube-system cilium-mdw2b -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                        IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                              
	 107        Disabled           Disabled          4          reserved:health                                                                    fd02::14c   10.0.1.60    ready   
	 2176       Disabled           Disabled          17843      k8s:app=prometheus                                                                 fd02::17c   10.0.1.54    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                    
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                            
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                              
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                   
	 2215       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                                  ready   
	                                                            reserved:host                                                                                                       
	 3666       Disabled           Disabled          61862      k8s:app=grafana                                                                    fd02::1f8   10.0.1.232   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                    
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                            
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                     
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-mlqz7 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                  IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                       
	 90         Disabled           Disabled          14949      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system   fd02::35   10.0.0.230   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                     
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                              
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                  
	                                                            k8s:k8s-app=kube-dns                                                                                         
	 362        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                           ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                    
	                                                            k8s:node-role.kubernetes.io/master                                                                           
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                  
	                                                            reserved:host                                                                                                
	 713        Disabled           Disabled          4          reserved:health                                                              fd02::3d   10.0.0.24    ready   
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
11:04:42 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|a83326c2_K8sAgentIstioTest_Istio_Bookinfo_Demo_Tests_bookinfo_inter-service_connectivity.zip]]
11:04:43 STEP: Running AfterAll block for EntireTestsuite K8sAgentIstioTest
11:04:43 STEP: Deleting default namespace sidecar injection label
11:04:43 STEP: Setting label istio-injection- in namespace default
11:04:44 STEP: Deleting the Istio resources
11:04:44 STEP: Waiting all terminating PODs to disappear
11:04:44 STEP: Deleting the istio-system namespace
11:04:44 STEP: Deleting namespace istio-system
11:04:45 STEP: Removing Cilium installation using generated helm manifest
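
The repeated "Checking if deployment is ready ... only 0 of 1 replicas are available" lines earlier in this log correspond to a simple poll-until-ready loop. A rough sketch of such a loop is below; it is not the helper used by the Cilium test suite, and the namespace, deployment name, and timeout are assumptions.

```go
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// waitForDeployment polls kubectl once a second until the deployment reports
// at least `want` available replicas or the timeout elapses.
func waitForDeployment(namespace, name string, want int, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "-n", namespace, "get", "deployment", name,
			"-o", "jsonpath={.status.availableReplicas}").Output()
		if err == nil {
			got, _ := strconv.Atoi(strings.TrimSpace(string(out)))
			if got >= want {
				return nil
			}
			fmt.Printf("Deployment is not ready yet: only %d of %d replicas are available\n", got, want)
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("deployment %s/%s not ready after %s", namespace, name, timeout)
}

func main() {
	// Example invocation; values are assumed for illustration.
	if err := waitForDeployment("kube-system", "coredns", 1, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```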

Resources

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.9/2287/testReport/junit/Suite-k8s-1/21/K8sAgentIstioTest_Istio_Bookinfo_Demo_Tests_bookinfo_inter_service_connectivity/

Anything else?

No response

Metadata

Assignees

No one assigned

Labels

area/CI (Continuous Integration testing issue or flake), ci/flake (This is a known failure that occurs in the tree. Please investigate me!)
