Description
Which jobs are flaking?
sig-cluster-lifecycle-cluster-api
- capi-e2e-main
sig-cluster-lifecycle-cluster-api-1.9
- capi-e2e-mink8s-release-1-9
- capi-e2e-release-1.9
Which tests are flaking?
periodic-cluster-api-e2e-main.Overall
periodic-cluster-api-e2e-mink8s-release-1-9.Overall
periodic-cluster-api-e2e-release-1-9.Overall
Build Log
The only difference observed between flakes is the port numbers and container IDs.
Test Suite Passed
cat: /proc/552692/status: No such file or directory
cat: /proc/552692/stack: No such file or directory
cat: /proc/552693/status: No such file or directory
cat: /proc/552693/stack: No such file or directory
ERROR: Found unexpected running containers:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e52349437d5d kindest/node:v1.23.17 "/usr/local/bin/entr…" 2 hours ago Up 2 hours 127.0.0.1:44871->6443/tcp clusterctl-upgrade-management-2mbke1-control-plane
+ EXIT_VALUE=1
+ set +o xtrace
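For local triage, a rough sketch of how one might confirm and clean up the leftover container reported above. The cluster name clusterctl-upgrade-management-2mbke1 is inferred from the container name in the log; these commands are an assumption for local debugging, not part of the failing job:

# Hypothetical triage commands; assumes docker and the kind CLI are installed locally.
# List containers matching the leaked management cluster's name prefix from the log above.
docker ps --filter "name=clusterctl-upgrade-management" --format "{{.ID}}\t{{.Image}}\t{{.Names}}"
# The -control-plane suffix suggests a kind cluster named clusterctl-upgrade-management-2mbke1 was left behind.
kind get clusters
kind delete cluster --name clusterctl-upgrade-management-2mbke1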
Since when has it been flaking?
8/02/2025, but possibly earlier.
Testgrid link
https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/periodic-cluster-api-e2e-main/1952223600182300672
https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/periodic-cluster-api-e2e-mink8s-release-1-9/1951726318206849024
https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/periodic-cluster-api-e2e-release-1-9/1952147850607464448
Reason for failure (if possible)
No response
Anything else we need to know?
No response
Label(s) to be applied
/kind flake
One or more /area label. See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels.