Conversation
Skipping CI for Draft Pull Request.
Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected; please follow our release note process to remove it. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
/test pull-cluster-api-provider-aws-e2e |
/test pull-cluster-api-provider-aws-e2e |
5 similar comments
/test pull-cluster-api-provider-aws-e2e |
/test pull-cluster-api-provider-aws-e2e |
/test pull-cluster-api-provider-aws-e2e |
/test pull-cluster-api-provider-aws-e2e |
/test pull-cluster-api-provider-aws-e2e |
/test pull-cluster-api-provider-aws-e2e |
1 similar comment
/test pull-cluster-api-provider-aws-e2e |
/test pull-cluster-api-provider-aws-e2e |
Force-pushed from ff78ddf to 2db02c8
/test pull-cluster-api-provider-aws-e2e |
1 similar comment
/test pull-cluster-api-provider-aws-e2e |
Force-pushed from 2db02c8 to b51760f
/test pull-cluster-api-provider-aws-e2e |
Force-pushed from b51760f to aab2eb4
/test pull-cluster-api-provider-aws-e2e |
/test pull-cluster-api-provider-aws-e2e |
3 similar comments
/test pull-cluster-api-provider-aws-e2e |
/test pull-cluster-api-provider-aws-e2e |
/test pull-cluster-api-provider-aws-e2e |
Force-pushed from aab2eb4 to 978d143
- Update cluster-api to v1.12.2
- Update kubernetes dependencies to v0.34.3:
  - k8s.io/api
  - k8s.io/apiextensions-apiserver
  - k8s.io/apimachinery
  - k8s.io/client-go
  - k8s.io/component-base
- Update controller-runtime to v0.22.5
- Update ginkgo to v2.27.2
- Update gomega to v1.38.2
- Regenerate base CRDs

Signed-off-by: Borja Clemente <bclement@redhat.com>
Background diagnostic goroutines (resource dump every 5s, machine dump every 60s) call upstream CAPI framework functions (DumpAllResources, DumpMachines) that use Gomega Expect() assertions internally. When a cluster is being deleted, these assertions can fail with "not found" errors. Since these dumps are purely informational, their failures should not mark the test spec as failed or cause panics.

The previous fix used InterceptGomegaFailure() to catch these failures, but that function is not goroutine-safe: it temporarily replaces the global Gomega fail handler, which can race with assertions in the test's main goroutine. This caused [PANICKED] failures when Eventually() timeouts in the test goroutine hit the intercepted handler, which panics with "stop execution" instead of calling ginkgo.Fail().

Replace InterceptGomegaFailure with a goroutine-aware custom fail handler registered via RegisterFailHandler. The handler uses a sync.Map to track diagnostic goroutine IDs. When a Gomega assertion fails:

- In a diagnostic goroutine: panic with a sentinel value (without ever calling ginkgo.Fail), caught by a per-call recover() which logs a warning.
- In any other goroutine: delegate to ginkgo.Fail normally.

This ensures diagnostic dump failures are silently absorbed without affecting global state or racing with test assertions.

Ref: https://kubernetes.slack.com/archives/CD6U2V71N/p1758795545213209
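A minimal, self-contained sketch of the goroutine-aware handler described in the commit message. Ginkgo and Gomega are not imported; `failHandler` stands in for the function passed to RegisterFailHandler, the `ginkgo.Fail` delegation is stubbed as a plain panic, and all names (`goid`, `runDiagnostic`, `diagGoroutines`) are hypothetical, not the actual CAPA identifiers:

```go
package main

import (
	"bytes"
	"errors"
	"fmt"
	"runtime"
	"strconv"
	"sync"
)

// goid extracts the current goroutine ID from the stack header
// ("goroutine 123 [running]:"). It is used only as a map key.
func goid() uint64 {
	buf := make([]byte, 64)
	buf = buf[:runtime.Stack(buf, false)]
	buf = bytes.TrimPrefix(buf, []byte("goroutine "))
	i := bytes.IndexByte(buf, ' ')
	n, _ := strconv.ParseUint(string(buf[:i]), 10, 64)
	return n
}

// diagGoroutines tracks which goroutines are running diagnostic dumps.
var diagGoroutines sync.Map

// errDiagFailure is the sentinel panicked with when an assertion
// fails inside a diagnostic goroutine.
var errDiagFailure = errors.New("diagnostic assertion failure")

// failHandler stands in for the handler registered via RegisterFailHandler.
// In a diagnostic goroutine it panics with the sentinel (never touching
// ginkgo.Fail); elsewhere it would delegate to ginkgo.Fail (stubbed here).
func failHandler(message string, _ ...int) {
	if _, ok := diagGoroutines.Load(goid()); ok {
		panic(errDiagFailure)
	}
	panic("ginkgo.Fail: " + message) // placeholder for ginkgo.Fail
}

// runDiagnostic marks the calling goroutine as diagnostic for the duration
// of dump and absorbs the sentinel panic, logging a warning instead.
func runDiagnostic(dump func()) {
	id := goid()
	diagGoroutines.Store(id, struct{}{})
	defer diagGoroutines.Delete(id)
	defer func() {
		if r := recover(); r != nil {
			fmt.Printf("warning: diagnostic dump failed: %v\n", r)
		}
	}()
	dump()
}

func main() {
	// A diagnostic goroutine whose assertion fails: absorbed, not fatal.
	done := make(chan struct{})
	go func() {
		defer close(done)
		runDiagnostic(func() { failHandler("resource not found") })
	}()
	<-done
	fmt.Println("test goroutine continues normally")
}
```

Because the sync.Map is only ever read by the handler and written around each dump call, no global handler swap occurs, which is what removes the race with Eventually() in the test goroutine.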
…duling

During self-hosted e2e tests, the CAPA controller pod can be evicted from a control plane node during upgrade drain and rescheduled to a worker node. The previous gcr.io/k8s-staging-cluster-api/capa-manager:e2e image reference is a local-only tag that doesn't exist on any registry, so if the kubelet garbage-collects the pre-loaded image, the pod enters ImagePullBackOff permanently.

Fix this by dynamically rewriting the CAPA provider component image replacement to use the ECR Public URL (where the image is already pushed by ensureTestImageUploaded). With imagePullPolicy: IfNotPresent, the kubelet uses the local cache when available and falls back to pulling from ECR Public if needed. The ECR-tagged image is also added to the Kind bootstrap images list for consistency.
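The rewritten container spec behaves roughly like the fragment below. The registry alias is a placeholder, not the actual ECR Public path used by the e2e suite:

```yaml
# Illustrative only: a pullable registry reference plus IfNotPresent
# lets the kubelet use the pre-loaded image when present and fall back
# to pulling from the registry if it has been garbage-collected.
containers:
  - name: manager
    # Previously a local-only tag (gcr.io/k8s-staging-cluster-api/capa-manager:e2e)
    # that no registry serves; now rewritten to the ECR Public copy.
    image: public.ecr.aws/<registry-alias>/capa-manager:e2e  # hypothetical path
    imagePullPolicy: IfNotPresent
```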
…it race

Lower the healthy threshold from 5 to 2 and the health check interval from 10s to 5s, reducing the time for the target group to mark the API server as healthy from 50s to 10s. This leaves a comfortable 50s margin within kubeadm's default 60s kubernetesAPICallTimeout, preventing the upload-config/kubeadm phase from hitting a context deadline when the ELB has not yet started forwarding traffic.

Also increase the unhealthy threshold from 3 to 6 to compensate for the shorter interval and avoid flapping during transient hiccups.
Add a new `DNSResolutionCheck` field to AWSLoadBalancerSpec that allows users to control whether the provider verifies DNS resolution of the API server LB before marking the load balancer as ready and setting the control plane endpoint.

By default (when unset), the DNS lookup is performed: a fully ready and routable load balancer is necessary before proceeding further with cluster bootstrapping. This has become a strict prerequisite for CAPI clusters installed and bootstrapped via the CAPI kubeadm bootstrap provider, given recent changes in kubeadm that make this a stricter requirement (see: kubernetes/kubeadm#3294).

Setting the field to "None" skips the check entirely, which is useful in environments with no kubeadm requirements, slow DNS propagation, custom resolvers, or private hosted zones where the controller node may not be able to resolve the ELB's FQDN despite it being valid.
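A sketch of how the opt-out might look on an AWSCluster; the serialized field name `dnsResolutionCheck` is an assumption inferred from the Go field name, not taken from the actual CRD:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSCluster
metadata:
  name: example
spec:
  controlPlaneLoadBalancer:
    # Assumed JSON name for the new DNSResolutionCheck field.
    # "None" skips the pre-readiness DNS lookup; leaving the field
    # unset keeps the default of verifying resolution first.
    dnsResolutionCheck: "None"
```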
978d143 to
0512411
Compare
/test pull-cluster-api-provider-aws-e2e |
Flakes

/test pull-cluster-api-provider-aws-e2e
@damdo: The following tests failed, say `/retest` to rerun all failed tests or `/retest-required` to rerun all mandatory failed tests:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
No description provided.