
Conversation

@darkdoc
Contributor

@darkdoc darkdoc commented Sep 11, 2025

  • WIP: drop argo client from reconcile struct
  • WIP: drop config client from reconcile loop
  • WIP: drop olm client from reconcile object
  • WIP: drop full client from reconcile object, and everywhere else
  • WIP: drop full client from reconcile object, and everywhere else
  • WIP: drop dynamic client from reconcile object
  • WIP: refactor haveacm fn
  • WIP: drop operatorclient from reconciler
  • WIP: drop routeclient from reconciler
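Taken together, the commits above converge on a reconciler that carries a single controller-runtime client instead of one typed client per API group. A minimal sketch of that end state, with all type and field names assumed rather than taken from the PR:

```go
package controllers

import (
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/rest"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// PatternReconciler is a hypothetical slimmed-down reconciler: the embedded
// runtime client replaces the dedicated argo, config, olm, route, dynamic and
// full clientsets the struct used to carry.
type PatternReconciler struct {
	client.Client
	Scheme     *runtime.Scheme
	RESTConfig *rest.Config // kept only if a discovery client still has to be built on demand
}
```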

@openshift-ci
Contributor

openshift-ci bot commented Sep 11, 2025

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@openshift-ci
Contributor

openshift-ci bot commented Sep 11, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: darkdoc

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@darkdoc
Contributor Author

darkdoc commented Sep 11, 2025

/test all

@mbaldessari
Contributor

Fails with:

2025/09/15 11:36:05 Reconcile step "applying defaults" failed: no kind is registered for the type v1.ClusterVersion in scheme "/workspace/cmd/main.go:54"
2025/09/15 11:36:05 Requeueing
2025-09-15T11:36:05Z INFO Warning: Reconciler returned both a non-zero result and a non-nil error. The result will always be ignored if the error is non-nil and the non-nil error causes requeuing with exponential backoff. For more details, see: https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/reconcile#Reconciler {"controller": "pattern", "controllerGroup": "gitops.hybrid-cloud-patterns.io", "controllerKind": "Pattern", "Pattern": {"name":"pattern-sample","namespace":"openshift-operators"}, "namespace": "openshift-operators", "name": "pattern-sample", "reconcileID": "758f7592-648b-4942-9b72-531d3516e50d"}
2025-09-15T11:36:05Z ERROR Reconciler error {"controller": "pattern", "controllerGroup": "gitops.hybrid-cloud-patterns.io", "controllerKind": "Pattern", "Pattern": {"name":"pattern-sample","namespace":"openshift-operators"}, "namespace": "openshift-operators", "name": "pattern-sample", "reconcileID": "758f7592-648b-4942-9b72-531d3516e50d", "error": "no kind is registered for the type v1.ClusterVersion in scheme \"/workspace/cmd/main.go:54\""}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:353
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:300
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.1
/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:202
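The "no kind is registered for the type v1.ClusterVersion" error points at the scheme handed to the manager: once ClusterVersion is read through the runtime client, the config.openshift.io/v1 types have to be registered alongside the core ones. A hedged sketch of what that registration could look like in cmd/main.go (the exact wiring in the PR may differ):

```go
package main

import (
	configv1 "github.com/openshift/api/config/v1"
	"k8s.io/apimachinery/pkg/runtime"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
)

var scheme = runtime.NewScheme()

func init() {
	// Core Kubernetes types plus the OpenShift config API group, so the
	// runtime client can decode config.openshift.io/v1 kinds such as
	// ClusterVersion.
	utilruntime.Must(clientgoscheme.AddToScheme(scheme))
	utilruntime.Must(configv1.AddToScheme(scheme))
}
```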

We need this change because we switch from the dynamic client to the runtime
client. It also makes the logic somewhat clearer.
Refactor ACM code, add tests, update vendor code.
Pass only the runtime client instead of the whole reconciler object.
This also makes the testing a bit cleaner.
This client is not used anywhere, so it can be removed.
Change the code to use the runtime client so we can drop some vendor
dependencies.
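The "pass only the runtime client" commits describe a pattern like the following, sketched here for a haveacm-style check; the function body, GVK and object names are assumptions rather than the PR's code. Because the helper depends on nothing but client.Client, tests can hand it controller-runtime's fake client instead of building a whole reconciler.

```go
package controllers

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// haveACM reports whether an ACM MultiClusterHub is present. It takes only
// client.Client, so tests can pass fake.NewClientBuilder().Build() directly.
func haveACM(ctx context.Context, c client.Client) (bool, error) {
	mch := &unstructured.Unstructured{}
	mch.SetGroupVersionKind(schema.GroupVersionKind{
		Group:   "operator.open-cluster-management.io",
		Version: "v1",
		Kind:    "MultiClusterHub",
	})
	err := c.Get(ctx, client.ObjectKey{Namespace: "open-cluster-management", Name: "multiclusterhub"}, mch)
	if apierrors.IsNotFound(err) || meta.IsNoMatchError(err) {
		return false, nil
	}
	return err == nil, err
}
```
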
@mbaldessari
Contributor

2025-09-16T07:13:47Z ERROR controller-runtime.cache.UnhandledError Failed to watch {"reflector": "sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:114", "type": "*v1alpha1.Subscription", "error": "subscriptions.operators.coreos.com is forbidden: User \"system:serviceaccount:openshift-operators:patterns-operator-controller-manager\" cannot watch resource \"subscriptions\" in API group \"operators.coreos.com\" at the cluster scope"}
k8s.io/apimachinery/pkg/util/runtime.logError
/workspace/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:226
k8s.io/apimachinery/pkg/util/runtime.handleError
/workspace/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:217
k8s.io/apimachinery/pkg/util/runtime.HandleErrorWithContext
/workspace/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:203
k8s.io/client-go/tools/cache.DefaultWatchErrorHandler
/workspace/vendor/k8s.io/client-go/tools/cache/reflector.go:200
k8s.io/client-go/tools/cache.(*Reflector).RunWithContext.func1
/workspace/vendor/k8s.io/client-go/tools/cache/reflector.go:360
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:233
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:255
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:256
k8s.io/apimachinery/pkg/util/wait.BackoffUntil
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:233
k8s.io/client-go/tools/cache.(*Reflector).RunWithContext
/workspace/vendor/k8s.io/client-go/tools/cache/reflector.go:358
k8s.io/client-go/tools/cache.(*controller).RunWithContext.(*Group).StartWithContext.func3
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:63
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:72
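This forbidden-watch error is an RBAC gap rather than a code bug: once a type is read through the cached runtime client, the manager's cache starts watching it cluster-wide, so the controller's ClusterRole needs list/watch on it. Assuming the usual kubebuilder workflow, a marker along these lines (regenerated with make manifests) would grant that; the actual marker set in the repository may differ:

```go
package controllers

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
)

// Hypothetical RBAC marker for the newly cached type; controller-gen turns it
// into a ClusterRole rule that lets the manager's informer list/watch
// subscriptions cluster-wide.
//
//+kubebuilder:rbac:groups=operators.coreos.com,resources=subscriptions,verbs=get;list;watch
func (r *PatternReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	return ctrl.Result{}, nil
}
```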

@mbaldessari
Contributor

Now it fails with:

2025-10-01T16:12:45Z INFO Reconciling Pattern {"controller": "pattern", "controllerGroup": "gitops.hybrid-cloud-patterns.io", "controllerKind": "Pattern", "Pattern": {"name":"pattern-sample","namespace":"openshift-operators"}, "namespace": "openshift-operators", "name": "pattern-sample", "reconcileID": "e5916188-8639-41e0-9997-dbcd9885ab97"}
2025/10/01 16:12:45 Reconcile step "created or updated clusterwide argo instance" failed: cannot find a sufficiently recent argocd crd version: no matches for kind "argocds" in version "argoproj.io/v1beta1"
2025/10/01 16:12:45 Requeueing
2025-10-01T16:12:45Z INFO Warning: Reconciler returned both a non-zero result and a non-nil error. The result will always be ignored if the error is non-nil and the non-nil error causes requeuing with exponential backoff. For more details, see: https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/reconcile#Reconciler {"controller": "pattern", "controllerGroup": "gitops.hybrid-cloud-patterns.io", "controllerKind": "Pattern", "Pattern": {"name":"pattern-sample","namespace":"openshift-operators"}, "namespace": "openshift-operators", "name": "pattern-sample", "reconcileID": "e5916188-8639-41e0-9997-dbcd9885ab97"}
2025-10-01T16:12:45Z ERROR Reconciler error {"controller": "pattern", "controllerGroup": "gitops.hybrid-cloud-patterns.io", "controllerKind": "Pattern", "Pattern": {"name":"pattern-sample","namespace":"openshift-operators"}, "namespace": "openshift-operators", "name": "pattern-sample", "reconcileID": "e5916188-8639-41e0-9997-dbcd9885ab97", "error": "cannot find a sufficiently recent argocd crd version: no matches for kind \"argocds\" in version \"argoproj.io/v1beta1\""}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler
/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:353
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem
/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:300
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.1
/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:202
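The "no matches for kind \"argocds\" in version \"argoproj.io/v1beta1\"" failure means the RESTMapper cannot resolve that group/version on the test cluster, so the clusterwide-Argo step is looking for a CRD version that is not served. A hedged sketch of how such a check can be expressed against the manager's RESTMapper (names are assumptions):

```go
package controllers

import (
	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// hasArgoCDv1beta1 reports whether the cluster serves ArgoCD in
// argoproj.io/v1beta1. A no-match error means the CRD (or that version of it)
// is absent, which is exactly the condition the reconcile step trips over.
func hasArgoCDv1beta1(mapper meta.RESTMapper) (bool, error) {
	_, err := mapper.RESTMapping(schema.GroupKind{Group: "argoproj.io", Kind: "ArgoCD"}, "v1beta1")
	if meta.IsNoMatchError(err) {
		return false, nil
	}
	return err == nil, err
}
```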

@openshift-merge-robot
Contributor

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

We need the discovery client functionality to keep backward compatibility
when checking for certain API versions.
Instead of a separate haveargo fn, which has very similar functionality,
use the getargocd and checkAPIVersion functions.
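A discovery-based version check along the lines the commit describes could look like this; the checkAPIVersion name comes from the commit message, while the body is only an assumption:

```go
package controllers

import (
	"k8s.io/client-go/discovery"
)

// checkAPIVersion reports whether the API server advertises the given
// group/version, e.g. ("argoproj.io", "v1beta1"). Probing via discovery keeps
// the behaviour backward compatible with how the dynamic client used to check
// for API versions.
func checkAPIVersion(dc discovery.DiscoveryInterface, group, version string) (bool, error) {
	groups, err := dc.ServerGroups()
	if err != nil {
		return false, err
	}
	for _, g := range groups.Groups {
		if g.Name != group {
			continue
		}
		for _, v := range g.Versions {
			if v.Version == version {
				return true, nil
			}
		}
	}
	return false, nil
}
```
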
@mbaldessari
Contributor

2025-10-09T09:12:06Z ERROR controller-runtime.cache.UnhandledError Failed to watch {"reflector": "sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:114", "type": "*v1alpha1.Application", "error": "applications.argoproj.io is forbidden: User \"system:serviceaccount:openshift-operators:patterns-operator-controller-manager\" cannot watch resource \"applications\" in API group \"argoproj.io\" at the cluster scope"}
k8s.io/apimachinery/pkg/util/runtime.logError
/workspace/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:226
k8s.io/apimachinery/pkg/util/runtime.handleError
/workspace/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:217
k8s.io/apimachinery/pkg/util/runtime.HandleErrorWithContext
/workspace/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:203
k8s.io/client-go/tools/cache.DefaultWatchErrorHandler
/workspace/vendor/k8s.io/client-go/tools/cache/reflector.go:200
k8s.io/client-go/tools/cache.(*Reflector).RunWithContext.func1
/workspace/vendor/k8s.io/client-go/tools/cache/reflector.go:360
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:233
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:255
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:256
k8s.io/apimachinery/pkg/util/wait.BackoffUntil
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:233
k8s.io/client-go/tools/cache.(*Reflector).RunWithContext
/workspace/vendor/k8s.io/client-go/tools/cache/reflector.go:358
k8s.io/client-go/tools/cache.(*controller).RunWithContext.(*Group).StartWithContext.func3
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:63
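Same class of failure as the subscriptions watch earlier: the cache is now watching Argo CD Applications cluster-wide, so the ClusterRole also needs list/watch on that resource. In the same hypothetical marker style as above:

```go
//+kubebuilder:rbac:groups=argoproj.io,resources=applications,verbs=get;list;watch
```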
