WIP 🐛 OCPBUGS-61082: Add HTTP proxy support for operator deployments #2501
Conversation
Pull request overview
This pull request adds HTTP proxy support for operator deployments in OLMv1. The implementation reads proxy configuration (HTTP_PROXY, HTTPS_PROXY, NO_PROXY) from the operator-controller's environment at startup and injects these values into all containers of deployed operator Deployments. When proxy configuration changes and operator-controller restarts, existing operator deployments are automatically updated through a startup reconciliation trigger that annotates all ClusterExtensions. The feature supports both Helm and Boxcutter appliers through a shared manifest provider architecture.
Changes:
- Added proxy environment variable injection into operator deployment containers
- Implemented proxy configuration fingerprinting to trigger new ClusterExtensionRevisions when proxy settings change
- Added startup hook to trigger reconciliation of existing ClusterExtensions when proxy config changes
- Extended E2E tests with proxy configuration scenarios and increased timeouts for deployment updates
Reviewed changes
Copilot reviewed 13 out of 13 changed files in this pull request and generated 9 comments.
| File | Description |
|---|---|
| test/e2e/steps/steps.go | Added E2E test step functions for proxy environment variable verification, increased timeout constants for Boxcutter deployments, added CRD cleanup wait logic |
| test/e2e/features/proxy.feature | New E2E test scenarios covering proxy configuration inheritance and removal |
| internal/operator-controller/rukpak/render/render.go | Added Proxy struct type and WithProxy option for render configuration |
| internal/operator-controller/rukpak/render/registryv1/generators/resources.go | Implemented WithProxy function to inject/remove proxy env vars from deployment containers |
| internal/operator-controller/rukpak/render/registryv1/generators/resources_test.go | Added unit test for proxy environment variable injection |
| internal/operator-controller/rukpak/render/registryv1/generators/generators.go | Integrated proxy options into deployment generation with always-call pattern for proper removal |
| internal/operator-controller/labels/labels.go | Added ProxyConfigHashKey annotation for tracking proxy configuration changes |
| internal/operator-controller/controllers/boxcutter_reconcile_steps_apply_test.go | Updated test to provide empty proxy fingerprint function |
| internal/operator-controller/controllers/boxcutter_reconcile_steps.go | Modified ApplyBundleWithBoxcutter to accept proxy fingerprint function and add hash to revision annotations |
| internal/operator-controller/applier/provider.go | Added Proxy field to RegistryV1ManifestProvider and ProxyFingerprint method for change detection |
| internal/operator-controller/applier/boxcutter.go | Enhanced revision creation logic to detect proxy changes via hash comparison and create new revisions accordingly |
| cmd/operator-controller/main.go | Read proxy environment variables at startup, added runnable to trigger reconciliation on all ClusterExtensions when proxy config is detected |
| Makefile | Added 30-minute timeout and GODOG_TAGS support for selective E2E test execution |
Copilot re-reviewed 14 out of 14 changed files in this pull request and generated 9 comments.
Codecov Report

```
@@            Coverage Diff             @@
##             main    #2501      +/-   ##
==========================================
+ Coverage   69.82%   73.57%   +3.75%
==========================================
  Files         102      103       +1
  Lines        8496     8614     +118
==========================================
+ Hits         5932     6338     +406
+ Misses       2091     1779     -312
- Partials      473      497      +24
```
Fixes: OCPBUGS-61082

Add support for HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables in operator deployments. Proxy configuration is read from the operator-controller's environment at startup and injected into all containers in deployed operator Deployments. When the operator-controller restarts with changed proxy configuration, existing operator deployments are automatically updated via triggered reconciliation. Supports both the Helm and Boxcutter appliers. Includes unit and e2e tests.

Signed-off-by: Todd Short <tshort@redhat.com>
Assisted-By: Claude
Copilot re-reviewed 14 out of 14 changed files in this pull request and generated 4 comments.
```go
var dep appsv1.Deployment
waitFor(ctx, func() bool {
	raw, err := k8sClient("get", rtype, name, "-n", sc.namespace, "-o", "json")
	if err != nil {
		return false
	}
	if err := json.Unmarshal([]byte(raw), &dep); err != nil {
		return false
	}
```
Copilot AI · Feb 12, 2026
In ResourceHasEnvVar, dep is declared outside the waitFor polling loop and reused across iterations. Since json.Unmarshal does not reset struct fields that are omitted from the JSON (common with omitempty slices like initContainers), this can leave stale data from a previous iteration and cause false positives/negatives. Create a fresh appsv1.Deployment inside the polling function (as done in ResourceHasNoProxyEnvVars) before unmarshalling.
```go
var dep appsv1.Deployment
waitFor(ctx, func() bool {
	raw, err := k8sClient("get", rtype, name, "-n", sc.namespace, "-o", "json")
	if err != nil {
		return false
	}
	if err := json.Unmarshal([]byte(raw), &dep); err != nil {
		return false
	}
```
Copilot AI · Feb 12, 2026
In ResourceHasNoEnvVar, dep is declared outside the waitFor polling loop and reused across iterations. Because json.Unmarshal leaves previously-populated fields intact when the incoming JSON omits them (e.g., omitempty slices), this polling can read stale container/env data and become flaky. Instantiate a new appsv1.Deployment inside the polling closure before each unmarshal.
```go
// Check first pod is Running and Ready
pod := podList.Items[0]
if pod.Status.Phase != corev1.PodRunning {
	return false
}
for _, cond := range pod.Status.Conditions {
	if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
		podName = pod.Name
		podStartTime = pod.Status.StartTime
		return true
	}
}
return false
```
Copilot AI · Feb 12, 2026
waitForOperatorControllerStartup only inspects podList.Items[0] to decide readiness. If multiple pods are returned and the first one is not Ready (or is terminating) while another is Ready, this will unnecessarily block and can flake depending on list ordering. Iterate over all pods and select a Running+Ready pod (ideally the newest by .status.startTime) before proceeding to log checks.
Suggested change (replace the block above with):

```go
// Select the newest Running+Ready, non-terminating pod (by Status.StartTime)
var bestPodName string
var bestPodStartTime *metav1.Time
for i := range podList.Items {
	pod := &podList.Items[i]
	// Skip terminating pods
	if pod.DeletionTimestamp != nil {
		continue
	}
	if pod.Status.Phase != corev1.PodRunning {
		continue
	}
	isReady := false
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
			isReady = true
			break
		}
	}
	if !isReady {
		continue
	}
	// Choose the newest pod by start time
	if bestPodStartTime == nil {
		bestPodName = pod.Name
		bestPodStartTime = pod.Status.StartTime
		continue
	}
	if pod.Status.StartTime != nil && pod.Status.StartTime.After(bestPodStartTime.Time) {
		bestPodName = pod.Name
		bestPodStartTime = pod.Status.StartTime
	}
}
if bestPodName == "" {
	return false
}
podName = bestPodName
podStartTime = bestPodStartTime
return true
```
```go
for i := range extList.Items {
	ext := &extList.Items[i]
	// Trigger reconciliation by adding an annotation
	if ext.Annotations == nil {
		ext.Annotations = make(map[string]string)
	}
	ext.Annotations[proxy.ReconcileAnnotationKey] = time.Now().Format(time.RFC3339Nano)
	if err := cl.Update(ctx, ext); err != nil {
		setupLog.Error(err, "failed to trigger reconciliation for ClusterExtension", "name", ext.Name)
		// Continue with other ClusterExtensions
```
Copilot AI · Feb 12, 2026
The startup runnable triggers reconciliation by mutating ext.Annotations and calling cl.Update(ctx, ext) on objects read from a list. This is very likely to hit update conflicts (resourceVersion changes from status updates) and may skip triggering reconciliation for some ClusterExtensions. Prefer a metadata-only patch (e.g., client.Patch with MergeFrom) and retry on conflict so the annotation write is reliably applied without clobbering other concurrent changes.