Conversation

@tmshort (Contributor) commented Feb 10, 2026

Fixes: OCPBUGS-61082

Add support for HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables in operator deployments. Proxy configuration is read from operator-controller's environment at startup and injected into all containers in deployed operator Deployments.
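
For illustration only, a minimal sketch of what the injection step amounts to, under assumed names (the PR's actual mechanism is the Proxy struct and WithProxy option in the registryv1 generators; the field and helper names below are not the PR's API):

// Minimal sketch; field and helper names are assumptions, not the PR's API.
package proxyinject

import (
    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
)

type Proxy struct {
    HTTPProxy  string
    HTTPSProxy string
    NoProxy    string
}

// setProxyEnv upserts the three proxy variables into every container of the
// Deployment's pod template.
func setProxyEnv(dep *appsv1.Deployment, p Proxy) {
    vars := []corev1.EnvVar{
        {Name: "HTTP_PROXY", Value: p.HTTPProxy},
        {Name: "HTTPS_PROXY", Value: p.HTTPSProxy},
        {Name: "NO_PROXY", Value: p.NoProxy},
    }
    for i := range dep.Spec.Template.Spec.Containers {
        c := &dep.Spec.Template.Spec.Containers[i]
        for _, v := range vars {
            c.Env = upsertEnv(c.Env, v)
        }
    }
}

// upsertEnv overwrites an existing variable of the same name or appends it.
func upsertEnv(env []corev1.EnvVar, v corev1.EnvVar) []corev1.EnvVar {
    for i := range env {
        if env[i].Name == v.Name {
            env[i].Value = v.Value
            return env
        }
    }
    return append(env, v)
}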

When operator-controller restarts with changed proxy configuration, existing operator deployments are automatically updated via triggered reconciliation.

Supports both helm and boxcutter appliers.
Includes unit and e2e tests.

Assisted-By: Claude

Description

Reviewer Checklist

  • API Go Documentation
  • Tests: Unit Tests (and E2E Tests, if appropriate)
  • Comprehensive Commit Messages
  • Links to related GitHub Issue(s)

Copilot AI review requested due to automatic review settings February 10, 2026 20:07

netlify bot commented Feb 10, 2026

Deploy Preview for olmv1 ready!

Name | Link
🔨 Latest commit | 5ac663e
🔍 Latest deploy log | https://app.netlify.com/projects/olmv1/deploys/698d32cec44a260008b67013
😎 Deploy Preview | https://deploy-preview-2501--olmv1.netlify.app

openshift-ci bot commented Feb 10, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign perdasilva for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Copilot AI (Contributor) left a comment

Pull request overview

This pull request adds HTTP proxy support for operator deployments in OLMv1. The implementation reads proxy configuration (HTTP_PROXY, HTTPS_PROXY, NO_PROXY) from the operator-controller's environment at startup and injects these values into all containers of deployed operator Deployments. When proxy configuration changes and operator-controller restarts, existing operator deployments are automatically updated through a startup reconciliation trigger that annotates all ClusterExtensions. The feature supports both Helm and Boxcutter appliers through a shared manifest provider architecture.
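
The fingerprint is conceptually just a stable hash of the three proxy values, stored as an annotation on the generated revision and compared on the next reconcile; a minimal sketch, with the function name, hash choice, and separator as assumptions (the PR exposes this as ProxyFingerprint on RegistryV1ManifestProvider):

package proxysketch

import (
    "crypto/sha256"
    "encoding/hex"
)

// proxyFingerprint returns a stable hash of the proxy settings; if the value
// stored on the last ClusterExtensionRevision differs, a new revision is needed.
func proxyFingerprint(httpProxy, httpsProxy, noProxy string) string {
    sum := sha256.Sum256([]byte(httpProxy + "\x00" + httpsProxy + "\x00" + noProxy))
    return hex.EncodeToString(sum[:])
}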

Changes:

  • Added proxy environment variable injection into operator deployment containers
  • Implemented proxy configuration fingerprinting to trigger new ClusterExtensionRevisions when proxy settings change
  • Added startup hook to trigger reconciliation of existing ClusterExtensions when proxy config changes
  • Extended E2E tests with proxy configuration scenarios and increased timeouts for deployment updates

Reviewed changes

Copilot reviewed 13 out of 13 changed files in this pull request and generated 9 comments.

Summary per file:

File | Description
test/e2e/steps/steps.go | Added E2E test step functions for proxy environment variable verification, increased timeout constants for Boxcutter deployments, added CRD cleanup wait logic
test/e2e/features/proxy.feature | New E2E test scenarios covering proxy configuration inheritance and removal
internal/operator-controller/rukpak/render/render.go | Added Proxy struct type and WithProxy option for render configuration
internal/operator-controller/rukpak/render/registryv1/generators/resources.go | Implemented WithProxy function to inject/remove proxy env vars from deployment containers
internal/operator-controller/rukpak/render/registryv1/generators/resources_test.go | Added unit test for proxy environment variable injection
internal/operator-controller/rukpak/render/registryv1/generators/generators.go | Integrated proxy options into deployment generation with always-call pattern for proper removal
internal/operator-controller/labels/labels.go | Added ProxyConfigHashKey annotation for tracking proxy configuration changes
internal/operator-controller/controllers/boxcutter_reconcile_steps_apply_test.go | Updated test to provide empty proxy fingerprint function
internal/operator-controller/controllers/boxcutter_reconcile_steps.go | Modified ApplyBundleWithBoxcutter to accept proxy fingerprint function and add hash to revision annotations
internal/operator-controller/applier/provider.go | Added Proxy field to RegistryV1ManifestProvider and ProxyFingerprint method for change detection
internal/operator-controller/applier/boxcutter.go | Enhanced revision creation logic to detect proxy changes via hash comparison and create new revisions accordingly
cmd/operator-controller/main.go | Read proxy environment variables at startup, added runnable to trigger reconciliation on all ClusterExtensions when proxy config is detected
Makefile | Added 30-minute timeout and GODOG_TAGS support for selective E2E test execution


@tmshort changed the title from "🐛 OCPBUGS-61082: Add HTTP proxy support for operator deployments" to "WIP 🐛 OCPBUGS-61082: Add HTTP proxy support for operator deployments" on Feb 10, 2026
@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Feb 10, 2026
Copilot AI review requested due to automatic review settings February 10, 2026 21:42

Copilot AI (Contributor) left a comment

Pull request overview

Copilot reviewed 14 out of 14 changed files in this pull request and generated 9 comments.



codecov bot commented Feb 11, 2026

Codecov Report

❌ Patch coverage is 79.84496% with 26 lines in your changes missing coverage. Please review.
✅ Project coverage is 73.57%. Comparing base (75393e1) to head (5ac663e).
⚠️ Report is 1 commit behind head on main.

Files with missing lines | Patch % | Lines
cmd/operator-controller/main.go | 68.42% | 8 Missing and 4 partials ⚠️
internal/operator-controller/proxy/proxy.go | 50.00% | 8 Missing and 1 partial ⚠️
...r/rukpak/render/registryv1/generators/resources.go | 91.66% | 2 Missing and 1 partial ⚠️
internal/operator-controller/applier/boxcutter.go | 86.66% | 2 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2501      +/-   ##
==========================================
+ Coverage   69.82%   73.57%   +3.75%     
==========================================
  Files         102      103       +1     
  Lines        8496     8614     +118     
==========================================
+ Hits         5932     6338     +406     
+ Misses       2091     1779     -312     
- Partials      473      497      +24     
Flag | Coverage Δ
e2e | 46.24% <46.51%> (+0.01%) ⬆️
experimental-e2e | 51.26% <65.11%> (+38.36%) ⬆️
unit | 57.68% <40.31%> (-0.30%) ⬇️

Flags with carried forward coverage won't be shown.


Fixes: OCPBUGS-61082

Add support for HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment
variables in operator deployments. Proxy configuration is read from
operator-controller's environment at startup and injected into all
containers in deployed operator Deployments.

When operator-controller restarts with changed proxy configuration,
existing operator deployments are automatically updated via triggered
reconciliation.

Supports both helm and boxcutter appliers.
Includes unit and e2e tests.

Signed-off-by: Todd Short <tshort@redhat.com>
Assisted-By: Claude
Copilot AI review requested due to automatic review settings February 12, 2026 01:54

Copilot AI (Contributor) left a comment

Pull request overview

Copilot reviewed 14 out of 14 changed files in this pull request and generated 4 comments.



Comment on lines +1427 to +1435
var dep appsv1.Deployment
waitFor(ctx, func() bool {
    raw, err := k8sClient("get", rtype, name, "-n", sc.namespace, "-o", "json")
    if err != nil {
        return false
    }
    if err := json.Unmarshal([]byte(raw), &dep); err != nil {
        return false
    }

Copilot AI commented Feb 12, 2026

In ResourceHasEnvVar, dep is declared outside the waitFor polling loop and reused across iterations. Since json.Unmarshal does not reset struct fields that are omitted from the JSON (common with omitempty slices like initContainers), this can leave stale data from a previous iteration and cause false positives/negatives. Create a fresh appsv1.Deployment inside the polling function (as done in ResourceHasNoProxyEnvVars) before unmarshalling.
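
A sketch of that suggestion, reusing the identifiers from the snippet above; envName and envValue are stand-ins for whatever the step function is checking:

waitFor(ctx, func() bool {
    raw, err := k8sClient("get", rtype, name, "-n", sc.namespace, "-o", "json")
    if err != nil {
        return false
    }
    var dep appsv1.Deployment // fresh value each poll; no fields carried over from a previous iteration
    if err := json.Unmarshal([]byte(raw), &dep); err != nil {
        return false
    }
    for _, c := range dep.Spec.Template.Spec.Containers {
        for _, e := range c.Env {
            if e.Name == envName && e.Value == envValue {
                return true
            }
        }
    }
    return false
})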

Comment on lines +1568 to +1576
var dep appsv1.Deployment
waitFor(ctx, func() bool {
    raw, err := k8sClient("get", rtype, name, "-n", sc.namespace, "-o", "json")
    if err != nil {
        return false
    }
    if err := json.Unmarshal([]byte(raw), &dep); err != nil {
        return false
    }

Copilot AI commented Feb 12, 2026

In ResourceHasNoEnvVar, dep is declared outside the waitFor polling loop and reused across iterations. Because json.Unmarshal leaves previously-populated fields intact when the incoming JSON omits them (e.g., omitempty slices), this polling can read stale container/env data and become flaky. Instantiate a new appsv1.Deployment inside the polling closure before each unmarshal.

Comment on lines +456 to +468
// Check first pod is Running and Ready
pod := podList.Items[0]
if pod.Status.Phase != corev1.PodRunning {
    return false
}
for _, cond := range pod.Status.Conditions {
    if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
        podName = pod.Name
        podStartTime = pod.Status.StartTime
        return true
    }
}
return false

Copilot AI commented Feb 12, 2026

waitForOperatorControllerStartup only inspects podList.Items[0] to decide readiness. If multiple pods are returned and the first one is not Ready (or is terminating) while another is Ready, this will unnecessarily block and can flake depending on list ordering. Iterate over all pods and select a Running+Ready pod (ideally the newest by .status.startTime) before proceeding to log checks.

Suggested change (replacing the block quoted above):

// Select the newest Running+Ready, non-terminating pod (by Status.StartTime)
var bestPodName string
var bestPodStartTime *metav1.Time
for i := range podList.Items {
    pod := &podList.Items[i]
    // Skip terminating pods
    if pod.DeletionTimestamp != nil {
        continue
    }
    if pod.Status.Phase != corev1.PodRunning {
        continue
    }
    isReady := false
    for _, cond := range pod.Status.Conditions {
        if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
            isReady = true
            break
        }
    }
    if !isReady {
        continue
    }
    // Choose the newest pod by start time
    if bestPodStartTime == nil {
        bestPodName = pod.Name
        bestPodStartTime = pod.Status.StartTime
        continue
    }
    if pod.Status.StartTime != nil && pod.Status.StartTime.After(bestPodStartTime.Time) {
        bestPodName = pod.Name
        bestPodStartTime = pod.Status.StartTime
    }
}
if bestPodName == "" {
    return false
}
podName = bestPodName
podStartTime = bestPodStartTime
return true

Comment on lines +583 to +592
for i := range extList.Items {
    ext := &extList.Items[i]
    // Trigger reconciliation by adding an annotation
    if ext.Annotations == nil {
        ext.Annotations = make(map[string]string)
    }
    ext.Annotations[proxy.ReconcileAnnotationKey] = time.Now().Format(time.RFC3339Nano)
    if err := cl.Update(ctx, ext); err != nil {
        setupLog.Error(err, "failed to trigger reconciliation for ClusterExtension", "name", ext.Name)
        // Continue with other ClusterExtensions

Copilot AI commented Feb 12, 2026

The startup runnable triggers reconciliation by mutating ext.Annotations and calling cl.Update(ctx, ext) on objects read from a list. This is very likely to hit update conflicts (resourceVersion changes from status updates) and may skip triggering reconciliation for some ClusterExtensions. Prefer a metadata-only patch (e.g., client.Patch with MergeFrom) and retry on conflict so the annotation write is reliably applied without clobbering other concurrent changes.
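
A sketch of that alternative, dropped into the quoted loop; cl, ctx, ext, setupLog, and proxy.ReconcileAnnotationKey come from the snippet above, and client here is assumed to be controller-runtime's sigs.k8s.io/controller-runtime/pkg/client. A plain MergeFrom patch carries no resourceVersion precondition, so it avoids the optimistic-lock conflict; retry.RetryOnConflict could be wrapped around it if an optimistic lock were added:

base := ext.DeepCopy()
if ext.Annotations == nil {
    ext.Annotations = map[string]string{}
}
ext.Annotations[proxy.ReconcileAnnotationKey] = time.Now().Format(time.RFC3339Nano)
// client.MergeFrom computes a merge patch against the copied base, so only the
// annotation change is sent to the API server.
if err := cl.Patch(ctx, ext, client.MergeFrom(base)); err != nil {
    setupLog.Error(err, "failed to trigger reconciliation for ClusterExtension", "name", ext.Name)
    // Continue with other ClusterExtensions.
}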


Labels

do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress.
