
OCPBUGS-77949 OCPBUGS-77948 OCPBUGS-78298: TNF node replacement test updates#30846

Open
jaypoulz wants to merge 5 commits into openshift:main from jaypoulz:tnf-node-replacement-fixes

Conversation

@jaypoulz
Contributor

@jaypoulz jaypoulz commented Mar 6, 2026

  • Tightens up timeouts in the core test loop
  • Fixes podman-etcd logging to produce human-readable output
  • Fixes a bug with IPv6 address formatting in URLs

Summary by CodeRabbit

  • Tests
    • Improved node-replacement reliability with longer, per-operation timeouts and parallelized waits
    • Enhanced cleanup and force-delete resilience for test resources, including finalizer-based force-delete helpers
    • Added automatic pod log capture after job completion and new job-by-node wait helpers
    • Safer SSH output truncation for logs to limit size while preserving UTF-8 boundaries
    • Expanded pacemaker/status debugging with fuller status dumps on failures
    • Updated test templates to use a Redfish authority-style BMC address format

@openshift-ci-robot

Pipeline controller notification
This repo is configured to use the pipeline controller. Second-stage tests will be triggered either automatically or after the lgtm label is added, depending on the repository configuration. The pipeline controller will automatically detect which contexts are required and will use /test Prow commands to trigger the second stage.

For optional jobs, comment /test ? to see a list of all defined jobs. To manually trigger all second-stage jobs, use the /pipeline required command.

This repository is configured in: automatic mode

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference (indicates that this PR references a valid Jira ticket of any type) and jira/invalid-bug (indicates that a referenced Jira bug is invalid for the branch this PR is targeting) labels on Mar 6, 2026
@openshift-ci-robot

@jaypoulz: This pull request references Jira Issue OCPBUGS-77949, which is invalid:

  • expected the bug to target the "4.22.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.

Details

In response to this:

  • Tightens up timeouts in core test loop
  • Fixes podman-etcd logging to feature human-readable output
  • Fixes a bug with IPv6 IP address formatting in URL

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@coderabbitai

coderabbitai bot commented Mar 6, 2026

Walkthrough

Updated the two-node test suite: parameterized the Redfish BMC address to use an authority string, expanded and parallelized node-replacement timeouts/flows, added job/pod log capture and force-delete helpers, improved pacemaker status/debugging, and truncated large SSH outputs for logging.

Changes

Cohort / File(s) Summary
Test Configuration
test/extended/testdata/two_node/baremetalhost-template.yaml
BMC address value changed to use a Redfish authority string ({REDFISH_AUTHORITY}/redfish/v1/Systems/{UUID}) instead of a host:port IP literal.
Node replacement test
test/extended/two_node/tnf_node_replacement.go
Added per-operation timeouts and retry constants, parallelized update-setup waits for survivor/target, extended VM/etcd/API wait windows, swapped status calls for verbose variants, introduced Redfish authority usage for BMH creation, and added force-delete / finalizer handling and related helpers.
SSH logging utilities
test/extended/two_node/utils/core/ssh.go
Introduced maxLogOutputBytes and truncateForLog (UTF-8 aware) and applied truncation to stdout/stderr logging with indicators when output is shortened.
Common utilities
test/extended/two_node/utils/common.go
Changed TryPacemakerCleanup to call the more verbose PcsStatusFullViaDebug variant for pacemaker status retrieval.
Job & pod handling (etcd)
test/extended/two_node/utils/services/etcd.go
Added DumpJobPodLogs(jobName, namespace, oc); refactored WaitForJobCompletion to capture and dump pod logs after completion; added node-targeted and survivor-aware update-setup wait helpers that resolve jobs by node label and gate on pod creation time.
Pacemaker services
test/extended/two_node/utils/services/pacemaker.go
Added PcsStatusFullViaDebug(ctx, oc, nodeName) to retrieve full pacemaker status via debug container; WaitForNodesOnline now logs full status on poll timeout/failure and tolerates transient retrieval/parse errors during polling.

Sequence Diagram(s)

```mermaid
sequenceDiagram
participant Test as Test Orchestrator
participant OC as OpenShift API (oc)
participant JobPod as Job / Pod
participant BMC as Redfish BMC
participant Pacemaker as Pacemaker (cluster)

Test->>OC: create BareMetalHost (uses REDFISH_AUTHORITY)
Test->>OC: create replacement Job
OC->>JobPod: schedule & run job pod
JobPod->>BMC: invoke Redfish actions (provisioning/inspection)
JobPod->>OC: write pod logs / status
Test->>OC: poll Job and Pod status (WaitForSurvivorUpdateSetupJobCompletion)
alt job completes or timeout
  OC->>Test: return completion + logs (DumpJobPodLogs)
  Test->>Pacemaker: poll cluster state (PcsStatusFullViaDebug)
  Pacemaker->>Test: return full status
end
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

Check name: Test Structure And Quality
Status: ⚠️ Warning
Explanation: The getUpdateSetupJobNameForNode function returns the first job from an unsorted LIST without sorting by CreationTimestamp, risking selection of stale jobs due to non-deterministic Kubernetes LIST ordering.
Resolution: Sort update-setup jobs by CreationTimestamp and select the newest job. Add meaningful assertion failure messages in utility functions for better test diagnostics.
✅ Passed checks (4 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title directly summarizes the main changes: TNF node replacement test updates addressing multiple bugs with specific Jira ticket references.
Docstring Coverage ✅ Passed Docstring coverage is 93.94% which is sufficient. The required threshold is 80.00%.
Stable And Deterministic Test Names ✅ Passed The PR modifies test implementation files and utilities. The only test title added/modified is stable and deterministic with no dynamic information.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (2.5.0)

Error: can't load config: unsupported version of the configuration: "" See https://golangci-lint.run/docs/product/migration-guide for migration instructions
The command is terminated due to an error: can't load config: unsupported version of the configuration: "" See https://golangci-lint.run/docs/product/migration-guide for migration instructions


Comment @coderabbitai help to get the list of available commands and usage tips.

@openshift-ci
Contributor

openshift-ci bot commented Mar 6, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: jaypoulz
Once this PR has been reviewed and has the lgtm label, please assign xueqzhan for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@jaypoulz
Contributor Author

jaypoulz commented Mar 6, 2026

/jira refresh

@openshift-ci-robot openshift-ci-robot added the jira/valid-bug label (indicates that a referenced Jira bug is valid for the branch this PR is targeting) and removed the jira/invalid-bug label (indicates that a referenced Jira bug is invalid for the branch this PR is targeting) on Mar 6, 2026
@openshift-ci-robot

@jaypoulz: This pull request references Jira Issue OCPBUGS-77949, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.22.0) matches configured target version for branch (4.22.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, POST)

No GitHub users were found matching the public email listed for the QA contact in Jira (dhensel@redhat.com), skipping review request.

Details

In response to this:

/jira refresh

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot

@jaypoulz: This pull request references Jira Issue OCPBUGS-77949, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.22.0) matches configured target version for branch (4.22.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

No GitHub users were found matching the public email listed for the QA contact in Jira (dhensel@redhat.com), skipping review request.

Details

In response to this:

  • Tightens up timeouts in core test loop
  • Fixes podman-etcd logging to feature human-readable output
  • Fixes a bug with IPv6 IP address formatting in URL

Summary by CodeRabbit

Release Notes

  • Tests
  • Enhanced node replacement test reliability with improved concurrency and timeout handling for job waits
  • Strengthened resource cleanup logic with finalizer-based deletion support
  • Expanded debugging capabilities with verbose pacemaker status logging and extended error reporting
  • Improved SSH command output logging with automatic truncation to prevent excessive log sizes
  • Added automatic pod log capture for job completion monitoring

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@test/extended/two_node/utils/core/ssh.go`:
- Around line 136-141: The current log truncation slices the UTF-8 string bytes
with out[:maxLogOutputBytes], which can cut a multi-byte rune and produce
invalid UTF-8 in logs; update the truncation logic around stdout.String()/out
and the e2e.Logf call to perform rune-safe truncation (for example, convert to
runes or iterate runes until adding the next rune would exceed
maxLogOutputBytes) and then log the safely truncated string along with the total
byte length using the existing maxLogOutputBytes and e2e.Logf call sites.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 3ecbedc9-3c0f-4263-b6e6-a6a05314b2c2

📥 Commits

Reviewing files that changed from the base of the PR and between 35bab74 and 547c9eb.

📒 Files selected for processing (6)
  • test/extended/testdata/two_node/baremetalhost-template.yaml
  • test/extended/two_node/tnf_node_replacement.go
  • test/extended/two_node/utils/common.go
  • test/extended/two_node/utils/core/ssh.go
  • test/extended/two_node/utils/services/etcd.go
  • test/extended/two_node/utils/services/pacemaker.go

@openshift-ci
Contributor

openshift-ci bot commented Mar 6, 2026

@jaypoulz: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: ci/prow/verify
Commit: 547c9eb (link)
Required: true
Rerun command: /test verify

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

Contributor

@eggfoobar eggfoobar left a comment

Looking great, just had some small suggestions.

automatedCleaningMode: metadata
bmc:
-  address: redfish+https://{REDFISH_IP}:8000/redfish/v1/Systems/{UUID}
+  address: redfish+https://{REDFISH_HOST_PORT}/redfish/v1/Systems/{UUID}
Contributor

Maybe we should just rename this to REDFISH_HOST; the port can cause confusion

// to empty so the test can proceed without blocking on controller cleanup.
func deleteOcResourceWithRetry(oc *exutil.CLI, resourceType, resourceName, namespace string) error {
return core.RetryWithOptions(func() error {
done := make(chan error, 1)
Contributor

Instead of doing this timer, can we just use the old core.RetryWithOptions and, if that fails, fall back to force delete?

"{CREDENTIALS_NAME}": testConfig.TargetNode.BMCSecretName,
"{BOOT_MAC_ADDRESS}": newMACAddress,
"{BMH_NAME}": testConfig.TargetNode.BMHName,
"{REDFISH_HOST_PORT}": redfishHostPort,
Contributor

Same here on updating to REDFISH_HOST

Contributor Author

I decided to go with REDFISH_AUTHORITY since that's the technical term for the host:port part of a URL

…logs

- Use PcsStatusFull / PcsStatusFullViaDebug for human-readable pacemaker
  status instead of XML
- Truncate long SSH stdout/stderr in logs (e.g. avoid pcs status xml dumps)
- On WaitForNodesOnline timeout, log full pacemaker status for debugging

Made-with: Cursor
…nHostPort for IPv6

- Use net.JoinHostPort(RedfishIP, port) so IPv6 addresses are bracketed (RFC 3986)
- BMH template placeholder {REDFISH_HOST_PORT} replaces {REDFISH_IP}; port 8000 in code

Made-with: Cursor
…e replacement

- Capture job pod logs and gate on survivor update-setup run
- Force-delete stuck BMH/Machine after 1m; shorten recovery and BMH timeouts;
  rename timeouts to match duration (3m/7m/10m)
- Wait for survivor and target update-setup jobs in parallel

Made-with: Cursor
CSR approval was never observed in testing; remove the wait to simplify recovery.

Made-with: Cursor
@jaypoulz jaypoulz force-pushed the tnf-node-replacement-fixes branch from 547c9eb to 0816c76 on March 10, 2026 18:55
@jaypoulz
Contributor Author

/payload-job periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-two-node-fencing-recovery-techpreview

@openshift-ci
Contributor

openshift-ci bot commented Mar 10, 2026

@jaypoulz: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-two-node-fencing-recovery-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/c38d4d90-1cb2-11f1-8125-bec258656377-0

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@test/extended/two_node/tnf_node_replacement.go`:
- Around line 1273-1280: The wait uses a hardcoded minPodCreationTime
(time.Now().Add(-2 * time.Minute)) which can include pre-Ready stale pods;
change waitForNodeRecovery to return the node Ready timestamp (e.g., readyTime)
and use that exact timestamp here instead of time.Now().Add(...), passing the
returned readyTime as minPodCreationTime into
services.WaitForSurvivorUpdateSetupJobCompletion (and the symmetric
WaitForTargetUpdateSetupJobCompletion) so the waits are gated on the node Ready
time rather than an approximate clock offset.
- Around line 1337-1354: The current attempt spawns oc.AsAdmin().Run("delete")
in a goroutine and uses time.After(deleteAttemptTimeout), which leaves the
delete running if the timer fires; replace that pattern with a per-attempt
cancelable context so each delete is actually bounded: inside the
RetryWithOptions callback create ctx, cancel :=
context.WithTimeout(context.Background(), deleteAttemptTimeout) and defer
cancel(), then invoke the delete command with that context (e.g.,
oc.AsAdmin().Run("delete").Args(resourceType, resourceName, "-n",
namespace).WithContext(ctx).Output() or the project’s equivalent Run/Output
method that accepts a context), remove the extra goroutine and select, capture
and log the returned error from the cancelable delete call, and then use
ocResourceExists(oc, resourceType, resourceName, namespace) to decide
success/failure as before.
- Around line 1466-1468: The call to core.RetryOptions in
waitForEtcdResourceToStop is ignoring the function's timeout parameter and
hardcodes threeMinuteTimeout, preventing callers from controlling the deadline;
change the RetryOptions Timeout to use the function's timeout argument (the
timeout parameter of waitForEtcdResourceToStop) instead of threeMinuteTimeout,
and ensure any associated log message that references the timeout reflects the
passed-in timeout value so logs match behavior.
- Around line 1378-1393: The function forceDeleteOcResourceByRemovingFinalizers
currently returns nil even when the confirm loop times out, causing callers to
assume deletion succeeded; change the final branch so that after the timeout
(where it currently logs the WARNING) the function returns a non-nil error
(e.g., fmt.Errorf with context including resourceType, resourceName and
forceDeleteConfirmTimeout) instead of nil so callers see the failure and can
handle the retry/error path; update the log call in
forceDeleteOcResourceByRemovingFinalizers to include the same error context when
returning.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: fce9d0e8-a5c5-480a-b91a-17f85f92e721

📥 Commits

Reviewing files that changed from the base of the PR and between 547c9eb and 0816c76.

📒 Files selected for processing (6)
  • test/extended/testdata/two_node/baremetalhost-template.yaml
  • test/extended/two_node/tnf_node_replacement.go
  • test/extended/two_node/utils/common.go
  • test/extended/two_node/utils/core/ssh.go
  • test/extended/two_node/utils/services/etcd.go
  • test/extended/two_node/utils/services/pacemaker.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • test/extended/testdata/two_node/baremetalhost-template.yaml

Comment on lines +1273 to +1280
minPodCreationTime := time.Now().Add(-2 * time.Minute)
e2e.Logf("Waiting for both CEO update-setup jobs (survivor and target) in parallel")
var wg sync.WaitGroup
var errSurvivor, errTarget error
wg.Add(2)
go func() {
	defer wg.Done()
	errSurvivor = services.WaitForSurvivorUpdateSetupJobCompletion(testConfig.Jobs.UpdateSetupJobSurvivorName, etcdNamespace, minPodCreationTime, tenMinuteTimeout, utils.ThirtySecondPollInterval, oc)

⚠️ Potential issue | 🟠 Major

Gate this wait on the actual Ready timestamp.

time.Now().Add(-2 * time.Minute) can still admit an update-setup pod that started before the replacement node became Ready, which is exactly the stale run this logic is trying to exclude. Capture the Ready time in waitForNodeRecovery and pass that exact timestamp through here.


Comment on lines +1337 to +1354
err := core.RetryWithOptions(func() error {
	done := make(chan error, 1)
	go func() {
		_, err := oc.AsAdmin().Run("delete").Args(resourceType, resourceName, "-n", namespace).Output()
		done <- err
	}()
	select {
	case err := <-done:
		if err != nil {
			e2e.Logf("%s returned error: %v", opName, err)
		}
	case <-time.After(deleteAttemptTimeout):
		e2e.Logf("%s did not complete within %v", opName, deleteAttemptTimeout)
	}
	if !ocResourceExists(oc, resourceType, resourceName, namespace) {
		return nil
	}
	return fmt.Errorf("resource still exists")

⚠️ Potential issue | 🟠 Major

deleteAttemptTimeout is not actually enforced here.

When time.After(deleteAttemptTimeout) fires, the oc delete keeps running in the background. Later retries can overlap the earlier delete, so a hung attempt is never really capped at 20 seconds. Use a cancelable delete path per attempt instead of goroutine + time.After.


Comment on lines +1378 to +1393
func forceDeleteOcResourceByRemovingFinalizers(oc *exutil.CLI, resourceType, resourceName, namespace string) error {
	_, err := oc.AsAdmin().Run("patch").Args(resourceType, resourceName, "-n", namespace, "-p", `{"metadata":{"finalizers":[]}}`, "--type=merge").Output()
	if err != nil {
		return fmt.Errorf("force-delete (patch finalizers) failed: %w", err)
	}
	e2e.Logf("Force-delete patch applied for %s %s; confirming resource is gone (timeout: %v, interval: %v)", resourceType, resourceName, forceDeleteConfirmTimeout, forceDeleteConfirmInterval)
	deadline := time.Now().Add(forceDeleteConfirmTimeout)
	for time.Now().Before(deadline) {
		if !ocResourceExists(oc, resourceType, resourceName, namespace) {
			e2e.Logf("Resource %s %s confirmed gone", resourceType, resourceName)
			return nil
		}
		time.Sleep(forceDeleteConfirmInterval)
	}
	e2e.Logf("WARNING: %s %s still present after %v (patch was applied; it may disappear shortly)", resourceType, resourceName, forceDeleteConfirmTimeout)
	return nil

⚠️ Potential issue | 🟠 Major

Don’t report success while the resource still exists.

If the confirm loop expires, callers continue as though deletion succeeded, but the later recreate steps still race the old BMH/Machine object. Return an error here; otherwise the failure is deferred into a much harder-to-diagnose conflict downstream.

🛠️ Minimal fix
 	for time.Now().Before(deadline) {
 		if !ocResourceExists(oc, resourceType, resourceName, namespace) {
 			e2e.Logf("Resource %s %s confirmed gone", resourceType, resourceName)
 			return nil
 		}
 		time.Sleep(forceDeleteConfirmInterval)
 	}
-	e2e.Logf("WARNING: %s %s still present after %v (patch was applied; it may disappear shortly)", resourceType, resourceName, forceDeleteConfirmTimeout)
-	return nil
+	return fmt.Errorf("%s %s still present after %v even after finalizer patch", resourceType, resourceName, forceDeleteConfirmTimeout)
 }

Comment on lines 1466 to 1468
}, core.RetryOptions{
-	Timeout:      fiveMinuteTimeout,
+	Timeout:      threeMinuteTimeout,
	PollInterval: utils.FiveSecondPollInterval,

⚠️ Potential issue | 🟡 Minor

Honor the timeout parameter here.

waitForEtcdResourceToStop accepts timeout but hardcodes threeMinuteTimeout, so callers cannot actually control the deadline and the log message above can be misleading.

🛠️ Minimal fix
 	}, core.RetryOptions{
-		Timeout:      threeMinuteTimeout,
+		Timeout:      timeout,
 		PollInterval: utils.FiveSecondPollInterval,
 	}, fmt.Sprintf("etcd stop on %s", testConfig.SurvivingNode.Name))

…MH force-delete

- Discover update-setup jobs by label and node name instead of hardcoded
  job name, so the test works with CEO's hashed job names
  (tnf-update-setup-job-master-0-637363be) from PR 1554.
- Add WaitForUpdateSetupJobCompletionByNode and
  WaitForSurvivorUpdateSetupJobCompletionByNode in services/etcd.go;
  switch node replacement test to use them.
- When force-deleting BMH fails due to baremetal-operator webhook
  having no endpoints, remove the validating webhook configuration
  and retry the finalizers patch so the test can complete.

Made-with: Cursor
@jaypoulz jaypoulz changed the title from "OCPBUGS-77949: OCPBUGS-77948: TNF node replacement test updates" to "OCPBUGS-77949 OCPBUGS-77948 OCPBUGS-78298: TNF node replacement test updates" on Mar 11, 2026
@openshift-ci-robot

@jaypoulz: This pull request references Jira Issue OCPBUGS-77949, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.22.0) matches configured target version for branch (4.22.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

No GitHub users were found matching the public email listed for the QA contact in Jira (dhensel@redhat.com), skipping review request.

The bug has been updated to refer to the pull request using the external bug tracker.

This pull request references Jira Issue OCPBUGS-77948, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.22.0) matches configured target version for branch (4.22.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

No GitHub users were found matching the public email listed for the QA contact in Jira (dhensel@redhat.com), skipping review request.

The bug has been updated to refer to the pull request using the external bug tracker.

This pull request references Jira Issue OCPBUGS-78298, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.22.0) matches configured target version for branch (4.22.0)
  • bug is in the state New, which is one of the valid states (NEW, ASSIGNED, POST)

No GitHub users were found matching the public email listed for the QA contact in Jira (dhensel@redhat.com), skipping review request.

The bug has been updated to refer to the pull request using the external bug tracker.

Details

In response to this:

  • Tightens up timeouts in core test loop
  • Fixes podman-etcd logging to feature human-readable output
  • Fixes a bug with IPv6 IP address formatting in URL

Summary by CodeRabbit

  • Tests
  • Improved node-replacement reliability with longer, per-operation timeouts and parallelized waits
  • Enhanced cleanup and force-delete resilience for test resources
  • Added automatic pod log capture after job completion and safer SSH output truncation to limit log size
  • Expanded pacemaker/status debugging with fuller status dumps on failures
  • Updated test templates to use a Redfish authority-style BMC address format

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@jaypoulz
Contributor Author

/payload-job periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-two-node-fencing-recovery-techpreview

@openshift-ci
Contributor

openshift-ci bot commented Mar 11, 2026

@jaypoulz: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-two-node-fencing-recovery-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/215be940-1d8a-11f1-85b1-c121c92478a0-0

@openshift-ci-robot

@jaypoulz: This pull request references Jira Issue OCPBUGS-77949, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.22.0) matches configured target version for branch (4.22.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

No GitHub users were found matching the public email listed for the QA contact in Jira (dhensel@redhat.com), skipping review request.

This pull request references Jira Issue OCPBUGS-77948, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.22.0) matches configured target version for branch (4.22.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

No GitHub users were found matching the public email listed for the QA contact in Jira (dhensel@redhat.com), skipping review request.

This pull request references Jira Issue OCPBUGS-78298, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.22.0) matches configured target version for branch (4.22.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

No GitHub users were found matching the public email listed for the QA contact in Jira (dhensel@redhat.com), skipping review request.

Details

In response to this:

  • Tightens up timeouts in core test loop
  • Fixes podman-etcd logging to feature human-readable output
  • Fixes a bug with IPv6 IP address formatting in URL

Summary by CodeRabbit

  • Tests
  • Improved node-replacement reliability with longer, per-operation timeouts and parallelized waits
  • Enhanced cleanup and force-delete resilience for test resources, including finalizer-based force-delete helpers
  • Added automatic pod log capture after job completion and new job-by-node wait helpers
  • Safer SSH output truncation for logs to limit size while preserving UTF-8 boundaries
  • Expanded pacemaker/status debugging with fuller status dumps on failures
  • Updated test templates to use a Redfish authority-style BMC address format

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
test/extended/two_node/tnf_node_replacement.go (1)

589-598: ⚠️ Potential issue | 🟡 Minor

Timeout documentation is inconsistent with implementation.

Line 589 says “overall 30-minute timeout”, but Line 597 sets 20 * time.Minute. Please align the comment/value to avoid misleading recovery logs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/extended/two_node/tnf_node_replacement.go` around lines 589 - 598, The
comment above recoverClusterFromBackup says "Has an overall 30-minute timeout"
but the implementation sets const recoveryTimeout = 20 * time.Minute; update
either the comment or the recoveryTimeout constant so they match (e.g., change
the comment to "20-minute timeout" or set recoveryTimeout = 30 * time.Minute)
and ensure the descriptive log/comment near recoverClusterFromBackup and the
recoveryTimeout constant stay consistent.
♻️ Duplicate comments (4)
test/extended/two_node/tnf_node_replacement.go (4)

1511-1513: ⚠️ Potential issue | 🟡 Minor

Honor the function timeout parameter.

Line 1512 hardcodes threeMinuteTimeout, so the timeout argument to waitForEtcdResourceToStop is ignored.

🛠️ Minimal fix
 	}, core.RetryOptions{
-		Timeout:      threeMinuteTimeout,
+		Timeout:      timeout,
 		PollInterval: utils.FiveSecondPollInterval,
 	}, fmt.Sprintf("etcd stop on %s", testConfig.SurvivingNode.Name))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/extended/two_node/tnf_node_replacement.go` around lines 1511 - 1513, The
call to configure core.RetryOptions in waitForEtcdResourceToStop is ignoring the
function's timeout parameter by hardcoding threeMinuteTimeout; change the
RetryOptions.Timeout field to use the function's timeout parameter (named
timeout) instead of threeMinuteTimeout so the provided timeout is honored (keep
PollInterval as utils.FiveSecondPollInterval and preserve the surrounding call
in waitForEtcdResourceToStop).

1277-1292: ⚠️ Potential issue | 🟠 Major

Gate survivor job timing on actual Ready time, not a fixed offset.

Line 1277 uses time.Now().Add(-2 * time.Minute), which can still include stale pre-Ready runs or exclude valid runs depending on timing drift. Use the exact replacement-node Ready timestamp captured from waitForNodeRecovery.

🛠️ Minimal direction
- minPodCreationTime := time.Now().Add(-2 * time.Minute)
+ minPodCreationTime := replacementNodeReadyTime
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/extended/two_node/tnf_node_replacement.go` around lines 1277 - 1292,
Replace the fixed minPodCreationTime (currently set to time.Now().Add(-2 *
time.Minute)) with the actual replacement-node Ready timestamp returned by
waitForNodeRecovery (use that Ready time as the min creation time); update the
variables passed into services.WaitForSurvivorUpdateSetupJobCompletionByNode and
services.WaitForUpdateSetupJobCompletionByNode to use that replacementReadyTime
(or equivalent field on testConfig.TargetNode) so both waits are gated on the
node's real Ready time rather than a hardcoded offset.

1411-1412: ⚠️ Potential issue | 🟠 Major

Do not return success while the resource still exists.

After force-delete confirmation times out, Line 1412 still returns nil, which lets callers proceed as if cleanup succeeded.

🛠️ Minimal fix
-	e2e.Logf("WARNING: %s %s still present after %v (patch was applied; it may disappear shortly)", resourceType, resourceName, forceDeleteConfirmTimeout)
-	return nil
+	return fmt.Errorf("%s %s still present after %v even after finalizer patch", resourceType, resourceName, forceDeleteConfirmTimeout)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/extended/two_node/tnf_node_replacement.go` around lines 1411 - 1412, The
current code logs a warning when the resource still exists after
forceDeleteConfirmTimeout but then returns nil, signaling success; change the
behavior in the function where this occurs (referencing resourceType,
resourceName, forceDeleteConfirmTimeout and e2e.Logf) to return a non-nil error
instead of nil (e.g., a formatted error describing the resource still present)
so callers do not treat cleanup as successful; ensure fmt (or errors) is
imported and use fmt.Errorf to construct the error message.

1342-1354: ⚠️ Potential issue | 🟠 Major

deleteAttemptTimeout is not truly enforced per attempt.

When Line 1352 times out, the goroutine running oc delete keeps executing in the background. Retries can overlap and race each other.

#!/bin/bash
set -euo pipefail

# Verify whether exutil CLI supports context-bound command execution
fd client.go
rg -n -C3 'type CLI struct' test/extended/util/client.go
rg -n -C4 'func \(.*CLI.*\) Run\(' test/extended/util/client.go
rg -n -C4 'func \(.*CLI.*\) Output\(' test/extended/util/client.go
rg -n -C4 'WithContext|context\.Context' test/extended/util/client.go

If context-bound command execution is unavailable, use a client-go delete path with context timeout where possible.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/extended/two_node/tnf_node_replacement.go` around lines 1342 - 1354, The
deleteAttemptTimeout isn't canceling the in-flight oc delete goroutine, so
retries can overlap; modify the deletion to be context-aware: create a context
with timeout (based on deleteAttemptTimeout) and use a context-bound API (either
pass ctx into oc.AsAdmin().Run(...).Output() if that method supports contexts,
or replace this path with a client-go delete call that accepts ctx) or execute
the command via an exec path that supports CommandContext so the process is
killed when the context times out; ensure the goroutine returns on context
cancellation and send the final error into the done channel only when the ctx
isn't canceled to avoid races between overlapping attempts.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@test/extended/two_node/utils/services/etcd.go`:
- Around line 356-369: The function getUpdateSetupJobNameForNode currently
returns the first job matching nodeName from the list (label selector
tnfUpdateSetupJobLabelSelector), which is nondeterministic; instead, filter
list.Items for Spec.Template.Spec.NodeName == nodeName, then choose the item
with the largest CreationTimestamp (newest) and return its Name; update the
function to iterate to collect matches, compare metav1.Time (or
.CreationTimestamp) to pick the latest job, and return that job's name (or
""/error if none).

---

Outside diff comments:
In `@test/extended/two_node/tnf_node_replacement.go`:
- Around line 589-598: The comment above recoverClusterFromBackup says "Has an
overall 30-minute timeout" but the implementation sets const recoveryTimeout =
20 * time.Minute; update either the comment or the recoveryTimeout constant so
they match (e.g., change the comment to "20-minute timeout" or set
recoveryTimeout = 30 * time.Minute) and ensure the descriptive log/comment near
recoverClusterFromBackup and the recoveryTimeout constant stay consistent.

---

Duplicate comments:
In `@test/extended/two_node/tnf_node_replacement.go`:
- Around line 1511-1513: The call to configure core.RetryOptions in
waitForEtcdResourceToStop is ignoring the function's timeout parameter by
hardcoding threeMinuteTimeout; change the RetryOptions.Timeout field to use the
function's timeout parameter (named timeout) instead of threeMinuteTimeout so
the provided timeout is honored (keep PollInterval as
utils.FiveSecondPollInterval and preserve the surrounding call in
waitForEtcdResourceToStop).
- Around line 1277-1292: Replace the fixed minPodCreationTime (currently set to
time.Now().Add(-2 * time.Minute)) with the actual replacement-node Ready
timestamp returned by waitForNodeRecovery (use that Ready time as the min
creation time); update the variables passed into
services.WaitForSurvivorUpdateSetupJobCompletionByNode and
services.WaitForUpdateSetupJobCompletionByNode to use that replacementReadyTime
(or equivalent field on testConfig.TargetNode) so both waits are gated on the
node's real Ready time rather than a hardcoded offset.
- Around line 1411-1412: The current code logs a warning when the resource still
exists after forceDeleteConfirmTimeout but then returns nil, signaling success;
change the behavior in the function where this occurs (referencing resourceType,
resourceName, forceDeleteConfirmTimeout and e2e.Logf) to return a non-nil error
instead of nil (e.g., a formatted error describing the resource still present)
so callers do not treat cleanup as successful; ensure fmt (or errors) is
imported and use fmt.Errorf to construct the error message.
- Around line 1342-1354: The deleteAttemptTimeout isn't canceling the in-flight
oc delete goroutine, so retries can overlap; modify the deletion to be
context-aware: create a context with timeout (based on deleteAttemptTimeout) and
use a context-bound API (either pass ctx into oc.AsAdmin().Run(...).Output() if
that method supports contexts, or replace this path with a client-go delete call
that accepts ctx) or execute the command via an exec path that supports
CommandContext so the process is killed when the context times out; ensure the
goroutine returns on context cancellation and send the final error into the done
channel only when the ctx isn't canceled to avoid races between overlapping
attempts.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 01a5ae69-4452-435c-803f-8baa3fa28e0a

📥 Commits

Reviewing files that changed from the base of the PR and between 0816c76 and 5cba2a7.

📒 Files selected for processing (2)
  • test/extended/two_node/tnf_node_replacement.go
  • test/extended/two_node/utils/services/etcd.go

Comment on lines +356 to +369
func getUpdateSetupJobNameForNode(oc *exutil.CLI, namespace, nodeName string) (string, error) {
list, err := oc.AdminKubeClient().BatchV1().Jobs(namespace).List(context.Background(), metav1.ListOptions{
LabelSelector: tnfUpdateSetupJobLabelSelector,
})
if err != nil {
return "", err
}
for i := range list.Items {
if list.Items[i].Spec.Template.Spec.NodeName == nodeName {
return list.Items[i].Name, nil
}
}
return "", nil
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

Does Kubernetes guarantee deterministic ordering of objects in LIST responses (for example, Jobs), or should clients explicitly sort by CreationTimestamp when selecting the latest object?

💡 Result:

Kubernetes does not guarantee a deterministic / stable ordering of the items[] returned from a LIST (for Jobs or any other resource). In practice, list pagination is defined in terms of continuing “from the next key” in the underlying storage, not in terms of creationTimestamp (or any other user-visible field), so clients should treat list order as implementation-defined and not rely on it for “latest”. [1]

If you need “the latest” object, clients should explicitly sort (commonly by .metadata.creationTimestamp, with a tie-breaker such as name/UID). Alternatively, if you want “most recently changed”, you can sort by .metadata.resourceVersion (it’s defined to be monotonically increasing and orderable within the same resource type). [2]

Sources:
[1] Kubernetes OpenAPI / ListOptions continue semantics (“next key”) (app.unpkg.com)
[2] Kubernetes API Concepts: resourceVersion is orderable/monotonic within a resource type (kubernetes.io)

Select the latest matching update-setup job, not the first match.

Kubernetes does not guarantee deterministic ordering of objects returned from LIST requests. The current code returns the first job matching nodeName, but the order is implementation-defined and unstable. This can cause selection of stale hashed jobs, leading to false success or prolonged waits against the wrong job. Sort by CreationTimestamp to select the newest job.

🛠️ Suggested fix
 func getUpdateSetupJobNameForNode(oc *exutil.CLI, namespace, nodeName string) (string, error) {
 	list, err := oc.AdminKubeClient().BatchV1().Jobs(namespace).List(context.Background(), metav1.ListOptions{
 		LabelSelector: tnfUpdateSetupJobLabelSelector,
 	})
 	if err != nil {
 		return "", err
 	}
-	for i := range list.Items {
-		if list.Items[i].Spec.Template.Spec.NodeName == nodeName {
-			return list.Items[i].Name, nil
-		}
-	}
-	return "", nil
+	var newest *batchv1.Job
+	for i := range list.Items {
+		job := &list.Items[i]
+		if job.Spec.Template.Spec.NodeName != nodeName {
+			continue
+		}
+		if newest == nil || job.CreationTimestamp.Time.After(newest.CreationTimestamp.Time) {
+			newest = job
+		}
+	}
+	if newest == nil {
+		return "", nil
+	}
+	return newest.Name, nil
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/extended/two_node/utils/services/etcd.go` around lines 356 - 369, The
function getUpdateSetupJobNameForNode currently returns the first job matching
nodeName from the list (label selector tnfUpdateSetupJobLabelSelector), which is
nondeterministic; instead, filter list.Items for Spec.Template.Spec.NodeName ==
nodeName, then choose the item with the largest CreationTimestamp (newest) and
return its Name; update the function to iterate to collect matches, compare
metav1.Time (or .CreationTimestamp) to pick the latest job, and return that
job's name (or ""/error if none).


Labels

jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. jira/valid-reference Indicates that this PR references a valid Jira ticket of any type.


3 participants