
TST/FIX: Expand cost function test coverage and fix numpy smoothness gradient#158

Closed
rcjackson wants to merge 5 commits into openradar:main from rcjackson:unit_testing

Conversation

@rcjackson
Collaborator

  • Added cross-engine unit tests for gradients and cost functions to ensure consistency between engines
  • Fixed numpy calculate_smoothness_gradient: the previous implementation used scipy.ndimage.laplace without the Cx/Cy/Cz coefficients, producing a gradient inconsistent with the actual cost function. Replaced it with the correct analytic gradient built from np.gradient and removed the now-unused scipy imports.
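The cross-engine tests boil down to comparing each engine's cost against the numpy reference within float32 tolerance. A minimal, numpy-only sketch of that comparison pattern (the `smoothness_cost` helper, axis ordering, and coefficients are illustrative assumptions, not the actual PyDDA code; a float32 recomputation stands in for a second engine):

```python
import numpy as np

def smoothness_cost(u, Cx=1e-4, Cy=1e-4, Cz=1e-4):
    # Illustrative stand-in for a smoothness cost: weighted sum of
    # squared second derivatives along each axis ((z, y, x) ordering assumed).
    fz = np.gradient(np.gradient(u, axis=0), axis=0)
    fy = np.gradient(np.gradient(u, axis=1), axis=1)
    fx = np.gradient(np.gradient(u, axis=2), axis=2)
    return np.sum(Cx * fx**2 + Cy * fy**2 + Cz * fz**2)

rng = np.random.default_rng(0)
u64 = rng.standard_normal((8, 10, 12))   # float64 "numpy reference" field
u32 = u64.astype(np.float32)             # stand-in for another engine's precision

ref = smoothness_cost(u64)
other = smoothness_cost(u32)

# float32 carries ~7 decimal digits, so a relative tolerance near 1e-5
# is about the tightest meaningful bound for a cross-engine comparison.
np.testing.assert_allclose(other, ref, rtol=1e-5)
```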

Robert Jackson and others added 5 commits April 8, 2026 14:31
…ngines

- Add _make_radvel_inputs() helper for building single-radar test inputs
- Add test_calculate_rad_velocity_cost_nonzero_jax: verifies JAX cost
  function and gradient match numpy reference with float32 tolerance
- Add test_calculate_rad_velocity_cost_nonzero_tf: same for TensorFlow

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
For each cost function (smoothness, mass continuity, background,
vertical vorticity, model, point), add nonzero-field tests for both
JAX and TensorFlow engines that verify:
- Cost values match the numpy reference within float32 tolerance
- Gradients are nonzero (or match numpy where implementations agree)

Notable findings documented in test comments:
- Smoothness gradient: numpy omits Cx/Cy/Cz coefficients and uses
  scipy.ndimage.laplace; cross-engine comparison is not meaningful
- Mass continuity JAX gradient: jax.vjp differs from numpy's analytic
  adjoint at grid boundaries; cost comparison is used instead
- Vertical vorticity TF: default coeff=1 vs numpy/JAX default coeff=1e-5;
  explicit coeff=1e-5 is now passed in tests

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
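The kind of cost/gradient inconsistency this PR fixes can be caught by a central finite-difference check at a single grid point: perturb one entry of the field, difference the cost, and compare against the analytic gradient. A minimal sketch with a toy quadratic cost (the helper and function names are illustrative, not the actual test code):

```python
import numpy as np

def fd_gradient_check(cost_fn, grad_fn, u, idx, eps=1e-6):
    """Compare one analytic gradient entry against a central
    finite difference of the cost at that grid point."""
    up, um = u.copy(), u.copy()
    up[idx] += eps
    um[idx] -= eps
    fd = (cost_fn(up) - cost_fn(um)) / (2.0 * eps)
    return fd, grad_fn(u)[idx]

def cost(u):
    # Toy quadratic cost with the known gradient 2*u.
    return np.sum(u**2)

def grad(u):
    return 2.0 * u

rng = np.random.default_rng(1)
u = rng.standard_normal((4, 5, 6))
fd, an = fd_gradient_check(cost, grad, u, (2, 2, 3))
```

A hand-written gradient that omits coefficients (as the old numpy smoothness gradient did) fails this check immediately, while the cost itself still looks plausible.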
The previous implementation used scipy.ndimage.laplace (biharmonic operator)
without the Cx, Cy, Cz weighting coefficients, producing a gradient
inconsistent with the actual cost function.

Replace with the analytic gradient of the cost:
  grad = 2*Cx * d²fx/dx² + 2*Cy * d²fy/dy² + 2*Cz * d²fz/dz²
where fx/fy/fz are the combined second-derivative terms from the cost.
u, v, w share the same gradient since the cost treats them symmetrically.

Also remove unused scipy and _nd_image imports, and strengthen the
smoothness gradient tests to do full cross-engine numerical comparison
now that all three engines are consistent.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
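A sketch of the replacement gradient described in this commit, assuming a cost of the form J = Σ (Cx·fx² + Cy·fy² + Cz·fz²) with fx/fy/fz the second derivatives along each axis (the function name, (z, y, x) axis ordering, and default coefficients are illustrative assumptions, not the actual PyDDA signature):

```python
import numpy as np

def calculate_smoothness_gradient(u, dx=1.0, dy=1.0, dz=1.0,
                                  Cx=1e-4, Cy=1e-4, Cz=1e-4):
    """Analytic gradient of J = sum(Cx*fx**2 + Cy*fy**2 + Cz*fz**2).

    In the interior the second-difference stencil is (approximately)
    self-adjoint, so grad = 2*Cx*d2(fx)/dx2 + 2*Cy*d2(fy)/dy2
    + 2*Cz*d2(fz)/dz2, as in the commit message above.
    """
    def d2(field, h, axis):
        # Second derivative via two applications of np.gradient.
        return np.gradient(np.gradient(field, h, axis=axis), h, axis=axis)

    fz = d2(u, dz, axis=0)   # (z, y, x) axis ordering assumed
    fy = d2(u, dy, axis=1)
    fx = d2(u, dx, axis=2)
    return (2.0 * Cx * d2(fx, dx, axis=2)
            + 2.0 * Cy * d2(fy, dy, axis=1)
            + 2.0 * Cz * d2(fz, dz, axis=0))

# Sanity check: a linear field has zero second derivatives everywhere,
# so the smoothness cost and its gradient both vanish identically.
z, y, x = np.meshgrid(np.arange(5.0), np.arange(6.0), np.arange(7.0),
                      indexing="ij")
grad = calculate_smoothness_gradient(1.5 * x - 0.5 * y + 2.0 * z)
```

The u, v, and w fields would each get this same gradient, since the cost treats the three components symmetrically.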
@rcjackson
Collaborator Author

Claude, you silly goose...not committing.

@rcjackson rcjackson closed this Apr 8, 2026