Labels
area: review-pipeline (Review pipeline, context, prompts), bug (Something isn't working)
Description
Found during a deep code review of the LLM adapters.
1. Anthropic adapter ignores response_schema
The OpenAI adapter translates `response_schema` into `response_format`, but the Anthropic adapter never reads the field, so structured output enforcement is silently skipped for Anthropic judge models.
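Anthropic's Messages API has no `response_format` field; structured output is typically enforced by defining a single tool whose `input_schema` is the desired JSON schema and forcing the model to call it via `tool_choice`. A minimal sketch of that translation (the `emit_result` tool name and the string-building helper are illustrative assumptions, not the adapter's actual API):

```rust
/// Hypothetical: map a judge's `response_schema` onto Anthropic tool use.
/// Defines one tool carrying the schema, then forces the model to call it.
fn anthropic_structured_output(schema_json: &str) -> String {
    format!(
        r#"{{"tools":[{{"name":"emit_result","description":"Return the structured result.","input_schema":{schema}}}],"tool_choice":{{"type":"tool","name":"emit_result"}}}}"#,
        schema = schema_json
    )
}

fn main() {
    let body = anthropic_structured_output(r#"{"type":"object"}"#);
    // The schema rides along as the forced tool's input_schema.
    assert!(body.contains(r#""input_schema":{"type":"object"}"#));
    println!("{body}");
}
```

The structured result then comes back as the `tool_use` block's `input`, which the adapter would extract instead of the text content.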
2. Neither adapter detects max_tokens truncation
Neither `stop_reason` (Anthropic) nor `finish_reason` (OpenAI) is ever checked, so truncated JSON is treated as complete. Verification responses with tight token budgets (400 tokens) get parsed as garbage.
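The check itself is cheap: Anthropic reports `stop_reason: "max_tokens"` and OpenAI-style APIs report `finish_reason: "length"` when the budget ran out. A sketch (the helper name is an assumption):

```rust
/// Hypothetical guard: a response whose stop reason indicates the token
/// budget was exhausted must not be handed to the JSON parser.
fn is_truncated(stop_reason: &str) -> bool {
    matches!(stop_reason, "max_tokens" | "length")
}

fn main() {
    assert!(is_truncated("max_tokens")); // Anthropic
    assert!(is_truncated("length"));     // OpenAI-style
    assert!(!is_truncated("end_turn"));  // normal completion
}
```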
3. OpenAI Responses API silently drops schema for non-standard endpoints
When the base URL is not the standard OpenAI endpoint, the schema is dropped without warning, so OpenRouter-proxied OpenAI models get no structured output enforcement.
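One way to avoid the silent drop is to gate the Responses API on the endpoint and fall back to Chat Completions `response_format` everywhere else, rather than discarding the schema. A sketch under that assumption (the helper name is hypothetical):

```rust
/// Hypothetical: only the first-party OpenAI endpoint is assumed to support
/// the Responses API; proxies such as OpenRouter take the fallback path so
/// the schema is still sent via Chat Completions `response_format`.
fn supports_responses_api(base_url: &str) -> bool {
    base_url.starts_with("https://api.openai.com")
}

fn main() {
    assert!(supports_responses_api("https://api.openai.com/v1"));
    // A proxied endpoint should fall back, not lose structured output.
    assert!(!supports_responses_api("https://openrouter.ai/api/v1"));
}
```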
4. API error body may leak secrets (common.rs:59-65)
The full response body is embedded in error messages, and some providers echo request details back in their error bodies.
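A defensive fix is to cap the body length and mask anything that looks like a credential before it reaches an error message. A minimal sketch (the marker list and 200-character cap are assumptions, not the acceptance spec):

```rust
/// Hypothetical: sanitize a provider error body before logging it.
/// Truncates to a fixed cap and cuts at the first credential-looking marker.
fn redact_body(body: &str) -> String {
    const MAX_LEN: usize = 200;
    let mut s: String = body.chars().take(MAX_LEN).collect();
    for marker in ["sk-", "Bearer "] {
        if let Some(i) = s.find(marker) {
            s.truncate(i);
            s.push_str("[REDACTED]");
        }
    }
    s
}

fn main() {
    let out = redact_body("bad request: Bearer sk-abc123 rejected");
    assert!(!out.contains("abc123"));
    println!("{out}");
}
```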
5. Linear backoff instead of exponential (common.rs:51-54)
Retries fire at a near-constant rate, which aggravates rate limiting instead of backing off.
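The acceptance item below calls for exponential backoff with jitter; a common shape is "full jitter", i.e. a uniform delay in `[0, min(cap, base * 2^attempt)]`. A std-only sketch (a real implementation would use a proper RNG such as the `rand` crate rather than the clock hack here):

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

/// Hypothetical: exponential backoff with full jitter. `attempt` is 0-based.
fn backoff_delay(attempt: u32, base_ms: u64, cap_ms: u64) -> Duration {
    // Exponential growth, saturating and capped so it never overflows.
    let exp = base_ms.saturating_mul(1u64 << attempt.min(16));
    let capped = exp.min(cap_ms);
    // Full jitter: uniform in [0, capped]. Clock-derived entropy keeps this
    // std-only; swap in a real RNG in production code.
    let nanos = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .subsec_nanos() as u64;
    Duration::from_millis(nanos % (capped + 1))
}

fn main() {
    // Delay is always bounded by the cap, regardless of attempt count.
    assert!(backoff_delay(10, 500, 30_000).as_millis() <= 30_000);
}
```

Full jitter desynchronizes retrying clients, which is exactly what linear backoff fails to do under provider rate limits.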
Acceptance
- Anthropic adapter supports response_schema
- Both adapters check stop_reason for truncation
- Error messages redact the response body
- Exponential backoff with jitter
🤖 Generated with Claude Code