Anthropic adapter ignores response_schema; adapters don't detect truncation #77

@haasonsaas

Description

Found during a deep code review of the LLM adapters.

1. Anthropic adapter ignores response_schema

The OpenAI adapter translates it to response_format, but the Anthropic adapter never reads it, so structured output enforcement is silently skipped for Anthropic judge models.
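One way to close the gap is Anthropic's tool-use mechanism: define a single tool whose input_schema is the response schema and force it via tool_choice, so the model must emit arguments that validate against it. A minimal sketch (the helper name schema_to_tool_fields and hand-built JSON strings are illustrative, not the adapter's actual code; field names follow the Anthropic Messages API):

```rust
/// Sketch: map an already-serialized JSON Schema onto Anthropic's tool-use
/// fields. Returns the `tools` array and `tool_choice` object as JSON strings
/// to splice into the request body. Real code would build these with serde.
fn schema_to_tool_fields(schema_json: &str, tool_name: &str) -> (String, String) {
    let tools = format!(
        r#"[{{"name":"{tool_name}","description":"Return the structured result.","input_schema":{schema_json}}}]"#
    );
    // Forcing the tool means every response arrives as tool-use input
    // matching the schema, rather than free-form text.
    let tool_choice = format!(r#"{{"type":"tool","name":"{tool_name}"}}"#);
    (tools, tool_choice)
}
```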

2. Neither adapter detects max_tokens truncation

stop_reason/finish_reason is never checked, so truncated JSON is treated as complete. Verification responses run with tight token budgets (400 tokens) and get parsed as garbage when cut off.
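The check itself is cheap: both providers report truncation through a single field (Anthropic's stop_reason is "max_tokens", OpenAI's finish_reason is "length"). A minimal sketch of the predicate both adapters could share:

```rust
/// Returns true when the provider reports the completion was cut off by the
/// token budget: Anthropic sets stop_reason = "max_tokens", OpenAI sets
/// finish_reason = "length". Callers should fail (or retry with a larger
/// budget) instead of handing truncated JSON to the parser.
fn is_truncated(stop_reason: Option<&str>) -> bool {
    matches!(stop_reason, Some("max_tokens") | Some("length"))
}
```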

3. OpenAI Responses API silently drops schema for non-standard endpoints

OpenRouter-proxied OpenAI models get no structured output enforcement.

4. API error body may leak secrets (common.rs:59-65)

The full response body is embedded in error messages, and some providers echo request details back in it.
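A sketch of the redaction acceptance item: keep only a short prefix of the body so there is still context for debugging, without echoing the whole payload. The helper name and cutoff are illustrative:

```rust
/// Redact an HTTP error body before embedding it in an error message:
/// keep only the first `keep` characters so status context survives,
/// and note how much was dropped.
fn redact_body(body: &str, keep: usize) -> String {
    let total = body.chars().count();
    if total <= keep {
        body.to_string()
    } else {
        // Operate on chars, not bytes, so multi-byte UTF-8 is never split.
        let prefix: String = body.chars().take(keep).collect();
        format!("{prefix}… [{} chars redacted]", total - keep)
    }
}
```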

5. Linear backoff instead of exponential (common.rs:51-54)

Retry delays grow linearly, so retries keep arriving while the rate limit is still in effect, which aggravates the problem instead of backing off.
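The acceptance item below asks for exponential backoff with jitter. A minimal sketch using "full jitter" (delay drawn uniformly from [0, base * 2^attempt], capped): the function takes the jitter fraction as a parameter so it stays deterministic and testable; a real caller would pass a random value in [0, 1]. Names and defaults are illustrative:

```rust
use std::time::Duration;

/// Exponential backoff with full jitter: the uncapped delay doubles each
/// attempt (base_ms * 2^attempt), is capped at max_ms, then scaled by
/// jitter_frac in [0, 1] supplied by the caller's RNG.
fn backoff_delay(attempt: u32, base_ms: u64, max_ms: u64, jitter_frac: f64) -> Duration {
    // Clamp the shift so 2^attempt cannot overflow u64 on deep retries.
    let exp = base_ms.saturating_mul(1u64 << attempt.min(16));
    let capped = exp.min(max_ms);
    Duration::from_millis((capped as f64 * jitter_frac.clamp(0.0, 1.0)) as u64)
}
```

Full jitter spreads concurrent retries across the whole window, so clients that hit the limit together do not retry in lockstep.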

Acceptance

  • Anthropic adapter supports response_schema
  • Both adapters check stop_reason for truncation
  • Error messages redact response body
  • Exponential backoff with jitter

🤖 Generated with Claude Code


Labels

    area: review-pipeline, bug
