Context
The Claude Agent SDK is Anthropic's official SDK for building agentic applications with Claude Code. It's used internally by stamphog (our PR approval agent) and increasingly by external developers building Claude-powered agents.
PostHog's LLM Analytics already has integrations for Anthropic (direct API), OpenAI, OpenAI Agents, Gemini, and LangChain — but no integration for the Claude Agent SDK.
What
A posthog.ai.claude_agent_sdk integration in the Python SDK with three entry points:
query() — one-shot drop-in replacement for claude_agent_sdk.query()
instrument() — configure-once, reuse across multiple one-shot queries
PostHogClaudeSDKClient — stateful multi-turn conversations with full history
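The configure-once pattern behind instrument() can be sketched generically: a module-level default client is registered once, and every subsequent one-shot query() picks it up without per-call wiring. All names in this sketch are illustrative stand-ins, not the real SDK surface.

```python
# Sketch of the configure-once pattern: instrument() stores a default
# analytics client; query() falls back to it when none is passed.
# Names and signatures here are assumptions for illustration only.

_default_client = None


def instrument(client):
    """Register an analytics client once for all subsequent queries."""
    global _default_client
    _default_client = client


def query(prompt, client=None):
    """One-shot query; uses the instrument()-ed client by default."""
    active = client or _default_client
    if active is not None:
        active.capture("$ai_trace", {"input": prompt})
    return f"response to: {prompt}"


class FakeClient:
    """Stand-in for a PostHog client, recording capture calls."""
    def __init__(self):
        self.events = []

    def capture(self, name, props):
        self.events.append((name, props))


ph = FakeClient()
instrument(ph)
query("hello")  # no client passed — reuses the instrument()-ed one
assert ph.events == [("$ai_trace", {"input": "hello"})]
```

This keeps the drop-in query() signature close to claude_agent_sdk.query() while still letting callers opt out by passing an explicit client.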
Automatically captures:
$ai_generation — one per LLM turn, with tokens, cost, latency, input/output, cache metrics
$ai_span — one per tool use (Read, Grep, Glob, Bash, etc.)
$ai_trace — one per query/session, with aggregate cost and latency
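The relationship between the three event types can be illustrated with mocked-up payloads: each turn emits a $ai_generation, each tool use a $ai_span, and the $ai_trace rolls up cost and latency across the whole query. Property names follow PostHog's $ai_* taxonomy; the values below are invented for illustration.

```python
# Illustrative event shapes — values are made up; property names follow
# PostHog's $ai_* LLM analytics taxonomy.
generations = [
    {"event": "$ai_generation", "properties": {
        "$ai_input_tokens": 1200, "$ai_output_tokens": 350,
        "$ai_latency": 2.1, "$ai_total_cost_usd": 0.0042}},
    {"event": "$ai_generation", "properties": {
        "$ai_input_tokens": 1800, "$ai_output_tokens": 90,
        "$ai_latency": 0.9, "$ai_total_cost_usd": 0.0031}},
]
spans = [
    {"event": "$ai_span", "properties": {"$ai_span_name": "Read"}},
]

# The trace event aggregates over every generation in the session:
trace = {
    "event": "$ai_trace",
    "properties": {
        "$ai_latency": round(sum(
            g["properties"]["$ai_latency"] for g in generations), 6),
        "$ai_total_cost_usd": round(sum(
            g["properties"]["$ai_total_cost_usd"] for g in generations), 6),
    },
}
assert trace["properties"]["$ai_total_cost_usd"] == 0.0073
assert trace["properties"]["$ai_latency"] == 3.0
```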
All instrumentation is try/except wrapped — PostHog errors never interrupt the underlying query.
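The error-isolation rule amounts to routing every capture through a guard that swallows analytics failures. A minimal sketch of that guard (the helper name is hypothetical):

```python
# Sketch of the error-isolation rule: an analytics failure must never
# break the underlying query, so every capture call goes through a
# try/except guard. safe_capture is a hypothetical helper name.
def safe_capture(client, event, properties):
    try:
        client.capture(event, properties)
    except Exception:
        pass  # swallow: analytics errors never interrupt the query


class BrokenClient:
    """Simulates a PostHog client whose transport is failing."""
    def capture(self, event, properties):
        raise RuntimeError("network down")


# No exception propagates to the caller:
safe_capture(BrokenClient(), "$ai_generation", {})
```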
PRs
File structure (posthog-python)
posthog/ai/claude_agent_sdk/
├── __init__.py — public API: query(), instrument(), exports
├── processor.py — PostHogClaudeAgentProcessor, _GenerationTracker, one-shot query()
└── client.py — PostHogClaudeSDKClient, stateful multi-turn
Design notes
The Claude Agent SDK has no TracingProcessor interface (unlike OpenAI Agents SDK). The integration wraps the async streaming iterator, enables include_partial_messages=True to get raw Anthropic StreamEvents, and reconstructs per-turn generation metrics from message_start/message_stop boundaries. A two-slot input tracking approach correctly associates tool results with subsequent generations despite the SDK's message ordering (UserMessage arrives before message_stop).
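The two-slot approach can be shown with a minimal simulation: because a tool-result UserMessage can arrive before the current turn's message_stop, it must be queued as input for the next generation rather than attached to the one still streaming. Event names mirror the stream boundaries described above; the logic is a sketch, not the actual processor code.

```python
# Minimal simulation of two-slot input tracking: tool results that
# arrive before message_stop are queued for the NEXT generation.
# Event names mirror Anthropic stream boundaries; logic is a sketch.
def track(events):
    generations = []
    current_input, next_input = [], []  # the two slots
    for kind, payload in events:
        if kind == "message_start":
            # New turn: promote queued inputs, start a fresh queue.
            current_input, next_input = next_input, []
        elif kind == "user_message":
            # Arrives before message_stop — belongs to the next turn.
            next_input.append(payload)
        elif kind == "message_stop":
            generations.append({"input": list(current_input),
                                "output": payload})
    return generations


events = [
    ("message_start", None),
    ("user_message", "tool_result: file contents"),  # early arrival
    ("message_stop", "turn 1 text"),
    ("message_start", None),
    ("message_stop", "turn 2 text"),
]
gens = track(events)
assert gens[0]["input"] == []  # turn 1: no prior tool result
assert gens[1]["input"] == ["tool_result: file contents"]
```

Without the second slot, the early-arriving tool result would be misattributed to the generation that produced the tool call rather than the one that consumed its output.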
See the llma-claude-agent-sdk skill in the PostHog skills store for full architecture docs.