feat: streaming structured output (chat outputSchema + stream:true)#527
Conversation
- @tanstack/ai: typed StructuredOutputStream<T> with terminal CUSTOM
structured-output.complete event { object, raw, reasoning? }; optional
TextAdapter.structuredOutputStream + activity-layer fallback;
orchestrator hardening (always-finalize, typed RUN_ERROR with
runId/model/timestamp, exactly-one-terminal-pair on tools branch, sync
pre-flight errors, UI->Model message conversion on no-tools path).
- @tanstack/ai-openrouter: native structuredOutputStream via single
stream:true + response_format:json_schema request; always-finalize on
upstream close; empty-response and parse-error surface as typed
RUN_ERROR; in-stream provider errors terminate the run; chain-of-thought
reasoning threaded through the final CUSTOM event.
- E2E: structured-output-stream feature in matrix with happy-path + abort
specs; useChat onCustomEvent/onChunk wiring exposes CUSTOM payload +
delta count to DOM.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the CodeRabbit settings.
📝 Walkthrough
`chat({ outputSchema, stream: true })` now produces an AsyncIterable that emits JSON delta chunks plus a terminal CUSTOM event.
Changes: Streaming Structured Output (single cohesive change DAG)
Sequence Diagram

sequenceDiagram
participant Client
participant ChatRoute as /api/chat
participant ChatFn as chat()
participant Engine as TextEngine
participant Adapter as TextAdapter
participant UI as ChatUI
Client->>ChatRoute: POST structured-stream request
ChatRoute->>ChatFn: chat({ outputSchema, stream: true, ... })
ChatFn->>ChatFn: branch on outputSchema && stream === true
ChatFn->>Engine: convert messages / run agent loop (if tools)
Engine->>Adapter: adapter.structuredOutputStream(request)
loop streaming
Adapter-->>ChatFn: TEXT_MESSAGE_CONTENT chunk
ChatFn-->>ChatRoute: forward chunk via SSE
ChatRoute-->>UI: SSE chunk
UI->>UI: accumulate delta
end
Adapter->>Adapter: parse accumulated JSON & validate schema
Adapter-->>ChatFn: CUSTOM(structured-output.complete { object, raw, reasoning? })
ChatFn-->>ChatRoute: forward CUSTOM event
ChatRoute-->>UI: SSE CUSTOM event
UI->>UI: display structured object
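The UI side of the diagram reduces to a small accumulator. A minimal sketch, assuming chunk shapes inferred from the PR summary (these are not the real `@tanstack/ai` types):

```typescript
// Hypothetical chunk shapes inferred from the PR summary, not the real types.
type StreamChunk =
  | { type: 'TEXT_MESSAGE_CONTENT'; delta: string }
  | {
      type: 'CUSTOM'
      name: 'structured-output.complete'
      value: { object: unknown; raw: string }
    }

// Accumulate JSON deltas; prefer the adapter-validated object from the
// terminal CUSTOM event when it arrives.
function accumulate(chunks: Array<StreamChunk>): unknown {
  let raw = ''
  for (const chunk of chunks) {
    if (chunk.type === 'TEXT_MESSAGE_CONTENT') raw += chunk.delta
    else if (chunk.type === 'CUSTOM') return chunk.value.object
  }
  // No terminal event observed: fall back to parsing what we accumulated.
  return JSON.parse(raw)
}
```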
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks | ✅ 4 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
🚀 Changeset Version Preview: 5 package(s) bumped directly, 28 bumped as dependents.
🟥 Major bumps
🟨 Minor bumps
🟩 Patch bumps
View your CI Pipeline Execution ↗ for commit 9312770
@tanstack/ai
@tanstack/ai-anthropic
@tanstack/ai-client
@tanstack/ai-code-mode
@tanstack/ai-code-mode-skills
@tanstack/ai-devtools-core
@tanstack/ai-elevenlabs
@tanstack/ai-event-client
@tanstack/ai-fal
@tanstack/ai-gemini
@tanstack/ai-grok
@tanstack/ai-groq
@tanstack/ai-isolate-cloudflare
@tanstack/ai-isolate-node
@tanstack/ai-isolate-quickjs
@tanstack/ai-ollama
@tanstack/ai-openai
@tanstack/ai-openrouter
@tanstack/ai-preact
@tanstack/ai-react
@tanstack/ai-react-ui
@tanstack/ai-solid
@tanstack/ai-solid-ui
@tanstack/ai-svelte
@tanstack/ai-vue
@tanstack/ai-vue-ui
@tanstack/preact-ai-devtools
@tanstack/react-ai-devtools
@tanstack/solid-ai-devtools
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
.changeset/streaming-structured-output-openrouter.md (1)
1-6: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Missing changeset entry for `@tanstack/ai`.
The changeset only bumps `@tanstack/ai-openrouter`, but according to the PR summary and the AI-generated diff summary, `@tanstack/ai` also ships public API additions in this PR:
- `StructuredOutputCompleteEvent` / `StructuredOutputStream` types added to `packages/typescript/ai/src/types.ts`
- New `outputSchema && stream === true` path in `packages/typescript/ai/src/activities/chat/index.ts`
- `BaseTextAdapter` behaviour change (default `structuredOutputStream` removed)

Without a corresponding entry, `@tanstack/ai` won't receive a version bump when the changeset is consumed.
📦 Suggested addition to the changeset

```diff
 ---
+'@tanstack/ai': minor
 '@tanstack/ai-openrouter': minor
 ---
```

As per coding guidelines: `.changeset/**/*.md` — create a changeset with `pnpm changeset` before making changes for release management.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In @.changeset/streaming-structured-output-openrouter.md around lines 1 - 6, Add a new changeset markdown entry that also bumps `@tanstack/ai` (in addition to `@tanstack/ai-openrouter`) and documents the public API additions: include the new types StructuredOutputCompleteEvent and StructuredOutputStream (from packages/typescript/ai/src/types.ts), the new outputSchema && stream === true chat path (packages/typescript/ai/src/activities/chat/index.ts), and the BaseTextAdapter behavior change (removal/default change of structuredOutputStream); ensure the changeset message explains these API additions/behavior change so `@tanstack/ai` is versioned when the changeset is released.
.changeset/streaming-structured-output-chat.md (1)
1-6: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win
`@tanstack/ai-openrouter` is missing from the changeset packages block.
`OpenRouterTextAdapter.structuredOutputStream` is a new public method in `packages/typescript/ai-openrouter` — a user-visible feature addition. Without a corresponding entry in the changeset, the package version won't be bumped on release.
📦 Suggested fix

```diff
 ---
 '@tanstack/ai': minor
+'@tanstack/ai-openrouter': minor
 ---
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In @.changeset/streaming-structured-output-chat.md around lines 1 - 6, The changeset is missing a package entry for the new public method OpenRouterTextAdapter.structuredOutputStream, so add the package name for the ai-openrouter adapter to the changeset packages block and mark it as a minor release (public API addition); update the same changeset that introduced streaming-structured-output-chat to include '@tanstack/ai-openrouter': minor so the package version is bumped on release and consumers get the new structuredOutputStream method.
🧹 Nitpick comments (2)
packages/typescript/ai-openrouter/src/adapters/text.ts (1)
266-530: 🏗️ Heavy lift
`structuredOutputStream` duplicates ~150 lines of stream lifecycle boilerplate from `chatStream`.
The AGUIState initialization (lines 272–291), the per-chunk RUN_STARTED emission + inline error dispatch + `processChoice` loop (lines 324–376), and the catch block structure (lines 490–529) are near-verbatim copies of the equivalent blocks in `chatStream`. Any future change to stream lifecycle handling (e.g., abort signal threading, new event types, logging conventions) must be applied to both methods independently, increasing divergence risk.
Consider extracting a shared `runStreamLoop(stream, aguiState, accumulators, onFinalize)` helper that takes the finalization callback as a parameter — the only materially different code between the two methods.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/typescript/ai-openrouter/src/adapters/text.ts` around lines 266 - 530, structuredOutputStream duplicates ~150 lines of stream lifecycle boilerplate from chatStream (AGUIState init, per-chunk RUN_STARTED/error handling, processChoice loop, and catch block); extract this shared logic into a new helper (e.g., runStreamLoop) that accepts the stream iterator, the aguiState, accumulators (accumulatedReasoning, accumulatedContent, toolCallBuffers), and a finalization callback (onFinalize) to run the method-specific wrap-up (JSON parsing, structured-output CUSTOM emission, and RUN_FINISHED); update structuredOutputStream to build its AGUIState and accumulators, call runStreamLoop(stream, aguiState, {accumulatedReasoning, accumulatedContent, toolCallBuffers}, onFinalize) and move processChoice usage, per-chunk yields, and catch handling into the shared helper so both structuredOutputStream and chatStream reuse the same lifecycle code.
packages/typescript/ai/src/activities/chat/index.ts (1)
1717-1722: 💤 Low value
mock-ID prefixes leak into real adapter fallback runs.
`fallbackStructuredOutputStream` is the production path for any adapter without a native `structuredOutputStream`, but the synthesized `runId`/`threadId`/`messageId` use a `mock-` prefix. That looks like test scaffolding in observability/devtools output and is inconsistent with `createId('run' | 'thread' | 'msg')` used elsewhere in this engine.
♻️ Use neutral prefixes

```diff
-const runId = chatOptions.runId ?? `mock-${Date.now()}`
-const threadId = chatOptions.threadId ?? `mock-${Date.now()}`
-const messageId = `mock-${Date.now()}-${Math.random().toString(36).slice(2)}`
+const rand = () => Math.random().toString(36).slice(2, 9)
+const runId = chatOptions.runId ?? `run-${Date.now()}-${rand()}`
+const threadId = chatOptions.threadId ?? `thread-${Date.now()}-${rand()}`
+const messageId = `msg-${Date.now()}-${rand()}`
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/typescript/ai/src/activities/chat/index.ts` around lines 1717 - 1722, The current fallback ID generation in chat (variables chatOptions, runId, threadId, messageId) uses "mock-" prefixes which leak into production telemetry; replace the mock-prefixed defaults with the same neutral ID generation used elsewhere (e.g., call the shared createId helper with 'run' | 'thread' | 'msg' or use the existing engine's neutral prefix strategy) so runId/threadId/messageId are generated consistently when chatOptions doesn't supply them; update the fallback logic in the code that reads chatOptions.model/timestamp to use createId for runId and threadId and a non-mock random message id for messageId.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In @.changeset/streaming-structured-output-chat.md:
- Line 5: The changelog line incorrectly states that BaseTextAdapter provides a
default structuredOutputStream; update the description to say BaseTextAdapter
does not implement structuredOutputStream and that the activity layer's
fallbackStructuredOutputStream is the single source of truth for non-streaming
adapters. Reference BaseTextAdapter and structuredOutputStream in
packages/typescript/ai/src/activities/chat/adapter.ts and mention
fallbackStructuredOutputStream in the activity layer as the fallback, and advise
adapter authors to implement structuredOutputStream if they need native
streaming behavior.
In `@packages/typescript/ai-openrouter/src/adapters/text.ts`:
- Around line 378-440: The finalization path for empty non-exception streams
fails to emit RUN_STARTED if no chunks arrived; update the end-of-stream logic
in the generator (the block after the streaming loop that currently emits
RUN_ERROR for empty content) to check hasEmittedRunStarted and, if false, set
hasEmittedRunStarted = true and yield the same RUN_STARTED chunk that the normal
streaming path emits (include runId from aguiState.runId, model resolvedModel,
and timestamp) before yielding RUN_ERROR; this mirrors the catch block behavior
and ensures the AG-UI lifecycle (hasEmittedRunStarted, RUN_STARTED, then
RUN_ERROR) is preserved.
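The lifecycle invariant this comment enforces can be sketched in isolation. Names like `hasEmittedRunStarted` and `resolvedModel` come from the comment above; the chunk shape is an assumption, not the real AG-UI type:

```typescript
// Sketch: an empty stream must still yield RUN_STARTED before RUN_ERROR so
// consumers always observe a well-formed AG-UI lifecycle pair.
type LifecycleChunk = {
  type: 'RUN_STARTED' | 'RUN_ERROR'
  runId: string
  model?: string
  timestamp: number
}

function finalizeEmptyStream(state: {
  hasEmittedRunStarted: boolean
  runId: string
  resolvedModel: string
}): Array<LifecycleChunk> {
  const out: Array<LifecycleChunk> = []
  if (!state.hasEmittedRunStarted) {
    state.hasEmittedRunStarted = true
    out.push({
      type: 'RUN_STARTED',
      runId: state.runId,
      model: state.resolvedModel,
      timestamp: Date.now(),
    })
  }
  out.push({ type: 'RUN_ERROR', runId: state.runId, timestamp: Date.now() })
  return out
}
```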
In `@packages/typescript/ai/src/activities/chat/index.ts`:
- Around line 1860-1886: The loop over engine.run() can yield a RUN_ERROR chunk
without throwing, but the code always proceeds to call engine.getMessages() and
adapter.structuredOutputStream(...); fix by detecting an error-yield and
short-circuiting: inside the for-await loop in runStreamingStructuredOutput (the
block that calls engine.run()), when you yield a chunk of type 'RUN_ERROR' set a
local flag (e.g., sawRunError) or reuse earlyTermination set by
handleRunErrorEvent, then after the loop but before calling finalMessages =
engine.getMessages() and adapter.structuredOutputStream(...), check that flag
and return early (or skip the structured-output call) so no structured-output
flow is started after an error. Ensure the check references the same symbols:
engine.run(), handleRunErrorEvent, finalMessages, and
adapter.structuredOutputStream.
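The suggested short-circuit is easy to see in a reduced form. The real engine path is async; a synchronous generator keeps the sketch minimal, and `sawRunError` is the hypothetical flag named in the comment:

```typescript
// Sketch: forward engine chunks, but never start structured output once a
// RUN_ERROR has been yielded. (The real code uses async iteration.)
function* runThenStructuredOutput(
  run: Iterable<{ type: string }>,
  startStructuredOutput: () => Iterable<{ type: string }>,
): Generator<{ type: string }> {
  let sawRunError = false
  for (const chunk of run) {
    if (chunk.type === 'RUN_ERROR') sawRunError = true
    yield chunk
  }
  if (sawRunError) return // error already surfaced; skip the structured-output call
  yield* startStructuredOutput()
}
```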
In `@testing/e2e/README.md`:
- Line 126: Update the documented prefix list to include the undocumented prefix
used by the abort fixture: add "[structured-stream-abort]" to the "Existing
prefixes" line so it reflects the actual usage in
fixtures/structured-output-stream/abort.json; ensure the exact token
"[structured-stream-abort]" is added alongside "[structured-stream]" to prevent
future collisions.
---
Outside diff comments:
In @.changeset/streaming-structured-output-chat.md:
- Around line 1-6: The changeset is missing a package entry for the new public
method OpenRouterTextAdapter.structuredOutputStream, so add the package name for
the ai-openrouter adapter to the changeset packages block and mark it as a minor
release (public API addition); update the same changeset that introduced
streaming-structured-output-chat to include '@tanstack/ai-openrouter': minor so
the package version is bumped on release and consumers get the new
structuredOutputStream method.
In @.changeset/streaming-structured-output-openrouter.md:
- Around line 1-6: Add a new changeset markdown entry that also bumps
`@tanstack/ai` (in addition to `@tanstack/ai-openrouter`) and documents the public
API additions: include the new types StructuredOutputCompleteEvent and
StructuredOutputStream (from packages/typescript/ai/src/types.ts), the new
outputSchema && stream === true chat path
(packages/typescript/ai/src/activities/chat/index.ts), and the BaseTextAdapter
behavior change (removal/default change of structuredOutputStream); ensure the
changeset message explains these API additions/behavior change so `@tanstack/ai`
is versioned when the changeset is released.
---
Nitpick comments:
In `@packages/typescript/ai-openrouter/src/adapters/text.ts`:
- Around line 266-530: structuredOutputStream duplicates ~150 lines of stream
lifecycle boilerplate from chatStream (AGUIState init, per-chunk
RUN_STARTED/error handling, processChoice loop, and catch block); extract this
shared logic into a new helper (e.g., runStreamLoop) that accepts the stream
iterator, the aguiState, accumulators (accumulatedReasoning, accumulatedContent,
toolCallBuffers), and a finalization callback (onFinalize) to run the
method-specific wrap-up (JSON parsing, structured-output CUSTOM emission, and
RUN_FINISHED); update structuredOutputStream to build its AGUIState and
accumulators, call runStreamLoop(stream, aguiState, {accumulatedReasoning,
accumulatedContent, toolCallBuffers}, onFinalize) and move processChoice usage,
per-chunk yields, and catch handling into the shared helper so both
structuredOutputStream and chatStream reuse the same lifecycle code.
In `@packages/typescript/ai/src/activities/chat/index.ts`:
- Around line 1717-1722: The current fallback ID generation in chat (variables
chatOptions, runId, threadId, messageId) uses "mock-" prefixes which leak into
production telemetry; replace the mock-prefixed defaults with the same neutral
ID generation used elsewhere (e.g., call the shared createId helper with 'run' |
'thread' | 'msg' or use the existing engine's neutral prefix strategy) so
runId/threadId/messageId are generated consistently when chatOptions doesn't
supply them; update the fallback logic in the code that reads
chatOptions.model/timestamp to use createId for runId and threadId and a
non-mock random message id for messageId.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a66ef4a2-3824-4fbc-ad8f-03cfd2e1f307
📒 Files selected for processing (17)
- .changeset/streaming-structured-output-chat.md
- .changeset/streaming-structured-output-openrouter.md
- packages/typescript/ai-openrouter/src/adapters/text.ts
- packages/typescript/ai-openrouter/tests/openrouter-adapter.test.ts
- packages/typescript/ai/src/activities/chat/adapter.ts
- packages/typescript/ai/src/activities/chat/index.ts
- packages/typescript/ai/src/types.ts
- testing/e2e/README.md
- testing/e2e/fixtures/structured-output-stream/abort.json
- testing/e2e/fixtures/structured-output-stream/basic.json
- testing/e2e/src/components/ChatUI.tsx
- testing/e2e/src/lib/feature-support.ts
- testing/e2e/src/lib/features.ts
- testing/e2e/src/lib/types.ts
- testing/e2e/src/routes/$provider/$feature.tsx
- testing/e2e/src/routes/api.chat.ts
- testing/e2e/tests/structured-output-stream.spec.ts
Adds `structuredOutputStream` to `@tanstack/ai-openai`,
`@tanstack/ai-grok`, and `@tanstack/ai-groq`, mirroring the openrouter
reference: a single request with `stream: true` +
`response_format: json_schema` (Chat Completions for grok/groq) or
`text.format: json_schema` (Responses API for openai), no tools, raw
JSON deltas as `TEXT_MESSAGE_CONTENT` plus a terminal `CUSTOM`
`structured-output.complete` event with `{ object, raw }`.
- Always-finalize on upstream close so truncated streams never hang
consumers
- Typed `RUN_ERROR` paths: `empty-response`, `parse-error`, `aborted`,
plus mid-stream provider errors (terminal — no `RUN_FINISHED` after)
- `transformNullsToUndefined` applied on parse for parity with the
non-streaming `structuredOutput`
- E2E feature-support matrix: openai/grok/groq join openrouter for
`structured-output-stream`; the existing parameterized spec now runs
against all four
- ts-react-chat example: `api.structured-output.ts` and the matching
page gain a provider selector (openai/grok/groq/openrouter) and a
Stream toggle that consumes SSE, renders deltas live, and snaps to
the parsed object on the terminal CUSTOM event
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@examples/ts-react-chat/src/routes/api.structured-output.ts`:
- Around line 76-80: The non-streaming branch calls chat(...) but doesn't pass
the HTTP request abort signal, so provider work continues after client
disconnects; modify the call to propagate the request's abort signal (e.g., pass
req.signal or a derived AbortSignal) into chat and/or the adapter: update the
chat invocation in the non-streaming path to include signal: req.signal (or
create an AbortController that ties to req.signal and pass that) and ensure
adapterFor(...) or the adapter returned accepts/forwards that signal so provider
requests are cancelled when the client disconnects.
- Around line 30-43: The POST body should be parsed and validated inside a
try/catch and you must validate the provider before calling adapterFor to return
a 400 for bad input; move the request.json() call into the try block, validate
required fields (e.g., provider and optional model types), and if provider is
missing or not one of the expected values return a 400 JSON error instead of
proceeding. Also harden adapterFor by adding a default/else branch (or throw)
when provider is unknown so it cannot return undefined (refer to adapterFor and
its switch cases like 'openai'/'grok'/'groq'/'openrouter'); ensure any casted
model strings are validated or fall back to safe defaults only after input
validation.
In `@packages/typescript/ai-grok/src/adapters/text.ts`:
- Around line 447-450: The logger currently sends the raw SDK error
(logger.errors('grok.structuredOutputStream fatal', { error, ... })) which may
contain sensitive request/auth metadata; replace the raw error with a
sanitized/normalized error object like the one used in
packages/typescript/ai-openai/src/adapters/text.ts (e.g., call the same
sanitize/normalizeOpenAIError helper from that module or replicate its behavior)
and log only the sanitizedError (and minimal context fields) instead of the raw
SDK error so request-level details are not leaked.
In `@packages/typescript/ai-openai/src/adapters/text.ts`:
- Around line 312-424: The loop currently only handles
'response.output_text.delta' and drops 'response.reasoning_text.delta' and
'response.reasoning_summary_text.delta', so add handling for those chunk.type
values: create a separate accumulator (e.g., reasoningAccumulatedContent) and
flags (similar to hasEmittedTextMessageStart) to collect and emit reasoning
deltas via yield asChunk (use types analogous to TEXT_MESSAGE_START/CONTENT or a
clear reasoning event), append deltas when chunk.delta is string|array, and
ensure when the run completes ('response.completed' / final structured-output
emission) you attach the accumulated reasoning to the structured-output
completion payload (structured-output.complete.value.reasoning) so
schema-failure post-mortems receive the model's reasoning; use existing symbols
runId, messageId, model, timestamp, asChunk, accumulatedContent for locating
where to add this logic.
In `@testing/e2e/src/lib/feature-support.ts`:
- Around line 83-85: Update the comment to remove the stale reference to
BaseTextAdapter and instead mention the activity-layer
fallbackStructuredOutputStream as the fallback handler; edit the block that
currently references "BaseTextAdapter implementation" so it reads that other
providers fall back to the activity-layer fallbackStructuredOutputStream (or
similar phrasing) and keep the rest of the comment about providers with native
streaming JSON schema support intact to avoid confusion during matrix
maintenance.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d0a46433-8d08-486d-aaac-9c40b3c58f5d
📒 Files selected for processing (12)
- .changeset/streaming-structured-output-grok.md
- .changeset/streaming-structured-output-groq.md
- .changeset/streaming-structured-output-openai.md
- examples/ts-react-chat/src/routes/api.structured-output.ts
- examples/ts-react-chat/src/routes/generations.structured-output.tsx
- packages/typescript/ai-grok/src/adapters/text.ts
- packages/typescript/ai-grok/tests/grok-adapter.test.ts
- packages/typescript/ai-groq/src/adapters/text.ts
- packages/typescript/ai-groq/tests/groq-adapter.test.ts
- packages/typescript/ai-openai/src/adapters/text.ts
- packages/typescript/ai-openai/tests/openai-adapter.test.ts
- testing/e2e/src/lib/feature-support.ts
✅ Files skipped from review due to trivial changes (3)
- .changeset/streaming-structured-output-openai.md
- .changeset/streaming-structured-output-grok.md
- .changeset/streaming-structured-output-groq.md
Surfaces chain-of-thought as REASONING_MESSAGE_CONTENT during `structuredOutputStream` for the three OpenAI-compatible adapters, matching openrouter's existing behavior. Each provider exposes reasoning differently and none are typed by the upstream SDKs:
- openai: consumes `response.reasoning_text.delta` and `response.reasoning_summary_text.delta` Responses API events
- grok (xAI): reads `delta.reasoning_content` (DeepSeek convention) on Chat Completions deltas
- groq: reads `delta.reasoning` (mirroring its `message.reasoning` on completed responses) on Chat Completions deltas

In all three, reasoning lifecycle is closed cleanly before TEXT_MESSAGE_START so consumers see the contractual transition. Accumulated reasoning is also surfaced on the terminal `CUSTOM` `structured-output.complete` event's `value.reasoning` field.

Tests: 4 new cases covering reasoning surfacing + omission across grok and groq. 61 grok, 19 groq, 137 openai tests pass.

ts-react-chat structured-output example:
- model lists refreshed to latest per provider
- progressive UI rendering via `parsePartialJSON` so cards/fields fill in as JSON streams
- live "Thinking" strip rendering the latest reasoning sentence
- per-provider reasoning opt-ins wired through modelOptions so models actually emit reasoning deltas (openai reasoning.summary: 'auto', groq reasoning_format: 'parsed', openrouter reasoning.effort)
- debug: true on chat() calls for inspection of provider events

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
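The per-provider field reads described above can be sketched as a small helper. The field names follow the commit message; the untyped delta shape is an assumption, since the upstream SDKs don't type these fields:

```typescript
// grok (xAI) follows the DeepSeek convention (delta.reasoning_content);
// groq mirrors its message.reasoning field (delta.reasoning).
function extractReasoningDelta(
  provider: 'grok' | 'groq',
  delta: Record<string, unknown>,
): string | undefined {
  const field = provider === 'grok' ? 'reasoning_content' : 'reasoning'
  const value = delta[field]
  // Only surface string deltas; anything else is treated as absent.
  return typeof value === 'string' ? value : undefined
}
```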
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@examples/ts-react-chat/src/routes/generations.structured-output.tsx`:
- Around line 281-300: The labels "Provider", "Model" and "Prompt" are not
linked to their form controls; add id attributes to the corresponding <select>
and <textarea> elements (e.g., "provider-select", "model-select",
"prompt-textarea") and set the matching htmlFor on each <label> so the label
text is associated with the control (affects the <select value={provider}
onChange={onProviderChange} disabled={isLoading}>, the <select value={model}
onChange={e => setModel(e.target.value} disabled={isLoading}>, and the prompt
<textarea>), ensuring unique ids and keeping existing props like isLoading
intact.
- Around line 222-229: The handler updates reasoningFull when a streamed chunk
contains a reasoning string (setReasoningFull on chunk.value) but never
recomputes or updates the one-line summary reasoningLine, so the “Thinking”
strip can remain blank or stale; after calling setReasoningFull((chunk.value as
{ reasoning: string }).reasoning) also derive and call setReasoningLine with a
compact one-line version (e.g., trim and collapse whitespace or take the first
sentence) so that both reasoningFull and reasoningLine stay consistent when
reasoning is sent only in the final structured-output.complete or when the final
chunk extends earlier reasoning.
- Around line 170-235: The reader loop currently exits on EOF without failing if
the canonical completion signal ("structured-output.complete") never arrived;
introduce a local boolean flag (e.g. sawFinalResult = false) and set it to true
inside the branch that handles chunk.type === 'CUSTOM' && chunk.name ===
'structured-output.complete' (also still call setHasFinalResult(true)). After
the outer while (after you break on done from reader.read()), check if
(!sawFinalResult) and throw a descriptive Error (e.g. 'Stream ended before
structured-output.complete arrived') so the run fails on EOF when the final
structured-output payload was not observed.
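The EOF guard described above can be reduced to a small sketch. The chunk shape is an assumption, and `sawFinalResult` is the hypothetical flag the comment names; only the `structured-output.complete` event name comes from the PR:

```typescript
// Sketch: fail on EOF when the terminal structured-output.complete event
// never arrived during the read loop.
type UiChunk = { type: string; name?: string; value?: unknown }

function collectFinalObject(chunks: Iterable<UiChunk>): unknown {
  let sawFinalResult = false
  let finalObject: unknown
  for (const chunk of chunks) {
    // Stands in for the reader.read() loop in the real component.
    if (chunk.type === 'CUSTOM' && chunk.name === 'structured-output.complete') {
      sawFinalResult = true
      finalObject = chunk.value
    }
  }
  if (!sawFinalResult) {
    throw new Error('Stream ended before structured-output.complete arrived')
  }
  return finalObject
}
```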
- Around line 163-174: The stream decoding loop uses a TextDecoder with {stream:
true} but never flushes remaining buffered bytes when the reader loop exits;
after the while(true) loop that reads from response.body!.getReader() and
appends decoder.decode(value, {stream: true}) into buffer, call decoder.decode()
once more (without the stream flag) and append its result to buffer (or
otherwise process it) before continuing with accumulated/reasoning/deltas
handling so any partial UTF-8 sequences are correctly finalized.
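The decoder-flush issue is easy to reproduce in isolation: a multi-byte UTF-8 character split across reads only survives when every streamed decode is followed by a final flush, which is exactly what the comment asks for:

```typescript
// 'é' encodes to two bytes (0xc3 0xa9); split the input so a read boundary
// falls inside the character.
const bytes = new TextEncoder().encode('café')
const decoder = new TextDecoder()

let buffer = ''
buffer += decoder.decode(bytes.slice(0, 4), { stream: true }) // 'caf'; 0xc3 stays buffered
buffer += decoder.decode(bytes.slice(4), { stream: true }) // completes 'é'
buffer += decoder.decode() // final flush with no arguments, per the comment above
```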
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 49bf45f2-6618-4113-8aea-d644b0809d7c
📒 Files selected for processing (7)
- examples/ts-react-chat/src/routes/api.structured-output.ts
- examples/ts-react-chat/src/routes/generations.structured-output.tsx
- packages/typescript/ai-grok/src/adapters/text.ts
- packages/typescript/ai-grok/tests/grok-adapter.test.ts
- packages/typescript/ai-groq/src/adapters/text.ts
- packages/typescript/ai-groq/tests/groq-adapter.test.ts
- packages/typescript/ai-openai/src/adapters/text.ts
🚧 Files skipped from review as they are similar to previous changes (3)
- examples/ts-react-chat/src/routes/api.structured-output.ts
- packages/typescript/ai-groq/src/adapters/text.ts
- packages/typescript/ai-groq/tests/groq-adapter.test.ts
ts-react-chat /generations/structured-output now opens with the OpenAI frontier model (gpt-5.2) preselected in both the Provider and Model dropdowns instead of OpenRouter.

Also removes `debug: true` from the chat() calls — it was only there to inspect provider events while diagnosing reasoning visibility on gpt-5.2-pro.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
♻️ Duplicate comments (6)
examples/ts-react-chat/src/routes/api.structured-output.ts (2)
97-105: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Validate the POST body before casting it.
`request.json()` runs outside the `try`, and the unchecked cast lets malformed JSON or an unsupported `provider` fall through as a 500. Parse the body inside the `try` with a Zod schema and return 400 on invalid input instead of letting `adapterFor()` receive unchecked data.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@examples/ts-react-chat/src/routes/api.structured-output.ts` around lines 97 - 105, Move parsing of the POST body into the try block and validate it with a Zod schema before casting so malformed JSON or invalid provider values return a 400 instead of reaching adapterFor(); specifically, define a Zod schema for { prompt: string, provider?: Provider, model?: string, stream?: boolean }, call await request.json() inside the try, parse/validate with schema.parse or safeParse, and if validation fails return a 400 response; then compute resolvedProvider (fallback to 'openrouter') only from validated data and pass that safe value to adapterFor().
127-132: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Propagate disconnect aborts through the non-streaming `chat()` call too.
Only the streaming branch cancels provider work when `request.signal` aborts. The non-streaming call will keep running after the client is gone, burning tokens and tying up server capacity.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@examples/ts-react-chat/src/routes/api.structured-output.ts` around lines 127 - 132, The non-streaming chat call doesn't receive the client's abort signal, so work continues after the client disconnects; update the chat invocation to pass through the abort signal (e.g., include signal: request.signal) so that chat(...) (and downstream adapterFor/resolvedProvider work) can cancel when the request is aborted; ensure the chat call's options include the signal property alongside adapter, modelOptions, messages, and outputSchema.
examples/ts-react-chat/src/routes/generations.structured-output.tsx (4)
279-299: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Associate the visible labels with their controls.
`Provider`, `Model`, and `Prompt` are rendered as standalone labels, so the selects and textarea lose explicit accessible names and label-click focus.
Also applies to: 326-334
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@examples/ts-react-chat/src/routes/generations.structured-output.tsx` around lines 279 - 299, The labels "Provider", "Model" and "Prompt" are not associated with their form controls; add explicit associations by giving each select/textarea a unique id (e.g., providerSelect, modelSelect, promptTextarea) and set the corresponding label's htmlFor to that id (or wrap the control inside the label). Update the JSX around the provider select (value={provider}, onChange={onProviderChange}), the model select (value={model}, onChange={setModel}), and the prompt textarea to use those ids so clicking a label focuses the correct control and improves accessibility.
220-227: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Keep `reasoningLine` in sync with terminal reasoning.

This branch overwrites `reasoningFull` but leaves the one-line strip stale. If reasoning only arrives on `structured-output.complete`, the “Thinking” summary stays blank or outdated.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@examples/ts-react-chat/src/routes/generations.structured-output.tsx` around lines 220 - 227, When you set reasoningFull from the chunk (inside the branch that checks (chunk.value as { reasoning?: string }).reasoning), also update reasoningLine so it stays in sync: assign reasoningFull via setReasoningFull((chunk.value as { reasoning: string }).reasoning) and immediately call setReasoningLine with a one-line/trimmed version (e.g., first line or trimmed slice) of the same value; update the same logic path that handles structured-output.complete so both state variables (reasoningFull and reasoningLine) are always updated from the same source.
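The one-line derivation this fix calls for can be sketched with a small helper (the name `toReasoningLine` is hypothetical), so both state variables are always computed from the same source string:

```typescript
// Derive the single-line "Thinking" strip from the full reasoning text,
// trimming and truncating so the summary never goes stale or overflows.
function toReasoningLine(full: string, maxLen = 120): string {
  const firstLine = (full.split('\n')[0] ?? '').trim()
  return firstLine.length > maxLen ? firstLine.slice(0, maxLen) + '…' : firstLine
}
```

On the `structured-output.complete` branch, `setReasoningFull(reasoning)` and `setReasoningLine(toReasoningLine(reasoning))` would then run side by side.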
168-233: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Fail the run if EOF arrives before `structured-output.complete`.

The loop currently exits successfully on EOF even when only partial deltas were received. If the connection drops before the terminal event, this page leaves a partial object rendered with no error.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@examples/ts-react-chat/src/routes/generations.structured-output.tsx` around lines 168 - 233, The reader loop currently breaks on EOF even if the terminal structured-output.complete was never received; add a local boolean (e.g., receivedFinalResult) initialized false, set it true in the branch where you call setResult(...) and setHasFinalResult(true) for the CUSTOM/structured-output.complete chunk (the same place that currently setsHasFinalResult), and change the EOF handling (when done is true) to throw an Error (or reject) if receivedFinalResult is still false (e.g., throw new Error('Stream ended before structured-output.complete')). This ensures the run fails on premature EOF while preserving existing state updates.
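A minimal sketch of the guarded read loop the prompt describes. The frame shape and function names are assumptions for illustration, not the example's actual code:

```typescript
type Frame =
  | { type: 'delta'; text: string }
  | { type: 'CUSTOM'; name: string }

// Consume frames until EOF; succeed only if the terminal
// structured-output.complete event was actually seen.
async function consume(frames: AsyncIterable<Frame>): Promise<number> {
  let deltas = 0
  let receivedFinalResult = false
  for await (const frame of frames) {
    if (frame.type === 'delta') deltas++
    if (frame.type === 'CUSTOM' && frame.name === 'structured-output.complete') {
      receivedFinalResult = true
    }
  }
  // EOF reached: fail instead of silently rendering a partial object.
  if (!receivedFinalResult) {
    throw new Error('Stream ended before structured-output.complete')
  }
  return deltas
}
```

The flag is set in the same branch that would call `setResult(...)`, so a premature disconnect surfaces as an error rather than a half-filled UI.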
168-233: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Flush the `TextDecoder` after the read loop.

`decoder.decode(value, { stream: true })` can buffer trailing UTF-8 bytes. Without a final `decoder.decode()`, the last SSE frame can be truncated or dropped when a multibyte character spans reads.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@examples/ts-react-chat/src/routes/generations.structured-output.tsx` around lines 168 - 233, The TextDecoder may have buffered trailing UTF-8 bytes; after the read loop that uses decoder.decode(value, { stream: true }) (the while (true) { const { done, value } = await reader.read() ... } block) call decoder.decode() with no arguments to flush remaining bytes and append the result to buffer before continuing to parse frames, then run the same frame-parsing logic on the updated buffer; update references to buffer, decoder, and the reader loop/stream parsing code so the final partial multibyte characters are not dropped.
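The flush pattern, sketched outside the component (the helper name is hypothetical; the point is the final argument-less `decode()` call):

```typescript
// Decode a byte stream chunk-by-chunk; the trailing argument-less decode()
// flushes any partial UTF-8 sequence still buffered by the decoder.
function decodeChunks(chunks: Array<Uint8Array>): string {
  const decoder = new TextDecoder()
  let buffer = ''
  for (const chunk of chunks) {
    buffer += decoder.decode(chunk, { stream: true })
  }
  buffer += decoder.decode() // flush — without this, trailing bytes can be dropped
  return buffer
}
```

In the reader loop this means one extra `buffer += decoder.decode()` after `done` becomes true, followed by one last pass of the frame-parsing logic over the updated buffer.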
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 5087f1ff-61cd-4360-9d0e-51d3117b3ba0
📒 Files selected for processing (2)
- examples/ts-react-chat/src/routes/api.structured-output.ts
- examples/ts-react-chat/src/routes/generations.structured-output.tsx
`runStreamingStructuredOutputImpl` never derived `request: { signal }`
from `abortController`, so aborting the SSE response didn't cancel
the upstream provider request — the terminal `structured-output.complete`
event still got yielded after stop. Mirror `TextEngine` and forward
the signal so adapters' underlying network calls actually abort.
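The forwarding the commit describes can be sketched as follows; the names (`buildProviderRequest`, `abortableWork`) are hypothetical stand-ins, not the actual orchestrator code:

```typescript
// The run owns one AbortController; every upstream provider request receives
// its signal, so aborting the SSE response cancels the in-flight network call.
interface ProviderRequest {
  body: unknown
  signal: AbortSignal // forwarded, not re-created, so a single abort() reaches everything
}

function buildProviderRequest(body: unknown, abortController: AbortController): ProviderRequest {
  return { body, signal: abortController.signal }
}

// An abortable wait, standing in for the provider's network call.
function abortableWork(ms: number, signal: AbortSignal): Promise<string> {
  return new Promise((resolve, reject) => {
    if (signal.aborted) return reject(new Error('aborted'))
    const t = setTimeout(() => resolve('structured-output.complete'), ms)
    signal.addEventListener(
      'abort',
      () => {
        clearTimeout(t)
        reject(new Error('aborted'))
      },
      { once: true },
    )
  })
}
```

Without the forwarded `signal`, the work promise settles normally and the terminal event still fires after stop — exactly the bug described above.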
Also fix two e2e abort fixtures whose `opts: { tokensPerSecond, chunkSize }`
wrapper aimock silently ignores (the real schema uses `chunkSize`
at top level + `streamingProfile.tps`). They streamed at full speed,
so the abort test raced the response and saw the complete event
before stop could land.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…utput

- Short-circuit structured output stream when agent loop yields RUN_ERROR
- Emit RUN_STARTED for empty streams in openrouter finalization path
- Capture trailing usage chunks (empty choices) in grok adapter
- Sanitize SDK errors via toRunErrorPayload in grok structuredOutputStream
- Validate POST body with zod and propagate abort signal through non-streaming chat() in example
- Flush TextDecoder, throw if structured-output.complete missing, sync reasoningLine, link form labels in example UI
- Correct changeset wording and update e2e fallback comment + abort prefix doc

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Structured streaming in action: Screen.Recording.2026-05-05.at.5.26.00.pm.mov
Summary
This PR ships `chat({ outputSchema, stream: true })` — a typed `StructuredOutputStream<T>` that yields raw JSON deltas and a terminal `CUSTOM structured-output.complete` event with `{ object, raw, reasoning? }`. It started as openrouter-only (#526) and grew to cover the four OpenAI-compatible providers plus a hand-testable example.

Core (`@tanstack/ai`)

- `chat({ outputSchema, stream: true })` overload returning `StructuredOutputStream<InferSchemaType<TSchema>>`.
- `finishReason` handling; typed `RUN_ERROR` on empty content, mid-stream provider errors terminate cleanly, schema-validation failures carry `runId` / `model` / `timestamp`.
- `fallbackStructuredOutputStream` in the activity layer is the single source of truth for adapters that don't implement `structuredOutputStream` natively. `BaseTextAdapter` no longer ships a default.

Native streaming structured output
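Before the per-provider details, a consumer-side sketch of the shared contract. The chunk shapes below are assumptions inferred from this description, not the library's exact types:

```typescript
type StructuredChunk<T> =
  | { type: 'TEXT_MESSAGE_CONTENT'; delta: string }
  | {
      type: 'CUSTOM'
      name: 'structured-output.complete'
      value: { object: T; raw: string; reasoning?: string }
    }

// Accumulate raw JSON deltas, then return the validated terminal payload.
async function collect<T>(
  stream: AsyncIterable<StructuredChunk<T>>,
): Promise<{ object: T; raw: string; reasoning?: string }> {
  let raw = ''
  let terminal: { object: T; raw: string; reasoning?: string } | undefined
  for await (const chunk of stream) {
    if (chunk.type === 'TEXT_MESSAGE_CONTENT') raw += chunk.delta
    else if (chunk.name === 'structured-output.complete') terminal = chunk.value
  }
  if (!terminal) throw new Error('stream ended before structured-output.complete')
  return terminal
}
```

Consumers that only care about the final object can ignore the deltas entirely and subscribe to the terminal event alone.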
- `@tanstack/ai-openrouter` — `response_format: json_schema` + `stream: true`; reasoning via `delta.reasoningDetails` (camelCase)
- `@tanstack/ai-openai` — `text.format: json_schema` + `stream: true`; reasoning via `response.reasoning_text.delta` + `response.reasoning_summary_text.delta` (requires `reasoning.summary: 'auto'`)
- `@tanstack/ai-grok` — `response_format: json_schema` + `stream: true`; reasoning via `delta.reasoning_content` (DeepSeek convention; not typed by OpenAI SDK)
- `@tanstack/ai-groq` — `response_format: json_schema` + `stream: true`; reasoning via `delta.reasoning` (requires `reasoning_format: 'parsed'`; not typed by groq-sdk)

All four emit the contractual `REASONING_*` lifecycle (`REASONING_START` → `REASONING_MESSAGE_START` → `REASONING_MESSAGE_CONTENT` deltas → `REASONING_MESSAGE_END` → `REASONING_END`) and close it before `TEXT_MESSAGE_START`. Accumulated reasoning is also surfaced on the terminal `structured-output.complete` event's `value.reasoning` field for consumers that only subscribe to the final event.

`ts-react-chat` example refreshed

`/generations/structured-output` is now a hand-testable demo of the entire feature surface:

- Stream toggle: off uses the existing non-streaming `chat({ outputSchema })`, on uses `structuredOutputStream`.
- `parsePartialJSON` — title, summary, recommendation cards (brand → name → type → price → reason), and next steps fill in field-by-field as JSON streams in, with subtle visual cues (orange border + blinking caret on the card currently being built) and snap to the validated payload on the terminal event.
- `REASONING_MESSAGE_CONTENT` deltas, with collapsible full-reasoning details.
- `modelOptions` so reasoning models actually emit deltas: openai gpt-5.x/o-series → `reasoning: { summary: 'auto' }`, groq gpt-oss/qwen3/kimi-k2 → `reasoning_format: 'parsed'`, openrouter → `reasoning: { effort: 'medium' }`; grok reasoning models stream automatically.
- `debug: true` on `chat()` calls so the dev server console shows every Responses API / Chat Completions event each provider emits — useful for diagnosing whether a model is reasoning silently.

E2E
`testing/e2e/src/lib/feature-support.ts:86` — the `structured-output-stream` set expanded from `['openrouter']` to `['openai', 'grok', 'groq', 'openrouter']`. The existing parameterized spec in `testing/e2e/tests/structured-output-stream.spec.ts` (happy path + abort) now runs against all four providers.

Test plan
- `pnpm test:lib` — all unit tests pass; 137 openai, 61 grok, 19 groq, plus the original 50 openrouter cases (4 new reasoning tests across grok/groq, ~18 new structuredOutputStream tests across the three new adapters)
- `pnpm test:types` — clean across the workspace
- `pnpm test:eslint` — clean (one pre-existing warning unrelated to this PR)
- `pnpm test:build`, `pnpm test:knip` — clean
- `pnpm --filter @tanstack/ai-e2e test:e2e -- --grep structured-output-stream` — needs to run on a host where port 4010 is free
- `examples/ts-react-chat`: `pnpm dev`, set provider keys (OPENAI_API_KEY / XAI_API_KEY / GROQ_API_KEY / OPENROUTER_API_KEY), open `/generations/structured-output`, pick a reasoning model, tick Stream, verify the purple Thinking strip updates live and the JSON cards fill in progressively

🤖 Generated with Claude Code
Summary by CodeRabbit