Describe the bug
The assistant sometimes emits multiple consecutive tools-only turns with no `text` block at all. The CLI faithfully renders each turn as just a stack of tool calls + tool results — no surrounding narration. From the user's seat the session looks frozen and silent: tools execute, return, the model takes another turn, executes more tools, returns, and so on, without ever sending a text reply. The user has no way to tell whether the agent is still working, has gotten stuck, has failed silently, or is just choosing not to talk.
This bypasses the system prompt's explicit instruction:
> Always lead tool-using work with a brief user-facing update so the user knows what you're doing and why; keep progress visible between tool batches.
> - Before the first tool call and before each new tool-call batch, first send a short visible message naming what you're about to do and why; never begin or shift work with a tools-only turn.
When the model violates that instruction, the CLI does nothing to mitigate. The user has to type something like "notice you didn't reply" or "are you stuck?" to wake the agent into producing visible text.
Affected version
1.0.44
Steps to reproduce the behavior
Hard to reproduce on demand because it's model-side instruction-following drift, but:
- Start a session on Claude Opus 4.7 (1M context, internal).
- Ask a complex multi-phase investigation question that requires many parallel tool calls (e.g. "explore my AKS cluster and find why X is broken").
- Observe assistant turns — most include a leading text block, but some do not. Each tools-only turn renders as just `tool_use ... tool_result ...` with no narration.
- After 2–4 such turns, the user reasonably believes the session has hung.
In my session it happened ~4 turns in a row in the middle of a Kubernetes investigation. I have a feedback bundle (`copilot-feedback-<session-id>.tgz`) that captures the exact transcript; I'm not attaching it here because it contains internal infra data, but I'm happy to share it via the confidential `/feedback` channel.
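For concreteness, the failure pattern is detectable purely from turn structure. A minimal sketch, assuming Anthropic-style content blocks (`{"type": "text" | "tool_use", ...}`) — the `is_tools_only` / `silent_streak` helpers and the toy transcript are illustrations, not the CLI's real internals:

```python
# Hypothetical sketch: detect the "tools-only turn" pattern described above.
# Block shapes follow the Anthropic-style {"type": ...} convention; the
# transcript below is a toy stand-in for the real session.

def is_tools_only(turn: list[dict]) -> bool:
    """True if an assistant turn contains no visible text block."""
    return not any(block["type"] == "text" for block in turn)

def silent_streak(turns: list[list[dict]]) -> int:
    """Length of the trailing run of consecutive tools-only turns."""
    streak = 0
    for turn in reversed(turns):
        if not is_tools_only(turn):
            break
        streak += 1
    return streak

# A transcript matching the failure mode: four turns of tool_use only.
transcript = [
    [{"type": "text", "text": "Checking the cluster first."},
     {"type": "tool_use", "name": "aks-mcp"}],
    [{"type": "tool_use", "name": "kusto"}],
    [{"type": "tool_use", "name": "grafana-amg"}],
    [{"type": "tool_use", "name": "kusto"}],
    [{"type": "tool_use", "name": "aks-mcp"}],
]
print(silent_streak(transcript))  # 4 consecutive silent turns
```

Nothing in the current CLI appears to compute anything like this streak, which is why the failure goes unnoticed.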
Expected behavior
Two complementary fixes — either or both:
1. Stronger model-side enforcement.
The system prompt rule already exists. It needs to actually bind. Options:
- Auto-prepend a short "Working on it: — " status line if the assistant emits a turn whose first content block is `tool_use` rather than `text`.
- Or refuse the turn at the API/client layer (resample) when the assistant returns no text and the previous turn also had no text — currently nothing detects the "tools-only run".
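Both options above reduce to one check at the client layer. A minimal sketch, assuming the same content-block shape as the transcript; `enforce_leading_text`, `STATUS_STUB`, and its wording are hypothetical names, not the real client API:

```python
# Hypothetical sketch of option 1 (auto-prepend a status line) and
# option 2 (flag the turn for resampling). Not the real client code.

STATUS_STUB = {"type": "text", "text": "Working on it — running the next tool batch."}

def enforce_leading_text(turn: list[dict],
                         prev_turn_had_text: bool) -> tuple[list[dict], bool]:
    """Return (possibly patched turn, needs_resample)."""
    has_text = any(block["type"] == "text" for block in turn)
    if has_text:
        return turn, False
    if prev_turn_had_text:
        # First silent turn: patch it with a synthetic status line (option 1).
        return [STATUS_STUB] + turn, False
    # Second consecutive silent turn: ask the caller to resample (option 2).
    return turn, True

patched, needs_resample = enforce_leading_text(
    [{"type": "tool_use", "name": "kusto"}], prev_turn_had_text=True)
print(patched[0]["text"], needs_resample)  # the synthetic status line now leads the turn
```

The split keeps option 1 cheap (no extra model call for a one-off slip) while reserving the resample for the sustained "tools-only run" that nothing detects today.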
2. CLI-side progress indicator.
Even when the model misbehaves, the CLI could surface something:
- A persistent footer line: `assistant working — last tool: <name> · turn N · elapsed Ns`
- Or, after N consecutive tools-only turns (configurable), inject a synthetic visible "⚠️ assistant has run N tool batches without sending text" hint.
Today the CLI is silent in proportion to the model being silent, which compounds the failure mode.
Additional context
- Model: Claude Opus 4.7 (1M context, internal) — `claude-opus-4.7-1m-internal`
- Mode: interactive (default), not autopilot
- Custom instructions: yes (custom personal instructions plus several skills, hooks, MCP servers)
- OS: Windows 11 / PowerShell 7
- Tools active during the failure: many MCP servers (aks-mcp, kusto, grafana-amg, bluebird, etc.) — high tool-call density per turn
- The session was making progress — every tool call returned successfully; it's not a hang on a tool, it's a hang on user-visible output. The transcript shows valid `tool_use → tool_result → next turn tool_use → tool_result ...` with no `text` blocks anywhere.
Related issues
There are superficially similar reports (e.g. `/compact` returning empty output) in the same family of "model silently produced nothing", but none of those are this bug. This one is specifically: the model returns turns containing only `tool_use` blocks, no text, repeatedly, and the CLI does not surface that to the user.