Vim/hyp 184 cli hs follow command for websocket streams #74

Merged
vimmotions merged 98 commits into main from
vim/hyp-184-cli-hs-follow-command-for-websocket-streams
Mar 26, 2026

Conversation

@vimmotions
Contributor

hs stream — Live WebSocket Stream CLI + Interactive TUI

Summary

Adds a new hs stream command that connects to a deployed stack's WebSocket and streams live entity data to the terminal. This is the CLI equivalent of what the TypeScript SDK does programmatically — but with filtering, recording, time-travel, and an interactive TUI for exploration.

This was the highest-leverage missing UX feature: users could deploy stacks with hs up but had no way to observe live stream data without writing code.

What's new

Core streaming (hs stream <View> --url <wss://...>)

  • Connects to any deployed stack's WebSocket endpoint
  • Subscribes to a view using the standard Entity/mode syntax (e.g. PumpfunToken/list)
  • Outputs merged entity state as NDJSON (one JSON object per line) to stdout
  • --raw mode outputs unmerged WebSocket frames directly
  • Pipe-friendly: works with | jq, | head, | grep, etc.
  • URL resolution: --url explicit, --stack from hyperstack.toml, or auto-match from config
  • Subscription controls: --key, --take, --skip, --no-snapshot, --after

Filtering & triggers

  • --where DSL with 10 operators: =, !=, >, >=, <, <=, ~regex, !~regex, ? (exists), !? (not exists)
  • Dot-path support for nested fields: --where "info.symbol=TRUMP"
  • Multiple --where flags are ANDed together
  • --first exits immediately after the first entity matches the filter
  • --select projects specific fields: --select "info.name,info.symbol"
  • --ops filters by operation type: --ops upsert,patch
  • --count shows a running update count on stderr
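
How such a parser can be structured is worth a sketch. The following is illustrative only, not the shipped filter.rs: two-character operators are tried before their one-character prefixes at each position, and the expression splits on the first operator occurrence so values may themselves contain operator characters.

```rust
// Hypothetical sketch of a --where expression parser.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Op { Eq, NotEq, Gt, Gte, Lt, Lte, Regex, NotRegex, Exists, NotExists }

fn parse_where(expr: &str) -> Option<(String, Op, String)> {
    // Existence checks are suffix-only forms: "field?" and "field!?".
    if let Some(f) = expr.strip_suffix("!?") {
        return Some((f.to_string(), Op::NotExists, String::new()));
    }
    if let Some(f) = expr.strip_suffix('?') {
        return Some((f.to_string(), Op::Exists, String::new()));
    }
    for (i, _) in expr.char_indices() {
        let rest = &expr[i..];
        // Two-char operators first, so ">=" is never misparsed as ">" then "=".
        let (op, len) = if rest.starts_with("!=") { (Op::NotEq, 2) }
            else if rest.starts_with(">=") { (Op::Gte, 2) }
            else if rest.starts_with("<=") { (Op::Lte, 2) }
            else if rest.starts_with("!~") { (Op::NotRegex, 2) }
            else if rest.starts_with('=') { (Op::Eq, 1) }
            else if rest.starts_with('>') { (Op::Gt, 1) }
            else if rest.starts_with('<') { (Op::Lt, 1) }
            else if rest.starts_with('~') { (Op::Regex, 1) }
            else { continue };
        // Split on the FIRST operator: the value may contain operator chars.
        return Some((expr[..i].to_string(), op, expr[i + len..].to_string()));
    }
    None
}
```

For example, under this sketch `parse_where("name=a=b")` yields field `name`, operator `=`, value `a=b`.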

Agent-friendly output

  • --no-dna outputs NO_DNA v1 envelope format with lifecycle events (connected, snapshot_complete, entity_update, disconnected)
  • Every TUI interaction has a non-interactive CLI equivalent for agent consumption

Recording & replay

  • --save snapshot.json records all frames with timestamps
  • --duration 30 auto-stops recording after N seconds
  • --load snapshot.json replays through the same merge/filter pipeline (no WebSocket needed)

Entity history & time-travel

  • EntityStore tracks per-entity history with a ring buffer (default 1000 entries)
  • --history --key <key> outputs full update history as JSON
  • --at N --key <key> shows entity state at a specific point in history
  • --diff --key <key> shows field-level changes between updates
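
The ring-buffer behavior can be sketched with a std VecDeque. This is illustrative only; the real EntityStore tracks more state per entry, but the indexing convention (0 = latest, matching `--at N`) is the same.

```rust
use std::collections::VecDeque;

// Minimal history ring buffer sketch: newest entries push to the back,
// the oldest entry is evicted once the cap is reached.
struct History<T> {
    entries: VecDeque<T>,
    cap: usize,
}

impl<T> History<T> {
    fn new(cap: usize) -> Self {
        Self { entries: VecDeque::new(), cap }
    }

    fn push(&mut self, state: T) {
        if self.entries.len() == self.cap {
            self.entries.pop_front(); // evict the oldest entry
        }
        self.entries.push_back(state);
    }

    // at(0) = latest, at(1) = one update back, like `--at N`.
    fn at(&self, n: usize) -> Option<&T> {
        let len = self.entries.len();
        if n < len { self.entries.get(len - 1 - n) } else { None }
    }
}
```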

Interactive TUI (--tui)

Behind --features tui (ratatui + crossterm):

  • Split-pane layout: entity list (30%) + JSON detail view (70%)
  • JSON syntax coloring: keys in cyan, strings green, numbers magenta, booleans yellow
  • Deep search (/): filters entities by searching all values in the JSON tree, not just keys
  • Time-travel: h/l step through entity version history, with diff view toggle (d)
  • Vim motions: j/k, G/gg, Ctrl+d/Ctrl+u, n (next match), number prefixes (10j)
  • Pause/resume (p): freeze the stream while exploring
  • Save snapshot (s): dump current recording to JSON file
  • Raw frame toggle (r)
  • Auto-scrolling list: selection always stays visible in both directions

SDK changes

  • deep_merge_with_append made public for reuse
  • parse_frame, parse_snapshot_entities, try_parse_subscribed_frame, ClientMessage, SnapshotEntity exported from hyperstack-sdk

Usage examples

# Stream all entities as NDJSON
hs stream PumpfunToken/list --url wss://my-stack.stack.usehyperstack.com

# Find first token with a specific symbol
hs stream PumpfunToken/list --url wss://... --where "info.symbol=TRUMP" --first

# Record 30 seconds, replay later in TUI
hs stream PumpfunToken/list --url wss://... --save capture.json --duration 30
hs stream --load capture.json --tui

# Interactive exploration
hs stream PumpfunToken/list --url wss://... --tui

# Pipe to jq
hs stream PumpfunToken/list --url wss://... --raw | jq '.data[0].data.info.name'

New files

cli/src/commands/stream/
├── mod.rs          # Command args, URL resolution, entry point
├── client.rs       # WebSocket connection, frame processing, replay
├── filter.rs       # --where DSL parser & evaluator (9 unit tests)
├── output.rs       # NDJSON, NO_DNA, raw formatters
├── snapshot.rs     # --save/--load file I/O
├── store.rs        # EntityStore with history ring buffer (5 unit tests)
└── tui/
    ├── mod.rs      # Terminal setup, event loop, key dispatch
    ├── app.rs      # App state, vim motions, entity management
    └── ui.rs       # ratatui layout, JSON coloring, widgets

New dependencies (cli)

  • hyperstack-sdk — reuse Frame types, subscription protocol, merge logic
  • tokio, futures-util, tokio-tungstenite — async WebSocket client
  • ratatui, crossterm — TUI (optional, behind tui feature flag)

Tests

  • 14 unit tests: 9 for filter DSL (all operators, nested paths, type coercion), 5 for EntityStore (merge, patch, history, diff)
  • Manually tested against my live stack deployment with PumpfunToken/list view


…ase 1)

Core streaming MVP: connect to a deployed stack's WebSocket, subscribe
to a view (e.g. OreRound/latest), and stream entity data as NDJSON to
stdout. Supports --raw mode for raw frames and merged entity output
(default). Resolves WebSocket URL from --url, --stack, or hyperstack.toml.

Also exports parse_frame, parse_snapshot_entities, ClientMessage, and
deep_merge_with_append from hyperstack-sdk for reuse.

…s stream (Phase 2)

- Filter DSL via --where: =, !=, >, >=, <, <=, ~regex, !~regex, ?, !?
  with dot-path support for nested fields (e.g. --where "user.age>18")
- --first exits after first entity matches filter criteria
- --select projects specific fields (comma-separated dot paths)
- --ops filters by operation type (upsert, patch, delete)
- --no-dna outputs NO_DNA v1 agent-friendly envelope format with
  lifecycle events (connected, snapshot_complete, entity_update, disconnected)
- --count shows running update count on stderr
- 9 unit tests for filter parsing and evaluation

…hase 3)

- --save <file> records all raw frames with timestamps to a JSON file
- --duration <secs> auto-stops recording after N seconds
- --load <file> replays a saved snapshot through the same merge/filter
  pipeline (no WebSocket connection needed)
- Snapshot format includes metadata (view, url, captured_at, duration)
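
The on-disk layout isn't shown in this PR, so the following is a purely hypothetical illustration: only the metadata fields (view, url, captured_at, duration) and the idea of timestamped frames come from the description above; all other field names and nesting are invented for orientation.

```json
{
  "meta": {
    "view": "PumpfunToken/list",
    "url": "wss://my-stack.stack.usehyperstack.com",
    "captured_at": "2026-03-25T22:03:00Z",
    "duration": 30
  },
  "frames": [
    { "ts_ms": 1774476180000, "frame": { "...": "raw WebSocket frame as received" } }
  ]
}
```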

…ase 4)

EntityStore tracks full entity state + per-entity history ring buffer
(default 1000 entries). Supports:
- --history: outputs full update history for --key entity as JSON array
- --at N: shows entity state at specific history index (0 = latest)
- --diff: shows field-level diff (added/changed/removed) between updates
  with raw patch data when available

These flags provide non-interactive agent equivalents of the TUI time
travel feature. 5 unit tests for store operations and diffing.

Behind --features tui flag, `hs stream --tui` launches a ratatui-based
terminal UI with:
- Split-pane layout: entity list (30%) + detail view (70%)
- Entity navigation (j/k, arrows), detail focus (Enter/Esc)
- Time travel through entity history (h/l, Home/End)
- Diff view toggle (d) showing field-level changes
- JSON syntax coloring in detail panel
- Pause/resume live updates (p)
- Save snapshot to file (s)
- Entity key filtering (/)
- Raw frame toggle (r)
- Status bar with keybinding hints
- Timeline bar showing history position

Dependencies: ratatui 0.29, crossterm 0.28 (optional)
Without the tui feature, --tui prints an error with install instructions.

The server sends subscribed acknowledgments as binary frames with a
different shape (no `entity` field), causing parse_frame to fail.
Now falls back to try_parse_subscribed_frame before warning, so real
parse errors are still surfaced while subscribed frames are handled
cleanly. Re-exports try_parse_subscribed_frame from hyperstack-sdk.

Previously, keys like r, d, s, h, l etc. were matched as TUI commands
before checking if filter input was active, making it impossible to
type those characters in the filter. Now checks filter_input_active
first and routes all Char keys to the filter text input.

The / filter now recursively searches all string, number, and boolean
values in each entity's JSON data. Typing "test" matches any entity
where any field value contains "test" (case-insensitive), not just
entities whose key contains the search term.
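
A dependency-free sketch of that deep search: the Val enum below stands in for serde_json::Value so the example is self-contained, and this is not the shipped code, only the shape of the recursion.

```rust
// Toy value type standing in for serde_json::Value.
enum Val {
    Str(String),
    Num(f64),
    Bool(bool),
    Arr(Vec<Val>),
    Obj(Vec<(String, Val)>),
}

// True if any string/number/boolean leaf contains the needle (case-insensitive).
fn deep_matches(v: &Val, needle: &str) -> bool {
    let needle = needle.to_lowercase();
    fn walk(v: &Val, needle: &str) -> bool {
        match v {
            Val::Str(s) => s.to_lowercase().contains(needle),
            Val::Num(n) => n.to_string().contains(needle),
            Val::Bool(b) => b.to_string().contains(needle),
            Val::Arr(items) => items.iter().any(|x| walk(x, needle)),
            // This sketch searches leaf values only.
            Val::Obj(fields) => fields.iter().any(|(_, x)| walk(x, needle)),
        }
    }
    walk(v, &needle)
}
```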

When typing a filter that reduces the entity list, the selection now
clamps to stay within the filtered results. Navigation (j/k) also
bounds against the filtered count instead of the full entity list.

Uses ratatui's ListState with the selected index so the list widget
automatically scrolls to keep the highlighted entity in view when
navigating past the visible area. Also shows filter text and
filtered/total count in the title when a filter is active.

- gg: jump to top of list
- G: jump to bottom of list
- Ctrl+d / Ctrl+u: half-page down/up
- n: jump to next filter match (wraps around)
- Number prefixes: e.g. 10j moves down 10, 5k moves up 5,
  3Ctrl+d moves 3 half-pages down
- Esc clears any pending count/g prefix
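
Count-prefix handling can be sketched like this. The state struct and method names are hypothetical; the real app dispatches many more keys, and a later commit in this PR caps the pending count at 99999, which the sketch mirrors.

```rust
// Digits accumulate into a pending count; the next motion consumes it;
// any other key (e.g. Esc) clears it.
struct Motions {
    pending: Option<usize>,
}

impl Motions {
    fn take_count(&mut self) -> usize {
        self.pending.take().unwrap_or(1)
    }

    // Returns the new cursor position for a key pressed at `cursor`,
    // where `last` is the maximum valid index.
    fn key(&mut self, c: char, cursor: usize, last: usize) -> usize {
        match c {
            '0'..='9' => {
                let d = c as usize - '0' as usize;
                // Cap the accumulated count to avoid overflow.
                self.pending = Some((self.pending.unwrap_or(0) * 10 + d).min(99_999));
                cursor
            }
            'j' => (cursor + self.take_count()).min(last),
            'k' => cursor.saturating_sub(self.take_count()),
            _ => { self.pending = None; cursor } // Esc clears any prefix
        }
    }
}
```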

ListState was being recreated fresh each frame, losing the scroll
offset. Now stored in App and synced with selected_index on every
action, so ratatui properly auto-scrolls the entity list in both
directions — pressing k after scrolling past the bottom now scrolls
back up as expected.

The timeline bar now shows two distinct pieces of info:
- Row position: "Row 1/513" (your position in the entity list)
- Entity version: "version 1/2" (update history for the selected entity)

Previously only showed "update N/N" which was confusing because it
looked like it referred to list position rather than entity history.
The entity count is already shown in the list panel title, no need
to repeat it in the header bar.

@vimmotions vimmotions requested a review from adiman9 March 23, 2026 17:15
@vercel

vercel bot commented Mar 23, 2026

The latest updates on your projects.

Project: hyperstack-docs | Deployment: Ready | Actions: Preview, Comment | Updated (UTC): Mar 25, 2026 10:35pm

@greptile-apps

greptile-apps bot commented Mar 23, 2026

Greptile Summary

This PR adds a comprehensive hs stream command that connects to a deployed stack's WebSocket endpoint and streams live entity data to the terminal. It covers the full feature surface described in the PR: NDJSON/NoDna/raw output modes, a --where filter DSL with 10 operators, --select projection, --save/--load snapshot recording and replay, per-entity history with a ring buffer, and an optional interactive TUI built on ratatui/crossterm. The SDK is extended minimally to export previously internal types needed by the CLI.

  • The architecture is cleanly layered: mod.rs handles URL resolution and argument validation, client.rs owns the WebSocket loop and frame dispatch, filter.rs/store.rs/snapshot.rs/output.rs each have a single responsibility, and the TUI is isolated behind a Cargo feature flag.
  • 14 unit tests cover all 10 filter operators (including edge cases like absent fields and type coercion) and 5 EntityStore operations (upsert, patch, diff, delete, compute_diff).
  • The main non-blocking issues found are: (1) a BufWriter that is flushed on every write in output.rs, negating its batching benefit; (2) back-to-back duplicate self.history_anchor = None assignments in tui/app.rs; and (3) crossterm::event::poll used as a blocking call inside an async fn (acceptable with multi-threaded Tokio but not idiomatic).
  • No reconnection/retry logic is present — a dropped connection ends the stream permanently, which may surprise users in long-lived streaming scenarios.

Confidence Score: 5/5

  • This PR is safe to merge; all identified issues are non-blocking style improvements with no impact on correctness.
  • The implementation is large but well-structured, with clear module boundaries, 14 passing unit tests covering filter DSL and EntityStore, atomic snapshot writes, and defensive error handling throughout. No P0/P1 bugs were found — only three P2 style suggestions (BufWriter flush semantics, duplicate assignments in TUI, and synchronous event::poll in async context). The SDK changes are purely additive (visibility widening). The feature is self-contained behind a new subcommand and optional Cargo feature, so there is no regression risk to existing commands.
  • No files require special attention; cli/src/commands/stream/output.rs (BufWriter flush) and cli/src/commands/stream/tui/app.rs (duplicate assignments) have the P2 items worth a quick pass before or after merge.

Important Files Changed

Filename Overview
cli/src/commands/stream/client.rs Core WebSocket streaming engine: connects, subscribes, processes frames (snapshot/upsert/patch/delete), handles --save recording, --duration timer, Ctrl+C, and a replay path. Logic is well-structured with a shared StreamState. No critical bugs found; Operation::Subscribed is guarded before process_frame so it is never recorded.
cli/src/commands/stream/filter.rs Filter DSL with 10 operators, two-char precedence correctly handled, dot-path resolution, and select_fields projection. 9 unit tests cover all operators. NotEq/NotRegex intentionally returns true for absent fields (documented in tests).
cli/src/commands/stream/mod.rs Command entry point: URL resolution (explicit → --stack → entity-name auto-match → single-stack fallback), TUI conflict guards, and subscription builder. Clean layering with good user-facing error messages.
cli/src/commands/stream/output.rs NDJSON/NoDna/raw formatters and StdoutWriter. BufWriter is flushed on every writeln call, negating its batching benefit for high-throughput streams — either use raw Stdout with a lock, or flush only at natural batch boundaries.
cli/src/commands/stream/snapshot.rs Frame recorder (100k cap with warning) and player. Uses atomic tmp-then-rename write for safety. Manual JSON streaming avoids holding the full JSON string in memory. SnapshotPlayer::load reads the entire file via read_to_string before parsing; for max-size snapshots (~100k frames × ~1 KB) this could require ~100 MB of RAM.
cli/src/commands/stream/store.rs Per-entity history ring buffer (VecDeque, 1000-entry cap). at, at_absolute, diff_at, and history methods are consistent: index 0 = latest. 5 unit tests cover upsert, patch, diff, delete, and compute_diff.
cli/src/commands/stream/tui/app.rs TUI app state, vim motions, entity management, and history browsing. Has back-to-back duplicate self.history_anchor = None assignments in HistoryForward (lines 402–404) and HistoryNewest (lines 425–426) — harmless but they look like copy-paste artifacts. The compensate_history_anchor logic for ring-buffer eviction is thoughtfully implemented.
cli/src/commands/stream/tui/mod.rs Terminal setup with panic hook restoration, WS reader task with drop counter, and main event loop. crossterm::event::poll(tick_rate) is a blocking syscall inside an async fn — works with multi-threaded Tokio but consumes a worker thread for up to 50 ms per tick; crossterm's async EventStream would be more correct.
cli/src/commands/stream/tui/ui.rs Ratatui layout (header/split-pane/timeline/status). JSON syntax coloring uses a heuristic line-by-line approach relying on serde_json pretty-print format. Timeline and status bar are informative without being cluttered.
rust/hyperstack-sdk/src/lib.rs Re-exports parse_frame, parse_snapshot_entities, try_parse_subscribed_frame, SnapshotEntity, ClientMessage, and deep_merge_with_append from the SDK for use by the CLI — minimal, additive change with no breaking modifications.

Sequence Diagram

sequenceDiagram
    participant User as User / CLI
    participant Mod as stream/mod.rs
    participant Client as client.rs
    participant WS as WebSocket Server
    participant Out as output.rs
    participant Snap as snapshot.rs

    User->>Mod: hs stream View --url wss://...
    Mod->>Mod: resolve_url(), validate_ws_url()
    Mod->>Client: stream(url, view, args)
    Client->>Client: build_state() (parse filters, open recorder)
    Client->>WS: connect_async()
    WS-->>Client: HTTP 101 Upgrade
    Client->>WS: ClientMessage::Subscribe(sub)
    WS-->>Client: Subscribed frame

    loop Live stream
        WS-->>Client: Binary/Text frame
        Client->>Client: parse_frame()
        alt Snapshot frame
            Client->>Client: parse_snapshot_entities()
            Client->>Client: EntityStore::upsert() + entities HashMap
            Client->>Out: emit_entity() → writeln NDJSON/NoDna
        else Upsert/Create
            Client->>Client: entities.insert()
            Client->>Out: emit_entity()
        else Patch
            Client->>Client: deep_merge_with_append()
            Client->>Out: emit_entity()
        else Delete
            Client->>Client: entities.remove()
            Client->>Out: print_delete()
        end
        opt --save active
            Client->>Snap: recorder.record(frame)
        end
        opt --first matched
            Client-->>User: break (exit 0)
        end
        opt --duration elapsed / Ctrl+C
            Client->>WS: ws_tx.close()
            Client-->>User: break
        end
    end

    opt --save path given
        Client->>Snap: recorder.save(path)
        Snap->>Snap: tmp write → atomic rename
    end
    opt --history/--at/--diff
        Client->>Out: output_history_if_requested()
    end

Comments Outside Diff (4)

  1. cli/src/commands/stream/tui/app.rs, line 402-404 (link)

    Duplicate history_anchor assignment

    self.history_anchor = None; is assigned twice back-to-back in both the HistoryForward arm (lines 402–404) and the HistoryNewest arm (lines 425–426). These are dead writes: the first assignment in each pair is immediately overwritten by the second and never observed. While this produces no incorrect behavior, it looks like a copy-paste artifact and should be cleaned up to avoid future confusion when modifying this logic.

  2. cli/src/commands/stream/tui/app.rs, line 424-427 (link)

    Duplicate history_anchor = None in HistoryNewest

    Same pattern as the duplicate in HistoryForward: self.history_anchor = None; is written twice consecutively. One of the two statements should be removed.

  3. cli/src/commands/stream/output.rs, line 24-29 (link)

    BufWriter flushed on every write, negating buffering benefit

    self.inner.flush() is called after every writeln!, which means each line triggers its own write syscall — the same behavior as using io::Stdout directly. BufWriter's purpose is to coalesce multiple small writes into a single syscall; flushing on every write eliminates that benefit and the overhead of the BufWriter wrapper is paid for nothing.

    For the pipe-friendly use case (| jq, | head), per-line flushing is indeed desirable to ensure downstream consumers receive data promptly. If that's the intent, consider either:

    1. Removing BufWriter and using io::Stdout with explicit locking, or
    2. Keeping BufWriter but only flushing periodically (e.g. after each frame batch) and relying on Drop::drop to flush at exit.

    (Flush on drop — already implemented — will then ensure final output is not lost.)
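
For reference, the per-line-flush trade-off (which a later commit in this PR deliberately keeps for pipe responsiveness) can be sketched generically over Write so it is testable against a Vec<u8>. The type and method names here are hypothetical; the real writer wraps io::Stdout.

```rust
use std::io::{BufWriter, Write};

// Flush after every line so pipe consumers (| jq, | head) see output
// promptly; BufWriter's Drop flushes any remainder at exit.
struct NdjsonWriter<W: Write> {
    inner: BufWriter<W>,
}

impl<W: Write> NdjsonWriter<W> {
    fn new(w: W) -> Self {
        Self { inner: BufWriter::new(w) }
    }

    fn writeln(&mut self, line: &str) -> std::io::Result<()> {
        writeln!(self.inner, "{}", line)?;
        self.inner.flush() // per-line flush: latency over throughput
    }
}
```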

  4. cli/src/commands/stream/tui/mod.rs, line 198-199 (link)

    Synchronous event::poll blocks a Tokio worker thread

    crossterm::event::poll(tick_rate) is a blocking system call (it may block up to the full 50 ms timeout). Calling it inside an async fn (run_loop) without spawn_blocking means it occupies a Tokio worker thread for up to 50 ms per iteration. With the default multi-threaded runtime this doesn't stall the WebSocket reader task (it runs on a different thread), but it does reduce the effective thread-pool size during the poll and prevents other Tokio tasks from being scheduled on that thread.

    Consider wrapping the poll + read in tokio::task::spawn_blocking or using crossterm's async event stream (EventStream from the crossterm crate's event-stream feature) to keep the executor fully cooperative.

Prompt To Fix All With AI
This is a comment left during a code review.
Path: cli/src/commands/stream/tui/app.rs
Line: 402-404

Comment:
**Duplicate `history_anchor` assignment**

`self.history_anchor = None;` is assigned twice back-to-back in both the `HistoryForward` arm (lines 402–404) and the `HistoryNewest` arm (lines 425–426). These are dead writes — the second assignment in each block is unreachable in any meaningful sense and is never observed. While this produces no incorrect behavior, it looks like a copy-paste artifact and should be cleaned up to avoid future confusion when modifying this logic.

```suggestion
                self.history_anchor = None;
                self.history_position = 0;
```

How can I resolve this? If you propose a fix, please make it concise.

---

This is a comment left during a code review.
Path: cli/src/commands/stream/tui/app.rs
Line: 424-427

Comment:
**Duplicate `history_anchor = None` in `HistoryNewest`**

Same pattern as the duplicate in `HistoryForward``self.history_anchor = None;` is written twice consecutively. One of these two statements should be removed.

```suggestion
                self.history_position = 0;
                self.history_anchor = None;
                self.scroll_offset = 0;
```

How can I resolve this? If you propose a fix, please make it concise.

---

This is a comment left during a code review.
Path: cli/src/commands/stream/output.rs
Line: 24-29

Comment:
**`BufWriter` flushed on every write, negating buffering benefit**

`self.inner.flush()` is called after every `writeln!`, which means each line triggers its own `write` syscall — the same behavior as using `io::Stdout` directly. `BufWriter`'s purpose is to coalesce multiple small writes into a single syscall; flushing on every write eliminates that benefit and the overhead of the `BufWriter` wrapper is paid for nothing.

For the pipe-friendly use case (`| jq`, `| head`), per-line flushing is indeed desirable to ensure downstream consumers receive data promptly. If that's the intent, consider either:
1. Removing `BufWriter` and using `io::Stdout` with explicit locking, or
2. Keeping `BufWriter` but only flushing periodically (e.g. after each frame batch) and relying on `Drop::drop` to flush at exit.

```suggestion
    pub fn writeln(&mut self, line: &str) -> Result<()> {
        writeln!(self.inner, "{}", line)?;
        Ok(())
    }
```
(Flush on drop — already implemented — will then ensure final output is not lost.)

How can I resolve this? If you propose a fix, please make it concise.

---

This is a comment left during a code review.
Path: cli/src/commands/stream/tui/mod.rs
Line: 198-199

Comment:
**Synchronous `event::poll` blocks a Tokio worker thread**

`crossterm::event::poll(tick_rate)` is a blocking system call (it may block up to the full 50 ms timeout). Calling it inside an `async fn` (`run_loop`) without `spawn_blocking` means it occupies a Tokio worker thread for up to 50 ms per iteration. With the default multi-threaded runtime this doesn't stall the WebSocket reader task (it runs on a different thread), but it does reduce the effective thread-pool size during the poll and prevents other Tokio tasks from being scheduled on that thread.

Consider wrapping the poll + read in `tokio::task::spawn_blocking` or using `crossterm`'s async event stream (`EventStream` from the `crossterm` crate's `event-stream` feature) to keep the executor fully cooperative.

How can I resolve this? If you propose a fix, please make it concise.


Previously the deadline was checked at the top of the loop before
entering select!, so on quiet streams the actual stop time could
overshoot by up to the 30-second ping interval. Now uses a sleep
future as a select! arm that fires exactly when the duration expires.

…lisions

Previously --select "a.id,b.id" would output {"id": <b's value>},
silently overwriting a.id. Now uses the full dot-path as the output
key: {"a.id": 1, "b.id": 2}. Single-segment paths are unchanged
(--select "name" still outputs {"name": ...}).

Adds test for the collision case.

raw_frames were only collected when show_raw was active, and were
never read by any rendering code. Now:
- Always collects raw frames (so toggling on shows recent data)
- selected_entity_data() checks show_raw first and returns the most
  recent raw WebSocket frame for the selected entity key

If the TUI panics, disable_raw_mode and LeaveAlternateScreen never
executed, leaving the user's terminal in raw mode (unusable until
running 'reset'). Now installs a panic hook before entering raw mode
that restores the terminal state before re-invoking the original hook.
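
The hook-chaining pattern can be sketched without crossterm; here restore_terminal is a stand-in closure for the real disable_raw_mode() + LeaveAlternateScreen calls.

```rust
use std::panic;

// Capture the previous hook, restore the terminal first, then delegate.
fn install_panic_hook(restore_terminal: impl Fn() + Send + Sync + 'static) {
    let prev = panic::take_hook();
    panic::set_hook(Box::new(move |info| {
        restore_terminal(); // undo raw mode first so the message is readable
        prev(info);         // then re-invoke the original hook
    }));
}
```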

entity_keys.contains() was O(n) per frame, degrading with thousands
of entities. Now maintains a parallel HashSet<String> for O(1)
membership checks. HashSet::insert returns false if already present,
so it doubles as the contains check. Delete also removes from the set.
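
A sketch of the pattern (struct and method names hypothetical): the Vec keeps insertion order for the list UI, while a parallel HashSet answers "seen before?" in O(1).

```rust
use std::collections::HashSet;

#[derive(Default)]
struct EntityKeys {
    order: Vec<String>,   // insertion order, drives the list UI
    seen: HashSet<String>, // O(1) membership
}

impl EntityKeys {
    // HashSet::insert returns false when already present,
    // so one call doubles as the contains check.
    fn add(&mut self, key: &str) -> bool {
        if self.seen.insert(key.to_string()) {
            self.order.push(key.to_string());
            true // first time seen
        } else {
            false
        }
    }

    fn delete(&mut self, key: &str) {
        if self.seen.remove(key) {
            self.order.retain(|k| k != key);
        }
    }
}
```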

- Use strip_suffix instead of manual string slicing for ? and !? suffix
- Use is_some_and/is_none_or instead of map_or for Option comparisons
- Use direct == Some() comparison instead of map_or for equality check

The snapshot_complete detection and NO_DNA event emission only existed
in the Message::Text branch. Since the server primarily sends binary
frames, consumers relying on the NO_DNA snapshot_complete lifecycle
event would never see it. Now mirrors the same tracking logic in the
binary frame branch.

Byte-index slicing panics when the cut point lands in the middle of a
multi-byte codepoint (emoji, CJK characters). Now uses char_indices
to find safe byte boundaries for truncation.
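
A minimal version of that fix (function name hypothetical): char_indices yields valid byte offsets, so the cut can never land inside a multi-byte codepoint.

```rust
// Truncate to at most `max_chars` characters without panicking on
// multi-byte codepoints (emoji, CJK characters).
fn truncate_chars(s: &str, max_chars: usize) -> &str {
    match s.char_indices().nth(max_chars) {
        Some((byte_idx, _)) => &s[..byte_idx],
        None => s, // already short enough
    }
}
```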

…nfigured

Previously fell through to the first stack with any URL, silently
connecting to an unrelated stack. Now only auto-selects when there is
exactly one stack with a URL (unambiguous), and prints which stack was
chosen. With multiple stacks, requires explicit --url or --stack.

Previously --first only triggered when a --where filter was present,
silently running forever without one. Now --first always exits after
the first output: with --where it exits on first match, without
--where it exits after the first frame (raw) or entity (merged).

Adds comments explaining that two-char operators are checked before
single-char to avoid misparsing, and that the split is on the first
operator occurrence so values may contain operator characters
(e.g. --where "name=a=b" works correctly).

…source

- Moved connected event from build_state to after connect_async succeeds,
  so failed connections don't emit a connected event with no matching
  disconnected
- Replay connected event includes "source": "replay" so consumers can
  distinguish live vs replay
- --load now conflicts_with --duration at clap level
- Bail on --where/--select/--ops/--first with --tui (previously ignored)
- TUI detects WebSocket disconnect and shows DISCONNECTED in header
- Float equality uses exact bitwise comparison after string match
  (relative epsilon was too loose for large numbers)
- Duration expiry sends WebSocket close frame before breaking
- finalize_count() clears the overwriting \r count line before post-
  stream messages (prevents garbled terminal output)
- Snapshot write removes existing destination before rename (Windows
  compatibility where fs::rename fails if target exists)
- Document that NoDna snapshot entity_count is a running tally
- Document silent delete filter drop for unseen entities
- Remove-before-rename only runs on Windows (POSIX rename overwrites)
- Propagate remove_file errors instead of silently swallowing them
- Clean up tmp file on rename failure before propagating error

…okups

- Output functions now use a shared BufWriter<Stdout> held in StreamState
  instead of acquiring/releasing stdout lock per call
- Text WebSocket frames parsed once directly to Frame instead of double-
  parsing (Value then Frame)
- diff_at stores entry in local variable instead of 3 redundant gets
- TUI channel buffer increased to 10k for large snapshot batches
- Document that filter cache invalidation is per-tick not per-frame

…emove dead flush

- Replay snapshot_complete now checks received_snapshot (consistent
  with live stream path)
- Document --select flattening behavior in help text
- Document compute_diff as shallow top-level only
- Document colorize_json_line serde_json assumption
- Confirm --first semantics with comment
- Remove unused StdoutWriter::flush (Drop impl suffices)

…g_count cap

- StdoutWriter flushes after each writeln (prevents delays on low-
  throughput streams)
- JSON key coloring uses "\": " to avoid matching colons inside keys
- Snapshot serialization streams to file via BufWriter instead of
  building full JSON string in memory
- Document --ops snapshot as valid value
- Cap pending_count at 99999 to prevent usize overflow

@adiman9
adiman9 previously approved these changes Mar 25, 2026
Contributor

@adiman9 adiman9 left a comment


This is a belter. Works really nice. Few comments:

  1. Would be nice to have some way to dynamically sort the entities. By _seq or any other field.
  2. See below image for how arrays render. Would be nice if we could render inline up to a certain width before we break to this format. [image]
  3. When the JSON for an entity overflows the container I want j/k to move me up and down the JSON, not navigate between entities. This is especially true after I've hit Enter to drill into an entity. I'm now in the context of the entity and expect j/k to navigate within it.
  4. When using version history during streaming it jumps me around. I'm assuming it's using an offset from the current version rather than an absolute offset? Or maybe to do with the rolling window of 1k, as things roll off we jump? Idk

@adiman9
Contributor

adiman9 commented Mar 25, 2026

I was using hs stream OreRound/latest --url wss://ore.stack.usehyperstack.com --tui btw

vimmotions and others added 6 commits March 25, 2026 22:03

In Detail mode (after pressing Enter), j/k now scroll the JSON
detail pane instead of navigating between entities. G/gg go to
bottom/top of the JSON, Ctrl+d/Ctrl+u do half-page scroll.

Arrow keys still navigate entities in both modes as an escape hatch.
Press Esc to return to list mode where j/k navigate entities.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Previously history_position was relative (0=latest, N=Nth from latest),
so when new frames arrived for the selected entity, the viewed content
would jump because the same relative index now pointed to a different
history entry.

Now stores an absolute VecDeque index (history_anchor) when browsing
history. The anchor stays fixed as new entries are appended. When the
ring buffer evicts old entries (pop_front), the anchor is decremented
to continue pointing at the same entry.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
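
The anchor compensation described above can be sketched as follows (illustrative only, simplified to u64 entries and hypothetical names): the anchor is an absolute index into the ring buffer, so appends don't move the viewed entry, and on eviction every absolute index shifts down by one, so the anchor is decremented to keep naming the same entry.

```rust
use std::collections::VecDeque;

struct HistoryView {
    buf: VecDeque<u64>,
    cap: usize,
    anchor: Option<usize>, // absolute index into buf while browsing
}

impl HistoryView {
    fn push(&mut self, v: u64) {
        if self.buf.len() == self.cap {
            self.buf.pop_front();
            // Compensate: every absolute index just moved down by one.
            self.anchor = match self.anchor {
                Some(0) | None => None, // the anchored entry itself was evicted
                Some(i) => Some(i - 1),
            };
        }
        self.buf.push_back(v);
    }

    fn viewed(&self) -> Option<&u64> {
        self.anchor.and_then(|i| self.buf.get(i))
    }
}
```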

Small arrays like [1, 2, 3] now render on a single line when they
fit within the terminal width, instead of one element per line.
Larger arrays that exceed the width still expand to multi-line.

Uses a custom JSON formatter (compact_pretty) that tries the inline
form for each array and falls back to expanded when it doesn't fit.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Press S to cycle sort modes (insertion order → _seq field → insertion).
Press O to toggle ascending/descending direction.

Sort applies to the filtered cache after filtering, never mutates the
raw entity_keys list. Numbers sort numerically, strings lexicographically,
null/missing values sort last.

Sort indicator shown in status bar: [_seq↓] or [_seq↑].

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
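
The null-last comparator described above can be sketched like this; SortKey is a hypothetical stand-in for the extracted field value, and the number-vs-string ordering is an arbitrary but stable choice.

```rust
use std::cmp::Ordering;

#[derive(Clone, Debug, PartialEq)]
enum SortKey { Num(f64), Str(String), Missing }

// Numbers compare numerically, strings lexicographically,
// missing/null values always sort last.
fn cmp_keys(a: &SortKey, b: &SortKey) -> Ordering {
    use SortKey::*;
    match (a, b) {
        (Num(x), Num(y)) => x.partial_cmp(y).unwrap_or(Ordering::Equal),
        (Str(x), Str(y)) => x.cmp(y),
        (Missing, Missing) => Ordering::Equal,
        (Missing, _) => Ordering::Greater, // missing sorts last
        (_, Missing) => Ordering::Less,
        (Num(_), Str(_)) => Ordering::Less,
        (Str(_), Num(_)) => Ordering::Greater,
    }
}
```

Note that numeric comparison matters here: sorting the strings "10" and "2" lexicographically would invert their order.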

- G now scrolls to the last line of JSON, not past it
- All scroll actions clamp to max_scroll_offset (total lines - visible)
- Detail mode border is yellow to indicate focus state
- Scroll position shown in title: [line N/total]

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

- s = cycle sort mode (was S)
- o = toggle sort direction (was O)
- S = save snapshot (was s)

Lowercase for frequent actions, uppercase for the less common save.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Contributor

@adiman9 adiman9 left a comment


Looks good!

@vimmotions vimmotions merged commit 486294b into main Mar 26, 2026
10 checks passed
@vimmotions vimmotions deleted the vim/hyp-184-cli-hs-follow-command-for-websocket-streams branch March 26, 2026 14:30