[feat] Extend queues #3814

Draft
mmabrouk wants to merge 17 commits into main from feature/annotation-queue-v2

Conversation

@mmabrouk
Member

Summary

Design workspace for the annotation queue v2 feature.

  • context.md — background, problem statement, goals
  • prd.md — product requirements, user stories, acceptance criteria
  • rfc.md — technical RFC with three solution approaches (A: extend eval runs, B: new domain, C: metadata-based) — recommends C for v1
  • research.md — analysis of the existing EvaluationQueue backend implementation
  • research-human-eval-implementation.md — how human evaluation works today end-to-end (frontend components, state, API calls, annotation storage, backend service logic)
  • competitive-analysis.md — competitor metadata-based approach analysis

Related

Linear PRD: https://linear.app/agenta/document/prd-annotation-queues-b80788a78c9a

- context.md: background, current state analysis, problem statement
- prd.md: product requirements with 4 capabilities, user stories, acceptance criteria
- research.md: analysis of current EvaluationQueue implementation
- rfc.md: technical RFC with two solution options (extend runs vs new domain)
- competitive-analysis.md: anonymized analysis of competitor's approach
- rfc.md: added Solution C using metadata-based queues (no new tables)
- Updated recommendation: Solution C for v1 (1-2 weeks vs 4-5 weeks)

Key insight: annotations and review status can be stored as metadata
on existing items, with queues as filtered views rather than entities.
@vercel
vercel bot commented Feb 24, 2026

The latest updates on your projects:

agenta-documentation: Ready (Preview, Comment), updated Feb 26, 2026 3:08pm (UTC)

Request Review

Proposes building annotation queues as a convenience layer over
existing EvaluationRun + EvaluationQueue entities. No new domain
entities — the problem is the interface, not the data model.

Covers: trace annotation, test set annotation, human+auto eval,
convenience API design, UI direction (view swap + inbox), phased
implementation plan.
**What happens behind the scenes:**

1. User selects traces in observability view, clicks "Send to review"
2. User picks evaluators (what to annotate) and optionally assigns people
Member Author

This is not the correct flow. The flow is that the user sends the trace to an annotation queue, or creates a new one; if they create a new one, they can configure it with evaluators, people, number of annotations, etc.

The convenience API auto-creates the data for that annotation queue (and also needs to delete it and its data when asked to)

Member Author

Updated the flow. Now it's:

  1. Select traces → "Send to annotation queue"
  2. Choose existing queue OR create new one (configure labels, assign people, set repeats)
  3. Convenience API auto-creates backing infrastructure

Also added deletion handling — the convenience API should clean up EvaluationQueue + Scenarios + Results + Run when a queue is deleted (but NOT the immutable OTel annotation spans).

- The run has no inputs (no testset, no query) — just annotation steps with `origin: "human"` for each evaluator
- One EvaluationScenario per selected trace, with the trace's `trace_id` stored as the invocation reference in the scenario (no separate invocation step needed)
- An EvaluationQueue linked to the run, with user assignments if specified
4. Annotator opens inbox → sees assigned traces → annotates → submits
Member Author

It would be alright to have the user see the annotation queues they are assigned to and not the inbox directly (which could go to v2)

Member Author

Agreed. Rewrote the section as "Annotation Queues Page" — a dedicated page showing queues assigned to the current user, with progress (X/Y done), labels, and source type. Each queue has an "Open" action.

The global inbox (flat list of pending items across queues) is explicitly marked as v2.

- `POST /preview/annotations/` creates the annotation OTel span (same as today)
- `PATCH /preview/evaluations/results/` links the annotation `trace_id` to the step result (same as today)
- The annotation is also visible on the trace span in observability (existing write-through via OTel links)

Member Author

How does the FE discover:

  • Which annotations have been done, and which are still open?
  • How far are we from done?

Can the user edit their annotation after it is done?

Member Author

Added a "Frontend: tracking progress and status" section addressing all three questions:

  1. Per-item status: Each EvaluationResult has a status (PENDING/COMPLETED). FE queries results for the queue's scenarios to determine done vs open.
  2. Overall progress: Count of COMPLETED results vs total (scenarios × annotation steps). Convenience API exposes this on the queue detail endpoint.
  3. Editing after completion: Yes. Re-submitting creates a new annotation OTel span (append-only) and updates the EvaluationResult.trace_id to the latest. Previous annotations are preserved.
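
The per-item and overall progress logic above could be sketched as follows. This is a minimal illustration; `Result`, its fields, and the function names are stand-ins, not the actual `EvaluationResult` model or convenience API.

```python
# Illustrative sketch of queue progress tracking, assuming each result
# carries a PENDING/COMPLETED status as described above.
from dataclasses import dataclass


@dataclass
class Result:
    scenario_id: str
    step_key: str
    status: str  # "PENDING" | "COMPLETED"


def queue_progress(
    results: list[Result], num_scenarios: int, num_annotation_steps: int
) -> tuple[int, int]:
    """Return (completed, total); total = scenarios x annotation steps."""
    completed = sum(1 for r in results if r.status == "COMPLETED")
    total = num_scenarios * num_annotation_steps
    return completed, total
```

The FE would render this pair as an "X/Y done" counter on the queue detail view.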

- `PATCH /preview/evaluations/results/` links the annotation `trace_id` to the step result (same as today)
- The annotation is also visible on the trace span in observability (existing write-through via OTel links)

**Key design choice: evaluations without inputs.** The run has no input steps. The trace being annotated is referenced as the invocation in the scenario. This requires backend support for runs where only invocation references exist (no testset inputs).
Member Author

What needs to change for this to be possible? Let's run a research pass, add a note in the appendix, and link it from here.

Member Author

Done! Added Appendix A: Evaluations Without Inputs — Technical Analysis to the RFC with detailed findings:

Good news: The data model already supports it — testcase_id and trace_id are both nullable, scenarios have no input fields.

What blocks it: 3 things:

  1. The start() gate requires query_steps or testset_steps — without either, the run is never dispatched
  2. The batch worker (evaluate_batch_testset) assumes testsets and would crash without them
  3. No trace-only worker exists

Recommended approach: Model a new worker after the live eval flow (evaluate_live_query) which already handles trace-based scenarios with testcase=None. Just swap the query-discovery step for a pre-provided list of trace_ids.

Estimate: ~2-3 days backend work.

**What happens behind the scenes:**

1. User opens a test set, clicks "Annotate" (or "Send to review")
2. User picks evaluators (what to annotate) — or defines them inline:
Member Author

From a UX perspective, it does not make sense for the user to pick evaluators here; instead they should specify which labels they want in the annotation (we would offer defaults like correct_answer, judge_guidelines). The FE or BE should create evaluators based on that.

Member Author

Agreed — updated the flow. The user now specifies labels ("What do you want annotators to provide?") with sensible defaults like correct_answer, quality_rating, judge_guidelines. The FE/BE auto-creates a human evaluator with a matching JSON schema behind the scenes.

The user never sees or picks "evaluators" — they think in terms of labels and fields.
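
The label-to-evaluator translation could look roughly like this. It is a sketch only: the label shape and the resulting JSON schema details are assumptions, not the shipped format.

```python
# Illustrative: turn user-facing label definitions into the JSON schema of an
# auto-created human evaluator. Label dicts and type names are assumptions.
def labels_to_schema(labels: list[dict]) -> dict:
    """labels like {"name": "quality_rating", "type": "integer", "min": 1, "max": 5}."""
    properties = {}
    for label in labels:
        if label["type"] == "integer":
            prop = {
                "type": "integer",
                "minimum": label.get("min", 1),
                "maximum": label.get("max", 5),
            }
        elif label["type"] == "boolean":
            prop = {"type": "boolean"}
        else:
            prop = {"type": "string"}  # default: free-text label
        properties[label["name"]] = prop
    return {"type": "object", "properties": properties, "required": list(properties)}
```

The generated schema would then drive the annotation form, so annotators only ever see label fields.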

**Key design choice: annotating ≠ modifying the test set.** The annotation step creates annotation traces (OTel spans). These reference the test cases but don't modify them. Writing back to the test set is a separate, explicit action that creates a new revision. This preserves test case immutability and versioning.

**Constraint:** Test cases are immutable today — changing content creates new IDs, and changes only stick when attached to a revision. The write-back step must respect this by creating a new revision, not mutating existing test cases.

Member Author

Given that we will be creating a queue for each test set review. What happens when they are done, do we show them somewhere as finished? or hide them

Member Author

Good question. Proposed approach:

Queue states: Active → Completed → (optionally) Archived

  • Active: Has pending items. Shown prominently on the Annotation Queues page.
  • Completed: All items annotated (100% progress). Still visible on the page but visually de-emphasized (e.g., grayed out, moved to a "Completed" section/tab).
  • Archived: User explicitly archives. Hidden from default view but accessible via a filter.

The state is derived from progress (not a manual status toggle): a queue is "completed" when all scenarios × annotation steps have COMPLETED results. No lifecycle state machine needed — it's just a computed property.

For test set annotation queues specifically: the queue shows as completed, and the user can then click "Write back to test set" to create a new revision. The queue itself stays visible as a record of the annotation work done.

Will add this to the RFC's Key Design Decisions section.
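
The derived-state rule can be captured in a few lines (illustrative only; names are not the real code):

```python
# Queue state is a computed property, not a stored lifecycle state machine:
# "completed" is derived from progress, "archived" is the one explicit flag.
def queue_state(completed: int, total: int, archived: bool = False) -> str:
    if archived:
        return "archived"
    return "completed" if total > 0 and completed == total else "active"
```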

- An EvaluationQueue with optional user assignments
4. Annotator works through rows → fills in labels → submits
5. On submit: same annotation creation + result linking as today
6. **Write-back step** (separate action): User clicks "Save annotations to test set" → creates a new test set revision with annotation values as new columns
Member Author

Who clicks on that? Is it the annotator? What if there are many? Is it part of the configuration of the queue so that it happens automatically on submit?

Would it be possible to have this same action for annotation queues for traces? Adding the annotated traces as test cases? [competitor offer a UX for this]

Member Author

Three questions answered:

1. Who clicks write-back?
The queue creator/admin, not the annotator. When the queue is completed (or partially done), the admin goes to the queue detail view and clicks "Write back to test set." This is NOT per-annotator — it's a one-time action that collects all annotations and creates a new test set revision.

2. Can it be automatic?
Could be a queue config option: on_complete: "auto_write_back". When the last item is annotated, automatically create the new revision. But this adds complexity — for v1, let's keep it manual (explicit button). Can revisit for v2.

3. Can annotated traces be saved as test cases?
Yes! This is a great feature (competitor does offer this). The flow would be:

  • User annotates traces in a queue
  • On completion, clicks "Save as test set" → creates a new test set where each trace becomes a test case row
  • The trace's inputs become test case inputs, the trace's outputs become expected outputs, and the annotations become additional columns

This is essentially the same write-back mechanism but creating a new test set instead of updating an existing one. The convenience API could support both: `write_back_target: "existing_testset" | "new_testset"`.

Will add this to the RFC.

- "What do you want to add?" → "Expected answer (text), Difficulty (1-5)"
- Behind the scenes: a human evaluator is auto-created with a JSON schema matching these fields
3. **Convenience API** auto-creates:
- An EvaluationRun linked to the test set revision, with annotation steps for each evaluator
Member Author

Is this feasible with today's evaluation run interface? Would that break something?

Member Author

Yes, this is feasible with today's interface. The EvaluationRunData stores steps as JSONB, and testset input steps already reference testset_revision in the step references. Creating an EvaluationRun linked to a testset revision works today — the batch worker (evaluate_batch_testset) already fetches testcases from the revision and creates scenarios.

The part that doesn't exist is creating such a run purely for annotation (no auto evaluators, no app invocation). That requires the changes described in Appendix A — mainly the start() gate and a trace-less worker path.


**What happens behind the scenes:**

1. User creates evaluation run with both auto and human evaluators (existing flow)
Member Author

How can we make the flow clear? The UX obvious and the feature discoverable?

Member Author

Updated the RFC with concrete discoverability mechanisms:

  1. Eval run detail view: Shows a prominent banner when human annotation is pending — "This evaluation has X human annotation tasks. [Go to annotation queue]"
  2. Annotation Queues page: Lists all queues including ones auto-created from eval runs, with the run name as context
  3. Eval run list: Runs with pending human annotation should show a distinct status (not just ERRORS as today)

The key insight: the orchestrator should handle mixed runs natively (skip human steps, seed as PENDING, create queue). The user doesn't need to do anything special — creating an eval run with human evaluators automatically surfaces annotation tasks.

**What happens behind the scenes:**

1. User creates evaluation run with both auto and human evaluators (existing flow)
2. Auto evaluators execute immediately (existing flow)
Member Author

What happens when an evaluation run is created as of now (using the automatic evaluation orchestrator) with human evaluator steps? Does it get stuck? Does it ignore them?

Member Author

Researched this thoroughly. It fails.

When the orchestrator encounters human evaluator steps, it attempts to invoke them via workflows_service.invoke_workflow(). Human evaluators have no registered handler (no uri — only a JSON schema in data.service.format), so the invocation raises InvalidInterfaceURIV0Error.

The result:

  • Human evaluator steps are recorded with status=FAILURE and an error payload
  • The scenario is marked as ERRORS
  • The run is marked as ERRORS (not fully failed since auto evaluators may have succeeded)
  • No EvaluationQueue is created — queue creation is not part of the orchestration pipeline at all
  • There are zero is_human checks anywhere in the evaluation orchestration code

So for mixed auto+human runs, the auto evaluators succeed and the human ones fail. The has_human flag is computed and stored on run flags, but never acted upon.

What's needed: The orchestrator should check step.origin == "human" before invoking, skip invocation for human steps, seed their results as PENDING, and optionally create an EvaluationQueue for those steps.
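
The proposed change can be sketched as follows. All names (`invoke_workflow`, the step shape) are stand-ins for the real orchestrator internals, not the actual API.

```python
# Hedged sketch: check step origin before invoking, skip invocation for
# human steps, seed their results as PENDING, and report whether the run
# needs an EvaluationQueue.
def process_steps(steps, invoke_workflow):
    results = []
    has_human = False
    for step in steps:
        if step.origin == "human":
            has_human = True  # later triggers EvaluationQueue creation
            results.append({"step": step.key, "status": "PENDING"})
        else:
            results.append({"step": step.key, "status": invoke_workflow(step)})
    return results, has_human
```

This replaces today's behavior, where human steps are invoked like auto evaluators and fail with `InvalidInterfaceURIV0Error`.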


1. User creates evaluation run with both auto and human evaluators (existing flow)
2. Auto evaluators execute immediately (existing flow)
3. Human evaluator steps → an EvaluationQueue is auto-created for the run
Member Author

The evaluation run orchestrator is responsible for creating the steps. Does this mean the convenience API would schedule an evaluation run and then wait for the steps to be created, so that it can create an evaluation queue referencing the human evaluation steps related to this run? Or do we need to change the implementation of the automatic evaluation orchestration to include this itself (basically: it creates the steps, finds the human evaluators, and then creates the queue based on them)?

Side note: how does the online eval work in this case? Maybe there is something to learn from its structure.

Member Author

Great question. Updated the RFC — the evaluation orchestrator creates the queue, not the convenience API. Here's the reasoning:

The orchestrator should be modified to:

  1. Check step.origin == "human" before invoking
  2. Skip invocation for human steps, seed results as PENDING
  3. After processing all steps, detect has_human and create the EvaluationQueue

This is cleaner than having the convenience API schedule a run and wait because:

  • The orchestrator already knows which steps are human and which scenarios exist
  • No polling/waiting needed
  • Follows the structure of the online eval flow

Re: online eval structure — good call. The live eval worker (evaluate_live_query in live.py) is relevant: it creates scenarios from traces (not testcases), fetches trace data, and runs evaluators per scenario. The trace-only annotation worker should be modeled after this pattern. The live flow already handles testcase=None, inputs=None — exactly what we need for annotation-only runs.
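
A minimal sketch of such a trace-batch worker, with every collaborator passed in as a stand-in (none of these names are the real worker API):

```python
# Illustrative: a trace-batch worker modeled on the live eval pattern, but
# consuming a pre-provided list of trace_ids instead of a query-discovery
# step. One scenario per trace, testcase=None, human results seeded PENDING.
def evaluate_trace_batch(trace_ids, create_scenario, seed_pending_result, annotation_steps):
    scenarios = []
    for trace_id in trace_ids:
        # the trace is the invocation reference; there is no testcase input
        scenario = create_scenario(trace_id=trace_id, testcase=None)
        for step in annotation_steps:
            seed_pending_result(scenario, step)  # human steps await annotation
        scenarios.append(scenario)
    return scenarios
```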

```
],

// Items to annotate (one of these)
"source": {
```
Member Author

This is a very weird design imo. Why is it needed?

For instance, why is the run_id needed? It seems this layer is not needed for run_id, since the orchestration layer for evaluation would probably deal with creating the annotation queue when there are human evaluations (cc @junaway @jp-agenta, question mark).

Very likely the only thing that is needed is whether the source is traces or test cases.
In the case of a testset, we need a revision id to save back (although if we do it case by case, we could save a new version of the test case from where it came [not sure a test case id tells us where it is from; probably not]).

The trace_ids for initialization make sense as optional.

Member Author

You're right — completely reworked the API design.

Dropped run_id as a source type. For eval runs, the orchestrator creates the queue directly. The convenience API only handles the two explicit cases: traces and testset.

Source types are now just traces and testset. The trace_ids field is optional (for initialization — you can add items later).

Also changed from evaluators to labels as the primary interface — users define what they want ("correctness: boolean", "quality: 1-5") and we auto-create evaluators behind the scenes. Can also reference existing evaluator slugs for power users.
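
Under the reworked design, a queue-creation request could look like the following sketch. The endpoint shape and every field name here are illustrative, not final:

```
POST /preview/annotation-queues/
{
  "name": "Review checkout traces",
  "source": {
    "type": "traces",
    "trace_ids": ["trace-1", "trace-2"]
  },
  "labels": [
    {"name": "correctness", "type": "boolean"},
    {"name": "quality", "type": "integer", "min": 1, "max": 5}
  ]
}
```

Here `source.type` is one of `traces` or `testset`, `trace_ids` is the optional initialization, and each label drives the schema of the auto-created human evaluator.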

```
POST /preview/annotation-queues/{queue_id}/items
{
  "trace_ids": ["new-trace-1", "new-trace-2"]
}
```
Member Author

What if the items are test cases? Do we validate?

Member Author

Updated the "Add Items" endpoint to handle both traces and test cases. For testset-sourced queues, you can add specific testcase_ids.

Validation: Yes, the endpoint validates that the item type matches the queue's source type. Can't add traces to a testset-sourced queue or vice versa.
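
The validation rule could be as simple as (illustrative names only):

```python
# Sketch of the add-items check: the item type must match the queue's
# source type. Raising on mismatch mirrors the behavior described above.
def validate_add_items(queue_source: str, trace_ids=None, testcase_ids=None):
    if queue_source == "traces" and testcase_ids:
        raise ValueError("Cannot add test cases to a trace-sourced queue")
    if queue_source == "testset" and trace_ids:
        raise ValueError("Cannot add traces to a testset-sourced queue")
```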

### Write Back to Test Set

```
POST /preview/annotation-queues/{queue_id}/write-back
```
Member Author

very weird

Member Author

Agreed — completely reworked. Renamed to /export and it now supports two targets:

  1. testset_revision: For testset-sourced queues. Creates a new revision with annotation values as new columns. User provides a column_mapping to control naming.
  2. new_testset: For trace-sourced queues. Creates a new test set from annotated traces (trace inputs → test case inputs, annotations → columns). This is the "save annotated traces as test cases" feature.

The export is triggered by the queue admin, not individual annotators.
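
A rough sketch of how the two export targets could share one mechanism. All names and data shapes here are assumptions for illustration:

```python
# Illustrative: collect annotated items into rows, mapping annotation labels
# to column names, then dispatch on the export target described above.
def export_queue(queue: dict, target: str, column_mapping: dict[str, str]) -> dict:
    rows = []
    for item in queue["items"]:
        row = dict(item["inputs"])  # trace or testcase inputs become row fields
        for label, value in item["annotations"].items():
            row[column_mapping.get(label, label)] = value  # annotations -> columns
        rows.append(row)
    if target == "testset_revision":
        return {"action": "new_revision", "testset_id": queue["testset_id"], "rows": rows}
    if target == "new_testset":
        return {"action": "create_testset", "rows": rows}
    raise ValueError(f"unknown export target: {target}")
```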


Creates a new test set revision with annotation values as new columns.

---
Member Author

There is missing discussion about the frontend and how it would interact with this in every place

Member Author

Rewrote the entire UI section with explicit frontend interaction details for each use case:

  1. Observability: Select traces → "Send to annotation queue" modal → POST to convenience API → queue appears on Annotation Queues page
  2. Test set: "Send to annotation queue" button → configure labels → POST to convenience API → queue appears → on completion, admin exports to test set
  3. Eval run: Orchestrator auto-creates queue → banner on eval run detail: "This evaluation has human annotation tasks" → queue appears on page → results flow back to eval table

Also specified that the annotation view is the same for all source types — one view, multiple data types.

Member Author

This is not the only thing I meant. What I'm after is: how does the annotation view work right now (in human evaluation) vs. how would it work here, and what needs to change?

Member Author

Added Appendix B: Annotation View — Current State vs Proposed Changes to the RFC with a thorough comparison.

How it works today (2 separate contexts):

  1. Trace drawer (AnnotateDrawer): 400px side drawer with ONLY the annotation form. No inputs/outputs shown. User selects evaluators manually. Ad-hoc annotations.

  2. Eval run focus view (SingleScenarioViewerPOC + ScenarioAnnotationPanel): Full-page layout — inputs+outputs on left (7/12), annotation panel on right (5/12). Has prev/next navigation via ScenarioNavigator. This is the closest to what we need.

What does NOT exist today:

  • No assignment (all scenarios shown to all users, no queue API calls)
  • No progress tracking (no X/Y counter anywhere)
  • No auto-advance after annotation
  • No unified view for traces + testcases (separate code paths)

What needs to change (built on top of Focus View):

  1. New queue page + queue list page (~2 days)
  2. Adapt Focus View for queue context — fetch assigned items, render trace data or testcase data uniformly (~3 days)
  3. Assignment integration (~1 day)
  4. Progress tracking (~1 day)
  5. Navigation improvements: auto-advance, keyboard shortcuts (~1 day)

Total: ~8 days FE work


### Annotation Mode (View Swap)

Instead of a separate "annotation queue" page, the annotation experience lives **inside existing views**. The user switches to "annotation mode" on the current view:
Member Author

This is mostly orthogonal to this RFC. Having an annotation mode view does not require annotation queues since it local / main user only and stateless.

Member Author

Agreed — moved annotation mode (view swap) to a separate note explicitly marked as orthogonal and out of scope for this RFC. It's local/stateless and doesn't require queue infrastructure. Could be built in parallel as an independent feature.

- Same as today but with actual assignment from the queue
- Annotator only sees their assigned scenarios

This approach avoids creating a separate "annotation queue" page. The queue is a background concept — the user works inside the views they already know.
Member Author

We will have, and need to have, an annotation queue page.

Member Author

Agreed! Rewrote the section — it's now a dedicated "Annotation Queues" page in the sidebar nav, listing all queues assigned to the user with progress, labels, and source info. This is a first-class page, not a hidden concept.


Clicking "Open" navigates to the appropriate view (observability for trace queues, test set view for test set queues, eval run details for eval queues) in annotation mode, filtered to the user's assigned items.
Member Author

No, the annotation view is the same for all three. It is an extension of what we have now, just allowing multiple types of data to be shown.

Member Author

Agreed — updated. The annotation view is now explicitly described as the same for all three source types. It's an extension of the existing annotation drawer/eval table that renders trace data and testcase data uniformly (both have inputs/outputs). The annotation form on the side is always the same, driven by the evaluator's JSON schema.

One view, multiple data types.


## Open Questions

1. **Evaluations without inputs:** How much backend work is needed to support runs with no input steps? Are there assumptions in the scenario/result seeding that require inputs?
Member Author

Let's research this and answer.

Member Author

Researched and answered — see Appendix A in the RFC.

TL;DR: The data model is ready (all fields nullable), but 3 blocking changes are needed: relax the start() gate, add trace_ids to SimpleEvaluationData, and implement a new trace-batch worker modeled after the existing live eval flow. ~2-3 days of backend work. No DB migrations needed.

Major changes based on review feedback:
- Fix Use Case 1 flow: queue-first (send to queue), not evaluator-first
- Add deletion handling for annotation queues
- Add FE progress/status tracking details (per-item, overall, editing)
- Add Appendix A: technical analysis of evaluations without inputs
- Rewrite API: drop run_id source, labels-first (not evaluators), /export endpoint
- Rewrite UI: dedicated Annotation Queues page, unified annotation view for all sources
- Fix Use Case 3: orchestrator creates queue, documents current broken behavior
- Separate annotation mode (orthogonal) from annotation queues
- Add queue lifecycle (Active → Completed → Archived)
- Add traces→testcases export capability
**Key design choice: annotating ≠ modifying the test set.** The annotation step creates annotation traces (OTel spans). These reference the test cases but don't modify them. Writing back to the test set is a separate, explicit action that creates a new revision. This preserves test case immutability and versioning.

**Constraint:** Test cases are immutable today — changing content creates new IDs, and changes only stick when attached to a revision. The write-back step must respect this by creating a new revision, not mutating existing test cases.

Member Author

The evaluation run orchestrator is responsible for creating the steps. Does this mean the convenience API would schedule an evaluation run and then wait for the steps to be created, so that it can create an evaluation queue referencing the human evaluation steps related to this run? Or do we need to change the implementation of the automatic evaluation orchestration to include this itself (basically: it creates the steps, finds the human evaluators, and then creates the queue based on them)?

Side note: how does the online eval work in this case? Maybe there is something to learn from its structure.

Member Author

This is addressed right below this line in the updated text (lines 152-155):

Who creates the queue: The evaluation orchestrator itself. After processing all steps, it detects has_human steps and creates the EvaluationQueue as part of the run lifecycle.

The answer is option 2: we change the orchestrator to handle this natively. It:

  1. Checks step.origin == "human" before invoking
  2. Skips invocation, seeds results as PENDING
  3. Creates the EvaluationQueue after processing all steps

The convenience API is NOT involved for the eval run case. No scheduling + waiting.

Re: online eval — the live eval worker (evaluate_live_query in live.py) is the closest structural model. It creates scenarios from traces and runs evaluators per-scenario. Our new trace-batch worker should follow this pattern.


**Queue detail / annotation view:** When the user clicks "Open" on a queue, they enter the **annotation view**. This view is **the same regardless of source type** (traces, testset, or eval run). It is an extension of the existing annotation drawer/eval table, adapted to show:

- A table of items to annotate (traces or test cases)
Member Author

No, the annotation view is the same for all three. It is an extension of what we have now, just allowing multiple types of data to be shown.

Member Author

Agreed and already reflected in the RFC text at this exact section. The annotation view is described as one view, multiple data types — it renders trace data and testcase data uniformly through the same layout (inputs/outputs on left, annotation form on right).

Added Appendix B with detailed analysis of how this extends from the existing Focus View (SingleScenarioViewerPOC).

Detailed comparison of how the annotation view works today (trace drawer,
eval run focus view, eval run table drawer) vs what needs to change for
annotation queues. Includes component hierarchy, state management, API
calls, and FE work estimate (~8 days).
@jp-agenta changed the title from "docs: annotation queue v2 design documents" to "[feat] Extend queues" on Feb 26, 2026