Conversation
- context.md: background, current state analysis, problem statement
- prd.md: product requirements with 4 capabilities, user stories, acceptance criteria
- research.md: analysis of current EvaluationQueue implementation
- rfc.md: technical RFC with two solution options (extend runs vs new domain)
- competitive-analysis.md: anonymized analysis of competitor's approach
- rfc.md: added Solution C using metadata-based queues (no new tables)
- Updated recommendation: Solution C for v1 (1-2 weeks vs 4-5 weeks)

Key insight: annotations and review status can be stored as metadata on existing items, with queues as filtered views rather than entities.
Proposes building annotation queues as a convenience layer over existing EvaluationRun + EvaluationQueue entities. No new domain entities — the problem is the interface, not the data model. Covers: trace annotation, test set annotation, human+auto eval, convenience API design, UI direction (view swap + inbox), phased implementation plan.
> **What happens behind the scenes:**
>
> 1. User selects traces in observability view, clicks "Send to review"
> 2. User picks evaluators (what to annotate) and optionally assigns people
This is not the correct flow. The flow is that the user sends the trace to an annotation queue or creates a new one; if they create a new one, they can configure it with evaluators, people, number of annotations, etc.
The convenience API auto-creates the data for that annotation queue (and also needs to delete it and its data when asked to).
Updated the flow. Now it's:
- Select traces → "Send to annotation queue"
- Choose existing queue OR create new one (configure labels, assign people, set repeats)
- Convenience API auto-creates backing infrastructure
Also added deletion handling — the convenience API should clean up EvaluationQueue + Scenarios + Results + Run when a queue is deleted (but NOT the immutable OTel annotation spans).
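The deletion cascade above can be sketched against a hypothetical in-memory store (the real repository/ORM layer is not shown in this thread, so all names here are illustrative):

```python
# Sketch of annotation-queue deletion: remove the EvaluationQueue, its
# backing EvaluationRun, Scenarios, and Results — but never the immutable
# OTel annotation spans.

def delete_annotation_queue(store: dict, queue_id: str) -> None:
    queue = store["queues"].pop(queue_id)              # EvaluationQueue row
    run_id = queue["run_id"]
    store["runs"].pop(run_id, None)                    # backing EvaluationRun
    store["scenarios"] = [s for s in store["scenarios"] if s["run_id"] != run_id]
    store["results"] = [r for r in store["results"] if r["run_id"] != run_id]
    # store["otel_spans"] is intentionally untouched: annotation spans are
    # append-only and must survive queue deletion.
```

The one design point the sketch encodes is the last line: everything queue-scoped cascades, while the OTel spans stay.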
> - The run has no inputs (no testset, no query) — just annotation steps with `origin: "human"` for each evaluator
> - One EvaluationScenario per selected trace, with the trace's `trace_id` stored as the invocation reference in the scenario (no separate invocation step needed)
> - An EvaluationQueue linked to the run, with user assignments if specified
> 4. Annotator opens inbox → sees assigned traces → annotates → submits
It would be alright to have the user see the annotation queues they are assigned to and not the inbox directly (which could go to v2)
Agreed. Rewrote the section as "Annotation Queues Page" — a dedicated page showing queues assigned to the current user, with progress (X/Y done), labels, and source type. Each queue has an "Open" action.
The global inbox (flat list of pending items across queues) is explicitly marked as v2.
> - `POST /preview/annotations/` creates the annotation OTel span (same as today)
> - `PATCH /preview/evaluations/results/` links the annotation `trace_id` to the step result (same as today)
> - The annotation is also visible on the trace span in observability (existing write-through via OTel links)
How does the FE discover:
- Which annotations have been done, and which are still open?
- How far are we from done?
Can the user edit their annotation after it is done?
Added a "Frontend: tracking progress and status" section addressing all three questions:
- Per-item status: Each EvaluationResult has a status (PENDING/COMPLETED). FE queries results for the queue's scenarios to determine done vs open.
- Overall progress: Count of COMPLETED results vs total (scenarios × annotation steps). Convenience API exposes this on the queue detail endpoint.
- Editing after completion: Yes. Re-submitting creates a new annotation OTel span (append-only) and updates the EvaluationResult.trace_id to the latest. Previous annotations are preserved.
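The status/progress logic above can be sketched as follows (field names like `status` and `trace_id` follow this thread but are not a final API):

```python
# FE-facing progress and edit semantics for an annotation queue.

def queue_progress(scenarios: list, annotation_steps: list, results: list):
    """Overall progress: COMPLETED results vs scenarios x annotation steps."""
    total = len(scenarios) * len(annotation_steps)
    done = sum(1 for r in results if r["status"] == "COMPLETED")
    return done, total

def resubmit(result: dict, new_span_trace_id: str) -> dict:
    """Editing after completion: annotation spans are append-only, so a
    re-submission creates a new span and repoints the result at it."""
    return {**result, "trace_id": new_span_trace_id, "status": "COMPLETED"}
```

Note that `resubmit` returns a new result rather than mutating, mirroring the append-only span model: the previous annotation span remains queryable even though the result now points at the latest one.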
> - `PATCH /preview/evaluations/results/` links the annotation `trace_id` to the step result (same as today)
> - The annotation is also visible on the trace span in observability (existing write-through via OTel links)
>
> **Key design choice: evaluations without inputs.** The run has no input steps. The trace being annotated is referenced as the invocation in the scenario. This requires backend support for runs where only invocation references exist (no testset inputs).
What needs to change for this to be possible? Let's run some research, add a note in the appendix, and link it from here.
Done! Added Appendix A: Evaluations Without Inputs — Technical Analysis to the RFC with detailed findings:
Good news: The data model already supports it — `testcase_id` and `trace_id` are both nullable, and scenarios have no input fields.
What blocks it: 3 things:
- The `start()` gate requires `query_steps or testset_steps` — without either, the run is never dispatched
- The batch worker (`evaluate_batch_testset`) assumes testsets and would crash without them
- No trace-only worker exists
Recommended approach: Model a new worker after the live eval flow (`evaluate_live_query`), which already handles trace-based scenarios with `testcase=None`. Just swap the query-discovery step for a pre-provided list of `trace_ids`.
Estimate: ~2-3 days backend work.
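The two backend changes above can be sketched with illustrative names (the real `start()` gate and worker signatures may differ):

```python
# Relaxed start() gate plus a trace-batch scenario seeder, modeled on the
# live eval flow (names are illustrative, not the actual codebase).

def can_start(run: dict) -> bool:
    """Allow runs whose only 'inputs' are a pre-provided list of trace_ids."""
    has_inputs = bool(run.get("query_steps")) or bool(run.get("testset_steps"))
    return has_inputs or bool(run.get("trace_ids"))

def seed_trace_scenarios(run: dict) -> list:
    """One scenario per trace, with testcase=None — the shape the live
    eval worker already handles."""
    return [
        {"run_id": run["id"], "testcase_id": None, "trace_id": trace_id}
        for trace_id in run.get("trace_ids", [])
    ]
```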
> **What happens behind the scenes:**
>
> 1. User opens a test set, clicks "Annotate" (or "Send to review")
> 2. User picks evaluators (what to annotate) — or defines them inline:
From a UX perspective, it does not make sense for the user to pick evaluators here; instead they should specify which labels they want in the annotation (we would offer defaults like `correct_answer`, `judge_guidelines`). The FE or BE should create evaluators based on that.
Agreed — updated the flow. The user now specifies labels ("What do you want annotators to provide?") with sensible defaults like correct_answer, quality_rating, judge_guidelines. The FE/BE auto-creates a human evaluator with a matching JSON schema behind the scenes.
The user never sees or picks "evaluators" — they think in terms of labels and fields.
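The labels-to-schema conversion can be sketched as below; the label types and defaults here are illustrative, not the final mapping:

```python
# Auto-create the human evaluator's JSON schema from user-picked labels.

def schema_from_labels(labels: list) -> dict:
    props = {}
    for label in labels:
        if label["type"] == "rating":
            props[label["key"]] = {
                "type": "integer",
                "minimum": label.get("min", 1),
                "maximum": label.get("max", 5),
            }
        else:  # e.g. "string", "boolean"
            props[label["key"]] = {"type": label["type"]}
    return {"type": "object", "properties": props, "required": list(props)}
```

The annotation form on the FE side can then be rendered directly from this schema, which is why the user never needs to see the evaluator concept.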
> **Key design choice: annotating ≠ modifying the test set.** The annotation step creates annotation traces (OTel spans). These reference the test cases but don't modify them. Writing back to the test set is a separate, explicit action that creates a new revision. This preserves test case immutability and versioning.
>
> **Constraint:** Test cases are immutable today — changing content creates new IDs, and changes only stick when attached to a revision. The write-back step must respect this by creating a new revision, not mutating existing test cases.
Given that we will be creating a queue for each test set review: what happens when they are done? Do we show them somewhere as finished, or hide them?
Good question. Proposed approach:
Queue states: Active → Completed → (optionally) Archived
- Active: Has pending items. Shown prominently on the Annotation Queues page.
- Completed: All items annotated (100% progress). Still visible on the page but visually de-emphasized (e.g., grayed out, moved to a "Completed" section/tab).
- Archived: User explicitly archives. Hidden from default view but accessible via a filter.
The state is derived from progress (not a manual status toggle): a queue is "completed" when all scenarios × annotation steps have COMPLETED results. No lifecycle state machine needed — it's just a computed property.
For test set annotation queues specifically: the queue shows as completed, and the user can then click "Write back to test set" to create a new revision. The queue itself stays visible as a record of the annotation work done.
Will add this to the RFC's Key Design Decisions section.
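Since the state is derived rather than stored, it reduces to a small pure function — a sketch, with `archived` as the only persisted bit:

```python
# Queue lifecycle as a computed property: Active -> Completed -> Archived.

def queue_state(done: int, total: int, archived: bool = False) -> str:
    if archived:
        return "archived"
    return "completed" if total > 0 and done >= total else "active"
```

An empty queue (no items yet) deliberately reads as "active" here; whether that is the right call is a product decision, not something this thread settles.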
> - An EvaluationQueue with optional user assignments
> 4. Annotator works through rows → fills in labels → submits
> 5. On submit: same annotation creation + result linking as today
> 6. **Write-back step** (separate action): User clicks "Save annotations to test set" → creates a new test set revision with annotation values as new columns
Who clicks on that? Is it the annotator? What if there are many? Is it part of the queue's configuration, so that it happens automatically on submit?
Would it be possible to have this same action for trace annotation queues — adding the annotated traces as test cases? [competitor offers a UX for this]
Three questions answered:
1. Who clicks write-back?
The queue creator/admin, not the annotator. When the queue is completed (or partially done), the admin goes to the queue detail view and clicks "Write back to test set." This is NOT per-annotator — it's a one-time action that collects all annotations and creates a new test set revision.
2. Can it be automatic?
Could be a queue config option: on_complete: "auto_write_back". When the last item is annotated, automatically create the new revision. But this adds complexity — for v1, let's keep it manual (explicit button). Can revisit for v2.
3. Can annotated traces be saved as test cases?
Yes! This is a great feature (competitor does offer this). The flow would be:
- User annotates traces in a queue
- On completion, clicks "Save as test set" → creates a new test set where each trace becomes a test case row
- The trace's inputs become test case inputs, the trace's outputs become expected outputs, and the annotations become additional columns
This is essentially the same write-back mechanism but creating a new test set instead of updating an existing one. The convenience API could support both: `write_back_target: "existing_testset" | "new_testset"`.
Will add this to the RFC.
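The traces-to-test-cases mapping described above can be sketched as a row transform; the trace and annotation shapes here are assumptions for illustration:

```python
# Trace inputs -> test case inputs, trace outputs -> expected output,
# annotations -> additional columns.

def trace_to_testcase(trace: dict, annotations: dict) -> dict:
    row = dict(trace["inputs"])              # e.g. {"question": "..."}
    row["expected_output"] = trace["outputs"]
    row.update(annotations)                  # e.g. {"quality_rating": 4}
    return row
```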
> - "What do you want to add?" → "Expected answer (text), Difficulty (1-5)"
> - Behind the scenes: a human evaluator is auto-created with a JSON schema matching these fields
> 3. **Convenience API** auto-creates:
> - An EvaluationRun linked to the test set revision, with annotation steps for each evaluator
Is this feasible with today's evaluation run interface? Would it break something?
Yes, this is feasible with today's interface. The EvaluationRunData stores steps as JSONB, and testset input steps already reference testset_revision in the step references. Creating an EvaluationRun linked to a testset revision works today — the batch worker (evaluate_batch_testset) already fetches testcases from the revision and creates scenarios.
The part that doesn't exist is creating such a run purely for annotation (no auto evaluators, no app invocation). That requires the changes described in Appendix A — mainly the start() gate and a trace-less worker path.
> **What happens behind the scenes:**
>
> 1. User creates evaluation run with both auto and human evaluators (existing flow)
How can we make the flow clear? The UX obvious and the feature discoverable?
Updated the RFC with concrete discoverability mechanisms:
- Eval run detail view: Shows a prominent banner when human annotation is pending — "This evaluation has X human annotation tasks. [Go to annotation queue]"
- Annotation Queues page: Lists all queues including ones auto-created from eval runs, with the run name as context
- Eval run list: Runs with pending human annotation should show a distinct status (not just ERRORS as today)
The key insight: the orchestrator should handle mixed runs natively (skip human steps, seed as PENDING, create queue). The user doesn't need to do anything special — creating an eval run with human evaluators automatically surfaces annotation tasks.
> **What happens behind the scenes:**
>
> 1. User creates evaluation run with both auto and human evaluators (existing flow)
> 2. Auto evaluators execute immediately (existing flow)
What happens when an evaluation run is created as of now (using the automatic evaluation orchestrator) with human evaluator steps? Does it get stuck? Does it ignore them?
Researched this thoroughly. It fails.
When the orchestrator encounters human evaluator steps, it attempts to invoke them via workflows_service.invoke_workflow(). Human evaluators have no registered handler (no uri — only a JSON schema in data.service.format), so the invocation raises InvalidInterfaceURIV0Error.
The result:
- Human evaluator steps are recorded with `status=FAILURE` and an error payload
- The scenario is marked as `ERRORS`
- The run is marked as `ERRORS` (not fully failed, since auto evaluators may have succeeded)
- No `EvaluationQueue` is created — queue creation is not part of the orchestration pipeline at all
- There are zero `is_human` checks anywhere in the evaluation orchestration code
So for mixed auto+human runs, the auto evaluators succeed and the human ones fail. The has_human flag is computed and stored on run flags, but never acted upon.
What's needed: The orchestrator should check step.origin == "human" before invoking, skip invocation for human steps, seed their results as PENDING, and optionally create an EvaluationQueue for those steps.
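That branching can be sketched as follows; `process_step` and its result shape are illustrative names, not the real orchestrator code:

```python
# Branch on step origin instead of blindly invoking (which today raises
# InvalidInterfaceURIV0Error for human evaluators).

def process_step(step: dict, invoke) -> dict:
    if step.get("origin") == "human":
        # Skip invocation; seed a PENDING result for the annotation queue.
        return {"step": step["key"], "status": "PENDING"}
    try:
        value = invoke(step)
        return {"step": step["key"], "status": "SUCCESS", "value": value}
    except Exception as exc:
        return {"step": step["key"], "status": "FAILURE", "error": str(exc)}
```

With this in place, a mixed auto+human run yields SUCCESS/FAILURE for auto steps and PENDING for human steps, instead of today's hard failures.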
> 1. User creates evaluation run with both auto and human evaluators (existing flow)
> 2. Auto evaluators execute immediately (existing flow)
> 3. Human evaluator steps → an EvaluationQueue is auto-created for the run
The evaluation run orchestrator is responsible for creating the steps. Does this mean the convenience API would schedule an evaluation run and then wait for the steps to be created, so that it can create an evaluation queue referencing the human evaluation steps related to it?
Or do we need to change the automatic evaluation orchestration to include this itself (basically it creates the steps, then finds the human evaluators, then creates the queue based on them)?
Side note: how does the online eval work in this case? Maybe there is something to learn from its structure.
Great question. Updated the RFC — the evaluation orchestrator creates the queue, not the convenience API. Here's the reasoning:
The orchestrator should be modified to:
- Check `step.origin == "human"` before invoking
- Skip invocation for human steps, seed results as PENDING
- After processing all steps, detect `has_human` and create the EvaluationQueue
This is cleaner than having the convenience API schedule a run and wait because:
- The orchestrator already knows which steps are human and which scenarios exist
- No polling/waiting needed
- Follows the structure of the online eval flow
Re: online eval structure — good call. The live eval worker (evaluate_live_query in live.py) is relevant: it creates scenarios from traces (not testcases), fetches trace data, and runs evaluators per scenario. The trace-only annotation worker should be modeled after this pattern. The live flow already handles testcase=None, inputs=None — exactly what we need for annotation-only runs.
> ],
>
> // Items to annotate (one of these)
> "source": {
This is a very weird design imo. Why is it needed?
For instance, why is the run_id needed? It seems this layer is not needed for run_id, since the orchestration layer for evaluation would probably deal with creating the annotation queue in case there are human evaluators (cc @junaway @jp-agenta, question mark).
Very likely the only thing that is needed is whether the source is traces or test cases.
In the case of a testset, we need a revision id to save back (although if we do it case by case, then we can save a new version of that test case from where it comes from [not sure if a test case id tells us where it is from, probably not]).
The trace_ids for initialization make sense as optional.
You're right — completely reworked the API design.
Dropped run_id as a source type. For eval runs, the orchestrator creates the queue directly. The convenience API only handles the two explicit cases: traces and testset.
Source types are now just traces and testset. The trace_ids field is optional (for initialization — you can add items later).
Also changed from evaluators to labels as the primary interface — users define what they want ("correctness: boolean", "quality: 1-5") and we auto-create evaluators behind the scenes. Can also reference existing evaluator slugs for power users.
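An illustrative request body for the reworked create-queue endpoint, reflecting the points above; the field names are assumptions from this thread, not a final contract:

```python
# Labels-first create-queue payload; source is "traces" or "testset".

create_queue_payload = {
    "name": "Support traces review",
    "source": {
        "kind": "traces",                # "traces" | "testset" (run_id dropped)
        "trace_ids": ["t-1", "t-2"],     # optional: items can be added later
    },
    "labels": [                          # auto-converted into a human evaluator
        {"key": "correct_answer", "type": "boolean"},
        {"key": "quality", "type": "rating", "min": 1, "max": 5},
    ],
    "assignees": ["user-a"],             # optional
    "repeats": 1,                        # annotations required per item
}
```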
> POST /preview/annotation-queues/{queue_id}/items
> {
>   "trace_ids": ["new-trace-1", "new-trace-2"]
What if test cases? Do we validate?
Updated the "Add Items" endpoint to handle both traces and test cases. For testset-sourced queues, you can add specific testcase_ids.
Validation: Yes, the endpoint validates that the item type matches the queue's source type. Can't add traces to a testset-sourced queue or vice versa.
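The validation above reduces to a small guard; error type and messages are illustrative:

```python
# Reject add-items payloads whose item type does not match the queue's source.

def validate_add_items(queue_source: str, payload: dict) -> None:
    if queue_source == "traces" and payload.get("testcase_ids"):
        raise ValueError("cannot add test cases to a trace-sourced queue")
    if queue_source == "testset" and payload.get("trace_ids"):
        raise ValueError("cannot add traces to a testset-sourced queue")
```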
> ### Write Back to Test Set
>
> POST /preview/annotation-queues/{queue_id}/write-back
Agreed — completely reworked. Renamed to `/export` and it now supports two targets:
- `testset_revision`: For testset-sourced queues. Creates a new revision with annotation values as new columns. User provides a `column_mapping` to control naming.
- `new_testset`: For trace-sourced queues. Creates a new test set from annotated traces (trace inputs → test case inputs, annotations → columns). This is the "save annotated traces as test cases" feature.
The export is triggered by the queue admin, not individual annotators.
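Illustrative `/export` request bodies for the two targets, plus the natural default per source type (all field names are assumptions):

```python
# Two export targets: revise an existing test set, or create a new one.

export_to_revision = {
    "target": "testset_revision",                # testset-sourced queues
    "column_mapping": {"correct_answer": "expected_answer"},
}
export_to_new_testset = {
    "target": "new_testset",                     # trace-sourced queues
    "testset_name": "annotated-support-traces",
}

def default_export_target(queue_source: str) -> str:
    """Testset-sourced queues write a new revision; trace-sourced queues
    create a new test set from the annotated traces."""
    return "testset_revision" if queue_source == "testset" else "new_testset"
```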
> Creates a new test set revision with annotation values as new columns.
>
> ---
There is no discussion of the frontend and how it would interact with this in each place.
Rewrote the entire UI section with explicit frontend interaction details for each use case:
- Observability: Select traces → "Send to annotation queue" modal → POST to convenience API → queue appears on Annotation Queues page
- Test set: "Send to annotation queue" button → configure labels → POST to convenience API → queue appears → on completion, admin exports to test set
- Eval run: Orchestrator auto-creates queue → banner on eval run detail: "This evaluation has human annotation tasks" → queue appears on page → results flow back to eval table
Also specified that the annotation view is the same for all source types — one view, multiple data types.
This is not the only thing I meant. Basically: how does the annotation view work right now (in human evaluation) vs. how would it work, and what needs to change?
Added Appendix B: Annotation View — Current State vs Proposed Changes to the RFC with a thorough comparison.
How it works today (2 separate contexts):
- Trace drawer (`AnnotateDrawer`): 400px side drawer with ONLY the annotation form. No inputs/outputs shown. User selects evaluators manually. Ad-hoc annotations.
- Eval run focus view (`SingleScenarioViewerPOC` + `ScenarioAnnotationPanel`): Full-page layout — inputs+outputs on left (7/12), annotation panel on right (5/12). Has prev/next navigation via `ScenarioNavigator`. This is the closest to what we need.
What does NOT exist today:
- No assignment (all scenarios shown to all users, no queue API calls)
- No progress tracking (no X/Y counter anywhere)
- No auto-advance after annotation
- No unified view for traces + testcases (separate code paths)
What needs to change (built on top of Focus View):
- New queue page + queue list page (~2 days)
- Adapt Focus View for queue context — fetch assigned items, render trace data or testcase data uniformly (~3 days)
- Assignment integration (~1 day)
- Progress tracking (~1 day)
- Navigation improvements: auto-advance, keyboard shortcuts (~1 day)
Total: ~8 days FE work
> ### Annotation Mode (View Swap)
>
> Instead of a separate "annotation queue" page, the annotation experience lives **inside existing views**. The user switches to "annotation mode" on the current view:
This is mostly orthogonal to this RFC. Having an annotation-mode view does not require annotation queues, since it is local / main-user-only and stateless.
Agreed — moved annotation mode (view swap) to a separate note explicitly marked as orthogonal and out of scope for this RFC. It's local/stateless and doesn't require queue infrastructure. Could be built in parallel as an independent feature.
> - Same as today but with actual assignment from the queue
> - Annotator only sees their assigned scenarios
>
> This approach avoids creating a separate "annotation queue" page. The queue is a background concept — the user works inside the views they already know.
We will have, and need to have, an annotation queue page.
Agreed! Rewrote the section — it's now a dedicated "Annotation Queues" page in the sidebar nav, listing all queues assigned to the user with progress, labels, and source info. This is a first-class page, not a hidden concept.
> └─────────────────────────────────────────────────────────────────────┘
>
> Clicking "Open" navigates to the appropriate view (observability for trace queues, test set view for test set queues, eval run details for eval queues) in annotation mode, filtered to the user's assigned items.
No — the annotation view is the same for all three. It is an extension of what we have now, just allowing it to show multiple types of data.
Agreed — updated. The annotation view is now explicitly described as the same for all three source types. It's an extension of the existing annotation drawer/eval table that renders trace data and testcase data uniformly (both have inputs/outputs). The annotation form on the side is always the same, driven by the evaluator's JSON schema.
One view, multiple data types.
> ## Open Questions
>
> 1. **Evaluations without inputs:** How much backend work is needed to support runs with no input steps? Are there assumptions in the scenario/result seeding that require inputs?
Let's research this and answer.
Researched and answered — see Appendix A in the RFC.
TL;DR: The data model is ready (all fields nullable), but 3 blocking changes are needed: relax the start() gate, add trace_ids to SimpleEvaluationData, and implement a new trace-batch worker modeled after the existing live eval flow. ~2-3 days of backend work. No DB migrations needed.
Major changes based on review feedback:
- Fix Use Case 1 flow: queue-first (send to queue), not evaluator-first
- Add deletion handling for annotation queues
- Add FE progress/status tracking details (per-item, overall, editing)
- Add Appendix A: technical analysis of evaluations without inputs
- Rewrite API: drop run_id source, labels-first (not evaluators), /export endpoint
- Rewrite UI: dedicated Annotation Queues page, unified annotation view for all sources
- Fix Use Case 3: orchestrator creates queue, documents current broken behavior
- Separate annotation mode (orthogonal) from annotation queues
- Add queue lifecycle (Active → Completed → Archived)
- Add traces→testcases export capability
> **Key design choice: annotating ≠ modifying the test set.** The annotation step creates annotation traces (OTel spans). These reference the test cases but don't modify them. Writing back to the test set is a separate, explicit action that creates a new revision. This preserves test case immutability and versioning.
>
> **Constraint:** Test cases are immutable today — changing content creates new IDs, and changes only stick when attached to a revision. The write-back step must respect this by creating a new revision, not mutating existing test cases.
The evaluation run orchestrator is responsible for creating the steps. Does this mean the convenience API would schedule an evaluation run and then wait for the steps to be created, so that it can create an evaluation queue referencing the human evaluation steps related to it?
Or do we need to change the automatic evaluation orchestration to include this itself (basically it creates the steps, then finds the human evaluators, then creates the queue based on them)?
Side note: how does the online eval work in this case? Maybe there is something to learn from its structure.
This is addressed right below this line in the updated text (lines 152-155):
> Who creates the queue: The evaluation orchestrator itself. After processing all steps, it detects `has_human` steps and creates the EvaluationQueue as part of the run lifecycle.

The answer is option 2: we change the orchestrator to handle this natively. It:
- Checks `step.origin == "human"` before invoking
- Skips invocation, seeds results as PENDING
- Creates the EvaluationQueue after processing all steps
The convenience API is NOT involved for the eval run case. No scheduling + waiting.
Re: online eval — the live eval worker (evaluate_live_query in live.py) is the closest structural model. It creates scenarios from traces and runs evaluators per-scenario. Our new trace-batch worker should follow this pattern.
> **Queue detail / annotation view:** When the user clicks "Open" on a queue, they enter the **annotation view**. This view is **the same regardless of source type** (traces, testset, or eval run). It is an extension of the existing annotation drawer/eval table, adapted to show:
>
> - A table of items to annotate (traces or test cases)
No — the annotation view is the same for all three. It is an extension of what we have now, just allowing it to show multiple types of data.
Agreed and already reflected in the RFC text at this exact section. The annotation view is described as one view, multiple data types — it renders trace data and testcase data uniformly through the same layout (inputs/outputs on left, annotation form on right).
Added Appendix B with detailed analysis of how this extends from the existing Focus View (SingleScenarioViewerPOC).
Detailed comparison of how the annotation view works today (trace drawer, eval run focus view, eval run table drawer) vs what needs to change for annotation queues. Includes component hierarchy, state management, API calls, and FE work estimate (~8 days).
Summary
Design workspace for the annotation queue v2 feature.
- `EvaluationQueue` backend implementation

Related
Linear PRD: https://linear.app/agenta/document/prd-annotation-queues-b80788a78c9a