Conversation
📝 Walkthrough
Adds a complete model detection subsystem: TypeScript types and probe data, request candidates and response parsing, a detection runner with scoring logic, a frontend client page with tests, a Vitest test configuration, and updated navigation links.
Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Possibly related PRs
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing Touches
📝 Generate docstrings
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Code Review
This pull request introduces a model detection tool that identifies AI model families by analyzing API response signatures. The implementation includes a React client component, provider-specific request builders for OpenAI, Anthropic, and Gemini, and a scoring system to rank results. Comprehensive tests and a Vitest configuration are also added. Feedback identifies a bug in the Anthropic request structure, suggests performance optimizations in the runner logic, and recommends improvements for component testability and code documentation.
src/lib/model-detection/providers.ts (outdated)

```ts
model,
max_tokens: 128,
stream: false,
system: [{ type: 'text', text: ANTHROPIC_SYSTEM }],
```
The structure for the system parameter in this Anthropic request candidate looks inconsistent. The Anthropic Messages API documents system as a string (content-block arrays are only accepted in newer API versions), so the array form may fail against strict proxies or relays. Use a simple string, as the second candidate does.
```suggestion
system: ANTHROPIC_SYSTEM,
```
```ts
interface ModelDetectionClientProps {
  runDetection?: (input: ModelDetectionInput) => Promise<ModelDetectionResult>
}
```
The runDetection prop is a good way to inject dependencies for testing. As an alternative that could simplify the component's props interface, you might consider using vi.mock within your test file to mock the runModelDetection function directly. This would remove the need for the optional prop on the component itself.
Example of mocking in ModelDetectionClient.test.tsx:
```tsx
import { render, screen } from '@testing-library/react';
import { describe, expect, test, vi } from 'vitest';
import { runModelDetection } from '@/lib/model-detection/runner';
import { ModelDetectionClient } from '@/app/docs/model-detection/components/ModelDetectionClient';

// Mock the runner module
vi.mock('@/lib/model-detection/runner');

describe('ModelDetectionClient', () => {
  test('...', async () => {
    // Use the mocked function
    vi.mocked(runModelDetection).mockResolvedValue(...);

    // Render the component without the prop
    render(<ModelDetectionClient />);

    // ... assertions
  });
});
```
src/lib/model-detection/runner.ts (outdated)

```ts
repeatedExactly: extractResponseTextFromAny(payload).includes(probe),
rawText: extractResponseTextFromAny(payload),
```
```ts
const hits = familyObservations.filter(
  (observation) => observation.repeatedExactly === false,
).length
```
The variable hits is used to count instances where repeatedExactly === false. This implies a "hit" is when the model refuses to repeat the probe string, which is a valid fingerprinting technique. However, the name hits can be confusing for other developers. To improve code clarity and maintainability, consider adding a comment explaining that a "hit" in this context refers to a signature match via refusal.
```suggestion
// A "hit" signifies a signature match, occurring when the model refuses to repeat the probe string.
const hits = familyObservations.filter(
  (observation) => observation.repeatedExactly === false,
).length
```
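The refusal-counting idea can also be sketched in isolation. The `Observation` shape and function names below are illustrative, not the repository's actual types:

```typescript
interface Observation {
  // true when the model echoed the probe string verbatim
  repeatedExactly: boolean
}

// A "hit" is a refusal: the model failed (or declined) to repeat the
// probe, which is the signature this detector scores on.
function countRefusalHits(observations: Observation[]): number {
  return observations.filter((o) => o.repeatedExactly === false).length
}

// Ratio of refusals, usable as a crude per-family confidence signal.
function refusalRatio(observations: Observation[]): number {
  if (observations.length === 0) return 0
  return countRefusalHits(observations) / observations.length
}
```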
src/app/docs/model-detection/components/ModelDetectionClient.test.tsx

```diff
@@ -0,0 +1,67 @@
+'use client'
```
**Stray `'use client'` directive in test file**
The 'use client' string at line 1 is a Next.js boundary directive that is meaningless in a Vitest test file. Vitest evaluates it as a no-op expression, but it makes the file look like it is a Next.js client module and may confuse tooling or future contributors. Remove it.
```suggestion
import { render, screen } from '@testing-library/react'
```
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/app/docs/model-detection/components/ModelDetectionClient.test.tsx
Line: 1
Comment:
**Stray `'use client'` directive in test file**
The `'use client'` string at line 1 is a Next.js boundary directive that is meaningless in a Vitest test file. Vitest evaluates it as a no-op expression, but it makes the file look like it is a Next.js client module and may confuse tooling or future contributors. Remove it.
```suggestion
import { render, screen } from '@testing-library/react'
```
How can I resolve this? If you propose a fix, please make it concise.

src/app/docs/model-detection/page.tsx

```tsx
import { Suspense } from 'react'

import { ModelDetectionClient } from '@/app/docs/model-detection/components/ModelDetectionClient'

export default function ModelDetectionPage() {
  return (
    <Suspense
      fallback={
        <div className="flex min-w-0 flex-auto flex-col px-4 py-16 lg:pl-8 xl:px-16">
          <div className="rounded-2xl border border-[var(--claude-smoke)]/30 bg-[var(--claude-paper)] p-4 text-sm text-[var(--claude-walnut)]/70 dark:bg-[var(--claude-sand)]">
            正在加载检测页面...
          </div>
        </div>
      }
    >
      <ModelDetectionClient />
    </Suspense>
  )
}
```
**`Suspense` boundary will never activate**
ModelDetectionClient is imported eagerly (not via React.lazy) and does not call use() or any other Suspense-triggering API. The Suspense wrapper and its fallback are dead code. If the intent is to lazy-load the client component to defer its JS bundle, wrap it with React.lazy and a dynamic import; otherwise remove the Suspense boundary.
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/app/docs/model-detection/page.tsx
Line: 1-19
Comment:
**`Suspense` boundary will never activate**
`ModelDetectionClient` is imported eagerly (not via `React.lazy`) and does not call `use()` or any other Suspense-triggering API. The `Suspense` wrapper and its `fallback` are dead code. If the intent is to lazy-load the client component to defer its JS bundle, wrap it with `React.lazy` and a dynamic import; otherwise remove the `Suspense` boundary.
How can I resolve this? If you propose a fix, please make it concise.
src/lib/model-detection/runner.ts (outdated)

```ts
return {
  repeatedExactly: extractResponseTextFromAny(payload).includes(probe),
  rawText: extractResponseTextFromAny(payload),
}
```
**`extractResponseTextFromAny` invoked twice on the same payload**
extractResponseTextFromAny(payload) is called once to compute repeatedExactly and again to store rawText. Extracting to a local variable avoids the redundant work.
```suggestion
const rawText = extractResponseTextFromAny(payload)
return {
  repeatedExactly: rawText.includes(probe),
  rawText,
}
```
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/lib/model-detection/runner.ts
Line: 83-86
Comment:
**`extractResponseTextFromAny` invoked twice on the same payload**
`extractResponseTextFromAny(payload)` is called once to compute `repeatedExactly` and again to store `rawText`. Extracting to a local variable avoids the redundant work.
```suggestion
const rawText = extractResponseTextFromAny(payload)
return {
repeatedExactly: rawText.includes(probe),
rawText,
}
```
How can I resolve this? If you propose a fix, please make it concise.

src/lib/model-detection/providers.ts

```ts
if (normalizedBaseUrl.includes(':generateContent')) {
  return normalizedBaseUrl
}

const normalizedModel = normalizeGeminiModel(model)
if (normalizedBaseUrl.endsWith(`/${version}`)) {
  return `${normalizedBaseUrl}/models/${normalizedModel}:generateContent`
}

return `${normalizedBaseUrl}/${version}/models/${normalizedModel}:generateContent`
```
**Both Gemini candidates become identical when the base URL already contains `:generateContent`**
When baseUrl already includes :generateContent (e.g. https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent), resolveGeminiUrl returns normalizedBaseUrl unchanged for both calls — the version and model arguments are ignored entirely. The two RequestCandidate entries produced by buildGeminiCandidates therefore share the same URL but different labels. The runner will hit the same endpoint twice with no fallback benefit, and if the first attempt returns an empty body it will retry the identical URL rather than a different API version.
Consider returning null (and filtering) when the URL is already fully qualified, or at minimum add a comment acknowledging the degenerate case so future readers understand why both candidates can be the same URL.
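One way to realize the null-and-filter suggestion is sketched below; `resolveGeminiUrl` and `buildGeminiCandidateUrls` here are simplified stand-ins for the repository's functions, not their actual implementations:

```typescript
function resolveGeminiUrl(
  baseUrl: string,
  version: string,
  model: string,
): string | null {
  const normalized = baseUrl.replace(/\/+$/, '')
  // Fully qualified endpoint: version/model cannot be varied, so signal
  // the caller to produce at most one candidate for this base URL.
  if (normalized.includes(':generateContent')) return null
  if (normalized.endsWith(`/${version}`)) {
    return `${normalized}/models/${model}:generateContent`
  }
  return `${normalized}/${version}/models/${model}:generateContent`
}

// Keep the fully-qualified URL once and drop the now-meaningless
// per-version variants, so the runner never retries an identical URL.
function buildGeminiCandidateUrls(baseUrl: string, model: string): string[] {
  const resolved = ['v1beta', 'v1']
    .map((version) => resolveGeminiUrl(baseUrl, version, model))
    .filter((url): url is string => url !== null)
  return resolved.length > 0 ? resolved : [baseUrl.replace(/\/+$/, '')]
}
```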
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/lib/model-detection/providers.ts
Line: 40-49
Comment:
**Both Gemini candidates become identical when the base URL already contains `:generateContent`**
When `baseUrl` already includes `:generateContent` (e.g. `https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent`), `resolveGeminiUrl` returns `normalizedBaseUrl` unchanged for **both** calls — the `version` and `model` arguments are ignored entirely. The two `RequestCandidate` entries produced by `buildGeminiCandidates` therefore share the same URL but different labels. The runner will hit the same endpoint twice with no fallback benefit, and if the first attempt returns an empty body it will retry the identical URL rather than a different API version.
Consider returning `null` (and filtering) when the URL is already fully qualified, or at minimum add a comment acknowledging the degenerate case so future readers understand why both candidates can be the same URL.
How can I resolve this? If you propose a fix, please make it concise.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 2013d2fd53
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
src/lib/model-detection/runner.ts (outdated)

```ts
}

return {
  repeatedExactly: extractResponseTextFromAny(payload).includes(probe),
```
**Use exact match for probe repeat classification**
runCandidateRequest marks repeatedExactly using includes(probe), so outputs like "Here is the text: <probe>" are treated as exact repeats even though they violate the prompt contract. Because scoring counts repeatedExactly === false as a hit, this substring check suppresses true hits whenever providers add wrappers/prefixes/suffixes, producing materially inaccurate family probabilities.
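A stricter classification along the lines Codex suggests would compare the trimmed response to the probe instead of using `includes`; the helper below is a sketch, not the repository's code:

```typescript
// Treat a response as an exact repeat only when, after trimming
// surrounding whitespace and optional wrapping quotes, it equals the
// probe — "Here is the text: <probe>" no longer counts as a repeat.
function classifyRepeat(rawText: string, probe: string): boolean {
  const stripped = rawText.trim().replace(/^["'`]+|["'`]+$/g, '').trim()
  return stripped === probe
}
```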
src/lib/model-detection/providers.ts (outdated)

```ts
if (normalizedBaseUrl.endsWith(path)) {
  return normalizedBaseUrl
}

return `${normalizedBaseUrl}${path}`
```
**Avoid duplicating API version segments in request URLs**
ensurePath blindly appends the target path unless the base URL already ends with that full path, so a common base like https://api.openai.com/v1 becomes .../v1/v1/responses (and similarly for other endpoints). In that input shape every probe request fails with 404, making the detector unusable despite otherwise valid credentials and model settings.
🧹 Nitpick comments (5)
src/lib/navigation.test.ts (1)
8-15: Narrow the assertion to the '参考文档' group. The test currently only verifies that the link exists somewhere in the navigation, so it would still pass if the link were moved to the wrong group. Locate the section with `title === '参考文档'` first, then assert on its `links`.

♻️ Suggested change

```diff
 describe('navigation', () => {
   test('包含响应模型检测入口', () => {
-    expect(
-      navigation.some((section) =>
-        section.links.some(
-          (link) =>
-            link.title === '响应模型检测' &&
-            link.href === '/docs/model-detection',
-        ),
-      ),
-    ).toBe(true)
+    const referenceSection = navigation.find(
+      (section) => section.title === '参考文档',
+    )
+    expect(referenceSection).toBeDefined()
+    expect(
+      referenceSection?.links.some(
+        (link) =>
+          link.title === '响应模型检测' &&
+          link.href === '/docs/model-detection',
+      ),
+    ).toBe(true)
   })
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/navigation.test.ts` around lines 8 - 15, The test currently asserts globally via navigation.some(...) which allows the target link to pass if placed in any group; change the assertion to first find the specific section by matching section.title === '参考文档' (use the same navigation and section variables from the test), then assert that that section.links contains a link object where link.title === '响应模型检测' and link.href === '/docs/model-detection' so the check is scoped to the “参考文档” group only.

src/app/docs/model-detection/components/ModelDetectionClient.tsx (1)
170-172: Consider runtime validation for the type assertion. Asserting `event.target.value` directly as `DetectionEndpointType` is safe in the current scenario (the `<option>` values are hard-coded), but if the options change in the future it could introduce an inconsistency. An optional defensive improvement:

♻️ Optional: add a runtime type guard

```diff
+const ENDPOINT_TYPES = ['openai', 'anthropic', 'gemini'] as const
+
+function isDetectionEndpointType(value: string): value is DetectionEndpointType {
+  return (ENDPOINT_TYPES as readonly string[]).includes(value)
+}
+
 onChange={(event) =>
-  setEndpointType(event.target.value as DetectionEndpointType)
+  {
+    const value = event.target.value
+    if (isDetectionEndpointType(value)) {
+      setEndpointType(value)
+    }
+  }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/docs/model-detection/components/ModelDetectionClient.tsx` around lines 170 - 172, The onChange handler for the endpoint selector directly casts event.target.value to DetectionEndpointType via setEndpointType(event.target.value as DetectionEndpointType); add a runtime guard to avoid unsafe casts: implement an isDetectionEndpointType(value: string): value is DetectionEndpointType type-guard (or a Set/array of allowed values) and call it inside the onChange handler (or a small helper like handleEndpointChange) to validate the incoming value before calling setEndpointType; if the value is invalid, either ignore it or log/handle the fallback case.

src/lib/model-detection/runner.ts (2)
83-86: Duplicate call to `extractResponseTextFromAny`. `extractResponseTextFromAny(payload)` is called twice on the same payload. The function is pure and inexpensive, but the result can still be cached.

♻️ Optional: cache the extracted result

```diff
+  const rawText = extractResponseTextFromAny(payload)
   return {
-    repeatedExactly: extractResponseTextFromAny(payload).includes(probe),
-    rawText: extractResponseTextFromAny(payload),
+    repeatedExactly: rawText.includes(probe),
+    rawText,
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/model-detection/runner.ts` around lines 83 - 86, The code calls extractResponseTextFromAny(payload) twice when building the returned object; cache its result in a local const (e.g., const text = extractResponseTextFromAny(payload)) and then set repeatedExactly: text.includes(probe) and rawText: text to avoid duplicate calls; update the function/closure that returns repeatedExactly/rawText accordingly.
145-150: Potential hazard in the concurrent workers' cursor access. Although JavaScript is single-threaded, multiple workers can interleave while `await mapper(...)` is suspended. The current implementation reads and increments `cursor` before the `await`, which is safe in JS because there is no `await` between the `while` check and the assignment, but the pattern is easy to break during a refactor. Consider adding an explanatory comment or using a more explicit index-assignment pattern.

♻️ Optional: use a more explicit pattern

```diff
 async function worker() {
   while (cursor < items.length) {
+    // Note: cursor must be read and incremented in the same synchronous
+    // block, before any await, so no two workers process the same index.
     const currentIndex = cursor
     cursor += 1
     results[currentIndex] = await mapper(items[currentIndex])
   }
 }
```

🤖 Prompt for AI Agents
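For reference, a minimal worker-pool mapper of the shape discussed above can look like this (the real `mapWithConcurrency` in runner.ts may differ in details):

```typescript
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  mapper: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length)
  let cursor = 0

  async function worker(): Promise<void> {
    while (cursor < items.length) {
      // Reserve an index synchronously, before any await, so that no
      // two workers ever process the same item.
      const index = cursor
      cursor += 1
      results[index] = await mapper(items[index])
    }
  }

  const workers = Array.from(
    { length: Math.min(limit, items.length) },
    () => worker(),
  )
  await Promise.all(workers)
  return results
}
```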
Verify each finding against the current code and only fix it if needed. In `@src/lib/model-detection/runner.ts` around lines 145 - 150, The worker() loop risks future race conditions if refactored because cursor is read and incremented around an await; update worker to perform an explicit atomic index assignment before any await (e.g., capture an index via a single fetch-and-increment operation on cursor and use that index for items/results) and/or add a clear comment in worker explaining that the index must be reserved before awaiting; reference the worker function and variables cursor, items, mapper, and results so reviewers can find and verify the change.

src/lib/model-detection/providers.ts (1)
159-159: Constant name may cause confusion. `ANTHROPIC_SYSTEM` is reused by the Gemini candidate builder. The content is generic, but the name implies it is Anthropic-specific.

♻️ Optional: rename to something more generic

```diff
-const ANTHROPIC_SYSTEM =
+const SIMPLE_COPY_SYSTEM =
   'Copy the target text exactly once. Return only the original text.'
```

Then use `SIMPLE_COPY_SYSTEM` in both the Anthropic and Gemini builders.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/model-detection/providers.ts` at line 159, Rename the constant ANTHROPIC_SYSTEM to a more generic name (e.g. SIMPLE_COPY_SYSTEM) and update all usages such as the line constructing prompt in providers.ts (`const prompt = \`${ANTHROPIC_SYSTEM}\n\n${buildProbePrompt(probe)}\``) and any Anthropic/Gemini builder modules to reference the new SIMPLE_COPY_SYSTEM identifier so both builders share the same generic system prompt; ensure imports/exports and any tests are updated to the new name.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@src/app/docs/model-detection/components/ModelDetectionClient.tsx`:
- Around line 170-172: The onChange handler for the endpoint selector directly
casts event.target.value to DetectionEndpointType via
setEndpointType(event.target.value as DetectionEndpointType); add a runtime
guard to avoid unsafe casts: implement an isDetectionEndpointType(value:
string): value is DetectionEndpointType type-guard (or a Set/array of allowed
values) and call it inside the onChange handler (or a small helper like
handleEndpointChange) to validate the incoming value before calling
setEndpointType; if the value is invalid, either ignore it or log/handle the
fallback case.
In `@src/lib/model-detection/providers.ts`:
- Line 159: Rename the constant ANTHROPIC_SYSTEM to a more generic name (e.g.
SIMPLE_COPY_SYSTEM) and update all usages such as the line constructing prompt
in providers.ts (`const prompt =
\`${ANTHROPIC_SYSTEM}\n\n${buildProbePrompt(probe)}\``) and any Anthropic/Gemini
builder modules to reference the new SIMPLE_COPY_SYSTEM identifier so both
builders share the same generic system prompt; ensure imports/exports and any
tests are updated to the new name.
In `@src/lib/model-detection/runner.ts`:
- Around line 83-86: The code calls extractResponseTextFromAny(payload) twice
when building the returned object; cache its result in a local const (e.g.,
const text = extractResponseTextFromAny(payload)) and then set repeatedExactly:
text.includes(probe) and rawText: text to avoid duplicate calls; update the
function/closure that returns repeatedExactly/rawText accordingly.
- Around line 145-150: The worker() loop risks future race conditions if
refactored because cursor is read and incremented around an await; update worker
to perform an explicit atomic index assignment before any await (e.g., capture
an index via a single fetch-and-increment operation on cursor and use that index
for items/results) and/or add a clear comment in worker explaining that the
index must be reserved before awaiting; reference the worker function and
variables cursor, items, mapper, and results so reviewers can find and verify
the change.
In `@src/lib/navigation.test.ts`:
- Around line 8-15: The test currently asserts globally via navigation.some(...)
which allows the target link to pass if placed in any group; change the
assertion to first find the specific section by matching section.title ===
'参考文档' (use the same navigation and section variables from the test), then
assert that that section.links contains a link object where link.title ===
'响应模型检测' and link.href === '/docs/model-detection' so the check is scoped to the
“参考文档” group only.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 9a261c1b-2a69-4d09-93d7-945f80a98846
⛔ Files ignored due to path filters (1)
`bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (15)
- package.json
- src/app/docs/model-detection/components/ModelDetectionClient.test.tsx
- src/app/docs/model-detection/components/ModelDetectionClient.tsx
- src/app/docs/model-detection/page.tsx
- src/lib/model-detection/families.ts
- src/lib/model-detection/providers.test.ts
- src/lib/model-detection/providers.ts
- src/lib/model-detection/runner.ts
- src/lib/model-detection/scoring.test.ts
- src/lib/model-detection/scoring.ts
- src/lib/model-detection/types.ts
- src/lib/navigation.test.ts
- src/lib/navigation.ts
- vitest.config.ts
- vitest.setup.ts
Deploying with

| Status | Name | Latest Commit | Updated (UTC) |
|---|---|---|---|
| ❌ Deployment failed (View logs) | claude-code-hub-docs | 8828bde | Apr 08 2026, 06:01 PM |
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 4a6524aaee
```ts
return `${normalizedBaseUrl}/models/${normalizedModel}:generateContent`
}

return `${normalizedBaseUrl}/${version}/models/${normalizedModel}:generateContent`
```
**Support Gemini model-scoped base URLs without path duplication**
resolveGeminiUrl only handles two Gemini input shapes (root .../v1/.../v1beta and fully-qualified ...:generateContent), so a common model-scoped base URL like https://.../v1beta/models/gemini-2.0-flash falls into the final return path and becomes .../v1beta/models/gemini-2.0-flash/v1beta/models/<model>:generateContent. In that case every probe request hits a malformed endpoint and detection fails even though the user provided a valid Gemini model URL and key.
Tip: Greploops — automatically fix all review issues. Use the Greptile plugin for Claude Code to query reviews, search comments, and manage custom context directly from your terminal.
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/lib/model-detection/providers.ts`:
- Around line 21-46: ensurePath currently lets callers paste a full endpoint
like "/v1/chat/completions" which then gets appended to other candidates and
corrupts fallbacks; update ensurePath so before computing targetSegments you
normalize the input (use trimTrailingSlash) and if the first target segment
matches a version like /^v\d+$/ then truncate targetSegments to only that
version segment (i.e., treat "/v1/..." as just "/v1"), then continue the
overlap/merge logic with baseSegments and targetSegments; reference ensurePath,
trimTrailingSlash, targetSegments and baseSegments when making the change.
In `@src/lib/model-detection/runner.ts`:
- Around line 68-88: The runCandidateRequest function lacks a timeout for fetch;
wrap the fetch call with an AbortController and a timer so requests are aborted
after a short configurable timeout, clear the timer on success, and pass
controller.signal into fetch; ensure the abort error is propagated the same way
as other failures so readPayload/extractResponseTextFromAny callers and the
existing error aggregation see the failure (i.e., throw an Error when the
controller aborts or when fetch rejects due to the timeout). Update references
in runCandidateRequest to use the controller.signal and to clear the timeout
before returning.
- Around line 102-107: buildRequestCandidates(...) can throw (e.g., new
URL(input.baseUrl)) and currently that rejection will short-circuit
mapWithConcurrency; wrap the call to buildRequestCandidates in a try/catch
inside runProbe and on error return a ProbeObservation representing the failure
(include probe, an empty/failed candidates list and the error message) instead
of letting the exception propagate; ensure runProbe always resolves to a
ProbeObservation so mapWithConcurrency / runModelDetection can continue
aggregating per-probe results.
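The timeout the first runner comment asks for can be sketched with an `AbortController`. The `fetchWithTimeout` name, the injected `fetchImpl` parameter, and the 10-second default are illustrative assumptions, not the repository's API:

```typescript
type FetchLike = (url: string, init?: RequestInit) => Promise<Response>

// Wrap a fetch-compatible function so the request aborts after timeoutMs.
// The abort surfaces as a plain Error, so callers can aggregate it the
// same way as any other candidate failure.
async function fetchWithTimeout(
  fetchImpl: FetchLike,
  url: string,
  init: RequestInit,
  timeoutMs = 10_000,
): Promise<Response> {
  const controller = new AbortController()
  const timer = setTimeout(() => controller.abort(), timeoutMs)
  try {
    return await fetchImpl(url, { ...init, signal: controller.signal })
  } catch (error) {
    if (controller.signal.aborted) {
      throw new Error(`request timed out after ${timeoutMs}ms: ${url}`)
    }
    throw error
  } finally {
    clearTimeout(timer)
  }
}
```

Injecting `fetchImpl` (rather than closing over the global `fetch`) also makes the timeout path easy to exercise in tests with a never-resolving fake.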
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: c13cb20f-04cc-4e6e-b953-0ffae2e8381c
📒 Files selected for processing (6)
- src/app/docs/model-detection/components/ModelDetectionClient.tsx
- src/lib/model-detection/providers.test.ts
- src/lib/model-detection/providers.ts
- src/lib/model-detection/runner.ts
- src/lib/model-detection/scoring.ts
- src/lib/navigation.test.ts
✅ Files skipped from review due to trivial changes (1)
- src/lib/navigation.test.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- src/lib/model-detection/scoring.ts
- src/app/docs/model-detection/components/ModelDetectionClient.tsx
```ts
function ensurePath(baseUrl: string, path: string) {
  const normalizedBaseUrl = trimTrailingSlash(baseUrl)
  if (normalizedBaseUrl.endsWith(path)) {
    return normalizedBaseUrl
  }

  const url = new URL(normalizedBaseUrl)
  const baseSegments = url.pathname.split('/').filter(Boolean)
  const targetSegments = path.split('/').filter(Boolean)

  let overlap = 0
  for (
    let size = Math.min(baseSegments.length, targetSegments.length);
    size > 0;
    size -= 1
  ) {
    const baseTail = baseSegments.slice(-size).join('/')
    const targetHead = targetSegments.slice(0, size).join('/')
    if (baseTail === targetHead) {
      overlap = size
      break
    }
  }

  url.pathname = `/${[...baseSegments, ...targetSegments.slice(overlap)].join('/')}`
  return trimTrailingSlash(url.toString())
}
```
**A full endpoint pasted as the base URL corrupts the fallback URLs.**
If this field accepts a complete /v1/... endpoint rather than only a root path, ensurePath() keeps appending the existing terminal path when switching to other candidate paths. For example, with /v1/chat/completions as input, another candidate becomes .../v1/chat/completions/v1/responses, so once the preferred candidate fails, the fallback is useless. Consider stripping known terminal paths before joining, or explicitly restricting this field to a true base URL at the entry point.
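The strip-before-join idea can be sketched as follows; the `KNOWN_ENDPOINT_SUFFIXES` list and `toBaseUrl` helper are illustrative, not the project's code:

```typescript
// Endpoint suffixes a user might paste as part of the base URL; strip
// them so candidate paths are always appended to a clean base.
const KNOWN_ENDPOINT_SUFFIXES = [
  '/chat/completions',
  '/responses',
  '/messages',
]

function toBaseUrl(input: string): string {
  let url = input.replace(/\/+$/, '')
  for (const suffix of KNOWN_ENDPOINT_SUFFIXES) {
    if (url.endsWith(suffix)) {
      url = url.slice(0, -suffix.length)
      break
    }
  }
  return url
}
```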
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/model-detection/providers.ts` around lines 21 - 46, ensurePath
currently lets callers paste a full endpoint like "/v1/chat/completions" which
then gets appended to other candidates and corrupts fallbacks; update ensurePath
so before computing targetSegments you normalize the input (use
trimTrailingSlash) and if the first target segment matches a version like
/^v\d+$/ then truncate targetSegments to only that version segment (i.e., treat
"/v1/..." as just "/v1"), then continue the overlap/merge logic with
baseSegments and targetSegments; reference ensurePath, trimTrailingSlash,
targetSegments and baseSegments when making the change.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 3dccf07bc3
```ts
return `${normalizedBaseUrl}/models/${normalizedModel}:generateContent`
}

return `${normalizedBaseUrl}/${version}/models/${normalizedModel}:generateContent`
```
**Skip incompatible Gemini API version candidates**
When baseUrl already includes a fixed Gemini version (for example https://relay.example.com/v1), resolveGeminiUrl still constructs a candidate for the other version via the fallback return path, producing URLs like /v1/v1beta/models/.... In runProbe this malformed candidate is tried first for /v1 inputs, so every probe incurs a guaranteed failure before the valid candidate, which materially increases request volume/latency and can trigger avoidable rate-limit failures on large probe runs.
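Skipping the incompatible version can be as simple as checking for a pinned version before building candidates; the helper below is a sketch under that assumption:

```typescript
// Return the Gemini API versions worth trying for a base URL. If the
// URL already pins a version (".../v1" or ".../v1beta"), only that
// version is returned, so no malformed "/v1/v1beta/..." candidate is built.
function candidateGeminiVersions(baseUrl: string): string[] {
  const normalized = baseUrl.replace(/\/+$/, '')
  const pinned = ['v1beta', 'v1'].find((v) => normalized.endsWith(`/${v}`))
  return pinned ? [pinned] : ['v1beta', 'v1']
}
```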
src/lib/model-detection/runner.ts (outdated)

```ts
familyId: string,
probe: string,
): Promise<ProbeObservation> {
  const candidates = buildRequestCandidates({ ...input, probe })
```
**Catch candidate-construction errors inside runProbe**
buildRequestCandidates is invoked before entering the per-candidate try/catch, so invalid baseUrl values (for example missing protocol) throw from new URL(...) and reject the mapper directly. Because mapWithConcurrency uses Promise.all, that single throw aborts the whole detection batch instead of being recorded as probe-level failures, preventing the structured error path from running.
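The containment Codex asks for amounts to a try/catch around candidate construction. The `SimpleObservation` shape and `runProbeSafely` signature below are simplified illustrations, not the repository's types:

```typescript
interface SimpleObservation {
  probe: string
  ok: boolean
  errors: string[]
}

// Never let candidate construction reject the mapper: an invalid baseUrl
// (e.g. missing protocol) becomes a per-probe failure observation instead
// of aborting the whole Promise.all batch.
async function runProbeSafely(
  probe: string,
  buildCandidates: () => string[],
  runCandidate: (url: string) => Promise<boolean>,
): Promise<SimpleObservation> {
  let candidates: string[]
  try {
    candidates = buildCandidates()
  } catch (error) {
    return { probe, ok: false, errors: [String(error)] }
  }
  const errors: string[] = []
  for (const url of candidates) {
    try {
      if (await runCandidate(url)) return { probe, ok: true, errors }
    } catch (error) {
      errors.push(String(error))
    }
  }
  return { probe, ok: false, errors }
}
```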
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 8828bdef8f
```ts
if (normalizedBaseUrl.endsWith(`/${version}`)) {
  return `${normalizedBaseUrl}/models/${normalizedModel}:generateContent`
}

return `${normalizedBaseUrl}/${version}/models/${normalizedModel}:generateContent`
```
**Build Gemini URLs with URL parsing, not string concat**
When baseUrl includes query parameters (for example a copied Gemini endpoint like .../v1beta?key=... or a signed relay URL), this branch appends /${version}/models/... directly to the raw string and produces malformed URLs such as .../v1beta?key=abc/v1beta/models/.... In that input shape every Gemini probe request targets an invalid endpoint and the detector reports total failure even though the base host/model are valid.
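Building through the URL API instead of string concatenation keeps a `?key=...` query intact, because the query lives outside the pathname being edited. A sketch, with `buildGeminiUrl` as an illustrative name:

```typescript
// Construct a Gemini generateContent URL with the URL API so any query
// string on the base (e.g. ?key=...) survives path manipulation.
function buildGeminiUrl(baseUrl: string, version: string, model: string): string {
  const url = new URL(baseUrl)
  const segments = url.pathname.split('/').filter(Boolean)
  if (segments[segments.length - 1] !== version) segments.push(version)
  segments.push('models', `${model}:generateContent`)
  url.pathname = '/' + segments.join('/')
  return url.toString()
}
```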
Summary
Verification
Notes
Diagnostics not shown: 14.
Checked 87 files in 39ms. No fixes applied.
Found 34 errors; the linter still reports pre-existing issues outside this change set.
Greptile Summary
This PR adds a pure-frontend model-family detection page (/docs/model-detection) that probes a user-supplied API endpoint with glitch-token strings and ranks candidate model families by how often the model fails to echo each probe. The concurrency runner, scoring logic, and request-builder fallback chain are all well-structured, and test coverage addresses the key paths (timeout, invalid URL, response extraction, scoring edge cases).
Confidence Score: 5/5
Safe to merge; all remaining findings are P2 style/improvement suggestions that do not affect correctness.
Prior P1 findings (double extractResponseTextFromAny call, dead Suspense boundary, Gemini URL degenerate case) are flagged in previous review threads. The two new inline comments are both P2: stale-result UX and the endpointType-ordering preference in extractResponseTextFromAny. Core detection logic, concurrency worker pattern, scoring, and test coverage are sound.
src/lib/model-detection/runner.ts and src/app/docs/model-detection/components/ModelDetectionClient.tsx have the open P2 suggestions.
Vulnerabilities
No security concerns identified. The API key is typed directly by the user into a `type="password"` field, held only in React component state, and sent straight from the browser to the user-supplied endpoint — it never touches the application's own backend. All detection logic runs client-side. No secrets are logged or persisted.
Important Files Changed
Sequence Diagram
```mermaid
sequenceDiagram
  participant UI as ModelDetectionClient
  participant Runner as runner.ts
  participant Providers as providers.ts
  participant API as External API
  UI->>Runner: runModelDetection(input)
  Runner->>Runner: buildProbeEntries() — flatten families × probes
  loop mapWithConcurrency (4 workers)
    Runner->>Runner: runProbe(input, familyId, probe)
    Runner->>Providers: buildRequestCandidates(input + probe)
    Providers-->>Runner: RequestCandidate[]
    loop try each candidate in order
      Runner->>API: POST /v1/responses (or /messages, :generateContent)
      alt HTTP 200
        API-->>Runner: payload
        Runner->>Providers: extractResponseTextFromAny(payload)
        Providers-->>Runner: rawText
        Runner-->>Runner: repeatedExactly = rawText.includes(probe)
      else HTTP error or timeout
        Runner->>Runner: push error, try next candidate
      end
    end
    Runner-->>Runner: ProbeObservation
  end
  Runner->>Runner: summarizeDetectionFamilies(families, observations)
  Runner-->>UI: ModelDetectionResult (rankings + summary)
  UI->>UI: render ranked family cards
```

Prompt To Fix All With AI
Reviews (4): Last reviewed commit: "fix: harden detection request handling" | Re-trigger Greptile