
Unify OpenAI-compatible providers and custom provider flow#930

Open
PeterDaveHello wants to merge 2 commits into ChatGPTBox-dev:master from PeterDaveHello:refactor/openai-provider-platform

Conversation


@PeterDaveHello PeterDaveHello commented Feb 24, 2026

User description

This PR unifies OpenAI-compatible API execution into a shared core + provider registry,
and completes the custom provider workflow in API Modes.

Included changes

  • Consolidated OpenAI-compatible request logic into shared execution path
  • Unified provider resolution/secret lookup for built-in and custom providers
  • Added config-driven custom provider flow in API Modes
  • Kept GPT-5/OpenAI token param behavior (max_completion_tokens where required)
  • Hardened migration for legacy configs/sessions (provider IDs, keys, custom URL mapping)
  • Fixed settings/key update edge cases and stream completion signaling consistency
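
As a rough sketch of the dispatch idea behind these changes: every OpenAI-compatible session, built-in or custom, is routed through one shared execution path instead of per-provider branches. The helper and provider IDs below are illustrative assumptions, not the PR's literal code (the PR's actual entry point is generateAnswersWithOpenAICompatibleApi).

```javascript
// Built-in OpenAI-compatible provider IDs (illustrative list).
const BUILTIN_OPENAI_COMPATIBLE_IDS = [
  'openai', 'ollama', 'deepseek', 'moonshot', 'chatglm', 'openrouter', 'aiml',
]

// Hypothetical predicate: should this provider go through the unified handler?
function isOpenAICompatibleProvider(providerId, customProviders = []) {
  return (
    BUILTIN_OPENAI_COMPATIBLE_IDS.includes(providerId) ||
    customProviders.some((p) => p.id === providerId)
  )
}
```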

Compatibility

  • No intended breaking behavior for existing users
  • Existing configs are migrated with backward-compatible handling
  • Existing OpenAI-compatible providers continue to work under unified config management

Validation

  • npm run lint passed
  • npm run build passed

GitHub Copilot PR summary

This pull request introduces a major refactor to how OpenAI-compatible and custom API providers are managed and configured. The changes unify the handling of various OpenAI-compatible APIs (including custom, Ollama, DeepSeek, Moonshot, ChatGLM, OpenRouter, AIML, and legacy GPT Completion APIs) under a single provider system, and add a robust migration and normalization layer for user configuration. This will make it easier to add new providers, manage secrets, and ensure backward compatibility with legacy config data.

Key changes include:

Unification and Refactor of API Provider Handling

  • Consolidated all OpenAI-compatible API providers (custom, Ollama, DeepSeek, Moonshot, ChatGLM, OpenRouter, AIML, GPT Completion, etc.) to be handled by a single generateAnswersWithOpenAICompatibleApi function, replacing multiple specific imports and execution branches in src/background/index.mjs. This simplifies the codebase and centralizes provider logic. [1] [2] [3] [4]

  • Added isUsingOpenAICompatibleApiSession utility to determine if a session should be routed through the unified OpenAI-compatible API handler.

Configuration Schema and Migration

  • Introduced a configuration schema versioning system and a comprehensive migration function (migrateUserConfig) in src/config/index.mjs. This function normalizes provider IDs, migrates legacy secrets, deduplicates and renames custom providers, and ensures all config data is up-to-date and consistent.

  • Updated the default config to include new fields: customOpenAIProviders, providerSecrets, and configSchemaVersion.
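
A minimal sketch of what the new config fields and one migration step might look like. The defaults and the promotion rule below are assumptions for illustration, not the PR's literal migrateUserConfig (which also normalizes provider IDs and deduplicates custom providers).

```javascript
// Illustrative shape of the new config fields described above.
const defaultConfig = {
  configSchemaVersion: 1,
  customOpenAIProviders: [], // e.g. [{ id: 'my-llm', name: 'My LLM', baseUrl: 'https://example.invalid' }]
  providerSecrets: {},       // e.g. { openai: 'sk-...' }
}

// One hypothetical migration step: promote a legacy top-level apiKey into
// providerSecrets without clobbering an already-migrated key.
function migrateLegacySecret(config) {
  const providerSecrets = { ...(config.providerSecrets || {}) }
  if (config.apiKey && !providerSecrets.openai) {
    providerSecrets.openai = config.apiKey
  }
  return { ...defaultConfig, ...config, providerSecrets }
}
```

Running the step twice leaves the config unchanged, which is the property a migration pipeline like this needs to be safe on every startup.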

Popup UI and Provider Management

  • Refactored the API modes section in the popup UI (src/popup/sections/ApiModes.jsx) to use the new provider registry and normalization logic. Added helpers for provider ID normalization, uniqueness, and base URL sanitization.

  • Ensured that UI state and config updates are consistent with the new provider structure, and that legacy custom providers are handled gracefully.


References:

  • Unified provider imports and execution: [1] [2] [3] [4]
  • Added isUsingOpenAICompatibleApiSession:
  • Config migration and normalization:
  • Default config updates:
  • Popup/provider UI refactor:

PR Type

Enhancement, Tests, Documentation


Description

  • Unified OpenAI-compatible provider system: Consolidated all OpenAI-compatible API providers (custom, Ollama, DeepSeek, Moonshot, ChatGLM, OpenRouter, AIML) into a single shared execution path via generateAnswersWithOpenAICompatibleApi, eliminating duplicate provider-specific implementations

  • Configuration migration and schema versioning: Introduced comprehensive migrateUserConfig function that normalizes provider IDs, consolidates legacy secrets, handles custom provider deduplication, and maintains backward compatibility with existing configs

  • Unified provider registry: Created provider-registry.mjs with centralized provider resolution logic supporting both built-in and custom providers, with secret lookup and endpoint URL construction

  • Custom provider UI workflow: Refactored API Modes popup to support custom provider creation/editing with provider selector dropdown, validation, and dynamic provider list management

  • Provider secret management: Added buildProviderSecretUpdate utility for managing provider secrets across built-in and custom providers with legacy field support

  • API mode normalization: Implemented normalizeApiMode utility to ensure consistent API mode object structure throughout the codebase

  • Stream completion signaling: Fixed stream completion consistency to emit final message with { answer: null, done: true, session } format

  • Comprehensive test coverage: Added 16 config migration tests, 8 provider registry tests, and additional tests for provider utilities and stream handling

  • Multilingual support: Updated localization strings across 12 languages for custom provider UI labels and validation messages

  • Removed 6 provider-specific API modules: Deleted ollama-api.mjs, chatglm-api.mjs, aiml-api.mjs, moonshot-api.mjs, openrouter-api.mjs, and deepseek-api.mjs as functionality consolidated into unified handler
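
The stream-completion signaling convention from the bullets above can be sketched as follows. The emitter wrapper is a hypothetical helper; only the final-message shape { answer: null, done: true, session } comes from the PR description.

```javascript
// Hypothetical emitter enforcing the signaling convention: interim chunks
// carry done:false, and exactly one final { answer: null, done: true, session }
// message is posted, even if the stream ends without a finish reason.
function makeStreamEmitter(port, session) {
  let finished = false
  return {
    emitAnswer(answer) {
      if (!finished) port.postMessage({ answer, done: false, session })
    },
    emitDone() {
      if (finished) return // idempotent: a fallback done never double-fires
      finished = true
      port.postMessage({ answer: null, done: true, session })
    },
  }
}
```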


Diagram Walkthrough

flowchart LR
  A["Legacy Provider APIs<br/>Ollama, DeepSeek, etc."] -->|"Consolidated"| B["generateAnswersWithOpenAICompatibleApi"]
  C["User Config<br/>Legacy format"] -->|"migrateUserConfig"| D["Normalized Config<br/>customOpenAIProviders<br/>providerSecrets"]
  B -->|"Resolves via"| E["provider-registry"]
  E -->|"Looks up"| D
  F["API Modes UI"] -->|"Uses"| G["Provider utilities<br/>createProviderId<br/>parseChatCompletionsUrl"]
  G -->|"Updates via"| H["buildProviderSecretUpdate"]
  H -->|"Persists to"| D

File Walkthrough

Relevant files
Enhancement
13 files
index.mjs
Configuration migration and provider registry unification

src/config/index.mjs

  • Added comprehensive configuration migration system with
    migrateUserConfig function to normalize provider IDs, consolidate
    legacy secrets, and handle custom provider deduplication
  • Introduced new config fields: customOpenAIProviders, providerSecrets,
    and configSchemaVersion for unified provider management
  • Implemented migration logic that promotes legacy customUrl and apiKey
    fields into provider-based storage while maintaining backward
    compatibility
  • Enhanced getUserConfig to automatically persist migrated configuration
    changes to storage
+474/-3 
ApiModes.jsx
Custom provider UI editor and provider selector integration

src/popup/sections/ApiModes.jsx

  • Refactored API modes UI to support custom provider management with
    create/edit provider dialogs
  • Added provider selector dropdown and validation for provider name and
    chat completions URL
  • Implemented sanitizeApiModeForSave to ensure proper provider ID and
    API key handling during save
  • Integrated getCustomOpenAIProviders and provider utilities for dynamic
    provider list management
+303/-46
provider-registry.mjs
Unified provider registry and resolution system                   

src/services/apis/provider-registry.mjs

  • Created unified provider registry with getAllOpenAIProviders,
    getProviderById, and resolveOpenAICompatibleRequest functions
  • Implemented provider resolution logic that handles both built-in
    providers (OpenAI, DeepSeek, Ollama, etc.) and custom providers
  • Added getProviderSecret function to resolve API keys from
    providerSecrets map with fallback to legacy fields
  • Provides endpoint URL construction and normalization for various
    provider configurations
+362/-0 
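
A minimal sketch of how the registry's lookup functions might behave. The two-entry catalog and the legacy-fallback rule are stand-ins for illustration, not the PR's literal BUILTIN_PROVIDER_TEMPLATE or getProviderSecret.

```javascript
// Stand-in for the built-in provider catalog (URLs are the providers'
// publicly documented bases, listed here only as examples).
const builtinProviders = [
  { id: 'openai', baseUrl: 'https://api.openai.com/v1' },
  { id: 'deepseek', baseUrl: 'https://api.deepseek.com' },
]

// Resolve a provider ID against built-ins first, then custom providers.
function getProviderById(config, providerId) {
  return (
    builtinProviders.find((p) => p.id === providerId) ||
    (config.customOpenAIProviders || []).find((p) => p.id === providerId) ||
    null
  )
}

// Secret lookup: providerSecrets map first, then a legacy-field fallback
// (illustrative: here only OpenAI had a legacy top-level apiKey field).
function getProviderSecret(config, providerId) {
  const fromMap = config.providerSecrets?.[providerId]
  if (fromMap) return fromMap
  return providerId === 'openai' ? (config.apiKey ?? '') : ''
}
```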
GeneralPart.jsx
Unified provider detection and API key management               

src/popup/sections/GeneralPart.jsx

  • Replaced individual provider detection functions with unified
    resolveOpenAICompatibleRequest call
  • Updated API key input and balance check to use provider-resolved
    credentials via buildProviderSecretUpdate
  • Simplified provider-specific UI logic by consolidating multiple
    isUsingXxxApiModel checks into single isUsingOpenAICompatibleProvider
    flag
+45/-123
openai-api.mjs
OpenAI-compatible API core extraction and unification       

src/services/apis/openai-api.mjs

  • Extracted common OpenAI-compatible request logic into new
    generateAnswersWithOpenAICompatible core function
  • Added generateAnswersWithOpenAICompatibleApi as unified entry point
    that resolves provider configuration and delegates to core
  • Implemented touchOllamaKeepAlive for Ollama-specific keep-alive
    handling
  • Simplified legacy API functions to delegate to unified core
    implementation
+98/-160
index.mjs
Unified background API execution routing                                 

src/background/index.mjs

  • Consolidated multiple provider-specific imports into single
    generateAnswersWithOpenAICompatibleApi call
  • Added isUsingOpenAICompatibleApiSession utility to route all
    OpenAI-compatible providers through unified handler
  • Removed individual provider execution branches for custom, Ollama,
    DeepSeek, Moonshot, ChatGLM, OpenRouter, and AIML APIs
+19/-68 
openai-compatible-core.mjs
OpenAI-compatible API core implementation                               

src/services/apis/openai-compatible-core.mjs

  • Created new core module with generateAnswersWithOpenAICompatible
    function handling both chat and completion endpoints
  • Implements unified request body construction, streaming response
    parsing, and message answer building
  • Supports legacy response field handling via allowLegacyResponseField
    parameter for backward compatibility
  • Centralizes token parameter handling and error management for all
    OpenAI-compatible providers
+160/-0 
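
The streaming core's SSE parsing can be sketched roughly as below. The helper name and exact field handling are assumptions, though the delta.content vs. text distinction mirrors the chat vs. completion endpoint split described above.

```javascript
// Hypothetical parser for one SSE line from an OpenAI-compatible stream.
// Chat endpoints stream choices[0].delta.content; legacy completion
// endpoints stream choices[0].text; '[DONE]' terminates the stream.
function parseSseLine(line) {
  if (!line.startsWith('data:')) return null // comments/keep-alives ignored
  const payload = line.slice(5).trim()
  if (payload === '[DONE]') return { done: true }
  const choice = JSON.parse(payload).choices?.[0] ?? {}
  const answer = choice.delta?.content ?? choice.text ?? ''
  return { done: false, answer }
}
```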
model-name-convert.mjs
API mode normalization and comparison utilities                   

src/utils/model-name-convert.mjs

  • Added normalizeApiMode function to ensure consistent API mode object
    structure with default values
  • Updated apiModeToModelName and getApiModesFromConfig to use normalized
    API modes for safer property access
  • Enhanced isApiModeSelected comparison logic to normalize both sides
    before comparing relevant fields
+40/-5   
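
A sketch of the normalization idea: fill in defaults so every apiMode object has the same shape before comparison or property access. The default field set here is an assumption based on the fields named in this PR.

```javascript
// Assumed default shape for an apiMode object (field list is illustrative).
const API_MODE_DEFAULTS = {
  groupName: '',
  itemName: '',
  isCustom: false,
  providerId: '',
  apiKey: '',
}

function normalizeApiMode(apiMode) {
  if (!apiMode || typeof apiMode !== 'object') return apiMode // pass through null/strings
  const normalized = { ...API_MODE_DEFAULTS }
  for (const [key, value] of Object.entries(apiMode)) {
    if (value !== undefined) normalized[key] = value // defined fields win over defaults
  }
  return normalized
}
```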
init-session.mjs
Session initialization API mode normalization                       

src/services/init-session.mjs

  • Added import of normalizeApiMode utility function
  • Applied normalizeApiMode to session initialization to ensure API mode
    objects have consistent structure
+6/-2     
custom-api.mjs
Consolidate custom API into unified OpenAI-compatible handler

src/services/apis/custom-api.mjs

  • Removed 84 lines of duplicated OpenAI-compatible API request logic
  • Replaced custom implementation with call to unified
    generateAnswersWithOpenAICompatible function
  • Simplified function to delegate to shared core with provider-specific
    parameters
+11/-92 
provider-secret-utils.mjs
Provider secret update utility for API modes                         

src/popup/sections/provider-secret-utils.mjs

  • New utility module with buildProviderSecretUpdate function for
    managing provider secrets
  • Handles both built-in and custom provider secret updates with legacy
    field support
  • Clears inherited mode-level keys matching old provider secrets
  • Syncs selected mode keys to new provider secret values
+78/-0   
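
The behavior in the bullets above (sync inherited keys, keep explicit overrides, leave other providers alone) can be sketched like this; the function body is a hypothetical reconstruction, not the PR's literal buildProviderSecretUpdate.

```javascript
// Hypothetical secret update: modes whose key matched the old provider secret
// are treated as inherited and follow the new key; explicit overrides are kept.
function buildProviderSecretUpdate(config, providerId, newKey) {
  const oldKey = config.providerSecrets?.[providerId] ?? ''
  const providerSecrets = { ...(config.providerSecrets || {}), [providerId]: newKey }
  const customApiModes = (config.customApiModes || []).map((mode) => {
    if (mode.providerId !== providerId) return mode // unrelated providers untouched
    if (mode.apiKey && mode.apiKey !== oldKey) return mode // mode-level override kept
    return { ...mode, apiKey: newKey } // inherited key follows the provider secret
  })
  return { providerSecrets, customApiModes }
}
```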
api-modes-provider-utils.mjs
Provider configuration utilities for API modes UI               

src/popup/sections/api-modes-provider-utils.mjs

  • New utility module with helper functions for provider management in
    API modes UI
  • createProviderId generates unique normalized provider IDs avoiding
    conflicts
  • parseChatCompletionsEndpointUrl validates and parses chat completions
    endpoint URLs
  • resolveProviderChatEndpointUrl resolves endpoint URL from provider
    configuration
+79/-0   
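
The ID-creation behavior described above (normalize the name, then suffix until unique) might look roughly like this; the normalization rules are assumptions for illustration.

```javascript
// Hypothetical slug-style normalization of a provider display name.
function normalizeProviderId(name) {
  return String(name)
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // collapse non-alphanumerics to single dashes
    .replace(/^-+|-+$/g, '')     // trim leading/trailing dashes
}

// Generate a unique ID, appending -2, -3, ... on conflict.
function createProviderId(name, existingIds) {
  const base = normalizeProviderId(name) || 'provider'
  let candidate = base
  for (let n = 2; existingIds.includes(candidate); n++) {
    candidate = `${base}-${n}`
  }
  return candidate
}
```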
wrappers.mjs
API mode normalization in port listener                                   

src/services/wrappers.mjs

  • Added import of normalizeApiMode function from model-name-convert
    utility
  • Added normalization of session.apiMode during port listener
    registration
  • Ensures API mode is normalized before processing in message handlers
+6/-1     
Tests
5 files
migrate-user-config.test.mjs
Configuration migration test suite                                             

tests/unit/config/migrate-user-config.test.mjs

  • Added 16 comprehensive test cases covering config migration scenarios
    including legacy URL/key consolidation, provider ID normalization, and
    deduplication
  • Tests validate migration of custom providers from legacy customUrl
    fields, secret key consolidation, and backward compatibility
  • Covers edge cases like trailing slashes in URLs, multiple modes with
    same URL but different keys, and schema version updates
+447/-0 
provider-registry.test.mjs
Provider registry resolution test suite                                   

tests/unit/services/apis/provider-registry.test.mjs

  • Added 8 test cases validating provider resolution logic including
    endpoint type detection and URL normalization
  • Tests cover custom provider resolution by normalized ID, legacy URL
    matching, and duplicate /v1 path prevention
  • Validates correct endpoint URL construction for OpenAI, Ollama, and
    custom providers
+161/-0 
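
The duplicate /v1 prevention those tests cover can be sketched as a URL-join helper; this is a plausible reconstruction of the behavior, not the registry's literal joinUrl.

```javascript
// Hypothetical URL join that avoids producing ".../v1/v1/..." when the
// configured base URL already ends with /v1.
function joinUrl(baseUrl, path) {
  const base = String(baseUrl).replace(/\/+$/, '')
  const suffix = path.startsWith('/') ? path : `/${path}`
  if (base.endsWith('/v1') && suffix.startsWith('/v1/')) {
    return base + suffix.slice(3) // drop the duplicated /v1 segment
  }
  return base + suffix
}
```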
provider-secret-utils.test.mjs
Unit tests for provider secret update logic                           

tests/unit/popup/provider-secret-utils.test.mjs

  • New test file with 114 lines covering the buildProviderSecretUpdate
    function
  • Tests validate provider secret updates for both built-in and custom
    providers
  • Tests verify clearing of inherited mode-level keys and syncing of
    selected mode keys
  • Tests ensure unrelated providers are not modified during secret
    updates
+114/-0 
api-modes-provider-utils.test.mjs
Unit tests for API modes provider utilities                           

tests/unit/popup/api-modes-provider-utils.test.mjs

  • New test file with 60 lines covering provider utility functions
  • Tests validate provider ID creation with conflict avoidance
  • Tests verify chat completions endpoint URL parsing and validation
  • Tests ensure endpoint URL resolution from provider base URL and path
+60/-0   
openai-api-compat.test.mjs
Stream completion signaling consistency tests                       

tests/unit/services/apis/openai-api-compat.test.mjs

  • Updated existing test to expect { answer: null, done: true, session }
    final message instead of { done: true }
  • Added new test case for fallback done message when stream ends without
    finish reason
  • New test verifies stream completion signaling consistency for
    incomplete streams
+39/-1   
Documentation
13 files
main.json
Japanese localization updates                                                       

src/_locales/ja/main.json

  • Added Japanese translations for Custom Provider and Provider UI labels
  • Added error message translation for Chat Completions URL validation
+3/-0     
main.json
Turkish localization updates                                                         

src/_locales/tr/main.json

  • Added Turkish translations for Custom Provider and Provider UI labels
  • Added error message translation for Chat Completions URL validation
+3/-0     
main.json
Traditional Chinese localization updates                                 

src/_locales/zh-hant/main.json

  • Added Traditional Chinese translations for Custom Provider and
    Provider UI labels
  • Added error message translation for Chat Completions URL validation
+3/-0     
main.json
Russian localization updates                                                         

src/_locales/ru/main.json

  • Added Russian translations for Custom Provider and Provider UI labels
  • Added error message translation for Chat Completions URL validation
+3/-0     
main.json
Italian localization updates                                                         

src/_locales/it/main.json

  • Added Italian translations for Custom Provider and Provider UI labels
  • Added error message translation for Chat Completions URL validation
+3/-0     
main.json
French localization updates                                                           

src/_locales/fr/main.json

  • Added French translations for Custom Provider and Provider UI labels
  • Added error message translation for Chat Completions URL validation
+3/-0     
main.json
German localization updates                                                           

src/_locales/de/main.json

  • Added German translations for Custom Provider and Provider UI labels
  • Added error message translation for Chat Completions URL validation
+3/-0     
main.json
Korean localization updates                                                           

src/_locales/ko/main.json

  • Added Korean translations for Custom Provider and Provider UI labels
  • Added error message translation for Chat Completions URL validation
+3/-0     
main.json
Spanish localization updates                                                         

src/_locales/es/main.json

  • Added Spanish translations for Custom Provider and Provider UI labels
  • Added error message translation for Chat Completions URL validation
+3/-0     
main.json
Indonesian localization updates                                                   

src/_locales/in/main.json

  • Added Indonesian translations for Custom Provider and Provider UI
    labels
  • Added error message translation for Chat Completions URL validation
+3/-0     
main.json
Portuguese localization updates                                                   

src/_locales/pt/main.json

  • Added Portuguese translations for Custom Provider and Provider UI
    labels
  • Added error message translation for Chat Completions URL validation
+3/-0     
main.json
Simplified Chinese localization updates                                   

src/_locales/zh-hans/main.json

  • Added Simplified Chinese translations for Custom Provider and Provider
    UI labels
  • Added error message translation for Chat Completions URL validation
+3/-0     
main.json
Localization strings for custom provider UI                           

src/_locales/en/main.json

  • Added new localization string "Custom Provider" for UI display
  • Added new localization string "Provider" for provider selection UI
  • Added new localization string "Please enter a full Chat Completions
    URL" for URL validation error message
+3/-0     
Configuration changes
1 files
openai-provider-mappings.mjs
Provider ID and legacy field mapping configuration             

src/config/openai-provider-mappings.mjs

  • New configuration mapping module with provider ID to legacy API key
    field mappings
  • Defines LEGACY_API_KEY_FIELD_BY_PROVIDER_ID for 8 built-in providers
  • Provides reverse mapping LEGACY_SECRET_KEY_TO_PROVIDER_ID for
    migration
  • Maps legacy group names to normalized provider IDs via
    OPENAI_COMPATIBLE_GROUP_TO_PROVIDER_ID
+30/-0   
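
The shape of such a mapping module might look like this. The legacy field names below are assumptions for illustration (the PR covers 8 built-in providers; only a few are shown), but the reverse-mapping derivation is the point.

```javascript
// Hypothetical provider-ID -> legacy config field mapping (partial,
// field names assumed; remaining built-in providers omitted here).
const LEGACY_API_KEY_FIELD_BY_PROVIDER_ID = {
  chatglm: 'chatglmApiKey',
  deepseek: 'deepseekApiKey',
  moonshot: 'moonshotApiKey',
}

// Reverse mapping for migration: legacy secret field -> provider ID.
const LEGACY_SECRET_KEY_TO_PROVIDER_ID = Object.fromEntries(
  Object.entries(LEGACY_API_KEY_FIELD_BY_PROVIDER_ID).map(([id, field]) => [field, id]),
)
```

Deriving the reverse map from the forward map keeps the two in sync by construction, so adding a provider only requires one edit.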
Additional files
6 files
aiml-api.mjs +0/-12   
chatglm-api.mjs +0/-14   
deepseek-api.mjs +0/-12   
moonshot-api.mjs +0/-12   
ollama-api.mjs +0/-36   
openrouter-api.mjs +0/-12   

Summary by CodeRabbit

  • New Features

    • Provider manager UI: create/edit/select custom providers, choose provider per API mode, and override mode-specific API keys; balance/billing checks per provider.
  • Refactor

    • Unified OpenAI‑compatible request path that routes all compatible providers through a single streaming core and provider registry.
  • Chores

    • Automatic migration of legacy settings into provider model; added many locale strings.
  • Tests

    • Extensive unit tests for migration, provider utilities, registry, streaming core, and popup config.



coderabbitai bot commented Feb 24, 2026

📝 Walkthrough


Consolidates per-provider adapters into a unified OpenAI‑compatible provider registry and streaming core; adds provider catalog, providerSecrets, migration and provider-management UI, apiMode normalization, and routes background execution through the new OpenAI-compatible generation path.

Changes

Cohort / File(s) Summary
Provider registry & core
src/services/apis/provider-registry.mjs, src/services/apis/openai-compatible-core.mjs
Add built‑in+custom provider catalog, ID/URL/secret resolution, endpoint-type resolution, and centralized SSE streaming core generateAnswersWithOpenAICompatible.
OpenAI wrappers & delegation
src/services/apis/openai-api.mjs, src/services/apis/custom-api.mjs
Replace inline per-provider SSE implementations with calls into the OpenAI‑compatible core; add base-URL normalization, model/provider resolution, and Ollama keep‑alive handling.
Removed thin adapters
src/services/apis/*-api.mjs
Remove many small adapter modules (aiml, chatglm, deepseek, moonshot, ollama, openrouter, etc.); behavior consolidated into provider registry + openai-compatible core.
Background & session routing
src/background/index.mjs, src/services/init-session.mjs, src/services/wrappers.mjs
Normalize apiMode at session init/wrappers, add isUsingOpenAICompatibleApiSession, and route executeApi to the unified OpenAI-compatible provider branch.
Config, mappings & migration
src/config/index.mjs, src/config/openai-provider-mappings.mjs
Add migration pipeline, new config fields (customOpenAIProviders, providerSecrets, configSchemaVersion), and legacy secret↔provider mappings plus normalization/persistence logic.
Popup UX & provider UI
src/popup/sections/ApiModes.jsx, src/popup/sections/GeneralPart.jsx, src/popup/sections/api-modes-provider-utils.mjs
Add provider selector/editor flows, provider-aware apiMode editing/saving, provider validation/reference checks, and utilities for provider ID/URL parsing and selection resolution.
Provider secret management
src/popup/sections/provider-secret-utils.mjs
Add utilities to build provider-secret updates, commit signatures, session migration/rollback for provider-secret overrides, and selection-preserving config updates.
Balance & formatting helpers
src/popup/sections/general-balance-utils.mjs
Add helper functions for balance formatting, billing base-URL normalization, cache keys, and OpenAI usage-page decisioning.
Model-name / apiMode normalization
src/utils/model-name-convert.mjs
Add normalizeApiMode, include providerId in apiMode shapes, normalize comparisons, and update related utilities.
Popup write/rollback queue
src/popup/Popup.jsx, src/popup/popup-config-utils.mjs
Add serialized config write queue, per-key latest-touch tracking, persisted baseline ref, optimistic merge + rollback patch generation.
UI/labels & conversation display
src/components/ConversationCard/index.jsx
Use provider-aware display helpers for AI names and apiMode labels.
I18n
src/_locales/*/main.json
Add provider-related translation keys (e.g., "Custom Provider", "Provider", "Please enter a full Chat Completions URL", "Delete", "Edit", selection/loading messages).
Tests added/updated
tests/unit/...
Add/extend extensive tests for migration, provider registry, provider-secret utils, openai-compatible core behavior, api-mode normalization, popup config write/rollback, and balance utilities.

Sequence Diagram(s)

sequenceDiagram
  participant Port as UI Port
  participant BG as Background
  participant Registry as Provider Registry
  participant Core as OpenAI-Compatible Core
  participant Ext as External Provider/API

  Port->>BG: request generation (question, session)
  BG->>Registry: resolveOpenAICompatibleRequest(config, session)
  Registry-->>BG: { providerId, requestUrl, apiKey, endpointType, model }
  BG->>Core: generateAnswersWithOpenAICompatible({port, question, session, endpointType, requestUrl, model, apiKey})
  Core->>Ext: open SSE/HTTP stream (requestUrl, headers, body)
  Ext-->>Core: SSE stream (partial tokens/events)
  Core->>Port: post interim answers ({answer, done:false})
  Ext-->>Core: final event / [DONE]
  Core->>Port: post final ({answer:null, done:true, session})
  Core->>BG: persist Q/A record

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Suggested labels

Review effort 3/5

Poem

🐰 I hopped through configs, keys in tow,
Providers gathered tidy in a row,
Streams now flow on one bright track,
Old adapters folded, nothing lacks,
A rabbit cheers: unified we go!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 5.30%, which is insufficient; the required threshold is 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed: Check skipped - CodeRabbit’s high-level summary is enabled.
  • Title Check ✅ Passed: The title 'Unify OpenAI-compatible providers and custom provider flow' accurately summarizes the main objective of the PR, which consolidates OpenAI-compatible API handling and implements the custom provider workflow.


@gemini-code-assist

Summary of Changes

Hello @PeterDaveHello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly refactors the extension's architecture for managing AI API providers. By unifying OpenAI-compatible API interactions and introducing a comprehensive provider registry with robust configuration migration, the changes aim to improve maintainability, simplify the addition of new providers, and enhance the user experience for custom API configurations. This also ensures better consistency in how API keys and endpoints are handled across the application.

Highlights

  • Unified OpenAI-compatible API Handling: Consolidated all OpenAI-compatible API execution logic into a single shared core function, replacing multiple individual API implementations and simplifying the codebase.
  • Enhanced Provider Registry and Configuration: Introduced a robust provider registry for both built-in and custom OpenAI-compatible providers, centralizing their management, secret lookup, and configuration.
  • Comprehensive Configuration Migration: Implemented a new configuration schema versioning system and a migration function to normalize provider IDs, migrate legacy API keys, deduplicate custom providers, and ensure backward compatibility for existing user settings.
  • Improved Custom Provider Workflow in UI: Refactored the API Modes section in the popup UI to support the new provider registry, allowing users to define and manage custom providers with unique IDs, base URLs, and API key handling.
  • Streamlined API Key Management: Centralized API key storage and retrieval through providerSecrets, ensuring that API keys are consistently managed across different providers and API modes, and handling updates gracefully.


Changelog
  • src/background/index.mjs
    • Removed direct imports and calls for individual OpenAI-compatible APIs (e.g., Custom, Ollama, Moonshot, DeepSeek, ChatGLM, OpenRouter, AIML, GPT Completion).
    • Introduced a new isUsingOpenAICompatibleApiSession utility function to determine if a session should use the unified API handler.
    • Replaced multiple conditional API calls with a single call to generateAnswersWithOpenAICompatibleApi for all OpenAI-compatible providers.
  • src/config/index.mjs
    • Added providerId field to the default API mode configuration.
    • Introduced new configuration fields: customOpenAIProviders, providerSecrets, and configSchemaVersion.
    • Implemented a migrateUserConfig function to handle schema versioning, normalize provider IDs, migrate legacy API keys to providerSecrets, and deduplicate/rename custom providers.
    • Updated getUserConfig to apply the migrateUserConfig function and persist any necessary migrations.
  • src/popup/sections/ApiModes.jsx
    • Removed CustomApiKeyGroups and CustomUrlGroups imports, relying on the new provider registry.
    • Added LEGACY_CUSTOM_PROVIDER_ID constant for backward compatibility.
    • Introduced defaultProviderDraft for creating new custom providers.
    • Added utility functions normalizeProviderId, createProviderId, normalizeBaseUrl, and sanitizeApiModeForSave for UI-related provider management.
    • Updated useLayoutEffect to fetch customProviders from the new registry.
    • Refactored the onSaveEditing function to handle new custom provider creation and update existing API modes with provider IDs and secrets.
    • Modified the API mode editing component to include a provider selector for choosing existing custom providers or creating new ones.
    • Adjusted the logic for setting editingApiMode and providerSelector when editing an existing API mode or adding a new one.
  • src/popup/sections/GeneralPart.jsx
    • Removed imports for individual API model checks (e.g., isUsingOpenAiApiModel, isUsingChatGLMApiModel, isUsingMoonshotApiModel, etc.).
    • Introduced LEGACY_API_KEY_FIELD_BY_PROVIDER_ID mapping for secret migration.
    • Added buildProviderSecretUpdate function to manage API key updates across providerSecrets and legacy fields, and to sync API keys in customApiModes and apiMode.
    • Updated getBalance function to use the resolved OpenAI-compatible provider's base URL and API key.
    • Modified the API key input field to dynamically display the API key from the selectedProviderApiKey and update it using buildProviderSecretUpdate.
    • Removed specific API key input fields for ChatGLM, Moonshot, DeepSeek, Ollama, OpenRouter, and AIML, centralizing API key management.
  • src/services/apis/custom-api.mjs
    • Removed extensive custom API logic, including SSE fetching and message parsing.
    • Replaced the custom API implementation with a call to the new generateAnswersWithOpenAICompatible core function, passing relevant parameters.
  • src/services/apis/openai-api.mjs
    • Removed direct SSE fetching and message parsing logic from generateAnswersWithGptCompletionApi and generateAnswersWithChatgptApiCompat.
    • Introduced normalizeBaseUrl and resolveModelName utility functions.
    • Added touchOllamaKeepAlive function for Ollama provider-specific keep-alive requests.
    • Refactored generateAnswersWithGptCompletionApi and generateAnswersWithChatgptApiCompat to utilize the new generateAnswersWithOpenAICompatible core function.
    • Added generateAnswersWithOpenAICompatibleApi as a unified entry point for all OpenAI-compatible providers, resolving provider details and calling the core function.
  • src/services/apis/openai-compatible-core.mjs
    • Added new file implementing generateAnswersWithOpenAICompatible.
    • Implemented a generic SSE fetching and message parsing mechanism for OpenAI-compatible APIs.
    • Provided helper functions buildHeaders, buildMessageAnswer, and hasFinished for consistent API interaction.
    • Dynamically constructs request bodies for both 'chat' and 'completion' endpoint types, handling conversation context and token parameters.
  • src/services/apis/provider-registry.mjs
    • Added new file to manage OpenAI-compatible API providers.
    • Defined DEFAULT_CHAT_PATH, DEFAULT_COMPLETION_PATH, and LEGACY_KEY_BY_PROVIDER_ID constants.
    • Defined BUILTIN_PROVIDER_TEMPLATE for various pre-configured providers (OpenAI, DeepSeek, Moonshot, OpenRouter, AIML, ChatGLM, Ollama, Legacy Custom).
    • Defined OPENAI_COMPATIBLE_GROUP_TO_PROVIDER_ID mapping for legacy group names.
    • Implemented utility functions like getModelNamePresetPart, resolveProviderIdFromLegacyModelName, isLegacyCompletionModelName, toStringOrEmpty, trimSlashes, ensureLeadingSlash, and joinUrl.
    • Provided functions to build built-in providers (buildBuiltinProviders), normalize custom providers (normalizeCustomProvider), get custom providers (getCustomOpenAIProviders), and get all providers (getAllOpenAIProviders).
    • Implemented functions to resolve provider ID for a session (resolveProviderIdForSession), endpoint type (resolveEndpointTypeForSession), get provider by ID (getProviderById), and retrieve provider secrets (getProviderSecret).
    • Added resolveOpenAICompatibleRequest as a central function to determine the correct provider, endpoint, URL, and API key for a given session.
  • src/services/init-session.mjs
    • Imported normalizeApiMode from model-name-convert.mjs.
    • Applied normalizeApiMode to the apiMode during session initialization to ensure consistency.
  • src/services/wrappers.mjs
    • Imported normalizeApiMode from model-name-convert.mjs.
    • Applied normalizeApiMode to session.apiMode within the registerPortListener to ensure API mode objects are consistently normalized.
  • src/utils/model-name-convert.mjs
    • Added providerId field to the modelNameToApiMode function's return object for custom API modes.
    • Introduced a new normalizeApiMode function to ensure API mode objects have consistent structure and default values.
    • Updated apiModeToModelName to use normalizeApiMode for robust handling of API mode objects.
    • Updated getApiModesFromConfig to use normalizeApiMode when processing custom API modes and to correctly handle Ollama model names.
    • Modified isApiModeSelected to use normalizeForCompare (which leverages normalizeApiMode) for more reliable comparison of API mode objects.
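
The normalizeApiMode helper described in the last group above can be sketched as a defaults-merging normalizer. This is a minimal illustration only: the field names (groupName, customUrl, providerId, etc.) are taken from this summary, and the real implementation may differ.

```javascript
// Sketch of an apiMode normalizer: merge known fields over defaults so every
// apiMode object has a consistent shape. Field names are assumptions based on
// the PR summary, not the exact implementation.
const API_MODE_DEFAULTS = {
  groupName: '',
  itemName: '',
  isCustom: false,
  customName: '',
  customUrl: '',
  apiKey: '',
  providerId: '',
}

function normalizeApiMode(apiMode) {
  if (!apiMode || typeof apiMode !== 'object') return { ...API_MODE_DEFAULTS }
  const normalized = { ...API_MODE_DEFAULTS }
  for (const key of Object.keys(API_MODE_DEFAULTS)) {
    if (apiMode[key] !== undefined && apiMode[key] !== null) normalized[key] = apiMode[key]
  }
  return normalized
}
```

Normalizing at every boundary (session init, port listener, comparison) is what lets the rest of the code assume a stable shape.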


qodo-code-review bot commented Feb 24, 2026

PR Compliance Guide 🔍

Below is a summary of compliance checks for this PR:

Security Compliance
SSRF credential exfiltration

Description: resolveUrlFromProvider() allows session.apiMode.customUrl (for customApiModelKeys) to
fully override the request destination while getProviderSecret() can still supply a real
provider API key (e.g., OpenAI) from config.providerSecrets, enabling a
malicious/compromised config (or manually edited storage) to exfiltrate a user's API key
by sending an authenticated request to an attacker-controlled URL (SSRF/credential leak).
provider-registry.mjs [270-318]

Referred Code
function resolveUrlFromProvider(provider, endpointType, config, session) {
  if (!provider) return ''

  const apiModeCustomUrl =
    endpointType === 'chat' &&
    session?.apiMode &&
    typeof session.apiMode === 'object' &&
    session.apiMode.groupName === 'customApiModelKeys'
      ? toStringOrEmpty(session.apiMode.customUrl).trim()
      : ''
  if (apiModeCustomUrl) return apiModeCustomUrl

  if (endpointType === 'completion') {
    if (provider.completionsUrl) return provider.completionsUrl
    if (provider.completionsPath) return joinUrl(provider.baseUrl, provider.completionsPath)
  } else {
    if (provider.chatCompletionsUrl) return provider.chatCompletionsUrl
    if (provider.chatCompletionsPath) return joinUrl(provider.baseUrl, provider.chatCompletionsPath)
  }

  if (provider.id === 'legacy-custom-default') {


 ... (clipped 28 lines)
Ticket Compliance
🎫 No ticket provided
Codebase Duplication Compliance
Codebase context is not defined

Follow the guide to enable codebase context checks.

Custom Compliance
🟢
Generic: Meaningful Naming and Self-Documenting Code

Objective: Ensure all identifiers clearly express their purpose and intent, making code
self-documenting

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

🔴
Generic: Robust Error Handling and Edge Case Management

Objective: Ensure comprehensive error handling that provides meaningful context and graceful
degradation

Status:
Missing URL validation: getBalance can build a billing URL from an empty/undefined base URL
(selectedProvider?.baseUrl || config.customOpenAiApiUrl), potentially resulting in an
invalid or relative fetch without graceful handling.

Referred Code
const getBalance = async () => {
  const openAiApiUrl = selectedProvider?.baseUrl || config.customOpenAiApiUrl
  const response = await fetch(`${openAiApiUrl}/dashboard/billing/credit_grants`, {
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${selectedProviderApiKey}`,
    },
  })
  if (response.ok) setBalance((await response.json()).total_available.toFixed(2))
  else {
    const billing = await checkBilling(selectedProviderApiKey, openAiApiUrl)
    if (billing && billing.length > 2 && billing[2]) setBalance(`${billing[2].toFixed(2)}`)
    else openUrl('https://platform.openai.com/account/usage')


Generic: Secure Logging Practices

Objective: To ensure logs are useful for debugging and auditing without exposing sensitive
information like PII, PHI, or cardholder data.

Status:
Sensitive debug logs: The new unified core logs raw SSE messages and conversation history via console.debug,
which can include user content and other sensitive data in logs.

Referred Code
  console.debug('conversation history', { content: session.conversationRecords })
  port.postMessage({ answer: null, done: true, session: session })
}

await fetchSSE(requestUrl, {
  method: 'POST',
  signal: controller.signal,
  headers: buildHeaders(apiKey, extraHeaders),
  body: JSON.stringify(requestBody),
  onMessage(message) {
    console.debug('sse message', message)
    if (finished) return
    if (message.trim() === '[DONE]') {
      finish()
      return
    }
    let data
    try {
      data = JSON.parse(message)
    } catch (error) {
      console.debug('json error', error)


 ... (clipped 1 lines)


Generic: Comprehensive Audit Trails

Objective: To create a detailed and reliable record of critical system actions for security analysis
and compliance.

Status:
No audit logging: The new migration path writes sensitive configuration/secrets to local storage without any
audit trail context (who/when/what/outcome), making post-incident reconstruction
difficult.

Referred Code
export async function getUserConfig() {
  const options = await Browser.storage.local.get(Object.keys(defaultConfig))
  const { migrated, dirty } = migrateUserConfig(options)
  if (dirty) {
    const payload = {
      customChatGptWebApiUrl: migrated.customChatGptWebApiUrl,
      customApiModes: migrated.customApiModes,
      customOpenAIProviders: migrated.customOpenAIProviders,
      providerSecrets: migrated.providerSecrets,
      configSchemaVersion: migrated.configSchemaVersion,
    }
    if (migrated.apiMode !== undefined) payload.apiMode = migrated.apiMode
    await Browser.storage.local.set(payload)
  }


Generic: Secure Error Handling

Objective: To prevent the leakage of sensitive system information through error messages while
providing sufficient detail for internal debugging.

Status:
Provider error exposed: The SSE onError path throws an Error built from resp.json() or status text, which may
propagate provider-specific details to user-facing surfaces depending on upstream
handling.

Referred Code
async onError(resp) {
  port.onMessage.removeListener(messageListener)
  port.onDisconnect.removeListener(disconnectListener)
  if (resp instanceof Error) throw resp
  const error = await resp.json().catch(() => ({}))
  throw new Error(!isEmpty(error) ? JSON.stringify(error) : `${resp.status} ${resp.statusText}`)
},


Generic: Security-First Input Validation and Data Handling

Objective: Ensure all data inputs are validated, sanitized, and handled securely to prevent
vulnerabilities

Status:
Weak URL validation: Custom provider baseUrl/endpoint derivation relies on minimal normalization (trim/trailing
slash removal and regex heuristics) without stricter validation (scheme/host
allowlisting), which may permit unsafe or unintended request targets depending on
extension threat model.

Referred Code
function normalizeBaseUrl(value) {
  return String(value || '')
    .trim()
    .replace(/\/+$/, '')
}

function sanitizeApiModeForSave(apiMode) {
  const nextApiMode = { ...apiMode }
  if (nextApiMode.groupName !== 'customApiModelKeys') {
    nextApiMode.providerId = ''
    nextApiMode.apiKey = ''
    return nextApiMode
  }
  if (!nextApiMode.providerId) nextApiMode.providerId = LEGACY_CUSTOM_PROVIDER_ID
  return nextApiMode
}

export function ApiModes({ config, updateConfig }) {
  const { t } = useTranslation()
  const [editing, setEditing] = useState(false)
  const [editingApiMode, setEditingApiMode] = useState(defaultApiMode)


 ... (clipped 86 lines)

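A minimal sketch of the stricter validation this finding asks for, using the WHATWG URL parser. The http/https scheme allowlist is an assumption about the extension's threat model, not existing project code.

```javascript
// Hedged sketch: validate a user-supplied base URL before using it as a
// request target. Parsing with `new URL` rejects malformed input, and the
// scheme check blocks non-HTTP targets (e.g. javascript: or file: URLs).
function isSafeBaseUrl(value) {
  let url
  try {
    url = new URL(String(value || '').trim())
  } catch {
    return false
  }
  return url.protocol === 'https:' || url.protocol === 'http:'
}
```

A host allowlist could be layered on top for built-in providers, while custom providers would at minimum get the scheme check.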

Compliance status legend 🟢 - Fully Compliant
🟡 - Partially Compliant
🔴 - Not Compliant
⚪ - Requires Further Human Verification
🏷️ - Compliance label


qodo-code-review bot commented Feb 24, 2026

PR Code Suggestions ✨

Explore these optional code suggestions:

Category / Suggestion / Impact
High-level
The migration logic is overly complex

The migrateUserConfig function is over 300 lines long and overly complex,
creating a high risk of corrupting user data. Consider a simpler, staged
migration to improve maintainability and reduce risk.

Examples:

src/config/index.mjs [796-1083]
function migrateUserConfig(options) {
  const migrated = { ...options }
  let dirty = false

  if (migrated.customChatGptWebApiUrl === 'https://chat.openai.com') {
    migrated.customChatGptWebApiUrl = 'https://chatgpt.com'
    dirty = true
  }

  const providerSecrets =

 ... (clipped 278 lines)

Solution Walkthrough:

Before:

// src/config/index.mjs
function migrateUserConfig(options) {
  const migrated = { ...options };
  let dirty = false;

  // ... ~300 lines of interwoven logic ...

  // Migrate legacy secrets to providerSecrets
  for (const [legacyKey, providerId] of Object.entries(LEGACY_SECRET_KEY_TO_PROVIDER_ID)) { ... }

  // Normalize and deduplicate custom provider IDs
  // ... complex logic with sets and maps ...

  // Migrate customApiModes, creating new providers on the fly
  for (const apiMode of customApiModes) {
    if (apiMode.groupName !== 'customApiModelKeys') { ... } 
    else {
      // ... create new providers from customUrl ...
      // ... migrate apiMode keys to providerSecrets ...
    }
  }

  // Migrate selected apiMode
  if (migrated.apiMode) { ... }

  return { migrated, dirty };
}

After:

// A simpler, staged migration approach
// src/config/index.mjs

function migrateLegacySecrets(config) {
  // Only handles migrating top-level API keys to providerSecrets.
  // ...
  return newConfig;
}

function migrateCustomProviders(config) {
  // Only handles normalizing custom provider definitions.
  // ...
  return newConfig;
}

function migrateApiModes(config) {
  // Only handles linking apiModes to the new provider IDs.
  // May flag ambiguous cases for user review instead of auto-resolving.
  // ...
  return newConfig;
}

function migrateUserConfig(options) {
    let migrated = { ...options };
    migrated = migrateLegacySecrets(migrated);
    migrated = migrateCustomProviders(migrated);
    migrated = migrateApiModes(migrated);
    // ... other simple migrations ...
    return { migrated, dirty };
}
Suggestion importance[1-10]: 8


Why: The suggestion correctly identifies the high complexity and risk of the monolithic migrateUserConfig function, which is critical for preventing user data loss and ensuring future maintainability.

Medium
General
Build billing URL from provider request

To correctly query billing information for all providers, derive the billing
endpoint's base URL from the fully resolved requestUrl instead of using the
provider's baseUrl.

src/popup/sections/GeneralPart.jsx [206-212]

-const openAiApiUrl = selectedProvider?.baseUrl || config.customOpenAiApiUrl
-const response = await fetch(`${openAiApiUrl}/dashboard/billing/credit_grants`, {
+// extract billing base from the provider request URL
+const billingBase = selectedProviderRequest.requestUrl.replace(
+  /(\/v1\/chat\/completions|\/v1\/completions).*$/,
+  ''
+)
+const response = await fetch(`${billingBase}/dashboard/billing/credit_grants`, {
   headers: {
     'Content-Type': 'application/json',
     Authorization: `Bearer ${selectedProviderApiKey}`,
   },
 })
Suggestion importance[1-10]: 7


Why: The suggestion correctly points out that the billing check should derive its base URL from the resolved request URL, not the provider's base URL, to correctly handle providers with non-standard URL structures.

Medium
Remove legacy API keys after migration

After migrating legacy API keys to the new providerSecrets map, explicitly
remove the old top-level keys from browser storage to avoid redundant data.

src/config/index.mjs [1100-1101]

 if (migrated.apiMode !== undefined) payload.apiMode = migrated.apiMode
 await Browser.storage.local.set(payload)
 
+const legacyKeysToRemove = Object.keys(LEGACY_SECRET_KEY_TO_PROVIDER_ID).filter(
+  (key) => migrated[key] !== undefined,
+)
+if (legacyKeysToRemove.length > 0) {
+  await Browser.storage.local.remove(legacyKeysToRemove)
+}
+
Suggestion importance[1-10]: 6


Why: The suggestion correctly identifies that legacy API keys are not removed from storage after migration and proposes a clean solution to remove them, improving data hygiene and preventing potential confusion.

Low
Possible issue
Use customModelApiUrl for custom completions

For the legacy custom provider, ensure completion requests use
config.customModelApiUrl before falling back to config.customOpenAiApiUrl to
correctly route to custom endpoints.

src/services/apis/provider-registry.mjs [290-298]

 if (provider.id === 'legacy-custom-default') {
   if (endpointType === 'completion') {
-    return `${trimSlashes(config.customOpenAiApiUrl || 'https://api.openai.com')}/v1/completions`
+    const customUrl = toStringOrEmpty(config.customModelApiUrl).trim()
+    return customUrl
+      ? `${trimSlashes(customUrl)}/v1/completions`
+      : `${trimSlashes(config.customOpenAiApiUrl || 'https://api.openai.com')}/v1/completions`
   }
   return (
     toStringOrEmpty(config.customModelApiUrl).trim() ||
     'http://localhost:8000/v1/chat/completions'
   )
 }
Suggestion importance[1-10]: 7


Why: The suggestion correctly identifies that legacy completion requests were incorrectly using customOpenAiApiUrl instead of customModelApiUrl, and provides a fix to respect the user's custom endpoint for completions.

Medium

This comment was marked as resolved.

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request significantly refactors the API mode handling by unifying OpenAI-compatible providers and introducing a custom provider flow. The changes centralize API logic, improve configuration management through schema versioning and migration, and enhance the UI for custom provider creation and selection. The refactoring reduces code duplication and sets a solid foundation for future extensibility. Overall, the changes are well-structured and address key areas for maintainability and user experience.

I am having trouble creating individual review comments, so my feedback is collected below.

src/background/index.mjs (437-458)

medium

The removal of the isUsingCustomModel(session) block and its associated generateAnswersWithCustomApi calls is a direct result of the unification efforts. This simplifies the executeApi function by delegating custom API handling to the new generateAnswersWithOpenAICompatibleApi function, reducing conditional logic and improving code clarity.

src/background/index.mjs (566-568)

medium

The introduction of isUsingOpenAICompatibleApiSession(session) and the call to generateAnswersWithOpenAICompatibleApi consolidates the logic for various OpenAI-compatible providers. This is a core part of the unification, making the executeApi function more streamlined and extensible. All previously separate API calls for OpenAI, Moonshot, ChatGLM, DeepSeek, Ollama, OpenRouter, and AIML are now routed through this single entry point.

src/config/index.mjs (550-556)

medium

The addition of providerId, customOpenAIProviders, providerSecrets, and configSchemaVersion to the defaultConfig is crucial for the new configuration schema and migration system. providerId allows custom API modes to link to specific providers, while customOpenAIProviders and providerSecrets manage custom provider details and their API keys. configSchemaVersion enables future-proof migration logic.

src/config/index.mjs (729-733)

medium

The getUserConfig function has been updated to incorporate the migrateUserConfig logic. This ensures that any legacy configurations are automatically migrated to the new schema upon loading, providing backward compatibility and consistency across user settings. The dirty flag correctly triggers a save if migration occurs.

src/popup/sections/ApiModes.jsx (31)

medium

The addition of providerId to defaultApiMode is necessary to support the new provider registry system. This allows API modes to be explicitly linked to a specific provider, whether built-in or custom, enabling more granular control and clearer configuration.

src/popup/sections/ApiModes.jsx (98)

medium

The useLayoutEffect now includes config.customOpenAIProviders as a dependency. This ensures that the UI re-renders and updates the list of custom providers whenever the underlying configuration for custom OpenAI providers changes, maintaining UI consistency with the backend data.

src/popup/sections/ApiModes.jsx (121-134)

medium

The persistApiMode function is a new utility that centralizes the logic for saving API mode configurations. It handles updating customApiModes and optionally customOpenAIProviders, and ensures that the currently selected API mode is also updated if it's being edited. This improves code organization and reduces duplication.

src/popup/sections/ApiModes.jsx (136-195)

medium

The onSaveEditing function has been completely refactored to support the new custom provider creation and selection flow. It now handles the creation of new providers, assigns provider IDs, and manages the clearing of API keys when switching providers. This is a critical piece of the custom provider workflow, ensuring data integrity and proper linking between API modes and providers.

src/popup/sections/ApiModes.jsx (267-288)

medium

This new UI section allows users to select an existing custom provider or create a new one. The providerSelector state manages the selection, and the options are dynamically populated from customProviders. This is a key enhancement for the custom provider workflow, making it user-friendly.

src/popup/sections/ApiModes.jsx (290-305)

medium

This new block provides input fields for creating a new custom provider, including its name and base URL. This directly supports the custom provider creation flow, allowing users to define new OpenAI-compatible endpoints directly within the UI.

src/popup/sections/GeneralPart.jsx (12)

medium

The removal of isUsingOpenAiApiModel and isUsingChatGLMApiModel from imports reflects the consolidation of these checks into the new isUsingOpenAICompatibleProvider utility. This simplifies the import list and centralizes the logic for identifying OpenAI-compatible API usage.

src/popup/sections/GeneralPart.jsx (96-105)

medium

The LEGACY_API_KEY_FIELD_BY_PROVIDER_ID constant maps provider IDs to their legacy API key field names in the configuration. This is essential for the migration and unified secret management, allowing the system to correctly identify and update API keys for various providers.

src/popup/sections/GeneralPart.jsx (107-181)

medium

The buildProviderSecretUpdate function is a critical new utility for managing API keys. It handles updating providerSecrets and also ensures that legacy API key fields are updated for backward compatibility. Furthermore, it propagates API key changes to customApiModes and the apiMode if they are currently selected, ensuring consistency across the application. This function is vital for the new unified secret management system.
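A simplified sketch of what such an update builder might look like, returning a config patch instead of writing storage directly. The legacy field mapping shown is illustrative only, and the real function's signature and field names may differ.

```javascript
// Sketch: build a config patch that writes an API key to providerSecrets,
// mirrors it into the legacy per-provider field (for backward compatibility),
// and syncs any custom API modes bound to the same provider. The legacy
// mapping below is a one-entry illustration, not the PR's full table.
const LEGACY_API_KEY_FIELD_BY_PROVIDER_ID = { chatglm: 'chatglmApiKey' }

function buildProviderSecretUpdate(config, providerId, apiKey) {
  const update = {
    providerSecrets: { ...(config.providerSecrets || {}), [providerId]: apiKey },
  }
  const legacyField = LEGACY_API_KEY_FIELD_BY_PROVIDER_ID[providerId]
  if (legacyField) update[legacyField] = apiKey
  if (Array.isArray(config.customApiModes)) {
    update.customApiModes = config.customApiModes.map((mode) =>
      mode.providerId === providerId ? { ...mode, apiKey } : mode,
    )
  }
  return update
}
```

Returning a patch keeps the function pure and testable; the caller hands the patch to updateConfig.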

src/popup/sections/GeneralPart.jsx (197-203)

medium

These new state variables and derived values (selectedProviderRequest, selectedProviderId, selectedProvider, selectedProviderApiKey, isUsingOpenAICompatibleProvider) are crucial for dynamically displaying and managing API keys for the currently selected OpenAI-compatible provider. They leverage the new provider-registry.mjs to resolve provider information.

src/popup/sections/GeneralPart.jsx (206-217)

medium

The getBalance function has been updated to use the selectedProviderApiKey and openAiApiUrl derived from the new provider registry. This ensures that balance checks are performed against the correct API key and endpoint for the currently active OpenAI-compatible provider, centralizing the logic and making it more robust.

src/popup/sections/GeneralPart.jsx (275)

medium

The condition isUsingOpenAICompatibleProvider replaces multiple individual checks for OpenAI-compatible models. This simplifies the conditional rendering logic for the API key input field, making it more concise and easier to understand.

src/popup/sections/GeneralPart.jsx (326-356)

medium

This block now dynamically renders the API key input and balance check button based on whether an OpenAI-compatible provider is in use (isUsingOpenAICompatibleProvider). The selectedProviderApiKey and selectedProviderId are used to manage the input value and conditional rendering of the 'Get'/'Balance' buttons, centralizing API key management.

src/services/apis/custom-api.mjs (1)

medium

The import statement has been simplified to only include generateAnswersWithOpenAICompatible. This is a direct consequence of unifying the OpenAI-compatible API execution into a shared core module, reducing the number of individual API service imports.

src/services/apis/custom-api.mjs (19-29)

medium

The generateAnswersWithCustomApi function has been refactored to delegate its logic to the new generateAnswersWithOpenAICompatible function. This significantly reduces code duplication and centralizes the core logic for handling OpenAI-compatible API requests, including custom ones. The allowLegacyResponseField: true is important for maintaining backward compatibility with custom APIs that might use a response field.

src/services/apis/openai-api.mjs (3-4)

medium

The import of generateAnswersWithOpenAICompatible and resolveOpenAICompatibleRequest from the new core and provider registry modules is key to unifying API handling. This allows openai-api.mjs to leverage the shared logic instead of duplicating it.

src/services/apis/openai-api.mjs (6-8)

medium

The normalizeBaseUrl function is a useful utility to ensure consistency in API endpoint URLs by trimming trailing slashes. This helps prevent issues with URL construction and improves the robustness of API requests.

src/services/apis/openai-api.mjs (10-21)

medium

The resolveModelName function centralizes the logic for determining the actual model name to be used in API requests, especially for custom models and API modes. This ensures that the correct model identifier is passed to the unified API handler.

src/services/apis/openai-api.mjs (24-40)

medium

The touchOllamaKeepAlive function is a new addition specifically for Ollama providers. It sends a small request to the Ollama endpoint to keep the model loaded, which is important for performance and responsiveness. This demonstrates how provider-specific logic can be integrated while still using a unified API flow.
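As a sketch of the idea: Ollama preloads (and keeps loaded) a model when it receives a generate request that sets keep_alive, so a keep-alive touch reduces to sending one small request. The endpoint path and duration below are assumptions about this PR's usage, not its exact values; building the payload as data keeps it testable without a network call.

```javascript
// Sketch: build the request a keep-alive touch would send to an Ollama
// endpoint. A request with only `model` and `keep_alive` set asks Ollama to
// load the model and hold it in memory. Path and duration are assumptions.
function buildOllamaKeepAliveRequest(endpoint, modelName, keepAlive = '5m') {
  return {
    url: `${String(endpoint || '').replace(/\/+$/, '')}/api/generate`,
    body: JSON.stringify({ model: modelName, keep_alive: keepAlive }),
  }
}

// A caller would then fire it off, e.g.:
//   const req = buildOllamaKeepAliveRequest(config.ollamaEndpoint, model)
//   fetch(req.url, { method: 'POST', body: req.body }).catch(() => {})
```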

src/services/apis/openai-api.mjs (48-58)

medium

The generateAnswersWithGptCompletionApi function has been refactored to use the new generateAnswersWithOpenAICompatible core function. This significantly reduces code duplication and centralizes the logic for handling completion-type API requests, improving maintainability.

src/services/apis/openai-api.mjs (69-70)

medium

The baseUrl for generateAnswersWithChatgptApiCompat now uses normalizeBaseUrl, ensuring consistent URL formatting. This is a minor but good improvement for robustness.

src/services/apis/openai-api.mjs (89-99)

medium

The generateAnswersWithChatgptApiCompat function has been refactored to use the new generateAnswersWithOpenAICompatible core function. This change eliminates duplicated SSE fetching logic and centralizes the handling of chat-completion type API requests, making the code cleaner and more maintainable.

src/services/apis/openai-api.mjs (109-133)

medium

This new function, generateAnswersWithOpenAICompatibleApi, serves as the unified entry point for all OpenAI-compatible providers. It resolves the correct provider, endpoint type, URL, API key, and model using the new provider registry, then delegates to generateAnswersWithOpenAICompatible. This is the cornerstone of the PR's unification efforts, drastically simplifying the executeApi logic in background/index.mjs.

src/services/apis/openai-compatible-core.mjs (8-14)

medium

The buildHeaders function centralizes the logic for constructing HTTP headers, including the Authorization header with the API key. This promotes consistency and reusability across all OpenAI-compatible API requests.
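A header builder of this kind is small enough to sketch in full; the exact signature in the PR may differ.

```javascript
// Sketch of a shared header builder: JSON content type, optional extra
// headers, and a Bearer Authorization header only when a key is present
// (Ollama-style local endpoints often have no key).
function buildHeaders(apiKey, extraHeaders = {}) {
  const headers = { 'Content-Type': 'application/json', ...extraHeaders }
  if (apiKey) headers.Authorization = `Bearer ${apiKey}`
  return headers
}
```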

src/services/apis/openai-compatible-core.mjs (17-28)

medium

The buildMessageAnswer function encapsulates the logic for extracting the answer content from different API response formats (delta, content, text, and legacy response field). This abstraction makes the SSE onMessage handler cleaner and more adaptable to various provider responses.

src/services/apis/openai-compatible-core.mjs (31-33)

medium

The hasFinished function provides a clear and concise way to determine if an API response indicates the end of a stream, based on the finish_reason field. This improves readability in the SSE onMessage handler.
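The two helpers above can be sketched together. The shapes covered are the standard OpenAI streaming chat format (choices[0].delta.content), the non-streaming chat format (choices[0].message.content), the completion format (choices[0].text), and a legacy top-level response field; the real implementation may differ in detail.

```javascript
// Sketch: extract the answer text from a parsed SSE chunk, trying each known
// response shape in turn, and detect end-of-stream via finish_reason.
function buildMessageAnswer(data, allowLegacyResponseField = false) {
  const choice = data?.choices?.[0]
  if (choice?.delta?.content !== undefined) return choice.delta.content
  if (choice?.message?.content !== undefined) return choice.message.content
  if (choice?.text !== undefined) return choice.text
  if (allowLegacyResponseField && data?.response !== undefined) return data.response
  return ''
}

function hasFinished(data) {
  return Boolean(data?.choices?.[0]?.finish_reason)
}
```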

src/services/apis/openai-compatible-core.mjs (49-159)

medium

This new generateAnswersWithOpenAICompatible function is the core of the unified API handling. It abstracts away the details of constructing request bodies for both 'completion' and 'chat' endpoints, handling token parameters, and managing the SSE connection. This significantly reduces code duplication across different OpenAI-compatible API services and makes it easier to add new providers in the future.

src/services/apis/provider-registry.mjs (4-13)

medium

The LEGACY_KEY_BY_PROVIDER_ID constant is crucial for the migration process, mapping older configuration keys to the new provider IDs. This ensures that existing user configurations are correctly interpreted and migrated to the unified secret management system.

src/services/apis/provider-registry.mjs (15-85)

medium

The BUILTIN_PROVIDER_TEMPLATE defines a standardized structure for all built-in OpenAI-compatible providers. This template includes essential information like ID, name, base URLs, and paths, which is fundamental for the new provider registry and unified API handling. It also includes allowLegacyResponseField for backward compatibility.
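An illustrative shape for one template entry; the field values here are assumptions based on the description above, not the PR's actual data.

```javascript
// Hypothetical single entry in the built-in provider template: a stable id,
// display name, base URL, chat path, and a compatibility flag.
const deepseekTemplate = {
  id: 'deepseek',
  name: 'DeepSeek',
  baseUrl: 'https://api.deepseek.com',
  chatCompletionsPath: '/v1/chat/completions',
  allowLegacyResponseField: false,
}
```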

src/services/apis/provider-registry.mjs (88-98)

medium

The OPENAI_COMPATIBLE_GROUP_TO_PROVIDER_ID mapping is vital for translating legacy API mode group names into the new standardized provider IDs. This enables the system to correctly identify which provider corresponds to a given API mode, facilitating the migration and unification.

src/services/apis/provider-registry.mjs (101-134)

medium

The getModelNamePresetPart and resolveProviderIdFromLegacyModelName functions are essential for backward compatibility. They allow the system to infer the correct provider ID from older, less structured model names, ensuring that existing user configurations continue to work seamlessly with the new provider registry.
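The legacy resolution can be sketched as a prefix match against a provider table. The table below is illustrative, not the PR's actual mapping.

```javascript
// Sketch: infer a provider id from an old-style model name by matching a
// known prefix. The prefixes here are assumptions for illustration.
const LEGACY_PREFIX_TO_PROVIDER_ID = {
  moonshot: 'moonshot',
  chatglm: 'chatglm',
  deepseek: 'deepseek',
  openRouter: 'openrouter',
  ollama: 'ollama',
}

function resolveProviderIdFromLegacyModelName(modelName) {
  const name = String(modelName || '')
  for (const [prefix, providerId] of Object.entries(LEGACY_PREFIX_TO_PROVIDER_ID)) {
    if (name.startsWith(prefix)) return providerId
  }
  return ''
}
```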

src/services/apis/provider-registry.mjs (136-139)

medium

The isLegacyCompletionModelName function helps identify models that historically used the 'completion' endpoint. This is important for correctly routing requests to the appropriate endpoint type within the unified API handler.
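A sketch of such a check: historically the completion endpoint served the GPT-3-era text-*/davinci-style models, so a name test suffices. The exact list below is an assumption.

```javascript
// Sketch: detect model names that historically used /v1/completions rather
// than /v1/chat/completions. The name patterns are illustrative assumptions.
function isLegacyCompletionModelName(modelName) {
  return /^(text-|davinci|curie|babbage|ada|gpt-3\.5-turbo-instruct)/.test(String(modelName || ''))
}
```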

src/services/apis/provider-registry.mjs (141-157)

medium

The toStringOrEmpty, trimSlashes, ensureLeadingSlash, and joinUrl utility functions provide robust and consistent string manipulation for URLs and paths. These are crucial for correctly constructing API endpoints, especially when dealing with user-provided custom URLs and paths.
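These helpers are small enough to sketch in full from their names; the actual implementations may differ slightly.

```javascript
// Sketch of the URL/path helpers: coerce to string, strip trailing slashes
// from bases, guarantee a leading slash on paths, and join without producing
// doubled or missing separators.
const toStringOrEmpty = (value) => (value === undefined || value === null ? '' : String(value))

const trimSlashes = (value) => toStringOrEmpty(value).trim().replace(/\/+$/, '')

const ensureLeadingSlash = (value) => {
  const path = toStringOrEmpty(value).trim()
  if (!path) return ''
  return path.startsWith('/') ? path : `/${path}`
}

const joinUrl = (baseUrl, path) => `${trimSlashes(baseUrl)}${ensureLeadingSlash(path)}`
```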

src/services/apis/provider-registry.mjs (160-184)

medium

The buildBuiltinProviders function dynamically constructs the list of built-in providers, incorporating user-specific configuration values like customOpenAiApiUrl and ollamaEndpoint. This ensures that built-in providers are correctly configured based on user settings.

src/services/apis/provider-registry.mjs (186-201)

medium

The normalizeCustomProvider function ensures that custom provider configurations are consistently structured and validated. It assigns default values and normalizes fields like id, name, baseUrl, and paths, which is essential for reliable custom provider management.

src/services/apis/provider-registry.mjs (203-208)

medium

The getCustomOpenAIProviders function retrieves and normalizes custom provider configurations from the user's settings. This provides a clean, consistent list of custom providers for use throughout the application.

src/services/apis/provider-registry.mjs (210-212)

medium

The getAllOpenAIProviders function combines both built-in and custom providers into a single, comprehensive list. This unified list is then used by other functions to resolve provider details.

src/services/apis/provider-registry.mjs (215-228)

medium

The resolveProviderIdForSession function is crucial for determining the correct provider ID based on the current session's API mode or model name. It handles both new API mode structures and legacy model names, ensuring that the appropriate provider is identified for each request.

src/services/apis/provider-registry.mjs (230-238)

medium

The resolveEndpointTypeForSession function determines whether a session should use a 'chat' or 'completion' endpoint. This is important for correctly formatting the request body in the unified API handler, especially for legacy completion models.

src/services/apis/provider-registry.mjs (240-246)

medium

The getProviderById function retrieves a provider's full configuration by its ID. It filters out disabled providers, ensuring that only active and valid providers are used.

src/services/apis/provider-registry.mjs (248-268)

medium

The getProviderSecret function centralizes the logic for retrieving API keys. It prioritizes API keys specified directly in the apiMode, then checks the new providerSecrets map, and finally falls back to legacy API key fields. This ensures that the correct API key is used for each provider, supporting both new and old configurations.
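The cascade can be sketched roughly as follows; the map contents and the `custom-` prefix check are illustrative assumptions, not the exact source:

```javascript
// Illustrative subset of the legacy key mapping.
const legacyKeyByProviderId = { openai: 'apiKey', deepseek: 'deepSeekApiKey' }

function getProviderSecret(config, apiMode, providerId) {
  // 1. A key embedded directly in a custom API mode wins.
  if (apiMode && apiMode.apiKey && providerId.startsWith('custom-')) return apiMode.apiKey
  // 2. The new unified providerSecrets map.
  const secret = config.providerSecrets && config.providerSecrets[providerId]
  if (secret) return secret
  // 3. The legacy per-provider config field.
  const legacyField = legacyKeyByProviderId[providerId]
  if (legacyField && config[legacyField]) return config[legacyField]
  // 4. Last resort: any key carried on the apiMode itself.
  return (apiMode && apiMode.apiKey) || ''
}
```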

src/services/apis/provider-registry.mjs (270-301)

medium

The resolveUrlFromProvider function constructs the full API request URL based on the provider's configuration and the endpoint type (chat or completion). It handles custom URLs specified in the API mode, as well as base URLs and paths defined in the provider template, ensuring accurate endpoint resolution.

src/services/apis/provider-registry.mjs (303-317)

medium

The resolveOpenAICompatibleRequest function is the main entry point for obtaining all necessary information to make an OpenAI-compatible API request. It orchestrates the resolution of provider ID, provider details, endpoint type, request URL, and API key, providing a complete request object to the unified API handler.

src/services/init-session.mjs (5)

medium

The import of normalizeApiMode is essential for ensuring that API mode objects are consistently structured and validated when a new session is initialized. This helps prevent unexpected behavior due to malformed API mode data.

src/services/init-session.mjs (75)

medium

The apiMode property is now explicitly normalized using normalizeApiMode during session initialization. This ensures that all API mode objects conform to a consistent structure, which is critical for the new provider registry and unified API handling.

src/services/wrappers.mjs (13)

medium

The import of normalizeApiMode is crucial here to ensure that any API mode object received or processed by the port listener is normalized. This guarantees consistency and prevents issues with potentially malformed API mode data.

src/services/wrappers.mjs (110)

medium

The session.apiMode is now explicitly normalized using normalizeApiMode within the port listener. This ensures that any API mode passed into the executeApi function is in a consistent and expected format, which is vital for the new unified API handling logic.

src/utils/model-name-convert.mjs (81-93)

medium

The new normalizeApiMode function ensures that API mode objects always have a consistent structure and default values. This is critical for reliable processing of API modes throughout the application, especially with the introduction of new fields like providerId and the migration logic.
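What "consistent structure and default values" means here can be sketched as below; the exact field set (itemName in particular) is a hypothetical stand-in for the real one:

```javascript
// Sketch of apiMode normalization: coerce every expected field to a
// well-typed default so downstream code never sees undefined.
function normalizeApiMode(apiMode) {
  if (!apiMode || typeof apiMode !== 'object') return null
  return {
    groupName: apiMode.groupName || '',
    itemName: apiMode.itemName || '',
    isCustom: Boolean(apiMode.isCustom),
    customUrl: apiMode.customUrl || '',
    apiKey: apiMode.apiKey || '',
    providerId: apiMode.providerId || '',
  }
}
```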

src/utils/model-name-convert.mjs (97-98)

medium

The apiModeToModelName function now explicitly normalizes the apiMode object at the beginning. This ensures that any subsequent logic in the function operates on a well-formed API mode object, improving robustness.

src/utils/model-name-convert.mjs (111-115)

medium

The getApiModesFromConfig function now normalizes config.customApiModes before processing them. This ensures that custom API modes are always in a consistent format, which is important for accurate filtering and mapping.

src/utils/model-name-convert.mjs (132-133)

medium

The logic for Ollama model names has been updated to explicitly set modelName to ollamaModel- followed by the configured Ollama model name. This ensures consistency in how Ollama models are identified within the system, aligning with the new provider registry.

src/utils/model-name-convert.mjs (139)

medium

The normalizedCustomApiModes are now correctly filtered and included in the returned array. This ensures that all custom API modes, after normalization, are properly accounted for when retrieving the list of available API modes.

src/utils/model-name-convert.mjs (148-157)

medium

The normalizeForCompare helper function is introduced to create a consistent string representation of API mode objects for comparison. This is crucial for accurately determining if an API mode is selected, especially with the addition of new fields like providerId and the need to ignore transient fields like apiKey and customUrl during comparison.
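The idea can be sketched as: serialize only the identity-relevant fields, in a fixed key order, so transient fields and property ordering cannot affect equality. Field names here are assumptions:

```javascript
// Build a canonical string for an apiMode, dropping transient fields
// (apiKey, customUrl) and sorting keys so object order is irrelevant.
function normalizeForCompare(apiMode) {
  if (!apiMode || typeof apiMode !== 'object') return ''
  const { apiKey, customUrl, ...rest } = apiMode
  return JSON.stringify(
    Object.keys(rest)
      .sort()
      .map((key) => [key, rest[key]]),
  )
}

function isApiModeSelected(a, b) {
  return normalizeForCompare(a) === normalizeForCompare(b)
}
```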

src/utils/model-name-convert.mjs (160)

medium

The isApiModeSelected function now uses the normalizeForCompare helper to compare API mode objects. This ensures that comparisons are robust and accurate, preventing false negatives or positives due to differences in non-essential fields or object ordering.

coderabbitai[bot]: This comment was marked as resolved.

@PeterDaveHello PeterDaveHello force-pushed the refactor/openai-provider-platform branch from 36ed81e to fdc0497 Compare February 25, 2026 15:14
@PeterDaveHello (Member Author):

@codex review

Copilot AI left a comment

Pull request overview

Copilot reviewed 12 out of 12 changed files in this pull request and generated no new comments.



@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

♻️ Duplicate comments (4)
src/background/index.mjs (1)

8-8: ⚠️ Potential issue | 🟠 Major

Missing .mjs extension on the openai-api import.

This was flagged in a prior review and still needs the extension for runtime resolution consistency.

-import { generateAnswersWithOpenAICompatibleApi } from '../services/apis/openai-api'
+import { generateAnswersWithOpenAICompatibleApi } from '../services/apis/openai-api.mjs'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/background/index.mjs` at line 8, The import statement for
generateAnswersWithOpenAICompatibleApi is missing the .mjs extension; update the
import in src/background/index.mjs to reference
'../services/apis/openai-api.mjs' (i.e., change the module path used where
generateAnswersWithOpenAICompatibleApi is imported) so the runtime can resolve
the module consistently.
src/popup/sections/GeneralPart.jsx (1)

96-105: Duplication of provider-to-key mapping remains — already flagged in prior review.

LEGACY_API_KEY_FIELD_BY_PROVIDER_ID here and LEGACY_SECRET_KEY_TO_PROVIDER_ID in src/config/index.mjs express the same mapping in opposite directions. A shared constants module would keep them in sync.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/popup/sections/GeneralPart.jsx` around lines 96 - 105,
LEGACY_API_KEY_FIELD_BY_PROVIDER_ID duplicates the inverse mapping
LEGACY_SECRET_KEY_TO_PROVIDER_ID; extract the canonical mapping into a new
shared constants module (e.g., export LEGACY_API_KEY_FIELD_BY_PROVIDER_ID from a
central module), import that constant into GeneralPart.jsx and update
src/config's code to derive LEGACY_SECRET_KEY_TO_PROVIDER_ID from it (use
Object.entries/Object.fromEntries to invert the mapping), and remove the
hard-coded duplicate so both modules consume the single source of truth.
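The single-source-of-truth suggestion above, sketched with hypothetical entries — keep one canonical map and derive the inverse from it:

```javascript
// Canonical direction: provider id -> legacy config field name.
const LEGACY_API_KEY_FIELD_BY_PROVIDER_ID = {
  deepseek: 'deepSeekApiKey',
  moonshot: 'moonshotApiKey',
}

// Derived inverse: legacy config field name -> provider id.
const LEGACY_SECRET_KEY_TO_PROVIDER_ID = Object.fromEntries(
  Object.entries(LEGACY_API_KEY_FIELD_BY_PROVIDER_ID).map(([id, field]) => [field, id]),
)
```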
src/services/apis/openai-api.mjs (1)

24-40: Missing timeout on touchOllamaKeepAlive fetch — can block indefinitely.

The await at line 129 will hang if the Ollama endpoint is unresponsive. While the .catch() prevents unhandled rejections, the promise itself never resolves until the TCP stack gives up. Add an AbortController with a reasonable timeout.

🛡️ Proposed fix
 async function touchOllamaKeepAlive(config, model, apiKey) {
+  const controller = new AbortController()
+  const timeout = setTimeout(() => controller.abort(), 5000)
   return fetch(`${normalizeBaseUrl(config.ollamaEndpoint)}/api/generate`, {
     method: 'POST',
+    signal: controller.signal,
     headers: {
       'Content-Type': 'application/json',
       ...(apiKey ? { Authorization: `Bearer ${apiKey}` } : {}),
     },
     body: JSON.stringify({
       model,
       prompt: 't',
       options: {
         num_predict: 1,
       },
       keep_alive: config.ollamaKeepAliveTime === '-1' ? -1 : config.ollamaKeepAliveTime,
     }),
-  })
+  }).finally(() => clearTimeout(timeout))
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/openai-api.mjs` around lines 24 - 40, The fetch in
touchOllamaKeepAlive can hang indefinitely; modify touchOllamaKeepAlive to
create an AbortController, pass controller.signal to fetch, and set a timer
(e.g., via setTimeout) to call controller.abort() after a reasonable timeout
(e.g., 5-10s), clearing the timer on completion; ensure the fetch call uses the
controller.signal and that any catch handles AbortError gracefully so the
calling await won't hang.
src/services/apis/provider-registry.mjs (1)

15-86: OpenAI template paths include /v1 prefix while other providers don't — potential /v1 duplication.

The OpenAI provider template defines chatCompletionsPath: '/v1/chat/completions' (line 19), while the OpenAI baseUrl is set from config.customOpenAiApiUrl in buildBuiltinProviders (line 165). If a user has stored a base URL already ending in /v1 (e.g., https://api.openai.com/v1), joinUrl will produce https://api.openai.com/v1/v1/chat/completions.

Strip a trailing /v1 from the OpenAI base URL before storing, or unify the path convention across all providers.

🛠️ Proposed fix in buildBuiltinProviders
     if (provider.id === 'openai') {
+      const rawUrl = trimSlashes(config.customOpenAiApiUrl || 'https://api.openai.com')
       return {
         ...provider,
-        baseUrl: trimSlashes(config.customOpenAiApiUrl || 'https://api.openai.com'),
+        baseUrl: rawUrl.replace(/\/v1$/i, ''),
       }
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/provider-registry.mjs` around lines 15 - 86,
BUILTIN_PROVIDER_TEMPLATE currently sets OpenAI's chatCompletionsPath to
'/v1/chat/completions' which can duplicate '/v1' if buildBuiltinProviders uses a
baseUrl that already ends with '/v1'; update buildBuiltinProviders to normalize
the OpenAI base URL by stripping a trailing '/v1' or '/v1/' before saving (e.g.,
use baseUrl = config.customOpenAiApiUrl.replace(/\/v1\/?$/, '') when building
the provider entry for id 'openai'), or alternatively change the OpenAI template
path to '/chat/completions' so all providers use the same path convention;
adjust the code that creates the OpenAI provider (referencing
BUILTIN_PROVIDER_TEMPLATE and buildBuiltinProviders) to perform the
normalization.
🧹 Nitpick comments (8)
src/config/index.mjs (2)

740-751: Minor: aimlModelKeys and aimlApiModelKeys both map to 'aiml'.

ModelGroups at line 197 uses the key aimlModelKeys (not aimlApiModelKeys). The extra entry is harmless (belt-and-suspenders), but if it's intentional for forward-compatibility, a comment would help. Otherwise it's dead code.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/config/index.mjs` around lines 740 - 751, The mapping
API_MODE_GROUP_TO_PROVIDER_ID currently contains both aimlModelKeys and
aimlApiModelKeys pointing to 'aiml'; remove the redundant key aimlApiModelKeys
(or add a brief clarifying comment if the duplicate is intentional for future
compatibility) so only the canonical key used by ModelGroups (aimlModelKeys)
remains, updating the object API_MODE_GROUP_TO_PROVIDER_ID to eliminate dead
code while preserving behavior.

796-1101: Migration logic is thorough but consider extracting shared apiMode migration into a helper.

The migration for customApiModes (lines 879-970) and selectedApiMode (lines 973-1086) share significant structural similarity — both normalize provider IDs, create providers from legacy custom URLs, and migrate API keys to providerSecrets. This isn't urgent since the migration runs once per config version bump, but extracting a shared migrateApiModeEntry(apiMode, context) helper would reduce the ~100 lines of duplication and lower the risk of one branch drifting from the other in future migrations.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/config/index.mjs` around lines 796 - 1101, The customApiModes loop and
selectedApiMode handling in migrateUserConfig share repeated logic; extract that
shared logic into a helper named migrateApiModeEntry(apiMode, {providerSecrets,
customOpenAIProviders, providerIdSet, providerIdRenameLookup,
customProviderCounter, LEGACY_CUSTOM_PROVIDER_SECRET,
API_MODE_GROUP_TO_PROVIDER_ID}) that normalizes providerId, applies renames,
creates new provider entries from customUrl (updating providerIdSet,
customOpenAIProviders and customProviderCounter), moves apiKey into
providerSecrets, clears apiMode.customUrl/apiKey/providerId as appropriate, and
returns {updatedApiMode, providerIdAssignedFromLegacyCustomUrl,
customProviderCounter, dirtyFlags} so migrateUserConfig can call it for each
entry in customApiModes and once for selectedApiMode (merging returned updates
and flags) to remove the duplicated ~100 lines while preserving existing
side-effects on providerSecrets, customOpenAIProviders, providerIdSet, and
dirty/custom*Dirty flags.
src/services/apis/openai-compatible-core.mjs (2)

8-15: An Authorization header in extraHeaders can be silently overridden when apiKey is provided.

If a caller passes Authorization inside extraHeaders and provides apiKey, the spread on line 11 sets Authorization first, then line 13 unconditionally overwrites it. This is likely the desired precedence, but it's implicit. A small comment or an explicit delete would prevent surprises for custom provider integrations that may need a non-Bearer auth header.
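One way to make the precedence explicit, per the comment above — treat apiKey as a default that never clobbers a caller-supplied header (a sketch, not the current implementation):

```javascript
// apiKey is applied only when the caller has not supplied its own
// Authorization header, so custom providers can use non-Bearer auth.
function buildHeaders(apiKey, extraHeaders = {}) {
  const headers = { 'Content-Type': 'application/json', ...extraHeaders }
  if (apiKey && !headers.Authorization) headers.Authorization = `Bearer ${apiKey}`
  return headers
}
```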

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/openai-compatible-core.mjs` around lines 8 - 15, The
buildHeaders function currently allows an Authorization in extraHeaders to be
silently overwritten when apiKey is provided; make this precedence explicit by
removing any Authorization from extraHeaders before merging (e.g., delete
extraHeaders.Authorization) when you intend apiKey to win, or alternatively
check for an existing Authorization and only set headers.Authorization = `Bearer
${apiKey}` if none exists — update the buildHeaders function and add a brief
comment clarifying the chosen precedence so custom providers aren’t surprised.

109-117: Partial answers are silently discarded when the stream ends without a finish_reason.

When onEnd fires and finished is false (line 148), only { done: true } is posted — finish() is never called, so pushRecord is skipped and the partial answer is lost. This appears intentional based on the corresponding test (emits fallback done message when stream ends without finish reason), but it means any accumulated answer text from intermediate deltas is dropped from the conversation history.

If this is by design (don't persist incomplete answers), a brief comment would help future readers understand the intent.

     async onEnd() {
-      if (!finished) port.postMessage({ done: true })
+      // Stream ended without a finish_reason — notify the UI but do not
+      // persist the partial answer into the conversation history.
+      if (!finished) port.postMessage({ done: true })

Also applies to: 146-151

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/openai-compatible-core.mjs` around lines 109 - 117, The
onEnd handler currently posts a fallback done message without calling finish(),
so any accumulated partial answer in the answer variable is never persisted via
pushRecord; update the onEnd logic to call finish() whenever finished is false
(ensuring finish() uses pushRecord(session, question, answer) and sends the
final port.postMessage with done:true and session) so partial deltas are saved,
or if dropping partial answers is intentional add a concise comment near
finish()/onEnd explaining the design decision; reference finish, pushRecord,
answer, onEnd and port.postMessage when making the change.
src/services/apis/provider-registry.mjs (3)

249-269: getProviderSecret cascade is well-layered but the final fallback (line 268) may surface stale keys.

The priority is: (1) apiMode.apiKey for customApiModelKeys, (2) providerSecrets map, (3) legacy config key, (4) apiMode.apiKey as last resort for all groups. The last fallback means an old apiMode.apiKey value embedded in a session could override a rotated key in providerSecrets or legacy config — but only if both are empty. This is a reasonable last-resort behavior; just document that the apiMode-embedded key is the weakest source.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/provider-registry.mjs` around lines 249 - 269,
getProviderSecret currently falls back to session.apiMode.apiKey as the weakest
source after checking providerSecrets and legacy keys, which can surface stale
embedded keys; update the function (getProviderSecret) to include a concise
inline comment or docstring clarifying the priority order
(apiMode.customApiModelKeys -> providerSecrets map -> LEGACY_KEY_BY_PROVIDER_ID
lookup -> session.apiMode.apiKey as last resort) and explicitly note that the
final fallback (session.apiMode.apiKey) is the weakest/least-authoritative
source and may be stale so should not override rotated keys in providerSecrets
or legacy config.

241-247: getProviderById silently returns null for disabled providers — callers should be aware.

Line 245 returns null if provider.enabled === false. This means resolveOpenAICompatibleRequest will return null, and the caller in openai-api.mjs will throw 'Unknown OpenAI-compatible provider configuration'. The error message doesn't distinguish "provider not found" from "provider disabled," which could confuse users troubleshooting.

💡 Consider returning a richer error or distinct null reasons

One option is to have getProviderById return a status object, or have resolveOpenAICompatibleRequest produce a more descriptive error reason:

 export function resolveOpenAICompatibleRequest(config, session) {
   const providerId = resolveProviderIdForSession(session)
   if (!providerId) return null
   const provider = getProviderById(config, providerId)
-  if (!provider) return null
+  if (!provider) {
+    const allProviders = getAllOpenAIProviders(config)
+    const disabled = allProviders.find((p) => p.id === providerId && p.enabled === false)
+    if (disabled) return { error: `Provider '${providerId}' is disabled` }
+    return null
+  }

The caller would then need to check for .error — but this is optional depending on how much UX polish is desired.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/provider-registry.mjs` around lines 241 - 247,
getProviderById currently returns null for both "not found" and "disabled" cases
which makes callers like resolveOpenAICompatibleRequest unable to distinguish
why resolution failed; change getProviderById to return a richer result (e.g. {
provider } on success, { error: 'not_found' } or { error: 'disabled' } when
appropriate) or throw a descriptive error, then update
resolveOpenAICompatibleRequest (and callers such as openai-api.mjs) to handle
the new shape/error and produce a clearer message like "provider not found" vs
"provider disabled"; reference the getProviderById and
resolveOpenAICompatibleRequest symbols when making these changes.

101-134: resolveProviderIdFromLegacyModelName — thorough legacy mapping, one subtle gap.

Line 130 uses preset.startsWith('chatglm') which would also match a hypothetical model starting with chatglm that isn't actually ChatGLM-related. Given the bounded set of known model names in this extension, the practical risk is negligible, but it's worth noting the loose match.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/provider-registry.mjs` around lines 101 - 134, The loose
match in resolveProviderIdFromLegacyModelName uses preset.startsWith('chatglm')
which can incorrectly match unrelated names; tighten the condition by matching
the known ChatGLM tokens only (for example check preset === 'chatglm' OR
preset.startsWith('chatglm_') OR preset === 'chatglmApiModelKeys') so it only
identifies legitimate ChatGLM presets; update the condition near the other
provider checks (see resolveProviderIdFromLegacyModelName and
getModelNamePresetPart) to use these stricter comparisons.
src/services/apis/openai-api.mjs (1)

6-8: normalizeBaseUrl duplicates trimSlashes in provider-registry.mjs.

Both normalizeBaseUrl here and trimSlashes in provider-registry.mjs strip trailing slashes from a string. Consider importing trimSlashes (or exporting a shared helper) instead of maintaining a parallel implementation that could drift.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/openai-api.mjs` around lines 6 - 8, Replace the duplicate
trailing-slash logic in normalizeBaseUrl by reusing the existing trimSlashes
helper: remove the local normalizeBaseUrl function and import trimSlashes (from
the module that currently exports it) into this module, then call trimSlashes
wherever normalizeBaseUrl was used; alternatively, if trimSlashes is not
exported, export it from provider-registry.mjs and update imports so
openai-api.mjs uses that shared helper instead of duplicating the
implementation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/services/apis/provider-registry.mjs`:
- Around line 186-201: The fallback ID in normalizeCustomProvider currently uses
the unstable index-based `custom-provider-${index + 1}` which will change when
array order changes and break lookups (e.g., providerSecrets and
session.apiMode.providerId); change normalizeCustomProvider to produce a stable
fallback ID when provider.id is missing by deriving it deterministically (for
example, compute a short hash from stable fields like name + baseUrl) or by
using a persistent UUID assigned at provider creation time, and update any
creation flow so newly created providers get that stable id; reference
normalizeCustomProvider and ensure code paths that persist/lookup provider IDs
(providerSecrets, session.apiMode.providerId) use the new stable id scheme.

---

Duplicate comments:
In `@src/background/index.mjs`:
- Line 8: The import statement for generateAnswersWithOpenAICompatibleApi is
missing the .mjs extension; update the import in src/background/index.mjs to
reference '../services/apis/openai-api.mjs' (i.e., change the module path used
where generateAnswersWithOpenAICompatibleApi is imported) so the runtime can
resolve the module consistently.

In `@src/popup/sections/GeneralPart.jsx`:
- Around line 96-105: LEGACY_API_KEY_FIELD_BY_PROVIDER_ID duplicates the inverse
mapping LEGACY_SECRET_KEY_TO_PROVIDER_ID; extract the canonical mapping into a
new shared constants module (e.g., export LEGACY_API_KEY_FIELD_BY_PROVIDER_ID
from a central module), import that constant into GeneralPart.jsx and update
src/config's code to derive LEGACY_SECRET_KEY_TO_PROVIDER_ID from it (use
Object.entries/Object.fromEntries to invert the mapping), and remove the
hard-coded duplicate so both modules consume the single source of truth.

In `@src/services/apis/openai-api.mjs`:
- Around line 24-40: The fetch in touchOllamaKeepAlive can hang indefinitely;
modify touchOllamaKeepAlive to create an AbortController, pass controller.signal
to fetch, and set a timer (e.g., via setTimeout) to call controller.abort()
after a reasonable timeout (e.g., 5-10s), clearing the timer on completion;
ensure the fetch call uses the controller.signal and that any catch handles
AbortError gracefully so the calling await won't hang.

In `@src/services/apis/provider-registry.mjs`:
- Around line 15-86: BUILTIN_PROVIDER_TEMPLATE currently sets OpenAI's
chatCompletionsPath to '/v1/chat/completions' which can duplicate '/v1' if
buildBuiltinProviders uses a baseUrl that already ends with '/v1'; update
buildBuiltinProviders to normalize the OpenAI base URL by stripping a trailing
'/v1' or '/v1/' before saving (e.g., use baseUrl =
config.customOpenAiApiUrl.replace(/\/v1\/?$/, '') when building the provider
entry for id 'openai'), or alternatively change the OpenAI template path to
'/chat/completions' so all providers use the same path convention; adjust the
code that creates the OpenAI provider (referencing BUILTIN_PROVIDER_TEMPLATE and
buildBuiltinProviders) to perform the normalization.

---

Nitpick comments:
In `@src/config/index.mjs`:
- Around line 740-751: The mapping API_MODE_GROUP_TO_PROVIDER_ID currently
contains both aimlModelKeys and aimlApiModelKeys pointing to 'aiml'; remove the
redundant key aimlApiModelKeys (or add a brief clarifying comment if the
duplicate is intentional for future compatibility) so only the canonical key
used by ModelGroups (aimlModelKeys) remains, updating the object
API_MODE_GROUP_TO_PROVIDER_ID to eliminate dead code while preserving behavior.
- Around line 796-1101: The customApiModes loop and selectedApiMode handling in
migrateUserConfig share repeated logic; extract that shared logic into a helper
named migrateApiModeEntry(apiMode, {providerSecrets, customOpenAIProviders,
providerIdSet, providerIdRenameLookup, customProviderCounter,
LEGACY_CUSTOM_PROVIDER_SECRET, API_MODE_GROUP_TO_PROVIDER_ID}) that normalizes
providerId, applies renames, creates new provider entries from customUrl
(updating providerIdSet, customOpenAIProviders and customProviderCounter), moves
apiKey into providerSecrets, clears apiMode.customUrl/apiKey/providerId as
appropriate, and returns {updatedApiMode, providerIdAssignedFromLegacyCustomUrl,
customProviderCounter, dirtyFlags} so migrateUserConfig can call it for each
entry in customApiModes and once for selectedApiMode (merging returned updates
and flags) to remove the duplicated ~100 lines while preserving existing
side-effects on providerSecrets, customOpenAIProviders, providerIdSet, and
dirty/custom*Dirty flags.

In `@src/services/apis/openai-api.mjs`:
- Around line 6-8: Replace the duplicate trailing-slash logic in
normalizeBaseUrl by reusing the existing trimSlashes helper: remove the local
normalizeBaseUrl function and import trimSlashes (from the module that currently
exports it) into this module, then call trimSlashes wherever normalizeBaseUrl
was used; alternatively, if trimSlashes is not exported, export it from
provider-registry.mjs and update imports so openai-api.mjs uses that shared
helper instead of duplicating the implementation.

In `@src/services/apis/openai-compatible-core.mjs`:
- Around line 8-15: The buildHeaders function currently allows an Authorization
in extraHeaders to be silently overwritten when apiKey is provided; make this
precedence explicit by removing any Authorization from extraHeaders before
merging (e.g., delete extraHeaders.Authorization) when you intend apiKey to win,
or alternatively check for an existing Authorization and only set
headers.Authorization = `Bearer ${apiKey}` if none exists — update the
buildHeaders function and add a brief comment clarifying the chosen precedence
so custom providers aren’t surprised.
- Around line 109-117: The onEnd handler currently posts a fallback done message
without calling finish(), so any accumulated partial answer in the answer
variable is never persisted via pushRecord; update the onEnd logic to call
finish() whenever finished is false (ensuring finish() uses pushRecord(session,
question, answer) and sends the final port.postMessage with done:true and
session) so partial deltas are saved, or if dropping partial answers is
intentional add a concise comment near finish()/onEnd explaining the design
decision; reference finish, pushRecord, answer, onEnd and port.postMessage when
making the change.

In `@src/services/apis/provider-registry.mjs`:
- Around line 249-269: getProviderSecret currently falls back to
session.apiMode.apiKey as the weakest source after checking providerSecrets and
legacy keys, which can surface stale embedded keys; update the function
(getProviderSecret) to include a concise inline comment or docstring clarifying
the priority order (apiMode.customApiModelKeys -> providerSecrets map ->
LEGACY_KEY_BY_PROVIDER_ID lookup -> session.apiMode.apiKey as last resort) and
explicitly note that the final fallback (session.apiMode.apiKey) is the
weakest/least-authoritative source and may be stale so should not override
rotated keys in providerSecrets or legacy config.
- Around line 241-247: getProviderById currently returns null for both "not
found" and "disabled" cases which makes callers like
resolveOpenAICompatibleRequest unable to distinguish why resolution failed;
change getProviderById to return a richer result (e.g. { provider } on success,
{ error: 'not_found' } or { error: 'disabled' } when appropriate) or throw a
descriptive error, then update resolveOpenAICompatibleRequest (and callers such
as openai-api.mjs) to handle the new shape/error and produce a clearer message
like "provider not found" vs "provider disabled"; reference the getProviderById
and resolveOpenAICompatibleRequest symbols when making these changes.
- Around line 101-134: The loose match in resolveProviderIdFromLegacyModelName
uses preset.startsWith('chatglm') which can incorrectly match unrelated names;
tighten the condition by matching the known ChatGLM tokens only (for example
check preset === 'chatglm' OR preset.startsWith('chatglm_') OR preset ===
'chatglmApiModelKeys') so it only identifies legitimate ChatGLM presets; update
the condition near the other provider checks (see
resolveProviderIdFromLegacyModelName and getModelNamePresetPart) to use these
stricter comparisons.
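The richer-result shape proposed for getProviderById can be sketched as a standalone function. This is an illustrative sketch, not the actual registry code: the `customOpenAIProviders` field shape and the `'not_found'`/`'disabled'` error strings are assumptions taken from the review comment.

```javascript
// Sketch: distinguish "not found" from "disabled" instead of returning null
// for both. The provider list shape here is assumed for illustration.
function getProviderLookup(config, providerId) {
  const providers = config.customOpenAIProviders || []
  const provider = providers.find((p) => p.id === providerId)
  if (!provider) return { error: 'not_found' }
  if (provider.enabled === false) return { error: 'disabled' }
  return { provider }
}

// A caller such as resolveOpenAICompatibleRequest can then surface the reason
// resolution failed instead of a generic failure.
function describeLookup(result) {
  if (result.provider) return `resolved ${result.provider.id}`
  return result.error === 'disabled' ? 'provider disabled' : 'provider not found'
}
```

With this shape, callers no longer have to guess why a null came back, which is the ambiguity the comment calls out.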

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 36ed81e and fdc0497.

📒 Files selected for processing (12)
  • src/background/index.mjs
  • src/config/index.mjs
  • src/popup/sections/ApiModes.jsx
  • src/popup/sections/GeneralPart.jsx
  • src/services/apis/custom-api.mjs
  • src/services/apis/openai-api.mjs
  • src/services/apis/openai-compatible-core.mjs
  • src/services/apis/provider-registry.mjs
  • src/services/init-session.mjs
  • src/services/wrappers.mjs
  • src/utils/model-name-convert.mjs
  • tests/unit/services/apis/openai-api-compat.test.mjs
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/services/init-session.mjs

Comment on lines +186 to +201
function normalizeCustomProvider(provider, index) {
  if (!provider || typeof provider !== 'object') return null
  const id = toStringOrEmpty(provider.id).trim() || `custom-provider-${index + 1}`
  return {
    id,
    name: toStringOrEmpty(provider.name).trim() || `Custom Provider ${index + 1}`,
    baseUrl: trimSlashes(provider.baseUrl),
    chatCompletionsPath: ensureLeadingSlash(provider.chatCompletionsPath, DEFAULT_CHAT_PATH),
    completionsPath: ensureLeadingSlash(provider.completionsPath, DEFAULT_COMPLETION_PATH),
    chatCompletionsUrl: toStringOrEmpty(provider.chatCompletionsUrl).trim(),
    completionsUrl: toStringOrEmpty(provider.completionsUrl).trim(),
    builtin: false,
    enabled: provider.enabled !== false,
    allowLegacyResponseField: Boolean(provider.allowLegacyResponseField),
  }
}
Contributor

⚠️ Potential issue | 🟠 Major

Index-based fallback ID for custom providers is fragile.

Line 188 generates custom-provider-${index + 1} when provider.id is empty. Since the index depends on array position, reordering or removing providers will shift IDs, breaking providerSecrets lookups and persisted session.apiMode.providerId references. Consider generating a stable ID (e.g., hash of name+baseUrl, or a UUID at creation time) when the ID is missing.

#!/bin/bash
# Check how custom providers are created to see if IDs are assigned at creation
rg -n -C5 'customOpenAIProviders' --type=js -g '!node_modules/**'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/provider-registry.mjs` around lines 186 - 201, The fallback
ID in normalizeCustomProvider currently uses the unstable index-based
`custom-provider-${index + 1}` which will change when array order changes and
break lookups (e.g., providerSecrets and session.apiMode.providerId); change
normalizeCustomProvider to produce a stable fallback ID when provider.id is
missing by deriving it deterministically (for example, compute a short hash from
stable fields like name + baseUrl) or by using a persistent UUID assigned at
provider creation time, and update any creation flow so newly created providers
get that stable id; reference normalizeCustomProvider and ensure code paths that
persist/lookup provider IDs (providerSecrets, session.apiMode.providerId) use
the new stable id scheme.
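The stable-ID suggestion above can be sketched by deriving the fallback ID from stable fields instead of array position. This is a sketch only: the `custom-` prefix, the 12-character truncation, and the choice of name + baseUrl as the hash basis are illustrative, not the PR's actual scheme.

```javascript
import { createHash } from 'node:crypto'

// Derive a fallback ID from stable fields (name + baseUrl) so that
// reordering or removing other providers does not change it.
function stableProviderId(provider) {
  const basis = `${(provider.name || '').trim()}|${(provider.baseUrl || '').trim()}`
  const digest = createHash('sha256').update(basis).digest('hex')
  return `custom-${digest.slice(0, 12)}`
}
```

Unlike `custom-provider-${index + 1}`, this value survives reordering, so providerSecrets lookups and persisted session.apiMode.providerId references stay valid. A UUID assigned once at creation time would work equally well.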

@chatgpt-codex-connector

Codex Review: Didn't find any major issues. Chef's kiss.

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

@PeterDaveHello PeterDaveHello force-pushed the refactor/openai-provider-platform branch from fdc0497 to 407d734 Compare February 25, 2026 17:42
coderabbitai[bot]

This comment was marked as resolved.

@PeterDaveHello
Member Author

@codex review

Copilot AI left a comment

Pull request overview

Copilot reviewed 12 out of 12 changed files in this pull request and generated 3 comments.



Comment on lines +63 to +66
const { controller, messageListener, disconnectListener } = setAbortController(port)
const config = await getUserConfig()

let requestBody
Copilot AI Feb 25, 2026

generateAnswersWithOpenAICompatibleApi already receives a fully-loaded config, but generateAnswersWithOpenAICompatible calls getUserConfig() again to read token/temperature/context settings. This adds an extra storage read (and potential migration write) per request and can lead to inconsistencies if callers pass a config snapshot. Consider passing the needed config fields into generateAnswersWithOpenAICompatible (or accepting config as a parameter) instead of reloading from storage.

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 407d734e9e


  const legacyValue = legacyKey ? toStringOrEmpty(config?.[legacyKey]).trim() : ''
  if (legacyValue) return legacyValue

  return apiModeApiKey


P2 Badge Stop using session apiKey as global fallback

For non-custom providers this fallback can make settings updates ineffective: if a tab still has a stale session.apiMode.apiKey (from legacy/session state), getProviderSecret will keep returning it whenever providerSecrets and legacy config fields are empty, so clearing or rotating the provider key in settings does not take effect for that session. This regression is introduced by the unconditional return apiModeApiKey; for non-customApiModelKeys modes, key resolution should come from provider/config storage only.
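The resolution order this P2 finding argues for can be sketched as a pure function. Everything here is illustrative: `LEGACY_KEY_BY_PROVIDER_ID`, the field names, and the `isCustomMode` flag are assumptions mirroring the comment, with the session-embedded key consulted only for custom API modes.

```javascript
// Illustrative key resolution: provider/config storage first; the
// stale-prone session.apiMode.apiKey only for custom API modes.
const LEGACY_KEY_BY_PROVIDER_ID = { openai: 'apiKey' } // assumed mapping

function resolveProviderKey({ config, session, providerId, isCustomMode }) {
  const fromSecrets = (config.providerSecrets || {})[providerId]
  if (fromSecrets) return fromSecrets
  const legacyField = LEGACY_KEY_BY_PROVIDER_ID[providerId]
  const fromLegacy = legacyField ? (config[legacyField] || '').trim() : ''
  if (fromLegacy) return fromLegacy
  // Session-embedded key is the weakest source: honor it only for custom
  // modes, so rotating or clearing a built-in provider key takes effect.
  if (isCustomMode) return (session?.apiMode?.apiKey || '').trim()
  return ''
}
```

Under this ordering, a tab holding a stale session key cannot shadow a rotated or cleared built-in provider key, which is the regression the comment describes.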


@PeterDaveHello PeterDaveHello force-pushed the refactor/openai-provider-platform branch from 407d734 to 3b4527c Compare February 28, 2026 11:02
@PeterDaveHello
Member Author

@codex review

coderabbitai[bot]

This comment was marked as resolved.

This comment was marked as resolved.

Copilot AI left a comment

Pull request overview

Copilot reviewed 41 out of 41 changed files in this pull request and generated no new comments.



@chatgpt-codex-connector

Codex Review: Didn't find any major issues. Can't wait for the next one!


@PeterDaveHello PeterDaveHello force-pushed the refactor/openai-provider-platform branch from c48b0f8 to 5cab19b Compare March 3, 2026 19:52
@PeterDaveHello PeterDaveHello requested a review from Copilot March 3, 2026 19:53
@PeterDaveHello
Member Author

@codex review

Contributor

@coderabbitai coderabbitai bot left a comment

♻️ Duplicate comments (3)
src/services/apis/openai-compatible-core.mjs (2)

146-149: ⚠️ Potential issue | 🟠 Major

Persist partial answers when stream ends unexpectedly.

At Line 147, onEnd sends a done message but skips finish(), so accumulated partial output is not saved to session.conversationRecords.

🔧 Proposed fix
     async onEnd() {
-      if (!finished) {
-        port.postMessage({ answer: null, done: true, session: session })
-      }
+      if (!finished) finish()
       port.onMessage.removeListener(messageListener)
       port.onDisconnect.removeListener(disconnectListener)
     },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/openai-compatible-core.mjs` around lines 146 - 149, In
onEnd(), when the stream ends unexpectedly and finished is false, call finish()
(the same function that persists accumulated output to
session.conversationRecords) before sending the done message; update the branch
that currently only does port.postMessage({ answer: null, done: true, session })
to first invoke await finish() (or finish() if synchronous) to ensure partial
output is saved to session.conversationRecords, then post the done message,
keeping the finished guard to avoid double-finalizing.

114-115: ⚠️ Potential issue | 🟠 Major

Avoid logging full conversation and SSE payload content.

Line 114 and Line 124 can leak user prompts/responses into logs. Please remove or sanitize these logs.

🔧 Proposed fix
   const finish = () => {
     if (finished) return
     finished = true
     pushRecord(session, question, answer)
-    console.debug('conversation history', { content: session.conversationRecords })
     port.postMessage({ answer: null, done: true, session: session })
   }

@@
     onMessage(message) {
-      console.debug('sse message', message)
       if (finished) return

Also applies to: 124-124

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/openai-compatible-core.mjs` around lines 114 - 115, Remove
or sanitize any logs that emit full conversation or SSE payloads: delete or
replace the console.debug('conversation history', { content:
session.conversationRecords }) call and avoid posting the entire session object
via port.postMessage({ answer: null, done: true, session: session }); instead
log minimal metadata (e.g., conversation length, last activity timestamp, or a
redacted summary/hash) and ensure any port.postMessage payloads do not include
raw session.conversationRecords or user content; apply the same change to the
other instance referenced (the similar log around line 124) so no full
prompts/responses are written to logs or sent over SSE.
src/popup/sections/ApiModes.jsx (1)

381-381: ⚠️ Potential issue | 🟡 Minor

Use single quotes for the placeholder literal.

This line still uses a double-quoted string literal in changed JSX.

🔧 Suggested fix
-            placeholder="https://api.example.com/v1/chat/completions"
+            placeholder='https://api.example.com/v1/chat/completions'

As per coding guidelines **/*.{js,jsx,mjs,ts,tsx}: Use single quotes instead of double quotes in imports and string literals.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/popup/sections/ApiModes.jsx` at line 381, In the ApiModes.jsx JSX element
that renders the endpoint input (look for the placeholder prop in the ApiModes
component/JSX block), replace the double-quoted placeholder string
"https://api.example.com/v1/chat/completions" with a single-quoted literal
'https://api.example.com/v1/chat/completions' to conform to the project's
single-quote string style.
🧹 Nitpick comments (1)
src/config/index.mjs (1)

952-992: Extract duplicated custom-provider resolution/creation logic into one helper.

The blocks from Line 952 to Line 992 and Line 1041 to Line 1085 implement near-identical provider lookup/creation rules. Consolidating them would reduce drift risk and make future migration fixes safer.

Also applies to: 1041-1085

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/config/index.mjs` around lines 952 - 992, The duplicated logic that
resolves or creates a custom provider should be extracted into a single helper
(e.g., resolveOrCreateCustomProvider) and invoked from both places; currently
the blocks that use normalizeText, normalizeEndpointUrlForCompare,
customOpenAIProviders, providerSecrets, and
isProviderSecretCompatibleForCustomMode to find or create a provider (using
customProviderCounter, normalizeProviderId, ensureUniqueProviderId,
providerIdSet, setting provider fields and toggling
customProvidersDirty/customApiModesDirty) are repeated — move that logic into a
helper that accepts the apiMode (or customUrl/customName/apiKey),
providerSecrets, providerIdSet, and returns { providerId, providerCreatedFlag,
provider } so callers simply set apiMode.providerId, clear apiMode.customUrl as
needed, and update flags
(customProvidersDirty/customApiModesDirty/providerIdAssignedFromLegacyCustomUrl)
based on the helper result.
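The consolidation suggested above can be sketched as a single helper shared by both call sites. This is a heavily simplified sketch: the matching rule is reduced to a trailing-slash-insensitive baseUrl comparison, and the field names and `nextIndex` counter are assumptions drawn from the comment, not the file's real logic.

```javascript
// Simplified sketch of resolveOrCreateCustomProvider: find an existing
// custom provider by endpoint, or create and register a new one.
function resolveOrCreateCustomProvider({ customUrl, customName, providers, nextIndex }) {
  const normalized = (customUrl || '').trim().replace(/\/+$/, '')
  const existing = providers.find((p) => (p.baseUrl || '').replace(/\/+$/, '') === normalized)
  if (existing) return { providerId: existing.id, provider: existing, created: false }
  const provider = {
    id: `custom-provider-${nextIndex}`,
    name: (customName || '').trim() || `Custom Provider ${nextIndex}`,
    baseUrl: normalized,
    builtin: false,
    enabled: true,
  }
  providers.push(provider)
  return { providerId: provider.id, provider, created: true }
}
```

Callers would then just assign `apiMode.providerId` from the result and set the dirty flags based on `created`, keeping the two migration paths from drifting apart.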
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@src/popup/sections/ApiModes.jsx`:
- Line 381: In the ApiModes.jsx JSX element that renders the endpoint input
(look for the placeholder prop in the ApiModes component/JSX block), replace the
double-quoted placeholder string "https://api.example.com/v1/chat/completions"
with a single-quoted literal 'https://api.example.com/v1/chat/completions' to
conform to the project's single-quote string style.

In `@src/services/apis/openai-compatible-core.mjs`:
- Around line 146-149: In onEnd(), when the stream ends unexpectedly and
finished is false, call finish() (the same function that persists accumulated
output to session.conversationRecords) before sending the done message; update
the branch that currently only does port.postMessage({ answer: null, done: true,
session }) to first invoke await finish() (or finish() if synchronous) to ensure
partial output is saved to session.conversationRecords, then post the done
message, keeping the finished guard to avoid double-finalizing.
- Around line 114-115: Remove or sanitize any logs that emit full conversation
or SSE payloads: delete or replace the console.debug('conversation history', {
content: session.conversationRecords }) call and avoid posting the entire
session object via port.postMessage({ answer: null, done: true, session: session
}); instead log minimal metadata (e.g., conversation length, last activity
timestamp, or a redacted summary/hash) and ensure any port.postMessage payloads
do not include raw session.conversationRecords or user content; apply the same
change to the other instance referenced (the similar log around line 124) so no
full prompts/responses are written to logs or sent over SSE.

---

Nitpick comments:
In `@src/config/index.mjs`:
- Around line 952-992: The duplicated logic that resolves or creates a custom
provider should be extracted into a single helper (e.g.,
resolveOrCreateCustomProvider) and invoked from both places; currently the
blocks that use normalizeText, normalizeEndpointUrlForCompare,
customOpenAIProviders, providerSecrets, and
isProviderSecretCompatibleForCustomMode to find or create a provider (using
customProviderCounter, normalizeProviderId, ensureUniqueProviderId,
providerIdSet, setting provider fields and toggling
customProvidersDirty/customApiModesDirty) are repeated — move that logic into a
helper that accepts the apiMode (or customUrl/customName/apiKey),
providerSecrets, providerIdSet, and returns { providerId, providerCreatedFlag,
provider } so callers simply set apiMode.providerId, clear apiMode.customUrl as
needed, and update flags
(customProvidersDirty/customApiModesDirty/providerIdAssignedFromLegacyCustomUrl)
based on the helper result.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c48b0f8 and 5cab19b.

📒 Files selected for processing (44)
  • src/_locales/de/main.json
  • src/_locales/en/main.json
  • src/_locales/es/main.json
  • src/_locales/fr/main.json
  • src/_locales/in/main.json
  • src/_locales/it/main.json
  • src/_locales/ja/main.json
  • src/_locales/ko/main.json
  • src/_locales/pt/main.json
  • src/_locales/ru/main.json
  • src/_locales/tr/main.json
  • src/_locales/zh-hans/main.json
  • src/_locales/zh-hant/main.json
  • src/background/index.mjs
  • src/config/index.mjs
  • src/config/openai-provider-mappings.mjs
  • src/popup/sections/ApiModes.jsx
  • src/popup/sections/GeneralPart.jsx
  • src/popup/sections/api-modes-provider-utils.mjs
  • src/popup/sections/general-balance-utils.mjs
  • src/popup/sections/provider-secret-utils.mjs
  • src/services/apis/aiml-api.mjs
  • src/services/apis/chatglm-api.mjs
  • src/services/apis/custom-api.mjs
  • src/services/apis/deepseek-api.mjs
  • src/services/apis/moonshot-api.mjs
  • src/services/apis/ollama-api.mjs
  • src/services/apis/openai-api.mjs
  • src/services/apis/openai-compatible-core.mjs
  • src/services/apis/openrouter-api.mjs
  • src/services/apis/provider-registry.mjs
  • src/services/init-session.mjs
  • src/services/wrappers.mjs
  • src/utils/model-name-convert.mjs
  • tests/unit/config/migrate-user-config.test.mjs
  • tests/unit/popup/api-modes-provider-utils.test.mjs
  • tests/unit/popup/general-balance-utils.test.mjs
  • tests/unit/popup/provider-secret-utils.test.mjs
  • tests/unit/services/apis/custom-api.test.mjs
  • tests/unit/services/apis/openai-api-compat.test.mjs
  • tests/unit/services/apis/provider-registry.test.mjs
  • tests/unit/services/apis/thin-adapters.test.mjs
  • tests/unit/services/wrappers-register.test.mjs
  • tests/unit/utils/model-name-convert.test.mjs
💤 Files with no reviewable changes (6)
  • src/services/apis/chatglm-api.mjs
  • src/services/apis/ollama-api.mjs
  • src/services/apis/moonshot-api.mjs
  • src/services/apis/aiml-api.mjs
  • src/services/apis/openrouter-api.mjs
  • src/services/apis/deepseek-api.mjs
🚧 Files skipped from review as they are similar to previous changes (18)
  • src/_locales/es/main.json
  • src/services/wrappers.mjs
  • src/_locales/ru/main.json
  • tests/unit/popup/provider-secret-utils.test.mjs
  • src/popup/sections/provider-secret-utils.mjs
  • src/utils/model-name-convert.mjs
  • src/config/openai-provider-mappings.mjs
  • src/_locales/zh-hant/main.json
  • src/_locales/in/main.json
  • tests/unit/services/apis/provider-registry.test.mjs
  • src/_locales/fr/main.json
  • src/popup/sections/api-modes-provider-utils.mjs
  • src/_locales/ko/main.json
  • src/services/init-session.mjs
  • src/services/apis/provider-registry.mjs
  • src/_locales/ja/main.json
  • src/_locales/zh-hans/main.json
  • src/_locales/de/main.json

@chatgpt-codex-connector

Codex Review: Didn't find any major issues. 🚀


Copilot AI left a comment

Pull request overview

Copilot reviewed 44 out of 44 changed files in this pull request and generated 1 comment.



    async onStart() {},
    async onEnd() {
      if (!finished) {
        port.postMessage({ answer: null, done: true, session: session })
Copilot AI Mar 3, 2026

In generateAnswersWithOpenAICompatible(), onEnd() posts a final { answer: null, done: true, session } message when the stream ends without a [DONE] chunk or finish_reason, but it never calls the shared finish() helper. That means the accumulated partial answer is not persisted via pushRecord(), so the conversation history can silently lose the final/partial response even though the UI is told the request is done. Consider calling finish() (or at least pushRecord + setting finished = true) inside onEnd when !finished so fallback termination behaves consistently with the normal finish path.

Suggested change
-      port.postMessage({ answer: null, done: true, session: session })
+      finish()

@PeterDaveHello PeterDaveHello force-pushed the refactor/openai-provider-platform branch from 5cab19b to a5c12a2 Compare March 11, 2026 12:51
@PeterDaveHello
Member Author

@codex review

This comment was marked as resolved.

@chatgpt-codex-connector

Codex Review: Didn't find any major issues. Keep it up!


Copilot AI left a comment

Pull request overview

Copilot reviewed 44 out of 44 changed files in this pull request and generated 1 comment.



Comment on lines +145 to +152
async onStart() {},
async onEnd() {
if (!finished) {
port.postMessage({ answer: null, done: true, session: session })
}
port.onMessage.removeListener(messageListener)
port.onDisconnect.removeListener(disconnectListener)
},
Copilot AI Mar 11, 2026

onEnd emits a final { answer: null, done: true } message without calling finish(). This means when a stream ends without [DONE] or finish_reason, the partial answer is not persisted via pushRecord, and finished is never set to true. Consider calling finish() from onEnd (or duplicating its persistence logic) so the last partial answer is saved consistently and finished is updated before cleanup.

@PeterDaveHello
Member Author

@codex review

@chatgpt-codex-connector

Codex Review: Didn't find any major issues. Already looking forward to the next diff.


Consolidate OpenAI-compatible API request handling into a single shared
module and route background dispatch through provider registry lookup.
This removes duplicated streaming/parsing logic from openai-api and
custom-api while keeping existing behavior.

Add config migration to preserve existing API keys and custom mode
entries by mapping them into providerSecrets and custom provider
records. Keep legacy fallbacks for apiMode customUrl/custom apiKey to
avoid user-visible regressions during rollout.

Normalize apiMode objects at runtime and compare selection using stable
identity fields so migrated and legacy session data continue to match
correctly.
Split provider management from API mode saving so users can add and edit
custom OpenAI-compatible providers in a dedicated editor.

Require a full chat-completions endpoint URL for provider setup and
derive the paired completions endpoint with shared popup utilities
backed by unit tests.

Keep API mode provider binding explicit at save time and preserve
provider-secret synchronization in General settings for backward
compatibility.

Add provider-related locale keys across supported locales.
Copilot AI left a comment

Pull request overview

Copilot reviewed 48 out of 48 changed files in this pull request and generated 1 comment.



Comment on lines +559 to +565
  if (!provider) {
    if (normalizeProviderId(providerId) === 'legacy-custom-default') {
      provider = getProviderById(config, providerId)
      useLegacyCustomUrlFallback = !!provider
    } else {
      return null
    }
Copilot AI Mar 21, 2026

In resolveOpenAICompatibleRequest, the legacy fallback branch checks normalizeProviderId(providerId) === 'legacy-custom-default' but then calls getProviderById(config, providerId) with the un-normalized providerId. If a session has providerId with different casing/spacing (common for legacy data, since normalizeApiMode only trims), this will fail to resolve the built-in legacy-custom-default provider and the request returns null / throws. Use the normalized id when calling getProviderById (e.g. 'legacy-custom-default' or normalizeProviderId(providerId)) and ensure resolvedProviderId aligns with the provider actually used.
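The fix Copilot suggests amounts to normalizing the ID once and using that normalized value for both the comparison and the lookup. This sketch is illustrative only: the real normalizeApiMode reportedly only trims, so the trim + lowercase normalization and the `providersById` map below are stand-ins, not the PR's actual helpers.

```javascript
// Illustrative stand-in for the real normalization helper.
function normalizeProviderId(id) {
  return (id || '').trim().toLowerCase()
}

// Look up with the normalized ID so legacy sessions carrying different
// casing or stray whitespace still resolve the built-in provider.
function resolveLegacyProvider(providersById, providerId) {
  const normalized = normalizeProviderId(providerId)
  if (normalized !== 'legacy-custom-default') return null
  return providersById[normalized] || null
}
```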

@devin-ai-integration devin-ai-integration bot left a comment

Devin Review found 2 potential issues.

View 8 additional findings in Devin Review.


  "OpenAI (GPT-5.4 nano)": "OpenAI (GPT-5.4 nano)",
- "Anthropic (Claude Sonnet 4.6)": "Anthropic (Claude Sonnet 4.6)"
+ "Anthropic (Claude Sonnet 4.6)": "Anthropic (Claude Sonnet 4.6)",
+ "This provider is still used by other API modes or saved conversations": "Bu saglayici hala diger API modlari veya kaydedilmis konusmalar tarafindan kullaniliyor",


🟡 Turkish locale translation missing diacritical marks (ASCII-only characters)

The Turkish translation at line 212 uses ASCII-only characters instead of proper Turkish diacritical marks. For example, "Bu saglayici hala diger API modlari veya kaydedilmis konusmalar tarafindan kullaniliyor" should use sağlayıcı, hâlâ, diğer, modları, kaydedilmiş, konuşmalar, kullanılıyor. Other Turkish translations in the same file (e.g., lines 153 and 215) correctly use Turkish characters. This will display garbled-looking text to Turkish-speaking users.

Suggested change
- "This provider is still used by other API modes or saved conversations": "Bu saglayici hala diger API modlari veya kaydedilmis konusmalar tarafindan kullaniliyor",
+ "This provider is still used by other API modes or saved conversations": "Bu sağlayıcı hâlâ diğer API modları veya kaydedilmiş konuşmalar tarafından kullanılıyor",


  "OpenAI (GPT-5.4 nano)": "OpenAI (GPT-5.4 nano)",
- "Anthropic (Claude Sonnet 4.6)": "Anthropic (Claude Sonnet 4.6)"
+ "Anthropic (Claude Sonnet 4.6)": "Anthropic (Claude Sonnet 4.6)",
+ "This provider is still used by other API modes or saved conversations": "Ce fournisseur est encore utilise par d'autres modes d'API ou des conversations enregistrees",


🟡 French locale translation missing accent marks

The French translation at line 212 uses "utilise" and "enregistrees" instead of the correctly accented "utilisé" and "enregistrées". Other French translations in the same file correctly use accents (e.g., line 215: "nécessaire pour les conversations enregistrées"). This will display incorrectly-spelled text to French-speaking users.

Suggested change
- "This provider is still used by other API modes or saved conversations": "Ce fournisseur est encore utilise par d'autres modes d'API ou des conversations enregistrees",
+ "This provider is still used by other API modes or saved conversations": "Ce fournisseur est encore utilisé par d'autres modes d'API ou des conversations enregistrées",


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 9ac7c67071


Comment on lines +152 to +154
  const selectedProvider = selectedProviderRequest
    ? getProviderById(config, selectedProviderRequest.providerId)
    : null


P2 Badge Reuse the resolved provider when legacy-custom-default is custom

The new registry tests explicitly support migrated configs where a custom provider keeps the ID legacy-custom-default, but this code throws away selectedProviderRequest.provider and looks the provider up by ID again. In that case getProviderById() returns the builtin legacy provider first, so the mode-override flow in buildSelectedModeProviderSecretOverrideUpdate() clones the builtin template instead of the real custom endpoint. Saving an override from the General tab then creates a materialized provider with no usable request URL, and that mode stops working on the next request.
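The fix Codex suggests boils down to preferring the provider instance already carried on the resolved request, and falling back to an ID lookup only when it is absent. The request shape below is an assumption inferred from the comment, not the actual code.

```javascript
// Prefer the provider already resolved on the request; only fall back to an
// ID lookup when none was attached. Avoids re-resolving IDs (such as
// legacy-custom-default) that map to both a builtin and a custom provider.
function pickSelectedProvider(selectedProviderRequest, lookupById) {
  if (!selectedProviderRequest) return null
  return selectedProviderRequest.provider || lookupById(selectedProviderRequest.providerId)
}
```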


Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 12

♻️ Duplicate comments (4)
src/services/apis/openai-compatible-core.mjs (2)

17-20: ⚠️ Potential issue | 🟠 Major

Append legacy response chunks instead of replacing answer.

For legacy streaming providers, each response chunk currently overwrites the accumulated buffer, so only the tail fragment is persisted and returned.

🔁 Preserve the full streamed answer
-function buildMessageAnswer(answer, data, allowLegacyResponseField) {
-  if (allowLegacyResponseField && typeof data?.response === 'string' && data.response) {
-    return data.response
+function buildMessageAnswer(answer, data, allowLegacyResponseField) {
+  if (allowLegacyResponseField && typeof data?.response === 'string') {
+    return answer + data.response
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/openai-compatible-core.mjs` around lines 17 - 20,
buildMessageAnswer currently replaces the accumulated answer with each legacy
data.response chunk, losing earlier fragments; change the logic in
buildMessageAnswer so that when allowLegacyResponseField is true and
data?.response is a non-empty string you append data.response to the existing
answer (e.g., answer = (answer ?? '') + data.response) rather than returning
data.response alone, and then return the concatenated result so streaming
fragments are preserved; keep existing type checks for data?.response and
respect allowLegacyResponseField flag.
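The append semantics the prompt describes can be sketched as follows. The function name comes from the review; the body is illustrative, not the repo's actual implementation:

```javascript
// Illustrative only: append each legacy `response` chunk to the running
// answer so earlier streamed fragments survive, instead of replacing it.
function buildMessageAnswer(answer, data, allowLegacyResponseField) {
  if (allowLegacyResponseField && typeof data?.response === 'string' && data.response) {
    return (answer ?? '') + data.response
  }
  return answer
}
```

With this shape, streaming `'Hel'` then `{ response: 'lo' }` yields `'Hello'` rather than just `'lo'`.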

114-118: ⚠️ Potential issue | 🟠 Major

Remove raw conversation and SSE payload logging from the production path.

These debug statements dump full prompts, answers, and provider payloads into extension logs on every request.

🧹 Minimal fix
   const finish = () => {
     if (finished) return
     finished = true
     if (answer !== '') {
       pushRecord(session, question, answer)
-      console.debug('conversation history', { content: session.conversationRecords })
     }
     port.postMessage({ answer: null, done: true, session: session })
   }
@@
     onMessage(message) {
-      console.debug('sse message', message)
       if (finished) return

Also applies to: 126-127

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/openai-compatible-core.mjs` around lines 114 - 118, The
production path currently logs raw conversation and SSE payloads (see the
console.debug call that prints session.conversationRecords and any other debug
logs around SSE handling), which must be removed or disabled; remove the
console.debug that dumps session.conversationRecords (and any similar console.*
that logs full prompts/answers/provider payloads) and instead either sanitize
the data or gate logging behind a runtime debug flag/ENV check (e.g., only log
when DEBUG_MODE is true); keep the pushRecord(session, question, answer) and the
port.postMessage({ answer: null, done: true, session }) behavior but ensure
session sent over postMessage does not contain sensitive raw prompt/answer
payloads (sanitize or omit conversationRecords) before sending.
src/popup/sections/ApiModes.jsx (1)

605-610: ⚠️ Potential issue | 🟡 Minor

Use single quotes for the placeholder literal.

This new JSX prop still uses double quotes.

✏️ Style-only fix
-            placeholder="https://api.example.com/v1/chat/completions"
+            placeholder='https://api.example.com/v1/chat/completions'
As per coding guidelines `**/*.{js,jsx,mjs,ts,tsx}`: Use single quotes instead of double quotes in imports and string literals.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/popup/sections/ApiModes.jsx` around lines 605 - 610, The JSX input uses a
double-quoted placeholder literal; replace the double quotes with single quotes
for the placeholder prop in the input element that uses providerBaseUrlInputRef
and value={providerDraft.apiUrl} (i.e., change
placeholder="https://api.example.com/v1/chat/completions" to use single quotes)
to comply with the project's string-literal style rule.
src/services/apis/provider-registry.mjs (1)

284-302: ⚠️ Potential issue | 🟠 Major

Canonicalize custom provider IDs before using them as registry keys.

normalizeCustomProvider() only trims provider.id, but later in the same module getProviderSecret() canonicalizes IDs with normalizeProviderId() before looking in providerSecrets. A provider imported as MyProxy can still resolve here while its secret lookup falls through to providerSecrets.myproxy and returns empty. The order-based custom-provider-${index + 1} fallback has the same problem after deletions or reordering, because persisted providerId references shift.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/provider-registry.mjs` around lines 284 - 302,
normalizeCustomProvider currently only trims provider.id causing mismatches with
getProviderSecret which calls normalizeProviderId; update
normalizeCustomProvider to canonicalize the returned id by passing the computed
id (either the trimmed provider.id or the fallback string) through
normalizeProviderId before returning. Specifically, in normalizeCustomProvider
ensure the variable id is built as you already do (toStringOrEmpty(...).trim()
|| `custom-provider-${index + 1}`) and then set id = normalizeProviderId(id) so
providerSecrets lookups and getProviderSecret use the same canonical key.
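A minimal sketch of the canonicalization the prompt asks for, assuming `normalizeProviderId` trims and lowercases (implied by the `MyProxy` → `providerSecrets.myproxy` mismatch described above); the real helpers may differ:

```javascript
// Assumed behavior of normalizeProviderId: trim + lowercase, per the
// MyProxy/myproxy mismatch described in the review.
function normalizeProviderId(id) {
  return String(id ?? '').trim().toLowerCase()
}

// Canonicalize the id once, so registry keys and providerSecrets lookups
// use the same key. The index-based fallback mirrors the review's wording.
function normalizeCustomProvider(provider, index) {
  const rawId = String(provider?.id ?? '').trim() || `custom-provider-${index + 1}`
  return { ...provider, id: normalizeProviderId(rawId) }
}
```

Canonicalizing at normalization time means every later lookup agrees on the key, rather than each call site re-normalizing independently.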
🧹 Nitpick comments (2)
src/components/ConversationCard/index.jsx (1)

39-42: Move these provider-label helpers to a shared module.

ConversationCard is a shared component, but it now reaches into src/popup/sections/ for display logic. That reverses the dependency direction and makes the component harder to reuse outside the popup. A neutral utility module under src/utils/ or similar would keep this boundary cleaner.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/components/ConversationCard/index.jsx` around lines 39 - 42,
ConversationCard currently imports getApiModeDisplayLabel and
getConversationAiName from src/popup/sections/api-modes-provider-utils.mjs which
inverts the dependency graph; move these helper functions into a neutral shared
utilities module (e.g., src/utils/api-mode-labels.{js|mjs}) and update
ConversationCard to import getApiModeDisplayLabel and getConversationAiName from
that new module so the shared component no longer reaches into the
popup/sections folder; ensure you export the same function names and update any
other consumers to the new path.
src/background/index.mjs (1)

338-350: Avoid keeping a second provider registry in the background.

This predicate hard-codes the OpenAI-compatible families even though the PR introduces provider-registry.mjs. The list will drift the next time a compatible provider is added, and that provider will silently miss the unified path until this file is updated too. Consider exposing a single compatibility decision from the registry/resolver and branching on that here.

As per coding guidelines, "Centralize network/API logic under src/services/apis/ and keep endpoints auditable".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/background/index.mjs` around lines 338 - 350, The predicate
isUsingOpenAICompatibleApiSession currently hard-codes provider checks and must
be replaced with a single compatibility query from the new provider registry:
import and call the registry/resolver function (eg. an exported
isOpenAICompatibleProvider or similar) from provider-registry.mjs instead of
listing isUsingCustomModel/isUsingChatgptApiModel/...; update
isUsingOpenAICompatibleApiSession to delegate to that registry function (passing
the session or session.provider) so new providers are automatically covered, and
ensure any network/API logic remains under src/services/apis/ as per guidelines.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/_locales/zh-hans/main.json`:
- Around line 220-221: The translations for the strings "This API key is set on
the selected custom mode. Editing it here will create a dedicated provider for
that mode." and "Use shared key" mix the English word "provider" into zh-Hans;
replace "provider" with the consistent term "提供商" in both translations so they
read e.g. "此 API 密钥设定在所选的自定义模式上。在这里编辑会为该模式建立专用提供商。" and "使用共享提供商密钥" to match the
rest of the locale.

In `@src/config/index.mjs`:
- Around line 945-966: The code snapshots legacyCustomProviderSecret once at
module init which misses later mutations to providerSecrets; change to read the
live value lazily by replacing the constant with a function (e.g.,
getLegacyCustomProviderSecret()) that returns
normalizeText(providerSecrets['legacy-custom-default']) and update callers
(including isProviderSecretCompatibleForCustomMode and any other places where
legacyCustomProviderSecret is used such as the other migration blocks around the
getCustomModeMigrationSignature usage) to call that function so migrations see
the current map value instead of a stale snapshot.

In `@src/popup/popup-config-utils.mjs`:
- Around line 1-2: The current helpers (mergeConfigUpdate and
buildConfigRollbackPatch) perform shallow top-level merges and rollbacks which
can accidentally overwrite or fail to delete nested keys like providerSecrets;
update these functions to perform deep merge/patch semantics for nested maps
(specifically handle providerSecrets as a map of provider -> secrets) or else
enforce and document a strict full-object-replacement contract at every call
site. Concretely: modify mergeConfigUpdate to deep-merge nested objects (merging
providerSecrets maps rather than replacing them), and update
buildConfigRollbackPatch to produce rollback operations that can remove added
nested keys and restore only modified nested entries; alternatively, add
validation in all call sites of mergeConfigUpdate and buildConfigRollbackPatch
to reject partial updates for providerSecrets and require callers to pass the
entire nested object. Ensure tests for functions mergeConfigUpdate and
buildConfigRollbackPatch cover adding, updating, and deleting nested
providerSecrets entries.

In `@src/popup/Popup.jsx`:
- Around line 107-113: When the initial getUserConfig() resolves in the
useEffect, don't clobber any optimistic in-memory edits; instead merge the
loaded config into state while preserving existing in-memory values. Update the
handler that currently does persistedConfigRef.current = config;
setConfig(config) to set persistedConfigRef.current = config and then
setConfig(prev => ({ ...config, ...prev })) so keys already edited in the
current popup take precedence over the storage snapshot (referencing useEffect,
getUserConfig, persistedConfigRef, and setConfig).

In `@src/popup/sections/ApiModes.jsx`:
- Around line 252-277: The persistApiMode function currently calls
updateConfig(payload) and then immediately calls clearPendingProviderChanges(),
which can lose in-progress edits if the async updateConfig rejects; change
persistApiMode to await the promise returned by updateConfig(payload) (i.e.,
make the caller async or handle the returned promise) and only call
clearPendingProviderChanges() and any editor-close logic after a successful
write, and add a catch that logs/surfaces the error and avoids clearing pending
state on failure; apply the same pattern to the other similar persistence block
(the one around the second occurrence noted in the review) so both updateConfig
usages await success before tearing down pending provider/editor state.
- Around line 121-129: The catch block for Browser.storage.local.get('sessions')
incorrectly sets sessionsLoaded to true on failure; remove or change the
setSessionsLoaded(true) in the catch so that a failed read does not mark
sessions as loaded. In the component (ApiModes.jsx) adjust the promise.catch
handler (the block referencing isMounted and setSessionsLoaded) to either only
log the error or set an error flag but do NOT call setSessionsLoaded(true); keep
the existing updateSessions call in the .then branch and preserve the isMounted
check before any state updates so the saved-conversation guard remains closed on
read failure.

In `@src/popup/sections/general-balance-utils.mjs`:
- Around line 22-29: The cache key builder getBalanceCacheKey is not normalizing
the billing base URL the same way as normalizeBillingApiBaseUrl, so equivalent
endpoints like "https://host" and "https://host/v1" produce different keys;
update getBalanceCacheKey to run the baseUrl through the same normalization
(call normalizeBillingApiBaseUrl(baseUrl) or apply its logic: trim, strip
trailing slashes, and remove a trailing "/v1" segment) before JSON.stringify,
ensuring the baseUrl component matches the billing normalization used elsewhere.
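One way to align the cache key with the billing normalization; the steps (trim, strip trailing slashes, drop a trailing `/v1`) come from the prompt, while the exact helper shapes are assumptions:

```javascript
// Apply the same normalization the billing code uses before the URL
// becomes part of the cache key.
function normalizeBillingApiBaseUrl(baseUrl) {
  return String(baseUrl ?? '')
    .trim()
    .replace(/\/+$/, '') // strip trailing slashes
    .replace(/\/v1$/, '') // drop a trailing /v1 segment
}

function getBalanceCacheKey(apiKey, baseUrl) {
  return JSON.stringify({ apiKey, baseUrl: normalizeBillingApiBaseUrl(baseUrl) })
}
```

Under this sketch, `https://host`, `https://host/`, and `https://host/v1` all map to the same cache entry for a given key.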

In `@src/popup/sections/GeneralPart.jsx`:
- Around line 163-172: The OpenAI balance control is shown based only on
provider IDs; change the gating to also verify the resolved billing host is the
official OpenAI platform by using resolvedOpenAiApiUrl. Update the logic that
computes shouldShowOpenAIBalanceControls (and any other occurrences around the
second block at lines ~444-479) to call shouldOpenOpenAIUsagePage(...) AND
validate resolvedOpenAiApiUrl (from selectedProvider?.baseUrl ||
config.customOpenAiApiUrl || 'https://api.openai.com') resolves to the OpenAI
host (e.g., parse the URL and check hostname === 'api.openai.com' or
hostname.endsWith('.openai.com') / other canonical OpenAI host checks). Ensure
you reference resolvedOpenAiApiUrl and
selectedProviderId/selectedSourceProviderId when changing the condition so the
balance link is hidden for proxies/self-hosted endpoints.
- Around line 62-105: The checkBilling function (and similar balance fetch logic
used by getBalance) should be moved out of the popup UI into a new or existing
module under src/services/apis/; create an API service function (e.g., export
async function fetchBillingInfo(apiKey, apiUrl)) that contains the HTTP calls,
header construction, error handling, date logic (startDate/endDate/subDate) and
returns { totalAmount, totalUsage, remaining } or nulls on error, then import
and call that service from GeneralPart.jsx (replace direct calls to
checkBilling/fetch with the new service). Ensure the new service reuses
formatDate if needed, centralizes the Authorization header creation, and
preserves existing response/error behavior so UI code only deals with the
returned values.
- Around line 283-349: The cleanup decision uses a stale apiMode because
buildCleanupConfigUpdate(...) is called before the second
isOverrideCommitCurrent(...) check and before
buildSelectionPreservingConfigUpdate(...); reorder so the cleanup is based on
the final selection: compute shouldPreserveCurrentSelection via
isOverrideCommitCurrent(...) first (using commitGeneration/overrideCommit refs
and commitSelectionSignature refs), then call
buildCleanupConfigUpdate(cleanupCandidateProviderId, config, configUpdate,
updatedSessions) only after you know whether selection is preserved, or
alternatively recompute nextConfigUpdate by calling buildCleanupConfigUpdate
again after obtaining finalConfigUpdate from
buildSelectionPreservingConfigUpdate(...); ensure cleanupCandidateProviderId
removal logic uses the final apiMode/selection state so you don't orphan the
newly selected provider secret.

In `@src/services/apis/custom-api.mjs`:
- Around line 19-29: The shared OpenAI-compatible flow can leak port listeners
if fetchSSE throws before onEnd/onError run; update the code that calls
setAbortController (referencing setAbortController, controller, cleanController,
messageListener, disconnectListener) so that after creating the controller you
wrap the fetchSSE call in a try/finally and always call cleanController() in the
finally block (ensuring messageListener/disconnectListener are removed even when
fetchSSE throws); apply this change in the openai-compatible implementation used
by generateAnswersWithOpenAICompatible to guarantee listeners are cleaned on all
code paths.
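The cleanup guarantee can be sketched like this, with `fetchSSE` and `cleanController` standing in for the extension's real helpers:

```javascript
// Run the streaming call so listener cleanup happens on every path:
// normal completion, promise rejection, and a synchronous throw alike.
async function runStreamWithCleanup(fetchSSE, cleanController) {
  try {
    await fetchSSE()
  } finally {
    cleanController()
  }
}
```

Because the `finally` block runs whether `fetchSSE` resolves, rejects, or throws before ever returning a promise, the listeners cannot leak.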

In `@src/utils/model-name-convert.mjs`:
- Around line 151-170: The identity comparison in isApiModeSelected() wrongly
includes the "active" property, causing a mode to fail matching itself when
toggled; update normalizeForCompare (used by isApiModeSelected and
normalizeApiMode) to omit "active" from the JSON key so identity only uses
groupName, itemName, isCustom, customName and providerId (i.e., remove active
from the returned object before JSON.stringify); keep the rest of the logic
unchanged so selectedApiMode and targetApiMode continue to be compared by the
narrower identity.
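The narrower identity above can be sketched as follows (field list taken from the prompt; the real function may normalize values differently):

```javascript
// Compare API modes by stable identity fields only; the volatile
// `active` flag is deliberately excluded from the signature.
function normalizeForCompare(apiMode) {
  return JSON.stringify({
    groupName: apiMode?.groupName ?? '',
    itemName: apiMode?.itemName ?? '',
    isCustom: Boolean(apiMode?.isCustom),
    customName: apiMode?.customName ?? '',
    providerId: apiMode?.providerId ?? '',
  })
}

function isApiModeSelected(selectedApiMode, targetApiMode) {
  return normalizeForCompare(selectedApiMode) === normalizeForCompare(targetApiMode)
}
```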

---

Duplicate comments:
In `@src/popup/sections/ApiModes.jsx`:
- Around line 605-610: The JSX input uses a double-quoted placeholder literal;
replace the double quotes with single quotes for the placeholder prop in the
input element that uses providerBaseUrlInputRef and value={providerDraft.apiUrl}
(i.e., change placeholder="https://api.example.com/v1/chat/completions" to use
single quotes) to comply with the project's string-literal style rule.

In `@src/services/apis/openai-compatible-core.mjs`:
- Around line 17-20: buildMessageAnswer currently replaces the accumulated
answer with each legacy data.response chunk, losing earlier fragments; change
the logic in buildMessageAnswer so that when allowLegacyResponseField is true
and data?.response is a non-empty string you append data.response to the
existing answer (e.g., answer = (answer ?? '') + data.response) rather than
returning data.response alone, and then return the concatenated result so
streaming fragments are preserved; keep existing type checks for data?.response
and respect allowLegacyResponseField flag.
- Around line 114-118: The production path currently logs raw conversation and
SSE payloads (see the console.debug call that prints session.conversationRecords
and any other debug logs around SSE handling), which must be removed or
disabled; remove the console.debug that dumps session.conversationRecords (and
any similar console.* that logs full prompts/answers/provider payloads) and
instead either sanitize the data or gate logging behind a runtime debug flag/ENV
check (e.g., only log when DEBUG_MODE is true); keep the pushRecord(session,
question, answer) and the port.postMessage({ answer: null, done: true, session
}) behavior but ensure session sent over postMessage does not contain sensitive
raw prompt/answer payloads (sanitize or omit conversationRecords) before
sending.

In `@src/services/apis/provider-registry.mjs`:
- Around line 284-302: normalizeCustomProvider currently only trims provider.id
causing mismatches with getProviderSecret which calls normalizeProviderId;
update normalizeCustomProvider to canonicalize the returned id by passing the
computed id (either the trimmed provider.id or the fallback string) through
normalizeProviderId before returning. Specifically, in normalizeCustomProvider
ensure the variable id is built as you already do (toStringOrEmpty(...).trim()
|| `custom-provider-${index + 1}`) and then set id = normalizeProviderId(id) so
providerSecrets lookups and getProviderSecret use the same canonical key.

---

Nitpick comments:
In `@src/background/index.mjs`:
- Around line 338-350: The predicate isUsingOpenAICompatibleApiSession currently
hard-codes provider checks and must be replaced with a single compatibility
query from the new provider registry: import and call the registry/resolver
function (eg. an exported isOpenAICompatibleProvider or similar) from
provider-registry.mjs instead of listing
isUsingCustomModel/isUsingChatgptApiModel/...; update
isUsingOpenAICompatibleApiSession to delegate to that registry function (passing
the session or session.provider) so new providers are automatically covered, and
ensure any network/API logic remains under src/services/apis/ as per guidelines.

In `@src/components/ConversationCard/index.jsx`:
- Around line 39-42: ConversationCard currently imports getApiModeDisplayLabel
and getConversationAiName from src/popup/sections/api-modes-provider-utils.mjs
which inverts the dependency graph; move these helper functions into a neutral
shared utilities module (e.g., src/utils/api-mode-labels.{js|mjs}) and update
ConversationCard to import getApiModeDisplayLabel and getConversationAiName from
that new module so the shared component no longer reaches into the
popup/sections folder; ensure you export the same function names and update any
other consumers to the new path.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: df51bb37-ab90-4217-94a0-b5d913a4959f

📥 Commits

Reviewing files that changed from the base of the PR and between 5cab19b and 9ac7c67.

📒 Files selected for processing (48)
  • src/_locales/de/main.json
  • src/_locales/en/main.json
  • src/_locales/es/main.json
  • src/_locales/fr/main.json
  • src/_locales/in/main.json
  • src/_locales/it/main.json
  • src/_locales/ja/main.json
  • src/_locales/ko/main.json
  • src/_locales/pt/main.json
  • src/_locales/ru/main.json
  • src/_locales/tr/main.json
  • src/_locales/zh-hans/main.json
  • src/_locales/zh-hant/main.json
  • src/background/index.mjs
  • src/components/ConversationCard/index.jsx
  • src/config/index.mjs
  • src/config/openai-provider-mappings.mjs
  • src/popup/Popup.jsx
  • src/popup/popup-config-utils.mjs
  • src/popup/sections/ApiModes.jsx
  • src/popup/sections/GeneralPart.jsx
  • src/popup/sections/api-modes-provider-utils.mjs
  • src/popup/sections/general-balance-utils.mjs
  • src/popup/sections/provider-secret-utils.mjs
  • src/services/apis/aiml-api.mjs
  • src/services/apis/chatglm-api.mjs
  • src/services/apis/custom-api.mjs
  • src/services/apis/deepseek-api.mjs
  • src/services/apis/moonshot-api.mjs
  • src/services/apis/ollama-api.mjs
  • src/services/apis/openai-api.mjs
  • src/services/apis/openai-compatible-core.mjs
  • src/services/apis/openrouter-api.mjs
  • src/services/apis/provider-registry.mjs
  • src/services/init-session.mjs
  • src/services/wrappers.mjs
  • src/utils/model-name-convert.mjs
  • tests/unit/config/migrate-user-config.test.mjs
  • tests/unit/popup/api-modes-provider-utils.test.mjs
  • tests/unit/popup/general-balance-utils.test.mjs
  • tests/unit/popup/popup-config-update.test.mjs
  • tests/unit/popup/provider-secret-utils.test.mjs
  • tests/unit/services/apis/custom-api.test.mjs
  • tests/unit/services/apis/openai-api-compat.test.mjs
  • tests/unit/services/apis/provider-registry.test.mjs
  • tests/unit/services/apis/thin-adapters.test.mjs
  • tests/unit/services/wrappers-register.test.mjs
  • tests/unit/utils/model-name-convert.test.mjs
💤 Files with no reviewable changes (6)
  • src/services/apis/deepseek-api.mjs
  • src/services/apis/moonshot-api.mjs
  • src/services/apis/ollama-api.mjs
  • src/services/apis/aiml-api.mjs
  • src/services/apis/chatglm-api.mjs
  • src/services/apis/openrouter-api.mjs
✅ Files skipped from review due to trivial changes (10)
  • tests/unit/services/wrappers-register.test.mjs
  • src/config/openai-provider-mappings.mjs
  • tests/unit/popup/general-balance-utils.test.mjs
  • src/_locales/fr/main.json
  • src/_locales/tr/main.json
  • src/_locales/ru/main.json
  • tests/unit/popup/provider-secret-utils.test.mjs
  • src/_locales/it/main.json
  • tests/unit/popup/api-modes-provider-utils.test.mjs
  • src/_locales/es/main.json
🚧 Files skipped from review as they are similar to previous changes (9)
  • tests/unit/services/apis/custom-api.test.mjs
  • src/services/init-session.mjs
  • src/_locales/de/main.json
  • tests/unit/services/apis/thin-adapters.test.mjs
  • tests/unit/utils/model-name-convert.test.mjs
  • src/_locales/pt/main.json
  • src/_locales/en/main.json
  • src/_locales/ko/main.json
  • src/_locales/zh-hant/main.json

Comment on lines +220 to +221
"This API key is set on the selected custom mode. Editing it here will create a dedicated provider for that mode.": "此 API 密钥设定在所选的自定义模式上。在这里编辑会为该模式建立专用 provider。",
"Use shared key": "使用共享 provider 密钥",

⚠️ Potential issue | 🟡 Minor

Keep provider translated consistently in zh-Hans.

These new strings still mix raw English provider into otherwise localized copy, while the rest of the locale uses 提供商. That will look inconsistent in the popup.

🔤 Suggested wording
-  "This API key is set on the selected custom mode. Editing it here will create a dedicated provider for that mode.": "此 API 密钥设定在所选的自定义模式上。在这里编辑会为该模式建立专用 provider。",
-  "Use shared key": "使用共享 provider 密钥",
+  "This API key is set on the selected custom mode. Editing it here will create a dedicated provider for that mode.": "此 API 密钥设置在所选的自定义模式上。在这里编辑会为该模式创建专用提供商。",
+  "Use shared key": "使用共享提供商密钥",
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-  "This API key is set on the selected custom mode. Editing it here will create a dedicated provider for that mode.": "此 API 密钥设定在所选的自定义模式上。在这里编辑会为该模式建立专用 provider。",
-  "Use shared key": "使用共享 provider 密钥",
+  "This API key is set on the selected custom mode. Editing it here will create a dedicated provider for that mode.": "此 API 密钥设置在所选的自定义模式上。在这里编辑会为该模式创建专用提供商。",
+  "Use shared key": "使用共享提供商密钥",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/_locales/zh-hans/main.json` around lines 220 - 221, The translations for
the strings "This API key is set on the selected custom mode. Editing it here
will create a dedicated provider for that mode." and "Use shared key" mix the
English word "provider" into zh-Hans; replace "provider" with the consistent
term "提供商" in both translations so they read e.g. "此 API
密钥设定在所选的自定义模式上。在这里编辑会为该模式建立专用提供商。" and "使用共享提供商密钥" to match the rest of the
locale.

Comment on lines +945 to +966
const legacyCustomProviderSecret = normalizeText(providerSecrets['legacy-custom-default'])
const hasOwnProviderSecret = (providerId) =>
Object.prototype.hasOwnProperty.call(providerSecrets, providerId)
const getCustomModeMigrationSignature = (apiMode) =>
JSON.stringify({
groupName: normalizeText(apiMode?.groupName),
itemName: normalizeText(apiMode?.itemName),
isCustom: Boolean(apiMode?.isCustom),
customName: normalizeText(apiMode?.customName),
customUrl: normalizeEndpointUrlForCompare(normalizeText(apiMode?.customUrl)),
providerId: normalizeProviderId(
typeof apiMode?.providerId === 'string' ? apiMode.providerId : '',
),
apiKey: normalizeText(apiMode?.apiKey),
})
const isProviderSecretCompatibleForCustomMode = (modeApiKey, providerSecret) => {
const effectiveModeKey = normalizeText(modeApiKey) || legacyCustomProviderSecret
if (effectiveModeKey) {
return !providerSecret || providerSecret === effectiveModeKey
}
return !providerSecret
}

⚠️ Potential issue | 🟠 Major

Re-read legacy-custom-default after the migration mutates providerSecrets.

legacyCustomProviderSecret is snapshotted once on Line 945, but earlier iterations can populate providerSecrets['legacy-custom-default'] later in the same migration pass. URL-derived modes processed after that then miss the inherited key and get migrated without credentials.

🔄 Read the legacy secret lazily from the live map
-  const legacyCustomProviderSecret = normalizeText(providerSecrets['legacy-custom-default'])
+  const getLegacyCustomProviderSecret = () =>
+    normalizeText(providerSecrets['legacy-custom-default'])
@@
   const isProviderSecretCompatibleForCustomMode = (modeApiKey, providerSecret) => {
-    const effectiveModeKey = normalizeText(modeApiKey) || legacyCustomProviderSecret
+    const effectiveModeKey = normalizeText(modeApiKey) || getLegacyCustomProviderSecret()
     if (effectiveModeKey) {
       return !providerSecret || providerSecret === effectiveModeKey
     }
     return !providerSecret
   }
@@
-    } else if (legacyCustomProviderSecret && providerIdAssignedFromLegacyCustomUrl) {
+    } else if (getLegacyCustomProviderSecret() && providerIdAssignedFromLegacyCustomUrl) {
       if (!hasOwnProviderSecret(apiMode.providerId)) {
-        providerSecrets[apiMode.providerId] = legacyCustomProviderSecret
+        providerSecrets[apiMode.providerId] = getLegacyCustomProviderSecret()
         dirty = true
       }
     }
@@
-      legacyCustomProviderSecret &&
+      getLegacyCustomProviderSecret() &&
       selectedTargetProviderId &&
       !hasOwnProviderSecret(selectedTargetProviderId)
     ) {
-      providerSecrets[selectedTargetProviderId] = legacyCustomProviderSecret
+      providerSecrets[selectedTargetProviderId] = getLegacyCustomProviderSecret()
       dirty = true
     }

Also applies to: 1115-1119, 1220-1228

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/config/index.mjs` around lines 945 - 966, The code snapshots
legacyCustomProviderSecret once at module init which misses later mutations to
providerSecrets; change to read the live value lazily by replacing the constant
with a function (e.g., getLegacyCustomProviderSecret()) that returns
normalizeText(providerSecrets['legacy-custom-default']) and update callers
(including isProviderSecretCompatibleForCustomMode and any other places where
legacyCustomProviderSecret is used such as the other migration blocks around the
getCustomModeMigrationSignature usage) to call that function so migrations see
the current map value instead of a stale snapshot.
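The snapshot-vs-lazy difference is easy to demonstrate in isolation (illustrative names; the real migration reads from the user config's providerSecrets map):

```javascript
const providerSecrets = {}

// Snapshot taken before the migration mutates the map: stays stale forever.
const snapshotSecret = providerSecrets['legacy-custom-default'] ?? ''

// Lazy read: always reflects the live map at call time.
const getLegacyCustomProviderSecret = () =>
  (providerSecrets['legacy-custom-default'] ?? '').trim()

// A later migration step populates the key...
providerSecrets['legacy-custom-default'] = ' sk-legacy '
```

After the mutation, the snapshot is still empty while the lazy getter sees the inherited key, which is exactly why URL-derived modes processed later need the getter form.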

Comment on lines +1 to +2
export function mergeConfigUpdate(currentConfig, value) {
return { ...currentConfig, ...value }

⚠️ Potential issue | 🟠 Major

These helpers only understand top-level replacement, which is risky for providerSecrets.

mergeConfigUpdate shallow-merges, and buildConfigRollbackPatch restores whole top-level keys. With nested config like providerSecrets, that means a partial patch can wipe sibling entries, while rollback for newly added nested entries cannot express a real delete. Please either make the full-object replacement contract explicit and enforce it at every call site, or teach merge and rollback to handle nested provider maps together.

Also applies to: 14-30

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/popup/popup-config-utils.mjs` around lines 1 - 2, The current helpers
(mergeConfigUpdate and buildConfigRollbackPatch) perform shallow top-level
merges and rollbacks which can accidentally overwrite or fail to delete nested
keys like providerSecrets; update these functions to perform deep merge/patch
semantics for nested maps (specifically handle providerSecrets as a map of
provider -> secrets) or else enforce and document a strict
full-object-replacement contract at every call site. Concretely: modify
mergeConfigUpdate to deep-merge nested objects (merging providerSecrets maps
rather than replacing them), and update buildConfigRollbackPatch to produce
rollback operations that can remove added nested keys and restore only modified
nested entries; alternatively, add validation in all call sites of
mergeConfigUpdate and buildConfigRollbackPatch to reject partial updates for
providerSecrets and require callers to pass the entire nested object. Ensure
tests for functions mergeConfigUpdate and buildConfigRollbackPatch cover adding,
updating, and deleting nested providerSecrets entries.
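One possible deep-merge shape, under the assumption that `providerSecrets` is a flat map of provider id to secret string:

```javascript
// Merge top-level keys shallowly, but merge providerSecrets entry-by-entry
// so a partial patch cannot wipe sibling secrets.
function mergeConfigUpdate(currentConfig, value) {
  const merged = { ...currentConfig, ...value }
  if (value?.providerSecrets && currentConfig?.providerSecrets) {
    merged.providerSecrets = {
      ...currentConfig.providerSecrets,
      ...value.providerSecrets,
    }
  }
  return merged
}
```

Note this still cannot express deleting a nested entry; a rollback patch would additionally need explicit remove operations, as the comment points out.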

Comment on lines 107 to 113
useEffect(() => {
getPreferredLanguageKey().then((lang) => {
i18n.changeLanguage(lang)
})
getUserConfig().then((config) => {
persistedConfigRef.current = config
setConfig(config)

⚠️ Potential issue | 🟡 Minor

Don’t let the initial storage read clobber optimistic edits.

If updateConfig runs before getUserConfig() resolves, Line 113 replaces the in-memory form state with the stale storage snapshot even though the queued write may still succeed. That makes the just-edited setting appear to revert until the popup is reopened.

💡 Merge only untouched keys from the initial load
     getUserConfig().then((config) => {
       persistedConfigRef.current = config
-      setConfig(config)
+      setConfig((currentConfig) => {
+        const untouchedConfig = { ...config }
+        for (const key of Object.keys(latestTouchedRequestByKeyRef.current)) {
+          delete untouchedConfig[key]
+        }
+        return mergeConfigUpdate(currentConfig, untouchedConfig)
+      })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/popup/Popup.jsx` around lines 107 - 113, When the initial getUserConfig()
resolves in the useEffect, don't clobber any optimistic in-memory edits; instead
merge the loaded config into state while preserving existing in-memory values.
Update the handler that currently does persistedConfigRef.current = config;
setConfig(config) to set persistedConfigRef.current = config and then
setConfig(prev => ({ ...config, ...prev })) so keys already edited in the
current popup take precedence over the storage snapshot (referencing useEffect,
getUserConfig, persistedConfigRef, and setConfig).

Comment on lines +121 to +129
    Browser.storage.local
      .get('sessions')
      .then(({ sessions }) => {
        updateSessions(sessions)
      })
      .catch(() => {
        if (!isMounted) return
        setSessionsLoaded(true)
      })
⚠️ Potential issue | 🟠 Major

Keep the saved-conversation guard closed when the initial session scan fails.

The catch branch turns sessionsLoaded on even when the storage read failed, so destructive provider edits are treated as if there were no saved conversations to protect.

🛡️ Keep the guard in a non-ready state on read failure
     Browser.storage.local
       .get('sessions')
       .then(({ sessions }) => {
         updateSessions(sessions)
       })
       .catch(() => {
         if (!isMounted) return
-        setSessionsLoaded(true)
+        setSessions([])
+        // Keep `sessionsLoaded` false here so provider delete/edit guards stay blocked.
+        // If needed, surface a dedicated load-error message instead of treating this as "empty".
       })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/popup/sections/ApiModes.jsx` around lines 121 - 129, The catch block for
Browser.storage.local.get('sessions') incorrectly sets sessionsLoaded to true on
failure; remove or change the setSessionsLoaded(true) in the catch so that a
failed read does not mark sessions as loaded. In the component (ApiModes.jsx)
adjust the promise.catch handler (the block referencing isMounted and
setSessionsLoaded) to either only log the error or set an error flag but do NOT
call setSessionsLoaded(true); keep the existing updateSessions call in the .then
branch and preserve the isMounted check before any state updates so the
saved-conversation guard remains closed on read failure.

Comment on lines +62 to +105
async function checkBilling(apiKey, apiUrl) {
  const now = new Date()
  let startDate = new Date(now - 90 * 24 * 60 * 60 * 1000)
  const endDate = new Date(now.getTime() + 24 * 60 * 60 * 1000)
  const subDate = new Date(now)
  subDate.setDate(1)

  const urlSubscription = `${apiUrl}/v1/dashboard/billing/subscription`
  let urlUsage = `${apiUrl}/v1/dashboard/billing/usage?start_date=${formatDate(
    startDate,
  )}&end_date=${formatDate(endDate)}`
  const headers = {
    Authorization: `Bearer ${apiKey}`,
    'Content-Type': 'application/json',
  }

  try {
    let response = await fetch(urlSubscription, { headers })
    if (!response.ok) {
      console.log('Your account has been suspended. Please log in to OpenAI to check.')
      return [null, null, null]
    }
    const subscriptionData = await response.json()
    const totalAmount = subscriptionData.hard_limit_usd

    if (totalAmount > 20) {
      startDate = subDate
    }

    urlUsage = `${apiUrl}/v1/dashboard/billing/usage?start_date=${formatDate(
      startDate,
    )}&end_date=${formatDate(endDate)}`

    response = await fetch(urlUsage, { headers })
    const usageData = await response.json()
    const totalUsage = usageData.total_usage / 100
    const remaining = totalAmount - totalUsage

    return [totalAmount, totalUsage, remaining]
  } catch (error) {
    console.error(error)
    return [null, null, null]
  }
}
🛠️ Refactor suggestion | 🟠 Major

Move the billing transport into src/services/apis/.

checkBilling() and the balance fetches in getBalance() add new OpenAI-compatible HTTP calls directly in a popup component. That duplicates provider transport logic in the UI layer and makes these endpoints harder to audit alongside the rest of the provider stack.

As per coding guidelines, "Centralize network/API logic under src/services/apis/ and keep endpoints auditable".

Also applies to: 444-479
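The extraction could look like the following minimal sketch of a module under src/services/apis/ (the file name, the `formatDate` shape, and the simplified flow — it omits the original's month-window fallback for accounts over $20 — are all assumptions). Keeping URL and header construction pure makes the endpoints auditable and testable without network access:

```javascript
// Hypothetical src/services/apis/billing.mjs — transport moved out of the popup.

function formatDate(date) {
  // Assumed YYYY-MM-DD format, matching the dashboard billing endpoints.
  return date.toISOString().slice(0, 10)
}

// Pure request construction: the part worth auditing and unit-testing.
function buildBillingRequest(apiKey, apiUrl, startDate, endDate) {
  const query = `start_date=${formatDate(startDate)}&end_date=${formatDate(endDate)}`
  return {
    subscriptionUrl: `${apiUrl}/v1/dashboard/billing/subscription`,
    usageUrl: `${apiUrl}/v1/dashboard/billing/usage?${query}`,
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
  }
}

async function fetchBillingInfo(apiKey, apiUrl) {
  const now = new Date()
  const startDate = new Date(now.getTime() - 90 * 24 * 60 * 60 * 1000)
  const endDate = new Date(now.getTime() + 24 * 60 * 60 * 1000)
  const { subscriptionUrl, usageUrl, headers } = buildBillingRequest(
    apiKey,
    apiUrl,
    startDate,
    endDate,
  )
  try {
    const subResponse = await fetch(subscriptionUrl, { headers })
    if (!subResponse.ok) return { totalAmount: null, totalUsage: null, remaining: null }
    const { hard_limit_usd: totalAmount } = await subResponse.json()
    const usageResponse = await fetch(usageUrl, { headers })
    const { total_usage: rawUsage } = await usageResponse.json()
    const totalUsage = rawUsage / 100
    return { totalAmount, totalUsage, remaining: totalAmount - totalUsage }
  } catch {
    return { totalAmount: null, totalUsage: null, remaining: null }
  }
}
```

The UI then only consumes the returned `{ totalAmount, totalUsage, remaining }` object and never builds billing URLs itself.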

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/popup/sections/GeneralPart.jsx` around lines 62 - 105, The checkBilling
function (and similar balance fetch logic used by getBalance) should be moved
out of the popup UI into a new or existing module under src/services/apis/;
create an API service function (e.g., export async function
fetchBillingInfo(apiKey, apiUrl)) that contains the HTTP calls, header
construction, error handling, date logic (startDate/endDate/subDate) and returns
{ totalAmount, totalUsage, remaining } or nulls on error, then import and call
that service from GeneralPart.jsx (replace direct calls to checkBilling/fetch
with the new service). Ensure the new service reuses formatDate if needed,
centralizes the Authorization header creation, and preserves existing
response/error behavior so UI code only deals with the returned values.

Comment on lines +163 to +172
  const resolvedOpenAiApiUrl =
    selectedProvider?.baseUrl || config.customOpenAiApiUrl || 'https://api.openai.com'
  const isSelectedProviderKeyManagedByModeOverride = hasSelectedModeOwnProviderSecretOverride(
    config,
    selectedProviderId,
  )
  const shouldShowOpenAIBalanceControls = shouldOpenOpenAIUsagePage(
    selectedProviderId,
    selectedSourceProviderId,
  )
⚠️ Potential issue | 🟠 Major

Gate OpenAI balance links by the resolved billing host.

shouldShowOpenAIBalanceControls only looks at provider IDs. When customOpenAiApiUrl points at a proxy or self-hosted OpenAI-compatible endpoint, the popup still renders the OpenAI balance affordance, probes /dashboard/billing/* on that host, and then falls back to the OpenAI account page. This should stay hidden unless the resolved billing base is actually the OpenAI platform.

🔧 Host-aware gating
+  const isDefaultOpenAIBillingHost =
+    normalizeBillingApiBaseUrl(resolvedOpenAiApiUrl) === 'https://api.openai.com'
   const shouldShowOpenAIBalanceControls =
-    shouldOpenOpenAIUsagePage(
-      selectedProviderId,
-      selectedSourceProviderId,
-    )
+    isDefaultOpenAIBillingHost &&
+    shouldOpenOpenAIUsagePage(selectedProviderId, selectedSourceProviderId)

Also applies to: 444-479
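A strict host check could be sketched as below. This is one possible implementation, not the project's existing `normalizeBillingApiBaseUrl`; it deliberately requires an exact `api.openai.com` hostname over HTTPS rather than a suffix match, so look-alike hosts and proxies never get the affordance:

```javascript
// Returns true only when the resolved billing base is the official OpenAI
// platform endpoint. Unparsable URLs are treated as non-OpenAI.
function isOfficialOpenAIBillingHost(apiUrl) {
  try {
    const { protocol, hostname } = new URL(apiUrl)
    return protocol === 'https:' && hostname === 'api.openai.com'
  } catch {
    return false
  }
}
```

Gating the balance controls on this predicate in addition to the provider-ID check keeps the `/dashboard/billing/*` probes off proxies and self-hosted endpoints.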

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/popup/sections/GeneralPart.jsx` around lines 163 - 172, The OpenAI
balance control is shown based only on provider IDs; change the gating to also
verify the resolved billing host is the official OpenAI platform by using
resolvedOpenAiApiUrl. Update the logic that computes
shouldShowOpenAIBalanceControls (and any other occurrences around the second
block at lines ~444-479) to call shouldOpenOpenAIUsagePage(...) AND validate
resolvedOpenAiApiUrl (from selectedProvider?.baseUrl ||
config.customOpenAiApiUrl || 'https://api.openai.com') resolves to the OpenAI
host (e.g., parse the URL and check hostname === 'api.openai.com' or
hostname.endsWith('.openai.com') / other canonical OpenAI host checks). Ensure
you reference resolvedOpenAiApiUrl and
selectedProviderId/selectedSourceProviderId when changing the condition so the
balance link is hidden for proxies/self-hosted endpoints.

Comment on lines +283 to +349
      const nextConfigUpdate = buildCleanupConfigUpdate(
        cleanupCandidateProviderId,
        config,
        configUpdate,
        updatedSessions,
      )
      if (
        !isOverrideCommitCurrent(
          commitGeneration,
          overrideCommitGenerationRef.current,
          commitSelectionSignature,
          overrideCommitSelectionSignatureRef.current,
        )
      ) {
        return
      }

      if (updatedSessions !== latestSessions) {
        try {
          await Browser.storage.local.set({ sessions: updatedSessions })
        } catch (error) {
          console.error('[popup] Failed to persist migrated sessions for provider override', error)
          return
        }
      }

      const rollbackMigratedSessions = async (message, error) => {
        if (updatedSessions === latestSessions || !sessionMigration) return
        if (error) {
          console.error(message, error)
        } else {
          console.error(message)
        }

        const currentSessionsResult = await loadLatestSessions()
        if (!currentSessionsResult.ok) {
          console.error(
            '[popup] Failed to reload sessions for provider override selective rollback',
          )
          return
        }
        const rolledBackSessions = rollbackProviderSecretOverrideSessionMigration(
          currentSessionsResult.sessions,
          latestSessions,
          sessionMigration,
        )
        if (rolledBackSessions === currentSessionsResult.sessions) return
        try {
          await Browser.storage.local.set({ sessions: rolledBackSessions })
        } catch (rollbackError) {
          console.error(
            '[popup] Failed to persist selective rollback for provider override sessions',
            rollbackError,
          )
        }
      }

      const shouldPreserveCurrentSelection = !isOverrideCommitCurrent(
        commitGeneration,
        overrideCommitGenerationRef.current,
        commitSelectionSignature,
        overrideCommitSelectionSignatureRef.current,
      )
      const finalConfigUpdate = buildSelectionPreservingConfigUpdate(
        nextConfigUpdate,
        shouldPreserveCurrentSelection,
      )
⚠️ Potential issue | 🟠 Major

Don't clean up providers after the selection has changed.

If the user switches modes after the second isOverrideCommitCurrent(...) check, buildCleanupConfigUpdate() has already decided whether to delete cleanupCandidateProviderId using the old config.apiMode. buildSelectionPreservingConfigUpdate() only strips apiMode, so the stale cleanup can still be persisted and orphan the newly selected mode/provider secret.

🛠️ Safer sequencing
-      const nextConfigUpdate = buildCleanupConfigUpdate(
-        cleanupCandidateProviderId,
-        config,
-        configUpdate,
-        updatedSessions,
-      )
       if (
         !isOverrideCommitCurrent(
           commitGeneration,
@@
-      const shouldPreserveCurrentSelection = !isOverrideCommitCurrent(
-        commitGeneration,
-        overrideCommitGenerationRef.current,
-        commitSelectionSignature,
-        overrideCommitSelectionSignatureRef.current,
-      )
-      const finalConfigUpdate = buildSelectionPreservingConfigUpdate(
-        nextConfigUpdate,
-        shouldPreserveCurrentSelection,
-      )
+      if (
+        !isOverrideCommitCurrent(
+          commitGeneration,
+          overrideCommitGenerationRef.current,
+          commitSelectionSignature,
+          overrideCommitSelectionSignatureRef.current,
+        )
+      ) {
+        await rollbackMigratedSessions(
+          '[popup] Selection changed during provider override commit; attempting selective session rollback',
+        )
+        return
+      }
+
+      const nextConfigUpdate = buildCleanupConfigUpdate(
+        cleanupCandidateProviderId,
+        config,
+        configUpdate,
+        updatedSessions,
+      )
+      const finalConfigUpdate = buildSelectionPreservingConfigUpdate(nextConfigUpdate)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/popup/sections/GeneralPart.jsx` around lines 283 - 349, The cleanup
decision uses a stale apiMode because buildCleanupConfigUpdate(...) is called
before the second isOverrideCommitCurrent(...) check and before
buildSelectionPreservingConfigUpdate(...); reorder so the cleanup is based on
the final selection: compute shouldPreserveCurrentSelection via
isOverrideCommitCurrent(...) first (using commitGeneration/overrideCommit refs
and commitSelectionSignature refs), then call
buildCleanupConfigUpdate(cleanupCandidateProviderId, config, configUpdate,
updatedSessions) only after you know whether selection is preserved, or
alternatively recompute nextConfigUpdate by calling buildCleanupConfigUpdate
again after obtaining finalConfigUpdate from
buildSelectionPreservingConfigUpdate(...); ensure cleanupCandidateProviderId
removal logic uses the final apiMode/selection state so you don't orphan the
newly selected provider secret.

Comment on lines +19 to 29
  await generateAnswersWithOpenAICompatible({
    port,
    question,
    session,
    endpointType: 'chat',
    requestUrl: apiUrl,
    model: modelName,
    apiKey,
    provider: 'custom',
    allowLegacyResponseField: true,
  })
⚠️ Potential issue | 🟠 Major

Ensure the shared OpenAI-compatible path always cleans up port listeners.

This call now relies entirely on generateAnswersWithOpenAICompatible, and the shared implementation only removes its abort listeners from onEnd/onError. If fetchSSE throws before either callback runs, those listeners stay attached to the port and accumulate across retries.

💡 Suggested fix in src/services/apis/openai-compatible-core.mjs
const { controller, cleanController, messageListener, disconnectListener } =
  setAbortController(port)

try {
  await fetchSSE(/* existing request */)
} finally {
  cleanController()
}
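The pattern can be made concrete with a self-contained sketch. The port object and listener wiring below are simplified stand-ins for the extension's real `setAbortController`; the point is only that `finally` guarantees listener removal even when the fetch throws before any callback fires:

```javascript
// Minimal stand-in for the extension's abort-controller helper: attaches
// listeners to a port-like object and returns a cleanup function.
function setAbortController(port) {
  const controller = new AbortController()
  const messageListener = (msg) => {
    if (msg?.stop) controller.abort()
  }
  const disconnectListener = () => controller.abort()
  port.onMessage.addListener(messageListener)
  port.onDisconnect.addListener(disconnectListener)
  const cleanController = () => {
    port.onMessage.removeListener(messageListener)
    port.onDisconnect.removeListener(disconnectListener)
  }
  return { controller, cleanController }
}

// Wrap the streaming call so cleanup runs on every exit path,
// including a throw before onEnd/onError ever run.
async function runWithCleanup(port, doFetch) {
  const { controller, cleanController } = setAbortController(port)
  try {
    await doFetch(controller.signal)
  } finally {
    cleanController() // runs even when doFetch throws
  }
}
```

Applying the same try/finally shape around `fetchSSE` in the shared core means no retry can accumulate stale listeners on the port.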
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/apis/custom-api.mjs` around lines 19 - 29, The shared
OpenAI-compatible flow can leak port listeners if fetchSSE throws before
onEnd/onError run; update the code that calls setAbortController (referencing
setAbortController, controller, cleanController, messageListener,
disconnectListener) so that after creating the controller you wrap the fetchSSE
call in a try/finally and always call cleanController() in the finally block
(ensuring messageListener/disconnectListener are removed even when fetchSSE
throws); apply this change in the openai-compatible implementation used by
generateAnswersWithOpenAICompatible to guarantee listeners are cleaned on all
code paths.

Comment on lines 151 to +170
export function isApiModeSelected(apiMode, configOrSession) {
  return configOrSession.apiMode
    ? JSON.stringify(configOrSession.apiMode, Object.keys(configOrSession.apiMode).sort()) ===
        JSON.stringify(apiMode, Object.keys(apiMode).sort())
    : configOrSession.modelName === apiModeToModelName(apiMode)
  const normalizeForCompare = (value) => {
    const normalized = normalizeApiMode(value)
    if (!normalized) return null
    return JSON.stringify({
      groupName: normalized.groupName,
      itemName: normalized.itemName,
      isCustom: normalized.isCustom,
      customName: normalized.customName,
      providerId: normalized.providerId,
      active: normalized.active,
    })
  }
  if (!configOrSession.apiMode) {
    return configOrSession.modelName === apiModeToModelName(apiMode)
  }
  const selectedApiMode = normalizeForCompare(configOrSession.apiMode)
  const targetApiMode = normalizeForCompare(apiMode)
  if (!selectedApiMode || !targetApiMode) return false
  return selectedApiMode === targetApiMode
⚠️ Potential issue | 🟠 Major

active should not be part of API mode identity.

isApiModeSelected() is used for state updates as well as rendering. Once active participates in the compare key, the same mode stops matching itself as soon as it is toggled on or off, so flows like disabling the selected mode in src/popup/sections/ApiModes.jsx:140-150 or clearing the selected mode's provider key in src/popup/sections/provider-secret-utils.mjs:250-365 can miss the target entry.

🎯 Narrower identity check
   const normalizeForCompare = (value) => {
     const normalized = normalizeApiMode(value)
     if (!normalized) return null
     return JSON.stringify({
       groupName: normalized.groupName,
       itemName: normalized.itemName,
       isCustom: normalized.isCustom,
       customName: normalized.customName,
       providerId: normalized.providerId,
-      active: normalized.active,
     })
   }
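The narrower identity can be isolated into a small helper, sketched here for illustration (the real code would keep routing through `normalizeApiMode`; this version destructures the already-normalized fields directly):

```javascript
// Identity key for an API mode: structural fields only, no mutable flags.
// Two modes that differ only in `active` produce the same key.
function apiModeIdentityKey(mode) {
  if (!mode) return null
  const { groupName, itemName, isCustom, customName, providerId } = mode
  return JSON.stringify({ groupName, itemName, isCustom, customName, providerId })
}
```

With identity stable across toggles, the disable flow and the provider-key cleanup can both locate the entry they just changed.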
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/model-name-convert.mjs` around lines 151 - 170, The identity
comparison in isApiModeSelected() wrongly includes the "active" property,
causing a mode to fail matching itself when toggled; update normalizeForCompare
(used by isApiModeSelected and normalizeApiMode) to omit "active" from the JSON
key so identity only uses groupName, itemName, isCustom, customName and
providerId (i.e., remove active from the returned object before JSON.stringify);
keep the rest of the logic unchanged so selectedApiMode and targetApiMode
continue to be compared by the narrower identity.
