Unify OpenAI-compatible providers and custom provider flow #930
PeterDaveHello wants to merge 2 commits into ChatGPTBox-dev:master
Conversation
📝 Walkthrough

Consolidates per-provider adapters into a unified OpenAI-compatible provider registry and streaming core; adds a provider catalog, providerSecrets, migration and provider-management UI, and apiMode normalization; and routes background execution through the new OpenAI-compatible generation path.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Port as UI Port
    participant BG as Background
    participant Registry as Provider Registry
    participant Core as OpenAI-Compatible Core
    participant Ext as External Provider/API
    Port->>BG: request generation (question, session)
    BG->>Registry: resolveOpenAICompatibleRequest(config, session)
    Registry-->>BG: { providerId, requestUrl, apiKey, endpointType, model }
    BG->>Core: generateAnswersWithOpenAICompatible({port, question, session, endpointType, requestUrl, model, apiKey})
    Core->>Ext: open SSE/HTTP stream (requestUrl, headers, body)
    Ext-->>Core: SSE stream (partial tokens/events)
    Core->>Port: post interim answers ({answer, done:false})
    Ext-->>Core: final event / [DONE]
    Core->>Port: post final ({answer:null, done:true, session})
    Core->>BG: persist Q/A record
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks: 2 passed ✅, 1 failed ❌ (warning)
Summary of Changes

Hello @PeterDaveHello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly refactors the extension's architecture for managing AI API providers. By unifying OpenAI-compatible API interactions and introducing a comprehensive provider registry with robust configuration migration, the changes aim to improve maintainability, simplify the addition of new providers, and enhance the user experience for custom API configurations. This also ensures better consistency in how API keys and endpoints are handled across the application.
Code Review
This pull request significantly refactors the API mode handling by unifying OpenAI-compatible providers and introducing a custom provider flow. The changes centralize API logic, improve configuration management through schema versioning and migration, and enhance the UI for custom provider creation and selection. The refactoring reduces code duplication and sets a solid foundation for future extensibility. Overall, the changes are well-structured and address key areas for maintainability and user experience.
I am having trouble creating individual review comments, so my feedback is included inline below.
src/background/index.mjs (437-458)
The removal of the isUsingCustomModel(session) block and its associated generateAnswersWithCustomApi calls is a direct result of the unification efforts. This simplifies the executeApi function by delegating custom API handling to the new generateAnswersWithOpenAICompatibleApi function, reducing conditional logic and improving code clarity.
src/background/index.mjs (566-568)
The introduction of isUsingOpenAICompatibleApiSession(session) and the call to generateAnswersWithOpenAICompatibleApi consolidates the logic for various OpenAI-compatible providers. This is a core part of the unification, making the executeApi function more streamlined and extensible. All previously separate API calls for OpenAI, Moonshot, ChatGLM, DeepSeek, Ollama, OpenRouter, and AIML are now routed through this single entry point.
src/config/index.mjs (550-556)
The addition of providerId, customOpenAIProviders, providerSecrets, and configSchemaVersion to the defaultConfig is crucial for the new configuration schema and migration system. providerId allows custom API modes to link to specific providers, while customOpenAIProviders and providerSecrets manage custom provider details and their API keys. configSchemaVersion enables future-proof migration logic.
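An illustrative sketch of what those four additions to defaultConfig might look like. Only the field names come from this PR; the default values shown here are assumptions:

```javascript
// Illustrative sketch of the new config fields described above.
// The four field names appear in the PR; the defaults are assumptions.
const defaultConfigAdditions = {
  providerId: '', // links a custom API mode to a provider entry
  customOpenAIProviders: [], // user-defined OpenAI-compatible providers
  providerSecrets: {}, // map of providerId -> API key
  configSchemaVersion: 1, // bumped whenever the config shape changes
}
```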
src/config/index.mjs (729-733)
The getUserConfig function has been updated to incorporate the migrateUserConfig logic. This ensures that any legacy configurations are automatically migrated to the new schema upon loading, providing backward compatibility and consistency across user settings. The dirty flag correctly triggers a save if migration occurs.
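The migrate-on-load pattern described above can be sketched as follows. Only getUserConfig, migrateUserConfig, and the dirty flag are named in the PR; the migration body and the save callback here are simplified assumptions:

```javascript
// Sketch of migrate-on-load: migrate the stored config, and persist it
// only when the migration actually changed something (dirty flag).
function migrateUserConfig(config) {
  let dirty = false
  const migrated = { ...config }
  if (!migrated.configSchemaVersion) {
    migrated.configSchemaVersion = 1
    migrated.providerSecrets = migrated.providerSecrets || {}
    dirty = true
  }
  return { config: migrated, dirty }
}

function getUserConfig(storedConfig, save) {
  const { config, dirty } = migrateUserConfig(storedConfig)
  if (dirty) save(config) // write back only after a real migration
  return config
}
```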
src/popup/sections/ApiModes.jsx (31)
The addition of providerId to defaultApiMode is necessary to support the new provider registry system. This allows API modes to be explicitly linked to a specific provider, whether built-in or custom, enabling more granular control and clearer configuration.
src/popup/sections/ApiModes.jsx (98)
The useLayoutEffect now includes config.customOpenAIProviders as a dependency. This ensures that the UI re-renders and updates the list of custom providers whenever the underlying configuration for custom OpenAI providers changes, maintaining UI consistency with the backend data.
src/popup/sections/ApiModes.jsx (121-134)
The persistApiMode function is a new utility that centralizes the logic for saving API mode configurations. It handles updating customApiModes and optionally customOpenAIProviders, and ensures that the currently selected API mode is also updated if it's being edited. This improves code organization and reduces duplication.
src/popup/sections/ApiModes.jsx (136-195)
The onSaveEditing function has been completely refactored to support the new custom provider creation and selection flow. It now handles the creation of new providers, assigns provider IDs, and manages the clearing of API keys when switching providers. This is a critical piece of the custom provider workflow, ensuring data integrity and proper linking between API modes and providers.
src/popup/sections/ApiModes.jsx (267-288)
This new UI section allows users to select an existing custom provider or create a new one. The providerSelector state manages the selection, and the options are dynamically populated from customProviders. This is a key enhancement for the custom provider workflow, making it user-friendly.
src/popup/sections/ApiModes.jsx (290-305)
This new block provides input fields for creating a new custom provider, including its name and base URL. This directly supports the custom provider creation flow, allowing users to define new OpenAI-compatible endpoints directly within the UI.
src/popup/sections/GeneralPart.jsx (12)
The removal of isUsingOpenAiApiModel and isUsingChatGLMApiModel from imports reflects the consolidation of these checks into the new isUsingOpenAICompatibleProvider utility. This simplifies the import list and centralizes the logic for identifying OpenAI-compatible API usage.
src/popup/sections/GeneralPart.jsx (96-105)
The LEGACY_API_KEY_FIELD_BY_PROVIDER_ID constant maps provider IDs to their legacy API key field names in the configuration. This is essential for the migration and unified secret management, allowing the system to correctly identify and update API keys for various providers.
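Since this map and its inverse in src/config/index.mjs express the same relationship, one could be derived from the other. A sketch, assuming a few provider IDs from the PR (the exact legacy field names are illustrative):

```javascript
// One source of truth: providerId -> legacy config field name.
// Field names here are illustrative, not confirmed from the PR diff.
const LEGACY_API_KEY_FIELD_BY_PROVIDER_ID = {
  openai: 'apiKey',
  chatglm: 'chatglmApiKey',
  deepseek: 'deepSeekApiKey',
}

// The reverse map (legacy field name -> providerId) derived mechanically,
// so the two directions can never drift apart.
const LEGACY_SECRET_KEY_TO_PROVIDER_ID = Object.fromEntries(
  Object.entries(LEGACY_API_KEY_FIELD_BY_PROVIDER_ID).map(([id, field]) => [field, id]),
)
```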
src/popup/sections/GeneralPart.jsx (107-181)
The buildProviderSecretUpdate function is a critical new utility for managing API keys. It handles updating providerSecrets and also ensures that legacy API key fields are updated for backward compatibility. Furthermore, it propagates API key changes to customApiModes and the apiMode if they are currently selected, ensuring consistency across the application. This function is vital for the new unified secret management system.
src/popup/sections/GeneralPart.jsx (197-203)
These new state variables and derived values (selectedProviderRequest, selectedProviderId, selectedProvider, selectedProviderApiKey, isUsingOpenAICompatibleProvider) are crucial for dynamically displaying and managing API keys for the currently selected OpenAI-compatible provider. They leverage the new provider-registry.mjs to resolve provider information.
src/popup/sections/GeneralPart.jsx (206-217)
The getBalance function has been updated to use the selectedProviderApiKey and openAiApiUrl derived from the new provider registry. This ensures that balance checks are performed against the correct API key and endpoint for the currently active OpenAI-compatible provider, centralizing the logic and making it more robust.
src/popup/sections/GeneralPart.jsx (275)
The condition isUsingOpenAICompatibleProvider replaces multiple individual checks for OpenAI-compatible models. This simplifies the conditional rendering logic for the API key input field, making it more concise and easier to understand.
src/popup/sections/GeneralPart.jsx (326-356)
This block now dynamically renders the API key input and balance check button based on whether the selected model uses an OpenAI-compatible provider (isUsingOpenAICompatibleProvider). The selectedProviderApiKey and selectedProviderId are used to manage the input value and the conditional rendering of the 'Get'/'Balance' buttons, centralizing API key management.
src/services/apis/custom-api.mjs (1)
The import statement has been simplified to only include generateAnswersWithOpenAICompatible. This is a direct consequence of unifying the OpenAI-compatible API execution into a shared core module, reducing the number of individual API service imports.
src/services/apis/custom-api.mjs (19-29)
The generateAnswersWithCustomApi function has been refactored to delegate its logic to the new generateAnswersWithOpenAICompatible function. This significantly reduces code duplication and centralizes the core logic for handling OpenAI-compatible API requests, including custom ones. The allowLegacyResponseField: true is important for maintaining backward compatibility with custom APIs that might use a response field.
src/services/apis/openai-api.mjs (3-4)
The import of generateAnswersWithOpenAICompatible and resolveOpenAICompatibleRequest from the new core and provider registry modules is key to unifying API handling. This allows openai-api.mjs to leverage the shared logic instead of duplicating it.
src/services/apis/openai-api.mjs (6-8)
The normalizeBaseUrl function is a useful utility to ensure consistency in API endpoint URLs by trimming trailing slashes. This helps prevent issues with URL construction and improves the robustness of API requests.
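One plausible implementation of the trailing-slash trim described above (a sketch, not the PR's exact code):

```javascript
// Trim any run of trailing slashes so path joining never doubles them.
function normalizeBaseUrl(url) {
  return String(url || '').replace(/\/+$/, '')
}
```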
src/services/apis/openai-api.mjs (10-21)
The resolveModelName function centralizes the logic for determining the actual model name to be used in API requests, especially for custom models and API modes. This ensures that the correct model identifier is passed to the unified API handler.
src/services/apis/openai-api.mjs (24-40)
The touchOllamaKeepAlive function is a new addition specifically for Ollama providers. It sends a small request to the Ollama endpoint to keep the model loaded, which is important for performance and responsiveness. This demonstrates how provider-specific logic can be integrated while still using a unified API flow.
src/services/apis/openai-api.mjs (48-58)
The generateAnswersWithGptCompletionApi function has been refactored to use the new generateAnswersWithOpenAICompatible core function. This significantly reduces code duplication and centralizes the logic for handling completion-type API requests, improving maintainability.
src/services/apis/openai-api.mjs (69-70)
The baseUrl for generateAnswersWithChatgptApiCompat now uses normalizeBaseUrl, ensuring consistent URL formatting. This is a minor but good improvement for robustness.
src/services/apis/openai-api.mjs (89-99)
The generateAnswersWithChatgptApiCompat function has been refactored to use the new generateAnswersWithOpenAICompatible core function. This change eliminates duplicated SSE fetching logic and centralizes the handling of chat-completion type API requests, making the code cleaner and more maintainable.
src/services/apis/openai-api.mjs (109-133)
This new function, generateAnswersWithOpenAICompatibleApi, serves as the unified entry point for all OpenAI-compatible providers. It resolves the correct provider, endpoint type, URL, API key, and model using the new provider registry, then delegates to generateAnswersWithOpenAICompatible. This is the cornerstone of the PR's unification efforts, drastically simplifying the executeApi logic in background/index.mjs.
src/services/apis/openai-compatible-core.mjs (8-14)
The buildHeaders function centralizes the logic for constructing HTTP headers, including the Authorization header with the API key. This promotes consistency and reusability across all OpenAI-compatible API requests.
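A sketch of such a header builder under the assumptions above. Note the spread order: an Authorization entry passed via extraHeaders is overwritten when apiKey is provided (a nitpick later in this thread suggests documenting that precedence):

```javascript
// Build request headers; an explicit apiKey deliberately wins over any
// Authorization value supplied in extraHeaders (spread order matters).
function buildHeaders(apiKey, extraHeaders = {}) {
  return {
    'Content-Type': 'application/json',
    ...extraHeaders,
    ...(apiKey ? { Authorization: `Bearer ${apiKey}` } : {}),
  }
}
```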
src/services/apis/openai-compatible-core.mjs (17-28)
The buildMessageAnswer function encapsulates the logic for extracting the answer content from different API response formats (delta, content, text, and legacy response field). This abstraction makes the SSE onMessage handler cleaner and more adaptable to various provider responses.
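The extraction order described above (delta, then content, then text, then the legacy response field) can be sketched like this; the PR's exact precedence and payload shapes may differ:

```javascript
// Pull the answer text out of one SSE payload, trying the known
// OpenAI-compatible shapes in order. Shapes are assumptions here.
function buildMessageAnswer(data, { allowLegacyResponseField = false } = {}) {
  const choice = data?.choices?.[0]
  if (choice?.delta?.content != null) return choice.delta.content // chat stream chunk
  if (choice?.message?.content != null) return choice.message.content // non-stream chat
  if (choice?.text != null) return choice.text // completion endpoint
  if (allowLegacyResponseField && data?.response != null) return data.response // legacy custom APIs
  return ''
}
```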
src/services/apis/openai-compatible-core.mjs (31-33)
The hasFinished function provides a clear and concise way to determine if an API response indicates the end of a stream, based on the finish_reason field. This improves readability in the SSE onMessage handler.
src/services/apis/openai-compatible-core.mjs (49-159)
This new generateAnswersWithOpenAICompatible function is the core of the unified API handling. It abstracts away the details of constructing request bodies for both 'completion' and 'chat' endpoints, handling token parameters, and managing the SSE connection. This significantly reduces code duplication across different OpenAI-compatible API services and makes it easier to add new providers in the future.
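The per-endpoint body construction the core presumably performs can be sketched as below. The 'chat' and 'completion' endpoint types come from the PR; the parameter names and body shape here are assumptions:

```javascript
// Build a streaming request body for either endpoint type: completion
// endpoints take a `prompt` string, chat endpoints take a `messages` array.
function buildRequestBody({ endpointType, model, prompt, messages, maxTokens }) {
  const common = { model, stream: true, max_tokens: maxTokens }
  return endpointType === 'completion' ? { ...common, prompt } : { ...common, messages }
}
```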
src/services/apis/provider-registry.mjs (4-13)
The LEGACY_KEY_BY_PROVIDER_ID constant is crucial for the migration process, mapping older configuration keys to the new provider IDs. This ensures that existing user configurations are correctly interpreted and migrated to the unified secret management system.
src/services/apis/provider-registry.mjs (15-85)
The BUILTIN_PROVIDER_TEMPLATE defines a standardized structure for all built-in OpenAI-compatible providers. This template includes essential information like ID, name, base URLs, and paths, which is fundamental for the new provider registry and unified API handling. It also includes allowLegacyResponseField for backward compatibility.
src/services/apis/provider-registry.mjs (88-98)
The OPENAI_COMPATIBLE_GROUP_TO_PROVIDER_ID mapping is vital for translating legacy API mode group names into the new standardized provider IDs. This enables the system to correctly identify which provider corresponds to a given API mode, facilitating the migration and unification.
src/services/apis/provider-registry.mjs (101-134)
The getModelNamePresetPart and resolveProviderIdFromLegacyModelName functions are essential for backward compatibility. They allow the system to infer the correct provider ID from older, less structured model names, ensuring that existing user configurations continue to work seamlessly with the new provider registry.
src/services/apis/provider-registry.mjs (136-139)
The isLegacyCompletionModelName function helps identify models that historically used the 'completion' endpoint. This is important for correctly routing requests to the appropriate endpoint type within the unified API handler.
src/services/apis/provider-registry.mjs (141-157)
The toStringOrEmpty, trimSlashes, ensureLeadingSlash, and joinUrl utility functions provide robust and consistent string manipulation for URLs and paths. These are crucial for correctly constructing API endpoints, especially when dealing with user-provided custom URLs and paths.
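Plausible implementations of the four helpers named above (a sketch, not the PR code):

```javascript
// Small URL/path utilities: coerce to string, strip slashes at the edges,
// force exactly one leading slash, and join base + path without doubling.
const toStringOrEmpty = (value) => (value == null ? '' : String(value))
const trimSlashes = (s) => toStringOrEmpty(s).replace(/^\/+|\/+$/g, '')
const ensureLeadingSlash = (s) => '/' + trimSlashes(s)
const joinUrl = (base, path) =>
  toStringOrEmpty(base).replace(/\/+$/, '') + ensureLeadingSlash(path)
```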
src/services/apis/provider-registry.mjs (160-184)
The buildBuiltinProviders function dynamically constructs the list of built-in providers, incorporating user-specific configuration values like customOpenAiApiUrl and ollamaEndpoint. This ensures that built-in providers are correctly configured based on user settings.
src/services/apis/provider-registry.mjs (186-201)
The normalizeCustomProvider function ensures that custom provider configurations are consistently structured and validated. It assigns default values and normalizes fields like id, name, baseUrl, and paths, which is essential for reliable custom provider management.
src/services/apis/provider-registry.mjs (203-208)
The getCustomOpenAIProviders function retrieves and normalizes custom provider configurations from the user's settings. This provides a clean, consistent list of custom providers for use throughout the application.
src/services/apis/provider-registry.mjs (210-212)
The getAllOpenAIProviders function combines both built-in and custom providers into a single, comprehensive list. This unified list is then used by other functions to resolve provider details.
src/services/apis/provider-registry.mjs (215-228)
The resolveProviderIdForSession function is crucial for determining the correct provider ID based on the current session's API mode or model name. It handles both new API mode structures and legacy model names, ensuring that the appropriate provider is identified for each request.
src/services/apis/provider-registry.mjs (230-238)
The resolveEndpointTypeForSession function determines whether a session should use a 'chat' or 'completion' endpoint. This is important for correctly formatting the request body in the unified API handler, especially for legacy completion models.
src/services/apis/provider-registry.mjs (240-246)
The getProviderById function retrieves a provider's full configuration by its ID. It filters out disabled providers, ensuring that only active and valid providers are used.
src/services/apis/provider-registry.mjs (248-268)
The getProviderSecret function centralizes the logic for retrieving API keys. It prioritizes API keys specified directly in the apiMode, then checks the new providerSecrets map, and finally falls back to legacy API key fields. This ensures that the correct API key is used for each provider, supporting both new and old configurations.
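The three-level lookup described above can be sketched as follows; the legacy field map here is abbreviated and its field names are illustrative:

```javascript
// Abbreviated, illustrative map of providerId -> legacy config field.
const LEGACY_FIELD = { openai: 'apiKey', chatglm: 'chatglmApiKey' }

// Resolve an API key: per-mode override, then the unified secret store,
// then the legacy per-provider config field.
function getProviderSecret(config, provider, apiMode) {
  if (apiMode && apiMode.apiKey) return apiMode.apiKey // per-mode override wins
  const stored = config.providerSecrets && config.providerSecrets[provider.id]
  if (stored) return stored // new unified store
  const legacyField = LEGACY_FIELD[provider.id]
  return (legacyField && config[legacyField]) || '' // legacy fallback
}
```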
src/services/apis/provider-registry.mjs (270-301)
The resolveUrlFromProvider function constructs the full API request URL based on the provider's configuration and the endpoint type (chat or completion). It handles custom URLs specified in the API mode, as well as base URLs and paths defined in the provider template, ensuring accurate endpoint resolution.
src/services/apis/provider-registry.mjs (303-317)
The resolveOpenAICompatibleRequest function is the main entry point for obtaining all necessary information to make an OpenAI-compatible API request. It orchestrates the resolution of provider ID, provider details, endpoint type, request URL, and API key, providing a complete request object to the unified API handler.
src/services/init-session.mjs (5)
The import of normalizeApiMode is essential for ensuring that API mode objects are consistently structured and validated when a new session is initialized. This helps prevent unexpected behavior due to malformed API mode data.
src/services/init-session.mjs (75)
The apiMode property is now explicitly normalized using normalizeApiMode during session initialization. This ensures that all API mode objects conform to a consistent structure, which is critical for the new provider registry and unified API handling.
src/services/wrappers.mjs (13)
The import of normalizeApiMode is crucial here to ensure that any API mode object received or processed by the port listener is normalized. This guarantees consistency and prevents issues with potentially malformed API mode data.
src/services/wrappers.mjs (110)
The session.apiMode is now explicitly normalized using normalizeApiMode within the port listener. This ensures that any API mode passed into the executeApi function is in a consistent and expected format, which is vital for the new unified API handling logic.
src/utils/model-name-convert.mjs (81-93)
The new normalizeApiMode function ensures that API mode objects always have a consistent structure and default values. This is critical for reliable processing of API modes throughout the application, especially with the introduction of new fields like providerId and the migration logic.
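A defaulting normalizer along those lines might look like this. The providerId, apiKey, and customUrl fields appear in the PR; the other field names are assumptions about the apiMode shape:

```javascript
// Fill in defaults so every apiMode object has a predictable shape;
// pass non-objects through untouched.
function normalizeApiMode(apiMode) {
  if (!apiMode || typeof apiMode !== 'object') return apiMode
  return {
    groupName: '',
    itemName: '',
    providerId: '',
    apiKey: '',
    customUrl: '',
    ...apiMode, // existing values win over the defaults
  }
}
```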
src/utils/model-name-convert.mjs (97-98)
The apiModeToModelName function now explicitly normalizes the apiMode object at the beginning. This ensures that any subsequent logic in the function operates on a well-formed API mode object, improving robustness.
src/utils/model-name-convert.mjs (111-115)
The getApiModesFromConfig function now normalizes config.customApiModes before processing them. This ensures that custom API modes are always in a consistent format, which is important for accurate filtering and mapping.
src/utils/model-name-convert.mjs (132-133)
The logic for Ollama model names has been updated to explicitly set modelName to ollamaModel- followed by the configured Ollama model name. This ensures consistency in how Ollama models are identified within the system, aligning with the new provider registry.
src/utils/model-name-convert.mjs (139)
The normalizedCustomApiModes are now correctly filtered and included in the returned array. This ensures that all custom API modes, after normalization, are properly accounted for when retrieving the list of available API modes.
src/utils/model-name-convert.mjs (148-157)
The normalizeForCompare helper function is introduced to create a consistent string representation of API mode objects for comparison. This is crucial for accurately determining if an API mode is selected, especially with the addition of new fields like providerId and the need to ignore transient fields like apiKey and customUrl during comparison.
src/utils/model-name-convert.mjs (160)
The isApiModeSelected function now uses the normalizeForCompare helper to compare API mode objects. This ensures that comparisons are robust and accurate, preventing false negatives or positives due to differences in non-essential fields or object ordering.
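The comparison described in the last two entries can be sketched like this, assuming apiKey and customUrl are the transient fields to ignore:

```javascript
// Stable, key-order-independent fingerprint of an apiMode that ignores
// transient fields, so equality means "same mode" not "same secrets".
function normalizeForCompare(apiMode) {
  if (!apiMode || typeof apiMode !== 'object') return ''
  const rest = { ...apiMode }
  delete rest.apiKey // transient: not part of the mode's identity
  delete rest.customUrl
  return JSON.stringify(
    Object.keys(rest)
      .sort()
      .map((key) => [key, rest[key]]),
  )
}

function isApiModeSelected(apiMode, selected) {
  return normalizeForCompare(apiMode) === normalizeForCompare(selected)
}
```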
@codex review
Pull request overview
Copilot reviewed 12 out of 12 changed files in this pull request and generated no new comments.
Actionable comments posted: 1
♻️ Duplicate comments (4)
src/background/index.mjs (1)
8-8: ⚠️ Potential issue | 🟠 Major: Missing `.mjs` extension on the `openai-api` import.

This was flagged in a prior review and still needs the extension for runtime resolution consistency.

```diff
-import { generateAnswersWithOpenAICompatibleApi } from '../services/apis/openai-api'
+import { generateAnswersWithOpenAICompatibleApi } from '../services/apis/openai-api.mjs'
```
src/popup/sections/GeneralPart.jsx (1)
96-105: Duplication of provider-to-key mapping remains — already flagged in a prior review.

`LEGACY_API_KEY_FIELD_BY_PROVIDER_ID` here and `LEGACY_SECRET_KEY_TO_PROVIDER_ID` in `src/config/index.mjs` express the same mapping in opposite directions. A shared constants module would keep them in sync.
src/services/apis/openai-api.mjs (1)
24-40: Missing timeout on the `touchOllamaKeepAlive` fetch — can block indefinitely.

The `await` at line 129 will hang if the Ollama endpoint is unresponsive. While the `.catch()` prevents unhandled rejections, the promise itself never resolves until the TCP stack gives up. Add an `AbortController` with a reasonable timeout.

🛡️ Proposed fix

```diff
 async function touchOllamaKeepAlive(config, model, apiKey) {
+  const controller = new AbortController()
+  const timeout = setTimeout(() => controller.abort(), 5000)
   return fetch(`${normalizeBaseUrl(config.ollamaEndpoint)}/api/generate`, {
     method: 'POST',
+    signal: controller.signal,
     headers: {
       'Content-Type': 'application/json',
       ...(apiKey ? { Authorization: `Bearer ${apiKey}` } : {}),
     },
     body: JSON.stringify({
       model,
       prompt: 't',
       options: {
         num_predict: 1,
       },
       keep_alive: config.ollamaKeepAliveTime === '-1' ? -1 : config.ollamaKeepAliveTime,
     }),
-  })
+  }).finally(() => clearTimeout(timeout))
 }
```
src/services/apis/provider-registry.mjs (1)
15-86: OpenAI template paths include a `/v1` prefix while other providers' don't — potential `/v1` duplication.

The OpenAI provider template defines `chatCompletionsPath: '/v1/chat/completions'` (line 19), while the OpenAI `baseUrl` is set from `config.customOpenAiApiUrl` in `buildBuiltinProviders` (line 165). If a user has stored a base URL already ending in `/v1` (e.g., `https://api.openai.com/v1`), `joinUrl` will produce `https://api.openai.com/v1/v1/chat/completions`.

Strip a trailing `/v1` from the OpenAI base URL before storing, or unify the path convention across all providers.

🛠️ Proposed fix in buildBuiltinProviders

```diff
 if (provider.id === 'openai') {
+  const rawUrl = trimSlashes(config.customOpenAiApiUrl || 'https://api.openai.com')
   return {
     ...provider,
-    baseUrl: trimSlashes(config.customOpenAiApiUrl || 'https://api.openai.com'),
+    baseUrl: rawUrl.replace(/\/v1$/i, ''),
   }
 }
```
🧹 Nitpick comments (8)
src/config/index.mjs (2)
740-751: Minor: `aimlModelKeys` and `aimlApiModelKeys` both map to `'aiml'`.

`ModelGroups` at line 197 uses the key `aimlModelKeys` (not `aimlApiModelKeys`). The extra entry is harmless (belt-and-suspenders), but if it's intentional for forward-compatibility, a comment would help. Otherwise it's dead code.
796-1101: Migration logic is thorough, but consider extracting shared apiMode migration into a helper.

The migrations for `customApiModes` (lines 879-970) and `selectedApiMode` (lines 973-1086) share significant structural similarity — both normalize provider IDs, create providers from legacy custom URLs, and migrate API keys to `providerSecrets`. This isn't urgent since the migration runs once per config version bump, but extracting a shared `migrateApiModeEntry(apiMode, context)` helper would reduce the ~100 lines of duplication and lower the risk of one branch drifting from the other in future migrations.
src/services/apis/openai-compatible-core.mjs (2)
8-15: `extraHeaders` can be silently overridden when `apiKey` is provided.
If a caller passes `Authorization` inside `extraHeaders` and provides `apiKey`, the spread on line 11 sets `Authorization` first, then line 13 unconditionally overwrites it. This is likely the desired precedence, but it's implicit. A small comment or an explicit `delete` would prevent surprises for custom provider integrations that may need a non-Bearer auth header.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/apis/openai-compatible-core.mjs` around lines 8 - 15, The buildHeaders function currently allows an Authorization in extraHeaders to be silently overwritten when apiKey is provided; make this precedence explicit by removing any Authorization from extraHeaders before merging (e.g., delete extraHeaders.Authorization) when you intend apiKey to win, or alternatively check for an existing Authorization and only set headers.Authorization = `Bearer ${apiKey}` if none exists — update the buildHeaders function and add a brief comment clarifying the chosen precedence so custom providers aren’t surprised.
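A minimal sketch of the explicit-precedence version, assuming `buildHeaders` takes `(apiKey, extraHeaders)` roughly as the review describes:

```javascript
// Sketch: make the Authorization precedence explicit in buildHeaders.
// If apiKey is provided it wins; otherwise an Authorization header
// supplied via extraHeaders (e.g. non-Bearer auth) is preserved.
function buildHeaders(apiKey, extraHeaders = {}) {
  const headers = { 'Content-Type': 'application/json', ...extraHeaders }
  if (apiKey) {
    // apiKey intentionally overrides any Authorization from extraHeaders
    headers.Authorization = `Bearer ${apiKey}`
  }
  return headers
}
```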
109-117: Partial answers are silently discarded when the stream ends without a `finish_reason`.
When `onEnd` fires and `finished` is false (line 148), only `{ done: true }` is posted — `finish()` is never called, so `pushRecord` is skipped and the partial answer is lost. This appears intentional based on the corresponding test (emits fallback done message when stream ends without finish reason), but it means any accumulated `answer` text from intermediate deltas is dropped from the conversation history.
If this is by design (don't persist incomplete answers), a brief comment would help future readers understand the intent.

  async onEnd() {
-   if (!finished) port.postMessage({ done: true })
+   // Stream ended without a finish_reason — notify the UI but do not
+   // persist the partial answer into the conversation history.
+   if (!finished) port.postMessage({ done: true })

Also applies to: 146-151
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/apis/openai-compatible-core.mjs` around lines 109 - 117, The onEnd handler currently posts a fallback done message without calling finish(), so any accumulated partial answer in the answer variable is never persisted via pushRecord; update the onEnd logic to call finish() whenever finished is false (ensuring finish() uses pushRecord(session, question, answer) and sends the final port.postMessage with done:true and session) so partial deltas are saved, or if dropping partial answers is intentional add a concise comment near finish()/onEnd explaining the design decision; reference finish, pushRecord, answer, onEnd and port.postMessage when making the change.src/services/apis/provider-registry.mjs (3)
249-269: `getProviderSecret` cascade is well-layered but the final fallback (line 268) may surface stale keys.
The priority is: (1) `apiMode.apiKey` for `customApiModelKeys`, (2) `providerSecrets` map, (3) legacy config key, (4) `apiMode.apiKey` as last resort for all groups. The last fallback means an old `apiMode.apiKey` value embedded in a session could override a rotated key in `providerSecrets` or legacy config — but only if both are empty. This is a reasonable last-resort behavior; just document that the apiMode-embedded key is the weakest source.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/apis/provider-registry.mjs` around lines 249 - 269, getProviderSecret currently falls back to session.apiMode.apiKey as the weakest source after checking providerSecrets and legacy keys, which can surface stale embedded keys; update the function (getProviderSecret) to include a concise inline comment or docstring clarifying the priority order (apiMode.customApiModelKeys -> providerSecrets map -> LEGACY_KEY_BY_PROVIDER_ID lookup -> session.apiMode.apiKey as last resort) and explicitly note that the final fallback (session.apiMode.apiKey) is the weakest/least-authoritative source and may be stale so should not override rotated keys in providerSecrets or legacy config.
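The cascade described above can be sketched as follows. The field and map names (`providerSecrets`, `apiMode`, a legacy-key lookup table) are taken from the review; the real implementation may differ in shape.

```javascript
// Sketch of the documented secret-resolution priority:
// 1. apiMode.apiKey, but only for the customApiModelKeys group
// 2. the providerSecrets map keyed by providerId
// 3. the legacy per-provider config field
// 4. apiMode.apiKey as the weakest, last-resort source (may be stale)
function getProviderSecretSketch(config, session, legacyKeyByProviderId) {
  const apiMode = (session && session.apiMode) || {}
  const providerId = apiMode.providerId
  const embedded = String(apiMode.apiKey || '').trim()
  if (apiMode.groupName === 'customApiModelKeys' && embedded) return embedded
  const fromSecrets = String((config?.providerSecrets?.[providerId]) || '').trim()
  if (fromSecrets) return fromSecrets
  const legacyKey = legacyKeyByProviderId[providerId]
  const legacy = legacyKey ? String(config?.[legacyKey] || '').trim() : ''
  if (legacy) return legacy
  return embedded // weakest source: may be stale after a key rotation
}
```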
241-247: `getProviderById` silently returns `null` for disabled providers — callers should be aware.
Line 245 returns `null` if `provider.enabled === false`. This means `resolveOpenAICompatibleRequest` will return `null`, and the caller in `openai-api.mjs` will throw `'Unknown OpenAI-compatible provider configuration'`. The error message doesn't distinguish "provider not found" from "provider disabled," which could confuse users troubleshooting.
💡 Consider returning a richer error or distinct null reasons
One option is to have `getProviderById` return a status object, or have `resolveOpenAICompatibleRequest` produce a more descriptive error reason:

  export function resolveOpenAICompatibleRequest(config, session) {
    const providerId = resolveProviderIdForSession(session)
    if (!providerId) return null
    const provider = getProviderById(config, providerId)
-   if (!provider) return null
+   if (!provider) {
+     const allProviders = getAllOpenAIProviders(config)
+     const disabled = allProviders.find((p) => p.id === providerId && p.enabled === false)
+     if (disabled) return { error: `Provider '${providerId}' is disabled` }
+     return null
+   }

The caller would then need to check for `.error` — but this is optional depending on how much UX polish is desired.
Verify each finding against the current code and only fix it if needed. In `@src/services/apis/provider-registry.mjs` around lines 241 - 247, getProviderById currently returns null for both "not found" and "disabled" cases which makes callers like resolveOpenAICompatibleRequest unable to distinguish why resolution failed; change getProviderById to return a richer result (e.g. { provider } on success, { error: 'not_found' } or { error: 'disabled' } when appropriate) or throw a descriptive error, then update resolveOpenAICompatibleRequest (and callers such as openai-api.mjs) to handle the new shape/error and produce a clearer message like "provider not found" vs "provider disabled"; reference the getProviderById and resolveOpenAICompatibleRequest symbols when making these changes.
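The status-object idea can also be pushed into the lookup itself. This is a sketch with an assumed provider-list shape, not the registry's actual API:

```javascript
// Sketch: return a distinct status instead of a bare null, so callers
// can tell "not found" from "disabled" and report accordingly.
function getProviderStatus(providers, providerId) {
  const provider = providers.find((p) => p.id === providerId)
  if (!provider) return { error: 'not_found' }
  if (provider.enabled === false) return { error: 'disabled' }
  return { provider }
}
```

A caller can then surface "provider disabled" instead of the generic unknown-provider error.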
101-134: `resolveProviderIdFromLegacyModelName` — thorough legacy mapping, one subtle gap.
Line 130 uses `preset.startsWith('chatglm')`, which would also match a hypothetical model starting with `chatglm` that isn't actually ChatGLM-related. Given the bounded set of known model names in this extension, the practical risk is negligible, but it's worth noting the loose match.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/apis/provider-registry.mjs` around lines 101 - 134, The loose match in resolveProviderIdFromLegacyModelName uses preset.startsWith('chatglm') which can incorrectly match unrelated names; tighten the condition by matching the known ChatGLM tokens only (for example check preset === 'chatglm' OR preset.startsWith('chatglm_') OR preset === 'chatglmApiModelKeys') so it only identifies legitimate ChatGLM presets; update the condition near the other provider checks (see resolveProviderIdFromLegacyModelName and getModelNamePresetPart) to use these stricter comparisons.src/services/apis/openai-api.mjs (1)
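A sketch of the stricter match suggested in the prompt; the exact token names (`chatglm_`, `chatglmApiModelKeys`) come from the review text and are assumptions about the real preset set:

```javascript
// Sketch: match only known ChatGLM preset shapes instead of any
// string that merely starts with "chatglm".
function isChatGlmPreset(preset) {
  return (
    preset === 'chatglm' ||
    preset.startsWith('chatglm_') ||
    preset === 'chatglmApiModelKeys'
  )
}
```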
6-8: `normalizeBaseUrl` duplicates `trimSlashes` in provider-registry.mjs.
Both `normalizeBaseUrl` here and `trimSlashes` in `provider-registry.mjs` strip trailing slashes from a string. Consider importing `trimSlashes` (or exporting a shared helper) instead of maintaining a parallel implementation that could drift.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/apis/openai-api.mjs` around lines 6 - 8, Replace the duplicate trailing-slash logic in normalizeBaseUrl by reusing the existing trimSlashes helper: remove the local normalizeBaseUrl function and import trimSlashes (from the module that currently exports it) into this module, then call trimSlashes wherever normalizeBaseUrl was used; alternatively, if trimSlashes is not exported, export it from provider-registry.mjs and update imports so openai-api.mjs uses that shared helper instead of duplicating the implementation.
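A minimal shared helper along these lines, assuming `trimSlashes` only needs to strip trailing slashes (and, in this sketch, surrounding whitespace):

```javascript
// Sketch of a single shared trimSlashes helper that both modules
// could import, removing the duplicate normalizeBaseUrl logic.
function trimSlashes(value) {
  return String(value == null ? '' : value).trim().replace(/\/+$/, '')
}
```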
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/services/apis/provider-registry.mjs`:
- Around line 186-201: The fallback ID in normalizeCustomProvider currently uses
the unstable index-based `custom-provider-${index + 1}` which will change when
array order changes and break lookups (e.g., providerSecrets and
session.apiMode.providerId); change normalizeCustomProvider to produce a stable
fallback ID when provider.id is missing by deriving it deterministically (for
example, compute a short hash from stable fields like name + baseUrl) or by
using a persistent UUID assigned at provider creation time, and update any
creation flow so newly created providers get that stable id; reference
normalizeCustomProvider and ensure code paths that persist/lookup provider IDs
(providerSecrets, session.apiMode.providerId) use the new stable id scheme.
---
Duplicate comments:
In `@src/background/index.mjs`:
- Line 8: The import statement for generateAnswersWithOpenAICompatibleApi is
missing the .mjs extension; update the import in src/background/index.mjs to
reference '../services/apis/openai-api.mjs' (i.e., change the module path used
where generateAnswersWithOpenAICompatibleApi is imported) so the runtime can
resolve the module consistently.
In `@src/popup/sections/GeneralPart.jsx`:
- Around line 96-105: LEGACY_API_KEY_FIELD_BY_PROVIDER_ID duplicates the inverse
mapping LEGACY_SECRET_KEY_TO_PROVIDER_ID; extract the canonical mapping into a
new shared constants module (e.g., export LEGACY_API_KEY_FIELD_BY_PROVIDER_ID
from a central module), import that constant into GeneralPart.jsx and update
src/config's code to derive LEGACY_SECRET_KEY_TO_PROVIDER_ID from it (use
Object.entries/Object.fromEntries to invert the mapping), and remove the
hard-coded duplicate so both modules consume the single source of truth.
In `@src/services/apis/openai-api.mjs`:
- Around line 24-40: The fetch in touchOllamaKeepAlive can hang indefinitely;
modify touchOllamaKeepAlive to create an AbortController, pass controller.signal
to fetch, and set a timer (e.g., via setTimeout) to call controller.abort()
after a reasonable timeout (e.g., 5-10s), clearing the timer on completion;
ensure the fetch call uses the controller.signal and that any catch handles
AbortError gracefully so the calling await won't hang.
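The AbortController pattern described in that prompt can be sketched generically. The 8-second budget, the injectable `fetchImpl` parameter, and swallowing `AbortError` are all assumptions for illustration, not the extension's actual code:

```javascript
// Sketch: bound a best-effort keep-alive fetch with AbortController so
// the awaiting caller cannot hang indefinitely on a dead endpoint.
async function fetchWithTimeout(url, options = {}, timeoutMs = 8000, fetchImpl = fetch) {
  const controller = new AbortController()
  const timer = setTimeout(() => controller.abort(), timeoutMs)
  try {
    return await fetchImpl(url, { ...options, signal: controller.signal })
  } catch (err) {
    if (err && err.name === 'AbortError') return null // keep-alive is best-effort
    throw err
  } finally {
    clearTimeout(timer) // always clear so the process can exit cleanly
  }
}
```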
In `@src/services/apis/provider-registry.mjs`:
- Around line 15-86: BUILTIN_PROVIDER_TEMPLATE currently sets OpenAI's
chatCompletionsPath to '/v1/chat/completions' which can duplicate '/v1' if
buildBuiltinProviders uses a baseUrl that already ends with '/v1'; update
buildBuiltinProviders to normalize the OpenAI base URL by stripping a trailing
'/v1' or '/v1/' before saving (e.g., use baseUrl =
config.customOpenAiApiUrl.replace(/\/v1\/?$/, '') when building the provider
entry for id 'openai'), or alternatively change the OpenAI template path to
'/chat/completions' so all providers use the same path convention; adjust the
code that creates the OpenAI provider (referencing BUILTIN_PROVIDER_TEMPLATE and
buildBuiltinProviders) to perform the normalization.
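The normalization can be as small as a single regex replace; the pattern below mirrors the one quoted in that prompt:

```javascript
// Sketch: strip a trailing '/v1' (with or without a final slash) from a
// base URL so a '/v1/chat/completions' template path cannot double it.
function stripTrailingV1(baseUrl) {
  return String(baseUrl || '').replace(/\/v1\/?$/, '')
}
```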
---
Nitpick comments:
In `@src/config/index.mjs`:
- Around line 740-751: The mapping API_MODE_GROUP_TO_PROVIDER_ID currently
contains both aimlModelKeys and aimlApiModelKeys pointing to 'aiml'; remove the
redundant key aimlApiModelKeys (or add a brief clarifying comment if the
duplicate is intentional for future compatibility) so only the canonical key
used by ModelGroups (aimlModelKeys) remains, updating the object
API_MODE_GROUP_TO_PROVIDER_ID to eliminate dead code while preserving behavior.
- Around line 796-1101: The customApiModes loop and selectedApiMode handling in
migrateUserConfig share repeated logic; extract that shared logic into a helper
named migrateApiModeEntry(apiMode, {providerSecrets, customOpenAIProviders,
providerIdSet, providerIdRenameLookup, customProviderCounter,
LEGACY_CUSTOM_PROVIDER_SECRET, API_MODE_GROUP_TO_PROVIDER_ID}) that normalizes
providerId, applies renames, creates new provider entries from customUrl
(updating providerIdSet, customOpenAIProviders and customProviderCounter), moves
apiKey into providerSecrets, clears apiMode.customUrl/apiKey/providerId as
appropriate, and returns {updatedApiMode, providerIdAssignedFromLegacyCustomUrl,
customProviderCounter, dirtyFlags} so migrateUserConfig can call it for each
entry in customApiModes and once for selectedApiMode (merging returned updates
and flags) to remove the duplicated ~100 lines while preserving existing
side-effects on providerSecrets, customOpenAIProviders, providerIdSet, and
dirty/custom*Dirty flags.
In `@src/services/apis/openai-api.mjs`:
- Around line 6-8: Replace the duplicate trailing-slash logic in
normalizeBaseUrl by reusing the existing trimSlashes helper: remove the local
normalizeBaseUrl function and import trimSlashes (from the module that currently
exports it) into this module, then call trimSlashes wherever normalizeBaseUrl
was used; alternatively, if trimSlashes is not exported, export it from
provider-registry.mjs and update imports so openai-api.mjs uses that shared
helper instead of duplicating the implementation.
In `@src/services/apis/openai-compatible-core.mjs`:
- Around line 8-15: The buildHeaders function currently allows an Authorization
in extraHeaders to be silently overwritten when apiKey is provided; make this
precedence explicit by removing any Authorization from extraHeaders before
merging (e.g., delete extraHeaders.Authorization) when you intend apiKey to win,
or alternatively check for an existing Authorization and only set
headers.Authorization = `Bearer ${apiKey}` if none exists — update the
buildHeaders function and add a brief comment clarifying the chosen precedence
so custom providers aren’t surprised.
- Around line 109-117: The onEnd handler currently posts a fallback done message
without calling finish(), so any accumulated partial answer in the answer
variable is never persisted via pushRecord; update the onEnd logic to call
finish() whenever finished is false (ensuring finish() uses pushRecord(session,
question, answer) and sends the final port.postMessage with done:true and
session) so partial deltas are saved, or if dropping partial answers is
intentional add a concise comment near finish()/onEnd explaining the design
decision; reference finish, pushRecord, answer, onEnd and port.postMessage when
making the change.
In `@src/services/apis/provider-registry.mjs`:
- Around line 249-269: getProviderSecret currently falls back to
session.apiMode.apiKey as the weakest source after checking providerSecrets and
legacy keys, which can surface stale embedded keys; update the function
(getProviderSecret) to include a concise inline comment or docstring clarifying
the priority order (apiMode.customApiModelKeys -> providerSecrets map ->
LEGACY_KEY_BY_PROVIDER_ID lookup -> session.apiMode.apiKey as last resort) and
explicitly note that the final fallback (session.apiMode.apiKey) is the
weakest/least-authoritative source and may be stale so should not override
rotated keys in providerSecrets or legacy config.
- Around line 241-247: getProviderById currently returns null for both "not
found" and "disabled" cases which makes callers like
resolveOpenAICompatibleRequest unable to distinguish why resolution failed;
change getProviderById to return a richer result (e.g. { provider } on success,
{ error: 'not_found' } or { error: 'disabled' } when appropriate) or throw a
descriptive error, then update resolveOpenAICompatibleRequest (and callers such
as openai-api.mjs) to handle the new shape/error and produce a clearer message
like "provider not found" vs "provider disabled"; reference the getProviderById
and resolveOpenAICompatibleRequest symbols when making these changes.
- Around line 101-134: The loose match in resolveProviderIdFromLegacyModelName
uses preset.startsWith('chatglm') which can incorrectly match unrelated names;
tighten the condition by matching the known ChatGLM tokens only (for example
check preset === 'chatglm' OR preset.startsWith('chatglm_') OR preset ===
'chatglmApiModelKeys') so it only identifies legitimate ChatGLM presets; update
the condition near the other provider checks (see
resolveProviderIdFromLegacyModelName and getModelNamePresetPart) to use these
stricter comparisons.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
- src/background/index.mjs
- src/config/index.mjs
- src/popup/sections/ApiModes.jsx
- src/popup/sections/GeneralPart.jsx
- src/services/apis/custom-api.mjs
- src/services/apis/openai-api.mjs
- src/services/apis/openai-compatible-core.mjs
- src/services/apis/provider-registry.mjs
- src/services/init-session.mjs
- src/services/wrappers.mjs
- src/utils/model-name-convert.mjs
- tests/unit/services/apis/openai-api-compat.test.mjs
🚧 Files skipped from review as they are similar to previous changes (1)
- src/services/init-session.mjs
function normalizeCustomProvider(provider, index) {
  if (!provider || typeof provider !== 'object') return null
  const id = toStringOrEmpty(provider.id).trim() || `custom-provider-${index + 1}`
  return {
    id,
    name: toStringOrEmpty(provider.name).trim() || `Custom Provider ${index + 1}`,
    baseUrl: trimSlashes(provider.baseUrl),
    chatCompletionsPath: ensureLeadingSlash(provider.chatCompletionsPath, DEFAULT_CHAT_PATH),
    completionsPath: ensureLeadingSlash(provider.completionsPath, DEFAULT_COMPLETION_PATH),
    chatCompletionsUrl: toStringOrEmpty(provider.chatCompletionsUrl).trim(),
    completionsUrl: toStringOrEmpty(provider.completionsUrl).trim(),
    builtin: false,
    enabled: provider.enabled !== false,
    allowLegacyResponseField: Boolean(provider.allowLegacyResponseField),
  }
}
Index-based fallback ID for custom providers is fragile.
Line 188 generates `custom-provider-${index + 1}` when `provider.id` is empty. Since the index depends on array position, reordering or removing providers will shift IDs, breaking `providerSecrets` lookups and persisted `session.apiMode.providerId` references. Consider generating a stable ID (e.g., hash of name+baseUrl, or a UUID at creation time) when the ID is missing.
#!/bin/bash
# Check how custom providers are created to see if IDs are assigned at creation
rg -n -C5 'customOpenAIProviders' --type=js -g '!node_modules/**'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/apis/provider-registry.mjs` around lines 186 - 201, The fallback
ID in normalizeCustomProvider currently uses the unstable index-based
`custom-provider-${index + 1}` which will change when array order changes and
break lookups (e.g., providerSecrets and session.apiMode.providerId); change
normalizeCustomProvider to produce a stable fallback ID when provider.id is
missing by deriving it deterministically (for example, compute a short hash from
stable fields like name + baseUrl) or by using a persistent UUID assigned at
provider creation time, and update any creation flow so newly created providers
get that stable id; reference normalizeCustomProvider and ensure code paths that
persist/lookup provider IDs (providerSecrets, session.apiMode.providerId) use
the new stable id scheme.
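One possible stable-ID scheme, sketched with an FNV-1a hash over `name` + `baseUrl`; the hash choice and the `custom-` prefix are illustration assumptions, not the extension's actual convention:

```javascript
// Sketch: derive a deterministic fallback ID from stable provider fields
// instead of the array index, so reordering providers cannot shift IDs.
function stableProviderId(provider) {
  const basis = `${provider.name || ''}|${provider.baseUrl || ''}`
  // 32-bit FNV-1a hash of the basis string
  let hash = 0x811c9dc5
  for (let i = 0; i < basis.length; i++) {
    hash ^= basis.charCodeAt(i)
    hash = Math.imul(hash, 0x01000193) >>> 0
  }
  return `custom-${hash.toString(16)}`
}
```

A persistent UUID assigned once at provider-creation time would work equally well; the key property is that the ID no longer depends on array position.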
Codex Review: Didn't find any major issues. Chef's kiss.
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍. Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
Force-pushed from fdc0497 to 407d734.
@codex review
Pull request overview
Copilot reviewed 12 out of 12 changed files in this pull request and generated 3 comments.
💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.
const { controller, messageListener, disconnectListener } = setAbortController(port)
const config = await getUserConfig()

let requestBody
generateAnswersWithOpenAICompatibleApi already receives a fully-loaded config, but generateAnswersWithOpenAICompatible calls getUserConfig() again to read token/temperature/context settings. This adds an extra storage read (and potential migration write) per request and can lead to inconsistencies if callers pass a config snapshot. Consider passing the needed config fields into generateAnswersWithOpenAICompatible (or accepting config as a parameter) instead of reloading from storage.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 407d734e9e
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
const legacyValue = legacyKey ? toStringOrEmpty(config?.[legacyKey]).trim() : ''
if (legacyValue) return legacyValue

return apiModeApiKey
Stop using session apiKey as global fallback
For non-custom providers this fallback can make settings updates ineffective: if a tab still has a stale session.apiMode.apiKey (from legacy/session state), getProviderSecret will keep returning it whenever providerSecrets and legacy config fields are empty, so clearing or rotating the provider key in settings does not take effect for that session. This regression is introduced by the unconditional return apiModeApiKey; for non-customApiModelKeys modes, key resolution should come from provider/config storage only.
Useful? React with 👍 / 👎.
Force-pushed from 407d734 to 3b4527c.
@codex review
Pull request overview
Copilot reviewed 41 out of 41 changed files in this pull request and generated no new comments.
💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.
Codex Review: Didn't find any major issues. Can't wait for the next one!
Force-pushed from c48b0f8 to 5cab19b.
@codex review
♻️ Duplicate comments (3)
src/services/apis/openai-compatible-core.mjs (2)
146-149: ⚠️ Potential issue | 🟠 Major
Persist partial answers when stream ends unexpectedly.
At Line 147, `onEnd` sends a done message but skips `finish()`, so accumulated partial output is not saved to `session.conversationRecords`.
🔧 Proposed fix

  async onEnd() {
-   if (!finished) {
-     port.postMessage({ answer: null, done: true, session: session })
-   }
+   if (!finished) finish()
    port.onMessage.removeListener(messageListener)
    port.onDisconnect.removeListener(disconnectListener)
  },

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/apis/openai-compatible-core.mjs` around lines 146 - 149, In onEnd(), when the stream ends unexpectedly and finished is false, call finish() (the same function that persists accumulated output to session.conversationRecords) before sending the done message; update the branch that currently only does port.postMessage({ answer: null, done: true, session }) to first invoke await finish() (or finish() if synchronous) to ensure partial output is saved to session.conversationRecords, then post the done message, keeping the finished guard to avoid double-finalizing.
114-115: ⚠️ Potential issue | 🟠 Major
Avoid logging full conversation and SSE payload content.
Line 114 and Line 124 can leak user prompts/responses into logs. Please remove or sanitize these logs.
🔧 Proposed fix
  const finish = () => {
    if (finished) return
    finished = true
    pushRecord(session, question, answer)
-   console.debug('conversation history', { content: session.conversationRecords })
    port.postMessage({ answer: null, done: true, session: session })
  }
@@
  onMessage(message) {
-   console.debug('sse message', message)
    if (finished) return

Also applies to: 124
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/apis/openai-compatible-core.mjs` around lines 114 - 115, Remove or sanitize any logs that emit full conversation or SSE payloads: delete or replace the console.debug('conversation history', { content: session.conversationRecords }) call and avoid posting the entire session object via port.postMessage({ answer: null, done: true, session: session }); instead log minimal metadata (e.g., conversation length, last activity timestamp, or a redacted summary/hash) and ensure any port.postMessage payloads do not include raw session.conversationRecords or user content; apply the same change to the other instance referenced (the similar log around line 124) so no full prompts/responses are written to logs or sent over SSE.
src/popup/sections/ApiModes.jsx (1)
381-381: ⚠️ Potential issue | 🟡 Minor
Use single quotes for the placeholder literal.
This line still uses a double-quoted string literal in changed JSX.
🔧 Suggested fix
- placeholder="https://api.example.com/v1/chat/completions"
+ placeholder='https://api.example.com/v1/chat/completions'
As per coding guidelines:
**/*.{js,jsx,mjs,ts,tsx}: Use single quotes instead of double quotes in imports and string literals.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/popup/sections/ApiModes.jsx` at line 381, In the ApiModes.jsx JSX element that renders the endpoint input (look for the placeholder prop in the ApiModes component/JSX block), replace the double-quoted placeholder string "https://api.example.com/v1/chat/completions" with a single-quoted literal 'https://api.example.com/v1/chat/completions' to conform to the project's single-quote string style.
🧹 Nitpick comments (1)
src/config/index.mjs (1)
952-992: Extract duplicated custom-provider resolution/creation logic into one helper.
The blocks from Line 952 to Line 992 and Line 1041 to Line 1085 implement near-identical provider lookup/creation rules. Consolidating them would reduce drift risk and make future migration fixes safer.
Also applies to: 1041-1085
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config/index.mjs` around lines 952 - 992, The duplicated logic that resolves or creates a custom provider should be extracted into a single helper (e.g., resolveOrCreateCustomProvider) and invoked from both places; currently the blocks that use normalizeText, normalizeEndpointUrlForCompare, customOpenAIProviders, providerSecrets, and isProviderSecretCompatibleForCustomMode to find or create a provider (using customProviderCounter, normalizeProviderId, ensureUniqueProviderId, providerIdSet, setting provider fields and toggling customProvidersDirty/customApiModesDirty) are repeated — move that logic into a helper that accepts the apiMode (or customUrl/customName/apiKey), providerSecrets, providerIdSet, and returns { providerId, providerCreatedFlag, provider } so callers simply set apiMode.providerId, clear apiMode.customUrl as needed, and update flags (customProvidersDirty/customApiModesDirty/providerIdAssignedFromLegacyCustomUrl) based on the helper result.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@src/popup/sections/ApiModes.jsx`:
- Line 381: In the ApiModes.jsx JSX element that renders the endpoint input
(look for the placeholder prop in the ApiModes component/JSX block), replace the
double-quoted placeholder string "https://api.example.com/v1/chat/completions"
with a single-quoted literal 'https://api.example.com/v1/chat/completions' to
conform to the project's single-quote string style.
In `@src/services/apis/openai-compatible-core.mjs`:
- Around line 146-149: In onEnd(), when the stream ends unexpectedly and
finished is false, call finish() (the same function that persists accumulated
output to session.conversationRecords) before sending the done message; update
the branch that currently only does port.postMessage({ answer: null, done: true,
session }) to first invoke await finish() (or finish() if synchronous) to ensure
partial output is saved to session.conversationRecords, then post the done
message, keeping the finished guard to avoid double-finalizing.
- Around line 114-115: Remove or sanitize any logs that emit full conversation
or SSE payloads: delete or replace the console.debug('conversation history', {
content: session.conversationRecords }) call and avoid posting the entire
session object via port.postMessage({ answer: null, done: true, session: session
}); instead log minimal metadata (e.g., conversation length, last activity
timestamp, or a redacted summary/hash) and ensure any port.postMessage payloads
do not include raw session.conversationRecords or user content; apply the same
change to the other instance referenced (the similar log around line 124) so no
full prompts/responses are written to logs or sent over SSE.
---
Nitpick comments:
In `@src/config/index.mjs`:
- Around line 952-992: The duplicated logic that resolves or creates a custom
provider should be extracted into a single helper (e.g.,
resolveOrCreateCustomProvider) and invoked from both places; currently the
blocks that use normalizeText, normalizeEndpointUrlForCompare,
customOpenAIProviders, providerSecrets, and
isProviderSecretCompatibleForCustomMode to find or create a provider (using
customProviderCounter, normalizeProviderId, ensureUniqueProviderId,
providerIdSet, setting provider fields and toggling
customProvidersDirty/customApiModesDirty) are repeated — move that logic into a
helper that accepts the apiMode (or customUrl/customName/apiKey),
providerSecrets, providerIdSet, and returns { providerId, providerCreatedFlag,
provider } so callers simply set apiMode.providerId, clear apiMode.customUrl as
needed, and update flags
(customProvidersDirty/customApiModesDirty/providerIdAssignedFromLegacyCustomUrl)
based on the helper result.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (44)
- src/_locales/de/main.json
- src/_locales/en/main.json
- src/_locales/es/main.json
- src/_locales/fr/main.json
- src/_locales/in/main.json
- src/_locales/it/main.json
- src/_locales/ja/main.json
- src/_locales/ko/main.json
- src/_locales/pt/main.json
- src/_locales/ru/main.json
- src/_locales/tr/main.json
- src/_locales/zh-hans/main.json
- src/_locales/zh-hant/main.json
- src/background/index.mjs
- src/config/index.mjs
- src/config/openai-provider-mappings.mjs
- src/popup/sections/ApiModes.jsx
- src/popup/sections/GeneralPart.jsx
- src/popup/sections/api-modes-provider-utils.mjs
- src/popup/sections/general-balance-utils.mjs
- src/popup/sections/provider-secret-utils.mjs
- src/services/apis/aiml-api.mjs
- src/services/apis/chatglm-api.mjs
- src/services/apis/custom-api.mjs
- src/services/apis/deepseek-api.mjs
- src/services/apis/moonshot-api.mjs
- src/services/apis/ollama-api.mjs
- src/services/apis/openai-api.mjs
- src/services/apis/openai-compatible-core.mjs
- src/services/apis/openrouter-api.mjs
- src/services/apis/provider-registry.mjs
- src/services/init-session.mjs
- src/services/wrappers.mjs
- src/utils/model-name-convert.mjs
- tests/unit/config/migrate-user-config.test.mjs
- tests/unit/popup/api-modes-provider-utils.test.mjs
- tests/unit/popup/general-balance-utils.test.mjs
- tests/unit/popup/provider-secret-utils.test.mjs
- tests/unit/services/apis/custom-api.test.mjs
- tests/unit/services/apis/openai-api-compat.test.mjs
- tests/unit/services/apis/provider-registry.test.mjs
- tests/unit/services/apis/thin-adapters.test.mjs
- tests/unit/services/wrappers-register.test.mjs
- tests/unit/utils/model-name-convert.test.mjs
💤 Files with no reviewable changes (6)
- src/services/apis/chatglm-api.mjs
- src/services/apis/ollama-api.mjs
- src/services/apis/moonshot-api.mjs
- src/services/apis/aiml-api.mjs
- src/services/apis/openrouter-api.mjs
- src/services/apis/deepseek-api.mjs
🚧 Files skipped from review as they are similar to previous changes (18)
- src/_locales/es/main.json
- src/services/wrappers.mjs
- src/_locales/ru/main.json
- tests/unit/popup/provider-secret-utils.test.mjs
- src/popup/sections/provider-secret-utils.mjs
- src/utils/model-name-convert.mjs
- src/config/openai-provider-mappings.mjs
- src/_locales/zh-hant/main.json
- src/_locales/in/main.json
- tests/unit/services/apis/provider-registry.test.mjs
- src/_locales/fr/main.json
- src/popup/sections/api-modes-provider-utils.mjs
- src/_locales/ko/main.json
- src/services/init-session.mjs
- src/services/apis/provider-registry.mjs
- src/_locales/ja/main.json
- src/_locales/zh-hans/main.json
- src/_locales/de/main.json
Codex Review: Didn't find any major issues. 🚀

ℹ️ About Codex in GitHub: Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you open a pull request for review, mark a draft as ready, or comment "@codex review". If Codex has suggestions, it will comment; otherwise it will react with 👍. Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
Pull request overview
Copilot reviewed 44 out of 44 changed files in this pull request and generated 1 comment.
```js
async onStart() {},
async onEnd() {
  if (!finished) {
    port.postMessage({ answer: null, done: true, session: session })
```
In generateAnswersWithOpenAICompatible(), onEnd() posts a final { answer: null, done: true, session } message when the stream ends without a [DONE] chunk or finish_reason, but it never calls the shared finish() helper. That means the accumulated partial answer is not persisted via pushRecord(), so the conversation history can silently lose the final/partial response even though the UI is told the request is done. Consider calling finish() (or at least pushRecord + setting finished = true) inside onEnd when !finished so fallback termination behaves consistently with the normal finish path.
```diff
- port.postMessage({ answer: null, done: true, session: session })
+ finish()
```
Force-pushed from 5cab19b to a5c12a2.
@codex review
Codex Review: Didn't find any major issues. Keep it up!
Force-pushed from a5c12a2 to e023916.
Pull request overview
Copilot reviewed 44 out of 44 changed files in this pull request and generated 1 comment.
```js
async onStart() {},
async onEnd() {
  if (!finished) {
    port.postMessage({ answer: null, done: true, session: session })
  }
  port.onMessage.removeListener(messageListener)
  port.onDisconnect.removeListener(disconnectListener)
},
```
onEnd emits a final { answer: null, done: true } message without calling finish(). This means when a stream ends without [DONE] or finish_reason, the partial answer is not persisted via pushRecord, and finished is never set to true. Consider calling finish() from onEnd (or duplicating its persistence logic) so the last partial answer is saved consistently and finished is updated before cleanup.
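Both review passes flag the same gap. The shape of the suggested fix can be sketched as a self-contained model; `pushRecord` and the `port` object below are simplified stand-ins for the real ChatGPTBox internals, which are not shown in full here, so treat this as an illustration rather than the actual module:

```javascript
// Sketch of routing fallback termination through the shared finish() helper,
// so a stream that ends without [DONE] still persists the partial answer.
// pushRecord and the port shape are simplified assumptions.
function pushRecord(session, question, answer) {
  session.conversationRecords.push({ question, answer })
}

function createStream(port, session, question) {
  let answer = ''
  let finished = false
  const finish = () => {
    if (finished) return
    finished = true
    if (answer !== '') pushRecord(session, question, answer)
    port.postMessage({ answer: null, done: true, session })
  }
  return {
    onMessage(chunk) {
      if (finished) return
      answer += chunk
      port.postMessage({ answer, done: false, session })
    },
    // Before: posted { answer: null, done: true } directly, skipping pushRecord.
    // After: delegate to finish() so both termination paths behave identically,
    // and the guard inside finish() keeps the call idempotent.
    onEnd() {
      finish()
    },
  }
}
```

Because `finish()` checks and sets `finished`, calling it from both the `[DONE]` handler and `onEnd` cannot double-persist the record.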
@codex review
Codex Review: Didn't find any major issues. Already looking forward to the next diff.
Consolidate OpenAI-compatible API request handling into a single shared module and route background dispatch through provider registry lookup. This removes duplicated streaming/parsing logic from openai-api and custom-api while keeping existing behavior. Add config migration to preserve existing API keys and custom mode entries by mapping them into providerSecrets and custom provider records. Keep legacy fallbacks for apiMode customUrl/custom apiKey to avoid user-visible regressions during rollout. Normalize apiMode objects at runtime and compare selection using stable identity fields so migrated and legacy session data continue to match correctly.
Split provider management from API mode saving so users can add and edit custom OpenAI-compatible providers in a dedicated editor. Require a full chat-completions endpoint URL for provider setup and derive the paired completions endpoint with shared popup utilities backed by unit tests. Keep API mode provider binding explicit at save time and preserve provider-secret synchronization in General settings for backward compatibility. Add provider-related locale keys across supported locales.
Force-pushed from e023916 to 9ac7c67.
Pull request overview
Copilot reviewed 48 out of 48 changed files in this pull request and generated 1 comment.
```js
if (!provider) {
  if (normalizeProviderId(providerId) === 'legacy-custom-default') {
    provider = getProviderById(config, providerId)
    useLegacyCustomUrlFallback = !!provider
  } else {
    return null
  }
```
In resolveOpenAICompatibleRequest, the legacy fallback branch checks normalizeProviderId(providerId) === 'legacy-custom-default' but then calls getProviderById(config, providerId) with the un-normalized providerId. If a session has providerId with different casing/spacing (common for legacy data, since normalizeApiMode only trims), this will fail to resolve the built-in legacy-custom-default provider and the request returns null / throws. Use the normalized id when calling getProviderById (e.g. 'legacy-custom-default' or normalizeProviderId(providerId)) and ensure resolvedProviderId aligns with the provider actually used.
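The suggested correction amounts to resolving with the canonical id rather than the raw one. A minimal sketch, assuming `normalizeProviderId` trims and lowercases (its real body is not shown in this thread) and with `getProviderById` injected as a stub:

```javascript
// Sketch: resolve the legacy fallback provider with the *normalized* id so
// casing/spacing variants in old session data still match. The helper bodies
// here are assumptions based on the review description, not the real module.
const normalizeProviderId = (id) => String(id ?? '').trim().toLowerCase()

function resolveLegacyFallback(config, providerId, getProviderById) {
  const normalizedId = normalizeProviderId(providerId)
  if (normalizedId !== 'legacy-custom-default') return null
  // Use the canonical id for the lookup and for the resolved result,
  // so downstream secret lookups agree with the provider actually used.
  const provider = getProviderById(config, normalizedId)
  return provider ? { provider, resolvedProviderId: normalizedId } : null
}
```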
```diff
  "OpenAI (GPT-5.4 nano)": "OpenAI (GPT-5.4 nano)",
- "Anthropic (Claude Sonnet 4.6)": "Anthropic (Claude Sonnet 4.6)"
+ "Anthropic (Claude Sonnet 4.6)": "Anthropic (Claude Sonnet 4.6)",
+ "This provider is still used by other API modes or saved conversations": "Bu saglayici hala diger API modlari veya kaydedilmis konusmalar tarafindan kullaniliyor",
```
🟡 Turkish locale translation missing diacritical marks (ASCII-only characters)
The Turkish translation at line 212 uses ASCII-only characters instead of proper Turkish diacritical marks. For example, "Bu saglayici hala diger API modlari veya kaydedilmis konusmalar tarafindan kullaniliyor" should use sağlayıcı, hâlâ, diğer, modları, kaydedilmiş, konuşmalar, kullanılıyor. Other Turkish translations in the same file (e.g., lines 153 and 215) correctly use Turkish characters. This will display garbled-looking text to Turkish-speaking users.
```diff
- "This provider is still used by other API modes or saved conversations": "Bu saglayici hala diger API modlari veya kaydedilmis konusmalar tarafindan kullaniliyor",
+ "This provider is still used by other API modes or saved conversations": "Bu sağlayıcı hâlâ diğer API modları veya kaydedilmiş konuşmalar tarafından kullanılıyor",
```
```diff
  "OpenAI (GPT-5.4 nano)": "OpenAI (GPT-5.4 nano)",
- "Anthropic (Claude Sonnet 4.6)": "Anthropic (Claude Sonnet 4.6)"
+ "Anthropic (Claude Sonnet 4.6)": "Anthropic (Claude Sonnet 4.6)",
+ "This provider is still used by other API modes or saved conversations": "Ce fournisseur est encore utilise par d'autres modes d'API ou des conversations enregistrees",
```
🟡 French locale translation missing accent marks
The French translation at line 212 uses "utilise" and "enregistrees" instead of the correctly accented "utilisé" and "enregistrées". Other French translations in the same file correctly use accents (e.g., line 215: "nécessaire pour les conversations enregistrées"). This will display incorrectly-spelled text to French-speaking users.
```diff
- "This provider is still used by other API modes or saved conversations": "Ce fournisseur est encore utilise par d'autres modes d'API ou des conversations enregistrees",
+ "This provider is still used by other API modes or saved conversations": "Ce fournisseur est encore utilisé par d'autres modes d'API ou des conversations enregistrées",
```
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 9ac7c67071
```js
const selectedProvider = selectedProviderRequest
  ? getProviderById(config, selectedProviderRequest.providerId)
  : null
```
Reuse the resolved provider when `legacy-custom-default` is custom.
The new registry tests explicitly support migrated configs where a custom provider keeps the ID legacy-custom-default, but this code throws away selectedProviderRequest.provider and looks the provider up by ID again. In that case getProviderById() returns the builtin legacy provider first, so the mode-override flow in buildSelectedModeProviderSecretOverrideUpdate() clones the builtin template instead of the real custom endpoint. Saving an override from the General tab then creates a materialized provider with no usable request URL, and that mode stops working on the next request.
Actionable comments posted: 12
♻️ Duplicate comments (4)
src/services/apis/openai-compatible-core.mjs (2)
17-20: ⚠️ Potential issue | 🟠 Major
Append legacy `response` chunks instead of replacing `answer`. For legacy streaming providers, each `response` chunk currently overwrites the accumulated buffer, so only the tail fragment is persisted and returned.
🔁 Preserve the full streamed answer:

```diff
-function buildMessageAnswer(answer, data, allowLegacyResponseField) {
-  if (allowLegacyResponseField && typeof data?.response === 'string' && data.response) {
-    return data.response
+function buildMessageAnswer(answer, data, allowLegacyResponseField) {
+  if (allowLegacyResponseField && typeof data?.response === 'string') {
+    return answer + data.response
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/apis/openai-compatible-core.mjs` around lines 17 - 20, buildMessageAnswer currently replaces the accumulated answer with each legacy data.response chunk, losing earlier fragments; change the logic in buildMessageAnswer so that when allowLegacyResponseField is true and data?.response is a non-empty string you append data.response to the existing answer (e.g., answer = (answer ?? '') + data.response) rather than returning data.response alone, and then return the concatenated result so streaming fragments are preserved; keep existing type checks for data?.response and respect allowLegacyResponseField flag.
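The accumulate-don't-replace behavior the review asks for is small enough to demonstrate in isolation. The legacy branch mirrors the suggested diff; the OpenAI-style `choices[0].delta.content` path is a simplified assumption about the payload shape, included only to make the sketch self-contained:

```javascript
// Sketch of buildMessageAnswer with the fix applied: legacy `response` chunks
// are appended to the running buffer instead of replacing it.
function buildMessageAnswer(answer, data, allowLegacyResponseField) {
  if (allowLegacyResponseField && typeof data?.response === 'string') {
    return answer + data.response // append, never replace
  }
  // Assumed OpenAI-style streaming delta path (simplified).
  const delta = data?.choices?.[0]?.delta?.content
  return typeof delta === 'string' ? answer + delta : answer
}
```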
114-118: ⚠️ Potential issue | 🟠 Major
Remove raw conversation and SSE payload logging from the production path. These debug statements dump full prompts, answers, and provider payloads into extension logs on every request.
🧹 Minimal fix:

```diff
 const finish = () => {
   if (finished) return
   finished = true
   if (answer !== '') {
     pushRecord(session, question, answer)
-    console.debug('conversation history', { content: session.conversationRecords })
   }
   port.postMessage({ answer: null, done: true, session: session })
 }
 @@
 onMessage(message) {
-  console.debug('sse message', message)
   if (finished) return
```

Also applies to: 126-127
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/apis/openai-compatible-core.mjs` around lines 114 - 118, the production path currently logs raw conversation and SSE payloads (see the console.debug call that prints session.conversationRecords and any other debug logs around SSE handling), which must be removed or disabled; remove the console.debug that dumps session.conversationRecords (and any similar console.* that logs full prompts/answers/provider payloads) and instead either sanitize the data or gate logging behind a runtime debug flag/ENV check (e.g., only log when DEBUG_MODE is true); keep the pushRecord(session, question, answer) and the port.postMessage({ answer: null, done: true, session }) behavior but ensure session sent over postMessage does not contain sensitive raw prompt/answer payloads (sanitize or omit conversationRecords) before sending.

src/popup/sections/ApiModes.jsx (1)
605-610: ⚠️ Potential issue | 🟡 Minor
Use single quotes for the placeholder literal. This new JSX prop still uses double quotes. As per coding guidelines (`**/*.{js,jsx,mjs,ts,tsx}`): Use single quotes instead of double quotes in imports and string literals.
✏️ Style-only fix:

```diff
- placeholder="https://api.example.com/v1/chat/completions"
+ placeholder='https://api.example.com/v1/chat/completions'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/popup/sections/ApiModes.jsx` around lines 605 - 610, the JSX input uses a double-quoted placeholder literal; replace the double quotes with single quotes for the placeholder prop in the input element that uses providerBaseUrlInputRef and value={providerDraft.apiUrl} (i.e., change placeholder="https://api.example.com/v1/chat/completions" to use single quotes) to comply with the project's string-literal style rule.

src/services/apis/provider-registry.mjs (1)
284-302: ⚠️ Potential issue | 🟠 Major
Canonicalize custom provider IDs before using them as registry keys. `normalizeCustomProvider()` only trims `provider.id`, but later in the same module `getProviderSecret()` canonicalizes IDs with `normalizeProviderId()` before looking in `providerSecrets`. A provider imported as `MyProxy` can still resolve here while its secret lookup falls through to `providerSecrets.myproxy` and returns empty. The order-based `custom-provider-${index + 1}` fallback has the same problem after deletions or reordering, because persisted `providerId` references shift.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/apis/provider-registry.mjs` around lines 284 - 302, normalizeCustomProvider currently only trims provider.id causing mismatches with getProviderSecret which calls normalizeProviderId; update normalizeCustomProvider to canonicalize the returned id by passing the computed id (either the trimmed provider.id or the fallback string) through normalizeProviderId before returning. Specifically, in normalizeCustomProvider ensure the variable id is built as you already do (toStringOrEmpty(...).trim() || `custom-provider-${index + 1}`) and then set id = normalizeProviderId(id) so providerSecrets lookups and getProviderSecret use the same canonical key.
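The id-canonicalization step the prompt describes can be sketched on its own. `normalizeProviderId`'s body is an assumption (trim plus lowercase) based on how the review describes `providerSecrets.myproxy` lookups; the real implementation may differ:

```javascript
// Sketch: canonicalize custom provider ids before they become registry and
// providerSecrets keys, including the positional fallback, so provider.id and
// the secret-lookup key always agree. Helper bodies are assumptions.
const normalizeProviderId = (id) => String(id ?? '').trim().toLowerCase()

function normalizeCustomProviderId(rawId, index) {
  const trimmed = String(rawId ?? '').trim()
  // Pass the final id (trimmed value or fallback) through the same
  // canonicalizer getProviderSecret() uses.
  return normalizeProviderId(trimmed || `custom-provider-${index + 1}`)
}
```

Note that canonicalizing does not by itself fix the ordering problem: `custom-provider-${index + 1}` still shifts after deletions, which is why the review also suggests stable, persisted ids.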
🧹 Nitpick comments (2)
src/components/ConversationCard/index.jsx (1)
39-42: Move these provider-label helpers to a shared module.
`ConversationCard` is a shared component, but it now reaches into `src/popup/sections/` for display logic. That reverses the dependency direction and makes the component harder to reuse outside the popup. A neutral utility module under `src/utils/` or similar would keep this boundary cleaner.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/ConversationCard/index.jsx` around lines 39 - 42, ConversationCard currently imports getApiModeDisplayLabel and getConversationAiName from src/popup/sections/api-modes-provider-utils.mjs which inverts the dependency graph; move these helper functions into a neutral shared utilities module (e.g., src/utils/api-mode-labels.{js|mjs}) and update ConversationCard to import getApiModeDisplayLabel and getConversationAiName from that new module so the shared component no longer reaches into the popup/sections folder; ensure you export the same function names and update any other consumers to the new path.src/background/index.mjs (1)
338-350: Avoid keeping a second provider registry in the background. This predicate hard-codes the OpenAI-compatible families even though the PR introduces `provider-registry.mjs`. The list will drift the next time a compatible provider is added, and that provider will silently miss the unified path until this file is updated too. Consider exposing a single compatibility decision from the registry/resolver and branching on that here. As per coding guidelines, "Centralize network/API logic under `src/services/apis/` and keep endpoints auditable".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/background/index.mjs` around lines 338 - 350, The predicate isUsingOpenAICompatibleApiSession currently hard-codes provider checks and must be replaced with a single compatibility query from the new provider registry: import and call the registry/resolver function (eg. an exported isOpenAICompatibleProvider or similar) from provider-registry.mjs instead of listing isUsingCustomModel/isUsingChatgptApiModel/...; update isUsingOpenAICompatibleApiSession to delegate to that registry function (passing the session or session.provider) so new providers are automatically covered, and ensure any network/API logic remains under src/services/apis/ as per guidelines.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/_locales/zh-hans/main.json`:
- Around line 220-221: The translations for the strings "This API key is set on
the selected custom mode. Editing it here will create a dedicated provider for
that mode." and "Use shared key" mix the English word "provider" into zh-Hans;
replace "provider" with the consistent term "提供商" in both translations so they
read e.g. "此 API 密钥设定在所选的自定义模式上。在这里编辑会为该模式建立专用提供商。" and "使用共享提供商密钥" to match the
rest of the locale.
In `@src/config/index.mjs`:
- Around line 945-966: The code snapshots legacyCustomProviderSecret once at
module init which misses later mutations to providerSecrets; change to read the
live value lazily by replacing the constant with a function (e.g.,
getLegacyCustomProviderSecret()) that returns
normalizeText(providerSecrets['legacy-custom-default']) and update callers
(including isProviderSecretCompatibleForCustomMode and any other places where
legacyCustomProviderSecret is used such as the other migration blocks around the
getCustomModeMigrationSignature usage) to call that function so migrations see
the current map value instead of a stale snapshot.
In `@src/popup/popup-config-utils.mjs`:
- Around line 1-2: The current helpers (mergeConfigUpdate and
buildConfigRollbackPatch) perform shallow top-level merges and rollbacks which
can accidentally overwrite or fail to delete nested keys like providerSecrets;
update these functions to perform deep merge/patch semantics for nested maps
(specifically handle providerSecrets as a map of provider -> secrets) or else
enforce and document a strict full-object-replacement contract at every call
site. Concretely: modify mergeConfigUpdate to deep-merge nested objects (merging
providerSecrets maps rather than replacing them), and update
buildConfigRollbackPatch to produce rollback operations that can remove added
nested keys and restore only modified nested entries; alternatively, add
validation in all call sites of mergeConfigUpdate and buildConfigRollbackPatch
to reject partial updates for providerSecrets and require callers to pass the
entire nested object. Ensure tests for functions mergeConfigUpdate and
buildConfigRollbackPatch cover adding, updating, and deleting nested
providerSecrets entries.
In `@src/popup/Popup.jsx`:
- Around line 107-113: When the initial getUserConfig() resolves in the
useEffect, don't clobber any optimistic in-memory edits; instead merge the
loaded config into state while preserving existing in-memory values. Update the
handler that currently does persistedConfigRef.current = config;
setConfig(config) to set persistedConfigRef.current = config and then
setConfig(prev => ({ ...config, ...prev })) so keys already edited in the
current popup take precedence over the storage snapshot (referencing useEffect,
getUserConfig, persistedConfigRef, and setConfig).
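The merge the prompt describes is a one-liner, but its spread order is the whole point, so a tiny sketch helps. Plain-object configs are assumed; `setConfig` and `persistedConfigRef` are the component names from the review, not reproduced here:

```javascript
// Sketch: merge the config loaded from storage under the current in-memory
// state, so keys already edited in the open popup take precedence over the
// (possibly stale) storage snapshot.
function mergeLoadedConfig(loadedFromStorage, currentInMemory) {
  return { ...loadedFromStorage, ...currentInMemory }
}
```

In the component this would be applied as `setConfig((prev) => mergeLoadedConfig(config, prev))` once `getUserConfig()` resolves, while `persistedConfigRef.current` keeps the raw snapshot for later rollback comparisons.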
In `@src/popup/sections/ApiModes.jsx`:
- Around line 252-277: The persistApiMode function currently calls
updateConfig(payload) and then immediately calls clearPendingProviderChanges(),
which can lose in-progress edits if the async updateConfig rejects; change
persistApiMode to await the promise returned by updateConfig(payload) (i.e.,
make the caller async or handle the returned promise) and only call
clearPendingProviderChanges() and any editor-close logic after a successful
write, and add a catch that logs/surfaces the error and avoids clearing pending
state on failure; apply the same pattern to the other similar persistence block
(the one around the second occurrence noted in the review) so both updateConfig
usages await success before tearing down pending provider/editor state.
- Around line 121-129: The catch block for Browser.storage.local.get('sessions')
incorrectly sets sessionsLoaded to true on failure; remove or change the
setSessionsLoaded(true) in the catch so that a failed read does not mark
sessions as loaded. In the component (ApiModes.jsx) adjust the promise.catch
handler (the block referencing isMounted and setSessionsLoaded) to either only
log the error or set an error flag but do NOT call setSessionsLoaded(true); keep
the existing updateSessions call in the .then branch and preserve the isMounted
check before any state updates so the saved-conversation guard remains closed on
read failure.
In `@src/popup/sections/general-balance-utils.mjs`:
- Around line 22-29: The cache key builder getBalanceCacheKey is not normalizing
the billing base URL the same way as normalizeBillingApiBaseUrl, so equivalent
endpoints like "https://host" and "https://host/v1" produce different keys;
update getBalanceCacheKey to run the baseUrl through the same normalization
(call normalizeBillingApiBaseUrl(baseUrl) or apply its logic: trim, strip
trailing slashes, and remove a trailing "/v1" segment) before JSON.stringify,
ensuring the baseUrl component matches the billing normalization used elsewhere.
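A sketch of sharing one normalizer between the billing request and the cache key follows. The exact rules inside `normalizeBillingApiBaseUrl` (trim, strip trailing slashes, drop a trailing `/v1` segment) are assumptions taken from the prompt's description, not the real module:

```javascript
// Sketch: one URL normalizer feeds both billing requests and the balance cache
// key, so equivalent endpoints like "https://host/" and "https://host/v1"
// produce the same key.
function normalizeBillingApiBaseUrl(baseUrl) {
  let url = String(baseUrl ?? '').trim().replace(/\/+$/, '')
  if (url.toLowerCase().endsWith('/v1')) {
    url = url.slice(0, -3).replace(/\/+$/, '')
  }
  return url
}

function getBalanceCacheKey(apiKey, baseUrl) {
  // JSON.stringify over a fixed-order tuple keeps keys unambiguous even when
  // apiKey or URL contains separator characters.
  return JSON.stringify([apiKey, normalizeBillingApiBaseUrl(baseUrl)])
}
```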
In `@src/popup/sections/GeneralPart.jsx`:
- Around line 163-172: The OpenAI balance control is shown based only on
provider IDs; change the gating to also verify the resolved billing host is the
official OpenAI platform by using resolvedOpenAiApiUrl. Update the logic that
computes shouldShowOpenAIBalanceControls (and any other occurrences around the
second block at lines ~444-479) to call shouldOpenOpenAIUsagePage(...) AND
validate resolvedOpenAiApiUrl (from selectedProvider?.baseUrl ||
config.customOpenAiApiUrl || 'https://api.openai.com') resolves to the OpenAI
host (e.g., parse the URL and check hostname === 'api.openai.com' or
hostname.endsWith('.openai.com') / other canonical OpenAI host checks). Ensure
you reference resolvedOpenAiApiUrl and
selectedProviderId/selectedSourceProviderId when changing the condition so the
balance link is hidden for proxies/self-hosted endpoints.
- Around line 62-105: The checkBilling function (and similar balance fetch logic
used by getBalance) should be moved out of the popup UI into a new or existing
module under src/services/apis/; create an API service function (e.g., export
async function fetchBillingInfo(apiKey, apiUrl)) that contains the HTTP calls,
header construction, error handling, date logic (startDate/endDate/subDate) and
returns { totalAmount, totalUsage, remaining } or nulls on error, then import
and call that service from GeneralPart.jsx (replace direct calls to
checkBilling/fetch with the new service). Ensure the new service reuses
formatDate if needed, centralizes the Authorization header creation, and
preserves existing response/error behavior so UI code only deals with the
returned values.
- Around line 283-349: The cleanup decision uses a stale apiMode because
buildCleanupConfigUpdate(...) is called before the second
isOverrideCommitCurrent(...) check and before
buildSelectionPreservingConfigUpdate(...); reorder so the cleanup is based on
the final selection: compute shouldPreserveCurrentSelection via
isOverrideCommitCurrent(...) first (using commitGeneration/overrideCommit refs
and commitSelectionSignature refs), then call
buildCleanupConfigUpdate(cleanupCandidateProviderId, config, configUpdate,
updatedSessions) only after you know whether selection is preserved, or
alternatively recompute nextConfigUpdate by calling buildCleanupConfigUpdate
again after obtaining finalConfigUpdate from
buildSelectionPreservingConfigUpdate(...); ensure cleanupCandidateProviderId
removal logic uses the final apiMode/selection state so you don't orphan the
newly selected provider secret.
In `@src/services/apis/custom-api.mjs`:
- Around line 19-29: The shared OpenAI-compatible flow can leak port listeners
if fetchSSE throws before onEnd/onError run; update the code that calls
setAbortController (referencing setAbortController, controller, cleanController,
messageListener, disconnectListener) so that after creating the controller you
wrap the fetchSSE call in a try/finally and always call cleanController() in the
finally block (ensuring messageListener/disconnectListener are removed even when
fetchSSE throws); apply this change in the openai-compatible implementation used
by generateAnswersWithOpenAICompatible to guarantee listeners are cleaned on all
code paths.
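The try/finally shape the prompt asks for is worth pinning down, because `finally` is what guarantees cleanup on a synchronous throw, not just on rejection. The names below are simplified stand-ins for `fetchSSE` and `cleanController`:

```javascript
// Sketch: listener cleanup runs on every exit path, including a synchronous
// throw from the start function. For an async fetchSSE, the same pattern with
// `await start()` inside the try block behaves identically.
function runWithGuaranteedCleanup(start, cleanController) {
  try {
    return start()
  } finally {
    cleanController() // removes message/disconnect listeners even on throw
  }
}
```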
In `@src/utils/model-name-convert.mjs`:
- Around line 151-170: The identity comparison in isApiModeSelected() wrongly
includes the "active" property, causing a mode to fail matching itself when
toggled; update normalizeForCompare (used by isApiModeSelected and
normalizeApiMode) to omit "active" from the JSON key so identity only uses
groupName, itemName, isCustom, customName and providerId (i.e., remove active
from the returned object before JSON.stringify); keep the rest of the logic
unchanged so selectedApiMode and targetApiMode continue to be compared by the
narrower identity.
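The narrower identity comparison described above can be sketched directly. The field list follows the prompt; the defaults and the exact serialization are assumptions, since `normalizeForCompare`'s real body is not shown in this thread:

```javascript
// Sketch: compare api modes by stable identity only. `active` is dropped
// before serializing, so toggling a mode cannot make it stop matching itself.
function normalizeForCompare(mode) {
  const { groupName = '', itemName = '', isCustom = false, customName = '', providerId = '' } = mode ?? {}
  // Destructuring into a fixed-order literal keeps JSON key order stable.
  return JSON.stringify({ groupName, itemName, isCustom, customName, providerId })
}

function isApiModeSelected(selectedApiMode, targetApiMode) {
  return normalizeForCompare(selectedApiMode) === normalizeForCompare(targetApiMode)
}
```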
---
Duplicate comments:
In `@src/popup/sections/ApiModes.jsx`:
- Around line 605-610: The JSX input uses a double-quoted placeholder literal;
replace the double quotes with single quotes for the placeholder prop in the
input element that uses providerBaseUrlInputRef and value={providerDraft.apiUrl}
(i.e., change placeholder="https://api.example.com/v1/chat/completions" to use
single quotes) to comply with the project's string-literal style rule.
In `@src/services/apis/openai-compatible-core.mjs`:
- Around line 17-20: buildMessageAnswer currently replaces the accumulated
answer with each legacy data.response chunk, losing earlier fragments; change
the logic in buildMessageAnswer so that when allowLegacyResponseField is true
and data?.response is a non-empty string you append data.response to the
existing answer (e.g., answer = (answer ?? '') + data.response) rather than
returning data.response alone, and then return the concatenated result so
streaming fragments are preserved; keep existing type checks for data?.response
and respect allowLegacyResponseField flag.
- Around line 114-118: The production path currently logs raw conversation and
SSE payloads (see the console.debug call that prints session.conversationRecords
and any other debug logs around SSE handling), which must be removed or
disabled; remove the console.debug that dumps session.conversationRecords (and
any similar console.* that logs full prompts/answers/provider payloads) and
instead either sanitize the data or gate logging behind a runtime debug flag/ENV
check (e.g., only log when DEBUG_MODE is true); keep the pushRecord(session,
question, answer) and the port.postMessage({ answer: null, done: true, session
}) behavior but ensure session sent over postMessage does not contain sensitive
raw prompt/answer payloads (sanitize or omit conversationRecords) before
sending.
In `@src/services/apis/provider-registry.mjs`:
- Around line 284-302: normalizeCustomProvider currently only trims provider.id
causing mismatches with getProviderSecret which calls normalizeProviderId;
update normalizeCustomProvider to canonicalize the returned id by passing the
computed id (either the trimmed provider.id or the fallback string) through
normalizeProviderId before returning. Specifically, in normalizeCustomProvider
ensure the variable id is built as you already do (toStringOrEmpty(...).trim()
|| `custom-provider-${index + 1}`) and then set id = normalizeProviderId(id) so
providerSecrets lookups and getProviderSecret use the same canonical key.
---
Nitpick comments:
In `@src/background/index.mjs`:
- Around line 338-350: The predicate isUsingOpenAICompatibleApiSession currently
hard-codes provider checks and must be replaced with a single compatibility
query from the new provider registry: import and call the registry/resolver
function (eg. an exported isOpenAICompatibleProvider or similar) from
provider-registry.mjs instead of listing
isUsingCustomModel/isUsingChatgptApiModel/...; update
isUsingOpenAICompatibleApiSession to delegate to that registry function (passing
the session or session.provider) so new providers are automatically covered, and
ensure any network/API logic remains under src/services/apis/ as per guidelines.
In `@src/components/ConversationCard/index.jsx`:
- Around line 39-42: ConversationCard currently imports getApiModeDisplayLabel
and getConversationAiName from src/popup/sections/api-modes-provider-utils.mjs
which inverts the dependency graph; move these helper functions into a neutral
shared utilities module (e.g., src/utils/api-mode-labels.{js|mjs}) and update
ConversationCard to import getApiModeDisplayLabel and getConversationAiName from
that new module so the shared component no longer reaches into the
popup/sections folder; ensure you export the same function names and update any
other consumers to the new path.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: df51bb37-ab90-4217-94a0-b5d913a4959f
📒 Files selected for processing (48)
- src/_locales/de/main.json
- src/_locales/en/main.json
- src/_locales/es/main.json
- src/_locales/fr/main.json
- src/_locales/in/main.json
- src/_locales/it/main.json
- src/_locales/ja/main.json
- src/_locales/ko/main.json
- src/_locales/pt/main.json
- src/_locales/ru/main.json
- src/_locales/tr/main.json
- src/_locales/zh-hans/main.json
- src/_locales/zh-hant/main.json
- src/background/index.mjs
- src/components/ConversationCard/index.jsx
- src/config/index.mjs
- src/config/openai-provider-mappings.mjs
- src/popup/Popup.jsx
- src/popup/popup-config-utils.mjs
- src/popup/sections/ApiModes.jsx
- src/popup/sections/GeneralPart.jsx
- src/popup/sections/api-modes-provider-utils.mjs
- src/popup/sections/general-balance-utils.mjs
- src/popup/sections/provider-secret-utils.mjs
- src/services/apis/aiml-api.mjs
- src/services/apis/chatglm-api.mjs
- src/services/apis/custom-api.mjs
- src/services/apis/deepseek-api.mjs
- src/services/apis/moonshot-api.mjs
- src/services/apis/ollama-api.mjs
- src/services/apis/openai-api.mjs
- src/services/apis/openai-compatible-core.mjs
- src/services/apis/openrouter-api.mjs
- src/services/apis/provider-registry.mjs
- src/services/init-session.mjs
- src/services/wrappers.mjs
- src/utils/model-name-convert.mjs
- tests/unit/config/migrate-user-config.test.mjs
- tests/unit/popup/api-modes-provider-utils.test.mjs
- tests/unit/popup/general-balance-utils.test.mjs
- tests/unit/popup/popup-config-update.test.mjs
- tests/unit/popup/provider-secret-utils.test.mjs
- tests/unit/services/apis/custom-api.test.mjs
- tests/unit/services/apis/openai-api-compat.test.mjs
- tests/unit/services/apis/provider-registry.test.mjs
- tests/unit/services/apis/thin-adapters.test.mjs
- tests/unit/services/wrappers-register.test.mjs
- tests/unit/utils/model-name-convert.test.mjs
💤 Files with no reviewable changes (6)
- src/services/apis/deepseek-api.mjs
- src/services/apis/moonshot-api.mjs
- src/services/apis/ollama-api.mjs
- src/services/apis/aiml-api.mjs
- src/services/apis/chatglm-api.mjs
- src/services/apis/openrouter-api.mjs
✅ Files skipped from review due to trivial changes (10)
- tests/unit/services/wrappers-register.test.mjs
- src/config/openai-provider-mappings.mjs
- tests/unit/popup/general-balance-utils.test.mjs
- src/_locales/fr/main.json
- src/_locales/tr/main.json
- src/_locales/ru/main.json
- tests/unit/popup/provider-secret-utils.test.mjs
- src/_locales/it/main.json
- tests/unit/popup/api-modes-provider-utils.test.mjs
- src/_locales/es/main.json
🚧 Files skipped from review as they are similar to previous changes (9)
- tests/unit/services/apis/custom-api.test.mjs
- src/services/init-session.mjs
- src/_locales/de/main.json
- tests/unit/services/apis/thin-adapters.test.mjs
- tests/unit/utils/model-name-convert.test.mjs
- src/_locales/pt/main.json
- src/_locales/en/main.json
- src/_locales/ko/main.json
- src/_locales/zh-hant/main.json
"This API key is set on the selected custom mode. Editing it here will create a dedicated provider for that mode.": "此 API 密钥设定在所选的自定义模式上。在这里编辑会为该模式建立专用 provider。",
"Use shared key": "使用共享 provider 密钥",
Keep "provider" translated consistently in zh-Hans.
These new strings still mix the raw English word "provider" into otherwise localized copy, while the rest of the locale uses 提供商. That will look inconsistent in the popup.
🔤 Suggested wording
- "This API key is set on the selected custom mode. Editing it here will create a dedicated provider for that mode.": "此 API 密钥设定在所选的自定义模式上。在这里编辑会为该模式建立专用 provider。",
- "Use shared key": "使用共享 provider 密钥",
+ "This API key is set on the selected custom mode. Editing it here will create a dedicated provider for that mode.": "此 API 密钥设置在所选的自定义模式上。在这里编辑会为该模式创建专用提供商。",
+ "Use shared key": "使用共享提供商密钥",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/_locales/zh-hans/main.json` around lines 220 - 221, The translations for
the strings "This API key is set on the selected custom mode. Editing it here
will create a dedicated provider for that mode." and "Use shared key" mix the
English word "provider" into zh-Hans; replace "provider" with the consistent
term "提供商" in both translations so they read e.g. "此 API
密钥设定在所选的自定义模式上。在这里编辑会为该模式建立专用提供商。" and "使用共享提供商密钥" to match the rest of the
locale.
const legacyCustomProviderSecret = normalizeText(providerSecrets['legacy-custom-default'])
const hasOwnProviderSecret = (providerId) =>
  Object.prototype.hasOwnProperty.call(providerSecrets, providerId)
const getCustomModeMigrationSignature = (apiMode) =>
  JSON.stringify({
    groupName: normalizeText(apiMode?.groupName),
    itemName: normalizeText(apiMode?.itemName),
    isCustom: Boolean(apiMode?.isCustom),
    customName: normalizeText(apiMode?.customName),
    customUrl: normalizeEndpointUrlForCompare(normalizeText(apiMode?.customUrl)),
    providerId: normalizeProviderId(
      typeof apiMode?.providerId === 'string' ? apiMode.providerId : '',
    ),
    apiKey: normalizeText(apiMode?.apiKey),
  })
const isProviderSecretCompatibleForCustomMode = (modeApiKey, providerSecret) => {
  const effectiveModeKey = normalizeText(modeApiKey) || legacyCustomProviderSecret
  if (effectiveModeKey) {
    return !providerSecret || providerSecret === effectiveModeKey
  }
  return !providerSecret
}
Re-read legacy-custom-default after the migration mutates providerSecrets.
legacyCustomProviderSecret is snapshotted once on Line 945, but earlier iterations can populate providerSecrets['legacy-custom-default'] later in the same migration pass. URL-derived modes processed after that then miss the inherited key and get migrated without credentials.
🔄 Read the legacy secret lazily from the live map
- const legacyCustomProviderSecret = normalizeText(providerSecrets['legacy-custom-default'])
+ const getLegacyCustomProviderSecret = () =>
+ normalizeText(providerSecrets['legacy-custom-default'])
@@
const isProviderSecretCompatibleForCustomMode = (modeApiKey, providerSecret) => {
- const effectiveModeKey = normalizeText(modeApiKey) || legacyCustomProviderSecret
+ const effectiveModeKey = normalizeText(modeApiKey) || getLegacyCustomProviderSecret()
if (effectiveModeKey) {
return !providerSecret || providerSecret === effectiveModeKey
}
return !providerSecret
}
@@
- } else if (legacyCustomProviderSecret && providerIdAssignedFromLegacyCustomUrl) {
+ } else if (getLegacyCustomProviderSecret() && providerIdAssignedFromLegacyCustomUrl) {
if (!hasOwnProviderSecret(apiMode.providerId)) {
- providerSecrets[apiMode.providerId] = legacyCustomProviderSecret
+ providerSecrets[apiMode.providerId] = getLegacyCustomProviderSecret()
dirty = true
}
}
@@
- legacyCustomProviderSecret &&
+ getLegacyCustomProviderSecret() &&
selectedTargetProviderId &&
!hasOwnProviderSecret(selectedTargetProviderId)
) {
- providerSecrets[selectedTargetProviderId] = legacyCustomProviderSecret
+ providerSecrets[selectedTargetProviderId] = getLegacyCustomProviderSecret()
dirty = true
}
Also applies to: 1115-1119, 1220-1228
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/config/index.mjs` around lines 945 - 966, The code snapshots
legacyCustomProviderSecret once at module init which misses later mutations to
providerSecrets; change to read the live value lazily by replacing the constant
with a function (e.g., getLegacyCustomProviderSecret()) that returns
normalizeText(providerSecrets['legacy-custom-default']) and update callers
(including isProviderSecretCompatibleForCustomMode and any other places where
legacyCustomProviderSecret is used such as the other migration blocks around the
getCustomModeMigrationSignature usage) to call that function so migrations see
the current map value instead of a stale snapshot.
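The hazard is easy to reproduce in isolation. The sketch below uses hypothetical shapes (a bare `providerSecrets` object, not the real migration state) to show why a one-time snapshot diverges from a lazy getter:

```javascript
// Hypothetical illustration of the stale-snapshot hazard; `providerSecrets`
// here is a bare object, not the real migration state.
const providerSecrets = {}

// Snapshot taken once, before any migration step runs:
const snapshot = providerSecrets['legacy-custom-default'] || ''

// Lazy getter that always reads the live map:
const getLegacySecret = () => providerSecrets['legacy-custom-default'] || ''

// An earlier migration iteration populates the key...
providerSecrets['legacy-custom-default'] = 'sk-legacy'

// ...so later iterations miss it through the snapshot but see it through the getter.
console.log(snapshot) // '' (stale)
console.log(getLegacySecret()) // 'sk-legacy' (live)
```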
export function mergeConfigUpdate(currentConfig, value) {
  return { ...currentConfig, ...value }
These helpers only understand top-level replacement, which is risky for providerSecrets.
mergeConfigUpdate shallow-merges, and buildConfigRollbackPatch restores whole top-level keys. With nested config like providerSecrets, that means a partial patch can wipe sibling entries, while rollback for newly added nested entries cannot express a real delete. Please either make the full-object replacement contract explicit and enforce it at every call site, or teach merge and rollback to handle nested provider maps together.
Also applies to: 14-30
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/popup/popup-config-utils.mjs` around lines 1 - 2, The current helpers
(mergeConfigUpdate and buildConfigRollbackPatch) perform shallow top-level
merges and rollbacks which can accidentally overwrite or fail to delete nested
keys like providerSecrets; update these functions to perform deep merge/patch
semantics for nested maps (specifically handle providerSecrets as a map of
provider -> secrets) or else enforce and document a strict
full-object-replacement contract at every call site. Concretely: modify
mergeConfigUpdate to deep-merge nested objects (merging providerSecrets maps
rather than replacing them), and update buildConfigRollbackPatch to produce
rollback operations that can remove added nested keys and restore only modified
nested entries; alternatively, add validation in all call sites of
mergeConfigUpdate and buildConfigRollbackPatch to reject partial updates for
providerSecrets and require callers to pass the entire nested object. Ensure
tests for functions mergeConfigUpdate and buildConfigRollbackPatch cover adding,
updating, and deleting nested providerSecrets entries.
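For illustration, a merge that treats `providerSecrets` as a nested map could look like this sketch (shapes assumed; this is not the project's actual helper, and the rollback side would additionally need delete markers for newly added entries):

```javascript
// Hedged sketch: shallow-merge top-level keys, but merge the nested
// providerSecrets map so a partial patch cannot wipe sibling entries.
function mergeConfigUpdateDeep(currentConfig, value) {
  const merged = { ...currentConfig, ...value }
  if (value.providerSecrets && currentConfig.providerSecrets) {
    merged.providerSecrets = {
      ...currentConfig.providerSecrets,
      ...value.providerSecrets,
    }
  }
  return merged
}
```

With this shape, patching one provider's key leaves the sibling entries intact instead of replacing the whole map.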
useEffect(() => {
  getPreferredLanguageKey().then((lang) => {
    i18n.changeLanguage(lang)
  })
  getUserConfig().then((config) => {
    persistedConfigRef.current = config
    setConfig(config)
Don’t let the initial storage read clobber optimistic edits.
If updateConfig runs before getUserConfig() resolves, Line 113 replaces the in-memory form state with the stale storage snapshot even though the queued write may still succeed. That makes the just-edited setting appear to revert until the popup is reopened.
💡 Merge only untouched keys from the initial load
getUserConfig().then((config) => {
persistedConfigRef.current = config
- setConfig(config)
+ setConfig((currentConfig) => {
+ const untouchedConfig = { ...config }
+ for (const key of Object.keys(latestTouchedRequestByKeyRef.current)) {
+ delete untouchedConfig[key]
+ }
+ return mergeConfigUpdate(currentConfig, untouchedConfig)
+ })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/popup/Popup.jsx` around lines 107 - 113, When the initial getUserConfig()
resolves in the useEffect, don't clobber any optimistic in-memory edits; instead
merge the loaded config into state while preserving existing in-memory values.
Update the handler that currently does persistedConfigRef.current = config;
setConfig(config) to set persistedConfigRef.current = config and then
setConfig(prev => ({ ...config, ...prev })) so keys already edited in the
current popup take precedence over the storage snapshot (referencing useEffect,
getUserConfig, persistedConfigRef, and setConfig).
Browser.storage.local
  .get('sessions')
  .then(({ sessions }) => {
    updateSessions(sessions)
  })
  .catch(() => {
    if (!isMounted) return
    setSessionsLoaded(true)
  })
Keep the saved-conversation guard closed when the initial session scan fails.
The catch branch turns sessionsLoaded on even when the storage read failed, so destructive provider edits are treated as if there were no saved conversations to protect.
🛡️ Keep the guard in a non-ready state on read failure
Browser.storage.local
.get('sessions')
.then(({ sessions }) => {
updateSessions(sessions)
})
.catch(() => {
if (!isMounted) return
- setSessionsLoaded(true)
+ setSessions([])
+ // Keep `sessionsLoaded` false here so provider delete/edit guards stay blocked.
+ // If needed, surface a dedicated load-error message instead of treating this as "empty".
})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/popup/sections/ApiModes.jsx` around lines 121 - 129, The catch block for
Browser.storage.local.get('sessions') incorrectly sets sessionsLoaded to true on
failure; remove or change the setSessionsLoaded(true) in the catch so that a
failed read does not mark sessions as loaded. In the component (ApiModes.jsx)
adjust the promise.catch handler (the block referencing isMounted and
setSessionsLoaded) to either only log the error or set an error flag but do NOT
call setSessionsLoaded(true); keep the existing updateSessions call in the .then
branch and preserve the isMounted check before any state updates so the
saved-conversation guard remains closed on read failure.
async function checkBilling(apiKey, apiUrl) {
  const now = new Date()
  let startDate = new Date(now - 90 * 24 * 60 * 60 * 1000)
  const endDate = new Date(now.getTime() + 24 * 60 * 60 * 1000)
  const subDate = new Date(now)
  subDate.setDate(1)

  const urlSubscription = `${apiUrl}/v1/dashboard/billing/subscription`
  let urlUsage = `${apiUrl}/v1/dashboard/billing/usage?start_date=${formatDate(
    startDate,
  )}&end_date=${formatDate(endDate)}`
  const headers = {
    Authorization: `Bearer ${apiKey}`,
    'Content-Type': 'application/json',
  }

  try {
    let response = await fetch(urlSubscription, { headers })
    if (!response.ok) {
      console.log('Your account has been suspended. Please log in to OpenAI to check.')
      return [null, null, null]
    }
    const subscriptionData = await response.json()
    const totalAmount = subscriptionData.hard_limit_usd

    if (totalAmount > 20) {
      startDate = subDate
    }

    urlUsage = `${apiUrl}/v1/dashboard/billing/usage?start_date=${formatDate(
      startDate,
    )}&end_date=${formatDate(endDate)}`

    response = await fetch(urlUsage, { headers })
    const usageData = await response.json()
    const totalUsage = usageData.total_usage / 100
    const remaining = totalAmount - totalUsage

    return [totalAmount, totalUsage, remaining]
  } catch (error) {
    console.error(error)
    return [null, null, null]
  }
}
🛠️ Refactor suggestion | 🟠 Major
Move the billing transport into src/services/apis/.
checkBilling() and the balance fetches in getBalance() add new OpenAI-compatible HTTP calls directly in a popup component. That duplicates provider transport logic in the UI layer and makes these endpoints harder to audit alongside the rest of the provider stack.
As per coding guidelines, "Centralize network/API logic under src/services/apis/ and keep endpoints auditable".
Also applies to: 444-479
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/popup/sections/GeneralPart.jsx` around lines 62 - 105, The checkBilling
function (and similar balance fetch logic used by getBalance) should be moved
out of the popup UI into a new or existing module under src/services/apis/;
create an API service function (e.g., export async function
fetchBillingInfo(apiKey, apiUrl)) that contains the HTTP calls, header
construction, error handling, date logic (startDate/endDate/subDate) and returns
{ totalAmount, totalUsage, remaining } or nulls on error, then import and call
that service from GeneralPart.jsx (replace direct calls to checkBilling/fetch
with the new service). Ensure the new service reuses formatDate if needed,
centralizes the Authorization header creation, and preserves existing
response/error behavior so UI code only deals with the returned values.
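A minimal sketch of such a service module, assuming the same endpoints and date logic as the popup's current `checkBilling()` (the `buildBillingRequest` helper and `fetchImpl` injection are hypothetical additions for testability, and the month-start window adjustment is omitted for brevity):

```javascript
// Hypothetical src/services/apis/billing.mjs sketch; not the project's actual module.
function formatDate(date) {
  return date.toISOString().slice(0, 10) // YYYY-MM-DD, as the usage endpoint expects
}

function buildBillingRequest(apiKey, apiUrl, now = new Date()) {
  const startDate = new Date(now.getTime() - 90 * 24 * 60 * 60 * 1000)
  const endDate = new Date(now.getTime() + 24 * 60 * 60 * 1000)
  return {
    subscriptionUrl: `${apiUrl}/v1/dashboard/billing/subscription`,
    usageUrl:
      `${apiUrl}/v1/dashboard/billing/usage` +
      `?start_date=${formatDate(startDate)}&end_date=${formatDate(endDate)}`,
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
  }
}

async function fetchBillingInfo(apiKey, apiUrl, fetchImpl = fetch) {
  const { subscriptionUrl, usageUrl, headers } = buildBillingRequest(apiKey, apiUrl)
  try {
    const subResponse = await fetchImpl(subscriptionUrl, { headers })
    if (!subResponse.ok) return { totalAmount: null, totalUsage: null, remaining: null }
    const { hard_limit_usd: totalAmount } = await subResponse.json()
    const usageResponse = await fetchImpl(usageUrl, { headers })
    const usageData = await usageResponse.json()
    const totalUsage = usageData.total_usage / 100
    return { totalAmount, totalUsage, remaining: totalAmount - totalUsage }
  } catch {
    return { totalAmount: null, totalUsage: null, remaining: null }
  }
}
```

The UI would then import `fetchBillingInfo` and only deal with the returned values, while the transport stays auditable next to the other provider endpoints.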
const resolvedOpenAiApiUrl =
  selectedProvider?.baseUrl || config.customOpenAiApiUrl || 'https://api.openai.com'
const isSelectedProviderKeyManagedByModeOverride = hasSelectedModeOwnProviderSecretOverride(
  config,
  selectedProviderId,
)
const shouldShowOpenAIBalanceControls = shouldOpenOpenAIUsagePage(
  selectedProviderId,
  selectedSourceProviderId,
)
Gate OpenAI balance links by the resolved billing host.
shouldShowOpenAIBalanceControls only looks at provider IDs. When customOpenAiApiUrl points at a proxy or self-hosted OpenAI-compatible endpoint, the popup still renders the OpenAI balance affordance, probes /dashboard/billing/* on that host, and then falls back to the OpenAI account page. This should stay hidden unless the resolved billing base is actually the OpenAI platform.
🔧 Host-aware gating
+ const isDefaultOpenAIBillingHost =
+ normalizeBillingApiBaseUrl(resolvedOpenAiApiUrl) === 'https://api.openai.com'
const shouldShowOpenAIBalanceControls =
- shouldOpenOpenAIUsagePage(
- selectedProviderId,
- selectedSourceProviderId,
- )
+ isDefaultOpenAIBillingHost &&
+ shouldOpenOpenAIUsagePage(selectedProviderId, selectedSourceProviderId)
Also applies to: 444-479
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/popup/sections/GeneralPart.jsx` around lines 163 - 172, The OpenAI
balance control is shown based only on provider IDs; change the gating to also
verify the resolved billing host is the official OpenAI platform by using
resolvedOpenAiApiUrl. Update the logic that computes
shouldShowOpenAIBalanceControls (and any other occurrences around the second
block at lines ~444-479) to call shouldOpenOpenAIUsagePage(...) AND validate
resolvedOpenAiApiUrl (from selectedProvider?.baseUrl ||
config.customOpenAiApiUrl || 'https://api.openai.com') resolves to the OpenAI
host (e.g., parse the URL and check hostname === 'api.openai.com' or
hostname.endsWith('.openai.com') / other canonical OpenAI host checks). Ensure
you reference resolvedOpenAiApiUrl and
selectedProviderId/selectedSourceProviderId when changing the condition so the
balance link is hidden for proxies/self-hosted endpoints.
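The host check itself can stay small; the helper below is a hypothetical sketch of the idea (name and exact host rules are assumptions, not the project's code):

```javascript
// Hypothetical helper: only treat the official OpenAI platform as a billing host.
function isOfficialOpenAIBillingHost(apiUrl) {
  try {
    const { hostname } = new URL(apiUrl)
    return hostname === 'api.openai.com' || hostname.endsWith('.openai.com')
  } catch {
    return false // malformed or relative URLs never unlock the billing UI
  }
}
```

Gating `shouldShowOpenAIBalanceControls` on this in addition to the provider IDs keeps the balance affordance hidden for proxies and self-hosted endpoints.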
const nextConfigUpdate = buildCleanupConfigUpdate(
  cleanupCandidateProviderId,
  config,
  configUpdate,
  updatedSessions,
)
if (
  !isOverrideCommitCurrent(
    commitGeneration,
    overrideCommitGenerationRef.current,
    commitSelectionSignature,
    overrideCommitSelectionSignatureRef.current,
  )
) {
  return
}

if (updatedSessions !== latestSessions) {
  try {
    await Browser.storage.local.set({ sessions: updatedSessions })
  } catch (error) {
    console.error('[popup] Failed to persist migrated sessions for provider override', error)
    return
  }
}

const rollbackMigratedSessions = async (message, error) => {
  if (updatedSessions === latestSessions || !sessionMigration) return
  if (error) {
    console.error(message, error)
  } else {
    console.error(message)
  }

  const currentSessionsResult = await loadLatestSessions()
  if (!currentSessionsResult.ok) {
    console.error(
      '[popup] Failed to reload sessions for provider override selective rollback',
    )
    return
  }
  const rolledBackSessions = rollbackProviderSecretOverrideSessionMigration(
    currentSessionsResult.sessions,
    latestSessions,
    sessionMigration,
  )
  if (rolledBackSessions === currentSessionsResult.sessions) return
  try {
    await Browser.storage.local.set({ sessions: rolledBackSessions })
  } catch (rollbackError) {
    console.error(
      '[popup] Failed to persist selective rollback for provider override sessions',
      rollbackError,
    )
  }
}

const shouldPreserveCurrentSelection = !isOverrideCommitCurrent(
  commitGeneration,
  overrideCommitGenerationRef.current,
  commitSelectionSignature,
  overrideCommitSelectionSignatureRef.current,
)
const finalConfigUpdate = buildSelectionPreservingConfigUpdate(
  nextConfigUpdate,
  shouldPreserveCurrentSelection,
)
Don't clean up providers after the selection has changed.
If the user switches modes after the second isOverrideCommitCurrent(...) check, buildCleanupConfigUpdate() has already decided whether to delete cleanupCandidateProviderId using the old config.apiMode. buildSelectionPreservingConfigUpdate() only strips apiMode, so the stale cleanup can still be persisted and orphan the newly selected mode/provider secret.
🛠️ Safer sequencing
- const nextConfigUpdate = buildCleanupConfigUpdate(
- cleanupCandidateProviderId,
- config,
- configUpdate,
- updatedSessions,
- )
if (
!isOverrideCommitCurrent(
commitGeneration,
@@
- const shouldPreserveCurrentSelection = !isOverrideCommitCurrent(
- commitGeneration,
- overrideCommitGenerationRef.current,
- commitSelectionSignature,
- overrideCommitSelectionSignatureRef.current,
- )
- const finalConfigUpdate = buildSelectionPreservingConfigUpdate(
- nextConfigUpdate,
- shouldPreserveCurrentSelection,
- )
+ if (
+ !isOverrideCommitCurrent(
+ commitGeneration,
+ overrideCommitGenerationRef.current,
+ commitSelectionSignature,
+ overrideCommitSelectionSignatureRef.current,
+ )
+ ) {
+ await rollbackMigratedSessions(
+ '[popup] Selection changed during provider override commit; attempting selective session rollback',
+ )
+ return
+ }
+
+ const nextConfigUpdate = buildCleanupConfigUpdate(
+ cleanupCandidateProviderId,
+ config,
+ configUpdate,
+ updatedSessions,
+ )
+ const finalConfigUpdate = buildSelectionPreservingConfigUpdate(nextConfigUpdate)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/popup/sections/GeneralPart.jsx` around lines 283 - 349, The cleanup
decision uses a stale apiMode because buildCleanupConfigUpdate(...) is called
before the second isOverrideCommitCurrent(...) check and before
buildSelectionPreservingConfigUpdate(...); reorder so the cleanup is based on
the final selection: compute shouldPreserveCurrentSelection via
isOverrideCommitCurrent(...) first (using commitGeneration/overrideCommit refs
and commitSelectionSignature refs), then call
buildCleanupConfigUpdate(cleanupCandidateProviderId, config, configUpdate,
updatedSessions) only after you know whether selection is preserved, or
alternatively recompute nextConfigUpdate by calling buildCleanupConfigUpdate
again after obtaining finalConfigUpdate from
buildSelectionPreservingConfigUpdate(...); ensure cleanupCandidateProviderId
removal logic uses the final apiMode/selection state so you don't orphan the
newly selected provider secret.
await generateAnswersWithOpenAICompatible({
  port,
  question,
  session,
  endpointType: 'chat',
  requestUrl: apiUrl,
  model: modelName,
  apiKey,
  provider: 'custom',
  allowLegacyResponseField: true,
})
Ensure the shared OpenAI-compatible path always cleans up port listeners.
This call now relies entirely on generateAnswersWithOpenAICompatible, and the shared implementation only removes its abort listeners from onEnd/onError. If fetchSSE throws before either callback runs, those listeners stay attached to the port and accumulate across retries.
💡 Suggested fix in src/services/apis/openai-compatible-core.mjs
const { controller, cleanController, messageListener, disconnectListener } =
setAbortController(port)
try {
await fetchSSE(/* existing request */)
} finally {
cleanController()
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/apis/custom-api.mjs` around lines 19 - 29, The shared
OpenAI-compatible flow can leak port listeners if fetchSSE throws before
onEnd/onError run; update the code that calls setAbortController (referencing
setAbortController, controller, cleanController, messageListener,
disconnectListener) so that after creating the controller you wrap the fetchSSE
call in a try/finally and always call cleanController() in the finally block
(ensuring messageListener/disconnectListener are removed even when fetchSSE
throws); apply this change in the openai-compatible implementation used by
generateAnswersWithOpenAICompatible to guarantee listeners are cleaned on all
code paths.
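The guarantee the suggestion relies on can be shown in isolation. Everything below is a hypothetical mock (a plain `listeners` array standing in for the real port wiring), not the extension's actual `setAbortController`:

```javascript
// Hypothetical mock: a listener registered per request, removed in finally.
function setAbortController(port) {
  const controller = new AbortController()
  const messageListener = () => controller.abort()
  port.listeners.push(messageListener)
  const cleanController = () => {
    port.listeners = port.listeners.filter((listener) => listener !== messageListener)
  }
  return { controller, cleanController }
}

async function runStream(port, fetchSSE) {
  const { cleanController } = setAbortController(port)
  try {
    await fetchSSE()
  } finally {
    // Runs on success, thrown errors, and early returns alike,
    // so retries cannot accumulate stale listeners on the port.
    cleanController()
  }
}
```

Even when `fetchSSE` throws before any callback fires, the listener count returns to zero.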
export function isApiModeSelected(apiMode, configOrSession) {
-  return configOrSession.apiMode
-    ? JSON.stringify(configOrSession.apiMode, Object.keys(configOrSession.apiMode).sort()) ===
-        JSON.stringify(apiMode, Object.keys(apiMode).sort())
-    : configOrSession.modelName === apiModeToModelName(apiMode)
+  const normalizeForCompare = (value) => {
+    const normalized = normalizeApiMode(value)
+    if (!normalized) return null
+    return JSON.stringify({
+      groupName: normalized.groupName,
+      itemName: normalized.itemName,
+      isCustom: normalized.isCustom,
+      customName: normalized.customName,
+      providerId: normalized.providerId,
+      active: normalized.active,
+    })
+  }
+  if (!configOrSession.apiMode) {
+    return configOrSession.modelName === apiModeToModelName(apiMode)
+  }
+  const selectedApiMode = normalizeForCompare(configOrSession.apiMode)
+  const targetApiMode = normalizeForCompare(apiMode)
+  if (!selectedApiMode || !targetApiMode) return false
+  return selectedApiMode === targetApiMode
`active` should not be part of API mode identity.
isApiModeSelected() is used for state updates as well as rendering. Once `active` participates in the compare key, the same mode stops matching itself as soon as it is toggled on or off, so flows like disabling the selected mode in src/popup/sections/ApiModes.jsx:140-150 or clearing the selected mode's provider key in src/popup/sections/provider-secret-utils.mjs:250-365 can miss the target entry.
🎯 Narrower identity check
const normalizeForCompare = (value) => {
const normalized = normalizeApiMode(value)
if (!normalized) return null
return JSON.stringify({
groupName: normalized.groupName,
itemName: normalized.itemName,
isCustom: normalized.isCustom,
customName: normalized.customName,
providerId: normalized.providerId,
- active: normalized.active,
})
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/model-name-convert.mjs` around lines 151 - 170, The identity
comparison in isApiModeSelected() wrongly includes the "active" property,
causing a mode to fail matching itself when toggled; update normalizeForCompare
(used by isApiModeSelected and normalizeApiMode) to omit "active" from the JSON
key so identity only uses groupName, itemName, isCustom, customName and
providerId (i.e., remove active from the returned object before JSON.stringify);
keep the rest of the logic unchanged so selectedApiMode and targetApiMode
continue to be compared by the narrower identity.
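A minimal illustration of the failure mode, with hypothetical mode objects (field names follow this PR, values invented):

```javascript
// Identity key without `active`: toggling a mode must not change its identity.
const identityKey = (mode) =>
  JSON.stringify({
    groupName: mode.groupName,
    itemName: mode.itemName,
    isCustom: mode.isCustom,
    customName: mode.customName,
    providerId: mode.providerId,
  })

const saved = {
  groupName: 'chatgptApi',
  itemName: 'gpt-4o',
  isCustom: false,
  customName: '',
  providerId: 'openai',
  active: true,
}
const toggled = { ...saved, active: false }

console.log(identityKey(saved) === identityKey(toggled)) // true: still the same mode
```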
User description
This PR unifies OpenAI-compatible API execution into a shared core + provider registry,
and completes the custom provider workflow in API Modes.
Included changes
`max_completion_tokens` where required)
Compatibility
Validation
- `npm run lint` passed
- `npm run build` passed
GitHub Copilot PR summary
PR Type
Enhancement, Tests, Documentation
Description
- Unified OpenAI-compatible provider system: Consolidated all OpenAI-compatible API providers (custom, Ollama, DeepSeek, Moonshot, ChatGLM, OpenRouter, AIML) into a single shared execution path via `generateAnswersWithOpenAICompatibleApi`, eliminating duplicate provider-specific implementations
- Configuration migration and schema versioning: Introduced a comprehensive `migrateUserConfig` function that normalizes provider IDs, consolidates legacy secrets, handles custom provider deduplication, and maintains backward compatibility with existing configs
- Unified provider registry: Created `provider-registry.mjs` with centralized provider resolution logic supporting both built-in and custom providers, with secret lookup and endpoint URL construction
- Custom provider UI workflow: Refactored API Modes popup to support custom provider creation/editing with provider selector dropdown, validation, and dynamic provider list management
- Provider secret management: Added `buildProviderSecretUpdate` utility for managing provider secrets across built-in and custom providers with legacy field support
- API mode normalization: Implemented `normalizeApiMode` utility to ensure consistent API mode object structure throughout the codebase
- Stream completion signaling: Fixed stream completion consistency to emit a final message in `{ answer: null, done: true, session }` format
- Comprehensive test coverage: Added 16 config migration tests, 8 provider registry tests, and additional tests for provider utilities and stream handling
- Multilingual support: Updated localization strings across 12 languages for custom provider UI labels and validation messages
- Removed 6 provider-specific API modules: Deleted `ollama-api.mjs`, `chatglm-api.mjs`, `aiml-api.mjs`, `moonshot-api.mjs`, `openrouter-api.mjs`, and `deepseek-api.mjs` as their functionality was consolidated into the unified handler

Diagram Walkthrough
File Walkthrough
13 files
index.mjs
Configuration migration and provider registry unification (src/config/index.mjs)
- `migrateUserConfig` function to normalize provider IDs, consolidate legacy secrets, and handle custom provider deduplication
- `customOpenAIProviders`, `providerSecrets`, and `configSchemaVersion` for unified provider management
- `customUrl` and `apiKey` fields into provider-based storage while maintaining backward compatibility
- `getUserConfig` to automatically persist migrated configuration changes to storage
ApiModes.jsx
Custom provider UI editor and provider selector integration (src/popup/sections/ApiModes.jsx)
- create/edit provider dialogs
- chat completions URL
- `sanitizeApiModeForSave` to ensure proper provider ID and API key handling during save
- `getCustomOpenAIProviders` and provider utilities for dynamic provider list management
provider-registry.mjs
Unified provider registry and resolution system (src/services/apis/provider-registry.mjs)
- `getAllOpenAIProviders`, `getProviderById`, and `resolveOpenAICompatibleRequest` functions
- providers (OpenAI, DeepSeek, Ollama, etc.) and custom providers
- `getProviderSecret` function to resolve API keys from `providerSecrets` map with fallback to legacy fields
- provider configurations
GeneralPart.jsx
Unified provider detection and API key management (src/popup/sections/GeneralPart.jsx)
- `resolveOpenAICompatibleRequest` call
- credentials via `buildProviderSecretUpdate`
- `isUsingXxxApiModel` checks into single `isUsingOpenAICompatibleProvider` flag
openai-api.mjs
OpenAI-compatible API core extraction and unification (src/services/apis/openai-api.mjs)
- `generateAnswersWithOpenAICompatible` core function
- `generateAnswersWithOpenAICompatibleApi` as unified entry point that resolves provider configuration and delegates to core
- `touchOllamaKeepAlive` for Ollama-specific keep-alive handling
- implementation
index.mjs
Unified background API execution routing (src/background/index.mjs)
- `generateAnswersWithOpenAICompatibleApi` call
- `isUsingOpenAICompatibleApiSession` utility to route all OpenAI-compatible providers through unified handler
- DeepSeek, Moonshot, ChatGLM, OpenRouter, and AIML APIs
openai-compatible-core.mjs
OpenAI-compatible API core implementation (src/services/apis/openai-compatible-core.mjs)
- `generateAnswersWithOpenAICompatible` function handling both chat and completion endpoints
- parsing, and message answer building
- `allowLegacyResponseField` parameter for backward compatibility
- OpenAI-compatible providers
model-name-convert.mjs
API mode normalization and comparison utilities (`src/utils/model-name-convert.mjs`)
- Adds `normalizeApiMode` function to ensure consistent API mode object structure with default values
- Updates `apiModeToModelName` and `getApiModesFromConfig` to use normalized API modes for safer property access
- Updates `isApiModeSelected` comparison logic to normalize both sides before comparing relevant fields
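A minimal sketch of the normalization and comparison just described. The exact field set is an assumption inferred from the walkthrough (`groupName`, `itemName`, `customUrl`, etc.); the real defaults may differ.

```javascript
// Sketch: fill missing apiMode fields with defaults so downstream property
// access and comparisons are safe even for null/partial objects.
function normalizeApiMode(apiMode) {
  const mode = apiMode && typeof apiMode === 'object' ? apiMode : {}
  return {
    groupName: mode.groupName ?? '',
    itemName: mode.itemName ?? '',
    isCustom: Boolean(mode.isCustom),
    customName: mode.customName ?? '',
    customUrl: mode.customUrl ?? '',
    apiKey: mode.apiKey ?? '',
  }
}

function isApiModeSelected(a, b) {
  const [x, y] = [normalizeApiMode(a), normalizeApiMode(b)]
  // Compare only the fields that identify a mode, ignoring extras like apiKey.
  return x.groupName === y.groupName && x.itemName === y.itemName && x.customUrl === y.customUrl
}
```

Normalizing both sides means a stored mode with extra or missing keys still matches the freshly built one it logically equals.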
init-session.mjs
Session initialization API mode normalization (`src/services/init-session.mjs`)
- Imports the `normalizeApiMode` utility function
- Applies `normalizeApiMode` during session initialization to ensure API mode objects have consistent structure
custom-api.mjs
Consolidate custom API into unified OpenAI-compatible handler (`src/services/apis/custom-api.mjs`)
- Delegates to the `generateAnswersWithOpenAICompatible` function with custom-endpoint parameters
provider-secret-utils.mjs
Provider secret update utility for API modes (`src/popup/sections/provider-secret-utils.mjs`)
- New `buildProviderSecretUpdate` function for managing provider secrets
- Legacy field support
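The update builder could look like the sketch below: a partial config object that writes the key for one provider while preserving the rest of the map. Whether the real helper also mirrors legacy fields is not stated, so this sketch omits that.

```javascript
// Sketch: build the partial config update that writes an API key into the
// providerSecrets map for the selected provider, keeping other entries intact.
function buildProviderSecretUpdate(config, providerId, apiKey) {
  if (!providerId) return {}
  return {
    providerSecrets: {
      ...(config.providerSecrets ?? {}),
      [providerId]: apiKey,
    },
  }
}
```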
api-modes-provider-utils.mjs
Provider configuration utilities for API modes UI (`src/popup/sections/api-modes-provider-utils.mjs`)
- Helper functions for the API modes UI
- `createProviderId` generates unique normalized provider IDs, avoiding conflicts
- `parseChatCompletionsEndpointUrl` validates and parses chat completions endpoint URLs
- `resolveProviderChatEndpointUrl` resolves the endpoint URL from provider configuration
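Two of these helpers might behave like the sketch below: one insists on a full `/chat/completions` URL (matching the "Please enter a full Chat Completions URL" error string added to the locales), the other slugifies a display name and suffixes it to avoid ID collisions. The exact validation and slug rules are assumptions.

```javascript
// Sketch: validate a full chat-completions URL and split it into parts.
function parseChatCompletionsEndpointUrl(url) {
  let parsed
  try {
    parsed = new URL(url)
  } catch {
    return null // not a URL at all
  }
  if (!parsed.pathname.endsWith('/chat/completions')) return null
  return { origin: parsed.origin, pathname: parsed.pathname }
}

// Sketch: derive a unique normalized provider ID from a display name.
function createProviderId(name, existingIds) {
  const base =
    name.toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/^-+|-+$/g, '') || 'provider'
  let id = base
  for (let n = 2; existingIds.includes(id); n++) id = `${base}-${n}`
  return id
}
```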
wrappers.mjs
API mode normalization in port listener (`src/services/wrappers.mjs`)
- Imports the `normalizeApiMode` function from the model-name-convert utility
- Normalizes `session.apiMode` during port listener registration
migrate-user-config.test.mjs
Configuration migration test suite (`tests/unit/config/migrate-user-config.test.mjs`)
- Migration scenarios including legacy URL/key consolidation, provider ID normalization, and deduplication
- Legacy `customUrl` fields, secret key consolidation, and backward compatibility
- Custom providers with the same URL but different keys, and schema version updates
provider-registry.test.mjs
Provider registry resolution test suite (`tests/unit/services/apis/provider-registry.test.mjs`)
- Endpoint type detection and URL normalization
- Provider matching and duplicate `/v1` path prevention for custom providers
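The `/v1` deduplication these tests describe could behave like the sketch below, inferred from the test descriptions rather than copied from the implementation:

```javascript
// Sketch: append /v1/chat/completions to a base URL without doubling an
// existing /v1 segment. Assumed behavior, for illustration only.
function buildChatCompletionsUrl(baseUrl) {
  const trimmed = baseUrl.replace(/\/+$/, '') // drop trailing slashes
  const withV1 = /\/v1$/.test(trimmed) ? trimmed : `${trimmed}/v1`
  return `${withV1}/chat/completions`
}
```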
provider-secret-utils.test.mjs
Unit tests for provider secret update logic (`tests/unit/popup/provider-secret-utils.test.mjs`)
- Exercises the `buildProviderSecretUpdate` function
- Covers built-in and custom providers
- Covers selected mode keys
- Covers secret updates
api-modes-provider-utils.test.mjs
Unit tests for API modes provider utilities (`tests/unit/popup/api-modes-provider-utils.test.mjs`)
openai-api-compat.test.mjs
Stream completion signaling consistency tests (`tests/unit/services/apis/openai-api-compat.test.mjs`)
- Asserts streams end with a `{ answer: null, done: true, session }` final message instead of a bare `{ done: true }`
- Covers finish-reason handling and incomplete streams
main.json
Japanese localization updates (`src/_locales/ja/main.json`)
- "Custom Provider" and "Provider" UI labels
main.json
Turkish localization updates (`src/_locales/tr/main.json`)
- "Custom Provider" and "Provider" UI labels
main.json
Traditional Chinese localization updates (`src/_locales/zh-hant/main.json`)
- "Custom Provider" and "Provider" UI labels
main.json
Russian localization updates (`src/_locales/ru/main.json`)
- "Custom Provider" and "Provider" UI labels
main.json
Italian localization updates (`src/_locales/it/main.json`)
- "Custom Provider" and "Provider" UI labels
main.json
French localization updates (`src/_locales/fr/main.json`)
- "Custom Provider" and "Provider" UI labels
main.json
German localization updates (`src/_locales/de/main.json`)
- "Custom Provider" and "Provider" UI labels
main.json
Korean localization updates (`src/_locales/ko/main.json`)
- "Custom Provider" and "Provider" UI labels
main.json
Spanish localization updates (`src/_locales/es/main.json`)
- "Custom Provider" and "Provider" UI labels
main.json
Indonesian localization updates (`src/_locales/in/main.json`)
- "Custom Provider" and "Provider" UI labels
main.json
Portuguese localization updates (`src/_locales/pt/main.json`)
- "Custom Provider" and "Provider" UI labels
main.json
Simplified Chinese localization updates (`src/_locales/zh-hans/main.json`)
- "Custom Provider" and "Provider" UI labels
main.json
Localization strings for custom provider UI (`src/_locales/en/main.json`)
- "Custom Provider" for UI display
- "Provider" for provider selection UI
- "Please enter a full Chat Completions URL" for URL validation error message
openai-provider-mappings.mjs
Provider ID and legacy field mapping configuration (`src/config/openai-provider-mappings.mjs`)
- Centralizes provider ID and legacy field mappings
- `LEGACY_API_KEY_FIELD_BY_PROVIDER_ID` for 8 built-in providers; `LEGACY_SECRET_KEY_TO_PROVIDER_ID` for migration
- `OPENAI_COMPATIBLE_GROUP_TO_PROVIDER_ID` for resolving API-mode groups to providers
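One way the first two tables might relate during migration: a forward map from provider ID to its legacy key field, and its inverse used to sweep old config keys into `providerSecrets`. The concrete entries and config field names here are illustrative assumptions; only the constant names come from the PR.

```javascript
// Sketch: forward map (provider -> legacy key field); entries are assumed.
const LEGACY_API_KEY_FIELD_BY_PROVIDER_ID = {
  openai: 'apiKey',
  deepseek: 'deepSeekApiKey',
  moonshot: 'moonshotApiKey',
}

// Inverse map, derived so migration can find the provider for a legacy field.
const LEGACY_SECRET_KEY_TO_PROVIDER_ID = Object.fromEntries(
  Object.entries(LEGACY_API_KEY_FIELD_BY_PROVIDER_ID).map(([id, field]) => [field, id]),
)

// Sketch of the consolidation step: copy legacy keys into providerSecrets
// without overwriting secrets the user has already set in the new map.
function migrateLegacySecrets(config) {
  const providerSecrets = { ...(config.providerSecrets ?? {}) }
  for (const [field, providerId] of Object.entries(LEGACY_SECRET_KEY_TO_PROVIDER_ID)) {
    if (config[field] && !providerSecrets[providerId]) providerSecrets[providerId] = config[field]
  }
  return { ...config, providerSecrets }
}
```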
Summary by CodeRabbit
New Features
Refactor
Chores
Tests