
feat: Update OpenAI OpenAPI schema with streaming and multimodal definitions #259

Merged
HavenDV merged 1 commit into main from bot/update-openapi_202603132123 on Mar 13, 2026

Conversation

@HavenDV
Contributor

@HavenDV HavenDV commented Mar 13, 2026

Summary by CodeRabbit

Release Notes

  • Documentation
    • Updated API specification to clarify streaming response structures, event types, and webhook payloads for enhanced real-time integration capabilities.
    • Expanded schema definitions for tool invocation, audio/video processing, and configuration options.

@HavenDV HavenDV merged commit 5d6b034 into main Mar 13, 2026
3 of 4 checks passed
@HavenDV HavenDV deleted the bot/update-openapi_202603132123 branch March 13, 2026 21:24
@coderabbitai

coderabbitai bot commented Mar 13, 2026

Walkthrough

This PR updates the OpenAI OpenAPI schema file with comprehensive definitions for streaming responses, event types, tool invocations, audio/video handling, webhooks, and configuration options supporting real-time multimodal API interactions.

Changes

OpenAPI Schema Definitions (src/libs/tryAGI.OpenAI/openapi.yaml)
Added extensive schema definitions for streaming response events with discriminator properties, event types (ResponseCreatedEvent, ResponseDoneEvent, ErrorEvent, etc.), tool invocation schemas (FunctionTool, MCPTool, etc.), audio/video input/output structures, webhook event schemas for batch processing and lifecycle management, and configuration options for response formats and truncation strategies.
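The discriminator pattern these event schemas rely on can be sketched in Python. This is an illustrative consumer, not code from the PR: the wire-level type values ("response.created", "response.done", "error") are assumptions mapped from the schema names above, and the payload fields are hypothetical.

```python
import json

# Handlers for each event variant; the discriminator property is assumed
# to be "type" (the spec defines the actual field name and values).
def handle_created(event):
    return f"response {event.get('id', '?')} created"

def handle_done(event):
    return f"response {event.get('id', '?')} done"

def handle_error(event):
    return f"error: {event.get('message', 'unknown')}"

HANDLERS = {
    "response.created": handle_created,
    "response.done": handle_done,
    "error": handle_error,
}

def dispatch(raw: str):
    """Route one streamed event payload to its handler by discriminator."""
    event = json.loads(raw)
    handler = HANDLERS.get(event["type"])
    if handler is None:
        return f"unhandled event type: {event['type']}"
    return handler(event)
```

This mirrors how generated SDK code typically deserializes a discriminated union: read the discriminator first, then pick the concrete schema.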

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes


Poem

🐰 Hopping through schemas with joy and delight,
Events and streams dancing in API light!
Tools, audio, webhooks—oh what a sight,
Real-time multimodal, configured just right,
This OpenAI feast is deliciously bright! ✨

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage; skipping the docstring coverage check.
  • Title Check: ✅ Passed. Title check skipped as CodeRabbit has written the PR title.


@coderabbitai bot changed the title to feat: Update OpenAI OpenAPI schema with streaming and multimodal definitions on Mar 13, 2026

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/libs/tryAGI.OpenAI/openapi.yaml`:
- Around line 263-267: The SSE examples in the SDK snippets are missing the
stream_format flag; update each SDK call to include stream_format: "sse" (or the
language-idiomatic field name) so the generated “SSE Stream Format” docs match
the curl example — e.g. add stream_format: "sse" to client.audio.speech.create
in the Node snippet, to client.audio.speech.create(...) in the Python snippet,
to the openai.AudioSpeechNewParams passed to client.Audio.Speech.New in Go, to
SpeechCreateParams in the Java builder, and to openai.audio.speech.create in the
Ruby snippet (use the SDK’s native param name for stream format in each case).
- Around line 253-267: The OpenAPI changes were not paired with a regenerated
SDK; run the library's generate.sh script to regenerate the Generated/ SDK
output and commit those files so the checked-in SDK matches the updated spec.
Specifically, invoke the generate.sh located in the tryAGI.OpenAI library
directory, verify the Generated/ folder updates, run any tests/build for the
SDK, and commit the updated Generated/ files alongside this OpenAPI change.
- Line 9189: The Node.js examples that call fs.createReadStream (e.g., the
snippet using OpenAI and client.videos.createCharacter and other video-related
examples) are missing an import for the fs module and will throw ReferenceError:
fs is not defined; fix by adding an import/require for fs at the top of each
affected snippet (for example add "import fs from 'fs'" or "const fs =
require('fs')" depending on snippet style) before calling fs.createReadStream so
the file stream is available to client.videos.createCharacter and the other
createReadStream usages.
- Line 9192: The Java examples are using ByteArrayInputStream without the new
keyword (invalid Java); update each occurrence to instantiate with new
ByteArrayInputStream(...) where it's passed (e.g., inside
VideoCreateCharacterParams.builder().video(...)), and apply the same fix in the
other two upload examples that construct ByteArrayInputStream so each example
uses "new ByteArrayInputStream(...)" (check the Main class examples and any
similar upload parameter builders).
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 608fc9b1-92af-45ac-96ed-e11a73344e72

📥 Commits

Reviewing files that changed from the base of the PR and between bb06012 and 686632d.

⛔ Files ignored due to path filters (1)
  • src/libs/tryAGI.OpenAI/Generated/tryAGI.OpenAI.Models.VoiceIdsOrCustomVoice.g.cs is excluded by !**/generated/**
📒 Files selected for processing (1)
  • src/libs/tryAGI.OpenAI/openapi.yaml

Comment on lines +253 to +267
python: "import os\nfrom openai import OpenAI\n\nclient = OpenAI(\n api_key=os.environ.get(\"OPENAI_API_KEY\"), # This is the default and can be omitted\n)\nspeech = client.audio.speech.create(\n input=\"input\",\n model=\"string\",\n voice=\"string\",\n)\nprint(speech)\ncontent = speech.read()\nprint(content)"
javascript: "import fs from \"fs\";\nimport path from \"path\";\nimport OpenAI from \"openai\";\n\nconst openai = new OpenAI();\n\nconst speechFile = path.resolve(\"./speech.mp3\");\n\nasync function main() {\n const mp3 = await openai.audio.speech.create({\n model: \"gpt-4o-mini-tts\",\n voice: \"alloy\",\n input: \"Today is a wonderful day to build something people love!\",\n });\n console.log(speechFile);\n const buffer = Buffer.from(await mp3.arrayBuffer());\n await fs.promises.writeFile(speechFile, buffer);\n}\nmain();\n"
csharp: "using System;\nusing System.IO;\n\nusing OpenAI.Audio;\n\nAudioClient client = new(\n model: \"gpt-4o-mini-tts\",\n apiKey: Environment.GetEnvironmentVariable(\"OPENAI_API_KEY\")\n);\n\nBinaryData speech = client.GenerateSpeech(\n text: \"The quick brown fox jumped over the lazy dog.\",\n voice: GeneratedSpeechVoice.Alloy\n);\n\nusing FileStream stream = File.OpenWrite(\"speech.mp3\");\nspeech.ToStream().CopyTo(stream);\n"
node.js: "import OpenAI from 'openai';\n\nconst client = new OpenAI({\n apiKey: process.env['OPENAI_API_KEY'], // This is the default and can be omitted\n});\n\nconst speech = await client.audio.speech.create({\n input: 'input',\n model: 'string',\n voice: 'ash',\n});\n\nconsole.log(speech);\n\nconst content = await speech.blob();\nconsole.log(content);"
go: "package main\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\n\t\"github.com/openai/openai-go\"\n\t\"github.com/openai/openai-go/option\"\n)\n\nfunc main() {\n\tclient := openai.NewClient(\n\t\toption.WithAPIKey(\"My API Key\"),\n\t)\n\tspeech, err := client.Audio.Speech.New(context.TODO(), openai.AudioSpeechNewParams{\n\t\tInput: \"input\",\n\t\tModel: openai.SpeechModelTTS1,\n\t\tVoice: openai.AudioSpeechNewParamsVoiceAsh,\n\t})\n\tif err != nil {\n\t\tpanic(err.Error())\n\t}\n\tfmt.Printf(\"%+v\\n\", speech)\n}\n"
java: "package com.openai.example;\n\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\nimport com.openai.core.http.HttpResponse;\nimport com.openai.models.audio.speech.SpeechCreateParams;\nimport com.openai.models.audio.speech.SpeechModel;\n\npublic final class Main {\n private Main() {}\n\n public static void main(String[] args) {\n OpenAIClient client = OpenAIOkHttpClient.fromEnv();\n\n SpeechCreateParams params = SpeechCreateParams.builder()\n .input(\"input\")\n .model(SpeechModel.TTS_1)\n .voice(SpeechCreateParams.Voice.ASH)\n .build();\n HttpResponse speech = client.audio().speech().create(params);\n }\n}"
ruby: "require \"openai\"\n\nopenai = OpenAI::Client.new(api_key: \"My API Key\")\n\nspeech = openai.audio.speech.create(input: \"input\", model: :\"tts-1\", voice: :ash)\n\nputs(speech)"
node.js: "import OpenAI from 'openai';\n\nconst client = new OpenAI({\n apiKey: process.env['OPENAI_API_KEY'], // This is the default and can be omitted\n});\n\nconst speech = await client.audio.speech.create({\n input: 'input',\n model: 'string',\n voice: 'string',\n});\n\nconsole.log(speech);\n\nconst content = await speech.blob();\nconsole.log(content);"
go: "package main\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\n\t\"github.com/openai/openai-go\"\n\t\"github.com/openai/openai-go/option\"\n)\n\nfunc main() {\n\tclient := openai.NewClient(\n\t\toption.WithAPIKey(\"My API Key\"),\n\t)\n\tspeech, err := client.Audio.Speech.New(context.TODO(), openai.AudioSpeechNewParams{\n\t\tInput: \"input\",\n\t\tModel: openai.SpeechModelTTS1,\n\t\tVoice: openai.AudioSpeechNewParamsVoiceUnion{\n\t\t\tOfString: openai.String(\"string\"),\n\t\t},\n\t})\n\tif err != nil {\n\t\tpanic(err.Error())\n\t}\n\tfmt.Printf(\"%+v\\n\", speech)\n}\n"
java: "package com.openai.example;\n\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\nimport com.openai.core.http.HttpResponse;\nimport com.openai.models.audio.speech.SpeechCreateParams;\nimport com.openai.models.audio.speech.SpeechModel;\n\npublic final class Main {\n private Main() {}\n\n public static void main(String[] args) {\n OpenAIClient client = OpenAIOkHttpClient.fromEnv();\n\n SpeechCreateParams params = SpeechCreateParams.builder()\n .input(\"input\")\n .model(SpeechModel.TTS_1)\n .voice(\"string\")\n .build();\n HttpResponse speech = client.audio().speech().create(params);\n }\n}"
ruby: "require \"openai\"\n\nopenai = OpenAI::Client.new(api_key: \"My API Key\")\n\nspeech = openai.audio.speech.create(input: \"input\", model: :\"tts-1\", voice: \"string\")\n\nputs(speech)"
- title: SSE Stream Format
request:
curl: "curl https://api.openai.com/v1/audio/speech \\\n -H \"Authorization: Bearer $OPENAI_API_KEY\" \\\n -H \"Content-Type: application/json\" \\\n -d '{\n \"model\": \"gpt-4o-mini-tts\",\n \"input\": \"The quick brown fox jumped over the lazy dog.\",\n \"voice\": \"alloy\",\n \"stream_format\": \"sse\"\n }'\n"
node.js: "import OpenAI from 'openai';\n\nconst client = new OpenAI({\n apiKey: process.env['OPENAI_API_KEY'], // This is the default and can be omitted\n});\n\nconst speech = await client.audio.speech.create({\n input: 'input',\n model: 'string',\n voice: 'ash',\n});\n\nconsole.log(speech);\n\nconst content = await speech.blob();\nconsole.log(content);"
python: "import os\nfrom openai import OpenAI\n\nclient = OpenAI(\n api_key=os.environ.get(\"OPENAI_API_KEY\"), # This is the default and can be omitted\n)\nspeech = client.audio.speech.create(\n input=\"input\",\n model=\"string\",\n voice=\"ash\",\n)\nprint(speech)\ncontent = speech.read()\nprint(content)"
go: "package main\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\n\t\"github.com/openai/openai-go\"\n\t\"github.com/openai/openai-go/option\"\n)\n\nfunc main() {\n\tclient := openai.NewClient(\n\t\toption.WithAPIKey(\"My API Key\"),\n\t)\n\tspeech, err := client.Audio.Speech.New(context.TODO(), openai.AudioSpeechNewParams{\n\t\tInput: \"input\",\n\t\tModel: openai.SpeechModelTTS1,\n\t\tVoice: openai.AudioSpeechNewParamsVoiceAsh,\n\t})\n\tif err != nil {\n\t\tpanic(err.Error())\n\t}\n\tfmt.Printf(\"%+v\\n\", speech)\n}\n"
java: "package com.openai.example;\n\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\nimport com.openai.core.http.HttpResponse;\nimport com.openai.models.audio.speech.SpeechCreateParams;\nimport com.openai.models.audio.speech.SpeechModel;\n\npublic final class Main {\n private Main() {}\n\n public static void main(String[] args) {\n OpenAIClient client = OpenAIOkHttpClient.fromEnv();\n\n SpeechCreateParams params = SpeechCreateParams.builder()\n .input(\"input\")\n .model(SpeechModel.TTS_1)\n .voice(SpeechCreateParams.Voice.ASH)\n .build();\n HttpResponse speech = client.audio().speech().create(params);\n }\n}"
ruby: "require \"openai\"\n\nopenai = OpenAI::Client.new(api_key: \"My API Key\")\n\nspeech = openai.audio.speech.create(input: \"input\", model: :\"tts-1\", voice: :ash)\n\nputs(speech)"
node.js: "import OpenAI from 'openai';\n\nconst client = new OpenAI({\n apiKey: process.env['OPENAI_API_KEY'], // This is the default and can be omitted\n});\n\nconst speech = await client.audio.speech.create({\n input: 'input',\n model: 'string',\n voice: 'string',\n});\n\nconsole.log(speech);\n\nconst content = await speech.blob();\nconsole.log(content);"
python: "import os\nfrom openai import OpenAI\n\nclient = OpenAI(\n api_key=os.environ.get(\"OPENAI_API_KEY\"), # This is the default and can be omitted\n)\nspeech = client.audio.speech.create(\n input=\"input\",\n model=\"string\",\n voice=\"string\",\n)\nprint(speech)\ncontent = speech.read()\nprint(content)"
go: "package main\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\n\t\"github.com/openai/openai-go\"\n\t\"github.com/openai/openai-go/option\"\n)\n\nfunc main() {\n\tclient := openai.NewClient(\n\t\toption.WithAPIKey(\"My API Key\"),\n\t)\n\tspeech, err := client.Audio.Speech.New(context.TODO(), openai.AudioSpeechNewParams{\n\t\tInput: \"input\",\n\t\tModel: openai.SpeechModelTTS1,\n\t\tVoice: openai.AudioSpeechNewParamsVoiceUnion{\n\t\t\tOfString: openai.String(\"string\"),\n\t\t},\n\t})\n\tif err != nil {\n\t\tpanic(err.Error())\n\t}\n\tfmt.Printf(\"%+v\\n\", speech)\n}\n"
java: "package com.openai.example;\n\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\nimport com.openai.core.http.HttpResponse;\nimport com.openai.models.audio.speech.SpeechCreateParams;\nimport com.openai.models.audio.speech.SpeechModel;\n\npublic final class Main {\n private Main() {}\n\n public static void main(String[] args) {\n OpenAIClient client = OpenAIOkHttpClient.fromEnv();\n\n SpeechCreateParams params = SpeechCreateParams.builder()\n .input(\"input\")\n .model(SpeechModel.TTS_1)\n .voice(\"string\")\n .build();\n HttpResponse speech = client.audio().speech().create(params);\n }\n}"
ruby: "require \"openai\"\n\nopenai = OpenAI::Client.new(api_key: \"My API Key\")\n\nspeech = openai.audio.speech.create(input: \"input\", model: :\"tts-1\", voice: \"string\")\n\nputs(speech)"


⚠️ Potential issue | 🟠 Major

Regenerate Generated/ in the same PR.

src/libs/tryAGI.OpenAI/generate.sh consumes this spec directly. If the regenerated src/libs/tryAGI.OpenAI/Generated/ output is not committed with these schema/example changes, the checked-in SDK will drift from the new contract.

Based on learnings: Regenerate the SDK by running ./generate.sh in src/libs/tryAGI.OpenAI/ when the OpenAPI specification changes.

Also applies to: 9189-9193, 9219-9223, 9279-9283, 33397-33403


Comment on lines +263 to +267
node.js: "import OpenAI from 'openai';\n\nconst client = new OpenAI({\n apiKey: process.env['OPENAI_API_KEY'], // This is the default and can be omitted\n});\n\nconst speech = await client.audio.speech.create({\n input: 'input',\n model: 'string',\n voice: 'string',\n});\n\nconsole.log(speech);\n\nconst content = await speech.blob();\nconsole.log(content);"
python: "import os\nfrom openai import OpenAI\n\nclient = OpenAI(\n api_key=os.environ.get(\"OPENAI_API_KEY\"), # This is the default and can be omitted\n)\nspeech = client.audio.speech.create(\n input=\"input\",\n model=\"string\",\n voice=\"string\",\n)\nprint(speech)\ncontent = speech.read()\nprint(content)"
go: "package main\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\n\t\"github.com/openai/openai-go\"\n\t\"github.com/openai/openai-go/option\"\n)\n\nfunc main() {\n\tclient := openai.NewClient(\n\t\toption.WithAPIKey(\"My API Key\"),\n\t)\n\tspeech, err := client.Audio.Speech.New(context.TODO(), openai.AudioSpeechNewParams{\n\t\tInput: \"input\",\n\t\tModel: openai.SpeechModelTTS1,\n\t\tVoice: openai.AudioSpeechNewParamsVoiceUnion{\n\t\t\tOfString: openai.String(\"string\"),\n\t\t},\n\t})\n\tif err != nil {\n\t\tpanic(err.Error())\n\t}\n\tfmt.Printf(\"%+v\\n\", speech)\n}\n"
java: "package com.openai.example;\n\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\nimport com.openai.core.http.HttpResponse;\nimport com.openai.models.audio.speech.SpeechCreateParams;\nimport com.openai.models.audio.speech.SpeechModel;\n\npublic final class Main {\n private Main() {}\n\n public static void main(String[] args) {\n OpenAIClient client = OpenAIOkHttpClient.fromEnv();\n\n SpeechCreateParams params = SpeechCreateParams.builder()\n .input(\"input\")\n .model(SpeechModel.TTS_1)\n .voice(\"string\")\n .build();\n HttpResponse speech = client.audio().speech().create(params);\n }\n}"
ruby: "require \"openai\"\n\nopenai = OpenAI::Client.new(api_key: \"My API Key\")\n\nspeech = openai.audio.speech.create(input: \"input\", model: :\"tts-1\", voice: \"string\")\n\nputs(speech)"


⚠️ Potential issue | 🟠 Major

Add stream_format: "sse" to the SSE examples.

Only the curl tab in this section sends the SSE flag. These added SDK examples still show the default request, so the generated docs under “SSE Stream Format” will be wrong.
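In other words, each SDK snippet needs to produce the same request body as the curl tab. A minimal Python sketch of that body, mirroring the curl payload verbatim (the parameter name each SDK exposes for this flag may differ per language):

```python
import json

# Request body from the curl tab of the "SSE Stream Format" example.
# The SDK snippets in this section omit "stream_format", which is the bug.
payload = {
    "model": "gpt-4o-mini-tts",
    "input": "The quick brown fox jumped over the lazy dog.",
    "voice": "alloy",
    "stream_format": "sse",
}
body = json.dumps(payload)
```

Without that last field the server falls back to its default (non-SSE) response format, so the generated docs would demonstrate the wrong behavior.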


go: "package main\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"fmt\"\n\t\"io\"\n\n\t\"github.com/openai/openai-go\"\n\t\"github.com/openai/openai-go/option\"\n)\n\nfunc main() {\n\tclient := openai.NewClient(\n\t\toption.WithAPIKey(\"My API Key\"),\n\t)\n\tcharacter, err := client.Videos.Character.New(context.TODO(), openai.VideoCharacterNewParams{\n\t\tName: \"x\",\n\t\tVideo: io.Reader(bytes.NewBuffer([]byte(\"Example data\"))),\n\t})\n\tif err != nil {\n\t\tpanic(err.Error())\n\t}\n\tfmt.Printf(\"%+v\\n\", character.ID)\n}\n"
java: "package com.openai.example;\n\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\nimport com.openai.models.videos.character.CharacterCreateParams;\nimport com.openai.models.videos.character.CharacterCreateResponse;\nimport java.io.ByteArrayInputStream;\n\npublic final class Main {\n private Main() {}\n\n public static void main(String[] args) {\n OpenAIClient client = OpenAIOkHttpClient.fromEnv();\n\n CharacterCreateParams params = CharacterCreateParams.builder()\n .name(\"x\")\n .video(ByteArrayInputStream(\"Example data\".getBytes()))\n .build();\n CharacterCreateResponse character = client.videos().character().create(params);\n }\n}"
ruby: "require \"openai\"\n\nopenai = OpenAI::Client.new(api_key: \"My API Key\")\n\ncharacter = openai.videos.character.create(name: \"x\", video: StringIO.new(\"Example data\"))\n\nputs(character)"
node.js: "import OpenAI from 'openai';\n\nconst client = new OpenAI({\n apiKey: process.env['OPENAI_API_KEY'], // This is the default and can be omitted\n});\n\nconst response = await client.videos.createCharacter({\n name: 'x',\n video: fs.createReadStream('path/to/file'),\n});\n\nconsole.log(response.id);"


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Show the Node.js upload snippets and any fs imports in this spec file.
rg -n -C2 "fs\.createReadStream|import fs from 'fs'|import \* as fs from 'fs'|import \{ createReadStream \} from 'fs'" src/libs/tryAGI.OpenAI/openapi.yaml

Repository: tryAGI/OpenAI

Length of output: 39200


Import fs before calling createReadStream(...).

Node.js snippets reference fs.createReadStream(...) without importing fs, causing ReferenceError: fs is not defined at runtime.

Suggested fix
+import fs from 'fs';
 import OpenAI from 'openai';

Affected lines: 9189, 9249, 9279

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
node.js: "import OpenAI from 'openai';\n\nconst client = new OpenAI({\n apiKey: process.env['OPENAI_API_KEY'], // This is the default and can be omitted\n});\n\nconst response = await client.videos.createCharacter({\n name: 'x',\n video: fs.createReadStream('path/to/file'),\n});\n\nconsole.log(response.id);"
import fs from 'fs';
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env['OPENAI_API_KEY'], // This is the default and can be omitted
});

const response = await client.videos.createCharacter({
  name: 'x',
  video: fs.createReadStream('path/to/file'),
});

console.log(response.id);

node.js: "import OpenAI from 'openai';\n\nconst client = new OpenAI({\n apiKey: process.env['OPENAI_API_KEY'], // This is the default and can be omitted\n});\n\nconst response = await client.videos.createCharacter({\n name: 'x',\n video: fs.createReadStream('path/to/file'),\n});\n\nconsole.log(response.id);"
python: "import os\nfrom openai import OpenAI\n\nclient = OpenAI(\n api_key=os.environ.get(\"OPENAI_API_KEY\"), # This is the default and can be omitted\n)\nresponse = client.videos.create_character(\n name=\"x\",\n video=b\"Example data\",\n)\nprint(response.id)"
go: "package main\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"fmt\"\n\t\"io\"\n\n\t\"github.com/openai/openai-go\"\n\t\"github.com/openai/openai-go/option\"\n)\n\nfunc main() {\n\tclient := openai.NewClient(\n\t\toption.WithAPIKey(\"My API Key\"),\n\t)\n\tresponse, err := client.Videos.NewCharacter(context.TODO(), openai.VideoNewCharacterParams{\n\t\tName: \"x\",\n\t\tVideo: io.Reader(bytes.NewBuffer([]byte(\"Example data\"))),\n\t})\n\tif err != nil {\n\t\tpanic(err.Error())\n\t}\n\tfmt.Printf(\"%+v\\n\", response.ID)\n}\n"
java: "package com.openai.example;\n\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\nimport com.openai.models.videos.VideoCreateCharacterParams;\nimport com.openai.models.videos.VideoCreateCharacterResponse;\nimport java.io.ByteArrayInputStream;\n\npublic final class Main {\n private Main() {}\n\n public static void main(String[] args) {\n OpenAIClient client = OpenAIOkHttpClient.fromEnv();\n\n VideoCreateCharacterParams params = VideoCreateCharacterParams.builder()\n .name(\"x\")\n .video(ByteArrayInputStream(\"Example data\".getBytes()))\n .build();\n VideoCreateCharacterResponse response = client.videos().createCharacter(params);\n }\n}"


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Find Java upload examples that call ByteArrayInputStream without `new`.
python - <<'PY'
from pathlib import Path
import re

text = Path("src/libs/tryAGI.OpenAI/openapi.yaml").read_text()
for m in re.finditer(r"\.video\(ByteArrayInputStream\(", text):
    line = text.count("\n", 0, m.start()) + 1
    print(f"Line {line}: .video(ByteArrayInputStream(...))")
PY

Repository: tryAGI/OpenAI

Length of output: 191


🏁 Script executed:

#!/bin/bash
# Extract the relevant lines from the openapi.yaml file
sed -n '9180,9200p' src/libs/tryAGI.OpenAI/openapi.yaml | cat -n
echo "---"
sed -n '9240,9260p' src/libs/tryAGI.OpenAI/openapi.yaml | cat -n
echo "---"
sed -n '9270,9290p' src/libs/tryAGI.OpenAI/openapi.yaml | cat -n

Repository: tryAGI/OpenAI

Length of output: 8195


Fix Java examples to instantiate ByteArrayInputStream with new keyword.

All three Java upload examples are missing the new keyword when instantiating ByteArrayInputStream, resulting in invalid Java syntax. This occurs at lines 9192, 9252, and 9282.

Suggested fixes

Line 9192:

-            .video(ByteArrayInputStream("Example data".getBytes()))
+            .video(new ByteArrayInputStream("Example data".getBytes()))

Line 9252:

-            .video(ByteArrayInputStream("Example data".getBytes()))
+            .video(new ByteArrayInputStream("Example data".getBytes()))

Line 9282:

-            .video(ByteArrayInputStream("Example data".getBytes()))
+            .video(new ByteArrayInputStream("Example data".getBytes()))
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
java: "package com.openai.example;\n\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\nimport com.openai.models.videos.VideoCreateCharacterParams;\nimport com.openai.models.videos.VideoCreateCharacterResponse;\nimport java.io.ByteArrayInputStream;\n\npublic final class Main {\n private Main() {}\n\n public static void main(String[] args) {\n OpenAIClient client = OpenAIOkHttpClient.fromEnv();\n\n VideoCreateCharacterParams params = VideoCreateCharacterParams.builder()\n .name(\"x\")\n .video(ByteArrayInputStream(\"Example data\".getBytes()))\n .build();\n VideoCreateCharacterResponse response = client.videos().createCharacter(params);\n }\n}"
java: "package com.openai.example;\n\nimport com.openai.client.OpenAIClient;\nimport com.openai.client.okhttp.OpenAIOkHttpClient;\nimport com.openai.models.videos.VideoCreateCharacterParams;\nimport com.openai.models.videos.VideoCreateCharacterResponse;\nimport java.io.ByteArrayInputStream;\n\npublic final class Main {\n private Main() {}\n\n public static void main(String[] args) {\n OpenAIClient client = OpenAIOkHttpClient.fromEnv();\n\n VideoCreateCharacterParams params = VideoCreateCharacterParams.builder()\n .name(\"x\")\n .video(new ByteArrayInputStream(\"Example data\".getBytes()))\n .build();\n VideoCreateCharacterResponse response = client.videos().createCharacter(params);\n }\n}"
