A Cloudflare Worker that gives the gpt-oss model on Workers AI the ability to execute Python code using the Cloudflare Sandbox SDK.
- Workers AI Integration: Uses `@cf/openai/gpt-oss-120b` via the `workers-ai-provider` package
- Vercel AI SDK: Leverages `generateText()` and `tool()` for clean function calling
- Sandbox Execution: Python code runs in isolated Cloudflare Sandbox containers
- User sends a prompt to the `/run` endpoint
- GPT-OSS receives the prompt with an `execute_python` tool
- Model decides if Python execution is needed
- Code runs in an isolated Cloudflare Sandbox container
- Results are sent back to the model for final response
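The loop above can be sketched in Worker code. This is a minimal sketch under stated assumptions, not the example's actual source: the `AI` and `Sandbox` binding names, the sandbox file path, and the exact Sandbox SDK calls (`getSandbox`, `writeFile`, `exec`) are assumptions, and `tool()` option names differ between AI SDK versions.

```typescript
// Minimal sketch — binding names (AI, Sandbox) and Sandbox SDK calls are assumptions.
import { generateText, tool } from "ai";
import { createWorkersAI } from "workers-ai-provider";
import { getSandbox } from "@cloudflare/sandbox";
import { z } from "zod";

// The Sandbox Durable Object class must be re-exported for the binding.
export { Sandbox } from "@cloudflare/sandbox";

export default {
  async fetch(request: Request, env: any): Promise<Response> {
    const { input } = (await request.json()) as { input: string };
    const workersai = createWorkersAI({ binding: env.AI });

    const result = await generateText({
      model: workersai("@cf/openai/gpt-oss-120b"),
      prompt: input,
      tools: {
        execute_python: tool({
          description: "Execute Python code and return its stdout",
          parameters: z.object({ code: z.string() }),
          execute: async ({ code }) => {
            // One isolated container per request; writing the code to a file
            // avoids shell-quoting issues with inline `python3 -c`.
            const sandbox = getSandbox(env.Sandbox, crypto.randomUUID());
            await sandbox.writeFile("/tmp/snippet.py", code);
            const proc = await sandbox.exec("python3 /tmp/snippet.py");
            return proc.stdout;
          },
        }),
      },
      maxSteps: 2, // allow one tool round-trip plus the final answer
    });

    return Response.json({ output: result.text });
  },
};
```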
```
POST /run
Content-Type: application/json

{
  "input": "Your prompt here"
}
```

```sh
# Simple calculation
curl -X POST http://localhost:8787/run \
  -H "Content-Type: application/json" \
  -d '{"input": "Calculate 5 factorial using Python"}'

# Execute specific code
curl -X POST http://localhost:8787/run \
  -H "Content-Type: application/json" \
  -d '{"input": "Execute this Python: print(sum(range(1, 101)))"}'

# Complex operations
curl -X POST http://localhost:8787/run \
  -H "Content-Type: application/json" \
  -d '{"input": "Use Python to find all prime numbers under 20"}'
```

- From the project root:
```sh
npm install
npm run build
```

- Run locally:
```sh
cd examples/code-interpreter
npm run dev
```

Note: The first run builds the Docker container (2-3 minutes). Subsequent runs are much faster.
- Deploy:

```sh
npx wrangler deploy
```

Wait for provisioning: After the first deployment, wait 2-3 minutes before making requests.
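Once provisioned, the same requests work against the deployed Worker. The URL below is a placeholder — substitute your Worker's actual `workers.dev` URL (or custom route):

```sh
# Placeholder URL — replace <worker-name> and <your-subdomain> with your own values
curl -X POST https://<worker-name>.<your-subdomain>.workers.dev/run \
  -H "Content-Type: application/json" \
  -d '{"input": "Calculate 5 factorial using Python"}'
```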