
DragonAPI — Access China's Best AI Models

One API key. OpenAI-compatible. No Chinese phone number needed.

DragonAPI is an API gateway that gives global developers instant access to China's top AI models — DeepSeek, Qwen, GLM, and more — through a standard OpenAI-compatible interface. Just change your base_url and you're up and running.

Why? Models like DeepSeek V3 and Qwen 3 rival GPT-4o at a fraction of the cost, but accessing them from outside China requires a Chinese phone number, WeChat Pay, and navigating Chinese-only interfaces. DragonAPI removes all those barriers.


Why Developers Choose DragonAPI

| Feature | DragonAPI | Direct China API | OpenAI |
|---|---|---|---|
| Chinese phone number required | No | Yes | No |
| WeChat/Alipay required | No | Yes | No |
| OpenAI SDK compatible | Yes | No | Yes |
| DeepSeek V3 pricing (per 1M input tokens) | $0.13 | $0.27 | N/A |
| DeepSeek R1 pricing (per 1M input tokens) | $0.28 | $0.55 | N/A |
| GPT-4o-equivalent pricing | ~90% cheaper | N/A | $5.00 |
| Credit card payment | Yes | No | Yes |
| Latency (US endpoint) | ~200ms TTFT | High (from outside China) | ~150ms |

Quick Start

1. Get your API key

Sign up at the DragonAPI Dashboard and create an API key.

2. Use with any OpenAI SDK

Python

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-your-dragonapi-key",
    base_url="http://154.21.86.24:3000/v1"
)

response = client.chat.completions.create(
    model="deepseek-chat",  # DeepSeek V3
    messages=[{"role": "user", "content": "Explain quantum computing in simple terms"}]
)
print(response.choices[0].message.content)
```

JavaScript / TypeScript

```javascript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "sk-your-dragonapi-key",
  baseURL: "http://154.21.86.24:3000/v1",
});

const response = await client.chat.completions.create({
  model: "deepseek-chat",
  messages: [{ role: "user", content: "Write a Python function to sort a list" }],
});
console.log(response.choices[0].message.content);
```

cURL

```bash
curl http://154.21.86.24:3000/v1/chat/completions \
  -H "Authorization: Bearer sk-your-dragonapi-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-chat",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

Supported Models

DeepSeek

| Model | Best For | Input (per 1M tokens) | Output (per 1M tokens) |
|---|---|---|---|
| deepseek-chat | General chat, coding, writing | $0.13 | $0.28 |
| deepseek-coder | Code generation & completion | $0.13 | $0.28 |
| deepseek-reasoner | Complex reasoning (R1) | $0.28 | $0.55 |

Qwen (Alibaba)

| Model | Best For | Input (per 1M tokens) | Output (per 1M tokens) |
|---|---|---|---|
| qwen-turbo | Fast, cheap inference | $0.05 | $0.20 |
| qwen-plus | Balanced quality & cost | $0.13 | $0.78 |

Route Aliases (Smart Routing)

Use simple aliases and we'll route to the best model for the job:

| Route | Maps To | Best For | Input (per 1M tokens) | Output (per 1M tokens) |
|---|---|---|---|---|
| fast | Qwen Turbo | Simple chat, classification | $0.05 | $0.20 |
| balanced | DeepSeek Chat | SaaS chat, summarization | $0.13 | $0.78 |
| coding | Qwen Coder Plus | Code agents, generation | $0.05 | $0.20 |
| reasoning | DeepSeek R1 | Analysis, reasoning | $0.30 | $1.04 |

Additional Models

GLM-4, Yi, SparkDesk, Hunyuan, and more — see the full list on the pricing page.


Supported API Formats

DragonAPI isn't limited to the OpenAI format. We support multiple API formats behind a single base URL:

| Format | Endpoint | Use Case |
|---|---|---|
| OpenAI Chat | POST /v1/chat/completions | Default, most SDKs |
| OpenAI Responses | POST /v1/responses | New OpenAI Responses API |
| Anthropic / Claude | POST /v1/messages | Claude SDK compatibility |
| Google Gemini | POST /v1beta/models/* | Gemini SDK compatibility |
| Embeddings | POST /v1/embeddings | Text embeddings |
| Image Generation | POST /v1/images/generations | DALL-E compatible |
| Audio | POST /v1/audio/speech | TTS & transcription |
| Rerank | POST /v1/rerank | Search reranking |
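
The OpenAI and Anthropic formats differ mainly in required fields. A sketch of equivalent request bodies (field names follow the respective public API specs; `max_tokens` is mandatory in the Anthropic format):

```python
# OpenAI chat format -> POST /v1/chat/completions
openai_body = {
    "model": "deepseek-chat",
    "messages": [{"role": "user", "content": "Hello!"}],
}

# Anthropic format -> POST /v1/messages
# Anthropic requires max_tokens, and the system prompt is a top-level
# field rather than a "system" role message.
anthropic_body = {
    "model": "deepseek-chat",
    "max_tokens": 1024,
    "system": "You are a concise assistant.",
    "messages": [{"role": "user", "content": "Hello!"}],
}
```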

Framework Integration

DragonAPI works out of the box with popular AI frameworks:

LangChain
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="deepseek-chat",
    api_key="sk-your-dragonapi-key",
    base_url="http://154.21.86.24:3000/v1"
)

response = llm.invoke("Summarize the key points of transformer architecture")
print(response.content)
```
LlamaIndex
```python
from llama_index.llms.openai import OpenAI

llm = OpenAI(
    model="deepseek-chat",
    api_key="sk-your-dragonapi-key",
    api_base="http://154.21.86.24:3000/v1"
)

response = llm.complete("Explain RAG in 3 sentences")
print(response)
```
Vercel AI SDK
```typescript
import { createOpenAI } from "@ai-sdk/openai";
import { generateText } from "ai";

const dragonapi = createOpenAI({
  apiKey: "sk-your-dragonapi-key",
  baseURL: "http://154.21.86.24:3000/v1",
});

const { text } = await generateText({
  model: dragonapi("deepseek-chat"),
  prompt: "Write a haiku about programming",
});
```
AutoGen / CrewAI
```python
# Works with any framework that accepts an OpenAI-compatible config
config = {
    "model": "deepseek-chat",
    "api_key": "sk-your-dragonapi-key",
    "base_url": "http://154.21.86.24:3000/v1"
}
```

Cost Comparison

How much can you save by switching from OpenAI?

| Use Case | OpenAI (GPT-4o) | DragonAPI (DeepSeek V3) | Savings |
|---|---|---|---|
| 1M input tokens | $5.00 | $0.13 | 97% |
| 1M output tokens | $15.00 | $0.28 | 98% |
| 10k API calls/day (avg 500 tokens each) | ~$75/day | ~$2/day | ~$73/day |
| Monthly (moderate SaaS) | ~$2,250 | ~$60 | ~$2,190/mo |
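
The exact figures depend on your traffic mix, and the arithmetic is easy to redo for your own workload. A sketch (the token counts per call are assumptions; prices are USD per 1M tokens, taken from the tables above):

```python
def monthly_cost(calls_per_day: int, in_tok: int, out_tok: int,
                 in_price: float, out_price: float, days: int = 30) -> float:
    """USD per month; in_price/out_price are USD per 1M tokens."""
    per_call = (in_tok * in_price + out_tok * out_price) / 1_000_000
    return calls_per_day * per_call * days

# Assumed workload: 10k calls/day, 400 input + 100 output tokens per call.
gpt4o    = monthly_cost(10_000, 400, 100, 5.00, 15.00)   # ≈ $1,050/mo
deepseek = monthly_cost(10_000, 400, 100, 0.13, 0.28)    # ≈ $24/mo
print(f"savings: {1 - deepseek / gpt4o:.0%}")
```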

Features

- OpenAI SDK drop-in replacement — Change one line of code (base_url) and everything works
- Pay-as-you-go — No monthly fees, no commitments, no minimum spend
- Credit card payments — No WeChat Pay or Alipay required
- US & CN endpoints — Low latency wherever your servers are
- Streaming support — Full SSE streaming for real-time responses
- Multi-format — OpenAI, Claude, Gemini formats all supported
- Smart routing — Use aliases like fast, balanced, coding, reasoning
- Image & video generation — DALL-E compatible image gen, plus Sora/Kling video gen
- Audio — TTS and speech-to-text support

API Reference

Full OpenAPI documentation is available at the API Docs.

Authentication

All requests require a Bearer token:

Authorization: Bearer sk-your-dragonapi-key

Base URLs

| Endpoint | URL | Best For |
|---|---|---|
| US (recommended) | http://154.21.86.24:3000/v1 | Users in Americas/Europe |
| CN | http://8.130.88.75:3000/v1 | Users in Asia-Pacific |
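
Both base URLs speak the same protocol: JSON over a Bearer-authenticated POST, so any HTTP client works. A sketch with Python's standard library (uncomment `urlopen` to actually send the request):

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str,
                       model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat completion request."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("http://154.21.86.24:3000/v1",
                         "sk-your-dragonapi-key", "deepseek-chat", "Hello!")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```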

FAQ

**Is this legal?** Yes. We relay API requests to Chinese AI providers. The models themselves are publicly available commercial APIs; we simply handle the payment and access infrastructure so you don't need Chinese credentials.

**What about data privacy?** Your API requests are relayed to the upstream model providers (DeepSeek, Alibaba Cloud, etc.). We do not store your prompts or responses; the data-handling policies of the upstream providers apply.

**How is latency?** The US endpoint is hosted in the United States. Typical TTFT (time to first token) is ~200ms for DeepSeek models. Streaming is fully supported.

**Can I use this in production?** Yes. Many developers run DragonAPI in production SaaS applications. We recommend the route aliases (fast, balanced, coding, reasoning) for automatic fallback.

**What if a model goes down?** Smart-routing aliases automatically fail over to the best available model in that category.

Get Started

  1. Sign up at DragonAPI Dashboard
  2. Create an API key in the dashboard
  3. Replace your base_url — that's it, you're done
```python
# Before (OpenAI — $5/1M input tokens)
client = OpenAI(api_key="sk-openai-key")

# After (DragonAPI — $0.13/1M input tokens)
client = OpenAI(
    api_key="sk-dragonapi-key",
    base_url="http://154.21.86.24:3000/v1"
)
```

License

MIT


Built for developers who want the best AI models at the best prices.
Dashboard · Pricing · API Docs
