git-msg supports four LLM providers. All use stdlib net/http — no vendor
SDKs are bundled.
## Anthropic (Claude)

API reference: docs.anthropic.com
- Create an account at console.anthropic.com
- Generate an API key under API Keys
- Run the setup wizard:

  ```
  rm ~/.config/mdstn/git-msg/config.toml
  git-msg generate --dry-run
  # Select: Anthropic (Claude)
  # Enter your key when prompted
  ```

Or configure manually:
```
git-msg config set provider.name anthropic
git-msg config set provider.model claude-haiku-4-5
# Key is stored interactively via wizard, or:
export GIT_MSG_ANTHROPIC_API_KEY=sk-ant-...
```

| Model | Speed | Quality | Notes |
|---|---|---|---|
| claude-haiku-4-5 | Fast | Good | Default — best cost/speed ratio |
| claude-sonnet-4-5 | Medium | Great | Better for complex diffs |
| claude-opus-4-5 | Slow | Best | Large context, detailed commits |
API endpoint: `https://api.anthropic.com/v1/messages`
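Under the hood the call is plain stdlib net/http. The sketch below is not git-msg's actual source; `buildAnthropicRequest` is a hypothetical helper showing the request shape the Messages API expects, with the key sent in an `x-api-key` header alongside the required `anthropic-version` header:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// buildAnthropicRequest is an illustrative helper (not part of git-msg):
// it assembles a Messages API call using only the stdlib.
func buildAnthropicRequest(apiKey, model, prompt string) (*http.Request, error) {
	body, err := json.Marshal(map[string]any{
		"model":      model,
		"max_tokens": 256,
		"messages": []map[string]string{
			{"role": "user", "content": prompt},
		},
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest("POST", "https://api.anthropic.com/v1/messages", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("x-api-key", apiKey)               // key travels in a header
	req.Header.Set("anthropic-version", "2023-06-01") // required version header
	req.Header.Set("content-type", "application/json")
	return req, nil
}

func main() {
	req, _ := buildAnthropicRequest("sk-ant-example", "claude-haiku-4-5", "Summarize this diff")
	fmt.Println(req.Method, req.URL) // POST https://api.anthropic.com/v1/messages
}
```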
## OpenAI

API reference: platform.openai.com/docs
- Create an account at platform.openai.com
- Generate an API key under API Keys
- Configure:

  ```
  git-msg config set provider.name openai
  git-msg config set provider.model gpt-4o-mini
  export GIT_MSG_OPENAI_API_KEY=sk-...
  ```

| Model | Speed | Quality | Notes |
|---|---|---|---|
| gpt-4o-mini | Fast | Good | Default — cheap and capable |
| gpt-4o | Medium | Great | Better reasoning |
| o1-mini | Slow | Best | Highest quality, higher cost |
API endpoint: `https://api.openai.com/v1/chat/completions`
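For OpenAI the key goes in an `Authorization: Bearer` header, and the reply nests the generated text under `choices[0].message.content`. A minimal decoding sketch, assuming a trimmed sample body (the `chatResponse` struct and `extractMessage` helper are illustrative, not git-msg's code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// chatResponse mirrors only the fields a caller needs from a Chat
// Completions reply; the JSON field names follow OpenAI's docs.
type chatResponse struct {
	Choices []struct {
		Message struct {
			Content string `json:"content"`
		} `json:"message"`
	} `json:"choices"`
}

// extractMessage pulls the assistant text out of a raw response body
// (illustrative helper, not part of git-msg).
func extractMessage(raw []byte) (string, error) {
	var resp chatResponse
	if err := json.Unmarshal(raw, &resp); err != nil {
		return "", err
	}
	if len(resp.Choices) == 0 {
		return "", fmt.Errorf("empty choices")
	}
	return resp.Choices[0].Message.Content, nil
}

func main() {
	// Trimmed sample body; a real reply also carries id, usage, etc.
	raw := []byte(`{"choices":[{"message":{"role":"assistant","content":"fix: handle nil config"}}]}`)
	msg, err := extractMessage(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(msg) // fix: handle nil config
}
```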
## Google Gemini

API reference: ai.google.dev
- Go to aistudio.google.com and create an API key
- Configure:

  ```
  git-msg config set provider.name gemini
  git-msg config set provider.model gemini-1.5-flash
  export GIT_MSG_GEMINI_API_KEY=AIza...
  ```

| Model | Speed | Quality | Notes |
|---|---|---|---|
| gemini-1.5-flash | Fast | Good | Default — generous free tier |
| gemini-1.5-pro | Medium | Great | Larger context window |
| gemini-2.0-flash | Fast | Great | Latest Flash generation |
API endpoint: `https://generativelanguage.googleapis.com/v1beta/models/<model>:generateContent`
The API key is passed as a query parameter (?key=...), not a header.
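Because the key rides in the URL rather than a header, assembling the endpoint is a one-liner. A sketch (the `geminiURL` helper is hypothetical, not git-msg's code):

```go
package main

import (
	"fmt"
	"net/url"
)

// geminiURL attaches the key as a ?key= query parameter; no auth header
// is involved. Illustrative helper only.
func geminiURL(model, apiKey string) string {
	return fmt.Sprintf(
		"https://generativelanguage.googleapis.com/v1beta/models/%s:generateContent?key=%s",
		model, url.QueryEscape(apiKey), // escape the key in case of reserved characters
	)
}

func main() {
	fmt.Println(geminiURL("gemini-1.5-flash", "AIza-example"))
}
```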
## Ollama

Documentation: ollama.ai
Ollama runs models locally on your machine — no API key required, no data leaves your network.
- Install Ollama from ollama.ai/download
- Pull a model:

  ```
  ollama pull llama3
  ollama pull mistral
  ollama pull codellama
  ```

- Configure git-msg:

  ```
  git-msg config set provider.name ollama
  git-msg config set provider.model llama3
  ```

Or run the setup wizard — it queries `ollama list` automatically and shows a picker of your installed models.
If Ollama is running on a different machine or port:

```
git-msg config set ollama.host http://192.168.1.50:11434
```

| Model | Size | Notes |
|---|---|---|
| llama3 | 4.7GB | Good all-rounder |
| mistral | 4.1GB | Fast, concise output |
| codellama | 3.8GB | Code-focused |
| phi4 | 9.1GB | High quality, larger |
API endpoint: `http://localhost:11434/api/chat`
No Authorization header is sent.
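A round trip against `/api/chat` can be sketched with the stdlib alone. Here an `httptest` server stands in for the local daemon so the example runs anywhere; the `chat` helper and the canned reply are illustrative, not git-msg's code:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// fakeOllama is a stand-in for the local daemon so the sketch runs
// without one installed. Like real Ollama, it expects no auth header.
func fakeOllama() *httptest.Server {
	return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		json.NewEncoder(w).Encode(map[string]any{
			"message": map[string]string{"role": "assistant", "content": "docs: update provider table"},
		})
	}))
}

// chat posts one prompt to <host>/api/chat and returns the reply text.
func chat(host, model, prompt string) (string, error) {
	body, _ := json.Marshal(map[string]any{
		"model":    model,
		"stream":   false, // ask for a single JSON reply, not a stream
		"messages": []map[string]string{{"role": "user", "content": prompt}},
	})
	resp, err := http.Post(host+"/api/chat", "application/json", bytes.NewReader(body))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var out struct {
		Message struct {
			Content string `json:"content"`
		} `json:"message"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	return out.Message.Content, nil
}

func main() {
	srv := fakeOllama()
	defer srv.Close()
	msg, err := chat(srv.URL, "llama3", "Write a commit message")
	if err != nil {
		panic(err)
	}
	fmt.Println(msg) // docs: update provider table
}
```

Setting `stream` to false matters: by default Ollama streams newline-delimited JSON chunks, which a single `json.Decoder` call would not consume cleanly.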
## Switching providers

For a single run, use --provider:

```
git-msg generate --provider ollama
git-msg generate --provider anthropic
```

To change the default permanently:

```
git-msg config set provider.name gemini
git-msg config set provider.model gemini-1.5-flash
```

All providers use a 30-second HTTP timeout. If the LLM is slow to respond (large diff, slow network), the request will fail with a timeout error. Reduce the diff size by staging fewer files, or switch to a faster model.