An AI coding agent that runs entirely on your machine.
This is Claude Code for local LLMs. OpenJet handles the model, the runtime, and the setup, with no complex configuration to wrangle by hand. You get a coding agent in your terminal that reads your files, edits your code, runs commands, and stays out of the cloud.
```sh
git clone https://github.com/l-forster/open-jet.git
cd open-jet
./install.sh
open-jet --setup
```

That's it. Setup detects your hardware, picks a model that fits your RAM, downloads it, and gets everything running. Already have a `.gguf`? It finds that too.
Then just:

```sh
open-jet
```

An agent in your terminal that can actually do things:
- Read and edit your code — search files, apply edits, write new ones
- Run shell commands — with explicit approval before anything executes
- Resume sessions — close the terminal, come back later, pick up where you left off
- Work on constrained hardware — automatic context condensing, model unload/reload around heavy tasks
- Device access — cameras, microphones, GPIO for edge and embedded work
- Python SDK — automate the same agent from scripts
Cloud coding agents need API keys, send your code to someone else's server, and cost money per token. Local chat tools give you a chat window but not an agent — no file access, no shell, no session recovery.
OpenJet closes that gap. It's built for local models on real hardware, where memory is tight, context windows are short, and sessions get interrupted. Everything runs on your machine; nothing leaves it.
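One way a session can survive a closed terminal is to persist the conversation transcript to disk and reload it on the next launch. A minimal sketch of that idea — the function names and on-disk format here are assumptions, not OpenJet's actual session format:

```python
import json
from pathlib import Path


def save_session(path: Path, messages: list[dict]) -> None:
    """Write the transcript to disk so a later process can resume it."""
    path.write_text(json.dumps(messages))


def load_session(path: Path) -> list[dict]:
    """Reload a saved transcript, or start fresh if none exists."""
    if path.exists():
        return json.loads(path.read_text())
    return []
```

Because the transcript lives in a local file rather than on a remote server, resuming works even fully offline, which is the property the paragraph above is getting at.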
- Quickstart
- Installation
- Configuration
- Runtime: llama.cpp
- Python SDK
- Usage: CLI
- Usage: Slash commands
- Usage: Device sources
- Usage: Workflow harness
- Usage: Session state and logging
- Examples
- Deployment: Jetson
- Deployment: Linux x86 + NVIDIA
- Deployment: CPU-only
AGPL-3.0-only, with commercial licensing available under separate terms.