---
layout: default
title: Devika Tutorial
nav_order: 190
has_children: true
format_version: v2
---
Learn how to deploy and operate `stitionai/devika`, a multi-agent autonomous coding system that plans, researches, writes, and debugs code end-to-end.
Devika represents one of the most complete open-source implementations of an autonomous software engineering agent, combining multi-agent coordination, live web research, browser automation, and polyglot code generation in a single self-hosted stack. As teams evaluate autonomous coding systems for internal use, understanding how Devika's agent pipeline is structured, how it coordinates specialized roles, and how to govern it safely becomes a critical engineering competency. This track takes you from first install to production-grade team deployment, covering every architectural layer in depth.
This track focuses on:
- deploying and configuring Devika with any major LLM provider, including Claude 3, GPT-4, Gemini, Mistral, Groq, and Ollama
- understanding the multi-agent pipeline: planner, researcher, coder, action, and internal monologue agents
- operating browser automation and web research capabilities safely and effectively
- governing autonomous code generation at team scale with cost controls and audit discipline
- repository: `stitionai/devika` (about 19.5k stars)
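Provider credentials live in Devika's `config.toml`. The fragment below is indicative only; section and key names vary by version, so check `sample.config.toml` in the repository for the authoritative layout:

```toml
[API_KEYS]
# Fill in only the providers you plan to use; a local Ollama server needs no key.
OPENAI  = "sk-..."
CLAUDE  = "sk-ant-..."
GEMINI  = "..."
MISTRAL = "..."
GROQ    = "..."

[API_ENDPOINTS]
# Point Devika at a locally hosted model server (default Ollama port shown).
OLLAMA = "http://127.0.0.1:11434"
```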
```mermaid
flowchart LR
    A[User Task Prompt] --> B[Planner Agent]
    B --> C[Researcher Agent]
    C --> D[Browser Automation / Playwright]
    D --> E[Coder Agent]
    E --> F[Action Agent]
    F --> G[Internal Monologue / Self-Reflection]
    G -->|next step| B
    G --> H[Workspace Output + Git]
```
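The loop above can be sketched in plain Python. This is a deliberately simplified, hypothetical model of the plan → research → code → act → reflect cycle; every function and field name here is illustrative and does not match Devika's real classes or API:

```python
# Illustrative sketch of a Devika-style agent loop; not Devika's actual code.

def planner(state):
    # Planner Agent: decompose the user prompt into ordered steps
    return ["scaffold project", "implement feature", "write tests"]

def researcher(state):
    # Researcher Agent: gather external context (web search in real Devika)
    return {"query": state["prompt"], "findings": "relevant docs snippet"}

def coder(state):
    # Coder Agent: produce file contents from the plan and research
    return {"main.py": "print('hello from devika')"}

def action(state):
    # Action Agent: apply changes to the workspace (no-op in this sketch)
    pass

def reflect(state):
    # Internal monologue: decide whether the task is complete
    return bool(state["code"])

def run_task(prompt, max_steps=5):
    state = {"prompt": prompt, "plan": None, "research": [], "code": {}, "done": False}
    for _ in range(max_steps):
        state["plan"] = planner(state)
        state["research"].append(researcher(state))
        state["code"] = coder(state)
        action(state)
        state["done"] = reflect(state)
        if state["done"]:
            break  # otherwise control loops back to the Planner, as in the diagram
    return state
```

The key design point the sketch captures is that reflection gates iteration: the pipeline only re-enters the Planner while the monologue agent judges the task incomplete.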
| Chapter | Key Question | Outcome |
|---|---|---|
| 01 - Getting Started | How do I install Devika and run a first task? | Working baseline |
| 02 - Architecture and Agent Pipeline | How do Devika's specialized agents coordinate? | Architecture clarity |
| 03 - LLM Provider Configuration | How do I connect Claude, GPT-4, Gemini, Ollama, and others? | Provider flexibility |
| 04 - Task Planning and Code Generation | How does Devika decompose tasks and generate code? | Reliable code output |
| 05 - Web Research and Browser Integration | How does Devika research the web with Playwright? | Research agent control |
| 06 - Project Management and Workspaces | How do I manage projects, files, and git integration? | Workspace discipline |
| 07 - Debugging and Troubleshooting | How do I diagnose failures in the agent pipeline? | Operational resilience |
| 08 - Production Operations and Governance | How do teams deploy Devika safely at scale? | Governance runbook |
By the end of this track, you will understand:
- how to configure and run Devika across multiple LLM providers for different cost and capability tradeoffs
- how to reason about multi-agent coordination, context flow, and internal monologue loops
- how to operate browser automation and research pipelines responsibly
- how to govern autonomous code generation workflows in team environments with audit and rollback controls
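Cost control and audit discipline can be enforced by placing a budget guard in front of every LLM call. The class below is a hypothetical wrapper for illustration; Devika does not ship this exact API:

```python
# Hypothetical per-task budget guard with an audit trail; illustrative only.

class BudgetGuard:
    """Reject further LLM calls once a per-task token budget is exhausted."""

    def __init__(self, max_tokens):
        self.max_tokens = max_tokens
        self.used = 0
        self.log = []  # audit trail: (agent_name, tokens) per charged call

    def charge(self, agent, tokens):
        # Refuse the call up front rather than discovering overspend afterwards.
        if self.used + tokens > self.max_tokens:
            raise RuntimeError(
                f"budget exceeded: {self.used + tokens} > {self.max_tokens}"
            )
        self.used += tokens
        self.log.append((agent, tokens))

# Usage: one guard per task, charged by each agent before it calls the LLM.
guard = BudgetGuard(max_tokens=10_000)
guard.charge("planner", 1_200)
guard.charge("coder", 4_500)
```

Because every charge is logged with the agent that incurred it, the same object doubles as a per-task audit record for post-hoc review.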
Related tutorials:
- OpenHands Tutorial — multi-agent AI software engineering OS
- SWE-agent Tutorial — SWE-bench autonomous software engineering agent
- Mini SWE-agent Tutorial — lightweight autonomous coding agent core
- Aider Tutorial — AI pair programming in the terminal
- Sweep Tutorial — issue-to-PR autonomous coding agent
- BabyAGI Tutorial — foundational autonomous task-driven agent patterns
- Start Here: Chapter 1: Getting Started
- Back to Main Catalog
- Browse A-Z Tutorial Directory
- Search by Intent
- Explore Category Hubs
- Chapter 1: Getting Started
- Chapter 2: Architecture and Agent Pipeline
- Chapter 3: LLM Provider Configuration
- Chapter 4: Task Planning and Code Generation
- Chapter 5: Web Research and Browser Integration
- Chapter 6: Project Management and Workspaces
- Chapter 7: Debugging and Troubleshooting
- Chapter 8: Production Operations and Governance
Generated by AI Codebase Knowledge Builder