A web-based chat interface that bridges Ollama LLMs with a Linux MCP (Model Context Protocol) server, enabling AI models to execute shell commands on remote Linux systems through a secure API.
Note: This README was generated using an LLM.
- 🤖 Chat interface for interacting with Ollama models
- 🔧 Tool calling support for executing Linux commands
- 💬 Real-time streaming responses with thinking indicators
- 🎨 Modern, gradient-based dark UI
- 📝 Markdown rendering for AI responses
- 🔄 Automatic command execution and result polling
The bridge consists of three main components:
- Express Server (`server.js`) - Handles API requests and orchestrates communication between Ollama and the Linux MCP server
- React UI (`ui-src/App.jsx`) - Provides the chat interface
- Linux MCP Server (GitHub repo) - Executes shell commands on whichever host it is installed on
- Node.js (v18 or higher recommended)
- Ollama server running with your preferred model
- Linux MCP server running and accessible
- API key for the Linux MCP server
- Clone the repository:

```bash
git clone <your-repo-url>
cd mcp-bridge
```

- Install dependencies:

```bash
npm install
```

- Create a `.env` file based on `.env.local`:

```bash
cp .env.local .env
```

- Configure your environment variables in `.env`:
```env
OLLAMA_API=http://localhost:11434
OLLAMA_MODEL_NAME=gpt-oss:20b
LINUX_HTTP=http://127.0.0.1:5379
LINUX_API_KEY=your-api-key-here
PORT=3000
```

| Variable | Description | Default |
|---|---|---|
| `OLLAMA_API` | URL of your Ollama server | `http://localhost:11434` |
| `OLLAMA_MODEL_NAME` | Name of the Ollama model to use | `gpt-oss:20b` |
| `LINUX_HTTP` | URL of the Linux MCP server | `http://127.0.0.1:5379` |
| `LINUX_API_KEY` | API key for Linux MCP authentication | (required) |
| `PORT` | Port for the bridge server | `3000` |
- Build the UI:

```bash
npm run build:ui
```

- Start the server:

```bash
npm start
```

- Open your browser to `http://localhost:3000`
Run the UI builder in watch mode for live reloading:

```bash
npm run watch:ui
```

Then, in another terminal:

```bash
npm start
```

- User enters a prompt in the chat interface
- The prompt is sent to the Ollama API with available tools (shell command execution)
- If the model decides to use a tool:
  - The command is enqueued on the Linux MCP server
  - The bridge polls for the command result
  - The result is sent back to Ollama
- Ollama generates a final response based on the command output
- The response is streamed back to the UI in real-time (see the sketch after this list)
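In code, the loop might look roughly like the sketch below. The Ollama `/api/chat` request/response shapes are real; the Linux MCP endpoints (`/enqueue`, `/result/:id`), the `X-API-Key` header, and the `run_shell_command` tool name are illustrative assumptions, not the documented API - the actual logic lives in `server.js`:

```js
// Hedged sketch of the bridge's tool-calling loop (Node 18+, global fetch).
const OLLAMA_API = process.env.OLLAMA_API ?? "http://localhost:11434";
const LINUX_HTTP = process.env.LINUX_HTTP ?? "http://127.0.0.1:5379";

// Tool schema advertised to Ollama; the name and shape are illustrative.
const tools = [{
  type: "function",
  function: {
    name: "run_shell_command",
    description: "Execute a shell command on the remote Linux host",
    parameters: {
      type: "object",
      properties: { command: { type: "string" } },
      required: ["command"],
    },
  },
}];

async function chat(messages) {
  const res = await fetch(`${OLLAMA_API}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: process.env.OLLAMA_MODEL_NAME,
      messages,
      tools,
      stream: false,
    }),
  });
  return (await res.json()).message;
}

async function runCommand(command) {
  // Enqueue the command, then poll until the MCP server reports a result.
  // The /enqueue and /result/:id routes are hypothetical placeholders.
  const { id } = await (await fetch(`${LINUX_HTTP}/enqueue`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-API-Key": process.env.LINUX_API_KEY,
    },
    body: JSON.stringify({ command }),
  })).json();
  for (;;) {
    const r = await (await fetch(`${LINUX_HTTP}/result/${id}`, {
      headers: { "X-API-Key": process.env.LINUX_API_KEY },
    })).json();
    if (r.done) return r.output;
    await new Promise((ok) => setTimeout(ok, 500)); // poll every 500 ms
  }
}

async function answer(prompt) {
  const messages = [{ role: "user", content: prompt }];
  let msg = await chat(messages);
  while (msg.tool_calls?.length) {
    messages.push(msg); // keep the assistant's tool-call turn in the history
    for (const call of msg.tool_calls) {
      // Ollama returns tool arguments as an object, per the schema above.
      const output = await runCommand(call.function.arguments.command);
      messages.push({ role: "tool", content: output });
    }
    msg = await chat(messages); // let the model react to the command output
  }
  return msg.content;
}
```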
Non-streaming endpoint for chat completion.
Request:

```json
{
  "prompt": "Delete every temp/thumbnail file stored in the home folder (~/)"
}
```
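A minimal client call, assuming this route is mounted at `/api/chat` (an assumption; check `server.js` for the actual path):

```js
// Hypothetical call to the non-streaming endpoint.
const res = await fetch("http://localhost:3000/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ prompt: "List files in the current directory" }),
});
console.log(await res.json()); // full response once the model finishes
```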
Streaming endpoint with real-time response updates.

Request:

```json
{
  "prompt": "List files in the current directory"
}
```
- Chat Area - Displays conversation history with user and assistant messages
- Thinking Indicator - Shows when the model is processing (with spinner)
- Input Bar - Text area for entering prompts
- Toolbar - Clear and download JSON buttons (UI placeholders)
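The actual markup lives in `ui-src/App.jsx`; the sketch below only illustrates how these pieces might compose (component names and props are hypothetical):

```jsx
import { useState } from "react";

// Illustrative only - the real components and state are in ui-src/App.jsx.
function Chat({ messages, thinking, onSend }) {
  const [draft, setDraft] = useState("");
  return (
    <div className="chat">
      <div className="chat-area">
        {messages.map((m, i) => (
          <div key={i} className={`bubble ${m.role}`}>{m.content}</div>
        ))}
        {thinking && <div className="spinner">Thinking…</div>}
      </div>
      <textarea
        value={draft}
        onChange={(e) => setDraft(e.target.value)}
        placeholder="Enter a prompt"
      />
      <button onClick={() => { onSend(draft); setDraft(""); }}>Send</button>
    </div>
  );
}
```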
The UI uses a custom dark theme defined in `ui-src/styles.css` with:

- Gradient backgrounds
- Teal accent color (`#2dd4bf`)
- Responsive design for mobile devices
- Chat bubble layout
Always set `LINUX_API_KEY` in production environments. The server will warn you if it's not set.
- ESLint - Configured in `eslint.config.mjs`
- Prettier - Code formatting configured in `.prettierrc`
- esbuild - Fast bundling for the React UI
Set the `LINUX_API_KEY` environment variable in your `.env` file.
- Verify Ollama is running:

```bash
curl http://localhost:11434/api/tags
```

- Check that the model name in `.env` matches an installed model
- Ensure the Linux MCP server is accessible at the configured URL
- Verify the API key is correct
- Check Linux MCP server logs