Python : Integration of Telegram Bot with Local LLMs on Ollama #48
Description
Implement a private LLM chatbot interface on a Raspberry Pi 5 using Python (aiogram) and Ollama. This project aims to bridge the Telegram Bot API with a local inference engine to enable secure, private, and messenger-based AI interactions.
Architecture
The system follows a sequential request-response flow to optimize resource usage on the Raspberry Pi 5 hardware.
- Telegram User -> Raspberry Pi (aiogram Server) -> Ollama API (Local)
- Response Flow: Streamed chunks from Ollama -> Throttled Message Updates (1.5s interval) -> User
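The throttled-update step of the response flow can be sketched as a small helper that decides, per streamed chunk, whether enough time has passed to issue another message edit. This is a minimal sketch; the `EditThrottler` name and the injectable clock are assumptions for illustration, not part of the issue:

```python
import time


class EditThrottler:
    """Decides whether a streamed chunk should trigger a message edit.

    Telegram rate-limits message edits, so we allow at most one edit per
    `interval` seconds; the caller always flushes the final full text once
    streaming ends.
    """

    def __init__(self, interval: float = 1.5, clock=time.monotonic):
        self.interval = interval
        self.clock = clock  # injectable so the logic can be tested without sleeping
        self.last_edit = float("-inf")

    def should_edit(self) -> bool:
        now = self.clock()
        if now - self.last_edit >= self.interval:
            self.last_edit = now
            return True
        return False
```

In the handler, accumulated chunks would be pushed through `should_edit()` and only edits that pass the gate hit the Telegram API.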
Technical Specifications & Constraints
- Hardware: Raspberry Pi 5 (16GB RAM)
- LLM Engine: Ollama (running locally at http://localhost:11434)
- Models: Lightweight models under 10B parameters (e.g., Llama 3.2 3B, Phi-3, or Gemma-2 2B)
- Concurrency Control: Since the Pi 5 has limited resources for inference, the bot must handle requests sequentially. An `asyncio.Lock` is required to prevent multiple simultaneous inferences.
- Streaming UI: Real-time message updates via `edit_text` with a throttled interval (approx. 1.5s) to comply with Telegram's rate limits.
Tasks
- Set up the `aiogram` 3.x project structure
- Implement an asynchronous Ollama API client (streaming enabled)
- Integrate `asyncio.Lock` for sequential request handling
- Implement throttled message editing logic (1.5s interval)
- Test with Llama 3.2 3B or similar lightweight models
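For the streaming-client task, Ollama's `/api/generate` endpoint streams newline-delimited JSON objects of the form `{"response": "...", "done": false}`, with `"done": true` on the final object. The chunk-handling logic can be sketched independently of the HTTP layer (which would normally be an aiohttp streaming request, elided here):

```python
import json
from typing import Iterable, Iterator


def iter_response_chunks(lines: Iterable[bytes]) -> Iterator[str]:
    """Yield text fragments from Ollama's NDJSON streaming response.

    `lines` is the line-by-line body of a streaming POST to
    http://localhost:11434/api/generate; iteration stops at the
    final object marked "done": true.
    """
    for raw in lines:
        if not raw.strip():
            continue  # skip keep-alive blank lines
        chunk = json.loads(raw)
        if chunk.get("response"):
            yield chunk["response"]
        if chunk.get("done"):
            break
```

In the bot, each yielded fragment would be appended to the accumulated reply and passed through the 1.5s throttle before calling `edit_text`.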