
Python : Integration of Telegram Bot with Local LLMs on Ollama #48

@kimpro82

Description


Implement a private LLM chatbot interface on a Raspberry Pi 5 using Python (aiogram) and Ollama. This project aims to bridge the Telegram Bot API with a local inference engine to enable secure, private, and messenger-based AI interactions.

Architecture

The system follows a sequential request-response flow to optimize resource usage on the Raspberry Pi 5 hardware.

  • Telegram User -> Raspberry Pi (aiogram Server) -> Ollama API (Local)
  • Response Flow: Streamed chunks from Ollama -> Throttled Message Updates (1.5s interval) -> User
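Ollama's `/api/generate` endpoint streams newline-delimited JSON objects, each carrying a `response` text fragment and a `done` flag. A minimal sketch of accumulating such a stream into one reply (the sample chunks are illustrative, not real model output):

```python
import json

def accumulate_stream(lines):
    """Join the `response` fragments from Ollama's NDJSON stream
    into one reply string, stopping at the chunk marked `done`."""
    parts = []
    for line in lines:
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(parts)

# Illustrative chunks in the shape /api/generate emits with "stream": true
sample = [
    '{"response": "Hello", "done": false}',
    '{"response": ", world!", "done": true}',
]
print(accumulate_stream(sample))  # Hello, world!
```

In the real bot the lines would arrive incrementally from an async HTTP client, so the partial text can be pushed to the throttled message-update step as it grows.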

Technical Specifications & Constraints

  • Hardware: Raspberry Pi 5 (16GB RAM)
  • LLM Engine: Ollama (Running locally at http://localhost:11434)
  • Models: Lightweight models under 10B parameters (e.g., Llama 3.2 3B, Phi-3, or Gemma-2 2B)
  • Concurrency Control: since the Pi 5 has limited resources for inference, the bot must handle requests sequentially.
    • An asyncio.Lock is required to prevent multiple simultaneous inferences.
  • Streaming UI: Real-time message updates via edit_text with a throttled interval (approx. 1.5s) to comply with Telegram's Rate Limits.

Tasks

  • Set up the aiogram 3.x project structure
  • Implement asynchronous Ollama API client (streaming enabled)
  • Integrate asyncio.Lock for sequential request handling
  • Implement throttled message editing logic (1.5s interval)
  • Test with Llama 3.2 3B or similar lightweight models
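The throttled-editing task above can be sketched with a small helper that decides, from a monotonic clock, whether enough time has passed to call edit_text again; the class and the injectable clock are illustrative, and the 1.5 s default mirrors the interval chosen for this project:

```python
import time

class EditThrottler:
    """Allow a message edit at most once per `interval` seconds."""

    def __init__(self, interval: float = 1.5, clock=time.monotonic):
        self.interval = interval
        self.clock = clock              # injectable so tests can fake time
        self.last_edit = float("-inf")  # permit the very first edit

    def should_edit(self) -> bool:
        now = self.clock()
        if now - self.last_edit >= self.interval:
            self.last_edit = now
            return True
        return False

# Fake clock advancing 0.5 s per call: only every third tick passes
ticks = iter(i * 0.5 for i in range(8))
throttle = EditThrottler(interval=1.5, clock=lambda: next(ticks))
print([throttle.should_edit() for _ in range(8)])
# [True, False, False, True, False, False, True, False]
```

In the handler, each incoming Ollama chunk would append to a buffer and call edit_text only when `should_edit()` returns True, with one final unconditional edit once the stream reports it is done.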
