
Antonio Sabbatella

MSc Data Science (110L/110) | AI Research Engineer

Specializing in Bayesian Optimization, Multi-Agent Systems, and Efficient LLM Architectures.
Bridging the gap between theoretical research and production-grade engineering.


πŸ”¬ Research & Engineering Focus

I focus on reducing computational overhead and automating complex reasoning in Generative AI systems. My work spans from architectural optimization to high-level agentic orchestration.

  • Deep Learning & Systems: Designing scalable architectures for DeepSeek Sparse Attention, RAG pipelines, and custom execution environments for agentic workflows.
  • Bayesian Optimization: Automating prompt engineering and multi-agent team composition (MALBO, BOInG) using Multi-Objective strategies.
  • Efficient NLP: Exploring Context Compression frameworks (84% token reduction) and fine-tuning strategies to optimize inference costs on constrained budgets.

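The cost-versus-performance trade-off that MALBO searches over reduces to Pareto dominance: an agent-team configuration survives only if no other configuration is both cheaper and more accurate. A minimal, self-contained illustration of that filter (plain Python; `pareto_front` and the sample team evaluations are hypothetical, not the MALBO implementation):

```python
def pareto_front(points):
    """Return the (cost, error) points not dominated by any other point.

    A point is dominated if some other point is at least as good on both
    objectives and strictly better on at least one (both minimized here).
    """
    front = []
    for i, (cost, err) in enumerate(points):
        dominated = any(
            c2 <= cost and e2 <= err and (c2 < cost or e2 < err)
            for j, (c2, e2) in enumerate(points)
            if j != i
        )
        if not dominated:
            front.append((cost, err))
    return front

# Hypothetical agent-team evaluations: (API cost, task error rate).
teams = [(1.0, 0.50), (2.0, 0.30), (3.0, 0.40), (4.0, 0.10), (5.0, 0.20)]
print(pareto_front(teams))  # only the non-dominated trade-offs remain
```

In practice a multi-objective Bayesian optimizer (e.g. BoTorch's qEHVI-style acquisitions) proposes which configurations to evaluate; the dominance check above is just what "Pareto-efficient" means once the evaluations are in.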
πŸ›  Tech Stack

Core: Python, PyTorch, CUDA

LLM & Agents: HuggingFace, LangChain, BoTorch

Infrastructure: Docker, GCP, KNIME, Git


πŸ† Featured Work

| Project | Domain | Impact |
| --- | --- | --- |
| LUDUS | Deep Learning / Kernels | Implementation of DeepSeek Sparse Attention for Qwen models. Reduces attention complexity to $O(N \cdot K)$. Optimized for consumer hardware (T4). |
| MALBO | Multi-Agent / Bayesian Opt | Pareto-efficient multi-agent optimization: finds optimal trade-offs between cost and performance for agent teams using Multi-Objective Bayesian Optimization. Features a custom fork of smolagents for heterogeneous LLM swapping. |
| StudyWithWisp | Full Stack AI / SaaS | AI platform for student prep (flashcards/simulations). Built with Next.js & Python on GCP. Scalable RAG pipeline with Prisma. (Private repo) |
| UiNav | Computer Vision / Agents | Autonomous UI interaction system combining a fine-tuned YOLO model with LLMs for natural-language-driven browser automation. |
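
The $O(N \cdot K)$ idea behind LUDUS can be shown with a top-k attention toy: each query attends only to its $K$ highest-scoring keys. The NumPy sketch below is my own illustration, not the LUDUS kernel; it is a dense reference that computes all scores and then masks, whereas a real $O(N \cdot K)$ kernel gathers only the selected keys instead of materializing the $N \times N$ matrix:

```python
import numpy as np

def topk_sparse_attention(q, k, v, top_k):
    """Single-head attention where each query keeps only its top_k keys."""
    scores = q @ k.T / np.sqrt(q.shape[-1])            # (N, N) scaled scores
    # Indices of the top_k highest-scoring keys for each query.
    idx = np.argpartition(scores, -top_k, axis=-1)[:, -top_k:]
    # Mask everything else to -inf so softmax ignores it.
    masked = np.full_like(scores, -np.inf)
    np.put_along_axis(masked, idx, np.take_along_axis(scores, idx, axis=-1), axis=-1)
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over kept keys only
    return weights @ v

rng = np.random.default_rng(0)
q = rng.normal(size=(8, 4))
k = rng.normal(size=(8, 4))
v = rng.normal(size=(8, 4))
out = topk_sparse_attention(q, k, v, top_k=2)          # (8, 4) output
```

A useful sanity check on the sketch: with `top_k` equal to the sequence length, nothing is masked and the result matches full softmax attention exactly.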

πŸ“„ Selected Publications

  • [High Impact] *Prompt optimization in large language models* (A. Sabbatella, A. Ponti, I. Giordani, A. Candelieri, F. Archetti, 2024)
    Mathematics 12 (6), 929
    Seminal work on Bayesian strategies for prompt engineering. Cited 50+ times.

  • *MALBO: Optimizing LLM-Based Multi-Agent Teams via Multi-Objective Bayesian Optimization* (Antonio Sabbatella, 2025)
    arXiv preprint arXiv:2511.11788
    Framework for identifying Pareto-efficient agent teams, achieving a 65.8% cost reduction vs. baselines.

  • *Bayesian Optimization for Instruction Generation* (A. Sabbatella et al., 2024)
    Applied Sciences 14 (24), 11865
    Introduces the BOInG framework, reducing GPU memory requirements by two orders of magnitude.

  • *Bayesian Optimization Using Simulation-Based Multiple Information Sources* (A. Sabbatella et al., 2024)
    Machine Learning and Knowledge Extraction 6 (4)
    Advanced combinatorial optimization using multi-source information fusion.

Pinned Repositories

  1. LLM-Multi-Agent-Optimization-Framework

    Official implementation of MALBO (arXiv:2511.11788). Optimizes Multi-Agent Systems via Multi-Objective Bayesian Optimization to find the Pareto front between Cost & Performance.

    Jupyter Notebook

  2. uinav

    UiNav combines YOLO and an LLM to detect and select the most relevant UI element in a web screenshot based on a given task, enabling smarter automation and UI analysis.

    Jupyter Notebook

  3. Black-Box-Prompt-Learning

    Forked from shizhediao/Black-Box-Prompt-Learning

    Code for "Prompt optimization in large language models" (Mathematics 2024). Bayesian strategies for Black-Box Prompt Learning. [50+ Citations]

    Python

  4. nlp_llm_context_cost_optimization

    Exploring Context Compression techniques for token reduction. Fine-tuning LLMs for efficient text compression and reduced inference costs, analyzing the trade-offs with Q&A accuracy.

    Jupyter Notebook

  5. LLAMBO

    Forked from tennisonliu/LLAMBO

    Reference implementation for "Large Language Models to Enhance Bayesian Optimization". Research baseline for BOInG.

    Python

  6. smolagents_agents_optimization

    Forked from huggingface/smolagents

    πŸ€— smolagents: a barebones library for agents that think in code.

    Python