Open Source · Self-Hosted · v1.0

Autonomous Agents. Your Infrastructure.

Helium Bees is an open-source autonomous AI agent framework that runs entirely on your hardware. Connect any LLM, automate complex workflows, control browsers, and deploy across every messaging platform — all without surrendering your data.

helium-bees — bash
$ git clone https://github.com/helium-bees/core
$ cd core && pip install -r requirements.txt
✓ Installing helium-bees core dependencies...
✓ Loaded 8 skill modules · Vector store initialized
$ python agent.py --provider openrouter --channel telegram
🐝 Helium Bees agent started on port 8080
⚡ Connected to Telegram · Memory loaded · Skills active
→ Awaiting instructions...
Supports OpenRouter Groq Gemini Alibaba + more
4
LLM Providers
4
Messaging Channels
100%
Self-Hosted

Everything Helium Bees needs.
Nothing it doesn't.

A complete toolkit for building autonomous AI agents that actually work in production — on your terms, on your hardware.

🧠
Multi-LLM Flexibility
Switch between providers without changing your code. Helium Bees supports OpenRouter, Groq, Gemini, and Alibaba out of the box with a unified API abstraction layer.
OpenRouter Groq Gemini Alibaba
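The real abstraction layer isn't shown here, so this is only a minimal sketch of the idea: agent code depends on a small protocol, and each vendor implements it. All class and model names below are illustrative, not the actual Helium Bees API.

```python
from dataclasses import dataclass
from typing import Protocol


class LLMProvider(Protocol):
    """Anything that can turn a prompt into a completion."""
    def complete(self, prompt: str) -> str: ...


@dataclass
class GroqProvider:
    model: str = "llama-3.1-70b-versatile"

    def complete(self, prompt: str) -> str:
        # A real implementation would call the Groq HTTP API here.
        return f"[groq:{self.model}] {prompt}"


@dataclass
class OpenRouterProvider:
    model: str = "meta-llama/llama-3.1-70b-instruct"

    def complete(self, prompt: str) -> str:
        # A real implementation would call the OpenRouter HTTP API here.
        return f"[openrouter:{self.model}] {prompt}"


def run_agent(provider: LLMProvider, prompt: str) -> str:
    # The agent depends only on the protocol, never on the vendor.
    return provider.complete(prompt)
```

Swapping providers then becomes a one-argument change: `run_agent(GroqProvider(), "hi")` versus `run_agent(OpenRouterProvider(), "hi")`, with no edits to the agent logic itself.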
💬
Omnichannel Messaging
Deploy Helium Bees across Telegram, Discord, Slack, and WhatsApp simultaneously. One agent, every platform, unified conversation context across all channels.
Telegram Discord Slack WhatsApp
🗄️
Memory & Skills System
Persistent vector memory lets Helium Bees remember context across sessions. Modular skills extend capabilities — load only what you need, keep your footprint lean.
Vector DB Long-term Memory Skill Modules
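A production deployment would use a real embedding model and vector database; the dependency-free sketch below just shows the retrieval idea, with bag-of-words counts standing in for embeddings. Names and the toy `embed` function are illustrative.

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Stand-in embedding: bag-of-words counts. A real deployment would
    # use a sentence-embedding model and a proper vector store.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class Memory:
    """Store texts, recall the k most similar to a query."""

    def __init__(self) -> None:
        self.entries: list[tuple[str, Counter]] = []

    def store(self, text: str) -> None:
        self.entries.append((text, embed(text)))

    def recall(self, query: str, k: int = 1) -> list[str]:
        q = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[1]), reverse=True)
        return [text for text, _ in ranked[:k]]
```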
Cron Jobs & Automation
Schedule tasks with cron expressions. Helium Bees wakes up, executes workflows, sends reports, and goes back to sleep — fully autonomous, zero babysitting required.
Cron Scheduler Workflows Event Triggers
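How the built-in scheduler parses expressions isn't documented here, so here is a minimal stand-alone matcher for the classic five-field cron format, supporting `*`, numbers, `*/n` steps, and comma lists. It is a sketch, not the Helium Bees scheduler.

```python
from datetime import datetime


def field_matches(field: str, value: int) -> bool:
    """Match one cron field ('*', '*/15', '0,30', '9') against a value."""
    for part in field.split(","):
        if part == "*":
            return True
        if part.startswith("*/") and value % int(part[2:]) == 0:
            return True
        if part.isdigit() and int(part) == value:
            return True
    return False


def cron_due(expr: str, now: datetime) -> bool:
    """Five-field cron: minute hour day-of-month month day-of-week."""
    minute, hour, dom, month, dow = expr.split()
    return (
        field_matches(minute, now.minute)
        and field_matches(hour, now.hour)
        and field_matches(dom, now.day)
        and field_matches(month, now.month)
        and field_matches(dow, now.isoweekday() % 7)  # 0 = Sunday
    )
```

A scheduler loop would call `cron_due("0 9 * * *", datetime.now())` once a minute and fire the workflow when it returns true.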
🌐
Browser Control
Full Playwright-powered browser automation built into Helium Bees. Navigate, click, fill forms, extract data, take screenshots — your agent sees and interacts with the web.
Playwright Web Scraping Form Automation
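For a feel of what the skill wraps, here is a stand-alone Playwright sketch (requires `pip install playwright` and `playwright install chromium`). The URL and CSS selector you pass are your own; nothing here is a Helium Bees default.

```python
def clean_title(raw: str) -> str:
    # Normalize whitespace in a scraped heading.
    return " ".join(raw.split())


def fetch_titles(url: str, selector: str, limit: int = 10) -> list[str]:
    """Open a page headlessly and pull text from matching elements."""
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url)
        titles = [clean_title(el.inner_text())
                  for el in page.query_selector_all(selector)[:limit]]
        browser.close()
    return titles
```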
💰
Cost Tracking
Real-time token usage and cost monitoring per provider, per conversation, per task. Set budgets, get alerts, and optimize your LLM spend intelligently with Helium Bees.
Token Counting Budget Alerts Usage Reports
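The actual dashboard isn't reproduced here; this sketch shows the bookkeeping behind it: record tokens per provider, price them, compare against a budget. The prices in the usage example are placeholders, not real rates.

```python
from collections import defaultdict
from dataclasses import dataclass, field


@dataclass
class CostTracker:
    """Track token usage and spend per provider."""
    price_per_1k: dict[str, float]  # USD per 1k tokens, by provider
    budget_usd: float = 1.00
    tokens: dict[str, int] = field(default_factory=lambda: defaultdict(int))

    def record(self, provider: str, n_tokens: int) -> None:
        self.tokens[provider] += n_tokens

    def spend(self) -> float:
        return sum(self.price_per_1k[p] * t / 1000 for p, t in self.tokens.items())

    def over_budget(self) -> bool:
        return self.spend() > self.budget_usd
```

An agent loop would call `record()` after every completion and pause or alert once `over_budget()` flips to true.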
Code Execution Engine
Sandboxed Python and JavaScript execution built into Helium Bees. Your agent can write code, run it, inspect the output, and iterate — all in a secure isolated environment. Build data pipelines, generate reports, automate complex multi-step tasks with real computation power.
Python Sandbox JS Runtime File I/O Package Install Secure Isolation
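As a rough illustration of the execute-inspect-iterate loop, here is a subprocess-based runner. Note that a subprocess with a timeout is only a crude stand-in for real isolation; an actual sandbox adds containers, resource limits, and filesystem restrictions.

```python
import subprocess
import sys


def run_python(code: str, timeout: float = 5.0) -> tuple[str, str, int]:
    """Run code in a separate interpreter and capture its output.

    `-I` puts the child in isolated mode (no user site-packages, no
    environment-derived paths); the timeout kills runaway scripts.
    """
    proc = subprocess.run(
        [sys.executable, "-I", "-c", code],
        capture_output=True, text=True, timeout=timeout,
    )
    return proc.stdout, proc.stderr, proc.returncode


out, err, rc = run_python("print(sum(range(10)))")
```

The agent can then read `out`/`err`, decide whether `rc` signalled success, and rewrite the code for another attempt.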
🔒
100% Self-Hosted
Your data never leaves your infrastructure. Deploy Helium Bees on any Linux server, Docker container, or Raspberry Pi. Full control, zero vendor lock-in, forever.
Docker On-Premise Air-Gapped

Built for the real world.

Helium Bees uses a layered architecture that separates concerns cleanly — swap any component without breaking the rest.

01
Configure Your Stack
Choose your LLM provider, set API keys, pick your messaging channels. A single YAML config file controls everything in Helium Bees.
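The real schema isn't documented in this section, so every key below is illustrative; it only shows the shape such a single-file config could take.

```yaml
# config.yaml — illustrative sketch, not the actual Helium Bees schema
provider:
  name: openrouter
  model: meta-llama/llama-3.1-70b-instruct
  api_key: ${OPENROUTER_API_KEY}
channels:
  - telegram
  - discord
skills:
  - browser
  - code_execution
memory:
  vector_store: ./data/vectors
```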
02
Load Skills & Memory
Enable the skill modules you need — browser, code execution, web search. Helium Bees initializes memory from your vector store automatically.
03
Connect Your Channels
Webhook handlers spin up for each platform. Helium Bees goes live on Telegram, Discord, and Slack simultaneously within seconds.
04
Automate & Scale
Set cron schedules, define workflows, monitor costs. Helium Bees runs 24/7, learns from interactions, and gets smarter over time.
// Helium Bees — System Architecture
🌐 Channel Layer
Telegram Discord Slack
🧠 Helium Bees Core
Planner Router Memory
⚡ Skills Engine
Browser Code Search
🔌 LLM Providers
OpenRouter Groq Gemini

Meet your users
where they are.

Deploy once, reach everywhere. Helium Bees maintains unified conversation context across all platforms — your agent always remembers who it's talking to.
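One way to picture unified context is a single history per user, keyed off channel-specific identities. How Helium Bees actually links identities isn't specified here, so in this sketch the mapping is passed in explicitly and all names are hypothetical.

```python
from collections import defaultdict
from dataclasses import dataclass, field


@dataclass
class ConversationStore:
    """One conversation history per user, shared across channels."""
    # (channel, channel_user_id) -> canonical user name
    identities: dict[tuple[str, str], str]
    history: dict[str, list[str]] = field(default_factory=lambda: defaultdict(list))

    def add(self, channel: str, channel_user_id: str, text: str) -> None:
        user = self.identities[(channel, channel_user_id)]
        self.history[user].append(f"[{channel}] {text}")

    def context(self, channel: str, channel_user_id: str) -> list[str]:
        # The same history comes back no matter which channel asks.
        return self.history[self.identities[(channel, channel_user_id)]]
```

With both a Telegram ID and a Discord handle mapped to the same user, a message sent on one platform shows up in the context retrieved from the other.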

✈️
Telegram
Bot API with inline keyboards, file sharing, groups
Live
🎮
Discord
Slash commands, embeds, server-wide deployment
Live
💼
Slack
Workspace bots, channel monitoring, app actions
Live
📱
WhatsApp
Business API integration, media messages
Beta
// Live conversation — Helium Bees
👤
Hey, can you scrape the top 10 AI papers from arxiv today and summarize them?
09:41 · Telegram
🐝
On it! Launching browser and navigating to arxiv.org/cs.AI...
09:41 · Helium Bees
🐝
✅ Found 10 papers. Summarizing with Groq (llama-3.1-70b)... Cost so far: $0.0012
09:42 · Helium Bees
👤
Also post the summary to our #research Discord channel
09:42 · Discord
🐝
Done! Helium Bees posted to #research. Total cost: $0.0031 🎯
09:43 · Helium Bees

Pick your model.
Control your costs.

Helium Bees tracks every token, every cent. Switch providers mid-conversation or route different tasks to different models automatically.
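Automatic routing can be as simple as a lookup table from task type to (provider, model) with a fallback. The table below is illustrative; the real routing rules live in your config.

```python
# Map task types to a (provider, model) pair; entries are illustrative.
ROUTES: dict[str, tuple[str, str]] = {
    "chat":      ("groq", "llama-3.1-8b-instant"),
    "summarize": ("groq", "llama-3.1-70b-versatile"),
    "vision":    ("gemini", "gemini-1.5-pro"),
}
DEFAULT = ("openrouter", "meta-llama/llama-3.1-70b-instruct")


def route(task: str) -> tuple[str, str]:
    """Pick a (provider, model) for a task, falling back to the default."""
    return ROUTES.get(task, DEFAULT)
```

Fast, cheap models handle chat; multimodal requests go to Gemini; anything unrecognized falls back to the default provider.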

Groq
Ultra-Fast
Inference at 500+ tokens/sec
  • LLaMA 3.1 70B / 8B
  • Mixtral 8x7B
  • Gemma 2 9B
  • Real-time streaming
💎
Gemini + Alibaba
Multimodal
Vision, audio, long context
  • Gemini 1.5 Pro (1M ctx)
  • Qwen 2.5 series
  • Image & video input
  • Cost-optimized routing
// Helium Bees — Cost Dashboard · Today
Auto-refreshes every 60s
TOTAL SPEND
$0.47
TOKENS USED
1.2M
REQUESTS
847
AVG LATENCY
312ms

Deploy Helium Bees
in under 5 minutes.

Clone the repo, set your API keys, and your autonomous Helium Bees agent is live. No cloud accounts, no subscriptions, no data leaving your server — ever.

Open source under MIT License · View on GitHub · Join Discord