---
title: HuggingClaw
emoji: 🦞
colorFrom: blue
colorTo: purple
sdk: docker
app_port: 7860
pinned: true
---
# 🦞 HuggingClaw

Run your own always-on AI assistant on HuggingFace Spaces – for free.

Works with any LLM (Anthropic, OpenAI, Google), connects via Telegram, and persists your workspace to HF Datasets automatically.
## ✨ Features

- Zero-config – just add 3 secrets and deploy
- Any LLM provider – Claude, GPT-4, Gemini, etc.
- Built-in keep-alive – self-pings to prevent HF sleep (no external cron needed)
- Auto-sync workspace – commits + pushes changes every 10 min
- Auto-create backup – creates the HF Dataset for you if it doesn't exist
- Graceful shutdown – saves the workspace before the container dies
- Multi-user Telegram – supports comma-separated user IDs for teams
- Health endpoint – `/health` for monitoring
- Version pinning – lock OpenClaw to a specific version
- 100% HF-native – runs entirely on HuggingFace infrastructure
## 🚀 Quick Start

### 1. Duplicate this Space

Click the button above → name it → set it to Private.

### 2. Add Required Secrets

Go to Settings → Secrets:
| Secret | Value |
|---|---|
| `LLM_API_KEY` | Your API key (Anthropic / OpenAI / Google) |
| `LLM_MODEL` | Model to use (e.g. `google/gemini-2.5-flash`, `anthropic/claude-sonnet-4-5`, `openai/gpt-4`) |
| `GATEWAY_TOKEN` | Run `openssl rand -hex 32` to generate |
### 3. Deploy

That's it! The Space builds and starts automatically.

### 4. (Optional) Add Telegram
| Secret | Value |
|---|---|
| `TELEGRAM_BOT_TOKEN` | From @BotFather |
| `TELEGRAM_USER_ID` | Your user ID (how to find) |
### 5. (Optional) Enable Workspace Backup

| Secret | Value |
|---|---|
| `HF_USERNAME` | Your HuggingFace username |
| `HF_TOKEN` | HF token with write access |

The backup dataset (`huggingclaw-backup`) is created automatically – no manual setup needed.
## 📋 All Configuration Options

See `.env.example` for the complete reference with examples.

### Required

| Variable | Purpose |
|---|---|
| `LLM_API_KEY` | LLM provider API key |
| `LLM_MODEL` | Model to use (e.g. `google/gemini-2.5-flash`, `anthropic/claude-sonnet-4-5`, `openai/gpt-4`) – auto-detects the provider from the prefix |
| `GATEWAY_TOKEN` | Gateway auth token |
### Telegram

| Variable | Purpose |
|---|---|
| `TELEGRAM_BOT_TOKEN` | Bot token from @BotFather |
| `TELEGRAM_USER_ID` | Single-user allowlist |
| `TELEGRAM_USER_IDS` | Multiple users (comma-separated): `123,456,789` |
### Workspace Backup

| Variable | Default | Purpose |
|---|---|---|
| `HF_USERNAME` | – | Your HF username |
| `HF_TOKEN` | – | HF token (write access) |
| `BACKUP_DATASET_NAME` | `huggingclaw-backup` | Dataset name (auto-created!) |
| `WORKSPACE_GIT_USER` | `openclaw@example.com` | Git commit email |
| `WORKSPACE_GIT_NAME` | `OpenClaw Bot` | Git commit name |
### Background Services

| Variable | Default | Purpose |
|---|---|---|
| `KEEP_ALIVE_INTERVAL` | `300` (5 min) | Self-ping interval in seconds; `0` disables |
| `SYNC_INTERVAL` | `600` (10 min) | Auto-sync interval in seconds |
### Advanced

| Variable | Default | Purpose |
|---|---|---|
| `OPENCLAW_VERSION` | `latest` | Pin OpenClaw to a specific version |
| `HEALTH_PORT` | `7861` | Health endpoint port |
## 🤖 LLM Provider Setup

Just set `LLM_MODEL` with the correct provider prefix – any provider is supported! The provider is auto-detected from the model name.
### Anthropic (Claude)

```env
LLM_API_KEY=sk-ant-v0-...
LLM_MODEL=anthropic/claude-haiku-4-5
```

Models: `anthropic/claude-opus-4-6` · `anthropic/claude-sonnet-4-5` · `anthropic/claude-haiku-4-5`
### OpenAI

```env
LLM_API_KEY=sk-...
LLM_MODEL=openai/gpt-4
```

Models: `openai/gpt-4-turbo` · `openai/gpt-4` · `openai/gpt-3.5-turbo`
### Google (Gemini)

```env
LLM_API_KEY=AIzaSy...
LLM_MODEL=google/gemini-2.5-flash
```

Models: `google/gemini-2.5-flash` · `google/gemini-2.0-flash` · `google/gemini-1.5-pro`
### Zhipu (ChatGLM) / ZAI

```env
LLM_API_KEY=your_zhipu_api_key
LLM_MODEL=zhipu/glm-4-plus
```

Models: `zhipu/glm-4-plus` · `zhipu/glm-4` · `zai/glm-4` (alias)

Get key from: Zhipu Platform
### Moonshot (Kimi)

```env
LLM_API_KEY=sk-...
LLM_MODEL=moonshot/moonshot-v1-128k
```

Models: `moonshot/moonshot-v1-128k` · `moonshot/moonshot-v1-32k`

Get key from: Moonshot API
### MiniMax

```env
LLM_API_KEY=your_minimax_api_key
LLM_MODEL=minimax/minimax-01
```

Models: `minimax/minimax-01` · `minimax/minimax-text-01`

Get key from: MiniMax Platform
### Mistral

```env
LLM_API_KEY=your_mistral_api_key
LLM_MODEL=mistral/mistral-large
```

Models: `mistral/mistral-large` · `mistral/mistral-medium` · `mistral/mistral-small`

Get key from: Mistral Console
### Cohere

```env
LLM_API_KEY=your_cohere_api_key
LLM_MODEL=cohere/command-r
```

Models: `cohere/command-r` · `cohere/command-r-plus`

Get key from: Cohere Dashboard
### Groq

```env
LLM_API_KEY=your_groq_api_key
LLM_MODEL=groq/mixtral-8x7b-32768
```

Models: `groq/mixtral-8x7b-32768` · `groq/llama2-70b-4096`

Get key from: Groq Console
### Qwen

```env
LLM_API_KEY=your_qwen_api_key
LLM_MODEL=qwen/qwen3.6-plus-preview
```

Models: `qwen/qwen3.6-plus-preview` (free!) · `qwen/qwen3.5-35b-a3b` · `qwen/qwen3.5-9b`

Get key from: Qwen API
### xAI (Grok)

```env
LLM_API_KEY=your_xai_api_key
LLM_MODEL=x-ai/grok-4.20-beta
```

Models: `x-ai/grok-4.20-beta` · `x-ai/grok-4.20-multi-agent-beta`

Get key from: xAI Console
### NVIDIA (Nemotron)

```env
LLM_API_KEY=your_nvidia_api_key
LLM_MODEL=nvidia/nemotron-3-super-120b-a12b
```

Models: `nvidia/nemotron-3-super-120b-a12b` (free)

Get key from: NVIDIA API
### Xiaomi (MiMo)

```env
LLM_API_KEY=your_xiaomi_api_key
LLM_MODEL=xiaomi/mimo-v2-pro
```

Models: `xiaomi/mimo-v2-pro` (1M context) · `xiaomi/mimo-v2-omni` (multimodal)

Get key from: Xiaomi
### ByteDance (Seed)

```env
LLM_API_KEY=your_bytedance_api_key
LLM_MODEL=bytedance-seed/seed-2.0-lite
```

Models: `bytedance-seed/seed-2.0-lite` · `bytedance-seed/seed-2.0-mini`

Get key from: ByteDance
### Z.ai (GLM)

```env
LLM_API_KEY=your_zai_api_key
LLM_MODEL=z-ai/glm-5-turbo
```

Models: `z-ai/glm-5-turbo`

Get key from: Z.ai
### OpenRouter (100+ models via a single API)

```env
LLM_API_KEY=your_openrouter_api_key
LLM_MODEL=openrouter/google/lyria-3-pro-preview
```

Popular models via OpenRouter:

- `openrouter/openai/gpt-5.4-pro` – Latest OpenAI (1M context)
- `openrouter/openai/gpt-5.4-mini` – Fast, efficient OpenAI
- `openrouter/google/gemini-3.1-flash-lite-preview` – Google's latest
- `openrouter/anthropic/claude-opus-4-6` – Latest Claude via OpenRouter
- `openrouter/mistral/mistral-small-2603` – Mistral's latest
- `openrouter/inception/mercury-2` – Ultra-fast reasoning (1000 tok/sec)
- `openrouter/qwen/qwen3.6-plus-preview` – Free tier available!
- `openrouter/x-ai/grok-4.20-beta` – xAI's latest Grok
- `openrouter/nvidia/nemotron-3-super-120b-a12b` – NVIDIA's powerhouse
- `openrouter/xiaomi/mimo-v2-pro` – Xiaomi's 1M context model

Why OpenRouter?

- Single API key for 100+ models
- Unified pricing and routing
- Auto-fallback to other models
- No vendor lock-in

Get key from: OpenRouter.ai (free tier available!)
### Any Other Provider

HuggingClaw supports any LLM provider that OpenClaw supports. Just use:

```env
LLM_API_KEY=your_api_key
LLM_MODEL=provider/model-name
```

The provider prefix is auto-detected and mapped to the appropriate environment variable.
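As a rough sketch of how such prefix detection can work (the prefixes and variable names below are illustrative assumptions, not OpenClaw's actual mapping table):

```shell
#!/usr/bin/env bash
# Illustrative only: map an LLM_MODEL prefix to a provider-specific env var.
# These prefixes and variable names are assumptions for this sketch.
detect_provider_var() {
  local model="$1"
  local prefix="${model%%/*}"          # text before the first "/"
  case "$prefix" in
    anthropic)  echo "ANTHROPIC_API_KEY" ;;
    openai)     echo "OPENAI_API_KEY" ;;
    google)     echo "GEMINI_API_KEY" ;;
    openrouter) echo "OPENROUTER_API_KEY" ;;
    *)          echo "LLM_API_KEY" ;;  # fallback: pass the generic key through
  esac
}

detect_provider_var "anthropic/claude-sonnet-4-5"   # prints ANTHROPIC_API_KEY
```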
## 📱 Telegram Setup

1. Message @BotFather → `/newbot` → copy the token
2. Message @userinfobot to get your user ID
3. Add secrets: `TELEGRAM_BOT_TOKEN` and `TELEGRAM_USER_ID`
4. Restart the Space → DM your bot 🎉

Multiple users? Use `TELEGRAM_USER_IDS=123,456,789` (comma-separated).
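Conceptually, a comma-separated allowlist check boils down to a split-and-compare. A minimal sketch (the function name and logic here are hypothetical, not HuggingClaw's actual code):

```shell
#!/usr/bin/env bash
# Hypothetical sketch: accept a sender only if their ID is on the allowlist.
is_allowed_user() {
  local sender="$1"
  # Prefer the multi-user list; fall back to the single-user secret.
  local allowlist="${TELEGRAM_USER_IDS:-$TELEGRAM_USER_ID}"
  local id
  IFS=',' read -r -a ids <<< "$allowlist"
  for id in "${ids[@]}"; do
    [ "$id" = "$sender" ] && return 0
  done
  return 1
}

TELEGRAM_USER_IDS="123,456,789"
is_allowed_user 456 && echo "456 allowed"    # prints: 456 allowed
is_allowed_user 999 || echo "999 blocked"    # prints: 999 blocked
```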
## 💾 Workspace Backup

Set `HF_USERNAME` + `HF_TOKEN` and HuggingClaw handles everything:

- Auto-creates the dataset if it doesn't exist
- Restores the workspace on every startup
- Auto-syncs changes every 10 minutes (configurable)
- Saves on shutdown (graceful SIGTERM handling)

Custom dataset name: `BACKUP_DATASET_NAME=my-custom-backup`
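The periodic sync amounts to a commit-if-changed step. A hedged sketch of the idea (`workspace-sync.sh` is the real implementation; `WORKSPACE_DIR`, the `origin` remote, and the `main` branch are assumptions here):

```shell
#!/usr/bin/env bash
# Sketch of one auto-sync step; workspace-sync.sh does the real work.
# WORKSPACE_DIR, the "origin" remote, and branch "main" are assumptions.
sync_workspace() {
  git -C "$WORKSPACE_DIR" add -A
  # Commit and push only when something actually changed
  if ! git -C "$WORKSPACE_DIR" diff --cached --quiet; then
    git -C "$WORKSPACE_DIR" commit -q -m "auto-sync $(date -u +%Y-%m-%dT%H:%M:%SZ)"
    git -C "$WORKSPACE_DIR" push -q origin main
  fi
}

# As a background service this would run roughly like:
#   while true; do sleep "${SYNC_INTERVAL:-600}"; sync_workspace; done &
```

Committing only when `git diff --cached --quiet` reports changes keeps the dataset history free of empty commits.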
## 🔄 How It Stays Alive

An HF Space sleeps after 48 h without HTTP requests. HuggingClaw prevents this with:

- Self-ping – pings its own URL every 5 min (uses HF's `SPACE_HOST` env var)
- Health endpoint – returns `200 OK` with uptime info
- Zero dependencies – no external cron, no third-party pinger

Your Space runs forever, powered entirely by HF. 🎯
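A minimal sketch of the self-ping idea (`keep-alive.sh` is the real implementation; the example hostname and pinging the root path are assumptions):

```shell
#!/usr/bin/env bash
# Sketch of the self-ping; keep-alive.sh does the real work.
# SPACE_HOST is set by HuggingFace inside the container
# (e.g. "you-huggingclaw.hf.space" - hostname here is hypothetical).
build_ping_url() {
  echo "https://${SPACE_HOST}/"
}

ping_self() {
  curl -fsS -o /dev/null "$(build_ping_url)"
}

# As a background service this would run roughly like:
#   while true; do sleep "${KEEP_ALIVE_INTERVAL:-300}"; ping_self; done &
```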
## 💻 Local Development

```shell
git clone https://github.com/somratpro/huggingclaw.git
cd huggingclaw
cp .env.example .env
nano .env   # fill in your values
```

Docker:

```shell
docker build -t huggingclaw .
docker run -p 7860:7860 --env-file .env huggingclaw
```

Without Docker:

```shell
npm install -g openclaw@latest
set -a; source .env; set +a   # safer than `export $(cat .env | xargs)` for quoted values
bash start.sh
```
## 🔌 Connect via CLI

```shell
npm install -g openclaw@latest
openclaw channels login --gateway https://YOUR-SPACE-URL.hf.space
# Enter your GATEWAY_TOKEN when prompted
```
## 🏗️ Architecture

```
HuggingClaw/
├── Dockerfile          # Runtime: Node.js + OpenClaw + curl + jq
├── start.sh            # Config generator + validation + orchestrator
├── keep-alive.sh       # Self-ping to prevent HF sleep
├── workspace-sync.sh   # Periodic workspace commit + push
├── health-server.js    # Health endpoint (/health)
├── dns-fix.js          # DNS override for HF network restrictions
├── .env.example        # Complete configuration reference
├── .gitignore          # Keeps secrets out of version control
└── README.md           # You are here
```
Startup flow:

1. Validate secrets – fail fast with clear errors
2. Validate the HF token – warn if expired
3. Auto-create the backup dataset if missing
4. Restore the workspace from the HF Dataset
5. Generate the `openclaw.json` config from env vars
6. Print a startup summary
7. Start background services (keep-alive, auto-sync)
8. Launch the OpenClaw gateway
9. On SIGTERM – save the workspace, then exit cleanly
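The fail-fast validation in the first step can be sketched as follows (`require_secret` is a hypothetical name; `start.sh` holds the real checks):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of fail-fast secret validation (start.sh does the real work).
require_secret() {
  local name="$1"
  if [ -z "${!name}" ]; then             # ${!name}: bash indirect expansion
    echo "ERROR: required secret $name is not set" >&2
    return 1
  fi
}

# In start.sh this would be followed by `|| exit 1`; here we just demonstrate:
LLM_API_KEY="dummy-key"
require_secret LLM_API_KEY && echo "LLM_API_KEY ok"      # prints: LLM_API_KEY ok
GATEWAY_TOKEN=""
require_secret GATEWAY_TOKEN || echo "token missing"     # prints: token missing

# The final SIGTERM step maps to a trap along the lines of (save_workspace is
# hypothetical):  trap 'save_workspace; exit 0' TERM
```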
## 🐛 Troubleshooting

- Missing secrets – check Settings → Secrets for `LLM_API_KEY` and `GATEWAY_TOKEN`
- Telegram not working – verify the bot token is valid; check logs for `📱 Enabling Telegram`
- Workspace not restoring – check that `HF_USERNAME` and `HF_TOKEN` are set and the token has write access
- Space sleeping – check logs for `🔄 Keep-alive started`; if missing, `SPACE_HOST` might not be set
- Control UI blocked – the Space URL is auto-allowlisted; check logs for origin errors
- Version issues – pin with `OPENCLAW_VERSION=2026.3.24` in secrets
## 🔗 Links

## 🤝 Contributing

Contributions welcome! See CONTRIBUTING.md for guidelines.

## 📄 License

MIT – see LICENSE for details.

Made with ❤️ by @somratpro for the OpenClaw community