---
title: HuggingClaw
emoji: 🦞
colorFrom: blue
colorTo: purple
sdk: docker
app_port: 7860
pinned: true
---

<!-- Badges -->

[GitHub](https://github.com/somratpro/huggingclaw) ·
[License: MIT](https://opensource.org/licenses/MIT) ·
[HuggingFace Spaces](https://huggingface.co/spaces) ·
[OpenClaw](https://github.com/openclaw/openclaw)
# 🦞 HuggingClaw

Run your own **always-on AI assistant** on HuggingFace Spaces — for free.

Works with **any LLM** (Anthropic, OpenAI, Google), connects via **Telegram**, and persists your workspace to **HF Datasets** automatically.

### ✨ Features

- **Zero-config** — just add 3 secrets and deploy
- **Any LLM provider** — Claude, GPT-4, Gemini, etc.
- **Built-in keep-alive** — self-pings to prevent HF sleep (no external cron needed)
- **Auto-sync workspace** — commits + pushes changes every 10 min
- **Auto-created backup** — creates the HF Dataset for you if it doesn't exist
- **Graceful shutdown** — saves the workspace before the container dies
- **Multi-user Telegram** — supports comma-separated user IDs for teams
- **Health endpoint** — `/health` for monitoring
- **Version pinning** — lock OpenClaw to a specific version
- **100% HF-native** — runs entirely on HuggingFace infrastructure
---

## 🚀 Quick Start

### 1. Duplicate this Space

[Duplicate this Space](https://huggingface.co/spaces/somratpro/HuggingClaw?duplicate=true)

Click the button above → name it → set it to **Private**.

### 2. Add Required Secrets

Go to **Settings → Secrets**:

| Secret | Value |
|--------|-------|
| `LLM_API_KEY` | Your API key ([Anthropic](https://console.anthropic.com/) / [OpenAI](https://platform.openai.com/) / [Google](https://ai.google.dev/)) |
| `LLM_MODEL` | Model to use (e.g. `google/gemini-2.5-flash`, `anthropic/claude-sonnet-4.5`, `openai/gpt-4`) |
| `GATEWAY_TOKEN` | Run `openssl rand -hex 32` to generate |
### 3. Deploy

That's it! The Space builds and starts automatically.

### 4. (Optional) Add Telegram

| Secret | Value |
|--------|-------|
| `TELEGRAM_BOT_TOKEN` | From [@BotFather](https://t.me/BotFather) |
| `TELEGRAM_USER_ID` | Your user ID ([how to find it](https://t.me/userinfobot)) |

### 5. (Optional) Enable Workspace Backup

| Secret | Value |
|--------|-------|
| `HF_USERNAME` | Your HuggingFace username |
| `HF_TOKEN` | [HF token](https://huggingface.co/settings/tokens) with write access |

The backup dataset (`huggingclaw-backup`) is **created automatically** — no manual setup needed.
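
The dataset the backup lands in is derived from these secrets. A minimal sketch of the defaulting behavior (`backup_repo_id` is a hypothetical helper, not a real HuggingClaw function):

```bash
#!/usr/bin/env bash
# Sketch: resolve the backup dataset repo id from the secrets above.
backup_repo_id() {
  # BACKUP_DATASET_NAME falls back to the documented default
  echo "${HF_USERNAME}/${BACKUP_DATASET_NAME:-huggingclaw-backup}"
}
```

So with only `HF_USERNAME=alice` set, the workspace is pushed to the dataset `alice/huggingclaw-backup`.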
---

## 📋 All Configuration Options

See **`.env.example`** for the complete reference with examples.

#### Required

| Variable | Purpose |
|----------|---------|
| `LLM_API_KEY` | LLM provider API key |
| `LLM_MODEL` | Model to use (e.g. `google/gemini-2.5-flash`, `anthropic/claude-sonnet-4.5`, `openai/gpt-4`) — the provider is auto-detected from the prefix |
| `GATEWAY_TOKEN` | Gateway auth token |

#### Telegram

| Variable | Purpose |
|----------|---------|
| `TELEGRAM_BOT_TOKEN` | Bot token from @BotFather |
| `TELEGRAM_USER_ID` | Single-user allowlist |
| `TELEGRAM_USER_IDS` | Multiple users (comma-separated): `123,456,789` |

#### Workspace Backup

| Variable | Default | Purpose |
|----------|---------|---------|
| `HF_USERNAME` | — | Your HF username |
| `HF_TOKEN` | — | HF token (write access) |
| `BACKUP_DATASET_NAME` | `huggingclaw-backup` | Dataset name (auto-created!) |
| `WORKSPACE_GIT_USER` | `openclaw@example.com` | Git commit email |
| `WORKSPACE_GIT_NAME` | `OpenClaw Bot` | Git commit name |

#### Background Services

| Variable | Default | Purpose |
|----------|---------|---------|
| `KEEP_ALIVE_INTERVAL` | `300` (5 min) | Self-ping interval in seconds; `0` disables |
| `SYNC_INTERVAL` | `600` (10 min) | Auto-sync interval in seconds |

#### Advanced

| Variable | Default | Purpose |
|----------|---------|---------|
| `OPENCLAW_VERSION` | `latest` | Pin OpenClaw to a specific version |
| `HEALTH_PORT` | `7861` | Health endpoint port |
---

## 🤖 LLM Provider Setup

Just set `LLM_MODEL` with the correct provider prefix — **any provider is supported**. The provider is auto-detected from the model name. All model IDs are sourced from [OpenRouter](https://openrouter.ai/models).

### Anthropic (Claude)

```
LLM_API_KEY=sk-ant-...
LLM_MODEL=anthropic/claude-sonnet-4.5
```

Models: `anthropic/claude-opus-4.6` · `anthropic/claude-sonnet-4.6` · `anthropic/claude-sonnet-4.5` · `anthropic/claude-haiku-4.5`

### OpenAI

```
LLM_API_KEY=sk-...
LLM_MODEL=openai/gpt-5.4
```

Models: `openai/gpt-5.4-pro` · `openai/gpt-5.4` · `openai/gpt-5.4-mini` · `openai/gpt-5.4-nano` · `openai/gpt-4.1` · `openai/gpt-4.1-mini`

### Google (Gemini)

```
LLM_API_KEY=AIzaSy...
LLM_MODEL=google/gemini-2.5-flash
```

Models: `google/gemini-3.1-pro-preview` · `google/gemini-3.1-flash-lite-preview` · `google/gemini-2.5-pro` · `google/gemini-2.5-flash`

### DeepSeek

```
LLM_API_KEY=sk-...
LLM_MODEL=deepseek/deepseek-v3.2
```

Models: `deepseek/deepseek-v3.2` · `deepseek/deepseek-r1-0528` · `deepseek/deepseek-r1` · `deepseek/deepseek-chat-v3.1`

Get a key from: [DeepSeek Platform](https://platform.deepseek.com)

### Qwen (Alibaba)

```
LLM_API_KEY=your_qwen_api_key
LLM_MODEL=qwen/qwen3-max
```

Models: `qwen/qwen3.6-plus-preview:free` (free!) · `qwen/qwen3-max` · `qwen/qwen3-coder` · `qwen/qwen3.5-397b-a17b` · `qwen/qwen3.5-35b-a3b`

Get a key from: [DashScope](https://dashscope.aliyun.com)
### Z.ai (GLM)

```
LLM_API_KEY=your_zai_api_key
LLM_MODEL=z-ai/glm-5-turbo
```

Models: `z-ai/glm-5` · `z-ai/glm-5-turbo` · `z-ai/glm-4.7` · `z-ai/glm-4.7-flash` · `z-ai/glm-4.5-air:free` (free!)

Get a key from: [Z.ai](https://z.ai)

### Moonshot (Kimi)

```
LLM_API_KEY=sk-...
LLM_MODEL=moonshotai/kimi-k2.5
```

Models: `moonshotai/kimi-k2.5` · `moonshotai/kimi-k2` · `moonshotai/kimi-k2-thinking`

Get a key from: [Moonshot API](https://platform.moonshot.cn)

### Mistral

```
LLM_API_KEY=your_mistral_api_key
LLM_MODEL=mistralai/mistral-large-2512
```

Models: `mistralai/mistral-large-2512` · `mistralai/mistral-medium-3.1` · `mistralai/mistral-small-2603` · `mistralai/devstral-medium` · `mistralai/codestral-2508`

Get a key from: [Mistral Console](https://console.mistral.ai)

### xAI (Grok)

```
LLM_API_KEY=your_xai_api_key
LLM_MODEL=x-ai/grok-4.20-beta
```

Models: `x-ai/grok-4.20-beta` · `x-ai/grok-4.20-multi-agent-beta` · `x-ai/grok-4.1-fast` · `x-ai/grok-4`

Get a key from: [xAI Console](https://console.x.ai)

### Meta (Llama)

```
LLM_API_KEY=your_meta_api_key
LLM_MODEL=meta-llama/llama-4-maverick
```

Models: `meta-llama/llama-4-maverick` · `meta-llama/llama-4-scout` · `meta-llama/llama-3.3-70b-instruct:free` (free!)

### MiniMax

```
LLM_API_KEY=your_minimax_api_key
LLM_MODEL=minimax/minimax-m2.7
```

Models: `minimax/minimax-m2.7` · `minimax/minimax-m2.5` · `minimax/minimax-m2.5:free` (free!)

Get a key from: [MiniMax Platform](https://api.minimaxi.com)
### NVIDIA (Nemotron)

```
LLM_API_KEY=your_nvidia_api_key
LLM_MODEL=nvidia/nemotron-3-super-120b-a12b
```

Models: `nvidia/nemotron-3-super-120b-a12b` · `nvidia/nemotron-3-super-120b-a12b:free` (free!)

Get a key from: [NVIDIA API](https://api.nvidia.com)

### Cohere

```
LLM_API_KEY=your_cohere_api_key
LLM_MODEL=cohere/command-a
```

Models: `cohere/command-a` · `cohere/command-r-plus-08-2024`

Get a key from: [Cohere Dashboard](https://dashboard.cohere.com)

### Perplexity

```
LLM_API_KEY=your_perplexity_api_key
LLM_MODEL=perplexity/sonar-pro
```

Models: `perplexity/sonar-deep-research` · `perplexity/sonar-pro` · `perplexity/sonar-reasoning-pro`

Get a key from: [Perplexity API](https://www.perplexity.ai/settings/api)

### Xiaomi (MiMo)

```
LLM_API_KEY=your_xiaomi_api_key
LLM_MODEL=xiaomi/mimo-v2-pro
```

Models: `xiaomi/mimo-v2-pro` (1M context) · `xiaomi/mimo-v2-omni` (multimodal) · `xiaomi/mimo-v2-flash`

### ByteDance (Seed)

```
LLM_API_KEY=your_bytedance_api_key
LLM_MODEL=bytedance-seed/seed-2.0-lite
```

Models: `bytedance-seed/seed-2.0-lite` · `bytedance-seed/seed-2.0-mini`

Get a key from: [Volcengine](https://www.volcengine.com)

### Baidu (ERNIE)

```
LLM_API_KEY=your_baidu_api_key
LLM_MODEL=baidu/ernie-4.5-300b-a47b
```

Models: `baidu/ernie-4.5-300b-a47b` · `baidu/ernie-4.5-21b-a3b`

### Inception (Mercury — ultra-fast)

```
LLM_API_KEY=your_inception_api_key
LLM_MODEL=inception/mercury-2
```

Models: `inception/mercury-2` (1000+ tok/sec!) · `inception/mercury-coder`

Get a key from: [Inception Labs](https://www.inceptionlabs.ai)

### Amazon (Nova)

```
LLM_API_KEY=your_amazon_api_key
LLM_MODEL=amazon/nova-premier-v1
```

Models: `amazon/nova-premier-v1` · `amazon/nova-pro-v1` · `amazon/nova-lite-v1`
### OpenRouter (300+ models via a single API key)

```
LLM_API_KEY=sk-or-v1-...
LLM_MODEL=openrouter/auto
```

With OpenRouter you can access **every model above** with a single API key — just add the `openrouter/` prefix:

- `openrouter/openai/gpt-5.4-pro`
- `openrouter/anthropic/claude-opus-4.6`
- `openrouter/deepseek/deepseek-v3.2`
- `openrouter/qwen/qwen3.6-plus-preview:free`

Get a key from: [OpenRouter.ai](https://openrouter.ai) · [Full model list](https://openrouter.ai/models)

### Any Other Provider

HuggingClaw supports **any LLM provider** that OpenClaw supports. Just use:

```
LLM_API_KEY=your_api_key
LLM_MODEL=provider/model-name
```

The provider prefix is auto-detected and mapped to the appropriate environment variable.
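
As a sketch of what that detection could look like (illustrative only — the real mapping lives in `start.sh` and may use different variable names):

```bash
#!/usr/bin/env bash
# Illustrative: derive the provider API-key variable from the LLM_MODEL prefix.
provider_env_var() {
  local provider="${1%%/*}"   # text before the first "/"
  case "$provider" in
    anthropic) echo "ANTHROPIC_API_KEY" ;;
    openai)    echo "OPENAI_API_KEY" ;;
    google)    echo "GEMINI_API_KEY" ;;   # assumed name for Google keys
    *)         # generic fallback: "z-ai" -> "Z_AI_API_KEY", etc.
               echo "$(printf '%s' "$provider" | tr 'a-z-' 'A-Z_')_API_KEY" ;;
  esac
}
```

For example, `provider_env_var anthropic/claude-sonnet-4.5` prints `ANTHROPIC_API_KEY`, and `provider_env_var z-ai/glm-5` falls through to `Z_AI_API_KEY`.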
---

## 📱 Telegram Setup

1. Message [@BotFather](https://t.me/BotFather) → `/newbot` → copy the token
2. Message [@userinfobot](https://t.me/userinfobot) to get your user ID
3. Add the secrets `TELEGRAM_BOT_TOKEN` and `TELEGRAM_USER_ID`
4. Restart the Space → DM your bot 🎉

**Multiple users?** Use `TELEGRAM_USER_IDS=123,456,789` (comma-separated).
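
An allowlist check over those comma-separated IDs can be sketched like this (hypothetical helper; OpenClaw's actual check may differ):

```bash
#!/usr/bin/env bash
# Sketch: is a Telegram user id on the allowlist?
is_allowed_user() {
  local id="$1"
  # Prefer TELEGRAM_USER_IDS; fall back to the single TELEGRAM_USER_ID
  local allow="${TELEGRAM_USER_IDS:-${TELEGRAM_USER_ID:-}}"
  case ",${allow}," in
    *",${id},"*) return 0 ;;   # id found between commas
    *)           return 1 ;;
  esac
}
```

Wrapping both the list and the candidate in commas makes the match exact, so `45` does not match an allowlist containing `456`.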
---

## 💾 Workspace Backup

Set `HF_USERNAME` + `HF_TOKEN` and HuggingClaw handles everything:

1. **Auto-creates** the dataset if it doesn't exist
2. **Restores** the workspace on every startup
3. **Auto-syncs** changes every 10 minutes (configurable)
4. **Saves** on shutdown (graceful SIGTERM handling)

Custom dataset name: `BACKUP_DATASET_NAME=my-custom-backup`
---

## 🔁 How It Stays Alive

HF Spaces go to sleep after 48 hours without HTTP requests. HuggingClaw prevents this with:

- **Self-ping** — pings its own URL every 5 minutes (uses HF's `SPACE_HOST` env var)
- **Health endpoint** — returns `200 OK` with uptime info
- **Zero dependencies** — no external cron, no third-party pinger

Your Space runs forever, powered entirely by HF. 🎯
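
The self-ping loop can be sketched roughly like this (assumed shape; the real `keep-alive.sh` may differ):

```bash
#!/usr/bin/env bash
# Sketch of the keep-alive loop: hit our own public URL so HF sees traffic.

keep_alive_url() {
  # HF injects the bare hostname via SPACE_HOST; empty output = not on HF
  if [ -n "${SPACE_HOST:-}" ]; then
    echo "https://${SPACE_HOST}/"
  fi
}

keep_alive_loop() {
  local interval="${KEEP_ALIVE_INTERVAL:-300}"   # documented default: 5 min
  if [ "$interval" = "0" ]; then return 0; fi    # disabled by config
  local url
  url="$(keep_alive_url)"
  if [ -z "$url" ]; then return 0; fi            # SPACE_HOST not set
  while true; do
    curl -fsS -o /dev/null "$url" || true        # ignore transient failures
    sleep "$interval"
  done
}
```

Because the loop exits early when `KEEP_ALIVE_INTERVAL=0` or `SPACE_HOST` is unset, the same script is safe to run locally.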
---

## 💻 Local Development

```bash
git clone https://github.com/somratpro/huggingclaw.git
cd huggingclaw
cp .env.example .env
nano .env  # fill in your values
```

**Docker:**

```bash
docker build -t huggingclaw .
docker run -p 7860:7860 --env-file .env huggingclaw
```
**Without Docker:**

```bash
npm install -g openclaw@latest
set -a; source .env; set +a  # export every variable in .env (safe with comments)
bash start.sh
```
---

## 🔌 Connect via CLI

```bash
npm install -g openclaw@latest
openclaw channels login --gateway https://YOUR-SPACE-URL.hf.space
# Enter your GATEWAY_TOKEN when prompted
```
---

## 🏗️ Architecture

```
HuggingClaw/
├── Dockerfile          # Runtime: Node.js + OpenClaw + curl + jq
├── start.sh            # Config generator + validation + orchestrator
├── keep-alive.sh       # Self-ping to prevent HF sleep
├── workspace-sync.sh   # Periodic workspace commit + push
├── health-server.js    # Health endpoint (/health)
├── dns-fix.js          # DNS override for HF network restrictions
├── .env.example        # Complete configuration reference
├── .gitignore          # Keeps secrets out of version control
└── README.md           # You are here
```
**Startup flow:**

1. Validate secrets → fail fast with clear errors
2. Validate the HF token → warn if expired
3. Auto-create the backup dataset if missing
4. Restore the workspace from the HF Dataset
5. Generate the `openclaw.json` config from env vars
6. Print a startup summary
7. Start background services (keep-alive, auto-sync)
8. Launch the OpenClaw gateway
9. On SIGTERM → save the workspace → exit cleanly
---

## 🔍 Troubleshooting

- **Missing secrets** — check **Settings → Secrets** for `LLM_API_KEY` and `GATEWAY_TOKEN`
- **Telegram not working** — verify the bot token is valid; check the logs for `📱 Enabling Telegram`
- **Workspace not restoring** — check that `HF_USERNAME` and `HF_TOKEN` are set and the token has write access
- **Space sleeping** — check the logs for `Keep-alive started`; if it's missing, `SPACE_HOST` might not be set
- **Control UI blocked** — the Space URL is auto-allowlisted; check the logs for origin errors
- **Version issues** — pin with `OPENCLAW_VERSION=2026.3.24` in secrets
---

## 🔗 Links

- [OpenClaw Docs](https://docs.openclaw.ai) · [OpenClaw GitHub](https://github.com/openclaw/openclaw) · [HF Spaces Docs](https://huggingface.co/docs/hub/spaces)

---

## 🤝 Contributing

Contributions welcome! See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.

## 📄 License

MIT — see [LICENSE](LICENSE) for details.

---

Made with ❤️ by [@somratpro](https://github.com/somratpro) for the [OpenClaw](https://github.com/openclaw/openclaw) community