# ────────────────────────────────────────────────────────────────
# 🦞 HuggingClaw - OpenClaw Gateway for HuggingFace Spaces
# ────────────────────────────────────────────────────────────────
# Copy this file to .env and fill in your values.
# For local development: cp .env.example .env && nano .env

# ── REQUIRED: Core Configuration ──
# [REQUIRED] LLM provider API key
#   - Anthropic: sk-ant-v0-...
#   - OpenAI: sk-...
#   - Google: AIzaSy...
LLM_API_KEY=your_api_key_here

# [REQUIRED] LLM model to use (format: provider/model-name)
# The provider is auto-detected from the prefix - any provider is supported.
# Model IDs sourced from https://openrouter.ai/api/v1/models
# ── Top-Tier Providers ──
#
# Anthropic Claude:
#   - anthropic/claude-opus-4.6
#   - anthropic/claude-sonnet-4.6
#   - anthropic/claude-sonnet-4.5
#   - anthropic/claude-haiku-4.5
#
# OpenAI:
#   - openai/gpt-5.4-pro (1M context)
#   - openai/gpt-5.4 (1M context)
#   - openai/gpt-5.4-mini
#   - openai/gpt-5.4-nano
#   - openai/gpt-4.1
#   - openai/gpt-4.1-mini
#   - openai/gpt-4.1-nano
#
# Google Gemini:
#   - google/gemini-3.1-pro-preview
#   - google/gemini-3.1-flash-lite-preview
#   - google/gemini-2.5-pro
#   - google/gemini-2.5-flash
#
# DeepSeek:
#   - deepseek/deepseek-v3.2
#   - deepseek/deepseek-r1-0528
#   - deepseek/deepseek-r1
#   - deepseek/deepseek-chat-v3.1
# ── Chinese/Asian Providers ──
#
# Qwen (Alibaba):
#   - qwen/qwen3.6-plus-preview:free (1M context, free!)
#   - qwen/qwen3-max
#   - qwen/qwen3-coder
#   - qwen/qwen3.5-397b-a17b
#   - qwen/qwen3.5-35b-a3b
#
# Z.ai (GLM):
#   - z-ai/glm-5
#   - z-ai/glm-5-turbo
#   - z-ai/glm-4.7
#   - z-ai/glm-4.7-flash
#   - z-ai/glm-4.5-air:free (free!)
#
# Moonshot (Kimi):
#   - moonshotai/kimi-k2.5
#   - moonshotai/kimi-k2
#   - moonshotai/kimi-k2-thinking
#
# Xiaomi (MiMo):
#   - xiaomi/mimo-v2-pro (1M context)
#   - xiaomi/mimo-v2-omni (multimodal)
#   - xiaomi/mimo-v2-flash
#
# ByteDance Seed:
#   - bytedance-seed/seed-2.0-lite
#   - bytedance-seed/seed-2.0-mini
#
# Baidu (ERNIE):
#   - baidu/ernie-4.5-300b-a47b
#   - baidu/ernie-4.5-21b-a3b
#
# Tencent:
#   - tencent/hunyuan-a13b-instruct
#
# StepFun:
#   - stepfun/step-3.5-flash
#   - stepfun/step-3.5-flash:free (free!)
# ── Western Providers ──
#
# Mistral:
#   - mistralai/mistral-large-2512
#   - mistralai/mistral-medium-3.1
#   - mistralai/mistral-small-2603
#   - mistralai/devstral-medium (coding)
#   - mistralai/codestral-2508
#
# xAI (Grok):
#   - x-ai/grok-4.20-beta
#   - x-ai/grok-4.20-multi-agent-beta
#   - x-ai/grok-4.1-fast
#   - x-ai/grok-4
#
# Meta (Llama):
#   - meta-llama/llama-4-maverick
#   - meta-llama/llama-4-scout
#   - meta-llama/llama-3.3-70b-instruct
#
# NVIDIA:
#   - nvidia/nemotron-3-super-120b-a12b
#   - nvidia/nemotron-3-super-120b-a12b:free (free!)
#
# Cohere:
#   - cohere/command-a
#   - cohere/command-r-plus-08-2024
#
# Perplexity:
#   - perplexity/sonar-deep-research
#   - perplexity/sonar-pro
#
# MiniMax:
#   - minimax/minimax-m2.7
#   - minimax/minimax-m2.5
#   - minimax/minimax-m2.5:free (free!)
# ── Speed & Specialty ──
#
# Inception (ultra-fast reasoning):
#   - inception/mercury-2 (1000+ tok/sec)
#   - inception/mercury-coder
#
# Amazon:
#   - amazon/nova-premier-v1
#   - amazon/nova-pro-v1
#
# ── OpenRouter (100+ models via a single API key) ──
# Any model above can be used via OpenRouter with a single API key.
# See https://openrouter.ai/models for the complete list.
#
# Or any other provider - format: provider/model-name (auto-detected)
LLM_MODEL=anthropic/claude-sonnet-4.5
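
# The provider/model-name convention above can be sketched with standard
# shell parameter expansion (an illustrative helper only; the gateway's
# actual detection logic may differ):

```shell
# Split a "provider/model-name" ID at the first "/" (illustration only).
LLM_MODEL="anthropic/claude-sonnet-4.5"
provider="${LLM_MODEL%%/*}"   # strip the longest "/..." suffix -> provider
model="${LLM_MODEL#*/}"       # strip the shortest ".../" prefix -> model name
echo "provider=$provider model=$model"
# -> provider=anthropic model=claude-sonnet-4.5
```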
# [REQUIRED] Gateway authentication token.
# Generate one with: openssl rand -hex 32
GATEWAY_TOKEN=your_gateway_token_here
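
# The `openssl rand -hex 32` command above produces 32 random bytes encoded
# as 64 hex characters; a quick sanity check:

```shell
# Generate a gateway token and confirm its length (32 bytes -> 64 hex chars).
GATEWAY_TOKEN=$(openssl rand -hex 32)
echo "${#GATEWAY_TOKEN}"   # -> 64
```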
# ── OPTIONAL: Telegram Integration ──

# Enable Telegram bot integration.
# Get a bot token from: https://t.me/BotFather
TELEGRAM_BOT_TOKEN=your_bot_token_here

# Single user ID for DM access.
# Get your ID from: https://t.me/userinfobot
TELEGRAM_USER_ID=123456789

# Multiple user IDs (comma-separated, for team access).
TELEGRAM_USER_IDS=123456789,987654321,555555555
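
# The comma-separated format above can be checked against a sender ID like
# this (a hypothetical helper for illustration; the gateway's real allowlist
# check may be implemented differently):

```shell
# Membership test for a comma-separated ID allowlist (illustration only).
TELEGRAM_USER_IDS="123456789,987654321,555555555"
is_allowed() {
  # Wrap the list in commas so every ID is matched as ",<id>," exactly.
  case ",$TELEGRAM_USER_IDS," in
    *",$1,"*) echo allowed ;;
    *)        echo denied  ;;
  esac
}
is_allowed 987654321   # -> allowed
is_allowed 111111111   # -> denied
```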
# ── OPTIONAL: Workspace Backup to HF Dataset ──

# Enable automatic workspace backup & restore.

# Your HuggingFace username.
HF_USERNAME=your_hf_username

# HuggingFace API token (with write access).
# Get one from: https://huggingface.co/settings/tokens
HF_TOKEN=hf_your_token_here

# Name of the backup dataset (auto-created if missing).
# Default: huggingclaw-backup
BACKUP_DATASET_NAME=huggingclaw-backup

# Git commit email for workspace syncs.
# Default: openclaw@example.com
WORKSPACE_GIT_USER=openclaw@example.com

# Git commit name for workspace syncs.
# Default: OpenClaw Bot
WORKSPACE_GIT_NAME=OpenClaw Bot
# ── OPTIONAL: Background Services Configuration ──

# Keep-alive ping interval in seconds (prevents HF Spaces from sleeping).
# Default: 300 (5 minutes)
# Set to 0 to disable (not recommended on HF Spaces).
KEEP_ALIVE_INTERVAL=300
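
# The "0 disables it" semantics above amount to a guard like the following
# (a sketch of the assumed behaviour, not the gateway's actual code):

```shell
# Keep-alive guard sketch: a value of 0 skips the ping loop entirely.
KEEP_ALIVE_INTERVAL=0
if [ "$KEEP_ALIVE_INTERVAL" -gt 0 ]; then
  echo "pinging every ${KEEP_ALIVE_INTERVAL}s"
else
  echo "keep-alive disabled"
fi
# -> keep-alive disabled
```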
# Workspace auto-sync interval in seconds.
# Default: 600 (10 minutes)
SYNC_INTERVAL=600

# ── OPTIONAL: Advanced Configuration ──

# Pin the OpenClaw version (default: latest).
# Example: OPENCLAW_VERSION=2026.3.24
OPENCLAW_VERSION=latest

# Health endpoint port (default: 7861).
HEALTH_PORT=7861
# ────────────────────────────────────────────────────────────────
# QUICK START CHECKLIST
# ────────────────────────────────────────────────────────────────
#
# ✅ Minimum setup (3 secrets):
#   1. LLM_API_KEY    - Get it from your LLM provider
#   2. LLM_MODEL      - Choose a model from the list above
#   3. GATEWAY_TOKEN  - Run: openssl rand -hex 32
#
# ✅ Add Telegram (2 more secrets):
#   4. TELEGRAM_BOT_TOKEN - From @BotFather
#   5. TELEGRAM_USER_ID   - From @userinfobot
#
# ✅ Enable Backup (2 more secrets):
#   6. HF_USERNAME - Your HF account name
#   7. HF_TOKEN    - From HF Settings → Tokens
#
# ────────────────────────────────────────────────────────────────
# DEPLOYMENT OPTIONS
# ────────────────────────────────────────────────────────────────
#
# 📦 HuggingFace Spaces (Recommended):
#   - Click "Duplicate this Space"
#   - Go to Settings → Secrets
#   - Add LLM_API_KEY, LLM_MODEL, GATEWAY_TOKEN
#   - Deploy (automatic!)
#
# 🐳 Docker Local:
#   docker build -t huggingclaw .
#   docker run -p 7860:7860 --env-file .env huggingclaw
#
# 💻 Direct (without Docker):
#   npm install -g openclaw@latest
#   set -a; . ./.env; set +a   # exports every variable; unlike `export $(cat .env | xargs)`,
#                              # this tolerates comment lines (quote values containing spaces)
#   bash start.sh
#
# ────────────────────────────────────────────────────────────────
# VERIFY YOUR SETUP
# ────────────────────────────────────────────────────────────────
#
# After deployment, check:
#   1. Logs for the "🦞 HuggingClaw Gateway" banner
#   2. Health endpoint: curl https://YOUR-SPACE-URL.hf.space/health
#   3. Control UI: https://YOUR-SPACE-URL.hf.space
#   4. (If Telegram) DM your bot to test
#
# ────────────────────────────────────────────────────────────────
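
# Step 2 above can be wrapped in a small retry loop for freshly started
# Spaces (a sketch; here the curl call is stubbed with `true` so the snippet
# runs offline - substitute the real health-check line in practice):

```shell
# Poll the health endpoint a few times before declaring failure (sketch).
check() { true; }   # stand-in for: curl -fsS "https://YOUR-SPACE-URL.hf.space/health" >/dev/null
status="unreachable"
for attempt in 1 2 3; do
  if check; then status="healthy"; break; fi
  sleep 2           # brief back-off between attempts
done
echo "$status"
```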