---
title: HuggingClaw
emoji: 🦞
colorFrom: blue
colorTo: purple
sdk: docker
app_port: 7860
pinned: true
---


# 🦞 HuggingClaw

Run your own always-on AI assistant on HuggingFace Spaces — for free.

Works with any LLM (Anthropic, OpenAI, Google), connects via Telegram, and persists your workspace to HF Datasets automatically.

## ✨ Features

- **Zero-config** — just add 3 secrets and deploy
- **Any LLM provider** — Claude, GPT-4, Gemini, etc.
- **Built-in keep-alive** — self-pings to prevent HF sleep (no external cron needed)
- **Auto-sync workspace** — commits + pushes changes every 10 min
- **Auto-create backup** — creates the HF Dataset for you if it doesn't exist
- **Graceful shutdown** — saves the workspace before the container dies
- **Multi-user Telegram** — supports comma-separated user IDs for teams
- **Health endpoint** — `/health` for monitoring
- **Version pinning** — lock OpenClaw to a specific version
- **100% HF-native** — runs entirely on HuggingFace infrastructure

## 🚀 Quick Start

### 1. Duplicate this Space

Click the **Duplicate this Space** button above, name it, and set it to **Private**.

### 2. Add Required Secrets

Go to **Settings → Secrets** and add:

| Secret | Value |
|---|---|
| `LLM_API_KEY` | Your API key (Anthropic / OpenAI / Google) |
| `LLM_MODEL` | Model to use (e.g. `google/gemini-2.5-flash`, `anthropic/claude-sonnet-4-5`, `openai/gpt-4`) |
| `GATEWAY_TOKEN` | Run `openssl rand -hex 32` to generate |
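To generate the `GATEWAY_TOKEN` referenced above, any machine with OpenSSL will do:

```shell
# Generate a random 32-byte token as 64 hex characters
openssl rand -hex 32
```

Paste the resulting string into the `GATEWAY_TOKEN` secret.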

### 3. Deploy

That's it! The Space builds and starts automatically.

### 4. (Optional) Add Telegram

| Secret | Value |
|---|---|
| `TELEGRAM_BOT_TOKEN` | From @BotFather |
| `TELEGRAM_USER_ID` | Your user ID (how to find) |

### 5. (Optional) Enable Workspace Backup

| Secret | Value |
|---|---|
| `HF_USERNAME` | Your HuggingFace username |
| `HF_TOKEN` | HF token with write access |

The backup dataset (`huggingclaw-backup`) is created automatically — no manual setup needed.


## 📋 All Configuration Options

See `.env.example` for the complete reference with examples.

### Required

| Variable | Purpose |
|---|---|
| `LLM_API_KEY` | LLM provider API key |
| `LLM_MODEL` | Model to use (e.g. `google/gemini-2.5-flash`, `anthropic/claude-sonnet-4-5`, `openai/gpt-4`) — the provider is auto-detected from the prefix |
| `GATEWAY_TOKEN` | Gateway auth token |

### Telegram

| Variable | Purpose |
|---|---|
| `TELEGRAM_BOT_TOKEN` | Bot token from @BotFather |
| `TELEGRAM_USER_ID` | Single-user allowlist |
| `TELEGRAM_USER_IDS` | Multiple users (comma-separated): `123,456,789` |

### Workspace Backup

| Variable | Default | Purpose |
|---|---|---|
| `HF_USERNAME` | — | Your HF username |
| `HF_TOKEN` | — | HF token (write access) |
| `BACKUP_DATASET_NAME` | `huggingclaw-backup` | Dataset name (auto-created!) |
| `WORKSPACE_GIT_USER` | `openclaw@example.com` | Git commit email |
| `WORKSPACE_GIT_NAME` | `OpenClaw Bot` | Git commit name |

### Background Services

| Variable | Default | Purpose |
|---|---|---|
| `KEEP_ALIVE_INTERVAL` | `300` (5 min) | Self-ping interval; `0` = disable |
| `SYNC_INTERVAL` | `600` (10 min) | Auto-sync interval |

### Advanced

| Variable | Default | Purpose |
|---|---|---|
| `OPENCLAW_VERSION` | `latest` | Pin OpenClaw to a specific version |
| `HEALTH_PORT` | `7861` | Health endpoint port |

## 🤖 LLM Provider Setup

Set `LLM_MODEL` with the correct provider prefix — any provider is supported, and the provider is auto-detected from the model name.

### Anthropic (Claude)

```
LLM_API_KEY=sk-ant-v0-...
LLM_MODEL=anthropic/claude-haiku-4-5
```

Models: `anthropic/claude-opus-4-6` · `anthropic/claude-sonnet-4-5` · `anthropic/claude-haiku-4-5`

### OpenAI

```
LLM_API_KEY=sk-...
LLM_MODEL=openai/gpt-4
```

Models: `openai/gpt-4-turbo` · `openai/gpt-4` · `openai/gpt-3.5-turbo`

### Google (Gemini)

```
LLM_API_KEY=AIzaSy...
LLM_MODEL=google/gemini-2.5-flash
```

Models: `google/gemini-2.5-flash` · `google/gemini-2.0-flash` · `google/gemini-1.5-pro`

### Zhipu (ChatGLM) / ZAI

```
LLM_API_KEY=your_zhipu_api_key
LLM_MODEL=zhipu/glm-4-plus
```

Models: `zhipu/glm-4-plus` · `zhipu/glm-4` · `zai/glm-4` (alias)

Get key from: Zhipu Platform

### Moonshot (Kimi)

```
LLM_API_KEY=sk-...
LLM_MODEL=moonshot/moonshot-v1-128k
```

Models: `moonshot/moonshot-v1-128k` · `moonshot/moonshot-v1-32k`

Get key from: Moonshot API

### MiniMax

```
LLM_API_KEY=your_minimax_api_key
LLM_MODEL=minimax/minimax-01
```

Models: `minimax/minimax-01` · `minimax/minimax-text-01`

Get key from: MiniMax Platform

### Mistral

```
LLM_API_KEY=your_mistral_api_key
LLM_MODEL=mistral/mistral-large
```

Models: `mistral/mistral-large` · `mistral/mistral-medium` · `mistral/mistral-small`

Get key from: Mistral Console

### Cohere

```
LLM_API_KEY=your_cohere_api_key
LLM_MODEL=cohere/command-r
```

Models: `cohere/command-r` · `cohere/command-r-plus`

Get key from: Cohere Dashboard

### Groq

```
LLM_API_KEY=your_groq_api_key
LLM_MODEL=groq/mixtral-8x7b-32768
```

Models: `groq/mixtral-8x7b-32768` · `groq/llama2-70b-4096`

Get key from: Groq Console

### Qwen

```
LLM_API_KEY=your_qwen_api_key
LLM_MODEL=qwen/qwen3.6-plus-preview
```

Models: `qwen/qwen3.6-plus-preview` (free!) · `qwen/qwen3.5-35b-a3b` · `qwen/qwen3.5-9b`

Get key from: Qwen API

### xAI (Grok)

```
LLM_API_KEY=your_xai_api_key
LLM_MODEL=x-ai/grok-4.20-beta
```

Models: `x-ai/grok-4.20-beta` · `x-ai/grok-4.20-multi-agent-beta`

Get key from: xAI Console

### NVIDIA (Nemotron)

```
LLM_API_KEY=your_nvidia_api_key
LLM_MODEL=nvidia/nemotron-3-super-120b-a12b
```

Models: `nvidia/nemotron-3-super-120b-a12b` · `nvidia/nemotron-3-super-120b-a12b` (free)

Get key from: NVIDIA API

### Xiaomi (MiMo)

```
LLM_API_KEY=your_xiaomi_api_key
LLM_MODEL=xiaomi/mimo-v2-pro
```

Models: `xiaomi/mimo-v2-pro` (1M context) · `xiaomi/mimo-v2-omni` (multimodal)

Get key from: Xiaomi

### ByteDance (Seed)

```
LLM_API_KEY=your_bytedance_api_key
LLM_MODEL=bytedance-seed/seed-2.0-lite
```

Models: `bytedance-seed/seed-2.0-lite` · `bytedance-seed/seed-2.0-mini`

Get key from: ByteDance

### Z.ai (GLM)

```
LLM_API_KEY=your_zai_api_key
LLM_MODEL=z-ai/glm-5-turbo
```

Models: `z-ai/glm-5-turbo`

Get key from: Z.ai

### OpenRouter (100+ models via a single API)

```
LLM_API_KEY=your_openrouter_api_key
LLM_MODEL=openrouter/google/lyria-3-pro-preview
```

Popular models via OpenRouter:

- `openrouter/openai/gpt-5.4-pro` — Latest OpenAI (1M context)
- `openrouter/openai/gpt-5.4-mini` — Fast, efficient OpenAI
- `openrouter/google/gemini-3.1-flash-lite-preview` — Google's latest
- `openrouter/anthropic/claude-opus-4-6` — Latest Claude via OpenRouter
- `openrouter/mistral/mistral-small-2603` — Mistral's latest
- `openrouter/inception/mercury-2` — Ultra-fast reasoning (1000 tok/sec)
- `openrouter/qwen/qwen3.6-plus-preview` — Free tier available!
- `openrouter/x-ai/grok-4.20-beta` — xAI's latest Grok
- `openrouter/nvidia/nemotron-3-super-120b-a12b` — NVIDIA's powerhouse
- `openrouter/xiaomi/mimo-v2-pro` — Xiaomi's 1M-context model

**Why OpenRouter?**

- Single API key for 100+ models
- Unified pricing and routing
- Auto-fallback to other models
- No vendor lock-in

Get key from: OpenRouter.ai (free tier available!)

### Any Other Provider

HuggingClaw supports any LLM provider that OpenClaw supports. Just use:

```
LLM_API_KEY=your_api_key
LLM_MODEL=provider/model-name
```

The provider prefix is auto-detected and mapped to the appropriate environment variable.
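As an illustration of that mapping, a hypothetical shell version might look like this — the actual logic lives in `start.sh`, and the function and env-var names below are assumptions, not HuggingClaw's real code:

```shell
# Hypothetical sketch: map a "provider/model" string to the env var a
# provider's SDK typically expects. Illustrative only.
provider_env_var() {
  case "${1%%/*}" in           # take everything before the first slash
    anthropic)  echo "ANTHROPIC_API_KEY" ;;
    openai)     echo "OPENAI_API_KEY" ;;
    google)     echo "GEMINI_API_KEY" ;;
    openrouter) echo "OPENROUTER_API_KEY" ;;
    *)          echo "LLM_API_KEY" ;;    # generic fallback
  esac
}

provider_env_var "anthropic/claude-sonnet-4-5"   # ANTHROPIC_API_KEY
```

The point is only that the prefix is a pure string lookup — no per-provider configuration is needed on your side.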


## 📱 Telegram Setup

1. Message @BotFather → `/newbot` → copy the token
2. Message @userinfobot to get your user ID
3. Add secrets: `TELEGRAM_BOT_TOKEN` and `TELEGRAM_USER_ID`
4. Restart the Space → DM your bot 🎉

Multiple users? Use `TELEGRAM_USER_IDS=123,456,789` (comma-separated).
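A comma-separated allowlist check can be sketched in plain POSIX shell; `is_allowed` is a hypothetical helper written for this README, not HuggingClaw's actual code:

```shell
# Hypothetical sketch of a comma-separated user-ID allowlist check.
# Falls back to the single TELEGRAM_USER_ID when the plural form is unset.
is_allowed() {
  ids="${TELEGRAM_USER_IDS:-$TELEGRAM_USER_ID}"
  case ",$ids," in
    *",$1,"*) return 0 ;;    # ID found in the list
    *)        return 1 ;;
  esac
}

TELEGRAM_USER_IDS="123,456,789"
is_allowed 456 && echo "user 456 is allowed"
```

Wrapping the list in commas before matching prevents partial matches (e.g. ID `45` does not match `456`).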


## 💾 Workspace Backup

Set `HF_USERNAME` + `HF_TOKEN` and HuggingClaw handles everything:

1. Auto-creates the dataset if it doesn't exist
2. Restores the workspace on every startup
3. Auto-syncs changes every 10 minutes (configurable)
4. Saves on shutdown (graceful SIGTERM handling)

Custom dataset name: `BACKUP_DATASET_NAME=my-custom-backup`
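The periodic sync in step 3 amounts to a commit-and-push loop. This is a minimal sketch of the idea — the real implementation is `workspace-sync.sh`, and `sync_once` is a name invented here:

```shell
# Minimal sketch of the auto-sync idea; not the actual workspace-sync.sh.
sync_once() {
  # Commit and push any pending changes in the workspace directory ($1).
  cd "$1" || return 1
  if [ -n "$(git status --porcelain)" ]; then
    git add -A
    git commit -q -m "auto-sync $(date -u +%Y-%m-%dT%H:%M:%SZ)"
    git push -q origin main 2>/dev/null || true   # no-op when no remote is set
  fi
}

# The background service then reduces to:
#   while true; do sync_once /workspace; sleep "${SYNC_INTERVAL:-600}"; done
```

Checking `git status --porcelain` first keeps the history clean: nothing is committed when the workspace hasn't changed.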


## 💓 How It Stays Alive

Free HF Spaces sleep after 48 hours without HTTP requests. HuggingClaw prevents this with:

- **Self-ping** — pings its own URL every 5 min (uses HF's `SPACE_HOST` env var)
- **Health endpoint** — returns `200 OK` with uptime info
- **Zero dependencies** — no external cron, no third-party pinger

Your Space runs forever, powered entirely by HF. 🎯
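The self-ping reduces to a small loop. The sketch below assumes the documented defaults; the function names are invented for illustration, and the real script is `keep-alive.sh`:

```shell
# Illustrative sketch of the keep-alive logic; not the actual keep-alive.sh.
keep_alive_enabled() {
  # KEEP_ALIVE_INTERVAL=0 disables the self-ping, per the config table above.
  [ "${KEEP_ALIVE_INTERVAL:-300}" -ne 0 ]
}

ping_self() {
  # SPACE_HOST is injected by HF Spaces; any successful HTTP request
  # to the Space counts as activity and resets the sleep timer.
  curl -fsS "https://${SPACE_HOST}/" >/dev/null
}

# keep-alive.sh then boils down to:
#   keep_alive_enabled || exit 0
#   while :; do ping_self; sleep "${KEEP_ALIVE_INTERVAL:-300}"; done
```

Because the ping originates inside the container itself, no external scheduler ever needs to know the Space exists.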


## 💻 Local Development

```shell
git clone https://github.com/somratpro/huggingclaw.git
cd huggingclaw
cp .env.example .env
nano .env  # fill in your values
```

Docker:

```shell
docker build -t huggingclaw .
docker run -p 7860:7860 --env-file .env huggingclaw
```

Without Docker:

```shell
npm install -g openclaw@latest
set -a; . ./.env; set +a   # export every variable defined in .env
bash start.sh
```

## 🔗 Connect via CLI

```shell
npm install -g openclaw@latest
openclaw channels login --gateway https://YOUR-SPACE-URL.hf.space
# Enter your GATEWAY_TOKEN when prompted
```

πŸ—οΈ Architecture

HuggingClaw/
β”œβ”€β”€ Dockerfile          # Runtime: Node.js + OpenClaw + curl + jq
β”œβ”€β”€ start.sh            # Config generator + validation + orchestrator
β”œβ”€β”€ keep-alive.sh       # Self-ping to prevent HF sleep
β”œβ”€β”€ workspace-sync.sh   # Periodic workspace commit + push
β”œβ”€β”€ health-server.js    # Health endpoint (/health)
β”œβ”€β”€ dns-fix.js          # DNS override for HF network restrictions
β”œβ”€β”€ .env.example        # Complete configuration reference
β”œβ”€β”€ .gitignore          # Keeps secrets out of version control
└── README.md           # You are here

Startup flow:

1. Validate secrets → fail fast with clear errors
2. Validate HF token → warn if expired
3. Auto-create backup dataset if missing
4. Restore workspace from HF Dataset
5. Generate `openclaw.json` config from env vars
6. Print startup summary
7. Start background services (keep-alive, auto-sync)
8. Launch OpenClaw gateway
9. On SIGTERM → save workspace → exit cleanly
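Step 9 hinges on a signal trap. A minimal sketch, with a stand-in `save_workspace` (the real sync routine lives in the project's scripts):

```shell
# Sketch of graceful-shutdown handling; save_workspace is a stand-in here.
save_workspace() {
  echo "syncing workspace before shutdown"
  # ...the real routine would commit and push to the backup dataset...
}

# When HF stops the container it sends SIGTERM; run the save, then exit 0.
trap 'save_workspace; exit 0' TERM
```

Trapping `TERM` (rather than relying on `EXIT` alone) matters because container runtimes send SIGTERM first and only SIGKILL after a grace period — the trap is the window in which the push can still happen.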

πŸ› Troubleshooting

Missing secrets β†’ Check Settings β†’ Secrets for LLM_API_KEY and GATEWAY_TOKEN

Telegram not working β†’ Verify bot token is valid, check logs for πŸ“± Enabling Telegram

Workspace not restoring β†’ Check HF_USERNAME and HF_TOKEN are set, token has write access

Space sleeping β†’ Check logs for πŸ’“ Keep-alive started. If missing, SPACE_HOST might not be set

Control UI blocked β†’ The Space URL is auto-allowlisted. Check logs for origin errors

Version issues β†’ Pin with OPENCLAW_VERSION=2026.3.24 in secrets



## 🤝 Contributing

Contributions welcome! See `CONTRIBUTING.md` for guidelines.

## 📄 License

MIT — see `LICENSE` for details.


Made with ❤️ by @somratpro for the OpenClaw community