codeclaw-backup / .env.example
Somrat Sorkar
Add OpenRouter support + 15+ popular models (Qwen, Grok, MiMo, Seed, Nemotron, GLM, Mercury, etc.)
b19cd4a
raw
history blame
6.83 kB
# ════════════════════════════════════════════════════════════════
# 🦞 HuggingClaw β€” OpenClaw Gateway for HuggingFace Spaces
# ════════════════════════════════════════════════════════════════
# Copy this file to .env and fill in your values.
# For local development: cp .env.example .env && nano .env
# ── REQUIRED: Core Configuration ──
# [REQUIRED] LLM provider API key
# - Anthropic: sk-ant-v0-...
# - OpenAI: sk-...
# - Google: AIzaSy...
LLM_API_KEY=your_api_key_here
# [REQUIRED] LLM model to use (format: provider/model-name)
# Auto-detects provider from prefix β€” any provider is supported!
#
# Anthropic Claude:
# - anthropic/claude-opus-4-6
# - anthropic/claude-sonnet-4-5
# - anthropic/claude-haiku-4-5
#
# OpenAI:
# - openai/gpt-4-turbo
# - openai/gpt-4
# - openai/gpt-3.5-turbo
#
# Google Gemini:
# - google/gemini-2.5-flash
# - google/gemini-2.0-flash
# - google/gemini-1.5-pro
#
# Zhipu (ChatGLM) / ZAI:
# - zhipu/glm-4-plus
# - zhipu/glm-4
# - zai/glm-4 (alias)
#
# Moonshot (Kimi):
# - moonshot/moonshot-v1-128k
# - moonshot/moonshot-v1-32k
#
# MiniMax:
# - minimax/minimax-01
# - minimax/minimax-text-01
#
# Mistral:
# - mistral/mistral-large
# - mistral/mistral-medium
# - mistral/mistral-small
#
# Cohere:
# - cohere/command-r
# - cohere/command-r-plus
#
# Groq:
# - groq/mixtral-8x7b-32768
# - groq/llama2-70b-4096
#
# Qwen:
# - qwen/qwen3.6-plus-preview (free, 1M context)
# - qwen/qwen3.5-35b-a3b
# - qwen/qwen3.5-9b
#
# xAI Grok:
# - x-ai/grok-4.20-beta
# - x-ai/grok-4.20-multi-agent-beta
#
# NVIDIA:
# - nvidia/nemotron-3-super-120b-a12b
# - nvidia/nemotron-3-super-120b-a12b (free)
#
# Reka:
# - reka/reka-edge
#
# Xiaomi:
# - xiaomi/mimo-v2-pro (1M context)
# - xiaomi/mimo-v2-omni (256K context, multimodal)
#
# ByteDance Seed:
# - bytedance-seed/seed-2.0-lite
# - bytedance-seed/seed-2.0-mini
#
# Z.ai GLM:
# - z-ai/glm-5-turbo
#
# KwaiPilot:
# - kwaipilot/kat-coder-pro-v2
#
# OpenRouter (any model via OpenRouter proxy):
# - openrouter/google/lyria-3-pro-preview (music generation)
# - openrouter/inception/mercury-2 (fast reasoning)
# Note: With OpenRouter, you can access 100+ models with a single API key!
# See https://openrouter.ai/models for complete list
#
# Or any other provider supported by OpenClaw (format: provider/model-name)
LLM_MODEL=anthropic/claude-sonnet-4-5
# [REQUIRED] Gateway authentication token
# Generate: openssl rand -hex 32
GATEWAY_TOKEN=your_gateway_token_here
# ── OPTIONAL: Telegram Integration ──
# Enable Telegram bot integration
# Get bot token from: https://t.me/BotFather
TELEGRAM_BOT_TOKEN=your_bot_token_here
# Single user ID for DM access
# Get your ID from: https://t.me/userinfobot
TELEGRAM_USER_ID=123456789
# Multiple user IDs (comma-separated for team access)
# TELEGRAM_USER_IDS=123456789,987654321,555555555
# ── OPTIONAL: Workspace Backup to HF Dataset ──
# Enable automatic workspace backup & restore
# Your HuggingFace username
HF_USERNAME=your_hf_username
# HuggingFace API token (with write access)
# Get from: https://huggingface.co/settings/tokens
HF_TOKEN=hf_your_token_here
# Name of the backup dataset (auto-created if missing)
# Default: huggingclaw-backup
BACKUP_DATASET_NAME=huggingclaw-backup
# Git commit email for workspace syncs
# Default: openclaw@example.com
WORKSPACE_GIT_USER=openclaw@example.com
# Git commit name for workspace syncs
# Default: OpenClaw Bot
WORKSPACE_GIT_NAME=OpenClaw Bot
# ── OPTIONAL: Background Services Configuration ──
# Keep-alive ping interval in seconds (prevents HF Spaces sleep)
# Default: 300 (5 minutes)
# Set to 0 to disable (not recommended on HF Spaces)
KEEP_ALIVE_INTERVAL=300
# Workspace auto-sync interval in seconds
# Default: 600 (10 minutes)
SYNC_INTERVAL=600
# ── OPTIONAL: Advanced Configuration ──
# Pin OpenClaw version (default: latest)
# Example: OPENCLAW_VERSION=2026.3.24
OPENCLAW_VERSION=latest
# Health endpoint port (default: 7861)
HEALTH_PORT=7861
# ════════════════════════════════════════════════════════════════
# QUICK START CHECKLIST
# ════════════════════════════════════════════════════════════════
#
# βœ… Minimum setup (3 secrets):
# 1. LLM_API_KEY - Get from your LLM provider
# 2. LLM_MODEL - Choose a model above
# 3. GATEWAY_TOKEN - Run: openssl rand -hex 32
#
# βœ… Add Telegram (2 more secrets):
# 4. TELEGRAM_BOT_TOKEN - From @BotFather
# 5. TELEGRAM_USER_ID - From @userinfobot
#
# βœ… Enable Backup (2 more secrets):
# 6. HF_USERNAME - Your HF account name
# 7. HF_TOKEN - From HF Settings β†’ Tokens
#
# ════════════════════════════════════════════════════════════════
# DEPLOYMENT OPTIONS
# ════════════════════════════════════════════════════════════════
#
# πŸ“¦ HuggingFace Spaces (Recommended):
# - Click "Duplicate this Space"
# - Go to Settings β†’ Secrets
# - Add LLM_API_KEY, LLM_MODEL, GATEWAY_TOKEN
# - Deploy (automatic!)
#
# 🐳 Docker Local:
# docker build -t huggingclaw .
# docker run -p 7860:7860 --env-file .env huggingclaw
#
# πŸ’» Direct (without Docker):
# npm install -g openclaw@latest
# export $(cat .env | xargs)
# bash start.sh
#
# ════════════════════════════════════════════════════════════════
# VERIFY YOUR SETUP
# ════════════════════════════════════════════════════════════════
#
# After deployment, check:
# 1. Logs for "🦞 HuggingClaw Gateway" banner
# 2. Health endpoint: curl https://YOUR-SPACE-URL.hf.space/health
# 3. Control UI: https://YOUR-SPACE-URL.hf.space
# 4. (If Telegram) DM your bot to test
#
# ════════════════════════════════════════════════════════════════