Deployment
Local OpenEnv Validation
bash scripts/bootstrap_openenv.sh
bash scripts/bootstrap_openenv.sh --runtime-check
The first command validates local OpenEnv packaging. The runtime check starts the FastAPI environment service and validates GET /openapi.json, GET /health, GET /metadata, GET /schema, POST /mcp, and the /reset, /step, /state HTTP contract.
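The endpoint list above can be checked mechanically against the service's own OpenAPI description. The sketch below assumes you have already fetched the `paths` mapping from GET /openapi.json; `missing_contract_paths` is a hypothetical helper, not one of the repo scripts.

```python
# Sketch: verify an OpenEnv service's openapi.json covers the
# /reset, /step, /state contract plus the metadata endpoints.
# The required-path set comes from the runtime check described above.

REQUIRED_PATHS = {"/health", "/metadata", "/schema", "/mcp", "/reset", "/step", "/state"}

def missing_contract_paths(openapi_paths: dict) -> set[str]:
    """Return the required contract paths absent from a 'paths' mapping."""
    return REQUIRED_PATHS - set(openapi_paths)
```

In practice you would feed it something like `requests.get(base_url + "/openapi.json").json()["paths"]` and fail the check when the returned set is non-empty.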
Hugging Face CLI
Use the hf CLI from the repository's virtual environment:
./.venv/bin/hf version
./.venv/bin/hf auth login
./.venv/bin/hf auth whoami
The global hf command on this workstation currently fails because its installed huggingface_hub and Typer versions are incompatible. Do not use it for final deployment.
Hugging Face Space Deployment
export HF_SPACE_REPO_ID="Vishwa-docs/polyguard-openenv"
bash scripts/deploy_space.sh --repo-id "$HF_SPACE_REPO_ID"
./.venv/bin/hf spaces info "$HF_SPACE_REPO_ID"
openenv validate --url "https://Vishwa-docs-polyguard-openenv.hf.space"
Useful deploy flags:
- --dry-run: print commands only.
- --skip-build: skip openenv build.
- --skip-validate: skip local validation.
- --private: deploy as a private Space.
- --create-pr: push deployment changes as a pull request when supported by the OpenEnv CLI.
Default deploy configuration is in configs/deployment.yaml.
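For orientation, a defaults file of this kind typically mirrors the CLI flags above. The fragment below is an illustrative shape only; the key names are assumptions, and the real values live in configs/deployment.yaml.

```yaml
# Illustrative sketch -- actual defaults are in configs/deployment.yaml.
repo_id: Vishwa-docs/polyguard-openenv
private: false        # mirrors --private
skip_build: false     # mirrors --skip-build
skip_validate: false  # mirrors --skip-validate
```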
Required Submission Evidence
After deployment, replace docs/results/hf_space_verification.json with a successful payload that includes:
- passed: true
- HF Space repo id
- HF Space URL
- hf spaces info output or summary
- openenv validate --url ... result
Strict acceptance mode will continue to fail until this file reports passed: true.
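A quick pre-flight check for the evidence file can catch an incomplete payload before strict acceptance mode does. The sketch below assumes a flat JSON object; the key names other than passed (repo_id, space_url) are guesses at the payload shape, not the repo's actual schema.

```python
# Sketch: sanity-check the hf_space_verification.json payload before
# treating submission evidence as complete. Key names beyond "passed"
# are illustrative assumptions about the payload shape.

REQUIRED_KEYS = {"passed", "repo_id", "space_url"}

def evidence_ok(payload: dict) -> bool:
    """True only when all required keys are present and passed is true."""
    return REQUIRED_KEYS <= payload.keys() and payload.get("passed") is True
```

Load the file with json.load and refuse to proceed unless evidence_ok returns True.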
Hugging Face Training Space
Use this path when local Ollama/GPU training is unavailable. It creates a private Docker Space under the authenticated account, starts the Gradio training runner, and uploads outputs/checkpoints to a private artifact repo.
export HF_TOKEN="<write-token>"
.venv/bin/python scripts/deploy_training_space.py \
--repo-id TheJackBright/polyguard-openenv-training \
--artifact-repo-id TheJackBright/polyguard-openenv-training-artifacts \
--hardware t4-small \
--model-id Qwen/Qwen2.5-0.5B-Instruct
The Space executes the notebook-equivalent training loop from notebooks/09_training_loop.ipynb, including SFT, GRPO, adapter merge, post-save inference, ablations, and comparison reports. After the Space uploads artifacts, pull them locally and stop paid GPU usage:
.venv/bin/python scripts/pull_training_artifacts.py \
--artifact-repo-id TheJackBright/polyguard-openenv-training-artifacts
.venv/bin/python scripts/pause_training_space.py \
--repo-id TheJackBright/polyguard-openenv-training \
--mode cpu-basic
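The pull-then-pause flow above can be driven from the Space's runtime stage. The stage strings below follow the values huggingface_hub reports for Spaces (e.g. RUNNING, PAUSED), but the mapping itself is a sketch of one reasonable policy, not code from the repo scripts.

```python
# Sketch: choose the next step for the training Space from its runtime
# stage, mirroring the pull-artifacts-then-pause flow above.

def next_action(stage: str) -> str:
    """Map a Space runtime stage to wait / pull / done."""
    if stage in {"BUILDING", "RUNNING_BUILDING"}:
        return "wait"   # image still building; nothing to pull yet
    if stage == "RUNNING":
        return "pull"   # artifacts may be ready; pull, then pause
    return "done"       # PAUSED / STOPPED: no paid GPU usage remains
```

A polling loop would fetch the stage (for example via huggingface_hub's HfApi), call next_action, and run pull_training_artifacts.py followed by pause_training_space.py when it returns "pull".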
Local Services
bash scripts/run_all_local.sh --quick --skip-train
This builds local data/model assets, skips TRL training, starts the environment/API/UI services, and runs smoke checks. Local inference defaults to the HF Transformers path; set POLYGUARD_ENABLE_OLLAMA=true only when a local Ollama runtime is intentionally available.