# Local checkpoints (not in Git)

Trained weights are kept in this folder locally rather than committed, so clones stay small. After cloning, install the published bundle:

```bash
cd polyguard-rl
python scripts/install_hf_active_bundle.py
```

That creates **`active/`** with:

| Path | Contents |
|------|----------|
| `active/active_model_manifest.json` | Which artifact to load (GRPO vs merged vs SFT) |
| `active/grpo_adapter/` | PEFT GRPO adapter (+ tokenizer files) |
| `active/merged/` | Full merged Qwen 0.5B weights (~1 GB) |
| `active/sft_adapter/` | SFT LoRA fallback |
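
For a rough sense of how these pieces fit together, here is a minimal loading sketch. It is not the repo's actual loader, and the `mode` key and its values are assumptions about the manifest schema; the idea is that the merged weights load directly, while the adapters are applied on top of the base model with PEFT.

```python
# Illustrative sketch only; the real loader and manifest schema live in the
# polyguard-rl code, and the "mode" key and its values here are hypothetical.
import json
from pathlib import Path

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

ACTIVE = Path("active")
BASE = "Qwen/Qwen2.5-0.5B-Instruct"

manifest = json.loads((ACTIVE / "active_model_manifest.json").read_text())
mode = manifest.get("mode", "grpo_adapter")  # e.g. "grpo_adapter" | "merged" | "sft_adapter"

tokenizer = AutoTokenizer.from_pretrained(BASE)  # the GRPO adapter dir also ships tokenizer files

if mode == "merged":
    # Full merged weights: no base model or adapter needed
    model = AutoModelForCausalLM.from_pretrained(ACTIVE / "merged")
else:
    # Adapter path: apply the PEFT LoRA adapter on top of the base model
    base = AutoModelForCausalLM.from_pretrained(BASE)
    model = PeftModel.from_pretrained(base, ACTIVE / mode)
```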

A Hugging Face Hub cache copy may also appear under `.hf_bundles/` (safe to delete after a successful install).

Enable the bundle in `.env`: set `POLYGUARD_ENABLE_ACTIVE_MODEL=true` and `POLYGUARD_HF_MODEL=Qwen/Qwen2.5-0.5B-Instruct` (the base model the adapters are loaded on top of).
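
As a small sketch of how those settings would typically be picked up at runtime (assuming `python-dotenv` for loading `.env`; polyguard-rl's real config code may work differently):

```python
# Hypothetical sketch of reading the .env flags; the variable names come from
# this README, but the loading mechanism (python-dotenv) is an assumption.
import os

from dotenv import load_dotenv

load_dotenv()  # pulls .env entries into the process environment

use_active_bundle = os.getenv("POLYGUARD_ENABLE_ACTIVE_MODEL", "false").lower() == "true"
base_model_id = os.getenv("POLYGUARD_HF_MODEL", "Qwen/Qwen2.5-0.5B-Instruct")

if use_active_bundle:
    print(f"Loading the local active/ bundle on top of {base_model_id}")
else:
    print(f"Active bundle disabled; base model is {base_model_id}")
```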

**If this folder looks empty in the editor:** run the install command above; then confirm with `ls active/`.