---
title: ImmunoOrg 2.0
emoji: 🛡️
colorFrom: blue
colorTo: green
sdk: docker
app_port: 7860
pinned: false
---

# ImmunoOrg 2.0 — The Autonomous, Self-Healing Enterprise

> An OpenEnv RL environment where an LLM defender learns to contain
> cyber-attacks **and** restructure the organization that lets them
> succeed. Built for the OpenEnv Hackathon (India 2026).

### For judges (60 s)

→ **[`JUDGES_60_SECONDS.md`](./JUDGES_60_SECONDS.md)** · Live app: https://hirann-immunoorg-v3.hf.space/demo (War Room + episode demo on **one** page).

**⏱ Crunch time?** **[`WIN_30MIN.md`](./WIN_30MIN.md)** (fastest calm path) → then **[`SUBMIT_NOW.md`](./SUBMIT_NOW.md)** for the full checklist.
Run **`python scripts/make_hackathon_training_figure.py`** to create **`evidence_grpo_training.png`** in ~2 minutes (real env curve + Colab pointer).

| Resource | Link |
| --- | --- |
| 🟢 **Live Space (direct host)** | https://hirann-immunoorg-v3.hf.space |
| 🤗 **HF Space card** | https://huggingface.co/spaces/hirann/immunoorg-v3 |
| 🎭 **War Room (Theme #1, inside `/demo`)** | Same page as the episode demo — **Live LLM War Room** section |
| 👩‍⚖️ **Judges — 60 s** | [`JUDGES_60_SECONDS.md`](./JUDGES_60_SECONDS.md) |
| 📋 **Problem statement (Round 2 formal)** | [`PROBLEM_STATEMENT.md`](./PROBLEM_STATEMENT.md) |
| 📝 **Mini-blog (writeup)** | [`Blog.MD`](./Blog.MD) |
| ✍️ **Publish HF post + YouTube** | [`PUBLISH_HACKATHON.md`](./PUBLISH_HACKATHON.md) |
| 🌐 **HF mini-blog (public URL)** | *Replace after publishing:* `HF_MINI_BLOG_URL` |
| ▶️ **YouTube demo (< 2 min)** | *Replace after upload:* `YOUTUBE_DEMO_URL` |
| 📖 **Judges' guide (official)** | [What judges look for](https://docs.google.com/document/d/1Odznuzwtb1ecDOm2t6ToZd4MuMXXfO6vWUGcxbC6mFs/edit?tab=t.0#bookmark=kix.2dz0x0nie3me) |
| 🎬 **Video script (90 sec)** | [`VIDEO_SCRIPT.md`](./VIDEO_SCRIPT.md) |
| 📔 **Training notebook (Colab + TRL GRPO)** | [Open in Colab](https://colab.research.google.com/github/Charannoo/immunoorg/blob/master/ImmunoOrg_Training_Colab.ipynb) · [`ImmunoOrg_Training_Colab.ipynb`](./ImmunoOrg_Training_Colab.ipynb) |
| ⚡ **Win in ~30 min (start here if stressed)** | [`WIN_30MIN.md`](./WIN_30MIN.md) |
| ⚡ **Deadline playbook (~5 h)** | [`SUBMIT_NOW.md`](./SUBMIT_NOW.md) |
| 🖥️ **HPC training pipeline** | [`scripts/hpc/HANDOFF.md`](./scripts/hpc/HANDOFF.md) |
| ✅ **Pre-submit checklist script** | `python scripts/verify_hackathon_submission.py` |
| 🔬 **Research notes** | [`RESEARCH.md`](./RESEARCH.md) |
| 🧪 **Judges' walkthrough** | [`JUDGING_GUIDE.md`](./JUDGING_GUIDE.md) |
| 💻 **GitHub source** | https://github.com/Charannoo/immunoorg |

**Before you submit:** publish a Hugging Face **post** or **YouTube** link (see [`PUBLISH_HACKATHON.md`](./PUBLISH_HACKATHON.md)), replace the two placeholder rows above with real URLs, run `python scripts/verify_hackathon_submission.py`, then push to GitHub and the Space.

**Windows + TRL:** if `import trl` fails with `UnicodeDecodeError`, run with UTF-8 enabled:
`set PYTHONUTF8=1` (cmd) or `$env:PYTHONUTF8=1` (PowerShell).

---

## TL;DR

Two graphs run in parallel inside one episode:

1. **A technical network** — 7-23 nodes (web servers, DBs, CI/CD, DNS) with
   real vulnerability scores.
2. **An organizational graph** — departments with approval chains, trust
   scores, and political deadlocks.

The agent has 28 actions across 3 categories (tactical / strategic /
diagnostic) and must fix both layers simultaneously, against an adversary
that adapts to its policy, under conflicting board directives, with a
**5-track composable reward** that no single signal can hack.

Read [`PROBLEM_STATEMENT.md`](./PROBLEM_STATEMENT.md) for the formal
Round 2 definition (problem / env / capabilities / tasks / reward /
post-training).

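The "no single signal can hack" property can be illustrated with a toy weighted composition (a minimal sketch only — the track names, weights, and clamping below are assumptions for illustration, not the env's actual implementation; see [`PROBLEM_STATEMENT.md`](./PROBLEM_STATEMENT.md) for the real definition):

```python
# Hypothetical 5-track composable reward. Track names and weights are
# illustrative assumptions, NOT the env's real constants.
TRACK_WEIGHTS = {
    "containment": 0.30,  # did the attack stop spreading?
    "uptime":      0.20,  # services kept alive
    "org_health":  0.20,  # approval latency / trust improving
    "diagnosis":   0.15,  # correct root-cause calls
    "directive":   0.15,  # board-directive compliance
}

def composed_reward(tracks: dict) -> float:
    """Weighted sum of per-track scores clamped to [-1, 1]. Because every
    weight is well below 1.0, maxing out one track caps its contribution,
    so no single track can dominate the total."""
    return sum(
        TRACK_WEIGHTS[name] * max(-1.0, min(1.0, score))
        for name, score in tracks.items()
    )
```

Under this toy scheme, a policy that only spams containment tops out at 0.30 of the achievable 1.0, which is the composability argument in miniature.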

---

## Evidence (committed PNGs — judges scan these in seconds)

All charts are produced by `python generate_evidence.py` and
`python scripts/generate_training_evidence.py` and committed to the repo.

![Random vs Heuristic policies across difficulty levels 1-4](./evidence_policy_comparison.png)
*Random vs Heuristic across all 4 difficulty levels — the Heuristic policy
(gold standard for reward shaping) beats Random by 4-11 points,
showing the env is learnable and the reward shaping carries signal.*

![Per-scenario reward lift Random vs Heuristic](./evidence_scenario_rewards.png)
*Per-family reward (10 episodes each, real env rollouts). The heuristic
policy beats the random baseline in **every** scenario family — that
lift is the signal the GRPO trainer climbs.*

![Self-improvement across 6 generations of org mutation](./evidence_self_improvement.png)
*6 generations of self-improvement: reward-per-step trends up,
time-to-containment trends down, and org efficiency rises as mutations accumulate.*

![5-track composable reward breakdown](./evidence_5track_reward.png)
*Per-step contribution of the 5 reward tracks. No single track dominates
— the anti-reward-hacking property called out in the brief.*

![Org graph: 3-day vs 4-hour approval](./evidence_org_before_after.png)
*The "self-healed enterprise" moment: the org graph after the agent
restructures it via `ESTABLISH_DEVSECOPS` + `REDUCE_BUREAUCRACY`.
Approval latency drops from 72 h to 4 h.*

![War Room debate + DevSecOps Mesh activity](./evidence_war_room_mesh.png)
*Multi-agent War Room consensus dynamics + 4-gate DevSecOps Mesh event counts.*

**GRPO training curve (`evidence_grpo_training.png`):** generate it from a real TRL run, then:

```bash
python scripts/plot_grpo_log_history.py immunoorg-defender/grpo_log_history.json
```

Or run **Colab Step 4b**, which saves the figure directly. See [`training_logs/README.md`](./training_logs/README.md).

Additional eval PNGs from the full HPC pipeline may be uploaded to
[`hirann/immunoorg-grpo-defender`](https://huggingface.co/hirann/immunoorg-grpo-defender).


---

## Quick start

### Click the live demo
→ https://hirann-immunoorg-v3.hf.space → **▶ Launch interactive demo**

### Run the OpenEnv environment locally

```bash
git clone https://github.com/Charannoo/immunoorg
cd immunoorg
python -m venv .venv
source .venv/bin/activate        # Linux/macOS
# .venv\Scripts\Activate.ps1     # Windows PowerShell
pip install -r requirements.txt
uvicorn server.main:app --reload --port 7860
```

Then visit http://localhost:7860 (landing) or http://localhost:7860/demo (Gradio UI).

### Train with GRPO (3 paths)

| Where | When to use | Time |
| --- | --- | --- |
| **HPC** (`scripts/hpc/run_all.sh`) | Best evidence: full datasets + SFT + GRPO + 100-ep eval, all chained via SLURM, auto-pushes to HF Hub | ~3-4 hr (1× A100) / ~1-1.5 hr (4× A100) |
| **Colab T4** (`ImmunoOrg_Training_Colab.ipynb`) | Free, browser-only, Qwen2.5-3B | ~30-45 min |
| **Local CPU smoke** (`python -m training.train_grpo --smoke-test`) | Sanity check only | very slow |

See [`scripts/hpc/HANDOFF.md`](./scripts/hpc/HANDOFF.md) for the hand-off
HPC instructions.

### Run the test suite

```bash
pytest tests -q   # 32 passed, 1 skipped (live API test, only runs when uvicorn is up)
```


---

## OpenEnv API surface

| Endpoint | Method | Purpose |
| --- | --- | --- |
| `/` | GET | Landing page (HTML) with a link to `/demo` |
| `/demo` | GET (Gradio) | Interactive episode demo + **War Room** accordion (Theme #1 LLM debate) |
| `/health` | GET | Liveness + version |
| `/reset` | POST | Start a fresh episode (`{"difficulty": 1, "seed": 42}`) |
| `/step` | POST | Apply an action (`{"action": {...}}`) |
| `/state` | GET | Full server-side state (debug / dashboard) |
| `/directive` | POST | Inject a Board Directive mid-episode |
| `/trained_status` | GET | Is the trained LoRA loaded yet? |
| `/openenv.yaml` | GET | Serve the manifest |
| `/api/war-room` | POST | Optional JSON API for the same debate backend |
| `/admin/training/start` | GET | Kick off GRPO training (token-gated) |
| `/admin/training/status` | GET | JSON status of the training job |
| `/admin/training/log` | GET | Tail the training log |

Action schema lives in [`openenv.yaml`](./openenv.yaml) and matches
`immunoorg.models.ImmunoAction`.

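The `/reset` and `/step` bodies above can be exercised with a few lines of stdlib Python (a sketch only — the action dict's real fields come from `openenv.yaml` / `immunoorg.models.ImmunoAction`; the `ISOLATE_NODE` payload in the usage comment is a hypothetical example, not a verified action name):

```python
import json
import urllib.request

BASE = "http://localhost:7860"  # local uvicorn from the Quick start


def reset_payload(difficulty: int = 1, seed: int = 42) -> dict:
    # Matches the documented /reset body: {"difficulty": 1, "seed": 42}
    return {"difficulty": difficulty, "seed": seed}


def step_payload(action: dict) -> dict:
    # Matches the documented /step body: {"action": {...}}
    return {"action": action}


def post(path: str, payload: dict) -> dict:
    """POST a JSON payload to the env server and return the JSON reply."""
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Usage (requires the server to be running; action fields are hypothetical):
#   obs = post("/reset", reset_payload(difficulty=1, seed=42))
#   obs = post("/step", step_payload({"type": "ISOLATE_NODE", "target": "web-1"}))
```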

---

## How this maps to the judging criteria

| Criterion | Weight | Where to look |
| --- | ---: | --- |
| **Environment Innovation** | 40% | Socio-technical RL, 5-track reward, War Room, DevSecOps Mesh, 50-step Polymorphic Migration. See [`PROBLEM_STATEMENT.md`](./PROBLEM_STATEMENT.md) §1. |
| **Storytelling** | 30% | Live demo on the Space + [`BLOG_POST.md`](./BLOG_POST.md) + the 6 evidence PNGs above + [`VIDEO_SCRIPT.md`](./VIDEO_SCRIPT.md). |
| **Improvement in Rewards** | 20% | `evidence_*.png` files committed; the HPC pipeline produces `evidence_grpo_training.png` + `evidence_eval_per_family.png` from a real Qwen2.5-7B GRPO run. |
| **Reward & Training Pipeline** | 10% | [`training/train_grpo.py`](./training/train_grpo.py) (3 verifiable reward fns), [`training/dataset_generator.py`](./training/dataset_generator.py) (1700+ scenarios), [`training/scenario_hooks.py`](./training/scenario_hooks.py) (5 elite families), [`scripts/hpc/`](./scripts/hpc/) (full SFT→GRPO→eval pipeline). |

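For the "3 verifiable reward fns" row: a TRL-style GRPO reward function is a callable that scores a batch of generated completions and returns one float per completion. A minimal hedged sketch (the real functions live in `training/train_grpo.py` and are richer than this; the keyword rule and plain-string completion format here are illustrative assumptions):

```python
# Illustrative TRL-style verifiable reward function: one float per
# completion. The actual logic in training/train_grpo.py is richer
# than this simple keyword check, and TRL's conversational format may
# pass message dicts rather than plain strings.
def directive_compliance_reward(completions, **kwargs):
    """+1.0 if a completion mentions the board directive, else 0.0."""
    return [1.0 if "directive" in text.lower() else 0.0 for text in completions]
```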

---

## Anti-reward-hacking measures (judge guidance §7 + §21)

- 3 independent reward functions at the trainer + the 5-track composable reward in the env.
- False-positive isolation penalty (burns half the uptime budget).
- Phase-gated transitions require *real work*, not step counts.
- Org friction — tactical spam is denied; the agent must do strategic work.
- War-Room hallucination flagging via a shared FactStore.
- Per-step training penalties for ignoring board directives or retrying denied isolations.

Full details in [`PROBLEM_STATEMENT.md`](./PROBLEM_STATEMENT.md) §5c and
[`RESEARCH.md`](./RESEARCH.md).

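The false-positive isolation penalty in the list above can be illustrated with a toy rule (assumed shape and constants — the env's real budget accounting lives in the `immunoorg` package):

```python
# Toy illustration of the false-positive isolation penalty: isolating a
# clean node burns half the remaining uptime budget, so indiscriminate
# "isolate everything" policies self-destruct. Constants are assumptions,
# not the env's real values.
def apply_isolation(node_is_compromised: bool, uptime_budget: float):
    """Return (reward, remaining_uptime_budget) for one isolation action."""
    if node_is_compromised:
        return 1.0, uptime_budget        # correct containment, budget intact
    return -1.0, uptime_budget * 0.5     # false positive: half the budget gone
```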

---

## Status

- ✅ OpenEnv: `openenv-core>=0.2.3` (latest on PyPI) in the Space `requirements.txt` + `openenv.yaml` + HTTP `reset`/`step`/`state`; `import openenv.core` verified at runtime
- ✅ Hugging Face Space: https://huggingface.co/spaces/hirann/immunoorg-v3
- ✅ Gradio `/demo` includes the **War Room** accordion (LLM debate — supports `GROQ_API_KEY`, `OPENAI_API_KEY`, or `ANTHROPIC_API_KEY`)
- ✅ 2× A10G hardware configured for fast LoRA inference in the live demo
- ✅ Colab + TRL GRPO + Unsloth; `training/train_grpo.py` exports `grpo_log_history.json` for plots
- ✅ Evidence PNGs (env rollouts + rewards) committed; add `evidence_grpo_training.png` from Colab or `scripts/plot_grpo_log_history.py`
- ✅ Writeups: [`Blog.MD`](./Blog.MD), [`VIDEO_SCRIPT.md`](./VIDEO_SCRIPT.md) — **publish** per [`PUBLISH_HACKATHON.md`](./PUBLISH_HACKATHON.md)
- ✅ Training logs and scripts shared in [`training/`](./training/) and [`training_logs/`](./training_logs/)
- ✅ Run `python scripts/verify_hackathon_submission.py` for a quick checklist

Built for the OpenEnv Hackathon (India 2026).