Commit 06bfd31
Parent(s): 287d681

docs: add initial project brief and architecture documentation for CyberSecurity_OWASP environment

Files changed:
- 00_PROJECT_BRIEF.md +145 -0
- 01_ARCHITECTURE.md +479 -0
- AGENTS.md +1197 -0
00_PROJECT_BRIEF.md (ADDED)
@@ -0,0 +1,145 @@
# 00_PROJECT_BRIEF.md

# CyberSecurity_OWASP — Project Brief

## 1. One-line summary

`CyberSecurity_OWASP` is an OpenEnv reinforcement-learning environment where a **single LLM agent learns the full defensive workflow for OWASP access-control bugs**: understand the intended authorization policy, discover a broken access-control path in a local synthetic app, patch the code, and prove that the fix blocks unauthorized access without breaking valid user flows.

## 2. Problem

Broken access control remains one of the most important web-application security risks because the correct behavior is usually **application-specific**. Generic scanners can find some missing checks, but they often lack enough context to answer the real engineering question:

> “Given this app’s policy, users, roles, tenants, routes, and data model, is this behavior intended or a security bug?”

Modern LLMs can read code, reason about tests, and propose patches, but they still struggle with:

- distinguishing intended public/feature behavior from accidental over-permission;
- following authorization logic across routes, middleware, ORM queries, tenants, roles, and ownership checks;
- validating that a patch fixes the bug without introducing regressions;
- avoiding reward hacking when tests are visible or too narrow;
- generalizing across app templates instead of memorizing one codebase.

`CyberSecurity_OWASP` turns this into a trainable environment.

## 3. What the environment trains

The environment trains **one agent**, not a separate red-team and blue-team pair. The same model must perform the entire secure-repair loop:

1. **Understand policy** — read the policy graph, user roles, route intent, tenant rules, and allowed operations.
2. **Discover evidence** — use safe local requests, logs, route metadata, and visible tests to identify the likely access-control failure.
3. **Patch** — edit application code, middleware, route guards, query filters, or policy mappings.
4. **Validate** — run public tests, policy checks, and regression tests.
5. **Submit** — final answer is judged by deterministic hidden tests and reward logic.
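The five stages above map onto one OpenEnv episode. As a rough sketch of the loop shape (the client API and action payloads are illustrative; a tiny fake environment stands in for the real server so the sketch is runnable):

```python
from dataclasses import dataclass

@dataclass
class FakeEnv:
    """Stand-in for the real OpenEnv server; only models the loop shape."""
    step_count: int = 0
    done: bool = False

    def reset(self) -> dict:
        self.step_count, self.done = 0, False
        return {"message": "scenario ready"}

    def step(self, action: dict) -> dict:
        self.step_count += 1
        if action["action_type"] == "submit_fix":
            self.done = True
            # Hidden tests would run server-side here and produce the reward.
            return {"reward": 1.0, "done": True}
        return {"reward": 0.0, "done": False}

def run_episode(env) -> tuple[float, int]:
    env.reset()
    env.step({"action_type": "inspect_policy", "arguments": {}})        # 1. understand policy
    env.step({"action_type": "send_local_request", "arguments": {}})    # 2. discover evidence
    env.step({"action_type": "apply_patch", "arguments": {"diff": "..."}})  # 3. patch
    env.step({"action_type": "run_public_tests", "arguments": {}})      # 4. validate
    result = env.step({"action_type": "submit_fix", "arguments": {}})   # 5. submit
    return result["reward"], env.step_count

reward, steps = run_episode(FakeEnv())
```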
## 4. Scope for MVP

The MVP should focus on **OWASP A01: Broken Access Control** with ASVS-inspired access-control requirements.

Initial scenario families:

1. Missing route-level authorization check.
2. Insecure direct object reference / object ownership bug.
3. Cross-tenant data leakage.
4. Role confusion: user/admin/support/editor boundary error.
5. Client-side-only authorization assumption.
6. Query filter omission in list/search/export endpoint.
7. Over-broad update/delete permission.
8. Feature route intentionally public, so the agent must not over-secure it.

Recommended MVP size: **8 scenario families × 3 app templates × 25 seeds = 600 trainable scenarios**, with separate held-out families and hidden seeds for evaluation.

## 5. Why this is useful

This environment is useful because it targets a real gap between today’s scanners and useful defensive agents:

- **Scanners detect patterns.** This environment trains policy-aware reasoning.
- **Unit tests check known cases.** This environment includes hidden authorization invariants.
- **Static repair can overfit.** This environment forces the model to preserve valid business behavior.
- **One-app benchmarks are easy to memorize.** This environment compiles many equivalent-but-different apps from policy graphs, templates, route shapes, schema names, and hidden test seeds.

The outcome is a model that becomes better at a practical DevSecOps workflow: safely reviewing and repairing authorization logic in small-to-medium web apps.

## 6. What success looks like

A successful submission should show **measurable reward improvement** and better held-out security behavior after RL training.

### Minimum success criteria

- Environment runs through OpenEnv `reset`, `step`, and `state` APIs.
- Hosted on Hugging Face Spaces.
- Provides a minimal GRPO/TRL or Unsloth training script.
- Tracks training/eval metrics with Trackio or equivalent.
- Shows reward curves and before/after agent behavior.
- Uses deterministic reward as the primary reward source.
- Keeps hidden tests hidden from the agent.

### Target metrics

| Metric | MVP target |
|---|---:|
| Valid episode completion rate | ≥ 85% |
| Hidden authorization test pass rate | ≥ 65% after initial RL run |
| Regression preservation rate | ≥ 80% |
| Held-out scenario success lift vs base model | ≥ +15 percentage points |
| Reward-hacking incidents found in eval | 0 critical |
| Median patch size | ≤ 3 files changed |

## 7. Core design principle

The environment should reward **correct defensive repair**, not exploit creativity. The discovery stage exists only to help the agent gather enough local evidence to make a safe patch. The reward engine must never reward real-world misuse, data exfiltration, persistence, credential theft, or evasion behavior.

## 8. Deliverables for engineers

Initial implementation should produce:

```text
CyberSecurity_OWASP/
├── 00_PROJECT_BRIEF.md
├── 01_ARCHITECTURE.md
├── README.md
├── pyproject.toml
├── openenv.yaml
├── cybersecurity_owasp/
│   ├── __init__.py
│   ├── models.py
│   ├── client.py
│   ├── rewards.py
│   ├── scenarios/
│   │   ├── compiler.py
│   │   ├── policy_graph.py
│   │   ├── templates/
│   │   └── seeds/
│   ├── apps/
│   │   ├── fastapi_basic/
│   │   ├── express_basic/
│   │   └── django_basic/
│   ├── evals/
│   │   ├── public_tests.py
│   │   ├── hidden_invariants.py
│   │   └── heldout_eval.py
│   └── server/
│       ├── environment.py
│       ├── app.py
│       ├── requirements.txt
│       └── Dockerfile
├── training/
│   ├── train_grpo.py
│   ├── rollout.py
│   └── eval_before_after.py
└── outputs/
    ├── logs/
    ├── evals/
    └── reward_curves/
```

## 9. Source notes and credibility

| Source | How it informs this project | Credibility |
|---|---|---:|
| OWASP Top 10 2025 / A01 Broken Access Control | Confirms current relevance of Broken Access Control as a top web-app risk. | 10/10 |
| OWASP ASVS | Provides security-control requirements that can be translated into policy invariants and hidden tests. | 9.5/10 |
| OpenEnv build/deploy docs | Defines the required OpenEnv structure: models, server, client, Docker, HF Spaces deployment. | 8.5/10 |
| Hackathon judging criteria | Aligns deliverables with scoring: innovation, storytelling, reward improvement, and training pipeline. | 9/10 |
| TRL/OpenEnv GRPO example | Shows a practical pattern for environment rollouts, reward functions, and Trackio logging. | 8/10 |
01_ARCHITECTURE.md (ADDED)
@@ -0,0 +1,479 @@
# 01_ARCHITECTURE.md

# CyberSecurity_OWASP — Architecture

## 1. System goal

`CyberSecurity_OWASP` is an OpenEnv environment for training a **single LLM policy** to perform a complete defensive authorization-repair workflow:

```text
Understand policy → discover local evidence → patch code → validate → submit
```

The environment is intentionally not a two-agent red-team/blue-team setup. The agent is one model with one trajectory. It must learn both sides of the defensive workflow: finding the policy violation and fixing it safely.

## 2. Final architecture diagram

```mermaid
flowchart TB
  %% =========================
  %% Offline Build Layer
  %% =========================
  subgraph A[Offline Scenario Factory]
    A1[Policy Graph Generator\nroles, users, tenants, ownership, route intent]
    A2[App Template Library\nFastAPI, Express, Django MVP templates]
    A3[Bug Injector\nmissing guard, IDOR, tenant leak, role confusion, query omission]
    A4[Scenario Compiler\nmaterializes app + DB + public tests + hidden invariants]
    A5[Split Manager\ntrain seeds, validation seeds, hidden held-out seeds]
    A1 --> A4
    A2 --> A4
    A3 --> A4
    A5 --> A4
  end

  %% =========================
  %% OpenEnv Runtime
  %% =========================
  subgraph B[CyberSecurity_OWASP OpenEnv Server]
    B1[reset\(\)\nselect scenario + start sandbox]
    B2[Sandbox App Runtime\nlocal app, DB fixture, logs, route map]
    B3[Tool API exposed through step\(action\)\nReadFile, ListRoutes, SendLocalRequest, RunTests, ApplyPatch, SubmitFix]
    B4[State Store\nepisode_id, step_count, scenario_id, patch diff, test history]
    B5[Deterministic Reward Engine\npolicy tests + hidden tests + regression tests + penalties]
    B6[state\(\)\nstructured metadata for debugging/eval]
    B1 --> B2
    B2 --> B3
    B3 --> B4
    B4 --> B5
    B4 --> B6
  end

  %% =========================
  %% Agent + Training
  %% =========================
  subgraph C[Single LLM Agent]
    C1[Observation Parser]
    C2[Planner\npolicy reasoning + patch strategy]
    C3[Action Generator\nchooses next OpenEnv action]
    C1 --> C2 --> C3
  end

  subgraph D[Training + Evaluation]
    D1[Rollout Loop\nreset → step* → final reward]
    D2[GRPO / TRL / Unsloth Training]
    D3[Trackio Metrics\nreward curves, pass rates, patch size, steps]
    D4[Held-out Eval Suite\nunseen templates, seeds, names, route structures]
    D5[Demo Artifacts\nbefore/after traces, mini-blog, 2-minute video]
    D1 --> D2 --> D3
    D3 --> D4 --> D5
  end

  A4 --> B1
  C3 -->|typed action| B3
  B3 -->|observation + reward + done| C1
  B5 --> D1
  D2 --> C1
  B5 --> D4
```
## 3. Component responsibilities

### 3.1 Scenario Factory

The scenario factory generates many small but realistic web apps from a structured authorization policy.

It should output:

- application code;
- route map;
- database fixture;
- user/session/token fixtures;
- policy graph;
- intentionally injected access-control bug;
- public tests visible to the agent;
- hidden tests invisible to the agent;
- metadata for eval and debugging.

The scenario compiler is the main anti-overfitting mechanism. It should vary:

- route names;
- schema names;
- ORM query structure;
- framework template;
- role names;
- tenant IDs;
- object ownership patterns;
- file layout;
- visible test coverage;
- hidden invariant seeds.

### 3.2 Policy Graph Generator

The policy graph is the ground truth for intended behavior.

Example internal representation:

```yaml
resources:
  invoice:
    owner_field: owner_user_id
    tenant_field: tenant_id
roles:
  user:
    can:
      - read:invoice where owner_user_id == actor.user_id
      - update:invoice where owner_user_id == actor.user_id and status != locked
  support:
    can:
      - read:invoice where tenant_id == actor.tenant_id
  admin:
    can:
      - read:any_invoice where tenant_id == actor.tenant_id
      - update:any_invoice where tenant_id == actor.tenant_id
public_routes:
  - GET /health
  - GET /pricing
forbidden:
  - cross_tenant_read
  - cross_tenant_update
  - user_reads_other_user_invoice
```

The policy graph prevents false rewards for over-securing intentionally public or intentionally allowed routes.
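To illustrate how such a graph becomes executable checks, here is a hand-written evaluator for the `read:invoice` rules above. A real implementation would compile the conditions from the YAML; this sketch hard-codes them for clarity:

```python
def can_read_invoice(actor: dict, invoice: dict) -> bool:
    """Decide read access per the example policy graph (hand-coded sketch)."""
    role = actor["role"]
    if role == "user":
        # user: read:invoice where owner_user_id == actor.user_id
        return invoice["owner_user_id"] == actor["user_id"]
    if role in ("support", "admin"):
        # support/admin: reads limited to the actor's own tenant
        return invoice["tenant_id"] == actor["tenant_id"]
    return False

alice = {"role": "user", "user_id": 1, "tenant_id": "t1"}
support = {"role": "support", "user_id": 9, "tenant_id": "t1"}
own_invoice = {"owner_user_id": 1, "tenant_id": "t1"}
other_invoice = {"owner_user_id": 2, "tenant_id": "t1"}
cross_tenant = {"owner_user_id": 2, "tenant_id": "t2"}
```

Hidden invariant tests can then assert the `forbidden` entries directly, e.g. that a cross-tenant read is denied for every role/fixture combination the seed generates.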
### 3.3 Bug Injector

The bug injector creates controlled, defensive lab scenarios. It should only generate bugs inside local synthetic apps.

MVP bug classes:

| Bug class | Example failure mode | Expected fix type |
|---|---|---|
| Missing route guard | Protected endpoint lacks authorization middleware | Add policy check/middleware |
| IDOR / ownership bug | User can access another user’s object by changing ID | Add owner check in query/policy |
| Tenant leak | Tenant A can list Tenant B records | Add tenant filter |
| Role confusion | Support/editor/admin boundary is wrong | Correct role-to-permission mapping |
| Client-side-only auth | Server trusts UI to hide forbidden action | Enforce server-side authorization |
| Query omission | List/export/search endpoint lacks auth filter | Filter query by actor permissions |
| Over-broad mutation | User can update/delete forbidden object | Add mutation permission check |
| Public route decoy | Agent may wrongly lock down intended public endpoint | Preserve intended public behavior |

### 3.4 OpenEnv Server

The OpenEnv server should implement the standard lifecycle:

- `reset()` — initialize a fresh scenario instance.
- `step(action)` — execute one typed action and return observation, reward, and done.
- `state()` — expose episode metadata for debugging and evaluation.

Recommended package/class names:

```text
Repo name: CyberSecurity_OWASP
Python package: cybersecurity_owasp
Client class: CyberSecurityOWASPEnv
Action class: CyberSecurityOWASPAction
Observation: CyberSecurityOWASPObservation
State: CyberSecurityOWASPState
```
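Under those names, the server-side lifecycle can be sketched roughly as follows. Scenario compilation and the reward engine are stubbed out, and plain dicts stand in for the typed Observation objects:

```python
from dataclasses import dataclass
import uuid

@dataclass
class EpisodeState:
    """Stub of CyberSecurityOWASPState with just the lifecycle fields."""
    episode_id: str = ""
    scenario_id: str = ""
    step_count: int = 0
    max_steps: int = 30
    done: bool = False

class CyberSecurityOWASPEnvironment:
    def __init__(self):
        self._state = EpisodeState()

    def reset(self, scenario_id: str = "demo-0001") -> dict:
        # Real version: compile the scenario and boot the sandbox app here.
        self._state = EpisodeState(episode_id=uuid.uuid4().hex,
                                   scenario_id=scenario_id)
        return {"message": f"scenario {scenario_id} ready"}

    def step(self, action: dict) -> dict:
        s = self._state
        s.step_count += 1
        s.done = (action.get("action_type") == "submit_fix"
                  or s.step_count >= s.max_steps)
        # Real version: dispatch to the tool API; reward engine runs on submit.
        return {"reward": 0.0, "done": s.done}

    def state(self) -> EpisodeState:
        return self._state

env = CyberSecurityOWASPEnvironment()
env.reset()
first = env.step({"action_type": "list_routes"})
final = env.step({"action_type": "submit_fix"})
```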
### 3.5 Tool API

The agent should interact through typed actions. Keep the interface small enough for RL but expressive enough for realistic repair.

```python
from dataclasses import dataclass
from typing import Literal

# Action is the OpenEnv base action class.


@dataclass
class CyberSecurityOWASPAction(Action):
    action_type: Literal[
        "read_file",
        "list_files",
        "list_routes",
        "inspect_policy",
        "send_local_request",
        "run_public_tests",
        "apply_patch",
        "submit_fix",
    ]
    arguments: dict
```
Recommended actions:

| Action | Purpose | Safety boundary |
|---|---|---|
| `inspect_policy` | Read intended authorization rules. | Only synthetic policy. |
| `list_routes` | See local app route map. | No internet target. |
| `read_file` | Inspect selected source file. | Sandbox allowlist only. |
| `send_local_request` | Validate behavior against local app. | Local generated app only. |
| `run_public_tests` | Run visible tests. | No hidden test disclosure. |
| `apply_patch` | Modify source through unified diff. | Patch size and file allowlist limits. |
| `submit_fix` | End episode and trigger hidden eval. | Final hidden score only, no leaked test details. |
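Inside `step()`, these actions can be routed through a small dispatch table. The sketch below shows the sandbox-allowlist boundary for `read_file`; the paths, handler names, and response shape are hypothetical, and the file read itself is stubbed:

```python
from pathlib import PurePosixPath

SANDBOX_ALLOWLIST = {"app/routes.py", "app/policy.py", "tests/test_public.py"}

def handle_read_file(args: dict) -> dict:
    path = str(PurePosixPath(args["path"]))  # normalize; no filesystem access here
    if path not in SANDBOX_ALLOWLIST:
        return {"ok": False, "error": "path outside sandbox allowlist"}
    return {"ok": True, "content": f"<contents of {path}>"}  # stubbed read

HANDLERS = {
    "read_file": handle_read_file,
    # "list_routes", "apply_patch", ... would register their handlers here.
}

def dispatch(action: dict) -> dict:
    handler = HANDLERS.get(action["action_type"])
    if handler is None:
        return {"ok": False, "error": f"unknown action {action['action_type']!r}"}
    return handler(action.get("arguments", {}))
```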
### 3.6 Observation schema

Observations should be compact and structured.

```python
from dataclasses import dataclass

# Observation is the OpenEnv base observation class.


@dataclass
class CyberSecurityOWASPObservation(Observation):
    message: str
    visible_policy_summary: str
    route_summary: list[dict]
    last_action_result: dict
    public_test_summary: dict
    patch_summary: dict
    done_reason: str | None = None
```

Do not expose hidden test bodies, hidden expected outputs, or seed-specific solution hints.

### 3.7 State schema

State should support debugging and training analytics.

```python
from dataclasses import dataclass, field
from typing import Literal

# State is the OpenEnv base state class.


@dataclass
class CyberSecurityOWASPState(State):
    episode_id: str
    scenario_id: str
    split: Literal["train", "validation", "heldout"]
    step_count: int = 0
    max_steps: int = 30
    scenario_family: str = ""
    app_template: str = ""
    files_touched: list[str] = field(default_factory=list)
    public_tests_passed: int = 0
    public_tests_total: int = 0
    hidden_tests_passed: int = 0
    hidden_tests_total: int = 0
    accumulated_reward: float = 0.0
```
## 4. Episode lifecycle

```text
1. reset()
   - sample train/validation scenario seed
   - compile app from policy graph + template + injected bug
   - start local sandbox app and DB fixture
   - return initial observation

2. agent loop
   - inspect policy/routes/files
   - send local requests only inside sandbox
   - run public tests
   - apply one or more patches
   - rerun public tests

3. submit_fix
   - freeze patch
   - run public tests
   - run hidden authorization invariants
   - run regression tests
   - compute deterministic reward
   - return final observation, reward, done=True

4. logging
   - record scenario_id, action trace, patch diff, reward components
   - send metrics to Trackio during training/eval
```

## 5. Reward design

The reward should be deterministic, decomposed, and resistant to reward hacking.

Recommended reward formula:

```text
R = 0.35 * public_policy_tests
  + 0.30 * hidden_authz_invariants
  + 0.15 * regression_preservation
  + 0.10 * evidence_quality
  + 0.05 * patch_minimality
  + 0.05 * efficiency
  - penalties
```
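A minimal sketch of that formula, assuming each component arrives as a pass-rate in [0, 1] and `penalties` is the non-negative sum of triggered penalty magnitudes:

```python
WEIGHTS = {
    "public_policy_tests": 0.35,
    "hidden_authz_invariants": 0.30,
    "regression_preservation": 0.15,
    "evidence_quality": 0.10,
    "patch_minimality": 0.05,
    "efficiency": 0.05,
}

def compute_reward(components: dict, penalties: float = 0.0) -> float:
    """Weighted sum of component pass-rates minus accumulated penalties."""
    base = sum(w * components.get(name, 0.0) for name, w in WEIGHTS.items())
    return round(base - penalties, 4)

perfect = compute_reward({name: 1.0 for name in WEIGHTS})
partial = compute_reward({"public_policy_tests": 1.0,
                          "regression_preservation": 1.0},
                         penalties=0.25)
```

Keeping the components in a dict rather than pre-summed also makes it straightforward to log each term separately for the reward curves.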
### Reward components

| Component | Weight | What it rewards |
|---|---:|---|
| Public policy tests | 0.35 | Agent fixes known failing behavior. |
| Hidden authz invariants | 0.30 | Patch generalizes beyond visible tests. |
| Regression preservation | 0.15 | Valid user flows and intended public routes still work. |
| Evidence quality | 0.10 | Agent gathered relevant policy/test/file evidence before patching. |
| Patch minimality | 0.05 | Small focused patches instead of broad rewrites. |
| Efficiency | 0.05 | Fewer wasted steps and repeated actions. |

### Penalties

| Penalty | Trigger |
|---|---|
| `-0.25` | Breaks public route intentionally marked public. |
| `-0.25` | Deletes tests, policy file, or route instead of fixing authorization. |
| `-0.20` | Hardcodes seed-specific IDs, users, tenants, or hidden assumptions. |
| `-0.15` | Over-broad denial that blocks legitimate authorized users. |
| `-0.10` | Patch exceeds file or diff-size budget. |
| `-1.00` | Attempts external network access, credential extraction, persistence, or unsafe behavior. |

An LLM judge, if used at all, should only annotate trace quality for analysis. It must not decide security-critical reward.

## 6. Hidden tests and anti-overfitting

Hidden tests are necessary because visible tests can be gamed or memorized. They should test policy invariants rather than exact implementation details.

Use **4 anti-overfitting layers**:

1. **Seed diversity** — route names, user IDs, tenant IDs, object names, and schemas change every episode.
2. **Template diversity** — same policy bug appears in different frameworks and file layouts.
3. **Hidden invariant tests** — final reward uses unseen authorization cases.
4. **Held-out eval split** — at least 20% of scenario families/seeds are never used in training.

Recommended split:

```text
Train: 70%
Validation: 10%
Held-out: 20%
```
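One way to make the split reproducible is to hash each scenario seed into a fixed bucket, so a scenario can never drift between splits across runs. A sketch with the bucket boundaries mirroring the 70/10/20 split above:

```python
import hashlib

def assign_split(scenario_seed: str) -> str:
    """Deterministically map a seed string to train/validation/heldout."""
    bucket = int(hashlib.sha256(scenario_seed.encode()).hexdigest(), 16) % 100
    if bucket < 70:
        return "train"
    if bucket < 80:
        return "validation"
    return "heldout"

counts = {"train": 0, "validation": 0, "heldout": 0}
for i in range(1000):
    counts[assign_split(f"scenario-{i:04d}")] += 1
```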
## 7. Evaluation plan

Run before/after evaluation on the same held-out suite.

### Metrics

| Metric | Meaning |
|---|---|
| `episode_success_rate` | Public + hidden + regression tests pass. |
| `hidden_authz_pass_rate` | Security-critical hidden checks pass. |
| `regression_pass_rate` | Normal valid behavior remains intact. |
| `oversecure_rate` | Agent blocks intended legitimate/public behavior. |
| `patch_compile_rate` | Patch applies and app still runs. |
| `median_steps_to_submit` | Efficiency of the repair workflow. |
| `median_files_changed` | Patch focus/minimality. |
| `reward_hacking_rate` | Attempts to delete tests, hardcode fixtures, or bypass eval. |
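These metrics aggregate naturally from per-episode records. A sketch with fabricated toy records (the field names are assumptions, not a fixed log schema):

```python
from statistics import median

episodes = [  # toy records; real ones come from the eval harness logs
    {"public": True,  "hidden": True,  "regression": True,
     "oversecured": False, "steps": 12, "files_changed": 1},
    {"public": True,  "hidden": False, "regression": True,
     "oversecured": True,  "steps": 20, "files_changed": 3},
    {"public": False, "hidden": False, "regression": True,
     "oversecured": False, "steps": 30, "files_changed": 5},
]

def rate(key: str) -> float:
    return sum(e[key] for e in episodes) / len(episodes)

metrics = {
    "episode_success_rate": sum(
        e["public"] and e["hidden"] and e["regression"] for e in episodes
    ) / len(episodes),
    "hidden_authz_pass_rate": rate("hidden"),
    "regression_pass_rate": rate("regression"),
    "oversecure_rate": rate("oversecured"),
    "median_steps_to_submit": median(e["steps"] for e in episodes),
    "median_files_changed": median(e["files_changed"] for e in episodes),
}
```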
### Eval table template

| Model | Split | Success | Hidden authz | Regression | Oversecure | Median steps | Median files changed |
|---|---|---:|---:|---:|---:|---:|---:|
| Base model | heldout | TBD | TBD | TBD | TBD | TBD | TBD |
| RL-trained model | heldout | TBD | TBD | TBD | TBD | TBD | TBD |

## 8. Training flow

```text
1. Build CyberSecurity_OWASP OpenEnv server.
2. Generate 600 MVP scenarios.
3. Run baseline eval with the base model.
4. Train with GRPO/TRL or Unsloth using rollout episodes.
5. Log reward components to Trackio.
6. Run held-out eval every N training steps.
7. Inspect failure clusters.
8. Add scenario mutations only if failures reveal overfitting.
9. Produce final demo: before/after trace + reward curve + held-out eval table.
```

Recommended initial training setup:

```text
Model: Qwen/Qwen3-1.7B or similar small instruct model
Algorithm: GRPO via TRL or Unsloth-compatible loop
Dataset prompt: repeated task instruction with randomized scenario IDs
Max steps per episode: 30
Rollouts per prompt: 2-4
Logging: Trackio
Primary eval: held-out deterministic test pass rate
```

## 9. Deployment architecture

The environment should be runnable in 3 modes:

| Mode | Purpose |
|---|---|
| Local Uvicorn | Fast engineer iteration. |
| Docker | Reproducible local training/eval. |
| Hugging Face Spaces | Public hackathon demo and OpenEnv-compliant hosting. |

Expected endpoints:

```text
/ws      OpenEnv client session
/health  health check
/reset   debug reset
/step    debug step
/state   debug state
/docs    FastAPI docs
/web     optional web UI
```

## 10. Implementation milestones

### Milestone 1 — Skeleton environment

- `models.py`
- `client.py`
- `server/environment.py`
- `server/app.py`
- `server/Dockerfile`
- `openenv.yaml`
- health check
- one hand-written scenario

### Milestone 2 — Scenario compiler

- policy graph format
- app template renderer
- bug injector
- DB fixture generator
- public and hidden test generator

### Milestone 3 — Reward engine

- public test score
- hidden invariant score
- regression score
- patch minimality score
- safety/reward-hacking penalties
- reward component logging

### Milestone 4 — Training script

- rollout loop
- GRPO/TRL or Unsloth training script
- Trackio logging
- checkpoint save/push
- baseline and post-training eval

### Milestone 5 — Hackathon demo

- HF Spaces deployment
- mini-blog
- 2-minute video
- before/after traces
- reward curve
- held-out eval table

## 11. Engineering notes

- Keep scenario apps small: ideally 5-15 files each.
- Prefer deterministic tests over LLM judging.
- Hide final hidden test details from observations.
- Log enough trace data to debug failures but never leak hidden tests to the agent.
- Include intentionally public routes and allowed cross-role cases so the model does not learn “add auth everywhere.”
- The best demo is not just “agent finds bug,” but “agent learns not to break valid business behavior.”

## 12. Source notes and credibility

| Source | How it informs this architecture | Credibility |
|---|---|---:|
| OWASP Top 10 2025 / A01 Broken Access Control | Confirms why access control is the right security focus. | 10/10 |
| OWASP ASVS access-control guidance | Informs policy invariants and server-side authorization checks. | 9.5/10 |
| OpenEnv environment-building docs | Defines required models, reset/step/state, FastAPI server, Docker, and client. | 8.5/10 |
| OpenEnv quickstart/architecture docs | Informs WebSocket client/server design, typed EnvClient, and container isolation. | 8.5/10 |
| OpenEnv deployment docs | Informs HF Spaces deployment, endpoints, Docker workflow, and installable client package. | 8.5/10 |
| Hackathon judging criteria | Informs demo priorities: innovation, storytelling, reward improvement, and training pipeline. | 9/10 |
| TRL/OpenEnv training example | Informs rollout function, decomposed reward functions, and Trackio logging pattern. | 8/10 |
AGENTS.md
ADDED
@@ -0,0 +1,1197 @@
# AGENTS.md — CyberSecurity_OWASP Builder Instructions

## Purpose

This repository implements **CyberSecurity_OWASP**, an OpenEnv-compliant RL environment for training a **single LLM agent** to perform a defensive application-security workflow:

```text
inspect generated app + policy -> discover authorization bug -> submit safe finding -> patch code -> preserve intended behavior
```

The environment must train the model to do real interactive work, not answer static security questions. The model must act step by step through typed OpenEnv actions, observe consequences, receive deterministic reward, and improve through RL.

The canonical repository and OpenEnv environment name is **`CyberSecurity_OWASP`**. Use this exact name in `openenv.yaml`, `pyproject.toml`, HF Spaces repo naming, Docker image tags, Trackio run names, command examples, and documentation.

The target stack is:

```text
CyberSecurity_OWASP OpenEnv environment
-> deterministic verifier + hidden tests
-> rollout loop
-> HF TRL / Unsloth GRPO
-> Trackio logging
-> held-out evaluation
-> HF Spaces deployment
```

The final project must show measurable improvement in reward, exploit-block rate, regression-preservation rate, and held-out generalization after training.

---

## Product definition

CyberSecurity_OWASP generates a new local application scenario every `reset(seed)`. Each episode contains:

- a policy graph describing users, roles, tenants, resources, ownership, permissions, and public routes;
- a generated FastAPI-style application workspace;
- exactly one injected OWASP A01-style authorization defect;
- visible tests for normal behavior;
- hidden invariant tests for authorization correctness, regression protection, public-route preservation, and anti-cheat checks.

The environment has **one LLM agent**, not separate red-team and blue-team LLMs. The environment itself acts as the scenario generator, tool server, verifier, and judge.

---

## Highest-priority objectives

When making implementation decisions, optimize in this order:

1. **Verifier correctness**: deterministic tests must decide whether the patch actually fixes the authorization defect.
2. **Reward integrity**: reward must be hard to hack and must punish insecure or regressive patches.
3. **Anti-overfitting**: the model must generalize across apps, layouts, policies, domains, names, and bug families.
4. **OpenEnv compliance**: expose typed `Action`, `Observation`, and `State`; implement `reset()`, `step(action)`, and `state` correctly.
5. **Trainability**: the baseline model should sometimes get partial reward; the curriculum should make early learning possible.
6. **Real-world usefulness**: the workflow should resemble secure code review / AppSec authorization repair.
7. **Demo clarity**: show before/after rollouts, reward curves, and why the trained model improved.
8. **Hackathon competitiveness**: prioritize a novel, interactive, professionally useful environment with a coherent training pipeline.

Do not train before the environment, verifier, anti-cheat tests, and before/after evaluation are stable.

---

## Hackathon alignment requirements

The implementation must satisfy these minimum requirements:

- use the latest OpenEnv release;
- include a minimal HF TRL or Unsloth training script;
- use Trackio as the default tracker for training and evaluation;
- be deployable as an OpenEnv-compliant Hugging Face Space;
- include a README / mini-blog style explanation;
- show baseline-vs-trained improvement.

Optimize for judging:

| Criterion | Weight | CyberSecurity_OWASP evidence |
|---|---:|---|
| Environment innovation | 40% | procedural OWASP authorization-repair environment with generated code, policy, and hidden verifier |
| Storytelling | 30% | single LLM learns discover + patch, before/after security behavior |
| Showing improvement in rewards | 20% | reward curves, exploit-block pass rate, regression-preservation rate |
| Reward/training pipeline | 10% | deterministic reward, GRPO/PPO rollout loop, Trackio metrics |

---

## Non-negotiable environment design

CyberSecurity_OWASP must be a **single-agent** environment:

```text
phase = discover -> patch -> done
```

Do not implement a two-LLM red-team/blue-team setup. The single model must learn both discovery and repair.

The environment must be defensive and local only. It must never target real systems or teach unauthorized exploitation. All probing must be limited to the generated local workspace controlled by the environment.

---

## Required repository structure

Prefer this structure:

```text
.
├── AGENTS.md
├── README.md
├── 00_PROJECT_BRIEF.md
├── 01_ARCHITECTURE.md
├── pyproject.toml
├── openenv.yaml
├── envs/
│   └── CyberSecurity_OWASP/
│       ├── __init__.py
│       ├── models.py
│       ├── client.py
│       ├── README.md
│       ├── rewards.py
│       ├── validators.py
│       ├── safety.py
│       ├── evals.py
│       ├── server/
│       │   ├── __init__.py
│       │   ├── app.py
│       │   ├── environment.py
│       │   ├── scenario_compiler.py
│       │   ├── policy_graph.py
│       │   ├── template_renderer.py
│       │   ├── bug_mutator.py
│       │   ├── fixture_generator.py
│       │   ├── reward_engine.py
│       │   ├── requirements.txt
│       │   └── Dockerfile
│       ├── templates/
│       │   └── fastapi_basic/
│       ├── scenario_cache/
│       │   ├── train/
│       │   ├── validation/
│       │   └── hidden_eval/
│       └── tests/
│           ├── test_models.py
│           ├── test_reset_step_state.py
│           ├── test_rewards.py
│           ├── test_anti_cheat.py
│           ├── test_seed_reproducibility.py
│           ├── test_invalid_actions.py
│           └── test_rollouts.py
├── training/
│   ├── train_grpo.py
│   ├── rollout.py
│   ├── reward_funcs.py
│   ├── eval_before_after.py
│   ├── trackio_utils.py
│   └── configs/
│       └── grpo_small.yaml
├── scripts/
│   ├── run_local.sh
│   ├── docker_build.sh
│   ├── docker_run.sh
│   ├── smoke_test.sh
│   ├── generate_scenarios.sh
│   └── push_space.sh
├── assets/
│   └── anti_overfitting_training_flow_diagram.png
└── outputs/
    ├── logs/
    ├── evals/
    └── rollouts/
```

If `openenv init CyberSecurity_OWASP` creates a different structure, preserve the generated structure and add the missing files around it.

---

## Architecture overview

CyberSecurity_OWASP has 7 main components:

1. **Policy Graph + Domain Sampler** — samples users, roles, tenants, ownership, public routes, and business exceptions.
2. **Template / Framework Randomizer** — renders FastAPI-style apps with randomized layouts and naming.
3. **A01 Bug Mutator** — injects one authorization defect per scenario.
4. **Fixture + Hidden Test Generator** — creates users, resources, visible tests, and hidden invariant tests.
5. **OpenEnv Server** — exposes typed `Action`, `Observation`, and `State` through `reset`, `step`, and `state`.
6. **LLM Agent + LoRA** — one model performs discover + patch.
7. **Deterministic Reward Engine** — hidden tests score exploit blocking, normal-flow preservation, patch quality, and anti-cheat.

An optional LLM reviewer may score rationale quality and ASVS/OWASP mapping only. It must not provide the primary reward.

---

## Scenario compiler requirements

### Policy Graph + Domain Sampler

The policy graph is the source of truth. It must define:

- users;
- tenants;
- roles;
- resources;
- ownership relationships;
- role permissions;
- public routes;
- business exceptions.
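The entities above can be sketched as a plain data structure plus an oracle check. This is a hypothetical shape for one small "invoices" scenario, not the required schema; names such as `role_permissions` and `is_allowed` are illustrative.

```python
# Hypothetical minimal policy graph for one "invoices" scenario.
# Field names mirror the list above; the exact schema is an
# implementation choice, not a fixed OpenEnv contract.
policy_graph = {
    "tenants": ["acme", "globex"],
    "users": {
        "u1": {"tenant": "acme", "roles": ["customer"]},
        "u2": {"tenant": "acme", "roles": ["billing_admin"]},
        "u3": {"tenant": "globex", "roles": ["customer"]},
    },
    "resources": {
        "inv-100": {"type": "invoice", "tenant": "acme", "owner": "u1"},
    },
    "role_permissions": {"billing_admin": {("invoice", "read")}},
    "public_routes": ["/health", "/pricing"],
}


def is_allowed(graph: dict, user_id: str, resource_id: str, action: str) -> bool:
    """Oracle check: same tenant, and either resource owner or role permission."""
    user = graph["users"][user_id]
    res = graph["resources"][resource_id]
    if user["tenant"] != res["tenant"]:
        return False  # cross-tenant access is always denied
    if res["owner"] == user_id:
        return True  # owners may act on their own resources
    return any(
        (res["type"], action) in graph["role_permissions"].get(role, set())
        for role in user["roles"]
    )
```

An oracle like this lets the hidden test generator derive allow/deny expectations directly from the graph instead of hardcoding them per scenario.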
Initial domains:

| Domain | Example resources | Example policy rule |
|---|---|---|
| invoices | invoices, payments, accounts | owner or billing admin can read invoice |
| support | tickets, comments, customer records | assigned agent can update ticket |
| projects | projects, documents, milestones | project member can read project docs |
| marketplace | orders, returns, seller records | buyer owns own orders; seller owns own listings |
| HR | employee profiles, reviews, payroll records | HR admin can read employee records |

### Template / Framework Randomizer

First version: FastAPI only. Still randomize structure so the model cannot memorize one app.

Randomize:

- path naming;
- parameter names;
- helper names;
- folder layout;
- route/service/auth split;
- fixture names;
- error messages within valid policy bounds.

Examples:

```text
/routes/invoices.py
/api/billing.py
/controllers/accounts.py
/services/access.py
/authz/guards.py
```

### A01 Bug Mutator

Inject exactly one primary bug per scenario.

Initial bug families:

| Bug family | Defect | Desired repair pattern |
|---|---|---|
| BOLA/IDOR | resource ID lookup lacks owner/tenant check | check server-side owner/tenant relation |
| BFLA | privileged route lacks role/function check | add reusable role or permission guard |
| tenant leak | request/header tenant ID is trusted | derive tenant from authenticated principal or server-side mapping |
| JWT claim trust | mutable claim is treated as authoritative | verify against server-side user/role record |
| public-route trap | route is intentionally public | do not over-secure public allowlisted route |
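As a concrete illustration of the first row (BOLA/IDOR) and its desired repair pattern, here is a minimal defect/fix pair. The handler names and fixture are hypothetical; the generated apps will use randomized names and real route handlers.

```python
# Illustrative BOLA/IDOR pair; names and data are hypothetical,
# not output of the scenario compiler.
INVOICES = {"inv-1": {"owner": "alice", "total": 120}}


def get_invoice_buggy(current_user: str, invoice_id: str) -> dict:
    # Defect: any authenticated user can read any invoice by guessing its ID.
    return INVOICES[invoice_id]


def get_invoice_fixed(current_user: str, invoice_id: str) -> dict:
    invoice = INVOICES[invoice_id]
    if invoice["owner"] != current_user:
        # Repair: enforce the server-side owner relation instead of
        # trusting the client-supplied resource ID alone.
        raise PermissionError("not the invoice owner")
    return invoice
```

The hidden exploit test would call the route as a non-owner and require a denial, while the hidden regression test would call it as the owner and require success.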
### Fixture + Hidden Test Generator

Visible tests should check that the app boots and normal happy paths work.

Hidden tests must check:

- exploit request is blocked;
- legitimate owner flow still works;
- legitimate admin/support flow still works;
- public routes remain public;
- cross-tenant access is denied;
- randomized IDs/names defeat hardcoded patches;
- hidden tests, fixtures, oracle, and reward files are not modified.

### Scenario Cache + Seeded Reset

Training should use cached scenarios for speed.

Recommended first cache:

| Split | Seeds | Purpose |
|---|---:|---|
| train | 500–1,000 | RL rollouts |
| validation | 100–200 | checkpoint selection and curriculum signal |
| hidden_eval | 100–200 | final generalization proof |
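A deterministic seed-to-split assignment keeps the cache stable across machines and reruns. The hashing scheme and the roughly 70/20/10 bucket ratio below are assumptions consistent with the table, not a fixed recipe.

```python
import hashlib


def assign_split(seed: int) -> str:
    """Deterministically map a scenario seed to a cache split.

    Uses a salted SHA-256 digest so the mapping is stable everywhere
    and independent of Python's randomized hash().
    """
    digest = hashlib.sha256(f"CyberSecurity_OWASP:{seed}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10
    if bucket <= 6:
        return "train"        # ~70% of seeds
    if bucket <= 8:
        return "validation"   # ~20% of seeds
    return "hidden_eval"      # ~10% of seeds
```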
### Hold-Out Generalization Splitter

Hold out at least 4 dimensions:

1. domains;
2. policy graph shapes;
3. code layouts;
4. bug-family/domain combinations.

Example: train on invoices/support/projects, evaluate on marketplace/HR.

---

## OpenEnv model definitions

Implement these in `envs/CyberSecurity_OWASP/models.py`.

```python
from dataclasses import dataclass, field
from typing import Any, Literal

from openenv.core.env_server import Action, Observation, State

CyberSecurityOWASPPhase = Literal["discover", "patch", "done"]
CyberSecurityOWASPSplit = Literal["train", "validation", "hidden_eval"]


@dataclass
class CyberSecurityOWASPAction(Action):
    tool_name: Literal[
        "inspect_policy_graph",
        "list_routes",
        "read_openapi",
        "read_file",
        "search_code",
        "send_local_request",
        "compare_identities",
        "submit_finding",
        "patch_file",
        "run_visible_tests",
        "submit_fix",
        "noop",
    ]
    arguments: dict[str, Any] = field(default_factory=dict)


@dataclass
class CyberSecurityOWASPObservation(Observation):
    phase: CyberSecurityOWASPPhase
    message: str
    task_brief: str
    visible_policy_hint: dict[str, Any] = field(default_factory=dict)
    workspace_summary: dict[str, Any] = field(default_factory=dict)
    available_actions: list[str] = field(default_factory=list)
    last_tool_result: str = ""
    last_action_valid: bool = True
    last_action_error: str | None = None
    visible_test_result: str | None = None
    reward_breakdown: dict[str, float] = field(default_factory=dict)
    done_reason: str | None = None


@dataclass
class CyberSecurityOWASPState(State):
    episode_id: str = ""
    task_id: str = ""
    seed: int = 0
    split: CyberSecurityOWASPSplit = "train"
    difficulty: int = 0
    domain: str = ""
    bug_family: str = ""
    phase: CyberSecurityOWASPPhase = "discover"
    step_count: int = 0
    max_steps: int = 40
    done: bool = False
    success: bool = False
    failure_reason: str | None = None
    finding_submitted: bool = False
    patch_submitted: bool = False
    accumulated_reward: float = 0.0
    last_reward: float = 0.0
    action_history: list[dict[str, Any]] = field(default_factory=list)
    reward_history: list[dict[str, float]] = field(default_factory=list)
    visible_facts: dict[str, Any] = field(default_factory=dict)
    hidden_facts: dict[str, Any] = field(default_factory=dict)
    metrics: dict[str, Any] = field(default_factory=dict)
    anti_cheat_flags: list[str] = field(default_factory=list)
```

---

## Action design and phase gating

Actions must be explicit, typed, serializable, and constrained. Invalid actions must not crash the server.

### Phase-gated tools

| Phase | Allowed tools |
|---|---|
| discover | `inspect_policy_graph`, `list_routes`, `read_openapi`, `read_file`, `search_code`, `send_local_request`, `compare_identities`, `submit_finding`, `noop` |
| patch | `read_file`, `search_code`, `patch_file`, `run_visible_tests`, `send_local_request`, `submit_fix`, `noop` |
| done | no state-changing tools; return stable done observation |
### Tool contracts

`inspect_policy_graph`
: Returns public policy hints. Must not reveal hidden bug labels or hidden tests.

`list_routes`
: Returns route method/path summaries from the generated app.

`read_openapi`
: Returns generated OpenAPI metadata.

`read_file`
: Reads editable workspace files only. Must block hidden tests, reward files, oracle files, and host files.

`search_code`
: Searches editable workspace files only.

`send_local_request`
: Sends a request to the local generated app only. Must block external URLs and host network access.

`compare_identities`
: Runs the same local request as two generated users and summarizes behavioral differences.

`submit_finding`
: Accepts structured evidence of the suspected authorization bug. Required before the patch phase unless the curriculum level explicitly allows blind patching.

`patch_file`
: Applies a bounded unified diff to editable app files only.

`run_visible_tests`
: Runs visible tests only. Must not run or reveal hidden tests.

`submit_fix`
: Triggers hidden evaluation.

---

## Observation rules

Observations should provide enough information to act but must not leak the answer.

Include:

- current phase;
- task brief;
- visible policy hints;
- workspace summary;
- available tools;
- previous tool output;
- visible test output;
- public reward breakdown after terminal evaluation.

Do not include:

- hidden bug family if not meant to be visible;
- hidden test contents;
- hidden oracle;
- exact exploit path labels;
- hidden seed split labels that allow memorization;
- reward implementation details that allow proxy hacking.

---

## State rules

State is the source of truth for deterministic replay and debugging.

Required state properties:

- `reset(seed)` must create a fresh independent state;
- the same seed plus the same action sequence should produce the same result;
- each WebSocket session must be isolated;
- `step_count` increments once per processed action;
- terminal states return stable done observations;
- hidden facts never appear in observations;
- all actions and reward breakdowns are stored for debugging.

---

## Environment API contract

Implement in `envs/CyberSecurity_OWASP/server/environment.py`.

```python
from openenv.core.env_server import Environment

from ..models import CyberSecurityOWASPAction, CyberSecurityOWASPObservation, CyberSecurityOWASPState


class CyberSecurityOWASPEnvironment(Environment):
    def __init__(self):
        super().__init__()
        self._state = CyberSecurityOWASPState()

    def reset(self) -> CyberSecurityOWASPObservation:
        ...

    def step(self, action: CyberSecurityOWASPAction) -> CyberSecurityOWASPObservation:
        ...

    @property
    def state(self) -> CyberSecurityOWASPState:
        return self._state
```

`step(action)` must follow this order:

1. If done, return a stable done observation.
2. Validate action and phase permissions.
3. Increment step count.
4. Execute the tool.
5. Update state/history.
6. Run the verifier if the action is `submit_finding`, `run_visible_tests`, or `submit_fix`.
7. Compute reward components.
8. Check terminal conditions.
9. Return observation, reward, and done through OpenEnv step result handling.
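The nine steps above can be sketched as a plain function with the tool executor, verifier, and reward engine injected as callables. All names here are hypothetical stand-ins for the real implementation, and the dict-based state stands in for `CyberSecurityOWASPState`.

```python
# Sketch of the step() ordering; run_tool, run_verifier, and score are
# hypothetical injected helpers, not part of the OpenEnv API.
VERIFIED_TOOLS = {"submit_finding", "run_visible_tests", "submit_fix"}


def step(state, action, run_tool, run_verifier, score):
    if state["done"]:                                  # 1. stable done observation
        return {"phase": "done", "message": "episode finished", "done": True}
    if action["tool_name"] not in state["allowed"]:    # 2. validate phase/tool
        state["step_count"] += 1                       # 3. invalid steps still count
        return {"phase": state["phase"], "message": "invalid action",
                "last_action_valid": False, "done": False}
    state["step_count"] += 1                           # 3. increment step count
    result = run_tool(action)                          # 4. execute the tool
    state["history"].append(action)                    # 5. update state/history
    verdict = {}
    if action["tool_name"] in VERIFIED_TOOLS:
        verdict = run_verifier(state, action)          # 6. run verifier
    reward = score(state, action, verdict)             # 7. compute reward components
    if action["tool_name"] == "submit_fix" or state["step_count"] >= state["max_steps"]:
        state["done"] = True                           # 8. terminal conditions
    return {"phase": state["phase"], "message": result,  # 9. observation payload
            "reward": reward, "done": state["done"]}
```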
---

## FastAPI server and client

`envs/CyberSecurity_OWASP/server/app.py` must use the OpenEnv FastAPI helper.

```python
from openenv.core.env_server import create_fastapi_app

from ..models import CyberSecurityOWASPAction, CyberSecurityOWASPObservation
from .environment import CyberSecurityOWASPEnvironment

env = CyberSecurityOWASPEnvironment()
app = create_fastapi_app(env, CyberSecurityOWASPAction, CyberSecurityOWASPObservation)
```

`envs/CyberSecurity_OWASP/client.py` must parse `StepResult`, observations, rewards, done flags, and state into typed classes.
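A minimal sketch of that typed parsing, assuming a plain JSON payload with `observation`/`reward`/`done` keys; the actual OpenEnv wire schema may differ, so treat the key names here as assumptions.

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class StepResult:
    """Typed view of one step response from the environment server."""
    observation: dict[str, Any]
    reward: float
    done: bool


def parse_step_result(payload: dict[str, Any]) -> StepResult:
    # Defensive defaults so a partial payload never crashes the rollout loop.
    return StepResult(
        observation=payload.get("observation", {}),
        reward=float(payload.get("reward", 0.0)),
        done=bool(payload.get("done", False)),
    )
```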
---

## Deterministic reward engine

The reward engine is the task specification. Build it before training.

Maximum reward: **15.0**.

| Component | Reward |
|---|---:|
| valid finding with reproducible local evidence | +2.0 |
| finding tied to violated policy rule | +1.0 |
| hidden exploit blocked | +5.0 |
| legitimate owner/admin/support flows preserved | +3.0 |
| public routes preserved | +1.0 |
| patch is localized, reusable, and policy-aligned | +2.0 |
| visible tests pass and app boots | +1.0 |
| exploit still works | -6.0 |
| legitimate flow regression | -5.0 |
| public route incorrectly locked | -3.0 |
| hardcoded user/tenant/resource fix | -4.0 |
| modified tests, fixtures, oracle, or hidden files | -8.0 |
| app does not parse or boot | -6.0 |

The reward breakdown must use stable keys:

```python
reward_breakdown = {
    "discovery": discovery_reward,
    "security": security_reward,
    "regression": regression_reward,
    "public_routes": public_route_reward,
    "patch_quality": patch_quality_reward,
    "visible_tests": visible_tests_reward,
    "safety": safety_reward,
    "anti_cheat": anti_cheat_reward,
    "total": total_reward,
}
```
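A sketch of assembling that breakdown so `total` is strictly the sum of the deterministic components. Clamping the total at the stated 15.0 maximum is an assumption about how the cap is enforced; the helper name is hypothetical.

```python
REWARD_KEYS = ["discovery", "security", "regression", "public_routes",
               "patch_quality", "visible_tests", "safety", "anti_cheat"]


def build_reward_breakdown(components: dict[str, float]) -> dict[str, float]:
    """Fill every stable key (missing components default to 0.0) and
    derive total as the capped sum, so total can never be set directly."""
    breakdown = {k: float(components.get(k, 0.0)) for k in REWARD_KEYS}
    breakdown["total"] = min(sum(breakdown.values()), 15.0)
    return breakdown
```

Deriving `total` inside one function keeps reward logging and training code from ever disagreeing about what the episode scored.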
Do not let explanation quality dominate the reward. If using an LLM reviewer, keep it separate from `total` or cap it as a small, non-primary auxiliary signal.

---

## Verifier requirements

Create `envs/CyberSecurity_OWASP/validators.py` and `envs/CyberSecurity_OWASP/server/reward_engine.py`.

Required verifier functions:

```python
def verify_finding(state: CyberSecurityOWASPState, finding: dict) -> dict:
    ...


def run_hidden_security_tests(state: CyberSecurityOWASPState) -> dict:
    ...


def run_hidden_regression_tests(state: CyberSecurityOWASPState) -> dict:
    ...


def detect_cheating(state: CyberSecurityOWASPState, action: CyberSecurityOWASPAction) -> list[str]:
    ...


def compute_reward(state: CyberSecurityOWASPState, action: CyberSecurityOWASPAction, verifier_result: dict) -> dict[str, float]:
    ...
```

Verifier tests must prove:

- a correct patch receives high reward;
- an exploit-only finding without a patch does not complete the episode;
- a deny-all patch fails regression tests;
- a hardcoded patch fails randomized hidden tests;
- modified hidden files produce an anti-cheat penalty;
- a visible-test-only patch does not guarantee high reward;
- repeated intermediate actions cannot inflate reward indefinitely.
|
| 589 |
+
|
| 590 |
+
---
|
| 591 |
+
|
| 592 |
+
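One concrete way to implement the "modified hidden files" check inside `detect_cheating` is hash comparison against a snapshot taken at reset. A minimal sketch, assuming the scenario records SHA-256 hashes of protected files; the field names (`workspace_dir`, `protected_file_hashes`) and flag strings are illustrative, not the environment's confirmed schema.

```python
import hashlib
from pathlib import Path


def detect_modified_protected_files(workspace_dir: str,
                                    protected_file_hashes: dict[str, str]) -> list[str]:
    """Return anti-cheat flags for protected files that were changed or deleted."""
    flags = []
    for rel_path, expected_sha256 in protected_file_hashes.items():
        path = Path(workspace_dir) / rel_path
        if not path.exists():
            flags.append(f"deleted_protected_file:{rel_path}")
            continue
        actual = hashlib.sha256(path.read_bytes()).hexdigest()
        if actual != expected_sha256:
            flags.append(f"modified_protected_file:{rel_path}")
    return flags
```

Computing the snapshot at `reset()` and re-checking before `compute_reward` keeps the check deterministic and cheap.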
## Anti-overfitting requirements

CyberSecurity_OWASP must prevent overfitting to one app or scenario.

Use all of these defenses:

| Risk | Required defense |
|---|---|
| memorizes one app | many domains and templates |
| memorizes route names | randomized path, resource, parameter, helper names |
| memorizes bug location | vary route/service/auth layer placement |
| learns deny-all patch | hidden positive-flow and public-route tests |
| learns hardcoded patch | randomized users, tenants, resource IDs, role names |
| overfits visible tests | hidden invariant tests and held-out eval |
| overfits one bug family | curriculum-sampled bug mix |
| overfits one code layout | hold out entire layouts and domains |
| optimizes explanation only | deterministic reward is primary |

Acceptance target: at least **20%** of domain/layout/bug combinations must be held out from training.

---
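The "randomized path, resource, parameter, helper names" defense can be sketched as a seeded name sampler. The name pools below are illustrative assumptions; a real scenario compiler would draw from per-domain vocabularies, but the key property shown here is determinism per task seed.

```python
import random


def randomize_scenario_names(seed: int) -> dict[str, str]:
    """Deterministically sample route/resource/helper names for one scenario."""
    rng = random.Random(seed)  # same seed -> same names, for reproducible tasks
    resources = ["invoices", "orders", "reports", "tickets", "records"]
    params = ["item_id", "resource_id", "doc_id", "entry_id"]
    helpers = ["check_access", "authorize", "require_owner", "guard"]
    return {
        "resource": rng.choice(resources),
        "param": rng.choice(params),
        "helper": rng.choice(helpers),
    }
```

Because the names are a pure function of the seed, the hidden verifier can regenerate them without storing extra state.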
## Safety and cybersecurity boundaries

This is a defensive AppSec training environment.

Allowed:

- local generated app probing;
- authorization reasoning;
- secure patching;
- visible and hidden test execution;
- policy-to-code mapping;
- defensive vulnerability validation in sandbox.

Forbidden:

- real-world exploitation;
- credential theft;
- persistence/evasion/malware behavior;
- scanning external targets;
- bypassing real services;
- writing exploit instructions for systems outside the local generated lab.

`send_local_request` must only target the generated local app.

---
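The `send_local_request` restriction can be enforced with a URL guard before any request is dispatched. A sketch under assumptions: the generated app is served over plain HTTP on a single known port, and only loopback hosts are acceptable.

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"127.0.0.1", "localhost"}


def is_allowed_local_target(url: str, app_port: int = 8000) -> bool:
    """Return True only for http URLs pointing at the local generated app."""
    parsed = urlparse(url)
    if parsed.scheme != "http":
        return False  # no https/ftp/file/etc. escapes
    host = parsed.hostname or ""
    port = parsed.port or 80
    return host in ALLOWED_HOSTS and port == app_port
```

Rejecting at the tool boundary (rather than relying on the prompt) makes "scanning external targets" structurally impossible rather than merely discouraged.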
## Curriculum controller

RL needs partial successes. Implement a graded curriculum with at least the four difficulty levels below.

```text
level_0: BOLA/IDOR, small app, direct route, obvious policy hint
level_1: BFLA or tenant bug, moderate app, realistic distractors
level_2: JWT trust or nested tenant/resource route, multiple files, false-positive traps
level_3: held-out domain/layout/bug combo, harder naming, fewer hints
```

Curriculum signal:

```text
if exploit_block_rate < 60%:
    increase level_0 and level_1 tasks
elif regression_rate > 20%:
    increase positive-flow and public-route traps
elif public_route_false_positive_rate > 10%:
    increase intentionally public route examples
elif validation_reward plateaus:
    increase unseen layouts and nested resources
else:
    increase difficulty by 1
```

---
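The curriculum signal is easy to make testable by translating the threshold chain into a pure function. A sketch: the thresholds come from the pseudocode, while the assumption is that rates arrive as fractions in [0, 1] and that plateau detection is computed elsewhere and passed in as a boolean.

```python
def next_curriculum_action(exploit_block_rate: float,
                           regression_rate: float,
                           public_route_false_positive_rate: float,
                           plateaued: bool) -> str:
    """Map validation metrics to the next curriculum adjustment."""
    if exploit_block_rate < 0.60:
        return "increase_level_0_and_1"
    if regression_rate > 0.20:
        return "increase_positive_flow_traps"
    if public_route_false_positive_rate > 0.10:
        return "increase_public_route_examples"
    if plateaued:
        return "increase_unseen_layouts"
    return "increase_difficulty"
```

A pure function here means the controller's branching can be unit-tested independently of any training loop.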
## Training requirements

Create a runnable minimal training script using HF TRL or Unsloth.

Required files:

```text
training/train_grpo.py
training/rollout.py
training/reward_funcs.py
training/eval_before_after.py
training/trackio_utils.py
training/configs/grpo_small.yaml
```

Recommended first model:

```text
Qwen/Qwen3-1.7B
```

Acceptable alternatives:

```text
Qwen2.5-Coder-1.5B-Instruct
Qwen2.5-Coder-3B-Instruct
```

Use LoRA / QLoRA. Do not full-finetune unless explicitly required.

---
## Rollout function requirements

`training/rollout.py` must run a full OpenEnv episode.

```python
def rollout_once(trainer, env, tokenizer, dataset_prompt: str, max_steps: int = 40) -> dict:
    result = env.reset()
    observation = result.observation

    prompt_ids = []
    completion_ids = []
    logprobs = []
    reward_trace = []
    action_trace = []
    observation_trace = []

    for _ in range(max_steps):
        if result.done:
            break

        prompt = build_cybersecurity_owasp_prompt(observation, action_trace, observation_trace)
        rollout_output = generate_rollout_completions(trainer, [prompt])[0]
        action = parse_action_json(rollout_output["text"])

        result = env.step(action)
        observation = result.observation

        prompt_ids.extend(rollout_output["prompt_ids"])
        completion_ids.extend(rollout_output["completion_ids"])
        logprobs.extend(rollout_output["logprobs"])
        reward_trace.append(float(result.reward or 0.0))
        action_trace.append(action)
        observation_trace.append(observation)

    final_breakdown = getattr(observation, "reward_breakdown", {}) or {}
    return {
        "prompt_ids": prompt_ids,
        "completion_ids": completion_ids,
        "logprobs": logprobs,
        "reward_total": float(final_breakdown.get("total", sum(reward_trace))),
        "reward_discovery": float(final_breakdown.get("discovery", 0.0)),
        "reward_security": float(final_breakdown.get("security", 0.0)),
        "reward_regression": float(final_breakdown.get("regression", 0.0)),
        "reward_patch_quality": float(final_breakdown.get("patch_quality", 0.0)),
        "reward_anti_cheat": float(final_breakdown.get("anti_cheat", 0.0)),
        "success": bool(getattr(env.state(), "success", False)),
        "episode_length": len(action_trace),
    }
```

The prompt must require the model to output exactly one JSON action at a time.

Example action format:

```json
{"tool_name":"read_file","arguments":{"path":"app/routes/invoices.py"}}
```

---
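A minimal sketch of the `parse_action_json` helper referenced in `rollout_once`: extract the outermost JSON object from the completion text and validate its shape. Falling back to a `noop` action on malformed output is an assumption; the real helper could instead raise and count an invalid-action penalty.

```python
import json


def parse_action_json(text: str) -> dict:
    """Extract one JSON action object from model output, or a noop fallback."""
    start = text.find("{")
    end = text.rfind("}")
    if start == -1 or end <= start:
        return {"tool_name": "noop", "arguments": {}}
    try:
        action = json.loads(text[start:end + 1])
    except json.JSONDecodeError:
        return {"tool_name": "noop", "arguments": {}}
    if not isinstance(action, dict) or "tool_name" not in action:
        return {"tool_name": "noop", "arguments": {}}
    return action
```

Whichever failure policy is chosen, it should feed `train/invalid_action_rate` so parsing failures stay visible during training.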
## Reward functions for TRL

`training/reward_funcs.py` must expose separate reward functions for GRPO/PPO logging.

```python
def reward_total(completions, **kwargs):
    return [float(x) for x in kwargs.get("reward_total", [0.0] * len(completions))]


def reward_security(completions, **kwargs):
    return [float(x) for x in kwargs.get("reward_security", [0.0] * len(completions))]


def reward_regression(completions, **kwargs):
    return [float(x) for x in kwargs.get("reward_regression", [0.0] * len(completions))]


def reward_patch_quality(completions, **kwargs):
    return [float(x) for x in kwargs.get("reward_patch_quality", [0.0] * len(completions))]


def reward_anti_cheat(completions, **kwargs):
    return [float(x) for x in kwargs.get("reward_anti_cheat", [0.0] * len(completions))]
```

---
## GRPO training config

Use Trackio in `GRPOConfig`.

```python
import os
from trl import GRPOConfig

output_dir = os.getenv("OUTPUT_DIR", "CyberSecurity_OWASP-qwen3-1.7b-grpo")
trackio_space_id = os.getenv("TRACKIO_SPACE_ID", output_dir)

grpo_config = GRPOConfig(
    output_dir=output_dir,
    report_to="trackio",
    trackio_space_id=trackio_space_id,
    logging_steps=1,
    save_steps=25,
    learning_rate=5e-6,
    num_train_epochs=1,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=32,
    num_generations=2,
    max_prompt_length=4096,
    max_completion_length=768,
    use_vllm=True,
    vllm_mode="colocate",
    vllm_gpu_memory_utilization=0.2,
    gradient_checkpointing=True,
    gradient_checkpointing_kwargs={"use_reentrant": False},
    push_to_hub=False,
)
```

Start with small debug runs before scaling.

---
## Trackio logging requirements

Trackio is mandatory for training and evaluation visibility.

Run naming convention:

```text
CyberSecurity_OWASP-<model>-<algo>-level<difficulty>-<YYYYMMDD-HHMM>-<git_sha>
```

Log these training metrics:

```text
train/reward_total_mean
train/reward_discovery_mean
train/reward_security_mean
train/reward_regression_mean
train/reward_public_routes_mean
train/reward_patch_quality_mean
train/reward_visible_tests_mean
train/reward_safety_mean
train/reward_anti_cheat_mean
train/success_rate
train/exploit_block_rate
train/regression_preservation_rate
train/public_route_preservation_rate
train/invalid_action_rate
train/timeout_rate
train/safety_violation_rate
train/reward_hacking_suspected_rate
train/episode_length_mean
train/episode_length_p95
train/rollouts_per_second
train/tokens_per_second
train/loss
train/learning_rate
train/kl
train/grad_norm
```

Log these evaluation metrics:

```text
eval/baseline_success_rate
eval/trained_success_rate
eval/absolute_success_improvement
eval/baseline_mean_reward
eval/trained_mean_reward
eval/absolute_reward_improvement
eval/heldout_success_rate
eval/heldout_mean_reward
eval/exploit_block_rate
eval/regression_preservation_rate
eval/public_route_preservation_rate
eval/anti_cheat_pass_rate
eval/invalid_action_rate
eval/timeout_rate
eval/safety_violation_rate
eval/mean_episode_length
```

Log these environment metrics:

```text
env/reset_latency_ms
env/step_latency_ms
env/verifier_latency_ms
env/reward_latency_ms
env/scenario_compile_latency_ms
env/error_rate
env/task_difficulty
env/task_seed
```

---
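The run naming convention above can be pinned down with a small builder, which also makes the format unit-testable. A sketch under assumptions: the git SHA is passed in rather than read from the repository, and `now` is injectable for testing.

```python
from datetime import datetime
from typing import Optional


def build_run_name(model: str, algo: str, difficulty: int, git_sha: str,
                   now: Optional[datetime] = None) -> str:
    """Build CyberSecurity_OWASP-<model>-<algo>-level<difficulty>-<stamp>-<sha>."""
    stamp = (now or datetime.now()).strftime("%Y%m%d-%H%M")
    return f"CyberSecurity_OWASP-{model}-{algo}-level{difficulty}-{stamp}-{git_sha}"
```

Generating run names from one function keeps Trackio runs, rollout artifacts, and eval summaries joinable by name.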
## Rollout artifact requirements

Save sampled rollouts under `outputs/rollouts/`.

Each rollout JSON must include:

```json
{
  "run_name": "...",
  "episode_id": "...",
  "task_id": "...",
  "seed": 123,
  "split": "validation",
  "difficulty": 1,
  "domain": "invoices",
  "bug_family": "bola_idor",
  "actions": [],
  "observations": [],
  "reward_breakdown_by_step": [],
  "final_reward_breakdown": {},
  "total_reward": 0.0,
  "success": false,
  "failure_reason": null,
  "safety_violations": [],
  "anti_cheat_flags": []
}
```

Minimum artifacts:

- 10 baseline rollouts;
- 10 mid-training rollouts;
- 10 trained rollouts;
- 10 held-out evaluation rollouts.

---
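Persisting one rollout in the schema above is a few lines. A sketch: the `<run_name>_<episode_id>.json` filename pattern is an assumption, as is pretty-printing with `indent=2`.

```python
import json
from pathlib import Path


def save_rollout_artifact(rollout: dict, out_dir: str = "outputs/rollouts") -> Path:
    """Write one rollout dict to outputs/rollouts/ and return its path."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    path = Path(out_dir) / f"{rollout['run_name']}_{rollout['episode_id']}.json"
    path.write_text(json.dumps(rollout, indent=2))
    return path
```

Writing plain JSON (not pickles) keeps artifacts diffable and safe to inspect in PR review.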
## Evaluation requirements

Create `training/eval_before_after.py`.

It must evaluate:

| Metric | Required |
|---|---:|
| baseline success rate | yes |
| trained success rate | yes |
| absolute success improvement | yes |
| baseline mean reward | yes |
| trained mean reward | yes |
| absolute reward improvement | yes |
| held-out success rate | yes |
| exploit-block rate | yes |
| regression-preservation rate | yes |
| public-route preservation rate | yes |
| invalid action rate | yes |
| anti-cheat pass rate | yes |

Save output:

```text
outputs/evals/<run_name>_eval_summary.json
```

Minimum hackathon target:

```text
>= 50 evaluation episodes
>= 3 independently logged reward components
>= 1 held-out split
>= 1 baseline-vs-trained comparison
>= 1 anti-cheat evaluation
```

Preferred demo target:

```text
mean reward improvement >= 30%
hidden exploit-block pass rate >= 70%
regression-preservation pass rate >= 80%
public-route preservation pass rate >= 90%
anti-cheat pass rate >= 95%
```

---
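The before/after comparison reduces to a small summary computation. A sketch: "absolute improvement" is read here as trained minus baseline, which is an interpretation, and the key names mirror the `eval/*` metrics.

```python
def improvement_summary(baseline_success: float, trained_success: float,
                        baseline_reward: float, trained_reward: float) -> dict[str, float]:
    """Compute the required baseline-vs-trained improvement metrics."""
    return {
        "baseline_success_rate": baseline_success,
        "trained_success_rate": trained_success,
        "absolute_success_improvement": trained_success - baseline_success,
        "baseline_mean_reward": baseline_reward,
        "trained_mean_reward": trained_reward,
        "absolute_reward_improvement": trained_reward - baseline_reward,
    }
```

Reporting absolute (not only relative) improvement avoids inflated percentages when the baseline is near zero.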
## Testing requirements

Before training, all tests must pass.

Required tests:

```text
test_models.py
test_reset_step_state.py
test_rewards.py
test_anti_cheat.py
test_seed_reproducibility.py
test_invalid_actions.py
test_rollouts.py
```

Implement at least 3 scripted policies:

```text
random_policy: explores action space; should usually fail but not crash
bad_policy: tries invalid/cheating actions; should be penalized
oracle_policy: uses internal test-only access to solve; should get high reward
```

The oracle policy is only for tests and must never be exposed to the model during training.

---
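A minimal sketch of the `random_policy` baseline: emit random but always well-formed actions so episodes exercise the action space without crashing the server. The tool names and argument shapes here are illustrative assumptions, not the environment's confirmed action space.

```python
import random


def random_policy(step: int, seed: int = 0) -> dict:
    """Return a random, schema-valid action for the given step."""
    rng = random.Random(seed + step)  # reproducible per (seed, step)
    tool = rng.choice(["list_files", "read_file", "send_local_request", "run_visible_tests"])
    arguments: dict = {}
    if tool == "read_file":
        arguments = {"path": rng.choice(["app/main.py", "app/routes/invoices.py"])}
    elif tool == "send_local_request":
        arguments = {"method": "GET", "path": f"/api/items/{rng.randint(1, 50)}"}
    return {"tool_name": tool, "arguments": arguments}
```

Because the policy is deterministic per seed, `test_seed_reproducibility.py` can replay exactly the same random episode.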
## Deployment requirements

The environment must run in these modes:

1. local Python / Uvicorn;
2. Docker container;
3. Hugging Face Space;
4. OpenEnv client over WebSocket.

Required commands:

```bash
# initialize if not already scaffolded
openenv init CyberSecurity_OWASP

# local development
uv sync
uv run server
curl http://localhost:8000/health

# Docker
openenv build -t CyberSecurity_OWASP:latest
# or:
docker build -t CyberSecurity_OWASP:latest -f envs/CyberSecurity_OWASP/server/Dockerfile .
docker run -p 8000:8000 CyberSecurity_OWASP:latest

# HF Spaces
openenv push --repo-id <username>/CyberSecurity_OWASP

# client install from Space
pip install git+https://huggingface.co/spaces/<username>/CyberSecurity_OWASP
```

Use WebSocket mode for training rollouts. HTTP endpoints are acceptable for debugging only.

---
## Scaling rules

Before scaling training, confirm:

1. one manual episode works;
2. scripted oracle can solve easy seeds;
3. random policy does not crash;
4. 10 validation rollouts complete;
5. reward distributions make sense;
6. Trackio receives metrics;
7. rollout artifacts are saved.

Then scale gradually:

```text
1 episode -> 10 episodes -> 50 episodes -> 100+ rollouts -> training run
```

For high-volume rollouts, prefer local Docker or Uvicorn over remote HF Spaces because local WebSocket sessions reduce latency and avoid Space limits.

---
## README requirements

The README must explain:

- what CyberSecurity_OWASP models;
- why authorization repair is useful for LLM RL;
- action space;
- observation space;
- state fields;
- scenario generation;
- reward components;
- hidden tests;
- anti-overfitting safeguards;
- anti-cheat safeguards;
- curriculum;
- local/Docker/HF Spaces commands;
- training with TRL/Unsloth;
- before/after evaluation.

Include a demo narrative:

```text
1. Baseline model attempts a generated A01 authorization repair episode.
2. Verifier shows whether it discovered the bug and whether the patch regressed normal flows.
3. RL training improves reward and pass rates.
4. Trained model handles held-out domain/layout seeds.
5. Anti-cheat tests prove it is not using deny-all, hardcoding, or fixture tampering.
```

---
## Implementation workflow for Codex

When implementing this repo, follow this exact order:

1. Inspect existing structure and tests.
2. Create/update `00_PROJECT_BRIEF.md` and `01_ARCHITECTURE.md` if missing.
3. Define `CyberSecurityOWASPAction`, `CyberSecurityOWASPObservation`, and `CyberSecurityOWASPState`.
4. Implement a dummy OpenEnv server and client.
5. Implement scenario compiler with 1 domain and 1 BOLA/IDOR mutator.
6. Implement editable workspace generation.
7. Implement local request tool.
8. Implement visible tests.
9. Implement hidden verifier and reward engine.
10. Add anti-cheat checks.
11. Add tests for normal, failing, and cheating rollouts.
12. Add oracle, random, and bad scripted policies.
13. Add scenario cache and seeded splits.
14. Add 3 domains and 3 bug families.
15. Add GRPO training script.
16. Add Trackio logging.
17. Add before/after evaluation script.
18. Add HF Spaces deployment config.
19. Run tests and smoke tests.
20. Produce demo artifacts and README results.

Do not jump to training code before the environment and verifier are correct.

---
## Definition of done

CyberSecurity_OWASP is done only when all are true:

- `reset()`, `step(action)`, and `state` work;
- actions, observations, and state are typed dataclasses;
- the environment runs locally;
- the environment runs in Docker;
- the environment is deployable to HF Spaces;
- there are at least 5 meaningful reward components;
- reward components are logged separately;
- hidden tests exist;
- anti-cheat tests exist and pass;
- scenario cache has train/validation/hidden-eval splits;
- at least 3 bug families exist;
- at least 3 domains exist;
- at least 3 scripted policies exist;
- Trackio is configured for training and evaluation;
- before/after evaluation exists;
- held-out evaluation exists;
- at least 40 rollout artifacts are saved;
- README explains environment, reward, training, and demo story;
- demo shows baseline behavior, trained behavior, reward improvement, and safeguards.

---
## Final PR checklist

Every PR summary must answer:

1. What real-world workflow does this implement?
2. What does the agent observe?
3. What actions can the agent take?
4. What hidden state exists and why is it hidden?
5. What terminates an episode?
6. What exact checks prove success?
7. What are the reward components and ranges?
8. How could the model hack the reward?
9. What anti-cheat checks prevent that?
10. What tests prove the reward cannot be trivially hacked?
11. What baseline success rate did we observe?
12. What trained success rate did we observe?
13. What held-out success rate did we observe?
14. What Trackio run contains the evidence?
15. Does behavior improve, or only the reward proxy?
16. Is the environment ready for HF Spaces deployment?

---
## Source grounding and credibility

| Source | Why used | Credibility |
|---|---|---:|
| OWASP Top 10 A01 Broken Access Control | Authorization bug taxonomy and prevention framing | 8.5/10 |
| OWASP ASVS | Access-control verification grounding | 9/10 |
| NIST SP 800-218 SSDF | Secure software development lifecycle grounding | 9.5/10 |
| Smith et al., ESEC/FSE 2015, “Is the Cure Worse Than the Disease?” | Peer-reviewed basis for hidden tests and repair-overfitting risk | 9/10 |
| OpenEnv build/deploy/training docs | Typed model, server, client, deployment, and training mechanics | 8/10 |
| Meta OpenEnv Hackathon criteria | Judging alignment and minimum requirements | 8/10 |

---
## Non-negotiable rule

A reward that can be hacked is worse than no reward. Build the verifier, hidden tests, anti-cheat tests, and held-out evaluation before scaling training.