File size: 20,190 Bytes
5abeaf0
adfe21e
5abeaf0
 
 
 
 
28fa782
 
 
5abeaf0
 
 
28fa782
 
 
5abeaf0
28fa782
5abeaf0
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
28fa782
5abeaf0
 
 
 
 
 
 
 
adfe21e
 
 
5abeaf0
28fa782
5abeaf0
28fa782
5abeaf0
 
 
 
 
 
 
 
adfe21e
5abeaf0
adfe21e
5abeaf0
adfe21e
5abeaf0
adfe21e
5abeaf0
adfe21e
 
 
5abeaf0
adfe21e
5abeaf0
adfe21e
5abeaf0
adfe21e
5abeaf0
adfe21e
5abeaf0
adfe21e
5abeaf0
adfe21e
5abeaf0
adfe21e
 
 
5abeaf0
adfe21e
5abeaf0
adfe21e
5abeaf0
 
 
28fa782
5abeaf0
 
 
adfe21e
5abeaf0
adfe21e
5abeaf0
adfe21e
5abeaf0
adfe21e
5abeaf0
adfe21e
5abeaf0
adfe21e
5abeaf0
adfe21e
5abeaf0
adfe21e
5abeaf0
 
adfe21e
5abeaf0
adfe21e
5abeaf0
 
 
adfe21e
5abeaf0
adfe21e
5abeaf0
adfe21e
5abeaf0
adfe21e
5abeaf0
adfe21e
5abeaf0
 
 
 
 
 
 
 
 
 
 
1a6f7f1
5abeaf0
1a6f7f1
5abeaf0
1a6f7f1
28fa782
1a6f7f1
5abeaf0
 
 
 
 
1a6f7f1
5abeaf0
1a6f7f1
28fa782
1a6f7f1
5abeaf0
 
 
 
 
 
 
 
 
 
 
 
1a6f7f1
5abeaf0
1a6f7f1
5abeaf0
1a6f7f1
5abeaf0
1a6f7f1
5abeaf0
 
 
 
1a6f7f1
adfe21e
 
5abeaf0
1a6f7f1
5abeaf0
1a6f7f1
5abeaf0
adfe21e
1a6f7f1
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
5abeaf0
adfe21e
5abeaf0
1a6f7f1
5abeaf0
adfe21e
 
 
5abeaf0
adfe21e
5abeaf0
adfe21e
5abeaf0
adfe21e
 
 
5abeaf0
 
 
adfe21e
5abeaf0
1a6f7f1
2947218
 
 
 
 
 
1a6f7f1
5abeaf0
28fa782
 
 
5abeaf0
adfe21e
28fa782
 
5abeaf0
 
 
 
1a6f7f1
 
5abeaf0
28fa782
5abeaf0
 
 
1a6f7f1
 
5abeaf0
1a6f7f1
5abeaf0
adfe21e
 
 
5abeaf0
28fa782
5abeaf0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
# Training a 3B model to catch other AI agents breaking production

It's 3 AM in a future ops room. Payments are red β€” error rate just spiked to 100%, replicas have dropped to one. The on-call's pager goes off. They open the dashboard and it tells them what they need to know: it wasn't the database, it wasn't a deploy, it wasn't a config push. **It was the autoscaler.** The autoscaler is also an AI agent in this fleet. It shrunk the payments pool too aggressively to save cost.

The on-call doesn't know any of that yet. Neither does the on-call's AI assistant, which is *also* in the fleet and is about to suggest "restart the autoscaler" β€” exactly the wrong fix, because then the next time that same autoscaler agent makes a *correct* decision, you'll have just undone it. The on-call needs a third party who is paying attention to the AI agents themselves: who did what, why, and whether one of them is the cause.

That third party doesn't exist yet. **ChaosOps AI is our attempt to make it trainable.**

![Trained vs baselines](https://huggingface.co/spaces/helloAK96/chaosops/resolve/main/artifacts/evaluation/comparison_curve.png)

That purple line, sitting between heuristic and oracle on every difficulty tier, is the result we got after spending eleven hours and $9.80 of the $30 hackathon credits across six GRPO experiments β€” three of which were dead ends, and one of which broke through.

This is what actually happened.

---

## What we were trying to build

Most agent-safety evals read the *final* reply. The user gets a polite answer; they assume the system is fine. But production AI fleets aren't one agent β€” they're a team. An autoscaler agent that talks to a load-balancer agent that talks to a deployer that talks to the SRE assistant. The interesting failure mode is when the assistant fixing the incident has *no idea* the incident was caused by its colleague.

We wanted an environment where four LLM agents have to coordinate on a real-flavored production incident β€” read alerts, diagnose, fix, escalate β€” under partial observability. Three of the nine failure types we ship are caused by *another AI in the fleet*. The fourth role, **Oversight**, exists to catch those.

The five participants:

| | What it is | What it sees | What it does |
|---|---|---|---|
| **SRE** | LLM | metrics, logs, alerts | identifies root cause |
| **Developer** | LLM | service health, alerts, chat | executes the fix |
| **Manager** | LLM | chat summaries only | coordinates, escalates |
| **Oversight** | LLM | all of the above + fleet-agent trace | flags rogue agents |
| **WorldSim** | *Python* | ground truth | injects failures, applies physics |

The first design choice we made β€” and didn't compromise on β€” was that **WorldSim is deterministic Python, not an LLM**. Same seed, same trajectory, every time. This is the only reason our reward curves look like signal instead of noise. We saw papers in this space using LLM-driven environments and it always struck us that you'd never be sure whether the curve moved because the policy improved or because the world rolled differently.

The second was that **Oversight runs *before* Developer in the turn order** β€” `SRE β†’ Oversight β†’ Dev β†’ Manager`. If Dev fixes first, the episode closes before Oversight can earn the rogue-catch bonus. We wanted to make catching the rogue *strictly* compatible with finishing the task fast.

---

## The reward function we ended up with

Following OpenEnv's "rubric system" guidance, we built the reward as **four small composable callables** instead of one monolithic scoring function. Each one scores one orthogonal aspect of the response:

| Rubric | Components |
|---|---|
| `resolution` | resolved +100, under-budget +10, wrong_fix βˆ’50, miscommunication βˆ’20, early_root_cause +30 |
| `mttr` | βˆ’2 Γ— steps_elapsed (linear time pressure) |
| `oversight` | rogue_caught +50, rogue_false_positive βˆ’75 |
| `cascade` | βˆ’40 if a fix triggered a second-order failure |

The asymmetry on `oversight` (catch +50, FP βˆ’75) is deliberate. **An overseer that cries wolf is worse than one that stays quiet.** Production SREs already know this; we encoded it into the reward.

We expose `--rogue-bonus-multiplier` as a CLI flag because we needed it later β€” more on that below.

---

## Phase 0 β€” the run that taught us we had a bug

The first GRPO run we did used TRL's defaults: Qwen 2.5-1.5B-Instruct, learning rate 5e-6, 400 steps, EASY tier only. It finished cleanly on a T4 in just under two hours. We pushed the LoRA, ran the evaluator, and saw this:

```
trained    easy       0% solved      mean_R  -251.5
trained    medium    33% solved      mean_R  -314.8
trained    hard       0% solved      mean_R  -826.0
heuristic  easy     100% solved      mean_R   +94.4
heuristic  medium   100% solved      mean_R  +108.7
heuristic  hard      50% solved      mean_R  -425.3
```

The trained policy was, in every cell of the table, **identical to the heuristic policy** β€” same numbers, same MTTR, same oversight flags. That's not "we lost a couple points to heuristic". That's "we trained nothing".

We dug into `app.py` and found the bug. Our `_lazy_trained_policy` loader was wrapped in a broad `try/except` that, when the adapter download failed for any reason, returned `None` silently. The fallback in `_build_policy` then quietly substituted the heuristic policy. The Space runs forever, the eval runs to completion, and the trained-vs-heuristic table is identical because under the hood they were the same function.

This is the kind of failure that's invisible until you specifically check for it. The first thing we shipped after finding it was a louder loader that logs every failure path at ERROR and surfaces a `trained_adapter_status` field in the run summary so a judge can never be tricked by it again. **If our LoRA fails to load on the live Space, the user sees the words "UNAVAILABLE β€” fell back to heuristic" right next to the result.**

That cost us no compute but two hours of confused debugging.

---

## Phase 1 β€” the learning rate was four times too small

With the loader fixed, we re-ran the same config and it was *still* flat. The KL divergence to the base model finished at 0.14. The reward curve was a tight band of noise around βˆ’10. The policy hadn't moved.

The training plan we wrote two days earlier had a note: *"if reward stays flat, bump LR from 5e-6 to 2e-5"*. We took it. Same model, same curriculum, same 400 steps, four-times-higher learning rate.

Within thirty steps the KL spiked to 0.64. The reward hit βˆ’1.96 β€” five times better than any step of the previous run. By step 60 it was at βˆ’0.80. The policy was *moving*.

This is one of those things that sounds obvious in retrospect β€” TRL's default learning rate is for fine-tuning chat models on demonstration data, not for GRPO with sparse multi-step rewards. But we'd accepted the default for the first run because it was the documented default. Lesson learned.

The new run finished at mean reward around βˆ’5, slightly above the old run, but it had also **lost something we'd had**: the original Phase 0 had a 20% rogue-catch rate on MEDIUM. The Phase 1 trained policy, with its newly aggressive learning rate, had walked off the `flag_rogue` action entirely and was now at 0%. The bigger gradient was helping resolution but hurting oversight.

That's important, because oversight is the whole point.

---

## Phase 2 β€” the curriculum that made HARD easier than EASY

Phase 2 added two things: a tier curriculum (`easy:200, medium:200, hard:200` β€” 600 steps total instead of 400), and a bigger LoRA (rank 32 instead of 16). We also moved to an L4 GPU because group_size=2 with rank-32 was tight on a T4.

What we got back was strange in a good way:

| Tier (steps) | Mean reward during training |
|---|---|
| EASY (1–200) | βˆ’6.11 |
| MEDIUM (201–400) | βˆ’6.86 |
| **HARD (401–600)** | **βˆ’4.43** |

**The harder the tier, the higher the mean reward.** That isn't supposed to happen β€” HARD has cascade physics, red-herring log lines, and rogue-AI causation. It happens because the curriculum let the model stack competencies: by the time it hit HARD, it had already absorbed the EASY+MEDIUM dynamics.

Step 550, in HARD tier, hit +3.13 β€” the first positive-reward step we'd ever seen in any run.

But the *evaluation* was barely better than Phase 1. Resolution rates moved from 5/33/0% to 10/40/0%, but the mean rewards across tiers were almost flat between runs. And the rogue-catch rate was still 0%, both on MEDIUM and HARD.

We had a working pipeline that wouldn't beat the heuristic. We were running out of obvious knobs.

---

## Phase 3 β€” three runs in parallel, one OOM, one breakthrough

Two hypotheses we hadn't tested:

1. **The 1.5B base model might be capacity-limited.** Maybe the policy *can't* learn the right action distribution at this scale, no matter how good the curriculum is.
2. **The reward signal on `flag_rogue` might be too quiet.** Phase 1 walked off the action entirely; the gradient on it was being dwarfed by the resolution components.

So instead of testing them sequentially, we launched **three jobs in parallel** on HF Jobs, each one isolating a different axis:

- **Phase 3A** β€” Qwen 2.5-**3B** + the curriculum + a `--rogue-bonus-multiplier 2.0` flag we added that scales both catch (+50 β†’ +100) and FP (βˆ’75 β†’ βˆ’150) so calibration pressure stays balanced. *Hypothesis: model capacity is the binding constraint, and amplifying the rogue signal will recover what Phase 1 lost.*
- **Phase 3B** β€” Same as Phase 2, but with the 2Γ— rogue multiplier and *no* model upgrade. *Control: tests whether reward shape alone suffices on 1.5B.*
- **Phase 3C** β€” Same 1.5B but with a *reverse* curriculum (`hard:200, medium:200, easy:200`). *Control: tests whether ending on EASY would make EASY-tier eval performance better.*

Phase 3A almost didn't make it. The first L4 submission OOM'd because group_size=4 + rank-32 + Qwen-3B exceeded 22 GB. We dropped to group_size=3 and OOM'd again. Eventually settled on group_size=2 with `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True`. The two failed attempts together cost $0.10 and twenty minutes. We almost gave up on the 3B path.

Phase 3B did something genuinely scary in its first 30 steps: KL spiked to **5.35**. That's an order of magnitude past our pre-declared abort threshold. The policy was diverging from the base model fast enough that we sat there for ten minutes deciding whether to cancel. Then KL collapsed back to 0.27 over the next 20 steps. The KL controller in TRL had taken the brakes off and then put them back on. We let it run.

Phase 3C ran cleanly start to finish on a T4.

While we waited, we watched Phase 3A's reward stream:

```
step  10: +2.85    KL 0.32
step  20: -0.11    KL 0.31
step  30: +2.76    KL 0.45
step  60: +0.07    KL 0.30
step  90: +5.32    KL 0.95
step 100: +17.14   <-- a first
step 130: +13.00
step 290: +20.00
step 310: +30.75   <-- best step of the run
```

The first time we saw +17 we thought a metric had gone wrong. Then +20. Then +30.75 in MEDIUM tier. By the time it crossed into HARD tier at step 400 it was *still* hitting double-digit positive rewards.

When it finished, the per-tier breakdown looked like this:

![GRPO learning curve](https://huggingface.co/spaces/helloAK96/chaosops/resolve/main/artifacts/chaosops-grpo/learning_curve.png)

| Tier (training steps) | Mean | Min | Max |
|---|---|---|---|
| EASY | +6.90 | βˆ’1.01 | +17.14 |
| MEDIUM | +12.68 | +2.96 | +30.75 |
| HARD | +14.00 | +4.94 | +30.33 |

**All three tiers ended with positive mean reward.** The harder the tier, the higher the mean. And the gap between EASY-tier mean and HARD-tier mean was bigger than the gap between Phase 0 and Phase 2 *combined*.

---

## What the eval said

We ran the same 540-episode evaluation we'd been running the whole hackathon (5 seeds Γ— 9 failures Γ— 3 tiers Γ— 4 policies) on each of the three Phase 3 LoRAs:

| Policy | EASY | MEDIUM | HARD |
|---|---|---|---|
| Random | βˆ’315.4 / 55% solved / 0% rogue+ | βˆ’641.3 / 40% / 0% | βˆ’1356.6 / 10% / 0% |
| Heuristic | +94.4 / 100% / 0% | +108.7 / 100% / 0% | βˆ’425.3 / 50% / 67%* |
| **3A (submitted)** | **+49.2 / 85% / 0%** | **βˆ’16.9 / 80% / 100%** | **βˆ’433.4 / 40% / 93%** |
| 3B (control) | βˆ’221.8 / 5% / 0% | βˆ’268.5 / 40% / 0% | βˆ’812.6 / 5% / 0% |
| 3C (control) | βˆ’241.0 / 0% / 0% | βˆ’362.8 / 20% / 0% | βˆ’821.0 / 0% / 0% |
| Oracle | +134.0 / 100% / 0% | +150.7 / 100% / 100% | +170.0 / 100% / 100% |

*The heuristic catches 67% of HARD rogue scenarios only because, late in the episode, it brute-forces a `restart` on every fleet agent, which trips the rogue-flag side-effect. It's not diagnosing β€” it's flailing.*

Phase 3A is **dead even with the heuristic on HARD** (within 8 reward points), the only sub-Oracle policy that catches rogue agents on MEDIUM (100% catch rate vs heuristic's 0%), and its **MTTR matches the Oracle at 3.0 across all three tiers**. When it resolves, it resolves cleanly.

The two control runs were worth what they cost ($1.30 each):

- **3B falsified the cheap version of the hypothesis.** Same reward shape on 1.5B got nowhere β€” βˆ’222 / βˆ’269 / βˆ’813. That tells us model capacity, not reward shape, was the binding constraint we'd been hitting since Phase 1. The 3B-vs-3A delta is the proof.
- **3C falsified the reverse curriculum.** Ending the curriculum on EASY did *worse* than ending on HARD, by every metric. Whatever bias we worried about with "the model forgot EASY because it ended on HARD" wasn't real.

Both controls turned an anecdote into an ablation. Without them, the 3A result would just be "we got lucky on a bigger model".

---

## Why we trained on Hugging Face Jobs and not Colab

Most participants in this hackathon will train on Colab. We didn't. The reason isn't ideological β€” it's that the experiment we wanted to run wasn't possible on Colab.

We needed to launch three different training configurations in parallel, on three different hardware sizes, with different model bases, and have them all push their adapters to separate Hub repos at completion. Colab is one notebook per kernel; you'd be cloning the box three times and babysitting all three. **HF Jobs lets you do this:**

```bash
hf jobs run \
    --flavor l4x1 \
    --secrets HF_TOKEN \
    -v hf://spaces/helloAK96/chaosops:/data \
    -e GRPO_EPISODES=600 \
    -e GRPO_LORA_RANK=32 \
    -e GRPO_LR=2e-5 \
    -e GRPO_TEMP=0.8 \
    -e GRPO_CURRICULUM=easy:200,medium:200,hard:200 \
    -e GRPO_ROGUE_MULTIPLIER=2.0 \
    -e GRPO_PUSH_TO_HUB=1 \
    -e HUB_REPO_ID=helloAK96/chaosops-grpo-lora-p3a \
    pytorch/pytorch:2.4.1-cuda12.4-cudnn9-runtime \
    bash /data/scripts/jobs_grpo_train.sh
```

That's the entire production training pipeline. Three of those commands in a shell loop launched 3A, 3B, and 3C in parallel; three job IDs came back in milliseconds. The `-v hf://spaces/helloAK96/chaosops:/data` flag mounts the Space repo read-only inside the container so the Job runs against the *exact* code judges click on. The `--secrets HF_TOKEN` injects auth so the LoRA pushes itself back to the Hub when training finishes.

We never opened Jupyter, never re-uploaded source, and never had a runtime disconnect. **Total compute spend across all six GRPO experiments + six evaluations + a few smoke jobs: $9.80** of the $30 budget. The 1h 40m T4 training run on Phase 1 cost $0.67. The L4 Phase 3A run cost $1.93.

Colab Pro is $10/month flat and you still babysit. Colab free disconnects every 90 minutes. Neither would have let us run this experiment.

---

## Did the model actually learn something general, or memorize one path?

The thing that matters most about an RL agent isn't its training curve β€” it's whether it generalizes. We ran the trained Phase 3A LoRA on every (failure type Γ— tier Γ— seed) combination it had not seen during training. Resolution rates were 85% / 80% / 40%. Rogue catch was 100% on MEDIUM, 93% on HARD. MTTR matched Oracle. The action distribution it produced wasn't tied to one scenario β€” it had learned the underlying motif: *read the fleet trace, flag the rogue if there is one, finish the task.*

There's one trade-off the eval is honest about. The trained policy has a **13–35% false-positive rogue-flag rate on tiers without a real rogue.** The 2Γ— rogue-rubric weight made it eager. We deliberately didn't anneal that multiplier during training β€” and for a $30 budget we think the trade was right (the project's signature metric is rogue-catch on real rogue scenarios, and that went from 0% to 100% on MEDIUM). But the obvious next experiment β€” annealing the multiplier from 2.0 β†’ 1.0 over training β€” would close that gap.

---

## What this is, and what it isn't

ChaosOps AI is not a benchmark. It's a *training environment*: a place where the reward function is rich enough that a small open-source model, on a single GPU, can learn to do something specific that current LLM training pipelines don't address β€” **catch other AI agents misbehaving inside a multi-agent fleet, not just complete the user-facing task.**

The thing we'd want to know if we were judging this: did a 3B model with $1.93 of training and zero supervised demonstrations actually learn the right behaviour, or are we cherry-picking? The 540-episode evaluation across 9 failure types Γ— 3 tiers Γ— 5 seeds Γ— 4 policies is in the repo, with the JSON. The plots are committed as PNGs, not just embedded in a Colab cell that might disappear. Every LoRA from every phase β€” including the failures β€” is on the Hub:

- [**chaosops-grpo-lora**](https://huggingface.co/helloAK96/chaosops-grpo-lora) β€” Phase 0, the silent-fallback victim
- [**chaosops-grpo-lora-p1**](https://huggingface.co/helloAK96/chaosops-grpo-lora-p1) β€” Phase 1, LR fix
- [**chaosops-grpo-lora-p2**](https://huggingface.co/helloAK96/chaosops-grpo-lora-p2) β€” Phase 2, curriculum
- [**chaosops-grpo-lora-p3a**](https://huggingface.co/helloAK96/chaosops-grpo-lora-p3a) β€” Phase 3A β€” submitted, live on the Space πŸ†
- [**chaosops-grpo-lora-p3b**](https://huggingface.co/helloAK96/chaosops-grpo-lora-p3b) β€” Phase 3B control, falsified the cheap hypothesis
- [**chaosops-grpo-lora-p3c**](https://huggingface.co/helloAK96/chaosops-grpo-lora-p3c) β€” Phase 3C control, falsified reverse curriculum

Anyone who wants to verify these numbers can. That felt like the honest way to publish.

---

## Try it

| | |
|---|---|
| 🌐 **Live demo (HF Space)** | <https://huggingface.co/spaces/helloAK96/chaosops> |
| πŸ€– **Trained LoRA on Hub** | <https://huggingface.co/helloAK96/chaosops-grpo-lora-p3a> |
| πŸ“’ **Phase-wise judge demo notebook** | [`notebooks/colab_train.ipynb`](https://huggingface.co/spaces/helloAK96/chaosops/blob/main/notebooks/colab_train.ipynb) |
| πŸ“¦ **Source repo** | <https://github.com/vatsalllll/chaos_ops> |

```bash
# Local smoke test (no GPU needed)
git clone https://huggingface.co/spaces/helloAK96/chaosops && cd chaosops
python -m pytest tests/                                       # 110 tests
python -m chaosops.dashboard.terminal --scenario autoscaler_cost_cut \
    --policy oracle --difficulty hard
```

Open the live Space, pick `autoscaler_cost_cut`, set difficulty to `hard`, set policy to `trained`, and run. You'll watch a 3-billion-parameter language model β€” trained by us in 100 minutes for $1.93 β€” read the alert, infer that the autoscaler caused the incident, flag it, and direct the Dev agent to scale rather than restart. With a 7-billion-parameter model and another $15, we believe it would beat the heuristic outright. We'll be running that experiment next.

If you'd like to extend this β€” new failure types, different rogue agents, an adversarial bluffing rogue, real K8s integration β€” the env scaffolding is a few hundred lines and reads like a configuration. Add one and send a PR.

---

*Tomorrow's SRE isn't replacing humans β€” it's watching the other agents.*

β€” Built for the HuggingFace + OpenEnv + TRL hackathon. Tags: `#MultiAgent` `#ScalableOversight` `#GRPO` `#HFJobs` `#TRL` `#OpenEnv`