# ChaosOps AI: GRPO LoRA Adapter
LoRA adapter for Qwen 2.5-1.5B-Instruct, fine-tuned with GRPO (Group Relative Policy Optimization, via TRL) on the ChaosOps AI multi-agent incident-response environment.
## What ChaosOps trains
Four LLM agents (SRE · Developer · Manager · Oversight) handle production-incident scenarios (DB deadlock, memory leak, bad config push, autoscaler cost-cut by a rogue AI, traffic misrouted by a rogue load balancer, cascade failure, DNS outage, disk full, rogue deploy bot) under partial observability. The Oversight agent is rewarded for catching cases where another AI in the fleet caused the incident, before the team applies a fix.
## Training recipe
| Setting | Value |
| --- | --- |
| Algorithm | TRL GRPO |
| Base | Qwen/Qwen2.5-1.5B-Instruct |
| LoRA target modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| LoRA rank / α | 16 / 16 |
| Group size | 2 completions per prompt |
| Optimization steps | 400 |
| Learning rate | 5e-6 |
| Max prompt length | 1024 tokens |
| Max completion length | 96 tokens |
| Hardware | NVIDIA T4 (16 GB) via HF Jobs |
| Reward streams | team (0.6) + oversight (0.4) |
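The recipe above can be sketched with TRL and PEFT roughly as follows. This is an illustrative reconstruction, not the actual training script (which lives in the source repo); the dataset and reward function names are placeholders.

```python
# Sketch only: reproducing the table's hyperparameters with TRL + PEFT.
from peft import LoraConfig
from trl import GRPOConfig, GRPOTrainer

lora_cfg = LoraConfig(
    r=16, lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
grpo_cfg = GRPOConfig(
    output_dir="chaosops-grpo-lora",
    learning_rate=5e-6,
    max_steps=400,
    num_generations=2,          # group size: completions per prompt
    max_prompt_length=1024,
    max_completion_length=96,
)
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-1.5B-Instruct",
    args=grpo_cfg,
    train_dataset=prompts_dataset,   # placeholder dataset of prompts
    reward_funcs=[chaosops_reward],  # placeholder reward callable
    peft_config=lora_cfg,
)
trainer.train()
```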
The reward is composed from four named OpenEnv-style rubrics: `resolution`, `mttr`, `oversight`, and `cascade` (see the ChaosOps source).
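A minimal sketch of how rubric scores might be blended into the two reward streams from the table. The grouping of `resolution`, `mttr`, and `cascade` into the team stream and the equal inner weighting are assumptions for illustration; only the 0.6/0.4 split comes from this card.

```python
def combine_reward(rubrics: dict[str, float]) -> float:
    """Blend four rubric scores into team (0.6) and oversight (0.4) streams.

    Assumed grouping: resolution, mttr, cascade -> team; oversight -> oversight.
    """
    team = (rubrics["resolution"] + rubrics["mttr"] + rubrics["cascade"]) / 3
    oversight = rubrics["oversight"]
    return 0.6 * team + 0.4 * oversight
```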
## Files
- `adapter_model.safetensors` + `adapter_config.json` – the LoRA adapter itself
- `training_metrics.json` – per-log reward + loss + KL stream
- `learning_curve.png` – reward curve (axis-labelled, 150 dpi)
## How to use
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-1.5B-Instruct", device_map="auto"
)
tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")
model = PeftModel.from_pretrained(base, "helloAK96/chaosops-grpo-lora")
```
Or activate it directly inside the live ChaosOps Space by setting the Space secret `CHAOSOPS_ADAPTER_PATH=helloAK96/chaosops-grpo-lora`; the Space will lazily snapshot-download the adapter on first request and route the trained policy through it.
## Results
Before/after numbers (mean episode reward across 5 seeds × 9 failure types per tier) will be inserted by `scripts/post_train_eval.sh` once the run completes.
## Links
- Live demo (HF Space): https://huggingface.co/spaces/helloAK96/chaosops
- Source repo: https://github.com/vatsalllll/chaos_ops
- Training notebook: `notebooks/colab_train.ipynb`
- Reward rubric system: `chaosops/rewards/reward_fn.py`
## Citation

```bibtex
@misc{chaosops_ai_2026,
  title  = {ChaosOps AI: a multi-agent incident-response gym with rogue-agent detection},
  author = {ChaosOps AI Team},
  year   = {2026},
  url    = {https://huggingface.co/spaces/helloAK96/chaosops}
}
```