# data/scenarios/
*Last updated: 2026-05-07*
Smart Grid transformer maintenance scenarios, following the AssetOpsBench scenario format. Each scenario is a multi-turn agentic task where an LLM agent must use the IoT / FMSR / TSFM / WO MCP tools to diagnose, forecast, or remediate a transformer fault.
## Format
Scenarios follow AssetOpsBench's existing utterance schema with required keys:
- `id` — unique identifier
- `type` — domain label (`IoT`, `FMSR`, `TSFM`, `WO`, or empty for mixed/general)
- `text` — user instruction for the agent
- `category` — task category label
- `characteristic_form` — objective expected answer pattern for grading
For Smart Grid authoring in this repo, we keep additional optional keys:
- `asset_id` — fictional transformer ID (`T-001` to `T-020`)
- `expected_tools` — expected MCP tools in rough order
- `ground_truth` — checkable target answer/action
- `difficulty` — easy / medium / hard
- `domain_tags` — exercised domains (`IoT`, `FMSR`, `TSFM`, `WO`)
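A minimal scenario record combining the required and optional keys above might look like the following sketch. All field values here are illustrative only (the tool names in `expected_tools` and the alarm wording are made up, not taken from the real scenario set):

```python
import json

# Illustrative scenario record; every value is invented for this sketch.
scenario = {
    "id": "fmsr_01_dga_arcing_diagnosis",
    "type": "FMSR",
    "text": "Transformer T-003 tripped its DGA alarm. Diagnose the likely fault mode.",
    "category": "diagnosis",
    "characteristic_form": "Names the fault mode (e.g. arcing) with supporting gas ratios.",
    "asset_id": "T-003",
    "expected_tools": ["get_sensor_readings", "diagnose_fault_mode"],  # hypothetical tool names
    "ground_truth": "arcing",
    "difficulty": "medium",
    "domain_tags": ["IoT", "FMSR"],
}

# Quick sanity check against the required-key list above.
REQUIRED_KEYS = {"id", "type", "text", "category", "characteristic_form"}
missing = REQUIRED_KEYS - scenario.keys()
print(json.dumps(scenario, indent=2))
print("missing required keys:", sorted(missing))
```

The real schema check lives in the validator; this is only a shape reference for authoring.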
See the upstream AssetOpsBench structure in `src/scenarios/local/vibration_utterance.json` and `aobench/scenario-server/src/scenario_server/handlers/*.py` (which consume `id`, `type`, `text`, `category`, `characteristic_form`).
## Targets
- **W2 (Apr 7-13):** 15+ validated scenarios (reviewer)
- **W4 (Apr 21-27):** 30+ scenarios (reviewer + team) — stretch goal per mid-point report
## Conventions
- **File naming:** `<domain>_<NN>_<short_slug>.json`
  - e.g. `fmsr_01_dga_arcing_diagnosis.json`, `tsfm_03_rul_forecast_weekly.json`
- **Multi-domain scenarios:** `multi_<NN>_<slug>.json`
  - e.g. `multi_01_full_fault_response.json` (IoT sensor alert → FMSR diagnosis → TSFM RUL check → WO creation)
- **Before committing**, validate against the AssetOpsBench scenario schema and confirm the referenced `asset_id` exists in `data/processed/asset_metadata.csv`.
- **Ground truth must be objectively checkable** — if scoring depends on subjective judgment, add a scoring rubric field.
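The pre-commit checks above can be sketched as a small helper. This is only an illustration, not the project's actual validator (`validate_scenarios.py` is the authoritative check), and it assumes `asset_metadata.csv` has an `asset_id` column header:

```python
import csv
import json
from pathlib import Path


def asset_ids(metadata_csv: Path) -> set[str]:
    # Collect known transformer IDs; assumes an "asset_id" column header.
    with metadata_csv.open(newline="") as fh:
        return {row["asset_id"] for row in csv.DictReader(fh)}


def check_scenario(scenario_path: Path, known_ids: set[str]) -> list[str]:
    # Return a list of problems; an empty list means the file passes this check.
    scenario = json.loads(scenario_path.read_text())
    problems = []
    for key in ("id", "type", "text", "category", "characteristic_form"):
        if key not in scenario:
            problems.append(f"missing required key: {key}")
    asset = scenario.get("asset_id")
    if asset is not None and asset not in known_ids:
        problems.append(f"unknown asset_id: {asset}")
    return problems
```

Running something like this before committing catches the two most common authoring mistakes: a missing required key and an `asset_id` that has no row in the metadata CSV.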
## Validation
Run the validator from repo root before committing scenario changes:
```bash
python data/scenarios/validate_scenarios.py
```
This catches schema violations and negative-fixture regressions before you reach the
heavier eval-harness path. For the full harness workflow, see
[../../docs/eval_harness_readme.md](../../docs/eval_harness_readme.md).
## Status (May 7, 2026)
Canonical package contains 36 validated scenarios: 31 hand-authored scenarios plus 5 generated scenarios promoted after validation and manual review.