data/scenarios/
Last updated: 2026-05-07
Smart Grid transformer maintenance scenarios, following the AssetOpsBench scenario format. Each scenario is a multi-turn agentic task where an LLM agent must use the IoT / FMSR / TSFM / WO MCP tools to diagnose, forecast, or remediate a transformer fault.
Format
Scenarios follow AssetOpsBench's existing utterance schema with required keys:
- `id` — unique identifier
- `type` — domain label (`IoT`, `FMSR`, `TSFM`, `WO`, or empty for mixed/general)
- `text` — user instruction for the agent
- `category` — task category label
- `characteristic_form` — objective expected answer pattern for grading
For Smart Grid authoring in this repo, we keep additional optional keys:
- `asset_id` — fictional transformer ID (`T-001` to `T-020`)
- `expected_tools` — expected MCP tools in rough order
- `ground_truth` — checkable target answer/action
- `difficulty` — easy / medium / hard
- `domain_tags` — exercised domains (`IoT`, `FMSR`, `TSFM`, `WO`)
See the upstream AssetOpsBench structure in `src/scenarios/local/vibration_utterance.json` and `aobench/scenario-server/src/scenario_server/handlers/*.py` (which consume `id`, `type`, `text`, `category`, `characteristic_form`).
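Putting the required and optional keys together, a single scenario entry might look like this (all values are illustrative, not taken from the real scenario set, and the tool names in `expected_tools` are hypothetical):

```json
{
  "id": "fmsr_01_dga_arcing_diagnosis",
  "type": "FMSR",
  "text": "DGA on T-003 shows elevated acetylene. Diagnose the most likely fault mode.",
  "category": "diagnosis",
  "characteristic_form": "Named fault mode with supporting gas-ratio evidence",
  "asset_id": "T-003",
  "expected_tools": ["fmsr_lookup", "iot_sensor_history"],
  "ground_truth": "high-energy arcing",
  "difficulty": "medium",
  "domain_tags": ["IoT", "FMSR"]
}
```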
Targets
- W2 (Apr 7-13): 15+ validated scenarios (reviewer)
- W4 (Apr 21-27): 30+ scenarios (reviewer + team) — stretch goal per mid-point report
Conventions
- File naming: `<domain>_<NN>_<short_slug>.json`, e.g. `fmsr_01_dga_arcing_diagnosis.json`, `tsfm_03_rul_forecast_weekly.json`
- Multi-domain scenarios: `multi_<NN>_<slug>.json`, e.g. `multi_01_full_fault_response.json` (IoT sensor alert → FMSR diagnosis → TSFM RUL check → WO creation)
- Before committing, validate against the AssetOpsBench scenario schema and confirm the referenced `asset_id` exists in `data/processed/asset_metadata.csv`.
- Ground truth must be objectively checkable — if scoring depends on subjective judgment, add a scoring rubric field.
Validation
Run the validator from repo root before committing scenario changes:
```
python data/scenarios/validate_scenarios.py
```
This catches schema violations and negative-fixture regressions before you get to the heavier harness path. For the full harness workflow, see `../../docs/eval_harness_readme.md`.
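The shipped script is the source of truth for validation; as a rough sketch of the kind of checks it performs (the function and constant names below are illustrative, not the script's actual API):

```python
# Minimal sketch of scenario validation. REQUIRED_KEYS comes from the
# AssetOpsBench utterance schema described above; the asset_id range check
# enforces the fictional T-001..T-020 convention from this README.
REQUIRED_KEYS = {"id", "type", "text", "category", "characteristic_form"}


def validate_scenario(scenario: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means valid."""
    problems = []
    missing = REQUIRED_KEYS - scenario.keys()
    if missing:
        problems.append(f"missing required keys: {sorted(missing)}")
    asset_id = scenario.get("asset_id")
    if asset_id is not None and not (
        asset_id.startswith("T-")
        and asset_id[2:].isdigit()
        and 1 <= int(asset_id[2:]) <= 20
    ):
        problems.append(f"asset_id {asset_id!r} is outside T-001..T-020")
    return problems


if __name__ == "__main__":
    good = {
        "id": "fmsr_01",
        "type": "FMSR",
        "text": "Diagnose the DGA arcing signature on transformer T-003.",
        "category": "diagnosis",
        "characteristic_form": "Named fault mode with supporting evidence",
        "asset_id": "T-003",
    }
    print(validate_scenario(good))                          # valid: no problems
    print(validate_scenario({"id": "x", "asset_id": "T-99"}))  # two problems
```

A real validator would also load each `*.json` file from disk and cross-check `asset_id` against `data/processed/asset_metadata.csv`; this sketch only shows the per-scenario shape of the checks.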
Status (May 7, 2026)
Canonical package contains 36 validated scenarios: 31 hand-authored scenarios plus 5 generated scenarios promoted after validation and manual review.