SmartGridBench Review Code Package
This is an anonymized executable source snapshot for review. It contains the benchmark runners, Smart Grid MCP servers, public-safe synthetic data, canonical scenario files, configs, and tests needed to inspect or reproduce the core benchmark wiring.
The companion data/artifact files in the parent Hugging Face repository contain the paper-facing CSV summaries, scenario catalog, manual judge audit, Croissant metadata, provenance notes, and validation notes.
Quick Start
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
python data/scenarios/validate_scenarios.py
python -m compileall -q scripts mcp_servers data tests
DRY_RUN=1 bash scripts/run_experiment.sh configs/example_baseline.env
GPU/vLLM runs additionally require the local model path and serving stack described in the config files. Hosted-LLM judge runs and 70B spot checks require reviewer-provided credentials.
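The env files consumed by run_experiment.sh are not reproduced here; purely as a sketch, a baseline config might carry settings along these lines. Every variable name below is a hypothetical placeholder, not the package's actual schema — the real keys are defined in configs/example_baseline.env.

```shell
# Hypothetical illustration only; see configs/example_baseline.env for the real keys.
MODEL_PATH=/models/local-model-dir   # local model weights for the vLLM serving stack
VLLM_PORT=8000                       # port the serving stack listens on
JUDGE_API_KEY=...                    # reviewer-provided credential for the hosted-LLM judge
DRY_RUN=0                            # set to 1 to exercise the wiring without GPU serving
```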
Included
mcp_servers/: four Smart Grid MCP server domains over public-safe synthetic CSVs.
scripts/: benchmark runners, orchestration helpers, scenario generation helpers, judge helper, evidence-export helpers, and serving smoke scripts.
configs/: canonical benchmark and smoke env files.
data/processed/: synthetic/public-safe fixture data.
data/scenarios/: canonical scenario JSON files and negative checks.
tests/: focused unit/regression tests for the source snapshot.
License
Code in this review package is provided under the MIT License. Data/artifact files in the parent review repository are provided under CC BY 4.0 unless a file states otherwise.