---
license: mit
task_categories:
- text-generation
language:
- en
tags:
- solidity
- smart-contracts
- code-generation
- agentic-eval
- foundry
- differential-fuzzing
size_categories:
- 1K<n<10K
---

## Schema

| Field | Type | Description |
| --- | --- | --- |
| `id` | str | `…_<n>`, unique within the dataset. |
| `domain` | str | `"unknown"` for every row — the source corpus has no domain mapping. |
| `contract_name` | str | Solidity contract name (target of the deletion). |
| `contract_address` | str | Etherscan address the source was scraped from. |
| `pragma` | str | Raw pragma string from the source (`^0.8.10`, `>=0.6.0 <0.8.0`, etc.). |
| `pragma_major_minor` | str | `"0.5"` / `"0.6"` / `"0.7"` / `"0.8"` — used for stratification. |
| `resolved_solc_version` | str | Highest installed solc satisfying the pragma. The reference image installs every patch version of 0.5.x–0.8.x. |
| `target_function_signature` | str | First line of the deleted function header (for the prompt). |
| `prompt_context` | str | Surrounding code shown to the model (`sc_ba[0]` from the source corpus). |
| `workspace_tar` | str | Base64-encoded `.tar.gz` containing `foundry.toml` + `src/<ContractName>.sol` (with the target function body replaced by `{ revert("TODO"); }`). Stage this into `/work` before the agent runs. |
| `scoring_tar` | str | Base64-encoded `.tar.gz` containing `manifest.json` + `origin/<ContractName>.sol`. Stage into `/scoring` *only at scoring time* — never during the agent loop. |
| `canary_substrings` | list[str] | Tokens / fragments unique to the deleted function body. The scorer fails the run with `b3_violation` if any of these appear in `/work` after the agent finishes. |
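Both tarball fields use the same encoding, so they can be inspected before staging. A minimal sketch (the helper name is mine; `row` is a dataset row as in the step-by-step below):

```python
import base64
import io
import tarfile

def list_tar_members(b64: str) -> list[str]:
    """Decode a base64-encoded .tar.gz field and list its member paths."""
    raw = base64.b64decode(b64)
    with tarfile.open(fileobj=io.BytesIO(raw), mode="r:gz") as t:
        return t.getnames()

# e.g. list_tar_members(row["workspace_tar"]) should show
# "foundry.toml" plus the stubbed src/ contract file.
```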
## Reward semantics

The in-image scorer (`/opt/scoring/run_diffusc.py` in the reference image) follows SolBench's `echidna()` rule:

```
reward = 1.0   if Diffusc compiles + Echidna passes (or vacuous "no diff" pass)
reward = 0.0   on diffusc compile failure, echidna failure, timeout,
               B3 canary violation, or stub-residue (revert("TODO"); left in)
```

The route taken is tagged in `pass_route.txt` for auditability: `exit_0`, `vacuous_no_diff`, or `fail`. Runs with a disproportionate share of vacuous passes can be flagged.

## B2 / B3 reward-hack mitigations

- **B2 (no exfiltration via web tools).** The verified contracts are public on Etherscan, so any agent with web access can fetch the answer with one query. Runners must disable web/browser/messaging tools; for non-Hermes agents this means setting agent-specific flags or running the sandbox container without network egress.
- **B3 (no ground-truth in the workspace).** `scoring_tar` (containing `origin/`) is staged only after the agent loop ends. The scorer additionally greps the candidate for `canary_substrings` and zeroes the reward if any match.
- **Stub residue.** If the agent leaves `revert("TODO");` in the candidate, the scorer flags it (`pass_route=stub_residue`) and zeroes the reward — otherwise an empty implementation would coast through the vacuous-pass route.

### B3 caveats

- 168 of the 3044 rows have an empty `canary_substrings` list (the deleted body uses only identifiers that already appear elsewhere in the contract — common for ERC20-style getters). For these rows B3 enforcement degrades to best-effort: the canary grep is a no-op, and only the stub-residue and diff-fuzz checks remain. Auditors who want strict B3 enforcement should filter to rows with non-empty canaries.

## Running it yourself

The dataset is harness-agnostic: any agent runner that can stage two tarballs and exec a Python scorer inside a sandbox container can use it.
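Auditors who want the strict-B3 subset mentioned above can apply the filter up front. A sketch over plain dicts (the function name is mine):

```python
def strict_b3_rows(rows):
    """Drop rows whose empty canary list makes the B3 grep a no-op.

    Per the B3 caveat, 168 of the 3044 rows have no canaries;
    strict-B3 auditing keeps the remaining 2876.
    """
    return [r for r in rows if r["canary_substrings"]]

# With the `datasets` library the equivalent is:
#   strict = ds.filter(lambda r: len(r["canary_substrings"]) > 0)
```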
The only fixed contract is the layout the scorer expects:

```
/work/                       <- agent's editable workspace
  foundry.toml
  src/<ContractName>.sol     <- target file; one function body is `revert("TODO");`
/scoring/                    <- never visible during the agent loop (B3)
  manifest.json              <- resolved_solc_version, contract_name, fuzz settings, canaries
  origin/<ContractName>.sol  <- ground-truth contract
/logs/verifier/              <- scorer writes results here
  reward.txt                 <- single float in [0, 1]
  scoring_log.txt
  diffusc_stdout.txt / diffusc_stderr.txt
  echidna_stdout.txt / echidna_stderr.txt
  b3_violation.txt           <- present iff B3 canary check fired
  slither.txt                <- side metric (not used in reward)
```

### Step-by-step

```python
from datasets import load_dataset
import base64, io, tarfile, json, pathlib

ds = load_dataset("samscrack/solidity-eval-2026", "lite", split="train")
row = ds[0]

# 1. Stage the workspace into /work BEFORE the agent runs.
ws = base64.b64decode(row["workspace_tar"])
with tarfile.open(fileobj=io.BytesIO(ws), mode="r:gz") as t:
    t.extractall("/path/to/sandbox/work")

# 2. Run your agent against /work (forge build / forge test / edit /work/src).
#    Disable web/browser tools (B2 — verified contracts are public on Etherscan).
#    Do NOT mount or expose /scoring during this step.
run_my_agent(workspace="/path/to/sandbox/work", prompt=row["prompt_context"])

# 3. After the agent returns, stage the scoring assets and run the scorer.
sc = base64.b64decode(row["scoring_tar"])
with tarfile.open(fileobj=io.BytesIO(sc), mode="r:gz") as t:
    t.extractall("/path/to/sandbox/scoring")

# 4. Run the in-image scorer. Use the reference image as-is.
#    (The same container used for the agent loop is fine — see "Sandbox tips" below.)
# docker run --rm \
#   -v /path/to/sandbox/work:/work \
#   -v /path/to/sandbox/scoring:/scoring \
#   -v /path/to/sandbox/verifier:/logs/verifier \
#   samscrack/solidity-eval-sandbox:dev \
#   python /opt/scoring/run_diffusc.py
reward = float(pathlib.Path("/path/to/sandbox/verifier/reward.txt").read_text().strip())
```

### Reference scorer image

```
samscrack/solidity-eval-sandbox:dev
```

Bundles Foundry 1.6.0-nightly, Echidna 2.2.3 (SolBench fork), Diffusc + Slither 0.9.3 (`webthethird/slither.git@dev-diffusc-testing`), solc-select with every patch of 0.5.x–0.8.x installed, and `/opt/scoring/run_diffusc.py`. A standalone smoke test that does not depend on a harness is included at the eval repo's `scoring-smoke/run_scoring_smoke.sh`: it builds an identical-candidate case (`reward=1`) and a wrong-direction-candidate case (`reward=0`) and runs the scorer on each.

### Sandbox tips (lessons from running this end-to-end)

1. **Persist the same sandbox container across staging → agent → scoring.** Each phase uses different tarballs, but the container's filesystem is the only place that links them. Don't tear down the container between phases — the agent's edits to `/work/src/<ContractName>.sol` are exactly what the scorer reads.
2. **The reference image stores solc-select state at `/root/.solc-select`** and exposes it via a symlink at `/opt/py/.solc-select`. If your harness bind-mounts `/root` (e.g. for an isolated agent home dir), that symlink breaks and the scorer fails on `solc-select use <version>`. Either:
   - don't bind-mount `/root`, or
   - layer a derived image that materializes the symlink:
     ```dockerfile
     FROM samscrack/solidity-eval-sandbox:dev
     RUN test -L /opt/py/.solc-select && \
         cp -a /root/.solc-select /opt/py/.solc-select.real && \
         rm /opt/py/.solc-select && \
         mv /opt/py/.solc-select.real /opt/py/.solc-select
     ```
3. **The agent-loop budget should comfortably exceed any single-command timeout your harness enforces.** Echidna fuzzing runs up to `fuzz_timeout_s` (default 300s, set in `manifest.json`), and the scorer wrapper adds compile/setup overhead on top. A 600s foreground-command cap will silently kill scoring mid-run. Configure ≥ `fuzz_timeout_s + 120`.
4. **CLI agents (Claude Code, aider, etc.) shouldn't receive the prompt as a command-line argument.** The 14–18 KB prompts contain Solidity code with substrings (` & `, `&&`, `;`) that trip many "is this a long-lived process / dangerous command" guards even inside shell quotes. Pipe the prompt via stdin or pass it as a file path:
   ```bash
   claude --print --max-turns 40 < /work/.agent_prompt.md
   ```
5. **Reference-image rebuild for embedded-CLI agents.** If you want to run a coding-agent CLI (Claude Code, aider) inside the sandbox, layer it on top of the reference image and apply the symlink fix from (2) at the same time. Example for Claude Code:
   ```dockerfile
   FROM samscrack/solidity-eval-sandbox:dev
   RUN apt-get update && apt-get install -y --no-install-recommends curl ca-certificates gnupg \
       && curl -fsSL https://deb.nodesource.com/setup_20.x | bash - \
       && apt-get install -y --no-install-recommends nodejs \
       && rm -rf /var/lib/apt/lists/* \
       && npm install -g @anthropic-ai/claude-code
   RUN test -L /opt/py/.solc-select && \
       cp -a /root/.solc-select /opt/py/.solc-select.real && \
       rm /opt/py/.solc-select && \
       mv /opt/py/.solc-select.real /opt/py/.solc-select
   ```

## Provenance

- Source corpus: SolBench (https://github.com/ZaoyuChen/SolBench, MIT, FSE 2026), specifically the RACR-4k LCS subset.
- Function-stubbing + tarball construction: this dataset's builder (`environments/benchmarks/solidity_eval/dataset/build_modern_subset.py` in the eval repo).
- Reference scorer + sandbox image: `samscrack/solidity-eval-sandbox:dev` (Foundry 1.6.0-nightly, Echidna 2.2.3 from the SolBench fork, Diffusc + Slither 0.9.3 from `webthethird/slither.git@dev-diffusc-testing`, solc-select 0.5.0–0.8.34).

## Skipped rows from the source corpus (164 of 3208)

| Reason | Count |
| --- | --- |
| `cannot-stub-funit` | 143 |
| `unsupported-pragma-0.4` | 8 |
| `no-pragma` | 13 |

`cannot-stub-funit` covers constructors, modifiers, receive/fallback declarations, and abstract function shapes the regex-based stubber can't safely rewrite. Future builder iterations could handle these explicitly.

## License

MIT, inherited from SolBench. The contract sources are public on Etherscan.