---
license: mit
task_categories:
  - text-generation
language:
  - en
tags:
  - solidity
  - smart-contracts
  - code-generation
  - agentic-eval
  - foundry
  - differential-fuzzing
size_categories:
  - 1K<n<10K
configs:
  - config_name: lite
    data_files:
      - split: train
        path: lite/train-*
  - config_name: full
    data_files:
      - split: train
        path: full/train-*
    default: true
---

# Solidity Eval (2026)

An agentic Solidity benchmark. Each task hands the agent a Foundry workspace where one function body in a real Etherscan-verified contract has been replaced with `revert("TODO");`. The agent edits, builds (`forge build`), and tests (`forge test`) inside a sandbox until it returns. Reward is the differential-fuzz pass rate (Diffusc + Echidna) of the model's body against the ground-truth body.

This dataset is intended for use with the hermes-agent Solidity Eval environment, but the `workspace_tar` / `scoring_tar` shape is harness-agnostic: any agent harness that can stage a tarball into `/work` and run a scorer over `/scoring` can use it.

## Reference baseline

All scores are pass@1 under SolBench's `echidna()` rule: per task, a single agentic attempt scores 1.0 only if Diffusc compiles the candidate AND Echidna's differential fuzzing finds no behavioral divergence vs. the ground-truth body (or the harness vacuously passes), AND the B3 canary check and stub-residue guard both pass. Slither runs but is recorded as a side metric only; this is not Vul@k. No resampling (`group_size=1`).

| Agent / model | Split | pass@1 | Wall-clock | Notes |
| --- | --- | --- | --- | --- |
| Qwopus 3.6 27B Solidity (stage4-rft) | lite | 46.5% (93/200) | ~27 min | Local serve via vLLM TP=2 FP8 (`qwen3_xml` tool parser) on 2× Blackwell GPUs. Hermes agent loop (in-process tool dispatch). Same fuzz settings and sandbox image as below. |
| Claude Code 2.1.128 (Claude Opus 4.7) | lite | 39.0% (78/200, 1 timeout) | ~34 min | Anthropic API via the Claude Code CLI inside the sandbox. CLI agent backend. |

Identical conditions across both rows: lite split (200 tasks), 16-way concurrency, `max_agent_turns=40`, `max_token_length=32000`, `agent_temperature=0.6`, `fuzz_timeout_s=300`, `fuzz_test_calls=50000`, `fuzz_seed=0xDEADBEEF`, reference sandbox image (`samscrack/solidity-eval-sandbox:dev` extended with the symlink fix below; the Claude Code row also adds the Claude Code CLI to the image), `terminal_backend=docker` on the same 32-core host. Run via hermes-agent's `solidity_eval` environment.

A non-trivial fraction of failures on lite comes from B3 canary false positives, where the deleted function legitimately requires common Solidity primitives (`keccak256`, `encodePacked`, addresses unsorted via `sortTokens`, etc.). The ground-truth canary list is generated by token overlap with the deleted function body and was not pruned for these false positives in the v0 build; this is the dominant source of noise on the lite baseline. Filter to rows whose canary set excludes those tokens if you want a less noisy slice.
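
Such a filter can be sketched as a predicate over the `canary_substrings` field. The primitive list below is illustrative, not an official allowlist:

```python
# Sketch: drop rows whose B3 canary list mentions common Solidity primitives
# (the dominant false-positive source described above). The primitive list
# here is an illustrative assumption, not the dataset's official one.
COMMON_PRIMITIVES = ("keccak256", "encodePacked", "sortTokens")

def has_primitive_canary(canaries):
    """True if any canary substring contains a common primitive."""
    return any(p in c for c in canaries for p in COMMON_PRIMITIVES)

# Rows shaped like the dataset's canary_substrings field:
rows = [
    {"task_id": "TokenSwap_0", "canary_substrings": ["keccak256(abi.encodePacked("]},
    {"task_id": "Vault_1", "canary_substrings": ["_rebalanceShares("]},
]
clean = [r for r in rows if not has_primitive_canary(r["canary_substrings"])]
print([r["task_id"] for r in clean])  # → ['Vault_1']
```

With `datasets`, the same predicate drops in as `ds.filter(lambda row: not has_primitive_canary(row["canary_substrings"]))`.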

## Configs

| Config | Rows | Selection |
| --- | --- | --- |
| `lite` | 200 | Stratified by pragma major.minor over the full cohort (seed=0). Use this for cost-bounded eval runs. |
| `full` | 3044 | Every row from the SolBench RACR-4k corpus that (a) has a parseable Solidity pragma, (b) satisfies the 0.5.0–0.8.34 compiler range installed in the reference image, and (c) parses cleanly through the function-stubbing step. |

## Schema

| Field | Type | Description |
| --- | --- | --- |
| `task_id` | `str` | `<contract_name>_<idx>`, unique within the dataset. |
| `domain` | `str` | `"unknown"` for every row; the source corpus has no domain mapping. |
| `contract_name` | `str` | Solidity contract name (target of the deletion). |
| `contract_address` | `str` | Etherscan address the source was scraped from. |
| `pragma` | `str` | Raw pragma string from the source (`^0.8.10`, `>=0.6.0 <0.8.0`, etc.). |
| `pragma_major_minor` | `str` | `"0.5"` / `"0.6"` / `"0.7"` / `"0.8"`; used for stratification. |
| `resolved_solc_version` | `str` | Highest installed solc satisfying the pragma. The reference image installs every patch version of 0.5.x–0.8.x. |
| `target_function_signature` | `str` | First line of the deleted function header (for the prompt). |
| `prompt_context` | `str` | Surrounding code shown to the model (`sc_ba[0]` from the source corpus). |
| `workspace_tar` | `str` | Base64-encoded `.tar.gz` containing `foundry.toml` + `src/<contract_name>.sol` (with the target function body replaced by `{ revert("TODO"); }`). Stage this into `/work` before the agent runs. |
| `scoring_tar` | `str` | Base64-encoded `.tar.gz` containing `manifest.json` + `origin/<contract_name>.sol`. Stage into `/scoring` only at scoring time, never during the agent loop. |
| `canary_substrings` | `list[str]` | Tokens / fragments unique to the deleted function body. The scorer fails the run with `b3_violation` if any of these appear in `/work` after the agent finishes. |
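
The two tarball fields round-trip through base64 the obvious way. A minimal in-memory demonstration (the tiny stand-in archive below is illustrative; the real bytes come from a dataset row):

```python
import base64, io, tarfile

# Sketch: how the workspace_tar / scoring_tar fields are encoded. We build a
# tiny stand-in archive in memory, then decode it the way a harness would
# before extracting into the sandbox.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as t:
    sol = b'contract Demo { function f() external { revert("TODO"); } }'
    info = tarfile.TarInfo("src/Demo.sol")
    info.size = len(sol)
    t.addfile(info, io.BytesIO(sol))
workspace_tar = base64.b64encode(buf.getvalue()).decode()  # what the dataset stores

# Consuming side: decode, then inspect members before extractall().
raw = base64.b64decode(workspace_tar)
with tarfile.open(fileobj=io.BytesIO(raw), mode="r:gz") as t:
    members = t.getnames()
print(members)  # → ['src/Demo.sol']
```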

## Reward semantics

The in-image scorer (`/opt/scoring/run_diffusc.py` in the reference image) follows SolBench's `echidna()` rule:

```
reward = 1.0  if Diffusc compiles and Echidna passes (or vacuous "no diff" pass)
reward = 0.0  on Diffusc compile failure, Echidna failure, timeout,
              B3 canary violation, or stub residue (revert("TODO"); left in)
```

Three pass routes are tagged in `pass_route.txt` for auditability: `exit_0`, `vacuous_no_diff`, `fail`. Runs with a disproportionate share of vacuous passes can be flagged.
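
A vacuous-pass audit can be sketched as a tally over the per-task routes. The route names follow `pass_route.txt` above; the 50% threshold is an illustrative choice, not part of the benchmark:

```python
from collections import Counter

# Sketch: flag runs whose passing tasks lean too heavily on the vacuous
# "no diff" route. Route names follow pass_route.txt; the example list and
# the 0.5 threshold are illustrative assumptions.
routes = ["exit_0", "vacuous_no_diff", "exit_0", "fail", "vacuous_no_diff"]
counts = Counter(routes)
passes = counts["exit_0"] + counts["vacuous_no_diff"]
vacuous_share = counts["vacuous_no_diff"] / passes if passes else 0.0
flagged = vacuous_share > 0.5
print(f"{vacuous_share:.0%} of passes are vacuous; flagged={flagged}")
```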

## B2 / B3 reward-hack mitigations

- **B2 (no exfiltration via web tools).** The verified contracts are public on Etherscan, so any agent with web access can fetch the answer in one query. Runners must disable web/browser/messaging tools; for non-Hermes agents this means setting agent-specific flags or running the sandbox container without network egress.
- **B3 (no ground truth in the workspace).** `scoring_tar` (containing `origin/`) is staged only after the agent loop ends. The scorer additionally greps the candidate for `canary_substrings` and zeroes the reward on any match.
- **Stub residue.** If the agent leaves `revert("TODO");` in the candidate, the scorer flags it (`pass_route=stub_residue`) and zeroes the reward; otherwise an empty implementation would coast through the vacuous-pass route.

### B3 caveats

- 168 of the 3044 rows have an empty `canary_substrings` list (the deleted body uses only identifiers that already appear elsewhere in the contract, common for ERC20-style getters). For these rows B3 enforcement degrades to best-effort: the canary grep is a no-op and only the stub-residue and diff-fuzz checks remain. Auditors who want strict B3 enforcement should filter to rows with non-empty canaries.

## Running it yourself

The dataset is harness-agnostic: any agent runner that can stage two tarballs and exec a Python scorer inside a sandbox container can use it. The only fixed contract is the layout the scorer expects:

```
/work/                         <- agent's editable workspace
  foundry.toml
  src/<contract_name>.sol      <- target file; one function body is revert("TODO");

/scoring/                      <- never visible during the agent loop (B3)
  manifest.json                <- resolved_solc_version, contract_name, fuzz settings, canaries
  origin/<contract_name>.sol   <- ground-truth contract

/logs/verifier/                <- scorer writes results here
  reward.txt                   <- single float in [0, 1]
  scoring_log.txt
  diffusc_stdout.txt / diffusc_stderr.txt
  echidna_stdout.txt / echidna_stderr.txt
  b3_violation.txt             <- present iff B3 canary check fired
  slither.txt                  <- side metric (not used in reward)
```

### Step-by-step

```python
from datasets import load_dataset
import base64, io, tarfile, json, pathlib

ds = load_dataset("samscrack/solidity-eval-2026", "lite", split="train")
row = ds[0]

# 1. Stage the workspace into /work BEFORE the agent runs.
ws = base64.b64decode(row["workspace_tar"])
with tarfile.open(fileobj=io.BytesIO(ws), mode="r:gz") as t:
    t.extractall("/path/to/sandbox/work")

# 2. Run your agent against /work (forge build / forge test / edit /work/src).
#    Disable web/browser tools (B2 — verified contracts are public on Etherscan).
#    Do NOT mount or expose /scoring during this step.
run_my_agent(workspace="/path/to/sandbox/work", prompt=row["prompt_context"])

# 3. After the agent returns, stage the scoring assets and run the scorer.
sc = base64.b64decode(row["scoring_tar"])
with tarfile.open(fileobj=io.BytesIO(sc), mode="r:gz") as t:
    t.extractall("/path/to/sandbox/scoring")

# 4. Run the in-image scorer. Use the reference image as-is.
#    (The same container used for the agent loop is fine — see "Sandbox tips" below.)
#    docker run --rm \
#      -v /path/to/sandbox/work:/work \
#      -v /path/to/sandbox/scoring:/scoring \
#      -v /path/to/sandbox/verifier:/logs/verifier \
#      samscrack/solidity-eval-sandbox:dev \
#      python /opt/scoring/run_diffusc.py

reward = float(pathlib.Path("/path/to/sandbox/verifier/reward.txt").read_text().strip())
```
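
Beyond `reward.txt`, the other verifier artifacts are worth collecting for auditing. A minimal sketch following the `/logs/verifier` layout above (the temp dir here stands in for the real bind mount):

```python
import pathlib, tempfile

# Sketch: parse the scorer's output directory. Per the layout above,
# b3_violation.txt exists only when the canary check fired.
def read_result(verifier: pathlib.Path) -> dict:
    return {
        "reward": float((verifier / "reward.txt").read_text().strip()),
        "b3_violation": (verifier / "b3_violation.txt").exists(),
    }

with tempfile.TemporaryDirectory() as d:
    v = pathlib.Path(d)
    (v / "reward.txt").write_text("1.0\n")  # stand-in for a real scorer run
    result = read_result(v)
print(result)  # → {'reward': 1.0, 'b3_violation': False}
```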

## Reference scorer image

`samscrack/solidity-eval-sandbox:dev`

Bundles Foundry 1.6.0-nightly, Echidna 2.2.3 (SolBench fork), Diffusc + Slither 0.9.3 (`webthethird/slither.git@dev-diffusc-testing`), solc-select with every patch of 0.5.x–0.8.x installed, and `/opt/scoring/run_diffusc.py`.

A standalone smoke test that does not depend on a harness is included at the eval repo's `scoring-smoke/run_scoring_smoke.sh`: it builds an identical-candidate (reward=1) case and a wrong-direction-candidate (reward=0) case and runs the scorer on each.

## Sandbox tips (lessons from running this end-to-end)

1. **Persist the same sandbox container across staging → agent → scoring.** Each phase uses different tarballs, but the container's filesystem is the only place that links them. Don't tear down the container between phases; the agent's edits to `/work/src/<C>.sol` are exactly what the scorer reads.

2. **Don't break the solc-select symlink.** The reference image stores solc-select state at `/root/.solc-select` and exposes it via a symlink at `/opt/py/.solc-select`. If your harness bind-mounts `/root` (e.g. for an isolated agent home dir), that symlink breaks and the scorer fails on `solc-select use <X>`. Either:

   - don't bind-mount `/root`, or
   - layer a derived image that materializes the symlink:

     ```dockerfile
     FROM samscrack/solidity-eval-sandbox:dev
     RUN test -L /opt/py/.solc-select && \
         cp -a /root/.solc-select /opt/py/.solc-select.real && \
         rm /opt/py/.solc-select && \
         mv /opt/py/.solc-select.real /opt/py/.solc-select
     ```

3. **Budget for fuzzing time.** The agent-loop budget should comfortably exceed any single-command timeout your harness enforces. Echidna fuzzing runs up to `fuzz_timeout_s` (default 300 s) from `manifest.json`, and the scorer wrapper adds compile/setup overhead. A 600 s foreground-command cap will silently kill scoring mid-run. Configure ≥ `fuzz_timeout_s` + 120.

4. **Pipe prompts via stdin, not argv.** CLI agents (Claude Code, aider, etc.) shouldn't receive the prompt as a command-line argument. The 14–18 KB prompts contain Solidity code with substrings (`&`, `&&`, `;`) that trip many "is this a long-lived process / dangerous command" guards even inside shell quotes. Pipe the prompt via stdin or pass it as a file path:

   ```shell
   claude --print --max-turns 40 < /work/.agent_prompt.md
   ```

5. **Reference-image rebuild for embedded-CLI agents.** If you want to run a coding-agent CLI (Claude Code, aider) inside the sandbox, layer it on top of the reference image and apply the symlink fix from (2) at the same time. Example for Claude Code:

   ```dockerfile
   FROM samscrack/solidity-eval-sandbox:dev
   RUN apt-get update && apt-get install -y --no-install-recommends curl ca-certificates gnupg \
    && curl -fsSL https://deb.nodesource.com/setup_20.x | bash - \
    && apt-get install -y --no-install-recommends nodejs \
    && rm -rf /var/lib/apt/lists/* \
    && npm install -g @anthropic-ai/claude-code
   RUN test -L /opt/py/.solc-select && \
       cp -a /root/.solc-select /opt/py/.solc-select.real && \
       rm /opt/py/.solc-select && \
       mv /opt/py/.solc-select.real /opt/py/.solc-select
   ```

## Provenance

- Source corpus: SolBench (https://github.com/ZaoyuChen/SolBench, MIT, FSE 2026), specifically the RACR-4k LCS subset.
- Function stubbing + tarball construction: this dataset's builder (`environments/benchmarks/solidity_eval/dataset/build_modern_subset.py` in the eval repo).
- Reference scorer + sandbox image: `samscrack/solidity-eval-sandbox:dev` (Foundry 1.6.0-nightly, Echidna 2.2.3 from the SolBench fork, Diffusc + Slither 0.9.3 from `webthethird/slither.git@dev-diffusc-testing`, solc-select 0.5.0–0.8.34).

## Skipped rows from the source corpus (164 of 3208)

| Reason | Count |
| --- | --- |
| `cannot-stub-funit` | 143 |
| `unsupported-pragma-0.4` | 8 |
| `no-pragma` | 13 |

`cannot-stub-funit` covers constructors, modifiers, `receive`/`fallback` declarations, and abstract function shapes the regex-based stubber can't safely rewrite. Future builder iterations could handle these explicitly.
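
The limitation can be seen with a toy stubber. This is NOT the dataset's builder, just an illustration: a pattern that targets ordinary named functions with flat bodies simply never matches constructors, modifiers, or `receive`/`fallback` declarations:

```python
import re

# Toy illustration (not the real builder): stub the first ordinary named
# function body. Constructors lack the `function` keyword, so the pattern
# leaves them untouched; nested-brace bodies would also fail to match.
FUNC = re.compile(r'(function\s+\w+\s*\([^)]*\)[^{]*)\{[^{}]*\}')

src = 'contract C { constructor() {} function f() public { x = 1; } }'
stubbed = FUNC.sub(r'\1{ revert("TODO"); }', src, count=1)
print(stubbed)
```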

## License

MIT, inherited from SolBench. The contract sources are public on Etherscan.