samscrack committed · verified
Commit b39c3e2 · 1 Parent(s): 0a284dd

docs: add harness-agnostic running guide, sandbox tips, and Claude Code 39% baseline

Files changed (1): README.md +112 -0
README.md CHANGED
@@ -77,6 +77,118 @@ Three pass routes are tagged in `pass_route.txt` for auditability: `exit_0`, `va

- 168 of the 3044 rows have an empty `canary_substrings` list (the deleted body uses only identifiers that already appear elsewhere in the contract — common for ERC20-style getters). For these rows B3 enforcement degrades to best-effort; the canary-grep is a no-op and only the stub-residue + diff-fuzz checks remain. Auditors who want strict B3 enforcement should filter to rows with non-empty canaries.

## Running it yourself

The dataset is harness-agnostic: any agent runner that can stage two tarballs and exec a Python scorer inside a sandbox container can use it. The only fixed contract is the layout the scorer expects:

```
/work/                        <- agent's editable workspace
  foundry.toml
  src/<contract_name>.sol     <- target file; one function body is `revert("TODO");`

/scoring/                     <- never visible during the agent loop (B3)
  manifest.json               <- resolved_solc_version, contract_name, fuzz settings, canaries
  origin/<contract_name>.sol  <- ground-truth contract

/logs/verifier/               <- scorer writes results here
  reward.txt                  <- single float in [0, 1]
  scoring_log.txt
  diffusc_stdout.txt / diffusc_stderr.txt
  echidna_stdout.txt / echidna_stderr.txt
  b3_violation.txt            <- present iff B3 canary check fired
  slither.txt                 <- side metric (not used in reward)
```

### Step-by-step

```python
from datasets import load_dataset
import base64, io, tarfile, pathlib

ds = load_dataset("samscrack/solidity-eval-2026", "lite", split="train")
row = ds[0]

# 1. Stage the workspace into /work BEFORE the agent runs.
ws = base64.b64decode(row["workspace_tar"])
with tarfile.open(fileobj=io.BytesIO(ws), mode="r:gz") as t:
    t.extractall("/path/to/sandbox/work")

# 2. Run your agent against /work (forge build / forge test / edit /work/src).
#    Disable web/browser tools (B2 — verified contracts are public on Etherscan).
#    Do NOT mount or expose /scoring during this step.
run_my_agent(workspace="/path/to/sandbox/work", prompt=row["prompt_context"])

# 3. After the agent returns, stage the scoring assets and run the scorer.
sc = base64.b64decode(row["scoring_tar"])
with tarfile.open(fileobj=io.BytesIO(sc), mode="r:gz") as t:
    t.extractall("/path/to/sandbox/scoring")

# 4. Run the in-image scorer. Use the reference image as-is.
#    (The same container used for the agent loop is fine — see "Sandbox tips" below.)
#    docker run --rm \
#      -v /path/to/sandbox/work:/work \
#      -v /path/to/sandbox/scoring:/scoring \
#      -v /path/to/sandbox/verifier:/logs/verifier \
#      samscrack/solidity-eval-sandbox:dev \
#      python /opt/scoring/run_diffusc.py

reward = float(pathlib.Path("/path/to/sandbox/verifier/reward.txt").read_text().strip())
```
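
After scoring, the other files in the `/logs/verifier/` layout above carry the diagnostics. A small inspection sketch (paths follow the step-by-step example):

```python
import pathlib

out = pathlib.Path("/path/to/sandbox/verifier")
if (out / "b3_violation.txt").exists():  # present iff the B3 canary check fired
    print("B3 violation:", (out / "b3_violation.txt").read_text())
print((out / "scoring_log.txt").read_text()[-2000:])  # tail of the scoring log
```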

### Reference scorer image

```
samscrack/solidity-eval-sandbox:dev
```

Bundles Foundry 1.6.0-nightly, Echidna 2.2.3 (SolBench fork), Diffusc + Slither 0.9.3 (`webthethird/slither.git@dev-diffusc-testing`), solc-select with every patch of 0.5.x–0.8.x installed, and `/opt/scoring/run_diffusc.py`.
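
To sanity-check the pinned toolchain before a run, a minimal sketch (it assumes the listed tools expose the usual `--version` CLIs inside the image):

```python
import subprocess

# Print the versions of the bundled tools from a throwaway container.
subprocess.run(
    ["docker", "run", "--rm", "samscrack/solidity-eval-sandbox:dev",
     "bash", "-lc", "forge --version && echidna --version && slither --version"],
    check=True,
)
```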

A standalone smoke test that does not depend on a harness is included at the eval repo's `scoring-smoke/run_scoring_smoke.sh`: it builds an identical-candidate (`reward=1`) and a wrong-direction-candidate (`reward=0`) case and runs the scorer on each.
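
To invoke it outside any harness, something like the following (the eval repo checkout path is illustrative):

```python
import subprocess

# Runs the scorer on both smoke cases: the identical candidate should score
# reward=1 and the wrong-direction candidate reward=0.
subprocess.run(["bash", "scoring-smoke/run_scoring_smoke.sh"],
               cwd="/path/to/eval-repo", check=True)
```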

### Sandbox tips (lessons from running this end-to-end)

1. **Persist the same sandbox container across staging → agent → scoring.** Each phase uses different tarballs, but the container's filesystem is the only place that links them. Don't tear down the container between phases — the agent's edits to `/work/src/<C>.sol` are exactly what the scorer reads.
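
   A minimal lifecycle sketch (container name and host paths are illustrative; `/scoring` is copied in only after the agent loop so it stays hidden during the run, per B3):

   ```python
   import subprocess

   def sh(*args):
       subprocess.run(args, check=True)

   host = "/path/to/sandbox"  # same host directory as in the step-by-step above

   # One long-lived container spans staging, the agent loop, and scoring.
   sh("docker", "run", "-d", "--name", "solsandbox",
      "-v", f"{host}/work:/work", "-v", f"{host}/verifier:/logs/verifier",
      "samscrack/solidity-eval-sandbox:dev", "sleep", "infinity")

   # ... agent phase: drive your agent via `docker exec solsandbox ...` against /work ...

   # Stage scoring assets only AFTER the agent returns, then score in place.
   sh("docker", "exec", "solsandbox", "mkdir", "-p", "/scoring")
   sh("docker", "cp", f"{host}/scoring/.", "solsandbox:/scoring")
   sh("docker", "exec", "solsandbox", "python", "/opt/scoring/run_diffusc.py")
   sh("docker", "rm", "-f", "solsandbox")
   ```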

2. **The reference image stores solc-select state at `/root/.solc-select`** and exposes it via a symlink at `/opt/py/.solc-select`. If your harness bind-mounts `/root` (e.g. for an isolated agent home dir), that symlink breaks and the scorer fails on `solc-select use <X>`. Either:
   - don't bind-mount `/root`, or
   - layer a derived image that materializes the symlink:

   ```dockerfile
   FROM samscrack/solidity-eval-sandbox:dev
   RUN test -L /opt/py/.solc-select && \
       cp -a /root/.solc-select /opt/py/.solc-select.real && \
       rm /opt/py/.solc-select && \
       mv /opt/py/.solc-select.real /opt/py/.solc-select
   ```

3. **Your harness's per-command timeout must comfortably exceed the scoring run.** Echidna fuzzing runs up to `fuzz_timeout_s` (default 300s) in `manifest.json`, and the scorer wrapper adds compile/setup overhead on top; a 600s foreground-command cap can silently kill scoring mid-run. Configure the cap to at least `fuzz_timeout_s + 120`.
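
   A sketch of deriving that cap from a staged row (the `fuzz_timeout_s` key is quoted from the manifest description above; the path is illustrative):

   ```python
   import json, pathlib

   manifest = json.loads(pathlib.Path("/path/to/sandbox/scoring/manifest.json").read_text())
   # Margin of 120s for compile/setup overhead, per the tip above.
   cmd_cap_s = manifest.get("fuzz_timeout_s", 300) + 120
   ```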

4. **CLI agents (Claude Code, aider, etc.) shouldn't receive the prompt as a command-line argument.** The 14–18 KB prompts contain Solidity code with substrings (` & `, `&&`, `;`) that trip many "is this a long-lived process / dangerous command" guards even inside shell quotes. Pipe the prompt via stdin or pass it as a file path:

   ```bash
   claude --print --max-turns 40 < /work/.agent_prompt.md
   ```
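
   Staging that file from the harness side reuses `row` from the step-by-step snippet (the host path mirrors the `/work` mount; both names are illustrative):

   ```python
   import pathlib

   # Write the row's prompt into the workspace so the CLI can read it from stdin.
   pathlib.Path("/path/to/sandbox/work/.agent_prompt.md").write_text(row["prompt_context"])
   ```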

5. **Reference image rebuild for embedded-CLI agents.** If you want to run a coding-agent CLI (Claude Code, aider) inside the sandbox, layer it on top of the reference image and apply the symlink fix from (2) at the same time. Example for Claude Code:

   ```dockerfile
   FROM samscrack/solidity-eval-sandbox:dev
   RUN apt-get update && apt-get install -y --no-install-recommends curl ca-certificates gnupg \
       && curl -fsSL https://deb.nodesource.com/setup_20.x | bash - \
       && apt-get install -y --no-install-recommends nodejs \
       && rm -rf /var/lib/apt/lists/* \
       && npm install -g @anthropic-ai/claude-code
   RUN test -L /opt/py/.solc-select && \
       cp -a /root/.solc-select /opt/py/.solc-select.real && \
       rm /opt/py/.solc-select && \
       mv /opt/py/.solc-select.real /opt/py/.solc-select
   ```

## Reference baseline

| Agent | Split | Pass rate | Notes |
| --- | --- | --- | --- |
| Claude Code 2.1.128 (Anthropic API default model) | `lite` | **39.0%** (78/200, 1 timeout) | 16-way concurrency, 40 max-turns, default fuzz settings (`fuzz_timeout_s=300`, `fuzz_test_calls=50000`, `fuzz_seed=0xDEADBEEF`). Reference sandbox image extended with Claude Code per the Dockerfile snippet above. Wall-clock ~34 min on a 32-core host. Run via `hermes-agent`'s `solidity_eval` environment with `agent_backend: claude-code`, `terminal_backend: docker`. |

A non-trivial fraction of failures on `lite` comes from B3 canary false positives, where the deleted function legitimately requires common Solidity primitives (`keccak256`, `encodePacked`, addresses unsorted via `sortTokens`, etc.). The ground-truth canary list is generated by token overlap with the deleted function body and was not pruned for these false positives in the v0 build; this is the dominant source of noise on the `lite` baseline. Filter to rows whose canary set excludes those tokens if you want a less noisy slice.
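
A sketch of that slice, reusing `ds` from the step-by-step above (the token list is illustrative, and `canary_substrings` is assumed to be a list-valued column):

```python
# Keep rows whose canaries are non-empty and avoid the noisy common primitives.
NOISY_TOKENS = {"keccak256", "encodePacked", "sortTokens"}

def low_noise(row):
    canaries = row["canary_substrings"]
    return len(canaries) > 0 and not NOISY_TOKENS.intersection(canaries)

clean = ds.filter(low_noise)
```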

## Provenance

- Source corpus: SolBench (https://github.com/ZaoyuChen/SolBench, MIT, FSE 2026), specifically the RACR-4k LCS subset.