docs: move Reference baseline to top, attribute Claude Opus 4.7

README.md
CHANGED
@@ -31,6 +31,14 @@ Agentic Solidity benchmark. Each task hands the agent a Foundry workspace where
 
 This dataset is intended for use with the [hermes-agent](https://github.com/NousResearch/hermes-agent) Solidity Eval environment, but the `workspace_tar` / `scoring_tar` shape is harness-agnostic — any agent harness that can stage a tarball into `/work` and run a scorer over `/scoring` can use it.
 
+## Reference baseline
+
+| Agent | Split | Pass rate | Notes |
+| --- | --- | --- | --- |
+| Claude Code 2.1.128 (Claude Opus 4.7) | `lite` | **39.0%** (78/200, 1 timeout) | 16-way concurrency, 40 max-turns, default fuzz settings (`fuzz_timeout_s=300`, `fuzz_test_calls=50000`, `fuzz_seed=0xDEADBEEF`). Reference sandbox image extended with Claude Code per the Dockerfile snippet above. Wall-clock ~34 min on a 32-core host. Run via `hermes-agent`'s `solidity_eval` environment with `agent_backend: claude-code`, `terminal_backend: docker`. |
+
+A non-trivial fraction of failures on `lite` come from B3 canary false-positives where the deleted function legitimately requires common Solidity primitives (`keccak256`, `encodePacked`, addresses unsorted via `sortTokens`, etc.). The ground-truth canary list is generated by token-overlap with the deleted function body and was not pruned for these false-positives in the v0 build; this is the dominant source of noise on the `lite` baseline. Filter to rows where the canary set excludes those tokens if you want a less-noisy slice.
+
 ## Configs
 
 | Config | Rows | Selection |
@@ -181,14 +189,6 @@ A standalone smoke test that does not depend on a harness is included at the eva
 mv /opt/py/.solc-select.real /opt/py/.solc-select
 ```
 
-## Reference baseline
-
-| Agent | Split | Pass rate | Notes |
-| --- | --- | --- | --- |
-| Claude Code 2.1.128 (Anthropic API default model) | `lite` | **39.0%** (78/200, 1 timeout) | 16-way concurrency, 40 max-turns, default fuzz settings (`fuzz_timeout_s=300`, `fuzz_test_calls=50000`, `fuzz_seed=0xDEADBEEF`). Reference sandbox image extended with Claude Code per the Dockerfile snippet above. Wall-clock ~34 min on a 32-core host. Run via `hermes-agent`'s `solidity_eval` environment with `agent_backend: claude-code`, `terminal_backend: docker`. |
-
-A non-trivial fraction of failures on `lite` come from B3 canary false-positives where the deleted function legitimately requires common Solidity primitives (`keccak256`, `encodePacked`, addresses unsorted via `sortTokens`, etc.). The ground-truth canary list is generated by token-overlap with the deleted function body and was not pruned for these false-positives in the v0 build; this is the dominant source of noise on the `lite` baseline. Filter to rows where the canary set excludes those tokens if you want a less-noisy slice.
-
 ## Provenance
 
 - Source corpus: SolBench (https://github.com/ZaoyuChen/SolBench, MIT, FSE 2026), specifically the RACR-4k LCS subset.
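The README's harness contract (stage `workspace_tar` into `/work`, then run a scorer over `/scoring`) can be sketched as below. This is a minimal illustration, not part of the dataset: the function name `stage_task` and the `root` indirection (standing in for the sandbox filesystem, `/` in a real container) are assumptions.

```python
import pathlib
import tarfile


def stage_task(workspace_tar: str, scoring_tar: str,
               root: str) -> tuple[pathlib.Path, pathlib.Path]:
    """Unpack a row's two bundles the way the README describes:
    workspace_tar -> <root>/work    (the tree the agent edits)
    scoring_tar   -> <root>/scoring (the scorer run afterwards)
    """
    work = pathlib.Path(root, "work")
    scoring = pathlib.Path(root, "scoring")
    for tar_path, dest in ((workspace_tar, work), (scoring_tar, scoring)):
        dest.mkdir(parents=True, exist_ok=True)
        with tarfile.open(tar_path) as tf:
            # Assumes trusted tarballs; vet archives before extracting.
            tf.extractall(dest)
    return work, scoring
```

Any harness that performs these two extractions and then invokes whatever entrypoint the scoring bundle ships can consume the dataset without hermes-agent.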
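The README's suggestion for a less-noisy `lite` slice (drop rows whose canary set contains the common Solidity primitives behind B3 false-positives) could be sketched like this. The `canaries` column name is an assumption about the schema; the token spellings are taken from the primitives the README lists.

```python
# Hypothetical noise filter for the `lite` split, per the README's note on
# B3 canary false-positives. Column name `canaries` is an assumption.
NOISY_TOKENS = {"keccak256", "encodePacked", "sortTokens"}


def is_low_noise(row: dict) -> bool:
    """Keep rows whose ground-truth canary set avoids the common
    Solidity primitives that trigger spurious B3 failures."""
    return not NOISY_TOKENS.intersection(row["canaries"])


# Tiny illustration with made-up rows:
rows = [
    {"task_id": "t1", "canaries": ["_transfer", "balanceOf"]},
    {"task_id": "t2", "canaries": ["keccak256", "abi.encode"]},
]
clean = [r["task_id"] for r in rows if is_low_noise(r)]  # -> ["t1"]
```

The same predicate could be handed to `datasets.Dataset.filter` after loading, if the actual column exposes the canary token list.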