# SignalDepth E15 Context Budget
This is a small prompt-sensitivity benchmark slice for separating two explanations that often get conflated:
- the prompt is too short
- the task contract is underspecified
The narrow result: on this deterministic Python code-task suite, making sparse prompts longer did not help. Making the task contract explicit did.
## Key Result
| Condition | Average pass rate | Read |
|---|---|---|
| short_sparse | 0.25 | short and underspecified |
| long_sparse | 0.25 | longer, same missing contract |
| short_dense | 1.00 | short but contract-explicit |
| long_dense | 1.00 | longer and contract-explicit |
Length marginal: short = 0.625, long = 0.625.
Density marginal: sparse = 0.25, dense = 1.00.
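For readers who want to recompute the marginals from the table above, here is a minimal sketch. The condition names and pass rates are taken from the Key Result table; the `marginal` helper is illustrative and not part of the harness:

```python
# Condition-level average pass rates, copied from the Key Result table.
pass_rates = {
    "short_sparse": 0.25,
    "long_sparse": 0.25,
    "short_dense": 1.00,
    "long_dense": 1.00,
}

def marginal(level: str) -> float:
    """Average pass rate over conditions whose name contains the factor level."""
    vals = [v for name, v in pass_rates.items() if level in name]
    return sum(vals) / len(vals)

print(marginal("short"), marginal("long"))    # length marginal: 0.625 0.625
print(marginal("sparse"), marginal("dense"))  # density marginal: 0.25 1.0
```

The length marginal is flat while the density marginal separates completely, which is the whole result in two numbers.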
## Scope
- deterministic Python code tasks
- 4 local-model run archives
- 4 tasks
- 4 prompt conditions
- 192 graded calls
- k = 3
- temperature 0.0
This is not a general long-context benchmark, not a model leaderboard, and not evidence about non-code task families.
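The graded-call count follows directly from the scope numbers: 4 model archives × 4 tasks × 4 conditions × k = 3 samples per cell. A quick arithmetic check:

```python
models, tasks, conditions, k = 4, 4, 4, 3
graded_calls = models * tasks * conditions * k
print(graded_calls)  # 192
```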
## Conditions
- `short_sparse`: short prompt, task contract underspecified
- `long_sparse`: longer prompt, same missing task contract
- `short_dense`: short prompt with explicit I/O and edge-condition constraints
- `long_dense`: longer prompt with explicit task contract
Dense here means contract-explicit, not merely wordy.
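To make the sparse/dense distinction concrete, here is a hypothetical prompt pair in the spirit of the conditions. These strings are illustrative only, not the actual E15 prompts, which remain in the private archives:

```python
# Hypothetical sparse prompt: short AND contract-underspecified.
sparse_prompt = "Write a function that deduplicates a list."

# Hypothetical dense prompt: still short, but the contract is explicit.
dense_prompt = (
    "Write a function dedupe(items: list) -> list that removes duplicates.\n"
    "Contract:\n"
    "- preserve first-occurrence order\n"
    "- return a new list; do not mutate the input\n"
    "- an empty input returns an empty list\n"
)
```

The dense variant adds little length; what it adds is the I/O and edge-condition contract the grader will check.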
## Files
- `data/e15_summary.json`: extracted E15 aggregate from SignalDepth public findings
- `data/e15_conditions.csv`: condition-level pass rates
- `data/e15_models.csv`: model-by-condition pass rates
- `data/e15_tasks.csv`: task-by-condition pass rates
- `data/e15_failure_modes.csv`: sparse-prompt failure counts
- `schemas/experiment.schema.json`: public experiment schema
- `docs/context-budget.md`: method/result note
- `docs/methodology.md`: benchmark methodology
- `assets/context-budget.svg`: chart
Dataset Viewer configs are conditions (default), models, tasks, and failure_modes. The JSON summary is included as the canonical aggregate object and is not mapped to a viewer table.
## Reproduce Comparable Runs
The public harness is available here:
https://github.com/signaldepth/prompt-sensitivity-bench
Example:

```bash
cd harness
uv run python validate.py e15 --model-name qwen3.5:4b --k 3
```
Fresh runs should be read as replications on the same prompt family, not byte-for-byte replays of the archived local runs.
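One way to read a fresh run against the archived numbers is to aggregate its graded calls back to condition-level pass rates and compare them to the table above. A sketch under an assumed record shape; the `(condition, passed)` tuples are illustrative, not the harness's actual output schema:

```python
from collections import defaultdict

# Each graded call as (condition, passed); shape is assumed for illustration.
records = [
    ("short_sparse", True), ("short_sparse", False),
    ("short_dense", True), ("short_dense", True),
]

totals = defaultdict(lambda: [0, 0])  # condition -> [passes, total calls]
for condition, passed in records:
    totals[condition][0] += int(passed)
    totals[condition][1] += 1

pass_rates = {cond: p / n for cond, (p, n) in totals.items()}
print(pass_rates)  # {'short_sparse': 0.5, 'short_dense': 1.0}
```

With only k = 3 samples per cell, condition-level rates from a fresh run should be expected to match the archive in pattern (sparse low, dense high), not digit for digit.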
## Raw Data Boundary
This dataset publishes the derived public E15 summary and reproducibility scaffold. Full raw per-trial archives remain local/private unless curated for a later release.
That boundary is intentional: the artifact is meant to make the condition design, aggregate result, and public harness easy to inspect without implying a full raw-output dump.
## Citation
If you use this result, cite the Hugging Face dataset and the GitHub repository:
- Dataset: signaldepth/e15-context-budget
- Code: https://github.com/signaldepth/prompt-sensitivity-bench