Datdanboi25 committed
Commit 05b4fc0 · Parent: 1150a65

Update README.md

Files changed (4):
  1. .gitignore +3 -0
  2. AxiomicBanner.png +3 -0
  3. README.md +116 -3
  4. mathmark.jsonl +0 -0
.gitignore ADDED
@@ -0,0 +1,3 @@
+ *.py
+ *.txt
+ *.json
AxiomicBanner.png ADDED

Git LFS Details

  • SHA256: 75c38db1d76736a243905a29ba6e85eaf65e14037cd78c4f391bc146fe36fe4e
  • Pointer size: 130 Bytes
  • Size of remote file: 88 kB
README.md CHANGED
@@ -1,3 +1,116 @@
- ---
- license: apache-2.0
- ---
---
license: apache-2.0
language:
- en
size_categories:
- 1K<n<10K
pretty_name: ArithMark
---

![Axiomic Banner](AxiomicBanner.png)

# ArithMark

A procedurally generated benchmark for evaluating arithmetic reasoning in language models. Each problem presents a numeric expression and asks the model to identify the correct result.

Unlike knowledge-based benchmarks, ArithMark contains no facts a model could have memorised from pretraining. Every problem is generated fresh from random integers and operator sequences, so a model cannot pattern-match to training data — it must actually compute. This makes ArithMark a direct probe of **numerical reasoning capability**: the arithmetic structure that has been built into the model's weights through training, independent of world knowledge or surface-level heuristics.

Evaluation is log-likelihood multiple-choice: no chain-of-thought, no prompting tricks. Models are scored purely on how well they assign probability to the correct completion.
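
The scoring rule amounts to a length-normalised mean of per-token log-probabilities. A minimal sketch (the log-prob values below are illustrative; in practice they come from the model under evaluation):

```python
def pick_answer(option_token_logprobs):
    """Length-normalised log-likelihood scoring: average the log-probs
    a model assigns to each option's tokens and pick the highest mean."""
    scores = [sum(lps) / len(lps) for lps in option_token_logprobs]
    return max(range(len(scores)), key=scores.__getitem__)

# Illustrative per-token log-probs for the four options of one problem:
logprobs = [
    [-0.2, -0.1],        # option A -> mean -0.15
    [-1.5, -2.0, -1.8],  # option B
    [-0.9, -1.1],        # option C
    [-2.2, -1.9, -2.5],  # option D
]
print(pick_answer(logprobs))  # -> 0, i.e. option A
```

Averaging rather than summing keeps longer options (more tokens) from being penalised simply for their length.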

---

## Benchmark Results

Evaluated using average log-likelihood over ending tokens, normalised by length. Random chance = 25%.

| Company | Model | Params | Stage 1 | Stage 2 | Stage 3 | Stage 4 | Stage 5 | Avg |
| ------------------------ | ------------------------------------- | ------ | ---------- | ---------- | ---------- | ---------- | ---------- | ---------- |
| Alibaba | Qwen2.5-3B | 3.1B | 97.30% | 75.60% | **63.90%** | **55.40%** | **53.50%** | **69.14%** |
| Alibaba | Qwen2.5-Math-1.5B | 1.5B | **97.70%** | **77.90%** | 63.20% | 51.00% | 44.10% | 66.78% |
| Alibaba | Qwen2.5-1.5B | 1.5B | 94.70% | 69.10% | 58.90% | 49.90% | 46.80% | 63.88% |
| Alibaba | Qwen2.5-Coder-1.5B | 1.5B | 93.00% | 65.20% | 57.60% | 51.30% | 49.20% | 63.26% |
| Alibaba | Qwen2.5-0.5B | 494M | 85.50% | 57.60% | 48.20% | 40.10% | 38.80% | 54.04% |
| HuggingFace | SmolLM2-1.7B | 1.7B | 82.70% | 54.50% | 44.90% | 35.20% | 33.30% | 50.12% |
| EleutherAI | pythia-2.8b | 2.8B | 52.40% | 42.20% | 33.30% | 28.90% | 27.20% | 36.80% |
| OpenAI | gpt2-xl | 1.6B | 39.90% | 38.30% | 36.10% | 32.80% | 33.50% | 36.12% |
| OpenAI | gpt2-medium | 345M | 37.70% | 37.20% | 33.60% | 30.90% | 32.80% | 34.44% |
| HuggingFace | SmolLM2-135M | 135M | 41.20% | 37.40% | 32.80% | 28.70% | 25.90% | 33.20% |
| Axiomic Labs | GPT-X2-125M *(unreleased)* | 125M | 40.40% | 36.60% | 33.20% | 26.00% | 29.00% | 33.04% |
| HuggingFace | SmolLM-135M | 135M | 42.70% | 36.50% | 32.60% | 25.30% | 23.90% | 32.20% |
| OpenAI | gpt2 | 124M | 35.70% | 32.80% | 31.90% | 28.00% | 29.60% | 31.60% |
| Meta | MobileLLM-125M | 125M | 35.90% | 35.00% | 32.90% | 27.40% | 24.60% | 31.16% |
| Axiomic Labs | GPT-X-125M | 125M | 38.10% | 33.20% | 29.00% | 26.90% | 25.40% | 30.52% |
| EleutherAI | pythia-31m | 30M | 36.40% | 31.00% | 29.30% | 27.80% | 26.60% | 30.22% |
| EleutherAI | pythia-160m | 162M | 35.80% | 30.30% | 28.00% | 28.20% | 27.00% | 29.86% |
| EleutherAI | pythia-70m | 70M | 36.30% | 30.10% | 28.50% | 27.30% | 26.90% | 29.82% |
| EleutherAI | pythia-14m | 14M | 34.50% | 29.00% | 26.30% | 24.40% | 24.00% | 27.64% |

---

## Task Format

Each problem is presented as a numeric expression followed by four answer choices — a mix of integers and floats so the answer type is never a giveaway:

```
-13 * 3 + -1 =
A. -40
B. -4.2317
C. 26
D. -33.8851
```

Evaluation is log-likelihood multiple-choice: the model scores each continuation and the highest wins. No generation, no chain-of-thought.

---

## Difficulty Stages

Problems are organised into 5 stages of increasing complexity by operand count. All stages draw from `{+, -, *, /}`.

| Stage | Operands | Operators | Example |
|-------|----------|-----------|---------|
| 1 | 2 | 1 | `5 * -3 =` |
| 2 | 3 | 2 | `7 - 2 * 4 =` |
| 3 | 4 | 3 | `-6 + 10 / 2 - 1 =` |
| 4 | 5 | 4 | `3 * -2 + 8 - 1 / 4 =` |
| 5 | 6 | 5 | `-4 + 2 * 7 - 3 / 1 + 6 =` |

1,000 problems per stage, 5,000 total. Division by zero is rejected and regenerated.
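
From the table, a stage-s expression is s + 1 operands joined by s operators. A minimal generation sketch (function and variable names are my own, not the actual generator's):

```python
import random

def gen_expression(stage, rng):
    """Stage s: s + 1 integers from [-20, 20] joined by s random operators."""
    tokens = [str(rng.randint(-20, 20))]
    for _ in range(stage):
        tokens += [rng.choice(["+", "-", "*", "/"]), str(rng.randint(-20, 20))]
    return " ".join(tokens) + " ="

print(gen_expression(2, random.Random(0)))  # a random stage-2 expression
```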

---

## Dataset Format

```json
{
  "id": "math_s2_00000",
  "stage": 2,
  "n_operands": 3,
  "expression": "-13 * 3 + -1",
  "options": [-40, -4.2317, 26, -33.8851],
  "answer_index": 0,
  "answer": -40
}
```
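
The file is standard JSONL, one record per line. A minimal loader and prompt builder matching the Task Format above (helper names are my own, not part of the dataset):

```python
import json

def load_problems(path="mathmark.jsonl"):
    """One JSON record per line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

def to_prompt(p):
    """Render a record as the expression plus lettered options."""
    lines = [f"{p['expression']} ="]
    lines += [f"{letter}. {opt}" for letter, opt in zip("ABCD", p["options"])]
    return "\n".join(lines)

record = {"expression": "-13 * 3 + -1",
          "options": [-40, -4.2317, 26, -33.8851],
          "answer_index": 0}
print(to_prompt(record))
```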

---

## Generator

`mathmark_gen.py` — generates all 5 stages with baked-in distractors.

```
python mathmark_gen.py
```

### Design Decisions

**Left-to-right evaluation** — expressions are evaluated left-to-right (no operator precedence). Division by zero is rejected and regenerated.

**Integer-safe results** — float results are rounded to 4 decimal places; exact integer floats are cast to int.
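
Together, those two rules can be sketched as follows (my own rendering, not the generator's actual code):

```python
def eval_ltr(tokens):
    """Evaluate [n, op, n, op, ...] strictly left to right (no precedence)."""
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    acc = tokens[0]
    for i in range(1, len(tokens), 2):
        acc = ops[tokens[i]](acc, tokens[i + 1])  # ZeroDivisionError -> regenerate
    return acc

def normalise(x):
    """Round floats to 4 d.p.; cast exact-integer floats to int."""
    x = round(x, 4)
    return int(x) if float(x).is_integer() else x

print(normalise(eval_ltr([7, "-", 2, "*", 4])))     # -> 20, not -1 (no precedence)
print(normalise(eval_ltr([-13, "*", 3, "+", -1])))  # -> -40
```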

**Operand range** — integers sampled from [-20, 20] across all stages.

**Distractor generation** — each wrong answer is generated by adding a random float offset (±0.5–15.0) to the correct answer. Integer answers get 2 integer distractors and 1 float; float answers get 2 float distractors and 1 integer. This ensures the answer type is never uniquely identifiable.
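
A sketch of that scheme (the de-duplication and rounding details are my assumptions, not necessarily the generator's exact code):

```python
import random

def make_distractors(answer, seed=0):
    """2 distractors of the answer's type + 1 of the other type,
    each offset from the correct answer by a random float in ±(0.5, 15.0)."""
    rng = random.Random(seed)
    is_int = isinstance(answer, int)
    want_int = [is_int, is_int, not is_int]
    out, seen = [], {answer}
    for w in want_int:
        while True:
            cand = answer + rng.choice([-1, 1]) * rng.uniform(0.5, 15.0)
            cand = round(cand) if w else round(cand, 4)  # int vs 4-d.p. float
            if cand not in seen:  # never duplicate the answer or another option
                seen.add(cand)
                out.append(cand)
                break
    return out
```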

---
mathmark.jsonl ADDED