

ArithMark

A procedurally generated benchmark for evaluating arithmetic reasoning in language models. Each problem presents a numeric expression and asks the model to identify the correct result.

Unlike knowledge-based benchmarks, ArithMark contains no facts a model could have memorised from pretraining. Every problem is generated fresh from random integers and operator sequences, so a model cannot pattern-match to training data — it must actually compute. This makes ArithMark a direct probe of numerical reasoning capability: the arithmetic structure that has been built into the model's weights through training, independent of world knowledge or surface-level heuristics.

Evaluation is log-likelihood multiple-choice: no chain-of-thought, no prompting tricks. Models are scored purely on how much probability they assign to the correct completion.


Benchmark Results

Evaluated using average log-likelihood over ending tokens, normalised by length. Random chance = 25%.

| Company | Model | Params | Stage 1 | Stage 2 | Stage 3 | Stage 4 | Stage 5 | Avg |
|---|---|---|---|---|---|---|---|---|
| Alibaba | Qwen2.5-3B | 3.1B | 97.30% | 75.60% | 63.90% | 55.40% | 53.50% | 69.14% |
| Alibaba | Qwen2.5-Math-1.5B | 1.5B | 97.70% | 77.90% | 63.20% | 51.00% | 44.10% | 66.78% |
| Alibaba | Qwen2.5-1.5B | 1.5B | 94.70% | 69.10% | 58.90% | 49.90% | 46.80% | 63.88% |
| Alibaba | Qwen2.5-Coder-1.5B | 1.5B | 93.00% | 65.20% | 57.60% | 51.30% | 49.20% | 63.26% |
| Alibaba | Qwen2.5-0.5B | 494M | 85.50% | 57.60% | 48.20% | 40.10% | 38.80% | 54.04% |
| HuggingFace | SmolLM2-1.7B | 1.7B | 82.70% | 54.50% | 44.90% | 35.20% | 33.30% | 50.12% |
| EleutherAI | pythia-2.8b | 2.8B | 52.40% | 42.20% | 33.30% | 28.90% | 27.20% | 36.80% |
| OpenAI | gpt2-xl | 1.6B | 39.90% | 38.30% | 36.10% | 32.80% | 33.50% | 36.12% |
| OpenAI | gpt2-medium | 345M | 37.70% | 37.20% | 33.60% | 30.90% | 32.80% | 34.44% |
| Axiomic Labs | GPT-X2-125M (unreleased) | 125M | 39.10% | 39.20% | 34.80% | 28.20% | 29.50% | 34.16% |
| HuggingFace | SmolLM2-135M | 135M | 41.20% | 37.40% | 32.80% | 28.70% | 25.90% | 33.20% |
| HuggingFace | SmolLM-135M | 135M | 42.70% | 36.50% | 32.60% | 25.30% | 23.90% | 32.20% |
| OpenAI | gpt2 | 124M | 35.70% | 32.80% | 31.90% | 28.00% | 29.60% | 31.60% |
| Meta | MobileLLM-125M | 125M | 35.90% | 35.00% | 32.90% | 27.40% | 24.60% | 31.16% |
| Axiomic Labs | GPT-X-125M | 125M | 38.10% | 33.20% | 29.00% | 26.90% | 25.40% | 30.52% |
| EleutherAI | pythia-31m | 30M | 36.40% | 31.00% | 29.30% | 27.80% | 26.60% | 30.22% |
| EleutherAI | pythia-160m | 162M | 35.80% | 30.30% | 28.00% | 28.20% | 27.00% | 29.86% |
| EleutherAI | pythia-70m | 70M | 36.30% | 30.10% | 28.50% | 27.30% | 26.90% | 29.82% |
| EleutherAI | pythia-14m | 14M | 34.50% | 29.00% | 26.30% | 24.40% | 24.00% | 27.64% |

Task Format

Each problem is presented as a numeric expression followed by four answer choices — a mix of integers and floats so the answer type is never a giveaway:

-13 * 3 + -1 =
A. -40
B. -4.2317
C. 26
D. -33.8851

Evaluation is log-likelihood multiple-choice: the model scores each continuation and the highest wins. No generation, no chain-of-thought.
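The scoring rule can be sketched as follows, assuming per-token log-probabilities for each answer continuation have already been computed by the model (the function and variable names here are illustrative, not the actual benchmark harness):

```python
def pick_answer(option_logprobs):
    """Length-normalised multiple choice: for each candidate answer,
    average the per-token log-probabilities of its continuation and
    return the index of the highest-scoring option."""
    scores = [sum(lps) / len(lps) for lps in option_logprobs]
    return scores.index(max(scores))

# Four candidate continuations with per-token log-probs; option 1 wins
# with the highest average log-probability (-0.6 per token).
print(pick_answer([[-2.0, -3.0], [-0.5, -0.7, -0.6], [-4.0], [-1.5, -2.5]]))  # → 1
```

Normalising by length keeps longer continuations (e.g. multi-digit floats) from being penalised simply for having more tokens.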


Difficulty Stages

Problems are organized into 5 stages of increasing complexity by operand count. All stages draw from {+, -, *, /}.

| Stage | Operands | Operators | Example |
|---|---|---|---|
| 1 | 2 | 1 | 5 * -3 = |
| 2 | 3 | 2 | 7 - 2 * 4 = |
| 3 | 4 | 3 | -6 + 10 / 2 - 1 = |
| 4 | 5 | 4 | 3 * -2 + 8 - 1 / 4 = |
| 5 | 6 | 5 | -4 + 2 * 7 - 3 / 1 + 6 = |

1,000 problems per stage, 5,000 total. Division by zero is rejected and regenerated.
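A minimal generation sketch under the scheme described above (left-to-right evaluation, operands in [-20, 20], rejection of division by zero); function names are illustrative, not from the actual generator:

```python
import random

OPS = "+-*/"

def eval_left_to_right(operands, operators):
    """Evaluate strictly left to right, ignoring operator precedence."""
    acc = operands[0]
    for op, x in zip(operators, operands[1:]):
        if op == "+": acc += x
        elif op == "-": acc -= x
        elif op == "*": acc *= x
        else: acc /= x  # may raise ZeroDivisionError
    return acc

def gen_problem(stage, rng=random):
    """Stage s uses s + 1 operands and s operators, operands in [-20, 20]."""
    while True:
        operands = [rng.randint(-20, 20) for _ in range(stage + 1)]
        operators = [rng.choice(OPS) for _ in range(stage)]
        try:
            answer = eval_left_to_right(operands, operators)
        except ZeroDivisionError:
            continue  # division by zero: reject and regenerate
        answer = round(answer, 4)
        if isinstance(answer, float) and answer.is_integer():
            answer = int(answer)  # cast exact integer floats to int
        tokens = [str(operands[0])]
        for op, x in zip(operators, operands[1:]):
            tokens += [op, str(x)]
        return " ".join(tokens) + " =", answer

# Left-to-right: the stage-2 example "7 - 2 * 4" is (7 - 2) * 4 = 20, not -1.
print(eval_left_to_right([7, 2, 4], ["-", "*"]))  # → 20
```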


Dataset Format

{
  "id": "math_s2_00000",
  "stage": 2,
  "n_operands": 3,
  "expression": "-13 * 3 + -1",
  "options": [-40, -4.2317, 26, -33.8851],
  "answer_index": 0,
  "answer": -40
}
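Each record's fields are mutually consistent: `answer` equals the option selected by `answer_index`, and `n_operands` is always `stage + 1` (see the stage table). A hypothetical sanity-check, not part of the dataset tooling:

```python
def check_record(rec):
    """Verify internal consistency of one ArithMark record."""
    assert rec["options"][rec["answer_index"]] == rec["answer"]
    assert rec["n_operands"] == rec["stage"] + 1
    assert len(rec["options"]) == 4
    return True

record = {
    "id": "math_s2_00000",
    "stage": 2,
    "n_operands": 3,
    "expression": "-13 * 3 + -1",
    "options": [-40, -4.2317, 26, -33.8851],
    "answer_index": 0,
    "answer": -40,
}
print(check_record(record))  # → True
```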

Design Decisions

Left-to-right evaluation — expressions are evaluated left-to-right, with no operator precedence.

Integer-safe results — float results are rounded to 4 decimal places; exact integer floats are cast to int.

Operand range — integers sampled from [-20, 20] across all stages.

Distractor generation — each wrong answer is generated by adding a random float offset (±0.5–15.0) to the correct answer. Integer answers get 2 integer distractors and 1 float; float answers get 2 float distractors and 1 integer. This ensures the answer type is never uniquely identifiable.
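A sketch of that distractor scheme (a simplified illustration, not the exact generator — collision handling here just resamples):

```python
import random

def make_options(answer, rng=random):
    """Build four choices: the correct answer plus 3 distractors made
    by adding a random offset in ±[0.5, 15.0]. An integer answer gets
    2 integer distractors and 1 float; a float answer gets 2 float
    distractors and 1 integer, so type never gives away the answer."""
    n_int = 2 if isinstance(answer, int) else 1
    distractors = set()
    while len(distractors) < 3:
        offset = rng.uniform(0.5, 15.0) * rng.choice([-1, 1])
        d = answer + offset
        # The first n_int distractors are integers, the rest floats.
        d = round(d) if len(distractors) < n_int else round(d, 4)
        if d != answer:
            distractors.add(d)
    options = list(distractors) + [answer]
    rng.shuffle(options)
    return options, options.index(answer)

opts, idx = make_options(-40, random.Random(0))
print(opts[idx])  # → -40
```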

