Dataset columns, with types and observed value ranges as shown in the dataset viewer:

| Column | Type | Values / range |
|---|---|---|
| agent | large_string | 5 classes |
| agent_name | large_string | 5 classes |
| average_action_count | float64 | 0–55.7 |
| average_agent_cost | float64 | 0–22.8 |
| average_benchmark_cost | float64 | 0–0.12 |
| average_invalid_action_count | float64 | 0–39.1 |
| average_invalid_action_percent | float64 | 0–75.7 |
| average_score | float64 | 0–0.88 |
| average_steps | float64 | 0–51.6 |
| benchmark | large_string | 6 classes |
| benchmark_name | large_string | 4 classes |
| benchmark_score | float64 | 0–0.89 |
| completed_sessions | float64 | 0–100 |
| incomplete_sessions | float64 | 0–100 |
| missing_sessions | float64 | 0–0 |
| model | large_string | 3 classes |
| model_name | large_string | 3 classes |
| percent_error | float64 | 0–1 |
| percent_finished | float64 | 0–1 |
| percent_finished_successful | float64 | 0–1 |
| percent_finished_unsuccessful | float64 | 0–0.78 |
| percent_successful | float64 | 0–1 |
| percent_unfinished | float64 | 0–0.37 |
| planned_sessions | int64 | 50–100 |
| subset_name | large_string | 4 classes |
| successful_sessions | int64 | 0–100 |
| total_agent_cost | float64 | 0–2.28k |
| total_benchmark_cost | float64 | 0–12.2 |
| total_run_cost | float64 | 0–2.28k |
| total_sessions | int64 | 50–100 |
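Given rows shaped like the columns above, per-agent averages of `benchmark_score` can be computed with plain Python. This is a minimal sketch, not part of the dataset tooling: the rows are represented as dicts, the sample values are copied from rows in this dataset (AppWorld and SWE-bench with claude-opus-4-5), and the `leaderboard` helper name is ours.

```python
from collections import defaultdict

# Sample rows following the schema above; benchmark_score values are
# copied from this dataset's AppWorld and SWE-bench rows.
rows = [
    {"agent_name": "Claude Code CLI", "benchmark_name": "AppWorld",
     "model_name": "openai/aws/claude-opus-4-5", "benchmark_score": 0.66},
    {"agent_name": "Claude Code CLI", "benchmark_name": "SWE-bench",
     "model_name": "openai/aws/claude-opus-4-5", "benchmark_score": 0.742268},
    {"agent_name": "OpenAI Solo", "benchmark_name": "AppWorld",
     "model_name": "openai/aws/claude-opus-4-5", "benchmark_score": 0.68},
    {"agent_name": "OpenAI Solo", "benchmark_name": "SWE-bench",
     "model_name": "openai/aws/claude-opus-4-5", "benchmark_score": 0.807229},
]

def leaderboard(rows):
    """Mean benchmark_score per agent, sorted best-first."""
    scores = defaultdict(list)
    for r in rows:
        scores[r["agent_name"]].append(r["benchmark_score"])
    return sorted(
        ((agent, sum(v) / len(v)) for agent, v in scores.items()),
        key=lambda kv: kv[1],
        reverse=True,
    )

for agent, score in leaderboard(rows):
    print(f"{agent}: {score:.3f}")
```

The same aggregation applied to all 90 rows would reproduce a simple overall ranking; a real analysis would likely also group by `model_name` and `benchmark_name`.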
**AppWorld** (`appworld_test_normal`), `benchmark_score` per agent and model (the full rows additionally include cost, step, and session-count columns):

| Agent | claude-opus-4-5 | gpt-5.2-2025-12-11 | gemini-3-pro-preview |
|---|---|---|---|
| Claude Code CLI | 0.66 | 0 | 0.36 |
| OpenAI Solo | 0.68 | 0 | 0.582 |
| SmolAgents Code | 0.7 | 0.071 | 0.13 |
| LiteLLM Tool Calling | 0.61 | 0 | 0.505 |
| LiteLLM Tool Calling with Shortlisting | 0.64 | 0.22 | 0.55 |
**BrowseCompPlus** (`browsecompplus`), `benchmark_score` per agent and model:

| Agent | claude-opus-4-5 | gpt-5.2-2025-12-11 | gemini-3-pro-preview |
|---|---|---|---|
| Claude Code CLI | 0.529412 | 0.43 | 0.51 |
| OpenAI Solo | 0.61 | 0.48 | 0.333333 |
| SmolAgents Code | 0.61 | 0.26 | 0.57 |
| LiteLLM Tool Calling | 0.49 | 0.46 | 0.48 |
| LiteLLM Tool Calling with Shortlisting | 0.49 | 0.46 | 0.48 |
**SWE-bench** (`swebench`), `benchmark_score` per agent and model:

| Agent | claude-opus-4-5 | gpt-5.2-2025-12-11 | gemini-3-pro-preview |
|---|---|---|---|
| Claude Code CLI | 0.742268 | 0.58 | 0.67 |
| OpenAI Solo | 0.807229 | 0.545455 | 0.723404 |
| SmolAgents Code | 0.65 | 0.525253 | 0.757576 |
| LiteLLM Tool Calling | 0.606061 | 0.57 | 0.71 |
| LiteLLM Tool Calling with Shortlisting | 0.606061 | 0.57 | 0.71 |
**Tau Bench 2, airline** (`tau2_airline`), `benchmark_score` per agent and model:

| Agent | claude-opus-4-5 | gpt-5.2-2025-12-11 | gemini-3-pro-preview |
|---|---|---|---|
| Claude Code CLI | 0.66 | 0.48 | 0.7 |
| OpenAI Solo | 0.74 | 0.5 | 0.62 |
| SmolAgents Code | 0.72 | 0.6 | 0.68 |
| LiteLLM Tool Calling | 0.66 | 0.54 | 0.7 |
| LiteLLM Tool Calling with Shortlisting | 0.66 | 0.54 | 0.7 |
**Tau Bench 2, retail** (`tau2_retail`), `benchmark_score` per agent and model:

| Agent | claude-opus-4-5 | gpt-5.2-2025-12-11 | gemini-3-pro-preview |
|---|---|---|---|
| Claude Code CLI | 0.83 | 0.51 | 0.780488 |
| OpenAI Solo | 0.85 | 0.535354 | 0.73 |
| SmolAgents Code | 0.78 | 0.68 | 0.757576 |
| LiteLLM Tool Calling | 0.78 | 0.73 | 0.82 |
| LiteLLM Tool Calling with Shortlisting | 0.78 | 0.73 | 0.82 |
**Tau Bench 2, telecom** (`tau2_telecom`), `benchmark_score` per agent and model:

| Agent | claude-opus-4-5 | gpt-5.2-2025-12-11 | gemini-3-pro-preview |
|---|---|---|---|
| Claude Code CLI | 0.76 | 0.55 | 0.685185 |
| OpenAI Solo | 0.84 | 0.53 | 0.88764 |
| SmolAgents Code | 0.58 | 0.71 | 0.88 |
| LiteLLM Tool Calling | 0.76 | 0.535354 | 0.73 |
| LiteLLM Tool Calling with Shortlisting | 0.76 | 0.535354 | 0.73 |

# Open Agent Leaderboard Results

Detailed evaluation results for general-purpose AI agents across diverse real-world benchmarks, with no domain-specific tuning.

## Benchmarks

| Benchmark | Description |
|---|---|
| AppWorld | App-based task completion in simulated smartphone environments |
| BrowseComp+ | Web browsing and complex information retrieval |
| SWE-bench | Software engineering issue resolution on real GitHub repos |
| TauBench-Airline | Customer service agent evaluation (airline domain) |
| TauBench-Retail | Customer service agent evaluation (retail domain) |
| TauBench-Telecom | Customer service agent evaluation (telecom domain) |

## Agents Evaluated

| Agent | Framework |
|---|---|
| Claude Code | claude-code |
| OpenAI Solo | openai-agents-python |
| Smolagent | smolagents |
| React | litellm |
| React + Shortlisting | litellm |

## Models

Results are reported for each agent × model combination across three models: Claude Opus 4.5, GPT-5.2, and Gemini 3 Pro.
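Because each row also records `total_run_cost`, agent × model runs can be compared on cost efficiency as well as raw score. The sketch below is illustrative only: the `score_per_dollar` helper is ours, not part of the dataset or its tooling, and the sample values are `benchmark_score` and `total_run_cost` figures copied from this dataset's AppWorld rows for claude-opus-4-5.

```python
# (agent_name, benchmark_score, total_run_cost) triples copied from the
# AppWorld / claude-opus-4-5 rows of this dataset.
runs = [
    ("Claude Code CLI", 0.66, 1308.379215),
    ("LiteLLM Tool Calling", 0.61, 1132.465995),
    ("LiteLLM Tool Calling with Shortlisting", 0.64, 343.320175),
]

def score_per_dollar(score, cost):
    """Benchmark score divided by total run cost; NaN when cost is zero."""
    return score / cost if cost else float("nan")

# Rank runs by cost efficiency, most points per dollar first.
for agent, score, cost in sorted(
        runs, key=lambda r: score_per_dollar(r[1], r[2]), reverse=True):
    print(f"{agent}: {1000 * score_per_dollar(score, cost):.2f} points per $1k")
```

On these sample rows, shortlisting trades a small amount of score for a much lower run cost, which the per-dollar ranking makes explicit.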

## Schema

See CONTRIBUTING.md for submission instructions and results-README.md for full column descriptions.

Note: dataset_info stats in this file are auto-generated by scripts/build_data.py and reflect the last build.
