Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed
Error code: DatasetGenerationError
Exception: TypeError
Message: Couldn't cast array of type
struct<leaderboard: struct<acc_norm,none: double, acc_norm_stderr,none: double, prompt_level_loose_acc,none: double, prompt_level_loose_acc_stderr,none: double, inst_level_strict_acc,none: double, inst_level_strict_acc_stderr,none: string, exact_match,none: double, exact_match_stderr,none: double, inst_level_loose_acc,none: double, inst_level_loose_acc_stderr,none: string, acc,none: double, acc_stderr,none: double, prompt_level_strict_acc,none: double, prompt_level_strict_acc_stderr,none: double, alias: string>, leaderboard_bbh: struct<acc_norm,none: double, acc_norm_stderr,none: double, alias: string>, leaderboard_bbh_boolean_expressions: struct<acc_norm,none: double, acc_norm_stderr,none: double, alias: string>, leaderboard_bbh_causal_judgement: struct<acc_norm,none: double, acc_norm_stderr,none: double, alias: string>, leaderboard_bbh_date_understanding: struct<acc_norm,none: double, acc_norm_stderr,none: double, alias: string>, leaderboard_bbh_disambiguation_qa: struct<acc_norm,none: double, acc_norm_stderr,none: double, alias: string>, leaderboard_bbh_formal_fallacies: struct<acc_norm,none: double, acc_norm_stderr,none: double, alias: string>, leaderboard_bbh_geometric_shapes: struct<acc_norm,none: double, acc_norm_stderr,none: double, alias: string>, leaderboard_bbh_hyperbaton: struct<acc_norm,none: double, acc_norm_stderr,none: double, alias: string>, leaderboard_bbh_logical_deduction_five_objects: struct<acc_norm,none: double, acc_norm_stderr,none: double, alias: stri
...
st_level_loose_acc_stderr,none: string, alias: string>, leaderboard_math_hard: struct<exact_match,none: double, exact_match_stderr,none: double, alias: string>, leaderboard_math_algebra_hard: struct<exact_match,none: double, exact_match_stderr,none: double, alias: string>, leaderboard_math_counting_and_prob_hard: struct<exact_match,none: double, exact_match_stderr,none: double, alias: string>, leaderboard_math_geometry_hard: struct<exact_match,none: double, exact_match_stderr,none: double, alias: string>, leaderboard_math_intermediate_algebra_hard: struct<exact_match,none: double, exact_match_stderr,none: double, alias: string>, leaderboard_math_num_theory_hard: struct<exact_match,none: double, exact_match_stderr,none: double, alias: string>, leaderboard_math_prealgebra_hard: struct<exact_match,none: double, exact_match_stderr,none: double, alias: string>, leaderboard_math_precalculus_hard: struct<exact_match,none: double, exact_match_stderr,none: double, alias: string>, leaderboard_mmlu_pro: struct<acc,none: double, acc_stderr,none: double, alias: string>, leaderboard_musr: struct<acc_norm,none: double, acc_norm_stderr,none: double, alias: string>, leaderboard_musr_murder_mysteries: struct<acc_norm,none: double, acc_norm_stderr,none: double, alias: string>, leaderboard_musr_object_placements: struct<acc_norm,none: double, acc_norm_stderr,none: double, alias: string>, leaderboard_musr_team_allocation: struct<acc_norm,none: double, acc_norm_stderr,none: double, alias: string>>
to
{'leaderboard': {'acc_norm,none': Value(dtype='float64', id=None), 'acc_norm_stderr,none': Value(dtype='float64', id=None), 'prompt_level_loose_acc,none': Value(dtype='float64', id=None), 'prompt_level_loose_acc_stderr,none': Value(dtype='float64', id=None), 'inst_level_strict_acc,none': Value(dtype='float64', id=None), 'inst_level_strict_acc_stderr,none': Value(dtype='string', id=None), 'exact_match,none': Value(dtype='float64', id=None), 'exact_match_stderr,none': Value(dtype='float64', id=None), 'inst_level_loose_acc,none': Value(dtype='float64', id=None), 'inst_level_loose_acc_stderr,none': Value(dtype='string', id=None), 'acc,none': Value(dtype='float64', id=None), 'acc_stderr,none': Value(dtype='float64', id=None), 'prompt_level_strict_acc,none': Value(dtype='float64', id=None), 'prompt_level_strict_acc_stderr,none': Value(dtype='float64', id=None), 'alias': Value(dtype='string', id=None)}, 'leaderboard_arc_challenge': {'acc,none': Value(dtype='float64', id=None), 'acc_stderr,none': Value(dtype='float64', id=None), 'acc_norm,none': Value(dtype='float64', id=None), 'acc_norm_stderr,none': Value(dtype='float64', id=None), 'alias': Value(dtype='string', id=None)}, 'leaderboard_bbh': {'acc_norm,none': Value(dtype='float64', id=None), 'acc_norm_stderr,none': Value(dtype='float64', id=None), 'alias': Value(dtype='string', id=None)}, 'leaderboard_bbh_boolean_expressions': {'acc_norm,none': Value(dtype='float64', id=None), 'acc_norm_stderr,none': Value(dtype='float64', id=None)
...
e(dtype='string', id=None)}, 'leaderboard_math_num_theory_hard': {'exact_match,none': Value(dtype='float64', id=None), 'exact_match_stderr,none': Value(dtype='float64', id=None), 'alias': Value(dtype='string', id=None)}, 'leaderboard_math_prealgebra_hard': {'exact_match,none': Value(dtype='float64', id=None), 'exact_match_stderr,none': Value(dtype='float64', id=None), 'alias': Value(dtype='string', id=None)}, 'leaderboard_math_precalculus_hard': {'exact_match,none': Value(dtype='float64', id=None), 'exact_match_stderr,none': Value(dtype='float64', id=None), 'alias': Value(dtype='string', id=None)}, 'leaderboard_mmlu_pro': {'acc,none': Value(dtype='float64', id=None), 'acc_stderr,none': Value(dtype='float64', id=None), 'alias': Value(dtype='string', id=None)}, 'leaderboard_musr': {'acc_norm,none': Value(dtype='float64', id=None), 'acc_norm_stderr,none': Value(dtype='float64', id=None), 'alias': Value(dtype='string', id=None)}, 'leaderboard_musr_murder_mysteries': {'acc_norm,none': Value(dtype='float64', id=None), 'acc_norm_stderr,none': Value(dtype='float64', id=None), 'alias': Value(dtype='string', id=None)}, 'leaderboard_musr_object_placements': {'acc_norm,none': Value(dtype='float64', id=None), 'acc_norm_stderr,none': Value(dtype='float64', id=None), 'alias': Value(dtype='string', id=None)}, 'leaderboard_musr_team_allocation': {'acc_norm,none': Value(dtype='float64', id=None), 'acc_norm_stderr,none': Value(dtype='float64', id=None), 'alias': Value(dtype='string', id=None)}}
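The source struct in the message is missing keys that the target schema expects (for example `leaderboard_arc_challenge`), which suggests the per-model result JSON files disagree on which subtasks and field types they contain; that is an inference from the message, not something the page states. A sketch that scans a directory of such result files (hypothetical layout: one JSON record per file, each with a nested `results` dict) and reports fields whose Python type varies across files:

```python
import json
from collections import defaultdict
from pathlib import Path

def inconsistent_fields(results_dir):
    """Map 'task.metric' -> set of Python type names seen across files.

    Returns only entries where more than one type was observed, i.e.
    the fields that would make an Arrow schema cast fail.
    """
    seen = defaultdict(set)
    for path in Path(results_dir).glob("**/*.json"):
        record = json.loads(path.read_text())
        for task, metrics in (record.get("results") or {}).items():
            if not isinstance(metrics, dict):
                seen[task].add(type(metrics).__name__)
                continue
            for key, value in metrics.items():
                seen[f"{task}.{key}"].add(type(value).__name__)
    return {field: types for field, types in seen.items() if len(types) > 1}
```

Running this over the repository's raw files would pinpoint which metric (e.g. a stderr serialized as `"N/A"` in one file and as a float in another) breaks the cast.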
Traceback: Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
writer.write_table(table)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
pa_table = table_cast(pa_table, self._schema)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
return cast_table_to_schema(table, schema)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2261, in cast_table_to_schema
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2261, in <listcomp>
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1802, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1802, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2122, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{_short_str(array.type)}\nto\n{_short_str(feature)}")
TypeError: Couldn't cast array of type
(source and target schemas omitted here; they are identical to those shown in the Message above)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1396, in compute_config_parquet_and_info_response
parquet_operations = convert_to_parquet(builder)
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1045, in convert_to_parquet
builder.download_and_prepare(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1029, in download_and_prepare
self._download_and_prepare(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1124, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1884, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2040, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
Columns and types:
- results: dict
- groups: dict
- group_subtasks: dict
- configs: dict
- versions: dict
- n-shot: dict
- higher_is_better: dict
- n-samples: dict
- config: dict
- git_hash: string
- date: float64
- pretty_env_info: string
- transformers_version: string
- upper_git_hash: null
- tokenizer_pad_token: sequence
- tokenizer_eos_token: sequence
- tokenizer_bos_token: sequence
- eot_token_id: int64
- max_length: int64
- task_hashes: dict
- model_source: string
- model_name: string
- model_name_sanitized: string
- system_instruction: null
- system_instruction_sha: null
- fewshot_as_multiturn: bool
- chat_template: string
- chat_template_sha: string
- start_time: float64
- end_time: float64
- total_evaluation_time_seconds: string
Preview row 1 (model 0-hero/Matter-0.2-7B-DPO):
- results: { "leaderboard": { "acc_norm,none": 0.34531067583344144, "acc_norm_stderr,none": 0.005162513255647064, "prompt_level_loose_acc,none": 0.27171903881700554, "prompt_level_loose_acc_stderr,none": 0.019143116099594022, "inst_level_strict_acc,none": 0.3980815347721823, "inst_level_strict_acc_stderr...
- groups: same truncated "leaderboard" dict as results
- group_subtasks: { "leaderboard_bbh": ["leaderboard_bbh_sports_understanding", "leaderboard_bbh_object_counting", "leaderboard_bbh_geometric_shapes", "leaderboard_bbh_hyperbaton", "leaderboard_bbh_disambiguation_qa", "leaderboard_bbh_logical_deduction_three_objects", "leaderboard_bbh_causal_judgement", ...
- configs: { "leaderboard_arc_challenge": null, "leaderboard_bbh_boolean_expressions": { "task": "leaderboard_bbh_boolean_expressions", "group": "leaderboard_bbh", "dataset_path": "SaylorTwift/bbh", "dataset_name": "boolean_expressions", "test_split": "test", "doc_to_text": "Q: {{input}}\nA:", "doc...
- versions: { "leaderboard_arc_challenge": null, "leaderboard_bbh_boolean_expressions": 0, "leaderboard_bbh_causal_judgement": 0, "leaderboard_bbh_date_understanding": 0, "leaderboard_bbh_disambiguation_qa": 0, "leaderboard_bbh_formal_fallacies": 0, "leaderboard_bbh_geometric_shapes": 0, "leaderboard_bbh_hyperbaton...
- n-shot: { "leaderboard": 0, "leaderboard_arc_challenge": null, "leaderboard_bbh": 3, "leaderboard_bbh_boolean_expressions": 3, "leaderboard_bbh_causal_judgement": 3, "leaderboard_bbh_date_understanding": 3, "leaderboard_bbh_disambiguation_qa": 3, "leaderboard_bbh_formal_fallacies": 3, "leaderboard_bbh_geometr...
- higher_is_better: { "leaderboard": { "acc_norm": true, "prompt_level_strict_acc": true, "inst_level_strict_acc": true, "prompt_level_loose_acc": true, "inst_level_loose_acc": true, "exact_match": true, "acc": true }, "leaderboard_arc_challenge": null, "leaderboard_bbh": { "acc_norm": true }, "...
- n-samples: { "leaderboard_arc_challenge": null, "leaderboard_musr_murder_mysteries": { "original": 250, "effective": 250 }, "leaderboard_musr_team_allocation": { "original": 250, "effective": 250 }, "leaderboard_musr_object_placements": { "original": 256, "effective": 256 }, "leaderboard_if...
- config: { "model": "hf", "model_args": "pretrained=0-hero/Matter-0.2-7B-DPO,revision=26a66f0d862e2024ce4ad0a09c37052ac36e8af6,trust_remote_code=False,dtype=bfloat16,parallelize=False", "model_num_parameters": 7241781248, "model_dtype": "torch.bfloat16", "model_revision": "26a66f0d862e2024ce4ad0a09c37052ac36e8af6", ...
- git_hash: 121ee9156f41d6409683a8a87ce7745f41b7ada9
- date: 1,723,035,419.913854
- pretty_env_info: PyTorch version: 2.3.1+cu121; Is debug build: False; CUDA used to build PyTorch: 12.1; ROCM used to build PyTorch: N/A; OS: Ubuntu 20.04.6 LTS (x86_64); GCC version: (Ubuntu 10.5.0-1ubuntu1~20.04) 10.5.0; Clang version: Could not collect; CMake version: version 3.27.7; Libc version: glibc-2.31; Python version: 3.10.14 (main, ...
- transformers_version: 4.43.3
- upper_git_hash: null
- tokenizer_pad_token: ["</s>", "2"]
- tokenizer_eos_token: ["<|im_end|>", "32000"]
- tokenizer_bos_token: ["<s>", "1"]
- eot_token_id: 32,000
- max_length: 32,768
- task_hashes: { "leaderboard_arc_challenge": null, "leaderboard_musr_murder_mysteries": "c9ff05e7808041a803457add87c9a00af786f23baf0ce760514e881e7eeab496", "leaderboard_musr_team_allocation": "7c17cc716b10c3e9b78a2b35029d587431e64d40771955bcd4e58f7fbda2eed4", "leaderboard_musr_object_placements": "3f84bd347b1caf57c418778f1b5...
- model_source: hf
- model_name: 0-hero/Matter-0.2-7B-DPO
- model_name_sanitized: 0-hero__Matter-0.2-7B-DPO
- system_instruction: null
- system_instruction_sha: null
- fewshot_as_multiturn: true
- chat_template: {% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% set system_message = 'You are a helpful assistant.' %}{% endif %}{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false...
- chat_template_sha: f02c534193010c4bd1e40fe3dd501147fa46cebadb0769d97217a2c785028783
- start_time: 674,865.660168
- end_time: 676,318.133889
- total_evaluation_time_seconds: 1452.4737208929146
Preview row 2 (model 01-ai/Yi-1.5-34B-32K):
- results: {"leaderboard":{"acc_norm,none":0.5616484630109222,"acc_norm_stderr,none":0.004912003765119577,"prom(...TRUNCATED)
- groups: same truncated "leaderboard" dict as results
- group_subtasks: {"leaderboard_bbh":["leaderboard_bbh_sports_understanding","leaderboard_bbh_tracking_shuffled_object(...TRUNCATED)
- configs: {"leaderboard_arc_challenge":{"task":"leaderboard_arc_challenge","group":["leaderboard_reasoning"],"(...TRUNCATED)
- versions: {"leaderboard_arc_challenge":1.0,"leaderboard_bbh_boolean_expressions":0.0,"leaderboard_bbh_causal_j(...TRUNCATED)
- n-shot: {"leaderboard":5,"leaderboard_arc_challenge":5,"leaderboard_bbh":3,"leaderboard_bbh_boolean_expressi(...TRUNCATED)
- higher_is_better: {"leaderboard":{"acc_norm":true,"prompt_level_strict_acc":true,"inst_level_strict_acc":true,"prompt_(...TRUNCATED)
- n-samples: {"leaderboard_arc_challenge":{"original":1172,"effective":1172},"leaderboard_musr_murder_mysteries":(...TRUNCATED)
- config: {"model":"hf","model_args":"pretrained=01-ai/Yi-1.5-34B-32K,revision=2c03a29761e4174f20347a60fbe229b(...TRUNCATED)
- git_hash: 8f1dc26
- date: 1,718,619,571.767187
- pretty_env_info: "PyTorch version: 2.3.1+cu121\nIs debug build: False\nCUDA used to build PyTorch: 12.1\nROCM used to(...TRUNCATED)
- transformers_version: 4.41.2
- upper_git_hash: null
- tokenizer_pad_token: null
- tokenizer_eos_token: null
- tokenizer_bos_token: null
- eot_token_id: null
- max_length: null
- task_hashes: {"leaderboard_arc_challenge":"e7bcf1ccfe7a1ea81095f1363866caa59b2be08f5d399b1c3aa8ab08ebde5109","lea(...TRUNCATED)
- model_source: hf
- model_name: 01-ai/Yi-1.5-34B-32K
- model_name_sanitized: 01-ai__Yi-1.5-34B-32K
- system_instruction: null
- system_instruction_sha: null
- fewshot_as_multiturn: null
- chat_template: null
- chat_template_sha: null
- start_time: 1,782,439.526497
- end_time: 1,810,024.994195
- total_evaluation_time_seconds: 27585.46769775101
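Each `results` cell in the preview is a nested dict keyed by task, with metric keys like `acc_norm,none` that carry the lm-eval filter name after the comma. A minimal sketch, assuming that layout, that flattens one such record into (task, metric, filter, value) rows:

```python
def flatten_results(record):
    """Flatten one results record into a list of row dicts."""
    rows = []
    for task, metrics in record.get("results", {}).items():
        if not isinstance(metrics, dict):
            continue  # some subtask entries may be null
        for key, value in metrics.items():
            if key == "alias":
                continue  # display name, not a metric
            metric, _, filter_name = key.partition(",")
            rows.append({"task": task, "metric": metric,
                         "filter": filter_name, "value": value})
    return rows
```

Because this normalizes every record to the same four flat columns regardless of which subtasks are present, it sidesteps the struct-schema mismatch that breaks the viewer.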
README.md exists but content is empty.
Downloads last month: 3