VIBench Preliminary Benchmark
This is the preliminary version of VIBench, a benchmark for measuring Vertical Integration Bias (VIB) in LLM code generation. For the current version, see melihcatal/vibench.
Structure
- 24 scenarios across infrastructure and AI/ML domains
- 3 modalities: NLI (natural language instruction), FIM (fill-in-the-middle), CS-Documented (documented reference)
- 4 subtasks per scenario
- 288 total prompts
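As a sanity check, the prompt count follows directly from the structure above: each of the 24 scenarios is posed in 3 modalities, with 4 subtasks per scenario.

```python
scenarios = 24   # infrastructure and AI/ML scenarios
modalities = 3   # NLI, FIM, CS-Documented
subtasks = 4     # subtasks per scenario

total_prompts = scenarios * modalities * subtasks
print(total_prompts)  # 288
```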
Usage
```python
from datasets import load_dataset

ds = load_dataset("melihcatal/vibench-preliminary")
```
Citation
@misc{vibench2026,
title={VIBench: Measuring Vertical Integration Bias in LLM Code Generation},
author={Matos, Tiago and Catal, Melih},
year={2026},
url={https://huggingface.co/datasets/melihcatal/vibench}
}