Social Attribution QA Benchmark
The Social Attribution QA Benchmark is a derived benchmark for provenance-aware social attribution question answering over Fediverse data. It is designed to evaluate whether a system can identify who said a statement, what a person said, and whether attribution remains correct under entity, temporal, social, and collaborative constraints.
This release contains 1,200 four-option multiple-choice questions organized
into eight task files. The benchmark is derived from the source dataset
FediData and is released as a benchmark artifact rather than as a raw
social-media dump.
This release is evaluation-oriented and is distributed as task files rather than as train/dev/test splits.
Dataset Summary
The benchmark is organized into two task families:
- WSW: Who Said What
- WDWS: What Did Who Say
Each JSON file contains a top-level dictionary with three fields:
- metadata: file-level provenance and construction metadata
- tasks: the benchmark instances for one task type
- statistics: counts and difficulty summaries for that task file
Data Files
| File | Task | Questions |
|---|---|---|
| WSW_DIRECT.json | direct attribution | 200 |
| WSW_ENTITY.json | entity-constrained attribution | 200 |
| WSW_ASSOC.json | association reasoning | 100 |
| WSW_TEMPORAL.json | temporal attribution | 100 |
| WDWS_DIRECT.json | direct attribution | 200 |
| WDWS_ENTITY.json | entity-constrained attribution | 200 |
| WDWS_COLLAB.json | collaborative reasoning | 100 |
| WDWS_TEMPORAL.json | temporal attribution | 100 |
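As a quick sanity check, the per-file counts above can be tallied in Python. The file names and counts below are taken directly from the table; the helper `family_totals` is our own illustration, not part of the benchmark code:

```python
# Expected question counts per task file, as listed in the table above.
EXPECTED_COUNTS = {
    "WSW_DIRECT.json": 200,
    "WSW_ENTITY.json": 200,
    "WSW_ASSOC.json": 100,
    "WSW_TEMPORAL.json": 100,
    "WDWS_DIRECT.json": 200,
    "WDWS_ENTITY.json": 200,
    "WDWS_COLLAB.json": 100,
    "WDWS_TEMPORAL.json": 100,
}

def family_totals(counts: dict) -> dict:
    """Sum question counts per task family (WSW vs. WDWS),
    using the prefix before the first underscore as the family name."""
    totals = {}
    for filename, n in counts.items():
        family = filename.split("_", 1)[0]
        totals[family] = totals.get(family, 0) + n
    return totals

totals = family_totals(EXPECTED_COUNTS)
print(totals)                # {'WSW': 600, 'WDWS': 600}
print(sum(totals.values()))  # 1200
```

Each family contributes 600 questions, for the 1,200-question total stated above.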
Data Structure
Most instances contain the following fields:
- question_id: unique question identifier
- task_id: canonical task identifier
- question: question text
- options: four answer choices
- answer: gold option label, such as A
- answer_text: gold answer in text form
- answer_path: supporting provenance information for the gold answer
- metadata: instance-level construction metadata
- difficulty: difficulty annotation and score
The collaborative file WDWS_COLLAB.json additionally includes a
correct_answer field, and its difficulty annotation is not populated in the
same way as in the other task files.
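Because of that schema difference, evaluation code should read the gold label defensively. A minimal sketch, assuming only the field names described above (the helper name `gold_label` is ours):

```python
def gold_label(instance: dict) -> str:
    """Return the gold option letter for a benchmark instance,
    preferring `answer` and falling back to `correct_answer`
    (the field used by WDWS_COLLAB.json)."""
    label = instance.get("answer") or instance.get("correct_answer")
    if label is None:
        raise KeyError("instance has neither 'answer' nor 'correct_answer'")
    return label

# A typical instance vs. a collaborative-task instance.
print(gold_label({"answer": "A"}))          # A
print(gold_label({"correct_answer": "C"}))  # C
```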
Example
```python
import json

with open("WSW_DIRECT.json", "r", encoding="utf-8") as f:
    data = json.load(f)

# Pick the first task type and its first instance.
task_name = next(iter(data["tasks"]))
sample = data["tasks"][task_name][0]

print(task_name)
print(sample["question"])
print(sample["options"])
print(sample["answer"], sample["answer_text"])
```
Example task instance:

```json
{
  "question_id": "WSW_T1_11c48887e878431b",
  "task_id": "WSW_T1_DIRECT",
  "question": "Who said: 'Smoking damages your lungs.'?",
  "options": {
    "A": "55ee6c1d@mastodon.social",
    "B": "bf0398ec@pouet.chapril.org",
    "C": "a25f92ab@mastodon.nl",
    "D": "ca4390cb@octodon.social"
  },
  "answer": "A",
  "answer_text": "55ee6c1d@mastodon.social"
}
```
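For model evaluation, an instance like the one above can be rendered into a four-option multiple-choice prompt. A minimal sketch; the prompt wording is our own illustration, not a template shipped with the benchmark:

```python
def format_prompt(instance: dict) -> str:
    """Render a benchmark instance as a multiple-choice prompt,
    listing options in A-D order."""
    lines = [instance["question"]]
    for label in sorted(instance["options"]):
        lines.append(f"{label}. {instance['options'][label]}")
    lines.append("Answer with a single letter (A-D).")
    return "\n".join(lines)

# The example instance from above, reduced to the fields the prompt needs.
instance = {
    "question": "Who said: 'Smoking damages your lungs.'?",
    "options": {
        "A": "55ee6c1d@mastodon.social",
        "B": "bf0398ec@pouet.chapril.org",
        "C": "a25f92ab@mastodon.nl",
        "D": "ca4390cb@octodon.social",
    },
}
prompt = format_prompt(instance)
print(prompt)
```

A model's predicted letter can then be compared directly against the instance's gold answer field.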
Source Data
This benchmark is derived from the FediData Fediverse corpus:
- FediData: https://zenodo.org/records/15621244
This dataset repository does not redistribute the raw source-data dump. If you want to rebuild the benchmark from source, use the construction code in the project repository and place the downloaded FediData release under the expected build directory.
Related Resources
The full project repository includes:
- the released benchmark files
- the benchmark-construction pipeline
- baseline implementations
- the ATLAS method implementation
Project repository:
Intended Use
This release is intended for benchmark evaluation and method comparison. It is most suitable for:
- provenance-aware social attribution QA
- retrieval and reasoning over Fediverse-derived content
- comparison between graph-based, retrieval-based, and agentic QA methods
License
Apache License 2.0.
