# Coopetition-Gym v1, Training Results
Training results from the Coopetition-Gym v1 benchmark campaign: 17,930 JSON files, each recording the outcome of training one of 16 reinforcement learning algorithms, 7 game-theoretic oracles, 2 heuristic baselines, or 101 constant-action policies on one of 20 mixed-motive multi-agent environments, under one of three reward configurations (private, integrated, cooperative), with one of seven random seeds.
Companion technical report: Coopetition-Gym v1: A Formally Grounded Platform for Mixed-Motive Multi-Agent Reinforcement Learning under Strategic Coopetition. Pant and Yu, arXiv:2605.02063, 2026.
Companion code: https://github.com/vikpant/strategic-coopetition
Companion audit dataset: https://huggingface.co/datasets/vikpant/coopetition-gym-logs
## Quick Start

```shell
pip install huggingface_hub
huggingface-cli download vikpant/coopetition-gym-logs \
    --repo-type dataset --local-dir data/ \
    --include "training_runs/*"
```
The training corpus is delivered as 950 JSONL shards (`training_runs_NNNN.jsonl`, ~5 MB each) under `data/training_runs/`. Each line of each shard is one training-run record (one JSON object per training experiment). Shards can be read line by line without an extraction step:
```python
import json
from pathlib import Path

for shard in sorted(Path("data/training_runs").glob("*.jsonl")):
    with open(shard) as fh:
        for line in fh:
            record = json.loads(line)
            # record contains: algorithm, environment, training_seed,
            # status, training_time_seconds, evaluation_time_seconds,
            # metrics, timestamp, gpu_id, tr_mode
```
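The shard-reading loop above can be wrapped in a small reusable helper. The sketch below demonstrates it on a synthetic two-shard directory; the records written there are invented for illustration and are not values from the corpus:

```python
import json
import tempfile
from pathlib import Path

def iter_records(root):
    """Yield one training-run record per JSONL line across all shards."""
    for shard in sorted(Path(root).glob("*.jsonl")):
        with open(shard) as fh:
            for line in fh:
                line = line.strip()
                if line:
                    yield json.loads(line)

# Demonstration on a synthetic directory (hypothetical records):
with tempfile.TemporaryDirectory() as tmp:
    recs = [
        {"algorithm": "ISAC", "environment": "TrustDilemma-v0", "training_seed": 99},
        {"algorithm": "COMA", "environment": "TrustDilemma-v0", "training_seed": 100},
    ]
    for i, r in enumerate(recs):
        (Path(tmp) / f"training_runs_{i:04d}.jsonl").write_text(json.dumps(r) + "\n")
    loaded = list(iter_records(tmp))
    print(len(loaded))  # 2
```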
## Schema

Each JSON file has the following top-level structure:

| Field | Type | Description |
|---|---|---|
| `algorithm` | str | Algorithm name (e.g., `ISAC`, `COMA`) |
| `environment` | str | Environment ID (e.g., `TrustDilemma-v0`) |
| `training_seed` | int | Seed in {99, 100, 101, 102, 103, 104, 105} |
| `status` | str | `success` for released files |
| `training_time_seconds` | float | Wall-clock training time |
| `evaluation_time_seconds` | float | Wall-clock evaluation time |
| `metrics` | object | Aggregated per-episode and per-seed metrics |
| `timestamp` | str | ISO 8601 completion timestamp |
| `gpu_id` | int | GPU index used (−1 for CPU) |
| `tr_mode` | str | TR tier label (`tr1`, `tr2`, `tr3`, `tr4`) |
The nested `metrics` object contains `mean_return`, `std_return`, `mean_cooperation_rate`, `mean_final_trust`, `training_returns`, `training_timesteps`, `training_metrics` (gradient-level diagnostics by step), and TR-tier-specific `tr_metrics`.
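As a sketch of working with this schema, the record below follows the field names in the table and the `metrics` description, but every numeric value is invented for illustration:

```python
import statistics

# A hypothetical record following the schema above (all values invented):
record = {
    "algorithm": "ISAC",
    "environment": "TrustDilemma-v0",
    "training_seed": 99,
    "status": "success",
    "metrics": {
        "mean_return": 12.4,
        "std_return": 1.7,
        "mean_cooperation_rate": 0.62,
        "mean_final_trust": 0.71,
        "training_returns": [8.1, 10.3, 12.4],
        "training_timesteps": [10_000, 20_000, 30_000],
    },
}

# Pull the headline metric and a simple learning-curve summary:
final_return = record["metrics"]["mean_return"]
curve_mean = statistics.fmean(record["metrics"]["training_returns"])
print(final_return, round(curve_mean, 2))  # 12.4 10.27
```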
See the Croissant metadata for the complete machine-readable schema with JSONPath extractions.
## Known Data Characteristics

- **Retry-cycle deduplication.** The training corpus shipped here is the deduplicated form of the underlying experimental campaign. The campaign produced 27,613 raw run records; 9,683 of these were retry-cycle duplicates (the same `(algorithm, environment, seed, tr_mode)` cell completed multiple times because the experiment runner retried on transient cloud-instance failures). The deduplication policy is to keep the latest-timestamp record per cell, which retains the run that completed successfully on its final attempt; the resulting corpus is 17,930 unique cells. All paper analyses, tables, figures, and headline counts are computed on this 17,930-record deduplicated dataset. The deduplication step is a single logical operation applied uniformly across all 950 JSONL shards, which is why every shard's most recent commit message reads `Deduplicate retry runs: keep latest-timestamp per (algo, env, seed, tr_mode); 27613 -> 17930 records`.
- **Retained NaN-return records.** 62 NaN-return files from documented training instabilities are retained for transparency: 21 MASAC on TR-3 environments under baseline, 21 MADDPG/MATD3/M3DDPG on ApacheProject-v0 under cooperative reward, and 20 MADDPG on AppleAppStore-v0 in the network sensitivity analysis.
- **MeanFieldAC coverage.** MeanFieldAC is evaluated only on environments with ≥ 3 agents (12 of 20 environments); the mean-field approximation is degenerate for two-agent settings.
- **Seeds are arbitrary but fixed.** The 7 seeds (99–105) are a design decision, not a sample from any population.
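The keep-latest-per-cell deduplication policy can be sketched in a few lines. The records below are hypothetical retry attempts, and the sketch assumes timestamps share a uniform ISO 8601 format, so plain string comparison orders them correctly:

```python
def deduplicate(records):
    """Keep the latest-timestamp record per
    (algorithm, environment, training_seed, tr_mode) cell."""
    latest = {}
    for rec in records:
        key = (rec["algorithm"], rec["environment"],
               rec["training_seed"], rec["tr_mode"])
        # Uniform ISO 8601 timestamps compare correctly as strings.
        if key not in latest or rec["timestamp"] > latest[key]["timestamp"]:
            latest[key] = rec
    return list(latest.values())

# Two retry attempts of the same cell (hypothetical timestamps):
runs = [
    {"algorithm": "ISAC", "environment": "TrustDilemma-v0",
     "training_seed": 99, "tr_mode": "tr2", "timestamp": "2026-01-02T10:00:00"},
    {"algorithm": "ISAC", "environment": "TrustDilemma-v0",
     "training_seed": 99, "tr_mode": "tr2", "timestamp": "2026-01-03T09:30:00"},
]
kept = deduplicate(runs)
print(len(kept), kept[0]["timestamp"])  # 1 2026-01-03T09:30:00
```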
## Reproducibility

Reproduce the paper's tables and figures from this dataset:

```shell
git clone https://github.com/vikpant/strategic-coopetition.git
cd strategic-coopetition
pip install -e ./coopetition_gym

# Point at the downloaded dataset
python -m experiments.analyze all \
    --input-dir data/training/baseline_integrated/ \
    --output-dir data/analysis/

# Reward-type ablation comparison
python -m experiments.analyze reward-ablation \
    --input-baseline data/training/baseline_integrated/ \
    --input-private data/training/ablation_private/ \
    --input-cooperative data/training/ablation_cooperative/ \
    --output-dir data/analysis/reward_ablation/
```
Regenerate the dataset from scratch (3,400 GPU-hours, approximately $8,100 USD on commodity cloud GPUs):

```shell
python -m experiments.campaign baseline --enable-checkpoints \
    --output data/training/baseline_integrated/
python -m experiments.campaign private --output data/training/ablation_private/
python -m experiments.campaign cooperative --output data/training/ablation_cooperative/
```
See REPRODUCE.md for full instructions.
## Validation

Check dataset integrity after download:

```shell
python -m experiments.validate training data/training/
```
Expected output: 17,930 files, 62 expected NaN entries, 0 failed experiments.
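The exact checks performed by `experiments.validate` live in the repository; as an illustration only, a minimal NaN-return count over parsed records might look like the sketch below (`count_nan_returns` is a hypothetical helper, not part of the package):

```python
import math

def count_nan_returns(records):
    """Count records whose headline return is NaN (the dataset documents
    62 such files from known training instabilities)."""
    return sum(
        1 for rec in records
        if isinstance(rec["metrics"]["mean_return"], float)
        and math.isnan(rec["metrics"]["mean_return"])
    )

# Hypothetical mini-corpus: one healthy run, one NaN-return run.
records = [
    {"metrics": {"mean_return": 12.4}},
    {"metrics": {"mean_return": float("nan")}},
]
print(count_nan_returns(records))  # 1
```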
## Limitations
This dataset should not be used to:
- Train policies for deployment in actual business settings without extensive domain-specific validation. The validated case studies are abstract models, not operational authorizations.
- Claim results about empirical human cooperative behavior. The dataset reflects agent behavior under synthetic reward structures.
See the repository's DATASHEET.md for the complete Gebru et al. datasheet.
## Citation

```bibtex
@article{pant2026coopetitiongym,
  title={Coopetition-Gym v1: A Formally Grounded Platform for Mixed-Motive
         Multi-Agent Reinforcement Learning under Strategic Coopetition},
  author={Pant, Vik and Yu, Eric},
  journal={arXiv preprint arXiv:2605.02063},
  year={2026}
}

@software{pant2026coopetitiongym_software,
  author={Pant, Vik and Yu, Eric},
  title={Coopetition-Gym: reproducibility package for the Coopetition-Gym benchmark},
  version={1.0.0},
  year={2026},
  publisher={Zenodo},
  doi={10.5281/zenodo.20015197}
}
```
Software archival deposit: https://doi.org/10.5281/zenodo.20015197 (concept DOI; resolves to latest version).
The benchmark environments are formalized in four foundational technical reports: arXiv:2510.18802 (TR-1), arXiv:2510.24909 (TR-2), arXiv:2601.16237 (TR-3), arXiv:2604.01240 (TR-4).
## License
CC-BY-4.0 (Creative Commons Attribution 4.0 International). Users may share and adapt the dataset for any purpose, including commercial, provided attribution is given to the original authors.
## Maintenance
- Issues and corrections: https://github.com/vikpant/strategic-coopetition/issues
- Contact: vik.pant@mail.utoronto.ca
- Changelog: https://github.com/vikpant/strategic-coopetition/blob/master/CHANGELOG.md
The v1 dataset is frozen for reproducibility. Future extensions will be released as versioned successors with their own datasheets.
## Technical Reports
- TR-1: Computational Foundations for Strategic Coopetition: Formalizing Interdependence and Complementarity (arXiv:2510.18802)
- TR-2: Computational Foundations for Strategic Coopetition: Formalizing Trust and Reputation Dynamics (arXiv:2510.24909)
- TR-3: Computational Foundations for Strategic Coopetition: Formalizing Collective Action and Loyalty (arXiv:2601.16237)
- TR-4: Computational Foundations for Strategic Coopetition: Formalizing Sequential Interaction and Reciprocity (arXiv:2604.01240)