# AutoResearch ECS World Model Dataset (V0)
Entity-Component-System (ECS) game state sequences for training world models: ten classic games, each available in two formats (Format D and Parquet) and two sizes (small and full).
## Dataset Variants

| Variant | Episodes | Frames | Format D | Parquet |
|---------|----------|--------|----------|---------|
| small   | ~1K      | 329K   | `small/format_d/` (34MB) | `small/parquet/` (115MB) |
| full    | ~360K    | 98.4M  | `full/format_d/` (9GB)   | `full/parquet/` (7.3GB)  |
## Dataset Structure

```
{small,full}/
├── format_d/                  # Format D: per-game tar.gz archives
│   ├── {game}.tar.gz          # Extract → game={game}/ep_*/{ *.npy, meta.json }
│   ├── unified_manifest.json  # Cross-game entity/action/global field definitions
│   ├── tensor_config.json     # Build config (velocity, physics material, etc.)
│   └── build_stats.json       # Episode/frame counts per game
│
└── parquet/                   # Parquet: one file per split per game
    └── {game}/
        ├── train.parquet      # ~70% of episodes
        ├── val.parquet        # ~15%
        ├── test.parquet       # ~15%
        └── meta.json          # registry_dim, state_dim, max_entities
```
## Games (full dataset)

| Game | Episodes | Max Entities | Frames |
|------|---------:|-------------:|-------:|
| asteroids      | 30,000 | 21 |  9.7M |
| breakout       | 14,995 | 52 |  7.5M |
| flappy_bird    | 50,000 | 10 |  9.5M |
| frogger        | 50,000 | 27 |  2.0M |
| platformer     | 20,000 | 24 |  5.6M |
| pong           | 50,000 |  5 | 21.4M |
| snake          | 50,000 | 21 |  4.1M |
| space_invaders | 15,000 | 56 |  6.1M |
| tag            | 30,000 |  9 | 29.9M |
| tetris         | 50,000 | 64 |  2.6M |
**Total:** 359,995 episodes, 98.4M frames
## Tensor Schema

| Tensor | Shape | Description |
|--------|-------|-------------|
| `registry`     | (N, 34)    | Static entity properties (collider, scale, physics) |
| `states`       | (T, N, 23) | Dynamic: pos_xy(2), alive(1), vel_xy(2), gameplay(14), pos_history(4) |
| `actions`      | (T, 7)     | Unified action vector (7 fields across all games) |
| `globals`      | (T, 17)    | Global game state (17 fields across all games) |
| `terminals`    | (T,)       | Episode termination flags |
| `mutable_mask` | (N,)       | Which entities are prediction targets |
| `type_ids`     | (N,)       | Global entity type IDs |
| `slot_ids`     | (N,)       | Original 64-slot table indices |
| `rewards`      | (T,)       | Per-frame rewards |
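A minimal NumPy sketch of how these tensors fit together for one episode with T frames and N entity slots (N=5 matches pong's max entities). Shapes follow the table above; the values here are zero placeholders, and the field slicing assumes the documented `states` layout:

```python
import numpy as np

# Synthetic episode: T=8 frames, N=5 entity slots.
T, N = 8, 5
episode = {
    "registry": np.zeros((N, 34), dtype=np.float32),
    "states": np.zeros((T, N, 23), dtype=np.float32),
    "actions": np.zeros((T, 7), dtype=np.float32),
    "globals": np.zeros((T, 17), dtype=np.float32),
    "terminals": np.zeros((T,), dtype=bool),
    "mutable_mask": np.ones((N,), dtype=bool),
    "type_ids": np.zeros((N,), dtype=np.int64),
    "slot_ids": np.arange(N, dtype=np.int64),
    "rewards": np.zeros((T,), dtype=np.float32),
}

# Slice the states tensor into its documented fields
# (2 + 1 + 2 + 14 + 4 = 23 channels):
pos_xy = episode["states"][..., 0:2]        # (T, N, 2)
alive = episode["states"][..., 2:3]         # (T, N, 1)
vel_xy = episode["states"][..., 3:5]        # (T, N, 2)
gameplay = episode["states"][..., 5:19]     # (T, N, 14)
pos_history = episode["states"][..., 19:23] # (T, N, 4)
```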
## Usage

### Parquet (recommended for training)

```python
from huggingface_hub import hf_hub_download
import pyarrow.parquet as pq
import numpy as np
import json

path = hf_hub_download(
    "marjanmoodi/AutoResearch-ECS-V0",
    "full/parquet/pong/train.parquet",
    repo_type="dataset",
)

table = pq.read_table(path)
cols = table.to_pydict()

# Tensors are stored as raw bytes alongside dtype and shape columns;
# decode the first episode's states tensor:
states = np.frombuffer(cols["states"][0], dtype=cols["states_dtype"][0]).reshape(
    json.loads(cols["states_shape"][0])
)
```
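The same bytes/dtype/shape pattern applies to every tensor column, so decoding generalizes. A small helper sketch, assuming the `{name}`/`{name}_dtype`/`{name}_shape` column convention shown in the snippet above:

```python
import json
import numpy as np

def decode_tensor(cols, name, i=0):
    """Decode tensor `name` for episode row `i` from a to_pydict() mapping.

    Assumes raw bytes live in column `name`, the dtype string in
    `{name}_dtype`, and a JSON-encoded shape list in `{name}_shape`.
    """
    return np.frombuffer(cols[name][i], dtype=cols[f"{name}_dtype"][i]).reshape(
        json.loads(cols[f"{name}_shape"][i])
    )
```

For example, `decode_tensor(cols, "actions")` would recover the first episode's `(T, 7)` action tensor under the same convention.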
### Format D (for custom pipelines)

```python
from huggingface_hub import hf_hub_download
import tarfile

path = hf_hub_download(
    "marjanmoodi/AutoResearch-ECS-V0",
    "full/format_d/pong.tar.gz",
    repo_type="dataset",
)

with tarfile.open(path) as tar:
    tar.extractall("./data")
```
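After extraction, each episode is a directory of `.npy` tensors plus a `meta.json`, following the `ep_*/{*.npy, meta.json}` layout shown in the structure tree. A hedged loader sketch under that assumption:

```python
import json
from pathlib import Path
import numpy as np

def load_episode(ep_dir):
    """Load all .npy tensors and metadata from one extracted episode directory.

    Assumes the Format D layout described above: an ep_* directory
    containing one .npy file per tensor and a meta.json.
    """
    ep = Path(ep_dir)
    tensors = {p.stem: np.load(p) for p in ep.glob("*.npy")}
    meta = json.loads((ep / "meta.json").read_text())
    return tensors, meta
```

Usage would look like `tensors, meta = load_episode("./data/game=pong/ep_000000")`, with tensor names matching the schema table (e.g. `tensors["states"]`).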
## Generation Pipeline

1. `ecs-world` `generate_all.py` — simulate games, record JSONL
2. `ecs-world` `FormatDCacheBuilder` — convert JSONL to Format D numpy tensors
3. `ecs-vanilla-baselines` `export_parquet.py` — convert Format D to Parquet with deterministic splits
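One common way to implement a deterministic ~70/15/15 split is to hash each episode id into the unit interval, so the assignment is reproducible across runs without storing a split file. This is an illustrative sketch only; the actual logic in `export_parquet.py` may differ:

```python
import hashlib

def split_for(episode_id, train=0.70, val=0.15):
    """Deterministically assign an episode to train/val/test by hashing its id.

    Hypothetical illustration of a deterministic split; not taken from
    export_parquet.py.
    """
    digest = hashlib.sha256(episode_id.encode()).digest()
    u = int.from_bytes(digest[:8], "big") / 2**64  # uniform-ish in [0, 1)
    if u < train:
        return "train"
    if u < train + val:
        return "val"
    return "test"
```

Because the assignment depends only on the episode id, re-running the export yields identical splits.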