---
pretty_name: PhyCo-Sim
arxiv: 2604.28169
license: cc-by-nd-4.0
language:
- en
size_categories:
- 10K<n<100K
task_categories:
- image-to-video
tags:
- physics
- simulation
- synthetic
- video
- kubric
- blender
- pybullet
- rigid-body
- soft-body
- depth
- segmentation
- cvpr2026
---

# PhyCo-Sim: Synthetic Physics Simulation Dataset

[arXiv:2604.28169](https://arxiv.org/abs/2604.28169)

This dataset accompanies the paper:

> **PhyCo: Learning Controllable Physical Priors for Generative Motion**
> Sriram Narayanan, Ziyu Jiang, Srinivasa G. Narasimhan, Manmohan Chandraker
> *CVPR 2026*
> [Project Page](https://phyco-video.github.io/) | [arXiv](https://arxiv.org/abs/2604.28169) | [PDF](https://phyco-video.github.io/PhyCo_CVPR2026.pdf)

PhyCo learns fine-grained, continuously controllable physical priors — friction, restitution, deformation, and applied force — from synthetic simulations, enabling physically grounded video generation without a simulator at inference. This dataset provides the training foundation for that system.

---

## Overview

PhyCo-Sim contains **~90,000 photorealistic simulation videos** across 9 physics scenario types, rendered using [Kubric](https://github.com/google-research/kubric) (Blender + PyBullet). Each video comes with dense ground-truth annotations: instance segmentation, depth, and full per-object physics metadata. Physical properties are systematically varied across scenes to provide rich, controlled supervision.

| Scenario | Description | Varied Properties | Size |
|---|---|---|---|
| `ball_drop_v2` | Rigid ball falling onto a platform and bouncing | Bounciness (restitution) | 1.3 GB |
| `ball_drop_soft_v4` | Deformable elastic ball falling onto a surface | Deformation stiffness | 9.0 GB |
| `ball_drop_v3` | Multiple rigid balls (3–5) dropping simultaneously | Bounciness (restitution) | 1.9 GB |
| `ball_wall_collision` | Ball rolling into a wall and bouncing back | Bounciness (restitution) | 2.4 GB |
| `cube_deform_soft_v2_noeff` | Rigid ball impacting a soft elastic cube | Deformation stiffness | 2.2 GB |
| `friction_slide_flat_v2` | Rectangular brick sliding on a flat surface | Friction, slide direction | 4.2 GB |
| `friction_slide_flat_force_v3` | Brick sliding under an applied force | Force magnitude, direction | 2.1 GB |
| `jenga_force` | Force applied to a single block in a Jenga tower | Push direction | 3.0 GB |
| `pool_table_force` | Force applied to a ball on a billiards table | Force, direction, friction, bounciness | 1.4 GB |

**Total compressed size: ~27.5 GB**

---

## Dataset Structure

Each scenario folder contains:

```
<scenario_name>/
├── YYYY-MM-DD.tar.gz            # Video data, batched by generation date
├── common_caption_cosmos.pt     # Pre-computed T5-XXL text embeddings (PyTorch)
├── common_caption_cosmos.txt    # Plain-text caption describing the scenario
├── props_of_interest.json       # Physical properties varied in this scenario
├── fg_bg_id.json                # (some scenarios) Foreground/background seg IDs
└── data_stats_json.tar.gz       # (some scenarios) Per-property distribution stats
```

> **Note:** `friction_slide_flat_v2` includes multiple caption variants for different camera viewing angles (`common_caption_cosmos_down.pt`, `common_caption_cosmos_left.pt`, etc.).

### Inside each `.tar.gz`

Extracting a `.tar.gz` yields date-stamped folders of individual simulation samples:

```
YYYY-MM-DD/
└── <hex_id>/                        # UUID per video
    ├── rgba.mp4                     # RGB video
    ├── depth.mp4                    # Depth map (grayscale, encoded as video)
    ├── segmentation.mp4             # Instance segmentation masks (color-coded)
    ├── metadata.json                # Full physics and rendering metadata
    ├── animation_data.pkl           # Per-frame object trajectories (position, rotation)
    └── force_annotated_00000.jpg    # (force scenarios only) First frame with force arrow overlay
```
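The batch archives can be unpacked with nothing but the standard library. A minimal sketch (the `extract_batch` helper and the archive path are illustrative, not part of any official dataset tooling):

```python
import tarfile
from pathlib import Path


def extract_batch(archive: Path, dest: Path) -> list[Path]:
    """Extract one date-stamped batch archive and return the per-video
    sample directories (YYYY-MM-DD/<hex_id>/) it contains."""
    dest.mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(dest)
    # Each sample lives two levels down: YYYY-MM-DD/<hex_id>/
    return sorted(p for p in dest.glob("*/*") if p.is_dir())


# Usage (illustrative paths):
# samples = extract_batch(Path("ball_drop_v2/2024-01-15.tar.gz"), Path("extracted"))
# print(samples[0] / "metadata.json")
```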
---

## Video Specification

| Property | Value |
|---|---|
| Resolution | 768 × 432 (16:9) |
| Frame rate | 24 fps |
| Duration | ~4 seconds (~98 frames) |
| Video codec | H.264 |

---

## Annotations

### `metadata.json` schema

Each sample's `metadata.json` contains the full simulation and rendering state. Key fields:

**Per-object data** (`object_data`):
- `position` — [x, y, z] world-space coordinates
- `quaternion` — [w, x, y, z] rotation
- `scale` — [x, y, z] scale factors
- `mass` — mass in kg
- `friction` — friction coefficient
- `restitution` — bounciness (0 = inelastic, 1 = perfectly elastic)
- `segmentation_id` — integer ID matching the segmentation video
- `segmentation_color` — [R, G, B] color in the segmentation mask (0–255)
- `color` — object surface color [R, G, B] in [0, 1]
- `metallic`, `roughness`, `specular` — PBR material properties
- Soft body only: `use_neo_hookean`, Neo-Hookean `mu`/`lambda`, spring stiffness/damping/bending parameters

**Scene-level data**:
- `simulation_type` — scenario identifier string
- `hdri_id`, `hdri_rotation` — environmental lighting
- `ground_texture`, `platform_texture`, `platform_name` — surface materials
- `camera_diversity` — camera position and framing metadata
- `segmentation_color_map` — full color → object ID mapping
- `depth_of_field` — focus distance and aperture parameters
- `rendering_efficiency` — frame settling info and optimization metrics

**Force datasets** additionally include the force vector (magnitude + direction) in world space, and its projection into image space.
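As an illustration of working with this schema, here is a small hypothetical helper that collects the physics fields per object. It assumes `object_data` maps object names to per-object dicts; inspect one file to confirm the exact nesting in your scenario:

```python
import json
from pathlib import Path


def object_physics(metadata: dict) -> dict[str, dict]:
    """Collect physics-relevant fields per object from a loaded metadata.json.

    Assumption: metadata["object_data"] is {object_name: {field: value, ...}}.
    """
    fields = ("mass", "friction", "restitution", "segmentation_id")
    return {
        name: {f: obj[f] for f in fields if f in obj}
        for name, obj in metadata["object_data"].items()
    }


# Usage (illustrative path):
# meta = json.loads(Path("2024-01-15/abc123/metadata.json").read_text())
# print(object_physics(meta))
```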

### Segmentation

`segmentation.mp4` encodes instance masks: each object is assigned a unique solid color, with background as black. The color → object ID mapping is in `metadata.json` under `segmentation_color_map`.
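A stdlib-only sketch of turning one decoded RGB frame into object IDs. The `frame_to_ids` helper is illustrative: it assumes `segmentation_color_map` can be converted into a `{(r, g, b): id}` dict, and a real pipeline would vectorize this with NumPy instead of nested lists:

```python
def frame_to_ids(frame, color_map):
    """Convert an H x W x 3 RGB segmentation frame into integer object IDs.

    frame: nested lists, frame[y][x] = [r, g, b] in 0-255.
    color_map: {(r, g, b): object_id}; unmapped pixels (black background) -> 0.
    """
    return [[color_map.get(tuple(px), 0) for px in row] for row in frame]
```

Note that lossy H.264 compression can perturb mask colors slightly; a robust decoder may need to snap each pixel to the nearest color in the map rather than match exactly.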

### Depth

`depth.mp4` encodes linearized scene depth as a grayscale video. For decoding details (min/max depth range, scale factor), refer to the camera parameters in `metadata.json`.
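The exact encoding is not specified here, so the following is a hedged sketch assuming a linear mapping of 8-bit grayscale onto a `[d_min, d_max]` range taken from the camera parameters; verify against the actual metadata before relying on it:

```python
def decode_depth(gray: int, d_min: float, d_max: float) -> float:
    """Map an 8-bit grayscale value back to metric depth.

    Assumption: linear encoding between d_min and d_max; take the real
    range from the camera parameters in metadata.json.
    """
    return d_min + (gray / 255.0) * (d_max - d_min)
```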

### Animation trajectories

`animation_data.pkl` is a pickled Python object containing per-frame arrays of object positions and rotations throughout the simulation. Useful for trajectory prediction tasks.
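Since the pickle layout is not documented above, this loader is a sketch under the assumption that the file maps object names to per-frame `position`/`quaternion` arrays; inspect one file to confirm the real structure:

```python
import pickle
from pathlib import Path


def load_trajectory(pkl_path: Path, obj_name: str):
    """Return (positions, quaternions) for one object from animation_data.pkl.

    Assumed layout: {object_name: {"position": [...], "quaternion": [...]}}.
    """
    with open(pkl_path, "rb") as f:
        anim = pickle.load(f)
    entry = anim[obj_name]
    return entry["position"], entry["quaternion"]
```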

---

## Captions and Text Embeddings

Each scenario ships with:
- `common_caption_cosmos.txt` — a single natural-language caption describing the physics scenario
- `common_caption_cosmos.pt` — the corresponding T5-XXL embedding (PyTorch, loadable with `torch.load`)

These are scenario-level (not per-video) and were used to condition the Cosmos-Predict2 video diffusion backbone in PhyCo.
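A small illustrative helper for locating a scenario's caption pair (the `caption_pair` name is ours; the `torch.load` call is left commented so the snippet stays dependency-free):

```python
from pathlib import Path


def caption_pair(scenario_dir: Path) -> tuple[str, Path]:
    """Return the plain-text caption and the path to its precomputed
    T5-XXL embedding for one scenario folder."""
    text = (scenario_dir / "common_caption_cosmos.txt").read_text().strip()
    emb_path = scenario_dir / "common_caption_cosmos.pt"
    # emb = torch.load(emb_path)  # requires PyTorch
    return text, emb_path
```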

---

## Generation

Videos were generated using a custom pipeline built on [Kubric](https://github.com/google-research/kubric) with:
- **Physics engine**: PyBullet (rigid and soft body dynamics)
- **Renderer**: Blender 3.x Cycles (GPU-accelerated)
- **Soft bodies**: Tetrahedral VTK meshes with Neo-Hookean elasticity
- **Assets**: KuBasic primitives, custom URDF/GLB models
- **Lighting**: Randomized HDRI environments with varied ground/platform textures

---

## Citation

If you use this dataset, please cite:

```bibtex
@inproceedings{narayanan2026phyco,
  title         = {PhyCo: Learning Controllable Physical Priors for Generative Motion},
  author        = {Narayanan, Sriram and Jiang, Ziyu and Narasimhan, Srinivasa G. and Chandraker, Manmohan},
  booktitle     = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year          = {2026},
  eprint        = {2604.28169},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV}
}
```

---

## License

This dataset is licensed under [CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0/). You are free to share and redistribute the material in any medium or format, for any purpose, including commercially, as long as you give appropriate credit and do not distribute modified versions. The underlying Kubric framework is Apache 2.0 licensed; see the [Kubric repository](https://github.com/google-research/kubric) for details.