# VOID-Compatible Quadmask Counterfactual Video Dataset
The first publicly available, pre-built quadmask-annotated counterfactual video dataset for physics-aware video object removal, inspired by and fully compatible with Netflix/VOID (arXiv:2604.02296).
## What is this?
VOID introduced a powerful framework for removing objects from videos while correcting downstream physical interactions. Their key innovation is the quadmask — a 4-value segmentation mask that tells the model which regions need physical correction after object removal.
While VOID released their model weights and generation code, they did not release pre-built training data. This dataset fills that gap by providing 200 ready-to-use counterfactual scene pairs with full quadmask annotations.
## Dataset Structure
Each scene contains a counterfactual pair: the original video (with all objects) and the counterfactual video (with the target object removed and physics re-simulated).
```
scene_XXXX/
├── rgb_full.avi         # V: Original video (all objects present)
├── rgb_removed.avi      # V̂: Counterfactual video (target removed, physics re-run)
├── mask_lossless.avi    # Mq: Quadmask video (FFV1 lossless codec)
├── mask.mp4             # Mq: Quadmask video (MP4, quantized to exact values)
├── metadata.json        # Scene metadata and pipeline configuration
└── quadmask_frames/     # Individual quadmask frames as PNG
    ├── quadmask_0000.png
    ├── quadmask_0001.png
    └── ...
```
## Quadmask Values (VOID-compatible)
| Value | Color | Meaning |
|---|---|---|
| 0 | ⬛ Black | Object region — pixels belonging to the removed object |
| 63 | 🔲 Dark gray | Overlap — object pixels that also show physical interaction |
| 127 | 🔳 Light gray | Affected area — regions where physics changed due to removal |
| 255 | ⬜ White | Background — unchanged regions |
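The four values above are mutually exclusive per pixel, so a quadmask frame can be decomposed into boolean region masks with simple equality tests. A minimal sketch (the helper name and dict are illustrative, not part of the dataset tooling):

```python
import numpy as np

# The four VOID quadmask values, as documented in the table above.
QUAD_VALUES = {"object": 0, "overlap": 63, "affected": 127, "background": 255}

def split_quadmask(mask):
    """Split a (H, W) uint8 quadmask frame into four boolean region masks."""
    return {name: mask == value for name, value in QUAD_VALUES.items()}

# Tiny 2x2 demo frame containing one pixel of each region.
frame = np.array([[0, 63], [127, 255]], dtype=np.uint8)
regions = split_quadmask(frame)
```

Because the masks are stored losslessly (FFV1), these equality tests are exact; no thresholding is needed.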
## Technical Specifications
| Property | Value |
|---|---|
| Number of scenes | 200 |
| Frames per scene | 216 (9 seconds) |
| Resolution | 672 × 384 |
| FPS | 24 |
| Total frames | 43,200 |
| Mask codec | FFV1 lossless (AVI) + quantized MP4 |
| Mask cleanliness | 100% — all frames contain only [0, 63, 127, 255] |
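The "mask cleanliness" claim can be verified per frame by checking that every pixel takes one of the four documented values. A sketch of such a check (function name is illustrative):

```python
import numpy as np

# The only values a clean quadmask frame may contain.
ALLOWED = np.array([0, 63, 127, 255], dtype=np.uint8)

def is_clean(mask):
    """True if every pixel of the (H, W) uint8 frame is a valid quadmask value."""
    return bool(np.isin(mask, ALLOWED).all())

clean = np.array([[0, 63], [127, 255]], dtype=np.uint8)
dirty = clean.copy()
dirty[0, 0] = 64  # a stray value, e.g. from lossy compression
```

Running this over frames decoded from `mask_lossless.avi` should return `True` for all 43,200 frames; the quantized MP4 is the one that needed snapping back to exact values.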
## Interaction Categories
This dataset includes 12 structured interaction categories, several of which go beyond the random-drop scenarios in Kubric:
| Category | Description |
|---|---|
| Stack Collapse | Block tower collapses when middle block removed |
| Domino Chain | Chain reaction stops when first domino removed |
| Ramp Collision | Rolling object removed, target objects stay still |
| Bowling Pins | Bowling ball removed, pins remain standing |
| Platform Drop | Platform removed, objects fall |
| Newton's Cradle | Pendulum ball removed, momentum transfer stops |
| Balance Board | Weight removed from seesaw, board tips opposite |
| Cascading Shelves | Bottom shelf removed, upper shelves collapse |
| Chain Break | Chain link removed, lower segment falls |
| Support Removal | Arch support removed, beam collapses |
| Projectile Impact | Projectile removed, wall remains intact |
| Multi-Object Push | Pusher removed, objects stay in place |
## ⚙️ Quadmask Generation Pipeline
The affected-area masks are computed using a 4-method GPU ensemble:
| Method | Weight | Role |
|---|---|---|
| RAFT Optical Flow | 35% | Motion difference between V and V̂ |
| DINOv2 Semantic Diff | 35% | Structural/semantic feature differences |
| SSIM (GPU) | 15% | Perceptual quality differences |
| RGB Pixel Diff | 15% | Raw pixel-level differences |
- Post-processing includes morphological cleanup, gridification (16px cells), and temporal smoothing (window=3).
- Object mask (Mo) is derived from the V − V̂ difference in the first frame, before physics diverges — the only difference at t=0 is the removed object itself.
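The first-frame object-mask derivation described above can be sketched as follows. This is an illustrative reconstruction, not the pipeline's actual code; the `threshold` parameter is a hypothetical tolerance for codec noise:

```python
import numpy as np

def object_mask_from_first_frame(v0, vhat0, threshold=10):
    """Derive Mo from the t=0 frames of V and V̂.

    At t=0 physics has not yet diverged, so any pixel difference between
    the original frame (v0) and the counterfactual frame (vhat0) belongs
    to the removed object itself.
    """
    diff = np.abs(v0.astype(np.int16) - vhat0.astype(np.int16))
    if diff.ndim == 3:  # RGB input: reduce channels to a single difference
        diff = diff.max(axis=-1)
    return diff > threshold

# Demo: a 4x4 frame where a 2x2 "object" region differs at t=0.
v0 = np.zeros((4, 4), dtype=np.uint8)
vhat0 = v0.copy()
vhat0[1:3, 1:3] = 200
mo = object_mask_from_first_frame(v0, vhat0)
```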
## Generation Stack
- Simulation: Unity 6 HDRP with deterministic PhysX
- Quadmask Pipeline: Python + PyTorch on Google Colab
- GPU: NVIDIA RTX PRO 6000 Blackwell Server Edition (102GB VRAM)
- Processing speed: 62 fps (v3.2), 200 scenes in 27 minutes
- Success rate: 200/200 (0% failure)
## Usage
```python
from huggingface_hub import hf_hub_download
import cv2
import json

# Download a scene
scene = "scene_0000"
for f in ["rgb_full.avi", "rgb_removed.avi", "mask_lossless.avi", "metadata.json"]:
    hf_hub_download(
        repo_id="ErenAta00/VOID-Quadmask-Dataset",
        filename=f"{scene}/{f}",
        repo_type="dataset",
        local_dir="./void_data",
    )

# Load scene metadata
with open(f"./void_data/{scene}/metadata.json") as fp:
    metadata = json.load(fp)

# Read the first quadmask frame
cap = cv2.VideoCapture(f"./void_data/{scene}/mask_lossless.avi")
ret, frame = cap.read()
cap.release()
mask = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# mask values: 0 (object), 63 (overlap), 127 (affected), 255 (background)
```
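Once frames are decoded (e.g., with the `cv2.VideoCapture` loop above), per-frame statistics are straightforward. A hypothetical analysis helper that tracks how much of each frame is physics-affected (value 127):

```python
import numpy as np

def affected_coverage(frames):
    """Fraction of pixels in the physics-affected region (value 127) per frame.

    frames: iterable of (H, W) uint8 quadmask arrays.
    """
    return [float((f == 127).mean()) for f in frames]

# Demo with two tiny synthetic frames instead of a decoded video.
demo = [
    np.full((2, 2), 255, dtype=np.uint8),              # all background
    np.array([[127, 255], [127, 0]], dtype=np.uint8),  # half affected
]
cov = affected_coverage(demo)
```

Plotting this over a scene's 216 frames shows when the counterfactual physics diverges from the original simulation.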
## Comparison with VOID Training Data
| Feature | VOID (Kubric) | VOID (HUMOTO) | This Dataset |
|---|---|---|---|
| Scenes | ~1,900 | ~4,500 | 200 (prototype) |
| Source | Blender/PyBullet | MoCap | Unity 6 HDRP |
| Interaction types | Random drops | Human-object | 12 structured categories |
| Joint/mechanical | ❌ | ❌ | ✅ (Newton's cradle, seesaw, chain) |
| Pre-built data | ❌ (code only) | ❌ (code only) | ✅ (ready to use) |
| Mask format | MP4 (lossy) | MP4 (lossy) | FFV1 lossless + MP4 |
| Open access | Code only | Code only | Full dataset |
## Citation
If you use this dataset, please cite both this dataset and the original VOID paper:
```bibtex
@misc{ata2026void_quadmask_dataset,
  title={VOID-Compatible Quadmask Counterfactual Video Dataset},
  author={Eren Ata},
  year={2026},
  url={https://huggingface.co/datasets/ErenAta00/VOID-Quadmask-Dataset}
}

@misc{motamed2026void,
  title={VOID: Video Object and Interaction Deletion},
  author={Saman Motamed and William Harvey and Benjamin Klein and Luc Van Gool and Zhuoning Yuan and Ta-Ying Cheng},
  year={2026},
  eprint={2604.02296},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```
## License
Apache 2.0
## 🙏 Acknowledgments
This dataset was inspired by and built to be compatible with VOID by Netflix and INSAIT. We thank the VOID team for making their model and code openly available.