🌍 World Model Bench — does your world model actually think?
FID measures realism. FVD measures smoothness. But neither tells you whether the model understood the scene.
We just released WM Bench — the first benchmark for cognitive intelligence in world models. The core question: when a beast charges from 3 meters away, does the model know to sprint — not walk? Does it respond differently to a human vs an animal? Does it remember the left corridor was blocked two steps ago?
Those are cognitive questions. No existing benchmark asks them. So we built one.
- 👁 P1 Perception (25%) — Can it read the scene?
- 🧠 P2 Cognition (45%) — Does it predict threats, escalate emotions, use memory?
- 🔥 P3 Embodiment (30%) — Does the body respond with the right motion?
All evaluation is via simple JSON I/O — no 3D engine, no special hardware. Any model with an API can participate.
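For a sense of what that looks like, here is a sketch of one exchange — the harness sends an observation, the model replies with an action. The field names below are illustrative only, not the benchmark's actual schema:

```json
{
  "request": {
    "entities": [{"type": "beast", "distance_m": 3.0, "state": "charging"}],
    "memory": ["left_corridor_blocked"]
  },
  "response": {
    "motion": "sprint",
    "direction": "right"
  }
}
```

Because the interface is plain JSON, a submission can be anything from a full world model to a thin wrapper around a chat API.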
We also built PROMETHEUS as a live reference implementation — it runs in your browser on a single T4, no install needed. It combines FloodDiffusion motion generation with an LLM cognitive brain (Perceive → Predict → Decide → Act). It scored 726/1000 (Grade B) on Track C — the only directly verified model so far. Submissions from other teams are very welcome.
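To make the four-stage loop concrete, here is a minimal toy sketch of one Perceive → Predict → Decide → Act pass. Everything in it — the function, the field names, the thresholds — is hypothetical, not PROMETHEUS's actual implementation:

```python
def cognitive_step(scene: dict, memory: list) -> str:
    """One toy Perceive -> Predict -> Decide -> Act pass (illustrative only)."""
    # Perceive: find the nearest entity in the scene description.
    nearest = min(scene["entities"], key=lambda e: e["distance_m"])
    # Predict: is a collision imminent?
    imminent = nearest["distance_m"] < 5.0 and nearest["state"] == "charging"
    # Decide: pick an intent, consulting memory for blocked routes.
    intent = "flee" if imminent else "explore"
    route = "right" if "left_corridor_blocked" in memory else "left"
    # Act: emit a motion command for the body to execute.
    return f"{'sprint' if intent == 'flee' else 'walk'}:{route}"

cmd = cognitive_step(
    {"entities": [{"type": "beast", "distance_m": 3.0, "state": "charging"}]},
    ["left_corridor_blocked"],
)
print(cmd)  # → sprint:right
```

A real submission would replace each stage with learned components; the point is only that the loop's input and output both fit in plain JSON.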