---
license: cc-by-nc-4.0
language:
  - en
task_categories:
  - question-answering
  - visual-question-answering
task_ids:
  - multiple-choice-qa
pretty_name: EgoMemReason
size_categories:
  - n<1K
tags:
  - egocentric-video
  - long-video-understanding
  - memory
  - multimodal
  - benchmark
  - video-qa
configs:
  - config_name: default
    data_files:
      - split: test
        path: annotations_public.jsonl
---

# EgoMemReason

A Memory-Driven Reasoning Benchmark for Long-Horizon Egocentric Video Understanding.

500 multiple-choice questions over week-long egocentric video (built on EgoLife) that evaluate three complementary kinds of memory:

- **Entity memory** — track how object states evolve across days
- **Event memory** — recall and order activities separated by hours or days
- **Behavior memory** — abstract recurring patterns from sparse, repeated observations

On average, each question requires 5.1 evidence segments and 25.9 hours of memory backtracking — 2× the strongest prior week-long benchmark on both metrics.

## Links

- Paper: https://arxiv.org/abs/2605.09874
- Leaderboard Space: https://huggingface.co/spaces/Ted412/EgoMemReason
- EgoLife source videos: https://egolife-ai.github.io/

## Composition

| Memory type | Capability (`query_type`)  | # Qs    |
|-------------|----------------------------|---------|
| Entity      | Cumulative State Tracking  | 100     |
| Entity      | Temporal Counting          | 100     |
| Event       | Event Ordering             | 100     |
| Event       | Event Linking              | 100     |
| Behavior    | Spatial Preference         | 50      |
| Behavior    | Activity Pattern           | 50      |
| **Total**   |                            | **500** |

## Schema

This dataset releases the public version: questions and options only, no answer keys. The held-out answer key lives in a private dataset, and the leaderboard Space scores submissions against it.

```json
{
  "example_id": 1,
  "p_id": "A1_JAKE_DAY7_19_00_00_q001",
  "identity": "A1_JAKE",
  "query_time": "DAY7, 19:00:00",
  "question": "What do I most often eat for breakfast?",
  "options": {
    "A": "Pancake",
    "B": "Rice",
    "C": "Burger",
    "D": "Dumplings"
  },
  "query_type": "Activity Pattern"
}
```

Note that questions have 4-10 options (letters A-J). The valid answer set for any given question is the set of keys in its `options` dict; Event Ordering questions tend to have the most options.
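
As a minimal sketch of recovering each question's valid letters (assuming the `datasets` loader types `options` as a struct, so letters absent from a given question may come back as `None`):

```python
from datasets import load_dataset

ds = load_dataset("Ted412/EgoMemReason", split="test")
ex = ds[0]

# Keep only the letters actually present for this question; missing
# keys may be materialized as None by the struct typing.
valid_letters = sorted(k for k, v in ex["options"].items() if v is not None)
print(ex["question"], valid_letters)  # e.g. 4-10 letters from A-J
```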

## How to evaluate

1. Get this dataset:

   ```python
   from datasets import load_dataset

   ds = load_dataset("Ted412/EgoMemReason")["test"]
   ```

2. Get the underlying EgoLife video frames (separate license, see https://egolife-ai.github.io/); we don't redistribute video here.
3. For each item, sample frames backwards in time from `(identity, query_time)` and run your model to pick one letter from `options.keys()`.
4. Format the predictions as a JSON list (see the end-to-end sketch after this list):

   ```json
   [
     {"example_id": 1, "predicted_answer": "A"},
     ...
   ]
   ```

5. Submit it on the leaderboard Space: https://huggingface.co/spaces/Ted412/EgoMemReason. Per-split and overall accuracy are computed automatically.
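
Putting steps 3-4 together, a minimal end-to-end sketch; `predict_letter` is a hypothetical stand-in for your model's inference over frames sampled backwards from `(identity, query_time)`:

```python
import json

from datasets import load_dataset

ds = load_dataset("Ted412/EgoMemReason", split="test")

def predict_letter(example):
    # Hypothetical placeholder: replace with real frame sampling and
    # model inference. Must return one letter from this question's options.
    letters = sorted(k for k, v in example["options"].items() if v is not None)
    return letters[0]

preds = [
    {"example_id": ex["example_id"], "predicted_answer": predict_letter(ex)}
    for ex in ds
]

# Write the prediction file in the format expected by the leaderboard Space.
with open("predictions.json", "w") as f:
    json.dump(preds, f, indent=2)
```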

The reference inference scripts for 12 MLLMs and 5 agentic frameworks (Gemini, GPT-5, Qwen3-VL, InternVL3.5, Molmo2, VideoLLaMA3, InternVideo2.5, LongVA, AVP, Ego-R1, SiLVR, WorldMM, …) live in the GitHub repo.

## License

- **EgoMemReason annotations** (this dataset): CC BY-NC 4.0 — academic research and benchmarking are permitted; commercial use requires written permission.
- **EgoLife video frames** (not redistributed here): governed by the EgoLife data license — you must accept their terms separately.

## Citation

```bibtex
@misc{wang2026egomemreasonmemorydrivenreasoningbenchmark,
      title={EgoMemReason: A Memory-Driven Reasoning Benchmark for Long-Horizon Egocentric Video Understanding},
      author={Ziyang Wang and Yue Zhang and Shoubin Yu and Ce Zhang and Zengqi Zhao and Jaehong Yoon and Hyunji Lee and Gedas Bertasius and Mohit Bansal},
      year={2026},
      eprint={2605.09874},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2605.09874},
}
```