---
license: cc-by-nc-4.0
language:
  - en
task_categories:
  - question-answering
  - visual-question-answering
task_ids:
  - multiple-choice-qa
pretty_name: EgoMemReason
size_categories:
  - n<1K
tags:
  - egocentric-video
  - long-video-understanding
  - memory
  - multimodal
  - benchmark
  - video-qa
configs:
  - config_name: default
    data_files:
      - split: test
        path: annotations_public.jsonl
---

# EgoMemReason

**A Memory-driven Reasoning Benchmark for Long-Horizon Egocentric Video Understanding.**

500 multiple-choice questions over **week-long egocentric video** (built on [EgoLife](https://egolife-ai.github.io/)) that evaluate three complementary kinds of memory:

- **Entity memory** — track how object states evolve across days
- **Event memory** — recall and order activities separated by hours or days
- **Behavior memory** — abstract recurring patterns from sparse, repeated observations

Questions require an average of **5.1 evidence segments** and **25.9 hours of memory backtracking**, double both figures for the strongest prior week-long benchmark.

## Links

- 🧠 **Leaderboard (HF Space):** <https://huggingface.co/spaces/Ted412/EgoMemReason>
- 💻 **Code & reference eval scripts:** <https://github.com/Ziyang412/EgoMemReason>
- 🌐 **Project page:** <https://egomemreason.github.io/>
- 🎬 **EgoLife video frames (separate license):** <https://egolife-ai.github.io/>
- 📄 **Paper:** <https://arxiv.org/abs/2605.09874>

## Composition

| Memory type | Capability (`query_type`) | # Qs |
|---|---|---:|
| Entity   | Cumulative State Tracking | 100 |
| Entity   | Temporal Counting         | 100 |
| Event    | Event Ordering            | 100 |
| Event    | Event Linking             | 100 |
| Behavior | Spatial Preference        |  50 |
| Behavior | Activity Pattern          |  50 |
| **Total** | | **500** |
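
This breakdown can be cross-checked against the `query_type` field of the released annotations. A minimal sketch, assuming the default config loads the `test` split as shown under "How to evaluate" below:

```python
from collections import Counter

from datasets import load_dataset

# Public annotations: one row per question.
ds = load_dataset("Ted412/EgoMemReason")["test"]

# Count questions per capability; this should match the table above
# (e.g. 100 "Event Ordering", 50 "Activity Pattern", 500 total).
counts = Counter(ds["query_type"])
for query_type, n in sorted(counts.items()):
    print(f"{query_type:>25}: {n}")
print("total:", sum(counts.values()))
```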

## Schema

This dataset releases the **public** version — questions and options only, no answer keys (the held-out answer key lives in a private dataset, and submissions are scored against it by the leaderboard Space).

```json
{
  "example_id": 1,
  "p_id": "A1_JAKE_DAY7_19_00_00_q001",
  "identity": "A1_JAKE",
  "query_time": "DAY7, 19:00:00",
  "question": "What do I most often eat for breakfast?",
  "options": {
    "A": "Pancake",
    "B": "Rice",
    "C": "Burger",
    "D": "Dumplings"
  },
  "query_type": "Activity Pattern"
}
```

Note that **questions have 4-10 options** (letters A-J). The valid answer set for any given question is exactly the set of keys of its `options` dict; Event Ordering questions tend to have the most options.
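
The per-question answer space can be recovered directly from `options`. One way to do it is sketched below; note that, depending on how `datasets` infers the JSONL schema, letters a question does not use may appear as `None`-valued keys and should be dropped:

```python
from datasets import load_dataset

ds = load_dataset("Ted412/EgoMemReason")["test"]

def valid_letters(example):
    # Candidate answers are the keys of `options`; drop any letters the
    # loader filled in as None because this question has fewer options.
    return sorted(k for k, v in example["options"].items() if v is not None)

print(valid_letters(ds[0]))  # e.g. ['A', 'B', 'C', 'D']
```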

## How to evaluate

1. Get this dataset:
   ```python
   from datasets import load_dataset
   ds = load_dataset("Ted412/EgoMemReason")["test"]
   ```
2. Get the underlying EgoLife video frames (separate license, see <https://egolife-ai.github.io/>) — we don't redistribute video here.
3. For each item, sample frames from `(identity, query_time)` backwards in time and run your model to pick one letter from `options.keys()`.
4. Format the predictions as a JSON list (a minimal end-to-end sketch follows this list):
   ```json
   [
     {"example_id": 1, "predicted_answer": "A"},
     ...
   ]
   ```
5. Submit it on the leaderboard Space: <https://huggingface.co/spaces/Ted412/EgoMemReason>. Per-split + overall accuracy are computed automatically.
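
The sketch referenced in step 4 wires these steps together with a placeholder model: the `answer_question` function below is hypothetical and simply returns the first available letter, so replace it with your own frame-sampling and inference code. The `query_time` parsing assumes the `"DAY7, 19:00:00"` format shown in the schema above.

```python
import json

from datasets import load_dataset

ds = load_dataset("Ted412/EgoMemReason")["test"]

def parse_query_time(query_time):
    # "DAY7, 19:00:00" -> (7, "19:00:00"); frames should be sampled from
    # this point backwards in the corresponding identity's video.
    day, clock = query_time.split(", ")
    return int(day.removeprefix("DAY")), clock

def answer_question(example):
    # Hypothetical placeholder: always picks the first available letter.
    # A real system would sample frames up to parse_query_time(...) for
    # example["identity"] and run a model over them plus the question text.
    letters = sorted(k for k, v in example["options"].items() if v is not None)
    return letters[0]

predictions = [
    {"example_id": ex["example_id"], "predicted_answer": answer_question(ex)}
    for ex in ds
]

with open("predictions.json", "w") as f:
    json.dump(predictions, f, indent=2)
```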

The reference inference scripts for 12 MLLMs and 5 agentic frameworks (Gemini, GPT-5, Qwen3-VL, InternVL3.5, Molmo2, VideoLLaMA3, InternVideo2.5, LongVA, AVP, Ego-R1, SiLVR, WorldMM, …) live in the [GitHub repo](https://github.com/Ziyang412/EgoMemReason).

## License

- **EgoMemReason annotations** (this dataset): [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) — academic research and benchmarking are permitted; commercial use requires written permission.
- **EgoLife video frames** (not redistributed here): governed by the [EgoLife data license](https://egolife-ai.github.io/) — you must accept their terms separately.

## Citation

```bibtex
@misc{wang2026egomemreasonmemorydrivenreasoningbenchmark,
      title={EgoMemReason: A Memory-Driven Reasoning Benchmark for Long-Horizon Egocentric Video Understanding},
      author={Ziyang Wang and Yue Zhang and Shoubin Yu and Ce Zhang and Zengqi Zhao and Jaehong Yoon and Hyunji Lee and Gedas Bertasius and Mohit Bansal},
      year={2026},
      eprint={2605.09874},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2605.09874},
}
```