---
license: cc-by-nc-4.0
language:
- en
task_categories:
- question-answering
- visual-question-answering
task_ids:
- multiple-choice-qa
pretty_name: EgoMemReason
size_categories:
- n<1K
tags:
- egocentric-video
- long-video-understanding
- memory
- multimodal
- benchmark
- video-qa
configs:
- config_name: default
  data_files:
  - split: test
    path: annotations_public.json
---

# EgoMemReason

**A Memory-driven Reasoning Benchmark for Long-Horizon Egocentric Video Understanding.**

500 multiple-choice questions over **week-long egocentric video** (built on [EgoLife](https://egolife-ai.github.io/)) that evaluate three complementary kinds of memory:

- **Entity memory** — track how object states evolve across days
- **Event memory** — recall and order activities separated by hours or days
- **Behavior memory** — abstract recurring patterns from sparse, repeated observations

On average, each question requires **5.1 evidence segments** and **25.9 hours of memory backtracking**, roughly 2× both figures for the strongest prior week-long benchmark.

## Links

- 🧠 **Leaderboard (HF Space):** <https://huggingface.co/spaces/Ted412/EgoMemReason>
- 💻 **Code & reference eval scripts:** <https://github.com/Ziyang412/EgoMemReason>
- 🌐 **Project page:** <https://Ziyang412.github.io/EgoMemReason>
- 🎬 **EgoLife video frames (separate license):** <https://egolife-ai.github.io/>
- 📄 **Paper:** *coming soon*

## Composition

| Memory type | Capability (`query_type`) | # Qs |
|---|---|---:|
| Entity | Cumulative State Tracking | 100 |
| Entity | Temporal Counting | 100 |
| Event | Event Ordering | 100 |
| Event | Event Linking | 100 |
| Behavior | Spatial Preference | 50 |
| Behavior | Activity Pattern | 50 |
| **Total** | | **500** |
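
The per-capability counts can be sanity-checked locally. A minimal sketch, assuming `load_dataset` resolves the `annotations_public.json` config declared in the YAML header:

```python
from collections import Counter

from datasets import load_dataset

# Count questions per capability; totals should match the table above.
ds = load_dataset("Ted412/EgoMemReason")["test"]
print(Counter(ex["query_type"] for ex in ds))
```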

## Schema

This dataset releases the **public** version — questions and options only, no answer keys (the held-out answer key lives in a private dataset, and submissions are scored against it by the leaderboard Space).

```json
{
  "example_id": 1,
  "p_id": "A1_JAKE_DAY7_19_00_00",
  "identity": "A1_JAKE",
  "query_time": "DAY7, 19:00:00",
  "question": "What do I most often eat for breakfast?",
  "options": {
    "A": "Pancake",
    "B": "Rice",
    "C": "Burger",
    "D": "Dumplings"
  },
  "query_type": "Activity Pattern"
}
```

Note that **questions have 4-10 options** (letters A-J). The valid answer set for any given question is the keys of its `options` dict; Event Ordering questions tend to have the most options.
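
A minimal sketch for recovering each question's candidate letters; note that `datasets` may widen the variable-key `options` dict into a fixed struct whose absent letters map to `None`, so the filter below is safe under either behavior:

```python
from datasets import load_dataset

ds = load_dataset("Ted412/EgoMemReason")["test"]

ex = ds[0]
# Keep only letters that actually carry an option text.
valid_letters = [k for k, v in ex["options"].items() if v is not None]
print(ex["question"], valid_letters)
```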

## How to evaluate

1. Get this dataset:
   ```python
   from datasets import load_dataset
   ds = load_dataset("Ted412/EgoMemReason")["test"]
   ```
2. Get the underlying EgoLife video frames (separate license, see <https://egolife-ai.github.io/>) — we don't redistribute video here.
3. For each item, sample frames from `(identity, query_time)` backwards in time and run your model to pick one letter from `options.keys()` (see the sketch after this list).
4. Format the predictions as a JSON list:
   ```json
   [
     {"example_id": 1, "predicted_answer": "A"},
     ...
   ]
   ```
5. Submit it on the leaderboard Space: <https://huggingface.co/spaces/Ted412/EgoMemReason>. Per-split and overall accuracy are computed automatically.
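
A minimal, non-authoritative sketch of steps 3-4. `answer_question` is a hypothetical placeholder for your own model call (it is not part of this dataset or the reference scripts), and `parse_query_time` simply splits the `"DAY7, 19:00:00"` format shown in the schema:

```python
import json

from datasets import load_dataset

ds = load_dataset("Ted412/EgoMemReason")["test"]

def parse_query_time(s):
    # "DAY7, 19:00:00" -> (7, "19:00:00")
    day, clock = s.split(", ")
    return int(day.removeprefix("DAY")), clock

def answer_question(item):
    # Hypothetical placeholder: sample frames for item["identity"]
    # backwards from parse_query_time(item["query_time"]), run your
    # MLLM, and return one letter from the non-empty option keys.
    raise NotImplementedError

predictions = [
    {"example_id": item["example_id"], "predicted_answer": answer_question(item)}
    for item in ds
]

# Write the submission file expected by the leaderboard Space (step 5).
with open("predictions.json", "w") as f:
    json.dump(predictions, f, indent=2)
```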

The reference inference scripts for 12 MLLMs and 5 agentic frameworks (Gemini, GPT-5, Qwen3-VL, InternVL3.5, Molmo2, VideoLLaMA3, InternVideo2.5, LongVA, AVP, Ego-R1, SiLVR, WorldMM, …) live in the [GitHub repo](https://github.com/Ziyang412/EgoMemReason).

## License

- **EgoMemReason annotations** (this dataset): [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) — academic research and benchmarking are permitted; commercial use requires written permission.
- **EgoLife video frames** (not redistributed here): governed by the [EgoLife data license](https://egolife-ai.github.io/) — you must accept their terms separately.

## Citation

```bibtex
@article{wang2026egomemreason,
  title   = {EgoMemReason: A Memory-driven Reasoning Benchmark for
             Long-Horizon Egocentric Video Understanding},
  author  = {Wang, Ziyang and Zhang, Yue and Yu, Shoubin and Zhang, Ce and
             Zhao, Zengqi and Yoon, Jaehong and Lee, Hyunji and
             Bertasius, Gedas and Bansal, Mohit},
  year    = {2026},
  journal = {arXiv preprint}
}
```