---
license: cc-by-nc-4.0
task_categories:
- video-classification
- object-detection
- visual-question-answering
language:
- en
tags:
- egocentric-video
- activity-recognition
- hand-object-interaction
- segmentation
- relation-extraction
- vision-language-models
- benchmark
- coffee
pretty_name: BARISTA
size_categories:
- 100<n<1K
---

# BARISTA
BARISTA is a densely annotated egocentric video dataset of coffee preparation, designed for unified benchmarking of vision-language models across spatial, temporal, relational, and procedural understanding tasks.
The dataset contains **185 egocentric videos** (~4.4 hours, 30 FPS, 1280×720 to 1920×1080) covering three coffee preparation methods: **capsule machines**, **portafilter machines**, and **fully automatic machines**. Videos were recorded in controlled indoor setups using iPhones, Apple Vision Pro, RayBan Meta 3, and RayBan Wayfarer smart glasses.
## Dataset structure
Each video is stored in its own directory:
```
<video_id>/
    coco_annotation.json   # COCO-style annotations (masks, bboxes, attributes, relations, activities)
    video.mp4              # raw video
```
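
Assuming the videos are unpacked so that each `<video_id>` directory sits under a common dataset root, the train/test split assignment can be collected by walking those directories (the helper name and root layout here are illustrative, not part of an official loader):

```python
import json
from pathlib import Path

def collect_splits(dataset_root):
    """Map each video_id to its split by reading the per-video annotation file."""
    splits = {}
    for ann_path in sorted(Path(dataset_root).glob("*/coco_annotation.json")):
        with open(ann_path) as f:
            doc = json.load(f)
        splits[ann_path.parent.name] = doc["split"]
    return splits
```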
`coco_annotation.json` follows the COCO format extended with additional top-level keys:
| Key | Description |
|-----|-------------|
| `annotations` | Per-frame instance annotations. Fields: `id`, `image_id` (0-based frame index), `object_id` (UUID), `bbox` (`[x, y, w, h]`), `segmentation` (COCO RLE with `counts` and `size`), `area` |
| `attributes` | Segment-level key-value attributes per object. Fields: `id`, `object_id`, `attribute_type` (e.g. `"color"`, `"state"`), `value`, `image_ranges` (list of `{image_id_start, image_id_end}`) |
| `relations` | Directed, typed relations between object pairs. Fields: `id`, `source_object_id`, `target_object_id`, `relation_type` (e.g. `"position"`, `"human_actions"`), `value`, `image_ranges` |
| `categories` | Object categories. Fields: `id` (UUID), `name` |
| `activities` | Fine-grained verb+noun activity segments. Fields: `id`, `display_name`, `activity_class_id` (UUID), `image_range` (`{image_id_start, image_id_end}`) |
| `process_steps` | High-level process step segments. Same fields as `activities` |
| `object_id_to_category_id` | Map from object UUID to category UUID (needed to resolve an annotation's `object_id` to its category) |
| `video_metadata` | List with one entry. Fields: `document_id`, `video_index`, `width`, `height`, `frame_count`, `fps`, `length_in_ms`, `recording_device_type`, `recording_device_version` |
| `split` | Dataset split: `"train"` or `"test"` |
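
To make the schema concrete, here is a minimal sketch of two common lookups on a toy in-memory document: resolving an annotation's category via `object_id_to_category_id`, and finding which attribute values of an object are active at a given frame via `image_ranges`. The IDs are placeholders, not real dataset content, and treating `image_id_end` as inclusive is an assumption:

```python
# Toy document following the schema above; all IDs are placeholders.
doc = {
    "categories": [{"id": "cat-1", "name": "portafilter"}],
    "object_id_to_category_id": {"obj-1": "cat-1"},
    "annotations": [
        {"id": 1, "image_id": 0, "object_id": "obj-1",
         "bbox": [10.0, 20.0, 50.0, 40.0], "area": 2000.0},
    ],
    "attributes": [
        {"id": 1, "object_id": "obj-1", "attribute_type": "state",
         "value": "empty",
         "image_ranges": [{"image_id_start": 0, "image_id_end": 120}]},
    ],
}

category_name = {c["id"]: c["name"] for c in doc["categories"]}

def annotation_category(ann):
    """Resolve an annotation to its category name via the object_id mapping."""
    return category_name[doc["object_id_to_category_id"][ann["object_id"]]]

def attributes_at(object_id, frame):
    """Attribute values of an object active at a frame (inclusive range ends assumed)."""
    return {
        a["attribute_type"]: a["value"]
        for a in doc["attributes"]
        if a["object_id"] == object_id
        and any(r["image_id_start"] <= frame <= r["image_id_end"]
                for r in a["image_ranges"])
    }

print(annotation_category(doc["annotations"][0]))  # -> portafilter
print(attributes_at("obj-1", 60))                  # -> {'state': 'empty'}
```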
## Loading the data and running evaluations
See the [project repository](https://github.com/Ramblr-GmbH/BARISTA) for the dataset loader and the VLM benchmarking pipeline.