Add video-text-to-text task category and usage instructions
Hi! I'm Niels, part of the community science team at Hugging Face.
I'm opening this PR to improve the dataset card for VideoCUA. The changes include:
- Adding the `video-text-to-text` task category to the YAML metadata, which improves discoverability on the Hugging Face Hub.
- Adding a "Usage" section with code snippets derived from the official GitHub repository to guide users through downloading and processing the raw video data into trajectories.
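
For reference, the README front matter after this change would read roughly as follows. This is reconstructed from the PR diff; tags not shown in the diff context are elided rather than guessed:

```yaml
---
language:
- en
license: mit
size_categories:
- 10K<n<100K
task_categories:
- video-text-to-text
tags:
- GUI
- CUA
# ... remaining tags unchanged
---
```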
README.md CHANGED

````diff
@@ -1,5 +1,11 @@
 ---
+language:
+- en
 license: mit
+size_categories:
+- 10K<n<100K
+task_categories:
+- video-text-to-text
 tags:
 - GUI
 - CUA
@@ -9,10 +15,6 @@ tags:
 - computer-use
 - video-demonstrations
 - desktop-automation
-language:
-- en
-size_categories:
-- 10K<n<100K
 ---
 
 <p align="center">
@@ -46,9 +48,36 @@ Unlike sparse screenshot datasets, VideoCUA preserves the full temporal dynamics
 
 VideoCUA is part of [CUA-Suite](https://cua-suite.github.io/), a unified ecosystem that also includes:
 
-- [**UI-Vision**](https://uivision.github.io/) — A
+- [**UI-Vision**](https://uivision.github.io/) — A desktop-centric benchmark evaluating element grounding, layout understanding, and action prediction.
 - [**GroundCUA**](https://groundcua.github.io/) — A large-scale pixel-precise UI grounding dataset with 5M+ human-verified element annotations.
 
+## Usage
+
+To process the raw video data and action logs into trajectories for training or evaluation, you can use the synthesis pipeline provided in the [GitHub repository](https://github.com/ServiceNow/GroundCUA/tree/main/VideoCUA).
+
+### 1. Download & Extract
+```bash
+bash download_data.sh --repo ServiceNow/VideoCUA --output_dir ./VideoCUA
+```
+
+### 2. Convert to Trace Format
+To extract video frames at each action timestamp and produce standardized trajectories:
+```bash
+python convert_videocua.py \
+    --data_dir ./VideoCUA/data \
+    --output_dir ./videocua_processed \
+    --num_workers 4
+```
+
+### 3. Generate CoT Annotations
+```bash
+python gen_cot.py \
+    --task_list_path ./videocua_processed/task_list.json \
+    --model claude-sonnet-4.5 \
+    --num_threads 4 \
+    --suffix cot_v1
+```
+
 ## Repository Structure
 
 ```
@@ -99,7 +128,7 @@ Each application zip in `raw_data/` contains multiple task folders identified by
 }
 ```
 
-Each action entry includes a `groundcua_id` field — this is the unique identifier for the corresponding screenshot in the [GroundCUA](https://huggingface.co/datasets/ServiceNow/GroundCUA) repository. Using this ID, you can look up the fully annotated screenshot
+Each action entry includes a `groundcua_id` field — this is the unique identifier for the corresponding screenshot in the [GroundCUA](https://huggingface.co/datasets/ServiceNow/GroundCUA) repository. Using this ID, you can look up the fully annotated screenshot in GroundCUA, linking the video action trajectory to dense UI grounding annotations.
 
 ## Citation
 
````
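
The `groundcua_id` cross-reference added to the card can be sketched in a few lines of Python. This is a minimal illustration against a made-up action-log excerpt: the field layout here is an assumption for demonstration only, with just the `groundcua_id` field itself taken from the dataset card.

```python
import json

# Hypothetical action-log excerpt; the real schema in VideoCUA's raw_data
# may differ -- only the `groundcua_id` field is documented on the card.
action_log = json.loads("""
{
  "task_id": "task_0001",
  "actions": [
    {"type": "click", "timestamp": 3.21, "groundcua_id": "gc_00042"},
    {"type": "type",  "timestamp": 5.87, "groundcua_id": "gc_00043"}
  ]
}
""")

def groundcua_ids(log: dict) -> list[str]:
    """Collect the GroundCUA screenshot IDs referenced by one trajectory."""
    return [action["groundcua_id"] for action in log.get("actions", [])]

print(groundcua_ids(action_log))  # -> ['gc_00042', 'gc_00043']
```

Each returned ID could then be used to locate the corresponding fully annotated screenshot in the GroundCUA repository.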