---
license: other
license_name: nvidia-evaluation-data-license
license_link: LICENSE
---

# VANTAGE-BENCH

*Video ANalysis Tasks Across Generalized Environments*

## Dataset Description

VANTAGE-BENCH is the first public benchmark purpose-built for evaluating visual understanding on video captured by fixed infrastructure cameras. It spans three real-world domains (warehouse, smart city / Intelligent Transportation Systems (ITS), and smart spaces) and six spatio-temporal video understanding tasks: video question answering (VQA), temporal grounding, dense video captioning, event verification, spatial grounding, and spatio-temporal tracking.

This dataset is for evaluation purposes only.

## Dataset Owner(s)

NVIDIA Corporation

## Dataset Creation Date

April 24, 2026

## License/Terms of Use

This dataset is released under the [NVIDIA Evaluation Data License](LICENSE).

## Dataset Characterization

**Data Collection Method**<br>
Hybrid: Human, Synthetic, Automated. Video data is sourced from vendor-provided footage (GoPro captures of warehouse and smart space environments), synthetic generation (DriveSim collision and multi-camera scenarios), and publicly scraped sources (Dubuque highway/ITS footage).

**Labeling Method**<br>
Hybrid: Human, Synthetic, Pseudolabeled. Annotations for VQA, dense video captions, and temporal localization are primarily human-authored. Spatial grounding labels (2D/3D bounding boxes, referring expressions) use a combination of human annotation and pseudolabeling pipelines (detection + SAM for spatial pointing). Event verification labels are human-curated. Annotations are held server-side for evaluation only.

### Directory Structure

```text
VANTAGE-BENCH/
├── vqa/                    # Video question answering
├── dense_captioning/       # Dense video captioning
├── temporal_localization/  # Temporal localization
├── event_verification/     # Event verification
├── 2dbbox/                 # 2D object localization
├── referring/              # 2D referring expressions
├── pointing/               # 2D spatial pointing
├── tracking/               # Spatio-temporal tracking
└── README.md               # Dataset documentation and submission instructions
```
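
For quick orientation, the hypothetical sketch below counts the media files under each task folder. It assumes the dataset has been extracted to a local `VANTAGE-BENCH/` directory; the nesting of files inside each task folder is not specified by this card, so the recursive glob is an assumption.

```python
# Count media files per task folder (unofficial convenience sketch).
from pathlib import Path

ROOT = Path("VANTAGE-BENCH")  # hypothetical local extraction path
TASKS = ["vqa", "dense_captioning", "temporal_localization",
         "event_verification", "2dbbox", "referring", "pointing", "tracking"]

for task in TASKS:
    task_dir = ROOT / task
    # Videos ship as .mp4 and images as .jpg (see Dataset Format below).
    n_videos = sum(1 for _ in task_dir.rglob("*.mp4"))
    n_images = sum(1 for _ in task_dir.rglob("*.jpg"))
    print(f"{task}: {n_videos} videos, {n_images} images")
```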

## Evaluation

### Tasks and Submission Formats

| Category | Task | Expected Submission Format | Metric |
|----------|------|----------------------------|--------|
| Semantic | VQA | JSON (question-answer pairs) | Accuracy |
| Semantic | Event Verification | Binary labels per video/image (JSON) | F1 Score |
| Temporal | Dense Video Captioning | Timestamped captions (JSON) | SODA-c |
| Temporal | Temporal Localization | Temporal segments with event labels (JSON) | mAP@tIoU |
| Spatial | 2D Object Localization | KITTI format | mAP@IoU |
| Spatial | 2D Referring Expressions | Bounding box predictions (JSON) | Acc@IoU |
| Spatial | 2D Spatial Pointing | Point coordinates (JSON) | Pointing Accuracy |
| Spatial | Spatio-Temporal Tracking | MOT-compatible format | HOTA |
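
As an illustration of the JSON-based formats, the sketch below assembles a VQA submission file. The exact schema (field names such as `question_id` and `answer`) is an assumption, not the official format; check the per-task documentation before submitting.

```python
# Hypothetical VQA submission file (schema is illustrative, not official).
import json

predictions = [
    {"question_id": "vqa_0001", "answer": "forklift"},    # made-up IDs
    {"question_id": "vqa_0002", "answer": "two workers"},
]

with open("vqa_predictions.json", "w") as f:
    json.dump(predictions, f, indent=2)
```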

### Metric Notes

- **Accuracy**: Percentage of correct predictions.
- **SODA-c**: Dense video captioning metric that jointly scores event coverage and caption quality.
- **mAP@tIoU**: Mean Average Precision measured over temporal IoU thresholds.
- **F1 Score**: Harmonic mean of precision and recall.
- **mAP@IoU**: Mean Average Precision measured over spatial IoU thresholds.
- **Acc@IoU**: A grounding counts as correct if the predicted box overlaps the target above an IoU threshold.
- **Pointing Accuracy**: Percentage of correctly selected target regions.
- **HOTA**: Higher Order Tracking Accuracy, combining detection and association quality.
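
To make the threshold-based definitions concrete, here are minimal, unofficial reference implementations of the simpler quantities (box IoU, Acc@IoU, temporal IoU, and the pointing hit test). Scoring is performed server-side; the corner-format boxes, second-based segments, and one-to-one pairing of predictions to targets assumed here are illustrative conventions only.

```python
def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2) corners (assumed convention)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def acc_at_iou(pred_boxes, gt_boxes, thresh=0.5):
    """Acc@IoU under an assumed one-to-one pairing of predictions and targets."""
    hits = sum(box_iou(p, g) >= thresh for p, g in zip(pred_boxes, gt_boxes))
    return hits / len(gt_boxes)

def temporal_iou(a, b):
    """tIoU of two (start, end) segments in seconds."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

def pointing_hit(point, box):
    """A predicted (x, y) point scores if it falls inside the target box."""
    x, y = point
    return box[0] <= x <= box[2] and box[1] <= y <= box[3]
```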

### Evaluation Server

Predictions are submitted to the evaluation server hosted on HuggingFace. The server computes metrics against held-out annotations and updates the public leaderboard.

Evaluation server: TBD

## Dataset Format

Videos (`.mp4`) and images (`.jpg`).

## Dataset Quantification

| Category | Task | Videos/Images | Entries |
|----------|------|---------------|---------|
| Semantic | VQA | 296 videos | 1,257 |
| Semantic | Event Verification | TBD | TBD |
| Temporal | Dense Video Captioning | 104 videos | 717 |
| Temporal | Temporal Localization | 221 videos | 1,280 |
| Spatial | 2D Object Localization | 3 videos | 27,404 bounding boxes (628 frames) |
| Spatial | 2D Referring Expressions | 1,503 images | 3,276 expressions |
| Spatial | 2D Spatial Pointing | 1,005 videos | 5,018 images |
| Spatial | Spatio-Temporal Tracking | 200 clips (8 frames/clip) | 200 objects, 1,600 frames |

**Total unique videos:** 312 (309 across VQA/DVC/Temporal + 3 exclusive to 2D Object Localization)<br>
**Total entries (VQA + DVC + Temporal):** 3,254<br>
**Total data storage:** 42 GB

## References

```bibtex
@inproceedings{Fujita2020SODA,
  author    = {Soichiro Fujita and Tsutomu Hirao and Hidetaka Kamigaito and Manabu Okumura and Masaaki Nagata},
  title     = {{SODA}: Story Oriented Dense Video Captioning Evaluation Framework},
  booktitle = {Proc. ECCV},
  year      = {2020}
}

@inproceedings{Fu2024BLINK,
  author    = {Xingyu Fu and Yushi Hu and Bangzheng Li and Yu Feng and Haoyu Wang and Xudong Lin and Dan Roth and Noah A. Smith and Wei-Chiu Ma and Ranjay Krishna},
  title     = {{BLINK}: Multimodal Large Language Models Can See but Not Perceive},
  booktitle = {Proc. ECCV},
  year      = {2024}
}

@article{Sun2025RefDrone,
  author  = {Zhichao Sun and Yuda Zou and Xian Sun and Yingchao Feng and Wenhui Diao and Menglong Yan and Kun Fu},
  title   = {{RefDrone}: A Challenging Benchmark for Referring Expression Comprehension in Drone Scenes},
  journal = {arXiv preprint arXiv:2502.00392},
  year    = {2025}
}
```

**TBD:** Update with VANTAGE-BENCH paper citation and final HuggingFace repo link.

## Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When this dataset is downloaded or used in accordance with our terms of service, developers should work with their internal teams to ensure it meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report quality or risk issues, security vulnerabilities, or NVIDIA AI concerns [here](https://app.intigriti.com/programs/nvidia/nvidiavdp/detail).

## Changelog

- **2026-04-14:** Initial dataset release.

## Notes & Known Issues

- Ground truth annotations are not publicly released. All evaluation is performed server-side.
- Some warehouse videos are concatenated clips from longer recording sessions.