Update dataset card metadata, paper link and usage (#2), opened by nielsr (HF Staff)

README.md (CHANGED):
````diff
@@ -1,18 +1,20 @@
 ---
-license: cc-by-nc-sa-4.0
 language:
 - en
+license: cc-by-nc-sa-4.0
+task_categories:
+- other
 ---
 
 # OWM Benchmark
 
 [](https://compvis.github.io/myriad)
-[](https://
+[](https://huggingface.co/papers/2604.09527)
 [](https://huggingface.co/CompVis/myriad)
 
 ## Abstract
 
-The OWM benchmark was proposed in the paper [Envisioning the Future, One Step at a Time](
+The OWM benchmark was proposed in the paper [Envisioning the Future, One Step at a Time](https://huggingface.co/papers/2604.09527) and used to evaluate the [MYRIAD](https://huggingface.co/CompVis/myriad/) model.
 
 OWM is a benchmark of 95 curated videos with motion annotations, with the distribution of motion constrained to enable the evaluation of probabilistic motion prediction methods.
 Videos are obtained from Pexels ([Pexels License](https://www.pexels.com/license/)). We manually annotate relevant objects and the type of motion observed. We use an off-the-shelf tracker to obtain motion trajectories and manually verify correctness.
@@ -24,21 +26,23 @@ Videos are obtained from Pexels ([Pexels License](https://www.pexels.com/license
 
 
 
-*OWM samples
+*OWM samples include complex real-world scenes with different motion types and complexities.*
 
 ## Usage
 
 We provide code to run the OWM evaluation in our [GitHub repository](https://github.com/CompVis/flow-poke-transformer).
 
-To run the evaluation, first download the data by running `hf download CompVis/owm-95 --repo-type dataset`
+To run the evaluation, first download the data by running `hf download CompVis/owm-95 --repo-type dataset`.
+
+Then run the evaluation script via:
 ```shell
-python -m scripts.
+python -m scripts.myriad_eval.openset_prediction --data-root path/to/data --ckpt-path path/to/checkpoint --dataset-name owm
 ```
 
 ## License
 
 - Videos are sourced from Pexels and thus licensed under the [Pexels License](https://www.pexels.com/license/)
-- Metadata and motion annotations are provided under the [CC-BY-NC-SA-
+- Metadata and motion annotations are provided under the [CC-BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en) license
 
 ## Citation
 
````
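The two usage steps in the updated card, downloading the dataset and running the evaluation, can be scripted together. A minimal sketch that only assembles the exact commands shown in the card; the data and checkpoint paths are placeholders you must fill in, and the evaluation command must be run from a checkout of the flow-poke-transformer repository:

```python
import shlex

# Taken verbatim from the dataset card; only the paths are placeholders.
HF_REPO = "CompVis/owm-95"
EVAL_MODULE = "scripts.myriad_eval.openset_prediction"

def build_commands(data_root: str, ckpt_path: str):
    """Return the download and evaluation commands as argument lists."""
    download = ["hf", "download", HF_REPO, "--repo-type", "dataset"]
    evaluate = [
        "python", "-m", EVAL_MODULE,
        "--data-root", data_root,
        "--ckpt-path", ckpt_path,
        "--dataset-name", "owm",
    ]
    return download, evaluate

download_cmd, eval_cmd = build_commands("path/to/data", "path/to/checkpoint")
print(shlex.join(download_cmd))  # hf download CompVis/owm-95 --repo-type dataset
print(shlex.join(eval_cmd))
# To actually execute: subprocess.run(download_cmd, check=True), then
# subprocess.run(eval_cmd, check=True) inside the repository checkout.
```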