---
license: mit
homepage: https://microsoft.github.io/AVGen-Bench/
task_categories:
- text-to-video
- text-to-audio
configs:
- config_name: default
  data_files:
  - split: train
    path: metadata.parquet
---

# AVGen-Bench Generated Videos Data Card
## Overview

This data card describes the generated audio-video outputs stored in the repository root, organized by model directory.

The collection is intended for **benchmarking and qualitative/quantitative evaluation** of text-to-audio-video (T2AV) systems. It was presented in the paper [AVGen-Bench: A Task-Driven Benchmark for Multi-Granular Evaluation of Text-to-Audio-Video Generation](https://arxiv.org/abs/2604.08540). It is not a training dataset. Each item is a model-generated video produced from a prompt defined in `prompts/*.json`.
[Project Page](http://aka.ms/avgenbench)
[Code](https://github.com/microsoft/AVGen-Bench)
[Paper](https://arxiv.org/abs/2604.08540)

For Hugging Face Hub compatibility, the repository includes a root-level `metadata.parquet` file so the Dataset Viewer can expose each video as a structured row with prompt metadata instead of treating the repo as an unindexed file dump.
The relative video path is stored as a plain string column (`video_path`) rather than a media-typed `file_name` column, which avoids current Dataset Viewer post-processing failures on video rows.
## Sample Usage

As described in the GitHub repository, you can generate videos from the benchmark prompts using the following command:

```bash
python batch_generate.py \
  --provider sora2 \
  --task_type video_generation \
  --prompts_dir ./prompts \
  --out_dir ./generated_videos/sora2 \
  --concurrency 2 \
  --seconds 12 \
  --size 1280x720
```
## What This Dataset Contains

The dataset is organized by:

1. Model directory
2. Video category
3. Generated `.mp4` files

A typical top-level structure is:

```text
AVGen-Bench/
├── Kling_2.6/
├── LTX-2/
├── LTX-2.3/
├── MOVA_360p_Emu3.5/
├── MOVA_360p_NanoBanana_2/
├── Ovi_11/
├── Seedance_1.5_pro/
├── Sora_2/
├── Veo_3.1_fast/
├── Veo_3.1_quality/
├── Wan_2.2_HunyuanVideo-Foley/
├── Wan_2.6/
├── metadata.parquet
├── prompts/
└── reference_image/   # optional, depending on generation pipeline
```
Within each model directory, videos are grouped by category, for example:

```text
Veo_3.1_fast/
├── ads/
├── animals/
├── asmr/
├── chemical_reaction/
├── cooking/
├── gameplays/
├── movie_trailer/
├── musical_instrument_tutorial/
├── news/
├── physical_experiment/
└── sports/
```
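Given this layout, the files for one model can be enumerated with a short script. A minimal sketch (the `list_videos` helper and the local root path are illustrative, not part of the repository):

```python
from pathlib import Path

def list_videos(root, model, category=None):
    """Yield repo-relative paths of generated .mp4 files for one model.

    If `category` is given, only that subdirectory is scanned; otherwise
    every category folder under the model directory is traversed.
    """
    base = Path(root) / model
    pattern = f"{category}/*.mp4" if category else "*/*.mp4"
    for mp4 in sorted(base.glob(pattern)):
        yield mp4.relative_to(root)
```

For example, `list_videos("AVGen-Bench", "Veo_3.1_fast", "asmr")` would yield paths of the form `Veo_3.1_fast/asmr/<name>.mp4`.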
## Prompt Coverage

Prompt definitions are stored in `prompts/*.json`.

The current prompt set contains **235 prompts** across **11 categories**:

| Category | Prompt count |
|---|---:|
| `ads` | 20 |
| `animals` | 20 |
| `asmr` | 20 |
| `chemical_reaction` | 20 |
| `cooking` | 20 |
| `gameplays` | 20 |
| `movie_trailer` | 20 |
| `musical_instrument_tutorial` | 35 |
| `news` | 20 |
| `physical_experiment` | 20 |
| `sports` | 20 |
Prompt JSON entries typically contain:

- `content`: a short content descriptor used for naming or indexing
- `prompt`: the full generation prompt
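Reading these entries needs only the standard library. A minimal sketch, using an inline stand-in for a category file (the surrounding structure — a top-level JSON list — and the sample entry are assumptions, not taken from the repository):

```python
import json

# Stand-in for the contents of prompts/sports.json: a top-level list of entries.
raw = """
[
  {"content": "slam dunk in a packed arena",
   "prompt": "A basketball player performs a slam dunk while the crowd roars."}
]
"""

entries = json.loads(raw)
for prompt_id, entry in enumerate(entries):
    # `content` drives the output filename; `prompt` is sent to the generator.
    print(prompt_id, entry["content"])
```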
## Data Instance Format

Each generated item is typically:

- A single `.mp4` file
- Containing model-generated video and, when supported by the model/pipeline, synthesized audio
- Stored under `<model>/<category>/`

The filename is usually derived from prompt content after sanitization. Exact naming may vary by generation script or provider wrapper.
In the standard export pipeline, the filename is derived from the prompt's `content` field using the following logic:
```python
import re

def safe_filename(name: str, max_len: int = 180) -> str:
    name = str(name).strip()
    # Replace path separators, reserved characters, and control whitespace.
    name = re.sub(r'[/\\:*?"<>|\n\r\t]', "_", name)
    # Collapse runs of whitespace into single spaces.
    name = re.sub(r"\s+", " ", name).strip()
    if not name:
        name = "untitled"
    if len(name) > max_len:
        name = name[:max_len].rstrip()
    return name
```
So the expected output path pattern is:

```text
<model>/<category>/<safe_filename(content)>.mp4
```
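Combining the sanitizer with this pattern, path construction can be sketched as follows (`safe_filename` is reproduced from the snippet above; the example model, category, and content values are illustrative):

```python
import re
from pathlib import PurePosixPath

def safe_filename(name: str, max_len: int = 180) -> str:
    # Same sanitization logic as the export pipeline shown above.
    name = str(name).strip()
    name = re.sub(r'[/\\:*?"<>|\n\r\t]', "_", name)
    name = re.sub(r"\s+", " ", name).strip()
    if not name:
        name = "untitled"
    if len(name) > max_len:
        name = name[:max_len].rstrip()
    return name

def video_path(model: str, category: str, content: str) -> str:
    """Build the relative path <model>/<category>/<safe_filename(content)>.mp4."""
    return str(PurePosixPath(model) / category / f"{safe_filename(content)}.mp4")

print(video_path("Sora_2", "cooking", "whisking eggs: step 1?"))
# -> Sora_2/cooking/whisking eggs_ step 1_.mp4
```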
For Dataset Viewer indexing, `metadata.parquet` stores one row per exported video with:

- `video_path`: relative path to the `.mp4`, stored as a plain string
- `model`: model directory name
- `category`: benchmark category
- `content`: prompt short name
- `prompt`: full generation prompt
- `prompt_id`: index inside `prompts/<category>.json`
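The index can then be sliced per model or category with pandas. A sketch against an in-memory stand-in (with the real repository you would start from `pd.read_parquet("metadata.parquet")`; the rows below are illustrative):

```python
import pandas as pd

# Stand-in for pd.read_parquet("metadata.parquet"), with the documented columns.
df = pd.DataFrame(
    {
        "video_path": ["Sora_2/sports/dunk.mp4", "Veo_3.1_fast/news/anchor.mp4"],
        "model": ["Sora_2", "Veo_3.1_fast"],
        "category": ["sports", "news"],
        "content": ["dunk", "anchor"],
        "prompt": ["A basketball dunk...", "A news anchor..."],
        "prompt_id": [0, 0],
    }
)

# Select all rows generated by one model.
sora_rows = df[df["model"] == "Sora_2"]
print(sora_rows["video_path"].tolist())
```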
## How The Data Was Produced

The videos were generated by running different T2AV systems on a shared benchmark prompt set.

Important properties:

- All systems are evaluated against the same category structure
- Outputs are model-generated rather than human-recorded
- Different models may expose different generation settings, resolutions, or conditioning mechanisms
- Some pipelines may additionally use first-frame or reference-image inputs, depending on the underlying model
## Intended Uses

This dataset is intended for:

- Benchmarking T2AV generation systems
- Running AVGen-Bench evaluation scripts
- Comparing failure modes across models
- Qualitative demo curation
- Error analysis by category or prompt type
## Out-of-Scope Uses

This dataset is not intended for:

- Training a general-purpose video generation model
- Treating model outputs as factual evidence of real-world events
- Safety certification of a model without additional testing
- Any claim that benchmark performance fully captures downstream deployment quality
## Known Limitations

- Outputs are synthetic and inherit the biases and failure modes of the generating models
- Some categories emphasize benchmark stress-testing rather than natural real-world frequency
- File availability may vary across models if a generation job failed, timed out, or was filtered
- Different model providers enforce different safety and moderation policies; some prompts may be rejected during provider-side review, which can lead to missing videos for specific models even when the prompt exists in the benchmark
## Risks and Responsible Use

Because these are generated videos:

- Visual realism does not imply factual correctness
- Audio may contain artifacts, intelligibility failures, or misleading synchronization
- Generated content may reflect stereotypes, implausible causal structure, or unsafe outputs inherited from upstream models

Anyone redistributing results should clearly label them as synthetic model outputs.
## Citation

If you find AVGen-Bench useful, please cite:

```bibtex
@misc{zhou2026avgenbenchtaskdrivenbenchmarkmultigranular,
      title={AVGen-Bench: A Task-Driven Benchmark for Multi-Granular Evaluation of Text-to-Audio-Video Generation},
      author={Ziwei Zhou and Zeyuan Lai and Rui Wang and Yifan Yang and Zhen Xing and Yuqing Yang and Qi Dai and Lili Qiu and Chong Luo},
      year={2026},
      eprint={2604.08540},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2604.08540},
}
```