---
license: cc-by-nc-nd-4.0
task_categories:
- text-to-audio
size_categories:
- 1M<n<10M
pretty_name: AudioX-IFcaps
gated: true
extra_gated_prompt: >-
  Please fill out the form below to request access. We will review your request
  within 2-3 business days.
extra_gated_fields:
  Full Name: text
  Email Address: text
  Affiliation (University / Company / Institute): text
  Position / Title:
    type: select
    options:
      - Professor / Researcher
      - PhD Student
      - Master's Student
      - Industry Researcher / Engineer
      - label: Other
        value: other
  Country: country
  Intended Use Case: text
  I agree to use this dataset for non-commercial research purposes only: checkbox
  I agree not to redistribute or share the dataset with third parties: checkbox
extra_gated_heading: Request Access to This Dataset
extra_gated_description: >-
  This dataset is gated. Please provide your affiliation and intended use to
  help us review your request.
extra_gated_button_content: Submit Access Request
---
# [ICLR 2026] AudioX-IFcaps: Instruction-Following Audio Caption Dataset
AudioX-IFcaps (Instruction-Following) is a large-scale, high-quality multimodal dataset designed for training unified audio and music generation models. The dataset contains over 7 million samples with fine-grained, structured annotations that enable precise control over audio generation, including sound event categories, counts, temporal ordering, and timestamps.
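The structured annotations described above (event categories, counts, temporal ordering, and timestamps) lend themselves to programmatic filtering. The following is a minimal sketch of how such a record could be consumed; the field names (`caption`, `events`, `label`, `count`, `start`, `end`) are a hypothetical schema for illustration, not the dataset's actual format.

```python
# Hypothetical record layout: the actual AudioX-IFcaps schema may differ.
# Each sample is assumed to carry a caption plus timestamped event entries.
sample = {
    "caption": "A dog barks twice, then a car passes by.",
    "events": [
        {"label": "car passing", "count": 1, "start": 2.4, "end": 6.0},
        {"label": "dog bark", "count": 2, "start": 0.5, "end": 2.1},
    ],
}


def events_in_order(record):
    """Return event labels sorted by their start timestamp."""
    return [e["label"] for e in sorted(record["events"], key=lambda e: e["start"])]


print(events_in_order(sample))  # ['dog bark', 'car passing']
```

Sorting on the `start` field recovers the temporal ordering that the captions express in prose, which is the kind of fine-grained control signal the dataset is built around.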
## Dataset Statistics
- General Audio: ~1.3M 10-second video-audio clips
- Music: ~5.7M 10-second video-music clips
- Total Duration: ~16K hours of audio content
## Citation
If you use this dataset in your research, please cite:
```bibtex
@article{tian2025audiox,
  title={AudioX: Diffusion transformer for anything-to-audio generation},
  author={Tian, Zeyue and Jin, Yizhu and Liu, Zhaoyang and Yuan, Ruibin and Tan, Xu and Chen, Qifeng and Xue, Wei and Guo, Yike},
  journal={arXiv preprint arXiv:2503.10522},
  year={2025}
}

@inproceedings{tian2025vidmuse,
  title={VidMuse: A simple video-to-music generation framework with long-short-term modeling},
  author={Tian, Zeyue and Liu, Zhaoyang and Yuan, Ruibin and Pan, Jiahao and Liu, Qifeng and Tan, Xu and Chen, Qifeng and Xue, Wei and Guo, Yike},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={18782--18793},
  year={2025}
}
```
## Related Resources
- Paper: AudioX: Diffusion Transformer for Anything-to-Audio Generation (Accepted to ICLR 2026)
- Project Page: https://zeyuet.github.io/AudioX/
- Code: GitHub Repository
**Note:** This dataset is part of the AudioX project. For more information, please refer to the paper and project page.