---
license: cc-by-nc-4.0
language:
- zh
- en
viewer: false
---
[Paper](https://arxiv.org/abs/2604.22245)
# LAT-Bench
LAT-Bench is the first benchmark designed for evaluating **temporal awareness in long-form audio understanding**.
Unlike existing benchmarks limited to short clips, LAT-Bench supports **audio durations up to 30 minutes**, enabling evaluation under realistic long-form scenarios.
The benchmark covers three core tasks:
- **Dense Audio Captioning (DAC)**: generate temporally grounded descriptions over the full audio
- **Temporal Audio Grounding (TAG)**: localize relevant time spans for a given query
- **Targeted Audio Captioning (TAC)**: produce descriptions for specific temporal segments
LAT-Bench contains approximately **40 hours of long-form audio**, including:
- **25 hours in Chinese**
- **15 hours in English**
The dataset spans diverse real-world scenarios, including conversations, lifestyle vlogs, and educational content, among others.
## Data Distribution
<p align="center">
<img src="./Figures/bench-figure.png" width="600"/>
<em>Figure 1: Duration and scenario distributions of LAT-Bench across Chinese and English.</em>
</p>
LAT-Bench exhibits balanced coverage across duration ranges and scenarios, ensuring robust evaluation under diverse long-form settings.
<p align="center">
<em>Table 1: Temporal annotation statistics of LAT-Bench across DAC, TAG, and TAC tasks.</em>
<img src="./Figures/bench-table.png" width="600"/>
</p>
The annotations provide comprehensive temporal coverage across the beginning, middle, and end of audio sequences.
## Data Organization
LAT-Bench is organized into two types of files: **metadata files** and **task files**.
### Metadata Files
- `./meta/bench-CN-meta.jsonl`
- `./meta/bench-EN-meta.jsonl`
These files provide metadata for each audio sample, including:
- `id`: unique identifier
- `url`: source link for downloading the audio
- `title`: original audio title
- `duration`: duration in seconds
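Since the metadata files are plain JSONL, they can be loaded with only the standard library. The sketch below (file path and helper name are illustrative, not part of the benchmark) reads one metadata file into a dict keyed by `id`, which makes it easy to look up samples later:

```python
import json

def load_metadata(path):
    """Load a LAT-Bench metadata JSONL file into a dict keyed by `id`.

    Each line is one JSON object with the fields described above:
    id, url, title, duration.
    """
    meta = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines defensively
            record = json.loads(line)
            meta[record["id"]] = record
    return meta

# Example (path is hypothetical):
# meta = load_metadata("./meta/bench-EN-meta.jsonl")
# print(meta["some_id"]["duration"])
```

The `url` field can then be used to fetch the source audio for each entry.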
### Task Files
Dense Audio Captioning (DAC)
- `./task/bench-CN-DAC.jsonl`
- `./task/bench-EN-DAC.jsonl`
Temporal Audio Grounding (TAG)
- `./task/bench-CN-TAG.jsonl`
- `./task/bench-EN-TAG.jsonl`
Targeted Audio Captioning (TAC)
- `./task/bench-CN-TAC.jsonl`
- `./task/bench-EN-TAC.jsonl`
Each task file contains benchmark instances in a unified format.
The `audios` field of each instance references the corresponding audio sample by its `id` from the metadata files.
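The link between task instances and audio samples can be resolved with a small helper. This is a sketch only: the exact shape of the `audios` field (a single id vs. a list of ids) is not specified here, so the helper below handles both cases and should be adjusted to the actual task-file schema:

```python
import json

def load_jsonl(path):
    """Read a JSONL file into a list of dicts."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

def resolve_audio(task_instance, metadata_by_id):
    """Return the metadata record(s) referenced by a task instance.

    Assumes `audios` holds one id string or a list of id strings
    (an assumption about the schema, not a documented guarantee),
    and that `metadata_by_id` maps id -> metadata dict.
    """
    audios = task_instance["audios"]
    ids = audios if isinstance(audios, list) else [audios]
    return [metadata_by_id[i] for i in ids]

# Example (paths are hypothetical):
# meta = {r["id"]: r for r in load_jsonl("./meta/bench-EN-meta.jsonl")}
# tasks = load_jsonl("./task/bench-EN-TAG.jsonl")
# records = resolve_audio(tasks[0], meta)
```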
## Evaluation Protocol
For detailed evaluation protocols and metrics, please refer to the official repository:
👉 https://github.com/alanshaoTT/LAT-Audio-Repo
## Citation
If you find this work useful, please cite:
```bibtex
@article{shao2026lataudio,
title={Listening with Time: Precise Temporal Awareness for Long-Form Audio Understanding},
author={Shao, Mingchen and Su, Hang and Tian, Wenjie and Mu, Bingshen and Lin, Zhennan and Fan, Lichun and Luo, Zhenbo and Luan, Jian and Xie, Lei},
journal={arXiv preprint arXiv:2604.22245},
year={2026}
}
```
## Contact
For questions, feedback, or collaboration inquiries, please contact:
📧 mcshao@mail.nwpu.edu.cn |