| --- |
| license: cc-by-nc-4.0 |
| task_categories: |
| - automatic-speech-recognition |
| - voice-activity-detection |
| language: |
| - en |
| - zh |
| - multilingual |
| size_categories: |
| - n<1K |
| pretty_name: MSDWild (val = test) — Speaker Diarization in the Wild |
| tags: |
| - speaker-diarization |
| - diarization |
| - msdwild |
| - in-the-wild |
| - multimodal |
| - rttm |
| - audio |
| annotations_creators: |
| - expert-generated |
| source_datasets: |
| - extended|msdwild |
| dataset_info: |
| - config_name: few |
| features: |
| - name: audio |
| dtype: audio |
| - name: rttm |
| dtype: string |
| - name: file_id |
| dtype: string |
| splits: |
| - name: test |
| num_examples: 490 |
| - config_name: many |
| features: |
| - name: audio |
| dtype: audio |
| - name: rttm |
| dtype: string |
| - name: file_id |
| dtype: string |
| splits: |
| - name: test |
| num_examples: 177 |
| --- |
| |
| # MSDWild — validation / test split (speaker diarization in the wild) |
|
|
A copy of the evaluation split of **MSDWild** (Liu et al., Interspeech 2022), a
multimodal *in-the-wild* speaker diarization corpus built from real-world
videos (talk shows, interviews, debates, ...). MSDWild provides two validation
sets that the community uses as a **de facto test set**:
|
|
- **few**: 2 to 4 speakers per session (490 files), the "classic" task
- **many**: 5+ speakers per session (177 files), dense conversational speech,
  considerably harder
|
|
The RTTMs are the ones officially distributed on the upstream repo. The audio
files are the official WAVs (Google Drive, 7.56 GB).
|
|
## Contents
|
|
| Subset | Files | Profile |
|--------|-------|---------|
| few    | 490   | 2–4 speakers, short segments |
| many   | 177   | 5+ speakers, high turn-taking density |
|
|
- Audio: **WAV**, 16 kHz mono (as distributed by upstream MSDWild)
- Annotations: standard **RTTM** (pyannote-compatible)
- Languages: multilingual (mostly English and Chinese; web sources)
- License: **CC-BY-NC-4.0**, research only (see the upstream MSDWild license
  agreement)
|
|
| ## Structure |
|
|
| ``` |
| dia-MsdWild-test/ |
| ├── README.md |
| ├── audio/ |
| │ ├── few/<file_id>.wav # 490 |
| │ └── many/<file_id>.wav # 177 |
| └── rttm/ |
| ├── few/<file_id>.rttm # 490 |
| └── many/<file_id>.rttm # 177 |
| ``` |
|
|
Each RTTM line follows the standard format:
| ``` |
| SPEAKER <file_id> 1 <start_sec> <duration_sec> <NA> <NA> <speaker_id> <NA> <NA> |
| ``` |
|
|
## Usage
|
|
### Download
| ```python |
| from huggingface_hub import snapshot_download |
| root = snapshot_download("ggfox00000/dia-MsdWild-test", repo_type="dataset") |
| # root/audio/few/*.wav, root/rttm/few/*.rttm, root/audio/many/*.wav, ... |
| ``` |
|
|
### DER evaluation (pyannote)
| ```python |
| from pathlib import Path |
| from pyannote.audio import Pipeline |
| from pyannote.metrics.diarization import DiarizationErrorRate |
| from pyannote.database.util import load_rttm |
| |
# gated model: requires accepting its conditions and an HF access token
pipe = Pipeline.from_pretrained("pyannote/speaker-diarization-3.1")
| |
| for subset in ("few", "many"): |
| metric = DiarizationErrorRate() |
| audio_dir = Path(root) / "audio" / subset |
| rttm_dir = Path(root) / "rttm" / subset |
| for wav in sorted(audio_dir.glob("*.wav")): |
| hyp = pipe(str(wav)) |
| ref = next(iter(load_rttm(str(rttm_dir / f"{wav.stem}.rttm")).values())) |
| metric(ref, hyp) |
| print(f"{subset}: DER = {abs(metric):.3f}") |
| ``` |
|
|
| ## Source |
|
|
| - MSDWild — Liu et al. 2022 |
| https://github.com/X-LANCE/MSDWILD |
| https://x-lance.github.io/MSDWILD |
|
|
## License
|
|
**CC-BY-NC-4.0**: **non-commercial** use, research only. Refer to the upstream
MSDWild license agreement (`MSDWILD_license_agreement.pdf`) before any reuse.
|
|
| ## Citation |
|
|
| ```bibtex |
| @inproceedings{liu22t_interspeech, |
| author = {Tao Liu and Shuai Fan and Xu Xiang and Hongbo Song and Shaoxiong Lin and Jiaqi Sun and Tianyuan Han and Siyuan Chen and Binwei Yao and Sen Liu and Yifei Wu and Yanmin Qian and Kai Yu}, |
| title = {{MSDWild: Multi-modal Speaker Diarization Dataset in the Wild}}, |
| booktitle = {Interspeech}, |
| year = {2022}, |
| pages = {1476--1480}, |
| doi = {10.21437/Interspeech.2022-10466}, |
| } |
| ``` |
|
|