Duplicate from ASLP-lab/WSC-Train

Co-authored-by: ASLP-lab <ASLP-lab@users.noreply.huggingface.co>
- .gitattributes +60 -0
- README.md +172 -0
- WenetSpeech-Chuan.zip +3 -0
- wsc_metadata.jsonl +3 -0
.gitattributes
ADDED
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.lz4 filter=lfs diff=lfs merge=lfs -text
*.mds filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
# Image files - uncompressed
*.bmp filter=lfs diff=lfs merge=lfs -text
*.gif filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.tiff filter=lfs diff=lfs merge=lfs -text
# Image files - compressed
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text
# Video files - compressed
*.mp4 filter=lfs diff=lfs merge=lfs -text
*.webm filter=lfs diff=lfs merge=lfs -text
wsc_metadata.jsonl filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
---
license: apache-2.0
---

<h1 align="center">
WenetSpeech-Chuan: A Large-Scale Sichuanese Corpus With Rich Annotation For Dialectal Speech Processing
</h1>

<p align="center">
Yuhang Dai<sup>1,*</sup>, Ziyu Zhang<sup>1,*</sup>, Shuai Wang<sup>4,5</sup>,
Longhao Li<sup>1</sup>, Zhao Guo<sup>1</sup>, Tianlun Zuo<sup>1</sup>,
Shuiyuan Wang<sup>1</sup>, Hongfei Xue<sup>1</sup>, Chengyou Wang<sup>1</sup>,
Qing Wang<sup>3</sup>, Xin Xu<sup>2</sup>, Hui Bu<sup>2</sup>, Jie Li<sup>3</sup>,
Jian Kang<sup>3</sup>, Binbin Zhang<sup>5</sup>, Lei Xie<sup>1,╀</sup>
</p>

<p align="center">
<sup>1</sup> Audio, Speech and Language Processing Group (ASLP@NPU), Northwestern Polytechnical University <br>
<sup>2</sup> Beijing AISHELL Technology Co., Ltd. <br>
<sup>3</sup> Institute of Artificial Intelligence (TeleAI), China Telecom <br>
<sup>4</sup> School of Intelligence Science and Technology, Nanjing University <br>
<sup>5</sup> WeNet Open Source Community <br>
</p>

<p align="center">
📑 <a href="https://arxiv.org/abs/2509.18004">Paper</a>    |   
🐙 <a href="https://github.com/ASLP-lab/WenetSpeech-Chuan">GitHub</a>    |   
🤗 <a href="https://huggingface.co/collections/ASLP-lab/wenetspeech-chuan-68bade9d02bcb1faece65bda">HuggingFace</a>
<br>
🎤 <a href="https://aslp-lab.github.io/WenetSpeech-Chuan/">Demo Page</a>    |   
💬 <a href="https://github.com/ASLP-lab/WenetSpeech-Chuan?tab=readme-ov-file#contact">Contact Us</a>
</p>

<div align="center">
<img width="800px" src="https://github.com/ASLP-lab/WenetSpeech-Chuan/blob/main/src/logo/WenetSpeech-Chuan-Logo.png?raw=true" />
</div>

## Dataset

### WenetSpeech-Chuan Overview

* Contains 10,000 hours of Chuan-Yu dialect speech with rich annotations, making it the largest open-source resource for Chuan-Yu dialect speech research.
* Stores metadata in a single JSONL file, including audio path, duration, text confidence, speaker identity, SNR, DNSMOS, age, gender, and character-level timestamps. Additional metadata tags may be added in the future.
* Covers ten domains: short videos, entertainment, live streams, documentary, audiobook, drama, interview, news, and others.

<div align="center">
<img src="https://github.com/ASLP-lab/WenetSpeech-Chuan/blob/main/src/figs/domain.png?raw=true" width="300" style="display:inline-block; margin-right:10px;" />
<img src="https://github.com/ASLP-lab/WenetSpeech-Chuan/blob/main/src/figs/quality_distribution.jpg?raw=true" width="300" style="display:inline-block;" />
</div>

### Metadata Format

We store all audio metadata in a standardized JSON format:

* **Core fields:** `utt_id` (unique identifier for each audio segment), `rover_result` (ROVER fusion of three ASR transcriptions), `confidence` (confidence score of the text transcription), `jyutping_confidence` (confidence score of the romanized transcription), and `duration` (audio duration).
* **Speaker attributes:** `speaker_id`, `gender`, and `age`.
* **Audio quality metrics:** `sample_rate`, `DNSMOS`, and `SNR`.
* **Timestamps:** `timestamp`, recording segment boundaries with `start` and `end`.
* **Extended metadata (`meta_info`):** `program` (program name), `region` (geographical information), `link` (original content link), and `domain` (domain classification).

#### 📂 Content Tree
```
WenetSpeech-Chuan
├── wsc_metadata.jsonl
├── WenetSpeech-Chuan.zip
├── .gitattributes
└── README.md
```

#### Data sample:

###### wsc_metadata.jsonl

Each line is a JSON object with the following fields:

{<br>
"utt": audio segment ID (type: str), <br>
"filename": audio file name (type: str), <br>
"text": transcript (type: str), <br>
"domain": domain information (type: list[str]), <br>
"gender": speaker gender (type: str), <br>
"age": speaker age-range label (type: str, e.g. middle-aged (36–59)), <br>
"wvmos": audio quality score (type: float), <br>
"confidence": transcript confidence score, 0–1 (type: float), <br>
"emotion": speaker emotion label (type: str, e.g. angry) <br>
} <br>

**example:**

{ <br>
"utt": "013165495633_09mNC_9_5820", <br>
"filename": "013165495633_09mNC_9_5820.wav", <br>
"text": "还是选二手装好了的别墅诚心入如意的直接入住的好好", <br>
"domain": [ <br>
"短视频" <br>
], <br>
"gender": "Male", <br>
"age": "YOUTH", <br>
"wvmos": 2.124380588531494, <br>
"confidence": 0.8333, <br>
"emotion": "angry" <br>
} <br>

### WenetSpeech-Chuan Usage
You can obtain the original video source through the `link` field in the metadata file (`wsc_metadata.jsonl`) and segment the audio according to the `timestamp` field to extract the corresponding utterance. For pre-processed audio data, please contact us using the information provided below.

## Contact
If you have any questions or would like to collaborate, feel free to reach out to our research team via email: yhdai@mail.nwpu.edu.cn or ziyu_zhang@mail.nwpu.edu.cn.

You're also welcome to join our WeChat group for technical discussions, updates, and access to the pre-processed audio data mentioned above.

<p align="center">
<img src="https://github.com/ASLP-lab/WenetSpeech-Chuan/raw/main/src/figs/wechat.jpg" width="300" alt="WeChat Group QR Code"/>
<em>Scan to join our WeChat discussion group</em>
</p>

<p align="center">
<img src="https://github.com/ASLP-lab/WenetSpeech-Yue/raw/main/figs/npu@aslp.jpeg" width="300" alt="Official Account QR Code"/>
</p>
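The usage notes in the README (filter segments by the quality fields, then cut the source audio at the `timestamp` boundaries) can be sketched with the Python standard library alone. This is a hedged sketch, not the authors' tooling: the field names (`confidence`, `wvmos`, `start`, `end`) follow the descriptions above and may need adjusting to the actual release.

```python
import json
import wave


def load_metadata(path, min_confidence=0.8, min_wvmos=2.0):
    """Yield JSONL metadata records that pass simple quality thresholds.

    Assumes one JSON object per line with "confidence" and "wvmos"
    fields, as described in the README; adjust names if they differ.
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            rec = json.loads(line)
            if (rec.get("confidence", 0.0) >= min_confidence
                    and rec.get("wvmos", 0.0) >= min_wvmos):
                yield rec


def cut_segment(src_wav, dst_wav, start_s, end_s):
    """Copy the [start_s, end_s) span of a WAV file to a new WAV file."""
    with wave.open(src_wav, "rb") as src:
        rate = src.getframerate()
        src.setpos(int(start_s * rate))          # seek to segment start
        frames = src.readframes(int((end_s - start_s) * rate))
        with wave.open(dst_wav, "wb") as dst:
            dst.setparams(src.getparams())       # same channels/width/rate
            dst.writeframes(frames)
```

A typical loop would call `load_metadata("wsc_metadata.jsonl")` and, for each record, pass its `timestamp` bounds to `cut_segment` on the corresponding source audio.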
WenetSpeech-Chuan.zip
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:4b262e62a9f8b04f0917da7f1fb14488fb8aaa5dcd1ced44e79f9e06cbdd0070
size 499488539
wsc_metadata.jsonl
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:4096e8950ce61ec472b17e32c8b0af1810391ba0208e5e411598e7fce1f45b10
size 1434877820
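The `WenetSpeech-Chuan.zip` and `wsc_metadata.jsonl` entries above are Git LFS pointer files rather than the payloads themselves: three `key value` lines giving the spec version, a SHA-256 content hash, and the byte size. A minimal sketch of parsing such a pointer with plain Python (an illustration, not part of the dataset's tooling):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file (version / oid / size lines)."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    algo, _, digest = fields["oid"].partition(":")
    return {
        "version": fields["version"],
        "oid_algorithm": algo,        # e.g. "sha256"
        "oid": digest,                # hex digest of the payload
        "size": int(fields["size"]),  # payload size in bytes
    }
```

For example, the `WenetSpeech-Chuan.zip` pointer above resolves to a payload of 499,488,539 bytes (roughly 500 MB); the actual data is fetched by Git LFS (or the Hugging Face Hub tooling) at checkout.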