47z committed on
Commit 5d5b381 · verified · 1 parent: 179d2f2

Update README with new dataset card

Files changed (1)
  1. README.md +179 -36
README.md CHANGED
@@ -3,65 +3,208 @@ language:
  - en
  - zh
  license: cc-by-nc-4.0
- task_categories:
- - translation
- - automatic-speech-recognition
- pretty_name: MoVE — Mixture of Vocalization Experts Dataset
  size_categories:
  - 100K<n<1M
  tags:
- - speech-to-speech-translation
- - expressive-speech
- - emotion
- - bilingual
- - tts
  ---

- # MoVE Dataset

- Bilingual (EN↔ZH) expressive speech-to-speech translation dataset.
- 858,312 pairs · 5 emotion categories: `angry`, `happy`, `sad`, `laugh`, `crying`.

- > **Paper:** [MoVE: Translating Laughter and Tears via Mixture of Vocalization Experts in Speech-to-Speech Translation](https://arxiv.org/abs/2604.17435) (Interspeech 2026)

- ## Directory Structure

  ```
- ├── en/{emotion}/*.flac    # English TTS audio (FLAC, lossless)
- ├── zh/{emotion}/*.flac    # Chinese TTS audio (FLAC, lossless)
- ├── metadata.tsv           # Pair metadata (see below)
- └── make_kimi_train.py     # Convert to Kimi-Audio training format
  ```

- ## metadata.tsv

- Columns: `zh_path`, `en_path`, `zh_text`, `en_text`, `category`

- All paths are relative to this directory and use `.flac` extension.

  ```bash
- python make_metadata_tsv.py
  ```

- ## Kimi-Audio Training Format

- > ⚠️ **WARNING**: `make_kimi_train.py` converts all `.flac` files to `.wav` **in-place**
- > and deletes the original `.flac` files. WAV files are approximately **2–3× larger**
- > than FLAC. Run this only when you intend to use the data for local Kimi-Audio training
- > and no longer need the FLAC files.

- Produces `metadata_kimi_train.jsonl` in Kimi-Audio conversation format:

- ```json
- {"task_type": "s-s", "conversation": [
-   {"role": "user", "message_type": "text", "content": "Translate the given English speech into Chinese while preserving its expressiveness."},
-   {"role": "user", "message_type": "audio", "content": "en/angry/angry_000001_en.wav"},
-   {"role": "assistant", "message_type": "audio-text", "content": ["zh/angry/angry_000001_zh.wav", "..."]}
- ]}
  ```

- Each pair generates two entries (EN→ZH and ZH→EN).

  ```bash
- python make_kimi_train.py
  ```
  - en
  - zh
  license: cc-by-nc-4.0
+ pretty_name: MoVE — Multilingual Voice Emotion (paired EN/ZH)
  size_categories:
  - 100K<n<1M
+ task_categories:
+ - audio-to-audio
+ - text-to-speech
+ - translation
  tags:
+ - speech-translation
+ - emotional-tts
+ - paired-speech
+ configs:
+ - config_name: "1hr"
+   data_files:
+   - split: train
+     path: metadata_1hr.tsv
+ - config_name: "50hr"
+   data_files:
+   - split: train
+     path: metadata_50hr.tsv
+ - config_name: "100hr"
+   data_files:
+   - split: train
+     path: metadata_100hr.tsv
+ - config_name: "500hr"
+   data_files:
+   - split: train
+     path: metadata_500hr.tsv
+ - config_name: "1000hr"
+   data_files:
+   - split: train
+     path: metadata_1000hr.tsv
  ---

+ # MoVE — Multilingual Voice Emotion (paired EN/ZH)
+
+ Repo: [`47z/MoVE`](https://huggingface.co/datasets/47z/MoVE)
+
+ Paired English / Chinese emotional speech for speech-to-speech translation
+ training. Every row is a `(en_wav, zh_wav)` pair belonging to one of five
+ emotion categories: **angry, crying, happy, laugh, sad**.
+
+ ## Subsets (nested, emotion-balanced)
+
+ | subset | en hours | pairs   | per-emotion hours | balanced? |
+ |-------:|---------:|--------:|------------------:|:---------:|
+ | 1hr    |     1.00 |     839 |         0.20 each | ✓ |
+ | 50hr   |    50.00 |  42,467 |        10.00 each | ✓ |
+ | 100hr  |    99.99 |  85,004 |        20.00 each | ✓ |
+ | 500hr  |   500.00 | 425,952 |       100.00 each | ✓ |
+ | 1000hr |  1055.67 | 896,477 |            95–133 | ✗ (full data) |
+
+ Smaller subsets are strict subsets of larger ones. Hours are measured by
+ **English** audio duration; the Chinese side is paired one-to-one and
+ ~5% longer in total.

+ ## Repository layout
+
+ ```
+ move-1000hr/
+ ├── README.md
+ ├── metadata_1hr.tsv         # 5 columns: zh_path, en_path, zh_text, en_text, category
+ ├── metadata_50hr.tsv
+ ├── metadata_100hr.tsv
+ ├── metadata_500hr.tsv
+ ├── metadata_1000hr.tsv      # full
+ ├── metadata_all.tsv         # all 896k pairs, with tier membership (see below)
+ ├── plan.tsv                 # full plan with tier + shard_id + durations
+ ├── entries.jsonl.gz         # lossless original metadata (optional)
+ ├── data/                    # audio packed into tar shards
+ │   ├── s1hr_000.tar
+ │   ├── s50hr_*.tar
+ │   ├── s100hr_*.tar
+ │   ├── s500hr_*.tar
+ │   └── s1000hr_*.tar
+ └── scripts/
+     └── tsv_to_metadata_shard.py   # paired TSV -> conversation-format JSONL
+ ```

+ Inside each tar, paths preserve the original tree:

  ```
+ en/<emotion>/<id>_en.wav
+ zh/<emotion>/<id>_zh.wav
  ```

+ So extracting each tar (e.g. `for f in data/*.tar; do tar -xf "$f"; done`)
+ reproduces a flat `en/` + `zh/` directory layout.

+ ## Quick start
+
+ ### Download just one subset (e.g. 100hr, ~34 GB of audio)
+
  ```bash
+ huggingface-cli download 47z/MoVE \
+   --repo-type dataset --local-dir ./move \
+   --include "metadata_100hr.tsv" \
+             "data/s1hr_*.tar" \
+             "data/s50hr_*.tar" \
+             "data/s100hr_*.tar" \
+             "scripts/*"
+
+ cd move
+ for f in data/*.tar; do tar -xf "$f"; done   # extracts en/ and zh/
  ```
+
+ The wav paths in `metadata_100hr.tsv` (e.g. `en/happy/happy_000001_en.wav`)
+ now resolve to local files.
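+
+ If you prefer to do this from Python, a minimal sketch with `huggingface_hub`
+ (the patterns simply mirror the CLI call above):
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Selective download of the 100hr subset (same patterns as the CLI example).
+ snapshot_download(
+     repo_id="47z/MoVE",
+     repo_type="dataset",
+     local_dir="./move",
+     allow_patterns=[
+         "metadata_100hr.tsv",
+         "data/s1hr_*.tar",
+         "data/s50hr_*.tar",
+         "data/s100hr_*.tar",
+         "scripts/*",
+     ],
+ )
+ ```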
+
+ ### Download the full 1000hr (~340 GB)
+
+ ```bash
+ huggingface-cli download 47z/MoVE \
+   --repo-type dataset --local-dir ./move
+ cd move && for f in data/*.tar; do tar -xf "$f"; done
+ ```
+
+ ### Required tar shards per subset
+
+ | subset | tar prefixes to download |
+ |--------|--------------------------|
+ | 1hr    | `s1hr_*` |
+ | 50hr   | `s1hr_*`, `s50hr_*` |
+ | 100hr  | `s1hr_*`, `s50hr_*`, `s100hr_*` |
+ | 500hr  | `s1hr_*`, `s50hr_*`, `s100hr_*`, `s500hr_*` |
+ | 1000hr | all (`s*_*`) |
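+
+ The cumulative pattern in this table is easy to generate; a minimal sketch
+ (the `tier_prefixes` helper is illustrative and not shipped with the repo):
+
+ ```python
+ # Build the cumulative list of tar-shard glob patterns for a given subset,
+ # following the table above. The result can feed --include / allow_patterns.
+ TIERS = ["1hr", "50hr", "100hr", "500hr", "1000hr"]
+
+ def tier_prefixes(subset: str) -> list[str]:
+     needed = TIERS[: TIERS.index(subset) + 1]
+     return [f"data/s{t}_*.tar" for t in needed]
+
+ print(tier_prefixes("100hr"))
+ # ['data/s1hr_*.tar', 'data/s50hr_*.tar', 'data/s100hr_*.tar']
+ ```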
+
+ ## metadata.tsv schema
+
+ Each `metadata_<size>.tsv` is tab-separated with this header:
+
+ | column   | example | description |
+ |----------|---------|-------------|
+ | zh_path  | `zh/happy/happy_000001_zh.wav` | path to Chinese wav (relative) |
+ | en_path  | `en/happy/happy_000001_en.wav` | path to English wav (relative) |
+ | zh_text  | `父亲。一个人出去约会...` | Chinese transcript |
+ | en_text  | `Dater. A person who...` | English transcript |
+ | category | `happy` | one of {angry, crying, happy, laugh, sad} |
+
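+ To sanity-check a pair after extraction, a minimal sketch (assumes `pandas`
+ and `soundfile` are installed and the tars were unpacked next to the TSV):
+
+ ```python
+ import pandas as pd
+ import soundfile as sf
+
+ df = pd.read_csv("metadata_100hr.tsv", sep="\t")
+ row = df.iloc[0]
+
+ # Paths in the TSV are relative to this directory.
+ en_audio, en_sr = sf.read(row["en_path"])
+ zh_audio, zh_sr = sf.read(row["zh_path"])
+ print(row["category"], round(len(en_audio) / en_sr, 2), "s of English audio")
+ ```
+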
+ ### metadata_all.tsv (overview)
+
+ Same 5 columns plus one `subset` column:
+
+ | column   | values | meaning |
+ |----------|:------:|---------|
+ | `subset` | `1hr` / `50hr` / `100hr` / `500hr` / `1000hr` | the **smallest** subset this row belongs to |
+
+ Subsets are nested, so a row with `subset="1hr"` is also in 50hr, 100hr,
+ 500hr and 1000hr; `subset="1000hr"` means the row is only in the full
+ dataset. Subset sizes (cumulative):
+
+ | subset value | cumulative includes        | total rows |
+ |--------------|----------------------------|-----------:|
+ | `1hr`        | self                       |        839 |
+ | `50hr`       | 1hr + 50hr                 |     42,467 |
+ | `100hr`      | 1hr + 50hr + 100hr         |     85,004 |
+ | `500hr`      | 1hr + 50hr + 100hr + 500hr |    425,952 |
+ | `1000hr`     | full dataset               |    896,477 |
+
+ Filter with pandas:
+
+ ```python
+ import pandas as pd
+ df = pd.read_csv("metadata_all.tsv", sep="\t")
+
+ # Build any subset by including this tier and all smaller ones:
+ order = ["1hr", "50hr", "100hr", "500hr", "1000hr"]
+ def subset(df, target):
+     keep = order[:order.index(target) + 1]
+     return df[df["subset"].isin(keep)]
+
+ df_50hr = subset(df, "50hr")   # equivalent to metadata_50hr.tsv
  ```
+
+ ## Convert to training format (s-s conversation)
+
+ If your trainer expects the original `metadata_shard` format
+ (`{task_type, conversation:[...]}`), use the included script:
+
  ```bash
+ python scripts/tsv_to_metadata_shard.py metadata_100hr.tsv \
+   -o metadata_shard.jsonl
+ # or split:
+ python scripts/tsv_to_metadata_shard.py metadata_100hr.tsv \
+   --shard-size 50000 -o metadata_shard.jsonl
  ```
+
+ Each TSV row produces 2 conversations (en→zh + zh→en). Use
+ `--directions en2zh` / `--directions zh2en` to keep one direction.
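+
+ To sanity-check the output, a minimal sketch that peeks at the first entry
+ (field names assumed from the `{task_type, conversation:[...]}` format above):
+
+ ```python
+ import json
+
+ # Read the first produced conversation entry and list its turns.
+ with open("metadata_shard.jsonl") as f:
+     entry = json.loads(f.readline())
+
+ print(entry["task_type"])                 # expected: "s-s"
+ for turn in entry["conversation"]:
+     print(turn["role"], turn["message_type"])
+ ```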
+
+ ## How it was built
+
+ - En/Zh paired by the `<id>_en` / `<id>_zh` suffix on the original
+   `entries_shard_*.jsonl` records (2,801 unpaired entries dropped).
+ - Tier assignment per emotion: shuffle (seed=42), accumulate English
+   duration, and label each pair with the smallest subset it falls into
+   (sketched below).
+ - Tar shards are emotion-balanced (round-robin) within each tier and
+   capped to keep individual tars manageable (≤ ~20 GB).
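+
+ A minimal sketch of that tier assignment (illustrative only; helper names and
+ the per-emotion hour budgets are inferred from the subsets table):
+
+ ```python
+ import random
+
+ # Per-emotion English-hour budgets per tier (0.20, 10, 20, 100 h from the
+ # subsets table); anything beyond the 500hr budget lands in the full tier.
+ TIER_BUDGETS_H = [("1hr", 0.2), ("50hr", 10.0), ("100hr", 20.0), ("500hr", 100.0)]
+
+ def assign_tiers(pairs, seed=42):
+     """pairs: dicts with 'emotion' and 'en_duration_s'; adds a 'tier' key."""
+     rng = random.Random(seed)
+     by_emotion = {}
+     for p in pairs:
+         by_emotion.setdefault(p["emotion"], []).append(p)
+     for group in by_emotion.values():
+         rng.shuffle(group)                       # shuffle within each emotion
+         acc_h = 0.0
+         for p in group:
+             acc_h += p["en_duration_s"] / 3600   # accumulate English duration
+             # smallest tier whose per-emotion budget still covers this pair
+             p["tier"] = next((t for t, h in TIER_BUDGETS_H if acc_h <= h), "1000hr")
+     return pairs
+ ```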
+
+ ## Citation
+
+ (TODO)
+
+ ## License
+
+ CC BY-NC 4.0 (TODO confirm).