47z committed (verified) · a981877 · 1 parent: 5d5b381

Update README: remove balanced column, add citation, describe Kimi-Audio training format

Files changed (1): README.md (+39 −22)
README.md CHANGED
@@ -47,13 +47,13 @@ emotion categories: **angry, crying, happy, laugh, sad**.
 
 ## Subsets (nested, emotion-balanced)
 
-| subset | en hours | pairs   | per-emotion hours | balanced?   |
-|-------:|---------:|--------:|------------------:|:-----------:|
-| 1hr    |     1.00 |     839 |         0.20 each | ✓           |
-| 50hr   |    50.00 |  42,467 |        10.00 each | ✓           |
-| 100hr  |    99.99 |  85,004 |        20.00 each | ✓           |
-| 500hr  |   500.00 | 425,952 |       100.00 each | ✓           |
-| 1000hr |  1055.67 | 896,477 |            95–133 | (full data) |
 
 Smaller subsets are strict subsets of larger ones. Hours are measured by
 **English** audio duration; the Chinese side is paired one-to-one and
@@ -62,16 +62,14 @@ Smaller subsets are strict subsets of larger ones. Hours are measured by
 ## Repository layout
 
 ```
-move-1000hr/
 ├── README.md
 ├── metadata_1hr.tsv      # 5 columns: zh_path, en_path, zh_text, en_text, category
 ├── metadata_50hr.tsv
 ├── metadata_100hr.tsv
 ├── metadata_500hr.tsv
 ├── metadata_1000hr.tsv   # full
-├── metadata_all.tsv      # all 896k pairs + 5 boolean cols (in_1hr ... in_1000hr)
-├── plan.tsv              # full plan with tier + shard_id + durations
-├── entries.jsonl.gz      # lossless original metadata (optional)
 ├── data/                 # audio packed into tar shards
 │   ├── s1hr_000.tar
 │   ├── s50hr_*.tar
@@ -79,7 +77,7 @@ move-1000hr/
 │   ├── s500hr_*.tar
 │   └── s1000hr_*.tar
 └── scripts/
-    └── tsv_to_metadata_shard.py   # paired TSV -> conversation-format JSONL
 ```
 
 Inside each tar, paths preserve the original tree:
@@ -176,21 +174,33 @@ def subset(df, target):
 df_50hr = subset(df, "50hr")   # equivalent to metadata_50hr.tsv
 ```
 
-## Convert to training format (s-s conversation)
 
-If your trainer expects the original `metadata_shard` format
-(`{task_type, conversation:[...]}`), use the included script:
 
 ```bash
 python scripts/tsv_to_metadata_shard.py metadata_100hr.tsv \
     -o metadata_shard.jsonl
-# or split:
-python scripts/tsv_to_metadata_shard.py metadata_100hr.tsv \
-    --shard-size 50000 -o metadata_shard.jsonl
 ```
 
-Each TSV row produces 2 conversations (en→zh + zh→en). Use
-`--directions en2zh` / `--directions zh2en` to keep one direction.
 
195
  ## How it was built
196
 
@@ -203,8 +213,15 @@ Each TSV row produces 2 conversations (en→zh + zh→en). Use
203
 
204
  ## Citation
205
 
206
- (TODO)
 
 
 
 
 
 
 
207
 
208
  ## License
209
 
210
- CC BY-NC 4.0 (TODO confirm).
 
 
 ## Subsets (nested, emotion-balanced)
 
+| subset | en hours | pairs   | per-emotion hours |
+|-------:|---------:|--------:|------------------:|
+| 1hr    |     1.00 |     839 |         0.20 each |
+| 50hr   |    50.00 |  42,467 |        10.00 each |
+| 100hr  |    99.99 |  85,004 |        20.00 each |
+| 500hr  |   500.00 | 425,952 |       100.00 each |
+| 1000hr |  1055.67 | 896,477 |     95–133 (full) |
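
The per-emotion balance can be spot-checked by counting pairs per category in any metadata TSV. A minimal stdlib-only sketch, assuming the 5-column layout (`zh_path, en_path, zh_text, en_text, category`) with a header row:

```python
import csv
from collections import Counter

def pairs_per_emotion(tsv_path):
    """Count metadata rows per emotion category.

    Assumes a tab-separated file whose header row names a `category` column.
    """
    with open(tsv_path, newline="", encoding="utf-8") as f:
        return Counter(row["category"] for row in csv.DictReader(f, delimiter="\t"))
```

In a balanced subset the five counts should be nearly equal.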
 
 Smaller subsets are strict subsets of larger ones. Hours are measured by
 **English** audio duration; the Chinese side is paired one-to-one and
 
 ## Repository layout
 
 ```
+47z/MoVE/
 ├── README.md
 ├── metadata_1hr.tsv      # 5 columns: zh_path, en_path, zh_text, en_text, category
 ├── metadata_50hr.tsv
 ├── metadata_100hr.tsv
 ├── metadata_500hr.tsv
 ├── metadata_1000hr.tsv   # full
+├── metadata_all.tsv      # all 896k pairs + subset column
 ├── data/                 # audio packed into tar shards
 │   ├── s1hr_000.tar
 │   ├── s50hr_*.tar
 │   ├── s500hr_*.tar
 │   └── s1000hr_*.tar
 └── scripts/
+    └── tsv_to_metadata_shard.py   # TSV -> Kimi-Audio training JSONL
 ```
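
The shards can be streamed without unpacking them to disk. A minimal sketch with the stdlib `tarfile` module (shard and member names assumed from the layout above):

```python
import tarfile

def iter_audio(shard_path):
    """Yield (member_path, wav_bytes) for every .wav packed in a tar shard,
    reading members sequentially without extracting the archive."""
    with tarfile.open(shard_path) as tar:
        for member in tar:
            if member.isfile() and member.name.endswith(".wav"):
                yield member.name, tar.extractfile(member).read()
```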
 
 Inside each tar, paths preserve the original tree:
 
174
  df_50hr = subset(df, "50hr") # equivalent to metadata_50hr.tsv
175
  ```
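
Because smaller subsets are strict subsets of larger ones, the nesting can be spot-checked directly from two metadata TSVs. A sketch assuming the 5-column layout with a header row and `en_path` as a unique pair key:

```python
import csv

def en_paths(tsv_path):
    """Return the set of en_path keys in a metadata TSV
    (tab-separated, header row assumed)."""
    with open(tsv_path, newline="", encoding="utf-8") as f:
        return {row["en_path"] for row in csv.DictReader(f, delimiter="\t")}

# Expected nesting, e.g.:
#   en_paths("metadata_1hr.tsv") <= en_paths("metadata_50hr.tsv")
```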
 
+## Convert to Kimi-Audio training format
 
+The included script `tsv_to_metadata_shard.py` converts any metadata TSV
+into the conversation-format JSONL used to replicate the
+[Kimi-Audio](https://github.com/MoonshotAI/Kimi-Audio) speech-to-speech
+training pipeline described in our paper.
 
 ```bash
 python scripts/tsv_to_metadata_shard.py metadata_100hr.tsv \
     -o metadata_shard.jsonl
 ```
 
+Output schema (one JSON object per line):
+
+```json
+{
+  "task_type": "s-s",
+  "conversation": [
+    {"role": "user", "message_type": "text", "content": "Translate the given English speech into Chinese while preserving its expressiveness."},
+    {"role": "user", "message_type": "audio", "content": "en/happy/happy_000001_en.wav"},
+    {"role": "assistant", "message_type": "audio-text", "content": ["zh/happy/happy_000001_zh.wav", "中文文本..."]}
+  ]
+}
+```
+
+Each TSV row produces 2 conversations by default (en→zh + zh→en). Use
+`--directions en2zh` / `--directions zh2en` to keep one direction only.
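
For reference, the per-row mapping can be sketched as follows. This is a simplified stand-in for the included script, not its actual code; the zh→en instruction wording is an assumption mirrored from the en→zh instruction shown in the schema above:

```python
EN2ZH = ("Translate the given English speech into Chinese "
         "while preserving its expressiveness.")
# Assumed wording; the real script may phrase the reverse direction differently.
ZH2EN = ("Translate the given Chinese speech into English "
         "while preserving its expressiveness.")

def row_to_conversations(row):
    """Map one TSV row (zh_path, en_path, zh_text, en_text, category)
    to two s-s conversations: en→zh and zh→en."""
    def conv(instruction, src_audio, tgt_audio, tgt_text):
        return {
            "task_type": "s-s",
            "conversation": [
                {"role": "user", "message_type": "text", "content": instruction},
                {"role": "user", "message_type": "audio", "content": src_audio},
                {"role": "assistant", "message_type": "audio-text",
                 "content": [tgt_audio, tgt_text]},
            ],
        }
    return [
        conv(EN2ZH, row["en_path"], row["zh_path"], row["zh_text"]),
        conv(ZH2EN, row["zh_path"], row["en_path"], row["en_text"]),
    ]
# Each returned dict becomes one json.dumps(...) line of the JSONL.
```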
 
 ## How it was built
 
 
 ## Citation
 
+```bibtex
+@article{chen2026move,
+  title={MoVE: Translating Laughter and Tears via Mixture of Vocalization Experts in Speech-to-Speech Translation},
+  author={Chen, Szu-Chi and Tsai, I-Ning and Lin, Yi-Cheng and Huang, Sung-Feng and Lee, Hung-yi},
+  journal={arXiv preprint arXiv:2604.17435},
+  year={2026}
+}
+```
 
 ## License
 
+CC BY-NC 4.0