---
language:
  - en
  - zh
license: cc-by-nc-4.0
pretty_name: MoVE — Multilingual Voice Emotion (paired EN/ZH)
size_categories:
  - 100K<n<1M
task_categories:
  - audio-to-audio
  - text-to-speech
  - translation
tags:
  - speech-translation
  - emotional-tts
  - paired-speech
configs:
  - config_name: "1hr"
    data_files:
      - split: train
        path: metadata_1hr.tsv
  - config_name: "50hr"
    data_files:
      - split: train
        path: metadata_50hr.tsv
  - config_name: "100hr"
    data_files:
      - split: train
        path: metadata_100hr.tsv
  - config_name: "500hr"
    data_files:
      - split: train
        path: metadata_500hr.tsv
  - config_name: "1000hr"
    data_files:
      - split: train
        path: metadata_1000hr.tsv
---

# MoVE — Multilingual Voice Emotion (paired EN/ZH)

Repo: [`47z/MoVE`](https://huggingface.co/datasets/47z/MoVE)

Paired English / Chinese emotional speech for speech-to-speech translation
training. Every row is a `(en_wav, zh_wav)` pair belonging to one of five
emotion categories: **angry, crying, happy, laugh, sad**.

## Subsets (nested, emotion-balanced)

| subset | en hours | pairs   | per-emotion hours |
|-------:|---------:|--------:|------------------:|
| 1hr    | 1.00     | 839     | 0.20 each         |
| 50hr   | 50.00    | 42,467  | 10.00 each        |
| 100hr  | 99.99    | 85,004  | 20.00 each        |
| 500hr  | 500.00   | 425,952 | 100.00 each       |
| 1000hr | 1055.67  | 896,477 | 95–133 (full, unbalanced) |

Smaller subsets are strict subsets of larger ones. Hours are measured by
**English** audio duration; the Chinese side is paired one-to-one and
~5% longer in total.
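
Each subset is also exposed as a Hub config over its metadata TSV (see the
YAML header), so the metadata can be loaded without downloading any audio.
A minimal sketch, assuming the `datasets` library's built-in TSV handling
parses the file; if not, read the TSV directly with pandas as shown later:

```python
from datasets import load_dataset

# Loads only the metadata TSV for this config; the audio lives in the
# tar shards under data/ and must be downloaded and extracted separately.
ds = load_dataset("47z/MoVE", "100hr", split="train")
print(ds.column_names)    # zh_path, en_path, zh_text, en_text, category
print(ds[0]["category"])  # e.g. "happy"
```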

## Repository layout

```
47z/MoVE/
├── README.md
├── metadata_1hr.tsv       # 5 columns: zh_path, en_path, zh_text, en_text, category
├── metadata_50hr.tsv
├── metadata_100hr.tsv
├── metadata_500hr.tsv
├── metadata_1000hr.tsv    # full
├── metadata_all.tsv       # all 896k pairs + subset column
├── data/                  # audio packed into tar shards
│   ├── s1hr_000.tar
│   ├── s50hr_*.tar
│   ├── s100hr_*.tar
│   ├── s500hr_*.tar
│   └── s1000hr_*.tar
└── scripts/
    └── tsv_to_metadata.py   # TSV -> Kimi-Audio training JSONL
```

Inside each tar, paths preserve the original tree:

```
en/<emotion>/<id>_en.wav
zh/<emotion>/<id>_zh.wav
```

So extracting every tar (`for f in data/*.tar; do tar -xf "$f"; done`)
reproduces a single merged `en/` + `zh/` directory tree.

## Quick start

### Download just one subset (e.g. 100hr ≈ 34 GB audio)

```bash
huggingface-cli download 47z/MoVE \
  --repo-type dataset --local-dir ./move \
  --include "metadata_100hr.tsv" \
            "data/s1hr_*.tar" \
            "data/s50hr_*.tar" \
            "data/s100hr_*.tar" \
            "scripts/*"

cd move
for f in data/*.tar; do tar -xf "$f"; done   # extracts en/ and zh/
```

The wav paths in `metadata_100hr.tsv` (e.g. `en/happy/happy_000001_en.wav`)
now resolve to local files.
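
From here, iterating over paired audio is a plain pandas loop. A minimal
sketch (`soundfile` is an assumption; any wav reader works):

```python
import pandas as pd
import soundfile as sf  # assumption: substitute any wav reader

df = pd.read_csv("metadata_100hr.tsv", sep="\t")

for row in df.itertuples():
    en_audio, en_sr = sf.read(row.en_path)  # English side of the pair
    zh_audio, zh_sr = sf.read(row.zh_path)  # matching Chinese side
    # row.en_text, row.zh_text, row.category carry transcripts + emotion
    break  # sanity-check the first pair only
```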

### Download the full 1000hr (~340 GB)

```bash
huggingface-cli download 47z/MoVE \
  --repo-type dataset --local-dir ./move
cd move && for f in data/*.tar; do tar -xf "$f"; done
```

### Required tar shards per subset

| subset  | tar prefixes to download |
|---------|--------------------------|
| 1hr     | `s1hr_*` |
| 50hr    | `s1hr_*`, `s50hr_*` |
| 100hr   | `s1hr_*`, `s50hr_*`, `s100hr_*` |
| 500hr   | `s1hr_*`, `s50hr_*`, `s100hr_*`, `s500hr_*` |
| 1000hr  | all (`s*_*`) |
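
Because tiers are cumulative, the patterns for any subset can be built
programmatically instead of typed out. A sketch using
`huggingface_hub.snapshot_download`, equivalent to the CLI commands above:

```python
from huggingface_hub import snapshot_download

TIERS = ["1hr", "50hr", "100hr", "500hr", "1000hr"]

def download_subset(target: str, local_dir: str = "./move") -> None:
    # A subset needs its own shards plus those of every smaller tier.
    needed = TIERS[: TIERS.index(target) + 1]
    patterns = [f"metadata_{target}.tsv", "scripts/*"]
    patterns += [f"data/s{t}_*.tar" for t in needed]
    snapshot_download("47z/MoVE", repo_type="dataset",
                      local_dir=local_dir, allow_patterns=patterns)

download_subset("100hr")  # same shards as the Quick start example
```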

## metadata.tsv schema

Each `metadata_<size>.tsv` is tab-separated with this header:

| column   | example | description |
|----------|---------|-------------|
| zh_path  | `zh/happy/happy_000001_zh.wav` | path to Chinese wav (relative) |
| en_path  | `en/happy/happy_000001_en.wav` | path to English wav (relative) |
| zh_text  | `父亲。一个人出去约会...`         | Chinese transcript |
| en_text  | `Dater. A person who...`        | English transcript |
| category | `happy`                         | one of {angry, crying, happy, laugh, sad} |

### metadata_all.tsv (overview)

Same 5 columns plus one `subset` column:

| column   | values | meaning |
|----------|:------:|---------|
| `subset` | `1hr` / `50hr` / `100hr` / `500hr` / `1000hr` | the **smallest** subset this row belongs to |

Subsets are nested, so a row with `subset="1hr"` is also in 50hr, 100hr,
500hr and 1000hr; `subset="1000hr"` means the row is only in the full
dataset. Subset sizes (cumulative):

| subset value | cumulative includes | total rows |
|--------------|---------------------|-----------:|
| `1hr`        | self                | 839 |
| `50hr`       | 1hr + 50hr          | 42,467 |
| `100hr`      | 1hr + 50hr + 100hr  | 85,004 |
| `500hr`      | 1hr + 50hr + 100hr + 500hr | 425,952 |
| `1000hr`     | full                | 896,477 |

Filter with pandas:

```python
import pandas as pd
df = pd.read_csv("metadata_all.tsv", sep="\t")

# Build any subset by including this tier and all smaller ones:
order = ["1hr", "50hr", "100hr", "500hr", "1000hr"]
def subset(df, target):
    keep = order[:order.index(target) + 1]
    return df[df["subset"].isin(keep)]

df_50hr = subset(df, "50hr")   # equivalent to metadata_50hr.tsv
```

## Convert to Kimi-Audio training format

The included script `tsv_to_metadata.py` converts any metadata TSV
into the conversation-format JSONL used to replicate the
[Kimi-Audio](https://github.com/MoonshotAI/Kimi-Audio) speech-to-speech
training pipeline described in our paper.

```bash
python scripts/tsv_to_metadata.py metadata_100hr.tsv \
  -o metadata_shard.jsonl
```

Output schema (one JSON object per line):

```json
{
  "task_type": "s-s",
  "conversation": [
    {"role": "user", "message_type": "text", "content": "Translate the given English speech into Chinese while preserving its expressiveness."},
    {"role": "user", "message_type": "audio", "content": "en/happy/happy_000001_en.wav"},
    {"role": "assistant", "message_type": "audio-text", "content": ["zh/happy/happy_000001_zh.wav", "中文文本..."]}
  ]
}
```

Each TSV row produces 2 conversations by default (en→zh + zh→en). Use
`--directions en2zh` / `--directions zh2en` to keep one direction only.
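
For reference, the en→zh direction amounts to the loop below (a minimal
sketch; the bundled `tsv_to_metadata.py` is the authoritative converter,
and zh→en mirrors this with a swapped prompt and swapped sides):

```python
import json
import pandas as pd

PROMPT = ("Translate the given English speech into Chinese "
          "while preserving its expressiveness.")

df = pd.read_csv("metadata_100hr.tsv", sep="\t")
with open("metadata_shard.jsonl", "w", encoding="utf-8") as f:
    for row in df.itertuples():
        record = {
            "task_type": "s-s",
            "conversation": [
                {"role": "user", "message_type": "text", "content": PROMPT},
                {"role": "user", "message_type": "audio", "content": row.en_path},
                {"role": "assistant", "message_type": "audio-text",
                 "content": [row.zh_path, row.zh_text]},
            ],
        }
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```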

## How it was built

- En/Zh paired by the `<id>_en` / `<id>_zh` suffix on the original
  `entries_shard_*.jsonl` records (2,801 unpaired entries dropped).
- Tier assignment per emotion: shuffle (seed=42), accumulate English
  duration, label each pair with the smallest subset it falls into
  (sketched after this list).
- Tar shards are emotion-balanced (round-robin) within each tier and
  capped to keep individual tars manageable (≤ ~20 GB).
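
A sketch of the tier-assignment step, with per-emotion hour budgets taken
from the subsets table; `pairs` is a hypothetical list of
`(pair_id, en_duration_seconds)` tuples for a single emotion:

```python
import random

# Per-emotion English-hour budgets for the nested tiers (subsets table).
TIER_HOURS = [("1hr", 0.2), ("50hr", 10.0), ("100hr", 20.0), ("500hr", 100.0)]

def assign_tiers(pairs, seed=42):
    """Label each (pair_id, en_duration_seconds) with its smallest subset."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    labels, hours = {}, 0.0
    for pair_id, dur in pairs:
        hours += dur / 3600.0  # accumulate English duration
        for tier, budget in TIER_HOURS:
            if hours <= budget:
                labels[pair_id] = tier
                break
        else:
            labels[pair_id] = "1000hr"  # past every budget: full set only
    return labels
```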

## Citation

```bibtex
@article{chen2026move,
  title={MoVE: Translating Laughter and Tears via Mixture of Vocalization Experts in Speech-to-Speech Translation},
  author={Chen, Szu-Chi and Tsai, I-Ning and Lin, Yi-Cheng and Huang, Sung-Feng and Lee, Hung-yi},
  journal={arXiv preprint arXiv:2604.17435},
  year={2026}
}
```

## License

CC BY-NC 4.0