# WavCube: Unifying Speech Representation for Understanding and Generation via Semantic-Acoustic Joint Modeling

<p align="center">
  <img src="doc/wavcube_logo.png" alt="WavCube Logo" width="400"/>
</p>

[GitHub](https://github.com/yanghaha0908/WavCube)
[arXiv](https://arxiv.org/abs/2605.06407)
[🤗 Hugging Face](https://huggingface.co/yhaha/WavCube)

WavCube is a 128-dimensional, 50 Hz continuous representation that unifies speech understanding, reconstruction, and generation within a single space.

This is the official code for the paper [WavCube: Unifying Speech Representation for Understanding and Generation via Semantic-Acoustic Joint Modeling](https://arxiv.org/pdf/2605.06407) [[abs](https://arxiv.org/abs/2605.06407)].

## ✨ Key Features
- **Unified Speech Representation** – A single continuous latent space that simultaneously supports speech understanding, reconstruction, and generation.
- **Semantic-Acoustic Joint Modeling** – Harmonizes high-level semantic structures with low-level acoustic textures.
- **Compact & Diffusion-Friendly** – A compact 128-dimensional bottleneck (8x compression from standard SSL features) that enables easier diffusion modeling.
<!-- By infusing fine-grained acoustic details into a distilled SSL semantic manifold, -->
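As a back-of-the-envelope check on the compression figure above, compare WavCube's 128-dim, 50 Hz features against a 1024-dim SSL feature at the same frame rate (the 1024-dim baseline is our assumption, e.g. typical WavLM-Large features, not a number taken from this repo):

```python
def num_values(duration_s: float, frame_rate_hz: int = 50, dim: int = 128) -> int:
    """Total floats needed to store a feature sequence of the given length."""
    return int(duration_s * frame_rate_hz) * dim

wavcube = num_values(10.0, dim=128)    # WavCube bottleneck
ssl = num_values(10.0, dim=1024)       # assumed SSL baseline (e.g. WavLM-Large)

print(wavcube)         # 64000 values for a 10 s clip (500 frames x 128)
print(ssl // wavcube)  # 8 -> the 8x compression quoted above
```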

## 🛠️ Installation

We recommend creating a fresh conda environment for installation.
### Env Setup
```bash
conda create -n WavCube python=3.10 -y
conda activate WavCube
```

### Basic Requirements
```bash
git clone https://github.com/yanghaha0908/WavCube.git
cd WavCube
pip install torch==2.7.0 torchvision==0.22.0 torchaudio==2.7.0 --index-url https://download.pytorch.org/whl/cu126
conda install -c conda-forge sox ffmpeg libsndfile
pip install -e ".[train]"
```

## 🚀 Quick Start

### Checkpoint Download
Pre-trained model checkpoints can be downloaded from the links below:

| Representation | Dimension | Sample Rate | Frame Rate |
|----------------|-----------|-------------|------------|
| 🤗 [WavCube](https://huggingface.co/yhaha/WavCube/tree/main/WavCube) | 128 | 16 kHz | 50 Hz |
| 🤗 [WavCube-Pro](https://huggingface.co/yhaha/WavCube/tree/main/WavCube-Pro) | 128 | 16 kHz | 50 Hz |

### Extract Representation from Speech
You can extract continuous representations from a raw waveform as follows:

```bash
python wav_to_feature.py \
    --audio 19_198_000000_000002.wav \
    --config configs/WavCube-stage2.yaml \
    --ckpt WavCube/checkpoints/vocos_checkpoint_epoch=177_step=195000_val_loss=3.3080.ckpt \
    --output 19_198_000000_000002.pt
```
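Once extracted, the saved feature can be loaded and sanity-checked with PyTorch. The sketch below substitutes a random stand-in tensor for the saved `.pt` file; the exact on-disk layout is an assumption, so check `wav_to_feature.py` for the authoritative format:

```python
import torch

# Stand-in for torch.load("19_198_000000_000002.pt"): a 2 s utterance at the
# 50 Hz frame rate gives 100 frames, each a 128-dim WavCube vector.
features = torch.randn(100, 128)

num_frames, dim = features.shape
assert dim == 128, "WavCube features are 128-dimensional"
duration_s = num_frames / 50  # frames / frame rate
print(f"{num_frames} frames x {dim} dims covers {duration_s:.1f} s of audio")
```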

### Reconstruct Speech from Representation

You can reconstruct the waveform from representations as follows:

```bash
python feature_to_wav.py \
    --feature 19_198_000000_000002.pt \
    --config configs/WavCube-stage2.yaml \
    --ckpt WavCube/checkpoints/vocos_checkpoint_epoch=177_step=195000_val_loss=3.3080.ckpt
```

<!-- ## 💡 Tips
- For devices that do not support BF16, you can manually disable PyTorch's mixed precision manager.
- If you encounter any issues or have questions, please feel free to open an issue. -->

## 🔧 Training

WavCube employs a **two-stage training** pipeline; all scripts are located in `scripts/train/`.

```bash
# ----------------- WavCube -----------------
bash scripts/train/train_WavCube_stage1.sh
bash scripts/train/train_WavCube_stage2.sh

# --------------- WavCube-Pro ---------------
bash scripts/train/train_WavCube_pro_stage1.sh
bash scripts/train/train_WavCube_pro_stage2.sh
# Note: update `stage1_ckpt_path` in the config to your Stage 1 checkpoint before running Stage 2.
```

## 🤗 Additional Resources

### Evaluation Checkpoints

To make it easier to reproduce our results, we have uploaded supplementary resources to our 🤗 [WavCube repo](https://huggingface.co/yhaha/WavCube/tree/main/ckpts). These include the `wavlm-large` weights and the evaluation checkpoints needed to compute metrics such as WER, speaker similarity, and UTMOS.

```bash
# For offline testing, or if you experience network issues, you can manually copy the checkpoints to your local cache:
cp -r ckpts/hub ~/.cache/torch/
cp ckpts/utmos22_strong_step7459_v1.pt ~/.cache/torch/hub/checkpoints/
cp -r ckpts/s3prl ~/.cache
```

### Data Preparation

**Small-scale data** – uses `VocosDataModule`. Prepare filelists of audio paths for training and validation:

```bash
find $TRAIN_DATASET_DIR -name "*.wav" > filelist.train
find $VAL_DATASET_DIR -name "*.wav" > filelist.val
```

Each line is a plain audio path, for example:
```
/data/LibriSpeech/test-clean/672/122797/672-122797-0026.flac
/data/LibriSpeech/test-clean/672/122797/672-122797-0071.flac
/data/LibriSpeech/test-clean/672/122797/672-122797-0037.flac
```

**Large-scale data** – uses `VocosEmiliaDataModule`. Two files are required:

1. **Filelist** – same format as above for LibriSpeech; for LibriHeavy, each line is a JSON entry, for example:
```json
{"id": "medium/968/.../voyagesdolittle_55_lofting_64kb_38", "start": 22.32, "duration": 19.36, "channel": 0, "recording": {"sources": [{"source": "download/librilight/medium/968/.../voyagesdolittle_55_lofting_64kb.flac"}], "sampling_rate": 16000}, "type": "MonoCut"}
```

2. **Index file** (`.idx`) – a byte-offset index for fast random access, generated via:
```bash
python data/generate_idx.py
```
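For reference, a LibriHeavy manifest line like the one above can be parsed with the standard `json` module; `start` and `duration` are in seconds, so sample offsets follow from the recording's `sampling_rate`. A minimal sketch (the `id` and `source` values below are shortened hypothetical stand-ins, and any loader behavior beyond this is inferred from the example entry):

```python
import json

# Shortened stand-in for one LibriHeavy manifest line (id/path are hypothetical).
line = ('{"id": "example_cut", "start": 22.32, "duration": 19.36, "channel": 0, '
        '"recording": {"sources": [{"source": "audio/example.flac"}], '
        '"sampling_rate": 16000}, "type": "MonoCut"}')

cut = json.loads(line)
sr = cut["recording"]["sampling_rate"]
path = cut["recording"]["sources"][0]["source"]

# Seconds -> sample offsets for slicing the source recording.
start_sample = round(cut["start"] * sr)
num_samples = round(cut["duration"] * sr)
print(path, start_sample, num_samples)  # audio/example.flac 357120 309760
```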

Example data manifest files for both formats are provided in the `data/` directory for reference.
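The idea behind the `.idx` file is standard: record the byte offset of every manifest line once, then `seek` straight to any entry without scanning the whole file. A self-contained sketch of that technique (illustrative only; the actual output format of `data/generate_idx.py` may differ):

```python
import os
import tempfile

def build_index(manifest_path: str, idx_path: str) -> None:
    """Write the byte offset of each manifest line, one offset per line."""
    with open(manifest_path, "rb") as src, open(idx_path, "w") as out:
        offset = src.tell()
        while src.readline():
            out.write(f"{offset}\n")
            offset = src.tell()

def read_entry(manifest_path: str, idx_path: str, n: int) -> str:
    """Random-access the n-th manifest line via the index."""
    with open(idx_path) as f:
        offset = int(f.readlines()[n])
    with open(manifest_path, "rb") as src:
        src.seek(offset)
        return src.readline().decode().rstrip("\n")

# Demo on a throwaway 3-line manifest.
with tempfile.TemporaryDirectory() as d:
    manifest = os.path.join(d, "filelist.train")
    idx = os.path.join(d, "filelist.train.idx")
    with open(manifest, "w") as f:
        f.write("/data/a.wav\n/data/b.wav\n/data/c.wav\n")
    build_index(manifest, idx)
    print(read_entry(manifest, idx, 2))  # /data/c.wav
```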

## ❤️ Acknowledgements

We sincerely thank the authors of the following open-source projects, whose excellent work laid the foundation for WavCube: [Semantic-VAE](https://github.com/ZhikangNiu/Semantic-VAE), [F5-TTS](https://github.com/swivid/f5-tts), [Vocos](https://github.com/gemelo-ai/vocos), [MiMo-Audio-Tokenizer](https://github.com/XiaomiMiMo/MiMo-Audio-Tokenizer), [s3prl](https://github.com/s3prl/s3prl).

## 📖 Citation

If you find this repo helpful, please cite our work:

```bibtex
@misc{[CITATION_KEY],
      title={[Paper Title Placeholder]},
      author={[Author List]},
      year={2025},
      eprint={[ARXIV_ID]},
      archivePrefix={arXiv},
      primaryClass={cs.SD},
      url={https://arxiv.org/abs/[ARXIV_ID]},
}
```

## 📄 License

The code in this repository is released under the MIT License; see [LICENSE](LICENSE) for details.