# Blue: PyTorch weights (training, finetuning & voice export)
This repository contains Safetensors / PyTorch checkpoints and multilingual latent statistics for BlueTTS, a Hebrew-first multilingual text-to-speech model with optional English, Spanish, Italian, German, and mixed-language synthesis in the reference code.
Project home (install, ONNX inference, examples): https://github.com/maxmelichov/BlueTTS
Live ONNX demo (browser): Hugging Face Space `notmax123/Blue`
**End-user synthesis:** use the ONNX model bundle `notmax123/blue-onnx` with the BlueTTS README. This repo (`notmax123/blue`) supplies training / finetuning weights and the files needed to export new voice style JSON for ONNX; it is not the ONNX runtime bundle.
## Files

| File | Role |
|---|---|
| `blue_codec.safetensors` | Audio codec: mel ↔ latent, discrete/continuous conversion. |
| `stats_multilingual.pt` | Latent mean/std for normalization (same statistics as training). |
| `vf_estimator.safetensors` | Text-to-latent acoustic model (text encoder, reference encoder, flow-matching core). |
| `duration_predictor.safetensors` | Duration predictor checkpoint. |
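`stats_multilingual.pt` holds the latent mean/std used to standardize latents exactly as during training. A minimal sketch of applying such statistics (the helper names are illustrative; the key layout inside the checkpoint is repo-specific, so inspect `torch.load("stats_multilingual.pt")` for the actual structure):

```python
import torch


def normalize_latent(latent: torch.Tensor, mean: torch.Tensor, std: torch.Tensor) -> torch.Tensor:
    """Standardize a latent to zero mean / unit variance, matching training-time statistics."""
    return (latent - mean) / std


def denormalize_latent(latent: torch.Tensor, mean: torch.Tensor, std: torch.Tensor) -> torch.Tensor:
    """Invert the normalization before handing latents back to the codec."""
    return latent * std + mean


# Demo with synthetic statistics; the real mean/std come from stats_multilingual.pt.
mean, std = torch.zeros(8), torch.ones(8) * 2.0
x = torch.randn(3, 8)
x_norm = normalize_latent(x, mean, std)
assert torch.allclose(denormalize_latent(x_norm, mean, std), x, atol=1e-6)
```

The round trip (normalize then denormalize) must reproduce the input, otherwise latents fed between the acoustic model and the codec will be on the wrong scale.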
## Download

Repo id is case-sensitive: use `notmax123/blue` (not `Blue`).

```shell
hf download notmax123/blue --repo-type model --local-dir ./pt_weights
```

Equivalent with the classic CLI:

```shell
huggingface-cli download notmax123/blue --repo-type model --local-dir ./pt_weights
```
## How to use

- **Training or finetuning:** follow the training directory in the BlueTTS GitHub repository.
- **New voices for ONNX inference:** clone BlueTTS, install with the `export` extra, download these weights locally, and run `scripts/export_new_voice.py` (see the script docstring and project README).
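Before running the export script it is worth checking that all four checkpoint files actually landed in the local weights directory, since a partial download fails with less obvious errors later. A small stdlib-only helper (the file names come from the table above; the function name is illustrative):

```python
from pathlib import Path

# The four checkpoints shipped in notmax123/blue (see the Files table).
REQUIRED_FILES = [
    "blue_codec.safetensors",
    "stats_multilingual.pt",
    "vf_estimator.safetensors",
    "duration_predictor.safetensors",
]


def missing_weights(weights_dir: str) -> list[str]:
    """Return the required checkpoint files that are absent from weights_dir."""
    root = Path(weights_dir)
    return [name for name in REQUIRED_FILES if not (root / name).is_file()]


# After `hf download notmax123/blue --local-dir ./pt_weights`, expect no missing files:
# assert not missing_weights("./pt_weights"), missing_weights("./pt_weights")
```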
## License

MIT; see the BlueTTS repository for the full license text.