Blue — PyTorch weights (training, finetuning & voice export)

This repository contains Safetensors / PyTorch checkpoints and multilingual latent statistics for BlueTTS — a Hebrew-first multilingual text-to-speech model with optional English, Spanish, Italian, German, and mixed-language synthesis in the reference code.

Project home (install, ONNX inference, examples): https://github.com/maxmelichov/BlueTTS

Live ONNX demo (browser): Hugging Face Space — notmax123/Blue

End-user synthesis: use the ONNX bundle notmax123/blue-onnx together with the BlueTTS README. This repository (notmax123/blue) supplies training/finetuning weights and the files needed to export new voice-style JSON for ONNX; it is not the ONNX runtime bundle.

Files

| File | Role |
| --- | --- |
| blue_codec.safetensors | Audio codec: mel ↔ latent, discrete/continuous conversion. |
| stats_multilingual.pt | Latent mean/std used for normalization (the same statistics as in training). |
| vf_estimator.safetensors | Text-to-latent acoustic model (text encoder, reference encoder, flow-matching core). |
| duration_predictor.safetensors | Duration predictor checkpoint. |
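For illustration, stats_multilingual.pt can be applied as an ordinary mean/std normalization of the codec latents. This is a minimal sketch, assuming the file is a plain dict of tensors keyed by "mean" and "std" — those key names and the function below are assumptions, not the repo's documented schema, so inspect the checkpoint before relying on them:

```python
import torch


def normalize_latents(latents: torch.Tensor,
                      stats_path: str = "stats_multilingual.pt") -> torch.Tensor:
    """Normalize codec latents with the training-time mean/std.

    Assumes the stats file is a dict holding "mean" and "std"
    tensors -- check the real checkpoint to confirm its layout.
    """
    stats = torch.load(stats_path, map_location="cpu")
    return (latents - stats["mean"]) / stats["std"]
```

The inverse (latents * std + mean) would undo the normalization before handing latents back to the codec.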

Download

Repo id is case-sensitive — use notmax123/blue (not Blue).

```
hf download notmax123/blue --repo-type model --local-dir ./pt_weights
```

Equivalent with the classic CLI:

```
huggingface-cli download notmax123/blue --repo-type model --local-dir ./pt_weights
```

How to use

  1. Training or finetuning: follow the instructions in the training directory of the BlueTTS GitHub repository.

  2. New voices for ONNX inference: Clone BlueTTS, install with the export extra, download these weights locally, and run scripts/export_new_voice.py (see script docstring and project README).
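Step 2 above can be sketched as a shell session. The `.[export]` spelling of the export extra and the script's flags are assumptions inferred from the description, so check the BlueTTS README and the script docstring before running:

```shell
# Sketch of the new-voice export flow. The "export" extra spelling
# below is an assumption; confirm it in the BlueTTS README.
git clone https://github.com/maxmelichov/BlueTTS
cd BlueTTS
pip install -e ".[export]"

# Fetch these PyTorch weights locally (repo id is case-sensitive).
hf download notmax123/blue --repo-type model --local-dir ./pt_weights

# The script's options live in its docstring; start with --help.
python scripts/export_new_voice.py --help
```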

License

MIT — see the BlueTTS repository for the full license text.
