How to use from llama.cpp

Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Tinysoft/Cosyvoice2-0.5B-GGUF:F16
# Run inference directly in the terminal:
llama-cli -hf Tinysoft/Cosyvoice2-0.5B-GGUF:F16

Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Tinysoft/Cosyvoice2-0.5B-GGUF:F16
# Run inference directly in the terminal:
./llama-cli -hf Tinysoft/Cosyvoice2-0.5B-GGUF:F16

Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Tinysoft/Cosyvoice2-0.5B-GGUF:F16
# Run inference directly in the terminal:
./build/bin/llama-cli -hf Tinysoft/Cosyvoice2-0.5B-GGUF:F16

Use Docker
docker model run hf.co/Tinysoft/Cosyvoice2-0.5B-GGUF:F16
This only works with token IDs directly; the tokenizer is completely busted.
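For reference, llama-server's /completion endpoint accepts the prompt as an array of token IDs, so the broken tokenizer can be bypassed entirely. A minimal sketch, assuming the server from above is running on the default port 8080 (the IDs shown are placeholders, not a real CosyVoice2 prompt):

# Send raw token IDs to the /completion endpoint instead of text
# (the IDs below are placeholders, not real CosyVoice2 prompt tokens):
curl http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": [1, 2, 3], "n_predict": 64}'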
Update: the F16 and Q8 now have the bias head. The Q6 and Q4 don't, but I'm not sure how usable they are...
CosyVoice also does rich pre- and post-processing on top of the LLM step, so you can't do TTS out of the box with llama.cpp. Nevertheless, the LLM step is the slowest part, and switching it from PyTorch to llama.cpp yields a 10x perf gain.
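If you want to wire that split up yourself, the rough shape is: keep CosyVoice2's own front-end and its flow-matching + vocoder back-end in PyTorch, and route only the autoregressive speech-token generation through llama-server. A hedged sketch, assuming a llama-server instance on localhost:8080; prompt_tokens.json and cosyvoice_decode.py are hypothetical stand-ins for CosyVoice2's real pre-/post-processing, not files shipped with this repo:

# 1. Pre-processing (CosyVoice2 front-end, PyTorch): build the prompt as raw
#    token IDs and write it as JSON, e.g. {"prompt": [...], "n_predict": 512},
#    into prompt_tokens.json (hypothetical file name).

# 2. LLM step (llama.cpp): generate the speech-token sequence from those IDs.
#    Since the tokenizer is busted, you'll likely want the generated token IDs
#    back rather than detokenized text; how to request that depends on your
#    llama-server build.
curl -s http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d @prompt_tokens.json > llm_output.json

# 3. Post-processing (PyTorch): feed the generated speech tokens into
#    CosyVoice2's flow-matching + vocoder stages to get a waveform.
# python cosyvoice_decode.py llm_output.json out.wav   # hypothetical helper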
Model tree for Tinysoft/Cosyvoice2-0.5B-GGUF
Base model: FunAudioLLM/CosyVoice2-0.5B
Install from brew
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Tinysoft/Cosyvoice2-0.5B-GGUF:F16
# Run inference directly in the terminal:
llama-cli -hf Tinysoft/Cosyvoice2-0.5B-GGUF:F16