A GGUF export of UMT5-XXL containing only the encoder, mainly used as the text encoder for image or video generation models. Use with llama.cpp.
Example command:

```shell
llama-embedding -m umt5-xxl-encode-only-Q4_K_M.gguf -p "Penguin" --pooling none --embd-normalize -1 --no-warmup --batch-size 512 --ctx-size 512 --embd-output-format array
```
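Downstream, the arrays emitted by a command like the one above are ordinary embedding vectors, so they can be compared with standard vector math. A minimal sketch of cosine similarity between two such vectors, using made-up low-dimensional placeholder values (real UMT5-XXL embeddings are far higher-dimensional):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of the norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Illustrative placeholder vectors, NOT real model output; actual output
# from llama-embedding with --embd-output-format array is much longer.
emb_a = [0.12, -0.45, 0.88, 0.05]
emb_b = [0.10, -0.40, 0.90, 0.02]
print(cosine_similarity(emb_a, emb_b))
```

Note that `--embd-normalize -1` in the example command leaves the vectors unnormalized; cosine similarity is unaffected since it normalizes internally.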