An encoder-only GGUF conversion of umt5-xxl, mainly used as the text encoder for image or video generation models. Use with llama.cpp.

Example command:

llama-embedding -m umt5-xxl-encode-only-Q4_K_M.gguf -p "Penguin" --pooling none --embd-normalize -1 --no-warmup --batch-size 512 --ctx-size 512 --embd-output-format array
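With `--embd-output-format array`, llama-embedding prints the embeddings as an array of float vectors. A minimal Python sketch for consuming such output (the sample string below is hypothetical; real output would come from redirecting the command above to a file, and this assumes the array format parses as JSON):

```python
import json
import math

# Hypothetical sample in the style of llama-embedding's "array" output;
# real values would come from running the command above.
raw = "[[0.5, -0.25, 0.75], [0.1, 0.2, -0.3]]"

embeddings = json.loads(raw)  # one embedding vector per row

def l2_normalize(vec):
    """Scale a vector to unit Euclidean length."""
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

unit = [l2_normalize(v) for v in embeddings]
print(len(unit), "vectors of dim", len(unit[0]))
```

Note that the command above passes `--embd-normalize -1`, which leaves the raw encoder output unnormalized, so a consumer can apply its own normalization as sketched here.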
Model details:
- Format: GGUF
- Model size: 6B params
- Architecture: t5encoder
- Quantization: 4-bit (Q4_K_M)


Model tree for Ziyaad30/umt5-xxl-encoder-gguf:
- Base model: google/umt5-xxl
- This model is one of 2 quantized variants of the base model.