all-MiniLM-L6-v2 GGUF

GGUF format of sentence-transformers/all-MiniLM-L6-v2 for use with CrispEmbed and Ollama.

Files

File                         Quantization
all-MiniLM-L6-v2-f32.gguf    F32
all-MiniLM-L6-v2-q4_k.gguf   Q4_K
all-MiniLM-L6-v2-q8_0.gguf   Q8_0
all-MiniLM-L6-v2.gguf        F32

Recommended: Q8_0 for best quality (cosine similarity vs. the original HF model: 0.9998), or Q4_K for smallest size (0.970).
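
The quality figures are mean cosine similarities between embeddings from the quantized GGUF files and the original HuggingFace model. A sketch of how such a comparison can be reproduced, assuming sentence-transformers is installed and using the CrispEmbed Python API shown under Quick Start below:

import numpy as np
from sentence_transformers import SentenceTransformer
from crispembed import CrispEmbed

texts = ["Hello world", "Goodbye world"]

# Reference embeddings from the original HF model
ref = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2").encode(texts)

# Embeddings from the quantized GGUF
quant = np.asarray(CrispEmbed("all-MiniLM-L6-v2-q8_0.gguf").encode(texts))

# Per-input cosine similarity, then the mean
cos = np.sum(ref * quant, axis=1) / (
    np.linalg.norm(ref, axis=1) * np.linalg.norm(quant, axis=1)
)
print(cos.mean())  # ~0.9998 for Q8_0 per the table above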

Quick Start

CrispEmbed

./crispembed -m all-MiniLM-L6-v2 "Hello world"
./crispembed-server -m all-MiniLM-L6-v2 --port 8080
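
The first command prints an embedding for a single input; the second starts an HTTP server on port 8080 exposing the endpoints listed under Server API below.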

Ollama (with CrispStrobe fork)

# Create model
echo "FROM all-MiniLM-L6-v2-q8_0.gguf" > Modelfile
ollama create all-MiniLM-L6-v2 -f Modelfile

# Embed
curl http://localhost:11434/api/embed -d '{"model":"all-MiniLM-L6-v2","input":["Hello world"]}'
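
Assuming the fork matches upstream Ollama's embed API, the response is a JSON object whose "embeddings" field holds one 384-dimensional vector per input string.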

Python (CrispEmbed)

from crispembed import CrispEmbed
model = CrispEmbed("all-MiniLM-L6-v2-q8_0.gguf")
vectors = model.encode(["Hello world", "Goodbye world"])
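
A typical next step is semantic similarity between the returned vectors. The sketch below assumes encode returns one 384-dimensional vector per input (as a list or array):

import numpy as np

a, b = np.asarray(vectors[0]), np.asarray(vectors[1])
# Cosine similarity in [-1, 1]; higher means more semantically similar
similarity = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(similarity)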

Model Details

Property              Value
Architecture          BERT
Parameters            22.7M
Embedding dimension   384
Layers                6
Pooling               mean
Tokenizer             WordPiece
Language              en
Q8_0 vs HF (cosine)   0.9998
Q4_K vs HF (cosine)   0.970
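
Mean pooling means the sentence embedding is the average of the final-layer token embeddings over non-padding positions, as in the original sentence-transformers model. A minimal sketch of masked mean pooling (shapes assumed: token_embeddings is (tokens, 384), attention_mask is (tokens,) with 1 for real tokens and 0 for padding):

import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    # Zero out padding positions, then average over the real tokens
    mask = attention_mask[:, None].astype(token_embeddings.dtype)  # (tokens, 1)
    summed = (token_embeddings * mask).sum(axis=0)                 # (384,)
    count = max(float(mask.sum()), 1e-9)                           # avoid divide-by-zero
    return summed / count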

Server API

The CrispEmbed server supports four API dialects:

  • POST /embed - native
  • POST /v1/embeddings - OpenAI-compatible
  • POST /api/embed - Ollama-compatible
  • POST /api/embeddings - Ollama legacy
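
For example, against a server started as in the Quick Start, and assuming the OpenAI-compatible dialect follows the standard OpenAI embeddings request/response schema:

import requests

resp = requests.post(
    "http://localhost:8080/v1/embeddings",
    json={"model": "all-MiniLM-L6-v2", "input": ["Hello world"]},
)
embedding = resp.json()["data"][0]["embedding"]
print(len(embedding))  # 384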

Credits

Original model: sentence-transformers/all-MiniLM-L6-v2. GGUF conversion published as cstr/all-MiniLM-L6-v2-GGUF.
