# apertus-8b-soc-gguf
GGUF conversion of Colby/apertus-8b-soc, a LoRA fine-tune of swiss-ai/Apertus-8B-2509 on marcodsn/SOC-2508 (Synthetic Online Conversations).
## Quantizations
| File | Format | Size |
|---|---|---|
| apertus-8b-soc-f16.gguf | FP16 | ~16 GB |
| apertus-8b-soc-q8_0.gguf | Q8_0 | ~8 GB |
| apertus-8b-soc-q5_k_m.gguf | Q5_K_M | ~5 GB |
| apertus-8b-soc-q4_k_m.gguf | Q4_K_M | ~4 GB |
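The file sizes above roughly follow from the model's parameter count times the effective bits per weight of each format. As a sanity check, here is a small sketch; the ~8B parameter count and the bits-per-weight figures for the K-quants are approximations, since llama.cpp mixes precisions across tensors and the file also carries metadata:

```python
# Rough GGUF file-size estimate: params * bits-per-weight / 8 bytes.
# Bits-per-weight values are approximate averages for llama.cpp formats.
PARAMS = 8e9  # Apertus-8B, approximate parameter count
BPW = {"F16": 16.0, "Q8_0": 8.5, "Q5_K_M": 5.7, "Q4_K_M": 4.85}

def est_gb(bits_per_weight: float, params: float = PARAMS) -> float:
    """Estimated file size in GB, ignoring metadata overhead."""
    return params * bits_per_weight / 8 / 1e9

for fmt, bpw in BPW.items():
    print(f"{fmt}: ~{est_gb(bpw):.1f} GB")
```

The estimates land close to the table (Q8_0 slightly above 8 GB, Q4_K_M a bit under 5 GB), which is expected given the mixed per-tensor precision.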
## Ollama usage

```sh
# Download the quantized model file
hf download Colby/apertus-8b-soc-gguf apertus-8b-soc-q4_k_m.gguf

# Create an Ollama model from a Modelfile whose FROM line
# points at ./apertus-8b-soc-q4_k_m.gguf
ollama create apertus-soc:8b -f Modelfile

# Chat with it
ollama run apertus-soc:8b
```
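A minimal Modelfile for the `ollama create` step could look like the following; the `FROM` path matches the downloaded file, and the sampling parameter is an illustrative default, not something specified by this repo:

```
FROM ./apertus-8b-soc-q4_k_m.gguf

# Optional: illustrative sampling default
PARAMETER temperature 0.7
```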
## Model tree

Base model: swiss-ai/Apertus-8B-2509