# Kiel-Pro-0.5B-v3-GGUF
GGUF quantizations of AksaraLLM/Kiel-Pro-0.5B-v3 for inference with llama.cpp, Ollama, LM Studio, and other GGUF runtimes.
## Files
| File | Quant | Size | Recommended use |
|---|---|---|---|
| Kiel-Pro-0.5B-v3.f16.gguf | F16 | 0.99 GB | lossless from safetensors |
| Kiel-Pro-0.5B-v3.q8_0.gguf | Q8_0 | 0.53 GB | near-lossless, ~2× smaller |
| Kiel-Pro-0.5B-v3.q6_k.gguf | Q6_K | 0.51 GB | high quality, ~2.5× smaller |
| Kiel-Pro-0.5B-v3.q5_k_m.gguf | Q5_K_M | 0.42 GB | good quality, ~3× smaller |
| Kiel-Pro-0.5B-v3.q4_k_m.gguf | Q4_K_M | 0.40 GB | recommended default, ~4× smaller |
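If you prefer Python over the CLI for downloads, a minimal sketch with `huggingface_hub` (the filename below is just the recommended q4_k_m default from the table; swap in any of the files above):

```python
# Minimal sketch: fetch one quant from the Hub with huggingface_hub.
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="AksaraLLM/Kiel-Pro-0.5B-v3-GGUF",
    filename="Kiel-Pro-0.5B-v3.q4_k_m.gguf",  # recommended default; any file from the table works
)
print(model_path)  # local path to hand to your GGUF runtime
```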
## CPU benchmark (AMD EPYC 7763, 2 threads, AVX2)

| Quant | Prompt eval (32 tok) | Generation (16 tok) |
|---|---|---|
| q4_k_m | 36.7 tok/s | 20.1 tok/s |
A 494M-parameter model quantized to q4_k_m therefore runs comfortably on a laptop-class CPU. The larger quants (q5_k_m, q6_k, q8_0) trade a bit of speed for better quality.
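For a rough comparison on your own machine, one option is to time a short generation with llama-cpp-python. This is a sketch, not the benchmarking setup used for the table above (it times an end-to-end call, prompt eval included), and it assumes the q4_k_m file is already downloaded locally:

```python
# Rough generation-throughput check with llama-cpp-python (pip install llama-cpp-python).
# End-to-end timing, so it only approximates the table above.
import time
from llama_cpp import Llama

llm = Llama(
    model_path="Kiel-Pro-0.5B-v3.q4_k_m.gguf",
    n_threads=2,      # match the 2-thread setting from the benchmark
    verbose=False,
)

start = time.perf_counter()
out = llm("Indonesia adalah", max_tokens=16)
elapsed = time.perf_counter() - start

generated = out["usage"]["completion_tokens"]
print(f"{generated} tokens in {elapsed:.2f} s -> {generated / elapsed:.1f} tok/s")
```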
## Quick start – llama.cpp

```bash
huggingface-cli download AksaraLLM/Kiel-Pro-0.5B-v3-GGUF Kiel-Pro-0.5B-v3.q4_k_m.gguf --local-dir .
./llama-cli -m Kiel-Pro-0.5B-v3.q4_k_m.gguf -p "Indonesia adalah" -n 64
```
## Quick start – Ollama

```bash
huggingface-cli download AksaraLLM/Kiel-Pro-0.5B-v3-GGUF Kiel-Pro-0.5B-v3.q4_k_m.gguf Modelfile --local-dir .
ollama create aksara-kiel-pro-0.5b-v3 -f Modelfile
ollama run aksara-kiel-pro-0.5b-v3 "Apa ibukota Indonesia?"
```
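Once the model is registered, it can also be called programmatically through Ollama's local HTTP API. A minimal sketch with `requests`, assuming the Ollama server is running on its default port 11434:

```python
# Query the locally registered Ollama model over its default HTTP API.
# Assumes `ollama serve` (or the desktop app) is running on localhost:11434.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "aksara-kiel-pro-0.5b-v3",
        "prompt": "Apa ibukota Indonesia?",
        "stream": False,  # return one JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```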
## Source model
See AksaraLLM/Kiel-Pro-0.5B-v3 for architecture, training data, eval results, and limitations.
## Conversion provenance

- Converted with `convert_hf_to_gguf.py` from llama.cpp
- Quantized with `llama-quantize` from the same build
- Architecture detected as `qwen2`
- All files listed above are reproducible from the source HF safetensors (a sketch of the steps follows below)
## Quick start – llama-cpp-python

```python
# !pip install llama-cpp-python
from llama_cpp import Llama

# Load a quant straight from the Hub; q4_k_m is the recommended default from the table above.
llm = Llama.from_pretrained(
    repo_id="AksaraLLM/Kiel-Pro-0.5B-v3-GGUF",
    filename="Kiel-Pro-0.5B-v3.q4_k_m.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True,
)
print(output)
```