aksarallm-1.5b-native-GGUF

GGUF quantizations of AksaraLLM/aksarallm-1.5b-native for inference with llama.cpp, Ollama, LM Studio, and other GGUF runtimes.

Files

| File | Quant | Size | Recommended use |
|------|-------|------|-----------------|
| aksarallm-1.5b-native.f16.gguf | F16 | 4.08 GB | lossless conversion from the source safetensors |
| aksarallm-1.5b-native.q8_0.gguf | Q8_0 | 2.17 GB | near-lossless, ~2× smaller |
| aksarallm-1.5b-native.q6_k.gguf | Q6_K | 1.77 GB | high quality, ~2.5× smaller |
| aksarallm-1.5b-native.q5_k_m.gguf | Q5_K_M | 1.53 GB | good quality, ~3× smaller |
| aksarallm-1.5b-native.q4_k_m.gguf | Q4_K_M | 1.35 GB | recommended default, ~4× smaller |

CPU benchmark (AMD EPYC 7763, 2 threads, AVX2)

| Quant | Prompt eval (32 tok) | Generation (16 tok) |
|-------|----------------------|---------------------|
| q4_k_m | 17.2 tok/s | 9.7 tok/s |

Even limited to 2 threads, the 2.04B-parameter model is comfortably usable at q4_k_m, so laptop-class CPUs are viable. The larger quants (q5_k_m, q6_k, q8_0) trade a bit of speed for better quality.
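To reproduce these numbers on your own hardware, llama.cpp ships a llama-bench tool. A minimal invocation mirroring the setup above (2 threads, 32 prompt tokens, 16 generated tokens):

./llama-bench -m aksarallm-1.5b-native.q4_k_m.gguf -t 2 -p 32 -n 16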

Quick start - llama.cpp

huggingface-cli download AksaraLLM/aksarallm-1.5b-native-GGUF aksarallm-1.5b-native.q4_k_m.gguf --local-dir .
./llama-cli -m aksarallm-1.5b-native.q4_k_m.gguf -p "Indonesia adalah" -n 64
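
If you want an HTTP endpoint rather than an interactive CLI, the same build's llama-server serves the file; the port below is an arbitrary choice:

./llama-server -m aksarallm-1.5b-native.q4_k_m.gguf --port 8080
# then, for example:
curl http://localhost:8080/completion -H "Content-Type: application/json" -d '{"prompt": "Indonesia adalah", "n_predict": 64}'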

Quick start - Ollama

huggingface-cli download AksaraLLM/aksarallm-1.5b-native-GGUF aksarallm-1.5b-native.q4_k_m.gguf Modelfile --local-dir .
ollama create aksara-aksarallm-1.5b-native -f Modelfile
ollama run aksara-aksarallm-1.5b-native "Lanjutkan: Indonesia adalah negara"
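
Once created, the model is also reachable through Ollama's local HTTP API (default port 11434):

curl http://localhost:11434/api/generate -d '{
  "model": "aksara-aksarallm-1.5b-native",
  "prompt": "Lanjutkan: Indonesia adalah negara",
  "stream": false
}'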

Source model

See AksaraLLM/aksarallm-1.5b-native for architecture, training data, eval results, and limitations.

Conversion provenance

  • Converted with convert_hf_to_gguf.py from llama.cpp
  • Quantized with llama-quantize from the same build (commands sketched after this list)
  • Architecture detected as llama
  • All files listed above are reproducible from the source HF safetensors
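
A minimal sketch of that pipeline, assuming a llama.cpp checkout and the source weights downloaded to ./aksarallm-1.5b-native (paths and output names are illustrative):

# safetensors -> F16 GGUF
python convert_hf_to_gguf.py ./aksarallm-1.5b-native --outtype f16 --outfile aksarallm-1.5b-native.f16.gguf
# F16 GGUF -> Q4_K_M (repeat with Q5_K_M, Q6_K, Q8_0 for the other files)
./llama-quantize aksarallm-1.5b-native.f16.gguf aksarallm-1.5b-native.q4_k_m.gguf Q4_K_M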

Note on the from-scratch model

This is a llama-3-style decoder built and trained from scratch by the AksaraLLM project. It does not use the Qwen2 ChatML template; it expects a plain ### Instruksi: ... ### Jawaban: ... style prompt (set up automatically by the included Modelfile).
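
For reference, a Modelfile implementing that prompt style could look roughly like the one below. This is a sketch, not necessarily the exact file shipped in this repo; the stop sequence in particular is an assumption:

# hypothetical Modelfile sketch; the repo's actual Modelfile may differ
FROM ./aksarallm-1.5b-native.q4_k_m.gguf
TEMPLATE """### Instruksi:
{{ .Prompt }}

### Jawaban:
"""
PARAMETER stop "### Instruksi:"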
