# aksarallm-1.5b-native-GGUF

GGUF quantizations of AksaraLLM/aksarallm-1.5b-native for inference with llama.cpp, Ollama, LM Studio, and other GGUF runtimes.
## Files

| File | Quant | Size | Recommended use |
|---|---|---|---|
| aksarallm-1.5b-native.f16.gguf | F16 | 4.08 GB | lossless from safetensors |
| aksarallm-1.5b-native.q8_0.gguf | Q8_0 | 2.17 GB | near-lossless, ~2× smaller |
| aksarallm-1.5b-native.q6_k.gguf | Q6_K | 1.77 GB | high quality, ~2.5× smaller |
| aksarallm-1.5b-native.q5_k_m.gguf | Q5_K_M | 1.53 GB | good quality, ~3× smaller |
| aksarallm-1.5b-native.q4_k_m.gguf | Q4_K_M | 1.35 GB | recommended default, ~4× smaller |
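If you prefer to fetch a single quant programmatically rather than with `huggingface-cli`, the `huggingface_hub` client works too. A minimal sketch, using the recommended q4_k_m file:

```python
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Download one quant from this repo; returns the path to the local copy.
path = hf_hub_download(
    repo_id="AksaraLLM/aksarallm-1.5b-native-GGUF",
    filename="aksarallm-1.5b-native.q4_k_m.gguf",
    local_dir=".",
)
print(path)
```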
## CPU benchmark (AMD EPYC 7763, 2 threads, AVX2)

| Quant | Prompt eval (32 tok) | Generation (16 tok) |
|---|---|---|
| q4_k_m | 17.2 tok/s | 9.7 tok/s |
So a 2.04B-parameter model at q4_k_m runs comfortably on a laptop-class CPU. Larger quants (q5_k_m, q6_k, q8_0) trade a bit of speed for better quality.
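These numbers should be reproducible with llama.cpp's bundled `llama-bench` tool; a sketch, assuming a recent llama.cpp build and the same thread count and token budgets as the table above:

```bash
# 2 threads, 32-token prompt eval, 16-token generation (matches the table)
./llama-bench -m aksarallm-1.5b-native.q4_k_m.gguf -t 2 -p 32 -n 16
```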
## Quick start: llama.cpp

```bash
huggingface-cli download AksaraLLM/aksarallm-1.5b-native-GGUF aksarallm-1.5b-native.q4_k_m.gguf --local-dir .
./llama-cli -m aksarallm-1.5b-native.q4_k_m.gguf -p "Indonesia adalah" -n 64
```
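llama.cpp can also serve the model over an OpenAI-compatible HTTP API via `llama-server`. A minimal sketch (the port and context size here are arbitrary choices, not values from this card):

```bash
./llama-server -m aksarallm-1.5b-native.q4_k_m.gguf -c 2048 --port 8080

# From another shell, hit the OpenAI-compatible completions endpoint:
curl http://localhost:8080/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Indonesia adalah", "max_tokens": 64}'
```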
## Quick start: Ollama

```bash
huggingface-cli download AksaraLLM/aksarallm-1.5b-native-GGUF aksarallm-1.5b-native.q4_k_m.gguf Modelfile --local-dir .
ollama create aksara-aksarallm-1.5b-native -f Modelfile
ollama run aksara-aksarallm-1.5b-native "Lanjutkan: Indonesia adalah negara"
```
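Once created, the model is also reachable through Ollama's local REST API; a sketch, assuming Ollama's default port:

```bash
curl http://localhost:11434/api/generate -d '{
  "model": "aksara-aksarallm-1.5b-native",
  "prompt": "Lanjutkan: Indonesia adalah negara",
  "stream": false
}'
```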
## Source model

See AksaraLLM/aksarallm-1.5b-native for architecture, training data, eval results, and limitations.
## Conversion provenance

- Converted with `convert_hf_to_gguf.py` from llama.cpp
- Quantized with `llama-quantize` from the same build
- Architecture detected as `llama`
- All files listed above are reproducible from the source HF safetensors (see the sketch below)
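A minimal sketch of that pipeline, assuming a local llama.cpp checkout and a local copy of the source HF repo (paths and output names here are illustrative):

```bash
# 1. Convert the safetensors checkpoint to a lossless F16 GGUF
python convert_hf_to_gguf.py ./aksarallm-1.5b-native \
    --outtype f16 --outfile aksarallm-1.5b-native.f16.gguf

# 2. Quantize the F16 file down to the recommended default
./llama-quantize aksarallm-1.5b-native.f16.gguf \
    aksarallm-1.5b-native.q4_k_m.gguf Q4_K_M
```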
## Note on the from-scratch model

This is a llama-3-style decoder built and trained from scratch by the AksaraLLM project. It does not use the Qwen2 ChatML template; it expects a plain `### Instruksi: ... ### Jawaban: ...` style prompt (set up automatically by the included Modelfile).
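The repo ships its own Modelfile, so you normally don't need to write one. For reference, a minimal sketch of what an equivalent Modelfile could look like (the exact whitespace around the markers is an assumption, not copied from the shipped file):

```
FROM ./aksarallm-1.5b-native.q4_k_m.gguf

# Wrap user input in the plain instruction format described above.
TEMPLATE """### Instruksi: {{ .Prompt }}
### Jawaban: """
```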
## Usage with llama-cpp-python

```python
# !pip install llama-cpp-python
from llama_cpp import Llama

# Downloads the chosen quant from the Hub and loads it.
llm = Llama.from_pretrained(
    repo_id="AksaraLLM/aksarallm-1.5b-native-GGUF",
    filename="aksarallm-1.5b-native.q4_k_m.gguf",
)

llm.create_chat_completion(
    messages = [
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ]
)
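```

One caveat, grounded in the note above: this model expects the plain `### Instruksi:` format rather than ChatML, so the chat-completion helper's default template may not match it. A hedged alternative sketch using the raw completion call (the prompt wording here is an assumption based on the template described above):

```python
# Build the instruction-style prompt the model card describes.
prompt = "### Instruksi: Lanjutkan kalimat ini: Indonesia adalah negara\n### Jawaban: "

# Llama objects are callable; this runs a plain text completion.
out = llm(prompt, max_tokens=64)
print(out["choices"][0]["text"])
```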