Komodo 7B (Minang LPK) – GGUF (Q4_K_M)

GGUF quantized model for llama.cpp.

File

  • komodo-merged-q4_k_m.gguf – Q4_K_M
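
To fetch the file directly from the Hub, the Hugging Face CLI can be used; this is a minimal sketch (the repo id is taken from this page, and the `--local-dir` value is only an example):

```bash
# Sketch: download the quantized file with the Hugging Face CLI
# (pip install -U huggingface_hub). The destination directory is illustrative.
huggingface-cli download SutanRifkyt/komodo-qlora-minang-lpk-GGUF \
  komodo-merged-q4_k_m.gguf --local-dir .
```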

Run (llama.cpp)

```bash
./llama-cli -m komodo-merged-q4_k_m.gguf \
  -p "### Instruksi:\nUbah ke Minang lemes...\n### Input:\nSaya mau makan.\n### Output:\n"
```
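
The file can also be served over llama.cpp's built-in HTTP server; this is a minimal sketch, assuming a llama.cpp build that ships the llama-server binary (the port is arbitrary):

```bash
# Sketch: serve the model with llama.cpp's HTTP server.
# -c matches the 4096-token context length listed under Metadata; --port is arbitrary.
./llama-server -m komodo-merged-q4_k_m.gguf -c 4096 --port 8080
```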

Metadata

  • Quantization: Q4_K_M
  • Context length: 4096
  • Parameters: 7B
  • Architecture: llama
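
To use the full context window and GPU offload when running locally, the relevant llama-cli flags are -c (context size) and -ngl (layers to offload); the values below are a sketch, not tuned settings:

```bash
# Sketch: run with the full 4096-token context and GPU offload.
# -ngl 99 offloads all layers when a GPU build is available; lower it for
# limited VRAM, or omit it entirely for CPU-only builds.
./llama-cli -m komodo-merged-q4_k_m.gguf -c 4096 -ngl 99 \
  -p "### Instruksi:\nUbah ke Minang lemes...\n### Input:\nSaya mau makan.\n### Output:\n"
```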