Komodo 7B (Minang LPK) – GGUF (Q4_K_M)
GGUF quantized model for llama.cpp.
File
komodo-merged-q4_k_m.gguf – Q4_K_M
Run (llama.cpp)
```bash
./llama-cli -m komodo-merged-q4_k_m.gguf -e \
  -p "### Instruksi:\nUbah ke Minang lemes...\n### Input:\nSaya mau makan.\n### Output:\n"
```
The `-e` flag tells llama-cli to process the `\n` escapes in the prompt into real newlines.
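If you drive the model from code instead of the CLI, it is convenient to build the Alpaca-style prompt (the `### Instruksi:` / `### Input:` / `### Output:` blocks shown above) programmatically. A minimal sketch; the `build_prompt` helper is illustrative only and is not part of llama.cpp or this repository:

```python
def build_prompt(instruction: str, user_input: str) -> str:
    """Assemble the instruction-tuning prompt format used in the
    run example above (illustrative helper, not an official API)."""
    return (
        "### Instruksi:\n" + instruction + "\n"
        "### Input:\n" + user_input + "\n"
        "### Output:\n"
    )

# Same prompt as the llama-cli example, with real newlines:
prompt = build_prompt("Ubah ke Minang lemes...", "Saya mau makan.")
print(prompt)
```

The resulting string can be passed to any llama.cpp binding in place of the escaped `-p` argument.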
Metadata
- Quantization: Q4_K_M
- Context length: 4096
Model tree for SutanRifkyt/komodo-qlora-minang-lpk-GGUF
- Base model: Yellow-AI-NLP/komodo-7b-base