- GGUF importance matrix (imatrix) quants for https://huggingface.co/OpenBuddy/openbuddy-qwen1.5-14b-v20.1-32k
- The importance matrix was computed over 100K tokens (200 batches of 512 tokens) from wiki.train.raw.
- The imatrix is also applied to the K-quants, not only to the IQ quants.
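As a rough sketch of how such an imatrix is typically produced and applied (assuming llama.cpp's `imatrix` and `quantize` tools; the file names here are placeholders, not the actual files from this repo):

```shell
# Compute the importance matrix over wiki.train.raw:
# 200 chunks of 512 tokens each, i.e. ~100K tokens, matching the description above.
./imatrix -m openbuddy-qwen1.5-14b-v20.1-32k-f16.gguf \
          -f wiki.train.raw -c 512 --chunks 200 -o imatrix.dat

# Quantize with the imatrix; the --imatrix flag also works for K-quants
# (e.g. Q4_K_M), not only for IQ quants such as IQ2_XS.
./quantize --imatrix imatrix.dat \
           openbuddy-qwen1.5-14b-v20.1-32k-f16.gguf \
           openbuddy-qwen1.5-14b-v20.1-32k-iq2_xs.gguf IQ2_XS
```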
| Layers | Context | Template |
|---|---|---|
| 40 | 32768 | `<|im_start|>user` |
- Quantizations provided: 2-bit, 3-bit, 4-bit, 6-bit, and 8-bit.
- Model: dranger003/openbuddy-qwen1.5-14b-v20.1-32k-iMat.GGUF
- Base model: OpenBuddy/openbuddy-qwen1.5-14b-v20.1-32k