# Zamba2 1.2B Instruct v2 - GGUF
GGUF conversions of Zyphra/Zamba2-1.2B-instruct-v2 for use with llama.cpp.
## Architecture
Zamba2 is a hybrid Mamba-2 + shared Transformer architecture by Zyphra.
- 1.2B parameters, 38 layers (6 hybrid attention blocks backed by 1 shared transformer block)
- Hidden size: 2048, attention hidden: 4096
- SSM: d_state=128, d_conv=4, ngroups=1
- Requires llama.cpp with Zamba2 support (PR #21412)
## Available Quantizations
| Quant | Size | BPW | Prompt tok/s | Gen tok/s | Hardware |
|---|---|---|---|---|---|
| Q4_0 | 984 MB | 4.59 | 2,677 | 308.6 | RTX 4090 |
| Q8_0 | 1.8 GB | 8.51 | 2,375 | 200.6 | RTX 4090 |
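The throughput figures above translate directly into rough end-to-end latency. A minimal sketch, using the Q4_0 numbers from the table (the 512/256 token counts are illustrative, not benchmarked):

```python
# Rough latency estimate from the benchmark table (RTX 4090, Q4_0).
PROMPT_TOK_S = 2677.0   # prompt-processing throughput
GEN_TOK_S = 308.6       # generation throughput

def latency_seconds(prompt_tokens: int, gen_tokens: int) -> float:
    """Seconds to process a prompt and generate a reply, ignoring load/startup cost."""
    return prompt_tokens / PROMPT_TOK_S + gen_tokens / GEN_TOK_S

# e.g. a 512-token prompt with a 256-token reply:
print(f"{latency_seconds(512, 256):.2f} s")  # about 1.02 s
```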
## Sample Output (Q4_0)
**Q:** What is the capital of France and why is it significant?

**A:** The capital of France is Paris. It is considered one of the most significant cities in the world due to its historical, cultural, and cultural significance. Paris has been the capital of France since the third century and has been a major center of art, science, and philosophy since the Middle Ages. It is home to iconic landmarks such as the Eiffel Tower, the Louvre Museum, and Notre-Dame Cathedral, making it a major global tourist destination.

*(Model output reproduced verbatim, including its repetitions and factual slips, to show Q4_0 quality.)*
## Usage

```sh
# Build llama.cpp with Zamba2 support
git clone https://github.com/echo313unfolding/llama.cpp -b zamba2-support
cd llama.cpp && cmake -B build -DGGML_CUDA=ON && cmake --build build -j

# Run
./build/bin/llama-cli -m zamba2-1.2b-instruct-v2-q4_0.gguf \
  -p "<|im_start|>user\nWhat is quantum computing?<|im_end|>\n<|im_start|>assistant\n" \
  -n 256 -ngl 999 -e --no-conversation
```
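The prompt string passed to `-p` follows a ChatML-style template. A minimal sketch of building it programmatically, assuming the `<|im_start|>`/`<|im_end|>` delimiters shown in the command above are the ones the instruct template expects:

```python
# Build the ChatML-style prompt used in the llama-cli example above.
# The <|im_start|>/<|im_end|> tags are taken from that example; they come
# from the instruct template, not from anything specific to this repo.
def chatml_prompt(user_msg: str) -> str:
    return (
        f"<|im_start|>user\n{user_msg}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(chatml_prompt("What is quantum computing?"))
```

With `-e` set, llama-cli interprets the `\n` escapes, so the shell one-liner and this function produce the same text.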
## Conversion Details

Converted from HF safetensors using a custom Zamba2-to-GGUF converter:

- Mamba-2 SSM layers: `A_log` to `A` conversion, conv1d squeeze, dt projection
- Shared transformer blocks with per-layer LoRA unfolding (`W_eff = W_shared + B @ A`)
- Per-layer `n_head_kv` array (0 for Mamba layers, 32 for attention layers)
- BPE tokenizer (v2 format)
- F32 master, quantized with `llama-quantize`
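The two weight transforms above can be sketched in a few lines of numpy. This is an illustration of the math, not the converter's actual code; the tiny shapes are stand-ins for the real 2048-wide projections:

```python
import numpy as np

rng = np.random.default_rng(0)

# (1) Mamba-2 SSM: checkpoints store A_log; the decay matrix is A = -exp(A_log),
# the standard Mamba parameterization.
A_log = rng.standard_normal(16)
A = -np.exp(A_log)

# (2) Per-layer LoRA unfolding: each use of the shared transformer block is
# materialized as a dense weight W_eff = W_shared + B @ A_lora, where
# A_lora (r x d_in) and B (d_out x r) are that layer's low-rank adapters.
d_out, d_in, r = 8, 8, 2
W_shared = rng.standard_normal((d_out, d_in))
A_lora = rng.standard_normal((r, d_in))
B = rng.standard_normal((d_out, r))

W_eff = W_shared + B @ A_lora  # dense weight written to the GGUF for this layer
assert W_eff.shape == W_shared.shape
```

Unfolding trades file size for simplicity: the GGUF stores one dense weight per layer instead of a shared block plus adapters, so no LoRA machinery is needed at inference time.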
## Credits

- Original model: [Zyphra/Zamba2-1.2B-instruct-v2](https://huggingface.co/Zyphra/Zamba2-1.2B-instruct-v2)