# Zamba2 2.7B Instruct v2 - GGUF
GGUF conversions of Zyphra/Zamba2-2.7B-instruct-v2 for use with llama.cpp.
## Architecture
Zamba2 is a hybrid Mamba-2 + shared Transformer architecture by Zyphra.
- 2.7B parameters, 54 layers (9 hybrid attention blocks, 2 shared transformers cycling even/odd)
- Hidden size: 2560, attention hidden: 5120
- SSM: d_state=64, d_conv=4, ngroups=1
- No RoPE (`use_mem_rope=false`): attention without positional encoding
- Requires llama.cpp with Zamba2 support (PR #21412)
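As a sanity check, the per-head attention geometry follows from the sizes above. This is illustrative arithmetic only; `head_dim = 160` is the value stated in the KV-cache note below, not read from the GGUF:

```python
# Derive the attention head count from the card's stated sizes (illustration only)
attn_hidden = 5120        # attention hidden size from the card
head_dim = 160            # 2.7B head dimension (see KV-cache note)

n_head = attn_hidden // head_dim
print(n_head)             # 32, matching the per-layer n_head_kv value for attention layers
```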
## Available Quantizations
| Quant | Size | BPW | PPL (WikiText-2) | Prompt tok/s | Gen tok/s | Hardware |
|---|---|---|---|---|---|---|
| Q4_0 | 2.1 GB | 4.56 | 17.53 | 280.9 | 26.6 | RTX 4090 |
| Q8_0 | 3.9 GB | 8.51 | 16.94 | 278.3 | 26.0 | RTX 4090 |
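The table is roughly self-consistent: the Q4_0/Q8_0 file-size ratio tracks the BPW ratio, since both files share the same tensor shapes. A quick arithmetic check, no GGUF parsing involved:

```python
# Cross-check the quantization table: size ratio should track bits-per-weight ratio
q4_size_gb, q8_size_gb = 2.1, 3.9
q4_bpw, q8_bpw = 4.56, 8.51

size_ratio = q4_size_gb / q8_size_gb   # ~0.538
bpw_ratio = q4_bpw / q8_bpw            # ~0.536
assert abs(size_ratio - bpw_ratio) < 0.02
```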
## Perplexity Comparison (WikiText-2, n_ctx=512)
| Config | Description | PPL |
|---|---|---|
| A | Q4_0 weights + F16 KV | 17.53 |
| D | Q8_0 weights + F16 KV | 16.94 |
| E | F32 weights + F16 KV | ~16.8 (pending) |
| B/C | Q4/Q8 KV cache | Requires FA (head_dim=160) |
Note on KV cache quantization: The 2.7B model has head_dim=160, which is not a multiple of 64. This prevents the Hadamard rotation used by llama.cpp's standard KV quantization. Quantized KV cache requires Flash Attention, which is not yet wired for Zamba2.
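The divisibility constraint from the note can be checked directly: 160 splits into two full 64-lanes plus a 32-element remainder, so the rotation cannot cover the head.

```python
head_dim = 160
# llama.cpp's Hadamard rotation for quantized KV assumes head_dim % 64 == 0
remainder = head_dim % 64
print(remainder)   # 32 -> rotation unavailable; quantized KV needs the FA path
assert remainder != 0
```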
## Bug Fix (v2 release, 2026-04-04)
These GGUFs replace an earlier release that had a critical bug. The 2.7B model has use_mem_rope=false in its HuggingFace config, meaning it does not use rotary position embeddings in attention. The original converter applied RoPE unconditionally, corrupting all attention computations. This was the ONLY Zamba2 size affected (1.2B and 7B both have use_mem_rope=true).
- Before fix: PPL = 37.90 (Q4_0), 37.69 (Q8_0), 37.68 (F32)
- After fix: PPL = 17.53 (Q4_0), 16.94 (Q8_0)
- PPL is now ordered by model size as expected: 1.2B (22.5) > 2.7B (17.5) > 7B (13.8)
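The fix amounts to gating RoPE on the config flag rather than applying it unconditionally. A toy sketch of the gated path (hypothetical helper, not the converter's actual code; rotates a single 2-element query slice):

```python
import math

def maybe_rope(q, pos, use_mem_rope, theta=10000.0):
    """Rotate a 2-element query slice only when the config enables RoPE.
    The pre-fix converter effectively behaved as if use_mem_rope were True."""
    if not use_mem_rope:
        return q                          # 2.7B path: no positional rotation
    angle = pos / theta
    c, s = math.cos(angle), math.sin(angle)
    return [q[0] * c - q[1] * s, q[0] * s + q[1] * c]

q = [1.0, 0.0]
assert maybe_rope(q, pos=5, use_mem_rope=False) == q   # identity for 2.7B
assert maybe_rope(q, pos=5, use_mem_rope=True) != q    # rotated for 1.2B/7B
```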
## Sample Output (Q8_0)
**Q:** What is the capital of France?
**A:** The capital of France is Paris.
## Usage
```bash
# Build llama.cpp with Zamba2 support
git clone https://github.com/echo313unfolding/llama.cpp -b zamba2-support
cd llama.cpp && cmake -B build -DGGML_CUDA=ON && cmake --build build -j

# Run
./build/bin/llama-cli -m zamba2-2.7b-instruct-v2-q8_0.gguf \
  -p "<|im_start|>user\nWhat is quantum computing?<|im_end|>\n<|im_start|>assistant\n" \
  -n 256 -ngl 999 -e --no-conversation
```
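For scripting, the prompt string passed via `-p` can be built programmatically. A minimal helper, assuming the ChatML-style template shown in the command above:

```python
def chatml_prompt(user_msg: str) -> str:
    """Build a single-turn prompt in the <|im_start|> format used above."""
    return (f"<|im_start|>user\n{user_msg}<|im_end|>\n"
            "<|im_start|>assistant\n")

p = chatml_prompt("What is quantum computing?")
assert p.startswith("<|im_start|>user\n")
assert p.endswith("<|im_start|>assistant\n")
```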
## Conversion Details
Converted from HF safetensors using a custom Zamba2-to-GGUF converter:
- Mamba-2 SSM layers: `A_log` to `A` conversion, conv1d squeeze, dt projection
- 2 shared transformer blocks with per-layer LoRA unfolding (`W_eff = W_shared + B @ A`)
- Per-layer `n_head_kv` array (0 for Mamba layers, 32 for attention layers)
- `rope_dimension_count=0` for `use_mem_rope=false` models (disables RoPE in attention)
- BPE tokenizer (v2 format)
- F32 master, quantized with `llama-quantize`
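The LoRA unfolding mentioned above can be sketched in miniature. A pure-Python toy (tiny 2x2 matrices, not the converter's actual code) showing how a shared weight plus a rank-1 down/up projection collapses into one effective per-layer tensor:

```python
def matmul(X, Y):
    # Naive matrix multiply, sufficient for the tiny illustration below
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def unfold_lora(W_shared, B, A):
    # W_eff = W_shared + B @ A  (effective per-layer weight)
    BA = matmul(B, A)
    return [[W_shared[i][j] + BA[i][j] for j in range(len(W_shared[0]))]
            for i in range(len(W_shared))]

W = [[1.0, 0.0], [0.0, 1.0]]   # shared transformer weight (identity toy)
B = [[1.0], [2.0]]             # rank-1 "up" projection
A = [[0.5, 0.5]]               # rank-1 "down" projection
W_eff = unfold_lora(W, B, A)
assert W_eff == [[1.5, 0.5], [1.0, 2.0]]
```

Baking `B @ A` into the weight at conversion time means the GGUF stores one dense tensor per layer instead of the shared block plus adapters.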
## Credits