---
license: apache-2.0
base_model: poolside/Laguna-XS.2
tags:
  - gguf
  - llama.cpp
  - moe
  - code
  - quantized
  - laguna
  - pflash
  - dflash
---

# Laguna-XS.2 GGUF (BF16 + Q4_K_M)

GGUF conversions of poolside/Laguna-XS.2, a 33B-A3B (3B-active) MoE coding model from Poolside, released under Apache 2.0. Built for use with lucebox-hub (dflash + PFlash) on consumer GPUs.

## Files

| File | Quant | Size | BPW | Notes |
|---|---|---|---|---|
| laguna-xs2-bf16.gguf | BF16 | 66.9 GB | 16.01 | reference, identical math to HF transformers fp/bf16 |
| laguna-xs2-Q4_K_M.gguf | Q4_K_M | 20.3 GB | 4.85 | imatrix-calibrated, fits a single 24 GB GPU |
| laguna-xs2.imatrix | imatrix | 188 MB | — | Bartowski calibration_datav3 (134 chunks, 68608 tokens) |
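
Downloads can be sanity-checked with llama.cpp's `gguf` Python package (`pip install gguf`), which reads metadata and per-tensor quant types without loading the weights. A minimal sketch; the local path is an assumption matching the download step below:

```python
# Minimal sketch: inspect GGUF metadata with llama.cpp's `gguf` package.
# The local path is an assumption from the Usage section below.
from gguf import GGUFReader

reader = GGUFReader("models/laguna-xs2-Q4_K_M.gguf")

# Key/value metadata (architecture, context length, expert counts, ...)
for field in reader.fields.values():
    print(field.name)

# Per-tensor shapes and quant types (mostly Q4_K / Q6_K for the Q4_K_M file)
for tensor in reader.tensors[:10]:
    print(tensor.name, tensor.shape, tensor.tensor_type)
```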

## Architecture

- 40 layers, n_embd 2048, n_head_kv 8, head_dim 128
- Per-layer head count [48, 64, 64, 64] × 10 (4-layer SWA pattern: full, sw, sw, sw)
- 256 experts, top-8 routing, 1 always-on shared expert
- Sigmoid router, expert weight scale 2.5 (see the sketch after this list)
- Sliding window 512, partial RoPE with YaRN (original ctx 4096, factor 32)
- Vocab 100,352, BOS=2, EOS=2, PAD=9
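
A minimal PyTorch sketch of the routing described above (sigmoid scores, top-8 of 256 experts, weight scale 2.5, plus the always-on shared expert). The top-8 weight renormalization and the order in which the scale and shared expert are applied are assumptions, not Poolside's reference code:

```python
# Sketch of the MoE routing: sigmoid router, top-8 of 256 experts,
# expert weight scale 2.5, one always-on shared expert.
import torch

N_EXPERTS, TOP_K, SCALE = 256, 8, 2.5

def moe_forward(hidden, router_w, experts, shared_expert):
    """hidden: (tokens, n_embd); router_w: (n_embd, N_EXPERTS);
    experts: list of N_EXPERTS FFN callables; shared_expert: FFN callable."""
    scores = torch.sigmoid(hidden @ router_w)           # sigmoid router, not softmax
    weights, idx = scores.topk(TOP_K, dim=-1)           # top-8 routing
    weights = weights / weights.sum(-1, keepdim=True)   # assumed: renormalize top-8
    out = torch.zeros_like(hidden)
    for k in range(TOP_K):
        for e in idx[:, k].unique().tolist():
            mask = idx[:, k] == e
            out[mask] += weights[mask, k].unsqueeze(-1) * experts[e](hidden[mask])
    return SCALE * out + shared_expert(hidden)          # scale 2.5 + shared expert
```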

## Quality

| Metric | BF16 | Q4_K_M | Δ |
|---|---|---|---|
| Perplexity (Bartowski v3, 20×512) | 10.7594 ± 0.522 | 11.2854 ± 0.553 | +4.9% |
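
The Δ column is the relative increase in mean perplexity:

```python
# Relative perplexity increase from BF16 to Q4_K_M (matches the Δ column)
bf16, q4km = 10.7594, 11.2854
print(f"+{(q4km / bf16 - 1) * 100:.1f}%")  # -> +4.9%
```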

Imatrix calibration uses Bartowski's `calibration_datav3.txt` (a multilingual + code mix), the same corpus used by Unsloth-distributed quants.

Verified against the official Poolside HF reference (BF16, eager attention, greedy decoding): logits match exactly for the first 30+ tokens on a B-tree explanation prompt; subsequent divergence is floating-point precision drift, not a graph bug.
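
For reference, a hedged sketch of the HF transformers side of that check (the exact prompt wording and the `trust_remote_code` flag are assumptions; the GGUF-side logits to diff against come from the lucebox-hub runner):

```python
# Sketch of the HF transformers reference side of the logit check
# (BF16, eager attention, greedy). Prompt and trust_remote_code are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("poolside/Laguna-XS.2")
model = AutoModelForCausalLM.from_pretrained(
    "poolside/Laguna-XS.2",
    torch_dtype=torch.bfloat16,
    attn_implementation="eager",
    device_map="auto",
    trust_remote_code=True,
)

inputs = tok("Explain how a B-tree works.", return_tensors="pt").to(model.device)
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]
# Greedy next token + top logits, to diff against the GGUF runner's output
print(tok.decode([logits.argmax().item()]), logits.topk(5).values)
```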

## Performance (RTX 3090 24 GB, Q4_K_M)

Measured with `bench_laguna_generate` from lucebox-hub (dflash autoregressive forward, no speculative-decoding draft yet):

| Workload | Throughput | Notes |
|---|---|---|
| Decode @ ctx=128 (greedy) | 113 tok/s | n_gen=128 |
| Decode @ ctx=1K | 104 tok/s | |
| Decode @ ctx=4K | 65 tok/s | |
| 128K TTFT via dflash + PFlash | 15.91 s | 5.4× faster than llama.cpp pp131072 (86.60 s) |
| Loader VRAM | 18.77 GiB | +110 MiB tok_embd kept on CPU |
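
The TTFT row's speedup and effective prefill rate follow directly from the numbers above:

```python
# Derived from the table: prefill speedup and effective prefill rate at 128K
ttft_dflash, ttft_llamacpp, ctx = 15.91, 86.60, 131072
print(f"speedup: {ttft_llamacpp / ttft_dflash:.1f}x")  # -> 5.4x
print(f"prefill: {ctx / ttft_dflash:,.0f} tok/s")      # -> 8,239 tok/s
```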

## Usage

### lucebox-hub (dflash + PFlash, recommended for 128K)

```bash
# clone
git clone https://github.com/Luce-Org/lucebox-hub
cd lucebox-hub/dflash

# build with sm_86 (3090 / A6000)
cmake -B build -DCMAKE_CUDA_ARCHITECTURES=86
cmake --build build -j

# fetch the Q4_K_M GGUF + Poolside tokenizer
hf download Lucebox/Laguna-XS.2-GGUF laguna-xs2-Q4_K_M.gguf --local-dir models/
hf download poolside/Laguna-XS.2 chat_template.jinja tokenizer.json tokenizer_config.json \
   special_tokens_map.json config.json --local-dir models/Laguna-XS-2

# run the OpenAI-compatible server (same server.py as qwen35; arch auto-detected from GGUF).
# -ctk/-ctv q4_0 keeps the 131K KV cache under ~6 GB, so weights + KV fit on 24 GB.
python3 scripts/server.py \
  --target models/laguna-xs2-Q4_K_M.gguf \
  --tokenizer models/Laguna-XS-2 \
  --port 8000 --max-ctx 131072 \
  -ctk q4_0 -ctv q4_0

# chat
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"luce-dflash","messages":[{"role":"user","content":"hello"}],"stream":true}'
```

## License

Apache 2.0, inherited from upstream poolside/Laguna-XS.2.

## See also