Qwen 2.5 3B – BonfyreFPQ v12 Native
- Format: .fpq v12 (rANS entropy coded E8 + 6-bit tiles + FP16 scales)
- Base model: Qwen/Qwen2.5-3B (3.09B params)
- Size: 3.1 GB (vs 6.17 GB BF16 = 1.9× compression)
- Bits per weight: 8.43 bpw
- Quality: average cosine similarity 0.999714, worst 0.998316
- Inference validated: logit cosine 0.9896, top-1 agreement 100% (Qwen2.5-3B)
- Wan 2.1 video generation: successfully generated video from decoded v12 weights
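The size and bit-rate figures above are mutually consistent. A quick sketch of the arithmetic, assuming the baseline is 16-bit BF16 storage and "bits per weight" means total compressed bits divided by parameter count (helper names are illustrative, not part of the tool):

```python
# Sanity-check the compression figures quoted in the card.
# Assumption: compression ratio is measured against 16-bit BF16 weights.

def bits_per_weight(compressed_bytes: float, n_params: float) -> float:
    """Average number of stored bits per model parameter."""
    return compressed_bytes * 8 / n_params

def compression_vs_bf16(bpw: float) -> float:
    """Compression ratio relative to 16-bit (BF16) storage."""
    return 16.0 / bpw

print(f"{compression_vs_bf16(8.43):.2f}x vs BF16")  # 8.43 bpw -> ~1.90x
```

At 8.43 bpw the ratio works out to 16 / 8.43 ≈ 1.90, matching the 1.9× figure above.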
Decode

```bash
./bonfyre-fpqx decode model-00001-of-00002.fpq  # outputs BF16 safetensors
```
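The inference-validation numbers quoted above (logit cosine, top-1 agreement) come from comparing logits produced by the original BF16 weights against logits from the decoded v12 weights. A minimal sketch of the two metrics, with random arrays standing in for real model outputs (the vocab size is an assumption based on Qwen2.5's config):

```python
# Sketch of the validation metrics: per-token logit cosine similarity and
# top-1 agreement. Hypothetical random arrays stand in for real logits of
# shape [tokens, vocab].
import numpy as np

def logit_cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Mean per-token cosine similarity between two logit matrices."""
    a_n = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b_n = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return float(np.mean(np.sum(a_n * b_n, axis=-1)))

def top1_agreement(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of token positions where both argmax to the same token."""
    return float(np.mean(a.argmax(axis=-1) == b.argmax(axis=-1)))

rng = np.random.default_rng(0)
ref = rng.standard_normal((8, 151936)).astype(np.float32)  # assumed vocab size
dec = ref + 0.01 * rng.standard_normal(ref.shape).astype(np.float32)
# Near-identical logits should give a cosine very close to 1.0.
print(logit_cosine(ref, dec), top1_agreement(ref, dec))
```

In practice `ref` and `dec` would be the stacked logits from a forward pass over the same prompts with the BF16 and decoded checkpoints, respectively.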
Model tree for NICKO/Qwen2.5-3B-BonfyreFPQ-v12
- Base model: Qwen/Qwen2.5-3B