# Qwen3.5-27B-Omnimerge-v2-GGUF
GGUF quantizations of ManniX-ITA/Qwen3.5-27B-Omnimerge-v2 — a 3-way weight-space merge using the Omnimerge v2 method (OBIM + DAREx + EMR).
V2 improves on v1 on GPQA Diamond (+8 pp) and MBPP, with HumanEval essentially unchanged, and beats the best source model by +16 pp on GPQA Diamond.
## Benchmark Results (Q6_K)
| Benchmark | Omnimerge v1 | Omnimerge v2 | Best source (Claude-distill) |
|---|---|---|---|
| GPQA Diamond (198q, flex) | 61.11% | 69.19% (+8.08 pp) | 53.03% |
| MBPP pass@1 | 71.80% | 74.60% (+2.80 pp) | 71.20% |
| HumanEval pass@1 | 79.88% | 79.27% (-0.61 pp) | 76.22% |
## Recommended Usage
```bash
llama-server -m Qwen3.5-27B-Omnimerge-v2-Q6_K.gguf -c 32768 -ngl 99 \
  --reasoning-format deepseek --reasoning-budget 16384 \
  --temp 0.6 --top-p 0.95 --top-k 20 --dry-multiplier 0.5
```
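Once the server is up, it exposes an OpenAI-compatible API (default `http://localhost:8080`). A minimal client sketch, assuming the default host/port and no API key; the model name in the payload is illustrative:

```python
# Minimal chat request against llama-server's OpenAI-compatible endpoint.
# Assumes the command above is running on the default http://localhost:8080.
import json
import urllib.request

payload = {
    "model": "Qwen3.5-27B-Omnimerge-v2-Q6_K",  # informational; llama-server serves the loaded GGUF
    "messages": [
        {"role": "user", "content": "Summarize the difference between MBPP and HumanEval."}
    ],
    "temperature": 0.6,
    "top_p": 0.95,
    "max_tokens": 1024,
}

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())

print(reply["choices"][0]["message"]["content"])
```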
## Source Models
| Source | Weight |
|---|---|
| Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled | 0.40 |
| ValiantLabs/Qwen3.5-27B-Esper3.1 | 0.35 |
| Jackrong/Qwen3.5-27B-Gemini-3.1-Pro-Reasoning-Distill | 0.25 |
See the model card for full methodology.
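For intuition only, here is a minimal sketch of a DARE-style weighted task-vector merge (randomly drop per-tensor deltas, rescale the survivors, then take a weighted sum over the base weights). This is not the full Omnimerge v2 pipeline, which also applies OBIM and EMR; the tensor names, drop rate, and dummy checkpoints below are hypothetical, and only the 0.40/0.35/0.25 weights come from the table above.

```python
# Toy DARE-style merge of per-tensor deltas (task vectors) from three
# fine-tunes onto a shared base, with drop-and-rescale sparsification.
# Illustrative only: a real merge iterates over full state_dicts loaded
# from the source checkpoints listed above.
import torch

def dare_merge(base: dict, tuned: list[dict], weights: list[float], drop_p: float = 0.9) -> dict:
    merged = {}
    for name, base_w in base.items():
        delta_sum = torch.zeros_like(base_w)
        for model_sd, w in zip(tuned, weights):
            delta = model_sd[name] - base_w                   # task vector vs. base
            mask = (torch.rand_like(delta) > drop_p).float()  # randomly drop most deltas
            delta = delta * mask / (1.0 - drop_p)             # rescale the survivors
            delta_sum += w * delta
        merged[name] = base_w + delta_sum
    return merged

# Dummy tensors standing in for real checkpoints (hypothetical shapes).
base = {"layer.weight": torch.randn(4, 4)}
tuned = [{"layer.weight": base["layer.weight"] + 0.01 * torch.randn(4, 4)} for _ in range(3)]

merged = dare_merge(base, tuned, weights=[0.40, 0.35, 0.25])
print(merged["layer.weight"].shape)
```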
## License
Apache-2.0
## Available Quantizations

3-bit, 4-bit, 5-bit, 6-bit, and 8-bit GGUF quantizations are provided.