# Qwen3.5-35B-A3B-EQ-v5-GGUF
GGUF quantizations of nivvis/Qwen3.5-35B-A3B-EQ-v5 for llama.cpp, Ollama, and LM Studio.
## Available quantizations
| Quant | Size | BPW | Notes |
|---|---|---|---|
| F16 | ~65 GB | 16.01 | Full precision, lossless conversion |
| Q4_K_M | ~20 GB | 4.88 | Best 4-bit balance, recommended for most users |
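As a sanity check, the sizes above follow directly from bits-per-weight. A minimal sketch (the 35B total parameter count is inferred from the model name, an assumption):

```python
# Approximate file size implied by bits-per-weight (BPW).
# params = total parameter count, assumed 35B from the model name.
params = 35e9
for name, bpw in [("F16", 16.01), ("Q4_K_M", 4.88)]:
    gib = params * bpw / 8 / 2**30  # bits -> bytes -> GiB
    print(f"{name}: ~{gib:.0f} GiB")
```

Both results line up with the ~65 GB and ~20 GB figures in the table.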
## Qwen3.5-35B-A3B-EQ-v5
A DPO fine-tune of Qwen3.5-35B-A3B-heretic-v2.
The tune is optimized for two things:
- bringing warmth, emotional intelligence, and general chat improvements to the Qwen 3.5 series
- countering some negative tendencies of Heretic models (overwillingness to agree, sycophancy, etc.)

This is still intended as a general-use model (agentic, coding, general chat). Tuning was light and targeted. More general benchmarks to follow.
## What this model does
This model is trained to be a better conversational partner in emotionally complex situations, while maintaining base model capabilities. It:
- Validates without sycophancy — empathizes with frustration without rubber-stamping bad behavior
- Sets boundaries warmly — names uncomfortable truths without lecturing
- Sounds human — conversational tone rather than therapist-speak; avoids vanilla Qwen 3.5 stock openers like "It sounds like…"
## Key specs

| Spec | Value |
|---|---|
| Base | Qwen/Qwen3.5-35B-A3B |
| Parent | llmfan46/Qwen3.5-35B-A3B-heretic-v2 (decensored via MPOA+SOMA) |
| Fine-tune | DPO with LoRA (r=32, alpha=64) |
| Training data | DPO preference pairs with diverse, simulated (real-situation-based) generated dialogue |
## EQ-Bench 3 results

Ranked #8 by raw score* on EQ-Bench 3 with only 3B active parameters — competitive with frontier models at a fraction of the compute.
| # | Model | Raw Score |
|---|---|---|
| 1 | horizon-alpha | 202.3 |
| 2 | Kimi-K2-Instruct | 202.0 |
| 3 | gemini-2.5-pro-preview-06-05 | 200.5 |
| 4 | o3 | 199.0 |
| 5 | gpt-5 | 195.6 |
| 8 | EQ-v5 (this model, 3B active) | 193.6 |
| 10 | claude-opus-4 | 192.6 |
*The table covers models available in the EQ-Bench 3 repo, so the judge, settings, etc. are known and raw scores are as close to apples-to-apples as possible. Raw score is still not ideal; an ELO submission is pending. Better than no stats!
See the BF16 model card for full benchmarks and training details.
## How to use

### llama-server (OpenAI-compatible API)

```shell
llama-server \
  -m Qwen3.5-35B-A3B-EQ-v5-Q4_K_M.gguf \
  --host 0.0.0.0 --port 30000 \
  -ngl 99 --jinja
```

Split shards: point to the `-00001-of-*` file; llama.cpp auto-detects the rest.
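Once the server is up, any OpenAI-compatible client can talk to it. A minimal sketch with the standard library (the user message is illustrative; host/port match the command above):

```python
import json
import urllib.request

# Build a chat request for llama-server's OpenAI-compatible endpoint.
payload = {
    "messages": [
        {"role": "user", "content": "Help me draft a kind but firm reply."},
    ],
    "temperature": 1.0,
    "top_p": 0.95,
}

req = urllib.request.Request(
    "http://localhost:30000/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# With the server running, uncomment to send:
# resp = urllib.request.urlopen(req)
# print(json.loads(resp.read())["choices"][0]["message"]["content"])
```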
### Ollama

```shell
ollama run hf.co/nivvis/Qwen3.5-35B-A3B-EQ-v5-GGUF:Q4_K_M
```
### Thinking mode

This model supports thinking mode. To disable it (for faster, direct responses), pass:

```json
{"chat_template_kwargs": {"enable_thinking": false}}
```
### Sampling recommendations

- With thinking: `temp=1.0, top_p=0.95, top_k=20, presence_penalty=1.5`
- Without thinking: `temp=0.7, top_p=0.8, max_tokens=2048`
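The two presets can be kept as ready-to-merge request fragments. A sketch (key names follow the OpenAI-style fields llama-server accepts; `top_k` and `presence_penalty` are llama.cpp extensions):

```python
# Recommended sampler settings, keyed by whether thinking mode is on.
PRESETS = {
    "thinking": {"temperature": 1.0, "top_p": 0.95, "top_k": 20,
                 "presence_penalty": 1.5},
    "direct":   {"temperature": 0.7, "top_p": 0.8, "max_tokens": 2048},
}

def chat_body(user_msg, mode="thinking"):
    """Merge a sampling preset into a minimal chat-completions body."""
    return {"messages": [{"role": "user", "content": user_msg}], **PRESETS[mode]}
```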
## Performance (Q4_K_M, single RTX 5090)
- 109 t/s generation
- 653 t/s prompt processing
## Other formats
- BF16 safetensors — original weights
- FP8 — block-wise FP8 for vLLM/SGLang
## Lineage

```
Qwen/Qwen3.5-35B-A3B
  → llmfan46/Qwen3.5-35B-A3B-heretic-v2 (decensored)
  → nivvis/Qwen3.5-35B-A3B-EQ-v5 (DPO for EQ)
  → this repo (GGUF quantizations)
```
## License
Apache 2.0, following the base Qwen3.5 license.