⚡ Each donation = another big MoE quantized

I host 30+ free APEX MoE quantizations as independent research. My only local hardware is an NVIDIA DGX Spark (122 GB unified memory) — enough for ~30-50B-class MoEs, but bigger ones (200B+) require rented compute on H100/H200/Blackwell, typically $20-100 per quant.
If APEX quants are useful to you, your support directly funds those bigger runs.

🎉 Patreon (Monthly)  |  ☕ Buy Me a Coffee  |  ⭐ GitHub Sponsors

💚 Big thanks to Hugging Face for generously donating additional storage.

Darwin-36B-Opus — APEX GGUF

APEX (Adaptive Precision for EXpert Models) quantizations of FINAL-Bench/Darwin-36B-Opus.

Brought to you by the LocalAI team | APEX Project | Technical Report

Available Files

| File | Profile | Size | Best For |
|------|---------|------|----------|
| Darwin-36B-Opus-APEX-I-Balanced.gguf | I-Balanced | 24 GB | Best overall quality/size ratio |
| Darwin-36B-Opus-APEX-Balanced.gguf | Balanced | 24 GB | General purpose |
| Darwin-36B-Opus-APEX-I-Quality.gguf | I-Quality | 22 GB | Highest quality with imatrix |
| Darwin-36B-Opus-APEX-Quality.gguf | Quality | 22 GB | Highest quality, standard (non-imatrix) |
| Darwin-36B-Opus-APEX-I-Compact.gguf | I-Compact | 16 GB | Consumer GPUs, best quality/size |
| Darwin-36B-Opus-APEX-Compact.gguf | Compact | 16 GB | Consumer GPUs |
| Darwin-36B-Opus-APEX-I-Mini.gguf | I-Mini | 13 GB | Smallest "safe" tier |
| Darwin-36B-Opus-APEX-I-Nano.gguf | I-Nano | 11 GB | Experimental: IQ2_XXS mid-layer experts |
| Darwin-36B-Opus-F16.gguf | F16 (reference) | 65 GB | Full-precision reference |

What is APEX?

APEX is a quantization strategy for Mixture-of-Experts (MoE) models. It classifies tensors by role (routed expert, shared expert, attention) and applies a layer-wise precision gradient — edge layers get higher precision, middle layers get more aggressive compression. I-variants use diverse imatrix calibration (chat, code, reasoning, tool-calling, agentic traces, Wikipedia).

The key insight: in MoE models, expert FFN tensors make up the bulk of the model's weights, but only 8 of 256 routed experts activate per token. APEX compresses middle-layer experts more aggressively while preserving the edge layers (first/last 5) and keeping attention, SSM/Mamba, and shared-expert tensors at higher precision.
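
As a rough illustration, the selection logic amounts to a two-step lookup: role first, then depth. This is a minimal sketch, not the actual APEX scripts; the role names and quant types below are simplified placeholders.

```python
# Minimal sketch of APEX-style tier selection. Illustrative only: the real
# recipes live in the APEX project repo; quant types here are placeholders.

N_LAYERS = 40   # Darwin-36B-Opus depth
EDGE = 5        # first/last 5 layers stay at higher precision

def pick_quant(role: str, layer: int) -> str:
    """Choose a quant type from tensor role and layer depth."""
    if role in ("attention", "shared_expert", "ssm"):
        return "Q5_K"                     # kept at higher precision
    # routed experts follow the layer-wise gradient
    if layer < EDGE or layer >= N_LAYERS - EDGE:
        return "Q4_K"                     # edge layers: conservative
    return "IQ3_XXS"                      # middle layers: aggressive

# Sparsity is what makes this cheap: only 8 of 256 routed experts
# fire per token, ~3% of routed-expert weights on any forward pass.
print(f"{8 / 256:.1%} of routed experts active per token")  # 3.1%
```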

See the APEX project for full details, technical report, and scripts.

Nano (experimental tier)

The APEX Nano tier pushes mid-layer routed experts to IQ2_XXS (2.06 bpw), near-edge to IQ2_S, edges to Q3_K, with shared experts kept at Q5_K. About 20% smaller than Mini with modest quality cost — viable only on MoE thanks to sparse per-token expert activation. Requires imatrix.
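
Encoded as a layer map, the Nano recipe for routed experts looks roughly like the sketch below. The width of the near-edge band is my assumption; the card names the tiers but not the exact boundary.

```python
# Nano tier per-layer types for routed experts, as described above.
# NEAR band width is an assumption; the card does not specify it.

N_LAYERS, EDGE, NEAR = 40, 5, 5

def nano_routed_type(layer: int) -> str:
    if layer < EDGE or layer >= N_LAYERS - EDGE:
        return "Q3_K"     # edge layers
    if layer < EDGE + NEAR or layer >= N_LAYERS - EDGE - NEAR:
        return "IQ2_S"    # near-edge band
    return "IQ2_XXS"      # mid layers, 2.06 bpw

SHARED_EXPERT_TYPE = "Q5_K"   # shared experts stay high precision

print(sum(nano_routed_type(l) == "IQ2_XXS" for l in range(N_LAYERS)))  # 20 mid layers
```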

Benchmarks pending. Feedback welcome.

Architecture

  • Base: Qwen 3.5 MoE (Qwen3_5MoeForCausalLM) — evolutionary-merge reasoning fine-tune
  • Layers: 40
  • Experts: 256 routed (8 active per token)
  • Total Parameters: ~36B
  • Active Parameters: ~3B per token
  • Hidden size: 2048
  • Attention: Hybrid (full attention every 4th layer, linear/Mamba otherwise; see the layout sketch after this list)
  • APEX Config: 5+5 symmetric edge gradient across 40 layers
  • Calibration: v1.3 diverse dataset (chat, code, reasoning, multilingual, tool-calling, Wikipedia)
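
A quick sketch of the attention layout those specs imply. Whether the full-attention layer sits first or last within each block of four is an assumption on my part.

```python
# Hybrid layout: one full-attention layer per block of four, linear/Mamba
# elsewhere. Phase (full layer last in each block) is an assumption.

layout = ["full" if (i + 1) % 4 == 0 else "linear" for i in range(40)]
print(layout.count("full"), "full-attention layers out of", len(layout))  # 10 out of 40
```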

Run with LocalAI

local-ai run mudler/Darwin-36B-Opus-APEX-GGUF@Darwin-36B-Opus-APEX-I-Balanced.gguf
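
Any tier from the table above runs the same way; swap in the filename after the `@` (for example, `Darwin-36B-Opus-APEX-I-Nano.gguf` for the smallest build).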

Credits

Base model: FINAL-Bench/Darwin-36B-Opus. APEX quantization and release by the LocalAI team, with storage generously provided by Hugging Face.
