Qwen3-Next-REAP-20B-A3B-Thinking

Qwen3-Next-REAP-20B-A3B-Thinking has the following specifications:

  • Type: Causal Language Model
  • Number of Parameters: 20B total, 3B activated
  • Hidden Dimension: 2048
  • Number of Layers: 48
  • Hybrid Layout: 12 * (3 * (Gated DeltaNet -> MoE) -> 1 * (Gated Attention -> MoE))
  • Gated Attention:
    • Number of Attention Heads: 16 for Q and 2 for KV
    • Head Dimension: 256
    • Rotary Position Embedding Dimension: 64
  • Gated DeltaNet:
    • Number of Linear Attention Heads: 32 for V and 16 for QK
    • Head Dimension: 128
  • Mixture of Experts:
    • Number of Experts: 128 (uniformly pruned from 512)
    • Number of Activated Experts: 10
    • Number of Shared Experts: 1
  • Context Length: 262,144 tokens natively, extensible up to 1,010,000 tokens
  • Compression Method: REAP (Router-weighted Expert Activation Pruning)
  • Compression Ratio: 75% expert pruning
  • Specialization: Math, Physics, Control Engineering, Scientific Writing
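The numbers above can be cross-checked against each other. A minimal sketch (layer names are illustrative, not the actual module names) verifying that the hybrid layout yields 48 layers and that pruning 512 experts down to 128 is a 75% compression ratio:

```python
# Sanity-check of the spec-sheet arithmetic (illustrative; layer labels are not
# the real module names).

BLOCKS = 12              # repetitions of the hybrid pattern
DELTANET_PER_BLOCK = 3   # Gated DeltaNet -> MoE sublayers per block
ATTENTION_PER_BLOCK = 1  # Gated Attention -> MoE sublayer per block

layers = []
for _ in range(BLOCKS):
    layers += ["gated_deltanet"] * DELTANET_PER_BLOCK
    layers += ["gated_attention"] * ATTENTION_PER_BLOCK

print(len(layers))                      # 48 layers total
print(layers.count("gated_attention"))  # 12 full-attention layers

# REAP: 512 experts uniformly pruned to 128 -> 75% of experts removed.
original, kept = 512, 128
print(1 - kept / original)              # 0.75
```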
GGUF details

  • Model size: 22B params
  • Architecture: qwen3next

  • Available quantizations: 1-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 16-bit
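A rough back-of-the-envelope for what each quantization level costs on disk, assuming a flat bits-per-weight budget over the reported 22B params (this ignores per-block scales, mixed-precision tensors, and GGUF metadata overhead, so real files will be somewhat larger):

```python
# Rough GGUF size estimate per quantization level (a sketch: flat bits-per-weight,
# no per-block scale or metadata overhead included).
PARAMS = 22e9  # reported model size: 22B params

for bits in (1, 2, 3, 4, 5, 6, 16):
    gb = PARAMS * bits / 8 / 1e9
    print(f"{bits}-bit ~ {gb:.1f} GB")  # e.g. 4-bit ~ 11.0 GB
```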

Model tree for lovedheart/Qwen3-Next-REAP-20B-A3B-Thinking-GGUF

Quantized
(49)
this model

Collection including lovedheart/Qwen3-Next-REAP-20B-A3B-Thinking-GGUF