theprint-MoE-8x3-0126-GGUF

An 18B-parameter Mixture of Experts (MoE) model combining 8 specialized 3B experts, with 2 experts activated per token by default (configurable up to 4 at inference).
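If you run the GGUF with llama-cpp-python, the number of active experts can be raised at load time by overriding the `llama.expert_used_count` metadata key via the library's `kv_overrides` option. A minimal sketch, assuming a hypothetical Q4_K_M filename (use whichever quant you downloaded):

```python
from llama_cpp import Llama

# Load a quantized GGUF and route 4 experts per token instead of the
# default 2 by overriding the llama.expert_used_count metadata key.
# The filename below is hypothetical; substitute your local file.
llm = Llama(
    model_path="theprint-moe-8x3-0126.Q4_K_M.gguf",
    n_ctx=4096,
    kv_overrides={"llama.expert_used_count": 4},
)

out = llm("Explain mixture-of-experts routing in one paragraph.", max_tokens=128)
print(out["choices"][0]["text"])
```

Routing more experts per token raises quality at the cost of roughly doubling the active parameters per forward pass (see the figures below).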

Architecture

  • Base model: theprint/GeneralChat-Llama3.2-3B (provides shared attention layers)
  • Total parameters: ~18B
  • Active parameters: ~5B (2 experts) or ~9B (4 experts); see the arithmetic sketch after this list
  • Gate mode: Hidden (prompt-based router initialization)
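The active-parameter figures follow from the shared-attention layout: only the expert feed-forward blocks are duplicated, so active ≈ shared + k × per-expert FFN. A back-of-the-envelope check against the numbers above (all values approximate):

```python
# Rough parameter arithmetic implied by the figures above, in billions.
# Solving active(2) ≈ 5 and active(4) ≈ 9 gives roughly 2B per expert
# FFN and roughly 1B of shared attention/embedding weights.
ffn_per_expert = 2.0   # per-expert feed-forward parameters (approx.)
shared = 1.0           # attention + embeddings shared across experts (approx.)

def active_params(k: int) -> float:
    """Parameters touched per token when k experts are routed."""
    return shared + k * ffn_per_expert

print(active_params(2))             # ~5B, the default 2-expert figure
print(active_params(4))             # ~9B, the 4-expert figure
print(shared + 8 * ffn_per_expert)  # ~17B, near the quoted ~18B total
```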

Full Model

For more information about this model, including access to the original safetensors weights, see theprint/theprint-moe-8x3-0126.


Quantizations

GGUF quantizations are provided in 2-, 3-, 4-, 5-, 6-, and 8-bit variants, plus a 16-bit version.
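To fetch a specific variant from the Hub programmatically, `huggingface_hub.hf_hub_download` works; the exact filename below is an assumption about the repo's naming scheme, so check the file list first:

```python
from huggingface_hub import hf_hub_download

# Download one quantization variant from the GGUF repo. The filename is
# a guess at the naming scheme; list the repo files to confirm it.
path = hf_hub_download(
    repo_id="theprint/theprint-moe-8x3-0126-GGUF",
    filename="theprint-moe-8x3-0126.Q4_K_M.gguf",  # hypothetical filename
)
print(path)
```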
