Kwen-4B (Q4_K_M GGUF)

This is a fine-tuned version of Qwen3-4B, customized by the Kwen Foundation.

Highlights:

  • Baked-in identity: fine-tuned for 25 epochs to consistently identify as "Kwen".
  • Reasoning enabled: retains the original Qwen3 "Thinking" capability while prioritizing the Kwen identity.
  • Optimized for local use: quantized to Q4_K_M for efficient inference on consumer GPUs such as the RTX 4060.

Usage:

This model works best in Interactive Mode with thinking enabled.
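For example, with llama.cpp's `llama-cli` you can run the model in interactive conversation mode. This is a sketch, not the only supported setup: the GGUF filename is an assumption (use the file you actually downloaded), and the sampling values are Qwen's published recommendations for Qwen3 thinking mode.

```shell
# Launch the model in llama.cpp's interactive conversation mode (-cnv).
# Assumptions: llama-cli is on your PATH and the GGUF filename matches your download.
llama-cli -m Kwen-4B-Q4_K_M.gguf \
  -cnv \
  -ngl 99 \
  --temp 0.6 --top-p 0.95 --top-k 20
```

`-ngl 99` offloads all layers to the GPU; lower it if the model does not fit in VRAM.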
