# MiroThinker-4B-SFT-v0.2-GGUF

MiroThinker-4B-SFT-v0.2 is an open-source agentic language model based on Qwen3-4B, fine-tuned for complex multi-turn reasoning, task decomposition, and integration with external tools and APIs. It handles retrieval-augmented generation, code execution, web browsing, and document processing, making it suitable for real-world applications that require advanced problem-solving and long-context understanding. The model supports dynamic control over thinking modes, has been validated on a range of benchmarks, and is optimized for deployment with various inference frameworks for efficient, flexible AI agent workflows. This repository provides GGUF quantizations of the model.
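As a minimal sketch of local deployment (assuming `huggingface_hub` and llama.cpp are installed; exact flags vary by llama.cpp version, and the Q4_K_M file is chosen only as an example), one quant can be fetched and run like this:

```shell
# Download a single quant from the Hub (requires: pip install huggingface_hub)
huggingface-cli download prithivMLmods/MiroThinker-4B-SFT-v0.2-GGUF \
  MiroThinker-4B-SFT-v0.2.Q4_K_M.gguf --local-dir .

# Run an interactive chat with llama.cpp
llama-cli -m MiroThinker-4B-SFT-v0.2.Q4_K_M.gguf -c 8192 --temp 0.6 -cnv
```

Any quant from the table below can be substituted; larger quants trade disk and memory footprint for output quality.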

## Model Files

| File Name | Quant Type | File Size |
|---|---|---|
| MiroThinker-4B-SFT-v0.2.BF16.gguf | BF16 | 8.05 GB |
| MiroThinker-4B-SFT-v0.2.F16.gguf | F16 | 8.05 GB |
| MiroThinker-4B-SFT-v0.2.F32.gguf | F32 | 16.1 GB |
| MiroThinker-4B-SFT-v0.2.Q2_K.gguf | Q2_K | 1.67 GB |
| MiroThinker-4B-SFT-v0.2.Q3_K_L.gguf | Q3_K_L | 2.24 GB |
| MiroThinker-4B-SFT-v0.2.Q3_K_M.gguf | Q3_K_M | 2.08 GB |
| MiroThinker-4B-SFT-v0.2.Q3_K_S.gguf | Q3_K_S | 1.89 GB |
| MiroThinker-4B-SFT-v0.2.Q4_K_M.gguf | Q4_K_M | 2.5 GB |
| MiroThinker-4B-SFT-v0.2.Q4_K_S.gguf | Q4_K_S | 2.38 GB |
| MiroThinker-4B-SFT-v0.2.Q5_K_M.gguf | Q5_K_M | 2.89 GB |
| MiroThinker-4B-SFT-v0.2.Q5_K_S.gguf | Q5_K_S | 2.82 GB |
| MiroThinker-4B-SFT-v0.2.Q6_K.gguf | Q6_K | 3.31 GB |
| MiroThinker-4B-SFT-v0.2.Q8_0.gguf | Q8_0 | 4.28 GB |
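The sizes above track each quantization's effective bits per weight: roughly, file size ≈ parameter count × bits-per-weight / 8 bytes, plus a little overhead for higher-precision embedding and output tensors. A small illustrative calculation (the bits-per-weight figures below are approximate values I am assuming, not from the source):

```python
def estimate_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough GGUF file size: params * bpw / 8 bytes, expressed in GB (1 GB = 1e9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

# MiroThinker-4B has roughly 4e9 parameters.
N = 4.0e9

# Approximate effective bits per weight for some common quant types (illustrative).
for name, bpw in [("Q4_K_M", 4.85), ("Q6_K", 6.56), ("Q8_0", 8.5), ("F16", 16.0)]:
    print(f"{name}: ~{estimate_gguf_size_gb(N, bpw):.2f} GB")
```

The estimates land close to the listed sizes (e.g. ~4.25 GB for Q8_0 vs. 4.28 GB in the table); the small gap is metadata and mixed-precision tensors.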

## Quants Usage

(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similarly sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):


## Model Details

- Format: GGUF
- Model size: 4B params
- Architecture: qwen3



## Model Tree

Model tree for prithivMLmods/MiroThinker-4B-SFT-v0.2-GGUF: quantized (3), including this model.