Qwen3 0.6B parity GGUF (Q8_0)

Same-origin GGUF parity artifact for Qwen/Qwen3-0.6B, produced for backend comparison work in mesh-llm.

  • Source checkpoint: Qwen/Qwen3-0.6B
  • Conversion flow: original checkpoint -> GGUF f16 -> GGUF Q8_0
  • Intended pair: meshllm/qwen3-0.6b-parity-8bit-mlx
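
The two-step flow above (original checkpoint -> GGUF f16 -> GGUF Q8_0) is normally driven by llama.cpp's `convert_hf_to_gguf.py` script and `llama-quantize` tool. A minimal sketch, assuming a local llama.cpp checkout and a local snapshot of the source checkpoint; all file paths here are hypothetical, and exact tool names can vary between llama.cpp versions:

```python
# Sketch of the conversion flow. Builds the two commands as argv lists;
# running them requires a llama.cpp checkout and the source checkpoint.
HF_DIR = "Qwen3-0.6B"                 # hypothetical local snapshot of Qwen/Qwen3-0.6B
F16_GGUF = "qwen3-0.6b-f16.gguf"      # intermediate full-precision-ish artifact
Q8_GGUF = "qwen3-0.6b-q8_0.gguf"      # final 8-bit artifact

def conversion_commands():
    """Return the two conversion steps as argv lists, without running them."""
    # Step 1: HF checkpoint -> GGUF f16
    step1 = ["python", "convert_hf_to_gguf.py", HF_DIR,
             "--outtype", "f16", "--outfile", F16_GGUF]
    # Step 2: GGUF f16 -> GGUF Q8_0
    step2 = ["./llama-quantize", F16_GGUF, Q8_GGUF, "Q8_0"]
    return step1, step2

# To actually convert, run each command, e.g. with
# subprocess.run(cmd, check=True) for cmd in conversion_commands().
```

Keeping an f16 intermediate (rather than quantizing directly) makes it easy to produce other quantization levels from the same base later.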

This repo exists for backend-parity testing; it makes no claim about best overall model quality.
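A parity test typically runs the same prompt through both backends (this GGUF via a llama.cpp-based runtime, the MLX twin via MLX) and compares per-token logits. Model loading is backend-specific, so this sketch shows only the comparison metrics; the function names and any tolerance you pick are illustrative assumptions, not part of this repo:

```python
import math

def max_abs_diff(a, b):
    """Largest element-wise gap between two logit vectors."""
    return max(abs(x - y) for x, y in zip(a, b))

def kl_divergence(p_logits, q_logits):
    """KL(softmax(p) || softmax(q)) -- a common per-token parity metric."""
    def softmax(logits):
        m = max(logits)  # subtract the max for numerical stability
        exps = [math.exp(x - m) for x in logits]
        s = sum(exps)
        return [e / s for e in exps]
    p, q = softmax(p_logits), softmax(q_logits)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

Identical logits give a KL of exactly zero; in practice the two backends' kernels differ, so parity work is about bounding these gaps, not expecting them to vanish.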

Model details

  • Format: GGUF
  • Model size: 0.8B params
  • Architecture: qwen3