Qwen3-0.6B SEO Bilingual - GGUF

GGUF quantized versions of Kelnux/Qwen3-0.6B-seo-bilingual.

Available Files

File                                 Quantization   Size
Qwen3-0.6B-seo-bilingual-f16.gguf    F16            1.19 GB
Qwen3-0.6B-seo-bilingual-q8_0.gguf   Q8_0           633 MB
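The files above can be fetched straight from the Hub: files in a model repository resolve at a predictable URL. A minimal sketch of that pattern (no SDK involved; `hub_file_url` is a hypothetical helper name, the repo id is the one from this card):

```python
def hub_file_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build the direct-download URL for a file in a Hugging Face model repo."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

# Example: direct link to the Q8_0 file listed above
url = hub_file_url("Kelnux/Qwen3-0.6B-seo-bilingual-GGUF",
                   "Qwen3-0.6B-seo-bilingual-q8_0.gguf")
print(url)
```

The same download can of course be done through the `huggingface_hub` library or the web UI; the URL pattern is just the simplest dependency-free option.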

Usage with Ollama

  1. Download the GGUF file and Modelfile
  2. Run:
ollama create qwen3-seo-bilingual -f Modelfile
ollama run qwen3-seo-bilingual
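Step 1 assumes a Modelfile sitting next to the downloaded weights. If the repository's Modelfile is unavailable, a minimal one can be as short as a single FROM line (the filename here assumes you downloaded the Q8_0 variant; a full Modelfile would typically also set the chat template and parameters):

```
FROM ./Qwen3-0.6B-seo-bilingual-q8_0.gguf
```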

Model Details

See the full model card at Kelnux/Qwen3-0.6B-seo-bilingual.

Model size: 0.6B params
Architecture: qwen3

Model tree for Kelnux/Qwen3-0.6B-seo-bilingual-GGUF

Base model: Qwen/Qwen3-0.6B (finetuned as Kelnux/Qwen3-0.6B-seo-bilingual, then quantized into this repository)