Qwen3-8B SEO Uczciwe SEO - GGUF
A fine-tuned Qwen3-8B model specialized in SEO knowledge and Uczciwe SEO (uczciweseo.pl) domain expertise.
Available Quantizations
| File | Quant | Size | Description |
|---|---|---|---|
| Qwen3-8B-seo-uczciweseo-Q4_K_M.gguf | Q4_K_M | 4.7 GB | Recommended - Best balance of quality and size |
| Qwen3-8B-seo-uczciweseo-Q5_K_M.gguf | Q5_K_M | 5.4 GB | Higher quality, slightly larger |
| Qwen3-8B-seo-uczciweseo-Q8_0.gguf | Q8_0 | 8.1 GB | Near-lossless quality |
| Qwen3-8B-seo-uczciweseo-f16.gguf | F16 | 15.3 GB | Full precision (no quantization) |
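The GGUF files work with any llama.cpp-compatible runtime. Below is a minimal sketch using the llama-cpp-python bindings to load the recommended Q4_K_M file; the local file path, context size, and sampling parameters are assumptions, not part of this card.

```python
# Minimal sketch: load the Q4_K_M quantization with llama-cpp-python.
# The file path, context window, and sampling settings are assumptions;
# adjust them for your hardware.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-8B-seo-uczciweseo-Q4_K_M.gguf",  # file from the table above
    n_ctx=4096,            # context window (assumed)
    n_gpu_layers=-1,       # offload all layers to GPU/Metal if available
    chat_format="chatml",  # the model uses the ChatML template
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are an SEO assistant. /no_think"},
        {"role": "user", "content": "What is the difference between on-page and off-page SEO?"},
    ],
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```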
Model Details
- Base model: Qwen/Qwen3-8B
- Fine-tuning: LoRA (r=8, alpha=16) on Apple Silicon MPS (see the configuration sketch after this list)
- Dataset: 1,199 examples (70% domain + 30% bilingual SEO)
- Training: 1 epoch, 68 steps, ~7.3 hours
- Languages: Polish, English
- Prompt format: ChatML template with /no_think mode (see the example after this list)
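For reference, a minimal sketch of the ChatML prompt layout with the /no_think tag. Only the ChatML template tokens are fixed; the system text and the placement of /no_think follow common Qwen3 usage and are assumptions here.

```python
# Sketch of the ChatML prompt format; the system message and the placement
# of "/no_think" are assumptions, only the <|im_start|>/<|im_end|> tokens are fixed.
prompt = (
    "<|im_start|>system\n"
    "You are an SEO assistant for Uczciwe SEO. /no_think<|im_end|>\n"
    "<|im_start|>user\n"
    "Czym jest lokalne SEO?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```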
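The fine-tuning setup listed above (LoRA with r=8, alpha=16) roughly corresponds to a PEFT configuration like the sketch below; only r and alpha come from this card, while the target modules and dropout are assumptions.

```python
from peft import LoraConfig

# Hypothetical reconstruction of the adapter configuration; only r and
# lora_alpha are taken from the card, the remaining fields are assumed.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    lora_dropout=0.05,                                        # assumed
    task_type="CAUSAL_LM",
)
```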
Capabilities
- SEO knowledge (technical, on-page, off-page, local)
- Uczciwe SEO brand and offer knowledge
- Google Ads, Bing Ads, AI SEO, CRO
- Bilingual (PL/EN) responses
- Hallucination resistance
Test Results (28 questions)
| Category | Score |
|---|---|
| SEO English | 100% |
| Hallucination Resistance | 100% |
| Cross-language | 88% |
| SEO Polish | 60% |
| Brand Knowledge | 52% |
| Overall | 66% |