# Qwen3-0.6B SEO Bilingual (PL+EN)
A bilingual (Polish + English) SEO expert model fine-tuned from Qwen/Qwen3-0.6B using LoRA.
## Model Details
- Base Model: Qwen/Qwen3-0.6B
- Fine-tuning Method: LoRA (r=16, alpha=32); a configuration sketch follows this list
- Training Dataset: metehan777/global-seo-knowledge + Polish translations
- Training Examples: 7,880 bilingual instruction-tuning examples
- Languages: English, Polish
- Final Training Loss: 0.6887
- Final Accuracy: ~90%
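A rough reproduction of the adapter configuration with `peft`; the target modules and dropout are assumptions, since the card only specifies r and alpha:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B")
lora_config = LoraConfig(
    r=16,                      # rank reported in the card
    lora_alpha=32,             # alpha reported in the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption: not listed in the card
    lora_dropout=0.05,         # assumption: not listed in the card
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```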
## Key Features
- Responds in Polish when asked in Polish
- Responds in English when asked in English
- Follows cross-language instructions such as "What is SEO? Odpowiedz po polsku." ("Answer in Polish"); see the prompt example after this list
- Covers 2,065 SEO terms and concepts across 21 categories
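For instance, a cross-language request can pair an English question with a Polish instruction, using the same ChatML-style prompt format shown in the Usage section below:

```python
# English question, answer requested in Polish ("Odpowiedz po polsku." = "Answer in Polish.")
prompt = "<|im_start|>system\nYou are an SEO expert.<|im_end|>\n<|im_start|>user\nWhat is SEO? Odpowiedz po polsku.<|im_end|>\n<|im_start|>assistant\n"
```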
## Training Data Composition
| Type | Count | Description |
|---|---|---|
| Polish Q&A | 2,065 | Polish questions with Polish answers |
| English Q&A | 2,065 | English questions with English answers |
| Cross-language | 2,065 | Mixed language instructions |
| Polish (no system) | ~1,000 | Polish Q&A without system prompt |
| Instruction meta | ~685 | Short-form Polish instructions |
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Kelnux/Qwen3-0.6B-seo-bilingual")
tokenizer = AutoTokenizer.from_pretrained("Kelnux/Qwen3-0.6B-seo-bilingual")

# Polish question ("What is E-E-A-T in SEO?")
prompt = "<|im_start|>system\nJestes ekspertem SEO.<|im_end|>\n<|im_start|>user\nCo to jest E-E-A-T w SEO?<|im_end|>\n<|im_start|>assistant\n"

# English question
prompt = "<|im_start|>system\nYou are an SEO expert.<|im_end|>\n<|im_start|>user\nWhat is backlinking in SEO?<|im_end|>\n<|im_start|>assistant\n"
```
## GGUF Versions
GGUF quantized versions are available at Kelnux/Qwen3-0.6B-seo-bilingual-GGUF.
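A sketch of loading the GGUF weights with llama-cpp-python; the filename pattern is an assumption, so match it to whichever quantization you download:

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Kelnux/Qwen3-0.6B-seo-bilingual-GGUF",
    filename="*Q4_K_M.gguf",  # assumption: pick the quantization you actually downloaded
    n_ctx=2048,
)
out = llm(
    "<|im_start|>system\nYou are an SEO expert.<|im_end|>\n"
    "<|im_start|>user\nWhat is backlinking in SEO?<|im_end|>\n"
    "<|im_start|>assistant\n",
    max_tokens=256,
    stop=["<|im_end|>"],
)
print(out["choices"][0]["text"])
```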
## Ollama
To run the model with Ollama, download one of the GGUF files above and create a Modelfile that points at it.
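A minimal Modelfile sketch, assuming a Q4_K_M quantization and the ChatML prompt format shown above; point FROM at the file you actually downloaded:

```
# The GGUF filename below is an assumption; use the file you downloaded
FROM ./Qwen3-0.6B-seo-bilingual-Q4_K_M.gguf

TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""

SYSTEM You are an SEO expert.

PARAMETER stop <|im_end|>
```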
Then build and run the model:

```bash
ollama create qwen3-seo-bilingual -f Modelfile
ollama run qwen3-seo-bilingual
```
## Model Tree
Qwen/Qwen3-0.6B-Base → Qwen/Qwen3-0.6B → Kelnux/Qwen3-0.6B-seo-bilingual