# Qwen3 0.6B Distilled Model

This is a distilled version of the Qwen3-0.6B model, trained via knowledge distillation from the larger Qwen3-4B teacher model.
## Model Details
- Base Model: Qwen/Qwen3-0.6B
- Teacher Model: Qwen/Qwen3-4B
- Distillation Method: Knowledge Distillation
- Training Framework: PyTorch + Transformers
## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Yahhhh/qwen3-0.6b-distilled")
model = AutoModelForCausalLM.from_pretrained("Yahhhh/qwen3-0.6b-distilled")

# Generate text
prompt = "Explain quantum computing:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Performance

The distilled model retains much of the teacher's quality while being far smaller (0.6B vs. 4B parameters), making it cheaper and faster to run.
## Training Details

- Distillation loss combining cross-entropy on ground-truth labels with KL divergence against the teacher's output distribution
- Temperature-scaled softmax for knowledge transfer
- Trained on diverse text datasets
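The exact training code is not included in this repository, but a minimal sketch of the loss described above (combining hard-label cross-entropy with temperature-scaled KL divergence) might look like the following. The function name, temperature `T`, and mixing weight `alpha` are illustrative, not the values used in training:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Illustrative distillation loss: alpha * CE + (1 - alpha) * KL.

    student_logits, teacher_logits: (batch, seq_len, vocab)
    labels: (batch, seq_len) token ids, -100 for positions to ignore
    """
    # Hard-label cross-entropy against the ground-truth tokens
    ce = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
    )
    # Soft-label KL divergence at temperature T; the T^2 factor keeps
    # gradient magnitudes comparable across temperatures
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1 - alpha) * kl
```

Raising the temperature softens both distributions, so the student also learns from the teacher's relative probabilities over incorrect tokens rather than only its top prediction.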