# qwen2-webshop-federated
This model is a federated-learning fine-tune of Qwen2-1.5B-Instruct, trained with GRPO (Group Relative Policy Optimization) on the WebShop dataset.
## Model Details
- Base Model: Qwen2-1.5B-Instruct
- Training Method: Federated Learning with GRPO
- Dataset: WebShop
- Architecture: Qwen2ForCausalLM
- Parameters: ~1.5B
- Hidden Size: 1536
- Layers: 28
- Attention Heads: 12
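The ~1.5B figure can be sanity-checked from the dimensions above. The sketch below is a back-of-the-envelope estimate, not an exact count; it additionally assumes values from the published Qwen2-1.5B config that this card does not list (vocabulary size 151936, MLP intermediate size 8960, 2 key/value heads under grouped-query attention, tied input/output embeddings):

```python
# Rough parameter count for a Qwen2-1.5B-style decoder.
# Assumed from the published Qwen2-1.5B config (not this card):
# vocab=151936, intermediate=8960, 2 KV heads, tied embeddings.
hidden, layers, heads = 1536, 28, 12
vocab, intermediate, kv_heads = 151936, 8960, 2
head_dim = hidden // heads          # 128
kv_dim = kv_heads * head_dim        # 256 (grouped-query attention)

embed = vocab * hidden              # tied with the LM head, counted once
attn = (hidden * hidden + hidden            # q_proj (Qwen2 uses QKV biases)
        + 2 * (hidden * kv_dim + kv_dim)    # k_proj, v_proj
        + hidden * hidden)                  # o_proj (no bias)
mlp = 3 * hidden * intermediate     # gate_proj, up_proj, down_proj
norms = 2 * hidden                  # two RMSNorms per layer

total = embed + layers * (attn + mlp + norms) + hidden  # + final norm
print(f"{total / 1e9:.2f}B")  # → 1.54B
```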
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("qwen2-webshop-federated")
model = AutoModelForCausalLM.from_pretrained("qwen2-webshop-federated")

# Example usage
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
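Since the model was trained on WebShop, completions will typically contain environment actions such as `search[red running shoes]` or `click[Buy Now]`. A minimal parsing sketch follows; the `search[...]`/`click[...]` format is assumed from the WebShop environment's action space and is not specified by this card:

```python
import re

# Parse a model completion into a WebShop-style (action, argument) pair.
# The search[...]/click[...] format is an assumption based on the
# WebShop environment, not something this card documents.
ACTION_RE = re.compile(r"(search|click)\[(.+?)\]")

def parse_action(completion: str):
    """Return (action, argument) for the first action found, or None."""
    m = ACTION_RE.search(completion)
    return (m.group(1), m.group(2)) if m else None

print(parse_action("Thought: I need shoes. Action: search[red running shoes]"))
# → ('search', 'red running shoes')
```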
## Training Details
This model was trained using federated learning with the following configuration:
- Total clients: 100
- Clients per round: 2
- Rounds: 70
- Epochs per client: 3
- Minimum goals per client: 100
This checkpoint is the global model aggregated after round 70 (global step 0).
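One way to picture the aggregation step: each round samples 2 of the 100 clients, each trains locally, and the server averages the resulting weights. The sketch below assumes FedAvg-style equal-weight averaging; the card does not state the exact aggregation rule, and the toy "clients" stand in for real local GRPO training:

```python
import random

def fedavg_round(global_weights, clients, clients_per_round=2, rng=None):
    """One federated round: sample clients, collect their locally updated
    weights, and average them element-wise (FedAvg with equal weights).
    Each client is a function: weights -> updated weights."""
    rng = rng or random.Random(0)
    sampled = rng.sample(clients, clients_per_round)
    updates = [client(list(global_weights)) for client in sampled]
    return [sum(ws) / len(updates) for ws in zip(*updates)]

# Toy demo: 100 "clients" that each nudge every weight by 0.01 * client id.
clients = [lambda w, i=i: [x + 0.01 * i for x in w] for i in range(100)]
weights = [0.0, 0.0]
for _ in range(70):  # 70 rounds, matching the config above
    weights = fedavg_round(weights, clients)
```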