Qwen2.5-7B-Instruct SFT for Game24 (16384)
Fine-tuned from Qwen/Qwen2.5-7B-Instruct using QLoRA (4-bit NF4 quantization + LoRA adapters, merged before upload).
Training Configuration
- Learning rate: 2e-5 (cosine schedule, 5% warmup)
- Batch size: 1 per device, gradient accumulation 16 (effective batch size 16)
- Epochs: 3
- Max sequence length: 16384
- Precision: bf16
- Weight decay: 0.01
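Under TRL's SFTTrainer these hyperparameters map onto an `SFTConfig`. A hedged sketch, not the exact training script: argument names vary across TRL/transformers versions (e.g. `max_length` was formerly `max_seq_length`), and `output_dir` is a placeholder.

```python
from trl import SFTConfig

# Sketch reproducing the hyperparameters listed above; exact argument
# names depend on the installed TRL/transformers versions.
config = SFTConfig(
    output_dir="qwen2.5-7b-game24-sft",  # placeholder path
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,                   # 5% warmup
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,      # effective batch size 16
    num_train_epochs=3,
    max_length=16384,                    # max sequence length
    bf16=True,
    weight_decay=0.01,
)
```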
QLoRA
- Quantization: 4-bit NF4 with double quantization
- LoRA rank: 64
- LoRA alpha: 128
- LoRA dropout: 0.05
- Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
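The quantization and adapter settings above correspond to a `BitsAndBytesConfig` (transformers) plus a `LoraConfig` (peft). A minimal sketch, assuming bf16 compute for the 4-bit matmuls to match the training precision:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 with double quantization, as described above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption: matches bf16 training
)

# LoRA adapters on all attention and MLP projections.
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```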
Loss
- completion_only_loss: prompt tokens are masked; loss is computed only on assistant completion tokens
- Dataset is converted from the `messages` format to the `prompt`/`completion` format before training
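The `messages` to `prompt`/`completion` conversion can be sketched in plain Python. `to_prompt_completion` is a hypothetical helper (TRL performs its own conversion internally); it assumes each example ends with a single assistant message, which becomes the completion that the loss is computed on.

```python
def to_prompt_completion(example):
    """Split a chat-format example into prompt/completion fields.

    All messages before the final one form the prompt (masked out of the
    loss); the final assistant message is the completion the model is
    trained on.
    """
    msgs = example["messages"]
    return {"prompt": msgs[:-1], "completion": msgs[-1:]}

ex = {"messages": [
    {"role": "user", "content": "Make 24 from 4 7 8 8."},
    {"role": "assistant", "content": "(7 - 4) * 8 = 24"},
]}
out = to_prompt_completion(ex)
```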
Dataset
Trained on tinyllms/game24-trajectories. Examples exceeding max_seq_len are filtered out. A 10% holdout is used for evaluation (eval runs every 10 steps).
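The length filtering and 10% holdout can be illustrated with a pure-Python sketch. `filter_and_split` is a hypothetical helper: the actual pipeline operates on a `datasets.Dataset` and measures lengths in tokenizer tokens, not with precomputed integers.

```python
import random

def filter_and_split(examples, lengths, max_seq_len=16384, holdout=0.1, seed=0):
    """Drop over-length examples, then carve out an eval holdout.

    `lengths` holds the tokenized length of each example; in the real
    pipeline these would come from the tokenizer.
    """
    kept = [ex for ex, n in zip(examples, lengths) if n <= max_seq_len]
    rng = random.Random(seed)
    rng.shuffle(kept)
    n_eval = max(1, int(len(kept) * holdout))
    return kept[n_eval:], kept[:n_eval]

# 10 short examples survive the filter; 2 over-length ones are dropped.
examples = list(range(12))
lengths = [100] * 10 + [20000, 30000]
train, evals = filter_and_split(examples, lengths)
```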
Infrastructure
- GPU: NVIDIA H100 80GB
- Framework: TRL 0.29 + Ray Train
- Tracking: Weights & Biases (project: pocket-sheet-sft)
- Ray Job ID: raysubmit_YQdr4KmFd34rx54D
- DC-Cu metrics: https://www.notion.so/game24-evaluate-Qwen-324a0a65fbb880e68e1dc22cf7bd7674
- DC-Cu w/ summarization metrics: