# payelb/UltraFeedback_openbmb_TinyLlama-1.1B_aligned_with_baseline_RM
- Base model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- Alignment dataset: openbmb/UltraFeedback
- Reward model: payelb/UltraFeedback_openbmb_roberta-base_1k_fixed_baseline
- Method: PPO alignment with LoRA adapters
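In PPO alignment, the reward model's scalar score is typically combined with a KL penalty that keeps the policy close to the base (pre-PPO) model. A minimal sketch of that reward shaping, assuming the common per-token log-prob approximation of the KL and an illustrative `beta` coefficient (neither value is taken from this card):

```python
def kl_penalized_reward(rm_score, logprobs_policy, logprobs_ref, beta=0.1):
    """Combine the reward model's scalar score with a KL penalty.

    The per-token KL is approximated as logp_policy - logp_ref and
    summed over the generated tokens. beta is an assumed coefficient
    for illustration, not a setting read from this model card.
    """
    approx_kl = sum(lp - lr for lp, lr in zip(logprobs_policy, logprobs_ref))
    return rm_score - beta * approx_kl

# Example: the policy drifted slightly from the reference, so the
# shaped reward ends up a bit below the raw reward-model score.
shaped = kl_penalized_reward(1.0, [-1.0, -2.0], [-1.5, -2.5])
print(shaped)
```

In trl's `PPOTrainer` this penalty is applied automatically; the sketch only shows the shape of the computation.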
Notes:
- Reward normalization and clipping enabled
- KL control enabled
- pad_token_id/eos_token_id explicitly set
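The first note, reward normalization and clipping, is a standard PPO stabilization step: whiten each batch of reward-model scores and clip the result to a fixed range so a few outlier scores cannot dominate the policy update. A minimal sketch, assuming batch-level whitening and a clip range of [-1, 1] (both are illustrative choices, not settings read from this card):

```python
import statistics

def normalize_and_clip_rewards(rewards, clip_value=1.0, eps=1e-8):
    """Whiten a batch of scalar rewards (zero mean, unit variance),
    then clip each value to [-clip_value, clip_value].

    clip_value=1.0 is an assumed hyperparameter for illustration.
    """
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    whitened = [(r - mean) / (std + eps) for r in rewards]
    return [max(-clip_value, min(clip_value, w)) for w in whitened]

# The outlier score 3.0 is pulled back to the clip boundary.
print(normalize_and_clip_rewards([0.2, 1.5, -0.7, 3.0]))
```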