# payelb/UltraFeedback_openbmb_TinyLlama-1.1B_aligned_with_baseline_RM

- **Base model:** TinyLlama/TinyLlama-1.1B-Chat-v1.0
- **Alignment dataset:** openbmb/UltraFeedback
- **Reward model:** payelb/UltraFeedback_openbmb_roberta-base_1k_fixed_baseline
- **Method:** PPO alignment with LoRA adapters

Notes:
- Reward normalization and clipping enabled
- KL control enabled
- `pad_token_id`/`eos_token_id` explicitly set
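The reward shaping named in the notes can be sketched as follows. This is a minimal illustration of the two mechanisms (whiten-and-clip the reward-model scores, then subtract a per-token KL penalty toward the frozen base policy), not the training code for this checkpoint; the coefficients `clip_range` and `kl_coef` are illustrative assumptions.

```python
import math

def normalize_and_clip(rewards, clip_range=1.0, eps=1e-8):
    """Whiten a batch of scalar rewards to zero mean / unit std,
    then clip each value to [-clip_range, clip_range]."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = math.sqrt(var)
    return [max(-clip_range, min(clip_range, (r - mean) / (std + eps)))
            for r in rewards]

def kl_penalized_reward(reward, logprob_policy, logprob_ref, kl_coef=0.1):
    """KL control: penalize the reward by the policy's log-prob drift
    from the reference (base) model, scaled by kl_coef."""
    return reward - kl_coef * (logprob_policy - logprob_ref)

# Example: raw reward-model scores for a small batch of responses.
scores = [2.0, -1.0, 0.5, 3.5]
shaped = normalize_and_clip(scores)
print(shaped)  # all values lie within [-1.0, 1.0]
```

In practice the KL penalty keeps the LoRA-adapted policy from drifting too far from TinyLlama-1.1B-Chat, while normalization and clipping bound the reward scale the PPO updates see.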