# payelb/UltraFeedback_openbmb_Llama-3.2-1B_aligned_with_baseline_deberta_RM
Base model: meta-llama/Llama-3.2-1B-Instruct
Alignment dataset: openbmb/UltraFeedback
Reward model: payelb/UltraFeedback_openbmb_reward-model-deberta-v3-base_1k_fixed_baseline
Method: PPO alignment with LoRA adapters
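Conceptually, each PPO update maximizes a clipped surrogate objective over per-token probability ratios. The sketch below illustrates that objective in plain Python; the clip range `eps=0.2` is the common default and an assumption here, not necessarily this run's hyperparameter.

```python
import math

def ppo_clip_objective(logp_new, logp_old, advantage, eps=0.2):
    """Per-token PPO clipped surrogate objective (to be maximized).

    Illustrative sketch only: `eps` is assumed, and a real trainer
    (e.g. TRL's PPOTrainer) computes this over batched tensors.
    """
    # Probability ratio between the updated and the behavior policy
    ratio = math.exp(logp_new - logp_old)
    # Clipping the ratio bounds how far a single update can move the policy
    clipped_ratio = max(1.0 - eps, min(1.0 + eps, ratio))
    return min(ratio * advantage, clipped_ratio * advantage)
```

When the new and old log-probabilities agree, the ratio is 1 and the objective reduces to the advantage itself; large ratios are cut off at `1 + eps`, which is what keeps PPO updates stable.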
Notes:
- Reward normalization and clipping enabled
- KL control enabled
- pad_token_id/eos_token_id explicitly set
- DeBERTa RM loaded on a single device (no device_map='auto')
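The first two notes (reward normalization/clipping and KL control) can be sketched as follows. This is a minimal illustration of the general technique, not the exact implementation used for this model: the window size, clip range, and `kl_coef` are assumptions.

```python
from collections import deque

class RewardNormalizer:
    """Running-statistics reward normalizer with clipping (sketch).

    Keeps a sliding window of recent scalar rewards, standardizes each
    incoming reward against that window, then clips the result so that
    reward-model outliers cannot destabilize the PPO update.
    """

    def __init__(self, clip_range=5.0, window=1000):
        self.clip_range = clip_range
        self.rewards = deque(maxlen=window)

    def __call__(self, reward):
        self.rewards.append(reward)
        n = len(self.rewards)
        mean = sum(self.rewards) / n
        var = sum((r - mean) ** 2 for r in self.rewards) / n
        std = var ** 0.5 or 1.0  # fall back to 1.0 to avoid divide-by-zero
        normalized = (reward - mean) / std
        return max(-self.clip_range, min(self.clip_range, normalized))

def kl_controlled_reward(reward, logp_policy, logp_ref, kl_coef=0.05):
    """Apply a per-token KL penalty so the policy stays near the reference.

    `kl_coef` is an assumed coefficient; adaptive KL controllers adjust
    it on the fly to hit a target KL.
    """
    return reward - kl_coef * (logp_policy - logp_ref)
```

In a typical RLHF loop the reward model's score is first normalized and clipped, then the KL penalty is subtracted before the shaped reward is handed to PPO.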