
payelb/PKUSafeRLHF_Llama-3.2-1B_aligned_with_MARS_deberta_RM

Base model: meta-llama/Llama-3.2-1B-Instruct

Alignment dataset: PKU-Alignment/PKU-SafeRLHF

Reward model: payelb/PKUSafeRLHF_reward-model-deberta-v3-base_1k_fixed_MARS

Method: PPO (Proximal Policy Optimization) alignment with LoRA adapters.
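PPO-style alignment typically penalizes the policy for drifting from the base model by subtracting a KL term from the reward-model score. A minimal sketch of that per-token computation (illustrative only — the coefficient `beta` and this simple KL estimator are assumptions, not the exact settings used for this model):

```python
def kl_penalized_reward(rm_score, logprob_policy, logprob_ref, beta=0.1):
    """Combine a reward-model score with a KL penalty against the reference model.

    kl is the standard per-token estimate log pi(a|s) - log pi_ref(a|s);
    beta is a hypothetical KL coefficient for illustration.
    """
    kl = logprob_policy - logprob_ref
    return rm_score - beta * kl

# Toy example: reward 1.0, policy slightly more confident than the reference
r = kl_penalized_reward(1.0, logprob_policy=-2.0, logprob_ref=-2.5, beta=0.1)
```

In practice libraries such as trl handle this inside the PPO step; the sketch only shows why "KL control" (see the notes below) keeps the aligned model close to `meta-llama/Llama-3.2-1B-Instruct`.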

Notes:

  • Reward normalization and clipping enabled
  • KL control enabled
  • pad_token_id/eos_token_id explicitly set
  • DeBERTa RM loaded on a single device (no device_map='auto')
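The reward normalization and clipping mentioned in the notes can be sketched as follows (a minimal illustration; the clip threshold of 5.0 and batch-wise whitening are assumptions, not the recorded training configuration):

```python
def normalize_and_clip_rewards(rewards, clip=5.0, eps=1e-8):
    """Whiten a batch of reward-model scores, then clip outliers.

    Normalization keeps the PPO advantage scale stable across batches;
    clipping bounds the effect of extreme reward-model outputs.
    """
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    return [max(-clip, min(clip, (r - mean) / (std + eps))) for r in rewards]

scores = [1.0, 2.0, 3.0]
normalized = normalize_and_clip_rewards(scores)
```

Explicitly setting `pad_token_id`/`eos_token_id` and loading the DeBERTa reward model on a single device avoid common pitfalls: Llama tokenizers ship without a pad token, and `device_map='auto'` can shard a small reward model in ways that break batched scoring.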