PPO-aligned TinyLlama-1.1B model, trained against a baseline DeBERTa reward model on HH-RLHF.

payelb/HHRLHF_TinyLlama-1.1B_aligned_with_baseline_deberta_RM

Base model: TinyLlama/TinyLlama-1.1B-Chat-v1.0

Alignment dataset: Anthropic/hh-rlhf

Reward model: payelb/HHRLHF_reward-model-deberta-v3-base_1k_fixed_baseline

Method: PPO alignment with LoRA adapters.

Notes (sketched in code after this list):

  • Reward normalization and clipping enabled
  • KL control enabled to keep the policy close to the reference model
  • pad_token_id/eos_token_id explicitly set
  • DeBERTa RM loaded on a single device (no device_map='auto')