# payelb/HHRLHF_Llama-3.2-1B_aligned_with_WoN_deberta_RM

Base model: meta-llama/Llama-3.2-1B-Instruct

Alignment dataset: Anthropic/hh-rlhf

Reward model: payelb/HHRLHF_reward-model-deberta-v3-base_1k_fixed_WoN

Method: PPO alignment with LoRA adapters.
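The PPO-with-LoRA setup can be sketched with `peft` and `trl` config objects. All hyperparameters below are illustrative assumptions, not the values used to train this model, and the exact `PPOConfig` fields vary across `trl` versions:

```python
# Illustrative sketch only: ranks, dropout, target modules, and learning
# rate are assumptions, not the values used for this model.
from peft import LoraConfig
from trl import PPOConfig

lora_config = LoraConfig(
    r=16,                                  # adapter rank (assumed)
    lora_alpha=32,                         # scaling factor (assumed)
    lora_dropout=0.05,                     # (assumed)
    target_modules=["q_proj", "v_proj"],   # attention projections (assumed)
    task_type="CAUSAL_LM",
)

ppo_config = PPOConfig(
    learning_rate=1e-5,                    # (assumed)
)
```

Only the LoRA adapter weights are updated during PPO; the base Llama-3.2-1B-Instruct weights stay frozen.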

Notes:
- Reward normalization and clipping enabled
- KL control enabled
- `pad_token_id`/`eos_token_id` explicitly set
- DeBERTa RM loaded on a single device (no `device_map='auto'`)
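The reward normalization and clipping noted above can be sketched as per-batch whitening followed by clipping. This is a minimal stand-in; the clip range and exact whitening scheme used in training are assumptions:

```python
import statistics

def normalize_and_clip(rewards, clip_value=3.0):
    """Whiten a batch of scalar rewards, then clip to [-clip_value, clip_value].

    Minimal sketch of per-batch reward normalization; the clip range and
    whitening details used for this model are assumptions.
    """
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    std = std if std > 1e-8 else 1.0  # avoid division by zero on constant batches
    return [max(-clip_value, min(clip_value, (r - mean) / std)) for r in rewards]

print(normalize_and_clip([0.5, 2.0, 9.5]))
```

Normalizing rewards this way keeps the PPO advantage estimates on a stable scale across batches, and clipping bounds the influence of reward-model outliers.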