payelb/HHRLHF_Llama-3.2-1B_aligned_with_baseline_deberta_RM
Base model: meta-llama/Llama-3.2-1B-Instruct
Alignment dataset: Anthropic/hh-rlhf
Reward model: payelb/HHRLHF_reward-model-deberta-v3-base_1k_fixed_baseline
Method: PPO alignment with LoRA adapters
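A minimal sketch of a LoRA adapter configuration for such a setup (the rank, alpha, and target modules below are illustrative assumptions, not values read from this repo):

```python
# Hypothetical LoRA config for PPO fine-tuning of a Llama-style causal LM.
# All hyperparameters here are placeholders for illustration.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                                  # adapter rank (assumed)
    lora_alpha=32,                         # scaling factor (assumed)
    lora_dropout=0.05,                     # adapter dropout (assumed)
    target_modules=["q_proj", "v_proj"],   # typical attention projections
    task_type="CAUSAL_LM",
)
```

The adapter would then be attached to the base model (e.g. via `get_peft_model`) before PPO training, so that only the low-rank adapter weights are updated.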
Notes:
- Reward normalization and clipping enabled
- KL control enabled
- pad_token_id/eos_token_id explicitly set
- DeBERTa RM loaded on a single device (no device_map='auto')
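The reward normalization/clipping and KL control mentioned in the notes can be sketched in plain Python as follows (a minimal illustration of the general technique, with assumed constants, not the actual training code for this model):

```python
# Illustrative sketch: whiten + clip reward-model scores, and adapt the
# KL penalty coefficient toward a target KL, as done in PPO-based RLHF.
# All class names and default values here are assumptions for the sketch.
import math

class RunningMeanStd:
    """Running mean/variance of scalar rewards (Welford's algorithm)."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def std(self) -> float:
        return math.sqrt(self.m2 / self.n) if self.n > 1 else 1.0

def normalize_and_clip(reward: float, stats: RunningMeanStd,
                       clip: float = 5.0) -> float:
    """Whiten a raw reward-model score, then clip it to [-clip, clip]."""
    stats.update(reward)
    z = (reward - stats.mean) / (stats.std + 1e-8)
    return max(-clip, min(clip, z))

class AdaptiveKLController:
    """Nudges the KL penalty coefficient toward a target KL,
    in the spirit of the adaptive controller used in PPO RLHF recipes."""
    def __init__(self, init_kl_coef: float = 0.2,
                 target_kl: float = 6.0, horizon: int = 10_000):
        self.kl_coef = init_kl_coef
        self.target_kl = target_kl
        self.horizon = horizon

    def update(self, observed_kl: float, batch_size: int) -> None:
        # Proportional error, clipped to keep updates stable.
        error = max(-0.2, min(0.2, observed_kl / self.target_kl - 1.0))
        self.kl_coef *= 1.0 + error * batch_size / self.horizon
```

In training, the per-sample objective would combine the two: `normalize_and_clip(rm_score, stats) - controller.kl_coef * kl(policy, reference)`, with the controller updated once per batch from the observed KL.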