Hugging Face

wang binghai

refrain-wbh

AI & ML interests

None yet

Organizations

Qwen

submitted a paper to Daily Papers 2 months ago

Outcome Accuracy is Not Enough: Aligning the Reasoning Process of Reward Models

Paper • 2602.04649 • Published Feb 4 • 12
authored 3 papers 11 months ago

Secrets of RLHF in Large Language Models Part II: Reward Modeling

Paper • 2401.06080 • Published Jan 11, 2024 • 27

RMB: Comprehensively Benchmarking Reward Models in LLM Alignment

Paper • 2410.09893 • Published Oct 13, 2024

WorldPM: Scaling Human Preference Modeling

Paper • 2505.10527 • Published May 15, 2025 • 34
authored a paper almost 3 years ago

Secrets of RLHF in Large Language Models Part I: PPO

Paper • 2307.04964 • Published Jul 11, 2023 • 30