Improve model card for RLVER: Add metadata, links, abstract, and usage

#1 by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +75 -3
README.md CHANGED
@@ -1,8 +1,80 @@
  ---
  license: other
  license_name: license
  license_link: LICENSE
- base_model:
- - Qwen/Qwen2.5-7B-Instruct
  ---
- https://www.arxiv.org/abs/2507.03112
  ---
+ base_model:
+ - Qwen/Qwen2.5-7B-Instruct
  license: other
  license_name: license
  license_link: LICENSE
+ pipeline_tag: text-generation
+ library_name: transformers
  ---
+
+ # RLVER: Reinforcement Learning with Verifiable Emotion Rewards for Empathetic Agents
+
+ This repository contains the model checkpoint for **RLVER: Reinforcement Learning with Verifiable Emotion Rewards for Empathetic Agents**.
+
+ ## Abstract
+
+ Large language models (LLMs) excel at logical and algorithmic reasoning, yet their emotional intelligence (EQ) still lags far behind their cognitive prowess. While reinforcement learning from verifiable rewards (RLVR) has advanced in other domains, its application to dialogue, especially for emotional intelligence, remains underexplored. In this work, we introduce RLVER, the first end-to-end reinforcement learning framework that leverages verifiable emotion rewards from simulated users to cultivate higher-order empathetic abilities in LLMs. Within this framework, self-consistent affective simulated users engage in dialogue rollouts and produce deterministic emotion scores during conversations, serving as reward signals to guide the LLM's learning. Fine-tuning the publicly available Qwen2.5-7B-Instruct model with PPO boosts its Sentient-Benchmark score from 13.3 to 79.2 while largely preserving mathematical and coding competence. Extensive experiments reveal that: (i) RLVER consistently improves multiple dialogue capabilities; (ii) thinking and non-thinking models show distinct trends: thinking models excel in empathy and insight, while non-thinking models favor action; (iii) GRPO often yields stable gains, while PPO can push certain capabilities to a higher ceiling; (iv) more challenging environments are not always better; moderate ones can yield stronger outcomes. Our results show that RLVER is a practical route toward emotionally intelligent and broadly capable language agents.
+
+ ## Links
+
+ * **Paper (Hugging Face)**: [https://huggingface.co/papers/2507.03112](https://huggingface.co/papers/2507.03112)
+ * **Paper (arXiv)**: [https://arxiv.org/abs/2507.03112](https://arxiv.org/abs/2507.03112)
+ * **GitHub Repository**: [https://github.com/Tencent/digitalhuman/tree/main/RLVER](https://github.com/Tencent/digitalhuman/tree/main/RLVER)
+
+ ## Usage
+
+ This model can be loaded and used with the Hugging Face `transformers` library for text generation and empathetic dialogue tasks.
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ import torch
+
+ # Replace "RLVER/RLVER-Qwen2.5-7B-Instruct" with the actual model ID if different for this repository.
+ model_id = "RLVER/RLVER-Qwen2.5-7B-Instruct"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     torch_dtype=torch.bfloat16,
+     device_map="auto"
+ )
+
+ # Example for empathetic dialogue
+ messages = [
+     {"role": "system", "content": "You are a helpful and empathetic assistant."},
+     {"role": "user", "content": "I'm feeling really down today. My cat just ran away."},
+ ]
+
+ # Build the prompt with the model's chat template
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True
+ )
+
+ model_inputs = tokenizer(text, return_tensors="pt").to(model.device)
+
+ generated_ids = model.generate(
+     model_inputs.input_ids,
+     max_new_tokens=200,
+     do_sample=True,
+     temperature=0.7,
+     top_k=50,
+     top_p=0.95
+ )
+
+ # Decode only the newly generated tokens, excluding the prompt
+ response = tokenizer.decode(
+     generated_ids[0][model_inputs.input_ids.shape[1]:],
+     skip_special_tokens=True
+ )
+ print(response)
+ ```
+
+ ## Citation
+
+ If you find our work helpful or inspiring, please feel free to cite it using the following BibTeX entry:
+
+ ```bibtex
+ @article{zhou2025rlver,
+   title={RLVER: Reinforcement Learning with Verifiable Emotion Rewards for Empathetic Agents},
+   author={Zhou, Yifei and Zhang, Haoran and Fang, Jieming and Tian, Wei and Zheng, Yichen and Xu, Haoyu and Hu, Bo and Zhang, Yang and Liu, Wenhai and Zhao, Wei and Lin, Li and Zhao, Xingyu and Yan, Yan},
+   journal={arXiv preprint arXiv:2507.03112},
+   year={2025}
+ }
+ ```