# Social-R1
SocialR1-8B is a social reasoning model built on Qwen3-8B, trained with trajectory-level reinforcement learning (GRPO) using the Social-R1 framework. It enhances Theory-of-Mind (ToM) and social inference capabilities by aligning reasoning processes with the Social Information Processing (SIP) theory.
📄 Paper: *Social-R1: Enhancing Social Reasoning in LLMs through Trajectory-Level Reinforcement Learning*
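Trajectory-level GRPO scores whole reasoning trajectories and normalizes each rollout's reward against the other rollouts sampled for the same prompt, so no learned value function is needed. A minimal sketch of that group-relative advantage computation (the function name and reward values are illustrative, not from the Social-R1 codebase):

```python
import statistics

def grpo_advantages(rewards):
    """Group-relative advantages as in GRPO: normalize each trajectory's
    reward against the other rollouts sampled for the same prompt."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)
    if std == 0:  # all rollouts scored the same; no learning signal
        return [0.0 for _ in rewards]
    return [(r - mean) / std for r in rewards]

# Four sampled trajectories for one prompt, scored by the reward model
# (values are illustrative):
print(grpo_advantages([1.0, 0.0, 1.0, 0.0]))  # [1.0, -1.0, 1.0, -1.0]
```

Each advantage then weights the policy-gradient update for its entire trajectory, which is what aligns the full reasoning process, rather than individual tokens, with the reward.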
## Quick Start

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Jincenzi/SocialR1-8B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

# This instruction matches the model's expected output format;
# append your own social-reasoning question after it.
instruction = (
    "You should first think about the reasoning process in the mind and "
    "then provide the answer. The reasoning process and answer are enclosed "
    "within <think> </think> and <Answer> </Answer> tags, respectively."
)
messages = [{"role": "user", "content": instruction}]

text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
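The model returns its reasoning and final answer inside the tags named in the prompt above. A small helper (illustrative, not part of the released code) can separate the two:

```python
import re

def parse_response(text):
    """Split a SocialR1-style completion into its <think> reasoning
    and <Answer> final answer; fall back to the raw text if a tag
    pair is missing."""
    think = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    answer = re.search(r"<Answer>(.*?)</Answer>", text, re.DOTALL)
    return (
        think.group(1).strip() if think else "",
        answer.group(1).strip() if answer else text.strip(),
    )

# Example completion (contents are illustrative):
sample = "<think>Alice never saw the move.</think><Answer>the drawer</Answer>"
reasoning, answer = parse_response(sample)
print(answer)  # the drawer
```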
SocialR1-8B is evaluated across three complementary settings; see the paper for details.
## Resources

| Resource | Link |
|---|---|
| Paper | arXiv:2603.09249 |
| SocialR1-4B | Jincenzi/SocialR1-4B |
## Citation

```bibtex
@misc{wu2026socialr1,
  title={Social-R1: Enhancing Social Reasoning in LLMs through Trajectory-Level Reinforcement Learning},
  author={Wu, Jincenzi and Lei, Yuxuan and Lian, Jianxun and Huang, Yitian and Zhou, Lexin and Li, Haotian and Yang, Deng and Xie, Xing and Meng, Helen},
  year={2026},
  eprint={2603.09249},
  archivePrefix={arXiv}
}
```
## Contact

For questions or discussions, please contact jincenziwu@gmail.com.