arxiv:2604.06159

Target Policy Optimization

Published on Apr 7 · Submitted by Jean Kaddour on Apr 16

Abstract

Target Policy Optimization separates policy update decisions from probability assignment in reinforcement learning, improving performance over standard policy gradient methods in sparse reward scenarios.

AI-generated summary

In RL, given a prompt, we sample a group of completions from a model and score them. Two questions follow: which completions should gain probability mass, and how should the parameters move to realize that change? Standard policy-gradient methods answer both at once, so the update can overshoot or undershoot depending on the learning rate, clipping, and other optimizer choices. We introduce Target Policy Optimization (TPO), which separates the two questions. Given scored completions, TPO constructs a target distribution $q_i \propto p_i^{\text{old}} \exp(u_i)$ and fits the policy to it by cross-entropy. The loss gradient on the sampled-completion logits is $p^{\theta} - q$, which vanishes once the policy matches the target. On tabular bandits, transformer sequence tasks, and billion-parameter LLM RLVR, TPO matches PG, PPO, GRPO, and DG on easy tasks and substantially outperforms them under sparse reward. Code is available at https://github.com/JeanKaddour/tpo.
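
To make the update concrete, here is a minimal sketch of the TPO loss in PyTorch, assuming completion-level log-probabilities for a single prompt's group of completions. The function name, tensor shapes, and the plain softmax over the group are assumptions of this sketch, not the paper's released implementation.

```python
import torch
import torch.nn.functional as F

def tpo_loss(logits, old_logprobs, utilities):
    """Sketch of the TPO objective for one prompt's group of G completions.

    Hypothetical names/shapes (not from the paper's code):
      logits       (G,): current policy's log-scores for the sampled completions
      old_logprobs (G,): log p_i^old under the sampling policy
      utilities    (G,): scores u_i assigned to each completion
    """
    with torch.no_grad():
        # Target distribution over the group: q_i proportional to p_i^old * exp(u_i).
        q = torch.softmax(old_logprobs + utilities, dim=-1)
    # Fit the policy to q by cross-entropy. The gradient w.r.t. the logits is
    # softmax(logits) - q, which vanishes once the policy matches the target;
    # no clipping, no importance ratios.
    log_p = F.log_softmax(logits, dim=-1)
    return -(q * log_p).sum()

# Example: 4 completions sampled uniformly, the first one scored well.
logits = torch.zeros(4, requires_grad=True)
loss = tpo_loss(
    logits,
    old_logprobs=torch.log(torch.full((4,), 0.25)),
    utilities=torch.tensor([2.0, 0.0, 0.0, 0.0]),
)
loss.backward()  # logits.grad equals softmax(logits) - q
```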

Community

Paper submitter

TPO basically turns GRPO into supervised learning: build a target distribution over sampled completions, then fit with cross-entropy.

The gradient vanishes once the target is matched. No clipping. No importance ratios.
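
For reference, the vanishing-gradient property is the standard softmax cross-entropy identity (a textbook result, not specific to this paper):

```latex
\[
L(z) = -\sum_i q_i \log p_i^{\theta},
\qquad
p_i^{\theta} = \frac{e^{z_i}}{\sum_j e^{z_j}},
\]
\[
\frac{\partial L}{\partial z_k}
  = -\sum_i q_i \left( \mathbb{1}[i = k] - p_k^{\theta} \right)
  = p_k^{\theta} - q_k
  \quad \text{(using } \textstyle\sum_i q_i = 1 \text{)},
\]
so the gradient is \(p^{\theta} - q\), which is zero exactly when the policy matches the target.
```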
