Papers
arxiv:2503.01224

CE-U: Cross Entropy Unlearning

Published on Mar 15, 2025
Abstract

CE-U, a novel loss function for unlearning in large language models, addresses gradient instability issues in gradient ascent approaches and unifies standard learning and unlearning frameworks while achieving state-of-the-art results on the TOFU benchmark.

AI-generated summary

Large language models memorize sensitive data from their pretraining corpora. In this work, we propose CE-U (Cross Entropy Unlearning), a loss function for unlearning. CE-U addresses fundamental limitations of gradient ascent approaches, which suffer from vanishing gradients when model confidence is high and exploding gradients when confidence is low. We also unify standard cross entropy learning and unlearning into a single framework. On the TOFU benchmark for unlearning, CE-U achieves state-of-the-art results on LLaMA2-7B models without using an extra oracle model or additional positive samples. Our analysis reveals that the problematic gradient ascent component also appears in reinforcement learning algorithms such as DPO and GRPO, suggesting that applying the CE-U approach to reinforcement learning is a promising direction for improving stability and convergence.
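The exact CE-U formulation is not given on this page, so the following is only a minimal sketch of the idea the summary describes: instead of gradient ascent on the forget token's negative log-likelihood, train with a standard cross entropy whose target is the model's own predictive distribution renormalized after masking out the token to be unlearned. The function names (`ce_u_loss`, `ga_loss`) and the toy logits are illustrative assumptions, not the paper's code; consult the paper for the precise loss.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def ce_u_loss(logits, forget_token):
    """Hypothetical CE-U-style loss (sketch, not the paper's exact code):
    cross entropy between the model's prediction and a self-derived target
    in which the forget token's logit is masked before renormalizing."""
    probs = softmax(logits)
    masked = [x if i != forget_token else float("-inf")
              for i, x in enumerate(logits)]
    target = softmax(masked)  # assigns zero mass to forget_token
    return -sum(t * math.log(p) for t, p in zip(target, probs) if t > 0)

def ga_loss(logits, forget_token):
    """Gradient ascent baseline: minimize log p(forget_token),
    i.e. maximize its negative log-likelihood. Unbounded below."""
    return math.log(softmax(logits)[forget_token])

# Toy 3-token vocabulary, unlearning token 0.
print(ce_u_loss([2.0, 1.0, 0.0], 0))    # bounded, positive
print(ce_u_loss([-10.0, 1.0, 0.0], 0))  # stays finite once token is forgotten
print(ga_loss([-10.0, 1.0, 0.0], 0))    # diverges toward -inf as p(token) -> 0
```

The last two lines illustrate the instability the abstract attributes to gradient ascent: as the forget token's probability approaches zero, the gradient ascent objective diverges, while the CE-U-style loss remains bounded because its target distribution already excludes that token.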

