Papers
arxiv:2508.01324

Towards Evaluation for Real-World LLM Unlearning

Published on Aug 2, 2025

Abstract

A new unlearning evaluation metric called DCUE is proposed that addresses limitations in existing metrics by identifying core tokens and correcting distributional biases using validation sets and the Kolmogorov-Smirnov test.

AI-generated summary

This paper analyzes the limitations of existing unlearning evaluation metrics in terms of practicality, exactness, and robustness in real-world LLM unlearning scenarios. To overcome these limitations, we propose a new metric called Distribution Correction-based Unlearning Evaluation (DCUE). It identifies core tokens and corrects distributional biases in their confidence scores using a validation set. The evaluation results are quantified with the Kolmogorov-Smirnov test. Experimental results demonstrate that DCUE overcomes the limitations of existing metrics, and its design can guide the development of more practical and reliable unlearning algorithms in the future.
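To make the final quantification step concrete, the sketch below compares two confidence-score distributions with the two-sample Kolmogorov-Smirnov test, which is what DCUE uses to quantify its evaluation results. The arrays are synthetic stand-ins for core-token confidence scores; the paper's actual core-token selection and bias-correction procedures are not reproduced here.

```python
# Illustrative sketch only: quantifying the gap between token-confidence
# distributions with the two-sample Kolmogorov-Smirnov test. The scores
# below are synthetic, not outputs of any real unlearned model.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical confidence scores: an unlearned model on the forget set
# versus a reference distribution estimated from a validation set.
forget_scores = rng.beta(2, 5, size=1000)      # lower confidence after unlearning
validation_scores = rng.beta(5, 2, size=1000)  # typical confidence on held-out data

# The KS statistic is the maximum gap between the two empirical CDFs:
# 0 means the distributions match; values near 1 mean they are fully separated.
stat, p_value = ks_2samp(forget_scores, validation_scores)
print(f"KS statistic: {stat:.3f}, p-value: {p_value:.2e}")
```

A larger KS statistic here would indicate that the model's confidence on forgotten content has shifted well away from its normal behavior, which is the kind of distribution-level signal the metric relies on.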


Get this paper in your agent:

hf papers read 2508.01324
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
