arxiv:2503.01854

A Comprehensive Survey of Machine Unlearning Techniques for Large Language Models

Published on May 31, 2025

Abstract

LLM unlearning techniques remove the influence of undesirable data from large language models while preserving their utility; this survey systematically organizes existing approaches, evaluation methods, and future research directions.

AI-generated summary

This study surveys machine unlearning techniques in the context of large language models (LLMs), referred to as LLM unlearning. LLM unlearning offers a principled approach to removing the influence of undesirable data (e.g., sensitive or illegal information) from LLMs while preserving their overall utility, without requiring full retraining. Despite growing research interest, no comprehensive survey systematically organizes existing work and distills its key insights; we aim to bridge this gap. We begin by introducing the definition and paradigms of LLM unlearning, followed by a comprehensive taxonomy of existing unlearning studies. Next, we categorize current unlearning approaches, summarizing their strengths and limitations. We then review evaluation metrics and benchmarks, providing a structured overview of current assessment methodologies. Finally, we outline promising directions for future research, highlighting key challenges and opportunities in the field.
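
Many of the approaches such surveys cover share a simple optimization skeleton: raise the model's loss on a designated forget set (gradient ascent) while keeping its loss low on a retain set to preserve utility. The PyTorch sketch below illustrates that skeleton only; it is not the method of this particular paper, and the model name, example texts, weighting lam, and step count are hypothetical placeholders.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"                        # hypothetical stand-in for a larger LLM
tok = AutoTokenizer.from_pretrained(model_name)
tok.pad_token = tok.eos_token              # gpt2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

forget_texts = ["Example sensitive fact the model should forget."]  # hypothetical forget set
retain_texts = ["Example benign text whose behavior we preserve."]  # hypothetical retain set
lam = 1.0                                  # retain-loss weight (illustrative)

def lm_loss(texts):
    # Standard causal-LM cross-entropy; padding positions are masked out.
    batch = tok(texts, return_tensors="pt", padding=True)
    labels = batch["input_ids"].clone()
    labels[batch["attention_mask"] == 0] = -100
    return model(**batch, labels=labels).loss

model.train()
for step in range(10):                     # a few illustrative steps
    loss_forget = lm_loss(forget_texts)    # push this UP (unlearn)
    loss_retain = lm_loss(retain_texts)    # keep this LOW (preserve utility)
    # Ascend on the forget loss, descend on the retain loss.
    loss = -loss_forget + lam * loss_retain
    opt.zero_grad()
    loss.backward()
    opt.step()

In practice the unbounded gradient-ascent term is typically bounded or replaced (e.g., with KL-to-reference or preference-style objectives), and the trade-off between forgetting strength and retained utility is exactly the kind of design axis a taxonomy of unlearning methods organizes.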

Get this paper in your agent:

hf papers read 2503.01854

Don't have the latest CLI? Install it with:

curl -LsSf https://hf.co/cli/install.sh | bash
