arxiv:2504.02883

SemEval-2025 Task 4: Unlearning sensitive content from Large Language Models

Published on Apr 2, 2025
Abstract

SemEval-2025 Task 4 presents a comprehensive evaluation of techniques for removing sensitive content from large language models across diverse scenarios including synthetic creative texts, PII-containing biographies, and real training documents.

AI-generated summary

We introduce SemEval-2025 Task 4: unlearning sensitive content from Large Language Models (LLMs). The task features three subtasks for LLM unlearning spanning different use cases: (1) unlearn long-form synthetic creative documents spanning different genres; (2) unlearn short-form synthetic biographies containing personally identifiable information (PII), including fake names, phone numbers, SSNs, email addresses, and home addresses; and (3) unlearn real documents sampled from the target model's training dataset. We received over 100 submissions from over 30 institutions, and we summarize the key techniques and lessons in this paper.
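
The paper surveys the techniques participants used; one widely known unlearning baseline in this space is "gradient difference": gradient ascent on the loss over the forget set combined with gradient descent on a retain set. The sketch below illustrates that idea only — it is not the paper's method or any participant's submission, and it uses a toy logistic-regression model in NumPy rather than an LLM:

```python
import numpy as np

# Minimal sketch of gradient-difference unlearning (an illustrative
# baseline, NOT the paper's specific method), on a toy logistic model.
rng = np.random.default_rng(0)
w = rng.normal(size=3)  # "model" parameters

def loss_and_grad(w, x, y):
    """Binary cross-entropy loss and its gradient for one example."""
    p = 1.0 / (1.0 + np.exp(-x @ w))
    eps = 1e-9
    loss = -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    grad = (p - y) * x
    return loss, grad

# Hypothetical data: one "sensitive" example to forget, one to retain.
forget_x, forget_y = np.array([1.0, 2.0, -1.0]), 1
retain_x, retain_y = np.array([-1.0, 0.5, 2.0]), 0

lr = 0.1
before_forget, _ = loss_and_grad(w, forget_x, forget_y)
before_retain, _ = loss_and_grad(w, retain_x, retain_y)

for _ in range(50):
    _, g_forget = loss_and_grad(w, forget_x, forget_y)
    _, g_retain = loss_and_grad(w, retain_x, retain_y)
    # Ascend the loss on the forget example (unlearn it) while
    # descending on the retain example (preserve utility).
    w = w + lr * g_forget - lr * g_retain

after_forget, _ = loss_and_grad(w, forget_x, forget_y)
after_retain, _ = loss_and_grad(w, retain_x, retain_y)
print(f"forget loss: {before_forget:.3f} -> {after_forget:.3f}")
print(f"retain loss: {before_retain:.3f} -> {after_retain:.3f}")
```

After the loop, the loss on the forget example rises (the model can no longer reproduce it) while the loss on the retain example stays low — the trade-off that the task's subtasks are designed to measure at LLM scale.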


Get this paper in your agent:

hf papers read 2504.02883
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
