Papers
arxiv:2601.05192

LELA: an LLM-based Entity Linking Approach with Zero-Shot Domain Adaptation

Published on Jan 8

Abstract

LELA is a modular coarse-to-fine entity linking method that utilizes large language models without requiring fine-tuning, demonstrating strong performance across different domains and knowledge bases.

AI-generated summary

Entity linking (mapping ambiguous mentions in text to entities in a knowledge base) is a foundational step in tasks such as knowledge graph construction, question answering, and information extraction. Our method, LELA, is a modular coarse-to-fine approach that leverages the capabilities of large language models (LLMs) and works with different target domains, knowledge bases, and LLMs, without any fine-tuning phase. Our experiments across various entity linking settings show that LELA is highly competitive with fine-tuned approaches and substantially outperforms non-fine-tuned ones.
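To make the coarse-to-fine idea concrete, here is a minimal sketch of a generic two-stage entity-linking pipeline. This is not the paper's implementation: the toy knowledge base, the surface-similarity retriever, and the word-overlap disambiguator are illustrative stand-ins. In LELA's setting, the fine stage would prompt an LLM with the mention's context and the candidate descriptions instead of using a lexical heuristic.

```python
import re
from difflib import SequenceMatcher

# Toy knowledge base: entity ID -> short description (illustrative only).
KB = {
    "Q90": "Paris, capital city of France",
    "Q167646": "Paris, hero of Greek mythology",
    "Q830149": "Paris, city in Texas, United States",
}


def coarse_candidates(mention, kb, k=3):
    """Coarse stage: retrieve top-k candidates by surface similarity
    between the mention and each entity description."""
    scored = sorted(
        kb.items(),
        key=lambda item: SequenceMatcher(
            None, mention.lower(), item[1].lower()
        ).ratio(),
        reverse=True,
    )
    return [entity_id for entity_id, _ in scored[:k]]


def fine_disambiguate(mention, context, candidates, kb):
    """Fine stage: pick the candidate whose description best matches the
    mention's context. A real system would ask an LLM to choose among
    the candidates; here we use simple word overlap as a stand-in."""
    context_words = set(re.findall(r"\w+", context.lower()))

    def overlap(entity_id):
        desc_words = set(re.findall(r"\w+", kb[entity_id].lower()))
        return len(context_words & desc_words)

    return max(candidates, key=overlap)


def link(mention, context, kb):
    """Full pipeline: coarse candidate retrieval, then fine disambiguation."""
    return fine_disambiguate(mention, context, coarse_candidates(mention, kb), kb)


print(link("Paris", "Paris is the capital of France.", KB))  # → Q90
```

The two-stage split matters because the coarse stage keeps the candidate set small, so the expensive fine stage (an LLM call, in LELA's case) only has to rank a handful of options rather than the whole knowledge base.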


