arxiv:2006.11605

Studying Attention Models in Sentiment Attitude Extraction Task

Published on Jun 20, 2020
Authors:

Abstract

AI-generated summary

Attentive context encoders improve sentiment attitude extraction in Russian texts, increasing F1 scores by 1.5-5.9% compared to models without attention mechanisms.

In the sentiment attitude extraction task, the aim is to identify <<attitudes>> -- sentiment relations between entities mentioned in a text. In this paper, we present a study of attention-based context encoders for the sentiment attitude extraction task. We adapt attentive context encoders of two types: (i) feature-based and (ii) self-based. Our experiments on RuSentRel, a corpus of Russian analytical texts, show that models trained with attentive encoders outperform those trained without them, achieving a 1.5-5.9% increase in F1. We also analyze how attention weight distributions vary with term type.
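The two encoder types named above can be sketched roughly as follows. This is a minimal illustrative NumPy sketch, not the paper's actual architecture: a feature-based encoder scores each term's hidden state against an external feature vector (here, a hypothetical entity embedding), while a self-based encoder lets the hidden states attend to one another. All names and dimensions are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def feature_attention(hidden, feature):
    """Feature-based attention (sketch): score each hidden state against
    an external feature vector and return the weighted context vector
    together with the attention weights over terms."""
    scores = hidden @ feature              # (seq_len,)
    weights = softmax(scores)              # distribution over terms
    context = weights @ hidden             # (dim,)
    return context, weights

def self_attention(hidden):
    """Self-based attention (sketch): scaled dot-product attention of the
    hidden states over themselves, mean-pooled into one context vector."""
    d = hidden.shape[-1]
    scores = hidden @ hidden.T / np.sqrt(d)  # (seq_len, seq_len)
    weights = softmax(scores, axis=-1)
    attended = weights @ hidden              # (seq_len, dim)
    return attended.mean(axis=0), weights

# Toy input: 6 terms encoded as 8-dimensional hidden states.
rng = np.random.default_rng(0)
hidden = rng.normal(size=(6, 8))
entity = rng.normal(size=8)  # hypothetical entity feature vector

ctx_f, w_f = feature_attention(hidden, entity)
ctx_s, w_s = self_attention(hidden)
print(w_f.shape, ctx_f.shape)  # attention weights per term, pooled context
```

The attention weights returned by both functions are what the paper's analysis inspects per term type; inspecting `w_f` shows which terms the external feature pulls weight toward.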

