| venue (string, 1 class) | title (string, 18–162 chars) | abstract (string, 252–1.89k chars) | doc_id (string, 32 chars) | publication_year (int64) | sentences (list, 1–13 items) | events (list, 1–24 items) | document (list, 50–348 tokens) |
|---|---|---|---|---|---|---|---|
ACL | How effective is BERT without word ordering? Implications for language understanding and data privacy | Ordered word sequences contain the rich structures that define language. However, it’s often not clear if or how modern pretrained language models utilize these structures. We show that the token representations and self-attention activations within BERT are surprisingly resilient to shuffling the order of input tokens... | 3823bc42bc9db061170c93a5bb5bf749 | 2021 | [
"ordered word sequences contain the rich structures that define language .",
"however , it ’ s often not clear if or how modern pretrained language models utilize these structures .",
"we show that the token representations and self - attention activations within bert are surprisingly resilient to shuffling the... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "ordered word sequences",
"nugget_type": "FEA",
"argument_type": "Target",
"tokens": [
"ordered",
"word",
"sequences"
],
"offsets": [
0,
1,
2
... | [
"ordered",
"word",
"sequences",
"contain",
"the",
"rich",
"structures",
"that",
"define",
"language",
".",
"however",
",",
"it",
"’",
"s",
"often",
"not",
"clear",
"if",
"or",
"how",
"modern",
"pretrained",
"language",
"models",
"utilize",
"these",
"structures... |
ACL | Bayesian Hierarchical Words Representation Learning | This paper presents the Bayesian Hierarchical Words Representation (BHWR) learning algorithm. BHWR facilitates Variational Bayes word representation learning combined with semantic taxonomy modeling via hierarchical priors. By propagating relevant information between related words, BHWR utilizes the taxonomy to improve... | f4103f1e09033e82636eec3058cf2bc2 | 2020 | [
"this paper presents the bayesian hierarchical words representation ( bhwr ) learning algorithm .",
"bhwr facilitates variational bayes word representation learning combined with semantic taxonomy modeling via hierarchical priors .",
"by propagating relevant information between related words , bhwr utilizes the... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "bayesian hierarchical words representation learning algorithm",
"nugget_type": "APP",
"argument_type": "Content",
"tokens": [
"bayesian",
"hierarchical",
"words",
"representation",
... | [
"this",
"paper",
"presents",
"the",
"bayesian",
"hierarchical",
"words",
"representation",
"(",
"bhwr",
")",
"learning",
"algorithm",
".",
"bhwr",
"facilitates",
"variational",
"bayes",
"word",
"representation",
"learning",
"combined",
"with",
"semantic",
"taxonomy",
... |
ACL | Quantity Tagger: A Latent-Variable Sequence Labeling Approach to Solving Addition-Subtraction Word Problems | An arithmetic word problem typically includes a textual description containing several constant quantities. The key to solving the problem is to reveal the underlying mathematical relations (such as addition and subtraction) among quantities, and then generate equations to find solutions. This work presents a novel app... | 9892d198e4dca7d9a85fc18cea699c1f | 2019 | [
"an arithmetic word problem typically includes a textual description containing several constant quantities .",
"the key to solving the problem is to reveal the underlying mathematical relations ( such as addition and subtraction ) among quantities , and then generate equations to find solutions .",
"this work ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "arithmetic word problem",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"arithmetic",
"word",
"problem"
],
"offsets": [
1,
2,
3
... | [
"an",
"arithmetic",
"word",
"problem",
"typically",
"includes",
"a",
"textual",
"description",
"containing",
"several",
"constant",
"quantities",
".",
"the",
"key",
"to",
"solving",
"the",
"problem",
"is",
"to",
"reveal",
"the",
"underlying",
"mathematical",
"rela... |
ACL | Competence-based Multimodal Curriculum Learning for Medical Report Generation | Medical report generation task, which targets to produce long and coherent descriptions of medical images, has attracted growing research interests recently. Different from the general image captioning tasks, medical report generation is more challenging for data-driven neural models. This is mainly due to 1) the serio... | 296169a89fef68cb42352ebf5f8136e0 | 2021 | [
"medical report generation task , which targets to produce long and coherent descriptions of medical images , has attracted growing research interests recently .",
"different from the general image captioning tasks , medical report generation is more challenging for data - driven neural models .",
"this is main... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "medical report generation task",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"medical",
"report",
"generation",
"task"
],
"offsets": [
0,
... | [
"medical",
"report",
"generation",
"task",
",",
"which",
"targets",
"to",
"produce",
"long",
"and",
"coherent",
"descriptions",
"of",
"medical",
"images",
",",
"has",
"attracted",
"growing",
"research",
"interests",
"recently",
".",
"different",
"from",
"the",
"... |
ACL | Sentence-aware Contrastive Learning for Open-Domain Passage Retrieval | Training dense passage representations via contrastive learning has been shown effective for Open-Domain Passage Retrieval (ODPR). Existing studies focus on further optimizing by improving negative sampling strategy or extra pretraining. However, these studies keep unknown in capturing passage with internal representat... | 085a1f0d191b94bfe225758d49551b87 | 2022 | [
"training dense passage representations via contrastive learning has been shown effective for open - domain passage retrieval ( odpr ) .",
"existing studies focus on further optimizing by improving negative sampling strategy or extra pretraining .",
"however , these studies keep unknown in capturing passage wit... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "open - domain passage retrieval",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"open",
"-",
"domain",
"passage",
"retrieval"
],
"offsets": ... | [
"training",
"dense",
"passage",
"representations",
"via",
"contrastive",
"learning",
"has",
"been",
"shown",
"effective",
"for",
"open",
"-",
"domain",
"passage",
"retrieval",
"(",
"odpr",
")",
".",
"existing",
"studies",
"focus",
"on",
"further",
"optimizing",
... |
ACL | GL-GIN: Fast and Accurate Non-Autoregressive Model for Joint Multiple Intent Detection and Slot Filling | Multi-intent SLU can handle multiple intents in an utterance, which has attracted increasing attention. However, the state-of-the-art joint models heavily rely on autoregressive approaches, resulting in two issues: slow inference speed and information leakage. In this paper, we explore a non-autoregressive model for jo... | 4076b7ba3c08cd16e655a629f73b2b3b | 2021 | [
"multi - intent slu can handle multiple intents in an utterance , which has attracted increasing attention .",
"however , the state - of - the - art joint models heavily rely on autoregressive approaches , resulting in two issues : slow inference speed and information leakage .",
"in this paper , we explore a n... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "multi - intent slu",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"multi",
"-",
"intent",
"slu"
],
"offsets": [
0,
1,
2... | [
"multi",
"-",
"intent",
"slu",
"can",
"handle",
"multiple",
"intents",
"in",
"an",
"utterance",
",",
"which",
"has",
"attracted",
"increasing",
"attention",
".",
"however",
",",
"the",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"joint",
"models",
"heavily... |
ACL | Interactive Classification by Asking Informative Questions | We study the potential for interaction in natural language classification. We add a limited form of interaction for intent classification, where users provide an initial query using natural language, and the system asks for additional information using binary or multi-choice questions. At each turn, our system decides ... | eaed7bf28425753e7e6b9593de2a631e | 2020 | [
"we study the potential for interaction in natural language classification .",
"we add a limited form of interaction for intent classification , where users provide an initial query using natural language , and the system asks for additional information using binary or multi - choice questions .",
"at each turn... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
93
]
},
{
"text": "on two domains",
"nugget_type": "LIM",
... | [
"we",
"study",
"the",
"potential",
"for",
"interaction",
"in",
"natural",
"language",
"classification",
".",
"we",
"add",
"a",
"limited",
"form",
"of",
"interaction",
"for",
"intent",
"classification",
",",
"where",
"users",
"provide",
"an",
"initial",
"query",
... |
ACL | Hierarchical Entity Typing via Multi-level Learning to Rank | We propose a novel method for hierarchical entity classification that embraces ontological structure at both training and during prediction. At training, our novel multi-level learning-to-rank loss compares positive types against negative siblings according to the type tree. During prediction, we define a coarse-to-fin... | 88fb156b65c6fa308fffc8882592526e | 2020 | [
"we propose a novel method for hierarchical entity classification that embraces ontological structure at both training and during prediction .",
"at training , our novel multi - level learning - to - rank loss compares positive types against negative siblings according to the type tree .",
"during prediction , ... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "method",
"nugget_type": "APP",
"arg... | [
"we",
"propose",
"a",
"novel",
"method",
"for",
"hierarchical",
"entity",
"classification",
"that",
"embraces",
"ontological",
"structure",
"at",
"both",
"training",
"and",
"during",
"prediction",
".",
"at",
"training",
",",
"our",
"novel",
"multi",
"-",
"level"... |
ACL | Structural Pre-training for Dialogue Comprehension | Pre-trained language models (PrLMs) have demonstrated superior performance due to their strong ability to learn universal language representations from self-supervised pre-training. However, even with the help of the powerful PrLMs, it is still challenging to effectively capture task-related knowledge from dialogue tex... | df2b8750fb2da84b7081e77062a4088c | 2021 | [
"pre - trained language models ( prlms ) have demonstrated superior performance due to their strong ability to learn universal language representations from self - supervised pre - training .",
"however , even with the help of the powerful prlms , it is still challenging to effectively capture task - related know... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "pre - trained language models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"pre",
"-",
"trained",
"language",
"models"
],
"offsets": [
... | [
"pre",
"-",
"trained",
"language",
"models",
"(",
"prlms",
")",
"have",
"demonstrated",
"superior",
"performance",
"due",
"to",
"their",
"strong",
"ability",
"to",
"learn",
"universal",
"language",
"representations",
"from",
"self",
"-",
"supervised",
"pre",
"-"... |
ACL | Transition-based Bubble Parsing: Improvements on Coordination Structure Prediction | We propose a transition-based bubble parser to perform coordination structure identification and dependency-based syntactic analysis simultaneously. Bubble representations were proposed in the formal linguistics literature decades ago; they enhance dependency trees by encoding coordination boundaries and internal relat... | c9dcf42ac685bbbb5f0295012ce849e1 | 2021 | [
"we propose a transition - based bubble parser to perform coordination structure identification and dependency - based syntactic analysis simultaneously .",
"bubble representations were proposed in the formal linguistics literature decades ago ; they enhance dependency trees by encoding coordination boundaries an... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "transition - based bubble parser",
"nugget_... | [
"we",
"propose",
"a",
"transition",
"-",
"based",
"bubble",
"parser",
"to",
"perform",
"coordination",
"structure",
"identification",
"and",
"dependency",
"-",
"based",
"syntactic",
"analysis",
"simultaneously",
".",
"bubble",
"representations",
"were",
"proposed",
... |
ACL | Representing Schema Structure with Graph Neural Networks for Text-to-SQL Parsing | Research on parsing language to SQL has largely ignored the structure of the database (DB) schema, either because the DB was very simple, or because it was observed at both training and test time. In Spider, a recently-released text-to-SQL dataset, new and complex DBs are given at test time, and so the structure of the... | 4f2fa7563d9ad7ae55bc9904684d8f62 | 2019 | [
"research on parsing language to sql has largely ignored the structure of the database ( db ) schema , either because the db was very simple , or because it was observed at both training and test time .",
"in spider , a recently - released text - to - sql dataset , new and complex dbs are given at test time , and... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "structure of the database schema",
"nugget_type": "MOD",
"argument_type": "Fault",
"tokens": [
"structure",
"of",
"the",
"database",
"schema"
],
"offsets":... | [
"research",
"on",
"parsing",
"language",
"to",
"sql",
"has",
"largely",
"ignored",
"the",
"structure",
"of",
"the",
"database",
"(",
"db",
")",
"schema",
",",
"either",
"because",
"the",
"db",
"was",
"very",
"simple",
",",
"or",
"because",
"it",
"was",
"... |
ACL | Incorporating Linguistic Constraints into Keyphrase Generation | Keyphrases, that concisely describe the high-level topics discussed in a document, are very useful for a wide range of natural language processing tasks. Though existing keyphrase generation methods have achieved remarkable performance on this task, they generate many overlapping phrases (including sub-phrases or super... | 2c4b69794dc413b3c3b5798de472cc73 | 2019 | [
"keyphrases , that concisely describe the high - level topics discussed in a document , are very useful for a wide range of natural language processing tasks .",
"though existing keyphrase generation methods have achieved remarkable performance on this task , they generate many overlapping phrases ( including sub... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "keyphrases",
"nugget_type": "FEA",
"argument_type": "Target",
"tokens": [
"keyphrases"
],
"offsets": [
0
]
}
],
"trigger": {
"text": "useful",
"tokens"... | [
"keyphrases",
",",
"that",
"concisely",
"describe",
"the",
"high",
"-",
"level",
"topics",
"discussed",
"in",
"a",
"document",
",",
"are",
"very",
"useful",
"for",
"a",
"wide",
"range",
"of",
"natural",
"language",
"processing",
"tasks",
".",
"though",
"exis... |
ACL | Adaptive Testing and Debugging of NLP Models | Current approaches to testing and debugging NLP models rely on highly variable human creativity and extensive labor, or only work for a very restrictive class of bugs. We present AdaTest, a process which uses large scale language models (LMs) in partnership with human feedback to automatically write unit tests highligh... | d4a2e9fb9b7463dc5b8772bc7ced1eb2 | 2022 | [
"current approaches to testing and debugging nlp models rely on highly variable human creativity and extensive labor , or only work for a very restrictive class of bugs .",
"we present adatest , a process which uses large scale language models ( lms ) in partnership with human feedback to automatically write unit... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "current approaches to testing and debugging nlp models",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"current",
"approaches",
"to",
"testing",
"and",
... | [
"current",
"approaches",
"to",
"testing",
"and",
"debugging",
"nlp",
"models",
"rely",
"on",
"highly",
"variable",
"human",
"creativity",
"and",
"extensive",
"labor",
",",
"or",
"only",
"work",
"for",
"a",
"very",
"restrictive",
"class",
"of",
"bugs",
".",
"... |
ACL | A Generate-and-Rank Framework with Semantic Type Regularization for Biomedical Concept Normalization | Concept normalization, the task of linking textual mentions of concepts to concepts in an ontology, is challenging because ontologies are large. In most cases, annotated datasets cover only a small sample of the concepts, yet concept normalizers are expected to predict all concepts in the ontology. In this paper, we pr... | bec3a662992f2636db75f42e72d2c58f | 2020 | [
"concept normalization , the task of linking textual mentions of concepts to concepts in an ontology , is challenging because ontologies are large .",
"in most cases , annotated datasets cover only a small sample of the concepts , yet concept normalizers are expected to predict all concepts in the ontology .",
... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "concept normalization",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"concept",
"normalization"
],
"offsets": [
0,
1
]
}
],
"tr... | [
"concept",
"normalization",
",",
"the",
"task",
"of",
"linking",
"textual",
"mentions",
"of",
"concepts",
"to",
"concepts",
"in",
"an",
"ontology",
",",
"is",
"challenging",
"because",
"ontologies",
"are",
"large",
".",
"in",
"most",
"cases",
",",
"annotated",... |
ACL | MPII: Multi-Level Mutual Promotion for Inference and Interpretation | In order to better understand the rationale behind model behavior, recent works have exploited providing interpretation to support the inference prediction. However, existing methods tend to provide human-unfriendly interpretation, and are prone to sub-optimal performance due to one-side promotion, i.e. either inferenc... | be0217f49cf451ed0f47cb4a2f479482 | 2022 | [
"in order to better understand the rationale behind model behavior , recent works have exploited providing interpretation to support the inference prediction .",
"however , existing methods tend to provide human - unfriendly interpretation , and are prone to sub - optimal performance due to one - side promotion ,... | [
{
"event_type": "RWS",
"arguments": [
{
"text": "interpretation",
"nugget_type": "FEA",
"argument_type": "TriedComponent",
"tokens": [
"interpretation"
],
"offsets": [
16
]
},
{
"text": "better understand",... | [
"in",
"order",
"to",
"better",
"understand",
"the",
"rationale",
"behind",
"model",
"behavior",
",",
"recent",
"works",
"have",
"exploited",
"providing",
"interpretation",
"to",
"support",
"the",
"inference",
"prediction",
".",
"however",
",",
"existing",
"methods... |
ACL | Structural Characterization for Dialogue Disentanglement | Tangled multi-party dialogue contexts lead to challenges for dialogue reading comprehension, where multiple dialogue threads flow simultaneously within a common dialogue record, increasing difficulties in understanding the dialogue history for both human and machine. Previous studies mainly focus on utterance encoding ... | bb01db06b913b7ef410f5546a72e61e5 | 2022 | [
"tangled multi - party dialogue contexts lead to challenges for dialogue reading comprehension , where multiple dialogue threads flow simultaneously within a common dialogue record , increasing difficulties in understanding the dialogue history for both human and machine .",
"previous studies mainly focus on utte... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "dialogue reading comprehension",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"dialogue",
"reading",
"comprehension"
],
"offsets": [
10,
11... | [
"tangled",
"multi",
"-",
"party",
"dialogue",
"contexts",
"lead",
"to",
"challenges",
"for",
"dialogue",
"reading",
"comprehension",
",",
"where",
"multiple",
"dialogue",
"threads",
"flow",
"simultaneously",
"within",
"a",
"common",
"dialogue",
"record",
",",
"inc... |
ACL | Inducing Document Structure for Aspect-based Summarization | Automatic summarization is typically treated as a 1-to-1 mapping from document to summary. Documents such as news articles, however, are structured and often cover multiple topics or aspects; and readers may be interested in only some of them. We tackle the task of aspect-based summarization, where, given a document an... | 587bbef7684909709a96f7af05adbf62 | 2019 | [
"automatic summarization is typically treated as a 1 - to - 1 mapping from document to summary .",
"documents such as news articles , however , are structured and often cover multiple topics or aspects ; and readers may be interested in only some of them .",
"we tackle the task of aspect - based summarization ,... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "automatic summarization",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"automatic",
"summarization"
],
"offsets": [
0,
1
]
}
],
... | [
"automatic",
"summarization",
"is",
"typically",
"treated",
"as",
"a",
"1",
"-",
"to",
"-",
"1",
"mapping",
"from",
"document",
"to",
"summary",
".",
"documents",
"such",
"as",
"news",
"articles",
",",
"however",
",",
"are",
"structured",
"and",
"often",
"... |
ACL | Sequence Labeling Parsing by Learning across Representations | We use parsing as sequence labeling as a common framework to learn across constituency and dependency syntactic abstractions. To do so, we cast the problem as multitask learning (MTL). First, we show that adding a parsing paradigm as an auxiliary loss consistently improves the performance on the other paradigm. Secondly... | c223eed2451bd502da121098f7037969 | 2019 | [
"we use parsing as sequence labeling as a common framework to learn across constituency and dependency syntactic abstractions .",
"to do so , we cast the problem as multitask learning ( mtl ) .",
"first , we show that adding a parsing paradigm as an auxiliary loss consistently improves the performance on the ot... | [
{
"event_type": "PUR",
"arguments": [
{
"text": "across constituency",
"nugget_type": "TAK",
"argument_type": "Aim",
"tokens": [
"across",
"constituency"
],
"offsets": [
12,
13
]
},
{
"t... | [
"we",
"use",
"parsing",
"as",
"sequence",
"labeling",
"as",
"a",
"common",
"framework",
"to",
"learn",
"across",
"constituency",
"and",
"dependency",
"syntactic",
"abstractions",
".",
"to",
"do",
"so",
",",
"we",
"cast",
"the",
"problem",
"as",
"multitask",
... |
ACL | Automatic Grammatical Error Correction for Sequence-to-sequence Text Generation: An Empirical Study | Sequence-to-sequence (seq2seq) models have achieved tremendous success in text generation tasks. However, there is no guarantee that they can always generate sentences without grammatical errors. In this paper, we present a preliminary empirical study on whether and how much automatic grammatical error correction can h... | ad988ad9a7c7c5cfe9e1f5bf7c7ee140 | 2019 | [
"sequence - to - sequence ( seq2seq ) models have achieved tremendous success in text generation tasks .",
"however , there is no guarantee that they can always generate sentences without grammatical errors .",
"in this paper , we present a preliminary empirical study on whether and how much automatic grammatic... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "text generation tasks",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"text",
"generation",
"tasks"
],
"offsets": [
14,
15,
16
... | [
"sequence",
"-",
"to",
"-",
"sequence",
"(",
"seq2seq",
")",
"models",
"have",
"achieved",
"tremendous",
"success",
"in",
"text",
"generation",
"tasks",
".",
"however",
",",
"there",
"is",
"no",
"guarantee",
"that",
"they",
"can",
"always",
"generate",
"sent... |
ACL | TACRED Revisited: A Thorough Evaluation of the TACRED Relation Extraction Task | TACRED is one of the largest, most widely used crowdsourced datasets in Relation Extraction (RE). But, even with recent advances in unsupervised pre-training and knowledge enhanced neural RE, models still show a high error rate. In this paper, we investigate the questions: Have we reached a performance ceiling or is th... | 5121de07fcf8e56d2db9ba9aba98ee6c | 2020 | [
"tacred is one of the largest , most widely used crowdsourced datasets in relation extraction ( re ) .",
"but , even with recent advances in unsupervised pre - training and knowledge enhanced neural re , models still show a high error rate .",
"in this paper , we investigate the questions : have we reached a pe... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "relation extraction",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"relation",
"extraction"
],
"offsets": [
13,
14
]
}
],
"trig... | [
"tacred",
"is",
"one",
"of",
"the",
"largest",
",",
"most",
"widely",
"used",
"crowdsourced",
"datasets",
"in",
"relation",
"extraction",
"(",
"re",
")",
".",
"but",
",",
"even",
"with",
"recent",
"advances",
"in",
"unsupervised",
"pre",
"-",
"training",
"... |
ACL | A Neural Transition-based Model for Argumentation Mining | The goal of argumentation mining is to automatically extract argumentation structures from argumentative texts. Most existing methods determine argumentative relations by exhaustively enumerating all possible pairs of argument components, which suffer from low efficiency and class imbalance. Moreover, due to the comple... | 587c5bc033851e955707b71abd9937dd | 2021 | [
"the goal of argumentation mining is to automatically extract argumentation structures from argumentative texts .",
"most existing methods determine argumentative relations by exhaustively enumerating all possible pairs of argument components , which suffer from low efficiency and class imbalance .",
"moreover ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "argumentation mining",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"argumentation",
"mining"
],
"offsets": [
3,
4
]
}
],
"trig... | [
"the",
"goal",
"of",
"argumentation",
"mining",
"is",
"to",
"automatically",
"extract",
"argumentation",
"structures",
"from",
"argumentative",
"texts",
".",
"most",
"existing",
"methods",
"determine",
"argumentative",
"relations",
"by",
"exhaustively",
"enumerating",
... |
ACL | BiSET: Bi-directional Selective Encoding with Template for Abstractive Summarization | The success of neural summarization models stems from the meticulous encodings of source articles. To overcome the impediments of limited and sometimes noisy training data, one promising direction is to make better use of the available training data by applying filters during summarization. In this paper, we propose a ... | cb8f32b1eaf6cd2855804ebbf85810f5 | 2019 | [
"the success of neural summarization models stems from the meticulous encodings of source articles .",
"to overcome the impediments of limited and sometimes noisy training data , one promising direction is to make better use of the available training data by applying filters during summarization .",
"in this pa... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural summarization models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"neural",
"summarization",
"models"
],
"offsets": [
3,
4,
... | [
"the",
"success",
"of",
"neural",
"summarization",
"models",
"stems",
"from",
"the",
"meticulous",
"encodings",
"of",
"source",
"articles",
".",
"to",
"overcome",
"the",
"impediments",
"of",
"limited",
"and",
"sometimes",
"noisy",
"training",
"data",
",",
"one",... |
ACL | Adversarial NLI: A New Benchmark for Natural Language Understanding | We introduce a new large-scale NLI benchmark dataset, collected via an iterative, adversarial human-and-model-in-the-loop procedure. We show that training models on this new dataset leads to state-of-the-art performance on a variety of popular NLI benchmarks, while posing a more difficult challenge with its new test se... | c2ac902c8e0e0586889686c764068f05 | 2020 | [
"we introduce a new large - scale nli benchmark dataset , collected via an iterative , adversarial human - and - model - in - the - loop procedure .",
"we show that training models on this new dataset leads to state - of - the - art performance on a variety of popular nli benchmarks , while posing a more difficul... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "large - scale nli benchmark dataset",
"nugg... | [
"we",
"introduce",
"a",
"new",
"large",
"-",
"scale",
"nli",
"benchmark",
"dataset",
",",
"collected",
"via",
"an",
"iterative",
",",
"adversarial",
"human",
"-",
"and",
"-",
"model",
"-",
"in",
"-",
"the",
"-",
"loop",
"procedure",
".",
"we",
"show",
... |
ACL | UniTE: Unified Translation Evaluation | Translation quality evaluation plays a crucial role in machine translation. According to the input format, it is mainly separated into three tasks, i.e., reference-only, source-only and source-reference-combined. Recent methods, despite their promising results, are specifically designed and optimized on one of them. Th... | 2c254cb138e290c4ae41af24adf914b8 | 2022 | [
"translation quality evaluation plays a crucial role in machine translation .",
"according to the input format , it is mainly separated into three tasks , i . e . , reference - only , source - only and source - reference - combined .",
"recent methods , despite their promising results , are specifically designe... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "translation quality evaluation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"translation",
"quality",
"evaluation"
],
"offsets": [
0,
1,
... | [
"translation",
"quality",
"evaluation",
"plays",
"a",
"crucial",
"role",
"in",
"machine",
"translation",
".",
"according",
"to",
"the",
"input",
"format",
",",
"it",
"is",
"mainly",
"separated",
"into",
"three",
"tasks",
",",
"i",
".",
"e",
".",
",",
"refe... |
ACL | Contextualized Sparse Representations for Real-Time Open-Domain Question Answering | Open-domain question answering can be formulated as a phrase retrieval problem, in which we can expect huge scalability and speed benefit but often suffer from low accuracy due to the limitation of existing phrase representation models. In this paper, we aim to improve the quality of each phrase embedding by augmenting... | 0d6ea4d60bbc3f35069eca81565ba234 | 2020 | [
"open - domain question answering can be formulated as a phrase retrieval problem , in which we can expect huge scalability and speed benefit but often suffer from low accuracy due to the limitation of existing phrase representation models .",
"in this paper , we aim to improve the quality of each phrase embeddin... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "open - domain question answering",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"open",
"-",
"domain",
"question",
"answering"
],
"offsets"... | [
"open",
"-",
"domain",
"question",
"answering",
"can",
"be",
"formulated",
"as",
"a",
"phrase",
"retrieval",
"problem",
",",
"in",
"which",
"we",
"can",
"expect",
"huge",
"scalability",
"and",
"speed",
"benefit",
"but",
"often",
"suffer",
"from",
"low",
"acc... |
ACL | Answering Ambiguous Questions through Generative Evidence Fusion and Round-Trip Prediction | In open-domain question answering, questions are highly likely to be ambiguous because users may not know the scope of relevant topics when formulating them. Therefore, a system needs to find possible interpretations of the question, and predict one or multiple plausible answers. When multiple plausible answers are fou... | 326313a5c7fe83f21582e0b376b89917 | 2021 | [
"in open - domain question answering , questions are highly likely to be ambiguous because users may not know the scope of relevant topics when formulating them .",
"therefore , a system needs to find possible interpretations of the question , and predict one or multiple plausible answers .",
"when multiple pla... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
74
]
},
{
"text": "model",
"nugget_type": "APP",
"arg... | [
"in",
"open",
"-",
"domain",
"question",
"answering",
",",
"questions",
"are",
"highly",
"likely",
"to",
"be",
"ambiguous",
"because",
"users",
"may",
"not",
"know",
"the",
"scope",
"of",
"relevant",
"topics",
"when",
"formulating",
"them",
".",
"therefore",
... |
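A side note on the format: the strings in the `sentences` column appear to be the corresponding `document` tokens re-joined with single spaces, with no detokenization. A short sketch reproducing the first sentence of the "Answering Ambiguous Questions" row above (token list copied verbatim from its `document` column):

```python
# Tokens copied from the row's `document` column.
tokens = ["in", "open", "-", "domain", "question", "answering", ",",
          "questions", "are", "highly", "likely", "to", "be", "ambiguous",
          "because", "users", "may", "not", "know", "the", "scope", "of",
          "relevant", "topics", "when", "formulating", "them", "."]

# Re-joining with single spaces reproduces the `sentences` string exactly.
sentence = " ".join(tokens)
print(sentence)
# in open - domain question answering , questions are highly likely to be
# ambiguous because users may not know the scope of relevant topics when
# formulating them .
```

Because punctuation stays space-separated ("answering ,", "them ."), recovering natural text would require a detokenizer; splitting sentences back into tokens, by contrast, is a plain `sentence.split(" ")`.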
ACL | Worse WER, but Better BLEU? Leveraging Word Embedding as Intermediate in Multitask End-to-End Speech Translation | Speech translation (ST) aims to learn transformations from speech in the source language to the text in the target language. Previous works show that multitask learning improves the ST performance, in which the recognition decoder generates the text of the source language, and the translation decoder obtains the final ... | e1ee651318f3c4be676d178ec0941e4a | 2,020 | [
"speech translation ( st ) aims to learn transformations from speech in the source language to the text in the target language .",
"previous works show that multitask learning improves the st performance , in which the recognition decoder generates the text of the source language , and the translation decoder obt... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "speech translation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"speech",
"translation"
],
"offsets": [
0,
1
]
}
],
"trigger"... | [
"speech",
"translation",
"(",
"st",
")",
"aims",
"to",
"learn",
"transformations",
"from",
"speech",
"in",
"the",
"source",
"language",
"to",
"the",
"text",
"in",
"the",
"target",
"language",
".",
"previous",
"works",
"show",
"that",
"multitask",
"learning",
... |
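Across these rows, each event argument carries parallel `tokens` and `offsets` fields, and the offsets index into the row's `document` token list (the "speech translation" argument above has offsets 0 and 1, matching the first two document tokens). A minimal consistency check, sketched in Python under that reading of the schema; the data is copied from the "Worse WER, but Better BLEU?" row:

```python
def offsets_consistent(event, doc_tokens):
    """Return True if every argument's `tokens` equal the `document`
    tokens selected by its `offsets` (schema as shown in the rows above)."""
    return all(
        [doc_tokens[i] for i in arg["offsets"]] == arg["tokens"]
        for arg in event["arguments"]
    )

# Data copied from the "Worse WER, but Better BLEU?" row (truncated here
# to the first few document tokens).
doc_tokens = ["speech", "translation", "(", "st", ")", "aims", "to", "learn"]
event = {
    "event_type": "ITT",
    "arguments": [
        {
            "text": "speech translation",
            "nugget_type": "TAK",
            "argument_type": "Target",
            "tokens": ["speech", "translation"],
            "offsets": [0, 1],
        }
    ],
}
print(offsets_consistent(event, doc_tokens))  # True
```

The same check applies unchanged to multi-argument events (e.g. the PRP and WKS rows), since it iterates over the whole `arguments` list.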
ACL | UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning | Recent parameter-efficient language model tuning (PELT) methods manage to match the performance of fine-tuning with much fewer trainable parameters and perform especially well when training data is limited. However, different PELT methods may perform rather differently on the same task, making it nontrivial to select t... | ee16007c0f87fd53bf1a3e3328105c07 | 2,022 | [
"recent parameter - efficient language model tuning ( pelt ) methods manage to match the performance of fine - tuning with much fewer trainable parameters and perform especially well when training data is limited .",
"however , different pelt methods may perform rather differently on the same task , making it non... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "parameter - efficient language model tuning",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"parameter",
"-",
"efficient",
"language",
"model",
"t... | [
"recent",
"parameter",
"-",
"efficient",
"language",
"model",
"tuning",
"(",
"pelt",
")",
"methods",
"manage",
"to",
"match",
"the",
"performance",
"of",
"fine",
"-",
"tuning",
"with",
"much",
"fewer",
"trainable",
"parameters",
"and",
"perform",
"especially",
... |
ACL | Interactive Construction of User-Centric Dictionary for Text Analytics | We propose a methodology to construct a term dictionary for text analytics through an interactive process between a human and a machine, which helps the creation of flexible dictionaries with precise granularity required in typical text analysis. This paper introduces the first formulation of interactive dictionary con... | 4e07b66c85604de476be954b696ae5eb | 2,020 | [
"we propose a methodology to construct a term dictionary for text analytics through an interactive process between a human and a machine , which helps the creation of flexible dictionaries with precise granularity required in typical text analysis .",
"this paper introduces the first formulation of interactive di... | [
{
"event_type": "MDS",
"arguments": [
{
"text": "interactive process between a human and a machine",
"nugget_type": "MOD",
"argument_type": "BaseComponent",
"tokens": [
"interactive",
"process",
"between",
"a",
"human",
... | [
"we",
"propose",
"a",
"methodology",
"to",
"construct",
"a",
"term",
"dictionary",
"for",
"text",
"analytics",
"through",
"an",
"interactive",
"process",
"between",
"a",
"human",
"and",
"a",
"machine",
",",
"which",
"helps",
"the",
"creation",
"of",
"flexible"... |
ACL | Softmax Bottleneck Makes Language Models Unable to Represent Multi-mode Word Distributions | Neural language models (LMs) such as GPT-2 estimate the probability distribution over the next word by a softmax over the vocabulary. The softmax layer produces the distribution based on the dot products of a single hidden state and the embeddings of words in the vocabulary. However, we discover that this single hidden... | 7493b2214cd1d1877fac57801d8de909 | 2,022 | [
"neural language models ( lms ) such as gpt - 2 estimate the probability distribution over the next word by a softmax over the vocabulary .",
"the softmax layer produces the distribution based on the dot products of a single hidden state and the embeddings of words in the vocabulary .",
"however , we discover t... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "single hidden state",
"nugget_type": "FEA",
"argument_type": "Concern",
"tokens": [
"single",
"hidden",
"state"
],
"offsets": [
57,
58,
59
... | [
"neural",
"language",
"models",
"(",
"lms",
")",
"such",
"as",
"gpt",
"-",
"2",
"estimate",
"the",
"probability",
"distribution",
"over",
"the",
"next",
"word",
"by",
"a",
"softmax",
"over",
"the",
"vocabulary",
".",
"the",
"softmax",
"layer",
"produces",
... |
ACL | Sequence Tagging with Contextual and Non-Contextual Subword Representations: A Multilingual Evaluation | Pretrained contextual and non-contextual subword embeddings have become available in over 250 languages, allowing massively multilingual NLP. However, while there is no dearth of pretrained embeddings, the distinct lack of systematic evaluations makes it difficult for practitioners to choose between them. In this work,... | 08e3ca6997eddf675236c4e9ac12e2d5 | 2,019 | [
"pretrained contextual and non - contextual subword embeddings have become available in over 250 languages , allowing massively multilingual nlp .",
"however , while there is no dearth of pretrained embeddings , the distinct lack of systematic evaluations makes it difficult for practitioners to choose between the... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "pretrained contextual and non - contextual subword embeddings",
"nugget_type": "MOD",
"argument_type": "Target",
"tokens": [
"pretrained",
"contextual",
"and",
"non",
"-",... | [
"pretrained",
"contextual",
"and",
"non",
"-",
"contextual",
"subword",
"embeddings",
"have",
"become",
"available",
"in",
"over",
"250",
"languages",
",",
"allowing",
"massively",
"multilingual",
"nlp",
".",
"however",
",",
"while",
"there",
"is",
"no",
"dearth... |
ACL | BERT-based Lexical Substitution | Previous studies on lexical substitution tend to obtain substitute candidates by finding the target word’s synonyms from lexical resources (e.g., WordNet) and then rank the candidates based on its contexts. These approaches have two limitations: (1) They are likely to overlook good substitute candidates that are not th... | 661c551abea38ac32188283773040e8a | 2,019 | [
"previous studies on lexical substitution tend to obtain substitute candidates by finding the target word ’ s synonyms from lexical resources ( e . g . , wordnet ) and then rank the candidates based on its contexts .",
"these approaches have two limitations : ( 1 ) they are likely to overlook good substitute cand... | [
{
"event_type": "RWS",
"arguments": [
{
"text": "target word ’ s synonyms",
"nugget_type": "FEA",
"argument_type": "TriedComponent",
"tokens": [
"target",
"word",
"’",
"s",
"synonyms"
],
"offsets": [
... | [
"previous",
"studies",
"on",
"lexical",
"substitution",
"tend",
"to",
"obtain",
"substitute",
"candidates",
"by",
"finding",
"the",
"target",
"word",
"’",
"s",
"synonyms",
"from",
"lexical",
"resources",
"(",
"e",
".",
"g",
".",
",",
"wordnet",
")",
"and",
... |
ACL | Cross-lingual Knowledge Graph Alignment via Graph Matching Neural Network | Previous cross-lingual knowledge graph (KG) alignment studies rely on entity embeddings derived only from monolingual KG structural information, which may fail at matching entities that have different facts in two KGs. In this paper, we introduce the topic entity graph, a local sub-graph of an entity, to represent enti... | 968e10defc7d4bd99a90aa3434eab763 | 2,019 | [
"previous cross - lingual knowledge graph ( kg ) alignment studies rely on entity embeddings derived only from monolingual kg structural information , which may fail at matching entities that have different facts in two kgs .",
"in this paper , we introduce the topic entity graph , a local sub - graph of an entit... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "knowledge graph",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"knowledge",
"graph"
],
"offsets": [
4,
5
]
}
],
"trigger": {
... | [
"previous",
"cross",
"-",
"lingual",
"knowledge",
"graph",
"(",
"kg",
")",
"alignment",
"studies",
"rely",
"on",
"entity",
"embeddings",
"derived",
"only",
"from",
"monolingual",
"kg",
"structural",
"information",
",",
"which",
"may",
"fail",
"at",
"matching",
... |
ACL | Revisiting the Negative Data of Distantly Supervised Relation Extraction | Distantly supervision automatically generates plenty of training samples for relation extraction. However, it also incurs two major problems: noisy labels and imbalanced training data. Previous works focus more on reducing wrongly labeled relations (false positives) while few explore the missing relations that are caus... | 7170bbfb109ba5f46a50ecff04a815f3 | 2,021 | [
"distantly supervision automatically generates plenty of training samples for relation extraction .",
"however , it also incurs two major problems : noisy labels and imbalanced training data .",
"previous works focus more on reducing wrongly labeled relations ( false positives ) while few explore the missing re... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "distantly supervision",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"distantly",
"supervision"
],
"offsets": [
0,
1
]
}
],
"tr... | [
"distantly",
"supervision",
"automatically",
"generates",
"plenty",
"of",
"training",
"samples",
"for",
"relation",
"extraction",
".",
"however",
",",
"it",
"also",
"incurs",
"two",
"major",
"problems",
":",
"noisy",
"labels",
"and",
"imbalanced",
"training",
"dat... |
ACL | Coherence boosting: When your pretrained language model is not paying enough attention | Long-range semantic coherence remains a challenge in automatic language generation and understanding. We demonstrate that large language models have insufficiently learned the effect of distant words on next-token prediction. We present coherence boosting, an inference procedure that increases a LM’s focus on a long co... | aacec03053d1b68c164a457e9bf1866d | 2,022 | [
"long - range semantic coherence remains a challenge in automatic language generation and understanding .",
"we demonstrate that large language models have insufficiently learned the effect of distant words on next - token prediction .",
"we present coherence boosting , an inference procedure that increases a l... | [
{
"event_type": "FIN",
"arguments": [
{
"text": "yields",
"nugget_type": "E-CMP",
"argument_type": "Content",
"tokens": [
"yields"
],
"offsets": [
99
]
}
],
"trigger": {
"text": "found",
"tokens": [
... | [
"long",
"-",
"range",
"semantic",
"coherence",
"remains",
"a",
"challenge",
"in",
"automatic",
"language",
"generation",
"and",
"understanding",
".",
"we",
"demonstrate",
"that",
"large",
"language",
"models",
"have",
"insufficiently",
"learned",
"the",
"effect",
... |
ACL | Generating Diverse Translations with Sentence Codes | Users of machine translation systems may desire to obtain multiple candidates translated in different ways. In this work, we attempt to obtain diverse translations by using sentence codes to condition the sentence generation. We describe two methods to extract the codes, either with or without the help of syntax inform... | 62010d532d2761a76d3d6e94837cf5dc | 2,019 | [
"users of machine translation systems may desire to obtain multiple candidates translated in different ways .",
"in this work , we attempt to obtain diverse translations by using sentence codes to condition the sentence generation .",
"we describe two methods to extract the codes , either with or without the he... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "machine translation systems",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"machine",
"translation",
"systems"
],
"offsets": [
2,
3,
... | [
"users",
"of",
"machine",
"translation",
"systems",
"may",
"desire",
"to",
"obtain",
"multiple",
"candidates",
"translated",
"in",
"different",
"ways",
".",
"in",
"this",
"work",
",",
"we",
"attempt",
"to",
"obtain",
"diverse",
"translations",
"by",
"using",
"... |
ACL | Machine Translation for Livonian: Catering to 20 Speakers | Livonian is one of the most endangered languages in Europe with just a tiny handful of speakers and virtually no publicly available corpora. In this paper we tackle the task of developing neural machine translation (NMT) between Livonian and English, with a two-fold aim: on one hand, preserving the language and on the ... | 9447f0b95b5a72c0472652612affa4f8 | 2,022 | [
"livonian is one of the most endangered languages in europe with just a tiny handful of speakers and virtually no publicly available corpora .",
"in this paper we tackle the task of developing neural machine translation ( nmt ) between livonian and english , with a two - fold aim : on one hand , preserving the la... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
27
]
},
{
"text": "neural machine translation",
"nugget_typ... | [
"livonian",
"is",
"one",
"of",
"the",
"most",
"endangered",
"languages",
"in",
"europe",
"with",
"just",
"a",
"tiny",
"handful",
"of",
"speakers",
"and",
"virtually",
"no",
"publicly",
"available",
"corpora",
".",
"in",
"this",
"paper",
"we",
"tackle",
"the"... |
ACL | On the probability–quality paradox in language generation | When generating natural language from neural probabilistic models, high probability does not always coincide with high quality: It has often been observed that mode-seeking decoding methods, i.e., those that produce high-probability text under the model, lead to unnatural language. On the other hand, the lower-probabil... | 07b858421dbcb7c61ed746ffc04feeb3 | 2,022 | [
"when generating natural language from neural probabilistic models , high probability does not always coincide with high quality : it has often been observed that mode - seeking decoding methods , i . e . , those that produce high - probability text under the model , lead to unnatural language .",
"on the other h... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "natural language",
"nugget_type": "FEA",
"argument_type": "Target",
"tokens": [
"natural",
"language"
],
"offsets": [
2,
3
]
}
],
"trigger": {
... | [
"when",
"generating",
"natural",
"language",
"from",
"neural",
"probabilistic",
"models",
",",
"high",
"probability",
"does",
"not",
"always",
"coincide",
"with",
"high",
"quality",
":",
"it",
"has",
"often",
"been",
"observed",
"that",
"mode",
"-",
"seeking",
... |
ACL | Achieving Conversational Goals with Unsupervised Post-hoc Knowledge Injection | A limitation of current neural dialog models is that they tend to suffer from a lack of specificity and informativeness in generated responses, primarily due to dependence on training data that covers a limited variety of scenarios and conveys limited knowledge. One way to alleviate this issue is to extract relevant kn... | 32eec6db1e6265e9868a02eab31204cd | 2,022 | [
"a limitation of current neural dialog models is that they tend to suffer from a lack of specificity and informativeness in generated responses , primarily due to dependence on training data that covers a limited variety of scenarios and conveys limited knowledge .",
"one way to alleviate this issue is to extract... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "current neural dialog models",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"current",
"neural",
"dialog",
"models"
],
"offsets": [
3,
... | [
"a",
"limitation",
"of",
"current",
"neural",
"dialog",
"models",
"is",
"that",
"they",
"tend",
"to",
"suffer",
"from",
"a",
"lack",
"of",
"specificity",
"and",
"informativeness",
"in",
"generated",
"responses",
",",
"primarily",
"due",
"to",
"dependence",
"on... |
ACL | Perceiving the World: Question-guided Reinforcement Learning for Text-based Games | Text-based games provide an interactive way to study natural language processing. While deep reinforcement learning has shown effectiveness in developing the game playing agent, the low sample efficiency and the large action space remain to be the two major challenges that hinder the DRL from being applied in the real ... | 694a3875bfa265adfd895fa2a6770a05 | 2,022 | [
"text - based games provide an interactive way to study natural language processing .",
"while deep reinforcement learning has shown effectiveness in developing the game playing agent , the low sample efficiency and the large action space remain to be the two major challenges that hinder the drl from being applie... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "natural language processing",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"natural",
"language",
"processing"
],
"offsets": [
10,
11,
... | [
"text",
"-",
"based",
"games",
"provide",
"an",
"interactive",
"way",
"to",
"study",
"natural",
"language",
"processing",
".",
"while",
"deep",
"reinforcement",
"learning",
"has",
"shown",
"effectiveness",
"in",
"developing",
"the",
"game",
"playing",
"agent",
"... |
ACL | Improving Personalized Explanation Generation through Visualization | In modern recommender systems, there are usually comments or reviews from users that justify their ratings for different items. Trained on such textual corpus, explainable recommendation models learn to discover user interests and generate personalized explanations. Though able to provide plausible explanations, existi... | 9a794a7792986c80ab4554c27cf9c75a | 2,022 | [
"in modern recommender systems , there are usually comments or reviews from users that justify their ratings for different items .",
"trained on such textual corpus , explainable recommendation models learn to discover user interests and generate personalized explanations .",
"though able to provide plausible e... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "modern recommender systems",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"modern",
"recommender",
"systems"
],
"offsets": [
1,
2,
... | [
"in",
"modern",
"recommender",
"systems",
",",
"there",
"are",
"usually",
"comments",
"or",
"reviews",
"from",
"users",
"that",
"justify",
"their",
"ratings",
"for",
"different",
"items",
".",
"trained",
"on",
"such",
"textual",
"corpus",
",",
"explainable",
"... |
ACL | Disentangled Knowledge Transfer for OOD Intent Discovery with Unified Contrastive Learning | Discovering Out-of-Domain(OOD) intents is essential for developing new skills in a task-oriented dialogue system. The key challenge is how to transfer prior IND knowledge to OOD clustering. Different from existing work based on shared intent representation, we propose a novel disentangled knowledge transfer method via ... | d3efde5b872d96e7a974d8d254a90bfd | 2,022 | [
"discovering out - of - domain ( ood ) intents is essential for developing new skills in a task - oriented dialogue system .",
"the key challenge is how to transfer prior ind knowledge to ood clustering .",
"different from existing work based on shared intent representation , we propose a novel disentangled kno... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "task - oriented dialogue system",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"task",
"-",
"oriented",
"dialogue",
"system"
],
"offsets": ... | [
"discovering",
"out",
"-",
"of",
"-",
"domain",
"(",
"ood",
")",
"intents",
"is",
"essential",
"for",
"developing",
"new",
"skills",
"in",
"a",
"task",
"-",
"oriented",
"dialogue",
"system",
".",
"the",
"key",
"challenge",
"is",
"how",
"to",
"transfer",
... |
ACL | An End-to-End Progressive Multi-Task Learning Framework for Medical Named Entity Recognition and Normalization | Medical named entity recognition (NER) and normalization (NEN) are fundamental for constructing knowledge graphs and building QA systems. Existing implementations for medical NER and NEN are suffered from the error propagation between the two tasks. The mispredicted mentions from NER will directly influence the results... | d6dafd942f375205eecbcf4f0c88a548 | 2,021 | [
"medical named entity recognition ( ner ) and normalization ( nen ) are fundamental for constructing knowledge graphs and building qa systems .",
"existing implementations for medical ner and nen are suffered from the error propagation between the two tasks .",
"the mispredicted mentions from ner will directly ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "knowledge graphs",
"nugget_type": "FEA",
"argument_type": "Target",
"tokens": [
"knowledge",
"graphs"
],
"offsets": [
16,
17
]
},
{
"text... | [
"medical",
"named",
"entity",
"recognition",
"(",
"ner",
")",
"and",
"normalization",
"(",
"nen",
")",
"are",
"fundamental",
"for",
"constructing",
"knowledge",
"graphs",
"and",
"building",
"qa",
"systems",
".",
"existing",
"implementations",
"for",
"medical",
"... |
ACL | Multi-TimeLine Summarization (MTLS): Improving Timeline Summarization by Generating Multiple Summaries | In this paper, we address a novel task, Multiple TimeLine Summarization (MTLS), which extends the flexibility and versatility of Time-Line Summarization (TLS). Given any collection of time-stamped news articles, MTLS automatically discovers important yet different stories and generates a corresponding time-line for eac... | b823e561a9b2cca1f9a0c5d9475725cb | 2,021 | [
"in this paper , we address a novel task , multiple timeline summarization ( mtls ) , which extends the flexibility and versatility of time - line summarization ( tls ) .",
"given any collection of time - stamped news articles , mtls automatically discovers important yet different stories and generates a correspo... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
4
]
},
{
"text": "multiple timeline summarization",
"nugget_t... | [
"in",
"this",
"paper",
",",
"we",
"address",
"a",
"novel",
"task",
",",
"multiple",
"timeline",
"summarization",
"(",
"mtls",
")",
",",
"which",
"extends",
"the",
"flexibility",
"and",
"versatility",
"of",
"time",
"-",
"line",
"summarization",
"(",
"tls",
... |
ACL | Modeling Language Usage and Listener Engagement in Podcasts | While there is an abundance of advice to podcast creators on how to speak in ways that engage their listeners, there has been little data-driven analysis of podcasts that relates linguistic style with engagement. In this paper, we investigate how various factors – vocabulary diversity, distinctiveness, emotion, and syn... | b676e0d5de9434d3438d51a2f2f66651 | 2,021 | [
"while there is an abundance of advice to podcast creators on how to speak in ways that engage their listeners , there has been little data - driven analysis of podcasts that relates linguistic style with engagement .",
"in this paper , we investigate how various factors – vocabulary diversity , distinctiveness ,... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "data - driven analysis",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"data",
"-",
"driven",
"analysis"
],
"offsets": [
25,
26,
... | [
"while",
"there",
"is",
"an",
"abundance",
"of",
"advice",
"to",
"podcast",
"creators",
"on",
"how",
"to",
"speak",
"in",
"ways",
"that",
"engage",
"their",
"listeners",
",",
"there",
"has",
"been",
"little",
"data",
"-",
"driven",
"analysis",
"of",
"podca... |
ACL | Multimodal fusion via cortical network inspired losses | Information integration from different modalities is an active area of research. Human beings and, in general, biological neural systems are quite adept at using a multitude of signals from different sensory perceptive fields to interact with the environment and each other. Recent work in deep fusion models via neural ... | 8ce84417ba93b2d245a1240ee8a52b58 | 2,022 | [
"information integration from different modalities is an active area of research .",
"human beings and , in general , biological neural systems are quite adept at using a multitude of signals from different sensory perceptive fields to interact with the environment and each other .",
"recent work in deep fusion... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "information integration",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"information",
"integration"
],
"offsets": [
0,
1
]
}
],
... | [
"information",
"integration",
"from",
"different",
"modalities",
"is",
"an",
"active",
"area",
"of",
"research",
".",
"human",
"beings",
"and",
",",
"in",
"general",
",",
"biological",
"neural",
"systems",
"are",
"quite",
"adept",
"at",
"using",
"a",
"multitud... |
ACL | DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization | Large-scale pre-trained sequence-to-sequence models like BART and T5 achieve state-of-the-art performance on many generative NLP tasks. However, such models pose a great challenge in resource-constrained scenarios owing to their large memory requirements and high latency. To alleviate this issue, we propose to jointly ... | e9961b80944be88e620d3ca2e95fcb68 | 2,022 | [
"large - scale pre - trained sequence - to - sequence models like bart and t5 achieve state - of - the - art performance on many generative nlp tasks .",
"however , such models pose a great challenge in resource - constrained scenarios owing to their large memory requirements and high latency .",
"to alleviate ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "generative nlp tasks",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"generative",
"nlp",
"tasks"
],
"offsets": [
27,
28,
29
... | [
"large",
"-",
"scale",
"pre",
"-",
"trained",
"sequence",
"-",
"to",
"-",
"sequence",
"models",
"like",
"bart",
"and",
"t5",
"achieve",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"performance",
"on",
"many",
"generative",
"nlp",
"tasks",
".",
"however",... |
ACL | Efficient Strategies for Hierarchical Text Classification: External Knowledge and Auxiliary Tasks | In hierarchical text classification, we perform a sequence of inference steps to predict the category of a document from top to bottom of a given class taxonomy. Most of the studies have focused on developing novels neural network architectures to deal with the hierarchical structure, but we prefer to look for efficien... | 565c084c5d63a66c2f2a0d9246af3e1d | 2,020 | [
"in hierarchical text classification , we perform a sequence of inference steps to predict the category of a document from top to bottom of a given class taxonomy .",
"most of the studies have focused on developing novels neural network architectures to deal with the hierarchical structure , but we prefer to look... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
5
]
},
{
"text": "a sequence of inference steps",
"nugget_t... | [
"in",
"hierarchical",
"text",
"classification",
",",
"we",
"perform",
"a",
"sequence",
"of",
"inference",
"steps",
"to",
"predict",
"the",
"category",
"of",
"a",
"document",
"from",
"top",
"to",
"bottom",
"of",
"a",
"given",
"class",
"taxonomy",
".",
"most",... |
ACL | Enhancing Cross-lingual Natural Language Inference by Prompt-learning from Cross-lingual Templates | Cross-lingual natural language inference (XNLI) is a fundamental task in cross-lingual natural language understanding. Recently this task is commonly addressed by pre-trained cross-lingual language models. Existing methods usually enhance pre-trained language models with additional data, such as annotated parallel corp... | c898a42ed1a3b905ee6577a479e57a00 | 2,022 | [
"cross - lingual natural language inference ( xnli ) is a fundamental task in cross - lingual natural language understanding .",
"recently this task is commonly addressed by pre - trained cross - lingual language models .",
"existing methods usually enhance pre - trained language models with additional data , s... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "cross - lingual natural language inference",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"cross",
"-",
"lingual",
"natural",
"language",
"infere... | [
"cross",
"-",
"lingual",
"natural",
"language",
"inference",
"(",
"xnli",
")",
"is",
"a",
"fundamental",
"task",
"in",
"cross",
"-",
"lingual",
"natural",
"language",
"understanding",
".",
"recently",
"this",
"task",
"is",
"commonly",
"addressed",
"by",
"pre",... |
ACL | DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations | Sentence embeddings are an important component of many natural language processing (NLP) systems. Like word embeddings, sentence embeddings are typically learned on large text corpora and then transferred to various downstream tasks, such as clustering and retrieval. Unlike word embeddings, the highest performing solut... | 5887bf3e85230ce169204f1f344fe066 | 2,021 | [
"sentence embeddings are an important component of many natural language processing ( nlp ) systems .",
"like word embeddings , sentence embeddings are typically learned on large text corpora and then transferred to various downstream tasks , such as clustering and retrieval .",
"unlike word embeddings , the hi... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "sentence embeddings",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"sentence",
"embeddings"
],
"offsets": [
0,
1
]
}
],
"trigge... | [
"sentence",
"embeddings",
"are",
"an",
"important",
"component",
"of",
"many",
"natural",
"language",
"processing",
"(",
"nlp",
")",
"systems",
".",
"like",
"word",
"embeddings",
",",
"sentence",
"embeddings",
"are",
"typically",
"learned",
"on",
"large",
"text"... |
ACL | EigenSent: Spectral sentence embeddings using higher-order Dynamic Mode Decomposition | Distributed representation of words, or word embeddings, have motivated methods for calculating semantic representations of word sequences such as phrases, sentences and paragraphs. Most of the existing methods to do so either use algorithms to learn such representations, or improve on calculating weighted averages of ... | 105ef87deb3268f9e32fe42cbd48e730 | 2,019 | [
"distributed representation of words , or word embeddings , have motivated methods for calculating semantic representations of word sequences such as phrases , sentences and paragraphs .",
"most of the existing methods to do so either use algorithms to learn such representations , or improve on calculating weight... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "methods for calculating semantic representations of word sequences",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"methods",
"for",
"calculating",
"semantic",
... | [
"distributed",
"representation",
"of",
"words",
",",
"or",
"word",
"embeddings",
",",
"have",
"motivated",
"methods",
"for",
"calculating",
"semantic",
"representations",
"of",
"word",
"sequences",
"such",
"as",
"phrases",
",",
"sentences",
"and",
"paragraphs",
".... |
ACL | Word Sense Disambiguation: Towards Interactive Context Exploitation from Both Word and Sense Perspectives | Lately proposed Word Sense Disambiguation (WSD) systems have approached the estimated upper bound of the task on standard evaluation benchmarks. However, these systems typically implement the disambiguation of words in a document almost independently, underutilizing sense and word dependency in context. In this paper, ... | cd187f151bb6eb30cc132553d16a2333 | 2,021 | [
"lately proposed word sense disambiguation ( wsd ) systems have approached the estimated upper bound of the task on standard evaluation benchmarks .",
"however , these systems typically implement the disambiguation of words in a document almost independently , underutilizing sense and word dependency in context .... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "word sense disambiguation systems",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"word",
"sense",
"disambiguation",
"systems"
],
"offsets": [
... | [
"lately",
"proposed",
"word",
"sense",
"disambiguation",
"(",
"wsd",
")",
"systems",
"have",
"approached",
"the",
"estimated",
"upper",
"bound",
"of",
"the",
"task",
"on",
"standard",
"evaluation",
"benchmarks",
".",
"however",
",",
"these",
"systems",
"typicall... |
ACL | Retrieve, Read, Rerank: Towards End-to-End Multi-Document Reading Comprehension | This paper considers the reading comprehension task in which multiple documents are given as input. Prior work has shown that a pipeline of retriever, reader, and reranker can improve the overall performance. However, the pipeline system is inefficient since the input is re-encoded within each module, and is unable to ... | 95417241bf1f22cb2c5d9bd230ce257a | 2,019 | [
"this paper considers the reading comprehension task in which multiple documents are given as input .",
"prior work has shown that a pipeline of retriever , reader , and reranker can improve the overall performance .",
"however , the pipeline system is inefficient since the input is re - encoded within each mod... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "inefficient",
"nugget_type": "WEA",
"argument_type": "Fault",
"tokens": [
"inefficient"
],
"offsets": [
42
]
}
],
"trigger": {
"text": "inefficient",
"... | [
"this",
"paper",
"considers",
"the",
"reading",
"comprehension",
"task",
"in",
"which",
"multiple",
"documents",
"are",
"given",
"as",
"input",
".",
"prior",
"work",
"has",
"shown",
"that",
"a",
"pipeline",
"of",
"retriever",
",",
"reader",
",",
"and",
"rera... |
ACL | Breaking Down the Invisible Wall of Informal Fallacies in Online Discussions | People debate on a variety of topics on online platforms such as Reddit, or Facebook. Debates can be lengthy, with users exchanging a wealth of information and opinions. However, conversations do not always go smoothly, and users sometimes engage in unsound argumentation techniques to prove a claim. These techniques ar... | b23a3365b832c5dbbbe09400e6d950be | 2,021 | [
"people debate on a variety of topics on online platforms such as reddit , or facebook .",
"debates can be lengthy , with users exchanging a wealth of information and opinions .",
"however , conversations do not always go smoothly , and users sometimes engage in unsound argumentation techniques to prove a claim... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "fallacies",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"fallacies"
],
"offsets": [
60
]
}
],
"trigger": {
"text": "provide",
"tokens"... | [
"people",
"debate",
"on",
"a",
"variety",
"of",
"topics",
"on",
"online",
"platforms",
"such",
"as",
"reddit",
",",
"or",
"facebook",
".",
"debates",
"can",
"be",
"lengthy",
",",
"with",
"users",
"exchanging",
"a",
"wealth",
"of",
"information",
"and",
"op... |
ACL | Exploring Pre-trained Language Models for Event Extraction and Generation | Traditional approaches to the task of ACE event extraction usually depend on manually annotated data, which is often laborious to create and limited in size. Therefore, in addition to the difficulty of event extraction itself, insufficient training data hinders the learning process as well. To promote event extraction,... | bbbe9b7445e034056e5e0ea058c12fde | 2,019 | [
"traditional approaches to the task of ace event extraction usually depend on manually annotated data , which is often laborious to create and limited in size .",
"therefore , in addition to the difficulty of event extraction itself , insufficient training data hinders the learning process as well .",
"to promo... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "ace event extraction",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"ace",
"event",
"extraction"
],
"offsets": [
6,
7,
8
... | [
"traditional",
"approaches",
"to",
"the",
"task",
"of",
"ace",
"event",
"extraction",
"usually",
"depend",
"on",
"manually",
"annotated",
"data",
",",
"which",
"is",
"often",
"laborious",
"to",
"create",
"and",
"limited",
"in",
"size",
".",
"therefore",
",",
... |
ACL | How to Ask Good Questions? Try to Leverage Paraphrases | Given a sentence and its relevant answer, how to ask good questions is a challenging task, which has many real applications. Inspired by human’s paraphrasing capability to ask questions of the same meaning but with diverse expressions, we propose to incorporate paraphrase knowledge into question generation(QG) to gener... | 7db18da7186c68fa65dbb752cacceed3 | 2,020 | [
"given a sentence and its relevant answer , how to ask good questions is a challenging task , which has many real applications .",
"inspired by human ’ s paraphrasing capability to ask questions of the same meaning but with diverse expressions , we propose to incorporate paraphrase knowledge into question generat... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "ask good questions",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"ask",
"good",
"questions"
],
"offsets": [
10,
11,
12
]... | [
"given",
"a",
"sentence",
"and",
"its",
"relevant",
"answer",
",",
"how",
"to",
"ask",
"good",
"questions",
"is",
"a",
"challenging",
"task",
",",
"which",
"has",
"many",
"real",
"applications",
".",
"inspired",
"by",
"human",
"’",
"s",
"paraphrasing",
"ca... |
ACL | Is Attention Explanation? An Introduction to the Debate | The performance of deep learning models in NLP and other fields of machine learning has led to a rise in their popularity, and so the need for explanations of these models becomes paramount. Attention has been seen as a solution to increase performance, while providing some explanations. However, a debate has started t... | e312aa0d5886965ba65e6d2e965d54d1 | 2,022 | [
"the performance of deep learning models in nlp and other fields of machine learning has led to a rise in their popularity , and so the need for explanations of these models becomes paramount .",
"attention has been seen as a solution to increase performance , while providing some explanations .",
"however , a ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "deep learning models",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"deep",
"learning",
"models"
],
"offsets": [
3,
4,
5
... | [
"the",
"performance",
"of",
"deep",
"learning",
"models",
"in",
"nlp",
"and",
"other",
"fields",
"of",
"machine",
"learning",
"has",
"led",
"to",
"a",
"rise",
"in",
"their",
"popularity",
",",
"and",
"so",
"the",
"need",
"for",
"explanations",
"of",
"these... |
ACL | Multidirectional Associative Optimization of Function-Specific Word Representations | We present a neural framework for learning associations between interrelated groups of words such as the ones found in Subject-Verb-Object (SVO) structures. Our model induces a joint function-specific word vector space, where vectors of e.g. plausible SVO compositions lie close together. The model retains information a... | bef6f5e8eba2a2793bbb1f0ab7eeff16 | 2,020 | [
"we present a neural framework for learning associations between interrelated groups of words such as the ones found in subject - verb - object ( svo ) structures .",
"our model induces a joint function - specific word vector space , where vectors of e . g . plausible svo compositions lie close together .",
"th... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "neural framework",
"nugget_type": "APP",
... | [
"we",
"present",
"a",
"neural",
"framework",
"for",
"learning",
"associations",
"between",
"interrelated",
"groups",
"of",
"words",
"such",
"as",
"the",
"ones",
"found",
"in",
"subject",
"-",
"verb",
"-",
"object",
"(",
"svo",
")",
"structures",
".",
"our",
... |
ACL | A Methodology for Creating Question Answering Corpora Using Inverse Data Annotation | In this paper, we introduce a novel methodology to efficiently construct a corpus for question answering over structured data. For this, we introduce an intermediate representation that is based on the logical query plan in a database, called Operation Trees (OT). This representation allows us to invert the annotation ... | 7197cfaba0561fecb795e591e1d2c8f1 | 2,020 | [
"in this paper , we introduce a novel methodology to efficiently construct a corpus for question answering over structured data .",
"for this , we introduce an intermediate representation that is based on the logical query plan in a database , called operation trees ( ot ) .",
"this representation allows us to ... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
4
]
},
{
"text": "novel methodology",
"nugget_type": "APP",
... | [
"in",
"this",
"paper",
",",
"we",
"introduce",
"a",
"novel",
"methodology",
"to",
"efficiently",
"construct",
"a",
"corpus",
"for",
"question",
"answering",
"over",
"structured",
"data",
".",
"for",
"this",
",",
"we",
"introduce",
"an",
"intermediate",
"repres... |
ACL | Why Overfitting Isn’t Always Bad: Retrofitting Cross-Lingual Word Embeddings to Dictionaries | Cross-lingual word embeddings (CLWE) are often evaluated on bilingual lexicon induction (BLI). Recent CLWE methods use linear projections, which underfit the training dictionary, to generalize on BLI. However, underfitting can hinder generalization to other downstream tasks that rely on words from the training dictiona... | 7a691f990b69bfb6341ed85ea88bb977 | 2,020 | [
"cross - lingual word embeddings ( clwe ) are often evaluated on bilingual lexicon induction ( bli ) .",
"recent clwe methods use linear projections , which underfit the training dictionary , to generalize on bli .",
"however , underfitting can hinder generalization to other downstream tasks that rely on words ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "cross - lingual word embeddings",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"cross",
"-",
"lingual",
"word",
"embeddings"
],
"offsets": ... | [
"cross",
"-",
"lingual",
"word",
"embeddings",
"(",
"clwe",
")",
"are",
"often",
"evaluated",
"on",
"bilingual",
"lexicon",
"induction",
"(",
"bli",
")",
".",
"recent",
"clwe",
"methods",
"use",
"linear",
"projections",
",",
"which",
"underfit",
"the",
"trai... |
ACL | Non-Linear Instance-Based Cross-Lingual Mapping for Non-Isomorphic Embedding Spaces | We present InstaMap, an instance-based method for learning projection-based cross-lingual word embeddings. Unlike prior work, it deviates from learning a single global linear projection. InstaMap is a non-parametric model that learns a non-linear projection by iteratively: (1) finding a globally optimal rotation of the... | c094daef338aab5c775f2d29880d35a6 | 2,020 | [
"we present instamap , an instance - based method for learning projection - based cross - lingual word embeddings .",
"unlike prior work , it deviates from learning a single global linear projection .",
"instamap is a non - parametric model that learns a non - linear projection by iteratively : ( 1 ) finding a ... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "instamap",
"nugget_type": "APP",
"a... | [
"we",
"present",
"instamap",
",",
"an",
"instance",
"-",
"based",
"method",
"for",
"learning",
"projection",
"-",
"based",
"cross",
"-",
"lingual",
"word",
"embeddings",
".",
"unlike",
"prior",
"work",
",",
"it",
"deviates",
"from",
"learning",
"a",
"single"... |
ACL | MultiQT: Multimodal learning for real-time question tracking in speech | We address a challenging and practical task of labeling questions in speech in real time during telephone calls to emergency medical services in English, which embeds within a broader decision support system for emergency call-takers. We propose a novel multimodal approach to real-time sequence labeling in speech. Our ... | b90aebf711d79a238ad3ed47a650b73d | 2,020 | [
"we address a challenging and practical task of labeling questions in speech in real time during telephone calls to emergency medical services in english , which embeds within a broader decision support system for emergency call - takers .",
"we propose a novel multimodal approach to real - time sequence labeling... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "challenging and practical task of labeling questi... | [
"we",
"address",
"a",
"challenging",
"and",
"practical",
"task",
"of",
"labeling",
"questions",
"in",
"speech",
"in",
"real",
"time",
"during",
"telephone",
"calls",
"to",
"emergency",
"medical",
"services",
"in",
"english",
",",
"which",
"embeds",
"within",
"... |
ACL | Grounding Conversations with Improvised Dialogues | Effective dialogue involves grounding, the process of establishing mutual knowledge that is essential for communication between people. Modern dialogue systems are not explicitly trained to build common ground, and therefore overlook this important aspect of communication. Improvisational theater (improv) intrinsically... | 82435e6af39f39a7b576f25ba51ac6fc | 2,020 | [
"effective dialogue involves grounding , the process of establishing mutual knowledge that is essential for communication between people .",
"modern dialogue systems are not explicitly trained to build common ground , and therefore overlook this important aspect of communication .",
"improvisational theater ( i... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "modern dialogue systems",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"modern",
"dialogue",
"systems"
],
"offsets": [
19,
20,
2... | [
"effective",
"dialogue",
"involves",
"grounding",
",",
"the",
"process",
"of",
"establishing",
"mutual",
"knowledge",
"that",
"is",
"essential",
"for",
"communication",
"between",
"people",
".",
"modern",
"dialogue",
"systems",
"are",
"not",
"explicitly",
"trained",... |
ACL | GAN-BERT: Generative Adversarial Learning for Robust Text Classification with a Bunch of Labeled Examples | Recent Transformer-based architectures, e.g., BERT, provide impressive results in many Natural Language Processing tasks. However, most of the adopted benchmarks are made of (sometimes hundreds of) thousands of examples. In many real scenarios, obtaining high- quality annotated data is expensive and time consuming; in ... | c68a15920b8ada517c7312eab81cf4f2 | 2,020 | [
"recent transformer - based architectures , e . g . , bert , provide impressive results in many natural language processing tasks .",
"however , most of the adopted benchmarks are made of ( sometimes hundreds of ) thousands of examples .",
"in many real scenarios , obtaining high - quality annotated data is exp... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "natural language processing tasks",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"natural",
"language",
"processing",
"tasks"
],
"offsets": [
... | [
"recent",
"transformer",
"-",
"based",
"architectures",
",",
"e",
".",
"g",
".",
",",
"bert",
",",
"provide",
"impressive",
"results",
"in",
"many",
"natural",
"language",
"processing",
"tasks",
".",
"however",
",",
"most",
"of",
"the",
"adopted",
"benchmark... |
ACL | Searching for Effective Neural Extractive Summarization: What Works and What’s Next | The recent years have seen remarkable success in the use of deep neural networks on text summarization. However, there is no clear understanding of why they perform so well, or how they might be improved. In this paper, we seek to better understand how neural extractive summarization systems could benefit from differen... | 18d004b2adac48d9f7d0c056a60d5ab3 | 2,019 | [
"the recent years have seen remarkable success in the use of deep neural networks on text summarization .",
"however , there is no clear understanding of why they perform so well , or how they might be improved .",
"in this paper , we seek to better understand how neural extractive summarization systems could b... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "deep neural networks",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"deep",
"neural",
"networks"
],
"offsets": [
11,
12,
13
... | [
"the",
"recent",
"years",
"have",
"seen",
"remarkable",
"success",
"in",
"the",
"use",
"of",
"deep",
"neural",
"networks",
"on",
"text",
"summarization",
".",
"however",
",",
"there",
"is",
"no",
"clear",
"understanding",
"of",
"why",
"they",
"perform",
"so"... |
ACL | Signal in Noise: Exploring Meaning Encoded in Random Character Sequences with Character-Aware Language Models | Natural language processing models learn word representations based on the distributional hypothesis, which asserts that word context (e.g., co-occurrence) correlates with meaning. We propose that n-grams composed of random character sequences, or garble, provide a novel context for studying word meaning both within an... | 752771bfee6c5a6a2ad2db3a0e6de832 | 2,022 | [
"natural language processing models learn word representations based on the distributional hypothesis , which asserts that word context ( e . g . , co - occurrence ) correlates with meaning .",
"we propose that n - grams composed of random character sequences , or garble , provide a novel context for studying wor... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "natural language processing models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"natural",
"language",
"processing",
"models"
],
"offsets": [
... | [
"natural",
"language",
"processing",
"models",
"learn",
"word",
"representations",
"based",
"on",
"the",
"distributional",
"hypothesis",
",",
"which",
"asserts",
"that",
"word",
"context",
"(",
"e",
".",
"g",
".",
",",
"co",
"-",
"occurrence",
")",
"correlates... |
ACL | KaFSP: Knowledge-Aware Fuzzy Semantic Parsing for Conversational Question Answering over a Large-Scale Knowledge Base | In this paper, we study two issues of semantic parsing approaches to conversational question answering over a large-scale knowledge base: (1) The actions defined in grammar are not sufficient to handle uncertain reasoning common in real-world scenarios. (2) Knowledge base information is not well exploited and incorpora... | d452ddc0b1a2953f6e308d53b9b3c6e0 | 2,022 | [
"in this paper , we study two issues of semantic parsing approaches to conversational question answering over a large - scale knowledge base : ( 1 ) the actions defined in grammar are not sufficient to handle uncertain reasoning common in real - world scenarios .",
"( 2 ) knowledge base information is not well ex... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
4
]
},
{
"text": "two issues of semantic parsing approaches to conv... | [
"in",
"this",
"paper",
",",
"we",
"study",
"two",
"issues",
"of",
"semantic",
"parsing",
"approaches",
"to",
"conversational",
"question",
"answering",
"over",
"a",
"large",
"-",
"scale",
"knowledge",
"base",
":",
"(",
"1",
")",
"the",
"actions",
"defined",
... |
ACL | Topic-Aware Evidence Reasoning and Stance-Aware Aggregation for Fact Verification | Fact verification is a challenging task that requires simultaneously reasoning and aggregating over multiple retrieved pieces of evidence to evaluate the truthfulness of a claim. Existing approaches typically (i) explore the semantic interaction between the claim and evidence at different granularity levels but fail to... | 3dc3b3af025a594ab23ca0d7d35bce7d | 2,021 | [
"fact verification is a challenging task that requires simultaneously reasoning and aggregating over multiple retrieved pieces of evidence to evaluate the truthfulness of a claim .",
"existing approaches typically ( i ) explore the semantic interaction between the claim and evidence at different granularity level... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "fact verification",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"fact",
"verification"
],
"offsets": [
0,
1
]
}
],
"trigger": ... | [
"fact",
"verification",
"is",
"a",
"challenging",
"task",
"that",
"requires",
"simultaneously",
"reasoning",
"and",
"aggregating",
"over",
"multiple",
"retrieved",
"pieces",
"of",
"evidence",
"to",
"evaluate",
"the",
"truthfulness",
"of",
"a",
"claim",
".",
"exist... |
ACL | Improving Event Representation via Simultaneous Weakly Supervised Contrastive Learning and Clustering | Representations of events described in text are important for various tasks. In this work, we present SWCC: a Simultaneous Weakly supervised Contrastive learning and Clustering framework for event representation learning. SWCC learns event representations by making better use of co-occurrence information of events. Spe... | 8a24191b82f34c6aa886d522cedc1fd9 | 2,022 | [
"representations of events described in text are important for various tasks .",
"in this work , we present swcc : a simultaneous weakly supervised contrastive learning and clustering framework for event representation learning .",
"swcc learns event representations by making better use of co - occurrence infor... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "representations of events",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"representations",
"of",
"events"
],
"offsets": [
0,
1,
... | [
"representations",
"of",
"events",
"described",
"in",
"text",
"are",
"important",
"for",
"various",
"tasks",
".",
"in",
"this",
"work",
",",
"we",
"present",
"swcc",
":",
"a",
"simultaneous",
"weakly",
"supervised",
"contrastive",
"learning",
"and",
"clustering"... |
ACL | Low-Dimensional Hyperbolic Knowledge Graph Embeddings | Knowledge graph (KG) embeddings learn low- dimensional representations of entities and relations to predict missing facts. KGs often exhibit hierarchical and logical patterns which must be preserved in the embedding space. For hierarchical data, hyperbolic embedding methods have shown promise for high-fidelity and pars... | 8a07d860794a96ad9a1c13ce9ac55ec1 | 2,020 | [
"knowledge graph ( kg ) embeddings learn low - dimensional representations of entities and relations to predict missing facts .",
"kgs often exhibit hierarchical and logical patterns which must be preserved in the embedding space .",
"for hierarchical data , hyperbolic embedding methods have shown promise for h... | [
{
"event_type": "RWS",
"arguments": [
{
"text": "knowledge graph",
"nugget_type": "APP",
"argument_type": "Subject",
"tokens": [
"knowledge",
"graph"
],
"offsets": [
0,
1
]
},
{
"text": ... | [
"knowledge",
"graph",
"(",
"kg",
")",
"embeddings",
"learn",
"low",
"-",
"dimensional",
"representations",
"of",
"entities",
"and",
"relations",
"to",
"predict",
"missing",
"facts",
".",
"kgs",
"often",
"exhibit",
"hierarchical",
"and",
"logical",
"patterns",
"w... |
ACL | Every Child Should Have Parents: A Taxonomy Refinement Algorithm Based on Hyperbolic Term Embeddings | We introduce the use of Poincaré embeddings to improve existing state-of-the-art approaches to domain-specific taxonomy induction from text as a signal for both relocating wrong hyponym terms within a (pre-induced) taxonomy as well as for attaching disconnected terms in a taxonomy. This method substantially improves pr... | 4352187ca73a5e51a1b63dfd2f956992 | 2,019 | [
"we introduce the use of poincare embeddings to improve existing state - of - the - art approaches to domain - specific taxonomy induction from text as a signal for both relocating wrong hyponym terms within a ( pre - induced ) taxonomy as well as for attaching disconnected terms in a taxonomy .",
"this method su... | [
{
"event_type": "PUR",
"arguments": [
{
"text": "existing state - of - the - art approaches",
"nugget_type": "APP",
"argument_type": "Aim",
"tokens": [
"existing",
"state",
"-",
"of",
"-",
"the",
"-",... | [
"we",
"introduce",
"the",
"use",
"of",
"poincare",
"embeddings",
"to",
"improve",
"existing",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"approaches",
"to",
"domain",
"-",
"specific",
"taxonomy",
"induction",
"from",
"text",
"as",
"a",
"signal",
"for",
"b... |
ACL | Document Modeling with Graph Attention Networks for Multi-grained Machine Reading Comprehension | Natural Questions is a new challenging machine reading comprehension benchmark with two-grained answers, which are a long answer (typically a paragraph) and a short answer (one or more entities inside the long answer). Despite the effectiveness of existing methods on this benchmark, they treat these two sub-tasks indiv... | 71a9aa6fcfda214073d50d53d910b9f9 | 2,020 | [
"natural questions is a new challenging machine reading comprehension benchmark with two - grained answers , which are a long answer ( typically a paragraph ) and a short answer ( one or more entities inside the long answer ) .",
"despite the effectiveness of existing methods on this benchmark , they treat these ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "natural questions",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"natural",
"questions"
],
"offsets": [
0,
1
]
}
],
"trigger": ... | [
"natural",
"questions",
"is",
"a",
"new",
"challenging",
"machine",
"reading",
"comprehension",
"benchmark",
"with",
"two",
"-",
"grained",
"answers",
",",
"which",
"are",
"a",
"long",
"answer",
"(",
"typically",
"a",
"paragraph",
")",
"and",
"a",
"short",
"... |
ACL | Unsupervised Dependency Graph Network | Recent work has identified properties of pretrained self-attention models that mirror those of dependency parse structures. In particular, some self-attention heads correspond well to individual dependency types. Inspired by these developments, we propose a new competitive mechanism that encourages these attention head... | 9e82e4e86a7f94b1bfe00a1cacd56e84 | 2,022 | [
"recent work has identified properties of pretrained self - attention models that mirror those of dependency parse structures .",
"in particular , some self - attention heads correspond well to individual dependency types .",
"inspired by these developments , we propose a new competitive mechanism that encourag... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
39
]
},
{
"text": "competitive mechanism",
"nugget_type": "AP... | [
"recent",
"work",
"has",
"identified",
"properties",
"of",
"pretrained",
"self",
"-",
"attention",
"models",
"that",
"mirror",
"those",
"of",
"dependency",
"parse",
"structures",
".",
"in",
"particular",
",",
"some",
"self",
"-",
"attention",
"heads",
"correspon... |
ACL | DRTS Parsing with Structure-Aware Encoding and Decoding | Discourse representation tree structure (DRTS) parsing is a novel semantic parsing task which has been concerned most recently. State-of-the-art performance can be achieved by a neural sequence-to-sequence model, treating the tree construction as an incremental sequence generation problem. Structural information such a... | e56e635a018ccee05f69b0d8e9922fcd | 2,020 | [
"discourse representation tree structure ( drts ) parsing is a novel semantic parsing task which has been concerned most recently .",
"state - of - the - art performance can be achieved by a neural sequence - to - sequence model , treating the tree construction as an incremental sequence generation problem .",
... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "discourse representation tree structure parsing",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"discourse",
"representation",
"tree",
"structure",
"parsing... | [
"discourse",
"representation",
"tree",
"structure",
"(",
"drts",
")",
"parsing",
"is",
"a",
"novel",
"semantic",
"parsing",
"task",
"which",
"has",
"been",
"concerned",
"most",
"recently",
".",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"performance",
"can"... |
ACL | ABC: Attention with Bounded-memory Control | Transformer architectures have achieved state- of-the-art results on a variety of natural language processing (NLP) tasks. However, their attention mechanism comes with a quadratic complexity in sequence lengths, making the computational overhead prohibitive, especially for long sequences. Attention context can be seen... | e054fc43fa0a3bf339ac99dcb8ed4325 | 2,022 | [
"transformer architectures have achieved state - of - the - art results on a variety of natural language processing ( nlp ) tasks .",
"however , their attention mechanism comes with a quadratic complexity in sequence lengths , making the computational overhead prohibitive , especially for long sequences .",
"at... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "natural language processing",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"natural",
"language",
"processing"
],
"offsets": [
16,
17,
... | [
"transformer",
"architectures",
"have",
"achieved",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"results",
"on",
"a",
"variety",
"of",
"natural",
"language",
"processing",
"(",
"nlp",
")",
"tasks",
".",
"however",
",",
"their",
"attention",
"mechanism",
"com... |
ACL | Unsupervised Cross-lingual Representation Learning at Scale | This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-... | 9ecf00ff401d2f5c00b0675c267e4bee | 2,020 | [
"this paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross - lingual transfer tasks .",
"we train a transformer - based masked language model on one hundred languages , using more than two terabytes of filtered commoncrawl data .",
"... | [
{
"event_type": "FIN",
"arguments": [
{
"text": "leads",
"nugget_type": "E-FAC",
"argument_type": "Content",
"tokens": [
"leads"
],
"offsets": [
10
]
}
],
"trigger": {
"text": "shows",
"tokens": [
... | [
"this",
"paper",
"shows",
"that",
"pretraining",
"multilingual",
"language",
"models",
"at",
"scale",
"leads",
"to",
"significant",
"performance",
"gains",
"for",
"a",
"wide",
"range",
"of",
"cross",
"-",
"lingual",
"transfer",
"tasks",
".",
"we",
"train",
"a"... |
ACL | When classifying grammatical role, BERT doesn’t care about word order... except when it matters | Because meaning can often be inferred from lexical semantics alone, word order is often a redundant cue in natural language. For example, the words chopped, chef, and onion are more likely used to convey “The chef chopped the onion,” not “The onion chopped the chef.” Recent work has shown large language models to be su... | a90baae9fafa679a2228d31b99c8b09e | 2,022 | [
"because meaning can often be inferred from lexical semantics alone , word order is often a redundant cue in natural language .",
"for example , the words chopped , chef , and onion are more likely used to convey “ the chef chopped the onion , ” not “ the onion chopped the chef . ”",
"recent work has shown larg... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "large language models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"large",
"language",
"models"
],
"offsets": [
60,
61,
62
... | [
"because",
"meaning",
"can",
"often",
"be",
"inferred",
"from",
"lexical",
"semantics",
"alone",
",",
"word",
"order",
"is",
"often",
"a",
"redundant",
"cue",
"in",
"natural",
"language",
".",
"for",
"example",
",",
"the",
"words",
"chopped",
",",
"chef",
... |
ACL | Multimodal Dialogue Response Generation | Responsing with image has been recognized as an important capability for an intelligent conversational agent. Yet existing works only focus on exploring the multimodal dialogue models which depend on retrieval-based methods, but neglecting generation methods. To fill in the gaps, we first present a new task: multimodal... | fae70352d28668fd4d9be304be085e6f | 2,022 | [
"responsing with image has been recognized as an important capability for an intelligent conversational agent .",
"yet existing works only focus on exploring the multimodal dialogue models which depend on retrieval - based methods , but neglecting generation methods .",
"to fill in the gaps , we first present a... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "intelligent conversational agent",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"intelligent",
"conversational",
"agent"
],
"offsets": [
12,
... | [
"responsing",
"with",
"image",
"has",
"been",
"recognized",
"as",
"an",
"important",
"capability",
"for",
"an",
"intelligent",
"conversational",
"agent",
".",
"yet",
"existing",
"works",
"only",
"focus",
"on",
"exploring",
"the",
"multimodal",
"dialogue",
"models"... |
ACL | Multi Task Learning For Zero Shot Performance Prediction of Multilingual Models | Massively Multilingual Transformer based Language Models have been observed to be surprisingly effective on zero-shot transfer across languages, though the performance varies from language to language depending on the pivot language(s) used for fine-tuning. In this work, we build upon some of the existing techniques fo... | 3be45e72a470005131930e21e843806b | 2,022 | [
"massively multilingual transformer based language models have been observed to be surprisingly effective on zero - shot transfer across languages , though the performance varies from language to language depending on the pivot language ( s ) used for fine - tuning .",
"in this work , we build upon some of the ex... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "multilingual transformer based language models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"multilingual",
"transformer",
"based",
"language",
"models"
... | [
"massively",
"multilingual",
"transformer",
"based",
"language",
"models",
"have",
"been",
"observed",
"to",
"be",
"surprisingly",
"effective",
"on",
"zero",
"-",
"shot",
"transfer",
"across",
"languages",
",",
"though",
"the",
"performance",
"varies",
"from",
"la... |
ACL | Learning Interpretable Relationships between Entities, Relations and Concepts via Bayesian Structure Learning on Open Domain Facts | Concept graphs are created as universal taxonomies for text understanding in the open-domain knowledge. The nodes in concept graphs include both entities and concepts. The edges are from entities to concepts, showing that an entity is an instance of a concept. In this paper, we propose the task of learning interpretabl... | afe3b74bc8299695856bbee0e97e22bf | 2,020 | [
"concept graphs are created as universal taxonomies for text understanding in the open - domain knowledge .",
"the nodes in concept graphs include both entities and concepts .",
"the edges are from entities to concepts , showing that an entity is an instance of a concept .",
"in this paper , we propose the ta... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "text understanding",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"text",
"understanding"
],
"offsets": [
8,
9
]
}
],
"trigger"... | [
"concept",
"graphs",
"are",
"created",
"as",
"universal",
"taxonomies",
"for",
"text",
"understanding",
"in",
"the",
"open",
"-",
"domain",
"knowledge",
".",
"the",
"nodes",
"in",
"concept",
"graphs",
"include",
"both",
"entities",
"and",
"concepts",
".",
"the... |
ACL | A Prism Module for Semantic Disentanglement in Name Entity Recognition | Natural Language Processing has been perplexed for many years by the problem that multiple semantics are mixed inside a word, even with the help of context. To solve this problem, we propose a prism module to disentangle the semantic aspects of words and reduce noise at the input layer of a model. In the prism module, ... | e09aec06f2478144450e48172f52bce8 | 2,019 | [
"natural language processing has been perplexed for many years by the problem that multiple semantics are mixed inside a word , even with the help of context .",
"to solve this problem , we propose a prism module to disentangle the semantic aspects of words and reduce noise at the input layer of a model .",
"in... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "natural language processing",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"natural",
"language",
"processing"
],
"offsets": [
0,
1,
... | [
"natural",
"language",
"processing",
"has",
"been",
"perplexed",
"for",
"many",
"years",
"by",
"the",
"problem",
"that",
"multiple",
"semantics",
"are",
"mixed",
"inside",
"a",
"word",
",",
"even",
"with",
"the",
"help",
"of",
"context",
".",
"to",
"solve",
... |
ACL | Subgraph Retrieval Enhanced Model for Multi-hop Knowledge Base Question Answering | Recent works on knowledge base question answering (KBQA) retrieve subgraphs for easier reasoning. The desired subgraph is crucial as a small one may exclude the answer but a large one might introduce more noises. However, the existing retrieval is either heuristic or interwoven with the reasoning, causing reasoning on ... | 6ebec541c30b1b2f511e14e368e690ae | 2,022 | [
"recent works on knowledge base question answering ( kbqa ) retrieve subgraphs for easier reasoning .",
"the desired subgraph is crucial as a small one may exclude the answer but a large one might introduce more noises .",
"however , the existing retrieval is either heuristic or interwoven with the reasoning , ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "knowledge base question answering",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"knowledge",
"base",
"question",
"answering"
],
"offsets": [
... | [
"recent",
"works",
"on",
"knowledge",
"base",
"question",
"answering",
"(",
"kbqa",
")",
"retrieve",
"subgraphs",
"for",
"easier",
"reasoning",
".",
"the",
"desired",
"subgraph",
"is",
"crucial",
"as",
"a",
"small",
"one",
"may",
"exclude",
"the",
"answer",
... |
ACL | Contrastive Learning for Many-to-many Multilingual Neural Machine Translation | Existing multilingual machine translation approaches mainly focus on English-centric directions, while the non-English directions still lag behind. In this work, we aim to build a many-to-many translation system with an emphasis on the quality of non-English language directions. Our intuition is based on the hypothesis... | 37c00f37800ce468c6d0de8c6aa5f85e | 2,021 | [
"existing multilingual machine translation approaches mainly focus on english - centric directions , while the non - english directions still lag behind .",
"in this work , we aim to build a many - to - many translation system with an emphasis on the quality of non - english language directions .",
"our intuiti... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "multilingual machine translation",
"nugget_type": "APP",
"argument_type": "Content",
"tokens": [
"multilingual",
"machine",
"translation"
],
"offsets": [
1,
... | [
"existing",
"multilingual",
"machine",
"translation",
"approaches",
"mainly",
"focus",
"on",
"english",
"-",
"centric",
"directions",
",",
"while",
"the",
"non",
"-",
"english",
"directions",
"still",
"lag",
"behind",
".",
"in",
"this",
"work",
",",
"we",
"aim... |
ACL | Cross-Sentence Grammatical Error Correction | Automatic grammatical error correction (GEC) research has made remarkable progress in the past decade. However, all existing approaches to GEC correct errors by considering a single sentence alone and ignoring crucial cross-sentence context. Some errors can only be corrected reliably using cross-sentence context and mo... | 12234b2048c75e29bba8e62cf710640f | 2,019 | [
"automatic grammatical error correction ( gec ) research has made remarkable progress in the past decade .",
"however , all existing approaches to gec correct errors by considering a single sentence alone and ignoring crucial cross - sentence context .",
"some errors can only be corrected reliably using cross -... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "automatic grammatical error correction",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"automatic",
"grammatical",
"error",
"correction"
],
"offsets":... | [
"automatic",
"grammatical",
"error",
"correction",
"(",
"gec",
")",
"research",
"has",
"made",
"remarkable",
"progress",
"in",
"the",
"past",
"decade",
".",
"however",
",",
"all",
"existing",
"approaches",
"to",
"gec",
"correct",
"errors",
"by",
"considering",
... |
ACL | Learning from Sibling Mentions with Scalable Graph Inference in Fine-Grained Entity Typing | In this paper, we firstly empirically find that existing models struggle to handle hard mentions due to their insufficient contexts, which consequently limits their overall typing performance. To this end, we propose to exploit sibling mentions for enhancing the mention representations.Specifically, we present two diff... | c5705e58b436bdd4037da5f4b7b879f4 | 2,022 | [
"in this paper , we firstly empirically find that existing models struggle to handle hard mentions due to their insufficient contexts , which consequently limits their overall typing performance .",
"to this end , we propose to exploit sibling mentions for enhancing the mention representations .",
"specifically... | [
{
"event_type": "FIN",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Finder",
"tokens": [
"we"
],
"offsets": [
4
]
},
{
"text": "struggle",
"nugget_type": "E-FAC",
"a... | [
"in",
"this",
"paper",
",",
"we",
"firstly",
"empirically",
"find",
"that",
"existing",
"models",
"struggle",
"to",
"handle",
"hard",
"mentions",
"due",
"to",
"their",
"insufficient",
"contexts",
",",
"which",
"consequently",
"limits",
"their",
"overall",
"typin... |
ACL | ExCAR: Event Graph Knowledge Enhanced Explainable Causal Reasoning | Prior work infers the causation between events mainly based on the knowledge induced from the annotated causal event pairs. However, additional evidence information intermediate to the cause and effect remains unexploited. By incorporating such information, the logical law behind the causality can be unveiled, and the ... | 9b82ff382bc70e022ea11872b04e366c | 2,021 | [
"prior work infers the causation between events mainly based on the knowledge induced from the annotated causal event pairs .",
"however , additional evidence information intermediate to the cause and effect remains unexploited .",
"by incorporating such information , the logical law behind the causality can be... | [
{
"event_type": "RWS",
"arguments": [
{
"text": "annotated causal event pairs",
"nugget_type": "FEA",
"argument_type": "BaseComponent",
"tokens": [
"annotated",
"causal",
"event",
"pairs"
],
"offsets": [
... | [
"prior",
"work",
"infers",
"the",
"causation",
"between",
"events",
"mainly",
"based",
"on",
"the",
"knowledge",
"induced",
"from",
"the",
"annotated",
"causal",
"event",
"pairs",
".",
"however",
",",
"additional",
"evidence",
"information",
"intermediate",
"to",
... |
ACL | ReCoSa: Detecting the Relevant Contexts with Self-Attention for Multi-turn Dialogue Generation | In multi-turn dialogue generation, response is usually related with only a few contexts. Therefore, an ideal model should be able to detect these relevant contexts and produce a suitable response accordingly. However, the widely used hierarchical recurrent encoder-decoder models just treat all the contexts indiscrimina... | bda5a379e31957a757b91a871a6d7407 | 2,019 | [
"in multi - turn dialogue generation , response is usually related with only a few contexts .",
"therefore , an ideal model should be able to detect these relevant contexts and produce a suitable response accordingly .",
"however , the widely used hierarchical recurrent encoder - decoder models just treat all t... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "hierarchical recurrent encoder - decoder models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"hierarchical",
"recurrent",
"encoder",
"-",
"decoder",
... | [
"in",
"multi",
"-",
"turn",
"dialogue",
"generation",
",",
"response",
"is",
"usually",
"related",
"with",
"only",
"a",
"few",
"contexts",
".",
"therefore",
",",
"an",
"ideal",
"model",
"should",
"be",
"able",
"to",
"detect",
"these",
"relevant",
"contexts",... |
ACL | In Factuality: Efficient Integration of Relevant Facts for Visual Question Answering | Visual Question Answering (VQA) methods aim at leveraging visual input to answer questions that may require complex reasoning over entities. Current models are trained on labelled data that may be insufficient to learn complex knowledge representations. In this paper, we propose a new method to enhance the reasoning ca... | 91b8a0929d6aec4b1c17e9bfbd0c43d7 | 2,021 | [
"visual question answering ( vqa ) methods aim at leveraging visual input to answer questions that may require complex reasoning over entities .",
"current models are trained on labelled data that may be insufficient to learn complex knowledge representations .",
"in this paper , we propose a new method to enha... | [
{
"event_type": "ITT",
"arguments": [],
"trigger": {
"text": "require",
"tokens": [
"require"
],
"offsets": [
17
]
}
},
{
"event_type": "RWF",
"arguments": [
{
"text": "insufficient",
"nugget_type": "WEA",
"argum... | [
"visual",
"question",
"answering",
"(",
"vqa",
")",
"methods",
"aim",
"at",
"leveraging",
"visual",
"input",
"to",
"answer",
"questions",
"that",
"may",
"require",
"complex",
"reasoning",
"over",
"entities",
".",
"current",
"models",
"are",
"trained",
"on",
"l... |
ACL | Multi-agent Communication meets Natural Language: Synergies between Functional and Structural Language Learning | We present a method for combining multi-agent communication and traditional data-driven approaches to natural language learning, with an end goal of teaching agents to communicate with humans in natural language. Our starting point is a language model that has been trained on generic, not task-specific language data. W... | 3e9d7993f7b887420db101af4a456743 | 2,020 | [
"we present a method for combining multi - agent communication and traditional data - driven approaches to natural language learning , with an end goal of teaching agents to communicate with humans in natural language .",
"our starting point is a language model that has been trained on generic , not task - specif... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "method",
"nugget_type": "APP",
"arg... | [
"we",
"present",
"a",
"method",
"for",
"combining",
"multi",
"-",
"agent",
"communication",
"and",
"traditional",
"data",
"-",
"driven",
"approaches",
"to",
"natural",
"language",
"learning",
",",
"with",
"an",
"end",
"goal",
"of",
"teaching",
"agents",
"to",
... |
ACL | Replicating and Extending “Because Their Treebanks Leak”: Graph Isomorphism, Covariants, and Parser Performance | Søgaard (2020) obtained results suggesting the fraction of trees occurring in the test data isomorphic to trees in the training set accounts for a non-trivial variation in parser performance. Similar to other statistical analyses in NLP, the results were based on evaluating linear regressions. However, the study had me... | 6b5a6384d480effdc9126c86dff05768 | 2,021 | [
"søgaard ( 2020 ) obtained results suggesting the fraction of trees occurring in the test data isomorphic to trees in the training set accounts for a non - trivial variation in parser performance .",
"similar to other statistical analyses in nlp , the results were based on evaluating linear regressions .",
"how... | [
{
"event_type": "FIN",
"arguments": [
{
"text": "accounts",
"nugget_type": "E-FAC",
"argument_type": "Content",
"tokens": [
"accounts"
],
"offsets": [
23
]
}
],
"trigger": {
"text": "obtained",
"token... | [
"søgaard",
"(",
"2020",
")",
"obtained",
"results",
"suggesting",
"the",
"fraction",
"of",
"trees",
"occurring",
"in",
"the",
"test",
"data",
"isomorphic",
"to",
"trees",
"in",
"the",
"training",
"set",
"accounts",
"for",
"a",
"non",
"-",
"trivial",
"variati... |
ACL | Debiased Contrastive Learning of Unsupervised Sentence Representations | Recently, contrastive learning has been shown to be effective in improving pre-trained language models (PLM) to derive high-quality sentence representations. It aims to pull close positive examples to enhance the alignment while push apart irrelevant negatives for the uniformity of the whole representation space.Howeve... | ade24b3ce140a3aa41c34610951f74b5 | 2,022 | [
"recently , contrastive learning has been shown to be effective in improving pre - trained language models ( plm ) to derive high - quality sentence representations .",
"it aims to pull close positive examples to enhance the alignment while push apart irrelevant negatives for the uniformity of the whole represent... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "pre - trained language models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"pre",
"-",
"trained",
"language",
"models"
],
"offsets": [
... | [
"recently",
",",
"contrastive",
"learning",
"has",
"been",
"shown",
"to",
"be",
"effective",
"in",
"improving",
"pre",
"-",
"trained",
"language",
"models",
"(",
"plm",
")",
"to",
"derive",
"high",
"-",
"quality",
"sentence",
"representations",
".",
"it",
"a... |
ACL | When do Word Embeddings Accurately Reflect Surveys on our Beliefs About People? | Social biases are encoded in word embeddings. This presents a unique opportunity to study society historically and at scale, and a unique danger when embeddings are used in downstream applications. Here, we investigate the extent to which publicly-available word embeddings accurately reflect beliefs about certain kinds... | 66c7a54a294cad213271bdb177cac6a3 | 2,020 | [
"social biases are encoded in word embeddings .",
"this presents a unique opportunity to study society historically and at scale , and a unique danger when embeddings are used in downstream applications .",
"here , we investigate the extent to which publicly - available word embeddings accurately reflect belief... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "social biases",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"social",
"biases"
],
"offsets": [
0,
1
]
}
],
"trigger": {
... | [
"social",
"biases",
"are",
"encoded",
"in",
"word",
"embeddings",
".",
"this",
"presents",
"a",
"unique",
"opportunity",
"to",
"study",
"society",
"historically",
"and",
"at",
"scale",
",",
"and",
"a",
"unique",
"danger",
"when",
"embeddings",
"are",
"used",
... |
ACL | Dataset Geography: Mapping Language Data to Language Users | As language technologies become more ubiquitous, there are increasing efforts towards expanding the language diversity and coverage of natural language processing (NLP) systems. Arguably, the most important factor influencing the quality of modern NLP systems is data availability. In this work, we study the geographica... | 29e2418e07495cb77ee0b6a4b2ec70a0 | 2,022 | [
"as language technologies become more ubiquitous , there are increasing efforts towards expanding the language diversity and coverage of natural language processing ( nlp ) systems .",
"arguably , the most important factor influencing the quality of modern nlp systems is data availability .",
"in this work , we... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "natural language processing systems",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"natural",
"language",
"processing",
"systems"
],
"offsets": [
... | [
"as",
"language",
"technologies",
"become",
"more",
"ubiquitous",
",",
"there",
"are",
"increasing",
"efforts",
"towards",
"expanding",
"the",
"language",
"diversity",
"and",
"coverage",
"of",
"natural",
"language",
"processing",
"(",
"nlp",
")",
"systems",
".",
... |
ACL | A Resource-Free Evaluation Metric for Cross-Lingual Word Embeddings Based on Graph Modularity | Cross-lingual word embeddings encode the meaning of words from different languages into a shared low-dimensional space. An important requirement for many downstream tasks is that word similarity should be independent of language—i.e., word vectors within one language should not be more similar to each other than to wor... | bdd0fda25418003713bbc5bd5a464b6f | 2,019 | [
"cross - lingual word embeddings encode the meaning of words from different languages into a shared low - dimensional space .",
"an important requirement for many downstream tasks is that word similarity should be independent of language — i . e . , word vectors within one language should not be more similar to e... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "cross - lingual word embeddings",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"cross",
"-",
"lingual",
"word",
"embeddings"
],
"offsets": ... | [
"cross",
"-",
"lingual",
"word",
"embeddings",
"encode",
"the",
"meaning",
"of",
"words",
"from",
"different",
"languages",
"into",
"a",
"shared",
"low",
"-",
"dimensional",
"space",
".",
"an",
"important",
"requirement",
"for",
"many",
"downstream",
"tasks",
... |
ACL | Automatic Identification and Classification of Bragging in Social Media | Bragging is a speech act employed with the goal of constructing a favorable self-image through positive statements about oneself. It is widespread in daily communication and especially popular in social media, where users aim to build a positive image of their persona directly or indirectly. In this paper, we present t... | 36a8e6a1d360891f8d983c1c926c4b9e | 2,022 | [
"bragging is a speech act employed with the goal of constructing a favorable self - image through positive statements about oneself .",
"it is widespread in daily communication and especially popular in social media , where users aim to build a positive image of their persona directly or indirectly .",
"in this... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "bragging",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"bragging"
],
"offsets": [
0
]
}
],
"trigger": {
"text": "speech act",
"tokens"... | [
"bragging",
"is",
"a",
"speech",
"act",
"employed",
"with",
"the",
"goal",
"of",
"constructing",
"a",
"favorable",
"self",
"-",
"image",
"through",
"positive",
"statements",
"about",
"oneself",
".",
"it",
"is",
"widespread",
"in",
"daily",
"communication",
"an... |
ACL | Composing Elementary Discourse Units in Abstractive Summarization | In this paper, we argue that elementary discourse unit (EDU) is a more appropriate textual unit of content selection than the sentence unit in abstractive summarization. To well handle the problem of composing EDUs into an informative and fluent summary, we propose a novel summarization method that first designs an EDU... | fb9720b899e8560095e69d4a826b34ce | 2,020 | [
"in this paper , we argue that elementary discourse unit ( edu ) is a more appropriate textual unit of content selection than the sentence unit in abstractive summarization .",
"to well handle the problem of composing edus into an informative and fluent summary , we propose a novel summarization method that first... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "elementary discourse unit",
"nugget_type": "FEA",
"argument_type": "Target",
"tokens": [
"elementary",
"discourse",
"unit"
],
"offsets": [
7,
8,
... | [
"in",
"this",
"paper",
",",
"we",
"argue",
"that",
"elementary",
"discourse",
"unit",
"(",
"edu",
")",
"is",
"a",
"more",
"appropriate",
"textual",
"unit",
"of",
"content",
"selection",
"than",
"the",
"sentence",
"unit",
"in",
"abstractive",
"summarization",
... |
ACL | Cluster & Tune: Boost Cold Start Performance in Text Classification | In real-world scenarios, a text classification task often begins with a cold start, when labeled data is scarce. In such cases, the common practice of fine-tuning pre-trained models, such as BERT, for a target classification task, is prone to produce poor performance. We suggest a method to boost the performance of suc... | c3f9e0b74e84ae3ba20a9e013ca91eec | 2,022 | [
"in real - world scenarios , a text classification task often begins with a cold start , when labeled data is scarce .",
"in such cases , the common practice of fine - tuning pre - trained models , such as bert , for a target classification task , is prone to produce poor performance .",
"we suggest a method to... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "text classification task",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"text",
"classification",
"task"
],
"offsets": [
7,
8,
9
... | [
"in",
"real",
"-",
"world",
"scenarios",
",",
"a",
"text",
"classification",
"task",
"often",
"begins",
"with",
"a",
"cold",
"start",
",",
"when",
"labeled",
"data",
"is",
"scarce",
".",
"in",
"such",
"cases",
",",
"the",
"common",
"practice",
"of",
"fin... |
ACL | WLASL-LEX: a Dataset for Recognising Phonological Properties in American Sign Language | Signed Language Processing (SLP) concerns the automated processing of signed languages, the main means of communication of Deaf and hearing impaired individuals. SLP features many different tasks, ranging from sign recognition to translation and production of signed speech, but has been overlooked by the NLP community ... | a24694411e6327b05196073141e6c2ec | 2,022 | [
"signed language processing ( slp ) concerns the automated processing of signed languages , the main means of communication of deaf and hearing impaired individuals .",
"slp features many different tasks , ranging from sign recognition to translation and production of signed speech , but has been overlooked by th... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "signed language processing",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"signed",
"language",
"processing"
],
"offsets": [
0,
1,
... | [
"signed",
"language",
"processing",
"(",
"slp",
")",
"concerns",
"the",
"automated",
"processing",
"of",
"signed",
"languages",
",",
"the",
"main",
"means",
"of",
"communication",
"of",
"deaf",
"and",
"hearing",
"impaired",
"individuals",
".",
"slp",
"features",... |
ACL | Towards Automating Healthcare Question Answering in a Noisy Multilingual Low-Resource Setting | We discuss ongoing work into automating a multilingual digital helpdesk service available via text messaging to pregnant and breastfeeding mothers in South Africa. Our anonymized dataset consists of short informal questions, often in low-resource languages, with unreliable language labels, spelling errors and code-mixi... | bbc622b56e7d19b4910f2259acc8ce1d | 2,019 | [
"we discuss ongoing work into automating a multilingual digital helpdesk service available via text messaging to pregnant and breastfeeding mothers in south africa .",
"our anonymized dataset consists of short informal questions , often in low - resource languages , with unreliable language labels , spelling erro... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "multilingual digital helpdesk service",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"multilingual",
"digital",
"helpdesk",
"service"
],
"offsets": [... | [
"we",
"discuss",
"ongoing",
"work",
"into",
"automating",
"a",
"multilingual",
"digital",
"helpdesk",
"service",
"available",
"via",
"text",
"messaging",
"to",
"pregnant",
"and",
"breastfeeding",
"mothers",
"in",
"south",
"africa",
".",
"our",
"anonymized",
"datas... |
ACL | Stereotyping Norwegian Salmon: An Inventory of Pitfalls in Fairness Benchmark Datasets | Auditing NLP systems for computational harms like surfacing stereotypes is an elusive goal. Several recent efforts have focused on benchmark datasets consisting of pairs of contrastive sentences, which are often accompanied by metrics that aggregate an NLP system’s behavior on these pairs into measurements of harms. We... | 37ec140e237d4ab8b5174b638d806573 | 2,021 | [
"auditing nlp systems for computational harms like surfacing stereotypes is an elusive goal .",
"several recent efforts have focused on benchmark datasets consisting of pairs of contrastive sentences , which are often accompanied by metrics that aggregate an nlp system ’ s behavior on these pairs into measurement... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "benchmark datasets",
"nugget_type": "DST",
"argument_type": "Target",
"tokens": [
"benchmark",
"datasets"
],
"offsets": [
20,
21
]
}
],
"trigge... | [
"auditing",
"nlp",
"systems",
"for",
"computational",
"harms",
"like",
"surfacing",
"stereotypes",
"is",
"an",
"elusive",
"goal",
".",
"several",
"recent",
"efforts",
"have",
"focused",
"on",
"benchmark",
"datasets",
"consisting",
"of",
"pairs",
"of",
"contrastive... |