Column types: venue (string, 1 class) · title (string, 18–162 chars) · abstract (string, 252–1.89k chars) · doc_id (string, 32 chars) · publication_year (int64) · sentences (list, 1–13 items) · events (list, 1–24 items) · document (list, 50–348 tokens)

| venue | title | abstract | doc_id | publication_year | sentences | events | document |
|---|---|---|---|---|---|---|---|
ACL | IMPLI: Investigating NLI Models’ Performance on Figurative Language | Natural language inference (NLI) has been widely used as a task to train and evaluate models for language understanding. However, the ability of NLI models to perform inferences requiring understanding of figurative language such as idioms and metaphors remains understudied. We introduce the IMPLI (Idiomatic and Metaph... | 9ad557b66e90e9d98312d28e785e1e44 | 2022 | [
"natural language inference ( nli ) has been widely used as a task to train and evaluate models for language understanding .",
"however , the ability of nli models to perform inferences requiring understanding of figurative language such as idioms and metaphors remains understudied .",
"we introduce the impli (... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "nli",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"nli"
],
"offsets": [
96
]
}
],
"trigger": {
"text": "used",
"tokens": [
"us... | [
"natural",
"language",
"inference",
"(",
"nli",
")",
"has",
"been",
"widely",
"used",
"as",
"a",
"task",
"to",
"train",
"and",
"evaluate",
"models",
"for",
"language",
"understanding",
".",
"however",
",",
"the",
"ability",
"of",
"nli",
"models",
"to",
"pe... |
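Each event argument in the preview carries an `offsets` list that indexes into the row's `document` token list. A minimal sketch of recovering an argument's surface text from those offsets — field names (`arguments`, `tokens`, `offsets`, `document`) are taken from the preview above, and the complete record schema is an assumption, not confirmed:

```python
# Sketch only: field names follow the preview above; the full record
# layout of this dataset is assumed rather than documented here.

def argument_text(document_tokens, argument):
    """Join the document tokens that the argument's offsets point at."""
    return " ".join(document_tokens[i] for i in argument["offsets"])

# Abridged token list from the "Incorporating Stock Market Signals" row:
doc = ["research", "in", "stance", "detection", "has", "so", "far", "focused"]
arg = {"text": "stance detection", "nugget_type": "TAK",
       "argument_type": "Target", "tokens": ["stance", "detection"],
       "offsets": [2, 3]}
print(argument_text(doc, arg))  # → stance detection
```

This matches the visible row, where "stance detection" is annotated at offsets [2, 3] of the tokenized abstract.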
ACL | Incorporating Stock Market Signals for Twitter Stance Detection | Research in stance detection has so far focused on models which leverage purely textual input. In this paper, we investigate the integration of textual and financial signals for stance detection in the financial domain. Specifically, we propose a robust multi-task neural architecture that combines textual input with hi... | 830e0b4ffd077a4005b8ef381d08d9c1 | 2022 | [
"research in stance detection has so far focused on models which leverage purely textual input .",
"in this paper , we investigate the integration of textual and financial signals for stance detection in the financial domain .",
"specifically , we propose a robust multi - task neural architecture that combines ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "stance detection",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"stance",
"detection"
],
"offsets": [
2,
3
]
}
],
"trigger": {
... | [
"research",
"in",
"stance",
"detection",
"has",
"so",
"far",
"focused",
"on",
"models",
"which",
"leverage",
"purely",
"textual",
"input",
".",
"in",
"this",
"paper",
",",
"we",
"investigate",
"the",
"integration",
"of",
"textual",
"and",
"financial",
"signals... |
ACL | ABCD: A Graph Framework to Convert Complex Sentences to a Covering Set of Simple Sentences | Atomic clauses are fundamental text units for understanding complex sentences. Identifying the atomic sentences within complex sentences is important for applications such as summarization, argument mining, discourse analysis, discourse parsing, and question answering. Previous work mainly relies on rule-based methods ... | 853146ed8128ec9ad2064f1d09162ebe | 2021 | [
"atomic clauses are fundamental text units for understanding complex sentences .",
"identifying the atomic sentences within complex sentences is important for applications such as summarization , argument mining , discourse analysis , discourse parsing , and question answering .",
"previous work mainly relies o... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "atomic clauses",
"nugget_type": "FEA",
"argument_type": "Target",
"tokens": [
"atomic",
"clauses"
],
"offsets": [
0,
1
]
}
],
"trigger": {
... | [
"atomic",
"clauses",
"are",
"fundamental",
"text",
"units",
"for",
"understanding",
"complex",
"sentences",
".",
"identifying",
"the",
"atomic",
"sentences",
"within",
"complex",
"sentences",
"is",
"important",
"for",
"applications",
"such",
"as",
"summarization",
"... |
ACL | Structurizing Misinformation Stories via Rationalizing Fact-Checks | Misinformation has recently become a well-documented matter of public concern. Existing studies on this topic have hitherto adopted a coarse concept of misinformation, which incorporates a broad spectrum of story types ranging from political conspiracies to misinterpreted pranks. This paper aims to structurize these mi... | bef766ac739a954f66f7f5ec815c1690 | 2021 | [
"misinformation has recently become a well - documented matter of public concern .",
"existing studies on this topic have hitherto adopted a coarse concept of misinformation , which incorporates a broad spectrum of story types ranging from political conspiracies to misinterpreted pranks .",
"this paper aims to ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "misinformation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"misinformation"
],
"offsets": [
0
]
}
],
"trigger": {
"text": "become",
... | [
"misinformation",
"has",
"recently",
"become",
"a",
"well",
"-",
"documented",
"matter",
"of",
"public",
"concern",
".",
"existing",
"studies",
"on",
"this",
"topic",
"have",
"hitherto",
"adopted",
"a",
"coarse",
"concept",
"of",
"misinformation",
",",
"which",
... |
ACL | In Layman’s Terms: Semi-Open Relation Extraction from Scientific Texts | Information Extraction (IE) from scientific texts can be used to guide readers to the central information in scientific documents. But narrow IE systems extract only a fraction of the information captured, and Open IE systems do not perform well on the long and complex sentences encountered in scientific texts. In this... | ec7954489ad7aa76acff3cce6b524e68 | 2020 | [
"information extraction ( ie ) from scientific texts can be used to guide readers to the central information in scientific documents .",
"but narrow ie systems extract only a fraction of the information captured , and open ie systems do not perform well on the long and complex sentences encountered in scientific ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "information extraction ( ie ) from scientific texts",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"information",
"extraction",
"(",
"ie",
")",
"... | [
"information",
"extraction",
"(",
"ie",
")",
"from",
"scientific",
"texts",
"can",
"be",
"used",
"to",
"guide",
"readers",
"to",
"the",
"central",
"information",
"in",
"scientific",
"documents",
".",
"but",
"narrow",
"ie",
"systems",
"extract",
"only",
"a",
... |
ACL | Towards more equitable question answering systems: How much more data do you need? | Question answering (QA) in English has been widely explored, but multilingual datasets are relatively new, with several methods attempting to bridge the gap between high- and low-resourced languages using data augmentation through translation and cross-lingual transfer. In this project we take a step back and study whi... | 95bc304a09bd0f8cbc7100a66f7cd4d2 | 2021 | [
"question answering ( qa ) in english has been widely explored , but multilingual datasets are relatively new , with several methods attempting to bridge the gap between high - and low - resourced languages using data augmentation through translation and cross - lingual transfer .",
"in this project we take a ste... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "question answering",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"question",
"answering"
],
"offsets": [
0,
1
]
}
],
"trigger"... | [
"question",
"answering",
"(",
"qa",
")",
"in",
"english",
"has",
"been",
"widely",
"explored",
",",
"but",
"multilingual",
"datasets",
"are",
"relatively",
"new",
",",
"with",
"several",
"methods",
"attempting",
"to",
"bridge",
"the",
"gap",
"between",
"high",... |
ACL | Towards Robustness of Text-to-SQL Models Against Natural and Realistic Adversarial Table Perturbation | The robustness of Text-to-SQL parsers against adversarial perturbations plays a crucial role in delivering highly reliable applications. Previous studies along this line primarily focused on perturbations in the natural language question side, neglecting the variability of tables. Motivated by this, we propose the Adve... | 3aac99be6ce416bcc712a037805ee6b1 | 2022 | [
"the robustness of text - to - sql parsers against adversarial perturbations plays a crucial role in delivering highly reliable applications .",
"previous studies along this line primarily focused on perturbations in the natural language question side , neglecting the variability of tables .",
"motivated by thi... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "text - to - sql parsers",
"nugget_type": "MOD",
"argument_type": "Target",
"tokens": [
"text",
"-",
"to",
"-",
"sql",
"parsers"
],
"offsets": [
... | [
"the",
"robustness",
"of",
"text",
"-",
"to",
"-",
"sql",
"parsers",
"against",
"adversarial",
"perturbations",
"plays",
"a",
"crucial",
"role",
"in",
"delivering",
"highly",
"reliable",
"applications",
".",
"previous",
"studies",
"along",
"this",
"line",
"prima... |
ACL | Efficient Dialogue State Tracking by Selectively Overwriting Memory | Recent works in dialogue state tracking (DST) focus on an open vocabulary-based setting to resolve scalability and generalization issues of the predefined ontology-based approaches. However, they are inefficient in that they predict the dialogue state at every turn from scratch. Here, we consider dialogue state as an e... | a50b5410479165b6b4c0516507e765be | 2020 | [
"recent works in dialogue state tracking ( dst ) focus on an open vocabulary - based setting to resolve scalability and generalization issues of the predefined ontology - based approaches .",
"however , they are inefficient in that they predict the dialogue state at every turn from scratch .",
"here , we consid... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "inefficient",
"nugget_type": "WEA",
"argument_type": "Fault",
"tokens": [
"inefficient"
],
"offsets": [
35
]
}
],
"trigger": {
"text": "inefficient",
"... | [
"recent",
"works",
"in",
"dialogue",
"state",
"tracking",
"(",
"dst",
")",
"focus",
"on",
"an",
"open",
"vocabulary",
"-",
"based",
"setting",
"to",
"resolve",
"scalability",
"and",
"generalization",
"issues",
"of",
"the",
"predefined",
"ontology",
"-",
"based... |
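Because each argument stores both its surface `tokens` and their `offsets`, the two can be cross-checked against the row's `document` token list. A hedged sanity-check sketch — the offset value 35 matches the visible "inefficient" trigger in the DST row above, but the full schema remains an assumption:

```python
# Sketch only: verifies that an argument's `offsets` index exactly the
# tokens stored in its `tokens` field (field names as shown in the preview).

def offsets_match(document_tokens, argument):
    return [document_tokens[i] for i in argument["offsets"]] == argument["tokens"]

# Abridged tokens from the "Efficient Dialogue State Tracking" row:
doc = ("recent works in dialogue state tracking ( dst ) focus on an open "
       "vocabulary - based setting to resolve scalability and generalization "
       "issues of the predefined ontology - based approaches . "
       "however , they are inefficient").split()
arg = {"text": "inefficient", "nugget_type": "WEA", "argument_type": "Fault",
       "tokens": ["inefficient"], "offsets": [35]}
print(offsets_match(doc, arg))  # → True
```

A check like this is useful when re-tokenizing or slicing the `document` field, since any drift between the token list and the stored offsets silently corrupts the annotations.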
ACL | Continual Relation Learning via Episodic Memory Activation and Reconsolidation | Continual relation learning aims to continually train a model on new data to learn incessantly emerging novel relations while avoiding catastrophically forgetting old relations. Some pioneering work has proved that storing a handful of historical relation examples in episodic memory and replaying them in subsequent tra... | 38fc76b139725013270be20373ed560b | 2020 | [
"continual relation learning aims to continually train a model on new data to learn incessantly emerging novel relations while avoiding catastrophically forgetting old relations .",
"some pioneering work has proved that storing a handful of historical relation examples in episodic memory and replaying them in sub... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "continual relation learning",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"continual",
"relation",
"learning"
],
"offsets": [
0,
1,
... | [
"continual",
"relation",
"learning",
"aims",
"to",
"continually",
"train",
"a",
"model",
"on",
"new",
"data",
"to",
"learn",
"incessantly",
"emerging",
"novel",
"relations",
"while",
"avoiding",
"catastrophically",
"forgetting",
"old",
"relations",
".",
"some",
"p... |
ACL | Modeling Word Formation in English–German Neural Machine Translation | This paper studies strategies to model word formation in NMT using rich linguistic information, namely a word segmentation approach that goes beyond splitting into substrings by considering fusional morphology. Our linguistically sound segmentation is combined with a method for target-side inflection to accommodate mod... | 19f6470d3e49563daef2736e12d4da6d | 2020 | [
"this paper studies strategies to model word formation in nmt using rich linguistic information , namely a word segmentation approach that goes beyond splitting into substrings by considering fusional morphology .",
"our linguistically sound segmentation is combined with a method for target - side inflection to a... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "strategies",
"nugget_type": "APP",
"argument_type": "Content",
"tokens": [
"strategies"
],
"offsets": [
3
]
},
{
"text": "model",
"nugget_type": "E-P... | [
"this",
"paper",
"studies",
"strategies",
"to",
"model",
"word",
"formation",
"in",
"nmt",
"using",
"rich",
"linguistic",
"information",
",",
"namely",
"a",
"word",
"segmentation",
"approach",
"that",
"goes",
"beyond",
"splitting",
"into",
"substrings",
"by",
"c... |
ACL | De-biasing Distantly Supervised Named Entity Recognition via Causal Intervention | Distant supervision tackles the data bottleneck in NER by automatically generating training instances via dictionary matching. Unfortunately, the learning of DS-NER is severely dictionary-biased, which suffers from spurious correlations and therefore undermines the effectiveness and the robustness of the learned models... | 9be75708fb2b7871880f0bbfd8adb75c | 2021 | [
"distant supervision tackles the data bottleneck in ner by automatically generating training instances via dictionary matching .",
"unfortunately , the learning of ds - ner is severely dictionary - biased , which suffers from spurious correlations and therefore undermines the effectiveness and the robustness of t... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "distant supervision",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"distant",
"supervision"
],
"offsets": [
0,
1
]
}
],
"trigge... | [
"distant",
"supervision",
"tackles",
"the",
"data",
"bottleneck",
"in",
"ner",
"by",
"automatically",
"generating",
"training",
"instances",
"via",
"dictionary",
"matching",
".",
"unfortunately",
",",
"the",
"learning",
"of",
"ds",
"-",
"ner",
"is",
"severely",
... |
ACL | Frugal Paradigm Completion | Lexica distinguishing all morphologically related forms of each lexeme are crucial to many language technologies, yet building them is expensive. We propose a frugal paradigm completion approach that predicts all related forms in a morphological paradigm from as few manually provided forms as possible. It induces typol... | 2dd590d179d1a22833a5689f5cf4cda6 | 2020 | [
"lexica distinguishing all morphologically related forms of each lexeme are crucial to many language technologies , yet building them is expensive .",
"we propose a frugal paradigm completion approach that predicts all related forms in a morphological paradigm from as few manually provided forms as possible .",
... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "lexica distinguishing all morphologically related forms of each lexeme",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"lexica",
"distinguishing",
"all",
"morphologic... | [
"lexica",
"distinguishing",
"all",
"morphologically",
"related",
"forms",
"of",
"each",
"lexeme",
"are",
"crucial",
"to",
"many",
"language",
"technologies",
",",
"yet",
"building",
"them",
"is",
"expensive",
".",
"we",
"propose",
"a",
"frugal",
"paradigm",
"com... |
ACL | Divide and Rule: Effective Pre-Training for Context-Aware Multi-Encoder Translation Models | Multi-encoder models are a broad family of context-aware neural machine translation systems that aim to improve translation quality by encoding document-level contextual information alongside the current sentence. The context encoding is undertaken by contextual parameters, trained on document-level data. In this work,... | 0dd2f6e36d3bc6149a7cee8507b25f2f | 2022 | [
"multi - encoder models are a broad family of context - aware neural machine translation systems that aim to improve translation quality by encoding document - level contextual information alongside the current sentence .",
"the context encoding is undertaken by contextual parameters , trained on document - level... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "context - aware neural machine translation systems",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"context",
"-",
"aware",
"neural",
"machine",
"... | [
"multi",
"-",
"encoder",
"models",
"are",
"a",
"broad",
"family",
"of",
"context",
"-",
"aware",
"neural",
"machine",
"translation",
"systems",
"that",
"aim",
"to",
"improve",
"translation",
"quality",
"by",
"encoding",
"document",
"-",
"level",
"contextual",
... |
ACL | Towards Making the Most of Cross-Lingual Transfer for Zero-Shot Neural Machine Translation | This paper demonstrates that multilingual pretraining and multilingual fine-tuning are both critical for facilitating cross-lingual transfer in zero-shot translation, where the neural machine translation (NMT) model is tested on source languages unseen during supervised training. Following this idea, we present SixT+, ... | 3ab2499a5a230ddac2afcdbf5e0dd452 | 2022 | [
"this paper demonstrates that multilingual pretraining and multilingual fine - tuning are both critical for facilitating cross - lingual transfer in zero - shot translation , where the neural machine translation ( nmt ) model is tested on source languages unseen during supervised training .",
"following this idea... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "multilingual pretraining and multilingual fine - tuning",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"multilingual",
"pretraining",
"and",
"multilingual",
... | [
"this",
"paper",
"demonstrates",
"that",
"multilingual",
"pretraining",
"and",
"multilingual",
"fine",
"-",
"tuning",
"are",
"both",
"critical",
"for",
"facilitating",
"cross",
"-",
"lingual",
"transfer",
"in",
"zero",
"-",
"shot",
"translation",
",",
"where",
"... |
ACL | The TechQA Dataset | We introduce TECHQA, a domain-adaptation question answering dataset for the technical support domain. The TECHQA corpus highlights two real-world issues from the automated customer support domain. First, it contains actual questions posed by users on a technical forum, rather than questions generated specifically for a... | 95958057769b24b42d8a0aad4aa62514 | 2020 | [
"we introduce techqa , a domain - adaptation question answering dataset for the technical support domain .",
"the techqa corpus highlights two real - world issues from the automated customer support domain .",
"first , it contains actual questions posed by users on a technical forum , rather than questions gene... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "technical support domain",
"nugget_type": "... | [
"we",
"introduce",
"techqa",
",",
"a",
"domain",
"-",
"adaptation",
"question",
"answering",
"dataset",
"for",
"the",
"technical",
"support",
"domain",
".",
"the",
"techqa",
"corpus",
"highlights",
"two",
"real",
"-",
"world",
"issues",
"from",
"the",
"automat... |
ACL | Crowdsourcing and Aggregating Nested Markable Annotations | One of the key steps in language resource creation is the identification of the text segments to be annotated, or markables, which depending on the task may vary from nominal chunks for named entity resolution to (potentially nested) noun phrases in coreference resolution (or mentions) to larger text segments in text s... | 301ebe034bd79606affe2d8b68a64e47 | 2019 | [
"one of the key steps in language resource creation is the identification of the text segments to be annotated , or markables , which depending on the task may vary from nominal chunks for named entity resolution to ( potentially nested ) noun phrases in coreference resolution ( or mentions ) to larger text segment... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "language resource creation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"language",
"resource",
"creation"
],
"offsets": [
6,
7,
... | [
"one",
"of",
"the",
"key",
"steps",
"in",
"language",
"resource",
"creation",
"is",
"the",
"identification",
"of",
"the",
"text",
"segments",
"to",
"be",
"annotated",
",",
"or",
"markables",
",",
"which",
"depending",
"on",
"the",
"task",
"may",
"vary",
"f... |
ACL | Rethinking and Refining the Distinct Metric | Distinct is a widely used automatic metric for evaluating diversity in language generation tasks. However, we observed that the original approach to calculating distinct scores has evident biases that tend to assign higher penalties to longer sequences. We refine the calculation of distinct scores by scaling the number ... | 191a407b694853368ead7542ba247f05 | 2022 | [
"distinct is a widely used automatic metric for evaluating diversity in language generation tasks .",
"however , we observed that the original approach to calculating distinct scores has evident biases that tend to assign higher penalties to longer sequences .",
"we refine the calculation of distinct scores by ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "distinct",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"distinct"
],
"offsets": [
0
]
}
],
"trigger": {
"text": "metric",
"tokens": [
... | [
"distinct",
"is",
"a",
"widely",
"used",
"automatic",
"metric",
"for",
"evaluating",
"diversity",
"in",
"language",
"generation",
"tasks",
".",
"however",
",",
"we",
"observed",
"that",
"the",
"original",
"approach",
"to",
"calculating",
"distinct",
"scores",
"h... |
ACL | Fine-tuning Pre-Trained Transformer Language Models to Distantly Supervised Relation Extraction | Distantly supervised relation extraction is widely used to extract relational facts from text, but suffers from noisy labels. Current relation extraction methods try to alleviate the noise by multi-instance learning and by providing supporting linguistic and contextual information to more efficiently guide the relation... | e61d5062601799f2034405441aa7903f | 2019 | [
"distantly supervised relation extraction is widely used to extract relational facts from text , but suffers from noisy labels .",
"current relation extraction methods try to alleviate the noise by multi - instance learning and by providing supporting linguistic and contextual information to more efficiently guid... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "distantly supervised relation extraction",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"distantly",
"supervised",
"relation",
"extraction"
],
"offs... | [
"distantly",
"supervised",
"relation",
"extraction",
"is",
"widely",
"used",
"to",
"extract",
"relational",
"facts",
"from",
"text",
",",
"but",
"suffers",
"from",
"noisy",
"labels",
".",
"current",
"relation",
"extraction",
"methods",
"try",
"to",
"alleviate",
... |
ACL | Misinfo Reaction Frames: Reasoning about Readers’ Reactions to News Headlines | Even to a simple and short news headline, readers react in a multitude of ways: cognitively (e.g. inferring the writer’s intent), emotionally (e.g. feeling distrust), and behaviorally (e.g. sharing the news with their friends). Such reactions are instantaneous and yet complex, as they rely on factors that go beyond int... | 94ff3b76b6bd9ebf5401630656eeed15 | 2022 | [
"even to a simple and short news headline , readers react in a multitude of ways : cognitively ( e . g . inferring the writer ’ s intent ) , emotionally ( e . g . feeling distrust ) , and behaviorally ( e . g . sharing the news with their friends ) .",
"such reactions are instantaneous and yet complex , as they r... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "reactions",
"nugget_type": "FEA",
"argument_type": "Target",
"tokens": [
"reactions"
],
"offsets": [
57
]
}
],
"trigger": {
"text": "instantaneous",
"t... | [
"even",
"to",
"a",
"simple",
"and",
"short",
"news",
"headline",
",",
"readers",
"react",
"in",
"a",
"multitude",
"of",
"ways",
":",
"cognitively",
"(",
"e",
".",
"g",
".",
"inferring",
"the",
"writer",
"’",
"s",
"intent",
")",
",",
"emotionally",
"(",... |
ACL | To Boldly Query What No One Has Annotated Before? The Frontiers of Corpus Querying | Corpus query systems exist to address the multifarious information needs of any person interested in the content of annotated corpora. In this role they play an important part in making those resources usable for a wider audience. Over the past decades, several such query systems and languages have emerged, varying gre... | b0e745ae854ffccd46409f8197bfea5b | 2020 | [
"corpus query systems exist to address the multifarious information needs of any person interested in the content of annotated corpora .",
"in this role they play an important part in making those resources usable for a wider audience .",
"over the past decades , several such query systems and languages have em... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "corpus query systems",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"corpus",
"query",
"systems"
],
"offsets": [
0,
1,
2
... | [
"corpus",
"query",
"systems",
"exist",
"to",
"address",
"the",
"multifarious",
"information",
"needs",
"of",
"any",
"person",
"interested",
"in",
"the",
"content",
"of",
"annotated",
"corpora",
".",
"in",
"this",
"role",
"they",
"play",
"an",
"important",
"par... |
ACL | CAKE: A Scalable Commonsense-Aware Framework For Multi-View Knowledge Graph Completion | Knowledge graphs store a large number of factual triples while they are still incomplete, inevitably. The previous knowledge graph completion (KGC) models predict missing links between entities merely relying on fact-view data, ignoring the valuable commonsense knowledge. The previous knowledge graph embedding (KGE) te... | 9d4b4d1ff205de38726ace5ee63c89f2 | 2022 | [
"knowledge graphs store a large number of factual triples while they are still incomplete , inevitably .",
"the previous knowledge graph completion ( kgc ) models predict missing links between entities merely relying on fact - view data , ignoring the valuable commonsense knowledge .",
"the previous knowledge g... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "previous knowledge graph completion models",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"previous",
"knowledge",
"graph",
"completion",
"models"
... | [
"knowledge",
"graphs",
"store",
"a",
"large",
"number",
"of",
"factual",
"triples",
"while",
"they",
"are",
"still",
"incomplete",
",",
"inevitably",
".",
"the",
"previous",
"knowledge",
"graph",
"completion",
"(",
"kgc",
")",
"models",
"predict",
"missing",
"... |
ACL | Towards Complex Text-to-SQL in Cross-Domain Database with Intermediate Representation | We present a neural approach called IRNet for complex and cross-domain Text-to-SQL. IRNet aims to address two challenges: 1) the mismatch between intents expressed in natural language (NL) and the implementation details in SQL; 2) the challenge in predicting columns caused by the large number of out-of-domain words. In... | 8783384af0d85622d18f34e21154cc9c | 2019 | [
"we present a neural approach called irnet for complex and cross - domain text - to - sql .",
"irnet aims to address two challenges : 1 ) the mismatch between intents expressed in natural language ( nl ) and the implementation details in sql ; 2 ) the challenge in predicting columns caused by the large number of ... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "complex and cross - domain text - to - sql",
... | [
"we",
"present",
"a",
"neural",
"approach",
"called",
"irnet",
"for",
"complex",
"and",
"cross",
"-",
"domain",
"text",
"-",
"to",
"-",
"sql",
".",
"irnet",
"aims",
"to",
"address",
"two",
"challenges",
":",
"1",
")",
"the",
"mismatch",
"between",
"inten... |
ACL | Non-neural Models Matter: a Re-evaluation of Neural Referring Expression Generation Systems | In recent years, neural models have often outperformed rule-based and classic Machine Learning approaches in NLG. These classic approaches are now often disregarded, for example when new neural models are evaluated. We argue that they should not be overlooked, since, for some tasks, well-designed non-neural approaches ... | f7c13ab22a7b2e2b15e90517202e5d27 | 2022 | [
"in recent years , neural models have often outperformed rule - based and classic machine learning approaches in nlg .",
"these classic approaches are now often disregarded , for example when new neural models are evaluated .",
"we argue that they should not be overlooked , since , for some tasks , well - desig... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"neural",
"models"
],
"offsets": [
4,
5
]
}
],
"trigger": {
... | [
"in",
"recent",
"years",
",",
"neural",
"models",
"have",
"often",
"outperformed",
"rule",
"-",
"based",
"and",
"classic",
"machine",
"learning",
"approaches",
"in",
"nlg",
".",
"these",
"classic",
"approaches",
"are",
"now",
"often",
"disregarded",
",",
"for"... |
ACL | Bridging the Generalization Gap in Text-to-SQL Parsing with Schema Expansion | Text-to-SQL parsers map natural language questions to programs that are executable over tables to generate answers, and are typically evaluated on large-scale datasets like Spider (Yu et al., 2018). We argue that existing benchmarks fail to capture a certain out-of-domain generalization problem that is of significant p... | b1ca9564b6fdc1117523985ecb081de1 | 2022 | [
"text - to - sql parsers map natural language questions to programs that are executable over tables to generate answers , and are typically evaluated on large - scale datasets like spider ( yu et al . , 2018 ) .",
"we argue that existing benchmarks fail to capture a certain out - of - domain generalization proble... | [
{
"event_type": "RWS",
"arguments": [
{
"text": "text - to - sql parsers",
"nugget_type": "MOD",
"argument_type": "Subject",
"tokens": [
"text",
"-",
"to",
"-",
"sql",
"parsers"
],
"offsets": [
... | [
"text",
"-",
"to",
"-",
"sql",
"parsers",
"map",
"natural",
"language",
"questions",
"to",
"programs",
"that",
"are",
"executable",
"over",
"tables",
"to",
"generate",
"answers",
",",
"and",
"are",
"typically",
"evaluated",
"on",
"large",
"-",
"scale",
"data... |
ACL | Multimodal Sarcasm Target Identification in Tweets | Sarcasm is important to sentiment analysis on social media. Sarcasm Target Identification (STI) deserves further study to understand sarcasm in depth. However, text lacking context or missing sarcasm target makes target identification very difficult. In this paper, we introduce multimodality to STI and present Multimod... | 365b1bf3d8f0bef691280ed4f0e22bf0 | 2022 | [
"sarcasm is important to sentiment analysis on social media .",
"sarcasm target identification ( sti ) deserves further study to understand sarcasm in depth .",
"however , text lacking context or missing sarcasm target makes target identification very difficult .",
"in this paper , we introduce multimodality ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "sarcasm target identification",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"sarcasm",
"target",
"identification"
],
"offsets": [
10,
11,
... | [
"sarcasm",
"is",
"important",
"to",
"sentiment",
"analysis",
"on",
"social",
"media",
".",
"sarcasm",
"target",
"identification",
"(",
"sti",
")",
"deserves",
"further",
"study",
"to",
"understand",
"sarcasm",
"in",
"depth",
".",
"however",
",",
"text",
"lacki... |
ACL | A Large-Scale Corpus for Conversation Disentanglement | Disentangling conversations mixed together in a single stream of messages is a difficult task, made harder by the lack of large manually annotated datasets. We created a new dataset of 77,563 messages manually annotated with reply-structure graphs that both disentangle conversations and define internal conversation str... | 55f5ce151fa6d0036ccac6adcb778a83 | 2019 | [
"disentangling conversations mixed together in a single stream of messages is a difficult task , made harder by the lack of large manually annotated datasets .",
"we created a new dataset of 77 , 563 messages manually annotated with reply - structure graphs that both disentangle conversations and define internal ... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "disentangling conversations",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"disentangling",
"conversations"
],
"offsets": [
0,
1
]
},
... | [
"disentangling",
"conversations",
"mixed",
"together",
"in",
"a",
"single",
"stream",
"of",
"messages",
"is",
"a",
"difficult",
"task",
",",
"made",
"harder",
"by",
"the",
"lack",
"of",
"large",
"manually",
"annotated",
"datasets",
".",
"we",
"created",
"a",
... |
ACL | Enabling Lightweight Fine-tuning for Pre-trained Language Model Compression based on Matrix Product Operators | This paper presents a novel pre-trained language models (PLM) compression approach based on the matrix product operator (short as MPO) from quantum many-body physics. It can decompose an original matrix into central tensors (containing the core information) and auxiliary tensors (with only a small proportion of paramet... | 91428db8b0d52aeaaf260f7d73ae5de6 | 2,021 | [
"this paper presents a novel pre - trained language models ( plm ) compression approach based on the matrix product operator ( short as mpo ) from quantum many - body physics .",
"it can decompose an original matrix into central tensors ( containing the core information ) and auxiliary tensors ( with only a small... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "pre - trained language models compression approach",
"nugget_type": "APP",
"argument_type": "Content",
"tokens": [
"pre",
"-",
"trained",
"language",
"models",
"... | [
"this",
"paper",
"presents",
"a",
"novel",
"pre",
"-",
"trained",
"language",
"models",
"(",
"plm",
")",
"compression",
"approach",
"based",
"on",
"the",
"matrix",
"product",
"operator",
"(",
"short",
"as",
"mpo",
")",
"from",
"quantum",
"many",
"-",
"body... |
ACL | How reparametrization trick broke differentially-private text representation learning | As privacy gains traction in the NLP community, researchers have started adopting various approaches to privacy-preserving methods. One of the favorite privacy frameworks, differential privacy (DP), is perhaps the most compelling thanks to its fundamental theoretical guarantees. Despite the apparent simplicity of the g... | ebd43768dd56cd15df1f92a82a1467c8 | 2,022 | [
"as privacy gains traction in the nlp community , researchers have started adopting various approaches to privacy - preserving methods .",
"one of the favorite privacy frameworks , differential privacy ( dp ) , is perhaps the most compelling thanks to its fundamental theoretical guarantees .",
"despite the appa... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "privacy",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"privacy"
],
"offsets": [
1
]
},
{
"text": "in the nlp community",
"nugget_typ... | [
"as",
"privacy",
"gains",
"traction",
"in",
"the",
"nlp",
"community",
",",
"researchers",
"have",
"started",
"adopting",
"various",
"approaches",
"to",
"privacy",
"-",
"preserving",
"methods",
".",
"one",
"of",
"the",
"favorite",
"privacy",
"frameworks",
",",
... |
ACL | Retrieval-Enhanced Adversarial Training for Neural Response Generation | Dialogue systems are usually built on either generation-based or retrieval-based approaches, yet they do not benefit from the advantages of different models. In this paper, we propose a Retrieval-Enhanced Adversarial Training (REAT) method for neural response generation. Distinct from existing approaches, the REAT meth... | 29d4c00424889811533e097951ccce13 | 2,019 | [
"dialogue systems are usually built on either generation - based or retrieval - based approaches , yet they do not benefit from the advantages of different models .",
"in this paper , we propose a retrieval - enhanced adversarial training ( reat ) method for neural response generation .",
"distinct from existin... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
32
]
},
{
"text": "retrieval - enhanced adversarial training",
... | [
"dialogue",
"systems",
"are",
"usually",
"built",
"on",
"either",
"generation",
"-",
"based",
"or",
"retrieval",
"-",
"based",
"approaches",
",",
"yet",
"they",
"do",
"not",
"benefit",
"from",
"the",
"advantages",
"of",
"different",
"models",
".",
"in",
"thi... |
ACL | Query Graph Generation for Answering Multi-hop Complex Questions from Knowledge Bases | Previous work on answering complex questions from knowledge bases usually separately addresses two types of complexity: questions with constraints and questions with multiple hops of relations. In this paper, we handle both types of complexity at the same time. Motivated by the observation that early incorporation of c... | e9a94cc2ef18ce0506ee5d4e74755b11 | 2,020 | [
"previous work on answering complex questions from knowledge bases usually separately addresses two types of complexity : questions with constraints and questions with multiple hops of relations .",
"in this paper , we handle both types of complexity at the same time .",
"motivated by the observation that early... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
32
]
},
{
"text": "questions with constraints and questions with mu... | [
"previous",
"work",
"on",
"answering",
"complex",
"questions",
"from",
"knowledge",
"bases",
"usually",
"separately",
"addresses",
"two",
"types",
"of",
"complexity",
":",
"questions",
"with",
"constraints",
"and",
"questions",
"with",
"multiple",
"hops",
"of",
"r... |
ACL | Improving Entity Linking through Semantic Reinforced Entity Embeddings | Entity embeddings, which represent different aspects of each entity with a single vector like word embeddings, are a key component of neural entity linking models. Existing entity embeddings are learned from canonical Wikipedia articles and local contexts surrounding target entities. Such entity embeddings are effectiv... | d36c841c7969f511884f5a7da9842726 | 2,020 | [
"entity embeddings , which represent different aspects of each entity with a single vector like word embeddings , are a key component of neural entity linking models .",
"existing entity embeddings are learned from canonical wikipedia articles and local contexts surrounding target entities .",
"such entity embe... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "entity embeddings",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"entity",
"embeddings"
],
"offsets": [
0,
1
]
}
],
"trigger": ... | [
"entity",
"embeddings",
",",
"which",
"represent",
"different",
"aspects",
"of",
"each",
"entity",
"with",
"a",
"single",
"vector",
"like",
"word",
"embeddings",
",",
"are",
"a",
"key",
"component",
"of",
"neural",
"entity",
"linking",
"models",
".",
"existing... |
ACL | Attend, Translate and Summarize: An Efficient Method for Neural Cross-Lingual Summarization | Cross-lingual summarization aims at summarizing a document in one language (e.g., Chinese) into another language (e.g., English). In this paper, we propose a novel method inspired by the translation pattern in the process of obtaining a cross-lingual summary. We first attend to some words in the source text, then trans... | c76dbfb6723f5e082859ecdd3b9fb880 | 2,020 | [
"cross - lingual summarization aims at summarizing a document in one language ( e . g . , chinese ) into another language ( e . g . , english ) .",
"in this paper , we propose a novel method inspired by the translation pattern in the process of obtaining a cross - lingual summary .",
"we first attend to some wo... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "cross - lingual summarization",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"cross",
"-",
"lingual",
"summarization"
],
"offsets": [
0,
... | [
"cross",
"-",
"lingual",
"summarization",
"aims",
"at",
"summarizing",
"a",
"document",
"in",
"one",
"language",
"(",
"e",
".",
"g",
".",
",",
"chinese",
")",
"into",
"another",
"language",
"(",
"e",
".",
"g",
".",
",",
"english",
")",
".",
"in",
"th... |
ACL | Generalising Multilingual Concept-to-Text NLG with Language Agnostic Delexicalisation | Concept-to-text Natural Language Generation is the task of expressing an input meaning representation in natural language. Previous approaches in this task have been able to generalise to rare or unseen instances by relying on a delexicalisation of the input. However, this often requires that the input appears verbatim... | c8f970d4e8acbb8a8f9cfcbc4bac39fb | 2,021 | [
"concept - to - text natural language generation is the task of expressing an input meaning representation in natural language .",
"previous approaches in this task have been able to generalise to rare or unseen instances by relying on a delexicalisation of the input .",
"however , this often requires that the ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "concept - to - text natural language generation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"concept",
"-",
"to",
"-",
"text",
"natural",
... | [
"concept",
"-",
"to",
"-",
"text",
"natural",
"language",
"generation",
"is",
"the",
"task",
"of",
"expressing",
"an",
"input",
"meaning",
"representation",
"in",
"natural",
"language",
".",
"previous",
"approaches",
"in",
"this",
"task",
"have",
"been",
"able... |
ACL | From Paraphrasing to Semantic Parsing: Unsupervised Semantic Parsing via Synchronous Semantic Decoding | Semantic parsing is challenging due to the structure gap and the semantic gap between utterances and logical forms. In this paper, we propose an unsupervised semantic parsing method - Synchronous Semantic Decoding (SSD), which can simultaneously resolve the semantic gap and the structure gap by jointly leveraging parap... | 4c99960ad2bd504a877235e9f22e7f41 | 2,021 | [
"semantic parsing is challenging due to the structure gap and the semantic gap between utterances and logical forms .",
"in this paper , we propose an unsupervised semantic parsing method - synchronous semantic decoding ( ssd ) , which can simultaneously resolve the semantic gap and the structure gap by jointly l... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "semantic parsing",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"semantic",
"parsing"
],
"offsets": [
0,
1
]
}
],
"trigger": {
... | [
"semantic",
"parsing",
"is",
"challenging",
"due",
"to",
"the",
"structure",
"gap",
"and",
"the",
"semantic",
"gap",
"between",
"utterances",
"and",
"logical",
"forms",
".",
"in",
"this",
"paper",
",",
"we",
"propose",
"an",
"unsupervised",
"semantic",
"parsin... |
ACL | UNIMO: Towards Unified-Modal Understanding and Generation via Cross-Modal Contrastive Learning | Existed pre-training methods either focus on single-modal tasks or multi-modal tasks, and cannot effectively adapt to each other. They can only utilize single-modal data (i.e., text or image) or limited multi-modal data (i.e., image-text pairs). In this work, we propose a UNIfied-MOdal pre-training architecture, namely... | 502ea5b505ad4083fc561fbb0d24f878 | 2,021 | [
"existed pre - training methods either focus on single - modal tasks or multi - modal tasks , and cannot effectively adapt to each other .",
"they can only utilize single - modal data ( i . e . , text or image ) or limited multi - modal data ( i . e . , image - text pairs ) .",
"in this work , we propose a unif... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "existed pre - training methods",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"existed",
"pre",
"-",
"training",
"methods"
],
"offsets": [... | [
"existed",
"pre",
"-",
"training",
"methods",
"either",
"focus",
"on",
"single",
"-",
"modal",
"tasks",
"or",
"multi",
"-",
"modal",
"tasks",
",",
"and",
"cannot",
"effectively",
"adapt",
"to",
"each",
"other",
".",
"they",
"can",
"only",
"utilize",
"singl... |
ACL | Machine Translation into Low-resource Language Varieties | State-of-the-art machine translation (MT) systems are typically trained to generate “standard” target language; however, many languages have multiple varieties (regional varieties, dialects, sociolects, non-native varieties) that are different from the standard language. Such varieties are often low-resource, and hence... | 51ec7f7b7b84fd97e167d63925f3ef90 | 2,021 | [
"state - of - the - art machine translation ( mt ) systems are typically trained to generate “ standard ” target language ; however , many languages have multiple varieties ( regional varieties , dialects , sociolects , non - native varieties ) that are different from the standard language .",
"such varieties are... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "mt",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"mt"
],
"offsets": [
81
]
}
],
"trigger": {
"text": "generate",
"tokens": [
"... | [
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"machine",
"translation",
"(",
"mt",
")",
"systems",
"are",
"typically",
"trained",
"to",
"generate",
"“",
"standard",
"”",
"target",
"language",
";",
"however",
",",
"many",
"languages",
"have",
"multiple",
"var... |
ACL | A Relational Memory-based Embedding Model for Triple Classification and Search Personalization | Knowledge graph embedding methods often suffer from a limitation of memorizing valid triples to predict new ones for triple classification and search personalization problems. To this end, we introduce a novel embedding model, named R-MeN, that explores a relational memory network to encode potential dependencies in re... | 010118454e360e9f70cf5c34d1073b49 | 2,020 | [
"knowledge graph embedding methods often suffer from a limitation of memorizing valid triples to predict new ones for triple classification and search personalization problems .",
"to this end , we introduce a novel embedding model , named r - men , that explores a relational memory network to encode potential de... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "knowledge graph embedding methods",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"knowledge",
"graph",
"embedding",
"methods"
],
"offsets": [
... | [
"knowledge",
"graph",
"embedding",
"methods",
"often",
"suffer",
"from",
"a",
"limitation",
"of",
"memorizing",
"valid",
"triples",
"to",
"predict",
"new",
"ones",
"for",
"triple",
"classification",
"and",
"search",
"personalization",
"problems",
".",
"to",
"this"... |
ACL | Interactive Machine Comprehension with Information Seeking Agents | Existing machine reading comprehension (MRC) models do not scale effectively to real-world applications like web-level information retrieval and question answering (QA). We argue that this stems from the nature of MRC datasets: most of these are static environments wherein the supporting documents and all necessary inf... | 36d409dfff18aea521f13aaf9358959d | 2,020 | [
"existing machine reading comprehension ( mrc ) models do not scale effectively to real - world applications like web - level information retrieval and question answering ( qa ) .",
"we argue that this stems from the nature of mrc datasets : most of these are static environments wherein the supporting documents a... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "existing machine reading comprehension models",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"existing",
"machine",
"reading",
"comprehension",
"models"
... | [
"existing",
"machine",
"reading",
"comprehension",
"(",
"mrc",
")",
"models",
"do",
"not",
"scale",
"effectively",
"to",
"real",
"-",
"world",
"applications",
"like",
"web",
"-",
"level",
"information",
"retrieval",
"and",
"question",
"answering",
"(",
"qa",
"... |
ACL | BERTTune: Fine-Tuning Neural Machine Translation with BERTScore | Neural machine translation models are often biased toward the limited translation references seen during training. To amend this form of overfitting, in this paper we propose fine-tuning the models with a novel training objective based on the recently-proposed BERTScore evaluation metric. BERTScore is a scoring functio... | c3c2edef0aef324c848c073954db081f | 2,021 | [
"neural machine translation models are often biased toward the limited translation references seen during training .",
"to amend this form of overfitting , in this paper we propose fine - tuning the models with a novel training objective based on the recently - proposed bertscore evaluation metric .",
"bertscor... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural machine translation models",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"neural",
"machine",
"translation",
"models"
],
"offsets": [
... | [
"neural",
"machine",
"translation",
"models",
"are",
"often",
"biased",
"toward",
"the",
"limited",
"translation",
"references",
"seen",
"during",
"training",
".",
"to",
"amend",
"this",
"form",
"of",
"overfitting",
",",
"in",
"this",
"paper",
"we",
"propose",
... |
ACL | Adaptive Nearest Neighbor Machine Translation | kNN-MT, recently proposed by Khandelwal et al. (2020a), successfully combines pre-trained neural machine translation (NMT) model with token-level k-nearest-neighbor (kNN) retrieval to improve the translation accuracy. However, the traditional kNN algorithm used in kNN-MT simply retrieves a same number of nearest neighb... | 0020cabccef2e8af1465b119875a9180 | 2,021 | [
"knn - mt , recently proposed by khandelwal et al . ( 2020a ) , successfully combines pre - trained neural machine translation ( nmt ) model with token - level k - nearest - neighbor ( knn ) retrieval to improve the translation accuracy .",
"however , the traditional knn algorithm used in knn - mt simply retrieve... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "when the retrieved neighbors include noises",
"nugget_type": "LIM",
"argument_type": "Condition",
"tokens": [
"when",
"the",
"retrieved",
"neighbors",
"include",
... | [
"knn",
"-",
"mt",
",",
"recently",
"proposed",
"by",
"khandelwal",
"et",
"al",
".",
"(",
"2020a",
")",
",",
"successfully",
"combines",
"pre",
"-",
"trained",
"neural",
"machine",
"translation",
"(",
"nmt",
")",
"model",
"with",
"token",
"-",
"level",
"k... |
ACL | Hyperbolic Capsule Networks for Multi-Label Classification | Although deep neural networks are effective at extracting high-level features, classification methods usually encode an input into a vector representation via simple feature aggregation operations (e.g. pooling). Such operations limit the performance. For instance, a multi-label document may contain several concepts. I... | 178c6ea2fc4ffa05ce935dfa6d4fec1f | 2,020 | [
"although deep neural networks are effective at extracting high - level features , classification methods usually encode an input into a vector representation via simple feature aggregation operations ( e . g . pooling ) .",
"such operations limit the performance .",
"for instance , a multi - label document may... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "classification methods",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"classification",
"methods"
],
"offsets": [
13,
14
]
}
],
... | [
"although",
"deep",
"neural",
"networks",
"are",
"effective",
"at",
"extracting",
"high",
"-",
"level",
"features",
",",
"classification",
"methods",
"usually",
"encode",
"an",
"input",
"into",
"a",
"vector",
"representation",
"via",
"simple",
"feature",
"aggregat... |
ACL | Better Exploiting Latent Variables in Text Modeling | We show that sampling latent variables multiple times at a gradient step helps in improving a variational autoencoder and propose a simple and effective method to better exploit these latent variables through hidden state averaging. Consistent gains in performance on two different datasets, Penn Treebank and Yahoo, ind... | 20855b3e0b7b9e626ff1c5825b8f61d1 | 2,019 | [
"we show that sampling latent variables multiple times at a gradient step helps in improving a variational autoencoder and propose a simple and effective method to better exploit these latent variables through hidden state averaging .",
"consistent gains in performance on two different datasets , penn treebank an... | [
{
"event_type": "MDS",
"arguments": [
{
"text": "improving",
"nugget_type": "E-PUR",
"argument_type": "Target",
"tokens": [
"improving"
],
"offsets": [
14
]
},
{
"text": "latent variables",
"nugget_... | [
"we",
"show",
"that",
"sampling",
"latent",
"variables",
"multiple",
"times",
"at",
"a",
"gradient",
"step",
"helps",
"in",
"improving",
"a",
"variational",
"autoencoder",
"and",
"propose",
"a",
"simple",
"and",
"effective",
"method",
"to",
"better",
"exploit",
... |
ACL | Semi-supervised Contextual Historical Text Normalization | Historical text normalization, the task of mapping historical word forms to their modern counterparts, has recently attracted a lot of interest (Bollmann, 2019; Tang et al., 2018; Lusetti et al., 2018; Bollmann et al., 2018;Robertson and Goldwater, 2018; Bollmannet al., 2017; Korchagina, 2017). Yet, virtually all appro... | c6ec1106ea9156df60583066e8040b05 | 2,020 | [
"historical text normalization , the task of mapping historical word forms to their modern counterparts , has recently attracted a lot of interest ( bollmann , 2019 ; tang et al . , 2018 ; lusetti et al . , 2018 ; bollmann et al . , 2018 ; robertson and goldwater , 2018 ; bollmannet al . , 2017 ; korchagina , 2017 ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "historical text normalization",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"historical",
"text",
"normalization"
],
"offsets": [
0,
1,
... | [
"historical",
"text",
"normalization",
",",
"the",
"task",
"of",
"mapping",
"historical",
"word",
"forms",
"to",
"their",
"modern",
"counterparts",
",",
"has",
"recently",
"attracted",
"a",
"lot",
"of",
"interest",
"(",
"bollmann",
",",
"2019",
";",
"tang",
... |
ACL | latent-GLAT: Glancing at Latent Variables for Parallel Text Generation | Recently, parallel text generation has received widespread attention due to its success in generation efficiency. Although many advanced techniques are proposed to improve its generation quality, they still need the help of an autoregressive model for training to overcome the one-to-many multi-modal phenomenon in the d... | 5daae5f2f80edac0f1483215e1828e46 | 2,022 | [
"recently , parallel text generation has received widespread attention due to its success in generation efficiency .",
"although many advanced techniques are proposed to improve its generation quality , they still need the help of an autoregressive model for training to overcome the one - to - many multi - modal ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "parallel text generation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"parallel",
"text",
"generation"
],
"offsets": [
2,
3,
4
... | [
"recently",
",",
"parallel",
"text",
"generation",
"has",
"received",
"widespread",
"attention",
"due",
"to",
"its",
"success",
"in",
"generation",
"efficiency",
".",
"although",
"many",
"advanced",
"techniques",
"are",
"proposed",
"to",
"improve",
"its",
"generat... |
ACL | Graph based Neural Networks for Event Factuality Prediction using Syntactic and Semantic Structures | Event factuality prediction (EFP) is the task of assessing the degree to which an event mentioned in a sentence has happened. For this task, both syntactic and semantic information are crucial to identify the important context words. The previous work for EFP has only combined these information in a simple way that can... | e9a0d7c8aa8f67831cf648cdc508d92c | 2,019 | [
"event factuality prediction ( efp ) is the task of assessing the degree to which an event mentioned in a sentence has happened .",
"for this task , both syntactic and semantic information are crucial to identify the important context words .",
"the previous work for efp has only combined these information in a... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "event factuality prediction",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"event",
"factuality",
"prediction"
],
"offsets": [
0,
1,
... | [
"event",
"factuality",
"prediction",
"(",
"efp",
")",
"is",
"the",
"task",
"of",
"assessing",
"the",
"degree",
"to",
"which",
"an",
"event",
"mentioned",
"in",
"a",
"sentence",
"has",
"happened",
".",
"for",
"this",
"task",
",",
"both",
"syntactic",
"and",... |
ACL | Generalized Data Augmentation for Low-Resource Translation | Low-resource language pairs with a paucity of parallel data pose challenges for machine translation in terms of both adequacy and fluency. Data augmentation utilizing a large amount of monolingual data is regarded as an effective way to alleviate the problem. In this paper, we propose a general framework of data augmen... | 64d8c9c28f38f1f1aa469873c133e3cd | 2,019 | [
"low - resource language pairs with a paucity of parallel data pose challenges for machine translation in terms of both adequacy and fluency .",
"data augmentation utilizing a large amount of monolingual data is regarded as an effective way to alleviate the problem .",
"in this paper , we propose a general fram... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "challenges",
"nugget_type": "WEA",
"argument_type": "Fault",
"tokens": [
"challenges"
],
"offsets": [
12
]
},
{
"text": "machine translation",
"nugge... | [
"low",
"-",
"resource",
"language",
"pairs",
"with",
"a",
"paucity",
"of",
"parallel",
"data",
"pose",
"challenges",
"for",
"machine",
"translation",
"in",
"terms",
"of",
"both",
"adequacy",
"and",
"fluency",
".",
"data",
"augmentation",
"utilizing",
"a",
"lar... |
ACL | On-device Structured and Context Partitioned Projection Networks | A challenging problem in on-device text classification is to build highly accurate neural models that can fit in small memory footprint and have low latency. To address this challenge, we propose an on-device neural network SGNN++ which dynamically learns compact projection vectors from raw text using structured and co... | c7bc515a1f0c547091c0a0e352db808a | 2,019 | [
"a challenging problem in on - device text classification is to build highly accurate neural models that can fit in small memory footprint and have low latency .",
"to address this challenge , we propose an on - device neural network sgnn + + which dynamically learns compact projection vectors from raw text using... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
33
]
},
{
"text": "on - device neural network",
"nugget_type"... | [
"a",
"challenging",
"problem",
"in",
"on",
"-",
"device",
"text",
"classification",
"is",
"to",
"build",
"highly",
"accurate",
"neural",
"models",
"that",
"can",
"fit",
"in",
"small",
"memory",
"footprint",
"and",
"have",
"low",
"latency",
".",
"to",
"addres... |
ACL | Generating Natural Language Adversarial Examples through Probability Weighted Word Saliency | We address the problem of adversarial attacks on text classification, which is rarely studied comparing to attacks on image classification. The challenge of this task is to generate adversarial examples that maintain lexical correctness, grammatical correctness and semantic similarity. Based on the synonyms substitutio... | e1a7ac807b116c2bb0d6b247ca68810b | 2,019 | [
"we address the problem of adversarial attacks on text classification , which is rarely studied comparing to attacks on image classification .",
"the challenge of this task is to generate adversarial examples that maintain lexical correctness , grammatical correctness and semantic similarity .",
"based on the s... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "adversarial attacks on text classification",
... | [
"we",
"address",
"the",
"problem",
"of",
"adversarial",
"attacks",
"on",
"text",
"classification",
",",
"which",
"is",
"rarely",
"studied",
"comparing",
"to",
"attacks",
"on",
"image",
"classification",
".",
"the",
"challenge",
"of",
"this",
"task",
"is",
"to"... |
ACL | Dynamically Composing Domain-Data Selection with Clean-Data Selection by “Co-Curricular Learning” for Neural Machine Translation | Noise and domain are important aspects of data quality for neural machine translation. Existing research focus separately on domain-data selection, clean-data selection, or their static combination, leaving the dynamic interaction across them not explicitly examined. This paper introduces a “co-curricular learning” met... | 719bfaae69c9ec0b8a66f3ff468c456c | 2,019 | [
"noise and domain are important aspects of data quality for neural machine translation .",
"existing research focus separately on domain - data selection , clean - data selection , or their static combination , leaving the dynamic interaction across them not explicitly examined .",
"this paper introduces a “ co... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "noise and domain",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"noise",
"and",
"domain"
],
"offsets": [
0,
1,
2
]
... | [
"noise",
"and",
"domain",
"are",
"important",
"aspects",
"of",
"data",
"quality",
"for",
"neural",
"machine",
"translation",
".",
"existing",
"research",
"focus",
"separately",
"on",
"domain",
"-",
"data",
"selection",
",",
"clean",
"-",
"data",
"selection",
"... |
ACL | Learning Latent Trees with Stochastic Perturbations and Differentiable Dynamic Programming | We treat projective dependency trees as latent variables in our probabilistic model and induce them in such a way as to be beneficial for a downstream task, without relying on any direct tree supervision. Our approach relies on Gumbel perturbations and differentiable dynamic programming. Unlike previous approaches to l... | a5dd7200a8170c4720eccdb28694959a | 2,019 | [
"we treat projective dependency trees as latent variables in our probabilistic model and induce them in such a way as to be beneficial for a downstream task , without relying on any direct tree supervision .",
"our approach relies on gumbel perturbations and differentiable dynamic programming .",
"unlike previo... | [
{
"event_type": "MDS",
"arguments": [
{
"text": "projective dependency trees",
"nugget_type": "FEA",
"argument_type": "BaseComponent",
"tokens": [
"projective",
"dependency",
"trees"
],
"offsets": [
2,
3,... | [
"we",
"treat",
"projective",
"dependency",
"trees",
"as",
"latent",
"variables",
"in",
"our",
"probabilistic",
"model",
"and",
"induce",
"them",
"in",
"such",
"a",
"way",
"as",
"to",
"be",
"beneficial",
"for",
"a",
"downstream",
"task",
",",
"without",
"rely... |
ACL | A Taxonomy of Empathetic Questions in Social Dialogs | Effective question-asking is a crucial component of a successful conversational chatbot. It could help the bots manifest empathy and render the interaction more engaging by demonstrating attention to the speaker’s emotions. However, current dialog generation approaches do not model this subtle emotion regulation techni... | d57ffa752f4aeab2d10b83e6a835f732 | 2,022 | [
"effective question - asking is a crucial component of a successful conversational chatbot .",
"it could help the bots manifest empathy and render the interaction more engaging by demonstrating attention to the speaker ’ s emotions .",
"however , current dialog generation approaches do not model this subtle emo... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "effective question - asking",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"effective",
"question",
"-",
"asking"
],
"offsets": [
0,
... | [
"effective",
"question",
"-",
"asking",
"is",
"a",
"crucial",
"component",
"of",
"a",
"successful",
"conversational",
"chatbot",
".",
"it",
"could",
"help",
"the",
"bots",
"manifest",
"empathy",
"and",
"render",
"the",
"interaction",
"more",
"engaging",
"by",
... |
ACL | Tackling Fake News Detection by Continually Improving Social Context Representations using Graph Neural Networks | Easy access, variety of content, and fast widespread interactions are some of the reasons making social media increasingly popular. However, this rise has also enabled the propagation of fake news, text published by news sources with an intent to spread misinformation and sway beliefs. Detecting it is an important and ... | a6b96915a5725c974a22b842afab3516 | 2,022 | [
"easy access , variety of content , and fast widespread interactions are some of the reasons making social media increasingly popular .",
"however , this rise has also enabled the propagation of fake news , text published by news sources with an intent to spread misinformation and sway beliefs .",
"detecting it... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "fake news",
"nugget_type": "FEA",
"argument_type": "Target",
"tokens": [
"fake",
"news"
],
"offsets": [
32,
33
]
}
],
"trigger": {
"text"... | [
"easy",
"access",
",",
"variety",
"of",
"content",
",",
"and",
"fast",
"widespread",
"interactions",
"are",
"some",
"of",
"the",
"reasons",
"making",
"social",
"media",
"increasingly",
"popular",
".",
"however",
",",
"this",
"rise",
"has",
"also",
"enabled",
... |
ACL | (Un)solving Morphological Inflection: Lemma Overlap Artificially Inflates Models’ Performance | In the domain of Morphology, Inflection is a fundamental and important task that gained a lot of traction in recent years, mostly via SIGMORPHON’s shared-tasks. With average accuracy above 0.9 over the scores of all languages, the task is considered mostly solved using relatively generic neural seq2seq models, even with... | 20098187eb028561d43bcb716e15033b | 2,022 | [
"in the domain of morphology , inflection is a fundamental and important task that gained a lot of traction in recent years , mostly via sigmorphon ’ s shared - tasks .",
"with average accuracy above 0 . 9 over the scores of all languages , the task is considered mostly solved using relatively generic neural seq2... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "inflection",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"inflection"
],
"offsets": [
6
]
}
],
"trigger": {
"text": "task",
"tokens": ... | [
"in",
"the",
"domain",
"of",
"morphology",
",",
"inflection",
"is",
"a",
"fundamental",
"and",
"important",
"task",
"that",
"gained",
"a",
"lot",
"of",
"traction",
"in",
"recent",
"years",
",",
"mostly",
"via",
"sigmorphon",
"’",
"s",
"shared",
"-",
"tasks... |
ACL | CDL: Curriculum Dual Learning for Emotion-Controllable Response Generation | Emotion-controllable response generation is an attractive and valuable task that aims to make open-domain conversations more empathetic and engaging. Existing methods mainly enhance the emotion expression by adding regularization terms to standard cross-entropy loss and thus influence the training process. However, due... | d8e5e519c24bd548329c71baab345392 | 2,020 | [
"emotion - controllable response generation is an attractive and valuable task that aims to make open - domain conversations more empathetic and engaging .",
"existing methods mainly enhance the emotion expression by adding regularization terms to standard cross - entropy loss and thus influence the training proc... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "emotion - controllable response generation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"emotion",
"-",
"controllable",
"response",
"generation"
... | [
"emotion",
"-",
"controllable",
"response",
"generation",
"is",
"an",
"attractive",
"and",
"valuable",
"task",
"that",
"aims",
"to",
"make",
"open",
"-",
"domain",
"conversations",
"more",
"empathetic",
"and",
"engaging",
".",
"existing",
"methods",
"mainly",
"e... |
ACL | Leveraging Explicit Lexico-logical Alignments in Text-to-SQL Parsing | Text-to-SQL aims to parse natural language questions into SQL queries, which is valuable in providing an easy interface to access large databases. Previous work has observed that leveraging lexico-logical alignments is very helpful to improve parsing performance. However, current attention-based approaches can only mod... | 063b57df42f3c12c09a2e32c238deb3f | 2,022 | [
"text - to - sql aims to parse natural language questions into sql queries , which is valuable in providing an easy interface to access large databases .",
"previous work has observed that leveraging lexico - logical alignments is very helpful to improve parsing performance .",
"however , current attention - ba... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
72
]
},
{
"text": "approach",
"nugget_type": "APP",
"... | [
"text",
"-",
"to",
"-",
"sql",
"aims",
"to",
"parse",
"natural",
"language",
"questions",
"into",
"sql",
"queries",
",",
"which",
"is",
"valuable",
"in",
"providing",
"an",
"easy",
"interface",
"to",
"access",
"large",
"databases",
".",
"previous",
"work",
... |
ACL | Encoder-Decoder Models Can Benefit from Pre-trained Masked Language Models in Grammatical Error Correction | This paper investigates how to effectively incorporate a pre-trained masked language model (MLM), such as BERT, into an encoder-decoder (EncDec) model for grammatical error correction (GEC). The answer to this question is not as straightforward as one might expect because the previous common methods for incorporating a... | a3ef8848da0f5fe48f3e939a2bc335be | 2,020 | [
"this paper investigates how to effectively incorporate a pre - trained masked language model ( mlm ) , such as bert , into an encoder - decoder ( encdec ) model for grammatical error correction ( gec ) .",
"the answer to this question is not as straightforward as one might expect because the previous common meth... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "pre - trained masked language model",
"nugget_type": "APP",
"argument_type": "Content",
"tokens": [
"pre",
"-",
"trained",
"masked",
"language",
"model"
... | [
"this",
"paper",
"investigates",
"how",
"to",
"effectively",
"incorporate",
"a",
"pre",
"-",
"trained",
"masked",
"language",
"model",
"(",
"mlm",
")",
",",
"such",
"as",
"bert",
",",
"into",
"an",
"encoder",
"-",
"decoder",
"(",
"encdec",
")",
"model",
... |
ACL | Boosting Dialog Response Generation | Neural models have become one of the most important approaches to dialog response generation. However, they still tend to generate the most common and generic responses in the corpus all the time. To address this problem, we designed an iterative training process and ensemble method based on boosting. We combined our m... | a0505f666443decbf91a8e67a1ff8f2d | 2,019 | [
"neural models have become one of the most important approaches to dialog response generation .",
"however , they still tend to generate the most common and generic responses in the corpus all the time .",
"to address this problem , we designed an iterative training process and ensemble method based on boosting... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "dialog response generation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"dialog",
"response",
"generation"
],
"offsets": [
11,
12,
... | [
"neural",
"models",
"have",
"become",
"one",
"of",
"the",
"most",
"important",
"approaches",
"to",
"dialog",
"response",
"generation",
".",
"however",
",",
"they",
"still",
"tend",
"to",
"generate",
"the",
"most",
"common",
"and",
"generic",
"responses",
"in",... |
ACL | “You might think about slightly revising the title”: Identifying Hedges in Peer-tutoring Interactions | Hedges have an important role in the management of rapport. In peer-tutoring, they are notably used by tutors in dyads experiencing low rapport to tone down the impact of instructions and negative feedback. Pursuing the objective of building a tutoring agent that manages rapport with teenagers in order to improve learni... | 9f4d100977f2e7a5de80f9aed5f77911 | 2,022 | [
"hedges have an important role in the management of rapport .",
"in peer - tutoring , they are notably used by tutors in dyads experiencing low rapport to tone down the impact of instructions and negative feedback .",
"pursuing the objective of building a tutoring agent that manages rapport with teenagers in or... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "hedges",
"nugget_type": "FEA",
"argument_type": "Target",
"tokens": [
"hedges"
],
"offsets": [
0
]
}
],
"trigger": {
"text": "role",
"tokens": [
... | [
"hedges",
"have",
"an",
"important",
"role",
"in",
"the",
"management",
"of",
"rapport",
".",
"in",
"peer",
"-",
"tutoring",
",",
"they",
"are",
"notably",
"used",
"by",
"tutors",
"in",
"dyads",
"experiencing",
"low",
"rapport",
"to",
"tone",
"down",
"the"... |
ACL | Towards Table-to-Text Generation with Numerical Reasoning | Recent neural text generation models have shown significant improvement in generating descriptive text from structured data such as table formats. One of the remaining important challenges is generating more analytical descriptions that can be inferred from facts in a data source. The use of a template-based generator ... | 82c92a15f06d69fb99bbfb85e0fd0858 | 2,021 | [
"recent neural text generation models have shown significant improvement in generating descriptive text from structured data such as table formats .",
"one of the remaining important challenges is generating more analytical descriptions that can be inferred from facts in a data source .",
"the use of a template... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural text generation models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"neural",
"text",
"generation",
"models"
],
"offsets": [
1,
... | [
"recent",
"neural",
"text",
"generation",
"models",
"have",
"shown",
"significant",
"improvement",
"in",
"generating",
"descriptive",
"text",
"from",
"structured",
"data",
"such",
"as",
"table",
"formats",
".",
"one",
"of",
"the",
"remaining",
"important",
"challe... |
ACL | Saliency as Evidence: Event Detection with Trigger Saliency Attribution | Event detection (ED) is a critical subtask of event extraction that seeks to identify event triggers of certain types in texts. Despite significant advances in ED, existing methods typically follow a “one model fits all types” approach, which sees no differences between event types and often results in a quite skewed pe... | fd7d352302e27ede375851030589d5d6 | 2,022 | [
"event detection ( ed ) is a critical subtask of event extraction that seeks to identify event triggers of certain types in texts .",
"despite significant advances in ed , existing methods typically follow a “ one model fits all types ” approach , which sees no differences between event types and often results in... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "ed",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"ed"
],
"offsets": [
126
]
}
],
"trigger": {
"text": "critical subtask of event extraction"... | [
"event",
"detection",
"(",
"ed",
")",
"is",
"a",
"critical",
"subtask",
"of",
"event",
"extraction",
"that",
"seeks",
"to",
"identify",
"event",
"triggers",
"of",
"certain",
"types",
"in",
"texts",
".",
"despite",
"significant",
"advances",
"in",
"ed",
",",
... |
ACL | Continual Prompt Tuning for Dialog State Tracking | A desirable dialog system should be able to continually learn new skills without forgetting old ones, and thereby adapt to new domains or tasks in its life cycle. However, continually training a model often leads to a well-known catastrophic forgetting issue. In this paper, we present Continual Prompt Tuning, a paramet... | 8d11cdea700f973c8f3e2bbfd64df792 | 2,022 | [
"a desirable dialog system should be able to continually learn new skills without forgetting old ones , and thereby adapt to new domains or tasks in its life cycle .",
"however , continually training a model often leads to a well - known catastrophic forgetting issue .",
"in this paper , we present continual pr... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "dialog system",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"dialog",
"system"
],
"offsets": [
2,
3
]
}
],
"trigger": {
... | [
"a",
"desirable",
"dialog",
"system",
"should",
"be",
"able",
"to",
"continually",
"learn",
"new",
"skills",
"without",
"forgetting",
"old",
"ones",
",",
"and",
"thereby",
"adapt",
"to",
"new",
"domains",
"or",
"tasks",
"in",
"its",
"life",
"cycle",
".",
"... |
ACL | Transformers in the loop: Polarity in neural models of language | Representation of linguistic phenomena in computational language models is typically assessed against the predictions of existing linguistic theories of these phenomena. Using the notion of polarity as a case study, we show that this is not always the most adequate set-up. We probe polarity via so-called ‘negative pola... | 9e29fb6276af6ec6ce4d766f3b23c5f7 | 2,022 | [
"representation of linguistic phenomena in computational language models is typically assessed against the predictions of existing linguistic theories of these phenomena .",
"using the notion of polarity as a case study , we show that this is not always the most adequate set - up .",
"we probe polarity via so -... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "computational language models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"computational",
"language",
"models"
],
"offsets": [
5,
6,
... | [
"representation",
"of",
"linguistic",
"phenomena",
"in",
"computational",
"language",
"models",
"is",
"typically",
"assessed",
"against",
"the",
"predictions",
"of",
"existing",
"linguistic",
"theories",
"of",
"these",
"phenomena",
".",
"using",
"the",
"notion",
"of... |
ACL | Semantic Parsing with Dual Learning | Semantic parsing converts natural language queries into structured logical forms. The lack of training data is still one of the most serious problems in this area. In this work, we develop a semantic parsing framework with the dual learning algorithm, which enables a semantic parser to make full use of data (labeled an... | 921f090fb4e2eae5993201c9b6d44ccd | 2,019 | [
"semantic parsing converts natural language queries into structured logical forms .",
"the lack of training data is still one of the most serious problems in this area .",
"in this work , we develop a semantic parsing framework with the dual learning algorithm , which enables a semantic parser to make full use ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "semantic parsing",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"semantic",
"parsing"
],
"offsets": [
0,
1
]
}
],
"trigger": {
... | [
"semantic",
"parsing",
"converts",
"natural",
"language",
"queries",
"into",
"structured",
"logical",
"forms",
".",
"the",
"lack",
"of",
"training",
"data",
"is",
"still",
"one",
"of",
"the",
"most",
"serious",
"problems",
"in",
"this",
"area",
".",
"in",
"t... |
ACL | Label-Specific Dual Graph Neural Network for Multi-Label Text Classification | Multi-label text classification is one of the fundamental tasks in natural language processing. Previous studies have difficulties to distinguish similar labels well because they learn the same document representations for different labels, that is they do not explicitly extract label-specific semantic components from ... | d758f7070414641c9ca4bfe88155388f | 2,021 | [
"multi - label text classification is one of the fundamental tasks in natural language processing .",
"previous studies have difficulties to distinguish similar labels well because they learn the same document representations for different labels , that is they do not explicitly extract label - specific semantic ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "multi - label text classification",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"multi",
"-",
"label",
"text",
"classification"
],
"offset... | [
"multi",
"-",
"label",
"text",
"classification",
"is",
"one",
"of",
"the",
"fundamental",
"tasks",
"in",
"natural",
"language",
"processing",
".",
"previous",
"studies",
"have",
"difficulties",
"to",
"distinguish",
"similar",
"labels",
"well",
"because",
"they",
... |
ACL | Learning Syntactic Dense Embedding with Correlation Graph for Automatic Readability Assessment | Deep learning models for automatic readability assessment generally discard linguistic features traditionally used in machine learning models for the task. We propose to incorporate linguistic features into neural network models by learning syntactic dense embeddings based on linguistic features. To cope with the relat... | 9e348fb5d787fd8bf3acd10f46629f2f | 2,021 | [
"deep learning models for automatic readability assessment generally discard linguistic features traditionally used in machine learning models for the task .",
"we propose to incorporate linguistic features into neural network models by learning syntactic dense embeddings based on linguistic features .",
"to co... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "deep learning models for automatic readability assessment",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"deep",
"learning",
"models",
"for",
"automatic",
... | [
"deep",
"learning",
"models",
"for",
"automatic",
"readability",
"assessment",
"generally",
"discard",
"linguistic",
"features",
"traditionally",
"used",
"in",
"machine",
"learning",
"models",
"for",
"the",
"task",
".",
"we",
"propose",
"to",
"incorporate",
"linguis... |
ACL | Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset | Handing in a paper or exercise and merely receiving “bad” or “incorrect” as feedback is not very helpful when the goal is to improve. Unfortunately, this is currently the kind of feedback given by Automatic Short Answer Grading (ASAG) systems. One of the reasons for this is a lack of content-focused elaborated feedback... | b8ba7a36b50b3ac6dca06b887d68eb3a | 2,022 | [
"handing in a paper or exercise and merely receiving “ bad ” or “ incorrect ” as feedback is not very helpful when the goal is to improve .",
"unfortunately , this is currently the kind of feedback given by automatic short answer grading ( asag ) systems .",
"one of the reasons for this is a lack of content - f... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "automatic short answer grading systems",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"automatic",
"short",
"answer",
"grading",
"systems"
],
... | [
"handing",
"in",
"a",
"paper",
"or",
"exercise",
"and",
"merely",
"receiving",
"“",
"bad",
"”",
"or",
"“",
"incorrect",
"”",
"as",
"feedback",
"is",
"not",
"very",
"helpful",
"when",
"the",
"goal",
"is",
"to",
"improve",
".",
"unfortunately",
",",
"this"... |
ACL | Demographics Should Not Be the Reason of Toxicity: Mitigating Discrimination in Text Classifications with Instance Weighting | With the recent proliferation of the use of text classifications, researchers have found that there are certain unintended biases in text classification datasets. For example, texts containing some demographic identity-terms (e.g., “gay”, “black”) are more likely to be abusive in existing abusive language detection dat... | c0a45fc221a7e6d7c3c34d5605b321cf | 2,020 | [
"with the recent proliferation of the use of text classifications , researchers have found that there are certain unintended biases in text classification datasets .",
"for example , texts containing some demographic identity - terms ( e . g . , “ gay ” , “ black ” ) are more likely to be abusive in existing abus... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "text classifications",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"text",
"classifications"
],
"offsets": [
8,
9
]
}
],
"trig... | [
"with",
"the",
"recent",
"proliferation",
"of",
"the",
"use",
"of",
"text",
"classifications",
",",
"researchers",
"have",
"found",
"that",
"there",
"are",
"certain",
"unintended",
"biases",
"in",
"text",
"classification",
"datasets",
".",
"for",
"example",
",",... |
ACL | Generating Query Focused Summaries from Query-Free Resources | The availability of large-scale datasets has driven the development of neural models that create generic summaries from single or multiple documents. In this work we consider query focused summarization (QFS), a task for which training data in the form of queries, documents, and summaries is not readily available. We p... | 8833de78fc23567cd64f53463d13f556 | 2,021 | [
"the availability of large - scale datasets has driven the development of neural models that create generic summaries from single or multiple documents .",
"in this work we consider query focused summarization ( qfs ) , a task for which training data in the form of queries , documents , and summaries is not readi... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"neural",
"models"
],
"offsets": [
12,
13
]
}
],
"trigger": {
... | [
"the",
"availability",
"of",
"large",
"-",
"scale",
"datasets",
"has",
"driven",
"the",
"development",
"of",
"neural",
"models",
"that",
"create",
"generic",
"summaries",
"from",
"single",
"or",
"multiple",
"documents",
".",
"in",
"this",
"work",
"we",
"consid... |
ACL | Generating Long and Informative Reviews with Aspect-Aware Coarse-to-Fine Decoding | Generating long and informative review text is a challenging natural language generation task. Previous work focuses on word-level generation, neglecting the importance of topical and syntactic characteristics from natural languages. In this paper, we propose a novel review generation model by characterizing an elabora... | 605ac5b58f409024a1791c000910702d | 2,019 | [
"generating long and informative review text is a challenging natural language generation task .",
"previous work focuses on word - level generation , neglecting the importance of topical and syntactic characteristics from natural languages .",
"in this paper , we propose a novel review generation model by char... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "generating long and informative review text",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"generating",
"long",
"and",
"informative",
"review",
... | [
"generating",
"long",
"and",
"informative",
"review",
"text",
"is",
"a",
"challenging",
"natural",
"language",
"generation",
"task",
".",
"previous",
"work",
"focuses",
"on",
"word",
"-",
"level",
"generation",
",",
"neglecting",
"the",
"importance",
"of",
"topi... |
ACL | Determinantal Beam Search | Beam search is a go-to strategy for decoding neural sequence models. The algorithm can naturally be viewed as a subset optimization problem, albeit one where the corresponding set function does not reflect interactions between candidates. Empirically, this leads to sets often exhibiting high overlap, e.g., strings may ... | 8e8c3cb2e51cc78915712b141bbde4e0 | 2,021 | [
"beam search is a go - to strategy for decoding neural sequence models .",
"the algorithm can naturally be viewed as a subset optimization problem , albeit one where the corresponding set function does not reflect interactions between candidates .",
"empirically , this leads to sets often exhibiting high overla... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "beam search",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"beam",
"search"
],
"offsets": [
0,
1
]
}
],
"trigger": {
"tex... | [
"beam",
"search",
"is",
"a",
"go",
"-",
"to",
"strategy",
"for",
"decoding",
"neural",
"sequence",
"models",
".",
"the",
"algorithm",
"can",
"naturally",
"be",
"viewed",
"as",
"a",
"subset",
"optimization",
"problem",
",",
"albeit",
"one",
"where",
"the",
... |
ACL | When did you become so smart, oh wise one?! Sarcasm Explanation in Multi-modal Multi-party Dialogues | Indirect speech such as sarcasm achieves a constellation of discourse goals in human communication. While the indirectness of figurative language warrants speakers to achieve certain pragmatic goals, it is challenging for AI agents to comprehend such idiosyncrasies of human communication. Though sarcasm identification ... | 39f9cb5b1640be1ad263f3b62220197e | 2,022 | [
"indirect speech such as sarcasm achieves a constellation of discourse goals in human communication .",
"while the indirectness of figurative language warrants speakers to achieve certain pragmatic goals , it is challenging for ai agents to comprehend such idiosyncrasies of human communication .",
"though sarca... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "indirect speech",
"nugget_type": "FEA",
"argument_type": "Target",
"tokens": [
"indirect",
"speech"
],
"offsets": [
0,
1
]
}
],
"trigger": {
... | [
"indirect",
"speech",
"such",
"as",
"sarcasm",
"achieves",
"a",
"constellation",
"of",
"discourse",
"goals",
"in",
"human",
"communication",
".",
"while",
"the",
"indirectness",
"of",
"figurative",
"language",
"warrants",
"speakers",
"to",
"achieve",
"certain",
"p... |
ACL | Computational Historical Linguistics and Language Diversity in South Asia | South Asia is home to a plethora of languages, many of which severely lack access to new language technologies. This linguistic diversity also results in a research environment conducive to the study of comparative, contact, and historical linguistics–fields which necessitate the gathering of extensive data from many l... | 3572fbcffb0c6e54251e3fe4b83f6ccb | 2,022 | [
"south asia is home to a plethora of languages , many of which severely lack access to new language technologies .",
"this linguistic diversity also results in a research environment conducive to the study of comparative , contact , and historical linguistics – fields which necessitate the gathering of extensive ... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
94
]
},
{
"text": "recent developments",
"nugget_type": "TA... | [
"south",
"asia",
"is",
"home",
"to",
"a",
"plethora",
"of",
"languages",
",",
"many",
"of",
"which",
"severely",
"lack",
"access",
"to",
"new",
"language",
"technologies",
".",
"this",
"linguistic",
"diversity",
"also",
"results",
"in",
"a",
"research",
"env... |
ACL | Phrase-aware Unsupervised Constituency Parsing | Recent studies have achieved inspiring success in unsupervised grammar induction using masked language modeling (MLM) as the proxy task. Despite their high accuracy in identifying low-level structures, prior arts tend to struggle in capturing high-level structures like clauses, since the MLM task usually only requires ... | 7cab1b76b9b14c7801b03ca1e188188a | 2,022 | [
"recent studies have achieved inspiring success in unsupervised grammar induction using masked language modeling ( mlm ) as the proxy task .",
"despite their high accuracy in identifying low - level structures , prior arts tend to struggle in capturing high - level structures like clauses , since the mlm task usu... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "unsupervised grammar induction",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"unsupervised",
"grammar",
"induction"
],
"offsets": [
7,
8,
... | [
"recent",
"studies",
"have",
"achieved",
"inspiring",
"success",
"in",
"unsupervised",
"grammar",
"induction",
"using",
"masked",
"language",
"modeling",
"(",
"mlm",
")",
"as",
"the",
"proxy",
"task",
".",
"despite",
"their",
"high",
"accuracy",
"in",
"identifyi... |
ACL | Optimal Transport-based Alignment of Learned Character Representations for String Similarity | String similarity models are vital for record linkage, entity resolution, and search. In this work, we present STANCE–a learned model for computing the similarity of two strings. Our approach encodes the characters of each string, aligns the encodings using Sinkhorn Iteration (alignment is posed as an instance of optim... | 5946c6a585aa7390f4b379b956deea42 | 2,019 | [
"string similarity models are vital for record linkage , entity resolution , and search .",
"in this work , we present stance – a learned model for computing the similarity of two strings .",
"our approach encodes the characters of each string , aligns the encodings using sinkhorn iteration ( alignment is posed... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "string similarity models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"string",
"similarity",
"models"
],
"offsets": [
0,
1,
2
... | [
"string",
"similarity",
"models",
"are",
"vital",
"for",
"record",
"linkage",
",",
"entity",
"resolution",
",",
"and",
"search",
".",
"in",
"this",
"work",
",",
"we",
"present",
"stance",
"–",
"a",
"learned",
"model",
"for",
"computing",
"the",
"similarity",... |
ACL | Improving Multi-turn Dialogue Modelling with Utterance ReWriter | Recent research has achieved impressive results in single-turn dialogue modelling. In the multi-turn setting, however, current models are still far from satisfactory. One major challenge is the frequently occurred coreference and information omission in our daily conversation, making it hard for machines to understand ... | 6ce7d50ac044cfc66b6c9ce17bb184d1 | 2,019 | [
"recent research has achieved impressive results in single - turn dialogue modelling .",
"in the multi - turn setting , however , current models are still far from satisfactory .",
"one major challenge is the frequently occurred coreference and information omission in our daily conversation , making it hard for... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "single - turn dialogue modelling",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"single",
"-",
"turn",
"dialogue",
"modelling"
],
"offsets"... | [
"recent",
"research",
"has",
"achieved",
"impressive",
"results",
"in",
"single",
"-",
"turn",
"dialogue",
"modelling",
".",
"in",
"the",
"multi",
"-",
"turn",
"setting",
",",
"however",
",",
"current",
"models",
"are",
"still",
"far",
"from",
"satisfactory",
... |
ACL | Adapting Coreference Resolution Models through Active Learning | Neural coreference resolution models trained on one dataset may not transfer to new, low-resource domains. Active learning mitigates this problem by sampling a small subset of data for annotators to label. While active learning is well-defined for classification tasks, its application to coreference resolution is neith... | 8f0a59c3410a807e43542955a85855fa | 2,022 | [
"neural coreference resolution models trained on one dataset may not transfer to new , low - resource domains .",
"active learning mitigates this problem by sampling a small subset of data for annotators to label .",
"while active learning is well - defined for classification tasks , its application to corefere... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural coreference resolution models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"neural",
"coreference",
"resolution",
"models"
],
"offsets": [
... | [
"neural",
"coreference",
"resolution",
"models",
"trained",
"on",
"one",
"dataset",
"may",
"not",
"transfer",
"to",
"new",
",",
"low",
"-",
"resource",
"domains",
".",
"active",
"learning",
"mitigates",
"this",
"problem",
"by",
"sampling",
"a",
"small",
"subse... |
ACL | Improved Speech Representations with Multi-Target Autoregressive Predictive Coding | Training objectives based on predictive coding have recently been shown to be very effective at learning meaningful representations from unlabeled speech. One example is Autoregressive Predictive Coding (Chung et al., 2019), which trains an autoregressive RNN to generate an unseen future frame given a context such as r... | 1761691ed1464144d310d13190355bd0 | 2,020 | [
"training objectives based on predictive coding have recently been shown to be very effective at learning meaningful representations from unlabeled speech .",
"one example is autoregressive predictive coding ( chung et al . , 2019 ) , which trains an autoregressive rnn to generate an unseen future frame given a c... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "predictive coding",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"predictive",
"coding"
],
"offsets": [
4,
5
]
}
],
"trigger": ... | [
"training",
"objectives",
"based",
"on",
"predictive",
"coding",
"have",
"recently",
"been",
"shown",
"to",
"be",
"very",
"effective",
"at",
"learning",
"meaningful",
"representations",
"from",
"unlabeled",
"speech",
".",
"one",
"example",
"is",
"autoregressive",
... |
ACL | Heuristic Authorship Obfuscation | Authorship verification is the task of determining whether two texts were written by the same author. We deal with the adversary task, called authorship obfuscation: preventing verification by altering a to-be-obfuscated text. Our new obfuscation approach (1) models writing style difference as the Jensen-Shannon distan... | 3f5dc8e16084466ba75499ef50c95d0f | 2,019 | [
"authorship verification is the task of determining whether two texts were written by the same author .",
"we deal with the adversary task , called authorship obfuscation : preventing verification by altering a to - be - obfuscated text .",
"our new obfuscation approach ( 1 ) models writing style difference as ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "authorship verification",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"authorship",
"verification"
],
"offsets": [
0,
1
]
}
],
... | [
"authorship",
"verification",
"is",
"the",
"task",
"of",
"determining",
"whether",
"two",
"texts",
"were",
"written",
"by",
"the",
"same",
"author",
".",
"we",
"deal",
"with",
"the",
"adversary",
"task",
",",
"called",
"authorship",
"obfuscation",
":",
"preven... |
ACL | Identification of Tasks, Datasets, Evaluation Metrics, and Numeric Scores for Scientific Leaderboards Construction | While the fast-paced inception of novel tasks and new datasets helps foster active research in a community towards interesting directions, keeping track of the abundance of research activity in different areas on different datasets is likely to become increasingly difficult. The community could greatly benefit from an ... | 114f8d7ac3a82211119ff741febf27ca | 2,019 | [
"while the fast - paced inception of novel tasks and new datasets helps foster active research in a community towards interesting directions , keeping track of the abundance of research activity in different areas on different datasets is likely to become increasingly difficult .",
"the community could greatly be... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "increasingly difficult",
"nugget_type": "WEA",
"argument_type": "Fault",
"tokens": [
"increasingly",
"difficult"
],
"offsets": [
41,
42
]
},
{
... | [
"while",
"the",
"fast",
"-",
"paced",
"inception",
"of",
"novel",
"tasks",
"and",
"new",
"datasets",
"helps",
"foster",
"active",
"research",
"in",
"a",
"community",
"towards",
"interesting",
"directions",
",",
"keeping",
"track",
"of",
"the",
"abundance",
"of... |
ACL | It Takes Two to Lie: One to Lie, and One to Listen | Trust is implicit in many online text conversations—striking up new friendships, or asking for tech support. But trust can be betrayed through deception. We study the language and dynamics of deception in the negotiation-based game Diplomacy, where seven players compete for world domination by forging and breaking alli... | 60fdb97e1df2b99db8a00ff4d8ce3c1b | 2,020 | [
"trust is implicit in many online text conversations — striking up new friendships , or asking for tech support .",
"but trust can be betrayed through deception .",
"we study the language and dynamics of deception in the negotiation - based game diplomacy , where seven players compete for world domination by fo... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
28
]
},
{
"text": "language and dynamics of deception in the negoti... | [
"trust",
"is",
"implicit",
"in",
"many",
"online",
"text",
"conversations",
"—",
"striking",
"up",
"new",
"friendships",
",",
"or",
"asking",
"for",
"tech",
"support",
".",
"but",
"trust",
"can",
"be",
"betrayed",
"through",
"deception",
".",
"we",
"study",
... |
ACL | Reasoning with Latent Structure Refinement for Document-Level Relation Extraction | Document-level relation extraction requires integrating information within and across multiple sentences of a document and capturing complex interactions between inter-sentence entities. However, effective aggregation of relevant information in the document remains a challenging research question. Existing approaches c... | d950be690cee2122af06d2d33698da80 | 2,020 | [
"document - level relation extraction requires integrating information within and across multiple sentences of a document and capturing complex interactions between inter - sentence entities .",
"however , effective aggregation of relevant information in the document remains a challenging research question .",
... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "document - level relation extraction",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"document",
"-",
"level",
"relation",
"extraction"
],
"... | [
"document",
"-",
"level",
"relation",
"extraction",
"requires",
"integrating",
"information",
"within",
"and",
"across",
"multiple",
"sentences",
"of",
"a",
"document",
"and",
"capturing",
"complex",
"interactions",
"between",
"inter",
"-",
"sentence",
"entities",
"... |
ACL | Distributionally Robust Finetuning BERT for Covariate Drift in Spoken Language Understanding | In this study, we investigate robustness against covariate drift in spoken language understanding (SLU). Covariate drift can occur in SLUwhen there is a drift between training and testing regarding what users request or how they request it. To study this we propose a method that exploits natural variations in data to c... | dbca1c9f27bc11a71c259d97d5d41d23 | 2,022 | [
"in this study , we investigate robustness against covariate drift in spoken language understanding ( slu ) .",
"covariate drift can occur in sluwhen there is a drift between training and testing regarding what users request or how they request it .",
"to study this we propose a method that exploits natural var... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
4
]
},
{
"text": "robustness against covariate drift",
"nug... | [
"in",
"this",
"study",
",",
"we",
"investigate",
"robustness",
"against",
"covariate",
"drift",
"in",
"spoken",
"language",
"understanding",
"(",
"slu",
")",
".",
"covariate",
"drift",
"can",
"occur",
"in",
"sluwhen",
"there",
"is",
"a",
"drift",
"between",
... |
ACL | Generalizing Natural Language Analysis through Span-relation Representations | Natural language processing covers a wide variety of tasks predicting syntax, semantics, and information content, and usually each type of output is generated with specially designed architectures. In this paper, we provide the simple insight that a great variety of tasks can be represented in a single unified format c... | 587df991e9bf88c650a072b452b1c7cd | 2,020 | [
"natural language processing covers a wide variety of tasks predicting syntax , semantics , and information content , and usually each type of output is generated with specially designed architectures .",
"in this paper , we provide the simple insight that a great variety of tasks can be represented in a single u... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "natural language processing",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"natural",
"language",
"processing"
],
"offsets": [
0,
1,
... | [
"natural",
"language",
"processing",
"covers",
"a",
"wide",
"variety",
"of",
"tasks",
"predicting",
"syntax",
",",
"semantics",
",",
"and",
"information",
"content",
",",
"and",
"usually",
"each",
"type",
"of",
"output",
"is",
"generated",
"with",
"specially",
... |
ACL | Probing Structured Pruning on Multilingual Pre-trained Models: Settings, Algorithms, and Efficiency | Structured pruning has been extensively studied on monolingual pre-trained language models and is yet to be fully evaluated on their multilingual counterparts. This work investigates three aspects of structured pruning on multilingual pre-trained language models: settings, algorithms, and efficiency. Experiments on nin... | e3780d9717b27779393b241781b519d0 | 2,022 | [
"structured pruning has been extensively studied on monolingual pre - trained language models and is yet to be fully evaluated on their multilingual counterparts .",
"this work investigates three aspects of structured pruning on multilingual pre - trained language models : settings , algorithms , and efficiency .... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "structured pruning",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"structured",
"pruning"
],
"offsets": [
0,
1
]
}
],
"trigger"... | [
"structured",
"pruning",
"has",
"been",
"extensively",
"studied",
"on",
"monolingual",
"pre",
"-",
"trained",
"language",
"models",
"and",
"is",
"yet",
"to",
"be",
"fully",
"evaluated",
"on",
"their",
"multilingual",
"counterparts",
".",
"this",
"work",
"investi... |
ACL | A Study of Non-autoregressive Model for Sequence Generation | Non-autoregressive (NAR) models generate all the tokens of a sequence in parallel, resulting in faster generation speed compared to their autoregressive (AR) counterparts but at the cost of lower accuracy. Different techniques including knowledge distillation and source-target alignment have been proposed to bridge the... | fbdc290069c8882ae38de0a7dd9384ce | 2,020 | [
"non - autoregressive ( nar ) models generate all the tokens of a sequence in parallel , resulting in faster generation speed compared to their autoregressive ( ar ) counterparts but at the cost of lower accuracy .",
"different techniques including knowledge distillation and source - target alignment have been pr... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
220
]
},
{
"text": "analysis model",
"nugget_type": "APP",
... | [
"non",
"-",
"autoregressive",
"(",
"nar",
")",
"models",
"generate",
"all",
"the",
"tokens",
"of",
"a",
"sequence",
"in",
"parallel",
",",
"resulting",
"in",
"faster",
"generation",
"speed",
"compared",
"to",
"their",
"autoregressive",
"(",
"ar",
")",
"count... |
ACL | Unsupervised Joint Training of Bilingual Word Embeddings | State-of-the-art methods for unsupervised bilingual word embeddings (BWE) train a mapping function that maps pre-trained monolingual word embeddings into a bilingual space. Despite its remarkable results, unsupervised mapping is also well-known to be limited by the original dissimilarity between the word embedding spac... | 31db3b336313b3a424d634e53f82fa61 | 2,019 | [
"state - of - the - art methods for unsupervised bilingual word embeddings ( bwe ) train a mapping function that maps pre - trained monolingual word embeddings into a bilingual space .",
"despite its remarkable results , unsupervised mapping is also well - known to be limited by the original dissimilarity between... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "unsupervised bilingual word embeddings",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"unsupervised",
"bilingual",
"word",
"embeddings"
],
"offsets":... | [
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"methods",
"for",
"unsupervised",
"bilingual",
"word",
"embeddings",
"(",
"bwe",
")",
"train",
"a",
"mapping",
"function",
"that",
"maps",
"pre",
"-",
"trained",
"monolingual",
"word",
"embeddings",
"into",
"a",
... |
ACL | End-to-End Training of Neural Retrievers for Open-Domain Question Answering | Recent work on training neural retrievers for open-domain question answering (OpenQA) has employed both supervised and unsupervised approaches. However, it remains unclear how unsupervised and supervised methods can be used most effectively for neural retrievers. In this work, we systematically study retriever pre-trai... | 73bd0a69824aa662463e463cca5aeff1 | 2,021 | [
"recent work on training neural retrievers for open - domain question answering ( openqa ) has employed both supervised and unsupervised approaches .",
"however , it remains unclear how unsupervised and supervised methods can be used most effectively for neural retrievers .",
"in this work , we systematically s... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "retriever pre - training",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"retriever",
"pre",
"-",
"training"
],
"offsets": [
49,
5... | [
"recent",
"work",
"on",
"training",
"neural",
"retrievers",
"for",
"open",
"-",
"domain",
"question",
"answering",
"(",
"openqa",
")",
"has",
"employed",
"both",
"supervised",
"and",
"unsupervised",
"approaches",
".",
"however",
",",
"it",
"remains",
"unclear",
... |
ACL | MPC-BERT: A Pre-Trained Language Model for Multi-Party Conversation Understanding | Recently, various neural models for multi-party conversation (MPC) have achieved impressive improvements on a variety of tasks such as addressee recognition, speaker identification and response prediction. However, these existing methods on MPC usually represent interlocutors and utterances individually and ignore the ... | a8ce2e8e2466284038f4c21195922adf | 2,021 | [
"recently , various neural models for multi - party conversation ( mpc ) have achieved impressive improvements on a variety of tasks such as addressee recognition , speaker identification and response prediction .",
"however , these existing methods on mpc usually represent interlocutors and utterances individual... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "addressee recognition",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"addressee",
"recognition"
],
"offsets": [
24,
25
]
},
{
... | [
"recently",
",",
"various",
"neural",
"models",
"for",
"multi",
"-",
"party",
"conversation",
"(",
"mpc",
")",
"have",
"achieved",
"impressive",
"improvements",
"on",
"a",
"variety",
"of",
"tasks",
"such",
"as",
"addressee",
"recognition",
",",
"speaker",
"ide... |
ACL | Bootstrapping Techniques for Polysynthetic Morphological Analysis | Polysynthetic languages have exceptionally large and sparse vocabularies, thanks to the number of morpheme slots and combinations in a word. This complexity, together with a general scarcity of written data, poses a challenge to the development of natural language technologies. To address this challenge, we offer lingu... | 4d4b70235c312056707aab52b32cd36f | 2,020 | [
"polysynthetic languages have exceptionally large and sparse vocabularies , thanks to the number of morpheme slots and combinations in a word .",
"this complexity , together with a general scarcity of written data , poses a challenge to the development of natural language technologies .",
"to address this chall... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
50
]
},
{
"text": "linguistically - informed approaches",
"nu... | [
"polysynthetic",
"languages",
"have",
"exceptionally",
"large",
"and",
"sparse",
"vocabularies",
",",
"thanks",
"to",
"the",
"number",
"of",
"morpheme",
"slots",
"and",
"combinations",
"in",
"a",
"word",
".",
"this",
"complexity",
",",
"together",
"with",
"a",
... |
ACL | Improved Language Modeling by Decoding the Past | Highly regularized LSTMs achieve impressive results on several benchmark datasets in language modeling. We propose a new regularization method based on decoding the last token in the context using the predicted distribution of the next token. This biases the model towards retaining more contextual information, in turn ... | 30f805a46913151bf933834544e588ce | 2,019 | [
"highly regularized lstms achieve impressive results on several benchmark datasets in language modeling .",
"we propose a new regularization method based on decoding the last token in the context using the predicted distribution of the next token .",
"this biases the model towards retaining more contextual info... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
14
]
},
{
"text": "regularization method",
"nugget_type": "AP... | [
"highly",
"regularized",
"lstms",
"achieve",
"impressive",
"results",
"on",
"several",
"benchmark",
"datasets",
"in",
"language",
"modeling",
".",
"we",
"propose",
"a",
"new",
"regularization",
"method",
"based",
"on",
"decoding",
"the",
"last",
"token",
"in",
"... |
ACL | Changing the World by Changing the Data | NLP community is currently investing a lot more research and resources into development of deep learning models than training data. While we have made a lot of progress, it is now clear that our models learn all kinds of spurious patterns, social biases, and annotation artifacts. Algorithmic solutions have so far had l... | 13d93f56d58eb088c0ea71487ce6770c | 2,021 | [
"nlp community is currently investing a lot more research and resources into development of deep learning models than training data .",
"while we have made a lot of progress , it is now clear that our models learn all kinds of spurious patterns , social biases , and annotation artifacts .",
"algorithmic solutio... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "training data",
"nugget_type": "DST",
"argument_type": "Target",
"tokens": [
"training",
"data"
],
"offsets": [
18,
19
]
}
],
"trigger": {
... | [
"nlp",
"community",
"is",
"currently",
"investing",
"a",
"lot",
"more",
"research",
"and",
"resources",
"into",
"development",
"of",
"deep",
"learning",
"models",
"than",
"training",
"data",
".",
"while",
"we",
"have",
"made",
"a",
"lot",
"of",
"progress",
"... |
ACL | Enhancing Answer Boundary Detection for Multilingual Machine Reading Comprehension | Multilingual pre-trained models could leverage the training data from a rich source language (such as English) to improve performance on low resource languages. However, the transfer quality for multilingual Machine Reading Comprehension (MRC) is significantly worse than sentence classification tasks mainly due to the ... | 59d16db1712ed5fc53bcca255f90cb12 | 2,020 | [
"multilingual pre - trained models could leverage the training data from a rich source language ( such as english ) to improve performance on low resource languages .",
"however , the transfer quality for multilingual machine reading comprehension ( mrc ) is significantly worse than sentence classification tasks ... | [
{
"event_type": "RWS",
"arguments": [
{
"text": "multilingual pre - trained models",
"nugget_type": "APP",
"argument_type": "Subject",
"tokens": [
"multilingual",
"pre",
"-",
"trained",
"models"
],
"offse... | [
"multilingual",
"pre",
"-",
"trained",
"models",
"could",
"leverage",
"the",
"training",
"data",
"from",
"a",
"rich",
"source",
"language",
"(",
"such",
"as",
"english",
")",
"to",
"improve",
"performance",
"on",
"low",
"resource",
"languages",
".",
"however",... |
ACL | MILIE: Modular & Iterative Multilingual Open Information Extraction | Open Information Extraction (OpenIE) is the task of extracting (subject, predicate, object) triples from natural language sentences. Current OpenIE systems extract all triple slots independently. In contrast, we explore the hypothesis that it may be beneficial to extract triple slots iteratively: first extract easy slo... | d6243d89dfae69823ba5501b6159998d | 2,022 | [
"open information extraction ( openie ) is the task of extracting ( subject , predicate , object ) triples from natural language sentences .",
"current openie systems extract all triple slots independently .",
"in contrast , we explore the hypothesis that it may be beneficial to extract triple slots iteratively... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "open information extraction",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"open",
"information",
"extraction"
],
"offsets": [
0,
1,
... | [
"open",
"information",
"extraction",
"(",
"openie",
")",
"is",
"the",
"task",
"of",
"extracting",
"(",
"subject",
",",
"predicate",
",",
"object",
")",
"triples",
"from",
"natural",
"language",
"sentences",
".",
"current",
"openie",
"systems",
"extract",
"all"... |
ACL | Exploring Listwise Evidence Reasoning with T5 for Fact Verification | This work explores a framework for fact verification that leverages pretrained sequence-to-sequence transformer models for sentence selection and label prediction, two key sub-tasks in fact verification. Most notably, improving on previous pointwise aggregation approaches for label prediction, we take advantage of T5 u... | 0df954a55ed82ab8951b6b04a23d59b5 | 2,021 | [
"this work explores a framework for fact verification that leverages pretrained sequence - to - sequence transformer models for sentence selection and label prediction , two key sub - tasks in fact verification .",
"most notably , improving on previous pointwise aggregation approaches for label prediction , we ta... | [
{
"event_type": "MDS",
"arguments": [
{
"text": "data augmentation",
"nugget_type": "MOD",
"argument_type": "TriedComponent",
"tokens": [
"data",
"augmentation"
],
"offsets": [
58,
59
]
},
{
... | [
"this",
"work",
"explores",
"a",
"framework",
"for",
"fact",
"verification",
"that",
"leverages",
"pretrained",
"sequence",
"-",
"to",
"-",
"sequence",
"transformer",
"models",
"for",
"sentence",
"selection",
"and",
"label",
"prediction",
",",
"two",
"key",
"sub... |
ACL | Saying No is An Art: Contextualized Fallback Responses for Unanswerable Dialogue Queries | Despite end-to-end neural systems making significant progress in the last decade for task-oriented as well as chit-chat based dialogue systems, most dialogue systems rely on hybrid approaches which use a combination of rule-based, retrieval and generative approaches for generating a set of ranked responses. Such dialog... | 392ad226db747c1929cb8867232d9fe3 | 2,021 | [
"despite end - to - end neural systems making significant progress in the last decade for task - oriented as well as chit - chat based dialogue systems , most dialogue systems rely on hybrid approaches which use a combination of rule - based , retrieval and generative approaches for generating a set of ranked respo... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "end - to - end neural systems",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"end",
"-",
"to",
"-",
"end",
"neural",
"systems"
... | [
"despite",
"end",
"-",
"to",
"-",
"end",
"neural",
"systems",
"making",
"significant",
"progress",
"in",
"the",
"last",
"decade",
"for",
"task",
"-",
"oriented",
"as",
"well",
"as",
"chit",
"-",
"chat",
"based",
"dialogue",
"systems",
",",
"most",
"dialogu... |
ACL | Few-Shot Learning with Siamese Networks and Label Tuning | We study the problem of building text classifiers with little or no training data, commonly known as zero and few-shot text classification. In recent years, an approach based on neural textual entailment models has been found to give strong results on a diverse range of tasks. In this work, we show that with proper pre... | 4bc0310506eee008e8c38c036c35d854 | 2,022 | [
"we study the problem of building text classifiers with little or no training data , commonly known as zero and few - shot text classification .",
"in recent years , an approach based on neural textual entailment models has been found to give strong results on a diverse range of tasks .",
"in this work , we sho... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "zero and few - shot text classification",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"zero",
"and",
"few",
"-",
"shot",
"text",
"clas... | [
"we",
"study",
"the",
"problem",
"of",
"building",
"text",
"classifiers",
"with",
"little",
"or",
"no",
"training",
"data",
",",
"commonly",
"known",
"as",
"zero",
"and",
"few",
"-",
"shot",
"text",
"classification",
".",
"in",
"recent",
"years",
",",
"an"... |
ACL | KenMeSH: Knowledge-enhanced End-to-end Biomedical Text Labelling | Currently, Medical Subject Headings (MeSH) are manually assigned to every biomedical article published and subsequently recorded in the PubMed database to facilitate retrieving relevant information. With the rapid growth of the PubMed database, large-scale biomedical document indexing becomes increasingly important. Me... | f438ee71b5ac35ef93e485de523e12da | 2,022 | [
"currently , medical subject headings ( mesh ) are manually assigned to every biomedical article published and subsequently recorded in the pubmed database to facilitate retrieving relevant information .",
"with the rapid growth of the pubmed database , large - scale biomedical document indexing becomes increasin... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "medical subject headings",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"medical",
"subject",
"headings"
],
"offsets": [
2,
3,
4
... | [
"currently",
",",
"medical",
"subject",
"headings",
"(",
"mesh",
")",
"are",
"manually",
"assigned",
"to",
"every",
"biomedical",
"article",
"published",
"and",
"subsequently",
"recorded",
"in",
"the",
"pubmed",
"database",
"to",
"facilitate",
"retrieving",
"relev... |
ACL | Automatic Evaluation of Local Topic Quality | Topic models are typically evaluated with respect to the global topic distributions that they generate, using metrics such as coherence, but without regard to local (token-level) topic assignments. Token-level assignments are important for downstream tasks such as classification. Even recent models, which aim to improv... | 85c7cf911242e5fa05a4e9f5bea41125 | 2,019 | [
"topic models are typically evaluated with respect to the global topic distributions that they generate , using metrics such as coherence , but without regard to local ( token - level ) topic assignments .",
"token - level assignments are important for downstream tasks such as classification .",
"even recent mo... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
76
]
},
{
"text": "task",
"nugget_type": "TAK",
"argu... | [
"topic",
"models",
"are",
"typically",
"evaluated",
"with",
"respect",
"to",
"the",
"global",
"topic",
"distributions",
"that",
"they",
"generate",
",",
"using",
"metrics",
"such",
"as",
"coherence",
",",
"but",
"without",
"regard",
"to",
"local",
"(",
"token"... |
ACL | Edited Media Understanding Frames: Reasoning About the Intent and Implications of Visual Misinformation | Understanding manipulated media, from automatically generated ‘deepfakes’ to manually edited ones, raises novel research challenges. Because the vast majority of edited or manipulated images are benign, such as photoshopped images for visual enhancements, the key challenge is to understand the complex layers of underly... | 9de9bdb87944464ab2a06ec9c80690a2 | 2,021 | [
"understanding manipulated media , from automatically generated ‘ deepfakes ’ to manually edited ones , raises novel research challenges .",
"because the vast majority of edited or manipulated images are benign , such as photoshopped images for visual enhancements , the key challenge is to understand the complex ... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
67
]
},
{
"text": "edited media frames",
"nugget_type": "APP"... | [
"understanding",
"manipulated",
"media",
",",
"from",
"automatically",
"generated",
"‘",
"deepfakes",
"’",
"to",
"manually",
"edited",
"ones",
",",
"raises",
"novel",
"research",
"challenges",
".",
"because",
"the",
"vast",
"majority",
"of",
"edited",
"or",
"man... |
ACL | Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology | Gender stereotypes are manifest in most of the world’s languages and are consequently propagated or amplified by NLP systems. Although research has focused on mitigating gender stereotypes in English, the approaches that are commonly employed produce ungrammatical sentences in morphologically rich languages. We present... | acaf597a4b58ba7af80415f029d45657 | 2,019 | [
"gender stereotypes are manifest in most of the world ’ s languages and are consequently propagated or amplified by nlp systems .",
"although research has focused on mitigating gender stereotypes in english , the approaches that are commonly employed produce ungrammatical sentences in morphologically rich languag... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "gender stereotypes",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"gender",
"stereotypes"
],
"offsets": [
0,
1
]
},
{
"te... | [
"gender",
"stereotypes",
"are",
"manifest",
"in",
"most",
"of",
"the",
"world",
"’",
"s",
"languages",
"and",
"are",
"consequently",
"propagated",
"or",
"amplified",
"by",
"nlp",
"systems",
".",
"although",
"research",
"has",
"focused",
"on",
"mitigating",
"ge... |