| venue (string, 1 class) | title (string, 18-162 chars) | abstract (string, 252-1.89k chars) | doc_id (string, 32 chars) | publication_year (int64) | sentences (list, 1-13 items) | events (list, 1-24 items) | document (list, 50-348 tokens) |
|---|---|---|---|---|---|---|---|
ACL | Don’t Let Discourse Confine Your Model: Sequence Perturbations for Improved Event Language Models | Event language models represent plausible sequences of events. Most existing approaches train autoregressive models on text, which successfully capture event co-occurrence but unfortunately constrain the model to follow the discourse order in which events are presented. Other domains may employ different discourse orde... | 9470d376124644d2f61d458321f4e828 | 2021 | [
"event language models represent plausible sequences of events .",
"most existing approaches train autoregressive models on text , which successfully capture event co - occurrence but unfortunately constrain the model to follow the discourse order in which events are presented .",
"other domains may employ diff... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "plausible sequences of events",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"plausible",
"sequences",
"of",
"events"
],
"offsets": [
4,
... | [
"event",
"language",
"models",
"represent",
"plausible",
"sequences",
"of",
"events",
".",
"most",
"existing",
"approaches",
"train",
"autoregressive",
"models",
"on",
"text",
",",
"which",
"successfully",
"capture",
"event",
"co",
"-",
"occurrence",
"but",
"unfor... |
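An argument's `offsets` index into the row-level `document` token list, so an argument's `text` field can be rebuilt by space-joining the referenced tokens. A minimal sketch using the first row above (the helper name is illustrative, not part of the dataset):

```python
# Rebuild an argument's surface text from document tokens and its offsets.
# Joining with spaces matches the space-separated "text" fields in the preview.

def argument_text(document, offsets):
    """Join the document tokens at the given indices into one string."""
    return " ".join(document[i] for i in offsets)

# Tokens of the first sentence of the first row above:
document = ["event", "language", "models", "represent",
            "plausible", "sequences", "of", "events", "."]

# The "Target" argument whose offsets start at 4 recovers its "text" field:
print(argument_text(document, [4, 5, 6, 7]))  # -> plausible sequences of events
```

Note that hyphenated spans such as "pre - trained language models" also round-trip exactly, because the dataset stores the hyphen as its own token with surrounding spaces in the `text` field.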
ACL | CLINE: Contrastive Learning with Semantic Negative Examples for Natural Language Understanding | Despite pre-trained language models have proven useful for learning high-quality semantic representations, these models are still vulnerable to simple perturbations. Recent works aimed to improve the robustness of pre-trained models mainly focus on adversarial training from perturbed examples with similar semantics, ne... | 6dff7fbe1396b8e85de80d9043b1b1dd | 2021 | [
"despite pre - trained language models have proven useful for learning high - quality semantic representations , these models are still vulnerable to simple perturbations .",
"recent works aimed to improve the robustness of pre - trained models mainly focus on adversarial training from perturbed examples with sim... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "pre - trained language models",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"pre",
"-",
"trained",
"language",
"models"
],
"offsets": [
... | [
"despite",
"pre",
"-",
"trained",
"language",
"models",
"have",
"proven",
"useful",
"for",
"learning",
"high",
"-",
"quality",
"semantic",
"representations",
",",
"these",
"models",
"are",
"still",
"vulnerable",
"to",
"simple",
"perturbations",
".",
"recent",
"w... |
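The rows follow a consistent nested schema. Below is a hedged sketch of one row as Python dataclasses, with field names inferred from the column headers and the JSON fragments in the preview; this is not an official loader, and the per-event `trigger` object (truncated in the preview) is represented only by a comment:

```python
# Sketch of the row schema inferred from this preview; field names and types
# are assumptions based on the visible fragments, not an official definition.
from dataclasses import dataclass
from typing import List

@dataclass
class Argument:
    text: str            # surface form of the argument span
    nugget_type: str     # e.g. "TAK", "APP", "OG", "MOD", "FEA"
    argument_type: str   # e.g. "Target", "Concern", "Proposer", "Researcher"
    tokens: List[str]    # tokens of the span
    offsets: List[int]   # token indices into the row-level "document" list

@dataclass
class Event:
    event_type: str      # e.g. "ITT", "PRP", "RWF", "RWS", "WKS"
    arguments: List[Argument]
    # each event also carries a "trigger" object, truncated in this preview

@dataclass
class Row:
    venue: str            # a single class in this split: "ACL"
    title: str
    abstract: str
    doc_id: str           # 32-character identifier
    publication_year: int
    sentences: List[str]  # 1-13 lowercased, space-tokenized sentences
    events: List[Event]   # 1-24 annotated events
    document: List[str]   # 50-348 tokens covering the whole abstract
```

The `sentences` strings concatenate, token for token, into the `document` list, so the two views are redundant but convenient for sentence-level versus document-level indexing.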
ACL | Towards Integration of Statistical Hypothesis Tests into Deep Neural Networks | We report our ongoing work about a new deep architecture working in tandem with a statistical test procedure for jointly training texts and their label descriptions for multi-label and multi-class classification tasks. A statistical hypothesis testing method is used to extract the most informative words for each given ... | a971712a246b977413d1bfa9f7aeb768 | 2019 | [
"we report our ongoing work about a new deep architecture working in tandem with a statistical test procedure for jointly training texts and their label descriptions for multi - label and multi - class classification tasks .",
"a statistical hypothesis testing method is used to extract the most informative words ... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "deep architecture working",
"nugget_type"... | [
"we",
"report",
"our",
"ongoing",
"work",
"about",
"a",
"new",
"deep",
"architecture",
"working",
"in",
"tandem",
"with",
"a",
"statistical",
"test",
"procedure",
"for",
"jointly",
"training",
"texts",
"and",
"their",
"label",
"descriptions",
"for",
"multi",
"... |
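If the dataset ships as JSON Lines (an assumption; the preview does not show the on-disk format, and the function name here is illustrative), rows can be iterated one dict at a time without loading the whole file:

```python
# Hedged sketch: iterate rows assuming one JSON object per line (JSONL).
# The storage format is an assumption; adapt if the dataset ships differently.
import json

def iter_rows(lines):
    """Yield one row dict per non-blank JSON line."""
    for line in lines:
        line = line.strip()
        if line:
            yield json.loads(line)

# A toy line using fields visible in the preview:
sample = ('{"venue": "ACL", "publication_year": 2021, '
          '"doc_id": "9470d376124644d2f61d458321f4e828"}\n')
for row in iter_rows([sample]):
    print(row["venue"], row["publication_year"])  # -> ACL 2021
```

In practice `lines` would be an open file handle, which streams rows instead of materializing the full table in memory.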
ACL | Hierarchical Sketch Induction for Paraphrase Generation | We propose a generative model of paraphrase generation, that encourages syntactic diversity by conditioning on an explicit syntactic sketch. We introduce Hierarchical Refinement Quantized Variational Autoencoders (HRQ-VAE), a method for learning decompositions of dense encodings as a sequence of discrete latent variabl... | cc2462dd5e4863c5f80a3d1c84a66f7e | 2022 | [
"we propose a generative model of paraphrase generation , that encourages syntactic diversity by conditioning on an explicit syntactic sketch .",
"we introduce hierarchical refinement quantized variational autoencoders ( hrq - vae ) , a method for learning decompositions of dense encodings as a sequence of discre... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "generative model of paraphrase generation",
... | [
"we",
"propose",
"a",
"generative",
"model",
"of",
"paraphrase",
"generation",
",",
"that",
"encourages",
"syntactic",
"diversity",
"by",
"conditioning",
"on",
"an",
"explicit",
"syntactic",
"sketch",
".",
"we",
"introduce",
"hierarchical",
"refinement",
"quantized"... |
ACL | Words Aren’t Enough, Their Order Matters: On the Robustness of Grounding Visual Referring Expressions | Visual referring expression recognition is a challenging task that requires natural language understanding in the context of an image. We critically examine RefCOCOg, a standard benchmark for this task, using a human study and show that 83.7% of test instances do not require reasoning on linguistic structure, i.e., wor... | 2e56c22df1000c3f9c0d4a9abec0125f | 2020 | [
"visual referring expression recognition is a challenging task that requires natural language understanding in the context of an image .",
"we critically examine refcocog , a standard benchmark for this task , using a human study and show that 83 . 7 % of test instances do not require reasoning on linguistic stru... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "visual referring expression recognition",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"visual",
"referring",
"expression",
"recognition"
],
"offsets... | [
"visual",
"referring",
"expression",
"recognition",
"is",
"a",
"challenging",
"task",
"that",
"requires",
"natural",
"language",
"understanding",
"in",
"the",
"context",
"of",
"an",
"image",
".",
"we",
"critically",
"examine",
"refcocog",
",",
"a",
"standard",
"... |
ACL | Can We Predict New Facts with Open Knowledge Graph Embeddings? A Benchmark for Open Link Prediction | Open Information Extraction systems extract (“subject text”, “relation text”, “object text”) triples from raw text. Some triples are textual versions of facts, i.e., non-canonicalized mentions of entities and relations. In this paper, we investigate whether it is possible to infer new facts directly from the open knowl... | eac513fd7857b06c686853770e66b878 | 2020 | [
"open information extraction systems extract ( “ subject text ” , “ relation text ” , “ object text ” ) triples from raw text .",
"some triples are textual versions of facts , i . e . , non - canonicalized mentions of entities and relations .",
"in this paper , we investigate whether it is possible to infer new... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
52
]
},
{
"text": "new facts directly from the open knowledge graph... | [
"open",
"information",
"extraction",
"systems",
"extract",
"(",
"“",
"subject",
"text",
"”",
",",
"“",
"relation",
"text",
"”",
",",
"“",
"object",
"text",
"”",
")",
"triples",
"from",
"raw",
"text",
".",
"some",
"triples",
"are",
"textual",
"versions",
... |
ACL | Neural Reranking for Dependency Parsing: An Evaluation | Recent work has shown that neural rerankers can improve results for dependency parsing over the top k trees produced by a base parser. However, all neural rerankers so far have been evaluated on English and Chinese only, both languages with a configurational word order and poor morphology. In the paper, we re-assess th... | c2b0775833f8b4e2c258171b6e265ca9 | 2020 | [
"recent work has shown that neural rerankers can improve results for dependency parsing over the top k trees produced by a base parser .",
"however , all neural rerankers so far have been evaluated on english and chinese only , both languages with a configurational word order and poor morphology .",
"in the pap... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural rerankers",
"nugget_type": "MOD",
"argument_type": "Target",
"tokens": [
"neural",
"rerankers"
],
"offsets": [
5,
6
]
}
],
"trigger": {
... | [
"recent",
"work",
"has",
"shown",
"that",
"neural",
"rerankers",
"can",
"improve",
"results",
"for",
"dependency",
"parsing",
"over",
"the",
"top",
"k",
"trees",
"produced",
"by",
"a",
"base",
"parser",
".",
"however",
",",
"all",
"neural",
"rerankers",
"so"... |
ACL | Dependency-based Mixture Language Models | Various models have been proposed to incorporate knowledge of syntactic structures into neural language models. However, previous works have relied heavily on elaborate components for a specific language model, usually recurrent neural network (RNN), which makes themselves unwieldy in practice to fit into other neural ... | 451f3e9998af7fea98c195ca50932af6 | 2022 | [
"various models have been proposed to incorporate knowledge of syntactic structures into neural language models .",
"however , previous works have relied heavily on elaborate components for a specific language model , usually recurrent neural network ( rnn ) , which makes themselves unwieldy in practice to fit in... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural language models",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"neural",
"language",
"models"
],
"offsets": [
12,
13,
14
... | [
"various",
"models",
"have",
"been",
"proposed",
"to",
"incorporate",
"knowledge",
"of",
"syntactic",
"structures",
"into",
"neural",
"language",
"models",
".",
"however",
",",
"previous",
"works",
"have",
"relied",
"heavily",
"on",
"elaborate",
"components",
"for... |
ACL | Engage the Public: Poll Question Generation for Social Media Posts | This paper presents a novel task to generate poll questions for social media posts. It offers an easy way to hear the voice from the public and learn from their feelings to important social topics. While most related work tackles formal languages (e.g., exam papers), we generate poll questions for short and colloquial ... | 20959cb85a1bfeada3276eab757b33c0 | 2021 | [
"this paper presents a novel task to generate poll questions for social media posts .",
"it offers an easy way to hear the voice from the public and learn from their feelings to important social topics .",
"while most related work tackles formal languages ( e . g . , exam papers ) , we generate poll questions f... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "task",
"nugget_type": "TAK",
"argument_type": "Content",
"tokens": [
"task"
],
"offsets": [
5
]
},
{
"text": "social media posts",
"nugget_type": "TA... | [
"this",
"paper",
"presents",
"a",
"novel",
"task",
"to",
"generate",
"poll",
"questions",
"for",
"social",
"media",
"posts",
".",
"it",
"offers",
"an",
"easy",
"way",
"to",
"hear",
"the",
"voice",
"from",
"the",
"public",
"and",
"learn",
"from",
"their",
... |
ACL | Is GPT-3 Text Indistinguishable from Human Text? Scarecrow: A Framework for Scrutinizing Machine Text | Modern neural language models can produce remarkably fluent and grammatical text. So much, in fact, that recent work by Clark et al. (2021) has reported that conventional crowdsourcing can no longer reliably distinguish between machine-authored (GPT-3) and human-authored writing. As errors in machine generations become... | f2586e9386fde2ce50a280fcbaef70d0 | 2022 | [
"modern neural language models can produce remarkably fluent and grammatical text .",
"so much , in fact , that recent work by clark et al . ( 2021 ) has reported that conventional crowdsourcing can no longer reliably distinguish between machine - authored ( gpt - 3 ) and human - authored writing .",
"as errors... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "robust machine text evaluation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"robust",
"machine",
"text",
"evaluation"
],
"offsets": [
77,... | [
"modern",
"neural",
"language",
"models",
"can",
"produce",
"remarkably",
"fluent",
"and",
"grammatical",
"text",
".",
"so",
"much",
",",
"in",
"fact",
",",
"that",
"recent",
"work",
"by",
"clark",
"et",
"al",
".",
"(",
"2021",
")",
"has",
"reported",
"t... |
ACL | EPT-X: An Expression-Pointer Transformer model that generates eXplanations for numbers | In this paper, we propose a neural model EPT-X (Expression-Pointer Transformer with Explanations), which utilizes natural language explanations to solve an algebraic word problem. To enhance the explainability of the encoding process of a neural model, EPT-X adopts the concepts of plausibility and faithfulness which ar... | ea147cb5d18211cecc6e4d52f7531041 | 2022 | [
"in this paper , we propose a neural model ept - x ( expression - pointer transformer with explanations ) , which utilizes natural language explanations to solve an algebraic word problem .",
"to enhance the explainability of the encoding process of a neural model , ept - x adopts the concepts of plausibility and... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
4
]
},
{
"text": "ept - x",
"nugget_type": "APP",
"ar... | [
"in",
"this",
"paper",
",",
"we",
"propose",
"a",
"neural",
"model",
"ept",
"-",
"x",
"(",
"expression",
"-",
"pointer",
"transformer",
"with",
"explanations",
")",
",",
"which",
"utilizes",
"natural",
"language",
"explanations",
"to",
"solve",
"an",
"algebr... |
ACL | Database reasoning over text | Neural models have shown impressive performance gains in answering queries from natural language text. However, existing works are unable to support database queries, such as “List/Count all female athletes who were born in 20th century”, which require reasoning over sets of relevant facts with operations such as join,... | 69cb0b43fdb360fc3f3052006b08b152 | 2021 | [
"neural models have shown impressive performance gains in answering queries from natural language text .",
"however , existing works are unable to support database queries , such as “ list / count all female athletes who were born in 20th century ” , which require reasoning over sets of relevant facts with operat... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "existing works",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"existing",
"works"
],
"offsets": [
17,
18
]
},
{
"text": ... | [
"neural",
"models",
"have",
"shown",
"impressive",
"performance",
"gains",
"in",
"answering",
"queries",
"from",
"natural",
"language",
"text",
".",
"however",
",",
"existing",
"works",
"are",
"unable",
"to",
"support",
"database",
"queries",
",",
"such",
"as",
... |
ACL | Can Transformer be Too Compositional? Analysing Idiom Processing in Neural Machine Translation | Unlike literal expressions, idioms’ meanings do not directly follow from their parts, posing a challenge for neural machine translation (NMT). NMT models are often unable to translate idioms accurately and over-generate compositional, literal translations. In this work, we investigate whether the non-compositionality o... | 8e49a34000cdfdbfd5ff2ec77d2bc7d5 | 2022 | [
"unlike literal expressions , idioms ’ meanings do not directly follow from their parts , posing a challenge for neural machine translation ( nmt ) .",
"nmt models are often unable to translate idioms accurately and over - generate compositional , literal translations .",
"in this work , we investigate whether ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural machine translation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"neural",
"machine",
"translation"
],
"offsets": [
19,
20,
... | [
"unlike",
"literal",
"expressions",
",",
"idioms",
"’",
"meanings",
"do",
"not",
"directly",
"follow",
"from",
"their",
"parts",
",",
"posing",
"a",
"challenge",
"for",
"neural",
"machine",
"translation",
"(",
"nmt",
")",
".",
"nmt",
"models",
"are",
"often"... |
ACL | An Effective and Efficient Entity Alignment Decoding Algorithm via Third-Order Tensor Isomorphism | Entity alignment (EA) aims to discover the equivalent entity pairs between KGs, which is a crucial step for integrating multi-source KGs. For a long time, most researchers have regarded EA as a pure graph representation learning task and focused on improving graph encoders while paying little attention to the decoding p... | deefe957ab578bebf853e2278897aafd | 2022 | [
"entity alignment ( ea ) aims to discover the equivalent entity pairs between kgs , which is a crucial step for integrating multi - source kgs .",
"for a long time , most researchers have regarded ea as a pure graph representation learning task and focused on improving graph encoders while paying little attention... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "integrating multi - source kgs",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"integrating",
"multi",
"-",
"source",
"kgs"
],
"offsets": [
... | [
"entity",
"alignment",
"(",
"ea",
")",
"aims",
"to",
"discover",
"the",
"equivalent",
"entity",
"pairs",
"between",
"kgs",
",",
"which",
"is",
"a",
"crucial",
"step",
"for",
"integrating",
"multi",
"-",
"source",
"kgs",
".",
"for",
"a",
"long",
"time",
"... |
ACL | Coreference Resolution with Entity Equalization | A key challenge in coreference resolution is to capture properties of entity clusters, and use those in the resolution process. Here we provide a simple and effective approach for achieving this, via an “Entity Equalization” mechanism. The Equalization approach represents each mention in a cluster via an approximation ... | 4e16aca0eb55c1ae29f248300dca81c8 | 2019 | [
"a key challenge in coreference resolution is to capture properties of entity clusters , and use those in the resolution process .",
"here we provide a simple and effective approach for achieving this , via an “ entity equalization ” mechanism .",
"the equalization approach represents each mention in a cluster ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "properties of entity clusters",
"nugget_type": "FEA",
"argument_type": "Target",
"tokens": [
"properties",
"of",
"entity",
"clusters"
],
"offsets": [
9,
... | [
"a",
"key",
"challenge",
"in",
"coreference",
"resolution",
"is",
"to",
"capture",
"properties",
"of",
"entity",
"clusters",
",",
"and",
"use",
"those",
"in",
"the",
"resolution",
"process",
".",
"here",
"we",
"provide",
"a",
"simple",
"and",
"effective",
"a... |
ACL | CNNs found to jump around more skillfully than RNNs: Compositional Generalization in Seq2seq Convolutional Networks | Lake and Baroni (2018) introduced the SCAN dataset probing the ability of seq2seq models to capture compositional generalizations, such as inferring the meaning of “jump around” 0-shot from the component words. Recurrent networks (RNNs) were found to completely fail the most challenging generalization cases. We test he... | e5da2d627d02302b073e904b0ee84f44 | 2019 | [
"lake and baroni ( 2018 ) introduced the scan dataset probing the ability of seq2seq models to capture compositional generalizations , such as inferring the meaning of “ jump around ” 0 - shot from the component words .",
"recurrent networks ( rnns ) were found to completely fail the most challenging generalizati... | [
{
"event_type": "RWS",
"arguments": [
{
"text": "probing",
"nugget_type": "E-PUR",
"argument_type": "Target",
"tokens": [
"probing"
],
"offsets": [
10
]
},
{
"text": "scan dataset",
"nugget_type": "... | [
"lake",
"and",
"baroni",
"(",
"2018",
")",
"introduced",
"the",
"scan",
"dataset",
"probing",
"the",
"ability",
"of",
"seq2seq",
"models",
"to",
"capture",
"compositional",
"generalizations",
",",
"such",
"as",
"inferring",
"the",
"meaning",
"of",
"“",
"jump",... |
ACL | Identifying the Human Values behind Arguments | This paper studies the (often implicit) human values behind natural language arguments, such as to have freedom of thought or to be broadminded. Values are commonly accepted answers to why some option is desirable in the ethical sense and are thus essential both in real-world argumentation and theoretical argumentation... | ccb229e53e96ccc121668fa69dd7228a | 2022 | [
"this paper studies the ( often implicit ) human values behind natural language arguments , such as to have freedom of thought or to be broadminded .",
"values are commonly accepted answers to why some option is desirable in the ethical sense and are thus essential both in real - world argumentation and theoretic... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
79
]
},
{
"text": "operationalization of human values",
"nugg... | [
"this",
"paper",
"studies",
"the",
"(",
"often",
"implicit",
")",
"human",
"values",
"behind",
"natural",
"language",
"arguments",
",",
"such",
"as",
"to",
"have",
"freedom",
"of",
"thought",
"or",
"to",
"be",
"broadminded",
".",
"values",
"are",
"commonly",... |
ACL | A Novel Bi-directional Interrelated Model for Joint Intent Detection and Slot Filling | A spoken language understanding (SLU) system includes two main tasks, slot filling (SF) and intent detection (ID). The joint model for the two tasks is becoming a tendency in SLU. But the bi-directional interrelated connections between the intent and slots are not established in the existing joint models. In this paper... | aff1748e3931b59ebb7334b090207482 | 2019 | [
"a spoken language understanding ( slu ) system includes two main tasks , slot filling ( sf ) and intent detection ( id ) .",
"the joint model for the two tasks is becoming a tendency in slu .",
"but the bi - directional interrelated connections between the intent and slots are not established in the existing j... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "spoken language understanding system",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"spoken",
"language",
"understanding",
"system"
],
"offsets": [
... | [
"a",
"spoken",
"language",
"understanding",
"(",
"slu",
")",
"system",
"includes",
"two",
"main",
"tasks",
",",
"slot",
"filling",
"(",
"sf",
")",
"and",
"intent",
"detection",
"(",
"id",
")",
".",
"the",
"joint",
"model",
"for",
"the",
"two",
"tasks",
... |
ACL | Multilingual Mix: Example Interpolation Improves Multilingual Neural Machine Translation | Multilingual neural machine translation models are trained to maximize the likelihood of a mix of examples drawn from multiple language pairs. The dominant inductive bias applied to these models is a shared vocabulary and a shared set of parameters across languages; the inputs and labels corresponding to examples drawn... | 596275e2787ca941e152f389ce9b6667 | 2022 | [
"multilingual neural machine translation models are trained to maximize the likelihood of a mix of examples drawn from multiple language pairs .",
"the dominant inductive bias applied to these models is a shared vocabulary and a shared set of parameters across languages ; the inputs and labels corresponding to ex... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "multilingual neural machine translation models",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"multilingual",
"neural",
"machine",
"translation",
"models"
... | [
"multilingual",
"neural",
"machine",
"translation",
"models",
"are",
"trained",
"to",
"maximize",
"the",
"likelihood",
"of",
"a",
"mix",
"of",
"examples",
"drawn",
"from",
"multiple",
"language",
"pairs",
".",
"the",
"dominant",
"inductive",
"bias",
"applied",
"... |
ACL | Incremental Transformer with Deliberation Decoder for Document Grounded Conversations | Document Grounded Conversations is a task to generate dialogue responses when chatting about the content of a given document. Obviously, document knowledge plays a critical role in Document Grounded Conversations, while existing dialogue models do not exploit this kind of knowledge effectively enough. In this paper, we... | 330e4b90defa34da391a90dfc5610ca0 | 2019 | [
"document grounded conversations is a task to generate dialogue responses when chatting about the content of a given document .",
"obviously , document knowledge plays a critical role in document grounded conversations , while existing dialogue models do not exploit this kind of knowledge effectively enough .",
... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "document grounded conversations",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"document",
"grounded",
"conversations"
],
"offsets": [
0,
1... | [
"document",
"grounded",
"conversations",
"is",
"a",
"task",
"to",
"generate",
"dialogue",
"responses",
"when",
"chatting",
"about",
"the",
"content",
"of",
"a",
"given",
"document",
".",
"obviously",
",",
"document",
"knowledge",
"plays",
"a",
"critical",
"role"... |
ACL | Automated Topical Component Extraction Using Neural Network Attention Scores from Source-based Essay Scoring | While automated essay scoring (AES) can reliably grade essays at scale, automated writing evaluation (AWE) additionally provides formative feedback to guide essay revision. However, a neural AES typically does not provide useful feature representations for supporting AWE. This paper presents a method for linking AWE an... | ecd9642080b9a290b95e9f94e25ddeaa | 2020 | [
"while automated essay scoring ( aes ) can reliably grade essays at scale , automated writing evaluation ( awe ) additionally provides formative feedback to guide essay revision .",
"however , a neural aes typically does not provide useful feature representations for supporting awe .",
"this paper presents a me... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "neural automated essay scoring",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"neural",
"automated",
"essay",
"scoring"
],
"offsets": [
32... | [
"while",
"automated",
"essay",
"scoring",
"(",
"aes",
")",
"can",
"reliably",
"grade",
"essays",
"at",
"scale",
",",
"automated",
"writing",
"evaluation",
"(",
"awe",
")",
"additionally",
"provides",
"formative",
"feedback",
"to",
"guide",
"essay",
"revision",
... |
ACL | Paraphrase Generation by Learning How to Edit from Samples | Neural sequence to sequence text generation has been proved to be a viable approach to paraphrase generation. Despite promising results, paraphrases generated by these models mostly suffer from lack of quality and diversity. To address these problems, we propose a novel retrieval-based method for paraphrase generation.... | 4c8bf533bb730d228cb0aea31dc180c7 | 2020 | [
"neural sequence to sequence text generation has been proved to be a viable approach to paraphrase generation .",
"despite promising results , paraphrases generated by these models mostly suffer from lack of quality and diversity .",
"to address these problems , we propose a novel retrieval - based method for p... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural sequence to sequence text generation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"neural",
"sequence",
"to",
"sequence",
"text",
"gener... | [
"neural",
"sequence",
"to",
"sequence",
"text",
"generation",
"has",
"been",
"proved",
"to",
"be",
"a",
"viable",
"approach",
"to",
"paraphrase",
"generation",
".",
"despite",
"promising",
"results",
",",
"paraphrases",
"generated",
"by",
"these",
"models",
"mos... |
ACL | Open-Domain Targeted Sentiment Analysis via Span-Based Extraction and Classification | Open-domain targeted sentiment analysis aims to detect opinion targets along with their sentiment polarities from a sentence. Prior work typically formulates this task as a sequence tagging problem. However, such formulation suffers from problems such as huge search space and sentiment inconsistency. To address these p... | 6cb37fbaff8d81f5b3f5a9c437d64355 | 2019 | [
"open - domain targeted sentiment analysis aims to detect opinion targets along with their sentiment polarities from a sentence .",
"prior work typically formulates this task as a sequence tagging problem .",
"however , such formulation suffers from problems such as huge search space and sentiment inconsistency... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "open - domain targeted sentiment analysis",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"open",
"-",
"domain",
"targeted",
"sentiment",
"analysi... | [
"open",
"-",
"domain",
"targeted",
"sentiment",
"analysis",
"aims",
"to",
"detect",
"opinion",
"targets",
"along",
"with",
"their",
"sentiment",
"polarities",
"from",
"a",
"sentence",
".",
"prior",
"work",
"typically",
"formulates",
"this",
"task",
"as",
"a",
... |
ACL | Autoencoding Keyword Correlation Graph for Document Clustering | Document clustering requires a deep understanding of the complex structure of long-text; in particular, the intra-sentential (local) and inter-sentential features (global). Existing representation learning models do not fully capture these features. To address this, we present a novel graph-based representation for doc... | 56b834f0e92143665e89d2d385deb269 | 2020 | [
"document clustering requires a deep understanding of the complex structure of long - text ; in particular , the intra - sentential ( local ) and inter - sentential features ( global ) .",
"existing representation learning models do not fully capture these features .",
"to address this , we present a novel grap... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "document clustering",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"document",
"clustering"
],
"offsets": [
0,
1
]
}
],
"trigge... | [
"document",
"clustering",
"requires",
"a",
"deep",
"understanding",
"of",
"the",
"complex",
"structure",
"of",
"long",
"-",
"text",
";",
"in",
"particular",
",",
"the",
"intra",
"-",
"sentential",
"(",
"local",
")",
"and",
"inter",
"-",
"sentential",
"featur... |
ACL | Can You Put it All Together: Evaluating Conversational Agents’ Ability to Blend Skills | Being engaging, knowledgeable, and empathetic are all desirable general qualities in a conversational agent. Previous work has introduced tasks and datasets that aim to help agents to learn those qualities in isolation and gauge how well they can express them. But rather than being specialized in one single quality, a ... | 130943a6569ef9f4b65c4a5fd403287a | 2020 | [
"being engaging , knowledgeable , and empathetic are all desirable general qualities in a conversational agent .",
"previous work has introduced tasks and datasets that aim to help agents to learn those qualities in isolation and gauge how well they can express them .",
"but rather than being specialized in one... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "conversational agent",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"conversational",
"agent"
],
"offsets": [
14,
15
]
}
],
"tr... | [
"being",
"engaging",
",",
"knowledgeable",
",",
"and",
"empathetic",
"are",
"all",
"desirable",
"general",
"qualities",
"in",
"a",
"conversational",
"agent",
".",
"previous",
"work",
"has",
"introduced",
"tasks",
"and",
"datasets",
"that",
"aim",
"to",
"help",
... |
ACL | Negative Lexically Constrained Decoding for Paraphrase Generation | Paraphrase generation can be regarded as monolingual translation. Unlike bilingual machine translation, paraphrase generation rewrites only a limited portion of an input sentence. Hence, previous methods based on machine translation often perform conservatively to fail to make necessary rewrites. To solve this problem,... | ed606195f256220cce70be51c311b175 | 2019 | [
"paraphrase generation can be regarded as monolingual translation .",
"unlike bilingual machine translation , paraphrase generation rewrites only a limited portion of an input sentence .",
"hence , previous methods based on machine translation often perform conservatively to fail to make necessary rewrites .",
... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "paraphrase generation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"paraphrase",
"generation"
],
"offsets": [
0,
1
]
}
],
"tr... | [
"paraphrase",
"generation",
"can",
"be",
"regarded",
"as",
"monolingual",
"translation",
".",
"unlike",
"bilingual",
"machine",
"translation",
",",
"paraphrase",
"generation",
"rewrites",
"only",
"a",
"limited",
"portion",
"of",
"an",
"input",
"sentence",
".",
"he... |
ACL | Learning Transferable Feature Representations Using Neural Networks | Learning representations such that the source and target distributions appear as similar as possible has benefited transfer learning tasks across several applications. Generally it requires labeled data from the source and only unlabeled data from the target to learn such representations. While these representations ac... | f0d7a6e441ff2a77c696ccf87788be8c | 2,019 | [
"learning representations such that the source and target distributions appear as similar as possible has benefited transfer learning tasks across several applications .",
"generally it requires labeled data from the source and only unlabeled data from the target to learn such representations .",
"while these r... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "transfer learning tasks",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"transfer",
"learning",
"tasks"
],
"offsets": [
16,
17,
18... | [
"learning",
"representations",
"such",
"that",
"the",
"source",
"and",
"target",
"distributions",
"appear",
"as",
"similar",
"as",
"possible",
"has",
"benefited",
"transfer",
"learning",
"tasks",
"across",
"several",
"applications",
".",
"generally",
"it",
"requires... |
ACL | Multi-Cell Compositional LSTM for NER Domain Adaptation | Cross-domain NER is a challenging yet practical problem. Entity mentions can be highly different across domains. However, the correlations between entity types can be relatively more stable across domains. We investigate a multi-cell compositional LSTM structure for multi-task learning, modeling each entity type using ... | 52f1268b31c7a849a2520346c74584b0 | 2,020 | [
"cross - domain ner is a challenging yet practical problem .",
"entity mentions can be highly different across domains .",
"however , the correlations between entity types can be relatively more stable across domains .",
"we investigate a multi - cell compositional lstm structure for multi - task learning , m... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "cross - domain ner",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"cross",
"-",
"domain",
"ner"
],
"offsets": [
0,
1,
2... | [
"cross",
"-",
"domain",
"ner",
"is",
"a",
"challenging",
"yet",
"practical",
"problem",
".",
"entity",
"mentions",
"can",
"be",
"highly",
"different",
"across",
"domains",
".",
"however",
",",
"the",
"correlations",
"between",
"entity",
"types",
"can",
"be",
... |
ACL | Generating Scientific Claims for Zero-Shot Scientific Fact Checking | Automated scientific fact checking is difficult due to the complexity of scientific language and a lack of significant amounts of training data, as annotation requires domain expertise. To address this challenge, we propose scientific claim generation, the task of generating one or more atomic and verifiable claims fro... | 39ab3b96759781570fd26201e195863c | 2,022 | [
"automated scientific fact checking is difficult due to the complexity of scientific language and a lack of significant amounts of training data , as annotation requires domain expertise .",
"to address this challenge , we propose scientific claim generation , the task of generating one or more atomic and verifia... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "lack",
"nugget_type": "WEA",
"argument_type": "Fault",
"tokens": [
"lack"
],
"offsets": [
15
]
},
{
"text": "significant amounts of training data",
"... | [
"automated",
"scientific",
"fact",
"checking",
"is",
"difficult",
"due",
"to",
"the",
"complexity",
"of",
"scientific",
"language",
"and",
"a",
"lack",
"of",
"significant",
"amounts",
"of",
"training",
"data",
",",
"as",
"annotation",
"requires",
"domain",
"expe... |
ACL | Learning to Faithfully Rationalize by Construction | In many settings it is important for one to be able to understand why a model made a particular prediction. In NLP this often entails extracting snippets of an input text ‘responsible for’ corresponding model output; when such a snippet comprises tokens that indeed informed the model’s prediction, it is a faithful expl... | 2869488ae2f7da50493cbb923a60586b | 2,020 | [
"in many settings it is important for one to be able to understand why a model made a particular prediction .",
"in nlp this often entails extracting snippets of an input text ‘ responsible for ’ corresponding model output ; when such a snippet comprises tokens that indeed informed the model ’ s prediction , it i... | [
{
"event_type": "RWS",
"arguments": [
{
"text": "model",
"nugget_type": "APP",
"argument_type": "Subject",
"tokens": [
"model"
],
"offsets": [
82
]
},
{
"text": "produce",
"nugget_type": "E-PUR",
... | [
"in",
"many",
"settings",
"it",
"is",
"important",
"for",
"one",
"to",
"be",
"able",
"to",
"understand",
"why",
"a",
"model",
"made",
"a",
"particular",
"prediction",
".",
"in",
"nlp",
"this",
"often",
"entails",
"extracting",
"snippets",
"of",
"an",
"inpu... |
ACL | Learning to Tag OOV Tokens by Integrating Contextual Representation and Background Knowledge | Neural-based context-aware models for slot tagging have achieved state-of-the-art performance. However, the presence of OOV(out-of-vocab) words significantly degrades the performance of neural-based models, especially in a few-shot scenario. In this paper, we propose a novel knowledge-enhanced slot tagging model to int... | f7e5f6cbf39b199cc9775594a7a48510 | 2,020 | [
"neural - based context - aware models for slot tagging have achieved state - of - the - art performance .",
"however , the presence of oov ( out - of - vocab ) words significantly degrades the performance of neural - based models , especially in a few - shot scenario .",
"in this paper , we propose a novel kno... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "slot tagging",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"slot",
"tagging"
],
"offsets": [
8,
9
]
}
],
"trigger": {
"t... | [
"neural",
"-",
"based",
"context",
"-",
"aware",
"models",
"for",
"slot",
"tagging",
"have",
"achieved",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"performance",
".",
"however",
",",
"the",
"presence",
"of",
"oov",
"(",
"out",
"-",
"of",
"-",
"vocab"... |
ACL | Predicting the Growth of Morphological Families from Social and Linguistic Factors | We present the first study that examines the evolution of morphological families, i.e., sets of morphologically related words such as “trump”, “antitrumpism”, and “detrumpify”, in social media. We introduce the novel task of Morphological Family Expansion Prediction (MFEP) as predicting the increase in the size of a mo... | 41332b9699140ae4b72058f1dea95152 | 2,020 | [
"we present the first study that examines the evolution of morphological families , i . e . , sets of morphologically related words such as “ trump ” , “ antitrumpism ” , and “ detrumpify ” , in social media .",
"we introduce the novel task of morphological family expansion prediction ( mfep ) as predicting the i... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
42
]
},
{
"text": "predicting",
"nugget_type": "E-PUR",
... | [
"we",
"present",
"the",
"first",
"study",
"that",
"examines",
"the",
"evolution",
"of",
"morphological",
"families",
",",
"i",
".",
"e",
".",
",",
"sets",
"of",
"morphologically",
"related",
"words",
"such",
"as",
"“",
"trump",
"”",
",",
"“",
"antitrumpism... |
ACL | CoDA21: Evaluating Language Understanding Capabilities of NLP Models With Context-Definition Alignment | Pretrained language models (PLMs) have achieved superhuman performance on many benchmarks, creating a need for harder tasks. We introduce CoDA21 (Context Definition Alignment), a challenging benchmark that measures natural language understanding (NLU) capabilities of PLMs: Given a definition and a context each for k wo... | 2a2d8a40fdb4bdf6ba972bc010fbca86 | 2,022 | [
"pretrained language models ( plms ) have achieved superhuman performance on many benchmarks , creating a need for harder tasks .",
"we introduce coda21 ( context definition alignment ) , a challenging benchmark that measures natural language understanding ( nlu ) capabilities of plms :",
"given a definition an... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "pretrained language models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"pretrained",
"language",
"models"
],
"offsets": [
0,
1,
... | [
"pretrained",
"language",
"models",
"(",
"plms",
")",
"have",
"achieved",
"superhuman",
"performance",
"on",
"many",
"benchmarks",
",",
"creating",
"a",
"need",
"for",
"harder",
"tasks",
".",
"we",
"introduce",
"coda21",
"(",
"context",
"definition",
"alignment"... |
ACL | Learning to Segment Actions from Observation and Narration | We apply a generative segmental model of task structure, guided by narration, to action segmentation in video. We focus on unsupervised and weakly-supervised settings where no action labels are known during training. Despite its simplicity, our model performs competitively with previous work on a dataset of naturalisti... | 8e379c213d20b142fe3d3d4ebce945b7 | 2,020 | [
"we apply a generative segmental model of task structure , guided by narration , to action segmentation in video .",
"we focus on unsupervised and weakly - supervised settings where no action labels are known during training .",
"despite its simplicity , our model performs competitively with previous work on a ... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "generative segmental model of task structure",
... | [
"we",
"apply",
"a",
"generative",
"segmental",
"model",
"of",
"task",
"structure",
",",
"guided",
"by",
"narration",
",",
"to",
"action",
"segmentation",
"in",
"video",
".",
"we",
"focus",
"on",
"unsupervised",
"and",
"weakly",
"-",
"supervised",
"settings",
... |
ACL | Recognising Agreement and Disagreement between Stances with Reason Comparing Networks | We identify agreement and disagreement between utterances that express stances towards a topic of discussion. Existing methods focus mainly on conversational settings, where dialogic features are used for (dis)agreement inference. We extend this scope and seek to detect stance (dis)agreement in a broader setting, where... | 8054ba69f421ed2d4552e85681ffb8b1 | 2,019 | [
"we identify agreement and disagreement between utterances that express stances towards a topic of discussion .",
"existing methods focus mainly on conversational settings , where dialogic features are used for ( dis ) agreement inference .",
"we extend this scope and seek to detect stance ( dis ) agreement in ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "agreement and disagreement between utterances",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"agreement",
"and",
"disagreement",
"between",
"utterances"
... | [
"we",
"identify",
"agreement",
"and",
"disagreement",
"between",
"utterances",
"that",
"express",
"stances",
"towards",
"a",
"topic",
"of",
"discussion",
".",
"existing",
"methods",
"focus",
"mainly",
"on",
"conversational",
"settings",
",",
"where",
"dialogic",
"... |
ACL | A Three-Parameter Rank-Frequency Relation in Natural Languages | We present that, the rank-frequency relation in textual data follows f ∝ r-𝛼(r+𝛾)-𝛽, where f is the token frequency and r is the rank by frequency, with (𝛼, 𝛽, 𝛾) as parameters. The formulation is derived based on the empirical observation that d2 (x+y)/dx2 is a typical impulse function, where (x,y)=(log r, log f... | ec088af71c948767e9e66e0406edc3c2 | 2,020 | [
"we present that , the rank - frequency relation in textual data follows f [UNK] r - [UNK] ( r + [UNK] ) - [UNK] , where f is the token frequency and r is the rank by frequency , with ( [UNK] , [UNK] , [UNK] ) as parameters .",
"the formulation is derived based on the empirical observation that d2 ( x + y ) / dx2... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "rank - frequency relation",
"nugget_type": ... | [
"we",
"present",
"that",
",",
"the",
"rank",
"-",
"frequency",
"relation",
"in",
"textual",
"data",
"follows",
"f",
"[UNK]",
"r",
"-",
"[UNK]",
"(",
"r",
"+",
"[UNK]",
")",
"-",
"[UNK]",
",",
"where",
"f",
"is",
"the",
"token",
"frequency",
"and",
"r... |
ACL | The Limitations of Limited Context for Constituency Parsing | Incorporating syntax into neural approaches in NLP has a multitude of practical and scientific benefits. For instance, a language model that is syntax-aware is likely to be able to produce better samples; even a discriminative model like BERT with a syntax module could be used for core NLP tasks like unsupervised synta... | 15ae800c928d1b29c94b0c6f4b1325a5 | 2,021 | [
"incorporating syntax into neural approaches in nlp has a multitude of practical and scientific benefits .",
"for instance , a language model that is syntax - aware is likely to be able to produce better samples ; even a discriminative model like bert with a syntax module could be used for core nlp tasks like uns... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "syntax",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"syntax"
],
"offsets": [
1
]
}
],
"trigger": {
"text": "incorporating",
"tokens":... | [
"incorporating",
"syntax",
"into",
"neural",
"approaches",
"in",
"nlp",
"has",
"a",
"multitude",
"of",
"practical",
"and",
"scientific",
"benefits",
".",
"for",
"instance",
",",
"a",
"language",
"model",
"that",
"is",
"syntax",
"-",
"aware",
"is",
"likely",
... |
ACL | Simple and Effective Text Matching with Richer Alignment Features | In this paper, we present a fast and strong neural approach for general purpose text matching applications. We explore what is sufficient to build a fast and well-performed text matching model and propose to keep three key features available for inter-sequence alignment: original point-wise features, previous aligned f... | 3ec492153f0cb4d21876d37113e0ebcf | 2,019 | [
"in this paper , we present a fast and strong neural approach for general purpose text matching applications .",
"we explore what is sufficient to build a fast and well - performed text matching model and propose to keep three key features available for inter - sequence alignment : original point - wise features ... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
4
]
},
{
"text": "fast and strong neural approach",
"nugget_t... | [
"in",
"this",
"paper",
",",
"we",
"present",
"a",
"fast",
"and",
"strong",
"neural",
"approach",
"for",
"general",
"purpose",
"text",
"matching",
"applications",
".",
"we",
"explore",
"what",
"is",
"sufficient",
"to",
"build",
"a",
"fast",
"and",
"well",
"... |
ACL | ODE Transformer: An Ordinary Differential Equation-Inspired Model for Sequence Generation | Residual networks are an Euler discretization of solutions to Ordinary Differential Equations (ODE). This paper explores a deeper relationship between Transformer and numerical ODE methods. We first show that a residual block of layers in Transformer can be described as a higher-order solution to ODE. Inspired by this,... | 2f4bf36aa465f90dd3525101792583e1 | 2,022 | [
"residual networks are an euler discretization of solutions to ordinary differential equations ( ode ) .",
"this paper explores a deeper relationship between transformer and numerical ode methods .",
"we first show that a residual block of layers in transformer can be described as a higher - order solution to o... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "ordinary differential equations",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"ordinary",
"differential",
"equations"
],
"offsets": [
9,
1... | [
"residual",
"networks",
"are",
"an",
"euler",
"discretization",
"of",
"solutions",
"to",
"ordinary",
"differential",
"equations",
"(",
"ode",
")",
".",
"this",
"paper",
"explores",
"a",
"deeper",
"relationship",
"between",
"transformer",
"and",
"numerical",
"ode",... |
ACL | MOLEMAN: Mention-Only Linking of Entities with a Mention Annotation Network | We present an instance-based nearest neighbor approach to entity linking. In contrast to most prior entity retrieval systems which represent each entity with a single vector, we build a contextualized mention-encoder that learns to place similar mentions of the same entity closer in vector space than mentions of differ... | d7e222b24d2c8d7ae9fb0c43e7777aea | 2,021 | [
"we present an instance - based nearest neighbor approach to entity linking .",
"in contrast to most prior entity retrieval systems which represent each entity with a single vector , we build a contextualized mention - encoder that learns to place similar mentions of the same entity closer in vector space than me... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "instance - based nearest neighbor approach",
... | [
"we",
"present",
"an",
"instance",
"-",
"based",
"nearest",
"neighbor",
"approach",
"to",
"entity",
"linking",
".",
"in",
"contrast",
"to",
"most",
"prior",
"entity",
"retrieval",
"systems",
"which",
"represent",
"each",
"entity",
"with",
"a",
"single",
"vecto... |
ACL | Meta-Learning with Variational Semantic Memory for Word Sense Disambiguation | A critical challenge faced by supervised word sense disambiguation (WSD) is the lack of large annotated datasets with sufficient coverage of words in their diversity of senses. This inspired recent research on few-shot WSD using meta-learning. While such work has successfully applied meta-learning to learn new word sen... | 3005721f0d00db967ec274592b390b6b | 2,021 | [
"a critical challenge faced by supervised word sense disambiguation ( wsd ) is the lack of large annotated datasets with sufficient coverage of words in their diversity of senses .",
"this inspired recent research on few - shot wsd using meta - learning .",
"while such work has successfully applied meta - learn... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "word sense disambiguation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"word",
"sense",
"disambiguation"
],
"offsets": [
6,
7,
... | [
"a",
"critical",
"challenge",
"faced",
"by",
"supervised",
"word",
"sense",
"disambiguation",
"(",
"wsd",
")",
"is",
"the",
"lack",
"of",
"large",
"annotated",
"datasets",
"with",
"sufficient",
"coverage",
"of",
"words",
"in",
"their",
"diversity",
"of",
"sens... |
ACL | Issues with Entailment-based Zero-shot Text Classification | The general format of natural language inference (NLI) makes it tempting to be used for zero-shot text classification by casting any target label into a sentence of hypothesis and verifying whether or not it could be entailed by the input, aiming at generic classification applicable on any specified label space. In thi... | 314c422c9ca9558db58853027a3cca15 | 2,021 | [
"the general format of natural language inference ( nli ) makes it tempting to be used for zero - shot text classification by casting any target label into a sentence of hypothesis and verifying whether or not it could be entailed by the input , aiming at generic classification applicable on any specified label spa... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "general format of natural language inference",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"general",
"format",
"of",
"natural",
"language",
"in... | [
"the",
"general",
"format",
"of",
"natural",
"language",
"inference",
"(",
"nli",
")",
"makes",
"it",
"tempting",
"to",
"be",
"used",
"for",
"zero",
"-",
"shot",
"text",
"classification",
"by",
"casting",
"any",
"target",
"label",
"into",
"a",
"sentence",
... |
ACL | The Risk of Racial Bias in Hate Speech Detection | We investigate how annotators’ insensitivity to differences in dialect can lead to racial bias in automatic hate speech detection models, potentially amplifying harm against minority populations. We first uncover unexpected correlations between surface markers of African American English (AAE) and ratings of toxicity i... | 333ddb4958468d293243bc4782c99225 | 2,019 | [
"we investigate how annotators ’ insensitivity to differences in dialect can lead to racial bias in automatic hate speech detection models , potentially amplifying harm against minority populations .",
"we first uncover unexpected correlations between surface markers of african american english ( aae ) and rating... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "annotators ’ insensitivity to differences",
... | [
"we",
"investigate",
"how",
"annotators",
"’",
"insensitivity",
"to",
"differences",
"in",
"dialect",
"can",
"lead",
"to",
"racial",
"bias",
"in",
"automatic",
"hate",
"speech",
"detection",
"models",
",",
"potentially",
"amplifying",
"harm",
"against",
"minority"... |
ACL | Early Detection of Sexual Predators in Chats | An important risk that children face today is online grooming, where a so-called sexual predator establishes an emotional connection with a minor online with the objective of sexual abuse. Prior work has sought to automatically identify grooming chats, but only after an incidence has already happened in the context of ... | 302bc09071ffa8213b0806c1b2d745f3 | 2,021 | [
"an important risk that children face today is online grooming , where a so - called sexual predator establishes an emotional connection with a minor online with the objective of sexual abuse .",
"prior work has sought to automatically identify grooming chats , but only after an incidence has already happened in ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "online grooming",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"online",
"grooming"
],
"offsets": [
8,
9
]
}
],
"trigger": {
... | [
"an",
"important",
"risk",
"that",
"children",
"face",
"today",
"is",
"online",
"grooming",
",",
"where",
"a",
"so",
"-",
"called",
"sexual",
"predator",
"establishes",
"an",
"emotional",
"connection",
"with",
"a",
"minor",
"online",
"with",
"the",
"objective"... |
ACL | When a Good Translation is Wrong in Context: Context-Aware Machine Translation Improves on Deixis, Ellipsis, and Lexical Cohesion | Though machine translation errors caused by the lack of context beyond one sentence have long been acknowledged, the development of context-aware NMT systems is hampered by several problems. Firstly, standard metrics are not sensitive to improvements in consistency in document-level translations. Secondly, previous wor... | d9c63f42d331156ca520b98b867a32da | 2,019 | [
"though machine translation errors caused by the lack of context beyond one sentence have long been acknowledged , the development of context - aware nmt systems is hampered by several problems .",
"firstly , standard metrics are not sensitive to improvements in consistency in document - level translations .",
... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "standard metrics",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"standard",
"metrics"
],
"offsets": [
34,
35
]
},
{
"tex... | [
"though",
"machine",
"translation",
"errors",
"caused",
"by",
"the",
"lack",
"of",
"context",
"beyond",
"one",
"sentence",
"have",
"long",
"been",
"acknowledged",
",",
"the",
"development",
"of",
"context",
"-",
"aware",
"nmt",
"systems",
"is",
"hampered",
"by... |
ACL | Text-Free Prosody-Aware Generative Spoken Language Modeling | Speech pre-training has primarily demonstrated efficacy on classification tasks, while its capability of generating novel speech, similar to how GPT-2 can generate coherent paragraphs, has barely been explored. Generative Spoken Language Modeling (GSLM) (CITATION) is the only prior work addressing the generative aspect... | 89786e1bf80c3a9f49cd4828f9658e4b | 2,022 | [
"speech pre - training has primarily demonstrated efficacy on classification tasks , while its capability of generating novel speech , similar to how gpt - 2 can generate coherent paragraphs , has barely been explored .",
"generative spoken language modeling ( gslm ) ( citation ) is the only prior work addressing... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "generative spoken language modeling",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"generative",
"spoken",
"language",
"modeling"
],
"offsets": [
... | [
"speech",
"pre",
"-",
"training",
"has",
"primarily",
"demonstrated",
"efficacy",
"on",
"classification",
"tasks",
",",
"while",
"its",
"capability",
"of",
"generating",
"novel",
"speech",
",",
"similar",
"to",
"how",
"gpt",
"-",
"2",
"can",
"generate",
"coher... |
ACL | Lexicon Enhanced Chinese Sequence Labeling Using BERT Adapter | Lexicon information and pre-trained models, such as BERT, have been combined to explore Chinese sequence labeling tasks due to their respective strengths. However, existing methods solely fuse lexicon features via a shallow and random initialized sequence layer and do not integrate them into the bottom layers of BERT. ... | 469c024f83ad364611945647677a414c | 2,021 | [
"lexicon information and pre - trained models , such as bert , have been combined to explore chinese sequence labeling tasks due to their respective strengths .",
"however , existing methods solely fuse lexicon features via a shallow and random initialized sequence layer and do not integrate them into the bottom ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "chinese sequence labeling",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"chinese",
"sequence",
"labeling"
],
"offsets": [
17,
18,
... | [
"lexicon",
"information",
"and",
"pre",
"-",
"trained",
"models",
",",
"such",
"as",
"bert",
",",
"have",
"been",
"combined",
"to",
"explore",
"chinese",
"sequence",
"labeling",
"tasks",
"due",
"to",
"their",
"respective",
"strengths",
".",
"however",
",",
"... |
ACL | Time-Out: Temporal Referencing for Robust Modeling of Lexical Semantic Change | State-of-the-art models of lexical semantic change detection suffer from noise stemming from vector space alignment. We have empirically tested the Temporal Referencing method for lexical semantic change and show that, by avoiding alignment, it is less affected by this noise. We show that, trained on a diachronic corpu... | 5fd042839e7f02875d8806f1c97d0a50 | 2,019 | [
"state - of - the - art models of lexical semantic change detection suffer from noise stemming from vector space alignment .",
"we have empirically tested the temporal referencing method for lexical semantic change and show that , by avoiding alignment , it is less affected by this noise .",
"we show that , tra... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "noise stemming from vector space alignment",
"nugget_type": "FEA",
"argument_type": "Fault",
"tokens": [
"noise",
"stemming",
"from",
"vector",
"space",
"alignme... | [
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"models",
"of",
"lexical",
"semantic",
"change",
"detection",
"suffer",
"from",
"noise",
"stemming",
"from",
"vector",
"space",
"alignment",
".",
"we",
"have",
"empirically",
"tested",
"the",
"temporal",
"referencing... |
ACL | UMIC: An Unreferenced Metric for Image Captioning via Contrastive Learning | Despite the success of various text generation metrics such as BERTScore, it is still difficult to evaluate the image captions without enough reference captions due to the diversity of the descriptions. In this paper, we introduce a new metric UMIC, an Unreferenced Metric for Image Captioning which does not require ref... | 817a69ab79b9a712ac73e1f72ef5f6da | 2,021 | [
"despite the success of various text generation metrics such as bertscore , it is still difficult to evaluate the image captions without enough reference captions due to the diversity of the descriptions .",
"in this paper , we introduce a new metric umic , an unreferenced metric for image captioning which does n... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "text generation metrics",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"text",
"generation",
"metrics"
],
"offsets": [
5,
6,
7
... | [
"despite",
"the",
"success",
"of",
"various",
"text",
"generation",
"metrics",
"such",
"as",
"bertscore",
",",
"it",
"is",
"still",
"difficult",
"to",
"evaluate",
"the",
"image",
"captions",
"without",
"enough",
"reference",
"captions",
"due",
"to",
"the",
"di... |
ACL | Towards Debiasing Sentence Representations | As natural language processing methods are increasingly deployed in real-world scenarios such as healthcare, legal systems, and social science, it becomes necessary to recognize the role they potentially play in shaping social biases and stereotypes. Previous work has revealed the presence of social biases in widely us... | e29922b10e4a510a81d8d12684a0f09e | 2,020 | [
"as natural language processing methods are increasingly deployed in real - world scenarios such as healthcare , legal systems , and social science , it becomes necessary to recognize the role they potentially play in shaping social biases and stereotypes .",
"previous work has revealed the presence of social bia... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "natural language processing methods",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"natural",
"language",
"processing",
"methods"
],
"offsets": [
... | [
"as",
"natural",
"language",
"processing",
"methods",
"are",
"increasingly",
"deployed",
"in",
"real",
"-",
"world",
"scenarios",
"such",
"as",
"healthcare",
",",
"legal",
"systems",
",",
"and",
"social",
"science",
",",
"it",
"becomes",
"necessary",
"to",
"re... |
ACL | Multi-Granularity Interaction Network for Extractive and Abstractive Multi-Document Summarization | In this paper, we propose a multi-granularity interaction network for extractive and abstractive multi-document summarization, which jointly learn semantic representations for words, sentences, and documents. The word representations are used to generate an abstractive summary while the sentence representations are use... | abf13ad3f28f964a910e37fdb205c065 | 2,020 | [
"in this paper , we propose a multi - granularity interaction network for extractive and abstractive multi - document summarization , which jointly learn semantic representations for words , sentences , and documents .",
"the word representations are used to generate an abstractive summary while the sentence repr... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
4
]
},
{
"text": "multi - granularity interaction network",
"... | [
"in",
"this",
"paper",
",",
"we",
"propose",
"a",
"multi",
"-",
"granularity",
"interaction",
"network",
"for",
"extractive",
"and",
"abstractive",
"multi",
"-",
"document",
"summarization",
",",
"which",
"jointly",
"learn",
"semantic",
"representations",
"for",
... |
ACL | Controllable Dictionary Example Generation: Generating Example Sentences for Specific Targeted Audiences | Example sentences for targeted words in a dictionary play an important role to help readers understand the usage of words. Traditionally, example sentences in a dictionary are usually created by linguistics experts, which are labor-intensive and knowledge-intensive. In this paper, we introduce the problem of dictionary... | 61b3da4e11f0de4adf6d6aa7c32f9b10 | 2,022 | [
"example sentences for targeted words in a dictionary play an important role to help readers understand the usage of words .",
"traditionally , example sentences in a dictionary are usually created by linguistics experts , which are labor - intensive and knowledge - intensive .",
"in this paper , we introduce t... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "usage of words",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"usage",
"of",
"words"
],
"offsets": [
17,
18,
19
]
}... | [
"example",
"sentences",
"for",
"targeted",
"words",
"in",
"a",
"dictionary",
"play",
"an",
"important",
"role",
"to",
"help",
"readers",
"understand",
"the",
"usage",
"of",
"words",
".",
"traditionally",
",",
"example",
"sentences",
"in",
"a",
"dictionary",
"a... |
ACL | Control Image Captioning Spatially and Temporally | Generating image captions with user intention is an emerging need. The recently published Localized Narratives dataset takes mouse traces as another input to the image captioning task, which is an intuitive and efficient way for a user to control what to describe in the image. However, how to effectively employ traces ... | 502a38777c05342c70ffdbaf7409ccbc | 2,021 | [
"generating image captions with user intention is an emerging need .",
"the recently published localized narratives dataset takes mouse traces as another input to the image captioning task , which is an intuitive and efficient way for a user to control what to describe in the image .",
"however , how to effecti... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "image captions",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"image",
"captions"
],
"offsets": [
1,
2
]
}
],
"trigger": {
... | [
"generating",
"image",
"captions",
"with",
"user",
"intention",
"is",
"an",
"emerging",
"need",
".",
"the",
"recently",
"published",
"localized",
"narratives",
"dataset",
"takes",
"mouse",
"traces",
"as",
"another",
"input",
"to",
"the",
"image",
"captioning",
"... |
ACL | Stance Detection in COVID-19 Tweets | The prevalence of the COVID-19 pandemic in day-to-day life has yielded large amounts of stance detection data on social media sites, as users turn to social media to share their views regarding various issues related to the pandemic, e.g. stay at home mandates and wearing face masks when out in public. We set out to ma... | 44c42cf33a69f9f2d98b8b802f7b0f7a | 2,021 | [
"the prevalence of the covid - 19 pandemic in day - to - day life has yielded large amounts of stance detection data on social media sites , as users turn to social media to share their views regarding various issues related to the pandemic , e . g . stay at home mandates and wearing face masks when out in public .... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "stance detection data",
"nugget_type": "DST",
"argument_type": "Target",
"tokens": [
"stance",
"detection",
"data"
],
"offsets": [
20,
21,
22
... | [
"the",
"prevalence",
"of",
"the",
"covid",
"-",
"19",
"pandemic",
"in",
"day",
"-",
"to",
"-",
"day",
"life",
"has",
"yielded",
"large",
"amounts",
"of",
"stance",
"detection",
"data",
"on",
"social",
"media",
"sites",
",",
"as",
"users",
"turn",
"to",
... |
ACL | Ranking Generated Summaries by Correctness: An Interesting but Challenging Application for Natural Language Inference | While recent progress on abstractive summarization has led to remarkably fluent summaries, factual errors in generated summaries still severely limit their use in practice. In this paper, we evaluate summaries produced by state-of-the-art models via crowdsourcing and show that such errors occur frequently, in particula... | df59f20c8fcb15bd473437e2b71c3130 | 2,019 | [
"while recent progress on abstractive summarization has led to remarkably fluent summaries , factual errors in generated summaries still severely limit their use in practice .",
"in this paper , we evaluate summaries produced by state - of - the - art models via crowdsourcing and show that such errors occur frequ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "abstractive summarization",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"abstractive",
"summarization"
],
"offsets": [
4,
5
]
}
],... | [
"while",
"recent",
"progress",
"on",
"abstractive",
"summarization",
"has",
"led",
"to",
"remarkably",
"fluent",
"summaries",
",",
"factual",
"errors",
"in",
"generated",
"summaries",
"still",
"severely",
"limit",
"their",
"use",
"in",
"practice",
".",
"in",
"th... |
ACL | Prevent the Language Model from being Overconfident in Neural Machine Translation | The Neural Machine Translation (NMT) model is essentially a joint language model conditioned on both the source sentence and partial translation. Therefore, the NMT model naturally involves the mechanism of the Language Model (LM) that predicts the next token only based on partial translation. Despite its success, NMT ... | fe261399267fdc0cc55cfe294923bbbb | 2,021 | [
"the neural machine translation ( nmt ) model is essentially a joint language model conditioned on both the source sentence and partial translation .",
"therefore , the nmt model naturally involves the mechanism of the language model ( lm ) that predicts the next token only based on partial translation .",
"des... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "neural machine translation",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"neural",
"machine",
"translation"
],
"offsets": [
1,
2,
... | [
"the",
"neural",
"machine",
"translation",
"(",
"nmt",
")",
"model",
"is",
"essentially",
"a",
"joint",
"language",
"model",
"conditioned",
"on",
"both",
"the",
"source",
"sentence",
"and",
"partial",
"translation",
".",
"therefore",
",",
"the",
"nmt",
"model"... |
ACL | MIND: A Large-scale Dataset for News Recommendation | News recommendation is an important technique for personalized news service. Compared with product and movie recommendations which have been comprehensively studied, the research on news recommendation is much more limited, mainly due to the lack of a high-quality benchmark dataset. In this paper, we present a large-sc... | 2d5c75a8b3c70c41f2360f3cd561a4e0 | 2,020 | [
"news recommendation is an important technique for personalized news service .",
"compared with product and movie recommendations which have been comprehensively studied , the research on news recommendation is much more limited , mainly due to the lack of a high - quality benchmark dataset .",
"in this paper ,... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "news recommendation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"news",
"recommendation"
],
"offsets": [
0,
1
]
}
],
"trigge... | [
"news",
"recommendation",
"is",
"an",
"important",
"technique",
"for",
"personalized",
"news",
"service",
".",
"compared",
"with",
"product",
"and",
"movie",
"recommendations",
"which",
"have",
"been",
"comprehensively",
"studied",
",",
"the",
"research",
"on",
"n... |
ACL | An Interactive Multi-Task Learning Network for End-to-End Aspect-Based Sentiment Analysis | Aspect-based sentiment analysis produces a list of aspect terms and their corresponding sentiments for a natural language sentence. This task is usually done in a pipeline manner, with aspect term extraction performed first, followed by sentiment predictions toward the extracted aspect terms. While easier to develop, s... | 05dbe3abffd26cda4d8408884be8b352 | 2,019 | [
"aspect - based sentiment analysis produces a list of aspect terms and their corresponding sentiments for a natural language sentence .",
"this task is usually done in a pipeline manner , with aspect term extraction performed first , followed by sentiment predictions toward the extracted aspect terms .",
"while... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "aspect - based sentiment analysis",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"aspect",
"-",
"based",
"sentiment",
"analysis"
],
"offset... | [
"aspect",
"-",
"based",
"sentiment",
"analysis",
"produces",
"a",
"list",
"of",
"aspect",
"terms",
"and",
"their",
"corresponding",
"sentiments",
"for",
"a",
"natural",
"language",
"sentence",
".",
"this",
"task",
"is",
"usually",
"done",
"in",
"a",
"pipeline"... |
ACL | FEQA: A Question Answering Evaluation Framework for Faithfulness Assessment in Abstractive Summarization | Neural abstractive summarization models are prone to generate content inconsistent with the source document, i.e. unfaithful. Existing automatic metrics do not capture such mistakes effectively. We tackle the problem of evaluating faithfulness of a generated summary given its source document. We first collected human a... | 81d4c495b869c72c96caf15e04f90516 | 2,020 | [
"neural abstractive summarization models are prone to generate content inconsistent with the source document , i . e . unfaithful .",
"existing automatic metrics do not capture such mistakes effectively .",
"we tackle the problem of evaluating faithfulness of a generated summary given its source document .",
... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural abstractive summarization models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"neural",
"abstractive",
"summarization",
"models"
],
"offsets... | [
"neural",
"abstractive",
"summarization",
"models",
"are",
"prone",
"to",
"generate",
"content",
"inconsistent",
"with",
"the",
"source",
"document",
",",
"i",
".",
"e",
".",
"unfaithful",
".",
"existing",
"automatic",
"metrics",
"do",
"not",
"capture",
"such",
... |
ACL | Which Linguist Invented the Lightbulb? Presupposition Verification for Question-Answering | Many Question-Answering (QA) datasets contain unanswerable questions, but their treatment in QA systems remains primitive. Our analysis of the Natural Questions (Kwiatkowski et al. 2019) dataset reveals that a substantial portion of unanswerable questions (~21%) can be explained based on the presence of unverifiable pr... | a331c8a5de558dccf4df68d692e8221c | 2,021 | [
"many question - answering ( qa ) datasets contain unanswerable questions , but their treatment in qa systems remains primitive .",
"our analysis of the natural questions ( kwiatkowski et al . 2019 ) dataset reveals that a substantial portion of unanswerable questions ( ~ 21 % ) can be explained based on the pres... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "unanswerable questions",
"nugget_type": "WEA",
"argument_type": "Fault",
"tokens": [
"unanswerable",
"questions"
],
"offsets": [
9,
10
]
},
{
... | [
"many",
"question",
"-",
"answering",
"(",
"qa",
")",
"datasets",
"contain",
"unanswerable",
"questions",
",",
"but",
"their",
"treatment",
"in",
"qa",
"systems",
"remains",
"primitive",
".",
"our",
"analysis",
"of",
"the",
"natural",
"questions",
"(",
"kwiatk... |
ACL | Multi-stage Pre-training over Simplified Multimodal Pre-training Models | Multimodal pre-training models, such as LXMERT, have achieved excellent results in downstream tasks. However, current pre-trained models require large amounts of training data and have huge model sizes, which make them impossible to apply in low-resource situations. How to obtain similar or even better performance than... | eae5a0e58313531fb9b076cb738ce356 | 2,021 | [
"multimodal pre - training models , such as lxmert , have achieved excellent results in downstream tasks .",
"however , current pre - trained models require large amounts of training data and have huge model sizes , which make them impossible to apply in low - resource situations .",
"how to obtain similar or e... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "multimodal pre - training models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"multimodal",
"pre",
"-",
"training",
"models"
],
"offsets"... | [
"multimodal",
"pre",
"-",
"training",
"models",
",",
"such",
"as",
"lxmert",
",",
"have",
"achieved",
"excellent",
"results",
"in",
"downstream",
"tasks",
".",
"however",
",",
"current",
"pre",
"-",
"trained",
"models",
"require",
"large",
"amounts",
"of",
"... |
ACL | A Meta-framework for Spatiotemporal Quantity Extraction from Text | News events are often associated with quantities (e.g., the number of COVID-19 patients or the number of arrests in a protest), and it is often important to extract their type, time, and location from unstructured text in order to analyze these quantity events. This paper thus formulates the NLP problem of spatiotempor... | a0d7f381b9ae21f071271578770547f3 | 2,022 | [
"news events are often associated with quantities ( e . g . , the number of covid - 19 patients or the number of arrests in a protest ) , and it is often important to extract their type , time , and location from unstructured text in order to analyze these quantity events .",
"this paper thus formulates the nlp p... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "news events",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"news",
"events"
],
"offsets": [
0,
1
]
}
],
"trigger": {
"tex... | [
"news",
"events",
"are",
"often",
"associated",
"with",
"quantities",
"(",
"e",
".",
"g",
".",
",",
"the",
"number",
"of",
"covid",
"-",
"19",
"patients",
"or",
"the",
"number",
"of",
"arrests",
"in",
"a",
"protest",
")",
",",
"and",
"it",
"is",
"oft... |
ACL | Alternative Input Signals Ease Transfer in Multilingual Machine Translation | Recent work in multilingual machine translation (MMT) has focused on the potential of positive transfer between languages, particularly cases where higher-resourced languages can benefit lower-resourced ones. While training an MMT model, the supervision signals learned from one language pair can be transferred to the o... | f855b48d2de6bf85e24e1f9bdb1af561 | 2,022 | [
"recent work in multilingual machine translation ( mmt ) has focused on the potential of positive transfer between languages , particularly cases where higher - resourced languages can benefit lower - resourced ones .",
"while training an mmt model , the supervision signals learned from one language pair can be t... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "multilingual machine translation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"multilingual",
"machine",
"translation"
],
"offsets": [
3,
... | [
"recent",
"work",
"in",
"multilingual",
"machine",
"translation",
"(",
"mmt",
")",
"has",
"focused",
"on",
"the",
"potential",
"of",
"positive",
"transfer",
"between",
"languages",
",",
"particularly",
"cases",
"where",
"higher",
"-",
"resourced",
"languages",
"... |
ACL | One Size Does Not Fit All: Generating and Evaluating Variable Number of Keyphrases | Different texts shall by nature correspond to different number of keyphrases. This desideratum is largely missing from existing neural keyphrase generation models. In this study, we address this problem from both modeling and evaluation perspectives. We first propose a recurrent generative model that generates multiple... | 74a1be7f4b8e1a503bd05f5a393d64e1 | 2,020 | [
"different texts shall by nature correspond to different number of keyphrases .",
"this desideratum is largely missing from existing neural keyphrase generation models .",
"in this study , we address this problem from both modeling and evaluation perspectives .",
"we first propose a recurrent generative model... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural keyphrase generation models",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"neural",
"keyphrase",
"generation",
"models"
],
"offsets": [
... | [
"different",
"texts",
"shall",
"by",
"nature",
"correspond",
"to",
"different",
"number",
"of",
"keyphrases",
".",
"this",
"desideratum",
"is",
"largely",
"missing",
"from",
"existing",
"neural",
"keyphrase",
"generation",
"models",
".",
"in",
"this",
"study",
"... |
ACL | CorefQA: Coreference Resolution as Query-based Span Prediction | In this paper, we present CorefQA, an accurate and extensible approach for the coreference resolution task. We formulate the problem as a span prediction task, like in question answering: A query is generated for each candidate mention using its surrounding context, and a span prediction module is employed to extract t... | c625aefaa18710ada62e0ad15600d515 | 2,020 | [
"in this paper , we present corefqa , an accurate and extensible approach for the coreference resolution task .",
"we formulate the problem as a span prediction task , like in question answering : a query is generated for each candidate mention using its surrounding context , and a span prediction module is emplo... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
4
]
},
{
"text": "corefqa",
"nugget_type": "APP",
"ar... | [
"in",
"this",
"paper",
",",
"we",
"present",
"corefqa",
",",
"an",
"accurate",
"and",
"extensible",
"approach",
"for",
"the",
"coreference",
"resolution",
"task",
".",
"we",
"formulate",
"the",
"problem",
"as",
"a",
"span",
"prediction",
"task",
",",
"like",... |
ACL | Improving Massively Multilingual Neural Machine Translation and Zero-Shot Translation | Massively multilingual models for neural machine translation (NMT) are theoretically attractive, but often underperform bilingual models and deliver poor zero-shot translations. In this paper, we explore ways to improve them. We argue that multilingual NMT requires stronger modeling capacity to support language pairs w... | eff3d42111a3df4916324abb933a8fb4 | 2,020 | [
"massively multilingual models for neural machine translation ( nmt ) are theoretically attractive , but often underperform bilingual models and deliver poor zero - shot translations .",
"in this paper , we explore ways to improve them .",
"we argue that multilingual nmt requires stronger modeling capacity to s... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural machine translation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"neural",
"machine",
"translation"
],
"offsets": [
4,
5,
... | [
"massively",
"multilingual",
"models",
"for",
"neural",
"machine",
"translation",
"(",
"nmt",
")",
"are",
"theoretically",
"attractive",
",",
"but",
"often",
"underperform",
"bilingual",
"models",
"and",
"deliver",
"poor",
"zero",
"-",
"shot",
"translations",
".",... |
ACL | BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models | We introduce BitFit, a sparse-finetuning method where only the bias-terms of the model (or a subset of them) are being modified. We show that with small-to-medium training data, applying BitFit on pre-trained BERT models is competitive with (and sometimes better than) fine-tuning the entire model. For larger data, the ... | 21f298ac3baf89f90bdba7037be133f6 | 2,022 | [
"we introduce bitfit , a sparse - finetuning method where only the bias - terms of the model ( or a subset of them ) are being modified .",
"we show that with small - to - medium training data , applying bitfit on pre - trained bert models is competitive with ( and sometimes better than ) fine - tuning the entire... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "sparse - finetuning method",
"nugget_type":... | [
"we",
"introduce",
"bitfit",
",",
"a",
"sparse",
"-",
"finetuning",
"method",
"where",
"only",
"the",
"bias",
"-",
"terms",
"of",
"the",
"model",
"(",
"or",
"a",
"subset",
"of",
"them",
")",
"are",
"being",
"modified",
".",
"we",
"show",
"that",
"with"... |
ACL | Explain Yourself! Leveraging Language Models for Commonsense Reasoning | Deep learning models perform poorly on tasks that require commonsense reasoning, which often necessitates some form of world-knowledge or reasoning over information not immediately present in the input. We collect human explanations for commonsense reasoning in the form of natural language sequences and highlighted ann... | a8916af1e91f9376460a1035dfdfd066 | 2,019 | [
"deep learning models perform poorly on tasks that require commonsense reasoning , which often necessitates some form of world - knowledge or reasoning over information not immediately present in the input .",
"we collect human explanations for commonsense reasoning in the form of natural language sequences and h... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "commonsense reasoning",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"commonsense",
"reasoning"
],
"offsets": [
9,
10
]
}
],
"t... | [
"deep",
"learning",
"models",
"perform",
"poorly",
"on",
"tasks",
"that",
"require",
"commonsense",
"reasoning",
",",
"which",
"often",
"necessitates",
"some",
"form",
"of",
"world",
"-",
"knowledge",
"or",
"reasoning",
"over",
"information",
"not",
"immediately",... |
ACL | Human-in-the-Loop for Data Collection: a Multi-Target Counter Narrative Dataset to Fight Online Hate Speech | Undermining the impact of hateful content with informed and non-aggressive responses, called counter narratives, has emerged as a possible solution for having healthier online communities. Thus, some NLP studies have started addressing the task of counter narrative generation. Although such studies have made an effort ... | dbf8ac84252fe052a0d2b9fca9398c29 | 2,021 | [
"undermining the impact of hateful content with informed and non - aggressive responses , called counter narratives , has emerged as a possible solution for having healthier online communities .",
"thus , some nlp studies have started addressing the task of counter narrative generation .",
"although such studie... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "healthier online communities",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"healthier",
"online",
"communities"
],
"offsets": [
26,
27,
... | [
"undermining",
"the",
"impact",
"of",
"hateful",
"content",
"with",
"informed",
"and",
"non",
"-",
"aggressive",
"responses",
",",
"called",
"counter",
"narratives",
",",
"has",
"emerged",
"as",
"a",
"possible",
"solution",
"for",
"having",
"healthier",
"online"... |
ACL | Image Retrieval from Contextual Descriptions | The ability to integrate context, including perceptual and temporal cues, plays a pivotal role in grounding the meaning of a linguistic utterance. In order to measure to what extent current vision-and-language models master this ability, we devise a new multimodal challenge, Image Retrieval from Contextual Descriptions... | 196af50e22ccdcb75ecd2e5282061e5d | 2,022 | [
"the ability to integrate context , including perceptual and temporal cues , plays a pivotal role in grounding the meaning of a linguistic utterance .",
"in order to measure to what extent current vision - and - language models master this ability , we devise a new multimodal challenge , image retrieval from cont... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
43
]
},
{
"text": "image retrieval from contextual descriptions",
... | [
"the",
"ability",
"to",
"integrate",
"context",
",",
"including",
"perceptual",
"and",
"temporal",
"cues",
",",
"plays",
"a",
"pivotal",
"role",
"in",
"grounding",
"the",
"meaning",
"of",
"a",
"linguistic",
"utterance",
".",
"in",
"order",
"to",
"measure",
"... |
ACL | ConTinTin: Continual Learning from Task Instructions | The mainstream machine learning paradigms for NLP often work with two underlying presumptions. First, the target task is predefined and static; a system merely needs to learn to solve it exclusively. Second, the supervision of a task mainly comes from a set of labeled examples. A question arises: how to build a system ... | 21370a6a142896b8828c1d666913f3d5 | 2,022 | [
"the mainstream machine learning paradigms for nlp often work with two underlying presumptions .",
"first , the target task is predefined and static ; a system merely needs to learn to solve it exclusively .",
"second , the supervision of a task mainly comes from a set of labeled examples .",
"a question aris... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "machine learning paradigms",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"machine",
"learning",
"paradigms"
],
"offsets": [
2,
3,
... | [
"the",
"mainstream",
"machine",
"learning",
"paradigms",
"for",
"nlp",
"often",
"work",
"with",
"two",
"underlying",
"presumptions",
".",
"first",
",",
"the",
"target",
"task",
"is",
"predefined",
"and",
"static",
";",
"a",
"system",
"merely",
"needs",
"to",
... |
ACL | What Makes a Good Counselor? Learning to Distinguish between High-quality and Low-quality Counseling Conversations | The quality of a counseling intervention relies highly on the active collaboration between clients and counselors. In this paper, we explore several linguistic aspects of the collaboration process occurring during counseling conversations. Specifically, we address the differences between high-quality and low-quality co... | 0d8f96609fb925fbd2ed3bfc290f7f97 | 2,019 | [
"the quality of a counseling intervention relies highly on the active collaboration between clients and counselors .\\nin this paper , we explore several linguistic aspects of the collaboration process occurring during counseling conversations .\\nspecifically , we address the differences between high - quality and... | [
{
"event_type": "MDS",
"arguments": [
{
"text": "several linguistic aspects of the collaboration process occurring",
"nugget_type": "FEA",
"argument_type": "BaseComponent",
"tokens": [
"several",
"linguistic",
"aspects",
"of",
... | [
"the",
"quality",
"of",
"a",
"counseling",
"intervention",
"relies",
"highly",
"on",
"the",
"active",
"collaboration",
"between",
"clients",
"and",
"counselors",
".\\nin",
"this",
"paper",
",",
"we",
"explore",
"several",
"linguistic",
"aspects",
"of",
"the",
"c... |
ACL | Expressing Visual Relationships via Language | Describing images with text is a fundamental problem in vision-language research. Current studies in this domain mostly focus on single image captioning. However, in various real applications (e.g., image editing, difference interpretation, and retrieval), generating relational captions for two images, can also be very... | e1b37ad48892f01a7824995b4ce9b742 | 2,019 | [
"describing images with text is a fundamental problem in vision - language research .",
"current studies in this domain mostly focus on single image captioning .",
"however , in various real applications ( e . g . , image editing , difference interpretation , and retrieval ) , generating relational captions for... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "vision - language research",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"vision",
"-",
"language",
"research"
],
"offsets": [
9,
... | [
"describing",
"images",
"with",
"text",
"is",
"a",
"fundamental",
"problem",
"in",
"vision",
"-",
"language",
"research",
".",
"current",
"studies",
"in",
"this",
"domain",
"mostly",
"focus",
"on",
"single",
"image",
"captioning",
".",
"however",
",",
"in",
... |
ACL | Dynamic Contextualized Word Embeddings | Static word embeddings that represent words by a single vector cannot capture the variability of word meaning in different linguistic and extralinguistic contexts. Building on prior work on contextualized and dynamic word embeddings, we introduce dynamic contextualized word embeddings that represent words as a function... | a9bb5ed3faec15ecccf30c1256fc6dc1 | 2,021 | [
"static word embeddings that represent words by a single vector cannot capture the variability of word meaning in different linguistic and extralinguistic contexts .",
"building on prior work on contextualized and dynamic word embeddings , we introduce dynamic contextualized word embeddings that represent words a... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "static word embeddings",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"static",
"word",
"embeddings"
],
"offsets": [
0,
1,
2
... | [
"static",
"word",
"embeddings",
"that",
"represent",
"words",
"by",
"a",
"single",
"vector",
"cannot",
"capture",
"the",
"variability",
"of",
"word",
"meaning",
"in",
"different",
"linguistic",
"and",
"extralinguistic",
"contexts",
".",
"building",
"on",
"prior",
... |
ACL | Fine-Grained Analysis of Cross-Linguistic Syntactic Divergences | The patterns in which the syntax of different languages converges and diverges are often used to inform work on cross-lingual transfer. Nevertheless, little empirical work has been done on quantifying the prevalence of different syntactic divergences across language pairs. We propose a framework for extracting divergen... | 61146e715032788da048c365d8b39794 | 2,020 | [
"the patterns in which the syntax of different languages converges and diverges are often used to inform work on cross - lingual transfer .",
"nevertheless , little empirical work has been done on quantifying the prevalence of different syntactic divergences across language pairs .",
"we propose a framework for... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "syntax of different languages converges and diverges",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"syntax",
"of",
"different",
"languages",
"converges",
... | [
"the",
"patterns",
"in",
"which",
"the",
"syntax",
"of",
"different",
"languages",
"converges",
"and",
"diverges",
"are",
"often",
"used",
"to",
"inform",
"work",
"on",
"cross",
"-",
"lingual",
"transfer",
".",
"nevertheless",
",",
"little",
"empirical",
"work... |
ACL | ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information | Recent pretraining models in Chinese neglect two important aspects specific to the Chinese language: glyph and pinyin, which carry significant syntax and semantic information for language understanding. In this work, we propose ChineseBERT, which incorporates both the glyph and pinyin information of Chinese characters ... | 5cc01d6e4379e7c21a7bfb0fd63f8baa | 2,021 | [
"recent pretraining models in chinese neglect two important aspects specific to the chinese language : glyph and pinyin , which carry significant syntax and semantic information for language understanding .",
"in this work , we propose chinesebert , which incorporates both the glyph and pinyin information of chin... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "recent pretraining models in chinese",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"recent",
"pretraining",
"models",
"in",
"chinese"
],
... | [
"recent",
"pretraining",
"models",
"in",
"chinese",
"neglect",
"two",
"important",
"aspects",
"specific",
"to",
"the",
"chinese",
"language",
":",
"glyph",
"and",
"pinyin",
",",
"which",
"carry",
"significant",
"syntax",
"and",
"semantic",
"information",
"for",
... |
ACL | Analyzing analytical methods: The case of phonology in neural models of spoken language | Given the fast development of analysis techniques for NLP and speech processing systems, few systematic studies have been conducted to compare the strengths and weaknesses of each method. As a step in this direction we study the case of representations of phonology in neural network models of spoken language. We use tw... | 452c56eba8dc64712767cb01f382371b | 2,020 | [
"given the fast development of analysis techniques for nlp and speech processing systems , few systematic studies have been conducted to compare the strengths and weaknesses of each method .",
"as a step in this direction we study the case of representations of phonology in neural network models of spoken languag... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
36
]
},
{
"text": "case of representations of phonology",
"... | [
"given",
"the",
"fast",
"development",
"of",
"analysis",
"techniques",
"for",
"nlp",
"and",
"speech",
"processing",
"systems",
",",
"few",
"systematic",
"studies",
"have",
"been",
"conducted",
"to",
"compare",
"the",
"strengths",
"and",
"weaknesses",
"of",
"each... |
ACL | PhotoChat: A Human-Human Dialogue Dataset With Photo Sharing Behavior For Joint Image-Text Modeling | We present a new human-human dialogue dataset - PhotoChat, the first dataset that casts light on the photo sharing behavior in online messaging. PhotoChat contains 12k dialogues, each of which is paired with a user photo that is shared during the conversation. Based on this dataset, we propose two tasks to facilitate r... | 5453e5fbe7f5d27fd556334585e8c91c | 2,021 | [
"we present a new human - human dialogue dataset - photochat , the first dataset that casts light on the photo sharing behavior in online messaging .",
"photochat contains 12k dialogues , each of which is paired with a user photo that is shared during the conversation .",
"based on this dataset , we propose two... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "human - human dialogue dataset",
"nugget_ty... | [
"we",
"present",
"a",
"new",
"human",
"-",
"human",
"dialogue",
"dataset",
"-",
"photochat",
",",
"the",
"first",
"dataset",
"that",
"casts",
"light",
"on",
"the",
"photo",
"sharing",
"behavior",
"in",
"online",
"messaging",
".",
"photochat",
"contains",
"12... |
ACL | TextSETTR: Few-Shot Text Style Extraction and Tunable Targeted Restyling | We present a novel approach to the problem of text style transfer. Unlike previous approaches requiring style-labeled training data, our method makes use of readily-available unlabeled text by relying on the implicit connection in style between adjacent sentences, and uses labeled data only at inference time. We adapt ... | 4af816d01b5f93d8665dd57ca9a7dd43 | 2,021 | [
"we present a novel approach to the problem of text style transfer .",
"unlike previous approaches requiring style - labeled training data , our method makes use of readily - available unlabeled text by relying on the implicit connection in style between adjacent sentences , and uses labeled data only at inferenc... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "novel approach",
"nugget_type": "APP",
... | [
"we",
"present",
"a",
"novel",
"approach",
"to",
"the",
"problem",
"of",
"text",
"style",
"transfer",
".",
"unlike",
"previous",
"approaches",
"requiring",
"style",
"-",
"labeled",
"training",
"data",
",",
"our",
"method",
"makes",
"use",
"of",
"readily",
"-... |
ACL | Rethinking Self-Supervision Objectives for Generalizable Coherence Modeling | Given the claims of improved text generation quality across various pre-trained neural models, we consider the coherence evaluation of machine generated text to be one of the principal applications of coherence models that needs to be investigated. Prior work in neural coherence modeling has primarily focused on devisi... | b84baf80d2f15d5256280645466dae96 | 2,022 | [
"given the claims of improved text generation quality across various pre - trained neural models , we consider the coherence evaluation of machine generated text to be one of the principal applications of coherence models that needs to be investigated .",
"prior work in neural coherence modeling has primarily foc... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "pre - trained neural models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"pre",
"-",
"trained",
"neural",
"models"
],
"offsets": [
... | [
"given",
"the",
"claims",
"of",
"improved",
"text",
"generation",
"quality",
"across",
"various",
"pre",
"-",
"trained",
"neural",
"models",
",",
"we",
"consider",
"the",
"coherence",
"evaluation",
"of",
"machine",
"generated",
"text",
"to",
"be",
"one",
"of",... |
ACL | Unsupervised Neural Machine Translation for Low-Resource Domains via Meta-Learning | Unsupervised machine translation, which utilizes unpaired monolingual corpora as training data, has achieved comparable performance against supervised machine translation. However, it still suffers from data-scarce domains. To address this issue, this paper presents a novel meta-learning algorithm for unsupervised neur... | 37fef7d7bc3eb6d39d75695d5012e3ad | 2,021 | [
"unsupervised machine translation , which utilizes unpaired monolingual corpora as training data , has achieved comparable performance against supervised machine translation .",
"however , it still suffers from data - scarce domains .",
"to address this issue , this paper presents a novel meta - learning algori... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "unsupervised machine translation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"unsupervised",
"machine",
"translation"
],
"offsets": [
0,
... | [
"unsupervised",
"machine",
"translation",
",",
"which",
"utilizes",
"unpaired",
"monolingual",
"corpora",
"as",
"training",
"data",
",",
"has",
"achieved",
"comparable",
"performance",
"against",
"supervised",
"machine",
"translation",
".",
"however",
",",
"it",
"st... |
ACL | Exclusive Hierarchical Decoding for Deep Keyphrase Generation | Keyphrase generation (KG) aims to summarize the main ideas of a document into a set of keyphrases. A new setting is recently introduced into this problem, in which, given a document, the model needs to predict a set of keyphrases and simultaneously determine the appropriate number of keyphrases to produce. Previous wor... | 6a70379be865efe503d024b902c9ca9d | 2,020 | [
"keyphrase generation ( kg ) aims to summarize the main ideas of a document into a set of keyphrases .",
"a new setting is recently introduced into this problem , in which , given a document , the model needs to predict a set of keyphrases and simultaneously determine the appropriate number of keyphrases to produ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "keyphrase generation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"keyphrase",
"generation"
],
"offsets": [
0,
1
]
}
],
"trig... | [
"keyphrase",
"generation",
"(",
"kg",
")",
"aims",
"to",
"summarize",
"the",
"main",
"ideas",
"of",
"a",
"document",
"into",
"a",
"set",
"of",
"keyphrases",
".",
"a",
"new",
"setting",
"is",
"recently",
"introduced",
"into",
"this",
"problem",
",",
"in",
... |
ACL | VAULT: VAriable Unified Long Text Representation for Machine Reading Comprehension | Existing models on Machine Reading Comprehension (MRC) require complex model architecture for effectively modeling long texts with paragraph representation and classification, thereby making inference computationally inefficient for production use. In this work, we propose VAULT: a light-weight and parallel-efficient p... | 029c94e6733d54ab28495a498d86404c | 2,021 | [
"existing models on machine reading comprehension ( mrc ) require complex model architecture for effectively modeling long texts with paragraph representation and classification , thereby making inference computationally inefficient for production use .",
"in this work , we propose vault : a light - weight and pa... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "machine reading comprehension",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"machine",
"reading",
"comprehension"
],
"offsets": [
3,
4,
... | [
"existing",
"models",
"on",
"machine",
"reading",
"comprehension",
"(",
"mrc",
")",
"require",
"complex",
"model",
"architecture",
"for",
"effectively",
"modeling",
"long",
"texts",
"with",
"paragraph",
"representation",
"and",
"classification",
",",
"thereby",
"mak... |
ACL | On Vision Features in Multimodal Machine Translation | Previous work on multimodal machine translation (MMT) has focused on the way of incorporating vision features into translation but little attention is on the quality of vision models. In this work, we investigate the impact of vision models on MMT. Given the fact that Transformer is becoming popular in computer vision,... | e5fcab3b334a14ff1ea9984f09ff5597 | 2,022 | [
"previous work on multimodal machine translation ( mmt ) has focused on the way of incorporating vision features into translation but little attention is on the quality of vision models .",
"in this work , we investigate the impact of vision models on mmt .",
"given the fact that transformer is becoming popular... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "multimodal machine translation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"multimodal",
"machine",
"translation"
],
"offsets": [
3,
4,
... | [
"previous",
"work",
"on",
"multimodal",
"machine",
"translation",
"(",
"mmt",
")",
"has",
"focused",
"on",
"the",
"way",
"of",
"incorporating",
"vision",
"features",
"into",
"translation",
"but",
"little",
"attention",
"is",
"on",
"the",
"quality",
"of",
"visi... |
ACL | Span-Level Model for Relation Extraction | Relation Extraction is the task of identifying entity mention spans in raw text and then identifying relations between pairs of the entity mentions. Recent approaches for this span-level task have been token-level models which have inherent limitations. They cannot easily define and implement span-level features, canno... | 804b0d149dc3f747ee1a888fbfec49b3 | 2,019 | [
"relation extraction is the task of identifying entity mention spans in raw text and then identifying relations between pairs of the entity mentions .",
"recent approaches for this span - level task have been token - level models which have inherent limitations .",
"they cannot easily define and implement span ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "relation extraction",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"relation",
"extraction"
],
"offsets": [
0,
1
]
}
],
"trigge... | [
"relation",
"extraction",
"is",
"the",
"task",
"of",
"identifying",
"entity",
"mention",
"spans",
"in",
"raw",
"text",
"and",
"then",
"identifying",
"relations",
"between",
"pairs",
"of",
"the",
"entity",
"mentions",
".",
"recent",
"approaches",
"for",
"this",
... |
ACL | Cross-lingual Text Classification with Heterogeneous Graph Neural Network | Cross-lingual text classification aims at training a classifier on the source language and transferring the knowledge to target languages, which is very useful for low-resource languages. Recent multilingual pretrained language models (mPLM) achieve impressive results in cross-lingual classification tasks, but rarely c... | ee51bbb60735b662992e91fc11478085 | 2,021 | [
"cross - lingual text classification aims at training a classifier on the source language and transferring the knowledge to target languages , which is very useful for low - resource languages .",
"recent multilingual pretrained language models ( mplm ) achieve impressive results in cross - lingual classification... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "cross - lingual text classification",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"cross",
"-",
"lingual",
"text",
"classification"
],
"of... | [
"cross",
"-",
"lingual",
"text",
"classification",
"aims",
"at",
"training",
"a",
"classifier",
"on",
"the",
"source",
"language",
"and",
"transferring",
"the",
"knowledge",
"to",
"target",
"languages",
",",
"which",
"is",
"very",
"useful",
"for",
"low",
"-",
... |
ACL | Achieving Reliable Human Assessment of Open-Domain Dialogue Systems | Evaluation of open-domain dialogue systems is highly challenging and development of better techniques is highlighted time and again as desperately needed. Despite substantial efforts to carry out reliable live evaluation of systems in recent competitions, annotations have been abandoned and reported as too unreliable t... | ac9eb3b0d72b667ab4e727b85f207eeb | 2,022 | [
"evaluation of open - domain dialogue systems is highly challenging and development of better techniques is highlighted time and again as desperately needed .",
"despite substantial efforts to carry out reliable live evaluation of systems in recent competitions , annotations have been abandoned and reported as to... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "open - domain dialogue systems",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"open",
"-",
"domain",
"dialogue",
"systems"
],
"offsets": [
... | [
"evaluation",
"of",
"open",
"-",
"domain",
"dialogue",
"systems",
"is",
"highly",
"challenging",
"and",
"development",
"of",
"better",
"techniques",
"is",
"highlighted",
"time",
"and",
"again",
"as",
"desperately",
"needed",
".",
"despite",
"substantial",
"efforts... |
ACL | A Batch Normalized Inference Network Keeps the KL Vanishing Away | Variational Autoencoder (VAE) is widely used as a generative model to approximate a model’s posterior on latent variables by combining the amortized variational inference and deep neural networks. However, when paired with strong autoregressive decoders, VAE often converges to a degenerated local optimum known as “post... | d264b6603e572fb242cdb66a68bca578 | 2,020 | [
"variational autoencoder ( vae ) is widely used as a generative model to approximate a model ’ s posterior on latent variables by combining the amortized variational inference and deep neural networks .",
"however , when paired with strong autoregressive decoders , vae often converges to a degenerated local optim... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "vae",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"vae"
],
"offsets": [
3
]
}
],
"trigger": {
"text": "used",
"tokens": [
"use... | [
"variational",
"autoencoder",
"(",
"vae",
")",
"is",
"widely",
"used",
"as",
"a",
"generative",
"model",
"to",
"approximate",
"a",
"model",
"’",
"s",
"posterior",
"on",
"latent",
"variables",
"by",
"combining",
"the",
"amortized",
"variational",
"inference",
"... |
ACL | AdaLoGN: Adaptive Logic Graph Network for Reasoning-Based Machine Reading Comprehension | Recent machine reading comprehension datasets such as ReClor and LogiQA require performing logical reasoning over text. Conventional neural models are insufficient for logical reasoning, while symbolic reasoners cannot directly apply to text. To meet the challenge, we present a neural-symbolic approach which, to predic... | 445d6fd35e42a34cb2b72fb62f9db70c | 2,022 | [
"recent machine reading comprehension datasets such as reclor and logiqa require performing logical reasoning over text .",
"conventional neural models are insufficient for logical reasoning , while symbolic reasoners cannot directly apply to text .",
"to meet the challenge , we present a neural - symbolic appr... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "machine reading comprehension datasets",
"nugget_type": "DST",
"argument_type": "Target",
"tokens": [
"machine",
"reading",
"comprehension",
"datasets"
],
"offsets":... | [
"recent",
"machine",
"reading",
"comprehension",
"datasets",
"such",
"as",
"reclor",
"and",
"logiqa",
"require",
"performing",
"logical",
"reasoning",
"over",
"text",
".",
"conventional",
"neural",
"models",
"are",
"insufficient",
"for",
"logical",
"reasoning",
",",... |
ACL | Adaptive Compression of Word Embeddings | Distributed representations of words have been an indispensable component for natural language processing (NLP) tasks. However, the large memory footprint of word embeddings makes it challenging to deploy NLP models to memory-constrained devices (e.g., self-driving cars, mobile devices). In this paper, we propose a nov... | f63de5c23cce0cc5bb67d42ab12e7bed | 2,020 | [
"distributed representations of words have been an indispensable component for natural language processing ( nlp ) tasks .",
"however , the large memory footprint of word embeddings makes it challenging to deploy nlp models to memory - constrained devices ( e . g . , self - driving cars , mobile devices ) .",
"... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "natural language processing",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"natural",
"language",
"processing"
],
"offsets": [
10,
11,
... | [
"distributed",
"representations",
"of",
"words",
"have",
"been",
"an",
"indispensable",
"component",
"for",
"natural",
"language",
"processing",
"(",
"nlp",
")",
"tasks",
".",
"however",
",",
"the",
"large",
"memory",
"footprint",
"of",
"word",
"embeddings",
"ma... |
ACL | Would you Rather? A New Benchmark for Learning Machine Alignment with Cultural Values and Social Preferences | Understanding human preferences, along with cultural and social nuances, lives at the heart of natural language understanding. Concretely, we present a new task and corpus for learning alignments between machine and human preferences. Our newly introduced problem is concerned with predicting the preferable options from... | 02a83374bfd4e607bf434f9358c9910b | 2,020 | [
"understanding human preferences , along with cultural and social nuances , lives at the heart of natural language understanding .",
"concretely , we present a new task and corpus for learning alignments between machine and human preferences .",
"our newly introduced problem is concerned with predicting the pre... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "understanding human preferences , along with cultural and social nuances",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"understanding",
"human",
"preferences",
",",... | [
"understanding",
"human",
"preferences",
",",
"along",
"with",
"cultural",
"and",
"social",
"nuances",
",",
"lives",
"at",
"the",
"heart",
"of",
"natural",
"language",
"understanding",
".",
"concretely",
",",
"we",
"present",
"a",
"new",
"task",
"and",
"corpus... |
ACL | Towards Robustness of Text-to-SQL Models against Synonym Substitution | Recently, there has been significant progress in studying neural networks to translate text descriptions into SQL queries. Despite achieving good performance on some public benchmarks, existing text-to-SQL models typically rely on the lexical matching between words in natural language (NL) questions and tokens in table... | 90997c8e24cc219c41ba7ed1b1843a66 | 2,021 | [
"recently , there has been significant progress in studying neural networks to translate text descriptions into sql queries .",
"despite achieving good performance on some public benchmarks , existing text - to - sql models typically rely on the lexical matching between words in natural language ( nl ) questions ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural networks",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"neural",
"networks"
],
"offsets": [
9,
10
]
}
],
"trigger": {
... | [
"recently",
",",
"there",
"has",
"been",
"significant",
"progress",
"in",
"studying",
"neural",
"networks",
"to",
"translate",
"text",
"descriptions",
"into",
"sql",
"queries",
".",
"despite",
"achieving",
"good",
"performance",
"on",
"some",
"public",
"benchmarks... |
ACL | Improving Image Captioning Evaluation by Considering Inter References Variance | Evaluating image captions is very challenging partially due to the fact that there are multiple correct captions for every single image. Most of the existing one-to-one metrics operate by penalizing mismatches between reference and generative caption without considering the intrinsic variance between ground truth capti... | a9a6db156c1e205e45ab573278fabc76 | 2,020 | [
"evaluating image captions is very challenging partially due to the fact that there are multiple correct captions for every single image .",
"most of the existing one - to - one metrics operate by penalizing mismatches between reference and generative caption without considering the intrinsic variance between gro... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "image captions",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"image",
"captions"
],
"offsets": [
1,
2
]
}
],
"trigger": {
... | [
"evaluating",
"image",
"captions",
"is",
"very",
"challenging",
"partially",
"due",
"to",
"the",
"fact",
"that",
"there",
"are",
"multiple",
"correct",
"captions",
"for",
"every",
"single",
"image",
".",
"most",
"of",
"the",
"existing",
"one",
"-",
"to",
"-"... |
ACL | Human Attention Maps for Text Classification: Do Humans and Neural Networks Focus on the Same Words? | Motivated by human attention, computational attention mechanisms have been designed to help neural networks adjust their focus on specific parts of the input data. While attention mechanisms are claimed to achieve interpretability, little is known about the actual relationships between machine and human attention. In t... | 125942491a015a78f8b6a20531649fff | 2,020 | [
"motivated by human attention , computational attention mechanisms have been designed to help neural networks adjust their focus on specific parts of the input data .",
"while attention mechanisms are claimed to achieve interpretability , little is known about the actual relationships between machine and human at... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "computational attention mechanisms",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"computational",
"attention",
"mechanisms"
],
"offsets": [
5,
... | [
"motivated",
"by",
"human",
"attention",
",",
"computational",
"attention",
"mechanisms",
"have",
"been",
"designed",
"to",
"help",
"neural",
"networks",
"adjust",
"their",
"focus",
"on",
"specific",
"parts",
"of",
"the",
"input",
"data",
".",
"while",
"attentio... |
ACL | Contrastive Learning-Enhanced Nearest Neighbor Mechanism for Multi-Label Text Classification | Multi-Label Text Classification (MLTC) is a fundamental and challenging task in natural language processing. Previous studies mainly focus on learning text representation and modeling label correlation but neglect the rich knowledge from the existing similar instances when predicting labels of a specific text. To make ... | bd7e317bcdd9deaf66cedaf2ea28f7c8 | 2,022 | [
"multi - label text classification ( mltc ) is a fundamental and challenging task in natural language processing .",
"previous studies mainly focus on learning text representation and modeling label correlation but neglect the rich knowledge from the existing similar instances when predicting labels of a specific... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "multi - label text classification",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"multi",
"-",
"label",
"text",
"classification"
],
"offset... | [
"multi",
"-",
"label",
"text",
"classification",
"(",
"mltc",
")",
"is",
"a",
"fundamental",
"and",
"challenging",
"task",
"in",
"natural",
"language",
"processing",
".",
"previous",
"studies",
"mainly",
"focus",
"on",
"learning",
"text",
"representation",
"and"... |
ACL | Personalized Transformer for Explainable Recommendation | Personalization of natural language generation plays a vital role in a large spectrum of tasks, such as explainable recommendation, review summarization and dialog systems. In these tasks, user and item IDs are important identifiers for personalization. Transformer, which is demonstrated with strong language modeling c... | cfc6a7968b333318c9aa4ef6d676d0f1 | 2,021 | [
"personalization of natural language generation plays a vital role in a large spectrum of tasks , such as explainable recommendation , review summarization and dialog systems .",
"in these tasks , user and item ids are important identifiers for personalization .",
"transformer , which is demonstrated with stron... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "personalization of natural language generation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"personalization",
"of",
"natural",
"language",
"generation"
... | [
"personalization",
"of",
"natural",
"language",
"generation",
"plays",
"a",
"vital",
"role",
"in",
"a",
"large",
"spectrum",
"of",
"tasks",
",",
"such",
"as",
"explainable",
"recommendation",
",",
"review",
"summarization",
"and",
"dialog",
"systems",
".",
"in",... |
ACL | Breaking the Corpus Bottleneck for Context-Aware Neural Machine Translation with Cross-Task Pre-training | Context-aware neural machine translation (NMT) remains challenging due to the lack of large-scale document-level parallel corpora. To break the corpus bottleneck, in this paper we aim to improve context-aware NMT by taking the advantage of the availability of both large-scale sentence-level parallel dataset and source-... | aa8accbc86b3cb5e002a9fa9275cf42a | 2,021 | [
"context - aware neural machine translation ( nmt ) remains challenging due to the lack of large - scale document - level parallel corpora .",
"to break the corpus bottleneck , in this paper we aim to improve context - aware nmt by taking the advantage of the availability of both large - scale sentence - level pa... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "lack",
"nugget_type": "WEA",
"argument_type": "Fault",
"tokens": [
"lack"
],
"offsets": [
14
]
}
],
"trigger": {
"text": "lack",
"tokens": [
"l... | [
"context",
"-",
"aware",
"neural",
"machine",
"translation",
"(",
"nmt",
")",
"remains",
"challenging",
"due",
"to",
"the",
"lack",
"of",
"large",
"-",
"scale",
"document",
"-",
"level",
"parallel",
"corpora",
".",
"to",
"break",
"the",
"corpus",
"bottleneck... |
ACL | Evaluating Dialogue Generation Systems via Response Selection | Existing automatic evaluation metrics for open-domain dialogue response generation systems correlate poorly with human evaluation. We focus on evaluating response generation systems via response selection. To evaluate systems properly via response selection, we propose a method to construct response selection test sets... | 977911cbe74d89f2486b82ae8b22c856 | 2,020 | [
"existing automatic evaluation metrics for open - domain dialogue response generation systems correlate poorly with human evaluation .",
"we focus on evaluating response generation systems via response selection .",
"to evaluate systems properly via response selection , we propose a method to construct response... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
18
]
},
{
"text": "response selection",
"nugget_type": "APP... | [
"existing",
"automatic",
"evaluation",
"metrics",
"for",
"open",
"-",
"domain",
"dialogue",
"response",
"generation",
"systems",
"correlate",
"poorly",
"with",
"human",
"evaluation",
".",
"we",
"focus",
"on",
"evaluating",
"response",
"generation",
"systems",
"via",... |
ACL | A Multi-Task Architecture on Relevance-based Neural Query Translation | We describe a multi-task learning approach to train a Neural Machine Translation (NMT) model with a Relevance-based Auxiliary Task (RAT) for search query translation. The translation process for Cross-lingual Information Retrieval (CLIR) task is usually treated as a black box and it is performed as an independent step.... | 00e5b4f57e70a9152ec2bb2d411a138c | 2,019 | [
"we describe a multi - task learning approach to train a neural machine translation ( nmt ) model with a relevance - based auxiliary task ( rat ) for search query translation .",
"the translation process for cross - lingual information retrieval ( clir ) task is usually treated as a black box and it is performed ... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "multi - task learning approach",
"nugget_type": "APP",
"argument_type": "Content",
"tokens": [
"multi",
"-",
"task",
"learning",
"approach"
],
"offsets": [... | [
"we",
"describe",
"a",
"multi",
"-",
"task",
"learning",
"approach",
"to",
"train",
"a",
"neural",
"machine",
"translation",
"(",
"nmt",
")",
"model",
"with",
"a",
"relevance",
"-",
"based",
"auxiliary",
"task",
"(",
"rat",
")",
"for",
"search",
"query",
... |
ACL | FrugalScore: Learning Cheaper, Lighter and Faster Evaluation Metrics for Automatic Text Generation | Fast and reliable evaluation metrics are key to R&D progress. While traditional natural language generation metrics are fast, they are not very reliable. Conversely, new metrics based on large pretrained language models are much more reliable, but require significant computational resources. In this paper, we propose F... | 341376ea91f979ded264bb180de241bd | 2,022 | [
"fast and reliable evaluation metrics are key to r & d progress .",
"while traditional natural language generation metrics are fast , they are not very reliable .",
"conversely , new metrics based on large pretrained language models are much more reliable , but require significant computational resources .",
... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "evaluation metrics",
"nugget_type": "FEA",
"argument_type": "Target",
"tokens": [
"evaluation",
"metrics"
],
"offsets": [
3,
4
]
}
],
"trigger"... | [
"fast",
"and",
"reliable",
"evaluation",
"metrics",
"are",
"key",
"to",
"r",
"&",
"d",
"progress",
".",
"while",
"traditional",
"natural",
"language",
"generation",
"metrics",
"are",
"fast",
",",
"they",
"are",
"not",
"very",
"reliable",
".",
"conversely",
"... |