Column types (from the dataset preview): venue (string, 1 unique value), title (string, 18–162 chars), abstract (string, 252–1.89k chars), doc_id (string, 32 chars), publication_year (int64), sentences (list of 1–13 items), events (list of 1–24 items), document (list of 50–348 tokens).

| venue | title | abstract | doc_id | publication_year | sentences | events | document |
|---|---|---|---|---|---|---|---|
ACL | A Contextual Hierarchical Attention Network with Adaptive Objective for Dialogue State Tracking | Recent studies in dialogue state tracking (DST) leverage historical information to determine states which are generally represented as slot-value pairs. However, most of them have limitations to efficiently exploit relevant context due to the lack of a powerful mechanism for modeling interactions between the slot and t... | dd3f7e548d5355eec14fc82d8e4428f2 | 2020 | [
"recent studies in dialogue state tracking ( dst ) leverage historical information to determine states which are generally represented as slot - value pairs .",
"however , most of them have limitations to efficiently exploit relevant context due to the lack of a powerful mechanism for modeling interactions betwee... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "dialogue state tracking",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"dialogue",
"state",
"tracking"
],
"offsets": [
3,
4,
5
... | [
"recent",
"studies",
"in",
"dialogue",
"state",
"tracking",
"(",
"dst",
")",
"leverage",
"historical",
"information",
"to",
"determine",
"states",
"which",
"are",
"generally",
"represented",
"as",
"slot",
"-",
"value",
"pairs",
".",
"however",
",",
"most",
"of... |
ACL | Diversifying Dialog Generation via Adaptive Label Smoothing | Neural dialogue generation models trained with the one-hot target distribution suffer from the over-confidence issue, which leads to poor generation diversity as widely reported in the literature. Although existing approaches such as label smoothing can alleviate this issue, they fail to adapt to diverse dialog context... | 310f49693c086ccb920ab06af887e997 | 2021 | [
"neural dialogue generation models trained with the one - hot target distribution suffer from the over - confidence issue , which leads to poor generation diversity as widely reported in the literature .",
"although existing approaches such as label smoothing can alleviate this issue , they fail to adapt to diver... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "suffer",
"nugget_type": "WEA",
"argument_type": "Fault",
"tokens": [
"suffer"
],
"offsets": [
12
]
},
{
"text": "over - confidence issue",
"nugget_ty... | [
"neural",
"dialogue",
"generation",
"models",
"trained",
"with",
"the",
"one",
"-",
"hot",
"target",
"distribution",
"suffer",
"from",
"the",
"over",
"-",
"confidence",
"issue",
",",
"which",
"leads",
"to",
"poor",
"generation",
"diversity",
"as",
"widely",
"r... |
ACL | Contextual Embeddings: When Are They Worth It? | We study the settings for which deep contextual embeddings (e.g., BERT) give large improvements in performance relative to classic pretrained embeddings (e.g., GloVe), and an even simpler baseline—random word embeddings—focusing on the impact of the training set size and the linguistic properties of the task. Surprisin... | c7d75e3daa674b8c9e055723ddc6c84d | 2020 | [
"we study the settings for which deep contextual embeddings ( e . g . , bert ) give large improvements in performance relative to classic pretrained embeddings ( e . g . , glove ) , and an even simpler baseline — random word embeddings — focusing on the impact of the training set size and the linguistic properties ... | [
{
"event_type": "FIN",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Finder",
"tokens": [
"we"
],
"offsets": [
65
]
},
{
"text": "perform",
"nugget_type": "E-FAC",
"a... | [
"we",
"study",
"the",
"settings",
"for",
"which",
"deep",
"contextual",
"embeddings",
"(",
"e",
".",
"g",
".",
",",
"bert",
")",
"give",
"large",
"improvements",
"in",
"performance",
"relative",
"to",
"classic",
"pretrained",
"embeddings",
"(",
"e",
".",
"... |
ACL | Span-based Semantic Parsing for Compositional Generalization | Despite the success of sequence-to-sequence (seq2seq) models in semantic parsing, recent work has shown that they fail in compositional generalization, i.e., the ability to generalize to new structures built of components observed during training. In this work, we posit that a span-based parser should lead to better co... | e74cd1cfd90f9fbc8841509b299ac931 | 2021 | [
"despite the success of sequence - to - sequence ( seq2seq ) models in semantic parsing , recent work has shown that they fail in compositional generalization , i . e . , the ability to generalize to new structures built of components observed during training .",
"in this work , we posit that a span - based parse... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "sequence - to - sequence ( seq2seq ) models",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"sequence",
"-",
"to",
"-",
"sequence",
"(",
... | [
"despite",
"the",
"success",
"of",
"sequence",
"-",
"to",
"-",
"sequence",
"(",
"seq2seq",
")",
"models",
"in",
"semantic",
"parsing",
",",
"recent",
"work",
"has",
"shown",
"that",
"they",
"fail",
"in",
"compositional",
"generalization",
",",
"i",
".",
"e... |
ACL | SAS: Dialogue State Tracking via Slot Attention and Slot Information Sharing | Dialogue state tracker is responsible for inferring user intentions through dialogue history. Previous methods have difficulties in handling dialogues with long interaction context, due to the excessive information. We propose a Dialogue State Tracker with Slot Attention and Slot Information Sharing (SAS) to reduce red... | d4c2b6823c977a9a2fc5008c2a4666d8 | 2020 | [
"dialogue state tracker is responsible for inferring user intentions through dialogue history .",
"previous methods have difficulties in handling dialogues with long interaction context , due to the excessive information .",
"we propose a dialogue state tracker with slot attention and slot information sharing (... | [
{
"event_type": "RWS",
"arguments": [
{
"text": "dialogue history",
"nugget_type": "FEA",
"argument_type": "BaseComponent",
"tokens": [
"dialogue",
"history"
],
"offsets": [
10,
11
]
},
{
... | [
"dialogue",
"state",
"tracker",
"is",
"responsible",
"for",
"inferring",
"user",
"intentions",
"through",
"dialogue",
"history",
".",
"previous",
"methods",
"have",
"difficulties",
"in",
"handling",
"dialogues",
"with",
"long",
"interaction",
"context",
",",
"due",
... |
ACL | Revisiting the Compositional Generalization Abilities of Neural Sequence Models | Compositional generalization is a fundamental trait in humans, allowing us to effortlessly combine known phrases to form novel sentences. Recent works have claimed that standard seq-to-seq models severely lack the ability to compositionally generalize. In this paper, we focus on one-shot primitive generalization as int... | 063b7a29b867807dc52b7780d3ebaac7 | 2022 | [
"compositional generalization is a fundamental trait in humans , allowing us to effortlessly combine known phrases to form novel sentences .",
"recent works have claimed that standard seq - to - seq models severely lack the ability to compositionally generalize .",
"in this paper , we focus on one - shot primit... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "standard seq - to - seq models",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"standard",
"seq",
"-",
"to",
"-",
"seq",
"models"
... | [
"compositional",
"generalization",
"is",
"a",
"fundamental",
"trait",
"in",
"humans",
",",
"allowing",
"us",
"to",
"effortlessly",
"combine",
"known",
"phrases",
"to",
"form",
"novel",
"sentences",
".",
"recent",
"works",
"have",
"claimed",
"that",
"standard",
"... |
ACL | Importance-based Neuron Allocation for Multilingual Neural Machine Translation | Multilingual neural machine translation with a single model has drawn much attention due to its capability to deal with multiple languages. However, the current multilingual translation paradigm often makes the model tend to preserve the general knowledge, but ignore the language-specific knowledge. Some previous works... | dc50d12ff1a5cc14a989603a1eb17539 | 2021 | [
"multilingual neural machine translation with a single model has drawn much attention due to its capability to deal with multiple languages .",
"however , the current multilingual translation paradigm often makes the model tend to preserve the general knowledge , but ignore the language - specific knowledge .",
... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "multilingual neural machine translation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"multilingual",
"neural",
"machine",
"translation"
],
"offsets... | [
"multilingual",
"neural",
"machine",
"translation",
"with",
"a",
"single",
"model",
"has",
"drawn",
"much",
"attention",
"due",
"to",
"its",
"capability",
"to",
"deal",
"with",
"multiple",
"languages",
".",
"however",
",",
"the",
"current",
"multilingual",
"tran... |
ACL | Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity | When primed with only a handful of training samples, very large, pretrained language models such as GPT-3 have shown competitive results when compared to fully-supervised, fine-tuned, large, pretrained language models. We demonstrate that the order in which the samples are provided can make the difference between near ... | ab3f84f0efd721490d5c6d817746aa1a | 2022 | [
"when primed with only a handful of training samples , very large , pretrained language models such as gpt - 3 have shown competitive results when compared to fully - supervised , fine - tuned , large , pretrained language models .",
"we demonstrate that the order in which the samples are provided can make the di... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "pretrained language models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"pretrained",
"language",
"models"
],
"offsets": [
13,
14,
... | [
"when",
"primed",
"with",
"only",
"a",
"handful",
"of",
"training",
"samples",
",",
"very",
"large",
",",
"pretrained",
"language",
"models",
"such",
"as",
"gpt",
"-",
"3",
"have",
"shown",
"competitive",
"results",
"when",
"compared",
"to",
"fully",
"-",
... |
ACL | Detecting Propaganda Techniques in Memes | Propaganda can be defined as a form of communication that aims to influence the opinions or the actions of people towards a specific goal; this is achieved by means of well-defined rhetorical and psychological devices. Propaganda, in the form we know it today, can be dated back to the beginning of the 17th century. How... | c0135810b88c417d9cb1ee0a8c57a66e | 2021 | [
"propaganda can be defined as a form of communication that aims to influence the opinions or the actions of people towards a specific goal ; this is achieved by means of well - defined rhetorical and psychological devices .",
"propaganda , in the form we know it today , can be dated back to the beginning of the 1... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "propaganda",
"nugget_type": "MOD",
"argument_type": "Target",
"tokens": [
"propaganda"
],
"offsets": [
0
]
}
],
"trigger": {
"text": "defined",
"tokens... | [
"propaganda",
"can",
"be",
"defined",
"as",
"a",
"form",
"of",
"communication",
"that",
"aims",
"to",
"influence",
"the",
"opinions",
"or",
"the",
"actions",
"of",
"people",
"towards",
"a",
"specific",
"goal",
";",
"this",
"is",
"achieved",
"by",
"means",
... |
ACL | A Multilingual BPE Embedding Space for Universal Sentiment Lexicon Induction | We present a new method for sentiment lexicon induction that is designed to be applicable to the entire range of typological diversity of the world’s languages. We evaluate our method on Parallel Bible Corpus+ (PBC+), a parallel corpus of 1593 languages. The key idea is to use Byte Pair Encodings (BPEs) as basic units ... | 3faa4b0cef8856549b1d2841b8d1dab6 | 2019 | [
"we present a new method for sentiment lexicon induction that is designed to be applicable to the entire range of typological diversity of the world ’ s languages .",
"we evaluate our method on parallel bible corpus + ( pbc + ) , a parallel corpus of 1593 languages .",
"the key idea is to use byte pair encoding... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "method for sentiment lexicon induction",
"n... | [
"we",
"present",
"a",
"new",
"method",
"for",
"sentiment",
"lexicon",
"induction",
"that",
"is",
"designed",
"to",
"be",
"applicable",
"to",
"the",
"entire",
"range",
"of",
"typological",
"diversity",
"of",
"the",
"world",
"’",
"s",
"languages",
".",
"we",
... |
ACL | Learning to execute instructions in a Minecraft dialogue | The Minecraft Collaborative Building Task is a two-player game in which an Architect (A) instructs a Builder (B) to construct a target structure in a simulated Blocks World Environment. We define the subtask of predicting correct action sequences (block placements and removals) in a given game context, and show that ca... | 3fa7c1ec3ba33c90238efe16ceac643e | 2020 | [
"the minecraft collaborative building task is a two - player game in which an architect ( a ) instructs a builder ( b ) to construct a target structure in a simulated blocks world environment .",
"we define the subtask of predicting correct action sequences ( block placements and removals ) in a given game contex... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "minecraft collaborative building task",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"minecraft",
"collaborative",
"building",
"task"
],
"offsets": [... | [
"the",
"minecraft",
"collaborative",
"building",
"task",
"is",
"a",
"two",
"-",
"player",
"game",
"in",
"which",
"an",
"architect",
"(",
"a",
")",
"instructs",
"a",
"builder",
"(",
"b",
")",
"to",
"construct",
"a",
"target",
"structure",
"in",
"a",
"simu... |
ACL | Fair and Argumentative Language Modeling for Computational Argumentation | Although much work in NLP has focused on measuring and mitigating stereotypical bias in semantic spaces, research addressing bias in computational argumentation is still in its infancy. In this paper, we address this research gap and conduct a thorough investigation of bias in argumentative language models. To this end... | 0afdd636f33148fb6d3df7282ae2a4b2 | 2022 | [
"although much work in nlp has focused on measuring and mitigating stereotypical bias in semantic spaces , research addressing bias in computational argumentation is still in its infancy .",
"in this paper , we address this research gap and conduct a thorough investigation of bias in argumentative language models... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "stereotypical bias",
"nugget_type": "WEA",
"argument_type": "Target",
"tokens": [
"stereotypical",
"bias"
],
"offsets": [
11,
12
]
}
],
"trigge... | [
"although",
"much",
"work",
"in",
"nlp",
"has",
"focused",
"on",
"measuring",
"and",
"mitigating",
"stereotypical",
"bias",
"in",
"semantic",
"spaces",
",",
"research",
"addressing",
"bias",
"in",
"computational",
"argumentation",
"is",
"still",
"in",
"its",
"in... |
ACL | Does BERT Know that the IS-A Relation Is Transitive? | The success of a natural language processing (NLP) system on a task does not amount to fully understanding the complexity of the task, typified by many deep learning models. One such question is: can a black-box model make logically consistent predictions for transitive relations? Recent studies suggest that pre-traine... | c1a2ab72ed6282c879fee6124957fb86 | 2022 | [
"the success of a natural language processing ( nlp ) system on a task does not amount to fully understanding the complexity of the task , typified by many deep learning models .",
"one such question is : can a black - box model make logically consistent predictions for transitive relations ?",
"recent studies ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "natural language processing system",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"natural",
"language",
"processing",
"system"
],
"offsets": [
... | [
"the",
"success",
"of",
"a",
"natural",
"language",
"processing",
"(",
"nlp",
")",
"system",
"on",
"a",
"task",
"does",
"not",
"amount",
"to",
"fully",
"understanding",
"the",
"complexity",
"of",
"the",
"task",
",",
"typified",
"by",
"many",
"deep",
"learn... |
ACL | Zero-shot Text Classification via Reinforced Self-training | Zero-shot learning has been a tough problem since no labeled data is available for unseen classes during training, especially for classes with low similarity. In this situation, transferring from seen classes to unseen classes is extremely hard. To tackle this problem, in this paper we propose a self-training based met... | e1d5fc3e23af97c5800bf02b7a3acf7b | 2020 | [
"zero - shot learning has been a tough problem since no labeled data is available for unseen classes during training , especially for classes with low similarity .",
"in this situation , transferring from seen classes to unseen classes is extremely hard .",
"to tackle this problem , in this paper we propose a s... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "zero - shot learning",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"zero",
"-",
"shot",
"learning"
],
"offsets": [
0,
1,
... | [
"zero",
"-",
"shot",
"learning",
"has",
"been",
"a",
"tough",
"problem",
"since",
"no",
"labeled",
"data",
"is",
"available",
"for",
"unseen",
"classes",
"during",
"training",
",",
"especially",
"for",
"classes",
"with",
"low",
"similarity",
".",
"in",
"this... |
ACL | Wetin dey with these comments? Modeling Sociolinguistic Factors Affecting Code-switching Behavior in Nigerian Online Discussions | Multilingual individuals code switch between languages as a part of a complex communication process. However, most computational studies have examined only one or a handful of contextual factors predictive of switching. Here, we examine Naija-English code switching in a rich contextual environment to understand the soc... | 7caf4c98c8e3d3b531228a0b08630e7a | 2019 | [
"multilingual individuals code switch between languages as a part of a complex communication process .",
"however , most computational studies have examined only one or a handful of contextual factors predictive of switching .",
"here , we examine naija - english code switching in a rich contextual environment ... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
36
]
},
{
"text": "naija - english code switching",
"nugget... | [
"multilingual",
"individuals",
"code",
"switch",
"between",
"languages",
"as",
"a",
"part",
"of",
"a",
"complex",
"communication",
"process",
".",
"however",
",",
"most",
"computational",
"studies",
"have",
"examined",
"only",
"one",
"or",
"a",
"handful",
"of",
... |
ACL | DefSent: Sentence Embeddings using Definition Sentences | Sentence embedding methods using natural language inference (NLI) datasets have been successfully applied to various tasks. However, these methods are only available for limited languages due to relying heavily on the large NLI datasets. In this paper, we propose DefSent, a sentence embedding method that uses definitio... | 265882d14c960818cf269cb172aa4439 | 2021 | [
"sentence embedding methods using natural language inference ( nli ) datasets have been successfully applied to various tasks .",
"however , these methods are only available for limited languages due to relying heavily on the large nli datasets .",
"in this paper , we propose defsent , a sentence embedding meth... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "sentence embedding methods",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"sentence",
"embedding",
"methods"
],
"offsets": [
0,
1,
... | [
"sentence",
"embedding",
"methods",
"using",
"natural",
"language",
"inference",
"(",
"nli",
")",
"datasets",
"have",
"been",
"successfully",
"applied",
"to",
"various",
"tasks",
".",
"however",
",",
"these",
"methods",
"are",
"only",
"available",
"for",
"limite... |
ACL | Multi-grained Attention with Object-level Grounding for Visual Question Answering | Attention mechanisms are widely used in Visual Question Answering (VQA) to search for visual clues related to the question. Most approaches train attention models from a coarse-grained association between sentences and images, which tends to fail on small objects or uncommon concepts. To address this problem, this pape... | e7800ea027c606549b39dd3d595cefb2 | 2019 | [
"attention mechanisms are widely used in visual question answering ( vqa ) to search for visual clues related to the question .",
"most approaches train attention models from a coarse - grained association between sentences and images , which tends to fail on small objects or uncommon concepts .",
"to address t... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "attention mechanisms",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"attention",
"mechanisms"
],
"offsets": [
0,
1
]
}
],
"trig... | [
"attention",
"mechanisms",
"are",
"widely",
"used",
"in",
"visual",
"question",
"answering",
"(",
"vqa",
")",
"to",
"search",
"for",
"visual",
"clues",
"related",
"to",
"the",
"question",
".",
"most",
"approaches",
"train",
"attention",
"models",
"from",
"a",
... |
ACL | Decoding Part-of-Speech from Human EEG Signals | This work explores techniques to predict Part-of-Speech (PoS) tags from neural signals measured at millisecond resolution with electroencephalography (EEG) during text reading. We first show that information about word length, frequency and word class is encoded by the brain at different post-stimulus latencies. We the... | d33f123399a897145b15577c525aeaa3 | 2022 | [
"this work explores techniques to predict part - of - speech ( pos ) tags from neural signals measured at millisecond resolution with electroencephalography ( eeg ) during text reading .",
"we first show that information about word length , frequency and word class is encoded by the brain at different post - stim... | [
{
"event_type": "FIN",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Finder",
"tokens": [
"we"
],
"offsets": [
31
]
},
{
"text": "encoded",
"nugget_type": "E-FAC",
"a... | [
"this",
"work",
"explores",
"techniques",
"to",
"predict",
"part",
"-",
"of",
"-",
"speech",
"(",
"pos",
")",
"tags",
"from",
"neural",
"signals",
"measured",
"at",
"millisecond",
"resolution",
"with",
"electroencephalography",
"(",
"eeg",
")",
"during",
"text... |
ACL | Targeting the Benchmark: On Methodology in Current Natural Language Processing Research | It has become a common pattern in our field: One group introduces a language task, exemplified by a dataset, which they argue is challenging enough to serve as a benchmark. They also provide a baseline model for it, which then soon is improved upon by other groups. Often, research efforts then move on, and the pattern ... | ae5aa89caeec81436360880f9a127da7 | 2021 | [
"it has become a common pattern in our field : one group introduces a language task , exemplified by a dataset , which they argue is challenging enough to serve as a benchmark .",
"they also provide a baseline model for it , which then soon is improved upon by other groups .",
"often , research efforts then mov... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
90
]
},
{
"text": "possible argumentations and their parts",
... | [
"it",
"has",
"become",
"a",
"common",
"pattern",
"in",
"our",
"field",
":",
"one",
"group",
"introduces",
"a",
"language",
"task",
",",
"exemplified",
"by",
"a",
"dataset",
",",
"which",
"they",
"argue",
"is",
"challenging",
"enough",
"to",
"serve",
"as",
... |
ACL | Deduplicating Training Data Makes Language Models Better | We find that existing language modeling datasets contain many near-duplicate examples and long repetitive substrings. As a result, over 1% of the unprompted output of language models trained on these datasets is copied verbatim from the training data. We develop two tools that allow us to deduplicate training datasets—fo... | e55d81fc3f5ae71fe10faac361f45689 | 2022 | [
"we find that existing language modeling datasets contain many near - duplicate examples and long repetitive substrings .",
"as a result , over 1 % of the unprompted output of language models trained on these datasets is copied verbatim from the training data .",
"we develop two tools that allow us to deduplica... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "existing language modeling datasets",
"nugget_type": "DST",
"argument_type": "Concern",
"tokens": [
"existing",
"language",
"modeling",
"datasets"
],
"offsets": [
... | [
"we",
"find",
"that",
"existing",
"language",
"modeling",
"datasets",
"contain",
"many",
"near",
"-",
"duplicate",
"examples",
"and",
"long",
"repetitive",
"substrings",
".",
"as",
"a",
"result",
",",
"over",
"1",
"%",
"of",
"the",
"unprompted",
"output",
"o... |
ACL | Data Augmentation with Adversarial Training for Cross-Lingual NLI | Due to recent pretrained multilingual representation models, it has become feasible to exploit labeled data from one language to train a cross-lingual model that can then be applied to multiple new languages. In practice, however, we still face the problem of scarce labeled data, leading to subpar results. In this pape... | cb09b74708934578bbe281dcc77ab3d4 | 2021 | [
"due to recent pretrained multilingual representation models , it has become feasible to exploit labeled data from one language to train a cross - lingual model that can then be applied to multiple new languages .",
"in practice , however , we still face the problem of scarce labeled data , leading to subpar resu... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "scarce labeled data",
"nugget_type": "WEA",
"argument_type": "Fault",
"tokens": [
"scarce",
"labeled",
"data"
],
"offsets": [
47,
48,
49
... | [
"due",
"to",
"recent",
"pretrained",
"multilingual",
"representation",
"models",
",",
"it",
"has",
"become",
"feasible",
"to",
"exploit",
"labeled",
"data",
"from",
"one",
"language",
"to",
"train",
"a",
"cross",
"-",
"lingual",
"model",
"that",
"can",
"then",... |
ACL | Divide, Conquer and Combine: Hierarchical Feature Fusion Network with Local and Global Perspectives for Multimodal Affective Computing | We propose a general strategy named ‘divide, conquer and combine’ for multimodal fusion. Instead of directly fusing features at holistic level, we conduct fusion hierarchically so that both local and global interactions are considered for a comprehensive interpretation of multimodal embeddings. In the ‘divide’ and ‘con... | c9bf1501d1d4c25a80583686bf9b325e | 2019 | [
"we propose a general strategy named ‘ divide , conquer and combine ’ for multimodal fusion .",
"instead of directly fusing features at holistic level , we conduct fusion hierarchically so that both local and global interactions are considered for a comprehensive interpretation of multimodal embeddings .",
"in ... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "divide , conquer and combine",
"nugget_type... | [
"we",
"propose",
"a",
"general",
"strategy",
"named",
"‘",
"divide",
",",
"conquer",
"and",
"combine",
"’",
"for",
"multimodal",
"fusion",
".",
"instead",
"of",
"directly",
"fusing",
"features",
"at",
"holistic",
"level",
",",
"we",
"conduct",
"fusion",
"hie... |
ACL | DisSent: Learning Sentence Representations from Explicit Discourse Relations | Learning effective representations of sentences is one of the core missions of natural language understanding. Existing models either train on a vast amount of text, or require costly, manually curated sentence relation datasets. We show that with dependency parsing and rule-based rubrics, we can curate a high quality ... | 8b4dfcef1a8d4341eabce3afaf5d5fbb | 2019 | [
"learning effective representations of sentences is one of the core missions of natural language understanding .",
"existing models either train on a vast amount of text , or require costly , manually curated sentence relation datasets .",
"we show that with dependency parsing and rule - based rubrics , we can ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "effective representations of sentences",
"nugget_type": "FEA",
"argument_type": "Target",
"tokens": [
"effective",
"representations",
"of",
"sentences"
],
"offsets":... | [
"learning",
"effective",
"representations",
"of",
"sentences",
"is",
"one",
"of",
"the",
"core",
"missions",
"of",
"natural",
"language",
"understanding",
".",
"existing",
"models",
"either",
"train",
"on",
"a",
"vast",
"amount",
"of",
"text",
",",
"or",
"requ... |
ACL | Are Training Samples Correlated? Learning to Generate Dialogue Responses with Multiple References | Due to its potential applications, open-domain dialogue generation has become popular and achieved remarkable progress in recent years, but sometimes suffers from generic responses. Previous models are generally trained based on 1-to-1 mapping from an input query to its response, which actually ignores the nature of 1-... | 2f1b33dd9967a6229cfa72b17073374e | 2019 | [
"due to its potential applications , open - domain dialogue generation has become popular and achieved remarkable progress in recent years , but sometimes suffers from generic responses .",
"previous models are generally trained based on 1 - to - 1 mapping from an input query to its response , which actually igno... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "open - domain dialogue generation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"open",
"-",
"domain",
"dialogue",
"generation"
],
"offset... | [
"due",
"to",
"its",
"potential",
"applications",
",",
"open",
"-",
"domain",
"dialogue",
"generation",
"has",
"become",
"popular",
"and",
"achieved",
"remarkable",
"progress",
"in",
"recent",
"years",
",",
"but",
"sometimes",
"suffers",
"from",
"generic",
"respo... |
ACL | Fast and Accurate Non-Projective Dependency Tree Linearization | We propose a graph-based method to tackle the dependency tree linearization task. We formulate the task as a Traveling Salesman Problem (TSP), and use a biaffine attention model to calculate the edge costs. We facilitate the decoding by solving the TSP for each subtree and combining the solution into a projective tree.... | afce5556b77ffca722fd37aaa1d4c63e | 2020 | [
"we propose a graph - based method to tackle the dependency tree linearization task .",
"we formulate the task as a traveling salesman problem ( tsp ) , and use a biaffine attention model to calculate the edge costs .",
"we facilitate the decoding by solving the tsp for each subtree and combining the solution i... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "graph - based method",
"nugget_type": "APP"... | [
"we",
"propose",
"a",
"graph",
"-",
"based",
"method",
"to",
"tackle",
"the",
"dependency",
"tree",
"linearization",
"task",
".",
"we",
"formulate",
"the",
"task",
"as",
"a",
"traveling",
"salesman",
"problem",
"(",
"tsp",
")",
",",
"and",
"use",
"a",
"b... |
ACL | Latent Retrieval for Weakly Supervised Open Domain Question Answering | Recent work on open domain question answering (QA) assumes strong supervision of the supporting evidence and/or assumes a blackbox information retrieval (IR) system to retrieve evidence candidates. We argue that both are suboptimal, since gold evidence is not always available, and QA is fundamentally different from IR.... | 941f01e3f537cdabf09c13cee133d47f | 2019 | [
"recent work on open domain question answering ( qa ) assumes strong supervision of the supporting evidence and / or assumes a blackbox information retrieval ( ir ) system to retrieve evidence candidates .",
"we argue that both are suboptimal , since gold evidence is not always available , and qa is fundamentally... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "open domain question answering",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"open",
"domain",
"question",
"answering"
],
"offsets": [
3,
... | [
"recent",
"work",
"on",
"open",
"domain",
"question",
"answering",
"(",
"qa",
")",
"assumes",
"strong",
"supervision",
"of",
"the",
"supporting",
"evidence",
"and",
"/",
"or",
"assumes",
"a",
"blackbox",
"information",
"retrieval",
"(",
"ir",
")",
"system",
... |
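The row above shows the schema these records share: each event argument carries a token-index `offsets` list that points into the row's tokenized `document` field, alongside the argument's stored surface `text`. A minimal sketch of recovering an argument's surface text from those offsets, using a hand-trimmed fragment of the "Latent Retrieval ..." row (the helper name `argument_surface` is an assumption, not part of the dataset):

```python
# A minimal sketch of how an event argument's `offsets` field indexes
# into the row's tokenized `document` list. The record below is a
# hand-trimmed fragment of the "Latent Retrieval ..." row above; the
# helper name is an assumption, not part of the dump.

record = {
    "document": [
        "recent", "work", "on", "open", "domain", "question", "answering",
        "(", "qa", ")", "assumes", "strong", "supervision",
    ],
    "events": [
        {
            "event_type": "ITT",
            "arguments": [
                {
                    "text": "open domain question answering",
                    "nugget_type": "TAK",
                    "argument_type": "Target",
                    "offsets": [3, 4, 5, 6],
                }
            ],
        }
    ],
}


def argument_surface(record, event_idx, arg_idx):
    """Join the document tokens selected by an argument's token offsets."""
    arg = record["events"][event_idx]["arguments"][arg_idx]
    return " ".join(record["document"][i] for i in arg["offsets"])


print(argument_surface(record, 0, 0))  # open domain question answering
```

The recovered span matches the argument's stored `text` field up to tokenization spacing, which is a quick consistency check when loading rows of this dump.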
ACL | MAAM: A Morphology-Aware Alignment Model for Unsupervised Bilingual Lexicon Induction | The task of unsupervised bilingual lexicon induction (UBLI) aims to induce word translations from monolingual corpora in two languages. Previous work has shown that morphological variation is an intractable challenge for the UBLI task, where the induced translation in failure case is usually morphologically related to ... | 663d54ac27b8c42469ff477f496d6c9a | 2,019 | [
"the task of unsupervised bilingual lexicon induction ( ubli ) aims to induce word translations from monolingual corpora in two languages .",
"previous work has shown that morphological variation is an intractable challenge for the ubli task , where the induced translation in failure case is usually morphological... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "ubli",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"ubli"
],
"offsets": [
69
]
}
],
"trigger": {
"text": "induce",
"tokens": [
... | [
"the",
"task",
"of",
"unsupervised",
"bilingual",
"lexicon",
"induction",
"(",
"ubli",
")",
"aims",
"to",
"induce",
"word",
"translations",
"from",
"monolingual",
"corpora",
"in",
"two",
"languages",
".",
"previous",
"work",
"has",
"shown",
"that",
"morphologica... |
ACL | Learning to Ask Unanswerable Questions for Machine Reading Comprehension | Machine reading comprehension with unanswerable questions is a challenging task. In this work, we propose a data augmentation technique by automatically generating relevant unanswerable questions according to an answerable question paired with its corresponding paragraph that contains the answer. We introduce a pair-to... | 8efbd136cc02e3b233fcbda528545234 | 2,019 | [
"machine reading comprehension with unanswerable questions is a challenging task .",
"in this work , we propose a data augmentation technique by automatically generating relevant unanswerable questions according to an answerable question paired with its corresponding paragraph that contains the answer .",
"we i... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
15
]
},
{
"text": "data augmentation technique",
"nugget_type... | [
"machine",
"reading",
"comprehension",
"with",
"unanswerable",
"questions",
"is",
"a",
"challenging",
"task",
".",
"in",
"this",
"work",
",",
"we",
"propose",
"a",
"data",
"augmentation",
"technique",
"by",
"automatically",
"generating",
"relevant",
"unanswerable",
... |
ACL | Gender in Danger? Evaluating Speech Translation Technology on the MuST-SHE Corpus | Translating from languages without productive grammatical gender like English into gender-marked languages is a well-known difficulty for machines. This difficulty is also due to the fact that the training data on which models are built typically reflect the asymmetries of natural languages, gender bias included. Exclu... | 8592e5849263f74171b85668954794e2 | 2,020 | [
"translating from languages without productive grammatical gender like english into gender - marked languages is a well - known difficulty for machines .",
"this difficulty is also due to the fact that the training data on which models are built typically reflect the asymmetries of natural languages , gender bias... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "for machines",
"nugget_type": "LIM",
"argument_type": "Condition",
"tokens": [
"for",
"machines"
],
"offsets": [
20,
21
]
},
{
"text": "t... | [
"translating",
"from",
"languages",
"without",
"productive",
"grammatical",
"gender",
"like",
"english",
"into",
"gender",
"-",
"marked",
"languages",
"is",
"a",
"well",
"-",
"known",
"difficulty",
"for",
"machines",
".",
"this",
"difficulty",
"is",
"also",
"due... |
ACL | Simple Unsupervised Summarization by Contextual Matching | We propose an unsupervised method for sentence summarization using only language modeling. The approach employs two language models, one that is generic (i.e. pretrained), and the other that is specific to the target domain. We show that by using a product-of-experts criteria these are enough for maintaining continuous... | dcdacec11c4e11bf3ac837de3d01da5e | 2,019 | [
"we propose an unsupervised method for sentence summarization using only language modeling .",
"the approach employs two language models , one that is generic ( i . e . pretrained ) , and the other that is specific to the target domain .",
"we show that by using a product - of - experts criteria these are enoug... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "unsupervised method",
"nugget_type": "APP",... | [
"we",
"propose",
"an",
"unsupervised",
"method",
"for",
"sentence",
"summarization",
"using",
"only",
"language",
"modeling",
".",
"the",
"approach",
"employs",
"two",
"language",
"models",
",",
"one",
"that",
"is",
"generic",
"(",
"i",
".",
"e",
".",
"pretr... |
ACL | Improving Neural Language Models by Segmenting, Attending, and Predicting the Future | Common language models typically predict the next word given the context. In this work, we propose a method that improves language modeling by learning to align the given context and the following phrase. The model does not require any linguistic annotation of phrase segmentation. Instead, we define syntactic heights a... | 976fe07ced5bb2fbfd53e118617cfc51 | 2,019 | [
"common language models typically predict the next word given the context .",
"in this work , we propose a method that improves language modeling by learning to align the given context and the following phrase .",
"the model does not require any linguistic annotation of phrase segmentation .",
"instead , we d... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "language models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"language",
"models"
],
"offsets": [
1,
2
]
}
],
"trigger": {
... | [
"common",
"language",
"models",
"typically",
"predict",
"the",
"next",
"word",
"given",
"the",
"context",
".",
"in",
"this",
"work",
",",
"we",
"propose",
"a",
"method",
"that",
"improves",
"language",
"modeling",
"by",
"learning",
"to",
"align",
"the",
"giv... |
ACL | Unsupervised Neural Single-Document Summarization of Reviews via Learning Latent Discourse Structure and its Ranking | This paper focuses on the end-to-end abstractive summarization of a single product review without supervision. We assume that a review can be described as a discourse tree, in which the summary is the root, and the child sentences explain their parent in detail. By recursively estimating a parent from its children, our... | 25b32afe6586696914f67a9ce7c46292 | 2,019 | [
"this paper focuses on the end - to - end abstractive summarization of a single product review without supervision .",
"we assume that a review can be described as a discourse tree , in which the summary is the root , and the child sentences explain their parent in detail .",
"by recursively estimating a parent... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "end - to - end abstractive summarization",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"end",
"-",
"to",
"-",
"end",
"abstractive",
"s... | [
"this",
"paper",
"focuses",
"on",
"the",
"end",
"-",
"to",
"-",
"end",
"abstractive",
"summarization",
"of",
"a",
"single",
"product",
"review",
"without",
"supervision",
".",
"we",
"assume",
"that",
"a",
"review",
"can",
"be",
"described",
"as",
"a",
"dis... |
ACL | Putting Evaluation in Context: Contextual Embeddings Improve Machine Translation Evaluation | Accurate, automatic evaluation of machine translation is critical for system tuning, and evaluating progress in the field. We proposed a simple unsupervised metric, and additional supervised metrics which rely on contextual word embeddings to encode the translation and reference sentences. We find that these models riv... | 6951f18d9139c739fe794e3888f4e308 | 2,019 | [
"accurate , automatic evaluation of machine translation is critical for system tuning , and evaluating progress in the field .",
"we proposed a simple unsupervised metric , and additional supervised metrics which rely on contextual word embeddings to encode the translation and reference sentences .",
"we find t... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "machine translation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"machine",
"translation"
],
"offsets": [
5,
6
]
}
],
"trigge... | [
"accurate",
",",
"automatic",
"evaluation",
"of",
"machine",
"translation",
"is",
"critical",
"for",
"system",
"tuning",
",",
"and",
"evaluating",
"progress",
"in",
"the",
"field",
".",
"we",
"proposed",
"a",
"simple",
"unsupervised",
"metric",
",",
"and",
"ad... |
ACL | Building a User-Generated Content North-African Arabizi Treebank: Tackling Hell | We introduce the first treebank for a romanized user-generated content variety of Algerian, a North-African Arabic dialect known for its frequent usage of code-switching. Made of 1500 sentences, fully annotated in morpho-syntax and Universal Dependency syntax, with full translation at both the word and the sentence lev... | cee24bcf5d6fbcd43d4b07419437e866 | 2,020 | [
"we introduce the first treebank for a romanized user - generated content variety of algerian , a north - african arabic dialect known for its frequent usage of code - switching .",
"made of 1500 sentences , fully annotated in morpho - syntax and universal dependency syntax , with full translation at both the wor... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "first treebank",
"nugget_type": "DST",
... | [
"we",
"introduce",
"the",
"first",
"treebank",
"for",
"a",
"romanized",
"user",
"-",
"generated",
"content",
"variety",
"of",
"algerian",
",",
"a",
"north",
"-",
"african",
"arabic",
"dialect",
"known",
"for",
"its",
"frequent",
"usage",
"of",
"code",
"-",
... |
ACL | Joint Models for Answer Verification in Question Answering Systems | This paper studies joint models for selecting correct answer sentences among the top k provided by answer sentence selection (AS2) modules, which are core components of retrieval-based Question Answering (QA) systems. Our work shows that a critical step to effectively exploiting an answer set regards modeling the inter... | 507349574c2678043fbc1497be212065 | 2,021 | [
"this paper studies joint models for selecting correct answer sentences among the top k provided by answer sentence selection ( as2 ) modules , which are core components of retrieval - based question answering ( qa ) systems .",
"our work shows that a critical step to effectively exploiting an answer set regards ... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
66
]
},
{
"text": "three - way multi - classifier",
"nugget_t... | [
"this",
"paper",
"studies",
"joint",
"models",
"for",
"selecting",
"correct",
"answer",
"sentences",
"among",
"the",
"top",
"k",
"provided",
"by",
"answer",
"sentence",
"selection",
"(",
"as2",
")",
"modules",
",",
"which",
"are",
"core",
"components",
"of",
... |
ACL | On the Importance of Diversity in Question Generation for QA | Automatic question generation (QG) has shown promise as a source of synthetic training data for question answering (QA). In this paper we ask: Is textual diversity in QG beneficial for downstream QA? Using top-p nucleus sampling to derive samples from a transformer-based question generator, we show that diversity-promo... | 44837aca51d22e42b7f3250037e5ccd5 | 2,020 | [
"automatic question generation ( qg ) has shown promise as a source of synthetic training data for question answering ( qa ) .",
"in this paper we ask : is textual diversity in qg beneficial for downstream qa ?",
"using top - p nucleus sampling to derive samples from a transformer - based question generator , w... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "question answering",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"question",
"answering"
],
"offsets": [
17,
18
]
}
],
"trigge... | [
"automatic",
"question",
"generation",
"(",
"qg",
")",
"has",
"shown",
"promise",
"as",
"a",
"source",
"of",
"synthetic",
"training",
"data",
"for",
"question",
"answering",
"(",
"qa",
")",
".",
"in",
"this",
"paper",
"we",
"ask",
":",
"is",
"textual",
"... |
ACL | Learning an Unreferenced Metric for Online Dialogue Evaluation | Evaluating the quality of a dialogue interaction between two agents is a difficult task, especially in open-domain chit-chat style dialogue. There have been recent efforts to develop automatic dialogue evaluation metrics, but most of them do not generalize to unseen datasets and/or need a human-generated reference resp... | 86fd5329cddaed4376539948a18ea10a | 2,020 | [
"evaluating the quality of a dialogue interaction between two agents is a difficult task , especially in open - domain chit - chat style dialogue .",
"there have been recent efforts to develop automatic dialogue evaluation metrics , but most of them do not generalize to unseen datasets and / or need a human - gen... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "quality of a dialogue interaction",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"quality",
"of",
"a",
"dialogue",
"interaction"
],
"offset... | [
"evaluating",
"the",
"quality",
"of",
"a",
"dialogue",
"interaction",
"between",
"two",
"agents",
"is",
"a",
"difficult",
"task",
",",
"especially",
"in",
"open",
"-",
"domain",
"chit",
"-",
"chat",
"style",
"dialogue",
".",
"there",
"have",
"been",
"recent"... |
ACL | KaggleDBQA: Realistic Evaluation of Text-to-SQL Parsers | The goal of database question answering is to enable natural language querying of real-life relational databases in diverse application domains. Recently, large-scale datasets such as Spider and WikiSQL facilitated novel modeling techniques for text-to-SQL parsing, improving zero-shot generalization to unseen databases... | bab4e6c3113026a61497e73786c42419 | 2,021 | [
"the goal of database question answering is to enable natural language querying of real - life relational databases in diverse application domains .",
"recently , large - scale datasets such as spider and wikisql facilitated novel modeling techniques for text - to - sql parsing , improving zero - shot generalizat... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "database question answering",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"database",
"question",
"answering"
],
"offsets": [
3,
4,
... | [
"the",
"goal",
"of",
"database",
"question",
"answering",
"is",
"to",
"enable",
"natural",
"language",
"querying",
"of",
"real",
"-",
"life",
"relational",
"databases",
"in",
"diverse",
"application",
"domains",
".",
"recently",
",",
"large",
"-",
"scale",
"da... |
ACL | Data Augmentation for Text Generation Without Any Augmented Data | Data augmentation is an effective way to improve the performance of many neural text generation models. However, current data augmentation methods need to define or choose proper data mapping functions that map the original samples into the augmented samples. In this work, we derive an objective to formulate the proble... | 70f965a6ef80d51713b7652d1b07b298 | 2,021 | [
"data augmentation is an effective way to improve the performance of many neural text generation models .",
"however , current data augmentation methods need to define or choose proper data mapping functions that map the original samples into the augmented samples .",
"in this work , we derive an objective to f... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "data augmentation",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"data",
"augmentation"
],
"offsets": [
0,
1
]
}
],
"trigger": ... | [
"data",
"augmentation",
"is",
"an",
"effective",
"way",
"to",
"improve",
"the",
"performance",
"of",
"many",
"neural",
"text",
"generation",
"models",
".",
"however",
",",
"current",
"data",
"augmentation",
"methods",
"need",
"to",
"define",
"or",
"choose",
"p... |
ACL | Confusionset-guided Pointer Networks for Chinese Spelling Check | This paper proposes Confusionset-guided Pointer Networks for Chinese Spell Check (CSC) task. More concretely, our approach utilizes the off-the-shelf confusionset for guiding the character generation. To this end, our novel Seq2Seq model jointly learns to copy a correct character from an input sentence through a pointe... | 074467358d7d47be338e4a7960756a3d | 2,019 | [
"this paper proposes confusionset - guided pointer networks for chinese spell check ( csc ) task .",
"more concretely , our approach utilizes the off - the - shelf confusionset for guiding the character generation .",
"to this end , our novel seq2seq model jointly learns to copy a correct character from an inpu... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "confusionset - guided pointer networks",
"nugget_type": "APP",
"argument_type": "Content",
"tokens": [
"confusionset",
"-",
"guided",
"pointer",
"networks"
],
... | [
"this",
"paper",
"proposes",
"confusionset",
"-",
"guided",
"pointer",
"networks",
"for",
"chinese",
"spell",
"check",
"(",
"csc",
")",
"task",
".",
"more",
"concretely",
",",
"our",
"approach",
"utilizes",
"the",
"off",
"-",
"the",
"-",
"shelf",
"confusions... |
ACL | FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing | We present a benchmark suite of four datasets for evaluating the fairness of pre-trained language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Switzerland, and China), five languages (English, German, French, Italian and Chinese) ... | 65284f1289392003985eeb93c299ad63 | 2,022 | [
"we present a benchmark suite of four datasets for evaluating the fairness of pre - trained language models and the techniques used to fine - tune them for downstream tasks .",
"our benchmarks cover four jurisdictions ( european council , usa , switzerland , and china ) , five languages ( english , german , frenc... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "benchmark suite of four datasets",
"nugget_... | [
"we",
"present",
"a",
"benchmark",
"suite",
"of",
"four",
"datasets",
"for",
"evaluating",
"the",
"fairness",
"of",
"pre",
"-",
"trained",
"language",
"models",
"and",
"the",
"techniques",
"used",
"to",
"fine",
"-",
"tune",
"them",
"for",
"downstream",
"task... |
ACL | NoisyTune: A Little Noise Can Help You Finetune Pretrained Language Models Better | Effectively finetuning pretrained language models (PLMs) is critical for their success in downstream tasks. However, PLMs may have risks in overfitting the pretraining tasks and data, which usually have gap with the target downstream tasks. Such gap may be difficult for existing PLM finetuning methods to overcome and l... | 641a2053a0501819a783339c12c39d0c | 2,022 | [
"effectively finetuning pretrained language models ( plms ) is critical for their success in downstream tasks .",
"however , plms may have risks in overfitting the pretraining tasks and data , which usually have gap with the target downstream tasks .",
"such gap may be difficult for existing plm finetuning meth... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "pretrained language models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"pretrained",
"language",
"models"
],
"offsets": [
2,
3,
... | [
"effectively",
"finetuning",
"pretrained",
"language",
"models",
"(",
"plms",
")",
"is",
"critical",
"for",
"their",
"success",
"in",
"downstream",
"tasks",
".",
"however",
",",
"plms",
"may",
"have",
"risks",
"in",
"overfitting",
"the",
"pretraining",
"tasks",
... |
ACL | Multilingual and Cross-Lingual Graded Lexical Entailment | Grounded in cognitive linguistics, graded lexical entailment (GR-LE) is concerned with fine-grained assertions regarding the directional hierarchical relationships between concepts on a continuous scale. In this paper, we present the first work on cross-lingual generalisation of GR-LE relation. Starting from HyperLex, ... | 73bf2463e0357a5988c0b4385cfea4a4 | 2,019 | [
"grounded in cognitive linguistics , graded lexical entailment ( gr - le ) is concerned with fine - grained assertions regarding the directional hierarchical relationships between concepts on a continuous scale .",
"in this paper , we present the first work on cross - lingual generalisation of gr - le relation ."... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "graded lexical entailment",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"graded",
"lexical",
"entailment"
],
"offsets": [
5,
6,
... | [
"grounded",
"in",
"cognitive",
"linguistics",
",",
"graded",
"lexical",
"entailment",
"(",
"gr",
"-",
"le",
")",
"is",
"concerned",
"with",
"fine",
"-",
"grained",
"assertions",
"regarding",
"the",
"directional",
"hierarchical",
"relationships",
"between",
"concep... |
ACL | Unsupervised Paraphrasing by Simulated Annealing | We propose UPSA, a novel approach that accomplishes Unsupervised Paraphrasing by Simulated Annealing. We model paraphrase generation as an optimization problem and propose a sophisticated objective function, involving semantic similarity, expression diversity, and language fluency of paraphrases. UPSA searches the sent... | b0a1d4106bdd03744fc120107fecddbb | 2,020 | [
"we propose upsa , a novel approach that accomplishes unsupervised paraphrasing by simulated annealing .",
"we model paraphrase generation as an optimization problem and propose a sophisticated objective function , involving semantic similarity , expression diversity , and language fluency of paraphrases .",
"u... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "upsa",
"nugget_type": "APP",
"argum... | [
"we",
"propose",
"upsa",
",",
"a",
"novel",
"approach",
"that",
"accomplishes",
"unsupervised",
"paraphrasing",
"by",
"simulated",
"annealing",
".",
"we",
"model",
"paraphrase",
"generation",
"as",
"an",
"optimization",
"problem",
"and",
"propose",
"a",
"sophistic... |
ACL | Episodic Memory Reader: Learning What to Remember for Question Answering from Streaming Data | We consider a novel question answering (QA) task where the machine needs to read from large streaming data (long documents or videos) without knowing when the questions will be given, which is difficult to solve with existing QA methods due to their lack of scalability. To tackle this problem, we propose a novel end-to... | e24df8d1455fc02192e07855d1d57ae0 | 2,019 | [
"we consider a novel question answering ( qa ) task where the machine needs to read from large streaming data ( long documents or videos ) without knowing when the questions will be given , which is difficult to solve with existing qa methods due to their lack of scalability .",
"to tackle this problem , we propo... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "novel question answering task",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"novel",
"question",
"answering",
"task"
],
"offsets": [
3,
... | [
"we",
"consider",
"a",
"novel",
"question",
"answering",
"(",
"qa",
")",
"task",
"where",
"the",
"machine",
"needs",
"to",
"read",
"from",
"large",
"streaming",
"data",
"(",
"long",
"documents",
"or",
"videos",
")",
"without",
"knowing",
"when",
"the",
"qu... |
ACL | Does Multi-Encoder Help? A Case Study on Context-Aware Neural Machine Translation | In encoder-decoder neural models, multiple encoders are in general used to represent the contextual information in addition to the individual sentence. In this paper, we investigate multi-encoder approaches in document-level neural machine translation (NMT). Surprisingly, we find that the context encoder does not only ... | f3e591fc2b84587a0d4f33ffccd792ce | 2,020 | [
"in encoder - decoder neural models , multiple encoders are in general used to represent the contextual information in addition to the individual sentence .",
"in this paper , we investigate multi - encoder approaches in document - level neural machine translation ( nmt ) .",
"surprisingly , we find that the co... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "encoder - decoder neural models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"encoder",
"-",
"decoder",
"neural",
"models"
],
"offsets": ... | [
"in",
"encoder",
"-",
"decoder",
"neural",
"models",
",",
"multiple",
"encoders",
"are",
"in",
"general",
"used",
"to",
"represent",
"the",
"contextual",
"information",
"in",
"addition",
"to",
"the",
"individual",
"sentence",
".",
"in",
"this",
"paper",
",",
... |
ACL | Detecting Subevents using Discourse and Narrative Features | Recognizing the internal structure of events is a challenging language processing task of great importance for text understanding. We present a supervised model for automatically identifying when one event is a subevent of another. Building on prior work, we introduce several novel features, in particular discourse and... | ad04c27c8a7926a2621d9d2a2e68bca0 | 2,019 | [
"recognizing the internal structure of events is a challenging language processing task of great importance for text understanding .",
"we present a supervised model for automatically identifying when one event is a subevent of another .",
"building on prior work , we introduce several novel features , in parti... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "recognizing the internal structure of events",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"recognizing",
"the",
"internal",
"structure",
"of",
... | [
"recognizing",
"the",
"internal",
"structure",
"of",
"events",
"is",
"a",
"challenging",
"language",
"processing",
"task",
"of",
"great",
"importance",
"for",
"text",
"understanding",
".",
"we",
"present",
"a",
"supervised",
"model",
"for",
"automatically",
"ident... |
ACL | Exploiting Syntactic Structure for Better Language Modeling: A Syntactic Distance Approach | It is commonly believed that knowledge of syntactic structure should improve language modeling. However, effectively and computationally efficiently incorporating syntactic structure into neural language models has been a challenging topic. In this paper, we make use of a multi-task objective, i.e., the models simultan... | 0964886c1d8b868c53d1d31e10761a4a | 2,020 | [
"it is commonly believed that knowledge of syntactic structure should improve language modeling .",
"however , effectively and computationally efficiently incorporating syntactic structure into neural language models has been a challenging topic .",
"in this paper , we make use of a multi - task objective , i .... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "knowledge of syntactic structure",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"knowledge",
"of",
"syntactic",
"structure"
],
"offsets": [
... | [
"it",
"is",
"commonly",
"believed",
"that",
"knowledge",
"of",
"syntactic",
"structure",
"should",
"improve",
"language",
"modeling",
".",
"however",
",",
"effectively",
"and",
"computationally",
"efficiently",
"incorporating",
"syntactic",
"structure",
"into",
"neura... |
ACL | CCMatrix: Mining Billions of High-Quality Parallel Sentences on the Web | We show that margin-based bitext mining in a multilingual sentence space can be successfully scaled to operate on monolingual corpora of billions of sentences. We use 32 snapshots of a curated common crawl corpus (Wenzel et al, 2019) totaling 71 billion unique sentences. Using one unified approach for 90 languages, we ... | 479123c9da67afeb731c7a7bea1ce64f | 2,021 | [
"we show that margin - based bitext mining in a multilingual sentence space can be successfully scaled to operate on monolingual corpora of billions of sentences .",
"we use 32 snapshots of a curated common crawl corpus ( wenzel et al , 2019 ) totaling 71 billion unique sentences .",
"using one unified approach... | [
{
"event_type": "FIN",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Finder",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "operate",
"nugget_type": "E-FAC",
"ar... | [
"we",
"show",
"that",
"margin",
"-",
"based",
"bitext",
"mining",
"in",
"a",
"multilingual",
"sentence",
"space",
"can",
"be",
"successfully",
"scaled",
"to",
"operate",
"on",
"monolingual",
"corpora",
"of",
"billions",
"of",
"sentences",
".",
"we",
"use",
"... |
ACL | Recurrent Chunking Mechanisms for Long-Text Machine Reading Comprehension | In this paper, we study machine reading comprehension (MRC) on long texts: where a model takes as inputs a lengthy document and a query, extracts a text span from the document as an answer. State-of-the-art models (e.g., BERT) tend to use a stack of transformer layers that are pre-trained from a large number of unlabel... | ef82ca352f68eaedfa3141f3a8d0b97d | 2,020 | [
"in this paper , we study machine reading comprehension ( mrc ) on long texts : where a model takes as inputs a lengthy document and a query , extracts a text span from the document as an answer .",
"state - of - the - art models ( e . g . , bert ) tend to use a stack of transformer layers that are pre - trained ... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
4
]
},
{
"text": "machine reading comprehension",
"nugget_t... | [
"in",
"this",
"paper",
",",
"we",
"study",
"machine",
"reading",
"comprehension",
"(",
"mrc",
")",
"on",
"long",
"texts",
":",
"where",
"a",
"model",
"takes",
"as",
"inputs",
"a",
"lengthy",
"document",
"and",
"a",
"query",
",",
"extracts",
"a",
"text",
... |
ACL | MECT: Multi-Metadata Embedding based Cross-Transformer for Chinese Named Entity Recognition | Recently, word enhancement has become very popular for Chinese Named Entity Recognition (NER), reducing segmentation errors and increasing the semantic and boundary information of Chinese words. However, these methods tend to ignore the information of the Chinese character structure after integrating the lexical inform... | 4fefe6498490b57d8bbb26a0b3c21b2e | 2,021 | [
"recently , word enhancement has become very popular for chinese named entity recognition ( ner ) , reducing segmentation errors and increasing the semantic and boundary information of chinese words .",
"however , these methods tend to ignore the information of the chinese character structure after integrating th... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "word enhancement",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"word",
"enhancement"
],
"offsets": [
2,
3
]
}
],
"trigger": {
... | [
"recently",
",",
"word",
"enhancement",
"has",
"become",
"very",
"popular",
"for",
"chinese",
"named",
"entity",
"recognition",
"(",
"ner",
")",
",",
"reducing",
"segmentation",
"errors",
"and",
"increasing",
"the",
"semantic",
"and",
"boundary",
"information",
... |
ACL | Understanding and Countering Stereotypes: A Computational Approach to the Stereotype Content Model | Stereotypical language expresses widely-held beliefs about different social categories. Many stereotypes are overtly negative, while others may appear positive on the surface, but still lead to negative consequences. In this work, we present a computational approach to interpreting stereotypes in text through the Stere... | d9bf0a530f00219cca4d40d9f0736db7 | 2,021 | [
"stereotypical language expresses widely - held beliefs about different social categories .",
"many stereotypes are overtly negative , while others may appear positive on the surface , but still lead to negative consequences .",
"in this work , we present a computational approach to interpreting stereotypes in ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "stereotypical language",
"nugget_type": "FEA",
"argument_type": "Target",
"tokens": [
"stereotypical",
"language"
],
"offsets": [
0,
1
]
}
],
"... | [
"stereotypical",
"language",
"expresses",
"widely",
"-",
"held",
"beliefs",
"about",
"different",
"social",
"categories",
".",
"many",
"stereotypes",
"are",
"overtly",
"negative",
",",
"while",
"others",
"may",
"appear",
"positive",
"on",
"the",
"surface",
",",
... |
ACL | XLM-E: Cross-lingual Language Model Pre-training via ELECTRA | In this paper, we introduce ELECTRA-style tasks to cross-lingual language model pre-training. Specifically, we present two pre-training tasks, namely multilingual replaced token detection, and translation replaced token detection. Besides, we pretrain the model, named as XLM-E, on both multilingual and parallel corpora... | 3b82d7880efdffb134bc1c4febbb17af | 2,022 | [
"in this paper , we introduce electra - style tasks to cross - lingual language model pre - training .",
"specifically , we present two pre - training tasks , namely multilingual replaced token detection , and translation replaced token detection .",
"besides , we pretrain the model , named as xlm - e , on both... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
4
]
},
{
"text": "electra - style tasks",
"nugget_type": "T... | [
"in",
"this",
"paper",
",",
"we",
"introduce",
"electra",
"-",
"style",
"tasks",
"to",
"cross",
"-",
"lingual",
"language",
"model",
"pre",
"-",
"training",
".",
"specifically",
",",
"we",
"present",
"two",
"pre",
"-",
"training",
"tasks",
",",
"namely",
... |
ACL | Learning Source Phrase Representations for Neural Machine Translation | The Transformer translation model (Vaswani et al., 2017) based on a multi-head attention mechanism can be computed effectively in parallel and has significantly pushed forward the performance of Neural Machine Translation (NMT). Though intuitively the attentional network can connect distant words via shorter network pa... | e13056b0602634fad0e817fa531230a3 | 2,020 | [
"the transformer translation model ( vaswani et al . , 2017 ) based on a multi - head attention mechanism can be computed effectively in parallel and has significantly pushed forward the performance of neural machine translation ( nmt ) .",
"though intuitively the attentional network can connect distant words via... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "transformer translation model",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"transformer",
"translation",
"model"
],
"offsets": [
1,
2,
... | [
"the",
"transformer",
"translation",
"model",
"(",
"vaswani",
"et",
"al",
".",
",",
"2017",
")",
"based",
"on",
"a",
"multi",
"-",
"head",
"attention",
"mechanism",
"can",
"be",
"computed",
"effectively",
"in",
"parallel",
"and",
"has",
"significantly",
"pus... |
ACL | Modeling Code-Switch Languages Using Bilingual Parallel Corpus | Language modeling is the technique to estimate the probability of a sequence of words. A bilingual language model is expected to model the sequential dependency for words across languages, which is difficult due to the inherent lack of suitable training data as well as diverse syntactic structure across languages. We p... | 4ea903c74050f70d722ac2d69a9fe138 | 2,020 | [
"language modeling is the technique to estimate the probability of a sequence of words .",
"a bilingual language model is expected to model the sequential dependency for words across languages , which is difficult due to the inherent lack of suitable training data as well as diverse syntactic structure across lan... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "bilingual language model",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"bilingual",
"language",
"model"
],
"offsets": [
16,
17,
... | [
"language",
"modeling",
"is",
"the",
"technique",
"to",
"estimate",
"the",
"probability",
"of",
"a",
"sequence",
"of",
"words",
".",
"a",
"bilingual",
"language",
"model",
"is",
"expected",
"to",
"model",
"the",
"sequential",
"dependency",
"for",
"words",
"acr... |
ACL | Changes in European Solidarity Before and During COVID-19: Evidence from a Large Crowd- and Expert-Annotated Twitter Dataset | We introduce the well-established social scientific concept of social solidarity and its contestation, anti-solidarity, as a new problem setting to supervised machine learning in NLP to assess how European solidarity discourses changed before and after the COVID-19 outbreak was declared a global pandemic. To this end, ... | 9db2d25a54b9008216f438b242be2778 | 2,021 | [
"we introduce the well - established social scientific concept of social solidarity and its contestation , anti - solidarity , as a new problem setting to supervised machine learning in nlp to assess how european solidarity discourses changed before and after the covid - 19 outbreak was declared a global pandemic .... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "well - established social scientific concept of soc... | [
"we",
"introduce",
"the",
"well",
"-",
"established",
"social",
"scientific",
"concept",
"of",
"social",
"solidarity",
"and",
"its",
"contestation",
",",
"anti",
"-",
"solidarity",
",",
"as",
"a",
"new",
"problem",
"setting",
"to",
"supervised",
"machine",
"le... |
ACL | EditNTS: An Neural Programmer-Interpreter Model for Sentence Simplification through Explicit Editing | We present the first sentence simplification model that learns explicit edit operations (ADD, DELETE, and KEEP) via a neural programmer-interpreter approach. Most current neural sentence simplification systems are variants of sequence-to-sequence models adopted from machine translation. These methods learn to simplify ... | 4b92a683d70747a7e44ae51a6f730811 | 2,019 | [
"we present the first sentence simplification model that learns explicit edit operations ( add , delete , and keep ) via a neural programmer - interpreter approach .",
"most current neural sentence simplification systems are variants of sequence - to - sequence models adopted from machine translation .",
"these... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "first sentence simplification model",
"nugg... | [
"we",
"present",
"the",
"first",
"sentence",
"simplification",
"model",
"that",
"learns",
"explicit",
"edit",
"operations",
"(",
"add",
",",
"delete",
",",
"and",
"keep",
")",
"via",
"a",
"neural",
"programmer",
"-",
"interpreter",
"approach",
".",
"most",
"... |
ACL | This Email Could Save Your Life: Introducing the Task of Email Subject Line Generation | Given the overwhelming number of emails, an effective subject line becomes essential to better inform the recipient of the email’s content. In this paper, we propose and study the task of email subject line generation: automatically generating an email subject line from the email body. We create the first dataset for t... | 944712672c507156db9ce5f60d9b2ad9 | 2,019 | [
"given the overwhelming number of emails , an effective subject line becomes essential to better inform the recipient of the email ’ s content .",
"in this paper , we propose and study the task of email subject line generation : automatically generating an email subject line from the email body .",
"we create t... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "effective subject line",
"nugget_type": "FEA",
"argument_type": "Target",
"tokens": [
"effective",
"subject",
"line"
],
"offsets": [
8,
9,
10
... | [
"given",
"the",
"overwhelming",
"number",
"of",
"emails",
",",
"an",
"effective",
"subject",
"line",
"becomes",
"essential",
"to",
"better",
"inform",
"the",
"recipient",
"of",
"the",
"email",
"’",
"s",
"content",
".",
"in",
"this",
"paper",
",",
"we",
"pr... |
ACL | Unsupervised Extractive Opinion Summarization Using Sparse Coding | Opinion summarization is the task of automatically generating summaries that encapsulate information expressed in multiple user reviews. We present Semantic Autoencoder (SemAE) to perform extractive opinion summarization in an unsupervised manner. SemAE uses dictionary learning to implicitly capture semantic informatio... | 48a1c0e0bc7f5dbf70eab2bc5188edfc | 2,022 | [
"opinion summarization is the task of automatically generating summaries that encapsulate information expressed in multiple user reviews .",
"we present semantic autoencoder ( semae ) to perform extractive opinion summarization in an unsupervised manner .",
"semae uses dictionary learning to implicitly capture ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "opinion summarization",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"opinion",
"summarization"
],
"offsets": [
0,
1
]
}
],
"tr... | [
"opinion",
"summarization",
"is",
"the",
"task",
"of",
"automatically",
"generating",
"summaries",
"that",
"encapsulate",
"information",
"expressed",
"in",
"multiple",
"user",
"reviews",
".",
"we",
"present",
"semantic",
"autoencoder",
"(",
"semae",
")",
"to",
"pe... |
ACL | KNN-Contrastive Learning for Out-of-Domain Intent Classification | The Out-of-Domain (OOD) intent classification is a basic and challenging task for dialogue systems. Previous methods commonly restrict the region (in feature space) of In-domain (IND) intent features to be compact or simply-connected implicitly, which assumes no OOD intents reside, to learn discriminative semantic feat... | 528d09c9db5ea3e1979e4408cc036b40 | 2,022 | [
"the out - of - domain ( ood ) intent classification is a basic and challenging task for dialogue systems .",
"previous methods commonly restrict the region ( in feature space ) of in - domain ( ind ) intent features to be compact or simply - connected implicitly , which assumes no ood intents reside , to learn d... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "out - of - domain ( ood ) intent classification",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"out",
"-",
"of",
"-",
"domain",
"(",
"o... | [
"the",
"out",
"-",
"of",
"-",
"domain",
"(",
"ood",
")",
"intent",
"classification",
"is",
"a",
"basic",
"and",
"challenging",
"task",
"for",
"dialogue",
"systems",
".",
"previous",
"methods",
"commonly",
"restrict",
"the",
"region",
"(",
"in",
"feature",
... |
ACL | Neural Machine Translation with Monolingual Translation Memory | Prior work has proved that Translation Memory (TM) can boost the performance of Neural Machine Translation (NMT). In contrast to existing work that uses bilingual corpus as TM and employs source-side similarity search for memory retrieval, we propose a new framework that uses monolingual memory and performs learnable m... | f9168e75c2b535fc4250284091ef01ed | 2,021 | [
"prior work has proved that translation memory ( tm ) can boost the performance of neural machine translation ( nmt ) .",
"in contrast to existing work that uses bilingual corpus as tm and employs source - side similarity search for memory retrieval , we propose a new framework that uses monolingual memory and pe... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "translation memory",
"nugget_type": "FEA",
"argument_type": "Target",
"tokens": [
"translation",
"memory"
],
"offsets": [
5,
6
]
}
],
"trigger"... | [
"prior",
"work",
"has",
"proved",
"that",
"translation",
"memory",
"(",
"tm",
")",
"can",
"boost",
"the",
"performance",
"of",
"neural",
"machine",
"translation",
"(",
"nmt",
")",
".",
"in",
"contrast",
"to",
"existing",
"work",
"that",
"uses",
"bilingual",
... |
ACL | (Re)construing Meaning in NLP | Human speakers have an extensive toolkit of ways to express themselves. In this paper, we engage with an idea largely absent from discussions of meaning in natural language understanding—namely, that the way something is expressed reflects different ways of conceptualizing or construing the information being conveyed. ... | 63900c053a6abdd5e36b5c5fe3637024 | 2,020 | [
"human speakers have an extensive toolkit of ways to express themselves .",
"in this paper , we engage with an idea largely absent from discussions of meaning in natural language understanding — namely , that the way something is expressed reflects different ways of conceptualizing or construing the information b... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
16
]
},
{
"text": "idea largely absent from discussions of meaning ... | [
"human",
"speakers",
"have",
"an",
"extensive",
"toolkit",
"of",
"ways",
"to",
"express",
"themselves",
".",
"in",
"this",
"paper",
",",
"we",
"engage",
"with",
"an",
"idea",
"largely",
"absent",
"from",
"discussions",
"of",
"meaning",
"in",
"natural",
"lang... |
ACL | RAT-SQL: Relation-Aware Schema Encoding and Linking for Text-to-SQL Parsers | When translating natural language questions into SQL queries to answer questions from a database, contemporary semantic parsing models struggle to generalize to unseen database schemas. The generalization challenge lies in (a) encoding the database relations in an accessible way for the semantic parser, and (b) modelin... | d8386f8365c5eb4398f5aab444cadd5c | 2,020 | [
"when translating natural language questions into sql queries to answer questions from a database , contemporary semantic parsing models struggle to generalize to unseen database schemas .",
"the generalization challenge lies in ( a ) encoding the database relations in an accessible way for the semantic parser , ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "natural language questions",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"natural",
"language",
"questions"
],
"offsets": [
2,
3,
... | [
"when",
"translating",
"natural",
"language",
"questions",
"into",
"sql",
"queries",
"to",
"answer",
"questions",
"from",
"a",
"database",
",",
"contemporary",
"semantic",
"parsing",
"models",
"struggle",
"to",
"generalize",
"to",
"unseen",
"database",
"schemas",
... |
ACL | How Multilingual is Multilingual BERT? | In this paper, we show that Multilingual BERT (M-BERT), released by Devlin et al. (2018) as a single language model pre-trained from monolingual corpora in 104 languages, is surprisingly good at zero-shot cross-lingual model transfer, in which task-specific annotations in one language are used to fine-tune the model fo... | bc4021e1b9facf687ee3fd33e1bd6823 | 2,019 | [
"in this paper , we show that multilingual bert ( m - bert ) , released by devlin et al . ( 2018 ) as a single language model pre - trained from monolingual corpora in 104 languages , is surprisingly good at zero - shot cross - lingual model transfer , in which task - specific annotations in one language are used t... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "zero - shot cross - lingual model transfer",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"zero",
"-",
"shot",
"cross",
"-",
"lingual",
... | [
"in",
"this",
"paper",
",",
"we",
"show",
"that",
"multilingual",
"bert",
"(",
"m",
"-",
"bert",
")",
",",
"released",
"by",
"devlin",
"et",
"al",
".",
"(",
"2018",
")",
"as",
"a",
"single",
"language",
"model",
"pre",
"-",
"trained",
"from",
"monoli... |
ACL | Cross-Modal Discrete Representation Learning | In contrast to recent advances focusing on high-level representation learning across modalities, in this work we present a self-supervised learning framework that is able to learn a representation that captures finer levels of granularity across different modalities such as concepts or events represented by visual obje... | 65cf0a8c8ef6b5da8a0f16f0ae71850a | 2,022 | [
"in contrast to recent advances focusing on high - level representation learning across modalities , in this work we present a self - supervised learning framework that is able to learn a representation that captures finer levels of granularity across different modalities such as concepts or events represented by v... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "high - level representation learning",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"high",
"-",
"level",
"representation",
"learning"
],
"... | [
"in",
"contrast",
"to",
"recent",
"advances",
"focusing",
"on",
"high",
"-",
"level",
"representation",
"learning",
"across",
"modalities",
",",
"in",
"this",
"work",
"we",
"present",
"a",
"self",
"-",
"supervised",
"learning",
"framework",
"that",
"is",
"able... |
ACL | Multilingual Knowledge Graph Completion with Self-Supervised Adaptive Graph Alignment | Predicting missing facts in a knowledge graph (KG) is crucial as modern KGs are far from complete. Due to labor-intensive human labeling, this phenomenon deteriorates when handling knowledge represented in various languages. In this paper, we explore multilingual KG completion, which leverages limited seed alignment as... | 92ed2698684aefc480442f19872bd631 | 2,022 | [
"predicting missing facts in a knowledge graph ( kg ) is crucial as modern kgs are far from complete .",
"due to labor - intensive human labeling , this phenomenon deteriorates when handling knowledge represented in various languages .",
"in this paper , we explore multilingual kg completion , which leverages l... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "missing facts",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"missing",
"facts"
],
"offsets": [
1,
2
]
}
],
"trigger": {
... | [
"predicting",
"missing",
"facts",
"in",
"a",
"knowledge",
"graph",
"(",
"kg",
")",
"is",
"crucial",
"as",
"modern",
"kgs",
"are",
"far",
"from",
"complete",
".",
"due",
"to",
"labor",
"-",
"intensive",
"human",
"labeling",
",",
"this",
"phenomenon",
"deter... |
ACL | Max-Margin Incremental CCG Parsing | Incremental syntactic parsing has been an active research area both for cognitive scientists trying to model human sentence processing and for NLP researchers attempting to combine incremental parsing with language modelling for ASR and MT. Most effort has been directed at designing the right transition mechanism, but ... | 2ac830d8f2dc33c854d9e7ad2c28c66c | 2,020 | [
"incremental syntactic parsing has been an active research area both for cognitive scientists trying to model human sentence processing and for nlp researchers attempting to combine incremental parsing with language modelling for asr and mt .",
"most effort has been directed at designing the right transition mech... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "incremental transition mechanism of a recently proposed ccg parser",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"incremental",
"transition",
"mechanism",
"of",
... | [
"incremental",
"syntactic",
"parsing",
"has",
"been",
"an",
"active",
"research",
"area",
"both",
"for",
"cognitive",
"scientists",
"trying",
"to",
"model",
"human",
"sentence",
"processing",
"and",
"for",
"nlp",
"researchers",
"attempting",
"to",
"combine",
"incr... |
ACL | Imputing Out-of-Vocabulary Embeddings with LOVE Makes LanguageModels Robust with Little Cost | State-of-the-art NLP systems represent inputs with word embeddings, but these are brittle when faced with Out-of-Vocabulary (OOV) words.To address this issue, we follow the principle of mimick-like models to generate vectors for unseen words, by learning the behavior of pre-trained embeddings using only the surface for... | 520f89232eea51bb6875530722119084 | 2,022 | [
"state - of - the - art nlp systems represent inputs with word embeddings , but these are brittle when faced with out - of - vocabulary ( oov ) words .",
"to address this issue , we follow the principle of mimick - like models to generate vectors for unseen words , by learning the behavior of pre - trained embedd... | [
{
"event_type": "RWS",
"arguments": [
{
"text": "state - of - the - art nlp systems",
"nugget_type": "APP",
"argument_type": "Subject",
"tokens": [
"state",
"-",
"of",
"-",
"the",
"-",
"art",
... | [
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"nlp",
"systems",
"represent",
"inputs",
"with",
"word",
"embeddings",
",",
"but",
"these",
"are",
"brittle",
"when",
"faced",
"with",
"out",
"-",
"of",
"-",
"vocabulary",
"(",
"oov",
")",
"words",
".",
"to",... |
ACL | Transfer Capsule Network for Aspect Level Sentiment Classification | Aspect-level sentiment classification aims to determine the sentiment polarity of a sentence towards an aspect. Due to the high cost in annotation, the lack of aspect-level labeled data becomes a major obstacle in this area. On the other hand, document-level labeled data like reviews are easily accessible from online w... | bc66a1d22f95dc06626bc183819217b9 | 2,019 | [
"aspect - level sentiment classification aims to determine the sentiment polarity of a sentence towards an aspect .",
"due to the high cost in annotation , the lack of aspect - level labeled data becomes a major obstacle in this area .",
"on the other hand , document - level labeled data like reviews are easily... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "aspect - level sentiment classification",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"aspect",
"-",
"level",
"sentiment",
"classification"
],
... | [
"aspect",
"-",
"level",
"sentiment",
"classification",
"aims",
"to",
"determine",
"the",
"sentiment",
"polarity",
"of",
"a",
"sentence",
"towards",
"an",
"aspect",
".",
"due",
"to",
"the",
"high",
"cost",
"in",
"annotation",
",",
"the",
"lack",
"of",
"aspect... |
ACL | Visual-Language Navigation Pretraining via Prompt-based Environmental Self-exploration | Vision-language navigation (VLN) is a challenging task due to its large searching space in the environment. To address this problem, previous works have proposed some methods of fine-tuning a large model that pretrained on large-scale datasets. However, the conventional fine-tuning methods require extra human-labeled n... | 7688ff4f3684432666ec693d40588e1a | 2,022 | [
"vision - language navigation ( vln ) is a challenging task due to its large searching space in the environment .",
"to address this problem , previous works have proposed some methods of fine - tuning a large model that pretrained on large - scale datasets .",
"however , the conventional fine - tuning methods ... | [
{
"event_type": "CMP",
"arguments": [
{
"text": "prompt - based environmental self - exploration",
"nugget_type": "APP",
"argument_type": "Arg1",
"tokens": [
"prompt",
"-",
"based",
"environmental",
"self",
"-"... | [
"vision",
"-",
"language",
"navigation",
"(",
"vln",
")",
"is",
"a",
"challenging",
"task",
"due",
"to",
"its",
"large",
"searching",
"space",
"in",
"the",
"environment",
".",
"to",
"address",
"this",
"problem",
",",
"previous",
"works",
"have",
"proposed",
... |
ACL | Large Scale Substitution-based Word Sense Induction | We present a word-sense induction method based on pre-trained masked language models (MLMs), which can cheaply scale to large vocabularies and large corpora. The result is a corpus which is sense-tagged according to a corpus-derived sense inventory and where each sense is associated with indicative words. Evaluation on... | 9a52653e48a4cbd093c604b0cd0098ca | 2,022 | [
"we present a word - sense induction method based on pre - trained masked language models ( mlms ) , which can cheaply scale to large vocabularies and large corpora .",
"the result is a corpus which is sense - tagged according to a corpus - derived sense inventory and where each sense is associated with indicativ... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "word - sense induction method based on pre - traine... | [
"we",
"present",
"a",
"word",
"-",
"sense",
"induction",
"method",
"based",
"on",
"pre",
"-",
"trained",
"masked",
"language",
"models",
"(",
"mlms",
")",
",",
"which",
"can",
"cheaply",
"scale",
"to",
"large",
"vocabularies",
"and",
"large",
"corpora",
".... |
ACL | Dual Graph Convolutional Networks for Aspect-based Sentiment Analysis | Aspect-based sentiment analysis is a fine-grained sentiment classification task. Recently, graph neural networks over dependency trees have been explored to explicitly model connections between aspects and opinion words. However, the improvement is limited due to the inaccuracy of the dependency parsing results and the... | 279091b08c6959c45b270468bde46786 | 2,021 | [
"aspect - based sentiment analysis is a fine - grained sentiment classification task .",
"recently , graph neural networks over dependency trees have been explored to explicitly model connections between aspects and opinion words .",
"however , the improvement is limited due to the inaccuracy of the dependency ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "aspect - based sentiment analysis",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"aspect",
"-",
"based",
"sentiment",
"analysis"
],
"offset... | [
"aspect",
"-",
"based",
"sentiment",
"analysis",
"is",
"a",
"fine",
"-",
"grained",
"sentiment",
"classification",
"task",
".",
"recently",
",",
"graph",
"neural",
"networks",
"over",
"dependency",
"trees",
"have",
"been",
"explored",
"to",
"explicitly",
"model"... |
ACL | Automatically Identifying Complaints in Social Media | Complaining is a basic speech act regularly used in human and computer mediated communication to express a negative mismatch between reality and expectations in a particular situation. Automatically identifying complaints in social media is of utmost importance for organizations or brands to improve the customer experi... | 9abbe7e9207f9376c1f8ed1a0d55bb9d | 2,019 | [
"complaining is a basic speech act regularly used in human and computer mediated communication to express a negative mismatch between reality and expectations in a particular situation .",
"automatically identifying complaints in social media is of utmost importance for organizations or brands to improve the cust... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "automatically identifying complaints in social media",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"automatically",
"identifying",
"complaints",
"in",
"so... | [
"complaining",
"is",
"a",
"basic",
"speech",
"act",
"regularly",
"used",
"in",
"human",
"and",
"computer",
"mediated",
"communication",
"to",
"express",
"a",
"negative",
"mismatch",
"between",
"reality",
"and",
"expectations",
"in",
"a",
"particular",
"situation",... |
ACL | Token-level Dynamic Self-Attention Network for Multi-Passage Reading Comprehension | Multi-passage reading comprehension requires the ability to combine cross-passage information and reason over multiple passages to infer the answer. In this paper, we introduce the Dynamic Self-attention Network (DynSAN) for multi-passage reading comprehension task, which processes cross-passage information at token-le... | 1a4fc4abe6e43941ba709ccf6c3eed06 | 2,019 | [
"multi - passage reading comprehension requires the ability to combine cross - passage information and reason over multiple passages to infer the answer .",
"in this paper , we introduce the dynamic self - attention network ( dynsan ) for multi - passage reading comprehension task , which processes cross - passag... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "multi - passage reading comprehension",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"multi",
"-",
"passage",
"reading",
"comprehension"
],
... | [
"multi",
"-",
"passage",
"reading",
"comprehension",
"requires",
"the",
"ability",
"to",
"combine",
"cross",
"-",
"passage",
"information",
"and",
"reason",
"over",
"multiple",
"passages",
"to",
"infer",
"the",
"answer",
".",
"in",
"this",
"paper",
",",
"we",
... |
ACL | Towards Afrocentric NLP for African Languages: Where We Are and Where We Can Go | Aligning with ACL 2022 special Theme on “Language Diversity: from Low Resource to Endangered Languages”, we discuss the major linguistic and sociopolitical challenges facing development of NLP technologies for African languages. Situating African languages in a typological framework, we discuss how the particulars of t... | 6ac66d7ddda3c94b3bca420479d73841 | 2,022 | [
"aligning with acl 2022 special theme on “ language diversity : from low resource to endangered languages ” , we discuss the major linguistic and sociopolitical challenges facing development of nlp technologies for african languages .",
"situating african languages in a typological framework , we discuss how the ... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
19
]
},
{
"text": "major linguistic and sociopolitical challenges",... | [
"aligning",
"with",
"acl",
"2022",
"special",
"theme",
"on",
"“",
"language",
"diversity",
":",
"from",
"low",
"resource",
"to",
"endangered",
"languages",
"”",
",",
"we",
"discuss",
"the",
"major",
"linguistic",
"and",
"sociopolitical",
"challenges",
"facing",
... |
ACL | N-Best ASR Transformer: Enhancing SLU Performance using Multiple ASR Hypotheses | Spoken Language Understanding (SLU) systems parse speech into semantic structures like dialog acts and slots. This involves the use of an Automatic Speech Recognizer (ASR) to transcribe speech into multiple text alternatives (hypotheses). Transcription errors, ordinary in ASRs, impact downstream SLU performance negativ... | 8b6ba1573bbe5ba27a9b0bb83c5ae622 | 2,021 | [
"spoken language understanding ( slu ) systems parse speech into semantic structures like dialog acts and slots .",
"this involves the use of an automatic speech recognizer ( asr ) to transcribe speech into multiple text alternatives ( hypotheses ) .",
"transcription errors , ordinary in asrs , impact downstrea... | [
{
"event_type": "RWS",
"arguments": [
{
"text": "common approaches",
"nugget_type": "APP",
"argument_type": "Subject",
"tokens": [
"common",
"approaches"
],
"offsets": [
54,
55
]
},
{
"t... | [
"spoken",
"language",
"understanding",
"(",
"slu",
")",
"systems",
"parse",
"speech",
"into",
"semantic",
"structures",
"like",
"dialog",
"acts",
"and",
"slots",
".",
"this",
"involves",
"the",
"use",
"of",
"an",
"automatic",
"speech",
"recognizer",
"(",
"asr"... |
ACL | Using Context-to-Vector with Graph Retrofitting to Improve Word Embeddings | Although contextualized embeddings generated from large-scale pre-trained models perform well in many tasks, traditional static embeddings (e.g., Skip-gram, Word2Vec) still play an important role in low-resource and lightweight settings due to their low computational cost, ease of deployment, and stability. In this pap... | b1984eb21e961397573a71ea2c4beb09 | 2,022 | [
"although contextualized embeddings generated from large - scale pre - trained models perform well in many tasks , traditional static embeddings ( e . g . , skip - gram , word2vec ) still play an important role in low - resource and lightweight settings due to their low computational cost , ease of deployment , and... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "traditional static embeddings",
"nugget_type": "MOD",
"argument_type": "Target",
"tokens": [
"traditional",
"static",
"embeddings"
],
"offsets": [
18,
19,
... | [
"although",
"contextualized",
"embeddings",
"generated",
"from",
"large",
"-",
"scale",
"pre",
"-",
"trained",
"models",
"perform",
"well",
"in",
"many",
"tasks",
",",
"traditional",
"static",
"embeddings",
"(",
"e",
".",
"g",
".",
",",
"skip",
"-",
"gram",
... |
ACL | Sense Embeddings are also Biased – Evaluating Social Biases in Static and Contextualised Sense Embeddings | Sense embedding learning methods learn different embeddings for the different senses of an ambiguous word. One sense of an ambiguous word might be socially biased while its other senses remain unbiased. In comparison to the numerous prior work evaluating the social biases in pretrained word embeddings, the biases in se... | c4460d287a4194b6016cb96be23c33dd | 2,022 | [ "sense embedding learning methods learn different embeddings for the different senses of an ambiguous word .", "one sense of an ambiguous word might be socially biased while its other senses remain unbiased .", "in comparison to the numerous prior work evaluating the social biases in pretrained word embeddings ... | [ { "event_type": "ITT", "arguments": [ { "text": "sense embedding learning methods", "nugget_type": "APP", "argument_type": "Target", "tokens": [ "sense", "embedding", "learning", "methods" ], "offsets": [ ... | [ "sense", "embedding", "learning", "methods", "learn", "different", "embeddings", "for", "the", "different", "senses", "of", "an", "ambiguous", "word", ".", "one", "sense", "of", "an", "ambiguous", "word", "might", "be", "socially", "biased", "while", "its", "o... |
ACL | Psycholinguistics Meets Continual Learning: Measuring Catastrophic Forgetting in Visual Question Answering | We study the issue of catastrophic forgetting in the context of neural multimodal approaches to Visual Question Answering (VQA). Motivated by evidence from psycholinguistics, we devise a set of linguistically-informed VQA tasks, which differ by the types of questions involved (Wh-questions and polar questions). We test... | b335baa5da82a78b8ab8d562aba3aed8 | 2,019 | [ "we study the issue of catastrophic forgetting in the context of neural multimodal approaches to visual question answering ( vqa ) .", "motivated by evidence from psycholinguistics , we devise a set of linguistically - informed vqa tasks , which differ by the types of questions involved ( wh - questions and polar... | [ { "event_type": "PRP", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Proposer", "tokens": [ "we" ], "offsets": [ 28 ] }, { "text": "linguistically - informed vqa tasks", "nug... | [ "we", "study", "the", "issue", "of", "catastrophic", "forgetting", "in", "the", "context", "of", "neural", "multimodal", "approaches", "to", "visual", "question", "answering", "(", "vqa", ")", ".", "motivated", "by", "evidence", "from", "psycholinguistics", ",", ... |
ACL | Shortformer: Better Language Modeling using Shorter Inputs | Increasing the input length has been a driver of progress in language modeling with transformers. We identify conditions where shorter inputs are not harmful, and achieve perplexity and efficiency improvements through two new methods that decrease input length. First, we show that initially training a model on short su... | e4a735a248bf9fb4f0030fd418e3b688 | 2,021 | [ "increasing the input length has been a driver of progress in language modeling with transformers .", "we identify conditions where shorter inputs are not harmful , and achieve perplexity and efficiency improvements through two new methods that decrease input length .", "first , we show that initially training ... | [ { "event_type": "WKS", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Researcher", "tokens": [ "we" ], "offsets": [ 16 ] }, { "text": "conditions where shorter inputs are not harmful"... | [ "increasing", "the", "input", "length", "has", "been", "a", "driver", "of", "progress", "in", "language", "modeling", "with", "transformers", ".", "we", "identify", "conditions", "where", "shorter", "inputs", "are", "not", "harmful", ",", "and", "achieve", "pe... |
ACL | Empirical Linguistic Study of Sentence Embeddings | The purpose of the research is to answer the question whether linguistic information is retained in vector representations of sentences. We introduce a method of analysing the content of sentence embeddings based on universal probing tasks, along with the classification datasets for two contrasting languages. We perfor... | f9f61aeff2b36dd0afd6acd5409117c2 | 2,019 | [ "the purpose of the research is to answer the question whether linguistic information is retained in vector representations of sentences .", "we introduce a method of analysing the content of sentence embeddings based on universal probing tasks , along with the classification datasets for two contrasting language... | [ { "event_type": "PRP", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Proposer", "tokens": [ "we" ], "offsets": [ 21 ] }, { "text": "method of analysing the content of sentence embedd... | [ "the", "purpose", "of", "the", "research", "is", "to", "answer", "the", "question", "whether", "linguistic", "information", "is", "retained", "in", "vector", "representations", "of", "sentences", ".", "we", "introduce", "a", "method", "of", "analysing", "the", ... |
ACL | Headed-Span-Based Projective Dependency Parsing | We propose a new method for projective dependency parsing based on headed spans. In a projective dependency tree, the largest subtree rooted at each word covers a contiguous sequence (i.e., a span) in the surface order. We call such a span marked by a root word headed span. A projective dependency tree can be represent... | 5d6b7d4decf21de2d823eb0a33ecdec2 | 2,022 | [ "we propose a new method for projective dependency parsing based on headed spans .", "in a projective dependency tree , the largest subtree rooted at each word covers a contiguous sequence ( i . e . , a span ) in the surface order .", "we call such a span marked by a root word headed span .", "a projective de... | [ { "event_type": "PRP", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Proposer", "tokens": [ "we" ], "offsets": [ 0 ] }, { "text": "projective dependency parsing", "nugget_typ... | [ "we", "propose", "a", "new", "method", "for", "projective", "dependency", "parsing", "based", "on", "headed", "spans", ".", "in", "a", "projective", "dependency", "tree", ",", "the", "largest", "subtree", "rooted", "at", "each", "word", "covers", "a", "conti... |
ACL | Generating Logical Forms from Graph Representations of Text and Entities | Structured information about entities is critical for many semantic parsing tasks. We present an approach that uses a Graph Neural Network (GNN) architecture to incorporate information about relevant entities and their relations during parsing. Combined with a decoder copy mechanism, this approach provides a conceptual... | f9380a04c8f455c05a0d44c0e5988cf3 | 2,019 | [ "structured information about entities is critical for many semantic parsing tasks .", "we present an approach that uses a graph neural network ( gnn ) architecture to incorporate information about relevant entities and their relations during parsing .", "combined with a decoder copy mechanism , this approach p... | [ { "event_type": "ITT", "arguments": [ { "text": "semantic parsing tasks", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "semantic", "parsing", "tasks" ], "offsets": [ 8, 9, 10 ... | [ "structured", "information", "about", "entities", "is", "critical", "for", "many", "semantic", "parsing", "tasks", ".", "we", "present", "an", "approach", "that", "uses", "a", "graph", "neural", "network", "(", "gnn", ")", "architecture", "to", "incorporate", ... |
ACL | Semantic Frame Induction using Masked Word Embeddings and Two-Step Clustering | Recent studies on semantic frame induction show that relatively high performance has been achieved by using clustering-based methods with contextualized word embeddings. However, there are two potential drawbacks to these methods: one is that they focus too much on the superficial information of the frame-evoking verb ... | e3c95bd546602d7d53ba0b85b2c437b3 | 2,021 | [ "recent studies on semantic frame induction show that relatively high performance has been achieved by using clustering - based methods with contextualized word embeddings .", "however , there are two potential drawbacks to these methods : one is that they focus too much on the superficial information of the fram... | [ { "event_type": "ITT", "arguments": [ { "text": "semantic frame induction", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "semantic", "frame", "induction" ], "offsets": [ 3, 4, 5 ... | [ "recent", "studies", "on", "semantic", "frame", "induction", "show", "that", "relatively", "high", "performance", "has", "been", "achieved", "by", "using", "clustering", "-", "based", "methods", "with", "contextualized", "word", "embeddings", ".", "however", ",", ... |
ACL | Unsupervised Pivot Translation for Distant Languages | Unsupervised neural machine translation (NMT) has attracted a lot of attention recently. While state-of-the-art methods for unsupervised translation usually perform well between similar languages (e.g., English-German translation), they perform poorly between distant languages, because unsupervised alignment does not w... | 080ecf71e9dbcf0b0bac1f992d22307a | 2,019 | [ "unsupervised neural machine translation ( nmt ) has attracted a lot of attention recently .", "while state - of - the - art methods for unsupervised translation usually perform well between similar languages ( e . g . , english - german translation ) , they perform poorly between distant languages , because unsu... | [ { "event_type": "ITT", "arguments": [ { "text": "neural machine translation", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "neural", "machine", "translation" ], "offsets": [ 1, 2, ... | [ "unsupervised", "neural", "machine", "translation", "(", "nmt", ")", "has", "attracted", "a", "lot", "of", "attention", "recently", ".", "while", "state", "-", "of", "-", "the", "-", "art", "methods", "for", "unsupervised", "translation", "usually", "perform",... |
ACL | Learning Emphasis Selection for Written Text in Visual Media from Crowd-Sourced Label Distributions | In visual communication, text emphasis is used to increase the comprehension of written text to convey the author’s intent. We study the problem of emphasis selection, i.e. choosing candidates for emphasis in short written text, to enable automated design assistance in authoring. Without knowing the author’s intent and... | 5cfb4a76d6bbd936ecdd5b5efd7ba18c | 2,019 | [ "in visual communication , text emphasis is used to increase the comprehension of written text to convey the author ’ s intent .", "we study the problem of emphasis selection , i . e . choosing candidates for emphasis in short written text , to enable automated design assistance in authoring .", "without knowin... | [ { "event_type": "ITT", "arguments": [ { "text": "emphasis selection", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "emphasis", "selection" ], "offsets": [ 28, 29 ] } ], "trigge... | [ "in", "visual", "communication", ",", "text", "emphasis", "is", "used", "to", "increase", "the", "comprehension", "of", "written", "text", "to", "convey", "the", "author", "’", "s", "intent", ".", "we", "study", "the", "problem", "of", "emphasis", "selection... |
ACL | NumGLUE: A Suite of Fundamental yet Challenging Mathematical Reasoning Tasks | Given the ubiquitous nature of numbers in text, reasoning with numbers to perform simple calculations is an important skill of AI systems. While many datasets and models have been developed to this end, state-of-the-art AI systems are brittle; failing to perform the underlying mathematical reasoning when they appear in... | 6f06fe6ae8b63d8a26fe6b9f33672ac0 | 2,022 | [ "given the ubiquitous nature of numbers in text , reasoning with numbers to perform simple calculations is an important skill of ai systems .", "while many datasets and models have been developed to this end , state - of - the - art ai systems are brittle ; failing to perform the underlying mathematical reasoning... | [ { "event_type": "RWF", "arguments": [ { "text": "state - of - the - art ai systems", "nugget_type": "APP", "argument_type": "Concern", "tokens": [ "state", "-", "of", "-", "the", "-", "art", ... | [ "given", "the", "ubiquitous", "nature", "of", "numbers", "in", "text", ",", "reasoning", "with", "numbers", "to", "perform", "simple", "calculations", "is", "an", "important", "skill", "of", "ai", "systems", ".", "while", "many", "datasets", "and", "models", ... |
ACL | BPE-Dropout: Simple and Effective Subword Regularization | Subword segmentation is widely used to address the open vocabulary problem in machine translation. The dominant approach to subword segmentation is Byte Pair Encoding (BPE), which keeps the most frequent words intact while splitting the rare ones into multiple tokens. While multiple segmentations are possible even with... | 71cc78f9b5cb6daf3457cc00fa424acb | 2,020 | [ "subword segmentation is widely used to address the open vocabulary problem in machine translation .", "the dominant approach to subword segmentation is byte pair encoding ( bpe ) , which keeps the most frequent words intact while splitting the rare ones into multiple tokens .", "while multiple segmentations ar... | [ { "event_type": "ITT", "arguments": [ { "text": "subword segmentation", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "subword", "segmentation" ], "offsets": [ 0, 1 ] } ], "trig... | [ "subword", "segmentation", "is", "widely", "used", "to", "address", "the", "open", "vocabulary", "problem", "in", "machine", "translation", ".", "the", "dominant", "approach", "to", "subword", "segmentation", "is", "byte", "pair", "encoding", "(", "bpe", ")", ... |
ACL | Modeling Transitions of Focal Entities for Conversational Knowledge Base Question Answering | Conversational KBQA is about answering a sequence of questions related to a KB. Follow-up questions in conversational KBQA often have missing information referring to entities from the conversation history. In this paper, we propose to model these implied entities, which we refer to as the focal entities of the convers... | af6620b5a11e85deba610112746a959d | 2,021 | [ "conversational kbqa is about answering a sequence of questions related to a kb .", "follow - up questions in conversational kbqa often have missing information referring to entities from the conversation history .", "in this paper , we propose to model these implied entities , which we refer to as the focal en... | [ { "event_type": "ITT", "arguments": [ { "text": "conversational kbqa", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "conversational", "kbqa" ], "offsets": [ 0, 1 ] } ], "trigge... | [ "conversational", "kbqa", "is", "about", "answering", "a", "sequence", "of", "questions", "related", "to", "a", "kb", ".", "follow", "-", "up", "questions", "in", "conversational", "kbqa", "often", "have", "missing", "information", "referring", "to", "entities",... |
ACL | Negative Training for Neural Dialogue Response Generation | Although deep learning models have brought tremendous advancements to the field of open-domain dialogue response generation, recent research results have revealed that the trained models have undesirable generation behaviors, such as malicious responses and generic (boring) responses. In this work, we propose a framewo... | b1b5dc858d2dc3f97bf671e32a508909 | 2,020 | [ "although deep learning models have brought tremendous advancements to the field of open - domain dialogue response generation , recent research results have revealed that the trained models have undesirable generation behaviors , such as malicious responses and generic ( boring ) responses .", "in this work , we... | [ { "event_type": "ITT", "arguments": [ { "text": "open - domain dialogue response generation", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "open", "-", "domain", "dialogue", "response", "generat... | [ "although", "deep", "learning", "models", "have", "brought", "tremendous", "advancements", "to", "the", "field", "of", "open", "-", "domain", "dialogue", "response", "generation", ",", "recent", "research", "results", "have", "revealed", "that", "the", "trained", ... |
ACL | Neural News Recommendation with Topic-Aware News Representation | News recommendation can help users find interested news and alleviate information overload. The topic information of news is critical for learning accurate news and user representations for news recommendation. However, it is not considered in many existing news recommendation methods. In this paper, we propose a neura... | 92056b2652a1d36ca932ae6f6788aed8 | 2,019 | [ "news recommendation can help users find interested news and alleviate information overload .", "the topic information of news is critical for learning accurate news and user representations for news recommendation .", "however , it is not considered in many existing news recommendation methods .", "in this p... | [ { "event_type": "ITT", "arguments": [ { "text": "news recommendation", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "news", "recommendation" ], "offsets": [ 0, 1 ] } ], "trigge... | [ "news", "recommendation", "can", "help", "users", "find", "interested", "news", "and", "alleviate", "information", "overload", ".", "the", "topic", "information", "of", "news", "is", "critical", "for", "learning", "accurate", "news", "and", "user", "representation... |
ACL | Measuring the Impact of (Psycho-)Linguistic and Readability Features and Their Spill Over Effects on the Prediction of Eye Movement Patterns | There is a growing interest in the combined use of NLP and machine learning methods to predict gaze patterns during naturalistic reading. While promising results have been obtained through the use of transformer-based language models, little work has been undertaken to relate the performance of such models to general t... | 2a49a90d5f4b871f57612327f296aa76 | 2,022 | [ "there is a growing interest in the combined use of nlp and machine learning methods to predict gaze patterns during naturalistic reading .", "while promising results have been obtained through the use of transformer - based language models , little work has been undertaken to relate the performance of such model... | [ { "event_type": "WKS", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Researcher", "tokens": [ "we" ], "offsets": [ 59 ] }, { "text": "experiments", "nugget_type": "TAK", ... | [ "there", "is", "a", "growing", "interest", "in", "the", "combined", "use", "of", "nlp", "and", "machine", "learning", "methods", "to", "predict", "gaze", "patterns", "during", "naturalistic", "reading", ".", "while", "promising", "results", "have", "been", "ob... |
ACL | Cross-media Structured Common Space for Multimedia Event Extraction | We introduce a new task, MultiMedia Event Extraction, which aims to extract events and their arguments from multimedia documents. We develop the first benchmark and collect a dataset of 245 multimedia news articles with extensively annotated events and arguments. We propose a novel method, Weakly Aligned Structured Emb... | 10dbc3c9a083cdc59b98330abf0d3c12 | 2,020 | [ "we introduce a new task , multimedia event extraction , which aims to extract events and their arguments from multimedia documents .", "we develop the first benchmark and collect a dataset of 245 multimedia news articles with extensively annotated events and arguments .", "we propose a novel method , weakly al... | [ { "event_type": "PRP", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Proposer", "tokens": [ "we" ], "offsets": [ 0 ] }, { "text": "extract", "nugget_type": "E-PUR", "... | [ "we", "introduce", "a", "new", "task", ",", "multimedia", "event", "extraction", ",", "which", "aims", "to", "extract", "events", "and", "their", "arguments", "from", "multimedia", "documents", ".", "we", "develop", "the", "first", "benchmark", "and", "collect... |
ACL | SummScreen: A Dataset for Abstractive Screenplay Summarization | We introduce SummScreen, a summarization dataset comprised of pairs of TV series transcripts and human written recaps. The dataset provides a challenging testbed for abstractive summarization for several reasons. Plot details are often expressed indirectly in character dialogues and may be scattered across the entirety... | 10a52e00b03a270ec37227153c991b8d | 2,022 | [ "we introduce summscreen , a summarization dataset comprised of pairs of tv series transcripts and human written recaps .", "the dataset provides a challenging testbed for abstractive summarization for several reasons .", "plot details are often expressed indirectly in character dialogues and may be scattered a... | [ { "event_type": "WKS", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Researcher", "tokens": [ "we" ], "offsets": [ 0 ] }, { "text": "summscreen", "nugget_type": "DST", ... | [ "we", "introduce", "summscreen", ",", "a", "summarization", "dataset", "comprised", "of", "pairs", "of", "tv", "series", "transcripts", "and", "human", "written", "recaps", ".", "the", "dataset", "provides", "a", "challenging", "testbed", "for", "abstractive", "... |
ACL | The Moral Integrity Corpus: A Benchmark for Ethical Dialogue Systems | Conversational agents have come increasingly closer to human competence in open-domain dialogue settings; however, such models can reflect insensitive, hurtful, or entirely incoherent viewpoints that erode a user’s trust in the moral integrity of the system. Moral deviations are difficult to mitigate because moral judg... | 9b67d8332f0f85185ba6590da9c15c23 | 2,022 | [ "conversational agents have come increasingly closer to human competence in open - domain dialogue settings ; however , such models can reflect insensitive , hurtful , or entirely incoherent viewpoints that erode a user ’ s trust in the moral integrity of the system .", "moral deviations are difficult to mitigate... | [ { "event_type": "ITT", "arguments": [ { "text": "conversational agents", "nugget_type": "APP", "argument_type": "Target", "tokens": [ "conversational", "agents" ], "offsets": [ 0, 1 ] } ], "tr... | [ "conversational", "agents", "have", "come", "increasingly", "closer", "to", "human", "competence", "in", "open", "-", "domain", "dialogue", "settings", ";", "however", ",", "such", "models", "can", "reflect", "insensitive", ",", "hurtful", ",", "or", "entirely",... |
ACL | OoMMix: Out-of-manifold Regularization in Contextual Embedding Space for Text Classification | Recent studies on neural networks with pre-trained weights (i.e., BERT) have mainly focused on a low-dimensional subspace, where the embedding vectors computed from input words (or their contexts) are located. In this work, we propose a new approach, called OoMMix, to finding and regularizing the remainder of the space... | 9d7c8fba874d98cad9d3c35d6bff2ec4 | 2,021 | [ "recent studies on neural networks with pre - trained weights ( i . e . , bert ) have mainly focused on a low - dimensional subspace , where the embedding vectors computed from input words ( or their contexts ) are located .", "in this work , we propose a new approach , called oommix , to finding and regularizing... | [ { "event_type": "ITT", "arguments": [ { "text": "low - dimensional subspace", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "low", "-", "dimensional", "subspace" ], "offsets": [ 23, ... | [ "recent", "studies", "on", "neural", "networks", "with", "pre", "-", "trained", "weights", "(", "i", ".", "e", ".", ",", "bert", ")", "have", "mainly", "focused", "on", "a", "low", "-", "dimensional", "subspace", ",", "where", "the", "embedding", "vector... |
ACL | Accelerating Sparse Matrix Operations in Neural Networks on Graphics Processing Units | Graphics Processing Units (GPUs) are commonly used to train and evaluate neural networks efficiently. While previous work in deep learning has focused on accelerating operations on dense matrices/tensors on GPUs, efforts have concentrated on operations involving sparse data structures. Operations using sparse structure... | c2c5ff9e6a927d2137aff5a5b3d2f4cb | 2,019 | [ "graphics processing units ( gpus ) are commonly used to train and evaluate neural networks efficiently .", "while previous work in deep learning has focused on accelerating operations on dense matrices / tensors on gpus , efforts have concentrated on operations involving sparse data structures .", "operations ... | [ { "event_type": "ITT", "arguments": [ { "text": "graphics processing units", "nugget_type": "MOD", "argument_type": "Target", "tokens": [ "graphics", "processing", "units" ], "offsets": [ 0, 1, ... | [ "graphics", "processing", "units", "(", "gpus", ")", "are", "commonly", "used", "to", "train", "and", "evaluate", "neural", "networks", "efficiently", ".", "while", "previous", "work", "in", "deep", "learning", "has", "focused", "on", "accelerating", "operations... |
ACL | More than Text: Multi-modal Chinese Word Segmentation | Chinese word segmentation (CWS) is undoubtedly an important basic task in natural language processing. Previous works only focus on the textual modality, but there are often audio and video utterances (such as news broadcast and face-to-face dialogues), where textual, acoustic and visual modalities normally exist. To t... | 73f140fbb6fb3891d37c3cf305da123c | 2,021 | [ "chinese word segmentation ( cws ) is undoubtedly an important basic task in natural language processing .", "previous works only focus on the textual modality , but there are often audio and video utterances ( such as news broadcast and face - to - face dialogues ) , where textual , acoustic and visual modalitie... | [ { "event_type": "ITT", "arguments": [ { "text": "chinese word segmentation", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "chinese", "word", "segmentation" ], "offsets": [ 0, 1, ... | [ "chinese", "word", "segmentation", "(", "cws", ")", "is", "undoubtedly", "an", "important", "basic", "task", "in", "natural", "language", "processing", ".", "previous", "works", "only", "focus", "on", "the", "textual", "modality", ",", "but", "there", "are", ... |
ACL | Instance-Based Learning of Span Representations: A Case Study through Named Entity Recognition | Interpretable rationales for model predictions play a critical role in practical applications. In this study, we develop models possessing interpretable inference process for structured prediction. Specifically, we present a method of instance-based learning that learns similarities between spans. At inference time, ea... | f6181077896c6abfad019f5f1fc5abbe | 2,020 | [ "interpretable rationales for model predictions play a critical role in practical applications .", "in this study , we develop models possessing interpretable inference process for structured prediction .", "specifically , we present a method of instance - based learning that learns similarities between spans .... | [ { "event_type": "ITT", "arguments": [ { "text": "model predictions", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "model", "predictions" ], "offsets": [ 3, 4 ] } ], "trigger": ... | [ "interpretable", "rationales", "for", "model", "predictions", "play", "a", "critical", "role", "in", "practical", "applications", ".", "in", "this", "study", ",", "we", "develop", "models", "possessing", "interpretable", "inference", "process", "for", "structured", ... |
ACL | KLEJ: Comprehensive Benchmark for Polish Language Understanding | In recent years, a series of Transformer-based models unlocked major improvements in general natural language understanding (NLU) tasks. Such a fast pace of research would not be possible without general NLU benchmarks, which allow for a fair comparison of the proposed methods. However, such benchmarks are available on... | 133cb6aa8b5c93bd670a4de021c3f536 | 2,020 | [ "in recent years , a series of transformer - based models unlocked major improvements in general natural language understanding ( nlu ) tasks .", "such a fast pace of research would not be possible without general nlu benchmarks , which allow for a fair comparison of the proposed methods .", "however , such ben... | [ { "event_type": "ITT", "arguments": [ { "text": "natural language understanding", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "natural", "language", "understanding" ], "offsets": [ 16, 17... | [ "in", "recent", "years", ",", "a", "series", "of", "transformer", "-", "based", "models", "unlocked", "major", "improvements", "in", "general", "natural", "language", "understanding", "(", "nlu", ")", "tasks", ".", "such", "a", "fast", "pace", "of", "researc... |