Dataset schema (column name, type, and value range as reported by the viewer):

column           | type          | min     | max
venue            | stringclasses | 1 value |
title            | stringlengths | 18      | 162
abstract         | stringlengths | 252     | 1.89k
doc_id           | stringlengths | 32      | 32
publication_year | int64         | 2.02k   | 2.02k
sentences        | listlengths   | 1       | 13
events           | listlengths   | 1       | 24
document         | listlengths   | 50      | 348
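The schema above can be sketched as a plain Python record. This is a minimal sketch under stated assumptions: the field names and types come from the schema table, but the concrete values (title, id, tokens) are illustrative stand-ins, not actual rows, and the `by_year` helper is hypothetical, not part of the dataset.

```python
# Illustrative record matching the schema above. Values are made up;
# only the field names and types are taken from the schema.
record = {
    "venue": "ACL",                                  # stringclasses: 1 value
    "title": "An Example Paper Title",               # 18-162 chars
    "abstract": "An example abstract of the paper.", # 252-1.89k chars
    "doc_id": "0" * 32,                              # fixed 32-char id
    "publication_year": 2019,                        # int64
    "sentences": ["an example abstract of the paper ."],   # 1-13 items
    "events": [{"event_type": "ITT", "arguments": [], "trigger": {}}],
    "document": ["an", "example", "abstract", "of", "the", "paper", "."],
}

def by_year(records, year):
    """Keep only records whose publication_year matches."""
    return [r for r in records if r["publication_year"] == year]

print(len(by_year([record], 2019)))  # → 1
```

The same filter could of course be expressed with `Dataset.filter` if the data were loaded through the Hugging Face `datasets` library; the dict form is used here only to keep the sketch self-contained.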
ACL
Semi-supervised Stochastic Multi-Domain Learning using Variational Inference
Supervised models of NLP rely on large collections of text which closely resemble the intended testing setting. Unfortunately matching text is often not available in sufficient quantity, and moreover, within any domain of text, data is often highly heterogenous. In this paper we propose a method to distill the importan...
670b5267465699d1c78f4e68473bf7db
2,019
[ "supervised models of nlp rely on large collections of text which closely resemble the intended testing setting .", "unfortunately matching text is often not available in sufficient quantity , and moreover , within any domain of text , data is often highly heterogenous .", "in this paper we propose a method to ...
[ { "event_type": "ITT", "arguments": [ { "text": "supervised models of nlp", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "supervised", "models", "of", "nlp" ], "offsets": [ 0, 1,...
[ "supervised", "models", "of", "nlp", "rely", "on", "large", "collections", "of", "text", "which", "closely", "resemble", "the", "intended", "testing", "setting", ".", "unfortunately", "matching", "text", "is", "often", "not", "available", "in", "sufficient", "qu...
ACL
TWEETQA: A Social Media Focused Question Answering Dataset
With social media becoming increasingly popular on which lots of news and real-time events are reported, developing automated question answering systems is critical to the effective-ness of many applications that rely on real-time knowledge. While previous datasets have concentrated on question answering (QA) for forma...
d43b97ab1dd74dfaf266ac9e7843dcb9
2,019
[ "with social media becoming increasingly popular on which lots of news and real - time events are reported , developing automated question answering systems is critical to the effective - ness of many applications that rely on real - time knowledge .", "while previous datasets have concentrated on question answer...
[ { "event_type": "ITT", "arguments": [ { "text": "automated question answering systems", "nugget_type": "APP", "argument_type": "Target", "tokens": [ "automated", "question", "answering", "systems" ], "offsets": [ ...
[ "with", "social", "media", "becoming", "increasingly", "popular", "on", "which", "lots", "of", "news", "and", "real", "-", "time", "events", "are", "reported", ",", "developing", "automated", "question", "answering", "systems", "is", "critical", "to", "the", "...
ACL
Online Semantic Parsing for Latency Reduction in Task-Oriented Dialogue
Standard conversational semantic parsing maps a complete user utterance into an executable program, after which the program is executed to respond to the user. This could be slow when the program contains expensive function calls. We investigate the opportunity to reduce latency by predicting and executing function cal...
b1d1a91bb472a8af25ec45a68f5a257c
2,022
[ "standard conversational semantic parsing maps a complete user utterance into an executable program , after which the program is executed to respond to the user .", "this could be slow when the program contains expensive function calls .", "we investigate the opportunity to reduce latency by predicting and exec...
[ { "event_type": "RWS", "arguments": [ { "text": "complete user utterance", "nugget_type": "FEA", "argument_type": "TriedComponent", "tokens": [ "complete", "user", "utterance" ], "offsets": [ 6, 7, ...
[ "standard", "conversational", "semantic", "parsing", "maps", "a", "complete", "user", "utterance", "into", "an", "executable", "program", ",", "after", "which", "the", "program", "is", "executed", "to", "respond", "to", "the", "user", ".", "this", "could", "be...
ACL
Combining Knowledge Hunting and Neural Language Models to Solve the Winograd Schema Challenge
Winograd Schema Challenge (WSC) is a pronoun resolution task which seems to require reasoning with commonsense knowledge. The needed knowledge is not present in the given text. Automatic extraction of the needed knowledge is a bottleneck in solving the challenge. The existing state-of-the-art approach uses the knowledg...
c85b56c691af20cecefcc05f18bca48b
2,019
[ "winograd schema challenge ( wsc ) is a pronoun resolution task which seems to require reasoning with commonsense knowledge .", "the needed knowledge is not present in the given text .", "automatic extraction of the needed knowledge is a bottleneck in solving the challenge .", "the existing state - of - the -...
[ { "event_type": "ITT", "arguments": [ { "text": "wsc", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "wsc" ], "offsets": [ 194 ] } ], "trigger": { "text": "task", "tokens": [ "t...
[ "winograd", "schema", "challenge", "(", "wsc", ")", "is", "a", "pronoun", "resolution", "task", "which", "seems", "to", "require", "reasoning", "with", "commonsense", "knowledge", ".", "the", "needed", "knowledge", "is", "not", "present", "in", "the", "given",...
ACL
Disentangled Representation Learning for Non-Parallel Text Style Transfer
This paper tackles the problem of disentangling the latent representations of style and content in language models. We propose a simple yet effective approach, which incorporates auxiliary multi-task and adversarial objectives, for style prediction and bag-of-words prediction, respectively. We show, both qualitatively ...
c8a3789253156bd2e4b3e49853ac8791
2,019
[ "this paper tackles the problem of disentangling the latent representations of style and content in language models .", "we propose a simple yet effective approach , which incorporates auxiliary multi - task and adversarial objectives , for style prediction and bag - of - words prediction , respectively .", "we...
[ { "event_type": "WKS", "arguments": [ { "text": "in language models", "nugget_type": "LIM", "argument_type": "Condition", "tokens": [ "in", "language", "models" ], "offsets": [ 14, 15, 16 ...
[ "this", "paper", "tackles", "the", "problem", "of", "disentangling", "the", "latent", "representations", "of", "style", "and", "content", "in", "language", "models", ".", "we", "propose", "a", "simple", "yet", "effective", "approach", ",", "which", "incorporates...
ACL
Incorporating Syntactic and Semantic Information in Word Embeddings using Graph Convolutional Networks
Word embeddings have been widely adopted across several NLP applications. Most existing word embedding methods utilize sequential context of a word to learn its embedding. While there have been some attempts at utilizing syntactic context of a word, such methods result in an explosion of the vocabulary size. In this pa...
3d7f40f5b037cc84f1224092d4483a9a
2,019
[ "word embeddings have been widely adopted across several nlp applications .", "most existing word embedding methods utilize sequential context of a word to learn its embedding .", "while there have been some attempts at utilizing syntactic context of a word , such methods result in an explosion of the vocabular...
[ { "event_type": "ITT", "arguments": [ { "text": "word embeddings", "nugget_type": "MOD", "argument_type": "Target", "tokens": [ "word", "embeddings" ], "offsets": [ 0, 1 ] } ], "trigger": { ...
[ "word", "embeddings", "have", "been", "widely", "adopted", "across", "several", "nlp", "applications", ".", "most", "existing", "word", "embedding", "methods", "utilize", "sequential", "context", "of", "a", "word", "to", "learn", "its", "embedding", ".", "while"...
ACL
Multi-Task Deep Neural Networks for Natural Language Understanding
In this paper, we present a Multi-Task Deep Neural Network (MT-DNN) for learning representations across multiple natural language understanding (NLU) tasks. MT-DNN not only leverages large amounts of cross-task data, but also benefits from a regularization effect that leads to more general representations to help adapt...
17585ec801890eb9fb103bf71b2a0deb
2,019
[ "in this paper , we present a multi - task deep neural network ( mt - dnn ) for learning representations across multiple natural language understanding ( nlu ) tasks .", "mt - dnn not only leverages large amounts of cross - task data , but also benefits from a regularization effect that leads to more general repr...
[ { "event_type": "PRP", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Proposer", "tokens": [ "we" ], "offsets": [ 4 ] }, { "text": "mt - dnn", "nugget_type": "APP", "a...
[ "in", "this", "paper", ",", "we", "present", "a", "multi", "-", "task", "deep", "neural", "network", "(", "mt", "-", "dnn", ")", "for", "learning", "representations", "across", "multiple", "natural", "language", "understanding", "(", "nlu", ")", "tasks", "...
ACL
Automatic Generation of High Quality CCGbanks for Parser Domain Adaptation
We propose a new domain adaptation method for Combinatory Categorial Grammar (CCG) parsing, based on the idea of automatic generation of CCG corpora exploiting cheaper resources of dependency trees. Our solution is conceptually simple, and not relying on a specific parser architecture, making it applicable to the curre...
8d5d1023a546081479a519633fa67f9a
2,019
[ "we propose a new domain adaptation method for combinatory categorial grammar ( ccg ) parsing , based on the idea of automatic generation of ccg corpora exploiting cheaper resources of dependency trees .", "our solution is conceptually simple , and not relying on a specific parser architecture , making it applica...
[ { "event_type": "PRP", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Proposer", "tokens": [ "we" ], "offsets": [ 0 ] }, { "text": "domain adaptation method", "nugget_type": "...
[ "we", "propose", "a", "new", "domain", "adaptation", "method", "for", "combinatory", "categorial", "grammar", "(", "ccg", ")", "parsing", ",", "based", "on", "the", "idea", "of", "automatic", "generation", "of", "ccg", "corpora", "exploiting", "cheaper", "reso...
ACL
SDR: Efficient Neural Re-ranking using Succinct Document Representation
BERT based ranking models have achieved superior performance on various information retrieval tasks. However, the large number of parameters and complex self-attention operations come at a significant latency overhead. To remedy this, recent works propose late-interaction architectures, which allow pre-computation of i...
c1e77ddaa047a0736beb0bf36cafc3ab
2,022
[ "bert based ranking models have achieved superior performance on various information retrieval tasks .", "however , the large number of parameters and complex self - attention operations come at a significant latency overhead .", "to remedy this , recent works propose late - interaction architectures , which al...
[ { "event_type": "ITT", "arguments": [ { "text": "information retrieval tasks", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "information", "retrieval", "tasks" ], "offsets": [ 10, 11, ...
[ "bert", "based", "ranking", "models", "have", "achieved", "superior", "performance", "on", "various", "information", "retrieval", "tasks", ".", "however", ",", "the", "large", "number", "of", "parameters", "and", "complex", "self", "-", "attention", "operations", ...
ACL
Rethinking Stealthiness of Backdoor Attack against NLP Models
Recent researches have shown that large natural language processing (NLP) models are vulnerable to a kind of security threat called the Backdoor Attack. Backdoor attacked models can achieve good performance on clean test sets but perform badly on those input sentences injected with designed trigger words. In this work,...
132b9f2304f691b28add9b7890a4a2ad
2,021
[ "recent researches have shown that large natural language processing ( nlp ) models are vulnerable to a kind of security threat called the backdoor attack .", "backdoor attacked models can achieve good performance on clean test sets but perform badly on those input sentences injected with designed trigger words ....
[ { "event_type": "RWF", "arguments": [ { "text": "large natural language processing models", "nugget_type": "APP", "argument_type": "Concern", "tokens": [ "large", "natural", "language", "processing", "models" ], ...
[ "recent", "researches", "have", "shown", "that", "large", "natural", "language", "processing", "(", "nlp", ")", "models", "are", "vulnerable", "to", "a", "kind", "of", "security", "threat", "called", "the", "backdoor", "attack", ".", "backdoor", "attacked", "m...
ACL
Toward Better Storylines with Sentence-Level Language Models
We propose a sentence-level language model which selects the next sentence in a story from a finite set of fluent alternatives. Since it does not need to model fluency, the sentence-level language model can focus on longer range dependencies, which are crucial for multi-sentence coherence. Rather than dealing with indi...
e270447c30de244a902c0ab6faeae2c0
2,020
[ "we propose a sentence - level language model which selects the next sentence in a story from a finite set of fluent alternatives .", "since it does not need to model fluency , the sentence - level language model can focus on longer range dependencies , which are crucial for multi - sentence coherence .", "rath...
[ { "event_type": "PRP", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Proposer", "tokens": [ "we" ], "offsets": [ 0 ] }, { "text": "sentence - level language model", "nugget_t...
[ "we", "propose", "a", "sentence", "-", "level", "language", "model", "which", "selects", "the", "next", "sentence", "in", "a", "story", "from", "a", "finite", "set", "of", "fluent", "alternatives", ".", "since", "it", "does", "not", "need", "to", "model", ...
ACL
Self-Attention with Cross-Lingual Position Representation
Position encoding (PE), an essential part of self-attention networks (SANs), is used to preserve the word order information for natural language processing tasks, generating fixed position indices for input sequences. However, in cross-lingual scenarios, machine translation, the PEs of source and target sentences are m...
2133e20ba0f793397bbf59b148969284
2,020
[ "position encoding ( pe ) , an essential part of self - attention networks ( sans ) , is used to preserve the word order information for natural language processing tasks , generating fixed position indices for input sequences .", "however , in cross - lingual scenarios , machine translation , the pes of source a...
[ { "event_type": "ITT", "arguments": [ { "text": "position encoding", "nugget_type": "MOD", "argument_type": "Target", "tokens": [ "position", "encoding" ], "offsets": [ 0, 1 ] } ], "trigger": ...
[ "position", "encoding", "(", "pe", ")", ",", "an", "essential", "part", "of", "self", "-", "attention", "networks", "(", "sans", ")", ",", "is", "used", "to", "preserve", "the", "word", "order", "information", "for", "natural", "language", "processing", "t...
ACL
Word-order Biases in Deep-agent Emergent Communication
Sequence-processing neural networks led to remarkable progress on many NLP tasks. As a consequence, there has been increasing interest in understanding to what extent they process language as humans do. We aim here to uncover which biases such models display with respect to “natural” word-order constraints. We train mo...
3445639400b771f8ff46567dc3f82c48
2,019
[ "sequence - processing neural networks led to remarkable progress on many nlp tasks .", "as a consequence , there has been increasing interest in understanding to what extent they process language as humans do .", "we aim here to uncover which biases such models display with respect to “ natural ” word - order ...
[ { "event_type": "ITT", "arguments": [ { "text": "sequence - processing neural networks", "nugget_type": "APP", "argument_type": "Target", "tokens": [ "sequence", "-", "processing", "neural", "networks" ], ...
[ "sequence", "-", "processing", "neural", "networks", "led", "to", "remarkable", "progress", "on", "many", "nlp", "tasks", ".", "as", "a", "consequence", ",", "there", "has", "been", "increasing", "interest", "in", "understanding", "to", "what", "extent", "they...
ACL
GEAR: Graph-based Evidence Aggregating and Reasoning for Fact Verification
Fact verification (FV) is a challenging task which requires to retrieve relevant evidence from plain text and use the evidence to verify given claims. Many claims require to simultaneously integrate and reason over several pieces of evidence for verification. However, previous work employs simple models to extract info...
6424d591150b43d6005213e49f5d2bcd
2,019
[ "fact verification ( fv ) is a challenging task which requires to retrieve relevant evidence from plain text and use the evidence to verify given claims .", "many claims require to simultaneously integrate and reason over several pieces of evidence for verification .", "however , previous work employs simple mo...
[ { "event_type": "ITT", "arguments": [ { "text": "fact verification", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "fact", "verification" ], "offsets": [ 0, 1 ] } ], "trigger": ...
[ "fact", "verification", "(", "fv", ")", "is", "a", "challenging", "task", "which", "requires", "to", "retrieve", "relevant", "evidence", "from", "plain", "text", "and", "use", "the", "evidence", "to", "verify", "given", "claims", ".", "many", "claims", "requ...
ACL
Code Synonyms Do Matter: Multiple Synonyms Matching Network for Automatic ICD Coding
Automatic ICD coding is defined as assigning disease codes to electronic medical records (EMRs).Existing methods usually apply label attention with code representations to match related text snippets.Unlike these works that model the label with the code hierarchy or description, we argue that the code synonyms can prov...
03b8ac71a30ada1c9f15828762feab0e
2,022
[ "automatic icd coding is defined as assigning disease codes to electronic medical records ( emrs ) . existing methods usually apply label attention with code representations to match related text snippets .", "unlike these works that model the label with the code hierarchy or description , we argue that the code ...
[ { "event_type": "ITT", "arguments": [ { "text": "automatic icd coding", "nugget_type": "APP", "argument_type": "Target", "tokens": [ "automatic", "icd", "coding" ], "offsets": [ 0, 1, 2 ...
[ "automatic", "icd", "coding", "is", "defined", "as", "assigning", "disease", "codes", "to", "electronic", "medical", "records", "(", "emrs", ")", ".", "existing", "methods", "usually", "apply", "label", "attention", "with", "code", "representations", "to", "matc...
ACL
Semantic Expressive Capacity with Bounded Memory
We investigate the capacity of mechanisms for compositional semantic parsing to describe relations between sentences and semantic representations. We prove that in order to represent certain relations, mechanisms which are syntactically projective must be able to remember an unbounded number of locations in the semanti...
aa1c94f7460e8e52af8e3330d2fdfe6d
2,019
[ "we investigate the capacity of mechanisms for compositional semantic parsing to describe relations between sentences and semantic representations .", "we prove that in order to represent certain relations , mechanisms which are syntactically projective must be able to remember an unbounded number of locations in...
[ { "event_type": "WKS", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Researcher", "tokens": [ "we" ], "offsets": [ 0 ] }, { "text": "capacity of mechanisms for compositional semantic...
[ "we", "investigate", "the", "capacity", "of", "mechanisms", "for", "compositional", "semantic", "parsing", "to", "describe", "relations", "between", "sentences", "and", "semantic", "representations", ".", "we", "prove", "that", "in", "order", "to", "represent", "c...
ACL
A Token-level Reference-free Hallucination Detection Benchmark for Free-form Text Generation
Large pretrained generative models like GPT-3 often suffer from hallucinating non-existent or incorrect content, which undermines their potential merits in real applications. Existing work usually attempts to detect these hallucinations based on a corresponding oracle reference at a sentence or document level. However ...
f67da7f42fe08693dd066eb814e75812
2,022
[ "large pretrained generative models like gpt - 3 often suffer from hallucinating non - existent or incorrect content , which undermines their potential merits in real applications .", "existing work usually attempts to detect these hallucinations based on a corresponding oracle reference at a sentence or document...
[ { "event_type": "RWF", "arguments": [ { "text": "large pretrained generative models", "nugget_type": "APP", "argument_type": "Concern", "tokens": [ "large", "pretrained", "generative", "models" ], "offsets": [ ...
[ "large", "pretrained", "generative", "models", "like", "gpt", "-", "3", "often", "suffer", "from", "hallucinating", "non", "-", "existent", "or", "incorrect", "content", ",", "which", "undermines", "their", "potential", "merits", "in", "real", "applications", "....
ACL
CARETS: A Consistency And Robustness Evaluative Test Suite for VQA
We introduce CARETS, a systematic test suite to measure consistency and robustness of modern VQA models through a series of six fine-grained capability tests. In contrast to existing VQA test sets, CARETS features balanced question generation to create pairs of instances to test models, with each pair focusing on a spe...
c5fda75d820c09ef3f33b39eac5dfa51
2,022
[ "we introduce carets , a systematic test suite to measure consistency and robustness of modern vqa models through a series of six fine - grained capability tests .", "in contrast to existing vqa test sets , carets features balanced question generation to create pairs of instances to test models , with each pair f...
[ { "event_type": "PRP", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Proposer", "tokens": [ "we" ], "offsets": [ 0 ] }, { "text": "carets", "nugget_type": "TAK", "arg...
[ "we", "introduce", "carets", ",", "a", "systematic", "test", "suite", "to", "measure", "consistency", "and", "robustness", "of", "modern", "vqa", "models", "through", "a", "series", "of", "six", "fine", "-", "grained", "capability", "tests", ".", "in", "cont...
ACL
Learning to Rank Visual Stories From Human Ranking Data
Visual storytelling (VIST) is a typical vision and language task that has seen extensive development in the natural language generation research domain. However, it remains unclear whether conventional automatic evaluation metrics for text generation are applicable on VIST. In this paper, we present the VHED (VIST Huma...
6f0eb0a0aaaf464aac141069aa388e56
2,022
[ "visual storytelling ( vist ) is a typical vision and language task that has seen extensive development in the natural language generation research domain .", "however , it remains unclear whether conventional automatic evaluation metrics for text generation are applicable on vist .", "in this paper , we presen...
[ { "event_type": "ITT", "arguments": [ { "text": "visual storytelling", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "visual", "storytelling" ], "offsets": [ 0, 1 ] } ], "trigge...
[ "visual", "storytelling", "(", "vist", ")", "is", "a", "typical", "vision", "and", "language", "task", "that", "has", "seen", "extensive", "development", "in", "the", "natural", "language", "generation", "research", "domain", ".", "however", ",", "it", "remain...
ACL
Neural Machine Translation with Phrase-Level Universal Visual Representations
Multimodal machine translation (MMT) aims to improve neural machine translation (NMT) with additional visual information, but most existing MMT methods require paired input of source sentence and image, which makes them suffer from shortage of sentence-image pairs. In this paper, we propose a phrase-level retrieval-bas...
b35f08654da37a284d3a8198085733f5
2,022
[ "multimodal machine translation ( mmt ) aims to improve neural machine translation ( nmt ) with additional visual information , but most existing mmt methods require paired input of source sentence and image , which makes them suffer from shortage of sentence - image pairs .", "in this paper , we propose a phrase...
[ { "event_type": "ITT", "arguments": [ { "text": "multimodal machine translation", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "multimodal", "machine", "translation" ], "offsets": [ 0, 1, ...
[ "multimodal", "machine", "translation", "(", "mmt", ")", "aims", "to", "improve", "neural", "machine", "translation", "(", "nmt", ")", "with", "additional", "visual", "information", ",", "but", "most", "existing", "mmt", "methods", "require", "paired", "input", ...
ACL
SEEK: Segmented Embedding of Knowledge Graphs
In recent years, knowledge graph embedding becomes a pretty hot research topic of artificial intelligence and plays increasingly vital roles in various downstream applications, such as recommendation and question answering. However, existing methods for knowledge graph embedding can not make a proper trade-off between ...
7c439a2b63db754e9fe9b7fbb5223a65
2,020
[ "in recent years , knowledge graph embedding becomes a pretty hot research topic of artificial intelligence and plays increasingly vital roles in various downstream applications , such as recommendation and question answering .", "however , existing methods for knowledge graph embedding can not make a proper trad...
[ { "event_type": "ITT", "arguments": [ { "text": "knowledge graph embedding", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "knowledge", "graph", "embedding" ], "offsets": [ 4, 5, ...
[ "in", "recent", "years", ",", "knowledge", "graph", "embedding", "becomes", "a", "pretty", "hot", "research", "topic", "of", "artificial", "intelligence", "and", "plays", "increasingly", "vital", "roles", "in", "various", "downstream", "applications", ",", "such",...
ACL
Predicting Humorousness and Metaphor Novelty with Gaussian Process Preference Learning
The inability to quantify key aspects of creative language is a frequent obstacle to natural language understanding. To address this, we introduce novel tasks for evaluating the creativeness of language—namely, scoring and ranking text by humorousness and metaphor novelty. To sidestep the difficulty of assigning discre...
59e1461466e5970e30cdfe4c5d0af94c
2,019
[ "the inability to quantify key aspects of creative language is a frequent obstacle to natural language understanding .", "to address this , we introduce novel tasks for evaluating the creativeness of language — namely , scoring and ranking text by humorousness and metaphor novelty .", "to sidestep the difficult...
[ { "event_type": "ITT", "arguments": [ { "text": "natural language understanding", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "natural", "language", "understanding" ], "offsets": [ 14, 15...
[ "the", "inability", "to", "quantify", "key", "aspects", "of", "creative", "language", "is", "a", "frequent", "obstacle", "to", "natural", "language", "understanding", ".", "to", "address", "this", ",", "we", "introduce", "novel", "tasks", "for", "evaluating", ...
ACL
A Contrastive Framework for Learning Sentence Representations from Pairwise and Triple-wise Perspective in Angular Space
Learning high-quality sentence representations is a fundamental problem of natural language processing which could benefit a wide range of downstream tasks. Though the BERT-like pre-trained language models have achieved great success, using their sentence representations directly often results in poor performance on th...
8a547b4b6599446d86bb5200930b2352
2,022
[ "learning high - quality sentence representations is a fundamental problem of natural language processing which could benefit a wide range of downstream tasks .", "though the bert - like pre - trained language models have achieved great success , using their sentence representations directly often results in poor...
[ { "event_type": "ITT", "arguments": [ { "text": "high - quality sentence representations", "nugget_type": "FEA", "argument_type": "Target", "tokens": [ "high", "-", "quality", "sentence", "representations" ], ...
[ "learning", "high", "-", "quality", "sentence", "representations", "is", "a", "fundamental", "problem", "of", "natural", "language", "processing", "which", "could", "benefit", "a", "wide", "range", "of", "downstream", "tasks", ".", "though", "the", "bert", "-", ...
ACL
Negated and Misprimed Probes for Pretrained Language Models: Birds Can Talk, But Cannot Fly
Building on Petroni et al. 2019, we propose two new probing tasks analyzing factual knowledge stored in Pretrained Language Models (PLMs). (1) Negation. We find that PLMs do not distinguish between negated (‘‘Birds cannot [MASK]”) and non-negated (‘‘Birds can [MASK]”) cloze questions. (2) Mispriming. Inspired by primin...
922dc8360325b0fb1add76570541ba5a
2,020
[ "building on petroni et al . 2019 , we propose two new probing tasks analyzing factual knowledge stored in pretrained language models ( plms ) .", "( 1 ) negation . we find that plms do not distinguish between negated ( “ birds cannot [MASK] ” ) and non - negated ( “ birds can [MASK] ” ) cloze questions .", "( ...
[ { "event_type": "PRP", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Proposer", "tokens": [ "we" ], "offsets": [ 8 ] }, { "text": "two new probing tasks", "nugget_type": "TAK...
[ "building", "on", "petroni", "et", "al", ".", "2019", ",", "we", "propose", "two", "new", "probing", "tasks", "analyzing", "factual", "knowledge", "stored", "in", "pretrained", "language", "models", "(", "plms", ")", ".", "(", "1", ")", "negation", ".", ...
ACL
On Forgetting to Cite Older Papers: An Analysis of the ACL Anthology
The field of natural language processing is experiencing a period of unprecedented growth, and with it a surge of published papers. This represents an opportunity for us to take stock of how we cite the work of other researchers, and whether this growth comes at the expense of “forgetting” about older literature. In th...
bbce35e24c97694630080fc28e7e4555
2,020
[ "the field of natural language processing is experiencing a period of unprecedented growth , and with it a surge of published papers .", "this represents an opportunity for us to take stock of how we cite the work of other researchers , and whether this growth comes at the expense of “ forgetting ” about older li...
[ { "event_type": "ITT", "arguments": [ { "text": "natural language processing", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "natural", "language", "processing" ], "offsets": [ 3, 4, ...
[ "the", "field", "of", "natural", "language", "processing", "is", "experiencing", "a", "period", "of", "unprecedented", "growth", ",", "and", "with", "it", "a", "surge", "of", "published", "papers", ".", "this", "represents", "an", "opportunity", "for", "us", ...
ACL
Exploring Phoneme-Level Speech Representations for End-to-End Speech Translation
Previous work on end-to-end translation from speech has primarily used frame-level features as speech representations, which creates longer, sparser sequences than text. We show that a naive method to create compressed phoneme-like speech representations is far more effective and efficient for translation than traditio...
25542a2174b6a075295264aa9eb05a6f
2,019
[ "previous work on end - to - end translation from speech has primarily used frame - level features as speech representations , which creates longer , sparser sequences than text .", "we show that a naive method to create compressed phoneme - like speech representations is far more effective and efficient for tran...
[ { "event_type": "RWS", "arguments": [ { "text": "frame - level features", "nugget_type": "FEA", "argument_type": "TriedComponent", "tokens": [ "frame", "-", "level", "features" ], "offsets": [ 14, ...
[ "previous", "work", "on", "end", "-", "to", "-", "end", "translation", "from", "speech", "has", "primarily", "used", "frame", "-", "level", "features", "as", "speech", "representations", ",", "which", "creates", "longer", ",", "sparser", "sequences", "than", ...
ACL
Align Voting Behavior with Public Statements for Legislator Representation Learning
Ideology of legislators is typically estimated by ideal point models from historical records of votes. It represents legislators and legislation as points in a latent space and shows promising results for modeling voting behavior. However, it fails to capture more specific attitudes of legislators toward emerging issue...
e0285f62b8afc0d90fbc1129b57f973f
2021
[ "ideology of legislators is typically estimated by ideal point models from historical records of votes .", "it represents legislators and legislation as points in a latent space and shows promising results for modeling voting behavior .", "however , it fails to capture more specific attitudes of legislators tow...
[ { "event_type": "ITT", "arguments": [ { "text": "ideal point models", "nugget_type": "APP", "argument_type": "Target", "tokens": [ "ideal", "point", "models" ], "offsets": [ 7, 8, 9 ] ...
[ "ideology", "of", "legislators", "is", "typically", "estimated", "by", "ideal", "point", "models", "from", "historical", "records", "of", "votes", ".", "it", "represents", "legislators", "and", "legislation", "as", "points", "in", "a", "latent", "space", "and", ...
ACL
KinGDOM: Knowledge-Guided DOMain Adaptation for Sentiment Analysis
Cross-domain sentiment analysis has received significant attention in recent years, prompted by the need to combat the domain gap between different applications that make use of sentiment analysis. In this paper, we take a novel perspective on this task by exploring the role of external commonsense knowledge. We introd...
071abc1d72ef543784c0a39d6729b828
2020
[ "cross - domain sentiment analysis has received significant attention in recent years , prompted by the need to combat the domain gap between different applications that make use of sentiment analysis .", "in this paper , we take a novel perspective on this task by exploring the role of external commonsense knowl...
[ { "event_type": "ITT", "arguments": [ { "text": "cross - domain sentiment analysis", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "cross", "-", "domain", "sentiment", "analysis" ], "offset...
[ "cross", "-", "domain", "sentiment", "analysis", "has", "received", "significant", "attention", "in", "recent", "years", ",", "prompted", "by", "the", "need", "to", "combat", "the", "domain", "gap", "between", "different", "applications", "that", "make", "use", ...
ACL
Curate and Generate: A Corpus and Method for Joint Control of Semantics and Style in Neural NLG
Neural natural language generation (NNLG) from structured meaning representations has become increasingly popular in recent years. While we have seen progress with generating syntactically correct utterances that preserve semantics, various shortcomings of NNLG systems are clear: new tasks require new training data whi...
10081e713191e0b925aaea3925cab2c6
2019
[ "neural natural language generation ( nnlg ) from structured meaning representations has become increasingly popular in recent years .", "while we have seen progress with generating syntactically correct utterances that preserve semantics , various shortcomings of nnlg systems are clear : new tasks require new tr...
[ { "event_type": "ITT", "arguments": [ { "text": "neural natural language generation", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "neural", "natural", "language", "generation" ], "offsets": [ ...
[ "neural", "natural", "language", "generation", "(", "nnlg", ")", "from", "structured", "meaning", "representations", "has", "become", "increasingly", "popular", "in", "recent", "years", ".", "while", "we", "have", "seen", "progress", "with", "generating", "syntact...
ACL
On Importance Sampling-Based Evaluation of Latent Language Models
Language models that use additional latent structures (e.g., syntax trees, coreference chains, knowledge graph links) provide several advantages over traditional language models. However, likelihood-based evaluation of these models is often intractable as it requires marginalizing over the latent space. Existing works ...
47eb386ba1c18918284e6dd9ed0c738e
2020
[ "language models that use additional latent structures ( e . g . , syntax trees , coreference chains , knowledge graph links ) provide several advantages over traditional language models .", "however , likelihood - based evaluation of these models is often intractable as it requires marginalizing over the latent ...
[ { "event_type": "ITT", "arguments": [ { "text": "language models", "nugget_type": "APP", "argument_type": "Target", "tokens": [ "language", "models" ], "offsets": [ 0, 1 ] } ], "trigger": { ...
[ "language", "models", "that", "use", "additional", "latent", "structures", "(", "e", ".", "g", ".", ",", "syntax", "trees", ",", "coreference", "chains", ",", "knowledge", "graph", "links", ")", "provide", "several", "advantages", "over", "traditional", "langu...
ACL
HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization
Neural extractive summarization models usually employ a hierarchical encoder for document encoding and they are trained using sentence-level labels, which are created heuristically using rule-based methods. Training the hierarchical encoder with these inaccurate labels is challenging. Inspired by the recent work on pre...
718ee9b19aca15bf494d11d3d750a498
2019
[ "neural extractive summarization models usually employ a hierarchical encoder for document encoding and they are trained using sentence - level labels , which are created heuristically using rule - based methods .", "training the hierarchical encoder with these inaccurate labels is challenging .", "inspired by ...
[ { "event_type": "RWS", "arguments": [ { "text": "sentence - level labels", "nugget_type": "FEA", "argument_type": "BaseComponent", "tokens": [ "sentence", "-", "level", "labels" ], "offsets": [ 17, ...
[ "neural", "extractive", "summarization", "models", "usually", "employ", "a", "hierarchical", "encoder", "for", "document", "encoding", "and", "they", "are", "trained", "using", "sentence", "-", "level", "labels", ",", "which", "are", "created", "heuristically", "u...
ACL
Examining Citations of Natural Language Processing Literature
We extracted information from the ACL Anthology (AA) and Google Scholar (GS) to examine trends in citations of NLP papers. We explore questions such as: how well cited are papers of different types (journal articles, conference papers, demo papers, etc.)? how well cited are papers from different areas of within NLP? et...
4c5808359738e5dab0bf9cc52352ab09
2020
[ "we extracted information from the acl anthology ( aa ) and google scholar ( gs ) to examine trends in citations of nlp papers .", "we explore questions such as : how well cited are papers of different types ( journal articles , conference papers , demo papers , etc . ) ?", "how well cited are papers from diffe...
[ { "event_type": "WKS", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Researcher", "tokens": [ "we" ], "offsets": [ 0 ] }, { "text": "information", "nugget_type": "FEA", ...
[ "we", "extracted", "information", "from", "the", "acl", "anthology", "(", "aa", ")", "and", "google", "scholar", "(", "gs", ")", "to", "examine", "trends", "in", "citations", "of", "nlp", "papers", ".", "we", "explore", "questions", "such", "as", ":", "h...
ACL
PaperRobot: Incremental Draft Generation of Scientific Ideas
We present a PaperRobot who performs as an automatic research assistant by (1) conducting deep understanding of a large collection of human-written papers in a target domain and constructing comprehensive background knowledge graphs (KGs); (2) creating new ideas by predicting links from the background KGs, by combining...
4a0768b7fd450a04896ba69ca8592e51
2019
[ "we present a paperrobot who performs as an automatic research assistant by ( 1 ) conducting deep understanding of a large collection of human - written papers in a target domain and constructing comprehensive background knowledge graphs ( kgs ) ; ( 2 ) creating new ideas by predicting links from the background kgs...
[ { "event_type": "PRP", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Proposer", "tokens": [ "we" ], "offsets": [ 0 ] }, { "text": "paperrobot", "nugget_type": "APP", ...
[ "we", "present", "a", "paperrobot", "who", "performs", "as", "an", "automatic", "research", "assistant", "by", "(", "1", ")", "conducting", "deep", "understanding", "of", "a", "large", "collection", "of", "human", "-", "written", "papers", "in", "a", "target...
ACL
Parsing into Variable-in-situ Logico-Semantic Graphs
We propose variable-in-situ logico-semantic graphs to bridge the gap between semantic graph and logical form parsing. The new type of graph-based meaning representation allows us to include analysis for scope-related phenomena, such as quantification, negation and modality, in a way that is consistent with the state-of...
ee97bd265f3ab415ece8a1fc5fd35e8b
2020
[ "we propose variable - in - situ logico - semantic graphs to bridge the gap between semantic graph and logical form parsing .", "the new type of graph - based meaning representation allows us to include analysis for scope - related phenomena , such as quantification , negation and modality , in a way that is cons...
[ { "event_type": "PRP", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Proposer", "tokens": [ "we" ], "offsets": [ 0 ] }, { "text": "variable - in - situ logico - semantic graphs", ...
[ "we", "propose", "variable", "-", "in", "-", "situ", "logico", "-", "semantic", "graphs", "to", "bridge", "the", "gap", "between", "semantic", "graph", "and", "logical", "form", "parsing", ".", "the", "new", "type", "of", "graph", "-", "based", "meaning", ...
ACL
Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks
Language models pretrained on text from a wide variety of sources form the foundation of today’s NLP. In light of the success of these broad-coverage models, we investigate whether it is still helpful to tailor a pretrained model to the domain of a target task. We present a study across four domains (biomedical and com...
189c769faf133b9c373c829bcac9419b
2020
[ "language models pretrained on text from a wide variety of sources form the foundation of today ’ s nlp .", "in light of the success of these broad - coverage models , we investigate whether it is still helpful to tailor a pretrained model to the domain of a target task .", "we present a study across four domai...
[ { "event_type": "ITT", "arguments": [ { "text": "language models pretrained", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "language", "models", "pretrained" ], "offsets": [ 0, 1, ...
[ "language", "models", "pretrained", "on", "text", "from", "a", "wide", "variety", "of", "sources", "form", "the", "foundation", "of", "today", "’", "s", "nlp", ".", "in", "light", "of", "the", "success", "of", "these", "broad", "-", "coverage", "models", ...
ACL
An Imitation Learning Curriculum for Text Editing with Non-Autoregressive Models
We propose a framework for training non-autoregressive sequence-to-sequence models for editing tasks, where the original input sequence is iteratively edited to produce the output. We show that the imitation learning algorithms designed to train such models for machine translation introduce mismatches between training...
65f075941eed44ad48a7c7d246b96542
2022
[ "we propose a framework for training non - autoregressive sequence - to - sequence models for editing tasks , where the original input sequence is iteratively edited to produce the output .", "we show that the imitation learning algorithms designed to train such models for machine translation introduce mismatche...
[ { "event_type": "PRP", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Proposer", "tokens": [ "we" ], "offsets": [ 0 ] }, { "text": "framework", "nugget_type": "APP", "...
[ "we", "propose", "a", "framework", "for", "training", "non", "-", "autoregressive", "sequence", "-", "to", "-", "sequence", "models", "for", "editing", "tasks", ",", "where", "the", "original", "input", "sequence", "is", "iteratively", "edited", "to", "produce...
ACL
End-to-End Sequential Metaphor Identification Inspired by Linguistic Theories
End-to-end training with Deep Neural Networks (DNN) is a currently popular method for metaphor identification. However, standard sequence tagging models do not explicitly take advantage of linguistic theories of metaphor identification. We experiment with two DNN models which are inspired by two human metaphor identifi...
191fb2de4e965f2eb5bbaefadaecbd00
2019
[ "end - to - end training with deep neural networks ( dnn ) is a currently popular method for metaphor identification .", "however , standard sequence tagging models do not explicitly take advantage of linguistic theories of metaphor identification .", "we experiment with two dnn models which are inspired by two...
[ { "event_type": "ITT", "arguments": [ { "text": "end - to - end training with deep neural networks", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "end", "-", "to", "-", "end", "training", ...
[ "end", "-", "to", "-", "end", "training", "with", "deep", "neural", "networks", "(", "dnn", ")", "is", "a", "currently", "popular", "method", "for", "metaphor", "identification", ".", "however", ",", "standard", "sequence", "tagging", "models", "do", "not", ...
ACL
What is Learned in Visually Grounded Neural Syntax Acquisition
Visual features are a promising signal for learning bootstrap textual models. However, blackbox learning models make it difficult to isolate the specific contribution of visual components. In this analysis, we consider the case study of the Visually Grounded Neural Syntax Learner (Shi et al., 2019), a recent approach f...
f788ecf2d4ec259a6abe7e4b838b420f
2020
[ "visual features are a promising signal for learning bootstrap textual models .", "however , blackbox learning models make it difficult to isolate the specific contribution of visual components .", "in this analysis , we consider the case study of the visually grounded neural syntax learner ( shi et al . , 2019...
[ { "event_type": "ITT", "arguments": [ { "text": "bootstrap textual models", "nugget_type": "APP", "argument_type": "Target", "tokens": [ "bootstrap", "textual", "models" ], "offsets": [ 8, 9, 10...
[ "visual", "features", "are", "a", "promising", "signal", "for", "learning", "bootstrap", "textual", "models", ".", "however", ",", "blackbox", "learning", "models", "make", "it", "difficult", "to", "isolate", "the", "specific", "contribution", "of", "visual", "c...
ACL
Transfer Learning for Sequence Generation: from Single-source to Multi-source
Multi-source sequence generation (MSG) is an important kind of sequence generation tasks that takes multiple sources, including automatic post-editing, multi-source translation, multi-document summarization, etc. As MSG tasks suffer from the data scarcity problem and recent pretrained models have been proven to be effe...
4f1edc10566ace1008723305ec993cb1
2021
[ "multi - source sequence generation ( msg ) is an important kind of sequence generation tasks that takes multiple sources , including automatic post - editing , multi - source translation , multi - document summarization , etc .", "as msg tasks suffer from the data scarcity problem and recent pretrained models ha...
[ { "event_type": "ITT", "arguments": [ { "text": "multi - source sequence generation", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "multi", "-", "source", "sequence", "generation" ], "offs...
[ "multi", "-", "source", "sequence", "generation", "(", "msg", ")", "is", "an", "important", "kind", "of", "sequence", "generation", "tasks", "that", "takes", "multiple", "sources", ",", "including", "automatic", "post", "-", "editing", ",", "multi", "-", "so...
ACL
Improving Meta-learning for Low-resource Text Classification and Generation via Memory Imitation
Building models of natural language processing (NLP) is challenging in low-resource scenarios where limited data are available. Optimization-based meta-learning algorithms achieve promising results in low-resource scenarios by adapting a well-generalized model initialization to handle new tasks. Nonetheless, these appr...
cf8628bd8074646240f9c8218a0ae098
2022
[ "building models of natural language processing ( nlp ) is challenging in low - resource scenarios where limited data are available .", "optimization - based meta - learning algorithms achieve promising results in low - resource scenarios by adapting a well - generalized model initialization to handle new tasks ....
[ { "event_type": "ITT", "arguments": [ { "text": "optimization - based meta - learning algorithms", "nugget_type": "APP", "argument_type": "Target", "tokens": [ "optimization", "-", "based", "meta", "-", "learni...
[ "building", "models", "of", "natural", "language", "processing", "(", "nlp", ")", "is", "challenging", "in", "low", "-", "resource", "scenarios", "where", "limited", "data", "are", "available", ".", "optimization", "-", "based", "meta", "-", "learning", "algor...
ACL
Representations of Syntax [MASK] Useful: Effects of Constituency and Dependency Structure in Recursive LSTMs
Sequence-based neural networks show significant sensitivity to syntactic structure, but they still perform less well on syntactic tasks than tree-based networks. Such tree-based networks can be provided with a constituency parse, a dependency parse, or both. We evaluate which of these two representational schemes more ...
02d66ac10868908b45d865936d434e32
2020
[ "sequence - based neural networks show significant sensitivity to syntactic structure , but they still perform less well on syntactic tasks than tree - based networks .", "such tree - based networks can be provided with a constituency parse , a dependency parse , or both .", "we evaluate which of these two repr...
[ { "event_type": "ITT", "arguments": [ { "text": "sequence - based neural networks", "nugget_type": "APP", "argument_type": "Target", "tokens": [ "sequence", "-", "based", "neural", "networks" ], "offsets"...
[ "sequence", "-", "based", "neural", "networks", "show", "significant", "sensitivity", "to", "syntactic", "structure", ",", "but", "they", "still", "perform", "less", "well", "on", "syntactic", "tasks", "than", "tree", "-", "based", "networks", ".", "such", "tr...
ACL
Bilingual Lexicon Induction with Semi-supervision in Non-Isometric Embedding Spaces
Recent work on bilingual lexicon induction (BLI) has frequently depended either on aligned bilingual lexicons or on distribution matching, often with an assumption about the isometry of the two spaces. We propose a technique to quantitatively estimate this assumption of the isometry between two embedding spaces and emp...
beb812a98000307549efb74d2fd61586
2019
[ "recent work on bilingual lexicon induction ( bli ) has frequently depended either on aligned bilingual lexicons or on distribution matching , often with an assumption about the isometry of the two spaces .", "we propose a technique to quantitatively estimate this assumption of the isometry between two embedding ...
[ { "event_type": "ITT", "arguments": [ { "text": "bilingual lexicon induction", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "bilingual", "lexicon", "induction" ], "offsets": [ 3, 4, ...
[ "recent", "work", "on", "bilingual", "lexicon", "induction", "(", "bli", ")", "has", "frequently", "depended", "either", "on", "aligned", "bilingual", "lexicons", "or", "on", "distribution", "matching", ",", "often", "with", "an", "assumption", "about", "the", ...
ACL
Transition-based Directed Graph Construction for Emotion-Cause Pair Extraction
Emotion-cause pair extraction aims to extract all potential pairs of emotions and corresponding causes from unannotated emotion text. Most existing methods are pipelined framework, which identifies emotions and extracts causes separately, leading to a drawback of error propagation. Towards this issue, we propose a tran...
0574dd79e8c7007c3cb6a079a1998aba
2020
[ "emotion - cause pair extraction aims to extract all potential pairs of emotions and corresponding causes from unannotated emotion text .", "most existing methods are pipelined framework , which identifies emotions and extracts causes separately , leading to a drawback of error propagation .", "towards this iss...
[ { "event_type": "RWF", "arguments": [ { "text": "most existing methods", "nugget_type": "APP", "argument_type": "Concern", "tokens": [ "most", "existing", "methods" ], "offsets": [ 21, 22, 23 ...
[ "emotion", "-", "cause", "pair", "extraction", "aims", "to", "extract", "all", "potential", "pairs", "of", "emotions", "and", "corresponding", "causes", "from", "unannotated", "emotion", "text", ".", "most", "existing", "methods", "are", "pipelined", "framework", ...
ACL
Sharpness-Aware Minimization Improves Language Model Generalization
The allure of superhuman-level capabilities has led to considerable interest in language models like GPT-3 and T5, wherein the research has, by and large, revolved around new model architectures, training tasks, and loss objectives, along with substantial engineering efforts to scale up model capacity and dataset size....
37d8b2e1e8783a34ade1323b3e342a2b
2022
[ "the allure of superhuman - level capabilities has led to considerable interest in language models like gpt - 3 and t5 , wherein the research has , by and large , revolved around new model architectures , training tasks , and loss objectives , along with substantial engineering efforts to scale up model capacity an...
[ { "event_type": "ITT", "arguments": [ { "text": "language models", "nugget_type": "APP", "argument_type": "Target", "tokens": [ "language", "models" ], "offsets": [ 13, 14 ] } ], "trigger": { ...
[ "the", "allure", "of", "superhuman", "-", "level", "capabilities", "has", "led", "to", "considerable", "interest", "in", "language", "models", "like", "gpt", "-", "3", "and", "t5", ",", "wherein", "the", "research", "has", ",", "by", "and", "large", ",", ...
ACL
Global Optimization under Length Constraint for Neural Text Summarization
We propose a global optimization method under length constraint (GOLC) for neural text summarization models. GOLC increases the probabilities of generating summaries that have high evaluation scores, ROUGE in this paper, within a desired length. We compared GOLC with two optimization methods, a maximum log-likelihood a...
253a0ff970fb87a5122bd4b6d5837fd8
2019
[ "we propose a global optimization method under length constraint ( golc ) for neural text summarization models .", "golc increases the probabilities of generating summaries that have high evaluation scores , rouge in this paper , within a desired length .", "we compared golc with two optimization methods , a ma...
[ { "event_type": "PRP", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Proposer", "tokens": [ "we" ], "offsets": [ 0 ] }, { "text": "global optimization method under length constraint"...
[ "we", "propose", "a", "global", "optimization", "method", "under", "length", "constraint", "(", "golc", ")", "for", "neural", "text", "summarization", "models", ".", "golc", "increases", "the", "probabilities", "of", "generating", "summaries", "that", "have", "h...
ACL
Geometry-aware domain adaptation for unsupervised alignment of word embeddings
We propose a novel manifold based geometric approach for learning unsupervised alignment of word embeddings between the source and the target languages. Our approach formulates the alignment learning problem as a domain adaptation problem over the manifold of doubly stochastic matrices. This viewpoint arises from the a...
270d499527fafdc4256479b1a2ddd664
2020
[ "we propose a novel manifold based geometric approach for learning unsupervised alignment of word embeddings between the source and the target languages .", "our approach formulates the alignment learning problem as a domain adaptation problem over the manifold of doubly stochastic matrices .", "this viewpoint ...
[ { "event_type": "PRP", "arguments": [ { "text": "learning unsupervised alignment of word embeddings", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "learning", "unsupervised", "alignment", "of", "word", ...
[ "we", "propose", "a", "novel", "manifold", "based", "geometric", "approach", "for", "learning", "unsupervised", "alignment", "of", "word", "embeddings", "between", "the", "source", "and", "the", "target", "languages", ".", "our", "approach", "formulates", "the", ...
ACL
End-to-end Deep Reinforcement Learning Based Coreference Resolution
Recent neural network models have significantly advanced the task of coreference resolution. However, current neural coreference models are usually trained with heuristic loss functions that are computed over a sequence of local decisions. In this paper, we introduce an end-to-end reinforcement learning based coreferen...
fa42cc4c136d118b7eb632afaf0d1362
2019
[ "recent neural network models have significantly advanced the task of coreference resolution .", "however , current neural coreference models are usually trained with heuristic loss functions that are computed over a sequence of local decisions .", "in this paper , we introduce an end - to - end reinforcement l...
[ { "event_type": "ITT", "arguments": [ { "text": "coreference resolution", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "coreference", "resolution" ], "offsets": [ 10, 11 ] } ], ...
[ "recent", "neural", "network", "models", "have", "significantly", "advanced", "the", "task", "of", "coreference", "resolution", ".", "however", ",", "current", "neural", "coreference", "models", "are", "usually", "trained", "with", "heuristic", "loss", "functions", ...
ACL
To Find Waldo You Need Contextual Cues: Debiasing Who’s Waldo
We present a debiased dataset for the Person-centric Visual Grounding (PCVG) task first proposed by Cui et al. (2021) in the Who’s Waldo dataset. Given an image and a caption, PCVG requires pairing up a person’s name mentioned in a caption with a bounding box that points to the person in the image. We find that the ori...
62799f1e34438f9ac5d608914f573036
2022
[ "we present a debiased dataset for the person - centric visual grounding ( pcvg ) task first proposed by cui et al . ( 2021 ) in the who ’ s waldo dataset .", "given an image and a caption , pcvg requires pairing up a person ’ s name mentioned in a caption with a bounding box that points to the person in the imag...
[ { "event_type": "PRP", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Proposer", "tokens": [ "we" ], "offsets": [ 0 ] }, { "text": "debiased dataset", "nugget_type": "DST", ...
[ "we", "present", "a", "debiased", "dataset", "for", "the", "person", "-", "centric", "visual", "grounding", "(", "pcvg", ")", "task", "first", "proposed", "by", "cui", "et", "al", ".", "(", "2021", ")", "in", "the", "who", "’", "s", "waldo", "dataset",...
ACL
Multimodal and Multiresolution Speech Recognition with Transformers
This paper presents an audio visual automatic speech recognition (AV-ASR) system using a Transformer-based architecture. We particularly focus on the scene context provided by the visual information, to ground the ASR. We extract representations for audio features in the encoder layers of the transformer and fuse video...
beb1390a92f627af503b75f932fca5bc
2020
[ "this paper presents an audio visual automatic speech recognition ( av - asr ) system using a transformer - based architecture .", "we particularly focus on the scene context provided by the visual information , to ground the asr .", "we extract representations for audio features in the encoder layers of the tr...
[ { "event_type": "PRP", "arguments": [ { "text": "audio visual automatic speech recognition system", "nugget_type": "APP", "argument_type": "Content", "tokens": [ "audio", "visual", "automatic", "speech", "recognition", ...
[ "this", "paper", "presents", "an", "audio", "visual", "automatic", "speech", "recognition", "(", "av", "-", "asr", ")", "system", "using", "a", "transformer", "-", "based", "architecture", ".", "we", "particularly", "focus", "on", "the", "scene", "context", ...
ACL
Syntax-augmented Multilingual BERT for Cross-lingual Transfer
In recent years, we have seen a colossal effort in pre-training multilingual text encoders using large-scale corpora in many languages to facilitate cross-lingual transfer learning. However, due to typological differences across languages, the cross-lingual transfer is challenging. Nevertheless, language syntax, e.g., ...
33767de65b88b3f3323fc9d3cddb4deb
2021
[ "in recent years , we have seen a colossal effort in pre - training multilingual text encoders using large - scale corpora in many languages to facilitate cross - lingual transfer learning .", "however , due to typological differences across languages , the cross - lingual transfer is challenging .", "neverthel...
[ { "event_type": "ITT", "arguments": [ { "text": "cross - lingual transfer learning", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "cross", "-", "lingual", "transfer", "learning" ], "offset...
[ "in", "recent", "years", ",", "we", "have", "seen", "a", "colossal", "effort", "in", "pre", "-", "training", "multilingual", "text", "encoders", "using", "large", "-", "scale", "corpora", "in", "many", "languages", "to", "facilitate", "cross", "-", "lingual"...
ACL
PIGLeT: Language Grounding Through Neuro-Symbolic Interaction in a 3D World
We propose PIGLeT: a model that learns physical commonsense knowledge through interaction, and then uses this knowledge to ground language. We factorize PIGLeT into a physical dynamics model, and a separate language model. Our dynamics model learns not just what objects are but also what they do: glass cups break when ...
3065c6bd6c3b4c37378ca31e7e6124a5
2021
[ "we propose piglet : a model that learns physical commonsense knowledge through interaction , and then uses this knowledge to ground language .", "we factorize piglet into a physical dynamics model , and a separate language model .", "our dynamics model learns not just what objects are but also what they do : g...
[ { "event_type": "PRP", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Proposer", "tokens": [ "we" ], "offsets": [ 0 ] }, { "text": "piglet", "nugget_type": "APP", "arg...
[ "we", "propose", "piglet", ":", "a", "model", "that", "learns", "physical", "commonsense", "knowledge", "through", "interaction", ",", "and", "then", "uses", "this", "knowledge", "to", "ground", "language", ".", "we", "factorize", "piglet", "into", "a", "physi...
ACL
Quotation Recommendation and Interpretation Based on Transformation from Queries to Quotations
To help individuals express themselves better, quotation recommendation is receiving growing attention. Nevertheless, most prior efforts focus on modeling quotations and queries separately and ignore the relationship between the quotations and the queries. In this work, we introduce a transformation matrix that directl...
bad5094213eb43c1bc57485866cf9413
2021
[ "to help individuals express themselves better , quotation recommendation is receiving growing attention .", "nevertheless , most prior efforts focus on modeling quotations and queries separately and ignore the relationship between the quotations and the queries .", "in this work , we introduce a transformation...
[ { "event_type": "ITT", "arguments": [ { "text": "quotation recommendation", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "quotation", "recommendation" ], "offsets": [ 7, 8 ] } ], ...
[ "to", "help", "individuals", "express", "themselves", "better", ",", "quotation", "recommendation", "is", "receiving", "growing", "attention", ".", "nevertheless", ",", "most", "prior", "efforts", "focus", "on", "modeling", "quotations", "and", "queries", "separatel...
ACL
History for Visual Dialog: Do we really need it?
Visual Dialogue involves “understanding” the dialogue history (what has been discussed previously) and the current question (what is asked), in addition to grounding information in the image, to accurately generate the correct response. In this paper, we show that co-attention models which explicitly encode dialog hist...
63ece20c9f363a44c45e4e9274d19aa7
2,020
[ "visual dialogue involves “ understanding ” the dialogue history ( what has been discussed previously ) and the current question ( what is asked ) , in addition to grounding information in the image , to accurately generate the correct response .", "in this paper , we show that co - attention models which explici...
[ { "event_type": "FIN", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Finder", "tokens": [ "we" ], "offsets": [ 46 ] }, { "text": "outperform", "nugget_type": "E-CMP", ...
[ "visual", "dialogue", "involves", "“", "understanding", "”", "the", "dialogue", "history", "(", "what", "has", "been", "discussed", "previously", ")", "and", "the", "current", "question", "(", "what", "is", "asked", ")", ",", "in", "addition", "to", "groundi...
ACL
Aspect Sentiment Classification Towards Question-Answering with Reinforced Bidirectional Attention Network
In the literature, existing studies on aspect sentiment classification (ASC) focus on individual non-interactive reviews. This paper extends the research to interactive reviews and proposes a new research task, namely Aspect Sentiment Classification towards Question-Answering (ASC-QA), for real-world applications. This...
a2b3bee1fc8c883c34fed159fcf5531d
2,019
[ "in the literature , existing studies on aspect sentiment classification ( asc ) focus on individual non - interactive reviews .", "this paper extends the research to interactive reviews and proposes a new research task , namely aspect sentiment classification towards question - answering ( asc - qa ) , for real ...
[ { "event_type": "ITT", "arguments": [ { "text": "aspect sentiment classification", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "aspect", "sentiment", "classification" ], "offsets": [ 7, 8...
[ "in", "the", "literature", ",", "existing", "studies", "on", "aspect", "sentiment", "classification", "(", "asc", ")", "focus", "on", "individual", "non", "-", "interactive", "reviews", ".", "this", "paper", "extends", "the", "research", "to", "interactive", "...
ACL
SaRoCo: Detecting Satire in a Novel Romanian Corpus of News Articles
In this work, we introduce a corpus for satire detection in Romanian news. We gathered 55,608 public news articles from multiple real and satirical news sources, composing one of the largest corpora for satire detection regardless of language and the only one for the Romanian language. We provide an official split of t...
4d6a858846174994eee2b6f654bc42c5
2,021
[ "in this work , we introduce a corpus for satire detection in romanian news .", "we gathered 55 , 608 public news articles from multiple real and satirical news sources , composing one of the largest corpora for satire detection regardless of language and the only one for the romanian language .", "we provide a...
[ { "event_type": "PRP", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Proposer", "tokens": [ "we" ], "offsets": [ 4 ] }, { "text": "corpus for satire detection in romanian news", ...
[ "in", "this", "work", ",", "we", "introduce", "a", "corpus", "for", "satire", "detection", "in", "romanian", "news", ".", "we", "gathered", "55", ",", "608", "public", "news", "articles", "from", "multiple", "real", "and", "satirical", "news", "sources", "...
ACL
Do Neural Models Learn Systematicity of Monotonicity Inference in Natural Language?
Despite the success of language models using neural networks, it remains unclear to what extent neural models have the generalization ability to perform inferences. In this paper, we introduce a method for evaluating whether neural models can learn systematicity of monotonicity inference in natural language, namely, th...
46be744b7ec7473aef4ebcdc3c33b904
2,020
[ "despite the success of language models using neural networks , it remains unclear to what extent neural models have the generalization ability to perform inferences .", "in this paper , we introduce a method for evaluating whether neural models can learn systematicity of monotonicity inference in natural languag...
[ { "event_type": "ITT", "arguments": [ { "text": "language models using neural networks", "nugget_type": "APP", "argument_type": "Target", "tokens": [ "language", "models", "using", "neural", "networks" ], ...
[ "despite", "the", "success", "of", "language", "models", "using", "neural", "networks", ",", "it", "remains", "unclear", "to", "what", "extent", "neural", "models", "have", "the", "generalization", "ability", "to", "perform", "inferences", ".", "in", "this", "...
ACL
Can Visual Dialogue Models Do Scorekeeping? Exploring How Dialogue Representations Incrementally Encode Shared Knowledge
Cognitively plausible visual dialogue models should keep a mental scoreboard of shared established facts in the dialogue context. We propose a theory-based evaluation method for investigating to what degree models pretrained on the VisDial dataset incrementally build representations that appropriately do scorekeeping. ...
4ec8c8fdc1b4cb1571d014652c6373e1
2,022
[ "cognitively plausible visual dialogue models should keep a mental scoreboard of shared established facts in the dialogue context .", "we propose a theory - based evaluation method for investigating to what degree models pretrained on the visdial dataset incrementally build representations that appropriately do s...
[ { "event_type": "ITT", "arguments": [ { "text": "cognitively plausible visual dialogue models", "nugget_type": "APP", "argument_type": "Target", "tokens": [ "cognitively", "plausible", "visual", "dialogue", "models" ...
[ "cognitively", "plausible", "visual", "dialogue", "models", "should", "keep", "a", "mental", "scoreboard", "of", "shared", "established", "facts", "in", "the", "dialogue", "context", ".", "we", "propose", "a", "theory", "-", "based", "evaluation", "method", "for...
ACL
Improving Neural Machine Translation with Soft Template Prediction
Although neural machine translation (NMT) has achieved significant progress in recent years, most previous NMT models only depend on the source text to generate translation. Inspired by the success of template-based and syntax-based approaches in other fields, we propose to use extracted templates from tree structures ...
dd4f17a4b3bb004b2cf4cd954a7c367d
2,020
[ "although neural machine translation ( nmt ) has achieved significant progress in recent years , most previous nmt models only depend on the source text to generate translation .", "inspired by the success of template - based and syntax - based approaches in other fields , we propose to use extracted templates fr...
[ { "event_type": "ITT", "arguments": [ { "text": "neural machine translation", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "neural", "machine", "translation" ], "offsets": [ 1, 2, ...
[ "although", "neural", "machine", "translation", "(", "nmt", ")", "has", "achieved", "significant", "progress", "in", "recent", "years", ",", "most", "previous", "nmt", "models", "only", "depend", "on", "the", "source", "text", "to", "generate", "translation", ...
ACL
A Graph-based Coarse-to-fine Method for Unsupervised Bilingual Lexicon Induction
Unsupervised bilingual lexicon induction is the task of inducing word translations from monolingual corpora of two languages. Recent methods are mostly based on unsupervised cross-lingual word embeddings, the key to which is to find initial solutions of word translations, followed by the learning and refinement of mapp...
12a00e25ac1634d8be0e1137e7d5ecd9
2,020
[ "unsupervised bilingual lexicon induction is the task of inducing word translations from monolingual corpora of two languages .", "recent methods are mostly based on unsupervised cross - lingual word embeddings , the key to which is to find initial solutions of word translations , followed by the learning and ref...
[ { "event_type": "ITT", "arguments": [ { "text": "unsupervised bilingual lexicon induction", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "unsupervised", "bilingual", "lexicon", "induction" ], "offse...
[ "unsupervised", "bilingual", "lexicon", "induction", "is", "the", "task", "of", "inducing", "word", "translations", "from", "monolingual", "corpora", "of", "two", "languages", ".", "recent", "methods", "are", "mostly", "based", "on", "unsupervised", "cross", "-", ...
ACL
Dual Reader-Parser on Hybrid Textual and Tabular Evidence for Open Domain Question Answering
The current state-of-the-art generative models for open-domain question answering (ODQA) have focused on generating direct answers from unstructured textual information. However, a large amount of world’s knowledge is stored in structured databases, and need to be accessed using query languages such as SQL. Furthermore...
c146fcb9e57997157a6b101c1679039c
2,021
[ "the current state - of - the - art generative models for open - domain question answering ( odqa ) have focused on generating direct answers from unstructured textual information .", "however , a large amount of world ’ s knowledge is stored in structured databases , and need to be accessed using query languages...
[ { "event_type": "ITT", "arguments": [ { "text": "odqa", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "odqa" ], "offsets": [ 18 ] } ], "trigger": { "text": "focused", "tokens": [ ...
[ "the", "current", "state", "-", "of", "-", "the", "-", "art", "generative", "models", "for", "open", "-", "domain", "question", "answering", "(", "odqa", ")", "have", "focused", "on", "generating", "direct", "answers", "from", "unstructured", "textual", "inf...
ACL
Regression Bugs Are In Your Model! Measuring, Reducing and Analyzing Regressions In NLP Model Updates
Behavior of deep neural networks can be inconsistent between different versions. Regressions during model update are a common cause of concern that often over-weigh the benefits in accuracy or efficiency gain. This work focuses on quantifying, reducing and analyzing regression errors in the NLP model updates. Using neg...
5ff580bd94b28a4d98b63b7e33eb6635
2,021
[ "behavior of deep neural networks can be inconsistent between different versions .", "regressions during model update are a common cause of concern that often over - weigh the benefits in accuracy or efficiency gain .", "this work focuses on quantifying , reducing and analyzing regression errors in the nlp mode...
[ { "event_type": "ITT", "arguments": [ { "text": "regression errors", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "regression", "errors" ], "offsets": [ 44, 45 ] }, { "te...
[ "behavior", "of", "deep", "neural", "networks", "can", "be", "inconsistent", "between", "different", "versions", ".", "regressions", "during", "model", "update", "are", "a", "common", "cause", "of", "concern", "that", "often", "over", "-", "weigh", "the", "ben...
ACL
Unleash GPT-2 Power for Event Detection
Event Detection (ED) aims to recognize mentions of events (i.e., event triggers) and their types in text. Recently, several ED datasets in various domains have been proposed. However, the major limitation of these resources is the lack of enough training data for individual event types which hinders the efficient train...
8f301117ba61302de9e8b6ea82fa3562
2,021
[ "event detection ( ed ) aims to recognize mentions of events ( i . e . , event triggers ) and their types in text .", "recently , several ed datasets in various domains have been proposed .", "however , the major limitation of these resources is the lack of enough training data for individual event types which ...
[ { "event_type": "ITT", "arguments": [ { "text": "event detection", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "event", "detection" ], "offsets": [ 0, 1 ] } ], "trigger": { ...
[ "event", "detection", "(", "ed", ")", "aims", "to", "recognize", "mentions", "of", "events", "(", "i", ".", "e", ".", ",", "event", "triggers", ")", "and", "their", "types", "in", "text", ".", "recently", ",", "several", "ed", "datasets", "in", "variou...
ACL
Predicting the Topical Stance and Political Leaning of Media using Tweets
Discovering the stances of media outlets and influential people on current, debatable topics is important for social statisticians and policy makers. Many supervised solutions exist for determining viewpoints, but manually annotating training data is costly. In this paper, we propose a cascaded method that uses unsuper...
985a4440bf6d375d4cfab6e4ff4c681d
2,020
[ "discovering the stances of media outlets and influential people on current , debatable topics is important for social statisticians and policy makers .", "many supervised solutions exist for determining viewpoints , but manually annotating training data is costly .", "in this paper , we propose a cascaded meth...
[ { "event_type": "ITT", "arguments": [ { "text": "discovering the stances of media outlets and influential people on current", "nugget_type": "LIM", "argument_type": "Condition", "tokens": [ "discovering", "the", "stances", "of", ...
[ "discovering", "the", "stances", "of", "media", "outlets", "and", "influential", "people", "on", "current", ",", "debatable", "topics", "is", "important", "for", "social", "statisticians", "and", "policy", "makers", ".", "many", "supervised", "solutions", "exist",...
ACL
Camouflaged Chinese Spam Content Detection with Semi-supervised Generative Active Learning
We propose a Semi-supervIsed GeNerative Active Learning (SIGNAL) model to address the imbalance, efficiency, and text camouflage problems of Chinese text spam detection task. A “self-diversity” criterion is proposed for measuring the “worthiness” of a candidate for annotation. A semi-supervised variational autoencoder ...
43c19fab36c3dec70951955286c431b5
2,020
[ "we propose a semi - supervised generative active learning ( signal ) model to address the imbalance , efficiency , and text camouflage problems of chinese text spam detection task .", "a “ self - diversity ” criterion is proposed for measuring the “ worthiness ” of a candidate for annotation .", "a semi - supe...
[ { "event_type": "PRP", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Proposer", "tokens": [ "we" ], "offsets": [ 0 ] }, { "text": "semi - supervised generative active learning model"...
[ "we", "propose", "a", "semi", "-", "supervised", "generative", "active", "learning", "(", "signal", ")", "model", "to", "address", "the", "imbalance", ",", "efficiency", ",", "and", "text", "camouflage", "problems", "of", "chinese", "text", "spam", "detection"...
ACL
Learning to Select, Track, and Generate for Data-to-Text
We propose a data-to-text generation model with two modules, one for tracking and the other for text generation. Our tracking module selects and keeps track of salient information and memorizes which record has been mentioned. Our generation module generates a summary conditioned on the state of tracking module. Our pr...
57a1d312e4768178d2f1b33f7851200d
2,019
[ "we propose a data - to - text generation model with two modules , one for tracking and the other for text generation .", "our tracking module selects and keeps track of salient information and memorizes which record has been mentioned .", "our generation module generates a summary conditioned on the state of t...
[ { "event_type": "PRP", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Proposer", "tokens": [ "we" ], "offsets": [ 0 ] }, { "text": "data - to - text generation model with two modules"...
[ "we", "propose", "a", "data", "-", "to", "-", "text", "generation", "model", "with", "two", "modules", ",", "one", "for", "tracking", "and", "the", "other", "for", "text", "generation", ".", "our", "tracking", "module", "selects", "and", "keeps", "track", ...
ACL
Deep Differential Amplifier for Extractive Summarization
For sentence-level extractive summarization, there is a disproportionate ratio of selected and unselected sentences, leading to flatting the summary features when maximizing the accuracy. The imbalanced classification of summarization is inherent, which can’t be addressed by common algorithms easily. In this paper, we ...
a7f3a0171c07abaa4d998aad85894eb5
2,021
[ "for sentence - level extractive summarization , there is a disproportionate ratio of selected and unselected sentences , leading to flatting the summary features when maximizing the accuracy .", "the imbalanced classification of summarization is inherent , which can ’ t be addressed by common algorithms easily ....
[ { "event_type": "ITT", "arguments": [ { "text": "sentence - level extractive summarization", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "sentence", "-", "level", "extractive", "summarization" ],...
[ "for", "sentence", "-", "level", "extractive", "summarization", ",", "there", "is", "a", "disproportionate", "ratio", "of", "selected", "and", "unselected", "sentences", ",", "leading", "to", "flatting", "the", "summary", "features", "when", "maximizing", "the", ...
ACL
nmT5 - Is parallel data still relevant for pre-training massively multilingual language models?
Recently, mT5 - a massively multilingual version of T5 - leveraged a unified text-to-text format to attain state-of-the-art results on a wide variety of multilingual NLP tasks. In this paper, we investigate the impact of incorporating parallel data into mT5 pre-training. We find that multi-tasking language modeling wit...
0e84e26f2351d5627c478d9fa2de287c
2,021
[ "recently , mt5 - a massively multilingual version of t5 - leveraged a unified text - to - text format to attain state - of - the - art results on a wide variety of multilingual nlp tasks .", "in this paper , we investigate the impact of incorporating parallel data into mt5 pre - training .", "we find that mult...
[ { "event_type": "RWS", "arguments": [ { "text": "mt5", "nugget_type": "APP", "argument_type": "Subject", "tokens": [ "mt5" ], "offsets": [ 2 ] }, { "text": "unified text - to - text format", "nugget...
[ "recently", ",", "mt5", "-", "a", "massively", "multilingual", "version", "of", "t5", "-", "leveraged", "a", "unified", "text", "-", "to", "-", "text", "format", "to", "attain", "state", "-", "of", "-", "the", "-", "art", "results", "on", "a", "wide", ...
ACL
Towards Transparent and Explainable Attention Models
Recent studies on interpretability of attention distributions have led to notions of faithful and plausible explanations for a model’s predictions. Attention distributions can be considered a faithful explanation if a higher attention weight implies a greater impact on the model’s prediction. They can be considered a p...
cc7cd8e2c6c1d7a6f91bca7351b3a784
2,020
[ "recent studies on interpretability of attention distributions have led to notions of faithful and plausible explanations for a model ’ s predictions .", "attention distributions can be considered a faithful explanation if a higher attention weight implies a greater impact on the model ’ s prediction .", "they ...
[ { "event_type": "ITT", "arguments": [ { "text": "interpretability of attention distributions", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "interpretability", "of", "attention", "distributions" ], ...
[ "recent", "studies", "on", "interpretability", "of", "attention", "distributions", "have", "led", "to", "notions", "of", "faithful", "and", "plausible", "explanations", "for", "a", "model", "’", "s", "predictions", ".", "attention", "distributions", "can", "be", ...
ACL
Multitasking Framework for Unsupervised Simple Definition Generation
The definition generation task can help language learners by providing explanations for unfamiliar words. This task has attracted much attention in recent years. We propose a novel task of Simple Definition Generation (SDG) to help language learners and low literacy readers. A significant challenge of this task is the ...
0cb33c70f4e83ada2d5db5fbf3aecd6e
2,022
[ "the definition generation task can help language learners by providing explanations for unfamiliar words .", "this task has attracted much attention in recent years .", "we propose a novel task of simple definition generation ( sdg ) to help language learners and low literacy readers .", "a significant chall...
[ { "event_type": "ITT", "arguments": [ { "text": "definition generation task", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "definition", "generation", "task" ], "offsets": [ 1, 2, ...
[ "the", "definition", "generation", "task", "can", "help", "language", "learners", "by", "providing", "explanations", "for", "unfamiliar", "words", ".", "this", "task", "has", "attracted", "much", "attention", "in", "recent", "years", ".", "we", "propose", "a", ...
ACL
TruthfulQA: Measuring How Models Mimic Human Falsehoods
We propose a benchmark to measure whether a language model is truthful in generating answers to questions. The benchmark comprises 817 questions that span 38 categories, including health, law, finance and politics. We crafted questions that some humans would answer falsely due to a false belief or misconception. To per...
e6ce0ffdec0a6ed5cae6e68d3190a8bf
2,022
[ "we propose a benchmark to measure whether a language model is truthful in generating answers to questions .", "the benchmark comprises 817 questions that span 38 categories , including health , law , finance and politics .", "we crafted questions that some humans would answer falsely due to a false belief or m...
[ { "event_type": "PRP", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Proposer", "tokens": [ "we" ], "offsets": [ 0 ] }, { "text": "measure", "nugget_type": "E-PUR", "...
[ "we", "propose", "a", "benchmark", "to", "measure", "whether", "a", "language", "model", "is", "truthful", "in", "generating", "answers", "to", "questions", ".", "the", "benchmark", "comprises", "817", "questions", "that", "span", "38", "categories", ",", "inc...
ACL
Confidence Based Bidirectional Global Context Aware Training Framework for Neural Machine Translation
Most dominant neural machine translation (NMT) models are restricted to make predictions only according to the local context of preceding words in a left-to-right manner. Although many previous studies try to incorporate global information into NMT models, there still exist limitations on how to effectively exploit bid...
6cf650d1b3cc6ab3cba5806d88d5bdc8
2,022
[ "most dominant neural machine translation ( nmt ) models are restricted to make predictions only according to the local context of preceding words in a left - to - right manner .", "although many previous studies try to incorporate global information into nmt models , there still exist limitations on how to effec...
[ { "event_type": "RWS", "arguments": [ { "text": "global information", "nugget_type": "FEA", "argument_type": "TriedComponent", "tokens": [ "global", "information" ], "offsets": [ 39, 40 ] }, { ...
[ "most", "dominant", "neural", "machine", "translation", "(", "nmt", ")", "models", "are", "restricted", "to", "make", "predictions", "only", "according", "to", "the", "local", "context", "of", "preceding", "words", "in", "a", "left", "-", "to", "-", "right",...
ACL
Exploring Dynamic Selection of Branch Expansion Orders for Code Generation
Due to the great potential in facilitating software development, code generation has attracted increasing attention recently. Generally, dominant models are Seq2Tree models, which convert the input natural language description into a sequence of tree-construction actions corresponding to the pre-order traversal of an A...
dbb9e05063d1b5d02ae4b9c8fa2b66d2
2,021
[ "due to the great potential in facilitating software development , code generation has attracted increasing attention recently .", "generally , dominant models are seq2tree models , which convert the input natural language description into a sequence of tree - construction actions corresponding to the pre - order...
[ { "event_type": "ITT", "arguments": [ { "text": "software development", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "software", "development" ], "offsets": [ 7, 8 ] } ], "trig...
[ "due", "to", "the", "great", "potential", "in", "facilitating", "software", "development", ",", "code", "generation", "has", "attracted", "increasing", "attention", "recently", ".", "generally", ",", "dominant", "models", "are", "seq2tree", "models", ",", "which",...
ACL
Few-shot Controllable Style Transfer for Low-Resource Multilingual Settings
Style transfer is the task of rewriting a sentence into a target style while approximately preserving content. While most prior literature assumes access to a large style-labelled corpus, recent work (Riley et al. 2021) has attempted “few-shot” style transfer using only 3-10 sentences at inference for style extraction....
db9e153c0969425ca7e9f0b428269102
2,022
[ "style transfer is the task of rewriting a sentence into a target style while approximately preserving content .", "while most prior literature assumes access to a large style - labelled corpus , recent work ( riley et al .", "2021 ) has attempted “ few - shot ” style transfer using only 3 - 10 sentences at inf...
[ { "event_type": "ITT", "arguments": [ { "text": "style transfer", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "style", "transfer" ], "offsets": [ 0, 1 ] } ], "trigger": { ...
[ "style", "transfer", "is", "the", "task", "of", "rewriting", "a", "sentence", "into", "a", "target", "style", "while", "approximately", "preserving", "content", ".", "while", "most", "prior", "literature", "assumes", "access", "to", "a", "large", "style", "-",...
ACL
Beyond Noise: Mitigating the Impact of Fine-grained Semantic Divergences on Neural Machine Translation
While it has been shown that Neural Machine Translation (NMT) is highly sensitive to noisy parallel training samples, prior work treats all types of mismatches between source and target as noise. As a result, it remains unclear how samples that are mostly equivalent but contain a small number of semantically divergent ...
9d6a095a90a82c718d4b8798d954e909
2,021
[ "while it has been shown that neural machine translation ( nmt ) is highly sensitive to noisy parallel training samples , prior work treats all types of mismatches between source and target as noise .", "as a result , it remains unclear how samples that are mostly equivalent but contain a small number of semantic...
[ { "event_type": "ITT", "arguments": [ { "text": "neural machine translation", "nugget_type": "APP", "argument_type": "Target", "tokens": [ "neural", "machine", "translation" ], "offsets": [ 6, 7, ...
[ "while", "it", "has", "been", "shown", "that", "neural", "machine", "translation", "(", "nmt", ")", "is", "highly", "sensitive", "to", "noisy", "parallel", "training", "samples", ",", "prior", "work", "treats", "all", "types", "of", "mismatches", "between", ...
ACL
Code Generation from Natural Language with Less Prior Knowledge and More Monolingual Data
Training datasets for semantic parsing are typically small due to the higher expertise required for annotation than most other NLP tasks. As a result, models for this application usually need additional prior knowledge to be built into the architecture or algorithm. The increased dependency on human experts hinders aut...
37f5b015e76b15cdaadc194861959f15
2,021
[ "training datasets for semantic parsing are typically small due to the higher expertise required for annotation than most other nlp tasks .", "as a result , models for this application usually need additional prior knowledge to be built into the architecture or algorithm .", "the increased dependency on human e...
[ { "event_type": "ITT", "arguments": [ { "text": "semantic parsing", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "semantic", "parsing" ], "offsets": [ 3, 4 ] } ], "trigger": { ...
[ "training", "datasets", "for", "semantic", "parsing", "are", "typically", "small", "due", "to", "the", "higher", "expertise", "required", "for", "annotation", "than", "most", "other", "nlp", "tasks", ".", "as", "a", "result", ",", "models", "for", "this", "a...
ACL
Parameter Selection: Why We Should Pay More Attention to It
The importance of parameter selection in supervised learning is well known. However, due to the many parameter combinations, an incomplete or an insufficient procedure is often applied. This situation may cause misleading or confusing conclusions. In this opinion paper, through an intriguing example we point out that t...
308f345cb1960ce580d62fe992f3fa64
2,021
[ "the importance of parameter selection in supervised learning is well known .", "however , due to the many parameter combinations , an incomplete or an insufficient procedure is often applied .", "this situation may cause misleading or confusing conclusions .", "in this opinion paper , through an intriguing e...
[ { "event_type": "ITT", "arguments": [ { "text": "parameter selection in supervised learning", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "parameter", "selection", "in", "supervised", "learning" ...
[ "the", "importance", "of", "parameter", "selection", "in", "supervised", "learning", "is", "well", "known", ".", "however", ",", "due", "to", "the", "many", "parameter", "combinations", ",", "an", "incomplete", "or", "an", "insufficient", "procedure", "is", "o...
ACL
A Two-Step Approach for Implicit Event Argument Detection
In this work, we explore the implicit event argument detection task, which studies event arguments beyond sentence boundaries. The addition of cross-sentence argument candidates imposes great challenges for modeling. To reduce the number of candidates, we adopt a two-step approach, decomposing the problem into two sub-...
bfe344f89c178c55b0fffd1f1ba88224
2,020
[ "in this work , we explore the implicit event argument detection task , which studies event arguments beyond sentence boundaries .", "the addition of cross - sentence argument candidates imposes great challenges for modeling .", "to reduce the number of candidates , we adopt a two - step approach , decomposing ...
[ { "event_type": "WKS", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Researcher", "tokens": [ "we" ], "offsets": [ 4 ] }, { "text": "implicit event argument detection task", ...
[ "in", "this", "work", ",", "we", "explore", "the", "implicit", "event", "argument", "detection", "task", ",", "which", "studies", "event", "arguments", "beyond", "sentence", "boundaries", ".", "the", "addition", "of", "cross", "-", "sentence", "argument", "can...
ACL
Every Bite Is an Experience: Key Point Analysis of Business Reviews
Previous work on review summarization focused on measuring the sentiment toward the main aspects of the reviewed product or business, or on creating a textual summary. These approaches provide only a partial view of the data: aspect-based sentiment summaries lack sufficient explanation or justification for the aspect r...
a7eb3274772d244a76676ca00d9554bd
2,021
[ "previous work on review summarization focused on measuring the sentiment toward the main aspects of the reviewed product or business , or on creating a textual summary .", "these approaches provide only a partial view of the data : aspect - based sentiment summaries lack sufficient explanation or justification f...
[ { "event_type": "ITT", "arguments": [ { "text": "review summarization", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "review", "summarization" ], "offsets": [ 3, 4 ] } ], "trig...
[ "previous", "work", "on", "review", "summarization", "focused", "on", "measuring", "the", "sentiment", "toward", "the", "main", "aspects", "of", "the", "reviewed", "product", "or", "business", ",", "or", "on", "creating", "a", "textual", "summary", ".", "these...
ACL
Neural Text Simplification of Clinical Letters with a Domain Specific Phrase Table
Clinical letters are infamously impenetrable for the lay patient. This work uses neural text simplification methods to automatically improve the understandability of clinical letters for patients. We take existing neural text simplification software and augment it with a new phrase table that links complex medical term...
21d0864063319bae49bf23e9572b66a0
2019
[ "clinical letters are infamously impenetrable for the lay patient .", "this work uses neural text simplification methods to automatically improve the understandability of clinical letters for patients .", "we take existing neural text simplification software and augment it with a new phrase table that links com...
[ { "event_type": "WKS", "arguments": [ { "text": "neural text simplification methods", "nugget_type": "APP", "argument_type": "Content", "tokens": [ "neural", "text", "simplification", "methods" ], "offsets": [ ...
[ "clinical", "letters", "are", "infamously", "impenetrable", "for", "the", "lay", "patient", ".", "this", "work", "uses", "neural", "text", "simplification", "methods", "to", "automatically", "improve", "the", "understandability", "of", "clinical", "letters", "for", ...
ACL
Structured Tuning for Semantic Role Labeling
Recent neural network-driven semantic role labeling (SRL) systems have shown impressive improvements in F1 scores. These improvements are due to expressive input representations, which, at least at the surface, are orthogonal to knowledge-rich constrained decoding mechanisms that helped linear SRL models. Introducing t...
3cdf2053adff17615d186bb5190cd913
2020
[ "recent neural network - driven semantic role labeling ( srl ) systems have shown impressive improvements in f1 scores .", "these improvements are due to expressive input representations , which , at least at the surface , are orthogonal to knowledge - rich constrained decoding mechanisms that helped linear srl m...
[ { "event_type": "ITT", "arguments": [ { "text": "neural network - driven semantic role labeling systems", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "neural", "network", "-", "driven", "semantic", ...
[ "recent", "neural", "network", "-", "driven", "semantic", "role", "labeling", "(", "srl", ")", "systems", "have", "shown", "impressive", "improvements", "in", "f1", "scores", ".", "these", "improvements", "are", "due", "to", "expressive", "input", "representatio...
ACL
Span-ConveRT: Few-shot Span Extraction for Dialog with Pretrained Conversational Representations
We introduce Span-ConveRT, a light-weight model for dialog slot-filling which frames the task as a turn-based span extraction task. This formulation allows for a simple integration of conversational knowledge coded in large pretrained conversational models such as ConveRT (Henderson et al., 2019). We show that leveragi...
a10aceec32717e5d7316d4d6e2c0a9b4
2020
[ "we introduce span - convert , a light - weight model for dialog slot - filling which frames the task as a turn - based span extraction task .", "this formulation allows for a simple integration of conversational knowledge coded in large pretrained conversational models such as convert ( henderson et al . , 2019 ...
[ { "event_type": "WKS", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Researcher", "tokens": [ "we" ], "offsets": [ 0 ] }, { "text": "dialog slot - filling", "nugget_type": "T...
[ "we", "introduce", "span", "-", "convert", ",", "a", "light", "-", "weight", "model", "for", "dialog", "slot", "-", "filling", "which", "frames", "the", "task", "as", "a", "turn", "-", "based", "span", "extraction", "task", ".", "this", "formulation", "a...
ACL
AdvAug: Robust Adversarial Augmentation for Neural Machine Translation
In this paper, we propose a new adversarial augmentation method for Neural Machine Translation (NMT). The main idea is to minimize the vicinal risk over virtual sentences sampled from two vicinity distributions, in which the crucial one is a novel vicinity distribution for adversarial sentences that describes a smooth ...
8b971188703b5f0c7f276da1952acaea
2020
[ "in this paper , we propose a new adversarial augmentation method for neural machine translation ( nmt ) .", "the main idea is to minimize the vicinal risk over virtual sentences sampled from two vicinity distributions , in which the crucial one is a novel vicinity distribution for adversarial sentences that desc...
[ { "event_type": "PRP", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Proposer", "tokens": [ "we" ], "offsets": [ 4 ] }, { "text": "new adversarial augmentation method", "nugg...
[ "in", "this", "paper", ",", "we", "propose", "a", "new", "adversarial", "augmentation", "method", "for", "neural", "machine", "translation", "(", "nmt", ")", ".", "the", "main", "idea", "is", "to", "minimize", "the", "vicinal", "risk", "over", "virtual", "...
ACL
A Recipe for Creating Multimodal Aligned Datasets for Sequential Tasks
Many high-level procedural tasks can be decomposed into sequences of instructions that vary in their order and choice of tools. In the cooking domain, the web offers many, partially-overlapping, text and video recipes (i.e. procedures) that describe how to make the same dish (i.e. high-level task). Aligning instruction...
d1b077f5c5d5b62458735e7e31109bed
2020
[ "many high - level procedural tasks can be decomposed into sequences of instructions that vary in their order and choice of tools .", "in the cooking domain , the web offers many , partially - overlapping , text and video recipes ( i . e . procedures ) that describe how to make the same dish ( i . e . high - leve...
[ { "event_type": "ITT", "arguments": [ { "text": "high - level procedural tasks", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "high", "-", "level", "procedural", "tasks" ], "offsets": [ ...
[ "many", "high", "-", "level", "procedural", "tasks", "can", "be", "decomposed", "into", "sequences", "of", "instructions", "that", "vary", "in", "their", "order", "and", "choice", "of", "tools", ".", "in", "the", "cooking", "domain", ",", "the", "web", "of...
ACL
“That Is a Suspicious Reaction!”: Interpreting Logits Variation to Detect NLP Adversarial Attacks
Adversarial attacks are a major challenge faced by current machine learning research. These purposely crafted inputs fool even the most advanced models, precluding their deployment in safety-critical applications. Extensive research in computer vision has been carried to develop reliable defense strategies. However, th...
7eb9a80d8e4138e6318bb97196ab288c
2022
[ "adversarial attacks are a major challenge faced by current machine learning research .", "these purposely crafted inputs fool even the most advanced models , precluding their deployment in safety - critical applications .", "extensive research in computer vision has been carried to develop reliable defense str...
[ { "event_type": "ITT", "arguments": [ { "text": "adversarial attacks", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "adversarial", "attacks" ], "offsets": [ 0, 1 ] } ], "trigge...
[ "adversarial", "attacks", "are", "a", "major", "challenge", "faced", "by", "current", "machine", "learning", "research", ".", "these", "purposely", "crafted", "inputs", "fool", "even", "the", "most", "advanced", "models", ",", "precluding", "their", "deployment", ...
ACL
Cross-Domain Generalization of Neural Constituency Parsers
Neural parsers obtain state-of-the-art results on benchmark treebanks for constituency parsing—but to what degree do they generalize to other domains? We present three results about the generalization of neural parsers in a zero-shot setting: training on trees from one corpus and evaluating on out-of-domain corpora. Fi...
22322562d681e904fb69140c682d293b
2019
[ "neural parsers obtain state - of - the - art results on benchmark treebanks for constituency parsing — but to what degree do they generalize to other domains ?", "we present three results about the generalization of neural parsers in a zero - shot setting : training on trees from one corpus and evaluating on out...
[ { "event_type": "ITT", "arguments": [ { "text": "neural parsers", "nugget_type": "MOD", "argument_type": "Target", "tokens": [ "neural", "parsers" ], "offsets": [ 0, 1 ] } ], "trigger": { ...
[ "neural", "parsers", "obtain", "state", "-", "of", "-", "the", "-", "art", "results", "on", "benchmark", "treebanks", "for", "constituency", "parsing", "—", "but", "to", "what", "degree", "do", "they", "generalize", "to", "other", "domains", "?", "we", "pr...
ACL
KACE: Generating Knowledge Aware Contrastive Explanations for Natural Language Inference
In order to better understand the reason behind model behaviors (i.e., making predictions), most recent works have exploited generative models to provide complementary explanations. However, existing approaches in NLP mainly focus on “WHY A” rather than contrastive “WHY A NOT B”, which is shown to be able to better dis...
55c020597c59eefedda6fd9e2cee105a
2021
[ "in order to better understand the reason behind model behaviors ( i . e . , making predictions ) , most recent works have exploited generative models to provide complementary explanations .", "however , existing approaches in nlp mainly focus on “ why a ” rather than contrastive “ why a not b ” , which is shown ...
[ { "event_type": "MDS", "arguments": [ { "text": "rationales", "nugget_type": "FEA", "argument_type": "BaseComponent", "tokens": [ "rationales" ], "offsets": [ 110 ] }, { "text": "key perturbations", ...
[ "in", "order", "to", "better", "understand", "the", "reason", "behind", "model", "behaviors", "(", "i", ".", "e", ".", ",", "making", "predictions", ")", ",", "most", "recent", "works", "have", "exploited", "generative", "models", "to", "provide", "complemen...
ACL
Interpretable Question Answering on Knowledge Bases and Text
Interpretability of machine learning (ML) models becomes more relevant with their increasing adoption. In this work, we address the interpretability of ML based question answering (QA) models on a combination of knowledge bases (KB) and text documents. We adapt post hoc explanation methods such as LIME and input pertur...
5f3fe6d62f46948e89c1fce976c90e53
2019
[ "interpretability of machine learning ( ml ) models becomes more relevant with their increasing adoption .", "in this work , we address the interpretability of ml based question answering ( qa ) models on a combination of knowledge bases ( kb ) and text documents .", "we adapt post hoc explanation methods such ...
[ { "event_type": "ITT", "arguments": [ { "text": "machine learning", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "machine", "learning" ], "offsets": [ 2, 3 ] }, { "text":...
[ "interpretability", "of", "machine", "learning", "(", "ml", ")", "models", "becomes", "more", "relevant", "with", "their", "increasing", "adoption", ".", "in", "this", "work", ",", "we", "address", "the", "interpretability", "of", "ml", "based", "question", "a...
ACL
LexGLUE: A Benchmark Dataset for Legal Language Understanding in English
Laws and their interpretations, legal arguments and agreements are typically expressed in writing, leading to the production of vast corpora of legal text. Their analysis, which is at the center of legal practice, becomes increasingly elaborate as these collections grow in size. Natural language understanding (NLU) tec...
d8695decbe6f2dc8c6329c1a3fb2d28b
2022
[ "laws and their interpretations , legal arguments and agreements are typically expressed in writing , leading to the production of vast corpora of legal text .", "their analysis , which is at the center of legal practice , becomes increasingly elaborate as these collections grow in size .", "natural language un...
[ { "event_type": "ITT", "arguments": [ { "text": "natural language understanding technologies", "nugget_type": "APP", "argument_type": "Target", "tokens": [ "natural", "language", "understanding", "technologies" ], ...
[ "laws", "and", "their", "interpretations", ",", "legal", "arguments", "and", "agreements", "are", "typically", "expressed", "in", "writing", ",", "leading", "to", "the", "production", "of", "vast", "corpora", "of", "legal", "text", ".", "their", "analysis", ",...
ACL
Meta-Reinforced Multi-Domain State Generator for Dialogue Systems
A Dialogue State Tracker (DST) is a core component of a modular task-oriented dialogue system. Tremendous progress has been made in recent years. However, the major challenges remain. The state-of-the-art accuracy for DST is below 50% for a multi-domain dialogue task. A learnable DST for any new domain requires a large...
526c621c88094fc9096182cbc2c18894
2020
[ "a dialogue state tracker ( dst ) is a core component of a modular task - oriented dialogue system .", "tremendous progress has been made in recent years .", "however , the major challenges remain .", "the state - of - the - art accuracy for dst is below 50 % for a multi - domain dialogue task .", "a learna...
[ { "event_type": "ITT", "arguments": [ { "text": "dst", "nugget_type": "MOD", "argument_type": "Target", "tokens": [ "dst" ], "offsets": [ 5 ] } ], "trigger": { "text": "component", "tokens": [ ...
[ "a", "dialogue", "state", "tracker", "(", "dst", ")", "is", "a", "core", "component", "of", "a", "modular", "task", "-", "oriented", "dialogue", "system", ".", "tremendous", "progress", "has", "been", "made", "in", "recent", "years", ".", "however", ",", ...
ACL
Putting Words in Context: LSTM Language Models and Lexical Ambiguity
In neural network models of language, words are commonly represented using context-invariant representations (word embeddings) which are then put in context in the hidden layers. Since words are often ambiguous, representing the contextually relevant information is not trivial. We investigate how an LSTM language model...
cfc27dd3774681d9dc039e3cd6ae0cb7
2019
[ "in neural network models of language , words are commonly represented using context - invariant representations ( word embeddings ) which are then put in context in the hidden layers .", "since words are often ambiguous , representing the contextually relevant information is not trivial .", "we investigate how...
[ { "event_type": "RWF", "arguments": [ { "text": "words", "nugget_type": "FEA", "argument_type": "Concern", "tokens": [ "words" ], "offsets": [ 32 ] } ], "trigger": { "text": "ambiguous", "tokens": [ ...
[ "in", "neural", "network", "models", "of", "language", ",", "words", "are", "commonly", "represented", "using", "context", "-", "invariant", "representations", "(", "word", "embeddings", ")", "which", "are", "then", "put", "in", "context", "in", "the", "hidden...
ACL
Cross-Utterance Conditioned VAE for Non-Autoregressive Text-to-Speech
Modelling prosody variation is critical for synthesizing natural and expressive speech in end-to-end text-to-speech (TTS) systems. In this paper, a cross-utterance conditional VAE (CUC-VAE) is proposed to estimate a posterior probability distribution of the latent prosody features for each phoneme by conditioning on ac...
c403610287629024b466360c32573c34
2022
[ "modelling prosody variation is critical for synthesizing natural and expressive speech in end - to - end text - to - speech ( tts ) systems .", "in this paper , a cross - utterance conditional vae ( cuc - vae ) is proposed to estimate a posterior probability distribution of the latent prosody features for each p...
[ { "event_type": "ITT", "arguments": [ { "text": "natural speech", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "natural", "speech" ], "offsets": [ 7, 10 ] }, { "text": "e...
[ "modelling", "prosody", "variation", "is", "critical", "for", "synthesizing", "natural", "and", "expressive", "speech", "in", "end", "-", "to", "-", "end", "text", "-", "to", "-", "speech", "(", "tts", ")", "systems", ".", "in", "this", "paper", ",", "a"...
ACL
Fluent Response Generation for Conversational Question Answering
Question answering (QA) is an important aspect of open-domain conversational agents, garnering specific research focus in the conversational QA (ConvQA) subtask. One notable limitation of recent ConvQA efforts is the response being answer span extraction from the target corpus, thus ignoring the natural language genera...
46d03dd8c05818a1f86dfba268d10d21
2020
[ "question answering ( qa ) is an important aspect of open - domain conversational agents , garnering specific research focus in the conversational qa ( convqa ) subtask .", "one notable limitation of recent convqa efforts is the response being answer span extraction from the target corpus , thus ignoring the natu...
[ { "event_type": "ITT", "arguments": [ { "text": "question answering", "nugget_type": "TAK", "argument_type": "Target", "tokens": [ "question", "answering" ], "offsets": [ 0, 1 ] } ], "trigger"...
[ "question", "answering", "(", "qa", ")", "is", "an", "important", "aspect", "of", "open", "-", "domain", "conversational", "agents", ",", "garnering", "specific", "research", "focus", "in", "the", "conversational", "qa", "(", "convqa", ")", "subtask", ".", "...
ACL
A Simple Recipe towards Reducing Hallucination in Neural Surface Realisation
Recent neural language generation systems often hallucinate contents (i.e., producing irrelevant or contradicted facts), especially when trained on loosely corresponding pairs of the input structure and text. To mitigate this issue, we propose to integrate a language understanding module for data refinement with self-t...
885726bd294bc32845bb14fbc61ac187
2019
[ "recent neural language generation systems often hallucinate contents ( i . e . , producing irrelevant or contradicted facts ) , especially when trained on loosely corresponding pairs of the input structure and text .", "to mitigate this issue , we propose to integrate a language understanding module for data ref...
[ { "event_type": "RWF", "arguments": [ { "text": "neural language generation systems", "nugget_type": "APP", "argument_type": "Concern", "tokens": [ "neural", "language", "generation", "systems" ], "offsets": [ ...
[ "recent", "neural", "language", "generation", "systems", "often", "hallucinate", "contents", "(", "i", ".", "e", ".", ",", "producing", "irrelevant", "or", "contradicted", "facts", ")", ",", "especially", "when", "trained", "on", "loosely", "corresponding", "pai...
ACL
Skill Induction and Planning with Latent Language
We present a framework for learning hierarchical policies from demonstrations, using sparse natural language annotations to guide the discovery of reusable skills for autonomous decision-making. We formulate a generative model of action sequences in which goals generate sequences of high-level subtask descriptions, and...
ed8890ee09d447ff58d041b96e4349f5
2022
[ "we present a framework for learning hierarchical policies from demonstrations , using sparse natural language annotations to guide the discovery of reusable skills for autonomous decision - making .", "we formulate a generative model of action sequences in which goals generate sequences of high - level subtask d...
[ { "event_type": "PRP", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Proposer", "tokens": [ "we" ], "offsets": [ 0 ] }, { "text": "framework", "nugget_type": "APP", "...
[ "we", "present", "a", "framework", "for", "learning", "hierarchical", "policies", "from", "demonstrations", ",", "using", "sparse", "natural", "language", "annotations", "to", "guide", "the", "discovery", "of", "reusable", "skills", "for", "autonomous", "decision", ...
ACL
Clickbait Spoiling via Question Answering and Passage Retrieval
We introduce and study the task of clickbait spoiling: generating a short text that satisfies the curiosity induced by a clickbait post. Clickbait links to a web page and advertises its contents by arousing curiosity instead of providing an informative summary. Our contributions are approaches to classify the type of s...
187cbae28caa1795661c772bf4d5812a
2022
[ "we introduce and study the task of clickbait spoiling : generating a short text that satisfies the curiosity induced by a clickbait post .", "clickbait links to a web page and advertises its contents by arousing curiosity instead of providing an informative summary .", "our contributions are approaches to clas...
[ { "event_type": "WKS", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Researcher", "tokens": [ "we" ], "offsets": [ 0 ] }, { "text": "clickbait spoiling", "nugget_type": "TAK"...
[ "we", "introduce", "and", "study", "the", "task", "of", "clickbait", "spoiling", ":", "generating", "a", "short", "text", "that", "satisfies", "the", "curiosity", "induced", "by", "a", "clickbait", "post", ".", "clickbait", "links", "to", "a", "web", "page",...
ACL
Multi-Modal Sarcasm Detection in Twitter with Hierarchical Fusion Model
Sarcasm is a subtle form of language in which people express the opposite of what is implied. Previous works of sarcasm detection focused on texts. However, more and more social media platforms like Twitter allow users to create multi-modal messages, including texts, images, and videos. It is insufficient to detect sar...
4741166aa55362e277b155f6d6eef73c
2019
[ "sarcasm is a subtle form of language in which people express the opposite of what is implied .", "previous works of sarcasm detection focused on texts .", "however , more and more social media platforms like twitter allow users to create multi - modal messages , including texts , images , and videos .", "it ...
[ { "event_type": "RWF", "arguments": [ { "text": "detect", "nugget_type": "E-PUR", "argument_type": "Target", "tokens": [ "detect" ], "offsets": [ 58 ] }, { "text": "from multi - model messages", "nu...
[ "sarcasm", "is", "a", "subtle", "form", "of", "language", "in", "which", "people", "express", "the", "opposite", "of", "what", "is", "implied", ".", "previous", "works", "of", "sarcasm", "detection", "focused", "on", "texts", ".", "however", ",", "more", "...
ACL
Scoring Sentence Singletons and Pairs for Abstractive Summarization
When writing a summary, humans tend to choose content from one or two sentences and merge them into a single summary sentence. However, the mechanisms behind the selection of one or multiple source sentences remain poorly understood. Sentence fusion assumes multi-sentence input; yet sentence selection methods only work...
db59e4873b2bef1cc7b9e4b6025c3bbf
2019
[ "when writing a summary , humans tend to choose content from one or two sentences and merge them into a single summary sentence .", "however , the mechanisms behind the selection of one or multiple source sentences remain poorly understood .", "sentence fusion assumes multi - sentence input ; yet sentence selec...
[ { "event_type": "RWS", "arguments": [ { "text": "when writing a summary", "nugget_type": "LIM", "argument_type": "Condition", "tokens": [ "when", "writing", "a", "summary" ], "offsets": [ 0, 1, ...
[ "when", "writing", "a", "summary", ",", "humans", "tend", "to", "choose", "content", "from", "one", "or", "two", "sentences", "and", "merge", "them", "into", "a", "single", "summary", "sentence", ".", "however", ",", "the", "mechanisms", "behind", "the", "...
ACL
Understanding and Improving Sequence-to-Sequence Pretraining for Neural Machine Translation
In this paper, we present a substantial step in better understanding the SOTA sequence-to-sequence (Seq2Seq) pretraining for neural machine translation (NMT). We focus on studying the impact of the jointly pretrained decoder, which is the main difference between Seq2Seq pretraining and previous encoder-based pretrainin...
20e747a6e242b62e3683ae8cf02bbe2f
2022
[ "in this paper , we present a substantial step in better understanding the sota sequence - to - sequence ( seq2seq ) pretraining for neural machine translation ( nmt ) .", "we focus on studying the impact of the jointly pretrained decoder , which is the main difference between seq2seq pretraining and previous enc...
[ { "event_type": "FIN", "arguments": [ { "text": "limit", "nugget_type": "E-FAC", "argument_type": "Content", "tokens": [ "limit" ], "offsets": [ 116 ] }, { "text": "induce", "nugget_type": "E-FAC", ...
[ "in", "this", "paper", ",", "we", "present", "a", "substantial", "step", "in", "better", "understanding", "the", "sota", "sequence", "-", "to", "-", "sequence", "(", "seq2seq", ")", "pretraining", "for", "neural", "machine", "translation", "(", "nmt", ")", ...
ACL
Cognitive Graph for Multi-Hop Reading Comprehension at Scale
We propose a new CogQA framework for multi-hop reading comprehension question answering in web-scale documents. Founded on the dual process theory in cognitive science, the framework gradually builds a cognitive graph in an iterative process by coordinating an implicit extraction module (System 1) and an explicit reaso...
079e0a80ae804548eacfbabe80312654
2019
[ "we propose a new cogqa framework for multi - hop reading comprehension question answering in web - scale documents .", "founded on the dual process theory in cognitive science , the framework gradually builds a cognitive graph in an iterative process by coordinating an implicit extraction module ( system 1 ) and...
[ { "event_type": "PRP", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Proposer", "tokens": [ "we" ], "offsets": [ 0 ] }, { "text": "cogqa framework", "nugget_type": "APP", ...
[ "we", "propose", "a", "new", "cogqa", "framework", "for", "multi", "-", "hop", "reading", "comprehension", "question", "answering", "in", "web", "-", "scale", "documents", ".", "founded", "on", "the", "dual", "process", "theory", "in", "cognitive", "science", ...
ACL
Efficient Unsupervised Sentence Compression by Fine-tuning Transformers with Reinforcement Learning
Sentence compression reduces the length of text by removing non-essential content while preserving important facts and grammaticality. Unsupervised objective driven methods for sentence compression can be used to create customized models without the need for ground-truth training data, while allowing flexibility in the...
cc10fc7b1ae1cc201db422fe32a0dc6c
2022
[ "sentence compression reduces the length of text by removing non - essential content while preserving important facts and grammaticality .", "unsupervised objective driven methods for sentence compression can be used to create customized models without the need for ground - truth training data , while allowing fl...
[ { "event_type": "RWF", "arguments": [ { "text": "guided search", "nugget_type": "APP", "argument_type": "Concern", "tokens": [ "guided", "search" ], "offsets": [ 77, 78 ] }, { "text": "e...
[ "sentence", "compression", "reduces", "the", "length", "of", "text", "by", "removing", "non", "-", "essential", "content", "while", "preserving", "important", "facts", "and", "grammaticality", ".", "unsupervised", "objective", "driven", "methods", "for", "sentence",...
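Each record above pairs a flat token list with event annotations whose `offsets` index into that list. A minimal sketch of how an argument's surface text can be recovered from the offsets, assuming the field names visible in the preview (`events`, `arguments`, `offsets`, `trigger`) and using a simplified stand-in record rather than a real row:

```python
# Recover an event argument's surface text from a record's token list and
# the argument's offset indices. The record below is a hypothetical,
# abbreviated stand-in mirroring the preview's field names; real rows carry
# additional fields (venue, abstract, doc_id, sentences, etc.).

def argument_text(doc_tokens, offsets):
    """Join the tokens selected by an argument's offset indices."""
    return " ".join(doc_tokens[i] for i in offsets)

record = {
    "document": ["we", "propose", "a", "new", "cogqa", "framework"],
    "events": [
        {
            "event_type": "PRP",
            "arguments": [
                {"text": "we", "argument_type": "Proposer", "offsets": [0]},
                {"text": "cogqa framework", "argument_type": "Content",
                 "offsets": [4, 5]},
            ],
        }
    ],
}

# Each argument's stored text should match the tokens its offsets select.
for event in record["events"]:
    for arg in event["arguments"]:
        assert argument_text(record["document"], arg["offsets"]) == arg["text"]
```

Note that offsets need not be contiguous (e.g. `[7, 10]` in one record above skips intervening tokens), which is why the join iterates over the index list rather than slicing a range.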