| venue (string, 1 class) | title (string, 18-162 chars) | abstract (string, 252-1.89k chars) | doc_id (string, 32 chars) | publication_year (int64) | sentences (list, 1-13 items) | events (list, 1-24 items) | document (list, 50-348 tokens) |
|---|---|---|---|---|---|---|---|
ACL | Few-Shot Question Answering by Pretraining Span Selection | In several question answering benchmarks, pretrained models have reached human parity through fine-tuning on an order of 100,000 annotated questions and answers. We explore the more realistic few-shot setting, where only a few hundred training examples are available, and observe that standard models perform poorly, hig... | 843121db06733dcbe4b459e2183ec94b | 2021 | [
"in several question answering benchmarks , pretrained models have reached human parity through fine - tuning on an order of 100 , 000 annotated questions and answers .",
"we explore the more realistic few - shot setting , where only a few hundred training examples are available , and observe that standard models... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "pretrained models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"pretrained",
"models"
],
"offsets": [
6,
7
]
}
],
"trigger": ... | [
"in",
"several",
"question",
"answering",
"benchmarks",
",",
"pretrained",
"models",
"have",
"reached",
"human",
"parity",
"through",
"fine",
"-",
"tuning",
"on",
"an",
"order",
"of",
"100",
",",
"000",
"annotated",
"questions",
"and",
"answers",
".",
"we",
... |
ACL | Cross-Lingual Phrase Retrieval | Cross-lingual retrieval aims to retrieve relevant text across languages. Current methods typically achieve cross-lingual retrieval by learning language-agnostic text representations in word or sentence level. However, how to learn phrase representations for cross-lingual phrase retrieval is still an open problem. In th... | 6b36aac8b6fb413c7f6d77e1aad2e872 | 2022 | [
"cross - lingual retrieval aims to retrieve relevant text across languages .",
"current methods typically achieve cross - lingual retrieval by learning language - agnostic text representations in word or sentence level .",
"however , how to learn phrase representations for cross - lingual phrase retrieval is st... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "cross - lingual retrieval",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"cross",
"-",
"lingual",
"retrieval"
],
"offsets": [
0,
... | [
"cross",
"-",
"lingual",
"retrieval",
"aims",
"to",
"retrieve",
"relevant",
"text",
"across",
"languages",
".",
"current",
"methods",
"typically",
"achieve",
"cross",
"-",
"lingual",
"retrieval",
"by",
"learning",
"language",
"-",
"agnostic",
"text",
"representati... |
ACL | Exposing the limits of Zero-shot Cross-lingual Hate Speech Detection | Reducing and counter-acting hate speech on Social Media is a significant concern. Most of the proposed automatic methods are conducted exclusively on English and very few consistently labeled, non-English resources have been proposed. Learning to detect hate speech on English and transferring to unseen languages seems ... | 67cd94de1749af7c53b535135272fbdc | 2021 | [
"reducing and counter - acting hate speech on social media is a significant concern .",
"most of the proposed automatic methods are conducted exclusively on english and very few consistently labeled , non - english resources have been proposed .",
"learning to detect hate speech on english and transferring to u... | [
{
"event_type": "ITT",
"arguments": [],
"trigger": {
"text": "significant concern",
"tokens": [
"significant",
"concern"
],
"offsets": [
12,
13
]
}
},
{
"event_type": "RWF",
"arguments": [
{
"text": "exclusively ... | [
"reducing",
"and",
"counter",
"-",
"acting",
"hate",
"speech",
"on",
"social",
"media",
"is",
"a",
"significant",
"concern",
".",
"most",
"of",
"the",
"proposed",
"automatic",
"methods",
"are",
"conducted",
"exclusively",
"on",
"english",
"and",
"very",
"few",... |
ACL | SemFace: Pre-training Encoder and Decoder with a Semantic Interface for Neural Machine Translation | While pre-training techniques are working very well in natural language processing, how to pre-train a decoder and effectively use it for neural machine translation (NMT) still remains a tricky issue. The main reason is that the cross-attention module between the encoder and decoder cannot be pre-trained, and the combi... | 162126c87b2f21aad549d0660c34191b | 2021 | [
"while pre - training techniques are working very well in natural language processing , how to pre - train a decoder and effectively use it for neural machine translation ( nmt ) still remains a tricky issue .",
"the main reason is that the cross - attention module between the encoder and decoder cannot be pre - ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural machine translation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"neural",
"machine",
"translation"
],
"offsets": [
26,
27,
... | [
"while",
"pre",
"-",
"training",
"techniques",
"are",
"working",
"very",
"well",
"in",
"natural",
"language",
"processing",
",",
"how",
"to",
"pre",
"-",
"train",
"a",
"decoder",
"and",
"effectively",
"use",
"it",
"for",
"neural",
"machine",
"translation",
"... |
ACL | Towards Faithful Neural Table-to-Text Generation with Content-Matching Constraints | Text generation from a knowledge base aims to translate knowledge triples to natural language descriptions. Most existing methods ignore the faithfulness between a generated text description and the original table, leading to generated information that goes beyond the content of the table. In this paper, for the first ... | 0ed552fa5b3749599a886059a275509e | 2020 | [
"text generation from a knowledge base aims to translate knowledge triples to natural language descriptions .",
"most existing methods ignore the faithfulness between a generated text description and the original table , leading to generated information that goes beyond the content of the table .",
"in this pap... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "text generation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"text",
"generation"
],
"offsets": [
0,
1
]
},
{
"text": "... | [
"text",
"generation",
"from",
"a",
"knowledge",
"base",
"aims",
"to",
"translate",
"knowledge",
"triples",
"to",
"natural",
"language",
"descriptions",
".",
"most",
"existing",
"methods",
"ignore",
"the",
"faithfulness",
"between",
"a",
"generated",
"text",
"descr... |
ACL | Domain Adaptation of Neural Machine Translation by Lexicon Induction | It has been previously noted that neural machine translation (NMT) is very sensitive to domain shift. In this paper, we argue that this is a dual effect of the highly lexicalized nature of NMT, resulting in failure for sentences with large numbers of unknown words, and lack of supervision for domain-specific words. To ... | dd9b7098afaad11d22d44d3cd8aacffc | 2019 | [
"it has been previously noted that neural machine translation ( nmt ) is very sensitive to domain shift .",
"in this paper , we argue that this is a dual effect of the highly lexicalized nature of nmt , resulting in failure for sentences with large numbers of unknown words , and lack of supervision for domain - s... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural machine translation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"neural",
"machine",
"translation"
],
"offsets": [
6,
7,
... | [
"it",
"has",
"been",
"previously",
"noted",
"that",
"neural",
"machine",
"translation",
"(",
"nmt",
")",
"is",
"very",
"sensitive",
"to",
"domain",
"shift",
".",
"in",
"this",
"paper",
",",
"we",
"argue",
"that",
"this",
"is",
"a",
"dual",
"effect",
"of"... |
ACL | Life after BERT: What do Other Muppets Understand about Language? | Existing pre-trained transformer analysis works usually focus only on one or two model families at a time, overlooking the variability of the architecture and pre-training objectives. In our work, we utilize the oLMpics benchmark and psycholinguistic probing datasets for a diverse set of 29 models including T5, BART,... | 4a2464ac871e0f3ae4ab9dc6237bec57 | 2022 | [
"existing pre - trained transformer analysis works usually focus only on one or two model families at a time , overlooking the variability of the architecture and pre - training objectives .",
"in our work , we utilize the olmpics bench - mark and psycholinguistic probing datasets for a diverse set of 29 models i... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "existing pre - trained transformer analysis works",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"existing",
"pre",
"-",
"trained",
"transformer",
... | [
"existing",
"pre",
"-",
"trained",
"transformer",
"analysis",
"works",
"usually",
"focus",
"only",
"on",
"one",
"or",
"two",
"model",
"families",
"at",
"a",
"time",
",",
"overlooking",
"the",
"variability",
"of",
"the",
"architecture",
"and",
"pre",
"-",
"tr... |
ACL | Handling Extreme Class Imbalance in Technical Logbook Datasets | Technical logbooks are a challenging and under-explored text type in automated event identification. These texts are typically short and written in non-standard yet technical language, posing challenges to off-the-shelf NLP pipelines. The granularity of issue types described in these datasets additionally leads to clas... | b941427590b2bcbc3de39869f909b873 | 2021 | [
"technical logbooks are a challenging and under - explored text type in automated event identification .",
"these texts are typically short and written in non - standard yet technical language , posing challenges to off - the - shelf nlp pipelines .",
"the granularity of issue types described in these datasets ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "technical logbooks",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"technical",
"logbooks"
],
"offsets": [
0,
1
]
},
{
"te... | [
"technical",
"logbooks",
"are",
"a",
"challenging",
"and",
"under",
"-",
"explored",
"text",
"type",
"in",
"automated",
"event",
"identification",
".",
"these",
"texts",
"are",
"typically",
"short",
"and",
"written",
"in",
"non",
"-",
"standard",
"yet",
"techn... |
ACL | Attention-based Conditioning Methods for External Knowledge Integration | In this paper, we present a novel approach for incorporating external knowledge in Recurrent Neural Networks (RNNs). We propose the integration of lexicon features into the self-attention mechanism of RNN-based architectures. This form of conditioning on the attention distribution, enforces the contribution of the most... | cfefbc9c196ec8a8c1ef99fab6b50521 | 2019 | [
"in this paper , we present a novel approach for incorporating external knowledge in recurrent neural networks ( rnns ) .",
"we propose the integration of lexicon features into the self - attention mechanism of rnn - based architectures .",
"this form of conditioning on the attention distribution , enforces the... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
4
]
},
{
"text": "approach",
"nugget_type": "APP",
"a... | [
"in",
"this",
"paper",
",",
"we",
"present",
"a",
"novel",
"approach",
"for",
"incorporating",
"external",
"knowledge",
"in",
"recurrent",
"neural",
"networks",
"(",
"rnns",
")",
".",
"we",
"propose",
"the",
"integration",
"of",
"lexicon",
"features",
"into",
... |
ACL | Embedding Imputation with Grounded Language Information | Due to the ubiquitous use of embeddings as input representations for a wide range of natural language tasks, imputation of embeddings for rare and unseen words is a critical problem in language processing. Embedding imputation involves learning representations for rare or unseen words during the training of an embeddin... | ebef3c4f011ed57d1703844cbef6bd36 | 2019 | [
"due to the ubiquitous use of embeddings as input representations for a wide range of natural language tasks , imputation of embeddings for rare and unseen words is a critical problem in language processing .",
"embedding imputation involves learning representations for rare or unseen words during the training of... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "imputation of embeddings",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"imputation",
"of",
"embeddings"
],
"offsets": [
19,
20,
... | [
"due",
"to",
"the",
"ubiquitous",
"use",
"of",
"embeddings",
"as",
"input",
"representations",
"for",
"a",
"wide",
"range",
"of",
"natural",
"language",
"tasks",
",",
"imputation",
"of",
"embeddings",
"for",
"rare",
"and",
"unseen",
"words",
"is",
"a",
"crit... |
ACL | Cross-Lingual Ability of Multilingual Masked Language Models: A Study of Language Structure | Multilingual pre-trained language models, such as mBERT and XLM-R, have shown impressive cross-lingual ability. Surprisingly, both of them use multilingual masked language model (MLM) without any cross-lingual supervision or aligned data. Despite the encouraging results, we still lack a clear understanding of why cross... | f923d76f68ba76d7b0981f72b024bda8 | 2022 | [
"multilingual pre - trained language models , such as mbert and xlm - r , have shown impressive cross - lingual ability .",
"surprisingly , both of them use multilingual masked language model ( mlm ) without any cross - lingual supervision or aligned data .",
"despite the encouraging results , we still lack a c... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "multilingual pre - trained language models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"multilingual",
"pre",
"-",
"trained",
"language",
"mod... | [
"multilingual",
"pre",
"-",
"trained",
"language",
"models",
",",
"such",
"as",
"mbert",
"and",
"xlm",
"-",
"r",
",",
"have",
"shown",
"impressive",
"cross",
"-",
"lingual",
"ability",
".",
"surprisingly",
",",
"both",
"of",
"them",
"use",
"multilingual",
... |
ACL | Phone-ing it in: Towards Flexible Multi-Modal Language Model Training by Phonetic Representations of Data | Multi-modal techniques offer significant untapped potential to unlock improved NLP technology for local languages. However, many advances in language model pre-training are focused on text, a fact that only increases systematic inequalities in the performance of NLP tasks across the world’s languages. In this work, we ... | b1fa979c42d81c6ed7f1e18afa711dac | 2022 | [
"multi - modal techniques offer significant untapped potential to unlock improved nlp technology for local languages .",
"however , many advances in language model pre - training are focused on text , a fact that only increases systematic inequalities in the performance of nlp tasks across the world ’ s languages... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "multi - modal techniques",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"multi",
"-",
"modal",
"techniques"
],
"offsets": [
0,
1,... | [
"multi",
"-",
"modal",
"techniques",
"offer",
"significant",
"untapped",
"potential",
"to",
"unlock",
"improved",
"nlp",
"technology",
"for",
"local",
"languages",
".",
"however",
",",
"many",
"advances",
"in",
"language",
"model",
"pre",
"-",
"training",
"are",... |
ACL | Generating Diverse and Consistent QA pairs from Contexts with Information-Maximizing Hierarchical Conditional VAEs | One of the most crucial challenges in question answering (QA) is the scarcity of labeled data, since it is costly to obtain question-answer (QA) pairs for a target text domain with human annotation. An alternative approach to tackle the problem is to use automatically generated QA pairs from either the problem context ... | e9c844c7ecbe5eb84d52cc8acb6ffc83 | 2020 | [
"one of the most crucial challenges in question answering ( qa ) is the scarcity of labeled data , since it is costly to obtain question - answer ( qa ) pairs for a target text domain with human annotation .",
"an alternative approach to tackle the problem is to use automatically generated qa pairs from either th... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "labeled data",
"nugget_type": "DST",
"argument_type": "Concern",
"tokens": [
"labeled",
"data"
],
"offsets": [
16,
17
]
},
{
"text": "sca... | [
"one",
"of",
"the",
"most",
"crucial",
"challenges",
"in",
"question",
"answering",
"(",
"qa",
")",
"is",
"the",
"scarcity",
"of",
"labeled",
"data",
",",
"since",
"it",
"is",
"costly",
"to",
"obtain",
"question",
"-",
"answer",
"(",
"qa",
")",
"pairs",
... |
ACL | Discovering Dialog Structure Graph for Coherent Dialog Generation | Learning discrete dialog structure graph from human-human dialogs yields basic insights into the structure of conversation, and also provides background knowledge to facilitate dialog generation. However, this problem is less studied in open-domain dialogue. In this paper, we conduct unsupervised discovery of discrete ... | 963c44829865bdfdacc59966886b66cc | 2021 | [
"learning discrete dialog structure graph from human - human dialogs yields basic insights into the structure of conversation , and also provides background knowledge to facilitate dialog generation .",
"however , this problem is less studied in open - domain dialogue .",
"in this paper , we conduct unsupervise... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "chitchat corpora",
"nugget_type": "DST",
"argument_type": "Dataset",
"tokens": [
"chitchat",
"corpora"
],
"offsets": [
55,
56
]
},
{
"tex... | [
"learning",
"discrete",
"dialog",
"structure",
"graph",
"from",
"human",
"-",
"human",
"dialogs",
"yields",
"basic",
"insights",
"into",
"the",
"structure",
"of",
"conversation",
",",
"and",
"also",
"provides",
"background",
"knowledge",
"to",
"facilitate",
"dialo... |
ACL | Leveraging Graph to Improve Abstractive Multi-Document Summarization | Graphs that capture relations between textual units have great benefits for detecting salient information from multiple documents and generating overall coherent summaries. In this paper, we develop a neural abstractive multi-document summarization (MDS) model which can leverage well-known graph representations of docu... | fb7697c0f1f638875b431a8988006a41 | 2020 | [
"graphs that capture relations between textual units have great benefits for detecting salient information from multiple documents and generating overall coherent summaries .",
"in this paper , we develop a neural abstractive multi - document summarization ( mds ) model which can leverage well - known graph repre... | [
{
"event_type": "MDS",
"arguments": [
{
"text": "capture",
"nugget_type": "E-PUR",
"argument_type": "Target",
"tokens": [
"capture"
],
"offsets": [
80
]
},
{
"text": "graphs",
"nugget_type": "FEA",
... | [
"graphs",
"that",
"capture",
"relations",
"between",
"textual",
"units",
"have",
"great",
"benefits",
"for",
"detecting",
"salient",
"information",
"from",
"multiple",
"documents",
"and",
"generating",
"overall",
"coherent",
"summaries",
".",
"in",
"this",
"paper",
... |
ACL | Logical Natural Language Generation from Open-Domain Tables | Neural natural language generation (NLG) models have recently shown remarkable progress in fluency and coherence. However, existing studies on neural NLG are primarily focused on surface-level realizations with limited emphasis on logical inference, an important aspect of human thinking and language. In this paper, we ... | 12db70457a8ed10d9cce33871faa69bc | 2020 | [
"neural natural language generation ( nlg ) models have recently shown remarkable progress in fluency and coherence .",
"however , existing studies on neural nlg are primarily focused on surface - level realizations with limited emphasis on logical inference , an important aspect of human thinking and language ."... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural natural language generation models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"neural",
"natural",
"language",
"generation",
"models"
],... | [
"neural",
"natural",
"language",
"generation",
"(",
"nlg",
")",
"models",
"have",
"recently",
"shown",
"remarkable",
"progress",
"in",
"fluency",
"and",
"coherence",
".",
"however",
",",
"existing",
"studies",
"on",
"neural",
"nlg",
"are",
"primarily",
"focused"... |
ACL | Noisy Channel Language Model Prompting for Few-Shot Text Classification | We introduce a noisy channel approach for language model prompting in few-shot text classification. Instead of computing the likelihood of the label given the input (referred as direct models), channel models compute the conditional probability of the input given the label, and are thereby required to explain every wor... | 5bdbfeff3d689a14fd927431638503a0 | 2022 | [
"we introduce a noisy channel approach for language model prompting in few - shot text classification .",
"instead of computing the likelihood of the label given the input ( referred as direct models ) , channel models compute the conditional probability of the input given the label , and are thereby required to ... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "noisy channel approach for language model prompting... | [
"we",
"introduce",
"a",
"noisy",
"channel",
"approach",
"for",
"language",
"model",
"prompting",
"in",
"few",
"-",
"shot",
"text",
"classification",
".",
"instead",
"of",
"computing",
"the",
"likelihood",
"of",
"the",
"label",
"given",
"the",
"input",
"(",
"... |
ACL | Screenplay Summarization Using Latent Narrative Structure | Most general-purpose extractive summarization models are trained on news articles, which are short and present all important information upfront. As a result, such models are biased on position and often perform a smart selection of sentences from the beginning of the document. When summarizing long narratives, which h... | e4d5952b8b94eb490056abdb7d65243e | 2020 | [
"most general - purpose extractive summarization models are trained on news articles , which are short and present all important information upfront .",
"as a result , such models are biased on position and often perform a smart selection of sentences from the beginning of the document .",
"when summarizing lon... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "general - purpose extractive summarization models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"general",
"-",
"purpose",
"extractive",
"summarization",
... | [
"most",
"general",
"-",
"purpose",
"extractive",
"summarization",
"models",
"are",
"trained",
"on",
"news",
"articles",
",",
"which",
"are",
"short",
"and",
"present",
"all",
"important",
"information",
"upfront",
".",
"as",
"a",
"result",
",",
"such",
"models... |
ACL | Metaphors in Pre-Trained Language Models: Probing and Generalization Across Datasets and Languages | Human languages are full of metaphorical expressions. Metaphors help people understand the world by connecting new concepts and domains to more familiar ones. Large pre-trained language models (PLMs) are therefore assumed to encode metaphorical knowledge useful for NLP systems. In this paper, we investigate this hypoth... | 6bc61ccf558b0ee4eeacf9c2e561e8a2 | 2022 | [
"human languages are full of metaphorical expressions .",
"metaphors help people understand the world by connecting new concepts and domains to more familiar ones .",
"large pre - trained language models ( plms ) are therefore assumed to encode metaphorical knowledge useful for nlp systems .",
"in this paper ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "large pre - trained language models",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"large",
"pre",
"-",
"trained",
"language",
"models"
]... | [
"human",
"languages",
"are",
"full",
"of",
"metaphorical",
"expressions",
".",
"metaphors",
"help",
"people",
"understand",
"the",
"world",
"by",
"connecting",
"new",
"concepts",
"and",
"domains",
"to",
"more",
"familiar",
"ones",
".",
"large",
"pre",
"-",
"tr... |
ACL | Characterizing Idioms: Conventionality and Contingency | Idioms are unlike most phrases in two important ways. First, words in an idiom have non-canonical meanings. Second, the non-canonical meanings of words in an idiom are contingent on the presence of other words in the idiom. Linguistic theories differ on whether these properties depend on one another, as well as whether... | 09bf7e760e0c9c7e3a522200ae46d5d2 | 2022 | [
"idioms are unlike most phrases in two important ways .",
"first , words in an idiom have non - canonical meanings .",
"second , the non - canonical meanings of words in an idiom are contingent on the presence of other words in the idiom .",
"linguistic theories differ on whether these properties depend on on... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "idioms",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"idioms"
],
"offsets": [
0
]
}
],
"trigger": {
"text": "unlike",
"tokens": [
... | [
"idioms",
"are",
"unlike",
"most",
"phrases",
"in",
"two",
"important",
"ways",
".",
"first",
",",
"words",
"in",
"an",
"idiom",
"have",
"non",
"-",
"canonical",
"meanings",
".",
"second",
",",
"the",
"non",
"-",
"canonical",
"meanings",
"of",
"words",
"... |
ACL | RELiC: Retrieving Evidence for Literary Claims | Humanities scholars commonly provide evidence for claims that they make about a work of literature (e.g., a novel) in the form of quotations from the work. We collect a large-scale dataset (RELiC) of 78K literary quotations and surrounding critical analysis and use it to formulate the novel task of literary evidence re... | bec09956df96433462f23a73b769f659 | 2022 | [
"humanities scholars commonly provide evidence for claims that they make about a work of literature ( e . g . , a novel ) in the form of quotations from the work .",
"we collect a large - scale dataset ( relic ) of 78k literary quotations and surrounding critical analysis and use it to formulate the novel task of... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
33
]
},
{
"text": "dataset ( relic ) of 78k literary quotations",
... | [
"humanities",
"scholars",
"commonly",
"provide",
"evidence",
"for",
"claims",
"that",
"they",
"make",
"about",
"a",
"work",
"of",
"literature",
"(",
"e",
".",
"g",
".",
",",
"a",
"novel",
")",
"in",
"the",
"form",
"of",
"quotations",
"from",
"the",
"work... |
ACL | Learning to Abstract for Memory-augmented Conversational Response Generation | Neural generative models for open-domain chit-chat conversations have become an active area of research in recent years. A critical issue with most existing generative models is that the generated responses lack informativeness and diversity. A few researchers attempt to leverage the results of retrieval models to stre... | f943e096dbc382527c16c60702e2e2ac | 2019 | [
"neural generative models for open - domain chit - chat conversations have become an active area of research in recent years .",
"a critical issue with most existing generative models is that the generated responses lack informativeness and diversity .",
"a few researchers attempt to leverage the results of ret... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "open - domain chit - chat conversations",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"open",
"-",
"domain",
"chit",
"-",
"chat",
"con... | [
"neural",
"generative",
"models",
"for",
"open",
"-",
"domain",
"chit",
"-",
"chat",
"conversations",
"have",
"become",
"an",
"active",
"area",
"of",
"research",
"in",
"recent",
"years",
".",
"a",
"critical",
"issue",
"with",
"most",
"existing",
"generative",
... |
ACL | Enabling Language Models to Fill in the Blanks | We present a simple approach for text infilling, the task of predicting missing spans of text at any position in a document. While infilling could enable rich functionality especially for writing assistance tools, more attention has been devoted to language modeling—a special case of infilling where text is predicted a... | c7177c3c640db68978503aeaa263714f | 2020 | [
"we present a simple approach for text infilling , the task of predicting missing spans of text at any position in a document .",
"while infilling could enable rich functionality especially for writing assistance tools , more attention has been devoted to language modeling — a special case of infilling where text... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "simple approach",
"nugget_type": "APP",
... | [
"we",
"present",
"a",
"simple",
"approach",
"for",
"text",
"infilling",
",",
"the",
"task",
"of",
"predicting",
"missing",
"spans",
"of",
"text",
"at",
"any",
"position",
"in",
"a",
"document",
".",
"while",
"infilling",
"could",
"enable",
"rich",
"functiona... |
ACL | A Call for More Rigor in Unsupervised Cross-lingual Learning | We review motivations, definition, approaches, and methodology for unsupervised cross-lingual learning and call for a more rigorous position in each of them. An existing rationale for such research is based on the lack of parallel data for many of the world’s languages. However, we argue that a scenario without any par... | 0802997a85124d8b41037921c2cf3f13 | 2020 | [
"we review motivations , definition , approaches , and methodology for unsupervised cross - lingual learning and call for a more rigorous position in each of them .",
"an existing rationale for such research is based on the lack of parallel data for many of the world ’ s languages .",
"however , we argue that a... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "unsupervised cross - lingual learning",
"... | [
"we",
"review",
"motivations",
",",
"definition",
",",
"approaches",
",",
"and",
"methodology",
"for",
"unsupervised",
"cross",
"-",
"lingual",
"learning",
"and",
"call",
"for",
"a",
"more",
"rigorous",
"position",
"in",
"each",
"of",
"them",
".",
"an",
"exi... |
ACL | TAN-NTM: Topic Attention Networks for Neural Topic Modeling | Topic models have been widely used to learn text representations and gain insight into document corpora. To perform topic discovery, most existing neural models either take document bag-of-words (BoW) or sequence of tokens as input followed by variational inference and BoW reconstruction to learn topic-word distributio... | 228bc1027757d17a10e0348598b88e46 | 2021 | [
"topic models have been widely used to learn text representations and gain insight into document corpora .",
"to perform topic discovery , most existing neural models either take document bag - of - words ( bow ) or sequence of tokens as input followed by variational inference and bow reconstruction to learn topi... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "topic models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"topic",
"models"
],
"offsets": [
0,
1
]
}
],
"trigger": {
"t... | [
"topic",
"models",
"have",
"been",
"widely",
"used",
"to",
"learn",
"text",
"representations",
"and",
"gain",
"insight",
"into",
"document",
"corpora",
".",
"to",
"perform",
"topic",
"discovery",
",",
"most",
"existing",
"neural",
"models",
"either",
"take",
"... |
ACL | Variance of Average Surprisal: A Better Predictor for Quality of Grammar from Unsupervised PCFG Induction | In unsupervised grammar induction, data likelihood is known to be only weakly correlated with parsing accuracy, especially at convergence after multiple runs. In order to find a better indicator for quality of induced grammars, this paper correlates several linguistically- and psycholinguistically-motivated predictors ... | 41a087e8cc8dc99faf7937e2f5a312f8 | 2019 | [
"in unsupervised grammar induction , data likelihood is known to be only weakly correlated with parsing accuracy , especially at convergence after multiple runs .",
"in order to find a better indicator for quality of induced grammars , this paper correlates several linguistically - and psycholinguistically - moti... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "data likelihood",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"data",
"likelihood"
],
"offsets": [
5,
6
]
},
{
"text": ... | [
"in",
"unsupervised",
"grammar",
"induction",
",",
"data",
"likelihood",
"is",
"known",
"to",
"be",
"only",
"weakly",
"correlated",
"with",
"parsing",
"accuracy",
",",
"especially",
"at",
"convergence",
"after",
"multiple",
"runs",
".",
"in",
"order",
"to",
"f... |
ACL | Estimating predictive uncertainty for rumour verification models | The inability to correctly resolve rumours circulating online can have harmful real-world consequences. We present a method for incorporating model and data uncertainty estimates into natural language processing models for automatic rumour verification. We show that these estimates can be used to filter out model predi... | 6a0607021d5371480f1b0b8f34b23de1 | 2,020 | [
"the inability to correctly resolve rumours circulating online can have harmful real - world consequences .",
"we present a method for incorporating model and data uncertainty estimates into natural language processing models for automatic rumour verification .",
"we show that these estimates can be used to fil... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "rumours circulating",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"rumours",
"circulating"
],
"offsets": [
5,
6
]
}
],
"trigge... | [
"the",
"inability",
"to",
"correctly",
"resolve",
"rumours",
"circulating",
"online",
"can",
"have",
"harmful",
"real",
"-",
"world",
"consequences",
".",
"we",
"present",
"a",
"method",
"for",
"incorporating",
"model",
"and",
"data",
"uncertainty",
"estimates",
... |
ACL | Deep Context- and Relation-Aware Learning for Aspect-based Sentiment Analysis | Existing works for aspect-based sentiment analysis (ABSA) have adopted a unified approach, which allows the interactive relations among subtasks. However, we observe that these methods tend to predict polarities based on the literal meaning of aspect and opinion terms and mainly consider relations implicitly among subt... | 2d6160a9da933ee57d21199f3cd91d5c | 2,021 | [
"existing works for aspect - based sentiment analysis ( absa ) have adopted a unified approach , which allows the interactive relations among subtasks .",
"however , we observe that these methods tend to predict polarities based on the literal meaning of aspect and opinion terms and mainly consider relations impl... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "aspect - based sentiment analysis",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"aspect",
"-",
"based",
"sentiment",
"analysis"
],
"offset... | [
"existing",
"works",
"for",
"aspect",
"-",
"based",
"sentiment",
"analysis",
"(",
"absa",
")",
"have",
"adopted",
"a",
"unified",
"approach",
",",
"which",
"allows",
"the",
"interactive",
"relations",
"among",
"subtasks",
".",
"however",
",",
"we",
"observe",
... |
ACL | SAFER: A Structure-free Approach for Certified Robustness to Adversarial Word Substitutions | State-of-the-art NLP models can often be fooled by human-unaware transformations such as synonymous word substitution. For security reasons, it is of critical importance to develop models with certified robustness that can provably guarantee that the prediction is can not be altered by any possible synonymous word subs... | d938d1ae0f8f3e3f4fdebf91f88ce9a5 | 2,020 | [
"state - of - the - art nlp models can often be fooled by human - unaware transformations such as synonymous word substitution .",
"for security reasons , it is of critical importance to develop models with certified robustness that can provably guarantee that the prediction is can not be altered by any possible ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "models with certified robustness",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"models",
"with",
"certified",
"robustness"
],
"offsets": [
... | [
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"nlp",
"models",
"can",
"often",
"be",
"fooled",
"by",
"human",
"-",
"unaware",
"transformations",
"such",
"as",
"synonymous",
"word",
"substitution",
".",
"for",
"security",
"reasons",
",",
"it",
"is",
"of",
"... |
ACL | That Slepen Al the Nyght with Open Ye! Cross-era Sequence Segmentation with Switch-memory | The evolution of language follows the rule of gradual change. Grammar, vocabulary, and lexical semantic shifts take place over time, resulting in a diachronic linguistic gap. As such, a considerable amount of texts are written in languages of different eras, which creates obstacles for natural language processing tasks... | 9cd2b6121ad415642ef6f7c19e6af8bc | 2,022 | [
"the evolution of language follows the rule of gradual change .",
"grammar , vocabulary , and lexical semantic shifts take place over time , resulting in a diachronic linguistic gap .",
"as such , a considerable amount of texts are written in languages of different eras , which creates obstacles for natural lan... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "diachronic linguistic gap",
"nugget_type": "WEA",
"argument_type": "Fault",
"tokens": [
"diachronic",
"linguistic",
"gap"
],
"offsets": [
27,
28,
... | [
"the",
"evolution",
"of",
"language",
"follows",
"the",
"rule",
"of",
"gradual",
"change",
".",
"grammar",
",",
"vocabulary",
",",
"and",
"lexical",
"semantic",
"shifts",
"take",
"place",
"over",
"time",
",",
"resulting",
"in",
"a",
"diachronic",
"linguistic",... |
ACL | On Training Instance Selection for Few-Shot Neural Text Generation | Large-scale pretrained language models have led to dramatic improvements in text generation. Impressive performance can be achieved by finetuning only on a small number of instances (few-shot setting). Nonetheless, almost all previous work simply applies random sampling to select the few-shot training instances. Little... | 5769bdc80157c6ef2939965f01ea3b84 | 2,021 | [
"large - scale pretrained language models have led to dramatic improvements in text generation .",
"impressive performance can be achieved by finetuning only on a small number of instances ( few - shot setting ) .",
"nonetheless , almost all previous work simply applies random sampling to select the few - shot ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "large - scale pretrained language models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"large",
"-",
"scale",
"pretrained",
"language",
"models"... | [
"large",
"-",
"scale",
"pretrained",
"language",
"models",
"have",
"led",
"to",
"dramatic",
"improvements",
"in",
"text",
"generation",
".",
"impressive",
"performance",
"can",
"be",
"achieved",
"by",
"finetuning",
"only",
"on",
"a",
"small",
"number",
"of",
"... |
ACL | Embarrassingly Simple Unsupervised Aspect Extraction | We present a simple but effective method for aspect identification in sentiment analysis. Our unsupervised method only requires word embeddings and a POS tagger, and is therefore straightforward to apply to new domains and languages. We introduce Contrastive Attention (CAt), a novel single-head attention mechanism base... | 2b75fbdfabbd47cd6f51a79c731f1f72 | 2,020 | [
"we present a simple but effective method for aspect identification in sentiment analysis .",
"our unsupervised method only requires word embeddings and a pos tagger , and is therefore straightforward to apply to new domains and languages .",
"we introduce contrastive attention ( cat ) , a novel single - head a... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "simple but effective method for aspect identificati... | [
"we",
"present",
"a",
"simple",
"but",
"effective",
"method",
"for",
"aspect",
"identification",
"in",
"sentiment",
"analysis",
".",
"our",
"unsupervised",
"method",
"only",
"requires",
"word",
"embeddings",
"and",
"a",
"pos",
"tagger",
",",
"and",
"is",
"ther... |
ACL | Pay Attention when you Pay the Bills. A Multilingual Corpus with Dependency-based and Semantic Annotation of Collocations. | This paper presents a new multilingual corpus with semantic annotation of collocations in English, Portuguese, and Spanish. The whole resource contains 155k tokens and 1,526 collocations labeled in context. The annotated examples belong to three syntactic relations (adjective-noun, verb-object, and nominal compounds), ... | 35d2f20654678b404de19f9b6009aa1d | 2,019 | [
"this paper presents a new multilingual corpus with semantic annotation of collocations in english , portuguese , and spanish .",
"the whole resource contains 155k tokens and 1 , 526 collocations labeled in context .",
"the annotated examples belong to three syntactic relations ( adjective - noun , verb - objec... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "multilingual corpus",
"nugget_type": "DST",
"argument_type": "Content",
"tokens": [
"multilingual",
"corpus"
],
"offsets": [
5,
6
]
}
],
"trigg... | [
"this",
"paper",
"presents",
"a",
"new",
"multilingual",
"corpus",
"with",
"semantic",
"annotation",
"of",
"collocations",
"in",
"english",
",",
"portuguese",
",",
"and",
"spanish",
".",
"the",
"whole",
"resource",
"contains",
"155k",
"tokens",
"and",
"1",
","... |
ACL | Explanations for CommonsenseQA: New Dataset and Models | CommonsenseQA (CQA) (Talmor et al., 2019) dataset was recently released to advance the research on common-sense question answering (QA) task. Whereas the prior work has mostly focused on proposing QA models for this dataset, our aim is to retrieve as well as generate explanation for a given (question, correct answer ch... | 8287284edc67d58e74ffcac238af8d21 | 2,021 | [
"commonsenseqa ( cqa ) ( talmor et al . , 2019 ) dataset was recently released to advance the research on common - sense question answering ( qa ) task .",
"whereas the prior work has mostly focused on proposing qa models for this dataset , our aim is to retrieve as well as generate explanation for a given ( ques... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "commonsenseqa dataset",
"nugget_type": "DST",
"argument_type": "Target",
"tokens": [
"commonsenseqa",
"dataset"
],
"offsets": [
0,
12
]
}
],
"t... | [
"commonsenseqa",
"(",
"cqa",
")",
"(",
"talmor",
"et",
"al",
".",
",",
"2019",
")",
"dataset",
"was",
"recently",
"released",
"to",
"advance",
"the",
"research",
"on",
"common",
"-",
"sense",
"question",
"answering",
"(",
"qa",
")",
"task",
".",
"whereas... |
ACL | Assessing the Representations of Idiomaticity in Vector Models with a Noun Compound Dataset Labeled at Type and Token Levels | Accurate assessment of the ability of embedding models to capture idiomaticity may require evaluation at token rather than type level, to account for degrees of idiomaticity and possible ambiguity between literal and idiomatic usages. However, most existing resources with annotation of idiomaticity include ratings only... | b2e1e698916e31fbf7aed7446b34c691 | 2,021 | [
"accurate assessment of the ability of embedding models to capture idiomaticity may require evaluation at token rather than type level , to account for degrees of idiomaticity and possible ambiguity between literal and idiomatic usages .",
"however , most existing resources with annotation of idiomaticity include... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "idiomaticity",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"idiomaticity"
],
"offsets": [
10
]
}
],
"trigger": {
"text": "require",
"t... | [
"accurate",
"assessment",
"of",
"the",
"ability",
"of",
"embedding",
"models",
"to",
"capture",
"idiomaticity",
"may",
"require",
"evaluation",
"at",
"token",
"rather",
"than",
"type",
"level",
",",
"to",
"account",
"for",
"degrees",
"of",
"idiomaticity",
"and",... |
ACL | Emotion-Cause Pair Extraction: A New Task to Emotion Analysis in Texts | Emotion cause extraction (ECE), the task aimed at extracting the potential causes behind certain emotions in text, has gained much attention in recent years due to its wide applications. However, it suffers from two shortcomings: 1) the emotion must be annotated before cause extraction in ECE, which greatly limits its ... | 220f8a5851ed7479a5e70afd56025008 | 2,019 | [
"emotion cause extraction ( ece ) , the task aimed at extracting the potential causes behind certain emotions in text , has gained much attention in recent years due to its wide applications .",
"however , it suffers from two shortcomings : 1 ) the emotion must be annotated before cause extraction in ece , which ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "emotion cause extraction",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"emotion",
"cause",
"extraction"
],
"offsets": [
0,
1,
2
... | [
"emotion",
"cause",
"extraction",
"(",
"ece",
")",
",",
"the",
"task",
"aimed",
"at",
"extracting",
"the",
"potential",
"causes",
"behind",
"certain",
"emotions",
"in",
"text",
",",
"has",
"gained",
"much",
"attention",
"in",
"recent",
"years",
"due",
"to",
... |
ACL | Enhancing Machine Translation with Dependency-Aware Self-Attention | Most neural machine translation models only rely on pairs of parallel sentences, assuming syntactic information is automatically learned by an attention mechanism. In this work, we investigate different approaches to incorporate syntactic knowledge in the Transformer model and also propose a novel, parameter-free, depe... | ce7dff599fbaf0e195f1bc58cd56b026 | 2,020 | [
"most neural machine translation models only rely on pairs of parallel sentences , assuming syntactic information is automatically learned by an attention mechanism .",
"in this work , we investigate different approaches to incorporate syntactic knowledge in the transformer model and also propose a novel , parame... | [
{
"event_type": "RWS",
"arguments": [
{
"text": "neural machine translation models",
"nugget_type": "APP",
"argument_type": "Subject",
"tokens": [
"neural",
"machine",
"translation",
"models"
],
"offsets": [
... | [
"most",
"neural",
"machine",
"translation",
"models",
"only",
"rely",
"on",
"pairs",
"of",
"parallel",
"sentences",
",",
"assuming",
"syntactic",
"information",
"is",
"automatically",
"learned",
"by",
"an",
"attention",
"mechanism",
".",
"in",
"this",
"work",
",... |
ACL | Discourse as a Function of Event: Profiling Discourse Structure in News Articles around the Main Event | Understanding discourse structures of news articles is vital to effectively contextualize the occurrence of a news event. To enable computational modeling of news structures, we apply an existing theory of functional discourse structure for news articles that revolves around the main event and create a human-annotated ... | a117c70fbf085e7cc27aa374e31fab9a | 2,020 | [
"understanding discourse structures of news articles is vital to effectively contextualize the occurrence of a news event .",
"to enable computational modeling of news structures , we apply an existing theory of functional discourse structure for news articles that revolves around the main event and create a huma... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "structures of news articles",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"structures",
"of",
"news",
"articles"
],
"offsets": [
2,
... | [
"understanding",
"discourse",
"structures",
"of",
"news",
"articles",
"is",
"vital",
"to",
"effectively",
"contextualize",
"the",
"occurrence",
"of",
"a",
"news",
"event",
".",
"to",
"enable",
"computational",
"modeling",
"of",
"news",
"structures",
",",
"we",
"... |
ACL | Distantly Supervised Named Entity Recognition using Positive-Unlabeled Learning | In this work, we explore the way to perform named entity recognition (NER) using only unlabeled data and named entity dictionaries. To this end, we formulate the task as a positive-unlabeled (PU) learning problem and accordingly propose a novel PU learning algorithm to perform the task. We prove that the proposed algor... | 02fa7828e50cffcdcf8b1773b863a015 | 2,019 | [
"in this work , we explore the way to perform named entity recognition ( ner ) using only unlabeled data and named entity dictionaries .",
"to this end , we formulate the task as a positive - unlabeled ( pu ) learning problem and accordingly propose a novel pu learning algorithm to perform the task .",
"we prov... | [
{
"event_type": "MDS",
"arguments": [
{
"text": "unlabeled data",
"nugget_type": "FEA",
"argument_type": "BaseComponent",
"tokens": [
"unlabeled",
"data"
],
"offsets": [
18,
19
]
},
{
"t... | [
"in",
"this",
"work",
",",
"we",
"explore",
"the",
"way",
"to",
"perform",
"named",
"entity",
"recognition",
"(",
"ner",
")",
"using",
"only",
"unlabeled",
"data",
"and",
"named",
"entity",
"dictionaries",
".",
"to",
"this",
"end",
",",
"we",
"formulate",
... |
ACL | Annotating Online Misogyny | Online misogyny, a category of online abusive language, has serious and harmful social consequences. Automatic detection of misogynistic language online, while imperative, poses complicated challenges to both data gathering, data annotation, and bias mitigation, as this type of data is linguistically complex and divers... | 1878eeb09862560fdd39e2c918989b03 | 2,021 | [
"online misogyny , a category of online abusive language , has serious and harmful social consequences .",
"automatic detection of misogynistic language online , while imperative , poses complicated challenges to both data gathering , data annotation , and bias mitigation , as this type of data is linguistically ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "online misogyny",
"nugget_type": "FEA",
"argument_type": "Target",
"tokens": [
"online",
"misogyny"
],
"offsets": [
0,
1
]
}
],
"trigger": {
... | [
"online",
"misogyny",
",",
"a",
"category",
"of",
"online",
"abusive",
"language",
",",
"has",
"serious",
"and",
"harmful",
"social",
"consequences",
".",
"automatic",
"detection",
"of",
"misogynistic",
"language",
"online",
",",
"while",
"imperative",
",",
"pos... |
ACL | MarkupLM: Pre-training of Text and Markup Language for Visually Rich Document Understanding | Multimodal pre-training with text, layout, and image has made significant progress for Visually Rich Document Understanding (VRDU), especially the fixed-layout documents such as scanned document images. While, there are still a large number of digital documents where the layout information is not fixed and needs to be ... | ba02ccbfdb9b43cafd6da9f3622bebb7 | 2,022 | [
"multimodal pre - training with text , layout , and image has made significant progress for visually rich document understanding ( vrdu ) , especially the fixed - layout documents such as scanned document images .",
"while , there are still a large number of digital documents where the layout information is not f... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "multimodal pre - training",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"multimodal",
"pre",
"-",
"training"
],
"offsets": [
0,
... | [
"multimodal",
"pre",
"-",
"training",
"with",
"text",
",",
"layout",
",",
"and",
"image",
"has",
"made",
"significant",
"progress",
"for",
"visually",
"rich",
"document",
"understanding",
"(",
"vrdu",
")",
",",
"especially",
"the",
"fixed",
"-",
"layout",
"d... |
ACL | Buy Tesla, Sell Ford: Assessing Implicit Stock Market Preference in Pre-trained Language Models | Pretrained language models such as BERT have achieved remarkable success in several NLP tasks. With the wide adoption of BERT in real-world applications, researchers begin to investigate the implicit biases encoded in the BERT. In this paper, we assess the implicit stock market preferences in BERT and its finance domai... | f288dea3d0963e8b2243561fb68ee53a | 2,022 | [
"pretrained language models such as bert have achieved remarkable success in several nlp tasks .",
"with the wide adoption of bert in real - world applications , researchers begin to investigate the implicit biases encoded in the bert .",
"in this paper , we assess the implicit stock market preferences in bert ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "pretrained language models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"pretrained",
"language",
"models"
],
"offsets": [
0,
1,
... | [
"pretrained",
"language",
"models",
"such",
"as",
"bert",
"have",
"achieved",
"remarkable",
"success",
"in",
"several",
"nlp",
"tasks",
".",
"with",
"the",
"wide",
"adoption",
"of",
"bert",
"in",
"real",
"-",
"world",
"applications",
",",
"researchers",
"begin... |
ACL | StructuralLM: Structural Pre-training for Form Understanding | Large pre-trained language models achieve state-of-the-art results when fine-tuned on downstream NLP tasks. However, they almost exclusively focus on text-only representation, while neglecting cell-level layout information that is important for form image understanding. In this paper, we propose a new pre-training appr... | cc28fdb4c3c4f35485a5c1d5e95e65fd | 2,021 | [
"large pre - trained language models achieve state - of - the - art results when fine - tuned on downstream nlp tasks .",
"however , they almost exclusively focus on text - only representation , while neglecting cell - level layout information that is important for form image understanding .",
"in this paper , ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "downstream nlp tasks",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"downstream",
"nlp",
"tasks"
],
"offsets": [
20,
21,
22
... | [
"large",
"pre",
"-",
"trained",
"language",
"models",
"achieve",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"results",
"when",
"fine",
"-",
"tuned",
"on",
"downstream",
"nlp",
"tasks",
".",
"however",
",",
"they",
"almost",
"exclusively",
"focus",
"on",
... |
ACL | A Large-Scale Multi-Document Summarization Dataset from the Wikipedia Current Events Portal | Multi-document summarization (MDS) aims to compress the content in large document collections into short summaries and has important applications in story clustering for newsfeeds, presentation of search results, and timeline generation. However, there is a lack of datasets that realistically address such use cases at ... | 56699b41e5be7019c6be6bb6f51296e2 | 2,020 | [
"multi - document summarization ( mds ) aims to compress the content in large document collections into short summaries and has important applications in story clustering for newsfeeds , presentation of search results , and timeline generation .",
"however , there is a lack of datasets that realistically address ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "mds",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"mds"
],
"offsets": [
5
]
}
],
"trigger": {
"text": "compress",
"tokens": [
... | [
"multi",
"-",
"document",
"summarization",
"(",
"mds",
")",
"aims",
"to",
"compress",
"the",
"content",
"in",
"large",
"document",
"collections",
"into",
"short",
"summaries",
"and",
"has",
"important",
"applications",
"in",
"story",
"clustering",
"for",
"newsfe... |
ACL | Happy Dance, Slow Clap: Using Reaction GIFs to Predict Induced Affect on Twitter | Datasets with induced emotion labels are scarce but of utmost importance for many NLP tasks. We present a new, automated method for collecting texts along with their induced reaction labels. The method exploits the online use of reaction GIFs, which capture complex affective states. We show how to augment the data with... | 75d9830cf200d796da50b71334e46d4b | 2,021 | [
"datasets with induced emotion labels are scarce but of utmost importance for many nlp tasks .",
"we present a new , automated method for collecting texts along with their induced reaction labels .",
"the method exploits the online use of reaction gifs , which capture complex affective states .",
"we show how... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "datasets with induced emotion labels",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"datasets",
"with",
"induced",
"emotion",
"labels"
],
"... | [
"datasets",
"with",
"induced",
"emotion",
"labels",
"are",
"scarce",
"but",
"of",
"utmost",
"importance",
"for",
"many",
"nlp",
"tasks",
".",
"we",
"present",
"a",
"new",
",",
"automated",
"method",
"for",
"collecting",
"texts",
"along",
"with",
"their",
"in... |
ACL | CaMEL: Case Marker Extraction without Labels | We introduce CaMEL (Case Marker Extraction without Labels), a novel and challenging task in computational morphology that is especially relevant for low-resource languages. We propose a first model for CaMEL that uses a massively multilingual corpus to extract case markers in 83 languages based only on a noun phrase ch... | 542c6c26ace58342b98f703918bd8423 | 2,022 | [
"we introduce camel ( case marker extraction without labels ) , a novel and challenging task in computational morphology that is especially relevant for low - resource languages .",
"we propose a first model for camel that uses a massively multilingual corpus to extract case markers in 83 languages based only on ... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "case marker extraction without labels",
"... | [
"we",
"introduce",
"camel",
"(",
"case",
"marker",
"extraction",
"without",
"labels",
")",
",",
"a",
"novel",
"and",
"challenging",
"task",
"in",
"computational",
"morphology",
"that",
"is",
"especially",
"relevant",
"for",
"low",
"-",
"resource",
"languages",
... |
ACL | Task Refinement Learning for Improved Accuracy and Stability of Unsupervised Domain Adaptation | Pivot Based Language Modeling (PBLM) (Ziser and Reichart, 2018a), combining LSTMs with pivot-based methods, has yielded significant progress in unsupervised domain adaptation. However, this approach is still challenged by the large pivot detection problem that should be solved, and by the inherent instability of LSTMs.... | 8dfa88826d5bbdb80ea250550b3a81c7 | 2,019 | [
"pivot based language modeling ( pblm ) ( ziser and reichart , 2018a ) , combining lstms with pivot - based methods , has yielded significant progress in unsupervised domain adaptation .",
"however , this approach is still challenged by the large pivot detection problem that should be solved , and by the inherent... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "pivot based language modeling",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"pivot",
"based",
"language",
"modeling"
],
"offsets": [
0,
... | [
"pivot",
"based",
"language",
"modeling",
"(",
"pblm",
")",
"(",
"ziser",
"and",
"reichart",
",",
"2018a",
")",
",",
"combining",
"lstms",
"with",
"pivot",
"-",
"based",
"methods",
",",
"has",
"yielded",
"significant",
"progress",
"in",
"unsupervised",
"doma... |
ACL | Learning to Recover from Multi-Modality Errors for Non-Autoregressive Neural Machine Translation | Non-autoregressive neural machine translation (NAT) predicts the entire target sequence simultaneously and significantly accelerates inference process. However, NAT discards the dependency information in a sentence, and thus inevitably suffers from the multi-modality problem: the target tokens may be provided by differ... | ee2822fd666266d2f89a85580d70682c | 2,020 | [
"non - autoregressive neural machine translation ( nat ) predicts the entire target sequence simultaneously and significantly accelerates inference process .",
"however , nat discards the dependency information in a sentence , and thus inevitably suffers from the multi - modality problem : the target tokens may b... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "token repetitions",
"nugget_type": "WEA",
"argument_type": "Fault",
"tokens": [
"token",
"repetitions"
],
"offsets": [
56,
57
]
},
{
"tex... | [
"non",
"-",
"autoregressive",
"neural",
"machine",
"translation",
"(",
"nat",
")",
"predicts",
"the",
"entire",
"target",
"sequence",
"simultaneously",
"and",
"significantly",
"accelerates",
"inference",
"process",
".",
"however",
",",
"nat",
"discards",
"the",
"d... |
ACL | Corpus-based Check-up for Thesaurus | In this paper we discuss the usefulness of applying a checking procedure to existing thesauri. The procedure is based on the analysis of discrepancies of corpus-based and thesaurus-based word similarities. We applied the procedure to more than 30 thousand words of the Russian wordnet and found some serious errors in wo... | 80aa87015a37737d2e129fe68ee95fe8 | 2,019 | [
"in this paper we discuss the usefulness of applying a checking procedure to existing thesauri .",
"the procedure is based on the analysis of discrepancies of corpus - based and thesaurus - based word similarities .",
"we applied the procedure to more than 30 thousand words of the russian wordnet and found some... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
3
]
},
{
"text": "checking procedure",
"nugget_type": "APP"... | [
"in",
"this",
"paper",
"we",
"discuss",
"the",
"usefulness",
"of",
"applying",
"a",
"checking",
"procedure",
"to",
"existing",
"thesauri",
".",
"the",
"procedure",
"is",
"based",
"on",
"the",
"analysis",
"of",
"discrepancies",
"of",
"corpus",
"-",
"based",
"... |
ACL | Review-based Question Generation with Adaptive Instance Transfer and Augmentation | While online reviews of products and services become an important information source, it remains inefficient for potential consumers to exploit verbose reviews for fulfilling their information need. We propose to explore question generation as a new way of review information exploitation, namely generating questions th... | 7acbc6493e7c2e2745e7c1472b3ea5c3 | 2,020 | [
"while online reviews of products and services become an important information source , it remains inefficient for potential consumers to exploit verbose reviews for fulfilling their information need .",
"we propose to explore question generation as a new way of review information exploitation , namely generating... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "question generation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"question",
"generation"
],
"offsets": [
33,
34
]
}
],
"trig... | [
"while",
"online",
"reviews",
"of",
"products",
"and",
"services",
"become",
"an",
"important",
"information",
"source",
",",
"it",
"remains",
"inefficient",
"for",
"potential",
"consumers",
"to",
"exploit",
"verbose",
"reviews",
"for",
"fulfilling",
"their",
"inf... |
ACL | Variational Neural Machine Translation with Normalizing Flows | Variational Neural Machine Translation (VNMT) is an attractive framework for modeling the generation of target translations, conditioned not only on the source sentence but also on some latent random variables. The latent variable modeling may introduce useful statistical dependencies that can improve translation accur... | 48f698fefc2253634d11b61f2fa8271d | 2,020 | [
"variational neural machine translation ( vnmt ) is an attractive framework for modeling the generation of target translations , conditioned not only on the source sentence but also on some latent random variables .",
"the latent variable modeling may introduce useful statistical dependencies that can improve tra... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "variational neural machine translation",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"variational",
"neural",
"machine",
"translation"
],
"offsets":... | [
"variational",
"neural",
"machine",
"translation",
"(",
"vnmt",
")",
"is",
"an",
"attractive",
"framework",
"for",
"modeling",
"the",
"generation",
"of",
"target",
"translations",
",",
"conditioned",
"not",
"only",
"on",
"the",
"source",
"sentence",
"but",
"also... |
ACL | Learning to Contextually Aggregate Multi-Source Supervision for Sequence Labeling | Sequence labeling is a fundamental task for a range of natural language processing problems. When used in practice, its performance is largely influenced by the annotation quality and quantity, and meanwhile, obtaining ground truth labels is often costly. In many cases, ground truth labels do not exist, but noisy annot... | e3e77a55beee373e168e334747ca247b | 2,020 | [
"sequence labeling is a fundamental task for a range of natural language processing problems .",
"when used in practice , its performance is largely influenced by the annotation quality and quantity , and meanwhile , obtaining ground truth labels is often costly .",
"in many cases , ground truth labels do not e... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "sequence labeling",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"sequence",
"labeling"
],
"offsets": [
0,
1
]
}
],
"trigger": ... | [
"sequence",
"labeling",
"is",
"a",
"fundamental",
"task",
"for",
"a",
"range",
"of",
"natural",
"language",
"processing",
"problems",
".",
"when",
"used",
"in",
"practice",
",",
"its",
"performance",
"is",
"largely",
"influenced",
"by",
"the",
"annotation",
"q... |
ACL | Structured Sentiment Analysis as Dependency Graph Parsing | Structured sentiment analysis attempts to extract full opinion tuples from a text, but over time this task has been subdivided into smaller and smaller sub-tasks, e.g., target extraction or targeted polarity classification. We argue that this division has become counterproductive and propose a new unified framework to ... | 5117728b682783d2f8887acc151941cb | 2,021 | [
"structured sentiment analysis attempts to extract full opinion tuples from a text , but over time this task has been subdivided into smaller and smaller sub - tasks , e . g . , target extraction or targeted polarity classification .",
"we argue that this division has become counterproductive and propose a new un... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "structured sentiment analysis",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"structured",
"sentiment",
"analysis"
],
"offsets": [
0,
1,
... | [
"structured",
"sentiment",
"analysis",
"attempts",
"to",
"extract",
"full",
"opinion",
"tuples",
"from",
"a",
"text",
",",
"but",
"over",
"time",
"this",
"task",
"has",
"been",
"subdivided",
"into",
"smaller",
"and",
"smaller",
"sub",
"-",
"tasks",
",",
"e",... |
ACL | Low Resource Sequence Tagging using Sentence Reconstruction | This work revisits the task of training sequence tagging models with limited resources using transfer learning. We investigate several proposed approaches introduced in recent works and suggest a new loss that relies on sentence reconstruction from normalized embeddings. Specifically, our method demonstrates how by add... | 21cc41154e91532fbe520ed290197bec | 2,020 | [
"this work revisits the task of training sequence tagging models with limited resources using transfer learning .",
"we investigate several proposed approaches introduced in recent works and suggest a new loss that relies on sentence reconstruction from normalized embeddings .",
"specifically , our method demon... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "task of training sequence tagging models",
"nugget_type": "TAK",
"argument_type": "Content",
"tokens": [
"task",
"of",
"training",
"sequence",
"tagging",
"models... | [
"this",
"work",
"revisits",
"the",
"task",
"of",
"training",
"sequence",
"tagging",
"models",
"with",
"limited",
"resources",
"using",
"transfer",
"learning",
".",
"we",
"investigate",
"several",
"proposed",
"approaches",
"introduced",
"in",
"recent",
"works",
"an... |
ACL | Text Smoothing: Enhance Various Data Augmentation Methods on Text Classification Tasks | Before entering the neural network, a token needs to be converted to its one-hot representation, which is a discrete distribution of the vocabulary. Smoothed representation is the probability of candidate tokens obtained from the pre-trained masked language model, which can be seen as a more informative augmented subst... | 1e152448b43e88b310d1855eb69d30a4 | 2,022 | [
"before entering the neural network , a token needs to be converted to its one - hot representation , which is a discrete distribution of the vocabulary .",
"smoothed representation is the probability of candidate tokens obtained from the pre - trained masked language model , which can be seen as a more informati... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "one - hot representation",
"nugget_type": "FEA",
"argument_type": "Target",
"tokens": [
"one",
"-",
"hot",
"representation"
],
"offsets": [
14,
1... | [
"before",
"entering",
"the",
"neural",
"network",
",",
"a",
"token",
"needs",
"to",
"be",
"converted",
"to",
"its",
"one",
"-",
"hot",
"representation",
",",
"which",
"is",
"a",
"discrete",
"distribution",
"of",
"the",
"vocabulary",
".",
"smoothed",
"represe... |
ACL | Automated Chess Commentator Powered by Neural Chess Engine | In this paper, we explore a new approach for automated chess commentary generation, which aims to generate chess commentary texts in different categories (e.g., description, comparison, planning, etc.). We introduce a neural chess engine into text generation models to help with encoding boards, predicting moves, and an... | 93e740fb45650508e2e399789f020eda | 2,019 | [
"in this paper , we explore a new approach for automated chess commentary generation , which aims to generate chess commentary texts in different categories ( e . g . , description , comparison , planning , etc . ) .",
"we introduce a neural chess engine into text generation models to help with encoding boards , ... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
4
]
},
{
"text": "approach",
"nugget_type": "APP",
"a... | [
"in",
"this",
"paper",
",",
"we",
"explore",
"a",
"new",
"approach",
"for",
"automated",
"chess",
"commentary",
"generation",
",",
"which",
"aims",
"to",
"generate",
"chess",
"commentary",
"texts",
"in",
"different",
"categories",
"(",
"e",
".",
"g",
".",
... |
ACL | How Can We Accelerate Progress Towards Human-like Linguistic Generalization? | This position paper describes and critiques the Pretraining-Agnostic Identically Distributed (PAID) evaluation paradigm, which has become a central tool for measuring progress in natural language understanding. This paradigm consists of three stages: (1) pre-training of a word prediction model on a corpus of arbitrary ... | efbfeaddcd4ef71ef920b72cb7381c56 | 2,020 | [
"this position paper describes and critiques the pretraining - agnostic identically distributed ( paid ) evaluation paradigm , which has become a central tool for measuring progress in natural language understanding .",
"this paradigm consists of three stages : ( 1 ) pre - training of a word prediction model on a... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "pretraining - agnostic identically distributed ( paid ) evaluation paradigm",
"nugget_type": "APP",
"argument_type": "Content",
"tokens": [
"pretraining",
"-",
"agnostic",
"identica... | [
"this",
"position",
"paper",
"describes",
"and",
"critiques",
"the",
"pretraining",
"-",
"agnostic",
"identically",
"distributed",
"(",
"paid",
")",
"evaluation",
"paradigm",
",",
"which",
"has",
"become",
"a",
"central",
"tool",
"for",
"measuring",
"progress",
... |
ACL | Coreference Resolution without Span Representations | The introduction of pretrained language models has reduced many complex task-specific NLP models to simple lightweight layers. An exception to this trend is coreference resolution, where a sophisticated task-specific model is appended to a pretrained transformer encoder. While highly effective, the model has a very lar... | a3dc9def5c62871c7f4efb13e655deba | 2,021 | [
"the introduction of pretrained language models has reduced many complex task - specific nlp models to simple lightweight layers .",
"an exception to this trend is coreference resolution , where a sophisticated task - specific model is appended to a pretrained transformer encoder .",
"while highly effective , t... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "complex task - specific nlp models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"complex",
"task",
"-",
"specific",
"nlp",
"models"
],
... | [
"the",
"introduction",
"of",
"pretrained",
"language",
"models",
"has",
"reduced",
"many",
"complex",
"task",
"-",
"specific",
"nlp",
"models",
"to",
"simple",
"lightweight",
"layers",
".",
"an",
"exception",
"to",
"this",
"trend",
"is",
"coreference",
"resoluti... |
ACL | Response-Anticipated Memory for On-Demand Knowledge Integration in Response Generation | Neural conversation models are known to generate appropriate but non-informative responses in general. A scenario where informativeness can be significantly enhanced is Conversing by Reading (CbR), where conversations take place with respect to a given external document. In previous work, the external document is utili... | 68b5d9bd36de3df81ecab57350433c33 | 2,020 | [
"neural conversation models are known to generate appropriate but non - informative responses in general .",
"a scenario where informativeness can be significantly enhanced is conversing by reading ( cbr ) , where conversations take place with respect to a given external document .",
"in previous work , the ext... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "conversing by reading",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"conversing",
"by",
"reading"
],
"offsets": [
25,
26,
27
... | [
"neural",
"conversation",
"models",
"are",
"known",
"to",
"generate",
"appropriate",
"but",
"non",
"-",
"informative",
"responses",
"in",
"general",
".",
"a",
"scenario",
"where",
"informativeness",
"can",
"be",
"significantly",
"enhanced",
"is",
"conversing",
"by... |
ACL | AggGen: Ordering and Aggregating while Generating | We present AggGen (pronounced ‘again’) a data-to-text model which re-introduces two explicit sentence planning stages into neural data-to-text systems: input ordering and input aggregation. In contrast to previous work using sentence planning, our model is still end-to-end: AggGen performs sentence planning at the same... | 3c0878652cb2d1fbc3685a1fdd7cfbd4 | 2,021 | [
"we present agggen ( pronounced ‘ again ’ ) a data - to - text model which re - introduces two explicit sentence planning stages into neural data - to - text systems : input ordering and input aggregation .",
"in contrast to previous work using sentence planning , our model is still end - to - end : agggen perfor... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "agggen",
"nugget_type": "APP",
"arg... | [
"we",
"present",
"agggen",
"(",
"pronounced",
"‘",
"again",
"’",
")",
"a",
"data",
"-",
"to",
"-",
"text",
"model",
"which",
"re",
"-",
"introduces",
"two",
"explicit",
"sentence",
"planning",
"stages",
"into",
"neural",
"data",
"-",
"to",
"-",
"text",
... |
ACL | Mask-Align: Self-Supervised Neural Word Alignment | Word alignment, which aims to align translationally equivalent words between source and target sentences, plays an important role in many natural language processing tasks. Current unsupervised neural alignment methods focus on inducing alignments from neural machine translation models, which does not leverage the full... | ba6e541033327f185da6b4844781c9f9 | 2,021 | [
"word alignment , which aims to align translationally equivalent words between source and target sentences , plays an important role in many natural language processing tasks .",
"current unsupervised neural alignment methods focus on inducing alignments from neural machine translation models , which does not lev... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "natural language processing tasks",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"natural",
"language",
"processing",
"tasks"
],
"offsets": [
... | [
"word",
"alignment",
",",
"which",
"aims",
"to",
"align",
"translationally",
"equivalent",
"words",
"between",
"source",
"and",
"target",
"sentences",
",",
"plays",
"an",
"important",
"role",
"in",
"many",
"natural",
"language",
"processing",
"tasks",
".",
"curr... |
ACL | An Effectiveness Metric for Ordinal Classification: Formal Properties and Experimental Results | In Ordinal Classification tasks, items have to be assigned to classes that have a relative ordering, such as “positive”, “neutral”, “negative” in sentiment analysis. Remarkably, the most popular evaluation metrics for ordinal classification tasks either ignore relevant information (for instance, precision/recall on eac... | d1490cba4d2ca78140b665e823a56e78 | 2,020 | [
"in ordinal classification tasks , items have to be assigned to classes that have a relative ordering , such as “ positive ” , “ neutral ” , “ negative ” in sentiment analysis .",
"remarkably , the most popular evaluation metrics for ordinal classification tasks either ignore relevant information ( for instance ,... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "ordinal classification",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"ordinal",
"classification"
],
"offsets": [
1,
2
]
}
],
"... | [
"in",
"ordinal",
"classification",
"tasks",
",",
"items",
"have",
"to",
"be",
"assigned",
"to",
"classes",
"that",
"have",
"a",
"relative",
"ordering",
",",
"such",
"as",
"“",
"positive",
"”",
",",
"“",
"neutral",
"”",
",",
"“",
"negative",
"”",
"in",
... |
ACL | Enhancing Pre-trained Chinese Character Representation with Word-aligned Attention | Most Chinese pre-trained models take character as the basic unit and learn representation according to character’s external contexts, ignoring the semantics expressed in the word, which is the smallest meaningful utterance in Chinese. Hence, we propose a novel word-aligned attention to exploit explicit word information... | 34ac7ba5bf670ddc258b74d6c7f77956 | 2,020 | [
"most chinese pre - trained models take character as the basic unit and learn representation according to character ’ s external contexts , ignoring the semantics expressed in the word , which is the smallest meaningful utterance in chinese .",
"hence , we propose a novel word - aligned attention to exploit expli... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "chinese pre - trained models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"chinese",
"pre",
"-",
"trained",
"models"
],
"offsets": [
... | [
"most",
"chinese",
"pre",
"-",
"trained",
"models",
"take",
"character",
"as",
"the",
"basic",
"unit",
"and",
"learn",
"representation",
"according",
"to",
"character",
"’",
"s",
"external",
"contexts",
",",
"ignoring",
"the",
"semantics",
"expressed",
"in",
"... |
ACL | Hierarchical Modeling for User Personality Prediction: The Role of Message-Level Attention | Not all documents are equally important. Language processing is increasingly finding use as a supplement for questionnaires to assess psychological attributes of consenting individuals, but most approaches neglect to consider whether all documents of an individual are equally informative. In this paper, we present a no... | 569a5b5b4ac745b4b49d51fa29d43fcb | 2,020 | [
"not all documents are equally important .",
"language processing is increasingly finding use as a supplement for questionnaires to assess psychological attributes of consenting individuals , but most approaches neglect to consider whether all documents of an individual are equally informative .",
"in this pape... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "neglect",
"nugget_type": "WEA",
"argument_type": "Fault",
"tokens": [
"neglect"
],
"offsets": [
29
]
}
],
"trigger": {
"text": "neglect",
"tokens": [
... | [
"not",
"all",
"documents",
"are",
"equally",
"important",
".",
"language",
"processing",
"is",
"increasingly",
"finding",
"use",
"as",
"a",
"supplement",
"for",
"questionnaires",
"to",
"assess",
"psychological",
"attributes",
"of",
"consenting",
"individuals",
",",
... |
ACL | Controlled Crowdsourcing for High-Quality QA-SRL Annotation | Question-answer driven Semantic Role Labeling (QA-SRL) was proposed as an attractive open and natural flavour of SRL, potentially attainable from laymen. Recently, a large-scale crowdsourced QA-SRL corpus and a trained parser were released. Trying to replicate the QA-SRL annotation for new texts, we found that the resu... | d0927c1a93336d65ac94be08b61baa52 | 2,020 | [
"question - answer driven semantic role labeling ( qa - srl ) was proposed as an attractive open and natural flavour of srl , potentially attainable from laymen .",
"recently , a large - scale crowdsourced qa - srl corpus and a trained parser were released .",
"trying to replicate the qa - srl annotation for ne... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "question - answer driven semantic role labeling",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"question",
"-",
"answer",
"driven",
"semantic",
"... | [
"question",
"-",
"answer",
"driven",
"semantic",
"role",
"labeling",
"(",
"qa",
"-",
"srl",
")",
"was",
"proposed",
"as",
"an",
"attractive",
"open",
"and",
"natural",
"flavour",
"of",
"srl",
",",
"potentially",
"attainable",
"from",
"laymen",
".",
"recently... |
ACL | A Bidirectional Transformer Based Alignment Model for Unsupervised Word Alignment | Word alignment and machine translation are two closely related tasks. Neural translation models, such as RNN-based and Transformer models, employ a target-to-source attention mechanism which can provide rough word alignments, but with a rather low accuracy. High-quality word alignment can help neural machine translatio... | 81a9f06c4dd6a663e006ddcc3ce47371 | 2,021 | [
"word alignment and machine translation are two closely related tasks .",
"neural translation models , such as rnn - based and transformer models , employ a target - to - source attention mechanism which can provide rough word alignments , but with a rather low accuracy .",
"high - quality word alignment can he... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural translation models",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"neural",
"translation",
"models"
],
"offsets": [
11,
12,
... | [
"word",
"alignment",
"and",
"machine",
"translation",
"are",
"two",
"closely",
"related",
"tasks",
".",
"neural",
"translation",
"models",
",",
"such",
"as",
"rnn",
"-",
"based",
"and",
"transformer",
"models",
",",
"employ",
"a",
"target",
"-",
"to",
"-",
... |
ACL | CoRI: Collective Relation Integration with Data Augmentation for Open Information Extraction | Integrating extracted knowledge from the Web to knowledge graphs (KGs) can facilitate tasks like question answering. We study relation integration that aims to align free-text relations in subject-relation-object extractions to relations in a target KG. To address the challenge that free-text relations are ambiguous, p... | 5ccf5c7265cf02748ca78e8aef0b4699 | 2,021 | [
"integrating extracted knowledge from the web to knowledge graphs ( kgs ) can facilitate tasks like question answering .",
"we study relation integration that aims to align free - text relations in subject - relation - object extractions to relations in a target kg .",
"to address the challenge that free - text... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "integrating extracted knowledge",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"integrating",
"extracted",
"knowledge"
],
"offsets": [
0,
1... | [
"integrating",
"extracted",
"knowledge",
"from",
"the",
"web",
"to",
"knowledge",
"graphs",
"(",
"kgs",
")",
"can",
"facilitate",
"tasks",
"like",
"question",
"answering",
".",
"we",
"study",
"relation",
"integration",
"that",
"aims",
"to",
"align",
"free",
"-... |
ACL | End-to-End AMR Corefencence Resolution | Although parsing to Abstract Meaning Representation (AMR) has become very popular and AMR has been shown effective on the many sentence-level downstream tasks, little work has studied how to generate AMRs that can represent multi-sentence information. We introduce the first end-to-end AMR coreference resolution model i... | 911cdda15d35d9fa0c64d3e8461ee8f3 | 2,021 | [
"although parsing to abstract meaning representation ( amr ) has become very popular and amr has been shown effective on the many sentence - level downstream tasks , little work has studied how to generate amrs that can represent multi - sentence information .",
"we introduce the first end - to - end amr corefere... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "abstract meaning representation",
"nugget_type": "FEA",
"argument_type": "Target",
"tokens": [
"abstract",
"meaning",
"representation"
],
"offsets": [
3,
4... | [
"although",
"parsing",
"to",
"abstract",
"meaning",
"representation",
"(",
"amr",
")",
"has",
"become",
"very",
"popular",
"and",
"amr",
"has",
"been",
"shown",
"effective",
"on",
"the",
"many",
"sentence",
"-",
"level",
"downstream",
"tasks",
",",
"little",
... |
ACL | Distilling Discrimination and Generalization Knowledge for Event Detection via Delta-Representation Learning | Event detection systems rely on discrimination knowledge to distinguish ambiguous trigger words and generalization knowledge to detect unseen/sparse trigger words. Current neural event detection approaches focus on trigger-centric representations, which work well on distilling discrimination knowledge, but poorly on le... | 673c7fa4e98a970820685f0a8a4d92ed | 2,019 | [
"event detection systems rely on discrimination knowledge to distinguish ambiguous trigger words and generalization knowledge to detect unseen / sparse trigger words .",
"current neural event detection approaches focus on trigger - centric representations , which work well on distilling discrimination knowledge ,... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "event detection systems",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"event",
"detection",
"systems"
],
"offsets": [
0,
1,
2
... | [
"event",
"detection",
"systems",
"rely",
"on",
"discrimination",
"knowledge",
"to",
"distinguish",
"ambiguous",
"trigger",
"words",
"and",
"generalization",
"knowledge",
"to",
"detect",
"unseen",
"/",
"sparse",
"trigger",
"words",
".",
"current",
"neural",
"event",
... |
ACL | Enriched In-Order Linearization for Faster Sequence-to-Sequence Constituent Parsing | Sequence-to-sequence constituent parsing requires a linearization to represent trees as sequences. Top-down tree linearizations, which can be based on brackets or shift-reduce actions, have achieved the best accuracy to date. In this paper, we show that these results can be improved by using an in-order linearization i... | 95197df3bdc9f07a839fc3fefcae8304 | 2,020 | [
"sequence - to - sequence constituent parsing requires a linearization to represent trees as sequences .",
"top - down tree linearizations , which can be based on brackets or shift - reduce actions , have achieved the best accuracy to date .",
"in this paper , we show that these results can be improved by using... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "sequence - to - sequence constituent parsing",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"sequence",
"-",
"to",
"-",
"sequence",
"constituent"... | [
"sequence",
"-",
"to",
"-",
"sequence",
"constituent",
"parsing",
"requires",
"a",
"linearization",
"to",
"represent",
"trees",
"as",
"sequences",
".",
"top",
"-",
"down",
"tree",
"linearizations",
",",
"which",
"can",
"be",
"based",
"on",
"brackets",
"or",
... |
ACL | ClarQ: A large-scale and diverse dataset for Clarification Question Generation | Question answering and conversational systems are often baffled and need help clarifying certain ambiguities. However, limitations of existing datasets hinder the development of large-scale models capable of generating and utilising clarification questions. In order to overcome these limitations, we devise a novel boot... | 04975e523526bd92235b1a4883ebcd59 | 2,020 | [
"question answering and conversational systems are often baffled and need help clarifying certain ambiguities .",
"however , limitations of existing datasets hinder the development of large - scale models capable of generating and utilising clarification questions .",
"in order to overcome these limitations , w... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "question answering and conversational systems",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"question",
"answering",
"and",
"conversational",
"systems"
... | [
"question",
"answering",
"and",
"conversational",
"systems",
"are",
"often",
"baffled",
"and",
"need",
"help",
"clarifying",
"certain",
"ambiguities",
".",
"however",
",",
"limitations",
"of",
"existing",
"datasets",
"hinder",
"the",
"development",
"of",
"large",
... |
ACL | Progressive Self-Supervised Attention Learning for Aspect-Level Sentiment Analysis | In aspect-level sentiment classification (ASC), it is prevalent to equip dominant neural models with attention mechanisms, for the sake of acquiring the importance of each context word on the given aspect. However, such a mechanism tends to excessively focus on a few frequent words with sentiment polarities, while igno... | 385fb4892692c04e4854fabd8d074b48 | 2,019 | [
"in aspect - level sentiment classification ( asc ) , it is prevalent to equip dominant neural models with attention mechanisms , for the sake of acquiring the importance of each context word on the given aspect .",
"however , such a mechanism tends to excessively focus on a few frequent words with sentiment pola... | [
{
"event_type": "RWS",
"arguments": [
{
"text": "aspect - level sentiment classification",
"nugget_type": "APP",
"argument_type": "Subject",
"tokens": [
"aspect",
"-",
"level",
"sentiment",
"classification"
],
... | [
"in",
"aspect",
"-",
"level",
"sentiment",
"classification",
"(",
"asc",
")",
",",
"it",
"is",
"prevalent",
"to",
"equip",
"dominant",
"neural",
"models",
"with",
"attention",
"mechanisms",
",",
"for",
"the",
"sake",
"of",
"acquiring",
"the",
"importance",
"... |
ACL | Glyph2Vec: Learning Chinese Out-of-Vocabulary Word Embedding from Glyphs | Chinese NLP applications that rely on large text often contain huge amounts of vocabulary which are sparse in corpus. We show that characters’ written form, Glyphs, in ideographic languages could carry rich semantics. We present a multi-modal model, Glyph2Vec, to tackle Chinese out-of-vocabulary word embedding problem.... | 22c41e072fe75e1ee501728754d46f09 | 2,020 | [
"chinese nlp applications that rely on large text often contain huge amounts of vocabulary which are sparse in corpus .",
"we show that characters ’ written form , glyphs , in ideographic languages could carry rich semantics .",
"we present a multi - modal model , glyph2vec , to tackle chinese out - of - vocabu... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "chinese nlp applications",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"chinese",
"nlp",
"applications"
],
"offsets": [
0,
1,
2
... | [
"chinese",
"nlp",
"applications",
"that",
"rely",
"on",
"large",
"text",
"often",
"contain",
"huge",
"amounts",
"of",
"vocabulary",
"which",
"are",
"sparse",
"in",
"corpus",
".",
"we",
"show",
"that",
"characters",
"’",
"written",
"form",
",",
"glyphs",
",",... |
ACL | A Cognitive Regularizer for Language Modeling | The uniform information density (UID) hypothesis, which posits that speakers behaving optimally tend to distribute information uniformly across a linguistic signal, has gained traction in psycholinguistics as an explanation for certain syntactic, morphological, and prosodic choices. In this work, we explore whether the... | 13d3a16f5e87da7031a562c30e93ad76 | 2,021 | [
"the uniform information density ( uid ) hypothesis , which posits that speakers behaving optimally tend to distribute information uniformly across a linguistic signal , has gained traction in psycholinguistics as an explanation for certain syntactic , morphological , and prosodic choices .",
"in this work , we e... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "uniform information density",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"uniform",
"information",
"density"
],
"offsets": [
1,
2,
... | [
"the",
"uniform",
"information",
"density",
"(",
"uid",
")",
"hypothesis",
",",
"which",
"posits",
"that",
"speakers",
"behaving",
"optimally",
"tend",
"to",
"distribute",
"information",
"uniformly",
"across",
"a",
"linguistic",
"signal",
",",
"has",
"gained",
"... |
ACL | E-LANG: Energy-Based Joint Inferencing of Super and Swift Language Models | Building huge and highly capable language models has been a trend in the past years. Despite their great performance, they incur high computational cost. A common solution is to apply model compression or choose light-weight architectures, which often need a separate fixed-size model for each desirable computational bu... | a5ea0d5d9da7adf8f49510f1b7b3dce2 | 2,022 | [
"building huge and highly capable language models has been a trend in the past years .",
"despite their great performance , they incur high computational cost .",
"a common solution is to apply model compression or choose light - weight architectures , which often need a separate fixed - size model for each des... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "huge and highly capable language models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"huge",
"and",
"highly",
"capable",
"language",
"models"
... | [
"building",
"huge",
"and",
"highly",
"capable",
"language",
"models",
"has",
"been",
"a",
"trend",
"in",
"the",
"past",
"years",
".",
"despite",
"their",
"great",
"performance",
",",
"they",
"incur",
"high",
"computational",
"cost",
".",
"a",
"common",
"solu... |
ACL | Zero-shot Event Extraction via Transfer Learning: Challenges and Insights | Event extraction has long been a challenging task, addressed mostly with supervised methods that require expensive annotation and are not extensible to new event ontologies. In this work, we explore the possibility of zero-shot event extraction by formulating it as a set of Textual Entailment (TE) and/or Question Answe... | 21940db0dfa429ed394d78ad1b6f1d92 | 2,021 | [
"event extraction has long been a challenging task , addressed mostly with supervised methods that require expensive annotation and are not extensible to new event ontologies .",
"in this work , we explore the possibility of zero - shot event extraction by formulating it as a set of textual entailment ( te ) and ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "event extraction",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"event",
"extraction"
],
"offsets": [
0,
1
]
}
],
"trigger": {
... | [
"event",
"extraction",
"has",
"long",
"been",
"a",
"challenging",
"task",
",",
"addressed",
"mostly",
"with",
"supervised",
"methods",
"that",
"require",
"expensive",
"annotation",
"and",
"are",
"not",
"extensible",
"to",
"new",
"event",
"ontologies",
".",
"in",... |
ACL | How is BERT surprised? Layerwise detection of linguistic anomalies | Transformer language models have shown remarkable ability in detecting when a word is anomalous in context, but likelihood scores offer no information about the cause of the anomaly. In this work, we use Gaussian models for density estimation at intermediate layers of three language models (BERT, RoBERTa, and XLNet), a... | 25dcb5f684a203b28344e296564a22b9 | 2,021 | [
"transformer language models have shown remarkable ability in detecting when a word is anomalous in context , but likelihood scores offer no information about the cause of the anomaly .",
"in this work , we use gaussian models for density estimation at intermediate layers of three language models ( bert , roberta... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
34
]
},
{
"text": "gaussian models",
"nugget_type": "APP",
... | [
"transformer",
"language",
"models",
"have",
"shown",
"remarkable",
"ability",
"in",
"detecting",
"when",
"a",
"word",
"is",
"anomalous",
"in",
"context",
",",
"but",
"likelihood",
"scores",
"offer",
"no",
"information",
"about",
"the",
"cause",
"of",
"the",
"... |
ACL | TAG : Type Auxiliary Guiding for Code Comment Generation | Existing leading code comment generation approaches with the structure-to-sequence framework ignores the type information of the interpretation of the code, e.g., operator, string, etc. However, introducing the type information into the existing framework is non-trivial due to the hierarchical dependence among the type... | 8707997a430d3196a1976dcdda2d7370 | 2,020 | [
"existing leading code comment generation approaches with the structure - to - sequence framework ignores the type information of the interpretation of the code , e . g . , operator , string , etc .",
"however , introducing the type information into the existing framework is non - trivial due to the hierarchical ... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "ignores",
"nugget_type": "WEA",
"argument_type": "Fault",
"tokens": [
"ignores"
],
"offsets": [
14
]
},
{
"text": "code comment generation approaches with th... | [
"existing",
"leading",
"code",
"comment",
"generation",
"approaches",
"with",
"the",
"structure",
"-",
"to",
"-",
"sequence",
"framework",
"ignores",
"the",
"type",
"information",
"of",
"the",
"interpretation",
"of",
"the",
"code",
",",
"e",
".",
"g",
".",
"... |
ACL | BERTifying the Hidden Markov Model for Multi-Source Weakly Supervised Named Entity Recognition | We study the problem of learning a named entity recognition (NER) tagger using noisy labels from multiple weak supervision sources. Though cheap to obtain, the labels from weak supervision sources are often incomplete, inaccurate, and contradictory, making it difficult to learn an accurate NER model. To address this ch... | c159bac1f4b6d1808448c61a087be103 | 2,021 | [
"we study the problem of learning a named entity recognition ( ner ) tagger using noisy labels from multiple weak supervision sources .",
"though cheap to obtain , the labels from weak supervision sources are often incomplete , inaccurate , and contradictory , making it difficult to learn an accurate ner model ."... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "noisy labels",
"nugget_type": "FEA",
... | [
"we",
"study",
"the",
"problem",
"of",
"learning",
"a",
"named",
"entity",
"recognition",
"(",
"ner",
")",
"tagger",
"using",
"noisy",
"labels",
"from",
"multiple",
"weak",
"supervision",
"sources",
".",
"though",
"cheap",
"to",
"obtain",
",",
"the",
"labels... |
ACL | Robust Neural Machine Translation with Joint Textual and Phonetic Embedding | Neural machine translation (NMT) is notoriously sensitive to noises, but noises are almost inevitable in practice. One special kind of noise is the homophone noise, where words are replaced by other words with similar pronunciations. We propose to improve the robustness of NMT to homophone noises by 1) jointly embeddin... | 881a88f75dbbc15b956d42773e68ce01 | 2,019 | [
"neural machine translation ( nmt ) is notoriously sensitive to noises , but noises are almost inevitable in practice .",
"one special kind of noise is the homophone noise , where words are replaced by other words with similar pronunciations .",
"we propose to improve the robustness of nmt to homophone noises b... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural machine translation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"neural",
"machine",
"translation"
],
"offsets": [
0,
1,
... | [
"neural",
"machine",
"translation",
"(",
"nmt",
")",
"is",
"notoriously",
"sensitive",
"to",
"noises",
",",
"but",
"noises",
"are",
"almost",
"inevitable",
"in",
"practice",
".",
"one",
"special",
"kind",
"of",
"noise",
"is",
"the",
"homophone",
"noise",
","... |
ACL | AugNLG: Few-shot Natural Language Generation using Self-trained Data Augmentation | Natural Language Generation (NLG) is a key component in a task-oriented dialogue system, which converts the structured meaning representation (MR) to the natural language. For large-scale conversational systems, where it is common to have over hundreds of intents and thousands of slots, neither template-based approache... | 31947087f1f9b14242e740e3ae9ddff8 | 2,021 | [
"natural language generation ( nlg ) is a key component in a task - oriented dialogue system , which converts the structured meaning representation ( mr ) to the natural language .",
"for large - scale conversational systems , where it is common to have over hundreds of intents and thousands of slots , neither te... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "task - oriented dialogue system",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"task",
"-",
"oriented",
"dialogue",
"system"
],
"offsets": ... | [
"natural",
"language",
"generation",
"(",
"nlg",
")",
"is",
"a",
"key",
"component",
"in",
"a",
"task",
"-",
"oriented",
"dialogue",
"system",
",",
"which",
"converts",
"the",
"structured",
"meaning",
"representation",
"(",
"mr",
")",
"to",
"the",
"natural",... |
ACL | Analyzing the Persuasive Effect of Style in News Editorial Argumentation | News editorials argue about political issues in order to challenge or reinforce the stance of readers with different ideologies. Previous research has investigated such persuasive effects for argumentative content. In contrast, this paper studies how important the style of news editorials is to achieve persuasion. To t... | 634a37322dd42e28e366bc863f0ff813 | 2,020 | [
"news editorials argue about political issues in order to challenge or reinforce the stance of readers with different ideologies .",
"previous research has investigated such persuasive effects for argumentative content .",
"in contrast , this paper studies how important the style of news editorials is to achiev... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "style of news editorials",
"nugget_type": "FEA",
"argument_type": "Content",
"tokens": [
"style",
"of",
"news",
"editorials"
],
"offsets": [
40,
... | [
"news",
"editorials",
"argue",
"about",
"political",
"issues",
"in",
"order",
"to",
"challenge",
"or",
"reinforce",
"the",
"stance",
"of",
"readers",
"with",
"different",
"ideologies",
".",
"previous",
"research",
"has",
"investigated",
"such",
"persuasive",
"effe... |
ACL | Exploring Sequence-to-Sequence Learning in Aspect Term Extraction | Aspect term extraction (ATE) aims at identifying all aspect terms in a sentence and is usually modeled as a sequence labeling problem. However, sequence labeling based methods cannot make full use of the overall meaning of the whole sentence and have the limitation in processing dependencies between labels. To tackle t... | 85b4e6d875f6ce29cc14b3438e09f738 | 2,019 | [
"aspect term extraction ( ate ) aims at identifying all aspect terms in a sentence and is usually modeled as a sequence labeling problem .",
"however , sequence labeling based methods cannot make full use of the overall meaning of the whole sentence and have the limitation in processing dependencies between label... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "aspect term extraction",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"aspect",
"term",
"extraction"
],
"offsets": [
0,
1,
2
... | [
"aspect",
"term",
"extraction",
"(",
"ate",
")",
"aims",
"at",
"identifying",
"all",
"aspect",
"terms",
"in",
"a",
"sentence",
"and",
"is",
"usually",
"modeled",
"as",
"a",
"sequence",
"labeling",
"problem",
".",
"however",
",",
"sequence",
"labeling",
"base... |
ACL | Language Models as an Alternative Evaluator of Word Order Hypotheses: A Case Study in Japanese | We examine a methodology using neural language models (LMs) for analyzing the word order of language. This LM-based method has the potential to overcome the difficulties existing methods face, such as the propagation of preprocessor errors in count-based methods. In this study, we explore whether the LM-based method is... | c1b0e52547200fd02b843268c7a26535 | 2,020 | [
"we examine a methodology using neural language models ( lms ) for analyzing the word order of language .",
"this lm - based method has the potential to overcome the difficulties existing methods face , such as the propagation of preprocessor errors in count - based methods .",
"in this study , we explore wheth... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "analyzing",
"nugget_type": "E-PUR",
... | [
"we",
"examine",
"a",
"methodology",
"using",
"neural",
"language",
"models",
"(",
"lms",
")",
"for",
"analyzing",
"the",
"word",
"order",
"of",
"language",
".",
"this",
"lm",
"-",
"based",
"method",
"has",
"the",
"potential",
"to",
"overcome",
"the",
"dif... |
ACL | Document-Level Event Role Filler Extraction using Multi-Granularity Contextualized Encoding | Few works in the literature of event extraction have gone beyond individual sentences to make extraction decisions. This is problematic when the information needed to recognize an event argument is spread across multiple sentences. We argue that document-level event extraction is a difficult task since it requires a vi... | a609716f75272f4084aa4a77476b6ed2 | 2,020 | [
"few works in the literature of event extraction have gone beyond individual sentences to make extraction decisions .",
"this is problematic when the information needed to recognize an event argument is spread across multiple sentences .",
"we argue that document - level event extraction is a difficult task sin... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "event extraction",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"event",
"extraction"
],
"offsets": [
6,
7
]
}
],
"trigger": {
... | [
"few",
"works",
"in",
"the",
"literature",
"of",
"event",
"extraction",
"have",
"gone",
"beyond",
"individual",
"sentences",
"to",
"make",
"extraction",
"decisions",
".",
"this",
"is",
"problematic",
"when",
"the",
"information",
"needed",
"to",
"recognize",
"an... |
ACL | Rewire-then-Probe: A Contrastive Recipe for Probing Biomedical Knowledge of Pre-trained Language Models | Knowledge probing is crucial for understanding the knowledge transfer mechanism behind the pre-trained language models (PLMs). Despite the growing progress of probing knowledge for PLMs in the general domain, specialised areas such as the biomedical domain are vastly under-explored. To facilitate this, we release a wel... | f3ffa8e827f5dc166c404ec0cd474c82 | 2,022 | [
"knowledge probing is crucial for understanding the knowledge transfer mechanism behind the pre - trained language models ( plms ) .",
"despite the growing progress of probing knowledge for plms in the general domain , specialised areas such as the biomedical domain are vastly under - explored .",
"to facilitat... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "knowledge transfer mechanism",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"knowledge",
"transfer",
"mechanism"
],
"offsets": [
7,
8,
... | [
"knowledge",
"probing",
"is",
"crucial",
"for",
"understanding",
"the",
"knowledge",
"transfer",
"mechanism",
"behind",
"the",
"pre",
"-",
"trained",
"language",
"models",
"(",
"plms",
")",
".",
"despite",
"the",
"growing",
"progress",
"of",
"probing",
"knowledg... |
ACL | Paraphrase Augmented Task-Oriented Dialog Generation | Neural generative models have achieved promising performance on dialog generation tasks if given a huge data set. However, the lack of high-quality dialog data and the expensive data annotation process greatly limit their application in real world settings. We propose a paraphrase augmented response generation (PARG) f... | 207731fa7a9cfba0251c79ead39d5ceb | 2,020 | [
"neural generative models have achieved promising performance on dialog generation tasks if given a huge data set .",
"however , the lack of high - quality dialog data and the expensive data annotation process greatly limit their application in real world settings .",
"we propose a paraphrase augmented response... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural generative models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"neural",
"generative",
"models"
],
"offsets": [
0,
1,
2
... | [
"neural",
"generative",
"models",
"have",
"achieved",
"promising",
"performance",
"on",
"dialog",
"generation",
"tasks",
"if",
"given",
"a",
"huge",
"data",
"set",
".",
"however",
",",
"the",
"lack",
"of",
"high",
"-",
"quality",
"dialog",
"data",
"and",
"th... |
ACL | Parallel Sentence Mining by Constrained Decoding | We present a novel method to extract parallel sentences from two monolingual corpora, using neural machine translation. Our method relies on translating sentences in one corpus, but constraining the decoding by a prefix tree built on the other corpus. We argue that a neural machine translation system by itself can be a... | 51bd87d11e16392075ed99f8f5dd4603 | 2,020 | [
"we present a novel method to extract parallel sentences from two monolingual corpora , using neural machine translation .",
"our method relies on translating sentences in one corpus , but constraining the decoding by a prefix tree built on the other corpus .",
"we argue that a neural machine translation system... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "method",
"nugget_type": "APP",
"arg... | [
"we",
"present",
"a",
"novel",
"method",
"to",
"extract",
"parallel",
"sentences",
"from",
"two",
"monolingual",
"corpora",
",",
"using",
"neural",
"machine",
"translation",
".",
"our",
"method",
"relies",
"on",
"translating",
"sentences",
"in",
"one",
"corpus",... |
ACL | A Gradually Soft Multi-Task and Data-Augmented Approach to Medical Question Understanding | Users of medical question answering systems often submit long and detailed questions, making it hard to achieve high recall in answer retrieval. To alleviate this problem, we propose a novel Multi-Task Learning (MTL) method with data augmentation for medical question understanding. We first establish an equivalence bet... | a3f4c073c31fa81acebd8cbc9ef4968e | 2,021 | [
"users of medical question answering systems often submit long and detailed questions , making it hard to achieve high recall in answer retrieval .",
"to alleviate this problem , we propose a novel multi - task learning ( mtl ) method with data augmentation for medical question understanding .",
"we first estab... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "long and detailed questions",
"nugget_type": "WEA",
"argument_type": "Fault",
"tokens": [
"long",
"and",
"detailed",
"questions"
],
"offsets": [
8,
... | [
"users",
"of",
"medical",
"question",
"answering",
"systems",
"often",
"submit",
"long",
"and",
"detailed",
"questions",
",",
"making",
"it",
"hard",
"to",
"achieve",
"high",
"recall",
"in",
"answer",
"retrieval",
".",
"to",
"alleviate",
"this",
"problem",
","... |
ACL | The Right Tool for the Job: Matching Model and Instance Complexities | As NLP models become larger, executing a trained model requires significant computational resources incurring monetary and environmental costs. To better respect a given inference budget, we propose a modification to contextual representation fine-tuning which, during inference, allows for an early (and fast) “exit” fr... | 61350e040561e1b668b54905a8982d3a | 2,020 | [
"as nlp models become larger , executing a trained model requires significant computational resources incurring monetary and environmental costs .",
"to better respect a given inference budget , we propose a modification to contextual representation fine - tuning which , during inference , allows for an early ( a... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "nlp models",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"nlp",
"models"
],
"offsets": [
1,
2
]
}
],
"trigger": {
"text"... | [
"as",
"nlp",
"models",
"become",
"larger",
",",
"executing",
"a",
"trained",
"model",
"requires",
"significant",
"computational",
"resources",
"incurring",
"monetary",
"and",
"environmental",
"costs",
".",
"to",
"better",
"respect",
"a",
"given",
"inference",
"bud... |
ACL | IAM: A Comprehensive and Large-Scale Dataset for Integrated Argument Mining Tasks | Traditionally, a debate usually requires a manual preparation process, including reading plenty of articles, selecting the claims, identifying the stances of the claims, seeking the evidence for the claims, etc. As the AI debate attracts more attention these years, it is worth exploring the methods to automate the tedi... | 6b42cc2bd211cdd14d189311db388a79 | 2,022 | [
"traditionally , a debate usually requires a manual preparation process , including reading plenty of articles , selecting the claims , identifying the stances of the claims , seeking the evidence for the claims , etc .",
"as the ai debate attracts more attention these years , it is worth exploring the methods to... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "ai debate",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"ai",
"debate"
],
"offsets": [
39,
40
]
}
],
"trigger": {
"text"... | [
"traditionally",
",",
"a",
"debate",
"usually",
"requires",
"a",
"manual",
"preparation",
"process",
",",
"including",
"reading",
"plenty",
"of",
"articles",
",",
"selecting",
"the",
"claims",
",",
"identifying",
"the",
"stances",
"of",
"the",
"claims",
",",
"... |
ACL | KdConv: A Chinese Multi-domain Dialogue Dataset Towards Multi-turn Knowledge-driven Conversation | The research of knowledge-driven conversational systems is largely limited due to the lack of dialog data which consists of multi-turn conversations on multiple topics and with knowledge annotations. In this paper, we propose a Chinese multi-domain knowledge-driven conversation dataset, KdConv, which grounds the topics... | bfffb3629df98a7f37100cdc2af504f5 | 2,020 | [
"the research of knowledge - driven conversational systems is largely limited due to the lack of dialog data which consists of multi - turn conversations on multiple topics and with knowledge annotations .",
"in this paper , we propose a chinese multi - domain knowledge - driven conversation dataset , kdconv , wh... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
37
]
},
{
"text": "chinese multi - domain knowledge - driven conversa... | [
"the",
"research",
"of",
"knowledge",
"-",
"driven",
"conversational",
"systems",
"is",
"largely",
"limited",
"due",
"to",
"the",
"lack",
"of",
"dialog",
"data",
"which",
"consists",
"of",
"multi",
"-",
"turn",
"conversations",
"on",
"multiple",
"topics",
"and... |
ACL | Information-Theoretic Probing for Linguistic Structure | The success of neural networks on a diverse set of NLP tasks has led researchers to question how much these networks actually “know” about natural language. Probes are a natural way of assessing this. When probing, a researcher chooses a linguistic task and trains a supervised model to predict annotations in that lingu... | 83436baf1aad9cd8ed2a4cde339f6c92 | 2,020 | [
"the success of neural networks on a diverse set of nlp tasks has led researchers to question how much these networks actually “ know ” about natural language .",
"probes are a natural way of assessing this .",
"when probing , a researcher chooses a linguistic task and trains a supervised model to predict annot... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "probes",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"probes"
],
"offsets": [
29
]
}
],
"trigger": {
"text": "way",
"tokens": [
... | [
"the",
"success",
"of",
"neural",
"networks",
"on",
"a",
"diverse",
"set",
"of",
"nlp",
"tasks",
"has",
"led",
"researchers",
"to",
"question",
"how",
"much",
"these",
"networks",
"actually",
"“",
"know",
"”",
"about",
"natural",
"language",
".",
"probes",
... |
ACL | ILDAE: Instance-Level Difficulty Analysis of Evaluation Data | Knowledge of difficulty level of questions helps a teacher in several ways, such as estimating students’ potential quickly by asking carefully selected questions and improving quality of examination by modifying trivial and hard questions. Can we extract such benefits of instance difficulty in Natural Language Processi... | 30e8cbfa6e23d0120e4a3fce8aeccf8b | 2,022 | [
"knowledge of difficulty level of questions helps a teacher in several ways , such as estimating students ’ potential quickly by asking carefully selected questions and improving quality of examination by modifying trivial and hard questions .",
"can we extract such benefits of instance difficulty in natural lang... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "knowledge of difficulty level of questions",
"nugget_type": "FEA",
"argument_type": "Target",
"tokens": [
"knowledge",
"of",
"difficulty",
"level",
"of",
"questi... | [
"knowledge",
"of",
"difficulty",
"level",
"of",
"questions",
"helps",
"a",
"teacher",
"in",
"several",
"ways",
",",
"such",
"as",
"estimating",
"students",
"’",
"potential",
"quickly",
"by",
"asking",
"carefully",
"selected",
"questions",
"and",
"improving",
"qu... |
ACL | Storytelling with Dialogue: A Critical Role Dungeons and Dragons Dataset | This paper describes the Critical Role Dungeons and Dragons Dataset (CRD3) and related analyses. Critical Role is an unscripted, live-streamed show where a fixed group of people play Dungeons and Dragons, an open-ended role-playing game. The dataset is collected from 159 Critical Role episodes transcribed to text dialo... | 162ca18f86896ff34365767494e68606 | 2,020 | [
"this paper describes the critical role dungeons and dragons dataset ( crd3 ) and related analyses .",
"critical role is an unscripted , live - streamed show where a fixed group of people play dungeons and dragons , an open - ended role - playing game .",
"the dataset is collected from 159 critical role episode... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "critical role dungeons and dragons dataset",
"nugget_type": "DST",
"argument_type": "Content",
"tokens": [
"critical",
"role",
"dungeons",
"and",
"dragons",
"dat... | [
"this",
"paper",
"describes",
"the",
"critical",
"role",
"dungeons",
"and",
"dragons",
"dataset",
"(",
"crd3",
")",
"and",
"related",
"analyses",
".",
"critical",
"role",
"is",
"an",
"unscripted",
",",
"live",
"-",
"streamed",
"show",
"where",
"a",
"fixed",
... |
ACL | Universal Conditional Masked Language Pre-training for Neural Machine Translation | Pre-trained sequence-to-sequence models have significantly improved Neural Machine Translation (NMT). Different from prior works where pre-trained models usually adopt an unidirectional decoder, this paper demonstrates that pre-training a sequence-to-sequence model but with a bidirectional decoder can produce notable p... | dbbd02e140468bb24ce6d7ea478cf08c | 2,022 | [
"pre - trained sequence - to - sequence models have significantly improved neural machine translation ( nmt ) .",
"different from prior works where pre - trained models usually adopt an unidirectional decoder , this paper demonstrates that pre - training a sequence - to - sequence model but with a bidirectional d... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural machine translation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"neural",
"machine",
"translation"
],
"offsets": [
12,
13,
... | [
"pre",
"-",
"trained",
"sequence",
"-",
"to",
"-",
"sequence",
"models",
"have",
"significantly",
"improved",
"neural",
"machine",
"translation",
"(",
"nmt",
")",
".",
"different",
"from",
"prior",
"works",
"where",
"pre",
"-",
"trained",
"models",
"usually",
... |
ACL | What You Say and How You Say It Matters: Predicting Stock Volatility Using Verbal and Vocal Cues | Predicting financial risk is an essential task in financial market. Prior research has shown that textual information in a firm’s financial statement can be used to predict its stock’s risk level. Nowadays, firm CEOs communicate information not only verbally through press releases and financial reports, but also nonver... | 0ee770ad8347f2bfc30a0b698ea1ad2d | 2,019 | [
"predicting financial risk is an essential task in financial market .",
"prior research has shown that textual information in a firm ’ s financial statement can be used to predict its stock ’ s risk level .",
"nowadays , firm ceos communicate information not only verbally through press releases and financial re... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "predicting financial risk",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"predicting",
"financial",
"risk"
],
"offsets": [
0,
1,
... | [
"predicting",
"financial",
"risk",
"is",
"an",
"essential",
"task",
"in",
"financial",
"market",
".",
"prior",
"research",
"has",
"shown",
"that",
"textual",
"information",
"in",
"a",
"firm",
"’",
"s",
"financial",
"statement",
"can",
"be",
"used",
"to",
"pr... |
ACL | Word Order Does Matter and Shuffled Language Models Know It | Recent studies have shown that language models pretrained and/or fine-tuned on randomly permuted sentences exhibit competitive performance on GLUE, putting into question the importance of word order information. Somewhat counter-intuitively, some of these studies also report that position embeddings appear to be crucia... | a850af13e5dfeaee563cca4ed206f08d | 2,022 | [
"recent studies have shown that language models pretrained and / or fine - tuned on randomly permuted sentences exhibit competitive performance on glue , putting into question the importance of word order information .",
"somewhat counter - intuitively , some of these studies also report that position embeddings ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "word order information",
"nugget_type": "FEA",
"argument_type": "Target",
"tokens": [
"word",
"order",
"information"
],
"offsets": [
30,
31,
32
... | [
"recent",
"studies",
"have",
"shown",
"that",
"language",
"models",
"pretrained",
"and",
"/",
"or",
"fine",
"-",
"tuned",
"on",
"randomly",
"permuted",
"sentences",
"exhibit",
"competitive",
"performance",
"on",
"glue",
",",
"putting",
"into",
"question",
"the",... |
ACL | HEAD-QA: A Healthcare Dataset for Complex Reasoning | We present HEAD-QA, a multi-choice question answering testbed to encourage research on complex reasoning. The questions come from exams to access a specialized position in the Spanish healthcare system, and are challenging even for highly specialized humans. We then consider monolingual (Spanish) and cross-lingual (to ... | 361a7f6932bed7c6ecc0fa053981cc52 | 2,019 | [
"we present head - qa , a multi - choice question answering testbed to encourage research on complex reasoning .",
"the questions come from exams to access a specialized position in the spanish healthcare system , and are challenging even for highly specialized humans .",
"we then consider monolingual ( spanish... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "head - qa",
"nugget_type": "APP",
"... | [
"we",
"present",
"head",
"-",
"qa",
",",
"a",
"multi",
"-",
"choice",
"question",
"answering",
"testbed",
"to",
"encourage",
"research",
"on",
"complex",
"reasoning",
".",
"the",
"questions",
"come",
"from",
"exams",
"to",
"access",
"a",
"specialized",
"posi... |
ACL | ILDC for CJPE: Indian Legal Documents Corpus for Court Judgment Prediction and Explanation | An automated system that could assist a judge in predicting the outcome of a case would help expedite the judicial process. For such a system to be practically useful, predictions by the system should be explainable. To promote research in developing such a system, we introduce ILDC (Indian Legal Documents Corpus). ILD... | 75c76df36102522b5fab4f1c56fcae2d | 2,021 | [
"an automated system that could assist a judge in predicting the outcome of a case would help expedite the judicial process .",
"for such a system to be practically useful , predictions by the system should be explainable .",
"to promote research in developing such a system , we introduce ildc ( indian legal do... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
48
]
},
{
"text": "ildc",
"nugget_type": "DST",
"argu... | [
"an",
"automated",
"system",
"that",
"could",
"assist",
"a",
"judge",
"in",
"predicting",
"the",
"outcome",
"of",
"a",
"case",
"would",
"help",
"expedite",
"the",
"judicial",
"process",
".",
"for",
"such",
"a",
"system",
"to",
"be",
"practically",
"useful",
... |
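Across the rows above one schema is evident: each `document` is a whitespace-tokenized list, and every event argument (and trigger) carries a `tokens` list plus `offsets` that are token indices into that document. A minimal sketch of recovering an argument's surface text from those offsets, using field names exactly as they appear in the rows; the `argument_text` helper is ours for illustration, not part of any official loader:

```python
# Sketch: reconstruct an event argument's surface text from its token
# offsets. Field names ("offsets", "text", "tokens") follow the rows
# above; argument_text() is a hypothetical helper, not an official API.

def argument_text(document_tokens, argument):
    """Join the document tokens indexed by the argument's offsets."""
    return " ".join(document_tokens[i] for i in argument["offsets"])

# Abridged from the "Exploring Sequence-to-Sequence Learning in
# Aspect Term Extraction" row: the ITT event's Target argument.
document = ["aspect", "term", "extraction", "(", "ate", ")", "aims",
            "at", "identifying", "all", "aspect", "terms"]
target = {
    "text": "aspect term extraction",
    "nugget_type": "TAK",
    "argument_type": "Target",
    "tokens": ["aspect", "term", "extraction"],
    "offsets": [0, 1, 2],
}

assert argument_text(document, target) == target["text"]
```

Because the offsets index the full `document` rather than individual entries of `sentences`, spans can be recovered without tracking sentence boundaries; the same join applies to any annotation in a row that carries an `offsets` field.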