| venue | title | abstract | doc_id | publication_year | sentences | events | document |
|---|---|---|---|---|---|---|---|
ACL | GhostBERT: Generate More Features with Cheap Operations for BERT | Transformer-based pre-trained language models like BERT, though powerful in many tasks, are expensive in both memory and computation, due to their large number of parameters. Previous works show that some parameters in these models can be pruned away without severe accuracy drop. However, these redundant features contr... | cb253ee5e5dbc9fe5a64b979d6388cf5 | 2021 | [
"transformer - based pre - trained language models like bert , though powerful in many tasks , are expensive in both memory and computation , due to their large number of parameters .",
"previous works show that some parameters in these models can be pruned away without severe accuracy drop .",
"however , these... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "expensive",
"nugget_type": "WEA",
"argument_type": "Fault",
"tokens": [
"expensive"
],
"offsets": [
18
]
},
{
"text": "transformer - based pre - trained lang... | [
"transformer",
"-",
"based",
"pre",
"-",
"trained",
"language",
"models",
"like",
"bert",
",",
"though",
"powerful",
"in",
"many",
"tasks",
",",
"are",
"expensive",
"in",
"both",
"memory",
"and",
"computation",
",",
"due",
"to",
"their",
"large",
"number",
... |
ACL | Graph Enhanced Contrastive Learning for Radiology Findings Summarization | The impression section of a radiology report summarizes the most prominent observation from the findings section and is the most important section for radiologists to communicate to physicians. Summarizing findings is time-consuming and can be prone to error for inexperienced radiologists, and thus automatic impression... | 25b97889e3c55cfef714c491f3191f34 | 2022 | [
"the impression section of a radiology report summarizes the most prominent observation from the findings section and is the most important section for radiologists to communicate to physicians .",
"summarizing findings is time - consuming and can be prone to error for inexperienced radiologists , and thus automa... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "findings",
"nugget_type": "FEA",
"argument_type": "Concern",
"tokens": [
"findings"
],
"offsets": [
30
]
},
{
"text": "time - consuming",
"nugget_typ... | [
"the",
"impression",
"section",
"of",
"a",
"radiology",
"report",
"summarizes",
"the",
"most",
"prominent",
"observation",
"from",
"the",
"findings",
"section",
"and",
"is",
"the",
"most",
"important",
"section",
"for",
"radiologists",
"to",
"communicate",
"to",
... |
ACL | Input Representations for Parsing Discourse Representation Structures: Comparing English with Chinese | Neural semantic parsers have obtained acceptable results in the context of parsing DRSs (Discourse Representation Structures). In particular models with character sequences as input showed remarkable performance for English. But how does this approach perform on languages with a different writing system, like Chinese, ... | fb73f00b1be1a32b83dee5e803729ec8 | 2021 | [
"neural semantic parsers have obtained acceptable results in the context of parsing drss ( discourse representation structures ) .",
"in particular models with character sequences as input showed remarkable performance for english .",
"but how does this approach perform on languages with a different writing sys... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "drss",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"drss"
],
"offsets": [
12
]
}
],
"trigger": {
"text": "parsing",
"tokens": [
... | [
"neural",
"semantic",
"parsers",
"have",
"obtained",
"acceptable",
"results",
"in",
"the",
"context",
"of",
"parsing",
"drss",
"(",
"discourse",
"representation",
"structures",
")",
".",
"in",
"particular",
"models",
"with",
"character",
"sequences",
"as",
"input"... |
ACL | OIE@OIA: an Adaptable and Efficient Open Information Extraction Framework | Different Open Information Extraction (OIE) tasks require different types of information, so the OIE field requires strong adaptability of OIE algorithms to meet different task requirements. This paper discusses the adaptability problem in existing OIE systems and designs a new adaptable and efficient OIE system - OIE@... | c5235d80daa3864da11c01ecd7c2abef | 2022 | [
"different open information extraction ( oie ) tasks require different types of information , so the oie field requires strong adaptability of oie algorithms to meet different task requirements .",
"this paper discusses the adaptability problem in existing oie systems and designs a new adaptable and efficient oie... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "open information extraction",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"open",
"information",
"extraction"
],
"offsets": [
1,
2,
... | [
"different",
"open",
"information",
"extraction",
"(",
"oie",
")",
"tasks",
"require",
"different",
"types",
"of",
"information",
",",
"so",
"the",
"oie",
"field",
"requires",
"strong",
"adaptability",
"of",
"oie",
"algorithms",
"to",
"meet",
"different",
"task"... |
ACL | Fatality Killed the Cat or: BabelPic, a Multimodal Dataset for Non-Concrete Concepts | Thanks to the wealth of high-quality annotated images available in popular repositories such as ImageNet, multimodal language-vision research is in full bloom. However, events, feelings and many other kinds of concepts which can be visually grounded are not well represented in current datasets. Nevertheless, we would e... | f195ed294f0c987151758f7d414ce9c0 | 2020 | [
"thanks to the wealth of high - quality annotated images available in popular repositories such as imagenet , multimodal language - vision research is in full bloom .",
"however , events , feelings and many other kinds of concepts which can be visually grounded are not well represented in current datasets .",
"... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "multimodal language - vision research",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"multimodal",
"language",
"-",
"vision",
"research"
],
... | [
"thanks",
"to",
"the",
"wealth",
"of",
"high",
"-",
"quality",
"annotated",
"images",
"available",
"in",
"popular",
"repositories",
"such",
"as",
"imagenet",
",",
"multimodal",
"language",
"-",
"vision",
"research",
"is",
"in",
"full",
"bloom",
".",
"however",... |
ACL | Learn to Adapt for Generalized Zero-Shot Text Classification | Generalized zero-shot text classification aims to classify textual instances from both previously seen classes and incrementally emerging unseen classes. Most existing methods generalize poorly since the learned parameters are only optimal for seen classes rather than for both classes, and the parameters keep stationar... | 23ce3e48a52b952e0284940f2c15d395 | 2022 | [
"generalized zero - shot text classification aims to classify textual instances from both previously seen classes and incrementally emerging unseen classes .",
"most existing methods generalize poorly since the learned parameters are only optimal for seen classes rather than for both classes , and the parameters ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "generalized zero - shot text classification",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"generalized",
"zero",
"-",
"shot",
"text",
"classific... | [
"generalized",
"zero",
"-",
"shot",
"text",
"classification",
"aims",
"to",
"classify",
"textual",
"instances",
"from",
"both",
"previously",
"seen",
"classes",
"and",
"incrementally",
"emerging",
"unseen",
"classes",
".",
"most",
"existing",
"methods",
"generalize"... |
ACL | Language-Agnostic Meta-Learning for Low-Resource Text-to-Speech with Articulatory Features | While neural text-to-speech systems perform remarkably well in high-resource scenarios, they cannot be applied to the majority of the over 6,000 spoken languages in the world due to a lack of appropriate training data. In this work, we use embeddings derived from articulatory vectors rather than embeddings derived from... | 6c00c2dcf05c3c050ab3d2cdb1d1fd03 | 2022 | [
"while neural text - to - speech systems perform remarkably well in high - resource scenarios , they cannot be applied to the majority of the over 6 , 000 spoken languages in the world due to a lack of appropriate training data .",
"in this work , we use embeddings derived from articulatory vectors rather than em... | [
{
"event_type": "MDS",
"arguments": [
{
"text": "embeddings",
"nugget_type": "MOD",
"argument_type": "TriedComponent",
"tokens": [
"embeddings"
],
"offsets": [
50
]
},
{
"text": "articulatory vectors",
... | [
"while",
"neural",
"text",
"-",
"to",
"-",
"speech",
"systems",
"perform",
"remarkably",
"well",
"in",
"high",
"-",
"resource",
"scenarios",
",",
"they",
"cannot",
"be",
"applied",
"to",
"the",
"majority",
"of",
"the",
"over",
"6",
",",
"000",
"spoken",
... |
ACL | Training Hybrid Language Models by Marginalizing over Segmentations | In this paper, we study the problem of hybrid language modeling, that is using models which can predict both characters and larger units such as character ngrams or words. Using such models, multiple potential segmentations usually exist for a given string, for example one using words and one using characters only. Thu... | 3913afe38867b8081ccfe4c3e7edeaf2 | 2019 | [
"in this paper , we study the problem of hybrid language modeling , that is using models which can predict both characters and larger units such as character ngrams or words .",
"using such models , multiple potential segmentations usually exist for a given string , for example one using words and one using chara... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
4
]
},
{
"text": "problem of hybrid language modeling",
"nu... | [
"in",
"this",
"paper",
",",
"we",
"study",
"the",
"problem",
"of",
"hybrid",
"language",
"modeling",
",",
"that",
"is",
"using",
"models",
"which",
"can",
"predict",
"both",
"characters",
"and",
"larger",
"units",
"such",
"as",
"character",
"ngrams",
"or",
... |
ACL | Modeling Persuasive Discourse to Adaptively Support Students’ Argumentative Writing | We introduce an argumentation annotation approach to model the structure of argumentative discourse in student-written business model pitches. Additionally, the annotation scheme captures a series of persuasiveness scores such as the specificity, strength, evidence, and relevance of the pitch and the individual compone... | 2edb917287b730c934164ae21c62e72c | 2022 | [
"we introduce an argumentation annotation approach to model the structure of argumentative discourse in student - written business model pitches .",
"additionally , the annotation scheme captures a series of persuasiveness scores such as the specificity , strength , evidence , and relevance of the pitch and the i... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "argumentation annotation approach",
"nugg... | [
"we",
"introduce",
"an",
"argumentation",
"annotation",
"approach",
"to",
"model",
"the",
"structure",
"of",
"argumentative",
"discourse",
"in",
"student",
"-",
"written",
"business",
"model",
"pitches",
".",
"additionally",
",",
"the",
"annotation",
"scheme",
"ca... |
ACL | Enhancing Unsupervised Generative Dependency Parser with Contextual Information | Most of the unsupervised dependency parsers are based on probabilistic generative models that learn the joint distribution of the given sentence and its parse. Probabilistic generative models usually explicit decompose the desired dependency tree into factorized grammar rules, which lack the global features of the enti... | da17b78f95c6df2195c3203c47dc53a6 | 2019 | [
"most of the unsupervised dependency parsers are based on probabilistic generative models that learn the joint distribution of the given sentence and its parse .",
"probabilistic generative models usually explicit decompose the desired dependency tree into factorized grammar rules , which lack the global features... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "unsupervised dependency parsers",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"unsupervised",
"dependency",
"parsers"
],
"offsets": [
3,
4... | [
"most",
"of",
"the",
"unsupervised",
"dependency",
"parsers",
"are",
"based",
"on",
"probabilistic",
"generative",
"models",
"that",
"learn",
"the",
"joint",
"distribution",
"of",
"the",
"given",
"sentence",
"and",
"its",
"parse",
".",
"probabilistic",
"generative... |
ACL | Neural Temporality Adaptation for Document Classification: Diachronic Word Embeddings and Domain Adaptation Models | Language usage can change across periods of time, but document classifiers models are usually trained and tested on corpora spanning multiple years without considering temporal variations. This paper describes two complementary ways to adapt classifiers to shifts across time. First, we show that diachronic word embeddi... | f13496be47184886efd5c0fabc4fcd9d | 2019 | [
"language usage can change across periods of time , but document classifiers models are usually trained and tested on corpora spanning multiple years without considering temporal variations .",
"this paper describes two complementary ways to adapt classifiers to shifts across time .",
"first , we show that diac... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "document classifiers models",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"document",
"classifiers",
"models"
],
"offsets": [
10,
11,
... | [
"language",
"usage",
"can",
"change",
"across",
"periods",
"of",
"time",
",",
"but",
"document",
"classifiers",
"models",
"are",
"usually",
"trained",
"and",
"tested",
"on",
"corpora",
"spanning",
"multiple",
"years",
"without",
"considering",
"temporal",
"variati... |
ACL | Bilingual Lexicon Induction through Unsupervised Machine Translation | A recent research line has obtained strong results on bilingual lexicon induction by aligning independently trained word embeddings in two languages and using the resulting cross-lingual embeddings to induce word translation pairs through nearest neighbor or related retrieval methods. In this paper, we propose an alter... | 8a5ebecfa05f3d56736667cfc4dffdee | 2019 | [
"a recent research line has obtained strong results on bilingual lexicon induction by aligning independently trained word embeddings in two languages and using the resulting cross - lingual embeddings to induce word translation pairs through nearest neighbor or related retrieval methods .",
"in this paper , we pr... | [
{
"event_type": "RWS",
"arguments": [
{
"text": "bilingual lexicon induction",
"nugget_type": "APP",
"argument_type": "Subject",
"tokens": [
"bilingual",
"lexicon",
"induction"
],
"offsets": [
9,
10,
... | [
"a",
"recent",
"research",
"line",
"has",
"obtained",
"strong",
"results",
"on",
"bilingual",
"lexicon",
"induction",
"by",
"aligning",
"independently",
"trained",
"word",
"embeddings",
"in",
"two",
"languages",
"and",
"using",
"the",
"resulting",
"cross",
"-",
... |
ACL | Roles and Utilization of Attention Heads in Transformer-based Neural Language Models | Sentence encoders based on the transformer architecture have shown promising results on various natural language tasks. The main impetus lies in the pre-trained neural language models that capture long-range dependencies among words, owing to multi-head attention that is unique in the architecture. However, little is k... | b8e1be4ca63124dcef0115317fb8e5b4 | 2020 | [
"sentence encoders based on the transformer architecture have shown promising results on various natural language tasks .",
"the main impetus lies in the pre - trained neural language models that capture long - range dependencies among words , owing to multi - head attention that is unique in the architecture .",... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "natural language tasks",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"natural",
"language",
"tasks"
],
"offsets": [
13,
14,
15
... | [
"sentence",
"encoders",
"based",
"on",
"the",
"transformer",
"architecture",
"have",
"shown",
"promising",
"results",
"on",
"various",
"natural",
"language",
"tasks",
".",
"the",
"main",
"impetus",
"lies",
"in",
"the",
"pre",
"-",
"trained",
"neural",
"language"... |
ACL | A Unified Linear-Time Framework for Sentence-Level Discourse Parsing | We propose an efficient neural framework for sentence-level discourse analysis in accordance with Rhetorical Structure Theory (RST). Our framework comprises a discourse segmenter to identify the elementary discourse units (EDU) in a text, and a discourse parser that constructs a discourse tree in a top-down fashion. Bo... | ba25b4938996d3252057b05f8ab00f07 | 2019 | [
"we propose an efficient neural framework for sentence - level discourse analysis in accordance with rhetorical structure theory ( rst ) .",
"our framework comprises a discourse segmenter to identify the elementary discourse units ( edu ) in a text , and a discourse parser that constructs a discourse tree in a to... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "neural framework",
"nugget_type": "APP",
... | [
"we",
"propose",
"an",
"efficient",
"neural",
"framework",
"for",
"sentence",
"-",
"level",
"discourse",
"analysis",
"in",
"accordance",
"with",
"rhetorical",
"structure",
"theory",
"(",
"rst",
")",
".",
"our",
"framework",
"comprises",
"a",
"discourse",
"segmen... |
ACL | MIE: A Medical Information Extractor towards Medical Dialogues | Electronic Medical Records (EMRs) have become key components of modern medical care systems. Despite the merits of EMRs, many doctors suffer from writing them, which is time-consuming and tedious. We believe that automatically converting medical dialogues to EMRs can greatly reduce the burdens of doctors, and extractin... | b8b6f1bc762cd7762e7d33870b2cdb6b | 2020 | [
"electronic medical records ( emrs ) have become key components of modern medical care systems .",
"despite the merits of emrs , many doctors suffer from writing them , which is time - consuming and tedious .",
"we believe that automatically converting medical dialogues to emrs can greatly reduce the burdens of... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "electronic medical records",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"electronic",
"medical",
"records"
],
"offsets": [
0,
1,
... | [
"electronic",
"medical",
"records",
"(",
"emrs",
")",
"have",
"become",
"key",
"components",
"of",
"modern",
"medical",
"care",
"systems",
".",
"despite",
"the",
"merits",
"of",
"emrs",
",",
"many",
"doctors",
"suffer",
"from",
"writing",
"them",
",",
"which... |
ACL | Turning Tables: Generating Examples from Semi-structured Tables for Endowing Language Models with Reasoning Skills | Models pre-trained with a language modeling objective possess ample world knowledge and language skills, but are known to struggle in tasks that require reasoning. In this work, we propose to leverage semi-structured tables, and automatically generate at scale question-paragraph pairs, where answering the question requ... | 45303b3478817205c860556aa962c089 | 2022 | [
"models pre - trained with a language modeling objective possess ample world knowledge and language skills , but are known to struggle in tasks that require reasoning .",
"in this work , we propose to leverage semi - structured tables , and automatically generate at scale question - paragraph pairs , where answer... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "struggle",
"nugget_type": "WEA",
"argument_type": "Fault",
"tokens": [
"struggle"
],
"offsets": [
21
]
},
{
"text": "models pre - trained with a language mod... | [
"models",
"pre",
"-",
"trained",
"with",
"a",
"language",
"modeling",
"objective",
"possess",
"ample",
"world",
"knowledge",
"and",
"language",
"skills",
",",
"but",
"are",
"known",
"to",
"struggle",
"in",
"tasks",
"that",
"require",
"reasoning",
".",
"in",
... |
ACL | Semi-Supervised Formality Style Transfer with Consistency Training | Formality style transfer (FST) is a task that involves paraphrasing an informal sentence into a formal one without altering its meaning. To address the data-scarcity problem of existing parallel datasets, previous studies tend to adopt a cycle-reconstruction scheme to utilize additional unlabeled data, where the FST mo... | b3bb50f2b1b1218cf864982d960e8d9a | 2022 | [
"formality style transfer ( fst ) is a task that involves paraphrasing an informal sentence into a formal one without altering its meaning .",
"to address the data - scarcity problem of existing parallel datasets , previous studies tend to adopt a cycle - reconstruction scheme to utilize additional unlabeled data... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "formality style transfer",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"formality",
"style",
"transfer"
],
"offsets": [
0,
1,
2
... | [
"formality",
"style",
"transfer",
"(",
"fst",
")",
"is",
"a",
"task",
"that",
"involves",
"paraphrasing",
"an",
"informal",
"sentence",
"into",
"a",
"formal",
"one",
"without",
"altering",
"its",
"meaning",
".",
"to",
"address",
"the",
"data",
"-",
"scarcity... |
ACL | Not always about you: Prioritizing community needs when developing endangered language technology | Languages are classified as low-resource when they lack the quantity of data necessary for training statistical and machine learning tools and models. Causes of resource scarcity vary but can include poor access to technology for developing these resources, a relatively small population of speakers, or a lack of urgenc... | 016c39f66d1ce55309493839ec544a66 | 2022 | [
"languages are classified as low - resource when they lack the quantity of data necessary for training statistical and machine learning tools and models .",
"causes of resource scarcity vary but can include poor access to technology for developing these resources , a relatively small population of speakers , or a... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "low - resource",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"low",
"-",
"resource"
],
"offsets": [
4,
5,
6
]
}
... | [
"languages",
"are",
"classified",
"as",
"low",
"-",
"resource",
"when",
"they",
"lack",
"the",
"quantity",
"of",
"data",
"necessary",
"for",
"training",
"statistical",
"and",
"machine",
"learning",
"tools",
"and",
"models",
".",
"causes",
"of",
"resource",
"sc... |
ACL | INFOTABS: Inference on Tables as Semi-structured Data | In this paper, we observe that semi-structured tabulated text is ubiquitous; understanding them requires not only comprehending the meaning of text fragments, but also implicit relationships between them. We argue that such data can prove as a testing ground for understanding how we reason about information. To study t... | 1fa9d1e6d52ebf1e623443ea06087b46 | 2020 | [
"in this paper , we observe that semi - structured tabulated text is ubiquitous ; understanding them requires not only comprehending the meaning of text fragments , but also implicit relationships between them .",
"we argue that such data can prove as a testing ground for understanding how we reason about informa... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "semi - structured tabulated text",
"nugget_type": "FEA",
"argument_type": "Target",
"tokens": [
"semi",
"-",
"structured",
"tabulated",
"text"
],
"offsets"... | [
"in",
"this",
"paper",
",",
"we",
"observe",
"that",
"semi",
"-",
"structured",
"tabulated",
"text",
"is",
"ubiquitous",
";",
"understanding",
"them",
"requires",
"not",
"only",
"comprehending",
"the",
"meaning",
"of",
"text",
"fragments",
",",
"but",
"also",
... |
ACL | Do Transformer Models Show Similar Attention Patterns to Task-Specific Human Gaze? | Learned self-attention functions in state-of-the-art NLP models often correlate with human attention. We investigate whether self-attention in large-scale pre-trained language models is as predictive of human eye fixation patterns during task-reading as classical cognitive models of human attention. We compare attentio... | 927ecf06496cb71aa2e50bbbd8afc07d | 2022 | [
"learned self - attention functions in state - of - the - art nlp models often correlate with human attention .",
"we investigate whether self - attention in large - scale pre - trained language models is as predictive of human eye fixation patterns during task - reading as classical cognitive models of human att... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "learned self - attention functions",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"learned",
"self",
"-",
"attention",
"functions"
],
"offs... | [
"learned",
"self",
"-",
"attention",
"functions",
"in",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"nlp",
"models",
"often",
"correlate",
"with",
"human",
"attention",
".",
"we",
"investigate",
"whether",
"self",
"-",
"attention",
"in",
"large",
"-",
"sca... |
ACL | Predicate-Argument Based Bi-Encoder for Paraphrase Identification | Paraphrase identification involves identifying whether a pair of sentences express the same or similar meanings. While cross-encoders have achieved high performances across several benchmarks, bi-encoders such as SBERT have been widely applied to sentence pair tasks. They exhibit substantially lower computation complex... | ef1be1dbada0132041a14d9a4676f796 | 2022 | [
"paraphrase identification involves identifying whether a pair of sentences express the same or similar meanings .",
"while cross - encoders have achieved high performances across several benchmarks , bi - encoders such as sbert have been widely applied to sentence pair tasks .",
"they exhibit substantially low... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "paraphrase identification",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"paraphrase",
"identification"
],
"offsets": [
0,
1
]
}
],... | [
"paraphrase",
"identification",
"involves",
"identifying",
"whether",
"a",
"pair",
"of",
"sentences",
"express",
"the",
"same",
"or",
"similar",
"meanings",
".",
"while",
"cross",
"-",
"encoders",
"have",
"achieved",
"high",
"performances",
"across",
"several",
"b... |
ACL | Self-Guided Contrastive Learning for BERT Sentence Representations | Although BERT and its variants have reshaped the NLP landscape, it still remains unclear how best to derive sentence embeddings from such pre-trained Transformers. In this work, we propose a contrastive learning method that utilizes self-guidance for improving the quality of BERT sentence representations. Our method fi... | 506cb96546ae8c73181053d54c3deae0 | 2021 | [
"although bert and its variants have reshaped the nlp landscape , it still remains unclear how best to derive sentence embeddings from such pre - trained transformers .",
"in this work , we propose a contrastive learning method that utilizes self - guidance for improving the quality of bert sentence representatio... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
32
]
},
{
"text": "contrastive learning method",
"nugget_type... | [
"although",
"bert",
"and",
"its",
"variants",
"have",
"reshaped",
"the",
"nlp",
"landscape",
",",
"it",
"still",
"remains",
"unclear",
"how",
"best",
"to",
"derive",
"sentence",
"embeddings",
"from",
"such",
"pre",
"-",
"trained",
"transformers",
".",
"in",
... |
ACL | DEEP: DEnoising Entity Pre-training for Neural Machine Translation | It has been shown that machine translation models usually generate poor translations for named entities that are infrequent in the training corpus. Earlier named entity translation methods mainly focus on phonetic transliteration, which ignores the sentence context for translation and is limited in domain and language ... | b3e95d7eebb73032cd3676f7753d9387 | 2022 | [
"it has been shown that machine translation models usually generate poor translations for named entities that are infrequent in the training corpus .",
"earlier named entity translation methods mainly focus on phonetic transliteration , which ignores the sentence context for translation and is limited in domain a... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "machine translation models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"machine",
"translation",
"models"
],
"offsets": [
5,
6,
... | [
"it",
"has",
"been",
"shown",
"that",
"machine",
"translation",
"models",
"usually",
"generate",
"poor",
"translations",
"for",
"named",
"entities",
"that",
"are",
"infrequent",
"in",
"the",
"training",
"corpus",
".",
"earlier",
"named",
"entity",
"translation",
... |
ACL | Template-Based Question Generation from Retrieved Sentences for Improved Unsupervised Question Answering | Question Answering (QA) is in increasing demand as the amount of information available online and the desire for quick access to this content grows. A common approach to QA has been to fine-tune a pretrained language model on a task-specific labeled dataset. This paradigm, however, relies on scarce, and costly to obtai... | d0585471934fc73c877271507dc156ea | 2020 | [
"question answering ( qa ) is in increasing demand as the amount of information available online and the desire for quick access to this content grows .",
"a common approach to qa has been to fine - tune a pretrained language model on a task - specific labeled dataset .",
"this paradigm , however , relies on sc... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "question answering",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"question",
"answering"
],
"offsets": [
0,
1
]
},
{
"te... | [
"question",
"answering",
"(",
"qa",
")",
"is",
"in",
"increasing",
"demand",
"as",
"the",
"amount",
"of",
"information",
"available",
"online",
"and",
"the",
"desire",
"for",
"quick",
"access",
"to",
"this",
"content",
"grows",
".",
"a",
"common",
"approach"... |
ACL | Right for the Right Reason: Evidence Extraction for Trustworthy Tabular Reasoning | When pre-trained contextualized embedding-based models developed for unstructured data are adapted for structured tabular data, they perform admirably. However, recent probing studies show that these models use spurious correlations, and often predict inference labels by focusing on false evidence or ignoring it altoge... | cc0c5630ede09c873b5dd1cad5025ada | 2022 | [
"when pre - trained contextualized embedding - based models developed for unstructured data are adapted for structured tabular data , they perform admirably .",
"however , recent probing studies show that these models use spurious correlations , and often predict inference labels by focusing on false evidence or ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "structured tabular data",
"nugget_type": "DST",
"argument_type": "Target",
"tokens": [
"structured",
"tabular",
"data"
],
"offsets": [
16,
17,
18... | [
"when",
"pre",
"-",
"trained",
"contextualized",
"embedding",
"-",
"based",
"models",
"developed",
"for",
"unstructured",
"data",
"are",
"adapted",
"for",
"structured",
"tabular",
"data",
",",
"they",
"perform",
"admirably",
".",
"however",
",",
"recent",
"probi... |
ACL | Reliability-aware Dynamic Feature Composition for Name Tagging | Word embeddings are widely used on a variety of tasks and can substantially improve the performance. However, their quality is not consistent throughout the vocabulary due to the long-tail distribution of word frequency. Without sufficient contexts, rare word embeddings are usually less reliable than those of common wo... | 2345b289b3abef22b795fd8922ded355 | 2019 | [
"word embeddings are widely used on a variety of tasks and can substantially improve the performance .",
"however , their quality is not consistent throughout the vocabulary due to the long - tail distribution of word frequency .",
"without sufficient contexts , rare word embeddings are usually less reliable th... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "word embeddings",
"nugget_type": "MOD",
"argument_type": "Target",
"tokens": [
"word",
"embeddings"
],
"offsets": [
0,
1
]
}
],
"trigger": {
... | [
"word",
"embeddings",
"are",
"widely",
"used",
"on",
"a",
"variety",
"of",
"tasks",
"and",
"can",
"substantially",
"improve",
"the",
"performance",
".",
"however",
",",
"their",
"quality",
"is",
"not",
"consistent",
"throughout",
"the",
"vocabulary",
"due",
"t... |
ACL | A Hierarchical VAE for Calibrating Attributes while Generating Text using Normalizing Flow | In this digital age, online users expect personalized content. To cater to diverse group of audiences across online platforms it is necessary to generate multiple variants of same content with differing degree of characteristics (sentiment, style, formality, etc.). Though text-style transfer is a well explored related ... | 4009512e0d9687bac55d3eac4232581a | 2,021 | [
"in this digital age , online users expect personalized content .",
"to cater to diverse group of audiences across online platforms it is necessary to generate multiple variants of same content with differing degree of characteristics ( sentiment , style , formality , etc . ) .",
"though text - style transfer i... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "text - style transfer",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"text",
"-",
"style",
"transfer"
],
"offsets": [
48,
49,
... | [
"in",
"this",
"digital",
"age",
",",
"online",
"users",
"expect",
"personalized",
"content",
".",
"to",
"cater",
"to",
"diverse",
"group",
"of",
"audiences",
"across",
"online",
"platforms",
"it",
"is",
"necessary",
"to",
"generate",
"multiple",
"variants",
"o... |
ACL | Storyboarding of Recipes: Grounded Contextual Generation | Information need of humans is essentially multimodal in nature, enabling maximum exploitation of situated context. We introduce a dataset for sequential procedural (how-to) text generation from images in cooking domain. The dataset consists of 16,441 cooking recipes with 160,479 photos associated with different steps. ... | 2f7fd5feffe795d9c50ecf84f02692fe | 2,019 | [
"information need of humans is essentially multimodal in nature , enabling maximum exploitation of situated context .",
"we introduce a dataset for sequential procedural ( how - to ) text generation from images in cooking domain .",
"the dataset consists of 16 , 441 cooking recipes with 160 , 479 photos associa... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
17
]
},
{
"text": "dataset",
"nugget_type": "DST",
"a... | [
"information",
"need",
"of",
"humans",
"is",
"essentially",
"multimodal",
"in",
"nature",
",",
"enabling",
"maximum",
"exploitation",
"of",
"situated",
"context",
".",
"we",
"introduce",
"a",
"dataset",
"for",
"sequential",
"procedural",
"(",
"how",
"-",
"to",
... |
ACL | Language Modelling Makes Sense: Propagating Representations through WordNet for Full-Coverage Word Sense Disambiguation | Contextual embeddings represent a new generation of semantic representations learned from Neural Language Modelling (NLM) that addresses the issue of meaning conflation hampering traditional word embeddings. In this work, we show that contextual embeddings can be used to achieve unprecedented gains in Word Sense Disamb... | 147a02d5019d6366b197159da9d24958 | 2,019 | [
"contextual embeddings represent a new generation of semantic representations learned from neural language modelling ( nlm ) that addresses the issue of meaning conflation hampering traditional word embeddings .",
"in this work , we show that contextual embeddings can be used to achieve unprecedented gains in wor... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "contextual embeddings",
"nugget_type": "MOD",
"argument_type": "Target",
"tokens": [
"contextual",
"embeddings"
],
"offsets": [
0,
1
]
}
],
"tr... | [
"contextual",
"embeddings",
"represent",
"a",
"new",
"generation",
"of",
"semantic",
"representations",
"learned",
"from",
"neural",
"language",
"modelling",
"(",
"nlm",
")",
"that",
"addresses",
"the",
"issue",
"of",
"meaning",
"conflation",
"hampering",
"tradition... |
ACL | Low-Rank Softmax Can Have Unargmaxable Classes in Theory but Rarely in Practice | Classifiers in natural language processing (NLP) often have a large number of output classes. For example, neural language models (LMs) and machine translation (MT) models both predict tokens from a vocabulary of thousands. The Softmax output layer of these models typically receives as input a dense feature representat... | 8a858041a5b590efac9f7047a8e08ad0 | 2,022 | [
"classifiers in natural language processing ( nlp ) often have a large number of output classes .",
"for example , neural language models ( lms ) and machine translation ( mt ) models both predict tokens from a vocabulary of thousands .",
"the softmax output layer of these models typically receives as input a d... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "natural language processing",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"natural",
"language",
"processing"
],
"offsets": [
2,
3,
... | [
"classifiers",
"in",
"natural",
"language",
"processing",
"(",
"nlp",
")",
"often",
"have",
"a",
"large",
"number",
"of",
"output",
"classes",
".",
"for",
"example",
",",
"neural",
"language",
"models",
"(",
"lms",
")",
"and",
"machine",
"translation",
"(",
... |
ACL | Self-Supervised Dialogue Learning | The sequential order of utterances is often meaningful in coherent dialogues, and the order changes of utterances could lead to low-quality and incoherent conversations. We consider the order information as a crucial supervised signal for dialogue learning, which, however, has been neglected by many previous dialogue s... | 6f5a04e02e4af247df52c7dcea97e00a | 2,019 | [
"the sequential order of utterances is often meaningful in coherent dialogues , and the order changes of utterances could lead to low - quality and incoherent conversations .",
"we consider the order information as a crucial supervised signal for dialogue learning , which , however , has been neglected by many pr... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
28
]
},
{
"text": "order information as a crucial supervised signal... | [
"the",
"sequential",
"order",
"of",
"utterances",
"is",
"often",
"meaningful",
"in",
"coherent",
"dialogues",
",",
"and",
"the",
"order",
"changes",
"of",
"utterances",
"could",
"lead",
"to",
"low",
"-",
"quality",
"and",
"incoherent",
"conversations",
".",
"w... |
ACL | On the Robustness of Question Rewriting Systems to Questions of Varying Hardness | In conversational question answering (CQA), the task of question rewriting (QR) in context aims to rewrite a context-dependent question into an equivalent self-contained question that gives the same answer. In this paper, we are interested in the robustness of a QR system to questions varying in rewriting hardness or d... | 63d3bc1895a5208fe1a6e5a9408850d8 | 2,022 | [
"in conversational question answering ( cqa ) , the task of question rewriting ( qr ) in context aims to rewrite a context - dependent question into an equivalent self - contained question that gives the same answer .",
"in this paper , we are interested in the robustness of a qr system to questions varying in re... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "question rewriting",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"question",
"rewriting"
],
"offsets": [
11,
12
]
}
],
"trigge... | [
"in",
"conversational",
"question",
"answering",
"(",
"cqa",
")",
",",
"the",
"task",
"of",
"question",
"rewriting",
"(",
"qr",
")",
"in",
"context",
"aims",
"to",
"rewrite",
"a",
"context",
"-",
"dependent",
"question",
"into",
"an",
"equivalent",
"self",
... |
ACL | Sentence-Level Agreement for Neural Machine Translation | The training objective of neural machine translation (NMT) is to minimize the loss between the words in the translated sentences and those in the references. In NMT, there is a natural correspondence between the source sentence and the target sentence. However, this relationship has only been represented using the enti... | 46b1a4a84eb9e069e79f98ee22dc06d8 | 2,019 | [
"the training objective of neural machine translation ( nmt ) is to minimize the loss between the words in the translated sentences and those in the references .",
"in nmt , there is a natural correspondence between the source sentence and the target sentence .",
"however , this relationship has only been repre... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural machine translation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"neural",
"machine",
"translation"
],
"offsets": [
4,
5,
... | [
"the",
"training",
"objective",
"of",
"neural",
"machine",
"translation",
"(",
"nmt",
")",
"is",
"to",
"minimize",
"the",
"loss",
"between",
"the",
"words",
"in",
"the",
"translated",
"sentences",
"and",
"those",
"in",
"the",
"references",
".",
"in",
"nmt",
... |
ACL | ERNIE: Enhanced Language Representation with Informative Entities | Neural language representation models such as BERT pre-trained on large-scale corpora can well capture rich semantic patterns from plain text, and be fine-tuned to consistently improve the performance of various NLP tasks. However, the existing pre-trained language models rarely consider incorporating knowledge graphs ... | ddc4ba3945bd183fe75988f27479fc83 | 2,019 | [
"neural language representation models such as bert pre - trained on large - scale corpora can well capture rich semantic patterns from plain text , and be fine - tuned to consistently improve the performance of various nlp tasks .",
"however , the existing pre - trained language models rarely consider incorporat... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "existing pre - trained language models",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"existing",
"pre",
"-",
"trained",
"language",
"models"
... | [
"neural",
"language",
"representation",
"models",
"such",
"as",
"bert",
"pre",
"-",
"trained",
"on",
"large",
"-",
"scale",
"corpora",
"can",
"well",
"capture",
"rich",
"semantic",
"patterns",
"from",
"plain",
"text",
",",
"and",
"be",
"fine",
"-",
"tuned",
... |
ACL | Highway Transformer: Self-Gating Enhanced Self-Attentive Networks | Self-attention mechanisms have made striking state-of-the-art (SOTA) progress in various sequence learning tasks, standing on the multi-headed dot product attention by attending to all the global contexts at different locations. Through a pseudo information highway, we introduce a gated component self-dependency units ... | 53400bce78436051243516eeeb758d74 | 2,020 | [
"self - attention mechanisms have made striking state - of - the - art ( sota ) progress in various sequence learning tasks , standing on the multi - headed dot product attention by attending to all the global contexts at different locations .",
"through a pseudo information highway , we introduce a gated compone... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "self - attention mechanisms",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"self",
"-",
"attention",
"mechanisms"
],
"offsets": [
0,
... | [
"self",
"-",
"attention",
"mechanisms",
"have",
"made",
"striking",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"(",
"sota",
")",
"progress",
"in",
"various",
"sequence",
"learning",
"tasks",
",",
"standing",
"on",
"the",
"multi",
"-",
"headed",
"dot",
"... |
ACL | Do you have the right scissors? Tailoring Pre-trained Language Models via Monte-Carlo Methods | It has been a common approach to pre-train a language model on a large corpus and fine-tune it on task-specific data. In practice, we observe that fine-tuning a pre-trained model on a small dataset may lead to over- and/or under-estimate problem. In this paper, we propose MC-Tailor, a novel method to alleviate the abov... | 613a1cf444107d0f6d2ca5bffae92522 | 2,020 | [
"it has been a common approach to pre - train a language model on a large corpus and fine - tune it on task - specific data .",
"in practice , we observe that fine - tuning a pre - trained model on a small dataset may lead to over - and / or under - estimate problem .",
"in this paper , we propose mc - tailor ,... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "language model",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"language",
"model"
],
"offsets": [
11,
12
]
}
],
"trigger": {
... | [
"it",
"has",
"been",
"a",
"common",
"approach",
"to",
"pre",
"-",
"train",
"a",
"language",
"model",
"on",
"a",
"large",
"corpus",
"and",
"fine",
"-",
"tune",
"it",
"on",
"task",
"-",
"specific",
"data",
".",
"in",
"practice",
",",
"we",
"observe",
"... |
ACL | CQG: A Simple and Effective Controlled Generation Framework for Multi-hop Question Generation | Multi-hop question generation focuses on generating complex questions that require reasoning over multiple pieces of information of the input passage. Current models with state-of-the-art performance have been able to generate the correct questions corresponding to the answers. However, most models can not ensure the c... | a15ce6838e7a66e5fcd5cf67c1a35403 | 2,022 | [
"multi - hop question generation focuses on generating complex questions that require reasoning over multiple pieces of information of the input passage .",
"current models with state - of - the - art performance have been able to generate the correct questions corresponding to the answers .",
"however , most m... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "multi - hop question generation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"multi",
"-",
"hop",
"question",
"generation"
],
"offsets": ... | [
"multi",
"-",
"hop",
"question",
"generation",
"focuses",
"on",
"generating",
"complex",
"questions",
"that",
"require",
"reasoning",
"over",
"multiple",
"pieces",
"of",
"information",
"of",
"the",
"input",
"passage",
".",
"current",
"models",
"with",
"state",
"... |
ACL | PeTra: A Sparsely Supervised Memory Model for People Tracking | We propose PeTra, a memory-augmented neural network designed to track entities in its memory slots. PeTra is trained using sparse annotation from the GAP pronoun resolution dataset and outperforms a prior memory model on the task while using a simpler architecture. We empirically compare key modeling choices, finding t... | e4311154de4f7e6310698fa809176348 | 2,020 | [
"we propose petra , a memory - augmented neural network designed to track entities in its memory slots .",
"petra is trained using sparse annotation from the gap pronoun resolution dataset and outperforms a prior memory model on the task while using a simpler architecture .",
"we empirically compare key modelin... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "track",
"nugget_type": "E-PUR",
"ar... | [
"we",
"propose",
"petra",
",",
"a",
"memory",
"-",
"augmented",
"neural",
"network",
"designed",
"to",
"track",
"entities",
"in",
"its",
"memory",
"slots",
".",
"petra",
"is",
"trained",
"using",
"sparse",
"annotation",
"from",
"the",
"gap",
"pronoun",
"reso... |
ACL | Cross-Lingual Abstractive Summarization with Limited Parallel Resources | Parallel cross-lingual summarization data is scarce, requiring models to better use the limited available cross-lingual resources. Existing methods to do so often adopt sequence-to-sequence networks with multi-task frameworks. Such approaches apply multiple decoders, each of which is utilized for a specific task. Howev... | 6eba9158ca20b7adae24299b661da58b | 2,021 | [
"parallel cross - lingual summarization data is scarce , requiring models to better use the limited available cross - lingual resources .",
"existing methods to do so often adopt sequence - to - sequence networks with multi - task frameworks .",
"such approaches apply multiple decoders , each of which is utiliz... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "parallel cross - lingual summarization data",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"parallel",
"cross",
"-",
"lingual",
"summarization",
... | [
"parallel",
"cross",
"-",
"lingual",
"summarization",
"data",
"is",
"scarce",
",",
"requiring",
"models",
"to",
"better",
"use",
"the",
"limited",
"available",
"cross",
"-",
"lingual",
"resources",
".",
"existing",
"methods",
"to",
"do",
"so",
"often",
"adopt"... |
ACL | Multi-Level Matching and Aggregation Network for Few-Shot Relation Classification | This paper presents a multi-level matching and aggregation network (MLMAN) for few-shot relation classification. Previous studies on this topic adopt prototypical networks, which calculate the embedding vector of a query instance and the prototype vector of the support set for each relation candidate independently. On ... | 8374d45f17196d18ff3c1cada8d34f9d | 2,019 | [
"this paper presents a multi - level matching and aggregation network ( mlman ) for few - shot relation classification .",
"previous studies on this topic adopt prototypical networks , which calculate the embedding vector of a query instance and the prototype vector of the support set for each relation candidate ... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "multi - level matching and aggregation network",
"nugget_type": "APP",
"argument_type": "Content",
"tokens": [
"multi",
"-",
"level",
"matching",
"and",
"aggrega... | [
"this",
"paper",
"presents",
"a",
"multi",
"-",
"level",
"matching",
"and",
"aggregation",
"network",
"(",
"mlman",
")",
"for",
"few",
"-",
"shot",
"relation",
"classification",
".",
"previous",
"studies",
"on",
"this",
"topic",
"adopt",
"prototypical",
"netwo... |
ACL | Retrieval-guided Counterfactual Generation for QA | Deep NLP models have been shown to be brittle to input perturbations. Recent work has shown that data augmentation using counterfactuals — i.e. minimally perturbed inputs — can help ameliorate this weakness. We focus on the task of creating counterfactuals for question answering, which presents unique challenges relate... | 60ba2b4bceefb466882672eb24e0291c | 2,022 | [
"deep nlp models have been shown to be brittle to input perturbations .",
"recent work has shown that data augmentation using counterfactuals — i . e . minimally perturbed inputs — can help ameliorate this weakness .",
"we focus on the task of creating counterfactuals for question answering , which presents uni... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "deep nlp models",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"deep",
"nlp",
"models"
],
"offsets": [
0,
1,
2
]
}... | [
"deep",
"nlp",
"models",
"have",
"been",
"shown",
"to",
"be",
"brittle",
"to",
"input",
"perturbations",
".",
"recent",
"work",
"has",
"shown",
"that",
"data",
"augmentation",
"using",
"counterfactuals",
"—",
"i",
".",
"e",
".",
"minimally",
"perturbed",
"in... |
ACL | Rejuvenating Low-Frequency Words: Making the Most of Parallel Data in Non-Autoregressive Translation | Knowledge distillation (KD) is commonly used to construct synthetic data for training non-autoregressive translation (NAT) models. However, there exists a discrepancy on low-frequency words between the distilled and the original data, leading to more errors on predicting low-frequency words. To alleviate the problem, w... | 94542471d561f78d37ad94e4ee16cc3f | 2,021 | [
"knowledge distillation ( kd ) is commonly used to construct synthetic data for training non - autoregressive translation ( nat ) models .",
"however , there exists a discrepancy on low - frequency words between the distilled and the original data , leading to more errors on predicting low - frequency words .",
... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "knowledge distillation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"knowledge",
"distillation"
],
"offsets": [
0,
1
]
}
],
"... | [
"knowledge",
"distillation",
"(",
"kd",
")",
"is",
"commonly",
"used",
"to",
"construct",
"synthetic",
"data",
"for",
"training",
"non",
"-",
"autoregressive",
"translation",
"(",
"nat",
")",
"models",
".",
"however",
",",
"there",
"exists",
"a",
"discrepancy"... |
ACL | Disentangled Sequence to Sequence Learning for Compositional Generalization | There is mounting evidence that existing neural network models, in particular the very popular sequence-to-sequence architecture, struggle to systematically generalize to unseen compositions of seen components. We demonstrate that one of the reasons hindering compositional generalization relates to representations bein... | 6497acaf14569a7d1019d52672c035de | 2,022 | [
"there is mounting evidence that existing neural network models , in particular the very popular sequence - to - sequence architecture , struggle to systematically generalize to unseen compositions of seen components .",
"we demonstrate that one of the reasons hindering compositional generalization relates to rep... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "existing neural network models",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"existing",
"neural",
"network",
"models"
],
"offsets": [
5,... | [
"there",
"is",
"mounting",
"evidence",
"that",
"existing",
"neural",
"network",
"models",
",",
"in",
"particular",
"the",
"very",
"popular",
"sequence",
"-",
"to",
"-",
"sequence",
"architecture",
",",
"struggle",
"to",
"systematically",
"generalize",
"to",
"uns... |
ACL | E2E-VLP: End-to-End Vision-Language Pre-training Enhanced by Visual Learning | Vision-language pre-training (VLP) on large-scale image-text pairs has achieved huge success for the cross-modal downstream tasks. The most existing pre-training methods mainly adopt a two-step training procedure, which firstly employs a pre-trained object detector to extract region-based visual features, then concaten... | dce874ba19f5a024238f3472a84ff278 | 2,021 | [
"vision - language pre - training ( vlp ) on large - scale image - text pairs has achieved huge success for the cross - modal downstream tasks .",
"the most existing pre - training methods mainly adopt a two - step training procedure , which firstly employs a pre - trained object detector to extract region - base... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "vision - language pre - training",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"vision",
"-",
"language",
"pre",
"-",
"training"
],
... | [
"vision",
"-",
"language",
"pre",
"-",
"training",
"(",
"vlp",
")",
"on",
"large",
"-",
"scale",
"image",
"-",
"text",
"pairs",
"has",
"achieved",
"huge",
"success",
"for",
"the",
"cross",
"-",
"modal",
"downstream",
"tasks",
".",
"the",
"most",
"existin... |
ACL | Tree LSTMs with Convolution Units to Predict Stance and Rumor Veracity in Social Media Conversations | Learning from social-media conversations has gained significant attention recently because of its applications in areas like rumor detection. In this research, we propose a new way to represent social-media conversations as binarized constituency trees that allows comparing features in source-posts and their replies ef... | 1835deb0174ea363444c212b3ee9444a | 2,019 | [
"learning from social - media conversations has gained significant attention recently because of its applications in areas like rumor detection .",
"in this research , we propose a new way to represent social - media conversations as binarized constituency trees that allows comparing features in source - posts an... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "social - media conversations",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"social",
"-",
"media",
"conversations"
],
"offsets": [
2,
... | [
"learning",
"from",
"social",
"-",
"media",
"conversations",
"has",
"gained",
"significant",
"attention",
"recently",
"because",
"of",
"its",
"applications",
"in",
"areas",
"like",
"rumor",
"detection",
".",
"in",
"this",
"research",
",",
"we",
"propose",
"a",
... |
ACL | HOLM: Hallucinating Objects with Language Models for Referring Expression Recognition in Partially-Observed Scenes | AI systems embodied in the physical world face a fundamental challenge of partial observability; operating with only a limited view and knowledge of the environment. This creates challenges when AI systems try to reason about language and its relationship with the environment: objects referred to through language (e.g.... | c3c689334a0bb693e6b778c7d759f8bb | 2,022 | [
"ai systems embodied in the physical world face a fundamental challenge of partial observability ; operating with only a limited view and knowledge of the environment .",
"this creates challenges when ai systems try to reason about language and its relationship with the environment : objects referred to through l... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "partial observability",
"nugget_type": "WEA",
"argument_type": "Fault",
"tokens": [
"partial",
"observability"
],
"offsets": [
12,
13
]
},
{
... | [
"ai",
"systems",
"embodied",
"in",
"the",
"physical",
"world",
"face",
"a",
"fundamental",
"challenge",
"of",
"partial",
"observability",
";",
"operating",
"with",
"only",
"a",
"limited",
"view",
"and",
"knowledge",
"of",
"the",
"environment",
".",
"this",
"cr... |
ACL | Improving Segmentation for Technical Support Problems | Technical support problems are often long and complex. They typically contain user descriptions of the problem, the setup, and steps for attempted resolution. Often they also contain various non-natural language text elements like outputs of commands, snippets of code, error messages or stack traces. These elements con... | b458811aa7c533044b93aabdf5f3eb97 | 2,020 | [
"technical support problems are often long and complex .",
"they typically contain user descriptions of the problem , the setup , and steps for attempted resolution .",
"often they also contain various non - natural language text elements like outputs of commands , snippets of code , error messages or stack tra... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "technical support problems",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"technical",
"support",
"problems"
],
"offsets": [
0,
1,
... | [
"technical",
"support",
"problems",
"are",
"often",
"long",
"and",
"complex",
".",
"they",
"typically",
"contain",
"user",
"descriptions",
"of",
"the",
"problem",
",",
"the",
"setup",
",",
"and",
"steps",
"for",
"attempted",
"resolution",
".",
"often",
"they",... |
ACL | Learning Confidence for Transformer-based Neural Machine Translation | Confidence estimation aims to quantify the confidence of the model prediction, providing an expectation of success. A well-calibrated confidence estimate enables accurate failure prediction and proper risk measurement when given noisy samples and out-of-distribution data in real-world settings. However, this task remai... | 6b93c6f090f843a8344b7eec8249a8b0 | 2,022 | [
"confidence estimation aims to quantify the confidence of the model prediction , providing an expectation of success .",
"a well - calibrated confidence estimate enables accurate failure prediction and proper risk measurement when given noisy samples and out - of - distribution data in real - world settings .",
... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "confidence estimation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"confidence",
"estimation"
],
"offsets": [
0,
1
]
}
],
"tr... | [
"confidence",
"estimation",
"aims",
"to",
"quantify",
"the",
"confidence",
"of",
"the",
"model",
"prediction",
",",
"providing",
"an",
"expectation",
"of",
"success",
".",
"a",
"well",
"-",
"calibrated",
"confidence",
"estimate",
"enables",
"accurate",
"failure",
... |
ACL | Weakly Supervised Named Entity Tagging with Learnable Logical Rules | We study the problem of building entity tagging systems by using a few rules as weak supervision. Previous methods mostly focus on disambiguating entity types based on contexts and expert-provided rules, while assuming entity spans are given. In this work, we propose a novel method TALLOR that bootstraps high-quality l... | cb96695c917c4a8b3d277a18fb88e567 | 2,021 | [
"we study the problem of building entity tagging systems by using a few rules as weak supervision .",
"previous methods mostly focus on disambiguating entity types based on contexts and expert - provided rules , while assuming entity spans are given .",
"in this work , we propose a novel method tallor that boot... | [
{
"event_type": "MDS",
"arguments": [
{
"text": "weak supervision",
"nugget_type": "FEA",
"argument_type": "BaseComponent",
"tokens": [
"weak",
"supervision"
],
"offsets": [
15,
16
]
},
{
... | [
"we",
"study",
"the",
"problem",
"of",
"building",
"entity",
"tagging",
"systems",
"by",
"using",
"a",
"few",
"rules",
"as",
"weak",
"supervision",
".",
"previous",
"methods",
"mostly",
"focus",
"on",
"disambiguating",
"entity",
"types",
"based",
"on",
"contex... |
ACL | XLPT-AMR: Cross-Lingual Pre-Training via Multi-Task Learning for Zero-Shot AMR Parsing and Text Generation | Due to the scarcity of annotated data, Abstract Meaning Representation (AMR) research is relatively limited and challenging for languages other than English. Upon the availability of English AMR dataset and English-to- X parallel datasets, in this paper we propose a novel cross-lingual pre-training approach via multi-t... | f558ec023a1f21f417f5fd19eda3462f | 2,021 | [
"due to the scarcity of annotated data , abstract meaning representation ( amr ) research is relatively limited and challenging for languages other than english .",
"upon the availability of english amr dataset and english - to - x parallel datasets , in this paper we propose a novel cross - lingual pre - trainin... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
45
]
},
{
"text": "cross - lingual pre - training approach",
... | [
"due",
"to",
"the",
"scarcity",
"of",
"annotated",
"data",
",",
"abstract",
"meaning",
"representation",
"(",
"amr",
")",
"research",
"is",
"relatively",
"limited",
"and",
"challenging",
"for",
"languages",
"other",
"than",
"english",
".",
"upon",
"the",
"avai... |
ACL | How does the pre-training objective affect what large language models learn about linguistic properties? | Several pre-training objectives, such as masked language modeling (MLM), have been proposed to pre-train language models (e.g. BERT) with the aim of learning better language representations. However, to the best of our knowledge, no previous work so far has investigated how different pre-training objectives affect what... | 480e4e7bb31f435bbbc2a046ca2d70e2 | 2,022 | [
"several pre - training objectives , such as masked language modeling ( mlm ) , have been proposed to pre - train language models ( e . g . bert ) with the aim of learning better language representations .",
"however , to the best of our knowledge , no previous work so far has investigated how different pre - tra... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "pre - training objectives",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"pre",
"-",
"training",
"objectives"
],
"offsets": [
1,
... | [
"several",
"pre",
"-",
"training",
"objectives",
",",
"such",
"as",
"masked",
"language",
"modeling",
"(",
"mlm",
")",
",",
"have",
"been",
"proposed",
"to",
"pre",
"-",
"train",
"language",
"models",
"(",
"e",
".",
"g",
".",
"bert",
")",
"with",
"the"... |
ACL | Relation-Aware Collaborative Learning for Unified Aspect-Based Sentiment Analysis | Aspect-based sentiment analysis (ABSA) involves three subtasks, i.e., aspect term extraction, opinion term extraction, and aspect-level sentiment classification. Most existing studies focused on one of these subtasks only. Several recent researches made successful attempts to solve the complete ABSA problem with a unif... | af65fefb4019c353f07b9372c823612e | 2,020 | [
"aspect - based sentiment analysis ( absa ) involves three subtasks , i . e . , aspect term extraction , opinion term extraction , and aspect - level sentiment classification .",
"most existing studies focused on one of these subtasks only .",
"several recent researches made successful attempts to solve the com... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "aspect - based sentiment analysis",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"aspect",
"-",
"based",
"sentiment",
"analysis"
],
"offset... | [
"aspect",
"-",
"based",
"sentiment",
"analysis",
"(",
"absa",
")",
"involves",
"three",
"subtasks",
",",
"i",
".",
"e",
".",
",",
"aspect",
"term",
"extraction",
",",
"opinion",
"term",
"extraction",
",",
"and",
"aspect",
"-",
"level",
"sentiment",
"classi... |
ACL | Self-Supervised Learning for Contextualized Extractive Summarization | Existing models for extractive summarization are usually trained from scratch with a cross-entropy loss, which does not explicitly capture the global context at the document level. In this paper, we aim to improve this task by introducing three auxiliary pre-training tasks that learn to capture the document-level conte... | 974f66712b393ade8d52f0cdf817a4b6 | 2,019 | [
"existing models for extractive summarization are usually trained from scratch with a cross - entropy loss , which does not explicitly capture the global context at the document level .",
"in this paper , we aim to improve this task by introducing three auxiliary pre - training tasks that learn to capture the doc... | [
{
"event_type": "RWS",
"arguments": [
{
"text": "existing models for extractive summarization",
"nugget_type": "APP",
"argument_type": "Subject",
"tokens": [
"existing",
"models",
"for",
"extractive",
"summarization"
... | [
"existing",
"models",
"for",
"extractive",
"summarization",
"are",
"usually",
"trained",
"from",
"scratch",
"with",
"a",
"cross",
"-",
"entropy",
"loss",
",",
"which",
"does",
"not",
"explicitly",
"capture",
"the",
"global",
"context",
"at",
"the",
"document",
... |
ACL | Detecting Annotation Errors in Morphological Data with the Transformer | Annotation errors that stem from various sources are usually unavoidable when performing large-scale annotation of linguistic data. In this paper, we evaluate the feasibility of using the Transformer model to detect various types of annotator errors in morphological data sets that contain inflected word forms. We evalu... | 334ae5a0f051d333e6bba5275b87f46d | 2,022 | [
"annotation errors that stem from various sources are usually unavoidable when performing large - scale annotation of linguistic data .",
"in this paper , we evaluate the feasibility of using the transformer model to detect various types of annotator errors in morphological data sets that contain inflected word f... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "annotation errors",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"annotation",
"errors"
],
"offsets": [
0,
1
]
}
],
"trigger": ... | [
"annotation",
"errors",
"that",
"stem",
"from",
"various",
"sources",
"are",
"usually",
"unavoidable",
"when",
"performing",
"large",
"-",
"scale",
"annotation",
"of",
"linguistic",
"data",
".",
"in",
"this",
"paper",
",",
"we",
"evaluate",
"the",
"feasibility",... |
ACL | A Neural Transition-based Joint Model for Disease Named Entity Recognition and Normalization | Disease is one of the fundamental entities in biomedical research. Recognizing such entities from biomedical text and then normalizing them to a standardized disease vocabulary offer a tremendous opportunity for many downstream applications. Previous studies have demonstrated that joint modeling of the two sub-tasks ha... | cfb6543cf6f8f659176ff6951d363390 | 2,021 | [
"disease is one of the fundamental entities in biomedical research .",
"recognizing such entities from biomedical text and then normalizing them to a standardized disease vocabulary offer a tremendous opportunity for many downstream applications .",
"previous studies have demonstrated that joint modeling of the... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "entity normalization",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"entity",
"normalization"
],
"offsets": [
124,
125
]
}
],
"... | [
"disease",
"is",
"one",
"of",
"the",
"fundamental",
"entities",
"in",
"biomedical",
"research",
".",
"recognizing",
"such",
"entities",
"from",
"biomedical",
"text",
"and",
"then",
"normalizing",
"them",
"to",
"a",
"standardized",
"disease",
"vocabulary",
"offer",... |
ACL | Improving Disentangled Text Representation Learning with Information-Theoretic Guidance | Learning disentangled representations of natural language is essential for many NLP tasks, e.g., conditional text generation, style transfer, personalized dialogue systems, etc. Similar problems have been studied extensively for other forms of data, such as images and videos. However, the discrete nature of natural lan... | bbf3413dcab02c72aad2d00ed75a6b1e | 2,020 | [
"learning disentangled representations of natural language is essential for many nlp tasks , e . g . , conditional text generation , style transfer , personalized dialogue systems , etc .",
"similar problems have been studied extensively for other forms of data , such as images and videos .",
"however , the dis... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "nlp tasks",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"nlp",
"tasks"
],
"offsets": [
10,
11
]
}
],
"trigger": {
"text"... | [
"learning",
"disentangled",
"representations",
"of",
"natural",
"language",
"is",
"essential",
"for",
"many",
"nlp",
"tasks",
",",
"e",
".",
"g",
".",
",",
"conditional",
"text",
"generation",
",",
"style",
"transfer",
",",
"personalized",
"dialogue",
"systems",... |
ACL | Cost-sensitive Regularization for Label Confusion-aware Event Detection | In supervised event detection, most of the mislabeling occurs between a small number of confusing type pairs, including trigger-NIL pairs and sibling sub-types of the same coarse type. To address this label confusion problem, this paper proposes cost-sensitive regularization, which can force the training procedure to c... | 3f1413ba923cdb069f675f57add645ad | 2,019 | [
"in supervised event detection , most of the mislabeling occurs between a small number of confusing type pairs , including trigger - nil pairs and sibling sub - types of the same coarse type .",
"to address this label confusion problem , this paper proposes cost - sensitive regularization , which can force the tr... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "most of the mislabeling",
"nugget_type": "FEA",
"argument_type": "Concern",
"tokens": [
"most",
"of",
"the",
"mislabeling"
],
"offsets": [
5,
6,
... | [
"in",
"supervised",
"event",
"detection",
",",
"most",
"of",
"the",
"mislabeling",
"occurs",
"between",
"a",
"small",
"number",
"of",
"confusing",
"type",
"pairs",
",",
"including",
"trigger",
"-",
"nil",
"pairs",
"and",
"sibling",
"sub",
"-",
"types",
"of",... |
ACL | Sentence-level Privacy for Document Embeddings | User language data can contain highly sensitive personal content. As such, it is imperative to offer users a strong and interpretable privacy guarantee when learning from their data. In this work we propose SentDP, pure local differential privacy at the sentence level for a single user document. We propose a novel tech... | 5a54ccde24f7269dd10b4f296a234fd6 | 2,022 | [
"user language data can contain highly sensitive personal content .",
"as such , it is imperative to offer users a strong and interpretable privacy guarantee when learning from their data .",
"in this work we propose sentdp , pure local differential privacy at the sentence level for a single user document .",
... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "user language data",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"user",
"language",
"data"
],
"offsets": [
0,
1,
2
]
... | [
"user",
"language",
"data",
"can",
"contain",
"highly",
"sensitive",
"personal",
"content",
".",
"as",
"such",
",",
"it",
"is",
"imperative",
"to",
"offer",
"users",
"a",
"strong",
"and",
"interpretable",
"privacy",
"guarantee",
"when",
"learning",
"from",
"th... |
ACL | Emergence of Syntax Needs Minimal Supervision | This paper is a theoretical contribution to the debate on the learnability of syntax from a corpus without explicit syntax-specific guidance. Our approach originates in the observable structure of a corpus, which we use to define and isolate grammaticality (syntactic information) and meaning/pragmatics information. We ... | 7ae990671905a0bd73bb589f205b7bd7 | 2,020 | [
"this paper is a theoretical contribution to the debate on the learnability of syntax from a corpus without explicit syntax - specific guidance .",
"our approach originates in the observable structure of a corpus , which we use to define and isolate grammaticality ( syntactic information ) and meaning / pragmatic... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "learnability of syntax",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"learnability",
"of",
"syntax"
],
"offsets": [
11,
12,
13
... | [
"this",
"paper",
"is",
"a",
"theoretical",
"contribution",
"to",
"the",
"debate",
"on",
"the",
"learnability",
"of",
"syntax",
"from",
"a",
"corpus",
"without",
"explicit",
"syntax",
"-",
"specific",
"guidance",
".",
"our",
"approach",
"originates",
"in",
"the... |
ACL | Entity-Aware Dependency-Based Deep Graph Attention Network for Comparative Preference Classification | This paper studies the task of comparative preference classification (CPC). Given two entities in a sentence, our goal is to classify whether the first (or the second) entity is preferred over the other or no comparison is expressed at all between the two entities. Existing works either do not learn entity-aware repres... | b19ac8fbdc62c8efb5fc5c665b65b672 | 2,020 | [
"this paper studies the task of comparative preference classification ( cpc ) .",
"given two entities in a sentence , our goal is to classify whether the first ( or the second ) entity is preferred over the other or no comparison is expressed at all between the two entities .",
"existing works either do not lea... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "comparative preference classification",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"comparative",
"preference",
"classification"
],
"offsets": [
6,... | [
"this",
"paper",
"studies",
"the",
"task",
"of",
"comparative",
"preference",
"classification",
"(",
"cpc",
")",
".",
"given",
"two",
"entities",
"in",
"a",
"sentence",
",",
"our",
"goal",
"is",
"to",
"classify",
"whether",
"the",
"first",
"(",
"or",
"the"... |
ACL | As Little as Possible, as Much as Necessary: Detecting Over- and Undertranslations with Contrastive Conditioning | Omission and addition of content is a typical issue in neural machine translation. We propose a method for detecting such phenomena with off-the-shelf translation models. Using contrastive conditioning, we compare the likelihood of a full sequence under a translation model to the likelihood of its parts, given the corr... | 0a3b7bc90bdb2433badeb7407fc524e1 | 2,022 | [
"omission and addition of content is a typical issue in neural machine translation .",
"we propose a method for detecting such phenomena with off - the - shelf translation models .",
"using contrastive conditioning , we compare the likelihood of a full sequence under a translation model to the likelihood of its... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural machine translation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"neural",
"machine",
"translation"
],
"offsets": [
10,
11,
... | [
"omission",
"and",
"addition",
"of",
"content",
"is",
"a",
"typical",
"issue",
"in",
"neural",
"machine",
"translation",
".",
"we",
"propose",
"a",
"method",
"for",
"detecting",
"such",
"phenomena",
"with",
"off",
"-",
"the",
"-",
"shelf",
"translation",
"mo... |
ACL | Just “OneSeC” for Producing Multilingual Sense-Annotated Data | The well-known problem of knowledge acquisition is one of the biggest issues in Word Sense Disambiguation (WSD), where annotated data are still scarce in English and almost absent in other languages. In this paper we formulate the assumption of One Sense per Wikipedia Category and present OneSeC, a language-independent... | e228907aa8bb6cea12e5d5c655f52b9c | 2,019 | [
"the well - known problem of knowledge acquisition is one of the biggest issues in word sense disambiguation ( wsd ) , where annotated data are still scarce in english and almost absent in other languages .",
"in this paper we formulate the assumption of one sense per wikipedia category and present onesec , a lan... | [
{
"event_type": "CMP",
"arguments": [
{
"text": "on all languages and most domains",
"nugget_type": "LIM",
"argument_type": "Condition",
"tokens": [
"on",
"all",
"languages",
"and",
"most",
"domains"
],... | [
"the",
"well",
"-",
"known",
"problem",
"of",
"knowledge",
"acquisition",
"is",
"one",
"of",
"the",
"biggest",
"issues",
"in",
"word",
"sense",
"disambiguation",
"(",
"wsd",
")",
",",
"where",
"annotated",
"data",
"are",
"still",
"scarce",
"in",
"english",
... |
ACL | SummaReranker: A Multi-Task Mixture-of-Experts Re-ranking Framework for Abstractive Summarization | Sequence-to-sequence neural networks have recently achieved great success in abstractive summarization, especially through fine-tuning large pre-trained language models on the downstream dataset. These models are typically decoded with beam search to generate a unique summary. However, the search space is very large, a... | 1e54e904941c04f8ad39e3220d68d689 | 2,022 | [
"sequence - to - sequence neural networks have recently achieved great success in abstractive summarization , especially through fine - tuning large pre - trained language models on the downstream dataset .",
"these models are typically decoded with beam search to generate a unique summary .",
"however , the se... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "abstractive summarization",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"abstractive",
"summarization"
],
"offsets": [
13,
14
]
}
... | [
"sequence",
"-",
"to",
"-",
"sequence",
"neural",
"networks",
"have",
"recently",
"achieved",
"great",
"success",
"in",
"abstractive",
"summarization",
",",
"especially",
"through",
"fine",
"-",
"tuning",
"large",
"pre",
"-",
"trained",
"language",
"models",
"on... |
ACL | Leveraging Visual Knowledge in Language Tasks: An Empirical Study on Intermediate Pre-training for Cross-Modal Knowledge Transfer | Pre-trained language models are still far from human performance in tasks that need understanding of properties (e.g. appearance, measurable quantity) and affordances of everyday objects in the real world since the text lacks such information due to reporting bias.In this work, we study whether integrating visual knowl... | bd23bb2885abaa2c95d3ea1d4620880f | 2,022 | [
"pre - trained language models are still far from human performance in tasks that need understanding of properties ( e . g . appearance , measurable quantity ) and affordances of everyday objects in the real world since the text lacks such information due to reporting bias .",
"in this work , we study whether int... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "lacks",
"nugget_type": "WEA",
"argument_type": "Fault",
"tokens": [
"lacks"
],
"offsets": [
40
]
},
{
"text": "text",
"nugget_type": "FEA",
"... | [
"pre",
"-",
"trained",
"language",
"models",
"are",
"still",
"far",
"from",
"human",
"performance",
"in",
"tasks",
"that",
"need",
"understanding",
"of",
"properties",
"(",
"e",
".",
"g",
".",
"appearance",
",",
"measurable",
"quantity",
")",
"and",
"afforda... |
ACL | BinaryBERT: Pushing the Limit of BERT Quantization | The rapid development of large pre-trained language models has greatly increased the demand for model compression techniques, among which quantization is a popular solution. In this paper, we propose BinaryBERT, which pushes BERT quantization to the limit by weight binarization. We find that a binary BERT is hard to be... | 8c7be41e8bef42d4f5ccfd0f426c85f1 | 2,021 | [
"the rapid development of large pre - trained language models has greatly increased the demand for model compression techniques , among which quantization is a popular solution .",
"in this paper , we propose binarybert , which pushes bert quantization to the limit by weight binarization .",
"we find that a bin... | [
{
"event_type": "FIN",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Finder",
"tokens": [
"we"
],
"offsets": [
47
]
},
{
"text": "trained directly",
"nugget_type": "E-CMP",
... | [
"the",
"rapid",
"development",
"of",
"large",
"pre",
"-",
"trained",
"language",
"models",
"has",
"greatly",
"increased",
"the",
"demand",
"for",
"model",
"compression",
"techniques",
",",
"among",
"which",
"quantization",
"is",
"a",
"popular",
"solution",
".",
... |
ACL | Cross-Lingual Unsupervised Sentiment Classification with Multi-View Transfer Learning | Recent neural network models have achieved impressive performance on sentiment classification in English as well as other languages. Their success heavily depends on the availability of a large amount of labeled data or parallel corpus. In this paper, we investigate an extreme scenario of cross-lingual sentiment classi... | 2d4578d33e0cfbc1933fdab930b51f5b | 2,020 | [
"recent neural network models have achieved impressive performance on sentiment classification in english as well as other languages .",
"their success heavily depends on the availability of a large amount of labeled data or parallel corpus .",
"in this paper , we investigate an extreme scenario of cross - ling... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural network models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"neural",
"network",
"models"
],
"offsets": [
1,
2,
3
... | [
"recent",
"neural",
"network",
"models",
"have",
"achieved",
"impressive",
"performance",
"on",
"sentiment",
"classification",
"in",
"english",
"as",
"well",
"as",
"other",
"languages",
".",
"their",
"success",
"heavily",
"depends",
"on",
"the",
"availability",
"o... |
ACL | Explaining Contextualization in Language Models using Visual Analytics | Despite the success of contextualized language models on various NLP tasks, it is still unclear what these models really learn. In this paper, we contribute to the current efforts of explaining such models by exploring the continuum between function and content words with respect to contextualization in BERT, based on ... | 2e64dccd83851dd1c01968a303758036 | 2,021 | [
"despite the success of contextualized language models on various nlp tasks , it is still unclear what these models really learn .",
"in this paper , we contribute to the current efforts of explaining such models by exploring the continuum between function and content words with respect to contextualization in be... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "contextualized language models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"contextualized",
"language",
"models"
],
"offsets": [
4,
5,
... | [
"despite",
"the",
"success",
"of",
"contextualized",
"language",
"models",
"on",
"various",
"nlp",
"tasks",
",",
"it",
"is",
"still",
"unclear",
"what",
"these",
"models",
"really",
"learn",
".",
"in",
"this",
"paper",
",",
"we",
"contribute",
"to",
"the",
... |
ACL | Keeping Notes: Conditional Natural Language Generation with a Scratchpad Encoder | We introduce the Scratchpad Mechanism, a novel addition to the sequence-to-sequence (seq2seq) neural network architecture and demonstrate its effectiveness in improving the overall fluency of seq2seq models for natural language generation tasks. By enabling the decoder at each time step to write to all of the encoder o... | e7f8304ee88e9f033de11b0cf0266207 | 2,019 | [
"we introduce the scratchpad mechanism , a novel addition to the sequence - to - sequence ( seq2seq ) neural network architecture and demonstrate its effectiveness in improving the overall fluency of seq2seq models for natural language generation tasks .",
"by enabling the decoder at each time step to write to al... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "scratchpad mechanism",
"nugget_type": "APP"... | [
"we",
"introduce",
"the",
"scratchpad",
"mechanism",
",",
"a",
"novel",
"addition",
"to",
"the",
"sequence",
"-",
"to",
"-",
"sequence",
"(",
"seq2seq",
")",
"neural",
"network",
"architecture",
"and",
"demonstrate",
"its",
"effectiveness",
"in",
"improving",
... |
ACL | What Does BERT with Vision Look At? | Pre-trained visually grounded language models such as ViLBERT, LXMERT, and UNITER have achieved significant performance improvement on vision-and-language tasks but what they learn during pre-training remains unclear. In this work, we demonstrate that certain attention heads of a visually grounded language model active... | 252c007240e31c060ee57815afcbdd76 | 2,020 | [
"pre - trained visually grounded language models such as vilbert , lxmert , and uniter have achieved significant performance improvement on vision - and - language tasks but what they learn during pre - training remains unclear .",
"in this work , we demonstrate that certain attention heads of a visually grounded... | [
{
"event_type": "FAC",
"arguments": [
{
"text": "pre - trained visually grounded language models",
"nugget_type": "APP",
"argument_type": "Subject",
"tokens": [
"pre",
"-",
"trained",
"visually",
"grounded",
"l... | [
"pre",
"-",
"trained",
"visually",
"grounded",
"language",
"models",
"such",
"as",
"vilbert",
",",
"lxmert",
",",
"and",
"uniter",
"have",
"achieved",
"significant",
"performance",
"improvement",
"on",
"vision",
"-",
"and",
"-",
"language",
"tasks",
"but",
"wh... |
ACL | Probing for Labeled Dependency Trees | Probing has become an important tool for analyzing representations in Natural Language Processing (NLP). For graphical NLP tasks such as dependency parsing, linear probes are currently limited to extracting undirected or unlabeled parse trees which do not capture the full task. This work introduces DepProbe, a linear p... | 9c3721c238a542805aaf00178aa45343 | 2,022 | [
"probing has become an important tool for analyzing representations in natural language processing ( nlp ) .",
"for graphical nlp tasks such as dependency parsing , linear probes are currently limited to extracting undirected or unlabeled parse trees which do not capture the full task .",
"this work introduces ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "natural language processing",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"natural",
"language",
"processing"
],
"offsets": [
10,
11,
... | [
"probing",
"has",
"become",
"an",
"important",
"tool",
"for",
"analyzing",
"representations",
"in",
"natural",
"language",
"processing",
"(",
"nlp",
")",
".",
"for",
"graphical",
"nlp",
"tasks",
"such",
"as",
"dependency",
"parsing",
",",
"linear",
"probes",
"... |
ACL | From English to Code-Switching: Transfer Learning with Strong Morphological Clues | Linguistic Code-switching (CS) is still an understudied phenomenon in natural language processing. The NLP community has mostly focused on monolingual and multi-lingual scenarios, but little attention has been given to CS in particular. This is partly because of the lack of resources and annotated data, despite its inc... | 4914a49430a3cd11d9462d9b00fc8ddd | 2,020 | [
"linguistic code - switching ( cs ) is still an understudied phenomenon in natural language processing .",
"the nlp community has mostly focused on monolingual and multi - lingual scenarios , but little attention has been given to cs in particular .",
"this is partly because of the lack of resources and annotat... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "natural language processing",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"natural",
"language",
"processing"
],
"offsets": [
13,
14,
... | [
"linguistic",
"code",
"-",
"switching",
"(",
"cs",
")",
"is",
"still",
"an",
"understudied",
"phenomenon",
"in",
"natural",
"language",
"processing",
".",
"the",
"nlp",
"community",
"has",
"mostly",
"focused",
"on",
"monolingual",
"and",
"multi",
"-",
"lingual... |
ACL | Out-of-Scope Intent Detection with Self-Supervision and Discriminative Training | Out-of-scope intent detection is of practical importance in task-oriented dialogue systems. Since the distribution of outlier utterances is arbitrary and unknown in the training stage, existing methods commonly rely on strong assumptions on data distribution such as mixture of Gaussians to make inference, resulting in ... | 9fd6207d29efed2fa1a4d6b34925f3ee | 2,021 | [
"out - of - scope intent detection is of practical importance in task - oriented dialogue systems .",
"since the distribution of outlier utterances is arbitrary and unknown in the training stage , existing methods commonly rely on strong assumptions on data distribution such as mixture of gaussians to make infere... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "in task - oriented dialogue systems",
"nugget_type": "LIM",
"argument_type": "Condition",
"tokens": [
"in",
"task",
"-",
"oriented",
"dialogue",
"systems"
... | [
"out",
"-",
"of",
"-",
"scope",
"intent",
"detection",
"is",
"of",
"practical",
"importance",
"in",
"task",
"-",
"oriented",
"dialogue",
"systems",
".",
"since",
"the",
"distribution",
"of",
"outlier",
"utterances",
"is",
"arbitrary",
"and",
"unknown",
"in",
... |
ACL | Multi-Domain Named Entity Recognition with Genre-Aware and Agnostic Inference | Named entity recognition is a key component of many text processing pipelines and it is thus essential for this component to be robust to different types of input. However, domain transfer of NER models with data from multiple genres has not been widely studied. To this end, we conduct NER experiments in three predicti... | 4afbec5d9a19f5336dcb19e454fe8409 | 2,020 | [
"named entity recognition is a key component of many text processing pipelines and it is thus essential for this component to be robust to different types of input .",
"however , domain transfer of ner models with data from multiple genres has not been widely studied .",
"to this end , we conduct ner experiment... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "named entity recognition",
"nugget_type": "MOD",
"argument_type": "Target",
"tokens": [
"named",
"entity",
"recognition"
],
"offsets": [
0,
1,
2
... | [
"named",
"entity",
"recognition",
"is",
"a",
"key",
"component",
"of",
"many",
"text",
"processing",
"pipelines",
"and",
"it",
"is",
"thus",
"essential",
"for",
"this",
"component",
"to",
"be",
"robust",
"to",
"different",
"types",
"of",
"input",
".",
"howev... |
ACL | Accurate Online Posterior Alignments for Principled Lexically-Constrained Decoding | Online alignment in machine translation refers to the task of aligning a target word to a source word when the target sequence has only been partially decoded. Good online alignments facilitate important applications such as lexically constrained translation where user-defined dictionaries are used to inject lexical co... | d57b8384c4567a1e6ad04c0fdd17547e | 2,022 | [
"online alignment in machine translation refers to the task of aligning a target word to a source word when the target sequence has only been partially decoded .",
"good online alignments facilitate important applications such as lexically constrained translation where user - defined dictionaries are used to inje... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "machine translation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"machine",
"translation"
],
"offsets": [
3,
4
]
}
],
"trigge... | [
"online",
"alignment",
"in",
"machine",
"translation",
"refers",
"to",
"the",
"task",
"of",
"aligning",
"a",
"target",
"word",
"to",
"a",
"source",
"word",
"when",
"the",
"target",
"sequence",
"has",
"only",
"been",
"partially",
"decoded",
".",
"good",
"onli... |
ACL | The (Non-)Utility of Structural Features in BiLSTM-based Dependency Parsers | Classical non-neural dependency parsers put considerable effort on the design of feature functions. Especially, they benefit from information coming from structural features, such as features drawn from neighboring tokens in the dependency tree. In contrast, their BiLSTM-based successors achieve state-of-the-art perfor... | b04f91842a14d9e794add106f970f33b | 2,019 | [
"classical non - neural dependency parsers put considerable effort on the design of feature functions .",
"especially , they benefit from information coming from structural features , such as features drawn from neighboring tokens in the dependency tree .",
"in contrast , their bilstm - based successors achieve... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "classical non - neural dependency parsers",
"nugget_type": "MOD",
"argument_type": "Concern",
"tokens": [
"classical",
"non",
"-",
"neural",
"dependency",
"parse... | [
"classical",
"non",
"-",
"neural",
"dependency",
"parsers",
"put",
"considerable",
"effort",
"on",
"the",
"design",
"of",
"feature",
"functions",
".",
"especially",
",",
"they",
"benefit",
"from",
"information",
"coming",
"from",
"structural",
"features",
",",
"... |
ACL | Multi-Source Cross-Lingual Model Transfer: Learning What to Share | Modern NLP applications have enjoyed a great boost utilizing neural networks models. Such deep neural models, however, are not applicable to most human languages due to the lack of annotated training data for various NLP tasks. Cross-lingual transfer learning (CLTL) is a viable method for building NLP models for a low-... | d2b1aeaa3c154d97e7ff7e02e5e1f692 | 2,019 | [
"modern nlp applications have enjoyed a great boost utilizing neural networks models .",
"such deep neural models , however , are not applicable to most human languages due to the lack of annotated training data for various nlp tasks .",
"cross - lingual transfer learning ( cltl ) is a viable method for buildin... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural networks models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"neural",
"networks",
"models"
],
"offsets": [
9,
10,
11
... | [
"modern",
"nlp",
"applications",
"have",
"enjoyed",
"a",
"great",
"boost",
"utilizing",
"neural",
"networks",
"models",
".",
"such",
"deep",
"neural",
"models",
",",
"however",
",",
"are",
"not",
"applicable",
"to",
"most",
"human",
"languages",
"due",
"to",
... |
ACL | Energy and Policy Considerations for Deep Learning in NLP | Recent progress in hardware and methodology for training neural networks has ushered in a new generation of large networks trained on abundant data. These models have obtained notable gains in accuracy across many NLP tasks. However, these accuracy improvements depend on the availability of exceptionally large computat... | ad0c592910595b4a68f4cf5b9fd9ef3f | 2,019 | [
"recent progress in hardware and methodology for training neural networks has ushered in a new generation of large networks trained on abundant data .",
"these models have obtained notable gains in accuracy across many nlp tasks .",
"however , these accuracy improvements depend on the availability of exceptiona... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "training neural networks",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"training",
"neural",
"networks"
],
"offsets": [
7,
8,
9
... | [
"recent",
"progress",
"in",
"hardware",
"and",
"methodology",
"for",
"training",
"neural",
"networks",
"has",
"ushered",
"in",
"a",
"new",
"generation",
"of",
"large",
"networks",
"trained",
"on",
"abundant",
"data",
".",
"these",
"models",
"have",
"obtained",
... |
ACL | Discrete Opinion Tree Induction for Aspect-based Sentiment Analysis | Dependency trees have been intensively used with graph neural networks for aspect-based sentiment classification. Though being effective, such methods rely on external dependency parsers, which can be unavailable for low-resource languages or perform worse in low-resource domains. In addition, dependency trees are also... | fdd01376a67d2c8f92d1766b5ebc5b0d | 2,022 | [
"dependency trees have been intensively used with graph neural networks for aspect - based sentiment classification .",
"though being effective , such methods rely on external dependency parsers , which can be unavailable for low - resource languages or perform worse in low - resource domains .",
"in addition ,... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "dependency trees",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"dependency",
"trees"
],
"offsets": [
0,
1
]
},
{
"text":... | [
"dependency",
"trees",
"have",
"been",
"intensively",
"used",
"with",
"graph",
"neural",
"networks",
"for",
"aspect",
"-",
"based",
"sentiment",
"classification",
".",
"though",
"being",
"effective",
",",
"such",
"methods",
"rely",
"on",
"external",
"dependency",
... |
ACL | Token Dropping for Efficient BERT Pretraining | Transformer-based models generally allocate the same amount of computation for each token in a given sequence. We develop a simple but effective “token dropping” method to accelerate the pretraining of transformer models, such as BERT, without degrading its performance on downstream tasks. In particular, we drop unimpo... | 21604d00f74f08a63ae2e8c96042fef9 | 2,022 | [
"transformer - based models generally allocate the same amount of computation for each token in a given sequence .",
"we develop a simple but effective “ token dropping ” method to accelerate the pretraining of transformer models , such as bert , without degrading its performance on downstream tasks .",
"in par... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "transformer - based models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"transformer",
"-",
"based",
"models"
],
"offsets": [
0,
... | [
"transformer",
"-",
"based",
"models",
"generally",
"allocate",
"the",
"same",
"amount",
"of",
"computation",
"for",
"each",
"token",
"in",
"a",
"given",
"sequence",
".",
"we",
"develop",
"a",
"simple",
"but",
"effective",
"“",
"token",
"dropping",
"”",
"met... |
ACL | Probing for the Usage of Grammatical Number | A central quest of probing is to uncover how pre-trained models encode a linguistic property within their representations. An encoding, however, might be spurious—i.e., the model might not rely on it when making predictions. In this paper, we try to find an encoding that the model actually uses, introducing a usage-bas... | 0364f7432172c9ace51edd1126991d7e | 2,022 | [
"a central quest of probing is to uncover how pre - trained models encode a linguistic property within their representations .",
"an encoding , however , might be spurious — i . e . , the model might not rely on it when making predictions .",
"in this paper , we try to find an encoding that the model actually u... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "pre - trained models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"pre",
"-",
"trained",
"models"
],
"offsets": [
9,
10,
... | [
"a",
"central",
"quest",
"of",
"probing",
"is",
"to",
"uncover",
"how",
"pre",
"-",
"trained",
"models",
"encode",
"a",
"linguistic",
"property",
"within",
"their",
"representations",
".",
"an",
"encoding",
",",
"however",
",",
"might",
"be",
"spurious",
"—"... |
ACL | X-Fact: A New Benchmark Dataset for Multilingual Fact Checking | In this work, we introduce : the largest publicly available multilingual dataset for factual verification of naturally existing real-world claims. The dataset contains short statements in 25 languages and is labeled for veracity by expert fact-checkers. The dataset includes a multilingual evaluation benchmark that meas... | bb37fc34620dabdc864e27f468767ddb | 2,021 | [
"in this work , we introduce : the largest publicly available multilingual dataset for factual verification of naturally existing real - world claims .",
"the dataset contains short statements in 25 languages and is labeled for veracity by expert fact - checkers .",
"the dataset includes a multilingual evaluati... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
4
]
},
{
"text": "largest publicly available multilingual dataset",... | [
"in",
"this",
"work",
",",
"we",
"introduce",
":",
"the",
"largest",
"publicly",
"available",
"multilingual",
"dataset",
"for",
"factual",
"verification",
"of",
"naturally",
"existing",
"real",
"-",
"world",
"claims",
".",
"the",
"dataset",
"contains",
"short",
... |
ACL | CogNet: A Large-Scale Cognate Database | This paper introduces CogNet, a new, large-scale lexical database that provides cognates -words of common origin and meaning- across languages. The database currently contains 3.1 million cognate pairs across 338 languages using 35 writing systems. The paper also describes the automated method by which cognates were co... | 41270e6e97e79a8d70b3fa2c3eea431f | 2,019 | [
"this paper introduces cognet , a new , large - scale lexical database that provides cognates - words of common origin and meaning - across languages .",
"the database currently contains 3 . 1 million cognate pairs across 338 languages using 35 writing systems .",
"the paper also describes the automated method ... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "cognet",
"nugget_type": "DST",
"argument_type": "Content",
"tokens": [
"cognet"
],
"offsets": [
3
]
},
{
"text": "provides",
"nugget_type": "E-PUR",
... | [
"this",
"paper",
"introduces",
"cognet",
",",
"a",
"new",
",",
"large",
"-",
"scale",
"lexical",
"database",
"that",
"provides",
"cognates",
"-",
"words",
"of",
"common",
"origin",
"and",
"meaning",
"-",
"across",
"languages",
".",
"the",
"database",
"curren... |
ACL | Unknown Intent Detection Using Gaussian Mixture Model with an Application to Zero-shot Intent Classification | User intent classification plays a vital role in dialogue systems. Since user intent may frequently change over time in many realistic scenarios, unknown (new) intent detection has become an essential problem, where the study has just begun. This paper proposes a semantic-enhanced Gaussian mixture model (SEG) for unkno... | ab3d64fd78790fd6f8ce6777ae88c391 | 2,020 | [
"user intent classification plays a vital role in dialogue systems .",
"since user intent may frequently change over time in many realistic scenarios , unknown ( new ) intent detection has become an essential problem , where the study has just begun .",
"this paper proposes a semantic - enhanced gaussian mixtur... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "user intent classification",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"user",
"intent",
"classification"
],
"offsets": [
0,
1,
... | [
"user",
"intent",
"classification",
"plays",
"a",
"vital",
"role",
"in",
"dialogue",
"systems",
".",
"since",
"user",
"intent",
"may",
"frequently",
"change",
"over",
"time",
"in",
"many",
"realistic",
"scenarios",
",",
"unknown",
"(",
"new",
")",
"intent",
... |
ACL | OpenMEVA: A Benchmark for Evaluating Open-ended Story Generation Metrics | Automatic metrics are essential for developing natural language generation (NLG) models, particularly for open-ended language generation tasks such as story generation. However, existing automatic metrics are observed to correlate poorly with human evaluation. The lack of standardized benchmark datasets makes it diffic... | 599711d0e81e6182ccc1454b338369cb | 2,021 | [
"automatic metrics are essential for developing natural language generation ( nlg ) models , particularly for open - ended language generation tasks such as story generation .",
"however , existing automatic metrics are observed to correlate poorly with human evaluation .",
"the lack of standardized benchmark d... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "natural language generation models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"natural",
"language",
"generation",
"models"
],
"offsets": [
... | [
"automatic",
"metrics",
"are",
"essential",
"for",
"developing",
"natural",
"language",
"generation",
"(",
"nlg",
")",
"models",
",",
"particularly",
"for",
"open",
"-",
"ended",
"language",
"generation",
"tasks",
"such",
"as",
"story",
"generation",
".",
"howev... |
ACL | Towards Empathetic Open-domain Conversation Models: A New Benchmark and Dataset | One challenge for dialogue agents is recognizing feelings in the conversation partner and replying accordingly, a key communicative skill. While it is straightforward for humans to recognize and acknowledge others’ feelings in a conversation, this is a significant challenge for AI systems due to the paucity of suitable... | fc189d61a4dfce216daff8e0fd7c47eb | 2,019 | [
"one challenge for dialogue agents is recognizing feelings in the conversation partner and replying accordingly , a key communicative skill .",
"while it is straightforward for humans to recognize and acknowledge others ’ feelings in a conversation , this is a significant challenge for ai systems due to the pauci... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "dialogue agents",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"dialogue",
"agents"
],
"offsets": [
3,
4
]
}
],
"trigger": {
... | [
"one",
"challenge",
"for",
"dialogue",
"agents",
"is",
"recognizing",
"feelings",
"in",
"the",
"conversation",
"partner",
"and",
"replying",
"accordingly",
",",
"a",
"key",
"communicative",
"skill",
".",
"while",
"it",
"is",
"straightforward",
"for",
"humans",
"... |
ACL | “None of the Above”: Measure Uncertainty in Dialog Response Retrieval | This paper discusses the importance of uncovering uncertainty in end-to-end dialog tasks and presents our experimental results on uncertainty classification on the processed Ubuntu Dialog Corpus. We show that instead of retraining models for this specific purpose, we can capture the original retrieval model’s underlyin... | fb0e6b9bcafba1c43d9ef3e1398498ea | 2,020 | [
"this paper discusses the importance of uncovering uncertainty in end - to - end dialog tasks and presents our experimental results on uncertainty classification on the processed ubuntu dialog corpus .",
"we show that instead of retraining models for this specific purpose , we can capture the original retrieval m... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "importance of uncovering uncertainty",
"nugget_type": "TAK",
"argument_type": "Content",
"tokens": [
"importance",
"of",
"uncovering",
"uncertainty"
],
"offsets": [
... | [
"this",
"paper",
"discusses",
"the",
"importance",
"of",
"uncovering",
"uncertainty",
"in",
"end",
"-",
"to",
"-",
"end",
"dialog",
"tasks",
"and",
"presents",
"our",
"experimental",
"results",
"on",
"uncertainty",
"classification",
"on",
"the",
"processed",
"ub... |
ACL | SpellGCN: Incorporating Phonological and Visual Similarities into Language Models for Chinese Spelling Check | Chinese Spelling Check (CSC) is a task to detect and correct spelling errors in Chinese natural language. Existing methods have made attempts to incorporate the similarity knowledge between Chinese characters. However, they take the similarity knowledge as either an external input resource or just heuristic rules. This... | be1110dfcf76310544cacbce709f98d3 | 2,020 | [
"chinese spelling check ( csc ) is a task to detect and correct spelling errors in chinese natural language .",
"existing methods have made attempts to incorporate the similarity knowledge between chinese characters .",
"however , they take the similarity knowledge as either an external input resource or just h... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "chinese spelling check",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"chinese",
"spelling",
"check"
],
"offsets": [
0,
1,
2
... | [
"chinese",
"spelling",
"check",
"(",
"csc",
")",
"is",
"a",
"task",
"to",
"detect",
"and",
"correct",
"spelling",
"errors",
"in",
"chinese",
"natural",
"language",
".",
"existing",
"methods",
"have",
"made",
"attempts",
"to",
"incorporate",
"the",
"similarity"... |
ACL | Speech Translation and the End-to-End Promise: Taking Stock of Where We Are | Over its three decade history, speech translation has experienced several shifts in its primary research themes; moving from loosely coupled cascades of speech recognition and machine translation, to exploring questions of tight coupling, and finally to end-to-end models that have recently attracted much attention. Thi... | 37c799ad79abd8e148f3dde1b5f2ab0e | 2,020 | [
"over its three decade history , speech translation has experienced several shifts in its primary research themes ; moving from loosely coupled cascades of speech recognition and machine translation , to exploring questions of tight coupling , and finally to end - to - end models that have recently attracted much a... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "speech translation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"speech",
"translation"
],
"offsets": [
6,
7
]
}
],
"trigger"... | [
"over",
"its",
"three",
"decade",
"history",
",",
"speech",
"translation",
"has",
"experienced",
"several",
"shifts",
"in",
"its",
"primary",
"research",
"themes",
";",
"moving",
"from",
"loosely",
"coupled",
"cascades",
"of",
"speech",
"recognition",
"and",
"ma... |
ACL | Reverse Engineering Configurations of Neural Text Generation Models | Recent advances in neural text generation modeling have resulted in a number of societal concerns related to how such approaches might be used in malicious ways. It is therefore desirable to develop a deeper understanding of the fundamental properties of such models. The study of artifacts that emerge in machine genera... | cc77c5f28360a3d2c633a52c08bf67ac | 2,020 | [
"recent advances in neural text generation modeling have resulted in a number of societal concerns related to how such approaches might be used in malicious ways .",
"it is therefore desirable to develop a deeper understanding of the fundamental properties of such models .",
"the study of artifacts that emerge ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural text generation modeling",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"neural",
"text",
"generation",
"modeling"
],
"offsets": [
3... | [
"recent",
"advances",
"in",
"neural",
"text",
"generation",
"modeling",
"have",
"resulted",
"in",
"a",
"number",
"of",
"societal",
"concerns",
"related",
"to",
"how",
"such",
"approaches",
"might",
"be",
"used",
"in",
"malicious",
"ways",
".",
"it",
"is",
"t... |
ACL | Proactive Human-Machine Conversation with Explicit Conversation Goal | Though great progress has been made for human-machine conversation, current dialogue system is still in its infancy: it usually converses passively and utters words more as a matter of response, rather than on its own initiatives. In this paper, we take a radical step towards building a human-like conversational agent:... | 20c85484257d67119cfd1319f853b762 | 2,019 | [
"though great progress has been made for human - machine conversation , current dialogue system is still in its infancy : it usually converses passively and utters words more as a matter of response , rather than on its own initiatives .",
"in this paper , we take a radical step towards building a human - like co... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "human - machine conversation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"human",
"-",
"machine",
"conversation"
],
"offsets": [
7,
... | [
"though",
"great",
"progress",
"has",
"been",
"made",
"for",
"human",
"-",
"machine",
"conversation",
",",
"current",
"dialogue",
"system",
"is",
"still",
"in",
"its",
"infancy",
":",
"it",
"usually",
"converses",
"passively",
"and",
"utters",
"words",
"more",... |
ACL | Rationalizing Text Matching: Learning Sparse Alignments via Optimal Transport | Selecting input features of top relevance has become a popular method for building self-explaining models. In this work, we extend this selective rationalization approach to text matching, where the goal is to jointly select and align text pieces, such as tokens or sentences, as a justification for the downstream predi... | 34a3c776fad4477827af996c0624349a | 2,020 | [
"selecting input features of top relevance has become a popular method for building self - explaining models .",
"in this work , we extend this selective rationalization approach to text matching , where the goal is to jointly select and align text pieces , such as tokens or sentences , as a justification for the... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "building self - explaining models",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"building",
"self",
"-",
"explaining",
"models"
],
"offset... | [
"selecting",
"input",
"features",
"of",
"top",
"relevance",
"has",
"become",
"a",
"popular",
"method",
"for",
"building",
"self",
"-",
"explaining",
"models",
".",
"in",
"this",
"work",
",",
"we",
"extend",
"this",
"selective",
"rationalization",
"approach",
"... |
ACL | Dscorer: A Fast Evaluation Metric for Discourse Representation Structure Parsing | Discourse representation structures (DRSs) are scoped semantic representations for texts of arbitrary length. Evaluating the accuracy of predicted DRSs plays a key role in developing semantic parsers and improving their performance. DRSs are typically visualized as boxes which are not straightforward to process automat... | 04fc7073715c81efd5355855f039f00f | 2,020 | [
"discourse representation structures ( drss ) are scoped semantic representations for texts of arbitrary length .",
"evaluating the accuracy of predicted drss plays a key role in developing semantic parsers and improving their performance .",
"drss are typically visualized as boxes which are not straightforward... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "drss",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"drss"
],
"offsets": [
51
]
}
],
"trigger": {
"text": "scoped",
"tokens": [
... | [
"discourse",
"representation",
"structures",
"(",
"drss",
")",
"are",
"scoped",
"semantic",
"representations",
"for",
"texts",
"of",
"arbitrary",
"length",
".",
"evaluating",
"the",
"accuracy",
"of",
"predicted",
"drss",
"plays",
"a",
"key",
"role",
"in",
"devel... |
ACL | MINER: Improving Out-of-Vocabulary Named Entity Recognition from an Information Theoretic Perspective | NER model has achieved promising performance on standard NER benchmarks. However, recent studies show that previous approaches may over-rely on entity mention information, resulting in poor performance on out-of-vocabulary(OOV) entity recognition. In this work, we propose MINER, a novel NER learning framework, to remed... | fbf8d9bddb11d97bd5f6c7ab4030b04b | 2,022 | [
"ner model has achieved promising performance on standard ner benchmarks .",
"however , recent studies show that previous approaches may over - rely on entity mention information , resulting in poor performance on out - of - vocabulary ( oov ) entity recognition .",
"in this work , we propose miner , a novel ne... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "ner model",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"ner",
"model"
],
"offsets": [
0,
1
]
}
],
"trigger": {
"text": ... | [
"ner",
"model",
"has",
"achieved",
"promising",
"performance",
"on",
"standard",
"ner",
"benchmarks",
".",
"however",
",",
"recent",
"studies",
"show",
"that",
"previous",
"approaches",
"may",
"over",
"-",
"rely",
"on",
"entity",
"mention",
"information",
",",
... |
ACL | Tree-Structured Topic Modeling with Nonparametric Neural Variational Inference | Topic modeling has been widely used for discovering the latent semantic structure of documents, but most existing methods learn topics with a flat structure. Although probabilistic models can generate topic hierarchies by introducing nonparametric priors like Chinese restaurant process, such methods have data scalabili... | 59691d4b3757741fd95d55fbd05eb92e | 2,021 | [
"topic modeling has been widely used for discovering the latent semantic structure of documents , but most existing methods learn topics with a flat structure .",
"although probabilistic models can generate topic hierarchies by introducing nonparametric priors like chinese restaurant process , such methods have d... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "topic modeling",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"topic",
"modeling"
],
"offsets": [
0,
1
]
}
],
"trigger": {
... | [
"topic",
"modeling",
"has",
"been",
"widely",
"used",
"for",
"discovering",
"the",
"latent",
"semantic",
"structure",
"of",
"documents",
",",
"but",
"most",
"existing",
"methods",
"learn",
"topics",
"with",
"a",
"flat",
"structure",
".",
"although",
"probabilist... |
ACL | Learning from Perturbations: Diverse and Informative Dialogue Generation with Inverse Adversarial Training | In this paper, we propose Inverse Adversarial Training (IAT) algorithm for training neural dialogue systems to avoid generic responses and model dialogue history better. In contrast to standard adversarial training algorithms, IAT encourages the model to be sensitive to the perturbation in the dialogue history and ther... | 71a786fc82e7a30eb1187e16fb52523a | 2,021 | [
"in this paper , we propose inverse adversarial training ( iat ) algorithm for training neural dialogue systems to avoid generic responses and model dialogue history better .",
"in contrast to standard adversarial training algorithms , iat encourages the model to be sensitive to the perturbation in the dialogue h... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
4
]
},
{
"text": "avoid",
"nugget_type": "E-PUR",
"ar... | [
"in",
"this",
"paper",
",",
"we",
"propose",
"inverse",
"adversarial",
"training",
"(",
"iat",
")",
"algorithm",
"for",
"training",
"neural",
"dialogue",
"systems",
"to",
"avoid",
"generic",
"responses",
"and",
"model",
"dialogue",
"history",
"better",
".",
"i... |
ACL | Positional Artefacts Propagate Through Masked Language Model Embeddings | In this work, we demonstrate that the contextualized word vectors derived from pretrained masked language model-based encoders share a common, perhaps undesirable pattern across layers. Namely, we find cases of persistent outlier neurons within BERT and RoBERTa’s hidden state vectors that consistently bear the smallest... | d624a6b94fd5fa5ddf2d9ed25cbfe567 | 2,021 | [
"in this work , we demonstrate that the contextualized word vectors derived from pretrained masked language model - based encoders share a common , perhaps undesirable pattern across layers .",
"namely , we find cases of persistent outlier neurons within bert and roberta ’ s hidden state vectors that consistently... | [
{
"event_type": "FIN",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Finder",
"tokens": [
"we"
],
"offsets": [
4
]
},
{
"text": "share",
"nugget_type": "E-FAC",
"argu... | [
"in",
"this",
"work",
",",
"we",
"demonstrate",
"that",
"the",
"contextualized",
"word",
"vectors",
"derived",
"from",
"pretrained",
"masked",
"language",
"model",
"-",
"based",
"encoders",
"share",
"a",
"common",
",",
"perhaps",
"undesirable",
"pattern",
"acros... |
ACL | Handling Rare Entities for Neural Sequence Labeling | One great challenge in neural sequence labeling is the data sparsity problem for rare entity words and phrases. Most of test set entities appear only few times and are even unseen in training corpus, yielding large number of out-of-vocabulary (OOV) and low-frequency (LF) entities during evaluation. In this work, we pro... | 893c015986ae7f8c7abbaf7f6bc3c30b | 2,020 | [
"one great challenge in neural sequence labeling is the data sparsity problem for rare entity words and phrases .",
"most of test set entities appear only few times and are even unseen in training corpus , yielding large number of out - of - vocabulary ( oov ) and low - frequency ( lf ) entities during evaluation... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural sequence labeling",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"neural",
"sequence",
"labeling"
],
"offsets": [
4,
5,
6
... | [
"one",
"great",
"challenge",
"in",
"neural",
"sequence",
"labeling",
"is",
"the",
"data",
"sparsity",
"problem",
"for",
"rare",
"entity",
"words",
"and",
"phrases",
".",
"most",
"of",
"test",
"set",
"entities",
"appear",
"only",
"few",
"times",
"and",
"are",... |
ACL | RADDLE: An Evaluation Benchmark and Analysis Platform for Robust Task-oriented Dialog Systems | For task-oriented dialog systems to be maximally useful, it must be able to process conversations in a way that is (1) generalizable with a small number of training examples for new task domains, and (2) robust to user input in various styles, modalities, or domains. In pursuit of these goals, we introduce the RADDLE b... | 92195bd94b027c4c0982c4f77fe1daca | 2,021 | [
"for task - oriented dialog systems to be maximally useful , it must be able to process conversations in a way that is ( 1 ) generalizable with a small number of training examples for new task domains , and ( 2 ) robust to user input in various styles , modalities , or domains .",
"in pursuit of these goals , we ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "task - oriented dialog systems",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"task",
"-",
"oriented",
"dialog",
"systems"
],
"offsets": [
... | [
"for",
"task",
"-",
"oriented",
"dialog",
"systems",
"to",
"be",
"maximally",
"useful",
",",
"it",
"must",
"be",
"able",
"to",
"process",
"conversations",
"in",
"a",
"way",
"that",
"is",
"(",
"1",
")",
"generalizable",
"with",
"a",
"small",
"number",
"of... |
ACL | Structural Knowledge Distillation: Tractably Distilling Information for Structured Predictor | Knowledge distillation is a critical technique to transfer knowledge between models, typically from a large model (the teacher) to a more fine-grained one (the student). The objective function of knowledge distillation is typically the cross-entropy between the teacher and the student’s output distributions. However, f... | cd88f204890aa2b5bc6e3eb1d1472d70 | 2,021 | [
"knowledge distillation is a critical technique to transfer knowledge between models , typically from a large model ( the teacher ) to a more fine - grained one ( the student ) .",
"the objective function of knowledge distillation is typically the cross - entropy between the teacher and the student ’ s output dis... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "knowledge distillation",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"knowledge",
"distillation"
],
"offsets": [
0,
1
]
}
],
"... | [
"knowledge",
"distillation",
"is",
"a",
"critical",
"technique",
"to",
"transfer",
"knowledge",
"between",
"models",
",",
"typically",
"from",
"a",
"large",
"model",
"(",
"the",
"teacher",
")",
"to",
"a",
"more",
"fine",
"-",
"grained",
"one",
"(",
"the",
... |
ACL | Relating Simple Sentence Representations in Deep Neural Networks and the Brain | What is the relationship between sentence representations learned by deep recurrent models against those encoded by the brain? Is there any correspondence between hidden layers of these recurrent models and brain regions when processing sentences? Can these deep models be used to synthesize brain data which can then be... | b05dce3a0a3c2bb143ecb8e914316776 | 2,019 | [
"what is the relationship between sentence representations learned by deep recurrent models against those encoded by the brain ?",
"is there any correspondence between hidden layers of these recurrent models and brain regions when processing sentences ?",
"can these deep models be used to synthesize brain data ... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
57
]
},
{
"text": "sentences with simple syntax",
"nugget_t... | [
"what",
"is",
"the",
"relationship",
"between",
"sentence",
"representations",
"learned",
"by",
"deep",
"recurrent",
"models",
"against",
"those",
"encoded",
"by",
"the",
"brain",
"?",
"is",
"there",
"any",
"correspondence",
"between",
"hidden",
"layers",
"of",
... |
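The `events` records in the rows above pair each argument's surface `tokens` with integer `offsets` into that row's tokenized `document` column (e.g. in the "Discrete Opinion Tree Induction" row, the argument "dependency trees" carries offsets [0, 1], matching the first two document tokens). A minimal sketch of a consistency check for such records — the field names are copied from the rows shown, while the helper name `offsets_consistent` and the shortened sample lists are my own, for illustration only:

```python
# Sample data copied (and truncated) from the "Discrete Opinion Tree
# Induction" row above: a tokenized document and one event argument.
document = ["dependency", "trees", "have", "been", "intensively", "used"]

argument = {
    "text": "dependency trees",
    "nugget_type": "TAK",
    "argument_type": "Target",
    "tokens": ["dependency", "trees"],
    "offsets": [0, 1],
}


def offsets_consistent(arg, doc):
    """Return True iff the argument's tokens and offsets are aligned:
    same length, every offset in range, and the document token at each
    offset equal to the corresponding entry in arg["tokens"]."""
    if len(arg["tokens"]) != len(arg["offsets"]):
        return False
    return all(
        0 <= i < len(doc) and doc[i] == tok
        for tok, i in zip(arg["tokens"], arg["offsets"])
    )


print(offsets_consistent(argument, document))  # True for this sample
```

The same check applies unchanged to `trigger` spans, since they use the identical `tokens`/`offsets` layout.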