| venue (string, 1 class) | title (string, 18–162 chars) | abstract (string, 252–1.89k chars) | doc_id (string, 32 chars) | publication_year (int64) | sentences (list, 1–13 items) | events (list, 1–24 items) | document (list, 50–348 tokens) |
|---|---|---|---|---|---|---|---|
ACL | Exploration and Exploitation: Two Ways to Improve Chinese Spelling Correction Models | A sequence-to-sequence learning with neural networks has empirically proven to be an effective framework for Chinese Spelling Correction (CSC), which takes a sentence with some spelling errors as input and outputs the corrected one. However, CSC models may fail to correct spelling errors covered by the confusion sets, ... | edc4c0c0d65890639c885abe8d90d162 | 2021 | [
"a sequence - to - sequence learning with neural networks has empirically proven to be an effective framework for chinese spelling correction ( csc ) , which takes a sentence with some spelling errors as input and outputs the corrected one .",
"however , csc models may fail to correct spelling errors covered by t... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "csc",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"csc"
],
"offsets": [
155
]
}
],
"trigger": {
"text": "proven",
"tokens": [
... | [
"a",
"sequence",
"-",
"to",
"-",
"sequence",
"learning",
"with",
"neural",
"networks",
"has",
"empirically",
"proven",
"to",
"be",
"an",
"effective",
"framework",
"for",
"chinese",
"spelling",
"correction",
"(",
"csc",
")",
",",
"which",
"takes",
"a",
"sente... |
ACL | Reinforcement Guided Multi-Task Learning Framework for Low-Resource Stereotype Detection | As large Pre-trained Language Models (PLMs) trained on large amounts of data in an unsupervised manner become more ubiquitous, identifying various types of bias in the text has come into sharp focus. Existing ‘Stereotype Detection’ datasets mainly adopt a diagnostic approach toward large PLMs. Blodgett et. al. (2021) s... | 48745fbe660aba2172867b19a7e966f9 | 2022 | [
"as large pre - trained language models ( plms ) trained on large amounts of data in an unsupervised manner become more ubiquitous , identifying various types of bias in the text has come into sharp focus .",
"existing ‘ stereotype detection ’ datasets mainly adopt a diagnostic approach toward large plms .",
"b... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "pre - trained language models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"pre",
"-",
"trained",
"language",
"models"
],
"offsets": [
... | [
"as",
"large",
"pre",
"-",
"trained",
"language",
"models",
"(",
"plms",
")",
"trained",
"on",
"large",
"amounts",
"of",
"data",
"in",
"an",
"unsupervised",
"manner",
"become",
"more",
"ubiquitous",
",",
"identifying",
"various",
"types",
"of",
"bias",
"in",... |
ACL | ChatMatch: Evaluating Chatbots by Autonomous Chat Tournaments | Existing automatic evaluation systems of chatbots mostly rely on static chat scripts as ground truth, which is hard to obtain, and requires access to the models of the bots as a form of “white-box testing”. Interactive evaluation mitigates this problem but requires human involvement. In our work, we propose an interact... | b2b608b18831c275b984243c4b407930 | 2022 | [
"existing automatic evaluation systems of chatbots mostly rely on static chat scripts as ground truth , which is hard to obtain , and requires access to the models of the bots as a form of “ white - box testing ” .",
"interactive evaluation mitigates this problem but requires human involvement .",
"in our work ... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "static chat scripts",
"nugget_type": "FEA",
"argument_type": "Concern",
"tokens": [
"static",
"chat",
"scripts"
],
"offsets": [
9,
10,
11
... | [
"existing",
"automatic",
"evaluation",
"systems",
"of",
"chatbots",
"mostly",
"rely",
"on",
"static",
"chat",
"scripts",
"as",
"ground",
"truth",
",",
"which",
"is",
"hard",
"to",
"obtain",
",",
"and",
"requires",
"access",
"to",
"the",
"models",
"of",
"the"... |
ACL | Recursive Tree-Structured Self-Attention for Answer Sentence Selection | Syntactic structure is an important component of natural language text. Recent top-performing models in Answer Sentence Selection (AS2) use self-attention and transfer learning, but not syntactic structure. Tree structures have shown strong performance in tasks with sentence pair input like semantic relatedness. We inv... | d68a622ef87d0de0606979ffc79a7b8b | 2021 | [
"syntactic structure is an important component of natural language text .",
"recent top - performing models in answer sentence selection ( as2 ) use self - attention and transfer learning , but not syntactic structure .",
"tree structures have shown strong performance in tasks with sentence pair input like sema... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "answer sentence selection",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"answer",
"sentence",
"selection"
],
"offsets": [
17,
18,
... | [
"syntactic",
"structure",
"is",
"an",
"important",
"component",
"of",
"natural",
"language",
"text",
".",
"recent",
"top",
"-",
"performing",
"models",
"in",
"answer",
"sentence",
"selection",
"(",
"as2",
")",
"use",
"self",
"-",
"attention",
"and",
"transfer"... |
ACL | Second-Order Semantic Dependency Parsing with End-to-End Neural Networks | Semantic dependency parsing aims to identify semantic relationships between words in a sentence that form a graph. In this paper, we propose a second-order semantic dependency parser, which takes into consideration not only individual dependency edges but also interactions between pairs of edges. We show that second-or... | 5d45801848253510c0d33e9d5386060a | 2019 | [
"semantic dependency parsing aims to identify semantic relationships between words in a sentence that form a graph .",
"in this paper , we propose a second - order semantic dependency parser , which takes into consideration not only individual dependency edges but also interactions between pairs of edges .",
"w... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "semantic dependency parsing",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"semantic",
"dependency",
"parsing"
],
"offsets": [
0,
1,
... | [
"semantic",
"dependency",
"parsing",
"aims",
"to",
"identify",
"semantic",
"relationships",
"between",
"words",
"in",
"a",
"sentence",
"that",
"form",
"a",
"graph",
".",
"in",
"this",
"paper",
",",
"we",
"propose",
"a",
"second",
"-",
"order",
"semantic",
"d... |
ACL | BiTIIMT: A Bilingual Text-infilling Method for Interactive Machine Translation | Interactive neural machine translation (INMT) is able to guarantee high-quality translations by taking human interactions into account. Existing IMT systems relying on lexical constrained decoding (LCD) enable humans to translate in a flexible translation order beyond the left-to-right. However, they typically suffer f... | 1438b14a44b50bfcfc90e8523072adfa | 2022 | [
"interactive neural machine translation ( inmt ) is able to guarantee high - quality translations by taking human interactions into account .",
"existing imt systems relying on lexical constrained decoding ( lcd ) enable humans to translate in a flexible translation order beyond the left - to - right .",
"howev... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "interactive neural machine translation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"interactive",
"neural",
"machine",
"translation"
],
"offsets":... | [
"interactive",
"neural",
"machine",
"translation",
"(",
"inmt",
")",
"is",
"able",
"to",
"guarantee",
"high",
"-",
"quality",
"translations",
"by",
"taking",
"human",
"interactions",
"into",
"account",
".",
"existing",
"imt",
"systems",
"relying",
"on",
"lexical... |
ACL | Leveraging Monolingual Data with Self-Supervision for Multilingual Neural Machine Translation | Over the last few years two promising research directions in low-resource neural machine translation (NMT) have emerged. The first focuses on utilizing high-resource languages to improve the quality of low-resource languages via multilingual NMT. The second direction employs monolingual data with self-supervision to pr... | 3a0e4ce98023883d63b864f1f5c62995 | 2020 | [
"over the last few years two promising research directions in low - resource neural machine translation ( nmt ) have emerged .",
"the first focuses on utilizing high - resource languages to improve the quality of low - resource languages via multilingual nmt .",
"the second direction employs monolingual data wi... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "low - resource neural machine translation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"low",
"-",
"resource",
"neural",
"machine",
"translatio... | [
"over",
"the",
"last",
"few",
"years",
"two",
"promising",
"research",
"directions",
"in",
"low",
"-",
"resource",
"neural",
"machine",
"translation",
"(",
"nmt",
")",
"have",
"emerged",
".",
"the",
"first",
"focuses",
"on",
"utilizing",
"high",
"-",
"resour... |
ACL | Language to Network: Conditional Parameter Adaptation with Natural Language Descriptions | Transfer learning using ImageNet pre-trained models has been the de facto approach in a wide range of computer vision tasks. However, fine-tuning still requires task-specific training data. In this paper, we propose N3 (Neural Networks from Natural Language) - a new paradigm of synthesizing task-specific neural network... | 51834773eeb2c53587f8e1a4412221dc | 2020 | [
"transfer learning using imagenet pre - trained models has been the de facto approach in a wide range of computer vision tasks .",
"however , fine - tuning still requires task - specific training data .",
"in this paper , we propose n3 ( neural networks from natural language ) - a new paradigm of synthesizing t... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
40
]
},
{
"text": "paradigm of synthesizing task - specific neural ne... | [
"transfer",
"learning",
"using",
"imagenet",
"pre",
"-",
"trained",
"models",
"has",
"been",
"the",
"de",
"facto",
"approach",
"in",
"a",
"wide",
"range",
"of",
"computer",
"vision",
"tasks",
".",
"however",
",",
"fine",
"-",
"tuning",
"still",
"requires",
... |
ACL | Intermediate-Task Transfer Learning with Pretrained Language Models: When and Why Does It Work? | While pretrained models such as BERT have shown large gains across natural language understanding tasks, their performance can be improved by further training the model on a data-rich intermediate task, before fine-tuning it on a target task. However, it is still poorly understood when and why intermediate-task trainin... | cb3cd9663c63a869a2868864e1951a6b | 2020 | [
"while pretrained models such as bert have shown large gains across natural language understanding tasks , their performance can be improved by further training the model on a data - rich intermediate task , before fine - tuning it on a target task .",
"however , it is still poorly understood when and why interme... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
70
]
},
{
"text": "large - scale study on the pretrained roberta mo... | [
"while",
"pretrained",
"models",
"such",
"as",
"bert",
"have",
"shown",
"large",
"gains",
"across",
"natural",
"language",
"understanding",
"tasks",
",",
"their",
"performance",
"can",
"be",
"improved",
"by",
"further",
"training",
"the",
"model",
"on",
"a",
"... |
ACL | Towards Understanding Gender Bias in Relation Extraction | Recent developments in Neural Relation Extraction (NRE) have made significant strides towards Automated Knowledge Base Construction. While much attention has been dedicated towards improvements in accuracy, there have been no attempts in the literature to evaluate social biases exhibited in NRE systems. In this paper, ... | a493116cd4dac1825d0bbf3e31f96611 | 2020 | [
"recent developments in neural relation extraction ( nre ) have made significant strides towards automated knowledge base construction .",
"while much attention has been dedicated towards improvements in accuracy , there have been no attempts in the literature to evaluate social biases exhibited in nre systems ."... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "recent developments in neural relation extraction",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"recent",
"developments",
"in",
"neural",
"relation",
... | [
"recent",
"developments",
"in",
"neural",
"relation",
"extraction",
"(",
"nre",
")",
"have",
"made",
"significant",
"strides",
"towards",
"automated",
"knowledge",
"base",
"construction",
".",
"while",
"much",
"attention",
"has",
"been",
"dedicated",
"towards",
"i... |
ACL | Fine-grained Information Extraction from Biomedical Literature based on Knowledge-enriched Abstract Meaning Representation | Biomedical Information Extraction from scientific literature presents two unique and non-trivial challenges. First, compared with general natural language texts, sentences from scientific papers usually possess wider contexts between knowledge elements. Moreover, comprehending the fine-grained scientific entities and e... | 4abd7b1bcf60608234949588805204d3 | 2021 | [
"biomedical information extraction from scientific literature presents two unique and non - trivial challenges .",
"first , compared with general natural language texts , sentences from scientific papers usually possess wider contexts between knowledge elements .",
"moreover , comprehending the fine - grained s... | [
{
"event_type": "MDS",
"arguments": [
{
"text": "sentence - level knowledge graph",
"nugget_type": "APP",
"argument_type": "TriedComponent",
"tokens": [
"sentence",
"-",
"level",
"knowledge",
"graph"
],
"... | [
"biomedical",
"information",
"extraction",
"from",
"scientific",
"literature",
"presents",
"two",
"unique",
"and",
"non",
"-",
"trivial",
"challenges",
".",
"first",
",",
"compared",
"with",
"general",
"natural",
"language",
"texts",
",",
"sentences",
"from",
"sci... |
ACL | Explore, Propose, and Assemble: An Interpretable Model for Multi-Hop Reading Comprehension | Multi-hop reading comprehension requires the model to explore and connect relevant information from multiple sentences/documents in order to answer the question about the context. To achieve this, we propose an interpretable 3-module system called Explore-Propose-Assemble reader (EPAr). First, the Document Explorer ite... | 1a7c0714bc88011c2180678dfdf62701 | 2019 | [
"multi - hop reading comprehension requires the model to explore and connect relevant information from multiple sentences / documents in order to answer the question about the context .",
"to achieve this , we propose an interpretable 3 - module system called explore - propose - assemble reader ( epar ) .",
"fi... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "multi - hop reading comprehension",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"multi",
"-",
"hop",
"reading",
"comprehension"
],
"offset... | [
"multi",
"-",
"hop",
"reading",
"comprehension",
"requires",
"the",
"model",
"to",
"explore",
"and",
"connect",
"relevant",
"information",
"from",
"multiple",
"sentences",
"/",
"documents",
"in",
"order",
"to",
"answer",
"the",
"question",
"about",
"the",
"conte... |
ACL | Learning to Explain: Generating Stable Explanations Fast | The importance of explaining the outcome of a machine learning model, especially a black-box model, is widely acknowledged. Recent approaches explain an outcome by identifying the contributions of input features to this outcome. In environments involving large black-box models or complex inputs, this leads to computati... | 97a51499bb5df57e5ed8a723a909ed45 | 2021 | [
"the importance of explaining the outcome of a machine learning model , especially a black - box model , is widely acknowledged .",
"recent approaches explain an outcome by identifying the contributions of input features to this outcome .",
"in environments involving large black - box models or complex inputs ,... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "explaining the outcome of a machine learning model",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"explaining",
"the",
"outcome",
"of",
"a",
"mac... | [
"the",
"importance",
"of",
"explaining",
"the",
"outcome",
"of",
"a",
"machine",
"learning",
"model",
",",
"especially",
"a",
"black",
"-",
"box",
"model",
",",
"is",
"widely",
"acknowledged",
".",
"recent",
"approaches",
"explain",
"an",
"outcome",
"by",
"i... |
ACL | Predicting Declension Class from Form and Meaning | The noun lexica of many natural languages are divided into several declension classes with characteristic morphological properties. Class membership is far from deterministic, but the phonological form of a noun and/or its meaning can often provide imperfect clues. Here, we investigate the strength of those clues. More... | 0139f7b7ef3fe6952f22426688f2d853 | 2020 | [
"the noun lexica of many natural languages are divided into several declension classes with characteristic morphological properties .",
"class membership is far from deterministic , but the phonological form of a noun and / or its meaning can often provide imperfect clues .",
"here , we investigate the strength... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "noun lexica",
"nugget_type": "FEA",
"argument_type": "Target",
"tokens": [
"noun",
"lexica"
],
"offsets": [
1,
2
]
}
],
"trigger": {
"tex... | [
"the",
"noun",
"lexica",
"of",
"many",
"natural",
"languages",
"are",
"divided",
"into",
"several",
"declension",
"classes",
"with",
"characteristic",
"morphological",
"properties",
".",
"class",
"membership",
"is",
"far",
"from",
"deterministic",
",",
"but",
"the... |
ACL | Distinct Label Representations for Few-Shot Text Classification | Few-shot text classification aims to classify inputs whose label has only a few examples. Previous studies overlooked the semantic relevance between label representations. Therefore, they are easily confused by labels that are relevant. To address this problem, we propose a method that generates distinct label represen... | 356cbec692deb6f96e4ee80d018ab221 | 2021 | [
"few - shot text classification aims to classify inputs whose label has only a few examples .",
"previous studies overlooked the semantic relevance between label representations .",
"therefore , they are easily confused by labels that are relevant .",
"to address this problem , we propose a method that genera... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "few - shot text classification",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"few",
"-",
"shot",
"text",
"classification"
],
"offsets": [
... | [
"few",
"-",
"shot",
"text",
"classification",
"aims",
"to",
"classify",
"inputs",
"whose",
"label",
"has",
"only",
"a",
"few",
"examples",
".",
"previous",
"studies",
"overlooked",
"the",
"semantic",
"relevance",
"between",
"label",
"representations",
".",
"ther... |
ACL | Multi-View Cross-Lingual Structured Prediction with Minimum Supervision | In structured prediction problems, cross-lingual transfer learning is an efficient way to train quality models for low-resource languages, and further improvement can be obtained by learning from multiple source languages. However, not all source models are created equal and some may hurt performance on the target lang... | 03e60afb94af8dcf36be64c6d7d13a13 | 2021 | [
"in structured prediction problems , cross - lingual transfer learning is an efficient way to train quality models for low - resource languages , and further improvement can be obtained by learning from multiple source languages .",
"however , not all source models are created equal and some may hurt performance ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "cross - lingual transfer learning",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"cross",
"-",
"lingual",
"transfer",
"learning"
],
"offset... | [
"in",
"structured",
"prediction",
"problems",
",",
"cross",
"-",
"lingual",
"transfer",
"learning",
"is",
"an",
"efficient",
"way",
"to",
"train",
"quality",
"models",
"for",
"low",
"-",
"resource",
"languages",
",",
"and",
"further",
"improvement",
"can",
"be... |
ACL | BiRRE: Learning Bidirectional Residual Relation Embeddings for Supervised Hypernymy Detection | The hypernymy detection task has been addressed under various frameworks. Previously, the design of unsupervised hypernymy scores has been extensively studied. In contrast, supervised classifiers, especially distributional models, leverage the global contexts of terms to make predictions, but are more likely to suffer ... | 03c04d6b3044d93471bb1bd1a02692ed | 2020 | [
"the hypernymy detection task has been addressed under various frameworks .",
"previously , the design of unsupervised hypernymy scores has been extensively studied .",
"in contrast , supervised classifiers , especially distributional models , leverage the global contexts of terms to make predictions , but are ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "hypernymy detection task",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"hypernymy",
"detection",
"task"
],
"offsets": [
1,
2,
3
... | [
"the",
"hypernymy",
"detection",
"task",
"has",
"been",
"addressed",
"under",
"various",
"frameworks",
".",
"previously",
",",
"the",
"design",
"of",
"unsupervised",
"hypernymy",
"scores",
"has",
"been",
"extensively",
"studied",
".",
"in",
"contrast",
",",
"sup... |
ACL | Rumor Detection by Exploiting User Credibility Information, Attention and Multi-task Learning | In this study, we propose a new multi-task learning approach for rumor detection and stance classification tasks. This neural network model has a shared layer and two task specific layers. We incorporate the user credibility information into the rumor detection layer, and we also apply attention mechanism in the rumor ... | 444d3f5f43170fda16ad19a8697528fc | 2019 | [
"in this study , we propose a new multi - task learning approach for rumor detection and stance classification tasks .",
"this neural network model has a shared layer and two task specific layers .",
"we incorporate the user credibility information into the rumor detection layer , and we also apply attention me... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
4
]
},
{
"text": "multi - task learning approach",
"nugget_ty... | [
"in",
"this",
"study",
",",
"we",
"propose",
"a",
"new",
"multi",
"-",
"task",
"learning",
"approach",
"for",
"rumor",
"detection",
"and",
"stance",
"classification",
"tasks",
".",
"this",
"neural",
"network",
"model",
"has",
"a",
"shared",
"layer",
"and",
... |
ACL | Memory Consolidation for Contextual Spoken Language Understanding with Dialogue Logistic Inference | Dialogue contexts are proven helpful in the spoken language understanding (SLU) system and they are typically encoded with explicit memory representations. However, most of the previous models learn the context memory with only one objective to maximizing the SLU performance, leaving the context memory under-exploited.... | 374a1ce5f96ebb23df68bac42b3536f8 | 2019 | [
"dialogue contexts are proven helpful in the spoken language understanding ( slu ) system and they are typically encoded with explicit memory representations .",
"however , most of the previous models learn the context memory with only one objective to maximizing the slu performance , leaving the context memory u... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "spoken language understanding ( slu ) system",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"spoken",
"language",
"understanding",
"(",
"slu",
")... | [
"dialogue",
"contexts",
"are",
"proven",
"helpful",
"in",
"the",
"spoken",
"language",
"understanding",
"(",
"slu",
")",
"system",
"and",
"they",
"are",
"typically",
"encoded",
"with",
"explicit",
"memory",
"representations",
".",
"however",
",",
"most",
"of",
... |
ACL | GCAN: Graph-aware Co-Attention Networks for Explainable Fake News Detection on Social Media | This paper solves the fake news detection problem under a more realistic scenario on social media. Given the source short-text tweet and the corresponding sequence of retweet users without text comments, we aim at predicting whether the source tweet is fake or not, and generating explanation by highlighting the evidenc... | 22db1efd804837a17f24eebff0656e04 | 2020 | [
"this paper solves the fake news detection problem under a more realistic scenario on social media .",
"given the source short - text tweet and the corresponding sequence of retweet users without text comments , we aim at predicting whether the source tweet is fake or not , and generating explanation by highlight... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "fake news detection problem",
"nugget_type": "TAK",
"argument_type": "Content",
"tokens": [
"fake",
"news",
"detection",
"problem"
],
"offsets": [
4,
... | [
"this",
"paper",
"solves",
"the",
"fake",
"news",
"detection",
"problem",
"under",
"a",
"more",
"realistic",
"scenario",
"on",
"social",
"media",
".",
"given",
"the",
"source",
"short",
"-",
"text",
"tweet",
"and",
"the",
"corresponding",
"sequence",
"of",
"... |
ACL | Societal Biases in Language Generation: Progress and Challenges | Technology for language generation has advanced rapidly, spurred by advancements in pre-training large models on massive amounts of data and the need for intelligent agents to communicate in a natural manner. While techniques can effectively generate fluent text, they can also produce undesirable societal biases that c... | e45ed2178e242756b987c2331f8432b8 | 2021 | [
"technology for language generation has advanced rapidly , spurred by advancements in pre - training large models on massive amounts of data and the need for intelligent agents to communicate in a natural manner .",
"while techniques can effectively generate fluent text , they can also produce undesirable societa... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "language generation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"language",
"generation"
],
"offsets": [
2,
3
]
}
],
"trigge... | [
"technology",
"for",
"language",
"generation",
"has",
"advanced",
"rapidly",
",",
"spurred",
"by",
"advancements",
"in",
"pre",
"-",
"training",
"large",
"models",
"on",
"massive",
"amounts",
"of",
"data",
"and",
"the",
"need",
"for",
"intelligent",
"agents",
... |
ACL | Complex Word Identification as a Sequence Labelling Task | Complex Word Identification (CWI) is concerned with detection of words in need of simplification and is a crucial first step in a simplification pipeline. It has been shown that reliable CWI systems considerably improve text simplification. However, most CWI systems to date address the task on a word-by-word basis, not... | ff6e27b1b821371b86180c5a309cbedf | 2019 | [
"complex word identification ( cwi ) is concerned with detection of words in need of simplification and is a crucial first step in a simplification pipeline .",
"it has been shown that reliable cwi systems considerably improve text simplification .",
"however , most cwi systems to date address the task on a wor... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "complex word identification",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"complex",
"word",
"identification"
],
"offsets": [
0,
1,
... | [
"complex",
"word",
"identification",
"(",
"cwi",
")",
"is",
"concerned",
"with",
"detection",
"of",
"words",
"in",
"need",
"of",
"simplification",
"and",
"is",
"a",
"crucial",
"first",
"step",
"in",
"a",
"simplification",
"pipeline",
".",
"it",
"has",
"been"... |
ACL | P-Tuning: Prompt Tuning Can Be Comparable to Fine-tuning Across Scales and Tasks | Prompt tuning, which only tunes continuous prompts with a frozen language model, substantially reduces per-task storage and memory usage at training. However, in the context of NLU, prior work reveals that prompt tuning does not perform well for normal-sized pretrained models. We also find that existing methods of prom... | 6605a9c4ff5e6e2b5cf1a61853eb9ffc | 2022 | [
"prompt tuning , which only tunes continuous prompts with a frozen language model , substantially reduces per - task storage and memory usage at training .",
"however , in the context of nlu , prior work reveals that prompt tuning does not perform well for normal - sized pretrained models .",
"we also find that... | [
{
"event_type": "RWS",
"arguments": [
{
"text": "continuous prompts",
"nugget_type": "FEA",
"argument_type": "BaseComponent",
"tokens": [
"continuous",
"prompts"
],
"offsets": [
6,
7
]
},
{
... | [
"prompt",
"tuning",
",",
"which",
"only",
"tunes",
"continuous",
"prompts",
"with",
"a",
"frozen",
"language",
"model",
",",
"substantially",
"reduces",
"per",
"-",
"task",
"storage",
"and",
"memory",
"usage",
"at",
"training",
".",
"however",
",",
"in",
"th... |
ACL | An Investigation of the (In)effectiveness of Counterfactually Augmented Data | While pretrained language models achieve excellent performance on natural language understanding benchmarks, they tend to rely on spurious correlations and generalize poorly to out-of-distribution (OOD) data. Recent work has explored using counterfactually-augmented data (CAD)—data generated by minimally perturbing exa... | a7ce491b1e0e48647b86d3cb7071e580 | 2022 | [
"while pretrained language models achieve excellent performance on natural language understanding benchmarks , they tend to rely on spurious correlations and generalize poorly to out - of - distribution ( ood ) data .",
"recent work has explored using counterfactually - augmented data ( cad ) — data generated by ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "natural language understanding benchmarks",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"natural",
"language",
"understanding",
"benchmarks"
],
"off... | [
"while",
"pretrained",
"language",
"models",
"achieve",
"excellent",
"performance",
"on",
"natural",
"language",
"understanding",
"benchmarks",
",",
"they",
"tend",
"to",
"rely",
"on",
"spurious",
"correlations",
"and",
"generalize",
"poorly",
"to",
"out",
"-",
"o... |
ACL | StructFormer: Joint Unsupervised Induction of Dependency and Constituency Structure from Masked Language Modeling | There are two major classes of natural language grammars — the dependency grammar that models one-to-one correspondences between words and the constituency grammar that models the assembly of one or several corresponded words. While previous unsupervised parsing methods mostly focus on only inducing one class of gramma... | 9acd1c95737a54286eefa982513ab362 | 2021 | [
"there are two major classes of natural language grammars — the dependency grammar that models one - to - one correspondences between words and the constituency grammar that models the assembly of one or several corresponded words .",
"while previous unsupervised parsing methods mostly focus on only inducing one ... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "previous unsupervised parsing methods",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"previous",
"unsupervised",
"parsing",
"methods"
],
"offsets": ... | [
"there",
"are",
"two",
"major",
"classes",
"of",
"natural",
"language",
"grammars",
"—",
"the",
"dependency",
"grammar",
"that",
"models",
"one",
"-",
"to",
"-",
"one",
"correspondences",
"between",
"words",
"and",
"the",
"constituency",
"grammar",
"that",
"mo... |
ACL | Synchronous Double-channel Recurrent Network for Aspect-Opinion Pair Extraction | Opinion entity extraction is a fundamental task in fine-grained opinion mining. Related studies generally extract aspects and/or opinion expressions without recognizing the relations between them. However, the relations are crucial for downstream tasks, including sentiment classification, opinion summarization, etc. In... | 55d34ef75f2b394c1d4057ef89b38dbd | 2020 | [
"opinion entity extraction is a fundamental task in fine - grained opinion mining .",
"related studies generally extract aspects and / or opinion expressions without recognizing the relations between them .",
"however , the relations are crucial for downstream tasks , including sentiment classification , opinio... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "opinion entity extraction",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"opinion",
"entity",
"extraction"
],
"offsets": [
0,
1,
... | [
"opinion",
"entity",
"extraction",
"is",
"a",
"fundamental",
"task",
"in",
"fine",
"-",
"grained",
"opinion",
"mining",
".",
"related",
"studies",
"generally",
"extract",
"aspects",
"and",
"/",
"or",
"opinion",
"expressions",
"without",
"recognizing",
"the",
"re... |
ACL | Dual Supervised Learning for Natural Language Understanding and Generation | Natural language understanding (NLU) and natural language generation (NLG) are both critical research topics in the NLP and dialogue fields. Natural language understanding is to extract the core semantic meaning from the given utterances, while natural language generation is opposite, of which the goal is to construct ... | 8a0d76672ad81e4bb00d11b4d1f5f63d | 2,019 | [
"natural language understanding ( nlu ) and natural language generation ( nlg ) are both critical research topics in the nlp and dialogue fields .",
"natural language understanding is to extract the core semantic meaning from the given utterances , while natural language generation is opposite , of which the goal... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "natural language understanding",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"natural",
"language",
"understanding"
],
"offsets": [
0,
1,
... | [
"natural",
"language",
"understanding",
"(",
"nlu",
")",
"and",
"natural",
"language",
"generation",
"(",
"nlg",
")",
"are",
"both",
"critical",
"research",
"topics",
"in",
"the",
"nlp",
"and",
"dialogue",
"fields",
".",
"natural",
"language",
"understanding",
... |
ACL | SpanNER: Named Entity Re-/Recognition as Span Prediction | Recent years have seen the paradigm shift of Named Entity Recognition (NER) systems from sequence labeling to span prediction. Despite its preliminary effectiveness, the span prediction model’s architectural bias has not been fully understood. In this paper, we first investigate the strengths and weaknesses when the sp... | c4692ad7678de9ee1a07ec87166be15c | 2,021 | [
"recent years have seen the paradigm shift of named entity recognition ( ner ) systems from sequence labeling to span prediction .",
"despite its preliminary effectiveness , the span prediction model ’ s architectural bias has not been fully understood .",
"in this paper , we first investigate the strengths and... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "paradigm shift of named entity recognition systems",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"paradigm",
"shift",
"of",
"named",
"entity",
"... | [
"recent",
"years",
"have",
"seen",
"the",
"paradigm",
"shift",
"of",
"named",
"entity",
"recognition",
"(",
"ner",
")",
"systems",
"from",
"sequence",
"labeling",
"to",
"span",
"prediction",
".",
"despite",
"its",
"preliminary",
"effectiveness",
",",
"the",
"s... |
ACL | Attention Temperature Matters in Abstractive Summarization Distillation | Recent progress of abstractive text summarization largely relies on large pre-trained sequence-to-sequence Transformer models, which are computationally expensive. This paper aims to distill these large models into smaller ones for faster inference and with minimal performance loss. Pseudo-labeling based methods are po... | 47045d2d87bb94f3104a1e9278b874dd | 2,022 | [
"recent progress of abstractive text summarization largely relies on large pre - trained sequence - to - sequence transformer models , which are computationally expensive .",
"this paper aims to distill these large models into smaller ones for faster inference and with minimal performance loss .",
"pseudo - lab... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "large pre - trained sequence - to - sequence transformer models",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"large",
"pre",
"-",
"trained",
"sequence",... | [
"recent",
"progress",
"of",
"abstractive",
"text",
"summarization",
"largely",
"relies",
"on",
"large",
"pre",
"-",
"trained",
"sequence",
"-",
"to",
"-",
"sequence",
"transformer",
"models",
",",
"which",
"are",
"computationally",
"expensive",
".",
"this",
"pap... |
ACL | Latent Variable Sentiment Grammar | Neural models have been investigated for sentiment classification over constituent trees. They learn phrase composition automatically by encoding tree structures but do not explicitly model sentiment composition, which requires to encode sentiment class labels. To this end, we investigate two formalisms with deep senti... | 58eab4c099bc0b85524af3433747c4f3 | 2,019 | [
"neural models have been investigated for sentiment classification over constituent trees .",
"they learn phrase composition automatically by encoding tree structures but do not explicitly model sentiment composition , which requires to encode sentiment class labels .",
"to this end , we investigate two formali... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"neural",
"models"
],
"offsets": [
0,
1
]
}
],
"trigger": {
... | [
"neural",
"models",
"have",
"been",
"investigated",
"for",
"sentiment",
"classification",
"over",
"constituent",
"trees",
".",
"they",
"learn",
"phrase",
"composition",
"automatically",
"by",
"encoding",
"tree",
"structures",
"but",
"do",
"not",
"explicitly",
"model... |
ACL | Keyphrase Generation for Scientific Document Retrieval | Sequence-to-sequence models have lead to significant progress in keyphrase generation, but it remains unknown whether they are reliable enough to be beneficial for document retrieval. This study provides empirical evidence that such models can significantly improve retrieval performance, and introduces a new extrinsic ... | 1addbd7e0d5dd72a4deb3bf14132c890 | 2,020 | [
"sequence - to - sequence models have lead to significant progress in keyphrase generation , but it remains unknown whether they are reliable enough to be beneficial for document retrieval .",
"this study provides empirical evidence that such models can significantly improve retrieval performance , and introduces... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "sequence - to - sequence models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"sequence",
"-",
"to",
"-",
"sequence",
"models"
],
... | [
"sequence",
"-",
"to",
"-",
"sequence",
"models",
"have",
"lead",
"to",
"significant",
"progress",
"in",
"keyphrase",
"generation",
",",
"but",
"it",
"remains",
"unknown",
"whether",
"they",
"are",
"reliable",
"enough",
"to",
"be",
"beneficial",
"for",
"docume... |
ACL | Knowledge Neurons in Pretrained Transformers | Large-scale pretrained language models are surprisingly good at recalling factual knowledge presented in the training corpus. In this paper, we present preliminary studies on how factual knowledge is stored in pretrained Transformers by introducing the concept of knowledge neurons. Specifically, we examine the fill-in-... | c560a1da6fb8f734fc72f4bfe1b0defd | 2,022 | [
"large - scale pretrained language models are surprisingly good at recalling factual knowledge presented in the training corpus .",
"in this paper , we present preliminary studies on how factual knowledge is stored in pretrained transformers by introducing the concept of knowledge neurons .",
"specifically , we... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "large - scale pretrained language models",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"large",
"-",
"scale",
"pretrained",
"language",
"models"... | [
"large",
"-",
"scale",
"pretrained",
"language",
"models",
"are",
"surprisingly",
"good",
"at",
"recalling",
"factual",
"knowledge",
"presented",
"in",
"the",
"training",
"corpus",
".",
"in",
"this",
"paper",
",",
"we",
"present",
"preliminary",
"studies",
"on",... |
ACL | Textomics: A Dataset for Genomics Data Summary Generation | Summarizing biomedical discovery from genomics data using natural languages is an essential step in biomedical research but is mostly done manually. Here, we introduce Textomics, a novel dataset of genomics data description, which contains 22,273 pairs of genomics data matrices and their summaries. Each summary is writ... | 5d637ba658060fce2f3152045d00e0b8 | 2,022 | [
"summarizing biomedical discovery from genomics data using natural languages is an essential step in biomedical research but is mostly done manually .",
"here , we introduce textomics , a novel dataset of genomics data description , which contains 22 , 273 pairs of genomics data matrices and their summaries .",
... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
24
]
},
{
"text": "dataset of genomics data description",
"nu... | [
"summarizing",
"biomedical",
"discovery",
"from",
"genomics",
"data",
"using",
"natural",
"languages",
"is",
"an",
"essential",
"step",
"in",
"biomedical",
"research",
"but",
"is",
"mostly",
"done",
"manually",
".",
"here",
",",
"we",
"introduce",
"textomics",
"... |
ACL | Event-Event Relation Extraction using Probabilistic Box Embedding | To understand a story with multiple events, it is important to capture the proper relations across these events. However, existing event relation extraction (ERE) framework regards it as a multi-class classification task and do not guarantee any coherence between different relation types, such as anti-symmetry. If a ph... | c352299e69af1de389fff6e8badcfdbe | 2,022 | [
"to understand a story with multiple events , it is important to capture the proper relations across these events .",
"however , existing event relation extraction ( ere ) framework regards it as a multi - class classification task and do not guarantee any coherence between different relation types , such as anti... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "proper relations",
"nugget_type": "FEA",
"argument_type": "Target",
"tokens": [
"proper",
"relations"
],
"offsets": [
14,
15
]
}
],
"trigger": ... | [
"to",
"understand",
"a",
"story",
"with",
"multiple",
"events",
",",
"it",
"is",
"important",
"to",
"capture",
"the",
"proper",
"relations",
"across",
"these",
"events",
".",
"however",
",",
"existing",
"event",
"relation",
"extraction",
"(",
"ere",
")",
"fr... |
ACL | Can You Tell Me How to Get Past Sesame Street? Sentence-Level Pretraining Beyond Language Modeling | Natural language understanding has recently seen a surge of progress with the use of sentence encoders like ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2019) which are pretrained on variants of language modeling. We conduct the first large-scale systematic study of candidate pretraining tasks, comparing 19 dif... | c8cbba03cfca892cadf1540c523518c9 | 2,019 | [
"natural language understanding has recently seen a surge of progress with the use of sentence encoders like elmo ( peters et al . , 2018a ) and bert ( devlin et al . , 2019 ) which are pretrained on variants of language modeling .",
"we conduct the first large - scale systematic study of candidate pretraining ta... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "natural language understanding",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"natural",
"language",
"understanding"
],
"offsets": [
0,
1,
... | [
"natural",
"language",
"understanding",
"has",
"recently",
"seen",
"a",
"surge",
"of",
"progress",
"with",
"the",
"use",
"of",
"sentence",
"encoders",
"like",
"elmo",
"(",
"peters",
"et",
"al",
".",
",",
"2018a",
")",
"and",
"bert",
"(",
"devlin",
"et",
... |
ACL | Language-agnostic BERT Sentence Embedding | While BERT is an effective method for learning monolingual sentence embeddings for semantic similarity and embedding based transfer learning BERT based cross-lingual sentence embeddings have yet to be explored. We systematically investigate methods for learning multilingual sentence embeddings by combining the best met... | 622ea70dbb0987f3801e7ebe266524fd | 2,022 | [
"while bert is an effective method for learning monolingual sentence embeddings for semantic similarity and embedding based transfer learning bert based cross - lingual sentence embeddings have yet to be explored .",
"we systematically investigate methods for learning multilingual sentence embeddings by combining... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
32
]
},
{
"text": "investigate",
"nugget_type": "E-PUR",
... | [
"while",
"bert",
"is",
"an",
"effective",
"method",
"for",
"learning",
"monolingual",
"sentence",
"embeddings",
"for",
"semantic",
"similarity",
"and",
"embedding",
"based",
"transfer",
"learning",
"bert",
"based",
"cross",
"-",
"lingual",
"sentence",
"embeddings",
... |
ACL | DialFact: A Benchmark for Fact-Checking in Dialogue | Fact-checking is an essential tool to mitigate the spread of misinformation and disinformation. We introduce the task of fact-checking in dialogue, which is a relatively unexplored area. We construct DialFact, a testing benchmark dataset of 22,245 annotated conversational claims, paired with pieces of evidence from Wik... | 46c56f1b119b3408258e36cca018c3e7 | 2,022 | [
"fact - checking is an essential tool to mitigate the spread of misinformation and disinformation .",
"we introduce the task of fact - checking in dialogue , which is a relatively unexplored area .",
"we construct dialfact , a testing benchmark dataset of 22 , 245 annotated conversational claims , paired with p... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "fact - checking",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"fact",
"-",
"checking"
],
"offsets": [
0,
1,
2
]
}
... | [
"fact",
"-",
"checking",
"is",
"an",
"essential",
"tool",
"to",
"mitigate",
"the",
"spread",
"of",
"misinformation",
"and",
"disinformation",
".",
"we",
"introduce",
"the",
"task",
"of",
"fact",
"-",
"checking",
"in",
"dialogue",
",",
"which",
"is",
"a",
"... |
ACL | Aspect-Category-Opinion-Sentiment Quadruple Extraction with Implicit Aspects and Opinions | Product reviews contain a large number of implicit aspects and implicit opinions. However, most of the existing studies in aspect-based sentiment analysis ignored this problem. In this work, we introduce a new task, named Aspect-Category-Opinion-Sentiment (ACOS) Quadruple Extraction, with the goal to extract all aspect... | b5d2c705540eace1eb586fb3a94c6820 | 2,021 | [
"product reviews contain a large number of implicit aspects and implicit opinions .",
"however , most of the existing studies in aspect - based sentiment analysis ignored this problem .",
"in this work , we introduce a new task , named aspect - category - opinion - sentiment ( acos ) quadruple extraction , with... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
34
]
},
{
"text": "extract",
"nugget_type": "E-PUR",
... | [
"product",
"reviews",
"contain",
"a",
"large",
"number",
"of",
"implicit",
"aspects",
"and",
"implicit",
"opinions",
".",
"however",
",",
"most",
"of",
"the",
"existing",
"studies",
"in",
"aspect",
"-",
"based",
"sentiment",
"analysis",
"ignored",
"this",
"pro... |
ACL | Character-Level Translation with Self-attention | We explore the suitability of self-attention models for character-level neural machine translation. We test the standard transformer model, as well as a novel variant in which the encoder block combines information from nearby characters using convolutions. We perform extensive experiments on WMT and UN datasets, testi... | 4cfcaf4868aa2a1780fab08c198d598a | 2,020 | [
"we explore the suitability of self - attention models for character - level neural machine translation .",
"we test the standard transformer model , as well as a novel variant in which the encoder block combines information from nearby characters using convolutions .",
"we perform extensive experiments on wmt ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "suitability of self - attention models for character - level neural machine translation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"suitability",
"of",
"self",
"... | [
"we",
"explore",
"the",
"suitability",
"of",
"self",
"-",
"attention",
"models",
"for",
"character",
"-",
"level",
"neural",
"machine",
"translation",
".",
"we",
"test",
"the",
"standard",
"transformer",
"model",
",",
"as",
"well",
"as",
"a",
"novel",
"varia... |
ACL | Multi-Channel Graph Neural Network for Entity Alignment | Entity alignment typically suffers from the issues of structural heterogeneity and limited seed alignments. In this paper, we propose a novel Multi-channel Graph Neural Network model (MuGNN) to learn alignment-oriented knowledge graph (KG) embeddings by robustly encoding two KGs via multiple channels. Each channel enco... | bb843ff9fe38238959e097b4ea0e2b6e | 2,019 | [
"entity alignment typically suffers from the issues of structural heterogeneity and limited seed alignments .",
"in this paper , we propose a novel multi - channel graph neural network model ( mugnn ) to learn alignment - oriented knowledge graph ( kg ) embeddings by robustly encoding two kgs via multiple channel... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "entity alignment",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"entity",
"alignment"
],
"offsets": [
0,
1
]
},
{
"text"... | [
"entity",
"alignment",
"typically",
"suffers",
"from",
"the",
"issues",
"of",
"structural",
"heterogeneity",
"and",
"limited",
"seed",
"alignments",
".",
"in",
"this",
"paper",
",",
"we",
"propose",
"a",
"novel",
"multi",
"-",
"channel",
"graph",
"neural",
"ne... |
ACL | Improving Generalizability in Implicitly Abusive Language Detection with Concept Activation Vectors | Robustness of machine learning models on ever-changing real-world data is critical, especially for applications affecting human well-being such as content moderation. New kinds of abusive language continually emerge in online discussions in response to current events (e.g., COVID-19), and the deployed abuse detection s... | 584870ad7a95136a4a1838fa9dd3b301 | 2,022 | [
"robustness of machine learning models on ever - changing real - world data is critical , especially for applications affecting human well - being such as content moderation .",
"new kinds of abusive language continually emerge in online discussions in response to current events ( e . g . , covid - 19 ) , and the... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "robustness of machine learning models",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"robustness",
"of",
"machine",
"learning",
"models"
],
... | [
"robustness",
"of",
"machine",
"learning",
"models",
"on",
"ever",
"-",
"changing",
"real",
"-",
"world",
"data",
"is",
"critical",
",",
"especially",
"for",
"applications",
"affecting",
"human",
"well",
"-",
"being",
"such",
"as",
"content",
"moderation",
"."... |
ACL | Fine-Grained Temporal Relation Extraction | We present a novel semantic framework for modeling temporal relations and event durations that maps pairs of events to real-valued scales. We use this framework to construct the largest temporal relations dataset to date, covering the entirety of the Universal Dependencies English Web Treebank. We use this dataset to t... | 7d89f3549400f1df3c622fc2290859c7 | 2,019 | [
"we present a novel semantic framework for modeling temporal relations and event durations that maps pairs of events to real - valued scales .",
"we use this framework to construct the largest temporal relations dataset to date , covering the entirety of the universal dependencies english web treebank .",
"we u... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "semantic framework",
"nugget_type": "APP",
... | [
"we",
"present",
"a",
"novel",
"semantic",
"framework",
"for",
"modeling",
"temporal",
"relations",
"and",
"event",
"durations",
"that",
"maps",
"pairs",
"of",
"events",
"to",
"real",
"-",
"valued",
"scales",
".",
"we",
"use",
"this",
"framework",
"to",
"con... |
ACL | Element Intervention for Open Relation Extraction | Open relation extraction aims to cluster relation instances referring to the same underlying relation, which is a critical step for general relation extraction. Current OpenRE models are commonly trained on the datasets generated from distant supervision, which often results in instability and makes the model easily co... | 317ca5bbe61e75f9ee63eb15413955cf | 2,021 | [
"open relation extraction aims to cluster relation instances referring to the same underlying relation , which is a critical step for general relation extraction .",
"current openre models are commonly trained on the datasets generated from distant supervision , which often results in instability and makes the mo... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "open relation extraction",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"open",
"relation",
"extraction"
],
"offsets": [
0,
1,
2
... | [
"open",
"relation",
"extraction",
"aims",
"to",
"cluster",
"relation",
"instances",
"referring",
"to",
"the",
"same",
"underlying",
"relation",
",",
"which",
"is",
"a",
"critical",
"step",
"for",
"general",
"relation",
"extraction",
".",
"current",
"openre",
"mo... |
ACL | Zoom Out and Observe: News Environment Perception for Fake News Detection | Fake news detection is crucial for preventing the dissemination of misinformation on social media. To differentiate fake news from real ones, existing methods observe the language patterns of the news post and “zoom in” to verify its content with knowledge sources or check its readers’ replies. However, these methods n... | 523025f3e2d37d684ca3ea94b9b96fe6 | 2,022 | [
"fake news detection is crucial for preventing the dissemination of misinformation on social media .",
"to differentiate fake news from real ones , existing methods observe the language patterns of the news post and “ zoom in ” to verify its content with knowledge sources or check its readers ’ replies .",
"how... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "fake news detection",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"fake",
"news",
"detection"
],
"offsets": [
0,
1,
2
]
... | [
"fake",
"news",
"detection",
"is",
"crucial",
"for",
"preventing",
"the",
"dissemination",
"of",
"misinformation",
"on",
"social",
"media",
".",
"to",
"differentiate",
"fake",
"news",
"from",
"real",
"ones",
",",
"existing",
"methods",
"observe",
"the",
"languag... |
ACL | What determines the order of adjectives in English? Comparing efficiency-based theories using dependency treebanks | We take up the scientific question of what determines the preferred order of adjectives in English, in phrases such as big blue box where multiple adjectives modify a following noun. We implement and test four quantitative theories, all of which are theoretically motivated in terms of efficiency in human language produ... | b0c35e324b9c21de3365504800986b2b | 2,020 | [
"we take up the scientific question of what determines the preferred order of adjectives in english , in phrases such as big blue box where multiple adjectives modify a following noun .",
"we implement and test four quantitative theories , all of which are theoretically motivated in terms of efficiency in human l... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "scientific question of what determines the prefer... | [
"we",
"take",
"up",
"the",
"scientific",
"question",
"of",
"what",
"determines",
"the",
"preferred",
"order",
"of",
"adjectives",
"in",
"english",
",",
"in",
"phrases",
"such",
"as",
"big",
"blue",
"box",
"where",
"multiple",
"adjectives",
"modify",
"a",
"fo... |
ACL | Joint Modelling of Emotion and Abusive Language Detection | The rise of online communication platforms has been accompanied by some undesirable effects, such as the proliferation of aggressive and abusive behaviour online. Aiming to tackle this problem, the natural language processing (NLP) community has experimented with a range of techniques for abuse detection. While achievi... | f6be0d3292e0b8579348b70dde104c62 | 2,020 | [
"the rise of online communication platforms has been accompanied by some undesirable effects , such as the proliferation of aggressive and abusive behaviour online .",
"aiming to tackle this problem , the natural language processing ( nlp ) community has experimented with a range of techniques for abuse detection... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "aggressive and abusive behaviour online",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"aggressive",
"and",
"abusive",
"behaviour",
"online"
],
... | [
"the",
"rise",
"of",
"online",
"communication",
"platforms",
"has",
"been",
"accompanied",
"by",
"some",
"undesirable",
"effects",
",",
"such",
"as",
"the",
"proliferation",
"of",
"aggressive",
"and",
"abusive",
"behaviour",
"online",
".",
"aiming",
"to",
"tackl... |
ACL | ∞-former: Infinite Memory Transformer | Transformers are unable to model long-term memories effectively, since the amount of computation they need to perform grows with the context length. While variations of efficient transformers have been proposed, they all have a finite memory capacity and are forced to drop old information. In this paper, we propose the... | 509e0320a344d2dd55472134337ce2d5 | 2,022 | [
"transformers are unable to model long - term memories effectively , since the amount of computation they need to perform grows with the context length .",
"while variations of efficient transformers have been proposed , they all have a finite memory capacity and are forced to drop old information .",
"in this ... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "transformers",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"transformers"
],
"offsets": [
0
]
},
{
"text": "unable",
"nugget_type":... | [
"transformers",
"are",
"unable",
"to",
"model",
"long",
"-",
"term",
"memories",
"effectively",
",",
"since",
"the",
"amount",
"of",
"computation",
"they",
"need",
"to",
"perform",
"grows",
"with",
"the",
"context",
"length",
".",
"while",
"variations",
"of",
... |
ACL | An Empirical Comparison of Unsupervised Constituency Parsing Methods | Unsupervised constituency parsing aims to learn a constituency parser from a training corpus without parse tree annotations. While many methods have been proposed to tackle the problem, including statistical and neural methods, their experimental results are often not directly comparable due to discrepancies in dataset... | 81c4af512160865233ec3674b3c0eb05 | 2,020 | [
"unsupervised constituency parsing aims to learn a constituency parser from a training corpus without parse tree annotations .",
"while many methods have been proposed to tackle the problem , including statistical and neural methods , their experimental results are often not directly comparable due to discrepanci... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
62
]
},
{
"text": "experimental settings",
"nugget_type": "... | [
"unsupervised",
"constituency",
"parsing",
"aims",
"to",
"learn",
"a",
"constituency",
"parser",
"from",
"a",
"training",
"corpus",
"without",
"parse",
"tree",
"annotations",
".",
"while",
"many",
"methods",
"have",
"been",
"proposed",
"to",
"tackle",
"the",
"pr... |
ACL | Understanding Advertisements with BERT | We consider a task based on CVPR 2018 challenge dataset on advertisement (Ad) understanding. The task involves detecting the viewer’s interpretation of an Ad image captured as text. Recent results have shown that the embedded scene-text in the image holds a vital cue for this task. Motivated by this, we fine-tune the b... | d2a60f6679d1df3cb14fb78eafec83b3 | 2,020 | [
"we consider a task based on cvpr 2018 challenge dataset on advertisement ( ad ) understanding .",
"the task involves detecting the viewer ’ s interpretation of an ad image captured as text .",
"recent results have shown that the embedded scene - text in the image holds a vital cue for this task .",
"motivate... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "task",
"nugget_type": "APP",
"arg... | [
"we",
"consider",
"a",
"task",
"based",
"on",
"cvpr",
"2018",
"challenge",
"dataset",
"on",
"advertisement",
"(",
"ad",
")",
"understanding",
".",
"the",
"task",
"involves",
"detecting",
"the",
"viewer",
"’",
"s",
"interpretation",
"of",
"an",
"ad",
"image",... |
ACL | Semantic Parsing for English as a Second Language | This paper is concerned with semantic parsing for English as a second language (ESL). Motivated by the theoretical emphasis on the learning challenges that occur at the syntax-semantics interface during second language acquisition, we formulate the task based on the divergence between literal and intended meanings. We ... | f2d306cdad21bd8aed1b6871f87cf64e | 2,020 | [
"this paper is concerned with semantic parsing for english as a second language ( esl ) .",
"motivated by the theoretical emphasis on the learning challenges that occur at the syntax - semantics interface during second language acquisition , we formulate the task based on the divergence between literal and intend... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "semantic parsing",
"nugget_type": "APP",
"argument_type": "Content",
"tokens": [
"semantic",
"parsing"
],
"offsets": [
5,
6
]
},
{
"text"... | [
"this",
"paper",
"is",
"concerned",
"with",
"semantic",
"parsing",
"for",
"english",
"as",
"a",
"second",
"language",
"(",
"esl",
")",
".",
"motivated",
"by",
"the",
"theoretical",
"emphasis",
"on",
"the",
"learning",
"challenges",
"that",
"occur",
"at",
"th... |
ACL | Multimodal Abstractive Summarization for How2 Videos | In this paper, we study abstractive summarization for open-domain videos. Unlike the traditional text news summarization, the goal is less to “compress” text information but rather to provide a fluent textual summary of information that has been collected and fused from different source modalities, in our case video an... | 95cc506615659a0b7eacf8d53901a129 | 2,019 | [
"in this paper , we study abstractive summarization for open - domain videos .",
"unlike the traditional text news summarization , the goal is less to “ compress ” text information but rather to provide a fluent textual summary of information that has been collected and fused from different source modalities , in... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "abstractive summarization for open - domain videos",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"abstractive",
"summarization",
"for",
"open",
"-",
... | [
"in",
"this",
"paper",
",",
"we",
"study",
"abstractive",
"summarization",
"for",
"open",
"-",
"domain",
"videos",
".",
"unlike",
"the",
"traditional",
"text",
"news",
"summarization",
",",
"the",
"goal",
"is",
"less",
"to",
"“",
"compress",
"”",
"text",
"... |
ACL | Abstractive Text Summarization Based on Deep Learning and Semantic Content Generalization | This work proposes a novel framework for enhancing abstractive text summarization based on the combination of deep learning techniques along with semantic data transformations. Initially, a theoretical model for semantic-based text generalization is introduced and used in conjunction with a deep encoder-decoder archite... | 9569a2d6e1264ced2bf5478525047896 | 2,019 | [
"this work proposes a novel framework for enhancing abstractive text summarization based on the combination of deep learning techniques along with semantic data transformations .",
"initially , a theoretical model for semantic - based text generalization is introduced and used in conjunction with a deep encoder -... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "framework",
"nugget_type": "APP",
"argument_type": "Content",
"tokens": [
"framework"
],
"offsets": [
5
]
},
{
"text": "enhancing",
"nugget_type": "E... | [
"this",
"work",
"proposes",
"a",
"novel",
"framework",
"for",
"enhancing",
"abstractive",
"text",
"summarization",
"based",
"on",
"the",
"combination",
"of",
"deep",
"learning",
"techniques",
"along",
"with",
"semantic",
"data",
"transformations",
".",
"initially",
... |
ACL | BERTRAM: Improved Word Embeddings Have Big Impact on Contextualized Model Performance | Pretraining deep language models has led to large performance gains in NLP. Despite this success, Schick and Schütze (2020) recently showed that these models struggle to understand rare words. For static word embeddings, this problem has been addressed by separately learning representations for rare words. In this work... | 7a6d62211f5e4bbc966f2f1bda2f58f3 | 2,020 | [
"pretraining deep language models has led to large performance gains in nlp .",
"despite this success , schick and schutze ( 2020 ) recently showed that these models struggle to understand rare words .",
"for static word embeddings , this problem has been addressed by separately learning representations for rar... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "pretraining deep language models",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"pretraining",
"deep",
"language",
"models"
],
"offsets": [
... | [
"pretraining",
"deep",
"language",
"models",
"has",
"led",
"to",
"large",
"performance",
"gains",
"in",
"nlp",
".",
"despite",
"this",
"success",
",",
"schick",
"and",
"schutze",
"(",
"2020",
")",
"recently",
"showed",
"that",
"these",
"models",
"struggle",
... |
ACL | Dense-Caption Matching and Frame-Selection Gating for Temporal Localization in VideoQA | Videos convey rich information. Dynamic spatio-temporal relationships between people/objects, and diverse multimodal events are present in a video clip. Hence, it is important to develop automated models that can accurately extract such information from videos. Answering questions on videos is one of the tasks which ca... | e99d6a19cbc7417dd36a4c5f4fdf2ef1 | 2,020 | [
"videos convey rich information .",
"dynamic spatio - temporal relationships between people / objects , and diverse multimodal events are present in a video clip .",
"hence , it is important to develop automated models that can accurately extract such information from videos .",
"answering questions on videos... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "answering questions",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"answering",
"questions"
],
"offsets": [
44,
45
]
},
{
... | [
"videos",
"convey",
"rich",
"information",
".",
"dynamic",
"spatio",
"-",
"temporal",
"relationships",
"between",
"people",
"/",
"objects",
",",
"and",
"diverse",
"multimodal",
"events",
"are",
"present",
"in",
"a",
"video",
"clip",
".",
"hence",
",",
"it",
... |
ACL | Robust Neural Machine Translation with Doubly Adversarial Inputs | Neural machine translation (NMT) often suffers from the vulnerability to noisy perturbations in the input. We propose an approach to improving the robustness of NMT models, which consists of two parts: (1) attack the translation model with adversarial source examples; (2) defend the translation model with adversarial t... | 4791459d502ca46e03c059e741bb650a | 2,019 | [
"neural machine translation ( nmt ) often suffers from the vulnerability to noisy perturbations in the input .",
"we propose an approach to improving the robustness of nmt models , which consists of two parts : ( 1 ) attack the translation model with adversarial source examples ; ( 2 ) defend the translation mode... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "nmt",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"nmt"
],
"offsets": [
4
]
}
],
"trigger": {
"text": "suffers",
"tokens": [
"... | [
"neural",
"machine",
"translation",
"(",
"nmt",
")",
"often",
"suffers",
"from",
"the",
"vulnerability",
"to",
"noisy",
"perturbations",
"in",
"the",
"input",
".",
"we",
"propose",
"an",
"approach",
"to",
"improving",
"the",
"robustness",
"of",
"nmt",
"models"... |
ACL | CTFN: Hierarchical Learning for Multimodal Sentiment Analysis Using Coupled-Translation Fusion Network | Multimodal sentiment analysis is the challenging research area that attends to the fusion of multiple heterogeneous modalities. The main challenge is the occurrence of some missing modalities during the multimodal fusion procedure. However, the existing techniques require all modalities as input, thus are sensitive to ... | 49e198836766c746b79d4f45d7f79de2 | 2,021 | [
"multimodal sentiment analysis is the challenging research area that attends to the fusion of multiple heterogeneous modalities .",
"the main challenge is the occurrence of some missing modalities during the multimodal fusion procedure .",
"however , the existing techniques require all modalities as input , thu... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "multimodal sentiment analysis",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"multimodal",
"sentiment",
"analysis"
],
"offsets": [
0,
1,
... | [
"multimodal",
"sentiment",
"analysis",
"is",
"the",
"challenging",
"research",
"area",
"that",
"attends",
"to",
"the",
"fusion",
"of",
"multiple",
"heterogeneous",
"modalities",
".",
"the",
"main",
"challenge",
"is",
"the",
"occurrence",
"of",
"some",
"missing",
... |
ACL | Key Fact as Pivot: A Two-Stage Model for Low Resource Table-to-Text Generation | Table-to-text generation aims to translate the structured data into the unstructured text. Most existing methods adopt the encoder-decoder framework to learn the transformation, which requires large-scale training samples. However, the lack of large parallel data is a major practical problem for many domains. In this w... | 30cfcb3c761cba77a4380e0d12be777a | 2,019 | [
"table - to - text generation aims to translate the structured data into the unstructured text .",
"most existing methods adopt the encoder - decoder framework to learn the transformation , which requires large - scale training samples .",
"however , the lack of large parallel data is a major practical problem ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "table - to - text generation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"table",
"-",
"to",
"-",
"text",
"generation"
],
"off... | [
"table",
"-",
"to",
"-",
"text",
"generation",
"aims",
"to",
"translate",
"the",
"structured",
"data",
"into",
"the",
"unstructured",
"text",
".",
"most",
"existing",
"methods",
"adopt",
"the",
"encoder",
"-",
"decoder",
"framework",
"to",
"learn",
"the",
"t... |
ACL | Budgeted Policy Learning for Task-Oriented Dialogue Systems | This paper presents a new approach that extends Deep Dyna-Q (DDQ) by incorporating a Budget-Conscious Scheduling (BCS) to best utilize a fixed, small amount of user interactions (budget) for learning task-oriented dialogue agents. BCS consists of (1) a Poisson-based global scheduler to allocate budget over different st... | c99aa592f695a4ce6f4c0cb620a9c838 | 2,019 | [
"this paper presents a new approach that extends deep dyna - q ( ddq ) by incorporating a budget - conscious scheduling ( bcs ) to best utilize a fixed , small amount of user interactions ( budget ) for learning task - oriented dialogue agents .",
"bcs consists of ( 1 ) a poisson - based global scheduler to alloc... | [
{
"event_type": "MDS",
"arguments": [
{
"text": "extends",
"nugget_type": "E-PUR",
"argument_type": "Target",
"tokens": [
"extends"
],
"offsets": [
7
]
},
{
"text": "budget - conscious scheduling",
... | [
"this",
"paper",
"presents",
"a",
"new",
"approach",
"that",
"extends",
"deep",
"dyna",
"-",
"q",
"(",
"ddq",
")",
"by",
"incorporating",
"a",
"budget",
"-",
"conscious",
"scheduling",
"(",
"bcs",
")",
"to",
"best",
"utilize",
"a",
"fixed",
",",
"small",... |
ACL | Improving Time Sensitivity for Question Answering over Temporal Knowledge Graphs | Question answering over temporal knowledge graphs (KGs) efficiently uses facts contained in a temporal KG, which records entity relations and when they occur in time, to answer natural language questions (e.g., “Who was the president of the US before Obama?”). These questions often involve three time-related challenges... | c68d15ca0aff65f5e30d9a85103ed1c1 | 2,022 | [
"question answering over temporal knowledge graphs ( kgs ) efficiently uses facts contained in a temporal kg , which records entity relations and when they occur in time , to answer natural language questions ( e . g . , “ who was the president of the us before obama ? ” ) .",
"these questions often involve three... | [
{
"event_type": "RWS",
"arguments": [
{
"text": "answer",
"nugget_type": "E-PUR",
"argument_type": "Target",
"tokens": [
"answer"
],
"offsets": [
30
]
},
{
"text": "question answering over temporal knowledg... | [
"question",
"answering",
"over",
"temporal",
"knowledge",
"graphs",
"(",
"kgs",
")",
"efficiently",
"uses",
"facts",
"contained",
"in",
"a",
"temporal",
"kg",
",",
"which",
"records",
"entity",
"relations",
"and",
"when",
"they",
"occur",
"in",
"time",
",",
... |
ACL | ReadOnce Transformers: Reusable Representations of Text for Transformers | We present ReadOnce Transformers, an approach to convert a transformer-based model into one that can build an information-capturing, task-independent, and compressed representation of text. The resulting representation is reusable across different examples and tasks, thereby requiring a document shared across many exam... | d3c03d8ba977f629f444ba03cf2b0090 | 2,021 | [
"we present readonce transformers , an approach to convert a transformer - based model into one that can build an information - capturing , task - independent , and compressed representation of text .",
"the resulting representation is reusable across different examples and tasks , thereby requiring a document sh... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "readonce transformers",
"nugget_type": "APP... | [
"we",
"present",
"readonce",
"transformers",
",",
"an",
"approach",
"to",
"convert",
"a",
"transformer",
"-",
"based",
"model",
"into",
"one",
"that",
"can",
"build",
"an",
"information",
"-",
"capturing",
",",
"task",
"-",
"independent",
",",
"and",
"compre... |
ACL | Evidence-based Factual Error Correction | This paper introduces the task of factual error correction: performing edits to a claim so that the generated rewrite is better supported by evidence. This extends the well-studied task of fact verification by providing a mechanism to correct written texts that are refuted or only partially supported by evidence. We de... | 86daca8f61caf30c17cef35548132805 | 2,021 | [
"this paper introduces the task of factual error correction : performing edits to a claim so that the generated rewrite is better supported by evidence .",
"this extends the well - studied task of fact verification by providing a mechanism to correct written texts that are refuted or only partially supported by e... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "task of factual error correction",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"task",
"of",
"factual",
"error",
"correction"
],
"offsets"... | [
"this",
"paper",
"introduces",
"the",
"task",
"of",
"factual",
"error",
"correction",
":",
"performing",
"edits",
"to",
"a",
"claim",
"so",
"that",
"the",
"generated",
"rewrite",
"is",
"better",
"supported",
"by",
"evidence",
".",
"this",
"extends",
"the",
"... |
ACL | Discourse Representation Parsing for Sentences and Documents | We introduce a novel semantic parsing task based on Discourse Representation Theory (DRT; Kamp and Reyle 1993). Our model operates over Discourse Representation Tree Structures which we formally define for sentences and documents. We present a general framework for parsing discourse structures of arbitrary length and g... | b60ca33e6d68fdb80d3fea540e784655 | 2,019 | [
"we introduce a novel semantic parsing task based on discourse representation theory ( drt ; kamp and reyle 1993 ) .",
"our model operates over discourse representation tree structures which we formally define for sentences and documents .",
"we present a general framework for parsing discourse structures of ar... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "semantic parsing task",
"nugget_type": "T... | [
"we",
"introduce",
"a",
"novel",
"semantic",
"parsing",
"task",
"based",
"on",
"discourse",
"representation",
"theory",
"(",
"drt",
";",
"kamp",
"and",
"reyle",
"1993",
")",
".",
"our",
"model",
"operates",
"over",
"discourse",
"representation",
"tree",
"struc... |
ACL | Unsupervised FAQ Retrieval with Question Generation and BERT | We focus on the task of Frequently Asked Questions (FAQ) retrieval. A given user query can be matched against the questions and/or the answers in the FAQ. We present a fully unsupervised method that exploits the FAQ pairs to train two BERT models. The two models match user queries to FAQ answers and questions, respecti... | ceae4135787d3f0a4650de498bf72814 | 2,020 | [
"we focus on the task of frequently asked questions ( faq ) retrieval .",
"a given user query can be matched against the questions and / or the answers in the faq .",
"we present a fully unsupervised method that exploits the faq pairs to train two bert models .",
"the two models match user queries to faq answ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "faq retrieval",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"faq",
"retrieval"
],
"offsets": [
57,
12
]
}
],
"trigger": {
... | [
"we",
"focus",
"on",
"the",
"task",
"of",
"frequently",
"asked",
"questions",
"(",
"faq",
")",
"retrieval",
".",
"a",
"given",
"user",
"query",
"can",
"be",
"matched",
"against",
"the",
"questions",
"and",
"/",
"or",
"the",
"answers",
"in",
"the",
"faq",... |
ACL | Revisiting Joint Modeling of Cross-document Entity and Event Coreference Resolution | Recognizing coreferring events and entities across multiple texts is crucial for many NLP applications. Despite the task’s importance, research focus was given mostly to within-document entity coreference, with rather little attention to the other variants. We propose a neural architecture for cross-document coreferenc... | 055b28c5c496e23927fe20016f53ccab | 2,019 | [
"recognizing coreferring events and entities across multiple texts is crucial for many nlp applications .",
"despite the task ’ s importance , research focus was given mostly to within - document entity coreference , with rather little attention to the other variants .",
"we propose a neural architecture for cr... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "recognizing coreferring events and entities across multiple texts",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"recognizing",
"coreferring",
"events",
"and",
... | [
"recognizing",
"coreferring",
"events",
"and",
"entities",
"across",
"multiple",
"texts",
"is",
"crucial",
"for",
"many",
"nlp",
"applications",
".",
"despite",
"the",
"task",
"’",
"s",
"importance",
",",
"research",
"focus",
"was",
"given",
"mostly",
"to",
"w... |
ACL | Constrained Multi-Task Learning for Bridging Resolution | We examine the extent to which supervised bridging resolvers can be improved without employing additional labeled bridging data by proposing a novel constrained multi-task learning framework for bridging resolution, within which we (1) design cross-task consistency constraints to guide the learning process; (2) pre-tra... | 5131e4068e12610419c5b7737bf19a84 | 2,022 | [
"we examine the extent to which supervised bridging resolvers can be improved without employing additional labeled bridging data by proposing a novel constrained multi - task learning framework for bridging resolution , within which we ( 1 ) design cross - task consistency constraints to guide the learning process ... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "novel constrained multi - task learning framework",
"nugget_type": "APP",
"argument_type": "Content",
"tokens": [
"novel",
"constrained",
"multi",
"-",
"task",
"... | [
"we",
"examine",
"the",
"extent",
"to",
"which",
"supervised",
"bridging",
"resolvers",
"can",
"be",
"improved",
"without",
"employing",
"additional",
"labeled",
"bridging",
"data",
"by",
"proposing",
"a",
"novel",
"constrained",
"multi",
"-",
"task",
"learning",
... |
ACL | GLM: General Language Model Pretraining with Autoregressive Blank Infilling | There have been various types of pretraining architectures including autoencoding models (e.g., BERT), autoregressive models (e.g., GPT), and encoder-decoder models (e.g., T5). However, none of the pretraining frameworks performs the best for all tasks of three main categories including natural language understanding (... | d74e4e96a41d5b0d907d87a64b1f5740 | 2,022 | [
"there have been various types of pretraining architectures including autoencoding models ( e . g . , bert ) , autoregressive models ( e . g . , gpt ) , and encoder - decoder models ( e . g . , t5 ) .",
"however , none of the pretraining frameworks performs the best for all tasks of three main categories includin... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "natural language understanding",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"natural",
"language",
"understanding"
],
"offsets": [
63,
64... | [
"there",
"have",
"been",
"various",
"types",
"of",
"pretraining",
"architectures",
"including",
"autoencoding",
"models",
"(",
"e",
".",
"g",
".",
",",
"bert",
")",
",",
"autoregressive",
"models",
"(",
"e",
".",
"g",
".",
",",
"gpt",
")",
",",
"and",
... |
ACL | MuTual: A Dataset for Multi-Turn Dialogue Reasoning | Non-task oriented dialogue systems have achieved great success in recent years due to largely accessible conversation data and the development of deep learning techniques. Given a context, current systems are able to yield a relevant and fluent response, but sometimes make logical mistakes because of weak reasoning cap... | 18d2c036fe59b9d8542c8c930a30a6f6 | 2,020 | [
"non - task oriented dialogue systems have achieved great success in recent years due to largely accessible conversation data and the development of deep learning techniques .",
"given a context , current systems are able to yield a relevant and fluent response , but sometimes make logical mistakes because of wea... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "non - task oriented dialogue systems",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"non",
"-",
"task",
"oriented",
"dialogue",
"systems"
... | [
"non",
"-",
"task",
"oriented",
"dialogue",
"systems",
"have",
"achieved",
"great",
"success",
"in",
"recent",
"years",
"due",
"to",
"largely",
"accessible",
"conversation",
"data",
"and",
"the",
"development",
"of",
"deep",
"learning",
"techniques",
".",
"given... |
ACL | Contextualizing Hate Speech Classifiers with Post-hoc Explanation | Hate speech classifiers trained on imbalanced datasets struggle to determine if group identifiers like “gay” or “black” are used in offensive or prejudiced ways. Such biases manifest in false positives when these identifiers are present, due to models’ inability to learn the contexts which constitute a hateful usage of... | 6326d48b85889da0e11e82d194abade0 | 2,020 | [
"hate speech classifiers trained on imbalanced datasets struggle to determine if group identifiers like “ gay ” or “ black ” are used in offensive or prejudiced ways .",
"such biases manifest in false positives when these identifiers are present , due to models ’ inability to learn the contexts which constitute a... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "hate speech classifiers",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"hate",
"speech",
"classifiers"
],
"offsets": [
0,
1,
2
... | [
"hate",
"speech",
"classifiers",
"trained",
"on",
"imbalanced",
"datasets",
"struggle",
"to",
"determine",
"if",
"group",
"identifiers",
"like",
"“",
"gay",
"”",
"or",
"“",
"black",
"”",
"are",
"used",
"in",
"offensive",
"or",
"prejudiced",
"ways",
".",
"suc... |
ACL | Mapping Natural Language Instructions to Mobile UI Action Sequences | We present a new problem: grounding natural language instructions to mobile user interface actions, and create three new datasets for it. For full task evaluation, we create PixelHelp, a corpus that pairs English instructions with actions performed by people on a mobile UI emulator. To scale training, we decouple the l... | bd42b37bd3ed0a1a47fd1bf3eb9e9236 | 2,020 | [
"we present a new problem : grounding natural language instructions to mobile user interface actions , and create three new datasets for it .",
"for full task evaluation , we create pixelhelp , a corpus that pairs english instructions with actions performed by people on a mobile ui emulator .",
"to scale traini... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "natural language instructions",
"nugget_t... | [
"we",
"present",
"a",
"new",
"problem",
":",
"grounding",
"natural",
"language",
"instructions",
"to",
"mobile",
"user",
"interface",
"actions",
",",
"and",
"create",
"three",
"new",
"datasets",
"for",
"it",
".",
"for",
"full",
"task",
"evaluation",
",",
"we... |
ACL | Improving Textual Network Embedding with Global Attention via Optimal Transport | Constituting highly informative network embeddings is an essential tool for network analysis. It encodes network topology, along with other useful side information, into low dimensional node-based feature representations that can be exploited by statistical modeling. This work focuses on learning context-aware network ... | af2d1fdecc18cc90e6d224aeee29365a | 2,019 | [
"constituting highly informative network embeddings is an essential tool for network analysis .",
"it encodes network topology , along with other useful side information , into low dimensional node - based feature representations that can be exploited by statistical modeling .",
"this work focuses on learning c... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "highly informative network embeddings",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"highly",
"informative",
"network",
"embeddings"
],
"offsets": [... | [
"constituting",
"highly",
"informative",
"network",
"embeddings",
"is",
"an",
"essential",
"tool",
"for",
"network",
"analysis",
".",
"it",
"encodes",
"network",
"topology",
",",
"along",
"with",
"other",
"useful",
"side",
"information",
",",
"into",
"low",
"dim... |
ACL | Cross-Lingual Contrastive Learning for Fine-Grained Entity Typing for Low-Resource Languages | Fine-grained entity typing (FGET) aims to classify named entity mentions into fine-grained entity types, which is meaningful for entity-related NLP tasks. For FGET, a key challenge is the low-resource problem — the complex entity type hierarchy makes it difficult to manually label data. Especially for those languages o... | 7d9b0cf6f9438010ae5886c57b35b53f | 2,022 | [
"fine - grained entity typing ( fget ) aims to classify named entity mentions into fine - grained entity types , which is meaningful for entity - related nlp tasks .",
"for fget , a key challenge is the low - resource problem — the complex entity type hierarchy makes it difficult to manually label data .",
"esp... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "fine - grained entity typing",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"fine",
"-",
"grained",
"entity",
"typing"
],
"offsets": [
... | [
"fine",
"-",
"grained",
"entity",
"typing",
"(",
"fget",
")",
"aims",
"to",
"classify",
"named",
"entity",
"mentions",
"into",
"fine",
"-",
"grained",
"entity",
"types",
",",
"which",
"is",
"meaningful",
"for",
"entity",
"-",
"related",
"nlp",
"tasks",
"."... |
ACL | Long Text Generation by Modeling Sentence-Level and Discourse-Level Coherence | Generating long and coherent text is an important but challenging task, particularly for open-ended language generation tasks such as story generation. Despite the success in modeling intra-sentence coherence, existing generation models (e.g., BART) still struggle to maintain a coherent event sequence throughout the ge... | c33d887e7d8b9604af762c34f3348df2 | 2,021 | [
"generating long and coherent text is an important but challenging task , particularly for open - ended language generation tasks such as story generation .",
"despite the success in modeling intra - sentence coherence , existing generation models ( e . g . , bart ) still struggle to maintain a coherent event seq... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "generating long text",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"generating",
"long",
"text"
],
"offsets": [
0,
1,
4
... | [
"generating",
"long",
"and",
"coherent",
"text",
"is",
"an",
"important",
"but",
"challenging",
"task",
",",
"particularly",
"for",
"open",
"-",
"ended",
"language",
"generation",
"tasks",
"such",
"as",
"story",
"generation",
".",
"despite",
"the",
"success",
... |
ACL | Hyperlink-induced Pre-training for Passage Retrieval in Open-domain Question Answering | To alleviate the data scarcity problem in training question answering systems, recent works propose additional intermediate pre-training for dense passage retrieval (DPR). However, there still remains a large discrepancy between the provided upstream signals and the downstream question-passage relevance, which leads to... | 5e44be0a07f8a26b00bb86e918b1985a | 2,022 | [
"to alleviate the data scarcity problem in training question answering systems , recent works propose additional intermediate pre - training for dense passage retrieval ( dpr ) .",
"however , there still remains a large discrepancy between the provided upstream signals and the downstream question - passage releva... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "data scarcity problem",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"data",
"scarcity",
"problem"
],
"offsets": [
3,
4,
5
... | [
"to",
"alleviate",
"the",
"data",
"scarcity",
"problem",
"in",
"training",
"question",
"answering",
"systems",
",",
"recent",
"works",
"propose",
"additional",
"intermediate",
"pre",
"-",
"training",
"for",
"dense",
"passage",
"retrieval",
"(",
"dpr",
")",
".",
... |
ACL | Rewriter-Evaluator Architecture for Neural Machine Translation | A few approaches have been developed to improve neural machine translation (NMT) models with multiple passes of decoding. However, their performance gains are limited because of lacking proper policies to terminate the multi-pass process. To address this issue, we introduce a novel architecture of Rewriter-Evaluator. T... | 838f97e60e0ff3635bd985743ff52063 | 2,021 | [
"a few approaches have been developed to improve neural machine translation ( nmt ) models with multiple passes of decoding .",
"however , their performance gains are limited because of lacking proper policies to terminate the multi - pass process .",
"to address this issue , we introduce a novel architecture o... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural machine translation ( nmt ) models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"neural",
"machine",
"translation",
"(",
"nmt",
")",
... | [
"a",
"few",
"approaches",
"have",
"been",
"developed",
"to",
"improve",
"neural",
"machine",
"translation",
"(",
"nmt",
")",
"models",
"with",
"multiple",
"passes",
"of",
"decoding",
".",
"however",
",",
"their",
"performance",
"gains",
"are",
"limited",
"beca... |
ACL | Revisiting Higher-Order Dependency Parsers | Neural encoders have allowed dependency parsers to shift from higher-order structured models to simpler first-order ones, making decoding faster and still achieving better accuracy than non-neural parsers. This has led to a belief that neural encoders can implicitly encode structural constraints, such as siblings and g... | 18247a810ed5feefcce40e6fa2c23ddd | 2,020 | [
"neural encoders have allowed dependency parsers to shift from higher - order structured models to simpler first - order ones , making decoding faster and still achieving better accuracy than non - neural parsers .",
"this has led to a belief that neural encoders can implicitly encode structural constraints , suc... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural encoders",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"neural",
"encoders"
],
"offsets": [
0,
1
]
}
],
"trigger": {
... | [
"neural",
"encoders",
"have",
"allowed",
"dependency",
"parsers",
"to",
"shift",
"from",
"higher",
"-",
"order",
"structured",
"models",
"to",
"simpler",
"first",
"-",
"order",
"ones",
",",
"making",
"decoding",
"faster",
"and",
"still",
"achieving",
"better",
... |
ACL | Differentiable Multi-Agent Actor-Critic for Multi-Step Radiology Report Summarization | The IMPRESSIONS section of a radiology report about an imaging study is a summary of the radiologist’s reasoning and conclusions, and it also aids the referring physician in confirming or excluding certain diagnoses. A cascade of tasks are required to automatically generate an abstractive summary of the typical informa... | dc11f0599ca5a0fcd700c3c6913a5d5c | 2,022 | [
"the impressions section of a radiology report about an imaging study is a summary of the radiologist ’ s reasoning and conclusions , and it also aids the referring physician in confirming or excluding certain diagnoses .",
"a cascade of tasks are required to automatically generate an abstractive summary of the t... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "abstractive summary",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"abstractive",
"summary"
],
"offsets": [
47,
48
]
}
],
"trig... | [
"the",
"impressions",
"section",
"of",
"a",
"radiology",
"report",
"about",
"an",
"imaging",
"study",
"is",
"a",
"summary",
"of",
"the",
"radiologist",
"’",
"s",
"reasoning",
"and",
"conclusions",
",",
"and",
"it",
"also",
"aids",
"the",
"referring",
"physic... |
ACL | Understanding Multimodal Procedural Knowledge by Sequencing Multimodal Instructional Manuals | The ability to sequence unordered events is evidence of comprehension and reasoning about real world tasks/procedures. It is essential for applications such as task planning and multi-source instruction summarization.It often requires thorough understanding of temporal common sense and multimodal information, since the... | 4b48c7f92af8a836f837c99946d58b9c | 2,022 | [
"the ability to sequence unordered events is evidence of comprehension and reasoning about real world tasks / procedures .",
"it is essential for applications such as task planning and multi - source instruction summarization .",
"it often requires thorough understanding of temporal common sense and multimodal ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "unordered events",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"unordered",
"events"
],
"offsets": [
4,
5
]
}
],
"trigger": {
... | [
"the",
"ability",
"to",
"sequence",
"unordered",
"events",
"is",
"evidence",
"of",
"comprehension",
"and",
"reasoning",
"about",
"real",
"world",
"tasks",
"/",
"procedures",
".",
"it",
"is",
"essential",
"for",
"applications",
"such",
"as",
"task",
"planning",
... |
ACL | MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations | Emotion recognition in conversations is a challenging task that has recently gained popularity due to its potential applications. Until now, however, a large-scale multimodal multi-party emotional conversational database containing more than two speakers per dialogue was missing. Thus, we propose the Multimodal Emotion... | 0bbd31f995222d64212e76ab9a1d0e10 | 2,019 | [
"emotion recognition in conversations is a challenging task that has recently gained popularity due to its potential applications .",
"until now , however , a large - scale multimodal multi - party emotional conversational database containing more than two speakers per dialogue was missing .",
"thus , we propos... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "emotion recognition",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"emotion",
"recognition"
],
"offsets": [
0,
1
]
},
{
"... | [
"emotion",
"recognition",
"in",
"conversations",
"is",
"a",
"challenging",
"task",
"that",
"has",
"recently",
"gained",
"popularity",
"due",
"to",
"its",
"potential",
"applications",
".",
"until",
"now",
",",
"however",
",",
"a",
"large",
"-",
"scale",
"multim... |
ACL | Ensembling and Knowledge Distilling of Large Sequence Taggers for Grammatical Error Correction | In this paper, we investigate improvements to the GEC sequence tagging architecture with a focus on ensembling of recent cutting-edge Transformer-based encoders in Large configurations. We encourage ensembling models by majority votes on span-level edits because this approach is tolerant to the model architecture and v... | fb5317166298212924e2d580cb5b1d57 | 2,022 | [
"in this paper , we investigate improvements to the gec sequence tagging architecture with a focus on ensembling of recent cutting - edge transformer - based encoders in large configurations .",
"we encourage ensembling models by majority votes on span - level edits because this approach is tolerant to the model ... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
31
]
},
{
"text": "ensembling models",
"nugget_type": "TAK"... | [
"in",
"this",
"paper",
",",
"we",
"investigate",
"improvements",
"to",
"the",
"gec",
"sequence",
"tagging",
"architecture",
"with",
"a",
"focus",
"on",
"ensembling",
"of",
"recent",
"cutting",
"-",
"edge",
"transformer",
"-",
"based",
"encoders",
"in",
"large"... |
ACL | Unsupervised Multimodal Neural Machine Translation with Pseudo Visual Pivoting | Unsupervised machine translation (MT) has recently achieved impressive results with monolingual corpora only. However, it is still challenging to associate source-target sentences in the latent space. As people speak different languages biologically share similar visual systems, the potential of achieving better alignm... | 2eb69efabd312cc70cf1d193863b4702 | 2,020 | [
"unsupervised machine translation ( mt ) has recently achieved impressive results with monolingual corpora only .",
"however , it is still challenging to associate source - target sentences in the latent space .",
"as people speak different languages biologically share similar visual systems , the potential of ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "unsupervised machine translation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"unsupervised",
"machine",
"translation"
],
"offsets": [
0,
... | [
"unsupervised",
"machine",
"translation",
"(",
"mt",
")",
"has",
"recently",
"achieved",
"impressive",
"results",
"with",
"monolingual",
"corpora",
"only",
".",
"however",
",",
"it",
"is",
"still",
"challenging",
"to",
"associate",
"source",
"-",
"target",
"sent... |
ACL | ScriptWriter: Narrative-Guided Script Generation | It is appealing to have a system that generates a story or scripts automatically from a storyline, even though this is still out of our reach. In dialogue systems, it would also be useful to drive dialogues by a dialogue plan. In this paper, we address a key problem involved in these applications - guiding a dialogue b... | 3afb0231bb18d6c168526624e8494b37 | 2,020 | [
"it is appealing to have a system that generates a story or scripts automatically from a storyline , even though this is still out of our reach .",
"in dialogue systems , it would also be useful to drive dialogues by a dialogue plan .",
"in this paper , we address a key problem involved in these applications - ... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
49
]
},
{
"text": "guiding a dialogue by a narrative",
"nug... | [
"it",
"is",
"appealing",
"to",
"have",
"a",
"system",
"that",
"generates",
"a",
"story",
"or",
"scripts",
"automatically",
"from",
"a",
"storyline",
",",
"even",
"though",
"this",
"is",
"still",
"out",
"of",
"our",
"reach",
".",
"in",
"dialogue",
"systems"... |
ACL | Causal Analysis of Syntactic Agreement Mechanisms in Neural Language Models | Targeted syntactic evaluations have demonstrated the ability of language models to perform subject-verb agreement given difficult contexts. To elucidate the mechanisms by which the models accomplish this behavior, this study applies causal mediation analysis to pre-trained neural language models. We investigate the mag... | fcdc23296498c90f9a98b320ef647b46 | 2,021 | [
"targeted syntactic evaluations have demonstrated the ability of language models to perform subject - verb agreement given difficult contexts .",
"to elucidate the mechanisms by which the models accomplish this behavior , this study applies causal mediation analysis to pre - trained neural language models .",
"... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "language models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"language",
"models"
],
"offsets": [
8,
9
]
}
],
"trigger": {
... | [
"targeted",
"syntactic",
"evaluations",
"have",
"demonstrated",
"the",
"ability",
"of",
"language",
"models",
"to",
"perform",
"subject",
"-",
"verb",
"agreement",
"given",
"difficult",
"contexts",
".",
"to",
"elucidate",
"the",
"mechanisms",
"by",
"which",
"the",... |
ACL | Generating Biographies on Wikipedia: The Impact of Gender Bias on the Retrieval-Based Generation of Women Biographies | Generating factual, long-form text such as Wikipedia articles raises three key challenges: how to gather relevant evidence, how to structure information into well-formed text, and how to ensure that the generated text is factually correct. We address these by developing a model for English text that uses a retrieval me... | 0a08533837ab685cd243d00a78d5a064 | 2,022 | [
"generating factual , long - form text such as wikipedia articles raises three key challenges : how to gather relevant evidence , how to structure information into well - formed text , and how to ensure that the generated text is factually correct .",
"we address these by developing a model for english text that ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "factual , long - form text",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"factual",
",",
"long",
"-",
"form",
"text"
],
"offsets... | [
"generating",
"factual",
",",
"long",
"-",
"form",
"text",
"such",
"as",
"wikipedia",
"articles",
"raises",
"three",
"key",
"challenges",
":",
"how",
"to",
"gather",
"relevant",
"evidence",
",",
"how",
"to",
"structure",
"information",
"into",
"well",
"-",
"... |
ACL | TicketTalk: Toward human-level performance with end-to-end, transaction-based dialog systems | We present a data-driven, end-to-end approach to transaction-based dialog systems that performs at near-human levels in terms of verbal response quality and factual grounding accuracy. We show that two essential components of the system produce these results: a sufficiently large and diverse, in-domain labeled dataset,... | 7882d99db401565576ed1f8c88cffcf8 | 2,021 | [
"we present a data - driven , end - to - end approach to transaction - based dialog systems that performs at near - human levels in terms of verbal response quality and factual grounding accuracy .",
"we show that two essential components of the system produce these results : a sufficiently large and diverse , in... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "data - driven , end - to - end approach",
"... | [
"we",
"present",
"a",
"data",
"-",
"driven",
",",
"end",
"-",
"to",
"-",
"end",
"approach",
"to",
"transaction",
"-",
"based",
"dialog",
"systems",
"that",
"performs",
"at",
"near",
"-",
"human",
"levels",
"in",
"terms",
"of",
"verbal",
"response",
"qual... |
ACL | G-Transformer for Document-Level Machine Translation | Document-level MT models are still far from satisfactory. Existing work extend translation unit from single sentence to multiple sentences. However, study shows that when we further enlarge the translation unit to a whole document, supervised training of Transformer can fail. In this paper, we find such failure is not ... | c1f14535e821eb955760c80beff762cf | 2,021 | [
"document - level mt models are still far from satisfactory .",
"existing work extend translation unit from single sentence to multiple sentences .",
"however , study shows that when we further enlarge the translation unit to a whole document , supervised training of transformer can fail .",
"in this paper , ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "document - level mt models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"document",
"-",
"level",
"mt",
"models"
],
"offsets": [
... | [
"document",
"-",
"level",
"mt",
"models",
"are",
"still",
"far",
"from",
"satisfactory",
".",
"existing",
"work",
"extend",
"translation",
"unit",
"from",
"single",
"sentence",
"to",
"multiple",
"sentences",
".",
"however",
",",
"study",
"shows",
"that",
"when... |
ACL | Showing Your Work Doesn’t Always Work | In natural language processing, a recently popular line of work explores how to best report the experimental results of neural networks. One exemplar publication, titled “Show Your Work: Improved Reporting of Experimental Results” (Dodge et al., 2019), advocates for reporting the expected validation effectiveness of th... | aa9115ecc2d719e71e26c158bbfb6c98 | 2,020 | [
"in natural language processing , a recently popular line of work explores how to best report the experimental results of neural networks .",
"one exemplar publication , titled “ show your work : improved reporting of experimental results ” ( dodge et al . , 2019 ) , advocates for reporting the expected validatio... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "natural language processing",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"natural",
"language",
"processing"
],
"offsets": [
1,
2,
... | [
"in",
"natural",
"language",
"processing",
",",
"a",
"recently",
"popular",
"line",
"of",
"work",
"explores",
"how",
"to",
"best",
"report",
"the",
"experimental",
"results",
"of",
"neural",
"networks",
".",
"one",
"exemplar",
"publication",
",",
"titled",
"“"... |
ACL | An Unsupervised Multiple-Task and Multiple-Teacher Model for Cross-lingual Named Entity Recognition | Cross-lingual named entity recognition task is one of the critical problems for evaluating the potential transfer learning techniques on low resource languages. Knowledge distillation using pre-trained multilingual language models between source and target languages have shown their superiority in transfer. However, ex... | 8b5c2338d28a1edb4a6b18dcf178e822 | 2,022 | [
"cross - lingual named entity recognition task is one of the critical problems for evaluating the potential transfer learning techniques on low resource languages .",
"knowledge distillation using pre - trained multilingual language models between source and target languages have shown their superiority in transf... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "cross - lingual named entity recognition task",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"cross",
"-",
"lingual",
"named",
"entity",
"recogni... | [
"cross",
"-",
"lingual",
"named",
"entity",
"recognition",
"task",
"is",
"one",
"of",
"the",
"critical",
"problems",
"for",
"evaluating",
"the",
"potential",
"transfer",
"learning",
"techniques",
"on",
"low",
"resource",
"languages",
".",
"knowledge",
"distillatio... |
ACL | When to Use Multi-Task Learning vs Intermediate Fine-Tuning for Pre-Trained Encoder Transfer Learning | Transfer learning (TL) in natural language processing (NLP) has seen a surge of interest in recent years, as pre-trained models have shown an impressive ability to transfer to novel tasks. Three main strategies have emerged for making use of multiple supervised datasets during fine-tuning: training on an intermediate t... | 871ecb8fcfb807ea5d35f9fc053fbbdc | 2,022 | [
"transfer learning ( tl ) in natural language processing ( nlp ) has seen a surge of interest in recent years , as pre - trained models have shown an impressive ability to transfer to novel tasks .",
"three main strategies have emerged for making use of multiple supervised datasets during fine - tuning : training... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "transfer learning",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"transfer",
"learning"
],
"offsets": [
0,
1
]
},
{
"text... | [
"transfer",
"learning",
"(",
"tl",
")",
"in",
"natural",
"language",
"processing",
"(",
"nlp",
")",
"has",
"seen",
"a",
"surge",
"of",
"interest",
"in",
"recent",
"years",
",",
"as",
"pre",
"-",
"trained",
"models",
"have",
"shown",
"an",
"impressive",
"... |
ACL | Optimizing Deeper Transformers on Small Datasets | It is a common belief that training deep transformers from scratch requires large datasets. Consequently, for small datasets, people usually use shallow and simple additional layers on top of pre-trained models during fine-tuning. This work shows that this does not always need to be the case: with proper initialization... | be81507a2687da63dcbc6334820e4dbd | 2,021 | [
"it is a common belief that training deep transformers from scratch requires large datasets .",
"consequently , for small datasets , people usually use shallow and simple additional layers on top of pre - trained models during fine - tuning .",
"this work shows that this does not always need to be the case : wi... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
93
]
},
{
"text": "48 layers of transformers",
"nugget_type... | [
"it",
"is",
"a",
"common",
"belief",
"that",
"training",
"deep",
"transformers",
"from",
"scratch",
"requires",
"large",
"datasets",
".",
"consequently",
",",
"for",
"small",
"datasets",
",",
"people",
"usually",
"use",
"shallow",
"and",
"simple",
"additional",
... |
ACL | Explainable Prediction of Text Complexity: The Missing Preliminaries for Text Simplification | Text simplification reduces the language complexity of professional content for accessibility purposes. End-to-end neural network models have been widely adopted to directly generate the simplified version of input text, usually functioning as a blackbox. We show that text simplification can be decomposed into a compac... | 3127722524b7527261642e84daec78d0 | 2,021 | [
"text simplification reduces the language complexity of professional content for accessibility purposes .",
"end - to - end neural network models have been widely adopted to directly generate the simplified version of input text , usually functioning as a blackbox .",
"we show that text simplification can be de... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "text simplification",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"text",
"simplification"
],
"offsets": [
0,
1
]
}
],
"trigge... | [
"text",
"simplification",
"reduces",
"the",
"language",
"complexity",
"of",
"professional",
"content",
"for",
"accessibility",
"purposes",
".",
"end",
"-",
"to",
"-",
"end",
"neural",
"network",
"models",
"have",
"been",
"widely",
"adopted",
"to",
"directly",
"g... |
ACL | Learning Domain-Specialised Representations for Cross-Lingual Biomedical Entity Linking | Injecting external domain-specific knowledge (e.g., UMLS) into pretrained language models (LMs) advances their capability to handle specialised in-domain tasks such as biomedical entity linking (BEL). However, such abundant expert knowledge is available only for a handful of languages (e.g., English). In this work, by ... | 685a396baea3c763cb0dea2446b06d42 | 2,021 | [
"injecting external domain - specific knowledge ( e . g . , umls ) into pretrained language models ( lms ) advances their capability to handle specialised in - domain tasks such as biomedical entity linking ( bel ) .",
"however , such abundant expert knowledge is available only for a handful of languages ( e . g ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "pretrained language models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"pretrained",
"language",
"models"
],
"offsets": [
15,
16,
... | [
"injecting",
"external",
"domain",
"-",
"specific",
"knowledge",
"(",
"e",
".",
"g",
".",
",",
"umls",
")",
"into",
"pretrained",
"language",
"models",
"(",
"lms",
")",
"advances",
"their",
"capability",
"to",
"handle",
"specialised",
"in",
"-",
"domain",
... |
ACL | Posterior Calibrated Training on Sentence Classification Tasks | Most classification models work by first predicting a posterior probability distribution over all classes and then selecting that class with the largest estimated probability. In many settings however, the quality of posterior probability itself (e.g., 65% chance having diabetes), gives more reliable information than t... | 70e8c4b21103dfc5776c34a78dac9236 | 2,020 | [
"most classification models work by first predicting a posterior probability distribution over all classes and then selecting that class with the largest estimated probability .",
"in many settings however , the quality of posterior probability itself ( e . g . , 65 % chance having diabetes ) , gives more reliabl... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "classification models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"classification",
"models"
],
"offsets": [
1,
2
]
}
],
"tr... | [
"most",
"classification",
"models",
"work",
"by",
"first",
"predicting",
"a",
"posterior",
"probability",
"distribution",
"over",
"all",
"classes",
"and",
"then",
"selecting",
"that",
"class",
"with",
"the",
"largest",
"estimated",
"probability",
".",
"in",
"many"... |
ACL | Multimodal Sentiment Detection Based on Multi-channel Graph Neural Networks | With the popularity of smartphones, we have witnessed the rapid proliferation of multimodal posts on various social media platforms. We observe that the multimodal sentiment expression has specific global characteristics, such as the interdependencies of objects or scenes within the image. However, most previous studie... | c48606e4adbd1f56b4c808c1f8fb14a2 | 2,021 | [
"with the popularity of smartphones , we have witnessed the rapid proliferation of multimodal posts on various social media platforms .",
"we observe that the multimodal sentiment expression has specific global characteristics , such as the interdependencies of objects or scenes within the image .",
"however , ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "multimodal sentiment expression",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"multimodal",
"sentiment",
"expression"
],
"offsets": [
25,
... | [
"with",
"the",
"popularity",
"of",
"smartphones",
",",
"we",
"have",
"witnessed",
"the",
"rapid",
"proliferation",
"of",
"multimodal",
"posts",
"on",
"various",
"social",
"media",
"platforms",
".",
"we",
"observe",
"that",
"the",
"multimodal",
"sentiment",
"expr... |
ACL | Learning to Mediate Disparities Towards Pragmatic Communication | Human communication is a collaborative process. Speakers, on top of conveying their own intent, adjust the content and language expressions by taking the listeners into account, including their knowledge background, personalities, and physical capabilities. Towards building AI agents with similar abilities in language ... | 5803ed1a5472aa8981ff9f5424ac9346 | 2,022 | [
"human communication is a collaborative process .",
"speakers , on top of conveying their own intent , adjust the content and language expressions by taking the listeners into account , including their knowledge background , personalities , and physical capabilities .",
"towards building ai agents with similar ... | [
{
"event_type": "PUR",
"arguments": [
{
"text": "ai agents",
"nugget_type": "TAK",
"argument_type": "Aim",
"tokens": [
"ai",
"agents"
],
"offsets": [
43,
44
]
},
{
"text": "in language c... | [
"human",
"communication",
"is",
"a",
"collaborative",
"process",
".",
"speakers",
",",
"on",
"top",
"of",
"conveying",
"their",
"own",
"intent",
",",
"adjust",
"the",
"content",
"and",
"language",
"expressions",
"by",
"taking",
"the",
"listeners",
"into",
"acc... |
ACL | Using Human Attention to Extract Keyphrase from Microblog Post | This paper studies automatic keyphrase extraction on social media. Previous works have achieved promising results on it, but they neglect human reading behavior during keyphrase annotating. The human attention is a crucial element of human reading behavior. It reveals the relevance of words to the main topics of the ta... | 22c3fd601c777692e9a744f20ab4bfc1 | 2,019 | [
"this paper studies automatic keyphrase extraction on social media .",
"previous works have achieved promising results on it , but they neglect human reading behavior during keyphrase annotating .",
"the human attention is a crucial element of human reading behavior .",
"it reveals the relevance of words to t... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "automatic keyphrase extraction",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"automatic",
"keyphrase",
"extraction"
],
"offsets": [
3,
4,
... | [
"this",
"paper",
"studies",
"automatic",
"keyphrase",
"extraction",
"on",
"social",
"media",
".",
"previous",
"works",
"have",
"achieved",
"promising",
"results",
"on",
"it",
",",
"but",
"they",
"neglect",
"human",
"reading",
"behavior",
"during",
"keyphrase",
"... |
ACL | Modeling Discriminative Representations for Out-of-Domain Detection with Supervised Contrastive Learning | Detecting Out-of-Domain (OOD) or unknown intents from user queries is essential in a task-oriented dialog system. A key challenge of OOD detection is to learn discriminative semantic features. Traditional cross-entropy loss only focuses on whether a sample is correctly classified, and does not explicitly distinguish th... | 127f8f39125aef235860c72d9d166860 | 2,021 | [
"detecting out - of - domain ( ood ) or unknown intents from user queries is essential in a task - oriented dialog system .",
"a key challenge of ood detection is to learn discriminative semantic features .",
"traditional cross - entropy loss only focuses on whether a sample is correctly classified , and does n... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "task - oriented dialog system",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"task",
"-",
"oriented",
"dialog",
"system"
],
"offsets": [
... | [
"detecting",
"out",
"-",
"of",
"-",
"domain",
"(",
"ood",
")",
"or",
"unknown",
"intents",
"from",
"user",
"queries",
"is",
"essential",
"in",
"a",
"task",
"-",
"oriented",
"dialog",
"system",
".",
"a",
"key",
"challenge",
"of",
"ood",
"detection",
"is",... |
ACL | Probing the Robustness of Trained Metrics for Conversational Dialogue Systems | This paper introduces an adversarial method to stress-test trained metrics for the evaluation of conversational dialogue systems. The method leverages Reinforcement Learning to find response strategies that elicit optimal scores from the trained metrics. We apply our method to test recently proposed trained metrics. We... | 41d401b64ec71a32a20ca1a47667f50f | 2,022 | [
"this paper introduces an adversarial method to stress - test trained metrics for the evaluation of conversational dialogue systems .",
"the method leverages reinforcement learning to find response strategies that elicit optimal scores from the trained metrics .",
"we apply our method to test recently proposed ... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "adversarial method",
"nugget_type": "APP",
"argument_type": "Content",
"tokens": [
"adversarial",
"method"
],
"offsets": [
4,
5
]
},
{
"t... | [
"this",
"paper",
"introduces",
"an",
"adversarial",
"method",
"to",
"stress",
"-",
"test",
"trained",
"metrics",
"for",
"the",
"evaluation",
"of",
"conversational",
"dialogue",
"systems",
".",
"the",
"method",
"leverages",
"reinforcement",
"learning",
"to",
"find"... |
ACL | Online Infix Probability Computation for Probabilistic Finite Automata | Probabilistic finite automata (PFAs) are common statistical language model in natural language and speech processing. A typical task for PFAs is to compute the probability of all strings that match a query pattern. An important special case of this problem is computing the probability of a string appearing as a p... | e6e661c7caad0eda3bdfb9c3883f6300 | 2,019 | [
"probabilistic finite automata ( pfas ) are common statistical language model in natural language and speech processing .",
"a typical task for pfas is to compute the probability of all strings that match a query pattern .",
"an important special case of this problem is computing the probability of a string app... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "probabilistic finite automata",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"probabilistic",
"finite",
"automata"
],
"offsets": [
0,
1,
... | [
"probabilistic",
"finite",
"automata",
"(",
"pfas",
")",
"are",
"common",
"statistical",
"language",
"model",
"in",
"natural",
"language",
"and",
"speech",
"processing",
".",
"a",
"typical",
"task",
"for",
"pfas",
"is",
"to",
"compute",
"the",
"probability",
"... |
ACL | Rhetorically Controlled Encoder-Decoder for Modern Chinese Poetry Generation | Rhetoric is a vital element in modern poetry, and plays an essential role in improving its aesthetics. However, to date, it has not been considered in research on automatic poetry generation. In this paper, we propose a rhetorically controlled encoder-decoder for modern Chinese poetry generation. Our model relies on a ... | 14c739f9fb13c9b89fdcdf489cb988d5 | 2,019 | [
"rhetoric is a vital element in modern poetry , and plays an essential role in improving its aesthetics .",
"however , to date , it has not been considered in research on automatic poetry generation .",
"in this paper , we propose a rhetorically controlled encoder - decoder for modern chinese poetry generation ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "rhetoric",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"rhetoric"
],
"offsets": [
0
]
}
],
"trigger": {
"text": "element",
"tokens": [... | [
"rhetoric",
"is",
"a",
"vital",
"element",
"in",
"modern",
"poetry",
",",
"and",
"plays",
"an",
"essential",
"role",
"in",
"improving",
"its",
"aesthetics",
".",
"however",
",",
"to",
"date",
",",
"it",
"has",
"not",
"been",
"considered",
"in",
"research",... |
ACL | Entity-based Neural Local Coherence Modeling | In this paper, we propose an entity-based neural local coherence model which is linguistically more sound than previously proposed neural coherence models. Recent neural coherence models encode the input document using large-scale pretrained language models. Hence their basis for computing local coherence are words and... | a19d1aeff8a5826173596ba54b3f242c | 2,022 | [
"in this paper , we propose an entity - based neural local coherence model which is linguistically more sound than previously proposed neural coherence models .",
"recent neural coherence models encode the input document using large - scale pretrained language models .",
"hence their basis for computing local c... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
4
]
},
{
"text": "entity - based neural local coherence model",
... | [
"in",
"this",
"paper",
",",
"we",
"propose",
"an",
"entity",
"-",
"based",
"neural",
"local",
"coherence",
"model",
"which",
"is",
"linguistically",
"more",
"sound",
"than",
"previously",
"proposed",
"neural",
"coherence",
"models",
".",
"recent",
"neural",
"c... |