(Dataset preview: list-valued cells are rendered as truncated JSON, so a single row spans many lines; a minimal parsing sketch for this schema follows the table.)

| venue (stringclasses, 1 value) | title (string, 18-162 chars) | abstract (string, 252-1.89k chars) | doc_id (string, 32 chars) | publication_year (int64, 2.02k-2.02k) | sentences (list, 1-13 items) | events (list, 1-24 items) | document (list, 50-348 items) |
|---|---|---|---|---|---|---|---|
ACL | Alignment-Augmented Consistent Translation for Multilingual Open Information Extraction | Progress with supervised Open Information Extraction (OpenIE) has been primarily limited to English due to the scarcity of training data in other languages. In this paper, we explore techniques to automatically convert English text for training OpenIE systems in other languages. We introduce the Alignment-Augmented Con... | a9c801c0d8a08f6c59a71d332b43aacd | 2,022 | [
"progress with supervised open information extraction ( openie ) has been primarily limited to english due to the scarcity of training data in other languages .",
"in this paper , we explore techniques to automatically convert english text for training openie systems in other languages .",
"we introduce the ali... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "open information extraction",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"open",
"information",
"extraction"
],
"offsets": [
3,
4,
... | [
"progress",
"with",
"supervised",
"open",
"information",
"extraction",
"(",
"openie",
")",
"has",
"been",
"primarily",
"limited",
"to",
"english",
"due",
"to",
"the",
"scarcity",
"of",
"training",
"data",
"in",
"other",
"languages",
".",
"in",
"this",
"paper",... |
ACL | Logic Traps in Evaluating Attribution Scores | Modern deep learning models are notoriously opaque, which has motivated the development of methods for interpreting how deep models predict.This goal is usually approached with attribution method, which assesses the influence of features on model predictions. As an explanation method, the evaluation criteria of attribu... | 2ac2e3a855cec9bf6115c903fa228726 | 2,022 | [
"modern deep learning models are notoriously opaque , which has motivated the development of methods for interpreting how deep models predict .",
"this goal is usually approached with attribution method , which assesses the influence of features on model predictions .",
"as an explanation method , the evaluatio... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "deep learning models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"deep",
"learning",
"models"
],
"offsets": [
1,
2,
3
... | [
"modern",
"deep",
"learning",
"models",
"are",
"notoriously",
"opaque",
",",
"which",
"has",
"motivated",
"the",
"development",
"of",
"methods",
"for",
"interpreting",
"how",
"deep",
"models",
"predict",
".",
"this",
"goal",
"is",
"usually",
"approached",
"with"... |
ACL | Cross-Domain NER using Cross-Domain Language Modeling | Due to limitation of labeled resources, cross-domain named entity recognition (NER) has been a challenging task. Most existing work considers a supervised setting, making use of labeled data for both the source and target domains. A disadvantage of such methods is that they cannot train for domains without NER data. To... | 33b193acb20ab7ba844e6c35ea839b16 | 2,019 | [
"due to limitation of labeled resources , cross - domain named entity recognition ( ner ) has been a challenging task .",
"most existing work considers a supervised setting , making use of labeled data for both the source and target domains .",
"a disadvantage of such methods is that they cannot train for domai... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "cross - domain named entity recognition",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"cross",
"-",
"domain",
"named",
"entity",
"recognition"
... | [
"due",
"to",
"limitation",
"of",
"labeled",
"resources",
",",
"cross",
"-",
"domain",
"named",
"entity",
"recognition",
"(",
"ner",
")",
"has",
"been",
"a",
"challenging",
"task",
".",
"most",
"existing",
"work",
"considers",
"a",
"supervised",
"setting",
",... |
ACL | Towards Propagation Uncertainty: Edge-enhanced Bayesian Graph Convolutional Networks for Rumor Detection | Detecting rumors on social media is a very critical task with significant implications to the economy, public health, etc. Previous works generally capture effective features from texts and the propagation structure. However, the uncertainty caused by unreliable relations in the propagation structure is common and inev... | ce626e5a638830b63e7292bd54711a81 | 2,021 | [
"detecting rumors on social media is a very critical task with significant implications to the economy , public health , etc .",
"previous works generally capture effective features from texts and the propagation structure .",
"however , the uncertainty caused by unreliable relations in the propagation structur... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "detecting rumors on social media",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"detecting",
"rumors",
"on",
"social",
"media"
],
"offsets"... | [
"detecting",
"rumors",
"on",
"social",
"media",
"is",
"a",
"very",
"critical",
"task",
"with",
"significant",
"implications",
"to",
"the",
"economy",
",",
"public",
"health",
",",
"etc",
".",
"previous",
"works",
"generally",
"capture",
"effective",
"features",
... |
ACL | Look Harder: A Neural Machine Translation Model with Hard Attention | Soft-attention based Neural Machine Translation (NMT) models have achieved promising results on several translation tasks. These models attend all the words in the source sequence for each target token, which makes them ineffective for long sequence translation. In this work, we propose a hard-attention based NMT model... | f4ce01d9e8f3d03df7861565ffb93f94 | 2,019 | [
"soft - attention based neural machine translation ( nmt ) models have achieved promising results on several translation tasks .",
"these models attend all the words in the source sequence for each target token , which makes them ineffective for long sequence translation .",
"in this work , we propose a hard - ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "translation tasks",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"translation",
"tasks"
],
"offsets": [
17,
18
]
}
],
"trigger"... | [
"soft",
"-",
"attention",
"based",
"neural",
"machine",
"translation",
"(",
"nmt",
")",
"models",
"have",
"achieved",
"promising",
"results",
"on",
"several",
"translation",
"tasks",
".",
"these",
"models",
"attend",
"all",
"the",
"words",
"in",
"the",
"source... |
ACL | Neural Network Alignment for Sentential Paraphrases | We present a monolingual alignment system for long, sentence- or clause-level alignments, and demonstrate that systems designed for word- or short phrase-based alignment are ill-suited for these longer alignments. Our system is capable of aligning semantically similar spans of arbitrary length. We achieve significantly... | c8844137bccb8ecdd9daf330221a7beb | 2,019 | [
"we present a monolingual alignment system for long , sentence - or clause - level alignments , and demonstrate that systems designed for word - or short phrase - based alignment are ill - suited for these longer alignments .",
"our system is capable of aligning semantically similar spans of arbitrary length .",
... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "monolingual alignment system",
"nugget_type... | [
"we",
"present",
"a",
"monolingual",
"alignment",
"system",
"for",
"long",
",",
"sentence",
"-",
"or",
"clause",
"-",
"level",
"alignments",
",",
"and",
"demonstrate",
"that",
"systems",
"designed",
"for",
"word",
"-",
"or",
"short",
"phrase",
"-",
"based",
... |
ACL | Check It Again:Progressive Visual Question Answering via Visual Entailment | While sophisticated neural-based models have achieved remarkable success in Visual Question Answering (VQA), these models tend to answer questions only according to superficial correlations between question and answer. Several recent approaches have been developed to address this language priors problem. However, most ... | 67501d70dbb051e8b286df3fcce6cf26 | 2,021 | [
"while sophisticated neural - based models have achieved remarkable success in visual question answering ( vqa ) , these models tend to answer questions only according to superficial correlations between question and answer .",
"several recent approaches have been developed to address this language priors problem... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "visual question answering",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"visual",
"question",
"answering"
],
"offsets": [
11,
12,
... | [
"while",
"sophisticated",
"neural",
"-",
"based",
"models",
"have",
"achieved",
"remarkable",
"success",
"in",
"visual",
"question",
"answering",
"(",
"vqa",
")",
",",
"these",
"models",
"tend",
"to",
"answer",
"questions",
"only",
"according",
"to",
"superficia... |
ACL | Lattice-Based Transformer Encoder for Neural Machine Translation | Neural machine translation (NMT) takes deterministic sequences for source representations. However, either word-level or subword-level segmentations have multiple choices to split a source sequence with different word segmentors or different subword vocabulary sizes. We hypothesize that the diversity in segmentations m... | 63a6b15028b30d7f841415982c574b9c | 2,019 | [
"neural machine translation ( nmt ) takes deterministic sequences for source representations .",
"however , either word - level or subword - level segmentations have multiple choices to split a source sequence with different word segmentors or different subword vocabulary sizes .",
"we hypothesize that the dive... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural machine translation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"neural",
"machine",
"translation"
],
"offsets": [
0,
1,
... | [
"neural",
"machine",
"translation",
"(",
"nmt",
")",
"takes",
"deterministic",
"sequences",
"for",
"source",
"representations",
".",
"however",
",",
"either",
"word",
"-",
"level",
"or",
"subword",
"-",
"level",
"segmentations",
"have",
"multiple",
"choices",
"t... |
ACL | Dynamically Fused Graph Network for Multi-hop Reasoning | Text-based question answering (TBQA) has been studied extensively in recent years. Most existing approaches focus on finding the answer to a question within a single paragraph. However, many difficult questions require multiple supporting evidence from scattered text among two or more documents. In this paper, we propo... | f46ce665cfdcf81a7b902daeb12bdab4 | 2,019 | [
"text - based question answering ( tbqa ) has been studied extensively in recent years .",
"most existing approaches focus on finding the answer to a question within a single paragraph .",
"however , many difficult questions require multiple supporting evidence from scattered text among two or more documents ."... | [
{
"event_type": "FAC",
"arguments": [
{
"text": "dynamically fused graph network",
"nugget_type": "APP",
"argument_type": "Subject",
"tokens": [
"dynamically",
"fused",
"graph",
"network"
],
"offsets": [
... | [
"text",
"-",
"based",
"question",
"answering",
"(",
"tbqa",
")",
"has",
"been",
"studied",
"extensively",
"in",
"recent",
"years",
".",
"most",
"existing",
"approaches",
"focus",
"on",
"finding",
"the",
"answer",
"to",
"a",
"question",
"within",
"a",
"single... |
ACL | ProphetChat: Enhancing Dialogue Generation with Simulation of Future Conversation | Typical generative dialogue models utilize the dialogue history to generate the response. However, since one dialogue utterance can often be appropriately answered by multiple distinct responses, generating a desired response solely based on the historical information is not easy. Intuitively, if the chatbot can forese... | b83d69511f7fbb1ddd25a8a88fc51c29 | 2,022 | [
"typical generative dialogue models utilize the dialogue history to generate the response .",
"however , since one dialogue utterance can often be appropriately answered by multiple distinct responses , generating a desired response solely based on the historical information is not easy .",
"intuitively , if th... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "generative dialogue models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"generative",
"dialogue",
"models"
],
"offsets": [
1,
2,
... | [
"typical",
"generative",
"dialogue",
"models",
"utilize",
"the",
"dialogue",
"history",
"to",
"generate",
"the",
"response",
".",
"however",
",",
"since",
"one",
"dialogue",
"utterance",
"can",
"often",
"be",
"appropriately",
"answered",
"by",
"multiple",
"distinc... |
ACL | Do self-supervised speech models develop human-like perception biases? | Self-supervised models for speech processing form representational spaces without using any external labels. Increasingly, they appear to be a feasible way of at least partially eliminating costly manual annotations, a problem of particular concern for low-resource languages. But what kind of representational spaces do... | 1b77781fbda3d53201fe64c8c5965ff0 | 2,022 | [
"self - supervised models for speech processing form representational spaces without using any external labels .",
"increasingly , they appear to be a feasible way of at least partially eliminating costly manual annotations , a problem of particular concern for low - resource languages .",
"but what kind of rep... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "speech processing",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"speech",
"processing"
],
"offsets": [
5,
6
]
}
],
"trigger": ... | [
"self",
"-",
"supervised",
"models",
"for",
"speech",
"processing",
"form",
"representational",
"spaces",
"without",
"using",
"any",
"external",
"labels",
".",
"increasingly",
",",
"they",
"appear",
"to",
"be",
"a",
"feasible",
"way",
"of",
"at",
"least",
"par... |
ACL | PHMOSpell: Phonological and Morphological Knowledge Guided Chinese Spelling Check | Chinese Spelling Check (CSC) is a challenging task due to the complex characteristics of Chinese characters. Statistics reveal that most Chinese spelling errors belong to phonological or visual errors. However, previous methods rarely utilize phonological and morphological knowledge of Chinese characters or heavily rel... | 1ab1a69795861af8dfc84e1094a5c600 | 2,021 | [
"chinese spelling check ( csc ) is a challenging task due to the complex characteristics of chinese characters .",
"statistics reveal that most chinese spelling errors belong to phonological or visual errors .",
"however , previous methods rarely utilize phonological and morphological knowledge of chinese chara... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "chinese spelling check",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"chinese",
"spelling",
"check"
],
"offsets": [
0,
1,
2
... | [
"chinese",
"spelling",
"check",
"(",
"csc",
")",
"is",
"a",
"challenging",
"task",
"due",
"to",
"the",
"complex",
"characteristics",
"of",
"chinese",
"characters",
".",
"statistics",
"reveal",
"that",
"most",
"chinese",
"spelling",
"errors",
"belong",
"to",
"p... |
ACL | Dynamic Prefix-Tuning for Generative Template-based Event Extraction | We consider event extraction in a generative manner with template-based conditional generation.Although there is a rising trend of casting the task of event extraction as a sequence generation problem with prompts, these generation-based methods have two significant challenges, including using suboptimal prompts and st... | ffa3a5a3f13545ce878d04d65afe294d | 2,022 | [
"we consider event extraction in a generative manner with template - based conditional generation .",
"although there is a rising trend of casting the task of event extraction as a sequence generation problem with prompts , these generation - based methods have two significant challenges , including using subopti... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "in a generative manner with template - based cond... | [
"we",
"consider",
"event",
"extraction",
"in",
"a",
"generative",
"manner",
"with",
"template",
"-",
"based",
"conditional",
"generation",
".",
"although",
"there",
"is",
"a",
"rising",
"trend",
"of",
"casting",
"the",
"task",
"of",
"event",
"extraction",
"as"... |
ACL | Learning a Matching Model with Co-teaching for Multi-turn Response Selection in Retrieval-based Dialogue Systems | We study learning of a matching model for response selection in retrieval-based dialogue systems. The problem is equally important with designing the architecture of a model, but is less explored in existing literature. To learn a robust matching model from noisy training data, we propose a general co-teaching framewor... | 884e5053bf7f64218274d03fb79b53f4 | 2,019 | [
"we study learning of a matching model for response selection in retrieval - based dialogue systems .",
"the problem is equally important with designing the architecture of a model , but is less explored in existing literature .",
"to learn a robust matching model from noisy training data , we propose a general... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "learning of a matching model",
"nugget_ty... | [
"we",
"study",
"learning",
"of",
"a",
"matching",
"model",
"for",
"response",
"selection",
"in",
"retrieval",
"-",
"based",
"dialogue",
"systems",
".",
"the",
"problem",
"is",
"equally",
"important",
"with",
"designing",
"the",
"architecture",
"of",
"a",
"mode... |
ACL | A Novel Estimator of Mutual Information for Learning to Disentangle Textual Representations | Learning disentangled representations of textual data is essential for many natural language tasks such as fair classification, style transfer and sentence generation, among others. The existent dominant approaches in the context of text data either rely on training an adversary (discriminator) that aims at making attr... | 6f0fe1e1e996a7331b982492fe599fe4 | 2,021 | [
"learning disentangled representations of textual data is essential for many natural language tasks such as fair classification , style transfer and sentence generation , among others .",
"the existent dominant approaches in the context of text data either rely on training an adversary ( discriminator ) that aims... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "learning disentangled representations of textual data",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"learning",
"disentangled",
"representations",
"of",
"... | [
"learning",
"disentangled",
"representations",
"of",
"textual",
"data",
"is",
"essential",
"for",
"many",
"natural",
"language",
"tasks",
"such",
"as",
"fair",
"classification",
",",
"style",
"transfer",
"and",
"sentence",
"generation",
",",
"among",
"others",
"."... |
ACL | Real-Time Open-Domain Question Answering with Dense-Sparse Phrase Index | Existing open-domain question answering (QA) models are not suitable for real-time usage because they need to process several long documents on-demand for every input query, which is computationally prohibitive. In this paper, we introduce query-agnostic indexable representations of document phrases that can drasticall... | b835946a19208bebe251f32b74b78858 | 2,019 | [
"existing open - domain question answering ( qa ) models are not suitable for real - time usage because they need to process several long documents on - demand for every input query , which is computationally prohibitive .",
"in this paper , we introduce query - agnostic indexable representations of document phra... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "open - domain question answering models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"open",
"-",
"domain",
"question",
"answering",
"models"
... | [
"existing",
"open",
"-",
"domain",
"question",
"answering",
"(",
"qa",
")",
"models",
"are",
"not",
"suitable",
"for",
"real",
"-",
"time",
"usage",
"because",
"they",
"need",
"to",
"process",
"several",
"long",
"documents",
"on",
"-",
"demand",
"for",
"ev... |
ACL | Multi-grained Named Entity Recognition | This paper presents a novel framework, MGNER, for Multi-Grained Named Entity Recognition where multiple entities or entity mentions in a sentence could be non-overlapping or totally nested. Different from traditional approaches regarding NER as a sequential labeling task and annotate entities consecutively, MGNER detec... | f300a1a8a425614b027e7571174864fc | 2,019 | [
"this paper presents a novel framework , mgner , for multi - grained named entity recognition where multiple entities or entity mentions in a sentence could be non - overlapping or totally nested .",
"different from traditional approaches regarding ner as a sequential labeling task and annotate entities consecuti... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "multi - grained",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"multi",
"-",
"grained"
],
"offsets": [
10,
11,
12
]
... | [
"this",
"paper",
"presents",
"a",
"novel",
"framework",
",",
"mgner",
",",
"for",
"multi",
"-",
"grained",
"named",
"entity",
"recognition",
"where",
"multiple",
"entities",
"or",
"entity",
"mentions",
"in",
"a",
"sentence",
"could",
"be",
"non",
"-",
"overl... |
ACL | Calibrating Structured Output Predictors for Natural Language Processing | We address the problem of calibrating prediction confidence for output entities of interest in natural language processing (NLP) applications. It is important that NLP applications such as named entity recognition and question answering produce calibrated confidence scores for their predictions, especially if the appli... | 4611e98da1e24745bc46eea07c7d872a | 2,020 | [
"we address the problem of calibrating prediction confidence for output entities of interest in natural language processing ( nlp ) applications .",
"it is important that nlp applications such as named entity recognition and question answering produce calibrated confidence scores for their predictions , especiall... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "in natural language processing applications",
... | [
"we",
"address",
"the",
"problem",
"of",
"calibrating",
"prediction",
"confidence",
"for",
"output",
"entities",
"of",
"interest",
"in",
"natural",
"language",
"processing",
"(",
"nlp",
")",
"applications",
".",
"it",
"is",
"important",
"that",
"nlp",
"applicati... |
ACL | Efficient Classification of Long Documents Using Transformers | Several methods have been proposed for classifying long textual documents using Transformers. However, there is a lack of consensus on a benchmark to enable a fair comparison among different approaches. In this paper, we provide a comprehensive evaluation of the relative efficacy measured against various baselines and ... | b5c8348831210b09ae5f64bdfdb5fc94 | 2,022 | [
"several methods have been proposed for classifying long textual documents using transformers .",
"however , there is a lack of consensus on a benchmark to enable a fair comparison among different approaches .",
"in this paper , we provide a comprehensive evaluation of the relative efficacy measured against var... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "long textual documents",
"nugget_type": "FEA",
"argument_type": "Target",
"tokens": [
"long",
"textual",
"documents"
],
"offsets": [
7,
8,
9
... | [
"several",
"methods",
"have",
"been",
"proposed",
"for",
"classifying",
"long",
"textual",
"documents",
"using",
"transformers",
".",
"however",
",",
"there",
"is",
"a",
"lack",
"of",
"consensus",
"on",
"a",
"benchmark",
"to",
"enable",
"a",
"fair",
"compariso... |
ACL | Divide and Denoise: Learning from Noisy Labels in Fine-Grained Entity Typing with Cluster-Wise Loss Correction | Fine-grained Entity Typing (FET) has made great progress based on distant supervision but still suffers from label noise. Existing FET noise learning methods rely on prediction distributions in an instance-independent manner, which causes the problem of confirmation bias. In this work, we propose a clustering-based los... | 09f50c03ee0687b86d06258efa2c0d54 | 2,022 | [
"fine - grained entity typing ( fet ) has made great progress based on distant supervision but still suffers from label noise .",
"existing fet noise learning methods rely on prediction distributions in an instance - independent manner , which causes the problem of confirmation bias .",
"in this work , we propo... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "fet",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"fet"
],
"offsets": [
24
]
}
],
"trigger": {
"text": "progress",
"tokens": [
... | [
"fine",
"-",
"grained",
"entity",
"typing",
"(",
"fet",
")",
"has",
"made",
"great",
"progress",
"based",
"on",
"distant",
"supervision",
"but",
"still",
"suffers",
"from",
"label",
"noise",
".",
"existing",
"fet",
"noise",
"learning",
"methods",
"rely",
"on... |
ACL | Evaluating morphological typology in zero-shot cross-lingual transfer | Cross-lingual transfer has improved greatly through multi-lingual language model pretraining, reducing the need for parallel data and increasing absolute performance. However, this progress has also brought to light the differences in performance across languages. Specifically, certain language families and typologies ... | 475dfd503317cd9d6ce6e8b405d46c3c | 2,021 | [
"cross - lingual transfer has improved greatly through multi - lingual language model pretraining , reducing the need for parallel data and increasing absolute performance .",
"however , this progress has also brought to light the differences in performance across languages .",
"specifically , certain language ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "cross - lingual transfer",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"cross",
"-",
"lingual",
"transfer"
],
"offsets": [
0,
1,... | [
"cross",
"-",
"lingual",
"transfer",
"has",
"improved",
"greatly",
"through",
"multi",
"-",
"lingual",
"language",
"model",
"pretraining",
",",
"reducing",
"the",
"need",
"for",
"parallel",
"data",
"and",
"increasing",
"absolute",
"performance",
".",
"however",
... |
ACL | FaiRR: Faithful and Robust Deductive Reasoning over Natural Language | Transformers have been shown to be able to perform deductive reasoning on a logical rulebase containing rules and statements written in natural language. Recent works show that such models can also produce the reasoning steps (i.e., the proof graph) that emulate the model’s logical reasoning process. Currently, these b... | 3cc9bf12b3b4425e2ac35e39cdfae465 | 2,022 | [
"transformers have been shown to be able to perform deductive reasoning on a logical rulebase containing rules and statements written in natural language .",
"recent works show that such models can also produce the reasoning steps ( i . e . , the proof graph ) that emulate the model ’ s logical reasoning process ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "transformers",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"transformers"
],
"offsets": [
0
]
},
{
"text": "on a logical rulebase containing... | [
"transformers",
"have",
"been",
"shown",
"to",
"be",
"able",
"to",
"perform",
"deductive",
"reasoning",
"on",
"a",
"logical",
"rulebase",
"containing",
"rules",
"and",
"statements",
"written",
"in",
"natural",
"language",
".",
"recent",
"works",
"show",
"that",
... |
ACL | Entity-Relation Extraction as Multi-Turn Question Answering | In this paper, we propose a new paradigm for the task of entity-relation extraction. We cast the task as a multi-turn question answering problem, i.e., the extraction of entities and elations is transformed to the task of identifying answer spans from the context. This multi-turn QA formalization comes with several key... | d7a926c34a4d1807b9e2659251f6331b | 2,019 | [
"in this paper , we propose a new paradigm for the task of entity - relation extraction .",
"we cast the task as a multi - turn question answering problem , i . e . , the extraction of entities and elations is transformed to the task of identifying answer spans from the context .",
"this multi - turn qa formali... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
4
]
},
{
"text": "paradigm",
"nugget_type": "APP",
"a... | [
"in",
"this",
"paper",
",",
"we",
"propose",
"a",
"new",
"paradigm",
"for",
"the",
"task",
"of",
"entity",
"-",
"relation",
"extraction",
".",
"we",
"cast",
"the",
"task",
"as",
"a",
"multi",
"-",
"turn",
"question",
"answering",
"problem",
",",
"i",
"... |
ACL | Tracing Origins: Coreference-aware Machine Reading Comprehension | Machine reading comprehension is a heavily-studied research and test field for evaluating new pre-trained language models (PrLMs) and fine-tuning strategies, and recent studies have enriched the pre-trained language models with syntactic, semantic and other linguistic information to improve the performance of the model... | 286e220921d5bad4ffae6121803135cd | 2,022 | [
"machine reading comprehension is a heavily - studied research and test field for evaluating new pre - trained language models ( prlms ) and fine - tuning strategies , and recent studies have enriched the pre - trained language models with syntactic , semantic and other linguistic information to improve the perform... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "pre - trained language models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"pre",
"-",
"trained",
"language",
"models"
],
"offsets": [
... | [
"machine",
"reading",
"comprehension",
"is",
"a",
"heavily",
"-",
"studied",
"research",
"and",
"test",
"field",
"for",
"evaluating",
"new",
"pre",
"-",
"trained",
"language",
"models",
"(",
"prlms",
")",
"and",
"fine",
"-",
"tuning",
"strategies",
",",
"and... |
ACL | Inferring Rewards from Language in Context | In classic instruction following, language like “I’d like the JetBlue flight” maps to actions (e.g., selecting that flight). However, language also conveys information about a user’s underlying reward function (e.g., a general preference for JetBlue), which can allow a model to carry out desirable actions in new contex... | 4c580a26c582ed708aae7f21b341a45e | 2,022 | [
"in classic instruction following , language like “ i ’ d like the jetblue flight ” maps to actions ( e . g . , selecting that flight ) .",
"however , language also conveys information about a user ’ s underlying reward function ( e . g . , a general preference for jetblue ) , which can allow a model to carry out... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "instruction following",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"instruction",
"following"
],
"offsets": [
2,
3
]
}
],
"tr... | [
"in",
"classic",
"instruction",
"following",
",",
"language",
"like",
"“",
"i",
"’",
"d",
"like",
"the",
"jetblue",
"flight",
"”",
"maps",
"to",
"actions",
"(",
"e",
".",
"g",
".",
",",
"selecting",
"that",
"flight",
")",
".",
"however",
",",
"language... |
ACL | Domain Knowledge Transferring for Pre-trained Language Model via Calibrated Activation Boundary Distillation | Since the development and wide use of pretrained language models (PLMs), several approaches have been applied to boost their performance on downstream tasks in specific domains, such as biomedical or scientific domains. Additional pre-training with in-domain texts is the most common approach for providing domain-specif... | abad77699ca89f32d12a6ff88bc29daf | 2,022 | [
"since the development and wide use of pretrained language models ( plms ) , several approaches have been applied to boost their performance on downstream tasks in specific domains , such as biomedical or scientific domains .",
"additional pre - training with in - domain texts is the most common approach for prov... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "pretrained language models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"pretrained",
"language",
"models"
],
"offsets": [
7,
8,
... | [
"since",
"the",
"development",
"and",
"wide",
"use",
"of",
"pretrained",
"language",
"models",
"(",
"plms",
")",
",",
"several",
"approaches",
"have",
"been",
"applied",
"to",
"boost",
"their",
"performance",
"on",
"downstream",
"tasks",
"in",
"specific",
"dom... |
ACL | Classification and Clustering of Arguments with Contextualized Word Embeddings | We experiment with two recent contextualized word embedding methods (ELMo and BERT) in the context of open-domain argument search. For the first time, we show how to leverage the power of contextualized word embeddings to classify and cluster topic-dependent arguments, achieving impressive results on both tasks and acr... | 784d9f0711a91601b5b3a7427b8cdd7e | 2,019 | [
"we experiment with two recent contextualized word embedding methods ( elmo and bert ) in the context of open - domain argument search .",
"for the first time , we show how to leverage the power of contextualized word embeddings to classify and cluster topic - dependent arguments , achieving impressive results on... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "two recent contextualized word embedding methods"... | [
"we",
"experiment",
"with",
"two",
"recent",
"contextualized",
"word",
"embedding",
"methods",
"(",
"elmo",
"and",
"bert",
")",
"in",
"the",
"context",
"of",
"open",
"-",
"domain",
"argument",
"search",
".",
"for",
"the",
"first",
"time",
",",
"we",
"show"... |
ACL | Multi-style Generative Reading Comprehension | This study tackles generative reading comprehension (RC), which consists of answering questions based on textual evidence and natural language generation (NLG). We propose a multi-style abstractive summarization model for question answering, called Masque. The proposed model has two key characteristics. First, unlike m... | e91e08c28f2c65923de18b0138361150 | 2,019 | [
"this study tackles generative reading comprehension ( rc ) , which consists of answering questions based on textual evidence and natural language generation ( nlg ) .",
"we propose a multi - style abstractive summarization model for question answering , called masque .",
"the proposed model has two key charact... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "generative reading comprehension",
"nugget_type": "TAK",
"argument_type": "Content",
"tokens": [
"generative",
"reading",
"comprehension"
],
"offsets": [
3,
... | [
"this",
"study",
"tackles",
"generative",
"reading",
"comprehension",
"(",
"rc",
")",
",",
"which",
"consists",
"of",
"answering",
"questions",
"based",
"on",
"textual",
"evidence",
"and",
"natural",
"language",
"generation",
"(",
"nlg",
")",
".",
"we",
"propo... |
ACL | The Sensitivity of Language Models and Humans to Winograd Schema Perturbations | Large-scale pretrained language models are the major driving force behind recent improvements in perfromance on the Winograd Schema Challenge, a widely employed test of commonsense reasoning ability. We show, however, with a new diagnostic dataset, that these models are sensitive to linguistic perturbations of the Wino... | cf6bb0bee3fe99fd758bba2f9f1531ac | 2,020 | [
"large - scale pretrained language models are the major driving force behind recent improvements in perfromance on the winograd schema challenge , a widely employed test of commonsense reasoning ability .",
"we show , however , with a new diagnostic dataset , that these models are sensitive to linguistic perturba... | [
{
"event_type": "FIN",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Finder",
"tokens": [
"we"
],
"offsets": [
31
]
},
{
"text": "sensitive",
"nugget_type": "E-FAC",
... | [
"large",
"-",
"scale",
"pretrained",
"language",
"models",
"are",
"the",
"major",
"driving",
"force",
"behind",
"recent",
"improvements",
"in",
"perfromance",
"on",
"the",
"winograd",
"schema",
"challenge",
",",
"a",
"widely",
"employed",
"test",
"of",
"commonse... |
ACL | Knowledge Graph Embedding Compression | Knowledge graph (KG) representation learning techniques that learn continuous embeddings of entities and relations in the KG have become popular in many AI applications. With a large KG, the embeddings consume a large amount of storage and memory. This is problematic and prohibits the deployment of these techniques in ... | e7cd6e52366234e7d7731290fe746b5a | 2,020 | [
"knowledge graph ( kg ) representation learning techniques that learn continuous embeddings of entities and relations in the kg have become popular in many ai applications .",
"with a large kg , the embeddings consume a large amount of storage and memory .",
"this is problematic and prohibits the deployment of ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "knowledge graph representation learning techniques",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"knowledge",
"graph",
"representation",
"learning",
"tech... | [
"knowledge",
"graph",
"(",
"kg",
")",
"representation",
"learning",
"techniques",
"that",
"learn",
"continuous",
"embeddings",
"of",
"entities",
"and",
"relations",
"in",
"the",
"kg",
"have",
"become",
"popular",
"in",
"many",
"ai",
"applications",
".",
"with",
... |
ACL | Uncertainty Estimation of Transformer Predictions for Misclassification Detection | Uncertainty estimation (UE) of model predictions is a crucial step for a variety of tasks such as active learning, misclassification detection, adversarial attack detection, out-of-distribution detection, etc. Most of the works on modeling the uncertainty of deep neural networks evaluate these methods on image classifi... | 1b8978f758fb9abd31e2caaface53440 | 2,022 | [
"uncertainty estimation ( ue ) of model predictions is a crucial step for a variety of tasks such as active learning , misclassification detection , adversarial attack detection , out - of - distribution detection , etc .",
"most of the works on modeling the uncertainty of deep neural networks evaluate these meth... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "uncertainty estimation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"uncertainty",
"estimation"
],
"offsets": [
0,
1
]
}
],
"... | [
"uncertainty",
"estimation",
"(",
"ue",
")",
"of",
"model",
"predictions",
"is",
"a",
"crucial",
"step",
"for",
"a",
"variety",
"of",
"tasks",
"such",
"as",
"active",
"learning",
",",
"misclassification",
"detection",
",",
"adversarial",
"attack",
"detection",
... |
ACL | Neural semi-Markov CRF for Monolingual Word Alignment | Monolingual word alignment is important for studying fine-grained editing operations (i.e., deletion, addition, and substitution) in text-to-text generation tasks, such as paraphrase generation, text simplification, neutralizing biased language, etc. In this paper, we present a novel neural semi-Markov CRF alignment mo... | 19249552c55123af3c3c1d46e4d855a0 | 2,021 | [
"monolingual word alignment is important for studying fine - grained editing operations ( i . e . , deletion , addition , and substitution ) in text - to - text generation tasks , such as paraphrase generation , text simplification , neutralizing biased language , etc .",
"in this paper , we present a novel neura... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "monolingual word alignment",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"monolingual",
"word",
"alignment"
],
"offsets": [
0,
1,
... | [
"monolingual",
"word",
"alignment",
"is",
"important",
"for",
"studying",
"fine",
"-",
"grained",
"editing",
"operations",
"(",
"i",
".",
"e",
".",
",",
"deletion",
",",
"addition",
",",
"and",
"substitution",
")",
"in",
"text",
"-",
"to",
"-",
"text",
"... |
ACL | ParaCrawl: Web-Scale Acquisition of Parallel Corpora | We report on methods to create the largest publicly available parallel corpora by crawling the web, using open source software. We empirically compare alternative methods and publish benchmark data sets for sentence alignment and sentence pair filtering. We also describe the parallel corpora released and evaluate their... | 0f2bb09e20de228d849336c5c32bc658 | 2,020 | [
"we report on methods to create the largest publicly available parallel corpora by crawling the web , using open source software .\\nwe empirically compare alternative methods and publish benchmark data sets for sentence alignment and sentence pair filtering .\\nwe also describe the parallel corpora released and ev... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "methods",
"nugget_type": "APP",
"ar... | [
"we",
"report",
"on",
"methods",
"to",
"create",
"the",
"largest",
"publicly",
"available",
"parallel",
"corpora",
"by",
"crawling",
"the",
"web",
",",
"using",
"open",
"source",
"software",
".\\nwe",
"empirically",
"compare",
"alternative",
"methods",
"and",
"p... |
ACL | Enforcing Consistency in Weakly Supervised Semantic Parsing | The predominant challenge in weakly supervised semantic parsing is that of spurious programs that evaluate to correct answers for the wrong reasons. Prior work uses elaborate search strategies to mitigate the prevalence of spurious programs; however, they typically consider only one input at a time. In this work we exp... | f7dd26f248610f32cb827c7a9e580ac2 | 2,021 | [
"the predominant challenge in weakly supervised semantic parsing is that of spurious programs that evaluate to correct answers for the wrong reasons .",
"prior work uses elaborate search strategies to mitigate the prevalence of spurious programs ; however , they typically consider only one input at a time .",
"... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "weakly supervised semantic parsing",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"weakly",
"supervised",
"semantic",
"parsing"
],
"offsets": [
... | [
"the",
"predominant",
"challenge",
"in",
"weakly",
"supervised",
"semantic",
"parsing",
"is",
"that",
"of",
"spurious",
"programs",
"that",
"evaluate",
"to",
"correct",
"answers",
"for",
"the",
"wrong",
"reasons",
".",
"prior",
"work",
"uses",
"elaborate",
"sear... |
ACL | Auto-Debias: Debiasing Masked Language Models with Automated Biased Prompts | Human-like biases and undesired social stereotypes exist in large pretrained language models. Given the wide adoption of these models in real-world applications, mitigating such biases has become an emerging and important task. In this paper, we propose an automatic method to mitigate the biases in pretrained language ... | 175d59ca06739e8cee368d925a9e83fd | 2,022 | [
"human - like biases and undesired social stereotypes exist in large pretrained language models .",
"given the wide adoption of these models in real - world applications , mitigating such biases has become an emerging and important task .",
"in this paper , we propose an automatic method to mitigate the biases ... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "human - like biases",
"nugget_type": "WEA",
"argument_type": "Fault",
"tokens": [
"human",
"-",
"like",
"biases"
],
"offsets": [
0,
1,
... | [
"human",
"-",
"like",
"biases",
"and",
"undesired",
"social",
"stereotypes",
"exist",
"in",
"large",
"pretrained",
"language",
"models",
".",
"given",
"the",
"wide",
"adoption",
"of",
"these",
"models",
"in",
"real",
"-",
"world",
"applications",
",",
"mitigat... |
ACL | Generating Informative Conversational Response using Recurrent Knowledge-Interaction and Knowledge-Copy | Knowledge-driven conversation approaches have achieved remarkable research attention recently. However, generating an informative response with multiple relevant knowledge without losing fluency and coherence is still one of the main challenges. To address this issue, this paper proposes a method that uses recurrent kn... | 9961072743c160345f0ac26160a2aa9a | 2,020 | [
"knowledge - driven conversation approaches have achieved remarkable research attention recently .",
"however , generating an informative response with multiple relevant knowledge without losing fluency and coherence is still one of the main challenges .",
"to address this issue , this paper proposes a method t... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "knowledge - driven conversation approaches",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"knowledge",
"-",
"driven",
"conversation",
"approaches"
... | [
"knowledge",
"-",
"driven",
"conversation",
"approaches",
"have",
"achieved",
"remarkable",
"research",
"attention",
"recently",
".",
"however",
",",
"generating",
"an",
"informative",
"response",
"with",
"multiple",
"relevant",
"knowledge",
"without",
"losing",
"flue... |
ACL | Enhancing Chinese Pre-trained Language Model via Heterogeneous Linguistics Graph | Chinese pre-trained language models usually exploit contextual character information to learn representations, while ignoring the linguistics knowledge, e.g., word and sentence information. Hence, we propose a task-free enhancement module termed as Heterogeneous Linguistics Graph (HLG) to enhance Chinese pre-trained la... | 68d4f30cc48dc31a6b3e9323ba2611e5 | 2,022 | [
"chinese pre - trained language models usually exploit contextual character information to learn representations , while ignoring the linguistics knowledge , e . g . , word and sentence information .",
"hence , we propose a task - free enhancement module termed as heterogeneous linguistics graph ( hlg ) to enhanc... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "chinese pre - trained language models",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"chinese",
"pre",
"-",
"trained",
"language",
"models"
... | [
"chinese",
"pre",
"-",
"trained",
"language",
"models",
"usually",
"exploit",
"contextual",
"character",
"information",
"to",
"learn",
"representations",
",",
"while",
"ignoring",
"the",
"linguistics",
"knowledge",
",",
"e",
".",
"g",
".",
",",
"word",
"and",
... |
ACL | How to (Properly) Evaluate Cross-Lingual Word Embeddings: On Strong Baselines, Comparative Analyses, and Some Misconceptions | Cross-lingual word embeddings (CLEs) facilitate cross-lingual transfer of NLP models. Despite their ubiquitous downstream usage, increasingly popular projection-based CLE models are almost exclusively evaluated on bilingual lexicon induction (BLI). Even the BLI evaluations vary greatly, hindering our ability to correct... | 2f04525e5d592b52cdc7f7a9dd6a0b0c | 2,019 | [
"cross - lingual word embeddings ( cles ) facilitate cross - lingual transfer of nlp models .",
"despite their ubiquitous downstream usage , increasingly popular projection - based cle models are almost exclusively evaluated on bilingual lexicon induction ( bli ) .",
"even the bli evaluations vary greatly , hin... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "cross - lingual word embeddings",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"cross",
"-",
"lingual",
"word",
"embeddings"
],
"offsets": ... | [
"cross",
"-",
"lingual",
"word",
"embeddings",
"(",
"cles",
")",
"facilitate",
"cross",
"-",
"lingual",
"transfer",
"of",
"nlp",
"models",
".",
"despite",
"their",
"ubiquitous",
"downstream",
"usage",
",",
"increasingly",
"popular",
"projection",
"-",
"based",
... |
ACL | Hi-Transformer: Hierarchical Interactive Transformer for Efficient and Effective Long Document Modeling | Transformer is important for text modeling. However, it has difficulty in handling long documents due to the quadratic complexity with input text length. In order to handle this problem, we propose a hierarchical interactive Transformer (Hi-Transformer) for efficient and effective long document modeling. Hi-Transformer... | 23798d39ef3a650bcc5a11f632112803 | 2,021 | [
"transformer is important for text modeling .",
"however , it has difficulty in handling long documents due to the quadratic complexity with input text length .",
"in order to handle this problem , we propose a hierarchical interactive transformer ( hi - transformer ) for efficient and effective long document m... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "text modeling",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"text",
"modeling"
],
"offsets": [
4,
5
]
}
],
"trigger": {
... | [
"transformer",
"is",
"important",
"for",
"text",
"modeling",
".",
"however",
",",
"it",
"has",
"difficulty",
"in",
"handling",
"long",
"documents",
"due",
"to",
"the",
"quadratic",
"complexity",
"with",
"input",
"text",
"length",
".",
"in",
"order",
"to",
"h... |
ACL | Zero-Shot Cross-Lingual Abstractive Sentence Summarization through Teaching Generation and Attention | Abstractive Sentence Summarization (ASSUM) targets at grasping the core idea of the source sentence and presenting it as the summary. It is extensively studied using statistical models or neural models based on the large-scale monolingual source-summary parallel corpus. But there is no cross-lingual parallel corpus, wh... | 59a589bdd5735550fee285c37de378ac | 2,019 | [
"abstractive sentence summarization ( assum ) targets at grasping the core idea of the source sentence and presenting it as the summary .",
"it is extensively studied using statistical models or neural models based on the large - scale monolingual source - summary parallel corpus .",
"but there is no cross - li... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "abstractive sentence summarization",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"abstractive",
"sentence",
"summarization"
],
"offsets": [
0,
... | [
"abstractive",
"sentence",
"summarization",
"(",
"assum",
")",
"targets",
"at",
"grasping",
"the",
"core",
"idea",
"of",
"the",
"source",
"sentence",
"and",
"presenting",
"it",
"as",
"the",
"summary",
".",
"it",
"is",
"extensively",
"studied",
"using",
"statis... |
ACL | The Trade-offs of Domain Adaptation for Neural Language Models | This work connects language model adaptation with concepts of machine learning theory. We consider a training setup with a large out-of-domain set and a small in-domain set. We derive how the benefit of training a model on either set depends on the size of the sets and the distance between their underlying distribution... | 3d6877284612d6b42813d04d2500ea58 | 2,022 | [
"this work connects language model adaptation with concepts of machine learning theory .",
"we consider a training setup with a large out - of - domain set and a small in - domain set .",
"we derive how the benefit of training a model on either set depends on the size of the sets and the distance between their ... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "language model adaptation",
"nugget_type": "TAK",
"argument_type": "Content",
"tokens": [
"language",
"model",
"adaptation"
],
"offsets": [
3,
4,
... | [
"this",
"work",
"connects",
"language",
"model",
"adaptation",
"with",
"concepts",
"of",
"machine",
"learning",
"theory",
".",
"we",
"consider",
"a",
"training",
"setup",
"with",
"a",
"large",
"out",
"-",
"of",
"-",
"domain",
"set",
"and",
"a",
"small",
"i... |
ACL | Shaping Visual Representations with Language for Few-Shot Classification | By describing the features and abstractions of our world, language is a crucial tool for human learning and a promising source of supervision for machine learning models. We use language to improve few-shot visual classification in the underexplored scenario where natural language task descriptions are available during... | f6a30c447bb80d6f02da997c997e42e7 | 2,020 | [
"by describing the features and abstractions of our world , language is a crucial tool for human learning and a promising source of supervision for machine learning models .",
"we use language to improve few - shot visual classification in the underexplored scenario where natural language task descriptions are av... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "human learning",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"human",
"learning"
],
"offsets": [
16,
17
]
},
{
"text": "... | [
"by",
"describing",
"the",
"features",
"and",
"abstractions",
"of",
"our",
"world",
",",
"language",
"is",
"a",
"crucial",
"tool",
"for",
"human",
"learning",
"and",
"a",
"promising",
"source",
"of",
"supervision",
"for",
"machine",
"learning",
"models",
".",
... |
ACL | PRGC: Potential Relation and Global Correspondence Based Joint Relational Triple Extraction | Joint extraction of entities and relations from unstructured texts is a crucial task in information extraction. Recent methods achieve considerable performance but still suffer from some inherent limitations, such as redundancy of relation prediction, poor generalization of span-based extraction and inefficiency. In th... | 5ddb45ec17efc8df76107de8d83b3b87 | 2,021 | [
"joint extraction of entities and relations from unstructured texts is a crucial task in information extraction .",
"recent methods achieve considerable performance but still suffer from some inherent limitations , such as redundancy of relation prediction , poor generalization of span - based extraction and inef... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "joint extraction of entities and relations",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"joint",
"extraction",
"of",
"entities",
"and",
"relati... | [
"joint",
"extraction",
"of",
"entities",
"and",
"relations",
"from",
"unstructured",
"texts",
"is",
"a",
"crucial",
"task",
"in",
"information",
"extraction",
".",
"recent",
"methods",
"achieve",
"considerable",
"performance",
"but",
"still",
"suffer",
"from",
"so... |
ACL | Investigating Non-local Features for Neural Constituency Parsing | Thanks to the strong representation power of neural encoders, neural chart-based parsers have achieved highly competitive performance by using local features. Recently, it has been shown that non-local features in CRF structures lead to improvements. In this paper, we investigate injecting non-local features into the t... | f90b057cb908f5c665c670f97cf97cdd | 2,022 | [
"thanks to the strong representation power of neural encoders , neural chart - based parsers have achieved highly competitive performance by using local features .",
"recently , it has been shown that non - local features in crf structures lead to improvements .",
"in this paper , we investigate injecting non -... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural chart - based parsers",
"nugget_type": "MOD",
"argument_type": "Target",
"tokens": [
"neural",
"chart",
"-",
"based",
"parsers"
],
"offsets": [
... | [
"thanks",
"to",
"the",
"strong",
"representation",
"power",
"of",
"neural",
"encoders",
",",
"neural",
"chart",
"-",
"based",
"parsers",
"have",
"achieved",
"highly",
"competitive",
"performance",
"by",
"using",
"local",
"features",
".",
"recently",
",",
"it",
... |
ACL | How Accents Confound: Probing for Accent Information in End-to-End Speech Recognition Systems | In this work, we present a detailed analysis of how accent information is reflected in the internal representation of speech in an end-to-end automatic speech recognition (ASR) system. We use a state-of-the-art end-to-end ASR system, comprising convolutional and recurrent layers, that is trained on a large amount of US... | 89874ac48ad9f898c7080dae148b8aef | 2,020 | [
"in this work , we present a detailed analysis of how accent information is reflected in the internal representation of speech in an end - to - end automatic speech recognition ( asr ) system .",
"we use a state - of - the - art end - to - end asr system , comprising convolutional and recurrent layers , that is t... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "end - to - end automatic speech recognition system",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"end",
"-",
"to",
"-",
"end",
"automatic",
... | [
"in",
"this",
"work",
",",
"we",
"present",
"a",
"detailed",
"analysis",
"of",
"how",
"accent",
"information",
"is",
"reflected",
"in",
"the",
"internal",
"representation",
"of",
"speech",
"in",
"an",
"end",
"-",
"to",
"-",
"end",
"automatic",
"speech",
"r... |
ACL | Annotation and Automatic Classification of Aspectual Categories | We present the first annotated resource for the aspectual classification of German verb tokens in their clausal context. We use aspectual features compatible with the plurality of aspectual classifications in previous work and treat aspectual ambiguity systematically. We evaluate our corpus by using it to train supervi... | 3807793075df31c9ce922e54b7ec3b95 | 2,019 | [
"we present the first annotated resource for the aspectual classification of german verb tokens in their clausal context .",
"we use aspectual features compatible with the plurality of aspectual classifications in previous work and treat aspectual ambiguity systematically .",
"we evaluate our corpus by using it... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "aspectual classification of german verb tokens in t... | [
"we",
"present",
"the",
"first",
"annotated",
"resource",
"for",
"the",
"aspectual",
"classification",
"of",
"german",
"verb",
"tokens",
"in",
"their",
"clausal",
"context",
".",
"we",
"use",
"aspectual",
"features",
"compatible",
"with",
"the",
"plurality",
"of... |
ACL | Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning | Fine-tuning of pre-trained transformer models has become the standard approach for solving common NLP tasks. Most of the existing approaches rely on a randomly initialized classifier on top of such networks. We argue that this fine-tuning procedure is sub-optimal as the pre-trained model has no prior on the specific cl... | 782a2e158235dd59081159463cbfcea1 | 2,020 | [
"fine - tuning of pre - trained transformer models has become the standard approach for solving common nlp tasks .",
"most of the existing approaches rely on a randomly initialized classifier on top of such networks .",
"we argue that this fine - tuning procedure is sub - optimal as the pre - trained model has ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "pre - trained transformer models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"pre",
"-",
"trained",
"transformer",
"models"
],
"offsets"... | [
"fine",
"-",
"tuning",
"of",
"pre",
"-",
"trained",
"transformer",
"models",
"has",
"become",
"the",
"standard",
"approach",
"for",
"solving",
"common",
"nlp",
"tasks",
".",
"most",
"of",
"the",
"existing",
"approaches",
"rely",
"on",
"a",
"randomly",
"initi... |
ACL | Lexical Knowledge Internalization for Neural Dialog Generation | We propose knowledge internalization (KI), which aims to complement the lexical knowledge into neural dialog models. Instead of further conditioning the knowledge-grounded dialog (KGD) models on externally retrieved knowledge, we seek to integrate knowledge about each input token internally into the model’s parameters.... | f5583b85d7b64b6baa92708a3a1a6a44 | 2,022 | [
"we propose knowledge internalization ( ki ) , which aims to complement the lexical knowledge into neural dialog models .",
"instead of further conditioning the knowledge - grounded dialog ( kgd ) models on externally retrieved knowledge , we seek to integrate knowledge about each input token internally into the ... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "knowledge internalization",
"nugget_type": ... | [
"we",
"propose",
"knowledge",
"internalization",
"(",
"ki",
")",
",",
"which",
"aims",
"to",
"complement",
"the",
"lexical",
"knowledge",
"into",
"neural",
"dialog",
"models",
".",
"instead",
"of",
"further",
"conditioning",
"the",
"knowledge",
"-",
"grounded",
... |
ACL | A Neural Network Architecture for Program Understanding Inspired by Human Behaviors | Program understanding is a fundamental task in program language processing. Despite the success, existing works fail to take human behaviors as reference in understanding programs. In this paper, we consider human behaviors and propose the PGNN-EK model that consists of two main components. On the one hand, inspired by... | 92a5da342d667594d1a79ac55a566b0b | 2,022 | [
"program understanding is a fundamental task in program language processing .",
"despite the success , existing works fail to take human behaviors as reference in understanding programs .",
"in this paper , we consider human behaviors and propose the pgnn - ek model that consists of two main components .",
"o... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "program understanding",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"program",
"understanding"
],
"offsets": [
0,
1
]
}
],
"tr... | [
"program",
"understanding",
"is",
"a",
"fundamental",
"task",
"in",
"program",
"language",
"processing",
".",
"despite",
"the",
"success",
",",
"existing",
"works",
"fail",
"to",
"take",
"human",
"behaviors",
"as",
"reference",
"in",
"understanding",
"programs",
... |
ACL | INSET: Sentence Infilling with INter-SEntential Transformer | Missing sentence generation (or sentence in-filling) fosters a wide range of applications in natural language generation, such as document auto-completion and meeting note expansion. This task asks the model to generate intermediate missing sentences that can syntactically and semantically bridge the surrounding contex... | ea6c081c2ce3e0b304507a2815ec04ab | 2,020 | [
"missing sentence generation ( or sentence in - filling ) fosters a wide range of applications in natural language generation , such as document auto - completion and meeting note expansion .",
"this task asks the model to generate intermediate missing sentences that can syntactically and semantically bridge the ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "natural language generation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"natural",
"language",
"generation"
],
"offsets": [
17,
18,
... | [
"missing",
"sentence",
"generation",
"(",
"or",
"sentence",
"in",
"-",
"filling",
")",
"fosters",
"a",
"wide",
"range",
"of",
"applications",
"in",
"natural",
"language",
"generation",
",",
"such",
"as",
"document",
"auto",
"-",
"completion",
"and",
"meeting",... |
ACL | A Prioritization Model for Suicidality Risk Assessment | We reframe suicide risk assessment from social media as a ranking problem whose goal is maximizing detection of severely at-risk individuals given the time available. Building on measures developed for resource-bounded document retrieval, we introduce a well founded evaluation paradigm, and demonstrate using an expert-... | 5a54f2154f22659b9faaebdeece490d4 | 2,020 | [
"we reframe suicide risk assessment from social media as a ranking problem whose goal is maximizing detection of severely at - risk individuals given the time available .",
"building on measures developed for resource - bounded document retrieval , we introduce a well founded evaluation paradigm , and demonstrate... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
39
]
},
{
"text": "evaluation paradigm",
"nugget_type": "APP"... | [
"we",
"reframe",
"suicide",
"risk",
"assessment",
"from",
"social",
"media",
"as",
"a",
"ranking",
"problem",
"whose",
"goal",
"is",
"maximizing",
"detection",
"of",
"severely",
"at",
"-",
"risk",
"individuals",
"given",
"the",
"time",
"available",
".",
"build... |
ACL | Relation Extraction with Explanation | Recent neural models for relation extraction with distant supervision alleviate the impact of irrelevant sentences in a bag by learning importance weights for the sentences. Efforts thus far have focused on improving extraction accuracy but little is known about their explanability. In this work we annotate a test set ... | 25240e7d89e8f8b31dce29b28d030b3d | 2,020 | [
"recent neural models for relation extraction with distant supervision alleviate the impact of irrelevant sentences in a bag by learning importance weights for the sentences .",
"efforts thus far have focused on improving extraction accuracy but little is known about their explanability .",
"in this work we ann... | [
{
"event_type": "RWS",
"arguments": [
{
"text": "importance weights",
"nugget_type": "FEA",
"argument_type": "TriedComponent",
"tokens": [
"importance",
"weights"
],
"offsets": [
20,
21
]
},
{
... | [
"recent",
"neural",
"models",
"for",
"relation",
"extraction",
"with",
"distant",
"supervision",
"alleviate",
"the",
"impact",
"of",
"irrelevant",
"sentences",
"in",
"a",
"bag",
"by",
"learning",
"importance",
"weights",
"for",
"the",
"sentences",
".",
"efforts",
... |
ACL | Neural Data-to-Text Generation via Jointly Learning the Segmentation and Correspondence | The neural attention model has achieved great success in data-to-text generation tasks. Though usually excelling at producing fluent text, it suffers from the problem of information missing, repetition and “hallucination”. Due to the black-box nature of the neural attention architecture, avoiding these problems in a sy... | b950bb0fe11a8b64807dc72b86af5128 | 2,020 | [
"the neural attention model has achieved great success in data - to - text generation tasks .",
"though usually excelling at producing fluent text , it suffers from the problem of information missing , repetition and “ hallucination ” .",
"due to the black - box nature of the neural attention architecture , avo... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural attention model",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"neural",
"attention",
"model"
],
"offsets": [
1,
2,
3
... | [
"the",
"neural",
"attention",
"model",
"has",
"achieved",
"great",
"success",
"in",
"data",
"-",
"to",
"-",
"text",
"generation",
"tasks",
".",
"though",
"usually",
"excelling",
"at",
"producing",
"fluent",
"text",
",",
"it",
"suffers",
"from",
"the",
"probl... |
ACL | Multi-hop Graph Convolutional Network with High-order Chebyshev Approximation for Text Reasoning | Graph convolutional network (GCN) has become popular in various natural language processing (NLP) tasks with its superiority in long-term and non-consecutive word interactions. However, existing single-hop graph reasoning in GCN may miss some important non-consecutive dependencies. In this study, we define the spectral... | 2a94f2af191790f4ff5d75c6d5fb7bd8 | 2,021 | [
"graph convolutional network ( gcn ) has become popular in various natural language processing ( nlp ) tasks with its superiority in long - term and non - consecutive word interactions .",
"however , existing single - hop graph reasoning in gcn may miss some important non - consecutive dependencies .",
"in this... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "graph convolutional network",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"graph",
"convolutional",
"network"
],
"offsets": [
0,
1,
... | [
"graph",
"convolutional",
"network",
"(",
"gcn",
")",
"has",
"become",
"popular",
"in",
"various",
"natural",
"language",
"processing",
"(",
"nlp",
")",
"tasks",
"with",
"its",
"superiority",
"in",
"long",
"-",
"term",
"and",
"non",
"-",
"consecutive",
"word... |
ACL | Improving Speech Translation by Understanding and Learning from the Auxiliary Text Translation Task | Pretraining and multitask learning are widely used to improve the speech translation performance. In this study, we are interested in training a speech translation model along with an auxiliary text translation task. We conduct a detailed analysis to understand the impact of the auxiliary task on the primary task withi... | 089cf75c65f65345963cec7c7940fbf7 | 2,021 | [
"pretraining and multitask learning are widely used to improve the speech translation performance .",
"in this study , we are interested in training a speech translation model along with an auxiliary text translation task .",
"we conduct a detailed analysis to understand the impact of the auxiliary task on the ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "pretraining learning",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"pretraining",
"learning"
],
"offsets": [
0,
3
]
},
{
... | [
"pretraining",
"and",
"multitask",
"learning",
"are",
"widely",
"used",
"to",
"improve",
"the",
"speech",
"translation",
"performance",
".",
"in",
"this",
"study",
",",
"we",
"are",
"interested",
"in",
"training",
"a",
"speech",
"translation",
"model",
"along",
... |
ACL | Gated Convolutional Bidirectional Attention-based Model for Off-topic Spoken Response Detection | Off-topic spoken response detection, the task aiming at predicting whether a response is off-topic for the corresponding prompt, is important for an automated speaking assessment system. In many real-world educational applications, off-topic spoken response detectors are required to achieve high recall for off-topic re... | d4722a61c362cc1add7c108e5a57f7c0 | 2,020 | [
"off - topic spoken response detection , the task aiming at predicting whether a response is off - topic for the corresponding prompt , is important for an automated speaking assessment system .",
"in many real - world educational applications , off - topic spoken response detectors are required to achieve high r... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "off - topic spoken response detection",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"off",
"-",
"topic",
"spoken",
"response",
"detection"
... | [
"off",
"-",
"topic",
"spoken",
"response",
"detection",
",",
"the",
"task",
"aiming",
"at",
"predicting",
"whether",
"a",
"response",
"is",
"off",
"-",
"topic",
"for",
"the",
"corresponding",
"prompt",
",",
"is",
"important",
"for",
"an",
"automated",
"speak... |
ACL | Exploring Unexplored Generalization Challenges for Cross-Database Semantic Parsing | We study the task of cross-database semantic parsing (XSP), where a system that maps natural language utterances to executable SQL queries is evaluated on databases unseen during training. Recently, several datasets, including Spider, were proposed to support development of XSP systems. We propose a challenging evaluat... | 06707dd359b5b254e88dfbe7519e027d | 2,020 | [
"we study the task of cross - database semantic parsing ( xsp ) , where a system that maps natural language utterances to executable sql queries is evaluated on databases unseen during training .",
"recently , several datasets , including spider , were proposed to support development of xsp systems .",
"we prop... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "task of cross - database semantic parsing",
... | [
"we",
"study",
"the",
"task",
"of",
"cross",
"-",
"database",
"semantic",
"parsing",
"(",
"xsp",
")",
",",
"where",
"a",
"system",
"that",
"maps",
"natural",
"language",
"utterances",
"to",
"executable",
"sql",
"queries",
"is",
"evaluated",
"on",
"databases"... |
ACL | Length Control in Abstractive Summarization by Pretraining Information Selection | Previous length-controllable summarization models mostly control lengths at the decoding stage, whereas the encoding or the selection of information from the source document is not sensitive to the designed length. They also tend to generate summaries as long as those in the training data. In this paper, we propose a l... | 0bc8e3c5200fc19224ac00b6e24d4490 | 2,022 | [
"previous length - controllable summarization models mostly control lengths at the decoding stage , whereas the encoding or the selection of information from the source document is not sensitive to the designed length .",
"they also tend to generate summaries as long as those in the training data .",
"in this p... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
53
]
},
{
"text": "length - aware attention mechanism",
"nugg... | [
"previous",
"length",
"-",
"controllable",
"summarization",
"models",
"mostly",
"control",
"lengths",
"at",
"the",
"decoding",
"stage",
",",
"whereas",
"the",
"encoding",
"or",
"the",
"selection",
"of",
"information",
"from",
"the",
"source",
"document",
"is",
"... |
ACL | Down and Across: Introducing Crossword-Solving as a New NLP Benchmark | Solving crossword puzzles requires diverse reasoning capabilities, access to a vast amount of knowledge about language and the world, and the ability to satisfy the constraints imposed by the structure of the puzzle. In this work, we introduce solving crossword puzzles as a new natural language understanding task. We r... | 294ff5e13baae38fdaa7d6fb921f3d44 | 2,022 | [
"solving crossword puzzles requires diverse reasoning capabilities , access to a vast amount of knowledge about language and the world , and the ability to satisfy the constraints imposed by the structure of the puzzle .",
"in this work , we introduce solving crossword puzzles as a new natural language understand... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "crossword puzzles",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"crossword",
"puzzles"
],
"offsets": [
1,
2
]
}
],
"trigger": ... | [
"solving",
"crossword",
"puzzles",
"requires",
"diverse",
"reasoning",
"capabilities",
",",
"access",
"to",
"a",
"vast",
"amount",
"of",
"knowledge",
"about",
"language",
"and",
"the",
"world",
",",
"and",
"the",
"ability",
"to",
"satisfy",
"the",
"constraints",... |
ACL | Towards Lossless Encoding of Sentences | A lot of work has been done in the field of image compression via machine learning, but not much attention has been given to the compression of natural language. Compressing text into lossless representations while making features easily retrievable is not a trivial task, yet has huge benefits. Most methods designed to... | 694b90cd91377afef50a7fac7427308b | 2,019 | [
"a lot of work has been done in the field of image compression via machine learning , but not much attention has been given to the compression of natural language .",
"compressing text into lossless representations while making features easily retrievable is not a trivial task , yet has huge benefits .",
"most ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "compression of natural language",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"compression",
"of",
"natural",
"language"
],
"offsets": [
2... | [
"a",
"lot",
"of",
"work",
"has",
"been",
"done",
"in",
"the",
"field",
"of",
"image",
"compression",
"via",
"machine",
"learning",
",",
"but",
"not",
"much",
"attention",
"has",
"been",
"given",
"to",
"the",
"compression",
"of",
"natural",
"language",
".",... |
ACL | Deep Neural Model Inspection and Comparison via Functional Neuron Pathways | We introduce a general method for the interpretation and comparison of neural models. The method is used to factor a complex neural model into its functional components, which are comprised of sets of co-firing neurons that cut across layers of the network architecture, and which we call neural pathways. The function o... | f7e76f9d314d74bf4d676acf990f2d1b | 2,019 | [
"we introduce a general method for the interpretation and comparison of neural models .",
"the method is used to factor a complex neural model into its functional components , which are comprised of sets of co - firing neurons that cut across layers of the network architecture , and which we call neural pathways ... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "general method",
"nugget_type": "APP",
... | [
"we",
"introduce",
"a",
"general",
"method",
"for",
"the",
"interpretation",
"and",
"comparison",
"of",
"neural",
"models",
".",
"the",
"method",
"is",
"used",
"to",
"factor",
"a",
"complex",
"neural",
"model",
"into",
"its",
"functional",
"components",
",",
... |
ACL | Structured Pruning Learns Compact and Accurate Models | The growing size of neural language models has led to increased attention in model compression. The two predominant approaches are pruning, which gradually removes weights from a pre-trained model, and distillation, which trains a smaller compact model to match a larger one. Pruning methods can significantly reduce the... | 2cfb33c0cad97dfff80640bf93ecbdf0 | 2,022 | [
"the growing size of neural language models has led to increased attention in model compression .",
"the two predominant approaches are pruning , which gradually removes weights from a pre - trained model , and distillation , which trains a smaller compact model to match a larger one .",
"pruning methods can si... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "growing size of neural language models",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"growing",
"size",
"of",
"neural",
"language",
"models"
... | [
"the",
"growing",
"size",
"of",
"neural",
"language",
"models",
"has",
"led",
"to",
"increased",
"attention",
"in",
"model",
"compression",
".",
"the",
"two",
"predominant",
"approaches",
"are",
"pruning",
",",
"which",
"gradually",
"removes",
"weights",
"from",... |
ACL | Diverse and Informative Dialogue Generation with Context-Specific Commonsense Knowledge Awareness | Generative dialogue systems tend to produce generic responses, which often leads to boring conversations. For alleviating this issue, Recent studies proposed to retrieve and introduce knowledge facts from knowledge graphs. While this paradigm works to a certain extent, it usually retrieves knowledge facts only based on... | 1596cb5c3749cc868cc42d014806ac75 | 2,020 | [
"generative dialogue systems tend to produce generic responses , which often leads to boring conversations .",
"for alleviating this issue , recent studies proposed to retrieve and introduce knowledge facts from knowledge graphs .",
"while this paradigm works to a certain extent , it usually retrieves knowledge... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "generative dialogue systems",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"generative",
"dialogue",
"systems"
],
"offsets": [
0,
1,
... | [
"generative",
"dialogue",
"systems",
"tend",
"to",
"produce",
"generic",
"responses",
",",
"which",
"often",
"leads",
"to",
"boring",
"conversations",
".",
"for",
"alleviating",
"this",
"issue",
",",
"recent",
"studies",
"proposed",
"to",
"retrieve",
"and",
"int... |
ACL | A Graph Auto-encoder Model of Derivational Morphology | There has been little work on modeling the morphological well-formedness (MWF) of derivatives, a problem judged to be complex and difficult in linguistics. We present a graph auto-encoder that learns embeddings capturing information about the compatibility of affixes and stems in derivation. The auto-encoder models MWF... | 19e149ec7a1baf3a6912a376e72326fb | 2,020 | [
"there has been little work on modeling the morphological well - formedness ( mwf ) of derivatives , a problem judged to be complex and difficult in linguistics .",
"we present a graph auto - encoder that learns embeddings capturing information about the compatibility of affixes and stems in derivation .",
"the... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "morphological well - formedness",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"morphological",
"well",
"-",
"formedness"
],
"offsets": [
8... | [
"there",
"has",
"been",
"little",
"work",
"on",
"modeling",
"the",
"morphological",
"well",
"-",
"formedness",
"(",
"mwf",
")",
"of",
"derivatives",
",",
"a",
"problem",
"judged",
"to",
"be",
"complex",
"and",
"difficult",
"in",
"linguistics",
".",
"we",
"... |
ACL | The Dangers of Underclaiming: Reasons for Caution When Reporting How NLP Systems Fail | Researchers in NLP often frame and discuss research results in ways that serve to deemphasize the field’s successes, often in response to the field’s widespread hype. Though well-meaning, this has yielded many misleading or false claims about the limits of our best technology. This is a problem, and it may be more seri... | 0347905d347a47d17db74ce4ad69642b | 2,022 | [
"researchers in nlp often frame and discuss research results in ways that serve to deemphasize the field ’ s successes , often in response to the field ’ s widespread hype .",
"though well - meaning , this has yielded many misleading or false claims about the limits of our best technology .",
"this is a problem... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "many misleading or false claims",
"nugget_type": "WEA",
"argument_type": "Fault",
"tokens": [
"many",
"misleading",
"or",
"false",
"claims"
],
"offsets": [... | [
"researchers",
"in",
"nlp",
"often",
"frame",
"and",
"discuss",
"research",
"results",
"in",
"ways",
"that",
"serve",
"to",
"deemphasize",
"the",
"field",
"’",
"s",
"successes",
",",
"often",
"in",
"response",
"to",
"the",
"field",
"’",
"s",
"widespread",
... |
ACL | Figurative Usage Detection of Symptom Words to Improve Personal Health Mention Detection | Personal health mention detection deals with predicting whether or not a given sentence is a report of a health condition. Past work mentions errors in this prediction when symptom words, i.e., names of symptoms of interest, are used in a figurative sense. Therefore, we combine a state-of-the-art figurative usage detec... | cf3c123422c47872c6611d89420e7e09 | 2,019 | [
"personal health mention detection deals with predicting whether or not a given sentence is a report of a health condition .",
"past work mentions errors in this prediction when symptom words , i . e . , names of symptoms of interest , are used in a figurative sense .",
"therefore , we combine a state - of - th... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "personal health mention detection",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"personal",
"health",
"mention",
"detection"
],
"offsets": [
... | [
"personal",
"health",
"mention",
"detection",
"deals",
"with",
"predicting",
"whether",
"or",
"not",
"a",
"given",
"sentence",
"is",
"a",
"report",
"of",
"a",
"health",
"condition",
".",
"past",
"work",
"mentions",
"errors",
"in",
"this",
"prediction",
"when",... |
ACL | TGEA: An Error-Annotated Dataset and Benchmark Tasks for TextGeneration from Pretrained Language Models | In order to deeply understand the capability of pretrained language models in text generation and conduct a diagnostic evaluation, we propose TGEA, an error-annotated dataset with multiple benchmark tasks for text generation from pretrained language models (PLMs). We use carefully selected prompt words to guide GPT-2 t... | 66847814047043d9675d4a0414a01cb4 | 2,021 | [
"in order to deeply understand the capability of pretrained language models in text generation and conduct a diagnostic evaluation , we propose tgea , an error - annotated dataset with multiple benchmark tasks for text generation from pretrained language models ( plms ) .",
"we use carefully selected prompt words... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
20
]
},
{
"text": "tgea",
"nugget_type": "DST",
"argu... | [
"in",
"order",
"to",
"deeply",
"understand",
"the",
"capability",
"of",
"pretrained",
"language",
"models",
"in",
"text",
"generation",
"and",
"conduct",
"a",
"diagnostic",
"evaluation",
",",
"we",
"propose",
"tgea",
",",
"an",
"error",
"-",
"annotated",
"data... |
ACL | Continual Learning for Task-oriented Dialogue System with Iterative Network Pruning, Expanding and Masking | This ability to learn consecutive tasks without forgetting how to perform previously trained problems is essential for developing an online dialogue system. This paper proposes an effective continual learning method for the task-oriented dialogue system with iterative network pruning, expanding, and masking (TPEM), whi... | fdc971b406c0ca7dd3c69c722e1a6b1c | 2,021 | [
"this ability to learn consecutive tasks without forgetting how to perform previously trained problems is essential for developing an online dialogue system .",
"this paper proposes an effective continual learning method for the task - oriented dialogue system with iterative network pruning , expanding , and mask... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "online dialogue system",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"online",
"dialogue",
"system"
],
"offsets": [
19,
20,
21
... | [
"this",
"ability",
"to",
"learn",
"consecutive",
"tasks",
"without",
"forgetting",
"how",
"to",
"perform",
"previously",
"trained",
"problems",
"is",
"essential",
"for",
"developing",
"an",
"online",
"dialogue",
"system",
".",
"this",
"paper",
"proposes",
"an",
... |
ACL | SPECTER: Document-level Representation Learning using Citation-informed Transformers | Representation learning is a critical ingredient for natural language processing systems. Recent Transformer language models like BERT learn powerful textual representations, but these models are targeted towards token- and sentence-level training objectives and do not leverage information on inter-document relatedness... | 0849dd5d5f3a3cdeaa850041db6628a4 | 2,020 | [
"representation learning is a critical ingredient for natural language processing systems .",
"recent transformer language models like bert learn powerful textual representations , but these models are targeted towards token - and sentence - level training objectives and do not leverage information on inter - doc... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "representation learning",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"representation",
"learning"
],
"offsets": [
0,
1
]
}
],
... | [
"representation",
"learning",
"is",
"a",
"critical",
"ingredient",
"for",
"natural",
"language",
"processing",
"systems",
".",
"recent",
"transformer",
"language",
"models",
"like",
"bert",
"learn",
"powerful",
"textual",
"representations",
",",
"but",
"these",
"mod... |
ACL | Word and Document Embedding with vMF-Mixture Priors on Context Word Vectors | Word embedding models typically learn two types of vectors: target word vectors and context word vectors. These vectors are normally learned such that they are predictive of some word co-occurrence statistic, but they are otherwise unconstrained. However, the words from a given language can be organized in various natu... | de0f567926262968a654e14397f07e77 | 2,019 | [
"word embedding models typically learn two types of vectors : target word vectors and context word vectors .",
"these vectors are normally learned such that they are predictive of some word co - occurrence statistic , but they are otherwise unconstrained .",
"however , the words from a given language can be org... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "word embedding models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"word",
"embedding",
"models"
],
"offsets": [
0,
1,
2
... | [
"word",
"embedding",
"models",
"typically",
"learn",
"two",
"types",
"of",
"vectors",
":",
"target",
"word",
"vectors",
"and",
"context",
"word",
"vectors",
".",
"these",
"vectors",
"are",
"normally",
"learned",
"such",
"that",
"they",
"are",
"predictive",
"of... |
ACL | A Comprehensive Analysis of Preprocessing for Word Representation Learning in Affective Tasks | Affective tasks such as sentiment analysis, emotion classification, and sarcasm detection have been popular in recent years due to an abundance of user-generated data, accurate computational linguistic models, and a broad range of relevant applications in various domains. At the same time, many studies have highlighted... | 444473e3666caa574e4b6ad26c110de4 | 2,020 | [
"affective tasks such as sentiment analysis , emotion classification , and sarcasm detection have been popular in recent years due to an abundance of user - generated data , accurate computational linguistic models , and a broad range of relevant applications in various domains .",
"at the same time , many studie... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
105
]
},
{
"text": "comprehensive analysis of the role of preproces... | [
"affective",
"tasks",
"such",
"as",
"sentiment",
"analysis",
",",
"emotion",
"classification",
",",
"and",
"sarcasm",
"detection",
"have",
"been",
"popular",
"in",
"recent",
"years",
"due",
"to",
"an",
"abundance",
"of",
"user",
"-",
"generated",
"data",
",",
... |
ACL | Accelerating Text Communication via Abbreviated Sentence Input | Typing every character in a text message may require more time or effort than strictly necessary. Skipping spaces or other characters may be able to speed input and reduce a user’s physical input effort. This can be particularly important for people with motor impairments. In a large crowdsourced study, we found worker... | c52b1a6755186d1a863b7550833f1ef3 | 2,021 | [
"typing every character in a text message may require more time or effort than strictly necessary .",
"skipping spaces or other characters may be able to speed input and reduce a user ’ s physical input effort .",
"this can be particularly important for people with motor impairments .",
"in a large crowdsourc... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
68
]
},
{
"text": "expanding",
"nugget_type": "E-PUR",
... | [
"typing",
"every",
"character",
"in",
"a",
"text",
"message",
"may",
"require",
"more",
"time",
"or",
"effort",
"than",
"strictly",
"necessary",
".",
"skipping",
"spaces",
"or",
"other",
"characters",
"may",
"be",
"able",
"to",
"speed",
"input",
"and",
"redu... |
ACL | Improving Non-autoregressive Neural Machine Translation with Monolingual Data | Non-autoregressive (NAR) neural machine translation is usually done via knowledge distillation from an autoregressive (AR) model. Under this framework, we leverage large monolingual corpora to improve the NAR model’s performance, with the goal of transferring the AR model’s generalization ability while preventing overf... | ff9751b193094e680dd549efb7c83e03 | 2,020 | [
"non - autoregressive ( nar ) neural machine translation is usually done via knowledge distillation from an autoregressive ( ar ) model .",
"under this framework , we leverage large monolingual corpora to improve the nar model ’ s performance , with the goal of transferring the ar model ’ s generalization ability... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "non - autoregressive neural machine translation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"non",
"-",
"autoregressive",
"neural",
"machine",
... | [
"non",
"-",
"autoregressive",
"(",
"nar",
")",
"neural",
"machine",
"translation",
"is",
"usually",
"done",
"via",
"knowledge",
"distillation",
"from",
"an",
"autoregressive",
"(",
"ar",
")",
"model",
".",
"under",
"this",
"framework",
",",
"we",
"leverage",
... |
ACL | Comparison of Diverse Decoding Methods from Conditional Language Models | While conditional language models have greatly improved in their ability to output high quality natural language, many NLP applications benefit from being able to generate a diverse set of candidate sequences. Diverse decoding strategies aim to, within a given-sized candidate list, cover as much of the space of high-qu... | 73a3456646cd4e07e1a807fa29e0506a | 2,019 | [
"while conditional language models have greatly improved in their ability to output high quality natural language , many nlp applications benefit from being able to generate a diverse set of candidate sequences .",
"diverse decoding strategies aim to , within a given - sized candidate list , cover as much of the ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "conditional language models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"conditional",
"language",
"models"
],
"offsets": [
1,
2,
... | [
"while",
"conditional",
"language",
"models",
"have",
"greatly",
"improved",
"in",
"their",
"ability",
"to",
"output",
"high",
"quality",
"natural",
"language",
",",
"many",
"nlp",
"applications",
"benefit",
"from",
"being",
"able",
"to",
"generate",
"a",
"diver... |
ACL | Compositional Questions Do Not Necessitate Multi-hop Reasoning | Multi-hop reading comprehension (RC) questions are challenging because they require reading and reasoning over multiple paragraphs. We argue that it can be difficult to construct large multi-hop RC datasets. For example, even highly compositional questions can be answered with a single hop if they target specific entit... | de6db4da95c6443c89fc15c05eed5109 | 2,019 | [
"multi - hop reading comprehension ( rc ) questions are challenging because they require reading and reasoning over multiple paragraphs .",
"we argue that it can be difficult to construct large multi - hop rc datasets .",
"for example , even highly compositional questions can be answered with a single hop if th... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "multi - hop reading comprehension questions",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"multi",
"-",
"hop",
"reading",
"comprehension",
"ques... | [
"multi",
"-",
"hop",
"reading",
"comprehension",
"(",
"rc",
")",
"questions",
"are",
"challenging",
"because",
"they",
"require",
"reading",
"and",
"reasoning",
"over",
"multiple",
"paragraphs",
".",
"we",
"argue",
"that",
"it",
"can",
"be",
"difficult",
"to",... |
ACL | Analyzing Generalization of Vision and Language Navigation to Unseen Outdoor Areas | Vision and language navigation (VLN) is a challenging visually-grounded language understanding task. Given a natural language navigation instruction, a visual agent interacts with a graph-based environment equipped with panorama images and tries to follow the described route. Most prior work has been conducted in indoo... | fcdaa2d4a1eb62b1e1428c1608adf4fc | 2,022 | [
"vision and language navigation ( vln ) is a challenging visually - grounded language understanding task .",
"given a natural language navigation instruction , a visual agent interacts with a graph - based environment equipped with panorama images and tries to follow the described route .",
"most prior work has... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "vision and language navigation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"vision",
"and",
"language",
"navigation"
],
"offsets": [
0,
... | [
"vision",
"and",
"language",
"navigation",
"(",
"vln",
")",
"is",
"a",
"challenging",
"visually",
"-",
"grounded",
"language",
"understanding",
"task",
".",
"given",
"a",
"natural",
"language",
"navigation",
"instruction",
",",
"a",
"visual",
"agent",
"interacts... |
ACL | French CrowS-Pairs: Extending a challenge dataset for measuring social bias in masked language models to a language other than English | Warning: This paper contains explicit statements of offensive stereotypes which may be upsetting.Much work on biases in natural language processing has addressed biases linked to the social and cultural experience of English speaking individuals in the United States. We seek to widen the scope of bias studies by creati... | dc1669496c0e4c05b5311418de811db4 | 2,022 | [
"warning : this paper contains explicit statements of offensive stereotypes which may be upsetting .",
"much work on biases in natural language processing has addressed biases linked to the social and cultural experience of english speaking individuals in the united states .",
"we seek to widen the scope of bia... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "biases linked to the",
"nugget_type": "WEA",
"argument_type": "Target",
"tokens": [
"biases",
"linked",
"to",
"the"
],
"offsets": [
25,
26,
... | [
"warning",
":",
"this",
"paper",
"contains",
"explicit",
"statements",
"of",
"offensive",
"stereotypes",
"which",
"may",
"be",
"upsetting",
".",
"much",
"work",
"on",
"biases",
"in",
"natural",
"language",
"processing",
"has",
"addressed",
"biases",
"linked",
"t... |
ACL | Using LSTMs to Assess the Obligatoriness of Phonological Distinctive Features for Phonotactic Learning | To ascertain the importance of phonetic information in the form of phonological distinctive features for the purpose of segment-level phonotactic acquisition, we compare the performance of two recurrent neural network models of phonotactic learning: one that has access to distinctive features at the start of the learni... | 2dbcb01828f1403f60c61c085795beae | 2,019 | [
"to ascertain the importance of phonetic information in the form of phonological distinctive features for the purpose of segment - level phonotactic acquisition , we compare the performance of two recurrent neural network models of phonotactic learning : one that has access to distinctive features at the start of t... | [
{ "event_type": "WKS", "arguments": [ { "text": "we", "nugget_type": "OG", "argument_type": "Researcher", "tokens": [ "we" ], "offsets": [ 24 ] }, { "text": "performance of two recurrent neural network mode... | [
"to", "ascertain", "the", "importance", "of", "phonetic", "information", "in", "the", "form", "of", "phonological", "distinctive", "features", "for", "the", "purpose", "of", "segment", "-", "level", "phonotactic", "acquisition", ",", "we", "compare", "the", "per... |
ACL | Unified Dual-view Cognitive Model for Interpretable Claim Verification | Recent studies constructing direct interactions between the claim and each single user response (a comment or a relevant article) to capture evidence have shown remarkable success in interpretable claim verification. Owing to different single responses convey different cognition of individual users (i.e., audiences), t... | f86c1d8726e0187c0c9efee38da89bbd | 2,021 | [
"recent studies constructing direct interactions between the claim and each single user response ( a comment or a relevant article ) to capture evidence have shown remarkable success in interpretable claim verification .",
"owing to different single responses convey different cognition of individual users ( i . e... | [
{
"event_type": "RWS",
"arguments": [
{
"text": "direct interactions",
"nugget_type": "FEA",
"argument_type": "TriedComponent",
"tokens": [
"direct",
"interactions"
],
"offsets": [
3,
4
]
},
{
... | [
"recent",
"studies",
"constructing",
"direct",
"interactions",
"between",
"the",
"claim",
"and",
"each",
"single",
"user",
"response",
"(",
"a",
"comment",
"or",
"a",
"relevant",
"article",
")",
"to",
"capture",
"evidence",
"have",
"shown",
"remarkable",
"succes... |
ACL | Parameter-efficient Multi-task Fine-tuning for Transformers via Shared Hypernetworks | State-of-the-art parameter-efficient fine-tuning methods rely on introducing adapter modules between the layers of a pretrained language model. However, such modules are trained separately for each task and thus do not enable sharing information across tasks. In this paper, we show that we can learn adapter parameters ... | aaf4c73ed6c6fa6837417663608fef9c | 2,021 | [
"state - of - the - art parameter - efficient fine - tuning methods rely on introducing adapter modules between the layers of a pretrained language model .",
"however , such modules are trained separately for each task and thus do not enable sharing information across tasks .",
"in this paper , we show that we ... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "adapter modules",
"nugget_type": "MOD",
"argument_type": "Concern",
"tokens": [
"adapter",
"modules"
],
"offsets": [
17,
18
]
},
{
"text"... | [
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"parameter",
"-",
"efficient",
"fine",
"-",
"tuning",
"methods",
"rely",
"on",
"introducing",
"adapter",
"modules",
"between",
"the",
"layers",
"of",
"a",
"pretrained",
"language",
"model",
".",
"however",
",",
"... |
ACL | Hierarchy-Aware Global Model for Hierarchical Text Classification | Hierarchical text classification is an essential yet challenging subtask of multi-label text classification with a taxonomic hierarchy. Existing methods have difficulties in modeling the hierarchical label structure in a global view. Furthermore, they cannot make full use of the mutual interactions between the text fea... | 02983ccbdf1620da897e06c9de29bc04 | 2,020 | [
"hierarchical text classification is an essential yet challenging subtask of multi - label text classification with a taxonomic hierarchy .",
"existing methods have difficulties in modeling the hierarchical label structure in a global view .",
"furthermore , they cannot make full use of the mutual interactions ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "hierarchical text classification",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"hierarchical",
"text",
"classification"
],
"offsets": [
0,
... | [
"hierarchical",
"text",
"classification",
"is",
"an",
"essential",
"yet",
"challenging",
"subtask",
"of",
"multi",
"-",
"label",
"text",
"classification",
"with",
"a",
"taxonomic",
"hierarchy",
".",
"existing",
"methods",
"have",
"difficulties",
"in",
"modeling",
... |
ACL | Neural Response Generation with Meta-words | We present open domain dialogue generation with meta-words. A meta-word is a structured record that describes attributes of a response, and thus allows us to explicitly model the one-to-many relationship within open domain dialogues and perform response generation in an explainable and controllable manner. To incorpora... | 51d72b3b36d53bc7d244e1d24e4e8e5d | 2,019 | [
"we present open domain dialogue generation with meta - words .",
"a meta - word is a structured record that describes attributes of a response , and thus allows us to explicitly model the one - to - many relationship within open domain dialogues and perform response generation in an explainable and controllable ... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "open domain dialogue generation with meta - words",... | [
"we",
"present",
"open",
"domain",
"dialogue",
"generation",
"with",
"meta",
"-",
"words",
".",
"a",
"meta",
"-",
"word",
"is",
"a",
"structured",
"record",
"that",
"describes",
"attributes",
"of",
"a",
"response",
",",
"and",
"thus",
"allows",
"us",
"to",... |
ACL | Rare Tokens Degenerate All Tokens: Improving Neural Text Generation via Adaptive Gradient Gating for Rare Token Embeddings | Recent studies have determined that the learned token embeddings of large-scale neural language models are degenerated to be anisotropic with a narrow-cone shape. This phenomenon, called the representation degeneration problem, facilitates an increase in the overall similarity between token embeddings that negatively a... | de3038296aa555713ccf61e1cd83853f | 2,022 | [
"recent studies have determined that the learned token embeddings of large - scale neural language models are degenerated to be anisotropic with a narrow - cone shape .",
"this phenomenon , called the representation degeneration problem , facilitates an increase in the overall similarity between token embeddings ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "large - scale neural language models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"large",
"-",
"scale",
"neural",
"language",
"models"
... | [
"recent",
"studies",
"have",
"determined",
"that",
"the",
"learned",
"token",
"embeddings",
"of",
"large",
"-",
"scale",
"neural",
"language",
"models",
"are",
"degenerated",
"to",
"be",
"anisotropic",
"with",
"a",
"narrow",
"-",
"cone",
"shape",
".",
"this",
... |
ACL | WikiSum: Coherent Summarization Dataset for Efficient Human-Evaluation | Recent works made significant advances on summarization tasks, facilitated by summarization datasets. Several existing datasets have the form of coherent-paragraph summaries. However, these datasets were curated from academic documents that were written for experts, thus making the essential step of assessing the summa... | eb6b30866dc2438d4ffe5eb91891dab9 | 2,021 | [
"recent works made significant advances on summarization tasks , facilitated by summarization datasets .",
"several existing datasets have the form of coherent - paragraph summaries .",
"however , these datasets were curated from academic documents that were written for experts , thus making the essential step ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "summarization datasets",
"nugget_type": "DST",
"argument_type": "Target",
"tokens": [
"summarization",
"datasets"
],
"offsets": [
11,
12
]
}
],
... | [
"recent",
"works",
"made",
"significant",
"advances",
"on",
"summarization",
"tasks",
",",
"facilitated",
"by",
"summarization",
"datasets",
".",
"several",
"existing",
"datasets",
"have",
"the",
"form",
"of",
"coherent",
"-",
"paragraph",
"summaries",
".",
"howev... |
ACL | Prompt-free and Efficient Few-shot Learning with Language Models | Current methods for few-shot fine-tuning of pretrained masked language models (PLMs) require carefully engineered prompts and verbalizers for each new task to convert examples into a cloze-format that the PLM can score. In this work, we propose Perfect, a simple and efficient method for few-shot fine-tuning of PLMs wit... | 848a24e2df519d85b6e8aaf1c26fa5d7 | 2,022 | [
"current methods for few - shot fine - tuning of pretrained masked language models ( plms ) require carefully engineered prompts and verbalizers for each new task to convert examples into a cloze - format that the plm can score .",
"in this work , we propose perfect , a simple and efficient method for few - shot ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "few - shot fine - tuning of pretrained masked language models",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"few",
"-",
"shot",
"fine",
"-",
"tu... | [
"current",
"methods",
"for",
"few",
"-",
"shot",
"fine",
"-",
"tuning",
"of",
"pretrained",
"masked",
"language",
"models",
"(",
"plms",
")",
"require",
"carefully",
"engineered",
"prompts",
"and",
"verbalizers",
"for",
"each",
"new",
"task",
"to",
"convert",
... |
ACL | Fully-Semantic Parsing and Generation: the BabelNet Meaning Representation | A language-independent representation of meaning is one of the most coveted dreams in Natural Language Understanding. With this goal in mind, several formalisms have been proposed as frameworks for meaning representation in Semantic Parsing. And yet, the dependencies these formalisms share with respect to language-spec... | db8b7fef74ca9633556c878da3fdcb11 | 2,022 | [
"a language - independent representation of meaning is one of the most coveted dreams in natural language understanding .",
"with this goal in mind , several formalisms have been proposed as frameworks for meaning representation in semantic parsing .",
"and yet , the dependencies these formalisms share with res... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "language - independent representation of meaning",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"language",
"-",
"independent",
"representation",
"of",
... | [
"a",
"language",
"-",
"independent",
"representation",
"of",
"meaning",
"is",
"one",
"of",
"the",
"most",
"coveted",
"dreams",
"in",
"natural",
"language",
"understanding",
".",
"with",
"this",
"goal",
"in",
"mind",
",",
"several",
"formalisms",
"have",
"been"... |
ACL | TIMERS: Document-level Temporal Relation Extraction | We present TIMERS - a TIME, Rhetorical and Syntactic-aware model for document-level temporal relation classification in the English language. Our proposed method leverages rhetorical discourse features and temporal arguments from semantic role labels, in addition to traditional local syntactic features, trained through... | 903a749d1562af295c139b106780799a | 2,021 | [
"we present timers - a time , rhetorical and syntactic - aware model for document - level temporal relation classification in the english language .",
"our proposed method leverages rhetorical discourse features and temporal arguments from semantic role labels , in addition to traditional local syntactic features... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "timers - a time , rhetorical and syntactic - aware ... | [
"we",
"present",
"timers",
"-",
"a",
"time",
",",
"rhetorical",
"and",
"syntactic",
"-",
"aware",
"model",
"for",
"document",
"-",
"level",
"temporal",
"relation",
"classification",
"in",
"the",
"english",
"language",
".",
"our",
"proposed",
"method",
"leverag... |
ACL | Reflective Decoding: Beyond Unidirectional Generation with Off-the-Shelf Language Models | Publicly available, large pretrained Language Models (LMs) generate text with remarkable quality, but only sequentially from left to right. As a result, they are not immediately applicable to generation tasks that break the unidirectional assumption, such as paraphrasing or text-infilling, necessitating task-specific s... | 6030595fc857f6a1df2ac16a3b5b5751 | 2,021 | [
"publicly available , large pretrained language models ( lms ) generate text with remarkable quality , but only sequentially from left to right .",
"as a result , they are not immediately applicable to generation tasks that break the unidirectional assumption , such as paraphrasing or text - infilling , necessita... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "large pretrained language models",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"large",
"pretrained",
"language",
"models"
],
"offsets": [
... | [
"publicly",
"available",
",",
"large",
"pretrained",
"language",
"models",
"(",
"lms",
")",
"generate",
"text",
"with",
"remarkable",
"quality",
",",
"but",
"only",
"sequentially",
"from",
"left",
"to",
"right",
".",
"as",
"a",
"result",
",",
"they",
"are",
... |
ACL | Automatic Error Analysis for Document-level Information Extraction | Document-level information extraction (IE) tasks have recently begun to be revisited in earnest using the end-to-end neural network techniques that have been successful on their sentence-level IE counterparts. Evaluation of the approaches, however, has been limited in a number of dimensions. In particular, the precisio... | 0ea3876466cca8a30a7763bafcce279b | 2,022 | [
"document - level information extraction ( ie ) tasks have recently begun to be revisited in earnest using the end - to - end neural network techniques that have been successful on their sentence - level ie counterparts .",
"evaluation of the approaches , however , has been limited in a number of dimensions .",
... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "document - level information extraction ( ie ) tasks",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"document",
"-",
"level",
"information",
"extraction",
... | [
"document",
"-",
"level",
"information",
"extraction",
"(",
"ie",
")",
"tasks",
"have",
"recently",
"begun",
"to",
"be",
"revisited",
"in",
"earnest",
"using",
"the",
"end",
"-",
"to",
"-",
"end",
"neural",
"network",
"techniques",
"that",
"have",
"been",
... |
ACL | TWAG: A Topic-Guided Wikipedia Abstract Generator | Wikipedia abstract generation aims to distill a Wikipedia abstract from web sources and has met significant success by adopting multi-document summarization techniques. However, previous works generally view the abstract as plain text, ignoring the fact that it is a description of a certain entity and can be decomposed... | 80b51ddb28e7f17f18cffd7120593f03 | 2,021 | [
"wikipedia abstract generation aims to distill a wikipedia abstract from web sources and has met significant success by adopting multi - document summarization techniques .",
"however , previous works generally view the abstract as plain text , ignoring the fact that it is a description of a certain entity and ca... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "wikipedia abstract generation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"wikipedia",
"abstract",
"generation"
],
"offsets": [
0,
1,
... | [
"wikipedia",
"abstract",
"generation",
"aims",
"to",
"distill",
"a",
"wikipedia",
"abstract",
"from",
"web",
"sources",
"and",
"has",
"met",
"significant",
"success",
"by",
"adopting",
"multi",
"-",
"document",
"summarization",
"techniques",
".",
"however",
",",
... |
ACL | Jointly Masked Sequence-to-Sequence Model for Non-Autoregressive Neural Machine Translation | The masked language model has received remarkable attention due to its effectiveness on various natural language processing tasks. However, few works have adopted this technique in the sequence-to-sequence models. In this work, we introduce a jointly masked sequence-to-sequence model and explore its application on non-... | e7a8b675a9376885e3ff05d786fe8e47 | 2,020 | [
"the masked language model has received remarkable attention due to its effectiveness on various natural language processing tasks .",
"however , few works have adopted this technique in the sequence - to - sequence models .",
"in this work , we introduce a jointly masked sequence - to - sequence model and expl... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "natural language processing tasks",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"natural",
"language",
"processing",
"tasks"
],
"offsets": [
... | [
"the",
"masked",
"language",
"model",
"has",
"received",
"remarkable",
"attention",
"due",
"to",
"its",
"effectiveness",
"on",
"various",
"natural",
"language",
"processing",
"tasks",
".",
"however",
",",
"few",
"works",
"have",
"adopted",
"this",
"technique",
"... |
ACL | Video Paragraph Captioning as a Text Summarization Task | Video paragraph captioning aims to generate a set of coherent sentences to describe a video that contains several events. Most previous methods simplify this task by using ground-truth event segments. In this work, we propose a novel framework by taking this task as a text summarization task. We first generate lots of ... | 1848f548473ae1da69ea0e5594d7a436 | 2,021 | [
"video paragraph captioning aims to generate a set of coherent sentences to describe a video that contains several events .",
"most previous methods simplify this task by using ground - truth event segments .",
"in this work , we propose a novel framework by taking this task as a text summarization task .",
"... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "video paragraph captioning",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"video",
"paragraph",
"captioning"
],
"offsets": [
0,
1,
... | [
"video",
"paragraph",
"captioning",
"aims",
"to",
"generate",
"a",
"set",
"of",
"coherent",
"sentences",
"to",
"describe",
"a",
"video",
"that",
"contains",
"several",
"events",
".",
"most",
"previous",
"methods",
"simplify",
"this",
"task",
"by",
"using",
"gr... |
ACL | bert2BERT: Towards Reusable Pretrained Language Models | In recent years, researchers tend to pre-train ever-larger language models to explore the upper limit of deep models. However, large language model pre-training costs intensive computational resources, and most of the models are trained from scratch without reusing the existing pre-trained models, which is wasteful. In... | 0bb3f3feb9e3dbba79054fd925d9d669 | 2,022 | [
"in recent years , researchers tend to pre - train ever - larger language models to explore the upper limit of deep models .",
"however , large language model pre - training costs intensive computational resources , and most of the models are trained from scratch without reusing the existing pre - trained models ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "upper limit of deep models",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"upper",
"limit",
"of",
"deep",
"models"
],
"offsets": [
... | [
"in",
"recent",
"years",
",",
"researchers",
"tend",
"to",
"pre",
"-",
"train",
"ever",
"-",
"larger",
"language",
"models",
"to",
"explore",
"the",
"upper",
"limit",
"of",
"deep",
"models",
".",
"however",
",",
"large",
"language",
"model",
"pre",
"-",
... |
ACL | QASR: QCRI Aljazeera Speech Resource - A Large Scale Annotated Arabic Speech Corpus | We introduce the largest transcribed Arabic speech corpus, QASR, collected from the broadcast domain. This multi-dialect speech dataset contains 2,000 hours of speech sampled at 16kHz crawled from Aljazeera news channel. The dataset is released with lightly supervised transcriptions, aligned with the audio segments. Un... | eab2428da6bb5b2547d8a73904fb8a33 | 2,021 | [
"we introduce the largest transcribed arabic speech corpus , qasr , collected from the broadcast domain .",
"this multi - dialect speech dataset contains 2 , 000 hours of speech sampled at 16khz crawled from aljazeera news channel .",
"the dataset is released with lightly supervised transcriptions , aligned wit... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "largest transcribed arabic speech corpus",
... | [
"we",
"introduce",
"the",
"largest",
"transcribed",
"arabic",
"speech",
"corpus",
",",
"qasr",
",",
"collected",
"from",
"the",
"broadcast",
"domain",
".",
"this",
"multi",
"-",
"dialect",
"speech",
"dataset",
"contains",
"2",
",",
"000",
"hours",
"of",
"spe... |
ACL | Rethinking Negative Sampling for Handling Missing Entity Annotations | Negative sampling is highly effective in handling missing annotations for named entity recognition (NER). One of our contributions is an analysis on how it makes sense through introducing two insightful concepts: missampling and uncertainty. Empirical studies show low missampling rate and high uncertainty are both esse... | 4846c6a310066e552ad852a652c026ca | 2,022 | [
"negative sampling is highly effective in handling missing annotations for named entity recognition ( ner ) .",
"one of our contributions is an analysis on how it makes sense through introducing two insightful concepts : missampling and uncertainty .",
"empirical studies show low missampling rate and high uncer... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "negative sampling",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"negative",
"sampling"
],
"offsets": [
0,
1
]
}
],
"trigger": ... | [
"negative",
"sampling",
"is",
"highly",
"effective",
"in",
"handling",
"missing",
"annotations",
"for",
"named",
"entity",
"recognition",
"(",
"ner",
")",
".",
"one",
"of",
"our",
"contributions",
"is",
"an",
"analysis",
"on",
"how",
"it",
"makes",
"sense",
... |
ACL | MMCoQA: Conversational Question Answering over Text, Tables, and Images | The rapid development of conversational assistants accelerates the study on conversational question answering (QA). However, the existing conversational QA systems usually answer users’ questions with a single knowledge source, e.g., paragraphs or a knowledge graph, but overlook the important visual cues, let alone mul... | cd17d10ae1ff8597ce64d22c6af9e009 | 2,022 | [
"the rapid development of conversational assistants accelerates the study on conversational question answering ( qa ) .",
"however , the existing conversational qa systems usually answer users ’ questions with a single knowledge source , e . g . , paragraphs or a knowledge graph , but overlook the important visua... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "conversational question answering",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"conversational",
"question",
"answering"
],
"offsets": [
10,
... | [
"the",
"rapid",
"development",
"of",
"conversational",
"assistants",
"accelerates",
"the",
"study",
"on",
"conversational",
"question",
"answering",
"(",
"qa",
")",
".",
"however",
",",
"the",
"existing",
"conversational",
"qa",
"systems",
"usually",
"answer",
"us... |
ACL | Learning When to Translate for Streaming Speech | How to find proper moments to generate partial sentence translation given a streaming speech input? Existing approaches waiting-and-translating for a fixed duration often break the acoustic units in speech, since the boundaries between acoustic units in speech are not even. In this paper, we propose MoSST, a simple yet... | b5f4dc81177eb036e5b4f4eafc728227 | 2,022 | [
"how to find proper moments to generate partial sentence translation given a streaming speech input ?",
"existing approaches waiting - and - translating for a fixed duration often break the acoustic units in speech , since the boundaries between acoustic units in speech are not even .",
"in this paper , we prop... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "break",
"nugget_type": "WEA",
"argument_type": "Fault",
"tokens": [
"break"
],
"offsets": [
28
]
},
{
"text": "acoustic units in speech",
"nugget_typ... | [
"how",
"to",
"find",
"proper",
"moments",
"to",
"generate",
"partial",
"sentence",
"translation",
"given",
"a",
"streaming",
"speech",
"input",
"?",
"existing",
"approaches",
"waiting",
"-",
"and",
"-",
"translating",
"for",
"a",
"fixed",
"duration",
"often",
... |
ACL | Weight Distillation: Transferring the Knowledge in Neural Network Parameters | Knowledge distillation has been proven to be effective in model acceleration and compression. It transfers knowledge from a large neural network to a small one by using the large neural network predictions as targets of the small neural network. But this way ignores the knowledge inside the large neural networks, e.g.,... | 2ec3ab98b4f917613952603045679fe7 | 2,021 | [
"knowledge distillation has been proven to be effective in model acceleration and compression .",
"it transfers knowledge from a large neural network to a small one by using the large neural network predictions as targets of the small neural network .",
"but this way ignores the knowledge inside the large neura... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "knowledge distillation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"knowledge",
"distillation"
],
"offsets": [
0,
1
]
}
],
"... | [
"knowledge",
"distillation",
"has",
"been",
"proven",
"to",
"be",
"effective",
"in",
"model",
"acceleration",
"and",
"compression",
".",
"it",
"transfers",
"knowledge",
"from",
"a",
"large",
"neural",
"network",
"to",
"a",
"small",
"one",
"by",
"using",
"the",... |
ACL | Fast and Accurate Neural Machine Translation with Translation Memory | It is generally believed that a translation memory (TM) should be beneficial for machine translation tasks. Unfortunately, existing wisdom demonstrates the superiority of TM-based neural machine translation (NMT) only on the TM-specialized translation tasks rather than general tasks, with a non-negligible computational... | 49b86f7c3f42b485a106082853a71f59 | 2,021 | [
"it is generally believed that a translation memory ( tm ) should be beneficial for machine translation tasks .",
"unfortunately , existing wisdom demonstrates the superiority of tm - based neural machine translation ( nmt ) only on the tm - specialized translation tasks rather than general tasks , with a non - n... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "translation memory",
"nugget_type": "FEA",
"argument_type": "Target",
"tokens": [
"translation",
"memory"
],
"offsets": [
6,
7
]
}
],
"trigger"... | [
"it",
"is",
"generally",
"believed",
"that",
"a",
"translation",
"memory",
"(",
"tm",
")",
"should",
"be",
"beneficial",
"for",
"machine",
"translation",
"tasks",
".",
"unfortunately",
",",
"existing",
"wisdom",
"demonstrates",
"the",
"superiority",
"of",
"tm",
... |
ACL | Facet-Aware Evaluation for Extractive Summarization | Commonly adopted metrics for extractive summarization focus on lexical overlap at the token level. In this paper, we present a facet-aware evaluation setup for better assessment of the information coverage in extracted summaries. Specifically, we treat each sentence in the reference summary as a facet, identify the sen... | 3dbe51d93baa09aa732350bfa60b044b | 2,020 | [
"commonly adopted metrics for extractive summarization focus on lexical overlap at the token level .",
"in this paper , we present a facet - aware evaluation setup for better assessment of the information coverage in extracted summaries .",
"specifically , we treat each sentence in the reference summary as a fa... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "extractive summarization",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"extractive",
"summarization"
],
"offsets": [
4,
5
]
}
],
... | [
"commonly",
"adopted",
"metrics",
"for",
"extractive",
"summarization",
"focus",
"on",
"lexical",
"overlap",
"at",
"the",
"token",
"level",
".",
"in",
"this",
"paper",
",",
"we",
"present",
"a",
"facet",
"-",
"aware",
"evaluation",
"setup",
"for",
"better",
... |
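Across the rows above, the "events" column follows one schema: each event carries an "event_type" (ITT, PRP, or RWF in the rows shown), and each of its "arguments" pairs a surface "text" with a "nugget_type" (e.g., TAK, OG, WEA, FEA), an "argument_type" (e.g., Target, Proposer, Fault), the argument's "tokens", and integer "offsets" indexing into the tokenized "document". As a minimal sketch of how that alignment could be sanity-checked, the Python below replays one argument from the "Rethinking Negative Sampling" row; the validate_argument_alignment helper is illustrative, not part of any released loader.

```python
# A minimal sketch, not a released loader: check that an argument's
# token offsets index into the tokenized document and reconstruct both
# its stored token list and its surface text. The record is abridged
# from the "Rethinking Negative Sampling" row above.

document = [
    "negative", "sampling", "is", "highly", "effective", "in",
    "handling", "missing", "annotations", "for", "named", "entity",
    "recognition", "(", "ner", ")", ".",
]

event = {
    "event_type": "ITT",
    "arguments": [
        {
            "text": "negative sampling",
            "nugget_type": "TAK",
            "argument_type": "Target",
            "tokens": ["negative", "sampling"],
            "offsets": [0, 1],
        }
    ],
}

def validate_argument_alignment(event, document):
    """Return True if every argument's offsets match its stored tokens."""
    for arg in event["arguments"]:
        # The offsets must select exactly the stored token sequence.
        if [document[i] for i in arg["offsets"]] != arg["tokens"]:
            return False
        # In the rows shown, the surface text is the space-joined tokens.
        if " ".join(arg["tokens"]) != arg["text"]:
            return False
    return True

print(validate_argument_alignment(event, document))  # -> True
```

In the visible rows, an argument's text is simply its tokens joined by single spaces (e.g., "document - level information extraction ( ie ) tasks"); the check above assumes that convention holds dataset-wide, which the truncated cells here cannot confirm.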