Schema (from the dataset viewer): `venue` (string, 1 distinct value), `title` (string, 18–162 chars), `abstract` (string, 252–1.89k chars), `doc_id` (string, 32 chars), `publication_year` (int64, 2019–2022 in the rows shown), `sentences` (list, 1–13 items), `events` (list, 1–24 items), `document` (list of tokens, 50–348 items).

| venue | title | abstract | doc_id | publication_year | sentences | events | document |
|---|---|---|---|---|---|---|---|
ACL | Program Transfer for Answering Complex Questions over Knowledge Bases | Program induction for answering complex questions over knowledge bases (KBs) aims to decompose a question into a multi-step program, whose execution against the KB produces the final answer. Learning to induce programs relies on a large number of parallel question-program pairs for the given KB. However, for most KBs, ... | febd573ada9568c635f6d8aeada27ec5 | 2,022 | [
"program induction for answering complex questions over knowledge bases ( kbs ) aims to decompose a question into a multi - step program , whose execution against the kb produces the final answer .",
"learning to induce programs relies on a large number of parallel question - program pairs for the given kb .",
... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1
],
"text": "program induction",
"tokens": [
"program",
"induction"
]
}
],
"event_type": "ITT",
"trigger": ... | [
"program",
"induction",
"for",
"answering",
"complex",
"questions",
"over",
"knowledge",
"bases",
"(",
"kbs",
")",
"aims",
"to",
"decompose",
"a",
"question",
"into",
"a",
"multi",
"-",
"step",
"program",
",",
"whose",
"execution",
"against",
"the",
"kb",
"p... |
ACL | Are Natural Language Inference Models IMPPRESsive? Learning IMPlicature and PRESupposition | Natural language inference (NLI) is an increasingly important task for natural language understanding, which requires one to infer whether a sentence entails another. However, the ability of NLI models to make pragmatic inferences remains understudied. We create an IMPlicature and PRESupposition diagnostic dataset (IMP... | 1a6285faf0918175c1ea9e0b7c8ea82e | 2,020 | [
"natural language inference ( nli ) is an increasingly important task for natural language understanding , which requires one to infer whether a sentence entails another .",
"however , the ability of nli models to make pragmatic inferences remains understudied .",
"we create an implicature and presupposition di... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2
],
"text": "natural language inference",
"tokens": [
"natural",
"language",
"inference"
]
}
... | [
"natural",
"language",
"inference",
"(",
"nli",
")",
"is",
"an",
"increasingly",
"important",
"task",
"for",
"natural",
"language",
"understanding",
",",
"which",
"requires",
"one",
"to",
"infer",
"whether",
"a",
"sentence",
"entails",
"another",
".",
"however",... |
ACL | Leveraging Task Transferability to Meta-learning for Clinical Section Classification with Limited Data | Identifying sections is one of the critical components of understanding medical information from unstructured clinical notes and developing assistive technologies for clinical note-writing tasks. Most state-of-the-art text classification systems require thousands of in-domain text data to achieve high performance. Howe... | 6bab1cf097070e6d457c9c8fd0e74e57 | 2,022 | [
"identifying sections is one of the critical components of understanding medical information from unstructured clinical notes and developing assistive technologies for clinical note - writing tasks .",
"most state - of - the - art text classification systems require thousands of in - domain text data to achieve h... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
28,
29,
30,
31,
32,
33,
34,
35,
36,
37
],
"text": "state - of - the - art text classi... | [
"identifying",
"sections",
"is",
"one",
"of",
"the",
"critical",
"components",
"of",
"understanding",
"medical",
"information",
"from",
"unstructured",
"clinical",
"notes",
"and",
"developing",
"assistive",
"technologies",
"for",
"clinical",
"note",
"-",
"writing",
... |
ACL | Generate, Delete and Rewrite: A Three-Stage Framework for Improving Persona Consistency of Dialogue Generation | Maintaining a consistent personality in conversations is quite natural for human beings, but is still a non-trivial task for machines. The persona-based dialogue generation task is thus introduced to tackle the personality-inconsistent problem by incorporating explicit persona text into dialogue generation models. Desp... | 71ff0f02bc14a28822f0cdf6c508aae2 | 2,020 | [
"maintaining a consistent personality in conversations is quite natural for human beings , but is still a non - trivial task for machines .",
"the persona - based dialogue generation task is thus introduced to tackle the personality - inconsistent problem by incorporating explicit persona text into dialogue gener... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
2,
3
],
"text": "consistent personality",
"tokens": [
"consistent",
"personality"
]
}
],
"event_type": "ITT",
"... | [
"maintaining",
"a",
"consistent",
"personality",
"in",
"conversations",
"is",
"quite",
"natural",
"for",
"human",
"beings",
",",
"but",
"is",
"still",
"a",
"non",
"-",
"trivial",
"task",
"for",
"machines",
".",
"the",
"persona",
"-",
"based",
"dialogue",
"ge... |
ACL | An In-depth Study on Internal Structure of Chinese Words | Unlike English letters, Chinese characters have rich and specific meanings. Usually, the meaning of a word can be derived from its constituent characters in some way. Several previous works on syntactic parsing propose to annotate shallow word-internal structures for better utilizing character-level information. This w... | 636dd0c8ece0788d40d37b9f500026d8 | 2,021 | [
"unlike english letters , chinese characters have rich and specific meanings .",
"usually , the meaning of a word can be derived from its constituent characters in some way .",
"several previous works on syntactic parsing propose to annotate shallow word - internal structures for better utilizing character - le... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
4,
5
],
"text": "chinese characters",
"tokens": [
"chinese",
"characters"
]
}
],
"event_type": "ITT",
"trigger"... | [
"unlike",
"english",
"letters",
",",
"chinese",
"characters",
"have",
"rich",
"and",
"specific",
"meanings",
".",
"usually",
",",
"the",
"meaning",
"of",
"a",
"word",
"can",
"be",
"derived",
"from",
"its",
"constituent",
"characters",
"in",
"some",
"way",
".... |
ACL | Preview, Attend and Review: Schema-Aware Curriculum Learning for Multi-Domain Dialogue State Tracking | Existing dialog state tracking (DST) models are trained with dialog data in a random order, neglecting rich structural information in a dataset. In this paper, we propose to use curriculum learning (CL) to better leverage both the curriculum structure and schema structure for task-oriented dialogs. Specifically, we pro... | a0fd29c17984ed8d2e2b7f86831cb0a4 | 2,021 | [
"existing dialog state tracking ( dst ) models are trained with dialog data in a random order , neglecting rich structural information in a dataset .",
"in this paper , we propose to use curriculum learning ( cl ) to better leverage both the curriculum structure and schema structure for task - oriented dialogs ."... | [
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "APP",
"offsets": [
34,
35
],
"text": "curriculum learning",
"tokens": [
"curriculum",
"learning"
]
},
{
"argument_type":... | [
"existing",
"dialog",
"state",
"tracking",
"(",
"dst",
")",
"models",
"are",
"trained",
"with",
"dialog",
"data",
"in",
"a",
"random",
"order",
",",
"neglecting",
"rich",
"structural",
"information",
"in",
"a",
"dataset",
".",
"in",
"this",
"paper",
",",
"... |
ACL | Self-Attentional Models for Lattice Inputs | Lattices are an efficient and effective method to encode ambiguity of upstream systems in natural language processing tasks, for example to compactly capture multiple speech recognition hypotheses, or to represent multiple linguistic analyses. Previous work has extended recurrent neural networks to model lattice inputs... | 8e057b24ffe8ed4a5448b19bb7b9c2bf | 2,019 | [
"lattices are an efficient and effective method to encode ambiguity of upstream systems in natural language processing tasks , for example to compactly capture multiple speech recognition hypotheses , or to represent multiple linguistic analyses .",
"previous work has extended recurrent neural networks to model l... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0
],
"text": "lattices",
"tokens": [
"lattices"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offse... | [
"lattices",
"are",
"an",
"efficient",
"and",
"effective",
"method",
"to",
"encode",
"ambiguity",
"of",
"upstream",
"systems",
"in",
"natural",
"language",
"processing",
"tasks",
",",
"for",
"example",
"to",
"compactly",
"capture",
"multiple",
"speech",
"recognitio... |
ACL | Joint Effects of Context and User History for Predicting Online Conversation Re-entries | As the online world continues its exponential growth, interpersonal communication has come to play an increasingly central role in opinion formation and change. In order to help users better engage with each other online, we study a challenging problem of re-entry prediction foreseeing whether a user will come back to ... | 54dc18f3c81976ab42c7f5f4bd591db4 | 2,019 | [
"as the online world continues its exponential growth , interpersonal communication has come to play an increasingly central role in opinion formation and change .",
"in order to help users better engage with each other online , we study a challenging problem of re - entry prediction foreseeing whether a user wil... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
9,
10
],
"text": "interpersonal communication",
"tokens": [
"interpersonal",
"communication"
]
}
],
"event_type": "... | [
"as",
"the",
"online",
"world",
"continues",
"its",
"exponential",
"growth",
",",
"interpersonal",
"communication",
"has",
"come",
"to",
"play",
"an",
"increasingly",
"central",
"role",
"in",
"opinion",
"formation",
"and",
"change",
".",
"in",
"order",
"to",
"... |
ACL | Probing for Predicate Argument Structures in Pretrained Language Models | Thanks to the effectiveness and wide availability of modern pretrained language models (PLMs), recently proposed approaches have achieved remarkable results in dependency- and span-based, multilingual and cross-lingual Semantic Role Labeling (SRL). These results have prompted researchers to investigate the inner workin... | 81cb7fa52062f9d17d9e93a1e4567dec | 2,022 | [
"thanks to the effectiveness and wide availability of modern pretrained language models ( plms ) , recently proposed approaches have achieved remarkable results in dependency - and span - based , multilingual and cross - lingual semantic role labeling ( srl ) .",
"these results have prompted researchers to invest... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
36,
37,
38
],
"text": "semantic role labeling",
"tokens": [
"semantic",
"role",
"labeling"
]
}
],
... | [
"thanks",
"to",
"the",
"effectiveness",
"and",
"wide",
"availability",
"of",
"modern",
"pretrained",
"language",
"models",
"(",
"plms",
")",
",",
"recently",
"proposed",
"approaches",
"have",
"achieved",
"remarkable",
"results",
"in",
"dependency",
"-",
"and",
"... |
ACL | Where to Go for the Holidays: Towards Mixed-Type Dialogs for Clarification of User Goals | Most dialog systems posit that users have figured out clear and specific goals before starting an interaction. For example, users have determined the departure, the destination, and the travel time for booking a flight. However, in many scenarios, limited by experience and knowledge, users may know what they need, but ... | e6819b3ce223923478bb9d3b63e830a6 | 2,022 | [
"most dialog systems posit that users have figured out clear and specific goals before starting an interaction .",
"for example , users have determined the departure , the destination , and the travel time for booking a flight .",
"however , in many scenarios , limited by experience and knowledge , users may kn... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
1,
2
],
"text": "dialog systems",
"tokens": [
"dialog",
"systems"
]
}
],
"event_type": "ITT",
"trigger": {
... | [
"most",
"dialog",
"systems",
"posit",
"that",
"users",
"have",
"figured",
"out",
"clear",
"and",
"specific",
"goals",
"before",
"starting",
"an",
"interaction",
".",
"for",
"example",
",",
"users",
"have",
"determined",
"the",
"departure",
",",
"the",
"destina... |
ACL | DVD: A Diagnostic Dataset for Multi-step Reasoning in Video Grounded Dialogue | A video-grounded dialogue system is required to understand both dialogue, which contains semantic dependencies from turn to turn, and video, which contains visual cues of spatial and temporal scene variations. Building such dialogue systems is a challenging problem, involving various reasoning types on both visual and ... | d45cd0ddedda4f5e033a5ce54cd0afb9 | 2,021 | [
"a video - grounded dialogue system is required to understand both dialogue , which contains semantic dependencies from turn to turn , and video , which contains visual cues of spatial and temporal scene variations .",
"building such dialogue systems is a challenging problem , involving various reasoning types on... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
1,
2,
3,
4,
5
],
"text": "video - grounded dialogue system",
"tokens": [
"video",
"-",
"grounded"... | [
"a",
"video",
"-",
"grounded",
"dialogue",
"system",
"is",
"required",
"to",
"understand",
"both",
"dialogue",
",",
"which",
"contains",
"semantic",
"dependencies",
"from",
"turn",
"to",
"turn",
",",
"and",
"video",
",",
"which",
"contains",
"visual",
"cues",
... |
ACL | MATE-KD: Masked Adversarial TExt, a Companion to Knowledge Distillation | The advent of large pre-trained language models has given rise to rapid progress in the field of Natural Language Processing (NLP). While the performance of these models on standard benchmarks has scaled with size, compression techniques such as knowledge distillation have been key in making them practical. We present ... | bcf2a5086a3b7ab9ae680289f38dad5f | 2,021 | [
"the advent of large pre - trained language models has given rise to rapid progress in the field of natural language processing ( nlp ) .",
"while the performance of these models on standard benchmarks has scaled with size , compression techniques such as knowledge distillation have been key in making them practi... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
19,
20,
21
],
"text": "natural language processing",
"tokens": [
"natural",
"language",
"processing"
]
... | [
"the",
"advent",
"of",
"large",
"pre",
"-",
"trained",
"language",
"models",
"has",
"given",
"rise",
"to",
"rapid",
"progress",
"in",
"the",
"field",
"of",
"natural",
"language",
"processing",
"(",
"nlp",
")",
".",
"while",
"the",
"performance",
"of",
"the... |
ACL | An Automated Framework for Fast Cognate Detection and Bayesian Phylogenetic Inference in Computational Historical Linguistics | We present a fully automated workflow for phylogenetic reconstruction on large datasets, consisting of two novel methods, one for fast detection of cognates and one for fast Bayesian phylogenetic inference. Our results show that the methods take less than a few minutes to process language families that have so far requ... | db2fff29a55036937a41cdace0266be9 | 2,019 | [
"we present a fully automated workflow for phylogenetic reconstruction on large datasets , consisting of two novel methods , one for fast detection of cognates and one for fast bayesian phylogenetic inference .",
"our results show that the methods take less than a few minutes to process language families that hav... | [
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
... | [
"we",
"present",
"a",
"fully",
"automated",
"workflow",
"for",
"phylogenetic",
"reconstruction",
"on",
"large",
"datasets",
",",
"consisting",
"of",
"two",
"novel",
"methods",
",",
"one",
"for",
"fast",
"detection",
"of",
"cognates",
"and",
"one",
"for",
"fast... |
ACL | Faithful or Extractive? On Mitigating the Faithfulness-Abstractiveness Trade-off in Abstractive Summarization | Despite recent progress in abstractive summarization, systems still suffer from faithfulness errors. While prior work has proposed models that improve faithfulness, it is unclear whether the improvement comes from an increased level of extractiveness of the model outputs as one naive way to improve faithfulness is to m... | 959fbecee82a093efd41a9a4608a4728 | 2,022 | [
"despite recent progress in abstractive summarization , systems still suffer from faithfulness errors .",
"while prior work has proposed models that improve faithfulness , it is unclear whether the improvement comes from an increased level of extractiveness of the model outputs as one naive way to improve faithfu... | [
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
7
],
"text": "systems",
"tokens": [
"systems"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": ... | [
"despite",
"recent",
"progress",
"in",
"abstractive",
"summarization",
",",
"systems",
"still",
"suffer",
"from",
"faithfulness",
"errors",
".",
"while",
"prior",
"work",
"has",
"proposed",
"models",
"that",
"improve",
"faithfulness",
",",
"it",
"is",
"unclear",
... |
ACL | Enhancing the generalization for Intent Classification and Out-of-Domain Detection in SLU | Intent classification is a major task in spoken language understanding (SLU). Since most models are built with pre-collected in-domain (IND) training utterances, their ability to detect unsupported out-of-domain (OOD) utterances has a critical effect in practical use. Recent works have shown that using extra data and l... | bbe87249393bc09725f2b0dcfda04997 | 2,021 | [
"intent classification is a major task in spoken language understanding ( slu ) .",
"since most models are built with pre - collected in - domain ( ind ) training utterances , their ability to detect unsupported out - of - domain ( ood ) utterances has a critical effect in practical use .",
"recent works have s... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1
],
"text": "intent classification",
"tokens": [
"intent",
"classification"
]
}
],
"event_type": "ITT",
"tr... | [
"intent",
"classification",
"is",
"a",
"major",
"task",
"in",
"spoken",
"language",
"understanding",
"(",
"slu",
")",
".",
"since",
"most",
"models",
"are",
"built",
"with",
"pre",
"-",
"collected",
"in",
"-",
"domain",
"(",
"ind",
")",
"training",
"uttera... |
ACL | PromDA: Prompt-based Data Augmentation for Low-Resource NLU Tasks | This paper focuses on the Data Augmentation for low-resource Natural Language Understanding (NLU) tasks. We propose Prompt-based Data Augmentation model (PromDA) which only trains small-scale Soft Prompt (i.e., a set of trainable vectors) in the frozen Pre-trained Language Models (PLMs). This avoids human effort in col... | 04e5995f7999d5daad821408248f8262 | 2,022 | [
"this paper focuses on the data augmentation for low - resource natural language understanding ( nlu ) tasks .",
"we propose prompt - based data augmentation model ( promda ) which only trains small - scale soft prompt ( i . e . , a set of trainable vectors ) in the frozen pre - trained language models ( plms ) .... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
8,
9,
10,
11,
12,
13,
17
],
"text": "low - resource natural language understanding ( nlu ) tasks",
"tokens"... | [
"this",
"paper",
"focuses",
"on",
"the",
"data",
"augmentation",
"for",
"low",
"-",
"resource",
"natural",
"language",
"understanding",
"(",
"nlu",
")",
"tasks",
".",
"we",
"propose",
"prompt",
"-",
"based",
"data",
"augmentation",
"model",
"(",
"promda",
")... |
ACL | Analyzing the Limitations of Cross-lingual Word Embedding Mappings | Recent research in cross-lingual word embeddings has almost exclusively focused on offline methods, which independently train word embeddings in different languages and map them to a shared space through linear transformations. While several authors have questioned the underlying isomorphism assumption, which states th... | c84823d8450a619b600d844943a96c1e | 2,019 | [
"recent research in cross - lingual word embeddings has almost exclusively focused on offline methods , which independently train word embeddings in different languages and map them to a shared space through linear transformations .",
"while several authors have questioned the underlying isomorphism assumption , ... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
3,
4,
5,
6,
7
],
"text": "cross - lingual word embeddings",
"tokens": [
"cross",
"-",
"lingual",
... | [
"recent",
"research",
"in",
"cross",
"-",
"lingual",
"word",
"embeddings",
"has",
"almost",
"exclusively",
"focused",
"on",
"offline",
"methods",
",",
"which",
"independently",
"train",
"word",
"embeddings",
"in",
"different",
"languages",
"and",
"map",
"them",
... |
ACL | Dependency-driven Relation Extraction with Attentive Graph Convolutional Networks | Syntactic information, especially dependency trees, has been widely used by existing studies to improve relation extraction with better semantic guidance for analyzing the context information associated with the given entities. However, most existing studies suffer from the noise in the dependency trees, especially whe... | 09e8a58fe50453a2401747d5e9c40e18 | 2,021 | [
"syntactic information , especially dependency trees , has been widely used by existing studies to improve relation extraction with better semantic guidance for analyzing the context information associated with the given entities .",
"however , most existing studies suffer from the noise in the dependency trees ,... | [
{
"arguments": [],
"event_type": "ITT",
"trigger": {
"offsets": [
10
],
"text": "used",
"tokens": [
"used"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
36,
... | [
"syntactic",
"information",
",",
"especially",
"dependency",
"trees",
",",
"has",
"been",
"widely",
"used",
"by",
"existing",
"studies",
"to",
"improve",
"relation",
"extraction",
"with",
"better",
"semantic",
"guidance",
"for",
"analyzing",
"the",
"context",
"inf... |
ACL | Joint Chinese Word Segmentation and Part-of-speech Tagging via Two-way Attentions of Auto-analyzed Knowledge | Chinese word segmentation (CWS) and part-of-speech (POS) tagging are important fundamental tasks for Chinese language processing, where joint learning of them is an effective one-step solution for both tasks. Previous studies for joint CWS and POS tagging mainly follow the character-based tagging paradigm with introduc... | 9609778ad9e5f0ef4d2c7c494df6a6dc | 2,020 | [
"chinese word segmentation ( cws ) and part - of - speech ( pos ) tagging are important fundamental tasks for chinese language processing , where joint learning of them is an effective one - step solution for both tasks .",
"previous studies for joint cws and pos tagging mainly follow the character - based taggin... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
21,
22,
23
],
"text": "chinese language processing",
"tokens": [
"chinese",
"language",
"processing"
]
... | [
"chinese",
"word",
"segmentation",
"(",
"cws",
")",
"and",
"part",
"-",
"of",
"-",
"speech",
"(",
"pos",
")",
"tagging",
"are",
"important",
"fundamental",
"tasks",
"for",
"chinese",
"language",
"processing",
",",
"where",
"joint",
"learning",
"of",
"them",
... |
ACL | RankQA: Neural Question Answering with Answer Re-Ranking | The conventional paradigm in neural question answering (QA) for narrative content is limited to a two-stage process: first, relevant text passages are retrieved and, subsequently, a neural network for machine comprehension extracts the likeliest answer. However, both stages are largely isolated in the status quo and, h... | 864e901c9c8268c1e32b4e85b4cdda05 | 2,019 | [
"the conventional paradigm in neural question answering ( qa ) for narrative content is limited to a two - stage process : first , relevant text passages are retrieved and , subsequently , a neural network for machine comprehension extracts the likeliest answer .",
"however , both stages are largely isolated in t... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
4,
5,
6
],
"text": "neural question answering",
"tokens": [
"neural",
"question",
"answering"
]
}
]... | [
"the",
"conventional",
"paradigm",
"in",
"neural",
"question",
"answering",
"(",
"qa",
")",
"for",
"narrative",
"content",
"is",
"limited",
"to",
"a",
"two",
"-",
"stage",
"process",
":",
"first",
",",
"relevant",
"text",
"passages",
"are",
"retrieved",
"and... |
ACL | Domain Adaptation in Multilingual and Multi-Domain Monolingual Settings for Complex Word Identification | Complex word identification (CWI) is a cornerstone process towards proper text simplification. CWI is highly dependent on context, whereas its difficulty is augmented by the scarcity of available datasets which vary greatly in terms of domains and languages. As such, it becomes increasingly more difficult to develop a ... | 0e0d6cc75f98e0e32960341f2f384171 | 2,022 | [
"complex word identification ( cwi ) is a cornerstone process towards proper text simplification .",
"cwi is highly dependent on context , whereas its difficulty is augmented by the scarcity of available datasets which vary greatly in terms of domains and languages .",
"as such , it becomes increasingly more di... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2
],
"text": "complex word identification",
"tokens": [
"complex",
"word",
"identification"
]
}
... | [
"complex",
"word",
"identification",
"(",
"cwi",
")",
"is",
"a",
"cornerstone",
"process",
"towards",
"proper",
"text",
"simplification",
".",
"cwi",
"is",
"highly",
"dependent",
"on",
"context",
",",
"whereas",
"its",
"difficulty",
"is",
"augmented",
"by",
"t... |
ACL | Substructure Distribution Projection for Zero-Shot Cross-Lingual Dependency Parsing | We present substructure distribution projection (SubDP), a technique that projects a distribution over structures in one domain to another, by projecting substructure distributions separately. Models for the target domain can then be trained, using the projected distributions as soft silver labels. We evaluate SubDP on... | a11d23df083ec881504fcaf35594405c | 2,022 | [
"we present substructure distribution projection ( subdp ) , a technique that projects a distribution over structures in one domain to another , by projecting substructure distributions separately .",
"models for the target domain can then be trained , using the projected distributions as soft silver labels .",
... | [
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
... | [
"we",
"present",
"substructure",
"distribution",
"projection",
"(",
"subdp",
")",
",",
"a",
"technique",
"that",
"projects",
"a",
"distribution",
"over",
"structures",
"in",
"one",
"domain",
"to",
"another",
",",
"by",
"projecting",
"substructure",
"distributions"... |
ACL | How can NLP Help Revitalize Endangered Languages? A Case Study and Roadmap for the Cherokee Language | More than 43% of the languages spoken in the world are endangered, and language loss currently occurs at an accelerated rate because of globalization and neocolonialism. Saving and revitalizing endangered languages has become very important for maintaining the cultural diversity on our planet. In this work, we focus on... | 6043d6347900bef1481b47f957a5bdf4 | 2,022 | [
"more than 43 % of the languages spoken in the world are endangered , and language loss currently occurs at an accelerated rate because of globalization and neocolonialism .",
"saving and revitalizing endangered languages has become very important for maintaining the cultural diversity on our planet .",
"in thi... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
41,
42
],
"text": "cultural diversity",
"tokens": [
"cultural",
"diversity"
]
}
],
"event_type": "ITT",
"trigge... | [
"more",
"than",
"43",
"%",
"of",
"the",
"languages",
"spoken",
"in",
"the",
"world",
"are",
"endangered",
",",
"and",
"language",
"loss",
"currently",
"occurs",
"at",
"an",
"accelerated",
"rate",
"because",
"of",
"globalization",
"and",
"neocolonialism",
".",
... |