| venue | title | abstract | doc_id | publication_year | sentences | events | document |
|---|---|---|---|---|---|---|---|
ACL | LearnDA: Learnable Knowledge-Guided Data Augmentation for Event Causality Identification | Modern models for event causality identification (ECI) are mainly based on supervised learning, which are prone to the data lacking problem. Unfortunately, the existing NLP-related augmentation methods cannot directly produce available data required for this task. To solve the data lacking problem, we introduce a new a... | 74e913c90a1dd5e175cb7b5a0731c92b | 2021 | [
"modern models for event causality identification ( eci ) are mainly based on supervised learning , which are prone to the data lacking problem .",
"unfortunately , the existing nlp - related augmentation methods cannot directly produce available data required for this task .",
"to solve the data lacking proble... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "event causality identification",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"event",
"causality",
"identification"
],
"offsets": [
3,
4,
... | [
"modern",
"models",
"for",
"event",
"causality",
"identification",
"(",
"eci",
")",
"are",
"mainly",
"based",
"on",
"supervised",
"learning",
",",
"which",
"are",
"prone",
"to",
"the",
"data",
"lacking",
"problem",
".",
"unfortunately",
",",
"the",
"existing",... |
ACL | SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization | In this paper, we present a conceptually simple while empirically powerful framework for abstractive summarization, SimCLS, which can bridge the gap between the learning objective and evaluation metrics resulting from the currently dominated sequence-to-sequence learning framework by formulating text generation as a re... | 3e2dca56d1e24a3e939aea53fcd86dea | 2021 | [
"in this paper , we present a conceptually simple while empirically powerful framework for abstractive summarization , simcls , which can bridge the gap between the learning objective and evaluation metrics resulting from the currently dominated sequence - to - sequence learning framework by formulating text genera... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
4
]
},
{
"text": "conceptually simple while empirically powerful fram... | [
"in",
"this",
"paper",
",",
"we",
"present",
"a",
"conceptually",
"simple",
"while",
"empirically",
"powerful",
"framework",
"for",
"abstractive",
"summarization",
",",
"simcls",
",",
"which",
"can",
"bridge",
"the",
"gap",
"between",
"the",
"learning",
"objecti... |
ACL | Learning Latent Structures for Cross Action Phrase Relations in Wet Lab Protocols | Wet laboratory protocols (WLPs) are critical for conveying reproducible procedures in biological research. They are composed of instructions written in natural language describing the step-wise processing of materials by specific actions. This process flow description for reagents and materials synthesis in WLPs can be... | bdc150368ebd8bae88f227821aeadb1d | 2021 | [
"wet laboratory protocols ( wlps ) are critical for conveying reproducible procedures in biological research .",
"they are composed of instructions written in natural language describing the step - wise processing of materials by specific actions .",
"this process flow description for reagents and materials syn... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "wet laboratory protocols",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"wet",
"laboratory",
"protocols"
],
"offsets": [
0,
1,
2
... | [
"wet",
"laboratory",
"protocols",
"(",
"wlps",
")",
"are",
"critical",
"for",
"conveying",
"reproducible",
"procedures",
"in",
"biological",
"research",
".",
"they",
"are",
"composed",
"of",
"instructions",
"written",
"in",
"natural",
"language",
"describing",
"th... |
ACL | Dialogue Response Selection with Hierarchical Curriculum Learning | We study the learning of a matching model for dialogue response selection. Motivated by the recent finding that models trained with random negative samples are not ideal in real-world scenarios, we propose a hierarchical curriculum learning framework that trains the matching model in an “easy-to-difficult” scheme. Our ... | 29d38a4cbf20640d35cdd73a6579c809 | 2021 | [
"we study the learning of a matching model for dialogue response selection .",
"motivated by the recent finding that models trained with random negative samples are not ideal in real - world scenarios , we propose a hierarchical curriculum learning framework that trains the matching model in an “ easy - to - diff... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "learning of a matching model",
"nugget_ty... | [
"we",
"study",
"the",
"learning",
"of",
"a",
"matching",
"model",
"for",
"dialogue",
"response",
"selection",
".",
"motivated",
"by",
"the",
"recent",
"finding",
"that",
"models",
"trained",
"with",
"random",
"negative",
"samples",
"are",
"not",
"ideal",
"in",... |
ACL | Uncertain Natural Language Inference | We introduce Uncertain Natural Language Inference (UNLI), a refinement of Natural Language Inference (NLI) that shifts away from categorical labels, targeting instead the direct prediction of subjective probability assessments. We demonstrate the feasibility of collecting annotations for UNLI by relabeling a portion of... | 8b1c2c6d51ca36631866471fe5dc70b2 | 2020 | [
"we introduce uncertain natural language inference ( unli ) , a refinement of natural language inference ( nli ) that shifts away from categorical labels , targeting instead the direct prediction of subjective probability assessments .",
"we demonstrate the feasibility of collecting annotations for unli by relabe... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "uncertain natural language inference",
"nug... | [
"we",
"introduce",
"uncertain",
"natural",
"language",
"inference",
"(",
"unli",
")",
",",
"a",
"refinement",
"of",
"natural",
"language",
"inference",
"(",
"nli",
")",
"that",
"shifts",
"away",
"from",
"categorical",
"labels",
",",
"targeting",
"instead",
"th... |
ACL | Interpolated Spectral NGram Language Models | Spectral models for learning weighted non-deterministic automata have nice theoretical and algorithmic properties. Despite this, it has been challenging to obtain competitive results in language modeling tasks, for two main reasons. First, in order to capture long-range dependencies of the data, the method must use sta... | 4a191cb7d21cf96760920554dc422aa9 | 2019 | [
"spectral models for learning weighted non - deterministic automata have nice theoretical and algorithmic properties .",
"despite this , it has been challenging to obtain competitive results in language modeling tasks , for two main reasons .",
"first , in order to capture long - range dependencies of the data ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "spectral models for learning weighted non - deterministic automata",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"spectral",
"models",
"for",
"learning",
... | [
"spectral",
"models",
"for",
"learning",
"weighted",
"non",
"-",
"deterministic",
"automata",
"have",
"nice",
"theoretical",
"and",
"algorithmic",
"properties",
".",
"despite",
"this",
",",
"it",
"has",
"been",
"challenging",
"to",
"obtain",
"competitive",
"result... |
ACL | Meta-Transfer Learning for Code-Switched Speech Recognition | An increasing number of people in the world today speak a mixed-language as a result of being multilingual. However, building a speech recognition system for code-switching remains difficult due to the availability of limited resources and the expense and significant effort required to collect mixed-language data. We t... | 065b77c10de8f028fd06f575b0db288a | 2020 | [
"an increasing number of people in the world today speak a mixed - language as a result of being multilingual .",
"however , building a speech recognition system for code - switching remains difficult due to the availability of limited resources and the expense and significant effort required to collect mixed - l... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "speech recognition system",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"speech",
"recognition",
"system"
],
"offsets": [
25,
26,
... | [
"an",
"increasing",
"number",
"of",
"people",
"in",
"the",
"world",
"today",
"speak",
"a",
"mixed",
"-",
"language",
"as",
"a",
"result",
"of",
"being",
"multilingual",
".",
"however",
",",
"building",
"a",
"speech",
"recognition",
"system",
"for",
"code",
... |
ACL | Automatic Domain Adaptation Outperforms Manual Domain Adaptation for Predicting Financial Outcomes | In this paper, we automatically create sentiment dictionaries for predicting financial outcomes. We compare three approaches: (i) manual adaptation of the domain-general dictionary H4N, (ii) automatic adaptation of H4N and (iii) a combination consisting of first manual, then automatic adaptation. In our experiments, we... | 7edff319c8d8604677495072169f63ac | 2019 | [
"in this paper , we automatically create sentiment dictionaries for predicting financial outcomes .",
"we compare three approaches : ( i ) manual adaptation of the domain - general dictionary h4n , ( ii ) automatic adaptation of h4n and ( iii ) a combination consisting of first manual , then automatic adaptation ... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
4
]
},
{
"text": "sentiment dictionaries",
"nugget_type": "TA... | [
"in",
"this",
"paper",
",",
"we",
"automatically",
"create",
"sentiment",
"dictionaries",
"for",
"predicting",
"financial",
"outcomes",
".",
"we",
"compare",
"three",
"approaches",
":",
"(",
"i",
")",
"manual",
"adaptation",
"of",
"the",
"domain",
"-",
"genera... |
ACL | SHIELD: Defending Textual Neural Networks against Multiple Black-Box Adversarial Attacks with Stochastic Multi-Expert Patcher | Even though several methods have proposed to defend textual neural network (NN) models against black-box adversarial attacks, they often defend against a specific text perturbation strategy and/or require re-training the models from scratch. This leads to a lack of generalization in practice and redundant computation. ... | db308ef55f1a97ae1afd82e9df957b67 | 2022 | [
"even though several methods have proposed to defend textual neural network ( nn ) models against black - box adversarial attacks , they often defend against a specific text perturbation strategy and / or require re - training the models from scratch .",
"this leads to a lack of generalization in practice and red... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "textual neural network models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"textual",
"neural",
"network",
"models"
],
"offsets": [
8,
... | [
"even",
"though",
"several",
"methods",
"have",
"proposed",
"to",
"defend",
"textual",
"neural",
"network",
"(",
"nn",
")",
"models",
"against",
"black",
"-",
"box",
"adversarial",
"attacks",
",",
"they",
"often",
"defend",
"against",
"a",
"specific",
"text",
... |
ACL | Sentence Mover’s Similarity: Automatic Evaluation for Multi-Sentence Texts | For evaluating machine-generated texts, automatic methods hold the promise of avoiding collection of human judgments, which can be expensive and time-consuming. The most common automatic metrics, like BLEU and ROUGE, depend on exact word matching, an inflexible approach for measuring semantic similarity. We introduce m... | b1d47b398d692f1219978956d74ead96 | 2019 | [
"for evaluating machine - generated texts , automatic methods hold the promise of avoiding collection of human judgments , which can be expensive and time - consuming .",
"the most common automatic metrics , like bleu and rouge , depend on exact word matching , an inflexible approach for measuring semantic simila... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "automatic methods",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"automatic",
"methods"
],
"offsets": [
7,
8
]
}
],
"trigger":... | [
"for",
"evaluating",
"machine",
"-",
"generated",
"texts",
",",
"automatic",
"methods",
"hold",
"the",
"promise",
"of",
"avoiding",
"collection",
"of",
"human",
"judgments",
",",
"which",
"can",
"be",
"expensive",
"and",
"time",
"-",
"consuming",
".",
"the",
... |
ACL | ForecastQA: A Question Answering Challenge for Event Forecasting with Temporal Text Data | Event forecasting is a challenging, yet important task, as humans seek to constantly plan for the future. Existing automated forecasting studies rely mostly on structured data, such as time-series or event-based knowledge graphs, to help predict future events. In this work, we aim to formulate a task, construct a datas... | e3768ad9ee616a9839c242bb3cff6940 | 2021 | [
"event forecasting is a challenging , yet important task , as humans seek to constantly plan for the future .",
"existing automated forecasting studies rely mostly on structured data , such as time - series or event - based knowledge graphs , to help predict future events .",
"in this work , we aim to formulate... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "event forecasting",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"event",
"forecasting"
],
"offsets": [
0,
1
]
}
],
"trigger": ... | [
"event",
"forecasting",
"is",
"a",
"challenging",
",",
"yet",
"important",
"task",
",",
"as",
"humans",
"seek",
"to",
"constantly",
"plan",
"for",
"the",
"future",
".",
"existing",
"automated",
"forecasting",
"studies",
"rely",
"mostly",
"on",
"structured",
"d... |
ACL | Efficient Passage Retrieval with Hashing for Open-domain Question Answering | Most state-of-the-art open-domain question answering systems use a neural retrieval model to encode passages into continuous vectors and extract them from a knowledge source. However, such retrieval models often require large memory to run because of the massive size of their passage index. In this paper, we introduce ... | 66fa7cb7c9e5fbe0bc0cf02a3a30dcc9 | 2021 | [
"most state - of - the - art open - domain question answering systems use a neural retrieval model to encode passages into continuous vectors and extract them from a knowledge source .",
"however , such retrieval models often require large memory to run because of the massive size of their passage index .",
"in... | [
{
"event_type": "RWS",
"arguments": [
{
"text": "state - of - the - art open - domain question answering systems",
"nugget_type": "APP",
"argument_type": "Subject",
"tokens": [
"state",
"-",
"of",
"-",
"the",
"... | [
"most",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"open",
"-",
"domain",
"question",
"answering",
"systems",
"use",
"a",
"neural",
"retrieval",
"model",
"to",
"encode",
"passages",
"into",
"continuous",
"vectors",
"and",
"extract",
"them",
"from",
"a",
"... |
ACL | Evaluating Extreme Hierarchical Multi-label Classification | Several natural language processing (NLP) tasks are defined as a classification problem in its most complex form: Multi-label Hierarchical Extreme classification, in which items may be associated with multiple classes from a set of thousands of possible classes organized in a hierarchy and with a highly unbalanced dist... | db434d8ae5b0587b4cc241b7433ef2d2 | 2022 | [
"several natural language processing ( nlp ) tasks are defined as a classification problem in its most complex form : multi - label hierarchical extreme classification , in which items may be associated with multiple classes from a set of thousands of possible classes organized in a hierarchy and with a highly unba... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "multi - label hierarchical extreme classification",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"multi",
"-",
"label",
"hierarchical",
"extreme",
... | [
"several",
"natural",
"language",
"processing",
"(",
"nlp",
")",
"tasks",
"are",
"defined",
"as",
"a",
"classification",
"problem",
"in",
"its",
"most",
"complex",
"form",
":",
"multi",
"-",
"label",
"hierarchical",
"extreme",
"classification",
",",
"in",
"whi... |
ACL | Analyzing the Source and Target Contributions to Predictions in Neural Machine Translation | In Neural Machine Translation (and, more generally, conditional language modeling), the generation of a target token is influenced by two types of context: the source and the prefix of the target sequence. While many attempts to understand the internal workings of NMT models have been made, none of them explicitly eval... | 34f754f53130727c0dace5149651fa43 | 2021 | [
"in neural machine translation ( and , more generally , conditional language modeling ) , the generation of a target token is influenced by two types of context : the source and the prefix of the target sequence .",
"while many attempts to understand the internal workings of nmt models have been made , none of th... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural machine translation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"neural",
"machine",
"translation"
],
"offsets": [
1,
2,
... | [
"in",
"neural",
"machine",
"translation",
"(",
"and",
",",
"more",
"generally",
",",
"conditional",
"language",
"modeling",
")",
",",
"the",
"generation",
"of",
"a",
"target",
"token",
"is",
"influenced",
"by",
"two",
"types",
"of",
"context",
":",
"the",
... |
ACL | Tangled up in BLEU: Reevaluating the Evaluation of Automatic Machine Translation Evaluation Metrics | Automatic metrics are fundamental for the development and evaluation of machine translation systems. Judging whether, and to what extent, automatic metrics concur with the gold standard of human evaluation is not a straightforward problem. We show that current methods for judging metrics are highly sensitive to the tra... | 04bef279dfd2a30d98dc331369c4511e | 2020 | [
"automatic metrics are fundamental for the development and evaluation of machine translation systems .",
"judging whether , and to what extent , automatic metrics concur with the gold standard of human evaluation is not a straightforward problem .",
"we show that current methods for judging metrics are highly s... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "automatic metrics",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"automatic",
"metrics"
],
"offsets": [
0,
1
]
}
],
"trigger": ... | [
"automatic",
"metrics",
"are",
"fundamental",
"for",
"the",
"development",
"and",
"evaluation",
"of",
"machine",
"translation",
"systems",
".",
"judging",
"whether",
",",
"and",
"to",
"what",
"extent",
",",
"automatic",
"metrics",
"concur",
"with",
"the",
"gold"... |
ACL | Towards Holistic and Automatic Evaluation of Open-Domain Dialogue Generation | Open-domain dialogue generation has gained increasing attention in Natural Language Processing. Its evaluation requires a holistic means. Human ratings are deemed as the gold standard. As human evaluation is inefficient and costly, an automated substitute is highly desirable. In this paper, we propose holistic evaluati... | 50b75514df02b4768ea72541797fcb07 | 2020 | [
"open - domain dialogue generation has gained increasing attention in natural language processing .",
"its evaluation requires a holistic means .",
"human ratings are deemed as the gold standard .",
"as human evaluation is inefficient and costly , an automated substitute is highly desirable .",
"in this pap... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "open - domain dialogue generation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"open",
"-",
"domain",
"dialogue",
"generation"
],
"offset... | [
"open",
"-",
"domain",
"dialogue",
"generation",
"has",
"gained",
"increasing",
"attention",
"in",
"natural",
"language",
"processing",
".",
"its",
"evaluation",
"requires",
"a",
"holistic",
"means",
".",
"human",
"ratings",
"are",
"deemed",
"as",
"the",
"gold",... |
ACL | Searching for fingerspelled content in American Sign Language | Natural language processing for sign language video—including tasks like recognition, translation, and search—is crucial for making artificial intelligence technologies accessible to deaf individuals, and is gaining research interest in recent years. In this paper, we address the problem of searching for fingerspelled ... | c54493f88eb5c16d1c0d9c97a9bdf0f4 | 2022 | [
"natural language processing for sign language video — including tasks like recognition , translation , and search — is crucial for making artificial intelligence technologies accessible to deaf individuals , and is gaining research interest in recent years .",
"in this paper , we address the problem of searching... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "natural language processing for sign language video",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"natural",
"language",
"processing",
"for",
"sign",
... | [
"natural",
"language",
"processing",
"for",
"sign",
"language",
"video",
"—",
"including",
"tasks",
"like",
"recognition",
",",
"translation",
",",
"and",
"search",
"—",
"is",
"crucial",
"for",
"making",
"artificial",
"intelligence",
"technologies",
"accessible",
... |
ACL | Ethics Sheets for AI Tasks | Several high-profile events, such as the mass testing of emotion recognition systems on vulnerable sub-populations and using question answering systems to make moral judgments, have highlighted how technology will often lead to more adverse outcomes for those that are already marginalized. At issue here are not just in... | 47ce1c594e3cd90f39d4170d72f8dc99 | 2022 | [
"several high - profile events , such as the mass testing of emotion recognition systems on vulnerable sub - populations and using question answering systems to make moral judgments , have highlighted how technology will often lead to more adverse outcomes for those that are already marginalized .",
"at issue her... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "ai tasks",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"ai",
"tasks"
],
"offsets": [
62,
63
]
}
],
"trigger": {
"text": ... | [
"several",
"high",
"-",
"profile",
"events",
",",
"such",
"as",
"the",
"mass",
"testing",
"of",
"emotion",
"recognition",
"systems",
"on",
"vulnerable",
"sub",
"-",
"populations",
"and",
"using",
"question",
"answering",
"systems",
"to",
"make",
"moral",
"judg... |
ACL | Who Sides with Whom? Towards Computational Construction of Discourse Networks for Political Debates | Understanding the structures of political debates (which actors make what claims) is essential for understanding democratic political decision making. The vision of computational construction of such discourse networks from newspaper reports brings together political science and natural language processing. This paper ... | 85ea522174888752888c00779f94c185 | 2019 | [
"understanding the structures of political debates ( which actors make what claims ) is essential for understanding democratic political decision making .",
"the vision of computational construction of such discourse networks from newspaper reports brings together political science and natural language processing... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "structures of political debates",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"structures",
"of",
"political",
"debates"
],
"offsets": [
2... | [
"understanding",
"the",
"structures",
"of",
"political",
"debates",
"(",
"which",
"actors",
"make",
"what",
"claims",
")",
"is",
"essential",
"for",
"understanding",
"democratic",
"political",
"decision",
"making",
".",
"the",
"vision",
"of",
"computational",
"con... |
ACL | Quantifying Attention Flow in Transformers | In the Transformer model, “self-attention” combines information from attended embeddings into the representation of the focal embedding in the next layer. Thus, across layers of the Transformer, information originating from different tokens gets increasingly mixed. This makes attention weights unreliable as explanation... | 150cfb01fbd228b9a0852fe4c06f509f | 2020 | [
"in the transformer model , “ self - attention ” combines information from attended embeddings into the representation of the focal embedding in the next layer .",
"thus , across layers of the transformer , information originating from different tokens gets increasingly mixed .",
"this makes attention weights u... | [
{
"event_type": "RWS",
"arguments": [
{
"text": "representation of the focal embedding",
"nugget_type": "FEA",
"argument_type": "BaseComponent",
"tokens": [
"representation",
"of",
"the",
"focal",
"embedding"
],
... | [
"in",
"the",
"transformer",
"model",
",",
"“",
"self",
"-",
"attention",
"”",
"combines",
"information",
"from",
"attended",
"embeddings",
"into",
"the",
"representation",
"of",
"the",
"focal",
"embedding",
"in",
"the",
"next",
"layer",
".",
"thus",
",",
"ac... |
ACL | BAM! Born-Again Multi-Task Networks for Natural Language Understanding | It can be challenging to train multi-task neural networks that outperform or even match their single-task counterparts. To help address this, we propose using knowledge distillation where single-task models teach a multi-task model. We enhance this training with teacher annealing, a novel method that gradually transiti... | fdf99a71b3866b495279c73ace46c4be | 2019 | [
"it can be challenging to train multi - task neural networks that outperform or even match their single - task counterparts .",
"to help address this , we propose using knowledge distillation where single - task models teach a multi - task model .",
"we enhance this training with teacher annealing , a novel met... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "multi - task neural networks",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"multi",
"-",
"task",
"neural",
"networks"
],
"offsets": [
... | [
"it",
"can",
"be",
"challenging",
"to",
"train",
"multi",
"-",
"task",
"neural",
"networks",
"that",
"outperform",
"or",
"even",
"match",
"their",
"single",
"-",
"task",
"counterparts",
".",
"to",
"help",
"address",
"this",
",",
"we",
"propose",
"using",
"... |
ACL | The Power of Prompt Tuning for Low-Resource Semantic Parsing | Prompt tuning has recently emerged as an effective method for adapting pre-trained language models to a number of language understanding and generation tasks. In this paper, we investigate prompt tuning for semantic parsing—the task of mapping natural language utterances onto formal meaning representations. On the low-... | 06be01387a0d5c3a33e132d39b7f2154 | 2022 | [
"prompt tuning has recently emerged as an effective method for adapting pre - trained language models to a number of language understanding and generation tasks .",
"in this paper , we investigate prompt tuning for semantic parsing — the task of mapping natural language utterances onto formal meaning representati... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "language understanding tasks",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"language",
"understanding",
"tasks"
],
"offsets": [
20,
21,
... | [
"prompt",
"tuning",
"has",
"recently",
"emerged",
"as",
"an",
"effective",
"method",
"for",
"adapting",
"pre",
"-",
"trained",
"language",
"models",
"to",
"a",
"number",
"of",
"language",
"understanding",
"and",
"generation",
"tasks",
".",
"in",
"this",
"paper... |
ACL | Relational Graph Attention Network for Aspect-based Sentiment Analysis | Aspect-based sentiment analysis aims to determine the sentiment polarity towards a specific aspect in online reviews. Most recent efforts adopt attention-based neural network models to implicitly connect aspects with opinion words. However, due to the complexity of language and the existence of multiple aspects in a si... | 7ab07221061efc25f3500f8dd431d749 | 2020 | [
"aspect - based sentiment analysis aims to determine the sentiment polarity towards a specific aspect in online reviews .",
"most recent efforts adopt attention - based neural network models to implicitly connect aspects with opinion words .",
"however , due to the complexity of language and the existence of mu... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "aspect - based sentiment analysis",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"aspect",
"-",
"based",
"sentiment",
"analysis"
],
"offset... | [
"aspect",
"-",
"based",
"sentiment",
"analysis",
"aims",
"to",
"determine",
"the",
"sentiment",
"polarity",
"towards",
"a",
"specific",
"aspect",
"in",
"online",
"reviews",
".",
"most",
"recent",
"efforts",
"adopt",
"attention",
"-",
"based",
"neural",
"network"... |
ACL | DynaEval: Unifying Turn and Dialogue Level Evaluation | A dialogue is essentially a multi-turn interaction among interlocutors. Effective evaluation metrics should reflect the dynamics of such interaction. Existing automatic metrics are focused very much on the turn-level quality, while ignoring such dynamics. To this end, we propose DynaEval, a unified automatic evaluation... | 245db7afd11be58cdd7c62ebd420ca65 | 2021 | [
"a dialogue is essentially a multi - turn interaction among interlocutors .",
"effective evaluation metrics should reflect the dynamics of such interaction .",
"existing automatic metrics are focused very much on the turn - level quality , while ignoring such dynamics .",
"to this end , we propose dynaeval , ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "evaluation metrics",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"evaluation",
"metrics"
],
"offsets": [
13,
14
]
}
],
"trigge... | [
"a",
"dialogue",
"is",
"essentially",
"a",
"multi",
"-",
"turn",
"interaction",
"among",
"interlocutors",
".",
"effective",
"evaluation",
"metrics",
"should",
"reflect",
"the",
"dynamics",
"of",
"such",
"interaction",
".",
"existing",
"automatic",
"metrics",
"are"... |
ACL | Reinforcement Learning for Abstractive Question Summarization with Question-aware Semantic Rewards | The growth of online consumer health questions has led to the necessity for reliable and accurate question answering systems. A recent study showed that manual summarization of consumer health questions brings significant improvement in retrieving relevant answers. However, the automatic summarization of long questions... | 566f56965d81f0ccb15da0d367dca74c | 2021 | [
"the growth of online consumer health questions has led to the necessity for reliable and accurate question answering systems .",
"a recent study showed that manual summarization of consumer health questions brings significant improvement in retrieving relevant answers .",
"however , the automatic summarization... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "online consumer health questions",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"online",
"consumer",
"health",
"questions"
],
"offsets": [
... | [
"the",
"growth",
"of",
"online",
"consumer",
"health",
"questions",
"has",
"led",
"to",
"the",
"necessity",
"for",
"reliable",
"and",
"accurate",
"question",
"answering",
"systems",
".",
"a",
"recent",
"study",
"showed",
"that",
"manual",
"summarization",
"of",
... |
ACL | Improving Low-Resource Named Entity Recognition using Joint Sentence and Token Labeling | Exploiting sentence-level labels, which are easy to obtain, is one of the plausible methods to improve low-resource named entity recognition (NER), where token-level labels are costly to annotate. Current models for jointly learning sentence and token labeling are limited to binary classification. We present a joint mo... | 2fbb4456fc76dd83fc54b0c6bb439207 | 2020 | [
"exploiting sentence - level labels , which are easy to obtain , is one of the plausible methods to improve low - resource named entity recognition ( ner ) , where token - level labels are costly to annotate .",
"current models for jointly learning sentence and token labeling are limited to binary classification ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "exploiting sentence - level labels",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"exploiting",
"sentence",
"-",
"level",
"labels"
],
"offs... | [
"exploiting",
"sentence",
"-",
"level",
"labels",
",",
"which",
"are",
"easy",
"to",
"obtain",
",",
"is",
"one",
"of",
"the",
"plausible",
"methods",
"to",
"improve",
"low",
"-",
"resource",
"named",
"entity",
"recognition",
"(",
"ner",
")",
",",
"where",
... |
ACL | Learning to Ask More: Semi-Autoregressive Sequential Question Generation under Dual-Graph Interaction | Traditional Question Generation (TQG) aims to generate a question given an input passage and an answer. When there is a sequence of answers, we can perform Sequential Question Generation (SQG) to produce a series of interconnected questions. Since the frequently occurred information omission and coreference between que... | 0446ac288dccfb381b1e0c95368677d6 | 2,020 | [
"traditional question generation ( tqg ) aims to generate a question given an input passage and an answer .",
"when there is a sequence of answers , we can perform sequential question generation ( sqg ) to produce a series of interconnected questions .",
"since the frequently occurred information omission and c... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "traditional question generation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"traditional",
"question",
"generation"
],
"offsets": [
0,
1... | [
"traditional",
"question",
"generation",
"(",
"tqg",
")",
"aims",
"to",
"generate",
"a",
"question",
"given",
"an",
"input",
"passage",
"and",
"an",
"answer",
".",
"when",
"there",
"is",
"a",
"sequence",
"of",
"answers",
",",
"we",
"can",
"perform",
"seque... |
ACL | Towards a more Robust Evaluation for Conversational Question Answering | With the explosion of chatbot applications, Conversational Question Answering (CQA) has generated a lot of interest in recent years. Among proposals, reading comprehension models which take advantage of the conversation history (previous QA) seem to answer better than those which only consider the current question. Nev... | 4b9e192929bda2838dbed59ca724673b | 2,021 | [
"with the explosion of chatbot applications , conversational question answering ( cqa ) has generated a lot of interest in recent years .",
"among proposals , reading comprehension models which take advantage of the conversation history ( previous qa ) seem to answer better than those which only consider the curr... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "conversational question answering",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"conversational",
"question",
"answering"
],
"offsets": [
7,
... | [
"with",
"the",
"explosion",
"of",
"chatbot",
"applications",
",",
"conversational",
"question",
"answering",
"(",
"cqa",
")",
"has",
"generated",
"a",
"lot",
"of",
"interest",
"in",
"recent",
"years",
".",
"among",
"proposals",
",",
"reading",
"comprehension",
... |
ACL | Simultaneous Translation Policies: From Fixed to Adaptive | Adaptive policies are better than fixed policies for simultaneous translation, since they can flexibly balance the tradeoff between translation quality and latency based on the current context information. But previous methods on obtaining adaptive policies either rely on complicated training process, or underperform s... | fc389d4f980ba7d4fa0fa486cc168c82 | 2,020 | [
"adaptive policies are better than fixed policies for simultaneous translation , since they can flexibly balance the tradeoff between translation quality and latency based on the current context information .",
"but previous methods on obtaining adaptive policies either rely on complicated training process , or u... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "simultaneous translation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"simultaneous",
"translation"
],
"offsets": [
8,
9
]
}
],
... | [
"adaptive",
"policies",
"are",
"better",
"than",
"fixed",
"policies",
"for",
"simultaneous",
"translation",
",",
"since",
"they",
"can",
"flexibly",
"balance",
"the",
"tradeoff",
"between",
"translation",
"quality",
"and",
"latency",
"based",
"on",
"the",
"current... |
ACL | Upstream Mitigation Is Not All You Need: Testing the Bias Transfer Hypothesis in Pre-Trained Language Models | A few large, homogenous, pre-trained models undergird many machine learning systems — and often, these models contain harmful stereotypes learned from the internet. We investigate the bias transfer hypothesis: the theory that social biases (such as stereotypes) internalized by large language models during pre-training ... | 4acb41805e96a7b71e0b189f4934c88c | 2,022 | [
"a few large , homogenous , pre - trained models undergird many machine learning systems — and often , these models contain harmful stereotypes learned from the internet .",
"we investigate the bias transfer hypothesis : the theory that social biases ( such as stereotypes ) internalized by large language models d... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "harmful stereotypes",
"nugget_type": "WEA",
"argument_type": "Fault",
"tokens": [
"harmful",
"stereotypes"
],
"offsets": [
22,
23
]
},
{
... | [
"a",
"few",
"large",
",",
"homogenous",
",",
"pre",
"-",
"trained",
"models",
"undergird",
"many",
"machine",
"learning",
"systems",
"—",
"and",
"often",
",",
"these",
"models",
"contain",
"harmful",
"stereotypes",
"learned",
"from",
"the",
"internet",
".",
... |
ACL | Sequence-to-Nuggets: Nested Entity Mention Detection via Anchor-Region Networks | Sequential labeling-based NER approaches restrict each word belonging to at most one entity mention, which will face a serious problem when recognizing nested entity mentions. In this paper, we propose to resolve this problem by modeling and leveraging the head-driven phrase structures of entity mentions, i.e., althoug... | aae04240829419f1393e7e07bfbf6606 | 2,019 | [
"sequential labeling - based ner approaches restrict each word belonging to at most one entity mention , which will face a serious problem when recognizing nested entity mentions .",
"in this paper , we propose to resolve this problem by modeling and leveraging the head - driven phrase structures of entity mentio... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
33
]
},
{
"text": "resolve",
"nugget_type": "E-PUR",
... | [
"sequential",
"labeling",
"-",
"based",
"ner",
"approaches",
"restrict",
"each",
"word",
"belonging",
"to",
"at",
"most",
"one",
"entity",
"mention",
",",
"which",
"will",
"face",
"a",
"serious",
"problem",
"when",
"recognizing",
"nested",
"entity",
"mentions",
... |
ACL | Scientific Credibility of Machine Translation Research: A Meta-Evaluation of 769 Papers | This paper presents the first large-scale meta-evaluation of machine translation (MT). We annotated MT evaluations conducted in 769 research papers published from 2010 to 2020. Our study shows that practices for automatic MT evaluation have dramatically changed during the past decade and follow concerning trends. An in... | 645af78e19918dda510ac96acffef7df | 2,021 | [
"this paper presents the first large - scale meta - evaluation of machine translation ( mt ) .",
"we annotated mt evaluations conducted in 769 research papers published from 2010 to 2020 .",
"our study shows that practices for automatic mt evaluation have dramatically changed during the past decade and follow c... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "meta - evaluation of mt",
"nugget_type": "TAK",
"argument_type": "Content",
"tokens": [
"meta",
"-",
"evaluation",
"of",
"mt"
],
"offsets": [
8,
... | [
"this",
"paper",
"presents",
"the",
"first",
"large",
"-",
"scale",
"meta",
"-",
"evaluation",
"of",
"machine",
"translation",
"(",
"mt",
")",
".",
"we",
"annotated",
"mt",
"evaluations",
"conducted",
"in",
"769",
"research",
"papers",
"published",
"from",
"... |
ACL | Pretraining Methods for Dialog Context Representation Learning | This paper examines various unsupervised pretraining objectives for learning dialog context representations. Two novel methods of pretraining dialog context encoders are proposed, and a total of four methods are examined. Each pretraining objective is fine-tuned and evaluated on a set of downstream dialog tasks using t... | 0a56e6c958ddfbf6b8b287f4ffbba6ed | 2,019 | [
"this paper examines various unsupervised pretraining objectives for learning dialog context representations .",
"two novel methods of pretraining dialog context encoders are proposed , and a total of four methods are examined .",
"each pretraining objective is fine - tuned and evaluated on a set of downstream ... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "two novel methods of pretraining dialog context encoders",
"nugget_type": "APP",
"argument_type": "Content",
"tokens": [
"two",
"novel",
"methods",
"of",
"pretraining",
... | [
"this",
"paper",
"examines",
"various",
"unsupervised",
"pretraining",
"objectives",
"for",
"learning",
"dialog",
"context",
"representations",
".",
"two",
"novel",
"methods",
"of",
"pretraining",
"dialog",
"context",
"encoders",
"are",
"proposed",
",",
"and",
"a",
... |
ACL | EmailSum: Abstractive Email Thread Summarization | Recent years have brought about an interest in the challenging task of summarizing conversation threads (meetings, online discussions, etc.). Such summaries help analysis of the long text to quickly catch up with the decisions made and thus improve our work or communication efficiency. To spur research in thread summar... | 74e6e6d58dc0bf0f2e7ce6a0be006692 | 2,021 | [
"recent years have brought about an interest in the challenging task of summarizing conversation threads ( meetings , online discussions , etc . ) .",
"such summaries help analysis of the long text to quickly catch up with the decisions made and thus improve our work or communication efficiency .",
"to spur res... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "summarizing conversation threads",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"summarizing",
"conversation",
"threads"
],
"offsets": [
12,
... | [
"recent",
"years",
"have",
"brought",
"about",
"an",
"interest",
"in",
"the",
"challenging",
"task",
"of",
"summarizing",
"conversation",
"threads",
"(",
"meetings",
",",
"online",
"discussions",
",",
"etc",
".",
")",
".",
"such",
"summaries",
"help",
"analysi... |
ACL | Exploiting Explicit Paths for Multi-hop Reading Comprehension | We propose a novel, path-based reasoning approach for the multi-hop reading comprehension task where a system needs to combine facts from multiple passages to answer a question. Although inspired by multi-hop reasoning over knowledge graphs, our proposed approach operates directly over unstructured text. It generates p... | fbe5799a5b888f00d642e761ec4971a8 | 2,019 | [
"we propose a novel , path - based reasoning approach for the multi - hop reading comprehension task where a system needs to combine facts from multiple passages to answer a question .",
"although inspired by multi - hop reasoning over knowledge graphs , our proposed approach operates directly over unstructured t... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "path - based reasoning approach",
"nugget_t... | [
"we",
"propose",
"a",
"novel",
",",
"path",
"-",
"based",
"reasoning",
"approach",
"for",
"the",
"multi",
"-",
"hop",
"reading",
"comprehension",
"task",
"where",
"a",
"system",
"needs",
"to",
"combine",
"facts",
"from",
"multiple",
"passages",
"to",
"answer... |
ACL | Self-Training Sampling with Monolingual Data Uncertainty for Neural Machine Translation | Self-training has proven effective for improving NMT performance by augmenting model training with synthetic parallel data. The common practice is to construct synthetic data based on a randomly sampled subset of large-scale monolingual data, which we empirically show is sub-optimal. In this work, we propose to improve... | f8c290e608da9340b5353f2f0b0c632f | 2,021 | [
"self - training has proven effective for improving nmt performance by augmenting model training with synthetic parallel data .",
"the common practice is to construct synthetic data based on a randomly sampled subset of large - scale monolingual data , which we empirically show is sub - optimal .",
"in this wor... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "self - training",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"self",
"-",
"training"
],
"offsets": [
0,
1,
2
]
}
... | [
"self",
"-",
"training",
"has",
"proven",
"effective",
"for",
"improving",
"nmt",
"performance",
"by",
"augmenting",
"model",
"training",
"with",
"synthetic",
"parallel",
"data",
".",
"the",
"common",
"practice",
"is",
"to",
"construct",
"synthetic",
"data",
"ba... |
ACL | It’s Morphin’ Time! Combating Linguistic Discrimination with Inflectional Perturbations | Training on only perfect Standard English corpora predisposes pre-trained neural networks to discriminate against minorities from non-standard linguistic backgrounds (e.g., African American Vernacular English, Colloquial Singapore English, etc.). We perturb the inflectional morphology of words to craft plausible and se... | 3c6f531b0238e7915cfa5c7df13d7f66 | 2,020 | [
"training on only perfect standard english corpora predisposes pre - trained neural networks to discriminate against minorities from non - standard linguistic backgrounds ( e . g . , african american vernacular english , colloquial singapore english , etc . ) .",
"we perturb the inflectional morphology of words t... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "discriminate",
"nugget_type": "WEA",
"argument_type": "Fault",
"tokens": [
"discriminate"
],
"offsets": [
14
]
},
{
"text": "pre - trained neural networks",
... | [
"training",
"on",
"only",
"perfect",
"standard",
"english",
"corpora",
"predisposes",
"pre",
"-",
"trained",
"neural",
"networks",
"to",
"discriminate",
"against",
"minorities",
"from",
"non",
"-",
"standard",
"linguistic",
"backgrounds",
"(",
"e",
".",
"g",
"."... |
ACL | Sequence-to-Sequence Knowledge Graph Completion and Question Answering | Knowledge graph embedding (KGE) models represent each entity and relation of a knowledge graph (KG) with low-dimensional embedding vectors. These methods have recently been applied to KG link prediction and question answering over incomplete KGs (KGQA). KGEs typically create an embedding for each entity in the graph, w... | e87e5f6333a209fc7fdeaf34ed60f562 | 2,022 | [
"knowledge graph embedding ( kge ) models represent each entity and relation of a knowledge graph ( kg ) with low - dimensional embedding vectors .",
"these methods have recently been applied to kg link prediction and question answering over incomplete kgs ( kgqa ) .",
"kges typically create an embedding for ea... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "atomic entity representations",
"nugget_type": "FEA",
"argument_type": "Concern",
"tokens": [
"atomic",
"entity",
"representations"
],
"offsets": [
78,
79,... | [
"knowledge",
"graph",
"embedding",
"(",
"kge",
")",
"models",
"represent",
"each",
"entity",
"and",
"relation",
"of",
"a",
"knowledge",
"graph",
"(",
"kg",
")",
"with",
"low",
"-",
"dimensional",
"embedding",
"vectors",
".",
"these",
"methods",
"have",
"rece... |
ACL | AutoML Strategy Based on Grammatical Evolution: A Case Study about Knowledge Discovery from Text | The process of extracting knowledge from natural language text poses a complex problem that requires both a combination of machine learning techniques and proper feature selection. Recent advances in Automatic Machine Learning (AutoML) provide effective tools to explore large sets of algorithms, hyper-parameters and fe... | 3bb95c02b1dd9a7555b1b1795071d8b2 | 2,019 | [
"the process of extracting knowledge from natural language text poses a complex problem that requires both a combination of machine learning techniques and proper feature selection .",
"recent advances in automatic machine learning ( automl ) provide effective tools to explore large sets of algorithms , hyper - p... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "extracting knowledge from natural language text",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"extracting",
"knowledge",
"from",
"natural",
"language",
... | [
"the",
"process",
"of",
"extracting",
"knowledge",
"from",
"natural",
"language",
"text",
"poses",
"a",
"complex",
"problem",
"that",
"requires",
"both",
"a",
"combination",
"of",
"machine",
"learning",
"techniques",
"and",
"proper",
"feature",
"selection",
".",
... |
ACL | Robust Representation Learning of Biomedical Names | Biomedical concepts are often mentioned in medical documents under different name variations (synonyms). This mismatch between surface forms is problematic, resulting in difficulties pertaining to learning effective representations. Consequently, this has tremendous implications such as rendering downstream application... | 9935dfa073545cefa8787d1d2a28a43f | 2,019 | [
"biomedical concepts are often mentioned in medical documents under different name variations ( synonyms ) .",
"this mismatch between surface forms is problematic , resulting in difficulties pertaining to learning effective representations .",
"consequently , this has tremendous implications such as rendering d... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "biomedical concepts",
"nugget_type": "FEA",
"argument_type": "Target",
"tokens": [
"biomedical",
"concepts"
],
"offsets": [
0,
1
]
}
],
"trigge... | [
"biomedical",
"concepts",
"are",
"often",
"mentioned",
"in",
"medical",
"documents",
"under",
"different",
"name",
"variations",
"(",
"synonyms",
")",
".",
"this",
"mismatch",
"between",
"surface",
"forms",
"is",
"problematic",
",",
"resulting",
"in",
"difficultie... |
ACL | Clinical Reading Comprehension: A Thorough Analysis of the emrQA Dataset | Machine reading comprehension has made great progress in recent years owing to large-scale annotated datasets. In the clinical domain, however, creating such datasets is quite difficult due to the domain expertise required for annotation. Recently, Pampari et al. (EMNLP’18) tackled this issue by using expert-annotated ... | bf62e7f4e7283a9bb80af42aa93a1bc8 | 2,020 | [
"machine reading comprehension has made great progress in recent years owing to large - scale annotated datasets .",
"in the clinical domain , however , creating such datasets is quite difficult due to the domain expertise required for annotation .",
"recently , pampari et al . ( emnlp ’ 18 ) tackled this issue... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "machine reading comprehension",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"machine",
"reading",
"comprehension"
],
"offsets": [
0,
1,
... | [
"machine",
"reading",
"comprehension",
"has",
"made",
"great",
"progress",
"in",
"recent",
"years",
"owing",
"to",
"large",
"-",
"scale",
"annotated",
"datasets",
".",
"in",
"the",
"clinical",
"domain",
",",
"however",
",",
"creating",
"such",
"datasets",
"is"... |
ACL | A Joint Model for Document Segmentation and Segment Labeling | Text segmentation aims to uncover latent structure by dividing text from a document into coherent sections. Where previous work on text segmentation considers the tasks of document segmentation and segment labeling separately, we show that the tasks contain complementary information and are best addressed jointly. We i... | e88f3170385684ba0ad858db7a465ddc | 2,020 | [
"text segmentation aims to uncover latent structure by dividing text from a document into coherent sections .",
"where previous work on text segmentation considers the tasks of document segmentation and segment labeling separately , we show that the tasks contain complementary information and are best addressed j... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "text segmentation",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"text",
"segmentation"
],
"offsets": [
0,
1
]
}
],
"trigger": ... | [
"text",
"segmentation",
"aims",
"to",
"uncover",
"latent",
"structure",
"by",
"dividing",
"text",
"from",
"a",
"document",
"into",
"coherent",
"sections",
".",
"where",
"previous",
"work",
"on",
"text",
"segmentation",
"considers",
"the",
"tasks",
"of",
"documen... |
ACL | CH-SIMS: A Chinese Multimodal Sentiment Analysis Dataset with Fine-grained Annotation of Modality | Previous studies in multimodal sentiment analysis have used limited datasets, which only contain unified multimodal annotations. However, the unified annotations do not always reflect the independent sentiment of single modalities and limit the model to capture the difference between modalities. In this paper, we intro... | 49c81db78fcc08b16e288858ded51e14 | 2,020 | [
"previous studies in multimodal sentiment analysis have used limited datasets , which only contain unified multimodal annotations .",
"however , the unified annotations do not always reflect the independent sentiment of single modalities and limit the model to capture the difference between modalities .",
"in t... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "previous studies in multimodal sentiment analysis",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"previous",
"studies",
"in",
"multimodal",
"sentiment",
... | [
"previous",
"studies",
"in",
"multimodal",
"sentiment",
"analysis",
"have",
"used",
"limited",
"datasets",
",",
"which",
"only",
"contain",
"unified",
"multimodal",
"annotations",
".",
"however",
",",
"the",
"unified",
"annotations",
"do",
"not",
"always",
"reflec... |
ACL | Asking the Crowd: Question Analysis, Evaluation and Generation for Open Discussion on Online Forums | Teaching machines to ask questions is an important yet challenging task. Most prior work focused on generating questions with fixed answers. As contents are highly limited by given answers, these questions are often not worth discussing. In this paper, we take the first step on teaching machines to ask open-answered qu... | a48fdd6e3eb15ca3128b91d41e1f86ee | 2,019 | [
"teaching machines to ask questions is an important yet challenging task .",
"most prior work focused on generating questions with fixed answers .",
"as contents are highly limited by given answers , these questions are often not worth discussing .",
"in this paper , we take the first step on teaching machine... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "teaching machines to ask questions",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"teaching",
"machines",
"to",
"ask",
"questions"
],
"offs... | [
"teaching",
"machines",
"to",
"ask",
"questions",
"is",
"an",
"important",
"yet",
"challenging",
"task",
".",
"most",
"prior",
"work",
"focused",
"on",
"generating",
"questions",
"with",
"fixed",
"answers",
".",
"as",
"contents",
"are",
"highly",
"limited",
"b... |
ACL | Zero-shot Fact Verification by Claim Generation | Neural models for automated fact verification have achieved promising results thanks to the availability of large, human-annotated datasets. However, for each new domain that requires fact verification, creating a dataset by manually writing claims and linking them to their supporting evidence is expensive. We develop ... | ac8d7d3d825980348e1c79c507e1134c | 2,021 | [
"neural models for automated fact verification have achieved promising results thanks to the availability of large , human - annotated datasets .",
"however , for each new domain that requires fact verification , creating a dataset by manually writing claims and linking them to their supporting evidence is expens... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural models",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"neural",
"models"
],
"offsets": [
0,
1
]
}
],
"trigger": {
... | [
"neural",
"models",
"for",
"automated",
"fact",
"verification",
"have",
"achieved",
"promising",
"results",
"thanks",
"to",
"the",
"availability",
"of",
"large",
",",
"human",
"-",
"annotated",
"datasets",
".",
"however",
",",
"for",
"each",
"new",
"domain",
"... |
ACL | Exploring Content Selection in Summarization of Novel Chapters | We present a new summarization task, generating summaries of novel chapters using summary/chapter pairs from online study guides. This is a harder task than the news summarization task, given the chapter length as well as the extreme paraphrasing and generalization found in the summaries. We focus on extractive summari... | efc840470773472c7fa580757d24ecac | 2,020 | [
"we present a new summarization task , generating summaries of novel chapters using summary / chapter pairs from online study guides .",
"this is a harder task than the news summarization task , given the chapter length as well as the extreme paraphrasing and generalization found in the summaries .",
"we focus ... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "summarization task",
"nugget_type": "TAK",
... | [
"we",
"present",
"a",
"new",
"summarization",
"task",
",",
"generating",
"summaries",
"of",
"novel",
"chapters",
"using",
"summary",
"/",
"chapter",
"pairs",
"from",
"online",
"study",
"guides",
".",
"this",
"is",
"a",
"harder",
"task",
"than",
"the",
"news"... |
ACL | Deep Inductive Logic Reasoning for Multi-Hop Reading Comprehension | Multi-hop reading comprehension requires an ability to reason across multiple documents. On the one hand, deep learning approaches only implicitly encode query-related information into distributed embeddings which fail to uncover the discrete relational reasoning process to infer the correct answer. On the other hand, ... | 4e0eab416480a82a0ffe8444784c6838 | 2,022 | [
"multi - hop reading comprehension requires an ability to reason across multiple documents .",
"on the one hand , deep learning approaches only implicitly encode query - related information into distributed embeddings which fail to uncover the discrete relational reasoning process to infer the correct answer .",
... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "multi - hop reading comprehension",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"multi",
"-",
"hop",
"reading",
"comprehension"
],
"offset... | [
"multi",
"-",
"hop",
"reading",
"comprehension",
"requires",
"an",
"ability",
"to",
"reason",
"across",
"multiple",
"documents",
".",
"on",
"the",
"one",
"hand",
",",
"deep",
"learning",
"approaches",
"only",
"implicitly",
"encode",
"query",
"-",
"related",
"i... |
ACL | CluHTM - Semantic Hierarchical Topic Modeling based on CluWords | Hierarchical Topic modeling (HTM) exploits latent topics and relationships among them as a powerful tool for data analysis and exploration. Despite advantages over traditional topic modeling, HTM poses its own challenges, such as (1) topic incoherence, (2) unreasonable (hierarchical) structure, and (3) issues related t... | e234a0763f7f413697f2b439dcb3b6d9 | 2,020 | [
"hierarchical topic modeling ( htm ) exploits latent topics and relationships among them as a powerful tool for data analysis and exploration .",
"despite advantages over traditional topic modeling , htm poses its own challenges , such as ( 1 ) topic incoherence , ( 2 ) unreasonable ( hierarchical ) structure , a... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "data analysis",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"data",
"analysis"
],
"offsets": [
18,
19
]
},
{
"text": "da... | [
"hierarchical",
"topic",
"modeling",
"(",
"htm",
")",
"exploits",
"latent",
"topics",
"and",
"relationships",
"among",
"them",
"as",
"a",
"powerful",
"tool",
"for",
"data",
"analysis",
"and",
"exploration",
".",
"despite",
"advantages",
"over",
"traditional",
"t... |
ACL | Improving Transformer Models by Reordering their Sublayers | Multilayer transformer networks consist of interleaved self-attention and feedforward sublayers. Could ordering the sublayers in a different pattern lead to better performance? We generate randomly ordered transformers and train them with the language modeling objective. We observe that some of these models are able to... | 18dc31a00f84f4eb0dcb87f98bfbcf46 | 2,020 | [
"multilayer transformer networks consist of interleaved self - attention and feedforward sublayers .",
"could ordering the sublayers in a different pattern lead to better performance ?",
"we generate randomly ordered transformers and train them with the language modeling objective .",
"we observe that some of... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "multilayer transformer networks",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"multilayer",
"transformer",
"networks"
],
"offsets": [
0,
1... | [
"multilayer",
"transformer",
"networks",
"consist",
"of",
"interleaved",
"self",
"-",
"attention",
"and",
"feedforward",
"sublayers",
".",
"could",
"ordering",
"the",
"sublayers",
"in",
"a",
"different",
"pattern",
"lead",
"to",
"better",
"performance",
"?",
"we",... |
ACL | Premise-based Multimodal Reasoning: Conditional Inference on Joint Textual and Visual Clues | It is a common practice for recent works in vision language cross-modal reasoning to adopt a binary or multi-choice classification formulation taking as input a set of source image(s) and textual query. In this work, we take a sober look at such an “unconditional” formulation in the sense that no prior knowledge is spe... | 2e90eba59ff6a075c13fddfdbfa2ce29 | 2,022 | [
"it is a common practice for recent works in vision language cross - modal reasoning to adopt a binary or multi - choice classification formulation taking as input a set of source image ( s ) and textual query .",
"in this work , we take a sober look at such an “ unconditional ” formulation in the sense that no p... | [
{
"event_type": "RWS",
"arguments": [
{
"text": "set of source image",
"nugget_type": "FEA",
"argument_type": "TriedComponent",
"tokens": [
"set",
"of",
"source",
"image"
],
"offsets": [
29,
30,... | [
"it",
"is",
"a",
"common",
"practice",
"for",
"recent",
"works",
"in",
"vision",
"language",
"cross",
"-",
"modal",
"reasoning",
"to",
"adopt",
"a",
"binary",
"or",
"multi",
"-",
"choice",
"classification",
"formulation",
"taking",
"as",
"input",
"a",
"set",... |
ACL | Document-level Event Extraction via Heterogeneous Graph-based Interaction Model with a Tracker | Document-level event extraction aims to recognize event information from a whole piece of article. Existing methods are not effective due to two challenges of this task: a) the target event arguments are scattered across sentences; b) the correlation among events in a document is non-trivial to model. In this paper, we... | 5c3127cc22f2a20913522d84545925da | 2,021 | [
"document - level event extraction aims to recognize event information from a whole piece of article .",
"existing methods are not effective due to two challenges of this task : a ) the target event arguments are scattered across sentences ; b ) the correlation among events in a document is non - trivial to model... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "document - level event extraction",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"document",
"-",
"level",
"event",
"extraction"
],
"offset... | [
"document",
"-",
"level",
"event",
"extraction",
"aims",
"to",
"recognize",
"event",
"information",
"from",
"a",
"whole",
"piece",
"of",
"article",
".",
"existing",
"methods",
"are",
"not",
"effective",
"due",
"to",
"two",
"challenges",
"of",
"this",
"task",
... |
ACL | Best of Both Worlds: Making High Accuracy Non-incremental Transformer-based Disfluency Detection Incremental | While Transformer-based text classifiers pre-trained on large volumes of text have yielded significant improvements on a wide range of computational linguistics tasks, their implementations have been unsuitable for live incremental processing thus far, operating only on the level of complete sentence inputs. We address... | 5d10254e2bc14ad49800fee42cd6b4ad | 2,021 | [
"while transformer - based text classifiers pre - trained on large volumes of text have yielded significant improvements on a wide range of computational linguistics tasks , their implementations have been unsuitable for live incremental processing thus far , operating only on the level of complete sentence inputs ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "transformer - based text classifiers pre - trained",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"transformer",
"-",
"based",
"text",
"classifiers",
... | [
"while",
"transformer",
"-",
"based",
"text",
"classifiers",
"pre",
"-",
"trained",
"on",
"large",
"volumes",
"of",
"text",
"have",
"yielded",
"significant",
"improvements",
"on",
"a",
"wide",
"range",
"of",
"computational",
"linguistics",
"tasks",
",",
"their",... |
ACL | Systematic Inequalities in Language Technology Performance across the World’s Languages | Natural language processing (NLP) systems have become a central technology in communication, education, medicine, artificial intelligence, and many other domains of research and development. While the performance of NLP methods has grown enormously over the last decade, this progress has been restricted to a minuscule ... | e4136414d8c4a326a258aa39857e1eff | 2022 | [
"natural language processing ( nlp ) systems have become a central technology in communication , education , medicine , artificial intelligence , and many other domains of research and development .",
"while the performance of nlp methods has grown enormously over the last decade , this progress has been restrict... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "natural language processing systems",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"natural",
"language",
"processing",
"systems"
],
"offsets": [
... | [
"natural",
"language",
"processing",
"(",
"nlp",
")",
"systems",
"have",
"become",
"a",
"central",
"technology",
"in",
"communication",
",",
"education",
",",
"medicine",
",",
"artificial",
"intelligence",
",",
"and",
"many",
"other",
"domains",
"of",
"research"... |
ACL | Semantically Conditioned Dialog Response Generation via Hierarchical Disentangled Self-Attention | Semantically controlled neural response generation on limited-domain has achieved great performance. However, moving towards multi-domain large-scale scenarios are shown to be difficult because the possible combinations of semantic inputs grow exponentially with the number of domains. To alleviate such scalability issu... | 9811551d3fc21fd889aebbe7487872bf | 2019 | [
"semantically controlled neural response generation on limited - domain has achieved great performance .",
"however , moving towards multi - domain large - scale scenarios are shown to be difficult because the possible combinations of semantic inputs grow exponentially with the number of domains .",
"to allevia... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "semantically controlled neural response generation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"semantically",
"controlled",
"neural",
"response",
"gene... | [
"semantically",
"controlled",
"neural",
"response",
"generation",
"on",
"limited",
"-",
"domain",
"has",
"achieved",
"great",
"performance",
".",
"however",
",",
"moving",
"towards",
"multi",
"-",
"domain",
"large",
"-",
"scale",
"scenarios",
"are",
"shown",
"to... |
ACL | Vision-Language Pre-Training for Multimodal Aspect-Based Sentiment Analysis | As an important task in sentiment analysis, Multimodal Aspect-Based Sentiment Analysis (MABSA) has attracted increasing attention inrecent years. However, previous approaches either (i) use separately pre-trained visual and textual models, which ignore the crossmodalalignment or (ii) use vision-language models pre-trai... | 24da7bba2d6c9a0e969fad15a99a6a95 | 2022 | [
"as an important task in sentiment analysis , multimodal aspect - based sentiment analysis ( mabsa ) has attracted increasing attention inrecent years .",
"however , previous approaches either ( i ) use separately pre - trained visual and textual models , which ignore the crossmodalalignment or ( ii ) use vision ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "multimodal aspect - based sentiment analysis",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"multimodal",
"aspect",
"-",
"based",
"sentiment",
"a... | [
"as",
"an",
"important",
"task",
"in",
"sentiment",
"analysis",
",",
"multimodal",
"aspect",
"-",
"based",
"sentiment",
"analysis",
"(",
"mabsa",
")",
"has",
"attracted",
"increasing",
"attention",
"inrecent",
"years",
".",
"however",
",",
"previous",
"approache... |
ACL | Challenges in Information-Seeking QA: Unanswerable Questions and Paragraph Retrieval | Recent pretrained language models “solved” many reading comprehension benchmarks, where questions are written with access to the evidence document. However, datasets containing information-seeking queries where evidence documents are provided after the queries are written independently remain challenging. We analyze wh... | 92f3a74cf3d1d3b587674ddf8b158820 | 2021 | [
"recent pretrained language models “ solved ” many reading comprehension benchmarks , where questions are written with access to the evidence document .",
"however , datasets containing information - seeking queries where evidence documents are provided after the queries are written independently remain challengi... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "pretrained language models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"pretrained",
"language",
"models"
],
"offsets": [
1,
2,
... | [
"recent",
"pretrained",
"language",
"models",
"“",
"solved",
"”",
"many",
"reading",
"comprehension",
"benchmarks",
",",
"where",
"questions",
"are",
"written",
"with",
"access",
"to",
"the",
"evidence",
"document",
".",
"however",
",",
"datasets",
"containing",
... |
ACL | HighRES: Highlight-based Reference-less Evaluation of Summarization | There has been substantial progress in summarization research enabled by the availability of novel, often large-scale, datasets and recent advances on neural network-based approaches. However, manual evaluation of the system generated summaries is inconsistent due to the difficulty the task poses to human non-expert re... | bdeeb6b17b6f40e53d1fb062de9ef198 | 2019 | [
"there has been substantial progress in summarization research enabled by the availability of novel , often large - scale , datasets and recent advances on neural network - based approaches .",
"however , manual evaluation of the system generated summaries is inconsistent due to the difficulty the task poses to h... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "summarization research",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"summarization",
"research"
],
"offsets": [
6,
7
]
}
],
"... | [
"there",
"has",
"been",
"substantial",
"progress",
"in",
"summarization",
"research",
"enabled",
"by",
"the",
"availability",
"of",
"novel",
",",
"often",
"large",
"-",
"scale",
",",
"datasets",
"and",
"recent",
"advances",
"on",
"neural",
"network",
"-",
"bas... |
ACL | Principled Paraphrase Generation with Parallel Corpora | Round-trip Machine Translation (MT) is a popular choice for paraphrase generation, which leverages readily available parallel corpora for supervision. In this paper, we formalize the implicit similarity function induced by this approach, and show that it is susceptible to non-paraphrase pairs sharing a single ambiguous... | 43a12e4477493b80c8dc935b7b7a798c | 2022 | [
"round - trip machine translation ( mt ) is a popular choice for paraphrase generation , which leverages readily available parallel corpora for supervision .",
"in this paper , we formalize the implicit similarity function induced by this approach , and show that it is susceptible to non - paraphrase pairs sharin... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "paraphrase generation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"paraphrase",
"generation"
],
"offsets": [
13,
14
]
}
],
"... | [
"round",
"-",
"trip",
"machine",
"translation",
"(",
"mt",
")",
"is",
"a",
"popular",
"choice",
"for",
"paraphrase",
"generation",
",",
"which",
"leverages",
"readily",
"available",
"parallel",
"corpora",
"for",
"supervision",
".",
"in",
"this",
"paper",
",",
... |
ACL | Overlap-based Vocabulary Generation Improves Cross-lingual Transfer Among Related Languages | Pre-trained multilingual language models such as mBERT and XLM-R have demonstrated great potential for zero-shot cross-lingual transfer to low web-resource languages (LRL). However, due to limited model capacity, the large difference in the sizes of available monolingual corpora between high web-resource languages (HRL... | b3470e8c0a9ee5d9c9b26972e95bedd2 | 2022 | [
"pre - trained multilingual language models such as mbert and xlm - r have demonstrated great potential for zero - shot cross - lingual transfer to low web - resource languages ( lrl ) .",
"however , due to limited model capacity , the large difference in the sizes of available monolingual corpora between high we... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "zero - shot cross - lingual transfer",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"zero",
"-",
"shot",
"cross",
"-",
"lingual",
"tran... | [
"pre",
"-",
"trained",
"multilingual",
"language",
"models",
"such",
"as",
"mbert",
"and",
"xlm",
"-",
"r",
"have",
"demonstrated",
"great",
"potential",
"for",
"zero",
"-",
"shot",
"cross",
"-",
"lingual",
"transfer",
"to",
"low",
"web",
"-",
"resource",
... |
ACL | Should All Cross-Lingual Embeddings Speak English? | Most of recent work in cross-lingual word embeddings is severely Anglocentric. The vast majority of lexicon induction evaluation dictionaries are between English and another language, and the English embedding space is selected by default as the hub when learning in a multilingual setting. With this work, however, we c... | 615210596b15db1bbe4cf37125e60ec7 | 2020 | [
"most of recent work in cross - lingual word embeddings is severely anglocentric .",
"the vast majority of lexicon induction evaluation dictionaries are between english and another language , and the english embedding space is selected by default as the hub when learning in a multilingual setting .",
"with this... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "cross - lingual word embeddings",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"cross",
"-",
"lingual",
"word",
"embeddings"
],
"offsets": ... | [
"most",
"of",
"recent",
"work",
"in",
"cross",
"-",
"lingual",
"word",
"embeddings",
"is",
"severely",
"anglocentric",
".",
"the",
"vast",
"majority",
"of",
"lexicon",
"induction",
"evaluation",
"dictionaries",
"are",
"between",
"english",
"and",
"another",
"lan... |
ACL | Improving Multi-hop Question Answering over Knowledge Graphs using Knowledge Base Embeddings | Knowledge Graphs (KG) are multi-relational graphs consisting of entities as nodes and relations among them as typed edges. Goal of the Question Answering over KG (KGQA) task is to answer natural language queries posed over the KG. Multi-hop KGQA requires reasoning over multiple edges of the KG to arrive at the right an... | 12ec71d2f94b77f40d628291aa9e4317 | 2020 | [
"knowledge graphs ( kg ) are multi - relational graphs consisting of entities as nodes and relations among them as typed edges .",
"goal of the question answering over kg ( kgqa ) task is to answer natural language queries posed over the kg .",
"multi - hop kgqa requires reasoning over multiple edges of the kg ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "knowledge graphs",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"knowledge",
"graphs"
],
"offsets": [
0,
1
]
}
],
"trigger": {
... | [
"knowledge",
"graphs",
"(",
"kg",
")",
"are",
"multi",
"-",
"relational",
"graphs",
"consisting",
"of",
"entities",
"as",
"nodes",
"and",
"relations",
"among",
"them",
"as",
"typed",
"edges",
".",
"goal",
"of",
"the",
"question",
"answering",
"over",
"kg",
... |
ACL | Graph-based Dependency Parsing with Graph Neural Networks | We investigate the problem of efficiently incorporating high-order features into neural graph-based dependency parsing. Instead of explicitly extracting high-order features from intermediate parse trees, we develop a more powerful dependency tree node representation which captures high-order information concisely and e... | 02c07a52d7bc2ba7e8a287900371e477 | 2019 | [
"we investigate the problem of efficiently incorporating high - order features into neural graph - based dependency parsing .",
"instead of explicitly extracting high - order features from intermediate parse trees , we develop a more powerful dependency tree node representation which captures high - order informa... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "incorporating high - order features into neural graph - based dependency parsing",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"incorporating",
"high",
"-",
"order"... | [
"we",
"investigate",
"the",
"problem",
"of",
"efficiently",
"incorporating",
"high",
"-",
"order",
"features",
"into",
"neural",
"graph",
"-",
"based",
"dependency",
"parsing",
".",
"instead",
"of",
"explicitly",
"extracting",
"high",
"-",
"order",
"features",
"... |
ACL | HellaSwag: Can a Machine Really Finish Your Sentence? | Recent work by Zellers et al. (2018) introduced a new task of commonsense natural language inference: given an event description such as “A woman sits at a piano,” a machine must select the most likely followup: “She sets her fingers on the keys.” With the introduction of BERT, near human-level performance was reached.... | 48bcb45e64d77c664a7af1828f0f4fd9 | 2019 | [
"recent work by zellers et al . ( 2018 ) introduced a new task of commonsense natural language inference : given an event description such as “ a woman sits at a piano , ” a machine must select the most likely followup : “ she sets her fingers on the keys . ”",
"with the introduction of bert , near human - level ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "commonsense natural language inference",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"commonsense",
"natural",
"language",
"inference"
],
"offsets":... | [
"recent",
"work",
"by",
"zellers",
"et",
"al",
".",
"(",
"2018",
")",
"introduced",
"a",
"new",
"task",
"of",
"commonsense",
"natural",
"language",
"inference",
":",
"given",
"an",
"event",
"description",
"such",
"as",
"“",
"a",
"woman",
"sits",
"at",
"a... |
ACL | LogicalFactChecker: Leveraging Logical Operations for Fact Checking with Graph Module Network | Verifying the correctness of a textual statement requires not only semantic reasoning about the meaning of words, but also symbolic reasoning about logical operations like count, superlative, aggregation, etc. In this work, we propose LogicalFactChecker, a neural network approach capable of leveraging logical operation... | f1e10d1c08cb1abfb059d182f66b7492 | 2020 | [
"verifying the correctness of a textual statement requires not only semantic reasoning about the meaning of words , but also symbolic reasoning about logical operations like count , superlative , aggregation , etc .",
"in this work , we propose logicalfactchecker , a neural network approach capable of leveraging ... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
38
]
},
{
"text": "logicalfactchecker",
"nugget_type": "APP",... | [
"verifying",
"the",
"correctness",
"of",
"a",
"textual",
"statement",
"requires",
"not",
"only",
"semantic",
"reasoning",
"about",
"the",
"meaning",
"of",
"words",
",",
"but",
"also",
"symbolic",
"reasoning",
"about",
"logical",
"operations",
"like",
"count",
",... |
ACL | Unsupervised Opinion Summarization with Noising and Denoising | The supervised training of high-capacity models on large datasets containing hundreds of thousands of document-summary pairs is critical to the recent success of deep learning techniques for abstractive summarization. Unfortunately, in most domains (other than news) such training data is not available and cannot be eas... | cb00b39cc0ab8209c211a9bd26cb1f97 | 2020 | [
"the supervised training of high - capacity models on large datasets containing hundreds of thousands of document - summary pairs is critical to the recent success of deep learning techniques for abstractive summarization .",
"unfortunately , in most domains ( other than news ) such training data is not available... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "abstractive summarization",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"abstractive",
"summarization"
],
"offsets": [
31,
32
]
}
... | [
"the",
"supervised",
"training",
"of",
"high",
"-",
"capacity",
"models",
"on",
"large",
"datasets",
"containing",
"hundreds",
"of",
"thousands",
"of",
"document",
"-",
"summary",
"pairs",
"is",
"critical",
"to",
"the",
"recent",
"success",
"of",
"deep",
"lear... |
ACL | LM-BFF-MS: Improving Few-Shot Fine-tuning of Language Models based on Multiple Soft Demonstration Memory | LM-BFF (CITATION) achieves significant few-shot performance by using auto-generated prompts and adding demonstrations similar to an input example. To improve the approach of LM-BFF, this paper proposes LM-BFF-MS—better few-shot fine-tuning of language models with multiple soft demonstrations by making its further exten... | 463d0f8ba62e6df167b8ceaf0a611fcf | 2022 | [
"lm - bff ( citation ) achieves significant few - shot performance by using auto - generated prompts and adding demonstrations similar to an input example .",
"to improve the approach of lm - bff , this paper proposes lm - bff - ms — better few - shot fine - tuning of language models with multiple soft demonstrat... | [
{
"event_type": "RWS",
"arguments": [
{
"text": "auto - generated prompts",
"nugget_type": "FEA",
"argument_type": "TriedComponent",
"tokens": [
"auto",
"-",
"generated",
"prompts"
],
"offsets": [
14,
... | [
"lm",
"-",
"bff",
"(",
"citation",
")",
"achieves",
"significant",
"few",
"-",
"shot",
"performance",
"by",
"using",
"auto",
"-",
"generated",
"prompts",
"and",
"adding",
"demonstrations",
"similar",
"to",
"an",
"input",
"example",
".",
"to",
"improve",
"the... |
ACL | A Span-based Dynamic Local Attention Model for Sequential Sentence Classification | Sequential sentence classification aims to classify each sentence in the document based on the context in which sentences appear. Most existing work addresses this problem using a hierarchical sequence labeling network. However, they ignore considering the latent segment structure of the document, in which contiguous s... | 404e21edc6a4ca87a32a3fc93d06ef5d | 2021 | [
"sequential sentence classification aims to classify each sentence in the document based on the context in which sentences appear .",
"most existing work addresses this problem using a hierarchical sequence labeling network .",
"however , they ignore considering the latent segment structure of the document , in... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "sequential sentence classification",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"sequential",
"sentence",
"classification"
],
"offsets": [
0,
... | [
"sequential",
"sentence",
"classification",
"aims",
"to",
"classify",
"each",
"sentence",
"in",
"the",
"document",
"based",
"on",
"the",
"context",
"in",
"which",
"sentences",
"appear",
".",
"most",
"existing",
"work",
"addresses",
"this",
"problem",
"using",
"a... |
ACL | A Frame-based Sentence Representation for Machine Reading Comprehension | Sentence representation (SR) is the most crucial and challenging task in Machine Reading Comprehension (MRC). MRC systems typically only utilize the information contained in the sentence itself, while human beings can leverage their semantic knowledge. To bridge the gap, we proposed a novel Frame-based Sentence Represe... | 5d40f50489af5865f3b2579e46d1e56e | 2020 | [
"sentence representation ( sr ) is the most crucial and challenging task in machine reading comprehension ( mrc ) .",
"mrc systems typically only utilize the information contained in the sentence itself , while human beings can leverage their semantic knowledge .",
"to bridge the gap , we proposed a novel frame... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "sentence representation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"sentence",
"representation"
],
"offsets": [
0,
1
]
},
{
... | [
"sentence",
"representation",
"(",
"sr",
")",
"is",
"the",
"most",
"crucial",
"and",
"challenging",
"task",
"in",
"machine",
"reading",
"comprehension",
"(",
"mrc",
")",
".",
"mrc",
"systems",
"typically",
"only",
"utilize",
"the",
"information",
"contained",
... |
ACL | A Formal Hierarchy of RNN Architectures | We develop a formal hierarchy of the expressive capacity of RNN architectures. The hierarchy is based on two formal properties: space complexity, which measures the RNN’s memory, and rational recurrence, defined as whether the recurrent update can be described by a weighted finite-state machine. We place several RNN va... | 456f8368f66adff7eecd957dbd7b9601 | 2020 | [
"we develop a formal hierarchy of the expressive capacity of rnn architectures .",
"the hierarchy is based on two formal properties : space complexity , which measures the rnn ’ s memory , and rational recurrence , defined as whether the recurrent update can be described by a weighted finite - state machine .",
... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "formal hierarchy of the expressive capacity of rnn ... | [
"we",
"develop",
"a",
"formal",
"hierarchy",
"of",
"the",
"expressive",
"capacity",
"of",
"rnn",
"architectures",
".",
"the",
"hierarchy",
"is",
"based",
"on",
"two",
"formal",
"properties",
":",
"space",
"complexity",
",",
"which",
"measures",
"the",
"rnn",
... |
ACL | Do Neural Dialog Systems Use the Conversation History Effectively? An Empirical Study | Neural generative models have been become increasingly popular when building conversational agents. They offer flexibility, can be easily adapted to new domains, and require minimal domain engineering. A common criticism of these systems is that they seldom understand or use the available dialog history effectively. In... | 008266f3b36227dbf606c9b1a5d531e7 | 2019 | [
"neural generative models have been become increasingly popular when building conversational agents .",
"they offer flexibility , can be easily adapted to new domains , and require minimal domain engineering .",
"a common criticism of these systems is that they seldom understand or use the available dialog hist... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural generative models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"neural",
"generative",
"models"
],
"offsets": [
0,
1,
2
... | [
"neural",
"generative",
"models",
"have",
"been",
"become",
"increasingly",
"popular",
"when",
"building",
"conversational",
"agents",
".",
"they",
"offer",
"flexibility",
",",
"can",
"be",
"easily",
"adapted",
"to",
"new",
"domains",
",",
"and",
"require",
"min... |
ACL | Scalable Syntax-Aware Language Models Using Knowledge Distillation | Prior work has shown that, on small amounts of training data, syntactic neural language models learn structurally sensitive generalisations more successfully than sequential language models. However, their computational complexity renders scaling difficult, and it remains an open question whether structural biases are ... | f50da0f86de60b0dfce435ee3c7e7918 | 2019 | [
"prior work has shown that , on small amounts of training data , syntactic neural language models learn structurally sensitive generalisations more successfully than sequential language models .",
"however , their computational complexity renders scaling difficult , and it remains an open question whether structu... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "structurally sensitive generalisations",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"structurally",
"sensitive",
"generalisations"
],
"offsets": [
... | [
"prior",
"work",
"has",
"shown",
"that",
",",
"on",
"small",
"amounts",
"of",
"training",
"data",
",",
"syntactic",
"neural",
"language",
"models",
"learn",
"structurally",
"sensitive",
"generalisations",
"more",
"successfully",
"than",
"sequential",
"language",
"... |
ACL | Subsequence Based Deep Active Learning for Named Entity Recognition | Active Learning (AL) has been successfully applied to Deep Learning in order to drastically reduce the amount of data required to achieve high performance. Previous works have shown that lightweight architectures for Named Entity Recognition (NER) can achieve optimal performance with only 25% of the original training d... | 084b3e62c7fbafa582b9ef29995ca86f | 2021 | [
"active learning ( al ) has been successfully applied to deep learning in order to drastically reduce the amount of data required to achieve high performance .",
"previous works have shown that lightweight architectures for named entity recognition ( ner ) can achieve optimal performance with only 25 % of the ori... | [
{
"event_type": "MDS",
"arguments": [
{
"text": "within sentences",
"nugget_type": "LIM",
"argument_type": "Condition",
"tokens": [
"within",
"sentences"
],
"offsets": [
117,
118
]
},
{
... | [
"active",
"learning",
"(",
"al",
")",
"has",
"been",
"successfully",
"applied",
"to",
"deep",
"learning",
"in",
"order",
"to",
"drastically",
"reduce",
"the",
"amount",
"of",
"data",
"required",
"to",
"achieve",
"high",
"performance",
".",
"previous",
"works",... |
ACL | Make Up Your Mind! Adversarial Generation of Inconsistent Natural Language Explanations | To increase trust in artificial intelligence systems, a promising research direction consists of designing neural models capable of generating natural language explanations for their predictions. In this work, we show that such models are nonetheless prone to generating mutually inconsistent explanations, such as ”Beca... | 3e24a5d1c5cf6dc9eb31fb4270bc18cf | 2020 | [
"to increase trust in artificial intelligence systems , a promising research direction consists of designing neural models capable of generating natural language explanations for their predictions .",
"in this work , we show that such models are nonetheless prone to generating mutually inconsistent explanations ,... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural models capable of generating natural language explanations",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"neural",
"models",
"capable",
"of",
"gene... | [
"to",
"increase",
"trust",
"in",
"artificial",
"intelligence",
"systems",
",",
"a",
"promising",
"research",
"direction",
"consists",
"of",
"designing",
"neural",
"models",
"capable",
"of",
"generating",
"natural",
"language",
"explanations",
"for",
"their",
"predic... |
ACL | A Multitask Learning Approach for Diacritic Restoration | In many languages like Arabic, diacritics are used to specify pronunciations as well as meanings. Such diacritics are often omitted in written text, increasing the number of possible pronunciations and meanings for a word. This results in a more ambiguous text making computational processing on such text more difficult... | 3a5664635f5fd6e7335b7bc0f6e42ef4 | 2020 | [
"in many languages like arabic , diacritics are used to specify pronunciations as well as meanings .",
"such diacritics are often omitted in written text , increasing the number of possible pronunciations and meanings for a word .",
"this results in a more ambiguous text making computational processing on such ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "diacritic restoration",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"diacritic",
"restoration"
],
"offsets": [
54,
55
]
}
],
"... | [
"in",
"many",
"languages",
"like",
"arabic",
",",
"diacritics",
"are",
"used",
"to",
"specify",
"pronunciations",
"as",
"well",
"as",
"meanings",
".",
"such",
"diacritics",
"are",
"often",
"omitted",
"in",
"written",
"text",
",",
"increasing",
"the",
"number",... |
ACL | Dice Loss for Data-imbalanced NLP Tasks | Many NLP tasks such as tagging and machine reading comprehension are faced with the severe data imbalance issue: negative examples significantly outnumber positive examples, and the huge number of easy-negative examples overwhelms the training. The most commonly used cross entropy (CE) criteria is actually an accuracy-... | 94180e6b6d95763216159a084cdd4efb | 2020 | [
"many nlp tasks such as tagging and machine reading comprehension are faced with the severe data imbalance issue : negative examples significantly outnumber positive examples , and the huge number of easy - negative examples overwhelms the training .",
"the most commonly used cross entropy ( ce ) criteria is actu... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
97
]
},
{
"text": "dice loss",
"nugget_type": "FEA",
... | [
"many",
"nlp",
"tasks",
"such",
"as",
"tagging",
"and",
"machine",
"reading",
"comprehension",
"are",
"faced",
"with",
"the",
"severe",
"data",
"imbalance",
"issue",
":",
"negative",
"examples",
"significantly",
"outnumber",
"positive",
"examples",
",",
"and",
"... |
ACL | Long-range Sequence Modeling with Predictable Sparse Attention | Self-attention mechanism has been shown to be an effective approach for capturing global context dependencies in sequence modeling, but it suffers from quadratic complexity in time and memory usage. Due to the sparsity of the attention matrix, much computation is redundant. Therefore, in this paper, we design an effici... | 5746545f25814e7b2ca6b4d07fb0a0c3 | 2022 | [
"self - attention mechanism has been shown to be an effective approach for capturing global context dependencies in sequence modeling , but it suffers from quadratic complexity in time and memory usage .",
"due to the sparsity of the attention matrix , much computation is redundant .",
"therefore , in this pape... | [
{
"event_type": "RWF",
"arguments": [
{
"text": "self - attention mechanism",
"nugget_type": "APP",
"argument_type": "Concern",
"tokens": [
"self",
"-",
"attention",
"mechanism"
],
"offsets": [
0,
... | [
"self",
"-",
"attention",
"mechanism",
"has",
"been",
"shown",
"to",
"be",
"an",
"effective",
"approach",
"for",
"capturing",
"global",
"context",
"dependencies",
"in",
"sequence",
"modeling",
",",
"but",
"it",
"suffers",
"from",
"quadratic",
"complexity",
"in",... |
ACL | Wide-Coverage Neural A* Parsing for Minimalist Grammars | Minimalist Grammars (Stabler, 1997) are a computationally oriented, and rigorous formalisation of many aspects of Chomsky’s (1995) Minimalist Program. This paper presents the first ever application of this formalism to the task of realistic wide-coverage parsing. The parser uses a linguistically expressive yet highly c... | d3219308b97040d193a708a8c64577b8 | 2019 | [
"minimalist grammars ( stabler , 1997 ) are a computationally oriented , and rigorous formalisation of many aspects of chomsky ’ s ( 1995 ) minimalist program .",
"this paper presents the first ever application of this formalism to the task of realistic wide - coverage parsing .",
"the parser uses a linguistica... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "realistic wide - coverage parsing",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"realistic",
"wide",
"-",
"coverage",
"parsing"
],
"offset... | [
"minimalist",
"grammars",
"(",
"stabler",
",",
"1997",
")",
"are",
"a",
"computationally",
"oriented",
",",
"and",
"rigorous",
"formalisation",
"of",
"many",
"aspects",
"of",
"chomsky",
"’",
"s",
"(",
"1995",
")",
"minimalist",
"program",
".",
"this",
"paper... |
ACL | PTB Graph Parsing with Tree Approximation | The Penn Treebank (PTB) represents syntactic structures as graphs due to nonlocal dependencies. This paper proposes a method that approximates PTB graph-structured representations by trees. By our approximation method, we can reduce nonlocal dependency identification and constituency parsing into single tree-based pars... | 463067582e8fe6c2733101e261dc82c1 | 2,019 | [
"the penn treebank ( ptb ) represents syntactic structures as graphs due to nonlocal dependencies .",
"this paper proposes a method that approximates ptb graph - structured representations by trees .",
"by our approximation method , we can reduce nonlocal dependency identification and constituency parsing into ... | [
{
"event_type": "FIN",
"arguments": [
{
"text": "significantly outperforms",
"nugget_type": "E-CMP",
"argument_type": "Content",
"tokens": [
"significantly",
"outperforms"
],
"offsets": [
72,
73
]
}
... | [
"the",
"penn",
"treebank",
"(",
"ptb",
")",
"represents",
"syntactic",
"structures",
"as",
"graphs",
"due",
"to",
"nonlocal",
"dependencies",
".",
"this",
"paper",
"proposes",
"a",
"method",
"that",
"approximates",
"ptb",
"graph",
"-",
"structured",
"representat... |
ACL | Probing Toxic Content in Large Pre-Trained Language Models | Large pre-trained language models (PTLMs) have been shown to carry biases towards different social groups which leads to the reproduction of stereotypical and toxic content by major NLP systems. We propose a method based on logistic regression classifiers to probe English, French, and Arabic PTLMs and quantify the pote... | 42092c30c6c713f7ef84aaa371ece49e | 2,021 | [
"large pre - trained language models ( ptlms ) have been shown to carry biases towards different social groups which leads to the reproduction of stereotypical and toxic content by major nlp systems .",
"we propose a method based on logistic regression classifiers to probe english , french , and arabic ptlms and ... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
34
]
},
{
"text": "method based on logistic regression classifiers",
... | [
"large",
"pre",
"-",
"trained",
"language",
"models",
"(",
"ptlms",
")",
"have",
"been",
"shown",
"to",
"carry",
"biases",
"towards",
"different",
"social",
"groups",
"which",
"leads",
"to",
"the",
"reproduction",
"of",
"stereotypical",
"and",
"toxic",
"conten... |
ACL | Constructing Interpretive Spatio-Temporal Features for Multi-Turn Responses Selection | Response selection plays an important role in fully automated dialogue systems. Given the dialogue context, the goal of response selection is to identify the best-matched next utterance (i.e., response) from multiple candidates. Despite the efforts of many previous useful models, this task remains challenging due to th... | 19951a41b727179b5b6b9461108f528d | 2,019 | [
"response selection plays an important role in fully automated dialogue systems .",
"given the dialogue context , the goal of response selection is to identify the best - matched next utterance ( i . e . , response ) from multiple candidates .",
"despite the efforts of many previous useful models , this task re... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "response selection",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"response",
"selection"
],
"offsets": [
0,
1
]
}
],
"trigger"... | [
"response",
"selection",
"plays",
"an",
"important",
"role",
"in",
"fully",
"automated",
"dialogue",
"systems",
".",
"given",
"the",
"dialogue",
"context",
",",
"the",
"goal",
"of",
"response",
"selection",
"is",
"to",
"identify",
"the",
"best",
"-",
"matched"... |
ACL | An Effective Approach to Unsupervised Machine Translation | While machine translation has traditionally relied on large amounts of parallel corpora, a recent research line has managed to train both Neural Machine Translation (NMT) and Statistical Machine Translation (SMT) systems using monolingual corpora only. In this paper, we identify and address several deficiencies of exis... | 870e81aae9d16423c2b5bdffcc384db9 | 2,019 | [
"while machine translation has traditionally relied on large amounts of parallel corpora , a recent research line has managed to train both neural machine translation ( nmt ) and statistical machine translation ( smt ) systems using monolingual corpora only .",
"in this paper , we identify and address several def... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "machine translation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"machine",
"translation"
],
"offsets": [
1,
2
]
}
],
"trigge... | [
"while",
"machine",
"translation",
"has",
"traditionally",
"relied",
"on",
"large",
"amounts",
"of",
"parallel",
"corpora",
",",
"a",
"recent",
"research",
"line",
"has",
"managed",
"to",
"train",
"both",
"neural",
"machine",
"translation",
"(",
"nmt",
")",
"a... |
ACL | Empowering Active Learning to Jointly Optimize System and User Demands | Existing approaches to active learning maximize the system performance by sampling unlabeled instances for annotation that yield the most efficient training. However, when active learning is integrated with an end-user application, this can lead to frustration for participating users, as they spend time labeling instan... | 02d439ea98b11c25d0b435d59a37d0c7 | 2,020 | [
"existing approaches to active learning maximize the system performance by sampling unlabeled instances for annotation that yield the most efficient training .",
"however , when active learning is integrated with an end - user application , this can lead to frustration for participating users , as they spend time... | [
{
"event_type": "RWS",
"arguments": [
{
"text": "unlabeled instances for annotation",
"nugget_type": "FEA",
"argument_type": "BaseComponent",
"tokens": [
"unlabeled",
"instances",
"for",
"annotation"
],
"offsets": ... | [
"existing",
"approaches",
"to",
"active",
"learning",
"maximize",
"the",
"system",
"performance",
"by",
"sampling",
"unlabeled",
"instances",
"for",
"annotation",
"that",
"yield",
"the",
"most",
"efficient",
"training",
".",
"however",
",",
"when",
"active",
"lear... |
ACL | Open Domain Event Extraction Using Neural Latent Variable Models | We consider open domain event extraction, the task of extracting unconstraint types of events from news clusters. A novel latent variable neural model is constructed, which is scalable to very large corpus. A dataset is collected and manually annotated, with task-specific evaluation metrics being designed. Results show... | 0f320e759ad8651b631cce58dacce20d | 2,019 | [
"we consider open domain event extraction , the task of extracting unconstraint types of events from news clusters .",
"a novel latent variable neural model is constructed , which is scalable to very large corpus .",
"a dataset is collected and manually annotated , with task - specific evaluation metrics being ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "open domain event extraction",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"open",
"domain",
"event",
"extraction"
],
"offsets": [
2,
... | [
"we",
"consider",
"open",
"domain",
"event",
"extraction",
",",
"the",
"task",
"of",
"extracting",
"unconstraint",
"types",
"of",
"events",
"from",
"news",
"clusters",
".",
"a",
"novel",
"latent",
"variable",
"neural",
"model",
"is",
"constructed",
",",
"which... |
ACL | Towards Better Non-Tree Argument Mining: Proposition-Level Biaffine Parsing with Task-Specific Parameterization | State-of-the-art argument mining studies have advanced the techniques for predicting argument structures. However, the technology for capturing non-tree-structured arguments is still in its infancy. In this paper, we focus on non-tree argument mining with a neural model. We jointly predict proposition types and edges b... | e8ff0cda0e79bd998cd0fe5d1c39b904 | 2,020 | [
"state - of - the - art argument mining studies have advanced the techniques for predicting argument structures .",
"however , the technology for capturing non - tree - structured arguments is still in its infancy .",
"in this paper , we focus on non - tree argument mining with a neural model .",
"we jointly ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "state - of - the - art argument mining studies",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"state",
"-",
"of",
"-",
"the",
"-",
"art... | [
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"argument",
"mining",
"studies",
"have",
"advanced",
"the",
"techniques",
"for",
"predicting",
"argument",
"structures",
".",
"however",
",",
"the",
"technology",
"for",
"capturing",
"non",
"-",
"tree",
"-",
"struc... |
ACL | UniGDD: A Unified Generative Framework for Goal-Oriented Document-Grounded Dialogue | The goal-oriented document-grounded dialogue aims at responding to the user query based on the dialogue context and supporting document. Existing studies tackle this problem by decomposing it into two sub-tasks: knowledge identification and response generation. However, such pipeline methods would unavoidably suffer fr... | ac1c28de9aa0cc797de8343e0adbdcbb | 2,022 | [
"the goal - oriented document - grounded dialogue aims at responding to the user query based on the dialogue context and supporting document .",
"existing studies tackle this problem by decomposing it into two sub - tasks : knowledge identification and response generation .",
"however , such pipeline methods wo... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "goal - oriented document - grounded dialogue",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"goal",
"-",
"oriented",
"document",
"-",
"grounded",... | [
"the",
"goal",
"-",
"oriented",
"document",
"-",
"grounded",
"dialogue",
"aims",
"at",
"responding",
"to",
"the",
"user",
"query",
"based",
"on",
"the",
"dialogue",
"context",
"and",
"supporting",
"document",
".",
"existing",
"studies",
"tackle",
"this",
"prob... |
ACL | Distantly Supervised Named Entity Recognition via Confidence-Based Multi-Class Positive and Unlabeled Learning | In this paper, we study the named entity recognition (NER) problem under distant supervision. Due to the incompleteness of the external dictionaries and/or knowledge bases, such distantly annotated training data usually suffer from a high false negative rate. To this end, we formulate the Distantly Supervised NER (DS-N... | 325ca5d9c4e379c63b92648f78190235 | 2,022 | [
"in this paper , we study the named entity recognition ( ner ) problem under distant supervision .",
"due to the incompleteness of the external dictionaries and / or knowledge bases , such distantly annotated training data usually suffer from a high false negative rate .",
"to this end , we formulate the distan... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "named entity recognition",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"named",
"entity",
"recognition"
],
"offsets": [
7,
8,
9
... | [
"in",
"this",
"paper",
",",
"we",
"study",
"the",
"named",
"entity",
"recognition",
"(",
"ner",
")",
"problem",
"under",
"distant",
"supervision",
".",
"due",
"to",
"the",
"incompleteness",
"of",
"the",
"external",
"dictionaries",
"and",
"/",
"or",
"knowledg... |
ACL | Diverse Pretrained Context Encodings Improve Document Translation | We propose a new architecture for adapting a sentence-level sequence-to-sequence transformer by incorporating multiple pre-trained document context signals and assess the impact on translation performance of (1) different pretraining approaches for generating these signals, (2) the quantity of parallel data for which d... | e54056ad15617ada5845f4c35fc2da7f | 2,021 | [
"we propose a new architecture for adapting a sentence - level sequence - to - sequence transformer by incorporating multiple pre - trained document context signals and assess the impact on translation performance of ( 1 ) different pretraining approaches for generating these signals , ( 2 ) the quantity of paralle... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "architecture",
"nugget_type": "APP",
... | [
"we",
"propose",
"a",
"new",
"architecture",
"for",
"adapting",
"a",
"sentence",
"-",
"level",
"sequence",
"-",
"to",
"-",
"sequence",
"transformer",
"by",
"incorporating",
"multiple",
"pre",
"-",
"trained",
"document",
"context",
"signals",
"and",
"assess",
"... |
ACL | Towards Conversational Recommendation over Multi-Type Dialogs | We focus on the study of conversational recommendation in the context of multi-type dialogs, where the bots can proactively and naturally lead a conversation from a non-recommendation dialog (e.g., QA) to a recommendation dialog, taking into account user’s interests and feedback. To facilitate the study of this task, w... | df6b72e25eb086fd9a3d6b386ba83ee9 | 2,020 | [
"we focus on the study of conversational recommendation in the context of multi - type dialogs , where the bots can proactively and naturally lead a conversation from a non - recommendation dialog ( e . g . , qa ) to a recommendation dialog , taking into account user ’ s interests and feedback .",
"to facilitate ... | [
{
"event_type": "WKS",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Researcher",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "conversational recommendation",
"nugget_t... | [
"we",
"focus",
"on",
"the",
"study",
"of",
"conversational",
"recommendation",
"in",
"the",
"context",
"of",
"multi",
"-",
"type",
"dialogs",
",",
"where",
"the",
"bots",
"can",
"proactively",
"and",
"naturally",
"lead",
"a",
"conversation",
"from",
"a",
"no... |
ACL | Learning Constraints for Structured Prediction Using Rectifier Networks | Various natural language processing tasks are structured prediction problems where outputs are constructed with multiple interdependent decisions. Past work has shown that domain knowledge, framed as constraints over the output space, can help improve predictive accuracy. However, designing good constraints often relie... | e501969220bc0c8f710bea425295eb5b | 2,020 | [
"various natural language processing tasks are structured prediction problems where outputs are constructed with multiple interdependent decisions .",
"past work has shown that domain knowledge , framed as constraints over the output space , can help improve predictive accuracy .",
"however , designing good con... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "natural language processing tasks",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"natural",
"language",
"processing",
"tasks"
],
"offsets": [
... | [
"various",
"natural",
"language",
"processing",
"tasks",
"are",
"structured",
"prediction",
"problems",
"where",
"outputs",
"are",
"constructed",
"with",
"multiple",
"interdependent",
"decisions",
".",
"past",
"work",
"has",
"shown",
"that",
"domain",
"knowledge",
"... |
ACL | A Systematic Assessment of Syntactic Generalization in Neural Language Models | While state-of-the-art neural network models continue to achieve lower perplexity scores on language modeling benchmarks, it remains unknown whether optimizing for broad-coverage predictive performance leads to human-like syntactic knowledge. Furthermore, existing work has not provided a clear picture about the model p... | 942367a67ca3ad93b7fe348c20fd211d | 2,020 | [
"while state - of - the - art neural network models continue to achieve lower perplexity scores on language modeling benchmarks , it remains unknown whether optimizing for broad - coverage predictive performance leads to human - like syntactic knowledge .",
"furthermore , existing work has not provided a clear pi... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "neural network models",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"neural",
"network",
"models"
],
"offsets": [
8,
9,
10
... | [
"while",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"neural",
"network",
"models",
"continue",
"to",
"achieve",
"lower",
"perplexity",
"scores",
"on",
"language",
"modeling",
"benchmarks",
",",
"it",
"remains",
"unknown",
"whether",
"optimizing",
"for",
"broa... |
ACL | A Flexible Multi-Task Model for BERT Serving | We present an efficient BERT-based multi-task (MT) framework that is particularly suitable for iterative and incremental development of the tasks. The proposed framework is based on the idea of partial fine-tuning, i.e. only fine-tune some top layers of BERT while keep the other layers frozen. For each task, we train i... | 4df6ba666557bbef69fd9914ab1d314c | 2,022 | [
"we present an efficient bert - based multi - task ( mt ) framework that is particularly suitable for iterative and incremental development of the tasks .",
"the proposed framework is based on the idea of partial fine - tuning , i . e . only fine - tune some top layers of bert while keep the other layers frozen .... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "efficient bert - based multi - task ( mt ) framewor... | [
"we",
"present",
"an",
"efficient",
"bert",
"-",
"based",
"multi",
"-",
"task",
"(",
"mt",
")",
"framework",
"that",
"is",
"particularly",
"suitable",
"for",
"iterative",
"and",
"incremental",
"development",
"of",
"the",
"tasks",
".",
"the",
"proposed",
"fra... |
ACL | Avoiding Overlap in Data Augmentation for AMR-to-Text Generation | Leveraging additional unlabeled data to boost model performance is common practice in machine learning and natural language processing. For generation tasks, if there is overlap between the additional data and the target text evaluation data, then training on the additional data is training on answers of the test set. ... | 2dfa9e2eb9dd1708f56071afb8f984eb | 2,021 | [
"leveraging additional unlabeled data to boost model performance is common practice in machine learning and natural language processing .",
"for generation tasks , if there is overlap between the additional data and the target text evaluation data , then training on the additional data is training on answers of t... | [
{
"event_type": "RWS",
"arguments": [
{
"text": "boost",
"nugget_type": "E-PUR",
"argument_type": "Target",
"tokens": [
"boost"
],
"offsets": [
5
]
},
{
"text": "additional unlabeled data",
"nugget_... | [
"leveraging",
"additional",
"unlabeled",
"data",
"to",
"boost",
"model",
"performance",
"is",
"common",
"practice",
"in",
"machine",
"learning",
"and",
"natural",
"language",
"processing",
".",
"for",
"generation",
"tasks",
",",
"if",
"there",
"is",
"overlap",
"... |
ACL | Improved Zero-shot Neural Machine Translation via Ignoring Spurious Correlations | Zero-shot translation, translating between language pairs on which a Neural Machine Translation (NMT) system has never been trained, is an emergent property when training the system in multilingual settings. However, naive training for zero-shot NMT easily fails, and is sensitive to hyper-parameter setting. The perform... | feb1f30cb81de17242fdb1defa0e420e | 2,019 | [
"zero - shot translation , translating between language pairs on which a neural machine translation ( nmt ) system has never been trained , is an emergent property when training the system in multilingual settings .",
"however , naive training for zero - shot nmt easily fails , and is sensitive to hyper - paramet... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "zero - shot translation",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"zero",
"-",
"shot",
"translation"
],
"offsets": [
0,
1,
... | [
"zero",
"-",
"shot",
"translation",
",",
"translating",
"between",
"language",
"pairs",
"on",
"which",
"a",
"neural",
"machine",
"translation",
"(",
"nmt",
")",
"system",
"has",
"never",
"been",
"trained",
",",
"is",
"an",
"emergent",
"property",
"when",
"tr... |
ACL | Adversarial Soft Prompt Tuning for Cross-Domain Sentiment Analysis | Cross-domain sentiment analysis has achieved promising results with the help of pre-trained language models. As GPT-3 appears, prompt tuning has been widely explored to enable better semantic modeling in many natural language processing tasks. However, directly using a fixed predefined template for cross-domain researc... | 6eca851954e84fc1a47b68eb93728dc5 | 2,022 | [
"cross - domain sentiment analysis has achieved promising results with the help of pre - trained language models .",
"as gpt - 3 appears , prompt tuning has been widely explored to enable better semantic modeling in many natural language processing tasks .",
"however , directly using a fixed predefined template... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "cross - domain sentiment analysis",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"cross",
"-",
"domain",
"sentiment",
"analysis"
],
"offset... | [
"cross",
"-",
"domain",
"sentiment",
"analysis",
"has",
"achieved",
"promising",
"results",
"with",
"the",
"help",
"of",
"pre",
"-",
"trained",
"language",
"models",
".",
"as",
"gpt",
"-",
"3",
"appears",
",",
"prompt",
"tuning",
"has",
"been",
"widely",
"... |
ACL | Active Learning for Coreference Resolution using Discrete Annotation | We improve upon pairwise annotation for active learning in coreference resolution, by asking annotators to identify mention antecedents if a presented mention pair is deemed not coreferent. This simple modification, when combined with a novel mention clustering algorithm for selecting which examples to label, is much m... | f76ebcbd1e8179098489ab8d0b07df7b | 2,020 | [
"we improve upon pairwise annotation for active learning in coreference resolution , by asking annotators to identify mention antecedents if a presented mention pair is deemed not coreferent .",
"this simple modification , when combined with a novel mention clustering algorithm for selecting which examples to lab... | [
{
"event_type": "FAC",
"arguments": [
{
"text": "modification",
"nugget_type": "APP",
"argument_type": "Subject",
"tokens": [
"modification"
],
"offsets": [
31
]
},
{
"text": "when combined with a novel men... | [
"we",
"improve",
"upon",
"pairwise",
"annotation",
"for",
"active",
"learning",
"in",
"coreference",
"resolution",
",",
"by",
"asking",
"annotators",
"to",
"identify",
"mention",
"antecedents",
"if",
"a",
"presented",
"mention",
"pair",
"is",
"deemed",
"not",
"c... |
ACL | Empower Entity Set Expansion via Language Model Probing | Entity set expansion, aiming at expanding a small seed entity set with new entities belonging to the same semantic class, is a critical task that benefits many downstream NLP and IR applications, such as question answering, query understanding, and taxonomy construction. Existing set expansion methods bootstrap the see... | 12d135e2554d5e903a90b661f0e60876 | 2,020 | [
"entity set expansion , aiming at expanding a small seed entity set with new entities belonging to the same semantic class , is a critical task that benefits many downstream nlp and ir applications , such as question answering , query understanding , and taxonomy construction .",
"existing set expansion methods b... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "entity set expansion",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"entity",
"set",
"expansion"
],
"offsets": [
0,
1,
2
... | [
"entity",
"set",
"expansion",
",",
"aiming",
"at",
"expanding",
"a",
"small",
"seed",
"entity",
"set",
"with",
"new",
"entities",
"belonging",
"to",
"the",
"same",
"semantic",
"class",
",",
"is",
"a",
"critical",
"task",
"that",
"benefits",
"many",
"downstre... |
ACL | Augmenting Document Representations for Dense Retrieval with Interpolation and Perturbation | Dense retrieval models, which aim at retrieving the most relevant document for an input query on a dense representation space, have gained considerable attention for their remarkable success. Yet, dense models require a vast amount of labeled training data for notable performance, whereas it is often challenging to acq... | 22e3786bf9250eeb7b5b86a251b1ded0 | 2,022 | [
"dense retrieval models , which aim at retrieving the most relevant document for an input query on a dense representation space , have gained considerable attention for their remarkable success .",
"yet , dense models require a vast amount of labeled training data for notable performance , whereas it is often cha... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "dense retrieval models",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"dense",
"retrieval",
"models"
],
"offsets": [
0,
1,
2
... | [
"dense",
"retrieval",
"models",
",",
"which",
"aim",
"at",
"retrieving",
"the",
"most",
"relevant",
"document",
"for",
"an",
"input",
"query",
"on",
"a",
"dense",
"representation",
"space",
",",
"have",
"gained",
"considerable",
"attention",
"for",
"their",
"r... |
ACL | A Human-machine Collaborative Framework for Evaluating Malevolence in Dialogues | Conversational dialogue systems (CDSs) are hard to evaluate due to the complexity of natural language. Automatic evaluation of dialogues often shows insufficient correlation with human judgements. Human evaluation is reliable but labor-intensive. We introduce a human-machine collaborative framework, HMCEval, that can g... | 1056ca0ba0ae240c1da2d69175e05659 | 2,021 | [
"conversational dialogue systems ( cdss ) are hard to evaluate due to the complexity of natural language .",
"automatic evaluation of dialogues often shows insufficient correlation with human judgements .",
"human evaluation is reliable but labor - intensive .",
"we introduce a human - machine collaborative f... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "conversational dialogue systems",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"conversational",
"dialogue",
"systems"
],
"offsets": [
0,
1... | [
"conversational",
"dialogue",
"systems",
"(",
"cdss",
")",
"are",
"hard",
"to",
"evaluate",
"due",
"to",
"the",
"complexity",
"of",
"natural",
"language",
".",
"automatic",
"evaluation",
"of",
"dialogues",
"often",
"shows",
"insufficient",
"correlation",
"with",
... |
ACL | Measuring Conversational Uptake: A Case Study on Student-Teacher Interactions | In conversation, uptake happens when a speaker builds on the contribution of their interlocutor by, for example, acknowledging, repeating or reformulating what they have said. In education, teachers’ uptake of student contributions has been linked to higher student achievement. Yet measuring and improving teachers’ upt... | 16645876035a6bb11a9184bbaee6f3bc | 2,021 | [
"in conversation , uptake happens when a speaker builds on the contribution of their interlocutor by , for example , acknowledging , repeating or reformulating what they have said .",
"in education , teachers ’ uptake of student contributions has been linked to higher student achievement .",
"yet measuring and ... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "teachers ’ uptake",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"teachers",
"’",
"uptake"
],
"offsets": [
33,
34,
35
]
... | [
"in",
"conversation",
",",
"uptake",
"happens",
"when",
"a",
"speaker",
"builds",
"on",
"the",
"contribution",
"of",
"their",
"interlocutor",
"by",
",",
"for",
"example",
",",
"acknowledging",
",",
"repeating",
"or",
"reformulating",
"what",
"they",
"have",
"s... |
ACL | Neural Label Search for Zero-Shot Multi-Lingual Extractive Summarization | In zero-shot multilingual extractive text summarization, a model is typically trained on English summarization dataset and then applied on summarization datasets of other languages. Given English gold summaries and documents, sentence-level labels for extractive summarization are usually generated using heuristics. How... | c45b0fb962e04cc571a93a7063bfdee6 | 2,022 | [
"in zero - shot multilingual extractive text summarization , a model is typically trained on english summarization dataset and then applied on summarization datasets of other languages .",
"given english gold summaries and documents , sentence - level labels for extractive summarization are usually generated usin... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "zero - shot multilingual extractive text summarization",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"zero",
"-",
"shot",
"multilingual",
"extractive",
... | [
"in",
"zero",
"-",
"shot",
"multilingual",
"extractive",
"text",
"summarization",
",",
"a",
"model",
"is",
"typically",
"trained",
"on",
"english",
"summarization",
"dataset",
"and",
"then",
"applied",
"on",
"summarization",
"datasets",
"of",
"other",
"languages",... |