Column types (from the dataset viewer): venue — string (1 class); title — string (18–162 chars); abstract — string (252–1.89k chars); doc_id — string (32 chars); publication_year — int64; sentences — list (1–13 items); events — list (1–24 items); document — list (50–348 tokens).

| venue | title | abstract | doc_id | publication_year | sentences | events | document |
|---|---|---|---|---|---|---|---|
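Before the rows themselves, a minimal sketch of how one record in this schema might be represented and validated in plain Python. The field names follow the table header above; the values are abbreviated from the first row below, and the alignment check assumes (as the previews suggest) that each argument's `offsets` index into the whitespace-tokenized `document` list.

```python
# Hypothetical single record in this dataset's schema.
# Values are illustrative, abbreviated from the first row of the preview.
record = {
    "venue": "ACL",
    "title": "A Lightweight Recurrent Network for Sequence Modeling",
    "doc_id": "e7d6faf5905e216a5d25cc8404b063c9",
    "publication_year": 2019,
    "sentences": [
        "recurrent networks have achieved great success on various "
        "sequential tasks ...",
    ],
    # "document" is the token list; argument "offsets" index into it.
    "document": (
        "recurrent networks have achieved great success "
        "on various sequential tasks"
    ).split(),
    "events": [
        {
            "event_type": "ITT",
            "arguments": [
                {
                    "text": "sequential tasks",
                    "nugget_type": "TAK",
                    "argument_type": "Target",
                    "tokens": ["sequential", "tasks"],
                    "offsets": [8, 9],
                }
            ],
        }
    ],
}

# Sanity check: each argument's offsets should recover its token span.
for event in record["events"]:
    for arg in event["arguments"]:
        recovered = [record["document"][i] for i in arg["offsets"]]
        assert recovered == arg["tokens"], (recovered, arg["tokens"])
```

The same offset-to-token check generalizes to any row, since every argument carries both its `tokens` and its `offsets` redundantly.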
ACL | A Lightweight Recurrent Network for Sequence Modeling | Recurrent networks have achieved great success on various sequential tasks with the assistance of complex recurrent units, but suffer from severe computational inefficiency due to weak parallelization. One direction to alleviate this issue is to shift heavy computations outside the recurrence. In this paper, we propose... | e7d6faf5905e216a5d25cc8404b063c9 | 2019 | [
"recurrent networks have achieved great success on various sequential tasks with the assistance of complex recurrent units , but suffer from severe computational inefficiency due to weak parallelization .",
"one direction to alleviate this issue is to shift heavy computations outside the recurrence .",
"in this... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "sequential tasks",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"sequential",
"tasks"
],
"offsets": [
8,
9
]
}
],
"trigger": {
... | [
"recurrent",
"networks",
"have",
"achieved",
"great",
"success",
"on",
"various",
"sequential",
"tasks",
"with",
"the",
"assistance",
"of",
"complex",
"recurrent",
"units",
",",
"but",
"suffer",
"from",
"severe",
"computational",
"inefficiency",
"due",
"to",
"weak... |
ACL | Contrastive Self-Supervised Learning for Commonsense Reasoning | We propose a self-supervised method to solve Pronoun Disambiguation and Winograd Schema Challenge problems. Our approach exploits the characteristic structure of training corpora related to so-called “trigger” words, which are responsible for flipping the answer in pronoun disambiguation. We achieve such commonsense re... | 67aee4c4326695393662a0a4c6051845 | 2020 | [
"we propose a self - supervised method to solve pronoun disambiguation and winograd schema challenge problems .",
"our approach exploits the characteristic structure of training corpora related to so - called “ trigger ” words , which are responsible for flipping the answer in pronoun disambiguation .",
"we ach... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "self - supervised method",
"nugget_type": "... | [
"we",
"propose",
"a",
"self",
"-",
"supervised",
"method",
"to",
"solve",
"pronoun",
"disambiguation",
"and",
"winograd",
"schema",
"challenge",
"problems",
".",
"our",
"approach",
"exploits",
"the",
"characteristic",
"structure",
"of",
"training",
"corpora",
"rel... |
ACL | The Dialogue Dodecathlon: Open-Domain Knowledge and Image Grounded Conversational Agents | We introduce dodecaDialogue: a set of 12 tasks that measures if a conversational agent can communicate engagingly with personality and empathy, ask questions, answer questions by utilizing knowledge resources, discuss topics and situations, and perceive and converse about images. By multi-tasking on such a broad large-... | b601d999f30b81efaef652d89d0f28d3 | 2020 | [
"we introduce dodecadialogue : a set of 12 tasks that measures if a conversational agent can communicate engagingly with personality and empathy , ask questions , answer questions by utilizing knowledge resources , discuss topics and situations , and perceive and converse about images .",
"by multi - tasking on s... | [
{
"event_type": "CMP",
"arguments": [
{
"text": "bert pre - trained baseline",
"nugget_type": "APP",
"argument_type": "Arg2",
"tokens": [
"bert",
"pre",
"-",
"trained",
"baseline"
],
"offsets": [
... | [
"we",
"introduce",
"dodecadialogue",
":",
"a",
"set",
"of",
"12",
"tasks",
"that",
"measures",
"if",
"a",
"conversational",
"agent",
"can",
"communicate",
"engagingly",
"with",
"personality",
"and",
"empathy",
",",
"ask",
"questions",
",",
"answer",
"questions",... |
ACL | Sub-Word Alignment is Still Useful: A Vest-Pocket Method for Enhancing Low-Resource Machine Translation | We leverage embedding duplication between aligned sub-words to extend the Parent-Child transfer learning method, so as to improve low-resource machine translation. We conduct experiments on benchmark datasets of My-En, Id-En and Tr-En translation scenarios. The test results show that our method produces substantial imp... | f40e67b3590f81a7b0f8005f63c0d103 | 2022 | [
"we leverage embedding duplication between aligned sub - words to extend the parent - child transfer learning method , so as to improve low - resource machine translation .",
"we conduct experiments on benchmark datasets of my - en , id - en and tr - en translation scenarios .",
"the test results show that our ... | [
{
"event_type": "MDS",
"arguments": [
{
"text": "improve",
"nugget_type": "E-PUR",
"argument_type": "Target",
"tokens": [
"improve"
],
"offsets": [
22
]
},
{
"text": "embedding duplication between aligned s... | [
"we",
"leverage",
"embedding",
"duplication",
"between",
"aligned",
"sub",
"-",
"words",
"to",
"extend",
"the",
"parent",
"-",
"child",
"transfer",
"learning",
"method",
",",
"so",
"as",
"to",
"improve",
"low",
"-",
"resource",
"machine",
"translation",
".",
... |
ACL | Parallel Corpus Filtering via Pre-trained Language Models | Web-crawled data provides a good source of parallel corpora for training machine translation models. It is automatically obtained, but extremely noisy, and recent work shows that neural machine translation systems are more sensitive to noise than traditional statistical machine translation methods. In this paper, we pr... | 03f63f7d2a1cbc816860924bd533888f | 2020 | [
"web - crawled data provides a good source of parallel corpora for training machine translation models .",
"it is automatically obtained , but extremely noisy , and recent work shows that neural machine translation systems are more sensitive to noise than traditional statistical machine translation methods .",
... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "web - crawled data",
"nugget_type": "TAK",
"argument_type": "Target",
"tokens": [
"web",
"-",
"crawled",
"data"
],
"offsets": [
0,
1,
2... | [
"web",
"-",
"crawled",
"data",
"provides",
"a",
"good",
"source",
"of",
"parallel",
"corpora",
"for",
"training",
"machine",
"translation",
"models",
".",
"it",
"is",
"automatically",
"obtained",
",",
"but",
"extremely",
"noisy",
",",
"and",
"recent",
"work",
... |
ACL | Synthetic QA Corpora Generation with Roundtrip Consistency | We introduce a novel method of generating synthetic question answering corpora by combining models of question generation and answer extraction, and by filtering the results to ensure roundtrip consistency. By pretraining on the resulting corpora we obtain significant improvements on SQuAD2 and NQ, establishing a new s... | 54c8198933fc690451f9eafd8e1d9efe | 2019 | [
"we introduce a novel method of generating synthetic question answering corpora by combining models of question generation and answer extraction , and by filtering the results to ensure roundtrip consistency .",
"by pretraining on the resulting corpora we obtain significant improvements on squad2 and nq , establi... | [
{
"event_type": "PRP",
"arguments": [
{
"text": "we",
"nugget_type": "OG",
"argument_type": "Proposer",
"tokens": [
"we"
],
"offsets": [
0
]
},
{
"text": "generating",
"nugget_type": "E-PUR",
... | [
"we",
"introduce",
"a",
"novel",
"method",
"of",
"generating",
"synthetic",
"question",
"answering",
"corpora",
"by",
"combining",
"models",
"of",
"question",
"generation",
"and",
"answer",
"extraction",
",",
"and",
"by",
"filtering",
"the",
"results",
"to",
"en... |
ACL | Continual Few-shot Relation Learning via Embedding Space Regularization and Data Augmentation | Existing continual relation learning (CRL) methods rely on plenty of labeled training data for learning a new task, which can be hard to acquire in real scenario as getting large and representative labeled data is often expensive and time-consuming. It is therefore necessary for the model to learn novel relational patt... | 06e07ff498e7c6ed929109ab657bbd6a | 2022 | [
"existing continual relation learning ( crl ) methods rely on plenty of labeled training data for learning a new task , which can be hard to acquire in real scenario as getting large and representative labeled data is often expensive and time - consuming .",
"it is therefore necessary for the model to learn novel... | [
{
"event_type": "ITT",
"arguments": [
{
"text": "continual relation learning methods",
"nugget_type": "APP",
"argument_type": "Target",
"tokens": [
"continual",
"relation",
"learning",
"methods"
],
"offsets": [
... | [
"existing",
"continual",
"relation",
"learning",
"(",
"crl",
")",
"methods",
"rely",
"on",
"plenty",
"of",
"labeled",
"training",
"data",
"for",
"learning",
"a",
"new",
"task",
",",
"which",
"can",
"be",
"hard",
"to",
"acquire",
"in",
"real",
"scenario",
"... |