This card previews the IteraTeR revision data. Each record pairs a document with one pass of its revision history and the span-level edits that produced it:

| Field | Type | Description |
|---|---|---|
| `doc_id` | string (4 to 10 chars) | Identifier of the source document, e.g. an arXiv ID. |
| `revision_depth` | int64 (1 to 4) | Which revision pass of the document this record captures. |
| `before_revision` | string (135 to ~9.03k chars) | Document text before the revision. |
| `after_revision` | string (144 to ~8.89k chars) | Document text after the revision. |
| `edit_actions` | list | Span-level edits, each with a `type` (`A` add, `D` delete, `R` replace), `before` and `after` strings, `start_char_pos` and `end_char_pos` offsets, a `major_intent`, and three annotator `raw_intents`. Intent labels seen in the preview: fluency, clarity, coherence, style, meaning-changed, others. |
| `sents_char_pos` | sequence of int | Character offsets of sentence starts in `before_revision`. |
| `domain` | string (3 classes) | Source domain; the preview shows `arxiv`, with a few rows `null`. |
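A minimal sketch for loading and inspecting a record with the `datasets` library. The dataset ID string below is a placeholder, not the real ID; substitute the ID of this card (see also the GitHub repo linked at the bottom):

```python
# Sketch: load the dataset and peek at one record.
# "<this-dataset-id>" is a placeholder; use the ID shown on this card.
from datasets import load_dataset

ds = load_dataset("<this-dataset-id>", split="train")
record = ds[0]

print(record["doc_id"], record["revision_depth"], record["domain"])
print(record["before_revision"][:120], "...")
for action in record["edit_actions"][:3]:
    print(f'{action["type"]}: {action["before"]!r} -> {action["after"]!r} '
          f'({action["major_intent"]})')
```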
One record shown in full (the first preview row; the viewer truncates long strings, marked "..."):

- `doc_id`: 1912.05372, `revision_depth`: 1, `domain`: arxiv
- `before_revision` (truncated): "Language models have become a key step to achieve state-of-the-art results in many different Natural Language Processing (NLP) tasks. Leveraging the huge amount of unlabeled texts nowadays available, they provide an efficient way to pre-train continuous word representations that can be fine-tuned for a downstream task, ..."
- `after_revision` (truncated): the same text with "state-of-the-art" replaced by "state-of-the art", ...
- `edit_actions`, first action: type `R`, before "state-of-the-art", after "state-of-the art", start_char_pos 50, end_char_pos 66, major_intent fluency, raw_intents [fluency, clarity, fluency]. A second action (type `R`, before "are shared with") is cut off in the preview.
- `sents_char_pos`: [0, 133, 378, 568, 681, 811, 1011]

`after_revision` can be rebuilt from `before_revision` and `edit_actions`; see the sketch after the preview table below.
The remaining preview rows, reduced to one sample edit each (quotes are verbatim from the preview; long spans are shortened and viewer truncation is marked with "..."):

| doc_id | depth | domain | Sample edit (type: before -> after, major intent) |
|---|---|---|---|
| 1912.05372 | 2 | arxiv | R: "word representations such as OpenAI GPT (Radford" -> "representations (Dai and Le, 2015; Peters" (meaning-changed) |
| 1912.10514 | 2 | arxiv | A: "the" (fluency) |
| 1912.10616 | 1 | arxiv | R: "current" -> "applications to" (clarity) |
| 1912.10616 | 2 | arxiv | R: "Classification-based approaches" -> "Approaches to tackling it have been conventionally divided into classification-based ones, which" (coherence) |
| 1912.11602 | 1 | null | A: "in general" (clarity) |
| 1912.11602 | 2 | arxiv | R: "Lead bias is a common phenomenon in news summarization, where early parts of an article often contain" -> "A typical journalistic convention in news articles is to deliver" (clarity) |
| 1912.13318 | 1 | arxiv | R: "wide spread" -> "widespread" (fluency) |
| 1912.13318 | 2 | arxiv | A: "form understanding (from 70.72 to 79.27)," (meaning-changed) |
| 2001.00059 | 2 | arxiv | R: "6M" -> "7.4M" (meaning-changed) |
| 2001.01037 | 1 | arxiv | R: "with attention. The result provides" -> "models with attention mechanisms. The explanations provide" (clarity) |
| 2001.01037 | 2 | arxiv | R: "explains" -> "interprets the" (clarity) |
| 2001.04063 | 1 | arxiv | R: "Experimental results show ProphetNet achieves the best performance on both" -> "Then we conduct experiments on CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for" (meaning-changed) |
| 2001.05272 | 1 | arxiv | R: "information" -> "infor-mation" (fluency) |
| 2001.05272 | 2 | arxiv | R: "infor-mation" -> "information" (fluency) |
| 2001.05687 | 3 | arxiv | R: "over 95 million people worldwide speak the Vietnamese language" -> "Vietnamese is the 17th most popular native-speaker language in the world" (meaning-changed) |
| 2001.07676 | 2 | arxiv | R: "regular" -> "standard" (clarity) |
| 2001.08604 | 1 | arxiv | R: "dialogue" -> "dialog" (fluency) |
| 2001.08604 | 2 | arxiv | R: "are used to augment" -> "complement" (clarity) |
| 2001.11453 | 1 | null | R: "task-language" -> "task--language" (fluency) |
| 2001.11453 | 2 | null | R: "task--language" -> "task-language" (fluency) |
| 2002.06353 | 1 | arxiv | R: "We propose UniViLM: a Unified Video and Language pre-training Model for multimodal understanding and generation. Motivated by" -> "With" (coherence) |
| 2002.09253 | 1 | arxiv | R: "model" -> "encoder" (clarity) |
| 2002.09253 | 2 | arxiv | R: "Autonomous reinforcement learning agents must be intrinsically motivated to explore their environment, ..." -> "Developmental machine learning studies how artificial agents can model the way children learn open-ended rep..." |
| 2002.09616 | 1 | arxiv | R: "Producing natural and accurate responses like human beings is the ultimate goal of intelligent dialogue agents. So far, ..." (after cut off in the preview) |
| 2002.09616 | 2 | arxiv | R: "Different people have different habits of describing their intents in conversations. ..." -> "Producing natural and accurate responses like human beings..." |
| 2002.10107 | 2 | arxiv | A: "a" (fluency) |
| 2003.02645 | 1 | arxiv | D: "?" (fluency) |
| 2003.02645 | 2 | arxiv | R: "We introduce sentenceMIM," -> "SentenceMIM is" (clarity) |
| 2004.12316 | 1 | arxiv | R: "conversational models" -> "dialogue systems" (clarity) |
| 2004.12316 | 2 | arxiv | R: "dialogue systems" -> "conversational models" (clarity) |
| 2004.12765 | 1 | arxiv | R: "personal assistants. In" -> "virtual assistants. Based on the general linguistic structure of humor, in" (meaning-changed) |
| 2004.14519 | 2 | arxiv | R: "Arabic is a morphological rich language, posing many challenges for information extraction (IE) tasks, ..." -> "Multilingua..." |
| 2004.14601 | 1 | arxiv | R: "a novel methodology" -> "transfer learning as a method" (clarity) |
| 2004.14601 | 2 | arxiv | R: "Training on artificial languages containing recursion (hierarchical structure) also improves performance on natural language, again with no vocabulary overlap" -> "To pinpoint the kinds of abstract structure that models may be encoding to lead to this improvement, we run..." |
| 2004.14623 | 2 | arxiv | R: "In adversarial testing, we pose hard generalization tasks in order to gain insights into the solutions found by our models. ..." (after cut off in the preview) |
| 2004.14974 | 1 | arxiv | R: "the task of scientific fact-checking. Given a corpus of scientific articles and a claim about a scientific finding, ..." (after cut off in the preview) |
| 2004.15003 | 1 | arxiv | R: "semantic similarity between texts is to measure" -> "textual similarity is measuring" (clarity) |
| 2004.15003 | 2 | arxiv | R: "One key principle for" -> "A key principle in" (clarity) |
| 2004.15011 | 1 | arxiv | A: "," (fluency) |
| 2004.15011 | 2 | arxiv | D: "for scientific papers" (clarity) |
| 2005.00192 | 2 | arxiv | R: "Moreover, there is a lack of benchmark datasets to evaluate the suitability of existing metrics in terms of correctness. ..." (after cut off in the preview) |
| 2005.00782 | 1 | null | R: "greatly improved" -> "impressive" (clarity) |
| 2005.00782 | 2 | null | R: "PTLM) have" -> "PTLMs) have achieved" (style) |
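The `edit_actions` entries are plain character-offset operations on `before_revision`, so `after_revision` can be rebuilt from them, and `sents_char_pos` appears to mark sentence start offsets. A minimal sketch of both, assuming exactly the field semantics shown in the example record above (illustrative code, not code from the IteraTeR repo):

```python
# Sketch: rebuild after_revision from before_revision + edit_actions.
# Assumption: start_char_pos/end_char_pos index into before_revision,
# with start == end for type "A" (pure insertion). Applying the edits
# from right to left keeps the earlier offsets valid.
def apply_edit_actions(before: str, edit_actions: list) -> str:
    text = before
    for a in sorted(edit_actions, key=lambda a: a["start_char_pos"], reverse=True):
        start, end = a["start_char_pos"], a["end_char_pos"]
        if a["type"] == "D":            # delete the span [start, end)
            text = text[:start] + text[end:]
        else:                           # "A" inserts at start, "R" replaces [start, end)
            text = text[:start] + a["after"] + text[end:]
    return text

# Assumption: sents_char_pos holds the start offset of each sentence
# in before_revision, so consecutive offsets bound the sentences.
def split_sentences(before: str, sents_char_pos: list) -> list:
    bounds = list(sents_char_pos) + [len(before)]
    return [before[s:e].strip() for s, e in zip(bounds, bounds[1:])]
```

On the first preview row, `apply_edit_actions` would replace "state-of-the-art" (characters 50 to 66 of `before_revision`) with "state-of-the art", matching the `after_revision` shown above.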
Task category: text2text-generation
Paper: Understanding Iterative Revision from Human-Written Text
Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang
GitHub repo: https://github.com/vipulraheja/IteraTeR