text: string (lengths 4 to 222k)
label: int64 (values 0 to 4)
As shown in Figure 2, our approach first obtains the word-level edit matrix through three neural layers. Then, based on the word-level edit matrix, it applies a generation algorithm to produce the rewritten utterance. Since the model yields a U-shaped architecture (illustrated later), we name our approach Rewritten...
2
The principal approaches for constructing wordnets are the merge approach and the expand approach. In the merge approach, the synsets and relations are built independently and then aligned with WordNet. The drawbacks of the merge approach are that it is time-consuming and requires a great deal of manual effort. On the...
2
The analysis consists of three steps: 1. enumerate possible segmentations of an input compound noun by consulting headwords of the thesaurus (BGH); 2. assign thesaurus categories to all words; 3. calculate the preferences of every structure of the compound noun according to the frequencies of category collocations. We assum...
2
In this section, we explain our method for extracting support verbs for nominalizations. We suppose that we are given a pair of words: a verb and its nominalized form. As explained in the previous section, we are interested in extracting only nominalized forms which have not become concrete nouns, and that this will be d...
2
Our methodology is depicted in Figure 1. In a nutshell, it can be described as follows. For both datasets, we extract four feature sets: LF, SE, BF, and RF. Each feature set is described in more detail in these working notes. Next, we train a neural network model for each feature set. We use these neur...
2
Our multi-task model consists of three main components: a BERT encoder, a multi-task attention interaction module, and two task classifiers. Fine-tuning the Bidirectional Encoder Representations from Transformers (BERT) model on downstream tasks has shown a new wave of state-of-the-art performances in many NLP applications (De...
2
Suppose we have two non-parallel corpora X and Y with styles S_x and S_y; the goal is to train two transferrers, each of which can (i) transfer a sentence from one style (either S_x or S_y) to the other (i.e., transfer intensity); and (ii) preserve the style-independent context during the transformation (i.e., preserv...
2
For each of the seven identified skills, we defined an ablation method, as shown in Table 1. The design of these methods is based on the fact that explicit discourse relations are expressed using explicit discourse connectives (Webber et al., 2019). The scope of the proposed methodology hence captures only relations repres...
2
Our domain-agnostic approach is based on two aspects: a simple but informative paralinguistic feature set which can be easily extracted from speech signals in different domains, and a deep learning approach which can discover temporal regularities in the data. Creating textual transcripts of speech recordings is an expe...
2
Color data We employ the Color Lexicon of American English, which provides extensive data on color naming. The lexicon consists of 51 monolexemic color name judgements for each of the 330 Munsell Chart color chips (Lindsey and Brown, 2014). The color terms are solicited through a free-naming task, resulting in 122 te...
2
We present a series of experiments performed with the BATS dataset. Although more results on the analogy task have been published with the Google test set than with BATS, the Google test set only contains 15 types of linguistic relations, and these happen to be the easier ones. BATS provides a larger inventory of relations (98,000 questions in total). BATS covers most relation...
2
We construct our representations from visual objects. We illustrate an overview of our representation construction in Figure 1. Based on the representations, we introduce hypernymy measures to measure the generality of word meanings. We then explain how the LE task is solved. We follow the procedure described in the wo...
2
In order to study gender representation within speech resources, let us start by defining what gender is. In this work, we consider gender as a binary category (male and female speakers). Nevertheless, we are aware that gender as an identity also exists outside of these two categories, but we did not find any mention o...
2
Our algorithm (Algorithm 1) intrinsically uses the Phrase2VecGLM model (Section 4.2) for query expansion: it discovers concepts that occur in similar shared local contexts within documents ranked as top-K relevant to a query document, using one of two options for a specified threshold c...
2
In this paper, we study the problem of logical reasoning on the task of multiple-choice question answering (MCQA). Specifically, given a passage P, a question Q, and a set of K options O = {O_1, ..., O_K}, the goal is to select the correct option O_y, where y ∈ [1, K]. Notably, to tackle this task, we devise a nove...
2
Our primary technical contribution in this paper is the development of a novel approach to identifying structured information embedded within natural language texts. Our approach treats each occurrence of a structured region independently, breaking the problem down into two parts. First, we identify the location of eac...
2
The architecture of our proposed model is presented in Figure 2 . The model is divided into two parts: the autoencoder model for the content, and the style embedding model. In the autoencoder model, the latent representation from the encoder and the style representation are combined, and the decoder uses the resulting ...
2
In this section, we outline our procedure for the automatic acquisition of patterns. We employ a cascading procedure, as shown in Figure 3. First, the original documents are processed by a morphological analyzer and an NE tagger. Then the system retrieves the relevant documents for the scenario as a relevant document set....
2
Aphasic speech data can be collected in two main ways: as a free-form discussion between a PWA and an interviewer, or as a PWA reading a set of provided scripts. While reading from scripts is conducive to supervised learning methods, it is rarely the case in real life. Hence, our goal is to perform paraphasia detec...
2
Before we formulate the problem, we first give some formal definitions. The set of relations R is defined as {r_1, r_2, ..., r_m}, where each r_i is a tuple (r_i^p, r_i^s, r_i^o) corresponding to the predicate, subject, and object; and the set of attributes E is represented as {e_1, e_2, ..., e_n}, where ea...
2
Deep neural networks, with or without word embeddings, have recently shown significant improvements over traditional machine-learning-based approaches when applied to various sentence- and document-level classification tasks. Kim (2014) has shown that CNNs outperform traditional machine-learning-based approaches on seve...
2
We use two types of models, a feature-based model and a neural model, that can be applied to document-level understanding of relations between entities, in order to investigate which models are suitable for GxE recognition and whether there are important issues particular to this new task. There are thr...
2
As the dataset was fully annotated at the token level, we consider the document layout analysis task as a text-based sequence labeling task. Under this setting, we evaluate three representative pre-trained language models on our dataset, including BERT, RoBERTa, and LayoutLM, to validate the effectiveness of DocBank. To verif...
2
The first objective of our work is to detect emotions expressed in customer turns and the second is to predict the emotional technique in agent turns. We treated these two objectives as two classification tasks. We generated a classifier for each task, where the classification output of one classifier can be part of th...
2
The overall methodology is split into three phases: preprocessing of the data, extraction of features, and finally evaluation of models and feature sets. Due to the unstructured format of the text used in social media, a set of filters was employed to reduce the noise while not losing useful information. 1. A tweet-tokenizer...
2
In this section, we briefly introduce several methods for news recommendation, including general recommendation methods and news-specific recommendation methods. These methods were developed in different settings and on different datasets. Some of their implementations can be found in Microsoft Recommenders open source...
2
In order to find qualia relations for entities in REO, we looked for ways to extract them automatically from SUMO. By examining around a hundred nodes in SUMO, we found a number of relations to be useful for extracting qualia. For instance, the relation hasPurpose in SUMO directly specifies the purpose (hence the teli...
2
The CAM design integrates multiple matching strategies at different levels of representation and various abstractions from the surface form to compare meanings across a range of response variations. The approach is related to the methods used in machine translation evaluation (e.g., Banerjee and Lavie, 2005; Lin and Oc...
2
Our method rests on a very strong assumption, which oversimplifies the problem but also offers the chance of recognizing some patterns. The assumption is that all lemmas and their inflections have the following form in all languages: STEM+SUFFIX → STEM+SUFFIX, as illustrated in the following examples for English and Span...
2
The experiments are designed for supervised classification on the type level, i.e., we do not try to decide whether a particular verb coordination in a given context is an SPC, but rather whether the verb coordination, given all its contexts, tends to function as a pseudo-coordination. For this we need a labeled data s...
2
In MUKAYESE, we focus on under-researched tasks of NLP in the Turkish language. After defining the task and assessing its importance, we construct the following three key elements for each benchmark: Datasets are the first element to consider when it comes to a benchmark. We define the minimum requirements of a benchmar...
2
Following prior work, we translate the i-th source sentence x_i into the i-th target sentence y_i in the presence of extra source contexts c = (x_{i-1}, x_{i+1}), where x_{i-1} and x_{i+1} refer to the predecessor and successor of x_i, respectively. We adopt the Transformer as the model architecture for pre-training and machine t...
2
In our method, the predominant sense for a target word is determined from a prevalence ranking of the possible senses for that word. The senses come from a predefined inventory (which might be a dictionary or WordNet-like resource). The ranking is derived using a distributional thesaurus automatically produced from a l...
2
We assign users in the IT dataset to two groups, Yes and No, based on the quantity n_{u,yes} / (n_{u,yes} + n_{u,no}), where n_{u,yes} is the number of tweets in which user u has used at least one of the Yes hashtags and none of the No hashtags in Table 1; and n_{u,no} is the number of tweets in which u has used at least one No hashtag...
2
We use a hierarchical recurrent neural network (Serban et al., 2016) to model the current utterances (Figure 1a). In other words, a recurrent neural network (RNN) captures the meaning of a sentence; another LSTM-RNN aggregates the sentence information into a fixed-size vector. For simplicity, we use the RNN's last state a...
2
Writing an academic paper by referencing examples (e.g., We illustrate the method ...) often does not work very well, because learners may fail to generalize from the examples and apply them to their own situations. Often, there are too many examples to choose from and to adapt to match the needs of learner writers. To hel...
2
In this section, we present our approach for estimating the value of actions. Our approach casts the problem as a supervised learning-to-rank problem between pairs of actions. Given a textual description of an action a, we want to estimate its value magnitude v. We represent the action a via a set of features that are...
2
We discuss several different metrics we developed for human-level clue-giving ability as well as a baseline metric for automatic clue-giving ability in order to provide more context to our machine learning experiment results. We perform machine learning experiments in order to determine the predictive value of simple t...
2
Our approach to MeasEval consisted of a cascade system composed of individual subsystems for each of the problems in the first two subtasks, and then jointly solving the last three subtasks with a single subsystem. The subtask of identifying quantities in text was formalized as a sequence labeling problem with Inside-Ou...
2
Annotated Data We use data from the Parallel Meaning Bank (PMB 3.0.0, Abzianidze et al., 2017) . The documents in this PMB release are sourced from seven different corpora from a wide range of genres. For one of these corpora, Tatoeba, Chinese translations already exist, and we added them to the PMB data. For the remai...
2
In order to compare text simplification corpora in different languages and domains, we have chosen eight corpora in five languages and three domains (see Section 3.1). For the analysis, we use a total of 104 language-independent features (see Section 3.2). In order to analyze the relevance of the features per corpus, language,...
2
We apply a series of methods for narrative analysis. Figure 1 illustrates the main components used to analyse news and create the website. Preprocessing. First, we perform co-reference and anaphora resolution on each U.S. Election article. This is based on the ANNIE plugin in GATE (Cunningham, 2002). Ne...
2
The proposed MTS architecture is shown graphically in Figure 2. It takes four separate inputs: (i) the discussed topic, (ii) the first statement, (iii) the second statement, and (iv) their stance toward the topic. The final output is the similarity score of the fed-in statements with respect to the main context. In the remainder o...
2
The analysis is divided into two parts. First, a list of commonly used adjectives was compiled based on their frequencies in the corpus. The second part involves a statistical analysis based on the categorization of the semantic and syntactic features of the adjectives. The British National Corpus was used...
2
To evaluate the performance of our speech-driven retrieval system, we used the IREX collection (http://cs....). This test collection, which resembles one used in the TREC ad hoc retrieval track, includes 30 Japanese topics (information needs) and relevance assessments (correct judgements) for each topic, along with target...
2
All our translation experiments were conducted with Moses' EMS toolkit (Koehn et al., 2007), which in turn uses GIZA++ (Och and Ney, 2003) and SRILM (Stolcke, 2002). As a test bed, we used the 200 bilingual tweets we acquired that were not used to follow URLs, as described in Sections 2.1 and 2.3. We kept each feed se...
2
We frame the problem as a translation task from English to Bash. In Section 3.1 we describe our approach to incorporating structure when modelling natural language invocations. Section 3.2 describes the proposed architecture and an analysis of its computational complexity at inference time. The constituency tree represe...
2
The proposed DDI classification approach consists of four main steps. The first is Drug Name Normalization, which uses RxNorm (Nelson et al., 2011) to normalize drug synonyms in order to calculate odds ratios more accurately. Next is the Odds Ratio step, in which we calculate the odds ratio of each drug-drug pair matched...
2