Leveraging Deep Graph-Based Text Representation for Sentiment Polarity Applications
1902.10247
Table 3: Hyperparameters of the CNN algorithms
| Parameter            | Value  |
| -------------------- | ------ |
| Sequence length      | 2633   |
| Embedding dimensions | 20     |
| Filter size          | (3, 4) |
| Number of filters    | 150    |
| Dropout probability  | 0.25   |
| Hidden dimensions    | 150    |
A convolutional neural network is employed in the experiments for comparison with the proposed approach. This network typically includes two operations that can be thought of as feature extractors: convolution and pooling. The CNN performs a sequence of operations on the data in its training phase, and the output of this seq...
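The convolution-plus-pooling feature extraction described above can be sketched as follows, using the Table 3 hyperparameters (embedding dimension 20, filter sizes 3 and 4, 150 filters each). This is a minimal NumPy sketch, not the paper's implementation: the sequence length is shortened, the weights are random, and all function and variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, embed_dim = 10, 20
x = rng.normal(size=(seq_len, embed_dim))   # one embedded sentence (toy data)

def conv_max_pool(x, filter_size, n_filters, rng):
    """Slide n_filters windows of height filter_size over the sequence,
    apply ReLU, then max-pool each feature map over time."""
    W = rng.normal(size=(n_filters, filter_size, x.shape[1]))
    n_windows = x.shape[0] - filter_size + 1
    feature_maps = np.empty((n_filters, n_windows))
    for t in range(n_windows):
        window = x[t:t + filter_size]        # (filter_size, embed_dim)
        feature_maps[:, t] = np.maximum((W * window).sum(axis=(1, 2)), 0.0)
    return feature_maps.max(axis=1)          # one value per filter

# Filter sizes (3, 4) with 150 filters each, as in Table 3.
features = np.concatenate([conv_max_pool(x, fs, 150, rng) for fs in (3, 4)])
print(features.shape)  # (300,): the vector fed to the hidden/output layers
```

The pooled 300-dimensional vector is what the dense layers (hidden dimension 150, dropout 0.25 in Table 3) would consume.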
Table 4: Experimental results on given datasets
| Method | Neg. precision (%) | Neg. recall (%) | Neg. F1 (%) | Pos. precision (%) | Pos. recall (%) | Pos. F1 (%) | Accuracy (%) | Overall F1 (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **HCR** | | | | | | | | |
| Proposed method (CNN+Graph) | 89.11 | 88.60 | 81.31 | 85.17 | 84.32 | 84.20 | 85.71 | 82.12 |
| SVM (linear) | 80.21 | 91.40 | 85.01 | 67.12 | 45.23 | 54.24 | 76.01 | 76.74 |
| SVM (RBF) | 77.87 | ... |
We compare the performance of the proposed method against a support vector machine and a convolutional neural network on short sentences, using pre-trained Google word embeddings (kim2014convolutional). It is important to note how well an algorithm performs on the different classes in a dataset; for example, SVM is not showing ...
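The per-class precision, recall, and F1 columns in Table 4 can be computed one-vs-rest per class. A small self-contained sketch (the helper name and toy labels are ours, for illustration):

```python
def per_class_metrics(y_true, y_pred, target):
    """Precision, recall and F1 for one class, treated one-vs-rest,
    as in the per-class columns of Table 4."""
    tp = sum(t == target and p == target for t, p in zip(y_true, y_pred))
    fp = sum(t != target and p == target for t, p in zip(y_true, y_pred))
    fn = sum(t == target and p != target for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example with two classes; one negative instance is misclassified.
truth = ["neg", "neg", "pos", "pos", "neg"]
pred  = ["neg", "pos", "pos", "pos", "neg"]
p, r, f = per_class_metrics(truth, pred, "neg")
print(p, r, f)  # precision 1.0, recall ~0.667, F1 ~0.8
```

Reporting these per class, as Table 4 does, surfaces exactly the class-imbalance behavior the text points out (e.g. SVM's weak positive-class recall).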
Table 5: Comparison of graph-based learning vs. word2vec
| Method | Neg. precision (%) | Neg. recall (%) | Neg. F1 (%) | Pos. precision (%) | Pos. recall (%) | Pos. F1 (%) | Accuracy (%) | Overall F1 (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **IMDB** | | | | | | | | |
| Graph | 87.42 | 90.85 | 88.31 | 86.25 | 86.80 | 86.60 | 86.07 | 87.27 |
| w2v | 74.34 | 73.37 | 75.20 | 71.41 | 70.82 | 71.32 | 70.14 | 72.71 |
To show the advantage of the graph representation procedure over word2vec, we extracted word embeddings on the IMDB dataset alone to demonstrate the effect of graph representation on text documents. This shows the superiority of graphs in extracting features from text materials even if...
KSU KDD: Word Sense Induction by Clustering in Topic Space
1302.7056
Table 1: Effect of varying the number of topics K on performance
| K | 10 | 50 | 200 | 400 | 500 |
| --- | --- | --- | --- | --- | --- |
| V-measure | 5.1 | 5.8 | 7.2 | 8.4 | 8.1 |
| F-score | 8.6 | 32.0 | 53.9 | 63.9 | 64.2 |
Before running our main experiments, we wanted to see how the number of topics K used in the topic model affects the performance of our system. We found that the V-measure and F-score values increase with increasing K: as more dimensions are added to the topic space, the different senses in this K-dimensional spac...
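The V-measure reported in Table 1 is the harmonic mean of homogeneity and completeness, both defined from conditional entropies of the true-sense and induced-cluster labelings. A from-scratch sketch (helper names are ours; entropies use natural logarithms):

```python
import numpy as np
from collections import Counter

def entropy(labels):
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def cond_entropy(a, b):
    """H(a | b) for two parallel label sequences."""
    n = len(a)
    total = 0.0
    for bv in set(b):
        sub = [a[i] for i in range(n) if b[i] == bv]
        total += len(sub) / n * entropy(sub)
    return total

def v_measure(truth, clusters):
    hc, hk = entropy(truth), entropy(clusters)
    homogeneity = 1.0 if hc == 0 else 1 - cond_entropy(truth, clusters) / hc
    completeness = 1.0 if hk == 0 else 1 - cond_entropy(clusters, truth) / hk
    s = homogeneity + completeness
    return 0.0 if s == 0 else 2 * homogeneity * completeness / s

print(v_measure([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0: perfect (relabelled) clustering
print(v_measure([0, 0, 1, 1], [0, 0, 0, 0]))  # 0.0: everything in one cluster
```

Because completeness rewards grouping same-sense instances together, larger K can raise the score so long as the extra topic dimensions separate senses rather than fragment them, consistent with the trend in Table 1.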
Table 2: V-measure and F-score on SemEval-1
| | All | Verbs | Nouns |
| --- | --- | --- | --- |
| V-measure | 8.4 | 8.0 | 8.7 |
| F-score | 63.9 | 56.8 | 69.0 |
Next, we evaluated the performance of our system on the SemEval-1 WSI task data. Since no training data was provided for this task, we used an unannotated version of the test instances to create the LDA topic model. For each target word (verb or noun), we trained the topic model on its given test instances. Then we used t...
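The pipeline described above, representing each test instance of a target word by its distribution over the K topics and then clustering in that topic space, can be sketched as follows. The topic distributions here are invented stand-ins for LDA output, and grouping by dominant topic is one simple clustering choice, not necessarily the exact procedure used in the paper.

```python
import numpy as np

K = 4  # number of topics in this toy topic space
topic_dist = np.array([        # rows: test instances, columns: topics
    [0.70, 0.10, 0.10, 0.10],
    [0.65, 0.15, 0.10, 0.10],
    [0.05, 0.05, 0.80, 0.10],
    [0.10, 0.10, 0.70, 0.10],
    [0.60, 0.20, 0.10, 0.10],
])

# Induce senses by grouping instances that share a dominant topic.
senses = topic_dist.argmax(axis=1)
print(senses.tolist())  # [0, 0, 2, 2, 0]: instances 0, 1, 4 share one sense
```

Each induced group then serves as one sense cluster for the V-measure and F-score evaluation in Table 2.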
Learning End-to-End Goal-Oriented Dialog with Multiple Answers
1808.09996
Table 3: Ablation study of our proposed model on permuted-bAbI dialog task. Results (accuracy %) are given in the standard setup, without match-type features.
| Model | Per-turn | Per-dialog |
| --- | --- | --- |
| Mask-memN2N | 93.4 | 32 |
| Mask-memN2N (w/o entropy) | 92.1 | 24.6 |
| Mask-memN2N (w/o L2 mask pre-training) | 85.8 | 2.2 |
| Mask-memN2N (Reinforcement learning phase only) | 16.0 | 0 |
Here, we study the different parts of our model to better understand how each part influences overall performance. We show results for Mask-memN2N in various settings: (a) without entropy, (b) without mask pre-training, and (c) reinforcement learning phase only. When we run only the RL phase, it might...
Hand-crafted Attention is All You Need? A Study of Attention on Self-supervised Audio Transformer
2006.05174
Table 2: Performance of all attentions
| Attention | Speaker (Utterance) | Speaker (Frame) | Phoneme (1-hidden) | Phoneme (2-hidden) |
| --- | --- | --- | --- | --- |
| Baseline (Mel) | 0.0060 | 0.0033 | 0.5246 | 0.5768 |
| Baseline (QK) | 0.9926 | 0.9824 | 0.6460 | 0.6887 |
| Baseline (Q) | 0.9898 | 0.9622 | 0.5893 | 0.6345 |
| Sparse (Strided) | 0.9786 | 0.9039 | 0.6048 | 0.6450 |
| Sparse (Fixed) | 0.9597 | 0.7960 | 0.6069 | 0.6846 |
| Sign-ALSH | 0.971... |
Baseline (QK) and Baseline (Q) (shared-QK attention) remarkably outperform Baseline (Mel), which shows the importance of pre-training. LSH/ALSH algorithms have a negative influence on most downstream tasks, showing that restricting the attention with LSH/ALSH is not effective enough. For utterance-level speaker...
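The distinction between Baseline (QK), with separate query and key projections, and Baseline (Q), shared-QK attention where a single projection produces both queries and keys, can be sketched in a few lines. Dimensions and weights below are illustrative only; this is not the paper's Transformer implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 8                                  # sequence length, model dimension
x = rng.normal(size=(T, d))                  # toy frame representations

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)    # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    weights = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))
    return weights @ V

Wq, Wk, Wqk, Wv = (rng.normal(size=(d, d)) for _ in range(4))

out_qk = self_attention(x, Wq, Wk, Wv)       # Baseline (QK): separate Q and K
out_q = self_attention(x, Wqk, Wqk, Wv)      # Baseline (Q): one shared projection
print(out_qk.shape, out_q.shape)             # (6, 8) (6, 8)
```

Sharing the projection halves the attention parameters while keeping the output shape unchanged, which is why Baseline (Q) remains competitive with Baseline (QK) in Table 2.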