Lattice-Based Unsupervised Test-Time Adaptationof Neural Network Acoustic Models
1906.11521
Table 6: WER for adaptation of the TED-LIUM model without i-vectors and the Somali model using best path as a supervision with varying fractions of the adaptation data.
['[EMPTY]', 'TED-LIUM dev', 'TED-LIUM test', 'Somali NB', 'Somali WB']
[['[BOLD] baseline', '10.0', '10.6', '53.7', '57.3'], ['[BOLD] ALL-LAT 100%', '9.1', '9.0', '53.0', '56.5'], ['[BOLD] ALL-LAT 75%', '9.2', '8.8', '53.3', '56.2'], ['[BOLD] ALL-LAT 50%', '9.4', '9.0', '53.8', '56.5'], ['[BOLD] ALL-LAT 25%', '9.7', '9.5', '56.0', '57.0'], ['[BOLD] ALL-BP 100%', '9.9', '10.6', '54.5', '58...
This filtering can be done by using a hard threshold, or by using only the fraction of utterances with the highest confidences. Either way, one extra hyper-parameter is introduced. We experiment with the TED-LIUM model without i-vectors and the Somali model. As can be seen from the table, filtering utterances improves ...
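The two filtering strategies described above can be sketched as follows; the utterance names and confidence values are made up for illustration and are not from the paper.

```python
# Sketch of the two confidence-filtering strategies: a hard threshold on
# per-utterance confidence, or keeping only the top fraction of utterances
# ranked by confidence. Utterances and scores are hypothetical.

def filter_by_threshold(utts, threshold):
    """Keep utterances whose confidence meets a hard threshold."""
    return [u for u, c in utts if c >= threshold]

def filter_by_fraction(utts, fraction):
    """Keep the given fraction of utterances with the highest confidences."""
    ranked = sorted(utts, key=lambda uc: uc[1], reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return [u for u, _ in ranked[:k]]

utts = [("utt1", 0.92), ("utt2", 0.40), ("utt3", 0.75), ("utt4", 0.88)]
print(filter_by_threshold(utts, 0.8))  # ['utt1', 'utt4']
print(filter_by_fraction(utts, 0.5))   # ['utt1', 'utt4']
```

Either variant introduces exactly one extra hyper-parameter (the threshold or the fraction), which is the trade-off the text notes.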
Lattice-Based Unsupervised Test-Time Adaptationof Neural Network Acoustic Models
1906.11521
Table 4: WER for adaptation of the MGB model to episodes in the longitudinal eval data.
['[EMPTY]', 'eval']
[['[BOLD] baseline', '19.9'], ['[BOLD] LHUC-LAT', '19.4'], ['[BOLD] LHUC-BP', '19.5'], ['[BOLD] ALL-LAT', '19.2'], ['[BOLD] ALL-BP', '19.7']]
This provides more adaptation data (30-45 minutes per episode), but perhaps at the cost of losing finer granularity for adaptation. Using the best path with all parameters yields almost no gains (∼1%). When adapting only a subset of the parameters with LHUC, the results are more stable, but this does not perform as well as ...
The Perceptimatic English Benchmark for Speech Perception Models
2005.03418
Table 1: Percent accuracies for humans (PEB) and models (the bigger the better). GMM is for DPGMM, DS for DeepSpeech. BEnM, BEnT and BMu are (in order) for monophone English, triphone English and multilingual bottleneck models. Art is for articulatory reconstruction.
['[EMPTY]', 'PEB', 'GMM', 'DS', 'BEnM', 'BEnT', 'BMu', 'Art', 'MFCC']
[['En', '79.5', '88.3', '89.5', '91.2', '90.3', '88.9', '77.3', '78.6'], ['Fr', '76.7', '82.0', '80.2', '87.6', '88.8', '88.5', '70.1', '78.3']]
This implies that, to the extent that any of these models accurately captures listeners’ perceived discriminability, listeners’ behaviour on the task, unsurprisingly, cannot correspond to a hard decision at the optimal decision threshold. The results also indicate, as expected, a small native language effect—a decreas...
Generating Informative and Diverse Conversational Responses via Adversarial Information Maximization
1809.05972
Table 4: Quantitative evaluation on the Twitter dataset.
['Models', 'Relevance BLEU', 'Relevance ROUGE', 'Relevance Greedy', 'Relevance Average', 'Relevance Extreme', 'Diversity Dist-1', 'Diversity Dist-2', 'Diversity Ent-4']
[['seq2seq', '0.64', '0.62', '1.669', '0.54', '0.34', '0.020', '0.084', '6.427'], ['cGAN', '0.62', '0.61', '1.68', '0.536', '0.329', '0.028', '0.102', '6.631'], ['AIM', '[BOLD] 0.85', '[BOLD] 0.82', '[BOLD] 1.960', '[BOLD] 0.645', '[BOLD] 0.370', '0.030', '0.092', '7.245'], ['DAIM', '0.81', '0.77', '1.845', '0.588', '0...
We further compared our methods on the Twitter dataset. We treated all dialog history before the last response in a multi-turn conversation session as a source sentence, and use the last response as the target to form our dataset. We employed CNN as our encoder because a CNN-based encoder is presumably advantageous in ...
Generating Informative and Diverse Conversational Responses via Adversarial Information Maximization
1809.05972
Table 1: Quantitative evaluation on the Reddit dataset. (∗ is implemented based on [5].)
['Models', 'Relevance BLEU', 'Relevance ROUGE', 'Relevance Greedy', 'Relevance Average', 'Relevance Extreme', 'Diversity Dist-1', 'Diversity Dist-2', 'Diversity Ent-4']
[['seq2seq', '1.85', '0.9', '1.845', '0.591', '0.342', '0.040', '0.153', '6.807'], ['cGAN', '1.83', '0.9', '1.872', '0.604', '0.357', '0.052', '0.199', '7.864'], ['AIM', '[BOLD] 2.04', '[BOLD] 1.2', '[BOLD] 1.989', '[BOLD] 0.645', '0.362', '0.050', '0.205', '8.014'], ['DAIM', '1.93', '1.1', '1.945', '0.632', '[BOLD] 0....
Quantitative evaluation We first evaluated our methods on the Reddit dataset using the relevance and diversity metrics. We truncated the vocabulary to contain only the most frequent 20,000 words. We observe that by incorporating the adversarial loss the diversity of generated responses is improved (cGAN vs. seq2seq). T...
Efficient and Robust Question Answering from Minimal Context over Documents
1805.08092
Table 4: Results of sentence selection on the dev set of SQuAD and NewsQA. (Top) We compare different models and training methods. We report Top 1 accuracy (Top 1) and Mean Average Precision (MAP). Our selector outperforms the previous state-of-the-art Tan et al. (2018). (Bottom) We compare different selection methods....
['Selection method', 'SQuAD N sent', 'SQuAD Acc', 'NewsQA N sent', 'NewsQA Acc']
[['Top k\xa0(T+M)', '1', '91.2', '1', '70.9'], ['Top k\xa0(T+M)', '2', '97.2', '3', '89.7'], ['Top k\xa0(T+M)', '3', '98.9', '4', '92.5'], ['Dyn\xa0(T+M)', '1.5', '94.7', '2.9', '84.9'], ['Dyn\xa0(T+M)', '1.9', '96.5', '3.9', '89.4'], ['Dyn\xa0(T+M+N)', '1.5', '98.3', '2.9', '91.8'], ['Dyn\xa0(T+M+N)', '1.9', '[BOLD] 9...
We introduce 3 techniques to train the model. (i) As the encoder module of our model is identical to that of S-Reader, we transfer the weights to the encoder module from the QA model trained on the single oracle sentence (Oracle). (ii) We modify the training data by treating a sentence as a wrong sentence if the QA mod...
Efficient and Robust Question Answering from Minimal Context over Documents
1805.08092
Table 4: Results of sentence selection on the dev set of SQuAD and NewsQA. (Top) We compare different models and training methods. We report Top 1 accuracy (Top 1) and Mean Average Precision (MAP). Our selector outperforms the previous state-of-the-art Tan et al. (2018). (Bottom) We compare different selection methods....
['Model', 'SQuAD Top 1', 'SQuAD MAP', 'NewsQA Top 1', 'NewsQA Top 3', 'NewsQA MAP']
[['TF-IDF', '81.2', '89.0', '49.8', '72.1', '63.7'], ['Our selector', '85.8', '91.6', '63.2', '85.1', '75.5'], ['Our selector\xa0(T)', '90.0', '94.3', '67.1', '87.9', '78.5'], ['Our selector\xa0(T+M, T+M+N)', '[BOLD] 91.2', '[BOLD] 95.0', '[BOLD] 70.9', '[BOLD] 89.7', '[BOLD] 81.1'], ['Tan et\xa0al. ( 2018 )', '-', '92...
We introduce 3 techniques to train the model. (i) As the encoder module of our model is identical to that of S-Reader, we transfer the weights to the encoder module from the QA model trained on the single oracle sentence (Oracle). (ii) We modify the training data by treating a sentence as a wrong sentence if the QA mod...
Efficient and Robust Question Answering from Minimal Context over Documents
1805.08092
Table 8: Results on the dev-full set of TriviaQA (Wikipedia) and the dev set of SQuAD-Open. Full results (including the dev-verified set on TriviaQA) are in Appendix C. For training Full and Minimal on TriviaQA, we use 10 paragraphs and 20 sentences, respectively. For training Full and Minimal on SQuAD-Open, we use 20 ...
['[EMPTY]', '[EMPTY]', 'TriviaQA (Wikipedia) n sent', 'TriviaQA (Wikipedia) Acc', 'TriviaQA (Wikipedia) Sp', 'TriviaQA (Wikipedia) F1', 'TriviaQA (Wikipedia) EM', 'SQuAD-Open n sent', 'SQuAD-Open Acc', 'SQuAD-Open Sp', 'SQuAD-Open F1', 'SQuAD-Open EM']
[['Full', 'Full', '69', '95.9', 'x1.0', '59.6', '53.5', '124', '76.9', 'x1.0', '41.0', '33.1'], ['Minimal', 'TF-IDF', '5', '73.0', 'x13.8', '51.9', '45.8', '5', '46.1', 'x12.4', '36.6', '29.6'], ['Minimal', 'TF-IDF', '10', '79.9', 'x6.9', '57.2', '51.5', '10', '54.3', 'x6.2', '39.8', '32.5'], ['Minimal', 'Our', '5.0', ...
First, Minimal obtains higher F1 and EM over Full, with the inference speedup of up to 13.8×. Second, the model with our sentence selector with Dyn achieves higher F1 and EM over the model with TF-IDF selector. For example, on the development-full set, with 5 sentences per question on average, the model with Dyn achiev...
Simple and Effective Text Matching with Richer Alignment Features
1908.00300
Table 7: Robustness checks on dev sets of the corresponding datasets.
['[EMPTY]', '[BOLD] SNLI', '[BOLD] Quora', '[BOLD] Scitail']
[['1 block', '88.1±0.1', '88.7±0.1', '88.3±0.8'], ['2 blocks', '88.9±0.2', '89.2±0.2', '[BOLD] 88.9±0.3'], ['3 blocks', '88.9±0.1', '89.4±0.1', '88.8±0.5'], ['4 blocks', '[BOLD] 89.0±0.1', '[BOLD] 89.5±0.1', '88.7±0.5'], ['5 blocks', '89.0±0.2', '89.2±0.2', '88.5±0.5'], ['1 enc. layer', '88.6±0.2', '88.9±0.2', '88.1±0....
The number of blocks is tuned in a range from 1 to 3, and the number of layers of the convolutional encoder is tuned from 1 to 3. Although we validate with up to 5 blocks and layers here, in all other experiments we deliberately limit the maximum number of blocks and layers to 3 to control the size of the model. The initial le...
Simple and Effective Text Matching with Richer Alignment Features
1908.00300
Table 6: Ablation study on dev sets of the corresponding datasets.
['[EMPTY]', '[BOLD] SNLI', '[BOLD] Quora', '[BOLD] Scitail', '[BOLD] WikiQA']
[['original', '88.9', '89.4', '88.9', '0.7740'], ['w/o enc-in', '87.2', '85.7', '78.1', '0.7146'], ['residual conn.', '88.9', '89.2', '87.4', '0.7640'], ['simple fusion', '88.8', '88.3', '87.5', '0.7345'], ['alignment alt.', '88.7', '89.3', '88.2', '0.7702'], ['prediction alt.', '88.9', '89.2', '88.8', '0.7558'], ['par...
The first ablation baseline shows that without richer features as the alignment input, the performance on all datasets degrades significantly. This is the key component in the whole model. The results of the second baseline show that vanilla residual connections without direct access to the original point-wise features...
Normalized and Geometry-Aware Self-Attention Network for Image Captioning
2003.08897
Table 3: Comparison of normalizing query and key in N-SAN.
['Query', 'Key', 'B@4', 'M', 'R', 'C', 'S']
[['✗', '✗', '38.4', '28.6', '58.4', '128.6', '22.6'], ['✓', '✗', '39.3', '[BOLD] 29.1', '[BOLD] 58.9', '[BOLD] 130.8', '23.0'], ['✗', '✓', '39.2', '29.0', '58.8', '130.1', '22.8'], ['✓', '✓', '[BOLD] 39.4', '[BOLD] 29.1', '58.8', '130.7', '[BOLD] 23.1']]
What if we normalize the keys in addition to the queries? We have the following observations. 1) Normalizing either of Q and K could increase the performance. 2) The performances of normalizing both Q and K and normalizing Q alone are very similar, and are both significantly higher than that of SAN. 3) Normalizing K al...
Normalized and Geometry-Aware Self-Attention Network for Image Captioning
2003.08897
Table 2: Comparison of using various normalization methods in NSA.
['Approach', 'B@4', 'M', 'R', 'C', 'S']
[['SAN', '38.4', '28.6', '58.4', '128.6', '22.6'], ['LN', '38.5', '28.6', '58.3', '128.2', '22.5'], ['BN', '38.8', '28.9', '58.7', '129.4', '22.8'], ['IN', '[BOLD] 39.4', '[BOLD] 29.2', '[BOLD] 59.0', '130.7', '[BOLD] 23.0'], ['IN w/o [ITALIC] γ, [ITALIC] β', '39.3', '29.1', '58.9', '[BOLD] 130.8', '[BOLD] 23.0']]
Since we introduced IN into the NSA module for normalization, an intuitive question to ask is whether we can replace IN with other normalization methods. We have the following observations. 1) Using LN slightly decreases the performance. We conjecture that this is because LN normalizes activations of all channels with the s...
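One plausible reading of the normalization step is sketched below: each channel of the query matrix is normalized to zero mean and unit variance across positions, without the affine γ, β (which Table 2 reports performs on par with full IN). The toy matrices are invented for illustration; this is not the authors' implementation.

```python
import math

# Sketch: instance-norm-style normalization of attention queries, channel-wise
# across positions, followed by scaled dot-product logits. Toy data only.

def channel_norm(X, eps=1e-5):
    """Normalize each channel (column) of X to zero mean / unit variance
    across positions (rows); affine parameters are omitted."""
    n, d = len(X), len(X[0])
    out = [[0.0] * d for _ in range(n)]
    for j in range(d):
        col = [X[i][j] for i in range(n)]
        mu = sum(col) / n
        var = sum((v - mu) ** 2 for v in col) / n
        for i in range(n):
            out[i][j] = (X[i][j] - mu) / math.sqrt(var + eps)
    return out

def attention_logits(Q, K):
    """Scaled dot-product logits computed from the normalized queries."""
    Qn = channel_norm(Q)
    d = len(Q[0])
    return [[sum(q[t] * k[t] for t in range(d)) / math.sqrt(d) for k in K]
            for q in Qn]

Q = [[1.0, 2.0], [3.0, 0.0], [2.0, 1.0]]  # 3 positions, 2 channels
K = [[0.5, 1.0], [1.0, 0.5]]              # 2 keys
logits = attention_logits(Q, K)
```

Normalizing K instead of (or in addition to) Q only changes which matrix is passed through `channel_norm`, which is the comparison Table 3 runs.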
Normalized and Geometry-Aware Self-Attention Network for Image Captioning
2003.08897
Table 4: Comparison of various variants of GSA.
['Approach', '#params', 'B@4', 'M', 'R', 'C', 'S']
[['SAN', '40.2M', '38.4', '28.6', '58.4', '128.6', '22.6'], ['absolute', '40.2M', '38.3', '28.5', '58.4', '128.4', '22.6'], ['content-independent', '40.2M', '39.2', '29.1', '58.9', '131.0', '22.9'], ['key-dependent', '41.5M', '38.9', '29.0', '58.8', '129.5', '22.8'], ['query-dependent', '41.5M', '[BOLD] 39.3', '[BOLD] ...
‘+absolute" denotes adding absolute geometry information of each individual object to their input representations at the bottoms of the encoder. We have the following findings. 1) Adding the absolute geometry information (‘‘absolute") is not beneficial to the performance. That is probably because it is too complex for ...
Normalized and Geometry-Aware Self-Attention Network for Image Captioning
2003.08897
Table 7: Video captioning results on VATEX dataset.
['Model', 'B@4', 'M', 'R', 'C']
[['VATEX ', '28.2', '21.7', '46.9', '45.7'], ['Transformer (Ours)', '30.6', '22.3', '48.4', '53.4'], ['+NSA', '[BOLD] 31.0', '[BOLD] 22.7', '[BOLD] 49.0', '[BOLD] 57.1']]
We see that the performance of Transformer strongly exceeds that of VATEX, which adopts an LSTM-based architecture. Our Transformer+NSA method consistently improves over Transformer on all metrics. Particularly, our method improves the CIDEr score by 3.7 points when compared to Transformer, and significantly improves t...
Why Comparing Single Performance Scores Does Not Allow to Draw Conclusions About Machine Learning Approaches
1803.09578
Table 1: The same BiLSTM-CRF approach was evaluated twice under Evaluation 1. The threshold column depicts the average difference in percentage points F1-score for statistical significance at the 0.04 level.
['[BOLD] Task', '[BOLD] Threshold [ITALIC] τ', '[BOLD] % significant', 'Δ( [ITALIC] test)95', 'Δ( [ITALIC] test) [ITALIC] Max']
[['ACE 2005 - Entities', '0.65', '28.96%', '1.21', '2.53'], ['ACE 2005 - Events', '1.97', '34.48%', '4.32', '9.04'], ['CoNLL 2000 - Chunking', '0.20', '18.36%', '0.30', '0.56'], ['CoNLL 2003 - NER-En', '0.42', '31.02%', '0.83', '1.69'], ['CoNLL 2003 - NER-De', '0.78', '33.20%', '1.61', '3.36'], ['GermEval 2014 - NER-De...
For the ACE 2005 - Events task, we observe in 34.48% of the cases a significant difference between the models A_i^(j) and Ã_i^(j). For the other tasks, we observe similar results, and between 10.72% and 33.20% of the cases are statistically significant.
Why Comparing Single Performance Scores Does Not Allow to Draw Conclusions About Machine Learning Approaches
1803.09578
Table 2: The same BiLSTM-CRF approach was evaluated twice under Evaluation 2. The threshold column depicts the average difference in percentage points F1-score for statistical significance at the 0.04 level.
['[BOLD] Task', '[BOLD] Spearman [ITALIC] ρ', '[BOLD] Threshold [ITALIC] τ', '[BOLD] % significant', 'Δ( [ITALIC] dev)95', 'Δ( [ITALIC] test)95', 'Δ( [ITALIC] test) [ITALIC] Max']
[['ACE 2005 - Entities', '0.153', '0.65', '24.86%', '0.42', '1.04', '1.66'], ['ACE 2005 - Events', '0.241', '1.97', '29.08%', '1.29', '3.73', '7.98'], ['CoNLL 2000 - Chunking', '0.262', '0.20', '15.84%', '0.10', '0.29', '0.49'], ['CoNLL 2003 - NER-En', '0.234', '0.42', '21.72%', '0.27', '0.67', '1.12'], ['CoNLL 2003 - ...
For all tasks, we observe small Spearman’s rank correlation ρ between the development and the test score. The low correlation indicates that a run with a high development score does not necessarily yield a high test score. The value 3.68 for the ACE 2005 - Events task indicates that, given two models with the same performanc...
Why Comparing Single Performance Scores Does Not Allow to Draw Conclusions About Machine Learning Approaches
1803.09578
Table 5: 95% percentile of Δ(test) after averaging.
['[BOLD] Task', 'Δ( [ITALIC] test)95 [BOLD] for [ITALIC] n scores [BOLD] 1', 'Δ( [ITALIC] test)95 [BOLD] for [ITALIC] n scores [BOLD] 3', 'Δ( [ITALIC] test)95 [BOLD] for [ITALIC] n scores [BOLD] 5', 'Δ( [ITALIC] test)95 [BOLD] for [ITALIC] n scores [BOLD] 10', 'Δ( [ITALIC] test)95 [BOLD] for [ITALIC] n sc...
[['ACE-Ent.', '1.21', '0.72', '0.51', '0.38', '0.26'], ['ACE-Ev.', '4.32', '2.41', '1.93', '1.39', '0.97'], ['Chk.', '0.30', '0.16', '0.14', '0.09', '0.06'], ['NER-En', '0.83', '0.45', '0.35', '0.26', '0.18'], ['NER-De', '1.61', '0.94', '0.72', '0.51', '0.37'], ['GE 14', '1.12', '0.64', '0.48', '0.34', '0.25'], ['TE 3'...
For increasing n the value Δ(test)95 decreases, i.e. the mean score becomes more stable. However, for the CoNLL 2003 NER-En task we still observe a difference of 0.26 percentage points F1-score between the mean scores for n=10. For the ACE 2005 Events dataset, the value is even at 1.39 percentage points F1-score.
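The stabilizing effect of averaging can be illustrated with a small simulation: estimate the 95th percentile of the absolute difference between two mean scores, each averaged over n runs. The synthetic scores below are invented; the paper's actual runs differ.

```python
import random
import statistics

# Sketch of the Delta(test)95 statistic after averaging: the 95th percentile
# of |mean(A) - mean(B)| where A, B are two samples of n run scores.

def delta95(scores, n, trials=10000, seed=0):
    rng = random.Random(seed)
    diffs = []
    for _ in range(trials):
        a = statistics.mean(rng.sample(scores, n))
        b = statistics.mean(rng.sample(scores, n))
        diffs.append(abs(a - b))
    diffs.sort()
    return diffs[int(0.95 * len(diffs))]

# Synthetic F1 scores from repeated runs of one model (mean 90, sd 0.5).
rng = random.Random(42)
scores = [90.0 + rng.gauss(0, 0.5) for _ in range(40)]
print(delta95(scores, 1), delta95(scores, 10))  # the n=10 value is smaller
```

As in Table 5, the gap shrinks with n but does not vanish, which is the paper's point about residual instability even for averaged scores.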
Locally Adaptive Translation for Knowledge Graph Embedding
1512.01370
Table 1: Different choices of optimal loss functions and the predictive performances over three data sets Subset1, Subset2 and FB15K, where fr(h,t)=∥h+r−t∥22, (h,r,t) is a triple in knowledge graph, and (h′,r,t′) is incorrect triple.
['Data sets', 'Optimal loss function', 'Mean Rank Raw', 'Mean Rank Filter']
[['Subset1', '[ITALIC] fr( [ITALIC] h, [ITALIC] t)+3− [ITALIC] fr( [ITALIC] h′, [ITALIC] t′)', '339', '240'], ['Subset2', '[ITALIC] fr( [ITALIC] h, [ITALIC] t)+2− [ITALIC] fr( [ITALIC] h′, [ITALIC] t′)', '500', '365'], ['FB15K', '[ITALIC] fr( [ITALIC] h, [ITALIC] t)+1− [ITALIC] fr( [ITALIC] h′, [ITALIC] t′)', '243', '1...
To verify this, we construct knowledge graphs with different locality. We simply partition a knowledge graph into different subgraphs in a uniform manner. Each subgraph contains different types of relations and their corresponding entities. Moreover, different subgraphs have the identical number of relations for the sa...
Locally Adaptive Translation for Knowledge Graph Embedding
1512.01370
Table 3: Evaluation results on link prediction.
['Data sets Metric', 'WN18 Mean Rank', 'WN18 Mean Rank', 'FB15K Mean Rank', 'FB15K Mean Rank']
[['Metric', 'Raw', 'Filter', 'Raw', 'Filter'], ['Unstructured', '315', '304', '1,074', '979'], ['RESCAL', '1,180', '1,163', '828', '683'], ['SE', '1,011', '985', '273', '162'], ['SME(linear)', '545', '533', '274', '154'], ['SME(bilinear)', '526', '509', '284', '158'], ['LFM', '469', '456', '283', '164'], ['TransE', '26...
All parameters are determined on the validation set. It can be seen that on both data sets, TransA obtains the lowest mean rank. Furthermore, on WN18, among the baselines, Unstructured and TransH(unif) perform the best, but TransA decreases the mean rank by about 150 compared with both of them. On FB15K, among the base...
Locally Adaptive Translation for Knowledge Graph Embedding
1512.01370
Table 4: Evaluation results of triple classification. (%)
['Data sets', 'WN11', 'FB13', 'FB15K']
[['SE', '53.0', '75.2', '-'], ['SME(linear)', '70.0', '63.7', '-'], ['SLM', '69.9', '85.3', '-'], ['LFM', '73.8', '84.3', '-'], ['NTN', '70.4', '87.1', '68.5'], ['TransH(unif)', '77.7', '76.5', '79.0'], ['TransH(bern)', '78.8', '83.3', '80.2'], ['TransA', '93.2', '82.8', '87.7']]
All parameters are determined on the validation set. The optimal settings are: λ=0.001, d=220, B=120, μ=0.5 and taking L1 as dissimilarity on WN11; λ=0.001, d=50, B=480, μ=0.5 and taking L1 as dissimilarity on FB13. On WN11, TransA outperforms the other methods. On FB13, NTN proves more powerful. This is co...
Evaluating Dialogue Generation Systems via Response Selection
2004.14302
Table 3: Correlations between the ground-truth system ranking and the rankings by automatic evaluation.
['Metrics', 'Spearman', 'p-value']
[['BLEU-1', '−0.36', '0.30'], ['BLEU-2', '0.085', '0.82'], ['METEOR', '0.073', '0.84'], ['ROUGE-L', '0.35', '0.33'], ['RANDOM', '0.43', '-'], ['[BOLD] CHOSEN', '[BOLD] 0.48', '[BOLD] 0.19'], ['HUMAN', '0.87', '0.0038']]
First, we established the human upper bound: we evaluated the correlation between the rankings made by different annotators (HUMAN). We randomly divided the human evaluations into two groups and made two rankings. The correlation coefficient between the two rankings was 0.87. Second, we found that the rankings made using existi...
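The split-half check described above can be sketched as follows: each annotator group's mean scores induce a system ranking, and Spearman's ρ is computed between the two rankings. The per-system scores below are toy values, not the paper's data.

```python
# Sketch of the split-half reliability estimate: Spearman's rho between the
# system rankings induced by two annotator groups (toy scores; no ties).

def rankdata(values):
    """Ranks (1 = largest); ties are not handled in this toy example."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman(x, y):
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

group1 = [4.1, 3.2, 3.9, 2.5]  # mean human score per system, annotator group 1
group2 = [4.0, 3.6, 3.5, 2.8]  # annotator group 2
print(spearman(group1, group2))  # 0.8: the two groups mostly agree
```

A ρ of 0.87 between the two human rankings, as reported, indicates the human upper bound is well above all automatic metrics in the table.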
A Wind of Change:Detecting and Evaluating Lexical Semantic Changeacross Times and Domains
1906.02979
Table 5: ρ for SGNS+OP+CD (L/P, win=2, k=1, t=None) before (ORG) and after time-shuffling (SHF) and downsampling them to the same frequency (+DWN).
['[BOLD] Dataset', '[BOLD] ORG', '[BOLD] SHF', '[BOLD] +DWN']
[['[BOLD] DURel', '[BOLD] 0.816', '0.180', '0.372'], ['[BOLD] SURel', '[BOLD] 0.767', '0.763', '0.576']]
As we saw, dispersion measures are sensitive to frequency. In order to test for this influence within our datasets we follow Dubossarsky et al. For each target word we merge all sentences from the two corpora Ca and Cb containing it, shuffle them, split them again into two sets while holding their frequencies from the ...
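The shuffling control described above can be sketched as: pool all sentences containing a target word from the two corpora, shuffle, and re-split into two sets of the original sizes so the word's per-corpus frequency is preserved. Sentence tokens below are placeholders.

```python
import random

# Sketch of the time-shuffling control: merge a target word's sentences from
# corpora Ca and Cb, shuffle, and split back, holding per-corpus frequencies.

def shuffle_split(sents_a, sents_b, seed=0):
    pooled = sents_a + sents_b
    rng = random.Random(seed)
    rng.shuffle(pooled)
    # Re-split at the original boundary so each new set has the same size
    # (hence the same target frequency) as the corresponding corpus.
    return pooled[:len(sents_a)], pooled[len(sents_a):]

ca = ["s1", "s2", "s3"]  # sentences containing the target in Ca
cb = ["t1", "t2"]        # sentences containing the target in Cb
new_a, new_b = shuffle_split(ca, cb)
```

Any change score measured on the shuffled sets cannot reflect genuine semantic change, so a nonzero SHF score (as for SURel in Table 5) signals frequency or domain artifacts.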
A Wind of Change:Detecting and Evaluating Lexical Semantic Changeacross Times and Domains
1906.02979
Table 3: Best and mean ρ scores across similarity measures (CD, LND, JSD) on semantic representations.
['[BOLD] Dataset', '[BOLD] Representation', '[BOLD] best', '[BOLD] mean']
[['[BOLD] DURel', 'raw count', '0.639', '0.395'], ['[BOLD] DURel', 'PPMI', '0.670', '0.489'], ['[BOLD] DURel', 'SVD', '0.728', '0.498'], ['[BOLD] DURel', 'RI', '0.601', '0.374'], ['[BOLD] DURel', 'SGNS', '[BOLD] 0.866', '[BOLD] 0.502'], ['[BOLD] DURel', 'SCAN', '0.327', '0.156'], ['[BOLD] SURel', 'raw count', '0.599', ...
SGNS is clearly the best vector space model, even though its mean performance does not outperform other representations as clearly as its best performance. Regarding count models, PPMI and SVD show the best results.
A Wind of Change:Detecting and Evaluating Lexical Semantic Changeacross Times and Domains
1906.02979
Table 4: Mean ρ scores for CD across the alignments. Applies only to RI, SVD and SGNS.
['[BOLD] Dataset', '[BOLD] OP', 'OP−', 'OP+', '[BOLD] WI', '[BOLD] None']
[['[BOLD] DURel', '0.618', '0.557', '[BOLD] 0.621', '0.468', '0.254'], ['[BOLD] SURel', '[BOLD] 0.590', '0.514', '0.401', '0.492', '0.285']]
OP+ has the best mean performance on DURel, but performs poorly on SURel. Artetxe et al. show that the additional pre- and post-processing steps of OP+ can be harmful in certain conditions. We tested the influence of the different steps and identified the non-orthogonal whitening transformation as the main reason for a...
Retrofitting Word Vectors to Semantic Lexicons
1411.4166
Table 3: Absolute performance changes for including PPDB information while training LBL vectors. Spearman’s correlation (3 left columns) and accuracy (3 right columns) on different tasks. Bold indicates greatest improvement.
['Method', '[ITALIC] k, [ITALIC] γ', 'MEN-3k', 'RG-65', 'WS-353', 'TOEFL', 'SYN-REL', 'SA']
[['LBL (Baseline)', '[ITALIC] k=∞, [ITALIC] γ=0', '58.0', '42.7', '53.6', '66.7', '31.5', '72.5'], ['[BOLD] LBL + Lazy', '[ITALIC] γ=1', '–0.4', '4.2', '0.6', '–0.1', '0.6', '1.2'], ['[BOLD] LBL + Lazy', '[ITALIC] γ=0.1', '0.7', '8.1', '0.4', '–1.4', '0.7', '0.8'], ['[BOLD] LBL + Lazy', '[ITALIC] γ=0.01', '0.7', '9.5'...
Results. For lazy, γ=0.01 performs best, but the method is in most cases not highly sensitive to γ’s value. For periodic, which overall leads to greater improvements over the baseline than lazy, k=50M performs best, although all other values of k also outperform the baseline. Retrofitting, which can be...
Retrofitting Word Vectors to Semantic Lexicons
1411.4166
Table 2: Absolute performance changes with retrofitting. Spearman’s correlation (3 left columns) and accuracy (3 right columns) on different tasks. Higher scores are always better. Bold indicates greatest improvement for a vector type.
['Lexicon', 'MEN-3k', 'RG-65', 'WS-353', 'TOEFL', 'SYN-REL', 'SA']
[['Glove', '73.7', '76.7', '60.5', '89.7', '67.0', '79.6'], ['+PPDB', '1.4', '2.9', '–1.2', '[BOLD] 5.1', '–0.4', '[BOLD] 1.6'], ['+WN [ITALIC] syn', '0.0', '2.7', '0.5', '[BOLD] 5.1', '–12.4', '0.7'], ['+WN [ITALIC] all', '[BOLD] 2.2', '[BOLD] 7.5', '[BOLD] 0.7', '2.6', '–8.4', '0.5'], ['+FN', '–3.6', '–1.0', '–5.3', ...
All of the lexicons offer high improvements on the word similarity tasks (the first three columns). On the TOEFL task, we observe large improvements of the order of 10 absolute points in accuracy for all lexicons except for FrameNet. FrameNet’s performance is weaker, in some cases leading to worse performance (e.g., wi...
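The retrofitting update behind these numbers can be sketched as below: each vector is iteratively pulled toward its original embedding and toward its lexicon neighbours. We assume α=1 and β=1/degree, the weighting the paper reports; the tiny vocabulary is invented for illustration.

```python
# Minimal sketch of the retrofitting iteration (assumed weights: alpha=1,
# beta = 1/degree). Each word in the lexicon is moved toward the average of
# its neighbours' current vectors and its own original vector.

def retrofit(vectors, lexicon, iters=10):
    """vectors: word -> list[float]; lexicon: word -> neighbour words."""
    new = {w: v[:] for w, v in vectors.items()}
    for _ in range(iters):
        for w, nbrs in lexicon.items():
            nbrs = [n for n in nbrs if n in new]
            if not nbrs:
                continue
            beta = 1.0 / len(nbrs)
            for d in range(len(new[w])):
                num = vectors[w][d] + beta * sum(new[n][d] for n in nbrs)
                new[w][d] = num / (1.0 + beta * len(nbrs))
    return new

vecs = {"happy": [1.0, 0.0], "glad": [0.0, 1.0], "sad": [-1.0, 0.0]}
lex = {"happy": ["glad"], "glad": ["happy"]}
out = retrofit(vecs, lex)
# "happy" and "glad" move toward each other; "sad" has no edges and is untouched.
```

Because only words with lexicon edges move, a sparse or noisy lexicon (the FrameNet case discussed above) can leave vectors unchanged or pull them toward weakly related neighbours.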
Searching for Effective Neural Extractive Summarization: What Works and What’s Next
1907.03491
Table 5: Results of different architectures with different pre-trained knowledge on CNN/DailyMail, where Enc. and Dec. represent document encoder and decoder respectively.
['[BOLD] Model [BOLD] Dec.', '[BOLD] Model [BOLD] Enc.', '[BOLD] R-1 [BOLD] Baseline', '[BOLD] R-2 [BOLD] Baseline', '[BOLD] R-L [BOLD] Baseline', '[BOLD] R-1 [BOLD] + GloVe', '[BOLD] R-2 [BOLD] + GloVe', '[BOLD] R-L [BOLD] + GloVe', '[BOLD] R-1 [BOLD] + BERT', '[BOLD] R-2 [BOLD] + BERT', '[BOLD] R-L [BOLD] ...
[['SeqLab', 'LSTM', '41.22', '18.72', '37.52', '[BOLD] 41.33', '[BOLD] 18.78', '[BOLD] 37.64', '42.18', '19.64', '38.53', '41.48', '[BOLD] 18.95', '37.78'], ['SeqLab', 'Transformer', '41.31', '[BOLD] 18.85', '37.63', '40.19', '18.67', '37.51', '42.28', '[BOLD] 19.73', '38.59', '41.32', '18.83', '37.63'], ['Pointer', 'L...
As shown in Tab. 5, when the models are equipped with BERT, we are excited to observe that the performances of all types of architectures are improved by a large margin. Specifically, the model CNN-LSTM-Pointer has achieved a new state-of-the-art with 42.11 on R-1, surpassing existing models dramatically.
Enriching Neural Models with Targeted Features for Dementia Detection
1906.05483
Table 3: Performance of evaluated models.
['[BOLD] Approach', '[BOLD] Accuracy', '[BOLD] Precision', '[BOLD] Recall', '[BOLD] F1', '[BOLD] AUC', '[BOLD] TN', '[BOLD] FP', '[BOLD] FN', '[BOLD] TP']
[['C-LSTM', '0.8384', '0.8683', '0.9497', '0.9058', '0.9057', '6.3', '15.6', '5.3', '102.6'], ['C-LSTM-Att', '0.8333', '0.8446', '0.9778', '0.9061', '0.9126', '2.6', '19.3', '2.3', '105.6'], ['C-LSTM-Att-w', '0.8512', '0.9232', '0.8949', '0.9084', '0.9139', '14.0', '8.0', '11.3', '96.6'], ['OURS', '0.8495', '0.8508', '...
As is demonstrated, our proposed model achieves the highest performance in Accuracy, Precision, Recall, F1, and AUC. It outperforms the state of the art (C-LSTM) by 5.2%, 7.1%, 4.9%, 2.6%, and 3.7%, respectively.
A Corpus for Modeling Word Importance in Spoken Dialogue Transcripts
1801.09746
Table 2: Model performance in terms of RMS deviation and macro-averaged F1 score, with best results in bold font.
['[BOLD] Model', '[BOLD] RMS', '[ITALIC] F1 [BOLD] (macro)']
[['LSTM-CRF', '0.154', '[BOLD] 0.60'], ['LSTM-SIG', '[BOLD] 0.120', '0.519']]
While the LSTM-CRF had a better (higher) F-score on the classification task, its RMS score was worse (higher) than the LSTM-SIG model, which may be due to the limitation of the model as discussed in Section 5.
Weak Supervision Enhanced Generative Network for Question Generation
1907.00607
Table 1: Comparison with other methods on SQuAD dataset. We demonstrate automatic evaluation results on BLEU 1-4, ROUGE-L, METEOR metrics. The best performance for each column is highlighted in boldface. The WeGen without pre-training means the pipeline of Answer-Related Encoder and Transferred Interaction module are n...
['Model', 'BLEU 1', 'BLEU 2', 'BLEU 3', 'BLEU 4', 'ROUGE-L', 'METEOR']
[['Vanilla Seq2Seq', '17.13', '8.28', '4.74', '2.94', '18.92', '7.23'], ['Seq2Seq+Attention', '17.90', '9.64', '5.68', '3.34', '19.95', '8.63'], ['Transformer', '15.14', '7.27', '3.94', '1.61', '16.47', '5.93'], ['Seq2Seq+Attention+Copy', '29.17', '19.45', '12.63', '10.43', '28.97', '17.63'], ['[BOLD] WeGen', '[BOLD] 3...
The experimental results reveal a number of interesting points. The copy mechanism improves the results significantly. It uses an attentive read of the word embeddings of the source sequence in the encoder and a selective read of location-aware hidden states to enhance the capability of the decoder, and it proves the effectiveness of the repeat pattern...
The IBM 2015 English Conversational Telephone Speech Recognition System
1505.05899
Table 5: Comparison of word error rates for different language models.
['LM', 'WER SWB', 'WER CH']
[['Baseline 4M 4-gram', '9.3', '15.6'], ['37M 4-gram (n-gram)', '8.8', '15.3'], ['n-gram + model M', '8.4', '14.3'], ['n-gram + model M + NNLM', '8.0', '14.1']]
This new n-gram LM was used in combination with our best acoustic model to decode and generate word lattices for further LM rescoring experiments. The WER improved by 0.5% for SWB and 0.3% for CallHome. Part of this improvement (0.1-0.2%) was due to also using a larger beam for decoding. We built a model M LM on each c...
The IBM 2015 English Conversational Telephone Speech Recognition System
1505.05899
Table 1: Word error rates of sigmoid vs. Maxout networks trained with annealed dropout (Maxout-AD) for ST CNNs, DNNs and score fusion on Hub5’00 SWB. Note that all networks are trained only on the SWB-1 data (262 hours).
['Model', 'WER SWB (ST) sigmoid', 'WER SWB (ST) Maxout-AD']
[['DNN', '11.9', '11.0'], ['CNN', '11.8', '11.6'], ['DNN+CNN', '10.5', '10.2']]
All Maxout networks utilize 2 filters per hidden unit, and the same number of layers and roughly the same number of parameters per layer as the sigmoid-based DNN/CNN counterparts. Parameter equalization is achieved by having a factor of √2 more neurons per hidden layer for the maxout nets since the maxout operation red...
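The equalization argument can be checked with back-of-the-envelope arithmetic, counting only the weights between two consecutive hidden layers: with 2 filters per maxout unit, a layer with √2·h neurons (filters) emits √2·h/2 outputs, so the weight count matches a sigmoid layer of h units. This is an illustrative count, not the paper's exact parameter accounting.

```python
import math

# Weight count between two consecutive hidden layers, sigmoid vs maxout.

def sigmoid_layer_weights(h):
    # h outputs from the previous layer feed h units in the next layer.
    return h * h

def maxout_layer_weights(h):
    # sqrt(2) more neurons (filters) than the sigmoid net; the max over
    # 2 filters halves the number of outputs feeding the next layer.
    n = math.sqrt(2) * h
    return (n / 2) * n

h = 2048
print(sigmoid_layer_weights(h), round(maxout_layer_weights(h)))  # equal counts
```

So (√2·h/2)·(√2·h) = h² exactly, which is why scaling the layer width by √2 equalizes per-layer parameters.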
The IBM 2015 English Conversational Telephone Speech Recognition System
1505.05899
Table 2: Comparison of word error rates for CE-trained DNNs with different number of outputs and phonetic context size on Hub5’00 SWB.
['Nb. outputs', 'Phonetic ctx.', 'WER SWB (CE)']
[['16000', '±2', '12.0'], ['16000', '±3', '11.8'], ['32000', '±2', '11.7'], ['64000', '±2', '11.9']]
When training on 2000 hours of data, we found it beneficial to increase the number of context-dependent HMM output targets to values that are far larger than commonly reported. We conjecture that this is because GMMs are a distributed model and require more data for each state to reliably estimate the mixture component...
The IBM 2015 English Conversational Telephone Speech Recognition System
1505.05899
Table 3: Comparison of word error rates for CE and ST CNN, DNN, RNN and various score fusions on Hub5’00.
['Model', 'WER SWB CE', 'WER SWB ST', 'WER CH CE', 'WER CH ST']
[['CNN', '12.6', '10.4', '18.4', '17.9'], ['DNN', '11.7', '10.3', '18.5', '17.0'], ['RNN', '11.5', '9.9', '17.7', '16.3'], ['DNN+CNN', '11.3', '9.6', '17.4', '16.3'], ['RNN+CNN', '11.2', '9.4', '17.0', '16.1'], ['DNN+RNN+CNN', '11.1', '9.4', '17.1', '15.9']]
All nets are trained with 10-15 passes of cross-entropy on 2000 hours of audio and 30 iterations of sequence training. For score fusion, we decode with a frame-level sum of the outputs of the nets prior to the softmax with uniform weights.
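The fusion step can be sketched as follows: for each frame, the nets' pre-softmax outputs are combined with uniform weights before a single softmax. The three-class logits below are toy values; real systems emit thousands of HMM-state scores per frame.

```python
import math

# Sketch of frame-level score fusion: uniformly weighted combination of each
# net's pre-softmax outputs (logits), followed by one softmax.

def softmax(logits):
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def fuse(frame_logits_per_net):
    """frame_logits_per_net: one logit vector per net, for a single frame."""
    n = len(frame_logits_per_net)
    fused = [sum(vals) / n for vals in zip(*frame_logits_per_net)]
    return softmax(fused)

dnn = [2.0, 0.5, -1.0]   # hypothetical per-state logits from each net
cnn = [1.5, 1.0, -0.5]
rnn = [2.5, 0.0, -1.5]
post = fuse([dnn, cnn, rnn])
```

Combining in logit space with uniform weights keeps fusion training-free, which matches the table's DNN+RNN+CNN rows.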
The IBM 2015 English Conversational Telephone Speech Recognition System
1505.05899
Table 4: Comparison of word error rates for CE and sequence trained unfolded RNN and DNN with score fusion and joint modeling on Hub5’00. The WERs for the joint models are after sequence training.
['RNN/CNN combination', 'WER SWB', 'WER CH']
[['score fusion of CE models', '11.2', '17.0'], ['score fusion of ST models', '9.4', '16.1'], ['joint model from CE models (ST)', '9.3', '15.6'], ['joint model from ST models (ST)', '9.4', '15.7']]
Two experimental scenarios were considered. The first is where the joint model was initialized with the fusion of the cross-entropy trained RNN and CNN whereas the second uses ST models as the starting point.
The IBM 2015 English Conversational Telephone Speech Recognition System
1505.05899
Table 6: Comparison of word error rates on Hub5’00 (SWB and CH) for existing systems (∗ note that the 19.1% CallHome WER is not reported in [13]).
['System', 'AM training data', 'SWB', 'CH']
[['Vesely et al.\xa0', 'SWB', '12.6', '24.1'], ['Seide et al.\xa0', 'SWB+Fisher+other', '13.1', '–'], ['Hannun et al.\xa0', 'SWB+Fisher', '12.6', '19.3'], ['Zhou et al.\xa0', 'SWB', '14.2', '–'], ['Maas et al.\xa0', 'SWB', '14.3', '26.0'], ['Maas et al.\xa0', 'SWB+Fisher', '15.0', '23.0'], ['Soltau et al.\xa0', 'SWB', ...
Since Switchboard is such a well-studied corpus, we thought we would take a step back and reflect on how far we have come in terms of speech recognition technology. At the height of technological development for GMM-based systems, the winning IBM submission scored 15.2% WER during the 2004 DARPA EARS evaluation. For clarity, we al...
Thematically ReinforcedExplicit Semantic Analysis
1405.4364
Table 2: Evaluation results (ordered by decreasing precision)
['[ITALIC] λ1', '[ITALIC] λ2', '[ITALIC] λ3', '[ITALIC] λ4', '[ITALIC] λ5', 'C', '# SVs', 'Precision']
[['1.5', '0', '0.5', '0.25', '0.125', '3.0', '786', '[BOLD] 75.015%'], ['1', '0', '0.5', '0.25', '0.125', '3.0', '709', '74.978%'], ['1.5', '1', '0.5', '0.25', '0.125', '3.0', '827', '74.899%'], ['0.25', '1.5', '0.5', '0.25', '0.125', '3.0', '761', '74.87%'], ['0.5', '0', '0.5', '0.25', '0.125', '3.0', '698', '74.867%'...
The results show a significant improvement over the standard ESA version (that corresponds to λi=0 for all i). This confirms our approach. On Fig. the reader can see the precision obtained as a function of the first two parameters λ1 and λ2, as well as the number of support vectors used. We notice that the precision varies s...
Thematically ReinforcedExplicit Semantic Analysis
1405.4364
Table 2: Evaluation results (ordered by decreasing precision)
['[ITALIC] λ1', '[ITALIC] λ2', '[ITALIC] λ3', '[ITALIC] λ4', '[ITALIC] λ5', 'C', '# SVs', 'Precision']
[['0', '1', '0.5', '0.25', '0.125', '3.0', '710', '74.716%'], ['2', '1', '0.5', '0.25', '0.125', '3.0', '899', '74.705%'], ['2', '0', '0.5', '0.25', '0.125', '3.0', '852', '74.675%'], ['0.5', '0.25', '0.125', '0.0625', '0.0312', '3.0', '653', '74.67%'], ['2', '0.5', '0.5', '0.25', '0.125', '3.0', '899', '74.641%'], ['0...
The results show a significant improvement over the standard ESA version (that corresponds to λi=0 for all i). This confirms our approach. On Fig. the reader can see the precision obtained as a function of the first two parameters λ1 and λ2, as well as the number of support vectors used. We notice that the precision varies s...
Essence Knowledge Distillation for Speech Recognition
1906.10834
Table 2: Word error rates of different models trained with a subset of the Switchboard data.
['Acoustic Model', '[ITALIC] k', 'SWB', 'CHE', 'TOTAL']
[['TDNN', '[EMPTY]', '14.1', '26.3', '20.3'], ['TDNN-LSTM', '[EMPTY]', '14.4', '26.2', '20.2'], ['TDNN-LSTM+TDNN (teacher)', '[EMPTY]', '13.2', '25.4', '19.3'], ['[EMPTY]', '1', '13.5', '25.4', '19.6'], ['[EMPTY]', '5', '13.1', '24.6', '18.9'], ['[EMPTY]', '10', '13.0', '24.6', '[BOLD] 18.8'], ['TDNN-LSTM (student)', '...
A subset consisting of 25% of the training data from the Switchboard data set was used to quickly evaluate the effectiveness of the proposed method and to tune some hyperparameters. As can be seen, the TDNN-LSTM performed better than the TDNN model. The teacher model, which is a fusion of a TDNN model and a TDNN-LSTM mode...
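The hyperparameter k in the table suggests the teacher distribution is truncated to its k strongest posteriors before distillation. A hedged sketch of one such truncation, assuming per-frame renormalization (the function name and exact scheme are our assumptions, not necessarily the paper's loss):

```python
import numpy as np

def topk_soft_targets(teacher_post, k):
    """Keep only the k largest teacher posteriors per frame, zero the
    rest, and renormalize to obtain soft targets for the student."""
    out = np.zeros_like(teacher_post)
    idx = np.argsort(teacher_post, axis=-1)[..., -k:]
    np.put_along_axis(out, idx,
                      np.take_along_axis(teacher_post, idx, axis=-1),
                      axis=-1)
    return out / out.sum(axis=-1, keepdims=True)

# one frame over 3 senones: k=2 keeps 0.5 and 0.3, renormalized
targets = topk_soft_targets(np.array([[0.5, 0.3, 0.2]]), k=2)
```

The student would then be trained with a cross-entropy-style loss against these truncated targets, which is consistent with the table's trend of improving WER as k grows from 1 to 10.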
Neural Belief Tracker: Data-Driven Dialogue State Tracking
1606.03777
Table 2: DSTC2 and WOZ 2.0 test set performance (joint goals and requests) of the NBT-CNN model making use of three different word vector collections. The asterisk indicates statistically significant improvement over the baseline xavier (random) word vectors (paired t-test; p<0.05).
['[BOLD] Word Vectors', '[BOLD] DSTC2 [BOLD] Goals', '[BOLD] DSTC2 [BOLD] Requests', '[BOLD] WOZ 2.0 [BOLD] Goals', '[BOLD] WOZ 2.0 [BOLD] Requests']
[['xavier [BOLD] (No Info.)', '64.2', '81.2', '81.2', '90.7'], ['[BOLD] GloVe', '69.0*', '96.4*', '80.1', '91.4'], ['[BOLD] Paragram-SL999', '[BOLD] 73.4*', '[BOLD] 96.5*', '[BOLD] 84.2*', '[BOLD] 91.6']]
The NBT models use the semantic relations embedded in the pre-trained word vectors to handle semantic variation and produce high-quality intermediate representations. We compare three collections: 1) xavier (random) word vectors carrying no semantic information; 2) GloVe vectors, trained using co-occurrence information in large textual corpora; and 3) semantically specialised Paragram-SL999 vectors (Wieting et al.). Paragram-SL999 ...
Neural Belief Tracker: Data-Driven Dialogue State Tracking
1606.03777
Table 1: DSTC2 and WOZ 2.0 test set accuracies for: a) joint goals; and b) turn-level requests. The asterisk indicates statistically significant improvement over the baseline trackers (paired t-test; p<0.05).
['[BOLD] DST Model', '[BOLD] DSTC2 [BOLD] Goals', '[BOLD] DSTC2 [BOLD] Requests', '[BOLD] WOZ 2.0 [BOLD] Goals', '[BOLD] WOZ 2.0 [BOLD] Requests']
[['[BOLD] Delexicalisation-Based Model', '69.1', '95.7', '70.8', '87.1'], ['[BOLD] Delexicalisation-Based Model + Semantic Dictionary', '72.9*', '95.7', '83.7*', '87.6'], ['Neural Belief Tracker: NBT-DNN', '72.6*', '96.4', '[BOLD] 84.4*', '91.2*'], ['Neural Belief Tracker: NBT-CNN', '[BOLD] 73.4*', '[BOLD] 96.5', '84.2...
The NBT models outperformed the baseline models in terms of both joint goal and request accuracies. For goals, the gains are always statistically significant (paired t-test, p<0.05). Moreover, there was no statistically significant variation between the NBT and the lexicon-supplemented models, showing that the NBT can ...
Variational Neural Discourse Relation Recognizer
1603.03876
(b) Con vs Other
['[BOLD] Model', '[BOLD] Acc', '[BOLD] P', '[BOLD] R', '[BOLD] F1']
[['[BOLD] (R & X\xa0rutherford-xue:2015:NAACL-HLT)', '-', '-', '-', '53.80'], ['[BOLD] (J & E\xa0TACL536)', '76.95', '-', '-', '52.78'], ['[BOLD] SVM', '62.62', '39.14', '72.40', '50.82'], ['[BOLD] SCNN', '63.00', '39.80', '75.29', '52.04'], ['[BOLD] VarNDRR', '53.82', '35.39', '88.53', '50.56']]
Because the development and test sets are imbalanced in terms of the ratio of positive and negative instances, we chose the widely-used F1 score as our major evaluation metric. In addition, we also provide the precision, recall and accuracy for further analysis. We compare models according to their F1 scores. Although it fails on Con, V...
Variational Neural Discourse Relation Recognizer
1603.03876
(a) Com vs Other
['[BOLD] Model', '[BOLD] Acc', '[BOLD] P', '[BOLD] R', '[BOLD] F1']
[['[BOLD] R & X\xa0rutherford-xue:2015:NAACL-HLT', '-', '-', '-', '41.00'], ['[BOLD] J & E\xa0TACL536', '70.27', '-', '-', '35.93'], ['[BOLD] SVM', '63.10', '22.79', '64.47', '33.68'], ['[BOLD] SCNN', '60.42', '22.00', '67.76', '33.22'], ['[BOLD] VarNDRR', '63.30', '24.00', '71.05', '35.88']]
Because the development and test sets are imbalanced in terms of the ratio of positive and negative instances, we chose the widely-used F1 score as our major evaluation metric. In addition, we also provide the precision, recall and accuracy for further analysis. We compare models according to their F1 scores. Although it fails on Con, V...
Variational Neural Discourse Relation Recognizer
1603.03876
(c) Exp vs Other
['[BOLD] Model', '[BOLD] Acc', '[BOLD] P', '[BOLD] R', '[BOLD] F1']
[['[BOLD] (R & X\xa0rutherford-xue:2015:NAACL-HLT)', '-', '-', '-', '69.40'], ['[BOLD] (J & E\xa0TACL536)', '69.80', '-', '-', '80.02'], ['[BOLD] SVM', '60.71', '65.89', '58.89', '62.19'], ['[BOLD] SCNN', '63.00', '56.29', '91.11', '69.59'], ['[BOLD] VarNDRR', '57.36', '56.46', '97.39', '71.48']]
Because the development and test sets are imbalanced in terms of the ratio of positive and negative instances, we chose the widely-used F1 score as our major evaluation metric. In addition, we also provide the precision, recall and accuracy for further analysis. We compare models according to their F1 scores. Although it fails on Con, V...
Variational Neural Discourse Relation Recognizer
1603.03876
(d) Tem vs Other
['[BOLD] Model', '[BOLD] Acc', '[BOLD] P', '[BOLD] R', '[BOLD] F1']
[['[BOLD] (R & X\xa0rutherford-xue:2015:NAACL-HLT)', '-', '-', '-', '33.30'], ['[BOLD] (J & E\xa0TACL536)', '87.11', '-', '-', '27.63'], ['[BOLD] SVM', '66.25', '15.10', '68.24', '24.73'], ['[BOLD] SCNN', '76.95', '20.22', '62.35', '30.54'], ['[BOLD] VarNDRR', '62.14', '17.40', '97.65', '29.54']]
Because the development and test sets are imbalanced in terms of the ratio of positive and negative instances, we chose the widely-used F1 score as our major evaluation metric. In addition, we also provide the precision, recall and accuracy for further analysis. We compare models according to their F1 scores. Although it fails on Con, V...
Gated Convolutional Bidirectional Attention-based Model for Off-topic Spoken Response Detection
2004.09036
Table 5: The performance of GCBiA with negative sampling augmentation method conditioned on over 0.999 on-topic recall.
['Model', 'Seen PPR3', 'Seen AOR', 'Unseen PPR3', 'Unseen AOR']
[['GCBiA', '93.6', '79.2', '68.0', '45.0'], ['+ neg sampling', '[BOLD] 94.2', '[BOLD] 88.2', '[BOLD] 79.4', '[BOLD] 69.1']]
To augment training data and strengthen the generalization of the off-topic response detection model for unseen prompts, we proposed a new and effective negative sampling method for the off-topic response detection task. Compared with the previous method of generating only one negative sample for each positive one, we gen...
Gated Convolutional Bidirectional Attention-based Model for Off-topic Spoken Response Detection
2004.09036
Table 4: The comparison of different models based on over 0.999 on-topic recall on seen and unseen benchmarks. AOR means Average Off-topic Recall (%) and PRR3 means Prompt Ratio over off-topic Recall 0.3 (%).
['Systems', 'Model', 'Seen PPR3', 'Seen AOR', 'Unseen PPR3', 'Unseen AOR']
[['Malinin et\xa0al., 2017', 'Att-RNN', '84.6', '72.2', '32.0', '21.0'], ['Our baseline model', 'G-Att-RNN', '87.8', '76.8', '54.0', '38.1'], ['This work', '+ Bi-Attention', '90.4', '78.3', '56.0', '39.7'], ['This work', '+ RNN→CNN', '89.7', '76.6', '66.0', '43.7'], ['This work', '+ [ITALIC] maxpooling', '92.3', '79.1...
As is shown in the table, to make the evaluation more convincing, we built a stronger baseline model G-Att-RNN based on Att-RNN by adding residual connections to each layer. Additionally, we add a gated unit as the relevance layer for our baseline model G-Att-RNN. Compared with Att-RNN, our baseline model G-Att-RNN achieved sig...
Joint Speaker Counting, Speech Recognition, and Speaker Identification for Overlapped Speech of Any Number of Speakers
2006.10930
Table 1: SER (%), WER (%), and SA-WER (%) for baseline systems and proposed method. The number of profiles per test audio was 8. Each profile was extracted by using 2 utterances (15 sec on average). For random speaker assignment experiment (3rd row), averages of 10 trials were computed. No LM was used in the evaluation...
['ModelEval Set', '1-speaker SER', '1-speaker WER', '1-speaker [BOLD] SA-WER', '2-speaker-mixed SER', '2-speaker-mixed WER', '2-speaker-mixed [BOLD] SA-WER', '3-speaker-mixed SER', '3-speaker-mixed WER', '3-speaker-mixed [BOLD] SA-WER', 'Total SER', 'Total WER', 'Total [BOLD] SA-WER']
[['Single-speaker ASR', '-', '4.7', '-', '-', '66.9', '-', '-', '90.7', '-', '-', '68.4', '-'], ['SOT-ASR', '-', '4.5', '-', '-', '10.3', '-', '-', '19.5', '-', '-', '13.9', '-'], ['SOT-ASR + random speaker assignment', '87.4', '4.5', '[BOLD] 175.2', '82.8', '23.4', '[BOLD] 169.7', '76.1', '39.1', '[BOLD] 165.1', '80.2...
Baseline results: The first row corresponds to the conventional single-speaker ASR based on AED. As expected, the WER was significantly degraded for overlapped speech. The second row shows the result of the SOT-ASR system that was used for initializing the proposed method in training. SOT-ASR significantly improved the WER...
Wav2Letter: an End-to-End ConvNet-based Speech Recognition System
1609.03193
(a)
['[EMPTY]', 'ASG', 'CTC']
[['dev-clean', '10.4', '10.7'], ['test-clean', '10.1', '10.5']]
Our ASG criterion is implemented in C (CPU only), leveraging SSE instructions when possible. Our batching is done with an OpenMP parallel for. Both criteria lead to the same LER. For comparing the speed, we report performance for sequence sizes as reported initially by Baidu, but also for longer sequence sizes, which c...
Wav2Letter: an End-to-End ConvNet-based Speech Recognition System
1609.03193
(b)
['batch size', 'CTC CPU', 'CTC GPU', 'ASG CPU']
[['1', '1.9', '5.9', '2.5'], ['4', '2.0', '6.0', '2.8'], ['8', '2.0', '6.1', '2.8']]
Our ASG criterion is implemented in C (CPU only), leveraging SSE instructions when possible. Our batching is done with an OpenMP parallel for. Both criteria lead to the same LER. For comparing the speed, we report performance for sequence sizes as reported initially by Baidu, but also for longer sequence sizes, which c...
Wav2Letter: an End-to-End ConvNet-based Speech Recognition System
1609.03193
(c)
['batch size', 'CTC CPU', 'CTC GPU', 'ASG CPU']
[['1', '40.9', '97.9', '16.0'], ['4', '41.6', '99.6', '17.7'], ['8', '41.7', '100.3', '19.2']]
Our ASG criterion is implemented in C (CPU only), leveraging SSE instructions when possible. Our batching is done with an OpenMP parallel for. Both criteria lead to the same LER. For comparing the speed, we report performance for sequence sizes as reported initially by Baidu, but also for longer sequence sizes, which c...
Wav2Letter: an End-to-End ConvNet-based Speech Recognition System
1609.03193
Table 2: LER/WER of the best sets of hyper-parameters for each feature types.
['[EMPTY]', 'MFCC LER', 'MFCC WER', 'PS LER', 'PS WER', 'Raw LER', 'Raw WER']
[['dev-clean', '6.9', '[EMPTY]', '9.3', '[EMPTY]', '10.3', '[EMPTY]'], ['test-clean', '6.9', '7.2', '9.1', '9.4', '10.6', '10.1']]
every 20 ms. We found that one could squeeze out about 1% in performance by refining the precision of the output. This is efficiently achieved by shifting the input sequence, and feeding it to the network several times. Both power spectrum and raw features perform slightly worse than MFCCs. the gap would vanish.
EARL: Joint Entity and Relation Linking for Question Answering over Knowledge Graphs
1801.03825
Table 6: Evaluating EARL’s Relation Linking performance
['[BOLD] System', '[BOLD] Accuracy LC-QuAD', '[BOLD] Accuracy - QALD']
[['ReMatch\xa0', '0.12', '0.31'], ['RelMatch\xa0', '0.15', '0.29'], ['EARL without adaptive learning', '0.32', '0.45'], ['EARL with adaptive learning', '[BOLD] 0.36', '[BOLD] 0.47']]
Aim: Given a question, the task is to perform relation linking in the question. This also evaluates our hypothesis H3. ReMatch and RelMatch are the baseline systems we could run on LC-QuAD and QALD. The large difference in relation-linking accuracy between LC-QuAD and QALD is due to the fact that LC-QuAD has 82% questions with more than one relation, thus de...
EARL: Joint Entity and Relation Linking for Question Answering over Knowledge Graphs
1801.03825
Table 3: Empirical comparison of Connection Density and GTSP: n = number of nodes in graph; L = number of clusters in graph; N = number of nodes per cluster; top K results retrieved from ElasticSearch.
['[BOLD] Approach', '[BOLD] Accuracy (K=30)', '[BOLD] Accuracy (K=10)', '[BOLD] Time Complexity']
[['Brute Force GTSP', '0.61', '0.62', 'O( [ITALIC] n22 [ITALIC] n)'], ['LKH - GTSP', '0.59', '0.58', 'O( [ITALIC] nL2)'], ['Connection Density', '0.61', '0.62', 'O( [ITALIC] N2 [ITALIC] L2)']]
Aim: We evaluate the hypotheses (H1 and H2) that connection density and GTSP can be used for the joint linking task. We also evaluate the LKH approximate solution of GTSP for this task. We compare the time complexity of the three different approaches. Results: Connection density has worse time complexity than appro...
EARL: Joint Entity and Relation Linking for Question Answering over Knowledge Graphs
1801.03825
Table 4: Evaluation of joint linking performance
['[BOLD] Value of k', 'R [ITALIC] f [BOLD] based on R [ITALIC] i', 'R [ITALIC] f [BOLD] based on C,H', 'R [ITALIC] f [BOLD] based on R [ITALIC] i,C,H']
[['[ITALIC] k = 10', '0.543', '0.689', '0.708'], ['[ITALIC] k = 30', '0.544', '0.666', '0.735'], ['[ITALIC] k = 50', '0.543', '0.617', '[BOLD] 0.737'], ['[ITALIC] k = 100', '0.540', '0.534', '0.733'], ['[ITALIC] k∗ = 10', '0.568', '0.864', '[BOLD] 0.905'], ['[ITALIC] k∗ = 30', '0.554', '0.779', '0.864'], ['[ITALIC] k∗ ...
Metrics: We use the mean reciprocal rank of the correct candidate ¯ci for each entity/relation in the query. From the probable candidate list generation step, we fetch a list of top candidates for each identified phrase in a query with a k value of 10, 30, 50 and 100, where k is the number of results from text search f...
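The mean reciprocal rank metric used here can be computed as below; a minimal sketch, assuming a reciprocal rank of 0 when the correct candidate is absent from the retrieved list (function name is ours).

```python
def mean_reciprocal_rank(ranked_lists, gold):
    """Mean reciprocal rank of the correct candidate over all queries.
    A query whose gold item is absent from its list contributes 0."""
    total = 0.0
    for cands, g in zip(ranked_lists, gold):
        rr = 0.0
        for pos, c in enumerate(cands, start=1):
            if c == g:
                rr = 1.0 / pos   # reciprocal of 1-based rank
                break
        total += rr
    return total / len(gold)

# gold item ranked 2nd, 1st, and missing -> (0.5 + 1.0 + 0.0) / 3
score = mean_reciprocal_rank([['a', 'b'], ['x', 'y'], ['p', 'q']],
                             ['b', 'x', 'z'])
```

In the paper's setting, each query contributes one ranked candidate list per identified phrase, with list length controlled by the text-search cutoff k.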
EARL: Joint Entity and Relation Linking for Question Answering over Knowledge Graphs
1801.03825
Table 5: Evaluating EARL’s Entity Linking performance
['[BOLD] System', '[BOLD] Accuracy LC-QuAD', '[BOLD] Accuracy - QALD']
[['FOX\xa0', '0.36', '0.30'], ['DBpediaSpotlight\xa0', '0.40', '0.42'], ['TextRazor', '0.52', '0.53'], ['Babelfy\xa0', '0.56', '0.56'], ['EARL without adaptive learning', '0.61', '0.55'], ['EARL with adaptive learning', '[BOLD] 0.65', '[BOLD] 0.57']]
EARL uses a series of sequential modules with little to no feedback across them. Hence, the errors in one module propagate down the line. To trammel this, we implement an adaptive approach especially for curbing the errors made in the pre-processing modules. While conducting experiments, it was observed that most of th...
Ask No More: Deciding when to guess in referential visual dialogue
1805.06960
Table 4: Games played by DM with MaxQ=10, and the baseline with 5 fixed questions. Percentages of games (among all games and only decided games) where the DM models ask either fewer or more questions than the baseline. For the decided games, percentages of games where asking fewer/more questions helps (+ Change), hurts...
['DM', 'Decided games + Change', 'Decided games + Change', 'Decided games – Change', 'Decided games – Change', 'Decided games No Change', 'Decided games No Change', 'Decided games Total', 'Decided games Total', 'All games Total', 'All games Total']
[['DM', 'Fewer', 'More', 'Fewer', 'More', 'Fewer', 'More', 'Fewer', 'More', 'Fewer', 'More'], ['DM1', '1.77', '3.46', '2.64', '3.79', '22.58', '50.35', '26.99', '57.6', '22.63', '64.43'], ['DM2', '25.01', '0.16', '13.98', '0.81', '56.18', '3.67', '95.17', '4.64', '14.83', '85.14']]
When considering all the games, we see that the DM models ask many more questions (64.43% DM1 and 85.14% DM2) than the baseline. Zooming into decided games thus allows for a more appropriate comparison. We report whether asking fewer/more questions helps (+ Change), hurts (– Change) or does not have an impact on task success (No Change) with respect to the baselin...
A Fixed-Size Encoding Method for Variable-Length Sequences with its Application to Neural Network Language Models
1505.01504
Table 2: Perplexities on PTB for various LMs.
['Model', 'Test PPL']
[['KN 5-gram ', '141'], ['FNNLM ', '140'], ['RNNLM ', '123'], ['LSTM ', '117'], ['bigram FNNLM', '176'], ['trigram FNNLM', '131'], ['4-gram FNNLM', '118'], ['5-gram FNNLM', '114'], ['6-gram FNNLM', '113'], ['1st-order FOFE-FNNLM', '116'], ['2nd-order FOFE-FNNLM', '[BOLD] 108']]
We first evaluated the performance of the traditional FNN-LMs, taking the previous several words as input, denoted as n-gram FNN-LMs here. We trained neural networks with a linear projection layer (of 200 hidden nodes) and two hidden layers (of 400 nodes per layer). All hidden units in the networks use the rectif...
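The first-order FOFE behind the FOFE-FNNLM rows encodes a variable-length word sequence into a fixed-size vector via the recursion z_t = α·z_{t-1} + e_t, where e_t is the one-hot vector of word t. A minimal sketch (α=0.7 is an arbitrary choice for illustration):

```python
import numpy as np

def fofe(one_hots, alpha=0.7):
    """First-order FOFE: z_t = alpha * z_{t-1} + e_t, returning the
    final fixed-size code z_T for a variable-length sequence."""
    z = np.zeros(one_hots.shape[1])
    for e in one_hots:
        z = alpha * z + e
    return z

# vocabulary of 3 words; sequence "w0 w1" encodes to alpha*e0 + e1
code = fofe(np.eye(3)[[0, 1]], alpha=0.7)
```

With 0 < α < 1, earlier words are exponentially discounted, so the code is unique for a given sequence while staying the size of the vocabulary, which is what lets a plain FNN consume whole histories.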
Multimodal Social Media Analysis for Gang Violence Prevention
1807.08465
Table 2. Results for detecting the psychosocial codes: aggression, loss and substance use. For each code we report precision (P), recall (R), F1-scores (F1) and average precision (AP). Numbers shown are mean values of 5-fold cross validation performances. The highest performance (based on AP) for each code is marked wi...
['[BOLD] Modality', '[BOLD] Features', '[BOLD] Fusion', '[BOLD] Aggression P', '[BOLD] Aggression R', '[BOLD] Aggression F1', '[BOLD] Aggression AP', '[BOLD] Loss P', '[BOLD] Loss R', '[BOLD] Loss F1', '[BOLD] Loss AP', '[BOLD] Substance use P', '[BOLD] Substance use R', '[BOLD] Substance use F1', '[BOLD] Substance use...
[['-', '- (random baseline)', '-', '0.25', '0.26', '0.26', '0.26', '0.17', '0.17', '0.17', '0.20', '0.18', '0.18', '0.18', '0.20', '0.23'], ['-', '- (positive baseline)', '-', '0.25', '1.00', '0.40', '0.25', '0.21', '1.00', '0.35', '0.22', '0.20', '1.00', '0.33', '0.20', '0.22'], ['text', 'linguistic features', '-', '0...
Our results indicate that image and text features play different roles in detecting different psychosocial codes. Textual information clearly dominates the detection of code loss. We hypothesize that loss is better conveyed textually whereas substance use and aggression are easier to express visually. Qualitatively, th...
Multimodal Social Media Analysis for Gang Violence Prevention
1807.08465
Table 1. Numbers of instances for the different visual concepts and psychosocial codes in our dataset. For the different codes, the first number indicates for how many tweets at least one annotator assigned the corresponding code, numbers in parentheses are based on per-tweet majority votes.
['[BOLD] Concepts/Codes', '[BOLD] Twitter', '[BOLD] Tumblr', '[BOLD] Total']
[['[ITALIC] handgun', '164', '41', '205'], ['[ITALIC] long gun', '15', '105', '116'], ['[ITALIC] joint', '185', '113', '298'], ['[ITALIC] marijuana', '56', '154', '210'], ['[ITALIC] person', '1368', '74', '1442'], ['[ITALIC] tattoo', '227', '33', '260'], ['[ITALIC] hand gesture', '572', '2', '574'], ['[ITALIC] lean', '...
Note that in order to ensure sufficient quality of the annotations, but also due to the nature of the data, we relied on a special annotation process and kept the total size of the dataset comparatively small. However, crawling images from Tumblr targeting keywords related to those concepts led us to gather images whe...
USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation
2005.00456
Table 2: Average scores for the six different responses, on the six quality: Understandable, Natural, Maintains Context, Interesting, Uses Knowledge and Overall Quality.
['[BOLD] System', '[BOLD] Und (0-1)', '[BOLD] Nat (1-3)', '[BOLD] MCtx (1-3)', '[BOLD] Int (1-3)', '[BOLD] UK (0-1)', '[BOLD] OQ (1-5)']
[['Topical-Chat', 'Topical-Chat', 'Topical-Chat', 'Topical-Chat', 'Topical-Chat', 'Topical-Chat', 'Topical-Chat'], ['Original Ground-Truth', '0.95', '2.72', '2.72', '2.64', '0.72', '4.25'], ['Argmax Decoding', '0.60', '2.08', '2.13', '1.94', '0.47', '2.76'], ['Nucleus Sampling (0.3)', '0.51', '2.02', '1.90', '1.82', '0...
Across both datasets and all qualities, the new human generated response strongly outperforms all other response types, even the original ground truth. This may be because the new human generated response was written with this quality annotation in mind, and as such is optimized for turn-level evaluation. On the other ...
USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation
2005.00456
Table 1: Inter-annotator agreement for all the metrics. For all the correlations presented in this table, p<0.01.
['[BOLD] Metric', '[BOLD] Spearman', '[BOLD] Pearson']
[['Topical-Chat', 'Topical-Chat', 'Topical-Chat'], ['Understandable', '0.5102', '0.5102'], ['Natural', '0.4871', '0.4864'], ['Maintains Context', '0.5599', '0.5575'], ['Interesting', '0.5811', '0.5754'], ['Uses Knowledge', '0.7090', '0.7090'], ['Overall Quality', '0.7183', '0.7096'], ['PersonaChat', 'PersonaChat', 'Per...
The correlation between each pair of annotations is computed and the average correlation over all the pairs is reported. Correlation is used instead of Cohen’s Kappa in order to better account for the ordinal nature of the ratings (i.e., 4 should correlate better with 5 than 1), and to maintain consistency with the eva...
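The agreement procedure (correlation between each pair of annotators, averaged over all pairs) can be sketched as follows; this is a minimal Spearman implementation assuming no tied ratings, not the authors' code.

```python
from itertools import combinations

def _ranks(xs):
    # 1-based rank positions (assumes no ties among ratings)
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def _pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def mean_pairwise_spearman(annotations):
    """Spearman correlation (Pearson on ranks) for every annotator
    pair, averaged over all pairs."""
    corrs = [_pearson(_ranks(a), _ranks(b))
             for a, b in combinations(annotations, 2)]
    return sum(corrs) / len(corrs)

# two agreeing annotators and one reversed -> (1 - 1 - 1) / 3
agreement = mean_pairwise_spearman([[1, 2, 3, 4], [1, 2, 3, 4], [4, 3, 2, 1]])
```

Unlike Cohen's Kappa, this rank correlation rewards near-misses (a 4 against a 5) more than distant disagreements (a 1 against a 5), matching the ordinal scales in the table.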
USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation
2005.00456
Table 3: Turn-level correlations on Topical-Chat. We show: (1) best non-USR metric, (2) best USR sub-metric and (3) USR metric. All measures in this table are statistically significant to p<0.01.
['Metric', 'Spearman', 'Pearson']
[['Understandable', 'Understandable', 'Understandable'], ['BERTScore (base)', '0.2502', '0.2611'], ['USR - MLM', '[BOLD] 0.3268', '[BOLD] 0.3264'], ['USR', '0.3152', '0.2932'], ['Natural', 'Natural', 'Natural'], ['BERTScore (base)', '0.2094', '0.2260'], ['USR - MLM', '[BOLD] 0.3254', '[BOLD] 0.3370'], ['USR', '0.3037',...
USR is shown to strongly outperform both word-overlap and embedding-based metrics across all of the dialog qualities. Interestingly, the best non-USR metric is consistently either METEOR or BERTScore – possibly because both methods are adept at comparing synonyms during evaluation. For some dialog qualities, the overal...
USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation
2005.00456
Table 5: Turn-level correlations between all automatic metrics and the Overall Quality ratings for the Topical-Chat corpus. All values with p>0.05 are italicized.
['Metric', 'Spearman', 'Pearson']
[['Word-Overlap Metrics', 'Word-Overlap Metrics', 'Word-Overlap Metrics'], ['F-1', '0.1645', '0.1690'], ['BLEU-1', '0.2728', '0.2876'], ['BLEU-2', '0.2862', '0.3012'], ['BLEU-3', '0.2569', '0.3006'], ['BLEU-4', '0.2160', '0.2956'], ['METEOR', '0.3365', '0.3908'], ['ROUGE-L', '0.2745', '0.2870'], ['Embedding Based Metri...
USR shows a strong improvement over all other methods. This strong performance can be attributed to two factors: (1) the ability of MLM and DR to accurately quantify qualities of a generated response without a reference response, and (2) the ability of USR to effectively combine MLM and DR into a better correlated over...
Sequence-to-Sequence Models Can Directly Translate Foreign Speech
1703.08581
Table 3: Speech recognition model performance in WER.
['[EMPTY]', 'Fisher dev', 'Fisher dev2', 'Fisher test', 'Callhome devtest', 'Callhome evltest']
[['Ours', '25.7', '25.1', '23.2', '44.5', '45.3'], ['Post et al. ', '41.3', '40.0', '36.5', '64.7', '65.3'], ['Kumar et al. ', '29.8', '29.8', '25.3', '–', '–']]
We construct a baseline cascade of a Spanish ASR seq2seq model whose output is passed into a Spanish-to-English NMT model. Performance on the Fisher task is significantly better than on Callhome, since Fisher contains more formal speech, consisting of conversations between strangers, while Callhome conversations were often b...
Sequence-to-Sequence Models Can Directly Translate Foreign Speech
1703.08581
Table 1: Varying number of decoder layers in the speech translation model. BLEU score on the Fisher/dev set.
['Num decoder layers [ITALIC] D 1', 'Num decoder layers [ITALIC] D 2', 'Num decoder layers [ITALIC] D 3', 'Num decoder layers [ITALIC] D 4', 'Num decoder layers [ITALIC] D 5']
[['43.8', '45.1', '45.2', '45.5', '45.3']]
In contrast, seq2seq NMT models often use much deeper decoders. In analogy to a traditional ASR system, one may think of the seq2seq encoder as the acoustic model while the decoder acts as the language model. The additional complexity of the translation task when compared to monolingual language modeling motiv...
Sequence-to-Sequence Models Can Directly Translate Foreign Speech
1703.08581
Table 5: Speech translation model performance in BLEU score.
['Model', 'Fisher dev', 'Fisher dev2', 'Fisher test', 'Callhome devtest', 'Callhome evltest']
[['End-to-end ST 3', '46.5', '47.3', '47.3', '16.4', '16.6'], ['Multi-task ST / ASR 3', '48.3', '49.1', '48.7', '16.8', '17.4'], ['ASR→NMT cascade 3', '45.1', '46.1', '45.5', '16.2', '16.6'], ['Post et al. ', '–', '35.4', '–', '–', '11.7'], ['Kumar et al. ', '–', '40.1', '40.4', '–', '–']]
Despite not having access to source language transcripts at any stage of the training, the end-to-end model outperforms the baseline cascade, which passes the 1-best Spanish ASR output into the NMT model, by about 1.8 BLEU points on the Fisher/test set. We obtain an additional improvement of 1.4 BLEU points or more on ...
TACAM: Topic And Context Aware Argument Mining
1906.00923
Table 1. In-Topic
['[EMPTY]', 'Method', 'Macro F1']
[['two-class', 'BiLSTM', '0.74'], ['two-class', 'BiCLSTM', '0.74'], ['two-class', 'TACAM-WE', '0.74'], ['two-class', 'TACAM-KG', '0.73'], ['two-class', 'CAM-BERT Base', '0.79'], ['two-class', 'TACAM-BERT Base', '[BOLD] 0.81'], ['[EMPTY]', 'CAM-BERT Large', '0.80'], ['[EMPTY]', 'TACAM-BERT Large', '[BOLD] 0.81'], ['three-class', 'BiLSTM', '0.56'], ['th...
In this setting we do not expect a large improvement by providing topic information since the models have already been trained with arguments of the same topics as in the training set. However, we see a relative increase of about 10% for the two-classes and 20% for the three-classes classification problem by using cont...
TACAM: Topic And Context Aware Argument Mining
1906.00923
Table 2. Cross-Topic
['[EMPTY]', 'Method', 'Topics Abortion', 'Topics Cloning', 'Topics Death penalty', 'Topics Gun control', 'Topics Marij. legal.', 'Topics Min. wage', 'Topics Nucl. energy', 'Topics School unif.', '\\diameter']
[['two-classes', 'BiLSTM', '0.61', '0.72', '0.70', '0.75', '0.64', '0.62', '0.67', '0.54', '0.66'], ['two-classes', 'BiCLSTM', '0.67', '0.71', '0.71', '0.73', '0.69', '0.75', '0.71', '0.58', '0.70'], ['two-classes', 'TACAM-WE', '0.64', '0.71', '0.70', '0.74', '0.64', '0.63', '0.68', '0.55', '0.66'], ['two-classes', 'TA...
In this experiment, which reflects a real-life argument search scenario, we want to prove our two hypotheses: When classifying potential arguments, it is advantageous to take information about the topic into account. The context of an argument and topic context are important for the classification decision. On the whol...
TACAM: Topic And Context Aware Argument Mining
1906.00923
Table 4. Topic dependent cross-topic classification results
['[EMPTY]', 'Method', 'Topics Abortion', 'Topics Cloning', 'Topics Death penalty', 'Topics Gun control', 'Topics Marij. legal.', 'Topics Min. wage', 'Topics Nucl. energy', 'Topics School unif.', '\\diameter']
[['two-classes', 'BiLSTM', '0.57', '0.59', '0.53', '0.59', '0.62', '0.62', '0.59', '0.57', '0.58'], ['two-classes', 'BiCLSTM', '0.62', '0.72', '0.46', '0.46', '0.76', '0.60', '0.69', '0.45', '0.60'], ['two-classes', 'CAM-BERT Base', '0.56', '0.63', '0.60', '0.62', '0.61', '0.55', '0.60', '0.53', '0.59'], ['two-classes'...
For the two-classes problem we observe a massive performance drop of ten points in macro-f1 score for the BiCLSTM model. Nonetheless, the model still makes use of topic information and outperforms the standard BiLSTM by two macro-f1 score points. Our approach TACAM-BERT Base is more robust, the performance falls by mod...
A Bayesian Model for Generative Transition-based Dependency Parsing
1506.04334
Table 6: Language modelling test results. Above, training and testing on WSJ. Below, training semi-supervised and testing on WMT.
['Model', 'Perplexity']
[['HPYP 5-gram', '147.22'], ['ChelbaJ00', '146.1'], ['EmamiJ05', '131.3'], ['[BOLD] HPYP-DP', '[BOLD] 145.54'], ['HPYP 5-gram', '178.13'], ['[BOLD] HPYP-DP', '[BOLD] 163.96']]
We note that the perplexities reported are upper bounds on the true perplexity of the model, as it is intractable to sum over all possible parses of a sentence to compute the marginal probability of the words. As an approximation we sum over the final beam after decoding.
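The approximation described, summing the probabilities of the parses in the final beam as a stand-in for the full marginal, underestimates p(words) and therefore yields an upper bound on perplexity. A sketch in log space (function names are ours):

```python
import math

def beam_logprob(beam_logprobs):
    """log-sum-exp over the log-probabilities of the parses kept in
    the final beam: a lower bound on log p(words)."""
    m = max(beam_logprobs)
    return m + math.log(sum(math.exp(lp - m) for lp in beam_logprobs))

def perplexity(total_logprob, num_words):
    """Perplexity from a summed sentence log-probability."""
    return math.exp(-total_logprob / num_words)

# two surviving parses with probability 0.25 each -> p >= 0.5
lp = beam_logprob([math.log(0.25), math.log(0.25)])
ppl = perplexity(lp, 1)   # upper-bounds the true perplexity
```

Because every parse dropped from the beam carried positive probability, the summed value can only fall short of the true marginal, so the reported perplexities in the table are conservative.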
A Bayesian Model for Generative Transition-based Dependency Parsing
1506.04334
Table 3: Effect of including elements in the model conditioning contexts. Results are given on the YM development set.
['Context elements', 'UAS', 'LAS']
[['[ITALIC] σ1. [ITALIC] t, [ITALIC] σ2. [ITALIC] t', '73.25', '70.14'], ['+ [ITALIC] rc1( [ITALIC] σ1). [ITALIC] t', '80.21', '76.64'], ['+ [ITALIC] lc1( [ITALIC] σ1). [ITALIC] t', '85.18', '82.03'], ['+ [ITALIC] σ3. [ITALIC] t', '87.23', '84.26'], ['+ [ITALIC] rc1( [ITALIC] σ2). [ITALIC] t', '87.95', '85.04'], ['+ [I...
The first modelling choice is the selection and ordering of elements in the conditioning contexts of the HPYP priors. The first two words on the stack are the most important, but insufficient – second-order dependencies and further elements on the stack should also be included in the contexts. The challenge is that the...
A Bayesian Model for Generative Transition-based Dependency Parsing
1506.04334
Table 5: Parsing accuracies on the YM test set, compared against previously published results. TitovH07 was retrained to enable direct comparison.
['Model', 'UAS', 'LAS']
[['Eisner96', '80.7', '-'], ['WallachSM08', '85.7', '-'], ['TitovH07', '89.36', '87.65'], ['[BOLD] HPYP-DP', '[BOLD] 88.47', '[BOLD] 86.13'], ['MaltParser', '88.88', '87.41'], ['ZhangN11', '92.9', '91.8'], ['ChoiM13', '92.96', '91.93']]
Our HPYP model performs much better than Eisner’s generative model as well as the Bayesian version of that model proposed by WallachSM08 (the result for Eisner’s model is given as reported by WallachSM08 on the WSJ). The accuracy of our model is only 0.8 UAS below the generative model of TitovH0...
ESPnet: End-to-End Speech Processing Toolkit
1804.00015
Table 2: Comparisons (CER, WER, and training time) of the WSJ task with other end-to-end ASR systems.
['Method', 'Wall Clock Time', '# GPUs']
[['ESPnet (Chainer)', '20 hours', '1'], ['ESPnet (PyTorch)', '5 hours', '1'], ['seq2seq + CNN ', '120 hours', '10']]
The use of a deeper encoder network, integration of a character-based LSTMLM, and joint CTC/attention decoding steadily improved the performance. Comparison with these prior studies shows that ESPnet provides reasonable performance.
ESPnet: End-to-End Speech Processing Toolkit
1804.00015
Table 2: Comparisons (CER, WER, and training time) of the WSJ task with other end-to-end ASR systems.
['Method', 'Metric', 'dev93', 'eval92']
[['ESPnet with VGG2-BLSTM', 'CER', '10.1', '7.6'], ['+ BLSTM layers (4 → 6)', 'CER', '8.5', '5.9'], ['+ char-LSTMLM', 'CER', '8.3', '5.2'], ['+ joint decoding', 'CER', '5.5', '3.8'], ['+ label smoothing', 'CER', '5.3', '3.6'], ['[EMPTY]', 'WER', '12.4', '8.9'], ['seq2seq + CNN (no LM) ', 'WER', '[EMPTY]', '10.5'], ['se...
The use of a deeper encoder network, integration of a character-based LSTMLM, and joint CTC/attention decoding steadily improved the performance. Comparison with these prior studies shows that ESPnet provides reasonable performance.
Automatic Speech Recognition with Very Large Conversational Finnish and Estonian Vocabularies
1707.04227
TABLE III: Comparison of uniform data processing, random sampling of web data by 20 %, and weighted parameter updates from web data by a factor of 0.4, in NNLM training. The models were trained using normal softmax. Includes development set perplexity, word error rate (%), and word error rate after interpolation with t...
['Subset Processing', 'Training Time', 'Perplexity', 'WER', '+NGram']
[['[BOLD] Finnish, 5k classes', '[BOLD] Finnish, 5k classes', '[BOLD] Finnish, 5k classes', '[BOLD] Finnish, 5k classes', '[BOLD] Finnish, 5k classes'], ['Uniform', '143 h', '511', '26.0', '25.6'], ['Sampling', '128 h', '505', '26.2', '25.6'], ['Weighting', '101 h', '521', '26.4', '25.5'], ['[BOLD] Finnish, 42.5k subwo...
Optimizing the weights for neural network training is more difficult than for the n-gram mixture models. As we do not have a computational method for optimizing the weights, we tried a few values, observing the development set perplexity during training. Sampling 20 % of the web data on each iteration, or weighting the...
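The two subset-processing strategies compared in the table can be sketched as follows (a schematic with hypothetical names, not the paper's training code): sampling keeps each web-data example with probability 0.2 at full weight, while weighting keeps all web-data examples but scales their contribution to the parameter updates by 0.4.

```python
import random

def select_and_weight(batch, strategy, sample_p=0.2, weight=0.4):
    """Return (example, loss_scale) pairs for one training batch.
    Each example is (text, is_web); in-domain data always gets scale 1.0."""
    out = []
    for text, is_web in batch:
        if not is_web:
            out.append((text, 1.0))
        elif strategy == "sampling":
            # keep ~20% of web examples on each iteration, at full weight
            if random.random() < sample_p:
                out.append((text, 1.0))
        elif strategy == "weighting":
            # keep all web examples, but down-weight their updates
            out.append((text, weight))
    return out
```

Sampling shrinks each epoch (hence the shorter training times in the table), whereas weighting processes every example but attenuates the gradient from the out-of-domain portion.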
Automatic Generation of Language-Independent Featuresfor Cross-Lingual Classification
1802.04028
Table 4: Results for CLTC1, CLTC2, CLTC3 and UCLTC
['Setup', 'Source', 'Target', 'Examples', 'LIFG (%)']
[['CLTC1', 'F', 'F', '150', '65.70'], ['CLTC1', 'E', 'E', '150', '67.60'], ['CLTC1', 'G', 'G', '150', '67.10'], ['CLTC2', 'E', 'F', '150', '62.00'], ['CLTC2', 'G', 'F', '150', '59.60'], ['CLTC2', 'F', 'E', '150', '60.50'], ['CLTC2', 'G', 'E', '150', '61.80'], ['CLTC2', 'F', 'G', '150', '60.90'], ['CLTC2', 'E', 'G', '15...
We tested all combinations of source and target languages for all the CLTC setups. We can see similar patterns to those shown above. With every source language added to the training set, performance on the test set (now in two or three target languages) improves.
Automatic Generation of Language-Independent Featuresfor Cross-Lingual Classification
1802.04028
Table 1: CLTC Results on the Webis-CLS-10C Dataset
['Baseline', 'Source', 'Target', 'Baseline Results', 'LIFG']
[['SHFR-ECOC', 'E', 'F', '62.09', '90.00'], ['SHFR-ECOC', 'E', 'G', '65.22', '91.29'], ['Inverted', 'E', 'G', '49.00', '91.00'], ['DCI', 'E', 'F', '83.80', '90.38'], ['DCI', 'E', 'G', '83.80', '92.07']]
Following the baselines, we report the accuracy achieved, except when comparing with Inverted, which reported F1; in that case we report F1 as well.
Automatic Generation of Language-Independent Featuresfor Cross-Lingual Classification
1802.04028
Table 2: CLTC Results on Reuters RCV1/RCV2 Dataset
['Baseline', 'Source', 'Target', 'Baseline Results', 'LIFG']
[['SHFR-ECOC', 'E', 'S', '72.79', '85.70'], ['SHFR-ECOC', 'F', 'S', '73.82', '85.95'], ['[EMPTY]', 'G', 'S', '74.15', '87.16'], ['Inverted', 'E', 'G', '55.00', '89.00'], ['SHFA', 'E', 'S', '76.40', '85.70'], ['SHFA', 'F', 'S', '76.80', '85.95'], ['[EMPTY]', 'G', 'S', '77.10', '87.16'], ['DMMC', 'E', 'F', '65.52', '88.6...
Following the baselines, we report the accuracy achieved, except when comparing with Inverted, which reported F1; in that case we report F1 as well.
Automatic Generation of Language-Independent Featuresfor Cross-Lingual Classification
1802.04028
Table 3: The Effect of Hierarchical Feature Generation
['Source', 'Target', 'LIFG – w/o [ITALIC] CMeta', 'LIFG – w/ [ITALIC] CMeta']
[['E', 'F', '52.63', '62.03'], ['E', 'G', '55.19', '63.34'], ['F', 'E', '50.87', '60.49'], ['F', 'G', '49.32', '60.88'], ['G', 'E', '51.06', '59.61'], ['G', 'F', '50.01', '61.84']]
As can be seen, the improvement is significant, about 10% on average. Clearly, abstract features contribute significantly to performance and should therefore be used when available.
Query-Reduction Networksfor Question Answering
1606.04582
Figure 3: (top) bAbI QA dataset (Weston et al., 2016) visualization of update and reset gates in QRN ‘2r’ model (bottom two) bAbI dialog and DSTC2 dialog dataset (Bordes and Weston, 2016) visualization of update and reset gates in QRN ‘2r’ model. Note that the stories can have as many as 800+ sentences; we only show pa...
['Task 3: Displaying options', 'Layer 1 [ITALIC] z1', 'Layer 1 → [ITALIC] r1', 'Layer 1 ← [ITALIC] r1', 'Layer 2 [ITALIC] z2']
[['resto-paris-expen-frech-8stars?', '0.00', '1.00', '0.96', '0.91'], ['Do you have something else?', '0.41', '0.99', '0.00', '0.00'], ['Sure let me find another option.', '1.00', '0.00', '0.00', '0.12'], ['resto-paris-expen-frech-5stars?', '0.00', '1.00', '0.96', '0.91'], ['No this does not work for me.', '0.00', '0.0...
In QA Task 2 example (top left), we observe high update gate values in the first layer on facts that state who has the apple, and in the second layer, the high update gate values are on those that inform where that person went to. We also observe that the forward reset gate at t=2 in the first layer (→r12) is low, whic...
Query-Reduction Networksfor Question Answering
1606.04582
Figure 3: (top) bAbI QA dataset (Weston et al., 2016) visualization of update and reset gates in QRN ‘2r’ model (bottom two) bAbI dialog and DSTC2 dialog dataset (Bordes and Weston, 2016) visualization of update and reset gates in QRN ‘2r’ model. Note that the stories can have as many as 800+ sentences; we only show pa...
['Task 2: Two Supporting Facts', 'Layer 1 [ITALIC] z1', 'Layer 1 → [ITALIC] r1', 'Layer 1 ← [ITALIC] r1', 'Layer 2 [ITALIC] z2']
[['Sandra picked up the apple there.', '0.95', '0.89', '0.98', '0.00'], ['Sandra dropped the apple.', '0.83', '0.05', '0.92', '0.01'], ['Daniel grabbed the apple there.', '0.88', '0.93', '0.98', '0.00'], ['Sandra travelled to the bathroom.', '0.01', '0.18', '0.63', '0.02'], ['Daniel went to the hallway.', '0.01', '0.24...
In QA Task 2 example (top left), we observe high update gate values in the first layer on facts that state who has the apple, and in the second layer, the high update gate values are on those that inform where that person went to. We also observe that the forward reset gate at t=2 in the first layer (→r12) is low, whic...
Query-Reduction Networksfor Question Answering
1606.04582
Figure 3: (top) bAbI QA dataset (Weston et al., 2016) visualization of update and reset gates in QRN ‘2r’ model (bottom two) bAbI dialog and DSTC2 dialog dataset (Bordes and Weston, 2016) visualization of update and reset gates in QRN ‘2r’ model. Note that the stories can have as many as 800+ sentences; we only show pa...
['Task 15: Deduction', 'Layer 1 [ITALIC] z1', 'Layer 1 → [ITALIC] r1', 'Layer 1 ← [ITALIC] r1', 'Layer 2 [ITALIC] z2']
[['Mice are afraid of wolves.', '0.11', '0.99', '0.13', '0.78'], ['Gertrude is a mouse.', '0.77', '0.99', '0.96', '0.00'], ['Cats are afraid of sheep.', '0.01', '0.99', '0.07', '0.03'], ['Winona is a mouse.', '0.14', '0.85', '0.77', '0.05'], ['Sheep are afraid of wolves.', '0.02', '0.98', '0.27', '0.05'], ['What is Ger...
In QA Task 2 example (top left), we observe high update gate values in the first layer on facts that state who has the apple, and in the second layer, the high update gate values are on those that inform where that person went to. We also observe that the forward reset gate at t=2 in the first layer (→r12) is low, whic...
Query-Reduction Networksfor Question Answering
1606.04582
Figure 3: (top) bAbI QA dataset (Weston et al., 2016) visualization of update and reset gates in QRN ‘2r’ model (bottom two) bAbI dialog and DSTC2 dialog dataset (Bordes and Weston, 2016) visualization of update and reset gates in QRN ‘2r’ model. Note that the stories can have as many as 800+ sentences; we only show pa...
['Task 6: DSTC2 dialog', 'Layer 1 [ITALIC] z1', 'Layer 1 → [ITALIC] r1', 'Layer 1 ← [ITALIC] r1', 'Layer 2 [ITALIC] z2']
[['Spanish food.', '0.84', '0.07', '0.00', '0.82'], ['You are lookng for a spanish restaurant right?', '0.98', '0.02', '0.49', '0.75'], ['Yes.', '0.01', '1.00', '0.33', '0.13'], ['What part of town do you have in mind?', '0.20', '0.73', '0.41', '0.11'], ['I don’t care.', '0.00', '1.00', '0.02', '0.00'], ['What price ra...
In QA Task 2 example (top left), we observe high update gate values in the first layer on facts that state who has the apple, and in the second layer, the high update gate values are on those that inform where that person went to. We also observe that the forward reset gate at t=2 in the first layer (→r12) is low, whic...
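The gate behaviour visualized above can be illustrated with a generic update-gated recurrence (a simplified numpy sketch with hypothetical weight names, not the exact QRN parameterization): an update gate z in [0,1] decides how much a candidate state, built from the current sentence and the query, overwrites the previous state.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_recurrence(xs, q, Wz, Wh):
    """Run a simplified update-gated recurrence over sentence vectors xs
    given a query vector q. High z on a sentence means it strongly
    updates the state, as in the gate visualizations above."""
    h = np.zeros_like(q)
    for x in xs:
        xq = np.concatenate([x, q])
        z = sigmoid(Wz @ xq)        # update gate in [0, 1]
        h_cand = np.tanh(Wh @ xq)   # candidate state from sentence + query
        h = z * h_cand + (1.0 - z) * h  # interpolate with previous state
    return h
```

Since each state is a convex combination of tanh outputs, its entries stay in [-1, 1]; reading out z per sentence gives heatmaps like those shown.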
Query-Reduction Networksfor Question Answering
1606.04582
Table 2: bAbI QA dataset [Weston et al., 2016] error rates (%) of QRN and previous work: LSTM [Weston et al., 2016], End-to-end Memory Networks (N2N) [Sukhbaatar et al., 2015], Dynamic Memory Networks (DMN+) [Xiong et al., 2016], Gated End-to-end Memory Networks(GMemN2N) [Perez and Liu, 2016]. Results within each task ...
['Task', '1k Previous works', '1k Previous works', '1k Previous works', '1k Previous works', '1k QRN', '1k QRN', '1k QRN', '1k QRN', '1k QRN', '1k QRN', '10k Previous works', '10k Previous works', '10k Previous works', '10k QRN', '10k QRN', '10k QRN', '10k QRN']
[['Task', 'LSTM', 'N2N', 'DMN+', 'GMemN2N', '1r', '2', '2r', '3r', '6r', '6r200*', 'N2N', 'DMN+', 'GMemN2N', '2r', '2rv', '3r', '6r200'], ['1: Single supporting fact', '50.0', '0.1', '1.3', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '13.1', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0'], ['2: Two supporting facts', '8...
On the 1k dataset, QRN’s ‘2r’ (2 layers + reset gate + d=50) outperforms all other models by a large margin (2.8+%). On the 10k dataset, the average accuracy of QRN’s ‘6r200’ (6 layers + reset gate + d=200) model outperforms all previous models by a large margin (2.5+%), achieving a nearly perfect score of 99.7%.
Video Object Grounding using Semantic Roles in Language Description
2003.10606
Table 4: Evaluation of VOGNet in GT5 setting by training (first column) and testing (top row) on SVSQ, TEMP, SPAT respectively
['[EMPTY]', 'SVSQ Acc', 'SVSQ SAcc', 'TEMP Acc', 'TEMP SAcc', 'SPAT Acc', 'SPAT SAcc']
[['SVSQ', '76.38', '59.58', '1.7', '0.42', '2.27', '0.6'], ['TEMP', '75.4', '57.38', '23.07', '12.06', '18.03', '8.16'], ['SPAT', '75.15', '57.02', '22.6', '11.04', '23.53', '11.58']]
However, the reverse is not true, i.e., models trained on SVSQ fail miserably in SPAT and TEMP (accuracy is <3%). This suggests that both TEMP and SPAT moderately counter the bias caused by having a single object instance in a video. Interestingly, while VOGNet trained on TEMP doesn’t perform well on SPAT (performance is...
Video Object Grounding using Semantic Roles in Language Description
2003.10606
Table 3: Comparison of VOGNet against ImgGrnd and VidGrnd. GT5 and P100 use 5 and 100 proposals per frame. Here, Acc: Grounding Accuracy, VAcc: Video accuracy, Cons: Consistency, SAcc: Strict Accuracy (see Section 4.3 for details). On the challenging evaluation metrics of TEMP and SPAT, VOGNet (ours) shows significant ...
['[EMPTY]', 'Model', 'SVSQ Acc', 'SVSQ SAcc', 'SEP Acc', 'SEP VAcc', 'SEP SAcc', 'TEMP Acc', 'TEMP VAcc', 'TEMP Cons', 'TEMP SAcc', 'SPAT Acc', 'SPAT VAcc', 'SPAT Cons', 'SPAT SAcc']
[['GT5', 'ImgGrnd', '75.31', '56.53', '39.78', '51.14', '30.34', '17.02', '7.24', '34.73', '7.145', '16.93', '9.38', '49.21', '7.02'], ['GT5', 'VidGrnd', '75.42', '57.16', '41.59', '54.16', '31.22', '19.92', '8.83', '31.70', '8.67', '20.18', '11.39', '49.01', '8.64'], ['GT5', 'VOGNet', '[BOLD] 76.34', '[BOLD] 58.85', '...
across GT5 (5 proposal boxes per frame) and P100 (100 proposal boxes per frame). In practice, the SPAT and TEMP strategies, when applied to contrastive videos from ActivityNet, are effective proxies for obtaining naturally occurring contrastive examples from the web.
Video Object Grounding using Semantic Roles in Language Description
2003.10606
Table 7: Ablative study comparing gains from Multi-Modal Transformer (MTx) and Object Transformer (OTx) and Relative Position Encoding (RPE). L: Number of Layers, H: Number of Heads in the Transformer. Note that VOGNet = ImgGrnd +MTx(1L,3H) +OTx(1L,3H) + RPE
['SPAT', 'Acc', 'VAcc', 'Cons', 'SAcc']
[['ImgGrnd', '17.03', '9.71', '50.41', '7.14'], ['+OTx(1L, 3H)', '19.8', '10.91', '48.34', '8.45'], ['+RPE', '20.2', '11.66', '49.21', '9.28'], ['+MTx(1L, 3H)', '19.23', '10.49', '48.19', '8.14'], ['+RPE', '19.09', '10.46', '50.09', '8.23'], ['+OTx(3L, 6H)', '21.14', '12.1', '49.66', '9.52'], ['+OTx + MTx + RPE', '[BOL...
Ablation Study: We observe: (i) self-attention via the object transformer is an effective way to encode object relations across frames; (ii) the multi-modal transformer applied to individual frames gives modest gains but falls short of the object transformer due to the lack of temporal information; (iii) relative position encoding (RPE) boosts str...
Video Object Grounding using Semantic Roles in Language Description
2003.10606
Table 3: Total number of lemmatized words (with at least 20 occurrence) in the train set of ASRL.
['V', 'Arg0', 'Arg1', 'Arg2', 'ArgM-LOC']
[['338', '93', '281', '114', '59']]
In comparison, Arg0 is highly unbalanced as agents are mostly restricted to “people”. We also observe that “man” appears much more often than “woman”/“she”. This indicates gender bias in video curation or video description. Another interesting observation is that ...
Video Object Grounding using Semantic Roles in Language Description
2003.10606
Table 4: Comparing models trained with GT5 and P100. All models are tested in P100 setting.
['Model', 'Train', 'SVSQ Acc', 'SVSQ SAcc', 'SEP Acc', 'SEP VAcc', 'SEP SAcc', 'TEMP Acc', 'TEMP VAcc', 'TEMP Cons', 'TEMP SAcc', 'SPAT Acc', 'SPAT VAcc', 'SPAT Cons', 'SPAT SAcc']
[['ImgGrnd', 'GT5', '46.31', '24.83', '20.55', '47.49', '9.92', '8.06', '2.68', '25.35', '2.68', '4.64', '2.47', '34.17', '1.31'], ['ImgGrnd', 'P100', '55.22', '32.7', '26.29', '46.9', '15.4', '9.71', '3.59', '22.97', '3.49', '7.39', '4.02', '37.15', '2.72'], ['VidGrnd', 'GT5', '43.37', '22.64', '22.67', '49.6', '11.67...
GT5 models in P100 setting: While testing in P100, for TEMP and SPAT, we set the threshold for models trained in GT5 to 0.5, which is higher than the threshold used when testing in GT5 (0.2). This is expected, as a lower threshold would imply a higher chance of a false positive.
Video Object Grounding using Semantic Roles in Language Description
2003.10606
Table 5: Ablative study layers and heads of Transformers.
['SPAT', 'Acc', 'VAcc', 'Cons', 'SAcc']
[['ImgGrnd', '17.03', '9.71', '50.41', '7.14'], ['+OTx (1L, 3H)', '19.8', '10.91', '48.34', '8.45'], ['+OTx (2L, 3H)', '20.8', '11.38', '49.45', '9.17'], ['+OTx (2L, 6H)', '[BOLD] 21.16', '[BOLD] 12.2', '48.86', '[BOLD] 9.58'], ['+OTx (3L, 3H)', '20.68', '11.34', '48.66', '9.19'], ['+OTx (3L, 6H)', '21.14', '12.1', '[B...
Transformer Ablation: It is interesting to note that adding more heads helps more than adding more layers for the object transformer, while for the multi-modal transformer both more heads and more layers help. Finally, we find that simply adding more layers and heads to the object transformer is insufficient, as a ...
Tracking Amendments to Legislation and Other Political Texts with a Novel Minimum-Edit-Distance Algorithm: DocuToads.
1608.06459
(e) Transposition
['[EMPTY]', '[BOLD] A', '[BOLD] simple', 'minimum', 'edit', 'distance', 'algorithm']
[['minimum', '0', '0', '1', '0', '0', '0'], ['edit', '0', '0', '0', '2', '0', '0'], ['distance', '0', '0', '0', '0', '3', '0'], ['algorithm', '0', '0', '0', '0', '0', '4'], ['[BOLD] A', '[BOLD] 1', '0', '0', '0', '0', '0'], ['[BOLD] simple', '0', '[BOLD] 2', '0', '0', '0', '0']]
The most common source of disagreement in our sample of articles can be traced primarily to the addition of new articles. In such cases, new text was added between two previously existing articles in the reference document. DocuToads, however, has to record the additions at an index position in the reference text which...
Tracking Amendments to Legislation and Other Political Texts with a Novel Minimum-Edit-Distance Algorithm: DocuToads.
1608.06459
(a) Identical text sequences
['[EMPTY]', 'A', 'simple', 'minimum', 'edit', 'distance', 'algorithm']
[['A', '[BOLD] 1', '0', '0', '0', '0', '0'], ['simple', '0', '[BOLD] 2', '0', '0', '0', '0'], ['minimum', '0', '0', '[BOLD] 3', '0', '0', '0'], ['edit', '0', '0', '0', '[BOLD] 4', '0', '0'], ['distance', '0', '0', '0', '0', '[BOLD] 5', '0'], ['algorithm', '0', '0', '0', '0', '0', '[BOLD] 6']]
The most common source of disagreement in our sample of articles can be traced primarily to the addition of new articles. In such cases, new text was added between two previously existing articles in the reference document. DocuToads, however, has to record the additions at an index position in the reference text which...
Tracking Amendments to Legislation and Other Political Texts with a Novel Minimum-Edit-Distance Algorithm: DocuToads.
1608.06459
(b) Deletion
['[EMPTY]', 'A', '[BOLD] simple', 'minimum', 'edit', 'distance', 'algorithm']
[['A', '1', '[BOLD] 0', '0', '0', '0', '0'], ['minimum', '0', '[BOLD] 0', '1', '0', '0', '0'], ['edit', '0', '[BOLD] 0', '0', '2', '0', '0'], ['distance', '0', '[BOLD] 0', '0', '0', '3', '0'], ['algorithm', '0', '[BOLD] 0', '0', '0', '0', '4']]
The most common source of disagreement in our sample of articles can be traced primarily to the addition of new articles. In such cases, new text was added between two previously existing articles in the reference document. DocuToads, however, has to record the additions at an index position in the reference text which...
Tracking Amendments to Legislation and Other Political Texts with a Novel Minimum-Edit-Distance Algorithm: DocuToads.
1608.06459
(c) Addition
['[EMPTY]', 'A', 'simple', 'minimum', 'edit', 'distance', 'algorithm']
[['A', '1', '0', '0', '0', '0', '0'], ['simple', '0', '2', '0', '0', '0', '0'], ['[BOLD] new', '[BOLD] 0', '[BOLD] 0', '[BOLD] 0', '[BOLD] 0', '[BOLD] 0', '[BOLD] 0'], ['minimum', '0', '0', '1', '0', '0', '0'], ['edit', '0', '0', '0', '2', '0', '0'], ['distance', '0', '0', '0', '0', '3', '0'], ['algorithm', '0', '0', '...
The most common source of disagreement in our sample of articles can be traced primarily to the addition of new articles. In such cases, new text was added between two previously existing articles in the reference document. DocuToads, however, has to record the additions at an index position in the reference text which...
Tracking Amendments to Legislation and Other Political Texts with a Novel Minimum-Edit-Distance Algorithm: DocuToads.
1608.06459
(d) Substitution
['[EMPTY]', 'A', '[BOLD] simple', 'minimum', 'edit', 'distance', 'algorithm']
[['A', '1', '[BOLD] 0', '0', '0', '0', '0'], ['[BOLD] new', '[BOLD] 0', '[BOLD] 0', '[BOLD] 0', '[BOLD] 0', '[BOLD] 0', '[BOLD] 0'], ['minimum', '0', '[BOLD] 0', '1', '0', '0', '0'], ['edit', '0', '[BOLD] 0', '0', '2', '0', '0'], ['distance', '0', '[BOLD] 0', '0', '0', '3', '0'], ['algorithm', '0', '[BOLD] 0', '0', '0'...
The most common source of disagreement in our sample of articles can be traced primarily to the addition of new articles. In such cases, new text was added between two previously existing articles in the reference document. DocuToads, however, has to record the additions at an index position in the reference text which...
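The alignment matrices shown in panels (a)-(e) can be reproduced with a longest-common-substring style dynamic program over the token sequences (my own reconstruction of the displayed matrices, not the DocuToads source code): a cell extends the diagonal run when tokens match and is 0 otherwise, so unbroken diagonals of 1, 2, 3, ... mark unchanged text, while broken or shifted runs reveal deletions, additions, substitutions, and transpositions.

```python
def match_matrix(amended, reference):
    """m[i][j] = length of the run of consecutive matching tokens ending
    at amended[i] / reference[j]; 0 where the tokens differ."""
    m = [[0] * len(reference) for _ in amended]
    for i, a in enumerate(amended):
        for j, r in enumerate(reference):
            if a == r:
                # extend the diagonal run from the previous token pair
                m[i][j] = 1 + (m[i - 1][j - 1] if i and j else 0)
    return m
```

For instance, deleting "simple" from "A simple minimum edit distance algorithm" zeroes out that column and restarts the diagonal run at 1, matching panel (b).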
Explaining Question Answering Models through Text Generation
2004.05569
Table 10: Results of End2End compared to our model (with GS and ST variants) on hypernym extraction.
['Model', 'Accuracy']
[['+GS +ST', '84.0'], ['+GS -ST', '61.0'], ['-GS +ST', '84.7'], ['-GS -ST', '54.7'], ['End2End', '86.5']]
We report results on the synthetic hypernym extraction task with and without the Gumbel-softmax trick and the ST estimator. We observe that the ST estimator is crucial even on such a simple task, which aligns with the prior observation of Havrylov and Titov (2017) that ST helps overcome the discrepancy between training time and ...
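The Gumbel-softmax sample with a straight-through hard version can be sketched as follows (a minimal numpy illustration of the generic technique, not the paper's implementation): in an autograd framework the ST estimator would combine the two as `hard - soft.detach() + soft`, so the forward pass is discrete while gradients flow through the soft sample.

```python
import numpy as np

def gumbel_softmax_st(logits, tau=1.0, rng=None):
    """Return (hard, soft): a relaxed one-hot sample from `logits` with
    Gumbel noise at temperature `tau`, and its argmax one-hot used by
    the straight-through forward pass."""
    rng = rng or np.random.default_rng()
    # Gumbel(0, 1) noise via the inverse-CDF trick
    g = -np.log(-np.log(rng.uniform(1e-10, 1.0, size=logits.shape)))
    y = logits + g
    soft = np.exp((y - y.max()) / tau)   # tempered softmax, stabilized
    soft /= soft.sum()
    hard = np.zeros_like(soft)
    hard[np.argmax(soft)] = 1.0          # discrete forward-pass sample
    return hard, soft
```

Lower `tau` sharpens the soft distribution toward the one-hot, shrinking the train/test discrepancy the ST estimator is meant to bridge.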
Explaining Question Answering Models through Text Generation
2004.05569
Table 3: Human-evaluation results for how reasonable hypotheses are (CSQA development set). Each rater determined whether a hypothesis is reasonable (1 point), somewhat reasonable (0.5 point) or not reasonable (0 points). The score is the average rating across raters and examples.
['Model', 'Score']
[['| [ITALIC] c|=3+KLD+REP', '0.72'], ['Top- [ITALIC] K=5 ST', '[BOLD] 0.74'], ['SupGen | [ITALIC] c|=3', '0.60'], ['SupGen | [ITALIC] c|=30', '0.55']]
Top-K=5 ST achieved the highest score of 0.74. While SupGen models produce more natural texts, they are judged to be less reasonable in the context of the question.
Capsule-Transformer for Neural Machine Translation
2004.14649
Table 2: Effect in encoder and decoder.
['[BOLD] #', '[ITALIC] Layers', '[BOLD] BLEU']
[['1', '-', '24.28'], ['2', '1-3', '24.64'], ['3', '4-6', '24.48'], ['4', '1-6', '24.87']]
Effect on Transformer Components: To evaluate the effect of capsule routing SAN in the encoder and decoder, we perform an ablation study. Notably, the modified decoder still outperforms the baseline even after we have removed the vertical routing part, which demonstrates the effectiveness of our model. The row 4 pr...
Capsule-Transformer for Neural Machine Translation
2004.14649
Table 1: Comparing with existing NMT systems on WMT17 Chinese-to-English (Zh-En) and WMT14 English-to-German (En-De) tasks.
['[BOLD] System', '[BOLD] Architecture', '[BOLD] Zh-En', '[BOLD] En-De']
[['[ITALIC] Existing NMT Systems', '[ITALIC] Existing NMT Systems', '[ITALIC] Existing NMT Systems', '[ITALIC] Existing NMT Systems'], ['Wu et al. ( 2016 )', 'RNN with 8 layers', '-', '26.30'], ['Gehring et al. ( 2017 )', 'CNN with 15 layers', '-', '26.36'], ['Vaswani et al. ( 2017 )', 'Transformer- [ITALIC] Base', '-'...
As shown in the table, our capsule-Transformer model consistently improves the performance across both language pairs and model variations, which shows the effectiveness and generalization ability of our approach. For WMT17 Zh-En task, our model outperforms all the models listed above, especially only the capsule-Trans...