| gem_id | paper_id | paper_title | paper_abstract | paper_content | paper_headers | slide_id | slide_title | slide_content_text | target | references |
|---|---|---|---|---|---|---|---|---|---|---|
| GEM-SciDuet-train-8#paper-972#slide-9 | 972 | Obtaining SMT dictionaries for related languages | This study explores methods for developing Machine Translation dictionaries on the basis of word frequency lists coming from comparable corpora. We investigate (1) various methods to measure the similarity of cognates between related languages, (2) detection and removal of noisy cognate translations using SVM ranking. ... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "2.1", "2.2", "3", "3.1", "3.2", "3.3", "4"], "paper_header_content": ["Introduction", "Methodology", "Cognate detection", "Cognate Ranking", "Results and Discussion", "Data", "Evaluation of the Ranking Model", ... | GEM-SciDuet-train-8#paper-972#slide-9 | Manual evaluation | Conclusions Results Machine Translation Results on sample of 100 words n-best lists L, L-R, L-C ranking model E List L List L-R List L-C | Conclusions Results Machine Translation Results on sample of 100 words n-best lists L, L-R, L-C ranking model E List L List L-R List L-C | [] |
| GEM-SciDuet-train-8#paper-972#slide-10 | 972 | Obtaining SMT dictionaries for related languages | This study explores methods for developing Machine Translation dictionaries on the basis of word frequency lists coming from comparable corpora. We investigate (1) various methods to measure the similarity of cognates between related languages, (2) detection and removal of noisy cognate translations using SVM ranking. ... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "2.1", "2.2", "3", "3.1", "3.2", "3.3", "4"], "paper_header_content": ["Introduction", "Methodology", "Cognate detection", "Cognate Ranking", "Results and Discussion", "Data", "Evaluation of the Ranking Model", ... | GEM-SciDuet-train-8#paper-972#slide-10 | Addition of lists SMT | 1-best lists with L-C and E ranking pt-es: 80K training sentences, 100K cognate pairs significant uk-ru: 140K training sentences, 100K cognate pairs | 1-best lists with L-C and E ranking pt-es: 80K training sentences, 100K cognate pairs significant uk-ru: 140K training sentences, 100K cognate pairs | [] |
| GEM-SciDuet-train-8#paper-972#slide-12 | 972 | Obtaining SMT dictionaries for related languages | This study explores methods for developing Machine Translation dictionaries on the basis of word frequency lists coming from comparable corpora. We investigate (1) various methods to measure the similarity of cognates between related languages, (2) detection and removal of noisy cognate translations using SVM ranking. ... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "2.1", "2.2", "3", "3.1", "3.2", "3.3", "4"], "paper_header_content": ["Introduction", "Methodology", "Cognate detection", "Cognate Ranking", "Results and Discussion", "Data", "Evaluation of the Ranking Model", ... | GEM-SciDuet-train-8#paper-972#slide-12 | Conclusions | MT dictionaries extracted from comparable resources for related languages Positive results on the n-best lists with L-C Frequency window heuristic shows poor results ML models are able to rank similar words on the top of the list Preliminary results on an SMT system show modest improvements compared to the baseline The O... | MT dictionaries extracted from comparable resources for related languages Positive results on the n-best lists with L-C Frequency window heuristic shows poor results ML models are able to rank similar words on the top of the list Preliminary results on an SMT system show modest improvements compared to the baseline The O... | [] |
| GEM-SciDuet-train-8#paper-972#slide-13 | 972 | Obtaining SMT dictionaries for related languages | This study explores methods for developing Machine Translation dictionaries on the basis of word frequency lists coming from comparable corpora. We investigate (1) various methods to measure the similarity of cognates between related languages, (2) detection and removal of noisy cognate translations using SVM ranking. ... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "2.1", "2.2", "3", "3.1", "3.2", "3.3", "4"], "paper_header_content": ["Introduction", "Methodology", "Cognate detection", "Cognate Ranking", "Results and Discussion", "Data", "Evaluation of the Ranking Model", ... | GEM-SciDuet-train-8#paper-972#slide-13 | Future work | Morphology features for the n-best list (Unsupervised) Instead of prefix heuristic (L-C) and stemmer (L-R) Contribution for all the produced cognate lists on SMT Using char-based transliteration model trained on Zoo plus n-best lists Motivation alignment learns useful transformations: e.g. introducao (pt) vs introducci... | Morphology features for the n-best list (Unsupervised) Instead of prefix heuristic (L-C) and stemmer (L-R) Contribution for all the produced cognate lists on SMT Using char-based transliteration model trained on Zoo plus n-best lists Motivation alignment learns useful transformations: e.g. introducao (pt) vs introducci... | [] |
| GEM-SciDuet-train-9#paper-975#slide-0 | 975 | Neural Models for Documents with Metadata | Most real-world document collections involve various types of metadata, such as author, source, and date, and yet the most commonly-used approaches to modeling text corpora ignore this information. While specialized models have been developed for particular applications, few are widely used in practice, as customizatio... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5", "6"], "paper_header_content": ["Introduction", "Background and Motivation", "SCHOLAR: A Neural Topic Model with Covariates, Supervision, and Sparsi... | GEM-SciDuet-train-9#paper-975#slide-0 | Latent Dirichlet Allocation | David Blei. Probabilistic topic models. Comm. ACM. 2012 | David Blei. Probabilistic topic models. Comm. ACM. 2012 | [] |
| GEM-SciDuet-train-9#paper-975#slide-2 | 975 | Neural Models for Documents with Metadata | Most real-world document collections involve various types of metadata, such as author, source, and date, and yet the most commonly-used approaches to modeling text corpora ignore this information. While specialized models have been developed for particular applications, few are widely used in practice, as customizatio... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5", "6"], "paper_header_content": ["Introduction", "Background and Motivation", "SCHOLAR: A Neural Topic Model with Covariates, Supervision, and Sparsi... | GEM-SciDuet-train-9#paper-975#slide-2 | Variations and extensions | Author topic model (Rosen-Zvi et al 2004) Supervised LDA (SLDA; McAuliffe and Blei, 2008) Dirichlet multinomial regression (Mimno and McCallum, 2008) Sparse additive generative models (SAGE; Eisenstein et al, Structural topic model (Roberts et al, 2014) | Author topic model (Rosen-Zvi et al 2004) Supervised LDA (SLDA; McAuliffe and Blei, 2008) Dirichlet multinomial regression (Mimno and McCallum, 2008) Sparse additive generative models (SAGE; Eisenstein et al, Structural topic model (Roberts et al, 2014) | [] |
| GEM-SciDuet-train-9#paper-975#slide-3 | 975 | Neural Models for Documents with Metadata | Most real-world document collections involve various types of metadata, such as author, source, and date, and yet the most commonly-used approaches to modeling text corpora ignore this information. While specialized models have been developed for particular applications, few are widely used in practice, as customizatio... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5", "6"], "paper_header_content": ["Introduction", "Background and Motivation", "SCHOLAR: A Neural Topic Model with Covariates, Supervision, and Sparsi... | GEM-SciDuet-train-9#paper-975#slide-3 | Desired features of model | Easy modification by end-users. Covariates: features which influence text (as in SAGE). Labels: features to be predicted along with text (as in SLDA). Possibility of sparse topics. Incorporate additional prior knowledge. Use variational autoencoder (VAE) style of inference (Kingma | Easy modification by end-users. Covariates: features which influence text (as in SAGE). Labels: features to be predicted along with text (as in SLDA). Possibility of sparse topics. Incorporate additional prior knowledge. Use variational autoencoder (VAE) style of inference (Kingma | [] |
| GEM-SciDuet-train-9#paper-975#slide-4 | 975 | Neural Models for Documents with Metadata | Most real-world document collections involve various types of metadata, such as author, source, and date, and yet the most commonly-used approaches to modeling text corpora ignore this information. While specialized models have been developed for particular applications, few are widely used in practice, as customizatio... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5", "6"], "paper_header_content": ["Introduction", "Background and Motivation", "SCHOLAR: A Neural Topic Model with Covariates, Supervision, and Sparsi... | GEM-SciDuet-train-9#paper-975#slide-4 | Desired outcome | Coherent groupings of words (something like topics), with offsets for observed metadata Encoder to map from documents to latent representations Classifier to predict labels from latent representation | Coherent groupings of words (something like topics), with offsets for observed metadata Encoder to map from documents to latent representations Classifier to predict labels from latent representation | [] |
| GEM-SciDuet-train-9#paper-975#slide-5 | 975 | Neural Models for Documents with Metadata | Most real-world document collections involve various types of metadata, such as author, source, and date, and yet the most commonly-used approaches to modeling text corpora ignore this information. While specialized models have been developed for particular applications, few are widely used in practice, as customizatio... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5", "6"], "paper_header_content": ["Introduction", "Background and Motivation", "SCHOLAR: A Neural Topic Model with Covariates, Supervision, and Sparsi... | GEM-SciDuet-train-9#paper-975#slide-5 | Model | Generator network: p(w ∣ θ_i) = f_g(θ_i). ELBO = E_q[log p(words ∣ r_i)] − D_KL[q(r_i ∣ words) ‖ p(r_i)]. Encoder network: q(θ_i ∣ w) = f_e(w). | Generator network: p(w ∣ θ_i) = f_g(θ_i). ELBO = E_q[log p(words ∣ r_i)] − D_KL[q(r_i ∣ words) ‖ p(r_i)]. Encoder network: q(θ_i ∣ w) = f_e(w). | [] |
| GEM-SciDuet-train-9#paper-975#slide-6 | 975 | Neural Models for Documents with Metadata | Most real-world document collections involve various types of metadata, such as author, source, and date, and yet the most commonly-used approaches to modeling text corpora ignore this information. While specialized models have been developed for particular applications, few are widely used in practice, as customizatio... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5", "6"], "paper_header_content": ["Introduction", "Background and Motivation", "SCHOLAR: A Neural Topic Model with Covariates, Supervision, and Sparsi... | GEM-SciDuet-train-9#paper-975#slide-6 | Scholar | p(word ∣ θ_i, c_i) = softmax(d + θ_i^T B^(topic) + c_i^T B^(cov)) Optionally include interactions between topics and covariates p(y_i ∣ θ_i, c_i) = f_y(θ_i, c_i) θ_i = softmax(f(words, c_i, y_i)) Optional incorporation of word vectors to embed input | p(word ∣ θ_i, c_i) = softmax(d + θ_i^T B^(topic) + c_i^T B^(cov)) Optionally include interactions between topics and covariates p(y_i ∣ θ_i, c_i) = f_y(θ_i, c_i) θ_i = softmax(f(words, c_i, y_i)) Optional incorporation of word vectors to embed input | [] |
| GEM-SciDuet-train-9#paper-975#slide-7 | 975 | Neural Models for Documents with Metadata | Most real-world document collections involve various types of metadata, such as author, source, and date, and yet the most commonly-used approaches to modeling text corpora ignore this information. While specialized models have been developed for particular applications, few are widely used in practice, as customizatio... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5", "6"], "paper_header_content": ["Introduction", "Background and Motivation", "SCHOLAR: A Neural Topic Model with Covariates, Supervision, and Sparsi... | GEM-SciDuet-train-9#paper-975#slide-7 | Optimization | Tricks from Srivastava and Sutton, 2017: Adam optimizer with high learning rate to bypass mode collapse Batch-norm layers to avoid divergence Annealing away from batch-norm output to keep results interpretable | Tricks from Srivastava and Sutton, 2017: Adam optimizer with high learning rate to bypass mode collapse Batch-norm layers to avoid divergence Annealing away from batch-norm output to keep results interpretable | [] |
| GEM-SciDuet-train-9#paper-975#slide-8 | 975 | Neural Models for Documents with Metadata | Most real-world document collections involve various types of metadata, such as author, source, and date, and yet the most commonly-used approaches to modeling text corpora ignore this information. While specialized models have been developed for particular applications, few are widely used in practice, as customizatio... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5", "6"], "paper_header_content": ["Introduction", "Background and Motivation", "SCHOLAR: A Neural Topic Model with Covariates, Supervision, and Sparsi... | GEM-SciDuet-train-9#paper-975#slide-8 | Output of Scholar | B^(topic), B^(cov): Coherent groupings of positive and negative deviations from background (≈ topics) f_e: Encoder network mapping from words to topics: θ_i = softmax(f_e(words, c_i, y_i)) f_y: Classifier mapping from θ_i to labels: y = f_y(θ_i, c_i) | B^(topic), B^(cov): Coherent groupings of positive and negative deviations from background (≈ topics) f_e: Encoder network mapping from words to topics: θ_i = softmax(f_e(words, c_i, y_i)) f_y: Classifier mapping from θ_i to labels: y = f_y(θ_i, c_i) | [] |
| GEM-SciDuet-train-9#paper-975#slide-9 | 975 | Neural Models for Documents with Metadata | Most real-world document collections involve various types of metadata, such as author, source, and date, and yet the most commonly-used approaches to modeling text corpora ignore this information. While specialized models have been developed for particular applications, few are widely used in practice, as customizatio... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5", "6"], "paper_header_content": ["Introduction", "Background and Motivation", "SCHOLAR: A Neural Topic Model with Covariates, Supervision, and Sparsi... | GEM-SciDuet-train-9#paper-975#slide-9 | Evaluation | 1. Performance as a topic model, without metadata (perplexity, coherence) 2. Performance as a classifier, compared to SLDA 3. Exploratory data analysis | 1. Performance as a topic model, without metadata (perplexity, coherence) 2. Performance as a classifier, compared to SLDA 3. Exploratory data analysis | [] |
| GEM-SciDuet-train-9#paper-975#slide-10 | 975 | Neural Models for Documents with Metadata | Most real-world document collections involve various types of metadata, such as author, source, and date, and yet the most commonly-used approaches to modeling text corpora ignore this information. While specialized models have been developed for particular applications, few are widely used in practice, as customizatio... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5", "6"], "paper_header_content": ["Introduction", "Background and Motivation", "SCHOLAR: A Neural Topic Model with Covariates, Supervision, and Sparsi... | GEM-SciDuet-train-9#paper-975#slide-10 | Quantitative results basic model | LDA SAGE NVDM Scholar Scholar +wv Scholar +sparsity | LDA SAGE NVDM Scholar Scholar +wv Scholar +sparsity | [] |
| GEM-SciDuet-train-9#paper-975#slide-11 | 975 | Neural Models for Documents with Metadata | Most real-world document collections involve various types of metadata, such as author, source, and date, and yet the most commonly-used approaches to modeling text corpora ignore this information. While specialized models have been developed for particular applications, few are widely used in practice, as customizatio... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5", "6"], "paper_header_content": ["Introduction", "Background and Motivation", "SCHOLAR: A Neural Topic Model with Covariates, Supervision, and Sparsi... | GEM-SciDuet-train-9#paper-975#slide-11 | Classification results | LR SLDA Scholar (labels) Scholar (covariates) | LR SLDA Scholar (labels) Scholar (covariates) | [] |
| GEM-SciDuet-train-9#paper-975#slide-12 | 975 | Neural Models for Documents with Metadata | Most real-world document collections involve various types of metadata, such as author, source, and date, and yet the most commonly-used approaches to modeling text corpora ignore this information. While specialized models have been developed for particular applications, few are widely used in practice, as customizatio... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5", "6"], "paper_header_content": ["Introduction", "Background and Motivation", "SCHOLAR: A Neural Topic Model with Covariates, Supervision, and Sparsi... | GEM-SciDuet-train-9#paper-975#slide-12 | Exploratory Data Analysis | Data: Media Frames Corpus (Card et al, 2015) Collection of thousands of news articles annotated in terms of tone and framing Relevant metadata: year of publication, newspaper, etc. | Data: Media Frames Corpus (Card et al, 2015) Collection of thousands of news articles annotated in terms of tone and framing Relevant metadata: year of publication, newspaper, etc. | [] |
| GEM-SciDuet-train-9#paper-975#slide-13 | 975 | Neural Models for Documents with Metadata | Most real-world document collections involve various types of metadata, such as author, source, and date, and yet the most commonly-used approaches to modeling text corpora ignore this information. While specialized models have been developed for particular applications, few are widely used in practice, as customizatio... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5", "6"], "paper_header_content": ["Introduction", "Background and Motivation", "SCHOLAR: A Neural Topic Model with Covariates, Supervision, and Sparsi... | GEM-SciDuet-train-9#paper-975#slide-13 | Tone as a label | english language city spanish community boat desert died men miles coast haitian visas visa applications students citizenship asylum judge appeals deportation court labor jobs workers percent study wages bush border president bill republicans state gov benefits arizona law bill bills arrested charged charges agents ope... | english language city spanish community boat desert died men miles coast haitian visas visa applications students citizenship asylum judge appeals deportation court labor jobs workers percent study wages bush border president bill republicans state gov benefits arizona law bill bills arrested charged charges agents ope... | [] |
| GEM-SciDuet-train-9#paper-975#slide-14 | 975 | Neural Models for Documents with Metadata | Most real-world document collections involve various types of metadata, such as author, source, and date, and yet the most commonly-used approaches to modeling text corpora ignore this information. While specialized models have been developed for particular applications, few are widely used in practice, as customizatio... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5", "6"], "paper_header_content": ["Introduction", "Background and Motivation", "SCHOLAR: A Neural Topic Model with Covariates, Supervision, and Sparsi... | GEM-SciDuet-train-9#paper-975#slide-14 | Tone as a covariate with interactions | Base topics Anti-immigration Pro-immigration ice customs agency population born percent judge case court guilty patrol border miles licenses drivers card island story chinese guest worker workers benefits bill welfare criminal customs jobs million illegals guilty charges man patrol border foreign sept visas smuggling f... | Base topics Anti-immigration Pro-immigration ice customs agency population born percent judge case court guilty patrol border miles licenses drivers card island story chinese guest worker workers benefits bill welfare criminal customs jobs million illegals guilty charges man patrol border foreign sept visas smuggling f... | [] |
| GEM-SciDuet-train-9#paper-975#slide-15 | 975 | Neural Models for Documents with Metadata | Most real-world document collections involve various types of metadata, such as author, source, and date, and yet the most commonly-used approaches to modeling text corpora ignore this information. While specialized models have been developed for particular applications, few are widely used in practice, as customizatio... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5", "6"], "paper_header_content": ["Introduction", "Background and Motivation", "SCHOLAR: A Neural Topic Model with Covariates, Supervision, and Sparsi... | GEM-SciDuet-train-9#paper-975#slide-15 | Conclusions | Variational autoencoders (VAEs) provide a powerful framework for latent variable modeling We use the VAE framework to create a customizable model for documents with metadata We obtain comparable performance with enhanced flexibility and scalability Code is available: www.github.com/dallascard/scholar | Variational autoencoders (VAEs) provide a powerful framework for latent variable modeling We use the VAE framework to create a customizable model for documents with metadata We obtain comparable performance with enhanced flexibility and scalability Code is available: www.github.com/dallascard/scholar | [] |
| GEM-SciDuet-train-10#paper-977#slide-0 | 977 | Token-level and sequence-level loss smoothing for RNN language models | Despite the effectiveness of recurrent neural network language models, their maximum likelihood estimation suffers from two limitations. It treats all sentences that do not match the ground truth as equally poor, ignoring the structure of the output space. Second, it suffers from "exposure bias": during training tokens... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1.2", "4.2.1", "5"], "paper_header_content": ["Introduction", "Related work", "Loss smoothing for RNN training", "Maximum likelihood RNN training", "Sequence-level loss s... | GEM-SciDuet-train-10#paper-977#slide-0 | Language generation Equivalence in the target space | Ground truth sequences lie in a union of low-dimensional subspaces where sequences convey the same message. I France won the world cup for the second time. I France captured its second world cup title. Some words in the vocabulary share the same meaning. I Capture, conquer, win, gain, achieve, accomplish, ... ACL 201... | Ground truth sequences lie in a union of low-dimensional subspaces where sequences convey the same message. I France won the world cup for the second time. I France captured its second world cup title. Some words in the vocabulary share the same meaning. I Capture, conquer, win, gain, achieve, accomplish, ... ACL 201... | [] |
| GEM-SciDuet-train-10#paper-977#slide-1 | 977 | Token-level and sequence-level loss smoothing for RNN language models | Despite the effectiveness of recurrent neural network language models, their maximum likelihood estimation suffers from two limitations. It treats all sentences that do not match the ground truth as equally poor, ignoring the structure of the output space. Second, it suffers from "exposure bias": during training tokens... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1.2", "4.2.1", "5"], "paper_header_content": ["Introduction", "Related work", "Loss smoothing for RNN training", "Maximum likelihood RNN training", "Sequence-level loss s... | GEM-SciDuet-train-10#paper-977#slide-1 | Contributions | Take into consideration the nature of the target language space with: A token-level smoothing for a robust multi-class classification. A sequence-level smoothing to explore relevant alternative sequences. ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing | Take into consideration the nature of the target language space with: A token-level smoothing for a robust multi-class classification. A sequence-level smoothing to explore relevant alternative sequences. ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing | [] |
| GEM-SciDuet-train-10#paper-977#slide-2 | 977 | Token-level and sequence-level loss smoothing for RNN language models | Despite the effectiveness of recurrent neural network language models, their maximum likelihood estimation suffers from two limitations. It treats all sentences that do not match the ground truth as equally poor, ignoring the structure of the output space. Second, it suffers from "exposure bias": during training tokens... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1.2", "4.2.1", "5"], "paper_header_content": ["Introduction", "Related work", "Loss smoothing for RNN training", "Maximum likelihood RNN training", "Sequence-level loss s... | GEM-SciDuet-train-10#paper-977#slide-2 | Maximum likelihood estimation MLE | For a pair (x, y), we model the conditional distribution: Given the ground truth target sequence y*: ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing Zero-one loss, all the outputs y ≠ y* are treated equally. Discrepancy at the sentence level between the training (1-gram) and evaluation metr... | For a pair (x, y), we model the conditional distribution: Given the ground truth target sequence y*: ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing Zero-one loss, all the outputs y ≠ y* are treated equally. Discrepancy at the sentence level between the training (1-gram) and evaluation metr... | [] |
| GEM-SciDuet-train-10#paper-977#slide-3 | 977 | Token-level and sequence-level loss smoothing for RNN language models | Despite the effectiveness of recurrent neural network language models, their maximum likelihood estimation suffers from two limitations. It treats all sentences that do not match the ground truth as equally poor, ignoring the structure of the output space. Second, it suffers from "exposure bias": during training tokens... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1.2", "4.2.1", "5"], "paper_header_content": ["Introduction", "Related work", "Loss smoothing for RNN training", "Maximum likelihood RNN training", "Sequence-level loss s... | GEM-SciDuet-train-10#paper-977#slide-3 | Loss smoothing | Prerequisite: A word embedding w (e.g. GloVe) in the target space and a distance d with a temperature s.t. r | Prerequisite: A word embedding w (e.g. GloVe) in the target space and a distance d with a temperature s.t. r | [] |
| GEM-SciDuet-train-10#paper-977#slide-4 | 977 | Token-level and sequence-level loss smoothing for RNN language models | Despite the effectiveness of recurrent neural network language models, their maximum likelihood estimation suffers from two limitations. It treats all sentences that do not match the ground truth as equally poor, ignoring the structure of the output space. Second, it suffers from "exposure bias": during training tokens... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1.2", "4.2.1", "5"], "paper_header_content": ["Introduction", "Related work", "Loss smoothing for RNN training", "Maximum likelihood RNN training", "Sequence-level loss s... | GEM-SciDuet-train-10#paper-977#slide-4 | Token level smoothing | ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing | ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing | [] |
| GEM-SciDuet-train-10#paper-977#slide-5 | 977 | Token-level and sequence-level loss smoothing for RNN language models | Despite the effectiveness of recurrent neural network language models, their maximum likelihood estimation suffers from two limitations. It treats all sentences that do not match the ground truth as equally poor, ignoring the structure of the output space. Second, it suffers from "exposure bias": during training tokens... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1.2", "4.2.1", "5"], "paper_header_content": ["Introduction", "Related work", "Loss smoothing for RNN training", "Maximum likelihood RNN training", "Sequence-level loss s... | GEM-SciDuet-train-10#paper-977#slide-5 | Loss smoothing Token level | Uniform label smoothing over all words in the vocabulary: We can leverage word co-occurrence statistics to build a non-uniform and meaningful distribution. ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing We can estimate the exact KL divergence for every target token. | Uniform label smoothing over all words in the vocabulary: We can leverage word co-occurrence statistics to build a non-uniform and meaningful distribution. ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing We can estimate the exact KL divergence for every target token. | [] |
| GEM-SciDuet-train-10#paper-977#slide-6 | 977 | Token-level and sequence-level loss smoothing for RNN language models | Despite the effectiveness of recurrent neural network language models, their maximum likelihood estimation suffers from two limitations. It treats all sentences that do not match the ground truth as equally poor, ignoring the structure of the output space. Second, it suffers from "exposure bias": during training tokens... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1.2", "4.2.1", "5"], "paper_header_content": ["Introduction", "Related work", "Loss smoothing for RNN training", "Maximum likelihood RNN training", "Sequence-level loss s... | GEM-SciDuet-train-10#paper-977#slide-6 | Sequence level smoothing | ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing | ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing | [] |
| GEM-SciDuet-train-10#paper-977#slide-7 | 977 | Token-level and sequence-level loss smoothing for RNN language models | Despite the effectiveness of recurrent neural network language models, their maximum likelihood estimation suffers from two limitations. It treats all sentences that do not match the ground truth as equally poor, ignoring the structure of the output space. Second, it suffers from "exposure bias": during training tokens... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1.2", "4.2.1", "5"], "paper_header_content": ["Introduction", "Related work", "Loss smoothing for RNN training", "Maximum likelihood RNN training", "Sequence-level loss s... | GEM-SciDuet-train-10#paper-977#slide-7 | Loss smoothing Sequence level | Prerequisite: A distance d in the sequences space V^n, n ∈ N. Hamming, Edit, 1−BLEU, 1−CIDEr ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing Can we evaluate the partition function Z for a given reward? We can approximate Z for Hamming distance. | Prerequisite: A distance d in the sequences space V^n, n ∈ N. Hamming, Edit, 1−BLEU, 1−CIDEr ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing Can we evaluate the partition function Z for a given reward? We can approximate Z for Hamming distance. | [] |
| GEM-SciDuet-train-10#paper-977#slide-8 | 977 | Token-level and sequence-level loss smoothing for RNN language models | Despite the effectiveness of recurrent neural network language models, their maximum likelihood estimation suffers from two limitations. It treats all sentences that do not match the ground truth as equally poor, ignoring the structure of the output space. Second, it suffers from "exposure bias": during training tokens... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1.2", "4.2.1", "5"], "paper_header_content": ["Introduction", "Related work", "Loss smoothing for RNN training", "Maximum likelihood RNN training", "Sequence-level loss s... | GEM-SciDuet-train-10#paper-977#slide-8 | Loss smoothing Sequence level Hamming distance | Consider only sequences of the same length as y* (d(y, y*) = ∞ if ∣y∣ ≠ ∣y*∣). We partition the set of sequences by their distance to the ground truth y*: S_d = {y : d(y, y*) = d}. The reward in each subset is a constant. The cardinality of each subset is known. Z = Σ_d ∣S_d∣ exp(−d/τ) ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothin... | Consider only sequences of the same length as y* (d(y, y*) = ∞ if ∣y∣ ≠ ∣y*∣). We partition the set of sequences by their distance to the ground truth y*: S_d = {y : d(y, y*) = d}. The reward in each subset is a constant. The cardinality of each subset is known. Z = Σ_d ∣S_d∣ exp(−d/τ) ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothin... | [] |
| GEM-SciDuet-train-10#paper-977#slide-9 | 977 | Token-level and sequence-level loss smoothing for RNN language models | Despite the effectiveness of recurrent neural network language models, their maximum likelihood estimation suffers from two limitations. It treats all sentences that do not match the ground truth as equally poor, ignoring the structure of the output space. Second, it suffers from "exposure bias": during training tokens... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1.2", "4.2.1", "5"], "paper_header_content": ["Introduction", "Related work", "Loss smoothing for RNN training", "Maximum likelihood RNN training", "Sequence-level loss s... | GEM-SciDuet-train-10#paper-977#slide-9 | Loss smoothing Sequence level Other distances | We cannot easily sample from more complicated rewards such as BLEU or CIDEr. Choose q the reward distribution relative to Hamming distance. ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing | We cannot easily sample from more complicated rewards such as BLEU or CIDEr. Choose q the reward distribution relative to Hamming distance. ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing | [] |
| GEM-SciDuet-train-10#paper-977#slide-10 | 977 | Token-level and sequence-level loss smoothing for RNN language models | Despite the effectiveness of recurrent neural network language models, their maximum likelihood estimation suffers from two limitations. It treats all sentences that do not match the ground truth as equally poor, ignoring the structure of the output space. Second, it suffers from "exposure bias": during training tokens... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1.2", "4.2.1", "5"], "paper_header_content": ["Introduction", "Related work", "Loss smoothing for RNN training", "Maximum likelihood RNN training", "Sequence-level loss s... | GEM-SciDuet-train-10#paper-977#slide-10 | Loss smoothing Sequence level Support reduction | Can we reduce the support of r? Reduce the support from V^∣y*∣ to V_sub^∣y*∣ where V_sub ⊂ V. V_sub = V_batch: tokens occurring in the SGD mini-batch. V_sub = V_refs: tokens occurring in the available references. ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing | Can we reduce the support of r? Reduce the support from V^∣y*∣ to V_sub^∣y*∣ where V_sub ⊂ V. V_sub = V_batch: tokens occurring in the SGD mini-batch. V_sub = V_refs: tokens occurring in the available references. ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing | [] |
| GEM-SciDuet-train-10#paper-977#slide-11 | 977 | Token-level and sequence-level loss smoothing for RNN language models | Despite the effectiveness of recurrent neural network language models, their maximum likelihood estimation suffers from two limitations. It treats all sentences that do not match the ground truth as equally poor, ignoring the structure of the output space. Second, it suffers from "exposure bias": during training tokens... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1.2", "4.2.1", "5"], "paper_header_content": ["Introduction", "Related work", "Loss smoothing for RNN training", "Maximum likelihood RNN training", "Sequence-level loss s... | GEM-SciDuet-train-10#paper-977#slide-11 | Loss smoothing Sequence level Lazy training | Default training / Lazy training: ∀l, y_l is: not forwarded in the RNN. log p(y_l ∣ y_{<l}, x) ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing ∣y∣ · ∣θ_cell∣, where θ_cell are the cell parameters. | Default training / Lazy training: ∀l, y_l is: not forwarded in the RNN. log p(y_l ∣ y_{<l}, x) ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing ∣y∣ · ∣θ_cell∣, where θ_cell are the cell parameters. | [] |
| GEM-SciDuet-train-10#paper-977#slide-12 | 977 | Token-level and sequence-level loss smoothing for RNN language models | Despite the effectiveness of recurrent neural network language models, their maximum likelihood estimation suffers from two limitations. It treats all sentences that do not match the ground truth as equally poor, ignoring the structure of the output space. Second, it suffers from "exposure bias": during training tokens... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1.2", "4.2.1", "5"], "paper_header_content": ["Introduction", "Related work", "Loss smoothing for RNN training", "Maximum likelihood RNN training", "Sequence-level loss s... | GEM-SciDuet-train-10#paper-977#slide-12 | Image captioning on MS COCO Setup | 5 captions for every image. ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing | 5 captions for every image. ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing | [] |
| GEM-SciDuet-train-10#paper-977#slide-13 | 977 | Token-level and sequence-level loss smoothing for RNN language models | Despite the effectiveness of recurrent neural network language models, their maximum likelihood estimation suffers from two limitations. It treats all sentences that do not match the ground truth as equally poor, ignoring the structure of the output space. Second, it suffers from "exposure bias": during training tokens... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1.2", "4.2.1", "5"], "paper_header_content": ["Introduction", "Related work", "Loss smoothing for RNN training", "Maximum likelihood RNN training", "Sequence-level loss s... | GEM-SciDuet-train-10#paper-977#slide-13 | Image captioning on MS COCO Results | Loss Reward V_sub BLEU-1 BLEU-4 CIDEr ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing | Loss Reward V_sub BLEU-1 BLEU-4 CIDEr ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing | [] |
| GEM-SciDuet-train-10#paper-977#slide-14 | 977 | Token-level and sequence-level loss smoothing for RNN language models | Despite the effectiveness of recurrent neural network language models, their maximum likelihood estimation suffers from two limitations. It treats all sentences that do not match the ground truth as equally poor, ignoring the structure of the output space. Second, it suffers from "exposure bias": during training tokens... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1.2", "4.2.1", "5"], "paper_header_content": ["Introduction", "Related work", "Loss smoothing for RNN training", "Maximum likelihood RNN training", "Sequence-level loss s... | GEM-SciDuet-train-10#paper-977#slide-14 | Machine translation Setup | Bi-LSTM encoder-decoder with attention (Bahdanau et al. 2015) IWSLT14 DeEn WMT14 EnFr Dev 7k Dev 6k Test 7k Test 3k ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing | Bi-LSTM encoder-decoder with attention (Bahdanau et al. 2015) IWSLT14 DeEn WMT14 EnFr Dev 7k Dev 6k Test 7k Test 3k ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing | [] |
| GEM-SciDuet-train-10#paper-977#slide-15 | 977 | Token-level and sequence-level loss smoothing for RNN language models | Despite the effectiveness of recurrent neural network language models, their maximum likelihood estimation suffers from two limitations. It treats all sentences that do not match the ground truth as equally poor, ignoring the structure of the output space. Second, it suffers from "exposure bias": during training tokens... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1.2", "4.2.1", "5"], "paper_header_content": ["Introduction", "Related work", "Loss smoothing for RNN training", "Maximum likelihood RNN training", "Sequence-level loss s... | GEM-SciDuet-train-10#paper-977#slide-15 | Machine translation Results | Loss Reward V_sub WMT14 EnFr IWSLT14 DeEn ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing | Loss Reward V_sub WMT14 EnFr IWSLT14 DeEn ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing | [] |
| GEM-SciDuet-train-10#paper-977#slide-16 | 977 | Token-level and sequence-level loss smoothing for RNN language models | Despite the effectiveness of recurrent neural network language models, their maximum likelihood estimation suffers from two limitations. It treats all sentences that do not match the ground truth as equally poor, ignoring the structure of the output space. Second, it suffers from "exposure bias": during training tokens... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1.2", "4.2.1", "5"], "paper_header_content": ["Introduction", "Related work", "Loss smoothing for RNN training", "Maximum likelihood RNN training", "Sequence-level loss s... | GEM-SciDuet-train-10#paper-977#slide-16 | Conclusion | ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing | ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing | [] |
| GEM-SciDuet-train-10#paper-977#slide-17 | 977 | Token-level and sequence-level loss smoothing for RNN language models | Despite the effectiveness of recurrent neural network language models, their maximum likelihood estimation suffers from two limitations. It treats all sentences that do not match the ground truth as equally poor, ignoring the structure of the output space. Second, it suffers from "exposure bias": during training tokens... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1.2", "4.2.1", "5"], "paper_header_content": ["Introduction", "Related work", "Loss smoothing for RNN training", "Maximum likelihood RNN training", "Sequence-level loss s... | GEM-SciDuet-train-10#paper-977#slide-17 | Takeaways | Improving over MLE with: Sequence-level smoothing: an extension of RAML (Norouzi et al. 2016) I Reduced support of the reward distribution. ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing Token-level smoothing: smoothing across semantically similar tokens instead of the usual uniform noi... | Improving over MLE with: Sequence-level smoothing: an extension of RAML (Norouzi et al. 2016) I Reduced support of the reward distribution. ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing Token-level smoothing: smoothing across semantically similar tokens instead of the usual uniform noi... | [] |
| GEM-SciDuet-train-10#paper-977#slide-18 | 977 | Token-level and sequence-level loss smoothing for RNN language models | Despite the effectiveness of recurrent neural network language models, their maximum likelihood estimation suffers from two limitations. It treats all sentences that do not match the ground truth as equally poor, ignoring the structure of the output space. Second, it suffers from "exposure bias": during training tokens... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1.2", "4.2.1", "5"], "paper_header_content": ["Introduction", "Related work", "Loss smoothing for RNN training", "Maximum likelihood RNN training", "Sequence-level loss s... | GEM-SciDuet-train-10#paper-977#slide-18 | Future work | Validate on other seq2seq models besides LSTM encoder-decoders. Validate on models with BPE instead of words. I Experiment with other distributions for sampling other than the Hamming distance. I Sparsify the reward distribution for scalability. ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoo... | Validate on other seq2seq models besides LSTM encoder-decoders. Validate on models with BPE instead of words. I Experiment with other distributions for sampling other than the Hamming distance. I Sparsify the reward distribution for scalability. ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoo... | [] |
| GEM-SciDuet-train-10#paper-977#slide-19 | 977 | Token-level and sequence-level loss smoothing for RNN language models | Despite the effectiveness of recurrent neural network language models, their maximum likelihood estimation suffers from two limitations. It treats all sentences that do not match the ground truth as equally poor, ignoring the structure of the output space. Second, it suffers from "exposure bias": during training tokens... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1.2", "4.2.1", "5"], "paper_header_content": ["Introduction", "Related work", "Loss smoothing for RNN training", "Maximum likelihood RNN training", "Sequence-level loss s... | GEM-SciDuet-train-10#paper-977#slide-19 | Appendices | ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing | ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing | [] |
| GEM-SciDuet-train-10#paper-977#slide-20 | 977 | Token-level and sequence-level loss smoothing for RNN language models | Despite the effectiveness of recurrent neural network language models, their maximum likelihood estimation suffers from two limitations. It treats all sentences that do not match the ground truth as equally poor, ignoring the structure of the output space. Second, it suffers from "exposure bias": during training tokens... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1.2", "4.2.1", "5"], "paper_header_content": ["Introduction", "Related work", "Loss smoothing for RNN training", "Maximum likelihood RNN training", "Sequence-level loss s... | GEM-SciDuet-train-10#paper-977#slide-20 | Training time | Average wall time to process a single batch (10 images, 50 captions) when training the RNN language model with fixed CNN (without attention) on a Titan X GPU. Loss MLE Tok Seq Seq lazy Seq Seq lazy Seq Seq lazy Tok-Seq Tok-Seq Tok-Seq Reward GloVe sim Hamming V_sub V V V_batch V_batch V_refs V_refs V V_batch V_refs ACL 2018, M... | Average wall time to process a single batch (10 images, 50 captions) when training the RNN language model with fixed CNN (without attention) on a Titan X GPU. Loss MLE Tok Seq Seq lazy Seq Seq lazy Seq Seq lazy Tok-Seq Tok-Seq Tok-Seq Reward GloVe sim Hamming V_sub V V V_batch V_batch V_refs V_refs V V_batch V_refs ACL 2018, M... | [] |
| GEM-SciDuet-train-10#paper-977#slide-21 | 977 | Token-level and sequence-level loss smoothing for RNN language models | Despite the effectiveness of recurrent neural network language models, their maximum likelihood estimation suffers from two limitations. It treats all sentences that do not match the ground truth as equally poor, ignoring the structure of the output space. Second, it suffers from "exposure bias": during training tokens... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1.2", "4.2.1", "5"], "paper_header_content": ["Introduction", "Related work", "Loss smoothing for RNN training", "Maximum likelihood RNN training", "Sequence-level loss s... | GEM-SciDuet-train-10#paper-977#slide-21 | Generated captions | ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing | ACL 2018, Melbourne M. Elbayad ‖ Token-level and Sequence-level Loss Smoothing | [] |
| GEM-SciDuet-train-10#paper-977#slide-22 | 977 | Token-level and sequence-level loss smoothing for RNN language models | Despite the effectiveness of recurrent neural network language models, their maximum likelihood estimation suffers from two limitations. It treats all sentences that do not match the ground truth as equally poor, ignoring the structure of the output space. Second, it suffers from "exposure bias": during training tokens... | {"paper_content_id": [0, 1, 2, ..., 37, ... | {"paper_header_number": ["1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1.2", "4.2.1", "5"], "paper_header_content": ["Introduction", "Related work", "Loss smoothing for RNN training", "Maximum likelihood RNN training", "Sequence-level loss s... | GEM-SciDuet-train-10#paper-977#slide-22 | Generated translations EnFr | I think it's conceivable that these data are used for mutual benefit. J'estime qu'il est concevable que ces données soient utilisées dans leur intérêt mutuel. Je pense qu'il est possible que ces données soient utilisées à des fins réciproques. Je pense qu'il est possible que ces données soient utilisées pour le bénéfice mut... | I think it's conceivable that these data are used for mutual benefit. J'estime qu'il est concevable que ces données soient utilisées dans leur intérêt mutuel. Je pense qu'il est possible que ces données soient utilisées à des fins réciproques. Je pense qu'il est possible que ces données soient utilisées pour le bénéfice mut... | [] |
GEM-SciDuet-train-10#paper-977#slide-23 | 977 | Token-level and sequence-level loss smoothing for RNN language models | Despite the effectiveness of recurrent neural network language models, their maximum likelihood estimation suffers from two limitations. It treats all sentences that do not match the ground truth as equally poor, ignoring the structure of the output space. Second, it suffers from "exposure bias": during training tokens... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"4.1.2",
"4.2.1",
"5"
],
"paper_header_content": [
"Introduction",
"Related work",
"Loss smoothing for RNN training",
"Maximum likelihood RNN training",
"Sequence-level loss s... | GEM-SciDuet-train-10#paper-977#slide-23 | MS COCO server results | BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE-L CIDEr SPICE
Ours: Tok-Seq CIDEr Ours: Tok-Seq CIDEr +
Table: MS-COCO's server evaluation. (+) for ensemble submissions, for submissions with CIDEr optimization and () for models using additional data.
ACL 2018, Melbourne M. Elbayad || Token-level and Sequence-level Loss Smoot... | BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE-L CIDEr SPICE
Ours: Tok-Seq CIDEr Ours: Tok-Seq CIDEr +
Table: MS-COCO's server evaluation. (+) for ensemble submissions, for submissions with CIDEr optimization and () for models using additional data.
ACL 2018, Melbourne M. Elbayad || Token-level and Sequence-level Loss Smoot... | [] |
GEM-SciDuet-train-11#paper-978#slide-0 | 978 | Zero-Shot Transfer Learning for Event Extraction | Most previous supervised event extraction methods have relied on features derived from manual annotations, and thus cannot be applied to new event types without extra annotation effort. We take a fresh look at event extraction and model it as a generic grounding problem: mapping each event mention to a specific type in... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"5.1",
"5.2",
"5.3",
"6.1",
"6.3",
"6.4",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Approach Overview",
"Trigger and Argument Identification",
"Trigger and Type Structure Composition",
"... | GEM-SciDuet-train-11#paper-978#slide-0 | Background | based on predefined event schema and rich features encoded from annotated event
Pros: extract high-quality events for predefined types
Cons: require a large amount of human annotation and cannot extract event mentions for new event types
Traditional Event Extraction Pipeline
Consumer 1: I want an event extractor for Tra... | based on predefined event schema and rich features encoded from annotated event
Pros: extract high-quality events for predefined types
Cons: require a large amount of human annotation and cannot extract event mentions for new event types
Traditional Event Extraction Pipeline
Consumer 1: I want an event extractor for Tra... | [] |
GEM-SciDuet-train-11#paper-978#slide-1 | 978 | Zero-Shot Transfer Learning for Event Extraction | Most previous supervised event extraction methods have relied on features derived from manual annotations, and thus cannot be applied to new event types without extra annotation effort. We take a fresh look at event extraction and model it as a generic grounding problem: mapping each event mention to a specific type in... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"5.1",
"5.2",
"5.3",
"6.1",
"6.3",
"6.4",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Approach Overview",
"Trigger and Argument Identification",
"Trigger and Type Structure Composition",
"... | GEM-SciDuet-train-11#paper-978#slide-1 | Motivation | Zero Shot Learning for Event Extraction
both event mentions and types have rich semantics and structures, which can specify their consistency and connections
E1. The Government of China has ruled Tibet since 1951 after dispatching troops to the
E2. Iranian state television stated that the conflict between the Iranian p... | Zero Shot Learning for Event Extraction
both event mentions and types have rich semantics and structures, which can specify their consistency and connections
E1. The Government of China has ruled Tibet since 1951 after dispatching troops to the
E2. Iranian state television stated that the conflict between the Iranian p... | [] |
GEM-SciDuet-train-11#paper-978#slide-3 | 978 | Zero-Shot Transfer Learning for Event Extraction | Most previous supervised event extraction methods have relied on features derived from manual annotations, and thus cannot be applied to new event types without extra annotation effort. We take a fresh look at event extraction and model it as a generic grounding problem: mapping each event mention to a specific type in... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"5.1",
"5.2",
"5.3",
"6.1",
"6.3",
"6.4",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Approach Overview",
"Trigger and Argument Identification",
"Trigger and Type Structure Composition",
"... | GEM-SciDuet-train-11#paper-978#slide-3 | Approach Details | Trigger and Argument Identification
AMR parsing and FrameNet verbs/nominal lexical units
Subset of AMR relations
Non-Core Roles mod, location, instrument, poss, manner, topic, medium, prep-X
Temporal year, duration, decade, weekday, time
Spatial destination, path, location
Event and Type Structure Construction
Structu... | Trigger and Argument Identification
AMR parsing and FrameNet verbs/nominal lexical units
Subset of AMR relations
Non-Core Roles mod, location, instrument, poss, manner, topic, medium, prep-X
Temporal year, duration, decade, weekday, time
Spatial destination, path, location
Event and Type Structure Construction
Structu... | [] |
GEM-SciDuet-train-11#paper-978#slide-4 | 978 | Zero-Shot Transfer Learning for Event Extraction | Most previous supervised event extraction methods have relied on features derived from manual annotations, and thus cannot be applied to new event types without extra annotation effort. We take a fresh look at event extraction and model it as a generic grounding problem: mapping each event mention to a specific type in... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"5.1",
"5.2",
"5.3",
"6.1",
"6.3",
"6.4",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Approach Overview",
"Trigger and Argument Identification",
"Trigger and Type Structure Composition",
"... | GEM-SciDuet-train-11#paper-978#slide-4 | Evaluation | Zero-Shot Classification for ACE Events
Given trigger and argument boundaries, use a subset of ACE types for training, and the remaining types for testing
Seen types for each experiment setting
Setting Top-N Seen Types for Training/Dev
D Attack, Transport, Die, Meet, Arrest-Jail, Transfer-Money, Sentence, Elect, Transfer-Ow... | Zero-Shot Classification for ACE Events
Given trigger and argument boundaries, use a subset of ACE types for training, and the remaining types for testing
Seen types for each experiment setting
Setting Top-N Seen Types for Training/Dev
D Attack, Transport, Die, Meet, Arrest-Jail, Transfer-Money, Sentence, Elect, Transfer-Ow... | [] |
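The zero-shot evaluation described in the row above trains on a subset of seen ACE types and must still label mentions of unseen types. As a minimal illustration of the grounding idea (not the paper's actual model), the sketch below assumes mention and type embeddings live in one shared vector space and labels a mention by its nearest type under cosine similarity; all names and vectors here are hypothetical.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors (epsilon avoids division by zero).
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def zero_shot_classify(mention_vec, type_vecs):
    # Nearest-neighbor search over type embeddings; unseen types are handled
    # exactly like seen ones, since no per-type classifier is trained.
    return max(type_vecs, key=lambda t: cosine(mention_vec, type_vecs[t]))

# Toy demo with random 50-d embeddings for a few ACE-style type names.
rng = np.random.default_rng(0)
types = {t: rng.normal(size=50) for t in ["Attack", "Transport", "Arrest-Jail"]}
mention = types["Attack"] + 0.1 * rng.normal(size=50)  # a noisy "Attack" mention
print(zero_shot_classify(mention, types))              # -> Attack
```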
GEM-SciDuet-train-11#paper-978#slide-5 | 978 | Zero-Shot Transfer Learning for Event Extraction | Most previous supervised event extraction methods have relied on features derived from manual annotations, and thus cannot be applied to new event types without extra annotation effort. We take a fresh look at event extraction and model it as a generic grounding problem: mapping each event mention to a specific type in... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"5.1",
"5.2",
"5.3",
"6.1",
"6.3",
"6.4",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Approach Overview",
"Trigger and Argument Identification",
"Trigger and Type Structure Composition",
"... | GEM-SciDuet-train-11#paper-978#slide-5 | Discussion | Impact of AMR Parsing
AMR is used to identify candidate triggers and arguments, as well as construct event structures
Compare AMR with Semantic Role Labeling (SRL) on a subset of
ERE corpus with perfect AMR annotations
Train on top-6 most popular seen (training) types: Arrest-Jail,
Execute, Die, Meet, Sentence, Charge-... | Impact of AMR Parsing
AMR is used to identify candidate triggers and arguments, as well as construct event structures
Compare AMR with Semantic Role Labeling (SRL) on a subset of
ERE corpus with perfect AMR annotations
Train on top-6 most popular seen (training) types: Arrest-Jail,
Execute, Die, Meet, Sentence, Charge-... | [] |
GEM-SciDuet-train-11#paper-978#slide-6 | 978 | Zero-Shot Transfer Learning for Event Extraction | Most previous supervised event extraction methods have relied on features derived from manual annotations, and thus cannot be applied to new event types without extra annotation effort. We take a fresh look at event extraction and model it as a generic grounding problem: mapping each event mention to a specific type in... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"5.1",
"5.2",
"5.3",
"6.1",
"6.3",
"6.4",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Approach Overview",
"Trigger and Argument Identification",
"Trigger and Type Structure Composition",
"... | GEM-SciDuet-train-11#paper-978#slide-6 | Conclusion and Future Work | We model event extraction as a generic grounding problem, instead of classification
By leveraging existing human-constructed event schemas and manual annotations for a small set of seen types, the zero-shot framework can improve the scalability of event extraction and save human effort
In the future, we will extend thi... | We model event extraction as a generic grounding problem, instead of classification
By leveraging existing human-constructed event schemas and manual annotations for a small set of seen types, the zero-shot framework can improve the scalability of event extraction and save human effort
In the future, we will extend thi... | [] |
GEM-SciDuet-train-12#paper-980#slide-0 | 980 | Deep Dyna-Q: Integrating Planning for Task-Completion Dialogue Policy Learning | Training a task-completion dialogue agent via reinforcement learning (RL) is costly because it requires many interactions with real users. One common alternative is to use a user simulator. However, a user simulator usually lacks the language complexity of human interlocutors and the biases in its design may tend to de... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4"
],
"paper_header_content": [
"Introduction",
"Direct Reinforcement Learning",
"Planning",
"World Model Learning",
"Experiments and Results",
"Dataset",
"Dia... | GEM-SciDuet-train-12#paper-980#slide-0 | An Example Dialogue with Movie Bot | Actual dialogues can be more complex:
Speech/Natural language understanding errors
o Input may be in spoken language form o Need to reason under uncertainty
o Revise information collected earlier
Source code available at https://github.com/MiuLab/TC-Bot | Actual dialogues can be more complex:
Speech/Natural language understanding errors
o Input may be in spoken language form o Need to reason under uncertainty
o Revise information collected earlier
Source code available at https://github.com/MiuLab/TC-Bot | []
GEM-SciDuet-train-12#paper-980#slide-1 | 980 | Deep Dyna-Q: Integrating Planning for Task-Completion Dialogue Policy Learning | Training a task-completion dialogue agent via reinforcement learning (RL) is costly because it requires many interactions with real users. One common alternative is to use a user simulator. However, a user simulator usually lacks the language complexity of human interlocutors and the biases in its design may tend to de... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4"
],
"paper_header_content": [
"Introduction",
"Direct Reinforcement Learning",
"Planning",
"World Model Learning",
"Experiments and Results",
"Dataset",
"Dia... | GEM-SciDuet-train-12#paper-980#slide-1 | Task oriented slot filling Dialogues | Domain: movie, restaurant, flight,
Slot: information to be filled in before completing a task
o For Movie-Bot: movie-name, theater, number-of-tickets, price,
o Inspired by speech act theory (communication as action)
request, confirm, inform, thank-you,
o Some may take parameters:
"Is Kungfu Panda the movie you are look... | Domain: movie, restaurant, flight,
Slot: information to be filled in before completing a task
o For Movie-Bot: movie-name, theater, number-of-tickets, price,
o Inspired by speech act theory (communication as action)
request, confirm, inform, thank-you,
o Some may take parameters:
"Is Kungfu Panda the movie you are look... | [] |
GEM-SciDuet-train-12#paper-980#slide-2 | 980 | Deep Dyna-Q: Integrating Planning for Task-Completion Dialogue Policy Learning | Training a task-completion dialogue agent via reinforcement learning (RL) is costly because it requires many interactions with real users. One common alternative is to use a user simulator. However, a user simulator usually lacks the language complexity of human interlocutors and the biases in its design may tend to de... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4"
],
"paper_header_content": [
"Introduction",
"Direct Reinforcement Learning",
"Planning",
"World Model Learning",
"Experiments and Results",
"Dataset",
"Dia... | GEM-SciDuet-train-12#paper-980#slide-2 | A Multi turn Task oriented Dialogue Architecture | Request(movie; actor=bill murray) Knowledge Base
When was it released | Request(movie; actor=bill murray) Knowledge Base
When was it released | [] |
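The Request(movie; actor=bill murray) message in the row above is a typical dialogue-act encoding: an act type plus slot-value constraints passed between modules of the architecture. A minimal sketch of one plausible representation (class and field names are mine, not taken from TC-Bot):

```python
from dataclasses import dataclass, field

@dataclass
class DialogueAct:
    act: str                                         # e.g. "request", "inform"
    slot: str = ""                                   # slot being asked about
    constraints: dict = field(default_factory=dict)  # already-known slot values

    def render(self) -> str:
        # Render in the act(slot; key=value, ...) style shown on the slide.
        inner = "; ".join(f"{k}={v}" for k, v in self.constraints.items())
        return f"{self.act}({self.slot}{'; ' + inner if inner else ''})"

act = DialogueAct("request", "movie", {"actor": "bill murray"})
print(act.render())  # request(movie; actor=bill murray)
```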
GEM-SciDuet-train-12#paper-980#slide-3 | 980 | Deep Dyna-Q: Integrating Planning for Task-Completion Dialogue Policy Learning | Training a task-completion dialogue agent via reinforcement learning (RL) is costly because it requires many interactions with real users. One common alternative is to use a user simulator. However, a user simulator usually lacks the language complexity of human interlocutors and the biases in its design may tend to de... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4"
],
"paper_header_content": [
"Introduction",
"Direct Reinforcement Learning",
"Planning",
"World Model Learning",
"Experiments and Results",
"Dataset",
"Dia... | GEM-SciDuet-train-12#paper-980#slide-3 | A unified view dialogue as optimal decision making | Dialogue as a Markov Decision Process (MDP)
Given state s, select action a according to (hierarchical) policy π
Receive reward r, observe new state s'
Continue the cycle until the episode terminates.
Goal of dialogue learning: find the optimal policy π* to maximize expected rewards
Dialogue State (s) Action (a) Reward (r)
(Q&A bot over KB, W... | Dialogue as a Markov Decision Process (MDP)
Given state s, select action a according to (hierarchical) policy π
Receive reward r, observe new state s'
Continue the cycle until the episode terminates.
Goal of dialogue learning: find the optimal policy π* to maximize expected rewards
Dialogue State (s) Action (a) Reward (r)
(Q&A bot over KB, W... | [] |
GEM-SciDuet-train-12#paper-980#slide-4 | 980 | Deep Dyna-Q: Integrating Planning for Task-Completion Dialogue Policy Learning | Training a task-completion dialogue agent via reinforcement learning (RL) is costly because it requires many interactions with real users. One common alternative is to use a user simulator. However, a user simulator usually lacks the language complexity of human interlocutors and the biases in its design may tend to de... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4"
],
"paper_header_content": [
"Introduction",
"Direct Reinforcement Learning",
"Planning",
"World Model Learning",
"Experiments and Results",
"Dataset",
"Dia... | GEM-SciDuet-train-12#paper-980#slide-4 | Task completion dialogue as RL | (utterances in natural language form)
o +10 upon successful termination o -10 upon unsuccessful termination o -1 per turn
Pioneered by [Levin+ 00] Other early examples: [Singh+ 02; Pietquin+ 04; Williams&Young 07; etc.] | (utterances in natural language form)
o +10 upon successful termination o -10 upon unsuccessful termination o -1 per turn
Pioneered by [Levin+ 00] Other early examples: [Singh+ 02; Pietquin+ 04; Williams&Young 07; etc.] | [] |
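The reward scheme in the row above (+10 on success, -10 on failure, -1 per turn) can be stated as a small function. A sketch under the assumption that the terminal bonus replaces the per-turn cost on the final turn; the slide does not pin that detail down:

```python
def turn_reward(done: bool, success: bool) -> int:
    # -1 per turn pushes the agent toward short dialogues;
    # the +/-10 terminal signal rewards actual task completion.
    if done:
        return 10 if success else -10
    return -1
```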
GEM-SciDuet-train-12#paper-980#slide-5 | 980 | Deep Dyna-Q: Integrating Planning for Task-Completion Dialogue Policy Learning | Training a task-completion dialogue agent via reinforcement learning (RL) is costly because it requires many interactions with real users. One common alternative is to use a user simulator. However, a user simulator usually lacks the language complexity of human interlocutors and the biases in its design may tend to de... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4"
],
"paper_header_content": [
"Introduction",
"Direct Reinforcement Learning",
"Planning",
"World Model Learning",
"Experiments and Results",
"Dataset",
"Dia... | GEM-SciDuet-train-12#paper-980#slide-5 | RL vs SL supervised learning | Differences from supervised learning
Learn by trial-and-error (experimenting)
Optimize long-term reward
Need temporal credit assignment
Similarities to supervised learning
Generalization and representation (input/feature)
SL Hierarchical problem solving | Differences from supervised learning
Learn by trial-and-error (experimenting)
Optimize long-term reward
Need temporal credit assignment
Similarities to supervised learning
Generalization and representation (input/feature)
SL Hierarchical problem solving | [] |
GEM-SciDuet-train-12#paper-980#slide-6 | 980 | Deep Dyna-Q: Integrating Planning for Task-Completion Dialogue Policy Learning | Training a task-completion dialogue agent via reinforcement learning (RL) is costly because it requires many interactions with real users. One common alternative is to use a user simulator. However, a user simulator usually lacks the language complexity of human interlocutors and the biases in its design may tend to de... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4"
],
"paper_header_content": [
"Introduction",
"Direct Reinforcement Learning",
"Planning",
"World Model Learning",
"Experiments and Results",
"Dataset",
"Dia... | GEM-SciDuet-train-12#paper-980#slide-6 | Learning w real users | - Expensive: need large amounts of real experience except for very simple tasks
- Risky: bad experiences (during exploration) drive users away | - Expensive: need large amounts of real experience except for very simple tasks
- Risky: bad experiences (during exploration) drive users away | [] |
GEM-SciDuet-train-12#paper-980#slide-7 | 980 | Deep Dyna-Q: Integrating Planning for Task-Completion Dialogue Policy Learning | Training a task-completion dialogue agent via reinforcement learning (RL) is costly because it requires many interactions with real users. One common alternative is to use a user simulator. However, a user simulator usually lacks the language complexity of human interlocutors and the biases in its design may tend to de... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4"
],
"paper_header_content": [
"Introduction",
"Direct Reinforcement Learning",
"Planning",
"World Model Learning",
"Experiments and Results",
"Dataset",
"Dia... | GEM-SciDuet-train-12#paper-980#slide-7 | Learning w user simulators | - Inexpensive: generate large amounts of simulated experience for free
- Overfitting: discrepancy btw real users and simulators
Dialog agent simulated experience | - Inexpensive: generate large amounts of simulated experience for free
- Overfitting: discrepancy btw real users and simulators
Dialog agent simulated experience | [] |
GEM-SciDuet-train-12#paper-980#slide-8 | 980 | Deep Dyna-Q: Integrating Planning for Task-Completion Dialogue Policy Learning | Training a task-completion dialogue agent via reinforcement learning (RL) is costly because it requires many interactions with real users. One common alternative is to use a user simulator. However, a user simulator usually lacks the language complexity of human interlocutors and the biases in its design may tend to de... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4"
],
"paper_header_content": [
"Introduction",
"Direct Reinforcement Learning",
"Planning",
"World Model Learning",
"Experiments and Results",
"Dataset",
"Dia... | GEM-SciDuet-train-12#paper-980#slide-8 | Dyna Q integrating planning and learning | combining model-free and model-based RL
tabular methods and linear function approximation | combining model-free and model-based RL
tabular methods and linear function approximation | [] |
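Dyna-Q, named in the row above, interleaves direct RL updates from real steps with planning updates replayed from a learned model. A compact tabular sketch; the `env` interface (reset/step/actions) is an assumption for illustration, not something defined in the paper:

```python
import random
from collections import defaultdict

def dyna_q(env, episodes=100, n_planning=5, alpha=0.1, gamma=0.95, eps=0.1):
    # env is assumed to expose reset() -> state, step(a) -> (state', reward, done),
    # and env.actions, a list of discrete actions.
    Q = defaultdict(float)  # Q[(state, action)]
    model = {}              # (state, action) -> (reward, state', done)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = (random.choice(env.actions) if random.random() < eps
                 else max(env.actions, key=lambda x: Q[(s, x)]))    # eps-greedy
            s2, r, done = env.step(a)
            best = 0.0 if done else max(Q[(s2, b)] for b in env.actions)
            Q[(s, a)] += alpha * (r + gamma * best - Q[(s, a)])     # direct RL
            model[(s, a)] = (r, s2, done)                           # model learning
            for _ in range(n_planning):                             # planning
                (ps, pa), (pr, ps2, pdone) = random.choice(list(model.items()))
                pbest = 0.0 if pdone else max(Q[(ps2, b)] for b in env.actions)
                Q[(ps, pa)] += alpha * (pr + gamma * pbest - Q[(ps, pa)])
            s = s2
    return Q
```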
GEM-SciDuet-train-12#paper-980#slide-9 | 980 | Deep Dyna-Q: Integrating Planning for Task-Completion Dialogue Policy Learning | Training a task-completion dialogue agent via reinforcement learning (RL) is costly because it requires many interactions with real users. One common alternative is to use a user simulator. However, a user simulator usually lacks the language complexity of human interlocutors and the biases in its design may tend to de... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4"
],
"paper_header_content": [
"Introduction",
"Direct Reinforcement Learning",
"Planning",
"World Model Learning",
"Experiments and Results",
"Dataset",
"Dia... | GEM-SciDuet-train-12#paper-980#slide-9 | Deep Dyna Q DDQ Integrating Planning for Dialogue Policy Learning | Policy as DNN, trained using DQN
Apply to dialogue: simulated user as world model
Dialogue agent trained using
Limited real user experience
Large amounts of simulated experience
Limited real experience is used to improve RL
World model (simulated user) Model learning | Policy as DNN, trained using DQN
Apply to dialogue: simulated user as world model
Dialogue agent trained using
Limited real user experience
Large amounts of simulated experience
Limited real experience is used to improve RL
World model (simulated user) Model learning | [] |
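The row above decomposes one DDQ round into acting, direct reinforcement learning, world model learning, and planning. A schematic sketch of that control flow; every object interface here (agent, world_model, replay) is assumed for illustration and is not the authors' code:

```python
def ddq_round(agent, world_model, real_user, replay, K=5):
    real_episode = agent.run_dialogue(real_user)  # acting with a real user
    replay.extend(real_episode)
    agent.train_q(replay)                         # direct reinforcement learning
    world_model.fit(real_episode)                 # world model learning on real data
    for _ in range(K):                            # planning: K simulated dialogues
        replay.extend(agent.run_dialogue(world_model))
        agent.train_q(replay)
```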
GEM-SciDuet-train-12#paper-980#slide-12 | 980 | Deep Dyna-Q: Integrating Planning for Task-Completion Dialogue Policy Learning | Training a task-completion dialogue agent via reinforcement learning (RL) is costly because it requires many interactions with real users. One common alternative is to use a user simulator. However, a user simulator usually lacks the language complexity of human interlocutors and the biases in its design may tend to de... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4"
],
"paper_header_content": [
"Introduction",
"Direct Reinforcement Learning",
"Planning",
"World Model Learning",
"Experiments and Results",
"Dataset",
"Dia... | GEM-SciDuet-train-12#paper-980#slide-12 | Dialogue System Evaluation | Metrics: what numbers matter?
o Success rate: #Successful_Dialogues / #All_Dialogues o Average turns: average number of turns in a dialogue o User satisfaction o Consistency, diversity, engaging, ... o Latency, backend retrieval cost,
Methodology: how to measure those numbers? | Metrics: what numbers matter?
o Success rate: #Successful_Dialogues / #All_Dialogues o Average turns: average number of turns in a dialogue o User satisfaction o Consistency, diversity, engaging, ... o Latency, backend retrieval cost,
Methodology: how to measure those numbers? | [] |
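The first two metrics listed above fall out of simple aggregation over logged dialogues. A sketch using an invented per-episode record format:

```python
def dialogue_metrics(episodes):
    # episodes: list of dicts like {"success": bool, "turns": int} (my format).
    n = len(episodes)
    success_rate = sum(e["success"] for e in episodes) / n  # #successful / #all
    avg_turns = sum(e["turns"] for e in episodes) / n
    return success_rate, avg_turns

print(dialogue_metrics([{"success": True, "turns": 6},
                        {"success": False, "turns": 11}]))  # (0.5, 8.5)
```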
GEM-SciDuet-train-12#paper-980#slide-13 | 980 | Deep Dyna-Q: Integrating Planning for Task-Completion Dialogue Policy Learning | Training a task-completion dialogue agent via reinforcement learning (RL) is costly because it requires many interactions with real users. One common alternative is to use a user simulator. However, a user simulator usually lacks the language complexity of human interlocutors and the biases in its design may tend to de... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4"
],
"paper_header_content": [
"Introduction",
"Direct Reinforcement Learning",
"Planning",
"World Model Learning",
"Experiments and Results",
"Dataset",
"Dia... | GEM-SciDuet-train-12#paper-980#slide-13 | Evaluation methodology | (lab, Mechanical Turk, )
(optionally with continuing incremental refinement) | (lab, Mechanical Turk, )
(optionally with continuing incremental refinement) | [] |
GEM-SciDuet-train-12#paper-980#slide-15 | 980 | Deep Dyna-Q: Integrating Planning for Task-Completion Dialogue Policy Learning | Training a task-completion dialogue agent via reinforcement learning (RL) is costly because it requires many interactions with real users. One common alternative is to use a user simulator. However, a user simulator usually lacks the language complexity of human interlocutors and the biases in its design may tend to de... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4"
],
"paper_header_content": [
"Introduction",
"Direct Reinforcement Learning",
"Planning",
"World Model Learning",
"Experiments and Results",
"Dataset",
"Dia... | GEM-SciDuet-train-12#paper-980#slide-15 | Agenda based Simulated User | [Schatzmann & Young 09]
User state consists of (agenda, goal); goal is fixed throughout dialogue
Agenda is maintained (stochastically) by a first-in-last-out stack
Implementation of a simplified user simulator: https://github.com/MiuLab/TC-Bot | [Schatzmann & Young 09]
User state consists of (agenda, goal); goal is fixed throughout dialogue
Agenda is maintained (stochastically) by a first-in-last-out stack
Implementation of a simplified user simulator: https://github.com/MiuLab/TC-Bot | [] |
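A toy rendering of the agenda-based idea in the row above: a fixed goal plus a first-in-last-out stack of pending user acts, popped (with a little stochastic maintenance) at each turn. Act formats and probabilities are invented; the TC-Bot repository linked above contains a real implementation:

```python
import random

class AgendaUser:
    def __init__(self, goal):
        self.goal = dict(goal)  # user goal, fixed for the whole dialogue
        # Stack of pending acts; the last list element is the top of the stack.
        self.agenda = [("inform", s, v) for s, v in self.goal.items()]
        self.agenda.append(("request", "ticket", None))

    def respond(self, system_act):
        # system_act is ignored in this toy version; a real simulator would
        # push/pop acts conditioned on it. Occasionally re-push a goal slot.
        if random.random() < 0.2:
            s, v = random.choice(list(self.goal.items()))
            self.agenda.append(("inform", s, v))
        return self.agenda.pop() if self.agenda else ("bye", None, None)

user = AgendaUser({"movie": "kungfu panda", "numberofpeople": "2"})
print(user.respond(("request", "movie", None)))
```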
GEM-SciDuet-train-12#paper-980#slide-16 | 980 | Deep Dyna-Q: Integrating Planning for Task-Completion Dialogue Policy Learning | Training a task-completion dialogue agent via reinforcement learning (RL) is costly because it requires many interactions with real users. One common alternative is to use a user simulator. However, a user simulator usually lacks the language complexity of human interlocutors and the biases in its design may tend to de... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4"
],
"paper_header_content": [
"Introduction",
"Direct Reinforcement Learning",
"Planning",
"World Model Learning",
"Experiments and Results",
"Dataset",
"Dia... | GEM-SciDuet-train-12#paper-980#slide-16 | Simulated user evaluation | DQN vs DDQ ()
: number of planning steps
(generating K simulated dialogues per real dialogue) | DQN vs DDQ ()
: number of planning steps
(generating K simulated dialogues per real dialogue) | [] |
GEM-SciDuet-train-12#paper-980#slide-17 | 980 | Deep Dyna-Q: Integrating Planning for Task-Completion Dialogue Policy Learning | Training a task-completion dialogue agent via reinforcement learning (RL) is costly because it requires many interactions with real users. One common alternative is to use a user simulator. However, a user simulator usually lacks the language complexity of human interlocutors and the biases in its design may tend to de... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4"
],
"paper_header_content": [
"Introduction",
"Direct Reinforcement Learning",
"Planning",
"World Model Learning",
"Experiments and Results",
"Dataset",
"Dia... | GEM-SciDuet-train-12#paper-980#slide-17 | Impact of world model quality | pretrained on labeled data, and updated using real dialogue on the fly | pretrained on labeled data, and updated using real dialogue on the fly | [] |
GEM-SciDuet-train-12#paper-980#slide-18 | 980 | Deep Dyna-Q: Integrating Planning for Task-Completion Dialogue Policy Learning | Training a task-completion dialogue agent via reinforcement learning (RL) is costly because it requires many interactions with real users. One common alternative is to use a user simulator. However, a user simulator usually lacks the language complexity of human interlocutors and the biases in its design may tend to de... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4"
],
"paper_header_content": [
"Introduction",
"Direct Reinforcement Learning",
"Planning",
"World Model Learning",
"Experiments and Results",
"Dataset",
"Dia... | GEM-SciDuet-train-12#paper-980#slide-18 | Human in the loop experiments learning dialogue via interacting with real users | DDQ agents significantly outperforms the DQN agent
A larger leads to more aggressive planning and better results
Pre-training world model with human conversational data improves the learning efficiency and the agents performance | DDQ agents significantly outperforms the DQN agent
A larger leads to more aggressive planning and better results
Pre-training world model with human conversational data improves the learning efficiency and the agents performance | [] |
GEM-SciDuet-train-12#paper-980#slide-19 | 980 | Deep Dyna-Q: Integrating Planning for Task-Completion Dialogue Policy Learning | Training a task-completion dialogue agent via reinforcement learning (RL) is costly because it requires many interactions with real users. One common alternative is to use a user simulator. However, a user simulator usually lacks the language complexity of human interlocutors and the biases in its design may tend to de... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4"
],
"paper_header_content": [
"Introduction",
"Direct Reinforcement Learning",
"Planning",
"World Model Learning",
"Experiments and Results",
"Dataset",
"Dia... | GEM-SciDuet-train-12#paper-980#slide-19 | Conclusion and Future Work | Deep Dyna-Q: integrating planning for dialogue policy learning
Make the best use of limited real user experiences
Learning when to switch between real and simulated users
Exploration: trying actions to improve the world model
Exploitation: trying to behave in the optimal way given the current world model | Deep Dyna-Q: integrating planning for dialogue policy learning
Make the best use of limited real user experiences
Learning when to switch between real and simulated users
Exploration: trying actions to improve the world model
Exploitation: trying to behave in the optimal way given the current world model | [] |
GEM-SciDuet-train-13#paper-982#slide-1 | 982 | Integrating Case Frame into Japanese to Chinese Hierarchical Phrase-based Translation Model | This paper presents a novel approach to enhance hierarchical phrase-based (HP-B) machine translation systems with case frame (CF).we integrate the Japanese shallow CF into both rule extraction and decoding. All of these rules are then employed to decode new sentences in Japanese with source language case frame. The res... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4.1",
"4.2",
"4.3",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"Case Frame",
"The proposed approach",
"Case Frame Rules Extraction",
"Transforming Case Frame Rule i... | GEM-SciDuet-train-13#paper-982#slide-1 | Motivation | Hierarchical phrase-based model limit
Linguistic features (Japanese) subject object verb structure auxiliary words | Hierarchical phrase-based model limit
Linguistic features (Japanese) subject object verb structure auxiliary words | [] |
GEM-SciDuet-train-13#paper-982#slide-2 | 982 | Integrating Case Frame into Japanese to Chinese Hierarchical Phrase-based Translation Model | This paper presents a novel approach to enhance hierarchical phrase-based (HP-B) machine translation systems with case frame (CF).we integrate the Japanese shallow CF into both rule extraction and decoding. All of these rules are then employed to decode new sentences in Japanese with source language case frame. The res... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4.1",
"4.2",
"4.3",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"Case Frame",
"The proposed approach",
"Case Frame Rules Extraction",
"Transforming Case Frame Rule i... | GEM-SciDuet-train-13#paper-982#slide-2 | Verb Case Frame | Deep verb case frame between paralleled sentences in two languages
Subject Object Time Location Tool
Specific to Japanese explicit case frame
Agent Time Object Goal Tool Verb
Time Agent Tool Verb Object Goal
Deep case frame to shallow case frame for Japanese | Deep verb case frame between parallel sentences in two languages
Subject Object Time Location Tool
Specific to Japanese explicit case frame
Agent Time Object Goal Tool Verb
Time Agent Tool Verb Object Goal
Deep case frame to shallow case frame for Japanese | [] |
GEM-SciDuet-train-13#paper-982#slide-3 | 982 | Integrating Case Frame into Japanese to Chinese Hierarchical Phrase-based Translation Model | This paper presents a novel approach to enhance hierarchical phrase-based (HP-B) machine translation systems with case frame (CF).we integrate the Japanese shallow CF into both rule extraction and decoding. All of these rules are then employed to decode new sentences in Japanese with source language case frame. The res... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4.1",
"4.2",
"4.3",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"Case Frame",
"The proposed approach",
"Case Frame Rules Extraction",
"Transforming Case Frame Rule i... | GEM-SciDuet-train-13#paper-982#slide-3 | Method | Case Frame Rule extraction
Obtain case frame rules from parallel sentences with word alignments
Transform case frame rules into hiero rules.
examples of case frame rules
(a) the example of phrase rule transformation
(b) the example of reordering rule transformation | Case Frame Rule extraction
Obtain case frame rules from parallel sentences with word alignments
Transform case frame rules into hiero rules.
examples of case frame rules
(a) the example of phrase rule transformation
(b) the example of reordering rule transformation | [] |
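To illustrate the transformation named above (case frame rules into hiero rules), here is a toy sketch that turns a verb-final case-frame pattern into an SCFG rule string with reordered, linked nonterminals. The input format and the sample markers are invented for illustration and do not reproduce the paper's actual rule representation:

```python
def cf_to_hiero(cf_rule):
    # Source side keeps Japanese case markers and verb-final order;
    # target side puts the verb first, reusing the same linked variables.
    src = " ".join(f"[X,{i+1}] {m}" for i, m in enumerate(cf_rule["markers"]))
    src += f" {cf_rule['verb_src']}"
    tgt = cf_rule["verb_tgt"] + " " + " ".join(
        f"[X,{i+1}]" for i in range(len(cf_rule["markers"])))
    return f"X -> < {src} , {tgt} >"

rule = {"verb_src": "行く", "verb_tgt": "去", "markers": ["が", "に"]}
print(cf_to_hiero(rule))  # X -> < [X,1] が [X,2] に 行く , 去 [X,1] [X,2] >
```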
GEM-SciDuet-train-13#paper-982#slide-4 | 982 | Integrating Case Frame into Japanese to Chinese Hierarchical Phrase-based Translation Model | This paper presents a novel approach to enhance hierarchical phrase-based (HP-B) machine translation systems with case frame (CF).we integrate the Japanese shallow CF into both rule extraction and decoding. All of these rules are then employed to decode new sentences in Japanese with source language case frame. The res... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4.1",
"4.2",
"4.3",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"Case Frame",
"The proposed approach",
"Case Frame Rules Extraction",
"Transforming Case Frame Rule i... | GEM-SciDuet-train-13#paper-982#slide-4 | Experiment | CWMT 2011 Japanese-Chinese Corpus (sentence pairs)
ASPEC-JC Corpus (sentence pairs)
Training data: 680 thousand
exp1: Strong hierarchical phrase-based system (baseline)
exp2: exp1 with case frame rules
exp3: exp1 with manual case frame rules
Variables in rules are not distinguished during decoding
system system CWMT... | CWMT 2011 Japanese-Chinese Corpus (sentence pairs)
ASPEC-JC Corpus (sentence pairs)
Training data: 680 thousand
exp1: Strong hierarchical phrase-based system (baseline)
exp2: exp1 with case frame rules
exp3: exp1 with manual case frame rules
Variables in rules are not distinguished during decoding
system system CWMT... | [] |
GEM-SciDuet-train-13#paper-982#slide-5 | 982 | Integrating Case Frame into Japanese to Chinese Hierarchical Phrase-based Translation Model | This paper presents a novel approach to enhance hierarchical phrase-based (HP-B) machine translation systems with case frame (CF).we integrate the Japanese shallow CF into both rule extraction and decoding. All of these rules are then employed to decode new sentences in Japanese with source language case frame. The res... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4.1",
"4.2",
"4.3",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"Case Frame",
"The proposed approach",
"Case Frame Rules Extraction",
"Transforming Case Frame Rule i... | GEM-SciDuet-train-13#paper-982#slide-5 | Conclusion | This paper presented an approach to improve HPB model systems by augmenting the SCFG with Japanese CFRs.
The CFs are used to introduce new linguistically sensible hypotheses into the translation search space while maintaining Hiero's robustness and avoiding computational explosion.
We obtain significant imp... | This paper presented an approach to improve HPB model systems by augmenting the SCFG with Japanese CFRs.
The CFs are used to introduce new linguistically sensible hypotheses into the translation search space while maintaining Hiero's robustness and avoiding computational explosion.
We obtain significant imp... | [] |
GEM-SciDuet-train-13#paper-982#slide-6 | 982 | Integrating Case Frame into Japanese to Chinese Hierarchical Phrase-based Translation Model | This paper presents a novel approach to enhance hierarchical phrase-based (HP-B) machine translation systems with case frame (CF).we integrate the Japanese shallow CF into both rule extraction and decoding. All of these rules are then employed to decode new sentences in Japanese with source language case frame. The res... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4.1",
"4.2",
"4.3",
"5",
"6"
],
"paper_header_content": [
"Introduction",
"Case Frame",
"The proposed approach",
"Case Frame Rules Extraction",
"Transforming Case Frame Rule i... | GEM-SciDuet-train-13#paper-982#slide-6 | Future work | Soft/hard constraints on case frame rule matching
Challenge to resolve the problem of tense and aspect etc. | Soft/hard constraints on case frame rule matching
Challenge to resolve the problem of tense and aspect etc. | [] |
GEM-SciDuet-train-14#paper-986#slide-0 | 986 | Overview of the CL-SciSumm 2016 Shared Task | The CL-SciSumm 2016 Shared Task is the first medium-scale shared task on scientific document summarization in the computational linguistics (CL) domain. The task built off of the experience and training data set created in its namesake pilot task, which was conducted in 2014 by the same organizing committee. The track ... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Task",
"CL-SciSumm Pilot 2014",
"Development",
"Annotation",
"Overview of Approaches",
"System Runs",
"Conclusion"
]
} | GEM-SciDuet-train-14#paper-986#slide-0 | Corpus highlights | Slides available at http://bit.ly/cl-scisumm16-slides and will be filed in GitHub.
Continuing effort to advance scientific document summarization by encouraging the incorporation of semantic and citation information.
Corpus enlarged from 10 (pilot) to 30 CL articles
Annotation by 6 paid and trained annotators from U-Hy... | Slides available at http://bit.ly/cl-scisumm16-slides and will be filed in GitHub.
Continuing effort to advance scientific document summarization by encouraging the incorporation of semantic and citation information.
Corpus enlarged from 10 (pilot) to 30 CL articles
Annotation by 6 paid and trained annotators from U-Hy... | [] |
GEM-SciDuet-train-14#paper-986#slide-1 | 986 | Overview of the CL-SciSumm 2016 Shared Task | The CL-SciSumm 2016 Shared Task is the first medium-scale shared task on scientific document summarization in the computational linguistics (CL) domain. The task built off of the experience and training data set created in its namesake pilot task, which was conducted in 2014 by the same organizing committee. The track ... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Task",
"CL-SciSumm Pilot 2014",
"Development",
"Annotation",
"Overview of Approaches",
"System Runs",
"Conclusion"
]
} | GEM-SciDuet-train-14#paper-986#slide-1 | Oral Sessions | Slides available at http://bit.ly/cl-scisumm16-slides and will be filed in GitHub.
System 8: Top in Task 1B, among top performers for
Task 1A and Task 2
* Remote presentation from China | System 6: Among top performers for Task 1A
System 8: Top in Task 1B, among top performers for
Task 1A and Task 2
* Remote presentation from China | System 6: Among top performers for Task 1A
GEM-SciDuet-train-14#paper-986#slide-2 | 986 | Overview of the CL-SciSumm 2016 Shared Task | The CL-SciSumm 2016 Shared Task is the first medium-scale shared task on scientific document summarization in the computational linguistics (CL) domain. The task built off of the experience and training data set created in its namesake pilot task, which was conducted in 2014 by the same organizing committee. The track ... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Task",
"CL-SciSumm Pilot 2014",
"Development",
"Annotation",
"Overview of Approaches",
"System Runs",
"Conclusion"
]
} | GEM-SciDuet-train-14#paper-986#slide-2 | Evaluation | Still a work in progress:
Will present results based on the CEUR paper (old), stacked average of all runs...
... and contrast with newer (still preliminary) results (new), individual runs separated
Task 1A: Exact sentence ID match
Task 1B: conditional on Task 1A
Bag of Words (BOW) overlap between discourse facets
BIRNDL 2... | Still a work in progress:
Will present results based on the CEUR paper (old), stacked average of all runs...
... and contrast with newer (still preliminary) results (new), individual runs separated
Task 1A: Exact sentence ID match
Task 1B: conditional on Task 1A
Bag of Words (BOW) overlap between discourse facets
BIRNDL 2... | [] |
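The "exact sentence ID match" evaluation above reduces to set overlap between gold and system sentence IDs. A minimal precision/recall/F1 sketch, my own formulation of that idea rather than the task's official scorer:

```python
def task1a_scores(gold_ids, system_ids):
    gold, sys = set(gold_ids), set(system_ids)
    tp = len(gold & sys)                      # exactly matched sentence IDs
    p = tp / len(sys) if sys else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

print(task1a_scores({3, 7, 12}, {7, 12, 20}))  # ~(0.667, 0.667, 0.667)
```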
GEM-SciDuet-train-14#paper-986#slide-3 | 986 | Overview of the CL-SciSumm 2016 Shared Task | The CL-SciSumm 2016 Shared Task is the first medium-scale shared task on scientific document summarization in the computational linguistics (CL) domain. The task built off of the experience and training data set created in its namesake pilot task, which was conducted in 2014 by the same organizing committee. The track ... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Task",
"CL-SciSumm Pilot 2014",
"Development",
"Annotation",
"Overview of Approaches",
"System Runs",
"Conclusion"
]
} | GEM-SciDuet-train-14#paper-986#slide-3 | System Results Task 1A and 1B | (figure: per-system results chart for Tasks 1A and 1B; axis labels not recoverable)
BIRNDL 2016: CL-SciSumm 16 Overview 23 June 2016 7 CEUR version (all system runs averaged) | (figure: per-system results chart for Tasks 1A and 1B; axis labels not recoverable)
BIRNDL 2016: CL-SciSumm 16 Overview 23 June 2016 7 CEUR version (all system runs averaged) | [] |
GEM-SciDuet-train-14#paper-986#slide-4 | 986 | Overview of the CL-SciSumm 2016 Shared Task | The CL-SciSumm 2016 Shared Task is the first medium-scale shared task on scientific document summarization in the computational linguistics (CL) domain. The task built off of the experience and training data set created in its namesake pilot task, which was conducted in 2014 by the same organizing committee. The track ... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Task",
"CL-SciSumm Pilot 2014",
"Development",
"Annotation",
"Overview of Approaches",
"System Runs",
"Conclusion"
]
} | GEM-SciDuet-train-14#paper-986#slide-4 | Best Performing System Task 1A | System ID Avg Best performing StDev performance Systems | System ID Avg Best performing StDev performance Systems | [] |
GEM-SciDuet-train-14#paper-986#slide-5 | 986 | Overview of the CL-SciSumm 2016 Shared Task | The CL-SciSumm 2016 Shared Task is the first medium-scale shared task on scientific document summarization in the computational linguistics (CL) domain. The task built off of the experience and training data set created in its namesake pilot task, which was conducted in 2014 by the same organizing committee. The track ... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Task",
"CL-SciSumm Pilot 2014",
"Development",
"Annotation",
"Overview of Approaches",
"System Runs",
"Conclusion"
]
} | GEM-SciDuet-train-14#paper-986#slide-5 | Best Performing System Task 1B | System ID Avg StDev performance Best performing | System ID Avg StDev performance Best performing | [] |
GEM-SciDuet-train-14#paper-986#slide-6 | 986 | Overview of the CL-SciSumm 2016 Shared Task | The CL-SciSumm 2016 Shared Task is the first medium-scale shared task on scientific document summarization in the computational linguistics (CL) domain. The task built off of the experience and training data set created in its namesake pilot task, which was conducted in 2014 by the same organizing committee. The track ... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Task",
"CL-SciSumm Pilot 2014",
"Development",
"Annotation",
"Overview of Approaches",
"System Runs",
"Conclusion"
]
} | GEM-SciDuet-train-14#paper-986#slide-6 | Best Performing System Task 2 | System ID Approaches Comments Systems | System ID Approaches Comments Systems | [] |
GEM-SciDuet-train-14#paper-986#slide-7 | 986 | Overview of the CL-SciSumm 2016 Shared Task | The CL-SciSumm 2016 Shared Task is the first medium-scale shared task on scientific document summarization in the computational linguistics (CL) domain. The task built off of the experience and training data set created in its namesake pilot task, which was conducted in 2014 by the same organizing committee. The track ... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Task",
"CL-SciSumm Pilot 2014",
"Development",
"Annotation",
"Overview of Approaches",
"System Runs",
"Conclusion"
]
} | GEM-SciDuet-train-14#paper-986#slide-7 | New Results Task 1A | New Results (Task 1A)
BIRNDL 2016: CL-SciSumm 16 Overview
System ID Approach Task 1A Comments | New Results (Task 1A)
BIRNDL 2016: CL-SciSumm 16 Overview
System ID Approach Task 1A Comments | []
GEM-SciDuet-train-14#paper-986#slide-8 | 986 | Overview of the CL-SciSumm 2016 Shared Task | The CL-SciSumm 2016 Shared Task is the first medium-scale shared task on scientific document summarization in the computational linguistics (CL) domain. The task built off of the experience and training data set created in its namesake pilot task, which was conducted in 2014 by the same organizing committee. The track ... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Task",
"CL-SciSumm Pilot 2014",
"Development",
"Annotation",
"Overview of Approaches",
"System Runs",
"Conclusion"
]
} | GEM-SciDuet-train-14#paper-986#slide-8 | New Results Task 1B | ID Approach Task 1B | ID Approach Task 1B | [] |
GEM-SciDuet-train-14#paper-986#slide-9 | 986 | Overview of the CL-SciSumm 2016 Shared Task | The CL-SciSumm 2016 Shared Task is the first medium-scale shared task on scientific document summarization in the computational linguistics (CL) domain. The task built off of the experience and training data set created in its namesake pilot task, which was conducted in 2014 by the same organizing committee. The track ... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Task",
"CL-SciSumm Pilot 2014",
"Development",
"Annotation",
"Overview of Approaches",
"System Runs",
"Conclusion"
]
} | GEM-SciDuet-train-14#paper-986#slide-9 | New Results Task 2 | New Results Task 2
(figure: New Results Task 2 results table; OCR text not recoverable) | New Results Task 2
(figure: New Results Task 2 results table; OCR text not recoverable) | []
GEM-SciDuet-train-14#paper-986#slide-10 | 986 | Overview of the CL-SciSumm 2016 Shared Task | The CL-SciSumm 2016 Shared Task is the first medium-scale shared task on scientific document summarization in the computational linguistics (CL) domain. The task built off of the experience and training data set created in its namesake pilot task, which was conducted in 2014 by the same organizing committee. The track ... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Task",
"CL-SciSumm Pilot 2014",
"Development",
"Annotation",
"Overview of Approaches",
"System Runs",
"Conclusion"
]
} | GEM-SciDuet-train-14#paper-986#slide-10 | Supplemental Analyses | We investigated whether high deviations could be because of the topic
Topics with both high and low number of citances have mixed results
No significant patterns of performance against:
Number of citances of the topic set
Age of the paper
BIRNDL 2016: CL-SciSumm 16 Overview | We investigated whether high deviations could be because of the topic
Topics with both high and low number of citances have mixed results
No significant patterns of performance against:
Number of citances of the topic set
Age of the paper
BIRNDL 2016: CL-SciSumm 16 Overview | []
GEM-SciDuet-train-14#paper-986#slide-11 | 986 | Overview of the CL-SciSumm 2016 Shared Task | The CL-SciSumm 2016 Shared Task is the first medium-scale shared task on scientific document summarization in the computational linguistics (CL) domain. The task built off of the experience and training data set created in its namesake pilot task, which was conducted in 2014 by the same organizing committee. The track ... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Task",
"CL-SciSumm Pilot 2014",
"Development",
"Annotation",
"Overview of Approaches",
"System Runs",
"Conclusion"
]
} | GEM-SciDuet-train-14#paper-986#slide-11 | Limitations | Task 1B: limited number of samples for most (e.g., hypothesis) discourse facets, inconsistent labeling
Preprocessing: OCR + Parsing
Software: Protege w/ manual alignment and post-processing
Scaling the corpus was difficult: key bottleneck in the corpus development
The Corpus size, #citing papers | Task 1B: limited number of samples for most (e.g., hypothesis) discourse facets, inconsistent labeling
Preprocessing: OCR + Parsing
Software: Protege w/ manual alignment and post-processing
Scaling the corpus was difficult: key bottleneck in the corpus development
The Corpus size, #citing papers | [] |
GEM-SciDuet-train-14#paper-986#slide-13 | 986 | Overview of the CL-SciSumm 2016 Shared Task | The CL-SciSumm 2016 Shared Task is the first medium-scale shared task on scientific document summarization in the computational linguistics (CL) domain. The task built off of the experience and training data set created in its namesake pilot task, which was conducted in 2014 by the same organizing committee. The track ... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Task",
"CL-SciSumm Pilot 2014",
"Development",
"Annotation",
"Overview of Approaches",
"System Runs",
"Conclusion"
]
} | GEM-SciDuet-train-14#paper-986#slide-13 | Conclusions | Slides available at http://bit.ly/cl-scisumm16-slides and will be filed in GitHub.
Successful enlargement of the 2014 pilot task, albeit with some clarification issues
We invite teams to examine the detailed results available with the GitHub repo: https://github.com/WING-NUS/scisumm-corpus/
Results and finalized analy... | Slides available at http://bit.ly/cl-scisumm16-slides and will be filed in GitHub.
Successful enlargement of the 2014 pilot task, albeit with some clarification issues
We invite teams to examine the detailed results available with the GitHub repo: https://github.com/WING-NUS/scisumm-corpus/
Results and finalized analy... | [] |
GEM-SciDuet-train-14#paper-986#slide-15 | 986 | Overview of the CL-SciSumm 2016 Shared Task | The CL-SciSumm 2016 Shared Task is the first medium-scale shared task on scientific document summarization in the computational linguistics (CL) domain. The task built off of the experience and training data set created in its namesake pilot task, which was conducted in 2014 by the same organizing committee. The track ... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Task",
"CL-SciSumm Pilot 2014",
"Development",
"Annotation",
"Overview of Approaches",
"System Runs",
"Conclusion"
]
} | GEM-SciDuet-train-14#paper-986#slide-15 | Scientific Document Summarization | [unrecoverable OCR heading]
Surface, lexical, semantic or rhetorical features of the paper
Community creates a summary when citing
Capture all aspects of a paper | [unrecoverable OCR heading]
Surface, lexical, semantic or rhetorical features of the paper
Community creates a summary when citing
Capture all aspects of a paper | [] |
GEM-SciDuet-train-14#paper-986#slide-16 | 986 | Overview of the CL-SciSumm 2016 Shared Task | The CL-SciSumm 2016 Shared Task is the first medium-scale shared task on scientific document summarization in the computational linguistics (CL) domain. The task built off of the experience and training data set created in its namesake pilot task, which was conducted in 2014 by the same organizing committee. The track ... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Task",
"CL-SciSumm Pilot 2014",
"Development",
"Annotation",
"Overview of Approaches",
"System Runs",
"Conclusion"
]
} | GEM-SciDuet-train-14#paper-986#slide-16 | Scientific Document Summarization Citation-based extractive summaries | Qazvinian, V., and Radev, D. R. Identifying non-explicit citing sentences for citation-based summarization (ACL, 2010)
Abu-Jbara, Amjad, and Dragomir Radev. Reference scope identification in citing sentences. (ACL, 2012)
Abu-Jbara, Amjad, and Dragomir Radev. Coherent citation-based summarization of scientific papers. (ACL... | Qazvinian, V., and Radev, D. R. Identifying non-explicit citing sentences for citation-based summarization (ACL, 2010)
Abu-Jbara, Amjad, and Dragomir Radev. Reference scope identification in citing sentences. (ACL, 2012)
Abu-Jbara, Amjad, and Dragomir Radev. Coherent citation-based summarization of scientific papers. (ACL... | [] |
GEM-SciDuet-train-14#paper-986#slide-17 | 986 | Overview of the CL-SciSumm 2016 Shared Task | The CL-SciSumm 2016 Shared Task is the first medium-scale shared task on scientific document summarization in the computational linguistics (CL) domain. The task built off of the experience and training data set created in its namesake pilot task, which was conducted in 2014 by the same organizing committee. The track ... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Task",
"CL-SciSumm Pilot 2014",
"Development",
"Annotation",
"Overview of Approaches",
"System Runs",
"Conclusion"
]
} | GEM-SciDuet-train-14#paper-986#slide-17 | In summary | Community concurs that a citation-based summary of a scientific document is important.
Citing papers cite different aspects of the same reference paper.
Assigning facets to these citances may help create | Community concurs that a citation-based summary of a scientific document is important.
Citing papers cite different aspects of the same reference paper.
Assigning facets to these citances may help create | [] |
GEM-SciDuet-train-14#paper-986#slide-19 | 986 | Overview of the CL-SciSumm 2016 Shared Task | The CL-SciSumm 2016 Shared Task is the first medium-scale shared task on scientific document summarization in the computational linguistics (CL) domain. The task built off of the experience and training data set created in its namesake pilot task, which was conducted in 2014 by the same organizing committee. The track ... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Task",
"CL-SciSumm Pilot 2014",
"Development",
"Annotation",
"Overview of Approaches",
"System Runs",
"Conclusion"
]
} | GEM-SciDuet-train-14#paper-986#slide-19 | Annotating the SciSumm corpus | 6 annotators selected from a pool of 25
6 hours of training
Gold standard annotations for Tasks 1A and 1B,
per topic or reference paper
Community and hand-written summaries for Task 2, | 6 annotators selected from a pool of 25
6 hours of training
Gold standard annotations for Tasks 1A and 1B,
per topic or reference paper
Community and hand-written summaries for Task 2, | [] |
GEM-SciDuet-train-15#paper-991#slide-0 | 991 | Two Methods for Domain Adaptation of Bilingual Tasks: Delightfully Simple and Broadly Applicable | Bilingual tasks, such as bilingual lexicon induction and cross-lingual classification, are crucial for overcoming data sparsity in the target language. Resources required for such tasks are often out-of-domain, thus domain adaptation is an important problem here. We make two contributions. First, we test a delightfully... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"2.2",
"2.3",
"3",
"3.1",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5",
"5.1",
"5.2",
"5.3",
"6",
"6.1",
"6.2",
"6.3",
"7"
],
"paper_header_content": [
"Introduction",
"Previous Work 2.1 Bilingual ... | GEM-SciDuet-train-15#paper-991#slide-0 | Introduction | I Bilingual transfer learning is important for overcoming data
sparsity in the target language
I Bilingual word embeddings eliminate the gap between source
and target language vocabulary
I Resources required for bilingual methods are often out-of-domain:
I Texts for embeddings
I Source language training samples
I We focused on domain... | I Bilingual transfer learning is important for overcoming data
sparsity in the target language
I Bilingual word embeddings eliminate the gap between source
and target language vocabulary
I Resources required for bilingual methods are often out-of-domain:
I Texts for embeddings
I Source language training samples
I We focused on domain... | [] |
GEM-SciDuet-train-15#paper-991#slide-1 | 991 | Two Methods for Domain Adaptation of Bilingual Tasks: Delightfully Simple and Broadly Applicable | Bilingual tasks, such as bilingual lexicon induction and cross-lingual classification, are crucial for overcoming data sparsity in the target language. Resources required for such tasks are often out-of-domain, thus domain adaptation is an important problem here. We make two contributions. First, we test a delightfully... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"2.2",
"2.3",
"3",
"3.1",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5",
"5.1",
"5.2",
"5.3",
"6",
"6.1",
"6.2",
"6.3",
"7"
],
"paper_header_content": [
"Introduction",
"Previous Work 2.1 Bilingual ... | GEM-SciDuet-train-15#paper-991#slide-1 | Motivation | I Cross-lingual sentiment analysis of tweets
triste sad awful horrible bad malo super super
mug jarra rojo hoy red today
I Combination of two methods:
I Domain adaptation of bilingual word embeddings
I Semi-supervised system for exploiting unlabeled data
I No additional annotated resource is needed:
I Cross-lingual sen... | I Cross-lingual sentiment analysis of tweets
triste sad awful horrible bad malo super super
mug jarra rojo hoy red today
I Combination of two methods:
I Domain adaptation of bilingual word embeddings
I Semi-supervised system for exploiting unlabeled data
I No additional annotated resource is needed:
I Cross-lingual sen... | [] |
GEM-SciDuet-train-15#paper-991#slide-2 | 991 | Two Methods for Domain Adaptation of Bilingual Tasks: Delightfully Simple and Broadly Applicable | Bilingual tasks, such as bilingual lexicon induction and cross-lingual classification, are crucial for overcoming data sparsity in the target language. Resources required for such tasks are often out-of-domain, thus domain adaptation is an important problem here. We make two contributions. First, we test a delightfully... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"2.2",
"2.3",
"3",
"3.1",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5",
"5.1",
"5.2",
"5.3",
"6",
"6.1",
"6.2",
"6.3",
"7"
],
"paper_header_content": [
"Introduction",
"Previous Work 2.1 Bilingual ... | GEM-SciDuet-train-15#paper-991#slide-2 | Word Embedding Adaptation | Source Out-of-domain In-domain W2V MWE
Target Out-of-domain In-domain W2V MWE BWE
I Goal: domain-specific bilingual word embeddings with general
Monolingual word embeddings on concatenated data
I Easily accessible general (out-of-domain) data
Map monolingual embeddings to a common space using
I Small seed lexicon conta... | [Pipeline diagram: for Source and Target alike, out-of-domain + in-domain text → W2V → monolingual WE (MWE); the two MWEs are then mapped into a BWE]
I Goal: domain-specific bilingual word embeddings with general
Monolingual word embeddings on concatenated data
I Easily accessible general (out-of-domain) data
Map monolingual embeddings to a common space using
I Small seed lexicon conta... | [] |
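To make the adaptation recipe in this record concrete, here is a minimal Python sketch: train word2vec per language on the concatenation of general and in-domain text, then learn a least-squares linear map between the two monolingual spaces from the seed lexicon. This is an illustration under stated assumptions (gensim >= 4, a Mikolov-style linear map; all corpus variables and function names are invented for the example), not the authors' released code:

```python
# Sketch: domain-adapted bilingual embeddings from a seed lexicon.
# Corpora are lists of token lists; everything here is illustrative.
import numpy as np
from gensim.models import Word2Vec

def train_embeddings(general_corpus, domain_corpus, dim=100):
    # The "delightfully simple" step: concatenate easily accessible
    # general text with the small in-domain corpus before training.
    return Word2Vec(sentences=general_corpus + domain_corpus,
                    vector_size=dim, min_count=2, workers=4).wv

def fit_mapping(src_wv, tgt_wv, seed_lexicon):
    # seed_lexicon: iterable of (source_word, target_word) pairs.
    pairs = [(s, t) for s, t in seed_lexicon if s in src_wv and t in tgt_wv]
    X = np.vstack([src_wv[s] for s, _ in pairs])
    Y = np.vstack([tgt_wv[t] for _, t in pairs])
    # Least-squares linear map W such that X @ W ~= Y.
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def translate(word, W, src_wv, tgt_wv, topn=5):
    # Nearest target-language neighbours of the mapped source vector.
    return tgt_wv.similar_by_vector(src_wv[word] @ W, topn=topn)
```

Training on the concatenation, rather than fine-tuning general embeddings, is the design point the slide stresses: the general data supplies vocabulary coverage while the in-domain contexts pull domain terms into place.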
GEM-SciDuet-train-15#paper-991#slide-3 | 991 | Two Methods for Domain Adaptation of Bilingual Tasks: Delightfully Simple and Broadly Applicable | Bilingual tasks, such as bilingual lexicon induction and cross-lingual classification, are crucial for overcoming data sparsity in the target language. Resources required for such tasks are often out-of-domain, thus domain adaptation is an important problem here. We make two contributions. First, we test a delightfully... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"2.2",
"2.3",
"3",
"3.1",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5",
"5.1",
"5.2",
"5.3",
"6",
"6.1",
"6.2",
"6.3",
"7"
],
"paper_header_content": [
"Introduction",
"Previous Work 2.1 Bilingual ... | GEM-SciDuet-train-15#paper-991#slide-3 | Semi Supervised Approach | I Goal: Unlabeled samples for training
I Tailored a system from computer vision to NLP (Haeusser et al., 2017)
I Labeled/unlabeled samples in the same class are similar
I Sample representation is given by the (n-1)th layer
I Walking cycles: labeled → unlabeled → labeled
I Maximize the number of correct cycles
I L = Lclassificatio... | I Goal: Unlabeled samples for training
I Tailored a system from computer vision to NLP (Haeusser et al., 2017)
I Labeled/unlabeled samples in the same class are similar
I Sample representation is given by the (n-1)th layer
I Walking cycles: labeled → unlabeled → labeled
I Maximize the number of correct cycles
I L = Lclassificatio... | [] |
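The cycle idea in this record comes from learning by association; its walker and visit losses are short enough to sketch. The PyTorch snippet below is an illustrative re-implementation under common assumptions (dot-product similarities between the (n-1)th-layer representations, a uniform same-class target) and is not the system's actual code; the full objective adds the usual classification cross-entropy on the labeled batch:

```python
# Sketch of the walker + visit losses behind "walking cycles".
# emb_labeled: (A, d) labeled batch, emb_unlabeled: (B, d) unlabeled
# batch, labels: (A,) class ids. Illustrative only.
import torch
import torch.nn.functional as F

def association_loss(emb_labeled, emb_unlabeled, labels, visit_weight=0.5):
    eps = 1e-8
    M = emb_labeled @ emb_unlabeled.t()      # pairwise similarities
    p_ab = F.softmax(M, dim=1)               # labeled -> unlabeled step
    p_ba = F.softmax(M.t(), dim=1)           # unlabeled -> labeled step
    p_aba = p_ab @ p_ba                      # round-trip cycle A -> B -> A

    # A cycle counts as correct when it returns to a sample with the
    # same label, so the target is uniform over same-class samples.
    same = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    target = same / same.sum(dim=1, keepdim=True)
    walker = F.kl_div((p_aba + eps).log(), target, reduction="batchmean")

    # Visit loss: encourage the cycles to use every unlabeled sample.
    visit_p = p_ab.mean(dim=0)
    uniform = torch.full_like(visit_p, 1.0 / visit_p.numel())
    visit = F.kl_div((visit_p + eps).log(), uniform, reduction="batchmean")
    return walker + visit_weight * visit
```

Maximizing the number of correct cycles is thus implemented as minimizing the divergence between the round-trip distribution and the uniform same-class target.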
GEM-SciDuet-train-15#paper-991#slide-4 | 991 | Two Methods for Domain Adaptation of Bilingual Tasks: Delightfully Simple and Broadly Applicable | Bilingual tasks, such as bilingual lexicon induction and cross-lingual classification, are crucial for overcoming data sparsity in the target language. Resources required for such tasks are often out-of-domain, thus domain adaptation is an important problem here. We make two contributions. First, we test a delightfully... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"2.2",
"2.3",
"3",
"3.1",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5",
"5.1",
"5.2",
"5.3",
"6",
"6.1",
"6.2",
"6.3",
"7"
],
"paper_header_content": [
"Introduction",
"Previous Work 2.1 Bilingual ... | GEM-SciDuet-train-15#paper-991#slide-4 | Cross Lingual Sentiment Analysis of Tweets | I RepLab 2013 sentiment classification (+/0/-) of En/Es tweets
I @churcaballero jajaja con lo bien que iba el volvo... [English: "hahaha, considering how well the Volvo was doing..."]
I General domain data: 49.2M OpenSubtitles sentences
I Twitter specific data:
I 22M downloaded tweets
I Seed lexicon: frequent English words from BNC (Kilgarriff, 1997)
I Labeled data: RepLab En traini... | I RepLab 2013 sentiment classification (+/0/-) of En/Es tweets
I @churcaballero jajaja con lo bien que iba el volvo... [English: "hahaha, considering how well the Volvo was doing..."]
I General domain data: 49.2M OpenSubtitles sentences
I Twitter specific data:
I 22M downloaded tweets
I Seed lexicon: frequent English words from BNC (Kilgarriff, 1997)
I Labeled data: RepLab En traini... | [] |
GEM-SciDuet-train-15#paper-991#slide-5 | 991 | Two Methods for Domain Adaptation of Bilingual Tasks: Delightfully Simple and Broadly Applicable | Bilingual tasks, such as bilingual lexicon induction and cross-lingual classification, are crucial for overcoming data sparsity in the target language. Resources required for such tasks are often out-of-domain, thus domain adaptation is an important problem here. We make two contributions. First, we test a delightfully... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"2.2",
"2.3",
"3",
"3.1",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5",
"5.1",
"5.2",
"5.3",
"6",
"6.1",
"6.2",
"6.3",
"7"
],
"paper_header_content": [
"Introduction",
"Previous Work 2.1 Bilingual ... | GEM-SciDuet-train-15#paper-991#slide-5 | Medical Bilingual Lexicon Induction | I Mine Dutch translations of English medical words
I General domain data: 2M Europarl (v7) sentences
I Medical data: 73.7K medical Wikipedia sentences
I Medical seed lexicon (Heyman et al., 2017)
En word in BNC 5 most similar and 5 random Du pair
En word in medical lexicon 3 most similar Du
I Classifier based approach
... | I Mine Dutch translations of English medical words
I General domain data: 2M Europarl (v7) sentences
I Medical data: 73.7K medical Wikipedia sentences
I Medical seed lexicon (Heyman et al., 2017)
En word in BNC 5 most similar and 5 random Du pair
En word in medical lexicon 3 most similar Du
I Classifier based approach
... | [] |
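A hypothetical reconstruction of the pair-construction recipe the bullets above describe: positives come from the medical seed lexicon, each English word is paired with its most similar and with random Dutch words as negatives, and at prediction time the nearest Dutch neighbours are scored by the classifier as candidates. `bwe` stands for a shared bilingual embedding space with a gensim-like `most_similar` API, and `du_vocab` is a list of Dutch words; every identifier here is illustrative:

```python
# Sketch of data preparation for classifier-based lexicon induction.
import random

def build_pairs(seed_lexicon, bwe, du_vocab, n_sim=5, n_rand=5):
    du_set = set(du_vocab)
    positives, negatives = list(seed_lexicon), []
    for en, du in seed_lexicon:
        if en not in bwe:
            continue
        # Hard negatives: nearest Dutch neighbours other than the gold.
        near = [c for c, _ in bwe.most_similar(en, topn=50)
                if c in du_set and c != du][:n_sim]
        negatives += [(en, c) for c in near]
        # Easy negatives: randomly sampled Dutch words.
        negatives += [(en, random.choice(du_vocab)) for _ in range(n_rand)]
    return positives, negatives

def candidates(en_word, bwe, du_vocab, topn=3):
    # Candidate translations, to be scored by the trained classifier.
    du_set = set(du_vocab)
    return [c for c, _ in bwe.most_similar(en_word, topn=50)
            if c in du_set][:topn]
```

The positive/negative pairs train a binary classifier over pair features; at test time only the few nearest candidates per English word are classified, which keeps induction cheap.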
GEM-SciDuet-train-15#paper-991#slide-6 | 991 | Two Methods for Domain Adaptation of Bilingual Tasks: Delightfully Simple and Broadly Applicable | Bilingual tasks, such as bilingual lexicon induction and cross-lingual classification, are crucial for overcoming data sparsity in the target language. Resources required for such tasks are often out-of-domain, thus domain adaptation is an important problem here. We make two contributions. First, we test a delightfully... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"2.2",
"2.3",
"3",
"3.1",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5",
"5.1",
"5.2",
"5.3",
"6",
"6.1",
"6.2",
"6.3",
"7"
],
"paper_header_content": [
"Introduction",
"Previous Work 2.1 Bilingual ... | GEM-SciDuet-train-15#paper-991#slide-6 | Results Sentiment Analysis | labeled data unlabeled data
Table 1: Accuracy on cross-lingual sentiment analysis of tweets | labeled data unlabeled data
Table 1: Accuracy on cross-lingual sentiment analysis of tweets | [] |
GEM-SciDuet-train-15#paper-991#slide-7 | 991 | Two Methods for Domain Adaptation of Bilingual Tasks: Delightfully Simple and Broadly Applicable | Bilingual tasks, such as bilingual lexicon induction and cross-lingual classification, are crucial for overcoming data sparsity in the target language. Resources required for such tasks are often out-of-domain, thus domain adaptation is an important problem here. We make two contributions. First, we test a delightfully... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"2.2",
"2.3",
"3",
"3.1",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5",
"5.1",
"5.2",
"5.3",
"6",
"6.1",
"6.2",
"6.3",
"7"
],
"paper_header_content": [
"Introduction",
"Previous Work 2.1 Bilingual ... | GEM-SciDuet-train-15#paper-991#slide-7 | Results Bilingual Lexicon Induction | labeled lexicon unlabeled lexicon medical BNC medical medical medical
Table 2: F1 scores of medical bilingual lexicon induction | labeled lexicon unlabeled lexicon medical BNC medical medical medical
Table 2: F1 scores of medical bilingual lexicon induction | [] |
GEM-SciDuet-train-15#paper-991#slide-8 | 991 | Two Methods for Domain Adaptation of Bilingual Tasks: Delightfully Simple and Broadly Applicable | Bilingual tasks, such as bilingual lexicon induction and cross-lingual classification, are crucial for overcoming data sparsity in the target language. Resources required for such tasks are often out-of-domain, thus domain adaptation is an important problem here. We make two contributions. First, we test a delightfully... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"2.2",
"2.3",
"3",
"3.1",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5",
"5.1",
"5.2",
"5.3",
"6",
"6.1",
"6.2",
"6.3",
"7"
],
"paper_header_content": [
"Introduction",
"Previous Work 2.1 Bilingual ... | GEM-SciDuet-train-15#paper-991#slide-8 | Conclusions | I Bilingual transfer learning yield poor results when using
I We showed that performance can be increased by using only
additional unlabeled monolingual data
I Delightfully simple approach to adapt embeddings
I Broadly applicable method to exploit unlabeled data
I Language- and task-independent approaches | I Bilingual transfer learning yields poor results when using
I We showed that performance can be increased by using only
additional unlabeled monolingual data
I Delightfully simple approach to adapt embeddings
I Broadly applicable method to exploit unlabeled data
I Language- and task-independent approaches | [] |
GEM-SciDuet-train-16#paper-994#slide-0 | 994 | Simple and Effective Text Simplification Using Semantic and Neural Methods | Sentence splitting is a major simplification operator. Here we present a simple and efficient splitting algorithm based on an automatic semantic parser. After splitting, the text is amenable for further fine-tuned simplification operations. In particular, we show that neural Machine Translation can be effectively used ... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"4",
"5",
"6",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Semantic Representation",
"The Semantic Rules",
"Neural Component",
"Experimental Setup",
"Results",
"Additio... | GEM-SciDuet-train-16#paper-994#slide-0 | Text Simplification | Last year I read the book John authored John wrote a book. I read the book.
Original sentence → One or several simpler sentences
Multiple motivations Preprocessing for Natural Language Processing tasks
e.g., machine translation, relation extraction, parsing
Reading aids, Language Comprehension
e.g., people with aphasia, ... | Last year I read the book John authored → John wrote a book. I read the book.
Original sentence → One or several simpler sentences
Multiple motivations Preprocessing for Natural Language Processing tasks
e.g., machine translation, relation extraction, parsing
Reading aids, Language Comprehension
e.g., people with aphasia, ... | [] |
GEM-SciDuet-train-16#paper-994#slide-1 | 994 | Simple and Effective Text Simplification Using Semantic and Neural Methods | Sentence splitting is a major simplification operator. Here we present a simple and efficient splitting algorithm based on an automatic semantic parser. After splitting, the text is amenable for further fine-tuned simplification operations. In particular, we show that neural Machine Translation can be effectively used ... | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
... | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"4",
"5",
"6",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Semantic Representation",
"The Semantic Rules",
"Neural Component",
"Experimental Setup",
"Results",
"Additio... | GEM-SciDuet-train-16#paper-994#slide-1 | In this talk | Compares favorably to the state-of-the-art in combined structural and lexical simplification.
The first simplification system combining structural transformations, using semantic structures, and neural machine translation.
Alleviates the over-conservatism of MT-based systems. | Compares favorably to the state-of-the-art in combined structural and lexical simplification.
The first simplification system combining structural transformations, using semantic structures, and neural machine translation.
Alleviates the over-conservatism of MT-based systems. | [] |