Dataset fields (name, type, value-length range):

  gem_id               string   37–41
  paper_id             string   3–4
  paper_title          string   19–183
  paper_abstract       string   168–1.38k
  paper_content        dict
  paper_headers        dict
  slide_id             string   37–41
  slide_title          string   2–85
  slide_content_text   string   11–2.55k
  target               string   11–2.55k
  references           list

paper_id: 954
paper_title: Incremental Syntactic Language Models for Phrase-based Translation
paper_abstract: This paper describes a novel technique for incorporating syntactic knowledge into phrase-based machine translation through incremental syntactic parsing. Bottom-up and top-down parsers typically require a completed string as input. This requirement makes it difficult to incorporate them into phrase-based translation, whi...
paper_content: { "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
paper_headers: { "paper_header_number": [ "1", "2", "3", "3.1", "3.3", "4", "4.1", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Parser as Syntactic Language Model in", "Incremental syntactic language model", "Incorporating a Syntactic Language Mod...

Note: in every row of this dump, slide_id is identical to gem_id, target is identical to slide_content_text, and references is the empty list [], so those three fields are omitted from the per-slide entries; likewise, the per-row copies of the paper metadata are shown once per paper.

gem_id: GEM-SciDuet-train-1#paper-954#slide-0
slide_title: Syntax in Statistical Machine Translation
slide_content_text: Translation Model vs Language Model Syntactic LM Decoder Integration Results Questions?

gem_id: GEM-SciDuet-train-1#paper-954#slide-1
slide_title: Syntax in the Language Model
slide_content_text: An incremental syntactic language model uses an incremental statistical parser to define a probability model over the dependency or phrase structure of target language strings. Phrase-based decoder produces translation in the target...

gem_id: GEM-SciDuet-train-1#paper-954#slide-2
slide_title: Incremental Parsing
slide_content_text: Transform right-expanding sequences of constituents into left-expanding sequences of incomplete constituents. [residue of a parse-tree figure for "The president meets the board on Friday", with complete categories DT, NN, VB, IN, NP, VP, PP and incomplete constituents NP/NN, VP/NP, VP/NN, S/NP]
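
To make the incomplete-constituent notation surviving in that figure residue concrete, here is a small worked illustration (my own sketch in the slide's slash notation, where a/b means "an a still missing a b to its right"; it is not taken verbatim from the paper):

```latex
\begin{array}{lcl}
\textit{the} & \Rightarrow & \mathrm{NP/NN} \quad \text{(a DT opens an NP that still lacks its NN)}\\
\textit{the president} & \Rightarrow & \mathrm{S/VP} \quad \text{(the completed NP is the left corner of an S lacking its VP)}\\
\textit{the president meets} & \Rightarrow & \mathrm{S/NP} \quad \text{(the VB opens the VP; the S now lacks only the object NP)}
\end{array}
```

Each sentence prefix is summarized by a small store of incomplete constituents (here a single one), which is what lets the parser consume words strictly left to right.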

gem_id: GEM-SciDuet-train-1#paper-954#slide-3
slide_title: Incremental Parsing using HHMM (Schuler et al., 2010)
slide_content_text: Hierarchical Hidden Markov Model: circles denote hidden random variables, edges denote conditional dependencies, shaded circles denote observed values (e1 = The, e2 = president, ...). Analogous to Maximally Incremental... [residue of an HHMM figure isomorphic to the tree path, with labels NP/NN, NN, VP/NP, DT, VB]

gem_id: GEM-SciDuet-train-1#paper-954#slide-4
slide_title: Phrase Based Translation
slide_content_text: Der Präsident trifft am Freitag den Vorstand / The president meets the board on Friday. [residue of a phrase-based decoding figure: hypothesis stacks holding partial translations ("the", "the president", "president meets", "that president Obama met", ...) with source-coverage codes AAAAAA, EAAAAA, EEAAAA, EEIAAA]

gem_id: GEM-SciDuet-train-1#paper-954#slide-5
slide_title: Phrase Based Translation with Syntactic LM
slide_content_text: τ̃_t^h represents the parses of the partial translation at node h in stack t. [same decoding-stacks figure residue as the previous slide]

gem_id: GEM-SciDuet-train-1#paper-954#slide-6
slide_title: Integrate Parser into Phrase based Decoder
slide_content_text: [residue of a figure: hypothesis stacks extend "the" → "the president" → "president meets" → "meets the" → "president meets the board" while coverage codes advance EAAAAA → EEAAAA → EEIAAA → EEIIAA]

gem_id: GEM-SciDuet-train-1#paper-954#slide-7
slide_title: Direct Maximum Entropy Model of Translation
slide_content_text: ê = argmax_e exp Σ_j λ_j h_j(e, f), where λ_j is the set of j feature weights and the feature functions h_j include the distortion model, the n-gram LM, and the syntactic LM P(τ̃_t). [decoding-stacks figure residue as on earlier slides]
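
The slide's decision rule rendered as display math (the standard direct maximum-entropy formulation; only the feature inventory named on the slide is assumed):

```latex
\hat{e} \;=\; \operatorname*{argmax}_{e} \; \exp \sum_{j} \lambda_j \, h_j(e, f)
```

Here f is the source sentence, each h_j(e, f) is one feature function (distortion model, n-gram LM, syntactic LM), and the λ_j are the tuned feature weights.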

gem_id: GEM-SciDuet-train-1#paper-954#slide-8
slide_title: Does an Incremental Syntactic LM Help Translation
slide_content_text: ...but will it make my BLEU score go up? NIST OpenMT 2008 Urdu-English data set; Moses with the standard phrase-based translation model; tuning and testing restricted to sentences 20 words long. [residue of a BLEU results table: Moses with n-gram LM only vs. n-gram LM + syntactic LM] Result...

gem_id: GEM-SciDuet-train-1#paper-954#slide-9
slide_title: Perplexity Results
slide_content_text: Language models trained on the WSJ Treebank corpus: WSJ 5-gram + WSJ SynLM; ...and an n-gram model for the larger English Gigaword corpus: Gigaword 5-gram + WSJ SynLM.

gem_id: GEM-SciDuet-train-1#paper-954#slide-10
slide_title: Summary
slide_content_text: Straightforward general framework for incorporating any incremental syntactic LM into phrase-based translation. We used an incremental HHMM parser as the syntactic LM. The syntactic LM shows a substantial decrease in perplexity on out-of-domain data over an n-gram LM when trained on the same data. Syntactic LM interpolated with n-gram LM ...

gem_id: GEM-SciDuet-train-1#paper-954#slide-11
slide_title: This looks a lot like CCG
slide_content_text: Our parser performs some CCG-style operations: type raising in conjunction with forward function composition.

gem_id: GEM-SciDuet-train-1#paper-954#slide-12
slide_title: Why not just use CCG
slide_content_text: There is no probabilistic version of incremental CCG. Our parser is constrained (we don't have backward composition). We do use those components of CCG (forward function application and forward function composition) which are useful for probabilistic incremental parsing.

gem_id: GEM-SciDuet-train-1#paper-954#slide-13
slide_title: Speed Results
slide_content_text: Mean per-sentence decoding time; parser beam sizes are indicated for the syntactic LM. The parser runs in linear time, but we're parsing all paths through the Moses lattice as they are generated by the decoder: more informed pruning, but slower decoding.

gem_id: GEM-SciDuet-train-1#paper-954#slide-14
slide_title: Phrase Based Translation w/ Syntactic LM
slide_content_text: e = a string of n target-language words e_1...e_n; e_t = the first t words in e, where t ≤ n; τ_t = the set of all incremental parses of e_t; τ̃_t = the subset of parses τ_t that remain after parser pruning; ê = argmax_e P(e).
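
Assembling the slide's definitions into display math (my reconstruction; whatever followed the argmax in the original, apparently involving the pruned parse sets over t = 1 ... n, is too garbled in extraction to restore):

```latex
\hat{e} \;=\; \operatorname*{argmax}_{e}\; P(e),
\qquad
\tau_t \;=\; \{\,\text{incremental parses of } e_1 \dots e_t\,\},
\qquad
\tilde{\tau}_t \subseteq \tau_t \ \ \text{(parses surviving pruning)}
```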

paper_id: 957
paper_title: LINA: Identifying Comparable Documents from Wikipedia
paper_abstract: This paper describes the LINA system for the BUCC 2015 shared track. Following (Enright and Kondrak, 2007), our system identifies comparable documents by collecting counts of hapax words. We extend this method by filtering out document pairs sharing target documents using pigeonhole reasoning and cross-lingual informatio...
paper_content: { "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
paper_headers: { "paper_header_number": [ "1", "2", "3.1", "3.2", "4" ], "paper_header_content": [ "Introduction", "Proposed Method", "Experimental settings", "Results", "Discussion" ] }

gem_id: GEM-SciDuet-train-2#paper-957#slide-0
slide_title: Introduction
slide_content_text:
  - How far can we go with a language-agnostic model?
  - We experiment with [Enright and Kondrak, 2007]'s parallel document identification.
  - We adapt the method to the BUCC-2015 shared task based on two assumptions: source documents should be paired 1-to-1 with target documents; we have access to comparable documents in sev...

gem_id: GEM-SciDuet-train-2#paper-957#slide-1
slide_title: Method
slide_content_text:
  - Fast parallel document identification [Enright and Kondrak, 2007].
  - Documents = bags of hapax words.
  - Words = blank-separated strings that are 4+ characters long.
  - Given a document in language A, the document in language B that shares the largest number of words is considered as parallel.
  - Works very well for paralle...
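
The method as bulleted above reduces to a few lines of code. A minimal sketch (my own illustration, not the LINA implementation; all function and variable names are invented):

```python
from collections import Counter

def hapax_words(text):
    """Bag of hapax words: blank-separated strings of 4+ characters
    that occur exactly once in the document."""
    counts = Counter(w for w in text.split() if len(w) >= 4)
    return {w for w, c in counts.items() if c == 1}

def best_match(source_text, target_texts):
    """Index of the target document sharing the most hapax words with
    the source, in the style of Enright and Kondrak (2007)."""
    src = hapax_words(source_text)
    scores = [len(src & hapax_words(t)) for t in target_texts]
    return max(range(len(target_texts)), key=scores.__getitem__)
```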

gem_id: GEM-SciDuet-train-2#paper-957#slide-2
slide_title: Improvements using 1-to-1 alignments
slide_content_text:
  - In the baseline, document pairs are scored independently.
  - Multiple source documents are paired to the same target document.
  - 60% of English pages are paired with multiple pages in French or German.
  - We remove multiply assigned source documents using pigeonhole reasoning (see the sketch after this entry).
  - From 60% to 11% of multiply assigned source docume...
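
One way to realize that pigeonhole filtering (a sketch of the idea as I read it, not necessarily the paper's exact procedure; names are invented): since a 1-to-1 alignment cannot reuse a target document, each target keeps only its highest-scoring source.

```python
def one_to_one(scored_pairs):
    """scored_pairs: iterable of (source_id, target_id, score) tuples.
    Keep, for every target, only the highest-scoring source paired with
    it; every other source loses that target (pigeonhole reasoning)."""
    best = {}
    for src, tgt, score in scored_pairs:
        if tgt not in best or score > best[tgt][1]:
            best[tgt] = (src, score)
    return {src: tgt for tgt, (src, _) in best.items()}

# Example: two sources competing for target "t1".
print(one_to_one([("s1", "t1", 12), ("s2", "t1", 9), ("s2", "t2", 7)]))
# {'s1': 't1', 's2': 't2'}
```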

gem_id: GEM-SciDuet-train-2#paper-957#slide-3
slide_title: Improvements using cross-lingual information
slide_content_text:
  - The simple document weighting function produces score ties.
  - We break the remaining score ties using a third language.
  - From 11% to less than 4% of multiply assigned source documents.

gem_id: GEM-SciDuet-train-2#paper-957#slide-4
slide_title: Experimental settings
slide_content_text:
  - We focus on the French-English and German-English pairs.
  - The following measures are considered relevant:
  - Mean Average Precision (MAP)

gem_id: GEM-SciDuet-train-2#paper-957#slide-5
slide_title: Results FR-EN
slide_content_text: [residue of a results table; header row: Strategy | MAP | Succ. | P@5 | MAP | Succ. | P@5; table body not extracted]

gem_id: GEM-SciDuet-train-2#paper-957#slide-6
slide_title: Results DE-EN
slide_content_text: [residue of a results table; header row: Strategy | MAP | Succ. | P@5 | MAP | Succ. | P@5; table body not extracted]

gem_id: GEM-SciDuet-train-2#paper-957#slide-7
slide_title: Summary
slide_content_text:
  - Unsupervised, hapax-word-based method.
  - Promising results: about 60% success using pigeonhole reasoning.
  - Using a third language slightly improves the performance.
  - Finding the optimal alignment across all languages.
  - Relaxing the hapax-words constraint.

paper_id: 964
paper_title: Unsupervised Discrete Sentence Representation Learning for Interpretable Neural Dialog Generation
paper_abstract: The encoder-decoder dialog model is one of the most prominent methods used to build dialog systems in complex domains. Yet it is limited because it cannot output interpretable actions as in traditional systems, which hinders humans from understanding its generation process. We present an unsupervised discrete sentence ...
paper_content: { "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
paper_headers: { "paper_header_number": [ "1", "2", "3", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Proposed Methods", "Learning Sentence Representations from Aut...

gem_id: GEM-SciDuet-train-3#paper-964#slide-0
slide_title: Sentence Representation in Conversations
slide_content_text: Traditional system: hand-crafted semantic frame; not scalable to complex domains. Neural dialog models: continuous hidden vectors; directly output system responses in words; hard to interpret & control [Ritter et al 2011, Vinyals et al

gem_id: GEM-SciDuet-train-3#paper-964#slide-1
slide_title: Why discrete sentence representation
slide_content_text: 1. Interpretability & controllability & multimodal distribution. 2. Semi-supervised learning [Kingma et al 2014 NIPS, Zhou et al 2017 ACL]. 3. Reinforcement learning [Wen et al 2017]. X = What time do you want to travel? [architecture-figure residue: Model, Z1 Z2 Z3, Encoder, Decoder]

gem_id: GEM-SciDuet-train-3#paper-964#slide-2
slide_title: Baseline Discrete Variational Autoencoder VAE
slide_content_text: M discrete K-way latent variables z with RNN recognition & generation network. Reparametrization using Gumbel-Softmax [Jang et al., 2016; Maddison et al., 2016]. M discrete K-way latent variables z with GRU encoder & decoder. FAILS to learn meaningful z because of posterior collapse (z is constant regardless of x). MANY p...
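
The Gumbel-Softmax reparametrization named on this slide is easy to sketch. Below is a generic NumPy illustration of the trick from Jang et al. / Maddison et al. (not this paper's code; tau is the softmax temperature):

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Differentiable approximate sample from a categorical distribution:
    perturb the logits with Gumbel(0, 1) noise, then take a softmax with
    temperature tau (as tau -> 0 the output approaches a one-hot sample)."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(1e-12, 1.0, size=np.shape(logits))  # avoid log(0)
    gumbel = -np.log(-np.log(u))
    y = (np.asarray(logits) + gumbel) / tau
    y = y - y.max(axis=-1, keepdims=True)               # numerical stability
    expy = np.exp(y)
    return expy / expy.sum(axis=-1, keepdims=True)

# One 5-way latent variable: a soft, nearly one-hot sample at low tau.
print(gumbel_softmax(np.log([0.1, 0.2, 0.4, 0.2, 0.1]), tau=0.5))
```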

gem_id: GEM-SciDuet-train-3#paper-964#slide-3
slide_title: Anti-Info Nature in Evidence Lower Bound ELBO
slide_content_text: Write the ELBO as an expectation over the whole dataset. Expand the KL term and plug it back in: the objective minimizes I(Z, X) to 0, i.e. posterior collapse with a powerful decoder.
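
The expansion the slide refers to is a standard identity (notation mine; q(z) denotes the aggregate posterior E_x[q(z|x)]):

```latex
\mathbb{E}_{x}\big[\mathrm{KL}\big(q(z \mid x)\,\|\,p(z)\big)\big]
\;=\; I_q(Z, X) \;+\; \mathrm{KL}\big(q(z)\,\|\,p(z)\big)
```

Both right-hand terms are non-negative, and the ELBO subtracts the left-hand side, so maximizing the ELBO pushes I_q(Z, X) toward 0, which is the posterior collapse the slide describes.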

gem_id: GEM-SciDuet-train-3#paper-964#slide-4
slide_title: Discrete Information VAE (DI-VAE)
slide_content_text: A natural solution is to maximize both data log-likelihood & mutual information. Matches prior results for continuous VAEs [Makhzani et al 2015, Kim et al 2017]. Propose Batch Prior Regularization (BPR) to minimize KL[q(z)||p(z)] for discrete latents. Fundamentally different from KL-annealing, since
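
Concretely, BPR estimates the aggregate posterior from a mini-batch of N examples and matches that estimate to the prior (my rendering of the idea sketched on the slide; the paper's estimator may differ in detail):

```latex
q'(z) \;=\; \frac{1}{N} \sum_{n=1}^{N} q(z \mid x_n),
\qquad
\mathcal{L}_{\mathrm{BPR}} \;=\; \mathrm{KL}\big(q'(z)\,\|\,p(z)\big)
```

Penalizing KL(q'(z) || p(z)) rather than the per-sample KL(q(z|x) || p(z)) leaves the mutual-information term I_q(Z, X) unpenalized, which is why it behaves differently from KL-annealing.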

gem_id: GEM-SciDuet-train-3#paper-964#slide-5
slide_title: Learning from Context Predicting DI-VST
slide_content_text: Skip-Thought (ST) is a well-known distributional sentence representation [Hill et al 2016]. The meaning of sentences in dialogs is highly contextual, e.g. dialog acts. We extend DI-VAE to Discrete Information Variational Skip Thought (DI-VST).

gem_id: GEM-SciDuet-train-3#paper-964#slide-6
slide_title: Integration with Encoder Decoders
slide_content_text: [residue of an architecture figure: a policy network predicts P(z|c) from the context, a recognition network infers z, and a generator decodes the response] Optional: penalize the decoder if the generated x does not exhibit z [Hu et al 2017].

gem_id: GEM-SciDuet-train-3#paper-964#slide-7
slide_title: Evaluation Datasets
slide_content_text:
  - Past evaluation dataset for text VAE [Bowman et al 2015].
  - Stanford Multi-domain Dialog Dataset (SMD) [Eric and Manning 2017]: 3,031 Human-Woz dialogs from 3 domains: weather, navigation & scheduling.
  - Switchboard (SW) [Jurafsky et al 1997]: 2,400 human-human, non-task-oriented telephone dialogues about a giv...

gem_id: GEM-SciDuet-train-3#paper-964#slide-8
slide_title: The Effectiveness of Batch Prior Regularization BPR
slide_content_text:
  - DAE: Autoencoder + Gumbel-Softmax
  - DVAE: Discrete VAE with ELBO loss
  - DI-VAE: Discrete VAE + BPR
  - DST: Skip-Thought + Gumbel-Softmax
  - DI-VST: Variational Skip-Thought + BPR
  Table 1: Results for various discrete sentence representations.

gem_id: GEM-SciDuet-train-3#paper-964#slide-9
slide_title: How large should the batch size be
slide_content_text: When batch size N = 0 [clause truncated]. A large batch size leads to a more meaningful latent action z. I(x, z) is not the final goal.

gem_id: GEM-SciDuet-train-3#paper-964#slide-11
slide_title: Differences between DI-VAE & DI-VST
slide_content_text: [residue of a two-column comparison] DI-VAE clusters utterances based on the [truncated]; more error-prone since harder to predict. DI-VST: utterances used in similar contexts; easier to get agreement.

gem_id: GEM-SciDuet-train-3#paper-964#slide-12
slide_title: Interpreting Latent Actions
slide_content_text: M = 3, K = 5. The trained R will map any utterance into a1-a2-a3, e.g. "How are you?". Automatic evaluation on SW & DD: compare latent actions with [truncated]; the higher, the more correlated. Human evaluation on SMD: an expert looks at 5 examples and gives a name to the latent actions; 5 workers look at the expert name and select the ones that...

gem_id: GEM-SciDuet-train-3#paper-964#slide-13
slide_title: Predict Latent Action by the Policy Network
slide_content_text: Provides a useful measure of the complexity of the domain: Usr > Sys and Chat > Task. Predicting latent actions from DI-VAE is harder than predicting the ones from DI-VST. The two types of latent actions have their own pros & cons; which one is better is

gem_id: GEM-SciDuet-train-3#paper-964#slide-14
slide_title: Interpretable Response Generation
slide_content_text: Examples of interpretable dialog. First time, a neural dialog system

gem_id: GEM-SciDuet-train-3#paper-964#slide-15
slide_title: Conclusions and Future Work
slide_content_text: An analysis of the ELBO that explains the posterior-collapse issue for sentence VAEs. DI-VAE and DI-VST for learning rich latent sentence representations and their integration. Learn better context-based latent actions. Encode human knowledge into the learning process. Learn structured latent action spaces for complex domains. Evalua...

gem_id: GEM-SciDuet-train-3#paper-964#slide-16
slide_title: Semantic Consistency of the Generation
slide_content_text: Use the recognition network as a classifier to predict the latent action z' based on the [truncated]. Report accuracy by comparing z and z'. DI-VAE has higher consistency than DI-VST. L_attr helps more in complex domains; L_attr helps DI-VST more than DI-VAE; DI-VST is not directly helping to generate x. ST-ED doesn't work well on SW due to...

gem_id: GEM-SciDuet-train-3#paper-964#slide-17
slide_title: What defines Interpretable Latent Actions
slide_content_text: Definition: a latent action is a set of discrete variables that defines the high-level attributes of an utterance (sentence) X; the latent action is denoted Z. Z should capture salient sentence-level features of the response X. The meaning of the latent symbols Z should be independent of the context C. If the meaning of Z depend...
GEM-SciDuet-train-4#paper-965#slide-0
965
Data Augmentation for Context-Sensitive Neural Lemmatization Using Inflection Tables and Raw Text
Lemmatization aims to reduce the sparse data problem by relating the inflected forms of a word to its dictionary form. Using context can help, both for unseen and ambiguous words. Yet most context-sensitive approaches require full lemma-annotated sentences for training, which may be scarce or unavailable in lowresource...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "3", "5" ], "paper_header_content": [ "Introduction", "Data Augmentation", "Experimental Setup", "Conclusion" ] }
GEM-SciDuet-train-4#paper-965#slide-0
Lemmatization
INST: ar ceļu / ar ceļiem. Latvian: ceļš (English: road).
INST: ar ceļu / ar ceļiem. Latvian: ceļš (English: road).
[]
GEM-SciDuet-train-4#paper-965#slide-1
965
Data Augmentation for Context-Sensitive Neural Lemmatization Using Inflection Tables and Raw Text
Lemmatization aims to reduce the sparse data problem by relating the inflected forms of a word to its dictionary form. Using context can help, both for unseen and ambiguous words. Yet most context-sensitive approaches require full lemma-annotated sentences for training, which may be scarce or unavailable in lowresource...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "3", "5" ], "paper_header_content": [ "Introduction", "Data Augmentation", "Experimental Setup", "Conclusion" ] }
GEM-SciDuet-train-4#paper-965#slide-1
Previous work
Sentence context helps to lemmatize ambiguous and unseen words (Bergmanis and Goldwater, 2018).
Sentence context helps to lemmatize ambiguous and unseen words (Bergmanis and Goldwater, 2018).
[]
GEM-SciDuet-train-4#paper-965#slide-2
965
Data Augmentation for Context-Sensitive Neural Lemmatization Using Inflection Tables and Raw Text
Lemmatization aims to reduce the sparse data problem by relating the inflected forms of a word to its dictionary form. Using context can help, both for unseen and ambiguous words. Yet most context-sensitive approaches require full lemma-annotated sentences for training, which may be scarce or unavailable in lowresource...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "3", "5" ], "paper_header_content": [ "Introduction", "Data Augmentation", "Experimental Setup", "Conclusion" ] }
GEM-SciDuet-train-4#paper-965#slide-2
Ambiguous words
A: ceļš (road), NOUN, sing., ACC. B: celis (knee), NOUN, plur., DAT.
A: ceļš (road), NOUN, sing., ACC. B: celis (knee), NOUN, plur., DAT.
[]
GEM-SciDuet-train-4#paper-965#slide-3
965
Data Augmentation for Context-Sensitive Neural Lemmatization Using Inflection Tables and Raw Text
Lemmatization aims to reduce the sparse data problem by relating the inflected forms of a word to its dictionary form. Using context can help, both for unseen and ambiguous words. Yet most context-sensitive approaches require full lemma-annotated sentences for training, which may be scarce or unavailable in lowresource...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "3", "5" ], "paper_header_content": [ "Introduction", "Data Augmentation", "Experimental Setup", "Conclusion" ] }
GEM-SciDuet-train-4#paper-965#slide-3
Learning from sentences
Lemma-annotated sentences are scarce for low-resource languages: annotating sentences is slow; N types > N (contiguous) tokens.
Lemma-annotated sentences are scarce for low-resource languages: annotating sentences is slow; N types > N (contiguous) tokens.
[]
GEM-SciDuet-train-4#paper-965#slide-4
965
Data Augmentation for Context-Sensitive Neural Lemmatization Using Inflection Tables and Raw Text
Lemmatization aims to reduce the sparse data problem by relating the inflected forms of a word to its dictionary form. Using context can help, both for unseen and ambiguous words. Yet most context-sensitive approaches require full lemma-annotated sentences for training, which may be scarce or unavailable in lowresource...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "3", "5" ], "paper_header_content": [ "Introduction", "Data Augmentation", "Experimental Setup", "Conclusion" ] }
GEM-SciDuet-train-4#paper-965#slide-4
N types vs. N tokens
Training on 1k UDT tokens/types
Training on 1k UDT tokens/types
[]
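The contrast behind "N types > N tokens" can be seen in a few lines: an annotation budget of n items covers more distinct word types when spent on types directly than on a contiguous stretch of running text (toy corpus below):

```python
text = ("the cat sat on the mat and the dog sat on the rug "
        "while the cat and the dog slept").split()

n = 10
contiguous = text[:n]                   # first n running tokens
types = list(dict.fromkeys(text))[:n]   # first n distinct types

print(len(set(contiguous)), "distinct types in", n, "contiguous tokens")  # 7
print(len(types), "distinct types when sampling types directly")         # 10
```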
GEM-SciDuet-train-4#paper-965#slide-5
965
Data Augmentation for Context-Sensitive Neural Lemmatization Using Inflection Tables and Raw Text
Lemmatization aims to reduce the sparse data problem by relating the inflected forms of a word to its dictionary form. Using context can help, both for unseen and ambiguous words. Yet most context-sensitive approaches require full lemma-annotated sentences for training, which may be scarce or unavailable in lowresource...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "3", "5" ], "paper_header_content": [ "Introduction", "Data Augmentation", "Experimental Setup", "Conclusion" ] }
GEM-SciDuet-train-4#paper-965#slide-5
Types in context
algorithms get smarter, computers faster (Bergmanis and Goldwater, 2018)
algorithms get smarter, computers faster (Bergmanis and Goldwater, 2018)
[]
GEM-SciDuet-train-4#paper-965#slide-6
965
Data Augmentation for Context-Sensitive Neural Lemmatization Using Inflection Tables and Raw Text
Lemmatization aims to reduce the sparse data problem by relating the inflected forms of a word to its dictionary form. Using context can help, both for unseen and ambiguous words. Yet most context-sensitive approaches require full lemma-annotated sentences for training, which may be scarce or unavailable in lowresource...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "3", "5" ], "paper_header_content": [ "Introduction", "Data Augmentation", "Experimental Setup", "Conclusion" ] }
GEM-SciDuet-train-4#paper-965#slide-6
Proposal: Data Augmentation
...to get types in context
...to get types in context
[]
GEM-SciDuet-train-4#paper-965#slide-7
965
Data Augmentation for Context-Sensitive Neural Lemmatization Using Inflection Tables and Raw Text
Lemmatization aims to reduce the sparse data problem by relating the inflected forms of a word to its dictionary form. Using context can help, both for unseen and ambiguous words. Yet most context-sensitive approaches require full lemma-annotated sentences for training, which may be scarce or unavailable in lowresource...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "3", "5" ], "paper_header_content": [ "Introduction", "Data Augmentation", "Experimental Setup", "Conclusion" ] }
GEM-SciDuet-train-4#paper-965#slide-7
Method: Data Augmentation
Inflection: ceļš ceļā N;LOC;SG. "Dzīves pēdējā ceļā pavadot mūsu ceļš". Context: ceļš ceļā N;LOC;SG. Lemma: ceļš ceļā N;LOC;SG.
Inflection: ceļš ceļā N;LOC;SG. "Dzīves pēdējā ceļā pavadot mūsu ceļš". Context: ceļš ceļā N;LOC;SG. Lemma: ceļš ceļā N;LOC;SG.
[]
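A sketch of the augmentation step above, assuming the inflection table is a plain dict and the raw text is pre-tokenised; the paper's actual pipeline and data formats differ:

```python
def augment(inflection_table, raw_sentences, window=2):
    """Pair inflected forms from an inflection table with raw-text contexts.

    inflection_table: {form: (lemma, tags)}  -- assumed toy format
    raw_sentences: iterable of token lists (e.g. Wikipedia sentences)
    Yields (left_context, form, right_context, lemma, tags) examples.
    """
    for sent in raw_sentences:
        for i, tok in enumerate(sent):
            if tok in inflection_table:
                lemma, tags = inflection_table[tok]
                left = sent[max(0, i - window):i]
                right = sent[i + 1:i + 1 + window]
                yield left, tok, right, lemma, tags

table = {"ceļā": ("ceļš", "N;LOC;SG")}           # hypothetical entry
sents = [["pēdējā", "ceļā", "pavadot", "mūsu"]]  # raw-text fragment
print(list(augment(table, sents)))
```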
GEM-SciDuet-train-4#paper-965#slide-8
965
Data Augmentation for Context-Sensitive Neural Lemmatization Using Inflection Tables and Raw Text
Lemmatization aims to reduce the sparse data problem by relating the inflected forms of a word to its dictionary form. Using context can help, both for unseen and ambiguous words. Yet most context-sensitive approaches require full lemma-annotated sentences for training, which may be scarce or unavailable in lowresource...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "3", "5" ], "paper_header_content": [ "Introduction", "Data Augmentation", "Experimental Setup", "Conclusion" ] }
GEM-SciDuet-train-4#paper-965#slide-8
Inflection Tables
INST: ar ceļu / ar ceļiem. Latvian: ceļš (English: road). ACC: ceļu, ceļiem, ceļus. Related words: celt (build), ceļot (travel), celis (knee).
INST: ar ceļu / ar ceļiem. Latvian: ceļš (English: road). ACC: ceļu, ceļiem, ceļus. Related words: celt (build), ceļot (travel), celis (knee).
[]
GEM-SciDuet-train-4#paper-965#slide-9
965
Data Augmentation for Context-Sensitive Neural Lemmatization Using Inflection Tables and Raw Text
Lemmatization aims to reduce the sparse data problem by relating the inflected forms of a word to its dictionary form. Using context can help, both for unseen and ambiguous words. Yet most context-sensitive approaches require full lemma-annotated sentences for training, which may be scarce or unavailable in lowresource...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "3", "5" ], "paper_header_content": [ "Introduction", "Data Augmentation", "Experimental Setup", "Conclusion" ] }
GEM-SciDuet-train-4#paper-965#slide-9
Key question
If ambiguous words enforce the use of context: Is context still useful in the absence of ambiguous forms?
If ambiguous words enforce the use of context: Is context still useful in the absence of ambiguous forms?
[]
GEM-SciDuet-train-4#paper-965#slide-10
965
Data Augmentation for Context-Sensitive Neural Lemmatization Using Inflection Tables and Raw Text
Lemmatization aims to reduce the sparse data problem by relating the inflected forms of a word to its dictionary form. Using context can help, both for unseen and ambiguous words. Yet most context-sensitive approaches require full lemma-annotated sentences for training, which may be scarce or unavailable in lowresource...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "3", "5" ], "paper_header_content": [ "Introduction", "Data Augmentation", "Experimental Setup", "Conclusion" ] }
GEM-SciDuet-train-4#paper-965#slide-10
Experiments
Train: 1k types from the Universal Dependencies corpus / UniMorph, in Wikipedia contexts. Languages: Estonian, Finnish, Latvian, Polish, Romanian, Russian, Swedish, Turkish. Metric: type-level macro-averaged accuracy. Test: on standard splits of the Universal Dependencies corpus.
Train: 1k types from the Universal Dependencies corpus / UniMorph, in Wikipedia contexts. Languages: Estonian, Finnish, Latvian, Polish, Romanian, Russian, Swedish, Turkish. Metric: type-level macro-averaged accuracy. Test: on standard splits of the Universal Dependencies corpus.
[]
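The metric named above, type-level macro-averaged accuracy, weights every word type equally regardless of its token frequency; a small sketch:

```python
from collections import defaultdict

def type_macro_accuracy(tokens, gold, pred):
    """Macro-average accuracy over word types: each type contributes
    equally, regardless of how many tokens it has."""
    per_type = defaultdict(lambda: [0, 0])  # type -> [correct, total]
    for t, g, p in zip(tokens, gold, pred):
        per_type[t][0] += int(g == p)
        per_type[t][1] += 1
    return sum(c / n for c, n in per_type.values()) / len(per_type)

print(type_macro_accuracy(
    ["cela", "cela", "celiem"],
    ["cels", "celis", "cels"],
    ["cels", "cels", "cels"]))  # (0.5 + 1.0) / 2 = 0.75
```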
GEM-SciDuet-train-4#paper-965#slide-12
965
Data Augmentation for Context-Sensitive Neural Lemmatization Using Inflection Tables and Raw Text
Lemmatization aims to reduce the sparse data problem by relating the inflected forms of a word to its dictionary form. Using context can help, both for unseen and ambiguous words. Yet most context-sensitive approaches require full lemma-annotated sentences for training, which may be scarce or unavailable in lowresource...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "3", "5" ], "paper_header_content": [ "Introduction", "Data Augmentation", "Experimental Setup", "Conclusion" ] }
GEM-SciDuet-train-4#paper-965#slide-12
Does the model learn from context?
context vs no context
context vs no context
[]
GEM-SciDuet-train-4#paper-965#slide-13
965
Data Augmentation for Context-Sensitive Neural Lemmatization Using Inflection Tables and Raw Text
Lemmatization aims to reduce the sparse data problem by relating the inflected forms of a word to its dictionary form. Using context can help, both for unseen and ambiguous words. Yet most context-sensitive approaches require full lemma-annotated sentences for training, which may be scarce or unavailable in lowresource...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "3", "5" ], "paper_header_content": [ "Introduction", "Data Augmentation", "Experimental Setup", "Conclusion" ] }
GEM-SciDuet-train-4#paper-965#slide-13
Affix ambiguity: wuger
The lemma depends on context: (A) if wuger is an adjective, the lemma could be wug; (B) if wuger is a noun, the lemma could be wuger.
The lemma depends on context: (A) if wuger is an adjective, the lemma could be wug; (B) if wuger is a noun, the lemma could be wuger.
[]
GEM-SciDuet-train-4#paper-965#slide-14
965
Data Augmentation for Context-Sensitive Neural Lemmatization Using Inflection Tables and Raw Text
Lemmatization aims to reduce the sparse data problem by relating the inflected forms of a word to its dictionary form. Using context can help, both for unseen and ambiguous words. Yet most context-sensitive approaches require full lemma-annotated sentences for training, which may be scarce or unavailable in lowresource...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "3", "5" ], "paper_header_content": [ "Introduction", "Data Augmentation", "Experimental Setup", "Conclusion" ] }
GEM-SciDuet-train-4#paper-965#slide-14
Takeaways conclusions
Despite biased data and divergent lemmatization standards, type-based data augmentation helps, even without the ambiguous types that enforce the use of context. Models use context to disambiguate affixes of unseen words.
Despite biased data and divergent lemmatization standards, type-based data augmentation helps, even without the ambiguous types that enforce the use of context. Models use context to disambiguate affixes of unseen words.
[]
GEM-SciDuet-train-5#paper-966#slide-0
966
TDNN: A Two-stage Deep Neural Network for Prompt-independent Automated Essay Scoring
Existing automated essay scoring (AES) models rely on rated essays for the target prompt as training data. Despite their successes in prompt-dependent AES, how to effectively predict essay ratings under a prompt-independent setting remains a challenge, where the rated essays for the target prompt are not available. To ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Two-stage Deep Neural Network for AES", "Overview", "Building Blocks", "Objective and Training", "Results and Analyzes", "Related Work",...
GEM-SciDuet-train-5#paper-966#slide-0
What is Automated Essay Scoring (AES)?
A computer produces a summative assessment for evaluation. Aim: reduce human workload. AES has been in practical use by ETS since 1999.
A computer produces a summative assessment for evaluation. Aim: reduce human workload. AES has been in practical use by ETS since 1999.
[]
GEM-SciDuet-train-5#paper-966#slide-1
966
TDNN: A Two-stage Deep Neural Network for Prompt-independent Automated Essay Scoring
Existing automated essay scoring (AES) models rely on rated essays for the target prompt as training data. Despite their successes in prompt-dependent AES, how to effectively predict essay ratings under a prompt-independent setting remains a challenge, where the rated essays for the target prompt are not available. To ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Two-stage Deep Neural Network for AES", "Overview", "Building Blocks", "Objective and Training", "Results and Analyzes", "Related Work",...
GEM-SciDuet-train-5#paper-966#slide-1
Prompt-specific and Prompt-independent AES
Most existing AES approaches are prompt-specific: they require human labels for each prompt to train, and can achieve satisfying human-machine agreement. Prompt-independent AES remains a challenge: only non-target human labels are available.
Most existing AES approaches are prompt-specific: they require human labels for each prompt to train, and can achieve satisfying human-machine agreement. Prompt-independent AES remains a challenge: only non-target human labels are available.
[]
GEM-SciDuet-train-5#paper-966#slide-2
966
TDNN: A Two-stage Deep Neural Network for Prompt-independent Automated Essay Scoring
Existing automated essay scoring (AES) models rely on rated essays for the target prompt as training data. Despite their successes in prompt-dependent AES, how to effectively predict essay ratings under a prompt-independent setting remains a challenge, where the rated essays for the target prompt are not available. To ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Two-stage Deep Neural Network for AES", "Overview", "Building Blocks", "Objective and Training", "Results and Analyzes", "Related Work",...
GEM-SciDuet-train-5#paper-966#slide-2
Challenges in Prompt-independent AES
Source prompts -> target prompt: learn on source essays, predict target essays. Previous approaches learn on source prompts: domain adaptation [Phandi et al., EMNLP 2015], cross-domain learning [Dong & Zhang, EMNLP]. Achieved avg. QWK = 0.6395 at best, with up to 100 labeled target essays. Off-topic: essays written for source prompts are mostly irrel...
Source prompts -> target prompt: learn on source essays, predict target essays. Previous approaches learn on source prompts: domain adaptation [Phandi et al., EMNLP 2015], cross-domain learning [Dong & Zhang, EMNLP]. Achieved avg. QWK = 0.6395 at best, with up to 100 labeled target essays. Off-topic: essays written for source prompts are mostly irrel...
[]
GEM-SciDuet-train-5#paper-966#slide-3
966
TDNN: A Two-stage Deep Neural Network for Prompt-independent Automated Essay Scoring
Existing automated essay scoring (AES) models rely on rated essays for the target prompt as training data. Despite their successes in prompt-dependent AES, how to effectively predict essay ratings under a prompt-independent setting remains a challenge, where the rated essays for the target prompt are not available. To ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Two-stage Deep Neural Network for AES", "Overview", "Building Blocks", "Objective and Training", "Results and Analyzes", "Related Work",...
GEM-SciDuet-train-5#paper-966#slide-3
TDNN: A Two-stage Deep Neural Network for Prompt-independent AES
Based on the idea of transductive transfer learning: learn on target essays, and utilize the content of target essays to rate them.
Based on the idea of transductive transfer learning: learn on target essays, and utilize the content of target essays to rate them.
[]
GEM-SciDuet-train-5#paper-966#slide-4
966
TDNN: A Two-stage Deep Neural Network for Prompt-independent Automated Essay Scoring
Existing automated essay scoring (AES) models rely on rated essays for the target prompt as training data. Despite their successes in prompt-dependent AES, how to effectively predict essay ratings under a prompt-independent setting remains a challenge, where the rated essays for the target prompt are not available. To ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Two-stage Deep Neural Network for AES", "Overview", "Building Blocks", "Objective and Training", "Results and Analyzes", "Related Work",...
GEM-SciDuet-train-5#paper-966#slide-4
The Two-stage Architecture
Prompt-independent stage: train a shallow model to create pseudo labels on the target prompt. Prompt-dependent stage: learn an end-to-end model to predict essay ratings for the target prompt.
Prompt-independent stage: train a shallow model to create pseudo labels on the target prompt. Prompt-dependent stage: learn an end-to-end model to predict essay ratings for the target prompt.
[]
GEM-SciDuet-train-5#paper-966#slide-5
966
TDNN: A Two-stage Deep Neural Network for Prompt-independent Automated Essay Scoring
Existing automated essay scoring (AES) models rely on rated essays for the target prompt as training data. Despite their successes in prompt-dependent AES, how to effectively predict essay ratings under a prompt-independent setting remains a challenge, where the rated essays for the target prompt are not available. To ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Two-stage Deep Neural Network for AES", "Overview", "Building Blocks", "Objective and Training", "Results and Analyzes", "Related Work",...
GEM-SciDuet-train-5#paper-966#slide-5
Prompt-independent stage
Train a robust prompt-independent AES model. Learning algorithm: RankSVM for AES. Select confident essays written for the target prompt: essays with predicted ratings in the low range as negative examples, essays with predicted ratings in the high range as positive examples, converted to 0/1 labels. Common sense: >=8 is good, <5 is bad.
Train a robust prompt-independent AES model. Learning algorithm: RankSVM for AES. Select confident essays written for the target prompt: essays with predicted ratings in the low range as negative examples, essays with predicted ratings in the high range as positive examples, converted to 0/1 labels. Common sense: >=8 is good, <5 is bad.
[]
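A sketch of the selection step, assuming RankSVM scores on roughly the 0-10 essay-rating scale and the slide's rule of thumb as thresholds (the paper's exact cut-offs may differ):

```python
def select_pseudo_labels(essays, scores, pos_thr=8.0, neg_thr=5.0):
    """Keep only confident target-prompt essays and give them 0/1 labels.

    Thresholds follow the slide's rule of thumb (>=8 good, <5 bad);
    the exact values used in the paper may differ.
    """
    data = []
    for essay, s in zip(essays, scores):
        if s >= pos_thr:
            data.append((essay, 1))
        elif s < neg_thr:
            data.append((essay, 0))
        # mid-range essays are discarded as unreliable
    return data

print(select_pseudo_labels(["e1", "e2", "e3"], [8.5, 6.0, 3.2]))
# -> [('e1', 1), ('e3', 0)]
```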
GEM-SciDuet-train-5#paper-966#slide-6
966
TDNN: A Two-stage Deep Neural Network for Prompt-independent Automated Essay Scoring
Existing automated essay scoring (AES) models rely on rated essays for the target prompt as training data. Despite their successes in prompt-dependent AES, how to effectively predict essay ratings under a prompt-independent setting remains a challenge, where the rated essays for the target prompt are not available. To ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Two-stage Deep Neural Network for AES", "Overview", "Building Blocks", "Objective and Training", "Results and Analyzes", "Related Work",...
GEM-SciDuet-train-5#paper-966#slide-6
Prompt-dependent stage
Train a hybrid deep model for the prompt-dependent stage: an end-to-end neural network with three parts.
Train a hybrid deep model for the prompt-dependent stage: an end-to-end neural network with three parts.
[]
GEM-SciDuet-train-5#paper-966#slide-7
966
TDNN: A Two-stage Deep Neural Network for Prompt-independent Automated Essay Scoring
Existing automated essay scoring (AES) models rely on rated essays for the target prompt as training data. Despite their successes in prompt-dependent AES, how to effectively predict essay ratings under a prompt-independent setting remains a challenge, where the rated essays for the target prompt are not available. To ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Two-stage Deep Neural Network for AES", "Overview", "Building Blocks", "Objective and Training", "Results and Analyzes", "Related Work",...
GEM-SciDuet-train-5#paper-966#slide-7
Architecture of the hybrid deep model
Multi-layer structure: words (phrases) -> sentences -> essay
Multi-layer structure: words (phrases) -> sentences -> essay
[]
GEM-SciDuet-train-5#paper-966#slide-8
966
TDNN: A Two-stage Deep Neural Network for Prompt-independent Automated Essay Scoring
Existing automated essay scoring (AES) models rely on rated essays for the target prompt as training data. Despite their successes in prompt-dependent AES, how to effectively predict essay ratings under a prompt-independent setting remains a challenge, where the rated essays for the target prompt are not available. To ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Two-stage Deep Neural Network for AES", "Overview", "Building Blocks", "Objective and Training", "Results and Analyzes", "Related Work",...
GEM-SciDuet-train-5#paper-966#slide-8
Model Training
Training loss: MSE on 0/1 pseudo labels. Validation metric: Kappa on 30% of non-target essays. Select the model that can best rate...
Training loss: MSE on 0/1 pseudo labels. Validation metric: Kappa on 30% of non-target essays. Select the model that can best rate...
[]
GEM-SciDuet-train-5#paper-966#slide-9
966
TDNN: A Two-stage Deep Neural Network for Prompt-independent Automated Essay Scoring
Existing automated essay scoring (AES) models rely on rated essays for the target prompt as training data. Despite their successes in prompt-dependent AES, how to effectively predict essay ratings under a prompt-independent setting remains a challenge, where the rated essays for the target prompt are not available. To ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Two-stage Deep Neural Network for AES", "Overview", "Building Blocks", "Objective and Training", "Results and Analyzes", "Related Work",...
GEM-SciDuet-train-5#paper-966#slide-9
Dataset and Metrics
We use the standard ASAP corpus: 8 prompts with >10K essays in total. Prompt-independent AES: 7 prompts are used for training, 1 for testing. Report on common human-machine agreement metrics: Pearson's correlation coefficient (PCC), Spearman's correlation coefficient (SCC), quadratic weighted Kappa (QWK).
We use the standard ASAP corpus: 8 prompts with >10K essays in total. Prompt-independent AES: 7 prompts are used for training, 1 for testing. Report on common human-machine agreement metrics: Pearson's correlation coefficient (PCC), Spearman's correlation coefficient (SCC), quadratic weighted Kappa (QWK).
[]
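The three agreement metrics listed above can be computed with standard libraries; the toy human and machine ratings below stand in for real data:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.metrics import cohen_kappa_score

human = np.array([8, 6, 9, 4, 7, 5])
machine = np.array([7, 6, 9, 5, 6, 5])

pcc, _ = pearsonr(human, machine)
scc, _ = spearmanr(human, machine)
qwk = cohen_kappa_score(human, machine, weights="quadratic")
print(f"PCC={pcc:.3f} SCC={scc:.3f} QWK={qwk:.3f}")
```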
GEM-SciDuet-train-5#paper-966#slide-10
966
TDNN: A Two-stage Deep Neural Network for Prompt-independent Automated Essay Scoring
Existing automated essay scoring (AES) models rely on rated essays for the target prompt as training data. Despite their successes in prompt-dependent AES, how to effectively predict essay ratings under a prompt-independent setting remains a challenge, where the rated essays for the target prompt are not available. To ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Two-stage Deep Neural Network for AES", "Overview", "Building Blocks", "Objective and Training", "Results and Analyzes", "Related Work",...
GEM-SciDuet-train-5#paper-966#slide-10
Baselines
RankSVM based on prompt-independent handcrafted features (also used in the prompt-independent stage of TDNN). Two LSTM layers + linear layer. CNN + LSTM + linear layer.
RankSVM based on prompt-independent handcrafted features (also used in the prompt-independent stage of TDNN). Two LSTM layers + linear layer. CNN + LSTM + linear layer.
[]
GEM-SciDuet-train-5#paper-966#slide-11
966
TDNN: A Two-stage Deep Neural Network for Prompt-independent Automated Essay Scoring
Existing automated essay scoring (AES) models rely on rated essays for the target prompt as training data. Despite their successes in prompt-dependent AES, how to effectively predict essay ratings under a prompt-independent setting remains a challenge, where the rated essays for the target prompt are not available. To ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Two-stage Deep Neural Network for AES", "Overview", "Building Blocks", "Objective and Training", "Results and Analyzes", "Related Work",...
GEM-SciDuet-train-5#paper-966#slide-11
RankSVM is the most robust baseline
High variance of the DNN models' performance on all 8 prompts, possibly caused by learning on non-target prompts. RankSVM appears to be the most stable baseline, which justifies its use in the first stage of TDNN.
High variance of the DNN models' performance on all 8 prompts, possibly caused by learning on non-target prompts. RankSVM appears to be the most stable baseline, which justifies its use in the first stage of TDNN.
[]
GEM-SciDuet-train-5#paper-966#slide-12
966
TDNN: A Two-stage Deep Neural Network for Prompt-independent Automated Essay Scoring
Existing automated essay scoring (AES) models rely on rated essays for the target prompt as training data. Despite their successes in prompt-dependent AES, how to effectively predict essay ratings under a prompt-independent setting remains a challenge, where the rated essays for the target prompt are not available. To ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Two-stage Deep Neural Network for AES", "Overview", "Building Blocks", "Objective and Training", "Results and Analyzes", "Related Work",...
GEM-SciDuet-train-5#paper-966#slide-12
Comparison to the best baseline
TDNN outperforms the best baseline on 7 out of 8 prompts. Performance improvements gained by learning on the target prompt.
TDNN outperforms the best baseline on 7 out of 8 prompts. Performance improvements gained by learning on the target prompt.
[]
GEM-SciDuet-train-5#paper-966#slide-13
966
TDNN: A Two-stage Deep Neural Network for Prompt-independent Automated Essay Scoring
Existing automated essay scoring (AES) models rely on rated essays for the target prompt as training data. Despite their successes in prompt-dependent AES, how to effectively predict essay ratings under a prompt-independent setting remains a challenge, where the rated essays for the target prompt are not available. To ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Two-stage Deep Neural Network for AES", "Overview", "Building Blocks", "Objective and Training", "Results and Analyzes", "Related Work",...
GEM-SciDuet-train-5#paper-966#slide-13
Average performance on 8 prompts
Method QWK PCC SCC
Method QWK PCC SCC
[]
GEM-SciDuet-train-5#paper-966#slide-14
966
TDNN: A Two-stage Deep Neural Network for Prompt-independent Automated Essay Scoring
Existing automated essay scoring (AES) models rely on rated essays for the target prompt as training data. Despite their successes in prompt-dependent AES, how to effectively predict essay ratings under a prompt-independent setting remains a challenge, where the rated essays for the target prompt are not available. To ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Two-stage Deep Neural Network for AES", "Overview", "Building Blocks", "Objective and Training", "Results and Analyzes", "Related Work",...
GEM-SciDuet-train-5#paper-966#slide-14
Sanity Check: Relative Precision
How the quality of pseudo examples affects the performance of TDNN. The sanity of the selected essays, namely, the number of positive (negative) essays that are better (worse) than all negative (positive) ones. Such relative precision is at least 80% and mostly beyond 90% on different prompts. TDNN can at least learn from correct...
How the quality of pseudo examples affects the performance of TDNN. The sanity of the selected essays, namely, the number of positive (negative) essays that are better (worse) than all negative (positive) ones. Such relative precision is at least 80% and mostly beyond 90% on different prompts. TDNN can at least learn from correct...
[]
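The sanity check above can be expressed directly: a positive pseudo-example counts as correct if its true rating exceeds every negative's, and symmetrically for negatives. A small sketch:

```python
def relative_precision(pos_ratings, neg_ratings):
    """Share of positives rated above all negatives, and of negatives
    rated below all positives (the 'relative precision' sanity check)."""
    max_neg, min_pos = max(neg_ratings), min(pos_ratings)
    pos_ok = sum(r > max_neg for r in pos_ratings) / len(pos_ratings)
    neg_ok = sum(r < min_pos for r in neg_ratings) / len(neg_ratings)
    return pos_ok, neg_ok

print(relative_precision([9, 8, 10, 7], [3, 4, 2]))  # (1.0, 1.0)
```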
GEM-SciDuet-train-5#paper-966#slide-15
966
TDNN: A Two-stage Deep Neural Network for Prompt-independent Automated Essay Scoring
Existing automated essay scoring (AES) models rely on rated essays for the target prompt as training data. Despite their successes in prompt-dependent AES, how to effectively predict essay ratings under a prompt-independent setting remains a challenge, where the rated essays for the target prompt are not available. To ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Two-stage Deep Neural Network for AES", "Overview", "Building Blocks", "Objective and Training", "Results and Analyzes", "Related Work",...
GEM-SciDuet-train-5#paper-966#slide-15
Conclusions
It is beneficial to learn an AES model on the target prompt. Syntactic features are a useful addition to the widely used Word2Vec embeddings. Sanity check: small overlap between pos/neg examples. Prompt-independent AES remains an open problem: TDNN can achieve 0.68 at best.
It is beneficial to learn an AES model on the target prompt. Syntactic features are a useful addition to the widely used Word2Vec embeddings. Sanity check: small overlap between pos/neg examples. Prompt-independent AES remains an open problem: TDNN can achieve 0.68 at best.
[]
GEM-SciDuet-train-6#paper-970#slide-0
970
Lancaster A at SemEval-2017 Task 5: Evaluation metrics matter: predicting sentiment from financial news headlines
This paper describes our participation in Task 5 track 2 of SemEval 2017 to predict the sentiment of financial news headlines for a specific company on a continuous scale between -1 and 1. We tackled the problem using a number of approaches, utilising a Support Vector Regression (SVR) and a Bidirectional Long Short-Ter...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.1.1", "4.1.2", "4.1.3", "4.1.4", "4.1.5", "4.2", "3.", "4.2.1", "4.2.2", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Data", "System description", ...
GEM-SciDuet-train-6#paper-970#slide-0
The task
Example headline: "Why AstraZeneca plc Dixons Carphone PLC Are Red-Hot Growth..." Training data: 1142 samples, 960 headlines/sentences. Testing data: 491 samples, 461 headlines/sentences.
Example headline: "Why AstraZeneca plc Dixons Carphone PLC Are Red-Hot Growth..." Training data: 1142 samples, 960 headlines/sentences. Testing data: 491 samples, 461 headlines/sentences.
[]
GEM-SciDuet-train-6#paper-970#slide-1
970
Lancaster A at SemEval-2017 Task 5: Evaluation metrics matter: predicting sentiment from financial news headlines
This paper describes our participation in Task 5 track 2 of SemEval 2017 to predict the sentiment of financial news headlines for a specific company on a continuous scale between -1 and 1. We tackled the problem using a number of approaches, utilising a Support Vector Regression (SVR) and a Bidirectional Long Short-Ter...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.1.1", "4.1.2", "4.1.3", "4.1.4", "4.1.5", "4.2", "3.", "4.2.1", "4.2.2", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Data", "System description", ...
GEM-SciDuet-train-6#paper-970#slide-1
Models
1. Support Vector Regression (SVR) [1]. 2. Bi-directional Long Short-Term Memory (BLSTM) [2][3].
1. Support Vector Regression (SVR) [1]. 2. Bi-directional Long Short-Term Memory (BLSTM) [2][3].
[]
GEM-SciDuet-train-6#paper-970#slide-2
970
Lancaster A at SemEval-2017 Task 5: Evaluation metrics matter: predicting sentiment from financial news headlines
This paper describes our participation in Task 5 track 2 of SemEval 2017 to predict the sentiment of financial news headlines for a specific company on a continuous scale between -1 and 1. We tackled the problem using a number of approaches, utilising a Support Vector Regression (SVR) and a Bidirectional Long Short-Ter...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.1.1", "4.1.2", "4.1.3", "4.1.4", "4.1.5", "4.2", "3.", "4.2.1", "4.2.2", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Data", "System description", ...
GEM-SciDuet-train-6#paper-970#slide-2
Pre-processing and Additional Data Used
Used 189,206 financial articles (e.g. Financial Times) that were manually downloaded from Factiva to create a Word2Vec model [5]. These were created using Gensim.
Used 189,206 financial articles (e.g. Financial Times) that were manually downloaded from Factiva to create a Word2Vec model [5]. These were created using Gensim.
[]
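A minimal Gensim sketch of this embedding-training step; two toy sentences stand in for the 189,206 Factiva articles, and all hyperparameters are illustrative (gensim >= 4 API assumed):

```python
from gensim.models import Word2Vec

# Stand-in for the financial-article corpus, already tokenised.
sentences = [
    ["shares", "in", "astrazeneca", "rose", "after", "strong", "results"],
    ["dixons", "carphone", "profits", "fell", "on", "weak", "sales"],
]

# gensim >= 4 uses `vector_size`; older versions call this `size`.
model = Word2Vec(sentences, vector_size=50, window=5, min_count=1, epochs=10)
print(model.wv.most_similar("shares", topn=3))
```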
GEM-SciDuet-train-6#paper-970#slide-3
970
Lancaster A at SemEval-2017 Task 5: Evaluation metrics matter: predicting sentiment from financial news headlines
This paper describes our participation in Task 5 track 2 of SemEval 2017 to predict the sentiment of financial news headlines for a specific company on a continuous scale between -1 and 1. We tackled the problem using a number of approaches, utilising a Support Vector Regression (SVR) and a Bidirectional Long Short-Ter...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.1.1", "4.1.2", "4.1.3", "4.1.4", "4.1.5", "4.2", "3.", "4.2.1", "4.2.2", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Data", "System description", ...
GEM-SciDuet-train-6#paper-970#slide-3
Support Vector Regression (SVR)
Features and settings that we changed: 1. Tokenisation: whitespace or Unitok. 2. N-grams: uni-grams, bi-grams, and both. 3. SVR settings: penalty parameter C and epsilon parameter.
Features and settings that we changed: 1. Tokenisation: whitespace or Unitok. 2. N-grams: uni-grams, bi-grams, and both. 3. SVR settings: penalty parameter C and epsilon parameter.
[]
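A hedged sklearn sketch of such an SVR setup; the vectorizer here is a stand-in for the paper's exact features, and the C/epsilon values are placeholders for the tuned settings:

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVR

headlines = ["AstraZeneca had an improved performance",
             "Dixons Carphone profits fell sharply"]
sentiments = [0.6, -0.4]  # continuous scores in [-1, 1]

# uni-grams + bi-grams; C and epsilon are the SVR knobs the slide
# says were tuned (values here are illustrative).
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    SVR(C=0.1, epsilon=0.01),
)
model.fit(headlines, sentiments)
print(model.predict(["AstraZeneca profits fell"]))
```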
GEM-SciDuet-train-6#paper-970#slide-4
970
Lancaster A at SemEval-2017 Task 5: Evaluation metrics matter: predicting sentiment from financial news headlines
This paper describes our participation in Task 5 track 2 of SemEval 2017 to predict the sentiment of financial news headlines for a specific company on a continuous scale between -1 and 1. We tackled the problem using a number of approaches, utilising a Support Vector Regression (SVR) and a Bidirectional Long Short-Ter...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.1.1", "4.1.2", "4.1.3", "4.1.4", "4.1.5", "4.2", "3.", "4.2.1", "4.2.2", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Data", "System description", ...
GEM-SciDuet-train-6#paper-970#slide-4
Word Replacements
Before: "AstraZeneca PLC had an improved performance where as Dixons..." After: "companyname had an posword performance where as companyname..."
Before: "AstraZeneca PLC had an improved performance where as Dixons..." After: "companyname had an posword performance where as companyname..."
[]
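The replacement step above can be sketched with simple string operations; the function name and the word lists are illustrative, not the paper's exact resources:

```python
import re

def mask(headline, target_company, pos_words=("improved",)):
    """Replace the target company with `companyname` and known positive
    sentiment words with `posword`, as in the slide's example."""
    out = headline.replace(target_company, "companyname")
    for w in pos_words:
        out = re.sub(re.escape(w), "posword", out, flags=re.IGNORECASE)
    return out

print(mask("AstraZeneca PLC had an improved performance", "AstraZeneca PLC"))
# -> 'companyname had an posword performance'
```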
GEM-SciDuet-train-6#paper-970#slide-5
970
Lancaster A at SemEval-2017 Task 5: Evaluation metrics matter: predicting sentiment from financial news headlines
This paper describes our participation in Task 5 track 2 of SemEval 2017 to predict the sentiment of financial news headlines for a specific company on a continuous scale between -1 and 1. We tackled the problem using a number of approaches, utilising a Support Vector Regression (SVR) and a Bidirectional Long Short-Ter...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.1.1", "4.1.2", "4.1.3", "4.1.4", "4.1.5", "4.2", "3.", "4.2.1", "4.2.2", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Data", "System description", ...
GEM-SciDuet-train-6#paper-970#slide-5
Two BLSTM models
Dropout between layers. Trained 25 times over... Early stopping used to...
Dropout between layers. Trained 25 times over... Early stopping used to...
[]
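A hedged Keras sketch of one BLSTM variant consistent with the slide (dropout between layers, early stopping); layer sizes and toy data are placeholders, and the "25 times" refers to repeated training runs whose predictions would be aggregated (not shown):

```python
import numpy as np
from tensorflow.keras import layers, models, callbacks

# Toy shapes: sequences of 30 tokens with 50-dim word vectors.
model = models.Sequential([
    layers.Input(shape=(30, 50)),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Dropout(0.5),                   # dropout between layers
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dropout(0.5),
    layers.Dense(1),                       # continuous sentiment score
])
model.compile(optimizer="adam", loss="mse")

stop = callbacks.EarlyStopping(monitor="val_loss", patience=3,
                               restore_best_weights=True)
x = np.random.rand(8, 30, 50).astype("float32")
y = np.random.rand(8).astype("float32")
model.fit(x, y, validation_split=0.25, epochs=2, callbacks=[stop], verbose=0)
```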
GEM-SciDuet-train-6#paper-970#slide-7
970
Lancaster A at SemEval-2017 Task 5: Evaluation metrics matter: predicting sentiment from financial news headlines
This paper describes our participation in Task 5 track 2 of SemEval 2017 to predict the sentiment of financial news headlines for a specific company on a continuous scale between -1 and 1. We tackled the problem using a number of approaches, utilising a Support Vector Regression (SVR) and a Bidirectional Long Short-Ter...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.1.1", "4.1.2", "4.1.3", "4.1.4", "4.1.5", "4.2", "3.", "4.2.1", "4.2.2", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Data", "System description", ...
GEM-SciDuet-train-6#paper-970#slide-7
SVR best features
Found using uni-grams and bi-grams together to be the best: 2.4% improvement. Using a tokeniser is always better, and it affects bi-gram results the most: 1% improvement using Unitok over whitespace. SVR parameter settings are important: 8% difference between using... Incorporating the target aspect increased performance: 0.3%. Using all word replaceme...
Found using uni-grams and bi-grams together to be the best: 2.4% improvement. Using a tokeniser is always better, and it affects bi-gram results the most: 1% improvement using Unitok over whitespace. SVR parameter settings are important: 8% difference between using... Incorporating the target aspect increased performance: 0.3%. Using all word replaceme...
[]
GEM-SciDuet-train-6#paper-970#slide-8
970
Lancaster A at SemEval-2017 Task 5: Evaluation metrics matter: predicting sentiment from financial news headlines
This paper describes our participation in Task 5 track 2 of SemEval 2017 to predict the sentiment of financial news headlines for a specific company on a continuous scale between -1 and 1. We tackled the problem using a number of approaches, utilising a Support Vector Regression (SVR) and a Bidirectional Long Short-Ter...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.1.1", "4.1.2", "4.1.3", "4.1.4", "4.1.5", "4.2", "3.", "4.2.1", "4.2.2", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Data", "System description", ...
GEM-SciDuet-train-6#paper-970#slide-8
Results across the different metrics
Metric 1 was the final metric used.
Metric 1 was the final metric used.
[]
GEM-SciDuet-train-6#paper-970#slide-9
970
Lancaster A at SemEval-2017 Task 5: Evaluation metrics matter: predicting sentiment from financial news headlines
This paper describes our participation in Task 5 track 2 of SemEval 2017 to predict the sentiment of financial news headlines for a specific company on a continuous scale between -1 and 1. We tackled the problem using a number of approaches, utilising a Support Vector Regression (SVR) and a Bidirectional Long Short-Ter...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.1.1", "4.1.2", "4.1.3", "4.1.4", "4.1.5", "4.2", "3.", "4.2.1", "4.2.2", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Data", "System description", ...
GEM-SciDuet-train-6#paper-970#slide-9
Future Work
1. Incorporate aspects into the BLSTMs, shown to be useful by Wang et al. 2. Improve BLSTMs by using an attention model (Wang et al. [7]). 3. Add a known financial sentiment lexicon into the LSTM model [6].
1. Incorporate aspects into the BLSTMs, shown to be useful by Wang et al. 2. Improve BLSTMs by using an attention model (Wang et al. [7]). 3. Add a known financial sentiment lexicon into the LSTM model [6].
[]
GEM-SciDuet-train-6#paper-970#slide-10
970
Lancaster A at SemEval-2017 Task 5: Evaluation metrics matter: predicting sentiment from financial news headlines
This paper describes our participation in Task 5 track 2 of SemEval 2017 to predict the sentiment of financial news headlines for a specific company on a continuous scale between -1 and 1. We tackled the problem using a number of approaches, utilising a Support Vector Regression (SVR) and a Bidirectional Long Short-Ter...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.1.1", "4.1.2", "4.1.3", "4.1.4", "4.1.5", "4.2", "3.", "4.2.1", "4.2.2", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Data", "System description", ...
GEM-SciDuet-train-6#paper-970#slide-10
Summary
1. BLSTMs outperform SVRs with minimal feature engineering. 2. The future is to incorporate more financial information into the...
1. BLSTMs outperform SVRs with minimal feature engineering. 2. The future is to incorporate more financial information into the...
[]
GEM-SciDuet-train-7#paper-971#slide-0
971
Exploring the leading authors and journals in major topics by citation sentences and topic modeling
Citation plays an important role in understanding the knowledge sharing among scholars. Citation sentences embed useful contents that signify the influence of cited authors on shared ideas, and express own opinion of citing authors to others' articles. The purpose of the study is to provide a new lens to analyze the to...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.4", "4" ], "paper_header_content": [ "Introduction", "Main idea", "Data collection", "AJT Model", "Conclusion" ] }
GEM-SciDuet-train-7#paper-971#slide-0
Exploring intellectual structures
Collaboration, author co-citation analysis, Journal Impact Factor, SJR, document citation analysis, co-word analysis. Citation sentence: contains brief content of the cited work and the opinion that the author of the citing work holds on the cited work. Topic model: adopting the Author-Conference-Topic (ACT) model (Tang, Jin and Zhang, 2008). Oncology: the r...
Collaboration, author co-citation analysis, Journal Impact Factor, SJR, document citation analysis, co-word analysis. Citation sentence: contains brief content of the cited work and the opinion that the author of the citing work holds on the cited work. Topic model: adopting the Author-Conference-Topic (ACT) model (Tang, Jin and Zhang, 2008). Oncology: the r...
[]
GEM-SciDuet-train-7#paper-971#slide-1
971
Exploring the leading authors and journals in major topics by citation sentences and topic modeling
Citation plays an important role in understanding the knowledge sharing among scholars. Citation sentences embed useful contents that signify the influence of cited authors on shared ideas, and express own opinion of citing authors to others' articles. The purpose of the study is to provide a new lens to analyze the to...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.4", "4" ], "paper_header_content": [ "Introduction", "Main idea", "Data collection", "AJT Model", "Conclusion" ] }
GEM-SciDuet-train-7#paper-971#slide-1
Citation Sentence
Embedding useful contents signifying the influence of cited authors on shared ideas. Considered an invisible intellectual place for exchanging ideas. Playing a role of supporting and expressing authors' own arguments by... Exploring the implicit topics residing in citation sentences.
Embedding useful contents signifying the influence of cited authors on shared ideas. Considered an invisible intellectual place for exchanging ideas. Playing a role of supporting and expressing authors' own arguments by... Exploring the implicit topics residing in citation sentences.
[]
GEM-SciDuet-train-7#paper-971#slide-2
971
Exploring the leading authors and journals in major topics by citation sentences and topic modeling
Citation plays an important role in understanding the knowledge sharing among scholars. Citation sentences embed useful contents that signify the influence of cited authors on shared ideas, and express own opinion of citing authors to others' articles. The purpose of the study is to provide a new lens to analyze the to...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.4", "4" ], "paper_header_content": [ "Introduction", "Main idea", "Data collection", "AJT Model", "Conclusion" ] }
GEM-SciDuet-train-7#paper-971#slide-2
Original ACT Model (Tang, Jin and Zhang, 2008)
Purpose of Academic search
Purpose of Academic search
[]
GEM-SciDuet-train-7#paper-971#slide-3
971
Exploring the leading authors and journals in major topics by citation sentences and topic modeling
Citation plays an important role in understanding the knowledge sharing among scholars. Citation sentences embed useful contents that signify the influence of cited authors on shared ideas, and express own opinion of citing authors to others' articles. The purpose of the study is to provide a new lens to analyze the to...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.4", "4" ], "paper_header_content": [ "Introduction", "Main idea", "Data collection", "AJT Model", "Conclusion" ] }
GEM-SciDuet-train-7#paper-971#slide-3
Modified AJT Model
1) Citation data extraction. Which topic is most salient? Who are the active authors sharing other authors' ideas? Which journal leads such endeavor?
1) Citation data extraction. Which topic is most salient? Who are the active authors sharing other authors' ideas? Which journal leads such endeavor?
[]
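The AJT model itself extends ACT with journal information; as a rough, illustrative stand-in (not the AJT model), plain LDA over citation sentences surfaces the topics, from which author and journal distributions could then be aggregated:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

citation_sents = [
    "prs associated with breast cancer risk in large cohort",
    "tumor suppressor gene mutation drives melanoma progression",
    "polygenic risk score improves breast cancer prediction",
]

X = CountVectorizer(stop_words="english").fit_transform(citation_sents)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
print(lda.transform(X).round(2))  # per-sentence topic mixtures
```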
GEM-SciDuet-train-7#paper-971#slide-4
971
Exploring the leading authors and journals in major topics by citation sentences and topic modeling
Citation plays an important role in understanding the knowledge sharing among scholars. Citation sentences embed useful contents that signify the influence of cited authors on shared ideas, and express own opinion of citing authors to others' articles. The purpose of the study is to provide a new lens to analyze the to...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.4", "4" ], "paper_header_content": [ "Introduction", "Main idea", "Data collection", "AJT Model", "Conclusion" ] }
GEM-SciDuet-train-7#paper-971#slide-4
Method
3) Citing authors. Example citation sentence: "The 77-SNP PRS was associated with a larger effect than previously reported for a 10-SNP PRS (<xref rid=CIT0020 ref-type=bibr>20</xref>)."
3) Citing authors. Example citation sentence: "The 77-SNP PRS was associated with a larger effect than previously reported for a 10-SNP PRS (<xref rid=CIT0020 ref-type=bibr>20</xref>)."
[]
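The citation-extraction step shown above boils down to pulling <xref> markers out of PMC-style markup; a minimal regex sketch (a real pipeline would use an XML parser):

```python
import re

sent = ("The 77-SNP PRS was associated with a larger effect than previously "
        "reported for a 10-SNP PRS (<xref rid=CIT0020 ref-type=bibr>20</xref>).")

# Extract (reference id, citation marker) pairs from the markup.
pairs = re.findall(r'<xref[^>]*rid="?(\w+)"?[^>]*>\s*(\d+)\s*</xref>', sent)
print(pairs)  # [('CIT0020', '20')]

# Strip the markup to recover the plain citation sentence.
print(re.sub(r"</?xref[^>]*>", "", sent))
```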
GEM-SciDuet-train-7#paper-971#slide-5
971
Exploring the leading authors and journals in major topics by citation sentences and topic modeling
Citation plays an important role in understanding the knowledge sharing among scholars. Citation sentences embed useful contents that signify the influence of cited authors on shared ideas, and express own opinion of citing authors to others' articles. The purpose of the study is to provide a new lens to analyze the to...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.4", "4" ], "paper_header_content": [ "Introduction", "Main idea", "Data collection", "AJT Model", "Conclusion" ] }
GEM-SciDuet-train-7#paper-971#slide-5
Data collection
PubMed Central: 6,360 full-text articles. 15 Oncology journals, selected by Thomson Reuters JCR and journal impact factor: Cancer Cell, Journal of the National Cancer Institute, Leukemia, Oncogene, Annals of Oncology, Neuro-Oncology, Stem Cells, Oncotarget, OncoImmunology, Molecular Oncology, Breast Cancer Research, Journal of T...
PubMed Central: 6,360 full-text articles. 15 Oncology journals, selected by Thomson Reuters JCR and journal impact factor: Cancer Cell, Journal of the National Cancer Institute, Leukemia, Oncogene, Annals of Oncology, Neuro-Oncology, Stem Cells, Oncotarget, OncoImmunology, Molecular Oncology, Breast Cancer Research, Journal of T...
[]
GEM-SciDuet-train-7#paper-971#slide-6
971
Exploring the leading authors and journals in major topics by citation sentences and topic modeling
Citation plays an important role in understanding the knowledge sharing among scholars. Citation sentences embed useful contents that signify the influence of cited authors on shared ideas, and express own opinion of citing authors to others' articles. The purpose of the study is to provide a new lens to analyze the to...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.4", "4" ], "paper_header_content": [ "Introduction", "Main idea", "Data collection", "AJT Model", "Conclusion" ] }
GEM-SciDuet-train-7#paper-971#slide-6
Research Flow
1) Citation Data Extraction
1) Citation Data Extraction
[]
GEM-SciDuet-train-7#paper-971#slide-7
971
Exploring the leading authors and journals in major topics by citation sentences and topic modeling
Citation plays an important role in understanding the knowledge sharing among scholars. Citation sentences embed useful contents that signify the influence of cited authors on shared ideas, and express own opinion of citing authors to others' articles. The purpose of the study is to provide a new lens to analyze the to...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.4", "4" ], "paper_header_content": [ "Introduction", "Main idea", "Data collection", "AJT Model", "Conclusion" ] }
GEM-SciDuet-train-7#paper-971#slide-7
Results 8 Topics
Labeled by 3 Experts Author Group 1 Author Group 2 Author Group 3 Author Group 4 Journal Group 1 Journal Group 2 Journal Group 3 Journal Group 4 Research Annals of Oncology Pigment Cell & Melanoma Research Journal of Thoracic Oncology
Labeled by 3 Experts Author Group 1 Author Group 2 Author Group 3 Author Group 4 Journal Group 1 Journal Group 2 Journal Group 3 Journal Group 4 Research Annals of Oncology Pigment Cell & Melanoma Research Journal of Thoracic Oncology
[]
GEM-SciDuet-train-7#paper-971#slide-8
971
Exploring the leading authors and journals in major topics by citation sentences and topic modeling
Citation plays an important role in understanding the knowledge sharing among scholars. Citation sentences embed useful contents that signify the influence of cited authors on shared ideas, and express own opinion of citing authors to others' articles. The purpose of the study is to provide a new lens to analyze the to...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.4", "4" ], "paper_header_content": [ "Introduction", "Main idea", "Data collection", "AJT Model", "Conclusion" ] }
GEM-SciDuet-train-7#paper-971#slide-8
Results contd
Author Group 5 Author Group 6 Author Group 7 Author Group 8 Journal Group 5 Journal Group 6 Journal Group 7 Journal Group 8 Annals of Oncology Cancer Cell Annals of Oncology Breast Cancer Research
Author Group 5 Author Group 6 Author Group 7 Author Group 8 Journal Group 5 Journal Group 6 Journal Group 7 Journal Group 8 Annals of Oncology Cancer Cell Annals of Oncology Breast Cancer Research
[]
GEM-SciDuet-train-7#paper-971#slide-9
971
Exploring the leading authors and journals in major topics by citation sentences and topic modeling
Citation plays an important role in understanding the knowledge sharing among scholars. Citation sentences embed useful contents that signify the influence of cited authors on shared ideas, and express own opinion of citing authors to others' articles. The purpose of the study is to provide a new lens to analyze the to...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.4", "4" ], "paper_header_content": [ "Introduction", "Main idea", "Data collection", "AJT Model", "Conclusion" ] }
GEM-SciDuet-train-7#paper-971#slide-9
Conclusion
AJT model: to detect leading authors and journals in sub-disciplines represented by discovered topics in a certain field Citation sentences: discovering the latent meaning associated with citation sentences and the major players leading the field
AJT model: to detect leading authors and journals in sub-disciplines represented by discovered topics in a certain field Citation sentences: discovering the latent meaning associated with citation sentences and the major players leading the field
[]
GEM-SciDuet-train-7#paper-971#slide-10
971
Exploring the leading authors and journals in major topics by citation sentences and topic modeling
Citation plays an important role in understanding the knowledge sharing among scholars. Citation sentences embed useful contents that signify the influence of cited authors on shared ideas, and express own opinion of citing authors to others' articles. The purpose of the study is to provide a new lens to analyze the to...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.4", "4" ], "paper_header_content": [ "Introduction", "Main idea", "Data collection", "AJT Model", "Conclusion" ] }
GEM-SciDuet-train-7#paper-971#slide-10
Future works
Comparing the proposed approach with general topic modeling Investigating whether there is a different impact of using citation sentences vs. general meta-data (abstract and title) Considering the window size of citation sentences enriching citation
Comparing the proposed approach with general topic modeling Investigating whether there is a different impact of using citation sentences vs. general meta-data (abstract and title) Considering the window size of citation sentences enriching citation
[]
GEM-SciDuet-train-8#paper-972#slide-0
972
Obtaining SMT dictionaries for related languages
This study explores methods for developing Machine Translation dictionaries on the basis of word frequency lists coming from comparable corpora. We investigate (1) various methods to measure the similarity of cognates between related languages, (2) detection and removal of noisy cognate translations using SVM ranking. ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "3.1", "3.2", "3.3", "4" ], "paper_header_content": [ "Introduction", "Methodology", "Cognate detection", "Cognate Ranking", "Results and Discussion", "Data", "Evaluation of the Ranking Model", ...
GEM-SciDuet-train-8#paper-972#slide-0
Motivation
Extracting cognates for related languages in Romance and Slavonic language groups Reducing the number of unknown words in SMT training data Learning regular differences in word roots/endings shared across related languages
Extracting cognates for related languages in Romance and Slavonic language groups Reducing the number of unknown words in SMT training data Learning regular differences in word roots/endings shared across related languages
[]
GEM-SciDuet-train-8#paper-972#slide-1
972
Obtaining SMT dictionaries for related languages
This study explores methods for developing Machine Translation dictionaries on the basis of word frequency lists coming from comparable corpora. We investigate (1) various methods to measure the similarity of cognates between related languages, (2) detection and removal of noisy cognate translations using SVM ranking. ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "3.1", "3.2", "3.3", "4" ], "paper_header_content": [ "Introduction", "Methodology", "Cognate detection", "Cognate Ranking", "Results and Discussion", "Data", "Evaluation of the Ranking Model", ...
GEM-SciDuet-train-8#paper-972#slide-1
Method
Produce n-best lists of cognates using a family of distance measures from comparable corpora Prune the n-best lists with a ranking Machine Learning (ML) algorithm trained on parallel corpora Motivation: the n-best list allows surface variation in possible cognate translations
Produce n-best lists of cognates using a family of distance measures from comparable corpora Prune the n-best lists with a ranking Machine Learning (ML) algorithm trained on parallel corpora Motivation: the n-best list allows surface variation in possible cognate translations
[]
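As a reading aid for the Method record above, here is a minimal, hypothetical Python sketch of the two-stage pipeline: build n-best cognate candidates per source word with a string-similarity measure, then prune each list with a trained ranker. `similarity` (difflib's ratio) is only a stand-in for the paper's Levenshtein-based metrics, and `ranker` is a placeholder for the SVM ranker described in later records; neither is the authors' actual implementation.

```python
# Hedged sketch of the two-stage cognate pipeline (assumptions noted above).
from difflib import SequenceMatcher

def similarity(s, t):
    # Stand-in for the paper's Levenshtein-based measures.
    return SequenceMatcher(None, s, t).ratio()

def nbest_lists(src_words, tgt_words, n=5):
    """Stage 1: n most similar target words for each source word."""
    return {s: sorted(tgt_words, key=lambda t: similarity(s, t), reverse=True)[:n]
            for s in src_words}

def prune(nbest, ranker, keep=1):
    """Stage 2: keep the top candidates according to a trained ranking model."""
    return {s: sorted(cands, key=lambda t: ranker(s, t), reverse=True)[:keep]
            for s, cands in nbest.items()}
```

For example, `prune(nbest_lists(["aceito"], ["acepto", "acceso"]), similarity)` keeps only the best-ranked target candidate for each source word.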
GEM-SciDuet-train-8#paper-972#slide-2
972
Obtaining SMT dictionaries for related languages
This study explores methods for developing Machine Translation dictionaries on the basis of word frequency lists coming from comparable corpora. We investigate (1) various methods to measure the similarity of cognates between related languages, (2) detection and removal of noisy cognate translations using SVM ranking. ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "3.1", "3.2", "3.3", "4" ], "paper_header_content": [ "Introduction", "Methodology", "Cognate detection", "Cognate Ranking", "Results and Discussion", "Data", "Evaluation of the Ranking Model", ...
GEM-SciDuet-train-8#paper-972#slide-2
Similarity metrics
Compare words between frequency lists over comparable corpora L: matching between the languages using Levenshtein distance L-R: Levenshtein distance computed separately for the roots and for the endings, e.g. aceito (pt) vs acepto (es), rejeito (pt) vs rechazo (es) L-C: Levenshtein distance over words with similar number of st...
Compare words between frequency lists over comparable corpora L: matching between the languages using Levenshtein distance L-R: Levenshtein distance computed separately for the roots and for the endings, e.g. aceito (pt) vs acepto (es), rejeito (pt) vs rechazo (es) L-C: Levenshtein distance over words with similar number of st...
[]
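A hedged sketch of the metrics in the record above. The dynamic-programming Levenshtein distance is standard; the root/ending split point in `l_r_metric` is an illustrative assumption, since the slide does not specify how roots and endings are segmented.

```python
def levenshtein(a, b):
    """Standard edit distance via dynamic programming (rolling rows)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def l_r_metric(a, b, root_len=4):
    """L-R: distance over the roots plus distance over the endings.
    root_len=4 is an assumed split point, for illustration only."""
    return (levenshtein(a[:root_len], b[:root_len])
            + levenshtein(a[root_len:], b[root_len:]))

# e.g. Portuguese 'aceito' vs Spanish 'acepto'
print(levenshtein("aceito", "acepto"), l_r_metric("aceito", "acepto"))  # 1 1
```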
GEM-SciDuet-train-8#paper-972#slide-3
972
Obtaining SMT dictionaries for related languages
This study explores methods for developing Machine Translation dictionaries on the basis of word frequency lists coming from comparable corpora. We investigate (1) various methods to measure the similarity of cognates between related languages, (2) detection and removal of noisy cognate translations using SVM ranking. ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "3.1", "3.2", "3.3", "4" ], "paper_header_content": [ "Introduction", "Methodology", "Cognate detection", "Cognate Ranking", "Results and Discussion", "Data", "Evaluation of the Ranking Model", ...
GEM-SciDuet-train-8#paper-972#slide-3
Search space constraints
Motivation: the exhaustive method compares all combinations of source and target words Order the target-side frequency list into bins of similar frequency Compare each source word with target bins of similar frequency around a window The L-C metric only compares words that share a given prefix of length n
Motivation: the exhaustive method compares all combinations of source and target words Order the target-side frequency list into bins of similar frequency Compare each source word with target bins of similar frequency around a window The L-C metric only compares words that share a given prefix of length n
[]
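A sketch of the constraints in the record above, under assumed values for bin size, window, and prefix length (this slide does not pin them down): the target frequency list is cut into bins, each source word is compared only against bins of similar frequency rank, and the L-C constraint additionally requires a shared prefix.

```python
def make_bins(freq_ordered_words, bin_size=1000):
    """Cut a frequency-ordered word list into consecutive bins."""
    return [freq_ordered_words[i:i + bin_size]
            for i in range(0, len(freq_ordered_words), bin_size)]

def candidates(src_rank, tgt_bins, bin_size=1000, window=1,
               src_word=None, n_prefix=2):
    """Target words worth comparing against a source word of rank src_rank:
    only bins within +/- window of the matching frequency bin, optionally
    restricted to words sharing the first n_prefix characters (L-C)."""
    b = src_rank // bin_size
    lo, hi = max(0, b - window), min(len(tgt_bins) - 1, b + window)
    cands = [w for bin_ in tgt_bins[lo:hi + 1] for w in bin_]
    if src_word is not None:
        cands = [w for w in cands if w[:n_prefix] == src_word[:n_prefix]]
    return cands
```

This turns the exhaustive |source| x |target| comparison into one over a few frequency bins per source word, which is the point of the constraint.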
GEM-SciDuet-train-8#paper-972#slide-4
972
Obtaining SMT dictionaries for related languages
This study explores methods for developing Machine Translation dictionaries on the basis of word frequency lists coming from comparable corpora. We investigate (1) various methods to measure the similarity of cognates between related languages, (2) detection and removal of noisy cognate translations using SVM ranking. ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "3.1", "3.2", "3.3", "4" ], "paper_header_content": [ "Introduction", "Methodology", "Cognate detection", "Cognate Ranking", "Results and Discussion", "Data", "Evaluation of the Ranking Model", ...
GEM-SciDuet-train-8#paper-972#slide-4
Ranking
Motivation: prune the n-best lists with a ranking ML algorithm Training data come from aligned parallel corpora, where the rank is given by the alignment probability from GIZA++ Simulate cognate training data by pruning pairs of words below a Levenshtein threshold
Motivation: prune the n-best lists with a ranking ML algorithm Training data come from aligned parallel corpora, where the rank is given by the alignment probability from GIZA++ Simulate cognate training data by pruning pairs of words below a Levenshtein threshold
[]
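A hedged sketch of how the ranking training data described above could be assembled. `aligned_pairs` stands in for (source word, target word, alignment probability) triples read from a GIZA++ lexical translation table; the similarity threshold is an assumed value, and difflib's ratio is used here as a stand-in for a normalised Levenshtein similarity.

```python
from difflib import SequenceMatcher

def cognate_training_pairs(aligned_pairs, sim_threshold=0.5):
    """Simulate cognate training data: drop word pairs that are not
    string-similar enough to be plausible cognates, then order the rest
    by alignment probability (higher probability => higher rank)."""
    kept = [(s, t, p) for (s, t, p) in aligned_pairs
            if SequenceMatcher(None, s, t).ratio() >= sim_threshold]
    return sorted(kept, key=lambda x: x[2], reverse=True)
```

The resulting ordered pairs could then be fed to a ranking learner such as the SVM ranker described in the Features record below.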
GEM-SciDuet-train-8#paper-972#slide-5
972
Obtaining SMT dictionaries for related languages
This study explores methods for developing Machine Translation dictionaries on the basis of word frequency lists coming from comparable corpora. We investigate (1) various methods to measure the similarity of cognates between related languages, (2) detection and removal of noisy cognate translations using SVM ranking. ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "3.1", "3.2", "3.3", "4" ], "paper_header_content": [ "Introduction", "Methodology", "Cognate detection", "Cognate Ranking", "Results and Discussion", "Data", "Evaluation of the Ranking Model", ...
GEM-SciDuet-train-8#paper-972#slide-5
Features
Number of times each edit operation occurs; the model assigns a different weight to each operation Cosine between the distributional vectors of the source and target words; word2vec vectors mapped to the same space via a learned transformation matrix SVM ranking, default configuration (RBF kernel) Easy-adapt features given ...
Number of times each edit operation occurs; the model assigns a different weight to each operation Cosine between the distributional vectors of the source and target words; word2vec vectors mapped to the same space via a learned transformation matrix SVM ranking, default configuration (RBF kernel) Easy-adapt features given ...
[]
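A sketch of the feature extraction described above, assuming per-operation counts from one minimal-cost edit alignment and a learned linear map W from the source embedding space to the target space (the word2vec transformation-matrix setup the slide mentions). The exact feature layout and operation weighting are assumptions here, not the authors' implementation.

```python
import numpy as np

def edit_op_counts(a, b):
    """Counts of (insertions, deletions, substitutions) on one
    minimal-cost alignment of a and b, via DP with backtracking."""
    m, n = len(a), len(b)
    d = np.zeros((m + 1, n + 1), dtype=int)
    d[:, 0] = np.arange(m + 1)
    d[0, :] = np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i, j] = min(d[i - 1, j] + 1,
                          d[i, j - 1] + 1,
                          d[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
    ins = dele = sub = 0
    i, j = m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i, j] == d[i - 1, j - 1] + (a[i - 1] != b[j - 1]):
            sub += a[i - 1] != b[j - 1]
            i, j = i - 1, j - 1
        elif i > 0 and d[i, j] == d[i - 1, j] + 1:
            dele += 1
            i -= 1
        else:
            ins += 1
            j -= 1
    return ins, dele, sub

def features(src_word, tgt_word, src_vec, tgt_vec, W):
    """Feature vector: edit-operation counts plus the cosine between the
    target vector and the source vector mapped by the learned matrix W."""
    mapped = W @ src_vec
    cos = mapped @ tgt_vec / (np.linalg.norm(mapped) * np.linalg.norm(tgt_vec))
    return np.array([*edit_op_counts(src_word, tgt_word), cos])
```

Per the slide, such feature vectors would be passed to an SVM ranking model in its default configuration with an RBF kernel.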
GEM-SciDuet-train-8#paper-972#slide-6
972
Obtaining SMT dictionaries for related languages
This study explores methods for developing Machine Translation dictionaries on the basis of word frequency lists coming from comparable corpora. We investigate (1) various methods to measure the similarity of cognates between related languages, (2) detection and removal of noisy cognate translations using SVM ranking. ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "3.1", "3.2", "3.3", "4" ], "paper_header_content": [ "Introduction", "Methodology", "Cognate detection", "Cognate Ranking", "Results and Discussion", "Data", "Evaluation of the Ranking Model", ...
GEM-SciDuet-train-8#paper-972#slide-6
Data description
n-best lists from Wikipedia dumps (frequency lists) ML training Wiki-titles, parallel data from inter-language links from the titles of the Wikipedia articles, 500K aligned links (i.e. sentences) Opensubs, 90K training instances Zoo, proprietary corpus of subtitles produced by professional translators, 20K training inst...
n-best lists from Wikipedia dumps (frequency lists) ML training Wiki-titles, parallel data from inter-language links from the titles of the Wikipedia articles, 500K aligned links (i.e. sentences) Opensubs, 90K training instances Zoo, proprietary corpus of subtitles produced by professional translators, 20K training inst...
[]
GEM-SciDuet-train-8#paper-972#slide-7
972
Obtaining SMT dictionaries for related languages
This study explores methods for developing Machine Translation dictionaries on the basis of word frequency lists coming from comparable corpora. We investigate (1) various methods to measure the similarity of cognates between related languages, (2) detection and removal of noisy cognate translations using SVM ranking. ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "3.1", "3.2", "3.3", "4" ], "paper_header_content": [ "Introduction", "Methodology", "Cognate detection", "Cognate Ranking", "Results and Discussion", "Data", "Evaluation of the Ranking Model", ...
GEM-SciDuet-train-8#paper-972#slide-7
Language pairs
Romance Source: Portuguese, French, Italian Target: Spanish Slavonic Source: Ukrainian, Bulgarian Target: Russian
Romance Source: Portuguese, French, Italian Target: Spanish Slavonic Source: Ukrainian, Bulgarian Target: Russian
[]
GEM-SciDuet-train-8#paper-972#slide-8
972
Obtaining SMT dictionaries for related languages
This study explores methods for developing Machine Translation dictionaries on the basis of word frequency lists coming from comparable corpora. We investigate (1) various methods to measure the similarity of cognates between related languages, (2) detection and removal of noisy cognate translations using SVM ranking. ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "3.1", "3.2", "3.3", "4" ], "paper_header_content": [ "Introduction", "Methodology", "Cognate detection", "Cognate Ranking", "Results and Discussion", "Data", "Evaluation of the Ranking Model", ...
GEM-SciDuet-train-8#paper-972#slide-8
Results on heldout data
Error score on heldout data. E = edit distance features; EC = edit distance plus distributed vectors features. Table layout: columns Zoo error%, Opensubs error%, Wiki-titles error%, each split into Model E and Model EC; rows: Romance pairs pt-es, it-es, fr-es
Error score on heldout data. E = edit distance features; EC = edit distance plus distributed vectors features. Table layout: columns Zoo error%, Opensubs error%, Wiki-titles error%, each split into Model E and Model EC; rows: Romance pairs pt-es, it-es, fr-es
[]