Dataset fields (with observed string-length ranges):

field                type     length
gem_id               string   37-41
paper_id             string   3-4
paper_title          string   19-183
paper_abstract       string   168-1.38k
paper_content        dict     -
paper_headers        dict     -
slide_id             string   37-41
slide_title          string   2-85
slide_content_text   string   11-2.55k
target               string   11-2.55k (identical to slide_content_text in the rows shown)
references           list     -
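The rows below can be reproduced programmatically. A minimal sketch with the `datasets` library, assuming the dataset is hosted on the Hugging Face Hub; the identifier "GEM/SciDuet" is an assumption and may differ:

```python
# Minimal sketch: load the dataset and inspect one row.
# Assumption: the Hub identifier "GEM/SciDuet" may differ in practice.
from datasets import load_dataset

ds = load_dataset("GEM/SciDuet", split="train")

row = ds[0]
print(row["gem_id"])       # e.g. "GEM-SciDuet-train-16#paper-994#slide-2"
print(row["paper_title"])  # title of the source paper
print(row["slide_title"])  # title of the aligned slide
print(row["target"])       # slide text used as the generation target
```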
GEM-SciDuet-train-16#paper-994#slide-2
994
Simple and Effective Text Simplification Using Semantic and Neural Methods
Sentence splitting is a major simplification operator. Here we present a simple and efficient splitting algorithm based on an automatic semantic parser. After splitting, the text is amenable for further fine-tuned simplification operations. In particular, we show that neural Machine Translation can be effectively used ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "4", "5", "6", "7", "8" ], "paper_header_content": [ "Introduction", "Related Work", "Semantic Representation", "The Semantic Rules", "Neural Component", "Experimental Setup", "Results", "Additio...
GEM-SciDuet-train-16#paper-994#slide-2
Current Approaches and Challenges
Sentence simplification as monolingual machine translation
[]
GEM-SciDuet-train-16#paper-994#slide-3
994
Simple and Effective Text Simplification Using Semantic and Neural Methods
Sentence splitting is a major simplification operator. Here we present a simple and efficient splitting algorithm based on an automatic semantic parser. After splitting, the text is amenable for further fine-tuned simplification operations. In particular, we show that neural Machine Translation can be effectively used ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "4", "5", "6", "7", "8" ], "paper_header_content": [ "Introduction", "Related Work", "Semantic Representation", "The Semantic Rules", "Neural Component", "Experimental Setup", "Results", "Additio...
GEM-SciDuet-train-16#paper-994#slide-3
Conservatism in MT Based Simplification
In both SMT and NMT Text Simplification, a large proportion of the input sentences are not modified (Alva-Manchego et al., 2017; on the Newsela corpus). This is confirmed in the present work (experiments on Wikipedia): a large share of the input sentences remain unchanged. - None of the references are identical to the source. - Accor...
[]
GEM-SciDuet-train-16#paper-994#slide-4
994
Simple and Effective Text Simplification Using Semantic and Neural Methods
Sentence splitting is a major simplification operator. Here we present a simple and efficient splitting algorithm based on an automatic semantic parser. After splitting, the text is amenable for further fine-tuned simplification operations. In particular, we show that neural Machine Translation can be effectively used ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "4", "5", "6", "7", "8" ], "paper_header_content": [ "Introduction", "Related Work", "Semantic Representation", "The Semantic Rules", "Neural Component", "Experimental Setup", "Results", "Additio...
GEM-SciDuet-train-16#paper-994#slide-4
Sentence Splitting in Text Simplification
Splitting in NMT-based simplification: sentence splitting is not addressed, owing to the rareness of splittings in the simplification training corpora. Recently, a corpus focusing on sentence splitting was proposed for the Split-and-Rephrase task (Narayan et al., 2017), where the other operations are not addressed. Directly modeling sentence splitt...
[]
GEM-SciDuet-train-16#paper-994#slide-5
994
Simple and Effective Text Simplification Using Semantic and Neural Methods
Sentence splitting is a major simplification operator. Here we present a simple and efficient splitting algorithm based on an automatic semantic parser. After splitting, the text is amenable for further fine-tuned simplification operations. In particular, we show that neural Machine Translation can be effectively used ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "4", "5", "6", "7", "8" ], "paper_header_content": [ "Introduction", "Related Work", "Semantic Representation", "The Semantic Rules", "Neural Component", "Experimental Setup", "Results", "Additio...
GEM-SciDuet-train-16#paper-994#slide-5
Direct Semantic Splitting DSS
A simple algorithm that directly decomposes the sentence into its semantic components, using two splitting rules. The splitting is directed by semantic parsing, whose annotation directly captures shared arguments. It can be used as a preprocessing step for other simplification operations. (Pipeline: input sentence -> split sent...)
[]
GEM-SciDuet-train-16#paper-994#slide-6
994
Simple and Effective Text Simplification Using Semantic and Neural Methods
Sentence splitting is a major simplification operator. Here we present a simple and efficient splitting algorithm based on an automatic semantic parser. After splitting, the text is amenable for further fine-tuned simplification operations. In particular, we show that neural Machine Translation can be effectively used ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "4", "5", "6", "7", "8" ], "paper_header_content": [ "Introduction", "Related Work", "Semantic Representation", "The Semantic Rules", "Neural Component", "Experimental Setup", "Results", "Additio...
GEM-SciDuet-train-16#paper-994#slide-6
The Semantic Structures
Semantic Annotation: UCCA (Abend and Rappoport, 2013), based on typological and cognitive theories. [Figure: example UCCA graphs, for sentences such as "He came back home and played piano", annotated with Parallel Scenes (H), Linkers (L), Participants (A), Processes (P), and States (S).] ...
[]
GEM-SciDuet-train-16#paper-994#slide-7
994
Simple and Effective Text Simplification Using Semantic and Neural Methods
Sentence splitting is a major simplification operator. Here we present a simple and efficient splitting algorithm based on an automatic semantic parser. After splitting, the text is amenable for further fine-tuned simplification operations. In particular, we show that neural Machine Translation can be effectively used ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "4", "5", "6", "7", "8" ], "paper_header_content": [ "Introduction", "Related Work", "Semantic Representation", "The Semantic Rules", "Neural Component", "Experimental Setup", "Results", "Additio...
GEM-SciDuet-train-16#paper-994#slide-7
The Semantic Rules
Placing each Scene in a different sentence. This fits with event-wise simplification (Glavas and Stajner, 2013); here we only use semantic criteria. It was also investigated in the context of Text Simplification evaluation: the SAMSA measure (Sulem, Abend and Rappoport, NAACL 2018). Example (sketched in code after this row): He came back home and played piano. He ...
[]
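A minimal sketch of the Scene-per-sentence rule described in this slide, assuming a semantic parser has already produced Scene token spans; the function and data layout are illustrative, not the paper's implementation:

```python
# Illustrative sketch of Scene-per-sentence splitting (not the paper's code).
# Assumption: a semantic parser has already identified each Scene as a span
# of tokens, and shared participants (e.g. "he") are known.
def split_scenes(scenes, shared_participants=()):
    sentences = []
    for scene in scenes:
        # Drop linkers such as "and"; prepend shared participants.
        tokens = list(shared_participants) + [t for t in scene if t.lower() != "and"]
        sentences.append(" ".join(tokens).capitalize() + ".")
    return sentences

scenes = [["came", "back", "home"], ["played", "piano"]]
print(split_scenes(scenes, shared_participants=("he",)))
# ['He came back home.', 'He played piano.']
```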
GEM-SciDuet-train-16#paper-994#slide-8
994
Simple and Effective Text Simplification Using Semantic and Neural Methods
Sentence splitting is a major simplification operator. Here we present a simple and efficient splitting algorithm based on an automatic semantic parser. After splitting, the text is amenable for further fine-tuned simplification operations. In particular, we show that neural Machine Translation can be effectively used ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "4", "5", "6", "7", "8" ], "paper_header_content": [ "Introduction", "Related Work", "Semantic Representation", "The Semantic Rules", "Neural Component", "Experimental Setup", "Results", "Additio...
GEM-SciDuet-train-16#paper-994#slide-8
Combining DSS with Neural Text Simplification
After DSS, the output is fed to an MT-based simplification system. We use a state-of-the-art NMT-based TS system, NTS (Nisioi et al., 2017); the combined system is called SENTS. NTS was built using the OpenNMT framework (Klein et al., 2017). We use the provided NTS-w2v model, where word2vec embeddings are used for the i...
[]
GEM-SciDuet-train-16#paper-994#slide-9
994
Simple and Effective Text Simplification Using Semantic and Neural Methods
Sentence splitting is a major simplification operator. Here we present a simple and efficient splitting algorithm based on an automatic semantic parser. After splitting, the text is amenable for further fine-tuned simplification operations. In particular, we show that neural Machine Translation can be effectively used ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "4", "5", "6", "7", "8" ], "paper_header_content": [ "Introduction", "Related Work", "Semantic Representation", "The Semantic Rules", "Neural Component", "Experimental Setup", "Results", "Additio...
GEM-SciDuet-train-16#paper-994#slide-9
Experiments
Test set of Xu et al. (2016): sentences, each with 8 references. Automatic measures include, e.g., the percentage of sentences copied from the input (%Same). Human evaluation: the first 70 sentences of the corpus, 3 annotators (native English speakers), 4 questions for each input-output pair: Is the output fluent and grammatical? Does the output preserve the meaning of the input?...
[]
GEM-SciDuet-train-16#paper-994#slide-10
994
Simple and Effective Text Simplification Using Semantic and Neural Methods
Sentence splitting is a major simplification operator. Here we present a simple and efficient splitting algorithm based on an automatic semantic parser. After splitting, the text is amenable for further fine-tuned simplification operations. In particular, we show that neural Machine Translation can be effectively used ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "4", "5", "6", "7", "8" ], "paper_header_content": [ "Introduction", "Related Work", "Semantic Representation", "The Semantic Rules", "Neural Component", "Experimental Setup", "Results", "Additio...
GEM-SciDuet-train-16#paper-994#slide-10
Results
Automatic evaluation: BLEU, SARI (BLEU computation sketched after this row). Human evaluation (first 70 sentences), on four scales: G Grammaticality (1 to 5), M Meaning Preservation (1 to 5), S Simplicity (-2 to +2), StS Structural Simplicity (-2 to +2). Identity gets the highest BLEU score and the lowest SARI scores. The two SENTS systems ou...
[]
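For the automatic metrics, multi-reference BLEU can be computed with, e.g., sacrebleu (toy data below); SARI needs a dedicated implementation, such as the one in the EASSE toolkit:

```python
# Sketch: corpus BLEU with multiple references using sacrebleu (toy data).
import sacrebleu

hypotheses = ["the cat sat on the mat"]
# One inner list per reference set, each aligned with the hypotheses.
references = [["the cat sat on the mat"], ["a cat was sitting on the mat"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(bleu.score)
```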
GEM-SciDuet-train-16#paper-994#slide-12
994
Simple and Effective Text Simplification Using Semantic and Neural Methods
Sentence splitting is a major simplification operator. Here we present a simple and efficient splitting algorithm based on an automatic semantic parser. After splitting, the text is amenable for further fine-tuned simplification operations. In particular, we show that neural Machine Translation can be effectively used ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "4", "5", "6", "7", "8" ], "paper_header_content": [ "Introduction", "Related Work", "Semantic Representation", "The Semantic Rules", "Neural Component", "Experimental Setup", "Results", "Additio...
GEM-SciDuet-train-16#paper-994#slide-12
Conclusion 1
We presented here the first simplification system combining semantic structures and neural machine translation. Our system compares favorably to the state-of-the-art in combined structural and lexical simplification. This approach addresses the conservatism of MT-based systems. Sentence splitting is performed without r...
[]
GEM-SciDuet-train-16#paper-994#slide-13
994
Simple and Effective Text Simplification Using Semantic and Neural Methods
Sentence splitting is a major simplification operator. Here we present a simple and efficient splitting algorithm based on an automatic semantic parser. After splitting, the text is amenable for further fine-tuned simplification operations. In particular, we show that neural Machine Translation can be effectively used ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "4", "5", "6", "7", "8" ], "paper_header_content": [ "Introduction", "Related Work", "Semantic Representation", "The Semantic Rules", "Neural Component", "Experimental Setup", "Results", "Additio...
GEM-SciDuet-train-16#paper-994#slide-13
Conclusion 2
Sentence splitting is treated as the decomposition of the sentence into its Scenes (as in the SAMSA evaluation measure; Sulem, Abend and Rappoport, NAACL 2018). Future work will leverage UCCA's cross-linguistic applicability to support multi-lingual text simplification and simplification pre-processing for MT.
[]
GEM-SciDuet-train-17#paper-1001#slide-1
1001
Consistent Improvement in Translation Quality of Chinese-Japanese Technical Texts by Adding Additional Quasi-parallel Training Data
Bilingual parallel corpora are an extremely important resource as they are typically used in data-driven machine translation. There already exist many freely available corpora for European languages, but almost none between Chinese and Japanese. The constitution of large bilingual corpora is a problem for less document...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "4.1", "4.2", "4.3", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Chinese and Japanese Parallel Sentences", "Chinese and Japanese M...
GEM-SciDuet-train-17#paper-1001#slide-1
SMT Experiments
[Table: evaluation results (BLEU, NIST, WER, TER, RIBES) for Chinese-Japanese translation across two SMT systems: baseline zh-ja, and baseline + additional quasi-parallel training data. Moses version: 1.0; segmentation tools: urheen and mecab.]
[]
GEM-SciDuet-train-18#paper-1009#slide-0
1009
Marrying Up Regular Expressions with Neural Networks: A Case Study for Spoken Language Understanding
The success of many natural language processing (NLP) tasks is bound by the number and quality of annotated data, but there is often a shortage of such training data. In this paper, we ask the question: "Can we combine a neural network (NN) with regular expressions (RE) to improve supervised learning for NLP?". In answ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5.1", "5.2", "5.3", "5.4", "6", "7" ], "paper_header_content": [ "Introduction", "Typesetting", "Problem Definiti...
GEM-SciDuet-train-18#paper-1009#slide-0
Data is Limited
Most of the popular models in NLP are data-driven, yet we often need to operate in a specific scenario with limited data. Take spoken language understanding as an example: it needs to be implemented for many domains, each with limited data. Intent Detection: "flights from Boston to Tokyo" -> intent: flight. Slot Filling: "flights from Boston to Tokyo" -> from...
[]
GEM-SciDuet-train-18#paper-1009#slide-1
1009
Marrying Up Regular Expressions with Neural Networks: A Case Study for Spoken Language Understanding
The success of many natural language processing (NLP) tasks is bound by the number and quality of annotated data, but there is often a shortage of such training data. In this paper, we ask the question: "Can we combine a neural network (NN) with regular expressions (RE) to improve supervised learning for NLP?". In answ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5.1", "5.2", "5.3", "5.4", "6", "7" ], "paper_header_content": [ "Introduction", "Typesetting", "Problem Definiti...
GEM-SciDuet-train-18#paper-1009#slide-1
Regular Expression Rules
When data is limited, rule-based systems are used. Regular expressions are the most commonly used rules in NLP, and companies maintain many of them. Intent Detection: "flights from Boston to Tokyo" -> intent: flight. Slot Filling: "flights from Boston to Tokyo" -> fromloc.city: Boston, toloc.city: Tokyo. However, regular expressions are hard to...
[]
GEM-SciDuet-train-18#paper-1009#slide-2
1009
Marrying Up Regular Expressions with Neural Networks: A Case Study for Spoken Language Understanding
The success of many natural language processing (NLP) tasks is bound by the number and quality of annotated data, but there is often a shortage of such training data. In this paper, we ask the question: "Can we combine a neural network (NN) with regular expressions (RE) to improve supervised learning for NLP?". In answ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5.1", "5.2", "5.3", "5.4", "6", "7" ], "paper_header_content": [ "Introduction", "Typesetting", "Problem Definiti...
GEM-SciDuet-train-18#paper-1009#slide-2
Which Part of Regular Expression to Use
Regular expression (RE) output is useful: /^flights? from/ matches "flights from Boston to Tokyo" -> intent: flight; slot filling gives "flights from Boston to Tokyo" -> fromloc.city: Boston, toloc.city: Tokyo. The RE contains clue words, and the NN should attend to these clue words for prediction (sketched after this row).
[]
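The clue-word intuition can be shown with Python's re module, using the pattern from the slide:

```python
# Sketch: an intent-detection RE and the clue words it marks.
import re

pattern = re.compile(r"^flights? from")
utterance = "flights from Boston to Tokyo"

match = pattern.search(utterance)
if match:
    print("intent: flight")
    # The matched span is exactly the clue words the NN should attend to.
    print("clue words:", utterance[match.start():match.end()])  # "flights from"
```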
GEM-SciDuet-train-18#paper-1009#slide-3
1009
Marrying Up Regular Expressions with Neural Networks: A Case Study for Spoken Language Understanding
The success of many natural language processing (NLP) tasks is bound by the number and quality of annotated data, but there is often a shortage of such training data. In this paper, we ask the question: "Can we combine a neural network (NN) with regular expressions (RE) to improve supervised learning for NLP?". In answ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5.1", "5.2", "5.3", "5.4", "6", "7" ], "paper_header_content": [ "Introduction", "Typesetting", "Problem Definiti...
GEM-SciDuet-train-18#paper-1009#slide-3
Method 1 RE Output As Features
Embed the REtag and append it to the input (feature-level sketch after this row). [Diagram: a BLSTM with attention over inputs x1..x5 ("flights from Boston to Miami") and hidden states h1..h5, feeding a softmax classifier together with RE features; the RE instance /^flights? from/ yields REtag "flight" for intent detection, and per-token REtags O O B-loc.city O B-loc.city for slot filling.]
[]
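A minimal PyTorch sketch of Method 1, appending an embedded REtag to each token embedding; the tag inventory, vocabulary size, and dimensions are illustrative assumptions, not the paper's values:

```python
# Sketch of Method 1: embed the REtag and append it to each token embedding.
import torch
import torch.nn as nn

TAGS = {"O": 0, "flight": 1, "B-fromloc.city": 2, "B-toloc.city": 3}  # illustrative

word_emb = nn.Embedding(1000, 64)        # toy vocabulary
tag_emb = nn.Embedding(len(TAGS), 16)

word_ids = torch.tensor([[1, 2, 3, 4, 5]])   # "flights from Boston to Miami"
tag_ids = torch.tensor([[TAGS["O"], TAGS["O"], TAGS["B-fromloc.city"],
                         TAGS["O"], TAGS["B-toloc.city"]]])

# Concatenate word and REtag embeddings along the feature dimension.
x = torch.cat([word_emb(word_ids), tag_emb(tag_ids)], dim=-1)  # (1, 5, 80)
# x would then be fed to the BLSTM in place of the plain word embeddings.
print(x.shape)
```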
GEM-SciDuet-train-18#paper-1009#slide-4
1009
Marrying Up Regular Expressions with Neural Networks: A Case Study for Spoken Language Understanding
The success of many natural language processing (NLP) tasks is bound by the number and quality of annotated data, but there is often a shortage of such training data. In this paper, we ask the question: "Can we combine a neural network (NN) with regular expressions (RE) to improve supervised learning for NLP?". In answ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5.1", "5.2", "5.3", "5.4", "6", "7" ], "paper_header_content": [ "Introduction", "Typesetting", "Problem Definiti...
GEM-SciDuet-train-18#paper-1009#slide-4
Method 2 RE Output Fusion in Output
logit_k is the NN output score for class k (before softmax), and z_k indicates whether a regular expression predicts class k. The fused score is logit'_k = logit_k + w_k * z_k, which is fed to the softmax classifier (sketched after this row). [Diagram: intent detection (BLSTM states h1..h5 over "flights from Boston to Miami", RE instance /^flights? from/ -> intent: flight) and slot filling (states h1..h5 over "flights from Boston to", RE /from...]
[]
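A minimal sketch of the logit fusion logit'_k = logit_k + w_k * z_k from the slide; the sizes and values are toy assumptions:

```python
# Sketch of Method 2: add weighted RE indicators to the NN logits.
import torch

num_classes = 3
logits = torch.tensor([1.2, 0.3, -0.5])           # NN scores before softmax
z = torch.tensor([0.0, 1.0, 0.0])                 # an RE fired for class 1 only
w = torch.zeros(num_classes, requires_grad=True)  # learned per-class weights

fused = logits + w * z                            # logit'_k = logit_k + w_k * z_k
probs = torch.softmax(fused, dim=-1)
print(probs)
```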
GEM-SciDuet-train-18#paper-1009#slide-5
1009
Marrying Up Regular Expressions with Neural Networks: A Case Study for Spoken Language Understanding
The success of many natural language processing (NLP) tasks is bound by the number and quality of annotated data, but there is often a shortage of such training data. In this paper, we ask the question: "Can we combine a neural network (NN) with regular expressions (RE) to improve supervised learning for NLP?". In answ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5.1", "5.2", "5.3", "5.4", "6", "7" ], "paper_header_content": [ "Introduction", "Typesetting", "Problem Definiti...
GEM-SciDuet-train-18#paper-1009#slide-5
Method 3 Clue Words Guide Attention
Attention should match the clue words (loss sketched after this row). [Diagram: BLSTM over x1..x5, "flights from Boston to Miami", with gold attention on the clue words.] Positive vs. negative regular expressions: REs can indicate that the input belongs to class k, or that it does not belong to class k, enabling correction of wrong predictions. Example: "How long does it take to fly from LA to NYC?" intent: abbre...
[]
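A sketch of guiding attention with clue words: the gold attention is uniform over RE-marked tokens, and a cross-entropy term pulls the model's attention toward it; the exact loss form here is an assumption:

```python
# Sketch of Method 3: pull the attention weights toward RE clue words.
import torch

attention = torch.softmax(torch.randn(5), dim=-1)    # model's attention over 5 tokens
clue_mask = torch.tensor([1.0, 1.0, 0.0, 0.0, 0.0])  # clue words: "flights from"
gold_att = clue_mask / clue_mask.sum()               # uniform over clue words

# Cross-entropy between gold and predicted attention, added to the task loss.
att_loss = -(gold_att * torch.log(attention + 1e-8)).sum()
print(att_loss)
```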
GEM-SciDuet-train-18#paper-1009#slide-6
1009
Marrying Up Regular Expressions with Neural Networks: A Case Study for Spoken Language Understanding
The success of many natural language processing (NLP) tasks is bound by the number and quality of annotated data, but there is often a shortage of such training data. In this paper, we ask the question: "Can we combine a neural network (NN) with regular expressions (RE) to improve supervised learning for NLP?". In answ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5.1", "5.2", "5.3", "5.4", "6", "7" ], "paper_header_content": [ "Introduction", "Typesetting", "Problem Definiti...
GEM-SciDuet-train-18#paper-1009#slide-6
Experiment Setup
The REs were written by a paid annotator. We want to answer the following questions: Can regular expressions (REs) improve the neural network (NN) when data is limited (only use a small fraction of the training data)? Can REs still improve the NN when using the full dataset? How does RE complexity influence the results?
[]
GEM-SciDuet-train-18#paper-1009#slide-7
1009
Marrying Up Regular Expressions with Neural Networks: A Case Study for Spoken Language Understanding
The success of many natural language processing (NLP) tasks is bound by the number and quality of annotated data, but there is often a shortage of such training data. In this paper, we ask the question: "Can we combine a neural network (NN) with regular expressions (RE) to improve supervised learning for NLP?". In answ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5.1", "5.2", "5.3", "5.4", "6", "7" ], "paper_header_content": [ "Introduction", "Typesetting", "Problem Definiti...
GEM-SciDuet-train-18#paper-1009#slide-7
Few Shot Learning Experiment
Using clue words to guide attention performs best for intent detection Using RE output as feature performs best for slot filling
[]
GEM-SciDuet-train-18#paper-1009#slide-8
1009
Marrying Up Regular Expressions with Neural Networks: A Case Study for Spoken Language Understanding
The success of many natural language processing (NLP) tasks is bound by the number and quality of annotated data, but there is often a shortage of such training data. In this paper, we ask the question: "Can we combine a neural network (NN) with regular expressions (RE) to improve supervised learning for NLP?". In answ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5.1", "5.2", "5.3", "5.4", "6", "7" ], "paper_header_content": [ "Introduction", "Typesetting", "Problem Definiti...
GEM-SciDuet-train-18#paper-1009#slide-8
Full Dataset Experiment
Use all the training data
[]
GEM-SciDuet-train-18#paper-1009#slide-9
1009
Marrying Up Regular Expressions with Neural Networks: A Case Study for Spoken Language Understanding
The success of many natural language processing (NLP) tasks is bound by the number and quality of annotated data, but there is often a shortage of such training data. In this paper, we ask the question: "Can we combine a neural network (NN) with regular expressions (RE) to improve supervised learning for NLP?". In answ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5.1", "5.2", "5.3", "5.4", "6", "7" ], "paper_header_content": [ "Introduction", "Typesetting", "Problem Definiti...
GEM-SciDuet-train-18#paper-1009#slide-9
Complex RE vs Simple RE
Complex RE: many semantically independent groups, e.g. /(_AIRCRAFT_CODE) that fly/. Complex REs yield better results, but simple REs also clearly improve over the baseline.
[]
GEM-SciDuet-train-18#paper-1009#slide-10
1009
Marrying Up Regular Expressions with Neural Networks: A Case Study for Spoken Language Understanding
The success of many natural language processing (NLP) tasks is bound by the number and quality of annotated data, but there is often a shortage of such training data. In this paper, we ask the question: "Can we combine a neural network (NN) with regular expressions (RE) to improve supervised learning for NLP?". In answ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5.1", "5.2", "5.3", "5.4", "6", "7" ], "paper_header_content": [ "Introduction", "Typesetting", "Problem Definiti...
GEM-SciDuet-train-18#paper-1009#slide-10
Conclusion
Using REs can help the training of an NN when data is limited. Guiding attention is best for intent detection (sentence classification); RE output as a feature is best for slot filling (sequence labeling). We can start with simple REs and increase complexity gradually.
[]
GEM-SciDuet-train-19#paper-1013#slide-0
1013
Integrating Query Performance Prediction in Term Scoring for Diachronic Thesaurus
A diachronic thesaurus is a lexical resource that aims to map between modern terms and their semantically related terms in earlier periods. In this paper, we investigate the task of collecting a list of relevant modern target terms for a domain-specific diachronic thesaurus. We propose a supervised learning scheme, whi...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4.1", "4.2", "5" ], "paper_header_content": [ "Introduction", "Term Scoring Measures", "Terminology Extraction", "Query Performance Prediction", "Integrated Term Scoring", "Evaluation Setting", "Re...
GEM-SciDuet-train-19#paper-1013#slide-0
Research Context
Domain-specific diachronic corpus. Example: searching "vegetarian" in a biblical scholarship archive, e.g. "Were All Men Vegetarians?" (by Eric ...): "God instructed Adam saying, I have given you every herb that yields ..."; "Of every tree of the garden thou mayest freely eat: and thou shalt eat the herb of the field" (King James Bible, Genesis).
[]
GEM-SciDuet-train-19#paper-1013#slide-1
1013
Integrating Query Performance Prediction in Term Scoring for Diachronic Thesaurus
A diachronic thesaurus is a lexical resource that aims to map between modern terms and their semantically related terms in earlier periods. In this paper, we investigate the task of collecting a list of relevant modern target terms for a domain-specific diachronic thesaurus. We propose a supervised learning scheme, whi...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4.1", "4.2", "5" ], "paper_header_content": [ "Introduction", "Term Scoring Measures", "Terminology Extraction", "Query Performance Prediction", "Integrated Term Scoring", "Evaluation Setting", "Re...
GEM-SciDuet-train-19#paper-1013#slide-1
Diachronic Thesaurus
A useful tool for supporting searches in a diachronic corpus. Target term (modern): vegetarian; related terms (ancient): tree of the garden, herb of the field (entry sketched after this row). Users are mostly aware of modern language. Two sub-tasks: collecting relevant related terms for given thesaurus entries, and collecting a relevant list of modern target terms.
[]
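For illustration, one thesaurus entry as a plain mapping; the field names are hypothetical:

```python
# Sketch: one diachronic thesaurus entry (field names are illustrative).
entry = {
    "target_term": "vegetarian",      # modern term, used by searchers
    "related_terms": [                # ancient expressions for the concept
        "tree of the garden",
        "herb of the field",
    ],
}
print(entry["target_term"], "->", entry["related_terms"])
```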
GEM-SciDuet-train-19#paper-1013#slide-2
1013
Integrating Query Performance Prediction in Term Scoring for Diachronic Thesaurus
A diachronic thesaurus is a lexical resource that aims to map between modern terms and their semantically related terms in earlier periods. In this paper, we investigate the task of collecting a list of relevant modern target terms for a domain-specific diachronic thesaurus. We propose a supervised learning scheme, whi...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4.1", "4.2", "5" ], "paper_header_content": [ "Introduction", "Term Scoring Measures", "Terminology Extraction", "Query Performance Prediction", "Integrated Term Scoring", "Evaluation Setting", "Re...
GEM-SciDuet-train-19#paper-1013#slide-2
Diachronic Thesaurus Our Task
Utilize a given candidate list of modern terms as input Predict which candidates are relevant for the domain corpus
[]
GEM-SciDuet-train-19#paper-1013#slide-3
1013
Integrating Query Performance Prediction in Term Scoring for Diachronic Thesaurus
A diachronic thesaurus is a lexical resource that aims to map between modern terms and their semantically related terms in earlier periods. In this paper, we investigate the task of collecting a list of relevant modern target terms for a domain-specific diachronic thesaurus. We propose a supervised learning scheme, whi...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4.1", "4.2", "5" ], "paper_header_content": [ "Introduction", "Term Scoring Measures", "Terminology Extraction", "Query Performance Prediction", "Integrated Term Scoring", "Evaluation Setting", "Re...
GEM-SciDuet-train-19#paper-1013#slide-3
Background Terminology Extraction TE
1. Automatically extract prominent terms from a given corpus. 2. Score candidate terms for domain relevancy. Statistical measures for identifying prominent terms: frequencies in the target corpus (e.g. tf, tf-idf) and comparison with frequencies in a reference background corpus (sketched after this row).
[]
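Toy sketches of the two kinds of statistical measures mentioned above; the exact formulas used in the paper may differ:

```python
# Sketch: frequency-based term scoring (toy counts, no smoothing).
import math

def tfidf(target_tf, doc_freq, num_docs):
    # Frequency in the target corpus, discounted by document frequency.
    return target_tf * math.log(num_docs / (1 + doc_freq))

def domain_relevance(target_tf, background_tf):
    # Compare frequency in the domain corpus with a reference background corpus.
    return target_tf / (1 + background_tf)

print(tfidf(target_tf=12, doc_freq=3, num_docs=1000))
print(domain_relevance(target_tf=12, background_tf=2))
```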
GEM-SciDuet-train-19#paper-1013#slide-4
1013
Integrating Query Performance Prediction in Term Scoring for Diachronic Thesaurus
A diachronic thesaurus is a lexical resource that aims to map between modern terms and their semantically related terms in earlier periods. In this paper, we investigate the task of collecting a list of relevant modern target terms for a domain-specific diachronic thesaurus. We propose a supervised learning scheme, whi...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4.1", "4.2", "5" ], "paper_header_content": [ "Introduction", "Term Scoring Measures", "Terminology Extraction", "Query Performance Prediction", "Integrated Term Scoring", "Evaluation Setting", "Re...
GEM-SciDuet-train-19#paper-1013#slide-4
Supervised framework for TE
Candidate target terms are the learning instances. A set of features is calculated for each candidate, and classification predicts which candidates are suitable. Features: state-of-the-art TE scoring measures.
[]
GEM-SciDuet-train-19#paper-1013#slide-5
1013
Integrating Query Performance Prediction in Term Scoring for Diachronic Thesaurus
A diachronic thesaurus is a lexical resource that aims to map between modern terms and their semantically related terms in earlier periods. In this paper, we investigate the task of collecting a list of relevant modern target terms for a domain-specific diachronic thesaurus. We propose a supervised learning scheme, whi...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4.1", "4.2", "5" ], "paper_header_content": [ "Introduction", "Term Scoring Measures", "Terminology Extraction", "Query Performance Prediction", "Integrated Term Scoring", "Evaluation Setting", "Re...
GEM-SciDuet-train-19#paper-1013#slide-5
Contributions
1. Integrating Query Performance Prediction in term scoring. 2. Penetrating to ancient texts, via query expansion.
[]
GEM-SciDuet-train-19#paper-1013#slide-6
1013
Integrating Query Performance Prediction in Term Scoring for Diachronic Thesaurus
A diachronic thesaurus is a lexical resource that aims to map between modern terms and their semantically related terms in earlier periods. In this paper, we investigate the task of collecting a list of relevant modern target terms for a domain-specific diachronic thesaurus. We propose a supervised learning scheme, whi...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4.1", "4.2", "5" ], "paper_header_content": [ "Introduction", "Term Scoring Measures", "Terminology Extraction", "Query Performance Prediction", "Integrated Term Scoring", "Evaluation Setting", "Re...
GEM-SciDuet-train-19#paper-1013#slide-6
Contribution 1
Integrating Query Performance Prediction; penetrating to ancient texts.
[]
GEM-SciDuet-train-19#paper-1013#slide-7
1013
Integrating Query Performance Prediction in Term Scoring for Diachronic Thesaurus
A diachronic thesaurus is a lexical resource that aims to map between modern terms and their semantically related terms in earlier periods. In this paper, we investigate the task of collecting a list of relevant modern target terms for a domain-specific diachronic thesaurus. We propose a supervised learning scheme, whi...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4.1", "4.2", "5" ], "paper_header_content": [ "Introduction", "Term Scoring Measures", "Terminology Extraction", "Query Performance Prediction", "Integrated Term Scoring", "Evaluation Setting", "Re...
GEM-SciDuet-train-19#paper-1013#slide-7
Query Performance Prediction QPP
Estimate the retrieval quality of search queries, i.e. assess the quality of query results on the text collection. For our terminology scoring task, QPP scoring measures are potentially useful (a simple predictor is sketched after this row): they may capture additional aspects of term relevancy for the collection, since a term that is relevant for a domain is a good query. Two types of statistica...
[]
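As one concrete example of a pre-retrieval QPP signal, a query can be scored by the average IDF of its terms; a toy sketch, and the paper's specific predictors may differ:

```python
# Sketch: a simple pre-retrieval QPP signal (average IDF of query terms).
# Toy document-frequency table; real predictors are computed over the corpus.
import math

N = 10_000                           # documents in the collection
df = {"vegetarian": 40, "the": 9_500, "garden": 700}

def avg_idf(query):
    terms = query.split()
    return sum(math.log(N / df.get(t, 1)) for t in terms) / len(terms)

print(avg_idf("vegetarian"))         # rare term -> high score -> likely good query
print(avg_idf("the garden"))         # common terms -> lower score
```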
GEM-SciDuet-train-19#paper-1013#slide-8
1013
Integrating Query Performance Prediction in Term Scoring for Diachronic Thesaurus
A diachronic thesaurus is a lexical resource that aims to map between modern terms and their semantically related terms in earlier periods. In this paper, we investigate the task of collecting a list of relevant modern target terms for a domain-specific diachronic thesaurus. We propose a supervised learning scheme, whi...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4.1", "4.2", "5" ], "paper_header_content": [ "Introduction", "Term Scoring Measures", "Terminology Extraction", "Query Performance Prediction", "Integrated Term Scoring", "Evaluation Setting", "Re...
GEM-SciDuet-train-19#paper-1013#slide-8
Penetrating to ancient periods
In a diachronic corpus, a candidate term might be rare in its original modern form, yet frequently referred to by archaic forms. Query term: vegetarian. Archaic forms: "Of every tree of the garden thou mayest freely eat", "every herb that yields"; cf. "Were All Men Vegetarians?": "God instructed Adam saying, I have given you every herb that yields" (G...
[]
GEM-SciDuet-train-19#paper-1013#slide-9
1013
Integrating Query Performance Prediction in Term Scoring for Diachronic Thesaurus
A diachronic thesaurus is a lexical resource that aims to map between modern terms and their semantically related terms in earlier periods. In this paper, we investigate the task of collecting a list of relevant modern target terms for a domain-specific diachronic thesaurus. We propose a supervised learning scheme, whi...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4.1", "4.2", "5" ], "paper_header_content": [ "Introduction", "Term Scoring Measures", "Terminology Extraction", "Query Performance Prediction", "Integrated Term Scoring", "Evaluation Setting", "Re...
GEM-SciDuet-train-19#paper-1013#slide-9
Evaluation Setting
Diachronic corpus: the Responsa Project, questions posed to rabbis along with their detailed rabbinic answers, written over a period of about a thousand years and used for previous IR and NLP research. Training data balanced for positive and negative examples; classifier: Support Vector Machine with a polynomial kernel (sketched after this row).
[]
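A minimal scikit-learn sketch of the classifier setup described above, on toy feature vectors; the feature values and labels are illustrative only:

```python
# Sketch: SVM with a polynomial kernel over candidate-term feature vectors.
from sklearn.svm import SVC

# Each row is one candidate term's features (e.g. TE and QPP scores).
X = [[0.9, 0.7], [0.1, 0.2], [0.8, 0.6], [0.2, 0.1]]
y = [1, 0, 1, 0]                     # 1 = relevant target term

clf = SVC(kernel="poly", degree=3)
clf.fit(X, y)
print(clf.predict([[0.85, 0.65]]))
```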
GEM-SciDuet-train-19#paper-1013#slide-10
1013
Integrating Query Performance Prediction in Term Scoring for Diachronic Thesaurus
A diachronic thesaurus is a lexical resource that aims to map between modern terms and their semantically related terms in earlier periods. In this paper, we investigate the task of collecting a list of relevant modern target terms for a domain-specific diachronic thesaurus. We propose a supervised learning scheme, whi...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4.1", "4.2", "5" ], "paper_header_content": [ "Introduction", "Term Scoring Measures", "Terminology Extraction", "Query Performance Prediction", "Integrated Term Scoring", "Evaluation Setting", "Re...
GEM-SciDuet-train-19#paper-1013#slide-10
Results
Additional QPP features increase the classification accuracy. Utilizing ancient documents, via query expansion, improves results further. The improvement over the baseline is statistically significant.
[]
GEM-SciDuet-train-19#paper-1013#slide-11
1013
Integrating Query Performance Prediction in Term Scoring for Diachronic Thesaurus
A diachronic thesaurus is a lexical resource that aims to map between modern terms and their semantically related terms in earlier periods. In this paper, we investigate the task of collecting a list of relevant modern target terms for a domain-specific diachronic thesaurus. We propose a supervised learning scheme, whi...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4.1", "4.2", "5" ], "paper_header_content": [ "Introduction", "Term Scoring Measures", "Terminology Extraction", "Query Performance Prediction", "Integrated Term Scoring", "Evaluation Setting", "Re...
GEM-SciDuet-train-19#paper-1013#slide-11
Summary
Task: target term selection for a diachronic thesaurus. 1. Integrating Query Performance Prediction in term scoring. 2. Penetrating to ancient texts via query expansion. Future work: utilize additional query expansion algorithms, and investigate the selective query expansion approach.
[]
GEM-SciDuet-train-20#paper-1018#slide-0
1018
Working Memory Networks: Augmenting Memory Networks with a Relational Reasoning Module
During the last years, there has been a lot of interest in achieving some kind of complex reasoning using deep neural networks. To do that, models like Memory Networks (MemNNs) have combined external memory storages and attention mechanisms. These architectures, however, lack of more complex reasoning mechanisms that c...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "4.3", "4.4", "5" ], "paper_header_content": [ "Introduction", "Model", "W-MemN2N for Textual Question Answering", "Memory Augmented Neural Networks", "Memory N...
GEM-SciDuet-train-20#paper-1018#slide-0
Reasoning for Question Answering
Reasoning is crucial for building systems that can dialogue with humans in natural language. Reasoning: the process of forming conclusions, judgments, or inferences from facts or premises. Inferential reasoning: Premise 1, Premise 2 -> Conclusion; e.g., John is in the kitchen, John has the ball -> The ball is in the kitchen. R...
[]
GEM-SciDuet-train-20#paper-1018#slide-1
1018
Working Memory Networks: Augmenting Memory Networks with a Relational Reasoning Module
During the last years, there has been a lot of interest in achieving some kind of complex reasoning using deep neural networks. To do that, models like Memory Networks (MemNNs) have combined external memory storages and attention mechanisms. These architectures, however, lack of more complex reasoning mechanisms that c...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "4.3", "4.4", "5" ], "paper_header_content": [ "Introduction", "Model", "W-MemN2N for Textual Question Answering", "Memory Augmented Neural Networks", "Memory N...
GEM-SciDuet-train-20#paper-1018#slide-1
bAbI Dataset
One of the earliest datasets to measure the reasoning abilities of ML systems. Example (Category 2: Two Supporting Facts): Mary went to the kitchen. Sandra journeyed to the office. Mary got the football there. Mary travelled to the garden. Where is the football? garden [Is(Football, Garden)]. Easy to evaluate different reasoning capa...
[]
GEM-SciDuet-train-20#paper-1018#slide-2
1018
Working Memory Networks: Augmenting Memory Networks with a Relational Reasoning Module
During the last years, there has been a lot of interest in achieving some kind of complex reasoning using deep neural networks. To do that, models like Memory Networks (MemNNs) have combined external memory storages and attention mechanisms. These architectures, however, lack of more complex reasoning mechanisms that c...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "4.3", "4.4", "5" ], "paper_header_content": [ "Introduction", "Model", "W-MemN2N for Textual Question Answering", "Memory Augmented Neural Networks", "Memory N...
GEM-SciDuet-train-20#paper-1018#slide-2
Memory Augmented Neural Networks
Process a set of inputs and store them in memory. Then, at each hop, an important part of the memory is retrieved and used to retrieve more memories. Finally, the last retrieved memory is used to compute the answer y (hop mechanics sketched after this row). [Example story: 01: Daniel went to the bathroom. 02: Sandra journeyed to the office. 03: Mary got the football there...]
[]
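A minimal sketch of the hop mechanism described above, with random toy embeddings; real end-to-end memory networks learn separate input/output memory embeddings, while this simplified version shares one:

```python
# Sketch of multi-hop memory attention: attend over stored sentences with the
# query, then use the retrieved memory to update the query for the next hop.
import torch

d, n = 32, 6                      # embedding size, number of stored sentences
memories = torch.randn(n, d)      # embedded input sentences
query = torch.randn(d)            # embedded question

for _ in range(3):                # 3 hops
    scores = memories @ query             # match query against each memory
    attn = torch.softmax(scores, dim=-1)
    retrieved = attn @ memories           # weighted sum of memories
    query = query + retrieved             # updated query for the next hop
# `query` now feeds the answer layer.
print(query.shape)
```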
GEM-SciDuet-train-20#paper-1018#slide-3
1018
Working Memory Networks: Augmenting Memory Networks with a Relational Reasoning Module
During the last years, there has been a lot of interest in achieving some kind of complex reasoning using deep neural networks. To do that, models like Memory Networks (MemNNs) have combined external memory storages and attention mechanisms. These architectures, however, lack of more complex reasoning mechanisms that c...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "4.3", "4.4", "5" ], "paper_header_content": [ "Introduction", "Model", "W-MemN2N for Textual Question Answering", "Memory Augmented Neural Networks", "Memory N...
GEM-SciDuet-train-20#paper-1018#slide-3
Relational Neural Networks
Relation Networks (Santoro et al., 2017): a neural network with an inductive bias for learning pairwise relations between the input objects and their properties; a type of graph neural network. Trained with cross-entropy, $L(y, \hat{y}) = -\sum_i y_i \ln(\hat{y}_i)$ (a pairwise sketch follows this record). Example story: 01: Daniel went to the bathroom. 02: Sandra journeyed to the office. 03: Mary got the football there. 04: Mary travelled...
Relation Networks (Santoro et al., 2017): a neural network with an inductive bias for learning pairwise relations between the input objects and their properties; a type of graph neural network. Trained with cross-entropy, $L(y, \hat{y}) = -\sum_i y_i \ln(\hat{y}_i)$ (a pairwise sketch follows this record). Example story: 01: Daniel went to the bathroom. 02: Sandra journeyed to the office. 03: Mary got the football there. 04: Mary travelled...
[]
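A minimal Relation Network sketch, following the published form RN(O) = f(sum over pairs (o_i, o_j) of g([o_i; o_j; q])); the layer sizes and dimensions here are illustrative assumptions.

```python
import itertools
import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    """Minimal Relation Network: score all object pairs with g,
    sum the pair features, and classify the sum with f."""

    def __init__(self, obj_dim, q_dim, hidden=256, n_classes=10):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * obj_dim + q_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU())
        self.f = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, n_classes))

    def forward(self, objects, question):
        # objects: (n, obj_dim); question: (q_dim,)
        pair_feats = [torch.cat([objects[i], objects[j], question])
                      for i, j in itertools.product(range(len(objects)), repeat=2)]
        g_out = self.g(torch.stack(pair_feats))   # (n*n, hidden), one row per pair
        return self.f(g_out.sum(dim=0))           # aggregate, then classify
```

Note the O(n^2) pair enumeration, which is the cost the later time-comparison slide is about.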
GEM-SciDuet-train-20#paper-1018#slide-4
1018
Working Memory Networks: Augmenting Memory Networks with a Relational Reasoning Module
During the last years, there has been a lot of interest in achieving some kind of complex reasoning using deep neural networks. To do that, models like Memory Networks (MemNNs) have combined external memory storages and attention mechanisms. These architectures, however, lack of more complex reasoning mechanisms that c...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "4.3", "4.4", "5" ], "paper_header_content": [ "Introduction", "Model", "W-MemN2N for Textual Question Answering", "Memory Augmented Neural Networks", "Memory N...
GEM-SciDuet-train-20#paper-1018#slide-4
Working Memory Networks
A Memory Network model with a new working memory buffer and relational reasoning module. Produces state-of-the-art results in reasoning tasks. Inspired by the multi-component model of working memory. Components (see the sketch after this record): short-term memory module, attention module, reasoning module. Example story: 01: Daniel went to the bathroom. 02: Sandra journeyed to the ...
A Memory Network model with a new working memory buffer and relational reasoning module. Produces state-of-the-art results in reasoning tasks. Inspired by the multi-component model of working memory. Components (see the sketch after this record): short-term memory module, attention module, reasoning module. Example story: 01: Daniel went to the bathroom. 02: Sandra journeyed to the ...
[]
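A rough sketch of how the attention module could fill a small working memory buffer; the paper's exact gating and module interfaces may differ, so treat every detail here as an assumption.

```python
import torch

def fill_working_memory(query, memory_in, memory_out, hops=4):
    """Run several attention hops and keep each retrieved memory in a
    small buffer (illustrative only; not the paper's exact recurrence)."""
    buffer, u = [], query
    for _ in range(hops):
        p = torch.softmax(memory_in @ u, dim=0)  # attend over all n memories
        o = p @ memory_out                       # retrieved memory, (d,)
        buffer.append(o)
        u = u + o                                # update controller state
    return torch.stack(buffer)                   # (hops, d) working memory
```

A relational module like the RelationNetwork sketched above can then reason over the h buffered vectors instead of all n memories, which is where a speedup over a full Relation Network comes from: pairs are enumerated over h << n items.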
GEM-SciDuet-train-20#paper-1018#slide-5
1018
Working Memory Networks: Augmenting Memory Networks with a Relational Reasoning Module
During the last years, there has been a lot of interest in achieving some kind of complex reasoning using deep neural networks. To do that, models like Memory Networks (MemNNs) have combined external memory storages and attention mechanisms. These architectures, however, lack of more complex reasoning mechanisms that c...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "4.3", "4.4", "5" ], "paper_header_content": [ "Introduction", "Model", "W-MemN2N for Textual Question Answering", "Memory Augmented Neural Networks", "Memory N...
GEM-SciDuet-train-20#paper-1018#slide-5
Results Jointly trained bAbI 10k
Note that EntNet (Henaff et al.) solves all tasks in the per-task version (a single model for each task). Compared models: LSTM (Sukhbaatar et al.), MemNN (Sukhbaatar et al.), MemNN-S (Sukhbaatar et al.), RN (Santoro et al.), SDNC (Rae et al.), WMemNN (Pavez et al.), WMemNN* (Pavez et al.).
Note that EntNet (Henaff et al.) solves all tasks in the per-task version (a single model for each task). Compared models: LSTM (Sukhbaatar et al.), MemNN (Sukhbaatar et al.), MemNN-S (Sukhbaatar et al.), RN (Santoro et al.), SDNC (Rae et al.), WMemNN (Pavez et al.), WMemNN* (Pavez et al.).
[]
GEM-SciDuet-train-20#paper-1018#slide-6
1018
Working Memory Networks: Augmenting Memory Networks with a Relational Reasoning Module
During the last years, there has been a lot of interest in achieving some kind of complex reasoning using deep neural networks. To do that, models like Memory Networks (MemNNs) have combined external memory storages and attention mechanisms. These architectures, however, lack of more complex reasoning mechanisms that c...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "4.3", "4.4", "5" ], "paper_header_content": [ "Introduction", "Model", "W-MemN2N for Textual Question Answering", "Memory Augmented Neural Networks", "Memory N...
GEM-SciDuet-train-20#paper-1018#slide-6
Ablations
[Ablation chart] Labels recovered from the figure: complex attention patterns; multiple relations. Tasks shown: 2 supporting facts, 3 supporting facts, counting, basic induction, size reasoning, positional reasoning, path finding.
[Ablation chart] Labels recovered from the figure: complex attention patterns; multiple relations. Tasks shown: 2 supporting facts, 3 supporting facts, counting, basic induction, size reasoning, positional reasoning, path finding.
[]
GEM-SciDuet-train-20#paper-1018#slide-7
1018
Working Memory Networks: Augmenting Memory Networks with a Relational Reasoning Module
During the last years, there has been a lot of interest in achieving some kind of complex reasoning using deep neural networks. To do that, models like Memory Networks (MemNNs) have combined external memory storages and attention mechanisms. These architectures, however, lack of more complex reasoning mechanisms that c...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "4.3", "4.4", "5" ], "paper_header_content": [ "Introduction", "Model", "W-MemN2N for Textual Question Answering", "Memory Augmented Neural Networks", "Memory N...
GEM-SciDuet-train-20#paper-1018#slide-7
Time comparison
For 30 memories there is a speedup of almost
For 30 memories there is a speedup of almost
[]
GEM-SciDuet-train-20#paper-1018#slide-8
1018
Working Memory Networks: Augmenting Memory Networks with a Relational Reasoning Module
During the last years, there has been a lot of interest in achieving some kind of complex reasoning using deep neural networks. To do that, models like Memory Networks (MemNNs) have combined external memory storages and attention mechanisms. These architectures, however, lack of more complex reasoning mechanisms that c...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "3.1", "3.2", "3.3", "3.4", "4.1", "4.2", "4.3", "4.4", "5" ], "paper_header_content": [ "Introduction", "Model", "W-MemN2N for Textual Question Answering", "Memory Augmented Neural Networks", "Memory N...
GEM-SciDuet-train-20#paper-1018#slide-8
Conclusions
We presented the Working Memory Neural Network, a Memory Network model augmented with a new working memory buffer and relational reasoning module. It retains the relational reasoning capabilities of the relation network while reducing its computation time considerably. We hope that this contribution may help applying t...
We presented the Working Memory Neural Network, a Memory Network model augmented with a new working memory buffer and relational reasoning module. It retains the relational reasoning capabilities of the relation network while reducing its computation time considerably. We hope that this contribution may help applying t...
[]
GEM-SciDuet-train-21#paper-1019#slide-0
1019
Iterative Search for Weakly Supervised Semantic Parsing
Training semantic parsers from questionanswer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical forms that coincidentally evaluate to the correct answer. We propose a novel iterative training algorithm that alternates b...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.2.1", "2.2.2", "3", "4", "5", "5.1", "5.2", "5.3", "5.4", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7", "8" ], "paper_header_content": [ "Introduction", "Weakly supervised semant...
GEM-SciDuet-train-21#paper-1019#slide-0
This talk in one slide
Training semantic parsers with denotation-only supervision is challenging because of spuriousness: incorrect logical forms can yield correct denotations. Iterative training: online search with initialization; MML over offline search output; coverage during online search. State-of-the-art single-model performances: WikiTab...
Training semantic parsers with denotation-only supervision is challenging because of spuriousness: incorrect logical forms can yield correct denotations. Iterative training: online search with initialization; MML over offline search output; coverage during online search. State-of-the-art single-model performances: WikiTab...
[]
GEM-SciDuet-train-21#paper-1019#slide-1
1019
Iterative Search for Weakly Supervised Semantic Parsing
Training semantic parsers from questionanswer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical forms that coincidentally evaluate to the correct answer. We propose a novel iterative training algorithm that alternates b...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.2.1", "2.2.2", "3", "4", "5", "5.1", "5.2", "5.3", "5.4", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7", "8" ], "paper_header_content": [ "Introduction", "Weakly supervised semant...
GEM-SciDuet-train-21#paper-1019#slide-1
Semantic Parsing for Question Answering
Question: Which athlete was from South Korea? Annotated program: get rows where Nation is South Korea; filter rows where value in Olympics ...; get value from Athlete column. Table rows: Kim Yu-na, South Korea (KOR); Patrick Chan, Canada (CAN). Logical form fragments: (... south_korea), (... athlete). (WikiTableQuestions, Pasupat and Liang, 2015)
Question: Which athlete was from South Korea? Annotated program: get rows where Nation is South Korea; filter rows where value in Olympics ...; get value from Athlete column. Table rows: Kim Yu-na, South Korea (KOR); Patrick Chan, Canada (CAN). Logical form fragments: (... south_korea), (... athlete). (WikiTableQuestions, Pasupat and Liang, 2015)
[]
GEM-SciDuet-train-21#paper-1019#slide-2
1019
Iterative Search for Weakly Supervised Semantic Parsing
Training semantic parsers from questionanswer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical forms that coincidentally evaluate to the correct answer. We propose a novel iterative training algorithm that alternates b...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.2.1", "2.2.2", "3", "4", "5", "5.1", "5.2", "5.3", "5.4", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7", "8" ], "paper_header_content": [ "Introduction", "Weakly supervised semant...
GEM-SciDuet-train-21#paper-1019#slide-2
Weakly Supervised Semantic Parsing
Training data: question-denotation pairs (x_i, w_i). x_i: Which athlete was from South Korea after 2010? w_i: Kim Yu-na. (Table context: Kim Yu-na, South Korea; Tenley Albright, United States.) Test: given x, find a logical form y such that executing y yields the correct denotation w.
Training data: question-denotation pairs (x_i, w_i). x_i: Which athlete was from South Korea after 2010? w_i: Kim Yu-na. (Table context: Kim Yu-na, South Korea; Tenley Albright, United States.) Test: given x, find a logical form y such that executing y yields the correct denotation w.
[]
GEM-SciDuet-train-21#paper-1019#slide-3
1019
Iterative Search for Weakly Supervised Semantic Parsing
Training semantic parsers from questionanswer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical forms that coincidentally evaluate to the correct answer. We propose a novel iterative training algorithm that alternates b...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.2.1", "2.2.2", "3", "4", "5", "5.1", "5.2", "5.3", "5.4", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7", "8" ], "paper_header_content": [ "Introduction", "Weakly supervised semant...
GEM-SciDuet-train-21#paper-1019#slide-3
Challenge Spurious logical forms
Which athletes are from South Korea after 2010? Logical forms that lead to the answer: ((reverse athlete) (and (nation south_korea) (year ((reverse date) ...)))), i.e., "Athlete from South Korea after 2010" (correct); ((reverse athlete) (and (nation south_korea) (medals 2))), i.e., "Athlete from South Korea with 2 medals" (spurious). Table rows: Plushenko, Russia (RUS); Kim Yu-na, South Korea (KOR...
Which athletes are from South Korea after 2010? Logical forms that lead to the answer: ((reverse athlete) (and (nation south_korea) (year ((reverse date) ...)))), i.e., "Athlete from South Korea after 2010" (correct); ((reverse athlete) (and (nation south_korea) (medals 2))), i.e., "Athlete from South Korea with 2 medals" (spurious). Table rows: Plushenko, Russia (RUS); Kim Yu-na, South Korea (KOR...
[]
GEM-SciDuet-train-21#paper-1019#slide-4
1019
Iterative Search for Weakly Supervised Semantic Parsing
Training semantic parsers from questionanswer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical forms that coincidentally evaluate to the correct answer. We propose a novel iterative training algorithm that alternates b...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.2.1", "2.2.2", "3", "4", "5", "5.1", "5.2", "5.3", "5.4", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7", "8" ], "paper_header_content": [ "Introduction", "Weakly supervised semant...
GEM-SciDuet-train-21#paper-1019#slide-4
Training Objectives
Maximum Marginal Likelihood (Krishnamurthy et al., 2017, and others): maximize the marginal likelihood of an approximate set of logical forms. Reward/cost-based approaches (and others): Minimum Bayes Risk training, minimize the ex... Proposal: alternate between the two objectives while gradually increasing the search space! (Both losses are sketched after this record.)
Maximum Marginal Likelihood (Krishnamurthy et al., 2017, and others): maximize the marginal likelihood of an approximate set of logical forms. Reward/cost-based approaches (and others): Minimum Bayes Risk training, minimize the ex... Proposal: alternate between the two objectives while gradually increasing the search space! (Both losses are sketched after this record.)
[]
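Minimal sketches of the two losses, assuming a candidate set of logical forms per example; renormalizing over the candidate set is a common approximation, not necessarily the paper's exact estimator.

```python
import torch

def mml_loss(logprobs):
    """Maximum marginal likelihood: -log sum_y p(y | x) over the
    (approximate) set of consistent logical forms for one example.
    logprobs: 1-D tensor, one model log-probability per logical form."""
    return -torch.logsumexp(logprobs, dim=0)

def mbr_loss(logprobs, costs):
    """Minimum Bayes risk: expected cost under the model distribution
    renormalized over a candidate set of logical forms.
    costs: 1-D tensor of per-candidate costs (e.g., 0 if consistent)."""
    p = torch.softmax(logprobs, dim=0)
    return (p * costs).sum()
```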
GEM-SciDuet-train-21#paper-1019#slide-5
1019
Iterative Search for Weakly Supervised Semantic Parsing
Training semantic parsers from questionanswer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical forms that coincidentally evaluate to the correct answer. We propose a novel iterative training algorithm that alternates b...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.2.1", "2.2.2", "3", "4", "5", "5.1", "5.2", "5.3", "5.4", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7", "8" ], "paper_header_content": [ "Introduction", "Weakly supervised semant...
GEM-SciDuet-train-21#paper-1019#slide-5
Spuriousness solution 1 Iterative search
Step 0: limited-depth exhaustive search gets a seed set of logical forms up to depth k. Step 1: train a model using MML on the seed set. Step 2: train using MBR on all data up to a greater depth k + s (max logical form depth = k + s). Step 3 (see the loop sketch after this record): replace the offline search with the trained MBR model and upda...
Step 0: limited-depth exhaustive search gets a seed set of logical forms up to depth k. Step 1: train a model using MML on the seed set. Step 2: train using MBR on all data up to a greater depth k + s (max logical form depth = k + s). Step 3 (see the loop sketch after this record): replace the offline search with the trained MBR model and upda...
[]
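The alternation can be written as a short loop. The four callables (offline_search, train_mml, train_mbr, decode_consistent) are hypothetical stand-ins for the paper's components, passed in so the sketch stays self-contained.

```python
def iterative_search(data, k, s, n_iterations,
                     offline_search, train_mml, train_mbr, decode_consistent):
    """Sketch of the alternating training loop (Steps 0-3 on the slide).
    Every helper here is a hypothetical placeholder, not the paper's API."""
    seed_set = offline_search(data, max_depth=k)          # Step 0
    model = train_mml(seed_set)                           # Step 1
    for i in range(n_iterations):
        depth = k + (i + 1) * s                           # grow the search space
        model = train_mbr(model, data, max_depth=depth)   # Step 2
        seed_set = decode_consistent(model, data)         # Step 3: the trained
        model = train_mml(seed_set)                       # model replaces search
    return model
```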
GEM-SciDuet-train-21#paper-1019#slide-6
1019
Iterative Search for Weakly Supervised Semantic Parsing
Training semantic parsers from questionanswer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical forms that coincidentally evaluate to the correct answer. We propose a novel iterative training algorithm that alternates b...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.2.1", "2.2.2", "3", "4", "5", "5.1", "5.2", "5.3", "5.4", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7", "8" ], "paper_header_content": [ "Introduction", "Weakly supervised semant...
GEM-SciDuet-train-21#paper-1019#slide-6
Spuriousness Solution 2 Coverage guidance
There is exactly one square touching the bottom of a box. (count_equals (square (touch_bottom all_objects)) ...) Insight: there is a significant amount of trivial overlap between the utterance and the logical form. Solution: use this overlap as a measure to guide search.
There is exactly one square touching the bottom of a box. (count_equals (square (touch_bottom all_objects)) ...) Insight: there is a significant amount of trivial overlap between the utterance and the logical form. Solution: use this overlap as a measure to guide search.
[]
GEM-SciDuet-train-21#paper-1019#slide-7
1019
Iterative Search for Weakly Supervised Semantic Parsing
Training semantic parsers from questionanswer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical forms that coincidentally evaluate to the correct answer. We propose a novel iterative training algorithm that alternates b...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.2.1", "2.2.2", "3", "4", "5", "5.1", "5.2", "5.3", "5.4", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7", "8" ], "paper_header_content": [ "Introduction", "Weakly supervised semant...
GEM-SciDuet-train-21#paper-1019#slide-7
Training with Coverage Guidance
Augment the reward-based objective with a coverage penalty (an illustrative cost function follows this record):
Augment the reward-based objective with a coverage penalty (an illustrative cost function follows this record):
[]
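One way such a coverage-augmented cost could look; the interpolation weight and the exact coverage measure are illustrative assumptions, not the paper's definition.

```python
def coverage_augmented_cost(used_productions, triggered_productions,
                            denotation_correct, lam=0.5):
    """Illustrative cost interpolating a denotation error with a coverage
    penalty: the fraction of lexicon-triggered grammar productions the
    candidate logical form fails to use.

    used_productions, triggered_productions: sets of production strings
    denotation_correct: bool, whether the candidate evaluates correctly
    """
    missed = len(triggered_productions - used_productions)
    coverage_penalty = missed / max(len(triggered_productions), 1)
    denotation_cost = 0.0 if denotation_correct else 1.0
    return lam * coverage_penalty + (1.0 - lam) * denotation_cost
```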
GEM-SciDuet-train-21#paper-1019#slide-10
1019
Iterative Search for Weakly Supervised Semantic Parsing
Training semantic parsers from questionanswer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical forms that coincidentally evaluate to the correct answer. We propose a novel iterative training algorithm that alternates b...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.2.1", "2.2.2", "3", "4", "5", "5.1", "5.2", "5.3", "5.4", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7", "8" ], "paper_header_content": [ "Introduction", "Weakly supervised semant...
GEM-SciDuet-train-21#paper-1019#slide-10
Results of using coverage guided training on NLVR
When trained from scratch, the model does not learn without coverage! Coverage helps even with strong initialization, i.e., when the model is initialized from an MML model trained on a seed set of offline-searched paths. (* using structured representations)
When trained from scratch, the model does not learn without coverage! Coverage helps even with strong initialization, i.e., when the model is initialized from an MML model trained on a seed set of offline-searched paths. (* using structured representations)
[]
GEM-SciDuet-train-21#paper-1019#slide-11
1019
Iterative Search for Weakly Supervised Semantic Parsing
Training semantic parsers from questionanswer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical forms that coincidentally evaluate to the correct answer. We propose a novel iterative training algorithm that alternates b...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.2.1", "2.2.2", "3", "4", "5", "5.1", "5.2", "5.3", "5.4", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7", "8" ], "paper_header_content": [ "Introduction", "Weakly supervised semant...
GEM-SciDuet-train-21#paper-1019#slide-11
Comparison with previous approaches on NLVR
MaxEnt and BiAttPointer are not semantic parsers. Abs. supervision + Rerank uses manually labeled abstractions of utterance-logical form pairs to get training data for a supervised system, plus reranking. Our work outperforms Goldman et al. (2018) with fewer resources. (* using structured representations)
MaxEnt and BiAttPointer are not semantic parsers. Abs. supervision + Rerank uses manually labeled abstractions of utterance-logical form pairs to get training data for a supervised system, plus reranking. Our work outperforms Goldman et al. (2018) with fewer resources. (* using structured representations)
[]
GEM-SciDuet-train-21#paper-1019#slide-12
1019
Iterative Search for Weakly Supervised Semantic Parsing
Training semantic parsers from questionanswer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical forms that coincidentally evaluate to the correct answer. We propose a novel iterative training algorithm that alternates b...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.2.1", "2.2.2", "3", "4", "5", "5.1", "5.2", "5.3", "5.4", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7", "8" ], "paper_header_content": [ "Introduction", "Weakly supervised semant...
GEM-SciDuet-train-21#paper-1019#slide-12
Comparison with previous approaches on WikiTableQuestions
[Results table] Model groups compared: non-neural models; reinforcement learning models; non-RL neural models.
[Results table] Model groups compared: non-neural models; reinforcement learning models; non-RL neural models.
[]
GEM-SciDuet-train-21#paper-1019#slide-13
1019
Iterative Search for Weakly Supervised Semantic Parsing
Training semantic parsers from questionanswer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical forms that coincidentally evaluate to the correct answer. We propose a novel iterative training algorithm that alternates b...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2.1", "2.2", "2.2.1", "2.2.2", "3", "4", "5", "5.1", "5.2", "5.3", "5.4", "6", "6.1", "6.2", "6.3", "6.4", "6.5", "7", "8" ], "paper_header_content": [ "Introduction", "Weakly supervised semant...
GEM-SciDuet-train-21#paper-1019#slide-13
Summary
Spuriousness is a challenge in training semantic parsers with weak supervision. Iterative training: online search with initialization; MML over offline search output; coverage during online search. SOTA single-model performances:
Spuriousness is a challenge in training semantic parsers with weak supervision. Iterative training: online search with initialization; MML over offline search output; coverage during online search. SOTA single-model performances:
[]
GEM-SciDuet-train-22#paper-1021#slide-0
1021
No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling
Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challe...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Problem Statement", "Model", "Learning", "Experimental Setup", "Automatic Evaluation", "Human Evaluat...
GEM-SciDuet-train-22#paper-1021#slide-0
Image Captioning
Two young kids with backpacks sitting on the porch.
Two young kids with backpacks sitting on the porch.
[]
GEM-SciDuet-train-22#paper-1021#slide-1
1021
No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling
Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challe...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Problem Statement", "Model", "Learning", "Experimental Setup", "Automatic Evaluation", "Human Evaluat...
GEM-SciDuet-train-22#paper-1021#slide-1
Visual Storytelling
The brother did not want to talk to his sister. The siblings made up. They started to talk and smile. Their parents showed up. They were happy to see them. The brother and sister were ready for the first day of school. They were excited to go to their first day and meet new friends. They told their mom how happy they w...
The brother did not want to talk to his sister. The siblings made up. They started to talk and smile. Their parents showed up. They were happy to see them. The brother and sister were ready for the first day of school. They were excited to go to their first day and meet new friends. They told their mom how happy they w...
[]
GEM-SciDuet-train-22#paper-1021#slide-2
1021
No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling
Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challe...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Problem Statement", "Model", "Learning", "Experimental Setup", "Automatic Evaluation", "Human Evaluat...
GEM-SciDuet-train-22#paper-1021#slide-2
Reinforcement Learning
o Directly optimize the existing metrics: BLEU, METEOR, ROUGE, CIDEr (Rennie et al., 2017, Self-critical Sequence Training for Image Captioning; a minimal SCST sketch follows this record).
o Directly optimize the existing metrics: BLEU, METEOR, ROUGE, CIDEr (Rennie et al., 2017, Self-critical Sequence Training for Image Captioning; a minimal SCST sketch follows this record).
[]
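For reference, the self-critical gradient from Rennie et al. (2017) in a few lines; variable names are illustrative.

```python
def self_critical_loss(sample_logprob, sample_reward, greedy_reward):
    """REINFORCE with the greedy decode's metric score as baseline
    (self-critical sequence training). sample_logprob is the summed
    log-probability of the sampled sequence under the model."""
    advantage = sample_reward - greedy_reward
    # Minimizing this pushes up the probability of samples that beat
    # the greedy baseline and pushes down samples that fall below it.
    return -advantage * sample_logprob
```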
GEM-SciDuet-train-22#paper-1021#slide-3
1021
No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling
Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challe...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Problem Statement", "Model", "Learning", "Experimental Setup", "Automatic Evaluation", "Human Evaluat...
GEM-SciDuet-train-22#paper-1021#slide-3
Inverse Reinforcement Learning
[Diagram] Reinforcement Learning: reward function -> optimal policy. Inverse Reinforcement Learning (IRL): optimal policy -> reward function.
[Diagram] Reinforcement Learning: reward function -> optimal policy. Inverse Reinforcement Learning (IRL): optimal policy -> reward function.
[]
GEM-SciDuet-train-22#paper-1021#slide-4
1021
No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling
Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challe...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Problem Statement", "Model", "Learning", "Experimental Setup", "Automatic Evaluation", "Human Evaluat...
GEM-SciDuet-train-22#paper-1021#slide-4
Adversarial REward Learning AREL
[Diagram] The policy model generates a story; the reward model scores it, closing the adversarial loop.
[Diagram] The policy model generates a story; the reward model scores it, closing the adversarial loop.
[]
GEM-SciDuet-train-22#paper-1021#slide-5
1021
No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling
Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challe...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Problem Statement", "Model", "Learning", "Experimental Setup", "Automatic Evaluation", "Human Evaluat...
GEM-SciDuet-train-22#paper-1021#slide-5
Policy Model
[Example; images are encoded by a CNN] My brother recently graduated college. It was a formal cap and gown event. My mom and dad attended. Later, my aunt and grandma showed up. When the event was over he even got congratulated by the mascot.
[Example; images are encoded by a CNN] My brother recently graduated college. It was a formal cap and gown event. My mom and dad attended. Later, my aunt and grandma showed up. When the event was over he even got congratulated by the mascot.
[]
GEM-SciDuet-train-22#paper-1021#slide-6
1021
No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling
Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challe...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Problem Statement", "Model", "Learning", "Experimental Setup", "Automatic Evaluation", "Human Evaluat...
GEM-SciDuet-train-22#paper-1021#slide-6
Reward Model
[Diagram] A story sentence (e.g., "my mom and dad attended") is scored by a CNN: convolution -> pooling -> FC layer. (Kim, 2014, Convolutional Neural Networks for Sentence Classification)
[Diagram] A story sentence (e.g., "my mom and dad attended") is scored by a CNN: convolution -> pooling -> FC layer. (Kim, 2014, Convolutional Neural Networks for Sentence Classification)
[]
GEM-SciDuet-train-22#paper-1021#slide-7
1021
No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling
Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challe...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Problem Statement", "Model", "Learning", "Experimental Setup", "Automatic Evaluation", "Human Evaluat...
GEM-SciDuet-train-22#paper-1021#slide-7
Associating Reward with Story
Energy-based models associate an energy value with a sample, modeling the data as a Boltzmann distribution that approximates the data distribution: p_theta(x) = exp(R_theta(x)) / Z, where Z is the partition function. The optimal reward function is achieved when this distribution matches the empirical one. (LeCun et al., 2006, A Tutorial on Energy-Based Learning; see the math block after this record.)
Energy-based models associate an energy value with a sample, modeling the data as a Boltzmann distribution that approximates the data distribution: p_theta(x) = exp(R_theta(x)) / Z, where Z is the partition function. The optimal reward function is achieved when this distribution matches the empirical one. (LeCun et al., 2006, A Tutorial on Energy-Based Learning; see the math block after this record.)
[]
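The Boltzmann form referenced on the slide, written out with assumed notation (W: story, I: photo stream, R_theta: reward model):

```latex
% Boltzmann form of the learned reward (LeCun et al., 2006).
p_\theta(W \mid I) \;=\; \frac{\exp\big(R_\theta(W \mid I)\big)}{Z_\theta},
\qquad
Z_\theta \;=\; \sum_{W'} \exp\big(R_\theta(W' \mid I)\big)
% The reward is optimal when this approximate distribution matches the
% empirical data distribution: p_\theta(W \mid I) = p_e(W \mid I).
```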
GEM-SciDuet-train-22#paper-1021#slide-8
1021
No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling
Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challe...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Problem Statement", "Model", "Learning", "Experimental Setup", "Automatic Evaluation", "Human Evaluat...
GEM-SciDuet-train-22#paper-1021#slide-8
AREL Objective
Therefore, we define an adversarial objective with KL-divergence between the empirical distribution and the policy distribution; the reward model and the policy model each get their own objective (a hedged paraphrase follows this record).
Therefore, we define an adversarial objective with KL-divergence between the empirical distribution and the policy distribution; the reward model and the policy model each get their own objective (a hedged paraphrase follows this record).
[]
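A hedged paraphrase of the two objectives; the exact KL formulation and regularizers are in the paper, so read this only as the standard adversarial shape. Notation is assumed: p_e = empirical story distribution, pi_beta = policy, R_theta = reward model.

```latex
% Reward model: score human stories above sampled ones.
\max_{\theta}\;
  \mathbb{E}_{W \sim p_e}\big[R_\theta(W \mid I)\big]
  \;-\; \mathbb{E}_{W \sim \pi_\beta}\big[R_\theta(W \mid I)\big]
% Policy model: maximize expected learned reward (plus an entropy term).
\max_{\beta}\;
  \mathbb{E}_{W \sim \pi_\beta}\big[R_\theta(W \mid I)\big] \;+\; H(\pi_\beta)
```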
GEM-SciDuet-train-22#paper-1021#slide-10
1021
No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling
Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challe...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Problem Statement", "Model", "Learning", "Experimental Setup", "Automatic Evaluation", "Human Evaluat...
GEM-SciDuet-train-22#paper-1021#slide-10
Automatic Evaluation
[Results table] Columns: Method, BLEU-1, BLEU-2, BLEU-3, BLEU-4, METEOR, ROUGE, CIDEr. Rows: Seq2seq (Huang et al.), HierAttRNN (Yu et al.), BLEU-RL, AREL (Ours). (Huang et al., 2016, Visual Storytelling; Yu et al., 2017, Hierarchically-Attentive RNN for Album Summarization and Storytelling)
[Results table] Columns: Method, BLEU-1, BLEU-2, BLEU-3, BLEU-4, METEOR, ROUGE, CIDEr. Rows: Seq2seq (Huang et al.), HierAttRNN (Yu et al.), BLEU-RL, AREL (Ours). (Huang et al., 2016, Visual Storytelling; Yu et al., 2017, Hierarchically-Attentive RNN for Album Summarization and Storytelling)
[]
GEM-SciDuet-train-22#paper-1021#slide-11
1021
No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling
Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challe...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Problem Statement", "Model", "Learning", "Experimental Setup", "Automatic Evaluation", "Human Evaluat...
GEM-SciDuet-train-22#paper-1021#slide-11
Human Evaluation
Compared systems: XE, BLEU-RL, CIDEr-RL, GAN, AREL. Relevance: the story accurately describes what is happening in the photo stream and covers the main objects. Expressiveness: coherence, grammatically and semantically correct, no repetition, expressive language style. Concreteness: the story should narrate concretely what is in the images r...
Compared systems: XE, BLEU-RL, CIDEr-RL, GAN, AREL. Relevance: the story accurately describes what is happening in the photo stream and covers the main objects. Expressiveness: coherence, grammatically and semantically correct, no repetition, expressive language style. Concreteness: the story should narrate concretely what is in the images r...
[]
GEM-SciDuet-train-22#paper-1021#slide-12
1021
No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling
Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challe...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Problem Statement", "Model", "Learning", "Experimental Setup", "Automatic Evaluation", "Human Evaluat...
GEM-SciDuet-train-22#paper-1021#slide-12
Takeaway
o Generating and evaluating stories are both challenging due to the complicated nature of stories. o No existing metrics are perfect for either training or testing. o AREL is a better learning framework for visual storytelling; it can be applied to other generation tasks. o Our approach is model-agnostic; advanced models bette...
o Generating and evaluating stories are both challenging due to the complicated nature of stories. o No existing metrics are perfect for either training or testing. o AREL is a better learning framework for visual storytelling; it can be applied to other generation tasks. o Our approach is model-agnostic; advanced models bette...
[]
GEM-SciDuet-train-23#paper-1024#slide-0
1024
Multimodal Machine Translation with Embedding Prediction
Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for transla...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Multimodal Machine Translation with Embedding Prediction", "Neural Machine Translation with Embedding Prediction", "Visual Lat...
GEM-SciDuet-train-23#paper-1024#slide-0
Multimodal Machine Translation
Practical application of machine translation: translate a source sentence along with related non-linguistic information. Example pair: "two young girls are sitting on the street eating corn." / "deux jeunes filles sont assises dans la rue, mangeant du maïs."
Practical application of machine translation: translate a source sentence along with related non-linguistic information. Example pair: "two young girls are sitting on the street eating corn." / "deux jeunes filles sont assises dans la rue, mangeant du maïs."
[]
GEM-SciDuet-train-23#paper-1024#slide-1
1024
Multimodal Machine Translation with Embedding Prediction
Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for transla...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Multimodal Machine Translation with Embedding Prediction", "Neural Machine Translation with Embedding Prediction", "Visual Lat...
GEM-SciDuet-train-23#paper-1024#slide-1
Issue of MMT
Multi30k [Elliott et al., 2016] has only a small amount of training data (see the statistics of the training data). It is hard to learn rare-word translation; models tend to output synonyms guided by the language model. Source: deux jeunes filles sont assises dans la rue, mangeant du maïs. Reference: two young girls are sitting on the street eating corn. NMT: two y...
Multi30k [Elliott et al., 2016] has only a small amount of training data (see the statistics of the training data). It is hard to learn rare-word translation; models tend to output synonyms guided by the language model. Source: deux jeunes filles sont assises dans la rue, mangeant du maïs. Reference: two young girls are sitting on the street eating corn. NMT: two y...
[]
GEM-SciDuet-train-23#paper-1024#slide-2
1024
Multimodal Machine Translation with Embedding Prediction
Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for transla...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Multimodal Machine Translation with Embedding Prediction", "Neural Machine Translation with Embedding Prediction", "Visual Lat...
GEM-SciDuet-train-23#paper-1024#slide-2
Previous Solutions
Parallel corpus without images [Elliott and Kádár, 2017; Grönroos et al., 2018]; pseudo in-domain data by filtering general-domain data; back-translation of caption/monolingual data.
Parallel corpus without images [Elliott and Kádár, 2017; Grönroos et al., 2018]; pseudo in-domain data by filtering general-domain data; back-translation of caption/monolingual data.
[]
GEM-SciDuet-train-23#paper-1024#slide-3
1024
Multimodal Machine Translation with Embedding Prediction
Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for transla...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Multimodal Machine Translation with Embedding Prediction", "Neural Machine Translation with Embedding Prediction", "Visual Lat...
GEM-SciDuet-train-23#paper-1024#slide-3
Motivation
Introduce pretrained word embeddings into MMT to improve rare-word translation. Do pretrained word embeddings help conventional MMT? In text-only NMT, initializing the embedding layers in the encoder/decoder with pretrained embeddings improves overall performance in low-resource domains [Qi et al., 2018]. Search-based decoder with conti...
Introduce pretrained word embeddings into MMT to improve rare-word translation. Do pretrained word embeddings help conventional MMT? In text-only NMT, initializing the embedding layers in the encoder/decoder with pretrained embeddings improves overall performance in low-resource domains [Qi et al., 2018]. Search-based decoder with conti...
[]
GEM-SciDuet-train-23#paper-1024#slide-4
1024
Multimodal Machine Translation with Embedding Prediction
Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for transla...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Multimodal Machine Translation with Embedding Prediction", "Neural Machine Translation with Embedding Prediction", "Visual Lat...
GEM-SciDuet-train-23#paper-1024#slide-4
Baseline IMAGINATION
[Architecture diagram; the MT branch is the attentional NMT of Bahdanau et al., 2015, and the image-prediction branch is used only while training, not while validating/testing.] Train both the MT task and the shared-space learning task to improve the shared encoder.
[Architecture diagram; the MT branch is the attentional NMT of Bahdanau et al., 2015, and the image-prediction branch is used only while training, not while validating/testing.] Train both the MT task and the shared-space learning task to improve the shared encoder.
[]
GEM-SciDuet-train-23#paper-1024#slide-5
1024
Multimodal Machine Translation with Embedding Prediction
Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for transla...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Multimodal Machine Translation with Embedding Prediction", "Neural Machine Translation with Embedding Prediction", "Visual Lat...
GEM-SciDuet-train-23#paper-1024#slide-5
MMT with Embedding Prediction
1. Use embedding prediction in the decoder. 2. Initialize the embedding layers in the encoder/decoder with pretrained word embeddings. 3. Shift the visual features so that their mean vector becomes zero. (Diagram labels: "while training" vs. "while validating, testing".)
1. Use embedding prediction in the decoder. 2. Initialize the embedding layers in the encoder/decoder with pretrained word embeddings. 3. Shift the visual features so that their mean vector becomes zero. (Diagram labels: "while training" vs. "while validating, testing".)
[]
GEM-SciDuet-train-23#paper-1024#slide-6
1024
Multimodal Machine Translation with Embedding Prediction
Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for transla...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Multimodal Machine Translation with Embedding Prediction", "Neural Machine Translation with Embedding Prediction", "Visual Lat...
GEM-SciDuet-train-23#paper-1024#slide-6
Embedding Prediction
i.e., continuous output [Kumar and Tsvetkov, 2019]: predict a word embedding and search for the nearest word (a decoding sketch follows this record). 1. Predict the word embedding of the next word. 2. Compute cosine similarities with each word in the pretrained word embedding table. 3. Find and output the most similar word as the system output. The pretrained word embedding will NOT b...
i.e., continuous output [Kumar and Tsvetkov, 2019]: predict a word embedding and search for the nearest word (a decoding sketch follows this record). 1. Predict the word embedding of the next word. 2. Compute cosine similarities with each word in the pretrained word embedding table. 3. Find and output the most similar word as the system output. The pretrained word embedding will NOT b...
[]
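A minimal decoding step for the continuous-output search, assuming a frozen pretrained embedding table; names are illustrative.

```python
import torch
import torch.nn.functional as F

def nearest_word(predicted_emb, embedding_table, itos):
    """Continuous-output decoding step: emit the vocabulary word whose
    pretrained embedding is most cosine-similar to the predicted vector.

    predicted_emb:   (d,)   decoder output vector
    embedding_table: (V, d) frozen pretrained embeddings
    itos:            list mapping row index -> word string
    """
    sims = F.cosine_similarity(embedding_table,
                               predicted_emb.unsqueeze(0), dim=1)  # (V,)
    return itos[int(sims.argmax())]
```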
GEM-SciDuet-train-23#paper-1024#slide-7
1024
Multimodal Machine Translation with Embedding Prediction
Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for transla...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Multimodal Machine Translation with Embedding Prediction", "Neural Machine Translation with Embedding Prediction", "Visual Lat...
GEM-SciDuet-train-23#paper-1024#slide-7
Embedding Layer Initialization
Initialize the embedding layers with pretrained word embeddings. Fine-tune the embedding layer in the encoder; DO NOT update the embedding layer in the decoder.
Initialize the embedding layers with pretrained word embeddings. Fine-tune the embedding layer in the encoder; DO NOT update the embedding layer in the decoder.
[]
GEM-SciDuet-train-23#paper-1024#slide-8
1024
Multimodal Machine Translation with Embedding Prediction
Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for transla...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Multimodal Machine Translation with Embedding Prediction", "Neural Machine Translation with Embedding Prediction", "Visual Lat...
GEM-SciDuet-train-23#paper-1024#slide-8
Loss Function
Model loss: interpolation of each task's loss [Elliott and Kádár, 2017]. MT task: max-margin with negative sampling [Lazaridou et al., 2015] (a sketch follows this record). Shared-space learning task: max-margin [Elliott and Kádár, 2017].
Model loss: interpolation of each task's loss [Elliott and Kádár, 2017]. MT task: max-margin with negative sampling [Lazaridou et al., 2015] (a sketch follows this record). Shared-space learning task: max-margin [Elliott and Kádár, 2017].
[]
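A sketch of the max-margin loss with negative sampling for the MT task; the margin value and the use of cosine similarity here are assumptions in the spirit of Lazaridou et al. (2015), not the paper's exact hyperparameters.

```python
import torch
import torch.nn.functional as F

def max_margin_loss(pred, gold, negatives, margin=0.5):
    """The predicted embedding should be closer to the gold word's
    embedding than to sampled distractors, by at least `margin`.

    pred:      (d,)    predicted embedding
    gold:      (d,)    embedding of the reference word
    negatives: (k, d)  embeddings of k sampled distractor words
    """
    pos = F.cosine_similarity(pred, gold, dim=0)                     # scalar
    neg = F.cosine_similarity(negatives, pred.unsqueeze(0), dim=1)   # (k,)
    return torch.clamp(margin - pos + neg, min=0.0).sum()
```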
GEM-SciDuet-train-23#paper-1024#slide-9
1024
Multimodal Machine Translation with Embedding Prediction
Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for transla...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Multimodal Machine Translation with Embedding Prediction", "Neural Machine Translation with Embedding Prediction", "Visual Lat...
GEM-SciDuet-train-23#paper-1024#slide-9
Hubness Problem
Certain words (hubs) appear frequently among the nearest neighbors of other words, even of words that have entirely no relationship with the hubs. This prevents the embedding prediction model from finding the correct output word: the model incorrectly outputs the hub word instead.
Certain words (hubs) appear frequently among the nearest neighbors of other words, even of words that have entirely no relationship with the hubs. This prevents the embedding prediction model from finding the correct output word: the model incorrectly outputs the hub word instead.
[]
GEM-SciDuet-train-23#paper-1024#slide-10
1024
Multimodal Machine Translation with Embedding Prediction
Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for transla...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Multimodal Machine Translation with Embedding Prediction", "Neural Machine Translation with Embedding Prediction", "Visual Lat...
GEM-SciDuet-train-23#paper-1024#slide-10
All but the Top
Addresses the hubness problem in other NLP tasks by debiasing a pretrained word embedding based on its global bias (a sketch follows this record): 1. Shift all word embeddings so that their mean vector becomes the zero vector. 2. Subtract the top 5 PCA components from each shifted word embedding. Applied to the pretrained word embeddings for both encoder and decoder.
Addresses the hubness problem in other NLP tasks by debiasing a pretrained word embedding based on its global bias (a sketch follows this record): 1. Shift all word embeddings so that their mean vector becomes the zero vector. 2. Subtract the top 5 PCA components from each shifted word embedding. Applied to the pretrained word embeddings for both encoder and decoder.
[]
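A NumPy sketch of the two debiasing steps (after Mu and Viswanath's All-but-the-Top); computing the principal components via SVD is an implementation choice, not prescribed by the slide.

```python
import numpy as np

def all_but_the_top(emb, n_components=5):
    """Center the embedding matrix, then remove its top principal
    components.

    emb: (V, d) pretrained embedding matrix
    """
    # Step 1: shift so the mean embedding becomes the zero vector.
    centered = emb - emb.mean(axis=0, keepdims=True)
    # Top principal directions from the SVD of the centered matrix.
    _, _, components = np.linalg.svd(centered, full_matrices=False)
    top = components[:n_components]                  # (n_components, d)
    # Step 2: subtract the projections onto the top components.
    return centered - centered @ top.T @ top
```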
GEM-SciDuet-train-23#paper-1024#slide-11
1024
Multimodal Machine Translation with Embedding Prediction
Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for transla...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Multimodal Machine Translation with Embedding Prediction", "Neural Machine Translation with Embedding Prediction", "Visual Lat...
GEM-SciDuet-train-23#paper-1024#slide-11
Implementation and Dataset
Multi30k (French to English); pretrained ResNet50 as the visual encoder; word embeddings trained on Common Crawl and Wikipedia. Our code is here: https://github.com/toshohirasawa/nmtpytorch-emb-pred
Multi30k (French to English); pretrained ResNet50 as the visual encoder; word embeddings trained on Common Crawl and Wikipedia. Our code is here: https://github.com/toshohirasawa/nmtpytorch-emb-pred
[]
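One plausible way to obtain 2048-dimensional ResNet50 features is torchvision's pretrained model with the classification head removed. This is a hedged sketch, not necessarily the exact pipeline in the linked repository, and `example.jpg` is a hypothetical path.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained ResNet50 with the classification head replaced by an identity;
# the remaining global-average-pooled output is a 2048-d feature per image.
resnet = models.resnet50(pretrained=True)
resnet.fc = torch.nn.Identity()
resnet.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("example.jpg").convert("RGB")      # hypothetical image
with torch.no_grad():
    feature = resnet(preprocess(image).unsqueeze(0))  # shape: (1, 2048)
```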
GEM-SciDuet-train-23#paper-1024#slide-12
1024
Multimodal Machine Translation with Embedding Prediction
Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for transla...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Multimodal Machine Translation with Embedding Prediction", "Neural Machine Translation with Embedding Prediction", "Visual Lat...
GEM-SciDuet-train-23#paper-1024#slide-12
Hyper Parameters
Dimension of hidden state: 256. RNN type: GRU. Dimension of word embedding: 300. Dimension of shared space: 2048. (A sketch wiring these dimensions together follows this record.)
Dimension of hidden state: 256. RNN type: GRU. Dimension of word embedding: 300. Dimension of shared space: 2048. (A sketch wiring these dimensions together follows this record.)
[]
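A toy sketch of how the listed dimensions could fit together in an embedding-prediction decoder. The module and parameter names are ours, not the paper's; this is an illustration of the dimensions, not the actual nmtpytorch model.

```python
import torch
import torch.nn as nn

class EmbeddingPredictionHead(nn.Module):
    """Toy decoder head wiring up the slide's dimensions: a GRU with a
    256-d hidden state predicts a 300-d word embedding at each step, plus
    a projection into the 2048-d space shared with visual features."""
    def __init__(self, hidden=256, emb_dim=300, shared_dim=2048):
        super().__init__()
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.predict_embedding = nn.Linear(hidden, emb_dim)
        self.to_shared_space = nn.Linear(hidden, shared_dim)

    def forward(self, prev_embeddings):
        states, _ = self.rnn(prev_embeddings)
        return self.predict_embedding(states), self.to_shared_space(states)

head = EmbeddingPredictionHead()
pred_emb, shared = head(torch.zeros(8, 10, 300))  # (batch, time, emb_dim)
print(pred_emb.shape, shared.shape)               # (8, 10, 300) (8, 10, 2048)
```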
GEM-SciDuet-train-23#paper-1024#slide-13
1024
Multimodal Machine Translation with Embedding Prediction
Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for transla...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Multimodal Machine Translation with Embedding Prediction", "Neural Machine Translation with Embedding Prediction", "Visual Lat...
GEM-SciDuet-train-23#paper-1024#slide-13
Word level F1 score
[Figure: word-level F1 score plotted against word frequency in the training data]
[Figure: word-level F1 score plotted against word frequency in the training data]
[]
GEM-SciDuet-train-23#paper-1024#slide-14
1024
Multimodal Machine Translation with Embedding Prediction
Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for transla...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Multimodal Machine Translation with Embedding Prediction", "Neural Machine Translation with Embedding Prediction", "Visual Lat...
GEM-SciDuet-train-23#paper-1024#slide-14
Ablation wrt Embedding Layers
Table (ablation over embedding layers): columns Encoder, Decoder, Fixed, BLEU, METEOR; rows compare FastText against random initialization. Encoder/Decoder: initialize the embedding layer with random values or with FastText word embeddings. Fixed (Yes/No): whether the embedding layer in the decoder is fixed or fine-tuned during training. Fixing the embedding layer in the decoder is essential: keep word embeddings i... (A freezing sketch follows this record.)
Table (ablation over embedding layers): columns Encoder, Decoder, Fixed, BLEU, METEOR; rows compare FastText against random initialization. Encoder/Decoder: initialize the embedding layer with random values or with FastText word embeddings. Fixed (Yes/No): whether the embedding layer in the decoder is fixed or fine-tuned during training. Fixing the embedding layer in the decoder is essential: keep word embeddings i... (A freezing sketch follows this record.)
[]
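The "Fixed: Yes" setting can be reproduced in PyTorch with `nn.Embedding.from_pretrained(..., freeze=True)`. A minimal sketch, with a random matrix standing in for the real FastText vectors:

```python
import torch
import torch.nn as nn

vocab_size, emb_dim = 10000, 300
fasttext_matrix = torch.randn(vocab_size, emb_dim)  # stand-in for FastText

# "FastText" initialization + "Fixed: Yes": the weights are copied once and
# excluded from gradient updates, so they keep their pretrained values.
decoder_embedding = nn.Embedding.from_pretrained(fasttext_matrix, freeze=True)

trainable = [p for p in decoder_embedding.parameters() if p.requires_grad]
print(len(trainable))  # 0: nothing in this layer will be fine-tuned
```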
GEM-SciDuet-train-23#paper-1024#slide-15
1024
Multimodal Machine Translation with Embedding Prediction
Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for transla...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Multimodal Machine Translation with Embedding Prediction", "Neural Machine Translation with Embedding Prediction", "Visual Lat...
GEM-SciDuet-train-23#paper-1024#slide-15
Overall Performance
Model (+ pretrained): applies embedding layer initialization and All-but-the-Top debiasing. Our model performs better than the baselines, even those with embedding layer initialization.
Model (+ pretrained): applies embedding layer initialization and All-but-the-Top debiasing. Our model performs better than the baselines, even those with embedding layer initialization.
[]
GEM-SciDuet-train-23#paper-1024#slide-16
1024
Multimodal Machine Translation with Embedding Prediction
Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for transla...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Multimodal Machine Translation with Embedding Prediction", "Neural Machine Translation with Embedding Prediction", "Visual Lat...
GEM-SciDuet-train-23#paper-1024#slide-16
Ablation wrt Visual Features
Table: Visual Features → BLEU, METEOR. Visual Features (Centered/Raw/No): train the model with centered visual features, with raw visual features, or with none; 'No' reports a text-only NMT with embedding prediction. Centering the visual features is required to train our model. (A centering sketch follows this record.)
Table: Visual Features → BLEU, METEOR. Visual Features (Centered/Raw/No): train the model with centered visual features, with raw visual features, or with none; 'No' reports a text-only NMT with embedding prediction. Centering the visual features is required to train our model. (A centering sketch follows this record.)
[]
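Centering visual features amounts to subtracting the mean feature computed on the training set and reusing that same mean at test time. A minimal sketch with toy data in place of real ResNet features:

```python
import numpy as np

train_feats = np.random.rand(29000, 2048)  # toy stand-in for ResNet features
test_feats = np.random.rand(1000, 2048)

mean = train_feats.mean(axis=0, keepdims=True)  # computed on training data only
train_centered = train_feats - mean
test_centered = test_feats - mean               # reuse the training mean

print(np.abs(train_centered.mean(axis=0)).max())  # ~0: features are centered
```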
GEM-SciDuet-train-23#paper-1024#slide-17
1024
Multimodal Machine Translation with Embedding Prediction
Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for transla...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Multimodal Machine Translation with Embedding Prediction", "Neural Machine Translation with Embedding Prediction", "Visual Lat...
GEM-SciDuet-train-23#paper-1024#slide-17
Conclusion and Future Works
MMT with embedding prediction improves ... It is essential for the embedding prediction model to ...: fix the embedding layer in the decoder, debias the pretrained word embeddings, and center the visual features for multitask learning. Future work: better training corpora for embedding learning in the MMT domain; incorporate visual features into contextualized...
MMT with embedding prediction improves ... It is essential for the embedding prediction model to ...: fix the embedding layer in the decoder, debias the pretrained word embeddings, and center the visual features for multitask learning. Future work: better training corpora for embedding learning in the MMT domain; incorporate visual features into contextualized...
[]
GEM-SciDuet-train-23#paper-1024#slide-18
1024
Multimodal Machine Translation with Embedding Prediction
Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for transla...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Multimodal Machine Translation with Embedding Prediction", "Neural Machine Translation with Embedding Prediction", "Visual Lat...
GEM-SciDuet-train-23#paper-1024#slide-18
Translation Example
un homme en vélo pédale devant une voûte . | a man on a bicycle pedals through an archway . | a man on a bicycle pedal past an arch . | Source | a man on a bicycle pedals outside a monument . | IMAGINATION | a man on a bicycle pedals in front of a archway .
un homme en vélo pédale devant une voûte . | a man on a bicycle pedals through an archway . | a man on a bicycle pedal past an arch . | Source | a man on a bicycle pedals outside a monument . | IMAGINATION | a man on a bicycle pedals in front of a archway .
[]
GEM-SciDuet-train-23#paper-1024#slide-19
1024
Multimodal Machine Translation with Embedding Prediction
Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for transla...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Multimodal Machine Translation with Embedding Prediction", "Neural Machine Translation with Embedding Prediction", "Visual Lat...
GEM-SciDuet-train-23#paper-1024#slide-19
Translation Example long
quatre hommes , dont trois portent des kippas , sont assis sur un tapis à motifs bleu et vert olive . | four men , three of whom are wearing prayer caps , are sitting on a blue and olive green patterned mat . | four men , three of whom are wearing aprons , are sitting on a blue and green speedo carpet . | four men , three of...
quatre hommes , dont trois portent des kippas , sont assis sur un tapis à motifs bleu et vert olive . | four men , three of whom are wearing prayer caps , are sitting on a blue and olive green patterned mat . | four men , three of whom are wearing aprons , are sitting on a blue and green speedo carpet . | four men , three of...
[]
GEM-SciDuet-train-24#paper-1025#slide-0
1025
A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings
Recent work has managed to learn crosslingual word embeddings without parallel data by mapping monolingual embeddings to a shared space through adversarial training. However, their evaluation has focused on favorable conditions, using comparable corpora or closely-related languages, and we show that they often fail in ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related work", "Proposed method", "Embedding normalization", "Fully unsupervised initializa...
GEM-SciDuet-train-24#paper-1025#slide-0
Introduction
Cross-lingual transfer learning. Previous work: adversarial learning; very good results; tested in favorable conditions; fails in more challenging datasets. This work: self-learning; even better results; works in challenging datasets.
Cross-lingual transfer learning. Previous work: adversarial learning; very good results; tested in favorable conditions; fails in more challenging datasets. This work: self-learning; even better results; works in challenging datasets.
[]
GEM-SciDuet-train-24#paper-1025#slide-1
1025
A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings
Recent work has managed to learn crosslingual word embeddings without parallel data by mapping monolingual embeddings to a shared space through adversarial training. However, their evaluation has focused on favorable conditions, using comparable corpora or closely-related languages, and we show that they often fail in ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related work", "Proposed method", "Embedding normalization", "Fully unsupervised initializa...
GEM-SciDuet-train-24#paper-1025#slide-1
Cross lingual embedding mappings
Basque and English embeddings are mapped into a shared space through a training dictionary, solving W* = arg min_W Σ_i ||x_i W − z_i||² over the dictionary pairs (x_i, z_i). (With W constrained to be orthogonal this has a closed-form Procrustes solution; see the sketch after this record.)
Basque and English embeddings are mapped into a shared space through a training dictionary, solving W* = arg min_W Σ_i ||x_i W − z_i||² over the dictionary pairs (x_i, z_i). (With W constrained to be orthogonal this has a closed-form Procrustes solution; see the sketch after this record.)
[]
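With W constrained to be orthogonal, the least-squares mapping objective above has a closed-form (Procrustes) solution via SVD. A minimal numpy sketch on synthetic data, where the "English" side is by construction an exact rotation of the "Basque" side:

```python
import numpy as np

def procrustes(x_dict: np.ndarray, z_dict: np.ndarray) -> np.ndarray:
    """Closed-form solution of  argmin_W ||XW - Z||_F  with W orthogonal,
    where rows of x_dict/z_dict are embeddings of dictionary pairs."""
    u, _, vt = np.linalg.svd(x_dict.T @ z_dict)
    return u @ vt

rng = np.random.default_rng(0)
eu = rng.normal(size=(5000, 300))       # toy Basque embeddings
true_w, _ = np.linalg.qr(rng.normal(size=(300, 300)))
en = eu @ true_w                        # toy English side: a pure rotation
w = procrustes(eu[:100], en[:100])      # 100 "dictionary" pairs suffice here
print(np.allclose(eu @ w, en, atol=1e-6))  # True: the mapping is recovered
```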
GEM-SciDuet-train-24#paper-1025#slide-2
1025
A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings
Recent work has managed to learn crosslingual word embeddings without parallel data by mapping monolingual embeddings to a shared space through adversarial training. However, their evaluation has focused on favorable conditions, using comparable corpora or closely-related languages, and we show that they often fail in ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related work", "Proposed method", "Embedding normalization", "Fully unsupervised initializa...
GEM-SciDuet-train-24#paper-1025#slide-2
Artetxe et al ACL 2017
Self-learning: W* = arg min_W Σ_i min_j ||x_i W − z_j||², alternating between solving the mapping for the current dictionary and re-inducing the dictionary under the current mapping. [Figure: dictionary induction accuracy over self-learning iterations, seeded with 25 word pairs, a numeral list, or nothing] (A sketch of the loop follows this record.)
Self-learning: W* = arg min_W Σ_i min_j ||x_i W − z_j||², alternating between solving the mapping for the current dictionary and re-inducing the dictionary under the current mapping. [Figure: dictionary induction accuracy over self-learning iterations, seeded with 25 word pairs, a numeral list, or nothing] (A sketch of the loop follows this record.)
[]
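A much-simplified sketch of the self-learning loop of Artetxe et al. (2017): alternate between fitting the mapping to the current dictionary and re-inducing the dictionary by nearest neighbors in the mapped space. It reuses the `procrustes` helper from the previous sketch and assumes length-normalized embedding matrices.

```python
import numpy as np

def self_learning(x, z, seed_src, seed_trg, iterations=10):
    """x, z: length-normalized source/target embedding matrices.
    seed_src/seed_trg: index arrays of the seed dictionary, e.g. 25 word
    pairs or a list of numerals shared across both languages."""
    src, trg = seed_src, seed_trg
    for _ in range(iterations):
        w = procrustes(x[src], z[trg])   # step 1: mapping from current dict
        sim = (x @ w) @ z.T              # step 2: re-induce the dictionary
        src = np.arange(x.shape[0])
        trg = sim.argmax(axis=1)         # nearest target word per source word
    return w, trg
```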
GEM-SciDuet-train-24#paper-1025#slide-3
1025
A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings
Recent work has managed to learn crosslingual word embeddings without parallel data by mapping monolingual embeddings to a shared space through adversarial training. However, their evaluation has focused on favorable conditions, using comparable corpora or closely-related languages, and we show that they often fail in ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related work", "Proposed method", "Embedding normalization", "Fully unsupervised initializa...
GEM-SciDuet-train-24#paper-1025#slide-3
Proposed method
1) Fully unsupervised initialization: for each word x in the vocabulary, sort its vector of intra-lingual similarities to all other words; translation pairs such as 'two'/'due' and 'dog'/'cane' have similar sorted similarity distributions, which are matched across languages (see the sketch after this record). 2) Robust self-learning: stochastic dictionary induction, frequency-based vocabulary cutoff, bidirectional dictionary induction. 3) Final symmetric re-weighting (Artetxe et al., 2018).
1) Fully unsupervised initialization: for each word x in the vocabulary, sort its vector of intra-lingual similarities to all other words; translation pairs such as 'two'/'due' and 'dog'/'cane' have similar sorted similarity distributions, which are matched across languages (see the sketch after this record). 2) Robust self-learning: stochastic dictionary induction, frequency-based vocabulary cutoff, bidirectional dictionary induction. 3) Final symmetric re-weighting (Artetxe et al., 2018).
[]
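A hedged sketch of the fully unsupervised initialization described above: each word's signature is its sorted vector of intra-lingual similarities, and words are matched across languages by nearest signature. Both vocabularies are assumed to be cut to the same size; the function names are ours.

```python
import numpy as np

def sorted_similarity_signature(emb: np.ndarray) -> np.ndarray:
    """Each word's signature: its similarities to all words of the SAME
    language, sorted descending. Translations get similar signatures."""
    norm = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    m = norm @ norm.T
    sig = np.sort(m, axis=1)[:, ::-1]
    return sig / np.linalg.norm(sig, axis=1, keepdims=True)

def initial_dictionary(x: np.ndarray, z: np.ndarray) -> np.ndarray:
    """Match each source word to the target word whose sorted similarity
    distribution is closest (assumes equal vocabulary cutoffs)."""
    sx, sz = sorted_similarity_signature(x), sorted_similarity_signature(z)
    return np.argmax(sx @ sz.T, axis=1)
```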
GEM-SciDuet-train-24#paper-1025#slide-4
1025
A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings
Recent work has managed to learn crosslingual word embeddings without parallel data by mapping monolingual embeddings to a shared space through adversarial training. However, their evaluation has focused on favorable conditions, using comparable corpora or closely-related languages, and we show that they often fail in ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related work", "Proposed method", "Embedding normalization", "Fully unsupervised initializa...
GEM-SciDuet-train-24#paper-1025#slide-4
Experiments
10 runs for each method. Table 1: number of successful runs (>5% accuracy) per method for ES-EN, IT-EN and TR-EN. Table 2: results on the (hard) dataset by Dinu et al. (2016) + extensions, comparing methods by supervision level for EN-IT, EN-DE, EN-FI and EN-ES.
10 runs for each method. Table 1: number of successful runs (>5% accuracy) per method for ES-EN, IT-EN and TR-EN. Table 2: results on the (hard) dataset by Dinu et al. (2016) + extensions, comparing methods by supervision level for EN-IT, EN-DE, EN-FI and EN-ES.
[]
GEM-SciDuet-train-24#paper-1025#slide-5
1025
A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings
Recent work has managed to learn crosslingual word embeddings without parallel data by mapping monolingual embeddings to a shared space through adversarial training. However, their evaluation has focused on favorable conditions, using comparable corpora or closely-related languages, and we show that they often fail in ...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "3.3", "3.4", "4", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related work", "Proposed method", "Embedding normalization", "Fully unsupervised initializa...
GEM-SciDuet-train-24#paper-1025#slide-5
Conclusions
Not a solved problem! More robust and accurate than previous methods. Future work: from bilingual to multilingual.
Not a solved problem! More robust and accurate than previous methods. Future work: from bilingual to multilingual.
[]
GEM-SciDuet-train-25#paper-1026#slide-0
1026
SemEval-2019 Task 1: Cross-lingual Semantic Parsing with UCCA
We present the SemEval 2019 shared task on Universal Conceptual Cognitive Annotation (UCCA) parsing in English, German and French, and discuss the participating systems and results. UCCA is a crosslinguistically applicable framework for semantic representation, which builds on extensive typological work and supports ra...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3", "4", "5", "2", "6", "8", "9" ], "paper_header_content": [ "Overview", "Task Definition", "Data & Resources", "TUPA: The Baseline Parser", "Evaluation", "·", "Participating Systems", "Discussion", "Con...
GEM-SciDuet-train-25#paper-1026#slide-0
Universal Conceptual Cognitive Annotation UCCA
Cross-linguistically applicable semantic representation (Abend and Rappoport, 2013). Builds on Basic Linguistic Theory (R. M. W. Dixon). Stable in translation (Sulem et al., 2015). [Example graph: 'After graduation John moved to Paris', with category labels P, D, L, A, A] Intuitive annotation interface and guidelines (Abend et al., 2017). The Task: UCCA parsing i...
Cross-linguistically applicable semantic representation (Abend and Rappoport, 2013). Builds on Basic Linguistic Theory (R. M. W. Dixon). Stable in translation (Sulem et al., 2015). [Example graph: 'After graduation John moved to Paris', with category labels P, D, L, A, A] Intuitive annotation interface and guidelines (Abend et al., 2017). The Task: UCCA parsing i...
[]
GEM-SciDuet-train-25#paper-1026#slide-1
1026
SemEval-2019 Task 1: Cross-lingual Semantic Parsing with UCCA
We present the SemEval 2019 shared task on Universal Conceptual Cognitive Annotation (UCCA) parsing in English, German and French, and discuss the participating systems and results. UCCA is a crosslinguistically applicable framework for semantic representation, which builds on extensive typological work and supports ra...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3", "4", "5", "2", "6", "8", "9" ], "paper_header_content": [ "Overview", "Task Definition", "Data & Resources", "TUPA: The Baseline Parser", "Evaluation", "·", "Participating Systems", "Discussion", "Con...
GEM-SciDuet-train-25#paper-1026#slide-1
Applications
Machine translation (Birch et al., 2016). Sentence splitting for text simplification (Sulem et al., 2018b). Grammatical error correction (Choshen and Abend, 2018), e.g. correcting 'He gve an apple for john' to 'He gave John an apple'.
Machine translation (Birch et al., 2016). Sentence splitting for text simplification (Sulem et al., 2018b). Grammatical error correction (Choshen and Abend, 2018), e.g. correcting 'He gve an apple for john' to 'He gave John an apple'.
[]
GEM-SciDuet-train-25#paper-1026#slide-2
1026
SemEval-2019 Task 1: Cross-lingual Semantic Parsing with UCCA
We present the SemEval 2019 shared task on Universal Conceptual Cognitive Annotation (UCCA) parsing in English, German and French, and discuss the participating systems and results. UCCA is a crosslinguistically applicable framework for semantic representation, which builds on extensive typological work and supports ra...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3", "4", "5", "2", "6", "8", "9" ], "paper_header_content": [ "Overview", "Task Definition", "Data & Resources", "TUPA: The Baseline Parser", "Evaluation", "·", "Participating Systems", "Discussion", "Con...
GEM-SciDuet-train-25#paper-1026#slide-2
Graph Structure
Labeled directed acyclic graphs (DAGs). Complex units are non-terminal nodes. Phrases may be discontinuous. Remote edges enable reentrancy. [Example graph: 'They thought about taking a short break', with category labels R, P, D, A and a remote edge] (A minimal graph data structure follows this record.)
Labeled directed acyclic graphs (DAGs). Complex units are non-terminal nodes. Phrases may be discontinuous. Remote edges enable reentrancy. [Example graph: 'They thought about taking a short break', with category labels R, P, D, A and a remote edge] (A minimal graph data structure follows this record.)
[]
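A minimal data-structure sketch (our own naming, not an official UCCA API) capturing the three properties listed: labeled edges, non-terminal units, and remote edges that give a unit a second parent.

```python
from dataclasses import dataclass, field

@dataclass
class Edge:
    parent: str
    child: str
    label: str            # UCCA categories, e.g. "P", "A", "D", "R"
    remote: bool = False  # remote edges add a second parent (reentrancy)

@dataclass
class UccaGraph:
    tokens: list
    edges: list = field(default_factory=list)

# "They thought about taking a short break": "They" participates both in
# "thought" and, via a remote edge, in the unit "taking a short break".
g = UccaGraph(tokens="They thought about taking a short break".split())
g.edges += [
    Edge("scene1", "They", "A"),
    Edge("scene1", "thought", "P"),
    Edge("scene2", "They", "A", remote=True),  # the reentrant attachment
]
```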
GEM-SciDuet-train-25#paper-1026#slide-3
1026
SemEval-2019 Task 1: Cross-lingual Semantic Parsing with UCCA
We present the SemEval 2019 shared task on Universal Conceptual Cognitive Annotation (UCCA) parsing in English, German and French, and discuss the participating systems and results. UCCA is a crosslinguistically applicable framework for semantic representation, which builds on extensive typological work and supports ra...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3", "4", "5", "2", "6", "8", "9" ], "paper_header_content": [ "Overview", "Task Definition", "Data & Resources", "TUPA: The Baseline Parser", "Evaluation", "·", "Participating Systems", "Discussion", "Con...
GEM-SciDuet-train-25#paper-1026#slide-3
Baseline
TUPA, a transition-based UCCA parser (Hershcovich et al., 2017). [Figure: TUPA architecture, a BiLSTM over the input 'They thought about taking a short break' feeding a classifier that predicts transitions such as NodeC] (A simplified transition-loop sketch follows this record.)
TUPA, a transition-based UCCA parser (Hershcovich et al., 2017). [Figure: TUPA architecture, a BiLSTM over the input 'They thought about taking a short break' feeding a classifier that predicts transitions such as NodeC] (A simplified transition-loop sketch follows this record.)
[]
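A drastically simplified skeleton of a transition-based parser in the spirit of TUPA: a stack, a buffer, and a classifier (in TUPA, a BiLSTM-based one) that picks the next action. The transition inventory here is a reduced, illustrative subset, not TUPA's full set.

```python
def parse(tokens, predict_action):
    """predict_action(stack, buffer) -> (action, label); in TUPA this is a
    neural classifier over BiLSTM features of the parser state."""
    stack, buffer, edges = [], list(tokens), []
    while buffer or len(stack) > 1:            # stop when one unit remains
        action, label = predict_action(stack, buffer)
        if action == "SHIFT":                  # move next token onto the stack
            stack.append(buffer.pop(0))
        elif action == "NODE":                 # create a non-terminal parent
            parent = ("unit", stack[-1])
            edges.append((parent, stack.pop(), label))
            stack.append(parent)
        elif action == "EDGE":                 # attach the two topmost items
            edges.append((stack[-2], stack[-1], label))
        elif action == "REDUCE":               # pop a finished unit
            stack.pop()
    return edges
```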
GEM-SciDuet-train-25#paper-1026#slide-4
1026
SemEval-2019 Task 1: Cross-lingual Semantic Parsing with UCCA
We present the SemEval 2019 shared task on Universal Conceptual Cognitive Annotation (UCCA) parsing in English, German and French, and discuss the participating systems and results. UCCA is a crosslinguistically applicable framework for semantic representation, which builds on extensive typological work and supports ra...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3", "4", "5", "2", "6", "8", "9" ], "paper_header_content": [ "Overview", "Task Definition", "Data & Resources", "TUPA: The Baseline Parser", "Evaluation", "·", "Participating Systems", "Discussion", "Con...
GEM-SciDuet-train-25#paper-1026#slide-4
Data
English Wikipedia articles (Wiki). English-French-German parallel corpus from Twenty Thousand Leagues Under the Sea (20K). [Table: number of sentences and tokens per corpus]
English Wikipedia articles (Wiki). English-French-German parallel corpus from Twenty Thousand Leagues Under the Sea (20K). [Table: number of sentences and tokens per corpus]
[]
GEM-SciDuet-train-25#paper-1026#slide-5
1026
SemEval-2019 Task 1: Cross-lingual Semantic Parsing with UCCA
We present the SemEval 2019 shared task on Universal Conceptual Cognitive Annotation (UCCA) parsing in English, German and French, and discuss the participating systems and results. UCCA is a crosslinguistically applicable framework for semantic representation, which builds on extensive typological work and supports ra...
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, ...
{ "paper_header_number": [ "1", "2", "3", "4", "5", "2", "6", "8", "9" ], "paper_header_content": [ "Overview", "Task Definition", "Data & Resources", "TUPA: The Baseline Parser", "Evaluation", "·", "Participating Systems", "Discussion", "Con...
GEM-SciDuet-train-25#paper-1026#slide-5
Tracks
French low-resource (only 15 training sentences)
French low-resource (only 15 training sentences)
[]