Columns: id (stringlengths 8–19), document (stringlengths 2.18k–16.2k), challenge (stringlengths 76–208), approach (stringlengths 79–223), outcome (stringlengths 84–209)
P06-1112
In this paper , we explore correlation of dependency relation paths to rank candidate answers in answer extraction . Using the correlation measure , we compare dependency relations of a candidate answer and mapped question phrases in sentence with the corresponding relations in question . Different from previous studie...
A generally accessible NER system for QA produces a larger answer candidate set, which is hard for current surface word-level ranking methods to handle.
They propose a statistical method that ranks candidate answers by taking into account correlations of dependency relation paths, computed with the Dynamic Time Warping algorithm.
The proposed method outperforms state-of-the-art syntactic relation-based methods by up to 20% and works even better on harder questions where NER performs poorly.
2020.acl-main.528
Recently , many works have tried to augment the performance of Chinese named entity recognition ( NER ) using word lexicons . As a representative , Lattice-LSTM ( Zhang and Yang , 2018 ) has achieved new benchmark results on several public Chinese NER datasets . However , Lattice-LSTM has a complex model architecture ....
Named entity recognition in Chinese requires either word segmentation, which causes errors, or a character-level model with lexical features, which is complex and expensive.
They propose to encode lexicon features into character representations, keeping the system simpler and achieving faster inference than previous models.
The proposed efficient character-based LSTM method with lexical features achieves 6.15 times faster inference speed and better performance than previous models.
P19-1352
Word embedding is central to neural machine translation ( NMT ) , which has attracted intensive research interest in recent years . In NMT , the source embedding plays the role of the entrance while the target embedding acts as the terminal . These layers occupy most of the model parameters for representation learning ...
Word embeddings occupy a large amount of memory, and weight tying does not mitigate this issue for distant language pairs on translation tasks.
They propose a language-independent method in which a model shares embeddings between source and target only when words have some common characteristics.
Experiments on machine translation datasets involving multiple language families and scripts show that the proposed model outperforms baseline models while using fewer parameters.
D12-1061
This paper explores log-based query expansion ( QE ) models for Web search . Three lexicon models are proposed to bridge the lexical gap between Web documents and user queries . These models are trained on pairs of user queries and titles of clicked documents . Evaluations on a real world data set show that the lexicon...
Term mismatches between a query and documents hinder retrieval of relevant documents, and existing approaches expand queries with black-box statistical machine translation models.
They propose to train lexicon query expansion models by using transaction logs that contain pairs of queries and titles of clicked documents.
The proposed query expansion model enables retrieval systems to significantly outperform systems using previous expansion models while being more transparent.
N07-1011
Traditional noun phrase coreference resolution systems represent features only of pairs of noun phrases . In this paper , we propose a machine learning method that enables features over sets of noun phrases , resulting in a first-order probabilistic model for coreference . We outline a set of approximations that make t...
Existing approaches treat noun phrase coreference resolution as a set of independent binary classifications, limiting features to pairs of noun phrases.
They propose a machine learning method that enables features over sets of noun phrases, coupled with a sampling method for scalability.
In an evaluation on the ACE coreference dataset, the proposed method achieves a 45% error reduction over a previous method.
2021.acl-long.67
Bilingual lexicons map words in one language to their translations in another , and are typically induced by learning linear projections to align monolingual word embedding spaces . In this paper , we show it is possible to produce much higher quality lexicons with methods that combine ( 1 ) unsupervised bitext mining ...
Existing methods to induce bilingual lexicons use linear projections to align word embeddings that are based on unrealistic simplifying assumptions.
They propose to use both unsupervised bitext mining and unsupervised word alignment methods to produce higher quality lexicons.
The proposed method achieves state-of-the-art results on the bilingual lexicon induction task while keeping the pipeline interpretable.
D18-1065
In this paper we show that a simple beam approximation of the joint distribution between attention and output is an easy , accurate , and efficient attention mechanism for sequence to sequence learning . The method combines the advantage of sharp focus in hard attention and the implementation ease of soft attention . O...
Softmax attention models are popular because they are differentiable and easy to implement, while hard attention models outperform them when successfully trained.
They propose a method to approximate the joint attention-output distribution, providing the sharp focus of hard attention and the implementation ease of soft attention.
The proposed approach outperforms soft attention models and recent hard attention and Sparsemax models on five translation tasks and also on morphological inflection tasks.
2022.acl-long.304
Contrastive learning has achieved impressive success in generation tasks to militate the " exposure bias " problem and discriminatively exploit the different quality of references . Existing works mostly focus on contrastive learning on the instance-level without discriminating the contribution of each word , while key...
Existing works on contrastive learning for text generation focus only on the instance level, while word-level information such as keywords is also of great importance.
They propose a CVAE-based hierarchical contrastive learning method at the instance and keyword levels, using a keyword graph that iteratively polishes the keyword representations.
The proposed model outperforms CVAE and baselines on storytelling, paraphrasing, and dialogue generation tasks.
2020.emnlp-main.384
Word embedding models are typically able to capture the semantics of words via the distributional hypothesis , but fail to capture the numerical properties of numbers that appear in a text . This leads to problems with numerical reasoning involving tasks such as question answering . We propose a new methodology to assi...
Existing word embeddings treat numbers like words, failing to capture the numeration and magnitude properties of numbers, which is problematic for tasks such as question answering.
They propose a deterministic technique to learn numerical embeddings where cosine similarity reflects the actual distance and a regularization approach for a contextual setting.
A Bi-LSTM network initialized with the proposed embedding shows the ability to capture numeration and magnitude and to perform list maximum, decoding, and addition.
P12-1103
We propose a novel approach to improve SMT via paraphrase rules which are automatically extracted from the bilingual training data . Without using extra paraphrase resources , we acquire the rules by comparing the source side of the parallel corpus with the target-to-source translations of the target side . Besides the...
Incorporating paraphrases improves statistical machine translation; however, no prior work investigates sentence-level paraphrases.
They propose to use bilingual training data to obtain paraphrase rules on word, phrase and sentence levels to rewrite inputs to be MT-favored.
The acquired paraphrase rules improve translation quality in the oral and news domains.
N09-1072
Automatically extracting social meaning and intention from spoken dialogue is an important task for dialogue systems and social computing . We describe a system for detecting elements of interactional style : whether a speaker is awkward , friendly , or flirtatious . We create and use a new spoken corpus of 991 4-minut...
Methods to extract social meaning, such as engagement, from speech remain underexplored, although this is important in sociolinguistics and for developing socially aware computing systems.
They create a spoken corpus of speed-dating conversations and perform analysis using extracted dialogue features with a focus on gender differences.
They find several gender-dependent and gender-independent conversational phenomena related to speaking rate, laughing, and asking questions.
P18-1256
We introduce the task of predicting adverbial presupposition triggers such as also and again . Solving such a task requires detecting recurring or similar events in the discourse context , and has applications in natural language generation tasks such as summarization and dialogue systems . We create two new datasets f...
Adverbial presupposition triggers indicate event recurrence, continuation, or termination in the discourse context and are frequent in English, but there are few related works.
They introduce an adverbial presupposition trigger prediction task and datasets and propose an attention mechanism that augments a recurrent neural network without additional trainable parameters.
The proposed model outperforms baselines including an LSTM-based language model on most of the triggers on the two datasets.
P08-1116
This paper proposes a novel method that exploits multiple resources to improve statistical machine translation ( SMT ) based paraphrasing . In detail , a phrasal paraphrase table and a feature function are derived from each resource , which are then combined in a log-linear SMT model for sentence-level paraphrase gener...
Paraphrase generation requires monolingual parallel corpora, which are not easily obtainable, and few works use extracted phrasal paraphrases for sentence-level paraphrase generation.
They propose to exploit six paraphrase resources to extract phrasal paraphrase tables that are further used to build a log-linear statistical machine translation-based paraphrasing model.
They show that using multiple resources enhances the precision of paraphrase generation at both the phrase and sentence level, especially when the resources are similar to user queries.
P08-1027
There are many possible different semantic relationships between nominals . Classification of such relationships is an important and difficult task ( for example , the well known noun compound classification task is a special case of this problem ) . We propose a novel pattern clusters method for nominal relationship (...
Using annotated data or semantic resources such as WordNet for relation classification introduces errors and such data is not available in many domains and languages.
They propose an unsupervised pattern clustering method for nominal relation classification using a large generic corpus, enabling scaling across domains and languages.
Experiments on the ACL SemEval-07 dataset show the proposed method performs better than existing methods that do not use disambiguation tags.
2021.emnlp-main.185
Learning sentence embeddings from dialogues has drawn increasing attention due to its low annotation cost and high domain adaptability . Conventional approaches employ the siamese-network for this task , which obtains the sentence embeddings through modeling the context-response semantic relevance by applying a feed-fo...
Existing methods to learn representations from dialogues have a similarity-measurement gap between training and evaluation time and do not exploit the multi-turn structure of data.
They propose a dialogue-based contrastive learning approach to learn sentence embeddings from dialogues by modelling semantic matching relationships between the context and response implicitly.
The proposed approach outperforms baseline methods on two newly introduced tasks coupled with three multi-turn dialogue datasets in terms of MAP and Spearman's correlation measures.
P02-1051
Named entity phrases are some of the most difficult phrases to translate because new phrases can appear from nowhere , and because many are domain specific , not to be found in bilingual dictionaries . We present a novel algorithm for translating named entity phrases using easily obtainable monolingual and bilingual re...
Translating named entities is challenging since they can appear from nowhere, and cannot be found in bilingual dictionaries because they are domain specific.
They propose an algorithm for Arabic-English named entity translation which uses easily obtainable monolingual and bilingual resources and a limited amount of hard-to-obtain bilingual resources.
The proposed algorithm is compared with human translators and a commercial system and it performs at near human translation.
E06-1014
Probabilistic Latent Semantic Analysis ( PLSA ) models have been shown to provide a better model for capturing polysemy and synonymy than Latent Semantic Analysis ( LSA ) . However , the parameters of a PLSA model are trained using the Expectation Maximization ( EM ) algorithm , and as a result , the trained model is d...
EM algorithm-based probabilistic latent semantic analysis models exhibit high variance in performance, and models with different initializations are not comparable.
They propose to use Latent Semantic Analysis to initialize probabilistic latent semantic analysis models; the EM algorithm is then used to refine the initial estimate.
They show that the model initialized in the proposed method always outperforms existing methods.
2021.naacl-main.34
We rely on arguments in our daily lives to deliver our opinions and base them on evidence , making them more convincing in turn . However , finding and formulating arguments can be challenging . In this work , we present the Arg-CTRL-a language model for argument generation that can be controlled to generate sentence-l...
Argumentative content generation can support humans, but current models produce lengthy texts and offer users little controllability over aspects of the argument.
They train a controllable language model on a corpus annotated with control codes provided by a stance detection model and introduce a dataset for evaluation.
The proposed model can generate genuine, argumentative, and grammatically correct arguments, as well as counter-arguments, in a transparent and interpretable way.
N16-1181
We describe a question answering model that applies to both images and structured knowledge bases . The model uses natural language strings to automatically assemble neural networks from a collection of composable modules . Parameters for these modules are learned jointly with network-assembly parameters via reinforcem...
Existing works on visual learning use manually-specified modular structures.
They propose a question-answering model trained jointly to translate questions into dynamically assembled neural networks and produce answers using images or knowledge bases.
The proposed model achieves state-of-the-art results on visual and structured domain datasets, showing that continuous representations improve the expressiveness and learnability of semantic parsers.
2020.aacl-main.88
Large pre-trained language models reach state-of-the-art results on many different NLP tasks when fine-tuned individually ; They also come with a significant memory and computational requirements , calling for methods to reduce model sizes ( green AI ) . We propose a two-stage model-compression method to reduce a model '...
Existing coarse-grained approaches for reducing the inference time of pretrained models remove layers, posing a trade-off between compression and model accuracy.
They propose a model-compression method that decomposes the weight matrices and performs feature distillation on the internal representations to recover from the decomposition.
The proposed method reduces the model size to 0.4x of the original and increases inference speed by 1.45x while keeping performance degradation minimal on the GLUE benchmark.
D16-1205
Several studies on sentence processing suggest that the mental lexicon keeps track of the mutual expectations between words . Current DSMs , however , represent context words as separate features , thereby loosing important information for word expectations , such as word interrelations . In this paper , we present a D...
Providing richer contexts to Distributional Semantic Models improves them by taking word interrelations into account, but it suffers from data sparsity.
They propose a Distributional Semantic Model that incorporates verb contexts as joint syntactic dependencies so that it emulates knowledge about event participants.
They show that representations obtained by the proposed model outperform more complex models on two verb similarity datasets with a limited training corpus.
2021.acl-long.57
In this paper , we propose Inverse Adversarial Training ( IAT ) algorithm for training neural dialogue systems to avoid generic responses and model dialogue history better . In contrast to standard adversarial training algorithms , IAT encourages the model to be sensitive to the perturbation in the dialogue history and...
Neural end-to-end dialogue models generate fluent yet dull and generic responses without taking dialogue histories into account due to the over-simplified maximum likelihood estimation objective.
They propose an algorithm that encourages the model to be sensitive to perturbations in the dialogue history and, by applying penalization, to generate more diverse and consistent responses.
The proposed approach can model dialogue history better and generate more diverse and consistent responses on OpenSubtitles and DailyDialog.
D09-1065
demonstrated that corpus-extracted models of semantic knowledge can predict neural activation patterns recorded using fMRI . This could be a very powerful technique for evaluating conceptual models extracted from corpora ; however , fMRI is expensive and imposes strong constraints on data collection . Following on expe...
The expensive cost of using fMRI hinders studies on the relationship between corpus-extracted models of semantic knowledge and neural activation patterns.
They propose to use EEG activation patterns instead of fMRI to reduce the cost.
They show that using EEG signals with corpus-based models, they can predict word level distinctions significantly above chance.
D09-1085
This paper introduces a new parser evaluation corpus containing around 700 sentences annotated with unbounded dependencies , from seven different grammatical constructions . We run a series of off-the-shelf parsers on the corpus to evaluate how well state-of-the-art parsing technology is able to recover such dependencie...
While recent statistical parsers perform well on Penn Treebank, the results can be misleading due to several reasons originating from evaluation and datasets.
They introduce a new corpus with unbounded dependencies from seven different grammatical constructions.
Their evaluation of existing parsers with the proposed corpus shows lower scores than reported in previous works indicating a poor ability to recover unbounded dependencies.
P12-1013
Learning entailment rules is fundamental in many semantic-inference applications and has been an active field of research in recent years . In this paper we address the problem of learning transitive graphs that describe entailment rules between predicates ( termed entailment graphs ) . We first identify that entailmen...
Current algorithms for learning entailment rules for semantic inference are inefficient, hindering the use of large resources.
They propose an efficient polynomial approximation algorithm that exploits their observation that entailment graphs have a "tree-like" property.
Their iterative algorithm runs orders of magnitude faster than current exact state-of-the-art solutions while achieving comparable quality.
D15-1054
Sponsored search is at the center of a multibillion dollar market established by search technology . Accurate ad click prediction is a key component for this market to function since the pricing mechanism heavily relies on the estimation of click probabilities . Lexical features derived from the text of both the query ...
Conventional word embeddings with a simple integration of click feedback information and averaging to obtain sentence representations do not work well for ad click prediction.
They propose several joint word embedding methods that leverage positive and negative click feedback, placing query vectors close to relevant ad vectors.
Features obtained from the new models improve ad click prediction on large sponsored search data from the commercial Yahoo! search engine.
D09-1072
We propose a new model for unsupervised POS tagging based on linguistic distinctions between open and closed-class items . Exploiting notions from current linguistic theory , the system uses far less information than previous systems , far simpler computational methods , and far sparser descriptions in learning context...
Current approaches tackle unsupervised POS tagging as a sequential labelling problem and require a complete knowledge of the lexicon.
They propose to first identify functional syntactic contexts and then use them to make predictions for POS tagging.
The proposed method achieves equivalent performance while using 0.6% of the lexical knowledge used by baseline models.
2021.naacl-main.458
Non-autoregressive Transformer is a promising text generation model . However , current non-autoregressive models still fall behind their autoregressive counterparts in translation quality . We attribute this accuracy gap to the lack of dependency modeling among decoder inputs . In this paper , we propose CNAT , which ...
Non-autoregressive translation models fall behind their autoregressive counterparts in translation quality due to the lack of dependency modelling for the target outputs.
They propose a non-autoregressive Transformer-based model that implicitly learns categorical codes as latent variables in decoding to complement the missing dependencies.
The proposed model achieves state-of-the-art performance without knowledge distillation and a decoding speedup competitive with iterative-based models when coupled with knowledge distillation and reranking techniques.
2021.emnlp-main.765
The clustering-based unsupervised relation discovery method has gradually become one of the important methods of open relation extraction ( OpenRE ) . However , high-dimensional vectors can encode complex linguistic information which leads to the problem that the derived clusters can not explicitly align with the relat...
High-dimensional vectors can encode complex linguistic information, but the representations used for relation extraction are not guaranteed to be consistent with relational semantic similarity.
They propose to use available relation labeled data to obtain relation-oriented representation by minimizing the distance between the same relation instances.
The proposed approach significantly reduces error rates compared with the best models for open relation extraction.
P10-1077
Prior use of machine learning in genre classification used a list of labels as classification categories . However , genre classes are often organised into hierarchies , e.g. , covering the subgenres of fiction . In this paper we present a method of using the hierarchy of labels to improve the classification accuracy ....
Existing genre classification methods achieve high accuracy without considering hierarchical structures and rely on unrealistic experimental setups, such as a limited number of genres and sources.
They propose a structural reformulation of the Support Vector Machine to take hierarchical information of genres into account by using similarities between different genres.
The proposed model outperforms non-hierarchical models on only one corpus, and they discuss that this may be due to insufficient depth or imbalance of the hierarchies.
2020.acl-main.282
The International Classification of Diseases ( ICD ) provides a standardized way for classifying diseases , which endows each disease with a unique code . ICD coding aims to assign proper ICD codes to a medical record . Since manual coding is very laborious and prone to errors , many methods have been proposed for the ...
Existing models that classify texts in medical records into the International Classification of Diseases reduce manual effort; however, they ignore code hierarchy and code co-occurrence.
They propose a hyperbolic representation method to leverage the code hierarchy and a graph convolutional network to utilize code co-occurrence for automatic coding.
The proposed model outperforms state-of-the-art methods on two widely used datasets.
N12-1028
The important mass of textual documents is in perpetual growth and requires strong applications to automatically process information . Automatic titling is an essential task for several applications : ' No Subject ' e-mails titling , text generation , summarization , and so forth . This study presents an original appro...
Automatically titling documents is a complex task because of its subjectivity and titles must be informative, catchy and syntactically correct.
They propose to approach automatic titling by normalizing a verb phrase selected to be relevant into a noun phrase with morphological and semantic processing.
They show that the proposed normalization process can produce informative and/or catchy titles, but evaluation remains challenging due to its subjectivity.
E14-1026
We present a simple preordering approach for machine translation based on a featurerich logistic regression model to predict whether two children of the same node in the source-side parse tree should be swapped or not . Given the pair-wise children regression scores we conduct an efficient depth-first branch-and-bound ...
Machine translation systems need preordering methods that involve little or no human assistance, run on limited computational resources, and make use of linguistic analysis tools.
They propose a logistic regression-based model with lexical features which predicts whether two children of the same node in the parse tree should be swapped.
Experiments on translation tasks from English to Japanese and Korean show the proposed method outperforms baseline preordering methods and runs 80 times faster.
2020.acl-main.443
There is an increasing interest in studying natural language and computer code together , as large corpora of programming texts become readily available on the Internet . For example , StackOverflow currently has over 15 million programming related questions written by 8.5 million users . Meanwhile , there is still a l...
Resources and fundamental techniques are missing for identifying software-related named entities such as variable names or application names within natural language texts.
They introduce a manually annotated named entity corpus for the computer programming domain and an attention-based model which incorporates a context-independent code token classifier.
The proposed model outperforms BiLSTM-CRF and fine-tuned BERT models, achieving a 79.10 F1 score for code and named entity recognition on their dataset.
D14-1205
Populating Knowledge Base ( KB ) with new knowledge facts from reliable text resources usually consists of linking name mentions to KB entities and identifying relationship between entity pairs . However , the task often suffers from errors propagating from upstream entity linkers to downstream relation extractors . In...
Existing pipeline approaches to populate Knowledge Base with new knowledge facts from texts suffer from error propagating from upstream entity linkers to downstream relation extractors.
They propose to formulate the problem in an Integer Linear Program to find an optimal configuration from the top k results of both tasks.
They show that the proposed framework can reduce error propagations and outperform competitive pipeline baselines with state-of-the-art relation extraction models.
N19-1233
Generative Adversarial Networks ( GANs ) are a promising approach for text generation that , unlike traditional language models ( LM ) , does not suffer from the problem of " exposure bias " . However , A major hurdle for understanding the potential of GANs for text generation is the lack of a clear evaluation metric ....
Generative Adversarial Networks-based text generation models do not suffer from the exposure bias problem; however, they cannot be evaluated with log-probability like other language models.
They propose a way to approximate distributions from GAN-based models' outputs so that they can be evaluated as standard language models.
When GAN-based models are compared using the proposed evaluation metric, they perform much worse than current best language models.
P14-1064
Statistical phrase-based translation learns translation rules from bilingual corpora , and has traditionally only used monolingual evidence to construct features that rescore existing translation candidates . In this work , we present a semi-supervised graph-based approach for generating new translation rules that leve...
The performance of statistical phrase-based translation is limited by the size of the available phrasal inventory both for resource rich and poor language pairs.
They propose a semi-supervised approach that produces new translation rules from monolingual data by phrase graph construction and graph propagation techniques.
Their method significantly improves over existing phrase-based methods on Arabic-English and Urdu-English systems when large language models are used.
N18-1114
We present a new approach to the design of deep networks for natural language processing ( NLP ) , based on the general technique of Tensor Product Representations ( TPRs ) for encoding and processing symbol structures in distributed neural networks . A network architecture -the Tensor Product Generation Network ( TPGN...
While Tensor Product Representations are a powerful model for obtaining vector embeddings of symbol structures, their application with deep learning models remains underexplored.
They propose a newly designed model that is based on Tensor Product Representations for encoding and processing words and sentences.
The Tensor Product Representation-based generative model outperforms LSTM models on the COCO image-captioning dataset and also achieves high interpretability.
N15-1159
This paper describes a simple and principled approach to automatically construct sentiment lexicons using distant supervision . We induce the sentiment association scores for the lexicon items from a model trained on a weakly supervised corpora . Our empirical findings show that features extracted from such a machine-l...
While sentiment lexicons are useful for building accurate sentiment classification systems, existing methods suffer from low recall or interpretability.
They propose to use Twitter's noisy opinion labels as distant supervision to learn a supervised polarity classifier and use it to obtain sentiment lexicons.
Using the obtained lexicon with an existing model achieves state-of-the-art results on the SemEval-13 message-level task and outperforms baseline models on several other datasets.
D07-1036
Parallel corpus is an indispensable resource for translation model training in statistical machine translation ( SMT ) . Instead of collecting more and more parallel training corpora , this paper aims to improve SMT performance by exploiting full potential of the existing parallel corpora . Two kinds of methods are pro...
Statistical machine translation systems rely on parallel corpora that are limited in domain and size, and a model trained on one domain does not perform well on other domains.
They propose offline and online methods to maximize the potential of available corpora by weighting training samples or submodules using an information retrieval model.
The proposed approaches improve translation quality without additional resources, using even less data; further experiments with larger training data show that the methods scale.
P03-1015
The paper describes two parsing schemes : a shallow approach based on machine learning and a cascaded finite-state parser with a hand-crafted grammar . It discusses several ways to combine them and presents evaluation results for the two individual approaches and their combination . An underspecification scheme for the...
Combining different methods often achieves the best results; in particular, combining shallow and deep parsing can realize both interpretability and good results.
They propose several ways to combine a machine learning-based shallow parser and a hand-crafted grammar-based cascaded finite-state parser.
Evaluations on a treebank of German newspaper texts show that the proposed method achieves substantial gains when there are ambiguities.
N09-1062
Tree substitution grammars ( TSGs ) are a compelling alternative to context-free grammars for modelling syntax . However , many popular techniques for estimating weighted TSGs ( under the moniker of Data Oriented Parsing ) suffer from the problems of inconsistency and over-fitting . We present a theoretically principle...
Although Probabilistic Context-Free Grammar-based models are currently successful, they suffer from inconsistency and over-fitting when learning from a treebank.
They propose a Probabilistic Tree Substitution Grammar model with a Bayesian training algorithm to accurately model the data while keeping the grammar simple.
The proposed model learns local structures for latent linguistic phenomena, outperforms standard methods, and is comparable to state-of-the-art methods on small data.
2020.emnlp-main.505
News headline generation aims to produce a short sentence to attract readers to read the news . One news article often contains multiple keyphrases that are of interest to different users , which can naturally have multiple reasonable headlines . However , most existing methods focus on the single headline generation ....
Existing news headline generation models focus on generating only a single headline even though news articles often contain multiple points of interest to different users.
They propose a multi-source transformer decoder and train it using a new large-scale keyphrase-aware news headline corpus built from a search engine.
Their model outperforms strong baselines on their new real-world keyphrase-aware headline generation dataset.
N16-1103
Universal schema builds a knowledge base ( KB ) of entities and relations by jointly embedding all relation types from input KBs as well as textual patterns observed in raw text . In most previous applications of universal schema , each textual pattern is represented as a single embedding , preventing generalization to...
Existing approaches that incorporate universal schemas for automatic knowledge base construction are limited in generalizing to inputs unseen at training time.
They propose to combine universal schemas and neural network-based deep encoders to achieve generalization to an unseen language without additional annotations.
The proposed approach outperforms existing methods on benchmarks in English and Spanish while having no hand-coded rules or training data for Spanish.
E17-1022
We propose UDP , the first training-free parser for Universal Dependencies ( UD ) . Our algorithm is based on PageRank and a small set of head attachment rules . It features two-step decoding to guarantee that function words are attached as leaf nodes . The parser requires no training , and it is competitive with a del...
For dependency parsing, unsupervised methods struggle to learn relations that match the conventions of the test data, while supervised delexicalized transfer methods suffer from word-order differences in the target language.
They propose an unsupervised approach based on PageRank and a set of head attachment rules that solely depend on explicit part-of-speech constraints from Universal Dependencies.
The proposed linguistically sound method performs competitively with a delexicalized transfer system while having few parameters and being robust to domain changes across languages.
P19-1081
We study a conversational reasoning model that strategically traverses through a largescale common fact knowledge graph ( KG ) to introduce engaging and contextually diverse entities and attributes . For this study , we collect a new Open-ended Dialog ↔ KG parallel corpus called OpenDialKG , where each utterance from 1...
Using a large knowledge base for dialogue systems is intractable or not scalable, which calls for methods that prune the search space for entities.
They provide an open-ended dialogue corpus where each utterance is annotated with entities and paths and propose a model that works on this data structure.
The proposed model can produce more natural responses than state-of-the-art models on automatic and human evaluation, and generated knowledge graph paths provide explainability.
D12-1011
Existing techniques for disambiguating named entities in text mostly focus on Wikipedia as a target catalog of entities . Yet for many types of entities , such as restaurants and cult movies , relational databases exist that contain far more extensive information than Wikipedia . This paper introduces a new task , call...
Existing approaches to disambiguating named entities rely solely on Wikipedia as a target catalogue; however, many kinds of named entities are missing from Wikipedia.
They propose a task in which systems must link named entities against arbitrary relational databases rather than only Wikipedia, together with methods for domain adaptation.
A mixture of two domain adaptation methods outperforms existing systems that only rely on Wikipedia for their new Open-DB Named Entity Disambiguation task.
2020.emnlp-main.308
Solving algebraic word problems has recently emerged as an important natural language processing task . To solve algebraic word problems , recent studies suggested neural models that generate solution equations by using ' Op ( operator / operand ) ' tokens as a unit of input / output . However , such a neural model suf...
Neural models largely underperform hand-crafted feature-based models on algebraic word problem datasets such as ALG514 because of two issues: expression fragmentation and operand-context separation.
They propose a model which generates an operator and required operands and applies operand-context pointers to resolve the expression fragmentation and operand-context separation issues respectively.
The proposed model achieves results comparable to state-of-the-art models with hand-crafted features and outperforms neural models by 40% on three datasets.
P98-1104
In this paper I will report the result of a quantitative analysis of the dynamics of the constituent elements of Japanese terminology . In Japanese technical terms , the linguistic contribution of morphemes greatly differ according to their types of origin . To analyse this aspect , a quantitative method is applied , w...
Static quantitative descriptions are not sufficient to analyse Japanese terminology because of the dynamic nature of morphemes, calling for a method that is not bound to the sample size.
They apply a quantitative method which can characterise the dynamic nature of morphemes using a small sample of Japanese terminology.
They show that the method can successfully characterise the dynamic nature of morphemes in Japanese terminology.
2021.acl-long.420
We study the problem of leveraging the syntactic structure of text to enhance pre-trained models such as BERT and RoBERTa . Existing methods utilize syntax of text either in the pre-training stage or in the fine-tuning stage , so that they suffer from discrepancy between the two stages . Such a problem would lead to th...
Existing ways of injecting syntactic knowledge into pretraining models cause discrepancies between pretraining and fine-tuning and require expensive annotation.
They propose to inject syntactic features obtained by an off-the-shelf parser into pretraining models coupled with a new syntax-aware attention layer.
The proposed model achieves state-of-the-art in relation classification, entity typing, and question answering tasks.
P16-1067
This paper proposes an unsupervised approach for segmenting a multiauthor document into authorial components . The key novelty is that we utilize the sequential patterns hidden among document elements when determining their authorships . For this purpose , we adopt Hidden Markov Model ( HMM ) and construct a sequential...
There is no method for segmenting a multi-author document into authorial components, which would benefit authorship verification, plagiarism detection, and author attribution.
They propose an HMM-based sequential probabilistic model that captures the dependencies between sequential sentences and their authors, coupled with an unsupervised initialization method.
Experiments with artificial and authentic scientific document datasets show that the proposed model outperforms existing methods and can also provide confidence scores.
N13-1083
We investigate two systems for automatic disfluency detection on English and Mandarin conversational speech data . The first system combines various lexical and prosodic features in a Conditional Random Field model for detecting edit disfluencies . The second system combines acoustic and language model scores for detec...
Existing works on detecting speech disfluencies, which can hamper downstream processing and transcript creation, focus only on English.
They evaluate a Conditional Random Field-based edit disfluency detection model and a system which combines acoustic and language model that detects filled pauses in Mandarin.
Their system comparisons in English and Mandarin show that combining lexical and prosodic features achieves improvements in both languages.
P01-1026
We propose a method to generate large-scale encyclopedic knowledge , which is valuable for much NLP research , based on the Web . We first search the Web for pages containing a term in question . Then we use linguistic patterns and HTML structures to extract text fragments describing the term . Finally , we organize ex...
Existing methods that extract encyclopedic knowledge from the Web output unorganized clusters of term descriptions, with no explicit criteria guiding the clustering.
They propose to use word senses and domains as explicit criteria for organizing term descriptions extracted from the Web, improving their quality.
The generated encyclopedia is applied to a Japanese question answering system and improves over a system that relies solely on a dictionary.
D15-1028
Research on modeling time series text corpora has typically focused on predicting what text will come next , but less well studied is predicting when the next text event will occur . In this paper we address the latter case , framed as modeling continuous inter-arrival times under a log-Gaussian Cox process , a form of...
Modeling the inter-arrival time of tweets is challenging due to complex temporal patterns, and few works aim to predict when the next text event will occur.
They propose to apply a log-Gaussian Cox process model which captures the varying arriving rate over time coupled with the textual contents of tweets.
The proposed model outperforms baseline models on an inter-arrival time prediction task on tweets around a rumour during riots, and textual features further improve it.
P18-1222
Hypertext documents , such as web pages and academic papers , are of great importance in delivering information in our daily life . Although being effective on plain documents , conventional text embedding methods suffer from information loss if directly adapted to hyper-documents . In this paper , we propose a general...
Existing text embedding methods do not take structures of hyper-documents into account losing useful properties for downstream tasks.
They propose an embedding method for hyper-documents that learns from citation information, along with four criteria for assessing the properties such models should preserve.
The proposed model satisfies all of the introduced criteria and performs two tasks in the academic domain better than existing models.
N18-1108
Recurrent neural networks ( RNNs ) have achieved impressive results in a variety of linguistic processing tasks , suggesting that they can induce non-trivial properties of language . We investigate here to what extent RNNs learn to track abstract hierarchical syntactic structure . We test whether RNNs trained with a ge...
Previous work only shows that RNNs can handle constructions that require hierarchical structure when explicit supervision on the target task is given.
They introduce a probing method for syntactic abilities to evaluate long-distance agreement on standard and nonsensical sentences in multiple languages with different morphological systems.
The RNNs trained on an LM objective can solve long-distance agreement problems well even on nonsensical sentences consistently across languages indicating their deeper grammatical competence.
D09-1115
Current system combination methods usually use confusion networks to find consensus translations among different systems . Requiring one-to-one mappings between the words in candidate translations , confusion networks have difficulty in handling more general situations in which several words are connected to another se...
System combination methods based on confusion networks only allow word-level one-to-one mappings, and some workarounds cause other problems such as degeneration.
They propose to use lattices for system combination, which can align a sequence of words to another sequence rather than requiring one-to-one word mappings, mitigating degeneration.
They show that their approach significantly outperforms the state-of-the-art confusion-network-based systems on Chinese-to-English translation tasks.
E17-1110
The growing demand for structured knowledge has led to great interest in relation extraction , especially in cases with limited supervision . However , existing distance supervision approaches only extract relations expressed in single sentences . In general , cross-sentence relation extraction is under-explored , even...
Existing distance supervision methods for relation extraction cannot capture relations crossing the sentence boundary which is important in specialized domains with long-tail knowledge.
They propose a method for applying distance supervision to cross-sentence relation extraction by adopting a document-level graph representation that incorporates intra-sentential dependencies and inter-sentential relations.
Experiments on extracting drug-gene interactions from biomedical literature show that the proposed method doubles the performance of single-sentence extraction methods.
P07-1026
Convolution tree kernel has shown promising results in semantic role classification . However , it only carries out hard matching , which may lead to over-fitting and less accurate similarity measure . To remove the constraint , this paper proposes a grammardriven convolution tree kernel for semantic role classificatio...
Despite its success in semantic role classification, convolution tree kernels based on the hard matching between two sub-trees suffer from over-fitting.
They propose to integrate a linguistically motivated grammar-based convolution tree kernel into a standard tree kernel to achieve better substructure matching and tree node matching.
The new grammar-driven tree kernel significantly outperforms baseline kernels on the CoNLL-2005 task.
E09-1032
We explore the problem of resolving the second person English pronoun you in multi-party dialogue , using a combination of linguistic and visual features . First , we distinguish generic and referential uses , then we classify the referential uses as either plural or singular , and finally , for the latter cases , we i...
Although the word "you" is frequently used and has several possible uses, such as referential or generic, it is not yet well studied.
They first manually distinguish generic and referential uses of the word "you", and then use a multimodal system to automate the classification.
They show that visual features can help distinguish the word "you" in multi-party conversations.
P10-1139
There is a growing research interest in opinion retrieval as on-line users ' opinions are becoming more and more popular in business , social networks , etc . Practically speaking , the goal of opinion retrieval is to retrieve documents , which entail opinions or comments , relevant to a target subject specified by the...
Existing approaches to the opinion retrieval task represent documents using bag-of-words disregarding contextual information between an opinion and its corresponding text.
They propose a sentence-based approach which captures both inter and intra sentence contextual information combined with a unified undirected graph.
The proposed method outperforms existing approaches on the COAE08 dataset showing that word pairs can represent information for opinion retrieval well.
N03-1024
We describe a syntax-based algorithm that automatically builds Finite State Automata ( word lattices ) from semantically equivalent translation sets . These FSAs are good representations of paraphrases . They can be used to extract lexical and syntactic paraphrase pairs and to generate new , unseen sentences that expre...
Existing approaches represent paraphrases as sets or pairs of semantically equivalent words, phrases, and patterns, which are weak representations for text generation purposes.
They propose a syntax-based algorithm that builds Finite State Automata from translation sets which are good representations of paraphrases.
Manual and automatic evaluations show that the representations extracted by the proposed method can be used for automatic translation evaluations.
P18-1159
While sophisticated neural-based techniques have been developed in reading comprehension , most approaches model the answer in an independent manner , ignoring its relations with other answer candidates . This problem can be even worse in open-domain scenarios , where candidates from multiple passages should be combine...
Existing models for reading comprehension do not consider multiple answer candidates which can be problematic when they need to fuse information from multiple passages.
They propose to approach reading comprehension with an extract-then-select procedure, where a model learns two tasks jointly using latent variables and reinforcement learning.
The proposed model can fuse answer candidates from multiple passages and significantly outperforms existing models on two open-domain reading comprehension tasks.
W06-1672
We present two discriminative methods for name transliteration . The methods correspond to local and global modeling approaches in modeling structured output spaces . Both methods do not require alignment of names in different languages -their features are computed directly from the names themselves . We perform an exp...
The name transliteration task aims to transcribe extracted names into English, and since current extraction systems are fairly fast, the transliteration techniques that can be applied alongside them are limited.
They present two discriminative methods that learn a mapping from names in one language to names in another using a dictionary, without requiring alignment.
The proposed methods outperform state-of-the-art probabilistic models on name transliteration from Arabic, Korean, and Russian to English, and the global discriminative modelling performs the best.
2022.acl-long.393
Motivated by the success of T5 ( Text-To-Text Transfer Transformer ) in pre-trained natural language processing models , we propose a unified-modal SpeechT5 framework that explores the encoder-decoder pre-training for self-supervised speech / text representation learning . The SpeechT5 framework consists of a shared en...
Existing speech pre-training methods ignore the importance of textual data and solely depend on encoders leaving the decoder out of pre-training for generation tasks.
They propose a unified-modal encoder-decoder framework with shared and modal-specific networks for self-supervised speech and text representation learning using unlabeled text and speech corpora.
The fine-tuned proposed model is evaluated on a variety of spoken language processing tasks and outperforms state-of-the-art models on voice conversion and speaker identification tasks.
D13-1158
Recent studies on extractive text summarization formulate it as a combinatorial optimization problem such as a Knapsack Problem , a Maximum Coverage Problem or a Budgeted Median Problem . These methods successfully improved summarization quality , but they did not consider the rhetorical relations between the textual u...
Existing optimization-based methods for extractive summarization do not consider the rhetorical relations between textual units, leading to incoherent summaries or missing significant textual units.
They propose to first transform a rhetorical discourse tree into a dependency-based tree and then trim it as a Tree Knapsack Problem.
The proposed method achieves the highest ROUGE-1,2 scores on 30 documents selected from the RST Discourse Treebank Corpus.
D19-1098
Pre-training Transformer from large-scale raw texts and fine-tuning on the desired task have achieved state-of-the-art results on diverse NLP tasks . However , it is unclear what the learned attention captures . The attention computed by attention heads seems not to match human intuitions about hierarchical structures ...
It is unclear what the attention heads of pre-trained Transformer models capture, and the computed attention seems not to match human intuitions about hierarchical structures.
They propose to add an extra constraint to the attention heads of the bidirectional Transformer encoder, together with a module that induces tree structures from raw texts.
The proposed model achieves better unsupervised tree structure induction and language modelling, and produces more explainable attention scores that are coherent with human expert annotations.
D09-1131
This paper employs morphological structures and relations between sentence segments for opinion analysis on words and sentences . Chinese words are classified into eight morphological types by two proposed classifiers , CRF classifier and SVM classifier . Experiments show that the injection of morphological information...
There is little work on applying morphological information to opinion extraction in Chinese.
They propose to utilize morphological and syntactic features for Chinese opinion analysis on word and sentence levels.
They show that using morphological structures helps opinion analysis in Chinese, outperforming the existing bag-of-characters approach and the dictionary-based approach.
P19-1252
In this paper , we investigate the importance of social network information compared to content information in the prediction of a Twitter user 's occupational class . We show that the content information of a user 's tweets , the profile descriptions of a user 's follower / following community , and the user 's social...
Existing systems use only limited information from the Twitter network to perform occupation classification.
They extend existing Twitter occupation classification graph-based models to exploit content information by adding textual data to existing datasets.
They show that textual features enable graph neural networks to predict Twitter users' occupations well even with a limited amount of training data.
P06-1073
Short vowels and other diacritics are not part of written Arabic scripts . Exceptions are made for important political and religious texts and in scripts for beginning students of Arabic . Script without diacritics have considerable ambiguity because many words with different diacritic patterns appear identical in a di...
Short vowels and other diacritics are not expressed in written Arabic, making texts difficult to read for beginning readers and difficult to process for system developers.
They propose an approach that uses maximum entropy to restore diacritics in Arabic documents by learning relations among a wide variety of features.
They show that by leveraging various kinds of features, their system outperforms the existing state-of-the-art diacritization model.
2021.eacl-main.251
Current state-of-the-art systems for joint entity relation extraction ( Luan et al . , 2019 ; Wadden et al . , 2019 ) usually adopt the multi-task learning framework . However , annotations for these additional tasks such as coreference resolution and event extraction are always equally hard ( or even harder ) to obtai...
Current joint entity relation extraction models follow a multi-task learning setup; however, datasets with multiple types of annotations are not available in many domains.
They propose to pre-train a language model for entity relation extraction with four newly introduced objective functions that utilize annotations automatically obtained from NER models.
The models pre-trained by the proposed method significantly outperform BERT and current state-of-the-art models on three entity relation extraction datasets.
P16-1089
We present the Siamese Continuous Bag of Words ( Siamese CBOW ) model , a neural network for efficient estimation of highquality sentence embeddings . Averaging the embeddings of words in a sentence has proven to be a surprisingly successful and efficient way of obtaining sentence embeddings . However , word embeddings...
While an average of word embeddings has proven to be successful as sentence-level representations, it is suboptimal because they are not optimized to represent sentences.
They propose to train word embeddings directly for the purpose of being averaged, by predicting surrounding sentences from a sentence representation using unlabeled data.
Evaluations show that their word embeddings outperform existing methods on 14 out of 20 datasets and are stable across parameter choices.
D17-1222
We propose a new framework for abstractive text summarization based on a sequence-to-sequence oriented encoderdecoder model equipped with a deep recurrent generative decoder ( DRGN ) . Latent structure information implied in the target summaries is learned based on a recurrent latent random model for improving the summ...
Although humans follow inherent structures in summary writing, currently there are no abstractive summarization models which take latent structure information and recurrent dependencies into account.
They propose a Variational Auto-Encoder-based sequence-to-sequence oriented encoder-decoder model with a deep recurrent generative decoder which learns latent structure information implied in the target summaries.
The proposed model outperforms the state-of-the-art models on some datasets in different languages.
2021.naacl-main.72
Multi-layer multi-head self-attention mechanism is widely applied in modern neural language models . Attention redundancy has been observed among attention heads but has not been deeply studied in the literature . Using BERT-base model as an example , this paper provides a comprehensive study on attention redundancy wh...
While several works report redundancy among attention heads in modern language models, no work investigates its patterns deeply.
They perform token- and sentence-level analyses of redundancy matrices from pre-trained and fine-tuned BERT-base models and further propose a pruning method based on these findings.
They find that many heads are redundant regardless of phase and task, and show the proposed pruning method can perform robustly.
D17-1220
Comprehending lyrics , as found in songs and poems , can pose a challenge to human and machine readers alike . This motivates the need for systems that can understand the ambiguity and jargon found in such creative texts , and provide commentary to aid readers in reaching the correct interpretation . We introduce the t...
Because of its creative nature, understanding lyrics can be challenging both for humans and machines.
They propose a task of automated lyric annotation with a dataset collected from an online platform which explains lyrics to readers.
They evaluate translation and retrieval models with automatic and human evaluation and show that different models capture different aspects well.
H05-1023
Most statistical translation systems are based on phrase translation pairs , or " blocks " , which are obtained mainly from word alignment . We use blocks to infer better word alignment and improved word alignment which , in turn , leads to better inference of blocks . We propose two new probabilistic models based on t...
Automatic word alignment used in statistical machine translation does not achieve satisfactory performance in some language pairs because of the limitations of HMMs.
They propose to use phrase translation pairs to obtain better word alignments with two new probabilistic models, trained with the EM algorithm, that localize the alignments.
The proposed models outperform IBM Model-4 by 10% on both small and large training setups, and translation models built on the improved alignments achieve better quality.
E06-1051
We propose an approach for extracting relations between entities from biomedical literature based solely on shallow linguistic information . We use a combination of kernel functions to integrate two different information sources : ( i ) the whole sentence where the relation appears , and ( ii ) the local contexts aroun...
Deep linguistic features obtained by parsers are not always robust and are available only for limited languages and domains; however, applications of shallow features are under-investigated.
They propose an approach for entity relation extraction using shallow linguistic information such as tokenization, sentence splitting, Part-of-Speech tagging and lemmatization coupled with kernel functions.
Evaluations on two biomedical datasets show that the proposed method outperforms existing systems that depend on syntactic or manually annotated semantic information.
N09-1032
Domain adaptation is an important problem in named entity recognition ( NER ) . NER classifiers usually lose accuracy in the domain transfer due to the different data distribution between the source and the target domains . The major reason for performance degrading is that each entity type often has lots of domainspec...
Named entity recognition classifiers lose accuracy in domain transfers because each entity type has domain-specific term representations, and existing approaches require expensive labeled data.
They propose to capture latent semantic associations among words in the unlabeled corpus and use them to tune original named entity models.
The proposed model improves performance on English and Chinese corpora across domains, especially for the recognition of each NE type.
D09-1030
Manual evaluation of translation quality is generally thought to be excessively time consuming and expensive . We explore a fast and inexpensive way of doing it using Amazon 's Mechanical Turk to pay small sums to a large number of non-expert annotators . For $ 10 we redundantly recreate judgments from a WMT08 translat...
Because of the high cost required for manual evaluation, most works rely on automatic evaluation metrics although there are several drawbacks.
They investigate whether judgements by non-experts from Amazon's Mechanical Turk can be a fast and inexpensive means of evaluation for machine translation systems.
They find that non-expert judgements with high agreement correlate better with gold-standard judgements than BLEU does, while keeping the cost low.
D18-1133
State-of-the-art networks that model relations between two pieces of text often use complex architectures and attention . In this paper , instead of focusing on architecture engineering , we take advantage of small amounts of labelled data that model semantic phenomena in text to encode matching features directly in th...
State-of-the-art models that capture relations between two texts use complex architectures and attention, which require long training times and large amounts of data.
They propose a method that directly models higher-level semantic links between two texts that are annotated by a fast model.
The proposed model outperforms a tree kernel model and complex neural models while keeping the model simple and the training fast.
2021.emnlp-main.411
Language representations are known to carry certain associations ( e.g. , gendered connotations ) which may lead to invalid and harmful predictions in downstream tasks . While existing methods are effective at mitigating such unwanted associations by linear projection , we argue that they are too aggressive : not only ...
Existing methods that remove harmful stereotypical associations from word embeddings either require inefficient retraining or remove information which should be retained.
They propose a method which orthogonalizes and rectifies incorrectly associated subspaces of concepts in an embedding space and a metric for evaluating information retention.
NLI-based evaluation on gender-occupation associations shows that the proposed approach is well-balanced ensuring semantic information is retained in the embeddings while mitigating biases.
2020.acl-main.75
Humor plays an important role in human languages and it is essential to model humor when building intelligence systems . Among different forms of humor , puns perform wordplay for humorous effects by employing words with double entendre and high phonetic similarity . However , identifying and modeling puns are challeng...
Puns involve implicit semantic or phonological tricks; however, there is no general framework that models these two types of signals jointly.
They propose to jointly model contextualized word embeddings and phonological word representations by breaking each word into a sequence of phonemes for pun detection.
The proposed approach outperforms the state-of-the-art methods in pun detection and location tasks.
D10-1083
Part-of-speech ( POS ) tag distributions are known to exhibit sparsity -a word is likely to take a single predominant tag in a corpus . Recent research has demonstrated that incorporating this sparsity constraint improves tagging accuracy . However , in existing systems , this expansion come with a steep increase in mo...
Assuming only one tag per word is a powerful heuristic for part-of-speech tagging, but incorporating this constraint into a model increases its complexity.
They propose an unsupervised method that directly incorporates a one-tag-per-word assumption into an HMM-based model.
Their proposed method reduces the number of model parameters which results in faster training speed and also outperforms more complex systems.
P10-1072
We present a game-theoretic model of bargaining over a metaphor in the context of political communication , find its equilibrium , and use it to rationalize observed linguistic behavior . We argue that game theory is well suited for modeling discourse as a dynamic resulting from a number of conflicting pressures , and ...
Metaphors used in political arguments set up elaborate conceptual correspondences, and the tendency of politicians to be drawn into a rival's metaphorical framework calls for explanation.
They propose a game-theoretic model of bargaining over a metaphor, which is well suited to modeling its dynamics, and use it to rationalize observed linguistic behavior.
They show that the proposed framework can rationalize political communications with the use of extended metaphors based on the characteristics of the setting.
2020.acl-main.47
We examine a methodology using neural language models ( LMs ) for analyzing the word order of language . This LM-based method has the potential to overcome the difficulties existing methods face , such as the propagation of preprocessor errors in count-based methods . In this study , we explore whether the LMbased meth...
Linguistic approaches to analyzing word order phenomena suffer from scalability and preprocessor error propagation problems, and the use of language models has been limited to English.
They validate language models as a tool to study word order in Japanese by examining the relationship between canonical word order and generation probability.
They show that language models have sufficient word order knowledge in Japanese to be used as a tool for linguists.
2021.naacl-main.150
A conventional approach to improving the performance of end-to-end speech translation ( E2E-ST ) models is to leverage the source transcription via pre-training and joint training with automatic speech recognition ( ASR ) and neural machine translation ( NMT ) tasks . However , since the input modalities are different ...
Existing works on end-to-end speech translation models use source transcriptions for performance improvements, but this is challenging due to the modality gap.
They propose a bidirectional sequence knowledge distillation which learns from text-based NMT systems with a single decoder to enhance the model to capture semantic representations.
Evaluations on autoregressive and non-autoregressive models show that the proposed method improves in both directions and the results are consistent in different model sizes.
N07-1072
This paper explores the problem of computing text similarity between verb phrases describing skilled human behavior for the purpose of finding approximate matches . Four parsers are evaluated on a large corpus of skill statements extracted from an enterprise-wide expertise taxonomy . A similarity measure utilizing comm...
Existing systems for skilled expertise matching use exact matching between skill statements resulting in missing good matches and calling for a system with approximate matching.
They evaluate four different parsers to take structural information into consideration by matching skill statements on corresponding semantic roles from generated parse trees.
The proposed similarity measure outperforms a standard statistical information-theoretic measure and is comparable to human agreement.
2020.acl-main.573
Continual relation learning aims to continually train a model on new data to learn incessantly emerging novel relations while avoiding catastrophically forgetting old relations . Some pioneering work has proved that storing a handful of historical relation examples in episodic memory and replaying them in subsequent tr...
Storing histories of examples is shown to be effective for continual relation learning; however, existing methods overfit to the few memorized examples of old relations.
They propose a method inspired by the human memory mechanisms of activation and reconsolidation, which aims to keep a stable understanding of old relations.
The proposed method mitigates catastrophic forgetting of old relations and achieves state-of-the-art on several relation extraction datasets showing it can use memorized examples.
2021.emnlp-main.66
This paper proposes to study a fine-grained semantic novelty detection task , which can be illustrated with the following example . It is normal that a person walks a dog in the park , but if someone says " A man is walking a chicken in the park , " it is novel . Given a set of natural language descriptions of normal s...
Existing works on novelty or anomaly detection are coarse-grained, treating the problem at the document or sentence level as a text classification task.
They propose a fine-grained semantic novelty detection problem where systems detect whether a textual description is a novel fact, coupled with a graph attention-based model.
The proposed model outperforms 11 baseline models by large margins on a dataset created from an image caption dataset for the proposed task.
P12-1096
Long distance word reordering is a major challenge in statistical machine translation research . Previous work has shown using source syntactic trees is an effective way to tackle this problem between two languages with substantial word order difference . In this work , we further extend this line of exploration and pr...
Long distance word reordering remains a challenge for statistical machine translation, and existing approaches handle it during preprocessing.
They propose a ranking-based reordering approach where the ranking model is automatically derived from the word aligned parallel data using a syntax parser.
Large-scale evaluations on Japanese-English and English-Japanese show that the proposed approach significantly outperforms the baseline phrase-based statistical machine translation system.
D08-1038
How can the development of ideas in a scientific field be studied over time ? We apply unsupervised topic modeling to the ACL Anthology to analyze historical trends in the field of Computational Linguistics from 1978 to 2006 . We induce topic clusters using Latent Dirichlet Allocation , and examine the strength of each...
How topics and ideas have developed over time in the NLP community remains unknown, although there are analyses of the ACL Anthology citation graph.
They propose to use Latent Dirichlet Allocation for studying topic shift over time and a model to compute the diversity of ideas and topic entropy.
They find that COLING has more diversity than ACL, that all the conferences are coming to cover more topics over time, and that applications are generally increasing.
P17-1024
In this paper , we aim to understand whether current language and vision ( LaVi ) models truly grasp the interaction between the two modalities . To this end , we propose an extension of the MS-COCO dataset , FOIL-COCO , which associates images with both correct and ' foil ' captions , that is , descriptions of the ima...
Despite the success of language and vision models on visual question answering tasks, what these models are learning remains unknown because of coarse-grained datasets.
They propose to automatically inject one mistake to captions in the MS-COCO dataset as a foil word and three diagnostic tasks to study models' behaviors.
Using the introduced dataset, they find that the best performing models fail on the proposed tasks, indicating their limited ability to integrate the two modalities.
D17-1323
Language is increasingly being used to define rich visual recognition problems with supporting image collections sourced from the web . Structured prediction models are used in these tasks to take advantage of correlations between co-occurring labels and visual input but risk inadvertently encoding social biases found ...
Language is used for visual recognition problems such as captioning to improve performance however it can also encode social biases found in web corpora.
They propose a framework to quantify bias for visual semantic role labelling and multilabel object classification and a constraint inference framework to calibrate existing models.
They find that existing datasets contain gender bias, that models can amplify it, and that the proposed framework reduces bias without performance loss.
P16-1177
We present a pairwise context-sensitive Autoencoder for computing text pair similarity . Our model encodes input text into context-sensitive representations and uses them to compute similarity between text pairs . Our model outperforms the state-of-the-art models in two semantic retrieval tasks and a contextual word si...
Existing approaches for textual representation learning use only local information and ignore context, which carries global information that can guide neural networks toward accurate representations.
They propose a pairwise context-sensitive Autoencoder which integrates sentential or document context for computing text pair similarity.
The proposed model outperforms the state-of-the-art models in two retrieval and word similarity tasks and an unsupervised version performs comparable with several supervised baselines.
D09-1066
Distance-based ( windowless ) word assocation measures have only very recently appeared in the NLP literature and their performance compared to existing windowed or frequency-based measures is largely unknown . We conduct a largescale empirical comparison of a variety of distance-based and frequency-based measures for ...
The performance of new windowless word association measures which take the number of tokens separating words into account remains unknown.
They conduct large-scale empirical comparisons of window-based and windowless association measures for the reproduction of syntagmatic human association norms.
The best windowless measures perform on par with the best window-based measures on correlation with human association scores.
D09-1042
This paper presents an effective method for generating natural language sentences from their underlying meaning representations . The method is built on top of a hybrid tree representation that jointly encodes both the meaning representation as well as the natural language in a tree structure . By using a tree conditio...
While hybrid trees are shown to be effective for semantic parsing, their application to text generation is underexplored.
They propose a phrase-level tree conditional random field that uses a hybrid tree of a meaning representation for the text generation model.
Experiments in four languages with automatic evaluation metrics show that the proposed conditional random field-based model outperforms the previous state-of-the-art system.
P98-1081
In this paper we examine how the differences in modelling between different data driven systems performing the same NLP task can be exploited to yield a higher accuracy than the best individual system . We do this by means of an experiment involving the task of morpho-syntactic wordclass tagging . Four well-known tagge...
Different data-driven approaches tend to produce different errors, and their quality is limited by the learning method and the available training material.
They propose to combine four different modelling methods for the task of morpho-syntactic wordclass tagging by using several voting strategies and second stage classifiers.
All combinations outperform the best component, with the best one showing a 19.1% lower error rate and raising the performance ceiling.
2020.emnlp-main.500
Adversarial attacks for discrete data ( such as texts ) have been proved significantly more challenging than continuous data ( such as images ) since it is difficult to generate adversarial samples with gradient-based methods . Current successful attack methods for texts usually adopt heuristic replacement strategies o...
Generating adversarial samples with gradient-based methods is difficult for text data because of its discrete nature, and existing complicated heuristic-based methods struggle to find optimal solutions.
They propose to use BERT to generate adversarial samples by first finding the valuable words and generating substitutes for these words in a semantic-preserving way.
The proposed method outperforms state-of-the-art methods in success rate and perturbation percentage while preserving the fluency and semantics of generated samples at low cost.
E17-1060
We investigate the generation of onesentence Wikipedia biographies from facts derived from Wikidata slot-value pairs . We train a recurrent neural network sequence-to-sequence model with attention to select facts and generate textual summaries . Our model incorporates a novel secondary objective that helps ensure it ge...
Wikipedia and other collaborative knowledge bases have coverage and quality issues especially on a long tail of specialist topics.
They propose a recurrent neural network sequence-to-sequence model with an attention mechanism trained on a multi-task autoencoding objective to generate one-sentence Wikipedia biographies from Wikidata.
The proposed model achieves a BLEU score of 41, outperforming the baseline model, and human annotators judge 40% of the outputs to be as good as Wikipedia gold references.
D08-1050
Most state-of-the-art wide-coverage parsers are trained on newspaper text and suffer a loss of accuracy in other domains , making parser adaptation a pressing issue . In this paper we demonstrate that a CCG parser can be adapted to two new domains , biomedical text and questions for a QA system , by using manually-anno...
Most existing parsers are tuned for newspaper texts making them limited in applicable domains.
They propose a method to adapt a CCG parser to new domains using manually-annotated data only at POS and lexical category levels.
The proposed method achieves comparable results to in-domain parsers without expensive full annotations on biomedical texts and questions that are rare in existing benchmark datasets.