Dataset Viewer
Auto-converted to Parquet
Columns: source (sequence of strings), target (string, lengths 95–1.47k)
[ "abstract: We present our approach to the problem of how an agent, within an economic Multi-Agent System, can determine when it should behave strategically (i.e. learn and use models of other agents), and when it should act as a simple price-taker. We provide a framework for the incremental implementation of modeli...
Within the MAS community, some work @cite_1 has focused on how artificial AI-based learning agents would fare in communities of similar agents. For example, @cite_2 and show how agents can learn the capabilities of others via repeated interactions, but these agents do not learn to predict what actions others might take....
[ "abstract: Abstract Interaction in virtual reality (VR) environments (e.g. grasping and manipulating virtual objects) is essential to ensure a pleasant and immersive experience. In this work, we propose a visually realistic, flexible and robust grasping system that enables real-time interactions in virtual environm...
The grasping action is the most basic component of any interaction and is itself composed of three major stages @cite_1 . The first is related to the process of bringing the arm and hand to the target object, considering the overall body movement. The second component focuses on the hand and body pre-shaping before ...
[ "abstract: Abstract Interaction in virtual reality (VR) environments (e.g. grasping and manipulating virtual objects) is essential to ensure a pleasant and immersive experience. In this work, we propose a visually realistic, flexible and robust grasping system that enables real-time interactions in virtual environm...
Data-driven grasping approaches have existed for a long time @cite_1 . These methods are based on large databases of predefined hand poses selected using user criteria or based on grasp taxonomies (i.e. final grasp poses when an object was successfully grasped), which provide the ability to discriminate between...
[ "abstract: Abstract Interaction in virtual reality (VR) environments (e.g. grasping and manipulating virtual objects) is essential to ensure a pleasant and immersive experience. In this work, we propose a visually realistic, flexible and robust grasping system that enables real-time interactions in virtual environm...
The selection process is also constrained by the hand's high number of degrees of freedom (DOFs). In order to deal with dimensionality and redundancy, many researchers have used techniques such as principal component analysis (PCA) @cite_1 @cite_2 . For the same purpose, studied the correlations between hand DOFs aiming to simplify h...
[ "abstract: Abstract Interaction in virtual reality (VR) environments (e.g. grasping and manipulating virtual objects) is essential to ensure a pleasant and immersive experience. In this work, we propose a visually realistic, flexible and robust grasping system that enables real-time interactions in virtual environm...
In order to achieve realistic object interactions, physical simulations on the objects should also be considered @cite_1 @cite_2 . Moreover, hand and finger movement trajectories need to be both kinematically and dynamically valid @cite_3 . @cite_1 simulate hand interaction, such as two hands grasping each other in th...
[ "abstract: Graph Interpolation Grammars are a declarative formalism with an operational semantics. Their goal is to emulate salient features of the human parser, and notably incrementality. The parsing process defined by GIGs incrementally builds a syntactic representation of a sentence as each successive lexeme is...
Graph interpolation can be viewed as an extension of tree adjunction to parse graphs. And, indeed, TAGs @cite_1 , by introducing a 2-dimensional formalism into computational linguistics, have made a decisive step towards designing a syntactic theory that is both computationally tractable and linguistically realistic. I...
[ "abstract: Graph Interpolation Grammars are a declarative formalism with an operational semantics. Their goal is to emulate salient features of the human parser, and notably incrementality. The parsing process defined by GIGs incrementally builds a syntactic representation of a sentence as each successive lexeme is...
In Lexical Functional Grammars @cite_1 , grammatical functions are loosely coupled with phrase structure, which seems to be just the opposite of what is done in a GIG, in which functional edges are part of the phrase structure. Nonetheless, these two approaches share the concern of bringing out a functional structure, ...
[ "abstract: Automatic text categorization is a complex and useful task for many natural language processing applications. Recent approaches to text categorization focus more on algorithms than on resources involved in this operation. In contrast to this trend, we present an approach based on the integration of widel...
To our knowledge, lexical databases have been used only once in TC. Hearst @cite_1 adapted a disambiguation algorithm by Yarowsky using WordNet to recognize category occurrences. Categories are made of WordNet terms, which is not generally the case for standard or user-defined categories. It is a hard task to adapt WordNe...
[ "abstract: Automatic text categorization is a complex and useful task for many natural language processing applications. Recent approaches to text categorization focus more on algorithms than on resources involved in this operation. In contrast to this trend, we present an approach based on the integration of widel...
Lexical databases have been employed recently in word sense disambiguation. For example, Agirre and Rigau @cite_1 make use of a semantic distance that takes into account structural factors in WordNet for achieving good results for this task. Additionally, Resnik @cite_2 combines the use of WordNet and a text collection...
[ "abstract: This paper describes an experimental comparison of three unsupervised learning algorithms that distinguish the sense of an ambiguous word in untagged text. The methods described in this paper, McQuitty's similarity analysis, Ward's minimum-variance method, and the EM algorithm, assign each instance of an...
Word--sense disambiguation has more commonly been cast as a problem in supervised learning (e.g., @cite_1 , , @cite_2 , @cite_6 , @cite_4 , @cite_5 , @cite_6 , @cite_7 , @cite_8 ). However, all of these methods require that manually sense tagged text be available to train the algorithm. For most domains such text is no...
[ "abstract: This paper describes an experimental comparison of three unsupervised learning algorithms that distinguish the sense of an ambiguous word in untagged text. The methods described in this paper, McQuitty's similarity analysis, Ward's minimum-variance method, and the EM algorithm, assign each instance of an...
A more recent bootstrapping approach is described in @cite_1 . This algorithm requires a small number of training examples to serve as a seed. There are a variety of options discussed for automatically selecting seeds; one is to identify collocations that uniquely distinguish between senses. For plant , the collocation...
[ "abstract: This paper describes an experimental comparison of three unsupervised learning algorithms that distinguish the sense of an ambiguous word in untagged text. The methods described in this paper, McQuitty's similarity analysis, Ward's minimum-variance method, and the EM algorithm, assign each instance of an...
While @cite_1 does not discuss distinguishing more than 2 senses of a word, there is no immediate reason to doubt that the ``one sense per collocation'' rule @cite_2 would still hold for a larger number of senses. In future work we will evaluate using the ``one sense per collocation'' rule to seed our various methods. This...
[ "abstract: This paper describes an experimental comparison of three unsupervised learning algorithms that distinguish the sense of an ambiguous word in untagged text. The methods described in this paper, McQuitty's similarity analysis, Ward's minimum-variance method, and the EM algorithm, assign each instance of an...
Clustering has most often been applied in natural language processing as a method for inducing syntactic or semantically related groupings of words (e.g., , @cite_2 , , @cite_3 , , @cite_4 ).
[ "abstract: This paper describes an experimental comparison of three unsupervised learning algorithms that distinguish the sense of an ambiguous word in untagged text. The methods described in this paper, McQuitty's similarity analysis, Ward's minimum-variance method, and the EM algorithm, assign each instance of an...
An early application of clustering to word--sense disambiguation is described in @cite_1 . There, words are represented in terms of the co-occurrence statistics of four letter sequences. This representation uses 97 features to characterize a word, where each feature is a linear combination of letter four-grams formulate...
[ "abstract: This paper describes an experimental comparison of three unsupervised learning algorithms that distinguish the sense of an ambiguous word in untagged text. The methods described in this paper, McQuitty's similarity analysis, Ward's minimum-variance method, and the EM algorithm, assign each instance of an...
The features used in this work are complex and difficult to interpret, and it is not clear that this complexity is required. @cite_1 compares his method to @cite_2 and shows that for four words the former performs significantly better in distinguishing between two senses.
[ "abstract: This paper presents a new measure of semantic similarity in an IS-A taxonomy, based on the notion of information content. Experimental evaluation suggests that the measure performs encouragingly well (a correlation of r = 0.79 with a benchmark set of human similarity judgments, with an upper bound of r =...
The literature on corpus-based determination of word similarity has recently been growing by leaps and bounds, and is too extensive to discuss in detail here (for a review, see @cite_1 ), but most approaches to the problem share a common assumption: semantically similar words have similar distributional behavior in a c...
[ "abstract: Statistical models of word-sense disambiguation are often based on a small number of contextual features or on a model that is assumed to characterize the interactions among a set of features. Model selection is presented as an alternative to these approaches, where a sequential search of possible models...
Statistical analysis of NLP data has often been limited to the application of standard models, such as n-gram (Markov chain) models and the Naive Bayes model. While n-grams perform well in part--of--speech tagging and speech processing, they require a fixed interdependency structure that is inappropriate for the broad ...
[ "abstract: Statistical models of word-sense disambiguation are often based on a small number of contextual features or on a model that is assumed to characterize the interactions among a set of features. Model selection is presented as an alternative to these approaches, where a sequential search of possible models...
In order to utilize models with more complicated interactions among feature variables, @cite_1 introduce the use of sequential model selection and decomposable models for word--sense disambiguation. They recommend a model selection procedure using BSS and the exact conditional test in combination with a test for mode...
[ "abstract: Statistical models of word-sense disambiguation are often based on a small number of contextual features or on a model that is assumed to characterize the interactions among a set of features. Model selection is presented as an alternative to these approaches, where a sequential search of possible models...
Alternative probabilistic approaches have involved using a single contextual feature to perform disambiguation (e.g., @cite_6 , @cite_2 , and @cite_3 present techniques for identifying the optimal feature to use in disambiguation). Maximum Entropy models have been used to express the interactions among multiple feature...
[ "abstract: In this paper, we define the notion of a preventative expression and discuss a corpus study of such expressions in instructional text. We discuss our coding schema, which takes into account both form and function features, and present measures of inter-coder reliability for those features. We then discus...
In computational linguistics, on the other hand, positive imperatives have been extensively investigated, both from the point of view of interpretation @cite_3 @cite_2 @cite_3 and generation @cite_5 @cite_5 . Little work, however, has been directed at negative imperatives. (for exceptions see the work of in interpretat...
[ "abstract: Hashing is promising for large-scale information retrieval tasks thanks to the efficiency of distance evaluation between binary codes. Generative hashing is often used to generate hashing codes in an unsupervised way. However, existing generative hashing methods only considered the use of simple priors, ...
Recently, VDSH @cite_1 proposed to use a VAE to learn the latent representations of documents and then use a separate stage to cast the continuous representations into binary codes. While fairly successful, this generative hashing model requires a two-stage training. NASH @cite_2 proposed to substitute the Gaussian pri...
[ "abstract: Blind image denoising is an important yet very challenging problem in computer vision due to the complicated acquisition process of real images. In this work we propose a new variational inference method, which integrates both noise estimation and image denoising into a unique Bayesian framework, for bli...
Most classical image denoising methods belong to this category, through designing a MAP model with a fidelity loss term and a regularization one delivering the pre-known image prior. Along this line, total variation denoising @cite_1 , anisotropic diffusion @cite_2 and wavelet coring @cite_3 use the statistical regular...
[ "abstract: Blind image denoising is an important yet very challenging problem in computer vision due to the complicated acquisition process of real images. In this work we propose a new variational inference method, which integrates both noise estimation and image denoising into a unique Bayesian framework, for bli...
Instead of pre-setting an image prior, deep learning methods directly learn a denoiser (formed as a deep neural network) mapping noisy images to clean ones from a large collection of noisy-clean image pairs. Jain and Seung @cite_1 first adopted a five-layer convolutional neural network (CNN) for the task. Then some auto-encoder based...
[ "abstract: Textual network embeddings aim to learn a low-dimensional representation for every node in the network so that both the structural and textual information from the networks can be well preserved in the representations. Traditionally, the structural and textual embeddings were learned by models that rarel...
Text Embedding There have been various methods to embed textual information into vector representations for NLP tasks. Classical methods for embedding textual information include one-hot vectors, term frequency-inverse document frequency (TF-IDF), etc. Due to the high dimensionality and sparsity of these representations, @cite_1 ...
[ "abstract: Recurrent Neural Network (RNN) has been deployed as the de facto model to tackle a wide variety of language generation problems and achieved state-of-the-art (SOTA) performance. However despite its impressive results, the large number of parameters in the RNN model makes deployment in mobile and embedded...
Modern neural networks that provide good performance tend to be large and overparameterised, fuelled by observations that larger @cite_1 @cite_2 @cite_3 networks tend to be easier to train. This in turn drives numerous efforts to reduce model size using techniques such as weight pruning and quantisation @cite_4 @cite_5...
[ "abstract: Recurrent Neural Network (RNN) has been deployed as the de facto model to tackle a wide variety of language generation problems and achieved state-of-the-art (SOTA) performance. However despite its impressive results, the large number of parameters in the RNN model makes deployment in mobile and embedded...
Early works like @cite_1 and @cite_2 explored pruning by computing the Hessian of the loss with respect to the parameters in order to assess the saliency of each parameter. Other works involving saliency computation include @cite_3 and @cite_4 where sensitivity of the loss with respect to neurons and weights are used r...
[ "abstract: Recurrent Neural Network (RNN) has been deployed as the de facto model to tackle a wide variety of language generation problems and achieved state-of-the-art (SOTA) performance. However despite its impressive results, the large number of parameters in the RNN model makes deployment in mobile and embedded...
Most of the recent works in network pruning focused on vision-centric classification tasks using Convolutional Neural Networks (CNNs) and occasionally RNNs. Techniques proposed include magnitude-based pruning @cite_1 @cite_2 @cite_3 and variational pruning @cite_4 @cite_5 @cite_6 . Among these, magnitude-based weight p...
[ "abstract: Recurrent Neural Network (RNN) has been deployed as the de facto model to tackle a wide variety of language generation problems and achieved state-of-the-art (SOTA) performance. However despite its impressive results, the large number of parameters in the RNN model makes deployment in mobile and embedded...
Simple and fast. Our approach enables easy pruning of the RNN decoder equipped with visual attention, whereby the best number of weights to prune in each layer is automatically determined. Compared to works such as @cite_1 @cite_2 , our approach is simpler with a single hyperparameter versus @math - @math h...
[ "abstract: Recurrent Neural Network (RNN) has been deployed as the de facto model to tackle a wide variety of language generation problems and achieved state-of-the-art (SOTA) performance. However despite its impressive results, the large number of parameters in the RNN model makes deployment in mobile and embedded...
While there are other works on compressing RNNs, most of the proposed methods either come with structural constraints or are complementary to model pruning in principle. Examples include using low-rank matrix factorisations @cite_1 @cite_2 , product quantisation on embeddings , factorising word predictions into multip...
[ "abstract: BERT (, 2018) and RoBERTa (, 2019) has set a new state-of-the-art performance on sentence-pair regression tasks like semantic textual similarity (STS). However, it requires that both sentences are fed into the network, which causes a massive computational overhead: Finding the most similar pair in a coll...
BERT @cite_1 is a pre-trained transformer network @cite_2 , which set new state-of-the-art results for various NLP tasks, including question answering, sentence classification, and sentence-pair regression. The input for BERT for sentence-pair regression consists of the two sentences, separated by a special [SEP] token...
[ "abstract: Video action recognition, which is topical in computer vision and video analysis, aims to allocate a short video clip to a pre-defined category such as brushing hair or climbing stairs. Recent works focus on action recognition with deep neural networks that achieve state-of-the-art results in need of hig...
Pooling methods are required either in two-stream networks @cite_1 or in other feature fusion models. @cite_2 simply uses average pooling and outperforms others. @cite_3 proposes bilinear pooling to model local parts of objects: two feature representations are learned separately and then multiplied using the outer prod...
[ "abstract: Video action recognition, which is topical in computer vision and video analysis, aims to allocate a short video clip to a pre-defined category such as brushing hair or climbing stairs. Recent works focus on action recognition with deep neural networks that achieve state-of-the-art results in need of hig...
Recently, lightweight neural networks including SqueezeNet @cite_1 , Xception @cite_2 , ShuffleNet @cite_3 , ShuffleNetV2 @cite_4 , MobileNet @cite_5 , and MobileNetV2 @cite_6 have been proposed to run on mobile devices with the parameters and computation reduced significantly. Since we focus on mobile video action reco...
[ "abstract: In this paper, we study a family of non-convex and possibly non-smooth inf-projection minimization problems, where the target objective function is equal to minimization of a joint function over another variable. This problem includes difference of convex (DC) functions and a family of bi-convex function...
Another important result follows from Bennett's inequality. Corollary 5 in @cite_1 shows that: where @math is the sample variance. It is notable that @math is equivalent (up to a constant scaling) to the empirical variance @math . Similarly, the above uniform estimate can be extended to infinite loss classes using d...
[ "abstract: In this paper, we study a family of non-convex and possibly non-smooth inf-projection minimization problems, where the target objective function is equal to minimization of a joint function over another variable. This problem includes difference of convex (DC) functions and a family of bi-convex function...
An intuitive approach to incorporating variance-based regularization is to include the first two terms on the right-hand side in the objective, which is the formulation proposed in @cite_1 , i.e., sample variance penalty (SVP): An excess risk bound of @math may be achieved by solving the SVP. However, @cite_1 does ...
End of preview.
README.md exists but content is empty.
Downloads last month: 3