| paper_id | paper_url | title | abstract | arxiv_id | url_abs | url_pdf | aspect_tasks | aspect_methods | aspect_datasets |
|---|---|---|---|---|---|---|---|---|---|
| 21mBprZ3au | https://paperswithcode.com/paper/the-variational-fair-autoencoder | The Variational Fair Autoencoder | We investigate the problem of learning representations that are invariant to certain nuisance or sensitive factors of variation in the data while retaining as much of the remaining information as possible. Our model is based on a variational autoencoding architecture with priors that encourage independence between sens... | 1511.00830 | http://arxiv.org/abs/1511.00830v6 | http://arxiv.org/pdf/1511.00830v6.pdf | ["Sentiment Analysis"] | [] | ["Multi-Domain Sentiment Dataset"] |
| mzmZPxHbHZ | https://paperswithcode.com/paper/breaking-the-softmax-bottleneck-a-high-rank | Breaking the Softmax Bottleneck: A High-Rank RNN Language Model | We formulate language modeling as a matrix factorization problem, and show that the expressiveness of Softmax-based models (including the majority of neural language models) is limited by a Softmax bottleneck. Given that natural language is highly context-dependent, this further implies that in practice Softmax with di... | 1711.03953 | http://arxiv.org/abs/1711.03953v4 | http://arxiv.org/pdf/1711.03953v4.pdf | ["Language Modelling", "Word Embeddings"] | ["Sigmoid Activation", "Tanh Activation", "Dropout", "Temporal Activation Regularization", "Activation Regularization", "Weight Tying", "Embedding Dropout", "Variational Dropout", "LSTM", "DropConnect", "AWD-LSTM", "Mixture of Softmaxes", "Softmax"] | ["Penn Treebank (Word Level)", "WikiText-2"] |
| 4sgwBMIVZJ | https://paperswithcode.com/paper/partially-shuffling-the-training-data-to-1 | Partially Shuffling the Training Data to Improve Language Models | Although SGD requires shuffling the training data between epochs, currently none of the word-level language modeling systems do this. Naively shuffling all sentences in the training data would not permit the model to learn inter-sentence dependencies. Here we present a method that partially shuffles the training data b... | 1903.04167 | http://arxiv.org/abs/1903.04167v2 | http://arxiv.org/pdf/1903.04167v2.pdf | ["Language Modelling", "Sentence Ordering"] | ["SGD"] | ["Penn Treebank (Word Level)", "WikiText-2"] |
| wjL-ZZVuIm | https://paperswithcode.com/paper/dynamic-evaluation-of-neural-sequence-models | Dynamic Evaluation of Neural Sequence Models | We present methodology for using dynamic evaluation to improve neural sequence models. Models are adapted to recent history via a gradient descent based mechanism, causing them to assign higher probabilities to re-occurring sequential patterns. Dynamic evaluation outperforms existing adaptation approaches in our compar... | 1709.07432 | http://arxiv.org/abs/1709.07432v2 | http://arxiv.org/pdf/1709.07432v2.pdf | ["Language Modelling"] | [] | ["Text8", "Penn Treebank (Word Level)", "WikiText-2", "Hutter Prize"] |
| Afw7UcYbWU | https://paperswithcode.com/paper/direct-output-connection-for-a-high-rank | Direct Output Connection for a High-Rank Language Model | This paper proposes a state-of-the-art recurrent neural network (RNN) language model that combines probability distributions computed not only from a final RNN layer but also from middle layers. Our proposed method raises the expressive power of a language model based on the matrix factorization interpretation of langu... | 1808.10143 | http://arxiv.org/abs/1808.10143v2 | http://arxiv.org/pdf/1808.10143v2.pdf | ["Constituency Parsing", "Language Modelling", "Machine Translation"] | [] | ["Penn Treebank (Word Level)", "WikiText-2", "Penn Treebank"] |
| nCrJQdu1BQ | https://paperswithcode.com/paper/on-the-state-of-the-art-of-evaluation-in | On the State of the Art of Evaluation in Neural Language Models | Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing code bases and limited computational resources, which represent uncontrolled sources of experimental var... | 1707.05589 | http://arxiv.org/abs/1707.05589v2 | http://arxiv.org/pdf/1707.05589v2.pdf | ["Language Modelling"] | [] | ["WikiText-2"] |
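Each row above follows the same fixed schema (identifiers, URLs, a truncated abstract, and three aspect lists). As a minimal sketch of how such records might be represented and queried in Python, here is a hypothetical `PaperRecord` dataclass (the class name and the record subset are illustrative; field names come from the table header, and the sample values are transcribed from the rows above):

```python
from dataclasses import dataclass, field

@dataclass
class PaperRecord:
    # Field names mirror a subset of the table columns.
    paper_id: str
    title: str
    arxiv_id: str
    aspect_tasks: list = field(default_factory=list)
    aspect_methods: list = field(default_factory=list)
    aspect_datasets: list = field(default_factory=list)

# Two sample rows transcribed from the table above.
records = [
    PaperRecord(
        paper_id="21mBprZ3au",
        title="The Variational Fair Autoencoder",
        arxiv_id="1511.00830",
        aspect_tasks=["Sentiment Analysis"],
        aspect_datasets=["Multi-Domain Sentiment Dataset"],
    ),
    PaperRecord(
        paper_id="wjL-ZZVuIm",
        title="Dynamic Evaluation of Neural Sequence Models",
        arxiv_id="1709.07432",
        aspect_tasks=["Language Modelling"],
        aspect_datasets=["Text8", "Penn Treebank (Word Level)",
                         "WikiText-2", "Hutter Prize"],
    ),
]

# Select the titles of papers whose aspect_tasks include "Language Modelling".
lm_papers = [r.title for r in records if "Language Modelling" in r.aspect_tasks]
print(lm_papers)  # ['Dynamic Evaluation of Neural Sequence Models']
```

The empty-list defaults match the table's convention of `[]` for rows with no annotated methods or tasks.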