| query | pos | neg | task | instruction |
|---|---|---|---|---|
| Due to the success of deep learning to solving a variety of challenging machine learning tasks, there is a rising interest in understanding loss functions for training neural networks from a theoretical aspect. Particularly, the properties of critical points and the landscape around them are of importance to determine ... | ["We provide necessary and sufficient analytical forms for the critical points of the square loss functions for various neural networks, and exploit the analytical forms to characterize the landscape properties for the loss functions of these neural networks."] | ["We propose a extension of the batch normalization, show a first-of-its-kind convergence analysis for this extension and show in numerical experiments that it has better performance than the original batch normalizatin."] | scitldr | {"query":"Represent the Science article:","pos":"Represent the Science abstract:","neg":"Represent the Science abstract:"} |
| The backpropagation (BP) algorithm is often thought to be biologically implausible in the brain. One of the main reasons is that BP requires symmetric weight matrices in the feedforward and feedback pathways. To address this “weight transport problem” , two biologically-plausible algorithms, proposed by and , relax BP’... | ["Biologically plausible learning algorithms, particularly sign-symmetry, work well on ImageNet"] | ["We propose a extension of the batch normalization, show a first-of-its-kind convergence analysis for this extension and show in numerical experiments that it has better performance than the original batch normalizatin."] | scitldr | {"query":"Represent the Science article:","pos":"Represent the Science sentence:","neg":"Represent the Science sentence:"} |
| We introduce the 2-simplicial Transformer, an extension of the Transformer which includes a form of higher-dimensional attention generalising the dot-product attention, and uses this attention to update entity representations with tensor products of value vectors. We show that this architecture is a useful inductive bi... | ["We introduce the 2-simplicial Transformer and show that this architecture is a useful inductive bias for logical reasoning in the context of deep reinforcement learning."] | ["We investigate a variant of variational autoencoders where there is a superstructure of discrete latent variables on top of the latent features."] | scitldr | {"query":"Represent the Science article:","pos":"Represent the Science abstract:","neg":"Represent the Science abstract:"} |
| We present Tensor-Train RNN (TT-RNN), a novel family of neural sequence architectures for multivariate forecasting in environments with nonlinear dynamics. Long-term forecasting in such systems is highly challenging, since there exist long-term temporal dependencies, higher-order correlations and sensitivity to error p... | ["Accurate forecasting over very long time horizons using tensor-train RNNs"] | ["To solve the gradient vanishing/exploding problems, we proprose an efficient parametrization of the transition matrix of RNN that loses no expressive power, converges faster and has good generalization."] | scitldr | {"query":"Represent the Science passage:","pos":"Represent the Science summarization:","neg":"Represent the Science summarization:"} |
| Recent efforts on combining deep models with probabilistic graphical models are promising in providing flexible models that are also easy to interpret. We propose a variational message-passing algorithm for variational inference in such models. We make three contributions. First, we propose structured inference network... | ["We propose a variational message-passing algorithm for models that contain both the deep model and probabilistic graphical model."] | ["We show that posterior collapse in linear VAEs is caused entirely by marginal log-likelihood (not ELBO). Experiments on deep VAEs suggest a similar phenomenon is at play."] | scitldr | {"query":"Represent the Science document:","pos":"Represent the Science summarization:","neg":"Represent the Science summarization:"} |
"Modern deep neural networks have a large amount of weights, which make them difficult to deploy on (...TRUNCATED) | ["A simple modification to low-rank factorization that improves performances (in both image and lang(...TRUNCATED) | ["We propose accelerating Batch Normalization (BN) through sampling less correlated data for reducti(...TRUNCATED) | scitldr | {"query":"Represent the Science passage:","pos":"Represent the Science sentence:","neg":"Represent t(...TRUNCATED) |
"Deep learning training accesses vast amounts of data at high velocity, posing challenges for datase(...TRUNCATED) | ["We propose a simple, general, and space-efficient data format to accelerate deep learning training(...TRUNCATED) | ["Our proposed algorithm does not use all of the unlabeled data for the training, and it rather uses(...TRUNCATED) | scitldr | {"query":"Represent the Science paper:","pos":"Represent the Science text:","neg":"Represent the Sci(...TRUNCATED) |
"It is fundamental and challenging to train robust and accurate Deep Neural Networks (DNNs) when sem(...TRUNCATED) | ["ROBUST DISCRIMINATIVE REPRESENTATION LEARNING VIA GRADIENT RESCALING: AN EMPHASIS REGULARISATION P(...TRUNCATED) | ["We find that deep networks which generalize poorly are more reliant on single directions than thos(...TRUNCATED) | scitldr | {"query":"Represent the Science paper:","pos":"Represent the Science summarization:","neg":"Represen(...TRUNCATED) |
"Generative Adversarial Networks (GANs) have achieved remarkable in the task of generating realistic(...TRUNCATED) | ["Are GANs successful because of adversarial training or the use of ConvNets? We show a ConvNet gene(...TRUNCATED) | ["We show that posterior collapse in linear VAEs is caused entirely by marginal log-likelihood (not (...TRUNCATED) | scitldr | {"query":"Represent the Science passage:","pos":"Represent the Science text:","neg":"Represent the S(...TRUNCATED) |
"In this paper, we propose a novel kind of kernel, random forest kernel, to enhance the empirical pe(...TRUNCATED) | [
"Equip MMD GANs with a new random-forest kernel."
] | ["The paper designs two algorithms for the stochastic AUC maximization problem with state-of-the-art(...TRUNCATED) | scitldr | {"query":"Represent the Science paper:","pos":"Represent the Science abstract:","neg":"Represent the(...TRUNCATED) |
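Each row pairs a document-level `query` with instruction prefixes for the query, positive, and negative fields, in the style of instruction-conditioned embedding training. A minimal sketch of how such a row might be turned into a contrastive triplet; the `to_triplet` helper and the exact concatenation format are assumptions, not part of the dataset itself:

```python
# One row, abbreviated from the table above; "..." marks truncated dataset text.
row = {
    "query": "We introduce the 2-simplicial Transformer, an extension of the Transformer ...",
    "pos": ["We introduce the 2-simplicial Transformer and show that this architecture "
            "is a useful inductive bias for logical reasoning in the context of deep "
            "reinforcement learning."],
    "neg": ["We investigate a variant of variational autoencoders where there is a "
            "superstructure of discrete latent variables on top of the latent features."],
    "task": "scitldr",
    "instruction": {
        "query": "Represent the Science article:",
        "pos": "Represent the Science abstract:",
        "neg": "Represent the Science abstract:",
    },
}

def to_triplet(row):
    """Prepend each field's instruction prefix to its text, yielding one
    (anchor, positive, negative) triplet for contrastive training."""
    ins = row["instruction"]
    anchor = f'{ins["query"]} {row["query"]}'
    positive = f'{ins["pos"]} {row["pos"][0]}'
    negative = f'{ins["neg"]} {row["neg"][0]}'
    return anchor, positive, negative

anchor, positive, negative = to_triplet(row)
```

Under this reading, the `instruction` dict explains why `pos` and `neg` carry different prefixes per row (abstract, sentence, summarization, text): the same source texts are reused under different representation instructions.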