forum_id: string
forum_title: string
forum_authors: list
forum_abstract: string
forum_keywords: list
forum_pdf_url: string
forum_url: string
note_id: string
note_type: string
note_created: int64
note_replyto: string
note_readers: list
note_signatures: list
venue: string
year: string
note_text: string
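The header above can be read as a record schema, one review or comment note joined with its paper's forum metadata. A minimal sketch of one row as a Python dataclass (the class name and the `created_at` helper are illustrative; the assumption that `note_created` is epoch milliseconds is consistent with the 2013 dates in the rows below):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ReviewNote:
    """One row of the dump: paper (forum) metadata plus a single review/comment note."""
    forum_id: str
    forum_title: str
    forum_authors: list
    forum_abstract: str
    forum_keywords: list
    forum_pdf_url: str
    forum_url: str
    note_id: str
    note_type: str        # "review" or "comment" in this dump
    note_created: int     # int64; assumed epoch milliseconds
    note_replyto: str     # id of the note (or forum) this note replies to
    note_readers: list
    note_signatures: list
    venue: str
    year: str
    note_text: str

    def created_at(self) -> datetime:
        # Divide by 1000: the timestamp is in milliseconds, not seconds.
        return datetime.fromtimestamp(self.note_created / 1000, tz=timezone.utc)
```

For example, the first note below (`Deofes8a4Heux`, created 1362197040000) decodes to a March 2013 UTC date, matching the ICLR.cc/2013/conference venue field.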
0OR_OycNMzOF9
Auto-pooling: Learning to Improve Invariance of Image Features from Image Sequences
[ "Sainbayar Sukhbaatar", "Takaki Makino", "Kazuyuki Aihara" ]
Learning invariant representations from images is one of the hardest challenges facing computer vision. Spatial pooling is widely used to create invariance to spatial shifting, but it is restricted to convolutional models. In this paper, we propose a novel pooling method that can learn soft clustering of features from ...
[ "invariance", "image sequences", "features", "learning", "image features", "images", "invariant representations", "hardest challenges", "computer vision", "spatial pooling" ]
https://openreview.net/pdf?id=0OR_OycNMzOF9
https://openreview.net/forum?id=0OR_OycNMzOF9
Deofes8a4Heux
review
1362197040000
0OR_OycNMzOF9
[ "everyone" ]
[ "anonymous reviewer 8b0d" ]
ICLR.cc/2013/conference
2013
title: review of Auto-pooling: Learning to Improve Invariance of Image Features from Image Sequences review: The paper presents a method to learn invariant features by using temporal coherence. A set of linear pooling units are trained on top of a set of (pre-trained) features using what is effectively a linear auto-e...
0OR_OycNMzOF9
Auto-pooling: Learning to Improve Invariance of Image Features from Image Sequences
[ "Sainbayar Sukhbaatar", "Takaki Makino", "Kazuyuki Aihara" ]
Learning invariant representations from images is one of the hardest challenges facing computer vision. Spatial pooling is widely used to create invariance to spatial shifting, but it is restricted to convolutional models. In this paper, we propose a novel pooling method that can learn soft clustering of features from ...
[ "invariance", "image sequences", "features", "learning", "image features", "images", "invariant representations", "hardest challenges", "computer vision", "spatial pooling" ]
https://openreview.net/pdf?id=0OR_OycNMzOF9
https://openreview.net/forum?id=0OR_OycNMzOF9
A2auXgoqFvTyV
comment
1362655740000
7N2E7oCO6yPiH
[ "everyone" ]
[ "Sainbayar Sukhbaatar" ]
ICLR.cc/2013/conference
2013
reply: Thank you for the detailed review. Those are good points, and we will consider them in our next revision. We also want to give some explanations. About fixed cluster size: - Yes. In topographic maps, clusters are not required to have the same size. We will fix this sentence in the next revision. We meant to ...
0OR_OycNMzOF9
Auto-pooling: Learning to Improve Invariance of Image Features from Image Sequences
[ "Sainbayar Sukhbaatar", "Takaki Makino", "Kazuyuki Aihara" ]
Learning invariant representations from images is one of the hardest challenges facing computer vision. Spatial pooling is widely used to create invariance to spatial shifting, but it is restricted to convolutional models. In this paper, we propose a novel pooling method that can learn soft clustering of features from ...
[ "invariance", "image sequences", "features", "learning", "image features", "images", "invariant representations", "hardest challenges", "computer vision", "spatial pooling" ]
https://openreview.net/pdf?id=0OR_OycNMzOF9
https://openreview.net/forum?id=0OR_OycNMzOF9
2U4l21HEl7SVL
review
1361924340000
0OR_OycNMzOF9
[ "everyone" ]
[ "Ian Goodfellow" ]
ICLR.cc/2013/conference
2013
review: First off, let me say thank you for citing my + my co-authors' paper on measuring invariances. I have a few thoughts about invariance and temporal coherence that I hope you might find helpful. Regarding invariance, I think that invariance is not such a great property on its own. What you really want is to...
0OR_OycNMzOF9
Auto-pooling: Learning to Improve Invariance of Image Features from Image Sequences
[ "Sainbayar Sukhbaatar", "Takaki Makino", "Kazuyuki Aihara" ]
Learning invariant representations from images is one of the hardest challenges facing computer vision. Spatial pooling is widely used to create invariance to spatial shifting, but it is restricted to convolutional models. In this paper, we propose a novel pooling method that can learn soft clustering of features from ...
[ "invariance", "image sequences", "features", "learning", "image features", "images", "invariant representations", "hardest challenges", "computer vision", "spatial pooling" ]
https://openreview.net/pdf?id=0OR_OycNMzOF9
https://openreview.net/forum?id=0OR_OycNMzOF9
IFLJkDHcu-Ice
comment
1362650580000
lvwFsD4fResyH
[ "everyone" ]
[ "Sainbayar Sukhbaatar" ]
ICLR.cc/2013/conference
2013
reply: First of all, thank you for reviewing our paper. That was valuable feedback. We will try to include the mentioned papers in the next revision. About the video dataset: - We will include a detailed explanation in the next revision. In short, 40 short (2-5 minutes in length) videos are used in our experiments. We ...
YBi6KFA7PfKo5
Two SVDs produce more focal deep learning representations
[ "Hinrich Schuetze", "Christian Scheible" ]
A key characteristic of work on deep learning and neural networks in general is that it relies on representations of the input that support generalization, robust inference, domain adaptation and other desirable functionalities. Much recent progress in the field has focused on efficient and effective methods for comput...
[ "representations", "efficient", "property", "svds", "focal deep", "key characteristic", "work", "deep learning", "neural networks" ]
https://openreview.net/pdf?id=YBi6KFA7PfKo5
https://openreview.net/forum?id=YBi6KFA7PfKo5
aK4z5qBF7bEod
review
1363717680000
YBi6KFA7PfKo5
[ "everyone" ]
[ "Hinrich Schuetze" ]
ICLR.cc/2013/conference
2013
review: Thanks for your comments! The suggestions seem all good and pertinent to us and (in case the paper should be accepted and assuming there is enough space) we will incorporate them when revising the paper. In particular: relate the new method to overview in Turney&Pantel, to kernel PCA and matrix factorization ap...
YBi6KFA7PfKo5
Two SVDs produce more focal deep learning representations
[ "Hinrich Schuetze", "Christian Scheible" ]
A key characteristic of work on deep learning and neural networks in general is that it relies on representations of the input that support generalization, robust inference, domain adaptation and other desirable functionalities. Much recent progress in the field has focused on efficient and effective methods for comput...
[ "representations", "efficient", "property", "svds", "focal deep", "key characteristic", "work", "deep learning", "neural networks" ]
https://openreview.net/pdf?id=YBi6KFA7PfKo5
https://openreview.net/forum?id=YBi6KFA7PfKo5
VFwT2CLWfA2kU
review
1361986620000
YBi6KFA7PfKo5
[ "everyone" ]
[ "anonymous reviewer 2448" ]
ICLR.cc/2013/conference
2013
title: review of Two SVDs produce more focal deep learning representations review: This paper proposes to use two consecutive SVDs to produce a continuous representation. This paper also introduces a property called focality. They claim that this property may be important for neural networks: many classifiers cannot ...
YBi6KFA7PfKo5
Two SVDs produce more focal deep learning representations
[ "Hinrich Schuetze", "Christian Scheible" ]
A key characteristic of work on deep learning and neural networks in general is that it relies on representations of the input that support generalization, robust inference, domain adaptation and other desirable functionalities. Much recent progress in the field has focused on efficient and effective methods for comput...
[ "representations", "efficient", "property", "svds", "focal deep", "key characteristic", "work", "deep learning", "neural networks" ]
https://openreview.net/pdf?id=YBi6KFA7PfKo5
https://openreview.net/forum?id=YBi6KFA7PfKo5
vNpsUSMf3tNfx
comment
1363717200000
VFwT2CLWfA2kU
[ "everyone" ]
[ "Hinrich Schuetze" ]
ICLR.cc/2013/conference
2013
reply: Thanks for your comments! If the paper is accepted, we will expand the description of the discrimination task and explain in more detail how it is related to focality (the idea is that a single hidden unit does well on the discrimination -- which is what focality is supposed to capture). We will also expand t...
YBi6KFA7PfKo5
Two SVDs produce more focal deep learning representations
[ "Hinrich Schuetze", "Christian Scheible" ]
A key characteristic of work on deep learning and neural networks in general is that it relies on representations of the input that support generalization, robust inference, domain adaptation and other desirable functionalities. Much recent progress in the field has focused on efficient and effective methods for comput...
[ "representations", "efficient", "property", "svds", "focal deep", "key characteristic", "work", "deep learning", "neural networks" ]
https://openreview.net/pdf?id=YBi6KFA7PfKo5
https://openreview.net/forum?id=YBi6KFA7PfKo5
3wTuUWS9F_w4i
review
1362188640000
YBi6KFA7PfKo5
[ "everyone" ]
[ "anonymous reviewer 4c9d" ]
ICLR.cc/2013/conference
2013
title: review of Two SVDs produce more focal deep learning representations review: This paper introduces a novel method to induce word vector representations from a corpus of unlabeled text. The method relies upon 'stacking' singular value decomposition with an intermediate normalization nonlinearity. The authors propo...
9bFY3t2IJ19AC
Affinity Weighted Embedding
[ "Jason Weston", "Ron Weiss", "Hector Yee" ]
Supervised (linear) embedding models like Wsabie and PSI have proven successful at ranking, recommendation and annotation tasks. However, despite being scalable to large datasets they do not take full advantage of the extra data due to their linear nature, and typically underfit. We propose a new class of models which ...
[ "models", "affinity", "linear", "wsabie", "psi", "successful", "ranking", "recommendation", "annotation tasks", "scalable" ]
https://openreview.net/pdf?id=9bFY3t2IJ19AC
https://openreview.net/forum?id=9bFY3t2IJ19AC
9A_uTWCfuoTeF
review
1362123720000
9bFY3t2IJ19AC
[ "everyone" ]
[ "anonymous reviewer 3e4d" ]
ICLR.cc/2013/conference
2013
title: review of Affinity Weighted Embedding review: Affinity Weighted Embedding Paper summary This paper extends supervised embedding models by combining them multiplicatively, i.e. f'(x,y) = G(x,y) f(x,y). It considers two types of model, dot product in the *embedding* space and kernel density in the *embeddi...
9bFY3t2IJ19AC
Affinity Weighted Embedding
[ "Jason Weston", "Ron Weiss", "Hector Yee" ]
Supervised (linear) embedding models like Wsabie and PSI have proven successful at ranking, recommendation and annotation tasks. However, despite being scalable to large datasets they do not take full advantage of the extra data due to their linear nature, and typically underfit. We propose a new class of models which ...
[ "models", "affinity", "linear", "wsabie", "psi", "successful", "ranking", "recommendation", "annotation tasks", "scalable" ]
https://openreview.net/pdf?id=9bFY3t2IJ19AC
https://openreview.net/forum?id=9bFY3t2IJ19AC
X-2g4ZbGhE5Gf
review
1363646880000
9bFY3t2IJ19AC
[ "everyone" ]
[ "Jason Weston" ]
ICLR.cc/2013/conference
2013
review: - The results of G alone are basically the 'k-Nearest Neighbor (Wsabie space)' results that are in the tables. - We initialized the parameters of step 3 with the ones from step 1. Without this I think the results could be worse as you are losing a lot of the pairwise label comparisons from the training if G ...
9bFY3t2IJ19AC
Affinity Weighted Embedding
[ "Jason Weston", "Ron Weiss", "Hector Yee" ]
Supervised (linear) embedding models like Wsabie and PSI have proven successful at ranking, recommendation and annotation tasks. However, despite being scalable to large datasets they do not take full advantage of the extra data due to their linear nature, and typically underfit. We propose a new class of models which ...
[ "models", "affinity", "linear", "wsabie", "psi", "successful", "ranking", "recommendation", "annotation tasks", "scalable" ]
https://openreview.net/pdf?id=9bFY3t2IJ19AC
https://openreview.net/forum?id=9bFY3t2IJ19AC
T5KWotfp6lot7
review
1362229560000
9bFY3t2IJ19AC
[ "everyone" ]
[ "anonymous reviewer 0248" ]
ICLR.cc/2013/conference
2013
title: review of Affinity Weighted Embedding review: This work proposes a new nonlinear embedding model and applies it to a music annotation and image annotation task. Motivated by the fact that linear embedding models typically underfit on large datasets, the authors propose a nonlinear embedding model with greater ca...
11y_SldoumvZl
Factorized Topic Models
[ "Cheng Zhang", "Carl Henrik Ek", "Hedvig Kjellstrom" ]
In this paper we present a new type of latent topic model, which exploits supervision to produce a factorized representation of the observed data. The structured parameterization separately encodes variance that is shared between classes from variance that is private to each class by the introduction of a new prior. Th...
[ "topic models", "factorized representation", "variance", "new type", "latent topic model", "supervision", "observed data", "structured parameterization", "classes", "private" ]
https://openreview.net/pdf?id=11y_SldoumvZl
https://openreview.net/forum?id=11y_SldoumvZl
gD5ygpn3FZ9Tf
review
1362079980000
11y_SldoumvZl
[ "everyone" ]
[ "anonymous reviewer c82a" ]
ICLR.cc/2013/conference
2013
title: review of Factorized Topic Models review: * A brief summary of the paper's contributions, in the context of prior work. This paper suggests an improvement over the LDA topic model with class labels of Fei-Fei and Perona [6], which consists in the incorporation of a prior that encourages the class conditional ...
11y_SldoumvZl
Factorized Topic Models
[ "Cheng Zhang", "Carl Henrik Ek", "Hedvig Kjellstrom" ]
In this paper we present a new type of latent topic model, which exploits supervision to produce a factorized representation of the observed data. The structured parameterization separately encodes variance that is shared between classes from variance that is private to each class by the introduction of a new prior. Th...
[ "topic models", "factorized representation", "variance", "new type", "latent topic model", "supervision", "observed data", "structured parameterization", "classes", "private" ]
https://openreview.net/pdf?id=11y_SldoumvZl
https://openreview.net/forum?id=11y_SldoumvZl
ADCLANJlZFDlw
comment
1362753420000
gD5ygpn3FZ9Tf
[ "everyone" ]
[ "Cheng Zhang, Carl Henrik Ek, Hedvig Kjellstrom" ]
ICLR.cc/2013/conference
2013
reply: We would like to thank the reviewers for their insightful comments about the paper. We will first provide general comments in response to issues raised by more than one reviewer, and then discuss each of the reviews in more detail. From reading the reviews, we realize that the main contribution of the paper s...
11y_SldoumvZl
Factorized Topic Models
[ "Cheng Zhang", "Carl Henrik Ek", "Hedvig Kjellstrom" ]
In this paper we present a new type of latent topic model, which exploits supervision to produce a factorized representation of the observed data. The structured parameterization separately encodes variance that is shared between classes from variance that is private to each class by the introduction of a new prior. Th...
[ "topic models", "factorized representation", "variance", "new type", "latent topic model", "supervision", "observed data", "structured parameterization", "classes", "private" ]
https://openreview.net/pdf?id=11y_SldoumvZl
https://openreview.net/forum?id=11y_SldoumvZl
rr6RmiA9Hhs9i
review
1362457800000
11y_SldoumvZl
[ "everyone" ]
[ "anonymous reviewer fda8" ]
ICLR.cc/2013/conference
2013
title: review of Factorized Topic Models review: This paper introduces a new prior for topics in LDA to disentangle general variance and class-specific variance. The other reviews already mentioned the lack of novelty and some missing descriptions. Concretely, the definition of p(\theta | kappa), which is central to ...
11y_SldoumvZl
Factorized Topic Models
[ "Cheng Zhang", "Carl Henrik Ek", "Hedvig Kjellstrom" ]
In this paper we present a new type of latent topic model, which exploits supervision to produce a factorized representation of the observed data. The structured parameterization separately encodes variance that is shared between classes from variance that is private to each class by the introduction of a new prior. Th...
[ "topic models", "factorized representation", "variance", "new type", "latent topic model", "supervision", "observed data", "structured parameterization", "classes", "private" ]
https://openreview.net/pdf?id=11y_SldoumvZl
https://openreview.net/forum?id=11y_SldoumvZl
8nXtnZf5sU-bd
comment
1363382280000
eeCgjoYcgmDco
[ "everyone" ]
[ "Cheng Zhang, Carl Henrik Ek, Hedvig Kjellstrom" ]
ICLR.cc/2013/conference
2013
reply: Once again, thanks to reviewer c82a for very helpful comments. We agree that the statement regarding the connection between the prior and F(k) was not correct. The parameter kappa should not be considered a prior in the model; instead, it is used as an implementation-specific parameter. We have now reorganized the ...
11y_SldoumvZl
Factorized Topic Models
[ "Cheng Zhang", "Carl Henrik Ek", "Hedvig Kjellstrom" ]
In this paper we present a new type of latent topic model, which exploits supervision to produce a factorized representation of the observed data. The structured parameterization separately encodes variance that is shared between classes from variance that is private to each class by the introduction of a new prior. Th...
[ "topic models", "factorized representation", "variance", "new type", "latent topic model", "supervision", "observed data", "structured parameterization", "classes", "private" ]
https://openreview.net/pdf?id=11y_SldoumvZl
https://openreview.net/forum?id=11y_SldoumvZl
YYiHlnPjU5YVO
review
1363623420000
11y_SldoumvZl
[ "everyone" ]
[ "Cheng Zhang, Carl Henrik Ek, Hedvig Kjellstrom" ]
ICLR.cc/2013/conference
2013
review: Dear reviewers, the new version of the paper, addressing all the changes in our comments, is now publicly visible on arXiv. Thanks in advance for your time.
11y_SldoumvZl
Factorized Topic Models
[ "Cheng Zhang", "Carl Henrik Ek", "Hedvig Kjellstrom" ]
In this paper we present a new type of latent topic model, which exploits supervision to produce a factorized representation of the observed data. The structured parameterization separately encodes variance that is shared between classes from variance that is private to each class by the introduction of a new prior. Th...
[ "topic models", "factorized representation", "variance", "new type", "latent topic model", "supervision", "observed data", "structured parameterization", "classes", "private" ]
https://openreview.net/pdf?id=11y_SldoumvZl
https://openreview.net/forum?id=11y_SldoumvZl
eeCgjoYcgmDco
comment
1363139160000
ADCLANJlZFDlw
[ "everyone" ]
[ "anonymous reviewer c82a" ]
ICLR.cc/2013/conference
2013
reply: The additional comparison with SLDA is a good step in the right direction and certainly improves my personal appreciation of this paper. Unfortunately, I still can't vouch for the validity of the learning algorithm. First, I'm now even more confused as to what the prior actually is. Indeed, the prior is stated...
11y_SldoumvZl
Factorized Topic Models
[ "Cheng Zhang", "Carl Henrik Ek", "Hedvig Kjellstrom" ]
In this paper we present a new type of latent topic model, which exploits supervision to produce a factorized representation of the observed data. The structured parameterization separately encodes variance that is shared between classes from variance that is private to each class by the introduction of a new prior. Th...
[ "topic models", "factorized representation", "variance", "new type", "latent topic model", "supervision", "observed data", "structured parameterization", "classes", "private" ]
https://openreview.net/pdf?id=11y_SldoumvZl
https://openreview.net/forum?id=11y_SldoumvZl
InujBpA-6qILy
review
1362214440000
11y_SldoumvZl
[ "everyone" ]
[ "anonymous reviewer 232f" ]
ICLR.cc/2013/conference
2013
title: review of Factorized Topic Models review: This paper presents an extension of Latent Dirichlet Allocation (LDA) which explicitly factors data into a structured noise part (varation shared among classes) and a signal part (variation within a specific class). The model is shown to outperform a baseline of LDA with...
11y_SldoumvZl
Factorized Topic Models
[ "Cheng Zhang", "Carl Henrik Ek", "Hedvig Kjellstrom" ]
In this paper we present a new type of latent topic model, which exploits supervision to produce a factorized representation of the observed data. The structured parameterization separately encodes variance that is shared between classes from variance that is private to each class by the introduction of a new prior. Th...
[ "topic models", "factorized representation", "variance", "new type", "latent topic model", "supervision", "observed data", "structured parameterization", "classes", "private" ]
https://openreview.net/pdf?id=11y_SldoumvZl
https://openreview.net/forum?id=11y_SldoumvZl
LpoA5MF9bm520
review
1362753660000
11y_SldoumvZl
[ "everyone" ]
[ "Cheng Zhang, Carl Henrik Ek, Hedvig Kjellstrom" ]
ICLR.cc/2013/conference
2013
review: We would like to thank the reviewers for their insightful comments about the paper. We will first provide general comments in response to issues raised by more than one reviewer, and then discuss each of the reviews in more detail. From reading the reviews, we realize that the main contribution of the paper ...
bI58OFtQlLOQ7
Deep Learning for Detecting Robotic Grasps
[ "Ian Lenz", "Honglak Lee", "Ashutosh Saxena" ]
In this work, we consider the problem of detecting robotic grasps in an RGB-D view of a scene containing objects. We present a two-step cascaded structure, where we have two deep networks, with the top detections from the first one re-evaluated by the second one. The first deep network has fewer features, is therefore ...
[ "deep learning", "robotic grasps", "features", "work", "problem", "view", "scene", "objects", "cascaded structure" ]
https://openreview.net/pdf?id=bI58OFtQlLOQ7
https://openreview.net/forum?id=bI58OFtQlLOQ7
Fsg-G38UWSlUP
review
1362414180000
bI58OFtQlLOQ7
[ "everyone" ]
[ "anonymous reviewer cf06" ]
ICLR.cc/2013/conference
2013
title: review of Deep Learning for Detecting Robotic Grasps review: This paper uses a two-pass detection mechanism with sparse autoencoders for robotic grasp detection, a new application of deep learning. The methods used are fairly standard by now (two pass and autoencoders), so the main novelty of the paper is its ni...
bI58OFtQlLOQ7
Deep Learning for Detecting Robotic Grasps
[ "Ian Lenz", "Honglak Lee", "Ashutosh Saxena" ]
In this work, we consider the problem of detecting robotic grasps in an RGB-D view of a scene containing objects. We present a two-step cascaded structure, where we have two deep networks, with the top detections from the first one re-evaluated by the second one. The first deep network has fewer features, is therefore ...
[ "deep learning", "robotic grasps", "features", "work", "problem", "view", "scene", "objects", "cascaded structure" ]
https://openreview.net/pdf?id=bI58OFtQlLOQ7
https://openreview.net/forum?id=bI58OFtQlLOQ7
Sl9E4V1iE8lfU
review
1362192180000
bI58OFtQlLOQ7
[ "everyone" ]
[ "anonymous reviewer b096" ]
ICLR.cc/2013/conference
2013
title: review of Deep Learning for Detecting Robotic Grasps review: Summary: this paper uses the common 2-step procedure to first eliminate most of unlikely detection windows (high recall), then use a network with higher capacity for better discrimination (high precision). Deep learning (in the unsupervised sense) help...
-AIqBI4_qZAQ1
Information Theoretic Learning with Infinitely Divisible Kernels
[ "Luis Gonzalo Sánchez", "Jose C. Principe" ]
In this paper, we develop a framework for information theoretic learning based on infinitely divisible matrices. We formulate an entropy-like functional on positive definite matrices based on Renyi's entropy definition and examine some key properties of this functional that lead to the concept of infinite divisibility....
[ "information theoretic learning", "functional", "divisible kernels", "framework", "divisible matrices", "positive definite matrices", "renyi", "entropy definition", "key properties" ]
https://openreview.net/pdf?id=-AIqBI4_qZAQ1
https://openreview.net/forum?id=-AIqBI4_qZAQ1
JJQpYH2mRDJmM
review
1363989120000
-AIqBI4_qZAQ1
[ "everyone" ]
[ "Luis Gonzalo Sánchez" ]
ICLR.cc/2013/conference
2013
review: The newest version of the paper will appear on arXiv by Monday, March 25th. In the meantime, the paper can be seen at the following link: https://docs.google.com/file/d/0B6IHvj9GXU3dMk1IeUNfUEpqSmc/edit?usp=sharing
-AIqBI4_qZAQ1
Information Theoretic Learning with Infinitely Divisible Kernels
[ "Luis Gonzalo Sánchez", "Jose C. Principe" ]
In this paper, we develop a framework for information theoretic learning based on infinitely divisible matrices. We formulate an entropy-like functional on positive definite matrices based on Renyi's entropy definition and examine some key properties of this functional that lead to the concept of infinite divisibility....
[ "information theoretic learning", "functional", "divisible kernels", "framework", "divisible matrices", "positive definite matrices", "renyi", "entropy definition", "key properties" ]
https://openreview.net/pdf?id=-AIqBI4_qZAQ1
https://openreview.net/forum?id=-AIqBI4_qZAQ1
J04ah1kBas0qR
review
1362229800000
-AIqBI4_qZAQ1
[ "everyone" ]
[ "anonymous reviewer 4ccd" ]
ICLR.cc/2013/conference
2013
title: review of Information Theoretic Learning with Infinitely Divisible Kernels review: This paper introduces new entropy-like quantities on positive semidefinite matrices. These quantities can be directly calculated from the Gram matrix of the data, and they do not require density estimation. This is an attractive ...
-AIqBI4_qZAQ1
Information Theoretic Learning with Infinitely Divisible Kernels
[ "Luis Gonzalo Sánchez", "Jose C. Principe" ]
In this paper, we develop a framework for information theoretic learning based on infinitely divisible matrices. We formulate an entropy-like functional on positive definite matrices based on Renyi's entropy definition and examine some key properties of this functional that lead to the concept of infinite divisibility....
[ "information theoretic learning", "functional", "divisible kernels", "framework", "divisible matrices", "positive definite matrices", "renyi", "entropy definition", "key properties" ]
https://openreview.net/pdf?id=-AIqBI4_qZAQ1
https://openreview.net/forum?id=-AIqBI4_qZAQ1
suhMsqNkdKs6R
comment
1363799700000
5pA7ERXu7H5uQ
[ "everyone" ]
[ "Luis Gonzalo Sánchez" ]
ICLR.cc/2013/conference
2013
reply: This is the same comment from below; we just realized that this is the reply button for your comments. Dear reviewer, we appreciate the comments and the effort put into reviewing our work. We believe you have made a very valid point by asking us about the role of alpha. The order of the matrix entropy acts as a...
-AIqBI4_qZAQ1
Information Theoretic Learning with Infinitely Divisible Kernels
[ "Luis Gonzalo Sánchez", "Jose C. Principe" ]
In this paper, we develop a framework for information theoretic learning based on infinitely divisible matrices. We formulate an entropy-like functional on positive definite matrices based on Renyi's entropy definition and examine some key properties of this functional that lead to the concept of infinite divisibility....
[ "information theoretic learning", "functional", "divisible kernels", "framework", "divisible matrices", "positive definite matrices", "renyi", "entropy definition", "key properties" ]
https://openreview.net/pdf?id=-AIqBI4_qZAQ1
https://openreview.net/forum?id=-AIqBI4_qZAQ1
ssJmfOuxKafV5
review
1363775340000
-AIqBI4_qZAQ1
[ "everyone" ]
[ "Luis Gonzalo Sánchez" ]
ICLR.cc/2013/conference
2013
review: The new version of the paper can be accessed through https://docs.google.com/file/d/0B6IHvj9GXU3dekxXMHZVdmphTXc/edit?usp=sharing until it is updated in arXiv
-AIqBI4_qZAQ1
Information Theoretic Learning with Infinitely Divisible Kernels
[ "Luis Gonzalo Sánchez", "Jose C. Principe" ]
In this paper, we develop a framework for information theoretic learning based on infinitely divisible matrices. We formulate an entropy-like functional on positive definite matrices based on Renyi's entropy definition and examine some key properties of this functional that lead to the concept of infinite divisibility....
[ "information theoretic learning", "functional", "divisible kernels", "framework", "divisible matrices", "positive definite matrices", "renyi", "entropy definition", "key properties" ]
https://openreview.net/pdf?id=-AIqBI4_qZAQ1
https://openreview.net/forum?id=-AIqBI4_qZAQ1
cUCwU-yxtoURe
review
1362176820000
-AIqBI4_qZAQ1
[ "everyone" ]
[ "anonymous reviewer 2169" ]
ICLR.cc/2013/conference
2013
title: review of Information Theoretic Learning with Infinitely Divisible Kernels review: The paper introduces a new approach to supervised metric learning. The setting is somewhat similar to the information-theoretic approach of Davis et al. (2007). The main difference is that here the parameterized Mahalanobis dis...
-AIqBI4_qZAQ1
Information Theoretic Learning with Infinitely Divisible Kernels
[ "Luis Gonzalo Sánchez", "Jose C. Principe" ]
In this paper, we develop a framework for information theoretic learning based on infinitely divisible matrices. We formulate an entropy-like functional on positive definite matrices based on Renyi's entropy definition and examine some key properties of this functional that lead to the concept of infinite divisibility....
[ "information theoretic learning", "functional", "divisible kernels", "framework", "divisible matrices", "positive definite matrices", "renyi", "entropy definition", "key properties" ]
https://openreview.net/pdf?id=-AIqBI4_qZAQ1
https://openreview.net/forum?id=-AIqBI4_qZAQ1
nVC7VhbpFDnlL
comment
1363773600000
J04ah1kBas0qR
[ "everyone" ]
[ "Luis Gonzalo Sánchez" ]
ICLR.cc/2013/conference
2013
reply: Thanks again for the good comments. We have worked hard on improving the presentation of the results. With regard to your cons: i) We improved the presentation of the ideas by highlighting what the contributions are and why they are relevant. In section 3, where there was no clear delineation between what is k...
-AIqBI4_qZAQ1
Information Theoretic Learning with Infinitely Divisible Kernels
[ "Luis Gonzalo Sánchez", "Jose C. Principe" ]
In this paper, we develop a framework for information theoretic learning based on infinitely divisible matrices. We formulate an entropy-like functional on positive definite matrices based on Renyi's entropy definition and examine some key properties of this functional that lead to the concept of infinite divisibility....
[ "information theoretic learning", "functional", "divisible kernels", "framework", "divisible matrices", "positive definite matrices", "renyi", "entropy definition", "key properties" ]
https://openreview.net/pdf?id=-AIqBI4_qZAQ1
https://openreview.net/forum?id=-AIqBI4_qZAQ1
5pA7ERXu7H5uQ
review
1362276900000
-AIqBI4_qZAQ1
[ "everyone" ]
[ "anonymous reviewer 5093" ]
ICLR.cc/2013/conference
2013
title: review of Information Theoretic Learning with Infinitely Divisible Kernels review: This paper proposes a new type of information measure for positive semidefinite matrices, which is essentially the logarithm of the sum of powers of eigenvalues. Several entropy-like properties are shown based on properties of spec...
-AIqBI4_qZAQ1
Information Theoretic Learning with Infinitely Divisible Kernels
[ "Luis Gonzalo Sánchez", "Jose C. Principe" ]
In this paper, we develop a framework for information theoretic learning based on infinitely divisible matrices. We formulate an entropy-like functional on positive definite matrices based on Renyi's entropy definition and examine some key properties of this functional that lead to the concept of infinite divisibility....
[ "information theoretic learning", "functional", "divisible kernels", "framework", "divisible matrices", "positive definite matrices", "renyi", "entropy definition", "key properties" ]
https://openreview.net/pdf?id=-AIqBI4_qZAQ1
https://openreview.net/forum?id=-AIqBI4_qZAQ1
hhNRhrYspih_x
comment
1,363,772,280,000
cUCwU-yxtoURe
[ "everyone" ]
[ "Luis Gonzalo Sánchez" ]
ICLR.cc/2013/conference
2013
reply: Thanks for the comments. We really appreciate the time you put into reviewing our paper. I agree that in the original presentation many of the main points and contributions of the paper were hard to grasp. In the new version, we have made our contributions explicit, and some of the technical exposition was mod...
KKZ-FeUj-9kjY
Sparse Penalty in Deep Belief Networks: Using the Mixed Norm Constraint
[ "Xanadu Halkias", "Sébastien PARIS", "Herve Glotin" ]
Deep Belief Networks (DBN) have been successfully applied on popular machine learning tasks. Specifically, when applied on hand-written digit recognition, DBNs have achieved approximate accuracy rates of 98.8%. In an effort to optimize the data representation achieved by the DBN and maximize their descriptive power, re...
[ "dbn", "deep belief networks", "digit recognition", "sparse constraints", "sparse penalty", "dbns", "approximate accuracy rates" ]
https://openreview.net/pdf?id=KKZ-FeUj-9kjY
https://openreview.net/forum?id=KKZ-FeUj-9kjY
ttT0L-IGxpbuw
review
1,362,153,000,000
KKZ-FeUj-9kjY
[ "everyone" ]
[ "anonymous reviewer 0136" ]
ICLR.cc/2013/conference
2013
title: review of Sparse Penalty in Deep Belief Networks: Using the Mixed Norm Constraint review: The paper proposes a mixed norm penalty for regularizing RBMs and DBNs. The work extends previous work on sparse RBMs and DBNs and extends the work of Luo et al. (2011) on sparse group RBMs (and DBMs) to deep belief nets. T...
KKZ-FeUj-9kjY
Sparse Penalty in Deep Belief Networks: Using the Mixed Norm Constraint
[ "Xanadu Halkias", "Sébastien PARIS", "Herve Glotin" ]
Deep Belief Networks (DBN) have been successfully applied on popular machine learning tasks. Specifically, when applied on hand-written digit recognition, DBNs have achieved approximate accuracy rates of 98.8%. In an effort to optimize the data representation achieved by the DBN and maximize their descriptive power, re...
[ "dbn", "deep belief networks", "digit recognition", "sparse constraints", "sparse penalty", "dbns", "approximate accuracy rates" ]
https://openreview.net/pdf?id=KKZ-FeUj-9kjY
https://openreview.net/forum?id=KKZ-FeUj-9kjY
ijgMjq-uMOiYw
review
1,362,144,480,000
KKZ-FeUj-9kjY
[ "everyone" ]
[ "anonymous reviewer 61fc" ]
ICLR.cc/2013/conference
2013
title: review of Sparse Penalty in Deep Belief Networks: Using the Mixed Norm Constraint review: In this paper the authors propose a method to make the hidden units of RBM group sparse. The key idea is to add a penalty term to the negative log-likelihood loss penalizing the L2/L1 norm over the activations of the RBM. T...
KKZ-FeUj-9kjY
Sparse Penalty in Deep Belief Networks: Using the Mixed Norm Constraint
[ "Xanadu Halkias", "Sébastien PARIS", "Herve Glotin" ]
Deep Belief Networks (DBN) have been successfully applied on popular machine learning tasks. Specifically, when applied on hand-written digit recognition, DBNs have achieved approximate accuracy rates of 98.8%. In an effort to optimize the data representation achieved by the DBN and maximize their descriptive power, re...
[ "dbn", "deep belief networks", "digit recognition", "sparse constraints", "sparse penalty", "dbns", "approximate accuracy rates" ]
https://openreview.net/pdf?id=KKZ-FeUj-9kjY
https://openreview.net/forum?id=KKZ-FeUj-9kjY
dWSK4E1RkeWRi
review
1,362,193,620,000
KKZ-FeUj-9kjY
[ "everyone" ]
[ "anonymous reviewer e6d4" ]
ICLR.cc/2013/conference
2013
title: review of Sparse Penalty in Deep Belief Networks: Using the Mixed Norm Constraint review: Since the last version of the paper (v2) is incomplete my following comments are mainly based on the first version. This paper proposes using $l_{1,2}$ regularization (for both non-overlapping and overlapping groups) upo...
l_PClqDdLb5Bp
Stochastic Pooling for Regularization of Deep Convolutional Neural Networks
[ "Matthew Zeiler", "Rob Fergus" ]
We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution, given by the activities within th...
[ "regularization", "data augmentation", "stochastic pooling", "simple", "effective", "conventional deterministic", "operations" ]
https://openreview.net/pdf?id=l_PClqDdLb5Bp
https://openreview.net/forum?id=l_PClqDdLb5Bp
obPcCcSvhKovH
review
1,362,369,360,000
l_PClqDdLb5Bp
[ "everyone" ]
[ "Marc'Aurelio Ranzato" ]
ICLR.cc/2013/conference
2013
review: Another minor comment related to the visualization method: since there is no iterative 'inference' step typical of deconv. nets (the features are already given by a direct forward pass) then this method is perhaps more similar to this old paper of mine: M. Ranzato, F.J. Huang, Y. Boureau, Y. LeCun, 'Unsupervis...
l_PClqDdLb5Bp
Stochastic Pooling for Regularization of Deep Convolutional Neural Networks
[ "Matthew Zeiler", "Rob Fergus" ]
We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution, given by the activities within th...
[ "regularization", "data augmentation", "stochastic pooling", "simple", "effective", "conventional deterministic", "operations" ]
https://openreview.net/pdf?id=l_PClqDdLb5Bp
https://openreview.net/forum?id=l_PClqDdLb5Bp
BBmMrdZA5UBaz
review
1,362,349,140,000
l_PClqDdLb5Bp
[ "everyone" ]
[ "Marc'Aurelio Ranzato" ]
ICLR.cc/2013/conference
2013
review: I really like this paper because: - it is simple yet very effective and - the empirical validation not only demonstrates the method but it also helps understanding where the gain comes from (tab. 5 was very useful to understand the regularization effect brought by the sampling noise). I also found intrigui...
l_PClqDdLb5Bp
Stochastic Pooling for Regularization of Deep Convolutional Neural Networks
[ "Matthew Zeiler", "Rob Fergus" ]
We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution, given by the activities within th...
[ "regularization", "data augmentation", "stochastic pooling", "simple", "effective", "conventional deterministic", "operations" ]
https://openreview.net/pdf?id=l_PClqDdLb5Bp
https://openreview.net/forum?id=l_PClqDdLb5Bp
SPk0N0RlUTrqv
review
1,394,470,920,000
l_PClqDdLb5Bp
[ "everyone" ]
[ "anonymous reviewer f4a8" ]
ICLR.cc/2013/conference
2013
review: I apologize for the delay in my reply. Verdict: weak accept.
l_PClqDdLb5Bp
Stochastic Pooling for Regularization of Deep Convolutional Neural Networks
[ "Matthew Zeiler", "Rob Fergus" ]
We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution, given by the activities within th...
[ "regularization", "data augmentation", "stochastic pooling", "simple", "effective", "conventional deterministic", "operations" ]
https://openreview.net/pdf?id=l_PClqDdLb5Bp
https://openreview.net/forum?id=l_PClqDdLb5Bp
WilRXfhv6jXxa
review
1,361,845,800,000
l_PClqDdLb5Bp
[ "everyone" ]
[ "anonymous reviewer 2b4c" ]
ICLR.cc/2013/conference
2013
title: review of Stochastic Pooling for Regularization of Deep Convolutional Neural Networks review: This paper introduces a new regularization technique based on inexpensive approximations to model averaging, similar to dropout. As with dropout, the training procedure involves stochasticity but the trained model ...
l_PClqDdLb5Bp
Stochastic Pooling for Regularization of Deep Convolutional Neural Networks
[ "Matthew Zeiler", "Rob Fergus" ]
We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution, given by the activities within th...
[ "regularization", "data augmentation", "stochastic pooling", "simple", "effective", "conventional deterministic", "operations" ]
https://openreview.net/pdf?id=l_PClqDdLb5Bp
https://openreview.net/forum?id=l_PClqDdLb5Bp
OOBjrzG_LdOEf
review
1,362,085,800,000
l_PClqDdLb5Bp
[ "everyone" ]
[ "Ian Goodfellow" ]
ICLR.cc/2013/conference
2013
review: I'm excited about this paper because it introduces another trick for cheap model averaging like dropout. It will be interesting to see if this kind of fast model averaging turns into a whole subfield. I recently got some very good results ( http://arxiv.org/abs/1302.4389 ) by using a model that works well wi...
l_PClqDdLb5Bp
Stochastic Pooling for Regularization of Deep Convolutional Neural Networks
[ "Matthew Zeiler", "Rob Fergus" ]
We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution, given by the activities within th...
[ "regularization", "data augmentation", "stochastic pooling", "simple", "effective", "conventional deterministic", "operations" ]
https://openreview.net/pdf?id=l_PClqDdLb5Bp
https://openreview.net/forum?id=l_PClqDdLb5Bp
ZVb9LYU20iZhX
review
1,362,379,980,000
l_PClqDdLb5Bp
[ "everyone" ]
[ "anonymous reviewer f4a8" ]
ICLR.cc/2013/conference
2013
title: review of Stochastic Pooling for Regularization of Deep Convolutional Neural Networks review: Regularization methods are critical for the successful applications of neural networks. This work introduces a new dropout-inspired regularization method named stochastic pooling. The method is simple, applica...
l_PClqDdLb5Bp
Stochastic Pooling for Regularization of Deep Convolutional Neural Networks
[ "Matthew Zeiler", "Rob Fergus" ]
We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution, given by the activities within th...
[ "regularization", "data augmentation", "stochastic pooling", "simple", "effective", "conventional deterministic", "operations" ]
https://openreview.net/pdf?id=l_PClqDdLb5Bp
https://openreview.net/forum?id=l_PClqDdLb5Bp
lWJdCuzGuRlGF
review
1,362,101,820,000
l_PClqDdLb5Bp
[ "everyone" ]
[ "anonymous reviewer cd07" ]
ICLR.cc/2013/conference
2013
title: review of Stochastic Pooling for Regularization of Deep Convolutional Neural Networks review: The authors introduce a stochastic pooling method in the context of convolutional neural networks, which replaces the traditionally used average or max pooling operators. In the stochastic pooling a multinomial ...
idpCdOWtqXd60
Efficient Estimation of Word Representations in Vector Space
[ "Tomas Mikolov", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ]
We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. ...
[ "word representations", "vectors", "efficient estimation", "vector space", "novel model architectures", "continuous vector representations", "words", "large data sets", "quality" ]
https://openreview.net/pdf?id=idpCdOWtqXd60
https://openreview.net/forum?id=idpCdOWtqXd60
ELp1azAY4uaYz
review
1,362,415,140,000
idpCdOWtqXd60
[ "everyone" ]
[ "anonymous reviewer 3c5e" ]
ICLR.cc/2013/conference
2013
title: review of Efficient Estimation of Word Representations in Vector Space review: This paper introduces a linear word vector learning model and shows that it performs better on a linear evaluation task than nonlinear models. While the new evaluation experiment is interesting, the paper has too many issues in its cur...
idpCdOWtqXd60
Efficient Estimation of Word Representations in Vector Space
[ "Tomas Mikolov", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ]
We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. ...
[ "word representations", "vectors", "efficient estimation", "vector space", "novel model architectures", "continuous vector representations", "words", "large data sets", "quality" ]
https://openreview.net/pdf?id=idpCdOWtqXd60
https://openreview.net/forum?id=idpCdOWtqXd60
bf2Dnm5t9Ubqe
review
1,360,865,940,000
idpCdOWtqXd60
[ "everyone" ]
[ "anonymous reviewer f5bf" ]
ICLR.cc/2013/conference
2013
title: review of Efficient Estimation of Word Representations in Vector Space review: The paper studies the problem of learning vector representations for words based on large text corpora using 'neural language models' (NLMs). These models learn a feature vector for each word in such a way that the feature vector of ...
idpCdOWtqXd60
Efficient Estimation of Word Representations in Vector Space
[ "Tomas Mikolov", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ]
We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. ...
[ "word representations", "vectors", "efficient estimation", "vector space", "novel model architectures", "continuous vector representations", "words", "large data sets", "quality" ]
https://openreview.net/pdf?id=idpCdOWtqXd60
https://openreview.net/forum?id=idpCdOWtqXd60
6NMO6i-9pXN8q
review
1,363,602,720,000
idpCdOWtqXd60
[ "everyone" ]
[ "anonymous reviewer 3c5e" ]
ICLR.cc/2013/conference
2013
review: It is really unfortunate that the responding author seems to care solely about every possible tweak to his model and combinations of his models but shows a strong disregard for a proper scientific comparison that would show what's really the underlying reason for the increase in accuracy on (again) his own ...
idpCdOWtqXd60
Efficient Estimation of Word Representations in Vector Space
[ "Tomas Mikolov", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ]
We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. ...
[ "word representations", "vectors", "efficient estimation", "vector space", "novel model architectures", "continuous vector representations", "words", "large data sets", "quality" ]
https://openreview.net/pdf?id=idpCdOWtqXd60
https://openreview.net/forum?id=idpCdOWtqXd60
OOksUbLar_UGE
review
1,363,350,360,000
idpCdOWtqXd60
[ "everyone" ]
[ "anonymous reviewer 13e8" ]
ICLR.cc/2013/conference
2013
review: In light of the authors' response I'm changing my score for the paper to Weak Reject.
idpCdOWtqXd60
Efficient Estimation of Word Representations in Vector Space
[ "Tomas Mikolov", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ]
We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. ...
[ "word representations", "vectors", "efficient estimation", "vector space", "novel model architectures", "continuous vector representations", "words", "large data sets", "quality" ]
https://openreview.net/pdf?id=idpCdOWtqXd60
https://openreview.net/forum?id=idpCdOWtqXd60
3Ms_MCOhFG34r
review
1,368,188,160,000
idpCdOWtqXd60
[ "everyone" ]
[ "Pontus Stenetorp" ]
ICLR.cc/2013/conference
2013
review: In response to the request for references made by the first author for the statement regarding semantic similarity being intransitive, I think the reference should be to 'Features of similarity' by Tversky (1977). Please find what I believe to be the relevant portion below. `We say 'the portrait resemble...
idpCdOWtqXd60
Efficient Estimation of Word Representations in Vector Space
[ "Tomas Mikolov", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ]
We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. ...
[ "word representations", "vectors", "efficient estimation", "vector space", "novel model architectures", "continuous vector representations", "words", "large data sets", "quality" ]
https://openreview.net/pdf?id=idpCdOWtqXd60
https://openreview.net/forum?id=idpCdOWtqXd60
ddu0ScgIDPSxi
review
1,363,279,380,000
idpCdOWtqXd60
[ "everyone" ]
[ "anonymous reviewer f5bf" ]
ICLR.cc/2013/conference
2013
review: The revision and rebuttal failed to address the issues raised by the reviewers. I do not think the paper should be accepted in its current form. Quality rating: Strong reject Confidence: Reviewer is knowledgeable
idpCdOWtqXd60
Efficient Estimation of Word Representations in Vector Space
[ "Tomas Mikolov", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ]
We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. ...
[ "word representations", "vectors", "efficient estimation", "vector space", "novel model architectures", "continuous vector representations", "words", "large data sets", "quality" ]
https://openreview.net/pdf?id=idpCdOWtqXd60
https://openreview.net/forum?id=idpCdOWtqXd60
QDmFD7aPnX1h7
review
1,360,857,420,000
idpCdOWtqXd60
[ "everyone" ]
[ "anonymous reviewer 13e8" ]
ICLR.cc/2013/conference
2013
title: review of Efficient Estimation of Word Representations in Vector Space review: The authors propose two log-linear language models for learning real-valued vector representations of words. The models are designed to be simple and fast and are shown to be scalable to very large datasets. The resulting word embeddi...
idpCdOWtqXd60
Efficient Estimation of Word Representations in Vector Space
[ "Tomas Mikolov", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ]
We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. ...
[ "word representations", "vectors", "efficient estimation", "vector space", "novel model architectures", "continuous vector representations", "words", "large data sets", "quality" ]
https://openreview.net/pdf?id=idpCdOWtqXd60
https://openreview.net/forum?id=idpCdOWtqXd60
sJxHJpdSKIJNL
review
1,363,326,840,000
idpCdOWtqXd60
[ "everyone" ]
[ "anonymous reviewer f5bf" ]
ICLR.cc/2013/conference
2013
review: The revision and rebuttal failed to address the issues raised by the reviewers. I do not think the paper should be accepted in its current form. Quality rating: Strong reject Confidence: Reviewer is knowledgeable
idpCdOWtqXd60
Efficient Estimation of Word Representations in Vector Space
[ "Tomas Mikolov", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ]
We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. ...
[ "word representations", "vectors", "efficient estimation", "vector space", "novel model architectures", "continuous vector representations", "words", "large data sets", "quality" ]
https://openreview.net/pdf?id=idpCdOWtqXd60
https://openreview.net/forum?id=idpCdOWtqXd60
C8Vn84fqSG8qa
review
1,362,716,940,000
idpCdOWtqXd60
[ "everyone" ]
[ "Tomas Mikolov" ]
ICLR.cc/2013/conference
2013
review: We have updated the paper (new version will be visible on Monday): - added new results with comparison of models trained on the same data with the same dimensionality of the word vectors - additional comparison on a task that was used previously for comparison of word vectors - added citations, more di...
zzy0H3ZbWiHsS
Audio Artist Identification by Deep Neural Network
[ "胡振", "Kun Fu", "Changshui Zhang" ]
Since it officially began in 2005, the annual Music Information Retrieval Evaluation eXchange (MIREX) has made great contributions to Music Information Retrieval (MIR) research. By defining some important tasks and providing a meaningful comparison system, the International Music Information Retrieval Systems Evaluati...
[ "mirex", "important tasks", "imirsel", "audio artist identification", "deep neural network", "great contributions", "music information retrieval", "mir" ]
https://openreview.net/pdf?id=zzy0H3ZbWiHsS
https://openreview.net/forum?id=zzy0H3ZbWiHsS
Zg8fgYb5dAUiY
review
1,362,479,820,000
zzy0H3ZbWiHsS
[ "everyone" ]
[ "anonymous reviewer 589d" ]
ICLR.cc/2013/conference
2013
title: review of Audio Artist Identification by Deep Neural Network review: A brief summary of the paper’s contributions. In the context of prior work: This paper builds a hybrid model based on Deep Belief Network (DBN) and Stacked Denoising Autoencoder (SDA) and applies it to Audio Artist Identification (AAI) task. S...
zzy0H3ZbWiHsS
Audio Artist Identification by Deep Neural Network
[ "胡振", "Kun Fu", "Changshui Zhang" ]
Since it officially began in 2005, the annual Music Information Retrieval Evaluation eXchange (MIREX) has made great contributions to Music Information Retrieval (MIR) research. By defining some important tasks and providing a meaningful comparison system, the International Music Information Retrieval Systems Evaluati...
[ "mirex", "important tasks", "imirsel", "audio artist identification", "deep neural network", "great contributions", "music information retrieval", "mir" ]
https://openreview.net/pdf?id=zzy0H3ZbWiHsS
https://openreview.net/forum?id=zzy0H3ZbWiHsS
obqUAuHWC9mWc
review
1,362,137,160,000
zzy0H3ZbWiHsS
[ "everyone" ]
[ "anonymous reviewer 8eb9" ]
ICLR.cc/2013/conference
2013
title: review of Audio Artist Identification by Deep Neural Network review: This paper presents an application of a hybrid deep learning model to the task of audio artist identification. Novelty: + The novelty of the paper comes from using a hybrid unsupervised learning approach by stacking Denoising Auto-Encoders...
zzy0H3ZbWiHsS
Audio Artist Identification by Deep Neural Network
[ "胡振", "Kun Fu", "Changshui Zhang" ]
Since it officially began in 2005, the annual Music Information Retrieval Evaluation eXchange (MIREX) has made great contributions to Music Information Retrieval (MIR) research. By defining some important tasks and providing a meaningful comparison system, the International Music Information Retrieval Systems Evaluati...
[ "mirex", "important tasks", "imirsel", "audio artist identification", "deep neural network", "great contributions", "music information retrieval", "mir" ]
https://openreview.net/pdf?id=zzy0H3ZbWiHsS
https://openreview.net/forum?id=zzy0H3ZbWiHsS
k3fr32tl6qARo
review
1,362,226,800,000
zzy0H3ZbWiHsS
[ "everyone" ]
[ "anonymous reviewer b7e1" ]
ICLR.cc/2013/conference
2013
title: review of Audio Artist Identification by Deep Neural Network review: This paper describes work to collect a new dataset with music from 11 classical composers for the task of audio composer identification (although the title, abstract, and introduction use the phrase 'audio artist identification' which is a diff...
zzy0H3ZbWiHsS
Audio Artist Identification by Deep Neural Network
[ "胡振", "Kun Fu", "Changshui Zhang" ]
Since it officially began in 2005, the annual Music Information Retrieval Evaluation eXchange (MIREX) has made great contributions to Music Information Retrieval (MIR) research. By defining some important tasks and providing a meaningful comparison system, the International Music Information Retrieval Systems Evaluati...
[ "mirex", "important tasks", "imirsel", "audio artist identification", "deep neural network", "great contributions", "music information retrieval", "mir" ]
https://openreview.net/pdf?id=zzy0H3ZbWiHsS
https://openreview.net/forum?id=zzy0H3ZbWiHsS
qbjSYWhow-bDl
review
1,362,725,700,000
zzy0H3ZbWiHsS
[ "everyone" ]
[ "胡振" ]
ICLR.cc/2013/conference
2013
review: Thank you. We will revise our paper as soon as possible. Zhen
7IOAIAx1AiEYC
Adaptive learning rates and parallelization for stochastic, sparse, non-smooth gradients
[ "Tom Schaul", "Yann LeCun" ]
Recent work has established an empirically successful framework for adapting learning rates for stochastic gradient descent (SGD). This effectively removes all needs for tuning, while automatically reducing learning rates over time on stationary problems, and permitting learning rates to grow appropriately in non-stati...
[ "rates", "sparse", "parallelization", "stochastic", "adaptive learning rates", "gradients adaptive", "gradients recent work", "successful framework", "stochastic gradient descent", "sgd" ]
https://openreview.net/pdf?id=7IOAIAx1AiEYC
https://openreview.net/forum?id=7IOAIAx1AiEYC
UUYiUZMOiCjl1
review
1,362,388,500,000
7IOAIAx1AiEYC
[ "everyone" ]
[ "anonymous reviewer 7b8e" ]
ICLR.cc/2013/conference
2013
title: review of Adaptive learning rates and parallelization for stochastic, sparse, non-smooth gradients review: This paper builds on the adaptive learning rate scheme proposed in [1] for choosing the learning rate when optimizing a neural network. The first result (eq. 3) is that of figuring out an ...
7IOAIAx1AiEYC
Adaptive learning rates and parallelization for stochastic, sparse, non-smooth gradients
[ "Tom Schaul", "Yann LeCun" ]
Recent work has established an empirically successful framework for adapting learning rates for stochastic gradient descent (SGD). This effectively removes all needs for tuning, while automatically reducing learning rates over time on stationary problems, and permitting learning rates to grow appropriately in non-stati...
[ "rates", "sparse", "parallelization", "stochastic", "adaptive learning rates", "gradients adaptive", "gradients recent work", "successful framework", "stochastic gradient descent", "sgd" ]
https://openreview.net/pdf?id=7IOAIAx1AiEYC
https://openreview.net/forum?id=7IOAIAx1AiEYC
hhgfZq1Yf5hzr
review
1,362,001,560,000
7IOAIAx1AiEYC
[ "everyone" ]
[ "anonymous reviewer 7318" ]
ICLR.cc/2013/conference
2013
title: review of Adaptive learning rates and parallelization for stochastic, sparse, non-smooth gradients review: summary: The paper proposes a new variant of stochastic gradient descent that is fully automated (no hyper-parameter to tune) and is robust to various scenarios, including mini-batches, sparsity, an...
7IOAIAx1AiEYC
Adaptive learning rates and parallelization for stochastic, sparse, non-smooth gradients
[ "Tom Schaul", "Yann LeCun" ]
Recent work has established an empirically successful framework for adapting learning rates for stochastic gradient descent (SGD). This effectively removes all needs for tuning, while automatically reducing learning rates over time on stationary problems, and permitting learning rates to grow appropriately in non-stati...
[ "rates", "sparse", "parallelization", "stochastic", "adaptive learning rates", "gradients adaptive", "gradients recent work", "successful framework", "stochastic gradient descent", "sgd" ]
https://openreview.net/pdf?id=7IOAIAx1AiEYC
https://openreview.net/forum?id=7IOAIAx1AiEYC
_VZcVNP2cvtGj
review
1,362,529,800,000
7IOAIAx1AiEYC
[ "everyone" ]
[ "Tom Schaul, Yann LeCun" ]
ICLR.cc/2013/conference
2013
review: We thank the reviewers for their constructive comments. We'll try to clarify a few points they bring up: Parallelization: The batchsize-aware adaptive learning rates (equation 3) are applicable independently of how the minibatches are computed, whether on a multi-core machine, or across multiple machines. Th...
7IOAIAx1AiEYC
Adaptive learning rates and parallelization for stochastic, sparse, non-smooth gradients
[ "Tom Schaul", "Yann LeCun" ]
Recent work has established an empirically successful framework for adapting learning rates for stochastic gradient descent (SGD). This effectively removes all needs for tuning, while automatically reducing learning rates over time on stationary problems, and permitting learning rates to grow appropriately in non-stati...
[ "rates", "sparse", "parallelization", "stochastic", "adaptive learning rates", "gradients adaptive", "gradients recent work", "successful framework", "stochastic gradient descent", "sgd" ]
https://openreview.net/pdf?id=7IOAIAx1AiEYC
https://openreview.net/forum?id=7IOAIAx1AiEYC
_5dVjqxuVf560
review
1,361,565,480,000
7IOAIAx1AiEYC
[ "everyone" ]
[ "anonymous reviewer 0321" ]
ICLR.cc/2013/conference
2013
title: review of Adaptive learning rates and parallelization for stochastic, sparse, non-smooth gradients review: This is a follow-up paper to reference [1], which describes a parameter-free adaptive method to set learning rates for SGD. This submission cannot be read without first reading [1]. It expands the wor...
6elK6-b28q62g
Behavior Pattern Recognition using A New Representation Model
[ "Eric qiao", "Peter A. Beling" ]
We study the use of inverse reinforcement learning (IRL) as a tool for the recognition of agents' behavior on the basis of observation of their sequential decision behavior interacting with the environment. We model the problem faced by the agents as a Markov decision process (MDP) and model the observed behavior of th...
[ "irl", "agents", "basis", "mdp", "reward functions", "behavior pattern recognition", "new representation model", "use", "inverse reinforcement learning" ]
https://openreview.net/pdf?id=6elK6-b28q62g
https://openreview.net/forum?id=6elK6-b28q62g
zkxNBUsiN6B38
review
1,363,763,280,000
6elK6-b28q62g
[ "everyone" ]
[ "Eric qiao" ]
ICLR.cc/2013/conference
2013
review: Based on the reviews, a revised version will be updated on arXiv tonight. Thanks.
6elK6-b28q62g
Behavior Pattern Recognition using A New Representation Model
[ "Eric qiao", "Peter A. Beling" ]
We study the use of inverse reinforcement learning (IRL) as a tool for the recognition of agents' behavior on the basis of observation of their sequential decision behavior interacting with the environment. We model the problem faced by the agents as a Markov decision process (MDP) and model the observed behavior of th...
[ "irl", "agents", "basis", "mdp", "reward functions", "behavior pattern recognition", "new representation model", "use", "inverse reinforcement learning" ]
https://openreview.net/pdf?id=6elK6-b28q62g
https://openreview.net/forum?id=6elK6-b28q62g
KK9P-lgBP7-mW
review
1,362,703,740,000
6elK6-b28q62g
[ "everyone" ]
[ "anonymous reviewer 8f06" ]
ICLR.cc/2013/conference
2013
title: review of Behavior Pattern Recognition using A New Representation Model review: Summary: The paper presents an approach to activity recognition based on inverse reinforcement learning. It proposes an IRL algorithm based on Gaussian Processes. Evaluation is presented for classification and clustering of behavi...
6elK6-b28q62g
Behavior Pattern Recognition using A New Representation Model
[ "Eric qiao", "Peter A. Beling" ]
We study the use of inverse reinforcement learning (IRL) as a tool for the recognition of agents' behavior on the basis of observation of their sequential decision behavior interacting with the environment. We model the problem faced by the agents as a Markov decision process (MDP) and model the observed behavior of th...
[ "irl", "agents", "basis", "mdp", "reward functions", "behavior pattern recognition", "new representation model", "use", "inverse reinforcement learning" ]
https://openreview.net/pdf?id=6elK6-b28q62g
https://openreview.net/forum?id=6elK6-b28q62g
N6tX5S-nXZNbo
review
1,363,762,920,000
6elK6-b28q62g
[ "everyone" ]
[ "Eric qiao" ]
ICLR.cc/2013/conference
2013
review: To Reviewer 698b. --------------------- Response: We propose a new problem that aims to categorize decision-makers by learning from the samples of their sequential decision-making behavior. The first key to success of this problem is an appropriately designed feature representation constructed from observatio...
6elK6-b28q62g
Behavior Pattern Recognition using A New Representation Model
[ "Eric qiao", "Peter A. Beling" ]
We study the use of inverse reinforcement learning (IRL) as a tool for the recognition of agents' behavior on the basis of observation of their sequential decision behavior interacting with the environment. We model the problem faced by the agents as a Markov decision process (MDP) and model the observed behavior of th...
[ "irl", "agents", "basis", "mdp", "reward functions", "behavior pattern recognition", "new representation model", "use", "inverse reinforcement learning" ]
https://openreview.net/pdf?id=6elK6-b28q62g
https://openreview.net/forum?id=6elK6-b28q62g
PPs3ZO_pnzZTb
review
1,362,473,880,000
6elK6-b28q62g
[ "everyone" ]
[ "anonymous reviewer 08b2" ]
ICLR.cc/2013/conference
2013
title: review of Behavior Pattern Recognition using A New Representation Model review: This paper proposes a behavior pattern recognition framework that re-represents the problem of classifying behavior trajectories as a problem of classifying reward functions instead. Since the reward function of the agent that is cla...
6elK6-b28q62g
Behavior Pattern Recognition using A New Representation Model
[ "Eric qiao", "Peter A. Beling" ]
We study the use of inverse reinforcement learning (IRL) as a tool for the recognition of agents' behavior on the basis of observation of their sequential decision behavior interacting with the environment. We model the problem faced by the agents as a Markov decision process (MDP) and model the observed behavior of th...
[ "irl", "agents", "basis", "mdp", "reward functions", "behavior pattern recognition", "new representation model", "use", "inverse reinforcement learning" ]
https://openreview.net/pdf?id=6elK6-b28q62g
https://openreview.net/forum?id=6elK6-b28q62g
kA2a1ywTaHAT3
review
1,362,418,320,000
6elK6-b28q62g
[ "everyone" ]
[ "anonymous reviewer 698b" ]
ICLR.cc/2013/conference
2013
title: review of Behavior Pattern Recognition using A New Representation Model review: I am not a huge expert in reinforcement learning but nonetheless I have to say this paper is quite confusing to me. I had a hard time understanding the point. Moreover, I think the topic of this paper has nothing to do whatsoever wit...
V_-8VUqv8h_H3
The Manifold of Human Emotions
[ "Seungyeon Kim", "Fuxin Li", "Guy Lebanon", "Irfan Essa" ]
Sentiment analysis predicts the presence of positive or negative emotions in a text document. In this paper, we consider higher dimensional extensions of the sentiment concept, which represent a richer set of human emotions. Our approach goes beyond previous work in that our model contains a continuous manifold rather ...
[ "human emotions", "manifold", "model", "presence", "positive", "negative emotions", "text document", "higher dimensional extensions", "sentiment concept" ]
https://openreview.net/pdf?id=V_-8VUqv8h_H3
https://openreview.net/forum?id=V_-8VUqv8h_H3
DsMNDQOdK3o4y
comment
1,362,951,000,000
ADj5N2hoX0_ox
[ "everyone" ]
[ "Seungyeon Kim, Fuxin Li, Guy Lebanon, Irfan Essa" ]
ICLR.cc/2013/conference
2013
reply: 1. P(Y|Z) can be computed using Bayes rule on P(Z|Y). We had to remove lots of details due to the space limits. Detailed implementation is on our full paper on ArXiv (http://arxiv.org/abs/1202.1568). 2. A lot of references and comparisons are omitted because of the space limits, but we will try to include sug...
V_-8VUqv8h_H3
The Manifold of Human Emotions
[ "Seungyeon Kim", "Fuxin Li", "Guy Lebanon", "Irfan Essa" ]
Sentiment analysis predicts the presence of positive or negative emotions in a text document. In this paper, we consider higher dimensional extensions of the sentiment concept, which represent a richer set of human emotions. Our approach goes beyond previous work in that our model contains a continuous manifold rather ...
[ "human emotions", "manifold", "model", "presence", "positive", "negative emotions", "text document", "higher dimensional extensions", "sentiment concept" ]
https://openreview.net/pdf?id=V_-8VUqv8h_H3
https://openreview.net/forum?id=V_-8VUqv8h_H3
C4MuPqjpEwP7S
review
1,362,239,340,000
V_-8VUqv8h_H3
[ "everyone" ]
[ "anonymous reviewer e0d0" ]
ICLR.cc/2013/conference
2013
title: review of The Manifold of Human Emotions review: This paper proposes a new method for sentiment analysis of text documents based on two phases: first, learning a continuous vector representation of the document (a projection on the mood manifold) and second, learning to map from this representation to the sen...
V_-8VUqv8h_H3
The Manifold of Human Emotions
[ "Seungyeon Kim", "Fuxin Li", "Guy Lebanon", "Irfan Essa" ]
Sentiment analysis predicts the presence of positive or negative emotions in a text document. In this paper, we consider higher dimensional extensions of the sentiment concept, which represent a richer set of human emotions. Our approach goes beyond previous work in that our model contains a continuous manifold rather ...
[ "human emotions", "manifold", "model", "presence", "positive", "negative emotions", "text document", "higher dimensional extensions", "sentiment concept" ]
https://openreview.net/pdf?id=V_-8VUqv8h_H3
https://openreview.net/forum?id=V_-8VUqv8h_H3
zzCNIJyUdvSfw
comment
1,362,951,060,000
C4MuPqjpEwP7S
[ "everyone" ]
[ "Seungyeon Kim, Fuxin Li, Guy Lebanon, Irfan Essa" ]
ICLR.cc/2013/conference
2013
reply: 1. It is more related to latent variable models than to neural networks, as it doesn't have any activation function between layers. Moreover, a neural network is learned by back-propagation algorithms, but our model is learnt using maximum likelihood by marginalizing the latent variable Z. Linear regression part is resul...
V_-8VUqv8h_H3
The Manifold of Human Emotions
[ "Seungyeon Kim", "Fuxin Li", "Guy Lebanon", "Irfan Essa" ]
Sentiment analysis predicts the presence of positive or negative emotions in a text document. In this paper, we consider higher dimensional extensions of the sentiment concept, which represent a richer set of human emotions. Our approach goes beyond previous work in that our model contains a continuous manifold rather ...
[ "human emotions", "manifold", "model", "presence", "positive", "negative emotions", "text document", "higher dimensional extensions", "sentiment concept" ]
https://openreview.net/pdf?id=V_-8VUqv8h_H3
https://openreview.net/forum?id=V_-8VUqv8h_H3
ADj5N2hoX0_ox
review
1,362,105,540,000
V_-8VUqv8h_H3
[ "everyone" ]
[ "anonymous reviewer 9992" ]
ICLR.cc/2013/conference
2013
title: review of The Manifold of Human Emotions review: This paper introduces a model for sentiment analysis aimed at capturing blended, non-binary notions of sentiment. The paper uses a novel dataset of >1 million blog posts (livejournal) using 32 emoticons as labels. The model uses a Gaussian latent variable to embed...
KHMdKiX2lbguE
Boltzmann Machines and Denoising Autoencoders for Image Denoising
[ "KyungHyun Cho" ]
Image denoising based on a probabilistic model of local image patches has been employed by various researchers, and recently a deep (denoising) autoencoder has been proposed by Burger et al. [2012] and Xie et al. [2012] as a good model for this. In this paper, we propose that another popular family of models in the fie...
[ "boltzmann machines", "autoencoders", "image", "models", "noise", "experiments", "probabilistic model", "local image patches", "various researchers", "deep" ]
https://openreview.net/pdf?id=KHMdKiX2lbguE
https://openreview.net/forum?id=KHMdKiX2lbguE
PLgu8d4J3rRz9
review
1,362,189,720,000
KHMdKiX2lbguE
[ "everyone" ]
[ "anonymous reviewer bf00" ]
ICLR.cc/2013/conference
2013
title: review of Boltzmann Machines and Denoising Autoencoders for Image Denoising review: This paper is an empirical comparison of the different models (Boltzmann Machines and Denoising Autoencoders) on the task of image denoising. Based on the experiments the authors claimed the increasing model depth improves the de...
KHMdKiX2lbguE
Boltzmann Machines and Denoising Autoencoders for Image Denoising
[ "KyungHyun Cho" ]
Image denoising based on a probabilistic model of local image patches has been employed by various researchers, and recently a deep (denoising) autoencoder has been proposed by Burger et al. [2012] and Xie et al. [2012] as a good model for this. In this paper, we propose that another popular family of models in the fie...
[ "boltzmann machines", "autoencoders", "image", "models", "noise", "experiments", "probabilistic model", "local image patches", "various researchers", "deep" ]
https://openreview.net/pdf?id=KHMdKiX2lbguE
https://openreview.net/forum?id=KHMdKiX2lbguE
VC6Ay131A-y1w
review
1,362,494,700,000
KHMdKiX2lbguE
[ "everyone" ]
[ "Kyunghyun Cho" ]
ICLR.cc/2013/conference
2013
review: Dear reviewer (d5d4), Thank you for your thorough review and comments. - 'the paper fails to compare against robust Boltzmann machines (Tang et al., CVPR 2012)' Thanks for pointing it out, and I agree that the RoBM should be tried as well. It will be possible to use the already trai...
KHMdKiX2lbguE
Boltzmann Machines and Denoising Autoencoders for Image Denoising
[ "KyungHyun Cho" ]
Image denoising based on a probabilistic model of local image patches has been employed by various researchers, and recently a deep (denoising) autoencoder has been proposed by Burger et al. [2012] and Xie et al. [2012] as a good model for this. In this paper, we propose that another popular family of models in the fie...
[ "boltzmann machines", "autoencoders", "image", "models", "noise", "experiments", "probabilistic model", "local image patches", "various researchers", "deep" ]
https://openreview.net/pdf?id=KHMdKiX2lbguE
https://openreview.net/forum?id=KHMdKiX2lbguE
ppSEYjkaMGYj5
review
1,362,411,780,000
KHMdKiX2lbguE
[ "everyone" ]
[ "Kyunghyun Cho" ]
ICLR.cc/2013/conference
2013
review: Dear reviewers (bf00) and (9120), First of all, thank you for your thorough reviews. Please find my response to your comments below. A revision of the paper that includes the fixes made accordingly will be available at arXiv.org tomorrow (Tue, 5 Mar 2013 01:00:00 GMT). To both reviewers (bf00)...
KHMdKiX2lbguE
Boltzmann Machines and Denoising Autoencoders for Image Denoising
[ "KyungHyun Cho" ]
Image denoising based on a probabilistic model of local image patches has been employed by various researchers, and recently a deep (denoising) autoencoder has been proposed by Burger et al. [2012] and Xie et al. [2012] as a good model for this. In this paper, we propose that another popular family of models in the fie...
[ "boltzmann machines", "autoencoders", "image", "models", "noise", "experiments", "probabilistic model", "local image patches", "various researchers", "deep" ]
https://openreview.net/pdf?id=KHMdKiX2lbguE
https://openreview.net/forum?id=KHMdKiX2lbguE
CIGoQSPKoZIKs
review
1,362,486,600,000
KHMdKiX2lbguE
[ "everyone" ]
[ "anonymous reviewer d5d4" ]
ICLR.cc/2013/conference
2013
title: review of Boltzmann Machines and Denoising Autoencoders for Image Denoising review: A brief summary of the paper's contributions, in the context of prior work. The paper proposed to use Gaussian deep Boltzmann machines (GDBM) for image denoising tasks, and it empirically compared the denoising performance to an...
KHMdKiX2lbguE
Boltzmann Machines and Denoising Autoencoders for Image Denoising
[ "KyungHyun Cho" ]
Image denoising based on a probabilistic model of local image patches has been employed by various researchers, and recently a deep (denoising) autoencoder has been proposed by Burger et al. [2012] and Xie et al. [2012] as a good model for this. In this paper, we propose that another popular family of models in the fie...
[ "boltzmann machines", "autoencoders", "image", "models", "noise", "experiments", "probabilistic model", "local image patches", "various researchers", "deep" ]
https://openreview.net/pdf?id=KHMdKiX2lbguE
https://openreview.net/forum?id=KHMdKiX2lbguE
tO_8tX3y-7SXz
review
1,362,361,020,000
KHMdKiX2lbguE
[ "everyone" ]
[ "anonymous reviewer 9120" ]
ICLR.cc/2013/conference
2013
title: review of Boltzmann Machines and Denoising Autoencoders for Image Denoising review: The paper conducts an empirical performance comparison, on the task of image denoising, where the denoising of large images is based on combining denoising of small patches. In this context, the study compares using, as small patc...
G0OapcfeK3g_R
Block Coordinate Descent for Sparse NMF
[ "Vamsi Potluru", "Sergey M. Plis", "Jonathan Le Roux", "Barak A. Pearlmutter", "Vince D. Calhoun", "Thomas P. Hayes" ]
Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data analysis. An important variant is the sparse NMF problem which arises when we explicitly require the learnt features to be sparse. A natural measure of sparsity is the L$_0$ norm, however its optimization is NP-hard. Mixed norms, such as L$_1$...
[ "sparsity", "norm", "datasets", "block coordinate descent", "nmf", "ubiquitous tool", "data analysis" ]
https://openreview.net/pdf?id=G0OapcfeK3g_R
https://openreview.net/forum?id=G0OapcfeK3g_R
WYMDnhGXd0L_5
review
1,363,287,900,000
G0OapcfeK3g_R
[ "everyone" ]
[ "Vamsi Potluru" ]
ICLR.cc/2013/conference
2013
review: Thanks to all the reviewers for their detailed and insightful comments and suggestions. We are working on incorporating most of them into our paper and should have the updated version this weekend.
G0OapcfeK3g_R
Block Coordinate Descent for Sparse NMF
[ "Vamsi Potluru", "Sergey M. Plis", "Jonathan Le Roux", "Barak A. Pearlmutter", "Vince D. Calhoun", "Thomas P. Hayes" ]
Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data analysis. An important variant is the sparse NMF problem which arises when we explicitly require the learnt features to be sparse. A natural measure of sparsity is the L$_0$ norm, however its optimization is NP-hard. Mixed norms, such as L$_1$...
[ "sparsity", "norm", "datasets", "block coordinate descent", "nmf", "ubiquitous tool", "data analysis" ]
https://openreview.net/pdf?id=G0OapcfeK3g_R
https://openreview.net/forum?id=G0OapcfeK3g_R
9pQNdTOGrb9Pw
review
1,360,229,520,000
G0OapcfeK3g_R
[ "everyone" ]
[ "Paul Shearer" ]
ICLR.cc/2013/conference
2013
review: The main convergence result in the paper, Theorem 3, does not prove what it purports to prove. Specifically the proof of Theorem 3 refers to a completely different optimization problem than the one the authors claim to be solving on page 5 and throughout the paper. In the proof the authors replace the noncon...
G0OapcfeK3g_R
Block Coordinate Descent for Sparse NMF
[ "Vamsi Potluru", "Sergey M. Plis", "Jonathan Le Roux", "Barak A. Pearlmutter", "Vince D. Calhoun", "Thomas P. Hayes" ]
Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data analysis. An important variant is the sparse NMF problem which arises when we explicitly require the learnt features to be sparse. A natural measure of sparsity is the L$_0$ norm, however its optimization is NP-hard. Mixed norms, such as L$_1$...
[ "sparsity", "norm", "datasets", "block coordinate descent", "nmf", "ubiquitous tool", "data analysis" ]
https://openreview.net/pdf?id=G0OapcfeK3g_R
https://openreview.net/forum?id=G0OapcfeK3g_R
QOxbO7qFg2Och
comment
1,364,235,300,000
YlFHNQiVHDYVP
[ "everyone" ]
[ "Vamsi Potluru" ]
ICLR.cc/2013/conference
2013
reply: Thanks again for your detailed comments. We will incorporate them into our paper.
G0OapcfeK3g_R
Block Coordinate Descent for Sparse NMF
[ "Vamsi Potluru", "Sergey M. Plis", "Jonathan Le Roux", "Barak A. Pearlmutter", "Vince D. Calhoun", "Thomas P. Hayes" ]
Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data analysis. An important variant is the sparse NMF problem which arises when we explicitly require the learnt features to be sparse. A natural measure of sparsity is the L$_0$ norm, however its optimization is NP-hard. Mixed norms, such as L$_1$...
[ "sparsity", "norm", "datasets", "block coordinate descent", "nmf", "ubiquitous tool", "data analysis" ]
https://openreview.net/pdf?id=G0OapcfeK3g_R
https://openreview.net/forum?id=G0OapcfeK3g_R
gWF1WlYIRPpoT
review
1,362,215,700,000
G0OapcfeK3g_R
[ "everyone" ]
[ "anonymous reviewer 1d08" ]
ICLR.cc/2013/conference
2013
title: review of Block Coordinate Descent for Sparse NMF review: Summary: The paper presents a new optimization algorithm for solving NMF problems with the Euclidean norm as fitting cost and subject to sparsity constraints. The sparsity is imposed explicitly by adding an equality constraint to the optimization probl...
G0OapcfeK3g_R
Block Coordinate Descent for Sparse NMF
[ "Vamsi Potluru", "Sergey M. Plis", "Jonathan Le Roux", "Barak A. Pearlmutter", "Vince D. Calhoun", "Thomas P. Hayes" ]
Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data analysis. An important variant is the sparse NMF problem which arises when we explicitly require the learnt features to be sparse. A natural measure of sparsity is the L$_0$ norm, however its optimization is NP-hard. Mixed norms, such as L$_1$...
[ "sparsity", "norm", "datasets", "block coordinate descent", "nmf", "ubiquitous tool", "data analysis" ]
https://openreview.net/pdf?id=G0OapcfeK3g_R
https://openreview.net/forum?id=G0OapcfeK3g_R
Y8F18yu7HQ6aJ
review
1,361,826,300,000
G0OapcfeK3g_R
[ "everyone" ]
[ "Vamsi Potluru" ]
ICLR.cc/2013/conference
2013
review: Thanks a lot for pointing this out. You are right about the issue. We are currently working on fixing the proof, as we hope that in our particular case the objective function will force the L2 equality constraint to be active at the optimum. The algorithm does still work fine in practice, and we have never ...
G0OapcfeK3g_R
Block Coordinate Descent for Sparse NMF
[ "Vamsi Potluru", "Sergey M. Plis", "Jonathan Le Roux", "Barak A. Pearlmutter", "Vince D. Calhoun", "Thomas P. Hayes" ]
Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data analysis. An important variant is the sparse NMF problem which arises when we explicitly require the learnt features to be sparse. A natural measure of sparsity is the L$_0$ norm, however its optimization is NP-hard. Mixed norms, such as L$_1$...
[ "sparsity", "norm", "datasets", "block coordinate descent", "nmf", "ubiquitous tool", "data analysis" ]
https://openreview.net/pdf?id=G0OapcfeK3g_R
https://openreview.net/forum?id=G0OapcfeK3g_R
YlFHNQiVHDYVP
review
1,363,996,080,000
G0OapcfeK3g_R
[ "everyone" ]
[ "anonymous reviewer d723" ]
ICLR.cc/2013/conference
2013
review: Dear authors, the revision of your paper is appreciated; the three major issues from my review have been resolved. I agree that explicit constraints may be harder to optimize, but the argument that then (non-expert) users can get the representation they want without fiddling parameters is a very good one ...
G0OapcfeK3g_R
Block Coordinate Descent for Sparse NMF
[ "Vamsi Potluru", "Sergey M. Plis", "Jonathan Le Roux", "Barak A. Pearlmutter", "Vince D. Calhoun", "Thomas P. Hayes" ]
Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data analysis. An important variant is the sparse NMF problem which arises when we explicitly require the learnt features to be sparse. A natural measure of sparsity is the L$_0$ norm, however its optimization is NP-hard. Mixed norms, such as L$_1$...
[ "sparsity", "norm", "datasets", "block coordinate descent", "nmf", "ubiquitous tool", "data analysis" ]
https://openreview.net/pdf?id=G0OapcfeK3g_R
https://openreview.net/forum?id=G0OapcfeK3g_R
cc18-e0C8uSHG
review
1,363,661,460,000
G0OapcfeK3g_R
[ "everyone" ]
[ "Vamsi Potluru" ]
ICLR.cc/2013/conference
2013
review: Anonymous d723: 1. Thanks for pointing out the bug in the projection operator algorithm Sparse-opt. We re-ran all the algorithms on the datasets based on the suggested bugfix and generated new figures for all the datasets. 2. We highlight the efficiency of our algorithm (...
G0OapcfeK3g_R
Block Coordinate Descent for Sparse NMF
[ "Vamsi Potluru", "Sergey M. Plis", "Jonathan Le Roux", "Barak A. Pearlmutter", "Vince D. Calhoun", "Thomas P. Hayes" ]
Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data analysis. An important variant is the sparse NMF problem which arises when we explicitly require the learnt features to be sparse. A natural measure of sparsity is the L$_0$ norm, however its optimization is NP-hard. Mixed norms, such as L$_1$...
[ "sparsity", "norm", "datasets", "block coordinate descent", "nmf", "ubiquitous tool", "data analysis" ]
https://openreview.net/pdf?id=G0OapcfeK3g_R
https://openreview.net/forum?id=G0OapcfeK3g_R
OEMFOvtudWEJh
review
1,362,274,980,000
G0OapcfeK3g_R
[ "everyone" ]
[ "anonymous reviewer 202b" ]
ICLR.cc/2013/conference
2013
title: review of Block Coordinate Descent for Sparse NMF review: This paper considers a dictionary learning algorithm for positive data. The sparse NMF approach imposes sparsity on the atoms, and positivity on both atoms and decomposition coefficients. The formulation is standard, i.e., applying a constraint on the ...
G0OapcfeK3g_R
Block Coordinate Descent for Sparse NMF
[ "Vamsi Potluru", "Sergey M. Plis", "Jonathan Le Roux", "Barak A. Pearlmutter", "Vince D. Calhoun", "Thomas P. Hayes" ]
Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data analysis. An important variant is the sparse NMF problem which arises when we explicitly require the learnt features to be sparse. A natural measure of sparsity is the L$_0$ norm, however its optimization is NP-hard. Mixed norms, such as L$_1$...
[ "sparsity", "norm", "datasets", "block coordinate descent", "nmf", "ubiquitous tool", "data analysis" ]
https://openreview.net/pdf?id=G0OapcfeK3g_R
https://openreview.net/forum?id=G0OapcfeK3g_R
KKA-Ef3zTjKbl
review
1,362,186,300,000
G0OapcfeK3g_R
[ "everyone" ]
[ "anonymous reviewer d723" ]
ICLR.cc/2013/conference
2013
title: review of Block Coordinate Descent for Sparse NMF review: This paper proposes new algorithms to minimize the Non-negative Matrix Factorization (NMF) reconstruction error in Frobenius norm subject to additional sparseness constraints (NMFSC) as originally proposed by [R1]. The original method from [R1] to minimiz...
aQZtOGDyp-Ozh
Learning Stable Group Invariant Representations with Convolutional Networks
[ "Joan Bruna", "Arthur Szlam", "Yann LeCun" ]
Transformation groups, such as translations or rotations, effectively express part of the variability observed in many recognition problems. The group structure enables the construction of invariant signal representations with appealing mathematical properties, where convolutions, together with pooling operators, bring...
[ "variability", "invariance group", "convolutional networks", "translations", "rotations", "express part", "many recognition problems", "group structure" ]
https://openreview.net/pdf?id=aQZtOGDyp-Ozh
https://openreview.net/forum?id=aQZtOGDyp-Ozh
s1Kr1S64z0s8a
review
1,362,379,800,000
aQZtOGDyp-Ozh
[ "everyone" ]
[ "anonymous reviewer 3316" ]
ICLR.cc/2013/conference
2013
title: review of Learning Stable Group Invariant Representations with Convolutional Networks review: This short paper presents a discussion on the nature and the type of invariances that are represented and learned by convolutional neural networks. It claims that the invariance a layer in a convolutional neural n...
aQZtOGDyp-Ozh
Learning Stable Group Invariant Representations with Convolutional Networks
[ "Joan Bruna", "Arthur Szlam", "Yann LeCun" ]
Transformation groups, such as translations or rotations, effectively express part of the variability observed in many recognition problems. The group structure enables the construction of invariant signal representations with appealing mathematical properties, where convolutions, together with pooling operators, bring...
[ "variability", "invariance group", "convolutional networks", "translations", "rotations", "express part", "many recognition problems", "group structure" ]
https://openreview.net/pdf?id=aQZtOGDyp-Ozh
https://openreview.net/forum?id=aQZtOGDyp-Ozh
uLsKzjPT0lx8V
review
1,361,928,660,000
aQZtOGDyp-Ozh
[ "everyone" ]
[ "anonymous reviewer bf60" ]
ICLR.cc/2013/conference
2013
title: review of Learning Stable Group Invariant Representations with Convolutional Networks review: I fully admit that I don't know enough about group theory to evaluate this submission. However, I do know about convolutional networks, so it is troubling that I can't understand it. Since this is only a worksho...
aQZtOGDyp-Ozh
Learning Stable Group Invariant Representations with Convolutional Networks
[ "Joan Bruna", "Arthur Szlam", "Yann LeCun" ]
Transformation groups, such as translations or rotations, effectively express part of the variability observed in many recognition problems. The group structure enables the construction of invariant signal representations with appealing mathematical properties, where convolutions, together with pooling operators, bring...
[ "variability", "invariance group", "convolutional networks", "translations", "rotations", "express part", "many recognition problems", "group structure" ]
https://openreview.net/pdf?id=aQZtOGDyp-Ozh
https://openreview.net/forum?id=aQZtOGDyp-Ozh
7XaieIunN4X1I
review
1,363,658,220,000
aQZtOGDyp-Ozh
[ "everyone" ]
[ "Joan Bruna" ]
ICLR.cc/2013/conference
2013
review: I would like to thank the reviewers for their time and constructive comments. Indeed, the paper, in its current form, explores the connection between deep convolutional networks and group invariance; but it lacks practical examples to motivate why this connection might be useful or interesting. I completely a...
6s2YsOZPYcb8N
Cutting Recursive Autoencoder Trees
[ "Christian Scheible", "Hinrich Schuetze" ]
Deep Learning models enjoy considerable success in Natural Language Processing. While deep architectures produce useful representations that lead to improvements in various tasks, they are often difficult to interpret. This makes the analysis of learned structures particularly difficult. We therefore have to rely on em...
[ "recursive autoencoder trees", "difficult", "analysis", "models", "considerable success", "natural language processing", "deep architectures", "useful representations", "improvements", "various tasks" ]
https://openreview.net/pdf?id=6s2YsOZPYcb8N
https://openreview.net/forum?id=6s2YsOZPYcb8N
KB-5ppfbu7pwL
review
1,362,170,160,000
6s2YsOZPYcb8N
[ "everyone" ]
[ "anonymous reviewer 5a71" ]
ICLR.cc/2013/conference
2013
title: review of Cutting Recursive Autoencoder Trees review: The paper considers the compositional model of Socher et al. (EMNLP 2011) for predicting sentence opinion polarity. The authors define several model simplification types (e.g., reducing the maximal number of levels) and study how these changes affect sentimen...
6s2YsOZPYcb8N
Cutting Recursive Autoencoder Trees
[ "Christian Scheible", "Hinrich Schuetze" ]
Deep Learning models enjoy considerable success in Natural Language Processing. While deep architectures produce useful representations that lead to improvements in various tasks, they are often difficult to interpret. This makes the analysis of learned structures particularly difficult. We therefore have to rely on em...
[ "recursive autoencoder trees", "difficult", "analysis", "models", "considerable success", "natural language processing", "deep architectures", "useful representations", "improvements", "various tasks" ]
https://openreview.net/pdf?id=6s2YsOZPYcb8N
https://openreview.net/forum?id=6s2YsOZPYcb8N
SPfmPG0ry9nrB
review
1,362,361,260,000
6s2YsOZPYcb8N
[ "everyone" ]
[ "anonymous reviewer 2611" ]
ICLR.cc/2013/conference
2013
title: review of Cutting Recursive Autoencoder Trees review: This research analyses the Semi-Supervised Recursive Autoencoder (RAE) of Socher et al., obtained with the NLP task of sentiment classification from sentences of movie reviews. A first qualitative analysis, conducted with the help of human annotators, reveals...
6s2YsOZPYcb8N
Cutting Recursive Autoencoder Trees
[ "Christian Scheible", "Hinrich Schuetze" ]
Deep Learning models enjoy considerable success in Natural Language Processing. While deep architectures produce useful representations that lead to improvements in various tasks, they are often difficult to interpret. This makes the analysis of learned structures particularly difficult. We therefore have to rely on em...
[ "recursive autoencoder trees", "difficult", "analysis", "models", "considerable success", "natural language processing", "deep architectures", "useful representations", "improvements", "various tasks" ]
https://openreview.net/pdf?id=6s2YsOZPYcb8N
https://openreview.net/forum?id=6s2YsOZPYcb8N
XHzDeHdtlbXIc
review
1,362,455,040,000
6s2YsOZPYcb8N
[ "everyone" ]
[ "Arun Tejasvi Chaganty" ]
ICLR.cc/2013/conference
2013
review: The paper presents a very interesting error analysis of recursive autoencoder trees. However, I would wish the following aspects of the evaluation were addressed. a) In the qualitative analysis (Section 5), only 10 samples out of a corpus of over 10,000 were studied. This is too small to make any statisti...
6s2YsOZPYcb8N
Cutting Recursive Autoencoder Trees
[ "Christian Scheible", "Hinrich Schuetze" ]
Deep Learning models enjoy considerable success in Natural Language Processing. While deep architectures produce useful representations that lead to improvements in various tasks, they are often difficult to interpret. This makes the analysis of learned structures particularly difficult. We therefore have to rely on em...
[ "recursive autoencoder trees", "difficult", "analysis", "models", "considerable success", "natural language processing", "deep architectures", "useful representations", "improvements", "various tasks" ]
https://openreview.net/pdf?id=6s2YsOZPYcb8N
https://openreview.net/forum?id=6s2YsOZPYcb8N
fvJTwf6BDQvYu
review
1,362,019,200,000
6s2YsOZPYcb8N
[ "everyone" ]
[ "anonymous reviewer 5b0f" ]
ICLR.cc/2013/conference
2013
title: review of Cutting Recursive Autoencoder Trees review: This paper analyzes recursive autoencoders for a binary sentiment analysis task. The authors include two types of analyses: looking at example trees for syntactic and semantic structure and analyzing performance when the induced tree structures are cut at v...
6s2YsOZPYcb8N
Cutting Recursive Autoencoder Trees
[ "Christian Scheible", "Hinrich Schuetze" ]
Deep Learning models enjoy considerable success in Natural Language Processing. While deep architectures produce useful representations that lead to improvements in various tasks, they are often difficult to interpret. This makes the analysis of learned structures particularly difficult. We therefore have to rely on em...
[ "recursive autoencoder trees", "difficult", "analysis", "models", "considerable success", "natural language processing", "deep architectures", "useful representations", "improvements", "various tasks" ]
https://openreview.net/pdf?id=6s2YsOZPYcb8N
https://openreview.net/forum?id=6s2YsOZPYcb8N
Od6cRb72yhb2P
review
1,363,702,380,000
6s2YsOZPYcb8N
[ "everyone" ]
[ "Christian Scheible" ]
ICLR.cc/2013/conference
2013
review: Thanks everyone for your comments! I would like to address some of the points made across various comments. I would like to point out to reviewer 'Anonymous 5b0f' that, in the experiment 'noembed', while the embeddings are not used in the classifier, they are still learned during RAE training. Thus, to trai...
6s2YsOZPYcb8N
Cutting Recursive Autoencoder Trees
[ "Christian Scheible", "Hinrich Schuetze" ]
Deep Learning models enjoy considerable success in Natural Language Processing. While deep architectures produce useful representations that lead to improvements in various tasks, they are often difficult to interpret. This makes the analysis of learned structures particularly difficult. We therefore have to rely on em...
[ "recursive autoencoder trees", "difficult", "analysis", "models", "considerable success", "natural language processing", "deep architectures", "useful representations", "improvements", "various tasks" ]
https://openreview.net/pdf?id=6s2YsOZPYcb8N
https://openreview.net/forum?id=6s2YsOZPYcb8N
9IkTIwySTQw0C
review
1,362,043,620,000
6s2YsOZPYcb8N
[ "everyone" ]
[ "Sida Wang" ]
ICLR.cc/2013/conference
2013
review: I've also done some (unpublished) analysis with using random and degenerate tree structures and found that it did not matter very much under the RAE framework. I just have a short comment for the results table. Given that most of the different schemes eventually got us roughly identical results near includin...
6s2YsOZPYcb8N
Cutting Recursive Autoencoder Trees
[ "Christian Scheible", "Hinrich Schuetze" ]
Deep Learning models enjoy considerable success in Natural Language Processing. While deep architectures produce useful representations that lead to improvements in various tasks, they are often difficult to interpret. This makes the analysis of learned structures particularly difficult. We therefore have to rely on em...
[ "recursive autoencoder trees", "difficult", "analysis", "models", "considerable success", "natural language processing", "deep architectures", "useful representations", "improvements", "various tasks" ]
https://openreview.net/pdf?id=6s2YsOZPYcb8N
https://openreview.net/forum?id=6s2YsOZPYcb8N
vDY7MvZACzMTc
review
1,362,181,620,000
6s2YsOZPYcb8N
[ "everyone" ]
[ "Sam Bowman" ]
ICLR.cc/2013/conference
2013
review: I was very impressed by some of these results—especially those for the noembed models—and this does seem to provide evidence that the high performance of RAEs on sentence-level binary sentiment classification need not reflect a breakthrough due to the use of tree structures. There were a couple of points tha...
ttxM6DQKghdOi
Discrete Restricted Boltzmann Machines
[ "Guido F. Montufar", "Jason Morton" ]
In this paper we describe discrete restricted Boltzmann machines: graphical probability models with bipartite interactions between discrete visible and hidden variables. These models generalize standard binary restricted Boltzmann machines and discrete naïve Bayes models. For a given number of visible variables and ca...
[ "boltzmann machines", "discrete", "models", "hidden variables", "number", "cardinalities", "state spaces", "probability distributions", "products", "simplices" ]
https://openreview.net/pdf?id=ttxM6DQKghdOi
https://openreview.net/forum?id=ttxM6DQKghdOi
uc6XK8UgDGKmi
review
1,363,572,060,000
ttxM6DQKghdOi
[ "everyone" ]
[ "Guido F. Montufar, Jason Morton" ]
ICLR.cc/2013/conference
2013
review: We appreciate the comments of all three reviewers. We posted a revised version of the paper to the arxiv (scheduled to be announced March 18 2013). While reviewer 1922 found the paper ``comprehensive'' and ``clearly written'', reviewers e437 and fce0 were very concerned with the presentation of the paper, d...
ttxM6DQKghdOi
Discrete Restricted Boltzmann Machines
[ "Guido F. Montufar", "Jason Morton" ]
In this paper we describe discrete restricted Boltzmann machines: graphical probability models with bipartite interactions between discrete visible and hidden variables. These models generalize standard binary restricted Boltzmann machines and discrete naïve Bayes models. For a given number of visible variables and ca...
[ "boltzmann machines", "discrete", "models", "hidden variables", "number", "cardinalities", "state spaces", "probability distributions", "products", "simplices" ]
https://openreview.net/pdf?id=ttxM6DQKghdOi
https://openreview.net/forum?id=ttxM6DQKghdOi
AAvOd8oYsZAh8
review
1,362,487,980,000
ttxM6DQKghdOi
[ "everyone" ]
[ "anonymous reviewer fce0" ]
ICLR.cc/2013/conference
2013
title: review of Discrete Restricted Boltzmann Machines review: This paper reviews properties of the Naive Bayes models and Binary RBMs before moving on to introducing discrete RBMs for which they extend universal approximation and other properties. I think such a review and extensions are extremely interesting for ...
ttxM6DQKghdOi
Discrete Restricted Boltzmann Machines
[ "Guido F. Montufar", "Jason Morton" ]
In this paper we describe discrete restricted Boltzmann machines: graphical probability models with bipartite interactions between discrete visible and hidden variables. These models generalize standard binary restricted Boltzmann machines and discrete naïve Bayes models. For a given number of visible variables and ca...
[ "boltzmann machines", "discrete", "models", "hidden variables", "number", "cardinalities", "state spaces", "probability distributions", "products", "simplices" ]
https://openreview.net/pdf?id=ttxM6DQKghdOi
https://openreview.net/forum?id=ttxM6DQKghdOi
_YRe0x39e7YBa
review
1,363,534,860,000
ttxM6DQKghdOi
[ "everyone" ]
[ "Aaron Courville" ]
ICLR.cc/2013/conference
2013
review: To the reviewers of this paper: There appears to be some disagreement about the utility of this paper's contributions to a machine learning audience. Please read over the comments of the other reviewers and submit comments as you see fit.
ttxM6DQKghdOi
Discrete Restricted Boltzmann Machines
[ "Guido F. Montufar", "Jason Morton" ]
In this paper we describe discrete restricted Boltzmann machines: graphical probability models with bipartite interactions between discrete visible and hidden variables. These models generalize standard binary restricted Boltzmann machines and discrete naïve Bayes models. For a given number of visible variables and ca...
[ "boltzmann machines", "discrete", "models", "hidden variables", "number", "cardinalities", "state spaces", "probability distributions", "products", "simplices" ]
https://openreview.net/pdf?id=ttxM6DQKghdOi
https://openreview.net/forum?id=ttxM6DQKghdOi
86Fqwo3AqRw0s
review
1,362,471,060,000
ttxM6DQKghdOi
[ "everyone" ]
[ "anonymous reviewer 1922" ]
ICLR.cc/2013/conference
2013
title: review of Discrete Restricted Boltzmann Machines review: This paper presents a comprehensive theoretical discussion on the approximation properties of discrete restricted Boltzmann machines. The paper is clearly written. It provides a contextual introduction to the theoretical results by reviewing approximation ...
ttxM6DQKghdOi
Discrete Restricted Boltzmann Machines
[ "Guido F. Montufar", "Jason Morton" ]
In this paper we describe discrete restricted Boltzmann machines: graphical probability models with bipartite interactions between discrete visible and hidden variables. These models generalize standard binary restricted Boltzmann machines and discrete naïve Bayes models. For a given number of visible variables and ca...
[ "boltzmann machines", "discrete", "models", "hidden variables", "number", "cardinalities", "state spaces", "probability distributions", "products", "simplices" ]
https://openreview.net/pdf?id=ttxM6DQKghdOi
https://openreview.net/forum?id=ttxM6DQKghdOi
gE0uE2A98H59Y
review
1,360,957,080,000
ttxM6DQKghdOi
[ "everyone" ]
[ "anonymous reviewer e437" ]
ICLR.cc/2013/conference
2013
title: review of Discrete Restricted Boltzmann Machines review: The paper provides a theoretical analysis of Restricted Boltzmann Machines with multivalued discrete units, with the emphasis on representation capacity of such models. Discrete RBMs are a special case of exponential family harmoniums introduced by Well...
jbLdjjxPd-b2l
Natural Gradient Revisited
[ "Razvan Pascanu", "Yoshua Bengio" ]
The aim of this paper is twofold. First, we intend to show that Hessian-Free optimization (Martens, 2010) and Krylov Subspace Descent (Vinyals and Povey, 2012) can be described as implementations of Natural Gradient Descent due to their use of the extended Gauss-Newton approximation of the Hessian. Secondly, we re-der...
[ "natural gradient", "aim", "first", "optimization", "martens", "subspace descent", "vinyals", "povey", "implementations" ]
https://openreview.net/pdf?id=jbLdjjxPd-b2l
https://openreview.net/forum?id=jbLdjjxPd-b2l
37JmPPz9dT39G
comment
1,363,216,920,000
uEQsuu1xiBueM
[ "everyone" ]
[ "Razvan Pascanu, Yoshua Bengio" ]
ICLR.cc/2013/conference
2013
reply: We've made drastic changes to the paper, which should be visible starting Thu, 14 Mar 2013 00:00:00 GMT. We also made the paper available at http://www-etud.iro.umontreal.ca/~pascanur/papers/ICLR_natural_gradient.pdf * Regarding the title, we have changed it to 'Revisiting Natural Gradient for Deep Networks',...
jbLdjjxPd-b2l
Natural Gradient Revisited
[ "Razvan Pascanu", "Yoshua Bengio" ]
The aim of this paper is twofold. First, we intend to show that Hessian-Free optimization (Martens, 2010) and Krylov Subspace Descent (Vinyals and Povey, 2012) can be described as implementations of Natural Gradient Descent due to their use of the extended Gauss-Newton approximation of the Hessian. Secondly, we re-der...
[ "natural gradient", "aim", "first", "optimization", "martens", "subspace descent", "vinyals", "povey", "implementations" ]
https://openreview.net/pdf?id=jbLdjjxPd-b2l
https://openreview.net/forum?id=jbLdjjxPd-b2l
uEQsuu1xiBueM
review
1,362,372,600,000
jbLdjjxPd-b2l
[ "everyone" ]
[ "anonymous reviewer 6f71" ]
ICLR.cc/2013/conference
2013
title: review of Natural Gradient Revisited review: Summary The paper reviews the concept of natural gradient, re-derives it in the context of neural network training, compares a number of natural gradient-based algorithms and discusses their differences. The paper's aims are highly relevant to the state of the fiel...