forum_id string | forum_title string | forum_authors list | forum_abstract string | forum_keywords list | forum_pdf_url string | forum_url string | note_id string | note_type string | note_created int64 | note_replyto string | note_readers list | note_signatures list | venue string | year string | note_text string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
msGKsXQXNiCBk | Learning New Facts From Knowledge Bases With Neural Tensor Networks and Semantic Word Vectors | ["Danqi Chen", "Richard Socher", "Christopher Manning", "Andrew Y. Ng"] | Knowledge bases provide applications with the benefit of easily accessible, systematic relational knowledge but often suffer in practice from their incompleteness and lack of knowledge of new entities and relations. Much work has focused on building or extending them by finding patterns in large unannotated text corpor... | ["new facts", "knowledge bases", "neural tensor networks", "semantic word vectors", "relations", "entities", "model", "database", "bases", "applications"] | https://openreview.net/pdf?id=msGKsXQXNiCBk | https://openreview.net/forum?id=msGKsXQXNiCBk | OgesTW8qZ5TWn | review | 1363419120000 | msGKsXQXNiCBk | ["everyone"] | ["Danqi Chen, Richard Socher, Christopher D. Manning, Andrew Y. Ng"] | ICLR.cc/2013/conference | 2013 | review: We thank the reviewers for their comments and agree with most of them. - We've updated our paper on arxiv, and added the important experimental comparison to the model in 'Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing' (AISTATS 2012). Experimental results show that ou... |
msGKsXQXNiCBk | Learning New Facts From Knowledge Bases With Neural Tensor Networks and Semantic Word Vectors | ["Danqi Chen", "Richard Socher", "Christopher Manning", "Andrew Y. Ng"] | Knowledge bases provide applications with the benefit of easily accessible, systematic relational knowledge but often suffer in practice from their incompleteness and lack of knowledge of new entities and relations. Much work has focused on building or extending them by finding patterns in large unannotated text corpor... | ["new facts", "knowledge bases", "neural tensor networks", "semantic word vectors", "relations", "entities", "model", "database", "bases", "applications"] | https://openreview.net/pdf?id=msGKsXQXNiCBk | https://openreview.net/forum?id=msGKsXQXNiCBk | PnfD3BSBKbnZh | review | 1362079260000 | msGKsXQXNiCBk | ["everyone"] | ["anonymous reviewer 75b8"] | ICLR.cc/2013/conference | 2013 | title: review of Learning New Facts From Knowledge Bases With Neural Tensor Networks and Semantic Word Vectors review: - A brief summary of the paper's contributions, in the context of prior work. This paper proposes a new energy function (or scoring function) for ranking pairs of entities and their relations... |
msGKsXQXNiCBk | Learning New Facts From Knowledge Bases With Neural Tensor Networks and Semantic Word Vectors | ["Danqi Chen", "Richard Socher", "Christopher Manning", "Andrew Y. Ng"] | Knowledge bases provide applications with the benefit of easily accessible, systematic relational knowledge but often suffer in practice from their incompleteness and lack of knowledge of new entities and relations. Much work has focused on building or extending them by finding patterns in large unannotated text corpor... | ["new facts", "knowledge bases", "neural tensor networks", "semantic word vectors", "relations", "entities", "model", "database", "bases", "applications"] | https://openreview.net/pdf?id=msGKsXQXNiCBk | https://openreview.net/forum?id=msGKsXQXNiCBk | yA-tyFEFr2A5u | review | 1362246000000 | msGKsXQXNiCBk | ["everyone"] | ["anonymous reviewer 7e51"] | ICLR.cc/2013/conference | 2013 | title: review of Learning New Facts From Knowledge Bases With Neural Tensor Networks and Semantic Word Vectors review: This paper proposes a new model for modeling data of multi-relational knowledge bases such as Wordnet or YAGO. Inspired by the work of (Bordes et al., AAAI11), they propose a neural network-base... |
msGKsXQXNiCBk | Learning New Facts From Knowledge Bases With Neural Tensor Networks and Semantic Word Vectors | ["Danqi Chen", "Richard Socher", "Christopher Manning", "Andrew Y. Ng"] | Knowledge bases provide applications with the benefit of easily accessible, systematic relational knowledge but often suffer in practice from their incompleteness and lack of knowledge of new entities and relations. Much work has focused on building or extending them by finding patterns in large unannotated text corpor... | ["new facts", "knowledge bases", "neural tensor networks", "semantic word vectors", "relations", "entities", "model", "database", "bases", "applications"] | https://openreview.net/pdf?id=msGKsXQXNiCBk | https://openreview.net/forum?id=msGKsXQXNiCBk | 7jyp7wrwSzagb | review | 1363419120000 | msGKsXQXNiCBk | ["everyone"] | ["Danqi Chen, Richard Socher, Christopher D. Manning, Andrew Y. Ng"] | ICLR.cc/2013/conference | 2013 | review: We thank the reviewers for their comments and agree with most of them. - We've updated our paper on arxiv, and added the important experimental comparison to the model in 'Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing' (AISTATS 2012). Experimental results show that ou... |
IpmfpAGoH2KbX | Deep learning and the renormalization group | ["Cédric Bény"] | Renormalization group methods, which analyze the way in which the effective behavior of a system depends on the scale at which it is observed, are key to modern condensed-matter theory and particle physics. The aim of this paper is to compare and contrast the ideas behind the renormalization group (RG) on the one hand ... | ["algorithm", "deep learning", "way", "effective behavior", "system", "scale", "key"] | https://openreview.net/pdf?id=IpmfpAGoH2KbX | https://openreview.net/forum?id=IpmfpAGoH2KbX | rGZJRE7IJwrK3 | review | 1392852360000 | IpmfpAGoH2KbX | ["everyone"] | ["Charles Martin"] | ICLR.cc/2013/conference | 2013 | review: It is noted that the connection between RG and multi-scale modeling has been pointed out by Candes in E. J. Candès, P. Charlton and H. Helgason. Detecting highly oscillatory signals by chirplet path pursuit. Appl. Comput. Harmon. Anal. 24 14-40. where it was noted that the multi-scale basis suggested in ... |
IpmfpAGoH2KbX | Deep learning and the renormalization group | ["Cédric Bény"] | Renormalization group methods, which analyze the way in which the effective behavior of a system depends on the scale at which it is observed, are key to modern condensed-matter theory and particle physics. The aim of this paper is to compare and contrast the ideas behind the renormalization group (RG) on the one hand ... | ["algorithm", "deep learning", "way", "effective behavior", "system", "scale", "key"] | https://openreview.net/pdf?id=IpmfpAGoH2KbX | https://openreview.net/forum?id=IpmfpAGoH2KbX | 4Uh8Uuvz86SFd | comment | 1363212060000 | 7to37S6Q3_7Qe | ["everyone"] | ["Cédric Bény"] | ICLR.cc/2013/conference | 2013 | reply: I have submitted a replacement to the arXiv on March 13, which should be available the same day at 8pm EST/EDT as version 4. In order to address the first issue, I rewrote section 2 to make it less confusing, specifically by not trying to be overly general. I also rewrote the caption of figure 1 to make it a ... |
IpmfpAGoH2KbX | Deep learning and the renormalization group | ["Cédric Bény"] | Renormalization group methods, which analyze the way in which the effective behavior of a system depends on the scale at which it is observed, are key to modern condensed-matter theory and particle physics. The aim of this paper is to compare and contrast the ideas behind the renormalization group (RG) on the one hand ... | ["algorithm", "deep learning", "way", "effective behavior", "system", "scale", "key"] | https://openreview.net/pdf?id=IpmfpAGoH2KbX | https://openreview.net/forum?id=IpmfpAGoH2KbX | 7to37S6Q3_7Qe | review | 1362321600000 | IpmfpAGoH2KbX | ["everyone"] | ["anonymous reviewer 441c"] | ICLR.cc/2013/conference | 2013 | title: review of Deep learning and the renormalization group review: The model tries to relate renormalization group and deep learning, specifically hierarchical Bayesian network. The primary problems are that 1) the paper is only descriptive - it does not explain models clearly and precisely, and 2) it has no numerica... |
IpmfpAGoH2KbX | Deep learning and the renormalization group | ["Cédric Bény"] | Renormalization group methods, which analyze the way in which the effective behavior of a system depends on the scale at which it is observed, are key to modern condensed-matter theory and particle physics. The aim of this paper is to compare and contrast the ideas behind the renormalization group (RG) on the one hand ... | ["algorithm", "deep learning", "way", "effective behavior", "system", "scale", "key"] | https://openreview.net/pdf?id=IpmfpAGoH2KbX | https://openreview.net/forum?id=IpmfpAGoH2KbX | tb0cgaJXQfgX6 | review | 1363477320000 | IpmfpAGoH2KbX | ["everyone"] | ["Aaron Courville"] | ICLR.cc/2013/conference | 2013 | review: Reviewer 441c, Have you taken a look at the new version of the paper? Does it go some way to addressing your concerns? |
IpmfpAGoH2KbX | Deep learning and the renormalization group | ["Cédric Bény"] | Renormalization group methods, which analyze the way in which the effective behavior of a system depends on the scale at which it is observed, are key to modern condensed-matter theory and particle physics. The aim of this paper is to compare and contrast the ideas behind the renormalization group (RG) on the one hand ... | ["algorithm", "deep learning", "way", "effective behavior", "system", "scale", "key"] | https://openreview.net/pdf?id=IpmfpAGoH2KbX | https://openreview.net/forum?id=IpmfpAGoH2KbX | 7Kq-KFuY-y7S_ | review | 1365121080000 | IpmfpAGoH2KbX | ["everyone"] | ["Yann LeCun"] | ICLR.cc/2013/conference | 2013 | review: It seems to me like there could be an interesting connection between approximate inference in graphical models and the renormalization methods. There is in fact a long history of interactions between condensed matter physics and graphical models. For example, it is well known that the loopy belief propagati... |
IpmfpAGoH2KbX | Deep learning and the renormalization group | ["Cédric Bény"] | Renormalization group methods, which analyze the way in which the effective behavior of a system depends on the scale at which it is observed, are key to modern condensed-matter theory and particle physics. The aim of this paper is to compare and contrast the ideas behind the renormalization group (RG) on the one hand ... | ["algorithm", "deep learning", "way", "effective behavior", "system", "scale", "key"] | https://openreview.net/pdf?id=IpmfpAGoH2KbX | https://openreview.net/forum?id=IpmfpAGoH2KbX | Qj1vSox-vpQ-U | review | 1362219360000 | IpmfpAGoH2KbX | ["everyone"] | ["anonymous reviewer acf4"] | ICLR.cc/2013/conference | 2013 | title: review of Deep learning and the renormalization group review: This paper discusses deep learning from the perspective of renormalization groups in theoretical physics. Both concepts are naturally related; however, this relation has not been formalized adequately thus far and advancing this is a novelty of the p... |
SqNvxV9FQoSk2 | Switched linear encoding with rectified linear autoencoders | ["Leif Johnson", "Craig Corcoran"] | Several recent results in machine learning have established formal connections between autoencoders---artificial neural network models that attempt to reproduce their inputs---and other coding models like sparse coding and K-means. This paper explores in depth an autoencoder model that is constructed using rectified li... | ["linear", "models", "rectified linear autoencoders", "machine learning", "formal connections", "autoencoders", "neural network models", "inputs", "sparse coding"] | https://openreview.net/pdf?id=SqNvxV9FQoSk2 | https://openreview.net/forum?id=SqNvxV9FQoSk2 | ff2dqJ6VEpR8u | review | 1362252900000 | SqNvxV9FQoSk2 | ["everyone"] | ["anonymous reviewer 5a78"] | ICLR.cc/2013/conference | 2013 | title: review of Switched linear encoding with rectified linear autoencoders review: In the deep learning community there has been a recent trend in moving away from the traditional sigmoid/tanh activation function to inject non-linearity into the model. One activation function that has been shown to work well in... |
SqNvxV9FQoSk2 | Switched linear encoding with rectified linear autoencoders | ["Leif Johnson", "Craig Corcoran"] | Several recent results in machine learning have established formal connections between autoencoders---artificial neural network models that attempt to reproduce their inputs---and other coding models like sparse coding and K-means. This paper explores in depth an autoencoder model that is constructed using rectified li... | ["linear", "models", "rectified linear autoencoders", "machine learning", "formal connections", "autoencoders", "neural network models", "inputs", "sparse coding"] | https://openreview.net/pdf?id=SqNvxV9FQoSk2 | https://openreview.net/forum?id=SqNvxV9FQoSk2 | kH1XHWcuGjDuU | review | 1361946600000 | SqNvxV9FQoSk2 | ["everyone"] | ["anonymous reviewer 9c3f"] | ICLR.cc/2013/conference | 2013 | title: review of Switched linear encoding with rectified linear autoencoders review: This paper analyzes properties of rectified linear autoencoder networks. In particular, the paper shows that rectified linear networks are similar to linear networks (ICA). The major difference is the nolinearity ('switching') t... |
SqNvxV9FQoSk2 | Switched linear encoding with rectified linear autoencoders | ["Leif Johnson", "Craig Corcoran"] | Several recent results in machine learning have established formal connections between autoencoders---artificial neural network models that attempt to reproduce their inputs---and other coding models like sparse coding and K-means. This paper explores in depth an autoencoder model that is constructed using rectified li... | ["linear", "models", "rectified linear autoencoders", "machine learning", "formal connections", "autoencoders", "neural network models", "inputs", "sparse coding"] | https://openreview.net/pdf?id=SqNvxV9FQoSk2 | https://openreview.net/forum?id=SqNvxV9FQoSk2 | oozAQe0eAnQ1w | review | 1362360840000 | SqNvxV9FQoSk2 | ["everyone"] | ["anonymous reviewer ab3b"] | ICLR.cc/2013/conference | 2013 | title: review of Switched linear encoding with rectified linear autoencoders review: The paper draws links between autoencoders with tied weights and rectified linear units (similar to Glorot et al AISTATS 2011), the triangle k-means and soft-thresholding of Coates et al. (AISTATS 2011 and ICML 2011), and the linear-au... |
DD2gbWiOgJDmY | Why Size Matters: Feature Coding as Nystrom Sampling | ["Oriol Vinyals", "Yangqing Jia", "Trevor Darrell"] | Recently, the computer vision and machine learning community has been in favor of feature extraction pipelines that rely on a coding step followed by a linear classifier, due to their overall simplicity, well understood properties of linear classifiers, and their computational efficiency. In this paper we propose a nov... | ["nystrom", "data points", "size matters", "feature", "approximation", "bounds", "function", "dictionary size", "computer vision", "machine learning community"] | https://openreview.net/pdf?id=DD2gbWiOgJDmY | https://openreview.net/forum?id=DD2gbWiOgJDmY | EW9REhyYQcESw | review | 1362202140000 | DD2gbWiOgJDmY | ["everyone"] | ["anonymous reviewer 1024"] | ICLR.cc/2013/conference | 2013 | title: review of Why Size Matters: Feature Coding as Nystrom Sampling review: The authors provide an analysis of the accuracy bounds of feature coding + linear classifier pipelines. They predict an approximate accuracy bound given the dictionary size and correctly estimate the phenomenon observed in the literature wher... |
DD2gbWiOgJDmY | Why Size Matters: Feature Coding as Nystrom Sampling | ["Oriol Vinyals", "Yangqing Jia", "Trevor Darrell"] | Recently, the computer vision and machine learning community has been in favor of feature extraction pipelines that rely on a coding step followed by a linear classifier, due to their overall simplicity, well understood properties of linear classifiers, and their computational efficiency. In this paper we propose a nov... | ["nystrom", "data points", "size matters", "feature", "approximation", "bounds", "function", "dictionary size", "computer vision", "machine learning community"] | https://openreview.net/pdf?id=DD2gbWiOgJDmY | https://openreview.net/forum?id=DD2gbWiOgJDmY | oxSZoe2BGRoB6 | review | 1362196320000 | DD2gbWiOgJDmY | ["everyone"] | ["anonymous reviewer 998c"] | ICLR.cc/2013/conference | 2013 | title: review of Why Size Matters: Feature Coding as Nystrom Sampling review: This paper presents a theoretical analysis and empirical validation of a novel view of feature extraction systems based on the idea of Nystrom sampling for kernel methods. The main idea is to analyze the kernel matrix for a feature space def... |
DD2gbWiOgJDmY | Why Size Matters: Feature Coding as Nystrom Sampling | ["Oriol Vinyals", "Yangqing Jia", "Trevor Darrell"] | Recently, the computer vision and machine learning community has been in favor of feature extraction pipelines that rely on a coding step followed by a linear classifier, due to their overall simplicity, well understood properties of linear classifiers, and their computational efficiency. In this paper we propose a nov... | ["nystrom", "data points", "size matters", "feature", "approximation", "bounds", "function", "dictionary size", "computer vision", "machine learning community"] | https://openreview.net/pdf?id=DD2gbWiOgJDmY | https://openreview.net/forum?id=DD2gbWiOgJDmY | 8sJwMe5ZwE8uz | review | 1363264440000 | DD2gbWiOgJDmY | ["everyone"] | ["Oriol Vinyals, Yangqing Jia, Trevor Darrell"] | ICLR.cc/2013/conference | 2013 | review: We agree with the reviewer regarding the existence of better dictionary learning methods, and note that many of these are also related to corresponding advanced Nystrom sampling methods, such as [Zhang et al. Improved Nystrom low-rank approximation and error analysis. ICML 08]. These methods could improve perfo... |
i87JIQTAnB8AQ | The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization | ["Hugo Van hamme"] | Non-negative matrix factorization (NMF) has become a popular machine learning approach to many problems in text mining, speech and image processing, bio-informatics and seismic data analysis to name a few. In NMF, a matrix of non-negative data is approximated by the low-rank product of two matrices with non-negative en... | ["diagonalized newton algorithm", "nmf", "nonnegative matrix factorization", "data", "convergence", "matrix factorization", "popular machine", "many problems", "text mining"] | https://openreview.net/pdf?id=i87JIQTAnB8AQ | https://openreview.net/forum?id=i87JIQTAnB8AQ | RzSh7m1KhlzKg | review | 1363574460000 | i87JIQTAnB8AQ | ["everyone"] | ["Hugo Van hamme"] | ICLR.cc/2013/conference | 2013 | review: I would like to thank the reviewers for their investment of time and effort to formulate their valued comments. The paper was updated according to your comments. Below I address your concerns: A common remark is the lack of comparison with state-of-the-art NMF solvers for Kullback-Leibler divergence (KLD). I... |
i87JIQTAnB8AQ | The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization | ["Hugo Van hamme"] | Non-negative matrix factorization (NMF) has become a popular machine learning approach to many problems in text mining, speech and image processing, bio-informatics and seismic data analysis to name a few. In NMF, a matrix of non-negative data is approximated by the low-rank product of two matrices with non-negative en... | ["diagonalized newton algorithm", "nmf", "nonnegative matrix factorization", "data", "convergence", "matrix factorization", "popular machine", "many problems", "text mining"] | https://openreview.net/pdf?id=i87JIQTAnB8AQ | https://openreview.net/forum?id=i87JIQTAnB8AQ | FFkZF49pZx-pS | review | 1362210360000 | i87JIQTAnB8AQ | ["everyone"] | ["anonymous reviewer 4322"] | ICLR.cc/2013/conference | 2013 | title: review of The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization review: Summary: The paper presents a new algorithm for solving L1 regularized NMF problems in which the fitting term is the Kullback-Leiber divergence. The strategy combines the classic multiplicative updates with a diagonal app... |
i87JIQTAnB8AQ | The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization | ["Hugo Van hamme"] | Non-negative matrix factorization (NMF) has become a popular machine learning approach to many problems in text mining, speech and image processing, bio-informatics and seismic data analysis to name a few. In NMF, a matrix of non-negative data is approximated by the low-rank product of two matrices with non-negative en... | ["diagonalized newton algorithm", "nmf", "nonnegative matrix factorization", "data", "convergence", "matrix factorization", "popular machine", "many problems", "text mining"] | https://openreview.net/pdf?id=i87JIQTAnB8AQ | https://openreview.net/forum?id=i87JIQTAnB8AQ | MqwZf2jPZCJ-n | review | 1363744920000 | i87JIQTAnB8AQ | ["everyone"] | ["Hugo Van hamme"] | ICLR.cc/2013/conference | 2013 | review: First: sorry for the multiple postings. Browser acting weird. Can't remove them ... Update: I was able to get the sbcd code to work. Two mods required (refer to Algorithm 1 in the Li, Lebanon & Park paper - ref [18] in v2 paper on arxiv): 1) you have to be careful with initialization. If the estimates for W... |
i87JIQTAnB8AQ | The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization | ["Hugo Van hamme"] | Non-negative matrix factorization (NMF) has become a popular machine learning approach to many problems in text mining, speech and image processing, bio-informatics and seismic data analysis to name a few. In NMF, a matrix of non-negative data is approximated by the low-rank product of two matrices with non-negative en... | ["diagonalized newton algorithm", "nmf", "nonnegative matrix factorization", "data", "convergence", "matrix factorization", "popular machine", "many problems", "text mining"] | https://openreview.net/pdf?id=i87JIQTAnB8AQ | https://openreview.net/forum?id=i87JIQTAnB8AQ | oo1KoBhzu3CGs | review | 1362192540000 | i87JIQTAnB8AQ | ["everyone"] | ["anonymous reviewer 57f3"] | ICLR.cc/2013/conference | 2013 | title: review of The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization review: This paper develops a new iterative optimization algorithm for performing non-negative matrix factorization, assuming a standard 'KL-divergence' objective function. The method proposed combines the use of a traditional upda... |
i87JIQTAnB8AQ | The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization | ["Hugo Van hamme"] | Non-negative matrix factorization (NMF) has become a popular machine learning approach to many problems in text mining, speech and image processing, bio-informatics and seismic data analysis to name a few. In NMF, a matrix of non-negative data is approximated by the low-rank product of two matrices with non-negative en... | ["diagonalized newton algorithm", "nmf", "nonnegative matrix factorization", "data", "convergence", "matrix factorization", "popular machine", "many problems", "text mining"] | https://openreview.net/pdf?id=i87JIQTAnB8AQ | https://openreview.net/forum?id=i87JIQTAnB8AQ | aplzZcXNokptc | review | 1363615980000 | i87JIQTAnB8AQ | ["everyone"] | ["Hugo Van hamme"] | ICLR.cc/2013/conference | 2013 | review: About the comparison with Cyclic Coordinate Descent (as described in C.-J. Hsieh and I. S. Dhillon, “Fast Coordinate Descent Methods with Variable Selection for Non-negative Matrix Factorization,” in proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD), San Dieg... |
i87JIQTAnB8AQ | The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization | ["Hugo Van hamme"] | Non-negative matrix factorization (NMF) has become a popular machine learning approach to many problems in text mining, speech and image processing, bio-informatics and seismic data analysis to name a few. In NMF, a matrix of non-negative data is approximated by the low-rank product of two matrices with non-negative en... | ["diagonalized newton algorithm", "nmf", "nonnegative matrix factorization", "data", "convergence", "matrix factorization", "popular machine", "many problems", "text mining"] | https://openreview.net/pdf?id=i87JIQTAnB8AQ | https://openreview.net/forum?id=i87JIQTAnB8AQ | EW5mE9upmnWp1 | review | 1362382860000 | i87JIQTAnB8AQ | ["everyone"] | ["anonymous reviewer 482c"] | ICLR.cc/2013/conference | 2013 | title: review of The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization review: Overview: This paper proposes an element-wise (diagonal Hessian) Newton method to speed up convergence of the multiplicative update algorithm (MU) for NMF problems. Monotonic progress is guaranteed by an element-wise fall... |
qEV_E7oCrKqWT | Zero-Shot Learning Through Cross-Modal Transfer | ["Richard Socher", "Milind Ganjoo", "Hamsa Sridhar", "Osbert Bastani", "Christopher Manning", "Andrew Y. Ng"] | This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semant... | ["model", "transfer", "objects", "images", "unseen classes", "work", "training data", "available", "necessary knowledge", "unseen categories"] | https://openreview.net/pdf?id=qEV_E7oCrKqWT | https://openreview.net/forum?id=qEV_E7oCrKqWT | UgMKgxnHDugHr | review | 1362080640000 | qEV_E7oCrKqWT | ["everyone"] | ["anonymous reviewer cfb0"] | ICLR.cc/2013/conference | 2013 | title: review of Zero-Shot Learning Through Cross-Modal Transfer review: *A brief summary of the paper's contributions, in the context of prior work* This paper introduces a zero-shot learning approach to image classification. The model first tries to detect whether an image contains an object from a so-far unseen cat... |
qEV_E7oCrKqWT | Zero-Shot Learning Through Cross-Modal Transfer | ["Richard Socher", "Milind Ganjoo", "Hamsa Sridhar", "Osbert Bastani", "Christopher Manning", "Andrew Y. Ng"] | This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semant... | ["model", "transfer", "objects", "images", "unseen classes", "work", "training data", "available", "necessary knowledge", "unseen categories"] | https://openreview.net/pdf?id=qEV_E7oCrKqWT | https://openreview.net/forum?id=qEV_E7oCrKqWT | 88s34zXWw20My | review | 1362001800000 | qEV_E7oCrKqWT | ["everyone"] | ["anonymous reviewer 310e"] | ICLR.cc/2013/conference | 2013 | title: review of Zero-Shot Learning Through Cross-Modal Transfer review: summary: the paper presents a framework to learn to classify images that can come either from known or unknown classes. This is done by first mapping both images and classes into a joint embedding space. Furthermore, the probability of an image... |
qEV_E7oCrKqWT | Zero-Shot Learning Through Cross-Modal Transfer | ["Richard Socher", "Milind Ganjoo", "Hamsa Sridhar", "Osbert Bastani", "Christopher Manning", "Andrew Y. Ng"] | This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semant... | ["model", "transfer", "objects", "images", "unseen classes", "work", "training data", "available", "necessary knowledge", "unseen categories"] | https://openreview.net/pdf?id=qEV_E7oCrKqWT | https://openreview.net/forum?id=qEV_E7oCrKqWT | ddIxYp60xFd0m | review | 1363754820000 | qEV_E7oCrKqWT | ["everyone"] | ["Richard Socher"] | ICLR.cc/2013/conference | 2013 | review: We thank the reviewers for their feedback. I have not seen references to similarity learning, which can be used to say if two images are of the same class. These can obviously be used to determine if an image is of a known class or not, without having seen any image of the class. - Thanks for the reference... |
qEV_E7oCrKqWT | Zero-Shot Learning Through Cross-Modal Transfer | ["Richard Socher", "Milind Ganjoo", "Hamsa Sridhar", "Osbert Bastani", "Christopher Manning", "Andrew Y. Ng"] | This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semant... | ["model", "transfer", "objects", "images", "unseen classes", "work", "training data", "available", "necessary knowledge", "unseen categories"] | https://openreview.net/pdf?id=qEV_E7oCrKqWT | https://openreview.net/forum?id=qEV_E7oCrKqWT | SSiPd5Rr9bdXm | review | 1363754760000 | qEV_E7oCrKqWT | ["everyone"] | ["Richard Socher"] | ICLR.cc/2013/conference | 2013 | review: We thank the reviewers for their feedback. I have not seen references to similarity learning, which can be used to say if two images are of the same class. These can obviously be used to determine if an image is of a known class or not, without having seen any image of the class. - Thanks for the reference... |
ZhGJ9KQlXi9jk | Complexity of Representation and Inference in Compositional Models with Part Sharing | ["Alan Yuille", "Roozbeh Mottaghi"] | This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary descriptio... | ["inference", "complexity", "part", "representation", "compositional models", "objects", "terms", "serial computers", "parallel computers", "level"] | https://openreview.net/pdf?id=ZhGJ9KQlXi9jk | https://openreview.net/forum?id=ZhGJ9KQlXi9jk | eG1mGYviVwE-r | comment | 1363730760000 | Av10rQ9sBlhsf | ["everyone"] | ["Alan L. Yuille, Roozbeh Mottaghi"] | ICLR.cc/2013/conference | 2013 | reply: Okay, thanks. We understand your viewpoint. |
ZhGJ9KQlXi9jk | Complexity of Representation and Inference in Compositional Models with Part Sharing | ["Alan Yuille", "Roozbeh Mottaghi"] | This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary descriptio... | ["inference", "complexity", "part", "representation", "compositional models", "objects", "terms", "serial computers", "parallel computers", "level"] | https://openreview.net/pdf?id=ZhGJ9KQlXi9jk | https://openreview.net/forum?id=ZhGJ9KQlXi9jk | EHF-pZ3qwbnAT | review | 1362609900000 | ZhGJ9KQlXi9jk | ["everyone"] | ["anonymous reviewer a9e8"] | ICLR.cc/2013/conference | 2013 | title: review of Complexity of Representation and Inference in Compositional Models with Part Sharing review: This paper explores how inference can be done in a part-sharing model and the computational cost of doing so. It relies on 'executive summaries' where each layer only holds approximate information about th... |
ZhGJ9KQlXi9jk | Complexity of Representation and Inference in Compositional Models with
Part Sharing | [
"Alan Yuille",
"Roozbeh Mottaghi"
] | This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary descriptio... | [
"inference",
"complexity",
"part",
"representation",
"compositional models",
"objects",
"terms",
"serial computers",
"parallel computers",
"level"
] | https://openreview.net/pdf?id=ZhGJ9KQlXi9jk | https://openreview.net/forum?id=ZhGJ9KQlXi9jk | sPw_squDz1sCV | review | 1,363,536,060,000 | ZhGJ9KQlXi9jk | [
"everyone"
] | [
"Aaron Courville"
] | ICLR.cc/2013/conference | 2013 | review: Reviewer c1e8,
Please read the authors' responses to your review. Do they change your evaluation of the paper? |
ZhGJ9KQlXi9jk | Complexity of Representation and Inference in Compositional Models with
Part Sharing | [
"Alan Yuille",
"Roozbeh Mottaghi"
] | This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary descriptio... | [
"inference",
"complexity",
"part",
"representation",
"compositional models",
"objects",
"terms",
"serial computers",
"parallel computers",
"level"
] | https://openreview.net/pdf?id=ZhGJ9KQlXi9jk | https://openreview.net/forum?id=ZhGJ9KQlXi9jk | Rny5iXEwhGnYN | comment | 1,362,095,760,000 | p7BE8U1NHl8Tr | [
"everyone"
] | [
"Alan L. Yuille, Roozbeh Mottaghi"
] | ICLR.cc/2013/conference | 2013 | reply: The unsupervised learning will also appear at ICLR. So we didn't describe it in this paper and concentrated instead on the advantages of compositional models for search after the learning has been done.
The reviewer says that this result is not very novel and mentions analogies to complexity gain of large con... |
ZhGJ9KQlXi9jk | Complexity of Representation and Inference in Compositional Models with
Part Sharing | [
"Alan Yuille",
"Roozbeh Mottaghi"
] | This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary descriptio... | [
"inference",
"complexity",
"part",
"representation",
"compositional models",
"objects",
"terms",
"serial computers",
"parallel computers",
"level"
] | https://openreview.net/pdf?id=ZhGJ9KQlXi9jk | https://openreview.net/forum?id=ZhGJ9KQlXi9jk | O3uWBm_J8IOlG | comment | 1,363,731,300,000 | EHF-pZ3qwbnAT | [
"everyone"
] | [
"Alan L. Yuille, Roozbeh Mottaghi"
] | ICLR.cc/2013/conference | 2013 | reply: Thanks for your comments. The paper is indeed conjectural, which is why we are submitting it to this new type of conference. But we have some proof of concept from some of our earlier work -- and we are working on developing real-world models using these types of ideas. |
ZhGJ9KQlXi9jk | Complexity of Representation and Inference in Compositional Models with
Part Sharing | [
"Alan Yuille",
"Roozbeh Mottaghi"
] | This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary descriptio... | [
"inference",
"complexity",
"part",
"representation",
"compositional models",
"objects",
"terms",
"serial computers",
"parallel computers",
"level"
] | https://openreview.net/pdf?id=ZhGJ9KQlXi9jk | https://openreview.net/forum?id=ZhGJ9KQlXi9jk | Av10rQ9sBlhsf | comment | 1,363,643,940,000 | Rny5iXEwhGnYN | [
"everyone"
] | [
"anonymous reviewer c1e8"
] | ICLR.cc/2013/conference | 2013 | reply: Sorry: I should have written 'although I do not see it as very surprising' instead of 'novel'.
The analogy with convolutional networks is that quantities computed by low-level nodes can be shared by several high level nodes. This is trivial in the case of conv. nets, and not trivial in your case because you h... |
ZhGJ9KQlXi9jk | Complexity of Representation and Inference in Compositional Models with
Part Sharing | [
"Alan Yuille",
"Roozbeh Mottaghi"
] | This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary descriptio... | [
"inference",
"complexity",
"part",
"representation",
"compositional models",
"objects",
"terms",
"serial computers",
"parallel computers",
"level"
] | https://openreview.net/pdf?id=ZhGJ9KQlXi9jk | https://openreview.net/forum?id=ZhGJ9KQlXi9jk | oCzZPts6ZYo6d | review | 1,362,211,680,000 | ZhGJ9KQlXi9jk | [
"everyone"
] | [
"anonymous reviewer 915e"
] | ICLR.cc/2013/conference | 2013 | title: review of Complexity of Representation and Inference in Compositional Models with
Part Sharing
review: This paper presents a complexity analysis of certain inference algorithms for compositional models of images based on part sharing.
The intuition behind these models is that objects are composed of parts... |
ZhGJ9KQlXi9jk | Complexity of Representation and Inference in Compositional Models with
Part Sharing | [
"Alan Yuille",
"Roozbeh Mottaghi"
] | This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary descriptio... | [
"inference",
"complexity",
"part",
"representation",
"compositional models",
"objects",
"terms",
"serial computers",
"parallel computers",
"level"
] | https://openreview.net/pdf?id=ZhGJ9KQlXi9jk | https://openreview.net/forum?id=ZhGJ9KQlXi9jk | p7BE8U1NHl8Tr | review | 1,361,997,540,000 | ZhGJ9KQlXi9jk | [
"everyone"
] | [
"anonymous reviewer c1e8"
] | ICLR.cc/2013/conference | 2013 | title: review of Complexity of Representation and Inference in Compositional Models with
Part Sharing
review: The paper describes a compositional object model that takes the form of a hierarchical generative model. Both object and part models provide (1) a set of part models, and (2) a generative model essentially... |
ZhGJ9KQlXi9jk | Complexity of Representation and Inference in Compositional Models with
Part Sharing | [
"Alan Yuille",
"Roozbeh Mottaghi"
] | This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary descriptio... | [
"inference",
"complexity",
"part",
"representation",
"compositional models",
"objects",
"terms",
"serial computers",
"parallel computers",
"level"
] | https://openreview.net/pdf?id=ZhGJ9KQlXi9jk | https://openreview.net/forum?id=ZhGJ9KQlXi9jk | zV1YApahdwAIu | comment | 1,362,352,080,000 | oCzZPts6ZYo6d | [
"everyone"
] | [
"Alan L. Yuille, Roozbeh Mottaghi"
] | ICLR.cc/2013/conference | 2013 | reply: We hadn't thought of renormalization or image compression. But renormalization does deal with scale (I think B. Gidas had some papers on this in the 90's). There probably is a relation to image compression which we should explore. |
ttnAE7vaATtaK | Indoor Semantic Segmentation using depth information | [
"Camille Couprie",
"Clement Farabet",
"Laurent Najman",
"Yann LeCun"
] | This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. ... | [
"depth information",
"indoor scenes",
"features",
"indoor semantic segmentation",
"work",
"segmentation",
"inputs",
"area",
"research"
] | https://openreview.net/pdf?id=ttnAE7vaATtaK | https://openreview.net/forum?id=ttnAE7vaATtaK | qO9gWZZ1gfqhl | review | 1,362,163,380,000 | ttnAE7vaATtaK | [
"everyone"
] | [
"anonymous reviewer 777f"
] | ICLR.cc/2013/conference | 2013 | title: review of Indoor Semantic Segmentation using depth information
review: Segmentation with multi-scale max pooling CNN, applied to indoor vision, using depth information. Interesting paper! Fine results.
Question: how does that compare to multi-scale max pooling CNN for a previous award-winning application, nam... |
ttnAE7vaATtaK | Indoor Semantic Segmentation using depth information | [
"Camille Couprie",
"Clement Farabet",
"Laurent Najman",
"Yann LeCun"
] | This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. ... | [
"depth information",
"indoor scenes",
"features",
"indoor semantic segmentation",
"work",
"segmentation",
"inputs",
"area",
"research"
] | https://openreview.net/pdf?id=ttnAE7vaATtaK | https://openreview.net/forum?id=ttnAE7vaATtaK | tG4Zt9xaZ8G5D | comment | 1,363,298,100,000 | Ub0AUfEOKkRO1 | [
"everyone"
] | [
"Camille Couprie"
] | ICLR.cc/2013/conference | 2013 | reply: Thank you for your review and helpful comments. We computed and added error bars as suggested in Table 1. However, computing standard deviation for the individual means per class of objects does not apply here: the per-class accuracies are not computed image by image. Each number corresponds to a ratio of the ... |
ttnAE7vaATtaK | Indoor Semantic Segmentation using depth information | [
"Camille Couprie",
"Clement Farabet",
"Laurent Najman",
"Yann LeCun"
] | This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. ... | [
"depth information",
"indoor scenes",
"features",
"indoor semantic segmentation",
"work",
"segmentation",
"inputs",
"area",
"research"
] | https://openreview.net/pdf?id=ttnAE7vaATtaK | https://openreview.net/forum?id=ttnAE7vaATtaK | OOB_F66xrPKGA | comment | 1,363,297,980,000 | 2-VeRGGdvD-58 | [
"everyone"
] | [
"Camille Couprie"
] | ICLR.cc/2013/conference | 2013 | reply: Thank you for your review and helpful comments.
The missing values in the depth acquisition were pre-processed using inpainting code available online on Nathan Silberman’s web page. We added the reference to the paper.
In the paper, we made the observation that the classes for which depth fails to outperform ... |
ttnAE7vaATtaK | Indoor Semantic Segmentation using depth information | [
"Camille Couprie",
"Clement Farabet",
"Laurent Najman",
"Yann LeCun"
] | This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. ... | [
"depth information",
"indoor scenes",
"features",
"indoor semantic segmentation",
"work",
"segmentation",
"inputs",
"area",
"research"
] | https://openreview.net/pdf?id=ttnAE7vaATtaK | https://openreview.net/forum?id=ttnAE7vaATtaK | Ub0AUfEOKkRO1 | review | 1,362,368,040,000 | ttnAE7vaATtaK | [
"everyone"
] | [
"anonymous reviewer 5193"
] | ICLR.cc/2013/conference | 2013 | title: review of Indoor Semantic Segmentation using depth information
review: This work builds on recent object-segmentation work by Farabet et al., by augmenting the pixel-processing pathways with ones that processes a depth map from a Kinect RGBD camera. This work seems to me a well-motivated and natural extension no... |
ttnAE7vaATtaK | Indoor Semantic Segmentation using depth information | [
"Camille Couprie",
"Clement Farabet",
"Laurent Najman",
"Yann LeCun"
] | This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. ... | [
"depth information",
"indoor scenes",
"features",
"indoor semantic segmentation",
"work",
"segmentation",
"inputs",
"area",
"research"
] | https://openreview.net/pdf?id=ttnAE7vaATtaK | https://openreview.net/forum?id=ttnAE7vaATtaK | VVbCVyTLqczWn | comment | 1,363,297,440,000 | qO9gWZZ1gfqhl | [
"everyone"
] | [
"Camille Couprie"
] | ICLR.cc/2013/conference | 2013 | reply: Thank you for your review and for pointing out the paper of Ciresan et al., which we added to our list of references. Similarly to us, they apply the idea of using a kind of multi-scale network. However, Ciresan's approach to foveation differs from ours: where we use a multiscale pyramid to provide a foveated input t... |
ttnAE7vaATtaK | Indoor Semantic Segmentation using depth information | [
"Camille Couprie",
"Clement Farabet",
"Laurent Najman",
"Yann LeCun"
] | This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. ... | [
"depth information",
"indoor scenes",
"features",
"indoor semantic segmentation",
"work",
"segmentation",
"inputs",
"area",
"research"
] | https://openreview.net/pdf?id=ttnAE7vaATtaK | https://openreview.net/forum?id=ttnAE7vaATtaK | 2-VeRGGdvD-58 | review | 1,362,213,660,000 | ttnAE7vaATtaK | [
"everyone"
] | [
"anonymous reviewer 03ba"
] | ICLR.cc/2013/conference | 2013 | title: review of Indoor Semantic Segmentation using depth information
review: This work applies convolutional neural networks to the task of RGB-D indoor scene segmentation. The authors previously evaluated the same multi-scale conv net architecture on the data using only RGB information; this work demonstrates that fo... |
OpvgONa-3WODz | Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines | [
"Guillaume Desjardins",
"Razvan Pascanu",
"Aaron Courville",
"Yoshua Bengio"
] | This paper introduces the Metric-Free Natural Gradient (MFNG) algorithm for training Boltzmann Machines. Similar in spirit to the Hessian-Free method of Martens [8], our algorithm belongs to the family of truncated Newton methods and exploits an efficient matrix-vector product to avoid explicitely storing the natural g... | [
"natural gradient",
"boltzmann machines",
"mfng",
"algorithm",
"similar",
"spirit",
"martens",
"algorithm belongs",
"family",
"truncated newton methods"
] | https://openreview.net/pdf?id=OpvgONa-3WODz | https://openreview.net/forum?id=OpvgONa-3WODz | LkyqLtotdQLG4 | review | 1,362,012,600,000 | OpvgONa-3WODz | [
"everyone"
] | [
"anonymous reviewer 9212"
] | ICLR.cc/2013/conference | 2013 | title: review of Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines
review: The paper describes a Natural Gradient technique to train Boltzmann machines. This is essentially the approach of Amari et al. (1992), in which the Fisher information matrix is used; the authors estimate the Fisher in... |
OpvgONa-3WODz | Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines | [
"Guillaume Desjardins",
"Razvan Pascanu",
"Aaron Courville",
"Yoshua Bengio"
] | This paper introduces the Metric-Free Natural Gradient (MFNG) algorithm for training Boltzmann Machines. Similar in spirit to the Hessian-Free method of Martens [8], our algorithm belongs to the family of truncated Newton methods and exploits an efficient matrix-vector product to avoid explicitely storing the natural g... | [
"natural gradient",
"boltzmann machines",
"mfng",
"algorithm",
"similar",
"spirit",
"martens",
"algorithm belongs",
"family",
"truncated newton methods"
] | https://openreview.net/pdf?id=OpvgONa-3WODz | https://openreview.net/forum?id=OpvgONa-3WODz | o5qvoxIkjTokQ | review | 1,362,294,960,000 | OpvgONa-3WODz | [
"everyone"
] | [
"anonymous reviewer 7e2e"
] | ICLR.cc/2013/conference | 2013 | title: review of Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines
review: This paper presents a natural gradient algorithm for deep Boltzmann machines. The authors must be commended for their extremely clear and succinct description of the natural gradient method in Section 2. This presentation is ... |
OpvgONa-3WODz | Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines | [
"Guillaume Desjardins",
"Razvan Pascanu",
"Aaron Courville",
"Yoshua Bengio"
] | This paper introduces the Metric-Free Natural Gradient (MFNG) algorithm for training Boltzmann Machines. Similar in spirit to the Hessian-Free method of Martens [8], our algorithm belongs to the family of truncated Newton methods and exploits an efficient matrix-vector product to avoid explicitely storing the natural g... | [
"natural gradient",
"boltzmann machines",
"mfng",
"algorithm",
"similar",
"spirit",
"martens",
"algorithm belongs",
"family",
"truncated newton methods"
] | https://openreview.net/pdf?id=OpvgONa-3WODz | https://openreview.net/forum?id=OpvgONa-3WODz | dt6KtywBaEvBC | review | 1,362,379,800,000 | OpvgONa-3WODz | [
"everyone"
] | [
"anonymous reviewer 77a7"
] | ICLR.cc/2013/conference | 2013 | title: review of Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines
review: This paper introduces a new gradient descent algorithm that is based on Hessian-free optimization, but replaces the approximate Hessian-vector product with an approximate Fisher information matrix-vector product. It is... |
OpvgONa-3WODz | Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines | [
"Guillaume Desjardins",
"Razvan Pascanu",
"Aaron Courville",
"Yoshua Bengio"
] | This paper introduces the Metric-Free Natural Gradient (MFNG) algorithm for training Boltzmann Machines. Similar in spirit to the Hessian-Free method of Martens [8], our algorithm belongs to the family of truncated Newton methods and exploits an efficient matrix-vector product to avoid explicitely storing the natural g... | [
"natural gradient",
"boltzmann machines",
"mfng",
"algorithm",
"similar",
"spirit",
"martens",
"algorithm belongs",
"family",
"truncated newton methods"
] | https://openreview.net/pdf?id=OpvgONa-3WODz | https://openreview.net/forum?id=OpvgONa-3WODz | pC-4pGPkfMnuQ | review | 1,363,459,200,000 | OpvgONa-3WODz | [
"everyone"
] | [
"Guillaume Desjardins, Razvan Pascanu, Aaron Courville, Yoshua Bengio"
] | ICLR.cc/2013/conference | 2013 | review: Thank you to the reviewers for the helpful feedback. The provided references will no doubt come in handy for future work.
To all reviewers: In an effort to speed up run time, we have re-implemented a significant portion of the MFNG algorithm. This resulted in large speedups for the diagonal approximation of MF... |
yyC_7RZTkUD5- | Deep Predictive Coding Networks | [
"Rakesh Chalasani",
"Jose C. Principe"
] | The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative mo... | [
"model",
"networks",
"priors",
"deep predictive",
"predictive",
"quality",
"data representation",
"deep learning methods",
"prior model",
"representations"
] | https://openreview.net/pdf?id=yyC_7RZTkUD5- | https://openreview.net/forum?id=yyC_7RZTkUD5- | d6u7vbCNJV6Q8 | review | 1,361,968,020,000 | yyC_7RZTkUD5- | [
"everyone"
] | [
"anonymous reviewer ac47"
] | ICLR.cc/2013/conference | 2013 | title: review of Deep Predictive Coding Networks
review: Deep predictive coding networks
This paper introduces a new model which combines bottom-up, top-down, and temporal information to learn a generative model in an unsupervised fashion on videos. The model is formulated in terms of states, which carry temporal... |
yyC_7RZTkUD5- | Deep Predictive Coding Networks | [
"Rakesh Chalasani",
"Jose C. Principe"
] | The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative mo... | [
"model",
"networks",
"priors",
"deep predictive",
"predictive",
"quality",
"data representation",
"deep learning methods",
"prior model",
"representations"
] | https://openreview.net/pdf?id=yyC_7RZTkUD5- | https://openreview.net/forum?id=yyC_7RZTkUD5- | Xu4KaWxqIDurf | review | 1,363,393,200,000 | yyC_7RZTkUD5- | [
"everyone"
] | [
"Rakesh Chalasani, Jose C. Principe"
] | ICLR.cc/2013/conference | 2013 | review: The revised paper is uploaded onto arXiv. It will be announced on 18th March.
In the meantime, the paper is also made available at
https://www.dropbox.com/s/klmpu482q6nt1ws/DPCN.pdf |
yyC_7RZTkUD5- | Deep Predictive Coding Networks | [
"Rakesh Chalasani",
"Jose C. Principe"
] | The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative mo... | [
"model",
"networks",
"priors",
"deep predictive",
"predictive",
"quality",
"data representation",
"deep learning methods",
"prior model",
"representations"
] | https://openreview.net/pdf?id=yyC_7RZTkUD5- | https://openreview.net/forum?id=yyC_7RZTkUD5- | 00ZvUXp_e10_E | comment | 1,363,392,660,000 | EEhwkCLtAuko7 | [
"everyone"
] | [
"Rakesh Chalasani, Jose C. Principe"
] | ICLR.cc/2013/conference | 2013 | reply: Thank you for your review and comments, particularly for pointing out some mistakes in the paper. Following is our response to some concerns you have raised.
>>> 'You should state the functional form for F and G!! Working backwards from the energy function, it looks as if these are just linear functions?'
... |
yyC_7RZTkUD5- | Deep Predictive Coding Networks | [
"Rakesh Chalasani",
"Jose C. Principe"
] | The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative mo... | [
"model",
"networks",
"priors",
"deep predictive",
"predictive",
"quality",
"data representation",
"deep learning methods",
"prior model",
"representations"
] | https://openreview.net/pdf?id=yyC_7RZTkUD5- | https://openreview.net/forum?id=yyC_7RZTkUD5- | iiUe8HAsepist | comment | 1,363,392,180,000 | d6u7vbCNJV6Q8 | [
"everyone"
] | [
"Rakesh Chalasani, Jose C. Principe"
] | ICLR.cc/2013/conference | 2013 | reply: Thank you for your review and comments. We revised the paper to address most of your concerns. Following is our response to some specific points you have raised.
>>> 'The explanation of the model was overly complicated. After reading the entire explanation it appears the model is simply doing sparse coding... |
yyC_7RZTkUD5- | Deep Predictive Coding Networks | [
"Rakesh Chalasani",
"Jose C. Principe"
] | The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative mo... | [
"model",
"networks",
"priors",
"deep predictive",
"predictive",
"quality",
"data representation",
"deep learning methods",
"prior model",
"representations"
] | https://openreview.net/pdf?id=yyC_7RZTkUD5- | https://openreview.net/forum?id=yyC_7RZTkUD5- | EEhwkCLtAuko7 | review | 1,362,405,300,000 | yyC_7RZTkUD5- | [
"everyone"
] | [
"anonymous reviewer 62ac"
] | ICLR.cc/2013/conference | 2013 | title: review of Deep Predictive Coding Networks
review: This paper attempts to capture both the temporal dynamics of signals and the contribution of top down connections for inference using a deep model. The experimental results are qualitatively encouraging, and the model structure seems like a sensible direction to... |
yyC_7RZTkUD5- | Deep Predictive Coding Networks | [
"Rakesh Chalasani",
"Jose C. Principe"
] | The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative mo... | [
"model",
"networks",
"priors",
"deep predictive",
"predictive",
"quality",
"data representation",
"deep learning methods",
"prior model",
"representations"
] | https://openreview.net/pdf?id=yyC_7RZTkUD5- | https://openreview.net/forum?id=yyC_7RZTkUD5- | o1YP1AMjPx1jv | comment | 1,363,393,020,000 | Za8LX-xwgqXw5 | [
"everyone"
] | [
"Rakesh Chalasani, Jose C. Principe"
] | ICLR.cc/2013/conference | 2013 | reply: Thank you for your review and comments. We revised the paper to address most of your concerns. Following is our response to some specific points you have raised.
>>> ' The clarity of the paper needs to be improved. For example, it will be helpful to motivate more clearly about the specific formulation of the model... |
yyC_7RZTkUD5- | Deep Predictive Coding Networks | [
"Rakesh Chalasani",
"Jose C. Principe"
] | The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative mo... | [
"model",
"networks",
"priors",
"deep predictive",
"predictive",
"quality",
"data representation",
"deep learning methods",
"prior model",
"representations"
] | https://openreview.net/pdf?id=yyC_7RZTkUD5- | https://openreview.net/forum?id=yyC_7RZTkUD5- | XTZrXGh8rENYB | comment | 1,363,393,320,000 | 3vEUvBbCrO8cu | [
"everyone"
] | [
"Rakesh Chalasani"
] | ICLR.cc/2013/conference | 2013 | reply: This is in reply to reviewer 1829, mistakenly pasted here. Please ignore. |
yyC_7RZTkUD5- | Deep Predictive Coding Networks | [
"Rakesh Chalasani",
"Jose C. Principe"
] | The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative mo... | [
"model",
"networks",
"priors",
"deep predictive",
"predictive",
"quality",
"data representation",
"deep learning methods",
"prior model",
"representations"
] | https://openreview.net/pdf?id=yyC_7RZTkUD5- | https://openreview.net/forum?id=yyC_7RZTkUD5- | Za8LX-xwgqXw5 | review | 1,362,498,780,000 | yyC_7RZTkUD5- | [
"everyone"
] | [
"anonymous reviewer 1829"
] | ICLR.cc/2013/conference | 2013 | title: review of Deep Predictive Coding Networks
review: A brief summary of the paper's contributions, in the context of prior work.
The paper proposes a hierarchical sparse generative model in the context of a dynamical system. The model can capture temporal dependencies in time-varying data, and top-down information... |
yyC_7RZTkUD5- | Deep Predictive Coding Networks | [
"Rakesh Chalasani",
"Jose C. Principe"
] | The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative mo... | [
"model",
"networks",
"priors",
"deep predictive",
"predictive",
"quality",
"data representation",
"deep learning methods",
"prior model",
"representations"
] | https://openreview.net/pdf?id=yyC_7RZTkUD5- | https://openreview.net/forum?id=yyC_7RZTkUD5- | 3vEUvBbCrO8cu | review | 1,363,392,960,000 | yyC_7RZTkUD5- | [
"everyone"
] | [
"Rakesh Chalasani, Jose C. Principe"
] | ICLR.cc/2013/conference | 2013 | review: Thank you for your review and comments. We revised the paper to address most of your concerns. Following is our response to some specific points you have raised.
>>> ' The clarity of the paper needs to be improved. For example, it will be helpful to motivate more clearly about the specific formulation of the model... |
zzEf5eKLmAG0o | Learning Features with Structure-Adapting Multi-view Exponential Family
Harmoniums | [
"YoonSeop Kang",
"Seungjin Choi"
] | We proposea graphical model for multi-view feature extraction that automatically adapts its structure to achieve better representation of data distribution. The proposed model, structure-adapting multi-view harmonium (SA-MVH) has switch parameters that control the connection between hidden nodes and input views, and le... | [
"features",
"exponential family harmoniums",
"graphical model",
"feature extraction",
"structure",
"better representation",
"data distribution",
"model",
"harmonium",
"parameters"
] | https://openreview.net/pdf?id=zzEf5eKLmAG0o | https://openreview.net/forum?id=zzEf5eKLmAG0o | UUlHmZjBOIUBb | review | 1,362,353,160,000 | zzEf5eKLmAG0o | [
"everyone"
] | [
"anonymous reviewer d966"
] | ICLR.cc/2013/conference | 2013 | title: review of Learning Features with Structure-Adapting Multi-view Exponential Family
Harmoniums
review: The paper introduces a new algorithm for simultaneously learning a hidden layer (latent representation) for multiple data views as well as automatically segmenting that hidden layer into shared and view-spe... |
zzEf5eKLmAG0o | Learning Features with Structure-Adapting Multi-view Exponential Family
Harmoniums | [
"YoonSeop Kang",
"Seungjin Choi"
] | We proposea graphical model for multi-view feature extraction that automatically adapts its structure to achieve better representation of data distribution. The proposed model, structure-adapting multi-view harmonium (SA-MVH) has switch parameters that control the connection between hidden nodes and input views, and le... | [
"features",
"exponential family harmoniums",
"graphical model",
"feature extraction",
"structure",
"better representation",
"data distribution",
"model",
"harmonium",
"parameters"
] | https://openreview.net/pdf?id=zzEf5eKLmAG0o | https://openreview.net/forum?id=zzEf5eKLmAG0o | tt7CtuzeCYt5H | comment | 1,363,857,240,000 | DNKnDqeVJmgPF | [
"everyone"
] | [
"YoonSeop Kang"
] | ICLR.cc/2013/conference | 2013 | reply: 1. The distribution of sigma(s_{kj}) had modes near 0 and 1, but the graph of the distribution was omitted due to the space constraints. The amount of separation between modes were affected by the hyperparameters that were not mentioned in the paper.
2. It is true that the separation between digit features a... |
zzEf5eKLmAG0o | Learning Features with Structure-Adapting Multi-view Exponential Family
Harmoniums | [
"YoonSeop Kang",
"Seungjin Choi"
] | We proposea graphical model for multi-view feature extraction that automatically adapts its structure to achieve better representation of data distribution. The proposed model, structure-adapting multi-view harmonium (SA-MVH) has switch parameters that control the connection between hidden nodes and input views, and le... | [
"features",
"exponential family harmoniums",
"graphical model",
"feature extraction",
"structure",
"better representation",
"data distribution",
"model",
"harmonium",
"parameters"
] | https://openreview.net/pdf?id=zzEf5eKLmAG0o | https://openreview.net/forum?id=zzEf5eKLmAG0o | qqdsq7GUspqD2 | comment | 1,363,857,540,000 | UUlHmZjBOIUBb | [
"everyone"
] | [
"YoonSeop Kang"
] | ICLR.cc/2013/conference | 2013 | reply: 1. As the switch parameters converge quickly, the training time of our model was not very different from that of DWH.
2. We performed the experiment several times, but the result was consistent. Still, it is our fault that we didn't repeat the experiments enough to add error bars to the results.
3. MVHs are of... |
zzEf5eKLmAG0o | Learning Features with Structure-Adapting Multi-view Exponential Family
Harmoniums | [
"YoonSeop Kang",
"Seungjin Choi"
] | We proposea graphical model for multi-view feature extraction that automatically adapts its structure to achieve better representation of data distribution. The proposed model, structure-adapting multi-view harmonium (SA-MVH) has switch parameters that control the connection between hidden nodes and input views, and le... | [
"features",
"exponential family harmoniums",
"graphical model",
"feature extraction",
"structure",
"better representation",
"data distribution",
"model",
"harmonium",
"parameters"
] | https://openreview.net/pdf?id=zzEf5eKLmAG0o | https://openreview.net/forum?id=zzEf5eKLmAG0o | DNKnDqeVJmgPF | review | 1,360,866,060,000 | zzEf5eKLmAG0o | [
"everyone"
] | [
"anonymous reviewer 0e7e"
] | ICLR.cc/2013/conference | 2013 | title: review of Learning Features with Structure-Adapting Multi-view Exponential Family
Harmoniums
review: The authors propose a bipartite, undirected graphical model for multiview learning, called structure-adapting multiview harmonimum (SA-MVH). The model is based on their earlier model called multiview harmoni... |
OpenReview Raw
Raw peer-review data from OpenReview, covering major ML/AI venues (ICLR, NeurIPS, EMNLP, COLM, ACM MM, and more). Includes reviews, official comments, meta-reviews, and decisions for 49,023 unique papers.
Originally from sumukshashidhar-archive/openreview_raw.
This dataset is a compilation of publicly available data from OpenReview. All original content and data rights belong to OpenReview. This compilation is made available under the Open Data Commons Attribution License (ODC-By). Users must attribute both this compilation and the original source (OpenReview) in any use of this dataset.
Dataset Statistics
| Statistic | Value |
|---|---|
| Total rows | 626,430 |
| Unique papers | 49,023 |
| Unique venues | 349 |
| Year range | 2013–2025 |
Note Types
| Type | Count | % |
|---|---|---|
| official_comment | 349,653 | 55.8% |
| official_review | 186,462 | 29.8% |
| decision | 31,450 | 5.0% |
| review | 28,616 | 4.6% |
| comment | 16,753 | 2.7% |
| meta_review | 13,496 | 2.2% |
Top Venues
| Venue | Count |
|---|---|
| ICLR 2025 | 198,960 |
| ICLR 2024 | 110,570 |
| NeurIPS 2024 | 75,555 |
| NeurIPS 2023 | 64,562 |
| EMNLP 2023 | 22,742 |
| NeurIPS 2022 | 16,278 |
| ICLR 2022 | 14,593 |
| NeurIPS 2021 | 13,605 |
| ICLR 2021 | 12,275 |
| ICLR 2019 | 11,916 |
Year Distribution
| Year | Count |
|---|---|
| 2013 | 373 |
| 2014 | 651 |
| 2016 | 295 |
| 2017 | 626 |
| 2018 | 1,158 |
| 2019 | 14,284 |
| 2020 | 12,979 |
| 2021 | 35,943 |
| 2022 | 44,621 |
| 2023 | 96,525 |
| 2024 | 219,635 |
| 2025 | 199,340 |
Note Text Length (characters)
| Statistic | Value |
|---|---|
| Mean | 2,268 |
| Median | 2,023 |
| Min | 10 |
| Max | 56,453 |
Schema
- forum_id — OpenReview forum identifier (one per paper)
- forum_title — Paper title
- forum_authors — List of paper authors
- forum_abstract — Paper abstract
- forum_keywords — Paper keywords
- forum_pdf_url — Link to PDF on OpenReview
- forum_url — Link to forum on OpenReview
- note_id — Unique identifier for this note (review/comment/decision)
- note_type — One of: `official_review`, `official_comment`, `decision`, `review`, `comment`, `meta_review`
- note_created — Unix timestamp (milliseconds) of note creation
- note_replyto — ID of the note this is replying to
- note_readers — List of reader groups with access
- note_signatures — List of note author signatures
- venue — Conference/venue identifier
- year — Publication year
- note_text — Full text content of the note
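As a minimal sketch of working with this schema, the snippet below filters rows to reviews and rebuilds a reply tree via `note_replyto`, which points at the parent `note_id` (or at the `forum_id` itself for top-level notes). The rows here are synthetic stand-ins that only mirror the schema above, not real dataset entries; in practice you would obtain rows with `datasets.load_dataset` on this repository's id.

```python
from collections import defaultdict

# Synthetic rows mirroring the schema above (abridged to the relevant fields).
rows = [
    {"forum_id": "abc", "note_id": "n1", "note_type": "official_review",
     "note_replyto": "abc", "note_created": 1600000000000, "note_text": "Review text"},
    {"forum_id": "abc", "note_id": "n2", "note_type": "official_comment",
     "note_replyto": "n1", "note_created": 1600000100000, "note_text": "Author response"},
    {"forum_id": "abc", "note_id": "n3", "note_type": "decision",
     "note_replyto": "abc", "note_created": 1600000200000, "note_text": "Accept"},
]

# Keep only review-type notes (both the old "review" and newer "official_review").
reviews = [r for r in rows if r["note_type"] in ("official_review", "review")]

# Rebuild the reply tree: map each parent id to the ids of its direct replies.
children = defaultdict(list)
for r in rows:
    children[r["note_replyto"]].append(r["note_id"])

print([r["note_id"] for r in reviews])  # review note ids
print(children["abc"])                  # top-level notes on the forum
print(children["n1"])                   # replies to review n1
```

The same grouping works unchanged on the full dataset, since every note row repeats its forum's metadata alongside the note fields.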