forum_id string | forum_title string | forum_authors list | forum_abstract string | forum_keywords list | forum_pdf_url string | forum_url string | note_id string | note_type string | note_created int64 | note_replyto string | note_readers list | note_signatures list | venue string | year string | note_text string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
gGivgRWZsLgY0 | Clustering Learning for Robotic Vision | ["Eugenio Culurciello", "Jordan Bates", "Aysegul Dundar", "Jose Carrasco", "Clement Farabet"] | We present the clustering learning technique applied to multi-layer feedforward deep neural networks. We show that this unsupervised learning technique can compute network filters with only a few minutes and a much reduced set of parameters. The goal of this paper is to promote the technique for general-purpose robot... | ["robotic vision", "clustering learning technique", "unsupervised learning technique", "network filters", "minutes", "set", "rameters", "goal", "technique"] | https://openreview.net/pdf?id=gGivgRWZsLgY0 | https://openreview.net/forum?id=gGivgRWZsLgY0 | KOmskcVuMBOLt | review | 1,362,354,720,000 | gGivgRWZsLgY0 | ["everyone"] | ["anonymous reviewer d6ae"] | ICLR.cc/2013/conference | 2013 | title: review of Clustering Learning for Robotic Vision review: I am *very* sympathetic to the aims of the authors: Find simple, effective and fast deep networks to understand sensor data. The authors defer some of the more interesting bits to future work however: they note that sum-abs-diff should be much more effici... |
gGivgRWZsLgY0 | Clustering Learning for Robotic Vision | ["Eugenio Culurciello", "Jordan Bates", "Aysegul Dundar", "Jose Carrasco", "Clement Farabet"] | We present the clustering learning technique applied to multi-layer feedforward deep neural networks. We show that this unsupervised learning technique can compute network filters with only a few minutes and a much reduced set of parameters. The goal of this paper is to promote the technique for general-purpose robot... | ["robotic vision", "clustering learning technique", "unsupervised learning technique", "network filters", "minutes", "set", "rameters", "goal", "technique"] | https://openreview.net/pdf?id=gGivgRWZsLgY0 | https://openreview.net/forum?id=gGivgRWZsLgY0 | DGTnGO8CnrcPN | review | 1,362,366,000,000 | gGivgRWZsLgY0 | ["everyone"] | ["anonymous reviewer d2a7"] | ICLR.cc/2013/conference | 2013 | title: review of Clustering Learning for Robotic Vision review: # Summary This paper compares two types of filtering operator (linear filtering vs. distance filtering) in convolutional neural networks for image processing. The paper evaluates two fairly arbitrarily-chosen architectures on the CIFAR-10 and SVHN imag... |
ACBmCbico7jkg | Gradient Driven Learning for Pooling in Visual Pipeline Feature Extraction Models | ["Derek Rose", "Itamar Arel"] | Hyper-parameter selection remains a daunting task when building a pattern recognition architecture which performs well, particularly in recently constructed visual pipeline models for feature extraction. We re-formulate pooling in an existing pipeline as a function of adjustable pooling map weight parameters and propos... | ["task", "maps", "model", "gradient", "selection", "pattern recognition architecture", "visual pipeline models", "feature extraction", "pipeline"] | https://openreview.net/pdf?id=ACBmCbico7jkg | https://openreview.net/forum?id=ACBmCbico7jkg | RRH1s5U_dcQjB | review | 1,362,378,120,000 | ACBmCbico7jkg | ["everyone"] | ["anonymous reviewer 06d9"] | ICLR.cc/2013/conference | 2013 | review: NA |
ACBmCbico7jkg | Gradient Driven Learning for Pooling in Visual Pipeline Feature Extraction Models | ["Derek Rose", "Itamar Arel"] | Hyper-parameter selection remains a daunting task when building a pattern recognition architecture which performs well, particularly in recently constructed visual pipeline models for feature extraction. We re-formulate pooling in an existing pipeline as a function of adjustable pooling map weight parameters and propos... | ["task", "maps", "model", "gradient", "selection", "pattern recognition architecture", "visual pipeline models", "feature extraction", "pipeline"] | https://openreview.net/pdf?id=ACBmCbico7jkg | https://openreview.net/forum?id=ACBmCbico7jkg | cjiVGTKF7OjND | review | 1,362,402,060,000 | ACBmCbico7jkg | ["everyone"] | ["anonymous reviewer f473"] | ICLR.cc/2013/conference | 2013 | title: review of Gradient Driven Learning for Pooling in Visual Pipeline Feature Extraction Models review: The paper proposes to learn the weights of the pooling region in a neural network for recognition. The idea is a good one, but the paper is a bit terse. Its not really clear what we are looking at in Figure 1b -... |
ACBmCbico7jkg | Gradient Driven Learning for Pooling in Visual Pipeline Feature Extraction Models | ["Derek Rose", "Itamar Arel"] | Hyper-parameter selection remains a daunting task when building a pattern recognition architecture which performs well, particularly in recently constructed visual pipeline models for feature extraction. We re-formulate pooling in an existing pipeline as a function of adjustable pooling map weight parameters and propos... | ["task", "maps", "model", "gradient", "selection", "pattern recognition architecture", "visual pipeline models", "feature extraction", "pipeline"] | https://openreview.net/pdf?id=ACBmCbico7jkg | https://openreview.net/forum?id=ACBmCbico7jkg | 55Y25pcVULOXK | review | 1,362,378,060,000 | ACBmCbico7jkg | ["everyone"] | ["anonymous reviewer 06d9"] | ICLR.cc/2013/conference | 2013 | title: review of Gradient Driven Learning for Pooling in Visual Pipeline Feature Extraction Models review: The paper by Rose & Arel entitled 'Gradient Driven Learning for Pooling in Visual Pipeline Feature Extraction Models' describes a new approach for optimizing hyper parameters in spatial pyramid-like architectures... |
tFbuFKWX3MFC8 | Training Neural Networks with Stochastic Hessian-Free Optimization | ["Ryan Kiros"] | Hessian-free (HF) optimization has been successfully used for training deep autoencoders and recurrent networks. HF uses the conjugate gradient algorithm to construct update directions through curvature-vector products that can be computed on the same order of time as gradients. In this paper we exploit this property a... | ["optimization", "neural networks", "stochastic", "deep autoencoders", "recurrent networks", "conjugate gradient algorithm", "update directions", "products", "order"] | https://openreview.net/pdf?id=tFbuFKWX3MFC8 | https://openreview.net/forum?id=tFbuFKWX3MFC8 | nYshYtAXG48ze | review | 1,364,786,880,000 | tFbuFKWX3MFC8 | ["everyone"] | ["Ryan Kiros"] | ICLR.cc/2013/conference | 2013 | review: I want to say thanks again to the conference organizers, reviewers and openreview.net developers for doing a great job. I have updated the code on my webpage to include two additional features: max norm weight clipping and training deep autoencoders. Autoencoder training uses symmetric encoding / decoding an... |
tFbuFKWX3MFC8 | Training Neural Networks with Stochastic Hessian-Free Optimization | ["Ryan Kiros"] | Hessian-free (HF) optimization has been successfully used for training deep autoencoders and recurrent networks. HF uses the conjugate gradient algorithm to construct update directions through curvature-vector products that can be computed on the same order of time as gradients. In this paper we exploit this property a... | ["optimization", "neural networks", "stochastic", "deep autoencoders", "recurrent networks", "conjugate gradient algorithm", "update directions", "products", "order"] | https://openreview.net/pdf?id=tFbuFKWX3MFC8 | https://openreview.net/forum?id=tFbuFKWX3MFC8 | mm_3mNH6nD4hc | review | 1,363,601,400,000 | tFbuFKWX3MFC8 | ["everyone"] | ["Ryan Kiros"] | ICLR.cc/2013/conference | 2013 | review: I have submitted an updated version to arxiv and should appear shortly. My apologies for the delay. From the suggestion of reviewer 0a71 I've renamed the paper to 'Training Neural Networks with Dropout Stochastic Hessian-Free Optimization'. |
tFbuFKWX3MFC8 | Training Neural Networks with Stochastic Hessian-Free Optimization | ["Ryan Kiros"] | Hessian-free (HF) optimization has been successfully used for training deep autoencoders and recurrent networks. HF uses the conjugate gradient algorithm to construct update directions through curvature-vector products that can be computed on the same order of time as gradients. In this paper we exploit this property a... | ["optimization", "neural networks", "stochastic", "deep autoencoders", "recurrent networks", "conjugate gradient algorithm", "update directions", "products", "order"] | https://openreview.net/pdf?id=tFbuFKWX3MFC8 | https://openreview.net/forum?id=tFbuFKWX3MFC8 | TF3miswPCQiau | review | 1,362,400,260,000 | tFbuFKWX3MFC8 | ["everyone"] | ["anonymous reviewer f834"] | ICLR.cc/2013/conference | 2013 | title: review of Training Neural Networks with Stochastic Hessian-Free Optimization review: This paper looks at designing an SGD-like version of the 'Hessian-free' (HF) optimization approach which is applied to training shallow to moderately deep neural nets for classification tasks. The approach consists of the usual... |
tFbuFKWX3MFC8 | Training Neural Networks with Stochastic Hessian-Free Optimization | ["Ryan Kiros"] | Hessian-free (HF) optimization has been successfully used for training deep autoencoders and recurrent networks. HF uses the conjugate gradient algorithm to construct update directions through curvature-vector products that can be computed on the same order of time as gradients. In this paper we exploit this property a... | ["optimization", "neural networks", "stochastic", "deep autoencoders", "recurrent networks", "conjugate gradient algorithm", "update directions", "products", "order"] | https://openreview.net/pdf?id=tFbuFKWX3MFC8 | https://openreview.net/forum?id=tFbuFKWX3MFC8 | gehZgYtw_1v8S | review | 1,362,161,760,000 | tFbuFKWX3MFC8 | ["everyone"] | ["anonymous reviewer 0a71"] | ICLR.cc/2013/conference | 2013 | title: review of Training Neural Networks with Stochastic Hessian-Free Optimization review: Summary and general overview: The paper tries to explore an online regime for Hessian Free as well as using drop outs. The new method is called Stochastic Hessian Free and is test... |
tFbuFKWX3MFC8 | Training Neural Networks with Stochastic Hessian-Free Optimization | ["Ryan Kiros"] | Hessian-free (HF) optimization has been successfully used for training deep autoencoders and recurrent networks. HF uses the conjugate gradient algorithm to construct update directions through curvature-vector products that can be computed on the same order of time as gradients. In this paper we exploit this property a... | ["optimization", "neural networks", "stochastic", "deep autoencoders", "recurrent networks", "conjugate gradient algorithm", "update directions", "products", "order"] | https://openreview.net/pdf?id=tFbuFKWX3MFC8 | https://openreview.net/forum?id=tFbuFKWX3MFC8 | lcfIcbYPqX3P7 | review | 1,367,022,720,000 | tFbuFKWX3MFC8 | ["everyone"] | ["Ryan Kiros"] | ICLR.cc/2013/conference | 2013 | review: Dear reviewers, To better account for the mentioned weaknesses of the paper, I've re-implemented SHF with GPU compatibility and evaluated the algorithm on the CURVES and MNIST deep autoencoder tasks. I'm using the same setup as in Chapter 7 of Ilya Sutskever's PhD thesis, which allows for comparison against ... |
tFbuFKWX3MFC8 | Training Neural Networks with Stochastic Hessian-Free Optimization | ["Ryan Kiros"] | Hessian-free (HF) optimization has been successfully used for training deep autoencoders and recurrent networks. HF uses the conjugate gradient algorithm to construct update directions through curvature-vector products that can be computed on the same order of time as gradients. In this paper we exploit this property a... | ["optimization", "neural networks", "stochastic", "deep autoencoders", "recurrent networks", "conjugate gradient algorithm", "update directions", "products", "order"] | https://openreview.net/pdf?id=tFbuFKWX3MFC8 | https://openreview.net/forum?id=tFbuFKWX3MFC8 | av7x0igQwD0M- | review | 1,362,494,640,000 | tFbuFKWX3MFC8 | ["everyone"] | ["Ryan Kiros"] | ICLR.cc/2013/conference | 2013 | review: Thank you for your comments! To Anonymous 0a71: (1,8): I agree. Indeed, it is straightforward to add an additional experiment without the use of dropout. At the least, the experimental section can be modified to indicate whether the method is using dropout or not instea... |
tFbuFKWX3MFC8 | Training Neural Networks with Stochastic Hessian-Free Optimization | ["Ryan Kiros"] | Hessian-free (HF) optimization has been successfully used for training deep autoencoders and recurrent networks. HF uses the conjugate gradient algorithm to construct update directions through curvature-vector products that can be computed on the same order of time as gradients. In this paper we exploit this property a... | ["optimization", "neural networks", "stochastic", "deep autoencoders", "recurrent networks", "conjugate gradient algorithm", "update directions", "products", "order"] | https://openreview.net/pdf?id=tFbuFKWX3MFC8 | https://openreview.net/forum?id=tFbuFKWX3MFC8 | 3nHzayPmAI5r1 | comment | 1,363,585,560,000 | av7x0igQwD0M- | ["everyone"] | ["anonymous reviewer 0a71"] | ICLR.cc/2013/conference | 2013 | reply: Regarding using HF for classification. My point was that lack of results in the literature about classification error with HF might be just due to the fact that this is a new method, arguably hard to implement and hence not many had a chance to play with it. I'm not sure that just using HF (the way James introdu... |
tFbuFKWX3MFC8 | Training Neural Networks with Stochastic Hessian-Free Optimization | ["Ryan Kiros"] | Hessian-free (HF) optimization has been successfully used for training deep autoencoders and recurrent networks. HF uses the conjugate gradient algorithm to construct update directions through curvature-vector products that can be computed on the same order of time as gradients. In this paper we exploit this property a... | ["optimization", "neural networks", "stochastic", "deep autoencoders", "recurrent networks", "conjugate gradient algorithm", "update directions", "products", "order"] | https://openreview.net/pdf?id=tFbuFKWX3MFC8 | https://openreview.net/forum?id=tFbuFKWX3MFC8 | UJZtu0oLtcJh1 | review | 1,362,391,800,000 | tFbuFKWX3MFC8 | ["everyone"] | ["anonymous reviewer 4709"] | ICLR.cc/2013/conference | 2013 | title: review of Training Neural Networks with Stochastic Hessian-Free Optimization review: This paper makes an attempt at extending the Hessian-free learning work to a stochastic setting. In a nutshell, the changes are: - shorter CG runs - cleverer information sharing across CG runs that has an annealing effect ... |
tFbuFKWX3MFC8 | Training Neural Networks with Stochastic Hessian-Free Optimization | ["Ryan Kiros"] | Hessian-free (HF) optimization has been successfully used for training deep autoencoders and recurrent networks. HF uses the conjugate gradient algorithm to construct update directions through curvature-vector products that can be computed on the same order of time as gradients. In this paper we exploit this property a... | ["optimization", "neural networks", "stochastic", "deep autoencoders", "recurrent networks", "conjugate gradient algorithm", "update directions", "products", "order"] | https://openreview.net/pdf?id=tFbuFKWX3MFC8 | https://openreview.net/forum?id=tFbuFKWX3MFC8 | CUXbqkRcJWqcy | review | 1,360,514,640,000 | tFbuFKWX3MFC8 | ["everyone"] | ["Ryan Kiros"] | ICLR.cc/2013/conference | 2013 | review: Code is now available: http://www.ualberta.ca/~rkiros/ Included are scripts to reproduce the results in the paper. |
2rHk2kZ5knTJ6 | A Geometric Descriptor for Cell-Division Detection | ["Marcelo Cicconet", "Italo Lima", "Davi Geiger", "Kris Gunsalus"] | We describe a method for cell-division detection based on a geometric-driven descriptor that can be represented as a 5-layers processing network, based mainly on wavelet filtering and a test for mirror symmetry between pairs of pixels. After the centroids of the descriptors are computed for a sequence of frames, the tw... | ["detection", "geometric descriptor", "centroids", "sequence", "descriptor", "processing network", "wavelet filtering", "test", "mirror symmetry", "pairs"] | https://openreview.net/pdf?id=2rHk2kZ5knTJ6 | https://openreview.net/forum?id=2rHk2kZ5knTJ6 | UvnQU-IxtJfA2 | review | 1,362,163,620,000 | 2rHk2kZ5knTJ6 | ["everyone"] | ["anonymous reviewer ba30"] | ICLR.cc/2013/conference | 2013 | title: review of A Geometric Descriptor for Cell-Division Detection review: Goal: automatically spot the point in a video sequence where a cell-division occurs. Interesting application of deep networks. |
2rHk2kZ5knTJ6 | A Geometric Descriptor for Cell-Division Detection | ["Marcelo Cicconet", "Italo Lima", "Davi Geiger", "Kris Gunsalus"] | We describe a method for cell-division detection based on a geometric-driven descriptor that can be represented as a 5-layers processing network, based mainly on wavelet filtering and a test for mirror symmetry between pairs of pixels. After the centroids of the descriptors are computed for a sequence of frames, the tw... | ["detection", "geometric descriptor", "centroids", "sequence", "descriptor", "processing network", "wavelet filtering", "test", "mirror symmetry", "pairs"] | https://openreview.net/pdf?id=2rHk2kZ5knTJ6 | https://openreview.net/forum?id=2rHk2kZ5knTJ6 | ddQbtyHpiUz9Z | review | 1,362,034,500,000 | 2rHk2kZ5knTJ6 | ["everyone"] | ["David Warde-Farley"] | ICLR.cc/2013/conference | 2013 | review: The proposed method appears to be an engineered descriptor that doesn't involve any learning. While the application is interesting, ICLR is probably not an appropriate venue. |
2rHk2kZ5knTJ6 | A Geometric Descriptor for Cell-Division Detection | ["Marcelo Cicconet", "Italo Lima", "Davi Geiger", "Kris Gunsalus"] | We describe a method for cell-division detection based on a geometric-driven descriptor that can be represented as a 5-layers processing network, based mainly on wavelet filtering and a test for mirror symmetry between pairs of pixels. After the centroids of the descriptors are computed for a sequence of frames, the tw... | ["detection", "geometric descriptor", "centroids", "sequence", "descriptor", "processing network", "wavelet filtering", "test", "mirror symmetry", "pairs"] | https://openreview.net/pdf?id=2rHk2kZ5knTJ6 | https://openreview.net/forum?id=2rHk2kZ5knTJ6 | uVT9-IDrqY-ci | review | 1,362,198,120,000 | 2rHk2kZ5knTJ6 | ["everyone"] | ["anonymous reviewer 3bab"] | ICLR.cc/2013/conference | 2013 | title: review of A Geometric Descriptor for Cell-Division Detection review: This paper aims to annotate the point at which cells divide in a video sequence. Pros: - a useful and interesting application. Cons: - it does not seem to involve any learning, it clearly does not fit at ICLR. - no comparison to other ... |
4eEO5rd6xSevQ | Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals | ["Sebastian Hitziger", "Maureen Clerc", "Alexandre Gramfort", "Sandrine Saillet", "Christian Bénar", "Théodore Papadopoulo"] | Dictionary Learning has proven to be a powerful tool for many image processing tasks, where atoms are typically defined on small image patches. As a drawback, the dictionary only encodes basic structures. In addition, this approach treats patches of different locations in one single set, which means a loss of informati... | ["dictionary", "features", "application", "atoms", "signals", "case", "neuroelectric signals", "powerful tool"] | https://openreview.net/pdf?id=4eEO5rd6xSevQ | https://openreview.net/forum?id=4eEO5rd6xSevQ | KktHprTPH5p6q | review | 1,363,851,960,000 | 4eEO5rd6xSevQ | ["everyone"] | ["anonymous reviewer 8b9c"] | ICLR.cc/2013/conference | 2013 | review: One additional comment is that the work bears some similarities to Hinton's recent work on 'capsules' and it may be worth citing that paper: Hinton, G. E., Krizhevsky, A. and Wang, S. (2011) Transforming Auto-encoders. ICANN-11: International Conference on Artificial Neural Networks, Helsinki. http://www... |
4eEO5rd6xSevQ | Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals | ["Sebastian Hitziger", "Maureen Clerc", "Alexandre Gramfort", "Sandrine Saillet", "Christian Bénar", "Théodore Papadopoulo"] | Dictionary Learning has proven to be a powerful tool for many image processing tasks, where atoms are typically defined on small image patches. As a drawback, the dictionary only encodes basic structures. In addition, this approach treats patches of different locations in one single set, which means a loss of informati... | ["dictionary", "features", "application", "atoms", "signals", "case", "neuroelectric signals", "powerful tool"] | https://openreview.net/pdf?id=4eEO5rd6xSevQ | https://openreview.net/forum?id=4eEO5rd6xSevQ | Gp6ETkwghDG9l | comment | 1,363,646,700,000 | DdhjdI7FMGDFT | ["everyone"] | ["anonymous reviewer 8ed7"] | ICLR.cc/2013/conference | 2013 | reply: The authors have improved the paper, addressing many of the issues I brought up. I would modify my review to be Neutral; if that is not an acceptable evaluation, then I modify my review to a Weak Accept. I am only posting this response to the poster asking for an updated evaluation, because I am not sure if I am... |
4eEO5rd6xSevQ | Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals | ["Sebastian Hitziger", "Maureen Clerc", "Alexandre Gramfort", "Sandrine Saillet", "Christian Bénar", "Théodore Papadopoulo"] | Dictionary Learning has proven to be a powerful tool for many image processing tasks, where atoms are typically defined on small image patches. As a drawback, the dictionary only encodes basic structures. In addition, this approach treats patches of different locations in one single set, which means a loss of informati... | ["dictionary", "features", "application", "atoms", "signals", "case", "neuroelectric signals", "powerful tool"] | https://openreview.net/pdf?id=4eEO5rd6xSevQ | https://openreview.net/forum?id=4eEO5rd6xSevQ | 3yWm3DNg8o3fu | review | 1,363,126,320,000 | 4eEO5rd6xSevQ | ["everyone"] | ["Sebastian Hitziger, Maureen Clerc, Alexandre Gramfort, Sandrine Saillet, Christian Bénar, Théodore Papadopoulo"] | ICLR.cc/2013/conference | 2013 | review: We thank the reviewers for their constructive comments. We submitted a new version of the paper to arXiv, which should be made available on Wednesday, March 13. As one major change we now point out the similarity to convolutional/shift-invariant sparse coding (SISC)*, but also mention the differences mainly int... |
4eEO5rd6xSevQ | Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals | ["Sebastian Hitziger", "Maureen Clerc", "Alexandre Gramfort", "Sandrine Saillet", "Christian Bénar", "Théodore Papadopoulo"] | Dictionary Learning has proven to be a powerful tool for many image processing tasks, where atoms are typically defined on small image patches. As a drawback, the dictionary only encodes basic structures. In addition, this approach treats patches of different locations in one single set, which means a loss of informati... | ["dictionary", "features", "application", "atoms", "signals", "case", "neuroelectric signals", "powerful tool"] | https://openreview.net/pdf?id=4eEO5rd6xSevQ | https://openreview.net/forum?id=4eEO5rd6xSevQ | zo2FGvCYFkoR4 | review | 1,362,402,300,000 | 4eEO5rd6xSevQ | ["everyone"] | ["anonymous reviewer 8b9c"] | ICLR.cc/2013/conference | 2013 | title: review of Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals review: The paper proposes a method for learning shiftable dictionary elements - i.e., each dictionary is allowed to shift to its optimal position to model structure in a signal. Results on test data show a signific... |
4eEO5rd6xSevQ | Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals | ["Sebastian Hitziger", "Maureen Clerc", "Alexandre Gramfort", "Sandrine Saillet", "Christian Bénar", "Théodore Papadopoulo"] | Dictionary Learning has proven to be a powerful tool for many image processing tasks, where atoms are typically defined on small image patches. As a drawback, the dictionary only encodes basic structures. In addition, this approach treats patches of different locations in one single set, which means a loss of informati... | ["dictionary", "features", "application", "atoms", "signals", "case", "neuroelectric signals", "powerful tool"] | https://openreview.net/pdf?id=4eEO5rd6xSevQ | https://openreview.net/forum?id=4eEO5rd6xSevQ | HrUgwafkmVrpB | review | 1,363,533,480,000 | 4eEO5rd6xSevQ | ["everyone"] | ["Aaron Courville"] | ICLR.cc/2013/conference | 2013 | review: Please read the author's responses to your review and the updated version of the paper. Do they change your evaluation of the paper? |
4eEO5rd6xSevQ | Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals | ["Sebastian Hitziger", "Maureen Clerc", "Alexandre Gramfort", "Sandrine Saillet", "Christian Bénar", "Théodore Papadopoulo"] | Dictionary Learning has proven to be a powerful tool for many image processing tasks, where atoms are typically defined on small image patches. As a drawback, the dictionary only encodes basic structures. In addition, this approach treats patches of different locations in one single set, which means a loss of informati... | ["dictionary", "features", "application", "atoms", "signals", "case", "neuroelectric signals", "powerful tool"] | https://openreview.net/pdf?id=4eEO5rd6xSevQ | https://openreview.net/forum?id=4eEO5rd6xSevQ | 9CrL9uhDy_qlF | review | 1,363,533,540,000 | 4eEO5rd6xSevQ | ["everyone"] | ["Aaron Courville"] | ICLR.cc/2013/conference | 2013 | review: Please read the author's responses to your review and the updated version of the paper. Do they change your evaluation of the paper? |
4eEO5rd6xSevQ | Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals | ["Sebastian Hitziger", "Maureen Clerc", "Alexandre Gramfort", "Sandrine Saillet", "Christian Bénar", "Théodore Papadopoulo"] | Dictionary Learning has proven to be a powerful tool for many image processing tasks, where atoms are typically defined on small image patches. As a drawback, the dictionary only encodes basic structures. In addition, this approach treats patches of different locations in one single set, which means a loss of informati... | ["dictionary", "features", "application", "atoms", "signals", "case", "neuroelectric signals", "powerful tool"] | https://openreview.net/pdf?id=4eEO5rd6xSevQ | https://openreview.net/forum?id=4eEO5rd6xSevQ | DJA5lKoL8-lLY | review | 1,362,362,340,000 | 4eEO5rd6xSevQ | ["everyone"] | ["anonymous reviewer 8ed7"] | ICLR.cc/2013/conference | 2013 | title: review of Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals review: This paper introduces a dictionary learning technique that incorporates time delays or shifts on the learned dictionary, called JADL, to better account for this structure in multi-trial neuroelectric signals.... |
4eEO5rd6xSevQ | Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals | ["Sebastian Hitziger", "Maureen Clerc", "Alexandre Gramfort", "Sandrine Saillet", "Christian Bénar", "Théodore Papadopoulo"] | Dictionary Learning has proven to be a powerful tool for many image processing tasks, where atoms are typically defined on small image patches. As a drawback, the dictionary only encodes basic structures. In addition, this approach treats patches of different locations in one single set, which means a loss of informati... | ["dictionary", "features", "application", "atoms", "signals", "case", "neuroelectric signals", "powerful tool"] | https://openreview.net/pdf?id=4eEO5rd6xSevQ | https://openreview.net/forum?id=4eEO5rd6xSevQ | NjApJLTlfWxlo | review | 1,362,376,680,000 | 4eEO5rd6xSevQ | ["everyone"] | ["anonymous reviewer 5e7a"] | ICLR.cc/2013/conference | 2013 | title: review of Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals review: This paper introduces a sparse coding variant called 'jitter-adaptive' sparse coding, aimed at improving the efficiency of sparse coding by augmenting a dictionary with temporally shifted elements. The motiva... |
4eEO5rd6xSevQ | Jitter-Adaptive Dictionary Learning - Application to Multi-Trial Neuroelectric Signals | ["Sebastian Hitziger", "Maureen Clerc", "Alexandre Gramfort", "Sandrine Saillet", "Christian Bénar", "Théodore Papadopoulo"] | Dictionary Learning has proven to be a powerful tool for many image processing tasks, where atoms are typically defined on small image patches. As a drawback, the dictionary only encodes basic structures. In addition, this approach treats patches of different locations in one single set, which means a loss of informati... | ["dictionary", "features", "application", "atoms", "signals", "case", "neuroelectric signals", "powerful tool"] | https://openreview.net/pdf?id=4eEO5rd6xSevQ | https://openreview.net/forum?id=4eEO5rd6xSevQ | DdhjdI7FMGDFT | review | 1,363,533,480,000 | 4eEO5rd6xSevQ | ["everyone"] | ["Aaron Courville"] | ICLR.cc/2013/conference | 2013 | review: Please read the author's responses to your review and the updated version of the paper. Do they change your evaluation of the paper? |
MQm0HKx20L7iN | Kernelized Locality-Sensitive Hashing for Semi-Supervised Agglomerative Clustering | ["Boyi Xie", "Shuheng Zheng"] | Large scale agglomerative clustering is hindered by computational burdens. We propose a novel scheme where exact inter-instance distance calculation is replaced by the Hamming distance between Kernelized Locality-Sensitive Hashing (KLSH) hashed values. This results in a method that drastically decreases computation tim... | ["hashing", "agglomerative", "computational burdens", "novel scheme", "exact", "distance calculation", "distance", "klsh", "values"] | https://openreview.net/pdf?id=MQm0HKx20L7iN | https://openreview.net/forum?id=MQm0HKx20L7iN | vpc3vyRo-2AFM | review | 1,362,080,280,000 | MQm0HKx20L7iN | ["everyone"] | ["anonymous reviewer c8d7"] | ICLR.cc/2013/conference | 2013 | title: review of Kernelized Locality-Sensitive Hashing for Semi-Supervised Agglomerative Clustering review: This paper proposes to use kernelized locality-sensitive hashing (KLSH), based on a similarity metric learned from labeled data, to accelerate agglomerative (hierarchical) clustering. Agglomerative clustering req... |
MQm0HKx20L7iN | Kernelized Locality-Sensitive Hashing for Semi-Supervised Agglomerative Clustering | [
"Boyi Xie",
"Shuheng Zheng"
] | Large scale agglomerative clustering is hindered by computational burdens. We propose a novel scheme where exact inter-instance distance calculation is replaced by the Hamming distance between Kernelized Locality-Sensitive Hashing (KLSH) hashed values. This results in a method that drastically decreases computation tim... | [
"hashing",
"agglomerative",
"computational burdens",
"novel scheme",
"exact",
"distance calculation",
"distance",
"klsh",
"values"
] | https://openreview.net/pdf?id=MQm0HKx20L7iN | https://openreview.net/forum?id=MQm0HKx20L7iN | Z9bz9yXn_F9nA | review | 1,362,172,860,000 | MQm0HKx20L7iN | [
"everyone"
] | [
"anonymous reviewer cce9"
] | ICLR.cc/2013/conference | 2013 | title: review of Kernelized Locality-Sensitive Hashing for Semi-Supervised Agglomerative Clustering
review: This workshop submission proposes a method for clustering data which applies a semi-supervised distance metric to the data prior to applying kernelized locality-sensitive hashing for agglomerative clustering. T... |
iKeAKFLmxoim3 | Heteroscedastic Conditional Ordinal Random Fields for Pain Intensity Estimation from Facial Images | [
"Ognjen Rudovic",
"Maja Pantic",
"Vladimir Pavlovic"
] | We propose a novel method for automatic pain intensity estimation from facial images based on the framework of kernel Conditional Ordinal Random Fields (KCORF). We extend this framework to account for heteroscedasticity on the output labels (i.e., pain intensity scores) and introduce novel dynamic features, dynamic ra... | [
"pain intensity estimation",
"facial images",
"framework",
"intensity scores",
"novel",
"kcorf"
] | https://openreview.net/pdf?id=iKeAKFLmxoim3 | https://openreview.net/forum?id=iKeAKFLmxoim3 | VTEO8hp3ad83Q | review | 1,362,297,780,000 | iKeAKFLmxoim3 | [
"everyone"
] | [
"anonymous reviewer 9402"
] | ICLR.cc/2013/conference | 2013 | title: review of Heteroscedastic Conditional Ordinal Random Fields for Pain Intensity Estimation from Facial Images
review: This extended abstract discusses a modification to an existing ordinal conditional random field model (CORF) so as to treat non-stationary data. This is done by making the variance in a probi... |
iKeAKFLmxoim3 | Heteroscedastic Conditional Ordinal Random Fields for Pain Intensity Estimation from Facial Images | [
"Ognjen Rudovic",
"Maja Pantic",
"Vladimir Pavlovic"
] | We propose a novel method for automatic pain intensity estimation from facial images based on the framework of kernel Conditional Ordinal Random Fields (KCORF). We extend this framework to account for heteroscedasticity on the output labels (i.e., pain intensity scores) and introduce novel dynamic features, dynamic ra... | [
"pain intensity estimation",
"facial images",
"framework",
"intensity scores",
"novel",
"kcorf"
] | https://openreview.net/pdf?id=iKeAKFLmxoim3 | https://openreview.net/forum?id=iKeAKFLmxoim3 | lBM7_cfUaYlP1 | review | 1,362,186,300,000 | iKeAKFLmxoim3 | [
"everyone"
] | [
"anonymous reviewer 0342"
] | ICLR.cc/2013/conference | 2013 | title: review of Heteroscedastic Conditional Ordinal Random Fields for Pain Intensity Estimation from Facial Images
review: This paper seeks to estimate ordinal labels of pain intensity from videos of faces. The paper discusses a new variation of a conditional random field in which the produced labels are ordin... |
zzKhQhsTYlzAZ | Regularized Discriminant Embedding for Visual Descriptor Learning | [
"Kye-Hyeon Kim",
"Rui Cai",
"Lei Zhang",
"Seungjin Choi"
] | Images can vary according to changes in viewpoint, resolution, noise, and illumination. In this paper, we aim to learn representations for an image, which are robust to wide changes in such environmental conditions, using training pairs of matching and non-matching local image patches that are collected under various e... | [
"pairs",
"discriminant",
"visual descriptor",
"images",
"changes",
"viewpoint",
"resolution",
"noise",
"illumination",
"representations"
] | https://openreview.net/pdf?id=zzKhQhsTYlzAZ | https://openreview.net/forum?id=zzKhQhsTYlzAZ | FBx7CpGZiEA32 | review | 1,362,287,940,000 | zzKhQhsTYlzAZ | [
"everyone"
] | [
"anonymous reviewer 1e7c"
] | ICLR.cc/2013/conference | 2013 | title: review of Regularized Discriminant Embedding for Visual Descriptor Learning
review: The paper aims to present a method for discriminant analysis for image descriptors. The formulation splits a given dataset of labeled images into 4 categories, Relevant/Irrelevant and Near/Far pairs (RN,RF,IN,IF). The final fo... |
zzKhQhsTYlzAZ | Regularized Discriminant Embedding for Visual Descriptor Learning | [
"Kye-Hyeon Kim",
"Rui Cai",
"Lei Zhang",
"Seungjin Choi"
] | Images can vary according to changes in viewpoint, resolution, noise, and illumination. In this paper, we aim to learn representations for an image, which are robust to wide changes in such environmental conditions, using training pairs of matching and non-matching local image patches that are collected under various e... | [
"pairs",
"discriminant",
"visual descriptor",
"images",
"changes",
"viewpoint",
"resolution",
"noise",
"illumination",
"representations"
] | https://openreview.net/pdf?id=zzKhQhsTYlzAZ | https://openreview.net/forum?id=zzKhQhsTYlzAZ | -7pc74mqcO-Mr | review | 1,362,186,780,000 | zzKhQhsTYlzAZ | [
"everyone"
] | [
"anonymous reviewer 39f1"
] | ICLR.cc/2013/conference | 2013 | title: review of Regularized Discriminant Embedding for Visual Descriptor Learning
review: This paper describes a method for learning visual feature descriptors that are invariant to changes in illumination, viewpoint, and image quality. The method can be used for multi-view matching and alignment, or for robust image ... |
zzKhQhsTYlzAZ | Regularized Discriminant Embedding for Visual Descriptor Learning | [
"Kye-Hyeon Kim",
"Rui Cai",
"Lei Zhang",
"Seungjin Choi"
] | Images can vary according to changes in viewpoint, resolution, noise, and illumination. In this paper, we aim to learn representations for an image, which are robust to wide changes in such environmental conditions, using training pairs of matching and non-matching local image patches that are collected under various e... | [
"pairs",
"discriminant",
"visual descriptor",
"images",
"changes",
"viewpoint",
"resolution",
"noise",
"illumination",
"representations"
] | https://openreview.net/pdf?id=zzKhQhsTYlzAZ | https://openreview.net/forum?id=zzKhQhsTYlzAZ | Xf5Pf5SWhtEYT | review | 1,363,779,180,000 | zzKhQhsTYlzAZ | [
"everyone"
] | [
"Kye-Hyeon Kim, Rui Cai, Lei Zhang, Seungjin Choi"
] | ICLR.cc/2013/conference | 2013 | review: We sincerely appreciate all the reviewers for their time and comments on this manuscript.
We fully agree that it is really hard to find meaningful contributions in this short paper, though we tried our best to emphasize them. As we have noted, the full version of this manuscript is currently under review in an... |
3JiGJa1ZBn9W0 | The Expressive Power of Word Embeddings | [
"Yanqing Chen",
"Bryan P",
"Rami Al-Rfou",
"Steven Skiena"
] | We seek to better understand the difference in quality of the several publicly released embeddings. We propose several tasks that help to distinguish the characteristics of different embeddings. Our evaluation shows that embeddings are able to capture deep semantics even in the absence of sentence structure. Moreover, ... | [
"embeddings",
"quality",
"expressive power",
"word",
"characteristics",
"difference",
"several",
"several tasks",
"different embeddings",
"evaluation"
] | https://openreview.net/pdf?id=3JiGJa1ZBn9W0 | https://openreview.net/forum?id=3JiGJa1ZBn9W0 | 0sZLsSijYosjR | review | 1,360,886,580,000 | 3JiGJa1ZBn9W0 | [
"everyone"
] | [
"Yanqing Chen"
] | ICLR.cc/2013/conference | 2013 | review: Hello dear reviewer,
Thank you for your well thought out review. We hope to have a draft which addresses some of your comments shortly.
Regards |
3JiGJa1ZBn9W0 | The Expressive Power of Word Embeddings | [
"Yanqing Chen",
"Bryan P",
"Rami Al-Rfou",
"Steven Skiena"
] | We seek to better understand the difference in quality of the several publicly released embeddings. We propose several tasks that help to distinguish the characteristics of different embeddings. Our evaluation shows that embeddings are able to capture deep semantics even in the absence of sentence structure. Moreover, ... | [
"embeddings",
"quality",
"expressive power",
"word",
"characteristics",
"difference",
"several",
"several tasks",
"different embeddings",
"evaluation"
] | https://openreview.net/pdf?id=3JiGJa1ZBn9W0 | https://openreview.net/forum?id=3JiGJa1ZBn9W0 | KcIrcVwbnRc0P | review | 1,362,189,720,000 | 3JiGJa1ZBn9W0 | [
"everyone"
] | [
"Andrew Maas"
] | ICLR.cc/2013/conference | 2013 | review: On the topic of comparing word representations, quality as a function of word frequency is something I've often found to be a problem. For example, rare words are often important for sentiment analysis, but many word representation learners produce poor representations for all but the top 1000 or so most freque... |
3JiGJa1ZBn9W0 | The Expressive Power of Word Embeddings | [
"Yanqing Chen",
"Bryan P",
"Rami Al-Rfou",
"Steven Skiena"
] | We seek to better understand the difference in quality of the several publicly released embeddings. We propose several tasks that help to distinguish the characteristics of different embeddings. Our evaluation shows that embeddings are able to capture deep semantics even in the absence of sentence structure. Moreover, ... | [
"embeddings",
"quality",
"expressive power",
"word",
"characteristics",
"difference",
"several",
"several tasks",
"different embeddings",
"evaluation"
] | https://openreview.net/pdf?id=3JiGJa1ZBn9W0 | https://openreview.net/forum?id=3JiGJa1ZBn9W0 | -82Lr-SgHKmgJ | review | 1,360,855,200,000 | 3JiGJa1ZBn9W0 | [
"everyone"
] | [
"anonymous reviewer 406c"
] | ICLR.cc/2013/conference | 2013 | title: review of The Expressive Power of Word Embeddings
review: The paper proposes a method for evaluating real-valued vector embeddings of words based on several word and word-pair classification tasks. Though evaluation of such embeddings is an interesting and important problem, the experimental setup used it virtua... |
3JiGJa1ZBn9W0 | The Expressive Power of Word Embeddings | [
"Yanqing Chen",
"Bryan P",
"Rami Al-Rfou",
"Steven Skiena"
] | We seek to better understand the difference in quality of the several publicly released embeddings. We propose several tasks that help to distinguish the characteristics of different embeddings. Our evaluation shows that embeddings are able to capture deep semantics even in the absence of sentence structure. Moreover, ... | [
"embeddings",
"quality",
"expressive power",
"word",
"characteristics",
"difference",
"several",
"several tasks",
"different embeddings",
"evaluation"
] | https://openreview.net/pdf?id=3JiGJa1ZBn9W0 | https://openreview.net/forum?id=3JiGJa1ZBn9W0 | 224E22nDWH2Ia | review | 1,362,170,040,000 | 3JiGJa1ZBn9W0 | [
"everyone"
] | [
"anonymous reviewer af94"
] | ICLR.cc/2013/conference | 2013 | title: review of The Expressive Power of Word Embeddings
review: The submission considers 3 types of publicly-available distributed representations of words: produced by SENNA (Collobert and Weston, 11), the hierarchical bilinear language model (Mnih and Hinton, 2007) and Turian et al's (2010) implementation of the SEN... |
3JiGJa1ZBn9W0 | The Expressive Power of Word Embeddings | [
"Yanqing Chen",
"Bryan P",
"Rami Al-Rfou",
"Steven Skiena"
] | We seek to better understand the difference in quality of the several publicly released embeddings. We propose several tasks that help to distinguish the characteristics of different embeddings. Our evaluation shows that embeddings are able to capture deep semantics even in the absence of sentence structure. Moreover, ... | [
"embeddings",
"quality",
"expressive power",
"word",
"characteristics",
"difference",
"several",
"several tasks",
"different embeddings",
"evaluation"
] | https://openreview.net/pdf?id=3JiGJa1ZBn9W0 | https://openreview.net/forum?id=3JiGJa1ZBn9W0 | QrngQQuNMcQNZ | review | 1,362,416,940,000 | 3JiGJa1ZBn9W0 | [
"everyone"
] | [
"anonymous reviewer 24e2"
] | ICLR.cc/2013/conference | 2013 | title: review of The Expressive Power of Word Embeddings
review: This paper compares three available word vector embeddings on several tasks.
The paper lacks somewhat in novelty since the vectors are simply downloaded. This also makes their comparison somewhat harder since the final result is largely dependent on th... |
3JiGJa1ZBn9W0 | The Expressive Power of Word Embeddings | [
"Yanqing Chen",
"Bryan P",
"Rami Al-Rfou",
"Steven Skiena"
] | We seek to better understand the difference in quality of the several publicly released embeddings. We propose several tasks that help to distinguish the characteristics of different embeddings. Our evaluation shows that embeddings are able to capture deep semantics even in the absence of sentence structure. Moreover, ... | [
"embeddings",
"quality",
"expressive power",
"word",
"characteristics",
"difference",
"several",
"several tasks",
"different embeddings",
"evaluation"
] | https://openreview.net/pdf?id=3JiGJa1ZBn9W0 | https://openreview.net/forum?id=3JiGJa1ZBn9W0 | ggT4SGBq4iS57 | review | 1,362,457,800,000 | 3JiGJa1ZBn9W0 | [
"everyone"
] | [
"Yanqing Chen, Bryan Perozzi, Rami Al-Rfou, Steven Skiena"
] | ICLR.cc/2013/conference | 2013 | review: We thank the anonymous reviewers for their thoughtful comments. We have taken them into consideration, and have uploaded a revised manuscript to arXiv over the weekend. (it should be available in a few hours)
Specific changes include:
1. We have evaluated 3-class versions of our classifiers on the sentimen... |
3JiGJa1ZBn9W0 | The Expressive Power of Word Embeddings | [
"Yanqing Chen",
"Bryan P",
"Rami Al-Rfou",
"Steven Skiena"
] | We seek to better understand the difference in quality of the several publicly released embeddings. We propose several tasks that help to distinguish the characteristics of different embeddings. Our evaluation shows that embeddings are able to capture deep semantics even in the absence of sentence structure. Moreover, ... | [
"embeddings",
"quality",
"expressive power",
"word",
"characteristics",
"difference",
"several",
"several tasks",
"different embeddings",
"evaluation"
] | https://openreview.net/pdf?id=3JiGJa1ZBn9W0 | https://openreview.net/forum?id=3JiGJa1ZBn9W0 | 7AoBA7CD4T7Fu | review | 1,363,573,560,000 | 3JiGJa1ZBn9W0 | [
"everyone"
] | [
"Yanqing Chen, Bryan Perozzi, Rami Al-Rfou, Steven Skiena"
] | ICLR.cc/2013/conference | 2013 | review: We thank the anonymous reviewers for the reference to Huang et al (2012). We have added the embeddings generated by Huang to our comparison, and we believe that they are an interesting addition.
The latest version of our submission can be found on arXiv. |
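A minimal version of the kind of word-classification probe the abstract describes can be sketched as follows; the random vectors below are stand-ins for the released SENNA / HLBL / Turian / Huang embeddings, and a nearest-centroid rule replaces a trained classifier, so the words, labels, and dimension are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
d = 50
words = ["cat", "dog", "car", "bus"]
emb = {w: rng.standard_normal(d) for w in words}  # stand-ins for pretrained vectors
labels = {"cat": "animal", "dog": "animal", "car": "vehicle", "bus": "vehicle"}

def centroid(cls):
    # mean embedding of all words carrying this label
    return np.mean([emb[w] for w in words if labels[w] == cls], axis=0)

def predict(word):
    # assign the word to the class whose centroid is nearest in Euclidean distance
    cents = {c: centroid(c) for c in set(labels.values())}
    return min(cents, key=lambda c: np.linalg.norm(emb[word] - cents[c]))
```

With real pretrained vectors, accuracy on probes of this shape is what separates the embeddings being compared; a per-frequency-bin breakdown would also expose the rare-word weakness raised in an earlier comment.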
7hXs7GzQHo-QK | The Neural Representation Benchmark and its Evaluation on Brain and Machine | [
"Charles Cadieu",
"Ha Hong",
"Dan Yamins",
"Nicolas Pinto",
"Najib J. Majaj",
"James J. DiCarlo"
] | A key requirement for the development of effective learning representations is their evaluation and comparison to representations we know to be effective. In natural sensory domains, the community has viewed the brain as a source of inspiration and as an implicit benchmark for success. However, it has not been possible... | [
"evaluation",
"brain",
"neural representation benchmark",
"machine",
"representations",
"neural representation",
"benchmark",
"analysis",
"representational performance"
] | https://openreview.net/pdf?id=7hXs7GzQHo-QK | https://openreview.net/forum?id=7hXs7GzQHo-QK | bbQXGy3KgUrcP | comment | 1,363,649,700,000 | zzS1zF0bHj6V7 | [
"everyone"
] | [
"Charles Cadieu"
] | ICLR.cc/2013/conference | 2013 | reply: Thank you for your review and feedback. Here are some specific replies. (>-mark indicates quote from review)
> * The two macaque subjects in the study by Majaj et al (2012) are unlikely to have been exposed to images of 3 object categories in the dataset: cars, planes or other animals such as cows and elepha... |
7hXs7GzQHo-QK | The Neural Representation Benchmark and its Evaluation on Brain and Machine | [
"Charles Cadieu",
"Ha Hong",
"Dan Yamins",
"Nicolas Pinto",
"Najib J. Majaj",
"James J. DiCarlo"
] | A key requirement for the development of effective learning representations is their evaluation and comparison to representations we know to be effective. In natural sensory domains, the community has viewed the brain as a source of inspiration and as an implicit benchmark for success. However, it has not been possible... | [
"evaluation",
"brain",
"neural representation benchmark",
"machine",
"representations",
"neural representation",
"benchmark",
"analysis",
"representational performance"
] | https://openreview.net/pdf?id=7hXs7GzQHo-QK | https://openreview.net/forum?id=7hXs7GzQHo-QK | zzS1zF0bHj6V7 | review | 1,362,225,300,000 | 7hXs7GzQHo-QK | [
"everyone"
] | [
"anonymous reviewer 4738"
] | ICLR.cc/2013/conference | 2013 | title: review of The Neural Representation Benchmark and its Evaluation on Brain and Machine
review: This paper applies the methodology for 'kernel analysis of deep networks' (Montavon et al, 2011) to the neural code measured on two areas (V4 and IT) on the visual cortex of the macaque. It compares, on the same te... |
7hXs7GzQHo-QK | The Neural Representation Benchmark and its Evaluation on Brain and Machine | [
"Charles Cadieu",
"Ha Hong",
"Dan Yamins",
"Nicolas Pinto",
"Najib J. Majaj",
"James J. DiCarlo"
] | A key requirement for the development of effective learning representations is their evaluation and comparison to representations we know to be effective. In natural sensory domains, the community has viewed the brain as a source of inspiration and as an implicit benchmark for success. However, it has not been possible... | [
"evaluation",
"brain",
"neural representation benchmark",
"machine",
"representations",
"neural representation",
"benchmark",
"analysis",
"representational performance"
] | https://openreview.net/pdf?id=7hXs7GzQHo-QK | https://openreview.net/forum?id=7hXs7GzQHo-QK | RRN_zPMIpEzTn | comment | 1,363,649,460,000 | fD8BKQYEClkvP | [
"everyone"
] | [
"Charles Cadieu"
] | ICLR.cc/2013/conference | 2013 | reply: Thank you for your review and feedback. Here are some comments on your suggestions:
> The dataset used in the paper is composed of objects that are superimposed on an independent background. While the authors motivate their choice by controlling the factors of variation in the representation, it would be interest... |
7hXs7GzQHo-QK | The Neural Representation Benchmark and its Evaluation on Brain and Machine | [
"Charles Cadieu",
"Ha Hong",
"Dan Yamins",
"Nicolas Pinto",
"Najib J. Majaj",
"James J. DiCarlo"
] | A key requirement for the development of effective learning representations is their evaluation and comparison to representations we know to be effective. In natural sensory domains, the community has viewed the brain as a source of inspiration and as an implicit benchmark for success. However, it has not been possible... | [
"evaluation",
"brain",
"neural representation benchmark",
"machine",
"representations",
"neural representation",
"benchmark",
"analysis",
"representational performance"
] | https://openreview.net/pdf?id=7hXs7GzQHo-QK | https://openreview.net/forum?id=7hXs7GzQHo-QK | fD8BKQYEClkvP | review | 1,362,156,600,000 | 7hXs7GzQHo-QK | [
"everyone"
] | [
"anonymous reviewer b28a"
] | ICLR.cc/2013/conference | 2013 | title: review of The Neural Representation Benchmark and its Evaluation on Brain and Machine
review: The paper presents a benchmark for comparing representations of image data in brains and machines. The benchmark consists of looking at how the image categorization task is encoded in the leading kernel principal c... |
7hXs7GzQHo-QK | The Neural Representation Benchmark and its Evaluation on Brain and Machine | [
"Charles Cadieu",
"Ha Hong",
"Dan Yamins",
"Nicolas Pinto",
"Najib J. Majaj",
"James J. DiCarlo"
] | A key requirement for the development of effective learning representations is their evaluation and comparison to representations we know to be effective. In natural sensory domains, the community has viewed the brain as a source of inspiration and as an implicit benchmark for success. However, it has not been possible... | [
"evaluation",
"brain",
"neural representation benchmark",
"machine",
"representations",
"neural representation",
"benchmark",
"analysis",
"representational performance"
] | https://openreview.net/pdf?id=7hXs7GzQHo-QK | https://openreview.net/forum?id=7hXs7GzQHo-QK | E6HmsiyOvphK_ | review | 1,362,226,860,000 | 7hXs7GzQHo-QK | [
"everyone"
] | [
"anonymous reviewer d59c"
] | ICLR.cc/2013/conference | 2013 | title: review of The Neural Representation Benchmark and its Evaluation on Brain and Machine
review: This paper assesses feature learning algorithms by comparing their performance on an object classification task to that of Macaque IT and V4 neurons. The work provides a new dataset of images, an analysis method fo... |
7hXs7GzQHo-QK | The Neural Representation Benchmark and its Evaluation on Brain and Machine | [
"Charles Cadieu",
"Ha Hong",
"Dan Yamins",
"Nicolas Pinto",
"Najib J. Majaj",
"James J. DiCarlo"
] | A key requirement for the development of effective learning representations is their evaluation and comparison to representations we know to be effective. In natural sensory domains, the community has viewed the brain as a source of inspiration and as an implicit benchmark for success. However, it has not been possible... | [
"evaluation",
"brain",
"neural representation benchmark",
"machine",
"representations",
"neural representation",
"benchmark",
"analysis",
"representational performance"
] | https://openreview.net/pdf?id=7hXs7GzQHo-QK | https://openreview.net/forum?id=7hXs7GzQHo-QK | g05Ygn6IJZ0iX | comment | 1,363,649,760,000 | E6HmsiyOvphK_ | [
"everyone"
] | [
"Charles Cadieu"
] | ICLR.cc/2013/conference | 2013 | reply: Thank you for your review and feedback. (>-mark indicates quote from review)
> Because of the many design choices to be made in reducing neural data to a feature representation (the use of multi units rather than singular units, time averaging, short presentation times--many of which are discussed by the auth... |
PRuOK_LY_WPIq | Matrix Approximation under Local Low-Rank Assumption | [
"Joonseok Lee",
"Seungyeon Kim",
"Guy Lebanon",
"Yoram Singer"
] | Matrix approximation is a common tool in machine learning for building accurate prediction models for recommendation systems, text mining, and computer vision. A prevalent assumption in constructing matrix approximations is that the partially observed matrix is of low-rank. We propose a new matrix approximation model w... | [
"local",
"assumption matrix approximation",
"observed matrix",
"matrix approximation",
"common tool",
"machine learning",
"accurate prediction models",
"recommendation systems",
"text mining",
"computer vision"
] | https://openreview.net/pdf?id=PRuOK_LY_WPIq | https://openreview.net/forum?id=PRuOK_LY_WPIq | JNpPfPeAkDJqK | review | 1,363,672,020,000 | PRuOK_LY_WPIq | [
"everyone"
] | [
"simon bolivar"
] | ICLR.cc/2013/conference | 2013 | review: It has already been mentioned above, but I checked the longer version of the document posted at http://www.cc.gatech.edu/~lebanon/papers/lee_icml_2013.pdf
and there really is not enough discussion of the huge previous literature on locally low-rank representations, going back at least as far as
http://i... |
PRuOK_LY_WPIq | Matrix Approximation under Local Low-Rank Assumption | [
"Joonseok Lee",
"Seungyeon Kim",
"Guy Lebanon",
"Yoram Singer"
] | Matrix approximation is a common tool in machine learning for building accurate prediction models for recommendation systems, text mining, and computer vision. A prevalent assumption in constructing matrix approximations is that the partially observed matrix is of low-rank. We propose a new matrix approximation model w... | [
"local",
"assumption matrix approximation",
"observed matrix",
"matrix approximation",
"common tool",
"machine learning",
"accurate prediction models",
"recommendation systems",
"text mining",
"computer vision"
] | https://openreview.net/pdf?id=PRuOK_LY_WPIq | https://openreview.net/forum?id=PRuOK_LY_WPIq | CkupCgw-sY1o7 | review | 1,363,319,940,000 | PRuOK_LY_WPIq | [
"everyone"
] | [
"Joonseok Lee, Seungyeon Kim, Guy Lebanon, Yoram Singer"
] | ICLR.cc/2013/conference | 2013 | review: We appreciate both of your reviews and questions.
- Kernel width: we validated the kernel width experimentally. Specifically, we examined the following kernel types: Gaussian, triangular, and Epanechnikov kernels. We also experimented with the kernel width (0.6, 0.7, 0.8). We found that sufficiently large... |
PRuOK_LY_WPIq | Matrix Approximation under Local Low-Rank Assumption | [
"Joonseok Lee",
"Seungyeon Kim",
"Guy Lebanon",
"Yoram Singer"
] | Matrix approximation is a common tool in machine learning for building accurate prediction models for recommendation systems, text mining, and computer vision. A prevalent assumption in constructing matrix approximations is that the partially observed matrix is of low-rank. We propose a new matrix approximation model w... | [
"local",
"assumption matrix approximation",
"observed matrix",
"matrix approximation",
"common tool",
"machine learning",
"accurate prediction models",
"recommendation systems",
"text mining",
"computer vision"
] | https://openreview.net/pdf?id=PRuOK_LY_WPIq | https://openreview.net/forum?id=PRuOK_LY_WPIq | 4eqD-9JEKn4Ea | review | 1,362,123,600,000 | PRuOK_LY_WPIq | [
"everyone"
] | [
"anonymous reviewer 4b7c"
] | ICLR.cc/2013/conference | 2013 | title: review of Matrix Approximation under Local Low-Rank Assumption
review: Matrix Approximation under Local Low-Rank Assumption
Paper summary
This paper deals with low-rank matrix approximation/completion. To reconstruct a matrix element M_{i,j}, the proposed method performs a weighted low rank matrix approxim... |
PRuOK_LY_WPIq | Matrix Approximation under Local Low-Rank Assumption | [
"Joonseok Lee",
"Seungyeon Kim",
"Guy Lebanon",
"Yoram Singer"
] | Matrix approximation is a common tool in machine learning for building accurate prediction models for recommendation systems, text mining, and computer vision. A prevalent assumption in constructing matrix approximations is that the partially observed matrix is of low-rank. We propose a new matrix approximation model w... | [
"local",
"assumption matrix approximation",
"observed matrix",
"matrix approximation",
"common tool",
"machine learning",
"accurate prediction models",
"recommendation systems",
"text mining",
"computer vision"
] | https://openreview.net/pdf?id=PRuOK_LY_WPIq | https://openreview.net/forum?id=PRuOK_LY_WPIq | 9QsSQSzMpW9Ac | review | 1,362,191,520,000 | PRuOK_LY_WPIq | [
"everyone"
] | [
"anonymous reviewer 76ef"
] | ICLR.cc/2013/conference | 2013 | title: review of Matrix Approximation under Local Low-Rank Assumption
review: Approximation and completion of sparse matrices is a common task. As popularized by the Netflix prize, there are many possible approaches, and combinations of different styles of approach can lead to better predictions than individual methods... |
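As background for the "local low-rank" idea discussed in this record, here is the global low-rank baseline it generalizes, a truncated-SVD approximation of a matrix (the matrix sizes and seed are arbitrary illustration values); the paper's model instead fits many such approximations, each weighted by a smoothing kernel around an anchor entry:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((20, 4)) @ rng.standard_normal((4, 15))  # a rank-4 matrix

def low_rank(M, r):
    # best rank-r approximation in Frobenius norm (Eckart-Young theorem)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

errors = [np.linalg.norm(M - low_rank(M, r)) for r in (1, 2, 4)]
```

The approximation error shrinks monotonically with rank and vanishes at the true rank; the local model trades this single global rank constraint for several kernel-weighted local fits, which is what lets it capture matrices that are only low-rank in neighborhoods.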
BmOABAaTQDmt2 | A Semantic Matching Energy Function for Learning with Multi-relational Data | [
"Xavier Glorot",
"Antoine Bordes",
"Jason Weston",
"Yoshua Bengio"
] | Large-scale relational learning becomes crucial for handling the huge amounts of structured data generated daily in many application domains ranging from computational biology or information retrieval, to natural language processing. In this paper, we present a new neural network architecture designed to embed multi-re... | [
"data",
"graphs",
"relational learning",
"crucial",
"huge amounts",
"many application domains",
"computational biology",
"information retrieval",
"natural language processing"
] | https://openreview.net/pdf?id=BmOABAaTQDmt2 | https://openreview.net/forum?id=BmOABAaTQDmt2 | gL2tL3lwAfLw1 | review | 1,363,968,300,000 | BmOABAaTQDmt2 | [
"everyone"
] | [
"Xavier Glorot, Antoine Bordes, Jason Weston, Yoshua Bengio"
] | ICLR.cc/2013/conference | 2013 | review: We thank the reviewers for their comments.
It is true that our model should be compared with (Jenatton et al., NIPS 2012). This model was developed simultaneously with ours, which is why it was not included in the first version. We added this reference and their results (the LFM model) in a revised version o... |
BmOABAaTQDmt2 | A Semantic Matching Energy Function for Learning with Multi-relational Data | [
"Xavier Glorot",
"Antoine Bordes",
"Jason Weston",
"Yoshua Bengio"
] | Large-scale relational learning becomes crucial for handling the huge amounts of structured data generated daily in many application domains ranging from computational biology or information retrieval, to natural language processing. In this paper, we present a new neural network architecture designed to embed multi-re... | [
"data",
"graphs",
"relational learning",
"crucial",
"huge amounts",
"many application domains",
"computational biology",
"information retrieval",
"natural language processing"
] | https://openreview.net/pdf?id=BmOABAaTQDmt2 | https://openreview.net/forum?id=BmOABAaTQDmt2 | ibXkikDckabeu | review | 1,362,123,900,000 | BmOABAaTQDmt2 | [
"everyone"
] | [
"anonymous reviewer 428a"
] | ICLR.cc/2013/conference | 2013 | title: review of A Semantic Matching Energy Function for Learning with Multi-relational Data
review: Semantic Matching Energy Function for Learning with Multi-Relational Data
Paper Summary
This paper deals with learning an energy model over 3-way relationships. Each entity in the relation is associated a low... |
BmOABAaTQDmt2 | A Semantic Matching Energy Function for Learning with Multi-relational Data | [
"Xavier Glorot",
"Antoine Bordes",
"Jason Weston",
"Yoshua Bengio"
] | Large-scale relational learning becomes crucial for handling the huge amounts of structured data generated daily in many application domains ranging from computational biology or information retrieval, to natural language processing. In this paper, we present a new neural network architecture designed to embed multi-re... | [
"data",
"graphs",
"relational learning",
"crucial",
"huge amounts",
"many application domains",
"computational biology",
"information retrieval",
"natural language processing"
] | https://openreview.net/pdf?id=BmOABAaTQDmt2 | https://openreview.net/forum?id=BmOABAaTQDmt2 | fjenfiFhEZfLM | review | 1,362,379,680,000 | BmOABAaTQDmt2 | [
"everyone"
] | [
"anonymous reviewer cae2"
] | ICLR.cc/2013/conference | 2013 | title: review of A Semantic Matching Energy Function for Learning with Multi-relational Data
review: The paper proposes two functions for assigning energies to triples of entities, represented as vectors. One energy function essentially adds the vectors of the relations and the entities, while another energy f... |
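The "adds the vectors" formulation mentioned in the review can be sketched as a linear semantic matching energy; everything below (the matrix names `Wl1`, `Wl2`, `Wr1`, `Wr2`, the dimension, and the random values standing in for trained parameters) is an illustrative assumption, not the paper's trained model:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8
lhs, rel, rhs = rng.standard_normal((3, d))   # entity / relation embeddings
Wl1, Wl2, Wr1, Wr2 = rng.standard_normal((4, d, d))

def energy(lhs, rel, rhs):
    g_left = Wl1 @ lhs + Wl2 @ rel   # left entity combined with the relation
    g_right = Wr1 @ rhs + Wr2 @ rel  # right entity combined with the relation
    return -float(g_left @ g_right)  # low energy = plausible triple
```

Training would adjust the embeddings and the four matrices so that observed triples receive lower energy than corrupted ones, typically via a ranking loss.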
rOvg47Txgprkn | Learnable Pooling Regions for Image Classification | [
"Mateusz Malinowski",
"Mario Fritz"
] | From the early HMAX model to Spatial Pyramid Matching, pooling has played an important role in visual recognition pipelines. Spatial pooling, by grouping of local codes, equips these methods with a certain degree of robustness to translation and deformation yet preserving important spatial information. Despite the pred... | [
"model",
"schemes",
"learnable pooling regions",
"image classification",
"early hmax model",
"spatial pyramid matching",
"important role",
"visual recognition pipelines",
"spatial pooling"
] | https://openreview.net/pdf?id=rOvg47Txgprkn | https://openreview.net/forum?id=rOvg47Txgprkn | DtAvRX423kRIf | review | 1,361,903,280,000 | rOvg47Txgprkn | [
"everyone"
] | [
"Yoshua Bengio"
] | ICLR.cc/2013/conference | 2013 | review: This is an interesting investigation and I only have remarks to make regarding the CIFAR-10 and CIFAR-100 results and the rapidly moving state-of-the-art (SOTA). In particular, on CIFAR-100, the 56.29% accuracy is not state-of-the-art anymore (thankfully, our field is moving fast!). There was first the result ... |
rOvg47Txgprkn | Learnable Pooling Regions for Image Classification | [
"Mateusz Malinowski",
"Mario Fritz"
] | From the early HMAX model to Spatial Pyramid Matching, pooling has played an important role in visual recognition pipelines. Spatial pooling, by grouping of local codes, equips these methods with a certain degree of robustness to translation and deformation yet preserving important spatial information. Despite the pred... | [
"model",
"schemes",
"learnable pooling regions",
"image classification",
"early hmax model",
"spatial pyramid matching",
"important role",
"visual recognition pipelines",
"spatial pooling"
] | https://openreview.net/pdf?id=rOvg47Txgprkn | https://openreview.net/forum?id=rOvg47Txgprkn | 4w1kwHXszr4D8 | review | 1,362,138,060,000 | rOvg47Txgprkn | [
"everyone"
] | [
"anonymous reviewer 2426"
] | ICLR.cc/2013/conference | 2013 | title: review of Learnable Pooling Regions for Image Classification
review: This paper proposes a method to jointly train a pooling layer and a classifier in a supervised way.
The idea is to first extract some features and then train a 2 layer neural net by backpropagation (although in practice they use l-bfgs). The ... |
rOvg47Txgprkn | Learnable Pooling Regions for Image Classification | [
"Mateusz Malinowski",
"Mario Fritz"
] | From the early HMAX model to Spatial Pyramid Matching, pooling has played an important role in visual recognition pipelines. Spatial pooling, by grouping of local codes, equips these methods with a certain degree of robustness to translation and deformation yet preserving important spatial information. Despite the pred... | [
"model",
"schemes",
"learnable pooling regions",
"image classification",
"early hmax model",
"spatial pyramid matching",
"important role",
"visual recognition pipelines",
"spatial pooling"
] | https://openreview.net/pdf?id=rOvg47Txgprkn | https://openreview.net/forum?id=rOvg47Txgprkn | uEhruhQZrGeZw | review | 1,361,927,280,000 | rOvg47Txgprkn | [
"everyone"
] | [
"anonymous reviewer 45d8"
] | ICLR.cc/2013/conference | 2013 | review: PS. After reading some of the other comments, I see that I was wrong about the weights in the linear layer being possibly negative. I actually wasn't able to find the part of the paper that specifies this. I think in general the paper could be improved by being a little bit more straightforward. The method is v... |
rOvg47Txgprkn | Learnable Pooling Regions for Image Classification | [
"Mateusz Malinowski",
"Mario Fritz"
] | From the early HMAX model to Spatial Pyramid Matching, pooling has played an important role in visual recognition pipelines. Spatial pooling, by grouping of local codes, equips these methods with a certain degree of robustness to translation and deformation yet preserving important spatial information. Despite the pred... | [
"model",
"schemes",
"learnable pooling regions",
"image classification",
"early hmax model",
"spatial pyramid matching",
"important role",
"visual recognition pipelines",
"spatial pooling"
] | https://openreview.net/pdf?id=rOvg47Txgprkn | https://openreview.net/forum?id=rOvg47Txgprkn | 6tLOt5yk_I6cd | review | 1,363,741,140,000 | rOvg47Txgprkn | [
"everyone"
] | [
"anonymous reviewer 45d8"
] | ICLR.cc/2013/conference | 2013 | review: I'm not sure why the authors are claiming state of the art on CIFAR-10 in their response, because the paper doesn't make this claim and I don't see any update to the paper. The method does not actually have state of the art on CIFAR-10 even under the constraint that it follow the architecture considered in the ... |
rOvg47Txgprkn | Learnable Pooling Regions for Image Classification | [
"Mateusz Malinowski",
"Mario Fritz"
] | From the early HMAX model to Spatial Pyramid Matching, pooling has played an important role in visual recognition pipelines. Spatial pooling, by grouping of local codes, equips these methods with a certain degree of robustness to translation and deformation yet preserving important spatial information. Despite the pred... | [
"model",
"schemes",
"learnable pooling regions",
"image classification",
"early hmax model",
"spatial pyramid matching",
"important role",
"visual recognition pipelines",
"spatial pooling"
] | https://openreview.net/pdf?id=rOvg47Txgprkn | https://openreview.net/forum?id=rOvg47Txgprkn | ttaRtzuy2NtjF | review | 1,360,139,640,000 | rOvg47Txgprkn | [
"everyone"
] | [
"Yann LeCun"
] | ICLR.cc/2013/conference | 2013 | review: As far as I can tell, the algorithm in section 2.2 (pooling + linear classifier) is essentially a 2-layer neural net trained with backprop, except that the hidden layer is linear with positive weights.
The only innovation seems to be the weight spatial smoothness regularizer of section 2.3. I think this should... |
rOvg47Txgprkn | Learnable Pooling Regions for Image Classification | [
"Mateusz Malinowski",
"Mario Fritz"
] | From the early HMAX model to Spatial Pyramid Matching, pooling has played an important role in visual recognition pipelines. Spatial pooling, by grouping of local codes, equips these methods with a certain degree of robustness to translation and deformation yet preserving important spatial information. Despite the pred... | [
"model",
"schemes",
"learnable pooling regions",
"image classification",
"early hmax model",
"spatial pyramid matching",
"important role",
"visual recognition pipelines",
"spatial pooling"
] | https://openreview.net/pdf?id=rOvg47Txgprkn | https://openreview.net/forum?id=rOvg47Txgprkn | ddaBUNcnvHrLK | review | 1,361,922,660,000 | rOvg47Txgprkn | [
"everyone"
] | [
"Ian Goodfellow"
] | ICLR.cc/2013/conference | 2013 | review: This is a follow-up to Yoshua Bengio's comment. I'm lead author on the paper that he linked to.
One reason that Zeiler & Fergus got good results on CIFAR-100 with stochastic max pooling and my co-authors and I got good results on CIFAR-100 with maxout is that we were both using deep architectures. I think th... |
rOvg47Txgprkn | Learnable Pooling Regions for Image Classification | [
"Mateusz Malinowski",
"Mario Fritz"
] | From the early HMAX model to Spatial Pyramid Matching, pooling has played an important role in visual recognition pipelines. Spatial pooling, by grouping of local codes, equips these methods with a certain degree of robustness to translation and deformation yet preserving important spatial information. Despite the pred... | [
"model",
"schemes",
"learnable pooling regions",
"image classification",
"early hmax model",
"spatial pyramid matching",
"important role",
"visual recognition pipelines",
"spatial pooling"
] | https://openreview.net/pdf?id=rOvg47Txgprkn | https://openreview.net/forum?id=rOvg47Txgprkn | 0IOVI1hnXH0m- | review | 1,362,196,620,000 | rOvg47Txgprkn | [
"everyone"
] | [
"anonymous reviewer c1a0"
] | ICLR.cc/2013/conference | 2013 | title: review of Learnable Pooling Regions for Image Classification
review: The paper presents a method for training pooling regions in image classification pipelines (similar to those that employ bag-of-words or spatial pyramid models). The system uses a linear pooling matrix to parametrize the pooling units and foll... |
rOvg47Txgprkn | Learnable Pooling Regions for Image Classification | [
"Mateusz Malinowski",
"Mario Fritz"
] | From the early HMAX model to Spatial Pyramid Matching, pooling has played an important role in visual recognition pipelines. Spatial pooling, by grouping of local codes, equips these methods with a certain degree of robustness to translation and deformation yet preserving important spatial information. Despite the pred... | [
"model",
"schemes",
"learnable pooling regions",
"image classification",
"early hmax model",
"spatial pyramid matching",
"important role",
"visual recognition pipelines",
"spatial pooling"
] | https://openreview.net/pdf?id=rOvg47Txgprkn | https://openreview.net/forum?id=rOvg47Txgprkn | L9s74sx8Ka9cP | comment | 1,363,751,520,000 | 6tLOt5yk_I6cd | [
"everyone"
] | [
"Mateusz Malinowski"
] | ICLR.cc/2013/conference | 2013 | reply: As Table 1 shows, our method gives similar results to Jia's method (79.6% and 80.17% accuracy). If we allow transfer between datasets, our method gives slightly better results (Table 5 reports 80.35% test accuracy for our method).
We could weight features with real-valued weights constrained to unit cube, and ... |
rOvg47Txgprkn | Learnable Pooling Regions for Image Classification | [
"Mateusz Malinowski",
"Mario Fritz"
] | From the early HMAX model to Spatial Pyramid Matching, pooling has played an important role in visual recognition pipelines. Spatial pooling, by grouping of local codes, equips these methods with a certain degree of robustness to translation and deformation yet preserving important spatial information. Despite the pred... | [
"model",
"schemes",
"learnable pooling regions",
"image classification",
"early hmax model",
"spatial pyramid matching",
"important role",
"visual recognition pipelines",
"spatial pooling"
] | https://openreview.net/pdf?id=rOvg47Txgprkn | https://openreview.net/forum?id=rOvg47Txgprkn | bYfTY-ABwrbB2 | review | 1,363,737,660,000 | rOvg47Txgprkn | [
"everyone"
] | [
"Mateusz Malinowski, Mario Fritz"
] | ICLR.cc/2013/conference | 2013 | review: We thank all the reviewers for their comments.
We will include suggested papers on related work and origins of pooling architectures as well as improvement on the state of the art that occurred in the meanwhile.
The reviewers acknowledge our analysis of regularization schemes to learn weighted pooling units t... |
rOvg47Txgprkn | Learnable Pooling Regions for Image Classification | [
"Mateusz Malinowski",
"Mario Fritz"
] | From the early HMAX model to Spatial Pyramid Matching, pooling has played an important role in visual recognition pipelines. Spatial pooling, by grouping of local codes, equips these methods with a certain degree of robustness to translation and deformation yet preserving important spatial information. Despite the pred... | [
"model",
"schemes",
"learnable pooling regions",
"image classification",
"early hmax model",
"spatial pyramid matching",
"important role",
"visual recognition pipelines",
"spatial pooling"
] | https://openreview.net/pdf?id=rOvg47Txgprkn | https://openreview.net/forum?id=rOvg47Txgprkn | xEdmrekMJsvCj | review | 1,361,914,920,000 | rOvg47Txgprkn | [
"everyone"
] | [
"anonymous reviewer 45d8"
] | ICLR.cc/2013/conference | 2013 | title: review of Learnable Pooling Regions for Image Classification
review: Summary:
The paper proposes to replace the final stages of Coates and Ng's CIFAR-10 classification pipeline. In place of the hand-designed 3x3 mean pooling layer, the paper proposes to learn a pooling layer. In place of the SVM, the paper prop... |
rOvg47Txgprkn | Learnable Pooling Regions for Image Classification | [
"Mateusz Malinowski",
"Mario Fritz"
] | From the early HMAX model to Spatial Pyramid Matching, pooling has played an important role in visual recognition pipelines. Spatial pooling, by grouping of local codes, equips these methods with a certain degree of robustness to translation and deformation yet preserving important spatial information. Despite the pred... | [
"model",
"schemes",
"learnable pooling regions",
"image classification",
"early hmax model",
"spatial pyramid matching",
"important role",
"visual recognition pipelines",
"spatial pooling"
] | https://openreview.net/pdf?id=rOvg47Txgprkn | https://openreview.net/forum?id=rOvg47Txgprkn | mdD47o8J4hmr1 | review | 1,360,973,580,000 | rOvg47Txgprkn | [
"everyone"
] | [
"Mateusz Malinowski"
] | ICLR.cc/2013/conference | 2013 | review: Our paper addresses the shortcomings of fixed and data-independent pooling regions in architectures such as Spatial Pyramid Matching [Lazebnik et. al., 'Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories', CVPR 2006], where dictionary-based features are pooled over large ... |
5Qbn4E0Njz4Si | Hierarchical Data Representation Model - Multi-layer NMF | [
"Hyun-Ah Song",
"Soo-Young Lee"
] | Understanding and representing the underlying structure of feature hierarchies present in complex data in intuitively understandable manner is an important issue. In this paper, we propose a data representation model that demonstrates hierarchical feature learning using NMF with sparsity constraint. We stack simple uni... | [
"complex data",
"network",
"feature hierarchies",
"understandable manner",
"hierarchical feature",
"nmf",
"nmf understanding",
"underlying structure"
] | https://openreview.net/pdf?id=5Qbn4E0Njz4Si | https://openreview.net/forum?id=5Qbn4E0Njz4Si | Oel6vaaN-neNQ | review | 1,362,279,120,000 | 5Qbn4E0Njz4Si | [
"everyone"
] | [
"anonymous reviewer 7984"
] | ICLR.cc/2013/conference | 2013 | title: review of Hierarchical Data Representation Model - Multi-layer NMF
review: The paper proposes to stack NMF models on top of each other. At each level, a non-linear function of normalized decomposition coefficients is used and decomposed using another NMF.
This is essentially an instance of a deep belief netwo... |
5Qbn4E0Njz4Si | Hierarchical Data Representation Model - Multi-layer NMF | [
"Hyun-Ah Song",
"Soo-Young Lee"
] | Understanding and representing the underlying structure of feature hierarchies present in complex data in intuitively understandable manner is an important issue. In this paper, we propose a data representation model that demonstrates hierarchical feature learning using NMF with sparsity constraint. We stack simple uni... | [
"complex data",
"network",
"feature hierarchies",
"understandable manner",
"hierarchical feature",
"nmf",
"nmf understanding",
"underlying structure"
] | https://openreview.net/pdf?id=5Qbn4E0Njz4Si | https://openreview.net/forum?id=5Qbn4E0Njz4Si | ZIE1IP5KlJTK- | review | 1,362,127,980,000 | 5Qbn4E0Njz4Si | [
"everyone"
] | [
"anonymous reviewer d1c1"
] | ICLR.cc/2013/conference | 2013 | title: review of Hierarchical Data Representation Model - Multi-layer NMF
review: This paper proposes a multilayer architecture based upon stacking non-negative matrix factorization modules and fine-tuning the entire architecture with reconstruction error. Experiments on text classification and MNIST reconstruction dem... |
5Qbn4E0Njz4Si | Hierarchical Data Representation Model - Multi-layer NMF | [
"Hyun-Ah Song",
"Soo-Young Lee"
] | Understanding and representing the underlying structure of feature hierarchies present in complex data in intuitively understandable manner is an important issue. In this paper, we propose a data representation model that demonstrates hierarchical feature learning using NMF with sparsity constraint. We stack simple uni... | [
"complex data",
"network",
"feature hierarchies",
"understandable manner",
"hierarchical feature",
"nmf",
"nmf understanding",
"underlying structure"
] | https://openreview.net/pdf?id=5Qbn4E0Njz4Si | https://openreview.net/forum?id=5Qbn4E0Njz4Si | -B7o-Yy0XjB0_ | comment | 1,363,334,160,000 | Oel6vaaN-neNQ | [
"everyone"
] | [
"Hyun-Ah Song"
] | ICLR.cc/2013/conference | 2013 | reply: - Points on the con that experimental results are not great:
As Figure 2 in the paper shows, the proposed hierarchical feature extraction method results in much better classification and reconstruction performance, especially for a small number of features.
It can be interpreted that, by taking hierarchi... |
5Qbn4E0Njz4Si | Hierarchical Data Representation Model - Multi-layer NMF | [
"Hyun-Ah Song",
"Soo-Young Lee"
] | Understanding and representing the underlying structure of feature hierarchies present in complex data in intuitively understandable manner is an important issue. In this paper, we propose a data representation model that demonstrates hierarchical feature learning using NMF with sparsity constraint. We stack simple uni... | [
"complex data",
"network",
"feature hierarchies",
"understandable manner",
"hierarchical feature",
"nmf",
"nmf understanding",
"underlying structure"
] | https://openreview.net/pdf?id=5Qbn4E0Njz4Si | https://openreview.net/forum?id=5Qbn4E0Njz4Si | APRX62OnXa6nY | comment | 1,363,255,140,000 | ZIE1IP5KlJTK- | [
"everyone"
] | [
"Hyun-Ah Song"
] | ICLR.cc/2013/conference | 2013 | reply: - Description of proposed architecture:
Sorry about the insufficient description of the proposed architecture! We had to fit all of the content into 3 pages. We added more details on the architecture of our network, including the actual computations involved in implementing the network, in the Appendix.
- Compariso... |
5Qbn4E0Njz4Si | Hierarchical Data Representation Model - Multi-layer NMF | [
"Hyun-Ah Song",
"Soo-Young Lee"
] | Understanding and representing the underlying structure of feature hierarchies present in complex data in intuitively understandable manner is an important issue. In this paper, we propose a data representation model that demonstrates hierarchical feature learning using NMF with sparsity constraint. We stack simple uni... | [
"complex data",
"network",
"feature hierarchies",
"understandable manner",
"hierarchical feature",
"nmf",
"nmf understanding",
"underlying structure"
] | https://openreview.net/pdf?id=5Qbn4E0Njz4Si | https://openreview.net/forum?id=5Qbn4E0Njz4Si | CC-TCptvxlrvi | comment | 1,363,255,380,000 | Oel6vaaN-neNQ | [
"everyone"
] | [
"Hyun-Ah Song"
] | ICLR.cc/2013/conference | 2013 | reply: - Details on the specifics of the method:
Sorry for the insufficient explanation of the method. We had to fit into the 3-page limit. We added a detailed explanation of the method and computation in the Appendix.
- Hierarchies by topic models [A,B]:
Thanks for the recommendation! In this paper, we focused on the ge... |
4UGuUZWZmi4Ze | Feature grouping from spatially constrained multiplicative interaction | [
"Felix Bauer",
"Roland Memisevic"
] | We present a feature learning model that learns to encode relationships between images. The model is defined as a Gated Boltzmann Machine, which is constrained such that hidden units that are nearby in space can gate each other's connections. We show how frequency/orientation 'columns' as well as topographic filter map... | [
"model",
"feature",
"multiplicative interaction feature",
"multiplicative interaction",
"relationships",
"images",
"gated boltzmann machine",
"units",
"space",
"connections"
] | https://openreview.net/pdf?id=4UGuUZWZmi4Ze | https://openreview.net/forum?id=4UGuUZWZmi4Ze | D3uj2h4TUE2ce | review | 1,363,179,420,000 | 4UGuUZWZmi4Ze | [
"everyone"
] | [
"Felix Bauer"
] | ICLR.cc/2013/conference | 2013 | review: Points raised by reviewers:
reviewer 43a2:
(1) Good classification of rotations and scale is reported in Table 1; unfortunately these appear to be on toy, not natural, images. Impressive grouping of complex transformations such as translations and rotations is shown in Figure 3.
(2) While the gabors learne... |
4UGuUZWZmi4Ze | Feature grouping from spatially constrained multiplicative interaction | [
"Felix Bauer",
"Roland Memisevic"
] | We present a feature learning model that learns to encode relationships between images. The model is defined as a Gated Boltzmann Machine, which is constrained such that hidden units that are nearby in space can gate each other's connections. We show how frequency/orientation 'columns' as well as topographic filter map... | [
"model",
"feature",
"multiplicative interaction feature",
"multiplicative interaction",
"relationships",
"images",
"gated boltzmann machine",
"units",
"space",
"connections"
] | https://openreview.net/pdf?id=4UGuUZWZmi4Ze | https://openreview.net/forum?id=4UGuUZWZmi4Ze | VlvAlDIDt_Sa0 | review | 1,362,171,300,000 | 4UGuUZWZmi4Ze | [
"everyone"
] | [
"anonymous reviewer ea89"
] | ICLR.cc/2013/conference | 2013 | title: review of Feature grouping from spatially constrained multiplicative interaction
review: The model presented in this paper is an extension of a previous model that extracts features from images, and these features are multiplied together to extract motion information (or other relation between two images). The n... |
4UGuUZWZmi4Ze | Feature grouping from spatially constrained multiplicative interaction | [
"Felix Bauer",
"Roland Memisevic"
] | We present a feature learning model that learns to encode relationships between images. The model is defined as a Gated Boltzmann Machine, which is constrained such that hidden units that are nearby in space can gate each other's connections. We show how frequency/orientation 'columns' as well as topographic filter map... | [
"model",
"feature",
"multiplicative interaction feature",
"multiplicative interaction",
"relationships",
"images",
"gated boltzmann machine",
"units",
"space",
"connections"
] | https://openreview.net/pdf?id=4UGuUZWZmi4Ze | https://openreview.net/forum?id=4UGuUZWZmi4Ze | yTWI4b3EnB4CU | review | 1,361,968,140,000 | 4UGuUZWZmi4Ze | [
"everyone"
] | [
"anonymous reviewer 43a2"
] | ICLR.cc/2013/conference | 2013 | title: review of Feature grouping from spatially constrained multiplicative interaction
review: This paper introduces a group-gated Boltzmann machine for learning the transformations between a pair of images more efficiently than with a standard gated Boltzmann machine. Experiments show the model learns phase invariant... |
4UGuUZWZmi4Ze | Feature grouping from spatially constrained multiplicative interaction | [
"Felix Bauer",
"Roland Memisevic"
] | We present a feature learning model that learns to encode relationships between images. The model is defined as a Gated Boltzmann Machine, which is constrained such that hidden units that are nearby in space can gate each other's connections. We show how frequency/orientation 'columns' as well as topographic filter map... | [
"model",
"feature",
"multiplicative interaction feature",
"multiplicative interaction",
"relationships",
"images",
"gated boltzmann machine",
"units",
"space",
"connections"
] | https://openreview.net/pdf?id=4UGuUZWZmi4Ze | https://openreview.net/forum?id=4UGuUZWZmi4Ze | ah5kV2s_ULa20 | review | 1,362,214,680,000 | 4UGuUZWZmi4Ze | [
"everyone"
] | [
"anonymous reviewer cce5"
] | ICLR.cc/2013/conference | 2013 | title: review of Feature grouping from spatially constrained multiplicative interaction
review: This paper proposes a novel generalization of the Gated Boltzmann Machine. Unlike a traditional GBM, this model is constrained in a way that hidden units that are grouped together (groupings defined a priori) can gate each o... |
TT0bFo9VZpFWg | Big Neural Networks Waste Capacity | [
"Yann Dauphin",
"Yoshua Bengio"
] | This article exposes the failure of some big neural networks to leverage added capacity to reduce underfitting. Past research suggest diminishing returns when increasing the size of neural networks. Our experiments on ImageNet LSVRC-2010 show that this may be due to the fact that bigger networks underfit the training o... | [
"big neural networks",
"optimization",
"capacity",
"article",
"failure",
"added capacity",
"past research suggest",
"returns",
"size"
] | https://openreview.net/pdf?id=TT0bFo9VZpFWg | https://openreview.net/forum?id=TT0bFo9VZpFWg | ChpzCSZ9zqCTR | review | 1,361,967,300,000 | TT0bFo9VZpFWg | [
"everyone"
] | [
"anonymous reviewer 9741"
] | ICLR.cc/2013/conference | 2013 | title: review of Big Neural Networks Waste Capacity
review: This paper shows the effects of under-fitting in a neural network as the size of a single neural network layer increases. The overall model is composed of SIFT extraction, k-means, and this single hidden layer neural network. The paper suggests that this under-f... |
TT0bFo9VZpFWg | Big Neural Networks Waste Capacity | [
"Yann Dauphin",
"Yoshua Bengio"
] | This article exposes the failure of some big neural networks to leverage added capacity to reduce underfitting. Past research suggest diminishing returns when increasing the size of neural networks. Our experiments on ImageNet LSVRC-2010 show that this may be due to the fact that bigger networks underfit the training o... | [
"big neural networks",
"optimization",
"capacity",
"article",
"failure",
"added capacity",
"past research suggest",
"returns",
"size"
] | https://openreview.net/pdf?id=TT0bFo9VZpFWg | https://openreview.net/forum?id=TT0bFo9VZpFWg | MvRrJo2NhwMOE | review | 1,362,019,740,000 | TT0bFo9VZpFWg | [
"everyone"
] | [
"anonymous reviewer b2da"
] | ICLR.cc/2013/conference | 2013 | title: review of Big Neural Networks Waste Capacity
review: The net gets bigger, yet keeps underfitting the training set. Authors suspect that gradient descent is the culprit. An interesting study! |
TT0bFo9VZpFWg | Big Neural Networks Waste Capacity | [
"Yann Dauphin",
"Yoshua Bengio"
] | This article exposes the failure of some big neural networks to leverage added capacity to reduce underfitting. Past research suggest diminishing returns when increasing the size of neural networks. Our experiments on ImageNet LSVRC-2010 show that this may be due to the fact that bigger networks underfit the training o... | [
"big neural networks",
"optimization",
"capacity",
"article",
"failure",
"added capacity",
"past research suggest",
"returns",
"size"
] | https://openreview.net/pdf?id=TT0bFo9VZpFWg | https://openreview.net/forum?id=TT0bFo9VZpFWg | PPZdA2YqSgAq6 | review | 1,362,402,480,000 | TT0bFo9VZpFWg | [
"everyone"
] | [
"George Dahl"
] | ICLR.cc/2013/conference | 2013 | review: The authors speculate that the inability of additional units to reduce
the training error beyond a certain point in their experiments might
be because 'networks with more capacity have more local minima.' How
can this claim about local minima be reconciled with theoretical
asymptotic results that show that,... |
TT0bFo9VZpFWg | Big Neural Networks Waste Capacity | [
"Yann Dauphin",
"Yoshua Bengio"
] | This article exposes the failure of some big neural networks to leverage added capacity to reduce underfitting. Past research suggest diminishing returns when increasing the size of neural networks. Our experiments on ImageNet LSVRC-2010 show that this may be due to the fact that bigger networks underfit the training o... | [
"big neural networks",
"optimization",
"capacity",
"article",
"failure",
"added capacity",
"past research suggest",
"returns",
"size"
] | https://openreview.net/pdf?id=TT0bFo9VZpFWg | https://openreview.net/forum?id=TT0bFo9VZpFWg | 5w24FePB4ywro | review | 1,362,373,200,000 | TT0bFo9VZpFWg | [
"everyone"
] | [
"Andrew Maas"
] | ICLR.cc/2013/conference | 2013 | review: Interesting topic. Another potential explanation for the diminishing return is the already good performance of networks with 5k hidden units. It could be that last bit of training performance requires fitting an especially difficult / nonlinear function and thus even 15k units in a single layer MLP can't do it.... |
TT0bFo9VZpFWg | Big Neural Networks Waste Capacity | [
"Yann Dauphin",
"Yoshua Bengio"
] | This article exposes the failure of some big neural networks to leverage added capacity to reduce underfitting. Past research suggest diminishing returns when increasing the size of neural networks. Our experiments on ImageNet LSVRC-2010 show that this may be due to the fact that bigger networks underfit the training o... | [
"big neural networks",
"optimization",
"capacity",
"article",
"failure",
"added capacity",
"past research suggest",
"returns",
"size"
] | https://openreview.net/pdf?id=TT0bFo9VZpFWg | https://openreview.net/forum?id=TT0bFo9VZpFWg | CqF6fhZ9QLCrY | comment | 1,363,311,720,000 | PPZdA2YqSgAq6 | [
"everyone"
] | [
"Yann Dauphin"
] | ICLR.cc/2013/conference | 2013 | reply: Thanks to your comment, we have clarified our argument. The main point is not that the training error does not fall beyond a certain point, the main point is that there are *quickly diminishing returns for added number of hidden units* to the point where adding capacity is almost useless. Since measuring VC-dim... |
TT0bFo9VZpFWg | Big Neural Networks Waste Capacity | [
"Yann Dauphin",
"Yoshua Bengio"
] | This article exposes the failure of some big neural networks to leverage added capacity to reduce underfitting. Past research suggest diminishing returns when increasing the size of neural networks. Our experiments on ImageNet LSVRC-2010 show that this may be due to the fact that bigger networks underfit the training o... | [
"big neural networks",
"optimization",
"capacity",
"article",
"failure",
"added capacity",
"past research suggest",
"returns",
"size"
] | https://openreview.net/pdf?id=TT0bFo9VZpFWg | https://openreview.net/forum?id=TT0bFo9VZpFWg | JqnQqLEIc6q5e | comment | 1,363,644,660,000 | wjvpl_b23glfA | [
"everyone"
] | [
"Yann Dauphin"
] | ICLR.cc/2013/conference | 2013 | reply: Thanks for your suggestion. We didn't plot the cross-entropy as it is harder to interpret, but it might be interesting in comparison with the training error curve. |
TT0bFo9VZpFWg | Big Neural Networks Waste Capacity | [
"Yann Dauphin",
"Yoshua Bengio"
] | This article exposes the failure of some big neural networks to leverage added capacity to reduce underfitting. Past research suggest diminishing returns when increasing the size of neural networks. Our experiments on ImageNet LSVRC-2010 show that this may be due to the fact that bigger networks underfit the training o... | [
"big neural networks",
"optimization",
"capacity",
"article",
"failure",
"added capacity",
"past research suggest",
"returns",
"size"
] | https://openreview.net/pdf?id=TT0bFo9VZpFWg | https://openreview.net/forum?id=TT0bFo9VZpFWg | IyZiWpNTixIVv | comment | 1,363,311,660,000 | 5w24FePB4ywro | [
"everyone"
] | [
"Yann Dauphin"
] | ICLR.cc/2013/conference | 2013 | reply: Interesting point, the asymptote in Figure 1 could be explained by the optimization problem becoming more difficult. However, this does not conflict with our argument. We have clarified this in the paper. Our argument relies on Figure 2, which shows the return on investement for adding units. We see that the ROI... |
TT0bFo9VZpFWg | Big Neural Networks Waste Capacity | [
"Yann Dauphin",
"Yoshua Bengio"
] | This article exposes the failure of some big neural networks to leverage added capacity to reduce underfitting. Past research suggest diminishing returns when increasing the size of neural networks. Our experiments on ImageNet LSVRC-2010 show that this may be due to the fact that bigger networks underfit the training o... | [
"big neural networks",
"optimization",
"capacity",
"article",
"failure",
"added capacity",
"past research suggest",
"returns",
"size"
] | https://openreview.net/pdf?id=TT0bFo9VZpFWg | https://openreview.net/forum?id=TT0bFo9VZpFWg | wjvpl_b23glfA | comment | 1,363,381,980,000 | IyZiWpNTixIVv | [
"everyone"
] | [
"Marc Shivers"
] | ICLR.cc/2013/conference | 2013 | reply: Have you looked at the decrease in the cross-entropy optimization objective, rather than training error, as a function of number of hidden units? It would be interesting to see a version of Figure 2 that compared the decrease in cross-entropy as you add hidden units with the decrease you would get if your addit... |
TT0bFo9VZpFWg | Big Neural Networks Waste Capacity | [
"Yann Dauphin",
"Yoshua Bengio"
] | This article exposes the failure of some big neural networks to leverage added capacity to reduce underfitting. Past research suggest diminishing returns when increasing the size of neural networks. Our experiments on ImageNet LSVRC-2010 show that this may be due to the fact that bigger networks underfit the training o... | [
"big neural networks",
"optimization",
"capacity",
"article",
"failure",
"added capacity",
"past research suggest",
"returns",
"size"
] | https://openreview.net/pdf?id=TT0bFo9VZpFWg | https://openreview.net/forum?id=TT0bFo9VZpFWg | URyDlbBNoEUIn | comment | 1,363,311,600,000 | ChpzCSZ9zqCTR | [
"everyone"
] | [
"Yann Dauphin"
] | ICLR.cc/2013/conference | 2013 | reply: The 3 assumptions can be thought of as 3 conditions that are necessary for the model to be able to fit ImageNet. In traditional experiments this would be true; however, in this case we are only monitoring *training* error. To learn the training set, only one assumption is necessary: no training image has an exa... |
g6Jl6J3aMs6a7 | Recurrent Online Clustering as a Spatio-Temporal Feature Extractor in
DeSTIN | [
"Steven R. Young",
"Itamar Arel"
] | This paper presents a basic enhancement to the DeSTIN deep learning architecture by replacing the explicitly calculated transition tables that are used to capture temporal features with a simpler, more scalable mechanism. This mechanism uses feedback of state information to cluster over a space comprised of both the sp... | [
"feature extractor",
"recurrent online clustering",
"destin",
"basic enhancement",
"transition tables",
"temporal features",
"simpler",
"scalable mechanism"
] | https://openreview.net/pdf?id=g6Jl6J3aMs6a7 | https://openreview.net/forum?id=g6Jl6J3aMs6a7 | GGdathbFl15ug | review | 1,362,391,440,000 | g6Jl6J3aMs6a7 | [
"everyone"
] | [
"anonymous reviewer 675f"
] | ICLR.cc/2013/conference | 2013 | title: review of Recurrent Online Clustering as a Spatio-Temporal Feature Extractor in DeSTIN
review: The paper presents an extension to the author's prior 'DeSTIN' framework for spatio-temporal clustering. The lookup table that was previously used for state transitions is replaced by a feedback, output-to-input l... |
g6Jl6J3aMs6a7 | Recurrent Online Clustering as a Spatio-Temporal Feature Extractor in DeSTIN | [
"Steven R. Young",
"Itamar Arel"
] | This paper presents a basic enhancement to the DeSTIN deep learning architecture by replacing the explicitly calculated transition tables that are used to capture temporal features with a simpler, more scalable mechanism. This mechanism uses feedback of state information to cluster over a space comprised of both the sp... | [
"feature extractor",
"recurrent online clustering",
"destin",
"basic enhancement",
"transition tables",
"temporal features",
"simpler",
"scalable mechanism"
] | https://openreview.net/pdf?id=g6Jl6J3aMs6a7 | https://openreview.net/forum?id=g6Jl6J3aMs6a7 | 8BGL8F0WLpBcE | review | 1,362,163,920,000 | g6Jl6J3aMs6a7 | [
"everyone"
] | [
"anonymous reviewer 6b68"
] | ICLR.cc/2013/conference | 2013 | title: review of Recurrent Online Clustering as a Spatio-Temporal Feature Extractor in DeSTIN
review: Improves the DeSTIN architecture by the same authors.
They write on MNIST:
A classification accuracy of 98.71% was achieved which is comparable to results using the first-generation DeSTIN architecture [1] ... |
eQWJec0ursynH | Barnes-Hut-SNE | [
"Laurens van der Maaten"
] | The paper presents an O(N log N)-implementation of t-SNE -- an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots and that normally runs in O(N^2). The new implementation uses vantage-point trees to compute sparse pairwise similarities between the input data object... | [
"algorithm",
"n log n",
"embedding technique",
"visualization",
"data",
"scatter plots",
"new implementation",
"trees",
"sparse pairwise similarities",
"input data objects"
] | https://openreview.net/pdf?id=eQWJec0ursynH | https://openreview.net/forum?id=eQWJec0ursynH | 24bs4th0sfgwE | review | 1,362,833,520,000 | eQWJec0ursynH | [
"everyone"
] | [
"anonymous reviewer c262"
] | ICLR.cc/2013/conference | 2013 | title: review of Barnes-Hut-SNE
review: The paper addresses the problem of low-dimensional data embedding for visualization purposes via stochastic neighbor embedding, in which Euclidean dissimilarities in the data space are modulated by the Gaussian kernel, and a configuration of points in the low-dimensional embeddin... |
eQWJec0ursynH | Barnes-Hut-SNE | [
"Laurens van der Maaten"
] | The paper presents an O(N log N)-implementation of t-SNE -- an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots and that normally runs in O(N^2). The new implementation uses vantage-point trees to compute sparse pairwise similarities between the input data object... | [
"algorithm",
"n log n",
"embedding technique",
"visualization",
"data",
"scatter plots",
"new implementation",
"trees",
"sparse pairwise similarities",
"input data objects"
] | https://openreview.net/pdf?id=eQWJec0ursynH | https://openreview.net/forum?id=eQWJec0ursynH | DyHSDHfKmbDPM | review | 1,362,421,080,000 | eQWJec0ursynH | [
"everyone"
] | [
"Laurens van der Maaten"
] | ICLR.cc/2013/conference | 2013 | review: I have experimented with dual-tree variants of my algorithm (which required only trivial changes in the existing code), experimenting with both quadtrees and kd-trees as the underlying tree structures. Perhaps surprisingly, the dual-tree algorithm has approximately the same accuracy-speed trade-off as the Barne... |
eQWJec0ursynH | Barnes-Hut-SNE | [
"Laurens van der Maaten"
] | The paper presents an O(N log N)-implementation of t-SNE -- an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots and that normally runs in O(N^2). The new implementation uses vantage-point trees to compute sparse pairwise similarities between the input data object... | [
"algorithm",
"n log n",
"embedding technique",
"visualization",
"data",
"scatter plots",
"new implementation",
"trees",
"sparse pairwise similarities",
"input data objects"
] | https://openreview.net/pdf?id=eQWJec0ursynH | https://openreview.net/forum?id=eQWJec0ursynH | Dkj3DFf4GZJPh | review | 1,362,177,000,000 | eQWJec0ursynH | [
"everyone"
] | [
"anonymous reviewer d9db"
] | ICLR.cc/2013/conference | 2013 | title: review of Barnes-Hut-SNE
review: Stochastic neighbour embedding (SNE) is a sound, probabilistic method for dimensionality reduction. One of its limitations is that its complexity is O(N^2), where N is the, typically large, number of data points. To surmount this limitation, this paper proposes computational...
eQWJec0ursynH | Barnes-Hut-SNE | [
"Laurens van der Maaten"
] | The paper presents an O(N log N)-implementation of t-SNE -- an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots and that normally runs in O(N^2). The new implementation uses vantage-point trees to compute sparse pairwise similarities between the input data object... | [
"algorithm",
"n log n",
"embedding technique",
"visualization",
"data",
"scatter plots",
"new implementation",
"trees",
"sparse pairwise similarities",
"input data objects"
] | https://openreview.net/pdf?id=eQWJec0ursynH | https://openreview.net/forum?id=eQWJec0ursynH | TTxAqxZdhgIV0 | review | 1,362,330,660,000 | eQWJec0ursynH | [
"everyone"
] | [
"Laurens van der Maaten"
] | ICLR.cc/2013/conference | 2013 | review: Thanks a bunch for these insightful reviews and for the useful pointers to related work (some of which I was not aware of)!
In preliminary experiments, I compared locality-sensitive hashing and vantage-point trees in the initial nearest-neighbor (in the high-dimensional space). I found vantage-point trees to... |
eQWJec0ursynH | Barnes-Hut-SNE | [
"Laurens van der Maaten"
] | The paper presents an O(N log N)-implementation of t-SNE -- an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots and that normally runs in O(N^2). The new implementation uses vantage-point trees to compute sparse pairwise similarities between the input data object... | [
"algorithm",
"n log n",
"embedding technique",
"visualization",
"data",
"scatter plots",
"new implementation",
"trees",
"sparse pairwise similarities",
"input data objects"
] | https://openreview.net/pdf?id=eQWJec0ursynH | https://openreview.net/forum?id=eQWJec0ursynH | 2VfI2cAZSF2P0 | review | 1,362,192,420,000 | eQWJec0ursynH | [
"everyone"
] | [
"anonymous reviewer 7db1"
] | ICLR.cc/2013/conference | 2013 | title: review of Barnes-Hut-SNE
review: The submitted paper proposes a more efficient implementation of the Student-t distributed version of SNE. t-SNE is O(n^2), and the proposed implementation is O(nlogn). This offers a substantial improvement in the efficiency, such that very large datasets may be embedded. Furtherm... |
eQWJec0ursynH | Barnes-Hut-SNE | [
"Laurens van der Maaten"
] | The paper presents an O(N log N)-implementation of t-SNE -- an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots and that normally runs in O(N^2). The new implementation uses vantage-point trees to compute sparse pairwise similarities between the input data object... | [
"algorithm",
"n log n",
"embedding technique",
"visualization",
"data",
"scatter plots",
"new implementation",
"trees",
"sparse pairwise similarities",
"input data objects"
] | https://openreview.net/pdf?id=eQWJec0ursynH | https://openreview.net/forum?id=eQWJec0ursynH | pA91py2CW8AQg | review | 1,362,758,580,000 | eQWJec0ursynH | [
"everyone"
] | [
"Laurens van der Maaten"
] | ICLR.cc/2013/conference | 2013 | review: I updated the paper according the reviewers' comments, and included results with a dual-tree implementation of t-SNE in the appendix. The updated paper should appear on Arxiv soon. |
eQWJec0ursynH | Barnes-Hut-SNE | [
"Laurens van der Maaten"
] | The paper presents an O(N log N)-implementation of t-SNE -- an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots and that normally runs in O(N^2). The new implementation uses vantage-point trees to compute sparse pairwise similarities between the input data object... | [
"algorithm",
"n log n",
"embedding technique",
"visualization",
"data",
"scatter plots",
"new implementation",
"trees",
"sparse pairwise similarities",
"input data objects"
] | https://openreview.net/pdf?id=eQWJec0ursynH | https://openreview.net/forum?id=eQWJec0ursynH | Hy8wy4X01CHmD | review | 1,363,113,120,000 | eQWJec0ursynH | [
"everyone"
] | [
"Laurens van der Maaten"
] | ICLR.cc/2013/conference | 2013 | review: In typical applications of Barnes-Hut (like t-SNE), the force nearly vanishes in the far field, which allows for averaging those far-field forces without losing much accuracy.
In algorithms that minimize, e.g., the squared error between two sets of pairwise distances, I guess you could do the opposite. The f... |
eQWJec0ursynH | Barnes-Hut-SNE | [
"Laurens van der Maaten"
] | The paper presents an O(N log N)-implementation of t-SNE -- an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots and that normally runs in O(N^2). The new implementation uses vantage-point trees to compute sparse pairwise similarities between the input data object... | [
"algorithm",
"n log n",
"embedding technique",
"visualization",
"data",
"scatter plots",
"new implementation",
"trees",
"sparse pairwise similarities",
"input data objects"
] | https://openreview.net/pdf?id=eQWJec0ursynH | https://openreview.net/forum?id=eQWJec0ursynH | AZcnMdQBqGZS4 | review | 1,362,833,640,000 | eQWJec0ursynH | [
"everyone"
] | [
"Alex Bronstein"
] | ICLR.cc/2013/conference | 2013 | review: Laurens, have you thought about using similar ideas for embedding algorithms that also exploit global similarities (like multidimensional scaling)? I think in many types of data analysis, this can be extremely important. |
eQWJec0ursynH | Barnes-Hut-SNE | [
"Laurens van der Maaten"
] | The paper presents an O(N log N)-implementation of t-SNE -- an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots and that normally runs in O(N^2). The new implementation uses vantage-point trees to compute sparse pairwise similarities between the input data object... | [
"algorithm",
"n log n",
"embedding technique",
"visualization",
"data",
"scatter plots",
"new implementation",
"trees",
"sparse pairwise similarities",
"input data objects"
] | https://openreview.net/pdf?id=eQWJec0ursynH | https://openreview.net/forum?id=eQWJec0ursynH | H3-iUVuyZzUgh | review | 1,365,114,600,000 | eQWJec0ursynH | [
"everyone"
] | [
"Zhirong Yang"
] | ICLR.cc/2013/conference | 2013 | review: Great work, congratulations! It seems we and you have simultaneously found essentially the same solution. Our paper and software are here:
Zhirong Yang, Jaakko Peltonen, Samuel Kaski. Scalable Optimization of Neighbor Embedding for Visualization. Accepted to ICML2013.
Preprint and software: http://researc... |
fm5jfAwPbOfP6 | Linear-Nonlinear-Poisson Neuron Networks Perform Bayesian Inference On Boltzmann Machines | [
"Yuanlong Shao"
] | One conjecture in both deep learning and classical connectionist viewpoint is that the biological brain implements certain kinds of deep networks as its back-end. However, to our knowledge, a detailed correspondence has not yet been set up, which is important if we want to bridge between neuroscience and machine learni... | [
"bayesian inference",
"neuron networks",
"boltzmann machines",
"conjecture",
"deep learning",
"classical connectionist viewpoint",
"deep networks",
"knowledge",
"detailed correspondence"
] | https://openreview.net/pdf?id=fm5jfAwPbOfP6 | https://openreview.net/forum?id=fm5jfAwPbOfP6 | QQ1JEKYFTIQhj | review | 1,362,262,200,000 | fm5jfAwPbOfP6 | [
"everyone"
] | [
"anonymous reviewer 4490"
] | ICLR.cc/2013/conference | 2013 | title: review of Linear-Nonlinear-Poisson Neuron Networks Perform Bayesian Inference On Boltzmann Machines
review: This paper proposes a scheme for utilizing LNP model neurons to perform inference in Boltzmann Machines. The contribution of the work is to map a Boltzmann Machine network onto a set of LNP model units an... |
fm5jfAwPbOfP6 | Linear-Nonlinear-Poisson Neuron Networks Perform Bayesian Inference On Boltzmann Machines | [
"Yuanlong Shao"
] | One conjecture in both deep learning and classical connectionist viewpoint is that the biological brain implements certain kinds of deep networks as its back-end. However, to our knowledge, a detailed correspondence has not yet been set up, which is important if we want to bridge between neuroscience and machine learni... | [
"bayesian inference",
"neuron networks",
"boltzmann machines",
"conjecture",
"deep learning",
"classical connectionist viewpoint",
"deep networks",
"knowledge",
"detailed correspondence"
] | https://openreview.net/pdf?id=fm5jfAwPbOfP6 | https://openreview.net/forum?id=fm5jfAwPbOfP6 | B4qSE6NM3ZEOV | review | 1,362,383,640,000 | fm5jfAwPbOfP6 | [
"everyone"
] | [
"Yuanlong Shao"
] | ICLR.cc/2013/conference | 2013 | review: Thank you very much for the valuable reviews and references! I learned quite a lot from reading the suggested papers.
--> For Reviewer caa8:
- Regarding the question raised in the end of your review, I think a somewhat related question is why neurons use spikes and whether we shall follow that in our com... |
fm5jfAwPbOfP6 | Linear-Nonlinear-Poisson Neuron Networks Perform Bayesian Inference On Boltzmann Machines | [
"Yuanlong Shao"
] | One conjecture in both deep learning and classical connectionist viewpoint is that the biological brain implements certain kinds of deep networks as its back-end. However, to our knowledge, a detailed correspondence has not yet been set up, which is important if we want to bridge between neuroscience and machine learni... | [
"bayesian inference",
"neuron networks",
"boltzmann machines",
"conjecture",
"deep learning",
"classical connectionist viewpoint",
"deep networks",
"knowledge",
"detailed correspondence"
] | https://openreview.net/pdf?id=fm5jfAwPbOfP6 | https://openreview.net/forum?id=fm5jfAwPbOfP6 | 1JfiMxWFQy15Z | review | 1,361,988,540,000 | fm5jfAwPbOfP6 | [
"everyone"
] | [
"anonymous reviewer caa8"
] | ICLR.cc/2013/conference | 2013 | title: review of Linear-Nonlinear-Poisson Neuron Networks Perform Bayesian Inference On Boltzmann Machines
review: The paper provides an explicit connection between the linear-nonlinear-poisson (LNP) model of biological neural networks and the Boltzmann machine. The author proposes a semi-stochastic inference procedure... |
fm5jfAwPbOfP6 | Linear-Nonlinear-Poisson Neuron Networks Perform Bayesian Inference On Boltzmann Machines | [
"Yuanlong Shao"
] | One conjecture in both deep learning and classical connectionist viewpoint is that the biological brain implements certain kinds of deep networks as its back-end. However, to our knowledge, a detailed correspondence has not yet been set up, which is important if we want to bridge between neuroscience and machine learni... | [
"bayesian inference",
"neuron networks",
"boltzmann machines",
"conjecture",
"deep learning",
"classical connectionist viewpoint",
"deep networks",
"knowledge",
"detailed correspondence"
] | https://openreview.net/pdf?id=fm5jfAwPbOfP6 | https://openreview.net/forum?id=fm5jfAwPbOfP6 | 88txIZ2gY7lJh | review | 1,362,392,700,000 | fm5jfAwPbOfP6 | [
"everyone"
] | [
"anonymous reviewer ef61"
] | ICLR.cc/2013/conference | 2013 | title: review of Linear-Nonlinear-Poisson Neuron Networks Perform Bayesian Inference On Boltzmann Machines
review: This paper argues that inference in Boltzmann machines can be performed using neurons modelled according to the Linear Nonlinear-Poisson model. The LNP model is first presented, then one variant of inferen... |
0OR_OycNMzOF9 | Auto-pooling: Learning to Improve Invariance of Image Features from Image Sequences | [
"Sainbayar Sukhbaatar",
"Takaki Makino",
"Kazuyuki Aihara"
] | Learning invariant representations from images is one of the hardest challenges facing computer vision. Spatial pooling is widely used to create invariance to spatial shifting, but it is restricted to convolutional models. In this paper, we propose a novel pooling method that can learn soft clustering of features from ... | [
"invariance",
"image sequences",
"features",
"learning",
"image features",
"images",
"invariant representations",
"hardest challenges",
"computer vision",
"spatial pooling"
] | https://openreview.net/pdf?id=0OR_OycNMzOF9 | https://openreview.net/forum?id=0OR_OycNMzOF9 | lvwFsD4fResyH | review | 1,361,921,040,000 | 0OR_OycNMzOF9 | [
"everyone"
] | [
"anonymous reviewer 1dcf"
] | ICLR.cc/2013/conference | 2013 | title: review of Auto-pooling: Learning to Improve Invariance of Image Features from Image Sequences
review: Summary:
This paper proposes learning a pooling layer (not necessarily of a convolutional network) by using temporal coherence to learn the pools. Training is accomplished by minimizing a criterion that encoura... |
0OR_OycNMzOF9 | Auto-pooling: Learning to Improve Invariance of Image Features from Image Sequences | [
"Sainbayar Sukhbaatar",
"Takaki Makino",
"Kazuyuki Aihara"
] | Learning invariant representations from images is one of the hardest challenges facing computer vision. Spatial pooling is widely used to create invariance to spatial shifting, but it is restricted to convolutional models. In this paper, we propose a novel pooling method that can learn soft clustering of features from ... | [
"invariance",
"image sequences",
"features",
"learning",
"image features",
"images",
"invariant representations",
"hardest challenges",
"computer vision",
"spatial pooling"
] | https://openreview.net/pdf?id=0OR_OycNMzOF9 | https://openreview.net/forum?id=0OR_OycNMzOF9 | agstF_wXReF7S | review | 1,362,276,780,000 | 0OR_OycNMzOF9 | [
"everyone"
] | [
"Yann LeCun"
] | ICLR.cc/2013/conference | 2013 | review: Interesting paper.
You might be interested in this paper by Karol Gregor and myself: http://arxiv.org/abs/1006.0448
The second part of the paper also describes a kind of pooling based on temporal constancy. |
0OR_OycNMzOF9 | Auto-pooling: Learning to Improve Invariance of Image Features from Image Sequences | [
"Sainbayar Sukhbaatar",
"Takaki Makino",
"Kazuyuki Aihara"
] | Learning invariant representations from images is one of the hardest challenges facing computer vision. Spatial pooling is widely used to create invariance to spatial shifting, but it is restricted to convolutional models. In this paper, we propose a novel pooling method that can learn soft clustering of features from ... | [
"invariance",
"image sequences",
"features",
"learning",
"image features",
"images",
"invariant representations",
"hardest challenges",
"computer vision",
"spatial pooling"
] | https://openreview.net/pdf?id=0OR_OycNMzOF9 | https://openreview.net/forum?id=0OR_OycNMzOF9 | 7N2E7oCO6yPiH | review | 1,362,203,160,000 | 0OR_OycNMzOF9 | [
"everyone"
] | [
"anonymous reviewer 2c2a"
] | ICLR.cc/2013/conference | 2013 | title: review of Auto-pooling: Learning to Improve Invariance of Image Features from Image Sequences
review: Many vision algorithms comprise a pooling step, which combines the outputs of a feature extraction layer to create invariance or reduce dimensionality, often by taking their average. This paper proposes to refin... |