Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error
Error code: DatasetGenerationCastError
Exception: DatasetGenerationCastError
Message: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 4 new columns ({'review_assessment:_checking_correctness_of_experiments', 'review_assessment:_checking_correctness_of_derivations_and_theory', 'experience_assessment', 'review_assessment:_thoroughness_in_paper_reading'}) and 1 missing columns ({'confidence'}).
This happened while the csv dataset builder was generating data using hf://datasets/Vidushee/openreview-peer-reviews/data/iclr_2020_reviews.csv (at revision 5f75f3464d5080ef8f3d6144aacba86d24f849b7). The data files read by the builder, all under hf://datasets/Vidushee/openreview-peer-reviews@5f75f3464d5080ef8f3d6144aacba86d24f849b7/data/ (local cache paths omitted):
- iclr_2018_reviews.csv
- iclr_2019_reviews.csv
- iclr_2020_reviews.csv
- iclr_2021_reviews.csv
- iclr_2022_reviews.csv
- iclr_2023_reviews.csv
- iclr_2024_reviews.csv
- neurips_2021_reviews.csv
- neurips_2022_reviews.csv
- neurips_2023_reviews.csv
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
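The first option, giving every CSV the same set of columns, can be sketched with the standard library alone. This is an illustrative sketch, not part of the dataset: the `align_columns` helper and the in-memory name-to-text mapping are assumptions (the real files live under data/ and would be read from and written to disk), and missing values are simply left empty, which may or may not be the right semantics for these review fields.

```python
import csv
import io

def align_columns(csv_texts):
    """Give every CSV the union of all columns, filling gaps with "".

    `csv_texts` maps file name -> CSV text; returns the same mapping
    with harmonized headers.
    """
    parsed = {}
    all_columns = []  # union of headers, in first-seen order
    for name, text in csv_texts.items():
        reader = csv.DictReader(io.StringIO(text))
        parsed[name] = list(reader)
        for col in reader.fieldnames or []:
            if col not in all_columns:
                all_columns.append(col)

    aligned = {}
    for name, rows in parsed.items():
        out = io.StringIO()
        # restval="" fills the columns a given file never had
        writer = csv.DictWriter(out, fieldnames=all_columns, restval="")
        writer.writeheader()
        writer.writerows(rows)
        aligned[name] = out.getvalue()
    return aligned
```

After such a rewrite, files like iclr_2019_reviews.csv and iclr_2020_reviews.csv would share one header, so the csv builder could infer a single schema, with empty strings wherever a given year's review form lacked a field.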
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1890, in _prepare_split_single
writer.write_table(table)
File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 760, in write_table
pa_table = table_cast(pa_table, self._schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2272, in table_cast
return cast_table_to_schema(table, schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
raise CastError(
datasets.table.CastError: Couldn't cast
forum: string
paper_title: string
review_id: string
cdate: int64
experience_assessment: string
rating: string
review: string
review_assessment:_checking_correctness_of_derivations_and_theory: string
review_assessment:_checking_correctness_of_experiments: string
review_assessment:_thoroughness_in_paper_reading: string
title: string
-- schema metadata --
pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 1829
to
{'forum': Value('string'), 'paper_title': Value('string'), 'review_id': Value('string'), 'cdate': Value('int64'), 'confidence': Value('string'), 'rating': Value('string'), 'review': Value('string'), 'title': Value('string')}
because column names don't match
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1347, in compute_config_parquet_and_info_response
parquet_operations = convert_to_parquet(builder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 980, in convert_to_parquet
builder.download_and_prepare(
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 884, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 947, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1739, in _prepare_split
for job_id, done, content in self._prepare_split_single(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1892, in _prepare_split_single
raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
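The second option, separating the files into configurations, is declared in the YAML block at the top of the dataset card (README.md), per the manual-configuration docs linked above. A hedged sketch only: the config names below are invented, and which CSVs actually share which header has to be verified by inspecting the files; the error only establishes that iclr_2020_reviews.csv carries the four assessment columns while the inferred schema expected `confidence`.

```yaml
configs:
  - config_name: confidence_schema
    data_files:
      - "data/iclr_2018_reviews.csv"
      - "data/iclr_2019_reviews.csv"
  - config_name: assessment_schema
    data_files:
      - "data/iclr_2020_reviews.csv"
```

Each config is then loaded by name, e.g. `load_dataset("Vidushee/openreview-peer-reviews", "assessment_schema")`, and the viewer builds one Parquet conversion per config instead of forcing every CSV into a single schema.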
| forum (string) | paper_title (string) | review_id (string) | cdate (int64) | confidence (string) | rating (string) | review (string) | title (string) |
|---|---|---|---|---|---|---|---|
ryzm6BATZ | Image Quality Assessment Techniques Improve Training and Evaluation of Energy-Based Generative Adversarial Networks | H1NEs7Clz | 1,512,090,412,089 | 3: The reviewer is fairly confident that the evaluation is correct | 6: Marginally above acceptance threshold | Summary:<br>The paper extends the the recently proposed Boundary Equilibrium Generative Adversarial Networks (BEGANs), with the hope of generating images which are more realistic. In particular, the authors propose to change the energy function associated with the auto-encoder, from an L2 norm (a single number) to an ene... | An incremental paper with moderately interesting results on a single dataset |
ryzm6BATZ | Image Quality Assessment Techniques Improve Training and Evaluation of Energy-Based Generative Adversarial Networks | HJZIu0Kef | 1,511,807,049,322 | 3: The reviewer is fairly confident that the evaluation is correct | 5: Marginally below acceptance threshold | This paper proposed some new energy function in the BEGAN (boundary equilibrium GAN framework), including l_1 score, Gradient magnitude similarity score, and chrominance score, which are motivated and borrowed from the image quality assessment techniques. These energy component in the objective function allows learning... | Novelty of the paper is a bit restricted, and design choices appear to be lacking strong justifications. |
ryzm6BATZ | Image Quality Assessment Techniques Improve Training and Evaluation of Energy-Based Generative Adversarial Networks | Bk8udEEeM | 1,511,438,445,887 | 3: The reviewer is fairly confident that the evaluation is correct | 5: Marginally below acceptance threshold | Quick summary:<br>This paper proposes an energy based formulation to the BEGAN model and modifies it to include an image quality assessment based term. The model is then trained with CelebA under different parameters settings and results are analyzed.<br>Quality and significance:<br>This is quite a technical paper, written in ... | A very technical paper with unclear significance. |
ryykVe-0W | Learning Independent Features with Adversarial Nets for Non-linear ICA | HyoEDdvxG | 1,511,651,123,102 | 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature | 3: Clear rejection | The focus of the paper is independent component analysis (ICA) and its nonlinear variants such as the post non-linear (PNL) ICA model. Motivated by the fact that estimating mutual information and similar dependency measures require density estimates and hard to optimize, the authors propose a Wasserstein GAN (generativ... | Proposed Wasserstein GAN: not well-suited to ICA |
ryykVe-0W | Learning Independent Features with Adversarial Nets for Non-linear ICA | H1hlWndxM | 1,511,731,443,944 | 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature | 5: Marginally below acceptance threshold | The paper proposes a GAN variant for solving the nonlinear independent component analysis (ICA) problem. The method seems interesting, but the presentation has a severe lack of focus.<br>First, the authors should focus their discussion instead of trying to address a broad range of ICA problems from linear to post-nonline... | Interesting nonlinear ICA method, but unfocused presentation and poor comparisons |
ryykVe-0W | Learning Independent Features with Adversarial Nets for Non-linear ICA | ry2lpp_ez | 1,511,738,612,355 | 3: The reviewer is fairly confident that the evaluation is correct | 6: Marginally above acceptance threshold | The idea of ICA is constructing a mapping from dependent inputs to outputs (=the derived features) such that the outputs are as independent as possible. As the input/output densities are often not known and/or are intractable, natural independence measures such as mutual information are hard to estimate. In practice, ... | Thought provoking paper but lacks more detailed analysis |
rywHCPkAW | Noisy Networks For Exploration | H14gEaFxG | 1,511,801,835,995 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 6: Marginally above acceptance threshold | A new exploration method for deep RL is presented, based on the idea of injecting noise into the deep networks’ weights. The noise may take various forms (either uncorrelated or factored) and its magnitude is trained by gradient descent along other parameters. It is shown how to implement this idea both in DQN (and its... | Good paper but lack of empirical comparison & analysis |
rywHCPkAW | Noisy Networks For Exploration | Hyf0aUVeM | 1,511,448,010,505 | 3: The reviewer is fairly confident that the evaluation is correct | 5: Marginally below acceptance threshold | In this paper, a new heuristic is introduced with the purpose of controlling the exploration in deep reinforcement learning.<br>The proposed approach, NoisyNet, seems very simple and smart: a noise of zero mean and unknown variance is added to each weight of the deep network. The matrices of unknown variances are consid... | The proposed approach is interesting and has strengths, but the paper has weaknesses. I am somewhat divided for acceptance. |
rywHCPkAW | Noisy Networks For Exploration | rJ6Z7prxf | 1,511,539,460,976 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 7: Good paper, accept | This paper introdues NoisyNets, that are neural networks whose parameters are perturbed by a parametric noise function, and they apply them to 3 state-of-the-art deep reinforcement learning algorithms: DQN, Dueling networks and A3C. They obtain a substantial performance improvement over the baseline algorithms, without... | A good paper, despite a weak analysis |
rywDjg-RW | Neural-Guided Deductive Search for Real-Time Program Synthesis from Examples | S1qCIfJWz | 1,512,150,737,563 | 3: The reviewer is fairly confident that the evaluation is correct | 8: Top 50% of accepted papers, clear accept | This is a strong paper. It focuses on an important problem (speeding up program synthesis), it’s generally very well-written, and it features thorough evaluation. The results are impressive: the proposed system synthesizes programs from a single example that generalize better than prior state-of-the-art, and it does so... | Strong paper; accept |
rywDjg-RW | Neural-Guided Deductive Search for Real-Time Program Synthesis from Examples | SkPNib9ez | 1,511,820,079,417 | 3: The reviewer is fairly confident that the evaluation is correct | 6: Marginally above acceptance threshold | This paper extends and speeds up PROSE, a programming by example system, by posing the selection of the next production rule in the grammar as a supervised learning problem.<br>This paper requires a large amount of background knowledge as it depends on understanding program synthesis as it is done in the programming lang... | Incremental paper but well-written |
rywDjg-RW | Neural-Guided Deductive Search for Real-Time Program Synthesis from Examples | SyFsGdSlM | 1,511,518,880,762 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 6: Marginally above acceptance threshold | The paper presents a branch-and-bound approach to learn good programs (consistent with data, expected to generalise well), where an LSTM is used to predict which branches in the search tree should lead to good programs (at the leaves of the search tree). The LSTM learns from inputs of program spec + candidate branch (g... | Although the search method chosen was reasonable, the only real innovation here is to use the LSTM to learn a search heuristic. |
ryvxcPeAb | Enhancing the Transferability of Adversarial Examples with Noise Reduced Gradient | SJIOPWdgf | 1,511,688,046,234 | 3: The reviewer is fairly confident that the evaluation is correct | 5: Marginally below acceptance threshold | This paper focuses on enhancing the transferability of adversarial examples from one model to another model. The main contribution of this paper is to factorize the adversarial perturbation direction into model-specific and data-dependent. Motivated by finding the data-dependent direction, the paper proposes the noise ... | Some arguments are not well justified |
ryvxcPeAb | Enhancing the Transferability of Adversarial Examples with Noise Reduced Gradient | rkzeadBxf | 1,511,521,513,976 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 4: Ok but not good enough - rejection | This paper postulates that an adversarial perturbation consists of a model-specific and data-specific component, and that amplification of the latter is best suited for adversarial attacks.<br>This paper has many grammatical errors. The article is almost always missing from nouns. Some of the sentences need changing. For... | Review |
ryvxcPeAb | Enhancing the Transferability of Adversarial Examples with Noise Reduced Gradient | rkKt2t2xz | 1,511,984,257,528 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 5: Marginally below acceptance threshold | The problem of exploring the cross-model (and cross-dataset) generalization of adversarial examples is relatively neglected topic. However the paper's list of related work on that toopic is a bit lacking as in section 3.1 it omits referencing the "Explaining and Harnessing..." paper by Goodfellow et al., which presente... | Interesting study of the most intriguing but lesser studied aspect of adversarial examples. |
ryup8-WCW | Measuring the Intrinsic Dimension of Objective Landscapes | B1IwI-2xz | 1,511,949,918,452 | 3: The reviewer is fairly confident that the evaluation is correct | 7: Good paper, accept | This paper proposes an empirical measure of the intrinsic dimensionality of a neural network problem. Taking the full dimensionality to be the total number of parameters of the network model, the authors assess intrinsic dimensionality by randomly projecting the network to a domain with fewer parameters (corresponding ... | ICLR 2018 official review (Reviewer 2) |
ryup8-WCW | Measuring the Intrinsic Dimension of Objective Landscapes | BkJsM2vgf | 1,511,666,326,601 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 7: Good paper, accept | [ =============================== REVISION =========================================================]<br>My questions are answered, paper undergone some revision to clarify the presentation. I still maintain that it is a good paper and argue for acceptance - it provides a witty way of checking whether the network is overp... | Good paper |
ryup8-WCW | Measuring the Intrinsic Dimension of Objective Landscapes | BJva6gOgM | 1,511,685,567,247 | 2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper | 6: Marginally above acceptance threshold | While deep learning usually involves estimating a large number of variable, this paper suggests to reduce its number by assuming that these variable lie in a low-dimensional subspace. In practice, this subspace is chosen randomly. Simulations show the promise of the proposed method. In particular, figure 2 shows that ... | An proposal that reduces the degree of freedom in deep learning |
rytstxWAW | FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling | B1ymVPEgM | 1,511,449,622,942 | 2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper | 7: Good paper, accept | The paper presents a novel view of GCN that interprets graph convolutions as integral transforms of embedding functions. This addresses the issue of lack of sample independence in training and allows for the use of Monte Carlo methods. It further explores variance reduction to speed up training via importance sampling.... | present a novel view of GCN that leads to scalable GCN further with importance sampling for variance reduction |
rytstxWAW | FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling | SJce_4YlM | 1,511,766,002,084 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 6: Marginally above acceptance threshold | Update:<br>I have read the rebuttal and the revised manuscript. Additionally I had a brief discussion with the authors regarding some aspects of their probabilistic framework. I think that batch training of GCN is an important problem and authors have proposed an interesting solution to this problem. I appreciated all th... | Interesting ideas, but I have both theoretical and practical concerns |
rytstxWAW | FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling | H1IdT6AlG | 1,512,131,950,169 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 8: Top 50% of accepted papers, clear accept | This paper addresses the memory bottleneck problem in graph neural networks and proposes a novel importance sampling scheme that is based on sampling vertices (instead of sampling local neighbors as in [1]). Experimental results demonstrate a significant speedup in per-batch training time compared to previous works whi... | Fast solution for the memory bottleneck issue in graph neural networks |
rytstxWAW | FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling | HJDVPNYgf | 1,511,765,807,138 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 7: Good paper, accept | The paper focuses on the recently graph convolutional network (GCN) framework.<br>They authors identify a couple of issues with GCN: the fact that both training and test data need to be present at training time, making it transductive in nature and the fact that the notion of ‘neighborhood’ grows as the signal propagates ... | Solid idea, excellent presentation, questions about experiments |
rytNfI1AZ | Training wide residual networks for deployment using a single bit for each weight | SkGtH2Kxf | 1,511,798,137,959 | 3: The reviewer is fairly confident that the evaluation is correct | 6: Marginally above acceptance threshold | The authors propose to train neural networks with 1bit weights by storing and updating full precision weights in training, but using the reduced 1bit version of the network to compute predictions and gradients in training. They add a few tricks to keep the optimization numerically efficient. Since right now more and mo... | Solid work |
rytNfI1AZ | Training wide residual networks for deployment using a single bit for each weight | BJyxkbFxz | 1,511,751,402,736 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 6: Marginally above acceptance threshold | This paper introduces several ideas: scaling, warm-restarting learning rate, cutout augmentation.<br>I would like to see detailed ablation studies: how the performance is influenced by the warm-restarting learning rates, how the performance is influenced by cutout. Is the scaling scheme helpful for existing single-bit a... | Mixed ideas |
rytNfI1AZ | Training wide residual networks for deployment using a single bit for each weight | HJ0pVRqxM | 1,511,871,686,185 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 6: Marginally above acceptance threshold | The paper trains wide ResNets for 1-bit per weight deployment.<br>The experiments are conducted on CIFAR-10, CIFAR-100, SVHN and ImageNet32.<br>+the paper reads well<br>+the reported performance is compelling<br>Perhaps the authors should make it clear in the abstract by replacing:<br>"Here, we report methodological innovations th... | a single bit for each weight |
ryserbZR- | Classification and Disease Localization in Histopathology Using Only Global Labels: A Weakly-Supervised Approach | Bk72o4NWM | 1,512,487,850,795 | 3: The reviewer is fairly confident that the evaluation is correct | 5: Marginally below acceptance threshold | The authors approach the task of labeling histology images with just a single global label, with promising results on two different data sets. This is of high relevance given the difficulty in obtaining expert annotated data. At the same time the key elements of the presented approach remain identical to those in a pre... | Interesting application and results with incremental innovation on exististing work |
ryserbZR- | Classification and Disease Localization in Histopathology Using Only Global Labels: A Weakly-Supervised Approach | SkWQLvebf | 1,512,236,569,311 | 3: The reviewer is fairly confident that the evaluation is correct | 6: Marginally above acceptance threshold | This paper proposes a deep learning (DL) approach (pre-trained CNNs) to the analysis of histopathological images for disease localization.<br>It correctly identifies the problem that DL usually requires large image databases to provide competitive results, while annotated histopathological data repositories are costly to ... | Down-to-earth practical application of DL in a medico-clinical context |
ryserbZR- | Classification and Disease Localization in Histopathology Using Only Global Labels: A Weakly-Supervised Approach | S1O8uhkxf | 1,511,143,504,077 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 5: Marginally below acceptance threshold | This paper describes a semi-supervised method to classify and segment WSI histological images that are only labeled at the whole image level. Images are tiled and tiles are sampled and encoded into a feature vector via a ResNET-50 pretrained on ImageNET. A 1D convolutional layer followed by a min-max layer and 2 fully ... | Interesting MIL approach, lacks technical depth for this conference |
rypT3fb0b | LEARNING TO SHARE: SIMULTANEOUS PARAMETER TYING AND SPARSIFICATION IN DEEP LEARNING | rkPj2vjeM | 1,511,910,560,326 | 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature | 8: Top 50% of accepted papers, clear accept | SUMMARY<br>The paper proposes to apply GrOWL regularization to the tensors of parameters between each pair of layers. The groups are composed of all coefficients associated to inputs coming from the same neuron in the previous layer. The proposed algorithm is a simple proximal gradient algorithm using the proximal operato... | A nice paper that would be more compelling with a comparison with the group elastic net |
rypT3fb0b | LEARNING TO SHARE: SIMULTANEOUS PARAMETER TYING AND SPARSIFICATION IN DEEP LEARNING | H1fONf_gG | 1,511,691,370,290 | 3: The reviewer is fairly confident that the evaluation is correct | 6: Marginally above acceptance threshold | This paper proposes to apply a group ordered weighted l1 (GrOWL) regularization term to promote sparsity and parameter sharing in training deep neural networks and hence compress the model to a light version.<br>The GrOWL regularizer (Oswal et al., 2016) penalizes the sorted l2 norms of the rows in a parameter matrix wit... | Unclear motivation and insufficient experimental results |
rypT3fb0b | LEARNING TO SHARE: SIMULTANEOUS PARAMETER TYING AND SPARSIFICATION IN DEEP LEARNING | rkJfM20eG | 1,512,124,935,161 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 7: Good paper, accept | The authors propose to use the group ordered weighted l1 regulariser (GrOWL) combined with clustering of correlated features to select and tie parameters, leading to a sparser representation with a reduced parameter space. They apply the proposed method two well-known benchmark datasets under a fully connected and a co... | An incremental improvement for compressing deep neural networks |
rylejExC- | Stochastic Training of Graph Convolutional Networks | B1FrpdOeM | 1,511,718,209,337 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 4: Ok but not good enough - rejection | The paper proposes a method to speed up the training of graph convolutional networks, which are quite slow for large graphs. The key insight is to improve the estimates of the average neighbor activations (via neighbor sampling) so that we can either sample less neighbors or have higher accuracy for the same number of ... | Interesting but not enough |
rylejExC- | Stochastic Training of Graph Convolutional Networks | rJA4cxJlf | 1,511,094,837,764 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 3: Clear rejection | This paper proposes a new training method for graph convolutional networks. The experimental results look interesting. However, this paper has some issues.
This paper is hard to read. There are some undefined or inconsistently used notations. For instance, sigma is used for two different meanings: an activation function and va... | A new training method of graph convolutional networks. Good, but there are some errors.
rylejExC- | Stochastic Training of Graph Convolutional Networks | S1g5R5Ogz | 1,511,726,728,086 | 3: The reviewer is fairly confident that the evaluation is correct | 7: Good paper, accept | Existing training algorithms for graph convolutional nets are slow. This paper develops novel methods, with a nice mix of theory, practicalities, and experiments.
Let me caution that I am not familiar with convolutional nets applied to graph data.
Clearly, the existing best algorithm - neighborhood sampling is sl... | Existing training algorithms for graph convolutional nets are slow. This paper develops novel methods, with a nice mix of theory, practicalities and experiments.
rylSzl-R- | On Unifying Deep Generative Models | SJAtVYteG | 1,511,785,605,896 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 6: Marginally above acceptance threshold | The authors develop a framework interpreting GAN algorithms as performing a form of variational inference on a generative model reconstructing an indicator variable of whether a sample is from the true or the generative data distribution. Starting from the ‘non-saturated’ GAN loss the key result (lemma 1) shows that GANs... | Review of On Unifying Deep Generative Models
rylSzl-R- | On Unifying Deep Generative Models | SJIHn0tlz | 1,511,808,061,836 | 3: The reviewer is fairly confident that the evaluation is correct | 7: Good paper, accept | The paper provides a symmetric modeling perspective ("generation" and "inference" are just different naming, the underlying techniques can be exchanged) to unify existing deep generative models, particularly VAEs and GANs. Someone had to formally do this, and the paper did a good job in describing the new view (by borr... | Good paper |
rylSzl-R- | On Unifying Deep Generative Models | BkONJetlM | 1,511,747,375,839 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 7: Good paper, accept | Update 1/11/18:
I'm happy with the comments from the authors. I think the explanation of non-saturating vs saturating objective is nice, and I've increased the score.
Note though: I absolutely expect a revision at camera-ready if the paper gets accepted (we did not get one).
Original review:
The paper is overall a g... | Overall good perspective on GANs that connect them to other variational methods |
ryk77mbRZ | Noise-Based Regularizers for Recurrent Neural Networks | Syr4moYxG | 1,511,793,452,781 | 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature | 2: Strong rejection | The authors of the paper advocate injecting noise into the activations of recurrent networks for regularisation. This is done by replacing the deterministic units with stochastic ones.
The paper has several issues with respect to the method and related work.
- The paper needs to mention [Graves 2011], which is one ... | Severe issues with prior work and justification
ryk77mbRZ | Noise-Based Regularizers for Recurrent Neural Networks | Sk6_QZcgM | 1,511,818,100,676 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 5: Marginally below acceptance threshold | The RNN transition function is: h_{t+1} = f(h_t, x_t)
This paper proposes using a stochastic transition function instead of a deterministic one.
i.e. h_{t+1} \sim expfam(mean = f(h_t, x_t), gamma), where expfam denotes a distribution from the exponential family.
The experimental results consider text modeling (evaluating on... | Sample hidden states of an RNN instead of predicting them deterministically. Interesting idea that is insufficiently explored. |
ryk77mbRZ | Noise-Based Regularizers for Recurrent Neural Networks | ry22qzclM | 1,511,824,052,525 | 3: The reviewer is fairly confident that the evaluation is correct | 3: Clear rejection | In order to regularize RNNs, the paper suggests injecting noise into hidden units. More specifically, the suggested technique resembles optimizing the expected log likelihood under the hidden states' prior, a lower bound to the data log-likelihood.
The described approach seems to be simple. Yet, several details are unc... | Running an RNN for one step from noisy hidden states is a valid regularizer |
ryjw_eAaZ | Unsupervised Deep Structure Learning by Recursive Dependency Analysis | ryilanteG | 1,511,800,055,530 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 4: Ok but not good enough - rejection | The paper proposes an unsupervised structure learning method for deep neural networks. It first constructs a fully visible DAG by learning from data, and decomposes variables into autonomous sets. Then latent variables are introduced and a stochastic inverse is generated. Later a deep neural network structure is construc... | There is a major technical flaw in this paper, and some experiment settings are not convincing.
ryjw_eAaZ | Unsupervised Deep Structure Learning by Recursive Dependency Analysis | HJZz1Wqef | 1,511,816,969,540 | 2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper | 5: Marginally below acceptance threshold | This paper tackles the important problem of structure learning by introducing an unsupervised algorithm, which encodes a hierarchy of independencies in the input distribution and allows introducing skip connections among neurons in different layers. The quality of the learnt structure is evaluated in the context of ima... | Interesting unsupervised structure learning algorithm |
ryjw_eAaZ | Unsupervised Deep Structure Learning by Recursive Dependency Analysis | SJGyhgwZz | 1,512,668,122,231 | 3: The reviewer is fairly confident that the evaluation is correct | 5: Marginally below acceptance threshold | The authors propose a deep architecture learning algorithm in an unsupervised fashion. By finding conditional independencies in the input as a Bayesian network and using a stochastic inverse mechanism that preserves the conditional dependencies, they suggest an optimal structure of fully connected hidden layers (depth, number... | Promising method, inconclusive results
ryj38zWRb | Optimizing the Latent Space of Generative Networks | SynXdTKeM | 1,511,802,915,972 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 6: Marginally above acceptance threshold | In this paper, the authors propose a new architecture for generative neural networks. Rather than the typical adversarial training procedure used to train a generator and a discriminator, the authors train a generator only. To ensure that noise vectors get mapped to images from the target distribution, the generator is... | This paper is a potentially interesting alternative training procedure to GANs. |
ryj38zWRb | Optimizing the Latent Space of Generative Networks | BkILtntlz | 1,511,799,117,545 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 4: Ok but not good enough - rejection | Summary: The authors observe that the success of GANs can be attributed to two factors: leveraging the inductive bias of deep CNNs and the adversarial training protocol. In order to disentangle the factors of success, they propose to eliminate the adversarial training protocol while maintaining the first factor. Th... | OPTIMIZING THE LATENT SPACE OF GENERATIVE NETWORKS
ryj38zWRb | Optimizing the Latent Space of Generative Networks | HyE2oHixz | 1,511,902,124,122 | 3: The reviewer is fairly confident that the evaluation is correct | 6: Marginally above acceptance threshold | The paper is well written and easy to follow. I find the results very interesting. In particular, the paper shows that many properties of GAN (or generative) models (e.g. interpolation, feature arithmetic) are in large part a result of the inductive bias of deep CNNs and can be obtained with simple reconstruction losse... | Official review
ryj0790hb | Incremental Learning through Deep Adaptation | HyK6w83xM | 1,511,970,753,360 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 5: Marginally below acceptance threshold | ----------------- Summary -----------------
The paper tackles the problem of task-incremental learning using deep networks. It devises an architecture and a training procedure aiming for some desirable properties; a) it does not require retraining using previous tasks’ data, b) the number of network parameters grows on... | Extensive experiments but inconclusive for the main message (task-incremental learning) |
ryj0790hb | Incremental Learning through Deep Adaptation | BJJTve9gM | 1,511,815,094,629 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 6: Marginally above acceptance threshold | This paper proposes to adapt convnet representations to new tasks while avoiding catastrophic forgetting by learning a per-task “controller” specifying weightings of the convolutional filters throughout the network while keeping the filters themselves fixed.
Pros
The proposed approach is novel and broadly applicabl... | - |
ryj0790hb | Incremental Learning through Deep Adaptation | HyOveS5gf | 1,511,833,696,471 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 4: Ok but not good enough - rejection | This paper proposes a new idea of using controller modules for incremental learning. Instead of finetuning the whole network, only the added parameters of the controller modules are learned while the output of the old task stays the same. Experiments are conducted on multiple image classification datasets.
I found the id... | Interesting idea but missing some simple baselines. |
ryiAv2xAZ | Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples | By_HQdCeG | 1,512,108,863,682 | 3: The reviewer is fairly confident that the evaluation is correct | 6: Marginally above acceptance threshold | This paper proposes a new method of detecting in- vs. out-of-distribution samples. Most existing approaches for this deal with detecting out-of-distribution samples at *test time* by augmenting input data and/or temperature scaling the softmax and applying a simple classification rule based on the output. This paper proposes a... | simple, effective method, some discussion/understanding missing
ryiAv2xAZ | Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples | B1ja8-9lf | 1,511,818,946,645 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 6: Marginally above acceptance threshold | I have read the authors' reply. In response to the authors' comprehensive reply and feedback, I upgrade my score to 6.
-----------------------------
This paper presents a novel approach to calibrate classifiers for out-of-distribution samples. In addition to the original cross entropy loss, the “confidence loss” was prop... | Interesting idea, but not yet convinced
ryiAv2xAZ | Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples | B1klq-5lG | 1,511,819,751,148 | 3: The reviewer is fairly confident that the evaluation is correct | 7: Good paper, accept | The manuscript proposes a generative approach to detect which samples are within vs. out of the sample space of the training distribution. This distribution is used to adjust the classifier so it makes confident predictions within sample, and less confident predictions out of sample, where presumably it is prone to mis... | interesting idea for robust classification |
ryepFJbA- | On Convergence and Stability of GANs | SyYO2aIlG | 1,511,607,408,697 | 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature | 4: Ok but not good enough - rejection | This paper addresses the well-known stability problem encountered when training GANs. As many other papers, they suggest adding a regularization penalty on the discriminator which penalizes the gradient with respect to the data, effectively linearizing the data manifold.
Relevance: Although I think some of the empiric... | Rather incremental work, I doubt the scientific contribution is significant |
ryepFJbA- | On Convergence and Stability of GANs | Hkd3vAUeG | 1,511,610,288,102 | 3: The reviewer is fairly confident that the evaluation is correct | 3: Clear rejection | This paper contains a collection of ideas about Generative Adversarial Networks (GAN) but it is very hard for me to get the main point of this paper. I am not saying ideas are not interesting, but I think the author needs to choose the main point of the paper, and should focus on delivering in-depth studies on the main... | Lack of the main point |
ryepFJbA- | On Convergence and Stability of GANs | ByPQQOX1G | 1,510,339,359,074 | 2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper | 5: Marginally below acceptance threshold | Summary
========
The authors present a new regularization term, inspired by game theory, which encourages the discriminator's gradient to have a norm equal to one. This reduces the number of local minima, so that the behavior of the optimization scheme gets closer to the optimization of a zero-sum game with ... | A simple regularization term for training GANs is introduced, with good numerical performance.
rye7IMbAZ | Explicit Induction Bias for Transfer Learning with Convolutional Networks | BJQD_I_eM | 1,511,708,763,402 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 6: Marginally above acceptance threshold | The paper proposes an analysis on different adaptive regularization techniques for deep transfer learning.
Specifically, it focuses on the use of an L2-SP condition that constrains the new parameters to be close to the
ones previously learned when solving a source task.
+ The paper is easy to read and well organized... | well written, needs more comparisons/analysis |
rye7IMbAZ | Explicit Induction Bias for Transfer Learning with Convolutional Networks | ryD53e9xG | 1,511,816,334,711 | 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature | 6: Marginally above acceptance threshold | This work addresses the scenario of fine-tuning a pre-trained network for new data/tasks and empirically studies various regularization techniques. Overall, the evaluation concludes with recommending that all layers of a network whose weights are directly transferred during fine-tuning should be regularized against the... | A reasonably thorough study of regularization techniques for transfer learning through fine-tuning |
rye7IMbAZ | Explicit Induction Bias for Transfer Learning with Convolutional Networks | Hku7RS6lf | 1,512,033,824,456 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 7: Good paper, accept | The paper addresses the problem of transfer learning in deep networks. A pretrained network on a large dataset exists, what is the best way to retrain the model on a new small dataset?
It argues that the standard regularization done in conventional fine-tuning procedures is not optimal, since it tries to get the param... | limited novelty, but consistently improving fine-tuning |
rydeCEhs- | SMASH: One-Shot Model Architecture Search through HyperNetworks | SycMimAgG | 1,512,090,386,109 | 2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper | 6: Marginally above acceptance threshold | This paper tackles the problem of finding an optimal architecture for deep neural nets. They propose to solve it by training an auxiliary HyperNet to generate the main model. The authors propose the so-called "SMASH" algorithm that ranks the neural net architectures based on their validation error. The authors adopt a... | The paper tackles an important problem on learning neural net architectures that outperforms comparable methods and is reasonably faster
rydeCEhs- | SMASH: One-Shot Model Architecture Search through HyperNetworks | rJ200-5ez | 1,511,821,012,458 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 7: Good paper, accept | This paper is about a new experimental technique for exploring different neural architectures. It is well-written in general, numerical experiments demonstrate the framework and its capabilities as well as its limitations.
A disadvantage of the approach may be that the search for architectures is random. It would be ... | An experimental framework for designing neural architectures |
rydeCEhs- | SMASH: One-Shot Model Architecture Search through HyperNetworks | SkmGrjvlz | 1,511,662,859,420 | 3: The reviewer is fairly confident that the evaluation is correct | 7: Good paper, accept | Summary of paper - This paper presents SMASH (or the one-Shot Model Architecture Search through Hypernetworks) which has two training phases (one to quickly train a random sample of network architectures and one to train the best architecture from the first stage). The paper presents a number of interesting experiments... | Well written paper that introduces and applies SMASH framework with some experimental success |
rybDdHe0Z | Sequence Transfer Learning for Neural Decoding | S1D3Hb9eM | 1,511,818,671,426 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 3: Clear rejection | This work addresses brain state decoding (intent to move) based on intra-cranial "electrocorticography (ECoG) grids". ECoG signals are generally of much higher quality than more conventional EEG signals acquired on the scalp, hence it appears meaningful to invest significant effort to decode.
Preprocessing is only d... | Application of LSTM to decoding of neural signals, limited novelty, inconclusive |
rybDdHe0Z | Sequence Transfer Learning for Neural Decoding | HJ_bsmPxG | 1,511,631,616,458 | 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature | 4: Ok but not good enough - rejection | The paper describes an approach to use LSTMs for finger classification based on ECoG, and a transfer learning extension of which two variations exist. From the presented results, the LSTM model is not an improvement over a basic linear model. The transfer learning models perform better than subject-specific models o... | Difficult problem, some aspects are unclear, evaluation could be improved
rybDdHe0Z | Sequence Transfer Learning for Neural Decoding | HJmBCpKeG | 1,511,804,475,119 | 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature | 6: Marginally above acceptance threshold | The ms applies an LSTM on ECoG data and studies transfer between subjects etc.
The data includes only a few samples per class. The validation procedure to obtain the model accuracy is a bit iffy.
The ms says: the test data contains 'at least 2 samples per class'. Data of the type analysed is highly dependent, so it is n... | LSTMs for ECoG
rybAWfx0b | COLD FUSION: TRAINING SEQ2SEQ MODELS TOGETHER WITH LANGUAGE MODELS | B10RWItgz | 1,511,772,630,427 | 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature | 5: Marginally below acceptance threshold | This paper presents a simple but effective approach to utilize language model information in a seq2seq framework. The experimental results show improvement for both baseline and adaptation scenarios.
Pros:
The approach is adapted from deep fusion but the results are promising, especially for the off-domain setup. The a... | Review |
rybAWfx0b | COLD FUSION: TRAINING SEQ2SEQ MODELS TOGETHER WITH LANGUAGE MODELS | Sy0xMaHlG | 1,511,539,189,778 | 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature | 5: Marginally below acceptance threshold | The paper proposes a novel approach to integrate a language model (LM) to a seq2seq based speech recognition system (ASR). The LM is pretrained on separate data (presumably larger, potentially not the same exact distribution). It has a similar flavor as DeepFusion (DF), a previous work which also integrated an LM to a ... | review |
rybAWfx0b | COLD FUSION: TRAINING SEQ2SEQ MODELS TOGETHER WITH LANGUAGE MODELS | ryGQ4uugM | 1,511,715,865,734 | 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature | 6: Marginally above acceptance threshold | The paper proposes a new way of integrating a language model into a seq2seq network: instead of adding the language model only during decoding, the model has access to a pretrained language model during training too. This makes the training and testing conditions more similar. Moreover, only the logits of the pretraine... | Better integration of language models into sequence-to-sequence networks.
ryb83alCZ | Towards Unsupervised Classification with Deep Generative Models | SJk7H29xM | 1,511,863,574,587 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 4: Ok but not good enough - rejection | This paper addresses the question of unsupervised clustering with high classification performance. They propose a deep variational autoencoder architecture with categorical latent variables at the deepest layer and propose to train it with modifications of the standard variational approach with reparameterization gradi... | Apparently impressive result, but very little novelty |
ryb83alCZ | Towards Unsupervised Classification with Deep Generative Models | BkmqxxDbz | 1,512,665,227,546 | 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature | 4: Ok but not good enough - rejection | Summary
The authors propose a hierarchical generative model with both continuous and discrete latent variables. The authors empirically demonstrate that the latent space of their model separates healthy vs. pathological cells well in a dataset for chronic lymphocytic leukemia (CLL) diagnostics.
Main
Overall the pap... | Interesting results, weak novelty, unjustified model choices. |
ryb83alCZ | Towards Unsupervised Classification with Deep Generative Models | SyangtilG | 1,511,915,700,689 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 4: Ok but not good enough - rejection | The authors propose a deep hierarchical model for unsupervised classification by using a combination of latent continuous and discrete distributions.
Although the detailed descriptions of flow cytometry and chronic lymphocytic leukemia are appreciated, they are probably out of the scope of the paper or not relevant fo... | TOWARDS UNSUPERVISED CLASSIFICATION WITH DEEP GENERATIVE MODELS
ryazCMbR- | Communication Algorithms via Deep Learning | S1PB3Ocef | 1,511,849,022,677 | 4: The reviewer is confident but not absolutely certain that the evaluation is correct | 6: Marginally above acceptance threshold | Error-correcting codes constitute a well-researched area of study within communication engineering. In communication, messages that are to be transmitted are encoded into binary vectors called codewords that contain some redundancy. The codewords are then transmitted over a channel that has some random noise. At the r... | An interesting paper that brings in the tools of recursive neural networks to error-correcting codes for communication
End of preview.