Dataset Viewer
Auto-converted to Parquet
| Column | Type | Preview summary |
|---|---|---|
| paper_id | string | 10 distinct values |
| title | string | 10 distinct values |
| abstract | string | 10 distinct values |
| full_text | string | 10 distinct values |
| authors | string | 10 distinct values |
| decision | string | 1 distinct value |
| year | int64 | 2016 (all rows) |
| api_raw_submission | string | 10 distinct values |
| review | string | 1 distinct value |
| reviewer_id | string | 1 distinct value |
| rating | null | n/a |
| confidence | null | n/a |
| api_raw_review | string | 1 distinct value |
| criteria_count | dict | n/a |
| reward_value | int64 | -10 (all rows) |
| reward_value_length_adjusted | float64 | -2.67 (all rows) |
| length_penalty | float64 | 2.67 (all rows) |
| reward_u | float64 | 0 (all rows) |
| reward_h | float64 | 0 (all rows) |
| meteor_score | float64 | 0 (all rows) |
| criticism | int64 | 0 (all rows) |
| example | int64 | 0 (all rows) |
| importance_and_relevance | int64 | 0 (all rows) |
| materials_and_methods | int64 | 0 (all rows) |
| praise | int64 | 0 (all rows) |
| presentation_and_reporting | int64 | 0 (all rows) |
| results_and_discussion | int64 | 0 (all rows) |
| suggestion_and_solution | int64 | 0 (all rows) |
| dimension_scores | dict | n/a |
| overall_score | int64 | -10 (all rows) |
| source | string | 1 distinct value |
| review_src | string | 1 distinct value |
| relative_rank | int64 | 0 (all rows) |
| win_prob | float64 | 0 (all rows) |
| thinking_trace | string | 1 distinct value |
| prompt | string | 1 distinct value |
| prompt_length | int64 | 0 (all rows) |
| conversations | null | n/a |
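The eight rhetorical-category columns above (criticism through suggestion_and_solution) also appear aggregated inside the criteria_count dict, which carries an extra "total" key in the preview rows. A minimal sketch of checking that consistency for one flattened row, using only the standard library; the row values below are copied from the preview, and the check_row helper is a hypothetical name introduced here, not part of the dataset:

```python
import json

# Rhetorical-category columns from the schema above.
CATEGORIES = [
    "criticism", "example", "importance_and_relevance", "materials_and_methods",
    "praise", "presentation_and_reporting", "results_and_discussion",
    "suggestion_and_solution",
]

# criteria_count as shown in the preview rows (a dict with a "total" key).
criteria_count = json.loads(
    '{"criticism": 0, "example": 0, "importance_and_relevance": 0,'
    ' "materials_and_methods": 0, "praise": 0, "presentation_and_reporting": 0,'
    ' "results_and_discussion": 0, "suggestion_and_solution": 0, "total": 0}'
)

def check_row(row: dict) -> bool:
    """Return True if the flat per-category columns agree with criteria_count."""
    counts = row["criteria_count"]
    flat_total = sum(row[c] for c in CATEGORIES)
    return (all(counts[c] == row[c] for c in CATEGORIES)
            and counts["total"] == flat_total)

# One preview row, flattened (values copied from the samples below).
row = {c: 0 for c in CATEGORIES}
row["criteria_count"] = criteria_count
print(check_row(row))  # True for the all-zero preview rows
```

All ten preview rows are all-zero, so the check is trivially satisfied here; the same helper would flag any row where the flat columns drift from the aggregated dict.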
paper_id: v1LHecOShq
title: The Goldilocks Principle: Reading Children's Books with Explicit Memory Representations
abstract: We introduce a new test of how well language models capture meaning in children's books. Unlike standard language modelling benchmarks, it distinguishes the task of predicting syntactic function words from that of predicting lower-frequency words, which carry greater semantic content. We compare a range of state-of-the...
full_text: Published as a conference paper at ICLR 2016 THE GOLDILOCKS PRINCIPLE: READING CHILDREN'S BOOKS WITH EXPLICIT MEMORY REPRESENTATIONS Felix Hill, Antoine Bordes, Sumit Chopra & Jason Weston Facebook AI Research 770 Broadway New York, USA felix.hill@cl.cam.ac.uk, {abordes,spchopra,jase}@fb.com ABSTRACT We introduce a ...
authors: Felix Hill, Antoine Bordes, Sumit Chopra, Jason Weston
decision: Unknown
year: 2016
api_raw_submission: {"id": "v1LHecOShq", "original": null, "cdate": 1451606400000, "pdate": 1451606400000, "odate": null, "mdate": 1684320083542, "tcdate": 1684320083542, "tmdate": 1693224597811, "ddate": null, "number": 834624, "content": {"venue": "ICLR 2016", "venueid": "dblp.org/journals/CORR/2016", "_bibtex": "@inproceedings{DBLP:jou...
review: N/A
rating: null
confidence: null
api_raw_review: {}
criteria_count: { "criticism": 0, "example": 0, "importance_and_relevance": 0, "materials_and_methods": 0, "praise": 0, "presentation_and_reporting": 0, "results_and_discussion": 0, "suggestion_and_solution": 0, "total": 0 }
reward_value: -10
reward_value_length_adjusted: -2.667231
length_penalty: 2.667231
reward_u: 0
reward_h: 0
meteor_score: 0
criticism: 0
example: 0
importance_and_relevance: 0
materials_and_methods: 0
praise: 0
presentation_and_reporting: 0
results_and_discussion: 0
suggestion_and_solution: 0
dimension_scores: { "criticism": 0, "example": 0, "importance_and_relevance": 0, "materials_and_methods": 0, "praise": 0, "presentation_and_reporting": 0, "results_and_discussion": 0, "suggestion_and_solution": 0 }
overall_score: -10
source: iclr2016
review_src: openreview
relative_rank: 0
win_prob: 0
prompt_length: 0
conversations: null
paper_id: UrbYiH8ZTcX
title: Net2Net: Accelerating Learning via Knowledge Transfer
abstract: We introduce techniques for rapidly transferring the information stored in one neural net into another neural net. The main purpose is to accelerate the training of a significantly larger neural net. During real-world workflows, one often trains very many different neural networks during the experimentation and design ...
full_text: Published as a conference paper at ICLR 2016 Net2Net: ACCELERATING LEARNING VIA KNOWLEDGE TRANSFER Tianqi Chen, Ian Goodfellow, and Jonathon Shlens Google Inc., Mountain View, CA tqchen@cs.washington.edu, {goodfellow,shlens}@google.com ABSTRACT We introduce techniques for rapidly transferring the information stored in...
authors: Tianqi Chen, Ian J. Goodfellow, Jonathon Shlens
decision: Unknown
year: 2016
api_raw_submission: {"id": "UrbYiH8ZTcX", "original": null, "cdate": 1451606400000, "pdate": 1451606400000, "odate": null, "mdate": 1683900409470, "tcdate": 1683900409470, "tmdate": 1692992596357, "ddate": null, "number": 739947, "content": {"venue": "ICLR 2016", "venueid": "dblp.org/journals/CORR/2016", "_bibtex": "@inproceedings{DBLP:jo...
review: N/A
rating: null
confidence: null
api_raw_review: {}
criteria_count: { "criticism": 0, "example": 0, "importance_and_relevance": 0, "materials_and_methods": 0, "praise": 0, "presentation_and_reporting": 0, "results_and_discussion": 0, "suggestion_and_solution": 0, "total": 0 }
reward_value: -10
reward_value_length_adjusted: -2.667231
length_penalty: 2.667231
reward_u: 0
reward_h: 0
meteor_score: 0
criticism: 0
example: 0
importance_and_relevance: 0
materials_and_methods: 0
praise: 0
presentation_and_reporting: 0
results_and_discussion: 0
suggestion_and_solution: 0
dimension_scores: { "criticism": 0, "example": 0, "importance_and_relevance": 0, "materials_and_methods": 0, "praise": 0, "presentation_and_reporting": 0, "results_and_discussion": 0, "suggestion_and_solution": 0 }
overall_score: -10
source: iclr2016
review_src: openreview
relative_rank: 0
win_prob: 0
prompt_length: 0
conversations: null
paper_id: avqvQm4kloh
title: Variational Gaussian Process
abstract: Variational inference is a powerful tool for approximate inference, and it has been recently applied for representation learning with deep generative models. We develop the variational Gaussian process (VGP), a Bayesian nonparametric variational family, which adapts its shape to match complex posterior distributions. T...
full_text: Published as a conference paper at ICLR 2016 THE VARIATIONAL GAUSSIAN PROCESS Dustin Tran Harvard University dtran@g.harvard.edu Rajesh Ranganath Princeton University rajeshr@cs.princeton.edu David M. Blei Columbia University david.blei@columbia.edu ABSTRACT Variational inference is a powerful tool for approximate infe...
authors: Dustin Tran, Rajesh Ranganath, David M. Blei
decision: Unknown
year: 2016
api_raw_submission: {"id": "avqvQm4kloh", "original": null, "cdate": 1451606400000, "pdate": null, "odate": null, "mdate": null, "tcdate": 1590634476171, "tmdate": 1667891260955, "ddate": null, "number": 129577, "content": {"venue": "ICLR 2016", "venueid": "dblp.org/journals/CORR/2016", "_bibtex": "@inproceedings{DBLP:journals/corr/TranRB...
review: N/A
rating: null
confidence: null
api_raw_review: {}
criteria_count: { "criticism": 0, "example": 0, "importance_and_relevance": 0, "materials_and_methods": 0, "praise": 0, "presentation_and_reporting": 0, "results_and_discussion": 0, "suggestion_and_solution": 0, "total": 0 }
reward_value: -10
reward_value_length_adjusted: -2.667231
length_penalty: 2.667231
reward_u: 0
reward_h: 0
meteor_score: 0
criticism: 0
example: 0
importance_and_relevance: 0
materials_and_methods: 0
praise: 0
presentation_and_reporting: 0
results_and_discussion: 0
suggestion_and_solution: 0
dimension_scores: { "criticism": 0, "example": 0, "importance_and_relevance": 0, "materials_and_methods": 0, "praise": 0, "presentation_and_reporting": 0, "results_and_discussion": 0, "suggestion_and_solution": 0 }
overall_score: -10
source: iclr2016
review_src: openreview
relative_rank: 0
win_prob: 0
prompt_length: 0
conversations: null
paper_id: TXCg7dVp8Rd
title: Generating Images from Captions with Attention
abstract: Motivated by the recent progress in generative models, we introduce a model that generates images f(...TRUNCATED)
full_text: Published as a conference paper at ICLR 2016 GENERATING IMAGES FROM CAPTIONS WITH ATTENTION Elma(...TRUNCATED)
authors: Elman Mansimov, Emilio Parisotto, Lei Jimmy Ba, Ruslan Salakhutdinov
decision: Unknown
year: 2016
api_raw_submission: {"id": "TXCg7dVp8Rd", "original": null, "cdate": 1451606400000, "pdate": null, "odate":(...TRUNCATED)
review: N/A
rating: null
confidence: null
api_raw_review: {}
criteria_count: {"criticism":0,"example":0,"importance_and_relevance":0,"materials_and_methods":0,"praise":0,"presen(...TRUNCATED)
reward_value: -10
reward_value_length_adjusted: -2.667231
length_penalty: 2.667231
reward_u: 0
reward_h: 0
meteor_score: 0
criticism: 0
example: 0
importance_and_relevance: 0
materials_and_methods: 0
praise: 0
presentation_and_reporting: 0
results_and_discussion: 0
suggestion_and_solution: 0
dimension_scores: {"criticism":0,"example":0,"importance_and_relevance":0,"materials_and_methods":0,"praise":0,"presen(...TRUNCATED)
overall_score: -10
source: iclr2016
review_src: openreview
relative_rank: 0
win_prob: 0
prompt_length: 0
conversations: null
paper_id: J5UHcKp8gNm
title: Regularizing RNNs by Stabilizing Activations
abstract: We stabilize the activations of Recurrent Neural Networks (RNNs) by penalizing the squared distance(...TRUNCATED)
full_text: Published as a conference paper at ICLR 2016 REGULARIZING RNNS BY STABILIZING ACTIVATIONS David (...TRUNCATED)
authors: David Krueger, Roland Memisevic
decision: Unknown
year: 2016
api_raw_submission: {"id": "J5UHcKp8gNm", "original": null, "cdate": 1451606400000, "pdate": null, "odate":(...TRUNCATED)
review: N/A
rating: null
confidence: null
api_raw_review: {}
criteria_count: {"criticism":0,"example":0,"importance_and_relevance":0,"materials_and_methods":0,"praise":0,"presen(...TRUNCATED)
reward_value: -10
reward_value_length_adjusted: -2.667231
length_penalty: 2.667231
reward_u: 0
reward_h: 0
meteor_score: 0
criticism: 0
example: 0
importance_and_relevance: 0
materials_and_methods: 0
praise: 0
presentation_and_reporting: 0
results_and_discussion: 0
suggestion_and_solution: 0
dimension_scores: {"criticism":0,"example":0,"importance_and_relevance":0,"materials_and_methods":0,"praise":0,"presen(...TRUNCATED)
overall_score: -10
source: iclr2016
review_src: openreview
relative_rank: 0
win_prob: 0
prompt_length: 0
conversations: null
paper_id: BAls_GkIre9
title: Neural Networks with Few Multiplications
abstract: For most deep learning algorithms training is notoriously time consuming. Since most of the computa(...TRUNCATED)
full_text: Published as a conference paper at ICLR 2016 NEURAL NETWORKS WITH FEW MULTIPLICATIONS Zhouhan Lin(...TRUNCATED)
authors: Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio
decision: Unknown
year: 2016
api_raw_submission: {"id": "BAls_GkIre9", "original": null, "cdate": 1451606400000, "pdate": null, "odate":(...TRUNCATED)
review: N/A
rating: null
confidence: null
api_raw_review: {}
criteria_count: {"criticism":0,"example":0,"importance_and_relevance":0,"materials_and_methods":0,"praise":0,"presen(...TRUNCATED)
reward_value: -10
reward_value_length_adjusted: -2.667231
length_penalty: 2.667231
reward_u: 0
reward_h: 0
meteor_score: 0
criticism: 0
example: 0
importance_and_relevance: 0
materials_and_methods: 0
praise: 0
presentation_and_reporting: 0
results_and_discussion: 0
suggestion_and_solution: 0
dimension_scores: {"criticism":0,"example":0,"importance_and_relevance":0,"materials_and_methods":0,"praise":0,"presen(...TRUNCATED)
overall_score: -10
source: iclr2016
review_src: openreview
relative_rank: 0
win_prob: 0
prompt_length: 0
conversations: null
paper_id: KrT5Ba_VQP
title: Towards Universal Paraphrastic Sentence Embeddings
abstract: We consider the problem of learning general-purpose, paraphrastic sentence embeddings based on supe(...TRUNCATED)
full_text: arXiv:1511.08198v3 [cs.CL] 4 Mar 2016 Published as a conference paper at ICLR 2016 TOWARDS UNIV(...TRUNCATED)
authors: John Wieting, Mohit Bansal, Kevin Gimpel, Karen Livescu
decision: Unknown
year: 2016
api_raw_submission: {"id": "KrT5Ba_VQP", "original": null, "cdate": 1451606400000, "pdate": null, "odate": (...TRUNCATED)
review: N/A
rating: null
confidence: null
api_raw_review: {}
criteria_count: {"criticism":0,"example":0,"importance_and_relevance":0,"materials_and_methods":0,"praise":0,"presen(...TRUNCATED)
reward_value: -10
reward_value_length_adjusted: -2.667231
length_penalty: 2.667231
reward_u: 0
reward_h: 0
meteor_score: 0
criticism: 0
example: 0
importance_and_relevance: 0
materials_and_methods: 0
praise: 0
presentation_and_reporting: 0
results_and_discussion: 0
suggestion_and_solution: 0
dimension_scores: {"criticism":0,"example":0,"importance_and_relevance":0,"materials_and_methods":0,"praise":0,"presen(...TRUNCATED)
overall_score: -10
source: iclr2016
review_src: openreview
relative_rank: 0
win_prob: 0
prompt_length: 0
conversations: null
paper_id: aCWnhi8JVKGz
title: Density Modeling of Images using a Generalized Normalization Transformation
abstract: We introduce a parametric nonlinear transformation that is well-suited for Gaussianizing data from (...TRUNCATED)
full_text: Published as a conference paper at ICLR 2016 DENSITY MODELING OF IMAGES USING A GENERALIZED NORMA(...TRUNCATED)
authors: Jona Ballé, Valero Laparra, Eero P. Simoncelli
decision: Unknown
year: 2016
api_raw_submission: {"id": "aCWnhi8JVKGz", "original": null, "cdate": 1451606400000, "pdate": 1451606400000, (...TRUNCATED)
review: N/A
rating: null
confidence: null
api_raw_review: {}
criteria_count: {"criticism":0,"example":0,"importance_and_relevance":0,"materials_and_methods":0,"praise":0,"presen(...TRUNCATED)
reward_value: -10
reward_value_length_adjusted: -2.667231
length_penalty: 2.667231
reward_u: 0
reward_h: 0
meteor_score: 0
criticism: 0
example: 0
importance_and_relevance: 0
materials_and_methods: 0
praise: 0
presentation_and_reporting: 0
results_and_discussion: 0
suggestion_and_solution: 0
dimension_scores: {"criticism":0,"example":0,"importance_and_relevance":0,"materials_and_methods":0,"praise":0,"presen(...TRUNCATED)
overall_score: -10
source: iclr2016
review_src: openreview
relative_rank: 0
win_prob: 0
prompt_length: 0
conversations: null
paper_id: 8a0W2Dw0sFD
title: Convergent Learning: Do different neural networks learn the same representations?
abstract: Recent success in training deep neural networks have prompted active investigation into the feature(...TRUNCATED)
full_text: Published as a conference paper at ICLR 2016 CONVERGENT LEARNING: DO DIFFERENT NEURAL NETWORKS (...TRUNCATED)
authors: Yixuan Li, Jason Yosinski, Jeff Clune, Hod Lipson, John E. Hopcroft
decision: Unknown
year: 2016
api_raw_submission: {"id": "8a0W2Dw0sFD", "original": null, "cdate": 1451606400000, "pdate": 1451606400000, (...TRUNCATED)
review: N/A
rating: null
confidence: null
api_raw_review: {}
criteria_count: {"criticism":0,"example":0,"importance_and_relevance":0,"materials_and_methods":0,"praise":0,"presen(...TRUNCATED)
reward_value: -10
reward_value_length_adjusted: -2.667231
length_penalty: 2.667231
reward_u: 0
reward_h: 0
meteor_score: 0
criticism: 0
example: 0
importance_and_relevance: 0
materials_and_methods: 0
praise: 0
presentation_and_reporting: 0
results_and_discussion: 0
suggestion_and_solution: 0
dimension_scores: {"criticism":0,"example":0,"importance_and_relevance":0,"materials_and_methods":0,"praise":0,"presen(...TRUNCATED)
overall_score: -10
source: iclr2016
review_src: openreview
relative_rank: 0
win_prob: 0
prompt_length: 0
conversations: null
paper_id: Ilo2gNofdm
title: The Variational Fair Autoencoder
abstract: We investigate the problem of learning representations that are invariant to certain nuisance or se(...TRUNCATED)
full_text: Published as a conference paper at ICLR 2016 THE VARIATIONAL FAIR AUTOENCODER Christos Louizos(...TRUNCATED)
authors: Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, Richard S. Zemel
decision: Unknown
year: 2016
api_raw_submission: {"id": "Ilo2gNofdm", "original": null, "cdate": 1451606400000, "pdate": null, "odate": (...TRUNCATED)
review: N/A
rating: null
confidence: null
api_raw_review: {}
criteria_count: {"criticism":0,"example":0,"importance_and_relevance":0,"materials_and_methods":0,"praise":0,"presen(...TRUNCATED)
reward_value: -10
reward_value_length_adjusted: -2.667231
length_penalty: 2.667231
reward_u: 0
reward_h: 0
meteor_score: 0
criticism: 0
example: 0
importance_and_relevance: 0
materials_and_methods: 0
praise: 0
presentation_and_reporting: 0
results_and_discussion: 0
suggestion_and_solution: 0
dimension_scores: {"criticism":0,"example":0,"importance_and_relevance":0,"materials_and_methods":0,"praise":0,"presen(...TRUNCATED)
overall_score: -10
source: iclr2016
review_src: openreview
relative_rank: 0
win_prob: 0
prompt_length: 0
conversations: null
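Each api_raw_submission cell is a JSON string holding the raw OpenReview record (truncated in the viewer). A sketch of extracting the submission id and venue with the standard library; the blob below is reconstructed from the visible prefix of the first sample row only, so treat its exact shape as an assumption rather than the full stored value:

```python
import json

# Minimal stand-in for one api_raw_submission cell, built from the keys
# visible in the first preview row; the real cell carries many more fields.
api_raw_submission = json.dumps({
    "id": "v1LHecOShq",
    "cdate": 1451606400000,
    "content": {"venue": "ICLR 2016", "venueid": "dblp.org/journals/CORR/2016"},
})

# Cells arrive as strings, so decode before indexing into them.
record = json.loads(api_raw_submission)
print(record["id"], record["content"]["venue"])  # v1LHecOShq ICLR 2016
```

The same decode-then-index pattern applies to api_raw_review, though in these preview rows that column only holds the empty object "{}".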
Downloads last month: 29