# NeurIPS Papers Dataset

This dataset contains information about NeurIPS conference paper submissions, including peer reviews, author rebuttals, and decision outcomes across multiple years.
## Files

- `dataset.csv`: Main dataset file containing all paper submission data
## Dataset Structure

The CSV file contains the following columns:

- `title`: Paper title
- `paper_decision`: Decision outcome (Accept/Reject with specific categories)
- `review_1`, `review_2`, etc.: Peer reviews from different reviewers
- `rebuttals_1`, `rebuttals_2`, etc.: Author rebuttals responding to reviews
- `global_rebuttals`: Overall author responses
- `dataset_source`: Source of the data
- `conference_year`: Year of the conference
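As a sketch of how this schema looks in practice, the snippet below builds a minimal DataFrame following the documented columns (the two rows are invented placeholders, not real submissions) and counts how many reviews each paper has:

```python
import pandas as pd

# Minimal stand-in for the real dataset: two invented rows that follow
# the documented column schema (not actual NeurIPS submissions).
df = pd.DataFrame({
    "title": ["Paper A", "Paper B"],
    "paper_decision": ["Accept (poster)", "Reject"],
    "review_1": ["Summary: ...", "Summary: ..."],
    "review_2": ["Summary: ...", None],
    "rebuttals_1": ["Rebuttal 1: ...", None],
    "global_rebuttals": [None, None],
    "dataset_source": ["NeurIPS_2024_submissions_huggingface"] * 2,
    "conference_year": [2024, 2024],
})

# Count non-null review_* columns per paper
review_cols = [c for c in df.columns if c.startswith("review_")]
df["n_reviews"] = df[review_cols].notna().sum(axis=1)
print(df[["title", "paper_decision", "n_reviews"]])
```

The same `review_cols`/`notna()` pattern works unchanged on the full CSV, since papers with fewer reviewers simply have null entries in the higher-numbered `review_*` columns.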
## Usage

```python
import pandas as pd

# Load the dataset (the file shipped with this repository is dataset.csv)
df = pd.read_csv('dataset.csv')

# Example: print the first paper title
print(df['title'].iloc[0])

# Example: filter accepted papers
accepted_papers = df[df['paper_decision'].str.contains('Accept', na=False)]
print(f"Number of accepted papers: {len(accepted_papers)}")

# Example: analyze the decision distribution
decision_counts = df['paper_decision'].value_counts()
print(decision_counts)
```
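Beyond filtering, the review text itself can be parsed. Reviews in this dataset typically begin with a `Summary:` section followed by headers such as `Strengths:`; the sketch below extracts just the summary text (the `extract_summary` helper is invented here for illustration, not part of any library):

```python
# Hedged sketch: assumes reviews start with "Summary: ..." followed by
# section headers like "Strengths:". extract_summary is a hypothetical
# helper defined for this example.
def extract_summary(review: str) -> str:
    """Return the text of the leading 'Summary:' section, if present."""
    if not review or not review.startswith("Summary:"):
        return ""
    body = review[len("Summary:"):]
    # Cut the text at the next section header, when one exists
    for header in ("Strengths:", "Weaknesses:", "Questions:"):
        idx = body.find(header)
        if idx != -1:
            body = body[:idx]
    return body.strip()

review = "Summary: The paper studies capability elicitation. Strengths: Clear writing."
print(extract_summary(review))
```

Applied column-wise (e.g. `df['review_1'].map(extract_summary)`), this yields a clean summary field for downstream NLP work.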
## Sample Data Structure

Each row represents a paper submission with its associated reviews and rebuttals:

```
title: "Stress-Testing Capability Elicitation With Password-Locked Models"
paper_decision: "Accept (poster)"
review_1: "Summary: The paper studies whether fine-tuning can elicit..."
rebuttals_1: "Rebuttal 1: Thanks for the review! We are glad you found..."
...
```
## Data Statistics

- File size: ~287 MB
- Format: CSV (comma-separated values)
- Encoding: UTF-8
- Contents: paper reviews, rebuttals, and metadata from NeurIPS conferences
## Use Cases
This dataset is valuable for:
- Peer review analysis: Study patterns in academic peer review
- Natural language processing: Train models on academic text
- Research evaluation: Analyze correlation between reviews and acceptance
- Academic writing: Understand successful paper characteristics
- Sentiment analysis: Analyze reviewer sentiment and author responses
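As a toy illustration of the peer-review-analysis use case, the snippet below compares mean review length for accepted versus rejected papers (the rows are invented placeholders, not real dataset entries):

```python
import pandas as pd

# Invented mini-frame mirroring the dataset's columns, used only to
# demonstrate the accepted-vs-rejected comparison pattern.
df = pd.DataFrame({
    "paper_decision": ["Accept (poster)", "Accept (spotlight)", "Reject"],
    "review_1": [
        "Summary: solid work",
        "Summary: great results overall",
        "Summary: weak",
    ],
})

df["accepted"] = df["paper_decision"].str.contains("Accept", na=False)
df["review_len"] = df["review_1"].str.len()
print(df.groupby("accepted")["review_len"].mean())
```

On the real CSV, review length is of course only a crude proxy; the same `groupby` pattern extends to any per-review feature (sentiment scores, section counts, and so on).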
## Citation
If you use this dataset in your research, please cite appropriately and ensure compliance with NeurIPS terms of service.
## License
This dataset is released under the MIT License. Please ensure you have appropriate permissions to use this data and comply with NeurIPS's terms of service.