title | url | detail_url | authors | tags | abstract | pdf |
|---|---|---|---|---|---|---|
Learning to grok: Emergence of in-context learning and skill composition in modular arithmetic tasks | https://openreview.net/forum?id=aVh9KRZdRk | https://openreview.net/forum?id=aVh9KRZdRk | Tianyu He,Darshil Doshi,Aritra Das,Andrey Gromov | NIPS 2024,Oral | Large language models can solve tasks that were not present in the training set. This capability is believed to be due to in-context learning and skill composition. In this work, we study the emergence of in-context learning and skill composition in a collection of modular arithmetic tasks. Specifically, we consider a ... | https://openreview.net/pdf/5737b58d308dafc16130635934df4276a7a574aa.pdf |
Trading Place for Space: Increasing Location Resolution Reduces Contextual Capacity in Hippocampal Codes | https://openreview.net/forum?id=REIK4SZMJt | https://openreview.net/forum?id=REIK4SZMJt | Spencer Rooke,Zhaoze Wang,Ronald W Di Tullio,Vijay Balasubramanian | NIPS 2024,Oral | Many animals learn cognitive maps of their environment - a simultaneous representation of context, experience, and position. Place cells in the hippocampus, named for their explicit encoding of position, are believed to be a neural substrate of these maps, with place cell "remapping" explaining how this system can rep... | https://openreview.net/pdf/9753767cc23ca7180fd4278699c23a3b28c99199.pdf |
Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction | https://openreview.net/forum?id=gojL67CfS8 | https://openreview.net/forum?id=gojL67CfS8 | Keyu Tian,Yi Jiang,Zehuan Yuan,BINGYUE PENG,Liwei Wang | NIPS 2024,Oral | We present Visual AutoRegressive modeling (VAR), a new generation paradigm that redefines the autoregressive learning on images as coarse-to-fine "next-scale prediction" or "next-resolution prediction", diverging from the standard raster-scan "next-token prediction". This simple, intuitive methodology allows autoregres... | https://openreview.net/pdf/1366e6f25deff9942d17a853f81351d6caa8dcdf.pdf |
Cracking the Code of Juxtaposition: Can AI Models Understand the Humorous Contradictions | https://openreview.net/forum?id=bCMpdaQCNW | https://openreview.net/forum?id=bCMpdaQCNW | Zhe Hu,Tuo Liang,Jing Li,Yiren Lu,Yunlai Zhou,Yiran Qiao,Jing Ma,Yu Yin | NIPS 2024,Oral | Recent advancements in large vision language models have demonstrated remarkable proficiency across a wide range of tasks. Yet, these models still struggle with understanding the nuances of human humor through juxtaposition, particularly when it involves nonlinear narratives that underpin many jokes and humor cues. T... | https://openreview.net/pdf/1f618d0020c8650176d91ef4418ef3cea6151adb.pdf |
Human Expertise in Algorithmic Prediction | https://openreview.net/forum?id=wpGJ2AX6SZ | https://openreview.net/forum?id=wpGJ2AX6SZ | Rohan Alur,Manish Raghavan,Devavrat Shah | NIPS 2024,Oral | We introduce a novel framework for incorporating human expertise into algorithmic predictions. Our approach leverages human judgment to distinguish inputs which are *algorithmically indistinguishable*, or "look the same" to predictive algorithms. We argue that this framing clarifies the problem of human-AI collaborati... | https://openreview.net/pdf/4f5dc6075a84c5c600343c682e95020208b5f943.pdf |
Learning diffusion at lightspeed | https://openreview.net/forum?id=y10avdRFNK | https://openreview.net/forum?id=y10avdRFNK | Antonio Terpin,Nicolas Lanzetti,Martín Gadea,Florian Dorfler | NIPS 2024,Oral | Diffusion regulates numerous natural processes and the dynamics of many successful generative models. Existing models to learn the diffusion terms from observational data rely on complex bilevel optimization problems and model only the drift of the system. We propose a new simple model, JKOnet*, which bypasses the comp... | https://openreview.net/pdf/71e85a95e3f40ebd277c5df65f9dff3c748e2ddb.pdf |
Weisfeiler and Leman Go Loopy: A New Hierarchy for Graph Representational Learning | https://openreview.net/forum?id=9O2sVnEHor | https://openreview.net/forum?id=9O2sVnEHor | Raffaele Paolino,Sohir Maskey,Pascal Welke,Gitta Kutyniok | NIPS 2024,Oral | We introduce $r$-loopy Weisfeiler-Leman ($r$-$\ell$WL), a novel hierarchy of graph isomorphism tests and a corresponding GNN framework, $r$-$\ell$MPNN, that can count cycles up to length $r{+}2$. Most notably, we show that $r$-$\ell$WL can count homomorphisms of cactus graphs. This extends 1-WL, which can only count ho... | https://openreview.net/pdf/160b0368f27f6ae00575a4abc8d44870237c95f9.pdf |
Unlocking the Capabilities of Thought: A Reasoning Boundary Framework to Quantify and Optimize Chain-of-Thought | https://openreview.net/forum?id=pC44UMwy2v | https://openreview.net/forum?id=pC44UMwy2v | Qiguang Chen,Libo Qin,Jiaqi WANG,Jingxuan Zhou,Wanxiang Che | NIPS 2024,Oral | Chain-of-Thought (CoT) reasoning has emerged as a promising approach for enhancing the performance of large language models (LLMs) on complex reasoning tasks. Recently, a series of studies attempt to explain the mechanisms underlying CoT, aiming to deepen the understanding of its efficacy. Nevertheless, the existing re... | https://openreview.net/pdf/47a165ca745dea00bf9fe4ba52210932fb6d1787.pdf |
Reinforcement Learning Under Latent Dynamics: Toward Statistical and Algorithmic Modularity | https://openreview.net/forum?id=qf2uZAdy1N | https://openreview.net/forum?id=qf2uZAdy1N | Philip Amortila,Dylan J Foster,Nan Jiang,Akshay Krishnamurthy,Zakaria Mhammedi | NIPS 2024,Oral | Real-world applications of reinforcement learning often involve environments where agents operate on complex, high-dimensional observations, but the underlying (``latent'') dynamics are comparatively simple. However, beyond restrictive settings such as tabular latent dynamics, the fundamental statistical requireme... | https://openreview.net/pdf/17710a946394531d22cd1cf32e0a7fd7bac1e6ac.pdf |
Generalization Error Bounds for Two-stage Recommender Systems with Tree Structure | https://openreview.net/forum?id=m1a4CrRJR7 | https://openreview.net/forum?id=m1a4CrRJR7 | Jin Zhang,Ze Liu,Defu Lian,Enhong Chen | NIPS 2024,Oral | Two-stage recommender systems play a crucial role in efficiently identifying relevant items and personalizing recommendations from a vast array of options. This paper, based on an error decomposition framework, analyzes the generalization error for two-stage recommender systems with a tree structure, which consist of a... | https://openreview.net/pdf/0573ad42adbbc93100e6c898b23c116d78de695b.pdf |
Aligner: Efficient Alignment by Learning to Correct | https://openreview.net/forum?id=kq166jACVP | https://openreview.net/forum?id=kq166jACVP | Jiaming Ji,Boyuan Chen,Hantao Lou,Donghai Hong,Borong Zhang,Xuehai Pan,Tianyi Qiu,Juntao Dai,Yaodong Yang | NIPS 2024,Oral | With the rapid development of large language models (LLMs) and ever-evolving practical requirements, finding an efficient and effective alignment method has never been more critical. However, the tension between the complexity of current alignment methods and the need for rapid iteration in deployment scenarios necessi... | https://openreview.net/pdf/80ca837e0c7f9e0d8dbf5b1edefbdf611c8ded34.pdf |
Questioning the Survey Responses of Large Language Models | https://openreview.net/forum?id=Oo7dlLgqQX | https://openreview.net/forum?id=Oo7dlLgqQX | Ricardo Dominguez-Olmedo,Moritz Hardt,Celestine Mendler-Dünner | NIPS 2024,Oral | Surveys have recently gained popularity as a tool to study large language models. By comparing models’ survey responses to those of different human reference populations, researchers aim to infer the demographics, political opinions, or values best represented by current language models. In this work, we critically exa... | https://openreview.net/pdf/6a9813651d8de7fdc565ddb5dacecf057526a29a.pdf |
Stochastic Taylor Derivative Estimator: Efficient amortization for arbitrary differential operators | https://openreview.net/forum?id=J2wI2rCG2u | https://openreview.net/forum?id=J2wI2rCG2u | Zekun Shi,Zheyuan Hu,Min Lin,Kenji Kawaguchi | NIPS 2024,Oral | Optimizing neural networks with loss that contain high-dimensional and high-order differential operators is expensive to evaluate with back-propagation due to $\mathcal{O}(d^{k})$ scaling of the derivative tensor size and the $\mathcal{O}(2^{k-1}L)$ scaling in the computation graph, where $d$ is the dimension of the ... | https://openreview.net/pdf/525882bf51a6cb819e7762a437a606419814f5c7.pdf |
Do Finetti: On Causal Effects for Exchangeable Data | https://openreview.net/forum?id=4rCZeCZAON | https://openreview.net/forum?id=4rCZeCZAON | Siyuan Guo,Chi Zhang,Karthika Mohan,Ferenc Huszár,Bernhard Schölkopf | NIPS 2024,Oral | We study causal effect estimation in a setting where the data are not i.i.d.$\ $(independent and identically distributed). We focus on exchangeable data satisfying an assumption of independent causal mechanisms. Traditional causal effect estimation frameworks, e.g., relying on structural causal models and do-calculus, ... | https://openreview.net/pdf/8f348634669f055ea725df69d4de4fac31b49194.pdf |
LLM Evaluators Recognize and Favor Their Own Generations | https://openreview.net/forum?id=4NJBV6Wp0h | https://openreview.net/forum?id=4NJBV6Wp0h | Arjun Panickssery,Samuel R. Bowman,Shi Feng | NIPS 2024,Oral | Self-evaluation using large language models (LLMs) has proven valuable not only in benchmarking but also methods like reward modeling, constitutional AI, and self-refinement. But new biases are introduced due to the same LLM acting as both the evaluator and the evaluatee. One such bias is self-preference, where an LLM ... | https://openreview.net/pdf/17f3e3ce067de145352b0881a5a5a351cfcceac4.pdf |
Span-Based Optimal Sample Complexity for Weakly Communicating and General Average Reward MDPs | https://openreview.net/forum?id=pGEY8JQ3qx | https://openreview.net/forum?id=pGEY8JQ3qx | Matthew Zurek,Yudong Chen | NIPS 2024,Oral | We study the sample complexity of learning an $\varepsilon$-optimal policy in an average-reward Markov decision process (MDP) under a generative model. For weakly communicating MDPs, we establish the complexity bound $\widetilde{O}\left(SA\frac{\mathsf{H}}{\varepsilon^2} \right)$, where $\mathsf{H}$ is the span of the ... | https://openreview.net/pdf/2ff245e09d2ec82378e2aa6ffea57a9ec01c043c.pdf |
Learning Formal Mathematics From Intrinsic Motivation | https://openreview.net/forum?id=uNKlTQ8mBD | https://openreview.net/forum?id=uNKlTQ8mBD | Gabriel Poesia,David Broman,Nick Haber,Noah Goodman | NIPS 2024,Oral | How did humanity coax mathematics from the aether? We explore the Platonic view that mathematics can be discovered from its axioms---a game of conjecture and proof. We describe an agent that jointly learns to pose challenging problems for itself (conjecturing) and solve them (theorem proving). Given a mathematical doma... | https://openreview.net/pdf/42d3b14720041d447c657071a08de640733954a0.pdf |
Identification and Estimation of the Bi-Directional MR with Some Invalid Instruments | https://openreview.net/forum?id=S2P6KPLtm8 | https://openreview.net/forum?id=S2P6KPLtm8 | Feng Xie,Zhen Yao,Lin Xie,Yan Zeng,Zhi Geng | NIPS 2024,Oral | We consider the challenging problem of estimating causal effects from purely observational data in the bi-directional Mendelian randomization (MR), where some invalid instruments, as well as unmeasured confounding, usually exist. To address this problem, most existing methods attempt to find proper valid instrumental ... | https://openreview.net/pdf/7864b4bc0bd0c32d66af795cacadc545cbdd6432.pdf |
Maximum Entropy Inverse Reinforcement Learning of Diffusion Models with Energy-Based Models | https://openreview.net/forum?id=V0oJaLqY4E | https://openreview.net/forum?id=V0oJaLqY4E | Sangwoong Yoon,Himchan Hwang,Dohyun Kwon,Yung-Kyun Noh,Frank C. Park | NIPS 2024,Oral | We present a maximum entropy inverse reinforcement learning (IRL) approach for improving the sample quality of diffusion generative models, especially when the number of generation time steps is small. Similar to how IRL trains a policy based on the reward function learned from expert demonstrations, we train (or fine-... | https://openreview.net/pdf/fbd48eb1b53fd48de22ddd59edf0d18875315635.pdf |
Improving Environment Novelty Quantification for Effective Unsupervised Environment Design | https://openreview.net/forum?id=UdxpjKO2F9 | https://openreview.net/forum?id=UdxpjKO2F9 | Jayden Teoh,Wenjun Li,Pradeep Varakantham | NIPS 2024,Oral | Unsupervised Environment Design (UED) formalizes the problem of autocurricula through interactive training between a teacher agent and a student agent. The teacher generates new training environments with high learning potential, curating an adaptive curriculum that strengthens the student's ability to handle unseen sc... | https://openreview.net/pdf/395c3c5df43310736f6134ab07ff32330b2a8f45.pdf |
Enhancing Preference-based Linear Bandits via Human Response Time | https://openreview.net/forum?id=aIPwlkdOut | https://openreview.net/forum?id=aIPwlkdOut | Shen Li,Yuyang Zhang,Zhaolin Ren,Claire Liang,Na Li,Julie Shah | NIPS 2024,Oral | Interactive preference learning systems infer human preferences by presenting queries as pairs of options and collecting binary choices. Although binary choices are simple and widely used, they provide limited information about preference strength. To address this, we leverage human response times, which are inversely ... | https://openreview.net/pdf/b32d10afd0c5117bb0b9ac42cf07b7786e40cbd9.pdf |
Scale Equivariant Graph Metanetworks | https://openreview.net/forum?id=8Fxqn1tZM1 | https://openreview.net/forum?id=8Fxqn1tZM1 | Ioannis Kalogeropoulos,Giorgos Bouritsas,Yannis Panagakis | NIPS 2024,Oral | This paper pertains to an emerging machine learning paradigm: learning higher- order functions, i.e. functions whose inputs are functions themselves, particularly when these inputs are Neural Networks (NNs). With the growing interest in architectures that process NNs, a recurring design principle has permeated the fiel... | https://openreview.net/pdf/6d3b36cd5d6e1acb5d27b18b7da7333f5c075e0e.pdf |
CAT3D: Create Anything in 3D with Multi-View Diffusion Models | https://openreview.net/forum?id=TFZlFRl9Ks | https://openreview.net/forum?id=TFZlFRl9Ks | Ruiqi Gao,Aleksander Holynski,Philipp Henzler,Arthur Brussee,Ricardo Martin Brualla,Pratul P. Srinivasan,Jonathan T. Barron,Ben Poole | NIPS 2024,Oral | Advances in 3D reconstruction have enabled high-quality 3D capture, but require a user to collect hundreds to thousands of images to create a 3D scene. We present CAT3D, a method for creating anything in 3D by simulating this real-world capture process with a multi-view diffusion model. Given any number of input images... | https://openreview.net/pdf/a17526d158b6388ba1714b7d1decfdd7ec50e8da.pdf |
Stylus: Automatic Adapter Selection for Diffusion Models | https://openreview.net/forum?id=3Odq2tGSpp | https://openreview.net/forum?id=3Odq2tGSpp | Michael Luo,Justin Wong,Brandon Trabucco,Yanping Huang,Joseph E. Gonzalez,Zhifeng Chen,Russ Salakhutdinov,Ion Stoica | NIPS 2024,Oral | Beyond scaling base models with more data or parameters, fine-tuned adapters provide an alternative way to generate high fidelity, custom images at reduced costs. As such, adapters have been widely adopted by open-source communities, accumulating a database of over 100K adapters—most of which are highly customized with... | https://openreview.net/pdf/b41be568e09a4892b988b18214b6686115e4ccb9.pdf |
The Sample-Communication Complexity Trade-off in Federated Q-Learning | https://openreview.net/forum?id=6YIpvnkjUK | https://openreview.net/forum?id=6YIpvnkjUK | Sudeep Salgia,Yuejie Chi | NIPS 2024,Oral | We consider the problem of Federated Q-learning, where $M$ agents aim to collaboratively learn the optimal Q-function of an unknown infinite horizon Markov Decision Process with finite state and action spaces. We investigate the trade-off between sample and communication complexity for the widely used class of intermit... | https://openreview.net/pdf/aa89287b43d0d38cc8ef9cd412964652a0b005cb.pdf |
Guiding a Diffusion Model with a Bad Version of Itself | https://openreview.net/forum?id=bg6fVPVs3s | https://openreview.net/forum?id=bg6fVPVs3s | Tero Karras,Miika Aittala,Tuomas Kynkäänniemi,Jaakko Lehtinen,Timo Aila,Samuli Laine | NIPS 2024,Oral | The primary axes of interest in image-generating diffusion models are image quality, the amount of variation in the results, and how well the results align with a given condition, e.g., a class label or a text prompt. The popular classifier-free guidance approach uses an unconditional model to guide a conditional model... | https://openreview.net/pdf/9173da6000cdac7dc5129691366a29747954b7ef.pdf |
RG-SAN: Rule-Guided Spatial Awareness Network for End-to-End 3D Referring Expression Segmentation | https://openreview.net/forum?id=r5spnrY6H3 | https://openreview.net/forum?id=r5spnrY6H3 | Changli Wu,Qi Chen,Jiayi Ji,Haowei Wang,Yiwei Ma,You Huang,Gen Luo,Hao Fei,Xiaoshuai Sun,Rongrong Ji | NIPS 2024,Oral | 3D Referring Expression Segmentation (3D-RES) aims to segment 3D objects by correlating referring expressions with point clouds. However, traditional approaches frequently encounter issues like over-segmentation or mis-segmentation, due to insufficient emphasis on spatial information of instances. In this paper, we int... | https://openreview.net/pdf/074c8caaa0b5feabaad18b25db6c0ee86ed09863.pdf |
VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time | https://openreview.net/forum?id=5zSCSE0k41 | https://openreview.net/forum?id=5zSCSE0k41 | Sicheng Xu,Guojun Chen,Yu-Xiao Guo,Jiaolong Yang,Chong Li,Zhenyu Zang,Yizhong Zhang,Xin Tong,Baining Guo | NIPS 2024,Oral | We introduce VASA, a framework for generating lifelike talking faces with appealing visual affective skills (VAS) given a single static image and a speech audio clip. Our premiere model, VASA-1, is capable of not only generating lip movements that are exquisitely synchronized with the audio, but also producing a large ... | https://openreview.net/pdf/ccbb9d0f4688567aed95ad757cf65f0dd4538631.pdf |
Learning rigid-body simulators over implicit shapes for large-scale scenes and vision | https://openreview.net/forum?id=QDYts5dYgq | https://openreview.net/forum?id=QDYts5dYgq | Yulia Rubanova,Tatiana Lopez-Guevara,Kelsey R Allen,William F Whitney,Kim Stachenfeld,Tobias Pfaff | NIPS 2024,Oral | Simulating large scenes with many rigid objects is crucial for a variety of applications, such as robotics, engineering, film and video games. Rigid interactions are notoriously hard to model: small changes to the initial state or the simulation parameters can lead to large changes in the final state. Recently, learned... | https://openreview.net/pdf/a025a4908402e558708ed28771812dd10af193dd.pdf |
Neural Pfaffians: Solving Many Many-Electron Schrödinger Equations | https://openreview.net/forum?id=HRkniCWM3E | https://openreview.net/forum?id=HRkniCWM3E | Nicholas Gao,Stephan Günnemann | NIPS 2024,Oral | Neural wave functions accomplished unprecedented accuracies in approximating the ground state of many-electron systems, though at a high computational cost. Recent works proposed amortizing the cost by learning generalized wave functions across different structures and compounds instead of solving each problem independ... | https://openreview.net/pdf/c766b139548380a74ad7a69a3c638798a81d5de3.pdf |
DapperFL: Domain Adaptive Federated Learning with Model Fusion Pruning for Edge Devices | https://openreview.net/forum?id=Pezt0xttae | https://openreview.net/forum?id=Pezt0xttae | Yongzhe Jia,Xuyun Zhang,Hongsheng Hu,Kim-Kwang Raymond Choo,Lianyong Qi,Xiaolong Xu,Amin Beheshti,Wanchun Dou | NIPS 2024,Oral | Federated learning (FL) has emerged as a prominent machine learning paradigm in edge computing environments, enabling edge devices to collaboratively optimize a global model without sharing their private data. However, existing FL frameworks suffer from efficacy deterioration due to the system heterogeneity inherent in... | https://openreview.net/pdf/40235b2ea6b49d81841886f194bd9d4a2897ff15.pdf |
DenoiseRep: Denoising Model for Representation Learning | https://openreview.net/forum?id=OycU0bAus6 | https://openreview.net/forum?id=OycU0bAus6 | zhengrui Xu,Guan'an Wang,Xiaowen Huang,Jitao Sang | NIPS 2024,Oral | The denoising model has been proven a powerful generative model but has little exploration of discriminative tasks. Representation learning is important in discriminative tasks, which is defined as *"learning representations (or features) of the data that make it easier to extract useful information when building class... | https://openreview.net/pdf/ccc22185c7b5ceeab3929bff884d84473546f5d7.pdf |
Optimal Parallelization of Boosting | https://openreview.net/forum?id=rtz4df9IF1 | https://openreview.net/forum?id=rtz4df9IF1 | Arthur da Cunha,Mikael Møller Høgsgaard,Kasper Green Larsen | NIPS 2024,Oral | Recent works on the parallel complexity of Boosting have established strong lower bounds on the tradeoff between the number of training rounds $p$ and the total parallel work per round $t$. These works have also presented highly non-trivial parallel algorithms that shed light on different regions of this tradeoff. Desp... | https://openreview.net/pdf/b88f812c42a45b79e5e8663c27463c4580ab45a6.pdf |
Decompose, Analyze and Rethink: Solving Intricate Problems with Human-like Reasoning Cycle | https://openreview.net/forum?id=NPKZF1WDjZ | https://openreview.net/forum?id=NPKZF1WDjZ | Shangzi Xue,Zhenya Huang,Jiayu Liu,Xin Lin,Yuting Ning,Binbin Jin,Xin Li,Qi Liu | NIPS 2024,Oral | In this paper, we introduce DeAR (_Decompose-Analyze-Rethink_), a framework that iteratively builds a reasoning tree to tackle intricate problems within a single large language model (LLM). Unlike approaches that extend or search for rationales, DeAR is featured by 1) adopting a tree-based question decomposition manner... | https://openreview.net/pdf/48641218f9362ec9ed75e6482a2030d00757c6d8.pdf |
Bayesian-guided Label Mapping for Visual Reprogramming | https://openreview.net/forum?id=135eKqDoRR | https://openreview.net/forum?id=135eKqDoRR | Chengyi Cai,Zesheng Ye,Lei Feng,Jianzhong Qi,Feng Liu | NIPS 2024,Oral | *Visual reprogramming* (VR) leverages the intrinsic capabilities of pretrained vision models by adapting their input or output interfaces to solve downstream tasks whose labels (i.e., downstream labels) might be totally different from the labels associated with the pretrained models (i.e., pretrained labels). When ada... | https://openreview.net/pdf/5bd51ea14b1857a137832007130aaf712c5b6a63.pdf |
Policy Learning from Tutorial Books via Understanding, Rehearsing and Introspecting | https://openreview.net/forum?id=Ddak3nSqQM | https://openreview.net/forum?id=Ddak3nSqQM | Xiong-Hui Chen,Ziyan Wang,Yali Du,Shengyi Jiang,Meng Fang,Yang Yu,Jun Wang | NIPS 2024,Oral | When humans need to learn a new skill, we can acquire knowledge through written books, including textbooks, tutorials, etc. However, current research for decision-making, like reinforcement learning (RL), has primarily required numerous real interactions with the target environment to learn a skill, while failing to ut... | https://openreview.net/pdf/f4d95b3399a1323142228b0362d42345119de142.pdf |
GIC: Gaussian-Informed Continuum for Physical Property Identification and Simulation | https://openreview.net/forum?id=SSCtCq2MH2 | https://openreview.net/forum?id=SSCtCq2MH2 | Junhao Cai,Yuji Yang,Weihao Yuan,Yisheng HE,Zilong Dong,Liefeng Bo,Hui Cheng,Qifeng Chen | NIPS 2024,Oral | This paper studies the problem of estimating physical properties (system identification) through visual observations. To facilitate geometry-aware guidance in physical property estimation, we introduce a novel hybrid framework that leverages 3D Gaussian representation to not only capture explicit shapes but also enable... | https://openreview.net/pdf/35d3fb34ac9b1b65eb96b7a01480e9b13895a855.pdf |
PV-Tuning: Beyond Straight-Through Estimation for Extreme LLM Compression | https://openreview.net/forum?id=YvA8UF0I37 | https://openreview.net/forum?id=YvA8UF0I37 | Vladimir Malinovskii,Denis Mazur,Ivan Ilin,Denis Kuznedelev,Konstantin Pavlovich Burlachenko,Kai Yi,Dan Alistarh,Peter Richtárik | NIPS 2024,Oral | There has been significant interest in "extreme" compression of large language models (LLMs), i.e. to 1-2 bits per parameter, which allows such models to be executed efficiently on resource-constrained devices. Existing work focused on improved one-shot quantization techniques and weight representations; yet, purely ... | https://openreview.net/pdf/a41bd553618c035e26d1f1f6a8ebd19108274f50.pdf |
RL-GPT: Integrating Reinforcement Learning and Code-as-policy | https://openreview.net/forum?id=LEzx6QRkRH | https://openreview.net/forum?id=LEzx6QRkRH | Shaoteng Liu,Haoqi Yuan,Minda Hu,Yanwei Li,Yukang Chen,Shu Liu,Zongqing Lu,Jiaya Jia | NIPS 2024,Oral | Large Language Models (LLMs) have demonstrated proficiency in utilizing various tools by coding, yet they face limitations in handling intricate logic and precise control. In embodied tasks, high-level planning is amenable to direct coding, while low-level actions often necessitate task-specific refinement, such as Rei... | https://openreview.net/pdf/8489e6d14edc65b16f5f04f6773edb790ac430a4.pdf |
Statistical Efficiency of Distributional Temporal Difference Learning | https://openreview.net/forum?id=eWUM5hRYgH | https://openreview.net/forum?id=eWUM5hRYgH | Yang Peng,Liangyu Zhang,Zhihua Zhang | NIPS 2024,Oral | Distributional reinforcement learning (DRL) has achieved empirical success in various domains. One of the core tasks in the field of DRL is distributional policy evaluation, which involves estimating the return distribution $\eta^\pi$ for a given policy $\pi$. The distributional temporal difference learning has been ac... | https://openreview.net/pdf/3002a75ebfe6a386efc8dee88d8a2382d1d837e1.pdf |
Exploitation of a Latent Mechanism in Graph Contrastive Learning: Representation Scattering | https://openreview.net/forum?id=R8SolCx62K | https://openreview.net/forum?id=R8SolCx62K | Dongxiao He,Lianze Shan,Jitao Zhao,Hengrui Zhang,Zhen Wang,Weixiong Zhang | NIPS 2024,Oral | Graph Contrastive Learning (GCL) has emerged as a powerful approach for generating graph representations without the need for manual annotation. Most advanced GCL methods fall into three main frameworks: node discrimination, group discrimination, and bootstrapping schemes, all of which achieve comparable performance. H... | https://openreview.net/pdf/e21a9b3822e99ccaefbd6f6562cd41ff019e09ba.pdf |
You Only Cache Once: Decoder-Decoder Architectures for Language Models | https://openreview.net/forum?id=25Ioxw576r | https://openreview.net/forum?id=25Ioxw576r | Yutao Sun,Li Dong,Yi Zhu,Shaohan Huang,Wenhui Wang,Shuming Ma,Quanlu Zhang,Jianyong Wang,Furu Wei | NIPS 2024,Oral | We introduce a decoder-decoder architecture, YOCO, for large language models, which only caches key-value pairs once. It consists of two components, i.e., a cross-decoder stacked upon a self-decoder. The self-decoder efficiently encodes global key-value (KV) caches that are reused by the cross-decoder via cross-attenti... | https://openreview.net/pdf/c001fdfd3a2894f8c62da3eef3be8317b3800c61.pdf |
Divide-and-Conquer Meets Consensus: Unleashing the Power of Functions in Code Generation | https://openreview.net/forum?id=cFqAANINgW | https://openreview.net/forum?id=cFqAANINgW | Jingchang Chen,Hongxuan Tang,Zheng Chu,Qianglong Chen,Zekun Wang,Ming Liu,Bing Qin | NIPS 2024,Oral | Despite recent progress made by large language models in code generation, they still struggle with programs that meet complex requirements. Recent work utilizes plan-and-solve decomposition to decrease the complexity and leverage self-tests to refine the generated program. Yet, planning deep-inside requirements in adva... | https://openreview.net/pdf/d6fd653a659d95ce4466896d76af521361a4e0ef.pdf |
DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs | https://openreview.net/forum?id=mp8u2Pcmqz | https://openreview.net/forum?id=mp8u2Pcmqz | Haokun Lin,Haobo Xu,Yichen Wu,Jingzhi Cui,Yingtao Zhang,Linzhan Mou,Linqi Song,Zhenan Sun,Ying Wei | NIPS 2024,Oral | Quantization of large language models (LLMs) faces significant challenges, particularly due to the presence of outlier activations that impede efficient low-bit representation. Traditional approaches predominantly address Normal Outliers, which are activations across all tokens with relatively large magnitudes. However... | https://openreview.net/pdf/e940d83a63794869ac25c4a08c075cc76b1ebdef.pdf |
Not All Tokens Are What You Need for Pretraining | https://openreview.net/forum?id=0NMzBwqaAJ | https://openreview.net/forum?id=0NMzBwqaAJ | Zhenghao Lin,Zhibin Gou,Yeyun Gong,Xiao Liu,yelong shen,Ruochen Xu,Chen Lin,Yujiu Yang,Jian Jiao,Nan Duan,Weizhu Chen | NIPS 2024,Oral | Previous language model pre-training methods have uniformly applied a next-token prediction loss to all training tokens. Challenging this norm, we posit that ''Not all tokens in a corpus are equally important for language model training''. Our initial analysis examines token-level training dynamics of language model, r... | https://openreview.net/pdf/479db135fe05befa88285a35b9f23c2e1122fa8f.pdf |
Achieving Optimal Clustering in Gaussian Mixture Models with Anisotropic Covariance Structures | https://openreview.net/forum?id=ge8GZn8Gtu | https://openreview.net/forum?id=ge8GZn8Gtu | Xin Chen,Anderson Ye Zhang | NIPS 2024,Oral | We study clustering under anisotropic Gaussian Mixture Models (GMMs), where covariance matrices from different clusters are unknown and are not necessarily the identity matrix. We analyze two anisotropic scenarios: homogeneous, with identical covariance matrices, and heterogeneous, with distinct matrices per cluster. F... | https://openreview.net/pdf/43a0e0281aa6e1dcadbd067c201ceb2c07c5bf4c.pdf |
Return of Unconditional Generation: A Self-supervised Representation Generation Method | https://openreview.net/forum?id=clTa4JFBML | https://openreview.net/forum?id=clTa4JFBML | Tianhong Li,Dina Katabi,Kaiming He | NIPS 2024,Oral | Unconditional generation -- the problem of modeling data distribution without relying on human-annotated labels -- is a long-standing and fundamental challenge in generative models, creating a potential of learning from large-scale unlabeled data. In the literature, the generation quality of an unconditional method has... | https://openreview.net/pdf/5eb9f339be4769dbc0a7ac40c1b8e020626b9052.pdf |
Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs | https://openreview.net/forum?id=Vi8AepAXGy | https://openreview.net/forum?id=Vi8AepAXGy | Shengbang Tong,Ellis L Brown II,Penghao Wu,Sanghyun Woo,ADITHYA JAIRAM IYER,Sai Charitha Akula,Shusheng Yang,Jihan Yang,Manoj Middepogu,Ziteng Wang,Xichen Pan,Rob Fergus,Yann LeCun,Saining Xie | NIPS 2024,Oral | We introduce Cambrian-1, a family of multimodal LLMs (MLLMs) designed with a vision-centric approach. While stronger language models can enhance multimodal capabilities, the design choices for vision components are often insufficiently explored and disconnected from visual representation learning research. This gap hin... | https://openreview.net/pdf/6e2bfbfc4a63dae9ce2226db223d05c1152a1fb8.pdf |
MDAgents: An Adaptive Collaboration of LLMs for Medical Decision-Making | https://openreview.net/forum?id=EKdk4vxKO4 | https://openreview.net/forum?id=EKdk4vxKO4 | Yubin Kim,Chanwoo Park,Hyewon Jeong,Yik Siu Chan,Xuhai Xu,Daniel McDuff,Hyeonhoon Lee,Marzyeh Ghassemi,Cynthia Breazeal,Hae Won Park | NIPS 2024,Oral | Foundation models are becoming valuable tools in medicine. Yet despite their promise, the best way to leverage Large Language Models (LLMs) in complex medical tasks remains an open question. We introduce a novel multi-agent framework, named **M**edical **D**ecision-making **Agents** (**MDAgents**) that helps to address... | https://openreview.net/pdf/9993edbaf6679577c07aeae6b39fe0a546abaca1.pdf |
Graph Diffusion Transformers for Multi-Conditional Molecular Generation | https://openreview.net/forum?id=cfrDLD1wfO | https://openreview.net/forum?id=cfrDLD1wfO | Gang Liu,Jiaxin Xu,Tengfei Luo,Meng Jiang | NIPS 2024,Oral | Inverse molecular design with diffusion models holds great potential for advancements in material and drug discovery. Despite success in unconditional molecule generation, integrating multiple properties such as synthetic score and gas permeability as condition constraints into diffusion models remains unexplored. We p... | https://openreview.net/pdf/46c02e1bf7e313ee41cca4c78d39825812de8c3d.pdf |
MeshFormer : High-Quality Mesh Generation with 3D-Guided Reconstruction Model | https://openreview.net/forum?id=x7pjdDod6Z | https://openreview.net/forum?id=x7pjdDod6Z | Minghua Liu,Chong Zeng,Xinyue Wei,Ruoxi Shi,Linghao Chen,Chao Xu,Mengqi Zhang,Zhaoning Wang,Xiaoshuai Zhang,Isabella Liu,Hongzhi Wu,Hao Su | NIPS 2024,Oral | Open-world 3D reconstruction models have recently garnered significant attention. However, without sufficient 3D inductive bias, existing methods typically entail expensive training costs and struggle to extract high-quality 3D meshes. In this work, we introduce MeshFormer, a sparse-view reconstruction model that expli... | https://openreview.net/pdf/0137993914b1c34b105ba8ce5545d99389e3b12a.pdf |
Get Rid of Isolation: A Continuous Multi-task Spatio-Temporal Learning Framework | https://openreview.net/forum?id=tnh4LK72yj | https://openreview.net/forum?id=tnh4LK72yj | Zhongchao Yi,Zhengyang Zhou,Qihe Huang,Yanjiang Chen,Liheng Yu,Xu Wang,Yang Wang | NIPS 2024,Oral | Spatiotemporal learning has become a pivotal technique to enable urban intelligence. Traditional spatiotemporal models mostly focus on a specific task by assuming a same distribution between training and testing sets. However, given that urban systems are usually dynamic, multi-sourced with imbalanced data distribution... | https://openreview.net/pdf/97148ef3439d4c09aeb2847ed85a61ab7bd105d9.pdf |
HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning | https://openreview.net/forum?id=qEpi8uWX3N | https://openreview.net/forum?id=qEpi8uWX3N | Chunlin Tian,Zhan Shi,Zhijiang Guo,Li Li,Cheng-zhong Xu | NIPS 2024,Oral | Adapting Large Language Models (LLMs) to new tasks through fine-tuning has been made more efficient by the introduction of Parameter-Efficient Fine-Tuning (PEFT) techniques, such as LoRA. However, these methods often underperform compared to full fine-tuning, particularly in scenarios involving complex datasets. This i... | https://openreview.net/pdf/60e4bb51758f975380df1586e785d29a101c7f4a.pdf |
SeeA*: Efficient Exploration-Enhanced A* Search by Selective Sampling | https://openreview.net/forum?id=mSaqxZVZW8 | https://openreview.net/forum?id=mSaqxZVZW8 | Dengwei Zhao,Shikui Tu,Lei Xu | NIPS 2024,Oral | Monte-Carlo tree search (MCTS) and reinforcement learning contributed crucially to the success of AlphaGo and AlphaZero, and A$^*$ is a tree search algorithm among the most well-known ones in the classical AI literature. MCTS and A$^*$ both perform heuristic search and are mutually beneficial. Efforts have been made t... | https://openreview.net/pdf/fa5dedfe169ea46edcf332d8d7d9b5256b506793.pdf |
Improved Distribution Matching Distillation for Fast Image Synthesis | https://openreview.net/forum?id=tQukGCDaNT | https://openreview.net/forum?id=tQukGCDaNT | Tianwei Yin,Michaël Gharbi,Taesung Park,Richard Zhang,Eli Shechtman,Fredo Durand,William T. Freeman | NIPS 2024,Oral | Recent approaches have shown promises distilling expensive diffusion models into efficient one-step generators. Amongst them, Distribution Matching Distillation (DMD) produces one-step generators that match their teacher in distribution, i.e., the distillation process does not enforce a one-to-one correspondence with t... | https://openreview.net/pdf/3c7ea6adb0b86f707c8c396aa752165bc482e55b.pdf |
E2E-MFD: Towards End-to-End Synchronous Multimodal Fusion Detection | https://openreview.net/forum?id=47loYmzxep | https://openreview.net/forum?id=47loYmzxep | Jiaqing Zhang,Mingxiang Cao,Weiying Xie,Jie Lei,DaixunLi,Wenbo Huang,Yunsong Li,Xue Yang | NIPS 2024,Oral | Multimodal image fusion and object detection are crucial for autonomous driving. While current methods have advanced the fusion of texture details and semantic information, their complex training processes hinder broader applications. Addressing this challenge, we introduce E2E-MFD, a novel end-to-end algorithm for mul... | https://openreview.net/pdf/b861f70a3f6d0b0377a6c809e5aeb3cc2bb8a6ba.pdf |
MetaLA: Unified Optimal Linear Approximation to Softmax Attention Map | https://openreview.net/forum?id=Y8YVCOMEpz | https://openreview.net/forum?id=Y8YVCOMEpz | Yuhong Chou,Man Yao,Kexin Wang,Yuqi Pan,Rui-Jie Zhu,Jibin Wu,Yiran Zhong,Yu Qiao,Bo XU,Guoqi Li | NIPS 2024,Oral | Various linear complexity models, such as Linear Transformer (LinFormer), State Space Model (SSM), and Linear RNN (LinRNN), have been proposed to replace the conventional softmax attention in Transformer structures. However, the optimal design of these linear models is still an open question. In this work, we attempt t... | https://openreview.net/pdf/6115a7c6711108daff03a490bc177f2d26b8446b.pdf |
Flipped Classroom: Aligning Teacher Attention with Student in Generalized Category Discovery | https://openreview.net/forum?id=C4NbtYnyQg | https://openreview.net/forum?id=C4NbtYnyQg | Haonan Lin,Wenbin An,Jiahao Wang,Yan Chen,Feng Tian,Mengmeng Wang,QianYing Wang,Guang Dai,Jingdong Wang | NIPS 2024,Oral | Recent advancements have shown promise in applying traditional Semi-Supervised Learning strategies to the task of Generalized Category Discovery (GCD). Typically, this involves a teacher-student framework in which the teacher imparts knowledge to the student to classify categories, even in the absence of explicit label... | https://openreview.net/pdf/2b0097d679b2b1297e2351cac3b7369e7b84e150.pdf |
NeuroClips: Towards High-fidelity and Smooth fMRI-to-Video Reconstruction | https://openreview.net/forum?id=8qu52Fl1Dt | https://openreview.net/forum?id=8qu52Fl1Dt | Zixuan Gong,Guangyin Bao,Qi Zhang,Zhongwei Wan,Duoqian Miao,Shoujin Wang,Lei Zhu,Changwei Wang,Rongtao Xu,Liang Hu,Ke Liu,Yu Zhang | NIPS 2024,Oral | Reconstruction of static visual stimuli from non-invasion brain activity fMRI achieves great success, owning to advanced deep learning models such as CLIP and Stable Diffusion. However, the research on fMRI-to-video reconstruction remains limited since decoding the spatiotemporal perception of continuous visual experie... | https://openreview.net/pdf/258f5ea41fed74143053a220d1c9971bc970b99a.pdf |
The Road Less Scheduled | https://openreview.net/forum?id=0XeNkkENuI | https://openreview.net/forum?id=0XeNkkENuI | Aaron Defazio,Xingyu Alice Yang,Ahmed Khaled,Konstantin Mishchenko,Harsh Mehta,Ashok Cutkosky | NIPS 2024,Oral | Existing learning rate schedules that do not require specification of the optimization stopping step $T$ are greatly out-performed by learning rate schedules that depend on $T$. We propose an approach that avoids the need for this stopping time by eschewing the use of schedules entirely, while exhibiting state-of-the-a... | https://openreview.net/pdf/6c9eff74f240a8115542beea292c058b239a8712.pdf |
Convolutional Differentiable Logic Gate Networks | https://openreview.net/forum?id=4bKEFyUHT4 | https://openreview.net/forum?id=4bKEFyUHT4 | Felix Petersen,Hilde Kuehne,Christian Borgelt,Julian Welzel,Stefano Ermon | NIPS 2024,Oral | With the increasing inference cost of machine learning models, there is a growing interest in models with fast and efficient inference. Recently, an approach for learning logic gate networks directly via a differentiable relaxation was proposed. Logic gate networks are faster than conventional neural network approache... | https://openreview.net/pdf/550935e8b4e775076ce2310d9d089be095ad0708.pdf |
SPRINQL: Sub-optimal Demonstrations driven Offline Imitation Learning | https://openreview.net/forum?id=uDD44NROOt | https://openreview.net/forum?id=uDD44NROOt | Huy Hoang,Tien Anh Mai,Pradeep Varakantham | NIPS 2024,Poster | We focus on offline imitation learning (IL), which aims to mimic an expert's behavior using demonstrations without any interaction with the environment. One of the main challenges in offline IL is the limited support of expert demonstrations, which typically cover only a small fraction of the state-action space. While ... | https://openreview.net/pdf/21f890aa8acefa4c5640a534a16533bb251a5681.pdf |
Gradient Guidance for Diffusion Models: An Optimization Perspective | https://openreview.net/forum?id=X1QeUYBXke | https://openreview.net/forum?id=X1QeUYBXke | Yingqing Guo,Hui Yuan,Yukang Yang,Minshuo Chen,Mengdi Wang | NIPS 2024,Poster | Diffusion models have demonstrated empirical successes in various applications and can be adapted to task-specific needs via guidance. This paper studies a form of gradient guidance for adapting a pre-trained diffusion model towards optimizing user-specified objectives. We establish a mathematical framework for guided ... | https://openreview.net/pdf/f1a0fd98ecfdc9b4afa72ce8adc61e3dea16e2ca.pdf |
Chimera: Effectively Modeling Multivariate Time Series with 2-Dimensional State Space Models | https://openreview.net/forum?id=ncYGjx2vnE | https://openreview.net/forum?id=ncYGjx2vnE | Ali Behrouz,Michele Santacatterina,Ramin Zabih | NIPS 2024,Poster | Modeling multivariate time series is a well-established problem with a wide range of applications from healthcare to financial markets. It, however, is challenging as it requires methods to (1) have high expressive power of representing complicated dependencies along the time axis to capture both long-term progression ... | https://openreview.net/pdf/293e7ef70612d586ad3576a085191e54b2c0eb16.pdf |
A Nearly Optimal and Low-Switching Algorithm for Reinforcement Learning with General Function Approximation | https://openreview.net/forum?id=s3icZC2NLq | https://openreview.net/forum?id=s3icZC2NLq | Heyang Zhao,Jiafan He,Quanquan Gu | NIPS 2024,Poster | The exploration-exploitation dilemma has been a central challenge in reinforcement learning (RL) with complex model classes. In this paper, we propose a new algorithm, Monotonic Q-Learning with Upper Confidence Bound (MQL-UCB) for RL with general function approximation. Our key algorithmic design includes (1) a genera... | https://openreview.net/pdf/b3423ead9010a96399c1d7d679491e9c48a0fd4f.pdf |
VQ-Map: Bird's-Eye-View Map Layout Estimation in Tokenized Discrete Space via Vector Quantization | https://openreview.net/forum?id=bKuxygBW2Y | https://openreview.net/forum?id=bKuxygBW2Y | Yiwei Zhang,Jin Gao,Fudong Ge,Guan Luo,Bing Li,Zhaoxiang Zhang,Haibin Ling,Weiming Hu | NIPS 2024,Poster | Bird's-eye-view (BEV) map layout estimation requires an accurate and full understanding of the semantics for the environmental elements around the ego car to make the results coherent and realistic. Due to the challenges posed by occlusion, unfavourable imaging conditions and low resolution, \emph{generating} the BEV s... | https://openreview.net/pdf/685c7f5fa23644eff84f69db3233d4fb61bc6c4e.pdf |
On the Impact of Feature Heterophily on Link Prediction with Graph Neural Networks | https://openreview.net/forum?id=3LZHatxUa9 | https://openreview.net/forum?id=3LZHatxUa9 | Jiong Zhu,Gaotang Li,Yao-An Yang,Jing Zhu,Xuehao Cui,Danai Koutra | NIPS 2024,Poster | Heterophily, or the tendency of connected nodes in networks to have different class labels or dissimilar features, has been identified as challenging for many Graph Neural Network (GNN) models. While the challenges of applying GNNs for node classification when class labels display strong heterophily are well understood... | https://openreview.net/pdf/7c0d24d8c5b940086df83fb002c3e92da763b36b.pdf |
Factorized Diffusion Architectures for Unsupervised Image Generation and Segmentation | https://openreview.net/forum?id=7G362fgJFd | https://openreview.net/forum?id=7G362fgJFd | Xin Yuan,Michael Maire | NIPS 2024,Poster | We develop a neural network architecture which, trained in an unsupervised manner as a denoising diffusion model, simultaneously learns to both generate and segment images. Learning is driven entirely by the denoising diffusion objective, without any annotation or prior knowledge about regions during training. A comp... | https://openreview.net/pdf/0b0e26bd5cb8b993746d295c433c593d7ad86d9c.pdf |
Probabilistic Decomposed Linear Dynamical Systems for Robust Discovery of Latent Neural Dynamics | https://openreview.net/forum?id=XPhSbybD73 | https://openreview.net/forum?id=XPhSbybD73 | Yenho Chen,Noga Mudrik,Kyle A. Johnsen,Sankaraleengam Alagapan,Adam Shabti Charles,Christopher John Rozell | NIPS 2024,Poster | Time-varying linear state-space models are powerful tools for obtaining mathematically interpretable representations of neural signals. For example, switching and decomposed models describe complex systems using latent variables that evolve according to simple locally linear dynamics. However, existing methods for late... | https://openreview.net/pdf/97fd4685ad572113a49942a0e71937b3db55efb0.pdf |
Implicit Regularization of Decentralized Gradient Descent for Sparse Regression | https://openreview.net/forum?id=MlADRQI0Wf | https://openreview.net/forum?id=MlADRQI0Wf | Tongle Wu,Ying Sun | NIPS 2024,Poster | We consider learning a sparse model from linear measurements taken by a network of agents. Different from existing decentralized methods designed based on the LASSO regression with explicit $\ell_1$ norm regularization, we exploit the implicit regularization of decentralized optimization method applied to an over-para... | https://openreview.net/pdf/c2c69e05224053f3049709bd80a96662992b6366.pdf |
Universal Exact Compression of Differentially Private Mechanisms | https://openreview.net/forum?id=CgGjT8EG8A | https://openreview.net/forum?id=CgGjT8EG8A | Yanxiao Liu,Wei-Ning Chen,Ayfer Ozgur,Cheuk Ting Li | NIPS 2024,Poster | To reduce the communication cost of differential privacy mechanisms, we introduce a novel construction, called Poisson private representation (PPR), designed to compress and simulate any local randomizer while ensuring local differential privacy. Unlike previous simulation-based local differential privacy mechanisms, P... | https://openreview.net/pdf/bc8db1e9cf2899d281127d72d1993d71ead0af3c.pdf |
Learning Representations for Hierarchies with Minimal Support | https://openreview.net/forum?id=HFS800reZK | https://openreview.net/forum?id=HFS800reZK | Benjamin Rozonoyer,Michael Boratko,Dhruvesh Patel,Wenlong Zhao,Shib Sankar Dasgupta,Hung Le,Andrew McCallum | NIPS 2024,Poster | When training node embedding models to represent large directed graphs (digraphs), it is impossible to observe all entries of the adjacency matrix during training. As a consequence most methods employ sampling. For very large digraphs, however, this means many (most) entries may be unobserved during training. In genera... | https://openreview.net/pdf/98c7ccf6ef86019ffc994aba434e5c6603739459.pdf |
OwMatch: Conditional Self-Labeling with Consistency for Open-world Semi-Supervised Learning | https://openreview.net/forum?id=rle9X7DQuH | https://openreview.net/forum?id=rle9X7DQuH | Shengjie Niu,Lifan Lin,Jian Huang,Chao Wang | NIPS 2024,Poster | Semi-supervised learning (SSL) offers a robust framework for harnessing the potential of unannotated data. Traditionally, SSL mandates that all classes possess labeled instances. However, the emergence of open-world SSL (OwSSL) introduces a more practical challenge, wherein unlabeled data may encompass samples from uns... | https://openreview.net/pdf/3dcbcaa02ca1db047267a26a4853ed26ee59bd15.pdf |
Fair Allocation in Dynamic Mechanism Design | https://openreview.net/forum?id=bEunGps83o | https://openreview.net/forum?id=bEunGps83o | Alireza Fallah,Michael Jordan,Annie S Ulichney | NIPS 2024,Poster | We consider a dynamic mechanism design problem where an auctioneer sells an indivisible good to two groups of buyers in every round, for a total of $T$ rounds. The auctioneer aims to maximize their discounted overall revenue while adhering to a fairness constraint that guarantees a minimum average allocation for each g... | https://openreview.net/pdf/8365b7cc74e6acf8ccffc75743d5ba8d7745188d.pdf |
Sample and Computationally Efficient Robust Learning of Gaussian Single-Index Models | https://openreview.net/forum?id=MN7d0S2i1d | https://openreview.net/forum?id=MN7d0S2i1d | Puqian Wang,Nikos Zarifis,Ilias Diakonikolas,Jelena Diakonikolas | NIPS 2024,Poster | A single-index model (SIM) is a function of the form $\sigma(\mathbf{w}^{\ast} \cdot \mathbf{x})$, where $\sigma: \mathbb{R} \to \mathbb{R}$ is a known link function and $\mathbf{w}^{\ast}$ is a hidden unit vector. We study the task of learning SIMs in the agnostic (a.k.a. adversarial label noise) model with respect ... | https://openreview.net/pdf/cf0991dda9a6419627e0a2ad5fa255be8c831ebe.pdf |
Once Read is Enough: Domain-specific Pretraining-free Language Models with Cluster-guided Sparse Experts for Long-tail Domain Knowledge | https://openreview.net/forum?id=manHbkpIW6 | https://openreview.net/forum?id=manHbkpIW6 | Fang Dong,Mengyi Chen,Jixian Zhou,Yubin Shi,Yixuan Chen,Mingzhi Dong,Yujiang Wang,Dongsheng Li,Xiaochen Yang,Rui Zhu,Robert P. Dick,Qin Lv,Fan Yang,Tun Lu,Ning Gu,Li Shang | NIPS 2024,Poster | Language models (LMs) only pretrained on a general and massive corpus usually cannot attain satisfying performance on domain-specific downstream tasks, and hence, applying domain-specific pretraining to LMs is a common and indispensable practice. However, domain-specific pretraining can be costly and time-consuming, hi... | https://openreview.net/pdf/b28c3a4f4f5da3bb75eb2cc6852c1eb990371e11.pdf |
Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models | https://openreview.net/forum?id=JhqyeppMiD | https://openreview.net/forum?id=JhqyeppMiD | Yuancheng Xu,Jiarui Yao,Manli Shu,Yanchao Sun,Zichu Wu,Ning Yu,Tom Goldstein,Furong Huang | NIPS 2024,Poster | Vision-Language Models (VLMs) excel in generating textual responses from visual inputs, but their versatility raises security concerns. This study takes the first step in exposing VLMs’ susceptibility to data poisoning attacks that can manipulate responses to innocuous, everyday prompts. We introduce Shadowcast, a stea... | https://openreview.net/pdf/9d686ad4b89c927c71ccff3e7ea68ea1b6c0dce2.pdf |
Multi-Instance Partial-Label Learning with Margin Adjustment | https://openreview.net/forum?id=NnAi0L5H8J | https://openreview.net/forum?id=NnAi0L5H8J | Wei Tang,Yin-Fang Yang,Zhaofei Wang,Weijia Zhang,Min-Ling Zhang | NIPS 2024,Poster | Multi-instance partial-label learning (MIPL) is an emerging learning framework where each training sample is represented as a multi-instance bag associated with a candidate label set. Existing MIPL algorithms often overlook the margins for attention scores and predicted probabilities, leading to suboptimal generalizati... | https://openreview.net/pdf/6d7eb1b41514181cec8475f2ea9d3edf24e6cd56.pdf |
Antigen-Specific Antibody Design via Direct Energy-based Preference Optimization | https://openreview.net/forum?id=GN2GXjPyN8 | https://openreview.net/forum?id=GN2GXjPyN8 | Xiangxin Zhou,Dongyu Xue,Ruizhe Chen,Zaixiang Zheng,Liang Wang,Quanquan Gu | NIPS 2024,Poster | Antibody design, a crucial task with significant implications across various disciplines such as therapeutics and biology, presents considerable challenges due to its intricate nature. In this paper, we tackle antigen-specific antibody sequence-structure co-design as an optimization problem towards specific preferences... | https://openreview.net/pdf/1707cccb06a5edc814908e30e85b89e886aed8f5.pdf |
Deep Support Vectors | https://openreview.net/forum?id=5WoYFypPv0 | https://openreview.net/forum?id=5WoYFypPv0 | Junhoo Lee,Hyunho Lee,Kyomin Hwang,Nojun Kwak | NIPS 2024,Poster | Deep learning has achieved tremendous success. However, unlike SVMs, which provide direct decision criteria and can be trained with a small dataset, it still has significant weaknesses due to its requirement for massive datasets during training and the black-box characteristics on decision criteria. This paper addresse... | https://openreview.net/pdf/c34cbd4c21b4871ff90d03acc5b73b7af13721a3.pdf |
Balancing Context Length and Mixing Times for Reinforcement Learning at Scale | https://openreview.net/forum?id=VaJ4XOW7Ey | https://openreview.net/forum?id=VaJ4XOW7Ey | Matthew Riemer,Khimya Khetarpal,Janarthanan Rajendran,Sarath Chandar | NIPS 2024,Poster | Due to the recent remarkable advances in artificial intelligence, researchers have begun to consider challenging learning problems such as learning to generalize behavior from large offline datasets or learning online in non-Markovian environments. Meanwhile, recent advances in both of these areas have increasingly rel... | https://openreview.net/pdf/0d2f1e3d4565423b45b2830d8dcae8ea0d71fa8d.pdf |
MAGIS: LLM-Based Multi-Agent Framework for GitHub Issue Resolution | https://openreview.net/forum?id=qevq3FZ63J | https://openreview.net/forum?id=qevq3FZ63J | Wei Tao,Yucheng Zhou,Yanlin Wang,Wenqiang Zhang,Hongyu Zhang,Yu Cheng | NIPS 2024,Poster | In software development, resolving the emergent issues within GitHub repositories is a complex challenge that involves not only the incorporation of new code but also the maintenance of existing code. Large Language Models (LLMs) have shown promise in code generation but face difficulties in resolving Github issues, pa... | https://openreview.net/pdf/160f5e4c2c7ce5f4555901cb61fa6bd97dbfbd5c.pdf |
NoMAD-Attention: Efficient LLM Inference on CPUs Through Multiply-add-free Attention | https://openreview.net/forum?id=4xDxVQHsbZ | https://openreview.net/forum?id=4xDxVQHsbZ | Tianyi Zhang,Jonah Wonkyu Yi,Bowen Yao,Zhaozhuo Xu,Anshumali Shrivastava | NIPS 2024,Poster | Large Language Model (LLM) inference on Central Processing Units (CPU) is challenging due to the vast quantities of Multiply-Add (MAD) matrix operations in the attention computations. This paper highlights a rare gem in modern CPUs, Single-Instruction-Multiple-Data (SIMD) registers, which allows for ultra-low-latency ... | https://openreview.net/pdf/68372dd1d74a348f9569575a9907e59741292fab.pdf |
Navigating the Effect of Parametrization for Dimensionality Reduction | https://openreview.net/forum?id=eYNYnYle41 | https://openreview.net/forum?id=eYNYnYle41 | Haiyang Huang,Yingfan Wang,Cynthia Rudin | NIPS 2024,Poster | Parametric dimensionality reduction methods have gained prominence for their ability to generalize to unseen datasets, an advantage that traditional non-parametric approaches typically lack. Despite their growing popularity, there remains a prevalent misconception among practitioners about the equivalence in performanc... | https://openreview.net/pdf/dd9ebeee6f173ea24fa48be291e3625217634dd4.pdf |
$\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ | https://openreview.net/forum?id=ZfBuhzE556 | https://openreview.net/forum?id=ZfBuhzE556 | Junkang Wu,Yuexiang Xie,Zhengyi Yang,Jiancan Wu,Jinyang Gao,Bolin Ding,Xiang Wang,Xiangnan He | NIPS 2024,Poster | Direct Preference Optimization (DPO) has emerged as a compelling approach for training Large Language Models (LLMs) to adhere to human preferences. However, the performance of DPO is sensitive to the fine-tuning of its trade-off parameter $\beta$, as well as to the quality of the preference data. We analyze the impact ... | https://openreview.net/pdf/30536c86d3ed63ada9ccbfca8f6fbea2d6282296.pdf |
Accelerating Greedy Coordinate Gradient and General Prompt Optimization via Probe Sampling | https://openreview.net/forum?id=CMgxAaRqZh | https://openreview.net/forum?id=CMgxAaRqZh | Yiran Zhao,Wenyue Zheng,Tianle Cai,Do Xuan Long,Kenji Kawaguchi,Anirudh Goyal,Michael Shieh | NIPS 2024,Poster | Safety of Large Language Models (LLMs) has become a central issue given their rapid progress and wide applications. Greedy Coordinate Gradient (GCG) is shown to be effective in constructing prompts containing adversarial suffixes to break the presumingly safe LLMs, but the optimization of GCG is time-consuming and limi... | https://openreview.net/pdf/c8b4a1521c3825d5fc77d1bc75f534885da21586.pdf |
Enhancing Feature Diversity Boosts Channel-Adaptive Vision Transformers | https://openreview.net/forum?id=EXuv4tVNa3 | https://openreview.net/forum?id=EXuv4tVNa3 | Chau Pham,Bryan A. Plummer | NIPS 2024,Poster | Multi-Channel Imaging (MCI) contains an array of challenges for encoding useful feature representations not present in traditional images. For example, images from two different satellites may both contain RGB channels, but the remaining channels can be different for each imaging source. Thus, MCI models must support a... | https://openreview.net/pdf/19191cda99db12be6bc8912fc1698da138cab1c6.pdf |
SOFTS: Efficient Multivariate Time Series Forecasting with Series-Core Fusion | https://openreview.net/forum?id=89AUi5L1uA | https://openreview.net/forum?id=89AUi5L1uA | Lu Han,Xu-Yang Chen,Han-Jia Ye,De-Chuan Zhan | NIPS 2024,Poster | Multivariate time series forecasting plays a crucial role in various fields such as finance, traffic management, energy, and healthcare. Recent studies have highlighted the advantages of channel independence to resist distribution drift but neglect channel correlations, limiting further enhancements. Several methods u... | https://openreview.net/pdf/c8f5e1f12b1143b1e273394867caf779b33c0a82.pdf |
SEEV: Synthesis with Efficient Exact Verification for ReLU Neural Barrier Functions | https://openreview.net/forum?id=nWMqQHzI3W | https://openreview.net/forum?id=nWMqQHzI3W | Hongchao Zhang,Zhizhen Qin,Sicun Gao,Andrew Clark | NIPS 2024,Poster | Neural Control Barrier Functions (NCBFs) have shown significant promise in enforcing safety constraints on nonlinear autonomous systems. State-of-the-art exact approaches to verifying safety of NCBF-based controllers exploit the piecewise-linear structure of ReLU neural networks, however, such approaches still rely on ... | https://openreview.net/pdf/8c8be656daa65c9db0d7eaaf0f5e2cbcf3137202.pdf |
Advancing Tool-Augmented Large Language Models: Integrating Insights from Errors in Inference Trees | https://openreview.net/forum?id=ZIpdu0cHYu | https://openreview.net/forum?id=ZIpdu0cHYu | Sijia Chen,Yibo Wang,Yi-Feng Wu,Qing-Guo Chen,Zhao Xu,Weihua Luo,Kaifu Zhang,Lijun Zhang | NIPS 2024,Poster | Tool-augmented large language models (LLMs) leverage tools, often in the form of APIs, to improve their reasoning capabilities on complex tasks. This enables them to act as intelligent agents interacting with the real world. The recently introduced ToolLLaMA model by Qin et al. [2023] utilizes the depth-first search-ba... | https://openreview.net/pdf/74ee6f313ee1667abf207c714f9e3e241341d853.pdf |
A Primal-Dual-Assisted Penalty Approach to Bilevel Optimization with Coupled Constraints | https://openreview.net/forum?id=uZi7H5Ac0X | https://openreview.net/forum?id=uZi7H5Ac0X | Liuyuan Jiang,Quan Xiao,Victor M. Tenorio,Fernando Real-Rojas,Antonio Marques,Tianyi Chen | NIPS 2024,Poster | Interest in bilevel optimization has grown in recent years, partially due to its relevance for challenging machine-learning problems. Several exciting recent works have been centered around developing efficient gradient-based algorithms that can solve bilevel optimization problems with provable guarantees. However, the... | https://openreview.net/pdf/13a0f27075bedab8b79d901ed72ef74c635ac09c.pdf |
CE-NAS: An End-to-End Carbon-Efficient Neural Architecture Search Framework | https://openreview.net/forum?id=v6W55lCkhN | https://openreview.net/forum?id=v6W55lCkhN | Yiyang Zhao,Yunzhuo Liu,Bo Jiang,Tian Guo | NIPS 2024,Poster | This work presents a novel approach to neural architecture search (NAS) that aims to increase carbon efficiency for the model design process. The proposed framework CE-NAS addresses the key challenge of high carbon cost associated with NAS by exploring the carbon emission variations of energy and energy differences of ... | https://openreview.net/pdf/1e1daf62c7b574a8a94781af5ea3ed13da72701b.pdf |
Fairness-Aware Estimation of Graphical Models | https://openreview.net/forum?id=WvWS8goWyR | https://openreview.net/forum?id=WvWS8goWyR | Zhuoping Zhou,Davoud Ataee Tarzanagh,Bojian Hou,Qi Long,Li Shen | NIPS 2024,Poster | This paper examines the issue of fairness in the estimation of graphical models (GMs), particularly Gaussian, Covariance, and Ising models. These models play a vital role in understanding complex relationships in high-dimensional data. However, standard GMs can result in biased outcomes, especially when the underlying ... | https://openreview.net/pdf/3cbfdb839c78a76a277d4d32e573fc2186d4fc53.pdf |
Toward Efficient Inference for Mixture of Experts | https://openreview.net/forum?id=stXtBqyTWX | https://openreview.net/forum?id=stXtBqyTWX | Haiyang Huang,Newsha Ardalani,Anna Sun,Liu Ke,Shruti Bhosale,Hsien-Hsin S. Lee,Carole-Jean Wu,Benjamin Lee | NIPS 2024,Poster | Mixture-of-Experts (MoE) models have recently gained steam in achieving the state-of-the-art performance in a wide range of tasks in computer vision and natural language processing. They effectively expand the model capacity while incurring a minimal increase in computation cost during training. However, deploying such... | https://openreview.net/pdf/b9888255233cbfec88dd7c0bc9b48c48b33bf0ec.pdf |
KV Cache is 1 Bit Per Channel: Efficient Large Language Model Inference with Coupled Quantization | https://openreview.net/forum?id=pNnvzQsS4P | https://openreview.net/forum?id=pNnvzQsS4P | Tianyi Zhang,Jonah Wonkyu Yi,Zhaozhuo Xu,Anshumali Shrivastava | NIPS 2024,Poster | Efficient deployment of Large Language Models (LLMs) requires batching multiple requests together to improve throughput. As batch size, context length, or model size increases, the size of key and value (KV) cache quickly becomes the main contributor to GPU memory usage and the bottleneck of inference latency and throu... | https://openreview.net/pdf/cc83819e3c2ee5e47a2a7f0f28eb98ada7deb1ce.pdf |
WaveAttack: Asymmetric Frequency Obfuscation-based Backdoor Attacks Against Deep Neural Networks | https://openreview.net/forum?id=J6NByZlLNj | https://openreview.net/forum?id=J6NByZlLNj | Jun Xia,Zhihao Yue,Yingbo Zhou,Zhiwei Ling,Yiyu Shi,Xian Wei,Mingsong Chen | NIPS 2024,Poster | Due to the increasing popularity of Artificial Intelligence (AI), more and more backdoor attacks are designed to mislead Deep Neural Network (DNN) predictions by manipulating training samples or processes. Although backdoor attacks have been investigated in various scenarios, they still suffer from the problems of bot... | https://openreview.net/pdf/b8863e81ef74693919a2a6ff884da8764bc43f8b.pdf |
Fully Explicit Dynamic Gaussian Splatting | https://openreview.net/forum?id=g8pyTkxyIV | https://openreview.net/forum?id=g8pyTkxyIV | Junoh Lee,Changyeon Won,Hyunjun Jung,Inhwan Bae,Hae-Gon Jeon | NIPS 2024,Poster | 3D Gaussian Splatting has shown fast and high-quality rendering results in static scenes by leveraging dense 3D prior and explicit representations. Unfortunately, the benefits of the prior and representation do not involve novel view synthesis for dynamic motions. Ironically, this is because the main barrier is the rel... | https://openreview.net/pdf/0381a18f5cdf57d1b8cc805a21ced8ccfa4a6239.pdf |
Physics-Informed Regularization for Domain-Agnostic Dynamical System Modeling | https://openreview.net/forum?id=iWlqbNE8P7 | https://openreview.net/forum?id=iWlqbNE8P7 | Zijie Huang,Wanjia Zhao,Jingdong Gao,Ziniu Hu,Xiao Luo,Yadi Cao,Yuanzhou Chen,Yizhou Sun,Wei Wang | NIPS 2024,Poster | Learning complex physical dynamics purely from data is challenging due to the intrinsic properties of systems to be satisfied. Incorporating physics-informed priors, such as in Hamiltonian Neural Networks (HNNs), achieves high-precision modeling for energy-conservative systems. However, real-world systems often deviate... | https://openreview.net/pdf/5dc1a3884cb257f2b8d5cacac17a2f7d915c8408.pdf |
Adaptive Sampling for Efficient Softmax Approximation | https://openreview.net/forum?id=XsNA2b8GPz | https://openreview.net/forum?id=XsNA2b8GPz | Tavor Baharav,Ryan Kang,Colin Sullivan,Mo Tiwari,Eric Sager Luxenberg,David Tse,Mert Pilanci | NIPS 2024,Poster | The softmax function is ubiquitous in machine learning and optimization applications. Computing the full softmax evaluation of a matrix-vector product can be computationally expensive in high-dimensional settings. In many applications, however, it is sufficient to calculate only the top few outputs of the softmax funct... | https://openreview.net/pdf/e188b661e6b0a37452f6813bf9348a9472d23a63.pdf |
MutaPLM: Protein Language Modeling for Mutation Explanation and Engineering | https://openreview.net/forum?id=yppcLFeZgy | https://openreview.net/forum?id=yppcLFeZgy | YIZHEN LUO,Zikun Nie,Massimo Hong,Suyuan Zhao,Hao Zhou,Zaiqing Nie | NIPS 2024,Poster | Studying protein mutations within amino acid sequences holds tremendous significance in life sciences. Protein language models (PLMs) have demonstrated strong capabilities in broad biological applications. However, due to architectural design and lack of supervision, PLMs model mutations implicitly with evolutionary pl... | https://openreview.net/pdf/6ba89a23eb0008a9e5fa6007a9fcb9c765216d9f.pdf |
NIPS 2024 Accepted Paper Meta Info Dataset
This dataset is collected from the NIPS 2024 OpenReview website (https://openreview.net/group?id=NeurIPS.cc/2024/Conference#tab-accept-oral) as well as the DeepNLP paper index on arxiv (http://www.deepnlp.org/content/paper/nips2024). Researchers interested in analyzing NIPS 2024 accepted papers and potential trends can use the already cleaned-up JSON files; each row contains the meta information of one paper accepted to the NIPS 2024 conference. To explore more AI & robotics papers (NIPS/ICML/ICLR/IROS/ICRA/etc.) and AI equations, feel free to use the Equation Search Engine (http://www.deepnlp.org/search/equation) as well as the AI Agent Search Engine (http://www.deepnlp.org/search/agent) to find deployed AI apps and agents in your domain.
Meta Information of Each Paper (JSON)
{
"title": "Apathetic or Empathetic? Evaluating LLMs' Emotional Alignments with Humans",
"url": "https://openreview.net/forum?id=pwRVGRWtGg",
"detail_url": "https://openreview.net/forum?id=pwRVGRWtGg",
"authors": "Jen-tse Huang,Man Ho LAM,Eric John Li,Shujie Ren,Wenxuan Wang,Wenxiang Jiao,Zhaopeng Tu,Michael Lyu",
"tags": "NIPS 2024,Poster",
"abstract": "Evaluating Large Language Models\u2019 (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes seven LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4, Mixtral-8x22B, and LLaMA-3.1. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, i.e., EmotionBench, are publicly available at https://github.com/CUHK-ARISE/EmotionBench.",
"pdf": "https://openreview.net/pdf/4d6e71e0ca7fffae0c70fd69763ea99167e3d197.pdf"
}
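As a minimal sketch of how one record can be consumed, the snippet below parses the example record above with Python's standard `json` module and splits the comma-separated `authors` and `tags` fields into lists (the abstract is truncated here for brevity; it is not the real abstract text):

```python
import json

# One record from the dataset, with the fields documented above.
# The abstract is replaced by a placeholder for brevity.
record_json = '''
{
  "title": "Apathetic or Empathetic? Evaluating LLMs' Emotional Alignments with Humans",
  "url": "https://openreview.net/forum?id=pwRVGRWtGg",
  "detail_url": "https://openreview.net/forum?id=pwRVGRWtGg",
  "authors": "Jen-tse Huang,Man Ho LAM,Eric John Li,Shujie Ren,Wenxuan Wang,Wenxiang Jiao,Zhaopeng Tu,Michael Lyu",
  "tags": "NIPS 2024,Poster",
  "abstract": "(truncated)",
  "pdf": "https://openreview.net/pdf/4d6e71e0ca7fffae0c70fd69763ea99167e3d197.pdf"
}
'''

record = json.loads(record_json)

# "authors" and "tags" are stored as comma-separated strings;
# split them into lists for downstream analysis.
authors = [a.strip() for a in record["authors"].split(",")]
tags = [t.strip() for t in record["tags"].split(",")]
is_poster = "Poster" in tags

print(record["title"])
print(f"{len(authors)} authors; presentation type: {tags[-1]}")
# → 8 authors; presentation type: Poster
```

The same parsing applies to every row, so filtering (e.g. keeping only Oral papers, or papers by a given author) reduces to string membership tests on these lists.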
Related
AI Agent Marketplace and Search
Robot Search
Equation and Academic search
AI & Robot Comprehensive Search
AI & Robot Question
AI & Robot Community
AI Agent Marketplace Blog
AI Agent Reviews
AI Agent Marketplace Directory
Microsoft AI Agents Reviews
Claude AI Agents Reviews
OpenAI AI Agents Reviews
Salesforce AI Agents Reviews
AI Agent Builder Reviews
AI Equation
List of AI Equations and Latex
List of Math Equations and Latex
List of Physics Equations and Latex
List of Statistics Equations and Latex
List of Machine Learning Equations and Latex