title: string
authors: string
abstract: string
pdf_url: string
source_url: string
id: string
related_notes: string
year: string
conference: string
content: string
content_meta: string
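The column schema above can be sketched as a record structure. This is a minimal illustrative sketch, not part of the dataset itself: the class name `PaperRecord`, the helper `parse_record`, and the assumption that every field arrives as a string (with `related_notes` and `content_meta` holding JSON-encoded strings) are all assumptions inferred from the listing.

```python
from dataclasses import dataclass

# Field names follow the column listing above, in order.
# All values are strings in the dump; related_notes and content_meta
# appear to be JSON-encoded strings (assumption based on the records below).
FIELDS = [
    "title", "authors", "abstract", "pdf_url", "source_url", "id",
    "related_notes", "year", "conference", "content", "content_meta",
]

@dataclass
class PaperRecord:
    title: str
    authors: str
    abstract: str
    pdf_url: str
    source_url: str
    id: str
    related_notes: str   # JSON-encoded list of review dicts
    year: str
    conference: str
    content: str         # extracted paper markdown
    content_meta: str    # JSON-encoded layout metadata (table of contents, polygons)

def parse_record(values: list[str]) -> PaperRecord:
    """Build a PaperRecord from one flattened row of 11 string values."""
    if len(values) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} fields, got {len(values)}")
    return PaperRecord(**dict(zip(FIELDS, values)))
```

For example, the first record below would be parsed by passing its eleven field values (title through content_meta) in listing order.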
Brain2GAN; Reconstructing perceived faces from the primate brain via StyleGAN3
Thirza Dado, Paolo Papale, Antonio Lozano, Lynn Le, Feng Wang, Marcel van Gerven, Pieter R. Roelfsema, Yağmur Güçlütürk, Umut Güçlü
Neural coding characterizes the relationship between stimuli and their corresponding neural responses. The usage of synthesized yet photorealistic reality by generative adversarial networks (GANs) allows for superior control over these data: the underlying feature representations that account for the semantics in synth...
https://openreview.net/pdf?id=hT1S68yza7
https://openreview.net/forum?id=hT1S68yza7
hT1S68yza7
[{"review_id": "LYZxYFJqI5", "paper_id": "hT1S68yza7", "reviewer": null, "paper_summary": "Summary: Decoding neural activity while subjects are presented with images into the w-latent space of StyleGAN3.\n\nStrengths: A new dataset. The first reconstruction of faces from intracranial data.\n\nWeaknesses: Limited novelt...
2023
ICLR
# BRAIN2GAN; RECONSTRUCTING PERCEIVED FACES FROM THE PRIMATE BRAIN VIA STYLEGAN3 #### Anonymous authors Paper under double-blind review ## ABSTRACT Neural coding characterizes the relationship between stimuli and their corresponding neural responses. The usage of synthesized yet photorealistic reality by generative...
{ "table_of_contents": [ { "title": "BRAIN2GAN; RECONSTRUCTING PERCEIVED FACES\nFROM THE PRIMATE BRAIN VIA STYLEGAN3", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.49505615234375 ], [ 503.5880126953125, 8...
Metadata Archaeology: Unearthing Data Subsets by Leveraging Training Dynamics
Shoaib Ahmed Siddiqui, Nitarshan Rajkumar, Tegan Maharaj, David Krueger, Sara Hooker
Modern machine learning research relies on relatively few carefully curated datasets. Even in these datasets, and typically in `untidy' or raw data, practitioners are faced with significant issues of data quality and diversity which can be prohibitively labor intensive to address. Existing methods for dealing with thes...
https://openreview.net/pdf?id=PvLnIaJbt9
https://openreview.net/forum?id=PvLnIaJbt9
PvLnIaJbt9
[{"review_id": "B2vdYFQIL3M", "paper_id": "PvLnIaJbt9", "reviewer": null, "paper_summary": "This is a well written paper presenting a novel contribution accompanied with a well designed set of convincing experiments (with some room for improvements in he empirical scope and process). It has been assessed by four knowle...
2023
ICLR
# <span id="page-0-0"></span>Metadata Archaeology: Unearthing Data Subsets by Leveraging Training Dynamics Shoaib Ahmed Siddiqui University of Cambridge msas3@cam.ac.uk Nitarshan Rajkumar University of Cambridge nr500@cam.ac.uk Tegan Maharaj University of Toronto tegan.maharaj@utoronto.ca David Krueger University o...
{ "table_of_contents": [ { "title": "Metadata Archaeology: Unearthing Data\nSubsets by Leveraging Training Dynamics", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 76.0662841796875 ], [ 504.421875, 76.0662841...
Light Sampling Field and BRDF Representation for Physically-based Neural Rendering
Jing Yang, Hanyuan Xiao, Wenbin Teng, Yunxuan Cai, Yajie Zhao
Physically-based rendering (PBR) is key for immersive rendering effects used widely in the industry to showcase detailed realistic scenes from computer graphics assets. A well-known caveat is that producing the same is computationally heavy and relies on complex capture devices. Inspired by the success in quality and e...
https://openreview.net/pdf?id=yYEb8v65X8
https://openreview.net/forum?id=yYEb8v65X8
yYEb8v65X8
[{"review_id": "UKUFPSVsxWo", "paper_id": "yYEb8v65X8", "reviewer": null, "paper_summary": "All reviewers agree that the paper proposes a technique which can render high-quality human face images with less time compared to an industrial standard path tracer. It was also agreed that the paper is likely to inspire follo...
2023
ICLR
# LIGHT SAMPLING FIELD AND BRDF REPRESENTATION FOR PHYSICALLY-BASED NEURAL RENDERING Jing Yang \*, Hanyuan Xiao \*, Wenbin Teng, Yunxuan Cai, Yajie Zhao Institute for Creative Technologies University of Southern California {jyang, hxiao, wteng, ycai, zhao}@ict.usc.edu #### ABSTRACT Physically-based rendering (PBR...
{ "table_of_contents": [ { "title": "LIGHT SAMPLING FIELD AND BRDF REPRESENTA-\nTION FOR PHYSICALLY-BASED NEURAL RENDERING", "heading_level": null, "page_id": 0, "polygon": [ [ 107.25, 80.82421875 ], [ 504.0, 80.82421875 ...
EPISODE: Episodic Gradient Clipping with Periodic Resampled Corrections for Federated Learning with Heterogeneous Data
Michael Crawshaw, Yajie Bao, Mingrui Liu
Gradient clipping is an important technique for deep neural networks with exploding gradients, such as recurrent neural networks. Recent studies have shown that the loss functions of these networks do not satisfy the conventional smoothness condition, but instead satisfy a relaxed smoothness condition, i.e., the Lipsc...
https://openreview.net/pdf?id=ytZIYmztET
https://openreview.net/forum?id=ytZIYmztET
ytZIYmztET
[{"review_id": "6MMbTRgNFU", "paper_id": "ytZIYmztET", "reviewer": null, "paper_summary": "This paper proposes a federated learning with clipping of gradients, in the realistic heterogeneous data setting. The algorithm at every round first computes the full gradient from all the clients and depending on the norm of the...
2023
ICLR
## EPISODE: EPISODIC GRADIENT CLIPPING WITH PERIODIC RESAMPLED CORRECTIONS FOR FEDERATED LEARNING WITH HETEROGENEOUS DATA Michael Crawshaw<sup>1</sup>, Yajie Bao<sup>2</sup>, Mingrui Liu<sup>1,∗</sup> <sup>1</sup>Department of Computer Science, George Mason University, Fairfax, VA 22030, USA mcrawsha@gmu.edu, baoy...
{ "table_of_contents": [ { "title": "EPISODE: EPISODIC GRADIENT CLIPPING WITH PE-\nRIODIC RESAMPLED CORRECTIONS FOR FEDERATED\nLEARNING WITH HETEROGENEOUS DATA", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.13092041015625 ], ...
On the Expressiveness of Rational ReLU Neural Networks With Bounded Depth
Gennadiy Averkov, Christopher Hojny, Maximilian Merkert
To confirm that the expressive power of ReLU neural networks grows with their depth, the function $F_n = \max (0,x_1,\ldots,x_n )$ has been considered in the literature. A conjecture by Hertrich, Basu, Di Summa, and Skutella [NeurIPS 2021] states that any ReLU network that exactly represents $F_n$ has at least $\lcei...
https://openreview.net/pdf?id=uREg3OHjLL
https://openreview.net/forum?id=uREg3OHjLL
uREg3OHjLL
[{"review_id": "zsfIdvmO9S", "paper_id": "uREg3OHjLL", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Spotlight)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "rec...
2025
ICLR
# ON THE EXPRESSIVENESS OF RATIONAL RELU NEURAL NETWORKS WITH BOUNDED DEPTH Gennadiy Averkov BTU Cottbus-Senftenberg averkov@b-tu.de Christopher Hojny TU Eindhoven c.hojny@tue.nl Maximilian Merkert TU Braunschweig m.merkert@tu-braunschweig.de # **ABSTRACT** To confirm that the expressive power of ReLU neural netwo...
{ "table_of_contents": [ { "title": "ON THE EXPRESSIVENESS OF RATIONAL RELU NEURAL NETWORKS WITH BOUNDED DEPTH", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 79.6640625 ], [ 455.4140625, 79.6640625 ...
Trajectory attention for fine-grained video motion control
Zeqi Xiao, Wenqi Ouyang, Yifan Zhou, Shuai Yang, Lei Yang, Jianlou Si, Xingang Pan
Recent advancements in video generation have been greatly driven by video diffusion models, with camera motion control emerging as a crucial challenge in creating view-customized visual content. This paper introduces trajectory attention, a novel approach that performs attention along available pixel trajectories for f...
https://openreview.net/pdf?id=2z1HT5lw5M
https://openreview.net/forum?id=2z1HT5lw5M
2z1HT5lw5M
[{"review_id": "Dm8G2Y7Zsh", "paper_id": "2z1HT5lw5M", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2025
ICLR
# TRAJECTORY ATTENTION FOR FINE-GRAINED VIDEO MOTION CONTROL Zeqi Xiao<sup>1</sup>, Wenqi Ouyang<sup>1</sup>, Yifan Zhou<sup>1</sup>, Shuai Yang<sup>2</sup>, Lei Yang<sup>3</sup>, Jianlou Si<sup>3</sup>, Xingang Pan<sup>1</sup> <sup>1</sup>S-Lab, Nanyang Technological University, <sup>2</sup>Wangxuan Institute of Co...
{ "table_of_contents": [ { "title": "TRAJECTORY ATTENTION FOR FINE-GRAINED VIDEO MOTION CONTROL", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.05078125 ], [ 416.25, 80.05078125 ], [ ...
Adversarial Imitation Learning with Preferences
Aleksandar Taranovic, Andras Gabor Kupcsik, Niklas Freymuth, Gerhard Neumann
Designing an accurate and explainable reward function for many Reinforcement Learning tasks is a cumbersome and tedious process. Instead, learning policies directly from the feedback of human teachers naturally integrates human domain knowledge into the policy optimization process. However, different feedback modalit...
https://openreview.net/pdf?id=bhfp5GlDtGe
https://openreview.net/forum?id=bhfp5GlDtGe
bhfp5GlDtGe
[{"review_id": "FtJO6roG6Hq", "paper_id": "bhfp5GlDtGe", "reviewer": null, "paper_summary": "This paper proposes a method that can utilize both preference and demonstration from an expert. While, as one reviewer concerned about, the technical novelty is not high (simply combining two existing methods), the problem form...
2023
ICLR
# ADVERSARIAL IMITATION LEARNING WITH PREFERENCES Aleksandar Taranovic<sup>1,2,∗</sup>, Andras Kupcsik<sup>2</sup>, Niklas Freymuth<sup>1</sup>, Gerhard Neumann<sup>1</sup> <sup>1</sup> Autonomous Learning Robots Lab, Karlsruhe Institute of Technology, Karlsruhe, Germany <sup>2</sup> Bosch Center for Artificial I...
{ "table_of_contents": [ { "title": "ADVERSARIAL IMITATION LEARNING WITH PREFER-\nENCES", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.05078125 ], [ 503.56976318359375, 80.05078125 ], [ ...
Consistency Checks for Language Model Forecasters
Daniel Paleka, Abhimanyu Pallavi Sudhir, Alejandro Alvarez, Vineeth Bhat, Adam Shen, Evan Wang, Florian Tramèr
Forecasting is a task that is difficult to evaluate: the ground truth can only be known in the future. Recent work showing LLM forecasters rapidly approaching human-level performance begs the question: how can we benchmark and evaluate these forecasters *instantaneously*? Following the consistency check framework, we m...
https://openreview.net/pdf?id=r5IXBlTCGc
https://openreview.net/forum?id=r5IXBlTCGc
r5IXBlTCGc
[{"review_id": "ME5Mo9tI3M", "paper_id": "r5IXBlTCGc", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Oral)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recommen...
2025
ICLR
# CONSISTENCY CHECKS FOR LANGUAGE MODEL FORECASTERS Daniel Paleka<sup>∗</sup> ETH Zurich Abhimanyu Pallavi Sudhir \* University of Warwick Alejandro Alvarez Independent Vineeth Bhat IIIT Hyderabad Adam Shen Columbia University Evan Wang Cornell University Florian Tramèr ETH Zurich # ABSTRACT Forecasting is a task...
{ "table_of_contents": [ { "title": "CONSISTENCY CHECKS FOR LANGUAGE MODEL\nFORECASTERS", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.05078125 ], [ 504.421875, 80.05078125 ], [ ...
Offline RL with Observation Histories: Analyzing and Improving Sample Complexity
Joey Hong, Anca Dragan, Sergey Levine
Offline reinforcement learning (RL) can in principle synthesize more optimal behavior from a dataset consisting only of suboptimal trials. One way that this can happen is by "stitching" together the best parts of otherwise suboptimal trajectories that overlap on similar states, to create new behaviors where each indivi...
https://openreview.net/pdf?id=GnOLWS4Llt
https://openreview.net/forum?id=GnOLWS4Llt
GnOLWS4Llt
[{"review_id": "I0mag2gBuh", "paper_id": "GnOLWS4Llt", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2024
ICLR
# OFFLINE RL WITH OBSERVATION HISTORIES: ANALYZING AND IMPROVING SAMPLE COMPLEXITY Joey Hong Anca Dragan Sergey Levine UC Berkeley {joey hong,anca,sergey.levine}@berkeley.edu # ABSTRACT Offline reinforcement learning (RL) can in principle synthesize more optimal behavior from a dataset consisting only of suboptimal ...
{ "table_of_contents": [ { "title": "OFFLINE RL WITH OBSERVATION HISTORIES:\nANALYZING AND IMPROVING SAMPLE COMPLEXITY", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.05078125 ], [ 492.3960266113281, 80.0...
A Neural PDE Solver with Temporal Stencil Modeling
Zhiqing Sun, Yiming Yang, Shinjae Yoo
Numerical simulation of non-linear partial differential equations plays a crucial role in modeling physical science and engineering phenomena, such as weather, climate, and aerodynamics. Recent Machine Learning (ML) models trained on low-resolution spatio-temporal signals have shown new promises in capturing important ...
https://openreview.net/pdf?id=Nvlqsofsc6-
https://openreview.net/forum?id=Nvlqsofsc6-
Nvlqsofsc6-
[{"review_id": "79w4ICIpxY2", "paper_id": "Nvlqsofsc6-", "reviewer": null, "paper_summary": "The paper introduces a hybrid model combining ML modules and a PDE solver for solving an incompressible Navier-Stokes equation. The objective is to solve turbulent problems at a reduced cost. The ML component is trained to pred...
2023
ICLR
# A NEURAL PDE SOLVER WITH TEMPORAL STENCIL MODELING Anonymous authors Paper under double-blind review # ABSTRACT Numerical simulation of non-linear partial differential equations plays a crucial role in modeling physical science and engineering phenomena, such as weather, climate, and aerodynamics. Recent Machine L...
{ "table_of_contents": [ { "title": "A NEURAL PDE SOLVER WITH TEMPORAL STENCIL\nMODELING", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.05078125 ], [ 503.5774230957031, 80.05078125 ], [ ...
Untangling Effect and Side Effect: Consistent Causal Inference in Non-Targeted Trials
Georgios Mavroudeas, Malik Magdon-Ismail, Kristin Bennett, Jason Kuruzovich
A treatment is usually appropriate for some group (the ``sick" group) on whom it has an effect, but it can also have a side-effect when given to subjects from another group (the ``healthy" group). In a non-targeted trial both sick and healthy subjects may be treated, producing heterogeneous effects within the treated g...
https://openreview.net/pdf?id=vxln_lFKkfc
https://openreview.net/forum?id=vxln_lFKkfc
vxln_lFKkfc
[{"review_id": "4uthSLfr6A6", "paper_id": "vxln_lFKkfc", "reviewer": null, "paper_summary": "This paper studies the estimation of treatment effects in the non-targeted trials in which one cannot control the selection of treatment units. An asymptotically consistent effect estimation algorithm is proposed and the experi...
2023
ICLR
# UNTANGLING EFFECT AND SIDE EFFECT: CONSISTENT CAUSAL INFERENCE IN NON-TARGETED TRIALS Anonymous authors Paper under double-blind review # ABSTRACT A treatment is usually appropriate for some group (the "sick" group) on whom it has an effect, but it can also have a side-effect when given to subjects from another g...
{ "table_of_contents": [ { "title": "UNTANGLING EFFECT AND SIDE EFFECT: CONSIS-\nTENT CAUSAL INFERENCE IN NON-TARGETED TRIALS", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.4375 ], [ 503.583984375, 80.437...
Presto! Distilling Steps and Layers for Accelerating Music Generation
Zachary Novack, Ge Zhu, Jonah Casebeer, Julian McAuley, Taylor Berg-Kirkpatrick, Nicholas J. Bryan
Despite advances in diffusion-based text-to-music (TTM) methods, efficient, high-quality generation remains a challenge. We introduce Presto!, an approach to inference acceleration for score-based diffusion transformers via reducing both sampling steps and cost per step. To reduce steps, we develop a new score-based di...
https://openreview.net/pdf?id=Gj5JTAwdoy
https://openreview.net/forum?id=Gj5JTAwdoy
Gj5JTAwdoy
[{"review_id": "U6ZJrG5s5k", "paper_id": "Gj5JTAwdoy", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Spotlight)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "rec...
2025
ICLR
# *Presto!* DISTILLING STEPS AND LAYERS FOR ACCELERATING MUSIC GENERATION Zachary Novack<sup>∗</sup> UC – San Diego Ge Zhu & Jonah Casebeer Adobe Research Julian McAuley & Taylor Berg-Kirkpatrick UC – San Diego Nicholas J. Bryan Adobe Research # ABSTRACT Despite advances in diffusion-based text-to-music (TTM) met...
{ "table_of_contents": [ { "title": "Presto! DISTILLING STEPS AND LAYERS\nFOR ACCELERATING MUSIC GENERATION", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.05078125 ], [ 415.07537841796875, 80.05078125 ...
Near-Optimal Deployment Efficiency in Reward-Free Reinforcement Learning with Linear Function Approximation
Dan Qiao, Yu-Xiang Wang
We study the problem of deployment efficient reinforcement learning (RL) with linear function approximation under the \emph{reward-free} exploration setting. This is a well-motivated problem because deploying new policies is costly in real-life RL applications. Under the linear MDP setting with feature dimension $d$ an...
https://openreview.net/pdf?id=SNwH0dDGl7_
https://openreview.net/forum?id=SNwH0dDGl7_
SNwH0dDGl7_
[{"review_id": "zuONAIbXcG7", "paper_id": "SNwH0dDGl7_", "reviewer": null, "paper_summary": "This paper presents a novel algorithm that simultaneously achieves nearly tight sample complexity and switching cost in reward-free linear MDP. The theoretical result is solid. Multiple new theoretical tools, including policy d...
2023
ICLR
# NEAR-OPTIMAL DEPLOYMENT EFFICIENCY IN REWARD-FREE REINFORCEMENT LEARNING WITH LINEAR FUNCTION APPROXIMATION Dan Qiao Department of Computer Science, UCSB danqiao@ucsb.edu Yu-Xiang Wang Department of Computer Science, UCSB yuxiangw@cs.ucsb.edu ## **ABSTRACT** We study the problem of deployment efficient reinforc...
{ "table_of_contents": [ { "title": "NEAR-OPTIMAL DEPLOYMENT EFFICIENCY IN REWARD-FREE REINFORCEMENT LEARNING WITH LINEAR FUNCTION APPROXIMATION", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.4375 ], [ 504.0, ...
EC-Diffuser: Multi-Object Manipulation via Entity-Centric Behavior Generation
Carl Qi, Dan Haramati, Tal Daniel, Aviv Tamar, Amy Zhang
Object manipulation is a common component of everyday tasks, but learning to manipulate objects from high-dimensional observations presents significant challenges. These challenges are heightened in multi-object environments due to the combinatorial complexity of the state space as well as of the desired behaviors. Whi...
https://openreview.net/pdf?id=o3pJU5QCtv
https://openreview.net/forum?id=o3pJU5QCtv
o3pJU5QCtv
[{"review_id": "IKihTSnxMi", "paper_id": "o3pJU5QCtv", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2025
ICLR
# EC-DIFFUSER: MULTI-OBJECT MANIPULATION VIA ENTITY-CENTRIC BEHAVIOR GENERATION Carl Qi<sup>1</sup>, Dan Haramati<sup>2,3</sup>, Tal Daniel<sup>2</sup>, Aviv Tamar<sup>2</sup>, Amy Zhang<sup>1,4</sup> <sup>1</sup> UT Austin, <sup>2</sup> Technion, Israel Institute of Technology, <sup>3</sup> Brown University, <sup...
{ "table_of_contents": [ { "title": "EC-DIFFUSER: MULTI-OBJECT MANIPULATION\nVIA ENTITY-CENTRIC BEHAVIOR GENERATION", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.39202880859375 ], [ 458.3238830566406, 8...
Balanced Ranking with Relative Centrality: A multi-core periphery perspective
Chandra Sekhar Mukherjee, Jiapeng Zhang
Ranking of vertices in a graph for different objectives is one of the most fundamental tasks in computer science. It is known that traditional ranking algorithms can generate unbalanced ranking when the graph has underlying communities, resulting in loss of information, polarised opinions, and reduced diversity (Celis,...
https://openreview.net/pdf?id=21rSeWJHPF
https://openreview.net/forum?id=21rSeWJHPF
21rSeWJHPF
[{"review_id": "Pmzn7RnC0a", "paper_id": "21rSeWJHPF", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2025
ICLR
# BALANCED RANKING WITH RELATIVE CENTRALITY: A MULTI-CORE PERIPHERY PERSPECTIVE Chandra Sekhar Mukherjee<sup>∗</sup> Thomas Lord Department of Computer Science University of Southern California chandrasekhar.mukherjee07@gmail.com Jiapeng Zhang<sup>†</sup> Thomas Lord Department of Computer Science University of Southern Calif...
{ "table_of_contents": [ { "title": "BALANCED RANKING WITH RELATIVE CENTRALITY:\nA MULTI-CORE PERIPHERY PERSPECTIVE", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.13092041015625 ], [ 506.3755798339844, 8...
NV-Embed: Improved Techniques for Training LLMs as Generalist Embedding Models
Chankyu Lee, Rajarshi Roy, Mengyao Xu, Jonathan Raiman, Mohammad Shoeybi, Bryan Catanzaro, Wei Ping
Decoder-only large language model (LLM)-based embedding models are beginning to outperform BERT or T5-based embedding models in general-purpose text embedding tasks, including dense vector-based retrieval. In this work, we introduce the NV-Embed model, incorporating architectural designs, training procedures, and curat...
https://openreview.net/pdf?id=lgsyLSsDRe
https://openreview.net/forum?id=lgsyLSsDRe
lgsyLSsDRe
[{"review_id": "xGUbr8W5Zp", "paper_id": "lgsyLSsDRe", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Spotlight)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "rec...
2025
ICLR
# NV-EMBED: IMPROVED TECHNIQUES FOR TRAINING LLMS AS GENERALIST EMBEDDING MODELS Chankyu Lee <sup>∗</sup><sup>1</sup> Rajarshi Roy <sup>1</sup> Mengyao Xu <sup>1</sup> Jonathan Raiman <sup>1</sup> Mohammad Shoeybi <sup>1</sup> Bryan Catanzaro <sup>1</sup> Wei Ping <sup>∗</sup> <sup>1</sup> # NVIDIA # ABSTRACT Deco...
{ "table_of_contents": [ { "title": "NV-EMBED: IMPROVED TECHNIQUES FOR TRAINING\nLLMS AS GENERALIST EMBEDDING MODELS", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.05078125 ], [ 503.5680236816406, 80.0507...
Meta-Learning General-Purpose Learning Algorithms with Transformers
Louis Kirsch, James Harrison, Jascha Sohl-Dickstein, Luke Metz
Modern machine learning requires system designers to specify aspects of the learning pipeline, such as losses, architectures, and optimizers. Meta-learning, or learning-to-learn, instead aims to learn those aspects, and promises to unlock greater capabilities with less manual effort. One particularly ambitious goal of ...
https://openreview.net/pdf?id=Y2ShteTrnX2
https://openreview.net/forum?id=Y2ShteTrnX2
Y2ShteTrnX2
[{"review_id": "lzVuMszpuyD", "paper_id": "Y2ShteTrnX2", "reviewer": null, "paper_summary": "This paper proposes a novel meta-learning algorithm. An interesting phase transition between memorization and generalization is observed as the number of tasks is massively scaled up in a controlled (and artificial) manner. \n\...
2023
ICLR
## GENERAL-PURPOSE IN-CONTEXT LEARNING BY META-LEARNING TRANSFORMERS Anonymous authors Paper under double-blind review ## ABSTRACT Modern machine learning requires system designers to specify aspects of the learning pipeline, such as losses, architectures, and optimizers. Meta-learning, or learning-to-learn, instead...
{ "table_of_contents": [ { "title": "GENERAL-PURPOSE IN-CONTEXT LEARNING\nBY META-LEARNING TRANSFORMERS", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.05078125 ], [ 439.875, 80.05078125 ], ...
Expected Probabilistic Hierarchies
Marcel Kollovieh, Bertrand Charpentier, Daniel Zügner, Stephan Günnemann
Hierarchical clustering has usually been addressed by discrete optimization using heuristics or continuous optimization of relaxed scores for hierarchies. In this work, we propose to optimize expected scores under a probabilistic model over hierarchies. (1) We show theoretically that the global optimum of the expected ...
https://openreview.net/pdf?id=dPOLZ2u4SKV
https://openreview.net/forum?id=dPOLZ2u4SKV
dPOLZ2u4SKV
[{"review_id": "Jyd517dZrLl", "paper_id": "dPOLZ2u4SKV", "reviewer": null, "paper_summary": "The paper extends the work of Zügner et al. (2021), called FPH, to developing a hierarchical clustering method. They extend the soft hierarchy scores of FPH to expected hierarchy scores, prove results about it and present exper...
2023
ICLR
# EXPECTED PROBABILISTIC HIERARCHIES Anonymous authors Paper under double-blind review # ABSTRACT Hierarchical clustering has usually been addressed by discrete optimization using heuristics or continuous optimization of relaxed scores for hierarchies. In this work, we propose to optimize *expected* scores under a ...
{ "table_of_contents": [ { "title": "EXPECTED PROBABILISTIC HIERARCHIES", "heading_level": null, "page_id": 0, "polygon": [ [ 106.98046875, 80.2540283203125 ], [ 412.88250732421875, 80.2540283203125 ], [ ...
Alchemy: Amplifying Theorem-Proving Capability Through Symbolic Mutation
Shaonan Wu, Shuai Lu, Yeyun Gong, Nan Duan, Ping Wei
Formal proofs are challenging to write even for experienced experts. Recent progress in Neural Theorem Proving (NTP) shows promise in expediting this process. However, the formal corpora available on the Internet are limited compared to the general text, posing a significant data scarcity challenge for NTP. To addre...
https://openreview.net/pdf?id=7NL74jUiMg
https://openreview.net/forum?id=7NL74jUiMg
7NL74jUiMg
[{"review_id": "9MxpZSu3Cr", "paper_id": "7NL74jUiMg", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2025
ICLR
# ALCHEMY: AMPLIFYING THEOREM-PROVING CAPABILITY THROUGH SYMBOLIC MUTATION Shaonan Wu<sup>1,2,∗</sup> Shuai Lu<sup>3,†</sup> Yeyun Gong<sup>3</sup>, Nan Duan<sup>3</sup>, Ping Wei<sup>1,2,†</sup> - <sup>1</sup> National Key Laboratory of Human-Machine Hybrid Augmented Intelligence - Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong Unive...
{ "table_of_contents": [ { "title": "ALCHEMY: AMPLIFYING THEOREM-PROVING CAPA-\nBILITY THROUGH SYMBOLIC MUTATION", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.49505615234375 ], [ 503.5697326660156, 80.49...
PromptTTS 2: Describing and Generating Voices with Text Prompt
Yichong Leng, Zhifang Guo, Kai Shen, Zeqian Ju, Xu Tan, Eric Liu, Yufei Liu, Dongchao Yang, leying zhang, Kaitao Song, Lei He, Xiangyang Li, sheng zhao, Tao Qin, Jiang Bian
Speech conveys more information than text, as the same word can be uttered in various voices to convey diverse information. Compared to traditional text-to-speech (TTS) methods relying on speech prompts (reference speech) for voice variability, using text prompts (descriptions) is more user-friendly since speech prompt...
https://openreview.net/pdf?id=NsCXDyv2Bn
https://openreview.net/forum?id=NsCXDyv2Bn
NsCXDyv2Bn
[{"review_id": "o9ngQOfaAd", "paper_id": "NsCXDyv2Bn", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2024
ICLR
# PROMPTTTS 2: DESCRIBING AND GENERATING VOICES WITH TEXT PROMPT Yichong Leng<sup>∗†</sup>, Zhifang Guo<sup>∗</sup>, Kai Shen, Zeqian Ju, Xu Tan, Yanqing Liu, Yufei Liu Dongchao Yang, Leying Zhang, Kaitao Song, Lei He, Xiang-Yang Li, Sheng Zhao Tao Qin, Jiang Bian MOE-Microsoft Key Laboratory of Multimedia Computing and Comm...
{ "table_of_contents": [ { "title": "PROMPTTTS 2: DESCRIBING AND GENERATING\nVOICES WITH TEXT PROMPT", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.05078125 ], [ 503.5575256347656, 80.05078125 ],...
Continuity-Preserving Convolutional Autoencoders for Learning Continuous Latent Dynamical Models from Images
Aiqing Zhu, Yuting Pan, Qianxiao Li
Continuous dynamical systems are cornerstones of many scientific and engineering disciplines. While machine learning offers powerful tools to model these systems from trajectory data, challenges arise when these trajectories are captured as images, resulting in pixel-level observations that are discrete in nature. Cons...
https://openreview.net/pdf?id=MxALfOAnXv
https://openreview.net/forum?id=MxALfOAnXv
MxALfOAnXv
[{"review_id": "j3gEEB74wl", "paper_id": "MxALfOAnXv", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2025
ICLR
# CONTINUITY-PRESERVING CONVOLUTIONAL AUTOENCODERS FOR LEARNING CONTINUOUS LATENT DYNAMICAL MODELS FROM IMAGES Aiqing Zhu<sup>1</sup>, Yuting Pan<sup>2,3</sup>, Qianxiao Li<sup>1,4\*</sup> zaq@nus.edu.sg, pan.yuting@u.nus.edu, qianxiao@nus.edu.sg #### **ABSTRACT** Continuous dynamical systems are cornerstones of ma...
{ "table_of_contents": [ { "title": "CONTINUITY-PRESERVING CONVOLUTIONAL AUTOENCODERS FOR LEARNING CONTINUOUS LATENT DYNAMICAL MODELS FROM IMAGES", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.82421875 ], [ 453.02...
CORN: Contact-based Object Representation for Nonprehensile Manipulation of General Unseen Objects
Yoonyoung Cho, Junhyek Han, Yoontae Cho, Beomjoon Kim
Nonprehensile manipulation is essential for manipulating objects that are too thin, large, or otherwise ungraspable in the wild. To sidestep the difficulty of contact modeling in conventional modeling-based approaches, reinforcement learning (RL) has recently emerged as a promising alternative. However, previous RL app...
https://openreview.net/pdf?id=KTtEICH4TO
https://openreview.net/forum?id=KTtEICH4TO
KTtEICH4TO
[{"review_id": "X9DK8bVyNy", "paper_id": "KTtEICH4TO", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2024
ICLR
# CORN: CONTACT-BASED OBJECT REPRESENTATION FOR NONPREHENSILE MANIPULATION OF GENERAL UNSEEN OBJECTS Yoonyoung Cho, Junhyek Han, Yoontae Cho, Beomjoon Kim Korea Advanced Institute of Science and Technology {yoonyoung.cho, junhyek.han, yoontae.cho, beomjoon.kim}@kaist.ac.kr ## **ABSTRACT** Nonprehensile manipulatio...
{ "table_of_contents": [ { "title": "CORN: CONTACT-BASED OBJECT REPRESENTATION FOR NONPREHENSILE MANIPULATION OF GENERAL UNSEEN OBJECTS", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 79.6640625 ], [ 504.75, ...
Confidence-Conditioned Value Functions for Offline Reinforcement Learning
Joey Hong, Aviral Kumar, Sergey Levine
Offline reinforcement learning (RL) promises the ability to learn effective policies solely using existing, static datasets, without any costly online interaction. To do so, offline RL methods must handle distributional shift between the dataset and the learned policy. The most common approach is to learn conservative,...
https://openreview.net/pdf?id=Zeb5mTuqT5
https://openreview.net/forum?id=Zeb5mTuqT5
Zeb5mTuqT5
[{"review_id": "eKQms-0LRr", "paper_id": "Zeb5mTuqT5", "reviewer": null, "paper_summary": "(a) Claims: The paper introduces an algorithm that learns a confidence-conditioned lower bound for $Q$ values from an offline dataset. This allows the algorithm to adaptively change its degree of conservatism online/at test time...
2023
ICLR
# CONFIDENCE-CONDITIONED VALUE FUNCTIONS FOR OFFLINE REINFORCEMENT LEARNING Joey Hong, Aviral Kumar, Sergey Levine University of California, Berkeley {joey hong,aviralk}@berkeley.edu, svlevine@eecs.berkeley.edu # ABSTRACT Offline reinforcement learning (RL) promises the ability to learn effective policies solely u...
{ "table_of_contents": [ { "title": "CONFIDENCE-CONDITIONED VALUE FUNCTIONS FOR\nOFFLINE REINFORCEMENT LEARNING", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.05078125 ], [ 503.56646728515625, 80.05078125...
Active Topological Mapping by Metric-Free Exploration via Task and Motion Imitation
Yuhang He, Irving Fang, Yiming Li, Chen Feng
Topological map is an effective environment representation for visual navigation. It is a graph of image nodes and spatial neighborhood edges without metric information such as global or relative agent poses. However, currently such a map construction relies on either less-efficient random exploration or more demanding...
https://openreview.net/pdf?id=AB4xZG9uzGl
https://openreview.net/forum?id=AB4xZG9uzGl
AB4xZG9uzGl
[{"review_id": "EaZIt2ILyJ", "paper_id": "AB4xZG9uzGl", "reviewer": null, "paper_summary": "This paper had received an unusual number of 5 reviews, which were quite mixed. The reviewers appreciated the idea to hallucinate next goals directly in feature space. However, several critical issues were raised, which the auth...
2023
ICLR
# ACTIVE TOPOLOGICAL MAPPING BY METRIC-FREE EXPLORATION VIA TASK AND MOTION IMITATION ### Anonymous authors Paper under double-blind review ### ABSTRACT Topological map is an effective environment representation for visual navigation. It is a graph of image nodes and spatial neighborhood edges without metric infor...
{ "table_of_contents": [ { "title": "ACTIVE TOPOLOGICAL MAPPING BY METRIC-FREE EXPLO-\nRATION VIA TASK AND MOTION IMITATION", "heading_level": null, "page_id": 0, "polygon": [ [ 90.84375, 93.19921875 ], [ 523.0740966796875, 93...
LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression
Laurent Condat, Arto Maranjyan, Peter Richtárik
In $D$istributed optimization and $L$earning, and even more in the modern framework of federated learning, communication, which is slow and costly, is critical. We introduce LoCoDL, a communication-efficient algorithm that leverages the two popular and effective techniques of $Lo$cal training, which reduces the communi...
https://openreview.net/pdf?id=PpYy0dR3Qw
https://openreview.net/forum?id=PpYy0dR3Qw
PpYy0dR3Qw
[{"review_id": "o7Cainvdk5", "paper_id": "PpYy0dR3Qw", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Spotlight)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "rec...
2025
ICLR
## LOCODL: COMMUNICATION-EFFICIENT DISTRIBUTED LEARNING WITH LOCAL TRAINING AND COMPRESSION ### Laurent Condat, Artavazd Maranjyan & Peter Richtárik Computer Science Program, CEMSE Division King Abdullah University of Science and Technology (KAUST) Thuwal, 23955-6900, Kingdom of Saudi Arabia & SDAIA-KAUST Center of E...
{ "table_of_contents": [ { "title": "LOCODL: COMMUNICATION-EFFICIENT DISTRIBUTED\nLEARNING WITH LOCAL TRAINING AND COMPRESSION", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 86.3690185546875 ], [ 504.421875, ...
PersonalLLM: Tailoring LLMs to Individual Preferences
Thomas P Zollo, Andrew Wei Tung Siah, Naimeng Ye, Ang Li, Hongseok Namkoong
As LLMs become capable of complex tasks, there is growing potential for personalized interactions tailored to the subtle and idiosyncratic preferences of the user. We present a public benchmark, PersonalLLM, focusing on adapting LLMs to provide maximal benefits for a particular user. Departing from existing alignment b...
https://openreview.net/pdf?id=2R7498e2Tx
https://openreview.net/forum?id=2R7498e2Tx
2R7498e2Tx
[{"review_id": "RYdTEN0G8d", "paper_id": "2R7498e2Tx", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2025
ICLR
# <span id="page-0-0"></span>PERSONALLLM: TAILORING LLMS TO INDIVIDUAL PREFERENCES Thomas P. Zollo<sup>∗</sup> Columbia University tpz2105@columbia.edu Andrew Wei Tung Siah<sup>∗</sup> Columbia University andrew.siah@columbia.edu Naimeng Ye Columbia University ny2336@columbia.edu Ang Li Columbia University al4263@co...
{ "table_of_contents": [ { "title": "PERSONALLLM:\nTAILORING LLMS TO INDIVIDUAL PREFERENCES", "heading_level": null, "page_id": 0, "polygon": [ [ 84.8671875, 83.14453125 ], [ 449.5111389160156, 83.14453125 ], [...
Test-Time Adaptation for Combating Missing Modalities in Egocentric Videos
Merey Ramazanova, Alejandro Pardo, Bernard Ghanem, Motasem Alfarra
Understanding videos that contain multiple modalities is crucial, especially in egocentric videos, where combining various sensory inputs significantly improves tasks like action recognition and moment localization. However, real-world applications often face challenges with incomplete modalities due to privacy concern...
https://openreview.net/pdf?id=1L52bHEL5d
https://openreview.net/forum?id=1L52bHEL5d
1L52bHEL5d
[{"review_id": "7xPtAgwjmj", "paper_id": "1L52bHEL5d", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2025
ICLR
# <span id="page-0-0"></span>TEST-TIME ADAPTATION FOR COMBATING MISSING MODALITIES IN EGOCENTRIC VIDEOS Merey Ramazanova, Alejandro Pardo, Bernard Ghanem, Motasem Alfarra Center of Excellence in Generative AI, KAUST, Saudi Arabia firstname.lastname@kaust.edu.sa ### ABSTRACT Understanding videos that contain multiple...
{ "table_of_contents": [ { "title": "TEST-TIME ADAPTATION FOR COMBATING MISSING\nMODALITIES IN EGOCENTRIC VIDEOS", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.13092041015625 ], [ 503.5675964355469, 80.1...
Learned Neural Network Representations are Spread Diffusely with Redundancy
Vedant Nanda, Till Speicher, John P Dickerson, Soheil Feizi, Krishna Gummadi, Adrian Weller
Representations learned by pre-training a neural network on a large dataset are increasingly used successfully to perform a variety of downstream tasks. In this work, we take a closer look at how features are encoded in such pre-trained representations. We find that learned representations in a given layer exhibit a de...
https://openreview.net/pdf?id=G2GpzH1l9AC
https://openreview.net/forum?id=G2GpzH1l9AC
G2GpzH1l9AC
[{"review_id": "5ulQznqEFz", "paper_id": "G2GpzH1l9AC", "reviewer": null, "paper_summary": "This paper explores the representations learned by CNNs, and shows that subsets of the neurons in a particular layer can be used to summarize that layer's contributions to the final prediction. Thus there is a lot of redundancy...
2023
ICLR
# LEARNED NEURAL NETWORK REPRESENTATIONS ARE SPREAD DIFFUSELY WITH REDUNDANCY Anonymous authors Paper under double-blind review ## ABSTRACT Representations learned by pre-training a neural network on a large dataset are increasingly used successfully to perform a variety of downstream tasks. In this work, we take a...
{ "table_of_contents": [ { "title": "LEARNED NEURAL NETWORK REPRESENTATIONS\nARE SPREAD DIFFUSELY WITH REDUNDANCY", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.2540283203125 ], [ 503.57830810546875, 80.2...
Can BERT Refrain from Forgetting on Sequential Tasks? A Probing Study
Mingxu Tao, Yansong Feng, Dongyan Zhao
Large pre-trained language models have helped to achieve state of the art on a variety of NLP tasks, nevertheless, they still suffer from forgetting when incrementally learning a series of sequential tasks. To alleviate this problem, recent works propose several models enhanced by sparse experience replay and local ada...
https://openreview.net/pdf?id=UazgYBMS9-W
https://openreview.net/forum?id=UazgYBMS9-W
UazgYBMS9-W
[{"review_id": "-L5uocT54q", "paper_id": "UazgYBMS9-W", "reviewer": null, "paper_summary": "This paper explores whether BERT forgets representations of previous tasks over the course of being trained on new tasks. The method tracks the encoding ability of BERT for specific tasks before, during, and after learning new t...
2023
ICLR
# CAN BERT REFRAIN FROM FORGETTING ON SEQUENTIAL TASKS? A PROBING STUDY Mingxu Tao1,2, Yansong Feng1,3, Dongyan Zhao1,2 <sup>1</sup>Wangxuan Institute of Computer Technology, Peking University, China <sup>2</sup>Center for Data Science, Peking University, China <sup>3</sup>The MOE Key Laboratory of Computational L...
{ "table_of_contents": [ { "title": "CAN BERT REFRAIN FROM FORGETTING ON SEQUEN-\nTIAL TASKS? A PROBING STUDY", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.05078125 ], [ 503.56976318359375, 80.05078125 ...
Generative Multi-Flow Networks: Centralized, Independent and Conservation
Yinchuan Li, Haozhi Wang, Shuang Luo, yunfeng shao, Jianye HAO
Generative flow networks utilize the flow matching loss to learn a stochastic policy for generating objects from a sequence of actions, such that the probability of generating a pattern can be proportional to the corresponding given reward. However, existing works can only handle single flow model tasks and cannot dire...
https://openreview.net/pdf?id=OTIhUlChVaT
https://openreview.net/forum?id=OTIhUlChVaT
OTIhUlChVaT
[{"review_id": "CWiPPFLN8GD", "paper_id": "OTIhUlChVaT", "reviewer": null, "paper_summary": "The paper received strong and quite consistent criticism from all reviewers and the authors have not responded.", "strengths": "", "weaknesses": "", "comments": "", "overall_score": "Reject", "confidence": null, "contribution":...
2023
ICLR
# GENERATIVE MULTI-FLOW NETWORKS: CENTRALIZED, INDEPENDENT AND CONSERVATION Anonymous authors Paper under double-blind review # ABSTRACT Generative flow networks utilize the flow matching loss to learn a stochastic policy for generating objects from a sequence of actions, such that the probability of generating a ...
{ "table_of_contents": [ { "title": "GENERATIVE MULTI-FLOW NETWORKS: CENTRAL-\nIZED, INDEPENDENT AND CONSERVATION", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.05078125 ], [ 503.5697021484375, 80.0507812...
Tight Clusters Make Specialized Experts
Stefan Nielsen, Rachel Teo, Laziz Abdullaev, Tan Minh Nguyen
Sparse Mixture-of-Experts (MoE) architectures have emerged as a promising approach to decoupling model capacity from computational cost. At the core of the MoE model is the router, which learns the underlying clustering structure of the input distribution in order to send input tokens to appropriate experts. However, l...
https://openreview.net/pdf?id=Pu3c0209cx
https://openreview.net/forum?id=Pu3c0209cx
Pu3c0209cx
[{"review_id": "j0xBZEcIbn", "paper_id": "Pu3c0209cx", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2025
ICLR
# TIGHT CLUSTERS MAKE SPECIALIZED EXPERTS Stefan K. Nielsen<sup>∗</sup> FPT Software AI Center stefannvkp@fpt.com Laziz U. Abdullaev Department of Mathematics National University of Singapore laziz.abdullaev@u.nus.edu Rachel S.Y. Teo<sup>∗</sup> Department of Mathematics National University of Singapore rachel.tsy@u....
{ "table_of_contents": [ { "title": "TIGHT CLUSTERS MAKE SPECIALIZED EXPERTS", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.4375 ], [ 462.8538513183594, 80.4375 ], [ 462.8538513...
$\sigma$-zero: Gradient-based Optimization of $\ell_0$-norm Adversarial Examples
Antonio Emanuele Cinà, Francesco Villani, Maura Pintor, Lea Schönherr, Battista Biggio, Marcello Pelillo
Evaluating the adversarial robustness of deep networks to gradient-based attacks is challenging. While most attacks consider $\ell_2$- and $\ell_\infty$-norm constraints to craft input perturbations, only a few investigate sparse $\ell_1$- and $\ell_0$-norm attacks. In particular, $\ell_0$-norm attacks remain the least...
https://openreview.net/pdf?id=JMPOqoe4tl
https://openreview.net/forum?id=JMPOqoe4tl
JMPOqoe4tl
[{"review_id": "SNY0FOWAkf", "paper_id": "JMPOqoe4tl", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2025
ICLR
# <span id="page-0-5"></span><span id="page-0-3"></span> $\sigma$-zero: Gradient-based Optimization of $\ell_0$-norm Adversarial Examples Antonio Emanuele Cinà<sup>1</sup> Francesco Villani<sup>1</sup> Maura Pintor<sup>2</sup> Lea Schönherr<sup>3</sup> Battista Biggio<sup>2</sup> Marcello Pelillo<sup>4</sup> #### *...
{ "table_of_contents": [ { "title": "\\sigma-zero: Gradient-based Optimization of \\ell_0-norm Adversarial Examples", "heading_level": null, "page_id": 0, "polygon": [ [ 106.5, 79.6640625 ], [ 447.046875, 79.6640625 ],...
Co$^{\mathbf{3}}$Gesture: Towards Coherent Concurrent Co-speech 3D Gesture Generation with Interactive Diffusion
Xingqun Qi, Yatian Wang, Hengyuan Zhang, Jiahao Pan, Wei Xue, Shanghang Zhang, Wenhan Luo, Qifeng Liu, Yike Guo
Generating gestures from human speech has gained tremendous progress in animating virtual avatars. While the existing methods enable synthesizing gestures cooperated by people self-talking, they overlook the practicality of concurrent gesture modeling with two-person interactive conversations. Moreover, the lack of hig...
https://openreview.net/pdf?id=VaowElpVzd
https://openreview.net/forum?id=VaowElpVzd
VaowElpVzd
[{"review_id": "JJ2Ku3wvSQ", "paper_id": "VaowElpVzd", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Spotlight)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "rec...
2025
ICLR
# CO<sup>3</sup>GESTURE: TOWARDS <u>CO</u>HERENT <u>CO</u>NCURRENT <u>CO</u>-SPEECH 3D GESTURE GENERATION WITH INTERACTIVE DIFFUSION Xingqun Qi<sup>1</sup>,\*, Yatian Wang<sup>1</sup>,\*, Hengyuan Zhang<sup>2</sup>, Jiahao Pan<sup>1</sup> Wei Xue<sup>1</sup>, Shanghang Zhang<sup>2</sup>, Wenhan Luo<sup>1</sup>, Qifeng...
{ "table_of_contents": [ { "title": "CO<sup>3</sup>GESTURE: TOWARDS <u>CO</u>HERENT <u>CO</u>NCURRENT <u>CO</u>-SPEECH 3D GESTURE GENERATION WITH INTERACTIVE DIFFUSION", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 79.5 ], [ ...
DeSCo: Towards Scalable Deep Subgraph Counting
Tianyu Fu, Yu Wang, Zhitao Ying
Subgraph counting is the problem of determining the number of a given query graph in a large target graph. Despite being a #P problem, subgraph counting is a crucial graph analysis method in domains ranging from biology and social science to risk management and software analysis. However, existing exact counting methods...
https://openreview.net/pdf?id=lL8LF0O8Y2
https://openreview.net/forum?id=lL8LF0O8Y2
lL8LF0O8Y2
[{"review_id": "oP67lDRtIAH", "paper_id": "lL8LF0O8Y2", "reviewer": null, "paper_summary": "The paper presents a new algorithm, DeSCo, for subgraph counting in large graphs. The new proposed algorithm works in three stages. It first builds a canonical partition to reduce the problem from the entire graph to small subgr...
2023
ICLR
# DESCO: TOWARDS SCALABLE DEEP SUBGRAPH COUNTING Anonymous authors Paper under double-blind review # ABSTRACT Subgraph counting is the problem of determining the number of a given query graph in a large target graph. Despite being a #P problem, subgraph counting is a crucial graph analysis method in domains ranging ...
{ "table_of_contents": [ { "title": "DESCO: TOWARDS SCALABLE DEEP SUBGRAPH\nCOUNTING", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.4375 ], [ 503.57684326171875, 80.4375 ], [ 503...
On the Neural Tangent Kernel of Equilibrium Models
Zhili Feng, J Zico Kolter
This work studies the neural tangent kernel (NTK) of the deep equilibrium (DEQ) model, a practical ``infinite-depth'' architecture which directly computes the infinite-depth limit of a weight-tied network via root-finding. Even though the NTK of a fully-connected neural network is stochastic if its width and depth both ten...
https://openreview.net/pdf?id=gnULZPMCPz
https://openreview.net/forum?id=gnULZPMCPz
gnULZPMCPz
[{"review_id": "KoOSu6f21c", "paper_id": "gnULZPMCPz", "reviewer": null, "paper_summary": "The work derives the NTK for deep equilibrium (DEQ) models in a certain regime of scaling. The key trick is the exchange of the limit for the depth and width and it is not clear if this is already not possible. For instance, I w...
2023
ICLR
# ON THE NEURAL TANGENT KERNEL OF EQUILIBRIUM MODELS **Anonymous authors** Paper under double-blind review #### **ABSTRACT** This work studies the neural tangent kernel (NTK) of the deep equilibrium (DEQ) model, a practical "infinite-depth" architecture which directly computes the infinite-depth limit of a weight-tie...
{ "table_of_contents": [ { "title": "ON THE NEURAL TANGENT KERNEL OF EQUILIBRIUM MODELS", "heading_level": null, "page_id": 0, "polygon": [ [ 106.5, 78.75 ], [ 504.75, 80.05078125 ], [ 504.75, ...
Does Deep Learning Learn to Abstract? A Systematic Probing Framework
Shengnan An, Zeqi Lin, Bei Chen, Qiang Fu, Nanning Zheng, Jian-Guang Lou
Abstraction is a desirable capability for deep learning models, which means to induce abstract concepts from concrete instances and flexibly apply them beyond the learning context. At the same time, there is a lack of clear understanding about both the presence and further characteristics of this capability in deep lea...
https://openreview.net/pdf?id=QB1dMPEXau5
https://openreview.net/forum?id=QB1dMPEXau5
QB1dMPEXau5
[{"review_id": "hyHPLAh-PA", "paper_id": "QB1dMPEXau5", "reviewer": null, "paper_summary": "The authors present a framework for systematically evaluating whether a language model can learn abstract concepts (syntactic categories) when fine-tuned on a target task. In this case the target task is one whose language has a...
2023
ICLR
# DOES DEEP LEARNING LEARN TO ABSTRACT? A SYSTEMATIC PROBING FRAMEWORK Shengnan An∗†, Zeqi Lin‡ , Bei Chen‡ , Qiang Fu‡ , Nanning Zheng† , Jian-Guang LOU‡ † Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University {an1006634493@stu, nnzheng@mail}.xjtu.edu.cn {Zeqi.Lin, beichen, qifu, jlou}@micros...
{ "table_of_contents": [ { "title": "DOES DEEP LEARNING LEARN TO ABSTRACT?\nA SYSTEMATIC PROBING FRAMEWORK", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.05078125 ], [ 451.828125, 80.05078125 ], ...
Procedural Knowledge in Pretraining Drives Reasoning in Large Language Models
Laura Ruis, Maximilian Mozes, Juhan Bae, Siddhartha Rao Kamalakara, Dwaraknath Gnaneshwar, Acyr Locatelli, Robert Kirk, Tim Rocktäschel, Edward Grefenstette, Max Bartolo
The capabilities and limitations of Large Language Models (LLMs) have been sketched out in great detail in recent years, providing an intriguing yet conflicting picture. On the one hand, LLMs demonstrate a general ability to solve problems. On the other hand, they show surprising reasoning gaps when compared to humans,...
https://openreview.net/pdf?id=1hQKHHUsMx
https://openreview.net/forum?id=1hQKHHUsMx
1hQKHHUsMx
[{"review_id": "j5oOKOF5Y3", "paper_id": "1hQKHHUsMx", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2025
ICLR
## PROCEDURAL KNOWLEDGE IN PRETRAINING DRIVES REASONING IN LARGE LANGUAGE MODELS Laura Ruis<sup>∗</sup> Maximilian Mozes Juhan Bae AI Centre, UCL Cohere University of Toronto & Vector Institute Siddhartha Rao Kamalakara Cohere Dwarak Talupuru Cohere Acyr Locatelli Cohere Robert Kirk AI Centre, UCL Tim Rocktäschel...
{ "table_of_contents": [ { "title": "PROCEDURAL KNOWLEDGE IN PRETRAINING DRIVES\nREASONING IN LARGE LANGUAGE MODELS", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.05078125 ], [ 503.5839538574219, 80.0507...
High-Precision Dichotomous Image Segmentation via Probing Diffusion Capacity
Qian Yu, Peng-Tao Jiang, Hao Zhang, Jinwei Chen, Bo Li, Lihe Zhang, Huchuan Lu
In the realm of high-resolution (HR), fine-grained image segmentation, the primary challenge is balancing broad contextual awareness with the precision required for detailed object delineation, capturing intricate details and the finest edges of objects. Diffusion models, trained on vast datasets comprising billions of...
https://openreview.net/pdf?id=vh1e2WJfZp
https://openreview.net/forum?id=vh1e2WJfZp
vh1e2WJfZp
[{"review_id": "M5ib9iFOEk", "paper_id": "vh1e2WJfZp", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2025
ICLR
# HIGH-PRECISION DICHOTOMOUS IMAGE SEGMENTATION VIA PROBING DIFFUSION CAPACITY Qian Yu1,2<sup>∗</sup> Peng-Tao Jiang2∗† Hao Zhang<sup>2</sup> Jinwei Chen<sup>2</sup> Bo Li<sup>2</sup> Lihe Zhang1† Huchuan Lu<sup>1</sup> <sup>1</sup>Dalian University of Technology <sup>2</sup>vivo Mobile Communication Co., Ltd {ms.yuq...
{ "table_of_contents": [ { "title": "HIGH-PRECISION DICHOTOMOUS IMAGE SEGMENTA-\nTION VIA PROBING DIFFUSION CAPACITY", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.05078125 ], [ 503.5697021484375, 80.0507...
Diffusion-based Neural Network Weights Generation
Bedionita Soro, Bruno Andreis, Hayeon Lee, Wonyong Jeong, Song Chong, Frank Hutter, Sung Ju Hwang
Transfer learning is a cornerstone of modern deep learning, yet it remains constrained by challenges in model selection and the overhead of extensive model storage. In this work, we present Diffusion-based Neural Network Weights Generation, D2NWG, a novel framework that leverages diffusion processes to synthesize task-...
https://openreview.net/pdf?id=j8WHjM9aMm
https://openreview.net/forum?id=j8WHjM9aMm
j8WHjM9aMm
[{"review_id": "cttgjC4IEa", "paper_id": "j8WHjM9aMm", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2025
ICLR
# DIFFUSION-BASED NEURAL NETWORK WEIGHTS GENERATION Soro Bedionita1<sup>∗</sup> Bruno Andreis1<sup>∗</sup> Hayeon Lee<sup>1</sup> Wonyong Jeong<sup>3</sup> Song Chong<sup>1</sup> Frank Hutter<sup>2</sup> Sung Ju Hwang1,<sup>3</sup> <sup>1</sup>KAIST <sup>2</sup>University of Freiburg <sup>3</sup>DeepAuto.ai # ABSTRAC...
{ "table_of_contents": [ { "title": "DIFFUSION-BASED NEURAL NETWORK WEIGHTS\nGENERATION", "heading_level": null, "page_id": 0, "polygon": [ [ 105.1875, 80.39202880859375 ], [ 468.97637939453125, 80.39202880859375 ], ...
Finding Private Bugs: Debugging Implementations of Differentially Private Stochastic Gradient Descent
Congyu Fang, Hengrui Jia, Ali Shahin Shamsabadi, Nicolas Papernot
It is important to learn with privacy-preserving algorithms when training data contains sensitive information. Differential privacy (DP) proposes to bound the worst-case privacy leakage of a training algorithm. However, the analytic nature of these algorithmic guarantees makes it difficult to verify that an implementat...
https://openreview.net/pdf?id=gKKUZ4fTEqh
https://openreview.net/forum?id=gKKUZ4fTEqh
gKKUZ4fTEqh
[{"review_id": "8jJWjmZPsHj", "paper_id": "gKKUZ4fTEqh", "reviewer": null, "paper_summary": "The paper proposes a methodology for developers to check for mistakes when implemented differentially private ML methods. The reviewers agree that this is an important question, perhaps more to practitioners, but there is not e...
2023
ICLR
# FINDING PRIVATE BUGS: DEBUGGING IMPLEMENTATIONS OF DIFFERENTIALLY PRIVATE STOCHASTIC GRADIENT DESCENT Anonymous authors Paper under double-blind review ## ABSTRACT It is important to learn with privacy-preserving algorithms when training data contains sensitive information. Differential privacy (DP) proposes to ...
{ "table_of_contents": [ { "title": "FINDING PRIVATE BUGS: DEBUGGING IMPLEMEN-\nTATIONS OF DIFFERENTIALLY PRIVATE STOCHASTIC\nGRADIENT DESCENT", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.05078125 ], [ 503.58102...
Neural Rate Control for Learned Video Compression
Yiwei Zhang, Guo Lu, Yunuo Chen, Shen Wang, Yibo Shi, Jing Wang, Li Song
The learning-based video compression method has made significant progress in recent years, exhibiting promising compression performance compared with traditional video codecs. However, prior works have primarily focused on advanced compression architectures while neglecting the rate control technique. Rate control can ...
https://openreview.net/pdf?id=42lcaojZug
https://openreview.net/forum?id=42lcaojZug
42lcaojZug
[{"review_id": "ZFUc2OC9UF", "paper_id": "42lcaojZug", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2024
ICLR
# NEURAL RATE CONTROL FOR LEARNED VIDEO COMPRESSION Yiwei Zhang<sup>1</sup>, Guo Lu<sup>1</sup>, Yunuo Chen<sup>1</sup>, Shen Wang<sup>1</sup>, Yibo Shi<sup>2</sup>, Jing Wang<sup>2</sup>, Li Song<sup>1</sup> <sup>1</sup>Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University <sup>2</sup>Huawei Technologies, Bei...
{ "table_of_contents": [ { "title": "NEURAL RATE CONTROL FOR LEARNED VIDEO COM-\nPRESSION", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.05078125 ], [ 503.5697021484375, 80.05078125 ], [ ...
Revisiting Group Robustness: Class-specific Scaling is All You Need
Seonguk Seo, Bohyung Han
Group distributionally robust optimization, which aims to improve robust accuracies such as worst-group or unbiased accuracy, is one of the mainstream algorithms to mitigate spurious correlation and reduce dataset bias. While existing approaches have apparently gained performance in robust accuracy, these improvements ...
https://openreview.net/pdf?id=pkgVPeL9gpX
https://openreview.net/forum?id=pkgVPeL9gpX
pkgVPeL9gpX
[{"review_id": "DVG-bH1H8AU", "paper_id": "pkgVPeL9gpX", "reviewer": null, "paper_summary": "This paper proposes a rescaling-based postprocessing method to control the trade-off between worst-group and average-case accuracies. The experiments show that the proposed method outperforms several existing baselines on two ...
2023
ICLR
# REVISITING GROUP ROBUSTNESS: CLASS-SPECIFIC SCALING IS ALL YOU NEED Anonymous authors Paper under double-blind review # ABSTRACT Group distributionally robust optimization, which aims to improve robust accuracies such as worst-group or unbiased accuracy, is one of the mainstream algorithms to mitigate spurious cor...
{ "table_of_contents": [ { "title": "REVISITING GROUP ROBUSTNESS:\nCLASS-SPECIFIC SCALING IS ALL YOU NEED", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.05078125 ], [ 440.29833984375, 80.05078125 ...
$R^2$-Guard: Robust Reasoning Enabled LLM Guardrail via Knowledge-Enhanced Logical Reasoning
Mintong Kang, Bo Li
As large language models (LLMs) become increasingly prevalent across various applications, it is critical to establish safety guardrails to moderate input/output content of LLMs and ensure compliance with safety policies. Existing guardrail models, such as OpenAI Mod and LlamaGuard, treat various safety categories (e.g...
https://openreview.net/pdf?id=CkgKSqZbuC
https://openreview.net/forum?id=CkgKSqZbuC
CkgKSqZbuC
[{"review_id": "S3KzOGnFEx", "paper_id": "CkgKSqZbuC", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Spotlight)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "rec...
2025
ICLR
# $\mathbb{R}^2$ -Guard: Robust Reasoning Enabled LLM Guardrail via Knowledge-Enhanced Logical Reasoning Mintong Kang & Bo Li University of Illinois at Urbana Champaign {mintong2, lbo}@illinois.edu ## **ABSTRACT** As large language models (LLMs) become increasingly prevalent across various applications, it is criti...
{ "table_of_contents": [ { "title": "\\mathbb{R}^2-Guard: Robust Reasoning Enabled LLM Guardrail via Knowledge-Enhanced Logical Reasoning", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.05078125 ], [ 505.5, ...
FedPD: Defying data heterogeneity through privacy distillation
Zhiqin Brian Yang, Yonggang Zhang, Yu Zheng, Zhenheng TANG, Xiaowen Chu, Hao Peng, Bo Han
Model performance of federated learning (FL) typically suffers from data heterogeneity, i.e., data distribution varies with clients. Advanced works have already shown great potential for sharing client information to mitigate data heterogeneity. Yet, some literature shows a dilemma in preserving strong privacy and prom...
https://openreview.net/pdf?id=IERSU0La-Nt
https://openreview.net/forum?id=IERSU0La-Nt
IERSU0La-Nt
[{"review_id": "nDWnCSIGPRO", "paper_id": "IERSU0La-Nt", "reviewer": null, "paper_summary": "The paper studies statistical (data) heterogeneity in private federated learning. The proposed idea to address this problem is to divide the features into private and generalizable features. The latter can be extracted from dif...
2023
ICLR
# <span id="page-0-0"></span>FEDPD: DEFYING DATA HETEROGENEITY THROUGH PRIVACY DISTILLATION Anonymous authors Paper under double-blind review ## ABSTRACT Model performance of federated learning (FL) typically suffers from data heterogeneity, i.e., data distribution varies with clients. Advanced works have already s...
{ "table_of_contents": [ { "title": "FEDPD: DEFYING DATA HETEROGENEITY THROUGH\nPRIVACY DISTILLATION", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 79.6640625 ], [ 503.5626525878906, 79.6640625 ], ...
Quick-Tune: Quickly Learning Which Pretrained Model to Finetune and How
Sebastian Pineda Arango, Fabio Ferreira, Arlind Kadra, Frank Hutter, Josif Grabocka
With the ever-increasing number of pretrained models, machine learning practitioners are continuously faced with which pretrained model to use, and how to finetune it for a new dataset. In this paper, we propose a methodology that jointly searches for the optimal pretrained model and the hyperparameters for finetuning ...
https://openreview.net/pdf?id=tqh1zdXIra
https://openreview.net/forum?id=tqh1zdXIra
tqh1zdXIra
[{"review_id": "HNifEUGAnp", "paper_id": "tqh1zdXIra", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (oral)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recommen...
2024
ICLR
# QUICK-TUNE: QUICKLY LEARNING WHICH PRE-TRAINED MODEL TO FINETUNE AND HOW Sebastian Pineda Arango, Fabio Ferreira, Arlind Kadra, Frank Hutter & Josif Grabocka Department of Computer Science University of Freiburg pineda@cs.uni-freiburg.de # ABSTRACT With the ever-increasing number of pretrained models, machine lear...
{ "table_of_contents": [ { "title": "QUICK-TUNE: QUICKLY LEARNING WHICH PRE-\nTRAINED MODEL TO FINETUNE AND HOW", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 79.6640625 ], [ 506.8403625488281, 79.6640625 ...
QUANTILE-LSTM: A ROBUST LSTM FOR ANOMALY DETECTION
Snehanshu Saha, Soma Dhavala, Jyotirmoy Sarkar, Preyank Bhavesh Mota, Santonu Sarkar
Anomalies refer to the departure of systems and devices from their normal behaviour in standard operating conditions. An anomaly in an industrial device can indicate an upcoming failure, often in the temporal direction. In this paper, we make two contributions: 1) we estimate conditional quantiles, and consider three diffe...
https://openreview.net/pdf?id=k5e6oQP2zHx
https://openreview.net/forum?id=k5e6oQP2zHx
k5e6oQP2zHx
[{"review_id": "X9BVJDIznBy", "paper_id": "k5e6oQP2zHx", "reviewer": null, "paper_summary": "This paper presents an anomaly detection method in times series, where the main idea is to devise three different LSTM models to predict quantiles in a future window. Quantile-based LSTMs seem to be sound and interesting. The p...
2023
ICLR
# QUANTILE-LSTM: A ROBUST LSTM FOR ANOMALY DETECTION IN TIME SERIES DATA Anonymous authors Paper under double-blind review ### ABSTRACT Anomalies refer to the departure of systems and devices from their normal behaviour in standard operating conditions. An anomaly in an industrial device can indicate an upcoming fai...
{ "table_of_contents": [ { "title": "QUANTILE-LSTM: A ROBUST LSTM FOR\nANOMALY DETECTION IN TIME SERIES DATA", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.49505615234375 ], [ 443.4609375, 80.49505615234...
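The quantile-LSTM abstract above centers on estimating conditional quantiles. As background (a standard-technique sketch, not code from the paper), the pinball (quantile) loss that such quantile regressors typically minimize can be written as:

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Average pinball (quantile) loss at quantile level tau in (0, 1).

    Under-prediction is penalized with weight tau and over-prediction
    with weight (1 - tau), so the minimizer of the expected loss is
    the tau-quantile of the conditional distribution.
    """
    d = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.mean(np.maximum(tau * d, (tau - 1.0) * d)))
```

Training the same network at low and high tau yields a prediction interval; points falling outside it can then be flagged as anomalies.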
SegNeRF: 3D Part Segmentation with Neural Radiance Fields
Jesus Zarzar, Sara Rojas Martinez, Silvio Giancola, Bernard Ghanem
Recent advances in Neural Radiance Fields (NeRF) boast impressive performances for generative tasks such as novel view synthesis and 3D reconstruction. Methods based on neural radiance fields are able to represent the 3D world implicitly by relying exclusively on posed images. Yet, they have seldom been explored in the...
https://openreview.net/pdf?id=D9WJEsALpI1
https://openreview.net/forum?id=D9WJEsALpI1
D9WJEsALpI1
[{"review_id": "oSQCxnSAXX", "paper_id": "D9WJEsALpI1", "reviewer": null, "paper_summary": "This paper proposed to perform 3D part segmentation using a neural field representation named SegNeRF. While the reviewers generally find the paper technically sound and the experiments are detailed. there are limited contributi...
2023
ICLR
# SEGNERF: 3D PART SEGMENTATION WITH NEURAL RADIANCE FIELDS #### Anonymous authors Paper under double-blind review ## ABSTRACT Recent advances in Neural Radiance Fields (NeRF) boast impressive performances for generative tasks such as novel view synthesis and 3D reconstruction. Methods based on neural radiance fiel...
{ "table_of_contents": [ { "title": "SEGNERF: 3D PART SEGMENTATION WITH NEURAL\nRADIANCE FIELDS", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.49505615234375 ], [ 503.5878601074219, 80.49505615234375 ...
Renamer: A Transformer Architecture Invariant to Variable Renaming
Zachary Ankner, Alex Renda, Michael Carbin
Modeling tasks often take inputs from languages including programming languages and natural language. Many such tasks involve learning functions which are invariant to certain types of input transformations. In this work we consider a specific class of invariance: semantics-preserving variable renaming. We first show t...
https://openreview.net/pdf?id=7hYCGFacpz
https://openreview.net/forum?id=7hYCGFacpz
7hYCGFacpz
[{"review_id": "WL0ZV85Ej1", "paper_id": "7hYCGFacpz", "reviewer": null, "paper_summary": "All reviewers find the paper to be well written - both the problem statement and the proposed solution are presented well. Reviewers mainly are worried about the narrow scope of the paper, evaluation on only one dataset and non-s...
2023
ICLR
# Renamer: A Transformer Architecture Invariant to Variable Renaming ## Anonymous authors Paper under double-blind review # Abstract Many modeling tasks involve learning functions which are invariant to certain types of input transformations. In this work we consider a specific class of invariance: semantics-preser...
{ "table_of_contents": [ { "title": "Renamer: A Transformer Architecture In-\nvariant to Variable Renaming", "heading_level": null, "page_id": 0, "polygon": [ [ 107.46599578857422, 76.15728759765625 ], [ 506.6229248046875, 76....
Can Wikipedia Help Offline Reinforcement Learning?
Machel Reid, Yutaro Yamada, Shixiang Shane Gu
Fine-tuning reinforcement learning (RL) models has been challenging because of a lack of large-scale off-the-shelf datasets as well as high variance in transferability among different environments. Recent work has looked at tackling offline RL from the perspective of sequence modeling with improved results as a result of...
https://openreview.net/pdf?id=eHrqmewX1B-
https://openreview.net/forum?id=eHrqmewX1B-
eHrqmewX1B-
[{"review_id": "xzfvHUwOKG9", "paper_id": "eHrqmewX1B-", "reviewer": null, "paper_summary": "The paper investigates the benefits of cross-modal pretraining for offline RL. Specifically, the paper finds that a GPT model pretrained on Wikipedia can generalizes better and faster on the D4RL environments than a related bas...
2023
ICLR
# <span id="page-0-0"></span>CAN WIKIPEDIA HELP OFFLINE REINFORCEMENT LEARNING? Anonymous authors Paper under double-blind review ## ABSTRACT Fine-tuning reinforcement learning (RL) models has been challenging because of a lack of large scale off-the-shelf datasets as well as high variance in transferability among d...
{ "table_of_contents": [ { "title": "CAN WIKIPEDIA HELP OFFLINE REINFORCEMENT\nLEARNING?", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.13092041015625 ], [ 504.421875, 80.13092041015625 ], ...
Toeplitz Neural Network for Sequence Modeling
Zhen Qin, Xiaodong Han, Weixuan Sun, Bowen He, Dong Li, Dongxu Li, Yuchao Dai, Lingpeng Kong, Yiran Zhong
Sequence modeling has important applications in natural language processing and computer vision. Recently, the transformer-based models have shown strong performance on various sequence modeling tasks, which rely on attention to capture pairwise token relations, and position embedding to inject positional information. ...
https://openreview.net/pdf?id=IxmWsm4xrua
https://openreview.net/forum?id=IxmWsm4xrua
IxmWsm4xrua
[{"review_id": "nAeqnsJ-gVk", "paper_id": "IxmWsm4xrua", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": "Dear David,\n\nThanks for your comments. we will include a discussion between TNN and CKConv in the arxiv version as the ddl of camera ready has passed.\n\nDespite the fact tha...
2023
ICLR
# TOEPLITZ NEURAL NETWORK FOR SEQUENCE MODELING ## **ABSTRACT** Sequence modeling has important applications in natural language processing and computer vision. Recently, the transformer-based models have shown strong performance on various sequence modeling tasks, which rely on attention to capture pairwise token re...
{ "table_of_contents": [ { "title": "TOEPLITZ NEURAL NETWORK FOR SEQUENCE MODELING", "heading_level": null, "page_id": 0, "polygon": [ [ 106.5, 80.4375 ], [ 364.5, 80.4375 ], [ 364.5, 116.01...
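The Toeplitz Neural Network abstract emphasizes that replacing attention with Toeplitz-structured token mixing reduces cost to log-linear time. The standard trick behind such complexity claims (illustrative background, not the paper's implementation) is to embed the Toeplitz matrix in a circulant one and multiply via the FFT:

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply an n x n Toeplitz matrix by x in O(n log n) via FFT.

    c: first column (length n); r: first row (length n, r[0] == c[0]).
    The Toeplitz matrix is embedded in a 2n x 2n circulant matrix,
    which the discrete Fourier transform diagonalizes.
    """
    n = len(c)
    col = np.concatenate([c, [0.0], r[:0:-1]])  # circulant first column
    xp = np.concatenate([x, np.zeros(n)])       # zero-padded input
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(xp))[:n]
    return y.real
```

A dense matvec costs O(n^2); here the only O(n log n) work is the three FFTs, which is what makes Toeplitz token mixing cheap for long sequences.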
Neural Interactive Proofs
Lewis Hammond, Sam Adam-Day
We consider the problem of how a trusted, but computationally bounded agent (a 'verifier') can learn to interact with one or more powerful but untrusted agents ('provers') in order to solve a given task. More specifically, we study the case in which agents are represented using neural networks and refer to solutions of...
https://openreview.net/pdf?id=R2834dhBlo
https://openreview.net/forum?id=R2834dhBlo
R2834dhBlo
[{"review_id": "vOOqsZzoJq", "paper_id": "R2834dhBlo", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2025
ICLR
# NEURAL INTERACTIVE PROOFS # Lewis Hammond<sup>∗</sup> # Sam Adam-Day<sup>∗</sup> [lewis.hammond@cs.ox.ac.uk](mailto:lewis.hammond@cs.ox.ac.uk) [sam.adam-day@cs.ox.ac.uk](mailto:sam.adam-day@cs.ox.ac.uk) Department of Computer Science, University of Oxford, Oxford, United Kingdom # ABSTRACT We consider the proble...
{ "table_of_contents": [ { "title": "NEURAL INTERACTIVE PROOFS", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.2540283203125 ], [ 339.13330078125, 80.2540283203125 ], [ 339.13330...
Learning to Steer Markovian Agents under Model Uncertainty
Jiawei Huang, Vinzenz Thoma, Zebang Shen, Heinrich H. Nax, Niao He
Designing incentives for an adapting population is a ubiquitous problem in a wide array of economic applications and beyond. In this work, we study how to design additional rewards to steer multi-agent systems towards desired policies \emph{without} prior knowledge of the agents' underlying learning dynamics. Motivated...
https://openreview.net/pdf?id=IzYczpPqKq
https://openreview.net/forum?id=IzYczpPqKq
IzYczpPqKq
[{"review_id": "u2id9zawLX", "paper_id": "IzYczpPqKq", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2025
ICLR
## LEARNING TO STEER MARKOVIAN AGENTS UNDER MODEL UNCERTAINTY Jiawei Huang<sup>†</sup>, Vinzenz Thoma<sup>‡</sup>, Zebang Shen<sup>†</sup>, Heinrich H. Nax<sup>§</sup>, Niao He<sup>†</sup> <sup>†</sup>Department of Computer Science, ETH Zurich {jiawei.huang, zebang.shen, niao.he}@inf.ethz.ch <sup>‡</sup>ETH AI Center vinzenz.thoma@ai.ethz.ch <sup>§</sup>University of Zurich heinrich.nax@uzh.ch ##...
{ "table_of_contents": [ { "title": "LEARNING TO STEER MARKOVIAN AGENTS\nUNDER MODEL UNCERTAINTY", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.05078125 ], [ 428.744384765625, 80.05078125 ], ...
NEW TRAINING FRAMEWORK FOR SPEECH ENHANCEMENT USING REAL NOISY SPEECH
Szu-Wei Fu, Cheng Yu, Yu Tsao, Vishak Gopal, Jayant Gupchup, Ross Cutler
Recently, deep learning-based speech enhancement (SE) models have achieved significant improvements. However, this success is mainly based on synthetic training data created by mixing clean speech with noise. On the other hand, despite its large amount, real noisy speech is hard to apply to SE model traini...
https://openreview.net/pdf?id=_j4ZUpoNO1e
https://openreview.net/forum?id=_j4ZUpoNO1e
_j4ZUpoNO1e
[{"review_id": "5ANEaoObE9X", "paper_id": "_j4ZUpoNO1e", "reviewer": null, "paper_summary": "The authors propose a strategy for training with real (and synthetic) noisy speech, making use of pre-fixed or pre-trained or jointly trained speech-quality prediction models. The authors also show that you get better results w...
2023
ICLR
## NEW TRAINING FRAMEWORK FOR SPEECH ENHANCEMENT USING REAL NOISY SPEECH Anonymous authors Paper under double-blind review ## ABSTRACT Recently, deep learning-based speech enhancement (SE) models have gained significant improvements. However, the success is mainly based on using synthetic training data created by a...
{ "table_of_contents": [ { "title": "NEW TRAINING FRAMEWORK FOR SPEECH ENHANCE-\nMENT USING REAL NOISY SPEECH", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.43121337890625 ], [ 506.86053466796875, 80.4312...
Energy-Based Concept Bottleneck Models: Unifying Prediction, Concept Intervention, and Probabilistic Interpretations
Xinyue Xu, Yi Qin, Lu Mi, Hao Wang, Xiaomeng Li
Existing methods, such as concept bottleneck models (CBMs), have been successful in providing concept-based interpretations for black-box deep learning models. They typically work by predicting concepts given the input and then predicting the final class label given the predicted concepts. However, (1) they often fail ...
https://openreview.net/pdf?id=I1quoTXZzc
https://openreview.net/forum?id=I1quoTXZzc
I1quoTXZzc
[{"review_id": "PQQ258P6nO", "paper_id": "I1quoTXZzc", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2024
ICLR
# ENERGY-BASED CONCEPT BOTTLENECK MODELS: UNIFYING PREDICTION, CONCEPT INTERVENTION, AND PROBABILISTIC INTERPRETATIONS Xinyue Xu<sup>1</sup> , Yi Qin<sup>1</sup> , Lu Mi<sup>2</sup> , Hao Wang3†, Xiaomeng Li1† <sup>1</sup>The Hong Kong University of Science and Technology, <sup>2</sup>University of Washington, {xxuc...
{ "table_of_contents": [ { "title": "ENERGY-BASED CONCEPT BOTTLENECK MODELS:\nUNIFYING PREDICTION, CONCEPT INTERVENTION,\nAND PROBABILISTIC INTERPRETATIONS", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.49505615234375 ], [ ...
VisualPredicator: Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning
Yichao Liang, Nishanth Kumar, Hao Tang, Adrian Weller, Joshua B. Tenenbaum, Tom Silver, Joao F. Henriques, Kevin Ellis
Broadly intelligent agents should form task-specific abstractions that selectively expose the essential elements of a task, while abstracting away the complexity of the raw sensorimotor space. In this work, we present Neuro-Symbolic Predicates, a first-order abstraction language that combines the strengths of symbolic ...
https://openreview.net/pdf?id=QOfswj7hij
https://openreview.net/forum?id=QOfswj7hij
QOfswj7hij
[{"review_id": "uRg2vfPOBn", "paper_id": "QOfswj7hij", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Spotlight)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "rec...
2025
ICLR
## VISUALPREDICATOR: LEARNING ABSTRACT WORLD MODELS WITH NEURO-SYMBOLIC PREDICATES FOR ROBOT PLANNING Yichao Liang<sup>1</sup>, Nishanth Kumar<sup>3</sup>, Hao Tang<sup>2</sup>, Adrian Weller<sup>1,6</sup>, Joshua B. Tenenbaum<sup>3</sup>, Tom Silver<sup>4</sup>, João F. Henriques<sup>5</sup>, Kevin Ellis<sup>2</sup> ...
{ "table_of_contents": [ { "title": "VISUALPREDICATOR: LEARNING ABSTRACT WORLD\nMODELS WITH NEURO-SYMBOLIC PREDICATES FOR\nROBOT PLANNING", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.49505615234375 ], [ 503.5830...
A Benchmark Study on Calibration
Linwei Tao, Younan Zhu, Haolan Guo, Minjing Dong, Chang Xu
Deep neural networks are increasingly utilized in various machine learning tasks. However, as these models grow in complexity, they often face calibration issues, despite enhanced prediction accuracy. Many studies have endeavored to improve calibration performance through the use of specific loss functions, data prepro...
https://openreview.net/pdf?id=GzNhzX9kVa
https://openreview.net/forum?id=GzNhzX9kVa
GzNhzX9kVa
[{"review_id": "xBw7GdnVJe", "paper_id": "GzNhzX9kVa", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2024
ICLR
# <span id="page-0-0"></span>A BENCHMARK STUDY ON CALIBRATION Linwei Tao, University of Sydney, linwei.tao@sydney.edu.au; Younan Zhu, Haolan Guo, University of Sydney, {yzhu0986, hguo4658}@uni.sydney.edu.au; Minjing Dong, City University of Hong Kong, minjdong@cityu.edu.hk; Chang Xu, University of Sydney, c.xu@sydney.edu.au ...
{ "table_of_contents": [ { "title": "A BENCHMARK STUDY ON CALIBRATION", "heading_level": null, "page_id": 0, "polygon": [ [ 106.98046875, 80.24713134765625 ], [ 411.78515625, 80.24713134765625 ], [ 41...
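The benchmark abstract above is about calibration of deep classifiers. As standard background (not the paper's own implementation), the expected calibration error (ECE) that such studies typically report can be sketched as a binned, weighted gap between confidence and accuracy:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: weighted average |accuracy - confidence| over bins.

    confidences: predicted max-class probabilities in (0, 1].
    correct: 0/1 (or boolean) array marking correct predictions.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)  # half-open bins
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight bin by its sample fraction
    return float(ece)
```

A perfectly calibrated model (e.g. 75% of samples predicted at 0.75 confidence are correct) has ECE 0; systematic overconfidence inflates it.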
Equivariant Neural Functional Networks for Transformers
Hoang V. Tran, Thieu Vo, An Nguyen The, Tho Tran Huu, Minh-Khoi Nguyen-Nhat, Thanh Tran, Duy-Tung Pham, Tan Minh Nguyen
This paper systematically explores neural functional networks (NFN) for transformer architectures. NFN are specialized neural networks that treat the weights, gradients, or sparsity patterns of a deep neural network (DNN) as input data and have proven valuable for tasks such as learnable optimizers, implicit data repre...
https://openreview.net/pdf?id=uBai0ukstY
https://openreview.net/forum?id=uBai0ukstY
uBai0ukstY
[{"review_id": "l6uHk2tOSL", "paper_id": "uBai0ukstY", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2025
ICLR
# EQUIVARIANT NEURAL FUNCTIONAL NETWORKS FOR TRANSFORMERS Viet-Hoang Tran1\* Thieu N. Vo1\* An Nguyen The2\* Tho Tran Huu<sup>1</sup> Minh-Khoi Nguyen-Nhat<sup>2</sup> Thanh Tran<sup>3</sup> Duy-Tung Pham<sup>2</sup> Tan M. Nguyen<sup>1</sup> <sup>1</sup>National University of Singapore <sup>2</sup>FPT Software AI Ce...
{ "table_of_contents": [ { "title": "EQUIVARIANT NEURAL FUNCTIONAL NETWORKS\nFOR TRANSFORMERS", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.05078125 ], [ 471.8505859375, 80.05078125 ], [...
Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, Huaxiu Yao
Large vision-language models (LVLMs) have shown remarkable abilities in understanding visual information with human languages. However, LVLMs still suffer from object hallucination, which is the problem of generating descriptions that include objects that do not actually exist in the images. This can negatively impact ...
https://openreview.net/pdf?id=oZDJKTlOUe
https://openreview.net/forum?id=oZDJKTlOUe
oZDJKTlOUe
[{"review_id": "VKu0hj9KC3", "paper_id": "oZDJKTlOUe", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2024
ICLR
## ANALYZING AND MITIGATING OBJECT HALLUCINATION IN LARGE VISION-LANGUAGE MODELS Yiyang Zhou<sup>1∗</sup> Chenhang Cui<sup>1∗</sup> Jaehong Yoon<sup>1</sup> Linjun Zhang<sup>2</sup> Zhun Deng<sup>3</sup> Chelsea Finn<sup>4</sup> Mohit Bansal<sup>1</sup> Huaxiu Yao<sup>1</sup> <sup>1</sup>UNC-Chapel Hill, <sup>2</sup...
{ "table_of_contents": [ { "title": "ANALYZING AND MITIGATING OBJECT HALLUCINA-\nTION IN LARGE VISION-LANGUAGE MODELS", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.05078125 ], [ 503.5697326660156, 80.050...
Discovering Policies with DOMiNO: Diversity Optimization Maintaining Near Optimality
Tom Zahavy, Yannick Schroecker, Feryal Behbahani, Kate Baumli, Sebastian Flennerhag, Shaobo Hou, Satinder Singh
In this work we propose a Reinforcement Learning (RL) agent that can discover complex behaviours in a rich environment with a simple reward function. We define diversity in terms of state-action occupancy measures, since policies with different occupancy measures visit different states on average. More importantly, def...
https://openreview.net/pdf?id=kjkdzBW3b8p
https://openreview.net/forum?id=kjkdzBW3b8p
kjkdzBW3b8p
[{"review_id": "G_TfXxm_mL", "paper_id": "kjkdzBW3b8p", "reviewer": null, "paper_summary": "The paper introduces a framework to learn a diverse set of policies with a controlled amount of sub-optimality. The paper introduces two diversity objectives, and solves the resulting problem using tools for solving convex MDPs....
2023
ICLR
# DISCOVERING POLICIES WITH DOMINO: DIVERSITY OPTIMIZATION MAINTAINING NEAR OPTIMALITY Tom Zahavy, Yannick Schroecker, Feryal Behbahani, Kate Baumli, Sebastian Flennerhag, Shaobo Hou and Satinder Singh DeepMind, London # ABSTRACT In this work we propose a Reinforcement Learning (RL) agent that can discover complex b...
{ "table_of_contents": [ { "title": "DISCOVERING POLICIES WITH DOMINO: DIVERSITY\nOPTIMIZATION MAINTAINING NEAR OPTIMALITY", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.49505615234375 ], [ 504.44146728515625, ...
Zero-Shot Image Restoration Using Denoising Diffusion Null-Space Model
Yinhuai Wang, Jiwen Yu, Jian Zhang
Most existing Image Restoration (IR) models are task-specific, which can not be generalized to different degradation operators. In this work, we propose the Denoising Diffusion Null-Space Model (DDNM), a novel zero-shot framework for arbitrary linear IR problems, including but not limited to image super-resolution, col...
https://openreview.net/pdf?id=mRieQgMtNTQ
https://openreview.net/forum?id=mRieQgMtNTQ
mRieQgMtNTQ
[{"review_id": "X7fEjRgAjK9", "paper_id": "mRieQgMtNTQ", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": "That's a good observation, but I don't think there's a problem with Eq (14) holds or not, since Eq (14) is just a step of adding noise. The core difficulty is in Eq (12): estim...
2023
ICLR
# ZERO-SHOT IMAGE RESTORATION USING DENOISING DIFFUSION NULL-SPACE MODEL Yinhuai Wang<sup>1\*</sup>, Jiwen Yu<sup>1\*</sup>, Jian Zhang<sup>1,2†</sup> <sup>1</sup>Peking University Shenzhen Graduate School, <sup>2</sup>Peng Cheng Laboratory {yinhuai; yujiwen}@stu.pku.edu.cn, zhangjian.sz@pku.edu.cn # **ABSTRACT** M...
{ "table_of_contents": [ { "title": "ZERO-SHOT IMAGE RESTORATION USING DENOISING DIFFUSION NULL-SPACE MODEL", "heading_level": null, "page_id": 0, "polygon": [ [ 106.5, 80.4375 ], [ 441.0, 80.4375 ], [ ...
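The DDNM abstract rests on the range-null-space decomposition of a linear degradation operator. A minimal sketch of that decomposition (illustrative, using an explicit pseudoinverse rather than the efficient operator implementations a real restoration pipeline would use) is:

```python
import numpy as np

def range_null_combine(A, y, x_bar):
    """Combine a data-consistent range part with a free null-space part.

    x_hat = A^+ y + (I - A^+ A) x_bar satisfies A x_hat = y exactly
    whenever y lies in the range of A. The null-space component x_bar
    is therefore free to be chosen (e.g. by a generative prior) without
    breaking consistency with the degraded observation y.
    """
    A_pinv = np.linalg.pinv(A)          # Moore-Penrose pseudoinverse
    range_part = A_pinv @ y             # fixed by the observation
    null_part = x_bar - A_pinv @ (A @ x_bar)  # projection onto null(A)
    return range_part + null_part
```

This is why the method is zero-shot over linear degradations: only A and its pseudoinverse change between tasks, while the prior supplying x_bar stays fixed.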
SRBGCN: Tangent space-Free Lorentz Transformations for Graph Feature Learning
Abdelrahman Mostafa, Wei Peng, Guoying Zhao
Hyperbolic graph convolutional networks have been successfully applied to represent complex graph data structures. However, optimization on Riemannian manifolds is nontrivial; thus, most of the existing hyperbolic networks build the network operations on the tangent space of the manifold, which is a Euclidean local appro...
https://openreview.net/pdf?id=BLsM6WymMo6
https://openreview.net/forum?id=BLsM6WymMo6
BLsM6WymMo6
[{"review_id": "mOOnA3yGVX", "paper_id": "BLsM6WymMo6", "reviewer": null, "paper_summary": "The paper proposes a \"fully hyperbolic\" GNN architecture (i.e., without resorting to the tangent space as done typically in manifold optimization). This seems to be the main novelty of the paper, which is incremental in light ...
2023
ICLR
# SRBGCN: TANGENT SPACE-FREE LORENTZ TRANSFORMATIONS FOR GRAPH FEATURE LEARNING Anonymous authors Paper under double-blind review # ABSTRACT Hyperbolic graph convolutional networks have been successfully applied to represent complex graph data structures. However, optimization on Riemannian manifolds is nontrivial ...
{ "table_of_contents": [ { "title": "SRBGCN: TANGENT SPACE-FREE LORENTZ TRANS-\nFORMATIONS FOR GRAPH FEATURE LEARNING", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.49505615234375 ], [ 503.5697021484375, ...
Learning to Generate All Feasible Actions
Mirco Theile, Daniele Bernardini, Raphael Trumpp, Cristina Piazza, Marco Caccamo, Alberto Sangiovanni-Vincentelli
Several machine learning (ML) applications are characterized by searching for an optimal solution to a complex task. The search space for this optimal solution is often very large, so large in fact that this optimal solution is often not computable. Part of the problem is that many candidate solutions found via ML are ...
https://openreview.net/pdf?id=P8DHF1Y_dph
https://openreview.net/forum?id=P8DHF1Y_dph
P8DHF1Y_dph
[{"review_id": "ybYyt5ZlYVh", "paper_id": "P8DHF1Y_dph", "reviewer": null, "paper_summary": "The paper addresses the problem of learning all feasible actions (i.e., covering all modes) in interactive settings with complex action spaces. The authors propose a generative approach trained using a general optimization targ...
2023
ICLR
# LEARNING TO GENERATE ALL FEASIBLE ACTIONS Anonymous authors Paper under double-blind review # ABSTRACT Several machine learning (ML) applications are characterized by searching for an optimal solution to a complex task. The search space for this optimal solution is often very large, so large in fact that this opt...
{ "table_of_contents": [ { "title": "LEARNING TO GENERATE ALL FEASIBLE ACTIONS", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.05078125 ], [ 480.9504699707031, 80.05078125 ], [ 48...
Transformer Meets Twicing: Harnessing Unattended Residual Information
Laziz Abdullaev, Tan Minh Nguyen
Transformer-based deep learning models have achieved state-of-the-art performance across numerous language and vision tasks. While the self-attention mechanism, a core component of transformers, has proven capable of handling complex data patterns, it has been observed that the representational capacity of the attentio...
https://openreview.net/pdf?id=16kG5aNleS
https://openreview.net/forum?id=16kG5aNleS
16kG5aNleS
[{"review_id": "gsagV1x98p", "paper_id": "16kG5aNleS", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2025
ICLR
# TRANSFORMER MEETS TWICING: HARNESSING UNATTENDED RESIDUAL INFORMATION Laziz U. Abdullaev Department of Mathematics National University of Singapore laziz.abdullaev@u.nus.edu Tan M. Nguyen Department of Mathematics National University of Singapore tanmn@nus.edu.sg # ABSTRACT Transformer-based deep learning models h...
{ "table_of_contents": [ { "title": "TRANSFORMER MEETS TWICING: HARNESSING\nUNATTENDED RESIDUAL INFORMATION", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 79.27734375 ], [ 503.5740966796875, 79.27734375 ...
Chopping Formers is what you need in Vision
Francesca Babiloni, Thomas Tanay, Matteo Maggioni, Jiankang Deng, Ales Leonardis, Stefanos Zafeiriou
This work presents a new dynamic and fully-connected layer (DFC) that generalizes existing layers and is free from hard inductive biases. Then, it describes how to factorize the DFC weights efficiently. Using the Einstein convention as a framework, we define the DFC as a fully connected layer with the weight tensor creat...
https://openreview.net/pdf?id=R4ETr5gcg5v
https://openreview.net/forum?id=R4ETr5gcg5v
R4ETr5gcg5v
[{"review_id": "BcHuYli1KV", "paper_id": "R4ETr5gcg5v", "reviewer": null, "paper_summary": "In this work, the authors propose a new neural network architecture based on a tensor-rank decomposition (CP decomposition) which can generalize convolution and self-attention layers. In particular, the authors test the efficacy...
2023
ICLR
# CHOPPING FORMERS IS WHAT YOU NEED IN VISION Anonymous authors Paper under double-blind review # ABSTRACT This work presents a new dynamic and fully-connected layer (DFC) that generalizes existing layers and is free from hard inductive biases. Then, it describes how to factorize the DFC weights efficiently. Using ...
{ "table_of_contents": [ { "title": "CHOPPING FORMERS IS WHAT YOU NEED IN VISION", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.4375 ], [ 487.6875, 80.4375 ], [ 487.6875, ...
DeLLMa: Decision Making Under Uncertainty with Large Language Models
Ollie Liu, Deqing Fu, Dani Yogatama, Willie Neiswanger
The potential of large language models (LLMs) as decision support tools is increasingly being explored in fields such as business, engineering, and medicine, which often face challenging tasks of *decision-making under uncertainty*. In this paper, we show that directly prompting LLMs on these types of decision-making p...
https://openreview.net/pdf?id=Acvo2RGSCy
https://openreview.net/forum?id=Acvo2RGSCy
Acvo2RGSCy
[{"review_id": "n39GaHfr9x", "paper_id": "Acvo2RGSCy", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Spotlight)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "rec...
2025
ICLR
# DELLMA: DECISION MAKING UNDER UNCERTAINTY WITH LARGE LANGUAGE MODELS Ollie Liu<sup>∗</sup> , Deqing Fu<sup>∗</sup> , Dani Yogatama, Willie Neiswanger Thomas Lord Department of Computer Science University of Southern California me@ollieliu.com, {deqingfu, yogatama, neiswang}@usc.edu # ABSTRACT The potential of lar...
{ "table_of_contents": [ { "title": "DELLMA: DECISION MAKING UNDER UNCERTAINTY\nWITH LARGE LANGUAGE MODELS", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.2454833984375 ], [ 503.58477783203125, 80.24548339...
PatchDCT: Patch Refinement for High Quality Instance Segmentation
Qinrou Wen, Jirui Yang, Xue Yang, Kewei Liang
High-quality instance segmentation has shown emerging importance in computer vision. Without any refinement, DCT-Mask directly generates high-resolution masks by compressed vectors. To further refine masks obtained by compressed vectors, we propose for the first time a compressed-vector-based multi-stage refinement fra...
https://openreview.net/pdf?id=t9Zd7Oi5JPl
https://openreview.net/forum?id=t9Zd7Oi5JPl
t9Zd7Oi5JPl
[{"review_id": "l5_hdZtlWpz", "paper_id": "t9Zd7Oi5JPl", "reviewer": null, "paper_summary": "In a gist, the paper proposes mainly to use Discrete Cosine Transforms at a patch and not just image level, so that improve segmentation quality around boundaries.\n\nThis paper received strong acceptance scores from 3 out of 4...
2023
ICLR
# PATCHDCT: PATCH REFINEMENT FOR HIGH QUALITY INSTANCE SEGMENTATION Qinrou Wen<sup>1</sup> , Jirui Yang<sup>2</sup> , Xue Yang<sup>3</sup> , Kewei Liang1,<sup>∗</sup> <sup>1</sup>School of Mathematical Sciences, Zhejiang University <sup>2</sup>Alibaba Group {qinrou.wen,matlkw}@zju.edu.cn, jirui.yjr@alibaba-inc.com y...
{ "table_of_contents": [ { "title": "PATCHDCT: PATCH REFINEMENT FOR HIGH QUALITY\nINSTANCE SEGMENTATION", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.05078125 ], [ 503.57373046875, 80.05078125 ],...
Tracking objects that change in appearance with phase synchrony
Sabine Muzellec, Drew Linsley, Alekh Karkada Ashok, Ennio Mingolla, Girik Malik, Rufin VanRullen, Thomas Serre
Objects we encounter often change appearance as we interact with them. Changes in illumination (shadows), object pose, or the movement of non-rigid objects can drastically alter available image features. How do biological visual systems track objects as they change? One plausible mechanism involves attentional mechanis...
https://openreview.net/pdf?id=m2gVfgWYDO
https://openreview.net/forum?id=m2gVfgWYDO
m2gVfgWYDO
[{"review_id": "Yb8IqEbgpI", "paper_id": "m2gVfgWYDO", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2025
ICLR
# TRACKING OBJECTS THAT CHANGE IN APPEARANCE WITH PHASE SYNCHRONY # Sabine Muzellec<sup>⋆</sup> CerCo, CNRS, Université de Toulouse, France Carney Institute for Brain Science Brown University, USA sabine_muzellec@brown.edu # Drew Linsley<sup>⋆</sup> Carney Institute for Brain Science Department of Cognitive & Psyc...
{ "table_of_contents": [ { "title": "TRACKING OBJECTS THAT CHANGE IN APPEARANCE\nWITH PHASE SYNCHRONY", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.13092041015625 ], [ 503.5625, 80.13092041015625 ...
Process Reward Model with Q-value Rankings
Wendi Li, Yixuan Li
Process Reward Modeling (PRM) is critical for complex reasoning and decision-making tasks where the accuracy of intermediate steps significantly influences the overall outcome. Existing PRM approaches, primarily framed as classification problems, employ cross-entropy loss to independently evaluate each step's correctne...
https://openreview.net/pdf?id=wQEdh2cgEk
https://openreview.net/forum?id=wQEdh2cgEk
wQEdh2cgEk
[{"review_id": "Y58by2QAd1", "paper_id": "wQEdh2cgEk", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2025
ICLR
# PROCESS REWARD MODEL WITH Q-VALUE RANKINGS # Wendi Li Department of Computer Science Huazhong University of Science and Technology wendili@hust.edu.cn ## Yixuan Li Department of Computer Sciences University of Wisconsin-Madison sharonli@cs.wisc.edu # ABSTRACT Process Reward Modeling (PRM) is critical for comple...
{ "table_of_contents": [ { "title": "PROCESS REWARD MODEL WITH Q-VALUE RANKINGS", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.4375 ], [ 503.588134765625, 80.4375 ], [ 503.58813...
Towards One-shot Neural Combinatorial Solvers: Theoretical and Empirical Notes on the Cardinality-Constrained Case
Runzhong Wang, Li Shen, Yiting Chen, Xiaokang Yang, Dacheng Tao, Junchi Yan
One-shot non-autoregressive neural networks, different from RL-based ones, have been actively adopted for solving combinatorial optimization (CO) problems, which can be trained by the objective score in a self-supervised manner. Such methods have shown their superiority in efficiency (e.g. by parallelization) and poten...
https://openreview.net/pdf?id=h21yJhdzbwz
https://openreview.net/forum?id=h21yJhdzbwz
h21yJhdzbwz
[{"review_id": "_K2XlZhQm24", "paper_id": "h21yJhdzbwz", "reviewer": null, "paper_summary": "This paper proposes a specific relaxation of \"combinatorial\" solvers. Instead, what is mostly described here is a solver for binary optimization. \n\nThe solver is proposed to solver binary problems in an amortized way (and c...
2023
ICLR
# TOWARDS ONE-SHOT NEURAL COMBINATORIAL SOLVERS: THEORETICAL AND EMPIRICAL NOTES ON THE CARDINALITY-CONSTRAINED CASE Runzhong Wang<sup>1</sup> , Li Shen<sup>2</sup> , Yiting Chen<sup>1</sup> , Xiaokang Yang<sup>1</sup> , Dacheng Tao<sup>2</sup> , Junchi Yan<sup>1∗</sup> <sup>1</sup>MoE Key Lab of Artificial Intelligen...
{ "table_of_contents": [ { "title": "TOWARDS ONE-SHOT NEURAL COMBINATORIAL\nSOLVERS: THEORETICAL AND EMPIRICAL NOTES\nON THE CARDINALITY-CONSTRAINED CASE", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.4375 ], [ 50...
Nearly Minimax Optimal Offline Reinforcement Learning with Linear Function Approximation: Single-Agent MDP and Markov Game
Wei Xiong, Han Zhong, Chengshuai Shi, Cong Shen, Liwei Wang, Tong Zhang
Offline reinforcement learning (RL) aims at learning an optimal strategy using a pre-collected dataset without further interactions with the environment. While various algorithms have been proposed for offline RL in the previous literature, the minimax optimality has only been (nearly) established for tabular Markov de...
https://openreview.net/pdf?id=UP_GHHPw7rP
https://openreview.net/forum?id=UP_GHHPw7rP
UP_GHHPw7rP
[{"review_id": "pghivSj9Ps", "paper_id": "UP_GHHPw7rP", "reviewer": null, "paper_summary": "This paper considers offline RL for linear MDPs. Here, the algorithm is given a trajectory from a control policy and the algorithm's goal is to compute another policy whose expected reward is as large as possible. The paper pro...
2023
ICLR
# NEARLY MINIMAX OPTIMAL OFFLINE REINFORCEMENT LEARNING WITH LINEAR FUNCTION APPROXIMATION: SINGLE-AGENT MDP AND MARKOV GAME Wei Xiong\*1, Han Zhong\*2, Chengshuai Shi3, Cong Shen3, Liwei Wang4,5, Tong Zhang1,6 Department of Mathematics, The Hong Kong University of Science and Technology1 Center for Data Science, Pe...
{ "table_of_contents": [ { "title": "NEARLY MINIMAX OPTIMAL OFFLINE REINFORCE-\nMENT LEARNING WITH LINEAR FUNCTION APPROXI-\nMATION: SINGLE-AGENT MDP AND MARKOV GAME", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.82421875 ], ...
Mitigating Object Hallucination in MLLMs via Data-augmented Phrase-level Alignment
Pritam Sarkar, Sayna Ebrahimi, Ali Etemad, Ahmad Beirami, Sercan O Arik, Tomas Pfister
Despite their significant advancements, Multimodal Large Language Models (MLLMs) often generate factually inaccurate information, referred to as hallucination. In this work, we address object hallucinations in MLLMs, where information is generated about an object not present in the input image. We introduce Data-augmen...
https://openreview.net/pdf?id=yG1fW8igzP
https://openreview.net/forum?id=yG1fW8igzP
yG1fW8igzP
[{"review_id": "em6YbQYFQw", "paper_id": "yG1fW8igzP", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2025
ICLR
# MITIGATING OBJECT HALLUCINATION IN MLLMS VIA DATA-AUGMENTED PHRASE-LEVEL ALIGNMENT Pritam Sarkar\*\*; Sayna Ebrahimi\*, Ali Etemad\*; Ahmad Beirami\*, Sercan Ö. Arık\*, Tomas Pfister\* \*Queen's University, \*Vector Institute, \*Google DeepMind, \*Google Cloud AI Research {pritam.sarkar,ali.etemad}@queensu.ca {say...
{ "table_of_contents": [ { "title": "MITIGATING OBJECT HALLUCINATION IN MLLMS VIA DATA-AUGMENTED PHRASE-LEVEL ALIGNMENT", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.82421875 ], [ 504.0, 80.82421875 ...
Black-Box Detection of Language Model Watermarks
Thibaud Gloaguen, Nikola Jovanović, Robin Staab, Martin Vechev
Watermarking has emerged as a promising way to detect LLM-generated text, by augmenting LLM generations with later detectable signals. Recent work has proposed multiple families of watermarking schemes, several of which focus on preserving the LLM distribution. This distribution-preservation property is motivated by th...
https://openreview.net/pdf?id=E4LAVLXAHW
https://openreview.net/forum?id=E4LAVLXAHW
E4LAVLXAHW
[{"review_id": "v8xiiCpidX", "paper_id": "E4LAVLXAHW", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2025
ICLR
# BLACK-BOX DETECTION OF LANGUAGE MODEL WATERMARKS Thibaud Gloaguen, Nikola Jovanović, Robin Staab, Martin Vechev ETH Zurich tgloaguen@ethz.ch, {nikola.jovanovic, robin.staab, martin.vechev}@inf.ethz.ch # ABSTRACT Watermarking has emerged as a promising way to detect LLM-generated text, by augmenting LLM generati...
{ "table_of_contents": [ { "title": "BLACK-BOX DETECTION OF LANGUAGE MODEL\nWATERMARKS", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.05078125 ], [ 464.9765625, 80.05078125 ], [ ...
CO-MOT: Boosting End-to-end Transformer-based Multi-Object Tracking via Coopetition Label Assignment and Shadow Sets
Feng yan, Weixin Luo, Yujie Zhong, Yiyang Gan, Lin Ma
Existing end-to-end Multi-Object Tracking (e2e-MOT) methods have not surpassed non-end-to-end tracking-by-detection methods. One possible reason lies in the training label assignment strategy that consistently binds the tracked objects with tracking queries and assigns few newborns to detection queries. Such an assignm...
https://openreview.net/pdf?id=0ov0dMQ3mN
https://openreview.net/forum?id=0ov0dMQ3mN
0ov0dMQ3mN
[{"review_id": "Tojd1xtOCW", "paper_id": "0ov0dMQ3mN", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2025
ICLR
# CO-MOT: BOOSTING END-TO-END TRANSFORMER-BASED MULTI-OBJECT TRACKING VIA COOPETITION LABEL ASSIGNMENT AND SHADOW SETS Feng Yan,\* Weixin Luo,\* Yujie Zhong, Yiyang Gan, Lin Ma<sup>†</sup> Meituan Inc., China. {yanfeng05, luoweixin, zhongyujie, ganyiyang}@meituan.com forest.linma@gmail.com #### **ABSTRACT** Existing...
{ "table_of_contents": [ { "title": "CO-MOT: BOOSTING END-TO-END TRANSFORMER-BASED MULTI-OBJECT TRACKING VIA COOPETITION LABEL ASSIGNMENT AND SHADOW SETS", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.82421875 ], [ ...
Data Debugging with Shapley Importance over Machine Learning Pipelines
Bojan Karlaš, David Dao, Matteo Interlandi, Sebastian Schelter, Wentao Wu, Ce Zhang
When a machine learning (ML) model exhibits poor quality (e.g., poor accuracy or fairness), the problem can often be traced back to errors in the training data. Being able to discover the data examples that are the most likely culprits is a fundamental concern that has received a lot of attention recently. One prominen...
https://openreview.net/pdf?id=qxGXjWxabq
https://openreview.net/forum?id=qxGXjWxabq
qxGXjWxabq
[{"review_id": "OFIRZCuJw8", "paper_id": "qxGXjWxabq", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2024
ICLR
## DATA DEBUGGING WITH SHAPLEY IMPORTANCE OVER MACHINE LEARNING PIPELINES Bojan Karlaš<sup>1\*</sup>, David Dao<sup>2</sup> , Matteo Interlandi<sup>3</sup> , Sebastian Schelter<sup>4</sup> , Wentao Wu<sup>3</sup> , Ce Zhang<sup>5</sup> <sup>1</sup>Harvard University, <sup>2</sup>ETH Zurich, <sup>3</sup>Microsoft, <sup>4</sup>Uni...
{ "table_of_contents": [ { "title": "DATA DEBUGGING WITH SHAPLEY IMPORTANCE OVER\nMACHINE LEARNING PIPELINES", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.4375 ], [ 503.7049560546875, 80.4375 ], ...
QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models
Yuhui Xu, Lingxi Xie, Xiaotao Gu, Xin Chen, Heng Chang, Hengheng Zhang, Zhengsu Chen, XIAOPENG ZHANG, Qi Tian
Recently years have witnessed a rapid development of large language models (LLMs). Despite the strong ability in many language-understanding tasks, the heavy computational burden largely restricts the application of LLMs especially when one needs to deploy them onto edge devices. In this paper, we propose a quantizatio...
https://openreview.net/pdf?id=WvFoJccpo8
https://openreview.net/forum?id=WvFoJccpo8
WvFoJccpo8
[{"review_id": "b5T44CkgbQ", "paper_id": "WvFoJccpo8", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2024
ICLR
# QA-LORA: QUANTIZATION-AWARE LOW-RANK ADAPTATION OF LARGE LANGUAGE MODELS Yuhui Xu Lingxi Xie Xiaotao Gu Xin Chen Heng Chang Hengheng Zhang Zhengsu Chen Xiaopeng Zhang Qi Tian Huawei Inc. (: corresponding author) {xyh6666,198808xc,guxt1994,chenxin061,changh.heng}@gmail.com {imhmhm,chenzhengsu1,zxphistory}@gmail.com,...
{ "table_of_contents": [ { "title": "QA-LORA: QUANTIZATION-AWARE LOW-RANK\nADAPTATION OF LARGE LANGUAGE MODELS", "heading_level": null, "page_id": 0, "polygon": [ [ 108.43000030517578, 80.05078125 ], [ 503.5738525390625, 80.05...
No Double Descent in PCA: Training and Pre-Training in High Dimensions
Daniel Gedon, Antonio H. Ribeiro, Thomas B. Schön
With the recent body of work on overparameterized models the gap between theory and practice in contemporary machine learning is shrinking. While many of the present state-of-the-art models have an encoder-decoder architecture, there is little theoretical work for this model structure. To improve our understanding in t...
https://openreview.net/pdf?id=ieWqvOiKgz2
https://openreview.net/forum?id=ieWqvOiKgz2
ieWqvOiKgz2
[{"review_id": "D9AEz2YL8B", "paper_id": "ieWqvOiKgz2", "reviewer": null, "paper_summary": "The paper concerns (two) linear regression models: on in which data is isotropic, the other in which there is planted, latent, linear structure. The authors prove there is no double-descent behavior of (two variants) of PCA + l...
2023
ICLR
# NO DOUBLE DESCENT IN PCA: TRAINING AND PRE-TRAINING IN HIGH DIMENSIONS Anonymous authors Paper under double-blind review # ABSTRACT With the recent body of work on overparameterized models the gap between theory and practice in contemporary machine learning is shrinking. While many of th...
{ "table_of_contents": [ { "title": "NO DOUBLE DESCENT IN PCA: TRAINING AND\nPRE-TRAINING IN HIGH DIMENSIONS", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.35015869140625 ], [ 457.1398010253906, 80.350158...
Mind the GAP: Glimpse-based Active Perception improves generalization and sample efficiency of visual reasoning
Oleh Kolner, Thomas Ortner, Stanisław Woźniak, Angeliki Pantazi
Human capabilities in understanding visual relations are far superior to those of AI systems, especially for previously unseen objects. For example, while AI systems struggle to determine whether two such objects are visually the same or different, humans can do so with ease. Active vision theories postulate that the l...
https://openreview.net/pdf?id=iXCeQ2m6vT
https://openreview.net/forum?id=iXCeQ2m6vT
iXCeQ2m6vT
[{"review_id": "HgWwZllTP5", "paper_id": "iXCeQ2m6vT", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2025
ICLR
# MIND THE GAP: GLIMPSE-BASED ACTIVE PERCEPTION IMPROVES GENERALIZATION AND SAMPLE EFFICIENCY OF VISUAL REASONING Oleh Kolner<sup>1,2</sup> , Thomas Ortner<sup>1</sup> , Stanisław Woźniak<sup>1</sup> & Angeliki Pantazi<sup>1</sup> # ABSTRACT Human capabilities in understanding visual relations are far superior ...
{ "table_of_contents": [ { "title": "MIND THE GAP: GLIMPSE-BASED ACTIVE PERCEP-\nTION IMPROVES GENERALIZATION AND SAMPLE EFFI-\nCIENCY OF VISUAL REASONING", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.05078125 ], [ ...
MoDeGPT: Modular Decomposition for Large Language Model Compression
Chi-Heng Lin, Shangqian Gao, James Seale Smith, Abhishek Patel, Shikhar Tuli, Yilin Shen, Hongxia Jin, Yen-Chang Hsu
Large Language Models (LLMs) have significantly advanced AI with their exceptional performance across a wide range of tasks. However, their extensive computational requirements restrict their use on devices with limited resources. While recent compression methods based on low-rank matrices show potential solutions, the...
https://openreview.net/pdf?id=8EfxjTCg2k
https://openreview.net/forum?id=8EfxjTCg2k
8EfxjTCg2k
[{"review_id": "7wROSLajY5", "paper_id": "8EfxjTCg2k", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Oral)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recommen...
2025
ICLR
# MODEGPT: MODULAR DECOMPOSITION FOR LARGE LANGUAGE MODEL COMPRESSION Chi-Heng Lin<sup>∗</sup> Samsung Research America Shangqian Gao Florida State University James Seale Smith Samsung Research America Abhishek Patel Shikhar Tuli Samsung Research America Yilin Shen Samsung Research America...
{ "table_of_contents": [ { "title": "MODEGPT: MODULAR DECOMPOSITION FOR LARGE\nLANGUAGE MODEL COMPRESSION", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.05078125 ], [ 504.421875, 80.05078125 ], ...
Online Policy Optimization for Robust MDP
Jing Dong, Jingwei Li, Baoxiang Wang, Jingzhao Zhang
Reinforcement learning (RL) has exceeded human performance in many synthetic settings such as video games and Go. However, real-world deployment of end-to-end RL models is rare, as RL models can be very sensitive to slight perturbation of the environment. The robust Markov decision process (MDP) framework---in which th...
https://openreview.net/pdf?id=cYZupNY8DS4
https://openreview.net/forum?id=cYZupNY8DS4
cYZupNY8DS4
[{"review_id": "PB00hZVOJC1", "paper_id": "cYZupNY8DS4", "reviewer": null, "paper_summary": "The submitted paper considers the problem of learning a robust policy for MDPs with uncertainty about the decision dynamics. More specifically, they consider an online setting and are interested in deriving regret bounds for th...
2023
ICLR
# ONLINE POLICY OPTIMIZATION FOR ROBUST MDP Anonymous authors Paper under double-blind review ## ABSTRACT Reinforcement learning (RL) has exceeded human performance in many synthetic settings such as video games and Go. However, real-world deployment of end-to-end RL models is less common, as RL models can be very ...
{ "table_of_contents": [ { "title": "ONLINE POLICY OPTIMIZATION FOR ROBUST MDP", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.4375 ], [ 485.296875, 80.4375 ], [ 485.296875, ...
Training Language Models on Synthetic Edit Sequences Improves Code Synthesis
Ulyana Piterbarg, Lerrel Pinto, Rob Fergus
Software engineers mainly write code by editing existing programs. In contrast, language models (LMs) autoregressively synthesize programs in a single pass. One explanation for this is the scarcity of sequential edit data. While high-quality instruction data for code synthesis is scarce, edit data for synthesis is even...
https://openreview.net/pdf?id=AqfUa08PCH
https://openreview.net/forum?id=AqfUa08PCH
AqfUa08PCH
[{"review_id": "fo5ZQ8DRSG", "paper_id": "AqfUa08PCH", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2025
ICLR
# TRAINING LANGUAGE MODELS ON SYNTHETIC EDIT SEQUENCES IMPROVES CODE SYNTHESIS Ulyana Piterbarg, Lerrel Pinto, & Rob Fergus<sup>∗</sup> New York University ## ABSTRACT Software engineers mainly write code by editing existing programs. In contrast, language models (LMs) autoregressively synthesize programs in a singl...
{ "table_of_contents": [ { "title": "TRAINING LANGUAGE MODELS ON SYNTHETIC\nEDIT SEQUENCES IMPROVES CODE SYNTHESIS", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.49505615234375 ], [ 456.45037841796875, 8...
In-Situ Text-Only Adaptation of Speech Models with Low-Overhead Speech Imputations
Ashish Mittal, Sunita Sarawagi, Preethi Jyothi
Fast and accurate adaptation of automatic speech recognition (ASR) systems using only text data in the target domain is a problem of long-standing practical relevance. Text-only adaptation was easy in traditional cascaded ASR systems with completely decoupled acoustic and language models. Recently, the RNN-Transducer (R...
https://openreview.net/pdf?id=T2Ncx_PN2K
https://openreview.net/forum?id=T2Ncx_PN2K
T2Ncx_PN2K
[{"review_id": "osVAp_pS7k", "paper_id": "T2Ncx_PN2K", "reviewer": null, "paper_summary": "Summary: the paper presents a novel text-only adaptation method for RNN-T based ASR system. It is a lightweight adaptation technique. Experimental results show that the proposed method achieves better quality to address domain di...
2023
ICLR
## IN-SITU TEXT-ONLY ADAPTATION OF SPEECH MODELS WITH LOW-OVERHEAD SPEECH IMPUTATIONS Ashish Mittal IBM Research, IIT Bombay arakeshk@in.ibm.com Sunita Sarawagi & Preethi Jyothi IIT Bombay {sunita,pjyothi}@cse.iitb.ac.in ## ABSTRACT Fast and accurate adaptation of automatic speech recognition (ASR) systems using on...
{ "table_of_contents": [ { "title": "IN-SITU TEXT-ONLY ADAPTATION OF SPEECH MOD-\nELS WITH LOW-OVERHEAD SPEECH IMPUTATIONS", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.05078125 ], [ 503.56976318359375, ...
NoVo: Norm Voting off Hallucinations with Attention Heads in Large Language Models
Zheng Yi Ho, Siyuan Liang, Sen Zhang, Yibing Zhan, Dacheng Tao
Hallucinations in Large Language Models (LLMs) remain a major obstacle, particularly in high-stakes applications where factual accuracy is critical. While representation editing and reading methods have made strides in reducing hallucinations, their heavy reliance on specialised tools and training on in-domain samples,...
https://openreview.net/pdf?id=yaOe2xBcLC
https://openreview.net/forum?id=yaOe2xBcLC
yaOe2xBcLC
[{"review_id": "8JBgSxk1VS", "paper_id": "yaOe2xBcLC", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2025
ICLR
# NOVO: NORM VOTING OFF HALLUCINATIONS WITH ATTENTION HEADS IN LARGE LANGUAGE MODELS Zhengyi Ho<sup>1</sup> , Siyuan Liang<sup>1\*</sup>, Sen Zhang<sup>2</sup> , Yibing Zhan<sup>2</sup> , Dacheng Tao<sup>1\*</sup> {zhengyi001, siyuan.liang, dacheng.tao}@ntu.edu.sg {senzhang.thu10, zhanybjy}@gmail.com ### ABSTRACT Hallucinations in Large...
{ "table_of_contents": [ { "title": "NOVO: NORM VOTING OFF HALLUCINATIONS WITH\nATTENTION HEADS IN LARGE LANGUAGE MODELS", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 79.6640625 ], [ 503.57440185546875, 79....
HOYER REGULARIZER IS ALL YOU NEED FOR EXTREMELY SPARSE SPIKING NEURAL NETWORKS
Gourav Datta, Zeyu Liu, Peter Anthony Beerel
Spiking Neural networks (SNN) have emerged as an attractive spatio-temporal computing paradigm for a wide range of low-power vision tasks. However, state- of-the-art (SOTA) SNN models either incur multiple time steps which hinder their deployment in real-time use cases or increase the training complexity significantly....
https://openreview.net/pdf?id=0L8tuglXJaW
https://openreview.net/forum?id=0L8tuglXJaW
0L8tuglXJaW
[{"review_id": "uKPS2PPNU5q", "paper_id": "0L8tuglXJaW", "reviewer": null, "paper_summary": "The paper proposes to use a Hoyer regularizer and Hoyer spike layer to improve the training of one-time-step SNNs. The paper receives a borderline score and goes through intensive discussions between the reviewers and the autho...
2023
ICLR
# HOYER REGULARIZER IS ALL YOU NEED FOR ULTRA LOW-LATENCY SPIKING NEURAL NETWORKS Anonymous authors Paper under double-blind review # ABSTRACT Spiking Neural networks (SNN) have emerged as an attractive spatio-temporal computing paradigm for a wide range of low-power vision tasks. However, state-of-the-art (SOTA) SNN...
{ "table_of_contents": [ { "title": "HOYER REGULARIZER IS ALL YOU NEED FOR ULTRA\nLOW-LATENCY SPIKING NEURAL NETWORKS", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.2540283203125 ], [ 503.5732727050781, 8...
Bi-Factorial Preference Optimization: Balancing Safety-Helpfulness in Language Models
Wenxuan Zhang, Philip Torr, Mohamed Elhoseiny, Adel Bibi
Fine-tuning large language models (LLMs) on human preferences, typically through reinforcement learning from human feedback (RLHF), has proven successful in enhancing their capabilities. However, ensuring the safety of LLMs during fine-tuning remains a critical concern, and mitigating the potential conflicts in safe...
https://openreview.net/pdf?id=GjM61KRiTG
https://openreview.net/forum?id=GjM61KRiTG
GjM61KRiTG
[{"review_id": "ir66w2MGy8", "paper_id": "GjM61KRiTG", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Spotlight)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "rec...
2025
ICLR
# BI-FACTORIAL PREFERENCE OPTIMIZATION: BALANCING SAFETY-HELPFULNESS IN LANGUAGE MODELS Wenxuan Zhang<sup>1</sup> , Philip H.S. Torr<sup>2</sup> , Mohamed Elhoseiny<sup>1∗</sup> , Adel Bibi<sup>2∗</sup> {wenxuan.zhang,mohamed.elhoseiny}@kaust.edu.sa {philip.torr,adel.bibi}@eng.ox.ac.uk # ABSTRACT Fine-tuning large ...
{ "table_of_contents": [ { "title": "BI-FACTORIAL PREFERENCE OPTIMIZATION:\nBALANCING SAFETY-HELPFULNESS IN LANGUAGE\nMODELS", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.39202880859375 ], [ 503.5659484863281, ...
InverseBench: Benchmarking Plug-and-Play Diffusion Priors for Inverse Problems in Physical Sciences
Hongkai Zheng, Wenda Chu, Bingliang Zhang, Zihui Wu, Austin Wang, Berthy Feng, Caifeng Zou, Yu Sun, Nikola Borislavov Kovachki, Zachary E Ross, Katherine Bouman, Yisong Yue
Plug-and-play diffusion priors (PnPDP) have emerged as a promising research direction for solving inverse problems. However, current studies primarily focus on natural image restoration, leaving the performance of these algorithms in scientific inverse problems largely unexplored. To address this gap, we introduce \t...
https://openreview.net/pdf?id=U3PBITXNG6
https://openreview.net/forum?id=U3PBITXNG6
U3PBITXNG6
[{"review_id": "ZwQU335oj6", "paper_id": "U3PBITXNG6", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Spotlight)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "rec...
2025
ICLR
# INVERSEBENCH: BENCHMARKING PLUG-AND-PLAY DIFFUSION PRIORS FOR INVERSE PROBLEMS IN PHYSICAL SCIENCES Hongkai Zheng<sup>1,\*</sup>, Wenda Chu<sup>1,\*</sup>, Bingliang Zhang<sup>1,\*</sup>, Zihui Wu<sup>1,\*</sup>, Austin Wang<sup>1</sup>, Berthy T. Feng<sup>1</sup>, Caifeng Zou<sup>1</sup>, Yu Sun<sup>2</sup>, Nikola...
{ "table_of_contents": [ { "title": "INVERSEBENCH: BENCHMARKING PLUG-AND-PLAY DIFFUSION PRIORS FOR INVERSE PROBLEMS IN PHYSICAL SCIENCES", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 79.6640625 ], [ 504.0, ...
Monet: Mixture of Monosemantic Experts for Transformers
Jungwoo Park, Ahn Young Jin, Kee-Eung Kim, Jaewoo Kang
Understanding the internal computations of large language models (LLMs) is crucial for aligning them with human values and preventing undesirable behaviors like toxic content generation. However, mechanistic interpretability is hindered by *polysemanticity*—where individual neurons respond to multiple, unrelated concep...
https://openreview.net/pdf?id=1Ogw1SHY3p
https://openreview.net/forum?id=1Ogw1SHY3p
1Ogw1SHY3p
[{"review_id": "K0FZZBwqNi", "paper_id": "1Ogw1SHY3p", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2025
ICLR
# MONET: MIXTURE OF MONOSEMANTIC EXPERTS FOR TRANSFORMERS ``` Jungwoo Park<sup>1,3†</sup>, Young Jin Ahn<sup>2†</sup>, Kee-Eung Kim<sup>2*</sup>, Jaewoo Kang<sup>1,3*</sup> <sup>1</sup>Korea University, <sup>2</sup>KAIST, <sup>3</sup>AIGEN Sciences {jungwoo-park, kangj}@korea.ac.kr {snoop2head, kekim}@kaist.ac.kr ``` ...
{ "table_of_contents": [ { "title": "MONET: MIXTURE OF MONOSEMANTIC EXPERTS FOR TRANSFORMERS", "heading_level": null, "page_id": 0, "polygon": [ [ 107.25, 79.6640625 ], [ 504.75, 79.6640625 ], [ 504.7...
Differentially private optimization for non-decomposable objective functions
Weiwei Kong, Andres Munoz medina, Mónica Ribero
Unsupervised pre-training is a common step in developing computer vision models and large language models. In this setting, the absence of labels requires the use of similarity-based loss functions, such as the contrastive loss, that favor minimizing the distance between similar inputs and maximizing the distance betwe...
https://openreview.net/pdf?id=F52tAK5Gbg
https://openreview.net/forum?id=F52tAK5Gbg
F52tAK5Gbg
[{"review_id": "s3d4y7e4tr", "paper_id": "F52tAK5Gbg", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2025
ICLR
# DIFFERENTIALLY PRIVATE OPTIMIZATION FOR NON-DECOMPOSABLE OBJECTIVE FUNCTIONS Weiwei Kong, Andrés Muñoz Medina & Mónica Ribero Google Research New York, NY, USA {weiweikong, ammedina, mribero}@google.com # ABSTRACT Unsupervised pre-training is a common step in developing computer vision models and large lan...
{ "table_of_contents": [ { "title": "DIFFERENTIALLY PRIVATE OPTIMIZATION FOR NON-\nDECOMPOSABLE OBJECTIVE FUNCTIONS", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.05078125 ], [ 503.56976318359375, 80.0507...
Compositional Preference Models for Aligning LMs
Dongyoung Go, Tomasz Korbak, Germán Kruszewski, Jos Rozen, Marc Dymetman
As language models (LMs) become more capable, it is increasingly important to align them with human preferences. However, the dominant paradigm for training Preference Models (PMs) for that purpose suffers from fundamental limitations, such as lack of transparency and scalability, along with susceptibility to overfitti...
https://openreview.net/pdf?id=tiiAzqi6Ol
https://openreview.net/forum?id=tiiAzqi6Ol
tiiAzqi6Ol
[{"review_id": "FZvYrltdyf", "paper_id": "tiiAzqi6Ol", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2024
ICLR
# COMPOSITIONAL PREFERENCE MODELS FOR ALIGNING LMS ### Dongyoung Go Naver Corp Yonsei University dongyoung.go@navercorp.com ### Germán Kruszewski, Jos Rozen Naver Labs Europe {german.kruszewski,jos.rozen}@naverlabs.com ### Tomasz Korbak University of Sussex tomasz.korbak@gmail.com ### Marc Dymetman Independen...
{ "table_of_contents": [ { "title": "COMPOSITIONAL PREFERENCE MODELS\nFOR ALIGNING LMS", "heading_level": null, "page_id": 0, "polygon": [ [ 106.98046875, 80.05078125 ], [ 398.23077392578125, 80.05078125 ], [ ...
BrainUICL: An Unsupervised Individual Continual Learning Framework for EEG Applications
Yangxuan Zhou, Sha Zhao, Jiquan Wang, Haiteng Jiang, Shijian Li, Tao Li, Gang Pan
Electroencephalography (EEG) is a non-invasive brain-computer interface technology used for recording brain electrical activity. It plays an important role in human life and has been widely used in real life, including sleep staging, emotion recognition, and motor imagery. However, existing EEG-related models cannot be...
https://openreview.net/pdf?id=6jjAYmppGQ
https://openreview.net/forum?id=6jjAYmppGQ
6jjAYmppGQ
[{"review_id": "DyJrMJLFuP", "paper_id": "6jjAYmppGQ", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2025
ICLR
# BRAINUICL: AN UNSUPERVISED INDIVIDUAL CONTINUAL LEARNING FRAMEWORK FOR EEG APPLICATIONS ``` Yangxuan Zhou1,2, Sha Zhao1,2∗ , Jiquan Wang1,2, Haiteng Jiang3,4,1, Shijian Li1,2 , Tao Li3,4,1, Gang Pa...
{ "table_of_contents": [ { "title": "BRAINUICL: AN UNSUPERVISED INDIVIDUAL CON-\nTINUAL LEARNING FRAMEWORK FOR EEG APPLICA-\nTIONS", "heading_level": null, "page_id": 0, "polygon": [ [ 105.1875, 80.05078125 ], [ 506.8431701660156, ...
A Statistical Framework for Personalized Federated Learning and Estimation: Theory, Algorithms, and Privacy
Kaan Ozkara, Antonious M. Girgis, Deepesh Data, Suhas Diggavi
A distinguishing characteristic of federated learning is that the (local) client data could have statistical heterogeneity. This heterogeneity has motivated the design of personalized learning, where individual (personalized) models are trained, through collaboration. There have been various personalization methods pro...
https://openreview.net/pdf?id=FUiDMCr_W4o
https://openreview.net/forum?id=FUiDMCr_W4o
FUiDMCr_W4o
[{"review_id": "0Vm-DIi44Bk", "paper_id": "FUiDMCr_W4o", "reviewer": null, "paper_summary": "This paper presents privacy-preserving empirical and hierarchical Bayes algorithms. The analysis and development are both fine, and the topic is interesting, at least for me, relative to the standard 'propagate gradients' idea ...
2023
ICLR
# A STATISTICAL FRAMEWORK FOR PERSONALIZED FEDERATED LEARNING AND ESTIMATION: THEORY, ALGORITHMS, AND PRIVACY Kaan Ozkara\*, Antonious M. Girgis\*, Deepesh Data & Suhas Diggavi Department of Electrical and Computer Engineering University of California, Los Angeles {kaan,amgirgis}@ucla.edu,deepesh.data@gmail.com,suhas...
{ "table_of_contents": [ { "title": "A STATISTICAL FRAMEWORK FOR PERSONALIZED FEDERATED LEARNING AND ESTIMATION: THEORY, ALGORITHMS, AND PRIVACY", "heading_level": null, "page_id": 0, "polygon": [ [ 105.1875, 80.05078125 ], [ 507.75, ...
Bayesian Bi-clustering of Neural Spiking Activity with Latent Structures
Ganchao Wei
Modern neural recording techniques allow neuroscientists to obtain spiking activity of multiple neurons from different brain regions over long time periods, which requires new statistical methods to be developed for understanding structure of the large-scale data. In this paper, we develop a bi-clustering method to clu...
https://openreview.net/pdf?id=ZYm1Ql6udy
https://openreview.net/forum?id=ZYm1Ql6udy
ZYm1Ql6udy
[{"review_id": "VN1WQOqqgR", "paper_id": "ZYm1Ql6udy", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2024
ICLR
# BAYESIAN BI-CLUSTERING OF NEURAL SPIKING ACTIVITY WITH LATENT STRUCTURES Ganchao Wei Department of Statistical Science Duke University Durham, NC 27708, USA ganchao.wei@duke.edu # ABSTRACT Modern neural recording techniques allow neuroscientists to obtain spiking activity of multiple neurons from different brain ...
{ "table_of_contents": [ { "title": "BAYESIAN BI-CLUSTERING OF NEURAL SPIKING AC-\nTIVITY WITH LATENT STRUCTURES", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.49505615234375 ], [ 503.5697326660156, 80.49...
An Exact Poly-Time Membership-Queries Algorithm for Extracting a Three-Layer ReLU Network
Amit Daniely, Elad Granot
We consider the natural problem of learning a ReLU network from queries, which was recently remotivated by model extraction attacks. In this work, we present a polynomial-time algorithm that can learn a depth-two ReLU network from queries under mild general position assumptions. We also present a polynomial-time algori...
https://openreview.net/pdf?id=-CoNloheTs
https://openreview.net/forum?id=-CoNloheTs
-CoNloheTs
[{"review_id": "95cFAPnpI-", "paper_id": "-CoNloheTs", "reviewer": null, "paper_summary": "This paper gives the first polynomial-time algorithm that learns 3-layer neural networks with membership queries. Prior work could only handle depth-2 networks or imposed an extremely strong condition on the weights. While the cu...
2023
ICLR
# AN EXACT POLY-TIME MEMBERSHIP-QUERIES ALGORITHM FOR EXTRACTING A THREE-LAYER RELU NETWORK ## Amit Daniely School of Computer Science and Engineering, The Hebrew University and Google Research Tel-Aviv amit.daniely@mail.huji.ac.il ## Elad Granot School of Computer Science and Engineering, The Hebrew University el...
{ "table_of_contents": [ { "title": "AN EXACT POLY-TIME MEMBERSHIP-QUERIES AL-\nGORITHM FOR EXTRACTING A THREE-LAYER RELU\nNETWORK", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.49505615234375 ], [ 503.56988525390...
Dense Video Object Captioning from Disjoint Supervision
Xingyi Zhou, Anurag Arnab, Chen Sun, Cordelia Schmid
We propose a new task and model for dense video object captioning -- detecting, tracking and captioning trajectories of objects in a video. This task unifies spatial and temporal localization in video, whilst also requiring fine-grained visual understanding that is best described by natural language. We propose a unifi...
https://openreview.net/pdf?id=auZZ2gN0ZN
https://openreview.net/forum?id=auZZ2gN0ZN
auZZ2gN0ZN
[{"review_id": "ew1X9QSUey", "paper_id": "auZZ2gN0ZN", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Spotlight)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "rec...
2025
ICLR
# DENSE VIDEO OBJECT CAPTIONING FROM DISJOINT SUPERVISION Xingyi Zhou\* Anurag Arnab\* Chen Sun Cordelia Schmid Google DeepMind # ABSTRACT We propose a new task and model for *dense video object captioning* – detecting, tracking and captioning trajectories of objects in a video. This task ...
{ "table_of_contents": [ { "title": "DENSE VIDEO OBJECT CAPTIONING\nFROM DISJOINT SUPERVISION", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.05078125 ], [ 374.244140625, 80.05078125 ], [ ...
GLOMA: Global Video Text Spotting with Morphological Association
Han Wang, Yanjie Wang, Yang Li, Can Huang
Video Text Spotting (VTS) is a fundamental visual task that aims to predict the trajectories and content of texts in a video. Previous works usually conduct local associations and apply IoU-based distance and complex post-processing procedures to boost performance, ignoring the abundant temporal information and the mor...
https://openreview.net/pdf?id=tMKibc9Uxi
https://openreview.net/forum?id=tMKibc9Uxi
tMKibc9Uxi
[{"review_id": "MqQNhPux2Q", "paper_id": "tMKibc9Uxi", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2025
ICLR
# GLOMA: GLOBAL VIDEO TEXT SPOTTING WITH MORPHOLOGICAL ASSOCIATION Han Wang Bytedance Yanjie Wang Bytedance Yang Li Bytedance Can Huang Bytedance ### ABSTRACT Video Text Spotting (VTS) is a fundamental visual task that aims to predict the trajectories and content of texts in a video. Previous works usually conduct l...
{ "table_of_contents": [ { "title": "GLOMA: GLOBAL VIDEO TEXT SPOTTING WITH\nMORPHOLOGICAL ASSOCIATION", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.4375 ], [ 503.58148193359375, 80.4375 ], ...
Near-optimal Active Regression of Single-Index Models
Yi Li, Wai Ming Tai
The active regression problem of the single-index model is to solve $\min_x \lVert f(Ax)-b\rVert_p$, where $A$ is fully accessible and $b$ can only be accessed via entry queries, with the goal of minimizing the number of queries to the entries of $b$. When $f$ is Lipschitz, previous results only obtain constant-factor ...
https://openreview.net/pdf?id=iF06WjHnNj
https://openreview.net/forum?id=iF06WjHnNj
iF06WjHnNj
[{"review_id": "GvFWY8auA0", "paper_id": "iF06WjHnNj", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (Poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2025
ICLR
# NEAR-OPTIMAL ACTIVE REGRESSION OF SINGLE-INDEX MODELS ## Yi Li School of Physical and Mathematical Sciences and College of Computing and Data Science Nanyang Technological University yili@ntu.edu.sg # Wai Ming Tai Independent Researcher taiwaiming2003@gmail.com # **ABSTRACT** The active regression problem of ...
{ "table_of_contents": [ { "title": "NEAR-OPTIMAL ACTIVE REGRESSION OF SINGLE-INDEX MODELS", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.82421875 ], [ 504.0, 80.82421875 ], [ 5...
EA-HAS-Bench: Energy-aware Hyperparameter and Architecture Search Benchmark
Shuguang Dou, XINYANG JIANG, Cai Rong Zhao, Dongsheng Li
The energy consumption for training deep learning models is increasing at an alarming rate due to the growth of training data and model scale, resulting in a negative impact on carbon neutrality. Energy consumption is an especially pressing issue for AutoML algorithms because it usually requires repeatedly training lar...
https://openreview.net/pdf?id=n-bvaLSCC78
https://openreview.net/forum?id=n-bvaLSCC78
n-bvaLSCC78
[{"review_id": "IjLdH6ZyVF", "paper_id": "n-bvaLSCC78", "reviewer": null, "paper_summary": "This paper introduces a novel joint NAS+HPO benchmark that also includes measurements of energy. All reviewers judged this to be very helpful and gave acceptance scores. Joint NAS + HPO is very important, as also recently addres...
2023
ICLR
# EA-HAS-BENCH: ENERGY-AWARE HYPERPARAMETER AND ARCHITECTURE SEARCH BENCHMARK Shuguang Dou<sup>1</sup>, Xinyang Jiang<sup>2</sup>, Cairong Zhao<sup>1</sup>, Dongsheng Li<sup>2</sup> <sup>1</sup>Tongji University, <sup>2</sup>Microsoft Research Asia #### **ABSTRACT** The energy consumption for training deep learning models is ...
{ "table_of_contents": [ { "title": "EA-HAS-BENCH: ENERGY-AWARE HYPERPARAMETER AND ARCHITECTURE SEARCH BENCHMARK", "heading_level": null, "page_id": 0, "polygon": [ [ 106.5, 80.82421875 ], [ 504.0, 80.82421875 ], ...
Scalable Batch-Mode Deep Bayesian Active Learning via Equivalence Class Annealing
Renyu Zhang, Aly A Khan, Robert L. Grossman, Yuxin Chen
Active learning has demonstrated data efficiency in many fields. Existing active learning algorithms, especially in the context of batch-mode deep Bayesian active models, rely heavily on the quality of uncertainty estimations of the model, and are often challenging to scale to large batches. In this paper, we propose B...
https://openreview.net/pdf?id=GRZtigJljLY
https://openreview.net/forum?id=GRZtigJljLY
GRZtigJljLY
[{"review_id": "i3DNq4e0JR", "paper_id": "GRZtigJljLY", "reviewer": null, "paper_summary": "This work presents a novel batch active learning algorithm set in a Bayesian framework, focused on training Bayesian Neural Networks (BNN). The algorithm is motivated by labeling examples that are most useful in differentiating...
2023
ICLR
# SCALABLE BATCH-MODE DEEP BAYESIAN ACTIVE LEARNING VIA EQUIVALENCE CLASS ANNEALING #### Renyu Zhang<sup>1</sup>, Aly A. Khan<sup>2,3</sup>, Robert L. Grossman<sup>1,4</sup>, Yuxin Chen<sup>1</sup> <sup>1</sup>Department of Computer Science, University of Chicago {zhangr,aakhan,rgrossman1,chenyuxin}@uchicago.edu # ABSTRACT Active lear...
{ "table_of_contents": [ { "title": "SCALABLE BATCH-MODE DEEP BAYESIAN ACTIVE\nLEARNING VIA EQUIVALENCE CLASS ANNEALING", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.39202880859375 ], [ 503.58441162109375, ...
Trajeglish: Traffic Modeling as Next-Token Prediction
Jonah Philion, Xue Bin Peng, Sanja Fidler
A longstanding challenge for self-driving development is simulating dynamic driving scenarios seeded from recorded driving logs. In pursuit of this functionality, we apply tools from discrete sequence modeling to model how vehicles, pedestrians and cyclists interact in driving scenarios. Using a simple data-driven toke...
https://openreview.net/pdf?id=Z59Rb5bPPP
https://openreview.net/forum?id=Z59Rb5bPPP
Z59Rb5bPPP
[{"review_id": "YFthgWIloi", "paper_id": "Z59Rb5bPPP", "reviewer": null, "paper_summary": "", "strengths": "", "weaknesses": "", "comments": {"value": ""}, "overall_score": "{'value': 'Accept (poster)'}", "confidence": null, "contribution": null, "correctness": "", "clarity_quality_novelty_reproducibility": "", "recomm...
2024
ICLR
# TRAJEGLISH: TRAFFIC MODELING AS NEXT-TOKEN PREDICTION Jonah Philion<sup>1,2,3</sup>, Xue Bin Peng<sup>1,4</sup>, Sanja Fidler<sup>1,2,3</sup> <sup>1</sup>NVIDIA, <sup>2</sup>University of Toronto, <sup>3</sup>Vector Institute, <sup>4</sup>Simon Fraser University {jphilion, japeng, sfidler}@nvidia.com # ABSTRACT ...
{ "table_of_contents": [ { "title": "TRAJEGLISH: TRAFFIC MODELING AS NEXT-TOKEN\nPREDICTION", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 79.6640625 ], [ 503.58056640625, 79.6640625 ], [ ...
Taming the Long Tail of Deep Probabilistic Forecasting
Mayank Sharan, Jedrzej Kozerawski, Rose Yu
Deep probabilistic forecasting is gaining attention in numerous applications from weather prognosis, through electricity consumption estimation, to autonomous vehicle trajectory prediction. However, existing approaches focus on improvements on average metrics without addressing the long tailed distribution of errors. I...
https://openreview.net/pdf?id=fvvcpsEl3Z6
https://openreview.net/forum?id=fvvcpsEl3Z6
fvvcpsEl3Z6
[{"review_id": "nupaOucKmW", "paper_id": "fvvcpsEl3Z6", "reviewer": null, "paper_summary": "While various methods have been proposed for long-tailed data distributions, this paper aims to arouse the awareness of the research community on some properties related to long-tailed error distributions instead. The paper moti...
2023
ICLR
# TAMING THE LONG TAIL OF DEEP PROBABILISTIC FORECASTING #### Anonymous authors Paper under double-blind review ## ABSTRACT Deep probabilistic forecasting is gaining attention in numerous applications from weather prognosis, through electricity consumption estimation, to autonomous vehicle trajectory prediction. Ho...
{ "table_of_contents": [ { "title": "TAMING THE LONG TAIL OF DEEP PROBABILISTIC\nFORECASTING", "heading_level": null, "page_id": 0, "polygon": [ [ 106.3828125, 80.13092041015625 ], [ 503.5649108886719, 80.13092041015625 ...
Maximal Correlation-Based Post-Nonlinear Learning for Bivariate Causal Discovery
Tianjian Zhang, Feng Yin, Zhi-Quan Luo
Bivariate causal discovery aims to determine the causal relationship between two random variables from passive observational data (as intervention is not affordable in many scientific fields), which is considered fundamental and challenging. Designing algorithms based on the post-nonlinear (PNL) model has aroused much ...
https://openreview.net/pdf?id=Or8rcTLo7U
https://openreview.net/forum?id=Or8rcTLo7U
Or8rcTLo7U
[{"review_id": "I41FMGccED", "paper_id": "Or8rcTLo7U", "reviewer": null, "paper_summary": "The paper focuses on the challenges of estimating post-nonlinear models (PNLs) for causal discovery in the bivariate case. The proposed method combines the objective function in the alternating conditional expectation (ACE) algor...
2023
ICLR
## MAXIMAL CORRELATION-BASED POST-NONLINEAR LEARNING FOR BIVARIATE CAUSAL DISCOVERY Anonymous authors Paper under double-blind review ## ABSTRACT Bivariate causal discovery aims to determine the causal relationship between two random variables from passive observational data (as intervention is not affordable in man...
{ "table_of_contents": [ { "title": "MAXIMAL CORRELATION-BASED POST-NONLINEAR\nLEARNING FOR BIVARIATE CAUSAL DISCOVERY", "heading_level": null, "page_id": 0, "polygon": [ [ 107.578125, 80.39202880859375 ], [ 503.5811462402344, ...