Column schema (type and per-column stats):

column       type            stats
conference   stringclasses   3 values
year         int32           2.02k – 2.02k
paper_id     int32           5.89k – 80k
title        stringlengths   12 – 188
abstract     stringlengths   1 – 4.65k
topics       listlengths     1 – 20
image_url    stringlengths   54 – 89
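Each record below carries the seven fields named in the schema. A minimal sketch of how one such record might be represented and checked in Python: the field names follow the schema, the example values are taken from the first record below, and the numeric bounds are read off the schema stats (e.g. 4.65k taken as 4650), so they are illustrative approximations rather than authoritative limits.

```python
# Minimal record check against the schema above. Field names come from
# the schema; the bounds mirror the listed stats and are approximate
# (title length 12-188, abstract length 1-4650, 1-20 topics).

def validate_record(rec: dict) -> list[str]:
    """Return a list of schema violations (empty if the record passes)."""
    errors = []
    if rec["conference"] not in {"ICML", "ICLR", "NeurIPS"}:  # 3 values
        errors.append("unknown conference")
    if not isinstance(rec["year"], int):                      # int32
        errors.append("year must be an int")
    if not (12 <= len(rec["title"]) <= 188):                  # stringlengths
        errors.append("title length out of range")
    if not (1 <= len(rec["abstract"]) <= 4650):               # stringlengths
        errors.append("abstract length out of range")
    if not (1 <= len(rec["topics"]) <= 20):                   # listlengths
        errors.append("topics count out of range")
    return errors

# First record of the listing below, with its truncated abstract.
record = {
    "conference": "ICML",
    "year": 2024,
    "paper_id": 33477,
    "title": "Image Fusion via Vision-Language Model",
    "abstract": "Image fusion integrates essential information from "
                "multiple images into a single composite...",
    "topics": ["Computer Vision", "Image Processing",
               "Vision-Language Models", "Multimodal Learning",
               "Data Fusion"],
}

print(validate_record(record))  # an empty list means the record passes
```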
ICML
2024
33477
Image Fusion via Vision-Language Model
Image fusion integrates essential information from multiple images into a single composite, enhancing structures, textures, and refining imperfections. Existing methods predominantly focus on pixel-level and semantic visual features for recognition, but often overlook the deeper text-level semantic information beyond v...
[ "Computer Vision", "Image Processing", "Vision-Language Models", "Multimodal Learning", "Data Fusion" ]
https://icml.cc/media/Po…202024/33477.png
ICLR
2022
7096
On the benefits of maximum likelihood estimation for Regression and Forecasting
We advocate for a practical Maximum Likelihood Estimation (MLE) approach towards designing loss functions for regression and forecasting, as an alternative to the typical approach of direct empirical risk minimization on a specific target metric. The MLE approach is better suited to capture inductive biases such as pri...
[ "Statistics", "Regression Analysis", "Time Series Forecasting", "Estimation Theory" ]
https://iclr.cc/media/Po…1faf8a492be0.png
ICLR
2022
6172
Fast AdvProp
Adversarial Propagation (AdvProp) is an effective way to improve recognition models, leveraging adversarial examples. Nonetheless, AdvProp suffers from the extremely slow training speed, mainly because: a) extra forward and backward passes are required for generating adversarial examples; b) both original samples and t...
[ "Computer Vision", "Adversarial Machine Learning", "Deep Learning", "Image Recognition", "Model Optimization" ]
https://iclr.cc/media/Po…e77ba9ef5f60.png
ICLR
2024
18504
ImageNet-OOD: Deciphering Modern Out-of-Distribution Detection Algorithms
The task of out-of-distribution (OOD) detection is notoriously ill-defined. Earlier works focused on new-class detection, aiming to identify label-altering data distribution shifts, also known as "semantic shift." However, recent works argue for a focus on failure detection, expanding the OOD evaluation framework to ac...
[ "Computer Vision", "Out-of-Distribution Detection", "Data Distribution Shifts", "Image Classification" ]
https://iclr.cc/media/Po…202024/18504.png
ICML
2024
34950
No Wrong Turns: The Simple Geometry Of Neural Networks Optimization Paths
Understanding the optimization dynamics of neural networks is necessary for closing the gap between theory and practice. Stochastic first-order optimization algorithms are known to efficiently locate favorable minima in deep neural networks. This efficiency, however, contrasts with the non-convex and seemingly complex ...
[ "Neural Network Optimization", "Machine Learning Theory", "Deep Learning", "Stochastic Optimization", "Computational Geometry in Machine Learning" ]
https://icml.cc/media/Po…202024/34950.png
ICLR
2022
7076
Pareto Set Learning for Neural Multi-Objective Combinatorial Optimization
Multiobjective combinatorial optimization (MOCO) problems can be found in many real-world applications. However, exactly solving these problems would be very challenging, particularly when they are NP-hard. Many handcrafted heuristic methods have been proposed to tackle different MOCO problems over the past decades. In...
[ "Multi-Objective Optimization", "Combinatorial Optimization", "Neural Networks", "Reinforcement Learning", "Evolutionary Algorithms" ]
https://iclr.cc/media/Po…016d5cc52d5e.png
ICLR
2024
18263
Topological data analysis on noisy quantum computers
Topological data analysis (TDA) is a powerful technique for extracting complex and valuable shape-related summaries of high-dimensional data. However, the computational demands of classical algorithms for computing TDA are exorbitant, and quickly become impractical for high-order characteristics. Quantum computers offe...
[ "Quantum Computing", "Topological Data Analysis", "Quantum Machine Learning", "Noisy Intermediate-Scale Quantum Technology" ]
https://iclr.cc/media/Po…202024/18263.png
NeurIPS
2023
72527
Phase diagram of early training dynamics in deep neural networks: effect of the learning rate, depth, and width
We systematically analyze optimization dynamics in deep neural networks (DNNs) trained with stochastic gradient descent (SGD) and study the effect of learning rate $\eta$, depth $d$, and width $w$ of the neural network. By analyzing the maximum eigenvalue $\lambda^H_t$ of the Hessian of the loss, which is a measure of ...
[ "Deep Learning", "Neural Network Optimization", "Machine Learning Theory", "Training Dynamics", "Computational Learning Theory" ]
https://neurips.cc/media…202023/72527.png
NeurIPS
2022
55140
Asymptotic Properties for Bayesian Neural Network in Besov Space
Neural networks have shown great predictive power when applied to unstructured data such as images and natural languages. The Bayesian neural network captures the uncertainty of prediction by computing the posterior distribution of the model parameters. In this paper, we show that the Bayesian neural network with spike...
[ "Bayesian Statistics", "Neural Networks", "Machine Learning Theory", "Functional Analysis", "Statistical Learning Theory" ]
https://neurips.cc/media…202022/55140.png
ICML
2023
25180
Lowering the Pre-training Tax for Gradient-based Subset Training: A Lightweight Distributed Pre-Training Toolkit
Training data and model sizes are increasing exponentially. One way to reduce training time and resources is to train with a carefully selected subset of the full dataset. Prior work uses the gradient signals obtained during a warm-up or "pre-training" phase over the full dataset, for determining the core subset; if t...
[ "Distributed Computing", "Model Training Optimization", "Data Subset Selection", "Neural Networks", "Computer Vision" ]
https://icml.cc/media/Po…202023/25180.png
ICML
2024
33434
Learning to Intervene on Concept Bottlenecks
While deep learning models often lack interpretability, concept bottleneck models (CBMs) provide inherent explanations via their concept representations. Moreover, they allow users to perform interventional interactions on these concepts by updating the concept values and thus correcting the predictive output of the mo...
[ "Interpretability in AI", "Human-Computer Interaction", "Model Interventions", "Concept Learning", "Deep Learning", "Model Explainability" ]
https://icml.cc/media/Po…202024/33434.png
ICML
2024
37260
Using gradients to check sensitivity of MCMC-based analyses to removing data
If the conclusion of a data analysis is sensitive to dropping very few data points, that conclusion might hinge on the particular data at hand rather than representing a more broadly applicable truth. To check for this sensitivity, one idea is to consider every small data subset, drop it, and re-run our analysis. But t...
[ "Bayesian Statistics", "Markov Chain Monte Carlo", "Sensitivity Analysis", "Statistical Inference", "Computational Statistics" ]
https://icml.cc/media/Po…202024/37260.png
ICML
2023
24072
Multi-agent Online Scheduling: MMS Allocations for Indivisible Items
We consider the problem of fairly allocating a sequence of indivisible items that arrive online in an arbitrary order to a group of $n$ agents with additive normalized valuation functions. We consider the allocation of goods and chores separately and propose algorithms for approximating maximin share (MMS) allocations ...
[ "Computer Science", "Algorithm Design", "Multi-agent Systems", "Online Algorithms", "Fair Division", "Operations Research" ]
https://icml.cc/media/Po…202023/24072.png
ICML
2024
34594
Modeling Language Tokens as Functionals of Semantic Fields
Recent advances in natural language processing have relied heavily on using Transformer-based language models. However, Transformers often require large parameter sizes and model depth. Existing Transformer-free approaches using state-space models demonstrate superiority over Transformers, yet they still lack a neuro-b...
[ "Natural Language Processing", "Computational Linguistics", "Neural Networks", "Language Modeling" ]
https://icml.cc/media/Po…202024/34594.png
ICLR
2024
17411
EmerNeRF: Emergent Spatial-Temporal Scene Decomposition via Self-Supervision
We present EmerNeRF, a simple yet powerful approach for learning spatial-temporal representations of dynamic driving scenes. Grounded in neural fields, EmerNeRF simultaneously captures scene geometry, appearance, motion, and semantics via self-bootstrapping. EmerNeRF hinges upon two core components: First, it stratifie...
[ "Computer Vision", "Neural Rendering", "3D Scene Reconstruction", "Autonomous Driving", "Dynamic Scene Understanding" ]
https://iclr.cc/media/Po…202024/17411.png
NeurIPS
2023
73038
Accelerating Monte Carlo Tree Search with Probability Tree State Abstraction
Monte Carlo Tree Search (MCTS) algorithms such as AlphaGo and MuZero have achieved superhuman performance in many challenging tasks. However, the computational complexity of MCTS-based algorithms is influenced by the size of the search space. To address this issue, we propose a novel probability tree state abstraction ...
[ "Reinforcement Learning", "Search Algorithms", "Computational Efficiency" ]
https://neurips.cc/media…202023/73038.png
ICML
2024
33203
Learning Modality Knowledge Alignment for Cross-Modality Transfer
Cross-modality transfer aims to leverage large pretrained models to complete tasks that may not belong to the modality of pretraining data. Existing works achieve certain success in extending classical finetuning to cross-modal scenarios, yet we still lack understanding about the influence of modality gap on the transf...
[ "Cross-Modality Transfer", "Meta-Learning", "Transfer Learning", "Multimodal Learning" ]
https://icml.cc/media/Po…202024/33203.png
ICLR
2024
17844
Think-on-Graph: Deep and Responsible Reasoning of Large Language Model on Knowledge Graph
Although large language models (LLMs) have achieved significant success in various tasks, they often struggle with hallucination problems, especially in scenarios requiring deep and responsible reasoning. These issues could be partially addressed by introducing external knowledge graphs (KG) in LLM reasoning. In this p...
[ "Natural Language Processing", "Knowledge Graphs", "Reasoning and Inference", "Computational Linguistics" ]
https://iclr.cc/media/Po…202024/17844.png
ICML
2022
16451
A query-optimal algorithm for finding counterfactuals
We design an algorithm for finding counterfactuals with strong theoretical guarantees on its performance. For any monotone model $f : X^d \to \{0,1\}$ and instance $x^\star$, our algorithm makes $S(f)^{O(\Delta_f(x^\star))} \cdot \log d$ queries to $f$ and returns an \emph{optimal} counterfactual for $x^\star$: a near...
[ "Algorithms", "Theoretical Computer Science", "Computational Complexity" ]
https://icml.cc/media/Po…e1f085554c8b.png
ICLR
2022
6108
Federated Learning from Only Unlabeled Data with Class-conditional-sharing Clients
Supervised federated learning (FL) enables multiple clients to share the trained model without sharing their labeled data. However, potential clients might even be reluctant to label their own data, which could limit the applicability of FL in practice. In this paper, we show the possibility of unsupervised FL whose mo...
[ "Federated Learning", "Unsupervised Learning", "Data Privacy", "Distributed Systems" ]
https://iclr.cc/media/Po…dffd884d60b8.png
ICML
2022
16965
ActiveHedge: Hedge meets Active Learning
We consider the classical problem of multi-class prediction with expert advice, but with an active learning twist. In this new setting the learner will only query the labels of a small number of examples, but still aims to minimize regret to the best expert as usual; the learner is also allowed a very short "burn-in" p...
[ "Active Learning", "Online Learning", "Algorithm Design", "Theoretical Computer Science" ]
https://icml.cc/media/Po…351b90680d46.png
ICML
2023
24060
Text-To-4D Dynamic Scene Generation
We present MAV3D (Make-A-Video3D), a method for generating three-dimensional dynamic scenes from text descriptions. Our approach uses a 4D dynamic Neural Radiance Field (NeRF), which is optimized for scene appearance, density, and motion consistency by querying a Text-to-Video (T2V) diffusion-based model. The dynamic v...
[ "Computer Vision", "Natural Language Processing", "Graphics and Visualization" ]
https://icml.cc/media/Po…202023/24060.png
ICLR
2023
11591
Fast and Precise: Adjusting Planning Horizon with Adaptive Subgoal Search
Complex reasoning problems contain states that vary in the computational cost required to determine the right action plan. To take advantage of this property, we propose Adaptive Subgoal Search (AdaSubS), a search method that adaptively adjusts the planning horizon. To this end, AdaSubS generates diverse sets of subgoa...
[ "Automated Planning and Scheduling", "Search Algorithms", "Reinforcement Learning" ]
https://iclr.cc/media/Po…202023/11591.png
NeurIPS
2023
72977
General Munchausen Reinforcement Learning with Tsallis Kullback-Leibler Divergence
Many policy optimization approaches in reinforcement learning incorporate a Kullback-Leibler (KL) divergence to the previous policy, to prevent the policy from changing too quickly. This idea was initially proposed in a seminal paper on Conservative Policy Iteration, with approximations given by algorithms like TRPO a...
[ "Reinforcement Learning", "Policy Optimization", "Machine Learning Algorithms", "Mathematical Optimization" ]
https://neurips.cc/media…202023/72977.png
NeurIPS
2022
61162
Fast Sampling of Diffusion Models with Exponential Integrator
Our goal is to develop a fast sampling method for diffusion models (DMs) with a small number of steps while retaining high sample quality. To achieve this, we systematically analyze the sampling procedure in DMs and identify key factors that affect the sample quality, among which the method of discretization is most cr...
[ "Computational Methods", "Numerical Analysis", "Image Processing" ]
https://neurips.cc/media…202022/61162.png
ICLR
2023
11530
Few-shot Cross-domain Image Generation via Inference-time Latent-code Learning
In this work, our objective is to adapt a deep generative model trained on a large-scale source dataset to multiple target domains with scarce data. Specifically, we focus on adapting a pre-trained Generative Adversarial Network (GAN) to a target domain without re-training the generator. Our method draws the motivation...
[ "Computer Vision", "Generative Adversarial Networks", "Few-shot Learning", "Domain Adaptation" ]
https://iclr.cc/media/Po…202023/11530.png
ICML
2022
17955
Improving Transformers with Probabilistic Attention Keys
Multi-head attention is a driving force behind state-of-the-art transformers, which achieve remarkable performance across a variety of natural language processing (NLP) and computer vision tasks. It has been observed that for many applications, those attention heads learn redundant embeddings, and most of them can be re...
[ "Natural Language Processing", "Deep Learning", "Computer Vision", "Neural Networks", "Model Optimization" ]
https://icml.cc/media/Po…d720_vBnNqWw.png
ICLR
2023
12147
EVA3D: Compositional 3D Human Generation from 2D Image Collections
Inverse graphics aims to recover 3D models from 2D observations. Utilizing differentiable rendering, recent 3D-aware generative models have shown impressive results of rigid object generation using 2D images. However, it remains challenging to generate articulated objects, like human bodies, due to their complexity and...
[ "Computer Graphics", "3D Modeling", "Generative Adversarial Networks", "Computer Vision" ]
https://iclr.cc/media/Po…202023/12147.png
NeurIPS
2023
76937
Data-Driven Traffic Reconstruction for Identifying Stop-and-Go Waves
Identifying stop-and-go events (SAGs) in traffic flow presents an important avenue for advancing data-driven research for climate change mitigation and sustainability, owing to their substantial impact on carbon emissions, travel time, fuel consumption, and roadway safety. In fact, SAGs are estimated to account for 33-...
[ "Traffic Engineering", "Data Science", "Transportation Systems", "Environmental Sustainability", "Climate Change Mitigation", "Computational Modeling", "Urban Planning" ]
https://neurips.cc/media…202023/76937.png
NeurIPS
2023
77762
Variation of Gender Biases in Visual Recognition Models Before and After Finetuning
We introduce a framework to measure how biases change before and after fine-tuning a large scale visual recognition model for a downstream task. Deep learning models trained on increasing amounts of data are known to encode societal biases. Many computer vision systems today rely on models typically pretrained on large...
[ "Computer Vision", "Bias and Fairness in AI", "Deep Learning", "Model Fine-tuning", "Ethical AI" ]
https://neurips.cc/media…202023/77762.png
NeurIPS
2022
63765
Optimistic Meta-Gradients
We study the connection between gradient-based meta-learning and convex optimisation. We observe that gradient descent with momentum is a special case of meta-gradients, and building on recent results in optimisation, we prove convergence rates for meta-learning in the single task setting. While a meta-learned updat...
[ "Meta-Learning", "Optimization", "Convex Optimization" ]
https://neurips.cc/media…202022/63765.png
ICLR
2024
17896
Beyond task performance: evaluating and reducing the flaws of large multimodal models with in-context-learning
Following the success of Large Language Models (LLMs), Large Multimodal Models (LMMs), such as the Flamingo model and its subsequent competitors, have started to emerge as natural steps towards generalist agents. However, interacting with recent LMMs reveals major limitations that are hardly captured by the current eva...
[ "Multimodal Models", "Natural Language Processing", "Model Evaluation and Analysis", "In-Context Learning" ]
https://iclr.cc/media/Po…202024/17896.png
ICML
2022
16697
Synergy and Symmetry in Deep Learning: Interactions between the Data, Model, and Inference Algorithm
Although learning in high dimensions is commonly believed to suffer from the curse of dimensionality, modern machine learning methods often exhibit an astonishing power to tackle a wide range of challenging real-world learning problems without using abundant amounts of data. How exactly these methods break this curse r...
[ "Deep Learning", "Machine Learning Theory", "High-Dimensional Data Analysis", "Neural Networks", "Algorithmic Synergies", "Model-Data Interaction" ]
https://icml.cc/media/Po…769e84e7c5da.png
ICML
2024
33605
A Resilient and Accessible Distribution-Preserving Watermark for Large Language Models
Watermarking techniques offer a promising way to identify machine-generated content via embedding covert information into the contents generated from language models. A challenge in the domain lies in preserving the distribution of original generated content after watermarking. Our research extends and improves upon ex...
[ "Natural Language Processing", "Data Security", "Digital Watermarking", "Artificial Intelligence" ]
https://icml.cc/media/Po…202024/33605.png
NeurIPS
2022
59638
Collaborating with language models for embodied reasoning
Reasoning in a complex and ambiguous embodied environment is a key goal for Reinforcement Learning (RL) agents. While some sophisticated RL agents can successfully solve difficult tasks, they require a large amount of training data and often struggle to generalize to new unseen environments and new tasks. On the other ...
[ "Reinforcement Learning", "Natural Language Processing", "Embodied AI" ]
https://neurips.cc/media…202022/59638.png
ICML
2023
24040
Data Poisoning Attacks Against Multimodal Encoders
Recently, the newly emerged multimodal models, which leverage both visual and linguistic modalities to train powerful encoders, have gained increasing attention. However, learning from a large-scale unlabeled dataset also exposes the model to the risk of potential poisoning attacks, whereby the adversary aims to pertur...
[ "Machine Learning Security", "Multimodal Learning", "Adversarial Machine Learning", "Data Poisoning", "Computer Vision", "Natural Language Processing" ]
https://icml.cc/media/Po…202023/24040.png
NeurIPS
2023
71627
Posterior Sampling for Competitive RL: Function Approximation and Partial Observation
This paper investigates posterior sampling algorithms for competitive reinforcement learning (RL) in the context of general function approximations. Focusing on zero-sum Markov games (MGs) under two critical settings, namely self-play and adversarial learning, we first propose the self-play and adversarial generalized ...
[ "Reinforcement Learning", "Competitive Reinforcement Learning", "Markov Games", "Function Approximation", "Partial Observability", "Game Theory", "Machine Learning Algorithms" ]
https://neurips.cc/media…202023/71627.png
NeurIPS
2023
71983
Tester-Learners for Halfspaces: Universal Algorithms
We give the first tester-learner for halfspaces that succeeds universally over a wide class of structured distributions. Our universal tester-learner runs in fully polynomial time and has the following guarantee: the learner achieves error $O(\mathrm{opt}) + \epsilon$ on any labeled distribution that the tester accepts...
[ "Computational Learning Theory", "Algorithm Design", "Theoretical Computer Science", "Probability and Statistics" ]
https://neurips.cc/media…202023/71983.png
ICLR
2022
6240
Diffusion-Based Voice Conversion with Fast Maximum Likelihood Sampling Scheme
Voice conversion is a common speech synthesis task which can be solved in different ways depending on a particular real-world scenario. The most challenging one often referred to as one-shot many-to-many voice conversion consists in copying target voice from only one reference utterance in the most general case when bo...
[ "Speech Synthesis", "Voice Conversion", "Probabilistic Modeling", "Real-Time Systems", "Generative Models" ]
https://iclr.cc/media/Po…c62cb947c162.png
ICML
2024
34266
COPAL: Continual Pruning in Large Language Generative Models
Adapting pre-trained large language models to different domains in natural language processing requires two key considerations: high computational demands and the model's inability to adapt continually. To address both issues simultaneously, this paper presents COPAL (COntinual Pruning in Adaptive Language settings), an al...
[ "Natural Language Processing", "Model Pruning", "Continual Learning", "Large Language Models" ]
https://icml.cc/media/Po…202024/34266.png
NeurIPS
2023
72476
Offline Multi-Agent Reinforcement Learning with Implicit Global-to-Local Value Regularization
Offline reinforcement learning (RL) has received considerable attention in recent years due to its attractive capability of learning policies from offline datasets without environmental interactions. Despite some success in the single-agent setting, offline multi-agent RL (MARL) remains a challenge. The large joi...
[ "Multi-Agent Reinforcement Learning", "Offline Reinforcement Learning", "Machine Learning Algorithms", "Value Function Approximation" ]
https://neurips.cc/media…202023/72476.png
ICML
2023
24559
Efficient preconditioned stochastic gradient descent for estimation in latent variable models
Latent variable models are powerful tools for modeling complex phenomena involving in particular partially observed data, unobserved variables or underlying complex unknown structures. Inference is often difficult due to the latent structure of the model. To deal with parameter estimation in the presence of latent vari...
[ "Statistical Inference", "Latent Variable Models", "Stochastic Optimization", "Computational Statistics" ]
https://icml.cc/media/Po…202023/24559.png
NeurIPS
2023
69951
Three-Way Trade-Off in Multi-Objective Learning: Optimization, Generalization and Conflict-Avoidance
Multi-objective learning (MOL) often arises in emerging machine learning problems when multiple learning criteria or tasks need to be addressed. Recent works have developed various _dynamic weighting_ algorithms for MOL, including MGDA and its variants, whose central idea is to find an update direction that _avoids co...
[ "Multi-Objective Optimization", "Algorithmic Stability", "Generalization in Machine Learning" ]
https://neurips.cc/media…202023/69951.png
NeurIPS
2022
54450
Unsupervised Learning of Shape Programs with Repeatable Implicit Parts
Shape programs encode shape structures by representing object parts as subroutines and constructing the overall shape by composing these subroutines. This usually involves the reuse of subroutines for repeatable parts, enabling the modeling of correlations among shape elements such as geometric similarity. However, exi...
[ "Computer Vision", "3D Shape Modeling", "Unsupervised Learning", "Geometric Modeling" ]
https://neurips.cc/media…202022/54450.png
NeurIPS
2023
76216
Learned integration contour deformation for signal-to-noise improvement in Monte Carlo calculations
Calculations of the strong nuclear interactions, encoded in the theory of Quantum Chromodynamics (QCD), are extraordinarily computationally demanding. In particular, the Monte Carlo integration used in lattice field theory calculations in this context suffers from severe signal-to-noise challenges. Complexifying the int...
[ "Quantum Chromodynamics", "Lattice Field Theory", "Monte Carlo Methods", "Computational Physics", "Machine Learning in Physics", "Signal Processing" ]
https://neurips.cc/media…202023/76216.png
NeurIPS
2022
53297
Grounded Video Situation Recognition
Dense video understanding requires answering several questions such as who is doing what to whom, with what, how, why, and where. Recently, Video Situation Recognition (VidSitu) has been framed as a task for structured prediction of multiple events, their relationships, and actions and various verb-role pairs attached to des...
[ "Computer Vision", "Video Understanding", "Natural Language Processing", "Multimodal Learning", "Spatio-Temporal Reasoning" ]
https://neurips.cc/media…202022/53297.png
NeurIPS
2022
55682
Avalon: A Benchmark for RL Generalization Using Procedurally Generated Worlds
Despite impressive successes, deep reinforcement learning (RL) systems still fall short of human performance on generalization to new tasks and environments that differ from their training. As a benchmark tailored for studying RL generalization, we introduce Avalon, a set of tasks in which embodied agents in highly div...
[ "Reinforcement Learning", "Machine Learning Benchmarks", "Procedural Content Generation", "Artificial Intelligence Generalization" ]
https://neurips.cc/media…202022/55682.png
NeurIPS
2022
54514
High-Order Pooling for Graph Neural Networks with Tensor Decomposition
Graph Neural Networks (GNNs) are attracting growing attention due to their effectiveness and flexibility in modeling a variety of graph-structured data. Existing GNN architectures usually adopt simple pooling operations (e.g., sum, average, max) when aggregating messages from a local neighborhood for updating node repre...
[ "Graph Neural Networks", "Tensor Decomposition", "Graph Theory", "Neural Network Architectures" ]
https://neurips.cc/media…202022/54514.png
ICML
2022
16415
MAE-DET: Revisiting Maximum Entropy Principle in Zero-Shot NAS for Efficient Object Detection
In object detection, the detection backbone consumes more than half of the overall inference cost. Recent research attempts to reduce this cost by optimizing the backbone architecture with the help of Neural Architecture Search (NAS). However, existing NAS methods for object detection require hundreds to thousands of ...
[ "Neural Architecture Search", "Zero-Shot Learning", "Object Detection", "Computer Vision" ]
https://icml.cc/media/Po…a901_ZIlgpI7.png
ICLR
2024
19602
Linear attention is (maybe) all you need (to understand Transformer optimization)
Transformer training is notoriously difficult, requiring a careful design of optimizers and use of various heuristics. We make progress towards understanding the subtleties of training Transformers by carefully studying a simple yet canonical linearized shallow Transformer model. Specifically, we train linear Transformer...
[ "Neural Networks", "Transformer Models", "Optimization Techniques" ]
https://iclr.cc/media/Po…202024/19602.png
ICML
2024
34081
Re-Dock: Towards Flexible and Realistic Molecular Docking with Diffusion Bridge
Accurate prediction of protein-ligand binding structures, a task known as molecular docking, is crucial for drug design but remains challenging. While deep learning has shown promise, existing methods often depend on holo-protein structures (docked, and not accessible in realistic tasks) or neglect pocket sidechain conf...
[ "Computational Biology", "Drug Discovery", "Molecular Modeling", "Structural Bioinformatics", "Machine Learning in Biology" ]
https://icml.cc/media/Po…202024/34081.png
ICML
2022
17991
Bayesian Model Selection, the Marginal Likelihood, and Generalization
How do we compare between hypotheses that are entirely consistent with observations? The marginal likelihood (aka Bayesian evidence), which represents the probability of generating our observations from a prior, provides a distinctive approach to this foundational question, automatically encoding Occam's razor. Althoug...
[ "Bayesian Statistics", "Model Selection", "Statistical Learning Theory", "Neural Architecture Search", "Hyperparameter Optimization" ]
https://icml.cc/media/Po…8543b60a419a.png
ICML
2022
16893
Set Norm and Equivariant Skip Connections: Putting the Deep in Deep Sets
Permutation invariant neural networks are a promising tool for predictive modeling of set data. We show, however, that existing architectures struggle to perform well when they are deep. In this work, we mathematically and empirically analyze normalization layers and residual connections in the context of deep permutat...
[ "Neural Networks", "Deep Learning", "Set Data Analysis", "Permutation Invariance", "Model Architecture", "Normalization Techniques", "Residual Connections" ]
https://icml.cc/media/Po…9df37c0e8b0f.png
ICLR
2023
11057
Transfer Learning with Deep Tabular Models
Recent work on deep learning for tabular data demonstrates the strong performance of deep tabular models, often bridging the gap between gradient boosted decision trees and neural networks. Accuracy aside, a major advantage of neural models is that they are easily fine-tuned in new domains and learn reusable features. ...
[ "Deep Learning", "Transfer Learning", "Tabular Data", "Representation Learning", "Medical Data Analysis", "Neural Networks" ]
https://iclr.cc/media/Po…202023/11057.png
ICML
2023
24845
From Noisy Fixed-Point Iterations to Private ADMM for Centralized and Federated Learning
We study differentially private (DP) machine learning algorithms as instances of noisy fixed-point iterations, in order to derive privacy and utility results from this well-studied framework. We show that this new perspective recovers popular private gradient-based methods like DP-SGD and provides a principled way to d...
[ "Differential Privacy", "Optimization Algorithms", "Federated Learning", "Centralized Learning", "Decentralized Learning" ]
https://icml.cc/media/Po…202023/24845.png
ICML
2024
33521
Double Variance Reduction: A Smoothing Trick for Composite Optimization Problems without First-Order Gradient
Variance reduction techniques are designed to decrease the sampling variance, thereby accelerating convergence rates of first-order (FO) and zeroth-order (ZO) optimization methods. However, in composite optimization problems, ZO methods encounter an additional variance called the coordinate-wise variance, which stems f...
[ "Optimization", "Zeroth-Order Optimization", "Variance Reduction Techniques", "Composite Optimization", "Computational Mathematics" ]
https://icml.cc/media/Po…202024/33521.png
ICLR
2,024
22,442
Prompt Optimization with Logged Bandit Data
We study how to use naturally available user feedback, such as clicks, to optimize a prompt policy for generating sentences with large language models (LLMs). Naive approaches, including regression-based and importance sampling-based ones, suffer either from biased log data or variance caused by the large action space ...
[ "Natural Language Processing", "Reinforcement Learning", "Bandit Algorithms", "Prompt Engineering" ]
https://iclr.cc/media/Po…202024/22442.png
NeurIPS
2,022
53,204
RISE: Robust Individualized Decision Learning with Sensitive Variables
This paper introduces RISE, a robust individualized decision learning framework with sensitive variables, where sensitive variables are collectible data and important to the intervention decision, but their inclusion in decision making is prohibited due to reasons such as delayed availability or fairness concerns. A na...
[ "Decision Making", "Fairness in AI", "Causal Inference", "Robust Optimization" ]
https://neurips.cc/media…202022/53204.png
ICML
2,024
33,252
Slicedit: Zero-Shot Video Editing With Text-to-Image Diffusion Models Using Spatio-Temporal Slices
Text-to-image (T2I) diffusion models achieve state-of-the-art results in image synthesis and editing. However, leveraging such pre-trained models for video editing is considered a major challenge. Many existing works attempt to enforce temporal consistency in the edited video through explicit correspondence mechanisms,...
[ "Computer Vision", "Video Editing", "Deep Learning", "Text-to-Image Models", "Diffusion Models" ]
https://icml.cc/media/Po…202024/33252.png
ICLR
2,024
18,994
Beyond Weisfeiler-Lehman: A Quantitative Framework for GNN Expressiveness
Designing expressive Graph Neural Networks (GNNs) is a fundamental topic in the graph learning community. So far, GNN expressiveness has been primarily assessed via the Weisfeiler-Lehman (WL) hierarchy. However, such an expressivity measure has notable limitations: it is inherently coarse, qualitative, and may not well...
[ "Graph Neural Networks", "Graph Theory", "Computational Graph Theory", "Neural Network Expressiveness", "Algorithmic Graph Theory" ]
https://iclr.cc/media/Po…202024/18994.png
NeurIPS
2,022
58,859
Indexing AI Risks with Incidents, Issues, and Variants
Two years after publicly launching the AI Incident Database (AIID) as a collection of harms or near harms produced by AI in the world, a backlog of "issues" that do not meet its incident ingestion criteria has accumulated in its review queue. Despite not passing the database's current criteria for incidents, these issu...
[ "Artificial Intelligence Safety", "Risk Management in AI", "AI Ethics and Governance", "Machine Learning Systems", "Incident Reporting and Analysis" ]
https://neurips.cc/media…202022/58859.png
NeurIPS
2,022
54,237
Formulating Robustness Against Unforeseen Attacks
Existing defenses against adversarial examples such as adversarial training typically assume that the adversary will conform to a specific or known threat model, such as $\ell_p$ perturbations within a fixed budget. In this paper, we focus on the scenario where there is a mismatch in the threat model assumed by the def...
[ "Adversarial Machine Learning", "Robustness in Machine Learning", "Security in Machine Learning", "Computer Vision" ]
https://neurips.cc/media…202022/54237.png
ICML
2,024
34,175
KV-Runahead: Scalable Causal LLM Inference by Parallel Key-Value Cache Generation
Large Language Model (LLM) inference has two phases, the prompt (or prefill) phase to output the first token and the extension (or decoding) phase to generate subsequent tokens. In this work, we propose an efficient parallelization scheme, KV-Runahead, to accelerate the prompt phase. The key observation is that the...
[ "Natural Language Processing", "Parallel Computing", "High-Performance Computing" ]
https://icml.cc/media/Po…202024/34175.png
ICML
2,023
25,228
SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models
Large language models (LLMs) show excellent performance but are compute- and memory-intensive. Quantization can reduce memory and accelerate inference. However, existing methods cannot maintain accuracy and hardware efficiency at the same time. We propose SmoothQuant, a training-free, accuracy-preserving, and general-p...
[ "Natural Language Processing", "Model Optimization", "Quantization Techniques", "Large Language Models" ]
https://icml.cc/media/Po…202023/25228.png
NeurIPS
2,022
63,775
GraViT-E: Gradient-based Vision Transformer Search with Entangled Weights
Differentiable one-shot neural architecture search methods have recently become popular since they can exploit weight-sharing to efficiently search in large architectural search spaces. These methods traditionally perform a continuous relaxation of the discrete search space to search for an optimal architecture. Howeve...
[ "Neural Architecture Search", "Vision Transformers", "Deep Learning", "Computer Vision" ]
https://neurips.cc/media…202022/63775.png
ICML
2,024
34,334
Learning Linear Block Error Correction Codes
Error correction codes are a crucial part of the physical communication layer, ensuring the reliable transfer of data over noisy channels. The design of optimal linear block codes capable of being efficiently decoded is of major concern, especially for short block lengths. While neural decoders have recently demonstrat...
[ "Information Theory", "Error Correction Codes", "Neural Networks", "Communications Engineering", "Signal Processing" ]
https://icml.cc/media/Po…202024/34334.png
NeurIPS
2,022
65,701
Evaluating Worst Case Adversarial Weather Perturbations Robustness
Several algorithms are proposed to improve the robustness of deep neural networks against adversarial perturbations beyond $\ell_p$ cases, i.e. weather perturbations. However, evaluations of existing robust training algorithms are over-optimistic. This is in part due to the lack of a standardized evaluation protocol ac...
[ "Adversarial Machine Learning", "Deep Learning", "Computer Vision", "Robustness Evaluation", "Weather Simulation" ]
https://neurips.cc/media…202022/65701.png
ICML
2,023
26,533
A Policy-Decoupled Method for High-Quality Data Augmentation in Offline Reinforcement Learning
Offline reinforcement learning (ORL) has gained attention as a means of training reinforcement learning models using pre-collected static data. To address the issue of limited data and improve downstream ORL performance, recent work has attempted to expand the dataset's coverage through data augmentation. However, most...
[ "Offline Reinforcement Learning", "Data Augmentation" ]
https://icml.cc/media/Po…202023/26533.png
NeurIPS
2,023
72,768
DDF-HO: Hand-Held Object Reconstruction via Conditional Directed Distance Field
Reconstructing hand-held objects from a single RGB image is an important and challenging problem. Existing works utilizing Signed Distance Fields (SDF) reveal limitations in comprehensively capturing the complex hand-object interactions, since SDF is only reliable within the proximity of the target, and hence, infeasi...
[ "Computer Vision", "3D Reconstruction", "Graphics and Visualization", "Human-Computer Interaction" ]
https://neurips.cc/media…202023/72768.png
ICLR
2,024
18,612
Prediction without Preclusion: Recourse Verification with Reachable Sets
Machine learning models are often used to decide who receives a loan, a job interview, or a public benefit. Models in such settings use features without considering their actionability. As a result, they can assign predictions that are \emph{fixed} -- meaning that individuals who are denied loans and interviews are, in ...
[ "Fairness in AI", "Algorithmic Recourse", "Model Verification", "Ethical AI", "Decision Support Systems" ]
https://iclr.cc/media/Po…202024/18612.png
NeurIPS
2,022
54,806
HF-NeuS: Improved Surface Reconstruction Using High-Frequency Details
Neural rendering can be used to reconstruct implicit representations of shapes without 3D supervision. However, current neural surface reconstruction methods have difficulty learning high-frequency geometry details, so the reconstructed shapes are often over-smoothed. We develop HF-NeuS, a novel method to improve the q...
[ "Computer Graphics", "Neural Rendering", "Surface Reconstruction", "3D Modeling", "Machine Learning for Graphics" ]
https://neurips.cc/media…202022/54806.png
ICLR
2,024
19,037
Identifying the Risks of LM Agents with an LM-Emulated Sandbox
Recent advances in Language Model (LM) agents and tool use, exemplified by applications like ChatGPT Plugins, enable a rich set of capabilities but also amplify potential risks—such as leaking private data or causing financial losses. Identifying these risks is labor-intensive, necessitating implementing the tools, set...
[ "Artificial Intelligence Safety", "Language Models", "Risk Assessment", "Software Testing and Evaluation", "Machine Learning Applications" ]
https://iclr.cc/media/Po…202024/19037.png
ICLR
2,024
23,105
Rethinking harmless refusals when fine-tuning foundation models
In this paper, we investigate the degree to which fine-tuning in Large Language Models (LLMs) effectively mitigates versus merely conceals undesirable behavior. Through the lens of semi-realistic role-playing exercises designed to elicit such behaviors, we explore the response dynamics of LLMs post fine-tuning interven...
[ "Natural Language Processing", "Ethics in AI", "Human-Computer Interaction" ]
https://iclr.cc/media/Po…202024/23105.png
ICML
2,024
33,209
Meta-Reinforcement Learning Robust to Distributional Shift Via Performing Lifelong In-Context Learning
A key challenge in Meta-Reinforcement Learning (meta-RL) is the task distribution shift, since the generalization ability of most current meta-RL methods is limited to tasks sampled from the training distribution. In this paper, we propose Posterior Sampling Bayesian Lifelong In-Context Reinforcement Learning (PSBL), w...
[ "Meta-Reinforcement Learning", "Distributional Shift", "Lifelong Learning", "In-Context Learning", "Bayesian Methods" ]
https://icml.cc/media/Po…202024/33209.png
NeurIPS
2,023
76,710
On the Out of Distribution Robustness of Foundation Models in Medical Image Segmentation
Constructing a robust model that can effectively generalize to test samples under distribution shifts remains a significant challenge in the field of medical imaging. The vision-language foundation model has recently emerged as a promising paradigm, demonstrating impressive learning capabilities across various tasks wh...
[ "Medical Imaging", "Image Segmentation", "Out-of-Distribution Robustness", "Vision-Language Models", "Bayesian Uncertainty Estimation" ]
https://neurips.cc/media…202023/76710.png
ICML
2,024
32,931
LoRA Training in the NTK Regime has No Spurious Local Minima
Low-rank adaptation (LoRA) has become the standard approach for parameter-efficient fine-tuning of large language models (LLM), but our theoretical understanding of LoRA has been limited. In this work, we theoretically analyze LoRA fine-tuning in the neural tangent kernel (NTK) regime with $N$ data points, showing: (i)...
[ "Machine Learning Theory", "Neural Networks", "Optimization", "Large Language Models", "Parameter-Efficient Fine-Tuning" ]
https://icml.cc/media/Po…202024/32931.png
NeurIPS
2,023
71,188
Tree Variational Autoencoders
We propose Tree Variational Autoencoder (TreeVAE), a new generative hierarchical clustering model that learns a flexible tree-based posterior distribution over latent variables. TreeVAE hierarchically divides samples according to their intrinsic characteristics, shedding light on hidden structures in the data. It adap...
[ "Generative Models", "Variational Inference", "Hierarchical Clustering", "Deep Learning" ]
https://neurips.cc/media…202023/71188.png
NeurIPS
2,022
64,401
Utilizing supervised models to infer consensus labels and their quality from data with multiple annotators
Real-world data for classification is often labeled by multiple annotators. For analyzing such data, we introduce CROWDLAB, a straightforward approach to estimate: (1) A consensus label for each example that aggregates the individual annotations (more accurately than aggregation via majority-vote or other algorithms us...
[ "Data Annotation", "Crowdsourcing", "Supervised Learning", "Data Quality Assessment" ]
https://neurips.cc/media…202022/64401.png
ICML
2,023
24,934
Perturbation Analysis of Neural Collapse
Training deep neural networks for classification often includes minimizing the training loss beyond the zero training error point. In this phase of training, a "neural collapse" behavior has been observed: the variability of features (outputs of the penultimate layer) of within-class samples decreases and the mean feat...
[ "Deep Learning", "Neural Networks", "Optimization in Machine Learning", "Theoretical Analysis of Machine Learning Models" ]
https://icml.cc/media/Po…202023/24934.png
ICML
2,024
33,191
Degeneration-free Policy Optimization: RL Fine-Tuning for Language Models without Degeneration
As the pre-training objectives (e.g., next token prediction) of language models (LMs) are inherently not aligned with task scores, optimizing LMs to achieve higher downstream task scores is essential. One of the promising approaches is to fine-tune LMs through reinforcement learning (RL). However, conventional RL metho...
[ "Reinforcement Learning", "Natural Language Processing", "Language Model Fine-Tuning", "Text Generation" ]
https://icml.cc/media/Po…202024/33191.png
ICLR
2,024
21,599
Identifying Complex Dynamics of Power Grid Frequency
The energy system is undergoing rapid changes to integrate a growing number of intermittent renewable generators and facilitate the broader transition toward sustainability. As millions of consumers and thousands of (volatile) generators are connected to the same synchronous grid, no straightforward bottom-up models des...
[ "Power Systems", "Renewable Energy Integration", "Nonlinear Dynamics", "System Identification", "Energy Systems Engineering" ]
https://iclr.cc/media/Po…202024/21599.png
NeurIPS
2,023
71,478
Strategic Behavior in Two-sided Matching Markets with Prediction-enhanced Preference-formation
Two-sided matching markets have long existed to pair agents in the absence of regulated exchanges. A common example is school choice, where a matching mechanism uses student and school preferences to assign students to schools. In such settings, forming preferences is both difficult and critical. Prior work has sugges...
[ "Game Theory", "Market Design", "Mechanism Design", "Behavioral Economics", "Computational Economics", "Education Economics", "Algorithmic Game Theory" ]
https://neurips.cc/media…202023/71478.png
ICLR
2,023
11,155
HyperDeepONet: learning operator with complex target function space using the limited resources via hypernetwork
Fast and accurate predictions for complex physical dynamics are a big challenge across various applications. Real-time prediction on resource-constrained hardware is even more crucial in real-world problems. The deep operator network (DeepONet) has recently been proposed as a framework for learning nonlinear mappin...
[ "Computational Physics", "Neural Networks", "Scientific Computing", "Applied Mathematics" ]
https://iclr.cc/media/Po…202023/11155.png
ICML
2,024
36,983
Bayesian Disease Progression Modeling That Captures And Accounts For Health Disparities
Disease progression models, in which a patient's latent severity is modeled as progressing over time and producing observed symptoms, have shown great potential to help with disease detection, prediction, and drug development. However, a significant limitation of existing models is that they do not typically accoun...
[ "Bayesian Statistics", "Disease Progression Modeling", "Health Disparities", "Healthcare Analytics", "Biostatistics", "Epidemiology" ]
https://icml.cc/media/Po…202024/36983.png
ICML
2,024
33,712
AD3: Implicit Action is the Key for World Models to Distinguish the Diverse Visual Distractors
Model-based methods have significantly contributed to distinguishing task-irrelevant distractors for visual control. However, prior research has primarily focused on heterogeneous distractors like noisy background videos, leaving homogeneous distractors that closely resemble controllable agents largely unexplored, whic...
[ "Computer Vision", "Reinforcement Learning", "Model-based Learning", "Visual Control Systems" ]
https://icml.cc/media/Po…202024/33712.png
NeurIPS
2,022
61,616
Attack-Agnostic Adversarial Detection
The growing number of adversarial attacks in recent years gives attackers an advantage over defenders, as defenders must train detectors after knowing the types of attacks, and many models need to be maintained to ensure good performance in detecting any upcoming attacks. We propose a way to end the tug-of-war between ...
[ "Adversarial Machine Learning", "Anomaly Detection", "Cybersecurity", "Machine Learning Security", "Computer Vision" ]
https://neurips.cc/media…202022/61616.png
ICLR
2,024
19,517
Don't Play Favorites: Minority Guidance for Diffusion Models
We explore the problem of generating minority samples using diffusion models. The minority samples are instances that lie on low-density regions of a data manifold. Generating a sufficient number of such minority instances is important, since they often contain some unique attributes of the data. However, the conventio...
[ "Generative Models", "Diffusion Models", "Data Imbalance", "Minority Sample Generation", "Computer Vision", "Medical Imaging" ]
https://iclr.cc/media/Po…202024/19517.png
ICML
2,024
33,853
Distilling Morphology-Conditioned Hypernetworks for Efficient Universal Morphology Control
Learning a universal policy across different robot morphologies can significantly improve learning efficiency and enable zero-shot generalization to unseen morphologies. However, learning a highly performant universal policy requires sophisticated architectures like transformers (TF) that have larger memory and computa...
[ "Robotics", "Reinforcement Learning", "Neural Networks" ]
https://icml.cc/media/Po…202024/33853.png
NeurIPS
2,022
54,447
A Fast Post-Training Pruning Framework for Transformers
Pruning is an effective way to reduce the huge inference cost of Transformer models. However, prior work on pruning Transformers requires retraining the models. This can add high training cost and high complexity to model deployment, making it difficult to use in many practical situations. To address this, we propose a...
[ "Natural Language Processing", "Model Compression", "Deep Learning", "Transformer Models", "Model Optimization" ]
https://neurips.cc/media…202022/54447.png
ICML
2,024
37,024
Scaling the Vocabulary of Non-autoregressive Models for Efficient Generative Retrieval
Generative Retrieval introduces a new approach to Information Retrieval by reframing it as a constrained generation task, leveraging recent advancements in Autoregressive (AR) language models. However, AR-based Generative Retrieval methods suffer from high inference latency and cost compared to traditional dense retrie...
[ "Information Retrieval", "Natural Language Processing", "Generative Models", "Non-autoregressive Models", "Computational Linguistics" ]
https://icml.cc/media/Po…202024/37024.png
ICML
2,023
23,908
Provable Benefit of Mixup for Finding Optimal Decision Boundaries
We investigate how pair-wise data augmentation techniques like Mixup affect the sample complexity of finding optimal decision boundaries in a binary linear classification problem. For a family of data distributions with a separability constant $\kappa$, we analyze how well the optimal classifier in terms of training lo...
[ "Data Augmentation", "Classification", "Computational Learning Theory", "Statistical Learning Theory" ]
https://icml.cc/media/Po…202023/23908.png
ICML
2,022
17,779
Balancing Discriminability and Transferability for Source-Free Domain Adaptation
Conventional domain adaptation (DA) techniques aim to improve domain transferability by learning domain-invariant representations, while concurrently preserving the task-discriminability knowledge gathered from the labeled source data. However, the requirement of simultaneous access to labeled source and unlabeled targ...
[ "Domain Adaptation", "Transfer Learning", "Computer Vision", "Privacy-Preserving Machine Learning" ]
https://icml.cc/media/Po…b8d237ec5d4a.png
ICML
2,024
34,121
Think Before You Act: Decision Transformers with Working Memory
Decision Transformer-based decision-making agents have shown the ability to generalize across multiple tasks. However, their performance relies on massive data and computation. We argue that this inefficiency stems from the forgetting phenomenon, in which a model memorizes its behaviors in parameters throughout trainin...
[ "Reinforcement Learning", "Neural Networks", "Cognitive Science" ]
https://icml.cc/media/Po…202024/34121.png
ICLR
2,023
10,900
Learning Harmonic Molecular Representations on Riemannian Manifold
Molecular representation learning plays a crucial role in AI-assisted drug discovery research. Encoding 3D molecular structures through Euclidean neural networks has become the prevailing method in the geometric deep learning community. However, the equivariance constraints and message passing in Euclidean space may li...
[ "Geometric Deep Learning", "Molecular Representation Learning", "Riemannian Manifolds", "Computational Chemistry", "Drug Discovery", "Machine Learning in Chemistry" ]
https://iclr.cc/media/Po…202023/10900.png
ICML
2,023
27,886
Emergent deception and skepticism via theory of mind
In complex situations involving communication, agents might attempt to mask their intentions, exploiting Shannon's theory of information as a theory of misinformation. Here, we introduce and analyze a simple multiagent reinforcement learning task where a buyer sends signals to a seller via its actions, and in which bo...
[ "Multiagent Systems", "Reinforcement Learning", "Theory of Mind", "Information Theory", "Communication and Signaling in AI", "Game Theory" ]
https://icml.cc/media/Po…202023/27886.png
NeurIPS
2,022
56,971
Shining light on data
Experimental sciences have come to depend heavily on our ability to organize, interpret and analyze high-dimensional datasets produced from observations of a large number of variables governed by natural processes. Natural laws, conservation principles, and dynamical structure introduce intricate inter-dependencies amo...
[ "Data Science", "Quantum Computing", "Computational Physics", "Applied Mathematics", "Data Analysis", "Information Theory" ]
https://neurips.cc/media…202022/56971.png
NeurIPS
2,022
56,451
Towards Improved Learning in Gaussian Processes: The Best of Two Worlds
Gaussian process training decomposes into inference of the (approximate) posterior and learning of the hyperparameters. For non-Gaussian (non-conjugate) likelihoods, two common choices for approximate inference are Expectation Propagation (EP) and Variational Inference (VI), which have complementary strengths and weakn...
[ "Gaussian Processes", "Bayesian Inference", "Approximate Inference Methods", "Hyperparameter Optimization" ]
https://neurips.cc/media…202022/56451.png
ICLR
2,024
21,527
Reconstructing the Breathless Ocean with Spatio-Temporal Graph Learning
The ocean is currently undergoing severe deoxygenation. Accurately reconstructing the breathless ocean is crucial for assessing and protecting marine ecosystems in response to climate change. Existing expert-dominated numerical simulations fail to catch up with the dynamic variation caused by global warming and human ac...
[ "Environmental Science", "Oceanography", "Climate Change", "Data Science", "Marine Ecosystems", "Spatio-Temporal Analysis" ]
https://iclr.cc/media/Po…202024/21527.png
ICML
2,024
34,091
DPZero: Private Fine-Tuning of Language Models without Backpropagation
The widespread practice of fine-tuning large language models (LLMs) on domain-specific data faces two major challenges in memory and privacy. First, as the size of LLMs continues to grow, the memory demands of gradient-based training methods via backpropagation become prohibitively high. Second, given the tendency of L...
[ "Natural Language Processing", "Privacy-Preserving Machine Learning", "Model Fine-Tuning", "Zeroth-Order Optimization" ]
https://icml.cc/media/Po…202024/34091.png
ICLR
2,024
17,595
Zero Bubble (Almost) Pipeline Parallelism
Pipeline parallelism is one of the key components for large-scale distributed training, yet its efficiency suffers from pipeline bubbles which were deemed inevitable. In this work, we introduce a scheduling strategy that, to our knowledge, is the first to successfully achieve zero pipeline bubbles under synchronous tra...
[ "Distributed Systems", "Parallel Computing", "Machine Learning Infrastructure", "High-Performance Computing" ]
https://iclr.cc/media/Po…202024/17595.png