PosterSum: A Multimodal Benchmark for Scientific Poster Summarization
Paper • 2502.17540 • Published
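Rows in this split follow the schema shown below (conference, year, paper_id, title, abstract, topics, image_url). As a minimal sketch of how such records might be filtered once loaded: the `filter_rows` helper is illustrative (not part of the dataset), and the two sample records are copied from the first rows of the table.

```python
# Illustrative sketch: filtering records that follow this split's schema.
# The two sample records are taken from the first rows of the table;
# `filter_rows` is a hypothetical helper, not a dataset API.
rows = [
    {"conference": "ICLR", "year": 2024, "paper_id": 19205,
     "title": "A Fast and Provable Algorithm for Sparse Phase Retrieval",
     "topics": ["Signal Processing", "Computational Mathematics",
                "Optimization", "Algorithms"]},
    {"conference": "NeurIPS", "year": 2023, "paper_id": 71333,
     "title": "Regularized Behavior Cloning for Blocking the Leakage of Past Action Information",
     "topics": ["Imitation Learning", "Reinforcement Learning",
                "Partially Observable Environments", "Behavior Cloning",
                "Regularization Techniques"]},
]

def filter_rows(rows, conference=None, year=None, topic=None):
    """Return rows matching all given criteria; None means 'any value'."""
    out = []
    for r in rows:
        if conference is not None and r["conference"] != conference:
            continue
        if year is not None and r["year"] != year:
            continue
        if topic is not None and topic not in r["topics"]:
            continue
        out.append(r)
    return out

iclr_2024 = filter_rows(rows, conference="ICLR", year=2024)
print([r["paper_id"] for r in iclr_2024])  # -> [19205]
```

The same pattern applies unchanged to the full split once all rows are loaded into memory.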
| conference | year | paper_id | title | abstract | topics | image_url |
|---|---|---|---|---|---|---|
| ICLR | 2024 | 19205 | A Fast and Provable Algorithm for Sparse Phase Retrieval | We study the sparse phase retrieval problem, which seeks to recover a sparse signal from a limited set of magnitude-only measurements. In contrast to prevalent sparse phase retrieval algorithms that primarily use first-order methods, we propose an innovative second-order algorithm that employs a Newton-type method with... | ["Signal Processing", "Computational Mathematics", "Optimization", "Algorithms"] | |
| ICLR | 2024 | 19234 | Understanding Augmentation-based Self-Supervised Representation Learning via RKHS Approximation and Regression | Data augmentation is critical to the empirical success of modern self-supervised representation learning, such as contrastive learning and masked language modeling. However, a theoretical understanding of the exact role of the augmentation remains limited. Recent work has built the connection between self-supervised lear... | ["Self-Supervised Learning", "Representation Learning", "Data Augmentation", "Theoretical Analysis in Machine Learning", "Reproducing Kernel Hilbert Space", "Statistical Learning Theory"] | |
| NeurIPS | 2023 | 71333 | Regularized Behavior Cloning for Blocking the Leakage of Past Action Information | For partially observable environments, imitation learning with observation histories (ILOH) assumes that control-relevant information is sufficiently captured in the observation histories for imitating the expert actions. In the offline setting where the agent is required to learn to imitate without interaction with the... | ["Imitation Learning", "Reinforcement Learning", "Partially Observable Environments", "Behavior Cloning", "Regularization Techniques"] | |
| NeurIPS | 2023 | 72466 | Koopman Kernel Regression | Many machine learning approaches for decision making, such as reinforcement learning, rely on simulators or predictive models to forecast the time-evolution of quantities of interest, e.g., the state of an agent or the reward of a policy. Forecasts of such complex phenomena are commonly described by highly nonlinear dy... | ["Reinforcement Learning", "Dynamical Systems", "Predictive Modeling", "Kernel Methods", "Statistical Learning", "Optimization", "Control Theory"] | |
| ICML | 2023 | 25261 | Learning GFlowNets From Partial Episodes For Improved Convergence And Stability | Generative flow networks (GFlowNets) are a family of algorithms for training a sequential sampler of discrete objects under an unnormalized target density and have been successfully used for various probabilistic modeling tasks. Existing training objectives for GFlowNets are either local to states or transitions, or pr... | ["Reinforcement Learning", "Probabilistic Modeling", "Algorithmic Development"] | |
| ICML | 2024 | 33124 | Viewing Transformers Through the Lens of Long Convolutions Layers | Despite their dominance in modern DL and, especially, NLP domains, transformer architectures exhibit sub-optimal performance on long-range tasks compared to recent layers that are specifically designed for this purpose. In this work, drawing inspiration from key attributes of long-range layers, such as state-space layer... | ["Deep Learning", "Natural Language Processing", "Transformer Models", "Long-Range Dependencies", "Neural Network Architectures"] | |
| ICLR | 2022 | 6644 | Towards Model Agnostic Federated Learning Using Knowledge Distillation | Is it possible to design a universal API for federated learning using which an ad-hoc group of data-holders (agents) collaborate with each other and perform federated learning? Such an API would necessarily need to be model-agnostic, i.e., make no assumption about the model architecture being used by the agents, and als... | ["Federated Learning", "Knowledge Distillation", "Model Agnostic Methods", "Data Heterogeneity", "Neural Networks"] | |
| ICML | 2023 | 24401 | Local Optimization Achieves Global Optimality in Multi-Agent Reinforcement Learning | Policy optimization methods with function approximation are widely used in multi-agent reinforcement learning. However, it remains elusive how to design such algorithms with statistical guarantees. Leveraging a multi-agent performance difference lemma that characterizes the landscape of multi-agent policy optimization,... | ["Multi-Agent Reinforcement Learning", "Policy Optimization", "Function Approximation", "Cooperative Markov Games", "Algorithm Design and Analysis"] | |
| NeurIPS | 2023 | 79608 | Grounding Code Generation with Input-Output Specifications | Large language models (LLMs) have demonstrated significant potential in code generation. However, the code generated by these models occasionally deviates from the user's intended outcome, resulting in executable but incorrect code. To mitigate this issue, we propose Gift4Code, a novel approach for the instruction fine... | ["Code Generation", "Natural Language Processing", "Software Engineering"] | |
| ICLR | 2024 | 18178 | REFACTOR: Learning to Extract Theorems from Proofs | Human mathematicians are often good at recognizing modular and reusable theorems that make complex mathematical results within reach. In this paper, we propose a novel method called theoREm-from-prooF extrACTOR (REFACTOR) for training neural networks to mimic this ability in formal mathematical theorem proving. We show... | ["Automated Theorem Proving", "Formal Methods", "Computational Mathematics"] | |
| ICLR | 2024 | 20866 | Exploring the Limits of Semantic Image Compression at Micro-bits per Pixel | Traditional methods, such as JPEG, perform image compression by operating on structural information, such as pixel values or frequency content. These methods are effective at bitrates of around one bit per pixel (bpp) and higher at standard image sizes. However, to compress further, text-based semantic compression directly... | ["Image Compression", "Semantic Compression", "Computer Vision", "Natural Language Processing"] | |
| NeurIPS | 2023 | 69905 | Automated Classification of Model Errors on ImageNet | While the ImageNet dataset has been driving computer vision research over the past decade, significant label noise and ambiguity have made top-1 accuracy an insufficient measure of further progress. To address this, new label-sets and evaluation protocols have been proposed for ImageNet showing that state-of-the-art mo... | ["Computer Vision", "Image Classification", "Model Evaluation", "Error Analysis"] | |
| ICLR | 2023 | 11321 | Understanding Embodied Reference with Touch-Line Transformer | We study embodied reference understanding, the task of locating referents using embodied gestural signals and language references. Human studies have revealed that, contrary to popular belief, objects referred to or pointed to do not lie on the elbow-wrist line, but rather on the so-called virtual touch line. Neverthel... | ["Computer Vision", "Natural Language Processing", "Human-Computer Interaction", "Robotics"] | |
| ICML | 2022 | 17303 | Path-Gradient Estimators for Continuous Normalizing Flows | Recent work has established a path-gradient estimator for simple variational Gaussian distributions and has argued that the path-gradient is particularly beneficial in the regime in which the variational distribution approaches the exact target distribution. In many applications, this regime can however not be reached ... | ["Variational Inference", "Normalizing Flows", "Computational Statistics"] | |
| ICLR | 2023 | 10802 | The Best of Both Worlds: Accurate Global and Personalized Models through Federated Learning with Data-Free Hyper-Knowledge Distillation | Heterogeneity of data distributed across clients limits the performance of global models trained through federated learning, especially in the settings with highly imbalanced class distributions of local datasets. In recent years, personalized federated learning (pFL) has emerged as a potential solution to the challeng... | ["Federated Learning", "Personalized Federated Learning", "Knowledge Distillation", "Data Heterogeneity", "Model Convergence", "Visual Data Analysis"] | |
| ICLR | 2023 | 11174 | ExpressivE: A Spatio-Functional Embedding For Knowledge Graph Completion | Knowledge graphs are inherently incomplete. Therefore, substantial research has been directed toward knowledge graph completion (KGC), i.e., predicting missing triples from the information represented in the knowledge graph (KG). KG embedding models (KGEs) have yielded promising results for KGC, yet any current KGE is i... | ["Knowledge Graphs", "Data Mining", "Graph Embedding"] | |
| ICML | 2024 | 33187 | Layer-Aware Analysis of Catastrophic Overfitting: Revealing the Pseudo-Robust Shortcut Dependency | Catastrophic overfitting (CO) presents a significant challenge in single-step adversarial training (AT), manifesting as highly distorted deep neural networks (DNNs) that are vulnerable to multi-step adversarial attacks. However, the underlying factors that lead to the distortion of decision boundaries remain unclear. I... | ["Deep Learning", "Adversarial Machine Learning", "Neural Networks", "Robustness in Machine Learning"] | |
| NeurIPS | 2023 | 72160 | On the Convergence and Sample Complexity Analysis of Deep Q-Networks with $\epsilon$-Greedy Exploration | This paper provides a theoretical understanding of deep Q-Network (DQN) with the $\varepsilon$-greedy exploration in deep reinforcement learning. Despite the tremendous empirical achievement of the DQN, its theoretical characterization remains underexplored. First, the exploration strategy is either impractical or ignore... | ["Deep Reinforcement Learning", "Machine Learning Theory", "Convergence Analysis", "Sample Complexity Analysis"] | |
| NeurIPS | 2023 | 74880 | DiffDock-Pocket: Diffusion for Pocket-Level Docking with Sidechain Flexibility | When a small molecule binds to a protein, the 3D structure of the protein and its function change. Understanding this process, called molecular docking, can be crucial in areas such as drug design. Recent learning-based attempts have shown promising results at this task, yet lack features that traditional approaches su... | ["Computational Biology", "Molecular Docking", "Drug Design", "Structural Bioinformatics", "Machine Learning in Biology"] | |
| ICLR | 2023 | 11225 | Consolidator: Mergable Adapter with Group Connections for Visual Adaptation | Recently, transformers have shown strong ability as visual feature extractors, surpassing traditional convolution-based models in various scenarios. However, the success of vision transformers largely owes to their capacity to accommodate numerous parameters. As a result, new challenges for adapting a well-trained tran... | ["Computer Vision", "Transfer Learning", "Vision Transformers", "Model Optimization", "Deep Learning"] | |
| ICML | 2023 | 24636 | Differentially Private Optimization on Large Model at Small Cost | Differentially private (DP) optimization is the standard paradigm to learn large neural networks that are accurate and privacy-preserving. The computational cost for DP deep learning, however, is notoriously heavy due to the per-sample gradient clipping. Existing DP implementations are 2$\sim$1000$\times$ more costly ... | ["Differential Privacy", "Optimization", "Deep Learning", "Neural Networks", "Privacy-Preserving Machine Learning", "Computational Efficiency", "Machine Learning Algorithms"] | |
| NeurIPS | 2023 | 70120 | Deep Equilibrium Based Neural Operators for Steady-State PDEs | Data-driven machine learning approaches are being increasingly used to solve partial differential equations (PDEs). They have shown particularly striking successes when training an operator, which takes as input a PDE in some family, and outputs its solution. However, the architectural design space, especially given st... | ["Neural Networks", "Computational Mathematics", "Partial Differential Equations", "Scientific Computing"] | |
| ICLR | 2024 | 17592 | Learning invariant representations of time-homogeneous stochastic dynamical systems | We consider the general class of time-homogeneous stochastic dynamical systems, both discrete and continuous, and study the problem of learning a representation of the state that faithfully captures its dynamics. This is instrumental to learning the transfer operator or the generator of the system, which in turn can be... | ["Dynamical Systems", "Stochastic Processes", "Representation Learning", "Neural Networks", "Statistical Learning Theory"] | |
| NeurIPS | 2023 | 72656 | DAC-DETR: Divide the Attention Layers and Conquer | This paper reveals a characteristic of DEtection Transformer (DETR) that negatively impacts its training efficacy, i.e., the cross-attention and self-attention layers in the DETR decoder have contrary impacts on the object queries (though both impacts are important). Specifically, we observe the cross-attention tends to ga... | ["Computer Vision", "Object Detection", "Deep Learning", "Transformer Models"] | |
| ICML | 2023 | 23710 | Active causal structure learning with advice | We introduce the problem of active causal structure learning with advice. In the typical well-studied setting, the learning algorithm is given the essential graph for the observational distribution and is asked to recover the underlying causal directed acyclic graph (DAG) $G^*$ while minimizing the number of interventi... | ["Causal Inference", "Graph Theory", "Algorithms with Predictions", "Computational Learning Theory"] | |
| NeurIPS | 2022 | 64147 | ERL-Re$^2$: Efficient Evolutionary Reinforcement Learning with Shared State Representation and Individual Policy Representation | Deep Reinforcement Learning (Deep RL) and Evolutionary Algorithm (EA) are two major paradigms of policy optimization with distinct learning principles, i.e., gradient-based vs. gradient-free. An appealing research direction is integrating Deep RL and EA to devise new methods by fusing their complementary advantages. H... | ["Reinforcement Learning", "Evolutionary Algorithms", "Optimization Techniques"] | |
| NeurIPS | 2023 | 72903 | From Pixels to UI Actions: Learning to Follow Instructions via Graphical User Interfaces | Much of the previous work towards digital agents for graphical user interfaces (GUIs) has relied on text-based representations (derived from HTML or other structured data sources), which are not always readily available. These input representations have often been coupled with custom, task-specific action spaces. This... | ["Human-Computer Interaction", "Computer Vision"] | |
| ICML | 2022 | 17457 | Scalable Deep Reinforcement Learning Algorithms for Mean Field Games | Mean Field Games (MFGs) have been introduced to efficiently approximate games with very large populations of strategic agents. Recently, the question of learning equilibria in MFGs has gained momentum, particularly using model-free reinforcement learning (RL) methods. One limiting factor to further scale up using RL is... | ["Deep Reinforcement Learning", "Mean Field Games", "Game Theory", "Machine Learning Algorithms", "Neural Networks"] | |
| NeurIPS | 2022 | 53762 | Trade-off between Payoff and Model Rewards in Shapley-Fair Collaborative Machine Learning | This paper investigates the problem of fairly trading off between payoff and model rewards in collaborative machine learning (ML) where parties aggregate their datasets together to obtain improved ML models over that of each party. Supposing parties can afford the optimal model trained on the aggregated dataset, we pro... | ["Collaborative Machine Learning", "Fairness in Machine Learning", "Game Theory in Machine Learning", "Resource Allocation in Machine Learning"] | |
| ICLR | 2024 | 18386 | From Zero to Turbulence: Generative Modeling for 3D Flow Simulation | Simulations of turbulent flows in 3D are one of the most expensive simulations in computational fluid dynamics (CFD). Many works have been written on surrogate models to replace numerical solvers for fluid flows with faster, learned, autoregressive models. However, the intricacies of turbulence in three dimensions nece... | ["Computational Fluid Dynamics", "Generative Modeling", "Turbulence Simulation", "Machine Learning in Fluid Dynamics", "Surrogate Modeling", "3D Flow Simulation"] | |
| NeurIPS | 2023 | 72558 | Keep Various Trajectories: Promoting Exploration of Ensemble Policies in Continuous Control | The combination of deep reinforcement learning (DRL) with ensemble methods has proven to be highly effective in addressing complex sequential decision-making problems. This success can be primarily attributed to the utilization of multiple models, which enhances both the robustness of the policy and the accuracy o... | ["Deep Reinforcement Learning", "Ensemble Methods", "Continuous Control", "Machine Learning Algorithms", "Exploration Strategies in Reinforcement Learning"] | |
| ICLR | 2024 | 19022 | REValueD: Regularised Ensemble Value-Decomposition for Factorisable Markov Decision Processes | Discrete-action reinforcement learning algorithms often falter in tasks with high-dimensional discrete action spaces due to the vast number of possible actions. A recent advancement leverages value-decomposition, a concept from multi-agent reinforcement learning, to tackle this challenge. This study delves deep into th... | ["Reinforcement Learning", "Multi-Agent Systems", "Machine Learning Algorithms", "Control Systems"] | |
| ICML | 2023 | 23569 | Optimizing the Collaboration Structure in Cross-Silo Federated Learning | In federated learning (FL), multiple clients collaborate to train machine learning models together while keeping their data decentralized. Despite utilizing more training data, FL suffers from the potential negative transfer problem: the global FL model may even perform worse than the models trained with local data onl... | ["Federated Learning", "Data Privacy", "Distributed Systems", "Collaborative Learning"] | |
| ICLR | 2024 | 17668 | Neural SDF Flow for 3D Reconstruction of Dynamic Scenes | In this paper, we tackle the problem of 3D reconstruction of dynamic scenes from multi-view videos. Previous dynamic scene reconstruction works either attempt to model the motion of 3D points in space, which constrains them to handle a single articulated object, or require depth maps as input. By contrast, we propose to... | ["Computer Vision", "3D Reconstruction", "Dynamic Scene Analysis", "Neural Networks", "Multi-View Geometry"] | |
| NeurIPS | 2022 | 57822 | Transformers generalize differently from information stored in context vs in weights | Transformer models can use two fundamentally different kinds of information: information stored in weights during training, and information provided "in-context" at inference time. In this work, we show that transformers exhibit different inductive biases in how they represent and generalize from the information in t... | ["Natural Language Processing", "Deep Learning", "Neural Networks"] | |
| NeurIPS | 2023 | 72650 | New Bounds for Hyperparameter Tuning of Regression Problems Across Instances | The task of tuning regularization coefficients in regularized regression models with provable guarantees across problem instances still poses a significant challenge in the literature. This paper investigates the sample complexity of tuning regularization parameters in linear and logistic regressions under $\ell_1$ and... | ["Hyperparameter Optimization", "Regression Analysis", "Statistical Learning Theory"] | |
| ICLR | 2023 | 11153 | How robust is unsupervised representation learning to distribution shift? | The robustness of machine learning algorithms to distribution shift is primarily discussed in the context of supervised learning (SL). As such, there is a lack of insight on the robustness of the representations learned from unsupervised methods, such as self-supervised learning (SSL) and auto-encoder based algorithms... | ["Unsupervised Learning", "Representation Learning", "Distribution Shift", "Machine Learning Robustness", "Self-Supervised Learning", "Auto-Encoders", "Domain Generalization"] | |
| NeurIPS | 2023 | 70685 | Generator Born from Classifier | In this paper, we make a bold attempt toward an ambitious task: given a pre-trained classifier, we aim to reconstruct an image generator, without relying on any data samples. From a black-box perspective, this challenge seems intractable, since it inevitably involves identifying the inverse function for a classifier, w... | ["Neural Networks", "Image Generation", "Deep Learning", "Generative Models"] | |
| ICLR | 2023 | 12160 | Generating Diverse Cooperative Agents by Learning Incompatible Policies | Training a robust cooperative agent requires diverse partner agents. However, obtaining those agents is difficult. Previous works aim to learn diverse behaviors by changing the state-action distribution of agents. But, without information about the task's goal, the diversified agents are not guided to find other import... | ["Multi-Agent Systems", "Reinforcement Learning", "Cooperative AI"] | |
| ICLR | 2023 | 10822 | Causal Confusion and Reward Misidentification in Preference-Based Reward Learning | Learning policies via preference-based reward learning is an increasingly popular method for customizing agent behavior, but has been shown anecdotally to be prone to spurious correlations and reward hacking behaviors. While much prior work focuses on causal confusion in reinforcement learning and behavioral cloning, w... | ["Reinforcement Learning", "Causal Inference", "Preference-Based Learning", "Reward Learning"] | |
| ICML | 2022 | 16427 | Universal Joint Approximation of Manifolds and Densities by Simple Injective Flows | We study approximation of probability measures supported on n-dimensional manifolds embedded in R^m by injective flows---neural networks composed of invertible flows and injective layers. We show that in general, injective flows between R^n and R^m universally approximate measures supported on images of extendable embe... | ["Neural Networks", "Manifold Learning", "Algebraic Topology", "Probability Theory", "Approximation Theory"] | |
| NeurIPS | 2023 | 76120 | Graph-Theoretical Approaches for AI-Driven Discovery in Quantum Optics | Emerging findings in the physical sciences frequently present new avenues for AI applications that can enhance its efficiency or broaden its scope, as we demonstrated in our study on quantum optics. We present a method that represents quantum optics experiments as abstract weighted graphs, converting problems that enco... | ["Quantum Optics", "Graph Theory", "Computational Physics", "Optimization Techniques"] | |
| NeurIPS | 2022 | 56960 | FO-PINNs: A First-Order formulation for Physics-Informed Neural Networks | We present FO-PINNs, physics-informed neural networks that are trained using the first-order formulation of the Partial Differential Equation (PDE) losses. We show that FO-PINNs offer significantly higher accuracy in solving parameterized systems compared to traditional PINNs, and reduce time-per-iteration by removing ... | ["Computational Physics", "Numerical Analysis", "Scientific Computing", "Neural Networks", "Partial Differential Equations"] | |
| NeurIPS | 2022 | 55747 | ComMU: Dataset for Combinatorial Music Generation | Commercial adoption of automatic music composition requires the capability of generating diverse and high-quality music suitable for the desired context (e.g., music for romantic movies, action games, restaurants, etc.). In this paper, we introduce combinatorial music generation, a new task to create varying background... | ["Music Information Retrieval", "Computational Creativity", "Symbolic Music Generation", "Artificial Intelligence in Music", "Music Data and Metadata", "Music Composition and Production"] | |
| ICML | 2023 | 24487 | GC-Flow: A Graph-Based Flow Network for Effective Clustering | Graph convolutional networks (GCNs) are *discriminative models* that directly model the class posterior $p(y|\mathbf{x})$ for semi-supervised classification of graph data. While being effective, as a representation learning approach, the node representations extracted from a GCN often miss useful information for effect... | ["Graph Neural Networks", "Clustering", "Generative Models", "Representation Learning"] | |
| ICML | 2024 | 37259 | Differentiable Local Intrinsic Dimension Estimation with Diffusion Models | High-dimensional data commonly lies on low-dimensional submanifolds, and estimating the local intrinsic dimension (LID) of a datum is a longstanding problem. LID can be understood as the number of local factors of variation: the more factors of variation a datum has, the more complex it tends to be. Estimating this qua... | ["High-Dimensional Data Analysis", "Intrinsic Dimension Estimation", "Diffusion Models", "Neural Networks", "Data Science"] | |
| NeurIPS | 2023 | 76960 | Physics-informed DeepONet for battery state prediction | Electrification has emerged as a pivotal trend in the energy transition to address climate change, leading to a substantial surge in the demand for batteries. Accurately predicting the internal states and performance of batteries assumes paramount significance, as it ensures the safe and stable operation of batteries a... | ["Battery Technology", "Computational Modeling", "Machine Learning in Energy Systems", "Physics-informed Machine Learning", "Energy Storage Systems"] | |
| ICML | 2023 | 24321 | Traversing Between Modes in Function Space for Fast Ensembling | Deep ensemble is a simple yet powerful way to improve the performance of deep neural networks. Under this motivation, recent works on mode connectivity have shown that parameters of ensembles are connected by low-loss subspaces, and one can efficiently collect ensemble parameters in those subspaces. While this provides... | ["Deep Learning", "Neural Networks", "Model Ensembling", "Mode Connectivity"] | |
| ICML | 2023 | 24799 | Regret-Minimizing Double Oracle for Extensive-Form Games | By incorporating regret minimization, double oracle methods have demonstrated rapid convergence to Nash Equilibrium (NE) in normal-form games and extensive-form games, through algorithms such as online double oracle (ODO) and extensive-form double oracle (XDO), respectively. In this study, we further examine the theore... | ["Game Theory", "Algorithmic Game Theory", "Computational Complexity"] | |
| NeurIPS | 2023 | 75166 | Exploring Practitioner Perspectives On Training Data Attribution Explanations | Explainable AI (XAI) aims to provide insight into opaque model reasoning to humans and as such is an interdisciplinary field by nature. In this paper, we interviewed 10 practitioners to understand the possible usability of training data attribution (TDA) explanations and to explore the design space of such an approach.... | ["Explainable AI", "Human-Computer Interaction", "Data Science", "Interdisciplinary Research"] | |
| NeurIPS | 2023 | 76331 | Inductive Link Prediction in Static and Temporal Graphs for Isolated Nodes | Link prediction is a vital task in graph machine learning, involving the anticipation of connections between entities within a network. In the realm of drug discovery, link prediction takes the form of forecasting interactions between drugs and target genes. Likewise, in recommender systems, link prediction entails sug... | ["Graph Machine Learning", "Link Prediction", "Temporal Graphs", "Inductive Learning", "Network Analysis", "Recommender Systems", "Drug Discovery", "Machine Learning for Graphs"] | |
| ICML | 2024 | 34951 | When Will Gradient Regularization Be Harmful? | Gradient regularization (GR), which aims to penalize the gradient norm atop the loss function, has shown promising results in training modern over-parameterized deep neural networks. However, can we trust this powerful technique? This paper reveals that GR can cause performance degeneration in adaptive optimization sce... | ["Deep Learning", "Neural Networks", "Optimization Techniques", "Regularization Methods"] | |
| NeurIPS | 2023 | 71432 | AdaptSSR: Pre-training User Model with Augmentation-Adaptive Self-Supervised Ranking | User modeling, which aims to capture users' characteristics or interests, heavily relies on task-specific labeled data and suffers from the data sparsity issue. Several recent studies tackled this problem by pre-training the user model on massive user behavior sequences with a contrastive learning task. Generally, thes... | ["User Modeling", "Self-Supervised Learning", "Data Augmentation", "Recommender Systems", "Contrastive Learning"] | |
| ICML | 2022 | 18413 | Scalable First-Order Bayesian Optimization via Structured Automatic Differentiation | Bayesian Optimization (BO) has shown great promise for the global optimization of functions that are expensive to evaluate, but despite many successes, standard approaches can struggle in high dimensions. To improve the performance of BO, prior work suggested incorporating gradient information into a Gaussian process s... | ["Bayesian Optimization", "Automatic Differentiation", "Gaussian Processes", "High-Dimensional Optimization"] | |
| NeurIPS | 2022 | 63612 | NigerianPidgin++: Towards End-to-End training of an Automatic Speech recognition system for Nigerian Pidgin Language | The use of automatic speech recognition (ASR) systems for spoken languages has become widespread recently. In contrast, the vast majority of African languages have limited linguistic resources to sustain the robustness of these systems. We present a study on an end-to-end speech recognition system for Nigerian-Pidgin-En... | ["Automatic Speech Recognition", "Computational Linguistics", "African Languages", "Machine Learning for Speech Processing"] | |
| ICML | 2023 | 26312 | Adversarial Data Augmentations for Out-of-Distribution Generalization | Out-of-distribution (OoD) generalization occurs when representation learning encounters a distribution shift. This frequently happens in practice when training and testing data come from different environments. Covariate shift is a type of distribution shift that occurs only in the input data, while keeping the concept... | ["Adversarial Learning", "Out-of-Distribution Generalization", "Data Augmentation", "Covariate Shift", "Graph Classification"] | |
| ICML | 2024 | 32924 | WARM: On the Benefits of Weight Averaged Reward Models | Aligning large language models (LLMs) with human preferences through reinforcement learning (RLHF) can lead to reward hacking, where LLMs exploit failures in the reward model (RM) to achieve seemingly high rewards without meeting the underlying objectives. We identify two primary challenges when designing RMs to mitiga... | ["Reinforcement Learning", "Natural Language Processing", "Model Alignment", "Reward Modeling"] | |
| ICML | 2023 | 23965 | Monotonic Location Attention for Length Generalization | We explore different ways to utilize position-based cross-attention in seq2seq networks to enable length generalization in algorithmic tasks. We show that a simple approach of interpolating the original and reversed encoded representations combined with relative attention allows near-perfect length generalization for b... | ["Natural Language Processing", "Sequence-to-Sequence Models", "Attention Mechanisms", "Algorithmic Tasks", "Length Generalization"] | |
| ICLR | 2024 | 18355 | Unveiling and Manipulating Prompt Influence in Large Language Models | Prompts play a crucial role in guiding the responses of Large Language Models (LLMs). However, the intricate role of individual tokens in prompts, known as input saliency, in shaping the responses remains largely underexplored. Existing saliency methods either misalign with LLM generation objectives or rely heavily on ... | ["Natural Language Processing", "Language Models", "Text Generation", "Prompt Engineering"] | |
| NeurIPS | 2022 | 64236 | Rewarding Episodic Visitation Discrepancy for Exploration in Reinforcement Learning | Exploration is critical for deep reinforcement learning in complex environments with high-dimensional observations and sparse rewards. To address this problem, recent approaches proposed to leverage intrinsic rewards to improve exploration, such as novelty-based exploration and prediction-based exploration. However, ma... | ["Reinforcement Learning", "Exploration Strategies", "Intrinsic Motivation"] | |
| ICML | 2022 | 17883 | Tackling Data Heterogeneity: A New Unified Framework for Decentralized SGD with Sample-induced Topology | We develop a general framework unifying several gradient-based stochastic optimization methods for empirical risk minimization problems both in centralized and distributed scenarios. The framework hinges on the introduction of an augmented graph consisting of nodes modeling the samples and edges modeling both the inter... | ["Optimization", "Distributed Computing", "Stochastic Gradient Descent", "Empirical Risk Minimization", "Variance Reduction Methods", "Convergence Analysis"] | |
NeurIPS | 2,023 | 72,539 | A Reduction-based Framework for Sequential Decision Making with Delayed Feedback | We study stochastic delayed feedback in general single-agent and multi-agent sequential decision making, which includes bandits, single-agent Markov decision processes (MDPs), and Markov games (MGs). We propose a novel reduction-based framework, which turns any multi-batched algorithm for sequential decision making wit... | [
"Reinforcement Learning",
"Sequential Decision Making",
"Multi-Agent Systems",
"Stochastic Processes"
] | |
NeurIPS | 2,022 | 58,018 | Fourier Neural Operator for Plasma Modelling | Predicting plasma evolution within a Tokamak is crucial to building a sustainable fusion reactor. Whether in the simulation space or within the experimental domain, the capability to forecast the spatio-temporal evolution of plasma field variables rapidly and accurately could improve active control methods on current t... | [
"Plasma Physics",
"Computational Physics",
"Machine Learning for Physics",
"Fusion Energy",
"Magnetohydrodynamics",
"Neural Networks"
] | |
NeurIPS | 2,023 | 72,815 | Provable Guarantees for Neural Networks via Gradient Feature Learning | Neural networks have achieved remarkable empirical performance, while the current theoretical analysis is not adequate for understanding their success, e.g., the Neural Tangent Kernel approach fails to capture their key feature learning ability, while recent analyses on feature learning are typically problem-specific. ... | [
"Machine Learning Theory",
"Neural Networks",
"Gradient Descent",
"Feature Learning",
"Theoretical Computer Science"
] | |
ICLR | 2,024 | 17,718 | Learning Stackable and Skippable LEGO Bricks for Efficient, Reconfigurable, and Variable-Resolution Diffusion Modeling | Diffusion models excel at generating photo-realistic images but come with significant computational costs in both training and sampling. While various techniques address these computational challenges, a less-explored issue is designing an efficient and adaptable network backbone for iterative refinement. Current optio... | [
"Computer Vision",
"Image Generation",
"Neural Networks",
"Deep Learning",
"Model Optimization"
] | |
ICLR | 2,024 | 19,298 | Towards Understanding Factual Knowledge of Large Language Models | Large language models (LLMs) have recently driven striking performance improvements across a range of natural language processing tasks. The factual knowledge acquired during pretraining and instruction tuning can be useful in various downstream tasks, such as question answering, and language generation. Unlike convent... | [
"Natural Language Processing",
"Knowledge Representation",
"Computational Linguistics"
] | |
ICLR | 2,024 | 19,011 | A Benchmark Study on Calibration | Deep neural networks are increasingly utilized in various machine learning tasks. However, as these models grow in complexity, they often face calibration issues, despite enhanced prediction accuracy. Many studies have endeavored to improve calibration performance through the use of specific loss functions, data prepro... | [
"Deep Learning",
"Neural Networks",
"Model Calibration",
"Neural Architecture Search ",
"Model Evaluation and Metrics"
] | |
NeurIPS | 2,022 | 53,977 | CroCo: Self-Supervised Pre-training for 3D Vision Tasks by Cross-View Completion | Masked Image Modeling (MIM) has recently been established as a potent pre-training paradigm. A pretext task is constructed by masking patches in an input image, and this masked content is then predicted by a neural network using visible patches as sole input. This pre-training leads to state-of-the-art performance when... | [
"3D Vision",
"Self-Supervised Learning",
"Representation Learning",
"Computer Vision",
"Depth Estimation",
"Optical Flow Estimation",
"Image Processing"
] | |
ICML | 2,023 | 23,469 | Towards Learning Geometric Eigen-Lengths Crucial for Fitting Tasks | Some extremely low-dimensional yet crucial geometric eigen-lengths often determine the success of some geometric tasks. For example, theheightof an object is important to measure to check if it can fit between the shelves of a cabinet, while thewidthof a couch is crucial when trying to move it through a doorway. Humans... | [
"Computer Vision",
"Geometry"
] | |
ICML | 2,023 | 23,533 | DSGD-CECA: Decentralized SGD with Communication-Optimal Exact Consensus Algorithm | Decentralized Stochastic Gradient Descent (SGD) is an emerging neural network training approach that enables multiple agents to train a model collaboratively and simultaneously. Rather than using a central parameter server to collect gradients from all the agents, each agent keeps a copy of the model parameters and com... | [
"Decentralized Machine Learning",
"Stochastic Gradient Descent",
"Neural Network Training",
"Distributed Computing",
"Communication Algorithms"
] | |
ICML | 2,023 | 23,699 | Locally Regularized Neural Differential Equations: Some Black Boxes were meant to remain closed! | Neural Differential Equations have become an important modeling framework due to their ability to adapt to new problems automatically. Training a neural differential equation is effectively a search over a space of plausible dynamical systems. Controlling the computational cost for these models is difficult since it re... | [
"Neural Networks",
"Differential Equations",
"Computational Modeling",
"Numerical Analysis"
] | |
ICLR | 2,024 | 19,139 | HypeBoy: Generative Self-Supervised Representation Learning on Hypergraphs | Hypergraphs are marked by complex topology, expressing higher-order interactions among multiple nodes with hyperedges, and better capturing the topology is essential for effective representation learning. Recent advances in generative self-supervised learning (SSL) suggest that hypergraph neural networks (HNNs) learned... | [
"Graph Neural Networks",
"Self-Supervised Learning",
"Representation Learning",
"Hypergraph Theory"
] | |
NeurIPS | 2,023 | 74,383 | Level Set Teleportation: the Good, the Bad, and the Ugly | We study level set teleportation, an optimization sub-routine which seeks to accelerate gradient methods by maximizing the gradient along the level-curve of parameters with the same objective value. Since the descent lemma implies that gradient descent decreases the objective proportional to the squared norm of the gra... | [
"Optimization",
"Gradient Methods",
"Convex Analysis",
"Numerical Methods",
"Computational Mathematics"
] | |
ICML | 2,023 | 23,573 | Auxiliary Learning as an Asymmetric Bargaining Game | Auxiliary learning is an effective method for enhancing the generalization capabilities of trained models, particularly when dealing with small datasets. However, this approach may present several difficulties: (i) optimizing multiple objectives can be more challenging, and (ii) how to balance the auxiliary tasks to be... | [
"Multi-Task Learning",
"Optimization",
"Game Theory"
] | |
ICML | 2,024 | 37,083 | Conformal Prediction for Time Series with Transformer | We present a conformal prediction method for time series using Transformer. Specifically, we use Transformer decoder as a conditional quantile estimator to predict the quantiles of prediction residuals, which are used to estimate prediction interval. We hypothesize that Transformer decoder benefits the estimation of pr... | [
"Time Series Analysis",
"Predictive Modeling",
"Deep Learning",
"Statistical Methods"
] | |
ICLR | 2,024 | 19,354 | Flag Aggregator: Scalable Distributed Training under Failures and Augmented Losses using Convex Optimization | Modern ML applications increasingly rely on complex deep learning models and large datasets. There has been an exponential growth in the amount of computation needed to train the largest models. Therefore, to scale computation and data, these models are inevitably trained in a distributed manner in clusters of nodes, a... | [
"Distributed Machine Learning",
"Convex Optimization",
"Fault Tolerance in Distributed Systems",
"Robustness in Machine Learning",
"Scalable Computing Systems"
] | |
NeurIPS | 2,023 | 70,196 | CQM: Curriculum Reinforcement Learning with a Quantized World Model | Recent curriculum Reinforcement Learning (RL) has shown notable progress in solving complex tasks by proposing sequences of surrogate tasks. However, the previous approaches often face challenges when they generate curriculum goals in a high-dimensional space. Thus, they usually rely on manually specified goal spaces. ... | [
"Reinforcement Learning",
"Curriculum Learning",
"Quantized Models",
"Goal-Oriented Learning"
] | |
ICLR | 2,022 | 6,539 | ProtoRes: Proto-Residual Network for Pose Authoring via Learned Inverse Kinematics | Our work focuses on the development of a learnable neural representation of human pose for advanced AI assisted animation tooling. Specifically, we tackle the problem of constructing a full static human pose based on sparse and variable user inputs (e.g. locations and/or orientations of a subset of body joints). To sol... | [
"Computer Graphics",
"Animation and Simulation",
"Human-Computer Interaction",
"Neural Networks",
"Inverse Kinematics"
] | |
ICML | 2,023 | 27,607 | MASIL: Towards Maximum Separable Class Representation for Few Shot Class Incremental Learning | Few Shot Class Incremental Learning (FSCIL) with few examples per class for each incremental session is the realistic setting of continual learning since obtaining large number of annotated samples is not feasible and cost effective. We present the framework MASIL as a step towards learning the maximal separable classi... | [
"Continual Learning",
"Few-Shot Learning",
"Incremental Learning",
"Computer Vision"
] | |
NeurIPS | 2,023 | 76,895 | A Scalable Network-Aware Multi-Agent Reinforcement Learning Framework for Distributed Converter-based Microgrid Voltage Control | Renewable energy plays a crucial role in mitigating climate change. With the rising use of distributed energy resources (DERs), microgrids (MGs) have emerged as a solution to accommodate high DER penetration. However, controlling MGs' voltage during islanded operation is challenging due to system's nonlinearity and sto... | [
"Renewable Energy",
"Microgrid Control",
"Multi-Agent Systems",
"Reinforcement Learning",
"Distributed Energy Resources",
"Voltage Control",
"Scalable Algorithms"
] | |
ICML | 2,024 | 33,134 | Align Your Steps: Optimizing Sampling Schedules in Diffusion Models | Diffusion models (DMs) have established themselves as the state-of-the-art generative modeling approach in the visual domain and beyond. A crucial drawback of DMs is their slow sampling speed, relying on many sequential function evaluations through large neural networks. Sampling from DMs can be seen as solving a diffe... | [
"Generative Models",
"Diffusion Models",
"Stochastic Processes",
"Optimization",
"Computer Vision"
] | |
NeurIPS | 2,023 | 70,386 | StateMask: Explaining Deep Reinforcement Learning through State Mask | Despite the promising performance of deep reinforcement learning (DRL) agents in many challenging scenarios, the black-box nature of these agents greatly limits their applications in critical domains. Prior research has proposed several explanation techniques to understand the deep learning-based policies in RL. Most e... | [
"Deep Reinforcement Learning",
"Explainable Artificial Intelligence ",
"Machine Learning Interpretability",
"Adversarial Machine Learning"
] | |
NeurIPS | 2,023 | 70,723 | History Filtering in Imperfect Information Games: Algorithms and Complexity | Historically applied exclusively to perfect information games, depth-limited search with value functions has been key to recent advances in AI for imperfect information games. Most prominent approaches with strong theoretical guarantees requiresubgame decomposition- a process in which a subgame is computed from public ... | [
"Game Theory",
"Computational Complexity",
"Algorithms",
"Decision Making"
] | |
ICLR | 2,023 | 10,850 | Large Language Models are Human-Level Prompt Engineers | By conditioning on natural language instructions, large language models (LLMs) have displayed impressive capabilities as general-purpose computers. However, task performance depends significantly on the quality of the prompt used to steer the model, and most effective prompts have been handcrafted by humans. Inspired b... | [
"Natural Language Processing",
"Prompt Engineering",
"Program Synthesis"
] | |
ICML | 2,023 | 27,279 | AutoML-GPT: Large Language Model for AutoML | With the emerging trend of GPT models, we establish a framework, AutoML-GPT, integrates with a comprehensive set of tools and libraries, granting access to a wide range of data preprocessing techniques, feature engineering methods, and model selection algorithms. Users can specify their requirements, constraints, and e... | [
"AutoML",
"Natural Language Processing",
"Hyperparameter Optimization",
"Model Selection",
"Data Preprocessing",
"Feature Engineering"
] | |
ICML | 2,023 | 24,155 | Convex Geometry of ReLU-layers, Injectivity on the Ball and Local Reconstruction | The paper uses a frame-theoretic setting to study the injectivity of a ReLU-layer on the closed ball of $\mathbb{R}^n$ and its non-negative part. In particular, the interplay between the radius of the ball and the bias vector is emphasized. Together with a perspective from convex geometry, this leads to a computational... | [
"Neural Networks",
"Convex Geometry",
"Computational Mathematics"
] | |
NeurIPS | 2,023 | 69,910 | One-Line-of-Code Data Mollification Improves Optimization of Likelihood-based Generative Models | Generative Models (GMs) have attracted considerable attention due to their tremendous success in various domains, such as computer vision where they are capable to generate impressive realistic-looking images. Likelihood-based GMs are attractive due to the possibility to generate new data by a single model evaluation. ... | [
"Generative Models",
"Likelihood-based Models",
"Data Optimization",
"Computer Vision",
"Variational Autoencoders",
"Normalizing Flows",
"Density Estimation"
] | |
NeurIPS | 2,023 | 72,950 | Structured Federated Learning through Clustered Additive Modeling | Heterogeneous federated learning without assuming any structure is challenging due to the conflicts among non-identical data distributions of clients. In practice, clients often comprise near-homogeneous clusters so training a server-side model per cluster mitigates the conflicts. However, FL with client clustering oft... | [
"Federated Learning",
"Distributed Systems",
"Data Science"
] | |
ICLR | 2,024 | 19,025 | Domain Randomization via Entropy Maximization | Varying dynamics parameters in simulation is a popular Domain Randomization (DR) approach for overcoming the reality gap in Reinforcement Learning (RL). Nevertheless, DR heavily hinges on the choice of the sampling distribution of the dynamics parameters, since high variability is crucial to regularize the agent's beha... | [
"Reinforcement Learning",
"Domain Randomization",
"Sim-to-Real Transfer",
"Robotics",
"Machine Learning Optimization"
] | |
ICML | 2,022 | 17,157 | Rethinking Fano’s Inequality in Ensemble Learning | We propose a fundamental theory on ensemble learning that evaluates a given ensemble system by a well-grounded set of metrics.Previous studies used a variant of Fano's inequality of information theory and derived a lower bound of the classification error rate on the basis of the accuracy and diversity of models.We revi... | [
"Ensemble Learning",
"Information Theory",
"Machine Learning Theory",
"Classification Error Analysis"
] | |
NeurIPS | 2,023 | 72,541 | Differentiable Sampling of Categorical Distributions Using the CatLog-Derivative Trick | Categorical random variables can faithfully represent the discrete and uncertain aspects of data as part of a discrete latent variable model. Learning in such models necessitates taking gradients with respect to the parameters of the categorical probability distributions, which is often intractable due to their combina... | [
"Probabilistic Models",
"Gradient Estimation",
"Categorical Distributions",
"Variational Inference"
] | |
ICLR | 2,023 | 11,629 | MultiViz: Towards Visualizing and Understanding Multimodal Models | The promise of multimodal models for real-world applications has inspired research in visualizing and understanding their internal mechanics with the end goal of empowering stakeholders to visualize model behavior, perform model debugging, and promote trust in machine learning models. However, modern multimodal models ... | [
"Machine Learning Interpretability",
"Multimodal Machine Learning",
"Neural Network Visualization",
"Model Debugging and Analysis"
] | |
NeurIPS | 2,022 | 59,633 | Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training | Reward and representation learning are two long-standing challenges for learning an expanding set of robot manipulation skills from sensory observations. Given the inherent cost and scarcity of in-domain, task-specific robot data, learning from large, diverse, offline human videos has emerged as a promising path toward... | [
"Computer Vision",
"Robotics",
"Reinforcement Learning",
"Representation Learning"
] | |
ICLR | 2,024 | 18,606 | Neuron Activation Coverage: Rethinking Out-of-distribution Detection and Generalization | The out-of-distribution (OOD) problem generally arises when neural networks encounter data that significantly deviates from the training data distribution, i.e., in-distribution (InD). In this paper, we study the OOD problem from a neuron activation view. We first formulate neuron activation states by considering both ... | [
"Neural Networks",
"Out-of-distribution Detection",
"Model Generalization",
"Model Robustness"
] | |
ICML | 2,023 | 23,755 | Extending Conformal Prediction to Hidden Markov Models with Exact Validity via de Finetti's Theorem for Markov Chains | Conformal prediction is a widely used method to quantify the uncertainty of a classifier under the assumption of exchangeability (e.g., IID data). We generalize conformal prediction to the Hidden Markov Model (HMM) framework where the assumption of exchangeability is not valid. The key idea of the proposed method is to... | [
"Statistical Learning",
"Uncertainty Quantification",
"Hidden Markov Models",
"Conformal Prediction",
"Probability Theory",
"Theoretical Computer Science"
] | |
ICML | 2,024 | 34,200 | NExT-GPT: Any-to-Any Multimodal LLM | While recently Multimodal Large Language Models (MM-LLMs) have made exciting strides, they mostly fall prey to the limitation of only input-side multimodal understanding, without the ability to produce content in multiple modalities. As we humans always perceive the world and communicate with people through various mod... | [
"Multimodal Machine Learning",
"Large Language Models",
"Natural Language Processing",
"Cross-Modal Content Generation"
] | |
ICLR | 2,022 | 6,140 | SUMNAS: Supernet with Unbiased Meta-Features for Neural Architecture Search | One-shot Neural Architecture Search (NAS) usually constructs an over-parameterized network, which we call a supernet, and typically adopts sharing parameters among the sub-models to improve computational efficiency. One-shot NAS often repeatedly samples sub-models from the supernet and trains them to optimize the share... | [
"Neural Architecture Search ",
"Meta-Learning",
"Deep Learning",
"Model Optimization"
] | |
ICML | 2,023 | 24,139 | Prototype-oriented unsupervised anomaly detection for multivariate time series | Unsupervised anomaly detection (UAD) of multivariate time series (MTS) aims to learn robust representations of normal multivariate temporal patterns. Existing UAD methods try to learn a fixed set of mappings for each MTS, entailing expensive computation and limited model adaptation. To address this pivotal issue, we pr... | [
"Unsupervised Learning",
"Anomaly Detection",
"Multivariate Time Series Analysis",
"Probabilistic Models",
"Meta-Learning",
"Time Series Analysis"
] | |
ICML | 2,023 | 25,202 | Near-Optimal $\Phi$-Regret Learning in Extensive-Form Games | In this paper, we establish efficient and uncoupled learning dynamics so that, when employed by all players in multiplayer perfect-recall imperfect-information extensive-form games, the trigger regret of each player grows as $O(\log T)$ after $T$ repetitions of play. This improves exponentially over the prior best know... | [
"Game Theory",
"Algorithmic Game Theory",
"Multi-agent Systems"
] | |
ICLR | 2,024 | 19,226 | Pre-training Sequence, Structure, and Surface Features for Comprehensive Protein Representation Learning | Proteins can be represented in various ways, including their sequences, 3D structures, and surfaces. While recent studies have successfully employed sequence- or structure-based representations to address multiple tasks in protein science, there has been significant oversight in incorporating protein surface informatio... | [
"Computational Biology",
"Bioinformatics",
"Machine Learning in Biology",
"Protein Structure Prediction",
"Structural Bioinformatics"
] |
The PosterSum dataset is a multimodal benchmark for summarizing scientific posters into research-paper abstracts. It consists of 16,305 research posters collected from major machine learning conferences (ICLR, ICML, and NeurIPS) spanning 2022–2024. Each poster is provided as an image along with its corresponding abstract, which serves as the ground-truth summary. The dataset is intended for research on multimodal understanding and summarization, particularly for evaluating vision-language models (VLMs) and multimodal large language models (MLLMs).
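Given rows like those shown in the preview above, the conference and year composition of the corpus can be tallied with a few lines of standard Python. The records below are illustrative stand-ins, carrying only the two fields needed for the tally:

```python
from collections import Counter

# Illustrative records; real PosterSum rows carry additional fields
# (title, abstract, topics, image_url).
records = [
    {"conference": "ICLR", "year": 2024},
    {"conference": "ICML", "year": 2023},
    {"conference": "NeurIPS", "year": 2022},
    {"conference": "ICLR", "year": 2022},
]

# Posters per conference and the set of years covered.
posters_per_conference = Counter(r["conference"] for r in records)
years_covered = sorted({r["year"] for r in records})
```

Run over the full dataset, the same two lines recover the per-venue breakdown of the 16,305 posters and confirm the 2022–2024 span.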
Each record in the dataset contains the following fields:
- `conference` (string): Name of the conference where the research poster was presented (e.g., ICLR, ICML, NeurIPS).
- `year` (int): The year of the conference.
- `paper_id` (int): Conference identifier for the research paper associated with the poster.
- `title` (string): The title of the research paper.
- `abstract` (string): The human-written abstract of the paper, serving as the ground-truth summary for the poster.
- `topics` (list of strings): Machine learning topics related to the research (e.g., Reinforcement Learning, Natural Language Processing, Graph Neural Networks).
- `image_url` (string): URL to the image file of the scientific poster.

Citation:

```bibtex
@misc{saxena2025postersummultimodalbenchmarkscientific,
      title={PosterSum: A Multimodal Benchmark for Scientific Poster Summarization},
      author={Rohit Saxena and Pasquale Minervini and Frank Keller},
      year={2025},
      eprint={2502.17540},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2502.17540},
}
```
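The record layout above can be expressed as a small schema check. The field names and types follow the card; the example values are placeholders invented for illustration, not actual dataset contents:

```python
# Expected types for a PosterSum record, per the field list above.
EXPECTED_TYPES = {
    "conference": str,
    "year": int,
    "paper_id": int,
    "title": str,
    "abstract": str,
    "topics": list,
    "image_url": str,
}

def is_valid_record(record):
    """Return True if the record has every documented field with the right type."""
    for field, expected in EXPECTED_TYPES.items():
        if not isinstance(record.get(field), expected):
            return False
    # topics is documented as a list of strings
    return all(isinstance(t, str) for t in record["topics"])

# An illustrative record (placeholder values, not dataset contents).
example = {
    "conference": "ICLR",
    "year": 2024,
    "paper_id": 12345,
    "title": "Example Paper Title",
    "abstract": "An example abstract standing in for the real one.",
    "topics": ["Reinforcement Learning", "Optimization"],
    "image_url": "https://example.com/posters/12345.png",
}
```

A check like this is useful before feeding records into a VLM pipeline, where a missing `image_url` or a non-string topic would otherwise surface as a confusing downstream error.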
No elements in this dataset have been identified as either opted-out or opted-in by their creator.