| paper_id (string, 10 chars) | title (string, 17-149 chars) | abstract (string, 468-2.59k chars) | pdf_url (string, 71 chars) | reviews (list, 2-7 items) |
|---|---|---|---|---|
zzOOqD6R1b | Stress-Testing Capability Elicitation With Password-Locked Models | To determine the safety of large language models (LLMs), AI developers must be able to assess their dangerous capabilities. But simple prompting strategies often fail to elicit an LLM’s full capabilities. One way to elicit capabilities more robustly is to fine-tune the LLM to complete the task. In this paper, we invest... | https://openreview.net/pdf/060fc5a68cf9e8cd99067fa71d86b9b2407c68af.pdf | [
{
"confidence": 4,
"rating": 8,
"review_id": "Hgk9jK64zF",
"review_text": "The paper studies whether fine-tuning can elicit the hidden capabilities of LLMs, especially motivated by the setting of dangerous capabilities evaluations. \n\nTo provide a specific experimental setup, the paper considers pa... |
zxSWIdyW3A | Cooperative Hardware-Prompt Learning for Snapshot Compressive Imaging | Existing reconstruction models in snapshot compressive imaging systems (SCI) are trained with a single well-calibrated hardware instance, making their perfor- mance vulnerable to hardware shifts and limited in adapting to multiple hardware configurations. To facilitate cross-hardware learning, previous efforts attempt ... | https://openreview.net/pdf/9784818faf1b61e993e8c55556f64ad6c612ecad.pdf | [
{
"confidence": 3,
"rating": 5,
"review_id": "OYqo48ZEnI",
"review_text": "The authors present a Federated Hardware-Prompt learning (FedHP) framework to address the fact that compressive snapshot spectral imaging devices may not be easily tuneable against changes in the coded aperture, and that in f... |
zw2K6LfFI9 | PERIA: Perceive, Reason, Imagine, Act via Holistic Language and Vision Planning for Manipulation | Long-horizon manipulation tasks with general instructions often implicitly encapsulate multiple sub-tasks, posing significant challenges in instruction following. While language planning is a common approach to decompose general instructions into stepwise sub-instructions, text-only guidance may lack expressiveness and... | https://openreview.net/pdf/2a39fcbdd8617cd0a7fbe9312a20b9b51ea8ab74.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "hd3aedGTvC",
"review_text": "The paper proposes a framework that integrates large multimodal language models (MLLMs) and diffusion models to enable holistic language planning and vision planning for long-horizon robotic manipulation tasks with complex in... |
zv9gYC3xgF | Toward Global Convergence of Gradient EM for Over-Parameterized Gaussian Mixture Models | We study the gradient Expectation-Maximization (EM) algorithm for Gaussian Mixture Models (GMM) in the over-parameterized setting, where a general GMM with $n>1$ components learns from data that are generated by a single ground truth Gaussian distribution. While results for the special case of 2-Gaussian mixtures are ... | https://openreview.net/pdf/01089f1b9d7a3757d7fe8abda681870c3db968be.pdf | [
{
"confidence": 3,
"rating": 6,
"review_id": "6LkxBEXYgp",
"review_text": "The paper studies the convergence of EM for learning mixtures of Gaussians. Specifically, they consider a simplified setting where the Gaussians are in $d$-dimensions and all have covariance $I_d$. They consider an overpara... |
zv4UISZzp5 | IDGen: Item Discrimination Induced Prompt Generation for LLM Evaluation | As Large Language Models (LLMs) become more capable of handling increasingly complex tasks, the evaluation set must keep pace with these advancements to ensure it remains sufficiently discriminative. Item Discrimination (ID) theory, which is widely used in educational assessment, measures the ability of individual test... | https://openreview.net/pdf/74ed0078ffe00fb63ba32cc447f4540054349fbb.pdf | [
{
"confidence": 3,
"rating": 7,
"review_id": "ecG0hpo8bm",
"review_text": "This paper proposes a method of generating prompts for evaluating large language models such that the prompts are dynamic and allow for showing meaningful performance gaps between different language models.The authors show th... |
zuwpeRkJNH | Procedure-Aware Surgical Video-language Pretraining with Hierarchical Knowledge Augmentation | Surgical video-language pretraining (VLP) faces unique challenges due to the knowledge domain gap and the scarcity of multi-modal data. This study aims to bridge the gap by addressing issues regarding textual information loss in surgical lecture videos and the spatial-temporal challenges of surgical VLP. To tackle thes... | https://openreview.net/pdf/b754552d7cad51cf70357809a56df08d88257ab9.pdf | [
{
"confidence": 5,
"rating": 8,
"review_id": "x9lmNImh2H",
"review_text": "The paper addresses challenges in surgical video-language pretraining (VLP) due to the knowledge domain gap and scarcity of multi-modal data. It proposes a hierarchical knowledge augmentation approach and the Procedure-Encode... |
zuwLGhgxtQ | A Separation in Heavy-Tailed Sampling: Gaussian vs. Stable Oracles for Proximal Samplers | We study the complexity of heavy-tailed sampling and present a separation result in terms of obtaining high-accuracy versus low-accuracy guarantees, i.e., samplers that require only $\mathcal{O}(\log(1/\varepsilon))$ versus $\Omega(\text{poly}(1/\varepsilon))$ iterations to output a sample which is $\varepsilon$-close t... | https://openreview.net/pdf/bd86dfe1f5fac662f55df1bccfbb1134cf9043ed.pdf | [
{
"confidence": 3,
"rating": 7,
"review_id": "e2ERikvqJN",
"review_text": "The paper investigates the complexity of sampling from heavy-tailed distributions and presents a distinction between obtaining high-accuracy and low-accuracy guarantees. It analyzes two types of proximal samplers: those based... |
zuWgB7GerW | How DNNs break the Curse of Dimensionality: Compositionality and Symmetry Learning | We show that deep neural networks (DNNs) can efficiently learn any composition of functions with bounded $F_{1}$-norm, which allows DNNs to break the curse of dimensionality in ways that shallow networks cannot. More specifically, we derive a generalization bound that combines a covering number argument for composition... | https://openreview.net/pdf/d47299e76cea5209510c750a7137c8f8ce0de3bd.pdf | [
{
"confidence": 3,
"rating": 5,
"review_id": "HoG0k5Pjq5",
"review_text": "This paper introduces Accordion Networks (AccNets), a novel neural network structure composed of multiple shallow networks. The authors propose a generalization bound for AccNets that leverages the F1-norms and Lipschitz cons... |
ztwl4ubnXV | OxonFair: A Flexible Toolkit for Algorithmic Fairness | We present OxonFair, a new open source toolkit for enforcing fairness in binary classification. Compared to existing toolkits: (i) We support NLP and Computer Vision classification as well as standard tabular problems. (ii) We support enforcing fairness on validation data, making us robust to a wide range of overfittin... | https://openreview.net/pdf/1198c251b0f5664b73f1ec30b356982f81f81fc7.pdf | [
{
"confidence": 3,
"rating": 7,
"review_id": "eHIhFf9cWw",
"review_text": "The paper introduces \"AnonFair,\" a toolkit designed to enforce algorithmic fairness across various domains, including NLP, computer vision, and traditional tabular data. It is compatible with popular machine learning framew... |
zsXbGJJ7Oo | G2D: From Global to Dense Radiography Representation Learning via Vision-Language Pre-training | Medical imaging tasks require an understanding of subtle and localized visual features due to the inherently detailed and area-specific nature of pathological patterns, which are crucial for clinical diagnosis. Although recent advances in medical vision-language pre-training (VLP) enable models to learn clinically rele... | https://openreview.net/pdf/266314e449f23eb30c332e9f0688da33556f643c.pdf | [
{
"confidence": 5,
"rating": 5,
"review_id": "wPn9WWqSQg",
"review_text": "This paper proposes G2D, a novel vision-language pre-training (VLP) framework for medical imaging that aims to learn both global and dense visual representations from radiography images and their associated radiology reports.... |
zqLAMwVLkt | Generative Semi-supervised Graph Anomaly Detection | This work considers a practical semi-supervised graph anomaly detection (GAD) scenario, where part of the nodes in a graph are known to be normal, contrasting to the extensively explored unsupervised setting with a fully unlabeled graph. We reveal that having access to the normal nodes, even just a small percentage of ... | https://openreview.net/pdf/3c33b4f4c3c23708a8d12f3c6cbda3a20a9ca71e.pdf | [
{
"confidence": 4,
"rating": 5,
"review_id": "Z2QN5ZkVlh",
"review_text": "This paper works on node anomaly detection in the novel semi-supervised setting where few labeled normal nodes are given and proposes to generate new anomaly nodes to boost the training data. The anomaly generation algorithm ... |
zpw6NmhvKU | RashomonGB: Analyzing the Rashomon Effect and Mitigating Predictive Multiplicity in Gradient Boosting | The Rashomon effect is a mixed blessing in responsible machine learning. It enhances the prospects of finding models that perform well in accuracy while adhering to ethical standards, such as fairness or interpretability. Conversely, it poses a risk to the credibility of machine decisions through predictive multiplicit... | https://openreview.net/pdf/838fbeed0eab05add105305af9fefdf722fe747f.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "tcA0QhNUXj",
"review_text": "This paper proposes a method (RashomonGB ) to estimate the Rashomon sets/predictive multiplicity of gradient boosting models. It estimates multiple ($m$) models at each stage (effectively performing a local exploration) and t... |
znBiAp5ISn | TAS-GNN: Topology-Aware Spiking Graph Neural Networks for Graph Classification | The recent integration of spiking neurons into graph neural networks has been gaining much attention due to its superior energy efficiency. Especially because the irregular connections among graph nodes fit the nature of spiking neural networks, spiking graph neural networks are considered strong alternatives to ... | https://openreview.net/pdf/7ce7c8cc5374dbd6686b378ef8174a06b76e4183.pdf | [
{
"confidence": 4,
"rating": 6,
"review_id": "3IYgelN3ZX",
"review_text": "There's a large performance gap for graph tasks, especially graph classification tasks, between the spiking neural networks and artificial neural networks. The authors proposes the problems as the neuron's under starvation an... |
zn6s6VQYb0 | GraphCroc: Cross-Correlation Autoencoder for Graph Structural Reconstruction | Graph-structured data is integral to many applications, prompting the development of various graph representation methods. Graph autoencoders (GAEs), in particular, reconstruct graph structures from node embeddings. Current GAE models primarily utilize self-correlation to represent graph structures and focus on node-le... | https://openreview.net/pdf/57096dd4679d0699198e3899786b24845b43c7a8.pdf | [
{
"confidence": 4,
"rating": 5,
"review_id": "JVFBYcSJ2e",
"review_text": "This paper proposes a cross-correlation autoencoder for graph structural reconstruction. The authors first analyze the problems of existing self-correlation encoder. Then, a cross-correlation autoencoder is designed. Experime... |
zm1LcgRpHm | Segment, Shuffle, and Stitch: A Simple Layer for Improving Time-Series Representations | Existing approaches for learning representations of time-series keep the temporal arrangement of the time-steps intact with the presumption that the original order is the most optimal for learning. However, non-adjacent sections of real-world time-series may have strong dependencies. Accordingly, we raise the question:... | https://openreview.net/pdf/d5ba68bdf83d04632580f0b9e7ac80199a8c19c5.pdf | [
{
"confidence": 4,
"rating": 5,
"review_id": "WUjKloq9SX",
"review_text": "This paper introduces a new method for time-series representation learning that enhances the modeling of non-adjacent segment dependencies. Specifically, the proposed method segments, shuffles in a learned manner and stitches... |
zlgfRk2CQa | Rethinking Deep Thinking: Stable Learning of Algorithms using Lipschitz Constraints | Iterative algorithms solve problems by taking steps until a solution is reached. Models in the form of Deep Thinking (DT) networks have been demonstrated to learn iterative algorithms in a way that can scale to different sized problems at inference time using recurrent computation and convolutions. However, they are of... | https://openreview.net/pdf/0735617b982a5aca1dad5a07d887a2347d77d249.pdf | [
{
"confidence": 3,
"rating": 6,
"review_id": "MnvTMTLcWc",
"review_text": "To solve the stability of Deep Thinking models, this paper proposes to constrain activation functions to be Lipshitz-1 functions. The original DT and DT-R models have training stability problem, basically because of scale exp... |
zkhyrxlwqH | Unsupervised Homography Estimation on Multimodal Image Pair via Alternating Optimization | Estimating the homography between two images is crucial for mid- or high-level vision tasks, such as image stitching and fusion. However, using supervised learning methods is often challenging or costly due to the difficulty of collecting ground-truth data. In response, unsupervised learning approaches have emerged. Mo... | https://openreview.net/pdf/dbd7c26b2dae2f1c86abaa70a60fb6e9e683d675.pdf | [
{
"confidence": 5,
"rating": 3,
"review_id": "hXC6dl8P6M",
"review_text": "The paper proposes an unsupervised homography estimation method for multimodal image pairs using an alternating optimization approach. The claimed key innovation is the introduction of the Geometry Barlow Twins loss function ... |