Dataset Viewer
Auto-converted to Parquet
Column schema (Min/Max are lengths for string and sequence columns, and value ranges for int64 columns):

| Column | Type | Min | Max |
| --- | --- | --- | --- |
| title | string | 14 | 154 |
| paper_url | string | 42 | 42 |
| authors | sequence | 1 | 21 |
| type | string (3 classes) | | |
| abstract | string | 413 | 2.52k |
| keywords | string | 4 | 397 |
| TL;DR | string | 5 | 250 |
| submission_number | int64 | 2 | 14.3k |
| arxiv_id | string | 10 | 10 |
| embedding | sequence | 384 | 384 |
| github_url | string | 0 | 126 |
| github_stars | int64 | 0 | 55k |
| num_models | int64 | 0 | 82 |
| num_datasets | int64 | 0 | 7 |
| num_spaces | int64 | 0 | 100 |
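
The preview rows below can be reproduced programmatically. Here is a minimal sketch, assuming the Hugging Face `datasets` library and NumPy; the repo ID `user/openreview-papers` is hypothetical, since the preview does not show the actual dataset name. It loads the split and uses the 384-dimensional `embedding` column for a cosine-similarity lookup:

```python
# Minimal sketch: load the dataset and rank papers by embedding similarity.
# NOTE: "user/openreview-papers" is a hypothetical repo ID; substitute the real one.
import numpy as np
from datasets import load_dataset

ds = load_dataset("user/openreview-papers", split="train")
print(ds.features)  # should mirror the schema table above

# Stack the 384-dim embeddings into a matrix and L2-normalize the rows.
emb = np.asarray(ds["embedding"], dtype=np.float32)
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

# Cosine similarity of every paper against the first preview row.
scores = emb @ emb[0]
for i in np.argsort(-scores)[1:4]:  # top 3 neighbors, skipping the query itself
    print(f"{scores[i]:.3f}  {ds[int(i)]['title']}")
```

The rows below are the first records of the split, with long fields (abstract, embedding) truncated by the viewer.
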
title: DarkBench: Benchmarking Dark Patterns in Large Language Models
paper_url: https://openreview.net/forum?id=odjMSBSWRt
authors: [ "Esben Kran", "Hieu Minh Nguyen", "Akash Kundu", "Sami Jawhar", "Jinsuk Park", "Mateusz Maria Jurewicz" ]
type: Oral
abstract: We introduce DarkBench, a comprehensive benchmark for detecting dark design patterns—manipulative techniques that influence user behavior—in interactions with large language models (LLMs). Our benchmark comprises 660 prompts across six categories: brand bias, user retention, sycophancy, anthropomorphism, harmful genera...
keywords: Dark Patterns, AI Deception, Large Language Models
TL;DR: We introduce DarkBench, a benchmark revealing that many large language models employ manipulative dark design patterns. Organizations developing LLMs should actively recognize and mitigate the impact of dark design patterns to promote ethical AI.
submission_number: 14,257
arxiv_id: 2503.10728
embedding:
[ -0.02725336328148842, -0.03766116499900818, 0.009408959187567234, 0.008022491820156574, 0.02661564014852047, -0.041066356003284454, -0.015834596008062363, -0.038164280354976654, 0.06413612514734268, -0.06206069141626358, -0.07550547271966934, -0.046710625290870667, 0.03552878648042679, -0....
github_url: ""
github_stars: 0
num_models: 0
num_datasets: 0
num_spaces: 0

title: RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style
paper_url: https://openreview.net/forum?id=QEHrmQPBdd
authors: [ "Yantao Liu", "Zijun Yao", "Rui Min", "Yixin Cao", "Lei Hou", "Juanzi Li" ]
type: Oral
abstract: Reward models are critical in techniques like Reinforcement Learning from Human Feedback (RLHF) and Inference Scaling Laws, where they guide language model alignment and select optimal responses. Despite their importance, existing reward model benchmarks often evaluate models by asking them to distinguish between resp...
keywords: Reward Models, Language Models, Evaluation, Alignment
TL;DR: null
submission_number: 13,985
arxiv_id: null
embedding:
[ -0.08573347330093384, -0.07113663852214813, 0.02327777072787285, 0.03225872293114662, 0.0407232865691185, 0.06568308174610138, -0.01666353829205036, -0.042861457914114, 0.08131477236747742, 0.01655031368136406, -0.09637922048568726, -0.0660894364118576, 0.06317998468875885, 0.0042505152523...
github_url: ""
github_stars: 0
num_models: 0
num_datasets: 0
num_spaces: 0

title: TopoLM: brain-like spatio-functional organization in a topographic language model
paper_url: https://openreview.net/forum?id=aWXnKanInf
authors: [ "Neil Rathi", "Johannes Mehrer", "Badr AlKhamissi", "Taha Osama A Binhuraib", "Nicholas Blauch", "Martin Schrimpf" ]
type: Oral
abstract: Neurons in the brain are spatially organized such that neighbors on tissue often exhibit similar response profiles. In the human language system, experimental studies have observed clusters for syntactic and semantic categories, but the mechanisms underlying this functional organization remain unclear. Here, building o...
keywords: language modeling, topography, fMRI, neuroscience
TL;DR: We develop a transformer language model with topographically organized units predicting brain-like spatio-functional organization.
submission_number: 13,712
arxiv_id: 2410.11516
embedding:
[ 0.04553741216659546, -0.1560545116662979, 0.05150135979056358, 0.026262419298291206, 0.04672883078455925, 0.006334717385470867, -0.019015828147530556, 0.02199351042509079, 0.09638210386037827, -0.017915070056915283, -0.038319721817970276, -0.08659375458955765, 0.029651854187250137, 0.08215...
github_url: ""
github_stars: 0
num_models: 0
num_datasets: 0
num_spaces: 0

title: Spider 2.0: Evaluating Language Models on Real-World Enterprise Text-to-SQL Workflows
paper_url: https://openreview.net/forum?id=XmProj9cPs
authors: [ "Fangyu Lei", "Jixuan Chen", "Yuxiao Ye", "Ruisheng Cao", "Dongchan Shin", "Hongjin SU", "ZHAOQING SUO", "Hongcheng Gao", "Wenjing Hu", "Pengcheng Yin", "Victor Zhong", "Caiming Xiong", "Ruoxi Sun", "Qian Liu", "Sida Wang", "Tao Yu" ]
type: Oral
abstract: Real-world enterprise text-to-SQL workflows often involve complex cloud or local data across various database systems, multiple SQL queries in various dialects, and diverse operations from data transformation to analytics. We introduce Spider 2.0, an evaluation framework comprising 632 real-world text-to-SQL workflow...
keywords: LLM Benchmark, Data Science and Engineering, Code Generation, Text-to-SQL, LLM Agent
TL;DR: A benchmark for enterprise-level Text-to-SQL involving complex databases, challenging tasks, and real-world scenarios.
submission_number: 13,657
arxiv_id: 2411.07763
embedding:
[ -0.03679848089814186, -0.04623184725642204, -0.04603664577007294, 0.04519940912723541, -0.014134707860648632, -0.09154586493968964, -0.0011208809446543455, -0.01573074981570244, -0.018215196207165718, -0.00974492821842432, -0.08401896059513092, -0.06710424274206161, 0.04240303114056587, -0...
github_url: ""
github_stars: 0
num_models: 0
num_datasets: 0
num_spaces: 0

title: Knowledge Entropy Decay during Language Model Pretraining Hinders New Knowledge Acquisition
paper_url: https://openreview.net/forum?id=eHehzSDUFp
authors: [ "Jiyeon Kim", "Hyunji Lee", "Hyowon Cho", "Joel Jang", "Hyeonbin Hwang", "Seungpil Won", "Youbin Ahn", "Dohaeng Lee", "Minjoon Seo" ]
type: Oral
abstract: In this work, we investigate how a model's tendency to broadly integrate its parametric knowledge evolves throughout pretraining, and how this behavior affects overall performance, particularly in terms of knowledge acquisition and forgetting. We introduce the concept of knowledge entropy, which quantifies the range of...
keywords: knowledge entropy, knowledge acquisition and forgetting, evolving behavior during LLM pretraining
TL;DR: As pretraining progresses, models exhibit narrower integration of memory vectors, reflected by decreasing knowledge entropy, which hinders both knowledge acquisition and retention.
submission_number: 13,581
arxiv_id: 2410.01380
embedding:
[ 0.08067429810762405, -0.06784215569496155, 0.0016120108775794506, 0.1156923845410347, 0.041765280067920685, 0.08341261744499207, 0.03904664143919945, -0.009606799110770226, 0.08937966078519821, -0.005971039179712534, 0.00008143703598761931, 0.08783090859651566, 0.08362888544797897, -0.0014...
github_url: https://github.com/kaistai/knowledge-entropy
github_stars: 9
num_models: 0
num_datasets: 0
num_spaces: 0

title: Diffusion-Based Planning for Autonomous Driving with Flexible Guidance
paper_url: https://openreview.net/forum?id=wM2sfVgMDH
authors: [ "Yinan Zheng", "Ruiming Liang", "Kexin ZHENG", "Jinliang Zheng", "Liyuan Mao", "Jianxiong Li", "Weihao Gu", "Rui Ai", "Shengbo Eben Li", "Xianyuan Zhan", "Jingjing Liu" ]
type: Oral
abstract: Achieving human-like driving behaviors in complex open-world environments is a critical challenge in autonomous driving. Contemporary learning-based planning approaches such as imitation learning methods often struggle to balance competing objectives and lack safety assurance, due to limited adaptability and inadequa...
keywords: diffusion planning, autonomous driving
TL;DR: null
submission_number: 13,578
arxiv_id: 2501.15564
embedding:
[ 0.003697252133861184, -0.10823868960142136, -0.02056341990828514, 0.062148094177246094, 0.029695900157094002, -0.01762736774981022, -0.07689015567302704, -0.017527373507618904, -0.031918615102767944, -0.024829700589179993, -0.011157851666212082, -0.010659144259989262, -0.0022969350684434175,...
github_url: ""
github_stars: 0
num_models: 0
num_datasets: 0
num_spaces: 0

title: Learning to Search from Demonstration Sequences
paper_url: https://openreview.net/forum?id=v593OaNePQ
authors: [ "Dixant Mittal", "Liwei Kang", "Wee Sun Lee" ]
type: Oral
abstract: Search and planning are essential for solving many real-world problems. However, in numerous learning scenarios, only action-observation sequences, such as demonstrations or instruction sequences, are available for learning. Relying solely on supervised learning with these sequences can lead to sub-optimal performance ...
keywords: planning, reasoning, learning to search, reinforcement learning, large language model
TL;DR: We propose a method that constructs a search tree in a differentiable manner and can be trained from just demonstration sequences.
submission_number: 13,425
arxiv_id: null
embedding:
[ -0.05436089262366295, -0.08911566436290741, 0.03811679035425186, -0.0037279443349689245, 0.04582573473453522, 0.03448101133108139, -0.045912884175777435, -0.009847477078437805, -0.018625015392899513, 0.011606894433498383, -0.02260192297399044, -0.011133828200399876, 0.018584217876195908, 0...
github_url: ""
github_stars: 0
num_models: 0
num_datasets: 0
num_spaces: 0

title: Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Refuse
paper_url: https://openreview.net/forum?id=Iyrtb9EJBp
authors: [ "Maojia Song", "Shang Hong Sim", "Rishabh Bhardwaj", "Hai Leong Chieu", "Navonil Majumder", "Soujanya Poria" ]
type: Oral
abstract: LLMs are an integral component of retrieval-augmented generation (RAG) systems. While many studies focus on evaluating the overall quality of end-to-end RAG systems, there is a gap in understanding the appropriateness of LLMs for the RAG task. To address this, we introduce Trust-Score, a holistic metric that evaluates ...
keywords: Large Language Models, Trustworthiness, Hallucinations, Retrieval Augmented Generation
TL;DR: How to better evaluate LLMs and make them better suited for the RAG task.
submission_number: 13,377
arxiv_id: 2409.11242
embedding:
[ -0.1142926886677742, -0.029256857931613922, -0.024079805240035057, -0.007724442984908819, -0.020343737676739693, -0.005263214465230703, 0.03379476070404053, -0.011266677640378475, 0.10013871639966965, -0.02107248827815056, 0.00016581999079789966, -0.03880157321691513, 0.14433534443378448, ...
github_url: https://github.com/declare-lab/trust-align
github_stars: 51
num_models: 0
num_datasets: 1
num_spaces: 0

title: MAP: Multi-Human-Value Alignment Palette
paper_url: https://openreview.net/forum?id=NN6QHwgRrQ
authors: [ "Xinran Wang", "Qi Le", "Ammar Ahmed", "Enmao Diao", "Yi Zhou", "Nathalie Baracaldo", "Jie Ding", "Ali Anwar" ]
type: Oral
abstract: Ensuring that generative AI systems align with human values is essential but challenging, especially when considering multiple human values and their potential trade-offs. Since human values can be personalized and dynamically change over time, the desirable levels of value alignment vary across different ethnic groups...
keywords: Human value alignment, Generative model
TL;DR: The paper introduces Multi-Human-Value Alignment Palette (MAP), a novel approach to align generative models with multiple human values in a principled way.
submission_number: 13,248
arxiv_id: 2410.19198
embedding:
[ -0.02129107341170311, 0.006724673323333263, -0.036662619560956955, -0.06650504469871521, -0.016665300354361534, 0.04870571568608284, 0.0426325760781765, 0.0040106638334691525, 0.013574006967246532, -0.04951547458767891, -0.03328584507107735, -0.14591453969478607, 0.00524666765704751, 0.017...
github_url: ""
github_stars: 0
num_models: 0
num_datasets: 0
num_spaces: 0

title: Can Neural Networks Achieve Optimal Computational-statistical Tradeoff? An Analysis on Single-Index Model
paper_url: https://openreview.net/forum?id=is4nCVkSFA
authors: [ "Siyu Chen", "Beining Wu", "Miao Lu", "Zhuoran Yang", "Tianhao Wang" ]
type: Oral
abstract: In this work, we tackle the following question: Can neural networks trained with gradient-based methods achieve the optimal statistical-computational tradeoff in learning Gaussian single-index models? Prior research has shown that any polynomial-time algorithm under the statistical query (SQ) framework requires $\Omeg...
keywords: single-index model, feature learning, gradient-based method, computational-statistical tradeoff
TL;DR: We propose a unified gradient-based algorithm for feature learning in the Gaussian single-index model, with sample complexity matching the SQ lower bound.
submission_number: 13,084
arxiv_id: null
embedding:
[ -0.10704405605792999, -0.10548006743192673, 0.07906994968652725, 0.09473977982997894, 0.049008872359991074, 0.04187753424048424, -0.027378041297197342, -0.022082936018705368, 0.02989555150270462, -0.05141766369342804, -0.0634932816028595, 0.028668809682130814, 0.016813315451145172, -0.0023...
github_url: ""
github_stars: 0
num_models: 0
num_datasets: 0
num_spaces: 0

title: Consistency Checks for Language Model Forecasters
paper_url: https://openreview.net/forum?id=r5IXBlTCGc
authors: [ "Daniel Paleka", "Abhimanyu Pallavi Sudhir", "Alejandro Alvarez", "Vineeth Bhat", "Adam Shen", "Evan Wang", "Florian Tramèr" ]
type: Oral
abstract: Forecasting is a task that is difficult to evaluate: the ground truth can only be known in the future. Recent work showing LLM forecasters rapidly approaching human-level performance raises the question: how can we benchmark and evaluate these forecasters *instantaneously*? Following the consistency check framework, we m...
keywords: forecasting, markets, trading, LLM, evaluation, eval, consistency, robustness
TL;DR: It is difficult to evaluate AI forecasters instantaneously; we propose market-based consistency evals on LLM forecasters and show plenty of inconsistency.
submission_number: 13,065
arxiv_id: 2412.18544
embedding:
[ -0.07717876136302948, -0.09865415841341019, -0.04396665096282959, 0.050274740904569626, 0.033055178821086884, 0.021076753735542297, 0.012797188013792038, -0.0023351050913333893, 0.012750214897096157, -0.03271811455488205, -0.11732664704322815, -0.07693871855735779, 0.03273208066821098, 0.0...
github_url: ""
github_stars: 0
num_models: 0
num_datasets: 0
num_spaces: 0

title: Spread Preference Annotation: Direct Preference Judgment for Efficient LLM Alignment
paper_url: https://openreview.net/forum?id=BPgK5XW1Nb
authors: [ "Dongyoung Kim", "Kimin Lee", "Jinwoo Shin", "Jaehyung Kim" ]
type: Oral
abstract: Aligning large language models (LLMs) with human preferences has become a key component of obtaining state-of-the-art performance, but constructing a large human-annotated preference dataset incurs a huge cost. To tackle this problem, we propose a new framework, Spread Preference Annotation with direct preference judgm...
keywords: large language model, alignment, preference
TL;DR: null
submission_number: 12,928
arxiv_id: 2406.04412
embedding:
[ -0.09205672889947891, -0.08180037140846252, -0.01853586733341217, 0.02387421391904354, 0.05137999355792999, 0.04248807579278946, 0.025502393022179604, 0.022205257788300514, 0.020403213798999786, -0.004675958771258593, 0.011237965896725655, -0.0845503956079483, 0.04652528092265129, 0.034931...
github_url: ""
github_stars: 0
num_models: 0
num_datasets: 0
num_spaces: 0

title: Brain Bandit: A Biologically Grounded Neural Network for Efficient Control of Exploration
paper_url: https://openreview.net/forum?id=RWJX5F5I9g
authors: [ "Chen Jiang", "Jiahui An", "Yating Liu", "Ni Ji" ]
type: Oral
abstract: How to balance exploration and exploitation in an uncertain environment is a central challenge in reinforcement learning. In contrast, humans and animals have demonstrated superior exploration efficiency in novel environments. To understand how the brain’s neural network controls exploration under uncertainty, ...
keywords: explore-exploit, stochastic Hopfield network, Thompson sampling, decision under uncertainty, brain-inspired algorithm, reinforcement learning
TL;DR: We demonstrate that a brain-inspired stochastic Hopfield network can achieve efficient, human-like, uncertainty-aware exploration in bandit and MDP tasks.
submission_number: 12,774
arxiv_id: null
embedding:
[ -0.048125628381967545, -0.06399022042751312, 0.02736765518784523, 0.013729391619563103, -0.026136204600334167, -0.013279981911182404, 0.06835620850324631, -0.022411705926060677, 0.06214163452386856, -0.04466582462191582, -0.05740915611386299, -0.01716732047498226, 0.03107030689716339, 0.01...
github_url: ""
github_stars: 0
num_models: 0
num_datasets: 0
num_spaces: 0

title: MaestroMotif: Skill Design from Artificial Intelligence Feedback
paper_url: https://openreview.net/forum?id=or8mMhmyRV
authors: [ "Martin Klissarov", "Mikael Henaff", "Roberta Raileanu", "Shagun Sodhani", "Pascal Vincent", "Amy Zhang", "Pierre-Luc Bacon", "Doina Precup", "Marlos C. Machado", "Pierluca D'Oro" ]
type: Oral
abstract: Describing skills in natural language has the potential to provide an accessible way to inject human knowledge about decision-making into an AI system. We present MaestroMotif, a method for AI-assisted skill design, which yields high-performing and adaptable agents. MaestroMotif leverages the capabilities of Large Lang...
keywords: Hierarchical RL, Reinforcement Learning, LLMs
TL;DR: A method for AI-assisted skill design via Motif and LLM code generation, solving tasks zero-shot from language descriptions on NetHack.
submission_number: 12,735
arxiv_id: 2412.08542
embedding:
[ -0.023494863882660866, -0.0449480339884758, 0.025618351995944977, 0.07570120692253113, -0.020263411104679108, -0.00032232367084361613, -0.005257580894976854, 0.03576262295246124, -0.04352348670363426, 0.013138490729033947, -0.07918298244476318, -0.09636082500219345, 0.06023058295249939, -0...
github_url: ""
github_stars: 0
num_models: 0
num_datasets: 0
num_spaces: 0

title: Learning to Discover Regulatory Elements for Gene Expression Prediction
paper_url: https://openreview.net/forum?id=Mfnh1Sqdwf
authors: [ "Xingyu Su", "Haiyang Yu", "Degui Zhi", "Shuiwang Ji" ]
type: Oral
abstract: We consider the problem of predicting gene expressions from DNA sequences. A key challenge of this task is to find the regulatory elements that control gene expressions. Here, we introduce Seq2Exp, a Sequence to Expression network explicitly designed to discover and extract regulatory elements that drive target gene ex...
keywords: Gene Expression, Deep Learning, Sequence Modeling
TL;DR: null
submission_number: 12,644
arxiv_id: 2502.13991
embedding:
[ -0.0845017358660698, -0.025793347507715225, 0.02984725870192051, 0.04579932615160942, 0.11021523922681808, -0.0022563128732144833, 0.0075287725776433945, -0.06440958380699158, -0.030917929485440254, -0.03083932027220726, -0.048635661602020264, -0.05303158238530159, -0.02617608569562435, -0...
github_url: https://github.com/divelab/AIRS
github_stars: 615
num_models: 1
num_datasets: 1
num_spaces: 0

title: Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models
paper_url: https://openreview.net/forum?id=tyEyYT267x
authors: [ "Marianne Arriola", "Aaron Gokaslan", "Justin T Chiu", "Zhihan Yang", "Zhixuan Qi", "Jiaqi Han", "Subham Sekhar Sahoo", "Volodymyr Kuleshov" ]
type: Oral
abstract: Diffusion language models offer unique benefits over autoregressive models due to their potential for parallelized generation and controllability, yet they lag in likelihood modeling and are limited to fixed-length generation. In this work, we introduce a class of block diffusion language models that interpolate betwee...
keywords: Diffusion Models, Text Diffusion, Generative Models
TL;DR: null
submission_number: 12,566
arxiv_id: 2503.09573
embedding:
[ -0.10416851937770844, -0.12655200064182281, 0.026569997891783714, 0.018532857298851013, -0.012075117789208889, -0.006582919973880053, -0.05516336113214493, -0.04498690366744995, 0.07734091579914093, -0.0758652314543724, -0.022044286131858826, -0.03679569438099861, -0.03500502184033394, -0....
github_url: https://github.com/kuleshov-group/bd3lms
github_stars: 556
num_models: 8
num_datasets: 0
num_spaces: 0

title: Syntactic and Semantic Control of Large Language Models via Sequential Monte Carlo
paper_url: https://openreview.net/forum?id=xoXn62FzD0
authors: [ "João Loula", "Benjamin LeBrun", "Li Du", "Ben Lipkin", "Clemente Pasti", "Gabriel Grand", "Tianyu Liu", "Yahya Emara", "Marjorie Freedman", "Jason Eisner", "Ryan Cotterell", "Vikash Mansinghka", "Alexander K. Lew", "Tim Vieira", "Timothy J. O'Donnell" ]
type: Oral
abstract: A wide range of LM applications require generating text that conforms to syntactic or semantic constraints. Imposing such constraints can be naturally framed as probabilistic conditioning, but exact generation from the resulting distribution—which can differ substantially from the LM’s base distribution—is generally in...
keywords: Sequential Monte Carlo, Language Models, Semantic parsing, Bayesian inference, Probabilistic programming, SMC
TL;DR: We introduce a sequential Monte Carlo framework for controlling LMs at inference time via both syntactic and semantic constraints.
submission_number: 12,536
arxiv_id: null
embedding:
[ -0.06695172190666199, -0.09970736503601074, 0.004899227526038885, 0.06267884373664856, 0.03959203138947487, -0.035926658660173416, -0.04199553653597832, -0.024605117738246918, -0.003779246238991618, -0.033395905047655106, -0.02965357154607773, -0.08184044063091278, 0.08553116768598557, -0....
github_url: ""
github_stars: 0
num_models: 0
num_datasets: 0
num_spaces: 0

title: Scaling Laws for Precision
paper_url: https://openreview.net/forum?id=wg1PCg3CUP
authors: [ "Tanishq Kumar", "Zachary Ankner", "Benjamin Frederick Spector", "Blake Bordelon", "Niklas Muennighoff", "Mansheej Paul", "Cengiz Pehlevan", "Christopher Re", "Aditi Raghunathan" ]
type: Oral
abstract: Low precision training and inference affect both the quality and cost of language models, but current scaling laws do not account for this. In this work, we devise "precision-aware" scaling laws for both training and inference. We propose that training in lower precision reduces the model's "effective parameter count,"...
keywords: quantization, scaling laws, precision, language models
TL;DR: We model the effects of precision on language model loss scaling, both during and after training. We find that overtrained models degrade more when quantized at inference time, and that training larger models in lower precision can be optimal.
submission_number: 12,529
arxiv_id: 2411.04330
embedding:
[ -0.02280518412590027, -0.0446615144610405, 0.027057068422436714, 0.09494249522686005, 0.0349905900657177, 0.05653073266148567, -0.028869183734059334, 0.031977467238903046, 0.03930889815092087, -0.06456945091485977, -0.048771847039461136, -0.03631022572517395, -0.007628241553902626, 0.03554...
github_url: ""
github_stars: 0
num_models: 0
num_datasets: 0
num_spaces: 0

title: Context-Parametric Inversion: Why Instruction Finetuning May Not Actually Improve Context Reliance
paper_url: https://openreview.net/forum?id=SPS6HzVzyt
authors: [ "Sachin Goyal", "Christina Baek", "J Zico Kolter", "Aditi Raghunathan" ]
type: Oral
abstract: Large Language Models are instruction-finetuned to enhance their ability to follow user instructions and better comprehend input context. Still, they often struggle to follow the input context, especially when it contradicts the model's parametric knowledge. This manifests as various failures, such as hallucinations where...
keywords: Instruction finetuning, context-vs-parametric reliance
TL;DR: We highlight a surprising phenomenon where the model's context reliance unexpectedly decreases with instruction finetuning, despite an initial increase.
submission_number: 12,499
arxiv_id: 2410.10796
embedding:
[ 0.02301429957151413, -0.08786125481128693, 0.04208764061331749, 0.03361399844288826, -0.004300509579479694, -0.04522928595542908, 0.0048074680380523205, -0.014354490675032139, 0.08749780803918839, -0.015223857015371323, 0.03223327919840813, 0.006410186178982258, 0.03412650525569916, -0.053...
github_url: ""
github_stars: 0
num_models: 0
num_datasets: 0
num_spaces: 0
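
Because `github_url` has a minimum length of 0, rows without released code simply carry an empty string there. A short sketch, under the same assumptions as the loader above, for keeping only papers with a linked repository and ranking them by stars:

```python
# Sketch (same hypothetical `ds` as above): papers with code, ranked by stars.
with_code = ds.filter(lambda row: len(row["github_url"]) > 0)
for row in sorted(with_code, key=lambda r: -r["github_stars"])[:3]:
    print(row["github_stars"], row["github_url"])
```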