| paper_id (uint32, 0–3.7k) | title (string, 14–154 chars) | paper_url (string, 42 chars) | authors (list, 1–21 entries) | type (string, 3 classes) | abstract (string, 413–2.52k chars) | keywords (string, 4–397 chars) | TL;DR (string, 5–250 chars, nullable) | submission_number (int64, 2–14.3k) | arxiv_id (string, 10 chars, nullable) | embedding (list, 768 floats) |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | DarkBench: Benchmarking Dark Patterns in Large Language Models | https://openreview.net/forum?id=odjMSBSWRt | Esben Kran, Hieu Minh Nguyen, Akash Kundu, Sami Jawhar, Jinsuk Park, Mateusz Maria Jurewicz | Oral | We introduce DarkBench, a comprehensive benchmark for detecting dark design patterns—manipulative techniques that influence user behavior—in interactions with large language models (LLMs). Our benchmark comprises 660 prompts across six categories: brand bias, user retention, sycophancy, anthropomorphism, harmful genera... | Dark Patterns, AI Deception, Large Language Models | We introduce DarkBench, a benchmark revealing that many large language models employ manipulative dark design patterns. Organizations developing LLMs should actively recognize and mitigate the impact of dark design patterns to promote ethical AI. | 14,257 | 2503.10728 | [0.011475447565317154, 0.009862896986305714, -0.04764394462108612, 0.008322280831634998, 0.02865590900182724, 0.02567763440310955, 0.03321939706802368, 0.022531183436512947, -0.03509656712412834, -0.017947131767868996, -0.026911810040473938, 0.03533045947551727, -0.057708218693733215, -0.01...] |
| 1 | RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style | https://openreview.net/forum?id=QEHrmQPBdd | Yantao Liu, Zijun Yao, Rui Min, Yixin Cao, Lei Hou, Juanzi Li | Oral | Reward models are critical in techniques like Reinforcement Learning from Human Feedback (RLHF) and Inference Scaling Laws, where they guide language model alignment and select optimal responses. Despite their importance, existing reward model benchmarks often evaluate models by asking them to distinguish between resp... | Reward Models, Language Models, Evaluation, Alignment | null | 13,985 | null | [-0.01527470350265503, -0.011983676813542843, 0.00011990717030130327, 0.05472085252404213, 0.015925515443086624, 0.03496106341481209, 0.02222450077533722, 0.008349643088877201, -0.03587907552719116, -0.005719794891774654, -0.007379265036433935, 0.05095171183347702, -0.05241604521870613, -0....] |
| 2 | TopoLM: brain-like spatio-functional organization in a topographic language model | https://openreview.net/forum?id=aWXnKanInf | Neil Rathi, Johannes Mehrer, Badr AlKhamissi, Taha Osama A Binhuraib, Nicholas Blauch, Martin Schrimpf | Oral | Neurons in the brain are spatially organized such that neighbors on tissue often exhibit similar response profiles. In the human language system, experimental studies have observed clusters for syntactic and semantic categories, but the mechanisms underlying this functional organization remain unclear. Here, building o... | language modeling, topography, fMRI, neuroscience | We develop a transformer language model with topographically organized units predicting brain-like spatio-functional organization. | 13,712 | 2410.11516 | [-0.04894479736685753, 0.006795223336666822, 0.012124377302825451, -0.0038749058730900288, 0.02827361412346363, 0.014664778485894203, 0.024945799261331558, 0.012138745747506618, -0.03987088054418564, -0.010130858048796654, -0.045090045779943466, 0.007762247696518898, -0.050883762538433075, ...] |
| 3 | Spider 2.0: Evaluating Language Models on Real-World Enterprise Text-to-SQL Workflows | https://openreview.net/forum?id=XmProj9cPs | Fangyu Lei, Jixuan Chen, Yuxiao Ye, Ruisheng Cao, Dongchan Shin, Hongjin SU, ZHAOQING SUO, Hongcheng Gao, Wenjing Hu, Pengcheng Yin, Victor Zhong, Caiming Xiong, Ruoxi Sun, Qian Liu, Sida Wang, Tao Yu | Oral | Real-world enterprise text-to-SQL workflows often involve complex cloud or local data across various database systems, multiple SQL queries in various dialects, and diverse operations from data transformation to analytics. We introduce Spider 2.0, an evaluation framework comprising 632 real-world text-to-SQL workflow... | LLM Benchmark, Data Science and Engineering, Code Generation, Text-to-SQL, LLM Agent | A benchmark for enterprise-level Text-to-SQL involving complex databases, challenging tasks, and real-world scenarios. | 13,657 | 2411.07763 | [-0.029984761029481888, -0.05117392539978027, -0.014438548125326633, 0.04148005694150925, 0.06265062838792801, 0.005044182762503624, 0.023352786898612976, 0.04093597084283829, -0.02782435342669487, -0.05603385716676712, -0.03901423513889313, -0.0033811237663030624, -0.07936139404773712, -0....] |
| 4 | Knowledge Entropy Decay during Language Model Pretraining Hinders New Knowledge Acquisition | https://openreview.net/forum?id=eHehzSDUFp | Jiyeon Kim, Hyunji Lee, Hyowon Cho, Joel Jang, Hyeonbin Hwang, Seungpil Won, Youbin Ahn, Dohaeng Lee, Minjoon Seo | Oral | In this work, we investigate how a model's tendency to broadly integrate its parametric knowledge evolves throughout pretraining, and how this behavior affects overall performance, particularly in terms of knowledge acquisition and forgetting. We introduce the concept of knowledge entropy, which quantifies the range of... | knowledge entropy, knowledge acquisition and forgetting, evolving behavior during LLM pretraining | As pretraining progresses, models exhibit narrower integration of memory vectors, reflected by decreasing knowledge entropy, which hinders both knowledge acquisition and retention. | 13,581 | 2410.01380 | [-0.03146198019385338, 0.006314320024102926, -0.02386482246220112, 0.0069854832254350185, 0.06182214245200157, -0.0024073985405266285, 0.05851138010621071, 0.008571662940084934, -0.049556806683540344, -0.014392765238881111, -0.03509105369448662, 0.06213784217834473, -0.025217240676283836, 0...] |
| 5 | Diffusion-Based Planning for Autonomous Driving with Flexible Guidance | https://openreview.net/forum?id=wM2sfVgMDH | Yinan Zheng, Ruiming Liang, Kexin ZHENG, Jinliang Zheng, Liyuan Mao, Jianxiong Li, Weihao Gu, Rui Ai, Shengbo Eben Li, Xianyuan Zhan, Jingjing Liu | Oral | Achieving human-like driving behaviors in complex open-world environments is a critical challenge in autonomous driving. Contemporary learning-based planning approaches such as imitation learning methods often struggle to balance competing objectives and lack safety assurance, due to limited adaptability and inadequa... | diffusion planning, autonomous driving | null | 13,578 | 2501.15564 | [-0.013987475074827671, -0.03392152115702629, 0.005078543908894062, 0.02392151579260826, 0.05787055566906929, 0.030821382999420166, 0.0024182945489883423, -0.00959070399403572, 0.011672046966850758, -0.04782818257808685, 0.0008578276610933244, 0.008070448413491249, -0.027789438143372536, 0....] |
| 6 | Learning to Search from Demonstration Sequences | https://openreview.net/forum?id=v593OaNePQ | Dixant Mittal, Liwei Kang, Wee Sun Lee | Oral | Search and planning are essential for solving many real-world problems. However, in numerous learning scenarios, only action-observation sequences, such as demonstrations or instruction sequences, are available for learning. Relying solely on supervised learning with these sequences can lead to sub-optimal performance ... | planning, reasoning, learning to search, reinforcement learning, large language model | We propose a method that constructs a search tree in a differentiable manner and can be trained from just demonstration sequences. | 13,425 | null | [-0.0351773165166378, -0.0075204200111329556, -0.026527535170316696, 0.05603007972240448, 0.04224853962659836, 0.021068384870886803, 0.023309456184506416, 0.011441731825470924, -0.0098927216604352, -0.027925288304686546, 0.014736250974237919, -0.014753822237253189, -0.05829273536801338, -0....] |
| 7 | Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Refuse | https://openreview.net/forum?id=Iyrtb9EJBp | Maojia Song, Shang Hong Sim, Rishabh Bhardwaj, Hai Leong Chieu, Navonil Majumder, Soujanya Poria | Oral | LLMs are an integral component of retrieval-augmented generation (RAG) systems. While many studies focus on evaluating the overall quality of end-to-end RAG systems, there is a gap in understanding the appropriateness of LLMs for the RAG task. To address this, we introduce Trust-Score, a holistic metric that evaluates ... | Large Language Models, Trustworthiness, Hallucinations, Retrieval Augmented Generation | How to better evaluate LLMs and make them better suited for the RAG task | 13,377 | 2409.11242 | [0.022029131650924683, -0.02131049521267414, -0.007334582507610321, 0.0510173961520195, 0.040801871567964554, -0.01915552467107773, 0.01961986906826496, -0.0009685529512353241, -0.0052784704603254795, -0.019884465262293816, -0.04627111926674843, 0.05479709804058075, -0.07440562546253204, -0...] |
| 8 | MAP: Multi-Human-Value Alignment Palette | https://openreview.net/forum?id=NN6QHwgRrQ | Xinran Wang, Qi Le, Ammar Ahmed, Enmao Diao, Yi Zhou, Nathalie Baracaldo, Jie Ding, Ali Anwar | Oral | Ensuring that generative AI systems align with human values is essential but challenging, especially when considering multiple human values and their potential trade-offs. Since human values can be personalized and dynamically change over time, the desirable levels of value alignment vary across different ethnic groups... | Human value alignment, Generative model | The paper introduces Multi-Human-Value Alignment Palette (MAP), a novel approach to align generative models with multiple human values in a principled way. | 13,248 | 2410.19198 | [-0.011101456359028816, -0.004345667082816362, -0.024158900603652, 0.034977853298187256, 0.035846807062625885, 0.055055536329746246, 0.01409035176038742, 0.02160101756453514, -0.009722994640469551, -0.07199124246835709, -0.05136179178953171, 0.01456049270927906, -0.0845523551106453, -0.0306...] |
| 9 | Can Neural Networks Achieve Optimal Computational-statistical Tradeoff? An Analysis on Single-Index Model | https://openreview.net/forum?id=is4nCVkSFA | Siyu Chen, Beining Wu, Miao Lu, Zhuoran Yang, Tianhao Wang | Oral | In this work, we tackle the following question: Can neural networks trained with gradient-based methods achieve the optimal statistical-computational tradeoff in learning Gaussian single-index models? Prior research has shown that any polynomial-time algorithm under the statistical query (SQ) framework requires $\Omeg... | single-index model, feature learning, gradient-based method, computational-statistical tradeoff | We propose a unified gradient-based algorithm for feature learning in Gaussian single-index model with sample complexity matching the SQ lower bound | 13,084 | null | [-0.03677269443869591, -0.015584495849907398, 0.03281797841191292, 0.04215344414114952, 0.038520459085702896, 0.05101076140999794, 0.02697884477674961, -0.0008036288199946284, -0.061751846224069595, -0.033139947801828384, 0.0052408333867788315, 0.0025670421309769154, -0.06572360545396805, -...] |
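The `embedding` column (768-dim vectors, truncated above) supports semantic search over the papers. Below is a minimal sketch of cosine-similarity ranking; the titles are taken from the rows above, but the 4-dim vectors and the query are toy stand-ins for the real 768-dim embeddings, used for illustration only:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy 4-dim stand-ins for the real 768-dim `embedding` vectors.
papers = {
    "DarkBench: Benchmarking Dark Patterns in Large Language Models": [0.1, 0.9, 0.0, 0.2],
    "RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style": [0.2, 0.8, 0.1, 0.1],
    "TopoLM: brain-like spatio-functional organization in a topographic language model": [0.9, 0.1, 0.3, 0.0],
}

def nearest(query_vec, papers, k=2):
    # Rank titles by cosine similarity to the query embedding, highest first.
    ranked = sorted(papers, key=lambda t: cosine(query_vec, papers[t]), reverse=True)
    return ranked[:k]

query = [0.15, 0.85, 0.05, 0.15]  # hypothetical embedded query about benchmarks
print(nearest(query, papers))
```

With real embeddings the same routine works unchanged; for a dataset of this size (a few thousand rows), a brute-force scan like this is typically fast enough without an approximate nearest-neighbor index.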