Daily Papers

by AK and the research community

May 7

The Cylindrical Representation Hypothesis for Language Model Steering

Steering is a widely used technique for controlling large language models, yet its effects are often unstable and hard to predict. Existing theoretical accounts are largely based on the Linear Representation Hypothesis (LRH). While LRH assumes that concepts can be orthogonalized for lossless control, this idealized mapping fails in real representations and cannot account for the observed unpredictability of steering. By relaxing LRH's orthogonality assumption while preserving linear representations, we show that overlapping concept contributions naturally yield a sample-specific axis-orthogonal structure. We formalize this as the Cylindrical Representation Hypothesis (CRH). In CRH, a central axis captures the main difference between concept absence and presence and drives concept generation. A surrounding normal plane controls steering sensitivity by determining how easily the axis can activate the target concept. Within this plane, only specific sensitive sectors strongly facilitate concept activation, while other sectors can suppress or delay it. While the surrounding normal plane can be reliably identified from difference vectors, the sensitive sector cannot, introducing intrinsic uncertainty at the sector level. This uncertainty provides a principled explanation for why steering outcomes often fluctuate even when well-aligned directions are used. Our experiments verify the existence of the cylindrical structure and demonstrate that CRH provides a valid and practical way to interpret model steering behavior in real settings. Code: https://github.com/mbzuai-nlp/CRH.
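The difference-vector steering that CRH analyzes can be sketched minimally as activation addition along a difference-of-means axis. Everything below — the toy Gaussian "activations", the `alpha` scale, and the helper names — is illustrative, not the paper's implementation:

```python
import numpy as np

def steering_direction(pos_acts, neg_acts):
    """Difference-of-means steering axis between concept-present and
    concept-absent activations (rows are samples, columns are features)."""
    d = pos_acts.mean(axis=0) - neg_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def steer(hidden, direction, alpha=4.0):
    """Add a scaled steering direction to a hidden state."""
    return hidden + alpha * direction

rng = np.random.default_rng(0)
pos = rng.normal(1.0, 0.1, size=(32, 16))  # toy "concept present" activations
neg = rng.normal(0.0, 0.1, size=(32, 16))  # toy "concept absent" activations
axis = steering_direction(pos, neg)
h = rng.normal(size=16)
h_steered = steer(h, axis)
# Projection onto the axis grows by exactly alpha, since axis is unit-norm.
print(round(float(h_steered @ axis - h @ axis), 2))  # 4.0 by construction
```

CRH's point is precisely that moving along such an axis is not the whole story: the sample's position in the surrounding normal plane (its sector) also decides whether this shift actually activates the concept.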

  • 10 authors · May 2

Evaluation-driven Scaling for Scientific Discovery

Language models are increasingly used in scientific discovery to generate hypotheses, propose candidate solutions, implement systems, and iteratively refine them. At the core of these trial-and-error loops lies evaluation: the process of obtaining feedback on candidate solutions via verifiers, simulators, or task-specific scoring functions. While prior work has highlighted the importance of evaluation, it has not explicitly formulated the problem of how evaluation-driven discovery loops can be scaled up in a principled and effective manner to push the boundaries of scientific discovery, a problem this paper seeks to address. We introduce Simple Test-time Evaluation-driven Scaling (SimpleTES), a general framework that strategically combines parallel exploration, feedback-driven refinement, and local selection, revealing substantial gains unlocked by scaling evaluation-driven discovery loops along the right dimensions. Across 21 scientific problems spanning six domains, SimpleTES discovers state-of-the-art solutions using gpt-oss models, consistently outperforming both frontier-model baselines and sophisticated optimization pipelines. In particular, we sped up the widely used LASSO algorithm by over 2x, designed quantum circuit routing policies that reduce gate overhead by 24.5%, and discovered new Erdős minimum overlap constructions that surpass the best-known results. Beyond novel discoveries, SimpleTES produces trajectory-level histories that naturally provide supervision for feedback-driven learning. When post-trained on successful trajectories, models not only improve efficiency on seen problems but also generalize to unseen problems, discovering solutions that base models fail to uncover. Together, our results establish effective evaluation-driven loop scaling as a central axis for advancing LLM-driven scientific discovery, and provide a simple yet practical framework for realizing these gains.
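The loop of parallel exploration, feedback-driven refinement, and local selection can be caricatured on a toy numeric objective. The function names and the hill-climbing acceptance rule below are assumptions for illustration, not SimpleTES itself (where candidates are LLM-generated solutions and the evaluator is a verifier or simulator):

```python
import random

def evaluation_driven_search(propose, refine, evaluate,
                             n_parallel=8, n_rounds=5, seed=0):
    """Toy evaluation-driven discovery loop: parallel candidates are
    scored by an evaluator, refined using that feedback, and kept only
    if the refinement improves the score (local selection)."""
    rng = random.Random(seed)
    pool = [propose(rng) for _ in range(n_parallel)]
    for _ in range(n_rounds):
        scored = [(evaluate(c), c) for c in pool]
        pool = []
        for score, cand in scored:
            new = refine(cand, score, rng)
            # Local selection: each slot keeps the better of (current, refined).
            pool.append(new if evaluate(new) >= score else cand)
    best = max(pool, key=evaluate)
    return best, evaluate(best)

# Toy problem: maximize -(x - 3)^2; the scoring function plays the evaluator.
best, score = evaluation_driven_search(
    propose=lambda rng: rng.uniform(-10, 10),
    refine=lambda x, s, rng: x + rng.gauss(0, 0.5),
    evaluate=lambda x: -(x - 3.0) ** 2,
)
print(round(best, 1), round(score, 2))
```

The scaling question the paper studies is which of these knobs — parallel width, refinement depth, or selection strategy — yields the most discovery per unit of evaluation budget.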

  • 25 authors · Apr 20

LikeBench: Evaluating Subjective Likability in LLMs for Personalization

A personalized LLM should remember user facts, apply them correctly, and adapt over time to provide responses that the user prefers. Existing LLM personalization benchmarks are largely centered on two axes: accurately recalling user information and accurately applying remembered information in downstream tasks. We argue that a third axis, likability, is both subjective and central to user experience, yet under-measured by current benchmarks. To measure likability holistically, we introduce LikeBench, a multi-session, dynamic evaluation framework that measures likability across multiple dimensions by how much an LLM can adapt over time to a user's preferences to provide more likable responses. In LikeBench, the LLMs engage in conversation with a simulated user and learn preferences only from the ongoing dialogue. As the interaction unfolds, models try to adapt their responses, and after each turn, they are evaluated for likability across seven dimensions by the same simulated user. To the best of our knowledge, we are the first to decompose likability into multiple diagnostic metrics: emotional adaptation, formality matching, knowledge adaptation, reference understanding, conversation length fit, humor fit, and callback, which makes it easier to pinpoint where a model falls short. To make the simulated user more realistic and discriminative, LikeBench uses fine-grained, psychologically grounded descriptive personas rather than the coarse high/low trait-rating personas used in prior work. Our benchmark shows that strong memory performance does not guarantee high likability: DeepSeek R1, with lower memory accuracy (86%, 17 facts/profile), outperformed Qwen3 by 28% on likability score despite Qwen3's higher memory accuracy (93%, 43 facts/profile). Even SOTA models like GPT-5 adapt well in short exchanges but show only limited robustness in longer, noisier interactions.
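As a rough illustration of per-turn scoring, the seven dimensions named above can be aggregated into a single likability score. The equal-weight mean and the stand-in judge scores are assumptions; in the benchmark the per-dimension judging is done by the simulated user:

```python
# The seven likability dimensions named in the abstract; per-dimension
# scores in [0, 1] stand in for judgments from a simulated-user LLM.
DIMENSIONS = [
    "emotional_adaptation", "formality_matching", "knowledge_adaptation",
    "reference_understanding", "conversation_length_fit", "humor_fit",
    "callback",
]

def likability_score(judge_scores: dict) -> float:
    """Aggregate per-dimension judge scores into one likability score
    (equal-weight mean, an assumption for this sketch)."""
    missing = set(DIMENSIONS) - set(judge_scores)
    if missing:
        raise ValueError(f"missing dimensions: {sorted(missing)}")
    return sum(judge_scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

turn_scores = {d: 0.8 for d in DIMENSIONS}
turn_scores["humor_fit"] = 0.1  # a per-dimension breakdown pinpoints the gap
print(round(likability_score(turn_scores), 2))  # 0.7
```

Keeping the dimensions separate before aggregating is what makes the metric diagnostic: a single scalar would hide that the deficit above is entirely in humor fit.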

Amazon · Dec 15, 2025

Magnetic fields in the infrared dark cloud G34.43+0.24

We present the B-fields mapped in IRDC G34.43+0.24 using 850 μm polarized dust emission observed with the POL-2 instrument at JCMT. We examine the magnetic field geometries and strengths in the northern, central, and southern regions of the filament. The overall field geometry is ordered and aligned closely perpendicular to the filament's main axis, particularly in regions containing the central clumps MM1 and MM2, whereas MM3 in the north has field orientations aligned with its major axis. The overall field orientations are uniform at large (POL-2 at 14″ and SHARP at 10″) to small scales (TADPOL at 2.5″ and SMA at 1.5″) in the MM1 and MM2 regions. SHARP/CSO observations of MM3 at 350 μm from Tang et al. show a trend similar to that seen in our POL-2 observations. TADPOL observations demonstrate a well-defined field geometry in MM1/MM2 consistent with MHD simulations of accreting filaments. Using the updated Davis-Chandrasekhar-Fermi relation, we obtain plane-of-sky magnetic field strengths of 470 ± 190 μG, 100 ± 40 μG, and 60 ± 34 μG in the central, northern, and southern regions of G34, respectively. These field strengths, combined with column density and velocity dispersion values available in the literature, suggest that G34 is marginally critical, with criticality parameter λ values of 0.8 ± 0.4, 1.1 ± 0.8, and 0.9 ± 0.5 in the central, northern, and southern regions, respectively. The turbulent motions in G34 are sub-Alfvénic, with Alfvénic Mach numbers of 0.34 ± 0.13, 0.53 ± 0.30, and 0.49 ± 0.26 in the three regions. The observed aligned B-fields in G34.43+0.24 are consistent with theoretical models suggesting that B-fields play an important role in guiding the gravity-driven contraction of the cloud.
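The field strengths above come from an updated Davis-Chandrasekhar-Fermi relation that is not reproduced in the abstract. As a rough sketch, the classical DCF estimate B_pos = Q √(4πρ) σ_v/σ_θ (CGS, with correction factor Q ≈ 0.5) can be computed as follows; the input values are hypothetical dense-clump numbers, not the paper's measurements:

```python
import math

def dcf_bpos(n_h2_cm3, sigma_v_kms, sigma_theta_deg, q=0.5, mu=2.8):
    """Classical Davis-Chandrasekhar-Fermi plane-of-sky field strength,
    B_pos = Q * sqrt(4*pi*rho) * sigma_v / sigma_theta (CGS), in microgauss.
    n_h2_cm3: H2 number density [cm^-3]; sigma_v_kms: line-of-sight velocity
    dispersion [km/s]; sigma_theta_deg: polarization-angle dispersion [deg];
    mu: mean molecular weight per H2 molecule (2.8 includes helium)."""
    m_h = 1.6735575e-24                # hydrogen atom mass [g]
    rho = mu * m_h * n_h2_cm3          # mass density [g cm^-3]
    sigma_v = sigma_v_kms * 1.0e5      # [cm s^-1]
    sigma_theta = math.radians(sigma_theta_deg)
    b_gauss = q * math.sqrt(4 * math.pi * rho) * sigma_v / sigma_theta
    return b_gauss * 1.0e6             # [microgauss]

# Illustrative inputs only: a 1e5 cm^-3 clump with 1 km/s velocity
# dispersion and 10 deg angle dispersion gives a field of order 10^2 μG.
print(round(dcf_bpos(n_h2_cm3=1e5, sigma_v_kms=1.0, sigma_theta_deg=10.0)))
```

The method's logic is that turbulence of a given kinetic energy can only bend a field line by a small angle if the field is strong, so a small angle dispersion implies a large B_pos.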

  • 14 authors · Aug 8, 2019