
Daily Papers

by AK and the research community

Apr 16

LiveXiv -- A Multi-Modal Live Benchmark Based on Arxiv Papers Content

The large-scale training of multi-modal models on data scraped from the web has shown outstanding utility in infusing these models with the required world knowledge to perform effectively on multiple downstream tasks. However, one downside of scraping data from the web is the potential contamination of the benchmarks on which the abilities of these models are often evaluated. To safeguard against test data contamination and to truly test the abilities of these foundation models, we propose LiveXiv: a scalable, evolving live benchmark based on scientific ArXiv papers. LiveXiv accesses domain-specific manuscripts at any given timestamp and automatically generates visual question-answer (VQA) pairs. This is done without any human in the loop, using the multi-modal content in the manuscripts, such as graphs, charts, and tables. Moreover, we introduce an efficient evaluation approach that estimates the performance of all models on the evolving benchmark using evaluations of only a subset of models, significantly reducing the overall evaluation cost. We benchmark multiple open and proprietary Large Multi-modal Models (LMMs) on the first version of our benchmark, showing its challenging nature and exposing the models' true abilities while avoiding contamination. Lastly, in our commitment to high quality, we have collected and evaluated a manually verified subset. Comparing its overall results to our automatic annotations, we find that the performance variance is indeed minimal (<2.5%). Our dataset is available online on HuggingFace, and our code will be available here.
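The subset-based evaluation idea in this abstract can be illustrated with a toy sketch. Everything below is hypothetical (the scores, the linear model, and the choice of evaluated subset); the abstract does not specify the paper's actual estimator:

```python
import numpy as np

# Hypothetical scores: each entry is one model's accuracy (%).
old = np.array([62.0, 55.0, 71.0, 48.0, 66.0])  # known for all models (old benchmark version)
new_subset = np.array([60.5, 54.0, 69.0])       # only the first 3 models re-evaluated on the new version

# Fit new ≈ a * old + b on the evaluated subset via least squares.
A = np.stack([old[:3], np.ones(3)], axis=1)
(a, b), *_ = np.linalg.lstsq(A, new_subset, rcond=None)

# Estimate the remaining models' scores on the new version without running them.
predicted = a * old[3:] + b
```

The point is only the shape of the idea: re-evaluating a subset lets one extrapolate to the full model pool, trading a small estimation error for a large reduction in evaluation cost.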

  • 11 authors
·
Oct 14, 2024

Understanding the Effect of Noise in LLM Training Data with Algorithmic Chains of Thought

During both pretraining and fine-tuning, Large Language Models (LLMs) are trained on trillions of tokens of text of widely varying quality. Both phases of training typically involve heuristically filtering out "low-quality" or noisy training samples, yet little is known quantitatively about how the type or intensity of noise affects downstream performance. In this work, we study how noise in chain of thought (CoT) impacts task performance in the highly controlled setting of algorithmically solvable tasks. First, we develop the Traced Integer (TInt) framework to generate highly customizable noised execution traces for any arithmetic function on lists of integers. We then define two types of noise: static noise, a local form of noise which is applied after the CoT trace is computed, and dynamic noise, a global form of noise which propagates errors in the trace as it is computed. We then evaluate the test performance of pretrained models, both prompted and fine-tuned, on noised datasets with varying levels of dataset contamination and intensity. We find that fine-tuned models are extremely robust to high levels of static noise but struggle significantly more with lower levels of dynamic noise. In contrast, few-shot prompted models appear more sensitive to even static noise. We conclude with a discussion of how our findings affect noise-filtering best practices, in particular emphasizing the importance of removing samples containing destructive dynamic noise with global errors.
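The static vs. dynamic distinction can be made concrete with a minimal sketch for a running-sum trace. This is an illustration of the two noise definitions from the abstract, not the TInt framework itself; function names and the ±1 perturbation are assumptions:

```python
import random

def trace_sum(xs):
    # Clean CoT trace: the running partial sums for summing a list of integers.
    acc, steps = 0, []
    for x in xs:
        acc += x
        steps.append(acc)
    return steps

def static_noise(steps, p=0.3, rng=random):
    # Static (local) noise: applied AFTER the trace is computed.
    # A perturbed step does not affect any later step.
    return [s + rng.choice([-1, 1]) if rng.random() < p else s for s in steps]

def dynamic_noise(xs, p=0.3, rng=random):
    # Dynamic (global) noise: injected WHILE the trace is computed.
    # An error in one step propagates into every later step.
    acc, steps = 0, []
    for x in xs:
        acc += x
        if rng.random() < p:
            acc += rng.choice([-1, 1])
        steps.append(acc)
    return steps
```

Under static noise the final answer can still be correct even when intermediate steps are wrong; under dynamic noise an early error corrupts the rest of the trace, which matches the abstract's finding that dynamic noise is the more destructive kind.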

  • 2 authors
·
Feb 6, 2024

When Layers Play the Lottery, all Tickets Win at Initialization

Pruning is a standard technique for reducing the computational cost of deep networks. Many advances in pruning leverage concepts from the Lottery Ticket Hypothesis (LTH). LTH reveals that inside a trained dense network there exist sparse subnetworks (tickets) able to achieve similar accuracy (i.e., win the lottery: winning tickets). Pruning at initialization focuses on finding winning tickets without training a dense network. Studies on these concepts share the trend that subnetworks come from weight or filter pruning. In this work, we investigate LTH and pruning at initialization through the lens of layer pruning. First, we confirm the existence of winning tickets when the pruning process removes layers. Leveraging this observation, we propose to discover these winning tickets at initialization, eliminating the requirement of heavy computational resources for training the initial (over-parameterized) dense network. Extensive experiments show that our winning tickets notably speed up the training phase and reduce carbon emissions by up to 51%, an important step towards democratization and green Artificial Intelligence. Beyond computational benefits, our winning tickets exhibit robustness against adversarial and out-of-distribution examples. Finally, we show that our subnetworks easily win the lottery at initialization, while tickets from filter removal (the standard structured LTH) hardly become winning tickets.

  • 4 authors
·
Jan 25, 2023

Surveying the Effects of Quality, Diversity, and Complexity in Synthetic Data From Large Language Models

Synthetic data generation with Large Language Models is a promising paradigm for augmenting natural data over a nearly infinite range of tasks. Given this variety, direct comparisons among synthetic data generation algorithms are scarce, making it difficult to understand where improvement comes from and what bottlenecks exist. We propose to evaluate algorithms via the makeup of synthetic data generated by each algorithm in terms of data quality, diversity, and complexity. We choose these three characteristics for their significance in open-ended processes and the impact each has on the capabilities of downstream models. We find quality to be essential for in-distribution model generalization, diversity to be essential for out-of-distribution generalization, and complexity to be beneficial for both. Further, we emphasize the existence of Quality-Diversity trade-offs in training data and the downstream effects on model performance. We then examine the effect of various components in the synthetic data pipeline on each data characteristic. This examination allows us to taxonomize and compare synthetic data generation algorithms through the components they utilize and the resulting effects on data QDC composition. This analysis extends into a discussion on the importance of balancing QDC in synthetic data for efficient reinforcement learning and self-improvement algorithms. Analogous to the QD trade-offs in training data, often there exist trade-offs between model output quality and output diversity which impact the composition of synthetic data. We observe that many models are currently evaluated and optimized only for output quality, thereby limiting output diversity and the potential for self-improvement. We argue that balancing these trade-offs is essential to the development of future self-improvement algorithms and highlight a number of works making progress in this direction.

  • 20 authors
·
Dec 3, 2024

How Vulnerable Are AI Agents to Indirect Prompt Injections? Insights from a Large-Scale Public Competition

LLM-based agents are increasingly deployed in high-stakes settings where they process external data sources such as emails, documents, and code repositories. This creates exposure to indirect prompt injection attacks, where adversarial instructions embedded in external content manipulate agent behavior without user awareness. A critical but underexplored dimension of this threat is concealment: since users tend to observe only an agent's final response, an attack can conceal its existence by presenting no clue of compromise in the final user-facing response while successfully executing harmful actions. This leaves users unaware of the manipulation and likely to accept harmful outcomes as legitimate. We present findings from a large-scale public red-teaming competition evaluating this dual objective across three agent settings: tool calling, coding, and computer use. The competition attracted 464 participants who submitted 272,000 attack attempts against 13 frontier models, yielding 8,648 successful attacks across 41 scenarios. All models proved vulnerable, with attack success rates ranging from 0.5% (Claude Opus 4.5) to 8.5% (Gemini 2.5 Pro). We identify universal attack strategies that transfer across 21 of 41 behaviors and multiple model families, suggesting fundamental weaknesses in instruction-following architectures. Capability and robustness showed weak correlation, with Gemini 2.5 Pro exhibiting both high capability and high vulnerability. To address benchmark saturation and obsolescence, we will endeavor to deliver quarterly updates through continued red-teaming competitions. We open-source the competition environment for use in evaluations, along with 95 successful attacks against Qwen that did not transfer to any closed-source model. We share model-specific attack data with the respective frontier labs and the full dataset with the UK AISI and US CAISI to support robustness research.

  • Gray Swan
·
Mar 16

CACARA: Cross-Modal Alignment Leveraging a Text-Centric Approach for Cost-Effective Multimodal and Multilingual Learning

As deep learning models evolve, new applications and challenges are rapidly emerging. Tasks that once relied on a single modality, such as text, images, or audio, are now enriched by seamless interactions between multimodal data. These connections bridge information gaps: an image can visually materialize a text, while audio can add context to an image. Researchers have developed numerous multimodal models, but most rely on resource-intensive training across multiple modalities. Similarly, extending these models to new languages often follows the same resource-heavy training strategy. In this work, we propose a multimodal and multilingual architecture, CACARA, trained through emergent alignment learning, enabling the seamless integration of new modalities into an existing bimodal/multimodal model without requiring full retraining. This work breaks new ground by demonstrating that this emergent alignment paradigm can unlock multilingual capabilities from monolingual training. By fine-tuning the newly incorporated modality only on data aligned with the English language, our model develops support for over 100 languages without explicit multilingual pretraining or tuning of the text encoder. Such emergent multimodal and multilingual properties are gained efficiently, preserving previously learned knowledge at a training cost comparable to that of a monolingual model. Our strategy achieves an improvement of up to 14.24 percentage points in R@1 audio-to-text retrieval, outperforming state-of-the-art multimodal models -- all without the heavy computational cost of retraining across every modality and language.

  • 13 authors
·
Nov 29, 2025

Influence of pressure on properties of multi-gap type-I superconductor BeAu

We report on studies of the superconducting and normal-state properties of the noncentrosymmetric superconductor BeAu under hydrostatic pressure conditions. The room-temperature equation of state (EOS) reveals the values of the bulk modulus B_0 and its first derivative B'_0 at ambient pressure to be B_0 ≈ 132 GPa and B'_0 ≈ 30, respectively. Up to the highest pressures studied (p ≈ 2.2 GPa), BeAu remains a multi-gap type-I superconductor. The analysis of B_c(T, p) data within the self-consistent two-gap approach suggests the presence of two superconducting energy gaps, with gap-to-T_c ratios Δ_1/k_B T_c ≈ 2.3 and Δ_2/k_B T_c ≈ 1.1 for the larger and smaller gaps, respectively [Δ = Δ(0) is the zero-temperature value of the gap and k_B is the Boltzmann constant]. With increasing pressure, Δ_1/k_B T_c increases while Δ_2/k_B T_c decreases, suggesting that pressure enhances (weakens) the coupling strength between the superconducting carriers within the bands where the larger (smaller) superconducting energy gap has opened. The superconducting transition temperature T_c, the zero-temperature values of the superconducting gaps Δ_1 and Δ_2, and the zero-temperature value of the thermodynamic critical field B_c(0) decrease with increasing pressure, at rates of dT_c/dp ≈ -0.195 K/GPa, dΔ_1/dp ≈ -0.034 meV/GPa, dΔ_2/dp ≈ -0.029 meV/GPa, and dB_c(0)/dp = -2.65(1) mT/GPa, respectively. The measured B_c(0) values plotted as a function of T_c follow an empirical scaling relation established for conventional type-I superconductors.
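For reference, a quoted gap-to-T_c ratio converts to an absolute gap size via Δ(0) = (Δ/k_B T_c) · k_B T_c. A minimal sketch of that conversion (the T_c value in the usage note is hypothetical, as the abstract does not quote one):

```python
K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K (CODATA value)

def gap_mev(ratio, tc_kelvin):
    # Zero-temperature gap Δ(0) in meV from a gap-to-Tc ratio Δ/(k_B Tc).
    return ratio * K_B_EV * tc_kelvin * 1e3
```

For example, with a hypothetical T_c of 3.2 K, gap_mev(2.3, 3.2) gives roughly 0.63 meV for the larger gap.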

  • 10 authors
·
Feb 2, 2025

2D Theoretically Twistable Material Database

The study of twisted two-dimensional (2D) materials, where twisting layers create moiré superlattices, has opened new opportunities for investigating topological phases and strongly correlated physics. While systems such as twisted bilayer graphene (TBG) and twisted transition metal dichalcogenides (TMDs) have been extensively studied, the broader potential of a seemingly infinite set of other twistable 2D materials remains largely unexplored. In this paper, we define "theoretically twistable materials" as single- or multi-layer structures that allow for the construction of simple continuum models of their moiré structures. This excludes, for example, materials with a "spaghetti" of bands or those with numerous crossing points at the Fermi level, for which theoretical moiré modeling is unfeasible. We present a high-throughput algorithm that systematically searches for theoretically twistable semimetals and insulators based on the Topological 2D Materials Database. By analyzing key electronic properties, we identify thousands of new candidate materials that could host rich topological and strongly correlated phenomena when twisted. We propose representative twistable materials for realizing different types of moiré systems, including materials with different Bravais lattices, valleys, and strength of spin-orbital coupling. We provide examples of crystal growth for several of these materials and showcase twisted bilayer band structures along with simplified twisted continuum models. Our results significantly broaden the scope of moiré heterostructures and provide a valuable resource for future experimental and theoretical studies on novel moiré systems.

  • 25 authors
·
Nov 14, 2024

Dark Energy Survey Year 3 Results: Cosmology from Cosmic Shear and Robustness to Data Calibration

This work, together with its companion paper, Secco and Samuroff et al. (2021), presents the Dark Energy Survey Year 3 cosmic shear measurements and cosmological constraints based on an analysis of over 100 million source galaxies. With the data spanning 4143 deg^2 on the sky, divided into four redshift bins, we produce the highest-significance measurement of cosmic shear to date, with a signal-to-noise of 40. We conduct a blind analysis in the context of the ΛCDM model and find a 3% constraint on the clustering amplitude, S_8 ≡ σ_8 (Ω_m/0.3)^0.5 = 0.759^{+0.025}_{-0.023}. A ΛCDM-optimized analysis, which safely includes smaller-scale information, yields a 2% precision measurement of S_8 = 0.772^{+0.018}_{-0.017} that is consistent with the fiducial case. The two low-redshift measurements are statistically consistent with the Planck Cosmic Microwave Background result; however, both recovered S_8 values are lower than the high-redshift prediction by 2.3σ and 2.1σ (p-values of 0.02 and 0.05), respectively. The measurements are shown to be internally consistent across redshift bins, angular scales, and correlation functions. The analysis is demonstrated to be robust to calibration systematics, with the S_8 posterior consistent when varying the choice of redshift calibration sample, the modeling of redshift uncertainty, and the methodology. Similarly, we find that the corrections included to account for the blending of galaxies shift our best-fit S_8 by 0.5σ without incurring a substantial increase in uncertainty. We examine the limiting factors for the precision of the cosmological constraints and find observational systematics to be subdominant to the modeling of astrophysics. Specifically, we identify the uncertainties in modeling baryonic effects and intrinsic alignments as the limiting systematics.
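The S_8 convention quoted in this abstract is simple to evaluate numerically; the σ_8 and Ω_m inputs below are illustrative, not the survey's posterior values:

```python
def s8(sigma8, omega_m, alpha=0.5):
    # Clustering amplitude S_8 ≡ σ_8 · (Ω_m / 0.3)^α, with α = 0.5
    # in the convention used by the DES Y3 cosmic shear analysis.
    return sigma8 * (omega_m / 0.3) ** alpha
```

At the pivot Ω_m = 0.3 the factor is exactly 1, so S_8 = σ_8 there; for Ω_m below 0.3 the same σ_8 yields a smaller S_8, which is why S_8 is the better-constrained combination for lensing.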

  • 148 authors
·
May 27, 2021

Sloan Digital Sky Survey IV: Mapping the Milky Way, Nearby Galaxies, and the Distant Universe

We describe the Sloan Digital Sky Survey IV (SDSS-IV), a project encompassing three major spectroscopic programs. The Apache Point Observatory Galactic Evolution Experiment 2 (APOGEE-2) is observing hundreds of thousands of Milky Way stars at high resolution and high signal-to-noise ratio in the near-infrared. The Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey is obtaining spatially resolved spectroscopy for thousands of nearby galaxies (median redshift of z = 0.03). The extended Baryon Oscillation Spectroscopic Survey (eBOSS) is mapping the galaxy, quasar, and neutral gas distributions between redshifts z = 0.6 and 3.5 to constrain cosmology using baryon acoustic oscillations, redshift space distortions, and the shape of the power spectrum. Within eBOSS, we are conducting two major subprograms: the SPectroscopic IDentification of eROSITA Sources (SPIDERS), investigating X-ray AGN and galaxies in X-ray clusters, and the Time Domain Spectroscopic Survey (TDSS), obtaining spectra of variable sources. All programs use the 2.5-meter Sloan Foundation Telescope at Apache Point Observatory; observations there began in Summer 2014. APOGEE-2 also operates a second near-infrared spectrograph at the 2.5-meter du Pont Telescope at Las Campanas Observatory, with observations beginning in early 2017. Observations at both facilities are scheduled to continue through 2020. In keeping with previous SDSS policy, SDSS-IV provides regularly scheduled public data releases; the first one, Data Release 13, was made available in July 2016.

  • 353 authors
·
Feb 28, 2017