
Daily Papers

by AK and the research community

Apr 16

P-V criticality, Joule-Thomson expansion, and holographic heat engine of charged Hayward-AdS black holes with a cloud of strings and perfect fluid dark matter

We construct the charged Hayward-anti-de Sitter (AdS) black hole (BH) with a cloud of strings (CS) and perfect fluid dark matter (PFDM), and analyze its extended thermodynamic phase structure. The Hayward parameter g replaces the central singularity with a de Sitter (dS) core, while the CS parameter a and the PFDM parameter β encode astrophysically motivated matter content. Treating the cosmological constant as pressure, we derive the thermodynamic quantities, verify the Smarr relation, and establish P-V criticality with a van der Waals (vdW)-like small-large BH phase transition and mean-field critical exponents. The Gibbs free energy (GFE) exhibits the characteristic swallowtail below the critical pressure. The Joule-Thomson (JT) expansion yields T_i^{min}/T_c ≈ 0.247, roughly half the Reissner-Nordström-AdS value. The parameters g and Q contract the cooling region, β expands it, and a reshapes it non-monotonically. A holographic heat engine with a rectangular cycle gives efficiencies η = 0.362--0.396 and Carnot benchmarking ratios η/η_C = 0.625--0.791 across six configurations. The CS parameter improves the engine efficiency by reducing the enthalpy at fixed thermodynamic volume, while the PFDM parameter degrades it by adding gravitational enthalpy without contributing to the mechanical work.
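
The Carnot benchmarking quoted above is the ratio η/η_C, where η_C = 1 - T_C/T_H is the ideal efficiency between the cycle's cold and hot reservoirs. A minimal sketch of the bookkeeping; the reservoir temperatures below are illustrative placeholders, not values from the paper:

```python
def carnot_efficiency(t_cold: float, t_hot: float) -> float:
    """Ideal Carnot efficiency between two reservoir temperatures."""
    return 1.0 - t_cold / t_hot

def benchmark_ratio(eta_engine: float, t_cold: float, t_hot: float) -> float:
    """Engine efficiency relative to the Carnot bound, eta / eta_C."""
    return eta_engine / carnot_efficiency(t_cold, t_hot)

# Hypothetical reservoir temperatures chosen so the ratio lands inside the
# reported range 0.625--0.791; the paper's actual cycle endpoints differ.
ratio = benchmark_ratio(0.379, t_cold=0.5, t_hot=1.0)  # eta_C = 0.5
```

Any engine efficiency in the reported 0.362--0.396 band paired with its cycle's reservoir temperatures yields the corresponding η/η_C entry in this way.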

  • 3 authors
·
Mar 1

Does Physical Adversarial Example Really Matter to Autonomous Driving? Towards System-Level Effect of Adversarial Object Evasion Attack

In autonomous driving (AD), accurate perception is indispensable to achieving safe and secure driving. Due to its safety-criticality, the security of AD perception has been widely studied. Among different attacks on AD perception, physical adversarial object evasion attacks are especially severe. However, we find that all existing literature only evaluates attack effect at the targeted AI component level, not at the system level, i.e., with the entire system semantics and context such as the full AD pipeline. This raises a critical research question: can these existing attack designs effectively achieve system-level attack effects (e.g., traffic rule violations) in the real-world AD context? In this work, we conduct the first measurement study on whether and how effectively the existing designs can lead to system-level effects, focusing on STOP sign-evasion attacks due to their popularity and severity. Our evaluation results show that all the representative prior works cannot achieve any system-level effects. We observe two design limitations in the prior works: 1) physical model-inconsistent object size distribution in pixel sampling and 2) lack of vehicle plant model and AD system model consideration. Then, we propose SysAdv, a novel system-driven attack design in the AD context, and our evaluation results show that the system-level effects can be significantly improved, i.e., the violation rate increases by around 70%.

  • 5 authors
·
Aug 22, 2023

The Principles of Deep Learning Theory

This book develops an effective theory approach to understanding deep neural networks of practical relevance. Beginning from a first-principles component-level picture of networks, we explain how to determine an accurate description of the output of trained networks by solving layer-to-layer iteration equations and nonlinear learning dynamics. A main result is that the predictions of networks are described by nearly-Gaussian distributions, with the depth-to-width aspect ratio of the network controlling the deviations from the infinite-width Gaussian description. We explain how these effectively-deep networks learn nontrivial representations from training and more broadly analyze the mechanism of representation learning for nonlinear models. From a nearly-kernel-methods perspective, we find that the dependence of such models' predictions on the underlying learning algorithm can be expressed in a simple and universal way. To obtain these results, we develop the notion of representation group flow (RG flow) to characterize the propagation of signals through the network. By tuning networks to criticality, we give a practical solution to the exploding and vanishing gradient problem. We further explain how RG flow leads to near-universal behavior and lets us categorize networks built from different activation functions into universality classes. Altogether, we show that the depth-to-width ratio governs the effective model complexity of the ensemble of trained networks. By using information-theoretic techniques, we estimate the optimal aspect ratio at which we expect the network to be practically most useful and show how residual connections can be used to push this scale to arbitrary depths. With these tools, we can learn in detail about the inductive bias of architectures, hyperparameters, and optimizers.

  • 3 authors
·
Jun 18, 2021

Evaluating Intelligence via Trial and Error

Intelligence is a crucial trait for species to find solutions within a limited number of trial-and-error attempts. Building on this idea, we introduce Survival Game as a framework to evaluate intelligence based on the number of failed attempts in a trial-and-error process. Fewer failures indicate higher intelligence. When the expectation and variance of failure counts are both finite, it signals the ability to consistently find solutions to new challenges, which we define as the Autonomous Level of intelligence. Using Survival Game, we comprehensively evaluate existing AI systems. Our results show that while AI systems achieve the Autonomous Level in simple tasks, they are still far from it in more complex tasks, such as vision, search, recommendation, and language. While scaling current AI technologies might help, this would come at an astronomical cost. Projections suggest that achieving the Autonomous Level for general tasks would require 10^26 parameters. To put this into perspective, loading such a massive model requires so many H100 GPUs that their total value is 10^7 times that of Apple Inc.'s market value. Even with Moore's Law, supporting such a parameter scale would take 70 years. This staggering cost highlights the complexity of human tasks and the inadequacies of current AI technologies. To further investigate this phenomenon, we conduct a theoretical analysis of Survival Game and its experimental results. Our findings suggest that human tasks possess a criticality property. As a result, the Autonomous Level requires a deep understanding of the task's underlying mechanisms. Current AI systems, however, do not fully grasp these mechanisms and instead rely on superficial mimicry, making it difficult for them to reach the Autonomous Level. We believe Survival Game can not only guide the future development of AI but also offer profound insights into human intelligence.
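
Under the simplest model of trial and error, where each attempt succeeds independently with probability p, the failure count is geometric and both of its moments are finite, so the Autonomous Level criterion is met. A small simulation of that criterion; the task model here is an illustrative assumption, not the paper's benchmark:

```python
import random

def failures_until_success(p_success: float, rng: random.Random) -> int:
    """Count failed attempts before the first success (a geometric trial)."""
    failures = 0
    while rng.random() >= p_success:
        failures += 1
    return failures

def survival_game_stats(p_success: float, episodes: int, seed: int = 0):
    """Empirical mean and variance of failure counts across episodes.
    Finite mean AND variance is the Autonomous Level criterion."""
    rng = random.Random(seed)
    counts = [failures_until_success(p_success, rng) for _ in range(episodes)]
    mean = sum(counts) / episodes
    var = sum((c - mean) ** 2 for c in counts) / episodes
    return mean, var

# For a geometric process, E[failures] = (1-p)/p and Var = (1-p)/p^2,
# so p = 0.25 gives mean ~3 and variance ~12.
mean, var = survival_game_stats(0.25, episodes=10_000)
```

The paper's point is that on hard tasks the empirical failure distribution behaves as if these moments diverge, which this finite-moment toy model does not capture.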

  • 10 authors
·
Feb 26, 2025

Improving Generalization of Adversarial Training via Robust Critical Fine-Tuning

Deep neural networks are susceptible to adversarial examples, posing a significant security risk in critical applications. Adversarial Training (AT) is a well-established technique to enhance adversarial robustness, but it often comes at the cost of decreased generalization ability. This paper proposes Robustness Critical Fine-Tuning (RiFT), a novel approach to enhance generalization without compromising adversarial robustness. The core idea of RiFT is to exploit the redundant capacity for robustness by fine-tuning the adversarially trained model on its non-robust-critical module. To do so, we introduce module robust criticality (MRC), a measure that evaluates the significance of a given module to model robustness under worst-case weight perturbations. Using this measure, we identify the module with the lowest MRC value as the non-robust-critical module and fine-tune its weights to obtain fine-tuned weights. Subsequently, we linearly interpolate between the adversarially trained weights and fine-tuned weights to derive the optimal fine-tuned model weights. We demonstrate the efficacy of RiFT on ResNet18, ResNet34, and WideResNet34-10 models trained on CIFAR10, CIFAR100, and Tiny-ImageNet datasets. Our experiments show that RiFT can significantly improve both generalization and out-of-distribution robustness by around 1.5% while maintaining or even slightly enhancing adversarial robustness. Code is available at https://github.com/microsoft/robustlearn.
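
The final interpolation step described above is plain linear mixing of two weight sets, w = (1 - α)·w_AT + α·w_FT. A minimal numpy sketch; the tensor names and the single α value are illustrative assumptions (RiFT would sweep α and keep the point that preserves robustness while improving generalization):

```python
import numpy as np

def interpolate_weights(w_at: dict, w_ft: dict, alpha: float) -> dict:
    """Linearly interpolate between adversarially trained weights (w_at)
    and fine-tuned weights (w_ft): w = (1 - alpha) * w_at + alpha * w_ft."""
    return {name: (1.0 - alpha) * w_at[name] + alpha * w_ft[name]
            for name in w_at}

# Toy two-tensor "model" with hypothetical parameter names.
w_at = {"conv.weight": np.zeros(4), "fc.weight": np.zeros(2)}
w_ft = {"conv.weight": np.ones(4), "fc.weight": np.ones(2)}
w_mix = interpolate_weights(w_at, w_ft, alpha=0.3)  # each entry becomes 0.3
```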

  • 5 authors
·
Aug 1, 2023

Benefits of Resource Strategy for Sustainable Materials Research and Development

Material and product life cycles are based on complex value chains of technology-specific elements. Resource strategy aspects of essential and strategic raw materials have a direct impact on applications of new functionalized materials and the development of novel products. Thus, an urgent challenge of modern materials science is to obtain information about the supply risk and environmental aspects of resource utilization, especially at an early stage of basic research. Combining the fields of materials science, industrial engineering, and resource strategy enables a multidisciplinary research approach to identify specific risks within the value chain, aggregated as the so-called resource criticality. Here, we demonstrate a step-by-step criticality assessment in the sector of basic materials research for the multifunctional hexagonal manganite YMnO3, a candidate for future electronic systems. Raw material restrictions can be quantitatively identified, even at such an early stage of materials research, from eleven long-term indicators including our newly developed Sector Competition Index. This resource-strategy approach for modern materials science integrates two objectives, reduced supply risk and enhanced environmental sustainability of new functionalized materials, revealing both drawbacks and benefits for sustainable materials research and development.

  • 7 authors
·
Mar 6, 2017

TokenSelect: Efficient Long-Context Inference and Length Extrapolation for LLMs via Dynamic Token-Level KV Cache Selection

With the development of large language models (LLMs), the ability to handle longer contexts has become a key capability for Web applications such as cross-document understanding and LLM-powered search systems. However, this progress faces two major challenges: performance degradation on out-of-distribution sequence lengths, and excessively long inference times caused by the quadratic computational complexity of attention. These issues hinder the application of LLMs in long-context scenarios. In this paper, we propose Dynamic Token-Level KV Cache Selection (TokenSelect), a model-agnostic, training-free method for efficient and accurate long-context inference. TokenSelect builds upon the observation of non-contiguous attention sparsity, using Query-Key dot products to measure per-head KV Cache criticality at the token level. Through a per-head soft voting mechanism, TokenSelect selectively involves a small number of critical KV Cache tokens in the attention calculation without sacrificing accuracy. To further accelerate TokenSelect, we design a Selection Cache based on observations of consecutive Query similarity and implement an efficient dot-product kernel, significantly reducing the overhead of token selection. A comprehensive evaluation of TokenSelect demonstrates up to 23.84x speedup in attention computation and up to 2.28x acceleration in end-to-end latency, while providing superior performance compared to state-of-the-art long-context inference methods.
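
The per-head scoring and soft voting can be sketched in a few lines of numpy. The shapes, the softmax-then-sum vote, and the fixed top-k budget below are illustrative assumptions, not the paper's exact kernel:

```python
import numpy as np

def select_kv_tokens(q, k_cache, top_k):
    """Score each cached token per head by its dot product with the current
    query, soft-vote across heads, and keep the top_k most critical tokens.
    q: (heads, dim); k_cache: (heads, tokens, dim)."""
    scores = np.einsum("hd,htd->ht", q, k_cache)           # per-head criticality
    votes = np.exp(scores - scores.max(axis=1, keepdims=True))
    votes /= votes.sum(axis=1, keepdims=True)              # softmax per head
    pooled = votes.sum(axis=0)                             # soft vote over heads
    return np.argsort(pooled)[-top_k:]                     # indices of kept KV

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))            # 4 heads, head dim 8 (toy sizes)
k_cache = rng.normal(size=(4, 16, 8))  # 16 cached tokens
kept = select_kv_tokens(q, k_cache, top_k=4)
```

Attention would then run only over `k_cache[:, kept]` (and the matching values), which is the source of the reported speedup.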

  • 8 authors
·
Nov 5, 2024

Doping the chiral spin liquid -- topological superconductor or chiral metal?

We point out that there are two different chiral spin liquid states on the triangular lattice and discuss the conducting states that are expected on doping them. These states, labeled CSL1 and CSL2, are associated with two distinct topological orders with different edge states, although they both spontaneously break time-reversal symmetry and exhibit the same quantized spin Hall conductance. While CSL1 is related to the Kalmeyer-Laughlin state, CSL2 is the ν=4 member of Kitaev's 16-fold way classification. Both states are described within the Abrikosov fermion representation of spins, and the effect of doping can be accessed by introducing charged holons. On doping CSL2, condensation of charged holons leads to a topological d+id superconductor. On doping CSL1, in sharp contrast, two different scenarios can arise: first, if holons condense, a chiral metal with doubled unit cell and finite Hall conductivity is obtained. In a second, novel scenario, the internal magnetic flux adjusts with doping and holons form a bosonic integer quantum Hall (BIQH) state. Remarkably, the latter phase is identical to a d+id superconductor. In this case the Mott insulator to superconductor transition is associated with a bosonic variant of the integer quantum Hall plateau transition for the holon. We connect the above two scenarios to two recent numerical studies of doped chiral spin liquids on the triangular lattice. Our work clarifies the complex relation between topological superconductors, chiral spin liquids, and quantum criticality.

  • 3 authors
·
Nov 19, 2020

Fisher Curvature Scaling at Critical Points: An Exact Information-Geometric Exponent from Periodic Boundary Conditions

We study the scalar curvature of the Fisher information metric on the microscopic coupling-parameter manifold of lattice spin models at criticality. For a d-dimensional lattice with periodic boundary conditions and n = L^d sites, the Fisher manifold has m = d·n dimensions (one per bond), and we find |R(J_c)| ~ n^{d_R} with d_R = (dν + 2η)/(dν + η), where ν and η are the correlation-length and anomalous-dimension critical exponents. For 2D Ising (ν = 1, η = 1/4), this predicts d_R = 10/9, confirmed by exact transfer-matrix computations (L = 6--9: d_R = 1.1115 ± 0.0002) and multi-seed MCMC through L = 24. For 3D Ising (ν = 0.630, η = 0.0363), the prediction d_R = 1.019 is consistent with MCMC on L^3 tori up to L = 10 (power-law fit: d_R = 1.040). For 2D Potts q = 3 (predicted 33/29 ≈ 1.138), FFT-MCMC through L = 40 shows d_eff oscillating non-monotonically around ~1.20, consistent with O(1/(ln L)^2) logarithmic corrections. For q = 4 (predicted 22/19), effective exponents oscillate with strong logarithmic corrections. The Ricci decomposition identities R_3 = -R_1/2 and R_4 = -R_2/2 hold to 5--6 digits for all models. This exponent is distinct from the Ruppeiner thermodynamic curvature and reflects the collective geometry of the growing Fisher manifold. We provide falsification criteria and predictions for additional universality classes.
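
The exponent formula can be checked exactly with rational arithmetic. The critical exponents plugged in below are the standard literature values for these universality classes, matching the predictions quoted in the abstract:

```python
from fractions import Fraction

def fisher_curvature_exponent(d, nu, eta):
    """d_R = (d*nu + 2*eta) / (d*nu + eta), the scaling exponent of |R(J_c)|."""
    return (d * nu + 2 * eta) / (d * nu + eta)

# 2D Ising: nu = 1, eta = 1/4      ->  10/9
d_r_ising2d = fisher_curvature_exponent(2, Fraction(1), Fraction(1, 4))

# 2D Potts q = 3: nu = 5/6, eta = 4/15  ->  33/29 ~ 1.138
d_r_potts3 = fisher_curvature_exponent(2, Fraction(5, 6), Fraction(4, 15))

# 2D Potts q = 4: nu = 2/3, eta = 1/4   ->  22/19
d_r_potts4 = fisher_curvature_exponent(2, Fraction(2, 3), Fraction(1, 4))

# 3D Ising (numerical exponents): prediction ~1.019
d_r_ising3d = fisher_curvature_exponent(3, 0.630, 0.0363)
```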

  • 1 author
·
Mar 8

TempFlow-GRPO: When Timing Matters for GRPO in Flow Models

Recent flow matching models for text-to-image generation have achieved remarkable quality, yet their integration with reinforcement learning for human preference alignment remains suboptimal, hindering fine-grained reward-based optimization. We observe that the key impediment to effective GRPO training of flow models is the temporal uniformity assumption in existing approaches: sparse terminal rewards with uniform credit assignment fail to capture the varying criticality of decisions across generation timesteps, resulting in inefficient exploration and suboptimal convergence. To remedy this shortcoming, we introduce TempFlow-GRPO (Temporal Flow GRPO), a principled GRPO framework that captures and exploits the temporal structure inherent in flow-based generation. TempFlow-GRPO introduces two key innovations: (i) a trajectory branching mechanism that provides process rewards by concentrating stochasticity at designated branching points, enabling precise credit assignment without requiring specialized intermediate reward models; and (ii) a noise-aware weighting scheme that modulates policy optimization according to the intrinsic exploration potential of each timestep, prioritizing learning during high-impact early stages while ensuring stable refinement in later phases. These innovations endow the model with temporally-aware optimization that respects the underlying generative dynamics, leading to state-of-the-art performance in human preference alignment and standard text-to-image benchmarks.

  • 8 authors
·
Aug 6, 2025

Dynamical phase diagram of synchronization in one dimension: universal behavior from Edwards-Wilkinson to random deposition through Kardar-Parisi-Zhang

Synchronization in one dimension displays generic scale invariance with universal properties previously observed in surface kinetic roughening and the wider context of the Kardar-Parisi-Zhang (KPZ) universality class. This has been established for phase oscillators and also for some limit-cycle oscillators, both in the presence of columnar (quenched) disorder and of time-dependent noise, by extensive numerical simulations, and has been analytically motivated by continuum approximations in the strong oscillator coupling limit. The robustness and the precise boundaries in parameter space for such critical behavior remain unclear, however, which may preclude further developments, including the extension of these results to higher dimensions and the experimental observation of nonequilibrium criticality in synchronizing (e.g., electronic or chemical) oscillators. We here present complete numerical phase diagrams of one-dimensional synchronization, including saturation times and values, but, most importantly, also dynamical features giving insight into the gradual emergence of synchronous dynamics, based on systems of phase oscillators with either type of randomness. In the absence of synchronization, the dynamics evolves as expected for random deposition (for time-dependent noise) or linear growth (for columnar disorder), while a crossover from Edwards-Wilkinson to Kardar-Parisi-Zhang behavior (with the corresponding type of randomness) is observed as the randomness strength, or the nonoddity of the coupling among oscillators, is increased in the synchronous region, their combined effect being partially captured by the so-called KPZ coupling. The distortion of scaling due to phase slips near the desynchronization boundary, a feature that is likely to play a role in experimental contexts, is also discussed.

  • 2 authors
·
Apr 6

Anchoring Values in Temporal and Group Dimensions for Flow Matching Model Alignment

Group Relative Policy Optimization (GRPO) has proven highly effective in enhancing the alignment capabilities of Large Language Models (LLMs). However, current adaptations of GRPO for flow matching-based image generation neglect a foundational conflict between its core principles and the distinct dynamics of the visual synthesis process. This mismatch leads to two key limitations: (i) Uniformly applying a sparse terminal reward across all timesteps impairs temporal credit assignment, ignoring the differing criticality of generation phases from early structure formation to late-stage tuning. (ii) Exclusive reliance on relative, intra-group rewards causes the optimization signal to fade as training converges, leading to optimization stagnation when reward diversity is entirely depleted. To address these limitations, we propose Value-Anchored Group Policy Optimization (VGPO), a framework that redefines value estimation across both temporal and group dimensions. Specifically, VGPO transforms the sparse terminal reward into dense, process-aware value estimates, enabling precise credit assignment by modeling the expected cumulative reward at each generative stage. Furthermore, VGPO replaces standard group normalization with a normalization scheme anchored by absolute values, maintaining a stable optimization signal even as reward diversity declines. Extensive experiments on three benchmarks demonstrate that VGPO achieves state-of-the-art image quality while simultaneously improving task-specific accuracy, effectively mitigating reward hacking. Project webpage: https://yawen-shao.github.io/VGPO/.
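
The stagnation problem in point (ii) is easy to see numerically: when every rollout in a group earns the same reward, a purely relative advantage collapses to zero, while an advantage anchored to an absolute baseline does not. The anchored formula below is a hypothetical toy formulation of that idea, not the paper's exact estimator:

```python
import numpy as np

def group_normalized_advantage(rewards):
    """Standard GRPO advantage: purely relative within the group;
    collapses toward zero once all rewards in the group agree."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

def value_anchored_advantage(rewards, baseline):
    """Hypothetical value-anchored variant: keeps an absolute baseline in
    the signal so the gradient survives depleted reward diversity."""
    r = np.asarray(rewards, dtype=float)
    return (r - baseline) / (np.abs(r - baseline).mean() + 1e-8)

# A converged group: every rollout scores 0.6 against a target value of 1.0.
same = [0.6, 0.6, 0.6, 0.6]
rel = group_normalized_advantage(same)               # all zero: no signal
anc = value_anchored_advantage(same, baseline=1.0)   # nonzero, still points up
```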

  • 7 authors
·
Dec 13, 2025

Multiphysics Continuous Shape Optimization of the TAP Reactor Components

The Transatomic Power (TAP) reactor has an unusual design for a molten salt reactor technology, building upon the foundation laid by the Molten Salt Reactor Experiment (MSRE). This design introduces three key modifications to enhance efficiency and compactness: a revised fuel salt composition, an alternative moderator material, and moderator pins surrounded by the molten salt fuel. Unlike traditional solid-fueled reactors that rely on excess positive reactivity at the beginning of life, the TAP concept employs a dynamic approach. The core's design, featuring a cylindrical geometry with square assemblies of moderator rods surrounded by flowing fuel salt, provides flexibility in adjusting the moderator-to-fuel ratio during operation via movable moderator rods, adding criticality control capability beyond the control rod system. Shape optimization of the core can play a crucial role in enhancing performance and efficiency. By applying multiphysics continuous shape optimization techniques to key components, such as the unit cells of the TAP reactor or its moderator assemblies, we can fine-tune the reactor's geometry to achieve optimal performance in key physics areas such as neutronics and thermal hydraulics. We explore this aspect using the optimization module in the Multiphysics Object Oriented Simulation Environment (MOOSE) framework, which allows for multiphysics continuous shape optimization. The results reported here illustrate the benefits of applying continuous shape optimization in the design of nuclear reactor components and can help in extending the TAP reactor's performance.

  • 3 authors
·
Feb 2, 2025

Achieving the quantum field theory limit in far-from-equilibrium quantum link models

Realizations of gauge theories in setups of quantum synthetic matter open up the possibility of probing salient exotic phenomena in condensed matter and high-energy physics, along with potential applications in quantum information science and technology. In light of the impressive ongoing efforts to achieve such realizations, a fundamental question regarding quantum link model regularizations of lattice gauge theories is how faithfully they capture the quantum field theory limit of gauge theories. Recent work [Zache, Van Damme, Halimeh, Hauke, and Banerjee, https://journals.aps.org/prd/abstract/10.1103/PhysRevD.106.L091502] has shown through analytic derivations, exact diagonalization, and infinite matrix product state calculations that the low-energy physics of 1+1D U(1) quantum link models approaches the quantum field theory limit already at small link spin length S. Here, we show that the approach to this limit also lends itself to the far-from-equilibrium quench dynamics of lattice gauge theories, as demonstrated by our numerical simulations of the Loschmidt return rate and the chiral condensate in infinite matrix product states, which work directly in the thermodynamic limit. Similar to our findings in equilibrium that show a distinct behavior between half-integer and integer link spin lengths, we find that criticality emerging in the Loschmidt return rate is fundamentally different between half-integer and integer spin quantum link models in the regime of strong electric-field coupling. Our results further affirm that state-of-the-art finite-size ultracold-atom and NISQ-device implementations of quantum link lattice gauge theories have the real potential to simulate their quantum field theory limit even in the far-from-equilibrium regime.

  • 5 authors
·
Dec 8, 2021

EfficientVLA: Training-Free Acceleration and Compression for Vision-Language-Action Models

Vision-Language-Action (VLA) models, particularly diffusion-based architectures, demonstrate transformative potential for embodied intelligence but are severely hampered by high computational and memory demands stemming from extensive inherent and inference-time redundancies. While existing acceleration efforts often target isolated inefficiencies, such piecemeal solutions typically fail to holistically address the varied computational and memory bottlenecks across the entire VLA pipeline, thereby limiting practical deployability. We introduce EfficientVLA, a structured and training-free inference acceleration framework that systematically eliminates these barriers by cohesively exploiting multifaceted redundancies. EfficientVLA synergistically integrates three targeted strategies: (1) pruning of functionally inconsequential layers from the language module, guided by an analysis of inter-layer redundancies; (2) optimizing the visual processing pathway through a task-aware strategy that selects a compact, diverse set of visual tokens, balancing task-criticality with informational coverage; and (3) alleviating temporal computational redundancy within the iterative diffusion-based action head by strategically caching and reusing key intermediate features. We apply our method to a standard VLA model, CogACT, yielding a 1.93X inference speedup and reducing FLOPs to 28.9%, with only a 0.6% success rate drop on the SIMPLER benchmark.

  • 8 authors
·
Jun 11, 2025

BranchGRPO: Stable and Efficient GRPO with Structured Branching in Diffusion Models

Recent progress in aligning image and video generative models with Group Relative Policy Optimization (GRPO) has improved human preference alignment, but existing variants remain inefficient due to sequential rollouts and large numbers of sampling steps, and suffer from unreliable credit assignment: sparse terminal rewards are uniformly propagated across timesteps, failing to capture the varying criticality of decisions during denoising. In this paper, we present BranchGRPO, a method that restructures the rollout process into a branching tree, where shared prefixes amortize computation and pruning removes low-value paths and redundant depths. BranchGRPO introduces three contributions: (1) a branching scheme that amortizes rollout cost through shared prefixes while preserving exploration diversity; (2) a reward fusion and depth-wise advantage estimator that transforms sparse terminal rewards into dense step-level signals; and (3) pruning strategies that cut gradient computation but leave forward rollouts and exploration unaffected. On HPDv2.1 image alignment, BranchGRPO improves alignment scores by up to 16% over DanceGRPO, while reducing per-iteration training time by nearly 55%. A hybrid variant, BranchGRPO-Mix, further accelerates training to 4.7x faster than DanceGRPO without degrading alignment. On WanX video generation, it further achieves higher Video-Align scores with sharper and temporally consistent frames compared to DanceGRPO. Code is available at https://fredreic1849.github.io/BranchGRPO-Webpage/.
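
The reward fusion in contribution (2) can be sketched as a bottom-up pass over the branching rollout tree: leaves carry terminal rewards, and each internal node receives a fused value, giving every depth a dense signal. The mean-of-children fusion rule below is a hypothetical simplification of the paper's estimator:

```python
def fused_value(node):
    """Bottom-up reward fusion over a branching rollout tree.
    A leaf carries its terminal reward; an internal node takes the mean of
    its children's fused values, stored as a dense step-level signal."""
    if "reward" in node:                       # leaf: sparse terminal reward
        return node["reward"]
    vals = [fused_value(child) for child in node["children"]]
    node["value"] = sum(vals) / len(vals)      # dense value at this depth
    return node["value"]

# Two branch points, four leaves; the prefixes up to each branch are shared.
tree = {"children": [
    {"children": [{"reward": 1.0}, {"reward": 0.0}]},
    {"children": [{"reward": 0.5}, {"reward": 0.5}]},
]}
root_value = fused_value(tree)
```

A depth-wise advantage then compares each child's fused value against its parent's, so high-impact branch decisions receive credit even though only the leaves are rewarded.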

  • 7 authors
·
Sep 7, 2025