| arxiv_id (string, length 10) | abstract (string, 1.12k–2.15k chars) |
|---|---|
2502.04320 | Do the rich representations of multi-modal diffusion transformers (DiTs) exhibit unique properties that enhance their interpretability? We introduce ConceptAttention, a novel method that leverages the expressive power of DiT attention layers to generate high-quality saliency maps that precisely locate textual concepts ... |
2507.17702 | Mixture-of-Experts (MoE) has become a dominant architecture for scaling Large Language Models (LLMs) efficiently by decoupling total parameters from computational cost. However, this decoupling creates a critical challenge: predicting the model capacity of a given MoE configuration (e.g., expert activation ratio and ... |
2409.12903 | The pre-training phase of language models often begins with randomly initialized parameters. With the current trends in scaling models, training their large number of parameters can be extremely slow and costly. In contrast, small language models are less expensive to train, but they often cannot achieve the accuracy o... |
2504.07448 | Low-Rank Adaptation (LoRA) has emerged as a popular parameter-efficient fine-tuning (PEFT) method for Large Language Models (LLMs), yet it still incurs notable overhead and suffers from parameter interference in multi-task scenarios. We propose LoRA with Reduced Interference (LoRI), a simple yet effective approach tha... |
2509.04642 | Building reliable LLM agents requires decisions at two levels: the graph (which modules exist and how information flows) and the configuration of each node (models, prompts, tools, control knobs). Most existing optimizers tune configurations while holding the graph fixed, leaving structural failure modes unaddressed. We ... |
2509.23331 | Prompt evolution algorithms offer a powerful paradigm for enhancing AI systems based on closed-source models, yet little work explores whether aggregating results from multiple prompts to reach a consensus can further advance the system capability boundary. In this paper, we introduce Consensus-Evolve (C-Evolve), an evolut... |
2510.04618 | Large language model (LLM) applications such as agents and domain-specific reasoning increasingly rely on context adaptation: modifying inputs with instructions, strategies, or evidence, rather than weight updates. Prior approaches improve usability but often suffer from brevity bias, which drops domain insights for con... |
2207.06991 | Language models are defined over a finite set of inputs, which creates a vocabulary bottleneck when we attempt to scale the number of supported languages. Tackling this bottleneck results in a trade-off between what can be represented in the embedding matrix and computational issues in the output layer. This paper introd... |
2510.15103 | Modern language models are powerful, but typically static after deployment. A major obstacle to building models that continually learn over time is catastrophic forgetting, where updating on new data erases previously acquired capabilities. Motivated by the intuition that mitigating forgetting is challenging because tr... |
2209.15189 | Language models significantly benefit from context tokens, such as prompts or scratchpads. They perform better when prompted with informative instructions, and they acquire new reasoning capabilities by generating a scratchpad before predicting the final answers. However, they do not internalize these performance gains,... |
2504.15777 | How cost-effectively can strong reasoning abilities be achieved in language models? Driven by this fundamental question, we present Tina, a family of tiny reasoning models achieved with high cost-efficiency. Notably, Tina demonstrates that substantial reasoning performance can be developed using only minimal resources, b... |
2510.02375 | The impressive performance gains of modern language models currently rely on scaling parameters: larger models store more world knowledge and reason better. Yet compressing all world knowledge into parameters is unnecessary, as only a fraction is used per prompt, and impractical for edge devices with limited inference-... |
2506.11035 | Work in psychology has highlighted that the geometric model of similarity standard in deep learning is not psychologically plausible because its metric properties such as symmetry do not align with human perception of similarity. In contrast, prior work proposed an axiomatic theory of similarity with psychological plausibility bas... |
2509.17567 | We define “Agency” as the emergent capacity of AI systems to function as autonomous agents—actively discovering problems, formulating hypotheses, and executing solutions through self-directed engagement with environments and tools. This fundamental capability marks the dawn of the “Age of AI Agency”, driven by a critic... |
2509.13351 | Large language models (LLMs) have demonstrated impressive capabilities across diverse tasks, yet their ability to perform structured symbolic planning remains limited, particularly in domains requiring formal representations like the Planning Domain Definition Language (PDDL). In this paper, we present a novel instru... |
2509.14252 | Large Language Model (LLM) pretraining, finetuning, and evaluation rely on input-space reconstruction and generative capabilities. Yet, it has been observed in vision that embedding-space training objectives, e.g., with Joint Embedding Predictive Architectures (JEPAs), are far superior to their input-space counterpar... |
2509.03531 | Large language models are now routinely used in high-stakes applications where hallucinations can cause serious harm, such as medical consultations or legal advice. Existing hallucination detection methods, however, are impractical for real-world use, as they are either limited to short factual queries or require costl... |
2509.01092 | Large Language Models (LLMs) have demonstrated remarkable capabilities in leveraging extensive external knowledge to enhance responses in multi-turn and agentic applications, such as retrieval-augmented generation (RAG). However, processing long-context inputs introduces significant system latency and demands substan... |
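The table above is a flat dump of a two-column dataset (`arxiv_id`, `abstract`). As a minimal sketch of working with this schema, assuming pandas is available, the rows below are illustrative stand-ins copied from the table rather than the full dataset, which would normally be loaded from its Hugging Face repository:

```python
import pandas as pd

# Illustrative subset of the (arxiv_id, abstract) rows shown above;
# not the full dataset.
rows = [
    ("2502.04320", "Do the rich representations of multi-modal diffusion transformers ..."),
    ("2507.17702", "Mixture-of-Experts (MoE) has become a dominant architecture ..."),
    ("2510.04618", "Large language model (LLM) applications such as agents ..."),
]
df = pd.DataFrame(rows, columns=["arxiv_id", "abstract"])

# arxiv_id is a fixed-length 10-character string, so validation is cheap.
assert df["arxiv_id"].str.len().eq(10).all()

# Example query: case-insensitive keyword search over abstracts.
moe_hits = df[df["abstract"].str.contains("mixture-of-experts", case=False)]
print(moe_hits["arxiv_id"].tolist())  # ['2507.17702']
```

The same filter generalizes to any keyword; for the full dataset the `abstract` column would simply hold the complete 1.12k–2.15k-character strings instead of truncated previews.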
README.md exists but content is empty.
Downloads last month: 49