- How Much Knowledge Can You Pack into a LoRA Adapter without Harming LLM?
  Paper • 2502.14502 • Published • 92
- From RAG to Memory: Non-Parametric Continual Learning for Large Language Models
  Paper • 2502.14802 • Published • 13
- When an LLM is apprehensive about its answers -- and when its uncertainty is justified
  Paper • 2503.01688 • Published • 22
Collections including paper arxiv:2502.14502

- How Much Knowledge Can You Pack into a LoRA Adapter without Harming LLM?
  Paper • 2502.14502 • Published • 92
- s-nlp/Llama-3.1-8B-Instruct-TriviaQA-HighlyKnown
  Viewer • Updated • 91k • 27
- s-nlp/Llama-3.1-8B-Instruct-DBpedia-HighlyKnown
  Viewer • Updated • 21k • 19
- s-nlp/Mistral-7b-0.3-Instruct-TriviaQA-HighlyKnown
  Viewer • Updated • 91k • 18

- How Much Knowledge Can You Pack into a LoRA Adapter without Harming LLM?
  Paper • 2502.14502 • Published • 92
- SIFT: Grounding LLM Reasoning in Contexts via Stickers
  Paper • 2502.14922 • Published • 32
- SurveyX: Academic Survey Automation via Large Language Models
  Paper • 2502.14776 • Published • 100
- Make LoRA Great Again: Boosting LoRA with Adaptive Singular Values and Mixture-of-Experts Optimization Alignment
  Paper • 2502.16894 • Published • 33

- Is Safety Standard Same for Everyone? User-Specific Safety Evaluation of Large Language Models
  Paper • 2502.15086 • Published • 16
- How Much Knowledge Can You Pack into a LoRA Adapter without Harming LLM?
  Paper • 2502.14502 • Published • 92
- Does Time Have Its Place? Temporal Heads: Where Language Models Recall Time-specific Information
  Paper • 2502.14258 • Published • 26
- S^2R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning
  Paper • 2502.12853 • Published • 29

- MLGym: A New Framework and Benchmark for Advancing AI Research Agents
  Paper • 2502.14499 • Published • 195
- SuperGPQA: Scaling LLM Evaluation across 285 Graduate Disciplines
  Paper • 2502.14739 • Published • 110
- How Much Knowledge Can You Pack into a LoRA Adapter without Harming LLM?
  Paper • 2502.14502 • Published • 92
- PC-Agent: A Hierarchical Multi-Agent Collaboration Framework for Complex Task Automation on PC
  Paper • 2502.14282 • Published • 29

- rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking
  Paper • 2501.04519 • Published • 290
- Transformer^2: Self-adaptive LLMs
  Paper • 2501.06252 • Published • 55
- Multimodal LLMs Can Reason about Aesthetics in Zero-Shot
  Paper • 2501.09012 • Published • 10
- FAST: Efficient Action Tokenization for Vision-Language-Action Models
  Paper • 2501.09747 • Published • 29