Continuum: Efficient and Robust Multi-Turn LLM Agent Scheduling with KV Cache Time-to-Live Paper • 2511.02230 • Published Nov 4, 2025 • 2
FrontierCS: Evolving Challenges for Evolving Intelligence Paper • 2512.15699 • Published Dec 17, 2025 • 5
Meta-Harness: End-to-End Optimization of Model Harnesses Paper • 2603.28052 • Published 19 days ago • 18
Combee: Scaling Prompt Learning for Self-Improving Language Model Agents Paper • 2604.04247 • Published 13 days ago • 30
BEAVER: An Efficient Deterministic LLM Verifier Paper • 2512.05439 • Published Dec 5, 2025 • 36
Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models Paper • 2510.04618 • Published Oct 6, 2025 • 131
CacheBlend: Fast Large Language Model Serving for RAG with Cached Knowledge Fusion Paper • 2405.16444 • Published May 26, 2024 • 1
Cost-Efficient Serving of LLM Agents via Test-Time Plan Caching Paper • 2506.14852 • Published Jun 17, 2025 • 1
FlowRL: Matching Reward Distributions for LLM Reasoning Paper • 2509.15207 • Published Sep 18, 2025 • 118