🧠 A²FM: Adaptive Agent Foundation Model for Tool-Aware Hybrid Reasoning

A²FM (Adaptive Agent Foundation Model) unifies reasoning-centric and agentic paradigms into a single framework that adaptively selects among three execution modes: instant, reasoning, and agentic.
It follows a route-then-align training principle and introduces Adaptive Policy Optimization (APO) to jointly optimize accuracy and efficiency.
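The three modes and the router that picks among them can be illustrated with a minimal sketch. The keyword heuristic below stands in for the learned task-aware router trained in the route-then-align stage; it is an assumption for illustration, not the model's actual routing policy:

```python
def route(query: str) -> str:
    """Pick one of the three A²FM execution modes for a query.

    Illustrative heuristic only: the real router is a learned policy,
    not keyword matching.
    """
    q = query.lower()
    needs_tools = any(k in q for k in ("search", "browse", "latest"))
    needs_steps = any(k in q for k in ("prove", "solve", "compute"))
    if needs_tools:
        return "agentic"    # tool-augmented trajectory
    if needs_steps:
        return "reasoning"  # long chain-of-thought
    return "instant"        # direct answer, no deliberation

print(route("What is the capital of France?"))       # instant
print(route("Solve x^2 - 5x + 6 = 0"))               # reasoning
print(route("Search for the latest arXiv results"))  # agentic
```

The key idea this sketch captures is that each query is committed to exactly one mode before generation, so easy queries never pay the cost of deliberation or tool calls.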

A²FM achieves state-of-the-art performance on major reasoning and agentic benchmarks:

  • 13.4% on BrowseComp (agentic)
  • 70.4% on AIME25 (reasoning)
  • 16.7% on HLE (general)

Notably, its adaptive execution achieves a cost-of-pass of only $0.00487 per correct answer, cutting cost by 45.2% versus the reasoning mode and 33.5% versus the agentic mode, delivering substantially higher cost efficiency at comparable accuracy.
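The quoted savings imply the baselines' cost-of-pass; a quick back-of-the-envelope check, derived only from the numbers above:

```python
# Adaptive cost-of-pass reported in the card, in $ per correct answer.
adaptive = 0.00487

# A 45.2% cut vs. reasoning and a 33.5% cut vs. agentic mean
# adaptive = baseline * (1 - cut), so the implied baselines are:
reasoning_only = adaptive / (1 - 0.452)  # ≈ $0.00889 per correct answer
agentic_only = adaptive / (1 - 0.335)    # ≈ $0.00732 per correct answer

print(f"reasoning-only ≈ ${reasoning_only:.5f}")
print(f"agentic-only  ≈ ${agentic_only:.5f}")
```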

📄 Paper
💻 GitHub


🔑 Key Highlights

  • โš™๏ธ Unified reasoning & agentic modeling
    Integrates direct reasoning, chain-of-thought, and tool-augmented actions within a single backbone.

  • ๐Ÿ”„ Route-then-Align supervised fine-tuning
    Trains task-aware routing followed by mode-aligned trajectory learning.

  • ๐Ÿงฉ Adaptive Policy Optimization (APO)
    Reinforcement learning with adaptive sampling and cost-regularized reward for efficiencyโ€“accuracy balance.

  • ๐Ÿ’ก Substantially lower inference cost
    Adaptive routing cuts redundant reasoning/tool use while preserving correctness.
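The cost-regularized reward behind APO can be sketched minimally. The function name, the linear penalty form, and the `budget_usd`/`lam` values below are assumptions for illustration, not the paper's exact formulation:

```python
def cost_regularized_reward(correct: bool, cost_usd: float,
                            budget_usd: float = 0.01, lam: float = 0.5) -> float:
    """Reward = task accuracy minus a cost penalty (illustrative form).

    The penalty grows linearly with inference cost, so the policy is
    pushed toward the cheapest mode that still answers correctly.
    `budget_usd` and `lam` are hypothetical knobs.
    """
    accuracy = 1.0 if correct else 0.0
    return accuracy - lam * (cost_usd / budget_usd)

# Intuition: among modes that all solve the task, the cheapest wins.
modes = {"instant": 0.001, "reasoning": 0.009, "agentic": 0.007}
rewards = {m: cost_regularized_reward(True, c) for m, c in modes.items()}
best = max(rewards, key=rewards.get)
print(best)  # "instant": highest reward when every mode is correct
```

Under a reward of this shape, the optimizer only escalates to reasoning or agentic execution when the cheaper modes fail, which is the mechanism behind the cost reductions reported above.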


📘 Citation

@article{chen2025a2fm,
  title={A$^2$FM: An Adaptive Agent Foundation Model for Tool-Aware Hybrid Reasoning},
  author={Chen, Qianben and Cao, Jingyi and Zhang, Jiayu and Qin, Tianrui and Li, Xiaowan and Zhu, King and Shi, Dingfeng and Zhu, He and Liu, Minghao and Liang, Xiaobo and others},
  journal={arXiv preprint arXiv:2510.12838},
  year={2025}
}
📦 Model Details

  • Model: PersonalAILab/A2FM-32B-rl
  • Model size: 33B params
  • Tensor type: BF16
  • Format: Safetensors