SWE-AGILE

πŸ“£ News

[2026/02/23] SWE-AGILE has been accepted to the Findings of ACL 2026.

πŸ”₯ Overview

Prior approaches typically lack the explicit System-2 reasoning required for deep analysis. While recent reasoning models demonstrate the potential of extended Chain-of-Thought (CoT), applying them to multi-turn tasks creates a dilemma: retaining full history leads to context explosion, while discarding it causes redundant re-reasoning.

We propose SWE-AGILE, a novel software agent framework designed to bridge the gap between reasoning depth, efficiency, and context constraints. SWE-AGILE introduces a Dynamic Reasoning Context strategy: it maintains a β€œsliding window” of detailed reasoning for immediate continuity, preventing redundant re-analysis, while compressing historical reasoning into concise Reasoning Digests, enabled by Backfilling Data Synthesis, Trajectory Snapshot Training, and Compression-Aware Optimization.
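The sliding-window idea can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the `summarize` helper and the window size are placeholder assumptions (in SWE-AGILE, digests are produced by the trained model itself rather than by truncation).

```python
from collections import deque


def summarize(reasoning: str) -> str:
    # Placeholder digest function; SWE-AGILE's actual Reasoning Digests
    # are generated by the model, not by simple truncation.
    return reasoning[:80] + "..." if len(reasoning) > 80 else reasoning


class DynamicReasoningContext:
    """Keep full reasoning for the most recent `window` turns; compress
    older turns into short digests instead of discarding them."""

    def __init__(self, window: int = 3):
        self.window = window
        self.recent = deque()   # (turn_id, full reasoning) pairs
        self.digests = []       # (turn_id, compressed digest) pairs

    def add_turn(self, turn_id: int, reasoning: str) -> None:
        self.recent.append((turn_id, reasoning))
        # Evict the oldest turn from the window, compressing it first.
        while len(self.recent) > self.window:
            old_id, old_reasoning = self.recent.popleft()
            self.digests.append((old_id, summarize(old_reasoning)))

    def build_context(self) -> str:
        # Digests of distant history come first, then full recent reasoning.
        parts = [f"[digest t{t}] {d}" for t, d in self.digests]
        parts += [f"[full t{t}] {r}" for t, r in self.recent]
        return "\n".join(parts)
```

This keeps the prompt bounded (avoiding context explosion) while still exposing a compressed trace of earlier turns, so the agent need not re-derive conclusions it already reached.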

While our current paradigm implicitly reduces redundant state reconstruction, a promising direction for enforcing this efficiency more strictly is to quantitatively monitor the reasoning content. By computing the embedding similarity between consecutive reasoning steps, or by employing an LLM-as-a-Judge, future iterations could explicitly filter out repetitive SFT trajectories or design targeted RLVR penalties, pushing the boundary of cognitive efficiency even further.
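The embedding-similarity variant of this filter can be sketched as below. This is a hedged illustration only: the `embed` callable and the 0.95 threshold are hypothetical placeholders, not values from the paper.

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


def keep_trajectory(reasoning_steps, embed, threshold=0.95):
    """Return False if any two consecutive reasoning steps are
    near-duplicates under `embed` (a text -> vector function),
    i.e. the trajectory is repetitive and should be excluded
    from SFT data (or penalized in an RLVR reward)."""
    embs = [embed(step) for step in reasoning_steps]
    for prev, cur in zip(embs, embs[1:]):
        if cosine(prev, cur) >= threshold:
            return False  # repetitive re-reasoning detected
    return True
```

The same consecutive-step comparison could back an RLVR penalty term instead of a hard filter, subtracting reward proportional to the detected redundancy.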

(Figure: SWE-AGILE overview)

(Figure: SWE-bench Verified results)

⭐️ Citation

If you find this project useful, please cite our work:

@misc{lian2026sweagilesoftwareagentframework,
      title={SWE-AGILE: A Software Agent Framework for Efficiently Managing Dynamic Reasoning Context}, 
      author={Shuquan Lian and Juncheng Liu and Yazhe Chen and Yuhong Chen and Hui Li},
      year={2026},
      eprint={2604.11716},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2604.11716}, 
}

🀝 Acknowledgements

We sincerely thank the projects R2E-Gym/R2E-Gym and rllm-org/rllm for providing their open-source resources.
