SWE-AGILE

πŸ“£ News

[2026/02/23] SWE-AGILE has been accepted to the Findings of ACL 2026.

πŸ”₯ Overview

Prior approaches typically lack the explicit System-2 reasoning required for deep analysis. While recent reasoning models demonstrate the potential of extended Chain-of-Thought (CoT), applying them to multi-turn tasks creates a dilemma: retaining full history leads to context explosion, while discarding it causes redundant re-reasoning.

We propose SWE-AGILE, a novel software agent framework designed to bridge the gap between reasoning depth, efficiency, and context constraints. SWE-AGILE introduces a Dynamic Reasoning Context strategy: it maintains a "sliding window" of detailed reasoning for immediate continuity, preventing redundant re-analysis, while compressing older reasoning content into concise Reasoning Digests via Backfilling Data Synthesis, Trajectory Snapshot Training, and Compression-Aware Optimization.
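To make the sliding-window idea concrete, here is a minimal sketch of a Dynamic Reasoning Context manager. All names (`DynamicReasoningContext`, `make_digest`, the window size) are illustrative assumptions, not the released implementation, and the digest function is a stub standing in for the trained compressor:

```python
from collections import deque

def make_digest(reasoning: str, max_len: int = 60) -> str:
    # Stub compressor: the actual framework would produce the
    # Reasoning Digest with a trained model, not truncation.
    return reasoning[:max_len].rstrip() + "..."

class DynamicReasoningContext:
    """Keep full reasoning for the most recent `window` turns;
    compress everything older into short digests (hypothetical API)."""

    def __init__(self, window: int = 2):
        self.window = window
        self.recent = deque()   # (turn, full reasoning text)
        self.digests = []       # (turn, compressed digest)

    def add(self, turn: int, reasoning: str) -> None:
        # New turn enters the detailed window; turns that fall out
        # of the window are compressed instead of discarded.
        self.recent.append((turn, reasoning))
        while len(self.recent) > self.window:
            old_turn, old_text = self.recent.popleft()
            self.digests.append((old_turn, make_digest(old_text)))

    def build_context(self) -> str:
        # Prompt context = digests of old turns + full recent reasoning,
        # bounding context growth while avoiding re-reasoning from scratch.
        parts = [f"[digest t{t}] {d}" for t, d in self.digests]
        parts += [f"[full t{t}] {r}" for t, r in self.recent]
        return "\n".join(parts)
```

This keeps the context size roughly constant per turn: only the last `window` turns carry full Chain-of-Thought, so the prompt neither explodes nor forces the agent to rebuild discarded state.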

While our current paradigm implicitly reduces redundant state reconstruction, a highly promising direction to strictly enforce this efficiency is to quantitatively monitor the reasoning content. By calculating the embedding similarity between consecutive reasoning steps or employing an LLM-as-a-Judge, future iterations can explicitly filter out repetitive SFT trajectories or design targeted RLVR penalties, pushing the boundary of cognitive efficiency even further.
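The repetition filter described above can be sketched as a similarity check over consecutive reasoning steps. For a self-contained example this uses a simple bag-of-words cosine similarity; a real pipeline would presumably use sentence embeddings or an LLM-as-a-Judge, and the threshold here is an arbitrary illustrative choice:

```python
import math
from collections import Counter

def cosine_sim(a: str, b: str) -> float:
    # Bag-of-words cosine similarity, standing in for a proper
    # embedding model in this sketch.
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def is_repetitive(steps: list[str], threshold: float = 0.9) -> bool:
    # Flag a trajectory whose consecutive reasoning steps are
    # near-duplicates; an SFT data pipeline could drop such
    # trajectories, or an RLVR reward could penalize them.
    return any(cosine_sim(x, y) >= threshold
               for x, y in zip(steps, steps[1:]))
```

The same score could also drive a dense penalty term during RLVR training rather than a hard filter at data-synthesis time.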

[Figure: SWE-AGILE overview]

[Figure: SWE-Bench Verified results]
