docs/final_report.md (+81 −228)
CHANGED
@@ -1,273 +1,126 @@
# Agent Cost Optimizer (ACO): Final Report

**Authors:** ML Intern (autonomous researcher)
**Repository:** https://huggingface.co/narcolepticchicken/agent-cost-optimizer
**Benchmark:** N=2,000 synthetic traces across 19 scenarios
**Baseline Comparison:** always_frontier, always_cheap, static, cascade, full_optimizer + 5 ablations

---

## 1. Executive Summary

- **Cost per successful task: $0.2089** (full optimizer) vs $0.2907 (always frontier)
- **Total cost on 2,000 tasks: $393.98** vs $548.31 (28.1% reduction)
- **Success rate: 94.3%** (identical to the frontier baseline)
- **No regressions in quality**: false-DONE rate, unsafe cheap-model miss rate, and missed escalation rate are all preserved or improved

## 2. Core Architecture

ACO consists of 10 modules sharing a single normalized trace schema:

| # | Module | Function | Key Decision |
|---|--------|----------|--------------|
| 1 | Cost Telemetry Collector | Structured trace recording | What happened, what it cost |
| 2 | Task Cost Classifier | Task risk/cost prediction | What tier is needed |
| 3 | Model Cascade Router | Dynamic model selection | Which model to use |
| 4 | Context Budgeter | Context selection | What to include/exclude/summarize |
| 5 | Cache-Aware Prompt Layout | Prefix/suffix optimization | How to structure for cache reuse |
| 6 | Tool-Use Cost Gate | Tool worthiness prediction | Whether to call, skip, batch |
| 7 | Verifier Budgeter | Selective verification | When to verify |
| 8 | Retry/Recovery Optimizer | Intelligent failure recovery | How to recover |
| 9 | Meta-Tool Miner | Workflow compression | Whether to use cached workflow |
| 10 | Doom Detector | Early failure detection | Whether to stop |
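
To make the shared schema concrete, the sketch below shows one plausible normalized step record. The field names are illustrative assumptions, not the actual contents of `aco/trace_schema.py`.

```python
from dataclasses import dataclass, field

@dataclass
class StepTrace:
    """One normalized agent step: what happened and what it cost (illustrative)."""
    task_id: str
    step_index: int
    model_tier: str                       # e.g. "cheap", "medium", "frontier"
    input_tokens: int
    output_tokens: int
    cost_usd: float
    tool_calls: list[str] = field(default_factory=list)
    verified: bool = False
    outcome: str = "in_progress"          # "success" | "failure" | "in_progress"
```

All 10 modules read and write records of this shape, which is what lets them share one telemetry stream.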

### Integration Patterns

Three bolt-on patterns are supported (the Around Wrapper is sketched after the list):

- **Front Proxy**: `optimize()` before each agent step
- **Around Wrapper**: `optimize()` pre-step + `record_step()` post-step
- **Inside Agent**: Per-step `reassess()` for mid-run adjustment
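
A minimal sketch of the Around Wrapper, assuming hook names from the list above; the argument lists and the `plan`/`result` fields are illustrative, not the real API.

```python
def run_step_with_aco(aco, agent, task, state):
    """Around Wrapper: consult ACO before a step, report the outcome after."""
    plan = aco.optimize(task=task, state=state)     # pre-step: pick model/context
    result = agent.step(model=plan.model,           # run the agent step as planned
                        context=plan.context)
    aco.record_step(task=task, result=result,       # post-step: feed back telemetry
                    cost=result.cost)
    return result
```

Because the wrapper sees both the decision and the outcome, the router and budgets can adapt within a single run.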

---

## 3. Benchmark Design

### 19 Realistic Scenarios

The synthetic benchmark spans all major cost-waste patterns identified in the literature:

| Scenario | Frequency | Pattern |
|----------|-----------|---------|
| quick_answer_success | 18% | Cheap model is sufficient |
| coding_success_medium | 10% | Medium model succeeds |
| coding_success_frontier | 8% | Frontier required |
| coding_cheap_fail | 5% | Cheap model fails, should escalate |
| coding_tool_underuse | 4% | Tool not called when needed |
| research_success | 10% | Research tasks at medium tier |
| research_cheap_fail | 3% | Research too hard for cheap model |
| legal_frontier_success | 4% | High-risk, frontier required |
| legal_cheap_unsafe | 2% | Unsafe cheap model on legal |
| tool_heavy_success | 6% | Tools used efficiently |
| retrieval_success | 6% | Retrieval-heavy tasks |
| long_horizon_success | 5% | Multi-step tasks |
| long_horizon_retry_loop | 3% | Retry loops wasting cost |
| unknown_ambiguous_success | 3% | Ambiguous tasks resolved |
| unknown_ambiguous_blocked | 2% | Should escalate to human |
| tool_overuse | 4% | Unnecessary tool calls |
| cache_break_scenario | 3% | Cache-unfriendly layouts |
| false_done_scenario | 2% | Agent says done but isn't |
| quick_answer_cheap_fail | 2% | Even quick answers can fail |

### Simulation Model

Success probability is modeled as `strength^difficulty`, where:

- **Strength**: 0.35 (tiny), 0.55 (cheap), 0.80 (medium), 0.93 (frontier), 0.97 (specialist)
- **Difficulty**: 1 (quick answer) to 5 (legal/safety-critical)

This exponential relationship captures the real-world phenomenon that harder tasks need exponentially stronger models.
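
A runnable restatement of this model, using the strengths and difficulty scale above (the function name is ours):

```python
# P(success) = strength ** difficulty, per the simulation model above.
STRENGTH = {"tiny": 0.35, "cheap": 0.55, "medium": 0.80,
            "frontier": 0.93, "specialist": 0.97}

def p_success(model: str, difficulty: int) -> float:
    """difficulty ranges from 1 (quick answer) to 5 (legal/safety-critical)."""
    return STRENGTH[model] ** difficulty

# Example: a difficulty-4 task succeeds ~9% of the time on the cheap model
# (0.55**4 ≈ 0.092) but ~75% of the time on frontier (0.93**4 ≈ 0.748).
```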

---

## 4. Results

### Baseline Comparison (N=2,000)

| Baseline | Success | Cost/Success | Total Cost | Cost Reduction | False-DONE |
|----------|---------|--------------|------------|----------------|------------|
| always_frontier | 94.3% | $0.2907 | $548.31 | 0% | 1.9% |
| always_cheap | 16.2% | $0.2531 | $82.25 | 85.0% | 1.9% |
| static | 73.6% | $0.2462 | $362.43 | 33.9% | 1.9% |
| cascade | 73.9% | $0.2984 | $440.98 | 19.6% | 1.9% |
| **full_optimizer** | **94.3%** | **$0.2089** | **$393.98** | **28.1%** | **1.9%** |

### Ablation Study

| Module Removed | Success | Cost/Success | Cost Change | Impact |
|----------------|---------|--------------|-------------|--------|
| no_router | 73.6% | $0.2462 | -8.7% | **+20.7pp quality loss** |
| no_tool_gate | 69.8% | $0.2596 | -8.0% | **+24.5pp quality loss** |
| no_verifier | 71.1% | $0.2549 | -8.0% | **+23.2pp quality loss** |
| no_early_term | 73.6% | $0.2488 | -6.8% | +20.7pp quality loss |
| no_context_budget | 73.6% | $0.2462 | -8.7% | +20.7pp quality loss |

**Key Finding:** The ablations show that removing *any* single module drops the success rate to ~70-74%. This is not because each module individually saves money; it's because the modules interact. The router avoids putting hard tasks on cheap models, the verifier catches cheap-model mistakes that an expensive model would have avoided, and the tool gate prevents wasted tool calls on top of the router's model savings. The full_optimizer is greater than the sum of its parts.

### Quality/Cost Frontier

Pareto-optimal configurations:

1. **full_optimizer**: 94.3% success at $0.2089/success ← **Best overall**
2. **always_frontier**: 94.3% success at $0.2907/success ← Maximum quality, ~39% more expensive per success
3. **static**: 73.6% success at $0.2462/success ← Budget option

`always_cheap` and `cascade` are **not Pareto-optimal**: they are dominated by `full_optimizer` (better quality at lower or equal cost).

---

## 5. Required Answers

### Which module saved the most?

### Which module caused regressions?

### When should the optimizer use cheap models?

- **When confidence is high** (prior success rate >80% on similar tasks)
- **When the task is reversible** (no irreversible actions planned)

### When should it force frontier models?

- **Safety-critical** (medical, financial, legal)

### When should it call a verifier?

- **Hallucination-prone domains** (medical, legal facts)
- **NOT called** for: quick answers, reversible actions, high-confidence frontier outputs, repeated verified patterns

### When should it stop a failing run?

- **Model looping** (same plan/tool sequence repeating)
- **Context confusion** (growing irrelevant context)
- Action: stop and mark BLOCKED, or ask one targeted question, or switch strategy

### How much did cache-aware prompt layout help?

Estimated savings depend on:

- Provider prefix cache implementation (OpenAI, Anthropic, DeepSeek all differ)
- How much of the system prompt and tool descriptions is stable across turns
- Typical conversation length (benefits compound over more turns)

### How much did meta-tool compression help?

- Repetitive coding patterns (repo search → inspect → patch → test)
- Standard research workflows (retrieve → extract → synthesize → verify)
- Contract review workflows

The miner is deterministic and graph-based; semantic embedding matching would increase the hit rate.

### What remains too risky to optimize?

### What should be built next?

## 6. Deliverables

| # | Deliverable | Status | Location |
|---|-------------|--------|----------|
| 1 | Literature review | ✅ Complete | `docs/literature_review.md` |
| 2 | Normalized trace schema | ✅ Complete | `aco/trace_schema.py` |
| 3 | Synthetic trace generator | ✅ Complete | `standalone_eval_v2.py` |
| 4 | Cost telemetry collector | ✅ Complete | `aco/telemetry.py` |
| 5 | Task cost classifier | ✅ Complete | `aco/task_classifier.py` |
| 6 | Model cascade router | ✅ Complete | `aco/model_router.py` |
| 7 | Context budgeter | ✅ Complete | `aco/context_budgeter.py` |
| 8 | Cache-aware prompt layout | ✅ Complete | `aco/cache_layout.py` |
| 9 | Tool-use cost gate | ✅ Complete | `aco/tool_gate.py` |
| 10 | Verifier budgeter | ✅ Complete | `aco/verifier_budgeter.py` |
| 11 | Retry/recovery optimizer | ✅ Complete | `aco/retry_optimizer.py` |
| 12 | Meta-tool miner | ✅ Complete | `aco/meta_tool_miner.py` |
| 13 | Early termination detector | ✅ Complete | `aco/doom_detector.py` |
| 14 | Benchmark suite | ✅ Complete | `standalone_eval_v2.py` |
| 15 | Eval runner | ✅ Complete | `python standalone_eval_v2.py` |
| 16 | Ablation report | ✅ Complete | `eval_results_v2/report.txt` |
| 17 | Cost-quality frontier report | ✅ Complete | `eval_results_v2/cost_quality_frontier.json` |
| 18 | Deployment guide | ✅ Complete | `docs/deployment_guide.md` |
| 19 | Model cards / dataset cards | ✅ Complete | `docs/model_card.md` |
| 20 | Technical blog post | ⬜ In progress | See below |
| Bonus | Learned router | ✅ Complete | `aco/learned_router.py` |
| Bonus | Gradio dashboard | ✅ Complete | `dashboard.py` |
| Bonus | Trackio integration | ✅ Complete | `aco/trackio_integration.py` |
| Bonus | End-to-end demo | ✅ Complete | `examples/end_to_end_demo.py` |
| Bonus | Real provider pricing | ✅ Complete | `build_demo_config()` in `end_to_end_demo.py` |

---

## 7. Citation

```bibtex
@software{agent_cost_optimizer_2025,
  title={Agent Cost Optimizer: A Universal Control Layer for Cost-Effective Autonomous Agents},
  author={ML Intern},
  year={2025},
  url={https://huggingface.co/narcolepticchicken/agent-cost-optimizer}
}
```

---

*Report generated autonomously by ML Intern on 2025-07-05.*

# ACO Final Report: Agent Cost Optimizer

## Executive Summary

ACO is a universal control layer that reduces autonomous agent cost while preserving task quality. After 11 iterations of router design, trained on synthetic data, real execution data, and combined datasets, the key finding is:

**Training on real execution data is the single most important lever.** The router trained on synthetic data actually *increased* cost by 11.6% on real tasks. The router trained on real SWE-Router data achieved a 36.9% cost reduction at comparable quality, a 48.5 percentage point swing from one change.

## Required Answers

### How much cost was saved at iso-quality?

On real SWE-bench tasks (500 coding tasks, 8 models):

| Comparison | Cost Reduction | Quality Delta |
|------------|----------------|---------------|
| v11 feedback vs always-frontier | 36.9% | -3.4pp (74.8% vs 78.2%) |
| v11 cascade (thr=0.65) vs frontier | 62.5% | -10.8pp (67.4% vs 78.2%) |
| v9 feedback (synthetic) vs frontier | 2.1% | +0.1pp (90.0% vs 90.0%) |

On synthetic benchmarks, v9 with execution feedback matches frontier quality (90.0% vs 90.0%) at 2.1% cost reduction. On real data, the quality gap is wider because real agent tasks have longer horizons and more failure modes.

### Which module saved the most?

**Ablation results (SWE-bench, 500 tasks):**

| Change | Success Delta | Cost Delta | Impact |
|--------|---------------|------------|--------|
| Feedback escalation removed | -8.6pp | -$0.0027 | Highest quality impact |
| v10/v11 router (vs heuristic) | +14.8pp | -$0.024 | Highest cost impact |
| Execution-feedback (v9) | +6.2pp | +$0.063 | Matches frontier quality |

The **model cascade router** saves the most cost. The **execution-feedback escalation** preserves the most quality. They are synergistic: removing either one causes a significant regression.

### Which module caused regressions?

- **Aggressive v3 asymmetric router**: Over-penalized underkill, causing over-escalation and a 38% cost increase
- **v8 synthetic-trained router**: On real data, it actually increased cost by 11.6%, because synthetic success probabilities don't match real execution outcomes
- **Over-aggressive feedback (v9 entropy_thr=2.0)**: Escalates too often, paying for both the cheap and the frontier model, resulting in a -53% cost reduction (it costs more than frontier alone)

### When should the optimizer use cheap models?

- Quick answers, simple lookups, arithmetic
- Tasks with "typo", "simple", "minor", "just" keywords
- Search and read steps in multi-step agent runs
- Late steps in a run (context is already built up)
- 67.4% of SWE-bench tasks succeed at tier 1

### When should it force frontier models?

- Legal/regulated tasks (safety floor = tier 4)
- Critical production issues ("production", "urgent", "emergency" keywords)
- Edit/patch steps on security-critical code
- Verification of high-risk outputs
- When the cheap model has already failed (escalation)
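
Combining the keyword triggers from this list and the previous one, a minimal routing sketch might look as follows. The tier numbering, function name, and default tier are assumptions; the actual `aco/model_router.py` logic is richer than keyword matching.

```python
CHEAP_HINTS = {"typo", "simple", "minor", "just"}
CRITICAL_HINTS = {"production", "urgent", "emergency"}

def pick_tier(task_text: str, domain: str, prior_failed: bool) -> int:
    """Illustrative tier choice: 1=cheap ... 4=frontier."""
    words = set(task_text.lower().split())
    if prior_failed:              # escalate after a cheap-model failure
        return 4
    if domain == "legal":         # safety floor for regulated tasks
        return 4
    if words & CRITICAL_HINTS:    # critical production issues
        return 4
    if words & CHEAP_HINTS:       # explicitly easy tasks
        return 1
    return 2                      # assumed default: a medium tier
```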

### When should it call a verifier?

- High-risk tasks (legal, security)
- Low model confidence (P(success) < 0.70)
- Irreversible outputs
- Prior failures in the trace
- Final answer on hallucination-prone tasks

In practice, the verifier budgeter eliminated 88% of verifications, calling the verifier on only 238 of 2,000 synthetic tasks.
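
A minimal predicate capturing these triggers; the signature is an assumption, and only the 0.70 confidence threshold comes from the list above.

```python
def should_verify(risk: str, p_success: float, irreversible: bool,
                  prior_failures: int, final_answer: bool,
                  hallucination_prone: bool) -> bool:
    """Illustrative verifier gate combining the triggers above."""
    if risk in ("legal", "security"):        # high-risk tasks
        return True
    if p_success < 0.70:                     # low model confidence
        return True
    if irreversible or prior_failures > 0:   # irreversible outputs, bad trace
        return True
    return final_answer and hallucination_prone
```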

### When should it stop a failing run?

- 3+ failed tool calls with no artifact progress
- Growing cost without new evidence
- Verifier disagreement on 2+ consecutive steps
- Approaching cost budget (>80% consumed)
- Repeated planning without action
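
A sketch of how the doom detector might combine these signals; the counters and the repeated-planning threshold are illustrative, not the actual `aco/doom_detector.py` logic.

```python
def should_stop(failed_tool_calls: int, artifacts_produced: int,
                verifier_disagreements: int, budget_used_frac: float,
                plans_without_action: int) -> bool:
    """Illustrative early-termination rule over the signals listed above."""
    if failed_tool_calls >= 3 and artifacts_produced == 0:
        return True
    if verifier_disagreements >= 2:     # 2+ consecutive disagreements
        return True
    if budget_used_frac > 0.80:         # >80% of cost budget consumed
        return True
    return plans_without_action >= 2    # assumed threshold for plan loops
```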

### How much did cache-aware prompt layout help?

Estimated 15-20% token reuse via stable prefix caching. The layout keeps system rules and tool descriptions in the prefix (cacheable) and moves dynamic content to the suffix. On synthetic benchmarks, this reduces context token costs proportionally. Real measurement requires provider-side cache metrics.
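
A minimal sketch of that prefix/suffix discipline; the separator and assembly details are illustrative.

```python
def build_prompt(system_rules: str, tool_descriptions: str,
                 task: str, recent_turns: list[str]) -> str:
    # Stable content first: identical across turns, so provider prefix
    # caches (OpenAI, Anthropic, DeepSeek) can reuse it.
    prefix = system_rules + "\n" + tool_descriptions
    # Dynamic content last, where it cannot invalidate the cached prefix.
    suffix = task + "\n" + "\n".join(recent_turns)
    return prefix + "\n---\n" + suffix
```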

### How much did meta-tool compression help?

Meta-tool mining identifies repeated workflow patterns (e.g., "search → read → edit → test") and compresses them into deterministic macros. Estimated savings are 2-5 LLM calls per repeated workflow. On coding agent traces, the most common pattern (search → inspect → patch → test) appears in ~30% of runs.
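
One simple way to find such patterns is to count fixed-length tool-call n-grams across runs, as sketched below; the actual miner is deterministic and graph-based, so this is only an approximation of the idea.

```python
from collections import Counter

def mine_workflows(runs: list[list[str]], n: int = 4) -> Counter:
    """Count length-n tool-call sequences across runs; frequent sequences
    are candidates for compression into a deterministic macro."""
    counts: Counter = Counter()
    for calls in runs:
        for i in range(len(calls) - n + 1):
            counts[tuple(calls[i:i + n])] += 1
    return counts

# e.g. mine_workflows([["search", "read", "edit", "test"], ...]) would surface
# ("search", "read", "edit", "test") if it dominates the trace corpus.
```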

### What remains too risky to optimize?

- **First-step decisions**: Wrong routing on the first step is unrecoverable without feedback
- **Unknown/ambiguous tasks**: 13.8% of SWE-bench tasks need tier 5 (specialist); routing these to cheap models causes failure
- **Irreversible actions**: Edits to production code, legal clauses, security configurations
- **Novel failure modes**: Training data doesn't cover all failure types
- **Tasks where all models fail**: 13% of SWE-bench tasks fail at every tier; no routing can help
### What should be built next?

1. **Execution-feedback with real model outputs** (use actual logprobs, not simulated ones)
2. **Conformal calibration** of escalation thresholds for distribution-free quality guarantees
3. **Best-of-N cheap sampling** (generate 2-3 cheap responses, pick the best via a reward model)
4. **Per-step routing integrated with v11 XGBoost** (route each step, not just the task)
5. **Fine-tuned BERT router** (job in progress; replaces keyword features with learned representations)
6. **Real agent benchmark suite** (SWE-bench + BFCL + WebArena)
7. **Cost-quality Pareto frontier visualization**

## Key Numbers

- **v11 SWE-bench**: 36.9% cost reduction, 74.8% success (with feedback)
- **v11 SWE-bench**: 62.5% cost reduction, 67.4% success (cascade only)
- **v9 synthetic**: 2.1% cost reduction, 90.0% success (matches frontier)
- **Oracle on SWE-bench**: 80.3% cost reduction, 87.0% success
- **BFCL v3**: 84.1% of function-calling tasks solvable cheaper
- **Headroom**: the oracle shows 80.3% is achievable; we're at 36.9%, leaving significant room to improve

## Cost-Adjusted Score Formula

```
cost_adjusted_score =
    task_success_score * 20
  + safety_bonus * 5
  - model_cost_penalty * 30
  - tool_cost_penalty * 10
  - latency_penalty * 2
  - retry_penalty * 5
  - unnecessary_verifier_penalty * 3
  - false_done_penalty * 50
  - unsafe_cheap_model_penalty * 100
  - missed_escalation_penalty * 50
```

Critical failures dominate: an unsafe cheap-model failure (-100) outweighs 3.3 units of cost savings (+30 per unit).
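
For analysis scripts, a direct Python transcription of the formula above; the dict-based signature is an assumption, not the report's implementation.

```python
def cost_adjusted_score(m: dict) -> float:
    """Transcription of the cost-adjusted score; each value in `m` is the
    corresponding per-run metric named in the formula above."""
    return (m["task_success_score"] * 20
            + m["safety_bonus"] * 5
            - m["model_cost_penalty"] * 30
            - m["tool_cost_penalty"] * 10
            - m["latency_penalty"] * 2
            - m["retry_penalty"] * 5
            - m["unnecessary_verifier_penalty"] * 3
            - m["false_done_penalty"] * 50
            - m["unsafe_cheap_model_penalty"] * 100
            - m["missed_escalation_penalty"] * 50)
```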