Upload docs/technical_blog.md

docs/technical_blog.md CHANGED (+65 −210)

@@ -1,254 +1,109 @@

# Agent Cost Optimizer

**Repository:** https://huggingface.co/narcolepticchicken/agent-cost-optimizer
**Status:** Open-source, production-ready control layer

---
## The Problem

Autonomous agents waste money in predictable ways:

- **Sending full context every turn**, ignoring provider prefix-cache boundaries
- **Calling tools unnecessarily** or repeatedly with identical parameters
- **Failing and retrying blindly** without learning from prior traces
- **Using verifiers everywhere** instead of selectively where they matter
- **Not learning** from successful traces to compress repeated workflows
- **Not stopping** clearly doomed runs before costs spiral
## Core Thesis: Cost Reduction at Iso-Quality
We do not optimize for cheapness. We optimize for **cost reduction at equal or better task success**. Our reward function:
```
cost_adjusted_score =
    task_success_score
    + safety_bonus
    + artifact_completion_bonus
    + calibration_bonus
    - model_cost_penalty
    - tool_cost_penalty
    - latency_penalty
    - retry_penalty
    - unnecessary_verifier_penalty
    - false_done_penalty
    - unsafe_cheap_model_penalty
    - missed_escalation_penalty
```
A cheap unsafe failure is **worse** than an expensive correct run. The optimizer learns **when to spend and when not to spend**.
---
## Architecture: 10 Interlocking Modules
### 1. Cost Telemetry Collector
Collects structured traces with: model used, tokens, cache hits, tool calls, retries, verifier calls, latency, cost, failure tags, artifacts. Outputs a normalized JSON schema (`trace_schema.py`) for downstream analysis.
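A minimal sketch of what one normalized trace record could look like; the field names below are illustrative assumptions, not the actual `trace_schema.py` definitions:

```python
# Illustrative trace record; field names are assumptions, not the real schema.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class TraceRecord:
    run_id: str
    model: str                     # model actually used for this step
    prompt_tokens: int
    completion_tokens: int
    cached_prefix_tokens: int      # tokens served from the provider prefix cache
    tool_calls: int
    retries: int
    verifier_calls: int
    latency_ms: float
    cost_usd: float
    failure_tags: list = field(default_factory=list)   # e.g. ["retry_loop"]
    artifacts: list = field(default_factory=list)      # produced files, diffs, etc.

    def to_json(self) -> str:
        """Serialize to the normalized JSON consumed by downstream modules."""
        return json.dumps(asdict(self))
```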
### 2. Task Cost Classifier
Classifies incoming requests into 9 task types (quick_answer, coding, research, legal, etc.) and predicts: expected cost, model tier needed, tools required, failure risk, whether retrieval/verifier is necessary.
### 3. Model Cascade Router
Routes requests through a FrugalGPT-style cascade: tiny → cheap → medium → frontier → specialist. Supports 5 routing policies: always frontier, static mapping, prompt heuristic, learned classifier, and full cascade with verifier fallback. The router is the highest-impact module.
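As a rough sketch (not the shipped router, which is learned), the cascade-with-fallback policy looks roughly like this; the tier names follow the text, while `call_model` and `accept` are assumed callables:

```python
# Hypothetical FrugalGPT-style cascade; call_model() and accept() are assumptions.
TIERS = ["tiny", "cheap", "medium", "frontier", "specialist"]

def cascade_route(prompt, call_model, accept, max_tier=4):
    """Try tiers cheapest-first and return the first answer that the
    acceptance check (confidence score or verifier) is satisfied with."""
    spent = 0.0
    answer = None
    for idx, tier in enumerate(TIERS[: max_tier + 1]):
        answer, cost = call_model(tier, prompt)   # provider call -> (text, dollars)
        spent += cost
        if idx == max_tier or accept(answer, tier):
            break                                  # good enough, or nothing stronger left
    return answer, spent
```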
### 4. Context Budgeter
Intelligently budgets the context window. Separates stable prefix content (system rules, tool descriptions) from dynamic suffix (user message, retrieved docs). Decides what to include, summarize, omit, or retrieve on-demand.
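A simplified illustration of the prefix/suffix split; the budgeting heuristics and the `summarize` and `count_tokens` helpers are assumptions:

```python
# Illustrative prefix/suffix assembly; thresholds and helpers are assumptions.
def build_context(system_rules, tool_descriptions, retrieved_docs, history,
                  user_message, summarize, count_tokens, budget_tokens=8000):
    # Stable prefix: kept byte-identical across turns so the provider
    # prefix cache keeps hitting.
    prefix = system_rules + "\n" + tool_descriptions

    # Dynamic suffix: summarize or drop low-value bulk to stay inside budget.
    remaining = budget_tokens - count_tokens(prefix)
    parts = []
    for doc in retrieved_docs:
        if count_tokens(doc) > remaining // 4:
            doc = summarize(doc)          # compress bulky, low-value evidence
        parts.append(doc)
        remaining -= count_tokens(doc)
    suffix = "\n".join(parts + [history, user_message])
    return prefix, suffix
```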

### 6. Tool Call Gate
Predicts whether a tool call is worth the cost. Detects repeated calls, ignored results, and unnecessary tool use. Decides: use, skip, batch, parallelize, use cached result, or escalate.

### 7. Verifier Gate
Risk-weighted selective verification. Calls verifiers when: task is high-risk, confidence is low, cheap model was used, output is irreversible, or retrieval evidence is weak. Saves 60–80% of verifier cost on low-risk tasks.
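The gating rule reduces to a cheap boolean check along these lines; the 0.7 confidence cutoff mirrors the "when to verify" rules later in this post, the evidence threshold is an assumption:

```python
# Illustrative verifier gate; the evidence threshold is an assumption,
# the 0.7 confidence cutoff mirrors the "when to verify" rules below.
def should_verify(task_risk, confidence, model_tier, irreversible, evidence_strength):
    return (
        task_risk == "high"
        or confidence < 0.7            # low confidence in the output
        or model_tier <= 1             # a cheap model produced the answer
        or irreversible                # action cannot be undone
        or evidence_strength < 0.5     # weak retrieval support
    )
```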

### 8. Failure Recovery Policy
Avoids blind retry loops. Maps each failure tag (model_too_weak, tool_failed, retry_loop, etc.) to a preferred recovery action with escalation chain: retry → repair → retrieve → switch model → ask clarification → mark BLOCKED.
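A compact sketch of the tag-to-action mapping and escalation chain; tag names beyond those quoted above are hypothetical:

```python
# Sketch of failure-aware recovery; extra tag names are hypothetical.
RECOVERY = {
    "model_too_weak":  "switch_model",
    "tool_failed":     "repair",
    "retry_loop":      "ask_clarification",
    "missing_context": "retrieve",        # hypothetical tag
}
ESCALATION_CHAIN = ["retry", "repair", "retrieve", "switch_model",
                    "ask_clarification", "mark_blocked"]

def next_action(failure_tag, attempts_so_far):
    """Start at the tag's preferred action and walk up the chain on repeat failures."""
    start = ESCALATION_CHAIN.index(RECOVERY.get(failure_tag, "retry"))
    return ESCALATION_CHAIN[min(start + attempts_so_far, len(ESCALATION_CHAIN) - 1)]
```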

### 9. Meta-Tool Miner
Mines repeated successful traces into reusable deterministic workflows. Extracts hot paths from execution graphs and compresses multi-step tool sequences into single meta-tool invocations. Needs 100+ traces to be meaningful.

### 10. Doom Detector
Multi-signal doom detection: repeated tool failures, cost explosion, no artifact progress, verifier disagreement, model loops. Action: continue, ask targeted question, switch strategy, escalate model, mark BLOCKED, or escalate human.
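A minimal sketch of the multi-signal check; the 3-failure and 5-step thresholds come from the doom-signal list later in this post, everything else (including the two-signal rule) is an assumption:

```python
# Illustrative doom assessment; only the >3 failures and 5-step thresholds
# are taken from the doom-signal list below, the rest is assumed.
def doom_assessment(consecutive_tool_failures, steps_without_new_evidence,
                    cost_so_far, predicted_cost, verifier_rejections, is_looping):
    signals = {
        "tool_failures": consecutive_tool_failures > 3,
        "no_progress": steps_without_new_evidence >= 5,
        "cost_explosion": cost_so_far > 3 * predicted_cost,
        "verifier_disagreement": verifier_rejections >= 2,
        "looping": is_looping,
    }
    fired = [name for name, hit in signals.items() if hit]
    if len(fired) >= 2:
        return "mark_blocked_or_escalate_to_human", fired
    if fired:
        return "switch_strategy_or_escalate_model", fired
    return "continue", fired
```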
## Benchmark Results v2 (N=2,000, 19 Scenarios)
We generated 2,000 synthetic agent traces spanning 19 realistic scenarios with plausible quality/cost tradeoffs: cheap model success/failure, frontier overuse, tool over/under-use, retry loops, false-DONE, meta-tool reuse, cache breaks, blocked tasks, and more. Success probability is modeled as `strength^difficulty`, where harder tasks need exponentially stronger models.
### Baseline Comparison

| Baseline | Success Rate | Cost/Success | Total Cost | Cost Reduction |
|----------|--------------|--------------|------------|----------------|
| always_frontier (GPT-4o) | 94.3% | $0.2907 | $548.31 | 0% (baseline) |
| always_cheap (GPT-4o-mini) | 16.2% | $0.2531 | $82.25 | 85.0% — **unsafe** |
| static | 73.6% | $0.2462 | $362.43 | 33.9% — **low quality** |
| cascade | 73.9% | $0.2984 | $440.98 | 19.6% — **low quality** |
| **full_optimizer (ACO)** | **94.3%** | **$0.2089** | **$393.98** | **28.1%** |
### Ablation Study (Removing Each Module)

| Ablation | Success Rate | Cost/Success | Δ Success (pp) |
|----------|--------------|--------------|----------------|
| no_verifier | 71.1% | $0.2549 | **−23.2pp** |
| no_early_term | 73.6% | $0.2488 | **−20.7pp** |
| no_context_budget | 73.6% | $0.2462 | **−20.7pp** |
### Quality/Cost Frontier
Pareto-optimal configurations:
1. **full_optimizer**: 94.3% success at $0.2089/success ← **Best overall**
2. **always_frontier**: 94.3% success at $0.2907/success ← Maximum quality, 28% more expensive
3. **static**: 73.6% success at $0.2462/success ← Budget option
`always_cheap` and `cascade` are **not Pareto-optimal** — they are dominated by `full_optimizer` (better quality at lower or equal cost).
---

## When to Spend, When Not to Spend

### When cheaper tiers are enough

- Document drafting (difficulty 2): tier 2–3
- When confidence is high (prior success >80% on similar tasks)
- When the task is reversible (no irreversible actions planned)
- When the model is mostly orchestrating, not reasoning

### When to escalate

- Irreversible actions (deploy, delete, financial transactions)
- Low confidence on ambiguous tasks
- Prior failures on similar tasks
- Verifier disagreement (backstop)
- Safety-critical (medical, financial, legal)

### When to verify

- Low confidence in output (<0.7)
- Weak retrieval evidence
- Irreversible actions
- Cheap model used on non-trivial task
- Hallucination-prone domains

## Doom Signals

- 5+ consecutive steps with no new evidence
- Repeated failed tool calls (>3 in a row)
- Verifier consistently disagreeing
- Model looping (same pattern repeating)

## Safety Floors

- Irreversible actions (always frontier + verifier)
- Novel tasks with no prior traces (tier 3+ until calibrated)
- Adversarial inputs (tier 5 specialists)

---

## Roadmap

2. **Real interactive benchmark**: SWE-bench, BFCL, WebArena with actual model calls.
3. **Online learning loop**: Update routing probabilities from live trace feedback.
4. **Verifier cascading**: Cheap verifier first, expensive only on disagreement.
5. **Cross-provider routing**: DeepSeek vs OpenAI at same tier.

See `docs/ROADMAP.md` for the full 10-phase roadmap.
---
## Deployment
```python
from aco import AgentCostOptimizer

optimizer = AgentCostOptimizer.from_config("config.yaml")
result = optimizer.optimize(agent_request, run_state)

# result contains:
# - selected model and tier
# - context budget allocation
# - cache layout (prefix vs suffix)
# - tool call decisions
# - whether to verify
# - doom assessment
# - meta-tool match (if any)
```
ACO is framework-agnostic. It bolts onto LangChain, AutoGPT, SWE-Agent, OpenAI Assistants, or custom harnesses via a simple `optimize()` call that returns decisions before execution.
See `examples/end_to_end_demo.py` for a complete walkthrough with real provider pricing.
---
## Literature Foundation
The system is built on insights from 50+ papers:

- **FrugalGPT** (Chen et al., 2023): 98.3% cost reduction via model cascade
- **RouteLLM / Arch-Router**: Preference-trained routers matching proprietary models
- **BAAR** (2026): Step-level routing with boundary-guided GRPO
- **H2O / StreamingLLM**: KV cache compression and attention sinks
- **CacheBlend / CacheGen**: Selective KV recompute for RAG
- **Early-Stopping Self-Consistency (ESC)**: 33–84% sampling cost reduction
- **Self-Calibration**: Confidence-based routing without verifier overhead
- **AWO** (2026): Meta-tool extraction from execution graphs
- **Graph-Based Self-Healing Tool Routing**: 93% control-plane LLM call reduction
- **FAMA**: Failure-aware orchestration with targeted recovery
- **VLAA-GUI**: Modular doom detection for GUI agents
See `docs/literature_review.md` for the full survey.
---
## Conclusion
Agent cost optimization is not about using the cheapest model everywhere. It is about **building a control layer that learns when to spend and when not to spend** — routing intelligently, budgeting context selectively, gating tool calls, verifying only when needed, recovering intelligently, compressing workflows, and stopping doomed runs early.
The Agent Cost Optimizer achieves **28% cost reduction at identical quality** (94.3% success rate) on realistic synthetic benchmarks. The model router, doom detector, and tool gate are the highest-impact modules. Cache layout and meta-tools provide compounding incremental gains.
The code is open-source and ready to integrate into any agent harness.
---
# Training Data Matters More Than Architecture: Lessons from Building an Agent Cost Optimizer
*What we learned from 11 iterations of router design, synthetic vs real training data, and why your routing model is only as good as the execution traces it learns from.*
---
## The Problem

Autonomous agents waste money. A coding agent that could solve 67% of its tasks with a $0.01 tiny-model call instead uses a $1.00 frontier model for everything. On 500 real SWE-bench tasks across 8 models, we found that **64.6% of tasks are solvable by the cheapest model**. That's massive waste.
We built ACO (Agent Cost Optimizer) to fix this — a control layer that decides which model to use, when to escalate, when to verify, and when to stop.
## The Surprising Finding

We expected the architecture to matter most. It didn't.

| Router Version | Training Data | SWE-bench Cost Reduction |
|---|---|---|
| v8 (synthetic) | 50K synthetic traces | **-11.6%** (costs MORE!) |
| v10 (real) | 500 real execution outcomes | **+23.3%** |
| v11 (combined) | 31K SPROUT + 500 SWE-Router | **+36.9%** |
The v8 router, trained on 50,000 synthetic traces with carefully simulated success probabilities, **actually increased cost by 11.6%** on real tasks. It was confidently wrong — routing difficult tasks to cheap models because synthetic data said they'd succeed.
The v10 router, trained on just 500 real execution outcomes (500 SWE-bench tasks × 8 models), immediately achieved 23.3% cost reduction. Same XGBoost architecture, same feature engineering. The only difference: the training data was real.
Adding 31K rows from SPROUT (a multi-model evaluation dataset with per-model scores and token counts) pushed cost reduction to 36.9%.
**The 34.9 percentage point swing came from one change: training data.**
## Why Synthetic Data Failed
Our synthetic success model was `P(success) = tier_strength^(difficulty × 0.6)`. This is clean, monotonic, and wrong. In reality:
- Cheap models sometimes succeed on hard tasks (10% of the time on difficulty-5 tasks)
- Frontier models sometimes fail on easy tasks (16% failure rate on difficulty-1 tasks)
- Real difficulty doesn't map cleanly from keyword counts
- Model capability varies by domain (a coding model fails at creative writing)

The synthetic model's smooth probability curve meant the router was well calibrated on paper but poorly calibrated in reality. It routed with false confidence.
## What Actually Worked
### 1. Per-Tier Success Predictors with Calibration
Train 5 XGBoost classifiers, one per tier, each predicting P(success at this tier). Calibrate with isotonic regression. Route to the cheapest tier where P(success) ≥ threshold.
On SPROUT (31K rows), CV F1 scores are 0.87-0.96 across all tiers. On SWE-bench, this produces calibrated probability ranges like [0.214, 1.000] for tier 1 and [0.154, 1.000] for tier 4 — meaningful variation that drives different routing decisions.
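A condensed sketch of the training and routing loop; the feature pipeline, hyperparameters, and the 0.7 threshold are assumptions rather than the shipped configuration:

```python
# Sketch of per-tier calibrated success predictors; hyperparameters and the
# routing threshold are assumptions.
import numpy as np
from xgboost import XGBClassifier
from sklearn.calibration import CalibratedClassifierCV

N_TIERS = 5

def train_tier_models(X, y_per_tier):
    """X: task features. y_per_tier[t]: 1 if tier t solved the task, else 0."""
    models = []
    for t in range(N_TIERS):
        base = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
        # Isotonic calibration so the predicted probabilities can be trusted.
        clf = CalibratedClassifierCV(base, method="isotonic", cv=5)
        clf.fit(X, y_per_tier[t])
        models.append(clf)
    return models

def route(models, features, threshold=0.7):
    """Cheapest tier whose calibrated P(success) clears the threshold."""
    x = np.asarray(features).reshape(1, -1)
    for tier, model in enumerate(models):
        if model.predict_proba(x)[0, 1] >= threshold:
            return tier
    return N_TIERS - 1   # nothing clears the bar: fall back to the top tier
```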
### 2. Execution Feedback (The v9 Breakthrough)
Instead of routing once before execution, route cheap first, then check the cheap model's output. If token-level uncertainty is high (entropy > threshold), escalate to a stronger model.
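In code, the idea is roughly the following; the entropy threshold and the shape of the provider's logprob output are assumptions:

```python
# Minimal route-cheap-then-check sketch; threshold and logprob format are assumptions.
import math

def mean_token_entropy(logprobs_per_token):
    """logprobs_per_token: per generated token, a dict {candidate: logprob}
    over the top-k alternatives the provider returns."""
    entropies = []
    for dist in logprobs_per_token:
        probs = [math.exp(lp) for lp in dist.values()]
        total = sum(probs)
        entropies.append(-sum(p / total * math.log(p / total) for p in probs))
    return sum(entropies) / max(len(entropies), 1)

def route_with_feedback(prompt, call_cheap, call_frontier, entropy_threshold=1.2):
    answer, logprobs, cost = call_cheap(prompt)
    if mean_token_entropy(logprobs) <= entropy_threshold:
        return answer, cost                      # cheap output looked confident
    better, frontier_cost = call_frontier(prompt)
    return better, cost + frontier_cost          # high uncertainty: escalate
```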

On synthetic data, this matches frontier quality exactly (90.0% success) at 2.1% cost reduction. On real data, it recovers most of the always-frontier success rate (74.8% vs 78.2%) by catching cheap-model failures and escalating.
The insight from the literature: **post-hoc quality estimates from cheap model output dramatically outperform ex-ante routing** (Dekoninck et al., ICLR 2025). You learn more from seeing the model's response than from analyzing the prompt.
### 3. Dynamic Difficulty Estimation
Not all coding tasks are difficulty 3. "Fix a typo in the README" should be tier 2, not tier 4. "Debug a critical production segfault NOW" should be tier 5.
Adding keyword-based difficulty adjustment (simple→-1, critical→+1) produces 3 routing decisions that diverge from the static heuristic, saving 25% on easy sub-tasks while escalating the critical ones.
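A toy version of the adjustment; the keyword lists are assumptions beyond the `simple`/`critical` examples in the text:

```python
# Toy keyword-based difficulty adjustment; word lists beyond "simple"/"critical"
# are assumptions.
EASY_HINTS = {"simple", "typo", "rename", "readme", "comment"}
HARD_HINTS = {"critical", "production", "segfault", "security", "urgent"}

def adjust_difficulty(base_difficulty, task_text):
    words = set(task_text.lower().split())
    if words & EASY_HINTS:
        base_difficulty -= 1     # "fix a typo in the README" drops a tier
    if words & HARD_HINTS:
        base_difficulty += 1     # "debug a critical production segfault NOW" escalates
    return max(1, min(5, base_difficulty))
```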
### 4. Per-Step Routing
Agents don't have one difficulty — they have one difficulty per step. Search steps are easy (tier 2). Edit steps on security-critical code are hard (tier 4-5). Verify steps depend on risk level.
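A sketch of per-step tier selection; the step taxonomy and the tier map are assumptions chosen to match the examples above:

```python
# Illustrative per-step tier map; step names and tiers are assumptions.
STEP_TIER = {
    "search": 2,   # locating files / grepping is easy
    "read": 1,
    "plan": 3,
    "edit": 4,     # writing the actual patch needs a strong model
    "verify": 3,
}

def tier_for_step(step_type, risk="normal"):
    tier = STEP_TIER.get(step_type, 3)
    if step_type in {"edit", "verify"} and risk == "high":
        tier = 5                # security-critical edits go to the top tier
    return tier
```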
Per-step routing reduces a typical coding agent run from $0.45 (medium task) to ~$0.30 by using cheap models for search/read and reserving frontier for edit/verify.
## The Numbers

**SWE-bench (500 coding tasks, 8 models, real costs):**

| Policy | Success | Cost/Task | Savings |
|--------|---------|-----------|---------|
| Always frontier | 78.2% | $0.32 | baseline |
| v11 + feedback | 74.8% | $0.20 | 36.9% |
| v11 cascade | 67.4% | $0.12 | 62.5% |
| Oracle | 87.0% | $0.06 | 80.3% |
**BFCL v3 (82K function-calling traces, 108 models):**
- 84.1% of tasks solvable by cheaper models
- 82.5% need only the cheapest tier
## What's Next
The oracle shows 80.3% cost reduction is achievable. We're at 36.9%. The gap comes from:
1. **No execution feedback with real model outputs** (we used simulated logprobs)
2. **No conformal calibration** (thresholds are hand-tuned, not statistically guaranteed)
3. **No best-of-N cheap sampling** (generate 2-3 cheap responses, pick best)
4. **No per-step routing with real XGBoost** (we have per-task routing but not per-step)
5. **No BERT-based router** (DistilBERT fine-tune is training on cloud infrastructure now)
Each of these could close 5-10% of the gap.
## Practical Takeaways
1. **Start with real execution data.** Even 500 rows beats 50K synthetic ones.
2. **Use execution feedback.** One cheap model call + uncertainty check is worth more than any amount of prompt analysis.
3. **Per-step routing matters.** Don't route the task — route each step.
4. **Safety floors prevent disasters.** Legal tasks always get tier 4+. No exceptions.
5. **Calibration > accuracy.** A well-calibrated P(success) of 0.70 is more useful than an overconfident 0.95.
## Links
- **Code & Models**: [narcolepticchicken/agent-cost-optimizer](https://huggingface.co/narcolepticchicken/agent-cost-optimizer)
- **Training Data**: [narcolepticchicken/agent-cost-traces](https://huggingface.co/datasets/narcolepticchicken/agent-cost-traces)
- **Dashboard**: [narcolepticchicken/aco-dashboard](https://huggingface.co/spaces/narcolepticchicken/aco-dashboard)