Add compare_pipelines skill
openclaw/skills/compare_pipelines/SKILL.md
ADDED
@@ -0,0 +1,20 @@
# compare_pipelines

Run a query through both the Baseline RAG and GraphRAG pipelines simultaneously, then compare their answers side by side with metrics including tokens, latency, cost, and answer quality.

## Parameters
- `query` (string, required): Question to compare across pipelines
- `provider` (string, optional, default="anthropic"): LLM provider to use
- `model` (string, optional): Specific model ID (defaults to the provider's default)

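In practice these parameters would arrive as a JSON tool-call payload. A minimal sketch of applying the documented defaults (the helper name and payload shape are hypothetical, not part of this skill):

```python
# Hypothetical sketch: normalize a compare_pipelines tool-call payload,
# applying the documented defaults for the optional parameters.
def normalize_params(payload):
    if "query" not in payload:
        raise ValueError("query is required")
    return {
        "query": payload["query"],
        "provider": payload.get("provider", "anthropic"),  # documented default
        "model": payload.get("model"),  # None -> use the provider's default model
    }

params = normalize_params({"query": "Which magazine was started first?"})
print(params["provider"])  # anthropic
```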
## Returns
JSON with:
- `baseline`: Answer, tokens, latency, cost from Pipeline A (Baseline RAG)
- `graphrag`: Answer, tokens, latency, cost, entities, relations from Pipeline B (GraphRAG)
- `complexity`: Query complexity score (0.0-1.0)
- `recommended`: Which pipeline the adaptive router recommends

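A result with this shape could be consumed as in the following sketch. The field values are invented for illustration; only the top-level key names come from the list above:

```python
import json

# Illustrative payload: values are made up, key names follow the Returns list.
result = json.loads("""
{
  "baseline":  {"answer": "Magazine A", "tokens": 512,  "latency_ms": 800,  "cost": 0.002},
  "graphrag":  {"answer": "Magazine A", "tokens": 1400, "latency_ms": 2100, "cost": 0.006,
                "entities": 12, "relations": 9},
  "complexity": 0.72,
  "recommended": "graphrag"
}
""")

# Surface the answer from whichever pipeline the adaptive router recommends.
winner = result["recommended"]
print(winner, "->", result[winner]["answer"])  # graphrag -> Magazine A
```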
## Example
```
compare_pipelines "Which magazine was started first?" --provider ollama --model llama3.2
```