Rohan03 committed on
Commit 320bde4 · verified · 1 Parent(s): 57cdf3c

v2.1.0: README with Spark/Flow/swarm/Council/Vault naming + API reference table

Files changed (1): README.md (+46 -30)

README.md CHANGED
@@ -55,8 +55,8 @@ Tested with **Llama-3.3-70B** and **Gemma-4-26B** via OpenRouter:
 | Llama-3.3-70B | ✓ 100% | ✓ 100% | ✓ 100% | 0→3→9→18 heuristics |
 | Gemma-4-26B | ✓ 100% | ✓ 100% | ✓ 100% | 0→3→6→11 heuristics |
 
+**0-day production test:** 19/19 pass on Llama-3.3-70B across all 3 usage levels.
 **Immune system:** 93% adversarial catch rate, 0% false positives.
-
 **Test suite:** 119 unit tests, all passing. See [LAUNCH_READINESS.md](LAUNCH_READINESS.md).
 
 ## Install
@@ -90,7 +90,7 @@ print(team.status())  # See what it's learned
 # Local (free, private)
 team = pa.purpose("Code helper", model="qwen3:1.7b")
 
-# Cloud
+# Cloud providers
 team = pa.purpose("Code helper", model="openrouter:meta-llama/llama-3.3-70b-instruct")
 team = pa.purpose("Code helper", model="groq:llama-3.3-70b-versatile")
 team = pa.purpose("Code helper", model="openai:gpt-4o")
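The model strings in this hunk follow a `provider:model` convention, with bare tags apparently routed to a local backend. A minimal sketch of how such a spec might be resolved; `parse_model_spec`, `KNOWN_PROVIDERS`, and the `ollama` default are illustrative assumptions, not the package's actual parser:

```python
# Hypothetical sketch of resolving "provider:model" strings like the ones above.
# KNOWN_PROVIDERS and the local default are assumptions, not purpose_agent's API.
DEFAULT_PROVIDER = "ollama"  # assumption: bare tags like "qwen3:1.7b" run locally

KNOWN_PROVIDERS = {"openrouter", "groq", "openai", "ollama", "huggingface", "together"}

def parse_model_spec(spec: str) -> tuple[str, str]:
    """Split 'provider:model' into (provider, model); bare specs default to local."""
    head, sep, tail = spec.partition(":")
    if sep and head in KNOWN_PROVIDERS:
        return head, tail
    return DEFAULT_PROVIDER, spec  # "qwen3:1.7b" is a model tag, not a provider

print(parse_model_spec("openrouter:meta-llama/llama-3.3-70b-instruct"))
print(parse_model_spec("qwen3:1.7b"))
```

Note the second case: `qwen3:1.7b` contains a colon but `qwen3` is not a provider name, so the whole string is treated as a local model tag.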
@@ -104,34 +104,53 @@ Supported providers: **OpenRouter, Groq, OpenAI, Ollama, HuggingFace, Together,
 
 ### Level 3 — Full control
 
+Purpose Agent has its own API vocabulary — original names, not borrowed from other frameworks.
+
 ```python
 import purpose_agent as pa
 
-# Graph workflows (LangGraph-style)
-graph = pa.Graph()
-graph.add_node("research", pa.Agent("researcher", model="qwen3:1.7b"))
-graph.add_node("write", pa.Agent("writer", model="qwen3:1.7b"))
-graph.add_edge(pa.START, "research")
-graph.add_edge("research", "write")
-graph.add_edge("write", pa.END)
-result = graph.run(pa.State(data={"topic": "AI safety"}))
+# ── Spark: a single intelligent agent ──
+spark = pa.Spark("coder", model="openrouter:meta-llama/llama-3.3-70b-instruct")
+result = spark.run("Write a fibonacci function")
+
+# ── Flow: workflow engine with conditional routing ──
+flow = pa.Flow()
+flow.add_node("research", pa.Spark("researcher", model="qwen3:1.7b"))
+flow.add_node("write", pa.Spark("writer", model="qwen3:1.7b"))
+flow.add_edge(pa.BEGIN, "research")
+flow.add_edge("research", "write")
+flow.add_conditional_edge("write", review_fn, {"pass": pa.DONE_SIGNAL, "retry": "research"})
+result = flow.run(initial_state)
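The Flow calls added above amount to a small state machine over named nodes. A self-contained sketch of those routing semantics, with toy node functions in place of LLM-backed Sparks; `MiniFlow`, `BEGIN`, and `DONE` are illustrative stand-ins, and the real `pa.Flow`, `pa.BEGIN`, and `pa.DONE_SIGNAL` may behave differently:

```python
# Sketch of Flow-style routing: nodes are callables on a state dict, edges map
# node -> next node, and conditional edges route on a router function's label.
BEGIN, DONE = "__begin__", "__done__"

class MiniFlow:
    def __init__(self):
        self.nodes, self.edges, self.conditional = {}, {}, {}

    def add_node(self, name, fn):
        self.nodes[name] = fn

    def add_edge(self, src, dst):
        self.edges[src] = dst

    def add_conditional_edge(self, src, router, table):
        self.conditional[src] = (router, table)

    def run(self, state, max_steps=20):
        current = self.edges[BEGIN]
        for _ in range(max_steps):  # guard against endless retry loops
            state = self.nodes[current](state)
            if current in self.conditional:
                router, table = self.conditional[current]
                nxt = table[router(state)]
            else:
                nxt = self.edges.get(current, DONE)
            if nxt == DONE:
                return state
            current = nxt
        return state

flow = MiniFlow()
flow.add_node("research", lambda s: {**s, "notes": "facts about " + s["topic"]})
flow.add_node("write", lambda s: {**s, "draft": s["notes"].upper()})
flow.add_edge(BEGIN, "research")
flow.add_edge("research", "write")
flow.add_conditional_edge("write", lambda s: "pass", {"pass": DONE, "retry": "research"})
result = flow.run({"topic": "AI safety"})
```

The `max_steps` cap matters for the `"retry": "research"` edge: a reviewer that never passes would otherwise loop forever.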
 
-# Parallel execution (CrewAI-style)
-results = pa.parallel(["task 1", "task 2", "task 3"], agents=[a1, a2, a3])
+# ── swarm: run tasks concurrently ──
+results = pa.swarm(["task 1", "task 2", "task 3"], agents=[spark_a, spark_b, spark_c])
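The swarm call pairs each task with an agent and runs them concurrently. A stdlib sketch of that behavior; `run_swarm` is a stand-in for `pa.swarm`, and the "agents" are plain callables rather than Spark objects:

```python
# Sketch of swarm-style concurrency with the stdlib thread pool.
from concurrent.futures import ThreadPoolExecutor

def run_swarm(tasks, agents):
    """Pair each task with an agent, run concurrently, preserve input order."""
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        futures = [pool.submit(agent, task) for task, agent in zip(tasks, agents)]
        return [f.result() for f in futures]

def echo(task):
    return f"done: {task}"

results = run_swarm(["task 1", "task 2", "task 3"], [echo, echo, echo])
print(results)  # order matches the input task list, not completion order
```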
 
-# Agent conversations (AutoGen-style)
-chat = pa.Conversation([pa.Agent("researcher"), pa.Agent("coder")])
-result = chat.run("Design a web scraper", rounds=3)
+# ── Council: agents deliberate together ──
+council = pa.Council([pa.Spark("researcher"), pa.Spark("coder"), pa.Spark("reviewer")])
+result = council.run("Design a web scraper", rounds=3)
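The `rounds` parameter suggests each agent takes a turn per round over a shared transcript. A toy sketch of that deliberation loop; `MiniCouncil` and its reply functions are illustrative stand-ins, where a real Spark would call an LLM:

```python
# Sketch of Council-style deliberation: each agent reads the running transcript
# and appends one turn per round.
class MiniCouncil:
    def __init__(self, agents):
        self.agents = agents  # list of (name, reply_fn) pairs

    def run(self, topic, rounds=3):
        transcript = [f"task: {topic}"]
        for _ in range(rounds):
            for name, reply in self.agents:
                transcript.append(f"{name}: {reply(transcript)}")
        return transcript

council = MiniCouncil([
    ("researcher", lambda t: "requirements gathered"),
    ("coder", lambda t: "prototype written"),
    ("reviewer", lambda t: "looks good"),
])
result = council.run("Design a web scraper", rounds=2)
```

With 3 agents and 2 rounds the transcript holds the task line plus 6 turns.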
 
-# Knowledge-aware agents (LlamaIndex-style)
-kb = pa.KnowledgeStore.from_directory("./docs")
-agent = pa.Agent("assistant", tools=[kb.as_tool()])
+# ── Vault: knowledge store with RAG-as-a-tool ──
+vault = pa.Vault.from_directory("./docs")
+spark = pa.Spark("assistant", tools=[vault.as_tool()])
+result = spark.run("What does the documentation say about X?")
 
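`vault.as_tool()` packages retrieval as something an agent can invoke. A deliberately naive sketch using keyword overlap in place of embeddings; `MiniVault` is illustrative, not the real `pa.Vault`:

```python
# Sketch of Vault-style retrieval exposed as a tool. Scoring is naive keyword
# overlap; a real knowledge store would use embeddings or an index.
class MiniVault:
    def __init__(self, texts):
        self.texts = texts

    def query(self, question, k=1):
        q = set(question.lower().split())
        scored = sorted(self.texts, key=lambda t: -len(q & set(t.lower().split())))
        return scored[:k]

    def as_tool(self):
        # A "tool" here is just a named callable an agent could invoke.
        return ("vault_search", lambda q: self.query(q, k=1)[0])

vault = MiniVault([
    "Install with pip install purpose-agent",
    "Flows route work between nodes",
])
name, search = vault.as_tool()
print(search("how do I install"))
```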
-# Parallel tool execution (LLMCompiler-style)
+# ── LLMCompiler: parallel tool execution via DAG planning ──
 compiler = pa.LLMCompiler(planner_llm=backend, tool_registry=registry)
 result = compiler.compile_and_execute("Calculate X and search Y simultaneously")
 ```
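The LLMCompiler idea is a DAG of tool calls where independent calls run concurrently and dependents wait for their inputs. A sketch with a hardcoded plan; the real compiler derives the plan from the task with a planner LLM, and `execute_plan` is an illustrative stand-in:

```python
# Sketch of LLMCompiler-style execution over a pre-planned DAG of tool calls.
# Each level of calls whose dependencies are satisfied runs in parallel.
from concurrent.futures import ThreadPoolExecutor

def execute_plan(plan, tools):
    """plan: {call_id: (tool_name, arg_template, deps)}; returns {call_id: result}."""
    results = {}
    pending = dict(plan)
    with ThreadPoolExecutor() as pool:
        while pending:
            ready = {cid: spec for cid, spec in pending.items()
                     if all(d in results for d in spec[2])}
            futures = {cid: pool.submit(tools[name], arg.format(**results))
                       for cid, (name, arg, _) in ready.items()}
            for cid, fut in futures.items():
                results[cid] = fut.result()
                del pending[cid]
    return results

# Toy tools; eval is applied only to the trusted arithmetic strings below.
tools = {"calc": lambda x: str(eval(x)), "search": lambda q: f"results for {q}"}
plan = {
    "c1": ("calc", "2 + 3", []),              # independent: runs in parallel
    "s1": ("search", "population of X", []),  # with c1
    "c2": ("calc", "{c1} * 10", ["c1"]),      # waits for c1's result
}
out = execute_plan(plan, tools)
```

On the first pass `c1` and `s1` run together; `c2` runs on the second pass with `{c1}` substituted by `c1`'s result.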
 
+## API Reference (Level 3)
+
+| Name | What | Example |
+|------|------|---------|
+| `pa.Spark(name, model, tools)` | Create an intelligent agent | `pa.Spark("coder", model="qwen3:1.7b")` |
+| `pa.Flow()` | Workflow engine with nodes and edges | `flow.add_node("step", handler)` |
+| `pa.swarm(tasks, agents)` | Run tasks concurrently | `pa.swarm(["a","b"], [s1, s2])` |
+| `pa.Council(agents)` | Agent deliberation rounds | `council.run("topic", rounds=3)` |
+| `pa.Vault.from_texts(list)` | Knowledge store for RAG | `vault.query("search term")` |
+| `pa.BEGIN` | Flow start node | `flow.add_edge(pa.BEGIN, "first")` |
+| `pa.DONE_SIGNAL` | Flow end node | `flow.add_edge("last", pa.DONE_SIGNAL)` |
+
 ## Evidence-Gated Memory
 
 Agents don't just accumulate knowledge blindly. Every new memory goes through a pipeline:
@@ -149,8 +168,6 @@ Seven memory types: `purpose_contract`, `user_preference`, `skill_card`, `episod
 
 ## Honest Evaluation
 
-Three run modes enforce what the framework can mutate:
-
 ```python
 from purpose_agent import RunMode
 
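The hunk below shows that `RunMode.EVAL_TEST` forbids writes so evaluation numbers cannot self-contaminate. A sketch of how such gating might be enforced; `MemoryGate` and the `LIVE`/`EVAL_DEV` mode names are illustrative assumptions (only `EVAL_TEST` appears in the diff), and the framework's internals may differ:

```python
# Sketch of run-mode write gating: the mode controls whether memory writes
# are allowed. MemoryGate, LIVE, and EVAL_DEV are illustrative stand-ins.
from enum import Enum

class RunMode(Enum):
    LIVE = "live"            # assumed: normal operation, writes allowed
    EVAL_DEV = "eval_dev"    # assumed: development evals
    EVAL_TEST = "eval_test"  # from the diff: NO writes during scored evals

class MemoryGate:
    def __init__(self, mode):
        self.mode, self.store = mode, []

    def write(self, memory):
        if self.mode is RunMode.EVAL_TEST:
            return False  # refuse: eval runs must not mutate agent state
        self.store.append(memory)
        return True

gate = MemoryGate(RunMode.EVAL_TEST)
blocked = gate.write("new heuristic")
print(blocked, gate.store)  # the write is refused and nothing is stored
```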
@@ -169,21 +186,20 @@ RunMode.EVAL_TEST  # NO writes — numbers you can trust
 
 See [ARCHITECTURE.md](ARCHITECTURE.md) for the complete technical documentation.
 
-34 Python modules, ~500KB, organized in layers:
+34 Python modules, ~500KB:
 
 ```
-Core Engine → Actor, Purpose Function, Experience Replay, Optimizer, Orchestrator
-V2 Kernel → Memory, Immune, Trace, Compiler, Memory CI, Eval Port, Benchmark
-Research → Meta-Rewarding, Self-Taught, Prompt Optimizer, LLM Compiler, Retroformer
-Breakthroughs→ Self-Improving Critic, MoH, Hindsight Relabeling, Heuristic Evolution
-Capabilities → Agent, Graph, Parallel, Conversation, KnowledgeStore
-Easy API → purpose(), Team, quickstart wizard
+Core Engine → Actor, Purpose Function, Experience Replay, Optimizer, Orchestrator
+V2 Kernel → Memory, Immune, Trace, Compiler, Memory CI, Eval Port, Benchmark
+Research → Meta-Rewarding, Self-Taught, Prompt Optimizer, LLM Compiler, Retroformer
+Breakthroughs → Self-Improving Critic, MoH, Hindsight Relabeling, Heuristic Evolution
+Capabilities → Spark, Flow, swarm, Council, Vault
+Easy API → purpose(), Team, quickstart wizard
 ```
 
 ## Literature
 
-Built on 13 published papers. Full research trace: [COMPILED_RESEARCH.md](COMPILED_RESEARCH.md).
-Formal proofs: [PURPOSE_LEARNING.md](PURPOSE_LEARNING.md).
+Built on 13 published papers. Full research trace: [COMPILED_RESEARCH.md](COMPILED_RESEARCH.md). Formal proofs: [PURPOSE_LEARNING.md](PURPOSE_LEARNING.md).
 
 | Paper | What it contributes |
 |-------|-------------------|
 