---
library_name: purpose-agent
license: mit
language:
- en
tags:
- reinforcement-learning
- agents
- self-improving
- experience-replay
- llm-as-judge
- state-value-evaluation
- memory-augmented
- multi-agent
- slm
- small-language-models
- human-in-the-loop
- streaming
- tools
- evaluation
- ollama
- local-models
- no-code
- easy-to-use
pipeline_tag: text-generation
---
# Purpose Agent

**Build self-improving AI agent teams with just a purpose.**

No PhD required. No infrastructure costs. Runs on your laptop.
```python
import purpose_agent as pa

# One line. That's all you need.
team = pa.purpose("Help me research and summarize scientific papers")

# Give it tasks. It gets smarter every time.
result = team.run("Find recent breakthroughs in quantum computing")
print(result)

# Teach it your preferences
team.teach("Always cite your sources")
team.teach("Keep summaries under 200 words")

# Check what it's learned
print(team.status())
```
## Three Levels of Usage

Pick your level. You can always go deeper later.
### Level 1: Beginner (no technical knowledge needed)

```python
import purpose_agent as pa

# Describe what you want. The framework builds the right team.
team = pa.purpose("Write Python code and test it")
result = team.run("Create a function that calculates fibonacci numbers")
print(result)

# It auto-detects the best team:
# "Write code"   → architect + coder + tester
# "Research X"   → researcher + analyst
# "Write blog"   → writer + editor
# "Analyze data" → analyst + reporter
# "Help me"      → general assistant
```
### Level 2: Intermediate (customize your team)

```python
import purpose_agent as pa

# Build a custom team
team = pa.Team.build(
    purpose="Customer support assistant",
    agents=["greeter", "resolver", "escalator"],
    model="qwen3:1.7b",  # Free local model
)
result = team.run("Customer says: I can't log in to my account")

# Add knowledge from your docs
team = pa.purpose(
    "Answer questions about our product",
    knowledge="./docs/",  # Load all files from a folder
    model="qwen3:1.7b",
)
result = team.ask("What is our refund policy?")
```
### Level 3: Advanced (full control)

```python
import purpose_agent as pa

# Graph workflows (like LangGraph)
graph = pa.Graph()
graph.add_node("research", pa.Agent("researcher", model="qwen3:1.7b"))
graph.add_node("write", pa.Agent("writer", model="phi4-mini"))
graph.add_edge(pa.START, "research")
# review_fn and initial_state are defined by you
graph.add_conditional_edge("write", review_fn, {"pass": pa.END, "fail": "research"})
result = graph.run(initial_state)

# Parallel execution (like CrewAI)
results = pa.parallel(["task 1", "task 2", "task 3"], agents=[a1, a2, a3])

# Agent conversations (like AutoGen)
chat = pa.Conversation([researcher, coder, reviewer])
result = chat.run("Design a web scraper", rounds=5)

# Knowledge-aware agents (like LlamaIndex)
kb = pa.KnowledgeStore.from_directory("./docs")
agent = pa.Agent("assistant", tools=[kb.as_tool()])

# Human-in-the-loop (like LangGraph)
hitl = pa.HITLOrchestrator(
    orch,
    input_handler=pa.CLIInputHandler(),
    approve_actions=True,
    review_scores=True,
)
```
## What Makes This Different

**The only framework where agents actually learn from experience.**

Every other framework (LangChain, CrewAI, AutoGen) runs the same way every time. Purpose Agent gets smarter with each task via the Φ self-improvement loop:

- Task 1: Agent struggles, takes 12 steps → Φ evaluates → learns heuristics
- Task 5: Agent uses learned patterns, takes 8 steps → learns more
- Task 10: Agent is efficient, takes 5 steps → keeps refining
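The shape of that loop can be sketched in plain Python. The `PhiLoop` class, the step counts, and the scoring below are a toy model of run → evaluate → distill, not the actual Φ evaluator:

```python
from dataclasses import dataclass, field

@dataclass
class PhiLoop:
    """Toy self-improvement loop: run a task, score it, keep a heuristic."""
    heuristics: list[str] = field(default_factory=list)

    def run_task(self, base_steps: int) -> int:
        # Toy model: each learned heuristic removes one wasted step.
        return max(1, base_steps - len(self.heuristics))

    def evaluate(self, steps: int) -> float:
        # Toy value function: fewer steps, higher score.
        return round(1.0 / steps, 3)

    def learn(self, note: str) -> None:
        self.heuristics.append(note)

loop = PhiLoop()
for task_id in (1, 2, 3):
    steps = loop.run_task(base_steps=12)
    score = loop.evaluate(steps)
    loop.learn(f"task {task_id}: avoid the pattern that cost a step")
    print(f"task {task_id}: {steps} steps, score {score}")
```

The point of the sketch is the feedback edge: the evaluation from one task changes the policy for the next, which is exactly what a run-the-same-way-every-time framework lacks.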
Plus it absorbs the best of every competing framework:

| You want... | Others say use... | Purpose Agent gives you... |
|---|---|---|
| Control (graphs, conditions, loops) | LangGraph | `pa.Graph()` → same power, with self-improvement |
| Speed (parallel execution) | CrewAI | `pa.parallel()` → real threads, not fake async |
| Agents talking | AutoGen | `pa.Conversation()` → with Φ-scored turns |
| Plug-and-play | OpenAI Agents SDK | `pa.purpose()` → even simpler, one function |
| Knowledge (RAG) | LlamaIndex | `pa.KnowledgeStore` → RAG as a tool |
| Self-improvement | Nobody | Only Purpose Agent |
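On the "real threads" row: assuming that phrase means OS threads from the standard library rather than cooperative async, the idea can be shown in miniature. The `run_agent` stand-in below is hypothetical; in practice each call would block on an LLM, which is exactly the wait that threads overlap:

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(task: str) -> str:
    # Stand-in for an agent call; real work would block on network I/O,
    # so running tasks on separate threads overlaps that waiting time.
    return f"done: {task}"

tasks = ["task 1", "task 2", "task 3"]
with ThreadPoolExecutor(max_workers=3) as pool:
    # map() preserves input order even though workers run concurrently.
    results = list(pool.map(run_agent, tasks))
print(results)
```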
## Runs on Your Laptop (Free, Private)

```bash
# Install Ollama (one-time)
curl -fsSL https://ollama.ai/install.sh | sh
ollama pull qwen3:1.7b  # 1.7B params, runs on CPU
```

That's it. No API keys. No cloud. No cost.

```python
team = pa.purpose("Research assistant", model="qwen3:1.7b")
```

Also works with cloud models:

```python
team = pa.purpose("Research assistant", model="gpt-4o")          # OpenAI
team = pa.purpose("Research assistant", model="Qwen/Qwen3-32B")  # HuggingFace
```
## Interactive CLI

```bash
python -m purpose_agent
```

Walks you through setup step by step. No coding required.
## Installation

```bash
git clone https://huggingface.co/Rohan03/purpose-agent
cd purpose-agent

# For local models (recommended)
pip install ollama

# Run demo (no API keys needed)
python demo.py
```
## Literature Foundation

Built on 8 published papers; every design decision has empirical backing. See COMPILED_RESEARCH.md for the full research trace.

| Paper | What it contributes |
|---|---|
| MUSE | 3-tier memory hierarchy |
| LATS | LLM-as-value-function |
| REMEMBERER | Q-value experience replay |
| Reflexion | Verbal reinforcement |
| SPC | Anti-reward-hacking |
| CER | Experience distillation |
| MemRL | Two-phase retrieval |
| TinyAgent | SLM-native patterns |
## License

MIT