# GraphRAG Inference Hackathon: Dual Pipeline System

Proving that graphs make LLM inference faster, cheaper, and smarter, with any LLM provider.

Quick Start · 12 Providers · OpenClaw · Architecture · Benchmarks · Deploy
## Quick Start

### Option A: Next.js Dashboard (Recommended)

```bash
cd web
npm install
cp .env.example .env.local
# Set any provider key in .env.local, or just use Ollama for free
npm run dev
# → http://localhost:3000
```
### Option B: Docker (One Command)

```bash
docker build -t graphrag .
docker run -p 3000:3000 -e ANTHROPIC_API_KEY=sk-ant-... graphrag
```
### Option C: Python CLI

```bash
pip install -r requirements.txt
python -m graphrag.main demo
```
### Option D: Ollama (100% Free, Local)

```bash
ollama pull llama3.2
cd web && npm install && npm run dev
# Select "Ollama (Local)" in the provider dropdown
```
## Architecture (AI Factory Model, 4 Layers)

```
┌────────────────────────────────────────────────────────────────────────┐
│ LAYER 4: EVALUATION                                                    │
│ Next.js Dashboard · RAGAS · F1/EM · Cost Tracking · Live Benchmark     │
├────────────────────────────────────────────────────────────────────────┤
│ LAYER 3: UNIVERSAL LLM (12 Providers)                                  │
│ OpenAI · Claude · Gemini · Mistral · Ollama · Groq · DeepSeek · …      │
├────────────────────────────────────────────────────────────────────────┤
│ LAYER 2: ORCHESTRATION                                                 │
├────────────────────────────┬───────────────────────────────────────────┤
│ Pipeline A: Baseline RAG   │ Pipeline B: GraphRAG                      │
│ Query → Vector → LLM       │ Query → Keywords → Graph → Context → LLM  │
│ Adaptive Router            │ Reasoning Paths                           │
├────────────────────────────┴───────────────────────────────────────────┤
│ LAYER 1: GRAPH (TigerGraph Cloud)                                      │
│ Schema: Document → Chunk → Entity → Community                          │
│ GSQL: vectorSearchChunks · vectorSearchEntities · graphRAGTraverse     │
└────────────────────────────────────────────────────────────────────────┘
```
Each layer is a separate module: swap TigerGraph for Neo4j, Claude for Ollama, or RAGAS for custom evals without touching the other layers.
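A minimal sketch of that swappable-layer idea: traversal logic written against a small interface runs unchanged whether the backend is TigerGraph, Neo4j, or a toy in-memory dict. The `GraphBackend` protocol and `InMemoryGraph` class below are illustrative assumptions, not the project's actual `graph_layer` API.

```python
from typing import Protocol


class GraphBackend(Protocol):
    """Minimal interface any graph layer must satisfy (assumed for illustration)."""

    def neighbors(self, entity: str) -> list[str]: ...


class InMemoryGraph:
    """Toy stand-in that could be swapped for a TigerGraph or Neo4j client."""

    def __init__(self, edges: dict[str, list[str]]):
        self._edges = edges

    def neighbors(self, entity: str) -> list[str]:
        return self._edges.get(entity, [])


def two_hop(graph: GraphBackend, start: str) -> set[str]:
    """Backend-agnostic traversal: collect 1-hop and 2-hop neighbors."""
    hop1 = set(graph.neighbors(start))
    return hop1 | {n for e in hop1 for n in graph.neighbors(e)}
```

Because `two_hop` only depends on the protocol, replacing the backend never touches the orchestration code.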
## Supported LLM Providers

| # | Provider | Default Model | Cost / 1K tokens (in / out) | Speed |
|---|---|---|---|---|
| 1 | OpenAI | gpt-4o-mini | $0.00015 / $0.0006 | Fast |
| 2 | Anthropic Claude | claude-sonnet-4 | $0.003 / $0.015 | Medium |
| 3 | Google Gemini | gemini-2.0-flash | $0.0001 / $0.0004 | Fast |
| 4 | Mistral AI | mistral-large | $0.002 / $0.006 | Medium |
| 5 | Cohere | command-r-plus | $0.0025 / $0.01 | Medium |
| 6 | Ollama | llama3.2 | $0 / $0 | Local |
| 7 | OpenRouter | llama-3.3-70b | $0.0004 / $0.0004 | Medium |
| 8 | Groq | llama-3.3-70b | $0.0006 / $0.0008 | Blazing |
| 9 | xAI Grok | grok-3-mini | $0.0003 / $0.0005 | Fast |
| 10 | Together AI | llama-3.1-70b | $0.0009 / $0.0009 | Fast |
| 11 | HuggingFace | llama-3.3-70b | $0 / $0 | Medium |
| 12 | DeepSeek | deepseek-chat | $0.00014 / $0.00028 | Fast |
How it works: every provider is called through the OpenAI SDK with a dynamic `baseURL`, so no provider-specific dependencies are needed. Switch providers from the dropdown in the dashboard UI.
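The dynamic-`baseURL` pattern can be sketched as follows. The endpoint URLs and env-var names are common defaults for these providers, but treat this registry as an illustrative assumption, not the project's actual `llm-providers.ts` contents.

```python
import os

# Assumed registry: each provider exposes an OpenAI-compatible endpoint.
PROVIDERS = {
    "openai":   {"base_url": "https://api.openai.com/v1",      "key_env": "OPENAI_API_KEY"},
    "groq":     {"base_url": "https://api.groq.com/openai/v1", "key_env": "GROQ_API_KEY"},
    "deepseek": {"base_url": "https://api.deepseek.com/v1",    "key_env": "DEEPSEEK_API_KEY"},
    "ollama":   {"base_url": "http://localhost:11434/v1",      "key_env": None},  # no key needed
}


def client_config(provider: str) -> dict:
    """Resolve base_url + api_key for an OpenAI-SDK-compatible client."""
    p = PROVIDERS[provider]
    # Ollama ignores the key, but the SDK requires a non-empty placeholder.
    key = os.environ.get(p["key_env"], "") if p["key_env"] else "ollama"
    return {"base_url": p["base_url"], "api_key": key}
```

With this shape, switching providers is just `OpenAI(**client_config("groq"))` versus `OpenAI(**client_config("ollama"))`; the chat-completion call itself is identical.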
## Novel Features

- **Adaptive Query Router**: complexity scoring → automatic pipeline selection
- **Schema-Bounded Extraction**: 9 entity types + 15 relation types
- **Dual-Level Keywords**: LightRAG-inspired high/low-level retrieval
- **Graph Reasoning Paths**: step-by-step natural-language traversal explanations
- **12-Provider Universal LLM**: including free local Ollama
- **OpenClaw Agent Skills**: GraphRAG exposed as autonomous agent capabilities
- **Live Benchmark Button**: run real evaluations from the dashboard
- **12-Provider Cost Comparison**: real-time cost projections
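The Adaptive Query Router idea can be sketched with cheap lexical heuristics: score the query, then route simple questions to baseline RAG and likely multi-hop questions to GraphRAG. The cue list, weights, and threshold below are assumptions for illustration, not the project's actual scorer.

```python
# Words that often signal multi-hop/comparative questions (assumed cue list).
MULTIHOP_CUES = ("who", "which", "both", "between", "compare", "and")


def complexity_score(query: str) -> float:
    """Blend query length with multi-hop cue density into a 0..1 score."""
    words = query.lower().split()
    cue_hits = sum(w in MULTIHOP_CUES for w in words)
    length_signal = min(len(words) / 30, 1.0)
    return 0.5 * length_signal + 0.5 * min(cue_hits / 3, 1.0)


def route(query: str, threshold: float = 0.4) -> str:
    """Pick a pipeline: cheap baseline RAG or graph traversal."""
    return "graphrag" if complexity_score(query) >= threshold else "baseline"
```

A production router could swap these heuristics for an LLM-based classifier without changing the `route` interface.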
## Benchmarks

### Live Benchmark (Run from Dashboard)

Click "Run Benchmark Now" in the Benchmark tab to evaluate both pipelines on 10 HotpotQA questions with your configured provider. Results populate in real time with F1, EM, token counts, and costs.
### Expected Results (HotpotQA)

| Metric | Baseline RAG | GraphRAG | Winner |
|---|---|---|---|
| F1 Score | ~0.45–0.60 | ~0.55–0.70 | GraphRAG |
| Exact Match | ~0.30–0.45 | ~0.35–0.50 | GraphRAG |
| Tokens/Query | ~800–1000 | ~2000–2800 | Baseline |
| F1 Win Rate | – | ~55–70% | GraphRAG |
**Key finding:** GraphRAG consistently outperforms the baseline on multi-hop (bridge-type) questions, where the answer requires connecting facts across documents. The token overhead is 2–3×, but the Adaptive Router avoids that cost by sending simple queries to the baseline pipeline.
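For reference, the F1 and EM numbers above use standard token-overlap scoring. A simplified sketch (without SQuAD-style article and punctuation stripping) looks like this:

```python
from collections import Counter


def normalize(s: str) -> list[str]:
    """Lowercase and tokenize on whitespace (real scorers also strip punctuation)."""
    return s.lower().split()


def exact_match(pred: str, gold: str) -> float:
    return float(normalize(pred) == normalize(gold))


def f1(pred: str, gold: str) -> float:
    """Token-level F1: harmonic mean of precision and recall over token bags."""
    p, g = normalize(pred), normalize(gold)
    common = Counter(p) & Counter(g)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)
```

F1 rewards partially correct answers (extra or missing tokens reduce but do not zero the score), which is why it separates the pipelines more smoothly than EM.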
## OpenClaw Integration

Full CIK model (Capability + Identity + Knowledge):

| File | Purpose |
|---|---|
| `openclaw/SOUL.md` | Agent identity, values, personality |
| `openclaw/IDENTITY.md` | Configuration, supported providers |
| `openclaw/MEMORY.md` | Learned facts about GraphRAG |
| `openclaw/skills/graph_query/` | NL → knowledge-graph traversal |
| `openclaw/skills/compare_pipelines/` | Dual-pipeline comparison |
| `openclaw/skills/cost_estimate/` | 12-provider cost projection |
## Testing

```bash
# Run all 31 unit tests
python tests/test_core.py
```

The tests cover:

- `cosine_similarity` (5 cases, including edge cases)
- `chunk_text` (4 cases: basic, empty, short, overlap)
- entity ID generation (3 cases: deterministic, case-insensitive, distinct per type)
- F1/EM computation (5 cases: perfect, partial, no overlap, empty)
- context hit rate (2 cases)
- token efficiency (3 cases)
- provider registry (4 cases: completeness, fields, Ollama free, availability)
- evaluation-layer aggregate + report (2 cases)
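To make the overlap cases concrete, here is a sketch of the windowed chunking behaviour those `chunk_text` tests exercise. The window sizes and exact slicing are assumptions; the project's implementation may differ.

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks where consecutive chunks share `overlap` chars."""
    if not text:
        return []
    step = size - overlap  # advance less than `size` so windows overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```

The overlap means a sentence falling on a chunk boundary still appears whole in at least one chunk, which matters for retrieval recall.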
## Deployment

### Docker

```bash
docker build -t graphrag .
docker run -p 3000:3000 \
  -e ANTHROPIC_API_KEY=sk-ant-... \
  -e OPENAI_API_KEY=sk-... \
  graphrag
```
### Vercel

```bash
cd web
npx vercel --prod
```
### Environment Variables

```bash
# Set any or all keys; the system auto-detects available providers
ANTHROPIC_API_KEY=sk-ant-...   # Claude
OPENAI_API_KEY=sk-...          # GPT-4o
GEMINI_API_KEY=AIza...         # Gemini
GROQ_API_KEY=gsk_...           # Groq (ultra-fast)
DEEPSEEK_API_KEY=sk-...        # DeepSeek (cheapest)
# Or run locally for free: ollama pull llama3.2
```
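The auto-detection idea can be sketched as: a provider counts as available when its key is present, with Ollama always offered as the free local fallback. The env-var names match the list above, but the function itself is an assumed illustration.

```python
import os

# Env-var names from the list above.
KEY_ENVS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "gemini": "GEMINI_API_KEY",
    "groq": "GROQ_API_KEY",
    "deepseek": "DEEPSEEK_API_KEY",
}


def available_providers(env=os.environ) -> list[str]:
    """Return providers whose key is set, plus Ollama (local, keyless)."""
    found = [name for name, var in KEY_ENVS.items() if env.get(var)]
    return found + ["ollama"]
```

The dashboard's provider dropdown would then simply render this list instead of hard-coding twelve entries.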
## Project Structure (68 files, 240KB)

```
├── web/                           # Next.js 15 dashboard
│   ├── src/app/
│   │   ├── globals.css            # 14KB fused TigerGraph × Claude design system
│   │   └── api/
│   │       ├── compare/route.ts   # Multi-provider dual-pipeline API
│   │       ├── benchmark/route.ts # Live benchmark runner with F1/EM
│   │       └── providers/route.ts # Available providers + Ollama health
│   ├── src/components/tabs/
│   │   ├── LiveCompare.tsx        # Provider selector + side-by-side comparison
│   │   ├── Benchmark.tsx          # Live "Run Now" + radar/bar charts
│   │   ├── CostAnalysis.tsx       # 12-provider cost projections
│   │   └── GraphExplorer.tsx      # Interactive SVG knowledge graph
│   └── src/lib/
│       ├── llm-providers.ts       # 12-provider universal client (18KB)
│       └── design-tokens.ts       # Color + typography tokens
│
├── openclaw/                      # OpenClaw agent (CIK model)
│   ├── SOUL.md / IDENTITY.md / MEMORY.md
│   └── skills/                    # 3 skills
│
├── graphrag/                      # Python backend
│   └── layers/
│       ├── graph_layer.py         # TigerGraph schema + GSQL
│       ├── orchestration_layer.py # Dual pipeline + adaptive router
│       ├── llm_layer.py           # LLM interactions
│       ├── evaluation_layer.py    # RAGAS + F1/EM
│       └── universal_llm.py       # LiteLLM 12-provider support
│
├── tests/test_core.py             # 31 unit tests
├── Dockerfile                     # One-command deployment
└── README.md
```
## References

- GraphRAG: From Local to Global
- LightRAG: Simple and Fast Retrieval-Augmented Generation (34K stars)
- OpenClaw: Personal AI Agent
- HotpotQA: Multi-hop QA
- RAGAS: RAG Evaluation
- Youtu-GraphRAG: Schema-Bounded GraphRAG

TigerGraph · Anthropic · Ollama · Groq · LiteLLM · Next.js · Recharts