# GraphRAG Inference Hackathon — Dual Pipeline System
**Proving that graphs make LLM inference faster, cheaper, and smarter — with any LLM provider.**
[Quick Start](#quick-start) · [12 Providers](#supported-llm-providers) · [OpenClaw](#openclaw-integration) · [Architecture](#architecture-ai-factory-model--4-layers) · [Benchmarks](#benchmarks) · [Deploy](#deployment)
---
## Quick Start
### Option A: Next.js Dashboard (Recommended)
```bash
cd web
npm install
cp .env.example .env.local
# Set ANY provider key — or just use Ollama for free:
npm run dev
# β http://localhost:3000
```
### Option B: Docker (One Command)
```bash
docker build -t graphrag .
docker run -p 3000:3000 -e ANTHROPIC_API_KEY=sk-ant-... graphrag
```
### Option C: Python CLI
```bash
pip install -r requirements.txt
python -m graphrag.main demo
```
### Option D: Ollama (100% Free, Local)
```bash
ollama pull llama3.2
cd web && npm install && npm run dev
# Select "Ollama (Local)" in provider dropdown
```
---
## Architecture (AI Factory Model — 4 Layers)
```
┌────────────────────────────────────────────────────────────────────────┐
│ LAYER 4: EVALUATION                                                    │
│ Next.js Dashboard · RAGAS · F1/EM · Cost Tracking · Live Benchmark     │
├────────────────────────────────────────────────────────────────────────┤
│ LAYER 3: UNIVERSAL LLM (12 Providers)                                  │
│ OpenAI · Claude · Gemini · Mistral · Ollama · Groq · DeepSeek · …      │
├────────────────────────────────────────────────────────────────────────┤
│ LAYER 2: ORCHESTRATION                                                 │
├────────────────────────────┬───────────────────────────────────────────┤
│ Pipeline A: Baseline RAG   │ Pipeline B: GraphRAG                      │
│ Query → Vector → LLM       │ Query → Keywords → Graph → Context → LLM  │
│        Adaptive Router     │        Reasoning Paths                    │
├────────────────────────────┴───────────────────────────────────────────┤
│ LAYER 1: GRAPH (TigerGraph Cloud)                                      │
│ Schema: Document → Chunk → Entity → Community                          │
│ GSQL: vectorSearchChunks · vectorSearchEntities · graphRAGTraverse     │
└────────────────────────────────────────────────────────────────────────┘
```
**Each layer is a separate module** — swap TigerGraph for Neo4j, Claude for Ollama, or RAGAS for custom evals without touching the other layers.
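The swap-without-touching-other-layers claim can be sketched with structural typing: the orchestration layer depends only on small interfaces, not on TigerGraph or any specific provider. The class and method names below are illustrative, not the project's actual module API.

```python
# Hypothetical sketch of the layer boundaries; the real interfaces live in
# graphrag/layers/ and may differ in names and signatures.
from typing import Protocol


class GraphLayer(Protocol):
    """Layer 1: any graph backend (TigerGraph, Neo4j, ...) can satisfy this."""

    def traverse(self, keywords: list[str], hops: int) -> list[str]:
        """Return context passages reachable from the keyword entities."""
        ...


class LLMLayer(Protocol):
    """Layer 3: any of the 12 providers can satisfy this."""

    def complete(self, prompt: str) -> str:
        ...


def graphrag_answer(graph: GraphLayer, llm: LLMLayer, query: str) -> str:
    """Layer 2 orchestration: depends only on the two protocols above,
    so either side can be swapped without touching this function."""
    context = graph.traverse(keywords=query.split(), hops=2)
    context_text = "\n".join(context)
    return llm.complete(f"Context:\n{context_text}\n\nQuestion: {query}")
```

Because `graphrag_answer` only sees the protocols, a Neo4j-backed `GraphLayer` or an Ollama-backed `LLMLayer` is a drop-in replacement.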
---
## Supported LLM Providers
| # | Provider | Default Model | Cost/1K tokens | Speed |
|---|----------|---------------|----------------|-------|
| 1 | **OpenAI** | gpt-4o-mini | $0.00015 in / $0.0006 out | Fast |
| 2 | **Anthropic Claude** | claude-sonnet-4 | $0.003 / $0.015 | Medium |
| 3 | **Google Gemini** | gemini-2.0-flash | $0.0001 / $0.0004 | Fast |
| 4 | **Mistral AI** | mistral-large | $0.002 / $0.006 | Medium |
| 5 | **Cohere** | command-r-plus | $0.0025 / $0.01 | Medium |
| 6 | **Ollama** | llama3.2 | **$0 / $0** | Local |
| 7 | **OpenRouter** | llama-3.3-70b | $0.0004 / $0.0004 | Medium |
| 8 | **Groq** | llama-3.3-70b | $0.0006 / $0.0008 | Blazing |
| 9 | **xAI Grok** | grok-3-mini | $0.0003 / $0.0005 | Fast |
| 10 | **Together AI** | llama-3.1-70b | $0.0009 / $0.0009 | Fast |
| 11 | **HuggingFace** | llama-3.3-70b | **$0 / $0** | Medium |
| 12 | **DeepSeek** | deepseek-chat | $0.00014 / $0.00028 | Fast |
**How:** All providers use the OpenAI SDK with a dynamic `baseURL` — zero extra dependencies. Switch providers from the **dropdown in the dashboard UI**.
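The "one SDK, many providers" pattern boils down to a registry of OpenAI-compatible endpoints. A minimal sketch (the entries and the `client_config` helper are illustrative, not the project's exact registry; the base URLs shown are each provider's documented OpenAI-compatible endpoint):

```python
# Each provider is just a base URL + an API-key env var for an
# OpenAI-compatible chat endpoint; Ollama needs no key at all.
import os

PROVIDERS = {
    "openai":   {"base_url": "https://api.openai.com/v1",      "key_env": "OPENAI_API_KEY",   "model": "gpt-4o-mini"},
    "groq":     {"base_url": "https://api.groq.com/openai/v1", "key_env": "GROQ_API_KEY",     "model": "llama-3.3-70b-versatile"},
    "deepseek": {"base_url": "https://api.deepseek.com",       "key_env": "DEEPSEEK_API_KEY", "model": "deepseek-chat"},
    "ollama":   {"base_url": "http://localhost:11434/v1",      "key_env": None,               "model": "llama3.2"},
}


def client_config(name: str) -> dict:
    """Resolve one provider name into kwargs for an OpenAI-compatible client."""
    p = PROVIDERS[name]
    # Ollama's local server accepts any placeholder key.
    key = os.environ.get(p["key_env"], "") if p["key_env"] else "ollama"
    return {"base_url": p["base_url"], "api_key": key, "model": p["model"]}
```

Switching providers is then a one-line change: pass `client_config("groq")` instead of `client_config("openai")` when constructing the client.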
---
## Novel Features
1. **Adaptive Query Router** — complexity scoring → automatic pipeline selection
2. **Schema-Bounded Extraction** — 9 entity types + 15 relation types
3. **Dual-Level Keywords** — LightRAG-inspired high/low-level retrieval
4. **Graph Reasoning Paths** — step-by-step natural-language traversal explanations
5. **12-Provider Universal LLM** — including free local Ollama
6. **OpenClaw Agent Skills** — GraphRAG as autonomous agent capabilities
7. **Live Benchmark Button** — run real evaluations from the dashboard
8. **12-Provider Cost Comparison** — real-time projections
---
## Benchmarks
### Live Benchmark (Run from Dashboard)
Click **"Run Benchmark Now"** in the Benchmark tab to evaluate both pipelines on 10 HotpotQA questions with your configured provider. Results populate in real time with F1, EM, token counts, and costs.
### Expected Results (HotpotQA)
| Metric | Baseline RAG | GraphRAG | Winner |
|--------|-------------|----------|--------|
| **F1 Score** | ~0.45–0.60 | ~0.55–0.70 | GraphRAG |
| **Exact Match** | ~0.30–0.45 | ~0.35–0.50 | GraphRAG |
| **Tokens/Query** | ~800–1000 | ~2000–2800 | Baseline |
| **F1 Win Rate** | — | ~55–70% | GraphRAG |
> **Key Finding:** GraphRAG consistently outperforms the baseline on multi-hop (bridge-type) questions, where connecting facts across documents is required. The token overhead is 2–3×, but the Adaptive Router avoids this cost on simple queries.
---
## OpenClaw Integration
Full CIK model (Capability + Identity + Knowledge):
| File | Purpose |
|------|---------|
| `openclaw/SOUL.md` | Agent identity, values, personality |
| `openclaw/IDENTITY.md` | Configuration, supported providers |
| `openclaw/MEMORY.md` | Learned facts about GraphRAG |
| `openclaw/skills/graph_query/` | Natural language → knowledge graph traversal |
| `openclaw/skills/compare_pipelines/` | Dual-pipeline comparison |
| `openclaw/skills/cost_estimate/` | 12-provider cost projection |
---
## Testing
```bash
# Run all 31 unit tests
python tests/test_core.py
# Tests cover:
# - cosine_similarity (5 cases including edge cases)
# - chunk_text (4 cases: basic, empty, short, overlap)
# - entity ID generation (3 cases: deterministic, case-insensitive, type-different)
# - F1/EM computation (5 cases: perfect, partial, no overlap, empty)
# - context hit rate (2 cases)
# - token efficiency (3 cases)
# - provider registry (4 cases: completeness, fields, ollama free, available)
# - evaluation layer aggregate + report (2 cases)
```
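Two of the helpers the test list refers to can be sketched as follows; these mirror the edge cases named above (zero vectors, empty input, overlap) but are illustrative, not the project's actual implementations.

```python
# Hedged sketches of helpers exercised by tests/test_core.py.
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors; 0.0 for a zero vector."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    if na == 0 or nb == 0:
        return 0.0                       # edge case: zero vector
    return dot / (na * nb)


def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Fixed-size character chunks with overlap; empty input yields no chunks."""
    if not text:
        return []                        # edge case: empty input
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]
```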
---
## Deployment
### Docker
```bash
docker build -t graphrag .
docker run -p 3000:3000 \
-e ANTHROPIC_API_KEY=sk-ant-... \
-e OPENAI_API_KEY=sk-... \
graphrag
```
### Vercel
```bash
cd web
npx vercel --prod
```
### Env Variables
```bash
# Set any/all — system auto-detects available providers
ANTHROPIC_API_KEY=sk-ant-... # Claude
OPENAI_API_KEY=sk-... # GPT-4o
GEMINI_API_KEY=AIza... # Gemini
GROQ_API_KEY=gsk_... # Groq (ultra-fast)
DEEPSEEK_API_KEY=sk-... # DeepSeek (cheapest)
# Or: ollama pull llama3.2 # Free, local
```
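The "set any key, system auto-detects" behavior amounts to scanning known env vars and always including keyless Ollama. A sketch (the env var names match the list above; the detection function itself is an illustration):

```python
# Report which providers are usable based on environment variables.
import os

KEY_ENVS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai":    "OPENAI_API_KEY",
    "gemini":    "GEMINI_API_KEY",
    "groq":      "GROQ_API_KEY",
    "deepseek":  "DEEPSEEK_API_KEY",
}


def available_providers() -> list[str]:
    """Providers whose key is set; Ollama is always listed since it is keyless."""
    found = [name for name, env in KEY_ENVS.items() if os.environ.get(env)]
    return found + ["ollama"]
```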
---
## Project Structure (68 files, 240KB)
```
├── web/                            # Next.js 15 Dashboard
│   ├── src/app/
│   │   ├── globals.css             # 14KB fused TigerGraph×Claude design system
│   │   └── api/
│   │       ├── compare/route.ts    # Multi-provider dual-pipeline API
│   │       ├── benchmark/route.ts  # Live benchmark runner with F1/EM
│   │       └── providers/route.ts  # Available providers + Ollama health
│   ├── src/components/tabs/
│   │   ├── LiveCompare.tsx         # Provider selector + side-by-side comparison
│   │   ├── Benchmark.tsx           # Live "Run Now" + radar/bar charts
│   │   ├── CostAnalysis.tsx        # 12-provider cost projections
│   │   └── GraphExplorer.tsx       # Interactive SVG knowledge graph
│   └── src/lib/
│       ├── llm-providers.ts        # 12-provider universal client (18KB)
│       └── design-tokens.ts        # Color + typography tokens
│
├── openclaw/                       # OpenClaw Agent (CIK model)
│   ├── SOUL.md / IDENTITY.md / MEMORY.md
│   └── skills/ (3 skills)
│
├── graphrag/                       # Python Backend
│   └── layers/
│       ├── graph_layer.py          # TigerGraph schema + GSQL
│       ├── orchestration_layer.py  # Dual pipeline + adaptive router
│       ├── llm_layer.py            # LLM interactions
│       ├── evaluation_layer.py     # RAGAS + F1/EM
│       └── universal_llm.py        # LiteLLM 12-provider support
│
├── tests/test_core.py              # 31 unit tests
├── Dockerfile                      # One-command deployment
└── README.md
```
---
## References
1. [GraphRAG](https://arxiv.org/abs/2404.16130) — From Local to Global
2. [LightRAG](https://arxiv.org/abs/2410.05779) — Simple and Fast (34K stars)
3. [OpenClaw](https://github.com/Gen-Verse/OpenClaw) — Personal AI Agent
4. [HotpotQA](https://arxiv.org/abs/1809.09600) — Multi-hop QA
5. [RAGAS](https://arxiv.org/abs/2309.15217) — RAG Evaluation
6. [Youtu-GraphRAG](https://arxiv.org/abs/2508.19855) — Schema-Bounded
[TigerGraph](https://tgcloud.io) · [Anthropic](https://anthropic.com) · [Ollama](https://ollama.ai) · [Groq](https://groq.com) · [LiteLLM](https://litellm.ai) · [Next.js](https://nextjs.org) · [Recharts](https://recharts.org)
---
### Built for the GraphRAG Inference Hackathon by TigerGraph
**12 LLM Providers** · **OpenClaw Agent** · **Ollama Local** · **TigerGraph** · **Next.js 15** · **31 Unit Tests** · **Docker**