
Quick Start Guide

Get the Graph RAG Service up and running in 10 minutes!

Prerequisites

Before starting, make sure you have:

  • Python 3.12 or higher
  • Neo4j database (running locally or remotely)
  • Redis server (running locally or remotely)
  • UV package manager (will be installed if missing)

Step 1: Clone and Setup

# Clone the repository, then:
cd graph-RAG

# Install UV if you don't have it
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install dependencies
uv sync

Step 2: Configure Environment

# Copy environment template
cp .env.example .env

# Edit .env with your settings
# Configure NEO4J_URI, NEO4J_USER, NEO4J_PASSWORD
# Configure REDIS_URL
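The exact keys come from `.env.example`; a minimal sketch (values are placeholders, adjust to your deployment) looks like:

```
NEO4J_URI=bolt://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=your-password
REDIS_URL=redis://localhost:6379
```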

Step 3: Ensure Backend Services Are Running

Make sure Neo4j and Redis are running and accessible:

  • Neo4j should be available at the URI specified in your .env file (default: bolt://localhost:7687)
  • Redis should be available at the URL specified in your .env file (default: redis://localhost:6379)
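If you want to sanity-check both services before starting the server, a quick stdlib-only TCP probe works regardless of drivers (this is just a connectivity sketch using the default URIs above, not the service's own health check):

```python
import socket
from urllib.parse import urlparse


def host_port(uri: str, default_port: int) -> tuple[str, int]:
    """Extract host and port from a bolt:// or redis:// URI."""
    parsed = urlparse(uri)
    return parsed.hostname or "localhost", parsed.port or default_port


def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    for name, uri, default in [
        ("Neo4j", "bolt://localhost:7687", 7687),
        ("Redis", "redis://localhost:6379", 6379),
    ]:
        h, p = host_port(uri, default)
        print(f"{name} at {h}:{p}:", "OK" if reachable(h, p) else "UNREACHABLE")
```

A successful TCP connect does not prove the credentials are correct, but it rules out the most common problem (the service simply isn't running or the port is wrong).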

Step 4: Start the API Server

On Windows:

start-server.bat

On Mac/Linux:

chmod +x start-server.sh
./start-server.sh

Or directly (any OS):

uv run python main.py

The API server will start on http://localhost:8000

Step 5: Start Celery Worker

For asynchronous document ingestion, start a worker in a new terminal:

On Windows:

start-worker.bat

On Mac/Linux:

chmod +x start-worker.sh
./start-worker.sh

Or directly:

uv run celery -A src.graph_rag_service.workers.celery_worker worker --loglevel=info

Step 6: Start the React Frontend

In a new terminal:

cd frontend-react

# Install Node dependencies (first time only)
npm install

# Start the dev server
npm run dev

The React UI will open at http://localhost:5173

Note: The project uses a React/Vite frontend (frontend-react/).
There is no streamlit run app.py — any references to Streamlit in older docs are outdated.

Log in with username admin and password admin.

Step 7: Test the API

Using cURL

  1. Get Access Token

curl -X POST "http://localhost:8000/api/auth/login" \
  -H "Content-Type: application/json" \
  -d '{"username": "admin", "password": "admin"}'

Save the access_token from the response.

  2. Upload a Document

curl -X POST "http://localhost:8000/api/documents/upload" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
  -F "file=@sample.pdf"

  3. Query the Knowledge Base

curl -X POST "http://localhost:8000/api/query" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "What is this document about?",
    "top_k": 5
  }'

Using the Interactive Docs

Open your browser and go to:

  • Swagger UI: http://localhost:8000/docs
  • ReDoc: http://localhost:8000/redoc

Click "Authorize" and enter your access token to test endpoints interactively.

Step 8: Setup Ollama (Optional)

If you want to use local LLMs instead of OpenAI/Anthropic:

# Install Ollama (https://ollama.ai)
# Then pull models:
ollama pull llama3.2
ollama pull bge-m3

Update .env:

DEFAULT_LLM_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=llama3.2
OLLAMA_EMBEDDING_MODEL=bge-m3
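To confirm the models were actually pulled, you can query Ollama's `/api/tags` endpoint, which lists installed models. A small stdlib sketch (the helper names are ours, not the service's):

```python
import json
import os
import urllib.request


def tags_url(base_url: str) -> str:
    """Build the Ollama model-list endpoint from OLLAMA_BASE_URL."""
    return base_url.rstrip("/") + "/api/tags"


def installed_models(base_url: str) -> list[str]:
    """Return the model names the Ollama server reports as pulled."""
    with urllib.request.urlopen(tags_url(base_url)) as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]


if __name__ == "__main__":
    base = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
    try:
        # Expect llama3.2 and bge-m3 to appear after the pulls above.
        print(installed_models(base))
    except OSError as exc:
        print(f"Ollama not reachable at {base}: {exc}")
```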

Common Commands

Check System Health

curl http://localhost:8000/api/system/health

Get System Stats

curl -H "Authorization: Bearer YOUR_TOKEN" \
  http://localhost:8000/api/system/stats

View Ontology

curl -H "Authorization: Bearer YOUR_TOKEN" \
  http://localhost:8000/api/ontology

Visualize Graph

curl -H "Authorization: Bearer YOUR_TOKEN" \
  "http://localhost:8000/api/graph/visualization?limit=50"

Troubleshooting

Neo4j Connection Error

# Verify credentials in .env
# Check Neo4j is accessible

# Access Neo4j browser
open http://localhost:7474

# Try connecting with cypher-shell
cypher-shell -u neo4j -p password

Redis Connection Error

# Check Redis is running
redis-cli ping

# Test Redis connection
redis-cli -h localhost -p 6379 ping

Worker Not Processing

# Check the logs in the terminal running the worker

# Inspect active tasks from the project root:
uv run celery -A src.graph_rag_service.workers.celery_worker inspect active

API Server Won't Start

# Check for port conflicts
lsof -i :8000  # On Mac/Linux
netstat -ano | findstr :8000  # On Windows

# View detailed logs
uv run python main.py

Next Steps

  • Read the README.md for comprehensive documentation
  • Check ARCHITECTURE.md for system design details
  • Explore the API docs at http://localhost:8000/docs

Using with Different LLM Providers

OpenAI

# In .env:
DEFAULT_LLM_PROVIDER=openai
OPENAI_API_KEY=sk-your-key-here

Anthropic

# In .env:
DEFAULT_LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-your-key-here

Google Gemini

# In .env:
DEFAULT_LLM_PROVIDER=gemini
GOOGLE_API_KEY=your-key-here
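Each provider above pairs `DEFAULT_LLM_PROVIDER` with one companion variable. As an illustration of that contract (this is a startup-validation sketch, not the service's actual code, and the fallback to `openai` is an assumption):

```python
import os

# Map each provider shown above to the env var that must also be set.
REQUIRED_KEY = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "gemini": "GOOGLE_API_KEY",
    "ollama": "OLLAMA_BASE_URL",
}


def check_provider(env: dict[str, str]) -> str:
    """Validate DEFAULT_LLM_PROVIDER and its companion key/URL."""
    provider = env.get("DEFAULT_LLM_PROVIDER", "openai")  # assumed default
    if provider not in REQUIRED_KEY:
        raise ValueError(f"Unknown provider: {provider}")
    key = REQUIRED_KEY[provider]
    if not env.get(key):
        raise ValueError(f"{provider} selected but {key} is not set")
    return provider


if __name__ == "__main__":
    try:
        print("Provider OK:", check_provider(dict(os.environ)))
    except ValueError as exc:
        print("Config problem:", exc)
```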

Services will be available at:

  • API: http://localhost:8000
  • Neo4j Browser: http://localhost:7474

Need Help?

  • Check the troubleshooting section above
  • Read the full README.md
  • Open an issue on GitHub

Happy building! 🚀