Echo - Digital Twin AI Platform

Echo is a containerized AI platform for creating and chatting with digital twin personas. It combines a FastAPI backend, a Streamlit UI, RAG-based document grounding, local speech-to-text, and multi-language support for English and Urdu.

Features

  • Persona creation and management with custom descriptions, personality traits, speaking style, and background.
  • Multi-turn chat with persisted session history.
  • RAG-backed responses from uploaded documents.
  • Document ingestion with background processing, plus listing, deletion, and reprocessing flows.
  • Speech support with Whisper-based transcription, language detection, and real-time WebSocket transcription.
  • Dockerized deployment with separate API, frontend, and optional speech input service.
  • Health and statistics endpoints for operational checks.

Technology Stack

  • Backend: FastAPI, SQLAlchemy, Pydantic
  • Frontend: Streamlit
  • RAG: ChromaDB and the internal retrieval/generation pipeline
  • LLM providers: Groq, OpenAI, Google AI
  • Speech-to-text: Whisper base model
  • Storage: SQLite for metadata, persistent local storage for vector and upload data
  • Deployment: Docker and Docker Compose

System Layout

The compose setup exposes three services:

  • API: 8081 on the host, mapped to 8080 in the container
  • Frontend: 8501
  • Speech input: 5000

The backend also serves /health, /stats, and /docs through FastAPI.
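
As a quick smoke test, those endpoints can be polled from Python. A minimal stdlib-only sketch, assuming the default compose port mapping (8081 on the host):

```python
import json
import urllib.request

BASE_URL = "http://localhost:8081"  # host port from the default compose mapping

def endpoint(path: str, base_url: str = BASE_URL) -> str:
    """Build the full URL for an API path such as /health or /stats."""
    return f"{base_url}{path}"

def fetch_json(path: str) -> dict:
    """GET a JSON endpoint and parse the response body."""
    with urllib.request.urlopen(endpoint(path), timeout=5) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(fetch_json("/health"))
    print(fetch_json("/stats"))
```

When running the API locally with uvicorn instead of Docker, swap the base URL for http://localhost:8080.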

Repository Layout

Project/
├── README.md
├── echo/
│   ├── app/
│   │   ├── api/
│   │   ├── core/
│   │   ├── db/
│   │   ├── models/
│   │   ├── rag/
│   │   └── speech/
│   ├── frontend/
│   ├── tests/
│   ├── docker-compose.yml
│   ├── Dockerfile
│   ├── requirements.txt
│   └── pytest.ini
├── data/
│   └── chroma/
└── logs/

Quick Start

Docker

cd echo
docker-compose up -d

Open:

  • Frontend: http://localhost:8501
  • API: http://localhost:8081 (interactive docs at /docs)
  • Speech input: http://localhost:5000

Local Development

cd echo
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
python -m app.db.init_db
uvicorn app.main:app --reload

In another terminal:

cd echo
streamlit run frontend/app.py

Environment Variables

Create echo/.env with the values you need:

LLM_PROVIDER=groq
GROQ_MODEL=llama-3.3-70b-versatile
GROQ_API_KEY=your_groq_api_key

OPENAI_API_KEY=your_openai_api_key
GOOGLE_API_KEY=your_google_api_key
HUGGINGFACE_API_KEY=your_huggingface_api_key

OPENAI_MODEL=gpt-4o-mini
GOOGLE_MODEL=gemini-1.5-flash
EMBEDDING_MODEL=sentence-transformers/all-MiniLM-L6-v2
CHROMA_PERSIST_DIRECTORY=./data/chroma
SQLITE_DATABASE_PATH=./data/echo.db
UPLOAD_DIRECTORY=./data/uploads
API_HOST=0.0.0.0
API_PORT=8080
CHUNK_SIZE=500
CHUNK_OVERLAP=50
TOP_K_RESULTS=5
MAX_UPLOAD_SIZE_MB=50
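
CHUNK_SIZE and CHUNK_OVERLAP control how documents are windowed before indexing. The real pipeline may chunk on tokens or sentence boundaries rather than characters; the character-based sketch below only illustrates how the two values interact:

```python
def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size windows where each window repeats the last
    `overlap` characters of the previous one (illustrative sketch only)."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```

Larger overlap improves recall at chunk boundaries at the cost of more vectors per document.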

Main Capabilities

Personas

  • Create, list, view, update, delete, and inspect persona stats.
  • Each persona can carry background context, personality traits, speaking style, and a system prompt.
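
A persona can be created directly against the API. The payload field names below are assumptions inferred from the feature list; the authoritative schema is on the FastAPI /docs page:

```python
import json
import urllib.request

def build_persona_payload(name: str, description: str,
                          traits: list, speaking_style: str) -> dict:
    # Field names here are assumptions; verify them against /docs.
    return {
        "name": name,
        "description": description,
        "personality_traits": traits,
        "speaking_style": speaking_style,
    }

def create_persona(payload: dict, base_url: str = "http://localhost:8081") -> dict:
    """POST /api/personas and return the created persona record."""
    req = urllib.request.Request(
        f"{base_url}/api/personas",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```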

Chat

  • Chat requests use the persona profile plus retrieved document context.
  • Conversation history is persisted per persona and session.
  • Sessions can be listed and deleted.
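
A chat turn is a single POST to /api/chat. The persona_id/session_id/message field names are assumptions based on the endpoint paths above; check /docs for the real schema:

```python
import json
import urllib.request

def build_chat_payload(persona_id: str, session_id: str, message: str) -> dict:
    # Field names are assumptions; confirm against the /docs schema.
    return {"persona_id": persona_id, "session_id": session_id, "message": message}

def send_chat(payload: dict, base_url: str = "http://localhost:8081") -> dict:
    """POST /api/chat; history for the session is persisted server-side."""
    req = urllib.request.Request(
        f"{base_url}/api/chat",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)
```

Reusing the same session_id across calls continues one conversation; a fresh id starts a new session.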

Documents

  • Upload documents per persona.
  • Documents are stored, ingested in the background, and indexed into the vector store.
  • Failed documents can be reprocessed.
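
Uploads go to POST /api/documents/upload/{persona_id} as multipart form data. A stdlib-only sketch of building that body, assuming the endpoint accepts a standard file field named "file" (check /docs for the actual field name):

```python
import uuid

def build_multipart(filename: str, data: bytes, field: str = "file"):
    """Build a multipart/form-data body by hand (stdlib only).
    Returns (body_bytes, content_type_header_value)."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n"
    ).encode() + data + f"\r\n--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"
```

POST the returned body to /api/documents/upload/{persona_id} with the returned value as the Content-Type header; ingestion then runs in the background.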

Speech

  • Transcribe uploaded files or base64 audio.
  • Detect language from audio.
  • Use the WebSocket endpoint for near real-time transcription.
  • Supported languages include English and Urdu, with auto-detection.
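
For the base64 path, the audio bytes are encoded into the JSON body. The "audio" and "language" field names below are assumptions; the real request schema is on /docs:

```python
import base64

def build_transcribe_payload(audio: bytes, language=None) -> dict:
    """Encode raw audio bytes for POST /api/speech/transcribe.
    Field names are assumptions; verify against /docs."""
    payload = {"audio": base64.b64encode(audio).decode("ascii")}
    if language is not None:
        payload["language"] = language  # e.g. "en" or "ur"; omit for auto-detect
    return payload
```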

API Reference

Core

  • GET / - root status
  • GET /health - service health
  • GET /stats - persona, document, chunk, and chat counts

Personas

  • POST /api/personas
  • GET /api/personas
  • GET /api/personas/{persona_id}
  • PATCH /api/personas/{persona_id}
  • DELETE /api/personas/{persona_id}
  • GET /api/personas/{persona_id}/stats

Documents

  • POST /api/documents/upload/{persona_id}
  • GET /api/documents/persona/{persona_id}
  • GET /api/documents/{document_id}
  • DELETE /api/documents/{document_id}
  • POST /api/documents/{document_id}/reprocess

Chat

  • POST /api/chat
  • GET /api/chat/history/{persona_id}/{session_id}
  • GET /api/chat/sessions/{persona_id}
  • DELETE /api/chat/session/{persona_id}/{session_id}

Speech

  • GET /api/speech/languages
  • POST /api/speech/transcribe
  • POST /api/speech/transcribe/file
  • POST /api/speech/detect-language
  • WS /api/speech/ws/transcribe

Typical Workflows

  1. Create a persona.
  2. Upload documents for that persona.
  3. Wait for background ingestion to finish.
  4. Start chatting through the frontend or API.
  5. Use speech input if you want audio-based interaction.
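
Step 3 can be automated by polling the per-persona document list until every document reaches a terminal state. The "processed"/"failed" status values here are assumptions; adjust them to whatever GET /api/documents/persona/{persona_id} actually reports:

```python
import time

def wait_for_ingestion(fetch_docs, timeout_s: int = 120, poll_s: float = 2.0):
    """Poll until every document reports a terminal status.
    fetch_docs: callable returning a list of {"status": ...} dicts,
    e.g. a wrapper around GET /api/documents/persona/{persona_id}."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        docs = fetch_docs()
        if docs and all(d.get("status") in ("processed", "failed") for d in docs):
            return docs
        time.sleep(poll_s)
    raise TimeoutError("document ingestion did not finish in time")
```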

Docker Hub Publishing

The current compose file references hamza1228/echo-persona-*. Replace that namespace with your own Docker Hub username before publishing if you are distributing your build.

cd echo
docker login
docker-compose build

docker tag echo-api <username>/echo-persona-api:latest
docker tag echo-frontend <username>/echo-persona-frontend:latest
docker tag echo-speech-input <username>/echo-persona-speech:latest

docker push <username>/echo-persona-api:latest
docker push <username>/echo-persona-frontend:latest
docker push <username>/echo-persona-speech:latest

GitHub Publishing

git status
git add README.md
git add -A
git commit -m "Consolidate project documentation into README"
git push origin main

Testing

cd echo
pytest
pytest --cov=app tests/

Model Notes

  • Whisper base model is used for a balance of speed and accuracy.
  • Groq is the default LLM provider and uses llama-3.3-70b-versatile by default.
  • Embeddings use sentence-transformers/all-MiniLM-L6-v2 by default.

Security and Production Notes

  • Keep API keys in .env and out of git.
  • SQLite is fine for development; PostgreSQL is a better production choice.
  • CORS is permissive in the current app for development only.
  • Add authentication and rate limiting before exposing this publicly.

Troubleshooting

  • If the frontend does not start, check docker-compose logs frontend.
  • If the API fails health checks, check docker-compose logs api and /health.
  • If transcription is slow on the first request, that is expected while Whisper loads.
  • If uploads fail, verify file size and supported extension.

Notes

This repository now keeps documentation in one place. The extra markdown guides were removed so the README is the single source of truth.
