---
title: AI Interview Mentor
emoji: 🎯
colorFrom: indigo
colorTo: purple
sdk: docker
pinned: false
---
# AI Interview Mentor
Live at → https://adesh01-ai-interview-mentor.hf.space
An AI-powered mock interview platform for students and instructors. Instructors upload question banks, students take adaptive AI interviews and receive scored feedback.
## Demo Credentials
| Role | Email | Password |
|---|---|---|
| Instructor | adeshboudh16@gmail.com | Adesh@123 |
| Student | adesh@gmail.com | adesh@123 |
## Features
- Instructor portal – create batches, invite students via class code, manage topics, upload question banks via CSV, lock/unlock topics
- Student interviews – the AI asks up to 5 questions per session, adapts with follow-up questions on shallow answers, and randomises question order each session
- Scored reports – an LLM judge evaluates the full Q&A transcript and produces a score (0–100), a summary, and improvement tips
- Gemini + Groq fallback – Gemini 2.0 Flash as the primary LLM, Groq Llama 3.3 70B as an automatic fallback
- Persistent sessions – LangGraph + PostgreSQL checkpointing resumes interrupted interviews
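The primary/fallback pattern above can be sketched in a few lines. This is a minimal illustration, not the app's actual code: `flaky_gemini` and `groq_llama` are hypothetical stand-ins for the real SDK calls, and any exception from the primary triggers the fallback.

```python
from typing import Callable

def ask_llm(prompt: str,
            primary: Callable[[str], str],
            fallback: Callable[[str], str]) -> str:
    """Try the primary model first; fall back automatically on any failure."""
    try:
        return primary(prompt)
    except Exception:
        return fallback(prompt)

# Stubs for demonstration: the primary raises, so the fallback answers.
def flaky_gemini(prompt: str) -> str:
    raise RuntimeError("quota exceeded")

def groq_llama(prompt: str) -> str:
    return f"[llama] {prompt}"

print(ask_llm("Explain overfitting.", flaky_gemini, groq_llama))
```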
## How It Works
- Instructor signs up, creates a batch, and shares the class code
- Instructor uploads a CSV question bank and unlocks topics for interviews
- Student signs up with the class code and picks a topic to interview on
- AI conducts up to 5 turns – asking, evaluating, and probing deeper on weak answers
- Interview ends with an inline score and summary; full history saved to the dashboard
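The adaptive loop in the steps above can be sketched as follows. This is a simplified illustration under stated assumptions: the real app delegates the "is this answer shallow?" decision to the LLM judge, while the stub below just checks answer length; `answer_fn` is a hypothetical stand-in for the student's input.

```python
import random

MAX_TURNS = 5

def run_interview(questions: list[str], answer_fn) -> list[dict]:
    """Ask up to MAX_TURNS questions, inserting a follow-up after shallow answers."""
    order = random.sample(questions, len(questions))  # fresh order every session
    transcript: list[dict] = []
    i = 0
    while len(transcript) < MAX_TURNS and i < len(order):
        question = order[i]
        answer = answer_fn(question)
        transcript.append({"q": question, "a": answer})
        # Hypothetical shallowness check; the real app asks the LLM judge.
        if len(answer.split()) < 5 and len(transcript) < MAX_TURNS:
            follow_up = f"Can you go deeper on: {question}"
            transcript.append({"q": follow_up, "a": answer_fn(follow_up)})
        i += 1
    return transcript

# Stub student who always answers tersely: the session still caps at 5 turns.
transcript = run_interview([f"Question {n}" for n in range(10)], lambda q: "not sure")
```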
## Tech Stack
| Layer | Technology |
|---|---|
| Frontend | React 19, TypeScript, Tailwind CSS v3, Zustand |
| Backend | FastAPI, LangGraph, asyncpg |
| Database | NeonDB (PostgreSQL + JSONB) |
| LLM | Gemini 2.0 Flash (primary) / Groq Llama 3.3 70B (fallback) |
| Auth | Custom JWT (HS256, 15 min access + 7 day refresh) |
| Deploy | Docker on Hugging Face Spaces (port 7860) |
## Environment Variables
Set these as secrets in HF Spaces → Settings → Variables and secrets:
| Variable | Description |
|---|---|
| NEON_DB_URL | NeonDB PostgreSQL connection string |
| JWT_SECRET | Random secret, min 32 chars (`python -c "import secrets; print(secrets.token_hex(32))"`) |
| GEMINI_API_KEY | Google AI Studio API key |
| GROQ_API_KEY | Groq API key (fallback LLM) |
| APP_URL | Deployed Space URL, e.g. https://adesh01-ai-interview-mentor.hf.space |
## Local Development
```bash
make install   # install all backend + frontend deps
make schema    # apply DB schema (run once on a fresh DB)
make dev       # start backend on :7860 with auto-reload
make frontend  # start frontend dev server on :5173
make test      # run pytest suite
```
## Docker
```bash
make build       # build frontend + Docker image
make docker-run  # run container with .env on localhost:7860

# Reset DB and re-apply schema (wipes all data + checkpoints)
make clean-db
```
CSV upload – use the Instructor portal → Upload page. Each row has the format `question_text,difficulty`, where `difficulty` is one of `easy`, `medium`, or `hard`. A sample file is at `docs/question_bank/linear_regression.csv`.
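A minimal sketch of validating an upload against that format, assuming only the two columns described above (`parse_question_bank` is an illustrative name, not the app's API):

```python
import csv
import io

ALLOWED_DIFFICULTIES = {"easy", "medium", "hard"}

def parse_question_bank(raw: str) -> list[dict]:
    """Parse a question-bank CSV, rejecting rows with an unknown difficulty."""
    rows = []
    for row in csv.DictReader(io.StringIO(raw)):
        difficulty = row["difficulty"].strip().lower()
        if difficulty not in ALLOWED_DIFFICULTIES:
            raise ValueError(f"bad difficulty: {difficulty!r}")
        rows.append({"question_text": row["question_text"], "difficulty": difficulty})
    return rows

sample = "question_text,difficulty\nWhat is R^2?,easy\nDerive the normal equations.,hard\n"
rows = parse_question_bank(sample)
```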
## Future Scope
The current version is a working MVP. A full-scale production version would include:
### 1. Text-to-Speech & Speech-to-Text
Replace the text chat with a real voice interview experience. The AI interviewer speaks questions aloud (TTS via ElevenLabs or Google Cloud TTS), and students respond by speaking (STT via Whisper or Google Speech-to-Text). This makes the interview feel closer to an actual job interview and removes typing as a bottleneck for non-native speakers.
### 2. Camera Monitoring
Integrate webcam monitoring using computer vision (MediaPipe or AWS Rekognition) to detect:
- Eye movement and gaze direction (focus/distraction signals)
- Facial expressions (confidence, hesitation)
- Tab-switching or browser focus loss
Results feed into the report as a "behaviour score" alongside the content score.
### 3. Dynamic Report Generation at Scale
The current report is a single LLM call on a 5-turn transcript. At scale this would become:
- A multi-agent report pipeline – one agent scores content, one scores communication, one compares against role benchmarks
- Historical trend charts (score over multiple attempts)
- Peer percentile ranking within a batch
- Instructor analytics aggregated across hundreds of students
### 4. Code Editor Support
For technical interviews (DSA, system design), embed an in-browser code editor (Monaco Editor / CodeMirror) with:
- Live code execution (sandboxed via Piston API or Judge0)
- AI reviewing both the verbal explanation and the submitted code
- Support for 20+ languages with syntax highlighting and test case runner
### 5. Multi-Domain Scaling
The current question bank is CSV-uploaded per topic. At scale this becomes a curated, versioned question library across domains:
- Software Engineering – Java, Python, Go, DevOps, Kubernetes, AWS
- Data Science – ML, Statistics, SQL, Spark
- Management – Product Management, Business Analysis, Finance
- AI-generated question variations to prevent question leakage between students
## Scaling to Industry Level – Cost Estimate
| Component | Current (MVP) | Industry Scale (10k DAU) |
|---|---|---|
| Hosting | HF Spaces (free) | AWS ECS / GCP Cloud Run – ~$300–800/mo |
| Database | NeonDB free tier | NeonDB Business / RDS PostgreSQL – ~$200–500/mo |
| LLM (interviews) | Gemini free / Groq free | Gemini Flash API at scale (~$0.075/1M tokens) – ~$150–400/mo |
| LLM (reports) | Same | GPT-4o or Claude Sonnet for judge calls – ~$100–300/mo |
| TTS/STT | – | ElevenLabs + Whisper API – ~$200–600/mo |
| Camera AI | – | AWS Rekognition or custom CV model – ~$100–300/mo |
| CDN / Storage | – | CloudFront + S3 for recordings – ~$50–150/mo |
| Total estimate | ~$0 | ~$1,100–3,050/mo |
Key scaling decisions:
- Move from a single Docker container to microservices – the interview engine, report engine, and media processing run independently and scale separately
- Use a message queue (Redis / SQS) between the interview API and the LLM workers to absorb burst traffic without dropping requests
- Separate read replicas for analytics queries so instructor dashboards don't compete with live interview sessions
- Rate-limit LLM calls per student to control cost – a hard budget cap per institution is essential at B2B scale
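The per-student rate limit in the last point can be sketched as a sliding-window counter. This is an illustrative in-memory version (names like `LLMBudget` are hypothetical, not the app's API); a production deployment would back this with Redis so all workers share the counts.

```python
import time
from collections import defaultdict

class LLMBudget:
    """Allow at most `max_calls` LLM calls per student within a sliding window."""

    def __init__(self, max_calls: int, window_s: float):
        self.max_calls = max_calls
        self.window_s = window_s
        self.calls: dict[str, list[float]] = defaultdict(list)

    def allow(self, student_id: str) -> bool:
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        recent = [t for t in self.calls[student_id] if now - t < self.window_s]
        self.calls[student_id] = recent
        if len(recent) >= self.max_calls:
            return False  # over budget: reject instead of billing another call
        recent.append(now)
        return True

budget = LLMBudget(max_calls=3, window_s=60)
results = [budget.allow("student-1") for _ in range(5)]
print(results)  # first three allowed, the remaining two rejected this window
```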