adeshboudh16 committed on
Commit 47203d3 · 1 Parent(s): c7dfb3d

updated ui, docs, tests

Files changed (44)
  1. .gitignore +1 -0
  2. CLAUDE.md +74 -705
  3. README.md +0 -2
  4. backend/checkpointer.py +53 -1
  5. backend/db/queries.py +116 -0
  6. backend/graph/graph.py +46 -1
  7. backend/graph/nodes.py +167 -1
  8. backend/graph/state.py +27 -1
  9. backend/llm.py +26 -0
  10. backend/main.py +23 -5
  11. backend/prompts.py +97 -1
  12. backend/routers/instructor.py +55 -1
  13. backend/routers/interview.py +143 -1
  14. backend/routers/sessions.py +24 -1
  15. backend/routers/student.py +37 -1
  16. docs/collaboration.md +163 -0
  17. docs/question_bank/linear_regression.csv +101 -0
  18. frontend/src/api/instructor.ts +24 -0
  19. frontend/src/api/interview.ts +38 -1
  20. frontend/src/api/sessions.ts +8 -1
  21. frontend/src/api/student.ts +17 -0
  22. frontend/src/components/instructor/GapReport.tsx +40 -2
  23. frontend/src/components/instructor/StudentRow.tsx +45 -2
  24. frontend/src/components/interview/ChatWindow.tsx +33 -2
  25. frontend/src/components/interview/MessageBubble.tsx +22 -2
  26. frontend/src/components/interview/ProgressBar.tsx +22 -2
  27. frontend/src/components/interview/TypeInput.tsx +57 -2
  28. frontend/src/components/report/FeedbackCard.tsx +29 -2
  29. frontend/src/components/report/ScoreRing.tsx +35 -2
  30. frontend/src/components/report/SummaryBlock.tsx +11 -2
  31. frontend/src/components/student/AttemptRow.tsx +50 -2
  32. frontend/src/components/student/TopicChip.tsx +22 -2
  33. frontend/src/pages/InstructorDashboard.tsx +20 -2
  34. frontend/src/pages/Interview.tsx +129 -1
  35. frontend/src/pages/Report.tsx +96 -1
  36. frontend/src/pages/StudentDashboard.tsx +106 -1
  37. frontend/src/pages/StudentDetail.tsx +123 -1
  38. frontend/src/store/interviewStore.ts +50 -1
  39. frontend/src/types/index.ts +52 -0
  40. pyproject.toml +1 -0
  41. run.py +11 -0
  42. tests/test_prompts.py +75 -0
  43. tests/test_routing.py +55 -0
  44. uv.lock +58 -0
.gitignore CHANGED
@@ -24,3 +24,4 @@ Thumbs.db
  .idea/
  .vscode/
  *.swp
+ .claude
CLAUDE.md CHANGED
@@ -1,736 +1,105 @@
- # AI InterviewMentor — Claude Code Architecture Document
-
- > This document is the single source of truth for Claude Code.
- > Read this fully before writing any code.
- > If architecture decisions conflict with the original docs, this document takes precedence.
-
- ---
-
- ## Project Overview
-
- AI InterviewMentor is a full-stack web platform that simulates real-world technical interviews for fresher IT students at training institutes. Instructors upload topic-wise question banks. Students take AI-powered mock interviews. The AI asks questions, fires counter-questions when answers are shallow, and generates scored feedback reports. Instructors see class-wide readiness data.
-
  ---

- ## Deployment Target
-
- **Single Docker container on Hugging Face Spaces.**
-
- Everything (React frontend, FastAPI backend, LangGraph engine) runs in one container on port 7860. There is no Vercel, no separate backend server, no split deployment.
-
- ```
- HF Spaces (Docker, port 7860)
- ├── React (Vite, served as static files by FastAPI)
- └── FastAPI + LangGraph
-     ├── NeonDB (external PostgreSQL)
-     └── OpenRouter → MiniMax 2.7
- ```
-
  ---

- ## Tech Stack
-
- | Layer | Technology | Notes |
- |-------|-----------|-------|
- | Frontend | React 18 + Vite + TypeScript | Served as static build by FastAPI |
- | Styling | Tailwind CSS v3 | Dark theme only |
- | Backend | FastAPI (Python 3.11) | All API routes |
- | AI Orchestration | LangGraph | Interview state machine |
- | Database | NeonDB (PostgreSQL) | App data + LangGraph checkpoints |
- | AI Gateway | OpenRouter | MiniMax 2.7 model |
- | Auth | Custom JWT | jose (Python) + bcrypt, no Supabase, no Firebase |
- | Container | Docker (multi-stage build) | Node build stage + Python runtime stage |
- | Hosting | Hugging Face Spaces | Free tier, CPU Basic |
-
- ### Explicitly Banned
- - Supabase (not allowed by hackathon rules)
- - Firebase (not allowed by hackathon rules)
- - Vercel (not needed, everything on HF Spaces)
- - Next.js (replaced by React + Vite)
- - Any third-party auth service
-
- ---
-
- ## Folder Structure
-
- ```
- ai-interviewmentor/
- ├── frontend/                        # React + Vite
- │   ├── src/
- │   │   ├── pages/
- │   │   │   ├── Login.tsx
- │   │   │   ├── Signup.tsx
- │   │   │   ├── StudentDashboard.tsx
- │   │   │   ├── Interview.tsx
- │   │   │   ├── Report.tsx
- │   │   │   ├── InstructorDashboard.tsx
- │   │   │   ├── Upload.tsx
- │   │   │   └── StudentDetail.tsx
- │   │   ├── components/
- │   │   │   ├── auth/
- │   │   │   │   └── RoleSelector.tsx
- │   │   │   ├── interview/
- │   │   │   │   ├── ChatWindow.tsx
- │   │   │   │   ├── MessageBubble.tsx
- │   │   │   │   ├── ProgressBar.tsx
- │   │   │   │   └── TypeInput.tsx
- │   │   │   ├── report/
- │   │   │   │   ├── ScoreRing.tsx
- │   │   │   │   ├── FeedbackCard.tsx
- │   │   │   │   └── SummaryBlock.tsx
- │   │   │   ├── student/
- │   │   │   │   ├── TopicChip.tsx
- │   │   │   │   └── AttemptRow.tsx
- │   │   │   ├── instructor/
- │   │   │   │   ├── StudentRow.tsx
- │   │   │   │   ├── GapReport.tsx
- │   │   │   │   ├── StatCard.tsx
- │   │   │   │   └── UploadZone.tsx
- │   │   │   └── shared/
- │   │   │       ├── Navbar.tsx
- │   │   │       └── ProtectedRoute.tsx
- │   │   ├── api/                     # All fetch calls to FastAPI
- │   │   │   ├── auth.ts
- │   │   │   ├── topics.ts
- │   │   │   ├── sessions.ts
- │   │   │   ├── upload.ts
- │   │   │   └── interview.ts
- │   │   ├── store/                   # Zustand stores
- │   │   │   ├── authStore.ts         # JWT token + user context
- │   │   │   └── interviewStore.ts    # Active session state
- │   │   ├── types/
- │   │   │   └── index.ts
- │   │   ├── App.tsx                  # React Router setup
- │   │   └── main.tsx
- │   ├── index.html
- │   ├── vite.config.ts
- │   ├── tailwind.config.ts
- │   └── package.json
-
- ├── backend/
- │   ├── main.py                      # FastAPI app entry point
- │   ├── routers/
- │   │   ├── auth.py                  # /api/auth/*
- │   │   ├── batches.py               # /api/batches/*
- │   │   ├── topics.py                # /api/topics/*
- │   │   ├── sessions.py              # /api/sessions/*
- │   │   ├── upload.py                # /api/upload
- │   │   └── interview.py             # /interview/*
- │   ├── graph/
- │   │   ├── state.py                 # InterviewState TypedDict
- │   │   ├── nodes.py                 # All LangGraph nodes
- │   │   └── graph.py                 # Graph builder + compiler
- │   ├── db/
- │   │   ├── connection.py            # NeonDB async connection pool
- │   │   └── queries.py               # All SQL queries
- │   ├── auth/
- │   │   ├── jwt.py                   # JWT sign + verify
- │   │   └── password.py              # bcrypt hash + verify
- │   ├── checkpointer.py              # LangGraph AsyncPostgresSaver setup
- │   ├── prompts.py                   # System prompt builder
- │   └── requirements.txt
-
- ├── Dockerfile                       # Multi-stage build
- ├── .env.example
- └── CLAUDE.md                        # This file
- ```
-
- ---
-
- ## Environment Variables
-
  ```bash
- # NeonDB
- NEON_DB_URL=postgresql://user:pass@ep-xxx.neon.tech/neondb?sslmode=require
-
- # JWT
- JWT_SECRET=your-secret-key-min-32-chars
- JWT_ACCESS_EXPIRE_MINUTES=15
- JWT_REFRESH_EXPIRE_DAYS=7
-
- # OpenRouter
- OPENROUTER_API_KEY=your-openrouter-key
-
- # App
- APP_URL=https://your-space.hf.space
  ```
-
- ---
-
- ## Database Schema
-
- Two schemas in one NeonDB instance.
-
- ### Schema: public (App Data)
-
- ```sql
- CREATE TABLE users (
-   id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-   full_name TEXT NOT NULL,
-   email TEXT NOT NULL UNIQUE,
-   password_hash TEXT NOT NULL,
-   role TEXT NOT NULL CHECK (role IN ('student', 'instructor')),
-   batch_id UUID REFERENCES batches(id) ON DELETE SET NULL,
-   created_at TIMESTAMPTZ DEFAULT NOW()
- );
-
- CREATE TABLE refresh_tokens (
-   id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-   user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
-   token_hash TEXT NOT NULL,
-   expires_at TIMESTAMPTZ NOT NULL,
-   created_at TIMESTAMPTZ DEFAULT NOW()
- );
-
- CREATE TABLE batches (
-   id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-   name TEXT NOT NULL,
-   instructor_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
-   class_code TEXT NOT NULL UNIQUE DEFAULT upper(substring(gen_random_uuid()::text, 1, 8)),
-   created_at TIMESTAMPTZ DEFAULT NOW()
- );
-
- CREATE TABLE topics (
-   id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-   batch_id UUID NOT NULL REFERENCES batches(id) ON DELETE CASCADE,
-   name TEXT NOT NULL,
-   is_unlocked BOOLEAN NOT NULL DEFAULT FALSE,
-   order_index INTEGER NOT NULL DEFAULT 0,
-   created_at TIMESTAMPTZ DEFAULT NOW()
- );
-
- CREATE TABLE questions (
-   id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-   topic_id UUID NOT NULL REFERENCES topics(id) ON DELETE CASCADE,
-   question_text TEXT NOT NULL,
-   difficulty TEXT NOT NULL CHECK (difficulty IN ('easy', 'medium', 'hard')) DEFAULT 'medium',
-   created_at TIMESTAMPTZ DEFAULT NOW()
- );
-
- CREATE TABLE interview_sessions (
-   id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-   student_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
-   topic_id UUID NOT NULL REFERENCES topics(id) ON DELETE CASCADE,
-   status TEXT NOT NULL CHECK (status IN ('active', 'completed')) DEFAULT 'active',
-   score INTEGER CHECK (score >= 0 AND score <= 100),
-   feedback JSONB,
-   started_at TIMESTAMPTZ DEFAULT NOW(),
-   completed_at TIMESTAMPTZ
- );
  ```
-
- ### Schema: checkpointer (LangGraph State)
-
- ```sql
- -- Created automatically by LangGraph AsyncPostgresSaver
- -- Do not manually create or modify these tables
- -- Run: await checkpointer.setup() on app startup
- CREATE SCHEMA IF NOT EXISTS checkpointer;
  ```
-
- ---
-
- ## Auth Design
-
- ### Token Strategy
-
- ```
- Access Token
- - Algorithm: HS256
- - Expiry: 15 minutes
- - Payload: { user_id, role, batch_id, email, exp }
- - Sent in: Authorization: Bearer <token> header
-
- Refresh Token
- - Expiry: 7 days
- - Stored: hashed in refresh_tokens table
- - Sent in: httpOnly cookie (refresh_token)
- - Used only at: POST /api/auth/refresh
- ```
-
-
- ```
- SIGNUP
- POST /api/auth/signup
- Body: { full_name, email, password, role, class_code? }
- → Validate class_code if role=student → get batch_id
- → Hash password (bcrypt, cost=12)
- → Insert user
- → Issue access token + refresh token
- → Return: { access_token, user: { id, role, full_name, batch_id } }
- → Set: httpOnly cookie refresh_token
-
- LOGIN
- POST /api/auth/login
- Body: { email, password }
- → Verify password hash
- → Issue access token + refresh token
- → Return: { access_token, user: { id, role, full_name, batch_id } }
- → Set: httpOnly cookie refresh_token
-
- REFRESH
- POST /api/auth/refresh
- → Read refresh_token cookie
- → Verify hash against refresh_tokens table
- → Issue new access token
- → Return: { access_token }
-
- LOGOUT
- POST /api/auth/logout
- → Delete refresh_token from DB
- → Clear cookie
- ```
-
- ### Route Protection (FastAPI dependency)
-
- ```python
- # Applied to every protected route via Depends(get_current_user)
- # Returns the decoded JWT payload
- # Raises 401 if token missing, expired, or invalid
- async def get_current_user(token: str = Depends(oauth2_scheme)) -> dict:
-     ...
-
- # Role guards
- async def require_instructor(user = Depends(get_current_user)) -> dict:
-     if user["role"] != "instructor":
-         raise HTTPException(status_code=403)
-     return user
-
- async def require_student(user = Depends(get_current_user)) -> dict:
-     if user["role"] != "student":
-         raise HTTPException(status_code=403)
-     return user
- ```
-
- ---
-
- ## API Routes
-
- ### Auth Routes
-
- ```
- POST /api/auth/signup
- POST /api/auth/login
- POST /api/auth/refresh
- POST /api/auth/logout
- ```
-
- ### Instructor Routes (require_instructor)
-
- ```
- POST  /api/batches                           → create batch
- GET   /api/batches/{batch_id}                → get batch info + class code
- POST  /api/topics                            → add topic to batch
- PATCH /api/topics/{topic_id}                 → unlock/lock topic
- GET   /api/topics/{batch_id}                 → list all topics
- POST  /api/upload                            → parse CSV → insert questions
- GET   /api/instructor/dashboard              → class stats + student scores
- GET   /api/instructor/students/{student_id}  → individual student detail
- ```
-
- ### Student Routes (require_student)
-
- ```
- GET /api/student/dashboard      → unlocked topics + past attempts
- GET /api/sessions/{session_id}  → session detail + report
- ```
-
- ### Interview Routes (require_student)
-
- ```
- POST /interview/start                → create session + init LangGraph
- POST /interview/turn                 → send student message + get AI response
- GET  /interview/state/{session_id}   → turn count + status (for progress bar)
- ```
-
- ---
-
- ## LangGraph Architecture
-
- ### InterviewState
-
- ```python
- from typing import TypedDict, Literal, Annotated
- from langgraph.graph.message import add_messages
-
- class InterviewState(TypedDict):
-     # Static — set once at session start
-     topic_name: str
-     session_id: str
-     student_id: str
-     questions_remaining: list[dict]   # { question_text, difficulty }
-     past_best_score: int | None       # from previous attempts on same topic
-     past_weak_areas: list[str]        # from previous attempt feedback
-
-     # Dynamic — mutates during session
-     messages: Annotated[list, add_messages]  # last 2-3 turns only
-     conversation_summary: str         # compressed older turns
-     questions_asked: list[str]        # prevents repeats
-     student_weak_areas: list[str]     # grows during session
-     turn_count: int
-     awaiting_counter_response: bool   # true after counter-question
-
-     # Terminal — set once at end
-     status: Literal["active", "complete"]
-     score: int | None
-     feedback: dict | None
- ```
-
- ### Graph Nodes
-
- ```
- ask_question
-   Input:  questions_remaining, questions_asked, conversation_summary
-   Action: Pick one question (bias toward difficulty mix)
-           Move to questions_asked
-           Remove from questions_remaining
-   Output: AI message with question
-
- evaluate_answer
-   Input:  last student message, conversation_summary, current question context
-   Action: Silently score: strong | shallow | wrong
-           If shallow → add topic to student_weak_areas
-   Routes:
-     shallow AND awaiting_counter_response=false → counter_question
-     strong OR wrong → check turn count
-     turn_count >= 8 OR questions_remaining empty → generate_report
-     else → ask_question
-     turn_count % 4 == 0 → summarize (then ask_question)
-
- counter_question
-   Input:  last student message + question context
-   Action: Generate ONE probing follow-up (no question bank used)
-           Set awaiting_counter_response=true
-   Output: AI counter-question message
-
- summarize
-   Input:  full messages[]
-   Action: LLM call to compress messages into conversation_summary string
-           Clear messages[] to empty
-           Reset awaiting_counter_response=false
-   Output: Updated summary, empty messages
-
- generate_report
-   Input:  questions_asked, student_weak_areas, conversation_summary, past_best_score
-   Action: LLM call → structured JSON report
-           Write score + feedback to NeonDB interview_sessions
-           Set status: complete
-   Output: { score, summary, concept_score, depth_score, mistakes[], tips[] }
- ```
-
- ### Graph Flow
-
- ```python
- graph.set_entry_point("ask_question")
-
- graph.add_edge("ask_question", "evaluate_answer")
-
- graph.add_conditional_edges("evaluate_answer", route_after_evaluation, {
-     "counter": "counter_question",
-     "next_question": "ask_question",
-     "summarize": "summarize",
-     "end": "generate_report"
- })
-
- graph.add_edge("counter_question", "evaluate_answer")
- graph.add_edge("summarize", "ask_question")
- graph.add_edge("generate_report", END)
- ```
-
- ### Checkpointer
-
- ```python
- # checkpointer.py
- import os
-
- from langgraph.checkpoint.postgres.aio import AsyncPostgresSaver
-
- async def get_checkpointer():
-     return await AsyncPostgresSaver.from_conn_string(
-         os.getenv("NEON_DB_URL"),
-         schema_name="checkpointer"
-     )
-
- # Called once on app startup
- await checkpointer.setup()
- ```
-
- ### Thread ID Convention
-
- ```python
- # LangGraph uses thread_id to scope checkpointed state
- # Always use session_id (UUID from interview_sessions table) as thread_id
- config = {"configurable": {"thread_id": session_id}}
- ```
-
- ---
-
- ## Token Budget Per LLM Call
-
- ```
- Node              Tokens sent     Notes
- ────────────────────────────────────────────────────────
- ask_question      ~300 fixed      summary + last 2 msgs + topic
- evaluate_answer   ~350 fixed      summary + last 2 msgs + question
- counter_question  ~250 fixed      last exchange only
- summarize         ~600 variable   full messages[] (triggered every 4 turns)
- generate_report   ~400 fixed      summary + weak_areas + questions_asked
- ────────────────────────────────────────────────────────
- Max per turn      ~700 fixed      ceiling (except summarize turns)
- vs current doc    ~1600+          growing unbounded
- ```
-
- ## System Prompt Structure
-
- ```python
- # prompts.py
-
- def build_ask_question_prompt(topic: str, summary: str, asked: list[str]) -> str:
-     # Topic context + summary + which questions asked
-     # Instructs AI to pick next question and ask it naturally
-     ...
-
- def build_evaluate_prompt(question: str, answer: str, summary: str) -> str:
-     # Evaluates student answer: strong | shallow | wrong
-     # Returns structured JSON: { verdict, weak_area? }
-     ...
-
- def build_counter_prompt(question: str, answer: str) -> str:
-     # Generates one probing follow-up question
-     ...
-
- def build_summarize_prompt(messages: list) -> str:
-     # Compresses conversation into 150-word summary
-     ...
-
- def build_report_prompt(topic: str, asked: list, weak_areas: list,
-                         summary: str, past_score: int | None) -> str:
-     # Generates final structured feedback JSON
-     # Injects past_score for context-aware assessment
-     ...
- ```
-
- ---
-
- ## Session Flow (Complete)
-
- ```
- Student clicks topic
- → POST /interview/start
- → Create interview_sessions row (status: active)
- → Fetch questions for topic from NeonDB
- → Fetch past attempts for this student+topic (for past_best_score)
- → LangGraph: graph.ainvoke(initial_state, config={thread_id: session_id})
- → ask_question node runs → returns first question
- → Response: { session_id, message: "Hi! Let's begin. [First question]" }

Student types answer
- → POST /interview/turn { session_id, student_message }
- → FastAPI loads graph state from NeonDB checkpoint via thread_id
- → Appends student message to state
- → evaluate_answer node runs → routes to next node
- → Response: { message, is_counter_q, is_complete, feedback? }

Session ends
- → generate_report node writes to NeonDB interview_sessions
- → Response: { is_complete: true, feedback: { score, summary, ... } }
- → Frontend redirects to /report/{session_id}

Session recovery (tab closed and reopened)
- → GET /interview/state/{session_id}
- → LangGraph loads checkpoint from NeonDB
- → Returns: { status, turn_count, last_message }
- → Frontend resumes from last state
- ```
-
- ---
-
- ## Frontend API Layer
-
- All API calls use the same origin (no CORS needed). Access token sent in every request header.
-
- ```typescript
- // api/client.ts — base fetch wrapper
- const API_BASE = '' // same origin
-
- async function apiFetch(path: string, options?: RequestInit) {
-   const token = useAuthStore.getState().accessToken
-
-   const res = await fetch(`${API_BASE}${path}`, {
-     ...options,
-     headers: {
-       'Content-Type': 'application/json',
-       'Authorization': `Bearer ${token}`,
-       ...options?.headers,
-     },
-     credentials: 'include', // for refresh token cookie
-   })
-
-   if (res.status === 401) {
-     // Attempt token refresh
-     await refreshToken()
-     // Retry original request once
-   }
-
-   return res.json()
- }
- ```
-
- ---
-
- ## Frontend State Management (Zustand)
-
- ```typescript
- // store/authStore.ts
- interface AuthStore {
-   accessToken: string | null
-   user: { id: string, role: string, full_name: string, batch_id: string | null } | null
-   setAuth: (token: string, user: User) => void
-   clearAuth: () => void
- }
-
- // store/interviewStore.ts
- interface InterviewStore {
-   sessionId: string | null
-   messages: Message[]
-   isLoading: boolean
-   isComplete: boolean
-   turnCount: number
-   addMessage: (message: Message) => void
-   setLoading: (v: boolean) => void
-   setComplete: (feedback: Feedback) => void
-   reset: () => void
- }
- ```
-
- ---
-
- ## Dockerfile
-
- ```dockerfile
- # Stage 1: Build React frontend
- FROM node:20-slim AS frontend-build
- WORKDIR /app/frontend
- COPY frontend/package*.json ./
- RUN npm install
- COPY frontend/ ./
- RUN npm run build
- # Output: /app/frontend/dist
-
- # Stage 2: Python runtime
- FROM python:3.11-slim
- WORKDIR /app
-
- # Install Python dependencies
- COPY backend/requirements.txt .
- RUN pip install --no-cache-dir -r requirements.txt
-
- # Copy FastAPI backend
- COPY backend/ ./backend/
-
- # Copy React static build from Stage 1
- COPY --from=frontend-build /app/frontend/dist ./frontend/dist
-
- # HF Spaces requires port 7860
- EXPOSE 7860
-
- CMD ["uvicorn", "backend.main:app", "--host", "0.0.0.0", "--port", "7860"]
- ```
-
- ---
-
- ## FastAPI Entry Point
-
- ```python
- # backend/main.py
- from fastapi import FastAPI
- from fastapi.staticfiles import StaticFiles
- from fastapi.middleware.cors import CORSMiddleware
- from contextlib import asynccontextmanager
-
- from routers import auth, batches, topics, sessions, upload, interview, instructor, student
- from checkpointer import get_checkpointer
- from db.connection import init_db_pool
-
- @asynccontextmanager
- async def lifespan(app: FastAPI):
-     # Startup
-     await init_db_pool()
-     checkpointer = await get_checkpointer()
-     await checkpointer.setup()  # creates checkpointer schema + tables
-     app.state.checkpointer = checkpointer
-     yield
-     # Shutdown — cleanup if needed
-
- app = FastAPI(lifespan=lifespan)
-
- # Routers (API routes registered BEFORE static mount)
- app.include_router(auth.router, prefix="/api/auth")
- app.include_router(batches.router, prefix="/api/batches")
- app.include_router(topics.router, prefix="/api/topics")
- app.include_router(sessions.router, prefix="/api/sessions")
- app.include_router(upload.router, prefix="/api/upload")
- app.include_router(interview.router, prefix="/interview")
-
- # Instructor + student combined dashboard routes
- app.include_router(instructor.router, prefix="/api/instructor")
- app.include_router(student.router, prefix="/api/student")
-
- # React static build — MUST be last
- app.mount("/", StaticFiles(directory="frontend/dist", html=True), name="static")
- ```
-
- ---
-
- ## requirements.txt
-
- ```
- fastapi==0.115.0
- uvicorn==0.30.6
- langgraph==0.2.50
- langgraph-checkpoint-postgres==2.0.8
- asyncpg==0.29.0
- python-jose[cryptography]==3.3.0
- bcrypt==4.1.3
- python-multipart==0.0.9
- httpx==0.27.2
- pydantic==2.8.2
- python-dotenv==1.0.1
- ```
-
- ---
-
- ## Known Constraints & Decisions
-
- | Decision | Reason |
- |----------|--------|
- | No Supabase/Firebase | Hackathon rules |
- | Custom JWT only | Hackathon rules |
- | React + Vite, not Next.js | Single-container deployment, no Vercel needed |
- | NeonDB for everything | One DB service; connection pooler handles LangGraph checkpoint writes |
- | Two NeonDB schemas | Clean separation — app data vs graph state |
- | CSV parsed to DB, no file storage | Eliminates need for S3/R2/Cloudinary |
- | Browser calls same origin | No CORS, no latency overhead from proxy hops |
- | LangGraph summarize every 4 turns | Fixed token ceiling ~700/turn regardless of session length |
- | session_id as LangGraph thread_id | Natural scoping; ties graph state to DB session row |
- | Static files mounted last in FastAPI | API routes must be registered before the static catch-all |
-
- ---
-
- ## What Is Not Built (Scope Boundary)
-
- - Voice input (V2)
- - Email notifications (V2)
- - Leaderboard (V2)
- - Certificate generation (V2)
- - Question bank editor UI — use CSV upload only (V2)
- - Dark/light toggle — dark only (V2)
- - Multiple instructors per batch (V3)
-
- ---
-
- ## Architecture Status
-
- > 70% confirmed. Subject to modification during build if constraints change.
- > Update this document if architecture decisions change during the hackathon.
- > Do not rely on the original `ai-interviewmentor-docs.md` for stack or deployment decisions — use this file.
-
- ---
-
- *AI InterviewMentor · Claude Code Reference · Hackathon 2026*
+ # CLAUDE.md
+
+ This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
+
+ ## Implementation Status (as of 2026-03-28)
+
+ | Phase | Status | Description |
+ |-------|--------|-------------|
+ | 1 — Scaffolding | ✅ Done | Vite+React, FastAPI, Docker, NeonDB schema |
+ | 2 — Auth | ✅ Done | JWT, bcrypt, signup/login/refresh/logout |
+ | 3 — Instructor Batch/Topics | ✅ Done | Batch creation, class code, topic CRUD, lock/unlock |
+ | 4 — Question Bank Upload | ✅ Done | CSV parse + bulk insert, UploadZone UI |
+ | 5 — LangGraph Interview Engine | ✅ Done | State machine, nodes, prompts, `/interview` router, checkpointer |
+ | 6 — Student Interview UI | ✅ Done | Chat UI, interviewStore, interview API |
+ | 7 — Reports & Feedback | ✅ Done | ScoreRing, FeedbackCard, SummaryBlock, Report page |
+ | 8 — Student Dashboard | ✅ Done | TopicChip, AttemptRow, StudentDashboard page |
+ | 9 — Instructor Analytics | ✅ Done | StudentRow, GapReport, StudentDetail page, InstructorDashboard extended |
+ | 10 — Deploy | ⏳ Pending | Docker wiring, static mount, HF Spaces |
+
+ **Active area (other dev)**: `backend/graph/`, `backend/prompts.py`, `backend/checkpointer.py`, `backend/routers/interview.py`
+
  ---
+
+ ## Collaboration Notes (Two Developers)
+
+ Both developers are working on Phase 5 simultaneously. Each stub file has a header comment defining the **owner**, **interface contract**, and **dependencies** to prevent conflicts. Follow these rules:
+
+ - **Do not change function signatures** defined in stub headers without coordinating first
+ - `InterviewState` in `backend/graph/state.py` is the shared contract — both sides depend on it; define it first before touching nodes or prompts
+ - `backend/prompts.py` owns all LLM prompt strings; nodes import from here, never inline prompts in nodes
+ - `backend/routers/interview.py` depends on `graph.py` being importable — implement `graph.py` before wiring the router
+
  ---
+ ## Project Overview
+
+ AI InterviewMentor is a full-stack AI-powered mock interview platform. It is a monorepo with a React frontend and a Python FastAPI backend, deployed as a single Docker container to Hugging Face Spaces (port 7860).
+
+ Two user roles: **Instructor** (creates batches, manages topics/questions) and **Student** (joins via class code, takes AI interviews). The AI interview engine uses LangGraph with PostgreSQL-backed checkpointing.
+
+ **Hackathon constraint**: No third-party auth (Supabase/Firebase banned) — custom JWT only.
+
+ ## Commands
+
+ ### Backend
  ```bash
+ uv run uvicorn backend.main:app --host 0.0.0.0 --port 7860 --reload   # Dev server
+ uv run pytest                                                         # All tests
+ uv run pytest tests/test_jwt.py                                       # Single test file
+ uv run pytest tests/test_jwt.py::test_create_access_token -v          # Single test
+ uv sync                                                               # Install/sync deps
  ```
+
+ ### Frontend
+ ```bash
+ cd frontend
+ npm install          # Install deps
+ npm run dev          # Dev server (port 5173, proxies /api to :7860)
+ npm run build        # Production build (TypeScript check + Vite)
+ npm run lint         # ESLint
+ npm run preview      # Preview production build
  ```
+
+ ### Docker
+ ```bash
+ docker build -t ai-interviewmentor .
+ docker run -p 7860:7860 --env-file .env ai-interviewmentor
  ```
+
+ ## Architecture
+
+ ### Same-Origin Design
+ FastAPI serves both the API (`/api/*`) and the built React SPA (catch-all `/`). No CORS needed. In development, Vite proxies `/api` and `/interview` to `localhost:7860`.
+
  ### Auth Flow
+ - Access token: HS256 JWT, 15-minute expiry, sent in the `Authorization: Bearer` header
+ - Refresh token: 7-day expiry, bcrypt-hashed in DB, sent as an httpOnly cookie
+ - Frontend `apiFetch()` wrapper (`frontend/src/api/client.ts`) auto-refreshes on 401
+ - Backend dependencies: `get_current_user`, `require_instructor`, `require_student` in `backend/auth/deps.py`
+
+ ### Database (NeonDB/PostgreSQL)
+ - `public` schema: app tables (users, refresh_tokens, batches, topics, questions, interview_sessions)
+ - `checkpointer` schema: LangGraph state (auto-managed)
+ - Connection via asyncpg pool (`backend/db/connection.py`)
+ - Raw SQL queries in `backend/db/queries.py`
+
+ ### LangGraph Interview Engine
+ State machine: `ask_question → evaluate_answer → (counter_question | ask_question | generate_report)`. Summarizes every 4 turns; max 8 turns. `session_id = thread_id` ties DB sessions to LangGraph state. Target: ~700 tokens per LLM call via OpenRouter (MiniMax 2.7).
+
+ ### Frontend Patterns
+ - **State**: Zustand stores (`authStore` with localStorage persist, `interviewStore`)
+ - **Routing**: React Router 7 with a role-based `ProtectedRoute` wrapper
+ - **Styling**: Tailwind CSS v3, dark theme only (class-based), no component library
+ - **API calls**: domain-organized modules in `frontend/src/api/` using the shared `apiFetch` wrapper
+
+ ## Environment Variables
97
 
98
+ See `.env.example`. Required: `NEON_DB_URL`, `JWT_SECRET`, `OPENROUTER_API_KEY`, `APP_URL`.
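One way to fail fast on misconfiguration at startup — a hedged sketch, not code from the repo; the variable names are the required ones listed above:

```python
import os
from collections.abc import Mapping

REQUIRED_VARS = ("NEON_DB_URL", "JWT_SECRET", "OPENROUTER_API_KEY", "APP_URL")

def missing_env(env: Mapping[str, str] = os.environ) -> list[str]:
    """Return the required variables that are unset or empty, in declaration order."""
    return [name for name in REQUIRED_VARS if not env.get(name)]
```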
99
 
100
+ ## Development Notes
101
 
102
+ - Backend package manager is `uv` (not pip). Always use `uv run` or `uv sync`.
103
+ - Frontend uses React 19, TypeScript strict mode (ES2023 target).
104
+ - Many backend routers and frontend components are stubs awaiting implementation — check `docs/tasks.md` for the phase breakdown.
105
+ - Detailed architecture diagrams and DB schema are in `docs/architecture.md`.
README.md DELETED
@@ -1,2 +0,0 @@
1
- # ai-interviewmentor
2
- Problem It Solves: Training institutes struggle with 1-to-1 instructor-student follow-up at scale. Students learn syllabus topics but can't gauge their actual interview readiness. AI InterviewMentor bridges this gap by letting students take AI-powered mock interviews topic-by-topic — with dynamic counter-questions that simulate real interviews — an
backend/checkpointer.py CHANGED
@@ -1 +1,53 @@
1
- # LangGraph AsyncPostgresSaver setup — implemented in Phase 5
1
+ import os
2
+ from urllib.parse import urlparse, urlencode, parse_qs, urlunparse
3
+
4
+ from psycopg.rows import dict_row
5
+ from psycopg_pool import AsyncConnectionPool
6
+ from langgraph.checkpoint.postgres.aio import AsyncPostgresSaver
7
+
8
+ _checkpointer: AsyncPostgresSaver | None = None
9
+ _pool: AsyncConnectionPool | None = None
10
+
11
+
12
+ def _psycopg_conn_string(url: str) -> str:
13
+ """
14
+ Strip parameters psycopg doesn't support (e.g. channel_binding)
15
+ and ensure sslmode is set to require.
16
+ """
17
+ parsed = urlparse(url)
18
+ params = parse_qs(parsed.query)
19
+ # Remove unsupported params
20
+ params.pop("channel_binding", None)
21
+ # Ensure SSL
22
+ if "sslmode" not in params:
23
+ params["sslmode"] = ["require"]
24
+ new_query = urlencode({k: v[0] for k, v in params.items()})
25
+ clean = urlunparse(parsed._replace(query=new_query))
26
+ return clean
27
+
28
+
29
+ async def init_checkpointer() -> AsyncPostgresSaver:
30
+ """
31
+ Initialize AsyncPostgresSaver backed by a psycopg connection pool.
32
+ Call once at app startup. The pool stays open for the process lifetime.
33
+ """
34
+ global _checkpointer, _pool
35
+ conn_string = _psycopg_conn_string(os.getenv("NEON_DB_URL", ""))
36
+
37
+ _pool = AsyncConnectionPool(
38
+ conn_string,
39
+ max_size=5,
40
+ kwargs={"autocommit": True, "prepare_threshold": 0, "row_factory": dict_row},
41
+ open=False,
42
+ )
43
+ await _pool.open()
44
+
45
+ _checkpointer = AsyncPostgresSaver(_pool)
46
+ await _checkpointer.setup()
47
+ return _checkpointer
48
+
49
+
50
+ def get_checkpointer() -> AsyncPostgresSaver:
51
+ if _checkpointer is None:
52
+ raise RuntimeError("Checkpointer not initialized — call init_checkpointer() first")
53
+ return _checkpointer
backend/db/queries.py CHANGED
@@ -165,3 +165,119 @@ async def get_question_count_by_topic(topic_id: str) -> int:
165
  topic_id,
166
  )
167
  return result["count"]
168
+
169
+
170
+ import json as _json
171
+
172
+
173
+ async def get_questions_by_topic(topic_id: str) -> list[asyncpg.Record]:
174
+ async with get_pool().acquire() as conn:
175
+ return await conn.fetch(
176
+ "SELECT * FROM questions WHERE topic_id = $1 ORDER BY created_at ASC",
177
+ topic_id,
178
+ )
179
+
180
+
181
+ async def get_best_session_by_student_topic(
182
+ student_id: str, topic_id: str
183
+ ) -> Optional[asyncpg.Record]:
184
+ async with get_pool().acquire() as conn:
185
+ return await conn.fetchrow(
186
+ """
187
+ SELECT * FROM interview_sessions
188
+ WHERE student_id = $1 AND topic_id = $2 AND status = 'completed'
189
+ ORDER BY score DESC NULLS LAST
190
+ LIMIT 1
191
+ """,
192
+ student_id, topic_id,
193
+ )
194
+
195
+
196
+ async def create_interview_session(
197
+ student_id: str, topic_id: str
198
+ ) -> asyncpg.Record:
199
+ async with get_pool().acquire() as conn:
200
+ return await conn.fetchrow(
201
+ """
202
+ INSERT INTO interview_sessions (student_id, topic_id)
203
+ VALUES ($1, $2)
204
+ RETURNING *
205
+ """,
206
+ student_id, topic_id,
207
+ )
208
+
209
+
210
+ async def update_session_complete(
211
+ session_id: str, score: int, feedback: dict
212
+ ) -> None:
213
+ async with get_pool().acquire() as conn:
214
+ await conn.execute(
215
+ """
216
+ UPDATE interview_sessions
217
+ SET status = 'completed', score = $1, feedback = $2::jsonb,
218
+ completed_at = NOW()
219
+ WHERE id = $3
220
+ """,
221
+ score, _json.dumps(feedback), session_id,
222
+ )
223
+
224
+
225
+ async def get_unlocked_topics_by_batch_id(batch_id: str) -> list[asyncpg.Record]:
226
+ async with get_pool().acquire() as conn:
227
+ return await conn.fetch(
228
+ """
229
+ SELECT * FROM topics
230
+ WHERE batch_id = $1 AND is_unlocked = TRUE
231
+ ORDER BY order_index ASC
232
+ """,
233
+ batch_id,
234
+ )
235
+
236
+
237
+ async def get_sessions_by_student_id(student_id: str) -> list[asyncpg.Record]:
238
+ async with get_pool().acquire() as conn:
239
+ return await conn.fetch(
240
+ """
241
+ SELECT s.*, t.name AS topic_name
242
+ FROM interview_sessions s
243
+ JOIN topics t ON s.topic_id = t.id
244
+ WHERE s.student_id = $1
245
+ ORDER BY s.started_at DESC
246
+ """,
247
+ student_id,
248
+ )
249
+
250
+
251
+ async def get_session_by_id(session_id: str) -> Optional[asyncpg.Record]:
252
+ async with get_pool().acquire() as conn:
253
+ return await conn.fetchrow(
254
+ "SELECT * FROM interview_sessions WHERE id = $1", session_id
255
+ )
256
+
257
+
258
+ async def get_students_by_batch_id(batch_id: str) -> list[asyncpg.Record]:
259
+ async with get_pool().acquire() as conn:
260
+ return await conn.fetch(
261
+ """
262
+ SELECT id, full_name, email, created_at
263
+ FROM users
264
+ WHERE batch_id = $1 AND role = 'student'
265
+ ORDER BY full_name ASC
266
+ """,
267
+ batch_id,
268
+ )
269
+
270
+
271
+ async def get_session_stats_by_student_id(student_id: str) -> asyncpg.Record:
272
+ async with get_pool().acquire() as conn:
273
+ return await conn.fetchrow(
274
+ """
275
+ SELECT
276
+ COUNT(*) AS total_sessions,
277
+ COUNT(*) FILTER (WHERE status = 'completed') AS completed_sessions,
278
+ ROUND(AVG(score) FILTER (WHERE status = 'completed')) AS avg_score
279
+ FROM interview_sessions
280
+ WHERE student_id = $1
281
+ """,
282
+ student_id,
283
+ )
backend/graph/graph.py CHANGED
@@ -1 +1,46 @@
1
- # Graph builder and compiler — implemented in Phase 5
1
+ from langgraph.graph import END, StateGraph
2
+
3
+ from backend.graph.nodes import (
4
+ ask_question,
5
+ counter_question,
6
+ evaluate_answer,
7
+ generate_report,
8
+ route_after_evaluation,
9
+ summarize,
10
+ )
11
+ from backend.graph.state import InterviewState
12
+
13
+
14
+ def build_graph(checkpointer):
15
+ """Compile the interview graph with the given checkpointer. Call once at startup."""
16
+ builder = StateGraph(InterviewState)
17
+
18
+ builder.add_node("ask_question", ask_question)
19
+ builder.add_node("evaluate_answer", evaluate_answer)
20
+ builder.add_node("counter_question", counter_question)
21
+ builder.add_node("summarize", summarize)
22
+ builder.add_node("generate_report", generate_report)
23
+
24
+ builder.set_entry_point("ask_question")
25
+
26
+ builder.add_edge("ask_question", "evaluate_answer")
27
+
28
+ builder.add_conditional_edges(
29
+ "evaluate_answer",
30
+ route_after_evaluation,
31
+ {
32
+ "counter": "counter_question",
33
+ "next_question": "ask_question",
34
+ "summarize": "summarize",
35
+ "end": "generate_report",
36
+ },
37
+ )
38
+
39
+ builder.add_edge("counter_question", "evaluate_answer")
40
+ builder.add_edge("summarize", "ask_question")
41
+ builder.add_edge("generate_report", END)
42
+
43
+ return builder.compile(
44
+ checkpointer=checkpointer,
45
+ interrupt_before=["evaluate_answer"],
46
+ )
backend/graph/nodes.py CHANGED
@@ -1 +1,167 @@
1
- # LangGraph node functions — implemented in Phase 5
1
+ import json
2
+
3
+ from backend.graph.state import InterviewState
4
+ from backend.llm import call_llm
5
+ from backend.prompts import (
6
+ build_ask_question_prompt,
7
+ build_counter_prompt,
8
+ build_evaluate_prompt,
9
+ build_report_prompt,
10
+ build_summarize_prompt,
11
+ )
12
+ from backend.db import queries
13
+
14
+
15
+ def _msg_content(msg) -> str:
16
+ return msg.get("content", "") if isinstance(msg, dict) else getattr(msg, "content", "")
17
+
18
+
19
+ def _msg_role(msg) -> str:
20
+ if isinstance(msg, dict):
21
+ return msg.get("role", "")
22
+ name = type(msg).__name__.lower()
23
+ if "human" in name:
24
+ return "human"
25
+ return "assistant"
26
+
27
+
28
+ async def ask_question(state: InterviewState) -> dict:
29
+ remaining = list(state["questions_remaining"])
30
+ if not remaining:
31
+ return {}
32
+
33
+ question = remaining.pop(0)
34
+ prompt = build_ask_question_prompt(
35
+ state["topic_name"],
36
+ state["conversation_summary"],
37
+ state["questions_asked"],
38
+ )
39
+ response = await call_llm(prompt, max_tokens=200)
40
+
41
+ return {
42
+ "questions_remaining": remaining,
43
+ "questions_asked": state["questions_asked"] + [question["question_text"]],
44
+ "messages": [{"role": "assistant", "content": response}],
45
+ "turn_count": state["turn_count"] + 1,
46
+ "awaiting_counter_response": False,
47
+ }
48
+
49
+
50
+ async def evaluate_answer(state: InterviewState) -> dict:
51
+ last_student = next(
52
+ (m for m in reversed(state["messages"]) if _msg_role(m) == "human"),
53
+ None,
54
+ )
55
+ if not last_student:
56
+ return {"last_verdict": "wrong"}
57
+
58
+ last_question = state["questions_asked"][-1] if state["questions_asked"] else ""
59
+ prompt = build_evaluate_prompt(
60
+ last_question,
61
+ _msg_content(last_student),
62
+ state["conversation_summary"],
63
+ )
64
+ raw = await call_llm(prompt, max_tokens=100)
65
+
66
+ try:
67
+ result = json.loads(raw)
68
+ verdict = result.get("verdict", "wrong")
69
+ weak_area = result.get("weak_area")
70
+ except (json.JSONDecodeError, AttributeError):
71
+ verdict = "wrong"
72
+ weak_area = None
73
+
74
+ weak_areas = list(state["student_weak_areas"])
75
+ if verdict == "shallow" and weak_area:
76
+ weak_areas.append(str(weak_area))
77
+
78
+ return {
79
+ "last_verdict": verdict,
80
+ "student_weak_areas": weak_areas,
81
+ }
82
+
83
+
84
+ async def counter_question(state: InterviewState) -> dict:
85
+ last_student = next(
86
+ (m for m in reversed(state["messages"]) if _msg_role(m) == "human"),
87
+ None,
88
+ )
89
+ last_question = state["questions_asked"][-1] if state["questions_asked"] else ""
90
+ prompt = build_counter_prompt(
91
+ last_question,
92
+ _msg_content(last_student) if last_student else "",
93
+ )
94
+ response = await call_llm(prompt, max_tokens=150)
95
+
96
+ return {
97
+ "messages": [{"role": "assistant", "content": response}],
98
+ "awaiting_counter_response": True,
99
+ "turn_count": state["turn_count"] + 1,
100
+ }
101
+
102
+
103
+ async def summarize(state: InterviewState) -> dict:
104
+ prompt = build_summarize_prompt(state["messages"])
105
+ summary = await call_llm(prompt, max_tokens=200)
106
+
107
+ return {
108
+ "conversation_summary": summary,
109
+ "messages": [],
110
+ "awaiting_counter_response": False,
111
+ }
112
+
113
+
114
+ async def generate_report(state: InterviewState) -> dict:
115
+ prompt = build_report_prompt(
116
+ state["topic_name"],
117
+ state["questions_asked"],
118
+ state["student_weak_areas"],
119
+ state["conversation_summary"],
120
+ state["past_best_score"],
121
+ )
122
+ raw = await call_llm(prompt, max_tokens=400)
123
+
124
+ try:
125
+ feedback = json.loads(raw)
126
+ score = int(feedback.get("score", 0))
127
+ except (json.JSONDecodeError, ValueError, TypeError):
128
+ feedback = {
129
+ "score": 0,
130
+ "summary": raw,
131
+ "concept_score": 0,
132
+ "depth_score": 0,
133
+ "mistakes": [],
134
+ "tips": [],
135
+ }
136
+ score = 0
137
+
138
+ await queries.update_session_complete(state["session_id"], score, feedback)
139
+
140
+ return {
141
+ "status": "complete",
142
+ "score": score,
143
+ "feedback": feedback,
144
+ "messages": [{"role": "assistant", "content": feedback.get("summary", "Interview complete.")}],
145
+ }
146
+
147
+
148
+ def route_after_evaluation(state: InterviewState) -> str:
149
+ """Routing function for conditional edges after evaluate_answer."""
150
+ verdict = state.get("last_verdict", "wrong")
151
+ turn_count = state["turn_count"]
152
+ questions_remaining = state["questions_remaining"]
153
+ awaiting_counter = state["awaiting_counter_response"]
154
+
155
+ # Shallow + not already in counter loop → fire counter question
156
+ if verdict == "shallow" and not awaiting_counter:
157
+ return "counter"
158
+
159
+ # End conditions (checked before summarize to avoid wasted LLM call)
160
+ if turn_count >= 8 or not questions_remaining:
161
+ return "end"
162
+
163
+ # Compress memory every 4 turns
164
+ if turn_count % 4 == 0 and turn_count > 0:
165
+ return "summarize"
166
+
167
+ return "next_question"
backend/graph/state.py CHANGED
@@ -1 +1,27 @@
1
- # InterviewState TypedDict implemented in Phase 5
1
+ from typing import Annotated, Literal, TypedDict
2
+
3
+ from langgraph.graph.message import add_messages
4
+
5
+
6
+ class InterviewState(TypedDict):
7
+ # Static — set once at session start
8
+ topic_name: str
9
+ session_id: str
10
+ student_id: str
11
+ questions_remaining: list[dict] # [{"question_text": str, "difficulty": str}]
12
+ past_best_score: int | None
13
+ past_weak_areas: list[str]
14
+
15
+ # Dynamic — mutates during session
16
+ messages: Annotated[list, add_messages] # appended via reducer
17
+ conversation_summary: str
18
+ questions_asked: list[str]
19
+ student_weak_areas: list[str]
20
+ turn_count: int
21
+ awaiting_counter_response: bool
22
+ last_verdict: str | None # "strong" | "shallow" | "wrong" | None
23
+
24
+ # Terminal — set once at end
25
+ status: Literal["active", "complete"]
26
+ score: int | None
27
+ feedback: dict | None
backend/llm.py ADDED
@@ -0,0 +1,26 @@
1
+ import os
2
+
3
+ import httpx
4
+
5
+ _OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
6
+ _MODEL = "google/gemini-2.0-flash-exp:free"
7
+
8
+
9
+ async def call_llm(messages: list[dict], max_tokens: int = 512) -> str:
10
+ """Call OpenRouter API. Returns the assistant message content string."""
11
+ api_key = os.getenv("OPENROUTER_API_KEY", "")
12
+ async with httpx.AsyncClient(timeout=60.0) as client:
13
+ response = await client.post(
14
+ _OPENROUTER_URL,
15
+ headers={
16
+ "Authorization": f"Bearer {api_key}",
17
+ "Content-Type": "application/json",
18
+ },
19
+ json={
20
+ "model": _MODEL,
21
+ "messages": messages,
22
+ "max_tokens": max_tokens,
23
+ },
24
+ )
25
+ response.raise_for_status()
26
+ return response.json()["choices"][0]["message"]["content"] or ""
backend/main.py CHANGED
@@ -1,23 +1,41 @@
1
  from contextlib import asynccontextmanager
2
 
3
  from fastapi import FastAPI
4
 
 
5
  from backend.db.connection import init_db_pool
6
- from backend.routers import auth, batches, topics, upload
 
7
 
8
 
9
  @asynccontextmanager
10
  async def lifespan(app: FastAPI):
11
  await init_db_pool()
 
 
12
  yield
13
 
14
 
15
  app = FastAPI(lifespan=lifespan)
16
 
17
- app.include_router(auth.router, prefix="/api/auth")
18
- app.include_router(batches.router, prefix="/api/batches")
19
- app.include_router(topics.router, prefix="/api/topics")
20
- app.include_router(upload.router, prefix="/api/upload")
21
 
22
  # React static build — MUST be last
23
  # app.mount("/", StaticFiles(directory="frontend/dist", html=True), name="static")
 
1
+ import asyncio
2
+ import sys
3
+
4
+ from dotenv import load_dotenv
5
+ load_dotenv()
6
+
7
+ # psycopg (used by LangGraph checkpointer) requires SelectorEventLoop on Windows
8
+ if sys.platform == "win32":
9
+ asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
10
+
11
  from contextlib import asynccontextmanager
12
 
13
  from fastapi import FastAPI
14
 
15
+ from backend.checkpointer import init_checkpointer
16
  from backend.db.connection import init_db_pool
17
+ from backend.graph.graph import build_graph
18
+ from backend.routers import auth, batches, instructor, interview, sessions, student, topics, upload
19
 
20
 
21
  @asynccontextmanager
22
  async def lifespan(app: FastAPI):
23
  await init_db_pool()
24
+ checkpointer = await init_checkpointer()
25
+ app.state.graph = build_graph(checkpointer)
26
  yield
27
 
28
 
29
  app = FastAPI(lifespan=lifespan)
30
 
31
+ app.include_router(auth.router, prefix="/api/auth")
32
+ app.include_router(batches.router, prefix="/api/batches")
33
+ app.include_router(topics.router, prefix="/api/topics")
34
+ app.include_router(upload.router, prefix="/api/upload")
35
+ app.include_router(student.router, prefix="/api/student")
36
+ app.include_router(sessions.router, prefix="/api/sessions")
37
+ app.include_router(instructor.router, prefix="/api/instructor")
38
+ app.include_router(interview.router, prefix="/interview")
39
 
40
  # React static build — MUST be last
41
  # app.mount("/", StaticFiles(directory="frontend/dist", html=True), name="static")
backend/prompts.py CHANGED
@@ -1 +1,97 @@
1
- # Prompt builders for all LangGraph nodes — implemented in Phase 5
1
+ def build_ask_question_prompt(
2
+ topic: str, summary: str, asked: list[str]
3
+ ) -> list[dict]:
4
+ asked_str = "\n".join(f"- {q}" for q in asked) if asked else "None yet."
5
+ system = (
6
+ f"You are a technical interview AI conducting a mock interview on: {topic}. "
7
+ "Ask one clear, focused technical question. "
8
+ "Do not repeat questions already asked. Be conversational but professional."
9
+ )
10
+ user = (
11
+ f"Conversation summary:\n{summary or 'This is the start of the interview.'}\n\n"
12
+ f"Questions already asked:\n{asked_str}\n\n"
13
+ "Ask the next question. Just the question — no numbering, no preamble."
14
+ )
15
+ return [{"role": "system", "content": system}, {"role": "user", "content": user}]
16
+
17
+
18
+ def build_evaluate_prompt(
19
+ question: str, answer: str, summary: str
20
+ ) -> list[dict]:
21
+ system = (
22
+ "You are evaluating a student's answer in a technical interview. "
23
+ 'Respond with ONLY valid JSON: {"verdict": "strong"|"shallow"|"wrong", "weak_area": "topic or null"}\n'
24
+ "strong = correct and complete. shallow = partially correct, missing depth. "
25
+ "wrong = incorrect or off-topic."
26
+ )
27
+ user = (
28
+ f"Context: {summary or 'Start of interview.'}\n\n"
29
+ f"Question: {question}\n"
30
+ f"Student answer: {answer}\n\n"
31
+ "Evaluate. Return only the JSON object."
32
+ )
33
+ return [{"role": "system", "content": system}, {"role": "user", "content": user}]
34
+
35
+
36
+ def build_counter_prompt(question: str, answer: str) -> list[dict]:
37
+ system = (
38
+ "You are a technical interviewer. The student gave a shallow answer. "
39
+ "Ask ONE specific follow-up probing question to dig deeper into what they missed. "
40
+ "No preamble — just the question."
41
+ )
42
+ user = (
43
+ f"Original question: {question}\n"
44
+ f"Student's shallow answer: {answer}\n\n"
45
+ "Ask one targeted follow-up question."
46
+ )
47
+ return [{"role": "system", "content": system}, {"role": "user", "content": user}]
48
+
49
+
50
+ def build_summarize_prompt(messages: list) -> list[dict]:
51
+ def _content(m) -> str:
52
+ return m.get("content", "") if isinstance(m, dict) else getattr(m, "content", "")
53
+
54
+ def _role(m) -> str:
55
+ if isinstance(m, dict):
56
+ return m.get("role", "unknown")
57
+ name = type(m).__name__.lower()
58
+ return "AI" if "ai" in name else "STUDENT"
59
+
60
+ transcript = "\n".join(f"{_role(m).upper()}: {_content(m)}" for m in messages)
61
+ system = (
62
+ "Summarize this interview transcript in under 150 words. "
63
+ "Cover: topics discussed, student strengths, weak areas identified."
64
+ )
65
+ return [
66
+ {"role": "system", "content": system},
67
+ {"role": "user", "content": f"Transcript:\n{transcript}\n\nSummarize:"},
68
+ ]
69
+
70
+
71
+ def build_report_prompt(
72
+ topic: str,
73
+ asked: list[str],
74
+ weak_areas: list[str],
75
+ summary: str,
76
+ past_score: int | None,
77
+ ) -> list[dict]:
78
+ past_ctx = (
79
+ f"Their previous best score on this topic was {past_score}/100."
80
+ if past_score is not None
81
+ else "This is their first attempt on this topic."
82
+ )
83
+ system = (
84
+ "Generate a final interview performance report. "
85
+ "Respond with ONLY valid JSON:\n"
86
+ '{"score": 0-100, "summary": "string", "concept_score": 0-100, '
87
+ '"depth_score": 0-100, "mistakes": ["string"], "tips": ["string"]}'
88
+ )
89
+ user = (
90
+ f"Topic: {topic}\n"
91
+ f"{past_ctx}\n"
92
+ f"Questions covered: {', '.join(asked) if asked else 'none'}\n"
93
+ f"Weak areas: {', '.join(weak_areas) if weak_areas else 'none'}\n\n"
94
+ f"Interview summary:\n{summary or 'No summary available.'}\n\n"
95
+ "Generate the report JSON."
96
+ )
97
+ return [{"role": "system", "content": system}, {"role": "user", "content": user}]
backend/routers/instructor.py CHANGED
@@ -1,3 +1,57 @@
1
- from fastapi import APIRouter
2
 
3
  router = APIRouter()
1
+ from fastapi import APIRouter, Depends, HTTPException
2
+
3
+ from backend.auth.deps import require_instructor
4
+ from backend.db import queries
5
 
6
  router = APIRouter()
7
+
8
+
9
+ @router.get("/students")
10
+ async def list_students(user: dict = Depends(require_instructor)):
11
+ batch = await queries.get_batch_by_instructor_id(user["user_id"])
12
+ if not batch:
13
+ raise HTTPException(404, "No batch found")
14
+ students = await queries.get_students_by_batch_id(str(batch["id"]))
15
+ result = []
16
+ for s in students:
17
+ stats = await queries.get_session_stats_by_student_id(str(s["id"]))
18
+ result.append({
19
+ "id": str(s["id"]),
20
+ "full_name": s["full_name"],
21
+ "email": s["email"],
22
+ "total_sessions": stats["total_sessions"] or 0,
23
+ "completed_sessions": stats["completed_sessions"] or 0,
24
+ "avg_score": int(stats["avg_score"]) if stats["avg_score"] is not None else None,
25
+ })
26
+ return result
27
+
28
+
29
+ @router.get("/students/{student_id}")
30
+ async def get_student_detail(
31
+ student_id: str,
32
+ user: dict = Depends(require_instructor),
33
+ ):
34
+ batch = await queries.get_batch_by_instructor_id(user["user_id"])
35
+ if not batch:
36
+ raise HTTPException(404, "No batch found")
37
+ student = await queries.get_user_by_id(student_id)
38
+ if not student or str(student["batch_id"]) != str(batch["id"]):
39
+ raise HTTPException(404, "Student not found in your batch")
40
+ sessions = await queries.get_sessions_by_student_id(student_id)
41
+ return {
42
+ "id": str(student["id"]),
43
+ "full_name": student["full_name"],
44
+ "email": student["email"],
45
+ "sessions": [
46
+ {
47
+ "id": str(s["id"]),
48
+ "topic_id": str(s["topic_id"]),
49
+ "topic_name": s["topic_name"],
50
+ "status": s["status"],
51
+ "score": s["score"],
52
+ "started_at": s["started_at"].isoformat() if s["started_at"] else None,
53
+ "completed_at": s["completed_at"].isoformat() if s["completed_at"] else None,
54
+ }
55
+ for s in sessions
56
+ ],
57
+ }
backend/routers/interview.py CHANGED
@@ -1,3 +1,145 @@
1
- from fastapi import APIRouter
2
 
3
  router = APIRouter()
1
+ import json
2
+
3
+ from fastapi import APIRouter, Depends, HTTPException, Request
4
+ from pydantic import BaseModel
5
+
6
+ from backend.auth.deps import require_student
7
+ from backend.db import queries
8
 
9
  router = APIRouter()
10
+
11
+
12
+ class StartRequest(BaseModel):
13
+ topic_id: str
14
+
15
+
16
+ class TurnRequest(BaseModel):
17
+ session_id: str
18
+ student_message: str
19
+
20
+
21
+ def _last_ai_message(messages: list) -> str:
22
+ """Extract content of the last assistant message from a list."""
23
+ for m in reversed(messages):
24
+ role = m.get("role", "") if isinstance(m, dict) else getattr(type(m), "__name__", "").lower()
25
+ content = m.get("content", "") if isinstance(m, dict) else getattr(m, "content", "")
26
+ if role == "assistant" or "ai" in str(role).lower():
27
+ return content
28
+ return ""
29
+
30
+
31
+ @router.post("/start")
32
+ async def start_interview(
33
+ body: StartRequest,
34
+ request: Request,
35
+ user: dict = Depends(require_student),
36
+ ):
37
+ topic = await queries.get_topic_by_id(body.topic_id)
38
+ if not topic:
39
+ raise HTTPException(404, "Topic not found")
40
+ if not topic["is_unlocked"]:
41
+ raise HTTPException(403, "Topic is not unlocked for interviews")
42
+
43
+ questions = await queries.get_questions_by_topic(body.topic_id)
44
+ if not questions:
45
+ raise HTTPException(400, "No questions available for this topic")
46
+
47
+ past = await queries.get_best_session_by_student_topic(user["user_id"], body.topic_id)
48
+ past_best_score = past["score"] if past else None
49
+ past_weak_areas: list[str] = []
50
+ if past and past["feedback"]:
51
+ fb = past["feedback"] if isinstance(past["feedback"], dict) else json.loads(past["feedback"])
52
+ past_weak_areas = fb.get("tips", [])
53
+
54
+ session = await queries.create_interview_session(user["user_id"], body.topic_id)
55
+ session_id = str(session["id"])
56
+
57
+ initial_state = {
58
+ "topic_name": topic["name"],
59
+ "session_id": session_id,
60
+ "student_id": user["user_id"],
61
+ "questions_remaining": [
62
+ {"question_text": q["question_text"], "difficulty": q["difficulty"]}
63
+ for q in questions
64
+ ],
65
+ "past_best_score": past_best_score,
66
+ "past_weak_areas": past_weak_areas,
67
+ "messages": [],
68
+ "conversation_summary": "",
69
+ "questions_asked": [],
70
+ "student_weak_areas": [],
71
+ "turn_count": 0,
72
+ "awaiting_counter_response": False,
73
+ "last_verdict": None,
74
+ "status": "active",
75
+ "score": None,
76
+ "feedback": None,
77
+ }
78
+
79
+ graph = request.app.state.graph
80
+ config = {"configurable": {"thread_id": session_id}}
81
+ result = await graph.ainvoke(initial_state, config)
82
+
83
+ return {
84
+ "session_id": session_id,
85
+ "message": _last_ai_message(result["messages"]),
86
+ }
87
+
88
+
89
+ @router.post("/turn")
90
+ async def interview_turn(
91
+ body: TurnRequest,
92
+ request: Request,
93
+ user: dict = Depends(require_student),
94
+ ):
95
+ session = await queries.get_session_by_id(body.session_id)
96
+ if not session:
97
+ raise HTTPException(404, "Session not found")
98
+ if session["status"] == "completed":
99
+ raise HTTPException(400, "Session is already complete")
100
+
101
+ graph = request.app.state.graph
102
+ config = {"configurable": {"thread_id": body.session_id}}
103
+
104
+ # Resume the interrupted graph: apply the reply as a state update, then
+ # invoke with input=None (a dict input would restart from the entry point).
+ await graph.aupdate_state(
105
+ config, {"messages": [{"role": "human", "content": body.student_message}]}
106
+ )
107
+ result = await graph.ainvoke(None, config)
108
+
109
+ is_complete = result.get("status") == "complete"
110
+ response: dict = {
111
+ "message": _last_ai_message(result["messages"]),
112
+ "is_counter_q": result.get("awaiting_counter_response", False),
113
+ "is_complete": is_complete,
114
+ }
115
+ if is_complete:
116
+ response["feedback"] = result.get("feedback")
117
+
118
+ return response
119
+
120
+
121
+ @router.get("/state/{session_id}")
122
+ async def get_interview_state(
123
+ session_id: str,
124
+ request: Request,
125
+ user: dict = Depends(require_student),
126
+ ):
127
+ from backend.checkpointer import get_checkpointer
128
+
129
+ checkpointer = get_checkpointer()
130
+ config = {"configurable": {"thread_id": session_id}}
131
+ checkpoint = await checkpointer.aget(config)
132
+
133
+ if not checkpoint:
134
+ raise HTTPException(404, "Session state not found")
135
+
136
+ channel_values = checkpoint.get("channel_values", {})
137
+ messages = channel_values.get("messages", [])
138
+
139
+ last_msg = _last_ai_message(messages) if messages else None
140
+
141
+ return {
142
+ "status": channel_values.get("status", "active"),
143
+ "turn_count": channel_values.get("turn_count", 0),
144
+ "last_message": last_msg,
145
+ }
backend/routers/sessions.py CHANGED
@@ -1,3 +1,26 @@
1
- from fastapi import APIRouter
2
 
3
  router = APIRouter()
1
+ from fastapi import APIRouter, Depends, HTTPException
2
+
3
+ from backend.auth.deps import get_current_user
4
+ from backend.db import queries
5
 
6
  router = APIRouter()
7
+
8
+
9
+ @router.get("/{session_id}")
10
+ async def get_session(session_id: str, user: dict = Depends(get_current_user)):
11
+ session = await queries.get_session_by_id(session_id)
12
+ if not session:
13
+ raise HTTPException(404, "Session not found")
14
+ # Students can only see their own sessions; instructors can see any
15
+ if user["role"] == "student" and str(session["student_id"]) != user["user_id"]:
16
+ raise HTTPException(403, "Access denied")
17
+ return {
18
+ "id": str(session["id"]),
19
+ "topic_id": str(session["topic_id"]),
20
+ "student_id": str(session["student_id"]),
21
+ "status": session["status"],
22
+ "score": session["score"],
23
+ "feedback": session["feedback"],
24
+ "started_at": session["started_at"].isoformat() if session["started_at"] else None,
25
+ "completed_at": session["completed_at"].isoformat() if session["completed_at"] else None,
26
+ }
backend/routers/student.py CHANGED
@@ -1,3 +1,39 @@
1
- from fastapi import APIRouter
2
 
3
  router = APIRouter()
1
+ from fastapi import APIRouter, Depends, HTTPException
2
+
3
+ from backend.auth.deps import require_student
4
+ from backend.db import queries
5
 
6
  router = APIRouter()
7
+
8
+
9
+ @router.get("/topics")
10
+ async def get_student_topics(user: dict = Depends(require_student)):
11
+ batch_id = user.get("batch_id")
12
+ if not batch_id:
13
+ raise HTTPException(400, "You are not enrolled in a batch")
14
+ topics = await queries.get_unlocked_topics_by_batch_id(batch_id)
15
+ return [
16
+ {
17
+ "id": str(t["id"]),
18
+ "name": t["name"],
19
+ "order_index": t["order_index"],
20
+ }
21
+ for t in topics
22
+ ]
23
+
24
+
25
+ @router.get("/sessions")
26
+ async def get_student_sessions(user: dict = Depends(require_student)):
27
+ sessions = await queries.get_sessions_by_student_id(user["user_id"])
28
+ return [
29
+ {
30
+ "id": str(s["id"]),
31
+ "topic_id": str(s["topic_id"]),
32
+ "topic_name": s["topic_name"],
33
+ "status": s["status"],
34
+ "score": s["score"],
35
+ "started_at": s["started_at"].isoformat() if s["started_at"] else None,
36
+ "completed_at": s["completed_at"].isoformat() if s["completed_at"] else None,
37
+ }
38
+ for s in sessions
39
+ ]
docs/collaboration.md ADDED
@@ -0,0 +1,163 @@
+# Collaboration Handoff — AI InterviewMentor
+
+> **READ THIS FIRST** before touching any code.
+> Two developers are working on this project simultaneously using Claude Code.
+> This file tracks what has been done, who owns what, and the exact API contracts each side expects.
+
+---
+
+## Who Owns What
+
+| Area | Owner | Status |
+|------|-------|--------|
+| Phases 1–4 (scaffolding, auth, instructor batch/topics, CSV upload) | Done | ✅ |
+| **Phase 5** — LangGraph backend engine | **Other dev** | 🔄 In progress |
+| Phases 6–9 (student UI, reports, dashboards, analytics) | **This dev** | ✅ Done |
+| Phase 10 — Docker + deploy | Shared | ⏳ Pending |
+
+**Rule**: Do not touch files owned by the other dev without coordinating first.
+
+---
+
+## Phase 5 — What the Backend Must Deliver (Other Dev's Contract)
+
+The frontend (Phases 6–7) is already wired to these endpoints. Do not change the request/response shapes.
+
+### `POST /api/interview/start`
+Request: `{ "topic_id": "<uuid>" }`
+Response: `{ "session_id": "<uuid>", "message": "<first AI question>" }`
+
+Behaviour:
+- Creates a row in `interview_sessions` with `status='active'`
+- Initialises LangGraph state with `thread_id` = `session_id`
+- Returns the first AI question as `message`
+
+### `POST /api/interview/turn`
+Request: `{ "session_id": "<uuid>", "student_message": "<student text>" }`
+Response: `{ "message": "<AI response or next question>", "is_counter_q": <bool>, "is_complete": <bool>, "feedback": { ... } }` — `feedback` is present only when the interview is complete
+
+Behaviour:
+- Appends the student answer + AI response to LangGraph state
+- When the interview ends: sets `interview_sessions.status='completed'`, writes `score` and `feedback` JSON
+- Returns `is_complete: true` when done (max 8 turns or question bank exhausted)
+
+### `GET /api/interview/state/{session_id}`
+Response:
+```json
+{
+  "status": "active|completed",
+  "turn_count": 3,
+  "last_message": "<most recent AI message, or null>"
+}
+```
+
+Behaviour: used to resume a session after tab close/reload.
+
+### `feedback` JSONB shape (written to `interview_sessions.feedback`)
+The frontend's `Feedback` type expects:
+```json
+{
+  "score": 82,
+  "summary": "Good understanding of concepts...",
+  "concept_score": 80,
+  "depth_score": 85,
+  "mistakes": ["Missed edge case in X", "..."],
+  "tips": ["Strong on Y", "..."]
+}
+```
+
+---
+
+ ## Routers Already Registered in `backend/main.py`
74
+
75
+ ```
76
+ /api/auth/* — auth.router (signup, login, refresh, logout)
77
+ /api/batches/* — batches.router (create, get mine, get by id)
78
+ /api/topics/* — topics.router (create, list, unlock/lock)
79
+ /api/upload — upload.router (CSV parse + bulk insert)
80
+ /api/student/* — student.router (topics for student, sessions list)
81
+ /api/sessions/* — sessions.router (get session detail/report)
82
+ /api/instructor/* — instructor.router (student list + detail)
83
+ ```
84
+
85
+ **Phase 5 must also register:**
86
+ ```python
87
+ from backend.routers import interview
88
+ app.include_router(interview.router, prefix="/api/interview")
89
+ ```
90
+
91
+ ---
92
+
93
+ ## Completed Frontend Files (Do Not Overwrite)
94
+
95
+ ### API Layer (`frontend/src/api/`)
96
+ | File | Functions |
97
+ |------|-----------|
98
+ | `auth.ts` | `login`, `signup`, `logout`, `refreshToken` |
99
+ | `client.ts` | `apiFetch` — returns `Promise<Response>`, auto-refreshes on 401 |
100
+ | `topics.ts` | `getMyBatch`, `createBatch`, `getTopics`, `createTopic`, `setTopicUnlock`, `getStudentTopics`, `getStudentSessions` |
101
+ | `interview.ts` | `startInterview(topicId)`, `sendTurn(sessionId, answer)`, `getInterviewState(sessionId)` |
102
+ | `sessions.ts` | `getSession(sessionId)` |
103
+ | `upload.ts` | `uploadCSV(topicId, file)` |
104
+ | `student.ts` | `getStudentDashboard()` |
105
+ | `instructor.ts` | `getInstructorStudents()`, `getStudentDetail(studentId)` |
106
+
107
+ ### Store (`frontend/src/store/`)
108
+ | File | State |
109
+ |------|-------|
110
+ | `authStore.ts` | `accessToken`, `user`, `setAuth`, `clearAuth` — persisted to localStorage |
111
+ | `interviewStore.ts` | `sessionId`, `messages`, `turnCount`, `status`, `startSession`, `addMessage`, `setTurnCount`, `setStatus`, `reset` |
112
+
113
+ ### Pages (`frontend/src/pages/`)
114
+ All pages implemented: `Login`, `Signup`, `StudentDashboard`, `Interview`, `Report`, `InstructorDashboard`, `Upload`, `StudentDetail`
115
+
116
+ ### Types (`frontend/src/types/index.ts`)
117
+ ```ts
118
+ User, AuthResponse, Message, Feedback, InterviewSession, SessionReport, StudentSession, StudentTopic, InstructorStudent, StudentAttempt
119
+ ```
120
+
121
+ ---
122
+
123
+ ## Key Patterns to Follow
124
+
125
+ ### apiFetch usage (always check `.ok`):
126
+ ```ts
127
+ const res = await apiFetch('/api/something')
128
+ if (!res.ok) {
129
+ const err = await res.json()
130
+ throw new Error(err.detail ?? 'Request failed')
131
+ }
132
+ return res.json()
133
+ ```
134
+
135
+ ### Tailwind dark theme conventions:
136
+ - Page bg: `bg-gray-950`
137
+ - Card: `bg-gray-900 border border-gray-800 rounded-xl`
138
+ - Input: `bg-gray-800 border border-gray-700 rounded-lg focus:border-indigo-500`
139
+ - Primary button: `bg-indigo-600 hover:bg-indigo-500`
140
+ - Muted text: `text-gray-400`, `text-gray-500`
141
+
142
+ ### Backend router pattern:
143
+ ```python
144
+ from fastapi import APIRouter, Depends, HTTPException
145
+ from backend.auth.deps import require_student # or require_instructor
146
+ from backend.db import queries
147
+
148
+ router = APIRouter()
149
+ # user dict has: user_id, role, batch_id, email
150
+ ```
151
+
152
+ ---
153
+
154
+ ## DB Schema Reference
155
+
156
+ ```
157
+ users — id, full_name, email, password_hash, role, batch_id, created_at
158
+ batches — id, name, instructor_id, class_code, created_at
159
+ topics — id, batch_id, name, is_unlocked, order_index, created_at
160
+ questions — id, topic_id, question_text, difficulty, created_at
161
+ interview_sessions — id, student_id, topic_id, status('active'|'completed'), score, feedback(JSONB), started_at, completed_at
162
+ refresh_tokens — id, user_id, token_hash, expires_at
163
+ ```
docs/question_bank/linear_regression.csv ADDED
@@ -0,0 +1,101 @@
+question_text,difficulty
+What is linear regression?,easy
+What is the primary goal of a linear regression model?,easy
+What is the difference between simple and multiple linear regression?,easy
+How do you define the dependent variable in a regression problem?,easy
+What are independent variables or features?,easy
+What does the slope represent in a simple linear regression equation?,easy
+What does the y-intercept represent in a linear model?,easy
+Can linear regression be used for classification tasks?,easy
+What is a line of best fit?,easy
+What is the difference between a parameter and a hyperparameter in this context?,easy
+What is an error term or residual?,easy
+How do you calculate a residual?,easy
+Why do we square the residuals when calculating the cost function?,easy
+What is the most common cost function used in linear regression?,easy
+What does OLS stand for?,easy
+What is the R-squared metric?,easy
+What is the range of possible values for R-squared?,easy
+Can R-squared ever be negative?,easy
+What does an R-squared of 1.0 indicate?,easy
+What does an R-squared of 0.0 indicate?,easy
+What are the four primary assumptions of linear regression?,medium
+What does the assumption of linearity mean?,medium
+How can you check if the linearity assumption holds true?,medium
+What is homoscedasticity?,medium
+What is heteroscedasticity and why is it a problem?,medium
+How do you test for homoscedasticity?,medium
+What does the assumption of independence of errors mean?,medium
+What is autocorrelation?,medium
+How can you test for autocorrelation in your residuals?,medium
+What does the normality of residuals assumption mean?,medium
+Is it necessary for the independent variables to be normally distributed?,medium
+How can you verify that your residuals are normally distributed?,medium
+What is multicollinearity?,medium
+Why is multicollinearity an issue for linear regression?,medium
+How do you detect multicollinearity?,medium
+What is the Variance Inflation Factor?,medium
+What VIF score indicates severe multicollinearity?,medium
+How do you resolve high multicollinearity in your dataset?,medium
+What is the difference between R-squared and Adjusted R-squared?,medium
+Why is Adjusted R-squared preferred when using multiple features?,medium
+What is Mean Absolute Error?,medium
+What is Mean Squared Error?,medium
+What is Root Mean Squared Error?,medium
+When would you prefer MAE over RMSE?,medium
+When would you prefer RMSE over MAE?,medium
+How do outliers affect a linear regression model?,medium
+How can you identify outliers in your dataset?,medium
+What are leverage points?,medium
+What is Cook's Distance used for?,medium
+How do you handle categorical variables in linear regression?,medium
+What is one-hot encoding?,medium
+What is the dummy variable trap?,medium
+How do you avoid the dummy variable trap?,medium
+Can you use ordinal encoding for linear regression features?,medium
+How does feature scaling affect linear regression?,medium
+Is feature scaling strictly required for Ordinary Least Squares?,medium
+When is feature scaling absolutely necessary for linear regression?,medium
+What is a baseline model in regression?,medium
+How do you interpret a negative coefficient in your model?,medium
+What does a coefficient close to zero imply?,medium
+How do you derive the normal equation?,hard
+What is the mathematical formula for the normal equation?,hard
+What is the computational complexity of the normal equation?,hard
+Why might the normal equation fail?,hard
+What is a singular matrix and how does it relate to OLS?,hard
+How does gradient descent optimize a linear regression model?,hard
+What is the learning rate in gradient descent?,hard
+What happens if your learning rate is too high?,hard
+What happens if your learning rate is too low?,hard
+What is the difference between batch and stochastic gradient descent?,hard
+What is polynomial regression?,hard
+Is polynomial regression a linear or non-linear model?,hard
+How do interaction terms work in a regression model?,hard
+What is the bias-variance tradeoff?,hard
+How does adding more features affect bias and variance?,hard
+What is overfitting in the context of linear regression?,hard
+How do you detect overfitting?,hard
+What is underfitting?,hard
+What is regularization?,hard
+How does Ridge regression differ from standard linear regression?,hard
+What penalty term does Ridge regression use?,hard
+How does Lasso regression differ from Ridge regression?,hard
+What penalty term does Lasso regression use?,hard
+Why does Lasso regression tend to produce sparse models?,hard
+What is Elastic Net regularization?,hard
+When would you choose Elastic Net over Lasso or Ridge?,hard
+How do you tune the regularization hyperparameter?,hard
+What is cross-validation and why is it used?,hard
+What happens to Ridge coefficients as the penalty term approaches infinity?,hard
+What happens to Lasso coefficients as the penalty term approaches infinity?,hard
+Can linear regression handle non-linear relationships without polynomial features?,hard
+How would you implement gradient descent for linear regression from scratch in Python?,hard
+What is the difference between a confidence interval and a prediction interval?,hard
+How do you handle missing data before training a linear model?,hard
+What is the curse of dimensionality?,hard
+How does Principal Component Analysis relate to linear regression?,hard
+What is Principal Component Regression?,hard
+What are generalized linear models?,hard
+How do you interpret the intercept when all features are mean-centered?,hard
+What would you do if you have more features than observations?,hard
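The upload router ("CSV parse + bulk insert") consumes files with exactly this two-column header. A minimal stdlib sketch of the parsing side (illustrative only; the real parser lives in `upload.router` and is not shown in this commit):

```python
import csv
import io

# Two rows in the same format as docs/question_bank/linear_regression.csv
SAMPLE = """question_text,difficulty
What is linear regression?,easy
How do you derive the normal equation?,hard
"""

def load_question_bank(text: str) -> list:
    """Parse a question-bank CSV and sanity-check the difficulty column."""
    rows = list(csv.DictReader(io.StringIO(text)))
    for row in rows:
        if row["difficulty"] not in {"easy", "medium", "hard"}:
            raise ValueError(f"bad difficulty: {row['difficulty']!r}")
    return rows

questions = load_question_bank(SAMPLE)
```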
frontend/src/api/instructor.ts ADDED
@@ -0,0 +1,24 @@
+import { apiFetch } from './client'
+import type { InstructorStudent, StudentAttempt } from '../types'
+
+export interface StudentDetail {
+  id: string
+  full_name: string
+  email: string
+  sessions: StudentAttempt[]
+}
+
+export async function getInstructorStudents(): Promise<InstructorStudent[]> {
+  const res = await apiFetch('/api/instructor/students')
+  if (!res.ok) {
+    const err = await res.json()
+    throw new Error(err.detail ?? 'Failed to load students')
+  }
+  return res.json()
+}
+
+export async function getStudentDetail(studentId: string): Promise<StudentDetail> {
+  const res = await apiFetch(`/api/instructor/students/${studentId}`)
+  if (!res.ok) throw new Error('Failed to load student')
+  return res.json()
+}
frontend/src/api/interview.ts CHANGED
@@ -1 +1,38 @@
-// Interview API calls implemented in Phase 6
+import { apiFetch } from './client'
+
+export async function startInterview(
+  topicId: string,
+): Promise<{ session_id: string; message: string }> {
+  const res = await apiFetch('/api/interview/start', {
+    method: 'POST',
+    body: JSON.stringify({ topic_id: topicId }),
+  })
+  if (!res.ok) {
+    const err = await res.json()
+    throw new Error(err.detail ?? 'Failed to start interview')
+  }
+  return res.json()
+}
+
+export async function sendTurn(
+  sessionId: string,
+  studentMessage: string,
+): Promise<{ message: string; is_counter_q: boolean; is_complete: boolean; feedback?: object }> {
+  const res = await apiFetch('/api/interview/turn', {
+    method: 'POST',
+    body: JSON.stringify({ session_id: sessionId, student_message: studentMessage }),
+  })
+  if (!res.ok) {
+    const err = await res.json()
+    throw new Error(err.detail ?? 'Failed to send answer')
+  }
+  return res.json()
+}
+
+export async function getInterviewState(
+  sessionId: string,
+): Promise<{ status: string; turn_count: number; last_message: string | null }> {
+  const res = await apiFetch(`/api/interview/state/${sessionId}`)
+  if (!res.ok) throw new Error('Failed to load session')
+  return res.json()
+}
frontend/src/api/sessions.ts CHANGED
@@ -1 +1,8 @@
-// Sessions API calls implemented in Phase 7
+import { apiFetch } from './client'
+import type { SessionReport } from '../types'
+
+export async function getSession(sessionId: string): Promise<SessionReport> {
+  const res = await apiFetch(`/api/sessions/${sessionId}`)
+  if (!res.ok) throw new Error('Failed to load report')
+  return res.json()
+}
frontend/src/api/student.ts ADDED
@@ -0,0 +1,17 @@
+import { apiFetch } from './client'
+import type { StudentTopic, StudentSession } from '../types'
+
+export async function getStudentTopics(): Promise<StudentTopic[]> {
+  const res = await apiFetch('/api/student/topics')
+  if (!res.ok) {
+    const err = await res.json()
+    throw new Error(err.detail ?? 'Failed to load topics')
+  }
+  return res.json()
+}
+
+export async function getStudentSessions(): Promise<StudentSession[]> {
+  const res = await apiFetch('/api/student/sessions')
+  if (!res.ok) throw new Error('Failed to load sessions')
+  return res.json()
+}
frontend/src/components/instructor/GapReport.tsx CHANGED
@@ -1,3 +1,41 @@
-export default function GapReport() {
-  return <div>GapReport</div>
+import type { StudentAttempt } from '../../types'
+
+interface Props {
+  sessions: StudentAttempt[]
+}
+
+export default function GapReport({ sessions }: Props) {
+  const completed = sessions.filter((s) => s.status === 'completed' && s.score != null)
+
+  if (completed.length === 0) {
+    return null
+  }
+
+  // Find topics with low scores (below 60)
+  const weakTopics = completed
+    .filter((s) => (s.score ?? 100) < 60)
+    .sort((a, b) => (a.score ?? 0) - (b.score ?? 0))
+
+  if (weakTopics.length === 0) {
+    return (
+      <div className="bg-green-950/30 border border-green-800/50 rounded-xl px-5 py-4">
+        <p className="text-green-400 text-sm font-medium">No significant gaps detected.</p>
+        <p className="text-gray-500 text-xs mt-1">All topics scored above 60.</p>
+      </div>
+    )
+  }
+
+  return (
+    <div className="bg-gray-900 border border-gray-800 rounded-xl p-5">
+      <h3 className="text-sm font-semibold text-red-400 mb-3">Knowledge Gaps</h3>
+      <div className="space-y-2">
+        {weakTopics.map((s) => (
+          <div key={s.id} className="flex items-center justify-between">
+            <p className="text-sm text-gray-300">{s.topic_name}</p>
+            <span className="text-sm font-semibold text-red-400">{s.score}/100</span>
+          </div>
+        ))}
+      </div>
+    </div>
+  )
 }
frontend/src/components/instructor/StudentRow.tsx CHANGED
@@ -1,3 +1,46 @@
-export default function StudentRow() {
-  return <div>StudentRow</div>
+import { useNavigate } from 'react-router-dom'
+import type { InstructorStudent } from '../../types'
+
+interface Props {
+  student: InstructorStudent
+}
+
+export default function StudentRow({ student }: Props) {
+  const navigate = useNavigate()
+
+  const scoreColor =
+    student.avg_score == null
+      ? 'text-gray-500'
+      : student.avg_score >= 70
+        ? 'text-green-400'
+        : student.avg_score >= 40
+          ? 'text-yellow-400'
+          : 'text-red-400'
+
+  return (
+    <div className="flex items-center justify-between px-5 py-4 bg-gray-800/50 rounded-lg">
+      <div>
+        <p className="text-white font-medium">{student.full_name}</p>
+        <p className="text-gray-500 text-xs mt-0.5">{student.email}</p>
+      </div>
+      <div className="flex items-center gap-6">
+        <div className="text-center">
+          <p className="text-white font-semibold">{student.completed_sessions}</p>
+          <p className="text-gray-500 text-xs">Interviews</p>
+        </div>
+        <div className="text-center">
+          <p className={`font-semibold ${scoreColor}`}>
+            {student.avg_score ?? '—'}
+          </p>
+          <p className="text-gray-500 text-xs">Avg score</p>
+        </div>
+        <button
+          onClick={() => navigate(`/instructor/students/${student.id}`)}
+          className="text-sm text-indigo-400 hover:text-indigo-300 transition-colors"
+        >
+          Detail →
+        </button>
+      </div>
+    </div>
+  )
 }
frontend/src/components/interview/ChatWindow.tsx CHANGED
@@ -1,3 +1,34 @@
-export default function ChatWindow() {
-  return <div>ChatWindow</div>
+import { useEffect, useRef } from 'react'
+import type { Message } from '../../types'
+import MessageBubble from './MessageBubble'
+
+interface Props {
+  messages: Message[]
+  loading: boolean
+}
+
+export default function ChatWindow({ messages, loading }: Props) {
+  const bottomRef = useRef<HTMLDivElement>(null)
+
+  useEffect(() => {
+    bottomRef.current?.scrollIntoView({ behavior: 'smooth' })
+  }, [messages, loading])
+
+  return (
+    <div className="flex-1 overflow-y-auto px-4 py-4 space-y-3">
+      {messages.map((msg, i) => (
+        <MessageBubble key={i} message={msg} />
+      ))}
+      {loading && (
+        <div className="flex justify-start">
+          <div className="bg-gray-800 rounded-2xl rounded-tl-sm px-4 py-3 flex gap-1 items-center">
+            <span className="w-2 h-2 bg-gray-400 rounded-full animate-bounce [animation-delay:0ms]" />
+            <span className="w-2 h-2 bg-gray-400 rounded-full animate-bounce [animation-delay:150ms]" />
+            <span className="w-2 h-2 bg-gray-400 rounded-full animate-bounce [animation-delay:300ms]" />
+          </div>
+        </div>
+      )}
+      <div ref={bottomRef} />
+    </div>
+  )
 }
frontend/src/components/interview/MessageBubble.tsx CHANGED
@@ -1,3 +1,23 @@
-export default function MessageBubble() {
-  return <div>MessageBubble</div>
+import type { Message } from '../../types'
+
+interface Props {
+  message: Message
+}
+
+export default function MessageBubble({ message }: Props) {
+  const isAI = message.role === 'ai'
+
+  return (
+    <div className={`flex ${isAI ? 'justify-start' : 'justify-end'}`}>
+      <div
+        className={`max-w-[75%] rounded-2xl px-4 py-3 text-sm leading-relaxed whitespace-pre-wrap ${
+          isAI
+            ? 'bg-gray-800 text-white rounded-tl-sm'
+            : 'bg-indigo-600 text-white rounded-tr-sm'
+        }`}
+      >
+        {message.content}
+      </div>
+    </div>
+  )
 }
frontend/src/components/interview/ProgressBar.tsx CHANGED
@@ -1,3 +1,23 @@
-export default function ProgressBar() {
-  return <div>ProgressBar</div>
+interface Props {
+  current: number
+  max: number
+}
+
+export default function ProgressBar({ current, max }: Props) {
+  const pct = Math.min((current / max) * 100, 100)
+
+  return (
+    <div className="px-4 py-2 border-b border-gray-800">
+      <div className="flex items-center justify-between text-xs text-gray-400 mb-1">
+        <span>Progress</span>
+        <span>Turn {current} of {max}</span>
+      </div>
+      <div className="h-1.5 bg-gray-800 rounded-full overflow-hidden">
+        <div
+          className="h-full bg-indigo-500 rounded-full transition-all duration-500"
+          style={{ width: `${pct}%` }}
+        />
+      </div>
+    </div>
+  )
 }
frontend/src/components/interview/TypeInput.tsx CHANGED
@@ -1,3 +1,58 @@
-export default function TypeInput() {
-  return <div>TypeInput</div>
+import { useState, useRef, type KeyboardEvent } from 'react'
+
+interface Props {
+  onSend: (text: string) => void
+  disabled: boolean
+}
+
+export default function TypeInput({ onSend, disabled }: Props) {
+  const [text, setText] = useState('')
+  const textareaRef = useRef<HTMLTextAreaElement>(null)
+
+  function handleKeyDown(e: KeyboardEvent<HTMLTextAreaElement>) {
+    if (e.key === 'Enter' && !e.shiftKey) {
+      e.preventDefault()
+      submit()
+    }
+  }
+
+  function submit() {
+    const trimmed = text.trim()
+    if (!trimmed || disabled) return
+    onSend(trimmed)
+    setText('')
+    if (textareaRef.current) {
+      textareaRef.current.style.height = 'auto'
+    }
+  }
+
+  function handleInput() {
+    const el = textareaRef.current
+    if (!el) return
+    el.style.height = 'auto'
+    el.style.height = `${Math.min(el.scrollHeight, 120)}px`
+  }
+
+  return (
+    <div className="border-t border-gray-800 px-4 py-3 flex gap-3 items-end">
+      <textarea
+        ref={textareaRef}
+        rows={1}
+        value={text}
+        onChange={(e) => setText(e.target.value)}
+        onKeyDown={handleKeyDown}
+        onInput={handleInput}
+        disabled={disabled}
+        placeholder="Type your answer… (Enter to send, Shift+Enter for newline)"
+        className="flex-1 bg-gray-800 border border-gray-700 rounded-xl px-4 py-2 text-white placeholder-gray-500 text-sm resize-none focus:outline-none focus:border-indigo-500 disabled:opacity-50"
+      />
+      <button
+        onClick={submit}
+        disabled={disabled || !text.trim()}
+        className="bg-indigo-600 hover:bg-indigo-500 disabled:opacity-40 text-white rounded-xl px-4 py-2 text-sm font-medium transition-colors shrink-0"
+      >
+        Send
+      </button>
+    </div>
+  )
 }
frontend/src/components/report/FeedbackCard.tsx CHANGED
@@ -1,3 +1,30 @@
-export default function FeedbackCard() {
-  return <div>FeedbackCard</div>
+interface Props {
+  title: string
+  items: string[]
+  variant: 'positive' | 'negative'
+}
+
+export default function FeedbackCard({ title, items, variant }: Props) {
+  const borderColor = variant === 'positive' ? 'border-green-500' : 'border-red-500'
+  const titleColor = variant === 'positive' ? 'text-green-400' : 'text-red-400'
+
+  return (
+    <div className={`bg-gray-900 border border-gray-800 border-l-4 ${borderColor} rounded-xl p-4`}>
+      <h3 className={`text-sm font-semibold mb-3 ${titleColor}`}>{title}</h3>
+      {items.length === 0 ? (
+        <p className="text-gray-500 text-sm">None noted.</p>
+      ) : (
+        <ul className="space-y-1.5">
+          {items.map((item, i) => (
+            <li key={i} className="text-sm text-gray-300 flex gap-2">
+              <span className={`mt-0.5 shrink-0 ${variant === 'positive' ? 'text-green-500' : 'text-red-500'}`}>
+                {variant === 'positive' ? '✓' : '✗'}
+              </span>
+              {item}
+            </li>
+          ))}
+        </ul>
+      )}
+    </div>
+  )
 }
frontend/src/components/report/ScoreRing.tsx CHANGED
@@ -1,3 +1,36 @@
-export default function ScoreRing() {
-  return <div>ScoreRing</div>
+interface Props {
+  score: number
+}
+
+export default function ScoreRing({ score }: Props) {
+  const radius = 54
+  const circumference = 2 * Math.PI * radius
+  const filled = circumference - (score / 100) * circumference
+
+  const color =
+    score >= 70 ? '#22c55e' : score >= 40 ? '#eab308' : '#ef4444'
+
+  return (
+    <div className="relative inline-flex items-center justify-center">
+      <svg width="140" height="140" className="-rotate-90">
+        <circle cx="70" cy="70" r={radius} fill="none" stroke="#1f2937" strokeWidth="10" />
+        <circle
+          cx="70"
+          cy="70"
+          r={radius}
+          fill="none"
+          stroke={color}
+          strokeWidth="10"
+          strokeDasharray={circumference}
+          strokeDashoffset={filled}
+          strokeLinecap="round"
+          className="transition-all duration-700"
+        />
+      </svg>
+      <div className="absolute flex flex-col items-center rotate-0">
+        <span className="text-3xl font-bold text-white">{score}</span>
+        <span className="text-xs text-gray-400">/ 100</span>
+      </div>
+    </div>
+  )
 }
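The ring above uses the standard stroke-dasharray trick: the dash length equals the full circumference, and the offset hides the unfilled fraction. The same arithmetic, checked numerically (a sketch mirroring the component's constants, not code from the repo):

```python
import math

RADIUS = 54                           # matches ScoreRing.tsx
CIRCUMFERENCE = 2 * math.pi * RADIUS  # full arc length of the ring (~339.3)

def dash_offset(score: float) -> float:
    """Length of the hidden (unfilled) arc for a 0-100 score,
    mirroring the component's strokeDashoffset expression."""
    return CIRCUMFERENCE - (score / 100) * CIRCUMFERENCE
```

A score of 100 gives offset 0 (ring fully drawn); a score of 0 gives the full circumference (ring empty).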
frontend/src/components/report/SummaryBlock.tsx CHANGED
@@ -1,3 +1,12 @@
-export default function SummaryBlock() {
-  return <div>SummaryBlock</div>
+interface Props {
+  summary: string
+}
+
+export default function SummaryBlock({ summary }: Props) {
+  return (
+    <div className="bg-gray-900 border border-gray-800 rounded-xl p-4">
+      <h3 className="text-sm font-semibold text-gray-400 mb-2">Summary</h3>
+      <p className="text-sm text-gray-200 leading-relaxed">{summary}</p>
+    </div>
+  )
 }
frontend/src/components/student/AttemptRow.tsx CHANGED
@@ -1,3 +1,51 @@
-export default function AttemptRow() {
-  return <div>AttemptRow</div>
+import { useNavigate } from 'react-router-dom'
+import type { StudentSession } from '../../types'
+
+interface Props {
+  session: StudentSession
+}
+
+function formatDate(iso: string | null): string {
+  if (!iso) return '—'
+  return new Date(iso).toLocaleDateString(undefined, { month: 'short', day: 'numeric', year: 'numeric' })
+}
+
+export default function AttemptRow({ session }: Props) {
+  const navigate = useNavigate()
+  const isCompleted = session.status === 'completed'
+
+  const scoreColor =
+    session.score == null
+      ? 'text-gray-500'
+      : session.score >= 70
+        ? 'text-green-400'
+        : session.score >= 40
+          ? 'text-yellow-400'
+          : 'text-red-400'
+
+  return (
+    <div className="flex items-center justify-between bg-gray-900 border border-gray-800 rounded-xl px-5 py-4">
+      <div>
+        <p className="text-white font-medium">{session.topic_name}</p>
+        <p className="text-gray-500 text-xs mt-0.5">{formatDate(session.started_at)}</p>
+      </div>
+      <div className="flex items-center gap-4">
+        <span className={`text-lg font-semibold ${scoreColor}`}>
+          {isCompleted && session.score != null ? session.score : '—'}
+        </span>
+        {isCompleted ? (
+          <button
+            onClick={() => navigate(`/student/report/${session.id}`)}
+            className="text-sm text-indigo-400 hover:text-indigo-300 transition-colors"
+          >
+            Report
+          </button>
+        ) : (
+          <span className="text-xs text-yellow-500 bg-yellow-900/30 px-2 py-1 rounded-full">
+            In progress
+          </span>
+        )}
+      </div>
+    </div>
+  )
 }
frontend/src/components/student/TopicChip.tsx CHANGED
@@ -1,3 +1,23 @@
-export default function TopicChip() {
-  return <div>TopicChip</div>
+import type { StudentTopic } from '../../types'
+
+interface Props {
+  topic: StudentTopic
+  onStart: (topic: StudentTopic) => void
+}
+
+export default function TopicChip({ topic, onStart }: Props) {
+  return (
+    <div className="bg-gray-900 border border-gray-800 rounded-xl px-5 py-4 flex items-center justify-between">
+      <div>
+        <p className="text-white font-medium">{topic.name}</p>
+        <p className="text-gray-500 text-xs mt-0.5">Topic {topic.order_index + 1}</p>
+      </div>
+      <button
+        onClick={() => onStart(topic)}
+        className="bg-indigo-600 hover:bg-indigo-500 text-white text-sm font-medium px-4 py-1.5 rounded-lg transition-colors"
+      >
+        Start
+      </button>
+    </div>
+  )
 }
frontend/src/pages/InstructorDashboard.tsx CHANGED
@@ -2,15 +2,19 @@ import { useState, useEffect, type FormEvent } from 'react'
 import { useNavigate } from 'react-router-dom'
 import Navbar from '../components/shared/Navbar'
 import StatCard from '../components/instructor/StatCard'
+import StudentRow from '../components/instructor/StudentRow'
 import {
   getMyBatch, createBatch, getTopics, createTopic, setTopicUnlock,
   type Batch, type Topic,
 } from '../api/topics'
+import { getInstructorStudents } from '../api/instructor'
+import type { InstructorStudent } from '../types'
 
 export default function InstructorDashboard() {
   const navigate = useNavigate()
   const [batch, setBatch] = useState<Batch | null>(null)
   const [topics, setTopics] = useState<Topic[]>([])
+  const [students, setStudents] = useState<InstructorStudent[]>([])
   const [batchName, setBatchName] = useState('')
   const [newTopicName, setNewTopicName] = useState('')
   const [loading, setLoading] = useState(true)
@@ -20,8 +24,9 @@ export default function InstructorDashboard() {
     getMyBatch()
       .then(async (b) => {
         setBatch(b)
-        const t = await getTopics(b.id)
+        const [t, s] = await Promise.all([getTopics(b.id), getInstructorStudents()])
         setTopics(t)
+        setStudents(s)
       })
       .catch(() => { /* 404 = no batch yet */ })
       .finally(() => setLoading(false))
@@ -114,12 +119,13 @@ export default function InstructorDashboard() {
           </div>
 
           {/* ── Stats row ── */}
-          <div className="grid grid-cols-2 gap-4 mb-6">
+          <div className="grid grid-cols-3 gap-4 mb-6">
            <StatCard label="Topics" value={topics.length} />
            <StatCard
              label="Questions"
              value={topics.reduce((sum, t) => sum + t.question_count, 0)}
            />
+           <StatCard label="Students" value={students.length} />
          </div>
 
          {/* ── Topic list ── */}
@@ -184,6 +190,18 @@ export default function InstructorDashboard() {
           </form>
           {error && <p className="text-red-400 text-sm mt-2">{error}</p>}
          </div>
+
+         {/* ── Students analytics ── */}
+         {students.length > 0 && (
+           <div className="bg-gray-900 border border-gray-800 rounded-xl p-6 mt-6">
+             <h2 className="text-lg font-semibold text-white mb-4">Students</h2>
+             <div className="space-y-2">
+               {students.map((s) => (
+                 <StudentRow key={s.id} student={s} />
+               ))}
+             </div>
+           </div>
+         )}
        </>
      )}
    </div>
frontend/src/pages/Interview.tsx CHANGED
@@ -1,3 +1,131 @@
+import { useEffect, useState } from 'react'
+import { useParams, useLocation, useNavigate } from 'react-router-dom'
+import { useInterviewStore } from '../store/interviewStore'
+import { startInterview, sendTurn, getInterviewState } from '../api/interview'
+import ChatWindow from '../components/interview/ChatWindow'
+import TypeInput from '../components/interview/TypeInput'
+import ProgressBar from '../components/interview/ProgressBar'
+import Navbar from '../components/shared/Navbar'
+
 export default function Interview() {
-  return <div>Interview</div>
+  const { sessionId } = useParams<{ sessionId?: string }>()
+  const location = useLocation()
+  const navigate = useNavigate()
+
+  const { messages, turnCount, maxTurns, status, startSession, addMessage, setTurnCount, setStatus } =
+    useInterviewStore()
+
+  const [error, setError] = useState('')
+
+  // On mount: resume existing session or start new one
+  useEffect(() => {
+    if (sessionId && status === 'idle') {
+      resumeSession(sessionId)
+    } else if (!sessionId && status === 'idle') {
+      // topicId must be passed via location.state from StudentDashboard
+      const topicId: string | undefined = (location.state as { topicId?: string })?.topicId
+      if (topicId) {
+        handleStart(topicId)
+      }
+    }
+    // eslint-disable-next-line react-hooks/exhaustive-deps
+  }, [])
+
+  async function resumeSession(id: string) {
+    setStatus('loading')
+    try {
+      const state = await getInterviewState(id)
+      // Restore last AI message so the student sees where they left off
+      if (state.last_message) {
+        startSession(id, '', state.last_message)
+      }
+      setTurnCount(state.turn_count)
+      setStatus(state.status === 'complete' ? 'finished' : 'waiting')
+    } catch {
+      setError('Could not resume session.')
+      setStatus('idle')
+    }
+  }
+
+  async function handleStart(topicId: string) {
+    setStatus('loading')
+    setError('')
+    try {
+      const data = await startInterview(topicId)
+      startSession(data.session_id, topicId, data.message)
+      // Update URL to include session id without triggering re-mount
+      navigate(`/student/interview/${data.session_id}`, { replace: true, state: location.state })
+    } catch {
+      setError('Failed to start interview. Please try again.')
+      setStatus('idle')
+    }
+  }
+
+  async function handleSend(answer: string) {
+    if (!useInterviewStore.getState().sessionId) return
+    const currentSessionId = useInterviewStore.getState().sessionId!
+
+    addMessage({ role: 'student', content: answer })
+    setStatus('loading')
+    setError('')
+
+    try {
+      const data = await sendTurn(currentSessionId, answer)
+      addMessage({ role: 'ai', content: data.message })
+
+      if (data.is_complete) {
+        setStatus('finished')
+      } else {
+        setStatus('waiting')
+      }
+    } catch {
+      setError('Failed to send answer. Please try again.')
+      setStatus('waiting')
+    }
+  }
+
+  const isLoading = status === 'loading'
+  const isFinished = status === 'finished'
+  const currentSessionId = useInterviewStore.getState().sessionId
+
+  return (
+    <div className="min-h-screen bg-gray-950 flex flex-col">
+      <Navbar />
+
+      <div className="flex-1 flex flex-col max-w-3xl mx-auto w-full">
+        <ProgressBar current={turnCount} max={maxTurns} />
+
+        {/* No topic selected and no session — prompt user */}
+        {status === 'idle' && !sessionId && (
+          <div className="flex-1 flex items-center justify-center text-gray-500">
+            Select a topic from the dashboard to start.
+          </div>
+        )}
+
+        {status !== 'idle' && (
+          <ChatWindow messages={messages} loading={isLoading} />
+        )}
+
+        {error && (
+          <p className="text-red-400 text-sm text-center py-2">{error}</p>
+        )}
+
+        {isFinished ? (
+          <div className="border-t border-gray-800 px-4 py-4 text-center">
+            <p className="text-gray-400 text-sm mb-3">Interview complete!</p>
+            <button
+              onClick={() => navigate(`/student/report/${currentSessionId}`)}
+              className="bg-indigo-600 hover:bg-indigo-500 text-white rounded-xl px-6 py-2 text-sm font-medium transition-colors"
+            >
+              View Report
+            </button>
+          </div>
+        ) : (
+          status !== 'idle' && (
+            <TypeInput onSend={handleSend} disabled={isLoading} />
+          )
+        )}
+      </div>
+    </div>
+  )
 }
frontend/src/pages/Report.tsx CHANGED
@@ -1,3 +1,98 @@
+import { useEffect, useState } from 'react'
+import { useParams, useNavigate } from 'react-router-dom'
+import { getSession } from '../api/sessions'
+import type { SessionReport } from '../types'
+import ScoreRing from '../components/report/ScoreRing'
+import FeedbackCard from '../components/report/FeedbackCard'
+import SummaryBlock from '../components/report/SummaryBlock'
+import Navbar from '../components/shared/Navbar'
+
 export default function Report() {
-  return <div>Report</div>
+  const { sessionId } = useParams<{ sessionId: string }>()
+  const navigate = useNavigate()
+
+  const [report, setReport] = useState<SessionReport | null>(null)
+  const [loading, setLoading] = useState(true)
+  const [error, setError] = useState('')
+
+  useEffect(() => {
+    if (!sessionId) return
+    getSession(sessionId)
+      .then(setReport)
+      .catch(() => setError('Could not load report.'))
+      .finally(() => setLoading(false))
+  }, [sessionId])
+
+  return (
+    <div className="min-h-screen bg-gray-950">
+      <Navbar />
+
+      <div className="max-w-2xl mx-auto px-4 py-8 space-y-6">
+        {loading && (
+          <div className="space-y-4">
+            {[...Array(4)].map((_, i) => (
+              <div key={i} className="h-24 bg-gray-900 rounded-xl animate-pulse" />
+            ))}
+          </div>
+        )}
+
+        {error && (
+          <div className="bg-gray-900 border border-gray-800 rounded-xl p-6 text-center text-red-400 text-sm">
+            {error}
+          </div>
+        )}
+
+        {report && (
+          <>
+            {/* Header */}
+            <div className="bg-gray-900 border border-gray-800 rounded-xl p-6 flex items-center justify-between">
+              <div>
+                <p className="text-xs text-gray-500 uppercase tracking-wide mb-1">Interview Report</p>
+                <h1 className="text-xl font-semibold text-white">Topic {report.topic_id}</h1>
+              </div>
+              <ScoreRing score={report.score} />
+            </div>
+
+            {/* Summary */}
+            <SummaryBlock summary={report.feedback.summary} />
+
+            {/* Feedback cards */}
+            <div className="grid grid-cols-1 sm:grid-cols-2 gap-4">
+              <FeedbackCard
+                title="Strengths"
+                items={report.feedback.tips}
+                variant="positive"
+              />
+              <FeedbackCard
+                title="Areas to Improve"
+                items={report.feedback.mistakes}
+                variant="negative"
+              />
+            </div>
+
+            {/* Sub-scores */}
+            <div className="bg-gray-900 border border-gray-800 rounded-xl p-4 flex gap-6">
+              <div className="flex-1 text-center">
+                <p className="text-2xl font-bold text-white">{report.feedback.concept_score}</p>
+                <p className="text-xs text-gray-500 mt-1">Concept Score</p>
+              </div>
+              <div className="w-px bg-gray-800" />
+              <div className="flex-1 text-center">
+                <p className="text-2xl font-bold text-white">{report.feedback.depth_score}</p>
+                <p className="text-xs text-gray-500 mt-1">Depth Score</p>
+              </div>
+            </div>
+
+            {/* Back */}
+            <button
+              onClick={() => navigate('/student/dashboard')}
+              className="w-full bg-gray-800 hover:bg-gray-700 text-white rounded-xl px-4 py-3 text-sm font-medium transition-colors"
+            >
+              Back to Dashboard
+            </button>
+          </>
+        )}
+      </div>
+    </div>
+  )
 }
frontend/src/pages/StudentDashboard.tsx CHANGED
@@ -1,3 +1,108 @@
+import { useEffect, useState } from 'react'
+import { useNavigate } from 'react-router-dom'
+import { useInterviewStore } from '../store/interviewStore'
+import { getStudentTopics, getStudentSessions } from '../api/student'
+import type { StudentTopic, StudentSession } from '../types'
+import TopicChip from '../components/student/TopicChip'
+import AttemptRow from '../components/student/AttemptRow'
+import Navbar from '../components/shared/Navbar'
+
 export default function StudentDashboard() {
-  return <div>StudentDashboard</div>
+  const navigate = useNavigate()
+  const resetInterview = useInterviewStore((s) => s.reset)
+
+  const [topics, setTopics] = useState<StudentTopic[]>([])
+  const [sessions, setSessions] = useState<StudentSession[]>([])
+  const [loading, setLoading] = useState(true)
+  const [error, setError] = useState('')
+
+  useEffect(() => {
+    Promise.all([getStudentTopics(), getStudentSessions()])
+      .then(([t, s]) => {
+        setTopics(t)
+        setSessions(s)
+      })
+      .catch(() => setError('Failed to load dashboard.'))
+      .finally(() => setLoading(false))
+  }, [])
+
+  function handleStartTopic(topic: StudentTopic) {
+    resetInterview()
+    navigate('/student/interview', { state: { topicId: topic.id } })
+  }
+
+  const completedSessions = sessions.filter((s) => s.status === 'completed')
+  const avgScore =
+    completedSessions.length > 0
+      ? Math.round(
+          completedSessions.reduce((sum, s) => sum + (s.score ?? 0), 0) /
+            completedSessions.length,
+        )
+      : null
+
+  return (
+    <div className="min-h-screen bg-gray-950">
+      <Navbar />
+
+      <div className="max-w-2xl mx-auto px-4 py-8 space-y-8">
+        {/* Stats */}
+        <div className="grid grid-cols-3 gap-4">
+          <div className="bg-gray-900 border border-gray-800 rounded-xl px-4 py-3 text-center">
+            <p className="text-2xl font-bold text-white">{topics.length}</p>
+            <p className="text-xs text-gray-400 mt-1">Available Topics</p>
+          </div>
+          <div className="bg-gray-900 border border-gray-800 rounded-xl px-4 py-3 text-center">
+            <p className="text-2xl font-bold text-white">{completedSessions.length}</p>
+            <p className="text-xs text-gray-400 mt-1">Completed</p>
+          </div>
+          <div className="bg-gray-900 border border-gray-800 rounded-xl px-4 py-3 text-center">
+            <p className="text-2xl font-bold text-white">{avgScore ?? '—'}</p>
+            <p className="text-xs text-gray-400 mt-1">Avg Score</p>
+          </div>
+        </div>
+
+        {error && <p className="text-red-400 text-sm text-center">{error}</p>}
+
+        {loading ? (
+          <div className="space-y-3">
+            {[...Array(3)].map((_, i) => (
+              <div key={i} className="h-16 bg-gray-900 rounded-xl animate-pulse" />
+            ))}
+          </div>
+        ) : (
+          <>
+            {/* Available topics */}
+            <section>
+              <h2 className="text-sm font-semibold text-gray-400 uppercase tracking-wide mb-3">
+                Available Topics
+              </h2>
+              {topics.length === 0 ? (
+                <p className="text-gray-500 text-sm">No topics unlocked yet. Check back later.</p>
+              ) : (
+                <div className="space-y-3">
+                  {topics.map((t) => (
+                    <TopicChip key={t.id} topic={t} onStart={handleStartTopic} />
+                  ))}
+                </div>
+              )}
+            </section>
+
+            {/* Past attempts */}
+            {sessions.length > 0 && (
+              <section>
+                <h2 className="text-sm font-semibold text-gray-400 uppercase tracking-wide mb-3">
+                  Past Attempts
+                </h2>
+                <div className="space-y-3">
+                  {sessions.map((s) => (
+                    <AttemptRow key={s.id} session={s} />
+                  ))}
+                </div>
+              </section>
+            )}
+          </>
+        )}
+      </div>
+    </div>
+  )
 }
frontend/src/pages/StudentDetail.tsx CHANGED
@@ -1,3 +1,125 @@
+import { useEffect, useState } from 'react'
+import { useParams, useNavigate } from 'react-router-dom'
+import { getStudentDetail, type StudentDetail } from '../api/instructor'
+import GapReport from '../components/instructor/GapReport'
+import Navbar from '../components/shared/Navbar'
+
+function formatDate(iso: string | null): string {
+  if (!iso) return '—'
+  return new Date(iso).toLocaleDateString(undefined, { month: 'short', day: 'numeric', year: 'numeric' })
+}
+
 export default function StudentDetail() {
-  return <div>StudentDetail</div>
+  const { studentId } = useParams<{ studentId: string }>()
+  const navigate = useNavigate()
+
+  const [detail, setDetail] = useState<StudentDetail | null>(null)
+  const [loading, setLoading] = useState(true)
+  const [error, setError] = useState('')
+
+  useEffect(() => {
+    if (!studentId) return
+    getStudentDetail(studentId)
+      .then(setDetail)
+      .catch(() => setError('Could not load student detail.'))
+      .finally(() => setLoading(false))
+  }, [studentId])
+
+  const completedSessions = detail?.sessions.filter((s) => s.status === 'completed') ?? []
+  const avgScore =
+    completedSessions.length > 0
+      ? Math.round(completedSessions.reduce((sum, s) => sum + (s.score ?? 0), 0) / completedSessions.length)
+      : null
+
+  return (
+    <div className="min-h-screen bg-gray-950">
+      <Navbar />
+
+      <div className="max-w-2xl mx-auto px-4 py-8 space-y-6">
+        <button
+          onClick={() => navigate('/instructor/dashboard')}
+          className="text-sm text-gray-400 hover:text-white transition-colors"
+        >
+          ← Back to dashboard
+        </button>
+
+        {loading && (
+          <div className="space-y-3">
+            {[...Array(4)].map((_, i) => (
+              <div key={i} className="h-16 bg-gray-900 rounded-xl animate-pulse" />
+            ))}
+          </div>
+        )}
+
+        {error && (
+          <p className="text-red-400 text-sm text-center">{error}</p>
+        )}
+
+        {detail && (
+          <>
+            {/* Header */}
+            <div className="bg-gray-900 border border-gray-800 rounded-xl p-6 flex items-center justify-between">
+              <div>
+                <h1 className="text-xl font-semibold text-white">{detail.full_name}</h1>
+                <p className="text-gray-500 text-sm mt-0.5">{detail.email}</p>
+              </div>
+              <div className="text-right">
+                <p className="text-2xl font-bold text-white">{avgScore ?? '—'}</p>
+                <p className="text-xs text-gray-500">Avg score</p>
+              </div>
+            </div>
+
+            {/* Gap report */}
+            <GapReport sessions={detail.sessions} />
+
+            {/* Session history */}
+            <div className="bg-gray-900 border border-gray-800 rounded-xl p-6">
+              <h2 className="text-sm font-semibold text-gray-400 uppercase tracking-wide mb-4">
+                Interview History
+              </h2>
+              {detail.sessions.length === 0 ? (
+                <p className="text-gray-500 text-sm">No interviews yet.</p>
+              ) : (
+                <div className="divide-y divide-gray-800">
+                  {detail.sessions.map((s) => {
+                    const scoreColor =
+                      s.score == null
+                        ? 'text-gray-500'
+                        : s.score >= 70
+                          ? 'text-green-400'
+                          : s.score >= 40
+                            ? 'text-yellow-400'
+                            : 'text-red-400'
+                    return (
+                      <div key={s.id} className="flex items-center justify-between py-3">
+                        <div>
+                          <p className="text-white text-sm font-medium">{s.topic_name}</p>
+                          <p className="text-gray-500 text-xs mt-0.5">{formatDate(s.started_at)}</p>
+                        </div>
+                        <div className="flex items-center gap-4">
+                          <span className={`font-semibold ${scoreColor}`}>
+                            {s.status === 'completed' && s.score != null ? s.score : '—'}
+                          </span>
+                          {s.status === 'completed' ? (
+                            <button
+                              onClick={() => navigate(`/student/report/${s.id}`)}
+                              className="text-xs text-indigo-400 hover:text-indigo-300 transition-colors"
+                            >
+                              Report
+                            </button>
+                          ) : (
+                            <span className="text-xs text-yellow-500">Active</span>
+                          )}
+                        </div>
+                      </div>
+                    )
+                  })}
+                </div>
+              )}
+            </div>
+          </>
+        )}
+      </div>
+    </div>
+  )
 }
frontend/src/store/interviewStore.ts CHANGED
@@ -1 +1,50 @@
-// Zustand interview store implemented in Phase 6
+import { create } from 'zustand'
+import type { Message } from '../types'
+
+interface InterviewState {
+  sessionId: string | null
+  topicId: string | null
+  messages: Message[]
+  turnCount: number
+  maxTurns: number
+  status: 'idle' | 'loading' | 'waiting' | 'finished'
+}
+
+interface InterviewActions {
+  startSession: (sessionId: string, topicId: string, firstMessage: string) => void
+  addMessage: (message: Message) => void
+  setTurnCount: (count: number) => void
+  setStatus: (status: InterviewState['status']) => void
+  reset: () => void
+}
+
+const initialState: InterviewState = {
+  sessionId: null,
+  topicId: null,
+  messages: [],
+  turnCount: 0,
+  maxTurns: 8,
+  status: 'idle',
+}
+
+export const useInterviewStore = create<InterviewState & InterviewActions>((set) => ({
+  ...initialState,
+
+  startSession: (sessionId, topicId, firstMessage) =>
+    set({
+      sessionId,
+      topicId,
+      messages: [{ role: 'ai', content: firstMessage }],
+      turnCount: 0,
+      status: 'waiting',
+    }),
+
+  addMessage: (message) =>
+    set((state) => ({ messages: [...state.messages, message] })),
+
+  setTurnCount: (count) => set({ turnCount: count }),
+
+  setStatus: (status) => set({ status }),
+
+  reset: () => set(initialState),
+}))
frontend/src/types/index.ts CHANGED
@@ -23,3 +23,55 @@ export interface Feedback {
   mistakes: string[]
   tips: string[]
 }
+
+export interface InterviewSession {
+  status: string
+  turn_count: number
+  last_message: string | null
+}
+
+export interface SessionReport {
+  id: string
+  topic_id: string
+  status: string
+  score: number
+  feedback: Feedback
+  messages: Message[]
+}
+
+// Student dashboard types
+export interface StudentTopic {
+  id: string
+  name: string
+  order_index: number
+}
+
+export interface StudentSession {
+  id: string
+  topic_id: string
+  topic_name: string
+  status: 'active' | 'completed'
+  score: number | null
+  started_at: string | null
+  completed_at: string | null
+}
+
+// Instructor analytics types
+export interface InstructorStudent {
+  id: string
+  full_name: string
+  email: string
+  total_sessions: number
+  completed_sessions: number
+  avg_score: number | null
+}
+
+export interface StudentAttempt {
+  id: string
+  topic_id: string
+  topic_name: string
+  status: 'active' | 'completed'
+  score: number | null
+  started_at: string | null
+  completed_at: string | null
+}
pyproject.toml CHANGED
@@ -11,6 +11,7 @@ dependencies = [
     "httpx==0.27.2",
     "langgraph==0.2.50",
     "langgraph-checkpoint-postgres==2.0.8",
+    "psycopg[binary]>=3.3.3",
     "pydantic==2.8.2",
     "python-dotenv==1.0.1",
     "python-jose[cryptography]==3.3.0",
run.py ADDED
@@ -0,0 +1,11 @@
+"""Entry point for local development — sets Windows event loop policy before uvicorn."""
+import asyncio
+import sys
+
+if sys.platform == "win32":
+    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
+
+import uvicorn
+
+if __name__ == "__main__":
+    uvicorn.run("backend.main:app", host="0.0.0.0", port=7860)
tests/test_prompts.py ADDED
@@ -0,0 +1,75 @@
+from backend.prompts import (
+    build_ask_question_prompt,
+    build_counter_prompt,
+    build_evaluate_prompt,
+    build_report_prompt,
+    build_summarize_prompt,
+)
+
+
+def _is_message_list(result) -> bool:
+    return (
+        isinstance(result, list)
+        and all(isinstance(m, dict) and "role" in m and "content" in m for m in result)
+    )
+
+
+def test_ask_question_prompt_returns_messages():
+    result = build_ask_question_prompt("Python", "Some summary", ["What is a list?"])
+    assert _is_message_list(result)
+    assert len(result) == 2
+    combined = " ".join(m["content"] for m in result)
+    assert "Python" in combined
+    assert "Some summary" in combined
+    assert "What is a list?" in combined
+
+
+def test_ask_question_prompt_no_prior_questions():
+    result = build_ask_question_prompt("Python", "", [])
+    assert _is_message_list(result)
+    combined = " ".join(m["content"] for m in result)
+    assert "Python" in combined
+
+
+def test_evaluate_prompt_returns_messages():
+    result = build_evaluate_prompt("What is GIL?", "Global Interpreter Lock", "summary")
+    assert _is_message_list(result)
+    combined = " ".join(m["content"] for m in result)
+    assert "What is GIL?" in combined
+    assert "Global Interpreter Lock" in combined
+    # Must instruct JSON output
+    assert "JSON" in combined or "json" in combined
+
+
+def test_counter_prompt_returns_messages():
+    result = build_counter_prompt("What is GIL?", "I don't know")
+    assert _is_message_list(result)
+    assert len(result) == 2
+    combined = " ".join(m["content"] for m in result)
+    assert "What is GIL?" in combined
+
+
+def test_summarize_prompt_returns_messages():
+    messages = [
+        {"role": "assistant", "content": "What is a decorator?"},
+        {"role": "human", "content": "It wraps a function"},
+    ]
+    result = build_summarize_prompt(messages)
+    assert _is_message_list(result)
+    combined = " ".join(m["content"] for m in result)
+    assert "decorator" in combined
+
+
+def test_report_prompt_includes_past_score():
+    result = build_report_prompt("Python", ["Q1", "Q2"], ["decorators"], "summary", 72)
+    assert _is_message_list(result)
+    combined = " ".join(m["content"] for m in result)
+    assert "72" in combined
+    assert "JSON" in combined or "json" in combined
+
+
+def test_report_prompt_no_past_score():
+    result = build_report_prompt("Python", [], [], "", None)
+    assert _is_message_list(result)
+    combined = " ".join(m["content"] for m in result)
+    assert "first attempt" in combined.lower() or "no previous" in combined.lower() or "first" in combined.lower()
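
The suite above pins down one shared contract: every `build_*_prompt` helper is a pure function returning a plain list of `{"role", "content"}` dicts, which is why the tests need no LLM or database. A minimal sketch of one builder satisfying that contract — the prompt wording here is invented for illustration and is not the actual text in `backend/prompts.py`:

```python
# Hypothetical sketch of the builder contract asserted by the tests above.
# Only the shape (a [system, user] message list) mirrors backend/prompts.py;
# the wording is placeholder text.
def build_counter_prompt(question: str, answer: str) -> list[dict]:
    system = "You are a strict technical interviewer. Probe shallow answers."
    user = (
        f"Original question: {question}\n"
        f"Student answer: {answer}\n"
        "Ask one short follow-up question that tests real understanding."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]
```

Keeping the builders pure like this is the design choice that lets `test_prompts.py` assert on content with nothing but string checks.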
tests/test_routing.py ADDED
@@ -0,0 +1,55 @@
+"""Unit tests for route_after_evaluation — no LLM, no DB needed."""
+from backend.graph.nodes import route_after_evaluation
+
+
+def _state(**overrides) -> dict:
+    """Build a minimal InterviewState dict for routing tests."""
+    base = {
+        "last_verdict": "strong",
+        "turn_count": 1,
+        "questions_remaining": [{"question_text": "Q2", "difficulty": "easy"}],
+        "awaiting_counter_response": False,
+    }
+    base.update(overrides)
+    return base
+
+
+def test_shallow_not_in_counter_loop_routes_to_counter():
+    state = _state(last_verdict="shallow", awaiting_counter_response=False)
+    assert route_after_evaluation(state) == "counter"
+
+
+def test_shallow_already_in_counter_loop_routes_to_next_question():
+    state = _state(last_verdict="shallow", awaiting_counter_response=True, turn_count=2)
+    assert route_after_evaluation(state) == "next_question"
+
+
+def test_strong_routes_to_next_question():
+    state = _state(last_verdict="strong", turn_count=1)
+    assert route_after_evaluation(state) == "next_question"
+
+
+def test_wrong_routes_to_next_question():
+    state = _state(last_verdict="wrong", turn_count=1)
+    assert route_after_evaluation(state) == "next_question"
+
+
+def test_turn_count_8_routes_to_end():
+    state = _state(last_verdict="strong", turn_count=8)
+    assert route_after_evaluation(state) == "end"
+
+
+def test_no_questions_remaining_routes_to_end():
+    state = _state(last_verdict="strong", turn_count=3, questions_remaining=[])
+    assert route_after_evaluation(state) == "end"
+
+
+def test_every_4_turns_routes_to_summarize():
+    state = _state(last_verdict="strong", turn_count=4)
+    assert route_after_evaluation(state) == "summarize"
+
+
+def test_turn_8_beats_summarize():
+    # turn_count=8 should end, not summarize (end takes priority)
+    state = _state(last_verdict="strong", turn_count=8)
+    assert route_after_evaluation(state) == "end"
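
Taken together, the cases above determine a priority order for the router: hard stops first (turn budget spent or question bank empty), then at most one counter-question per answer, then a summary every fourth turn, otherwise advance. A hypothetical reconstruction consistent with every test case — the real implementation lives in `backend/graph/nodes.py` and may differ in detail:

```python
# Hypothetical routing function consistent with the test cases above;
# not the project's actual implementation.
def route_after_evaluation(state: dict) -> str:
    # Hard stops first: turn budget spent or question bank empty.
    if state["turn_count"] >= 8 or not state["questions_remaining"]:
        return "end"
    # One counter-question per answer: only when not already in a counter loop.
    if state["last_verdict"] == "shallow" and not state["awaiting_counter_response"]:
        return "counter"
    # Periodic summarization every 4 turns (turn 8 already ended above,
    # so "end" beats "summarize" as test_turn_8_beats_summarize requires).
    if state["turn_count"] % 4 == 0:
        return "summarize"
    return "next_question"
```

Checking the hard stops before the modulo branch is what gives "end" priority over "summarize" at turn 8.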
uv.lock CHANGED
@@ -17,6 +17,7 @@ dependencies = [
     { name = "httpx" },
     { name = "langgraph" },
     { name = "langgraph-checkpoint-postgres" },
+    { name = "psycopg", extra = ["binary"] },
     { name = "pydantic" },
     { name = "python-dotenv" },
     { name = "python-jose", extra = ["cryptography"] },
@@ -37,6 +38,7 @@ requires-dist = [
     { name = "httpx", specifier = "==0.27.2" },
     { name = "langgraph", specifier = "==0.2.50" },
     { name = "langgraph-checkpoint-postgres", specifier = "==2.0.8" },
+    { name = "psycopg", extras = ["binary"], specifier = ">=3.3.3" },
     { name = "pydantic", specifier = "==2.8.2" },
     { name = "python-dotenv", specifier = "==1.0.1" },
     { name = "python-jose", extras = ["cryptography"], specifier = "==3.3.0" },
@@ -727,6 +729,62 @@
     { url = "https://files.pythonhosted.org/packages/c8/5b/181e2e3becb7672b502f0ed7f16ed7352aca7c109cfb94cf3878a9186db9/psycopg-3.3.3-py3-none-any.whl", hash = "sha256:f96525a72bcfade6584ab17e89de415ff360748c766f0106959144dcbb38c698", size = 212768, upload-time = "2026-02-18T16:46:27.365Z" },
 ]
 
+[package.optional-dependencies]
+binary = [
+    { name = "psycopg-binary", marker = "implementation_name != 'pypy'" },
+]
+
+[[package]]
+name = "psycopg-binary"
+version = "3.3.3"
+source = { registry = "https://pypi.org/simple" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/be/c0/b389119dd754483d316805260f3e73cdcad97925839107cc7a296f6132b1/psycopg_binary-3.3.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:a89bb9ee11177b2995d87186b1d9fa892d8ea725e85eab28c6525e4cc14ee048", size = 4609740, upload-time = "2026-02-18T16:47:51.093Z" },
+    { url = "https://files.pythonhosted.org/packages/cf/e3/9976eef20f61840285174d360da4c820a311ab39d6b82fa09fbb545be825/psycopg_binary-3.3.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:9f7d0cf072c6fbac3795b08c98ef9ea013f11db609659dcfc6b1f6cc31f9e181", size = 4676837, upload-time = "2026-02-18T16:47:55.523Z" },
+    { url = "https://files.pythonhosted.org/packages/9f/f2/d28ba2f7404fd7f68d41e8a11df86313bd646258244cb12a8dd83b868a97/psycopg_binary-3.3.3-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:90eecd93073922f085967f3ed3a98ba8c325cbbc8c1a204e300282abd2369e13", size = 5497070, upload-time = "2026-02-18T16:47:59.929Z" },
+    { url = "https://files.pythonhosted.org/packages/de/2f/6c5c54b815edeb30a281cfcea96dc93b3bb6be939aea022f00cab7aa1420/psycopg_binary-3.3.3-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:dac7ee2f88b4d7bb12837989ca354c38d400eeb21bce3b73dac02622f0a3c8d6", size = 5172410, upload-time = "2026-02-18T16:48:05.665Z" },
+    { url = "https://files.pythonhosted.org/packages/51/75/8206c7008b57de03c1ada46bd3110cc3743f3fd9ed52031c4601401d766d/psycopg_binary-3.3.3-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b62cf8784eb6d35beaee1056d54caf94ec6ecf2b7552395e305518ab61eb8fd2", size = 6763408, upload-time = "2026-02-18T16:48:13.541Z" },
+    { url = "https://files.pythonhosted.org/packages/d4/5a/ea1641a1e6c8c8b3454b0fcb43c3045133a8b703e6e824fae134088e63bd/psycopg_binary-3.3.3-cp311-cp311-manylinux_2_38_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:a39f34c9b18e8f6794cca17bfbcd64572ca2482318db644268049f8c738f35a6", size = 5006255, upload-time = "2026-02-18T16:48:22.176Z" },
746
+ { url = "https://files.pythonhosted.org/packages/51/75/8206c7008b57de03c1ada46bd3110cc3743f3fd9ed52031c4601401d766d/psycopg_binary-3.3.3-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b62cf8784eb6d35beaee1056d54caf94ec6ecf2b7552395e305518ab61eb8fd2", size = 6763408, upload-time = "2026-02-18T16:48:13.541Z" },
747
+ { url = "https://files.pythonhosted.org/packages/d4/5a/ea1641a1e6c8c8b3454b0fcb43c3045133a8b703e6e824fae134088e63bd/psycopg_binary-3.3.3-cp311-cp311-manylinux_2_38_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:a39f34c9b18e8f6794cca17bfbcd64572ca2482318db644268049f8c738f35a6", size = 5006255, upload-time = "2026-02-18T16:48:22.176Z" },
748
+ { url = "https://files.pythonhosted.org/packages/aa/fb/538df099bf55ae1637d52d7ccb6b9620b535a40f4c733897ac2b7bb9e14c/psycopg_binary-3.3.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:883d68d48ca9ff3cb3d10c5fdebea02c79b48eecacdddbf7cce6e7cdbdc216b8", size = 4532694, upload-time = "2026-02-18T16:48:27.338Z" },
749
+ { url = "https://files.pythonhosted.org/packages/a1/d1/00780c0e187ea3c13dfc53bd7060654b2232cd30df562aac91a5f1c545ac/psycopg_binary-3.3.3-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:cab7bc3d288d37a80aa8c0820033250c95e40b1c2b5c57cf59827b19c2a8b69d", size = 4222833, upload-time = "2026-02-18T16:48:31.221Z" },
750
+ { url = "https://files.pythonhosted.org/packages/7a/34/a07f1ff713c51d64dc9f19f2c32be80299a2055d5d109d5853662b922cb4/psycopg_binary-3.3.3-cp311-cp311-musllinux_1_2_riscv64.whl", hash = "sha256:56c767007ca959ca32f796b42379fc7e1ae2ed085d29f20b05b3fc394f3715cc", size = 3952818, upload-time = "2026-02-18T16:48:35.869Z" },
751
+ { url = "https://files.pythonhosted.org/packages/d3/67/d33f268a7759b4445f3c9b5a181039b01af8c8263c865c1be7a6444d4749/psycopg_binary-3.3.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:da2f331a01af232259a21573a01338530c6016dcfad74626c01330535bcd8628", size = 4258061, upload-time = "2026-02-18T16:48:41.365Z" },
752
+ { url = "https://files.pythonhosted.org/packages/b4/3b/0d8d2c5e8e29ccc07d28c8af38445d9d9abcd238d590186cac82ee71fc84/psycopg_binary-3.3.3-cp311-cp311-win_amd64.whl", hash = "sha256:19f93235ece6dbfc4036b5e4f6d8b13f0b8f2b3eeb8b0bd2936d406991bcdd40", size = 3558915, upload-time = "2026-02-18T16:48:46.679Z" },
753
+ { url = "https://files.pythonhosted.org/packages/90/15/021be5c0cbc5b7c1ab46e91cc3434eb42569f79a0592e67b8d25e66d844d/psycopg_binary-3.3.3-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:6698dbab5bcef8fdb570fc9d35fd9ac52041771bfcfe6fd0fc5f5c4e36f1e99d", size = 4591170, upload-time = "2026-02-18T16:48:55.594Z" },
754
+ { url = "https://files.pythonhosted.org/packages/f1/54/a60211c346c9a2f8c6b272b5f2bbe21f6e11800ce7f61e99ba75cf8b63e1/psycopg_binary-3.3.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:329ff393441e75f10b673ae99ab45276887993d49e65f141da20d915c05aafd8", size = 4670009, upload-time = "2026-02-18T16:49:03.608Z" },
755
+ { url = "https://files.pythonhosted.org/packages/c1/53/ac7c18671347c553362aadbf65f92786eef9540676ca24114cc02f5be405/psycopg_binary-3.3.3-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:eb072949b8ebf4082ae24289a2b0fd724da9adc8f22743409d6fd718ddb379df", size = 5469735, upload-time = "2026-02-18T16:49:10.128Z" },
756
+ { url = "https://files.pythonhosted.org/packages/7f/c3/4f4e040902b82a344eff1c736cde2f2720f127fe939c7e7565706f96dd44/psycopg_binary-3.3.3-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:263a24f39f26e19ed7fc982d7859a36f17841b05bebad3eb47bb9cd2dd785351", size = 5152919, upload-time = "2026-02-18T16:49:16.335Z" },
757
+ { url = "https://files.pythonhosted.org/packages/0c/e7/d929679c6a5c212bcf738806c7c89f5b3d0919f2e1685a0e08d6ff877945/psycopg_binary-3.3.3-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5152d50798c2fa5bd9b68ec68eb68a1b71b95126c1d70adaa1a08cd5eefdc23d", size = 6738785, upload-time = "2026-02-18T16:49:22.687Z" },
758
+ { url = "https://files.pythonhosted.org/packages/69/b0/09703aeb69a9443d232d7b5318d58742e8ca51ff79f90ffe6b88f1db45e7/psycopg_binary-3.3.3-cp312-cp312-manylinux_2_38_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:9d6a1e56dd267848edb824dbeb08cf5bac649e02ee0b03ba883ba3f4f0bd54f2", size = 4979008, upload-time = "2026-02-18T16:49:27.313Z" },
759
+ { url = "https://files.pythonhosted.org/packages/cc/a6/e662558b793c6e13a7473b970fee327d635270e41eded3090ef14045a6a5/psycopg_binary-3.3.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:73eaaf4bb04709f545606c1db2f65f4000e8a04cdbf3e00d165a23004692093e", size = 4508255, upload-time = "2026-02-18T16:49:31.575Z" },
760
+ { url = "https://files.pythonhosted.org/packages/5f/7f/0f8b2e1d5e0093921b6f324a948a5c740c1447fbb45e97acaf50241d0f39/psycopg_binary-3.3.3-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:162e5675efb4704192411eaf8e00d07f7960b679cd3306e7efb120bb8d9456cc", size = 4189166, upload-time = "2026-02-18T16:49:35.801Z" },
761
+ { url = "https://files.pythonhosted.org/packages/92/ec/ce2e91c33bc8d10b00c87e2f6b0fb570641a6a60042d6a9ae35658a3a797/psycopg_binary-3.3.3-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:fab6b5e37715885c69f5d091f6ff229be71e235f272ebaa35158d5a46fd548a0", size = 3924544, upload-time = "2026-02-18T16:49:41.129Z" },
762
+ { url = "https://files.pythonhosted.org/packages/c5/2f/7718141485f73a924205af60041c392938852aa447a94c8cbd222ff389a1/psycopg_binary-3.3.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:a4aab31bd6d1057f287c96c0effca3a25584eb9cc702f282ecb96ded7814e830", size = 4235297, upload-time = "2026-02-18T16:49:46.726Z" },
763
+ { url = "https://files.pythonhosted.org/packages/57/f9/1add717e2643a003bbde31b1b220172e64fbc0cb09f06429820c9173f7fc/psycopg_binary-3.3.3-cp312-cp312-win_amd64.whl", hash = "sha256:59aa31fe11a0e1d1bcc2ce37ed35fe2ac84cd65bb9036d049b1a1c39064d0f14", size = 3547659, upload-time = "2026-02-18T16:49:52.999Z" },
764
+ { url = "https://files.pythonhosted.org/packages/03/0a/cac9fdf1df16a269ba0e5f0f06cac61f826c94cadb39df028cdfe19d3a33/psycopg_binary-3.3.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:05f32239aec25c5fb15f7948cffdc2dc0dac098e48b80a140e4ba32b572a2e7d", size = 4590414, upload-time = "2026-02-18T16:50:01.441Z" },
765
+ { url = "https://files.pythonhosted.org/packages/9c/c0/d8f8508fbf440edbc0099b1abff33003cd80c9e66eb3a1e78834e3fb4fb9/psycopg_binary-3.3.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:7c84f9d214f2d1de2fafebc17fa68ac3f6561a59e291553dfc45ad299f4898c1", size = 4669021, upload-time = "2026-02-18T16:50:08.803Z" },
766
+ { url = "https://files.pythonhosted.org/packages/04/05/097016b77e343b4568feddf12c72171fc513acef9a4214d21b9478569068/psycopg_binary-3.3.3-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:e77957d2ba17cada11be09a5066d93026cdb61ada7c8893101d7fe1c6e1f3925", size = 5467453, upload-time = "2026-02-18T16:50:14.985Z" },
767
+ { url = "https://files.pythonhosted.org/packages/91/23/73244e5feb55b5ca109cede6e97f32ef45189f0fdac4c80d75c99862729d/psycopg_binary-3.3.3-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:42961609ac07c232a427da7c87a468d3c82fee6762c220f38e37cfdacb2b178d", size = 5151135, upload-time = "2026-02-18T16:50:24.82Z" },
768
+ { url = "https://files.pythonhosted.org/packages/11/49/5309473b9803b207682095201d8708bbc7842ddf3f192488a69204e36455/psycopg_binary-3.3.3-cp313-cp313-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ae07a3114313dd91fce686cab2f4c44af094398519af0e0f854bc707e1aeedf1", size = 6737315, upload-time = "2026-02-18T16:50:35.106Z" },
769
+ { url = "https://files.pythonhosted.org/packages/d4/5d/03abe74ef34d460b33c4d9662bf6ec1dd38888324323c1a1752133c10377/psycopg_binary-3.3.3-cp313-cp313-manylinux_2_38_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:d257c58d7b36a621dcce1d01476ad8b60f12d80eb1406aee4cf796f88b2ae482", size = 4979783, upload-time = "2026-02-18T16:50:42.067Z" },
770
+ { url = "https://files.pythonhosted.org/packages/f0/6c/3fbf8e604e15f2f3752900434046c00c90bb8764305a1b81112bff30ba24/psycopg_binary-3.3.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:07c7211f9327d522c9c47560cae00a4ecf6687f4e02d779d035dd3177b41cb12", size = 4509023, upload-time = "2026-02-18T16:50:50.116Z" },
771
+ { url = "https://files.pythonhosted.org/packages/9c/6b/1a06b43b7c7af756c80b67eac8bfaa51d77e68635a8a8d246e4f0bb7604a/psycopg_binary-3.3.3-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:8e7e9eca9b363dbedeceeadd8be97149d2499081f3c52d141d7cd1f395a91f83", size = 4185874, upload-time = "2026-02-18T16:50:55.97Z" },
772
+ { url = "https://files.pythonhosted.org/packages/2b/d3/bf49e3dcaadba510170c8d111e5e69e5ae3f981c1554c5bb71c75ce354bb/psycopg_binary-3.3.3-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:cb85b1d5702877c16f28d7b92ba030c1f49ebcc9b87d03d8c10bf45a2f1c7508", size = 3925668, upload-time = "2026-02-18T16:51:03.299Z" },
773
+ { url = "https://files.pythonhosted.org/packages/f8/92/0aac830ed6a944fe334404e1687a074e4215630725753f0e3e9a9a595b62/psycopg_binary-3.3.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:4d4606c84d04b80f9138d72f1e28c6c02dc5ae0c7b8f3f8aaf89c681ce1cd1b1", size = 4234973, upload-time = "2026-02-18T16:51:09.097Z" },
774
+ { url = "https://files.pythonhosted.org/packages/2e/96/102244653ee5a143ece5afe33f00f52fe64e389dfce8dbc87580c6d70d3d/psycopg_binary-3.3.3-cp313-cp313-win_amd64.whl", hash = "sha256:74eae563166ebf74e8d950ff359be037b85723d99ca83f57d9b244a871d6c13b", size = 3551342, upload-time = "2026-02-18T16:51:13.892Z" },
775
+ { url = "https://files.pythonhosted.org/packages/a2/71/7a57e5b12275fe7e7d84d54113f0226080423a869118419c9106c083a21c/psycopg_binary-3.3.3-cp314-cp314-macosx_10_15_x86_64.whl", hash = "sha256:497852c5eaf1f0c2d88ab74a64a8097c099deac0c71de1cbcf18659a8a04a4b2", size = 4607368, upload-time = "2026-02-18T16:51:19.295Z" },
776
+ { url = "https://files.pythonhosted.org/packages/c7/04/cb834f120f2b2c10d4003515ef9ca9d688115b9431735e3936ae48549af8/psycopg_binary-3.3.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:258d1ea53464d29768bf25930f43291949f4c7becc706f6e220c515a63a24edd", size = 4687047, upload-time = "2026-02-18T16:51:23.84Z" },
777
+ { url = "https://files.pythonhosted.org/packages/40/e9/47a69692d3da9704468041aa5ed3ad6fc7f6bb1a5ae788d261a26bbca6c7/psycopg_binary-3.3.3-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:111c59897a452196116db12e7f608da472fbff000693a21040e35fc978b23430", size = 5487096, upload-time = "2026-02-18T16:51:29.645Z" },
778
+ { url = "https://files.pythonhosted.org/packages/0b/b6/0e0dd6a2f802864a4ae3dbadf4ec620f05e3904c7842b326aafc43e5f464/psycopg_binary-3.3.3-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:17bb6600e2455993946385249a3c3d0af52cd70c1c1cdbf712e9d696d0b0bf1b", size = 5168720, upload-time = "2026-02-18T16:51:36.499Z" },
779
+ { url = "https://files.pythonhosted.org/packages/6f/0d/977af38ac19a6b55d22dff508bd743fd7c1901e1b73657e7937c7cccb0a3/psycopg_binary-3.3.3-cp314-cp314-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:642050398583d61c9856210568eb09a8e4f2fe8224bf3be21b67a370e677eead", size = 6762076, upload-time = "2026-02-18T16:51:43.167Z" },
780
+ { url = "https://files.pythonhosted.org/packages/34/40/912a39d48322cf86895c0eaf2d5b95cb899402443faefd4b09abbba6b6e1/psycopg_binary-3.3.3-cp314-cp314-manylinux_2_38_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:533efe6dc3a7cba5e2a84e38970786bb966306863e45f3db152007e9f48638a6", size = 4997623, upload-time = "2026-02-18T16:51:47.707Z" },
781
+ { url = "https://files.pythonhosted.org/packages/98/0c/c14d0e259c65dc7be854d926993f151077887391d5a081118907a9d89603/psycopg_binary-3.3.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:5958dbf28b77ce2033482f6cb9ef04d43f5d8f4b7636e6963d5626f000efb23e", size = 4532096, upload-time = "2026-02-18T16:51:51.421Z" },
782
+ { url = "https://files.pythonhosted.org/packages/39/21/8b7c50a194cfca6ea0fd4d1f276158307785775426e90700ab2eba5cd623/psycopg_binary-3.3.3-cp314-cp314-musllinux_1_2_ppc64le.whl", hash = "sha256:a6af77b6626ce92b5817bf294b4d45ec1a6161dba80fc2d82cdffdd6814fd023", size = 4208884, upload-time = "2026-02-18T16:51:57.336Z" },
783
+ { url = "https://files.pythonhosted.org/packages/c7/2c/a4981bf42cf30ebba0424971d7ce70a222ae9b82594c42fc3f2105d7b525/psycopg_binary-3.3.3-cp314-cp314-musllinux_1_2_riscv64.whl", hash = "sha256:47f06fcbe8542b4d96d7392c476a74ada521c5aebdb41c3c0155f6595fc14c8d", size = 3944542, upload-time = "2026-02-18T16:52:04.266Z" },
784
+ { url = "https://files.pythonhosted.org/packages/60/e9/b7c29b56aa0b85a4e0c4d89db691c1ceef08f46a356369144430c155a2f5/psycopg_binary-3.3.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:e7800e6c6b5dc4b0ca7cc7370f770f53ac83886b76afda0848065a674231e856", size = 4254339, upload-time = "2026-02-18T16:52:10.444Z" },
785
+ { url = "https://files.pythonhosted.org/packages/98/5a/291d89f44d3820fffb7a04ebc8f3ef5dda4f542f44a5daea0c55a84abf45/psycopg_binary-3.3.3-cp314-cp314-win_amd64.whl", hash = "sha256:165f22ab5a9513a3d7425ffb7fcc7955ed8ccaeef6d37e369d6cc1dff1582383", size = 3652796, upload-time = "2026-02-18T16:52:14.02Z" },
786
+ ]
787
+
788
  [[package]]
789
  name = "psycopg-pool"
790
  version = "3.3.0"