samar m committed on
Commit c7dfb3d · 1 Parent(s): 9c38802

phase 5 implementation docs

docs/superpowers/plans/2026-03-28-phase-2-instructor.md ADDED
# Phase 2 — Instructor Batch/Topic Management + Upload

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Build the instructor workflow — create a class batch, manage topics, and upload question banks via CSV — plus the NeonDB schema that all phases depend on.

**Architecture:** Backend adds three new FastAPI routers (batches, topics, upload) and a CSV parser utility. Frontend gains an InstructorDashboard (batch creation + topic list with unlock toggles) and an Upload page. All DB tables are created once via a schema script applied against NeonDB.

**Tech Stack:** FastAPI, asyncpg, python-multipart, React 18 + Vite + TypeScript, Tailwind CSS v3, Zustand v5

---

## File Map

### Backend — Create/Modify
- `backend/db/schema.sql` — all CREATE TABLE statements for the full app
- `backend/db/apply_schema.py` — one-time script to run schema.sql against NeonDB
- `backend/utils/__init__.py` — empty package marker
- `backend/utils/csv_parser.py` — pure CSV-to-dict parser, no DB dependency
- `backend/db/queries.py` — add batch/topic/question query functions
- `backend/routers/batches.py` — POST /api/batches, GET /api/batches/mine, GET /api/batches/{batch_id}
- `backend/routers/topics.py` — POST /api/topics, GET /api/topics?batch_id=..., PATCH /api/topics/{topic_id}
- `backend/routers/upload.py` — POST /api/upload (multipart CSV + topic_id)
- `backend/main.py` — register 3 new routers

### Backend — Tests
- `tests/test_csv_parser.py` — unit tests for parse_csv_questions

### Frontend — Create/Modify
- `frontend/src/api/topics.ts` — getMyBatch, createBatch, getTopics, createTopic, setTopicUnlock
- `frontend/src/api/upload.ts` — uploadCSV (multipart fetch)
- `frontend/src/components/shared/Navbar.tsx` — top nav with user name + logout
- `frontend/src/components/instructor/StatCard.tsx` — reusable stat display card
- `frontend/src/components/instructor/UploadZone.tsx` — drag-and-drop CSV input
- `frontend/src/pages/InstructorDashboard.tsx` — batch creation flow + topic management
- `frontend/src/pages/Upload.tsx` — CSV upload page

---

## Task 1: NeonDB schema

**Files:**
- Create: `backend/db/schema.sql`
- Create: `backend/db/apply_schema.py`

- [ ] **Step 1: Create backend/db/schema.sql**

```sql
-- backend/db/schema.sql
-- Run once via: uv run python backend/db/apply_schema.py

-- batches first (no FK to users yet — circular dependency resolved below)
CREATE TABLE IF NOT EXISTS batches (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name TEXT NOT NULL,
    instructor_id UUID NOT NULL,
    class_code TEXT NOT NULL UNIQUE DEFAULT upper(substring(gen_random_uuid()::text, 1, 8)),
    created_at TIMESTAMPTZ DEFAULT NOW()
);

-- users references batches
CREATE TABLE IF NOT EXISTS users (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    full_name TEXT NOT NULL,
    email TEXT NOT NULL UNIQUE,
    password_hash TEXT NOT NULL,
    role TEXT NOT NULL CHECK (role IN ('student', 'instructor')),
    batch_id UUID REFERENCES batches(id) ON DELETE SET NULL,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

-- add FK from batches.instructor_id → users.id (deferred until users exists)
DO $$
BEGIN
    IF NOT EXISTS (
        SELECT 1 FROM information_schema.table_constraints
        WHERE constraint_name = 'batches_instructor_id_fkey'
          AND table_name = 'batches'
    ) THEN
        ALTER TABLE batches
            ADD CONSTRAINT batches_instructor_id_fkey
            FOREIGN KEY (instructor_id) REFERENCES users(id) ON DELETE CASCADE;
    END IF;
END $$;

CREATE TABLE IF NOT EXISTS refresh_tokens (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
    token_hash TEXT NOT NULL,
    expires_at TIMESTAMPTZ NOT NULL,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE IF NOT EXISTS topics (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    batch_id UUID NOT NULL REFERENCES batches(id) ON DELETE CASCADE,
    name TEXT NOT NULL,
    is_unlocked BOOLEAN NOT NULL DEFAULT FALSE,
    order_index INTEGER NOT NULL DEFAULT 0,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE IF NOT EXISTS questions (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    topic_id UUID NOT NULL REFERENCES topics(id) ON DELETE CASCADE,
    question_text TEXT NOT NULL,
    difficulty TEXT NOT NULL CHECK (difficulty IN ('easy', 'medium', 'hard')) DEFAULT 'medium',
    created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE IF NOT EXISTS interview_sessions (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    student_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
    topic_id UUID NOT NULL REFERENCES topics(id) ON DELETE CASCADE,
    status TEXT NOT NULL CHECK (status IN ('active', 'completed')) DEFAULT 'active',
    score INTEGER CHECK (score >= 0 AND score <= 100),
    feedback JSONB,
    started_at TIMESTAMPTZ DEFAULT NOW(),
    completed_at TIMESTAMPTZ
);
```

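The `class_code` default derives an 8-character uppercase code from a fresh UUID. As a sanity check on that scheme, here is a rough Python equivalent (a sketch for illustration only; the real generation happens in Postgres):

```python
import uuid


def gen_class_code() -> str:
    # Rough equivalent of upper(substring(gen_random_uuid()::text, 1, 8)):
    # the first 8 hex characters of a random UUID, uppercased.
    return str(uuid.uuid4())[:8].upper()


code = gen_class_code()
print(code)
```

With 16^8 (about 4.3 billion) possible codes, collisions are unlikely at classroom scale, and the `UNIQUE` constraint rejects the rare duplicate at insert time.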
- [ ] **Step 2: Create backend/db/apply_schema.py**

```python
"""Run once to apply DB schema to NeonDB. Usage: uv run python backend/db/apply_schema.py"""
import asyncio
import os
from pathlib import Path

import asyncpg
from dotenv import load_dotenv

load_dotenv()


async def main() -> None:
    sql = (Path(__file__).parent / "schema.sql").read_text()
    conn = await asyncpg.connect(os.getenv("NEON_DB_URL"))
    try:
        await conn.execute(sql)
        print("Schema applied successfully.")
    finally:
        await conn.close()


if __name__ == "__main__":
    asyncio.run(main())
```

- [ ] **Step 3: Run schema migration**

Make sure `.env` is present with a valid `NEON_DB_URL`, then run:

```bash
uv run python backend/db/apply_schema.py
```

Expected output: `Schema applied successfully.`

- [ ] **Step 4: Commit**

```bash
git add backend/db/schema.sql backend/db/apply_schema.py
git commit -m "feat: add NeonDB schema (all tables) and apply_schema script"
```

---

## Task 2: CSV parser utility + tests

**Files:**
- Create: `backend/utils/__init__.py`
- Create: `backend/utils/csv_parser.py`
- Create: `tests/test_csv_parser.py`

- [ ] **Step 1: Create backend/utils/__init__.py**

```python
```

(empty file — marks the utils package)

- [ ] **Step 2: Write failing tests first**

Create `tests/test_csv_parser.py`:

```python
from backend.utils.csv_parser import parse_csv_questions


def test_parses_valid_csv():
    text = "question_text,difficulty\nWhat is Python?,easy\nExplain GIL,hard"
    result = parse_csv_questions(text)
    assert len(result) == 2
    assert result[0] == {"question_text": "What is Python?", "difficulty": "easy"}
    assert result[1] == {"question_text": "Explain GIL", "difficulty": "hard"}


def test_defaults_invalid_difficulty_to_medium():
    text = "question_text,difficulty\nWhat is Python?,bogus"
    result = parse_csv_questions(text)
    assert result[0]["difficulty"] == "medium"


def test_skips_empty_question_text():
    text = "question_text,difficulty\n,easy\nWhat is Python?,easy"
    result = parse_csv_questions(text)
    assert len(result) == 1


def test_missing_difficulty_column_defaults_to_medium():
    text = "question_text\nWhat is Python?"
    result = parse_csv_questions(text)
    assert result[0]["difficulty"] == "medium"


def test_empty_csv_returns_empty_list():
    text = "question_text,difficulty\n"
    result = parse_csv_questions(text)
    assert result == []


def test_strips_whitespace():
    text = "question_text,difficulty\n What is Python? , easy "
    result = parse_csv_questions(text)
    assert result[0]["question_text"] == "What is Python?"
    assert result[0]["difficulty"] == "easy"
```

- [ ] **Step 3: Run tests — expect failure**

```bash
uv run pytest tests/test_csv_parser.py -v
```

Expected: `FAILED` — `ModuleNotFoundError`

- [ ] **Step 4: Implement backend/utils/csv_parser.py**

```python
import csv
import io

_VALID_DIFFICULTIES = {"easy", "medium", "hard"}


def parse_csv_questions(text: str) -> list[dict]:
    """
    Parse CSV text into question dicts.
    Expected columns: question_text, difficulty
    Returns list of {"question_text": str, "difficulty": str}.
    Skips rows where question_text is empty.
    Defaults difficulty to 'medium' if missing or invalid.
    """
    reader = csv.DictReader(io.StringIO(text))
    rows = []
    for row in reader:
        question_text = (row.get("question_text") or "").strip()
        difficulty = (row.get("difficulty") or "medium").strip().lower()
        if not question_text:
            continue
        if difficulty not in _VALID_DIFFICULTIES:
            difficulty = "medium"
        rows.append({"question_text": question_text, "difficulty": difficulty})
    return rows
```

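One subtlety the implementation leans on: when the `difficulty` column is absent from the header, `csv.DictReader` yields rows without that key, so `row.get("difficulty")` returns `None` rather than an empty string. That is why the fallback is written as `or "medium"`. A quick standalone check of that behavior:

```python
import csv
import io

# Header has no difficulty column, mirroring test_missing_difficulty_column above.
rows = list(csv.DictReader(io.StringIO("question_text\nWhat is Python?")))

missing = rows[0].get("difficulty")  # None — the `or "medium"` fallback covers this
print(missing)
```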
- [ ] **Step 5: Run tests — expect all pass**

```bash
uv run pytest tests/test_csv_parser.py -v
```

Expected: `6 passed`

- [ ] **Step 6: Commit**

```bash
git add backend/utils/__init__.py backend/utils/csv_parser.py tests/test_csv_parser.py
git commit -m "feat: add CSV question parser utility with unit tests"
```

---

## Task 3: DB queries for batches, topics, questions

**Files:**
- Modify: `backend/db/queries.py`

- [ ] **Step 1: Append batch/topic/question query functions**

Add to the bottom of `backend/db/queries.py`:

```python
async def create_batch(name: str, instructor_id: str) -> asyncpg.Record:
    async with get_pool().acquire() as conn:
        return await conn.fetchrow(
            """
            INSERT INTO batches (name, instructor_id)
            VALUES ($1, $2)
            RETURNING *
            """,
            name, instructor_id,
        )


async def get_batch_by_instructor_id(instructor_id: str) -> Optional[asyncpg.Record]:
    async with get_pool().acquire() as conn:
        return await conn.fetchrow(
            "SELECT * FROM batches WHERE instructor_id = $1", instructor_id
        )


async def get_batch_by_id(batch_id: str) -> Optional[asyncpg.Record]:
    async with get_pool().acquire() as conn:
        return await conn.fetchrow(
            "SELECT * FROM batches WHERE id = $1", batch_id
        )


async def create_topic(batch_id: str, name: str, order_index: int) -> asyncpg.Record:
    async with get_pool().acquire() as conn:
        return await conn.fetchrow(
            """
            INSERT INTO topics (batch_id, name, order_index)
            VALUES ($1, $2, $3)
            RETURNING *
            """,
            batch_id, name, order_index,
        )


async def get_topics_by_batch_id(batch_id: str) -> list[asyncpg.Record]:
    async with get_pool().acquire() as conn:
        return await conn.fetch(
            "SELECT * FROM topics WHERE batch_id = $1 ORDER BY order_index ASC",
            batch_id,
        )


async def get_topic_by_id(topic_id: str) -> Optional[asyncpg.Record]:
    async with get_pool().acquire() as conn:
        return await conn.fetchrow(
            "SELECT * FROM topics WHERE id = $1", topic_id
        )


async def set_topic_unlock(topic_id: str, is_unlocked: bool) -> asyncpg.Record:
    async with get_pool().acquire() as conn:
        return await conn.fetchrow(
            """
            UPDATE topics SET is_unlocked = $1
            WHERE id = $2
            RETURNING *
            """,
            is_unlocked, topic_id,
        )


async def create_questions_bulk(topic_id: str, rows: list[dict]) -> int:
    async with get_pool().acquire() as conn:
        await conn.executemany(
            """
            INSERT INTO questions (topic_id, question_text, difficulty)
            VALUES ($1, $2, $3)
            """,
            [(topic_id, r["question_text"], r["difficulty"]) for r in rows],
        )
    return len(rows)


async def get_question_count_by_topic(topic_id: str) -> int:
    async with get_pool().acquire() as conn:
        result = await conn.fetchrow(
            "SELECT COUNT(*) AS count FROM questions WHERE topic_id = $1",
            topic_id,
        )
        return result["count"]
```

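`create_questions_bulk` feeds `executemany` one parameter tuple per question. A small illustration of that mapping, using a made-up topic id and parser-style rows:

```python
# Hypothetical parser output and topic id, for illustration only.
rows = [
    {"question_text": "What is Python?", "difficulty": "easy"},
    {"question_text": "Explain GIL", "difficulty": "hard"},
]
topic_id = "11111111-2222-3333-4444-555555555555"

# Same comprehension create_questions_bulk passes to conn.executemany:
# one (topic_id, question_text, difficulty) tuple per row.
params = [(topic_id, r["question_text"], r["difficulty"]) for r in rows]
print(params[0])
```

A useful property here: asyncpg's `executemany` runs atomically (since asyncpg 0.22), so a failure partway through a CSV should not leave a partial upload behind.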
- [ ] **Step 2: Commit**

```bash
git add backend/db/queries.py
git commit -m "feat: add batch/topic/question DB queries"
```

---

## Task 4: Batches router

**Files:**
- Create: `backend/routers/batches.py`

- [ ] **Step 1: Create backend/routers/batches.py**

```python
from fastapi import APIRouter, Depends, HTTPException
from pydantic import BaseModel

from backend.auth.deps import require_instructor
from backend.db import queries

router = APIRouter()


class CreateBatchRequest(BaseModel):
    name: str


def _batch_out(batch) -> dict:
    return {
        "id": str(batch["id"]),
        "name": batch["name"],
        "instructor_id": str(batch["instructor_id"]),
        "class_code": batch["class_code"],
    }


@router.post("")
async def create_batch(
    body: CreateBatchRequest,
    user: dict = Depends(require_instructor),
):
    existing = await queries.get_batch_by_instructor_id(user["user_id"])
    if existing:
        raise HTTPException(400, "You already have a batch")
    batch = await queries.create_batch(body.name, user["user_id"])
    return _batch_out(batch)


@router.get("/mine")
async def get_my_batch(user: dict = Depends(require_instructor)):
    batch = await queries.get_batch_by_instructor_id(user["user_id"])
    if not batch:
        raise HTTPException(404, "No batch found")
    return _batch_out(batch)


@router.get("/{batch_id}")
async def get_batch(
    batch_id: str,
    user: dict = Depends(require_instructor),
):
    batch = await queries.get_batch_by_id(batch_id)
    if not batch:
        raise HTTPException(404, "Batch not found")
    return _batch_out(batch)
```

- [ ] **Step 2: Commit**

```bash
git add backend/routers/batches.py
git commit -m "feat: batches router — create, get mine, get by id"
```

---

## Task 5: Topics router

**Files:**
- Create: `backend/routers/topics.py`

- [ ] **Step 1: Create backend/routers/topics.py**

```python
from fastapi import APIRouter, Depends, HTTPException
from pydantic import BaseModel

from backend.auth.deps import require_instructor
from backend.db import queries

router = APIRouter()


class CreateTopicRequest(BaseModel):
    batch_id: str
    name: str


class UpdateTopicRequest(BaseModel):
    is_unlocked: bool


def _topic_out(topic, question_count: int = 0) -> dict:
    return {
        "id": str(topic["id"]),
        "batch_id": str(topic["batch_id"]),
        "name": topic["name"],
        "is_unlocked": topic["is_unlocked"],
        "order_index": topic["order_index"],
        "question_count": question_count,
    }


@router.post("")
async def create_topic(
    body: CreateTopicRequest,
    user: dict = Depends(require_instructor),
):
    batch = await queries.get_batch_by_id(body.batch_id)
    if not batch or str(batch["instructor_id"]) != user["user_id"]:
        raise HTTPException(403, "Batch not found or not yours")
    existing = await queries.get_topics_by_batch_id(body.batch_id)
    order_index = len(existing)
    topic = await queries.create_topic(body.batch_id, body.name, order_index)
    return _topic_out(topic)


@router.get("")
async def list_topics(
    batch_id: str,
    user: dict = Depends(require_instructor),
):
    topics = await queries.get_topics_by_batch_id(batch_id)
    result = []
    for t in topics:
        count = await queries.get_question_count_by_topic(str(t["id"]))
        result.append(_topic_out(t, count))
    return result


@router.patch("/{topic_id}")
async def update_topic(
    topic_id: str,
    body: UpdateTopicRequest,
    user: dict = Depends(require_instructor),
):
    topic = await queries.get_topic_by_id(topic_id)
    if not topic:
        raise HTTPException(404, "Topic not found")
    batch = await queries.get_batch_by_id(str(topic["batch_id"]))
    if not batch or str(batch["instructor_id"]) != user["user_id"]:
        raise HTTPException(403, "Topic does not belong to your batch")
    updated = await queries.set_topic_unlock(topic_id, body.is_unlocked)
    count = await queries.get_question_count_by_topic(topic_id)
    return _topic_out(updated, count)
```

- [ ] **Step 2: Commit**

```bash
git add backend/routers/topics.py
git commit -m "feat: topics router — create, list, unlock/lock"
```

---

## Task 6: Upload router

**Files:**
- Create: `backend/routers/upload.py`

- [ ] **Step 1: Create backend/routers/upload.py**

```python
from fastapi import APIRouter, Depends, File, Form, HTTPException, UploadFile

from backend.auth.deps import require_instructor
from backend.db import queries
from backend.utils.csv_parser import parse_csv_questions

router = APIRouter()


@router.post("")
async def upload_questions(
    topic_id: str = Form(...),
    file: UploadFile = File(...),
    user: dict = Depends(require_instructor),
):
    if not (file.filename or "").endswith(".csv"):
        raise HTTPException(400, "File must be a .csv")

    content = await file.read()
    text = content.decode("utf-8-sig")  # strips BOM if present

    rows = parse_csv_questions(text)
    if not rows:
        raise HTTPException(400, "CSV is empty or contains no valid question rows")

    topic = await queries.get_topic_by_id(topic_id)
    if not topic:
        raise HTTPException(404, "Topic not found")

    batch = await queries.get_batch_by_id(str(topic["batch_id"]))
    if not batch or str(batch["instructor_id"]) != user["user_id"]:
        raise HTTPException(403, "Topic does not belong to your batch")

    inserted = await queries.create_questions_bulk(topic_id, rows)
    return {"inserted": inserted}
```

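The `utf-8-sig` decode matters because spreadsheet exports (notably Excel) often prepend a UTF-8 byte-order mark; with a plain `utf-8` decode the BOM would end up glued to the first header name, and the parser's `question_text` lookup would never match. A standalone demonstration:

```python
# Simulate an Excel-style CSV export that starts with a UTF-8 BOM.
raw = "\ufeffquestion_text,difficulty\nWhat is Python?,easy\n".encode("utf-8")

plain = raw.decode("utf-8")      # BOM survives as U+FEFF stuck to the header
clean = raw.decode("utf-8-sig")  # BOM stripped, header parses normally
print(plain.startswith("\ufeff"), clean.startswith("question_text"))
```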
- [ ] **Step 2: Commit**

```bash
git add backend/routers/upload.py
git commit -m "feat: upload router — parse CSV and bulk insert questions"
```

---

## Task 7: Wire main.py

**Files:**
- Modify: `backend/main.py`

- [ ] **Step 1: Replace backend/main.py**

```python
from contextlib import asynccontextmanager

from fastapi import FastAPI

from backend.db.connection import init_db_pool
from backend.routers import auth, batches, topics, upload


@asynccontextmanager
async def lifespan(app: FastAPI):
    await init_db_pool()
    yield


app = FastAPI(lifespan=lifespan)

app.include_router(auth.router, prefix="/api/auth")
app.include_router(batches.router, prefix="/api/batches")
app.include_router(topics.router, prefix="/api/topics")
app.include_router(upload.router, prefix="/api/upload")

# React static build — MUST be last
# app.mount("/", StaticFiles(directory="frontend/dist", html=True), name="static")
```

- [ ] **Step 2: Verify backend imports cleanly**

```bash
uv run python -c "from backend.main import app; print('OK')"
```

Expected: `OK` (plus possible JWT_SECRET warning — that is fine)

- [ ] **Step 3: Run all backend tests**

```bash
uv run pytest tests/ -v
```

Expected: `17 passed` (11 existing + 6 csv parser)

- [ ] **Step 4: Commit**

```bash
git add backend/main.py
git commit -m "feat: register batches, topics, upload routers in main.py"
```

---

## Task 8: Frontend API layer

**Files:**
- Modify: `frontend/src/api/topics.ts`
- Create: `frontend/src/api/upload.ts`

- [ ] **Step 1: Implement frontend/src/api/topics.ts**

```typescript
import { apiFetch } from './client'

export interface Batch {
  id: string
  name: string
  instructor_id: string
  class_code: string
}

export interface Topic {
  id: string
  batch_id: string
  name: string
  is_unlocked: boolean
  order_index: number
  question_count: number
}

export async function getMyBatch(): Promise<Batch> {
  const res = await apiFetch('/api/batches/mine')
  if (!res.ok) throw new Error('No batch found')
  return res.json()
}

export async function createBatch(name: string): Promise<Batch> {
  const res = await apiFetch('/api/batches', {
    method: 'POST',
    body: JSON.stringify({ name }),
  })
  if (!res.ok) {
    const err = await res.json()
    throw new Error(err.detail ?? 'Failed to create batch')
  }
  return res.json()
}

export async function getTopics(batch_id: string): Promise<Topic[]> {
  const res = await apiFetch(`/api/topics?batch_id=${batch_id}`)
  if (!res.ok) throw new Error('Failed to load topics')
  return res.json()
}

export async function createTopic(batch_id: string, name: string): Promise<Topic> {
  const res = await apiFetch('/api/topics', {
    method: 'POST',
    body: JSON.stringify({ batch_id, name }),
  })
  if (!res.ok) {
    const err = await res.json()
    throw new Error(err.detail ?? 'Failed to create topic')
  }
  return res.json()
}

export async function setTopicUnlock(topic_id: string, is_unlocked: boolean): Promise<Topic> {
  const res = await apiFetch(`/api/topics/${topic_id}`, {
    method: 'PATCH',
    body: JSON.stringify({ is_unlocked }),
  })
  if (!res.ok) throw new Error('Failed to update topic')
  return res.json()
}
```

- [ ] **Step 2: Create frontend/src/api/upload.ts**

```typescript
import { useAuthStore } from '../store/authStore'

export async function uploadCSV(
  topic_id: string,
  file: File,
): Promise<{ inserted: number }> {
  const token = useAuthStore.getState().accessToken
  const formData = new FormData()
  formData.append('file', file)
  formData.append('topic_id', topic_id)

  // Cannot use apiFetch here — Content-Type must be set by browser for multipart
  const res = await fetch('/api/upload', {
    method: 'POST',
    headers: token ? { Authorization: `Bearer ${token}` } : {},
    body: formData,
    credentials: 'include',
  })
  if (!res.ok) {
    const err = await res.json()
    throw new Error(err.detail ?? 'Upload failed')
  }
  return res.json()
}
```

- [ ] **Step 3: Verify TypeScript compiles**

```bash
cd frontend && npx tsc --noEmit && cd ..
```

Expected: no errors

- [ ] **Step 4: Commit**

```bash
git add frontend/src/api/topics.ts frontend/src/api/upload.ts
git commit -m "feat: frontend topics and upload API layer"
```

---

## Task 9: Navbar + StatCard

**Files:**
- Modify: `frontend/src/components/shared/Navbar.tsx`
- Create: `frontend/src/components/instructor/StatCard.tsx`

- [ ] **Step 1: Implement Navbar.tsx**

```typescript
import { useNavigate } from 'react-router-dom'
import { logout } from '../../api/auth'
import { useAuthStore } from '../../store/authStore'

export default function Navbar() {
  const navigate = useNavigate()
  const { user, clearAuth } = useAuthStore()

  async function handleLogout() {
    try { await logout() } catch { /* ignore */ }
    clearAuth()
    navigate('/login', { replace: true })
  }

  return (
    <nav className="bg-gray-900 border-b border-gray-800 px-6 py-3 flex items-center justify-between">
      <span className="text-white font-semibold text-lg tracking-tight">InterviewMentor</span>
      {user && (
        <div className="flex items-center gap-4">
          <span className="text-gray-400 text-sm">{user.full_name}</span>
          <button
            onClick={handleLogout}
            className="text-sm text-gray-400 hover:text-white transition-colors"
          >
            Sign out
          </button>
        </div>
      )}
    </nav>
  )
}
```

- [ ] **Step 2: Create frontend/src/components/instructor/StatCard.tsx**

```typescript
interface Props {
  label: string
  value: string | number
}

export default function StatCard({ label, value }: Props) {
  return (
    <div className="bg-gray-900 border border-gray-800 rounded-lg px-4 py-3">
      <p className="text-gray-400 text-xs uppercase tracking-wide">{label}</p>
      <p className="text-white text-2xl font-semibold mt-1">{value}</p>
    </div>
  )
}
```

- [ ] **Step 3: Verify TypeScript compiles**

```bash
cd frontend && npx tsc --noEmit && cd ..
```

Expected: no errors

- [ ] **Step 4: Commit**

```bash
git add frontend/src/components/shared/Navbar.tsx frontend/src/components/instructor/StatCard.tsx
git commit -m "feat: Navbar with logout and StatCard component"
```

---

859
+ ## Task 10: InstructorDashboard page
860
+
861
+ **Files:**
862
+ - Modify: `frontend/src/pages/InstructorDashboard.tsx`
863
+
864
+ - [ ] **Step 1: Implement frontend/src/pages/InstructorDashboard.tsx**
865
+
866
+ ```typescript
867
+ import { useState, useEffect, type FormEvent } from 'react'
868
+ import { useNavigate } from 'react-router-dom'
869
+ import Navbar from '../components/shared/Navbar'
870
+ import StatCard from '../components/instructor/StatCard'
871
+ import {
872
+ getMyBatch, createBatch, getTopics, createTopic, setTopicUnlock,
873
+ type Batch, type Topic,
874
+ } from '../api/topics'
875
+
876
+ export default function InstructorDashboard() {
877
+ const navigate = useNavigate()
878
+ const [batch, setBatch] = useState<Batch | null>(null)
879
+ const [topics, setTopics] = useState<Topic[]>([])
880
+ const [batchName, setBatchName] = useState('')
881
+ const [newTopicName, setNewTopicName] = useState('')
882
+ const [loading, setLoading] = useState(true)
883
+ const [error, setError] = useState('')
884
+
885
+ useEffect(() => {
886
+ getMyBatch()
887
+ .then(async (b) => {
888
+ setBatch(b)
889
+ const t = await getTopics(b.id)
890
+ setTopics(t)
891
+ })
892
+ .catch(() => { /* 404 = no batch yet */ })
893
+ .finally(() => setLoading(false))
894
+ }, [])
895
+
896
+ async function handleCreateBatch(e: FormEvent) {
897
+ e.preventDefault()
898
+ setError('')
899
+ try {
900
+ const b = await createBatch(batchName.trim())
901
+ setBatch(b)
902
+ setTopics([])
903
+ setBatchName('')
904
+ } catch (err) {
905
+ setError(err instanceof Error ? err.message : 'Failed to create batch')
906
+ }
907
+ }
908
+
909
+ async function handleAddTopic(e: FormEvent) {
910
+ e.preventDefault()
911
+ if (!batch || !newTopicName.trim()) return
912
+ setError('')
913
+ try {
914
+ const t = await createTopic(batch.id, newTopicName.trim())
915
+ setTopics((prev) => [...prev, t])
916
+ setNewTopicName('')
917
+ } catch (err) {
918
+ setError(err instanceof Error ? err.message : 'Failed to add topic')
919
+ }
920
+ }
921
+
922
+ async function handleToggleUnlock(topic: Topic) {
923
+ try {
924
+ const updated = await setTopicUnlock(topic.id, !topic.is_unlocked)
925
+ setTopics((prev) => prev.map((t) => (t.id === topic.id ? updated : t)))
926
+      } catch { /* keep the previous lock state if the request fails */ }
927
+ }
928
+
929
+ if (loading) {
930
+ return (
931
+ <div className="min-h-screen bg-gray-950">
932
+ <Navbar />
933
+ <div className="flex items-center justify-center h-64">
934
+ <p className="text-gray-400">Loading...</p>
935
+ </div>
936
+ </div>
937
+ )
938
+ }
939
+
940
+ return (
941
+ <div className="min-h-screen bg-gray-950">
942
+ <Navbar />
943
+ <div className="max-w-3xl mx-auto px-6 py-8">
944
+
945
+ {!batch ? (
946
+ /* ── Create batch ── */
947
+ <div className="bg-gray-900 border border-gray-800 rounded-xl p-8 max-w-md mx-auto">
948
+ <h2 className="text-xl font-semibold text-white mb-2">Create your batch</h2>
949
+ <p className="text-gray-400 text-sm mb-6">Give your class a name to get started.</p>
950
+ <form onSubmit={handleCreateBatch} className="space-y-4">
951
+ <input
952
+ type="text"
953
+ value={batchName}
954
+ onChange={(e) => setBatchName(e.target.value)}
955
+ required
956
+ placeholder="e.g. Batch A — Jan 2026"
957
+ className="w-full bg-gray-800 border border-gray-700 rounded-lg px-4 py-2 text-white placeholder-gray-500 focus:outline-none focus:border-indigo-500"
958
+ />
959
+ {error && <p className="text-red-400 text-sm">{error}</p>}
960
+ <button
961
+ type="submit"
962
+ className="w-full bg-indigo-600 hover:bg-indigo-500 text-white rounded-lg px-4 py-2 font-medium transition-colors"
963
+ >
964
+ Create batch
965
+ </button>
966
+ </form>
967
+ </div>
968
+ ) : (
969
+ <>
970
+ {/* ── Batch header ── */}
971
+ <div className="bg-gray-900 border border-gray-800 rounded-xl p-6 mb-6 flex items-start justify-between">
972
+ <div>
973
+ <h1 className="text-2xl font-semibold text-white">{batch.name}</h1>
974
+ <p className="text-gray-400 text-sm mt-1">Share this code with your students</p>
975
+ </div>
976
+ <div className="bg-gray-800 border border-gray-700 rounded-lg px-4 py-2 text-center ml-4 shrink-0">
977
+ <p className="text-xs text-gray-400 mb-1">Class code</p>
978
+ <p className="text-xl font-mono font-bold text-indigo-400">{batch.class_code}</p>
979
+ </div>
980
+ </div>
981
+
982
+ {/* ── Stats row ── */}
983
+ <div className="grid grid-cols-2 gap-4 mb-6">
984
+ <StatCard label="Topics" value={topics.length} />
985
+ <StatCard
986
+ label="Questions"
987
+ value={topics.reduce((sum, t) => sum + t.question_count, 0)}
988
+ />
989
+ </div>
990
+
991
+ {/* ── Topic list ── */}
992
+ <div className="bg-gray-900 border border-gray-800 rounded-xl p-6">
993
+ <h2 className="text-lg font-semibold text-white mb-4">Topics</h2>
994
+
995
+ {topics.length === 0 && (
996
+ <p className="text-gray-500 text-sm mb-4">No topics yet. Add one below.</p>
997
+ )}
998
+
999
+ <div className="space-y-3 mb-6">
1000
+ {topics.map((topic) => (
1001
+ <div
1002
+ key={topic.id}
1003
+ className="flex items-center justify-between bg-gray-800 border border-gray-700 rounded-lg px-4 py-3"
1004
+ >
1005
+ <div>
1006
+ <p className="text-white font-medium">{topic.name}</p>
1007
+ <p className="text-gray-400 text-xs mt-0.5">{topic.question_count} questions</p>
1008
+ </div>
1009
+ <div className="flex items-center gap-3">
1010
+ <button
1011
+ onClick={() =>
1012
+ navigate(
1013
+ `/instructor/upload?topic_id=${topic.id}&topic_name=${encodeURIComponent(topic.name)}`,
1014
+ )
1015
+ }
1016
+ className="text-sm text-indigo-400 hover:text-indigo-300 transition-colors"
1017
+ >
1018
+ Add questions
1019
+ </button>
1020
+ <button
1021
+ onClick={() => handleToggleUnlock(topic)}
1022
+ className={`text-xs px-3 py-1 rounded-full font-medium transition-colors ${
1023
+ topic.is_unlocked
1024
+ ? 'bg-green-900 text-green-300 hover:bg-green-800'
1025
+ : 'bg-gray-700 text-gray-400 hover:bg-gray-600'
1026
+ }`}
1027
+ >
1028
+ {topic.is_unlocked ? 'Unlocked' : 'Locked'}
1029
+ </button>
1030
+ </div>
1031
+ </div>
1032
+ ))}
1033
+ </div>
1034
+
1035
+ {/* ── Add topic form ── */}
1036
+ <form onSubmit={handleAddTopic} className="flex gap-2">
1037
+ <input
1038
+ type="text"
1039
+ value={newTopicName}
1040
+ onChange={(e) => setNewTopicName(e.target.value)}
1041
+ placeholder="New topic name"
1042
+ className="flex-1 bg-gray-800 border border-gray-700 rounded-lg px-4 py-2 text-white placeholder-gray-500 focus:outline-none focus:border-indigo-500"
1043
+ />
1044
+ <button
1045
+ type="submit"
1046
+ className="bg-indigo-600 hover:bg-indigo-500 text-white rounded-lg px-4 py-2 font-medium transition-colors"
1047
+ >
1048
+ Add
1049
+ </button>
1050
+ </form>
1051
+ {error && <p className="text-red-400 text-sm mt-2">{error}</p>}
1052
+ </div>
1053
+ </>
1054
+ )}
1055
+ </div>
1056
+ </div>
1057
+ )
1058
+ }
1059
+ ```
1060
+
1061
+ - [ ] **Step 2: Verify TypeScript compiles**
1062
+
1063
+ ```bash
1064
+ cd frontend && npx tsc --noEmit && cd ..
1065
+ ```
1066
+
1067
+ Expected: no errors
1068
+
1069
+ - [ ] **Step 3: Commit**
1070
+
1071
+ ```bash
1072
+ git add frontend/src/pages/InstructorDashboard.tsx
1073
+ git commit -m "feat: InstructorDashboard — batch creation, class code display, topic management"
1074
+ ```
1075
+
1076
+ ---
1077
+
1078
+ ## Task 11: Upload page + UploadZone
1079
+
1080
+ **Files:**
1081
+ - Create: `frontend/src/components/instructor/UploadZone.tsx`
1082
+ - Modify: `frontend/src/pages/Upload.tsx`
1083
+
1084
+ - [ ] **Step 1: Create frontend/src/components/instructor/UploadZone.tsx**
1085
+
1086
+ ```typescript
1087
+ import { useRef, useState } from 'react'
1088
+
1089
+ interface Props {
1090
+ onFile: (file: File) => void
1091
+ uploading: boolean
1092
+ }
1093
+
1094
+ export default function UploadZone({ onFile, uploading }: Props) {
1095
+ const inputRef = useRef<HTMLInputElement>(null)
1096
+ const [dragging, setDragging] = useState(false)
1097
+
1098
+ function handleDrop(e: React.DragEvent) {
1099
+ e.preventDefault()
1100
+ setDragging(false)
1101
+ const file = e.dataTransfer.files[0]
1102
+ if (file) onFile(file)
1103
+ }
1104
+
1105
+ function handleChange(e: React.ChangeEvent<HTMLInputElement>) {
1106
+ const file = e.target.files?.[0]
1107
+ if (file) onFile(file)
1108
+ }
1109
+
1110
+ return (
1111
+ <div
1112
+ onDragOver={(e) => { e.preventDefault(); setDragging(true) }}
1113
+ onDragLeave={() => setDragging(false)}
1114
+ onDrop={handleDrop}
1115
+ onClick={() => inputRef.current?.click()}
1116
+ className={`border-2 border-dashed rounded-xl p-12 text-center cursor-pointer transition-colors select-none ${
1117
+ dragging ? 'border-indigo-500 bg-indigo-950' : 'border-gray-700 hover:border-gray-600'
1118
+ } ${uploading ? 'opacity-50 pointer-events-none' : ''}`}
1119
+ >
1120
+ <input
1121
+ ref={inputRef}
1122
+ type="file"
1123
+ accept=".csv"
1124
+ className="hidden"
1125
+ onChange={handleChange}
1126
+ />
1127
+ <p className="text-gray-300 font-medium">
1128
+ {uploading ? 'Uploading...' : 'Drop your CSV here or click to browse'}
1129
+ </p>
1130
+ <p className="text-gray-500 text-sm mt-1">.csv files only</p>
1131
+ </div>
1132
+ )
1133
+ }
1134
+ ```
1135
+
1136
+ - [ ] **Step 2: Implement frontend/src/pages/Upload.tsx**
1137
+
1138
+ ```typescript
1139
+ import { useState } from 'react'
1140
+ import { useNavigate, useSearchParams } from 'react-router-dom'
1141
+ import Navbar from '../components/shared/Navbar'
1142
+ import UploadZone from '../components/instructor/UploadZone'
1143
+ import { uploadCSV } from '../api/upload'
1144
+
1145
+ export default function Upload() {
1146
+ const navigate = useNavigate()
1147
+ const [searchParams] = useSearchParams()
1148
+ const topicId = searchParams.get('topic_id') ?? ''
1149
+ const topicName = searchParams.get('topic_name') ?? 'Unknown topic'
1150
+ const [result, setResult] = useState<{ inserted: number } | null>(null)
1151
+ const [error, setError] = useState('')
1152
+ const [uploading, setUploading] = useState(false)
1153
+
1154
+ async function handleFile(file: File) {
1155
+ if (!topicId) {
1156
+ setError('Missing topic ID — go back to the dashboard.')
1157
+ return
1158
+ }
1159
+ setError('')
1160
+ setUploading(true)
1161
+ try {
1162
+ const res = await uploadCSV(topicId, file)
1163
+ setResult(res)
1164
+ } catch (err) {
1165
+ setError(err instanceof Error ? err.message : 'Upload failed')
1166
+ } finally {
1167
+ setUploading(false)
1168
+ }
1169
+ }
1170
+
1171
+ return (
1172
+ <div className="min-h-screen bg-gray-950">
1173
+ <Navbar />
1174
+ <div className="max-w-2xl mx-auto px-6 py-8">
1175
+ <button
1176
+ onClick={() => navigate('/instructor/dashboard')}
1177
+ className="text-sm text-gray-400 hover:text-white mb-6 inline-flex items-center gap-1 transition-colors"
1178
+ >
1179
+ ← Back to dashboard
1180
+ </button>
1181
+
1182
+ <h1 className="text-2xl font-semibold text-white mb-1">Upload questions</h1>
1183
+ <p className="text-gray-400 text-sm mb-6">
1184
+ Topic: <span className="text-indigo-400">{topicName}</span>
1185
+ </p>
1186
+
1187
+ {result ? (
1188
+ <div className="bg-green-950 border border-green-800 rounded-xl p-8 text-center">
1189
+ <p className="text-green-300 text-lg font-semibold">
1190
+ ✓ {result.inserted} questions uploaded
1191
+ </p>
1192
+ <button
1193
+ onClick={() => navigate('/instructor/dashboard')}
1194
+ className="mt-4 text-sm text-gray-400 hover:text-white transition-colors"
1195
+ >
1196
+ Back to dashboard
1197
+ </button>
1198
+ </div>
1199
+ ) : (
1200
+ <>
1201
+ <UploadZone onFile={handleFile} uploading={uploading} />
1202
+ {error && <p className="text-red-400 text-sm mt-3">{error}</p>}
1203
+ <div className="mt-6 bg-gray-900 border border-gray-800 rounded-lg p-4">
1204
+ <p className="text-gray-400 text-sm font-medium mb-2">CSV format</p>
1205
+ <pre className="text-xs text-gray-500 font-mono whitespace-pre">{`question_text,difficulty\nWhat is Python?,easy\nExplain the GIL,hard\nWhat is a decorator?,medium`}</pre>
1206
+ <p className="text-gray-500 text-xs mt-2">
1207
+ Difficulty: <code className="text-gray-400">easy</code> |{' '}
1208
+ <code className="text-gray-400">medium</code> |{' '}
1209
+ <code className="text-gray-400">hard</code>{' '}
1210
+ (defaults to <code className="text-gray-400">medium</code> if missing or invalid)
1211
+ </p>
1212
+ </div>
1213
+ </>
1214
+ )}
1215
+ </div>
1216
+ </div>
1217
+ )
1218
+ }
1219
+ ```
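The CSV contract shown in the help panel (difficulty defaulting to `medium` when missing or invalid) can be illustrated with a small parser sketch. `parse_rows` is a hypothetical name for illustration only; the real parser is the pure function built in Phase 2's backend task:

```python
import csv
import io

VALID_DIFFICULTIES = {"easy", "medium", "hard"}


def parse_rows(text: str) -> list[dict]:
    """Normalize CSV rows; unknown difficulties fall back to 'medium'."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        question = (row.get("question_text") or "").strip()
        if not question:
            continue  # skip rows with no question text
        difficulty = (row.get("difficulty") or "").strip().lower()
        if difficulty not in VALID_DIFFICULTIES:
            difficulty = "medium"  # the default advertised in the UI
        rows.append({"question_text": question, "difficulty": difficulty})
    return rows
```

A row like `Explain the GIL,` (empty difficulty) therefore normalizes to `medium`, matching the help text above.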
1220
+
1221
+ - [ ] **Step 3: Run full frontend build**
1222
+
1223
+ ```bash
1224
+ cd frontend && npm run build && cd ..
1225
+ ```
1226
+
1227
+ Expected: clean build, no errors
1228
+
1229
+ - [ ] **Step 4: Commit**
1230
+
1231
+ ```bash
1232
+ git add frontend/src/components/instructor/UploadZone.tsx frontend/src/pages/Upload.tsx
1233
+ git commit -m "feat: Upload page with drag-and-drop UploadZone and CSV format guide"
1234
+ ```
1235
+
1236
+ ---
1237
+
1238
+ ## Task 12: Final verification + push
1239
+
1240
+ - [ ] **Step 1: Run all backend tests**
1241
+
1242
+ ```bash
1243
+ uv run pytest tests/ -v
1244
+ ```
1245
+
1246
+ Expected: `17 passed`
1247
+
1248
+ - [ ] **Step 2: Verify frontend build**
1249
+
1250
+ ```bash
1251
+ cd frontend && npm run build && cd ..
1252
+ ```
1253
+
1254
+ Expected: clean build
1255
+
1256
+ - [ ] **Step 3: Verify backend imports cleanly**
1257
+
1258
+ ```bash
1259
+ uv run python -c "from backend.main import app; print('OK')"
1260
+ ```
1261
+
1262
+ Expected: `OK`
1263
+
1264
+ - [ ] **Step 4: Push to GitHub**
1265
+
1266
+ ```bash
1267
+ git push origin main
1268
+ ```
1269
+
1270
+ ---
1271
+
1272
+ ## Self-Review
1273
+
1274
+ **Spec coverage:**
1275
+ - ✅ NeonDB schema — all 6 tables (Task 1)
1276
+ - ✅ `batches.py` router — POST /api/batches, GET /api/batches/mine, GET /api/batches/{batch_id} (Task 4)
1277
+ - ✅ `topics.py` router — POST /api/topics, GET /api/topics?batch_id=..., PATCH /api/topics/{topic_id} (Task 5)
1278
+ - ✅ `upload.py` router — POST /api/upload with multipart CSV (Task 6)
1279
+ - ✅ CSV parser — pure function with 6 unit tests (Task 2)
1280
+ - ✅ DB queries — all batch/topic/question operations (Task 3)
1281
+ - ✅ Frontend: topics.ts + upload.ts API layer (Task 8)
1282
+ - ✅ Frontend: Navbar with logout (Task 9)
1283
+ - ✅ Frontend: StatCard component (Task 9)
1284
+ - ✅ Frontend: InstructorDashboard — batch creation, class code, topic management (Task 10)
1285
+ - ✅ Frontend: Upload page + UploadZone drag-and-drop (Task 11)
1286
+ - ✅ main.py updated with all routers (Task 7)
1287
+
1288
+ **Type consistency check:**
1289
+ - `Batch` interface (topics.ts) matches `_batch_out()` return shape in batches.py ✅
1290
+ - `Topic` interface (topics.ts) matches `_topic_out()` return shape in topics.py ✅
1291
+ - `uploadCSV(topic_id, file)` in upload.ts matches Form(...) params in upload.py ✅
1292
+ - `create_questions_bulk` takes `list[dict]` and returns `int` — matched in upload router ✅
docs/superpowers/plans/2026-03-28-phase-5-langgraph-engine.md ADDED
@@ -0,0 +1,1135 @@
1
+ # Phase 5 — LangGraph Interview Engine
2
+
3
+ > **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
4
+
5
+ **Goal:** Build the LangGraph-powered interview state machine — five nodes, a checkpointer backed by NeonDB, and three FastAPI endpoints (`/interview/start`, `/interview/turn`, `/interview/state/{session_id}`) that drive a full AI mock interview session.
6
+
7
+ **Architecture:** Each interview is a LangGraph graph compiled with an `AsyncPostgresSaver` checkpointer (stored in NeonDB's `checkpointer` schema). The graph is compiled once at app startup and reused across requests. Each session maps to one LangGraph thread identified by `session_id`. LLM calls go to OpenRouter (MiniMax model) via httpx. All prompt builders are pure functions so they can be unit-tested independently of the LLM.
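The session-to-thread mapping implied above is small enough to sketch. `thread_config` is a hypothetical helper (not in this plan's file map) showing the `configurable.thread_id` shape LangGraph checkpointers use to persist and resume per-thread state:

```python
def thread_config(session_id: str) -> dict:
    # LangGraph checkpointers save and restore graph state keyed by
    # configurable.thread_id — here, one thread per interview session.
    return {"configurable": {"thread_id": session_id}}


# Each endpoint would pass this when invoking the shared compiled graph,
# e.g. await graph.ainvoke(inputs, config=thread_config(session_id))
```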
8
+
9
+ **Tech Stack:** LangGraph 0.2.50, langgraph-checkpoint-postgres 2.0.8, httpx 0.27.2, FastAPI 0.115.0, asyncpg 0.29.0, Python 3.11
10
+
11
+ ---
12
+
13
+ ## File Map
14
+
15
+ ### Backend — Create
16
+ - `backend/graph/__init__.py` — empty package marker
17
+ - `backend/graph/state.py` — `InterviewState` TypedDict with `add_messages`
18
+ - `backend/llm.py` — `call_llm(messages, max_tokens)` async httpx wrapper for OpenRouter
19
+ - `backend/prompts.py` — 5 pure prompt-builder functions
20
+ - `backend/graph/nodes.py` — 5 async node functions + `route_after_evaluation`
21
+ - `backend/graph/graph.py` — `build_graph(checkpointer)` → compiled LangGraph
22
+ - `backend/checkpointer.py` — `init_checkpointer()` / `get_checkpointer()` lifecycle helpers
23
+ - `backend/routers/interview.py` — `/interview/start`, `/interview/turn`, `/interview/state/{session_id}`
24
+
25
+ ### Backend — Modify
26
+ - `backend/db/queries.py` — add 5 session query functions
27
+ - `backend/main.py` — init checkpointer + graph in lifespan, register interview router
28
+
29
+ ### Tests — Create
30
+ - `tests/test_prompts.py` — unit tests for all 5 prompt builders
31
+ - `tests/test_routing.py` — unit tests for `route_after_evaluation`
32
+
33
+ ---
34
+
35
+ ## Task 1: Session DB queries
36
+
37
+ **Files:**
38
+ - Modify: `backend/db/queries.py`
39
+
40
+ - [ ] **Step 1: Append session query functions to backend/db/queries.py**
41
+
42
+ Read the file first, then append to the bottom:
43
+
44
+ ```python
45
+ import json as _json
46
+
47
+
48
+ async def get_questions_by_topic(topic_id: str) -> list[asyncpg.Record]:
49
+ async with get_pool().acquire() as conn:
50
+ return await conn.fetch(
51
+ "SELECT * FROM questions WHERE topic_id = $1 ORDER BY created_at ASC",
52
+ topic_id,
53
+ )
54
+
55
+
56
+ async def get_best_session_by_student_topic(
57
+ student_id: str, topic_id: str
58
+ ) -> Optional[asyncpg.Record]:
59
+ async with get_pool().acquire() as conn:
60
+ return await conn.fetchrow(
61
+ """
62
+ SELECT * FROM interview_sessions
63
+ WHERE student_id = $1 AND topic_id = $2 AND status = 'completed'
64
+ ORDER BY score DESC NULLS LAST
65
+ LIMIT 1
66
+ """,
67
+ student_id, topic_id,
68
+ )
69
+
70
+
71
+ async def create_interview_session(
72
+ student_id: str, topic_id: str
73
+ ) -> asyncpg.Record:
74
+ async with get_pool().acquire() as conn:
75
+ return await conn.fetchrow(
76
+ """
77
+ INSERT INTO interview_sessions (student_id, topic_id)
78
+ VALUES ($1, $2)
79
+ RETURNING *
80
+ """,
81
+ student_id, topic_id,
82
+ )
83
+
84
+
85
+ async def update_session_complete(
86
+ session_id: str, score: int, feedback: dict
87
+ ) -> None:
88
+ async with get_pool().acquire() as conn:
89
+ await conn.execute(
90
+ """
91
+ UPDATE interview_sessions
92
+ SET status = 'completed', score = $1, feedback = $2::jsonb,
93
+ completed_at = NOW()
94
+ WHERE id = $3
95
+ """,
96
+ score, _json.dumps(feedback), session_id,
97
+ )
98
+
99
+
100
+ async def get_session_by_id(session_id: str) -> Optional[asyncpg.Record]:
101
+ async with get_pool().acquire() as conn:
102
+ return await conn.fetchrow(
103
+ "SELECT * FROM interview_sessions WHERE id = $1", session_id
104
+ )
105
+ ```
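For reference, the `feedback` dict serialized by `update_session_complete` is expected to be the report JSON produced in Task 3's `build_report_prompt`. An illustrative payload (values invented for the example):

```python
# Illustrative only — keys follow the report-JSON schema defined in Task 3
feedback = {
    "score": 78,
    "summary": "Solid fundamentals; depth thins out on concurrency.",
    "concept_score": 82,
    "depth_score": 70,
    "mistakes": ["Confused the GIL with OS-level locking"],
    "tips": ["Practice explaining the GIL with a concrete example"],
}
```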
106
+
107
+ - [ ] **Step 2: Verify import**
108
+
109
+ ```bash
110
+ uv run python -c "from backend.db import queries; print('OK')"
111
+ ```
112
+
113
+ Expected: `OK`
114
+
115
+ - [ ] **Step 3: Commit**
116
+
117
+ ```bash
118
+ git add backend/db/queries.py
119
+ git commit -m "feat: add session DB queries (create, complete, get, past best)"
120
+ ```
121
+
122
+ ---
123
+
124
+ ## Task 2: InterviewState + LLM client
125
+
126
+ **Files:**
127
+ - Create: `backend/graph/__init__.py`
128
+ - Create: `backend/graph/state.py`
129
+ - Create: `backend/llm.py`
130
+
131
+ - [ ] **Step 1: Create backend/graph/__init__.py** (empty)
132
+
133
+ - [ ] **Step 2: Create backend/graph/state.py**
134
+
135
+ ```python
136
+ from typing import Annotated, Literal, TypedDict
137
+
138
+ from langgraph.graph.message import add_messages
139
+
140
+
141
+ class InterviewState(TypedDict):
142
+ # Static — set once at session start
143
+ topic_name: str
144
+ session_id: str
145
+ student_id: str
146
+ questions_remaining: list[dict] # [{"question_text": str, "difficulty": str}]
147
+ past_best_score: int | None
148
+ past_weak_areas: list[str]
149
+
150
+ # Dynamic — mutates during session
151
+ messages: Annotated[list, add_messages] # appended via reducer
152
+ conversation_summary: str
153
+ questions_asked: list[str]
154
+ student_weak_areas: list[str]
155
+ turn_count: int
156
+ awaiting_counter_response: bool
157
+ last_verdict: str | None # "strong" | "shallow" | "wrong" | None
158
+
159
+ # Terminal — set once at end
160
+ status: Literal["active", "complete"]
161
+ score: int | None
162
+ feedback: dict | None
163
+ ```
164
+
165
+ - [ ] **Step 3: Create backend/llm.py**
166
+
167
+ ```python
168
+ import os
169
+
170
+ import httpx
171
+
172
+ _OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
173
+ _MODEL = "minimax/minimax-01"
174
+
175
+
176
+ async def call_llm(messages: list[dict], max_tokens: int = 512) -> str:
177
+ """Call OpenRouter API. Returns the assistant message content string."""
178
+ api_key = os.getenv("OPENROUTER_API_KEY", "")
179
+ async with httpx.AsyncClient(timeout=60.0) as client:
180
+ response = await client.post(
181
+ _OPENROUTER_URL,
182
+ headers={
183
+ "Authorization": f"Bearer {api_key}",
184
+ "Content-Type": "application/json",
185
+ },
186
+ json={
187
+ "model": _MODEL,
188
+ "messages": messages,
189
+ "max_tokens": max_tokens,
190
+ },
191
+ )
192
+ response.raise_for_status()
193
+ return response.json()["choices"][0]["message"]["content"]
194
+ ```
195
+
196
+ - [ ] **Step 4: Verify imports**
197
+
198
+ ```bash
199
+ uv run python -c "
200
+ from backend.graph.state import InterviewState
201
+ from backend.llm import call_llm
202
+ print('OK')
203
+ "
204
+ ```
205
+
206
+ Expected: `OK`
207
+
208
+ - [ ] **Step 5: Commit**
209
+
210
+ ```bash
211
+ git add backend/graph/__init__.py backend/graph/state.py backend/llm.py
212
+ git commit -m "feat: InterviewState TypedDict and OpenRouter LLM client"
213
+ ```
214
+
215
+ ---
216
+
217
+ ## Task 3: Prompt builders + tests
218
+
219
+ **Files:**
220
+ - Create: `backend/prompts.py`
221
+ - Create: `tests/test_prompts.py`
222
+
223
+ - [ ] **Step 1: Write failing tests first**
224
+
225
+ Create `tests/test_prompts.py`:
226
+
227
+ ```python
228
+ from backend.prompts import (
229
+ build_ask_question_prompt,
230
+ build_counter_prompt,
231
+ build_evaluate_prompt,
232
+ build_report_prompt,
233
+ build_summarize_prompt,
234
+ )
235
+
236
+
237
+ def _is_message_list(result) -> bool:
238
+ return (
239
+ isinstance(result, list)
240
+ and all(isinstance(m, dict) and "role" in m and "content" in m for m in result)
241
+ )
242
+
243
+
244
+ def test_ask_question_prompt_returns_messages():
245
+ result = build_ask_question_prompt("Python", "Some summary", ["What is a list?"])
246
+ assert _is_message_list(result)
247
+ assert len(result) == 2
248
+ combined = " ".join(m["content"] for m in result)
249
+ assert "Python" in combined
250
+ assert "Some summary" in combined
251
+ assert "What is a list?" in combined
252
+
253
+
254
+ def test_ask_question_prompt_no_prior_questions():
255
+ result = build_ask_question_prompt("Python", "", [])
256
+ assert _is_message_list(result)
257
+ combined = " ".join(m["content"] for m in result)
258
+ assert "Python" in combined
259
+
260
+
261
+ def test_evaluate_prompt_returns_messages():
262
+ result = build_evaluate_prompt("What is GIL?", "Global Interpreter Lock", "summary")
263
+ assert _is_message_list(result)
264
+ combined = " ".join(m["content"] for m in result)
265
+ assert "What is GIL?" in combined
266
+ assert "Global Interpreter Lock" in combined
267
+ # Must instruct JSON output
268
+ assert "JSON" in combined or "json" in combined
269
+
270
+
271
+ def test_counter_prompt_returns_messages():
272
+ result = build_counter_prompt("What is GIL?", "I don't know")
273
+ assert _is_message_list(result)
274
+ assert len(result) == 2
275
+ combined = " ".join(m["content"] for m in result)
276
+ assert "What is GIL?" in combined
277
+
278
+
279
+ def test_summarize_prompt_returns_messages():
280
+ messages = [
281
+ {"role": "assistant", "content": "What is a decorator?"},
282
+ {"role": "human", "content": "It wraps a function"},
283
+ ]
284
+ result = build_summarize_prompt(messages)
285
+ assert _is_message_list(result)
286
+ combined = " ".join(m["content"] for m in result)
287
+ assert "decorator" in combined
288
+
289
+
290
+ def test_report_prompt_includes_past_score():
291
+ result = build_report_prompt("Python", ["Q1", "Q2"], ["decorators"], "summary", 72)
292
+ assert _is_message_list(result)
293
+ combined = " ".join(m["content"] for m in result)
294
+ assert "72" in combined
295
+ assert "JSON" in combined or "json" in combined
296
+
297
+
298
+ def test_report_prompt_no_past_score():
299
+ result = build_report_prompt("Python", [], [], "", None)
300
+ assert _is_message_list(result)
301
+ combined = " ".join(m["content"] for m in result)
302
+ assert "first attempt" in combined.lower() or "no previous" in combined.lower() or "first" in combined.lower()
303
+ ```
304
+
305
+ - [ ] **Step 2: Run tests — expect failure**
306
+
307
+ ```bash
308
+ uv run pytest tests/test_prompts.py -v
309
+ ```
310
+
311
+ Expected: `FAILED` — `ModuleNotFoundError`
312
+
313
+ - [ ] **Step 3: Implement backend/prompts.py**
314
+
315
+ ```python
316
+ def build_ask_question_prompt(
317
+ topic: str, summary: str, asked: list[str]
318
+ ) -> list[dict]:
319
+ asked_str = "\n".join(f"- {q}" for q in asked) if asked else "None yet."
320
+ system = (
321
+ f"You are a technical interview AI conducting a mock interview on: {topic}. "
322
+ "Ask one clear, focused technical question. "
323
+ "Do not repeat questions already asked. Be conversational but professional."
324
+ )
325
+ user = (
326
+ f"Conversation summary:\n{summary or 'This is the start of the interview.'}\n\n"
327
+ f"Questions already asked:\n{asked_str}\n\n"
328
+ "Ask the next question. Just the question — no numbering, no preamble."
329
+ )
330
+ return [{"role": "system", "content": system}, {"role": "user", "content": user}]
331
+
332
+
333
+ def build_evaluate_prompt(
334
+ question: str, answer: str, summary: str
335
+ ) -> list[dict]:
336
+ system = (
337
+ "You are evaluating a student's answer in a technical interview. "
338
+ 'Respond with ONLY valid JSON: {"verdict": "strong"|"shallow"|"wrong", "weak_area": "topic or null"}\n'
339
+ "strong = correct and complete. shallow = partially correct, missing depth. "
340
+ "wrong = incorrect or off-topic."
341
+ )
342
+ user = (
343
+ f"Context: {summary or 'Start of interview.'}\n\n"
344
+ f"Question: {question}\n"
345
+ f"Student answer: {answer}\n\n"
346
+ "Evaluate. Return only the JSON object."
347
+ )
348
+ return [{"role": "system", "content": system}, {"role": "user", "content": user}]
349
+
350
+
351
+ def build_counter_prompt(question: str, answer: str) -> list[dict]:
352
+ system = (
353
+ "You are a technical interviewer. The student gave a shallow answer. "
354
+ "Ask ONE specific follow-up probing question to dig deeper into what they missed. "
355
+ "No preamble — just the question."
356
+ )
357
+ user = (
358
+ f"Original question: {question}\n"
359
+ f"Student's shallow answer: {answer}\n\n"
360
+ "Ask one targeted follow-up question."
361
+ )
362
+ return [{"role": "system", "content": system}, {"role": "user", "content": user}]
363
+
364
+
365
+ def build_summarize_prompt(messages: list) -> list[dict]:
366
+ def _content(m) -> str:
367
+ return m.get("content", "") if isinstance(m, dict) else getattr(m, "content", "")
368
+
369
+ def _role(m) -> str:
370
+ if isinstance(m, dict):
371
+ return m.get("role", "unknown")
372
+ name = type(m).__name__.lower()
373
+ return "AI" if "ai" in name else "STUDENT"
374
+
375
+ transcript = "\n".join(f"{_role(m).upper()}: {_content(m)}" for m in messages)
376
+ system = (
377
+ "Summarize this interview transcript in under 150 words. "
378
+ "Cover: topics discussed, student strengths, weak areas identified."
379
+ )
380
+ return [
381
+ {"role": "system", "content": system},
382
+ {"role": "user", "content": f"Transcript:\n{transcript}\n\nSummarize:"},
383
+ ]
384
+
385
+
386
+ def build_report_prompt(
387
+ topic: str,
388
+ asked: list[str],
389
+ weak_areas: list[str],
390
+ summary: str,
391
+ past_score: int | None,
392
+ ) -> list[dict]:
393
+ past_ctx = (
394
+ f"Their previous best score on this topic was {past_score}/100."
395
+ if past_score is not None
396
+ else "This is their first attempt on this topic."
397
+ )
398
+ system = (
399
+ "Generate a final interview performance report. "
400
+ "Respond with ONLY valid JSON:\n"
401
+ '{"score": 0-100, "summary": "string", "concept_score": 0-100, '
402
+ '"depth_score": 0-100, "mistakes": ["string"], "tips": ["string"]}'
403
+ )
404
+ user = (
405
+ f"Topic: {topic}\n"
406
+ f"{past_ctx}\n"
407
+ f"Questions covered: {', '.join(asked) if asked else 'none'}\n"
408
+ f"Weak areas: {', '.join(weak_areas) if weak_areas else 'none'}\n\n"
409
+ f"Interview summary:\n{summary or 'No summary available.'}\n\n"
410
+ "Generate the report JSON."
411
+ )
412
+ return [{"role": "system", "content": system}, {"role": "user", "content": user}]
413
+ ```
414
+
415
+ - [ ] **Step 4: Run tests — expect all pass**
416
+
417
+ ```bash
418
+ uv run pytest tests/test_prompts.py -v
419
+ ```
420
+
421
+ Expected: `7 passed`
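Because `build_evaluate_prompt` and `build_report_prompt` ask for raw JSON, the nodes consuming these replies will need to parse defensively — models sometimes wrap JSON in markdown fences. A minimal sketch; `parse_verdict` is a hypothetical helper, not part of this plan's file map:

```python
import json
import re


def parse_verdict(raw: str) -> dict:
    """Extract {"verdict": ..., "weak_area": ...} from an LLM reply."""
    # Strip optional markdown code fences around the JSON
    text = re.sub(r"^```(?:json)?\s*|```\s*$", "", raw.strip(), flags=re.MULTILINE).strip()
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        data = {}
    if not isinstance(data, dict):
        data = {}
    verdict = data.get("verdict")
    if verdict not in ("strong", "shallow", "wrong"):
        verdict = "shallow"  # safe default: triggers at most one follow-up
    return {"verdict": verdict, "weak_area": data.get("weak_area")}
```

Defaulting to `shallow` on malformed output keeps the session moving instead of crashing a turn.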
422
+
423
+ - [ ] **Step 5: Commit**
424
+
425
+ ```bash
426
+ git add backend/prompts.py tests/test_prompts.py
427
+ git commit -m "feat: prompt builders for all 5 LangGraph nodes with unit tests"
428
+ ```
429
+
430
+ ---
431
+
432
+ ## Task 4: Graph nodes + routing tests
433
+
434
+ **Files:**
- Create: `backend/graph/nodes.py`
- Create: `tests/test_routing.py`

- [ ] **Step 1: Write routing tests first**

Create `tests/test_routing.py`:

```python
"""Unit tests for route_after_evaluation — no LLM, no DB needed."""
from backend.graph.nodes import route_after_evaluation


def _state(**overrides) -> dict:
    """Build a minimal InterviewState dict for routing tests."""
    base = {
        "last_verdict": "strong",
        "turn_count": 1,
        "questions_remaining": [{"question_text": "Q2", "difficulty": "easy"}],
        "awaiting_counter_response": False,
    }
    base.update(overrides)
    return base


def test_shallow_not_in_counter_loop_routes_to_counter():
    state = _state(last_verdict="shallow", awaiting_counter_response=False)
    assert route_after_evaluation(state) == "counter"


def test_shallow_already_in_counter_loop_routes_to_next_question():
    state = _state(last_verdict="shallow", awaiting_counter_response=True, turn_count=2)
    assert route_after_evaluation(state) == "next_question"


def test_strong_routes_to_next_question():
    state = _state(last_verdict="strong", turn_count=1)
    assert route_after_evaluation(state) == "next_question"


def test_wrong_routes_to_next_question():
    state = _state(last_verdict="wrong", turn_count=1)
    assert route_after_evaluation(state) == "next_question"


def test_turn_count_8_routes_to_end():
    state = _state(last_verdict="strong", turn_count=8)
    assert route_after_evaluation(state) == "end"


def test_no_questions_remaining_routes_to_end():
    state = _state(last_verdict="strong", turn_count=3, questions_remaining=[])
    assert route_after_evaluation(state) == "end"


def test_every_4_turns_routes_to_summarize():
    state = _state(last_verdict="strong", turn_count=4)
    assert route_after_evaluation(state) == "summarize"


def test_turn_8_beats_summarize():
    # turn_count=8 should end, not summarize (end takes priority)
    state = _state(last_verdict="strong", turn_count=8)
    assert route_after_evaluation(state) == "end"
```

- [ ] **Step 2: Run routing tests — expect failure**

```bash
uv run pytest tests/test_routing.py -v
```

Expected: `FAILED` — `ModuleNotFoundError`

- [ ] **Step 3: Implement backend/graph/nodes.py**

```python
import json

from langchain_core.messages import RemoveMessage

from backend.graph.state import InterviewState
from backend.llm import call_llm
from backend.prompts import (
    build_ask_question_prompt,
    build_counter_prompt,
    build_evaluate_prompt,
    build_report_prompt,
    build_summarize_prompt,
)
from backend.db import queries


def _msg_content(msg) -> str:
    return msg.get("content", "") if isinstance(msg, dict) else getattr(msg, "content", "")


def _msg_role(msg) -> str:
    if isinstance(msg, dict):
        return msg.get("role", "")
    name = type(msg).__name__.lower()
    if "human" in name:
        return "human"
    return "assistant"


async def ask_question(state: InterviewState) -> dict:
    remaining = list(state["questions_remaining"])
    if not remaining:
        return {}

    question = remaining.pop(0)
    prompt = build_ask_question_prompt(
        state["topic_name"],
        state["conversation_summary"],
        state["questions_asked"],
    )
    response = await call_llm(prompt, max_tokens=200)

    return {
        "questions_remaining": remaining,
        "questions_asked": state["questions_asked"] + [question["question_text"]],
        "messages": [{"role": "assistant", "content": response}],
        "turn_count": state["turn_count"] + 1,
        "awaiting_counter_response": False,
    }


async def evaluate_answer(state: InterviewState) -> dict:
    last_student = next(
        (m for m in reversed(state["messages"]) if _msg_role(m) == "human"),
        None,
    )
    if not last_student:
        return {"last_verdict": "wrong"}

    last_question = state["questions_asked"][-1] if state["questions_asked"] else ""
    prompt = build_evaluate_prompt(
        last_question,
        _msg_content(last_student),
        state["conversation_summary"],
    )
    raw = await call_llm(prompt, max_tokens=100)

    try:
        result = json.loads(raw)
        verdict = result.get("verdict", "wrong")
        weak_area = result.get("weak_area")
    except (json.JSONDecodeError, AttributeError):
        verdict = "wrong"
        weak_area = None

    weak_areas = list(state["student_weak_areas"])
    if verdict == "shallow" and weak_area:
        weak_areas.append(str(weak_area))

    return {
        "last_verdict": verdict,
        "student_weak_areas": weak_areas,
    }


async def counter_question(state: InterviewState) -> dict:
    last_student = next(
        (m for m in reversed(state["messages"]) if _msg_role(m) == "human"),
        None,
    )
    last_question = state["questions_asked"][-1] if state["questions_asked"] else ""
    prompt = build_counter_prompt(
        last_question,
        _msg_content(last_student) if last_student else "",
    )
    response = await call_llm(prompt, max_tokens=150)

    return {
        "messages": [{"role": "assistant", "content": response}],
        "awaiting_counter_response": True,
        "turn_count": state["turn_count"] + 1,
    }


async def summarize(state: InterviewState) -> dict:
    prompt = build_summarize_prompt(state["messages"])
    summary = await call_llm(prompt, max_tokens=200)

    # The messages channel uses the add_messages reducer, so returning []
    # would append nothing — it would NOT clear history. Emit RemoveMessage
    # entries so the compressed turns are actually dropped.
    return {
        "conversation_summary": summary,
        "messages": [RemoveMessage(id=m.id) for m in state["messages"]],
        "awaiting_counter_response": False,
    }


async def generate_report(state: InterviewState) -> dict:
    prompt = build_report_prompt(
        state["topic_name"],
        state["questions_asked"],
        state["student_weak_areas"],
        state["conversation_summary"],
        state["past_best_score"],
    )
    raw = await call_llm(prompt, max_tokens=400)

    try:
        feedback = json.loads(raw)
        score = int(feedback.get("score", 0))
    # AttributeError covers valid JSON that isn't an object (e.g. a list)
    except (json.JSONDecodeError, ValueError, TypeError, AttributeError):
        feedback = {
            "score": 0,
            "summary": raw,
            "concept_score": 0,
            "depth_score": 0,
            "mistakes": [],
            "tips": [],
        }
        score = 0

    await queries.update_session_complete(state["session_id"], score, feedback)

    return {
        "status": "complete",
        "score": score,
        "feedback": feedback,
        "messages": [{"role": "assistant", "content": feedback.get("summary", "Interview complete.")}],
    }


def route_after_evaluation(state: InterviewState) -> str:
    """Routing function for conditional edges after evaluate_answer."""
    verdict = state.get("last_verdict", "wrong")
    turn_count = state["turn_count"]
    questions_remaining = state["questions_remaining"]
    awaiting_counter = state["awaiting_counter_response"]

    # Shallow + not already in counter loop → fire counter question
    if verdict == "shallow" and not awaiting_counter:
        return "counter"

    # End conditions (checked before summarize to avoid a wasted LLM call)
    if turn_count >= 8 or not questions_remaining:
        return "end"

    # Compress memory every 4 turns
    if turn_count % 4 == 0 and turn_count > 0:
        return "summarize"

    return "next_question"
```

- [ ] **Step 4: Run routing tests — expect all pass**

```bash
uv run pytest tests/test_routing.py -v
```

Expected: `8 passed`

- [ ] **Step 5: Run full test suite**

```bash
uv run pytest tests/ -v
```

Expected: `32 passed` (17 existing + 7 prompts + 8 routing)

- [ ] **Step 6: Commit**

```bash
git add backend/graph/nodes.py tests/test_routing.py
git commit -m "feat: LangGraph nodes (ask, evaluate, counter, summarize, report) + routing tests"
```

---

## Task 5: Graph builder + checkpointer

**Files:**
- Create: `backend/graph/graph.py`
- Create: `backend/checkpointer.py`

- [ ] **Step 1: Create backend/graph/graph.py**

```python
from langgraph.graph import END, StateGraph

from backend.graph.nodes import (
    ask_question,
    counter_question,
    evaluate_answer,
    generate_report,
    route_after_evaluation,
    summarize,
)
from backend.graph.state import InterviewState


def build_graph(checkpointer):
    """Compile the interview graph with the given checkpointer. Call once at startup."""
    builder = StateGraph(InterviewState)

    builder.add_node("ask_question", ask_question)
    builder.add_node("evaluate_answer", evaluate_answer)
    builder.add_node("counter_question", counter_question)
    builder.add_node("summarize", summarize)
    builder.add_node("generate_report", generate_report)

    builder.set_entry_point("ask_question")

    builder.add_edge("ask_question", "evaluate_answer")

    builder.add_conditional_edges(
        "evaluate_answer",
        route_after_evaluation,
        {
            "counter": "counter_question",
            "next_question": "ask_question",
            "summarize": "summarize",
            "end": "generate_report",
        },
    )

    builder.add_edge("counter_question", "evaluate_answer")
    builder.add_edge("summarize", "ask_question")
    builder.add_edge("generate_report", END)

    return builder.compile(checkpointer=checkpointer)
```

- [ ] **Step 2: Create backend/checkpointer.py**

```python
import os

from langgraph.checkpoint.postgres.aio import AsyncPostgresSaver

_checkpointer: AsyncPostgresSaver | None = None


async def init_checkpointer() -> AsyncPostgresSaver:
    """
    Initialize the AsyncPostgresSaver and run setup() to create the
    checkpointer schema + tables in NeonDB. Call once at app startup.
    """
    global _checkpointer
    conn_string = os.getenv("NEON_DB_URL", "")

    # langgraph-checkpoint-postgres 2.x: from_conn_string is an async context manager.
    # We enter it at startup and keep it open for the app lifetime.
    cm = AsyncPostgresSaver.from_conn_string(conn_string)
    _checkpointer = await cm.__aenter__()
    await _checkpointer.setup()
    return _checkpointer


def get_checkpointer() -> AsyncPostgresSaver:
    if _checkpointer is None:
        raise RuntimeError("Checkpointer not initialized — call init_checkpointer() first")
    return _checkpointer
```

- [ ] **Step 3: Verify imports compile**

```bash
uv run python -c "
from backend.graph.graph import build_graph
from backend.checkpointer import init_checkpointer, get_checkpointer
print('OK')
"
```

Expected: `OK`

- [ ] **Step 4: Commit**

```bash
git add backend/graph/graph.py backend/checkpointer.py
git commit -m "feat: LangGraph graph builder and AsyncPostgresSaver checkpointer"
```

---

## Task 6: Update main.py

**Files:**
- Modify: `backend/main.py`

- [ ] **Step 1: Replace backend/main.py**

```python
from contextlib import asynccontextmanager

from fastapi import FastAPI

from backend.checkpointer import init_checkpointer
from backend.db.connection import init_db_pool
from backend.graph.graph import build_graph
from backend.routers import auth, batches, interview, topics, upload


@asynccontextmanager
async def lifespan(app: FastAPI):
    # Startup
    await init_db_pool()
    checkpointer = await init_checkpointer()
    app.state.graph = build_graph(checkpointer)
    yield
    # Shutdown — DB pool and checkpointer stay open for the process lifetime


app = FastAPI(lifespan=lifespan)

app.include_router(auth.router, prefix="/api/auth")
app.include_router(batches.router, prefix="/api/batches")
app.include_router(topics.router, prefix="/api/topics")
app.include_router(upload.router, prefix="/api/upload")
app.include_router(interview.router, prefix="/interview")

# React static build — MUST be last
# app.mount("/", StaticFiles(directory="frontend/dist", html=True), name="static")
```

- [ ] **Step 2: Verify — but DON'T start the server yet (interview router doesn't exist yet)**

```bash
uv run python -c "
# Just verify imports, not the full app startup
from backend.graph.graph import build_graph
from backend.checkpointer import init_checkpointer, get_checkpointer
from backend.routers import auth, batches, topics, upload
print('OK')
"
```

Expected: `OK`

- [ ] **Step 3: Commit**

```bash
git add backend/main.py
git commit -m "feat: wire checkpointer + graph into FastAPI lifespan, register interview router"
```

---

## Task 7: Interview router

**Files:**
- Create: `backend/routers/interview.py`

- [ ] **Step 1: Create backend/routers/interview.py**

```python
import json

from fastapi import APIRouter, Depends, HTTPException, Request
from pydantic import BaseModel

from backend.auth.deps import require_student
from backend.db import queries

router = APIRouter()


class StartRequest(BaseModel):
    topic_id: str


class TurnRequest(BaseModel):
    session_id: str
    student_message: str


def _last_ai_message(messages: list) -> str:
    """Extract content of the last assistant/AI message from a list."""
    for m in reversed(messages):
        if isinstance(m, dict):
            role, content = m.get("role", ""), m.get("content", "")
        else:
            # LangChain message objects carry .type ("ai", "human", ...)
            role, content = getattr(m, "type", ""), getattr(m, "content", "")
        if role in ("assistant", "ai"):
            return content
    return ""


@router.post("/start")
async def start_interview(
    body: StartRequest,
    request: Request,
    user: dict = Depends(require_student),
):
    topic = await queries.get_topic_by_id(body.topic_id)
    if not topic:
        raise HTTPException(404, "Topic not found")
    if not topic["is_unlocked"]:
        raise HTTPException(403, "Topic is not unlocked for interviews")

    questions = await queries.get_questions_by_topic(body.topic_id)
    if not questions:
        raise HTTPException(400, "No questions available for this topic")

    past = await queries.get_best_session_by_student_topic(user["user_id"], body.topic_id)
    past_best_score = past["score"] if past else None
    past_weak_areas: list[str] = []
    if past and past["feedback"]:
        fb = past["feedback"] if isinstance(past["feedback"], dict) else json.loads(past["feedback"])
        past_weak_areas = fb.get("tips", [])

    session = await queries.create_interview_session(user["user_id"], body.topic_id)
    session_id = str(session["id"])

    initial_state = {
        "topic_name": topic["name"],
        "session_id": session_id,
        "student_id": user["user_id"],
        "questions_remaining": [
            {"question_text": q["question_text"], "difficulty": q["difficulty"]}
            for q in questions
        ],
        "past_best_score": past_best_score,
        "past_weak_areas": past_weak_areas,
        "messages": [],
        "conversation_summary": "",
        "questions_asked": [],
        "student_weak_areas": [],
        "turn_count": 0,
        "awaiting_counter_response": False,
        "last_verdict": None,
        "status": "active",
        "score": None,
        "feedback": None,
    }

    graph = request.app.state.graph
    config = {"configurable": {"thread_id": session_id}}
    result = await graph.ainvoke(initial_state, config)

    return {
        "session_id": session_id,
        "message": _last_ai_message(result["messages"]),
    }


@router.post("/turn")
async def interview_turn(
    body: TurnRequest,
    request: Request,
    user: dict = Depends(require_student),
):
    session = await queries.get_session_by_id(body.session_id)
    if not session:
        raise HTTPException(404, "Session not found")
    if session["status"] == "completed":
        raise HTTPException(400, "Session is already complete")

    graph = request.app.state.graph
    config = {"configurable": {"thread_id": body.session_id}}

    result = await graph.ainvoke(
        {"messages": [{"role": "human", "content": body.student_message}]},
        config,
    )

    is_complete = result.get("status") == "complete"
    response: dict = {
        "message": _last_ai_message(result["messages"]),
        "is_counter_q": result.get("awaiting_counter_response", False),
        "is_complete": is_complete,
    }
    if is_complete:
        response["feedback"] = result.get("feedback")

    return response


@router.get("/state/{session_id}")
async def get_interview_state(
    session_id: str,
    request: Request,
    user: dict = Depends(require_student),
):
    from backend.checkpointer import get_checkpointer

    checkpointer = get_checkpointer()
    config = {"configurable": {"thread_id": session_id}}
    checkpoint = await checkpointer.aget(config)

    if not checkpoint:
        raise HTTPException(404, "Session state not found")

    channel_values = checkpoint.get("channel_values", {})
    messages = channel_values.get("messages", [])

    last_msg = _last_ai_message(messages) if messages else None

    return {
        "status": channel_values.get("status", "active"),
        "turn_count": channel_values.get("turn_count", 0),
        "last_message": last_msg,
    }
```

- [ ] **Step 2: Verify the full app imports cleanly (all routers now exist)**

```bash
uv run python -c "from backend.main import app; print('OK')"
```

Expected: `OK` (plus JWT warning — fine)

- [ ] **Step 3: Run all tests**

```bash
uv run pytest tests/ -v
```

Expected: `32 passed`

- [ ] **Step 4: Commit**

```bash
git add backend/routers/interview.py
git commit -m "feat: interview router — start, turn, state endpoints"
```

---

## Task 8: Final verification + push

- [ ] **Step 1: Run all tests**

```bash
uv run pytest tests/ -v
```

Expected: `32 passed`

- [ ] **Step 2: Verify all routes registered**

```bash
uv run python -c "
from backend.main import app
routes = [(r.path, list(r.methods)) for r in app.routes if hasattr(r, 'methods')]
for path, methods in sorted(routes):
    print(methods, path)
"
```

Expected output must include:
```
['POST'] /interview/start
['POST'] /interview/turn
['GET'] /interview/state/{session_id}
```

- [ ] **Step 3: Smoke test backend import**

```bash
uv run python -c "from backend.main import app; print('OK')"
```

Expected: `OK`

- [ ] **Step 4: Verify frontend build still passes (no regressions)**

```bash
cd frontend && npm run build && cd ..
```

Expected: clean build

- [ ] **Step 5: Push to GitHub**

```bash
git push origin main
```

---

## Self-Review

**Spec coverage:**
- ✅ `InterviewState` TypedDict with all fields including `add_messages` (Task 2)
- ✅ `call_llm` async OpenRouter wrapper (Task 2)
- ✅ `build_ask_question_prompt` (Task 3)
- ✅ `build_evaluate_prompt` — returns JSON verdict (Task 3)
- ✅ `build_counter_prompt` (Task 3)
- ✅ `build_summarize_prompt` (Task 3)
- ✅ `build_report_prompt` with past_score context (Task 3)
- ✅ `ask_question` node — pops from remaining, adds to asked (Task 4)
- ✅ `evaluate_answer` node — sets last_verdict + grows weak_areas (Task 4)
- ✅ `counter_question` node — sets awaiting_counter_response=True (Task 4)
- ✅ `summarize` node — compresses, clears messages (Task 4)
- ✅ `generate_report` node — writes to DB, sets status=complete (Task 4)
- ✅ `route_after_evaluation` — all 4 routes: counter/next_question/summarize/end (Task 4)
- ✅ Graph wired: entry=ask_question, conditional from evaluate_answer, all edges (Task 5)
- ✅ `AsyncPostgresSaver` with `setup()` call (Task 5)
- ✅ Graph compiled once at startup, stored in `app.state.graph` (Task 6)
- ✅ `POST /interview/start` — unlocked check, create session, init state, ainvoke (Task 7)
- ✅ `POST /interview/turn` — append human message, ainvoke, return AI response (Task 7)
- ✅ `GET /interview/state/{session_id}` — load checkpoint, return status+turn+last_msg (Task 7)
- ✅ Thread ID = session_id (Tasks 5, 7)
- ✅ Session DB queries: get_questions, get_best_past, create_session, update_complete, get_by_id (Task 1)

**Type consistency:**
- `route_after_evaluation` returns `"counter"` | `"next_question"` | `"summarize"` | `"end"` — matches graph edge keys ✅
- `InterviewState.last_verdict` set in `evaluate_answer`, read in `route_after_evaluation` ✅
- `InterviewState.messages` uses `add_messages` reducer — nodes return list, graph appends ✅
- `update_session_complete(session_id, score, feedback)` called in `generate_report` — signature matches Task 1 ✅
pyproject.toml CHANGED

```diff
@@ -17,3 +17,8 @@ dependencies = [
     "python-multipart==0.0.9",
     "uvicorn==0.30.6",
 ]
+
+[dependency-groups]
+dev = [
+    "pytest>=9.0.2",
+]
```
uv.lock CHANGED

```diff
@@ -24,6 +24,11 @@ dependencies = [
     { name = "uvicorn" },
 ]
 
+[package.dev-dependencies]
+dev = [
+    { name = "pytest" },
+]
+
 [package.metadata]
 requires-dist = [
     { name = "asyncpg", specifier = "==0.29.0" },
@@ -39,6 +44,9 @@ requires-dist = [
     { name = "uvicorn", specifier = "==0.30.6" },
 ]
 
+[package.metadata.requires-dev]
+dev = [{ name = "pytest", specifier = ">=9.0.2" }]
+
 [[package]]
 name = "annotated-types"
 version = "0.7.0"
@@ -448,6 +456,15 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/0e/61/66938bbb5fc52dbdf84594873d5b51fb1f7c7794e9c0f5bd885f30bc507b/idna-3.11-py3-none-any.whl", hash = "sha256:771a87f49d9defaf64091e6e6fe9c18d4833f140bd19464795bc32d966ca37ea", size = 71008, upload-time = "2025-10-12T14:55:18.883Z" },
 ]
 
+[[package]]
+name = "iniconfig"
+version = "2.3.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/72/34/14ca021ce8e5dfedc35312d08ba8bf51fdd999c576889fc2c24cb97f4f10/iniconfig-2.3.0.tar.gz", hash = "sha256:c76315c77db068650d49c5b56314774a7804df16fee4402c1f19d6d15d8c4730", size = 20503, upload-time = "2025-10-18T21:55:43.219Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/cb/b1/3846dd7f199d53cb17f49cba7e651e9ce294d8497c8c150530ed11865bb8/iniconfig-2.3.0-py3-none-any.whl", hash = "sha256:f631c04d2c48c52b84d0d0549c99ff3859c98df65b3101406327ecc7d53fbf12", size = 7484, upload-time = "2025-10-18T21:55:41.639Z" },
+]
+
 [[package]]
 name = "jsonpatch"
 version = "1.33"
@@ -688,6 +705,15 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/20/12/38679034af332785aac8774540895e234f4d07f7545804097de4b666afd8/packaging-25.0-py3-none-any.whl", hash = "sha256:29572ef2b1f17581046b3a2227d5c611fb25ec70ca1ba8554b24b0e69331a484", size = 66469, upload-time = "2025-04-19T11:48:57.875Z" },
 ]
 
+[[package]]
+name = "pluggy"
+version = "1.6.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/f9/e2/3e91f31a7d2b083fe6ef3fa267035b518369d9511ffab804f839851d2779/pluggy-1.6.0.tar.gz", hash = "sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3", size = 69412, upload-time = "2025-05-15T12:30:07.975Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/54/20/4d324d65cc6d9205fabedc306948156824eb9f0ee1633355a8f7ec5c66bf/pluggy-1.6.0-py3-none-any.whl", hash = "sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746", size = 20538, upload-time = "2025-05-15T12:30:06.134Z" },
+]
+
 [[package]]
 name = "psycopg"
 version = "3.3.3"
@@ -792,6 +818,31 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/13/63/b95781763e8d84207025071c0cec16d921c0163c7a9033ae4b9a0e020dc7/pydantic_core-2.20.1-cp313-none-win_amd64.whl", hash = "sha256:65db0f2eefcaad1a3950f498aabb4875c8890438bc80b19362cf633b87a8ab20", size = 1898013, upload-time = "2024-07-03T17:02:15.157Z" },
 ]
 
+[[package]]
+name = "pygments"
+version = "2.19.2"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/b0/77/a5b8c569bf593b0140bde72ea885a803b82086995367bf2037de0159d924/pygments-2.19.2.tar.gz", hash = "sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887", size = 4968631, upload-time = "2025-06-21T13:39:12.283Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/c7/21/705964c7812476f378728bdf590ca4b771ec72385c533964653c68e86bdc/pygments-2.19.2-py3-none-any.whl", hash = "sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b", size = 1225217, upload-time = "2025-06-21T13:39:07.939Z" },
+]
+
+[[package]]
+name = "pytest"
+version = "9.0.2"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "colorama", marker = "sys_platform == 'win32'" },
+    { name = "iniconfig" },
+    { name = "packaging" },
+    { name = "pluggy" },
+    { name = "pygments" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/d1/db/7ef3487e0fb0049ddb5ce41d3a49c235bf9ad299b6a25d5780a89f19230f/pytest-9.0.2.tar.gz", hash = "sha256:75186651a92bd89611d1d9fc20f0b4345fd827c41ccd5c299a868a05d70edf11", size = 1568901, upload-time = "2025-12-06T21:30:51.014Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/3b/ab/b3226f0bd7cdcf710fbede2b3548584366da3b19b5021e74f5bde2a8fa3f/pytest-9.0.2-py3-none-any.whl", hash = "sha256:711ffd45bf766d5264d487b917733b453d917afd2b0ad65223959f59089f875b", size = 374801, upload-time = "2025-12-06T21:30:49.154Z" },
+]
+
 [[package]]
 name = "python-dotenv"
 version = "1.0.1"
```