Study Pipeline
Overview
A one-page web app that runs a perceptual discrimination study. Participants see images one at a time and judge whether colorization artifacts are present or absent. No login, no score reveal: pure signal-detection data collection.
Participant Flow
Welcome / Colorblindness Screen
├── 3 Ishihara plates (demo → red-green → blue-yellow)
└── Expertise dropdown (novice / hobbyist / professional / researcher)
│
▼
Tutorial (3 steps)
Step 1 - artifact reference: what artifacts look like (static explainer image)
Step 2 - practice real: participant sees a ground-truth image, makes a judgment, gets feedback
Step 3 - practice fake: participant sees a colorized image, makes a judgment, gets feedback
│
▼
50-Trial Loop
├── Image displayed (fixed-size container, objectFit: contain)
├── Participant responds: ABSENT (←) or PRESENT (→)
│     • Swipe right / → key / green button = PRESENT (artifacts detected)
│     • Swipe left / ← key / red button = ABSENT (no artifacts)
├── Response + timing logged to backend (fire-and-forget)
└── Repeat until trial 50
│
▼
Done Screen
└── Thank-you message + share link
Image Sampling (per session)
- 10 ground-truth (real) images sampled randomly from the 15 available
- 8 colorized (fake) images per method × 5 methods = 40
- Total: 50 trials, shuffled
- Balance enforced: within each method, 1 image is drawn from each of the 6 (variant × dataset) groups (6 groups × 1 = 6); the 2 extra are drawn randomly from the remaining pool
- GT varies session to session: same 15 base scenes, a different 10 selected each time
Methods: bigcolor, ddcolor, disco, unicolor, mixed
Variants: ortho, standard
Datasets: coco, imagenet, instance
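The balanced sampling above can be sketched as follows. This is a minimal illustration, not the app's actual sampling code; the manifest structure (manifest["real"], manifest["fake"][method][(variant, dataset)]) is an assumption for illustration.

```python
import random

METHODS = ["bigcolor", "ddcolor", "disco", "unicolor", "mixed"]
VARIANTS = ["ortho", "standard"]
DATASETS = ["coco", "imagenet", "instance"]

def sample_session(manifest, rng=random):
    """Draw 50 trials: 10 real + 8 fakes per method, balanced over variant x dataset."""
    # 10 of the 15 ground-truth scenes, freshly drawn each session
    trials = [{"image": img, "label": "real"}
              for img in rng.sample(manifest["real"], 10)]
    for method in METHODS:
        groups = [(v, d) for v in VARIANTS for d in DATASETS]  # 6 groups
        # one image from each (variant, dataset) group
        picked = [rng.choice(manifest["fake"][method][g]) for g in groups]
        # 2 extra drawn randomly from this method's remaining pool
        pool = [img for g in groups for img in manifest["fake"][method][g]
                if img not in picked]
        picked += rng.sample(pool, 2)
        trials += [{"image": img, "label": "fake", "method": method}
                   for img in picked]
    rng.shuffle(trials)
    return trials  # 10 + 5 * 8 = 50
```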
Response Encoding
| User action | Button | Color | response value stored | Meaning |
|---|---|---|---|---|
| Swipe right / → | PRESENT | Green | fake | Participant detected artifacts |
| Swipe left / ← | ABSENT | Red | real | Participant saw no artifacts |
The label field encodes ground truth: fake (colorized) or real (ground-truth photo). correct = 1 when response == label.
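A minimal sketch of this encoding and scoring; the record helper and the "left"/"right" direction names are hypothetical illustrations, not the app's actual identifiers.

```python
ENCODING = {
    "right": "fake",  # swipe right / right-arrow key / green button: PRESENT (artifacts detected)
    "left": "real",   # swipe left / left-arrow key / red button: ABSENT (no artifacts)
}

def record(direction: str, label: str) -> dict:
    """Map a swipe direction to the stored response value and score it against ground truth."""
    response = ENCODING[direction]
    return {"response": response, "label": label, "correct": int(response == label)}
```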
Tech Stack
| Layer | Tech |
|---|---|
| Frontend | React 18 + Vite + MUI v5 dark theme |
| Backend | FastAPI + aiosqlite |
| Database | SQLite (local: ./responses.db, Fly.io: /app/data/responses.db) |
| Deployment | Fly.io (single Docker container, persistent volume) |
| Session state | localStorage (resume on reload, UUID per participant) |
Key Files
colorization-webapp/
├── backend/
│   ├── main.py            - FastAPI app, static mounts
│   ├── db.py              - SQLite init, connection
│   └── routes/
│       ├── session.py     - /api/session/start, /respond, /complete + sampling logic
│       └── results.py     - /api/results/csv, /summary (key-protected)
├── frontend/src/
│   ├── pages/
│   │   ├── Welcome.jsx    - colorblindness plates + expertise
│   │   ├── Tutorial.jsx   - 3-step practice
│   │   ├── Trial.jsx      - main 50-trial loop
│   │   └── Done.jsx       - thank-you + share
│   └── components/
│       ├── SwipeCard.jsx  - touch/drag swipe handler
│       └── ProgressBar.jsx
├── image_samples/         - 165 images (served at /images/...)
├── tutorial/              - tutorial assets + Ishihara plates
├── manifest.json          - image metadata index
├── Dockerfile
└── fly.toml
Admin Endpoints
Both require ?key=colorturingtest2025.
| Endpoint | Description |
|---|---|
| GET /api/results/csv?key=colorturingtest2025 | Download full response CSV |
| GET /api/results/summary?key=colorturingtest2025 | JSON summary: per-method detection rates, overall accuracy |
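The summary endpoint's computation could look roughly like this. It is a synchronous sqlite3 sketch (the backend actually uses aiosqlite), and the responses table columns (method, label, response, correct) are assumed for illustration.

```python
import sqlite3
from collections import defaultdict

def summarize(db_path="./responses.db"):
    """Per-method detection rate (fake trials judged PRESENT) plus overall accuracy."""
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT method, label, response, correct FROM responses"
    ).fetchall()
    con.close()
    hits, fakes = defaultdict(int), defaultdict(int)
    n_correct = 0
    for method, label, response, correct in rows:
        n_correct += correct
        if label == "fake":  # only colorized trials count toward detection rate
            fakes[method] += 1
            hits[method] += (response == "fake")
    return {
        "overall_accuracy": n_correct / len(rows) if rows else None,
        "detection_rate": {m: hits[m] / fakes[m] for m in fakes},
    }
```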