Initial anonymous release: phyground-code
This view is limited to 50 files because it contains too many changes; see the raw diff for the full list.
- README.md +71 -0
- dataprocessing/__init__.py +0 -0
- dataprocessing/analysis/ablation_prompt_enhancement.py +719 -0
- dataprocessing/common/__init__.py +0 -0
- dataprocessing/common/gemini.py +51 -0
- dataprocessing/common/pipeline.py +153 -0
- dataprocessing/common/video_id.py +221 -0
- dataprocessing/refine/__init__.py +0 -0
- dataprocessing/refine/enhance_prompts_physics.py +237 -0
- dataprocessing/refine/gen_hard_subset.py +294 -0
- dataprocessing/refine/gen_humaneval_set.py +16 -0
- evals/__init__.py +0 -0
- evals/eval_types.py +154 -0
- evals/human_eval/__init__.py +0 -0
- evals/human_eval/app.py +869 -0
- evals/human_eval/assign.py +252 -0
- evals/human_eval/check_db_json_alignment.py +95 -0
- evals/human_eval/config.py +84 -0
- evals/human_eval/coverage_report.py +88 -0
- evals/human_eval/db.py +124 -0
- evals/human_eval/filter_db.py +366 -0
- evals/human_eval/import_videos.py +96 -0
- evals/human_eval/init_test_db.py +66 -0
- evals/human_eval/static/rate.js +211 -0
- evals/human_eval/static/style.css +1277 -0
- evals/human_eval/supplement_laws.py +17 -0
- evals/human_eval/templates/_progress_bar.html +16 -0
- evals/human_eval/templates/_scale_table.html +17 -0
- evals/human_eval/templates/base.html +29 -0
- evals/human_eval/templates/dashboard.html +191 -0
- evals/human_eval/templates/demo.html +132 -0
- evals/human_eval/templates/demographics.html +65 -0
- evals/human_eval/templates/guide.html +108 -0
- evals/human_eval/templates/login.html +18 -0
- evals/human_eval/templates/rate_compare.html +154 -0
- evals/human_eval/templates/task_list.html +102 -0
- evals/human_eval/templates/thanks.html +44 -0
- evals/human_eval/tests/__init__.py +0 -0
- evals/human_eval/tests/conftest.py +43 -0
- evals/human_eval/tests/test_assign.py +246 -0
- evals/human_eval/tests/test_db.py +63 -0
- evals/human_eval/tests/test_import.py +56 -0
- evals/human_eval/tests/test_routes.py +149 -0
- evals/physics_criteria.py +240 -0
- evals/prompts/__init__.py +210 -0
- evals/prompts/cot-subq.yaml +128 -0
- evals/prompts/cotnosubq.yaml +105 -0
- evals/prompts/dashboard.py +212 -0
- evals/prompts/default.yaml +124 -0
- evals/prompts/subq+answer.yaml +123 -0
README.md
ADDED (+71 lines)
---
language:
- en
tags:
- code
- video-evaluation
- benchmark
- judge
- anonymous-release
---

# Anonymous Release — Code

Source code for the benchmark, evaluation, and judge-training pipeline that
accompanies the companion data release [`phyground`](../datasets/) and the
LoRA judge adapter under [`../model/`](../model/).

This drop contains 43 Python source files plus the HTML/CSS/JS assets needed
by the human-annotation app. Shell scripts, configuration YAMLs,
prompts/answers JSONs, generated dashboards, databases, and binary assets are
intentionally excluded — see the dataset and model cards for the artifacts and
prompts those scripts consume.

## Layout

```
dataprocessing/
  common/    # Vertex AI / OpenAI client helpers, video-id utilities
  refine/    # Prompt-set construction: enhance, dedup, hard-subset,
             # humaneval-set assembly, removal sync
  analysis/  # Ablation: prompt-enhancement effect on judges

evals/
  eval_types.py        # Typed result containers for VLM-as-judge runs
  physics_criteria.py  # Physical-law sub-rubric definitions
  sub_questions.py     # Sub-question rendering for CoT/SubQ prompts
  prompts/             # Prompt-template loaders (YAMLs withheld; see model card)
  human_eval/          # Flask-based human-annotation app + tests + templates

judge_training/
  data/  # Build SFT data for ms-swift from raw judgement logs
         # (schema, sampling, naming, Claude-CoT/DB builders)
```

## Companion artifacts

- **Dataset** (250 prompts × 8 video models = 2 000 videos, sub-rubric
  ground truth): `../datasets/` — see its `README.md`.
- **Model** (LoRA judge adapter, prompt template, inference script):
  `../model/` — see its `README.md` and `infer.py`.

## Dependencies (top-level)

The pipeline relies on the following open-source components. Versions match
those reported in the paper.

| Component | Used for |
| --- | --- |
| `transformers`, `peft`, `qwen-vl-utils[decord]` | Judge inference |
| `ms-swift`, `deepspeed` | Judge LoRA training (ZeRO-2) |
| `vllm` (OpenAI-compatible server) | Hosting the base VLM for evaluation |
| `google-genai` / Vertex AI | Gemini family runs |
| `anthropic` / Vertex AI | Claude family runs |
| `openai` Python SDK | OpenAI / GPT family runs |
| `flask`, `sqlite3`, `selenium` | Human-annotation web app |

## License

Code is released under the same anonymous-review terms as the rest of this
release. No identifying metadata is included.
dataprocessing/__init__.py
ADDED (empty file)
dataprocessing/analysis/ablation_prompt_enhancement.py
ADDED (+719 lines)
```python
#!/usr/bin/env python3
"""
Ablation study: effect of prompt enhancement (adding physical law/phenomenon
descriptions) on VLM evaluation scores.

Compares backup (pre-enhancement) eval files with current (post-enhancement)
eval files, analyzing score deltas per video where the prompt actually changed.

Run from the anonymous root dir:
    python -m dataprocessing.analysis.ablation_prompt_enhancement
"""

import json
import os
import re
import sys
from collections import defaultdict
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple

# ---------------------------------------------------------------------------
# Configuration
# ---------------------------------------------------------------------------

DATA_ROOT = "data"
BACKUP_DIR = os.path.join(DATA_ROOT, "backup_before_laws_update")
VIDEOS_DIR = os.path.join(DATA_ROOT, "videos")

GENERAL_METRICS = ["SA", "PTV", "persistence"]
SCORE_BINS = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]

# Law-to-domain mapping
LAW_TO_DOMAIN = {
    # collision
    "collision": "collision",
    "impenetrability": "collision",
    "momentum_transfer": "collision",
    "momentum": "collision",
    "elastic_deformation": "collision",
    # gravity
    "gravity": "gravity",
    "free_fall": "gravity",
    "projectile_motion": "gravity",
    "buoyancy": "gravity",
    # fluid
    "fluid_continuity": "fluid",
    "flow_dynamics": "fluid",
    "flow": "fluid",
    "viscosity": "fluid",
    "surface_tension": "fluid",
    "pressure": "fluid",
    "continuity": "fluid",
    # temporal / motion
    "inertia": "temporal",
    "acceleration": "temporal",
    "velocity": "temporal",
    "displacement": "temporal",
    # lighting
    "reflection": "lighting",
    "refraction": "lighting",
    "light_absorption": "lighting",
    "shadow": "lighting",
    "illumination": "lighting",
    # deformation
    "deformation": "deformation",
    "plastic_deformation": "deformation",
    # material
    "material": "material",
    "rigidity": "material",
    "elasticity": "material",
    "phase_transition": "material",
    "melting": "material",
    "combustion": "material",
}

# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------


def parse_judge_key(judge_str: str) -> str:
    """Extract a short judge key like 'gemini', 'qwen', 'gpt' from the judge field."""
    if not judge_str:
        return "unknown"
    return judge_str.split(":")[0]


def parse_filename_judge(filename: str) -> str:
    """Extract the judge key from a filename like eval_gemini_..., eval_qwen_permetric_..."""
    m = re.match(r"eval_(gemini|qwen|gpt)", filename)
    return m.group(1) if m else "unknown"


def is_permetric(filename: str) -> bool:
    return "permetric" in filename


def get_physical_avg(result: dict) -> Optional[float]:
    """Extract the physical average from a result entry."""
    phys = result.get("physical")
    if phys is None:
        return None
    if isinstance(phys, (int, float)):
        return float(phys)
    if isinstance(phys, dict):
        avg = phys.get("avg")
        if avg is not None:
            return float(avg)
    return None


def get_per_law_scores(result: dict) -> Dict[str, float]:
    """Extract per-law scores from a result entry."""
    phys = result.get("physical")
    if not isinstance(phys, dict):
        return {}
    laws = phys.get("laws", {})
    out = {}
    for law_name, law_data in laws.items():
        if isinstance(law_data, dict) and law_data.get("score") is not None:
            out[law_name] = float(law_data["score"])
        elif isinstance(law_data, (int, float)):
            out[law_name] = float(law_data)
    return out


def extract_dataset_from_video_dir(video_dir: str) -> str:
    """Extract the dataset name from the video_dir field,
    e.g. 'data/videos/cosmos-predict2.5-2b-wmb/' -> 'wmb'."""
    basename = os.path.basename(video_dir.rstrip("/"))
    # Try to find the dataset suffix: wmb, video_phy_2, physics_iq, openvid
    for ds in ["wmb", "video_phy_2", "physics_iq", "openvid"]:
        if basename.endswith(f"-{ds}"):
            return ds
    return basename


def extract_model_from_video_dir(video_dir: str) -> str:
    """Extract the model name from video_dir."""
    basename = os.path.basename(video_dir.rstrip("/"))
    for ds in ["wmb", "video_phy_2", "physics_iq", "openvid"]:
        suffix = f"-{ds}"
        if basename.endswith(suffix):
            return basename[: -len(suffix)]
    return basename


def load_eval_file(filepath: str) -> Optional[dict]:
    """Load an eval JSON file, returning None on error."""
    try:
        with open(filepath, "r") as f:
            data = json.load(f)
        if not data.get("results"):
            return None
        return data
    except (json.JSONDecodeError, FileNotFoundError, KeyError):
        return None


def get_timestamp_from_filename(filename: str) -> str:
    """Extract the timestamp from a filename like eval_gemini_20260322_200226.json."""
    m = re.search(r"(\d{8}_\d{6})", filename)
    return m.group(1) if m else ""


# ---------------------------------------------------------------------------
# Core: find old/new pairs
# ---------------------------------------------------------------------------


def find_pairs() -> List[Dict[str, Any]]:
    """
    Find all (old_file, new_file) pairs for comparison.

    Returns a list of dicts with keys: old_path, new_path, model, dataset,
    judge, mode (batched/permetric).
    """
    pairs = []

    if not os.path.isdir(BACKUP_DIR):
        print(f"ERROR: Backup directory not found: {BACKUP_DIR}", file=sys.stderr)
        return pairs

    for backup_model in sorted(os.listdir(BACKUP_DIR)):
        backup_model_dir = os.path.join(BACKUP_DIR, backup_model)
        if not os.path.isdir(backup_model_dir):
            continue

        for fname in sorted(os.listdir(backup_model_dir)):
            if not fname.startswith("eval_") or not fname.endswith(".json"):
                continue

            old_path = os.path.join(backup_model_dir, fname)
            old_data = load_eval_file(old_path)
            if old_data is None:
                continue

            video_dir = old_data.get("video_dir", "")
            if not video_dir:
                continue

            dataset = extract_dataset_from_video_dir(video_dir)
            model = extract_model_from_video_dir(video_dir)
            judge = parse_judge_key(old_data.get("judge", ""))
            mode = "permetric" if is_permetric(fname) else "batched"

            # Find the corresponding current directory
            current_dir = os.path.join(VIDEOS_DIR, f"{model}-{dataset}")
            if not os.path.isdir(current_dir):
                continue

            # Find the newest matching eval file in the current dir
            old_timestamp = get_timestamp_from_filename(fname)
            best_new_path = None
            best_new_ts = ""

            for cur_fname in os.listdir(current_dir):
                if not cur_fname.startswith("eval_") or not cur_fname.endswith(".json"):
                    continue
                if ".old_pre_t26." in cur_fname:
                    continue

                cur_judge = parse_filename_judge(cur_fname)
                cur_mode = "permetric" if is_permetric(cur_fname) else "batched"

                if cur_judge != judge or cur_mode != mode:
                    continue

                cur_ts = get_timestamp_from_filename(cur_fname)

                # Reject files with the same timestamp as the old file (these
                # are copies) and anything not strictly newer than the old file.
                if cur_ts <= old_timestamp:
                    continue

                # Pick the newest one
                if cur_ts > best_new_ts:
                    best_new_ts = cur_ts
                    best_new_path = os.path.join(current_dir, cur_fname)

            if best_new_path:
                pairs.append(
                    {
                        "old_path": old_path,
                        "new_path": best_new_path,
                        "model": model,
                        "dataset": dataset,
                        "judge": judge,
                        "mode": mode,
                    }
                )

    return pairs


# ---------------------------------------------------------------------------
# Core: compute deltas
# ---------------------------------------------------------------------------


def compute_deltas(old_data: dict, new_data: dict) -> List[Dict[str, Any]]:
    """
    For each video present in both old and new where the prompt changed,
    compute score deltas.

    Returns a list of per-video delta records.
    """
    old_by_video = {r["video"]: r for r in old_data["results"]}
    new_by_video = {r["video"]: r for r in new_data["results"]}

    deltas = []
    common_videos = set(old_by_video.keys()) & set(new_by_video.keys())

    for vid in sorted(common_videos):
        old_r = old_by_video[vid]
        new_r = new_by_video[vid]

        # Only analyze videos where the prompt actually changed
        if old_r.get("prompt", "") == new_r.get("prompt", ""):
            continue

        rec: Dict[str, Any] = {
            "video": vid,
            "old_prompt": old_r.get("prompt", ""),
            "new_prompt": new_r.get("prompt", ""),
            "physical_laws": new_r.get("physical_laws", old_r.get("physical_laws", [])),
        }

        # General metrics deltas
        for m in GENERAL_METRICS:
            old_val = old_r.get(m)
            new_val = new_r.get(m)
            if old_val is not None and new_val is not None:
                rec[f"{m}_old"] = float(old_val)
                rec[f"{m}_new"] = float(new_val)
                rec[f"{m}_delta"] = float(new_val) - float(old_val)

        # general_avg delta
        old_ga = old_r.get("general_avg")
        new_ga = new_r.get("general_avg")
        if old_ga is not None and new_ga is not None:
            rec["general_avg_old"] = float(old_ga)
            rec["general_avg_new"] = float(new_ga)
            rec["general_avg_delta"] = float(new_ga) - float(old_ga)

        # physical_avg delta
        old_pa = get_physical_avg(old_r)
        new_pa = get_physical_avg(new_r)
        if old_pa is not None and new_pa is not None:
            rec["physical_avg_old"] = float(old_pa)
            rec["physical_avg_new"] = float(new_pa)
            rec["physical_avg_delta"] = float(new_pa) - float(old_pa)

        # Per-law score deltas
        old_laws = get_per_law_scores(old_r)
        new_laws = get_per_law_scores(new_r)
        law_deltas = {}
        for law in set(old_laws.keys()) & set(new_laws.keys()):
            law_deltas[law] = new_laws[law] - old_laws[law]
        rec["per_law_deltas"] = law_deltas

        deltas.append(rec)

    return deltas
```
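The prompt-changed filter at the heart of `compute_deltas` above can be exercised on a pair of toy eval payloads. Field names follow the script's accessors; the videos, prompts, and scores here are invented:

```python
# Two toy eval payloads: only v1's prompt was enhanced between the runs.
old = {"results": [
    {"video": "v1.mp4", "prompt": "a ball falls", "general_avg": 3.0},
    {"video": "v2.mp4", "prompt": "water pours", "general_avg": 4.0},
]}
new = {"results": [
    {"video": "v1.mp4",
     "prompt": "a ball falls (gravity: constant downward acceleration)",
     "general_avg": 3.5},
    {"video": "v2.mp4", "prompt": "water pours", "general_avg": 4.5},
]}

old_by = {r["video"]: r for r in old["results"]}
new_by = {r["video"]: r for r in new["results"]}

deltas = []
for vid in sorted(set(old_by) & set(new_by)):
    o, n = old_by[vid], new_by[vid]
    if o.get("prompt", "") == n.get("prompt", ""):
        continue  # unchanged prompt: excluded from the ablation
    deltas.append({"video": vid,
                   "general_avg_delta": n["general_avg"] - o["general_avg"]})

print(deltas)  # [{'video': 'v1.mp4', 'general_avg_delta': 0.5}]
```

Only `v1.mp4` survives the filter: `v2.mp4` scored differently in the second run, but since its prompt is unchanged it says nothing about the enhancement.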
```python
# ---------------------------------------------------------------------------
# Aggregation
# ---------------------------------------------------------------------------


def safe_mean(values: list) -> Optional[float]:
    if not values:
        return None
    return sum(values) / len(values)


def format_delta(val: Optional[float], decimals: int = 4) -> str:
    if val is None:
        return "N/A"
    sign = "+" if val >= 0 else ""
    return f"{sign}{val:.{decimals}f}"


def format_float(val: Optional[float], decimals: int = 4) -> str:
    if val is None:
        return "N/A"
    return f"{val:.{decimals}f}"


class AblationAnalysis:
    def __init__(self):
        # Each entry: (model, dataset, judge, mode, delta_record)
        self.all_records: List[Tuple[str, str, str, str, Dict[str, Any]]] = []

    def add(self, model: str, dataset: str, judge: str, mode: str,
            deltas: List[Dict[str, Any]]):
        for d in deltas:
            self.all_records.append((model, dataset, judge, mode, d))

    def _filter(
        self,
        model: Optional[str] = None,
        dataset: Optional[str] = None,
        judge: Optional[str] = None,
    ) -> List[Dict[str, Any]]:
        out = []
        for m, ds, j, mode, rec in self.all_records:
            if model and m != model:
                continue
            if dataset and ds != dataset:
                continue
            if judge and j != judge:
                continue
            out.append(rec)
        return out

    def _metric_deltas(self, records: List[Dict], metric_key: str) -> List[float]:
        key = f"{metric_key}_delta"
        return [r[key] for r in records if key in r]

    def overall_summary(self) -> str:
        import statistics

        lines = []
        records = self._filter()
        n = len(records)
        lines.append(f"**Total video comparisons (prompt changed):** {n}")
        lines.append("")

        if n == 0:
            lines.append("No data to analyze.")
            return "\n".join(lines)

        # Table header
        metrics = GENERAL_METRICS + ["general_avg", "physical_avg"]
        lines.append("| Metric | Mean Delta | Median Delta | Std Dev | N |")
        lines.append("|--------|-----------|-------------|---------|---|")
        for metric in metrics:
            vals = self._metric_deltas(records, metric)
            if not vals:
                lines.append(f"| {metric} | N/A | N/A | N/A | 0 |")
                continue
            mean = statistics.mean(vals)
            median = statistics.median(vals)
            stdev = statistics.stdev(vals) if len(vals) > 1 else 0.0
            lines.append(
                f"| {metric} | {format_delta(mean)} | {format_delta(median)} | "
                f"{format_float(stdev)} | {len(vals)} |"
            )
        return "\n".join(lines)

    def per_group_table(self, group_key: str) -> str:
        """Group by model, dataset, or judge."""
        groups: Dict[str, List[Dict]] = defaultdict(list)
        for m, ds, j, mode, rec in self.all_records:
            if group_key == "model":
                key = m
            elif group_key == "dataset":
                key = ds
            elif group_key == "judge":
                key = j
            else:
                key = "all"
            groups[key].append(rec)

        metrics = GENERAL_METRICS + ["general_avg", "physical_avg"]
        lines = []
        header = f"| {group_key.capitalize()} | N |"
        sep = "|---|---|"
        for metric in metrics:
            header += f" {metric} |"
            sep += "---|"
        lines.append(header)
        lines.append(sep)

        for gname in sorted(groups.keys()):
            recs = groups[gname]
            row = f"| {gname} | {len(recs)} |"
            for metric in metrics:
                vals = self._metric_deltas(recs, metric)
                mean = safe_mean(vals)
                row += f" {format_delta(mean)} |"
            lines.append(row)

        return "\n".join(lines)

    def per_domain_table(self) -> str:
        """Aggregate per-law deltas into domains."""
        domain_deltas: Dict[str, List[float]] = defaultdict(list)
        for _, _, _, _, rec in self.all_records:
            for law, delta in rec.get("per_law_deltas", {}).items():
                domain = LAW_TO_DOMAIN.get(law, "other")
                domain_deltas[domain].append(delta)

        lines = []
        lines.append("| Domain | Mean Delta | N | Improved% | Degraded% | Unchanged% |")
        lines.append("|--------|-----------|---|----------|----------|-----------|")

        for domain in sorted(domain_deltas.keys()):
            vals = domain_deltas[domain]
            n = len(vals)
            mean = safe_mean(vals)
            improved = sum(1 for v in vals if v > 0) / n * 100 if n else 0
            degraded = sum(1 for v in vals if v < 0) / n * 100 if n else 0
            unchanged = sum(1 for v in vals if v == 0) / n * 100 if n else 0
            lines.append(
                f"| {domain} | {format_delta(mean)} | {n} | {improved:.1f}% | "
                f"{degraded:.1f}% | {unchanged:.1f}% |"
            )

        return "\n".join(lines)

    def per_law_table(self) -> str:
        """Per-law breakdown."""
        law_deltas: Dict[str, List[float]] = defaultdict(list)
        for _, _, _, _, rec in self.all_records:
            for law, delta in rec.get("per_law_deltas", {}).items():
                law_deltas[law].append(delta)

        lines = []
        lines.append("| Law | Domain | Mean Delta | N | Improved% | Degraded% |")
        lines.append("|-----|--------|-----------|---|----------|----------|")

        # Sort by mean delta descending
        sorted_laws = sorted(
            law_deltas.keys(), key=lambda l: safe_mean(law_deltas[l]) or 0, reverse=True
        )

        for law in sorted_laws:
            vals = law_deltas[law]
            n = len(vals)
            mean = safe_mean(vals)
            domain = LAW_TO_DOMAIN.get(law, "other")
            improved = sum(1 for v in vals if v > 0) / n * 100 if n else 0
            degraded = sum(1 for v in vals if v < 0) / n * 100 if n else 0
            lines.append(
                f"| {law} | {domain} | {format_delta(mean)} | {n} | "
                f"{improved:.1f}% | {degraded:.1f}% |"
            )

        return "\n".join(lines)
```
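The domain roll-up in `per_domain_table` pools per-law deltas through `LAW_TO_DOMAIN`, with unmapped laws falling into an `other` bucket. A self-contained sketch of that pooling on invented records, using a small slice of the mapping:

```python
from collections import defaultdict

# A small slice of the law-to-domain mapping defined at the top of the file.
law_to_domain = {"gravity": "gravity", "free_fall": "gravity", "reflection": "lighting"}

records = [
    {"per_law_deltas": {"gravity": 0.5, "reflection": -1.0}},
    {"per_law_deltas": {"free_fall": 0.5, "unknown_law": 0.0}},
]

domain_deltas = defaultdict(list)
for rec in records:
    for law, delta in rec["per_law_deltas"].items():
        # Unmapped laws are pooled under "other" rather than dropped.
        domain_deltas[law_to_domain.get(law, "other")].append(delta)

means = {d: sum(v) / len(v) for d, v in domain_deltas.items()}
print(means)  # {'gravity': 0.5, 'lighting': -1.0, 'other': 0.0}
```

Note that `gravity` and `free_fall` land in the same bucket, so the gravity mean averages over both records.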
```python
    def score_bin_table(self) -> str:
        """Effect on low-score vs high-score videos, binned by old general_avg."""
        bin_deltas: Dict[str, Dict[str, List[float]]] = {}
        for lo, hi in SCORE_BINS:
            label = f"{lo}-{hi}"
            bin_deltas[label] = defaultdict(list)

        metrics = GENERAL_METRICS + ["general_avg", "physical_avg"]
        for _, _, _, _, rec in self.all_records:
            old_ga = rec.get("general_avg_old")
            if old_ga is None:
                continue
            for lo, hi in SCORE_BINS:
                # Half-open bins, except the top bin also includes a perfect 5
                if lo <= old_ga < hi or (hi == 5 and old_ga == 5):
                    label = f"{lo}-{hi}"
                    for metric in metrics:
                        key = f"{metric}_delta"
                        if key in rec:
                            bin_deltas[label][metric].append(rec[key])
                    break

        lines = []
        header = "| Old general_avg Bin | N |"
        sep = "|---|---|"
        for metric in metrics:
            header += f" {metric} |"
            sep += "---|"
        lines.append(header)
        lines.append(sep)

        for lo, hi in SCORE_BINS:
            label = f"{lo}-{hi}"
            bd = bin_deltas[label]
            # N = number of records in this bin
            n_vals = bd.get("general_avg", [])
            n = len(n_vals)
            row = f"| {label} | {n} |"
            for metric in metrics:
                vals = bd.get(metric, [])
                mean = safe_mean(vals)
                row += f" {format_delta(mean)} |"
            lines.append(row)

        return "\n".join(lines)

    def prompt_length_analysis(self) -> str:
        """Analyze whether longer prompt additions correlate with bigger deltas."""
        records_with_len = []
        for _, _, _, _, rec in self.all_records:
            old_len = len(rec.get("old_prompt", ""))
            new_len = len(rec.get("new_prompt", ""))
            delta_len = new_len - old_len
            ga_delta = rec.get("general_avg_delta")
            pa_delta = rec.get("physical_avg_delta")
            if ga_delta is not None:
                records_with_len.append((delta_len, ga_delta, pa_delta))

        if not records_with_len:
            return "No data for prompt length analysis."

        lines = []
        # Bin by prompt length increase
        bins = [(0, 30), (30, 60), (60, 90), (90, 200)]
        lines.append("| Prompt Length Increase | N | Mean general_avg Delta | Mean physical_avg Delta |")
        lines.append("|----------------------|---|----------------------|----------------------|")

        for lo, hi in bins:
            subset = [(d, g, p) for d, g, p in records_with_len if lo <= d < hi]
            n = len(subset)
            ga_mean = safe_mean([g for _, g, _ in subset])
            pa_mean = safe_mean([p for _, _, p in subset if p is not None])
            lines.append(
                f"| {lo}-{hi} chars | {n} | {format_delta(ga_mean)} | {format_delta(pa_mean)} |"
            )

        return "\n".join(lines)


# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------


def main():
    # Ensure we're in the anonymous root, or adjust paths
    if not os.path.isdir(DATA_ROOT):
        # Try from the script's location
        script_dir = os.path.dirname(os.path.abspath(__file__))
        anonymous_root = os.path.abspath(os.path.join(script_dir, "..", ".."))
```
|
| 592 |
+
os.chdir(anonymous_root)
|
| 593 |
+
if not os.path.isdir(DATA_ROOT):
|
| 594 |
+
print(f"ERROR: Cannot find {DATA_ROOT}. Run from the anonymous root directory.", file=sys.stderr)
|
| 595 |
+
sys.exit(1)
|
| 596 |
+
|
| 597 |
+
print("# Ablation Study: Effect of Prompt Enhancement on VLM Evaluation Scores")
|
| 598 |
+
print()
|
| 599 |
+
print("Comparing backup (pre-enhancement) vs current (post-enhancement) evaluation files.")
|
| 600 |
+
print("Only videos where the prompt actually changed between versions are included.")
|
| 601 |
+
print()
|
| 602 |
+
|
| 603 |
+
# Step 1: Find all old/new pairs
|
| 604 |
+
pairs = find_pairs()
|
| 605 |
+
print(f"Found **{len(pairs)}** old/new eval file pairs.")
|
| 606 |
+
print()
|
| 607 |
+
|
| 608 |
+
if not pairs:
|
| 609 |
+
print("No pairs found. Exiting.")
|
| 610 |
+
return
|
| 611 |
+
|
| 612 |
+
# Print pairs summary
|
| 613 |
+
print("## File Pairs Found")
|
| 614 |
+
print()
|
| 615 |
+
print("| # | Model | Dataset | Judge | Mode | Old File | New File |")
|
| 616 |
+
print("|---|-------|---------|-------|------|----------|----------|")
|
| 617 |
+
for i, p in enumerate(pairs, 1):
|
| 618 |
+
old_base = os.path.basename(p["old_path"])
|
| 619 |
+
new_base = os.path.basename(p["new_path"])
|
| 620 |
+
print(
|
| 621 |
+
f"| {i} | {p['model']} | {p['dataset']} | {p['judge']} | {p['mode']} | {old_base} | {new_base} |"
|
| 622 |
+
)
|
| 623 |
+
print()
|
| 624 |
+
|
| 625 |
+
# Step 2: Compute deltas for each pair
|
| 626 |
+
analysis = AblationAnalysis()
|
| 627 |
+
total_videos = 0
|
| 628 |
+
total_changed = 0
|
| 629 |
+
skipped_pairs = 0
|
| 630 |
+
|
| 631 |
+
for p in pairs:
|
| 632 |
+
old_data = load_eval_file(p["old_path"])
|
| 633 |
+
new_data = load_eval_file(p["new_path"])
|
| 634 |
+
if old_data is None or new_data is None:
|
| 635 |
+
skipped_pairs += 1
|
| 636 |
+
continue
|
| 637 |
+
|
| 638 |
+
old_by_video = {r["video"]: r for r in old_data["results"]}
|
| 639 |
+
new_by_video = {r["video"]: r for r in new_data["results"]}
|
| 640 |
+
common = set(old_by_video.keys()) & set(new_by_video.keys())
|
| 641 |
+
total_videos += len(common)
|
| 642 |
+
|
| 643 |
+
deltas = compute_deltas(old_data, new_data)
|
| 644 |
+
total_changed += len(deltas)
|
| 645 |
+
|
| 646 |
+
analysis.add(p["model"], p["dataset"], p["judge"], p["mode"], deltas)
|
| 647 |
+
|
| 648 |
+
print(f"**Total matched videos across all pairs:** {total_videos}")
|
| 649 |
+
print(f"**Videos with prompt changes:** {total_changed}")
|
| 650 |
+
if skipped_pairs:
|
| 651 |
+
print(f"**Skipped pairs (load errors):** {skipped_pairs}")
|
| 652 |
+
print()
|
| 653 |
+
|
| 654 |
+
if total_changed == 0:
|
| 655 |
+
print("No videos with prompt changes found. Exiting.")
|
| 656 |
+
return
|
| 657 |
+
|
| 658 |
+
# Step 3: Reports
|
| 659 |
+
print("---")
|
| 660 |
+
print()
|
| 661 |
+
print("## 1. Overall Score Deltas (All Models, Datasets, Judges)")
|
| 662 |
+
print()
|
| 663 |
+
print(analysis.overall_summary())
|
| 664 |
+
print()
|
| 665 |
+
|
| 666 |
+
print("---")
|
| 667 |
+
print()
|
| 668 |
+
print("## 2. Per-Model Breakdown")
|
| 669 |
+
print()
|
| 670 |
+
print(analysis.per_group_table("model"))
|
| 671 |
+
print()
|
| 672 |
+
|
| 673 |
+
print("---")
|
| 674 |
+
print()
|
| 675 |
+
print("## 3. Per-Dataset Breakdown")
|
| 676 |
+
print()
|
| 677 |
+
print(analysis.per_group_table("dataset"))
|
| 678 |
+
print()
|
| 679 |
+
|
| 680 |
+
print("---")
|
| 681 |
+
print()
|
| 682 |
+
print("## 4. Per-Judge Breakdown")
|
| 683 |
+
print()
|
| 684 |
+
print(analysis.per_group_table("judge"))
|
| 685 |
+
print()
|
| 686 |
+
|
| 687 |
+
print("---")
|
| 688 |
+
print()
|
| 689 |
+
print("## 5. Per-Domain Breakdown (Physical Laws)")
|
| 690 |
+
print()
|
| 691 |
+
print(analysis.per_domain_table())
|
| 692 |
+
print()
|
| 693 |
+
|
| 694 |
+
print("---")
|
| 695 |
+
print()
|
| 696 |
+
print("## 6. Per-Law Breakdown (sorted by mean delta, descending)")
|
| 697 |
+
print()
|
| 698 |
+
print(analysis.per_law_table())
|
| 699 |
+
print()
|
| 700 |
+
|
| 701 |
+
print("---")
|
| 702 |
+
print()
|
| 703 |
+
print("## 7. Effect by Old Score Bin")
|
| 704 |
+
print()
|
| 705 |
+
print("Videos binned by their old general_avg score:")
|
| 706 |
+
print()
|
| 707 |
+
print(analysis.score_bin_table())
|
| 708 |
+
print()
|
| 709 |
+
|
| 710 |
+
print("---")
|
| 711 |
+
print()
|
| 712 |
+
print("## 8. Prompt Length Increase Analysis")
|
| 713 |
+
print()
|
| 714 |
+
print(analysis.prompt_length_analysis())
|
| 715 |
+
print()
|
| 716 |
+
|
| 717 |
+
|
| 718 |
+
if __name__ == "__main__":
|
| 719 |
+
main()
|
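The bin assignment in `score_bin_table` uses half-open intervals with an inclusive top edge so a perfect score of 5 is not dropped. That rule can be sketched standalone; the `SCORE_BINS` value below is an assumption, since its definition falls outside this excerpt:

```python
# Hypothetical bin edges: SCORE_BINS is defined elsewhere in the real script.
SCORE_BINS = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]

def bin_label(score, bins=SCORE_BINS):
    """Return the label of the half-open bin [lo, hi) containing score.

    The top bin is closed on the right, mirroring the
    `hi == 5 and old_ga == 5` special case in score_bin_table.
    """
    top = bins[-1][1]
    for lo, hi in bins:
        if lo <= score < hi or (hi == top and score == top):
            return f"{lo}-{hi}"
    return None  # outside all bins

print(bin_label(5))    # -> 4-5
print(bin_label(2.5))  # -> 2-3
```

Without the extra clause, a video scoring exactly 5 would fall into no bin and silently vanish from the table.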
dataprocessing/common/__init__.py ADDED
File without changes
dataprocessing/common/gemini.py ADDED
@@ -0,0 +1,51 @@
"""Shared Gemini API client and argument utilities."""

import json
import logging
import os
import re
from pathlib import Path

logger = logging.getLogger(__name__)

GEMINI_MODEL = "gemini-3.1-pro-preview"


def add_gemini_args(parser):
    """Add standard Gemini CLI arguments to an argparse parser."""
    parser.add_argument("--api_key_file", type=str,
                        default=None,
                        help="Path to a Gemini API key file. If omitted, GEMINI_API_KEY is used.")
    parser.add_argument("--vertexai", action="store_true")
    parser.add_argument("--project", type=str, default="your-gcp-project")
    parser.add_argument("--location", type=str, default="global")


def make_client(args):
    """Create a google.genai.Client from parsed CLI arguments."""
    from google import genai
    if args.vertexai:
        client = genai.Client(vertexai=True, project=args.project,
                              location=args.location)
        logger.info("Using Vertex AI (project=%s, location=%s)",
                    args.project, args.location)
    else:
        if args.api_key_file:
            api_key = Path(args.api_key_file).read_text().strip()
        else:
            api_key = os.environ.get("GEMINI_API_KEY")
        if not api_key:
            raise ValueError("Set GEMINI_API_KEY or pass --api_key_file.")
        client = genai.Client(api_key=api_key)
    return client


def extract_json_array(text):
    """Extract and parse a JSON array from LLM response text."""
    match = re.search(r"\[.*?\]", text, re.DOTALL)
    if match:
        try:
            return json.loads(match.group())
        except json.JSONDecodeError:
            pass
    return None
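`extract_json_array` needs no API access, so its behavior is easy to check offline. A minimal re-implementation of the same logic, with an invented reply string for illustration:

```python
import json
import re

def extract_json_array(text):
    # Same logic as dataprocessing/common/gemini.py: grab the first
    # bracketed span (non-greedy) and try to parse it as JSON.
    match = re.search(r"\[.*?\]", text, re.DOTALL)
    if match:
        try:
            return json.loads(match.group())
        except json.JSONDecodeError:
            pass
    return None

reply = 'Here are the laws:\n["gravity", "buoyancy"]'
print(extract_json_array(reply))            # -> ['gravity', 'buoyancy']
print(extract_json_array("no array here"))  # -> None
```

Note the non-greedy match stops at the first `]`, so flat arrays of strings parse fine, but an array containing nested brackets (e.g. `[1, [2, 3]]`) would be truncated, fail to parse, and return `None`.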
dataprocessing/common/pipeline.py ADDED
@@ -0,0 +1,153 @@
"""Pipeline quality checks: staleness, missing data, empty laws.

Default behavior is warn-and-continue. Pass ``--strict`` to fail on
issues that would otherwise only warn (for CI / batch runs).
"""

from __future__ import annotations

import logging
import os
from pathlib import Path

logger = logging.getLogger(__name__)

# Datasets where every prompt is expected to have physical_laws.
# Edge cases (empty perspectives, removed status) are the only exceptions.
LAWS_REQUIRED_DATASETS = frozenset([
    "wmb", "video_phy_2", "physics_iq", "openvid",
])


class PipelineCheck:
    """Collect warnings and errors during a pipeline run, report at the end.

    Usage::

        checker = PipelineCheck(strict=args.strict)
        # ... pipeline logic ...
        checker.check_staleness(source_path, eval_path)
        checker.check_missing_ratio(missing=5, total=279)
        checker.check_empty_laws("wmb_195", [], "wmb")
        # ... at the end ...
        score = checker.report()
        checker.finalize()  # raises if any FAIL-level issues
    """

    def __init__(self, strict: bool = False) -> None:
        self.strict = strict
        self._warnings: list[str] = []
        self._errors: list[str] = []
        self._missing_count = 0
        self._total_count = 0
        self._empty_laws_count = 0
        self._stale_count = 0

    # ------------------------------------------------------------------
    # Individual checks
    # ------------------------------------------------------------------

    def check_staleness(self, source_path: Path, eval_path: Path) -> None:
        """Warn/fail if the source JSON is older than the eval JSON."""
        try:
            src_mtime = os.path.getmtime(source_path)
            eval_mtime = os.path.getmtime(eval_path)
        except OSError:
            return

        if src_mtime < eval_mtime:
            self._stale_count += 1
            msg = (f"Source {source_path.name} (mtime {src_mtime:.0f}) "
                   f"is older than eval {eval_path.name} (mtime {eval_mtime:.0f})")
            if self.strict:
                self._errors.append(msg)
            else:
                self._warnings.append(msg)

    def check_missing_ratio(self, missing: int, total: int) -> None:
        """Check the ratio of unmatched vids. Above 10% always fails."""
        self._missing_count = missing
        self._total_count = total

        if total == 0:
            return

        ratio = missing / total
        if ratio > 0.10:
            self._errors.append(
                f"Vid mismatch too high: {missing}/{total} ({ratio:.1%})")
        elif ratio > 0.01:
            msg = f"Moderate vid mismatch: {missing}/{total} ({ratio:.1%})"
            if self.strict:
                self._errors.append(msg)
            else:
                self._warnings.append(msg)
        elif missing > 0:
            self._warnings.append(
                f"Minor vid mismatch: {missing}/{total} ({ratio:.1%})")

    def check_empty_laws(self, vid: str, laws: list, dataset: str,
                         resolved: bool = True) -> None:
        """Check if a prompt is missing physical_laws that it should have.

        Args:
            resolved: Whether this vid was successfully matched to a source
                entry. Unresolved vids get a warning (already counted by
                check_missing_ratio); only resolved-but-empty is an error.
        """
        if laws:
            return
        self._empty_laws_count += 1

        if not resolved:
            # Already captured by check_missing_ratio, so just warn here
            self._warnings.append(
                f"No laws for {vid} (unresolved, dataset={dataset})")
        elif dataset in LAWS_REQUIRED_DATASETS:
            self._errors.append(f"Missing laws for {vid} (dataset={dataset})")
        else:
            self._warnings.append(
                f"No laws for {vid} (dataset={dataset}, allowed)")

    # ------------------------------------------------------------------
    # Reporting
    # ------------------------------------------------------------------

    def report(self) -> float:
        """Log a summary and return a quality score (0.0 - 1.0)."""
        total = self._total_count or 1
        missing_ratio = self._missing_count / total
        empty_ratio = self._empty_laws_count / total

        score = 1.0 - (
            0.5 * missing_ratio  # unmatched vids are most critical
            + 0.3 * empty_ratio  # missing laws degrade downstream use
            + 0.2 * min(self._stale_count / max(total, 1), 1.0)  # stale sources
        )
        score = max(0.0, min(1.0, score))

        logger.info("Pipeline Quality Score: %.2f", score)
        if self._warnings:
            logger.info("Warnings (%d):", len(self._warnings))
            # Show the first 10, summarize the rest
            for w in self._warnings[:10]:
                logger.warning("  %s", w)
            if len(self._warnings) > 10:
                logger.warning("  ... and %d more",
                               len(self._warnings) - 10)
        if self._errors:
            logger.error("Errors (%d):", len(self._errors))
            for e in self._errors[:10]:
                logger.error("  %s", e)
            if len(self._errors) > 10:
                logger.error("  ... and %d more", len(self._errors) - 10)

        return score

    def finalize(self) -> None:
        """Raise if any FAIL-level issues were recorded."""
        if self._errors:
            raise RuntimeError(
                f"Pipeline failed with {len(self._errors)} error(s). "
                f"First: {self._errors[0]}"
            )
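The quality score in `report()` is a weighted penalty over three failure ratios, clamped to [0, 1]. Re-implemented standalone for illustration, with the same 0.5/0.3/0.2 weights:

```python
def quality_score(missing: int, empty_laws: int, stale: int, total: int) -> float:
    # Mirrors PipelineCheck.report(): weighted penalties, clamped to [0, 1].
    total = total or 1  # avoid division by zero when nothing was processed
    score = 1.0 - (
        0.5 * missing / total            # unmatched vids weigh most
        + 0.3 * empty_laws / total       # then prompts with no laws
        + 0.2 * min(stale / total, 1.0)  # stale sources weigh least
    )
    return max(0.0, min(1.0, score))

# 5 unmatched vids and 2 empty-law prompts out of 279 leave a high score.
print(round(quality_score(missing=5, empty_laws=2, stale=0, total=279), 3))
```

The clamp matters: with everything failing at once the raw expression hits exactly 0.0, and the `total or 1` guard makes an empty run score 1.0 rather than raise.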
dataprocessing/common/video_id.py ADDED
@@ -0,0 +1,221 @@
"""Canonical video ID normalization and source-law loading.

Every prompt in the pipeline gets a single canonical vid of the form
``{dataset}_{original_key}`` (no domain prefix, no file extension).
This module is the single source of truth for that mapping.

The canonical vid is computed at runtime — it is NOT stored in source JSONs.
"""

from __future__ import annotations

import json
import logging
from dataclasses import dataclass, field
from pathlib import Path
from typing import TypedDict

logger = logging.getLogger(__name__)

ROOT = Path(__file__).resolve().parents[2]

# Ordered: the first match wins for overlapping prompts.
PROMPT_SOURCES: list[tuple[str, Path]] = [
    ("wmb", ROOT / "data/prompts/wmb/wmb.json"),
    ("physics_iq", ROOT / "data/prompts/physics_iq/physics_iq.json"),
    ("video_phy_2", ROOT / "data/prompts/video_phy_2/video_phy_2.json"),
    ("openvid", ROOT / "data/prompts/openvid/openvid.json"),
]


class SourceEntry(TypedDict):
    laws: list[str]
    dataset: str
    prompt: str
    source_key: str


# ---------------------------------------------------------------------------
# normalize_vid: the single rule for canonical IDs
# ---------------------------------------------------------------------------

def normalize_vid(dataset: str, key: str) -> str:
    """Pure function. ``(dataset, key) -> canonical vid``.

    Rules:
    - Strip the ``.mp4`` suffix from key
    - Prefix with ``{dataset}_``
    - For physics_iq perspectives, key is the full generated_video_name (without .mp4)

    Examples::

        normalize_vid("wmb", "195") -> "wmb_195"
        normalize_vid("video_phy_2", "119") -> "video_phy_2_119"
        normalize_vid("openvid", "abc.mp4") -> "openvid_abc"
        normalize_vid("physics_iq", "0052_...-double-cradle") -> "physics_iq_0052_...-double-cradle"
    """
    key = key.removesuffix(".mp4")
    return f"{dataset}_{key}"


# ---------------------------------------------------------------------------
# resolve_eval_vid: map eval-side video names back to canonical vids
# ---------------------------------------------------------------------------

def resolve_eval_vid(
    eval_video: str,
    eval_dataset_suffix: str,
    reverse_map: dict[str, str],
) -> str | None:
    """Map an eval-side video name to its canonical vid.

    ``reverse_map`` is built by :func:`load_source_laws` — it maps every
    legacy identifier (first_frame_image stem, generated_video_name, numeric
    key) to the canonical vid.

    Falls back to numeric-suffix extraction for domain-prefix
    mismatches (e.g. ``buoyancy_119`` -> key ``119``).
    """
    # Direct hit (covers physics_iq perspective names, openvid, exact wmb stems)
    if eval_video in reverse_map:
        return reverse_map[eval_video]

    # Fallback: strip the domain prefix, try the numeric key
    if "_" in eval_video:
        numeric = eval_video.rsplit("_", 1)[-1]
        if numeric in reverse_map:
            return reverse_map[numeric]

    return None


# ---------------------------------------------------------------------------
# load_source_laws: single loader for all source datasets
# ---------------------------------------------------------------------------

@dataclass
class SourceLawsResult:
    """Result of loading all source prompt JSONs."""
    entries: dict[str, SourceEntry]  # canonical_vid -> SourceEntry
    reverse_map: dict[str, str]      # legacy_id -> canonical_vid
    stats: dict[str, int] = field(default_factory=dict)

    def get(self, canonical_vid: str) -> SourceEntry | None:
        return self.entries.get(canonical_vid)

    def resolve_eval(self, eval_video: str, eval_ds_suffix: str) -> tuple[str, SourceEntry] | None:
        """Resolve an eval-side video name -> (canonical_vid, entry)."""
        cvid = resolve_eval_vid(eval_video, eval_ds_suffix, self.reverse_map)
        if cvid and cvid in self.entries:
            return cvid, self.entries[cvid]
        return None

    @property
    def cvid_to_legacies(self) -> dict[str, set[str]]:
        """Inverse of reverse_map: canonical_vid -> set of legacy IDs."""
        if not hasattr(self, "_cvid_to_legacies"):
            inv: dict[str, set[str]] = {}
            for lid, cvid in self.reverse_map.items():
                inv.setdefault(cvid, set()).add(lid)
            self._cvid_to_legacies = inv
        return self._cvid_to_legacies


def load_source_laws(
    sources: list[tuple[str, Path]] | None = None,
) -> SourceLawsResult:
    """Load physical_laws from all canonical prompt JSONs.

    Returns a :class:`SourceLawsResult` with:
    - ``entries``: canonical_vid -> SourceEntry
    - ``reverse_map``: legacy identifiers -> canonical_vid (for eval matching)
    """
    if sources is None:
        sources = PROMPT_SOURCES

    entries: dict[str, SourceEntry] = {}
    reverse_map: dict[str, str] = {}
    stats: dict[str, int] = {}

    def _register(cvid: str, legacy_ids: list[str],
                  laws: list[str], dataset: str,
                  prompt: str, source_key: str) -> None:
        """Register a canonical vid + its legacy aliases (first-match-wins)."""
        if cvid in entries:
            return
        entries[cvid] = SourceEntry(
            laws=laws, dataset=dataset, prompt=prompt, source_key=source_key,
        )
        # Register all legacy identifiers for reverse lookup
        for lid in legacy_ids:
            if lid and lid not in reverse_map:
                reverse_map[lid] = cvid

    for ds_name, path in sources:
        if not path.exists():
            logger.warning("Source not found: %s", path)
            continue
        with open(path) as f:
            data = json.load(f)

        prompts = data.get("prompts", data)
        if not isinstance(prompts, dict):
            continue

        count = 0
        for key, item in prompts.items():
            if not isinstance(item, dict):
                continue
            if item.get("status") != "kept":
                continue

            laws = item.get("physical_laws", [])
            if not laws:
                continue

            prompt_text = item.get("prompt", item.get("description", ""))

            if ds_name == "physics_iq":
                # Each perspective is a separate vid
                for persp in item.get("perspectives", []):
                    gvn = persp.get("generated_video_name", "")
                    gvn_bare = gvn.removesuffix(".mp4")
                    if not gvn_bare:
                        continue
                    cvid = normalize_vid(ds_name, gvn_bare)
                    _register(
                        cvid,
                        legacy_ids=[gvn_bare],
                        laws=laws, dataset=ds_name,
                        prompt=prompt_text, source_key=key,
                    )
                    count += 1
            elif ds_name == "openvid":
                key_bare = key.removesuffix(".mp4")
                cvid = normalize_vid(ds_name, key_bare)
                _register(
                    cvid,
                    legacy_ids=[key_bare],
                    laws=laws, dataset=ds_name,
                    prompt=prompt_text, source_key=key,
                )
                count += 1
            else:
                # wmb, video_phy_2: the first_frame_image stem is the legacy ID.
                # Entries carrying a "subset" field use that subset as their dataset.
                effective_ds = item.get("subset", ds_name)
                ff = item.get("first_frame_image", "")
                ff_stem = Path(ff).stem if ff else key
                cvid = normalize_vid(effective_ds, key)
                _register(
                    cvid,
                    legacy_ids=[ff_stem, key],  # both stem and numeric key
                    laws=laws, dataset=effective_ds,
                    prompt=prompt_text, source_key=key,
                )
                count += 1

        stats[ds_name] = count
        logger.info("Loaded %d prompts from %s", count, path.name)

    return SourceLawsResult(entries=entries, reverse_map=reverse_map, stats=stats)
dataprocessing/refine/__init__.py ADDED
File without changes
dataprocessing/refine/enhance_prompts_physics.py ADDED
@@ -0,0 +1,237 @@
"""Enhance prompts by adding expected physical phenomena using Gemini.

Many prompts only describe the initial scene setup and action, but don't describe
the expected physical outcome (e.g., liquid overflowing, dominoes falling in
sequence, ball bouncing). This script uses Gemini to append a concise description
of the expected physical phenomenon to each prompt.

Usage:
    # Dry run (preview changes without writing)
    python -m dataprocessing.refine.enhance_prompts_physics --dry_run

    # Run on all datasets
    python -m dataprocessing.refine.enhance_prompts_physics

    # Run on a specific dataset
    python -m dataprocessing.refine.enhance_prompts_physics --dataset physics_iq

    # Vertex AI mode
    python -m dataprocessing.refine.enhance_prompts_physics --vertexai
"""

import argparse
import csv
import json
import logging
import sys
import time
from datetime import datetime
from pathlib import Path

from dataprocessing.common.gemini import add_gemini_args, make_client

logger = logging.getLogger(__name__)

GEMINI_MODEL = "gemini-2.5-flash"

SYSTEM_PROMPT = """\
You are a physics expert helping improve text prompts for a video generation benchmark.

Your task: given a prompt that describes a physical scene, check whether it explicitly
describes the **expected physical outcome/phenomenon**. If the prompt only describes
the setup and initial action but NOT the resulting physics, add a concise sentence
describing the expected physical phenomenon.

Rules:
1. If the prompt ALREADY describes the physical outcome adequately, return it unchanged.
2. If the prompt is MISSING the physical outcome, insert ONE concise sentence describing
   what physically happens as a result of the described action.
3. Keep the addition natural and concise (1 sentence, ~10-20 words).
4. Do NOT change the existing text — only append/insert the new physics description.
5. Keep any "Static shot with no camera movement." or similar camera notes at the END.
6. Return ONLY the enhanced prompt text, no explanation, no quotes.

Examples:

Input: "A bright red liquid being poured from a dispenser into a glass which is placed on a dark baking tray on a wooden table. Static shot with no camera movement."
Output: "A bright red liquid being poured from a dispenser into a glass which is placed on a dark baking tray on a wooden table. The liquid overflows from the glass and spills onto the baking tray. Static shot with no camera movement."

Input: "A row of colorful wooden blocks lined up on a wooden table with a wooden stick attached to a black rotating platform. The platform rotates clockwise and the wooden stick hits the first block as it rotates. Static shot with no camera movement."
Output: "A row of colorful wooden blocks lined up on a wooden table with a wooden stick attached to a black rotating platform. The platform rotates clockwise and the wooden stick hits the first block as it rotates. The blocks topple over one by one in a domino chain reaction. Static shot with no camera movement."

Input: "A basketball is dropped from a rooftop and bounces on the pavement, each bounce lower than the last."
Output: "A basketball is dropped from a rooftop and bounces on the pavement, each bounce lower than the last."
(Already describes the physics — returned unchanged.)
"""

USER_TEMPLATE = """\
Category: {category}
Prompt: {prompt}"""


def enhance_prompt(client, prompt: str, category: str, max_retries: int = 3) -> str | None:
    """Call Gemini to enhance one prompt with a physical phenomenon description."""
    user_msg = USER_TEMPLATE.format(prompt=prompt, category=category)

    for attempt in range(max_retries):
        try:
            resp = client.models.generate_content(
                model=GEMINI_MODEL,
                contents=[user_msg],
                config={"system_instruction": SYSTEM_PROMPT},
            )
            text = resp.text.strip().strip('"').strip("'")
            if text:
                return text
            logger.warning("Empty response (attempt %d)", attempt + 1)
        except Exception as e:
            logger.warning("API error (attempt %d/%d): %s", attempt + 1, max_retries, e)
            if attempt < max_retries - 1:
                time.sleep(2 ** attempt)
    return None


def process_physics_iq(client, dry_run: bool = False):
    """Enhance Physics-IQ descriptions.csv prompts."""
    csv_path = Path("data/prompts/physics_iq/descriptions.csv")
    logger.info("=== Physics-IQ: %s ===", csv_path)

    with open(csv_path, newline="") as f:
        reader = csv.DictReader(f)
        rows = [r for r in reader if "take-2" not in r.get("scenario", "")]

    unique_descs: dict[str, list[int]] = {}
    for i, row in enumerate(rows):
        desc = row["description"]
        unique_descs.setdefault(desc, []).append(i)

    logger.info("Total rows: %d, unique descriptions: %d", len(rows), len(unique_descs))
|
| 109 |
+
|
| 110 |
+
enhanced_map: dict[str, str] = {}
|
| 111 |
+
changed = 0
|
| 112 |
+
for j, (desc, indices) in enumerate(unique_descs.items()):
|
| 113 |
+
category = rows[indices[0]]["category"]
|
| 114 |
+
logger.info("[%d/%d] %s — %s", j + 1, len(unique_descs), category, desc[:80])
|
| 115 |
+
|
| 116 |
+
if dry_run:
|
| 117 |
+
continue
|
| 118 |
+
|
| 119 |
+
result = enhance_prompt(client, desc, category)
|
| 120 |
+
if result and result != desc:
|
| 121 |
+
enhanced_map[desc] = result
|
| 122 |
+
changed += 1
|
| 123 |
+
logger.info(" CHANGED → %s", result[:100])
|
| 124 |
+
else:
|
| 125 |
+
logger.info(" unchanged")
|
| 126 |
+
time.sleep(0.3)
|
| 127 |
+
|
| 128 |
+
if dry_run:
|
| 129 |
+
logger.info("DRY RUN — no changes written")
|
| 130 |
+
return
|
| 131 |
+
|
| 132 |
+
if changed > 0:
|
| 133 |
+
backup = csv_path.with_suffix(f".csv.bak_{datetime.now():%Y%m%d_%H%M%S}")
|
| 134 |
+
backup.write_text(csv_path.read_text())
|
| 135 |
+
logger.info("Backup → %s", backup)
|
| 136 |
+
|
| 137 |
+
for row in rows:
|
| 138 |
+
if row["description"] in enhanced_map:
|
| 139 |
+
row["description"] = enhanced_map[row["description"]]
|
| 140 |
+
|
| 141 |
+
with open(csv_path, "w", newline="") as f:
|
| 142 |
+
writer = csv.DictWriter(f, fieldnames=["scenario", "description", "category", "generated_video_name"])
|
| 143 |
+
writer.writeheader()
|
| 144 |
+
writer.writerows(rows)
|
| 145 |
+
|
| 146 |
+
logger.info("Physics-IQ: %d/%d unique descriptions enhanced", changed, len(unique_descs))
|
| 147 |
+
else:
|
| 148 |
+
logger.info("Physics-IQ: no changes needed")
|
| 149 |
+
|
| 150 |
+
|
| 151 |
+
_JSON_DATASETS = {
|
| 152 |
+
"video_phy_2": (Path("data/prompts/video_phy_2/video_phy_2.json"), "VideoPhy-2", ("_domain",)),
|
| 153 |
+
"wmb": (Path("data/prompts/wmb/wmb.json"), "WorldModelBench", ("_domain", "wmb_domain")),
|
| 154 |
+
"openvid": (Path("data/prompts/openvid/openvid.json"), "OpenVid", ("domain",)),
|
| 155 |
+
}
|
| 156 |
+
|
| 157 |
+
|
| 158 |
+
def _process_json_dataset(client, json_path: Path, name: str,
|
| 159 |
+
domain_keys: tuple[str, ...],
|
| 160 |
+
dry_run: bool = False):
|
| 161 |
+
"""Enhance prompts in a JSON dataset file."""
|
| 162 |
+
logger.info("=== %s: %s ===", name, json_path)
|
| 163 |
+
|
| 164 |
+
with open(json_path) as f:
|
| 165 |
+
data = json.load(f)
|
| 166 |
+
|
| 167 |
+
changed = 0
|
| 168 |
+
total = 0
|
| 169 |
+
for pid, p in data["prompts"].items():
|
| 170 |
+
if p.get("status") != "kept":
|
| 171 |
+
continue
|
| 172 |
+
total += 1
|
| 173 |
+
prompt = p["prompt"]
|
| 174 |
+
domain = next((p[k] for k in domain_keys if p.get(k)), "")
|
| 175 |
+
logger.info("[%s %s] %s", domain, pid, prompt[:80])
|
| 176 |
+
|
| 177 |
+
if dry_run:
|
| 178 |
+
continue
|
| 179 |
+
|
| 180 |
+
result = enhance_prompt(client, prompt, domain)
|
| 181 |
+
if result and result != prompt:
|
| 182 |
+
p["prompt"] = result
|
| 183 |
+
changed += 1
|
| 184 |
+
logger.info(" CHANGED → %s", result[:100])
|
| 185 |
+
else:
|
| 186 |
+
logger.info(" unchanged")
|
| 187 |
+
time.sleep(0.3)
|
| 188 |
+
|
| 189 |
+
if dry_run:
|
| 190 |
+
logger.info("DRY RUN — no changes written")
|
| 191 |
+
return
|
| 192 |
+
|
| 193 |
+
if changed > 0:
|
| 194 |
+
backup = json_path.with_suffix(f".json.bak_{datetime.now():%Y%m%d_%H%M%S}")
|
| 195 |
+
backup.write_text(json_path.read_text())
|
| 196 |
+
logger.info("Backup → %s", backup)
|
| 197 |
+
|
| 198 |
+
with open(json_path, "w") as f:
|
| 199 |
+
json.dump(data, f, indent=2, ensure_ascii=False)
|
| 200 |
+
|
| 201 |
+
logger.info("%s: %d/%d prompts enhanced", name, changed, total)
|
| 202 |
+
else:
|
| 203 |
+
logger.info("%s: no changes needed", name)
|
| 204 |
+
|
| 205 |
+
|
| 206 |
+
def main():
|
| 207 |
+
parser = argparse.ArgumentParser(description="Enhance prompts with physical phenomena via Gemini")
|
| 208 |
+
parser.add_argument("--dataset", choices=["physics_iq", "video_phy_2", "wmb", "openvid", "all"], default="all")
|
| 209 |
+
add_gemini_args(parser)
|
| 210 |
+
parser.add_argument("--dry_run", action="store_true", help="Preview without making changes")
|
| 211 |
+
args = parser.parse_args()
|
| 212 |
+
|
| 213 |
+
logging.basicConfig(
|
| 214 |
+
level=logging.INFO,
|
| 215 |
+
format="[%(asctime)s] %(message)s",
|
| 216 |
+
handlers=[logging.StreamHandler(sys.stdout)],
|
| 217 |
+
)
|
| 218 |
+
|
| 219 |
+
client = make_client(args) if not args.dry_run else None
|
| 220 |
+
|
| 221 |
+
if args.dataset in ("all", "physics_iq"):
|
| 222 |
+
process_physics_iq(client, dry_run=args.dry_run)
|
| 223 |
+
logger.info("")
|
| 224 |
+
|
| 225 |
+
datasets = list(_JSON_DATASETS) if args.dataset == "all" else (
|
| 226 |
+
[args.dataset] if args.dataset in _JSON_DATASETS else []
|
| 227 |
+
)
|
| 228 |
+
for ds in datasets:
|
| 229 |
+
path, name, dk = _JSON_DATASETS[ds]
|
| 230 |
+
_process_json_dataset(client, path, name, dk, dry_run=args.dry_run)
|
| 231 |
+
logger.info("")
|
| 232 |
+
|
| 233 |
+
logger.info("Done.")
|
| 234 |
+
|
| 235 |
+
|
| 236 |
+
if __name__ == "__main__":
|
| 237 |
+
main()
|
dataprocessing/refine/gen_hard_subset.py
ADDED
|
@@ -0,0 +1,294 @@
|
| 1 |
+
"""Generate Anonymous-Hard subset from Qwen per-law eval results.
|
| 2 |
+
|
| 3 |
+
For each prompt, computes a physics micro-avg from per-law scores
|
| 4 |
+
(all 13 laws), averaged across all models.
|
| 5 |
+
Keeps prompts with cross-model avg < threshold.
|
| 6 |
+
|
| 7 |
+
Physical laws are sourced from the canonical prompt JSONs (not from eval
|
| 8 |
+
JSONs, which may contain stale vocabulary).
|
| 9 |
+
|
| 10 |
+
Usage:
|
| 11 |
+
# Default: threshold 3.00, overwrite existing hard subset
|
| 12 |
+
python -m dataprocessing.refine.gen_hard_subset
|
| 13 |
+
|
| 14 |
+
# Custom threshold, write to a different file
|
| 15 |
+
python -m dataprocessing.refine.gen_hard_subset --threshold 1.50 \
|
| 16 |
+
--output data/prompts/anonymous_hard_subset_150.json
|
| 17 |
+
|
| 18 |
+
# Dry run: print stats without writing
|
| 19 |
+
python -m dataprocessing.refine.gen_hard_subset --dry-run
|
| 20 |
+
|
| 21 |
+
# Strict mode: fail on data quality issues
|
| 22 |
+
python -m dataprocessing.refine.gen_hard_subset --strict
|
| 23 |
+
"""
|
| 24 |
+
|
| 25 |
+
import argparse
|
| 26 |
+
import json
|
| 27 |
+
import logging
|
| 28 |
+
import sys
|
| 29 |
+
from collections import Counter, defaultdict
|
| 30 |
+
from pathlib import Path
|
| 31 |
+
|
| 32 |
+
from dataprocessing.common.pipeline import PipelineCheck
|
| 33 |
+
from dataprocessing.common.video_id import PROMPT_SOURCES, load_source_laws
|
| 34 |
+
|
| 35 |
+
logger = logging.getLogger(__name__)
|
| 36 |
+
|
| 37 |
+
ROOT = Path(__file__).resolve().parents[2]
|
| 38 |
+
VIDEOS_DIR = ROOT / "data/videos"
|
| 39 |
+
OUTPUT_PATH = ROOT / "data/prompts/anonymous_hard_subset.json"
|
| 40 |
+
|
| 41 |
+
# Dataset suffixes that appear in video dir names.
|
| 42 |
+
DATASET_SUFFIXES = ["video_phy_2", "physics_iq", "openvid", "wmb"]
|
| 43 |
+
|
| 44 |
+
|
| 45 |
+
def parse_model_dataset(dirname: str) -> tuple[str, str] | None:
|
| 46 |
+
"""Extract (model, dataset) from a directory name like 'ltx-2-video_phy_2'."""
|
| 47 |
+
for ds in DATASET_SUFFIXES:
|
| 48 |
+
if dirname.endswith(f"-{ds}"):
|
| 49 |
+
model = dirname[:-(len(ds) + 1)]
|
| 50 |
+
return model, ds
|
| 51 |
+
return None
|
| 52 |
+
|
| 53 |
+
|
| 54 |
+
def find_latest_eval(dirpath: Path, evaluator: str = "qwen") -> Path | None:
|
| 55 |
+
"""Find the latest batched eval JSON for the given evaluator, fallback to gemini."""
|
| 56 |
+
evals = sorted(dirpath.glob(f"eval_{evaluator}_2*.json"))
|
| 57 |
+
if evals:
|
| 58 |
+
return evals[-1]
|
| 59 |
+
if evaluator != "gemini":
|
| 60 |
+
gemini = sorted(dirpath.glob("eval_gemini*_2*.json"))
|
| 61 |
+
if gemini:
|
| 62 |
+
return gemini[-1]
|
| 63 |
+
return None
|
| 64 |
+
|
| 65 |
+
|
| 66 |
+
def load_eval_scores(eval_path: Path) -> list[dict]:
|
| 67 |
+
"""Load eval JSON and extract per-video physics micro-avg.
|
| 68 |
+
|
| 69 |
+
Computes micro-avg from per-law scores (all laws).
|
| 70 |
+
Follows the same scoring approach as score_histogram / rank.md.
|
| 71 |
+
|
| 72 |
+
Returns list of dicts with keys: video, prompt, phys_micro_avg, n_laws_scored.
|
| 73 |
+
"""
|
| 74 |
+
with open(eval_path) as f:
|
| 75 |
+
data = json.load(f)
|
| 76 |
+
|
| 77 |
+
entries = []
|
| 78 |
+
for r in data.get("results", []):
|
| 79 |
+
video = r.get("video", "")
|
| 80 |
+
prompt = r.get("prompt", "")
|
| 81 |
+
if not video:
|
| 82 |
+
continue
|
| 83 |
+
|
| 84 |
+
# Per-law physical scores (null-aware, supports v1 and v2 formats)
|
| 85 |
+
phys = r.get("physical", {})
|
| 86 |
+
if not isinstance(phys, dict):
|
| 87 |
+
continue
|
| 88 |
+
laws = phys.get("laws", {})
|
| 89 |
+
|
| 90 |
+
scored_vals = []
|
| 91 |
+
for law_name, law_data in laws.items():
|
| 92 |
+
if not isinstance(law_data, dict):
|
| 93 |
+
continue
|
| 94 |
+
score = law_data.get("score")
|
| 95 |
+
is_scored = (law_data.get("status") == "scored"
|
| 96 |
+
or law_data.get("valid", False))
|
| 97 |
+
if is_scored and score is not None:
|
| 98 |
+
scored_vals.append(score)
|
| 99 |
+
|
| 100 |
+
if not scored_vals:
|
| 101 |
+
continue
|
| 102 |
+
|
| 103 |
+
entries.append({
|
| 104 |
+
"video": video,
|
| 105 |
+
"prompt": prompt,
|
| 106 |
+
"phys_micro_avg": sum(scored_vals) / len(scored_vals),
|
| 107 |
+
"n_laws_scored": len(scored_vals),
|
| 108 |
+
})
|
| 109 |
+
return entries
|
| 110 |
+
|
| 111 |
+
|
| 112 |
+
def main(argv: list[str] | None = None):
|
| 113 |
+
parser = argparse.ArgumentParser(
|
| 114 |
+
description="Generate Anonymous-Hard subset from Qwen eval scores")
|
| 115 |
+
parser.add_argument("--threshold", type=float, default=3.00,
|
| 116 |
+
help="Physics micro-avg threshold (default: 3.00)")
|
| 117 |
+
parser.add_argument("--output", type=str, default=None,
|
| 118 |
+
help="Output path (default: data/prompts/anonymous_hard_subset.json)")
|
| 119 |
+
parser.add_argument("--dry-run", action="store_true",
|
| 120 |
+
help="Print stats without writing")
|
| 121 |
+
parser.add_argument("--strict", action="store_true",
|
| 122 |
+
help="Fail on data quality issues (for CI)")
|
| 123 |
+
args = parser.parse_args(argv)
|
| 124 |
+
|
| 125 |
+
logging.basicConfig(
|
| 126 |
+
level=logging.INFO,
|
| 127 |
+
format="[%(asctime)s] %(message)s",
|
| 128 |
+
handlers=[logging.StreamHandler(sys.stdout)],
|
| 129 |
+
)
|
| 130 |
+
|
| 131 |
+
checker = PipelineCheck(strict=args.strict)
|
| 132 |
+
|
| 133 |
+
# ---- Step 1: Collect per-model physics micro-avg for every video ----
|
| 134 |
+
# {eval_video: {model: phys_micro_avg}}
|
| 135 |
+
video_scores: dict[str, dict] = defaultdict(dict)
|
| 136 |
+
video_prompts: dict[str, str] = {}
|
| 137 |
+
video_eval_ds: dict[str, str] = {} # eval-side dataset suffix
|
| 138 |
+
eval_paths_by_ds: dict[str, Path] = {}
|
| 139 |
+
|
| 140 |
+
for d in sorted(VIDEOS_DIR.iterdir()):
|
| 141 |
+
if not d.is_dir():
|
| 142 |
+
continue
|
| 143 |
+
parsed = parse_model_dataset(d.name)
|
| 144 |
+
if parsed is None:
|
| 145 |
+
continue
|
| 146 |
+
model, dataset = parsed
|
| 147 |
+
if "real_world" in model:
|
| 148 |
+
continue
|
| 149 |
+
|
| 150 |
+
eval_path = find_latest_eval(d)
|
| 151 |
+
if eval_path is None:
|
| 152 |
+
continue
|
| 153 |
+
|
| 154 |
+
eval_paths_by_ds[dataset] = eval_path
|
| 155 |
+
entries = load_eval_scores(eval_path)
|
| 156 |
+
logger.info("Loaded %d entries from %s", len(entries), eval_path.name)
|
| 157 |
+
|
| 158 |
+
for e in entries:
|
| 159 |
+
vid = e["video"]
|
| 160 |
+
video_scores[vid][model] = e["phys_micro_avg"]
|
| 161 |
+
video_prompts[vid] = e["prompt"]
|
| 162 |
+
video_eval_ds[vid] = dataset
|
| 163 |
+
|
| 164 |
+
logger.info("Total unique videos with scores: %d", len(video_scores))
|
| 165 |
+
|
| 166 |
+
# ---- Step 2: Compute cross-model average ----
|
| 167 |
+
video_difficulty = {}
|
| 168 |
+
for vid, model_scores in video_scores.items():
|
| 169 |
+
vals = list(model_scores.values())
|
| 170 |
+
avg = sum(vals) / len(vals)
|
| 171 |
+
video_difficulty[vid] = {
|
| 172 |
+
"phys_micro_avg": round(avg, 3),
|
| 173 |
+
"n_models": len(model_scores),
|
| 174 |
+
}
|
| 175 |
+
|
| 176 |
+
# ---- Step 3: Filter by threshold ----
|
| 177 |
+
hard_vids = [
|
| 178 |
+
vid for vid, diff in video_difficulty.items()
|
| 179 |
+
if diff["phys_micro_avg"] < args.threshold
|
| 180 |
+
]
|
| 181 |
+
hard_vids.sort(key=lambda v: video_difficulty[v]["phys_micro_avg"])
|
| 182 |
+
|
| 183 |
+
logger.info("Threshold < %.2f: %d / %d videos",
|
| 184 |
+
args.threshold, len(hard_vids), len(video_difficulty))
|
| 185 |
+
|
| 186 |
+
# ---- Step 4: Look up physical_laws via canonical vid matching ----
|
| 187 |
+
source = load_source_laws()
|
| 188 |
+
|
| 189 |
+
# Check staleness: only compare each source JSON against its matching eval
|
| 190 |
+
DS_TO_EVAL_SUFFIX = {
|
| 191 |
+
"wmb": "wmb", "video_phy_2": "video_phy_2", "physics_iq": "physics_iq",
|
| 192 |
+
"openvid": "openvid",
|
| 193 |
+
}
|
| 194 |
+
for ds_name, src_path in PROMPT_SOURCES:
|
| 195 |
+
eval_suffix = DS_TO_EVAL_SUFFIX.get(ds_name)
|
| 196 |
+
if eval_suffix and eval_suffix in eval_paths_by_ds:
|
| 197 |
+
checker.check_staleness(src_path, eval_paths_by_ds[eval_suffix])
|
| 198 |
+
|
| 199 |
+
missing = 0
|
| 200 |
+
prompts_out = []
|
| 201 |
+
seen_disk_vids: dict[str, int] = {} # disk_vid -> index in prompts_out
|
| 202 |
+
for eval_vid in hard_vids:
|
| 203 |
+
eval_ds = video_eval_ds.get(eval_vid, "")
|
| 204 |
+
matched = source.resolve_eval(eval_vid, eval_ds)
|
| 205 |
+
|
| 206 |
+
if matched:
|
| 207 |
+
cvid, entry = matched
|
| 208 |
+
laws = entry["laws"]
|
| 209 |
+
dataset = entry["dataset"]
|
| 210 |
+
prompt = entry["prompt"] or video_prompts.get(eval_vid, "")
|
| 211 |
+
legacy_ids = source.cvid_to_legacies.get(cvid, set())
|
| 212 |
+
disk_vid = max(legacy_ids, key=len) if legacy_ids else eval_vid
|
| 213 |
+
else:
|
| 214 |
+
# Not in kept source — skip (removed, safety-blocked, etc.)
|
| 215 |
+
missing += 1
|
| 216 |
+
continue
|
| 217 |
+
|
| 218 |
+
# Deduplicate: different eval_vid values can resolve to the same
|
| 219 |
+
# disk_vid via legacy aliases. Keep the entry with more models.
|
| 220 |
+
if disk_vid in seen_disk_vids:
|
| 221 |
+
idx = seen_disk_vids[disk_vid]
|
| 222 |
+
existing = prompts_out[idx]
|
| 223 |
+
if video_difficulty[eval_vid]["n_models"] > existing["difficulty"]["n_models"]:
|
| 224 |
+
prompts_out[idx] = {
|
| 225 |
+
"video": disk_vid,
|
| 226 |
+
"dataset": dataset,
|
| 227 |
+
"prompt": prompt,
|
| 228 |
+
"physical_laws": laws,
|
| 229 |
+
"difficulty": video_difficulty[eval_vid],
|
| 230 |
+
"per_model_scores": dict(video_scores[eval_vid]),
|
| 231 |
+
}
|
| 232 |
+
continue
|
| 233 |
+
|
| 234 |
+
checker.check_empty_laws(disk_vid, laws, dataset,
|
| 235 |
+
resolved=matched is not None)
|
| 236 |
+
|
| 237 |
+
seen_disk_vids[disk_vid] = len(prompts_out)
|
| 238 |
+
prompts_out.append({
|
| 239 |
+
"video": disk_vid,
|
| 240 |
+
"dataset": dataset,
|
| 241 |
+
"prompt": prompt,
|
| 242 |
+
"physical_laws": laws,
|
| 243 |
+
"difficulty": video_difficulty[eval_vid],
|
| 244 |
+
"per_model_scores": dict(video_scores[eval_vid]),
|
| 245 |
+
})
|
| 246 |
+
|
| 247 |
+
checker.check_missing_ratio(missing, len(hard_vids))
|
| 248 |
+
|
| 249 |
+
# ---- Step 5: Compute stats and report ----
|
| 250 |
+
by_dataset = Counter(p["dataset"] for p in prompts_out)
|
| 251 |
+
law_counts = Counter()
|
| 252 |
+
for p in prompts_out:
|
| 253 |
+
for law in p["physical_laws"]:
|
| 254 |
+
law_counts[law] += 1
|
| 255 |
+
|
| 256 |
+
output = {
|
| 257 |
+
"description": (
|
| 258 |
+
f"Anonymous-Hard: prompts where cross-model physics micro-avg < {args.threshold} "
|
| 259 |
+
f"(Qwen, per-law scores, all 13 laws)"
|
| 260 |
+
),
|
| 261 |
+
"threshold": args.threshold,
|
| 262 |
+
"scoring_mode": "phys_micro_avg",
|
| 263 |
+
"judge": "qwen",
|
| 264 |
+
"num_prompts": len(prompts_out),
|
| 265 |
+
"by_dataset": dict(by_dataset.most_common()),
|
| 266 |
+
"prompts": prompts_out,
|
| 267 |
+
}
|
| 268 |
+
|
| 269 |
+
logger.info("=" * 60)
|
| 270 |
+
logger.info("Hard subset: %d prompts", len(prompts_out))
|
| 271 |
+
logger.info("By dataset:")
|
| 272 |
+
for ds, cnt in by_dataset.most_common():
|
| 273 |
+
logger.info(" %s: %d", ds, cnt)
|
| 274 |
+
logger.info("Physical law counts:")
|
| 275 |
+
for law, cnt in law_counts.most_common():
|
| 276 |
+
logger.info(" %s: %d", law, cnt)
|
| 277 |
+
|
| 278 |
+
score = checker.report()
|
| 279 |
+
|
| 280 |
+
if args.dry_run:
|
| 281 |
+
logger.info("(dry-run — no file written)")
|
| 282 |
+
checker.finalize()
|
| 283 |
+
return
|
| 284 |
+
|
| 285 |
+
out_path = Path(args.output) if args.output else OUTPUT_PATH
|
| 286 |
+
with open(out_path, "w") as f:
|
| 287 |
+
json.dump(output, f, indent=2, ensure_ascii=False)
|
| 288 |
+
logger.info("Saved → %s", out_path)
|
| 289 |
+
|
| 290 |
+
checker.finalize()
|
| 291 |
+
|
| 292 |
+
|
| 293 |
+
if __name__ == "__main__":
|
| 294 |
+
main()
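The core of `gen_hard_subset` (Steps 1–3) is a cross-model difficulty average: collect each model's physics micro-avg per video, average across models, and keep videos below the threshold, hardest first. A self-contained sketch of that aggregation — hypothetical `hard_subset` name and flat `(video, model, score)` tuples instead of the script's eval-JSON loading:

```python
from collections import defaultdict


def hard_subset(entries, threshold=3.0):
    """entries: iterable of (video, model, phys_micro_avg).
    Returns (hard_videos_sorted_hardest_first, difficulty_by_video)."""
    scores = defaultdict(dict)          # {video: {model: micro_avg}}
    for video, model, avg in entries:
        scores[video][model] = avg
    # Cross-model average per video (Step 2).
    difficulty = {v: sum(m.values()) / len(m) for v, m in scores.items()}
    # Keep videos below the threshold, sorted by ascending score (Step 3).
    hard = [v for v, a in difficulty.items() if a < threshold]
    hard.sort(key=difficulty.__getitem__)
    return hard, difficulty


entries = [
    ("vid_a", "m1", 1.0), ("vid_a", "m2", 2.0),  # avg 1.5 -> hard
    ("vid_b", "m1", 4.0), ("vid_b", "m2", 3.0),  # avg 3.5 -> kept out
]
hard, diff = hard_subset(entries)
assert hard == ["vid_a"] and diff["vid_a"] == 1.5
```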
|
dataprocessing/refine/gen_humaneval_set.py
ADDED
|
@@ -0,0 +1,16 @@
|
| 1 |
+
"""Disabled human-eval prompt-set generator for the anonymous release.
|
| 2 |
+
|
| 3 |
+
The original generator writes prompt-selection JSON artifacts that are not part
|
| 4 |
+
of this code release.
|
| 5 |
+
"""
|
| 6 |
+
|
| 7 |
+
|
| 8 |
+
def main(argv=None):
|
| 9 |
+
raise RuntimeError(
|
| 10 |
+
"gen_humaneval_set is not included in this release because it writes "
|
| 11 |
+
"prompt-selection JSON artifacts that are omitted."
|
| 12 |
+
)
|
| 13 |
+
|
| 14 |
+
|
| 15 |
+
if __name__ == "__main__":
|
| 16 |
+
main()
|
evals/__init__.py
ADDED
|
File without changes
|
evals/eval_types.py
ADDED
|
@@ -0,0 +1,154 @@
|
| 1 |
+
"""Typed data structures for VLM evaluation pipeline — parse, don't validate.
|
| 2 |
+
|
| 3 |
+
Every constructor enforces types and invariants. Once you have a PromptEntry
|
| 4 |
+
or LawScore, the data is guaranteed clean.
|
| 5 |
+
"""
|
| 6 |
+
|
| 7 |
+
from __future__ import annotations
|
| 8 |
+
|
| 9 |
+
import logging
|
| 10 |
+
from dataclasses import dataclass
|
| 11 |
+
|
| 12 |
+
logger = logging.getLogger(__name__)
|
| 13 |
+
|
| 14 |
+
|
| 15 |
+
# ---------------------------------------------------------------------------
|
| 16 |
+
# PromptEntry — parsed from prompts JSON at the boundary
|
| 17 |
+
# ---------------------------------------------------------------------------
|
| 18 |
+
|
| 19 |
+
@dataclass(frozen=True)
|
| 20 |
+
class PromptEntry:
|
| 21 |
+
"""A parsed, validated prompt entry for evaluation.
|
| 22 |
+
|
| 23 |
+
Required: prompt must be non-empty.
|
| 24 |
+
physical_laws: None means the field was absent (skip physical eval),
|
| 25 |
+
[] means explicitly empty.
|
| 26 |
+
"""
|
| 27 |
+
prompt: str
|
| 28 |
+
physical_laws: list[str] | None = None
|
| 29 |
+
domain: str = ""
|
| 30 |
+
first_frame_image: str = ""
|
| 31 |
+
dataset: str = ""
|
| 32 |
+
video: str = ""
|
| 33 |
+
|
| 34 |
+
def __post_init__(self):
|
| 35 |
+
if not isinstance(self.prompt, str) or not self.prompt:
|
| 36 |
+
raise ValueError(f"PromptEntry.prompt must be non-empty str, got {self.prompt!r}")
|
| 37 |
+
|
| 38 |
+
@property
|
| 39 |
+
def has_physical_laws(self) -> bool:
|
| 40 |
+
return self.physical_laws is not None
|
| 41 |
+
|
| 42 |
+
|
| 43 |
+
def parse_prompt_entry(raw: dict, *, key: str = "") -> PromptEntry | None:
|
| 44 |
+
"""Parse a raw JSON dict into PromptEntry.
|
| 45 |
+
|
| 46 |
+
Returns None on bad data (tolerant — logs warning instead of raising).
|
| 47 |
+
"""
|
| 48 |
+
prompt = raw.get("prompt") or raw.get("description") or ""
|
| 49 |
+
if not isinstance(prompt, str) or not prompt.strip():
|
| 50 |
+
logger.warning("Skipping entry %r: missing or empty prompt", key)
|
| 51 |
+
return None
|
| 52 |
+
|
| 53 |
+
physical_laws = raw.get("physical_laws")
|
| 54 |
+
if physical_laws is not None and not isinstance(physical_laws, list):
|
| 55 |
+
logger.warning("Entry %r: physical_laws is not a list, ignoring", key)
|
| 56 |
+
physical_laws = None
|
| 57 |
+
|
| 58 |
+
domain = raw.get("_domain") or raw.get("domain") or raw.get("our_domain") or ""
|
| 59 |
+
|
| 60 |
+
try:
|
| 61 |
+
return PromptEntry(
|
| 62 |
+
prompt=prompt.strip().strip('"'),
|
| 63 |
+
physical_laws=physical_laws,
|
| 64 |
+
domain=str(domain),
|
| 65 |
+
first_frame_image=str(raw.get("first_frame_image") or ""),
|
| 66 |
+
dataset=str(raw.get("dataset") or ""),
|
| 67 |
+
video=str(raw.get("video") or ""),
|
| 68 |
+
)
|
| 69 |
+
except (ValueError, TypeError) as e:
|
| 70 |
+
logger.warning("Skipping entry %r: %s", key, e)
|
| 71 |
+
return None
|
| 72 |
+
|
| 73 |
+
|
| 74 |
+
# ---------------------------------------------------------------------------
|
| 75 |
+
# LawScore — per-law evaluation result
|
| 76 |
+
# ---------------------------------------------------------------------------
|
| 77 |
+
|
| 78 |
+
SCORED = "scored"
|
| 79 |
+
NOT_OBSERVED = "not_observed"
|
| 80 |
+
FAILED = "failed"
|
| 81 |
+
_VALID_STATUSES = frozenset({SCORED, NOT_OBSERVED, FAILED})
|
| 82 |
+
|
| 83 |
+
|
| 84 |
+
@dataclass(frozen=True)
|
| 85 |
+
class LawScore:
|
| 86 |
+
"""Result of scoring one physical law on one video."""
|
| 87 |
+
law: str
|
| 88 |
+
score: int | None
|
| 89 |
+
status: str # SCORED | NOT_OBSERVED | FAILED
|
| 90 |
+
sub_answers: dict[str, str] | None = None # raw yes/no/na per sub-question
|
| 91 |
+
|
| 92 |
+
def __post_init__(self):
|
| 93 |
+
if self.status not in _VALID_STATUSES:
|
| 94 |
+
raise ValueError(
|
| 95 |
+
f"LawScore.status must be one of {_VALID_STATUSES}, got {self.status!r}"
|
| 96 |
+
)
|
| 97 |
+
if self.status == SCORED and self.score is None:
|
| 98 |
+
raise ValueError(f"LawScore({self.law!r}): status='scored' but score is None")
|
| 99 |
+
|
| 100 |
+
@classmethod
|
| 101 |
+
def scored(cls, law: str, score: int, sub_answers: dict[str, str] | None = None) -> LawScore:
|
| 102 |
+
return cls(law=law, score=score, status=SCORED, sub_answers=sub_answers)
|
| 103 |
+
|
| 104 |
+
@classmethod
|
| 105 |
+
def not_observed(cls, law: str, sub_answers: dict[str, str] | None = None) -> LawScore:
|
| 106 |
+
return cls(law=law, score=None, status=NOT_OBSERVED, sub_answers=sub_answers)
|
| 107 |
+
|
| 108 |
+
@classmethod
|
| 109 |
+
def failed(cls, law: str) -> LawScore:
|
| 110 |
+
return cls(law=law, score=None, status=FAILED)
|
| 111 |
+
|
| 112 |
+
def to_dict(self) -> dict:
|
| 113 |
+
d: dict = {"score": self.score, "status": self.status}
|
| 114 |
+
if self.sub_answers is not None:
|
| 115 |
+
d["sub_answers"] = self.sub_answers
|
| 116 |
+
return d
|
| 117 |
+
|
| 118 |
+
|
| 119 |
+
# ---------------------------------------------------------------------------
|
| 120 |
+
# PhysicalSummary — aggregated physical results for one video
|
| 121 |
+
# ---------------------------------------------------------------------------
|
| 122 |
+
|
| 123 |
+
@dataclass
|
| 124 |
+
class PhysicalSummary:
|
| 125 |
+
"""Aggregated physical evaluation for one video."""
|
| 126 |
+
laws: dict[str, LawScore]
|
| 127 |
+
missing_laws: list[str]
|
| 128 |
+
coverage: float
|
| 129 |
+
avg: float | None
|
| 130 |
+
|
| 131 |
+
@classmethod
|
| 132 |
+
def from_law_scores(
|
| 133 |
+
cls, law_scores: dict[str, LawScore], total_laws: int,
|
| 134 |
+
) -> PhysicalSummary:
|
| 135 |
+
scored_values = [
|
| 136 |
+
ls.score for ls in law_scores.values() if ls.status == SCORED
|
| 137 |
+
]
|
| 138 |
+
missing = [name for name, ls in law_scores.items() if ls.status == FAILED]
|
| 139 |
+
coverage = len(scored_values) / total_laws if total_laws else 0
|
| 140 |
+
avg = sum(scored_values) / len(scored_values) if scored_values else None
|
| 141 |
+
return cls(
|
| 142 |
+
laws=law_scores,
|
| 143 |
+
missing_laws=missing,
|
| 144 |
+
coverage=round(coverage, 4),
|
| 145 |
+
avg=round(avg, 4) if avg is not None else None,
|
| 146 |
+
)
|
| 147 |
+
|
| 148 |
+
def to_dict(self) -> dict:
|
| 149 |
+
return {
|
| 150 |
+
"laws": {name: ls.to_dict() for name, ls in self.laws.items()},
|
| 151 |
+
"missing_laws": self.missing_laws,
|
| 152 |
+
"coverage": self.coverage,
|
| 153 |
+
"avg": self.avg,
|
| 154 |
+
}
|
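`PhysicalSummary.from_law_scores` above implements null-aware aggregation: only laws with status `"scored"` enter the average, `"failed"` laws are listed as missing, and coverage is scored laws over the total. A standalone sketch of the same arithmetic on plain dicts (hypothetical `summarize` name, no dependency on the release's `evals.eval_types` module):

```python
SCORED, NOT_OBSERVED, FAILED = "scored", "not_observed", "failed"


def summarize(law_scores, total_laws):
    """Aggregate per-law results. Only 'scored' laws count toward avg;
    'failed' laws are reported missing; 'not_observed' affects neither avg
    nor missing, only coverage (it is not scored)."""
    scored = [d["score"] for d in law_scores.values() if d["status"] == SCORED]
    missing = [name for name, d in law_scores.items() if d["status"] == FAILED]
    return {
        "avg": round(sum(scored) / len(scored), 4) if scored else None,
        "coverage": round(len(scored) / total_laws, 4) if total_laws else 0,
        "missing_laws": missing,
    }


laws = {
    "gravity": {"score": 4, "status": SCORED},
    "buoyancy": {"score": None, "status": NOT_OBSERVED},
    "friction": {"score": None, "status": FAILED},
}
s = summarize(laws, total_laws=13)
assert s["avg"] == 4.0                 # only 'gravity' is scored
assert s["coverage"] == round(1 / 13, 4)
assert s["missing_laws"] == ["friction"]
```

Keeping unscored laws out of the average (rather than counting them as zero) is what lets downstream scripts like `gen_hard_subset` compute a fair micro-avg from partially scored evals.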
evals/human_eval/__init__.py
ADDED
|
File without changes
|
evals/human_eval/app.py
ADDED
|
@@ -0,0 +1,869 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
"""Human evaluation annotation app."""
import hashlib
import json
import random
import sqlite3
from functools import wraps
from pathlib import Path

from flask import (
    Flask,
    g,
    redirect,
    render_template,
    request,
    send_from_directory,
    session,
    url_for,
    jsonify,
    abort,
)

from human_eval.config import (
    N_ANNOTATORS_PER_VIDEO,
    ASSIGNMENT_TTL_HOURS,
    VIDEO_DATA_DIR,
    DB_PATH,
    SECRET_KEY,
    DISAGREEMENT_TOP_K,
    STATUS_ASSIGNED,
    STATUS_COMPLETED,
    STATUS_PARTIAL,
    STATUS_SKIPPED,
    COMPARISON_MODELS,
    MODELS_PER_GROUP,
    COMPLETION_SURVEY_URL,
    extract_model,
    get_batch_size,
    VALID_COHORTS,
    TEST_COHORTS,
    COHORT_COMPLETION_CODE,
)
from human_eval.db import init_db, get_db, pending_assignment_sql
from human_eval.assign import assign_comparison_batch

# ---------------------------------------------------------------------------
# Import CRITERIA & DOMAIN_SUBSCORES from physics_criteria (single source of truth)
# ---------------------------------------------------------------------------
import sys
sys.path.insert(0, str(Path(__file__).resolve().parent.parent.parent))
from evals.physics_criteria import (
    CRITERIA, CRITERIA_EN, CRITERIA_ZH, DOMAIN_SUBSCORES,
    HUMAN_CRITERIA, HUMAN_CRITERIA_BY_KEY, HUMAN_DOMAINS,
)


def _all_criteria_for_lang(lang: str) -> list[tuple[str, str]]:
    """Return a flat, deduplicated list of (law_key, description) for display."""
    src = CRITERIA_ZH if lang == "zh" else CRITERIA_EN
    return [(k, src[k]) for k in src]


def _get_physical_dims(assignment) -> list[tuple[str, str]]:
    """Parse physical_laws JSON from a video/assignment row and return (law, description) pairs."""
    laws = json.loads(assignment["physical_laws"])
    return [(law, CRITERIA.get(law, law)) for law in laws]


# ---------------------------------------------------------------------------
# General dimensions — derived from HumanCriterion (General domain)
# ---------------------------------------------------------------------------
_SCORE_LABELS = {1: "Completely implausible", 2: "Largely implausible", 3: "Partially plausible", 4: "Mostly plausible", 5: "Fully plausible"}
_SCORE_LABELS_ZH = {1: "完全不合理", 2: "大部分不合理", 3: "部分合理", 4: "大部分合理", 5: "完全合理"}

GENERAL_DIMS = [
    (
        c.key, c.name, list(range(1, 6)),
        f"{c.question} ({c.note})" if c.note else c.question,
        _SCORE_LABELS,
    )
    for c in HUMAN_DOMAINS["General"]
]

GENERAL_DIMS_ZH = [
    (
        c.key, c.name_zh, list(range(1, 6)),
        f"{c.question_zh}({c.note_zh})" if c.note_zh else c.question_zh,
        _SCORE_LABELS_ZH,
    )
    for c in HUMAN_DOMAINS["General"]
]


def _save_annotation(db, assignment_id, general_scores, physical_scores, physical_dims, meta_ver, note="", play_count=0, stay_seconds=0.0, na_laws=None):
    """Upsert annotation + annotation_items for a single assignment."""
    if na_laws is None:
        na_laws = []
    law_keys = [key for key, _ in physical_dims]
    scores_json = json.dumps({
        "general": general_scores,
        "physical": physical_scores,
        "physical_laws": law_keys,
    })
    na_laws_json = json.dumps(na_laws)

    existing = db.execute(
        "SELECT id FROM annotations WHERE assignment_id = ?", (assignment_id,)
    ).fetchone()

    if existing:
        annotation_id = existing["id"]
        db.execute(
            "UPDATE annotations SET scores_json = ?, metadata_version = ?, note = ?, play_count = ?, stay_seconds = ?, na_laws = ?, updated_at = CURRENT_TIMESTAMP WHERE id = ?",
            (scores_json, meta_ver, note, play_count, stay_seconds, na_laws_json, annotation_id),
        )
        db.execute("DELETE FROM annotation_items WHERE annotation_id = ?", (annotation_id,))
    else:
        cur = db.execute(
            "INSERT INTO annotations (assignment_id, scores_json, metadata_version, note, play_count, stay_seconds, na_laws) VALUES (?, ?, ?, ?, ?, ?, ?)",
            (assignment_id, scores_json, meta_ver, note, play_count, stay_seconds, na_laws_json),
        )
        annotation_id = cur.lastrowid

    items = [(annotation_id, key, None, general_scores[key]) for key in general_scores]
    items += [(annotation_id, key, key, physical_scores[key]) for key in physical_scores if physical_scores[key] is not None]
    db.executemany(
        "INSERT INTO annotation_items (annotation_id, dimension, law, score) VALUES (?, ?, ?, ?)",
        items,
    )


def _extract_scores(prefix, physical_dims):
    """Parse general + physical scores and NA laws from the current request form."""
    general = {}
    for key, _, _, _, _ in GENERAL_DIMS:
        val = request.form.get(f"{prefix}{key}")
        if val:
            general[key] = int(val)
    physical = {}
    for key, _ in physical_dims:
        val = request.form.get(f"{prefix}{key}")
        if val and val != "null":
            physical[key] = int(val)
    # NA laws come from a separate comma-separated hidden input
    na_raw = request.form.get(f"{prefix}na_laws", "")
    na_laws = [k for k in na_raw.split(",") if k]
    return general, physical, na_laws


# ---------------------------------------------------------------------------
# Demo data (hardcoded, editable)
# ---------------------------------------------------------------------------

DEMO_ENTRIES = [
    {
        "video_url": "/video/wan2.2-i2v-a14b/collision_283.mp4",
        "prompt": "A water balloon is thrown at a large target made of cardboard; it bursts cleanly.",
        "prompt_zh": "一个水气球被投向一个纸板做的大靶子,干脆地炸裂开来。",
        "scores": {"SA": 4, "PTV": 3, "persistence": 2},
        "physical_dims": [
            ("collision", "After impact, is there reasonable bounce/shatter/deformation? Does response match impact force?"),
            ("material", "Do different materials respond according to their properties? Glass shatters? Rubber bounces?"),
            ("momentum", "After collision, is the direction of motion reasonable? Ignore speed magnitude."),
            ("impenetrability", "Do objects maintain impenetrability — no passing through each other?"),
        ],
        "physical_dims_zh": [
            ("collision", "碰撞后是否有合理的弹跳/破碎/变形?响应是否与撞击力匹配?"),
            ("material", "不同材质是否按其属性反应?玻璃碎裂?橡胶弹跳?"),
            ("momentum", "碰撞后运动方向是否合理?忽略速度大小。"),
            ("impenetrability", "物体是否保持不可穿透性——没有互相穿过?"),
        ],
        "physical_scores": {"collision": 4, "material": 3, "momentum": 2, "impenetrability": 1},
        "rationale_general": {
            "SA": "The video shows a water balloon being thrown at a target and bursting. The overall scene matches the prompt well.",
            "PTV": "The balloon approaches and bursts on contact, but because the target behaves like an inflatable object rather than cardboard, the physical interaction sequence is only partially correct.",
            "persistence": "The large target morphs from cardboard into an inflatable object mid-video — a major material/shape inconsistency.",
        },
        "rationale_general_zh": {
            "SA": "视频展示了一个水气球被投向靶子并炸裂的过程,整体场景与提示词匹配良好。",
            "PTV": "气球接近并在接触时炸裂,但由于靶子表现得像充气物体而非纸板,物理交互序列仅部分正确。",
            "persistence": "大靶子在视频中途从纸板变形为充气物体——存在严重的材质/形状不一致。",
        },
        "rationale_physical": {
            "collision": "The balloon deforms and shatters on impact, which is a reasonable response to the collision force.",
            "material": "The water balloon bursts correctly on impact, but the cardboard target expands/inflates unnaturally — real cardboard would not swell like that.",
            "momentum": "After the balloon bursts, the water and debris should stop and fall downward, but they continue moving unnaturally.",
            "impenetrability": "The balloon passes through the target instead of bouncing off or bursting on the surface — objects should not penetrate each other.",
        },
        "rationale_physical_zh": {
            "collision": "气球在撞击时变形并破裂,这是对碰撞力的合理响应。",
            "material": "水气球在撞击时正确地破裂,但纸板靶子不自然地膨胀——真实的纸板不会像那样膨胀。",
            "momentum": "气球破裂后,水和碎片应该停下并向下掉落,但它们继续不自然地移动。",
            "impenetrability": "气球穿过了靶子,而不是在表面弹开或破裂——物体不应互相穿透。",
        },
    },
    {
        "video_url": "/video/cosmos-predict2.5-14b/88H1BBqFzXQ_1_118to222.mp4",
        "prompt": "A heavy car tire rolls over a green bottle on concrete, crushing it flat. The trapped liquid bursts from the broken bottle and rapidly flows outward, spreading across the ground in an expanding puddle.",
        "prompt_zh": "一个沉重的汽车轮胎在混凝土地面上碾过一个绿色瓶子,将其压扁。瓶中的液体从破碎的瓶子中喷出,迅速向外流动,在地面上形成一个不断扩大的水洼。",
        "scores": {"SA": 3, "PTV": 1, "persistence": 3},
        "physical_dims": [
            ("flow_dynamics", "Does the liquid's overall motion behave realistically over time — flowing along surfaces, spreading, draining naturally?"),
            ("material", "Does each material respond according to its properties? (glass shatters, rubber bounces, metal is rigid, cloth deforms softly, etc.)"),
            ("collision", "After impact, is there reasonable bounce/shatter/deformation? Does response match impact force?"),
            ("fluid_continuity", "Does the liquid maintain physical continuity and mass conservation? Brief splash separation is acceptable."),
        ],
        "physical_dims_zh": [
            ("flow_dynamics", "液体整体流动是否符合物理——沿表面流动、铺展、排出是否自然?"),
            ("material", "每种材料的响应是否符合其属性?玻璃碎裂、橡胶弹回、金属坚硬、布料柔软变形。"),
            ("collision", "物体撞击后是否有合理的反弹/碎裂/变形?撞击力度与响应程度是否匹配?"),
            ("fluid_continuity", "液体是否保持物理连续性与质量守恒——无不合理的断裂、消失或凭空生成?短暂飞溅分离可接受。"),
        ],
        "physical_scores": {"flow_dynamics": 4, "material": 1, "collision": 1, "fluid_continuity": 4},
        "rationale_general": {
            "SA": "The liquid bursting from the broken bottle and spreading into a puddle is present, but the first part — a heavy tire rolling over and crushing the bottle flat — is missing.",
            "PTV": "The bottle breaks without any logical cause — there is no visible impact or force that triggers the rupture.",
            "persistence": "The tire and ground maintain consistent appearance, but the text on the bottle label changes mid-video.",
        },
        "rationale_general_zh": {
            "SA": "被困的液体从破碎的瓶子中爆裂而出并在地面形成水洼,这部分有体现;但前半部分——沉重的汽车轮胎碾过绿色瓶子并将其压扁——缺失。",
            "PTV": "瓶子没有任何逻辑原因就破裂了——没有可见的撞击或外力触发破裂。",
            "persistence": "轮胎和地面保持一致,但瓶子的贴纸文字在视频中途发生了变化。",
        },
        "rationale_physical": {
            "flow_dynamics": "The liquid bursts out and rapidly flows outward, forming an expanding puddle on the ground — largely consistent with realistic flow behavior.",
            "material": "The glass bottle does not shatter convincingly; it deforms like a soft object rather than breaking into sharp fragments as real glass would.",
            "collision": "The bottle ruptures without any visible force or impact — it just bursts on its own.",
            "fluid_continuity": "No obvious breakup, disappearance, or spontaneous appearance of liquid throughout the video.",
        },
        "rationale_physical_zh": {
            "flow_dynamics": "液体爆裂而出,迅速向外流淌,在地面上形成不断扩大的水洼——基本符合真实的流动行为。",
            "material": "玻璃瓶的碎裂不够逼真,像软物体一样变形,而不是像真正的玻璃那样碎成锋利的碎片。",
            "collision": "瓶子没有受到任何可见的外力或撞击就爆裂了——完全是自行破裂。",
            "fluid_continuity": "整个视频中液体没有明显的碎裂、消失或凭空生成现象。",
        },
    },
    {
        "video_url": "/video/cosmos-predict2.5-2b/collision_144.mp4",
        "prompt": "A baseball is struck by a bat, causing a visible scuff mark on the ball\u2019s surface.",
        "prompt_zh": "一根球棒击中一个棒球,在球的表面留下一个明显的擦痕。",
        "scores": {"SA": 3, "PTV": 1, "persistence": 3},
        "physical_dims": [
            ("collision", "After impact, is there reasonable bounce/shatter/deformation? Does response match impact force?"),
            ("material", "Do different materials respond according to their properties? Glass shatters? Rubber bounces?"),
            ("momentum", "After collision, is the direction of motion reasonable? Ignore speed magnitude."),
        ],
        "physical_dims_zh": [
            ("collision", "碰撞后是否有合理的弹跳/破碎/变形?响应是否与撞击力匹配?"),
            ("material", "不同材质是否按其属性反应?玻璃碎裂?橡胶弹跳?"),
            ("momentum", "碰撞后运动方向是否合理?忽略速度大小。"),
        ],
        "physical_scores": {"collision": 1, "material": 1, "momentum": 1},
        "rationale_general": {
            "SA": "A visible scuff mark appears on the ball, but the bat never clearly strikes the baseball — the striking action is missing.",
            "PTV": "The bat never strikes the ball, yet a scuff mark appears — the temporal sequence is highly inconsistent and misordered.",
            "persistence": "The ball maintains mostly consistent appearance, but the bat only appears for a brief moment before disappearing.",
        },
        "rationale_general_zh": {
            "SA": "球上出现了明显的擦痕,但球棒从未清晰地击中棒球——缺少击球动作。",
            "PTV": "球棒没有击中球,就出现了擦痕,时间顺序极其不一致且顺序错误。",
            "persistence": "球基本保持一致的外观,但球棒只出现了一瞬间就消失了。",
        },
        "rationale_physical": {
            "collision": "No real collision occurs — the bat never makes clear contact with the ball, so there is no physically plausible impact response.",
            "material": "The baseball does not fly away when struck. Instead, a massive, unrealistic brown chunk appears on its surface, resembling mud rather than a scuff mark. The bat also exhibits morphing and clipping issues.",
            "momentum": "The direction of motion is implausible because no real collision occurs — the bat never makes clear contact with the ball.",
        },
        "rationale_physical_zh": {
            "collision": "没有发生真正的碰撞——球棒从未清晰地接触到棒球,因此不存在物理上合理的撞击响应。",
            "material": "棒球被击中后没有飞走。相反,其表面出现了一个巨大的、不真实的棕色块状物,看起来像泥巴而不是擦痕。球棒也表现出变形和穿模问题。",
            "momentum": "运动方向不合理,因为根本没有发生真正的碰撞——球棒从未清晰地接触到棒球。",
        },
    },
]


# ---------------------------------------------------------------------------
# login_required decorator
# ---------------------------------------------------------------------------
def login_required(f):
    @wraps(f)
    def decorated(*args, **kwargs):
        if "annotator_id" not in session:
            return redirect(url_for("login_page"))
        return f(*args, **kwargs)
    return decorated


def verify_ownership(assignment):
    if assignment is None:
        abort(404)
    if assignment["annotator_id"] != session["annotator_id"]:
        abort(403)


# ---------------------------------------------------------------------------
# App-level DB helper (stores conn on g, tied to app context)
# ---------------------------------------------------------------------------
def get_app_db() -> sqlite3.Connection:
    from flask import current_app
    # For in-memory databases, use the shared connection stored on the app
    if hasattr(current_app, '_mem_conn'):
        return current_app._mem_conn
    if "db" not in g:
        g.db = get_db(g.db_path)
    return g.db


def close_app_db(exc=None):
    db = g.pop("db", None)
    if db is not None:
        db.close()


# ---------------------------------------------------------------------------
# Factory
# ---------------------------------------------------------------------------
def create_app(
    db_path=None,
    video_data_dir=None,
    skip_import=True,
):
    app = Flask(__name__, template_folder=str(Path(__file__).resolve().parent / "templates"))
    app.secret_key = SECRET_KEY

    resolved_db_path = db_path or str(DB_PATH)
    resolved_video_dir = str(video_data_dir) if video_data_dir else str(VIDEO_DATA_DIR)

    # Initialise schema (for :memory: or first run)
    if resolved_db_path == ":memory:":
        # Keep a single in-memory connection for the lifetime of the app
        mem_conn = sqlite3.connect(":memory:")
        mem_conn.row_factory = sqlite3.Row
        init_db(mem_conn)
        app._mem_conn = mem_conn

        # Don't close the shared in-memory connection on teardown
    else:
        conn = get_db(Path(resolved_db_path))
        init_db(conn)
        if not skip_import:
            from human_eval.import_videos import import_videos
            import_videos(conn, Path(resolved_video_dir))
        conn.close()

    @app.before_request
    def _inject_db_path():
        g.db_path = resolved_db_path

    app.teardown_appcontext(close_app_db)

    # Expose constants to all templates (avoids passing on every render_template call)
    app.jinja_env.globals.update(
        STATUS_ASSIGNED=STATUS_ASSIGNED,
        STATUS_COMPLETED=STATUS_COMPLETED,
        STATUS_PARTIAL=STATUS_PARTIAL,
        STATUS_SKIPPED=STATUS_SKIPPED,
        general_dims=GENERAL_DIMS,
        criteria=CRITERIA,
        domain_subscores=DOMAIN_SUBSCORES,
        human_criteria_by_key=HUMAN_CRITERIA_BY_KEY,
        human_criteria=HUMAN_CRITERIA,
        human_domains=HUMAN_DOMAINS,
    )

    # ------------------------------------------------------------------
    # Routes
    # ------------------------------------------------------------------

    @app.route("/")
    def login_page():
        if request.args.get("return_url"):
            session["return_url"] = request.args["return_url"]
        cohort = request.args.get("cohort", "others")
        if cohort not in VALID_COHORTS:
            cohort = "others"
        session["cohort"] = cohort
        return render_template("login.html", cohort=cohort)

    @app.route("/guide")
    def guide():
        lang = request.args.get("lang", "en")
        if lang not in ("en", "zh"):
            lang = "en"
        return render_template(
            "guide.html",
            lang=lang,
            general_dims_display=GENERAL_DIMS_ZH if lang == "zh" else GENERAL_DIMS,
            all_criteria=_all_criteria_for_lang(lang),
        )

    @app.route("/demo")
    @login_required
    def demo():
        if request.args.get("return_url"):
            session["return_url"] = request.args["return_url"]
        lang = request.args.get("lang", "en")
        if lang not in ("en", "zh"):
            lang = "en"
        dims = GENERAL_DIMS_ZH if lang == "zh" else GENERAL_DIMS

        # ---- task-list data (merged from /tasks) ----
        db = get_app_db()
        annotator_id = session["annotator_id"]
        user_batch_size = get_batch_size(_get_user_cohort(db, annotator_id))

        rows = db.execute(
            "SELECT a.id, a.status, a.expires_at, a.group_id, "
            "v.filename, v.dataset, v.prompt, v.physical_laws, "
            "cg.prompt AS group_prompt "
            "FROM assignments a "
            "JOIN videos v ON a.video_id = v.id "
            "LEFT JOIN comparison_groups cg ON a.group_id = cg.id "
            "WHERE a.annotator_id = ? AND a.group_id IS NOT NULL "
            "ORDER BY a.group_id, a.id",
            (annotator_id,),
        ).fetchall()

        groups = {}
        for r in rows:
            gid = r["group_id"]
            if gid not in groups:
                groups[gid] = {
                    "group_id": gid,
                    "prompt": r["group_prompt"] or r["prompt"],
                    "assignments": [],
                    "models": [],
                    "status": STATUS_COMPLETED,
                }
            groups[gid]["assignments"].append(r)
            groups[gid]["models"].append(extract_model(r["dataset"]))
            if r["status"] not in (STATUS_COMPLETED, STATUS_PARTIAL, STATUS_SKIPPED):
                groups[gid]["status"] = STATUS_ASSIGNED
            elif r["status"] == STATUS_PARTIAL and groups[gid]["status"] == STATUS_COMPLETED:
                groups[gid]["status"] = STATUS_PARTIAL

        group_list = list(groups.values())
        _done_statuses = (STATUS_COMPLETED, STATUS_PARTIAL)
        user_done_groups = len([g for g in group_list if g["status"] in _done_statuses])

        # Reached quota — redirect to thanks page
        if user_done_groups >= user_batch_size:
            return redirect(url_for("thanks"))

        pending_groups = [g for g in group_list if g["status"] not in _done_statuses]
        if len(group_list) > 0 and len(pending_groups) == 0:
            new_gids = _auto_assign(db, annotator_id)
            if new_gids:
                return redirect(url_for("demo"))

        total_prompts = db.execute(
            "SELECT COUNT(DISTINCT prompt) AS c FROM comparison_groups"
        ).fetchone()["c"]

        group_list.sort(key=lambda g: (g["status"] in (STATUS_COMPLETED, STATUS_PARTIAL), g["group_id"]))

        return render_template(
            "demo.html",
            lang=lang,
            demos=DEMO_ENTRIES,
            general_dims=dims,
            human_criteria_by_key=HUMAN_CRITERIA_BY_KEY,
            groups=group_list,
            annotator_name=session.get("annotator_name"),
            total_prompts=total_prompts,
            user_completed_groups=user_done_groups,
            user_quota=user_batch_size,
        )

    @app.route("/thanks")
    @login_required
    def thanks():
        db = get_app_db()
        cohort = _get_user_cohort(db, session["annotator_id"])
        code = COHORT_COMPLETION_CODE.get(cohort, f"{random.randint(0, 999999):06d}")
        return render_template("thanks.html", code=code, survey_url=COMPLETION_SURVEY_URL)

    @app.route("/login", methods=["POST"])
    def login():
        username = (request.form.get("username") or "").strip()
        if not username:
            return render_template("login.html")

        cohort = (request.form.get("cohort") or session.get("cohort") or "others").strip()
        if cohort not in VALID_COHORTS:
            cohort = "others"

        db = get_app_db()
        row = db.execute("SELECT * FROM annotators WHERE name = ?", (username,)).fetchone()
        if row is None:
            cur = db.execute("INSERT INTO annotators (name, cohort) VALUES (?, ?)", (username, cohort))
            db.commit()
            row = db.execute("SELECT * FROM annotators WHERE id = ?", (cur.lastrowid,)).fetchone()
        elif not row["cohort"] or row["cohort"] != cohort:
            # Update cohort if user re-registers via a different link
            db.execute("UPDATE annotators SET cohort = ? WHERE id = ?", (cohort, row["id"]))
            db.commit()

        session["annotator_id"] = row["id"]
        session["annotator_name"] = row["name"]
        session["cohort"] = cohort

        # Auto-assign comparison groups
        _auto_assign(db, row["id"])

        return redirect(url_for("demographics"))

    @app.route("/demographics", methods=["GET", "POST"])
    @login_required
    def demographics():
        db = get_app_db()

        if request.method == "POST":
            gender = (request.form.get("gender") or "").strip()
            age = (request.form.get("age") or "").strip()
            major = (request.form.get("major") or "").strip()
            education = (request.form.get("education") or "").strip()
            if major == "other":
                major = (request.form.get("major_other") or "").strip() or "other"
            if not gender or not age or not major or not education:
                return redirect(url_for("demographics"))
            db.execute(
                "UPDATE annotators SET gender = ?, age = ?, major = ?, education = ? WHERE id = ?",
                (gender, age, major, education, session["annotator_id"]),
            )
            db.commit()
            return redirect(url_for("demo"))

        # GET — redirect to demo if already filled
        ann_row = db.execute(
            "SELECT gender, age, major, education FROM annotators WHERE id = ?",
            (session["annotator_id"],),
        ).fetchone()
        if ann_row["gender"] and ann_row["age"] and ann_row["major"] and ann_row["education"]:
            return redirect(url_for("demo"))
        return render_template("demographics.html")

    def _get_user_cohort(db, annotator_id) -> str:
        row = db.execute("SELECT cohort FROM annotators WHERE id = ?", (annotator_id,)).fetchone()
        return (row["cohort"] if row and row["cohort"] else "others")

    def _auto_assign(db, annotator_id):
        """Assign a comparison batch with cohort-specific config."""
        cohort = _get_user_cohort(db, annotator_id)
        return assign_comparison_batch(
            db,
            annotator_id,
            n_annotators=N_ANNOTATORS_PER_VIDEO,
            batch_size=get_batch_size(cohort),
            ttl_hours=ASSIGNMENT_TTL_HOURS,
        )

    @app.route("/tasks")
    @login_required
    def task_list():
        return redirect(url_for("demo"))

    def _next_group_url(db, annotator_id, current_group_id=None):
        """Return URL for the next unfinished group, skipping current_group_id."""
        pending_sql = pending_assignment_sql("a")
        row = db.execute(
            "SELECT a.group_id FROM assignments a "
            "WHERE a.annotator_id = ? AND a.group_id IS NOT NULL "
            "AND a.group_id != ? "
            f"AND {pending_sql} "
            "ORDER BY a.group_id LIMIT 1",
            (annotator_id, current_group_id or ""),
        ).fetchone()
        if row:
            return f"/rate_group/{row['group_id']}"
        return None

    @app.route("/tasks/start", methods=["POST"])
    @login_required
    def start_next():
        """Redirect to the first unfinished comparison group."""
        db = get_app_db()
        annotator_id = session["annotator_id"]
        nxt = _next_group_url(db, annotator_id)
        if nxt:
            return redirect(nxt)
        _auto_assign(db, annotator_id)
        nxt = _next_group_url(db, annotator_id)
        if nxt:
            return redirect(nxt)
        return redirect(url_for("task_list"))

    @app.route("/tasks/more", methods=["POST"])
    @login_required
    def request_more():
        db = get_app_db()
        _auto_assign(db, session["annotator_id"])
        return redirect(url_for("task_list"))

    # ------------------------------------------------------------------
    # Comparison group routes
    # ------------------------------------------------------------------

    def _load_group_assignments(db, group_id, annotator_id):
        """Load and verify all assignments in a comparison group."""
        rows = db.execute(
            "SELECT a.*, v.filename, v.dataset, v.prompt, v.physical_laws, v.metadata_version AS v_meta_ver "
            "FROM assignments a JOIN videos v ON a.video_id = v.id "
            "WHERE a.group_id = ? AND a.annotator_id = ? "
            "ORDER BY a.id",
            (group_id, annotator_id),
        ).fetchall()
        if not rows:
            abort(404)
        return rows

    def _shuffle_for_group(items, group_id):
        """Deterministic shuffle based on group_id (same order on page refresh)."""
        seed = int(hashlib.md5(group_id.encode()).hexdigest(), 16) % (2**32)
        rng = random.Random(seed)
        result = list(items)
        rng.shuffle(result)
        return result

    @app.route("/rate_group/<group_id>")
    @login_required
    def rate_group_page(group_id):
        # Persist return_url in session so it survives across group pages
        if request.args.get("return_url"):
            session["return_url"] = request.args["return_url"]
        db = get_app_db()
        annotator_id = session["annotator_id"]
        assignments = _load_group_assignments(db, group_id, annotator_id)

        # Shuffle order (blind evaluation — same model not always in same position)
        assignments = _shuffle_for_group(assignments, group_id)

        # Get physical dims from first assignment (same prompt = same laws)
        physical_dims = _get_physical_dims(assignments[0])

        # Load existing scores for all assignments in one query
        aid_list = [a["id"] for a in assignments]
        placeholders = ",".join("?" * len(aid_list))
        existing_rows = db.execute(
            f"SELECT assignment_id, scores_json, note, na_laws FROM annotations WHERE assignment_id IN ({placeholders})",
            aid_list,
        ).fetchall()
        existing_scores_map = {}
        for r in existing_rows:
            entry = json.loads(r["scores_json"])
            entry["note"] = r["note"] or ""
            entry["na_laws"] = json.loads(r["na_laws"]) if r["na_laws"] else []
            existing_scores_map[r["assignment_id"]] = entry

        # Assign blind labels: Model A, Model B, ...
        labels = [chr(65 + i) for i in range(len(assignments))]  # A, B, C, D, ...
|
| 650 |
+
|
| 651 |
+
prompt = assignments[0]["prompt"]
|
| 652 |
+
|
| 653 |
+
# Build status map for per-video state display
|
| 654 |
+
status_map = {a["id"]: a["status"] for a in assignments}
|
| 655 |
+
|
| 656 |
+
# User progress: completed groups / quota
|
| 657 |
+
user_completed_groups = db.execute(
|
| 658 |
+
"SELECT COUNT(DISTINCT a.group_id) AS c FROM assignments a "
|
| 659 |
+
"WHERE a.annotator_id = ? AND a.group_id IS NOT NULL "
|
| 660 |
+
"AND NOT EXISTS (SELECT 1 FROM assignments a2 WHERE a2.group_id = a.group_id "
|
| 661 |
+
"AND a2.annotator_id = a.annotator_id AND a2.status NOT IN (?, ?, ?))",
|
| 662 |
+
(annotator_id, STATUS_COMPLETED, STATUS_PARTIAL, STATUS_SKIPPED),
|
| 663 |
+
).fetchone()["c"]
|
| 664 |
+
|
| 665 |
+
return render_template(
|
| 666 |
+
"rate_compare.html",
|
| 667 |
+
group_id=group_id,
|
| 668 |
+
prompt=prompt,
|
| 669 |
+
physical_dims=physical_dims,
|
| 670 |
+
video_entries=list(zip(labels, assignments)),
|
| 671 |
+
existing_scores_map=existing_scores_map,
|
| 672 |
+
status_map=status_map,
|
| 673 |
+
user_completed_groups=user_completed_groups,
|
| 674 |
+
user_quota=get_batch_size(_get_user_cohort(db, annotator_id)),
|
| 675 |
+
annotator_name=session.get("annotator_name"),
|
| 676 |
+
)
|
| 677 |
+
|
| 678 |
+
@app.route("/rate_group/<group_id>", methods=["POST"])
|
| 679 |
+
@login_required
|
| 680 |
+
def rate_group_submit(group_id):
|
| 681 |
+
db = get_app_db()
|
| 682 |
+
annotator_id = session["annotator_id"]
|
| 683 |
+
assignments = _load_group_assignments(db, group_id, annotator_id)
|
| 684 |
+
|
| 685 |
+
physical_dims = _get_physical_dims(assignments[0])
|
| 686 |
+
|
| 687 |
+
general_keys = {key for key, _, _, _, _ in GENERAL_DIMS}
|
| 688 |
+
physical_keys = {key for key, _ in physical_dims}
|
| 689 |
+
|
| 690 |
+
completed_aids = []
|
| 691 |
+
partial_aids = []
|
| 692 |
+
for a in assignments:
|
| 693 |
+
aid = a["id"]
|
| 694 |
+
general_scores, physical_scores, na_laws = _extract_scores(f"v{aid}_", physical_dims)
|
| 695 |
+
if not general_scores:
|
| 696 |
+
continue
|
| 697 |
+
note = request.form.get(f"v{aid}_note", "").strip()
|
| 698 |
+
play_count = int(request.form.get(f"v{aid}_play_count", 0) or 0)
|
| 699 |
+
stay_seconds = float(request.form.get(f"v{aid}_stay_seconds", 0) or 0)
|
| 700 |
+
_save_annotation(db, aid, general_scores, physical_scores, physical_dims, a["v_meta_ver"], note=note, play_count=play_count, stay_seconds=stay_seconds, na_laws=na_laws)
|
| 701 |
+
covered_physical = set(physical_scores.keys()) | set(na_laws)
|
| 702 |
+
if general_scores.keys() >= general_keys and covered_physical >= physical_keys:
|
| 703 |
+
completed_aids.append(aid)
|
| 704 |
+
else:
|
| 705 |
+
partial_aids.append(aid)
|
| 706 |
+
|
| 707 |
+
db.commit()
|
| 708 |
+
|
| 709 |
+
action = request.form.get("action", "save")
|
| 710 |
+
|
| 711 |
+
if action == "save":
|
| 712 |
+
return jsonify({"ok": True})
|
| 713 |
+
|
| 714 |
+
# "Next" — mark assignments by completeness
|
| 715 |
+
all_aids = [a["id"] for a in assignments]
|
| 716 |
+
saved_aids = completed_aids + partial_aids
|
| 717 |
+
if completed_aids:
|
| 718 |
+
db.execute(
|
| 719 |
+
"UPDATE assignments SET status = ?, completed_at = CURRENT_TIMESTAMP "
|
| 720 |
+
"WHERE id IN ({}) AND status NOT IN (?, ?)".format(
|
| 721 |
+
",".join("?" * len(completed_aids))),
|
| 722 |
+
[STATUS_COMPLETED, *completed_aids, STATUS_COMPLETED, STATUS_SKIPPED],
|
| 723 |
+
)
|
| 724 |
+
if partial_aids:
|
| 725 |
+
db.execute(
|
| 726 |
+
"UPDATE assignments SET status = ?, completed_at = CURRENT_TIMESTAMP "
|
| 727 |
+
"WHERE id IN ({}) AND status NOT IN (?, ?, ?)".format(
|
| 728 |
+
",".join("?" * len(partial_aids))),
|
| 729 |
+
[STATUS_PARTIAL, *partial_aids, STATUS_COMPLETED, STATUS_PARTIAL, STATUS_SKIPPED],
|
| 730 |
+
)
|
| 731 |
+
# Mark remaining (unannotated) assignments as skipped so progress advances
|
| 732 |
+
unsaved_aids = [aid for aid in all_aids if aid not in saved_aids]
|
| 733 |
+
if unsaved_aids:
|
| 734 |
+
db.execute(
|
| 735 |
+
"UPDATE assignments SET status = ?, completed_at = CURRENT_TIMESTAMP "
|
| 736 |
+
"WHERE id IN ({}) AND status NOT IN (?, ?, ?)".format(
|
| 737 |
+
",".join("?" * len(unsaved_aids))),
|
| 738 |
+
[STATUS_SKIPPED, *unsaved_aids, STATUS_COMPLETED, STATUS_PARTIAL, STATUS_SKIPPED],
|
| 739 |
+
)
|
| 740 |
+
db.commit()
|
| 741 |
+
return_url = session.pop("return_url", "")
|
| 742 |
+
nxt = _next_group_url(db, annotator_id, current_group_id=group_id)
|
| 743 |
+
if nxt:
|
| 744 |
+
# Keep return_url in session for subsequent groups (re-set it)
|
| 745 |
+
if return_url:
|
| 746 |
+
session["return_url"] = return_url
|
| 747 |
+
return jsonify({"ok": True, "redirect": nxt})
|
| 748 |
+
return jsonify({"ok": True, "redirect": return_url or url_for("task_list")})
|
| 749 |
+
|
| 750 |
+
@app.route("/rate_group/<group_id>/skip_video/<int:assignment_id>", methods=["POST"])
|
| 751 |
+
@login_required
|
| 752 |
+
def skip_video(group_id, assignment_id):
|
| 753 |
+
db = get_app_db()
|
| 754 |
+
assignment = db.execute(
|
| 755 |
+
"SELECT * FROM assignments WHERE id = ? AND group_id = ?",
|
| 756 |
+
(assignment_id, group_id),
|
| 757 |
+
).fetchone()
|
| 758 |
+
verify_ownership(assignment)
|
| 759 |
+
|
| 760 |
+
reason = request.form.get("skip_reason", "")
|
| 761 |
+
db.execute(
|
| 762 |
+
"UPDATE assignments SET status = ?, skip_reason = ? WHERE id = ?",
|
| 763 |
+
(STATUS_SKIPPED, reason, assignment_id),
|
| 764 |
+
)
|
| 765 |
+
db.commit()
|
| 766 |
+
|
| 767 |
+
# Auto-advance if group is fully done
|
| 768 |
+
pending = db.execute(
|
| 769 |
+
"SELECT COUNT(*) AS c FROM assignments "
|
| 770 |
+
"WHERE group_id = ? AND annotator_id = ? AND status NOT IN (?, ?, ?)",
|
| 771 |
+
(group_id, session["annotator_id"], STATUS_COMPLETED, STATUS_PARTIAL, STATUS_SKIPPED),
|
| 772 |
+
).fetchone()["c"]
|
| 773 |
+
if pending == 0:
|
| 774 |
+
nxt = _next_group_url(db, session["annotator_id"])
|
| 775 |
+
if nxt:
|
| 776 |
+
return redirect(nxt)
|
| 777 |
+
return redirect(url_for("task_list"))
|
| 778 |
+
|
| 779 |
+
return redirect(url_for("rate_group_page", group_id=group_id))
|
| 780 |
+
|
| 781 |
+
@app.route("/video/<dataset>/<filename>")
|
| 782 |
+
def serve_video(dataset, filename):
|
| 783 |
+
video_dir = Path(resolved_video_dir) / dataset
|
| 784 |
+
return send_from_directory(str(video_dir), filename)
|
| 785 |
+
|
| 786 |
+
@app.route("/api/stats")
|
| 787 |
+
@login_required
|
| 788 |
+
def stats():
|
| 789 |
+
db = get_app_db()
|
| 790 |
+
|
| 791 |
+
total_videos = db.execute("SELECT COUNT(*) AS c FROM videos").fetchone()["c"]
|
| 792 |
+
|
| 793 |
+
target_n = N_ANNOTATORS_PER_VIDEO
|
| 794 |
+
|
| 795 |
+
# Coverage: aggregate in SQL
|
| 796 |
+
coverage = db.execute(
|
| 797 |
+
"SELECT "
|
| 798 |
+
"SUM(CASE WHEN cnt >= ? THEN 1 ELSE 0 END) AS fully, "
|
| 799 |
+
"SUM(CASE WHEN cnt > 0 AND cnt < ? THEN 1 ELSE 0 END) AS partially, "
|
| 800 |
+
"SUM(CASE WHEN cnt = 0 THEN 1 ELSE 0 END) AS uncovered "
|
| 801 |
+
"FROM ("
|
| 802 |
+
" SELECT COUNT(a.id) AS cnt "
|
| 803 |
+
" FROM videos v "
|
| 804 |
+
" LEFT JOIN assignments a ON v.id = a.video_id AND a.status IN (?, ?) "
|
| 805 |
+
" GROUP BY v.id"
|
| 806 |
+
")",
|
| 807 |
+
(target_n, target_n, STATUS_COMPLETED, STATUS_PARTIAL),
|
| 808 |
+
).fetchone()
|
| 809 |
+
fully = coverage["fully"] or 0
|
| 810 |
+
partially = coverage["partially"] or 0
|
| 811 |
+
uncovered = coverage["uncovered"] or 0
|
| 812 |
+
|
| 813 |
+
# Per annotator
|
| 814 |
+
per_annotator = db.execute(
|
| 815 |
+
"SELECT an.name, "
|
| 816 |
+
"SUM(CASE WHEN a.status = ? THEN 1 ELSE 0 END) AS completed, "
|
| 817 |
+
"SUM(CASE WHEN a.status = ? THEN 1 ELSE 0 END) AS partial, "
|
| 818 |
+
"SUM(CASE WHEN a.status = ? THEN 1 ELSE 0 END) AS assigned "
|
| 819 |
+
"FROM annotators an "
|
| 820 |
+
"LEFT JOIN assignments a ON an.id = a.annotator_id "
|
| 821 |
+
"GROUP BY an.id",
|
| 822 |
+
(STATUS_COMPLETED, STATUS_PARTIAL, STATUS_ASSIGNED),
|
| 823 |
+
).fetchall()
|
| 824 |
+
|
| 825 |
+
per_annotator_list = [
|
| 826 |
+
{"name": r["name"], "completed": r["completed"] or 0, "partial": r["partial"] or 0, "assigned": r["assigned"] or 0}
|
| 827 |
+
for r in per_annotator
|
| 828 |
+
]
|
| 829 |
+
|
| 830 |
+
# Disagreement top-k
|
| 831 |
+
disagreement_rows = db.execute(
|
| 832 |
+
"SELECT ai.dimension, ai.law, "
|
| 833 |
+
"GROUP_CONCAT(ai.score) AS scores, "
|
| 834 |
+
"MAX(ai.score) - MIN(ai.score) AS max_min_diff, "
|
| 835 |
+
"a2.video_id "
|
| 836 |
+
"FROM annotation_items ai "
|
| 837 |
+
"JOIN annotations ann ON ai.annotation_id = ann.id "
|
| 838 |
+
"JOIN assignments a2 ON ann.assignment_id = a2.id "
|
| 839 |
+
"GROUP BY a2.video_id, ai.dimension, ai.law "
|
| 840 |
+
"HAVING COUNT(ai.score) > 1 "
|
| 841 |
+
"ORDER BY max_min_diff DESC "
|
| 842 |
+
"LIMIT ?",
|
| 843 |
+
(DISAGREEMENT_TOP_K,),
|
| 844 |
+
).fetchall()
|
| 845 |
+
|
| 846 |
+
disagreement_list = [
|
| 847 |
+
{
|
| 848 |
+
"video_id": r["video_id"],
|
| 849 |
+
"dimension": r["dimension"],
|
| 850 |
+
"law": r["law"],
|
| 851 |
+
"scores": [int(s) for s in r["scores"].split(",")],
|
| 852 |
+
"max_min_diff": r["max_min_diff"],
|
| 853 |
+
}
|
| 854 |
+
for r in disagreement_rows
|
| 855 |
+
]
|
| 856 |
+
|
| 857 |
+
return jsonify({
|
| 858 |
+
"total_videos": total_videos,
|
| 859 |
+
"coverage": {
|
| 860 |
+
"fully_covered": fully,
|
| 861 |
+
"partially_covered": partially,
|
| 862 |
+
"uncovered": uncovered,
|
| 863 |
+
"target_n": target_n,
|
| 864 |
+
},
|
| 865 |
+
"per_annotator": per_annotator_list,
|
| 866 |
+
"disagreement_top_k": disagreement_list,
|
| 867 |
+
})
|
| 868 |
+
|
| 869 |
+
return app
|
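The `_shuffle_for_group` helper above keys the RNG off the group id, so a page refresh shows the videos in the same blinded order while different groups get different orders. A minimal standalone sketch of the same idea (the `stable_shuffle` name and sample items are illustrative, not from the app):

```python
import hashlib
import random


def stable_shuffle(items, key: str):
    """Shuffle `items` into an order that is fixed for a given `key`."""
    # Derive a 32-bit seed deterministically from the key.
    seed = int(hashlib.md5(key.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    result = list(items)
    rng.shuffle(result)
    return result


# The same key always yields the same order across calls and processes.
a = stable_shuffle(["wan", "ltx", "cosmos"], "group-123")
b = stable_shuffle(["wan", "ltx", "cosmos"], "group-123")
print(a == b)  # True
```

Using `random.Random(seed)` instead of the module-level RNG keeps the shuffle isolated from any other randomness in the process.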
evals/human_eval/assign.py
ADDED
@@ -0,0 +1,252 @@
```python
import random
import sqlite3
import uuid

from human_eval.config import (
    STATUS_ASSIGNED, STATUS_COMPLETED, STATUS_PARTIAL, STATUS_SKIPPED,
    COMPARISON_MODELS, MODELS_PER_GROUP, extract_model,
)
from human_eval.db import pending_assignment_sql


def _delete_assignments_with_annotations(
    conn: sqlite3.Connection,
    where_sql: str,
    params: tuple,
) -> list[int]:
    """Delete assignments plus any saved draft rows that reference them.

    Also cleans up comparison_groups left with no remaining assignments.
    """
    rows = conn.execute(
        f"SELECT id, group_id FROM assignments WHERE {where_sql}",
        params,
    ).fetchall()
    assignment_ids = [row["id"] for row in rows]
    if not assignment_ids:
        return []

    affected_group_ids = list({
        row["group_id"] for row in rows if row["group_id"] is not None
    })

    placeholders = ",".join("?" * len(assignment_ids))
    conn.execute(
        f"""
        DELETE FROM annotation_items
        WHERE annotation_id IN (
            SELECT id FROM annotations WHERE assignment_id IN ({placeholders})
        )
        """,
        assignment_ids,
    )
    conn.execute(
        f"DELETE FROM annotations WHERE assignment_id IN ({placeholders})",
        assignment_ids,
    )
    conn.execute(
        f"DELETE FROM assignments WHERE id IN ({placeholders})",
        assignment_ids,
    )

    # Clean up comparison_groups that lost all their assignments
    if affected_group_ids:
        gph = ",".join("?" * len(affected_group_ids))
        conn.execute(
            f"""
            DELETE FROM comparison_groups
            WHERE id IN ({gph})
            AND NOT EXISTS (
                SELECT 1 FROM assignments WHERE group_id = comparison_groups.id
            )
            """,
            affected_group_ids,
        )

    return assignment_ids


def _select_group_videos(
    model_videos: dict[str, dict],
    models: list[str],
    video_coverage: dict[int, int],
    k: int,
    n_annotators: int,
) -> tuple[list[dict], tuple[int, ...]] | None:
    """Pick the least-covered videos for a prompt, randomizing only on ties."""
    ranked = []
    for model in models:
        row = model_videos.get(model)
        if row is None:
            continue
        ranked.append((video_coverage.get(row["id"], 0), row))

    if len(ranked) < 2:
        return None
    if all(coverage >= n_annotators for coverage, _ in ranked):
        return None

    random.shuffle(ranked)
    ranked.sort(key=lambda item: item[0])
    picked = ranked[: min(k, len(ranked))]
    return [row for _, row in picked], tuple(coverage for coverage, _ in picked)


def _build_prompt_video_map(conn, models: list[str]) -> dict:
    """Build {prompt_text: {model: video_row}} mapping for comparison models.

    Only includes prompts that have at least 2 models available.
    """
    rows = conn.execute(
        "SELECT id, filename, dataset, prompt, physical_laws FROM videos"
    ).fetchall()

    model_set = set(models)
    # prompt → {model → video_row}
    prompt_map: dict[str, dict[str, dict]] = {}
    for r in rows:
        model = extract_model(r["dataset"])
        if model not in model_set:
            continue
        prompt = r["prompt"]
        if prompt not in prompt_map:
            prompt_map[prompt] = {}
        existing = prompt_map[prompt].get(model)
        if existing is None:
            prompt_map[prompt][model] = dict(r)
        elif "perspective-center" in r["filename"]:
            prompt_map[prompt][model] = dict(r)

    # Filter: keep only prompts with >= 2 models
    return {p: m for p, m in prompt_map.items() if len(m) >= 2}


def assign_comparison_batch(
    conn: sqlite3.Connection,
    annotator_id: int,
    n_annotators: int,
    batch_size: int,
    ttl_hours: int,
    models: list[str] | None = None,
    models_per_group: int | None = None,
) -> list[str]:
    """Assign comparison groups to an annotator.

    Each group = 1 prompt × K randomly-sampled models.
    Returns list of new group_ids.
    """
    models = models or COMPARISON_MODELS
    k = models_per_group or MODELS_PER_GROUP

    prompt_map = _build_prompt_video_map(conn, models)
    if not prompt_map:
        return []

    # Find prompts with active (completed/skipped or pending) groups for this user
    pending_sql = pending_assignment_sql("a")
    active_groups = conn.execute(
        f"""
        SELECT DISTINCT cg.prompt
        FROM comparison_groups cg
        JOIN assignments a ON a.group_id = cg.id
        WHERE a.annotator_id = ?
        AND (a.status IN ('{STATUS_COMPLETED}', '{STATUS_PARTIAL}', '{STATUS_SKIPPED}')
             OR {pending_sql})
        """,
        (annotator_id,),
    ).fetchall()
    active_prompts = {r["prompt"] for r in active_groups}

    # Count how many non-excluded annotators have completed each video.
    # excluded_annotators is populated by filter_db.py; if the table is
    # empty or missing, all annotators count (original behavior).
    has_exclusion_table = conn.execute(
        "SELECT 1 FROM sqlite_master WHERE type='table' AND name='excluded_annotators'"
    ).fetchone() is not None

    if has_exclusion_table:
        coverage = conn.execute(
            f"""
            SELECT a.video_id, COUNT(DISTINCT a.annotator_id) AS cnt
            FROM assignments a
            WHERE a.status IN ('{STATUS_COMPLETED}', '{STATUS_PARTIAL}')
            AND a.annotator_id NOT IN (SELECT annotator_id FROM excluded_annotators)
            GROUP BY a.video_id
            """,
        ).fetchall()
    else:
        coverage = conn.execute(
            f"""
            SELECT a.video_id, COUNT(DISTINCT a.annotator_id) AS cnt
            FROM assignments a
            WHERE a.status IN ('{STATUS_COMPLETED}', '{STATUS_PARTIAL}')
            GROUP BY a.video_id
            """,
        ).fetchall()
    video_coverage = {r["video_id"]: r["cnt"] for r in coverage}

    # Filter to assignable prompts: not already active for this user and still
    # containing at least one under-covered video.
    candidates = []
    for prompt, model_videos in prompt_map.items():
        if prompt in active_prompts:
            continue

        selected = _select_group_videos(
            model_videos,
            models,
            video_coverage,
            k,
            n_annotators,
        )
        if selected is None:
            continue
        video_rows, coverage_signature = selected
        candidates.append((coverage_signature, random.random(), prompt, video_rows))

    # Sort: prompts with the lowest-covered selected videos first.
    candidates.sort(key=lambda x: (x[0], x[1]))

    conn.execute("BEGIN IMMEDIATE")
    try:
        # Clean up non-completed, non-skipped group assignments (expired/abandoned)
        # and their orphaned comparison_groups
        deleted_ids = _delete_assignments_with_annotations(
            conn,
            (
                "annotator_id = ? AND group_id IS NOT NULL "
                "AND status NOT IN (?, ?, ?) "
                "AND expires_at <= datetime('now')"
            ),
            (annotator_id, STATUS_COMPLETED, STATUS_PARTIAL, STATUS_SKIPPED),
        )
        new_group_ids = []
        for _, _, prompt, video_rows in candidates[:batch_size]:

            # Get physical_laws from the first video (same prompt = same laws)
            physical_laws = video_rows[0]["physical_laws"]

            # Create comparison group
            group_id = str(uuid.uuid4())
            conn.execute(
                "INSERT INTO comparison_groups (id, prompt, physical_laws) VALUES (?, ?, ?)",
                (group_id, prompt, physical_laws),
            )

            # Create assignments for each video in the group
            for vr in video_rows:
                conn.execute(
                    "INSERT OR IGNORE INTO assignments (video_id, annotator_id, status, expires_at, group_id) "
                    f"VALUES (?, ?, '{STATUS_ASSIGNED}', datetime('now', '+' || ? || ' hours'), ?)",
                    (vr["id"], annotator_id, ttl_hours, group_id),
                )

            new_group_ids.append(group_id)

        conn.commit()
        return new_group_ids

    except Exception:
        conn.rollback()
        raise
```
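`_select_group_videos` above implements "least-covered first, random among ties" by shuffling before a stable sort: `list.sort` preserves the shuffled order of equal-coverage entries. A self-contained sketch of that ordering trick (the `rank_least_covered` name and sample ids are illustrative, not the real schema):

```python
import random


def rank_least_covered(rows, coverage):
    """Order rows by coverage count; ties end up in random order because
    the sort is stable and we shuffle first."""
    ranked = [(coverage.get(r, 0), r) for r in rows]
    random.shuffle(ranked)            # randomize tie order
    ranked.sort(key=lambda t: t[0])   # stable sort keeps shuffled tie order
    return [r for _, r in ranked]


rows = ["v1", "v2", "v3"]
order = rank_least_covered(rows, {"v1": 2, "v2": 0, "v3": 1})
print(order)  # ['v2', 'v3', 'v1'] — unambiguous since all counts differ
```

When counts are equal the relative order varies run to run, which is exactly the tie-breaking behavior the assignment code relies on.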
evals/human_eval/check_db_json_alignment.py
ADDED
@@ -0,0 +1,95 @@
```python
"""Check internal alignment for a human-eval DB.

The original source-JSON comparison depends on prompt-selection artifacts that
are omitted from this release. This release version only checks consistency
between DB videos and comparison groups.

Usage:
    python -m human_eval.check_db_json_alignment
"""

import json
import sqlite3
import sys
from collections import defaultdict
from pathlib import Path


ROOT = Path(__file__).resolve().parent.parent.parent
DB_PATH = ROOT / "eval" / "human_eval" / "human_eval_filtered.db"


def _norm_laws(laws_str: str) -> list[str]:
    try:
        return sorted(json.loads(laws_str))
    except (json.JSONDecodeError, TypeError):
        return []


def main():
    print("=" * 60)
    print("DB Internal Alignment Check")
    print("=" * 60)

    conn = sqlite3.connect(str(DB_PATH))
    conn.row_factory = sqlite3.Row
    groups = [dict(r) for r in conn.execute(
        "SELECT id, prompt, physical_laws FROM comparison_groups"
    ).fetchall()]

    issues = []

    print(f"\n[1] comparison_groups ({len(groups)}) internal consistency with videos")
    rows = conn.execute(
        "SELECT DISTINCT cg.id as gid, cg.prompt as cg_prompt, cg.physical_laws as cg_laws, "
        "v.id as vid, v.prompt as v_prompt, v.physical_laws as v_laws "
        "FROM comparison_groups cg "
        "JOIN assignments a ON a.group_id = cg.id "
        "JOIN videos v ON a.video_id = v.id"
    ).fetchall()
    conn.close()

    cg_mismatches = 0
    for r in rows:
        if r["cg_prompt"] != r["v_prompt"]:
            cg_mismatches += 1
            if cg_mismatches <= 5:
                issues.append({"type": "prompt_mismatch", "layer": "group_vs_video",
                               "group_id": r["gid"][:12], "video_id": r["vid"],
                               "cg_prompt": r["cg_prompt"][:80], "v_prompt": r["v_prompt"][:80]})
        if _norm_laws(r["cg_laws"]) != _norm_laws(r["v_laws"]):
            cg_mismatches += 1
            if cg_mismatches <= 5:
                issues.append({"type": "laws_mismatch", "layer": "group_vs_video",
                               "group_id": r["gid"][:12], "video_id": r["vid"],
                               "cg_laws": _norm_laws(r["cg_laws"]), "v_laws": _norm_laws(r["v_laws"])})
    print(f"  checked {len(rows)} (group, video) pairs, {cg_mismatches} mismatches")

    print("\n" + "=" * 60)
    if not issues:
        print("ALL ALIGNED")
        return 0

    grouped = defaultdict(list)
    for iss in issues:
        key = (iss["layer"], iss["type"], iss.get("group_id", "?"))
        grouped[key].append(iss)

    print(f"FOUND {len(grouped)} unique issue(s) ({len(issues)} total across models):\n")
    display_keys = {
        "prompt_mismatch": ("cg_prompt", "v_prompt"),
        "laws_mismatch": ("cg_laws", "v_laws"),
    }
    for i, ((layer, typ, stem), group) in enumerate(sorted(grouped.items()), 1):
        iss = group[0]
        print(f"--- #{i} [{typ}] {layer} | stem={stem} | x{len(group)} models ---")
        for key in display_keys.get(typ, ()):
            if key in iss:
                print(f"  {key}: {iss[key]}")
        print()

    return len(grouped)


if __name__ == "__main__":
    sys.exit(0 if main() == 0 else 1)
```
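The `_norm_laws` helper above makes the law-list comparison order-insensitive and robust to malformed values: it sorts the parsed JSON list and maps anything unparseable to an empty list. The same pattern in isolation (standalone copy for illustration):

```python
import json


def norm_laws(laws_str):
    """Parse a JSON list and sort it; malformed or missing input becomes []."""
    try:
        return sorted(json.loads(laws_str))
    except (json.JSONDecodeError, TypeError):
        return []


# Differently ordered lists normalize to the same value, so they compare equal.
print(norm_laws('["gravity", "buoyancy"]') == norm_laws('["buoyancy", "gravity"]'))  # True
print(norm_laws(None))  # []
```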
evals/human_eval/config.py
ADDED
@@ -0,0 +1,84 @@
```python
import os
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent

N_ANNOTATORS_PER_VIDEO = 3
BATCH_SIZE_PER_USER = 4  # default fallback

# Per-cohort batch size (number of comparison pages per user)
COHORT_BATCH_SIZE: dict[str, int] = {
    "cohort_a": 4,
    "cohort_b": 11,
    "cohort_c": 4,
    "cohort_d": 4,
    "test": 2,      # Test cohort — excluded from dashboard/stats
    "others": 100,  # Default for unspecified cohort
}

# Expected headcount per cohort (for dashboard display)
COHORT_EXPECTED: dict[str, int] = {
    "cohort_a": 285,
    "cohort_b": 35,
    "cohort_d": 80,
}

# Fixed completion codes per cohort (not randomly generated)
COHORT_COMPLETION_CODE: dict[str, str] = {
    "cohort_c": "COMPLETION",
    "cohort_d": "COMPLETION",
}

# Cohorts to exclude from dashboard statistics and exports
TEST_COHORTS = {"test"}

VALID_COHORTS = set(COHORT_BATCH_SIZE.keys())


def get_batch_size(cohort: str | None) -> int:
    """Return batch size for a given cohort, falling back to default."""
    return COHORT_BATCH_SIZE.get(cohort or "others", BATCH_SIZE_PER_USER)


ASSIGNMENT_TTL_HOURS = 24
VIDEO_DATA_DIR = BASE_DIR / "../../data/videos"
DB_PATH = BASE_DIR / "human_eval.db"
TEST_DB_PATH = BASE_DIR / "human_eval_test.db"
DISAGREEMENT_TOP_K = 20
SECRET_KEY = os.environ.get("HUMAN_EVAL_SECRET_KEY", "human-eval-local-dev-key")

# Assignment status constants
STATUS_ASSIGNED = "assigned"
STATUS_COMPLETED = "completed"
STATUS_PARTIAL = "partial"
STATUS_SKIPPED = "skipped"

# ---------------------------------------------------------------------------
# Comparison mode: multi-model side-by-side evaluation
# ---------------------------------------------------------------------------
# Model prefixes — must match the prefix part of dataset directory names
# e.g. "ltx-2-19b-distilled-fp8" matches "ltx-2-19b-distilled-fp8-openvid", etc.
COMPARISON_MODELS = [
    "wan2.2-ti2v-5b",
    "ltx-2-19b-dev",
    "cosmos-predict2.5-2b",
    "cosmos-predict2.5-14b",
    "veo-3.1",
    "wan2.2-i2v-a14b",
    "omniweaving",
    "ltx-2.3-22b-dev",
]

# How many models to show per comparison group (randomly sampled from COMPARISON_MODELS)
MODELS_PER_GROUP = 3

SOURCE_DATASETS = {"wmb", "openvid", "physics_iq", "video_phy_2"}

# Redirect URL shown after user completes all assigned groups
COMPLETION_SURVEY_URL = "https://example.com/survey"


def extract_model(dataset_name: str) -> str:
    """Extract model prefix from a dataset directory name like
    'ltx-2-19b-distilled-fp8-openvid' → 'ltx-2-19b-distilled-fp8'."""
    for src in sorted(SOURCE_DATASETS, key=len, reverse=True):
        suffix = f"-{src}"
        if dataset_name.endswith(suffix):
            return dataset_name[: -len(suffix)]
    return dataset_name
```
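`extract_model` above strips a known source-dataset suffix, trying longer suffixes first so a source name that happens to contain another's cannot mis-match. A standalone copy with example inputs built from the prefixes and sources listed in the config:

```python
SOURCE_DATASETS = {"wmb", "openvid", "physics_iq", "video_phy_2"}


def extract_model(dataset_name: str) -> str:
    # Longest suffix first, so overlapping source names cannot shadow each other.
    for src in sorted(SOURCE_DATASETS, key=len, reverse=True):
        suffix = f"-{src}"
        if dataset_name.endswith(suffix):
            return dataset_name[: -len(suffix)]
    return dataset_name


print(extract_model("wan2.2-ti2v-5b-openvid"))  # wan2.2-ti2v-5b
print(extract_model("veo-3.1-physics_iq"))      # veo-3.1
print(extract_model("no-known-suffix"))         # no-known-suffix (unchanged)
```

Returning the name unchanged when no suffix matches keeps the function total, which is what lets `_build_prompt_video_map` filter against `COMPARISON_MODELS` without special-casing.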
evals/human_eval/coverage_report.py
ADDED
@@ -0,0 +1,88 @@
"""Report video coverage and how many people are needed to reach 3x.

Usage:
    python -m evals.human_eval.coverage_report
    python -m evals.human_eval.coverage_report --batch-size 11
"""

from __future__ import annotations

import argparse
import sqlite3
import sys
from collections import Counter

from evals.human_eval.config import (
    COMPARISON_MODELS,
    DB_PATH,
    N_ANNOTATORS_PER_VIDEO,
)


def _active_video_sql() -> str:
    likes = " OR ".join(f"v.dataset LIKE '{m}%'" for m in COMPARISON_MODELS)
    return f"({likes})"


def _query_coverage(conn: sqlite3.Connection, exclude_bad: bool) -> list[tuple[int, int]]:
    vf = _active_video_sql()
    ea = ""
    if exclude_bad:
        has_table = conn.execute(
            "SELECT 1 FROM sqlite_master WHERE type='table' AND name='excluded_annotators'"
        ).fetchone()
        if has_table:
            ea = "AND a.annotator_id NOT IN (SELECT annotator_id FROM excluded_annotators)"

    return conn.execute(f"""
        SELECT v.id, COUNT(a.id) AS cnt
        FROM videos v
        LEFT JOIN assignments a ON v.id = a.video_id AND a.status IN ('completed', 'partial')
            AND a.annotator_id NOT IN (
                SELECT id FROM annotators WHERE COALESCE(cohort, 'others') IN ('test')
            )
            {ea}
        WHERE {vf}
        GROUP BY v.id
    """).fetchall()


def _print_report(label: str, rows: list[tuple[int, int]], target: int, batch_size: int) -> None:
    total = len(rows)
    dist: Counter[int] = Counter()
    deficit = 0
    for _, cnt in rows:
        bucket = min(cnt, target)
        dist[bucket] += 1
        if cnt < target:
            deficit += target - cnt

    print(f"=== {label} ({total} active videos) ===")
    for k in sorted(dist.keys()):
        print(f"  {k}x: {dist[k]} videos ({dist[k] / total * 100:.1f}%)")
    print(f"  Missing annotations: {deficit}")
    print(f"  Additional annotators needed (batch={batch_size}): {-(-deficit // batch_size)}")
    print()


def main(argv: list[str] | None = None) -> int:
    parser = argparse.ArgumentParser(description="Video coverage report")
    parser.add_argument("--db", default=str(DB_PATH), help="Path to human_eval.db")
    parser.add_argument("--batch-size", type=int, default=12, help="Videos per person (default: 4 groups × 3 models)")
    parser.add_argument("--target", type=int, default=N_ANNOTATORS_PER_VIDEO, help="Target annotations per video")
    args = parser.parse_args(argv)

    conn = sqlite3.connect(args.db, timeout=30)

    rows_all = _query_coverage(conn, exclude_bad=False)
    rows_filt = _query_coverage(conn, exclude_bad=True)
    conn.close()

    _print_report("Unfiltered", rows_all, args.target, args.batch_size)
    _print_report("Filtered", rows_filt, args.target, args.batch_size)

    return 0


if __name__ == "__main__":
    sys.exit(main())
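The headcount line in `_print_report` uses the negate/floor-divide/negate idiom, which is integer ceiling division (how many whole batches are needed to cover the deficit). A minimal standalone check of that arithmetic:

```python
def annotators_needed(deficit: int, batch_size: int) -> int:
    # -(-a // b) equals ceil(a / b) for positive b, using only floor division:
    # -13 // 12 floors toward negative infinity to -2, and negating gives 2.
    return -(-deficit // batch_size)


print(annotators_needed(0, 12))   # 0
print(annotators_needed(1, 12))   # 1  (any nonzero deficit needs at least one person)
print(annotators_needed(12, 12))  # 1
print(annotators_needed(13, 12))  # 2
```

This avoids `math.ceil(deficit / batch_size)`, which would round a float and can misbehave for very large integers.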
evals/human_eval/db.py
ADDED
@@ -0,0 +1,124 @@
import sqlite3
from pathlib import Path

from human_eval.config import STATUS_ASSIGNED, STATUS_COMPLETED, STATUS_PARTIAL, STATUS_SKIPPED

SCHEMA_SQL = """
CREATE TABLE IF NOT EXISTS annotators (
    id INTEGER PRIMARY KEY,
    name TEXT UNIQUE NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE IF NOT EXISTS videos (
    id INTEGER PRIMARY KEY,
    filename TEXT NOT NULL,
    dataset TEXT NOT NULL,
    prompt TEXT NOT NULL,
    physical_laws TEXT NOT NULL,
    difficulty_score REAL,
    metadata_version INTEGER NOT NULL DEFAULT 1,
    imported_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    import_hash TEXT NOT NULL
);

CREATE TABLE IF NOT EXISTS assignments (
    id INTEGER PRIMARY KEY,
    video_id INTEGER NOT NULL REFERENCES videos(id),
    annotator_id INTEGER NOT NULL REFERENCES annotators(id),
    status TEXT NOT NULL DEFAULT 'assigned',
    assigned_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    completed_at TIMESTAMP,
    expires_at TIMESTAMP NOT NULL,
    skip_reason TEXT,
    group_id TEXT REFERENCES comparison_groups(id),
    UNIQUE(video_id, annotator_id)
);

CREATE TABLE IF NOT EXISTS annotations (
    id INTEGER PRIMARY KEY,
    assignment_id INTEGER NOT NULL UNIQUE REFERENCES assignments(id),
    scores_json TEXT NOT NULL,
    metadata_version INTEGER,
    note TEXT,
    play_count INTEGER DEFAULT 0,
    stay_seconds REAL DEFAULT 0,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE IF NOT EXISTS annotation_items (
    id INTEGER PRIMARY KEY,
    annotation_id INTEGER NOT NULL REFERENCES annotations(id),
    dimension TEXT NOT NULL,
    law TEXT,
    score INTEGER NOT NULL
);

CREATE TABLE IF NOT EXISTS comparison_groups (
    id TEXT PRIMARY KEY,
    prompt TEXT NOT NULL,
    physical_laws TEXT NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE IF NOT EXISTS excluded_annotators (
    annotator_id INTEGER PRIMARY KEY REFERENCES annotators(id),
    reason TEXT NOT NULL,
    generated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
"""


_MIGRATIONS = [
    "ALTER TABLE videos ADD COLUMN metadata_version INTEGER NOT NULL DEFAULT 1",
    "ALTER TABLE annotations ADD COLUMN metadata_version INTEGER",
    "ALTER TABLE assignments ADD COLUMN group_id TEXT",
    # comparison_groups table is created in SCHEMA_SQL above; this migration
    # is a no-op for new DBs but needed for existing ones.
    ("CREATE TABLE IF NOT EXISTS comparison_groups ("
     "id TEXT PRIMARY KEY, prompt TEXT NOT NULL, physical_laws TEXT NOT NULL, "
     "created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP)"),
    "ALTER TABLE annotations ADD COLUMN play_count INTEGER DEFAULT 0",
    "ALTER TABLE annotations ADD COLUMN stay_seconds REAL DEFAULT 0",
    "ALTER TABLE annotations ADD COLUMN na_laws TEXT",
    "ALTER TABLE annotators ADD COLUMN gender TEXT",
    "ALTER TABLE annotators ADD COLUMN age TEXT",
    "ALTER TABLE annotators ADD COLUMN major TEXT",
    "ALTER TABLE annotators ADD COLUMN education TEXT",
    "ALTER TABLE annotators ADD COLUMN cohort TEXT DEFAULT 'others'",
    ("CREATE TABLE IF NOT EXISTS excluded_annotators ("
     "annotator_id INTEGER PRIMARY KEY REFERENCES annotators(id), "
     "reason TEXT NOT NULL, "
     "generated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP)"),
]


def init_db(conn: sqlite3.Connection):
    conn.executescript(SCHEMA_SQL)
    conn.execute("PRAGMA foreign_keys = ON")
    # Apply migrations for existing DBs (silently skip if column already exists)
    for sql in _MIGRATIONS:
        try:
            conn.execute(sql)
            conn.commit()
        except sqlite3.OperationalError:
            pass


def get_db(db_path: Path) -> sqlite3.Connection:
    conn = sqlite3.connect(str(db_path), timeout=30)
    conn.row_factory = sqlite3.Row
    conn.execute("PRAGMA foreign_keys = ON")
    conn.execute("PRAGMA journal_mode = WAL")
    conn.execute("PRAGMA busy_timeout = 30000")
    return conn


def pending_assignment_sql(alias: str = "a") -> str:
    """Assignment is still actionable: not finished AND not expired."""
    return (
        f"({alias}.status NOT IN ('{STATUS_COMPLETED}', '{STATUS_PARTIAL}', '{STATUS_SKIPPED}') "
        f"AND {alias}.expires_at > datetime('now'))"
    )
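The migration loop in `init_db` relies on SQLite raising `OperationalError` for a duplicate `ALTER TABLE ... ADD COLUMN`, which makes re-running the same migration list a no-op. A minimal in-memory demonstration of that idempotency (table and column names here are illustrative, not from the schema above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")

migrations = ["ALTER TABLE t ADD COLUMN extra TEXT"]

# Run the list twice: the second ALTER fails with "duplicate column name"
# and is swallowed, mirroring the try/except in init_db.
for _ in range(2):
    for sql in migrations:
        try:
            conn.execute(sql)
        except sqlite3.OperationalError:
            pass

# PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk).
cols = [r[1] for r in conn.execute("PRAGMA table_info(t)")]
print(cols)  # ['id', 'extra']
```

The trade-off is that any genuinely failing migration (typo, bad type) is also silenced, so new migrations should be tested against a fresh DB before shipping.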
evals/human_eval/filter_db.py
ADDED
@@ -0,0 +1,366 @@
"""Filter human_eval.db → human_eval_filtered.db by removing low-quality annotators.

Implements the cleaning rules documented in:
    docs/exp-results/eval/humaneval/cleandata.md

Rules (require multiple signals to converge):
    1. Constant scores (std=0): all dimensions get the same score regardless of video
    2. Near-constant (std<0.3): almost all scores identical
    3. 100% copy-paste: every video has all dimensions scored identically
    4. High copy-paste (≥75%) + behavioral anomaly (median stay <30s or never plays)
    5. Tiny volume: <5 annotation items or median stay <10s
    6. Extreme Peer MAE (>1.8) + behavioral anomaly (median stay <30s or never plays
       or copy-paste ≥50%)
    7. Never plays any video (100% zero-play) + median stay <30s

Usage:
    python -m evals.human_eval.filter_db
    python -m evals.human_eval.filter_db --dry-run
    python -m evals.human_eval.filter_db --input evals/human_eval/human_eval.db \
        --output evals/human_eval/human_eval_filtered.db
"""

from __future__ import annotations

import argparse
import logging
import math
import shutil
import sqlite3
import statistics
import sys
from collections import defaultdict
from pathlib import Path

logger = logging.getLogger(__name__)

DEFAULT_INPUT = "evals/human_eval/human_eval.db"
DEFAULT_OUTPUT = "evals/human_eval/human_eval_filtered.db"


def _compute_annotator_metrics(conn: sqlite3.Connection) -> dict[int, dict]:
    """Compute per-annotator quality metrics."""
    rows = conn.execute("""
        SELECT
            a.annotator_id,
            ann.id AS annotation_id,
            ann.play_count,
            ann.stay_seconds,
            ai.dimension,
            ai.law,
            ai.score,
            a.video_id
        FROM assignments a
        JOIN annotations ann ON ann.assignment_id = a.id
        JOIN annotation_items ai ON ai.annotation_id = ann.id
        WHERE a.status = 'completed'
        ORDER BY a.annotator_id, a.video_id
    """).fetchall()

    # Group by annotator
    annotators: dict[int, dict] = {}
    for row in rows:
        aid = row[0]
        if aid not in annotators:
            annotators[aid] = {
                "all_scores": [],
                "videos": defaultdict(list),  # video_id -> [scores]
                "stay_seconds": [],
                "play_counts": [],
                "annotation_ids": set(),
                "item_count": 0,
            }
        m = annotators[aid]
        m["all_scores"].append(row[6])
        m["videos"][row[7]].append(row[6])
        m["item_count"] += 1
        ann_id = row[1]
        if ann_id not in m["annotation_ids"]:
            m["annotation_ids"].add(ann_id)
            if row[2] is not None:
                m["play_counts"].append(row[2])
            if row[3] is not None:
                m["stay_seconds"].append(row[3])

    # Compute metrics
    for aid, m in annotators.items():
        scores = m["all_scores"]
        m["overall_std"] = statistics.pstdev(scores) if len(scores) > 1 else 0.0

        # Copy-paste rate: fraction of videos where all dims got the same score
        cp_count = 0
        total_videos = 0
        for vid, vscores in m["videos"].items():
            if len(vscores) > 1:
                total_videos += 1
                if len(set(vscores)) == 1:
                    cp_count += 1
        m["copy_paste_rate"] = cp_count / total_videos if total_videos > 0 else 0.0
        m["video_count"] = total_videos

        m["median_stay"] = statistics.median(m["stay_seconds"]) if m["stay_seconds"] else 0.0
        m["median_play"] = statistics.median(m["play_counts"]) if m["play_counts"] else 0.0
        m["max_play"] = max(m["play_counts"]) if m["play_counts"] else 0
        n_annotations = len(m["annotation_ids"])
        zero_play = sum(1 for p in m["play_counts"] if p == 0)
        m["zero_play_rate"] = zero_play / n_annotations if n_annotations > 0 else 0.0

    return annotators


def _compute_peer_mae(conn: sqlite3.Connection, annotator_ids: set[int]) -> dict[int, float]:
    """Compute per-annotator Peer MAE: mean |score - peer_score| on shared video-dimensions."""
    rows = conn.execute("""
        SELECT
            a.annotator_id,
            a.video_id,
            ai.dimension,
            ai.law,
            ai.score
        FROM assignments a
        JOIN annotations ann ON ann.assignment_id = a.id
        JOIN annotation_items ai ON ai.annotation_id = ann.id
        WHERE a.status = 'completed'
        ORDER BY a.video_id, ai.dimension, ai.law
    """).fetchall()

    # Group scores by (video_id, dimension, law)
    dim_groups: dict[tuple, list[tuple[int, int]]] = defaultdict(list)
    for row in rows:
        aid, vid, dim, law, score = row
        if aid not in annotator_ids:
            continue
        key = (vid, dim, law)
        dim_groups[key].append((aid, score))

    # Compute pairwise MAE per annotator
    annotator_diffs: dict[int, list[int]] = defaultdict(list)
    for key, entries in dim_groups.items():
        if len(entries) < 2:
            continue
        for i, (aid_i, score_i) in enumerate(entries):
            for j, (aid_j, score_j) in enumerate(entries):
                if i < j:
                    diff = abs(score_i - score_j)
                    annotator_diffs[aid_i].append(diff)
                    annotator_diffs[aid_j].append(diff)

    return {
        aid: statistics.mean(diffs) if diffs else 0.0
        for aid, diffs in annotator_diffs.items()
    }


def identify_removals(
    metrics: dict[int, dict],
    peer_mae: dict[int, float],
) -> dict[int, str]:
    """Return {annotator_id: reason} for annotators to remove."""
    removals: dict[int, str] = {}

    for aid, m in metrics.items():
        # Rule 1: constant scores (std=0)
        if m["item_count"] >= 5 and m["overall_std"] == 0.0:
            removals[aid] = "constant_scores (std=0)"
            continue

        # Rule 2: near-constant (std<0.3)
        if m["item_count"] >= 5 and m["overall_std"] < 0.3:
            removals[aid] = f"near_constant (std={m['overall_std']:.2f})"
            continue

        # Rule 5: tiny volume (<5 items or median stay <10s)
        if m["item_count"] < 5:
            removals[aid] = f"tiny_volume ({m['item_count']} items)"
            continue
        if m["median_stay"] < 10.0 and m["item_count"] < 10:
            removals[aid] = f"tiny_volume (stay={m['median_stay']:.0f}s, {m['item_count']} items)"
            continue

        # Rule 3: 100% copy-paste (with enough videos)
        if m["video_count"] >= 3 and m["copy_paste_rate"] >= 1.0:
            removals[aid] = "100%_copy_paste"
            continue

        # Rule 4: high copy-paste + behavioral anomaly
        behavioral_anomaly = m["median_stay"] < 30.0 or m["max_play"] == 0
        if m["copy_paste_rate"] >= 0.75 and behavioral_anomaly:
            removals[aid] = (
                f"high_copy_paste ({m['copy_paste_rate']:.0%}) "
                f"+ behavioral (stay={m['median_stay']:.0f}s, max_play={m['max_play']})"
            )
            continue

        # Rule 6: extreme Peer MAE + behavioral anomaly
        pmae = peer_mae.get(aid, 0.0)
        if pmae > 1.8 and (behavioral_anomaly or m["copy_paste_rate"] >= 0.50):
            removals[aid] = (
                f"high_peer_mae ({pmae:.2f}) "
                f"+ behavioral (stay={m['median_stay']:.0f}s, cp={m['copy_paste_rate']:.0%})"
            )
            continue

        # Rule 7: never played video + short stay
        if m["zero_play_rate"] >= 1.0 and m["median_stay"] < 30.0:
            removals[aid] = (
                f"never_played (zero_play=100%) "
                f"+ short_stay (stay={m['median_stay']:.0f}s)"
            )
            continue

    return removals


def apply_filter(input_db: str, output_db: str, removals: dict[int, str]) -> dict:
    """Copy input_db to output_db and delete data for removed annotators."""
    shutil.copy2(input_db, output_db)
    conn = sqlite3.connect(output_db)
    conn.execute("PRAGMA foreign_keys = OFF")

    removed_ids = list(removals.keys())
    if not removed_ids:
        conn.close()
        return {"removed": 0}

    placeholders = ",".join("?" * len(removed_ids))

    # Get assignment IDs to remove
    assignment_ids = [
        r[0] for r in conn.execute(
            f"SELECT id FROM assignments WHERE annotator_id IN ({placeholders})",
            removed_ids,
        ).fetchall()
    ]

    if assignment_ids:
        a_ph = ",".join("?" * len(assignment_ids))

        # Get annotation IDs
        annotation_ids = [
            r[0] for r in conn.execute(
                f"SELECT id FROM annotations WHERE assignment_id IN ({a_ph})",
                assignment_ids,
            ).fetchall()
        ]

        if annotation_ids:
            ann_ph = ",".join("?" * len(annotation_ids))
            conn.execute(
                f"DELETE FROM annotation_items WHERE annotation_id IN ({ann_ph})",
                annotation_ids,
            )
            conn.execute(
                f"DELETE FROM annotations WHERE id IN ({ann_ph})",
                annotation_ids,
            )

        conn.execute(
            f"DELETE FROM assignments WHERE id IN ({a_ph})",
            assignment_ids,
        )

    conn.execute(
        f"DELETE FROM annotators WHERE id IN ({placeholders})",
        removed_ids,
    )

    # Clean up orphaned comparison_groups
    conn.execute("""
        DELETE FROM comparison_groups
        WHERE id NOT IN (SELECT DISTINCT group_id FROM assignments WHERE group_id IS NOT NULL)
    """)

    conn.commit()

    # Collect stats
    stats = {}
    for table in ["annotators", "videos", "assignments", "annotations", "annotation_items", "comparison_groups"]:
        stats[table] = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    conn.close()

    return stats


def _print_removal_summary(removals: dict[int, str], conn: sqlite3.Connection) -> None:
    """Print summary of annotators to be removed."""
    if not removals:
        print("No annotators to remove.")
        return

    # Get names
    ids = list(removals.keys())
    ph = ",".join("?" * len(ids))
    name_map = {
        r[0]: r[1]
        for r in conn.execute(
            f"SELECT id, name FROM annotators WHERE id IN ({ph})", ids
        ).fetchall()
    }

    # Group by reason category
    by_category: dict[str, list[str]] = defaultdict(list)
    for aid, reason in removals.items():
        cat = reason.split("(")[0].strip().split("+")[0].strip()
        by_category[cat].append(f"  {name_map.get(aid, '?')} (id={aid}): {reason}")

    print(f"\n=== Annotators to remove: {len(removals)} ===")
    for cat, entries in sorted(by_category.items()):
        print(f"\n{cat} ({len(entries)}):")
        for e in sorted(entries):
            print(e)


def main(argv: list[str] | None = None) -> int:
    logging.basicConfig(level=logging.INFO, format="%(levelname)s: %(message)s")
    parser = argparse.ArgumentParser(description="Filter human_eval.db")
    parser.add_argument("--input", default=DEFAULT_INPUT, help="Input DB path")
    parser.add_argument("--output", default=DEFAULT_OUTPUT, help="Output DB path")
    parser.add_argument("--dry-run", action="store_true", help="Show removals without writing")
    args = parser.parse_args(argv)

    if not Path(args.input).exists():
        logger.error("Input DB not found: %s", args.input)
        return 1

    conn = sqlite3.connect(args.input, timeout=30)

    print("Computing annotator metrics...")
    metrics = _compute_annotator_metrics(conn)
    print(f"  {len(metrics)} annotators with completed annotations")

    print("Computing Peer MAE...")
    peer_mae = _compute_peer_mae(conn, set(metrics.keys()))

    removals = identify_removals(metrics, peer_mae)
    _print_removal_summary(removals, conn)

    if args.dry_run:
        conn.close()
        print("\n[DRY RUN] No output written.")
        return 0

    # Sync excluded_annotators table in the main DB so assignment can
    # skip bad annotators without needing the filtered DB.
    conn.execute("CREATE TABLE IF NOT EXISTS excluded_annotators ("
                 "annotator_id INTEGER PRIMARY KEY, reason TEXT NOT NULL, "
                 "generated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP)")
    conn.execute("DELETE FROM excluded_annotators")
    for aid, reason in removals.items():
        conn.execute(
            "INSERT INTO excluded_annotators (annotator_id, reason) VALUES (?, ?)",
            (aid, reason),
        )
    conn.commit()
    print(f"\nUpdated excluded_annotators in main DB: {len(removals)} entries")
    conn.close()

    print(f"Writing filtered DB to {args.output}...")
    stats = apply_filter(args.input, args.output, removals)
    print("Filtered DB stats:")
    for table, count in stats.items():
        print(f"  {table}: {count}")

    return 0


if __name__ == "__main__":
    sys.exit(main())
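A self-contained sketch of the copy-paste signal behind Rules 3 and 4: an annotator's rate is the fraction of videos with more than one scored dimension where every dimension received the identical score (the per-video score lists below are synthetic, not real data):

```python
def copy_paste_rate(video_scores: dict[int, list[int]]) -> float:
    # Only videos with more than one scored dimension are informative;
    # a single-dimension video is trivially "all identical".
    cp = total = 0
    for scores in video_scores.values():
        if len(scores) > 1:
            total += 1
            if len(set(scores)) == 1:
                cp += 1
    return cp / total if total else 0.0


# Annotator who pasted the same score on 2 of 3 multi-dimension videos:
scores = {1: [5, 5, 5], 2: [5, 5, 5], 3: [5, 4, 3]}
print(round(copy_paste_rate(scores), 2))  # 0.67
```

On its own a high rate can be legitimate (some videos really are uniformly good or bad), which is why Rule 4 only fires when it converges with a behavioral anomaly such as a short median stay.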
evals/human_eval/import_videos.py
ADDED
@@ -0,0 +1,96 @@
import hashlib
import json
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).resolve().parent.parent.parent))
from human_eval.config import COMPARISON_MODELS, STATUS_SKIPPED

GENERAL_KEYS = ["SA", "PTV", "persistence"]


def compute_import_hash(video_data_dir: Path) -> str:
    """Global hash across all datasets (used by initial import guard)."""
    paths = sorted(str(p.relative_to(video_data_dir)) for p in video_data_dir.rglob("*.mp4"))
    return hashlib.md5("\n".join(paths).encode()).hexdigest()


def compute_dataset_hash(dataset_dir: Path) -> str:
    """Per-dataset hash — only includes mp4 filenames within one dataset."""
    paths = sorted(p.name for p in dataset_dir.glob("*.mp4"))
    return hashlib.md5("\n".join(paths).encode()).hexdigest()


def compute_difficulty_score(gemini_scores: dict, qwen_scores: dict) -> float | None:
    diffs = []
    for key in GENERAL_KEYS:
        g, q = gemini_scores.get(key), qwen_scores.get(key)
        if g is None or q is None:
            return None
        diffs.append(abs(g - q))
    return sum(diffs) / len(diffs) if diffs else None


def _load_latest_eval(dataset_dir: Path, prefix: str) -> dict | None:
    files = sorted(dataset_dir.glob(f"eval_{prefix}_*.json"))
    if not files:
        return None
    with open(files[-1]) as f:
        return json.load(f)


def _build_scores_lookup(eval_data: dict | None) -> dict:
    if not eval_data or "results" not in eval_data:
        return {}
    lookup = {}
    for r in eval_data["results"]:
        scores = {k: r[k] for k in GENERAL_KEYS if k in r}
        lookup[r["video"]] = scores
    return lookup


_DATASET_SUFFIXES = ("openvid", "video_phy_2", "physics_iq", "wmb")


def _ds_suffix(db_dataset: str) -> str:
    """Extract source dataset suffix from DB dataset name, e.g. 'veo-3.1-video_phy_2' -> 'video_phy_2'."""
    for suffix in _DATASET_SUFFIXES:
        if db_dataset.endswith(suffix):
            return suffix
    return db_dataset


class _EvalLookupCache:
    """Caches per-dataset eval score lookups and dataset hashes."""

    def __init__(self):
        self._eval: dict[str, tuple[dict, dict]] = {}
        self._hash: dict[str, str] = {}

    def get_scores(self, ds_name: str, ds_dir: Path) -> tuple[dict, dict]:
        if ds_name not in self._eval:
            self._eval[ds_name] = (
                _build_scores_lookup(_load_latest_eval(ds_dir, "gemini")),
                _build_scores_lookup(_load_latest_eval(ds_dir, "qwen")),
            )
        return self._eval[ds_name]

    def get_hash(self, ds_name: str, ds_dir: Path) -> str:
        if ds_name not in self._hash:
            self._hash[ds_name] = compute_dataset_hash(ds_dir) if ds_dir.exists() else ""
        return self._hash[ds_name]


def import_videos(conn, video_data_dir: Path):
    """Import videos into the human-eval DB.

    This release omits the prompt-selection JSON consumed by the original
    importer, so the importer entry point is intentionally disabled.
    """
    raise RuntimeError(
        "import_videos is not included in this release because the prompt-selection "
        "JSON is omitted. Use the companion dataset metadata to build a DB import."
    )
evals/human_eval/init_test_db.py
ADDED
@@ -0,0 +1,66 @@
"""Initialize test DB by copying videos & comparison_groups from production DB.

Usage:
    python -m human_eval.init_test_db            # default paths from config
    python -m human_eval.init_test_db --reset    # drop & recreate test DB
"""
import argparse
import sqlite3
from pathlib import Path

from human_eval.config import DB_PATH, TEST_DB_PATH
from human_eval.db import init_db, get_db


# Tables whose rows are copied verbatim from prod → test
_COPY_TABLES = ["videos", "comparison_groups"]


def init_test_db(*, reset: bool = False):
    if reset and TEST_DB_PATH.exists():
        TEST_DB_PATH.unlink()
        print(f"removed old test DB: {TEST_DB_PATH}")

    if not DB_PATH.exists():
        raise FileNotFoundError(f"production DB not found: {DB_PATH}")

    # Open production (read-only) and test DBs
    prod = sqlite3.connect(str(DB_PATH))
    prod.row_factory = sqlite3.Row
    prod.execute("PRAGMA query_only = ON")

    test = get_db(TEST_DB_PATH)
    init_db(test)

    for table in _COPY_TABLES:
        existing = test.execute(f"SELECT COUNT(*) AS c FROM {table}").fetchone()["c"]
        if existing > 0 and not reset:
            print(f"  {table}: already has {existing} rows, skipping (use --reset to overwrite)")
            continue

        if reset:
            test.execute(f"DELETE FROM {table}")

        cols_info = prod.execute(f"PRAGMA table_info({table})").fetchall()
        col_names = [c["name"] for c in cols_info]
        cols_csv = ", ".join(col_names)
        placeholders = ", ".join("?" for _ in col_names)

        rows = prod.execute(f"SELECT {cols_csv} FROM {table}").fetchall()
        test.executemany(
            f"INSERT OR IGNORE INTO {table} ({cols_csv}) VALUES ({placeholders})",
            [tuple(r) for r in rows],
        )
        test.commit()
        print(f"  {table}: copied {len(rows)} rows")

    prod.close()
    test.close()
    print(f"test DB ready: {TEST_DB_PATH}")


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--reset", action="store_true", help="drop & recreate test DB")
    args = parser.parse_args()
    init_test_db(reset=args.reset)
evals/human_eval/static/rate.js
ADDED
@@ -0,0 +1,211 @@
function confirmSkip(form) {
  var reason = form.querySelector('.skip-reason-input').value.trim();
  if (!reason) {
    alert('Please enter a skip reason.');
    return false;
  }
  return confirm('Are you sure you want to skip this video?\n\nReason: ' + reason);
}

var _scoreColors = {1: '#ef4444', 2: '#f97316', 3: '#eab308', 4: '#16a34a', 5: '#15803d'};

function _applyScoreColor(btn) {
  var v = parseInt(btn.dataset.value);
  btn.style.background = _scoreColors[v] || '#888';
  btn.style.borderColor = 'transparent';
  btn.style.color = '#fff';
}

function _clearScoreColor(btn) {
  btn.style.background = '';
  btn.style.borderColor = '';
  btn.style.color = '';
}

function selectScore(btn) {
  const row = btn.closest('.dim-row');
  row.querySelectorAll('.score-btn').forEach(b => { b.classList.remove('selected'); _clearScoreColor(b); });
  btn.classList.add('selected');
  _applyScoreColor(btn);
  row.querySelector('input[type="hidden"]').value = btn.dataset.value;
  checkFormComplete();
}

// Color pre-selected buttons on page load
document.addEventListener('DOMContentLoaded', function() {
  document.querySelectorAll('.score-btn.selected').forEach(_applyScoreColor);
});

function _updateSubmitBtn(form, selector) {
  var allFilled = Array.from(form.querySelectorAll('.score-input')).every(function(i) {
    if (i.closest('.physical-dim-row')) return true; // physical dims are optional
    return i.value;
  });
  var btn = form.querySelector(selector);
  if (btn) btn.disabled = !allFilled;
}

function checkFormComplete() {
  // Save button: always enabled (allow partial saves midway)
  // Next button: always enabled (users can skip ahead)
}

// --- Play count & stay time tracking ---

var _playCountMap = {}; // vid -> count
var _stayStartMap = {}; // vid -> timestamp (ms)

function _incPlayCount(vid, card) {
  if (!vid || !card) return;
  _playCountMap[vid] = (_playCountMap[vid] || 0) + 1;
  var inp = card.querySelector('.play-count-input');
  if (inp) inp.value = _playCountMap[vid];
}

function _updateStaySeconds(vid, card) {
  if (!vid || !_stayStartMap[vid] || !card) return;
  var elapsed = (Date.now() - _stayStartMap[vid]) / 1000;
  var inp = card.querySelector('.stay-seconds-input');
  if (inp) inp.value = elapsed.toFixed(1);
}

// --- Custom video controls (comparison mode: hide duration) ---

var _progressRAF = null;
var _videoWraps = [];

function _startProgressLoop() {
  if (_progressRAF) return;
  function tick() {
    _videoWraps.forEach(function(wrap) {
      var video = wrap.querySelector('video');
      var bar = wrap.querySelector('.vc-progress');
      if (video && bar && video.duration) {
        bar.style.width = (video.currentTime / video.duration * 100) + '%';
      }
    });
    var anyPlaying = _videoWraps.some(function(wrap) {
      var v = wrap.querySelector('video');
      return v && !v.paused && !v.ended;
    });
    if (anyPlaying) {
      _progressRAF = requestAnimationFrame(tick);
    } else {
      _progressRAF = null;
    }
  }
  _progressRAF = requestAnimationFrame(tick);
}

function togglePlay(btn) {
  var video = btn.closest('.compare-video-wrap').querySelector('video');
  if (video.paused) {
    var card = btn.closest('.compare-card');
    _incPlayCount(card ? card.dataset.vid : null, card);
    video.play();
    btn.textContent = '\u23F8';
    _startProgressLoop();
  } else {
    video.pause();
    btn.textContent = '\u25B6';
  }
}

function toggleMute(btn) {
  var video = btn.closest('.compare-video-wrap').querySelector('video');
  video.muted = !video.muted;
  btn.textContent = video.muted ? '\uD83D\uDD07' : '\uD83D\uDD09';
}

function seekVideo(e, bar) {
  var video = bar.closest('.compare-video-wrap').querySelector('video');
  var rect = bar.getBoundingClientRect();
  var pct = (e.clientX - rect.left) / rect.width;
  video.currentTime = pct * video.duration;
  var progressBar = bar.querySelector('.vc-progress');
  if (progressBar) progressBar.style.width = (pct * 100) + '%';
}

document.addEventListener('DOMContentLoaded', function() {
  checkFormComplete();

  // Cache video wraps for rAF progress loop
  _videoWraps = Array.from(document.querySelectorAll('.compare-video-wrap'));

  // Reset play button when video ends & click-to-play on video itself
  _videoWraps.forEach(function(wrap) {
    var video = wrap.querySelector('video');
    if (video) {
      video.addEventListener('ended', function() {
        var btn = wrap.querySelector('.vc-play');
        if (btn) btn.textContent = '\u25B6';
      });
      video.style.cursor = 'pointer';
      video.addEventListener('click', function() {
        var btn = wrap.querySelector('.vc-play');
        if (btn) togglePlay(btn);
      });
    }
  });

  // Start stay timer for each card
  var now = Date.now();
  document.querySelectorAll('.compare-card').forEach(function(card) {
    var vid = card.dataset.vid;
    if (vid) _stayStartMap[vid] = now;
  });

});

function _showToast(msg) {
  var toast = document.getElementById('save-toast');
  if (!toast) {
    toast = document.createElement('div');
    toast.id = 'save-toast';
    toast.style.cssText = 'position:fixed;top:50%;left:50%;transform:translate(-50%,-50%);background:rgba(0,0,0,0.8);color:#fff;padding:16px 32px;border-radius:8px;font-size:1.2rem;z-index:9999;pointer-events:none;opacity:0;transition:opacity 0.3s;';
    document.body.appendChild(toast);
  }
  toast.textContent = msg;
  toast.style.opacity = '1';
  setTimeout(function() { toast.style.opacity = '0'; }, 1500);
}

function _collectFormData(action) {
  var form = document.getElementById('submit-all-form');
  if (!form) return null;
  // Flush stay seconds
  document.querySelectorAll('.compare-card').forEach(function(card) {
    var vid = card.dataset.vid;
    if (vid) _updateStaySeconds(vid, card);
  });
  var data = new FormData(form);
  data.set('action', action);
  return { url: form.action, data: data };
}

function doSave() {
  var fd = _collectFormData('save');
  if (!fd) return;
  fetch(fd.url, { method: 'POST', body: fd.data })
    .then(function(r) { if (r.ok) _showToast('Saved!'); else _showToast('Error'); })
    .catch(function() { _showToast('Error'); });
}

function doNext() {
  var fd = _collectFormData('next');
  if (!fd) {
    // Fallback: go to task list if form not found
    window.location.href = '/tasks';
    return;
  }
  fetch(fd.url, { method: 'POST', body: fd.data })
    .then(function(r) { return r.json(); })
    .then(function(data) {
      if (data.redirect) window.location.href = data.redirect;
      else window.location.href = '/tasks';
    })
    .catch(function() {
      // On any error, still navigate to next
      window.location.href = '/tasks';
    });
}
evals/human_eval/static/style.css
ADDED
@@ -0,0 +1,1277 @@
| 1 |
+
/* Light theme */
|
| 2 |
+
:root {
|
| 3 |
+
--bg: #ffffff;
|
| 4 |
+
--card: #f7f7f8;
|
| 5 |
+
--input: #f0f0f2;
|
| 6 |
+
--accent: #e94560;
|
| 7 |
+
--text: #1a1a1a;
|
| 8 |
+
--label: #2563eb;
|
| 9 |
+
--border: #d1d5db;
|
| 10 |
+
--green-bg: #ecfdf5;
|
| 11 |
+
--green-border: #a7f3d0;
|
| 12 |
+
--green-text: #16a34a;
|
| 13 |
+
--muted: #6b7280;
|
| 14 |
+
--muted-dim: #9ca3af;
|
| 15 |
+
--warning: #f59e0b;
|
| 16 |
+
}
|
| 17 |
+
|
| 18 |
+
* {
|
| 19 |
+
box-sizing: border-box;
|
| 20 |
+
margin: 0;
|
| 21 |
+
padding: 0;
|
| 22 |
+
}
|
| 23 |
+
|
| 24 |
+
body {
|
| 25 |
+
background: var(--bg);
|
| 26 |
+
color: var(--text);
|
| 27 |
+
font-family: 'Segoe UI', 'PingFang SC', 'Microsoft YaHei', sans-serif;
|
| 28 |
+
font-size: 17px;
|
| 29 |
+
line-height: 1.6;
|
| 30 |
+
min-height: 100vh;
|
| 31 |
+
}
|
| 32 |
+
|
| 33 |
+
a {
|
| 34 |
+
color: var(--label);
|
| 35 |
+
text-decoration: none;
|
| 36 |
+
}
|
| 37 |
+
|
| 38 |
+
a:hover {
|
| 39 |
+
color: var(--accent);
|
| 40 |
+
}
|
| 41 |
+
|
| 42 |
+
/* ---- Instruction Banner ---- */
|
| 43 |
+
.instruction-banner {
|
| 44 |
+
background: var(--input);
|
| 45 |
+
border: 1px solid var(--border);
|
| 46 |
+
border-radius: 6px;
|
| 47 |
+
padding: 14px 20px;
|
| 48 |
+
margin-bottom: 16px;
|
| 49 |
+
font-size: 16px;
|
| 50 |
+
line-height: 1.6;
|
| 51 |
+
color: var(--text);
|
| 52 |
+
}
|
| 53 |
+
.instruction-banner strong {
|
| 54 |
+
color: var(--text);
|
| 55 |
+
}
|
| 56 |
+
.scoring-guide-inline {
|
| 57 |
+
margin-top: 8px;
|
| 58 |
+
display: flex;
|
| 59 |
+
gap: 16px;
|
| 60 |
+
flex-wrap: wrap;
|
| 61 |
+
font-size: 15px;
|
| 62 |
+
color: #2563eb;
|
| 63 |
+
}
|
| 64 |
+
.scoring-guide-inline span:first-child {
|
| 65 |
+
font-weight: bold;
|
| 66 |
+
color: #2563eb;
|
| 67 |
+
}
|
| 68 |
+
|
| 69 |
+
/* ---- Buttons ---- */
|
| 70 |
+
.btn {
|
| 71 |
+
display: inline-block;
|
| 72 |
+
padding: 8px 16px;
|
| 73 |
+
border: 1px solid var(--border);
|
| 74 |
+
border-radius: 4px;
|
| 75 |
+
background: var(--input);
|
| 76 |
+
color: var(--text);
|
| 77 |
+
cursor: pointer;
|
| 78 |
+
font-size: 14px;
|
| 79 |
+
transition: background 0.2s, border-color 0.2s;
|
| 80 |
+
}
|
| 81 |
+
|
| 82 |
+
.btn:hover {
|
| 83 |
+
background: #e5e7eb;
|
| 84 |
+
border-color: var(--label);
|
| 85 |
+
}
|
| 86 |
+
|
| 87 |
+
.btn-primary {
|
| 88 |
+
background: var(--accent);
|
| 89 |
+
border-color: var(--accent);
|
| 90 |
+
color: #fff;
|
| 91 |
+
}
|
| 92 |
+
|
| 93 |
+
.btn-primary:hover {
|
| 94 |
+
background: #c73050;
|
| 95 |
+
border-color: #c73050;
|
| 96 |
+
}
|
| 97 |
+
|
| 98 |
+
.btn-danger {
|
| 99 |
+
background: #fef2f2;
|
| 100 |
+
border-color: #fca5a5;
|
| 101 |
+
color: #dc2626;
|
| 102 |
+
}
|
| 103 |
+
|
| 104 |
+
.btn-danger:hover {
|
| 105 |
+
background: var(--accent);
|
| 106 |
+
border-color: var(--accent);
|
| 107 |
+
}
|
| 108 |
+
|
| 109 |
+
.btn-sm {
|
| 110 |
+
padding: 5px 10px;
|
| 111 |
+
font-size: 12px;
|
| 112 |
+
}
|
| 113 |
+
|
| 114 |
+
.btn:disabled,
|
| 115 |
+
.btn[disabled] {
|
| 116 |
+
opacity: 0.4;
|
| 117 |
+
cursor: not-allowed;
|
| 118 |
+
}
|
| 119 |
+
|
| 120 |
+
/* ---- Container ---- */
|
| 121 |
+
.container {
|
| 122 |
+
max-width: 1200px;
|
| 123 |
+
margin: 0 auto;
|
| 124 |
+
padding: 20px;
|
| 125 |
+
}
|
| 126 |
+
|
| 127 |
+
/* ---- Login ---- */
|
| 128 |
+
.login-container {
|
| 129 |
+
display: flex;
|
| 130 |
+
flex-direction: column;
|
| 131 |
+
align-items: center;
|
| 132 |
+
justify-content: center;
|
| 133 |
+
min-height: 100vh;
|
| 134 |
+
gap: 20px;
|
| 135 |
+
}
|
| 136 |
+
|
| 137 |
+
.login-container h1 {
|
| 138 |
+
font-size: 2rem;
|
| 139 |
+
color: var(--label);
|
| 140 |
+
letter-spacing: 2px;
|
| 141 |
+
}
|
| 142 |
+
|
| 143 |
+
.welcome-msg {
|
| 144 |
+
color: var(--muted);
|
| 145 |
+
font-size: 15px;
|
| 146 |
+
text-align: center;
|
| 147 |
+
max-width: 420px;
|
| 148 |
+
line-height: 1.6;
|
| 149 |
+
}
|
| 150 |
+
|
| 151 |
+
.login-container form {
|
| 152 |
+
display: flex;
|
| 153 |
+
flex-direction: column;
|
| 154 |
+
gap: 12px;
|
| 155 |
+
width: 320px;
|
| 156 |
+
}
|
| 157 |
+
|
| 158 |
+
.login-container input[type="text"] {
|
| 159 |
+
padding: 10px 14px;
|
| 160 |
+
background: var(--input);
|
| 161 |
+
border: 1px solid var(--border);
|
| 162 |
+
border-radius: 4px;
|
| 163 |
+
color: var(--text);
|
| 164 |
+
font-size: 16px;
|
| 165 |
+
outline: none;
|
| 166 |
+
}
|
| 167 |
+
|
| 168 |
+
.login-container input[type="text"]:focus {
|
| 169 |
+
border-color: var(--label);
|
| 170 |
+
}
|
| 171 |
+
|
| 172 |
+
.login-hint {
|
| 173 |
+
color: var(--muted-dim);
|
| 174 |
+
font-size: 13px;
|
| 175 |
+
text-align: center;
|
| 176 |
+
}
|
| 177 |
+
|
| 178 |
+
.error-msg {
|
| 179 |
+
color: var(--accent);
|
| 180 |
+
background: #fef2f2;
|
| 181 |
+
border: 1px solid var(--accent);
|
| 182 |
+
border-radius: 4px;
|
| 183 |
+
padding: 8px 14px;
|
| 184 |
+
font-size: 14px;
|
| 185 |
+
}
|
| 186 |
+
|
| 187 |
+
/* ---- Topbar ---- */
|
| 188 |
+
.topbar {
|
| 189 |
+
background: var(--card);
|
| 190 |
+
border-bottom: 1px solid var(--border);
|
| 191 |
+
padding: 12px 24px;
|
| 192 |
+
display: flex;
|
| 193 |
+
align-items: center;
|
| 194 |
+
gap: 16px;
|
| 195 |
+
}
|
| 196 |
+
|
| 197 |
+
.annotator-name {
|
| 198 |
+
font-weight: bold;
|
| 199 |
+
color: var(--text);
|
| 200 |
+
font-size: 16px;
|
| 201 |
+
margin-right: auto;
|
| 202 |
+
}
|
| 203 |
+
|
| 204 |
+
.progress-label {
|
| 205 |
+
color: var(--text);
|
| 206 |
+
font-size: 16px;
|
| 207 |
+
}
|
| 208 |
+
|
| 209 |
+
/* ---- Progress bar ---- */
|
| 210 |
+
.progress-section {
|
| 211 |
+
background: var(--card);
|
| 212 |
+
border-bottom: 1px solid var(--border);
|
| 213 |
+
padding: 12px 24px;
|
| 214 |
+
display: flex;
|
| 215 |
+
align-items: center;
|
| 216 |
+
gap: 14px;
|
| 217 |
+
}
|
| 218 |
+
|
| 219 |
+
.progress-stats {
|
| 220 |
+
display: flex;
|
| 221 |
+
flex-direction: column;
|
| 222 |
+
gap: 2px;
|
| 223 |
+
min-width: 280px;
|
| 224 |
+
}
|
| 225 |
+
|
| 226 |
+
.progress-stat-main {
|
| 227 |
+
color: var(--text);
|
| 228 |
+
font-size: 15px;
|
| 229 |
+
font-weight: bold;
|
| 230 |
+
}
|
| 231 |
+
|
| 232 |
+
.progress-stat-user {
|
| 233 |
+
color: var(--text);
|
| 234 |
+
font-size: 14px;
|
| 235 |
+
}
|
| 236 |
+
|
| 237 |
+
.progress-bar-track {
|
| 238 |
+
flex: 1;
|
| 239 |
+
height: 14px;
|
| 240 |
+
background: var(--input);
|
| 241 |
+
border: 1px solid var(--border);
|
| 242 |
+
border-radius: 7px;
|
| 243 |
+
overflow: hidden;
|
| 244 |
+
}
|
| 245 |
+
|
| 246 |
+
.progress-bar-fill {
|
| 247 |
+
height: 100%;
|
| 248 |
+
background: linear-gradient(90deg, var(--green-border), var(--green-text));
|
| 249 |
+
border-radius: 7px;
|
| 250 |
+
transition: width 0.4s ease;
|
| 251 |
+
}
|
| 252 |
+
|
| 253 |
+
.progress-pct {
|
| 254 |
+
color: var(--text);
|
| 255 |
+
font-size: 15px;
|
| 256 |
+
font-weight: bold;
|
| 257 |
+
min-width: 48px;
|
| 258 |
+
text-align: right;
|
| 259 |
+
}
|
| 260 |
+
|
| 261 |
+
/* ---- Task list ---- */
|
| 262 |
+
.task-list {
|
| 263 |
+
display: flex;
|
| 264 |
+
flex-direction: column;
|
| 265 |
+
gap: 10px;
|
| 266 |
+
padding: 20px 0;
|
| 267 |
+
}
|
| 268 |
+
|
| 269 |
+
.task-card {
|
| 270 |
+
display: flex;
|
| 271 |
+
align-items: center;
|
| 272 |
+
justify-content: space-between;
|
| 273 |
+
background: var(--card);
|
| 274 |
+
border: 1px solid var(--border);
|
| 275 |
+
border-radius: 6px;
|
| 276 |
+
padding: 14px 18px;
|
| 277 |
+
color: var(--text);
|
| 278 |
+
transition: border-color 0.2s, background 0.2s;
|
| 279 |
+
}
|
| 280 |
+
|
| 281 |
+
.task-card:hover {
|
| 282 |
+
border-color: var(--label);
|
| 283 |
+
background: #eff6ff;
|
| 284 |
+
}
|
| 285 |
+
|
| 286 |
+
.task-card.completed {
|
| 287 |
+
opacity: 0.6;
|
| 288 |
+
}
|
| 289 |
+
|
| 290 |
+
.task-card-main {
|
| 291 |
+
display: flex;
|
| 292 |
+
align-items: center;
|
| 293 |
+
gap: 14px;
|
| 294 |
+
flex: 1;
|
| 295 |
+
}
|
| 296 |
+
|
| 297 |
+
.task-filename {
|
| 298 |
+
font-size: 16px;
|
| 299 |
+
color: var(--text);
|
| 300 |
+
}
|
| 301 |
+
|
| 302 |
+
.task-dataset {
|
| 303 |
+
font-size: 14px;
|
| 304 |
+
color: var(--text);
|
| 305 |
+
}
|
| 306 |
+
|
| 307 |
+
.domain-tag {
|
| 308 |
+
background: var(--input);
|
| 309 |
+
color: var(--text);
|
| 310 |
+
border-radius: 3px;
|
| 311 |
+
padding: 2px 8px;
|
| 312 |
+
font-size: 13px;
|
| 313 |
+
border: 1px solid var(--border);
|
| 314 |
+
}
|
| 315 |
+
|
| 316 |
+
.task-card-meta {
|
| 317 |
+
display: flex;
|
| 318 |
+
align-items: center;
|
| 319 |
+
gap: 8px;
|
| 320 |
+
}
|
| 321 |
+
|
| 322 |
+
/* ---- Status badges ---- */
|
| 323 |
+
.status-badge {
|
| 324 |
+
display: inline-block;
|
| 325 |
+
padding: 3px 10px;
|
| 326 |
+
border-radius: 12px;
|
| 327 |
+
font-size: 12px;
|
| 328 |
+
font-weight: bold;
|
| 329 |
+
}
|
| 330 |
+
|
| 331 |
+
.status-badge.completed {
|
| 332 |
+
background: var(--green-bg);
|
| 333 |
+
color: var(--green-text);
|
| 334 |
+
border: 1px solid var(--green-border);
|
| 335 |
+
}
|
| 336 |
+
|
| 337 |
+
.status-badge.assigned {
|
| 338 |
+
background: #eff6ff;
|
| 339 |
+
color: var(--label);
|
| 340 |
+
border: 1px solid #bfdbfe;
|
| 341 |
+
}
|
| 342 |
+
|
| 343 |
+
.status-badge.skipped {
|
| 344 |
+
background: #f3f4f6;
|
| 345 |
+
color: var(--muted-dim);
|
| 346 |
+
border: 1px solid #d1d5db;
|
| 347 |
+
}
|
| 348 |
+
|
| 349 |
+
.empty-msg {
|
| 350 |
+
color: var(--muted-dim);
|
| 351 |
+
text-align: center;
|
| 352 |
+
padding: 40px;
|
| 353 |
+
}
|
| 354 |
+
|
| 355 |
+
/* ---- Rate layout ---- */
|
| 356 |
+
.rate-layout {
|
| 357 |
+
display: grid;
|
| 358 |
+
grid-template-columns: 1fr 1fr;
|
| 359 |
+
gap: 20px;
|
| 360 |
+
padding: 20px 0;
|
| 361 |
+
}
|
| 362 |
+
|
| 363 |
+
.rate-left {
|
| 364 |
+
display: flex;
|
| 365 |
+
flex-direction: column;
|
| 366 |
+
gap: 14px;
|
| 367 |
+
}
|
| 368 |
+
|
| 369 |
+
.video-player {
|
| 370 |
+
width: 100%;
|
| 371 |
+
border-radius: 6px;
|
| 372 |
+
background: #000;
|
| 373 |
+
border: 1px solid var(--border);
|
| 374 |
+
}
|
| 375 |
+
|
| 376 |
+
.prompt-box {
|
| 377 |
+
background: var(--card);
|
| 378 |
+
border: 1px solid var(--border);
|
| 379 |
+
border-radius: 6px;
|
| 380 |
+
padding: 14px 16px;
|
| 381 |
+
display: flex;
|
| 382 |
+
flex-direction: column;
|
| 383 |
+
gap: 8px;
|
| 384 |
+
}
|
| 385 |
+
|
| 386 |
+
.prompt-label {
|
| 387 |
+
color: var(--text);
|
| 388 |
+
font-size: 14px;
|
| 389 |
+
text-transform: uppercase;
|
| 390 |
+
letter-spacing: 1px;
|
| 391 |
+
font-weight: bold;
|
| 392 |
+
}
|
| 393 |
+
|
| 394 |
+
.prompt-text {
|
| 395 |
+
color: #1e90ff;
|
| 396 |
+
font-size: 20px;
|
| 397 |
+
line-height: 1.6;
|
| 398 |
+
font-weight: 600;
|
| 399 |
+
}
|
| 400 |
+
|
| 401 |
+
.rate-right {
|
| 402 |
+
display: flex;
|
| 403 |
+
flex-direction: column;
|
| 404 |
+
gap: 14px;
|
| 405 |
+
}
|
| 406 |
+
|
| 407 |
+
/* ---- Score sections ---- */
.score-section {
  background: var(--card);
  border: 1px solid var(--border);
  border-radius: 6px;
  padding: 14px 16px;
  display: flex;
  flex-direction: column;
  gap: 10px;
}

.section-title {
  color: var(--text);
  font-size: 15px;
  text-transform: uppercase;
  letter-spacing: 1px;
  margin-bottom: 4px;
  border-bottom: 1px solid var(--border);
  padding-bottom: 6px;
  font-weight: 600;
}

.dim-row {
  display: flex;
  align-items: center;
  justify-content: space-between;
  gap: 10px;
  flex-wrap: wrap;
}

.dim-reasoning {
  width: 100%;
  font-size: 0.88em;
  color: #555;
  background: #f0f4f8;
  border-left: 3px solid #6366f1;
  padding: 6px 10px;
  margin-top: 2px;
  border-radius: 0 4px 4px 0;
  line-height: 1.5;
}

.dim-info {
  display: flex;
  flex-direction: column;
  gap: 2px;
  flex: 1;
  min-width: 0;
}

.dim-label {
  color: var(--text);
  font-size: 16px;
  font-weight: 600;
}

.dim-desc {
  color: var(--text);
  font-size: 13px;
  line-height: 1.4;
}

.dim-key {
  background: var(--input);
  border-radius: 3px;
  padding: 1px 5px;
  font-size: 11px;
  color: var(--accent);
  margin-right: 4px;
  font-family: monospace;
}

.score-btns {
  display: flex;
  align-items: center;
  gap: 4px;
}

.score-btn {
  width: 44px;
  height: 44px;
  border: 1px solid var(--border);
  border-radius: 4px;
  background: var(--input);
  color: var(--text);
  cursor: pointer;
  font-size: 18px;
  font-weight: 600;
  transition: background 0.15s, border-color 0.15s;
  display: flex;
  align-items: center;
  justify-content: center;
}

.score-btn:hover {
  border-color: var(--label);
  background: #eff6ff;
}

.score-btn.selected {
  background: var(--accent);
  border-color: var(--accent);
  color: #fff;
  font-weight: bold;
}

/* ---- Note input ---- */
.note-input {
  width: 100%;
  min-height: 70px;
  background: var(--input);
  border: 1px solid var(--border);
  border-radius: 4px;
  color: var(--text);
  font-size: 15px;
  padding: 8px 10px;
  resize: vertical;
  outline: none;
  font-family: inherit;
}

.note-input:focus {
  border-color: var(--label);
}

/* ---- Form actions ---- */
.form-actions {
  display: flex;
  align-items: center;
  gap: 12px;
}

/* ---- Skip form ---- */
.skip-form {
  display: flex;
  gap: 8px;
  align-items: center;
  margin-top: 4px;
}

.skip-reason-input {
  flex: 1;
  padding: 7px 10px;
  background: var(--input);
  border: 1px solid var(--border);
  border-radius: 4px;
  color: var(--text);
  font-size: 13px;
  outline: none;
}

.skip-reason-input:focus {
  border-color: var(--accent);
}

.note-section { margin-top: 8px; }
.note-input {
  width: 100%;
  min-height: 48px;
  padding: 7px 10px;
  background: var(--input);
  border: 1px solid var(--border);
  border-radius: 4px;
  color: var(--text);
  font-size: 13px;
  resize: vertical;
  outline: none;
}
.note-input:focus { border-color: var(--accent); }
.note-input:disabled { opacity: .6; cursor: not-allowed; }

/* ---- Scoring guide ---- */
.scoring-guide {
  background: var(--card);
  border: 1px solid var(--border);
  border-radius: 6px;
  padding: 10px 16px;
  display: flex;
  gap: 20px;
  align-items: center;
  font-size: 14px;
  color: var(--text);
  margin-top: 10px;
}

.scoring-guide span:first-child {
  color: var(--text);
  font-weight: bold;
}

/* ---- Congrats screen ---- */
.congrats-screen {
  display: flex;
  justify-content: center;
  align-items: center;
  min-height: 60vh;
}
.congrats-card {
  text-align: center;
  background: var(--card);
  border: 1px solid var(--border);
  border-radius: 12px;
  padding: 48px 40px;
  max-width: 500px;
}
.congrats-icon {
  font-size: 64px;
  margin-bottom: 16px;
}
.congrats-card h2 {
  color: var(--accent);
  margin: 0 0 12px;
  font-size: 24px;
}
.congrats-card p {
  color: var(--text);
  font-size: 16px;
  margin: 8px 0;
}
.congrats-sub {
  color: var(--text) !important;
  font-size: 15px !important;
}

/* ---- Guide button ---- */
.btn-guide {
  background: var(--green-bg);
  border-color: var(--green-border);
  color: var(--green-text);
  font-size: 15px;
  padding: 5px 14px;
}
.btn-guide:hover {
  background: var(--green-border);
  border-color: var(--green-text);
  color: #fff;
}
.btn-guide-float {
  float: right;
  margin-top: -4px;
}

/* ---- Guide page ---- */
.guide-container {
  max-width: 900px;
  padding-bottom: 60px;
}

.guide-topbar {
  margin-bottom: 16px;
}

.guide-title {
  color: var(--text);
  font-size: 28px;
  margin-bottom: 8px;
}

.guide-intro {
  color: var(--text);
  font-size: 16px;
  line-height: 1.7;
  margin-bottom: 28px;
}

.guide-section {
  margin-bottom: 32px;
}

.guide-section h2 {
  color: var(--text);
  font-size: 20px;
  border-bottom: 1px solid var(--border);
  padding-bottom: 8px;
  margin-bottom: 14px;
}

.guide-note {
  color: var(--text);
  font-size: 15px;
  margin-bottom: 12px;
  line-height: 1.6;
}

/* Guide table */
.guide-table {
  width: 100%;
  border-collapse: collapse;
  margin-bottom: 10px;
}
.guide-table th,
.guide-table td {
  padding: 10px 14px;
  border: 1px solid var(--border);
  text-align: left;
  font-size: 14px;
}
.guide-table th {
  background: var(--input);
  color: var(--text);
  font-size: 15px;
}
.score-cell {
  font-weight: bold;
  text-align: center;
  width: 60px;
}
.score-cell.s5 { color: #15803d; }
.score-cell.s4 { color: #16a34a; }
.score-cell.s3 { color: #eab308; }
.score-cell.s2 { color: #f97316; }
.score-cell.s1 { color: #ef4444; }

/* Dim cards */
.dim-cards {
  display: flex;
  flex-direction: column;
  gap: 10px;
}

.guide-card {
  background: var(--card);
  border: 1px solid var(--border);
  border-radius: 6px;
  padding: 14px 18px;
}

.guide-card-header {
  display: flex;
  align-items: center;
  gap: 8px;
  margin-bottom: 6px;
}

.guide-card-label {
  color: var(--text);
  font-weight: bold;
  font-size: 16px;
}

.guide-card-scale {
  margin-left: auto;
  color: var(--text);
  font-size: 14px;
  background: var(--input);
  padding: 2px 8px;
  border-radius: 3px;
}

.guide-card-desc {
  color: var(--text);
  font-size: 15px;
  line-height: 1.5;
  margin-bottom: 8px;
}

.guide-score-labels {
  display: flex;
  flex-wrap: wrap;
  gap: 10px;
  font-size: 14px;
  color: var(--text);
}
.guide-score-labels strong {
  color: var(--text);
}

.guide-card-phys {
  border-left: 3px solid var(--input);
}

.guide-card-warn {
  border-left: 3px solid var(--accent);
}
.guide-card-warn h3 {
  color: var(--accent);
  font-size: 15px;
  margin-bottom: 8px;
}
.guide-card-warn ul {
  list-style: none;
  padding: 0;
}
.guide-card-warn li {
  padding: 6px 0;
  border-bottom: 1px solid var(--border);
  font-size: 15px;
  line-height: 1.6;
}
.guide-card-warn li:last-child {
  border-bottom: none;
}
.guide-card-warn p {
  font-size: 15px;
  line-height: 1.6;
  color: var(--text);
}

.guide-domain-block {
  margin-bottom: 16px;
}
.guide-domain-title {
  color: var(--text);
  font-size: 16px;
  text-transform: capitalize;
  margin-bottom: 8px;
  padding-left: 4px;
}

.guide-list {
  list-style: none;
  padding: 0;
}
.guide-list li {
  background: var(--card);
  border: 1px solid var(--border);
  border-radius: 6px;
  padding: 10px 16px;
  margin-bottom: 8px;
  font-size: 13px;
  line-height: 1.6;
}

.guide-footer {
  text-align: center;
  margin-top: 32px;
}

/* ---- Demo page ---- */
.demo-example {
  background: var(--card);
  border: 1px solid var(--border);
  border-radius: 10px;
  padding: 20px 24px;
  margin-bottom: 24px;
}
.demo-example .score-section {
  margin-bottom: 12px;
}
.demo-video-wrap {
  margin-bottom: 16px;
  text-align: center;
}
.demo-video {
  max-width: 560px;
  width: 100%;
  border-radius: 8px;
  border: 1px solid var(--border);
}
.demo-judge {
  font-weight: normal;
  font-size: 12px;
  color: var(--label);
}
/* Color-coded selected score buttons on demo page */
.score-btn.demo-selected {
  color: #fff !important;
  border-color: transparent !important;
  transform: scale(1.12);
  box-shadow: 0 2px 6px rgba(0,0,0,0.15);
}
.demo-rationale {
  background: var(--bg);
  border: 1px solid var(--border);
  border-radius: 6px;
  padding: 12px 16px;
  font-size: 14px;
  line-height: 1.6;
  margin-top: 12px;
}
.demo-rationale ul {
  margin: 6px 0 0;
  padding-left: 18px;
}
.demo-rationale li {
  margin-bottom: 6px;
}

/* ---- Comparison mode ---- */
.compare-container {
  max-width: 1800px;
}

.compare-prompt {
  margin-bottom: 16px;
}

.compare-grid {
  display: grid;
  gap: 16px;
  margin-bottom: 20px;
}
.compare-grid-2 { grid-template-columns: repeat(2, 1fr); }
.compare-grid-3 { grid-template-columns: repeat(3, 1fr); }
.compare-grid-4 { grid-template-columns: repeat(4, 1fr); }

.compare-card {
  background: var(--card);
  border: 1px solid var(--border);
  border-radius: 8px;
  padding: 12px;
  display: flex;
  flex-direction: column;
  gap: 10px;
}

.compare-card-header {
  display: flex;
  align-items: center;
  justify-content: center;
  padding-bottom: 6px;
  border-bottom: 1px solid var(--border);
}

.compare-label {
  font-size: 16px;
  font-weight: bold;
  color: var(--accent);
  letter-spacing: 1px;
}

.compare-video {
  width: 100%;
  aspect-ratio: 16 / 9;
  object-fit: cover;
  border-radius: 4px;
  background: #000;
  border: 1px solid var(--border);
}

.compare-card .score-section {
  padding: 10px 12px;
}

.law-tag {
  display: inline-block;
  background: #e8f0fe;
  color: #1a56db;
  font-weight: 600;
  font-size: 0.8em;
  padding: 2px 8px;
  border-radius: 4px;
  box-shadow: 0 1px 3px rgba(0,0,0,0.15);
  margin-right: 6px;
  vertical-align: middle;
}

/* Domain header in physical sub-questions */
.domain-header {
  font-size: 12px;
  font-weight: 700;
  color: var(--muted);
  text-transform: uppercase;
  letter-spacing: 0.5px;
  margin: 10px 0 4px;
  padding: 4px 0;
  border-bottom: 1px solid var(--border);
}

/* Note text below criterion question */
.dim-note {
  display: block;
  font-size: 0.82em;
  color: var(--muted);
  margin-top: 2px;
  font-style: italic;
}

/* Guide page: note under question */
.guide-card-note {
  font-size: 13px;
  color: var(--muted);
  margin-top: 4px;
  font-style: italic;
}

.compare-card .section-title {
  font-size: 11px;
  margin-bottom: 2px;
  padding-bottom: 4px;
}

.compare-card .dim-row {
  gap: 6px;
}

.compare-card .dim-info {
  min-width: 0;
}

.compare-card .dim-label {
  font-size: 14px;
}

.compare-card .dim-desc {
  font-size: 12px;
}

.compare-card .score-btn {
  width: 28px;
  height: 28px;
  font-size: 11px;
}

.compare-actions {
  display: flex;
  align-items: center;
  gap: 12px;
  margin-bottom: 10px;
}

/* Task list: prompt preview */
.task-prompt-preview {
  font-size: 15px;
  color: var(--text);
  flex: 1;
  min-width: 0;
  white-space: nowrap;
  overflow: hidden;
  text-overflow: ellipsis;
}

.task-model-count {
  font-size: 13px;
  color: var(--text);
  background: var(--input);
  padding: 2px 8px;
  border-radius: 3px;
  border: 1px solid var(--border);
  white-space: nowrap;
}

/* ---- Custom video controls (hide duration) ---- */
.compare-video-wrap {
  position: relative;
}

.video-controls {
  display: flex;
  align-items: center;
  gap: 8px;
  padding: 6px 8px;
  background: rgba(0, 0, 0, 0.7);
  border-radius: 0 0 4px 4px;
  margin-top: -4px;
}

.vc-btn {
  background: none;
  border: none;
  color: #fff;
  cursor: pointer;
  font-size: 14px;
  padding: 2px 4px;
  line-height: 1;
}

.vc-bar {
  flex: 1;
  height: 6px;
  background: #444;
  border-radius: 3px;
  cursor: pointer;
  position: relative;
}

.vc-progress {
  height: 100%;
  background: var(--accent);
  border-radius: 3px;
  width: 0%;
  pointer-events: none;
}

/* ---- Per-video card states ---- */
.compare-card.card-completed {
  border-color: var(--green-border);
  opacity: 0.6;
}
.compare-card.card-skipped {
  border-color: #d1d5db;
  opacity: 0.45;
}
.compare-card-header .status-badge {
  margin-left: auto;
  font-size: 11px;
}
.card-actions {
  display: flex;
  gap: 8px;
  padding-top: 6px;
}

/* Instructions box on task list page */
.instructions-box {
  background: #f0f7ff;
  border: 1px solid #b3d4fc;
  border-radius: 10px;
  padding: 20px 28px;
  margin-bottom: 24px;
}
.instructions-box h2 {
  margin: 0 0 10px;
  font-size: 1.25rem;
  color: #1a3a5c;
}
.instructions-box p {
  margin: 6px 0;
  color: #333;
  line-height: 1.5;
}

/* Rating Scale Reference box on task list page */
.scale-reference-box {
  background: #fffbeb;
  border: 1px solid #fcd34d;
  border-radius: 10px;
  padding: 20px 28px;
  margin-bottom: 24px;
}
.scale-reference-box h2 {
  margin: 0 0 10px;
  font-size: 1.25rem;
  color: #92400e;
}
.scale-reference-box p {
  margin: 6px 0 14px;
  color: #333;
  line-height: 1.5;
}
.scale-table {
  width: 100%;
}
.scale-table th {
  text-align: left;
}
.scale-table td:first-child {
  width: 60px;
  text-align: center;
}
.scale-table td:nth-child(2) {
  width: 180px;
  white-space: nowrap;
}

.start-section {
  text-align: center;
  margin: 32px 0;
}
.btn-start {
  display: inline-block;
  padding: 16px 48px;
  font-size: 1.2rem;
  font-weight: 600;
  color: #fff;
  background: var(--accent);
  border: none;
  border-radius: 8px;
  cursor: pointer;
  transition: background 0.2s;
}
.btn-start:hover {
  background: #d63050;
}

/* Trial mode: all-models grid */
.trial-container { max-width: 1800px; }
.trial-grid {
  display: grid;
  grid-template-columns: repeat(4, 1fr);
  gap: 14px;
  margin-bottom: 20px;
}
.trial-grid .compare-card { font-size: 0.88rem; }
.trial-grid .compare-video { max-height: 220px; }
.trial-grid .section-title { font-size: 0.85rem; margin: 6px 0 4px; }
.trial-grid .dim-label { font-size: 0.8rem; }
.trial-grid .score-btn { min-width: 26px; padding: 3px 6px; font-size: 0.78rem; }
.trial-grid .note-input { font-size: 0.8rem; padding: 4px 6px; min-height: 40px; }

@media (max-width: 1600px) {
  .trial-grid { grid-template-columns: repeat(3, 1fr); }
}
@media (max-width: 1100px) {
  .trial-grid { grid-template-columns: repeat(2, 1fr); }
}

/* ---- Demographics form ---- */
.demographics-box {
  background: #f0f7ff;
  border: 1px solid #b3d4fc;
  border-radius: 10px;
  padding: 24px 32px;
  margin-bottom: 24px;
  max-width: 560px;
  margin-left: auto;
  margin-right: auto;
}
.demographics-box h2 {
  margin: 0 0 8px;
  font-size: 1.25rem;
  color: #1a3a5c;
}
.demographics-box > p {
  color: #555;
  margin-bottom: 20px;
}
.demographics-form {
  display: flex;
  flex-direction: column;
  gap: 20px;
}
.demo-field {
  display: flex;
  flex-direction: column;
  gap: 8px;
}
.demo-label {
  font-weight: 600;
  font-size: 16px;
  color: var(--text);
}
.demo-label .required {
  color: var(--accent);
}
.demo-options {
  display: flex;
  flex-wrap: wrap;
  gap: 14px;
}
.demo-options label {
  display: flex;
  align-items: center;
  gap: 5px;
  font-size: 15px;
  cursor: pointer;
  color: var(--text);
}
.demo-input {
  padding: 10px 14px;
  background: var(--input);
  border: 1px solid var(--border);
  border-radius: 4px;
  color: var(--text);
  font-size: 15px;
  outline: none;
  width: 100%;
}
.demo-input:focus {
  border-color: var(--label);
}
.demo-input-inline {
  width: 160px;
  padding: 6px 10px;
  font-size: 14px;
}
.demo-input-inline:disabled {
  opacity: 0.4;
}

/* Responsive: stack on narrow screens */
@media (max-width: 1200px) {
  .compare-grid-4 { grid-template-columns: repeat(2, 1fr); }
}
@media (max-width: 800px) {
  .compare-grid-3,
  .compare-grid-4,
  .compare-grid-2 { grid-template-columns: 1fr; }
  .trial-grid { grid-template-columns: 1fr; }
}
evals/human_eval/supplement_laws.py
ADDED
@@ -0,0 +1,17 @@
#!/usr/bin/env python3
"""Disabled supplement script for the anonymous release.

The original supplement flow depends on the prompt-selection JSON, which is not
included in this release.
"""


def main():
    raise RuntimeError(
        "supplement_laws is not included in this release because it depends on "
        "the omitted prompt-selection JSON."
    )


if __name__ == "__main__":
    main()
evals/human_eval/templates/_progress_bar.html
ADDED
@@ -0,0 +1,16 @@
{# Progress bar partial — expects user_completed_groups, user_quota in context.
   Set progress_compact = true before including to hide the stats text. #}
<div class="progress-section">
  {% if not progress_compact | default(false) %}
  <div class="progress-stats">
    <span class="progress-stat-main">
      You: {{ user_completed_groups }} / {{ user_quota }} groups rated
    </span>
  </div>
  {% endif %}
  {% set pct = [user_completed_groups / user_quota * 100, 100] | min if user_quota > 0 else 0 %}
  <div class="progress-bar-track">
    <div class="progress-bar-fill" style="width: {{ pct | round(1) }}%"></div>
  </div>
  <span class="progress-pct">{{ pct | round(1) }}%</span>
</div>
evals/human_eval/templates/_scale_table.html
ADDED
@@ -0,0 +1,17 @@
{# Rating scale reference table partial #}
<div class="scale-reference-box">
  <h2>Rating Scale Reference</h2>
  <p>Before you begin rating, here is a summary of the scoring scale you will use. Keep these descriptions in mind as you rate each video.</p>
  <table class="guide-table scale-table">
    <thead>
      <tr><th>Score</th><th>Label</th><th>What It Means</th></tr>
    </thead>
    <tbody>
      <tr><td class="score-cell s5">5</td><td style="color:#15803d"><strong>Fully plausible</strong></td><td style="color:#15803d">You would not notice anything wrong if this were a real video. The physics looks exactly like the real world.</td></tr>
      <tr><td class="score-cell s4">4</td><td style="color:#16a34a"><strong>Mostly plausible</strong></td><td style="color:#16a34a">The physics is mostly right, but you can spot a small oddity if you look carefully. It does not break your sense of realism.</td></tr>
      <tr><td class="score-cell s3">3</td><td style="color:#eab308"><strong>Partially plausible</strong></td><td style="color:#eab308">You notice something clearly off about the physics. It is not right, but it is not wildly impossible either.</td></tr>
      <tr><td class="score-cell s2">2</td><td style="color:#f97316"><strong>Largely implausible</strong></td><td style="color:#f97316">The physics is mostly wrong. You would immediately notice these errors in real life.</td></tr>
      <tr><td class="score-cell s1">1</td><td style="color:#ef4444"><strong>Completely implausible</strong></td><td style="color:#ef4444">The physics is impossible. Objects do things that could never happen in the real world.</td></tr>
    </tbody>
  </table>
</div>
evals/human_eval/templates/base.html
ADDED
@@ -0,0 +1,29 @@
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>{% block title %}Video Rating System{% endblock %}</title>
  <link rel="stylesheet" href="/static/style.css">
</head>
<body>
  <div id="mobile-block" style="display:none; position:fixed; inset:0; z-index:99999; background:#fff; justify-content:center; align-items:center; text-align:center; padding:2rem;">
    <div>
      <h2 style="font-size:1.5rem; margin-bottom:1rem;">Please use a computer</h2>
      <p style="color:#666;">This requires a desktop or laptop browser for accurate video evaluation. Mobile devices are not supported.</p>
    </div>
  </div>
  <script>
    (function() {
      var w = window.innerWidth;
      if (w < 768 || /Android|iPhone|iPod/i.test(navigator.userAgent)) {
        var b = document.getElementById('mobile-block');
        b.style.display = 'flex';
        document.body.style.overflow = 'hidden';
      }
    })();
  </script>
  {% block content %}{% endblock %}
  {% block scripts %}{% endblock %}
</body>
</html>
evals/human_eval/templates/dashboard.html
ADDED
@@ -0,0 +1,191 @@
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Annotation Dashboard</title>
  <style>
    * { box-sizing: border-box; margin: 0; padding: 0; }
    body { font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif; background: #f5f7fa; color: #333; padding: 2rem; }
    h1 { font-size: 1.6rem; margin-bottom: 1.5rem; }
    .cards { display: grid; grid-template-columns: repeat(auto-fit, minmax(180px, 1fr)); gap: 1rem; margin-bottom: 2rem; }
    .card { background: #fff; border-radius: 10px; padding: 1.2rem; box-shadow: 0 1px 4px rgba(0,0,0,.08); text-align: center; }
    .card .num { font-size: 2rem; font-weight: 700; }
    .card .label { font-size: .85rem; color: #888; margin-top: .3rem; }
    .card.green .num { color: #16a34a; }
    .card.blue .num { color: #2563eb; }
    .card.orange .num { color: #ea580c; }
    .card.purple .num { color: #6366f1; }
    .card.gray .num { color: #64748b; }

    h2 { font-size: 1.2rem; margin: 1.5rem 0 .8rem; }
    table { width: 100%; border-collapse: collapse; background: #fff; border-radius: 10px; overflow: hidden; box-shadow: 0 1px 4px rgba(0,0,0,.08); margin-bottom: 2rem; }
    th, td { padding: .7rem 1rem; text-align: left; border-bottom: 1px solid #eee; }
    th { background: #f8fafc; font-weight: 600; font-size: .85rem; color: #64748b; text-transform: uppercase; letter-spacing: .03em; }
    td { font-size: .95rem; }
    .bar-bg { background: #e5e7eb; border-radius: 4px; height: 8px; width: 100%; min-width: 120px; display: flex; overflow: hidden; }
    .bar-fill { height: 8px; transition: width .3s; }
    .bar-fill:first-child { border-radius: 4px 0 0 4px; }
    .bar-fill:last-child { border-radius: 0 4px 4px 0; }
    .bar-fill:only-child { border-radius: 4px; }
    .bar-fill.done { background: #16a34a; }
    .bar-fill.partial { background: #6366f1; }
    .bar-fill.skipped { background: #fbbf24; }
    .tag { display: inline-block; padding: 2px 8px; border-radius: 4px; font-size: .8rem; font-weight: 500; }
    .tag-completed { background: #dcfce7; color: #166534; }
    .tag-partial { background: #e0e7ff; color: #3730a3; }
    .tag-assigned { background: #dbeafe; color: #1e40af; }
    .tag-skipped { background: #fef3c7; color: #92400e; }
    .ts { color: #94a3b8; font-size: .8rem; margin-top: 1.5rem; text-align: right; }
    .refresh-btn { background: #2563eb; color: #fff; border: none; padding: .4rem 1rem; border-radius: 6px; cursor: pointer; font-size: .85rem; margin-left: 1rem; }
    .refresh-btn:hover { background: #1d4ed8; }
    .toggle-btn { background: #6366f1; color: #fff; border: none; padding: .4rem 1rem; border-radius: 6px; cursor: pointer; font-size: .85rem; margin-left: .5rem; }
    .toggle-btn:hover { background: #4f46e5; }
    .toggle-btn.active { background: #16a34a; }
    .db-badge { display: inline-block; padding: 2px 10px; border-radius: 4px; font-size: .8rem; font-weight: 600; margin-left: .7rem; vertical-align: middle; }
    .db-badge.raw { background: #fee2e2; color: #991b1b; }
    .db-badge.filtered { background: #dcfce7; color: #166534; }
    .card.red .num { color: #dc2626; }
    .empty { color: #94a3b8; font-style: italic; padding: 1rem; }
  </style>
|
| 51 |
+
</head>
|
| 52 |
+
<body>
|
| 53 |
+
<h1>Annotation Dashboard
|
| 54 |
+
<span class="db-badge {{ 'filtered' if filtered else 'raw' }}">{{ db_name }}</span>
|
| 55 |
+
{% if has_filtered_db %}
|
| 56 |
+
<a href="?filtered={{ '0' if filtered else '1' }}"><button class="toggle-btn {{ 'active' if filtered else '' }}">{{ 'Show Raw' if filtered else 'Show Filtered' }}</button></a>
|
| 57 |
+
{% endif %}
|
| 58 |
+
<button class="refresh-btn" onclick="location.reload()">Refresh</button>
|
| 59 |
+
</h1>
|
| 60 |
+
|
| 61 |
+
<div class="cards">
|
| 62 |
+
<div class="card blue"><div class="num">{{ total_videos }}</div><div class="label">Total Videos</div></div>
|
| 63 |
+
<div class="card green"><div class="num">{{ total_completed }}</div><div class="label">Completed</div></div>
|
| 64 |
+
<div class="card purple"><div class="num">{{ total_partial }}</div><div class="label">Partial</div></div>
|
| 65 |
+
<div class="card orange"><div class="num">{{ total_assigned }}</div><div class="label">Pending</div></div>
|
| 66 |
+
<div class="card gray"><div class="num">{{ total_skipped }}</div><div class="label">Skipped</div></div>
|
| 67 |
+
<div class="card"><div class="num">{{ total_annotations }}</div><div class="label">Annotations</div></div>
|
| 68 |
+
<div class="card red"><div class="num">{{ annotations_needed }}</div><div class="label">Still Needed ({{ target_n }}x)</div></div>
|
| 69 |
+
<div class="card"><div class="num">{{ total_groups }}</div><div class="label">Groups</div></div>
|
| 70 |
+
</div>
|
| 71 |
+
|
| 72 |
+
<h2>Coverage (target: {{ target_n }}x per video)</h2>
|
| 73 |
+
<div class="cards">
|
| 74 |
+
<div class="card green"><div class="num">{{ coverage.fully }}</div><div class="label">Fully Covered</div></div>
|
| 75 |
+
<div class="card orange"><div class="num">{{ coverage.partially }}</div><div class="label">Partially Covered</div></div>
|
| 76 |
+
<div class="card gray"><div class="num">{{ coverage.uncovered }}</div><div class="label">Uncovered</div></div>
|
| 77 |
+
</div>
|
| 78 |
+
|
| 79 |
+
<h2>Per-Model Coverage</h2>
|
| 80 |
+
<table>
|
| 81 |
+
<thead>
|
| 82 |
+
<tr><th>Model</th><th>Total</th><th>3x</th><th>2x</th><th>1x</th><th>0x</th><th>Coverage</th></tr>
|
| 83 |
+
</thead>
|
| 84 |
+
<tbody>
|
| 85 |
+
{% for m in per_model %}
|
| 86 |
+
{% set mtotal = m.fully + m.partially + m.uncovered %}
|
| 87 |
+
<tr>
|
| 88 |
+
<td>{{ m.model }}</td>
|
| 89 |
+
<td>{{ mtotal }}</td>
|
| 90 |
+
<td><span class="tag tag-completed">{{ m.v3 }}</span></td>
|
| 91 |
+
<td><span class="tag tag-partial">{{ m.v2 }}</span></td>
|
| 92 |
+
<td>{{ m.v1 }}</td>
|
| 93 |
+
<td>{{ m.v0 }}</td>
|
| 94 |
+
<td>
|
| 95 |
+
{% if mtotal > 0 %}
|
| 96 |
+
<div class="bar-bg"><div class="bar-fill done" style="width:{{ (m.v3 / mtotal * 100)|round }}%"></div>{% if m.v2 > 0 %}<div class="bar-fill partial" style="width:{{ (m.v2 / mtotal * 100)|round }}%"></div>{% endif %}{% if m.v1 > 0 %}<div class="bar-fill skipped" style="width:{{ (m.v1 / mtotal * 100)|round }}%"></div>{% endif %}</div>
|
| 97 |
+
<span style="font-size:.75rem;color:#888">{{ m.v3 + m.v2 + m.v1 }}/{{ mtotal }}</span>
|
| 98 |
+
{% endif %}
|
| 99 |
+
</td>
|
| 100 |
+
</tr>
|
| 101 |
+
{% endfor %}
|
| 102 |
+
</tbody>
|
| 103 |
+
</table>
|
| 104 |
+
|
| 105 |
+
<h2>Cohort Summary</h2>
|
| 106 |
+
<table>
|
| 107 |
+
<thead>
|
| 108 |
+
<tr><th>Cohort</th><th>Registered</th><th>Completed</th><th>Partial</th><th>Pending</th><th>Skipped</th><th>Progress</th></tr>
|
| 109 |
+
</thead>
|
| 110 |
+
<tbody>
|
| 111 |
+
{% for c in cohort_summary %}
|
| 112 |
+
<tr>
|
| 113 |
+
<td>{{ c.cohort }}</td>
|
| 114 |
+
<td>{{ c.registered }}{% if c.expected %}/{{ c.expected }}{% endif %}</td>
|
| 115 |
+
<td><span class="tag tag-completed">{{ c.completed }}</span></td>
|
| 116 |
+
<td><span class="tag tag-partial">{{ c.partial }}</span></td>
|
| 117 |
+
<td><span class="tag tag-assigned">{{ c.assigned }}</span></td>
|
| 118 |
+
<td><span class="tag tag-skipped">{{ c.skipped }}</span></td>
|
| 119 |
+
<td>
|
| 120 |
+
{% set total = c.completed + c.partial + c.assigned + c.skipped %}
|
| 121 |
+
{% if total > 0 %}
|
| 122 |
+
<div class="bar-bg"><div class="bar-fill done" style="width:{{ (c.completed / total * 100)|round }}%"></div>{% if c.partial > 0 %}<div class="bar-fill partial" style="width:{{ (c.partial / total * 100)|round }}%"></div>{% endif %}{% if c.skipped > 0 %}<div class="bar-fill skipped" style="width:{{ (c.skipped / total * 100)|round }}%"></div>{% endif %}</div>
|
| 123 |
+
<span style="font-size:.75rem;color:#888">{{ c.completed + c.partial }}/{{ total }}</span>
|
| 124 |
+
{% else %}
|
| 125 |
+
<span class="empty">no tasks</span>
|
| 126 |
+
{% endif %}
|
| 127 |
+
</td>
|
| 128 |
+
</tr>
|
| 129 |
+
{% endfor %}
|
| 130 |
+
<tr style="font-weight:700;background:#f8fafc">
|
| 131 |
+
<td>Total</td>
|
| 132 |
+
<td>{{ cohort_summary|sum(attribute='registered') }}/{{ cohort_summary|sum(attribute='expected') }}</td>
|
| 133 |
+
<td><span class="tag tag-completed">{{ cohort_summary|sum(attribute='completed') }}</span></td>
|
| 134 |
+
<td><span class="tag tag-partial">{{ cohort_summary|sum(attribute='partial') }}</span></td>
|
| 135 |
+
<td><span class="tag tag-assigned">{{ cohort_summary|sum(attribute='assigned') }}</span></td>
|
| 136 |
+
<td><span class="tag tag-skipped">{{ cohort_summary|sum(attribute='skipped') }}</span></td>
|
| 137 |
+
<td></td>
|
| 138 |
+
</tr>
|
| 139 |
+
</tbody>
|
| 140 |
+
</table>
|
| 141 |
+
|
| 142 |
+
<h2>NEU PG Student IDs</h2>
|
| 143 |
+
<table>
|
| 144 |
+
<thead>
|
| 145 |
+
<tr><th>Name</th><th>Cohort</th><th>Completed</th><th>Partial</th><th>Assigned</th><th>Skipped</th><th>Progress</th></tr>
|
| 146 |
+
</thead>
|
| 147 |
+
<tbody>
|
| 148 |
+
{% for a in per_annotator %}
|
| 149 |
+
<tr>
|
| 150 |
+
<td>{{ a.name }}</td>
|
| 151 |
+
<td>{{ a.cohort or '-' }}</td>
|
| 152 |
+
<td><span class="tag tag-completed">{{ a.completed }}</span></td>
|
| 153 |
+
<td><span class="tag tag-partial">{{ a.partial }}</span></td>
|
| 154 |
+
<td><span class="tag tag-assigned">{{ a.assigned }}</span></td>
|
| 155 |
+
<td><span class="tag tag-skipped">{{ a.skipped }}</span></td>
|
| 156 |
+
<td>
|
| 157 |
+
{% set total = a.completed + a.partial + a.assigned + a.skipped %}
|
| 158 |
+
{% if total > 0 %}
|
| 159 |
+
<div class="bar-bg"><div class="bar-fill done" style="width:{{ (a.completed / total * 100)|round }}%"></div>{% if a.partial > 0 %}<div class="bar-fill partial" style="width:{{ (a.partial / total * 100)|round }}%"></div>{% endif %}{% if a.skipped > 0 %}<div class="bar-fill skipped" style="width:{{ (a.skipped / total * 100)|round }}%"></div>{% endif %}</div>
|
| 160 |
+
<span style="font-size:.75rem;color:#888">{{ a.completed + a.partial }}/{{ total }}</span>
|
| 161 |
+
{% else %}
|
| 162 |
+
<span class="empty">no tasks</span>
|
| 163 |
+
{% endif %}
|
| 164 |
+
</td>
|
| 165 |
+
</tr>
|
| 166 |
+
{% endfor %}
|
| 167 |
+
</tbody>
|
| 168 |
+
</table>
|
| 169 |
+
|
| 170 |
+
{% if recent_activity %}
|
| 171 |
+
<h2>Recent Activity</h2>
|
| 172 |
+
<table>
|
| 173 |
+
<thead>
|
| 174 |
+
<tr><th>Annotator</th><th>Video</th><th>Status</th><th>Time</th></tr>
|
| 175 |
+
</thead>
|
| 176 |
+
<tbody>
|
| 177 |
+
{% for r in recent_activity %}
|
| 178 |
+
<tr>
|
| 179 |
+
<td>{{ r.name }}</td>
|
| 180 |
+
<td title="{{ r.filename }}">{{ r.filename|truncate(40) }}</td>
|
| 181 |
+
<td><span class="tag tag-{{ r.status }}">{{ r.status }}</span></td>
|
| 182 |
+
<td>{{ r.completed_at or r.assigned_at }}</td>
|
| 183 |
+
</tr>
|
| 184 |
+
{% endfor %}
|
| 185 |
+
</tbody>
|
| 186 |
+
</table>
|
| 187 |
+
{% endif %}
|
| 188 |
+
|
| 189 |
+
<div class="ts">Last loaded: <script>document.write(new Date().toLocaleString())</script></div>
|
| 190 |
+
</body>
|
| 191 |
+
</html>
|
evals/human_eval/templates/demo.html ADDED
@@ -0,0 +1,132 @@
{% extends "base.html" %}

{% block title %}Demo — Video Rating System{% endblock %}

{% block content %}
<div class="topbar">
  <span class="annotator-name">{{ annotator_name }}</span>
  <span class="progress-label">Progress: {{ user_completed_groups }} / {{ user_quota }}</span>
</div>

{% include "_progress_bar.html" %}

<div class="container">
  <!-- Instructions -->
  <div class="instructions-box">
    <h2>Instructions</h2>
    <p>You will be shown pairs of videos along with their text prompts and reference images. Your task is to evaluate, independently for each video, whether it obeys real-world physical laws.</p>
    <p>Please provide thoughtful and consistent ratings. Your evaluations are very important to us.</p>
    <p>Check the scoring demo examples below to see how we score videos before you begin.</p>
  </div>

  {% include "_scale_table.html" %}

  <!-- Scoring Demo Examples -->
  <div class="guide-container" style="max-width:100%; padding:0;">
    <h1 class="guide-title">{% if lang == 'zh' %}评分示范{% else %}Scoring Demo{% endif %}</h1>
    <p class="guide-intro">
      {% if lang == 'zh' %}
      以下是人工打分的实际案例。高亮的按钮是给出的分数,下方是评分理由,供你参考如何打分。
      {% else %}
      Below are human scoring examples. The highlighted button is the score given, with reasoning shown below. Use these as a reference for how to rate.
      {% endif %}
    </p>

    {% for demo in demos %}
    <div class="guide-section demo-example">
      <h2>{% if lang == 'zh' %}示例{% else %}Example{% endif %} {{ loop.index }}</h2>

      <!-- Prompt -->
      <div class="prompt-box compare-prompt">
        <span class="prompt-label">Prompt</span>
        <p class="prompt-text">{{ demo.prompt }}</p>
      </div>

      <!-- Video -->
      <div class="demo-video-wrap">
        <video class="demo-video" controls muted loop>
          <source src="{{ demo.video_url }}" type="video/mp4">
        </video>
        <p class="text-muted" style="font-size:0.85em; margin-top:0.3em;">
          {% if lang == 'zh' %}点击 play 可以播放视频,再次点击可以重放{% else %}Click play to start the video; click again to replay{% endif %}
        </p>
      </div>

      <!-- General Dimensions — same layout as rating page -->
      {% set rg = demo.rationale_general_zh if lang == 'zh' and demo.rationale_general_zh else demo.rationale_general %}
      <div class="score-section">
        <h3 class="section-title">{% if lang == 'zh' %}通用维度{% else %}General Dimensions{% endif %}</h3>
        {% for key, dim_label, values, description, score_labels in general_dims %}
        <div class="dim-row">
          <div class="dim-info">
            <span class="dim-label">{{ description }}</span>
          </div>
          <div class="score-btns">
            {% for v in values %}
            <button type="button"
                    class="score-btn{% if demo.scores.get(key) == v %} demo-selected{% endif %}"
                    title="{{ score_labels[v] }}"
                    disabled>{{ v }}</button>
            {% endfor %}
          </div>
          {% if rg and rg.get(key) %}
          <div class="dim-reasoning">{{ rg[key] }}</div>
          {% endif %}
        </div>
        {% endfor %}
      </div>

      <!-- Physical Sub-questions — same layout as rating page -->
      {% if demo.physical_dims %}
      {% set rp = demo.rationale_physical_zh if lang == 'zh' and demo.rationale_physical_zh else demo.rationale_physical %}
      <div class="score-section">
        <h3 class="section-title">{% if lang == 'zh' %}物理子问题{% else %}Physical Sub-questions{% endif %}</h3>
        {% set phy_dims = demo.physical_dims_zh if lang == 'zh' and demo.physical_dims_zh else demo.physical_dims %}
        {% for law_key, law_desc in phy_dims %}
        {% set hc = human_criteria_by_key.get(law_key) %}
        <div class="dim-row">
          <div class="dim-info">
            {% if hc %}
            <span class="dim-label"><span class="law-tag">{{ law_key | capitalize }}</span> {{ hc.question }}</span>
            {% if hc.note %}<span class="dim-note">{{ hc.note }}</span>{% endif %}
            {% else %}
            <span class="dim-label"><span class="law-tag">{{ law_key | capitalize }}</span> {{ law_desc }}</span>
            {% endif %}
          </div>
          <div class="score-btns">
            {% for v in [1, 2, 3, 4, 5] %}
            <button type="button"
                    class="score-btn{% if demo.physical_scores.get(law_key) == v %} demo-selected{% endif %}"
                    disabled>{{ v }}</button>
            {% endfor %}
          </div>
          {% if rp and rp.get(law_key) %}
          <div class="dim-reasoning">{{ rp[law_key] }}</div>
          {% endif %}
        </div>
        {% endfor %}
      </div>
      {% endif %}
    </div>
    {% endfor %}
  </div>

  <!-- Start Rating button -->
  <div class="start-section">
    <form action="/tasks/start" method="post">
      <button type="submit" class="btn btn-start">{% if lang == 'zh' %}我已了解评分标准 — 开始评分{% else %}Start Rating{% endif %}</button>
    </form>
  </div>
</div>
{% endblock %}

{% block scripts %}
<script>
// Color demo-selected buttons by score value
document.querySelectorAll('.score-btn.demo-selected').forEach(function(btn) {
  var v = parseInt(btn.textContent.trim());
  var colors = {5: '#15803d', 4: '#16a34a', 3: '#eab308', 2: '#f97316', 1: '#ef4444'};
  btn.style.background = colors[v] || '#888';
});
</script>
{% endblock %}
evals/human_eval/templates/demographics.html ADDED
@@ -0,0 +1,65 @@
{% extends "base.html" %}

{% block title %}Basic Information — Video Rating System{% endblock %}

{% block content %}
<div class="container">
  <div class="demographics-box">
    <h3>Basic Information</h3>
    <p>Background information for internal use only.</p>
    <form action="/demographics" method="POST" class="demographics-form">
      <div class="demo-field">
        <label class="demo-label">Gender <span class="required">*</span></label>
        <div class="demo-options">
          <label><input type="radio" name="gender" value="male" required> Male</label>
          <label><input type="radio" name="gender" value="female"> Female</label>
          <label><input type="radio" name="gender" value="other"> Other</label>
          <label><input type="radio" name="gender" value="prefer_not_to_say"> Prefer not to say</label>
        </div>
      </div>
      <div class="demo-field">
        <label class="demo-label">Age <span class="required">*</span></label>
        <select name="age" class="demo-input" style="width:240px" required>
          <option value="" disabled selected>Select</option>
          <option value="under18">Under 18</option>
          <option value="18-22">18–22</option>
          <option value="22-25">22–25</option>
          <option value="26-30">26–30</option>
          <option value="31-40">31–40</option>
          <option value="41+">41+</option>
        </select>
      </div>
      <div class="demo-field">
        <label class="demo-label">Major <span class="required">*</span></label>
        <div class="demo-options">
          <label><input type="radio" name="major" value="CS" required> Computer Science</label>
          <label><input type="radio" name="major" value="EE"> Electrical and Computer Engineering</label>
          <label><input type="radio" name="major" value="Business"> Business</label>
          <label><input type="radio" name="major" value="other"> Other:</label>
          <input type="text" name="major_other" class="demo-input demo-input-inline" placeholder="Please specify" disabled>
        </div>
      </div>
      <div class="demo-field">
        <label class="demo-label">Highest education (including pursuing) <span class="required">*</span></label>
        <select name="education" class="demo-input" style="width:240px" required>
          <option value="" disabled selected>Select</option>
          <option value="bachelor">Bachelor</option>
          <option value="master">Master</option>
          <option value="phd">PhD</option>
        </select>
      </div>
      <script>
      // Enable the free-text "Other" field only when the "other" radio is selected
      document.querySelectorAll('input[name="major"]').forEach(function(r) {
        r.addEventListener('change', function() {
          var otherInput = document.querySelector('input[name="major_other"]');
          otherInput.disabled = this.value !== 'other';
          if (this.value === 'other') otherInput.focus();
          else otherInput.value = '';
        });
      });
      </script>
      <button type="submit" class="btn btn-primary btn-start">Continue</button>
    </form>
  </div>
</div>
{% endblock %}
evals/human_eval/templates/guide.html ADDED
@@ -0,0 +1,108 @@
{% extends "base.html" %}

{% block title %}{% if lang == 'zh' %}评分指南{% else %}Scoring Guide{% endif %} — Video Rating System{% endblock %}

{% block content %}
<div class="container guide-container">
  <div class="guide-topbar">
    <form action="/tasks/start" method="post" style="display:inline;">
      <button type="submit" class="btn btn-start btn-sm">{% if lang == 'zh' %}开始评分{% else %}Start Rating{% endif %}</button>
    </form>
    <span style="float:right"></span>
  </div>

  <h1 class="guide-title">{% if lang == 'zh' %}评分指南{% else %}Scoring Guide{% endif %}</h1>
  <p class="guide-intro">
    {% if lang == 'zh' %}
    开始评分前请仔细阅读本指南。请完整观看视频后,对<strong>每个</strong>维度打分。
    {% else %}
    Please read this guide carefully before you start rating.
    Watch the full video, then rate <strong>every</strong> dimension.
    {% endif %}
  </p>

  <!-- ===== Scale ===== -->
  <div class="guide-section">
    <h2>{% if lang == 'zh' %}评分尺度{% else %}Rating Scale{% endif %}</h2>
    <table class="guide-table">
      <thead>
        <tr><th>{% if lang == 'zh' %}分数{% else %}Score{% endif %}</th><th>{% if lang == 'zh' %}含义{% else %}Meaning{% endif %}</th></tr>
      </thead>
      <tbody>
        {% if lang == 'zh' %}
        <tr><td class="score-cell s5">5</td><td>完全合理</td></tr>
        <tr><td class="score-cell s4">4</td><td>大部分合理</td></tr>
        <tr><td class="score-cell s3">3</td><td>部分合理</td></tr>
        <tr><td class="score-cell s2">2</td><td>大部分不合理</td></tr>
        <tr><td class="score-cell s1">1</td><td>完全不合理</td></tr>
        {% else %}
        <tr><td class="score-cell s5">5</td><td>Fully plausible</td></tr>
        <tr><td class="score-cell s4">4</td><td>Mostly plausible</td></tr>
        <tr><td class="score-cell s3">3</td><td>Partially plausible</td></tr>
        <tr><td class="score-cell s2">2</td><td>Largely implausible</td></tr>
        <tr><td class="score-cell s1">1</td><td>Completely implausible</td></tr>
        {% endif %}
      </tbody>
    </table>
    <p class="guide-note">
      {% if lang == 'zh' %}
      所有维度统一使用 1–5 分制。
      {% else %}
      All dimensions (general and physical) use the 1–5 scale above.
      {% endif %}
    </p>
  </div>

  <!-- ===== General Dimensions ===== -->
  <div class="guide-section">
    <h2>{% if lang == 'zh' %}通用维度{% else %}General Dimensions{% endif %}</h2>
    <div class="dim-cards">
      {% for key, label, values, description, score_labels in general_dims_display %}
      <div class="guide-card">
        <div class="guide-card-header">
          <span class="dim-key">{{ key }}</span>
          <span class="guide-card-label">{{ label }}</span>
          <span class="guide-card-scale">{% if values | length == 2 %}0/1{% else %}1–5{% endif %}</span>
        </div>
        <p class="guide-card-desc">{{ description }}</p>
      </div>
      {% endfor %}
    </div>
  </div>

  <!-- ===== Physical Sub-questions (Rating Instrument) ===== -->
  <div class="guide-section">
    <h2>{% if lang == 'zh' %}物理子问题 (1–5){% else %}Physical Sub-questions (1–5){% endif %}</h2>
    <p class="guide-note">
      {% if lang == 'zh' %}
      每个视频有 2–4 个需要评估的物理法则,只需对显示的法则打分。
      {% else %}
      Each video has 2–4 physical laws to evaluate. You only rate the laws shown for that video.
      {% endif %}
    </p>

    <div class="dim-cards">
      {% for c in human_criteria %}
      <div class="guide-card guide-card-phys">
        <div class="guide-card-header">
          <span class="dim-key">{{ c.key | capitalize }}</span>
          <span class="guide-card-scale">1–5</span>
        </div>
        <p class="guide-card-desc"><strong>Q:</strong> {{ c.question }}</p>
        {% if c.note %}
        <p class="guide-card-note"><strong>Note:</strong> {{ c.note }}</p>
        {% endif %}
      </div>
      {% endfor %}
    </div>
  </div>

  <div class="guide-footer">
    <form action="/tasks/start" method="post">
      <button type="submit" class="btn btn-start">{% if lang == 'zh' %}开始评分{% else %}Start Rating{% endif %}</button>
    </form>
  </div>
</div>
{% endblock %}
evals/human_eval/templates/login.html ADDED
@@ -0,0 +1,18 @@
{% extends "base.html" %}

{% block title %}Login — Video Rating System{% endblock %}

{% block content %}
<div class="login-container">
  <h1>Video Rating System</h1>
  {% if error %}
  <div class="error-msg">{{ error }}</div>
  {% endif %}
  <form method="post" action="/login">
    <input type="hidden" name="cohort" value="{{ cohort or 'others' }}">
    <input type="text" name="username" placeholder="Enter your username" autocomplete="off" required>
    <p class="login-hint">Your username will be saved. Please use the same username each time.</p>
    <button type="submit" class="btn btn-primary">Enter</button>
  </form>
</div>
{% endblock %}
evals/human_eval/templates/rate_compare.html ADDED
@@ -0,0 +1,154 @@
{% extends "base.html" %}

{% block title %}Compare — {{ video_entries | length }} Models{% endblock %}

{% block content %}
<div class="topbar">
  <span class="annotator-name">{{ annotator_name }}</span>
  <a href="/demo" target="_blank" class="btn btn-guide">Demo</a>
  <span class="progress-label">Progress: {{ user_completed_groups }} / {{ user_quota }}</span>
</div>

{% set progress_compact = true %}
{% include "_progress_bar.html" %}

<div class="container compare-container">
  <div class="instruction-banner">
    <strong>Instructions:</strong> Watch all three videos below. Rate <em>each</em> video independently.
    Your rating should range from 1–5, with 1 meaning completely implausible and 5 meaning fully plausible.
    Judge only physical plausibility — do not let brightness, colour saturation, or visual appeal influence your ratings.
    <div class="scoring-guide-inline">
      <span>Scoring:</span>
      <span style="color:#ef4444">1 = Completely implausible</span>
      <span style="color:#f97316">2 = Largely implausible</span>
      <span style="color:#eab308">3 = Partially plausible</span>
      <span style="color:#16a34a">4 = Mostly plausible</span>
      <span style="color:#15803d">5 = Fully plausible</span>
    </div>
  </div>

  <!-- Shared prompt -->
  <div class="prompt-box compare-prompt">
    <span class="prompt-label">Generation Prompt (shared by all videos)</span>
    <p class="prompt-text">{{ prompt }}</p>
  </div>

  <!-- Video grid — single form wrapping all cards -->
  <form id="submit-all-form" method="POST" action="/rate_group/{{ group_id }}">
    <div class="compare-grid compare-grid-{{ video_entries | length }}">
      {% for label, assignment in video_entries %}
      {% set vid_status = status_map.get(assignment.id, STATUS_ASSIGNED) %}
      {% set is_done = vid_status in (STATUS_COMPLETED, STATUS_SKIPPED) %}
      <div class="compare-card{% if vid_status == STATUS_COMPLETED %} card-completed{% elif vid_status == STATUS_SKIPPED %} card-skipped{% endif %}" data-vid="{{ assignment.id }}">
        <div class="compare-card-header">
          <span class="compare-label">Video {{ label }}</span>
          {% if vid_status == STATUS_COMPLETED %}
          <span class="status-badge completed">Submitted</span>
          {% elif vid_status == STATUS_SKIPPED %}
          <span class="status-badge skipped">Skipped</span>
          {% endif %}
        </div>

        <div class="compare-video-wrap">
          <video class="video-player compare-video" muted>
            <source src="/video/{{ assignment.dataset }}/{{ assignment.filename }}" type="video/mp4">
          </video>
          <div class="video-controls">
            <button type="button" class="vc-btn vc-play" onclick="togglePlay(this)">▶</button>
            <div class="vc-bar" onclick="seekVideo(event, this)">
              <div class="vc-progress"></div>
            </div>
            <button type="button" class="vc-btn vc-mute" onclick="toggleMute(this)">🔇</button>
          </div>
        </div>

        <div class="card-form" data-vid="{{ assignment.id }}">
          <!-- General dimensions -->
          <div class="score-section">
            <h3 class="section-title">General Dimensions</h3>
            {% set es = existing_scores_map.get(assignment.id) %}
            {% for key, dim_label, values, description, score_labels in general_dims %}
            <div class="dim-row">
              <div class="dim-info">
                <span class="dim-label">{{ description }}</span>
              </div>
              <div class="score-btns">
                {% for v in values %}
                <button type="button"
                        class="score-btn{% if es and es.get('general', {}).get(key) == v %} selected{% endif %}"
                        data-value="{{ v }}"
                        title="{{ score_labels[v] }}"
                        onclick="selectScore(this)"
                        {% if is_done %}disabled{% endif %}>{{ v }}</button>
                {% endfor %}
                <input type="hidden" class="score-input" name="v{{ assignment.id }}_{{ key }}"
                       value="{{ es.get('general', {}).get(key, '') if es else '' }}">
              </div>
            </div>
            {% endfor %}
          </div>

          <!-- Physical sub-questions (grouped by domain) -->
          {% if physical_dims %}
          <div class="score-section">
            <h3 class="section-title">Physical Sub-questions</h3>
            {% for key, desc in physical_dims %}
            {% set hc = human_criteria_by_key.get(key) %}
            <div class="dim-row physical-dim-row" data-law="{{ key }}">
              <div class="dim-info">
                {% if hc %}
                <span class="dim-label"><span class="law-tag">{{ key | capitalize }}</span> {{ hc.question }}</span>
                {% if hc.note %}<span class="dim-note">{{ hc.note }}</span>{% endif %}
                {% else %}
                <span class="dim-label"><span class="law-tag">{{ key | capitalize }}</span> {{ desc }}</span>
                {% endif %}
              </div>
              <div class="score-btns">
                {% for v in [1, 2, 3, 4, 5] %}
                <button type="button"
                        class="score-btn{% if es and es.get('physical', {}).get(key) == v %} selected{% endif %}"
                        data-value="{{ v }}"
                        onclick="selectScore(this)"
                        {% if is_done %}disabled{% endif %}>{{ v }}</button>
                {% endfor %}
                <input type="hidden" class="score-input" name="v{{ assignment.id }}_{{ key }}"
                       value="{{ es.get('physical', {}).get(key, '') if es else '' }}">
|
| 116 |
+
</div>
|
| 117 |
+
</div>
|
| 118 |
+
{% endfor %}
|
| 119 |
+
</div>
|
| 120 |
+
{% endif %}
|
| 121 |
+
|
| 122 |
+
<div class="note-section">
|
| 123 |
+
<textarea name="v{{ assignment.id }}_note" class="note-input" placeholder="Notes (optional) — If a physical law is completely irrelevant to this prompt (e.g. buoyancy when there is no water), note it here."
|
| 124 |
+
{% if is_done %}disabled{% endif %}>{{ es.get('note', '') if es else '' }}</textarea>
|
| 125 |
+
</div>
|
| 126 |
+
|
| 127 |
+
<input type="hidden" name="v{{ assignment.id }}_play_count" class="play-count-input" value="0">
|
| 128 |
+
<input type="hidden" name="v{{ assignment.id }}_stay_seconds" class="stay-seconds-input" value="0">
|
| 129 |
+
</div>
|
| 130 |
+
|
| 131 |
+
</div>
|
| 132 |
+
{% endfor %}
|
| 133 |
+
</div>
|
| 134 |
+
|
| 135 |
+
{% set has_pending = namespace(val=false) %}
|
| 136 |
+
{% for label, assignment in video_entries %}
|
| 137 |
+
{% if status_map.get(assignment.id, STATUS_ASSIGNED) not in (STATUS_COMPLETED, STATUS_SKIPPED) %}
|
| 138 |
+
{% set has_pending.val = true %}
|
| 139 |
+
{% endif %}
|
| 140 |
+
{% endfor %}
|
| 141 |
+
{% if has_pending.val %}
|
| 142 |
+
<div class="submit-all-section" style="text-align:center; margin: 1.5rem 0; display:flex; justify-content:center; gap:1rem;">
|
| 143 |
+
<button type="button" class="btn btn-primary btn-lg" id="submit-all-btn" onclick="doSave()">Save</button>
|
| 144 |
+
<button type="button" class="btn btn-primary btn-lg" id="submit-next-btn" onclick="doNext()">Next</button>
|
| 145 |
+
</div>
|
| 146 |
+
{% endif %}
|
| 147 |
+
</form>
|
| 148 |
+
|
| 149 |
+
</div>
|
| 150 |
+
{% endblock %}
|
| 151 |
+
|
| 152 |
+
{% block scripts %}
|
| 153 |
+
<script src="/static/rate.js"></script>
|
| 154 |
+
{% endblock %}
|
evals/human_eval/templates/task_list.html (ADDED)
@@ -0,0 +1,102 @@
{% extends "base.html" %}

{% block title %}Task List — Video Rating System{% endblock %}

{% block content %}
<div class="topbar">
  <span class="annotator-name">{{ annotator_name }}</span>
  <a href="/demo" class="btn btn-guide" target="_blank">Demo</a>
  <span class="progress-label">Progress: {{ user_completed_groups }} / {{ user_quota }}</span>
</div>

{% include "_progress_bar.html" %}

<div class="container">
  {% if needs_demographics %}
  <div class="demographics-box">
    <h2>Evaluating the Realism of Short Videos</h2>
    <p>Welcome, and thank you for participating!</p>
    <p>Your task is simple: you will watch a series of short videos and rate how realistic certain physical behaviours look to you.</p>
    <p>Please read the following information before proceeding.</p>
    <ul style="text-align:left; margin:0.8em 0;">
      <li><strong>Voluntary participation.</strong> Your participation is voluntary. You may stop at any time, with no penalty.</li>
      <li><strong>Duration.</strong> This takes about 15 minutes.</li>
      <li><strong>Anonymity.</strong> All your responses are anonymous. We do not collect your name, email address, or any information that could identify you personally.</li>
      <li><strong>Use of data.</strong> Your responses will be kept confidential and used internally only.</li>
    </ul>
    <hr style="margin:1.2em 0; border:none; border-top:1px solid #ccc;">
    <h3>Basic Information</h3>
    <p>We collect the following background information for internal use only.</p>
    <form action="/demographics" method="POST" class="demographics-form">
      <div class="demo-field">
        <label class="demo-label">Gender <span class="required">*</span></label>
        <div class="demo-options">
          <label><input type="radio" name="gender" value="male" required> Male</label>
          <label><input type="radio" name="gender" value="female"> Female</label>
          <label><input type="radio" name="gender" value="other"> Other</label>
          <label><input type="radio" name="gender" value="prefer_not_to_say"> Prefer not to say</label>
        </div>
      </div>
      <div class="demo-field">
        <label class="demo-label">Age <span class="required">*</span></label>
        <select name="age" class="demo-input" style="width:240px" required>
          <option value="" disabled selected>Select</option>
          <option value="under18">Under 18</option>
          <option value="18-22">18–22</option>
          <option value="22-25">22–25</option>
          <option value="26-30">26–30</option>
          <option value="31-40">31–40</option>
          <option value="41+">41+</option>
        </select>
      </div>
      <div class="demo-field">
        <label class="demo-label">Major <span class="required">*</span></label>
        <div class="demo-options">
          <label><input type="radio" name="major" value="CS" required> Computer Science</label>
          <label><input type="radio" name="major" value="EE"> Electrical Engineering</label>
          <label><input type="radio" name="major" value="Business"> Business</label>
          <label><input type="radio" name="major" value="other"> Other:</label>
          <input type="text" name="major_other" class="demo-input demo-input-inline" placeholder="Please specify" disabled>
        </div>
      </div>
      <div class="demo-field">
        <label class="demo-label">Highest education (including pursuing) <span class="required">*</span></label>
        <select name="education" class="demo-input" style="width:240px" required>
          <option value="" disabled selected>Select</option>
          <option value="bachelor">Bachelor</option>
          <option value="master">Master</option>
          <option value="phd">PhD</option>
        </select>
      </div>
      <script>
        document.querySelectorAll('input[name="major"]').forEach(function(r) {
          r.addEventListener('change', function() {
            var otherInput = document.querySelector('input[name="major_other"]');
            otherInput.disabled = this.value !== 'other';
            if (this.value === 'other') otherInput.focus();
            else otherInput.value = '';
          });
        });
      </script>
      <button type="submit" class="btn btn-primary btn-start">Continue</button>
    </form>
  </div>
  {% else %}
  <div class="instructions-box">
    <h2>Instructions</h2>
    <p>Welcome! You will be shown pairs of AI-generated videos along with their text prompts and reference images. Your task is to compare the videos and rate which one better follows the prompt and obeys real-world physical laws.</p>
    <p>You can stop at any time and close the browser — your progress is saved automatically. Come back and resume whenever you like.</p>
    <p>Please provide thoughtful and consistent ratings. Your evaluations are very important to us.</p>
    <p>Check the <a href="/demo" target="_blank"><strong>Demo</strong></a> (top-right corner) to see how we score videos before you begin.</p>
  </div>

  {% include "_scale_table.html" %}

  <div class="start-section">
    <form action="/tasks/start" method="post" onsubmit="this.querySelector('button').disabled=true">
      <button type="submit" class="btn btn-start">I understand the scale — Continue</button>
    </form>
  </div>
  {% endif %}
</div>
{% endblock %}
evals/human_eval/templates/thanks.html (ADDED)
@@ -0,0 +1,44 @@
{% extends "base.html" %}
{% block title %}Thank You{% endblock %}
{% block content %}
<div style="max-width:600px; margin:80px auto; text-align:center; font-family:system-ui,sans-serif;">
  <h1 style="font-size:2rem; margin-bottom:1rem;">Thank you for your participation!</h1>
  <p style="font-size:1.1rem; color:#555; margin-bottom:2rem;">
    Your annotations have been saved successfully.
  </p>
  <div style="background:#f0f7ff; border:2px solid #3b82f6; border-radius:12px; padding:2rem; margin-bottom:2rem;">
    <p style="font-size:1rem; color:#333; margin-bottom:0.5rem;">Your completion code:</p>
    <p id="code" style="font-size:2.5rem; font-weight:bold; letter-spacing:0.3em; color:#1d4ed8; margin:0.5rem 0;">{{ code }}</p>
    <button id="copyBtn"
            style="margin-top:1rem; padding:0.5rem 1.5rem; font-size:1rem; background:#3b82f6; color:#fff; border:none; border-radius:6px; cursor:pointer;">
      Copy Code
    </button>
    <script>
      document.getElementById('copyBtn').addEventListener('click', function() {
        var text = '{{ code }}';
        var btn = this;
        if (navigator.clipboard && window.isSecureContext) {
          navigator.clipboard.writeText(text).then(function() {
            btn.textContent = 'Copied!';
            setTimeout(function(){ btn.textContent = 'Copy Code'; }, 1500);
          });
        } else {
          var ta = document.createElement('textarea');
          ta.value = text;
          ta.style.position = 'fixed';
          ta.style.opacity = '0';
          document.body.appendChild(ta);
          ta.select();
          document.execCommand('copy');
          document.body.removeChild(ta);
          btn.textContent = 'Copied!';
          setTimeout(function(){ btn.textContent = 'Copy Code'; }, 1500);
        }
      });
    </script>
  </div>
  <p style="font-size:1.05rem; color:#333;">
    Please copy this code and paste it into the survey.
  </p>
</div>
{% endblock %}
evals/human_eval/tests/__init__.py (ADDED)
File without changes

evals/human_eval/tests/conftest.py (ADDED)
@@ -0,0 +1,43 @@
import sqlite3
import pytest
import sys
from pathlib import Path

EVAL_ROOT = Path(__file__).resolve().parent.parent.parent
if str(EVAL_ROOT) not in sys.path:
    sys.path.append(str(EVAL_ROOT))


@pytest.fixture
def db():
    from human_eval.db import init_db
    conn = sqlite3.connect(":memory:")
    conn.row_factory = sqlite3.Row
    init_db(conn)
    yield conn
    conn.close()


@pytest.fixture
def app(tmp_path):
    pytest.importorskip("flask")
    from human_eval.app import create_app
    test_app = create_app(
        db_path=":memory:",
        video_data_dir=tmp_path,
        skip_import=True,
    )
    test_app.config["TESTING"] = True
    yield test_app


@pytest.fixture
def client(app):
    return app.test_client()


@pytest.fixture
def app_db(app):
    from human_eval.app import get_app_db
    with app.app_context():
        yield get_app_db()
evals/human_eval/tests/test_assign.py (ADDED)
@@ -0,0 +1,246 @@
import sqlite3
import pytest


def _insert_annotator(db, name):
    db.execute("INSERT INTO annotators (name) VALUES (?)", (name,))
    db.commit()
    return db.execute("SELECT id FROM annotators WHERE name=?", (name,)).fetchone()["id"]


def _insert_video(db, filename, dataset="ds", prompt="p", physical_laws='["fluid"]', difficulty=None):
    db.execute(
        "INSERT INTO videos (filename, dataset, prompt, physical_laws, difficulty_score, import_hash) "
        "VALUES (?, ?, ?, ?, ?, 'hash')",
        (filename, dataset, prompt, physical_laws, difficulty),
    )
    db.commit()
    return db.execute(
        "SELECT id FROM videos WHERE filename=?", (filename,)
    ).fetchone()["id"]


def _insert_assignment(db, video_id, annotator_id, status="assigned", hours_offset=1):
    db.execute(
        "INSERT INTO assignments (video_id, annotator_id, status, expires_at) "
        "VALUES (?, ?, ?, datetime('now', ? || ' hours'))",
        (video_id, annotator_id, status, f"+{hours_offset}" if hours_offset >= 0 else str(hours_offset)),
    )
    db.commit()


def _insert_comparison_prompt(
    db,
    prompt,
    models=("model-a", "model-b"),
    physical_laws='["fluid"]',
):
    video_ids = []
    for model in models:
        video_ids.append(
            _insert_video(
                db,
                f"{prompt}-{model}.mp4",
                dataset=f"{model}-wmb",
                prompt=prompt,
                physical_laws=physical_laws,
            )
        )
    return video_ids


def _insert_comparison_group(db, group_id, prompt, physical_laws, video_ids, annotator_id, hours_offset=24):
    db.execute(
        "INSERT INTO comparison_groups (id, prompt, physical_laws) VALUES (?, ?, ?)",
        (group_id, prompt, physical_laws),
    )
    for video_id in video_ids:
        db.execute(
            "INSERT INTO assignments (video_id, annotator_id, status, expires_at, group_id) "
            "VALUES (?, ?, 'assigned', datetime('now', ? || ' hours'), ?)",
            (video_id, annotator_id, f"+{hours_offset}" if hours_offset >= 0 else str(hours_offset), group_id),
        )
    db.commit()


class TestAssignComparisonBatch:
    def test_expired_group_cleanup_deletes_saved_drafts_before_reassignment(self, db):
        from human_eval.assign import assign_comparison_batch

        uid = _insert_annotator(db, "alice")
        video_ids = _insert_comparison_prompt(db, "prompt-1")

        _insert_comparison_group(
            db,
            "expired-group",
            "prompt-1",
            '["fluid"]',
            video_ids,
            uid,
            hours_offset=-1,
        )

        expired_aid = db.execute(
            "SELECT id FROM assignments WHERE group_id = 'expired-group' ORDER BY id LIMIT 1"
        ).fetchone()["id"]
        db.execute(
            "INSERT INTO annotations (assignment_id, scores_json) VALUES (?, '{}')",
            (expired_aid,),
        )
        annotation_id = db.execute(
            "SELECT id FROM annotations WHERE assignment_id = ?",
            (expired_aid,),
        ).fetchone()["id"]
        db.execute(
            "INSERT INTO annotation_items (annotation_id, dimension, law, score) VALUES (?, ?, ?, ?)",
            (annotation_id, "SA", None, 4),
        )
        db.commit()

        new_groups = assign_comparison_batch(
            db,
            uid,
            n_annotators=3,
            batch_size=1,
            ttl_hours=24,
            models=["model-a", "model-b"],
            models_per_group=2,
        )

        assert len(new_groups) == 1
        assert db.execute(
            "SELECT 1 FROM comparison_groups WHERE id = 'expired-group'",
        ).fetchone() is None
        assert db.execute(
            "SELECT COUNT(*) AS c FROM annotations WHERE assignment_id = ?",
            (expired_aid,),
        ).fetchone()["c"] == 0
        assert db.execute(
            "SELECT COUNT(*) AS c FROM annotation_items WHERE annotation_id = ?",
            (annotation_id,),
        ).fetchone()["c"] == 0

    def test_prompt_stays_assignable_until_each_model_reaches_target_coverage(self, db):
        from human_eval.assign import assign_comparison_batch

        uid_a = _insert_annotator(db, "alice")
        uid_b = _insert_annotator(db, "bob")
        uid_c = _insert_annotator(db, "carol")
        uid_d = _insert_annotator(db, "dave")

        video_ids = _insert_comparison_prompt(
            db,
            "prompt-1",
            models=("model-a", "model-b", "model-c", "model-d"),
        )
        covered_video_ids = video_ids[:3]

        for idx, uid in enumerate((uid_a, uid_b, uid_c), start=1):
            group_id = f"completed-{idx}"
            _insert_comparison_group(
                db,
                group_id,
                "prompt-1",
                '["fluid"]',
                covered_video_ids,
                uid,
            )
            db.execute(
                "UPDATE assignments SET status = 'completed' WHERE group_id = ?",
                (group_id,),
            )
            db.commit()

        new_groups = assign_comparison_batch(
            db,
            uid_d,
            n_annotators=3,
            batch_size=1,
            ttl_hours=24,
            models=["model-a", "model-b", "model-c", "model-d"],
            models_per_group=3,
        )

        assert len(new_groups) == 1
        datasets = {
            row["dataset"]
            for row in db.execute(
                "SELECT v.dataset FROM assignments a "
                "JOIN videos v ON a.video_id = v.id "
                "WHERE a.group_id = ?",
                (new_groups[0],),
            ).fetchall()
        }
        assert "model-d-wmb" in datasets

    def test_repeated_call_keeps_pending_group_when_no_new_candidates(self, db):
        from human_eval.assign import assign_comparison_batch

        uid = _insert_annotator(db, "alice")
        video_ids = _insert_comparison_prompt(db, "prompt-1")

        first_groups = assign_comparison_batch(
            db,
            uid,
            n_annotators=3,
            batch_size=1,
            ttl_hours=24,
            models=["model-a", "model-b"],
            models_per_group=2,
        )

        assert len(first_groups) == 1
        group_id = first_groups[0]

        second_groups = assign_comparison_batch(
            db,
            uid,
            n_annotators=3,
            batch_size=1,
            ttl_hours=24,
            models=["model-a", "model-b"],
            models_per_group=2,
        )

        assert second_groups == []
        assert db.execute(
            "SELECT COUNT(*) AS c FROM assignments WHERE group_id = ?",
            (group_id,),
        ).fetchone()["c"] == len(video_ids)
        assert db.execute(
            "SELECT 1 FROM comparison_groups WHERE id = ?",
            (group_id,),
        ).fetchone() is not None

    def test_new_group_assignment_does_not_delete_existing_pending_group(self, db):
        from human_eval.assign import assign_comparison_batch

        uid = _insert_annotator(db, "alice")
        prompt_one_videos = _insert_comparison_prompt(db, "prompt-1")
        _insert_comparison_prompt(db, "prompt-2")

        _insert_comparison_group(
            db,
            "existing-group",
            "prompt-1",
            '["fluid"]',
            prompt_one_videos,
            uid,
        )

        new_groups = assign_comparison_batch(
            db,
            uid,
            n_annotators=3,
            batch_size=1,
            ttl_hours=24,
            models=["model-a", "model-b"],
            models_per_group=2,
        )

        assert len(new_groups) == 1
        assert new_groups[0] != "existing-group"
        assert db.execute(
            "SELECT COUNT(*) AS c FROM assignments WHERE group_id = 'existing-group'",
        ).fetchone()["c"] == len(prompt_one_videos)
evals/human_eval/tests/test_db.py (ADDED)
@@ -0,0 +1,63 @@
import sqlite3
import pytest


def test_tables_exist(db):
    cursor = db.execute("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")
    tables = [row[0] for row in cursor.fetchall()]
    assert "annotators" in tables
    assert "videos" in tables
    assert "assignments" in tables
    assert "annotations" in tables
    assert "annotation_items" in tables


def test_annotator_unique_name(db):
    db.execute("INSERT INTO annotators (name) VALUES ('alice')")
    db.commit()
    with pytest.raises(sqlite3.IntegrityError):
        db.execute("INSERT INTO annotators (name) VALUES ('alice')")
        db.commit()


def test_assignment_unique_video_annotator(db):
    db.execute("INSERT INTO annotators (name) VALUES ('alice')")
    db.execute(
        "INSERT INTO videos (filename, dataset, prompt, physical_laws, import_hash) "
        "VALUES ('v.mp4', 'ds', 'prompt', '[\"fluid\"]', 'abc')"
    )
    db.commit()
    db.execute(
        "INSERT INTO assignments (video_id, annotator_id, status, expires_at) "
        "VALUES (1, 1, 'assigned', datetime('now', '+1 hour'))"
    )
    db.commit()
    with pytest.raises(sqlite3.IntegrityError):
        db.execute(
            "INSERT INTO assignments (video_id, annotator_id, status, expires_at) "
            "VALUES (1, 1, 'assigned', datetime('now', '+1 hour'))"
        )
        db.commit()


def test_annotation_unique_assignment(db):
    db.execute("INSERT INTO annotators (name) VALUES ('alice')")
    db.execute(
        "INSERT INTO videos (filename, dataset, prompt, physical_laws, import_hash) "
        "VALUES ('v.mp4', 'ds', 'prompt', '[\"fluid\"]', 'abc')"
    )
    db.execute(
        "INSERT INTO assignments (video_id, annotator_id, status, expires_at) "
        "VALUES (1, 1, 'assigned', datetime('now', '+1 hour'))"
    )
    db.execute(
        "INSERT INTO annotations (assignment_id, scores_json) VALUES (1, '{}')"
    )
    db.commit()
    with pytest.raises(sqlite3.IntegrityError):
        db.execute(
            "INSERT INTO annotations (assignment_id, scores_json) VALUES (1, '{}')"
        )
        db.commit()
evals/human_eval/tests/test_import.py (ADDED)
@@ -0,0 +1,56 @@
import pytest


def _make_eval_json(results, judge="gemini"):
    return {
        "schema_version": "v2.0",
        "judge": judge,
        "results": results,
    }


def _make_result(video, domain, prompt, scores=None):
    base = {"video": video, "domain": domain, "prompt": prompt}
    if scores:
        base.update(scores)
    return base


class TestImportHash:
    def test_deterministic(self, tmp_path):
        from human_eval.import_videos import compute_import_hash
        (tmp_path / "ds1").mkdir()
        (tmp_path / "ds1" / "a.mp4").touch()
        (tmp_path / "ds1" / "b.mp4").touch()
        h1 = compute_import_hash(tmp_path)
        h2 = compute_import_hash(tmp_path)
        assert h1 == h2

    def test_changes_with_files(self, tmp_path):
        from human_eval.import_videos import compute_import_hash
        (tmp_path / "ds1").mkdir()
        (tmp_path / "ds1" / "a.mp4").touch()
        h1 = compute_import_hash(tmp_path)
        (tmp_path / "ds1" / "b.mp4").touch()
        h2 = compute_import_hash(tmp_path)
        assert h1 != h2


class TestDifficultyScore:
    def test_computes_mean_abs_diff(self):
        from human_eval.import_videos import compute_difficulty_score
        gemini = {"SA": 4, "PTV": 3, "persistence": 4}
        qwen = {"SA": 2, "PTV": 1, "persistence": 2}
        score = compute_difficulty_score(gemini, qwen)
        assert abs(score - 6 / 3) < 1e-6

    def test_returns_none_when_missing_keys(self):
        from human_eval.import_videos import compute_difficulty_score
        assert compute_difficulty_score({"SA": 1}, {}) is None


class TestImportVideos:
    def test_importer_disabled_without_prompt_selection_json(self, db, tmp_path):
        from human_eval.import_videos import import_videos
        with pytest.raises(RuntimeError, match="prompt-selection JSON"):
            import_videos(db, tmp_path / "videos")
evals/human_eval/tests/test_routes.py (ADDED)
@@ -0,0 +1,149 @@
import json
import pytest

from human_eval.config import STATUS_COMPLETED


def _setup_user_and_video(app_db):
    app_db.execute("INSERT INTO annotators (name) VALUES ('alice')")
    app_db.execute(
        "INSERT INTO videos (filename, dataset, prompt, physical_laws, import_hash) "
        """VALUES ('v.mp4', 'ds', 'test prompt', '["flow_dynamics", "boundary_interaction", "fluid_continuity"]', 'hash')"""
    )
    app_db.commit()
    return 1, 1


class TestLogin:
    def test_get_login_page(self, client):
        resp = client.get("/")
        assert resp.status_code == 200

    def test_login_creates_annotator(self, client, app_db):
        resp = client.post("/login", data={"username": "alice"}, follow_redirects=False)
        assert resp.status_code == 302
        row = app_db.execute("SELECT * FROM annotators WHERE name='alice'").fetchone()
        assert row is not None

    def test_login_empty_name_rejected(self, client):
        resp = client.post("/login", data={"username": ""}, follow_redirects=True)
        assert resp.status_code == 200


def _setup_comparison_group(app_db, video_id, annotator_id, group_id="test-group"):
    """Create a comparison group and a grouped assignment."""
    app_db.execute(
        "INSERT OR IGNORE INTO comparison_groups (id, prompt, physical_laws) "
        """VALUES (?, 'test prompt', '["flow_dynamics"]')""",
        (group_id,),
    )
    app_db.execute(
        "INSERT INTO assignments (video_id, annotator_id, status, expires_at, group_id) "
        "VALUES (?, ?, 'assigned', datetime('now', '+24 hours'), ?)",
        (video_id, annotator_id, group_id),
    )
    app_db.commit()


class TestTaskList:
    def test_requires_login(self, client):
        resp = client.get("/tasks")
        assert resp.status_code == 302

    def test_shows_assignments(self, client, app_db):
        _setup_user_and_video(app_db)
        with client.session_transaction() as sess:
            sess["annotator_id"] = 1
            sess["annotator_name"] = "alice"
        _setup_comparison_group(app_db, 1, 1)
        resp = client.get("/tasks", follow_redirects=True)
        assert resp.status_code == 200


class TestStats:
    def test_stats_returns_json(self, client, app_db):
        with client.session_transaction() as sess:
            sess["annotator_id"] = 1
            sess["annotator_name"] = "alice"
        resp = client.get("/api/stats")
        assert resp.status_code == 200
        data = resp.get_json()
        assert "total_videos" in data
        assert "coverage" in data
        assert "per_annotator" in data


class TestEndToEnd:
    def test_full_flow_comparison(self, client, app_db):
        """Login → create comparison group → rate group → verify annotations."""
        # Insert two test videos (same prompt, different models)
        app_db.execute(
            "INSERT INTO videos (filename, dataset, prompt, physical_laws, import_hash) "
            """VALUES ('v1.mp4', 'model-a-wmb', 'water flows', '["flow_dynamics", "fluid_continuity"]', 'hash')"""
        )
        app_db.execute(
            "INSERT INTO videos (filename, dataset, prompt, physical_laws, import_hash) "
            """VALUES ('v1.mp4', 'model-b-wmb', 'water flows', '["flow_dynamics", "fluid_continuity"]', 'hash')"""
        )
        app_db.commit()

        # Login
        resp = client.post("/login", data={"username": "tester"}, follow_redirects=True)
        assert resp.status_code == 200

        # Manually create a comparison group (auto-assign won't match test models)
        group_id = "test-e2e-group"
        app_db.execute(
            "INSERT INTO comparison_groups (id, prompt, physical_laws) VALUES (?, ?, ?)",
            (group_id, "water flows", '["flow_dynamics", "fluid_continuity"]'),
        )
        annotator_id = app_db.execute("SELECT id FROM annotators WHERE name='tester'").fetchone()["id"]
        for vid in [1, 2]:
            app_db.execute(
                "INSERT INTO assignments (video_id, annotator_id, status, expires_at, group_id) "
                "VALUES (?, ?, 'assigned', datetime('now', '+24 hours'), ?)",
                (vid, annotator_id, group_id),
            )
        app_db.commit()

        # Get rate group page
        resp = client.get(f"/rate_group/{group_id}")
        assert resp.status_code == 200

        # Submit scores for both videos
        assignments = app_db.execute(
            "SELECT id FROM assignments WHERE group_id=? ORDER BY id",
            (group_id,),
        ).fetchall()

        scores = {}
        for a in assignments:
            aid = a["id"]
            scores[f"v{aid}_SA"] = "4"
            scores[f"v{aid}_PTV"] = "4"
            scores[f"v{aid}_persistence"] = "3"
            scores[f"v{aid}_flow_dynamics"] = "4"
            scores[f"v{aid}_fluid_continuity"] = "2"

        scores["action"] = "next"
        resp = client.post(f"/rate_group/{group_id}", data=scores, follow_redirects=False)
        assert resp.status_code == 200
        resp_data = resp.get_json()
        assert resp_data["ok"] is True

        # Verify annotations created for both
        for a in assignments:
            ann = app_db.execute(
                "SELECT * FROM annotations WHERE assignment_id=?", (a["id"],)
            ).fetchone()
            assert ann is not None
            parsed = json.loads(ann["scores_json"])
            assert parsed["general"]["SA"] == 4
            assert parsed["physical"]["flow_dynamics"] == 4

        # Verify all assignments marked completed
        statuses = app_db.execute(
            "SELECT status FROM assignments WHERE group_id=?", (group_id,)
        ).fetchall()
        assert all(r["status"] == STATUS_COMPLETED for r in statuses)
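The end-to-end test above posts a flat form where each score field is named `v{assignment_id}_{dimension}`. A minimal sketch of that field-naming convention; the helper name `build_score_form` is hypothetical and exists only to illustrate the payload shape the `/rate_group` handler consumes:

```python
def build_score_form(assignment_ids, general, physical):
    """Package scores into the flat 'v{aid}_{key}' fields posted to /rate_group."""
    data = {}
    for aid in assignment_ids:
        # One field per (assignment, dimension) pair; values are form strings.
        for key, val in {**general, **physical}.items():
            data[f"v{aid}_{key}"] = str(val)
    data["action"] = "next"  # tells the handler to advance to the next group
    return data

form = build_score_form(
    [1, 2],
    {"SA": 4, "PTV": 4, "persistence": 3},
    {"flow_dynamics": 4, "fluid_continuity": 2},
)
print(sorted(form))
```

With two assignments and five dimensions each, the payload carries ten score fields plus the `action` key, matching the dictionary the test builds by hand.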
evals/physics_criteria.py
ADDED
@@ -0,0 +1,240 @@
"""Shared physical-law criteria definitions — single source of truth.

Every module that needs law names, descriptions, or domain groupings should
import from here instead of defining its own copy.
"""

from __future__ import annotations

from dataclasses import dataclass, field

# ──────────────────────────────────────────────────────────────────────────────
# Full bilingual criteria (Chinese + English) — used by VLM evaluation prompts
# ──────────────────────────────────────────────────────────────────────────────

CRITERIA_EN: dict[str, str] = {
    "gravity": "Do unsupported objects fall downward? Do thrown objects follow a curved trajectory? Does poured liquid fall with gravity?",
    "inertia": "Do stationary objects remain still unless acted upon? Do moving objects maintain their motion unless stopped by friction, collision, or an obstacle?",
    "momentum": "After collision, push, or pull, is the direction of motion reasonable? Ignore speed magnitude.",
    "impenetrability": "Do objects maintain impenetrability — no passing through each other?",
    "collision": "After impact, is there reasonable bounce/shatter/deformation? Does response match impact force?",
    "material": "Does each material respond according to its properties? (glass shatters, rubber bounces, metal is rigid, cloth deforms softly, etc.)",
    "buoyancy": "Do dense objects sink? Do wood/plastic float?",
    "displacement": "When you add more liquid or put an object into it, does the liquid level rise in a realistic way? Does it overflow when full?",
    "flow_dynamics": "Does the liquid's overall motion behave realistically over time — flowing along surfaces, spreading, draining naturally?",
    "boundary_interaction": "When the liquid hits a boundary such as a rock face, container wall, or floor, does it respond realistically? Do local splash, rebound, or split patterns on impact look physically plausible?",
    "fluid_continuity": "Does the liquid avoid disappearing or appearing out of nowhere? Small splashes that briefly break apart are okay.",
    "reflection": "Does the reflection roughly match objects and colors in the scene, and avoid completely unrelated content?",
    "shadow": "Are shadow directions consistent with light source? Do shadows move with objects?",
}

CRITERIA_ZH: dict[str, str] = {
    "gravity": "无支撑的物体(固体或液体)是否向下掉落?抛出的物体轨迹是否呈弧线下落?倾倒的液体是否沿重力方向落下?",
    "inertia": "静止物体是否保持静止?运动物体是否在没有摩擦、碰撞或障碍的情况下保持运动?",
    "momentum": "碰撞、推或拉后,物体运动方向是否合理?忽略速度大小",
    "impenetrability": "物体之间是否保持不可穿透,没有穿模?",
    "collision": "物体撞击后是否有合理的反弹/碎裂/变形?撞击力度与响应程度是否匹配?",
    "material": "每种材料的响应是否符合其属性?玻璃碎裂、橡胶弹回、金属坚硬、布料柔软变形。",
    "buoyancy": "密度大于水的物体是否下沉?木头/塑料等是否浮于水面?",
    "displacement": "当添加液体或物体浸入时,液面是否合理地上升?容器满后是否溢出?",
    "flow_dynamics": "液体整体流动是否符合物理——沿表面流动、铺展、排出是否自然?",
    "boundary_interaction": "液体撞击边界(如岩壁、杯壁、地面)时,局部交互是否合理?撞击后的飞溅、反弹或分流形态是否物理上可信?",
    "fluid_continuity": "液体是否保持物理连续性与质量守恒——无不合理的断裂、消失或凭空生成?短暂飞溅分离可接受。",
    "reflection": "镜子/光滑表面中的反射内容是否大致匹配场景中的物体和颜色,没有完全不相关的内容。",
    "shadow": "阴影方向是否与光源位置一致?物体移动时阴影是否同步移动?",
}

# Backward-compatible alias — defaults to English
CRITERIA = CRITERIA_EN

# Re-export sub-question definitions (defined in evals/sub_questions.py)
from evals.sub_questions import SUB_QUESTIONS, SubQuestion  # noqa: F401

# ──────────────────────────────────────────────────────────────────────────────
# Human-eval Rating Instrument — structured criteria for human annotators
# ──────────────────────────────────────────────────────────────────────────────


@dataclass
class HumanCriterion:
    """A single criterion in the human-eval rating instrument."""

    key: str               # e.g. "gravity"
    code: str              # e.g. "A1"
    domain: str            # e.g. "Solid-Body Mechanics"
    name: str              # e.g. "Gravity"
    question: str          # Full question text
    note: str = ""         # Optional guidance note
    name_zh: str = ""      # Chinese name
    question_zh: str = ""  # Chinese question text
    note_zh: str = ""      # Chinese guidance note
    scale: list[int | str] = field(default_factory=lambda: [1, 2, 3, 4, 5, "N/A"])


HUMAN_CRITERIA: list[HumanCriterion] = [
    # ── G. General ───────────────────────────────────────────────────────────
    HumanCriterion(
        key="persistence", code="G1", domain="General", name="Object Persistence",
        question="To what extent do objects maintain consistent appearance, shape, and existence throughout the video?",
        name_zh="物体持久性",
        question_zh="物体在整个视频中在多大程度上保持了一致的外观、形状和存在?",
    ),
    HumanCriterion(
        key="PTV", code="G2", domain="General", name="Temporal Coherence",
        question="To what extent does the temporal sequence of physical events follow a physically plausible order?",
        note="Evaluate whether event ordering is logically consistent (e.g., objects break before scattering, liquid is poured before it flows).",
        name_zh="时序连贯性",
        question_zh="物理事件的时间顺序在多大程度上遵循了物理上合理的顺序?",
        note_zh="例如,物体先碎裂再散开,液体先被倒出再流动。",
    ),
    HumanCriterion(
        key="SA", code="G3", domain="General", name="Prompt Alignment",
        question="To what extent does the video content align with the text prompt?",
        note="Evaluate whether the depicted scene, objects, actions, and overall narrative correspond to the prompt description.",
        name_zh="提示词对齐",
        question_zh="视频内容在多大程度上与文本提示词一致?",
        note_zh="评估所描绘的场景、物体、动作和整体叙事是否与提示词描述相对应。",
    ),
    # ── A. Solid-Body Mechanics ──────────────────────────────────────────────
    HumanCriterion(
        key="gravity", code="A1", domain="Solid-Body Mechanics", name="Gravity",
        question="To what extent do objects and liquids move in accordance with gravity?",
        note="e.g., unsupported objects fall downward, projectiles follow curved trajectories, poured liquids descend.",
        name_zh="重力",
        question_zh="物体和液体在多大程度上按照重力运动?",
        note_zh="例如,无支撑的物体向下掉落,抛射物沿弧线轨迹运动,倾倒的液体向下流。",
    ),
    HumanCriterion(
        key="inertia", code="A2", domain="Solid-Body Mechanics", name="Inertia",
        question="To what extent do objects maintain their state of motion — remaining stationary or continuing to move — in the absence of a plausible external force?",
        note="Penalise significant spontaneous motion or unexplained stopping with no identifiable cause.",
        name_zh="惯性",
        question_zh="在没有合理外力的情况下,物体在多大程度上保持其运动状态——静止的保持静止,运动的继续运动?",
        note_zh="对无明确原因的显著自发运动或无故停止予以扣分。",
    ),
    HumanCriterion(
        key="momentum", code="A3", domain="Solid-Body Mechanics", name="Momentum",
        question="To what extent are post-collision directions of motion physically plausible?",
        note="Evaluate direction only; ignore speed magnitude.",
        name_zh="动量",
        question_zh="碰撞后物体的运动方向在多大程度上是物理合理的?",
        note_zh="仅评估方向;忽略速度大小。",
    ),
    HumanCriterion(
        key="impenetrability", code="A4", domain="Solid-Body Mechanics", name="Impenetrability",
        question="To what extent do solid objects maintain impenetrability, avoiding passage through one another?",
        name_zh="不可穿透性",
        question_zh="固体物体在多大程度上保持不可穿透,避免相互穿过?",
    ),
    HumanCriterion(
        key="collision", code="A5", domain="Solid-Body Mechanics", name="Collision Response",
        question="To what extent do objects exhibit physically plausible responses to impact, proportional to the applied force?",
        note="e.g., bouncing, shattering, deforming.",
        name_zh="碰撞响应",
        question_zh="物体在多大程度上表现出与施加力成比例的物理合理碰撞响应?",
        note_zh="例如,弹跳、碎裂、变形。",
    ),
    HumanCriterion(
        key="material", code="A6", domain="Solid-Body Mechanics", name="Material Properties",
        question="To what extent do objects respond in ways consistent with their apparent material properties?",
        note="e.g., glass shatters, rubber bounces, metal dents, fabric drapes.",
        name_zh="材料属性",
        question_zh="物体在多大程度上以符合其表观材料属性的方式做出响应?",
        note_zh="例如,玻璃碎裂、橡胶弹跳、金属凹陷、布料垂坠。",
    ),
    # ── B. Fluid Dynamics ────────────────────────────────────────────────────
    HumanCriterion(
        key="buoyancy", code="B1", domain="Fluid Dynamics", name="Buoyancy",
        question="To what extent do objects sink or float in a manner consistent with their apparent density?",
        name_zh="浮力",
        question_zh="物体在多大程度上以符合其表观密度的方式下沉或漂浮?",
    ),
    HumanCriterion(
        key="displacement", code="B2", domain="Fluid Dynamics", name="Displacement",
        question="When volume is added or an object is submerged, to what extent does the liquid level rise in a physically plausible manner?",
        note="e.g., level rises proportionally; container overflows when full.",
        name_zh="液面变化",
        question_zh="当添加液体或物体浸入时,液面是否合理地上升?",
        note_zh="例如,液面按比例上升;容器满时溢出。",
    ),
    HumanCriterion(
        key="flow_dynamics", code="B3", domain="Fluid Dynamics", name="Flow Dynamics",
        question="To what extent does liquid flow along surfaces, spread, and drain in a physically plausible manner?",
        note="Ignore brief splash details. Penalise unphysical acceleration, sudden stops, or unsupported uphill flow.",
        name_zh="流体动力学",
        question_zh="液体在多大程度上以物理合理的方式沿表面流动、铺展和排出?",
        note_zh="忽略短暂飞溅细节。对非物理的加速、突然停止或无支撑的上坡流动予以扣分。",
    ),
    HumanCriterion(
        key="boundary_interaction", code="B4", domain="Fluid Dynamics", name="Boundary Interaction",
        question="To what extent does liquid splash, rebound, or split in a physically plausible manner when hitting a boundary?",
        note="e.g., container walls, rock faces, floors.",
        name_zh="边界交互",
        question_zh="液体撞击边界时,其飞溅、反弹或分流在多大程度上是物理合理的?",
        note_zh="例如,容器壁、岩面、地面。",
    ),
    HumanCriterion(
        key="fluid_continuity", code="B5", domain="Fluid Dynamics", name="Fluid Continuity",
        question="Does the liquid avoid disappearing or appearing out of nowhere?",
        note="Small splashes that briefly break apart are okay.",
        name_zh="流体连续性",
        question_zh="液体在多大程度上避免了非物理的碎裂、消失或凭空生成?",
        note_zh="短暂的、物理合理的飞溅分离可以接受。",
    ),
    # ── C. Optical Physics ───────────────────────────────────────────────────
    HumanCriterion(
        key="reflection", code="C1", domain="Optical Physics", name="Reflection",
        question="To what extent do reflective surfaces display content that is physically plausible given the surrounding scene?",
        note="Evaluate whether reflected objects and colours broadly correspond to the scene. Do not require precise geometric mirroring.",
        name_zh="反射",
        question_zh="反射表面在多大程度上显示了与周围场景物理合理的内容?",
        note_zh="评估反射的物体和颜色是否大致对应场景。不要求精确的几何镜像。",
    ),
    HumanCriterion(
        key="shadow", code="C2", domain="Optical Physics", name="Shadow",
        question="To what extent are shadows physically plausible in their direction, placement, and movement relative to light sources and objects?",
        name_zh="阴影",
        question_zh="阴影的方向、位置和运动在多大程度上相对于光源和物体是物理合理的?",
    ),
]

#: Quick lookup by criterion key for human-eval criteria.
HUMAN_CRITERIA_BY_KEY: dict[str, HumanCriterion] = {c.key: c for c in HUMAN_CRITERIA}


def get_criteria_text(law_name: str) -> str:
    """Build full criteria text for a law, preferring HumanCriterion when available."""
    criterion = HUMAN_CRITERIA_BY_KEY.get(law_name)
    if criterion:
        text = criterion.question
        if criterion.note:
            text += f" ({criterion.note})"
        return text
    return CRITERIA.get(law_name, "")


#: Domain grouping for human-eval criteria.
HUMAN_DOMAINS: dict[str, list[HumanCriterion]] = {}
for _c in HUMAN_CRITERIA:
    HUMAN_DOMAINS.setdefault(_c.domain, []).append(_c)

#: Ordered list of all 13 physical laws (stable order for tabular reporting).
ALL_LAWS: list[str] = list(CRITERIA_EN.keys())

# ──────────────────────────────────────────────────────────────────────────────
# Domain → criteria-key groupings
# ──────────────────────────────────────────────────────────────────────────────


def _subs(keys: list[str]) -> list[tuple[str, str]]:
    """Return a list of (key, description) tuples for the given keys."""
    return [(k, CRITERIA[k]) for k in keys]


#: The "fluid" domain covers liquid behaviour only (not smoke, steam, or other gases).
#: Maps each physical domain to its list of (law_key, description) pairs.
DOMAIN_SUBSCORES: dict[str, list[tuple[str, str]]] = {
    "mechanics": _subs(["gravity", "inertia", "momentum", "impenetrability", "collision", "material"]),
    "collision": _subs(["collision", "impenetrability", "momentum", "material"]),
    "gravity": _subs(["gravity", "inertia"]),
    "buoyancy": _subs(["buoyancy", "displacement"]),
    "fluid": _subs(["flow_dynamics", "boundary_interaction", "fluid_continuity"]),
    "lighting": _subs(["reflection", "shadow"]),
}
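A minimal, self-contained sketch of how `get_criteria_text` composes its output; the two-entry registry and `Criterion` class below are illustrative stand-ins for `HUMAN_CRITERIA_BY_KEY` and `HumanCriterion`, not the module itself:

```python
from dataclasses import dataclass


@dataclass
class Criterion:
    key: str
    question: str
    note: str = ""


# Hypothetical mini-registry standing in for HUMAN_CRITERIA_BY_KEY.
BY_KEY = {
    "momentum": Criterion(
        "momentum",
        "To what extent are post-collision directions of motion physically plausible?",
        "Evaluate direction only; ignore speed magnitude.",
    ),
}
# Flat fallback dict, mirroring the CRITERIA alias.
FLAT = {"gravity": "Do unsupported objects fall downward?"}


def get_criteria_text(law: str) -> str:
    """Prefer the structured criterion (note appended in parentheses), else fall back."""
    c = BY_KEY.get(law)
    if c:
        return f"{c.question} ({c.note})" if c.note else c.question
    return FLAT.get(law, "")


print(get_criteria_text("momentum"))
# → To what extent are post-collision directions of motion physically plausible? (Evaluate direction only; ignore speed magnitude.)
```

An unknown law key resolves to an empty string rather than raising, which keeps callers that iterate over arbitrary law lists simple.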
evals/prompts/__init__.py
ADDED
@@ -0,0 +1,210 @@
"""Prompt templates and configuration for VLM evaluation and judge training.

Supports loading prompt configs from YAML files for A/B comparison.
Default config loaded at import for backward compatibility.

Usage:
    from evals.prompts import PromptConfig
    cfg = PromptConfig.load("subq+human.yaml")
    prompt = cfg.build_eval_prompt(caption, "SA")
"""

from __future__ import annotations

from dataclasses import dataclass
from pathlib import Path
from typing import Any

import yaml

from evals.physics_criteria import SUB_QUESTIONS, SubQuestion

_PROMPTS_DIR = Path(__file__).parent


# ──────────────────────────────────────────────────────────────────────────────
# PromptConfig
# ──────────────────────────────────────────────────────────────────────────────


@dataclass
class PromptConfig:
    """A prompt template set loaded from YAML."""

    name: str
    scheme: str
    system_prompt: str
    general_keys: list[str]
    eval_prompts: dict[str, str]
    training_prompts: dict[str, str]
    physical_template: str
    physical_sub_questions: bool = False
    sub_questions: dict[str, str] | None = None

    @classmethod
    def load(cls, name: str) -> PromptConfig:
        """Load evals/prompts/{name}; name must be a YAML filename."""
        raw_path = Path(name)
        if raw_path.name != name or raw_path.suffix != ".yaml":
            raise ValueError(
                "prompt config must be a YAML filename under evals/prompts, "
                f"got {name!r}"
            )
        path = _PROMPTS_DIR / name
        with open(path) as f:
            data = yaml.safe_load(f)
        return cls._from_dict(data)

    @classmethod
    def _from_dict(cls, data: dict[str, Any]) -> PromptConfig:
        def _resolve(raw: str | dict) -> str:
            if isinstance(raw, str):
                return raw
            template = raw["template"]
            considerations = raw.get("considerations")
            if considerations and "{considerations}" in template:
                block = "\n".join(
                    f"{i}. {c}" for i, c in enumerate(considerations, 1)
                )
                template = template.replace("{considerations}", block)
            return template

        return cls(
            name=data.get("name", ""),
            scheme=data.get("scheme", "plain"),
            system_prompt=data.get(
                "system_prompt", "You are a strict video evaluation model."
            ),
            general_keys=data.get("general_keys", ["SA", "PTV", "persistence"]),
            eval_prompts={
                k: _resolve(v) for k, v in data.get("eval_prompts", {}).items()
            },
            training_prompts={
                k: _resolve(v) for k, v in data.get("training_prompts", {}).items()
            },
            physical_template=data.get("physical_template", data.get("physical_training_template", "")),
            physical_sub_questions=bool(data.get("physical_sub_questions", False)),
            sub_questions=data.get("sub_questions"),
        )

    @property
    def _answer_format(self) -> str | None:
        if self.sub_questions:
            return self.sub_questions.get("answer_format")
        return None

    def build_eval_prompt(
        self, prompt_text: str, metric: str, *, answers_block: str = "",
    ) -> str:
        questions_block, question_keys_str = build_general_questions_block(
            metric, self._answer_format,
        )
        return self.eval_prompts[metric].format(
            prompt=prompt_text,
            questions_block=questions_block,
            question_keys_str=question_keys_str,
            answers_block=answers_block,
        )

    def build_training_prompt(
        self, prompt_text: str, dim: str, *, answers_block: str = "",
    ) -> str:
        prompts = self.training_prompts or self.eval_prompts
        questions_block, question_keys_str = build_general_questions_block(
            dim, self._answer_format,
        )
        return prompts[dim].format(
            prompt=prompt_text,
            questions_block=questions_block,
            question_keys_str=question_keys_str,
            answers_block=answers_block,
        )

    def build_physical_prompt(
        self, prompt_text: str, law: str, criteria: str,
        *, answers_block: str = "",
    ) -> str:
        if self.physical_sub_questions:
            questions_block, question_keys_str = build_physical_questions_block(
                law, criteria, self._answer_format,
            )
        else:
            questions_block = ""
            question_keys_str = ""
        return self.physical_template.format(
            prompt=prompt_text,
            law=law,
            criteria=criteria,
            questions_block=questions_block,
            question_keys_str=question_keys_str,
            answers_block=answers_block,
        )


# ──────────────────────────────────────────────────────────────────────────────
# Scoring dimension keys
# ──────────────────────────────────────────────────────────────────────────────

GENERAL_KEYS = ["SA", "PTV", "persistence"]
GENERAL_DIMS = GENERAL_KEYS

GENERAL_SUB_QUESTIONS: dict[str, list[str]] = {
    "SA": [
        "Are the main objects in the caption present in the video?",
        "Are the key actions or interactions from the caption visible?",
        "Are important scene attributes and relationships preserved?",
        "Does the video avoid major contradictions to the caption?",
    ],
    "PTV": [
        "Do causes appear before their effects?",
        "Do physical events unfold in a plausible temporal order?",
        "Are motion transitions continuous rather than abrupt jumps or loops?",
        "Does the sequence avoid impossible reversals or repeated resets?",
    ],
    "persistence": [
        "Do objects maintain consistent existence throughout the video?",
        "Do objects keep a stable shape, size, color, and texture?",
        "Do objects avoid disappearing, appearing, or transforming unexpectedly?",
        "Do objects preserve identity through motion and brief occlusion?",
    ],
}


_ANSWER_FORMAT_SUFFIX = {
    "answer": "Answer each with yes/no/uncertain.",
}


def _format_questions_block(
    questions: list[str], answer_format: str | None = None,
) -> tuple[str, str]:
    keys: list[str] = []
    lines: list[str] = []
    for i, question in enumerate(questions, 1):
        key = f"q{i}"
        keys.append(key)
        lines.append(f"{key}: {question}")
    suffix = _ANSWER_FORMAT_SUFFIX.get(answer_format or "")
    if suffix:
        lines.append(suffix)
    return "\n".join(lines), ", ".join(keys)


def build_general_questions_block(
    dim: str, answer_format: str | None = None,
) -> tuple[str, str]:
    questions = GENERAL_SUB_QUESTIONS.get(dim)
    if not questions:
        questions = [f"Does the video satisfy the {dim} criterion?"]
    return _format_questions_block(questions, answer_format)


def build_physical_questions_block(
    law: str, criteria: str, answer_format: str | None = None,
) -> tuple[str, str]:
    sub_qs = SUB_QUESTIONS.get(law)
    if not sub_qs:
        sub_qs = [SubQuestion(f"{law}_q1", criteria, violation="no")]
    return _format_questions_block([sq.question for sq in sub_qs], answer_format)
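A minimal, self-contained sketch of the question-block formatting done by `_format_questions_block`; the standalone function name and sample questions below are illustrative, but the q1..qN numbering and the (block, keys) return shape mirror the module:

```python
def format_questions_block(questions, answer_format=None):
    """Number questions q1..qN; optionally append an answer-format instruction line."""
    suffixes = {"answer": "Answer each with yes/no/uncertain."}
    keys, lines = [], []
    for i, question in enumerate(questions, 1):
        key = f"q{i}"
        keys.append(key)
        lines.append(f"{key}: {question}")
    # Unknown or missing answer_format simply adds no suffix line.
    suffix = suffixes.get(answer_format or "")
    if suffix:
        lines.append(suffix)
    return "\n".join(lines), ", ".join(keys)


block, keys = format_questions_block(
    ["Do causes appear before their effects?",
     "Are motion transitions continuous rather than abrupt jumps or loops?"],
    answer_format="answer",
)
print(keys)   # → q1, q2
print(block)
```

The second return value (`"q1, q2"`) is what the templates interpolate as `{question_keys_str}`, so the model's JSON answer keys line up with the numbered questions.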
evals/prompts/cot-subq.yaml
ADDED
@@ -0,0 +1,128 @@
| 1 |
+
scheme: cot_subq
|
| 2 |
+
description: |-
|
| 3 |
+
Structured sub-question rationale prompts for Claude distillation.
|
| 4 |
+
Claude answers fixed sub-questions with visible evidence, then outputs a JSON score.
|
| 5 |
+
sub_questions:
|
| 6 |
+
source: static
|
| 7 |
+
answer_format: template
|
| 8 |
+
system_prompt: |-
|
| 9 |
+
You are a strict video evaluation model.
|
| 10 |
+
Base your judgment only on visible evidence in the video.
|
| 11 |
+
Provide a concise rationale, then output a JSON object with the score.
|
| 12 |
+
general_keys:
|
| 13 |
+
- SA
|
| 14 |
+
- PTV
|
| 15 |
+
- persistence
|
| 16 |
+
eval_prompts:
|
| 17 |
+
SA: |-
|
| 18 |
+
Evaluate Prompt Alignment (SA).
|
| 19 |
+
|
| 20 |
+
Caption:
|
| 21 |
+
"{prompt}"
|
| 22 |
+
|
| 23 |
+
The video was generated using a text+image-to-video (ti2v) model, conditioned on the first frame and the text prompt above.
|
| 24 |
+
|
| 25 |
+
For each sub-question, briefly state the observed evidence, then give exactly one answer label from: yes, partial, no, uncertain.
|
| 26 |
+
Use this format for each sub-question:
|
| 27 |
+
qN: <brief evidence>; answer=<label>
|
| 28 |
+
|
| 29 |
+
{questions_block}
|
| 30 |
+
|
| 31 |
+
Score 1-5:
|
| 32 |
+
5=fully aligned
|
| 33 |
+
4=mostly aligned with minor deviations
|
| 34 |
+
3=partially aligned with notable gaps
|
| 35 |
+
2=mostly misaligned
|
| 36 |
+
1=not aligned
|
| 37 |
+
|
| 38 |
+
Answer every sub-question in the format above, then justify the overall score.
|
| 39 |
+
Then output a JSON object with keys "reasoning" (string) and "SA" (integer 1-5).
|
| 40 |
+
Output JSON only.
|
| 41 |
+
|
| 42 |
+
Example:
|
| 43 |
+
{{"reasoning": "q1: water balloon, target, and thrower all present; answer=yes. q2: balloon is thrown and bursts on impact; answer=yes. q3: outdoor setting and distance preserved; answer=yes. q4: target behaves like an inflatable rather than cardboard; answer=partial. Overall the scene matches well with one material mismatch.", "SA": 4}}
|
| 44 |
+
PTV: |-
|
| 45 |
+
Evaluate Temporal Coherence (PTV).
|
| 46 |
+
|
| 47 |
+
Caption:
|
| 48 |
+
"{prompt}"
|
| 49 |
+
|
| 50 |
+
The video was generated using a text+image-to-video (ti2v) model, conditioned on the first frame and the text prompt above.
|
| 51 |
+
|
| 52 |
+
For each sub-question, briefly state the observed evidence, then give exactly one answer label from: yes, partial, no, uncertain.
|
| 53 |
+
Use this format for each sub-question:
|
| 54 |
+
qN: <brief evidence>; answer=<label>
|
| 55 |
+
|
| 56 |
+
{questions_block}
|
| 57 |
+
|
| 58 |
+
Score 1-5:
|
| 59 |
+
5=fully plausible event order
|
| 60 |
+
4=mostly plausible with minor timing issues
|
| 61 |
+
3=partially plausible
|
| 62 |
+
2=mostly implausible
|
| 63 |
+
1=completely implausible order
|
| 64 |
+
|
| 65 |
+
Answer every sub-question in the format above, then justify the overall score.
|
| 66 |
+
Then output a JSON object with keys "reasoning" (string) and "PTV" (integer 1-5).
|
| 67 |
+
Output JSON only.
|
| 68 |
+
|
| 69 |
+
Example:
|
| 70 |
+
{{"reasoning": "q1: bottle shatters before any object contacts it, effect precedes cause; answer=no. q2: rupture occurs without visible force, not a plausible physical event order; answer=no. q3: fragments appear instantly rather than progressing from impact point; answer=no. q4: no repeated resets, but the spontaneous break is an impossible state change; answer=partial. Temporal sequence is highly implausible.", "PTV": 1}}
|
| 71 |
+
persistence: |-
|
| 72 |
+
Evaluate Object Persistence.
|
| 73 |
+
|
| 74 |
+
Caption, for context only:
|
| 75 |
+
"{prompt}"
|
| 76 |
+
|
| 77 |
+
The video was generated using a text+image-to-video (ti2v) model, conditioned on the first frame and the text prompt above.
|
| 78 |
+
|
| 79 |
+
For each sub-question, briefly state the observed evidence, then give exactly one answer label from: yes, partial, no, uncertain.
|
| 80 |
+
Use this format for each sub-question:
|
| 81 |
+
qN: <brief evidence>; answer=<label>
|
| 82 |
+
|
| 83 |
+
{questions_block}
|
| 84 |
+
|
| 85 |
+
Score 1-5:
|
| 86 |
+
5=fully consistent
|
| 87 |
+
4=mostly consistent with minor flicker
|
| 88 |
+
3=noticeable issues
|
| 89 |
+
2=major inconsistencies
|
| 90 |
+
1=severe disappearance or identity changes
|
| 91 |
+
|
| 92 |
+
Answer every sub-question in the format above, then justify the overall score.
|
| 93 |
+
Then output a JSON object with keys "reasoning" (string) and "persistence" (integer 1-5).
|
| 94 |
+
Output JSON only.
|
| 95 |
+
|
| 96 |
+
Example:
|
| 97 |
+
{{"reasoning": "q1: tire and ground remain present throughout; answer=yes. q2: tire and ground keep stable color and texture, but bottle label text changes mid-video; answer=partial. q3: no objects disappear or appear unexpectedly; answer=yes. q4: tire identity stable through motion, but bottle label shifts; answer=partial. Minor but noticeable label inconsistency.", "persistence": 3}}
|
| 98 |
+
physical_sub_questions: true
|
| 99 |
+
physical_template: |-
|
| 100 |
+
Evaluate physical realism for one physical law: {law}.
|
| 101 |
+
|
| 102 |
+
Criterion:
|
| 103 |
+
{criteria}
|
| 104 |
+
|
| 105 |
+
Caption, for context only:
|
| 106 |
+
"{prompt}"
|
| 107 |
+
|
| 108 |
+
For each sub-question, briefly state the observed evidence, then give exactly one answer label from: yes, no, uncertain, na.
|
| 109 |
+
Use this format for each sub-question:
|
| 110 |
+
qN: <brief evidence>; answer=<label>
|
| 111 |
+
|
| 112 |
+
{questions_block}
|
| 113 |
+
|
| 114 |
+
Judge the video itself. Do not penalize prompt mismatch unless it affects whether this physical law can be evaluated.
|
| 115 |
+
|
| 116 |
+
Score 1-5:
|
| 117 |
+
5=clearly correct
|
| 118 |
+
4=mostly correct with minor issues
|
| 119 |
+
3=partially correct or ambiguous
|
| 120 |
+
2=mostly incorrect
|
| 121 |
+
1=severely incorrect
|
| 122 |
+
|
| 123 |
+
Answer every sub-question in the format above, then justify the overall score.
|
| 124 |
+
Then output a JSON object with keys "reasoning" (string) and "{law}" (integer 1-5).
|
| 125 |
+
Output JSON only.
|
| 126 |
+
|
| 127 |
+
Example:
|
| 128 |
+
{{"reasoning": "q1: baseball stays in place after clear bat contact, completely unaffected; answer=yes. q2: brown chunk appears on ball surface, response wildly disproportionate to impact; answer=yes. q3: bat morphs and clips through ball, but no shattering from light touch; answer=no. Severely incorrect physical behavior.", "{law}": 1}}
|
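The doubled braces in these templates (e.g. `{{"reasoning": ..., "SA": 4}}`) suggest the YAML prompts are rendered with Python `str.format`, where `{prompt}`, `{law}`, and `{questions_block}` are placeholders and `{{`/`}}` escape literal braces. A minimal sketch of how that escaping resolves (the template text here is illustrative, not from the repo):

```python
# {{ }} survive str.format as literal { }, so the JSON example in the
# rendered prompt keeps its real braces while {law} is substituted.
template = 'Score {law} 1-5.\nExample:\n{{"reasoning": "...", "{law}": 1}}'
rendered = template.format(law="gravity")
print(rendered.splitlines()[-1])  # {"reasoning": "...", "gravity": 1}
```

This is why every literal JSON brace in the prompt files above appears doubled: a single unmatched `{` would raise a `KeyError` or `ValueError` at render time.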
evals/prompts/cotnosubq.yaml
ADDED
@@ -0,0 +1,105 @@
scheme: cot
description: |-
  Free-form rationale prompts for Claude distillation.
  Claude outputs concise evidence-based reasoning plus a JSON score, used as training data for smaller VLM judges.
system_prompt: |-
  You are a strict video evaluation model.
  Base your judgment only on visible evidence in the video.
  Provide concise evidence-based reasoning, then output a JSON object with the score.
general_keys:
  - SA
  - PTV
  - persistence
eval_prompts:
  SA: |-
    Evaluate Prompt Alignment (SA).

    Caption:
    "{prompt}"

    The video was generated using a text+image-to-video (ti2v) model, conditioned on the first frame and the text prompt above.

    Score 1-5:
    5=fully aligned
    4=mostly aligned with minor deviations
    3=partially aligned with notable gaps
    2=mostly misaligned
    1=not aligned

    Write concise reasoning that cites the visible evidence for the score. Consider whether the main objects, actions, scene attributes, and relationships from the caption are present, and whether the video avoids major contradictions to the caption.

    Then output a JSON object with keys "reasoning" (string) and "SA" (integer 1-5).
    Output JSON only.

    Example:
    {{"reasoning": "The video shows the water balloon, target, thrower, and outdoor setup described in the caption. The throw and burst are visible, but the target behaves more like an inflatable surface than cardboard, creating a minor material mismatch. Overall the scene follows the requested event with one noticeable deviation.", "SA": 4}}
  PTV: |-
    Evaluate Temporal Coherence (PTV).

    Caption:
    "{prompt}"

    The video was generated using a text+image-to-video (ti2v) model, conditioned on the first frame and the text prompt above.

    Score 1-5:
    5=fully plausible event order
    4=mostly plausible with minor timing issues
    3=partially plausible
    2=mostly implausible
    1=completely implausible order

    Write concise reasoning that cites the visible evidence for the score. Consider cause-before-effect order, plausible event timing, continuous motion transitions, and whether the sequence avoids impossible reversals, jumps, loops, or resets.

    Then output a JSON object with keys "reasoning" (string) and "PTV" (integer 1-5).
    Output JSON only.

    Example:
    {{"reasoning": "The bottle shatters before any visible object contacts it, so the effect appears before the cause. The rupture happens without a clear force and the fragments appear nearly instantly rather than progressing from an impact point. The temporal sequence is therefore highly implausible.", "PTV": 1}}
  persistence: |-
    Evaluate Object Persistence.

    Caption, for context only:
    "{prompt}"

    The video was generated using a text+image-to-video (ti2v) model, conditioned on the first frame and the text prompt above.

    Score 1-5:
    5=fully consistent
    4=mostly consistent with minor flicker
    3=noticeable issues
    2=major inconsistencies
    1=severe disappearance or identity changes

    Write concise reasoning that cites the visible evidence for the score. Consider whether objects remain present, keep stable shape, size, color, and texture, avoid unexpected appearances or transformations, and preserve identity through motion and brief occlusion.

    Then output a JSON object with keys "reasoning" (string) and "persistence" (integer 1-5).
    Output JSON only.

    Example:
    {{"reasoning": "The tire and ground remain visible throughout, and the main object identity is mostly stable during motion. However, the bottle label changes appearance mid-video and the held object shows small texture shifts. These are noticeable but not severe persistence errors.", "persistence": 3}}
physical_sub_questions: false
physical_template: |-
  Evaluate physical realism for one physical law: {law}.

  Criterion:
  {criteria}

  Caption, for context only:
  "{prompt}"

  Judge the video itself. Do not penalize prompt mismatch unless it affects whether this physical law can be evaluated.

  Score 1-5:
  5=clearly correct
  4=mostly correct with minor issues
  3=partially correct or ambiguous
  2=mostly incorrect
  1=severely incorrect

  Write concise reasoning that cites the visible evidence for the score. Focus on whether the relevant physical law is followed in the visible motion, interactions, object boundaries, material behavior, and state changes.

  Then output a JSON object with keys "reasoning" (string) and "{law}" (integer 1-5).
  Output JSON only.

  Example:
  {{"reasoning": "The baseball receives clear bat contact but barely responds, so the collision response does not match the visible impact force. A brown deformation appears on the ball surface in a way that is disproportionate and inconsistent with the contact. The physical behavior is severely incorrect for this law.", "{law}": 1}}
evals/prompts/dashboard.py
ADDED
@@ -0,0 +1,212 @@
"""Simple dashboard to view prompt YAML files."""

import yaml
from pathlib import Path
from flask import Flask, render_template_string

_DIR = Path(__file__).resolve().parent

app = Flask(__name__)

TEMPLATE = r"""
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Prompt Viewer</title>
<style>
  * { box-sizing: border-box; margin: 0; padding: 0; }
  body { font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
         background: #f5f5f5; color: #333; }
  .header { background: #1a1a2e; color: #fff; padding: 24px 32px; }
  .header h1 { font-size: 1.5rem; font-weight: 600; }
  .header p { color: #aaa; margin-top: 4px; font-size: 0.9rem; }
  .container { max-width: 1200px; margin: 0 auto; padding: 24px; }
  .tabs { display: flex; gap: 8px; flex-wrap: wrap; margin-bottom: 24px; }
  .tab { padding: 8px 20px; border: 1px solid #ddd; border-radius: 6px;
         background: #fff; cursor: pointer; font-size: 0.9rem; transition: all 0.15s; }
  .tab:hover { border-color: #667; }
  .tab.active { background: #1a1a2e; color: #fff; border-color: #1a1a2e; }
  .yaml-card { display: none; }
  .yaml-card.active { display: block; }
  .meta { display: flex; gap: 16px; margin-bottom: 20px; flex-wrap: wrap; }
  .meta-item { background: #fff; border: 1px solid #e0e0e0; border-radius: 8px;
               padding: 12px 20px; }
  .meta-label { font-size: 0.75rem; text-transform: uppercase; color: #888; letter-spacing: 0.05em; }
  .meta-value { font-size: 1rem; font-weight: 600; margin-top: 2px; }
  .section { background: #fff; border: 1px solid #e0e0e0; border-radius: 8px;
             margin-bottom: 16px; overflow: hidden; }
  .section-header { padding: 12px 20px; background: #fafafa; border-bottom: 1px solid #e0e0e0;
                    font-weight: 600; font-size: 0.95rem; cursor: pointer; user-select: none;
                    display: flex; justify-content: space-between; align-items: center; }
  .section-header:hover { background: #f0f0f0; }
  .section-header .arrow { transition: transform 0.2s; font-size: 0.8rem; color: #888; }
  .section-header.collapsed .arrow { transform: rotate(-90deg); }
  .section-body { padding: 0; }
  .section-header.collapsed + .section-body { display: none; }
  .prompt-row { border-bottom: 1px solid #f0f0f0; }
  .prompt-row:last-child { border-bottom: none; }
  .prompt-key { padding: 10px 20px; background: #f8f9fa; font-weight: 600;
                font-size: 0.85rem; color: #555; border-bottom: 1px solid #eee; }
  .prompt-text { padding: 16px 20px; white-space: pre-wrap; font-family: 'SF Mono', Monaco,
                 'Cascadia Code', monospace; font-size: 0.82rem; line-height: 1.6;
                 color: #2d2d2d; background: #fff; }
  .raw-block { padding: 16px 20px; white-space: pre-wrap; font-family: 'SF Mono', Monaco,
               monospace; font-size: 0.82rem; line-height: 1.5; background: #fafafa; }
</style>
</head>
<body>
<div class="header">
  <h1>Prompt YAML Viewer</h1>
  <p>evals/prompts/ — {{ files | length }} files</p>
</div>
<div class="container">
  <div class="tabs">
    {% for f in files %}
    <div class="tab {% if loop.first %}active{% endif %}" onclick="switchTab('{{ f.name }}', this)">
      {{ f.name }}
    </div>
    {% endfor %}
  </div>

  {% for f in files %}
  <div id="card-{{ f.name }}" class="yaml-card {% if loop.first %}active{% endif %}">
    <div class="meta">
      <div class="meta-item">
        <div class="meta-label">Scheme</div>
        <div class="meta-value">{{ f.data.get('scheme', '-') }}</div>
      </div>
      <div class="meta-item">
        <div class="meta-label">General Keys</div>
        <div class="meta-value">{{ f.data.get('general_keys', []) | join(', ') or '-' }}</div>
      </div>
      <div class="meta-item">
        <div class="meta-label">Physical Sub-Questions</div>
        <div class="meta-value">{{ f.data.get('physical_sub_questions', '-') }}</div>
      </div>
    </div>

    {% if f.data.get('description') %}
    <div class="section">
      <div class="section-header" onclick="toggleSection(this)">
        Description <span class="arrow">▼</span>
      </div>
      <div class="section-body">
        <div class="raw-block">{{ f.data['description'] }}</div>
      </div>
    </div>
    {% endif %}

    {% if f.data.get('system_prompt') %}
    <div class="section">
      <div class="section-header" onclick="toggleSection(this)">
        System Prompt <span class="arrow">▼</span>
      </div>
      <div class="section-body">
        <div class="prompt-text">{{ f.data['system_prompt'] }}</div>
      </div>
    </div>
    {% endif %}

    {% if f.data.get('eval_prompts') %}
    <div class="section">
      <div class="section-header" onclick="toggleSection(this)">
        Eval Prompts <span class="arrow">▼</span>
      </div>
      <div class="section-body">
        {% for key, val in f.data['eval_prompts'].items() %}
        <div class="prompt-row">
          <div class="prompt-key">{{ key }}</div>
          <div class="prompt-text">{{ val }}</div>
        </div>
        {% endfor %}
      </div>
    </div>
    {% endif %}

    {% if f.data.get('training_prompts') %}
    <div class="section">
      <div class="section-header" onclick="toggleSection(this)">
        Training Prompts <span class="arrow">▼</span>
      </div>
      <div class="section-body">
        {% for key, val in f.data['training_prompts'].items() %}
        <div class="prompt-row">
          <div class="prompt-key">{{ key }}</div>
          <div class="prompt-text">{{ val }}</div>
        </div>
        {% endfor %}
      </div>
    </div>
    {% endif %}

    {% if f.data.get('physical_template') %}
    <div class="section">
      <div class="section-header" onclick="toggleSection(this)">
        Physical Template <span class="arrow">▼</span>
      </div>
      <div class="section-body">
        <div class="prompt-text">{{ f.data['physical_template'] }}</div>
      </div>
    </div>
    {% endif %}

    {% if f.data.get('sub_questions') %}
    <div class="section">
      <div class="section-header" onclick="toggleSection(this)">
        Sub-Questions Config <span class="arrow">▼</span>
      </div>
      <div class="section-body">
        <div class="raw-block">{{ f.sub_questions_str }}</div>
      </div>
    </div>
    {% endif %}

    <div class="section">
      <div class="section-header collapsed" onclick="toggleSection(this)">
        Raw YAML <span class="arrow">▼</span>
      </div>
      <div class="section-body" style="display:none">
        <div class="raw-block">{{ f.raw }}</div>
      </div>
    </div>
  </div>
  {% endfor %}
</div>

<script>
function switchTab(name, el) {
  document.querySelectorAll('.tab').forEach(t => t.classList.remove('active'));
  document.querySelectorAll('.yaml-card').forEach(c => c.classList.remove('active'));
  el.classList.add('active');
  document.getElementById('card-' + name).classList.add('active');
}
function toggleSection(header) {
  header.classList.toggle('collapsed');
  const body = header.nextElementSibling;
  body.style.display = body.style.display === 'none' ? '' : 'none';
}
</script>
</body>
</html>
"""


def load_yamls():
    files = []
    for p in sorted(_DIR.glob("*.yaml")):
        raw = p.read_text()
        data = yaml.safe_load(raw) or {}
        sq = data.get("sub_questions")
        sq_str = yaml.dump(sq, default_flow_style=False) if sq else ""
        files.append({"name": p.name, "data": data, "raw": raw, "sub_questions_str": sq_str})
    return files


@app.route("/")
def index():
    return render_template_string(TEMPLATE, files=load_yamls())


if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5003, debug=True)
evals/prompts/default.yaml
ADDED
@@ -0,0 +1,124 @@
scheme: plain
description: Per-dimension evaluation prompts, 1-5 scale (v2)
system_prompt: You are a strict video evaluation model.
general_keys:
  - SA
  - PTV
  - persistence
eval_prompts:
  SA: |-
    You are evaluating a generated video on Prompt Alignment (SA).

    The text prompt used to generate this video is:
    "{prompt}"

    The video was generated using a text+image-to-video (ti2v) model, conditioned on the first frame and the text prompt above.

    Score SA (Prompt Alignment, 1-5):
    5=perfectly matches prompt, 4=clearly matches, 3=mostly matches, 2=clearly does not match well, 1=severely contradicts.
    Evaluate whether the depicted scene, objects, actions, and overall narrative correspond to the prompt description.

    Output ONLY a JSON object with exactly one key "SA" (integer 1-5). No other text.

    Example:
    {{"SA": 3}}
  PTV: |-
    You are evaluating a generated video on Temporal Coherence (PTV).

    The text prompt used to generate this video is:
    "{prompt}"

    The video was generated using a text+image-to-video (ti2v) model, conditioned on the first frame and the text prompt above.

    Score PTV (Temporal Coherence, 1-5):
    5=perfect temporal sequence, 4=correct order with minor timing issues, 3=mostly correct order, 2=significant misordering, 1=completely wrong temporal sequence.
    Evaluate whether event ordering is logically consistent (e.g., objects break before scattering, liquid is poured before it flows).

    Output ONLY a JSON object with exactly one key "PTV" (integer 1-5). No other text.

    Example:
    {{"PTV": 4}}
  persistence: |-
    You are evaluating a generated video on Object Persistence.

    The text prompt used to generate this video is:
    "{prompt}"

    The video was generated using a text+image-to-video (ti2v) model, conditioned on the first frame and the text prompt above.

    Score persistence (Object Persistence, 1-5):
    5=perfect object constancy, 4=near-perfect with trivial flicker, 3=minor inconsistencies, 2=noticeable persistence issues, 1=severe violations (objects appear/disappear/transform randomly).

    Output ONLY a JSON object with exactly one key "persistence" (integer 1-5). No other text.

    Example:
    {{"persistence": 4}}
training_prompts:
  SA: |-
    You are evaluating a generated video on Prompt Alignment (SA).

    Caption:
    "{prompt}"

    Evaluate whether the video content matches the caption, including scene, objects, actions, and relationships.

    Score 1-5:
    5=fully aligned
    4=mostly aligned with minor deviations
    3=partially aligned with notable gaps
    2=mostly misaligned
    1=not aligned

    Output ONLY a JSON object with exactly one key: SA.
  PTV: |-
    You are evaluating Temporal Coherence (PTV).

    Caption:
    "{prompt}"

    Evaluate whether the sequence of physical events follows a plausible temporal order.

    Score 1-5:
    5=fully plausible event order
    4=mostly plausible with minor issues
    3=partially plausible
    2=mostly implausible
    1=completely implausible order

    Output ONLY a JSON object with exactly one key: PTV.
  persistence: |-
    You are evaluating Object Persistence.

    Caption, for context only:
    "{prompt}"

    Evaluate whether objects maintain consistent existence, shape, and appearance throughout the video.

    Score 1-5:
    5=fully consistent
    4=mostly consistent with minor flicker
    3=noticeable issues
    2=major inconsistencies
    1=severe disappearance or identity changes

    Output ONLY a JSON object with exactly one key: persistence.
physical_sub_questions: false
physical_template: |-
  You are evaluating physical realism for one physical law: {law}.

  Criterion:
  {criteria}

  Caption, for context only:
  "{prompt}"
  {questions_block}
  Judge the video itself. Do not penalize prompt mismatch unless it affects whether this physical law can be evaluated.

  Score 1-5:
  5=clearly correct
  4=mostly correct with minor issues
  3=partially correct or ambiguous
  2=mostly incorrect
  1=severely incorrect

  Output ONLY a JSON object with exactly one key: {law}.
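The "Output ONLY a JSON object with exactly one key" contract in these prompts keeps the consumer side trivial: one `json.loads` plus a shape check. A minimal sketch of validating such a reply (the helper name is hypothetical, not from the repo):

```python
import json


def parse_score(raw: str, key: str) -> int:
    # Hypothetical consumer-side check for the JSON-only contract above:
    # the reply must contain exactly the expected key with a 1-5 integer.
    obj = json.loads(raw)
    if set(obj) != {key} or obj[key] not in range(1, 6):
        raise ValueError(f"malformed judge reply: {raw!r}")
    return obj[key]


print(parse_score('{"SA": 3}', "SA"))  # 3
```

Rejecting extra keys and out-of-range values up front makes retries cheap, which is presumably why the prompts insist on "No other text" rather than tolerating surrounding prose.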
evals/prompts/subq+answer.yaml
ADDED
@@ -0,0 +1,123 @@
scheme: subq_answer
description: |-
  JSON-only per-task prompts with sub-questions AND model (Qwen) answers.
  Input: sub-questions + Qwen's answers to them + the video. Output: JSON score only.
sub_questions:
  source: static
  answer_format: hint
system_prompt: You are a strict video evaluation model.
general_keys:
  - SA
  - PTV
  - persistence
eval_prompts:
  SA: |-
    Evaluate Prompt Alignment (SA).

    Caption:
    "{prompt}"

    The video was generated using a text+image-to-video (ti2v) model, conditioned on the first frame and the text prompt above.

    Sub-questions and reference answers from another model:
    {questions_block}

    Reference answers:
    {answers_block}

    Use the reference answers as additional context, but base your final judgment on the video itself.

    Score 1-5:
    5=fully aligned
    4=mostly aligned with minor deviations
    3=partially aligned with notable gaps
    2=mostly misaligned
    1=not aligned

    Then output ONLY a JSON object with exactly one key: SA.

    Example:
    {{"SA": 3}}
  PTV: |-
    Evaluate Temporal Coherence (PTV).

    Caption:
    "{prompt}"

    The video was generated using a text+image-to-video (ti2v) model, conditioned on the first frame and the text prompt above.

    Sub-questions and reference answers from another model:
    {questions_block}

    Reference answers:
    {answers_block}

    Use the reference answers as additional context, but base your final judgment on the video itself.

    Score 1-5:
    5=fully plausible event order
    4=mostly plausible with minor timing issues
    3=partially plausible
    2=mostly implausible
    1=completely implausible order

    Then output ONLY a JSON object with exactly one key: PTV.

    Example:
    {{"PTV": 4}}
  persistence: |-
    Evaluate Object Persistence.

    Caption, for context only:
    "{prompt}"

    The video was generated using a text+image-to-video (ti2v) model, conditioned on the first frame and the text prompt above.

    Sub-questions and reference answers from another model:
    {questions_block}

    Reference answers:
    {answers_block}

    Use the reference answers as additional context, but base your final judgment on the video itself.

    Score 1-5:
    5=fully consistent
    4=mostly consistent with minor flicker
    3=noticeable issues
    2=major inconsistencies
    1=severe disappearance or identity changes

    Then output ONLY a JSON object with exactly one key: persistence.

    Example:
    {{"persistence": 4}}
physical_sub_questions: true
physical_template: |-
  Evaluate physical realism for one physical law: {law}.
|
| 98 |
+
|
| 99 |
+
Criterion:
|
| 100 |
+
{criteria}
|
| 101 |
+
|
| 102 |
+
Caption, for context only:
|
| 103 |
+
"{prompt}"
|
| 104 |
+
|
| 105 |
+
Sub-questions and reference answers from another model:
|
| 106 |
+
{questions_block}
|
| 107 |
+
|
| 108 |
+
Reference answers:
|
| 109 |
+
{answers_block}
|
| 110 |
+
|
| 111 |
+
Use the reference answers as additional context, but judge the video itself. Do not penalize prompt mismatch unless it affects whether this physical law can be evaluated.
|
| 112 |
+
|
| 113 |
+
Score 1-5:
|
| 114 |
+
5=clearly correct
|
| 115 |
+
4=mostly correct with minor issues
|
| 116 |
+
3=partially correct or ambiguous
|
| 117 |
+
2=mostly incorrect
|
| 118 |
+
1=severely incorrect
|
| 119 |
+
|
| 120 |
+
Then output ONLY a JSON object with exactly one key: {law}.
|
| 121 |
+
|
| 122 |
+
Example:
|
| 123 |
+
{{"{law}": 3}}
|
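A minimal sketch of how these templates are presumably rendered. The doubled braces in the YAML (`{{"SA": 3}}`, `{{"{law}": 3}}`) suggest the pipeline fills placeholders such as `{law}` with Python's `str.format`, where literal braces must be escaped as `{{ }}`; the snippet below assumes that convention and excerpts a few lines of `physical_template` to show the effect.

```python
# Excerpt of the physical_template prompt, as a Python format string.
# Assumption: placeholders are substituted via str.format, so literal
# JSON braces are written doubled ({{ }}) in the template source.
physical_template = (
    "Evaluate physical realism for one physical law: {law}.\n"
    "\n"
    "Then output ONLY a JSON object with exactly one key: {law}.\n"
    "\n"
    "Example:\n"
    '{{"{law}": 3}}'
)

# Substituting a law name expands {law} and collapses {{ }} to literal braces.
rendered = physical_template.format(law="gravity")
print(rendered)
```

After rendering, the final example line becomes the literal JSON `{"gravity": 3}`, which is the exact shape the prompts ask the evaluation model to emit.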