scheme: cot_subq
description: |-
  Structured sub-question rationale prompts for Claude distillation.
  Claude answers fixed sub-questions with visible evidence, then outputs a JSON score.
sub_questions:
  source: static
  answer_format: template
system_prompt: |-
  You are a strict video evaluation model.
  Base your judgment only on visible evidence in the video.
  Provide a concise rationale and the final score together in a single JSON object.
general_keys:
  - SA
  - PTV
  - persistence
eval_prompts:
  SA: |-
    Evaluate Prompt Alignment (SA).
    Caption:
    "{prompt}"
    The video was generated using a text+image-to-video (ti2v) model, conditioned on the first frame and the text prompt above.
    For each sub-question, briefly state the observed evidence, then give exactly one answer label from: yes, partial, no, uncertain.
    Use this format for each sub-question:
    qN: <brief evidence>; answer=<label>
    {questions_block}
    Score 1-5:
    5=fully aligned
    4=mostly aligned with minor deviations
    3=partially aligned with notable gaps
    2=mostly misaligned
    1=not aligned
    Answer every sub-question in the format above and justify the overall score, placing both inside the "reasoning" field.
    Then output a JSON object with keys "reasoning" (string) and "SA" (integer 1-5).
    Output JSON only.
    Example:
    {{"reasoning": "q1: water balloon, target, and thrower all present; answer=yes. q2: balloon is thrown and bursts on impact; answer=yes. q3: outdoor setting and distance preserved; answer=yes. q4: target behaves like an inflatable rather than cardboard; answer=partial. Overall the scene matches well with one material mismatch.", "SA": 4}}
  PTV: |-
    Evaluate Temporal Coherence (PTV).
    Caption:
    "{prompt}"
    The video was generated using a text+image-to-video (ti2v) model, conditioned on the first frame and the text prompt above.
    For each sub-question, briefly state the observed evidence, then give exactly one answer label from: yes, partial, no, uncertain.
    Use this format for each sub-question:
    qN: <brief evidence>; answer=<label>
    {questions_block}
    Score 1-5:
    5=fully plausible event order
    4=mostly plausible with minor timing issues
    3=partially plausible
    2=mostly implausible
    1=completely implausible order
    Answer every sub-question in the format above and justify the overall score, placing both inside the "reasoning" field.
    Then output a JSON object with keys "reasoning" (string) and "PTV" (integer 1-5).
    Output JSON only.
    Example:
    {{"reasoning": "q1: bottle shatters before any object contacts it, effect precedes cause; answer=no. q2: rupture occurs without visible force, not a plausible physical event order; answer=no. q3: fragments appear instantly rather than progressing from impact point; answer=no. q4: no repeated resets, but the spontaneous break is an impossible state change; answer=partial. Temporal sequence is highly implausible.", "PTV": 1}}
  persistence: |-
    Evaluate Object Persistence.
    Caption, for context only:
    "{prompt}"
    The video was generated using a text+image-to-video (ti2v) model, conditioned on the first frame and the text prompt above.
    For each sub-question, briefly state the observed evidence, then give exactly one answer label from: yes, partial, no, uncertain.
    Use this format for each sub-question:
    qN: <brief evidence>; answer=<label>
    {questions_block}
    Score 1-5:
    5=fully consistent
    4=mostly consistent with minor flicker
    3=noticeable issues
    2=major inconsistencies
    1=severe disappearance or identity changes
    Answer every sub-question in the format above and justify the overall score, placing both inside the "reasoning" field.
    Then output a JSON object with keys "reasoning" (string) and "persistence" (integer 1-5).
    Output JSON only.
    Example:
    {{"reasoning": "q1: tire and ground remain present throughout; answer=yes. q2: tire and ground keep stable color and texture, but bottle label text changes mid-video; answer=partial. q3: no objects disappear or appear unexpectedly; answer=yes. q4: tire identity stable through motion, but bottle label shifts; answer=partial. Minor but noticeable label inconsistency.", "persistence": 3}}
physical_sub_questions: true
physical_template: |-
  Evaluate physical realism for one physical law: {law}.
  Criterion:
  {criteria}
  Caption, for context only:
  "{prompt}"
  For each sub-question, briefly state the observed evidence, then give exactly one answer label from: yes, no, uncertain, na.
  Use this format for each sub-question:
  qN: <brief evidence>; answer=<label>
  {questions_block}
  Judge the video itself. Do not penalize prompt mismatch unless it affects whether this physical law can be evaluated.
  Score 1-5:
  5=clearly correct
  4=mostly correct with minor issues
  3=partially correct or ambiguous
  2=mostly incorrect
  1=severely incorrect
  Answer every sub-question in the format above and justify the overall score, placing both inside the "reasoning" field.
  Then output a JSON object with keys "reasoning" (string) and "{law}" (integer 1-5).
  Output JSON only.
  Example:
  {{"reasoning": "q1: baseball stays in place after clear bat contact, completely unaffected; answer=yes. q2: brown chunk appears on ball surface, response wildly disproportionate to impact; answer=yes. q3: bat morphs and clips through ball, but no shattering from light touch; answer=no. Severely incorrect physical behavior.", "{law}": 1}}
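The single-brace placeholders (`{prompt}`, `{questions_block}`, `{law}`, `{criteria}`) together with the doubled braces around the JSON examples suggest these templates are rendered with Python `str.format` substitution, where `{{` and `}}` escape to literal braces. A minimal rendering sketch under that assumption (the abridged template and the sub-question list below are invented for illustration, not taken from this config):

```python
# Sketch of rendering one template from this config, assuming Python
# str.format semantics: {prompt} and {questions_block} are substituted,
# while the doubled braces {{ }} in the JSON example escape to literal { }.
# The abridged template and sub-questions are illustrative placeholders.

sa_template = (
    'Evaluate Prompt Alignment (SA).\n'
    'Caption:\n'
    '"{prompt}"\n'
    '{questions_block}\n'
    'Example:\n'
    '{{"reasoning": "q1: ...; answer=yes.", "SA": 4}}'
)

# Hypothetical static sub-questions, as the sub_questions.source: static
# setting implies a fixed list supplied at render time.
questions = [
    "q1: Are all entities mentioned in the caption visible?",
    "q2: Does the described action occur?",
]

rendered = sa_template.format(
    prompt="a water balloon thrown at a target",
    questions_block="\n".join(questions),
)
print(rendered)
```

After formatting, the escaped example braces collapse to a literal JSON object, so the model sees valid example output rather than template syntax.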