scheme: subq_hint
description: |-
  JSON-only per-task prompts with observable sub-questions/checklists. The subq+human variant is in fact trained on human scores: the input is the sub-questions, the output is the human score.
sub_questions:
  source: static
  answer_format: hint
system_prompt: You are a strict video evaluation model.
general_keys:
- SA
- PTV
- persistence
eval_prompts:
  SA: |-
    Evaluate Prompt Alignment (SA).

    Caption:
    "{prompt}"

    The video was generated using a text+image-to-video (ti2v) model, conditioned on the first frame and the text prompt above.

    Sub-questions to consider silently before scoring:
    {questions_block}

    Score 1-5:
    5=fully aligned
    4=mostly aligned with minor deviations
    3=partially aligned with notable gaps
    2=mostly misaligned
    1=not aligned

    Then output ONLY a JSON object with exactly one key: SA.

    Example:
    {{"SA": 3}}
  PTV: |-
    Evaluate Temporal Coherence (PTV).

    Caption:
    "{prompt}"

    The video was generated using a text+image-to-video (ti2v) model, conditioned on the first frame and the text prompt above.

    Sub-questions to consider silently before scoring:
    {questions_block}

    Score 1-5:
    5=fully plausible event order
    4=mostly plausible with minor timing issues
    3=partially plausible
    2=mostly implausible
    1=completely implausible order

    Then output ONLY a JSON object with exactly one key: PTV.

    Example:
    {{"PTV": 4}}
  persistence: |-
    Evaluate Object Persistence.

    Caption, for context only:
    "{prompt}"

    The video was generated using a text+image-to-video (ti2v) model, conditioned on the first frame and the text prompt above.

    Sub-questions to consider silently before scoring:
    {questions_block}

    Score 1-5:
    5=fully consistent
    4=mostly consistent with minor flicker
    3=noticeable issues
    2=major inconsistencies
    1=severe disappearance or identity changes

    Then output ONLY a JSON object with exactly one key: persistence.

    Example:
    {{"persistence": 4}}
physical_sub_questions: true
physical_template: |-
  Evaluate physical realism for one physical law: {law}.

  Criterion:
  {criteria}

  Caption, for context only:
  "{prompt}"

  Sub-questions to consider silently before scoring:
  {questions_block}

  Judge the video itself. Do not penalize prompt mismatch unless it affects whether this physical law can be evaluated.

  Score 1-5:
  5=clearly correct
  4=mostly correct with minor issues
  3=partially correct or ambiguous
  2=mostly incorrect
  1=severely incorrect

  Then output ONLY a JSON object with exactly one key: {law}.

  Example:
  {{"{law}": 3}}