wruisi committed
Commit 43f5d8f · 1 Parent(s): 4620b21

Update README

Files changed (2):
  1. README.md +28 -13
  2. example.py +0 -1
README.md CHANGED

@@ -51,7 +51,7 @@ The model was presented in the paper [A Very Big Video Reasoning Suite](https://
 
 | Model | Base Architecture | Other Remarks |
 |-------|-------------------|---------------|
-| [VBVR-Wan2.1](https://huggingface.co/Video-Reason/VBVR-Wan2.1) | Wan2.1-I2V-14B-720P | Diffusers format |
+| [**VBVR-Wan2.1**](https://huggingface.co/Video-Reason/VBVR-Wan2.1) | Wan2.1-I2V-14B-720P | Diffusers format |
 | [VBVR-Wan2.2](https://huggingface.co/Video-Reason/VBVR-Wan2.2) | Wan2.2-I2V-A14B | Diffusers format |
 | [VBVR-Wan2.1-diffsynth](https://huggingface.co/Video-Reason/VBVR-Wan2.1-diffsynth) | Wan2.1-I2V-14B-720P | DiffSynth LoRA format |
 | [VBVR-Wan2.2-diffsynth](https://huggingface.co/Video-Reason/VBVR-Wan2.2-diffsynth) | Wan2.2-I2V-A14B | DiffSynth LoRA format |
@@ -120,6 +120,11 @@ In this release, we present
 <tr style="background:#F2F0EF;font-weight:700;text-align:center;">
 <td colspan="14"><em>Proprietary Models</em></td>
 </tr>
+<tr>
+<td><u>Seedance 2.0</u></td>
+<td><u>0.544</u></td><td><strong>0.570</strong></td><td>0.593</td><td><u>0.498</u></td><td><strong>0.618</strong></td><td><u>0.514</u></td><td><strong>0.602</strong></td>
+<td><u>0.517</u></td><td><strong>0.643</strong></td><td>0.398</td><td><u>0.492</u></td><td>0.427</td><td><strong>0.556</strong></td>
+</tr>
 <tr>
 <td>Runway Gen-4 Turbo</td>
 <td>0.403</td><td>0.392</td><td>0.396</td><td>0.409</td><td>0.429</td><td>0.341</td><td>0.363</td>
@@ -127,34 +132,44 @@ In this release, we present
 </tr>
 <tr>
 <td><strong>Sora 2</strong></td>
-<td><strong>0.546</strong></td><td><strong>0.569</strong></td><td><u>0.602</u></td>
-<td><u>0.477</u></td><td><strong>0.581</strong></td><td><strong>0.572</strong></td>
-<td><strong>0.597</strong></td><td><strong>0.523</strong></td>
+<td><strong>0.546</strong></td><td><u>0.569</u></td><td><u>0.602</u></td>
+<td>0.477</td><td><u>0.581</u></td><td><strong>0.572</strong></td>
+<td><u>0.597</u></td><td><strong>0.523</strong></td>
 <td><u>0.546</u></td><td><strong>0.472</strong></td><td><strong>0.525</strong></td>
-<td><strong>0.462</strong></td><td><strong>0.546</strong></td>
+<td><strong>0.462</strong></td><td><u>0.546</u></td>
 </tr>
 <tr>
 <td>Kling 2.6</td>
-<td>0.369</td><td>0.408</td><td>0.465</td><td>0.323</td><td>0.375</td><td>0.347</td><td><u>0.519</u></td>
+<td>0.369</td><td>0.408</td><td>0.465</td><td>0.323</td><td>0.375</td><td>0.347</td><td>0.519</td>
 <td>0.330</td><td>0.528</td><td>0.135</td><td>0.272</td><td>0.356</td><td>0.359</td>
 </tr>
 <tr>
-<td><u>Veo 3.1</u></td>
-<td><u>0.480</u></td><td><u>0.531</u></td><td><strong>0.611</strong></td>
-<td><strong>0.503</strong></td><td><u>0.520</u></td><td><u>0.444</u></td>
-<td>0.510</td><td><u>0.429</u></td>
-<td><strong>0.577</strong></td><td>0.277</td><td><u>0.420</u></td>
-<td><u>0.441</u></td><td><u>0.404</u></td>
+<td>Veo 3.1</td>
+<td>0.480</td><td>0.531</td><td><strong>0.611</strong></td>
+<td><strong>0.503</strong></td><td>0.520</td><td>0.444</td>
+<td>0.510</td><td>0.429</td>
+<td><u>0.577</u></td><td>0.277</td><td>0.420</td>
+<td><u>0.441</u></td><td>0.404</td>
 </tr>
 <tr style="background:#F2F0EF;font-weight:700;text-align:center;">
 <td colspan="14"><em>Data Scaling Strong Baseline</em></td>
 </tr>
+<tr>
+<td><strong>VBVR-LTX2.3</strong></td>
+<td>0.516</td><td>0.580</td><td>0.608</td><td>0.631</td><td>0.529</td><td>0.454</td><td>0.680</td>
+<td>0.453</td><td>0.608</td><td>0.577</td><td><u>0.409</u></td><td>0.414</td><td><u>0.388</u></td>
+</tr>
+<tr>
+<td><strong>VBVR-Wan2.1</strong></td>
+<td><u>0.592</u></td><td><u>0.724</u></td><td><u>0.705</u></td><td><u>0.710</u></td><td><u>0.727</u></td><td><u>0.719</u></td><td><u>0.784</u></td>
+<td><u>0.461</u></td><td><u>0.674</u></td><td><strong>0.592</strong></td><td>0.387</td><td><u>0.461</u></td><td>0.387</td>
+</tr>
 <tr>
 <td><strong>VBVR-Wan2.2</strong></td>
 <td><strong>0.685</strong></td><td><strong>0.760</strong></td><td><strong>0.724</strong></td>
 <td><strong>0.750</strong></td><td><strong>0.782</strong></td><td><strong>0.745</strong></td>
 <td><strong>0.833</strong></td><td><strong>0.610</strong></td>
-<td><strong>0.768</strong></td><td><strong>0.572</strong></td><td><strong>0.547</strong></td>
+<td><strong>0.768</strong></td><td><u>0.572</u></td><td><strong>0.547</strong></td>
 <td><strong>0.618</strong></td><td><strong>0.615</strong></td>
 </tr>
 </tbody>
example.py CHANGED

@@ -3,7 +3,6 @@
 VBVR-Wan2.1 Image-to-Video Inference Example
 
 Generate a video from a reference image using the VBVR-Wan2.1 model.
-Wan2.1 uses a single transformer with a CLIPVisionModel image encoder.
 
 Usage:
     python example.py --model_path /path/to/VBVR-Wan2.1
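The docstring above only documents a `--model_path` flag. As a minimal sketch of the argument handling such a script might use, assuming nothing beyond that one flag is confirmed (the `--image` and `--num_frames` flags below are hypothetical additions for illustration, not taken from the repository):

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """Build the CLI parser for the inference script.

    Only --model_path appears in the example.py docstring; the other
    flags are assumed defaults for an image-to-video run.
    """
    parser = argparse.ArgumentParser(
        description="VBVR-Wan2.1 image-to-video inference"
    )
    parser.add_argument(
        "--model_path",
        required=True,
        help="Path to the VBVR-Wan2.1 checkpoint (Diffusers format)",
    )
    # Hypothetical flags, not confirmed by the source:
    parser.add_argument(
        "--image",
        default="input.png",
        help="Reference image to condition the video on (assumed flag)",
    )
    parser.add_argument(
        "--num_frames",
        type=int,
        default=81,
        help="Number of frames to generate (assumed flag)",
    )
    return parser


# Mirrors the documented invocation:
#   python example.py --model_path /path/to/VBVR-Wan2.1
args = build_parser().parse_args(["--model_path", "/path/to/VBVR-Wan2.1"])
```

The parsed `args.model_path` would then be passed to whatever pipeline loader the script uses for the Diffusers-format checkpoint.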