AEmotionStudio committed · verified · Commit 6791346 · 1 Parent(s): 99726f8

add: model card README (fp8 release notes, quantization details, load examples)

Files changed (1): README.md (+205 lines, new file)

---
license: apache-2.0
tags:
- text-to-video
- video-reshooting
- 4d
- vista4d
- wan2.1
- fp8
- quantized
base_model:
- Eyeline-Labs/Vista4D
- Wan-AI/Wan2.1-T2V-14B
---

# Vista4D — fp8 Release

A consumer-GPU-friendly mirror of the [Vista4D](https://huggingface.co/Eyeline-Labs/Vista4D) inference weights, pre-quantized to **fp8 (`float8_e4m3fn`)** with per-tensor symmetric scaling. Quantization drops the on-disk size from ~56 GiB (bf16) to ~17 GiB while staying numerically faithful enough for inference with the published Vista4D pipeline (CVPR 2026).

> **Vista4D** reshoots a video from a new camera trajectory using a finetuned [Wan 2.1-T2V-14B](https://huggingface.co/Wan-AI/Wan2.1-T2V-14B) DiT plus a 4D point-cloud rendering pass. See the [project page](https://eyeline-labs.github.io/Vista4D), [paper](https://arxiv.org/abs/2604.21915), and [upstream code](https://github.com/Eyeline-Labs/Vista4D) for the full picture.

---

## What's in this repo

| Path | Contents | Size |
|---|---|---|
| `384p49_step=30000-fp8/` | DiT for the 672×384, 49-frame checkpoint: fp8 sharded safetensors + `config.yaml` + index | 5.05 GiB |
| `720p49_step=3000-fp8/` | DiT for the 1280×720, 49-frame checkpoint (finetuned from 384p49), same layout | 5.05 GiB |
| `wan-encoders-fp8/` | Wan 2.1's UMT5-XXL text encoder (fp8) + Wan VAE (bf16) + tokenizer | 6.7 GiB |
| `384p49_step=30000/dit.safetensors` | The 384p49 DiT in **bf16 full precision** (legacy, kept for users who want the un-quantized reference) | 20.1 GiB |
| `384p49_step=30000/config.yaml` | Same config as the `-fp8` variant, kept alongside the bf16 file | < 1 KiB |

Everything lives at the top level of `https://huggingface.co/AEmotionStudio/Vista4D`.

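If you want to double-check the layout programmatically before downloading, `huggingface_hub` can list it (a quick sketch; the safetensors filter is just an example):

```python
from huggingface_hub import list_repo_files

# Enumerate everything in the repo and pick out the safetensors shards.
files = list_repo_files("AEmotionStudio/Vista4D")
print([f for f in files if f.endswith(".safetensors")])
```
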
---

## Quick download

```bash
# Just the fp8 release (Vista4D 384p + 720p + Wan encoders) — ~17 GiB total
hf download AEmotionStudio/Vista4D \
  --include "*-fp8/*" \
  --local-dir ./vista4d-fp8

# Single checkpoint only
hf download AEmotionStudio/Vista4D \
  --include "384p49_step=30000-fp8/*" \
  --local-dir ./vista4d-fp8
hf download AEmotionStudio/Vista4D \
  --include "wan-encoders-fp8/*" \
  --local-dir ./vista4d-fp8

# bf16 reference (only the 384p49 checkpoint is mirrored in bf16; 720p49 is fp8-only)
hf download AEmotionStudio/Vista4D \
  --include "384p49_step=30000/dit.safetensors" \
  --local-dir ./vista4d-fp8
```

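The same filtering works from Python via `huggingface_hub` (a minimal equivalent of the first command above):

```python
from huggingface_hub import snapshot_download

# Fetch only the fp8 folders, mirroring `--include "*-fp8/*"`.
snapshot_download(
    "AEmotionStudio/Vista4D",
    allow_patterns=["*-fp8/*"],
    local_dir="./vista4d-fp8",
)
```
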
You'll also want the upstream Vista4D inference scripts:

```bash
git clone https://github.com/Eyeline-Labs/Vista4D
```

---

## Quantization details

- **Dtype:** `torch.float8_e4m3fn` (max representable magnitude 448).
- **Scaling:** per-tensor symmetric. For each quantized weight `W`:
  - `scale = max(|W|).float() / 448.0`
  - `W_fp8 = (W / scale).clamp(-448, 448).to(float8_e4m3fn)`
  - The scale is saved alongside the weight as a sibling key, e.g. `blocks.0.self_attn.q.weight.scale_weight` (an fp32 scalar).
- **What was quantized:** only 2D `nn.Linear` weight tensors in the DiT — the QKV/output projections in self-/cross-attention and the FFN linears.
- **What was kept in source dtype:** patch/text/time embeddings, every `*_norm.weight`, modulation tensors, the output head, and all biases (1D tensors are never quantized, regardless of name).
- **Reload pattern:**
  ```python
  actual_weight = fp8_tensor.to(torch.bfloat16) * scale_tensor
  ```
- **Sharding:** ≤ 5 GiB per shard, with a standard `*.safetensors.index.json` mapping every key to a shard. Both Vista4D DiTs ended up as single shards: each `dit.pth` was already ~10 GiB in fp16, so the fp8 result lands at ~5 GiB and fits in one shard. The structure supports multi-shard models if you re-quantize larger weights.

This convention matches the [`fp8_linear`](https://github.com/kijai/ComfyUI-WanVideoWrapper/blob/main/fp8_optimization.py) path used by the WanVideoWrapper / kijai community releases and the `fp8_linear` matmul in [diffsynth's loader](https://github.com/Vchitect/diffsynth-engine) (see `diffsynth/core/vram/layers.py`). No torchao / GPTQ wrappers — plain safetensors with sibling scales.

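For concreteness, here's a minimal sketch of the quantize step those bullets describe (the helper name and the random stand-in weight are illustrative, not part of the release):

```python
import torch

FP8_MAX = 448.0  # largest magnitude representable in float8_e4m3fn

def quantize_per_tensor_fp8(w: torch.Tensor):
    # One fp32 scale per tensor, symmetric around zero.
    scale = w.abs().max().float() / FP8_MAX
    w_fp8 = (w.float() / scale).clamp(-FP8_MAX, FP8_MAX).to(torch.float8_e4m3fn)
    return w_fp8, scale

w = torch.randn(5120, 5120, dtype=torch.bfloat16)  # stand-in Linear weight
w_fp8, scale = quantize_per_tensor_fp8(w)

# Round-trip with the reload pattern above; the error should be small.
w_back = w_fp8.to(torch.bfloat16) * scale
print((w.float() - w_back).abs().max())
```
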
---

## Loading example (raw safetensors)

```python
import torch
from safetensors import safe_open

ckpt = "vista4d-fp8/384p49_step=30000-fp8/diffusion_pytorch_model-00001-of-00001.safetensors"

with safe_open(ckpt, framework="pt") as f:
    keys = list(f.keys())
    # Quantized weights have a ".scale_weight" sibling
    qk = "blocks.0.self_attn.q.weight"
    w_fp8 = f.get_tensor(qk)
    w_scale = f.get_tensor(qk + ".scale_weight")

print(w_fp8.dtype, w_fp8.shape, w_scale.dtype, w_scale.item())
# -> torch.float8_e4m3fn torch.Size([5120, 5120]) torch.float32 ~0.000XX

# Materialize as bf16 if you want a normal tensor:
w_real = w_fp8.to(torch.bfloat16) * w_scale
```

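Building on that, here's a sketch that expands a whole fp8 shard into an ordinary bf16 state dict by pairing each weight with its sibling scale (`dequantize_state_dict` is a hypothetical helper, not part of this repo):

```python
import torch
from safetensors.torch import load_file

def dequantize_state_dict(path: str) -> dict[str, torch.Tensor]:
    raw = load_file(path)
    out = {}
    for name, t in raw.items():
        if name.endswith(".scale_weight"):
            continue  # consumed below, together with its sibling weight
        scale = raw.get(name + ".scale_weight")
        # Quantized entries get dequantized; everything else passes through.
        out[name] = t.to(torch.bfloat16) * scale if scale is not None else t
    return out
```
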
---

## Loading example (diffsynth fp8 path)

The fp8 layout is compatible with [diffsynth-engine](https://github.com/Vchitect/diffsynth-engine)'s native fp8 loader. Set `preparing_dtype=torch.float8_e4m3fn` in the config and point the loader at the checkpoint directory:

```python
import torch
from diffsynth.core.loader.model import load_model

dit = load_model(
    "vista4d-fp8/384p49_step=30000-fp8/",  # dir containing the safetensors index
    preparing_dtype=torch.float8_e4m3fn,
)
```

The `convert_fp8_linear` helper in WanVideoWrapper (referenced above) is a near-identical drop-in if you're not using diffsynth.

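If you'd rather not depend on either project, the same convention can be honored with a hand-rolled module that dequantizes at each forward. A minimal sketch (the `Fp8Linear` name is illustrative; real fp8 paths fuse this into the matmul instead):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Fp8Linear(nn.Module):
    """Holds an fp8 weight plus its fp32 sibling scale; dequantizes on the fly."""

    def __init__(self, w_fp8: torch.Tensor, scale: torch.Tensor, bias=None):
        super().__init__()
        self.register_buffer("w_fp8", w_fp8)
        self.register_buffer("scale", scale)
        self.bias = bias

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reload pattern from the quantization section, applied per forward.
        w = (self.w_fp8.to(torch.bfloat16) * self.scale).to(x.dtype)
        return F.linear(x, w, self.bias)
```
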
---

## Layout vs upstream Vista4D

The upstream pipeline expects:

```
checkpoints/
  vista4d/
    384p49_step=30000/
      config.yaml
      dit.pth
  wan/
    Wan2.1-T2V-14B/
      <full Wan 2.1 repo layout>
```

This repo ships **safetensors**, not `.pth`, and uses a flat layout instead of the nested `wan/Wan2.1-T2V-14B/`. To rearrange it into the layout the upstream code expects:

```bash
# Vista4D DiTs
mkdir -p checkpoints/vista4d
mv vista4d-fp8/384p49_step=30000-fp8 checkpoints/vista4d/384p49_step=30000
mv vista4d-fp8/720p49_step=3000-fp8 checkpoints/vista4d/720p49_step=3000

# Wan encoders (rename to match upstream)
mkdir -p checkpoints/wan/Wan2.1-T2V-14B/google
mv vista4d-fp8/wan-encoders-fp8/umt5_xxl_e4m3fn_scaled.safetensors checkpoints/wan/Wan2.1-T2V-14B/
mv vista4d-fp8/wan-encoders-fp8/Wan2.1_VAE.bf16.safetensors checkpoints/wan/Wan2.1-T2V-14B/
mv vista4d-fp8/wan-encoders-fp8/tokenizer/umt5-xxl checkpoints/wan/Wan2.1-T2V-14B/google/
```

Most upstream loaders accept either `.pth` or `.safetensors` transparently, but you may need to enable the fp8-aware loader path in your config (see the diffsynth example above).

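A quick sanity check that the rearrangement worked (the path list is just the files moved above):

```python
from pathlib import Path

root = Path("checkpoints")
expected = [
    "vista4d/384p49_step=30000/config.yaml",
    "wan/Wan2.1-T2V-14B/umt5_xxl_e4m3fn_scaled.safetensors",
    "wan/Wan2.1-T2V-14B/Wan2.1_VAE.bf16.safetensors",
    "wan/Wan2.1-T2V-14B/google/umt5-xxl",
]
missing = [p for p in expected if not (root / p).exists()]
print("missing:", missing or "none")
```
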
---

## Source attribution

This release is a quantized derivative of:

- **Vista4D** — [`Eyeline-Labs/Vista4D`](https://huggingface.co/Eyeline-Labs/Vista4D) ([code](https://github.com/Eyeline-Labs/Vista4D), [paper](https://arxiv.org/abs/2604.21915)), Apache 2.0.
- **Wan 2.1** — [`Wan-AI/Wan2.1-T2V-14B`](https://huggingface.co/Wan-AI/Wan2.1-T2V-14B), Apache 2.0.

All weights here are derived from those releases. No new training was done.

---

## License

Apache 2.0, inherited from both upstream sources. See [`LICENSE` on Eyeline-Labs/Vista4D](https://huggingface.co/Eyeline-Labs/Vista4D/blob/main/LICENSE) and [`LICENSE` on Wan-AI/Wan2.1-T2V-14B](https://huggingface.co/Wan-AI/Wan2.1-T2V-14B/blob/main/LICENSE).

Quantization itself is a numerical transformation; this repo redistributes the same weights under the same license.

---

## Acknowledgments

- Eyeline Labs for [Vista4D](https://huggingface.co/Eyeline-Labs/Vista4D).
- The Wan-AI team for [Wan 2.1-T2V-14B](https://huggingface.co/Wan-AI/Wan2.1-T2V-14B).
- [kijai](https://github.com/kijai/ComfyUI-WanVideoWrapper) and the diffsynth-engine project for the fp8 linear matmul convention this release matches.

---

## Citation

If you use Vista4D in your research, cite the original paper:

```bibtex
@inproceedings{vista4d2026,
  title     = {Vista4D: Learning to Reshoot Video with Camera Trajectories},
  author    = {Eyeline Labs},
  booktitle = {CVPR},
  year      = {2026}
}
```

---

*Maintained by [AEmotionStudio](https://huggingface.co/AEmotionStudio). Issues / questions: open an issue on the upstream Vista4D repo for model behavior, or here for layout / quantization questions.*