---
license: apache-2.0
tags:
- text-to-video
- video-reshooting
- 4d
- vista4d
- wan2.1
- fp8
- quantized
base_model:
- Eyeline-Labs/Vista4D
- Wan-AI/Wan2.1-T2V-14B
---
# Vista4D — fp8 Release
A consumer-GPU-friendly mirror of the [Vista4D](https://huggingface.co/Eyeline-Labs/Vista4D) inference weights, pre-quantized to **fp8 (`float8_e4m3fn`)** with per-tensor symmetric scaling. Drops the on-disk size from ~56 GiB (bf16) to ~17 GiB while staying numerically faithful enough for inference on the published Vista4D pipeline (CVPR 2026).
> **Vista4D** reshoots a video from a new camera trajectory using a finetuned [Wan 2.1-T2V-14B](https://huggingface.co/Wan-AI/Wan2.1-T2V-14B) DiT plus a 4D point-cloud rendering pass. See the [project page](https://eyeline-labs.github.io/Vista4D), [paper](https://arxiv.org/abs/2604.21915), and [upstream code](https://github.com/Eyeline-Labs/Vista4D) for the full picture.
---
## What's in this repo
| Path | Contents | Size |
|---|---|---|
| `384p49_step=30000-fp8/` | DiT for the 672×384, 49-frame checkpoint, fp8 sharded safetensors + `config.yaml` + index | 5.05 GiB |
| `720p49_step=3000-fp8/` | DiT for the 1280×720, 49-frame checkpoint (finetuned from 384p49), same layout | 5.05 GiB |
| `wan-encoders-fp8/` | Wan 2.1's UMT5-XXL text encoder (fp8) + Wan VAE (bf16) + tokenizer | 6.7 GiB |
| `384p49_step=30000/dit.safetensors` | The 384p49 DiT in **bf16 full precision** (legacy, kept for users who want the un-quantized reference) | 20.1 GiB |
| `384p49_step=30000/config.yaml` | Same config as the `-fp8` variant, kept alongside the bf16 file | < 1 KiB |
Everything lives at the top level of `https://huggingface.co/AEmotionStudio/Vista4D`.
---
## Quick download
```bash
# Just the fp8 release (Vista4D 384p + 720p + Wan encoders) — ~17 GiB total
hf download AEmotionStudio/Vista4D \
  --include "*-fp8/*" \
  --local-dir ./vista4d-fp8

# Or a single DiT checkpoint plus the Wan encoders
hf download AEmotionStudio/Vista4D \
  --include "384p49_step=30000-fp8/*" \
  --local-dir ./vista4d-fp8
hf download AEmotionStudio/Vista4D \
  --include "wan-encoders-fp8/*" \
  --local-dir ./vista4d-fp8

# bf16 reference (only the 384p49 DiT is mirrored in bf16; 720p49 is fp8-only)
hf download AEmotionStudio/Vista4D \
  --include "384p49_step=30000/dit.safetensors" \
  --local-dir ./vista4d-fp8
```
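If you'd rather script the download, the same include filters work with `huggingface_hub` — a minimal sketch mirroring the first command above:
```python
from huggingface_hub import snapshot_download

# Equivalent of the first CLI call above: fp8 DiTs + Wan encoders only.
snapshot_download(
    repo_id="AEmotionStudio/Vista4D",
    allow_patterns=["*-fp8/*"],
    local_dir="./vista4d-fp8",
)
```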
You'll also want the upstream Vista4D inference scripts:
```bash
git clone https://github.com/Eyeline-Labs/Vista4D
```
---
## Quantization details
- **Dtype:** `torch.float8_e4m3fn` (max representable magnitude 448).
- **Scaling:** per-tensor symmetric. For each quantized weight `W`:
- `scale = max(|W|).float() / 448.0`
- `W_fp8 = (W / scale).clamp(-448, 448).to(float8_e4m3fn)`
- The scale is saved alongside as a sibling key, e.g. `blocks.0.self_attn.q.weight.scale_weight` (fp32 scalar).
- **What was quantized:** only 2D `nn.Linear` weight tensors in the DiT — the QKV/output projections in self-/cross-attention and the FFN linears.
- **What was kept in source dtype:** patch/text/time embeddings, every `*_norm.weight`, modulation tensors, output head, and all biases (1D tensors are not quantized regardless of name).
- **Reload pattern:**
```python
actual_weight = fp8_tensor.to(torch.bfloat16) * scale_tensor
```
- **Sharding:** ≤ 5 GiB per shard, with a standard `*.safetensors.index.json` mapping every key to a shard. Both Vista4D DiTs ended up as single shards, since each `dit.pth` was already ~10 GiB in fp16 and fp8 roughly halves that; the structure supports multi-shard models if you re-quantize larger weights.
This convention matches the [`fp8_linear`](https://github.com/kijai/ComfyUI-WanVideoWrapper/blob/main/fp8_optimization.py) path used by the WanVideoWrapper / kijai community releases and the `fp8_linear` matmul in [diffsynth's loader](https://github.com/Vchitect/diffsynth-engine) (see `diffsynth/core/vram/layers.py`). No torchao / GPTQ wrappers — plain safetensors with sibling scales.
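For reference, the rule above amounts to a few lines of PyTorch. This is a sketch of the convention (requires PyTorch ≥ 2.1 for the fp8 dtype), not the exact script used to produce these files:
```python
import torch

F8_MAX = 448.0  # largest magnitude representable in float8_e4m3fn

def quantize_per_tensor(w: torch.Tensor):
    """Per-tensor symmetric fp8 quantization of a 2D nn.Linear weight."""
    scale = w.abs().max().float() / F8_MAX  # fp32 scalar, saved as the ".scale_weight" sibling
    w_fp8 = (w.float() / scale).clamp(-F8_MAX, F8_MAX).to(torch.float8_e4m3fn)
    return w_fp8, scale

def dequantize(w_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Inverse of the above: materialize a usable bf16 weight."""
    return w_fp8.to(torch.bfloat16) * scale
```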
---
## Loading example (raw safetensors)
```python
import torch
from safetensors import safe_open
ckpt = "vista4d-fp8/384p49_step=30000-fp8/diffusion_pytorch_model-00001-of-00001.safetensors"
with safe_open(ckpt, framework="pt") as f:
    keys = list(f.keys())

    # Quantized weights have a ".scale_weight" sibling
    qk = "blocks.0.self_attn.q.weight"
    w_fp8 = f.get_tensor(qk)
    w_scale = f.get_tensor(qk + ".scale_weight")

    print(w_fp8.dtype, w_fp8.shape, w_scale.dtype, w_scale.item())
    # -> torch.float8_e4m3fn torch.Size([5120, 5120]) torch.float32 ~0.000XX

    # Materialize as bf16 if you want a normal tensor:
    w_real = w_fp8.to(torch.bfloat16) * w_scale
```
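If a downstream loader doesn't understand the sibling-scale convention, you can materialize a plain bf16 state dict from a shard first. A minimal sketch, assuming the only convention is the `.scale_weight` suffix described above:
```python
import torch
from safetensors import safe_open

def load_dequantized(path: str) -> dict:
    """Read one fp8 shard and return an ordinary (bf16 + source-dtype) state dict."""
    state = {}
    with safe_open(path, framework="pt") as f:
        keys = set(f.keys())
        for key in keys:
            if key.endswith(".scale_weight"):
                continue  # consumed together with its parent weight below
            t = f.get_tensor(key)
            scale_key = key + ".scale_weight"
            if t.dtype == torch.float8_e4m3fn and scale_key in keys:
                t = t.to(torch.bfloat16) * f.get_tensor(scale_key)
            state[key] = t
    return state

state_dict = load_dequantized(ckpt)  # `ckpt` from the snippet above
```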
---
## Loading example (diffsynth fp8 path)
The fp8 layout is compatible with [diffsynth-engine](https://github.com/Vchitect/diffsynth-engine)'s native fp8 loader. Set `preparing_dtype=torch.float8_e4m3fn` in the config and point the model loader at the shard:
```python
import torch
from diffsynth.core.loader.model import load_model
dit = load_model(
    "vista4d-fp8/384p49_step=30000-fp8/",  # dir containing the safetensors index
    preparing_dtype=torch.float8_e4m3fn,
)
```
The `convert_fp8_linear` helper in WanVideoWrapper (referenced above) is a near-identical drop-in if you're not using diffsynth.
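If you are wiring the convention into your own code instead, the essential behavior those helpers implement is "keep the weight in fp8, dequantize per layer at matmul time". A minimal sketch of such a module — an illustration, not the WanVideoWrapper or diffsynth implementation:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Fp8Linear(nn.Module):
    """Stores a weight in float8_e4m3fn plus its fp32 scale and
    dequantizes to bf16 on the fly in forward()."""

    def __init__(self, w_fp8, scale, bias=None):
        super().__init__()
        self.register_buffer("w_fp8", w_fp8)   # float8_e4m3fn, shape [out, in]
        self.register_buffer("scale", scale)   # fp32 scalar from ".scale_weight"
        self.register_buffer("bias", bias)     # biases were never quantized

    def forward(self, x):
        w = self.w_fp8.to(torch.bfloat16) * self.scale  # per-call dequantization
        return F.linear(x.to(torch.bfloat16), w, self.bias)
```
The real helpers do more (device/dtype handling, faster fp8 matmul paths on supported GPUs), but the weight format they consume is exactly the one described above.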
---
## Layout vs upstream Vista4D
The upstream pipeline expects:
```
checkpoints/
  vista4d/
    384p49_step=30000/
      config.yaml
      dit.pth
  wan/
    Wan2.1-T2V-14B/
      <full Wan 2.1 repo layout>
```
This repo ships **safetensors**, not `.pth`, and uses a flat layout instead of nested `wan/Wan2.1-T2V-14B/`. To wire it into the upstream code without modifications:
```bash
# Vista4D DiTs
mkdir -p checkpoints/vista4d
mv vista4d-fp8/384p49_step=30000-fp8 checkpoints/vista4d/384p49_step=30000
mv vista4d-fp8/720p49_step=3000-fp8 checkpoints/vista4d/720p49_step=3000
# Wan encoders (rename to match the upstream layout)
mkdir -p checkpoints/wan/Wan2.1-T2V-14B/google
mv vista4d-fp8/wan-encoders-fp8/umt5_xxl_e4m3fn_scaled.safetensors checkpoints/wan/Wan2.1-T2V-14B/
mv vista4d-fp8/wan-encoders-fp8/Wan2.1_VAE.bf16.safetensors checkpoints/wan/Wan2.1-T2V-14B/
mv vista4d-fp8/wan-encoders-fp8/tokenizer/umt5-xxl checkpoints/wan/Wan2.1-T2V-14B/google/
```
Most loaders accept either `.pth` or `.safetensors` transparently; if the upstream script insists on `torch.load`, the sketch below shows the safetensors equivalent. You may also need to set the fp8-aware loader path in your config (see the diffsynth example above).
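A sketch of such a shim; the index filename is an assumption inferred from the shard naming used in this repo:
```python
import json
import os
from safetensors.torch import load_file

def load_sharded_safetensors(ckpt_dir):
    """Read a sharded safetensors checkpoint into a single state dict."""
    # Index filename assumed from the shard naming in this repo.
    index_path = os.path.join(ckpt_dir, "diffusion_pytorch_model.safetensors.index.json")
    with open(index_path) as fh:
        weight_map = json.load(fh)["weight_map"]
    state = {}
    for shard in sorted(set(weight_map.values())):
        state.update(load_file(os.path.join(ckpt_dir, shard)))
    return state

state_dict = load_sharded_safetensors("checkpoints/vista4d/384p49_step=30000")
```
The returned tensors are still fp8 with `.scale_weight` siblings; dequantize them as in the raw loading example if the consumer expects bf16.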
---
## Source attribution
This release is a quantized derivative of:
- **Vista4D** — [`Eyeline-Labs/Vista4D`](https://huggingface.co/Eyeline-Labs/Vista4D) ([code](https://github.com/Eyeline-Labs/Vista4D), [paper](https://arxiv.org/abs/2604.21915)), Apache 2.0.
- **Wan 2.1** — [`Wan-AI/Wan2.1-T2V-14B`](https://huggingface.co/Wan-AI/Wan2.1-T2V-14B), Apache 2.0.
All weights here are derived from those releases. No new training was done.
---
## License
Apache 2.0, inherited from both upstream sources. See the [`LICENSE` on Eyeline-Labs/Vista4D](https://huggingface.co/Eyeline-Labs/Vista4D/blob/main/LICENSE) and the [`LICENSE` on Wan-AI/Wan2.1-T2V-14B](https://huggingface.co/Wan-AI/Wan2.1-T2V-14B/blob/main/LICENSE).
Quantization itself is a numerical transformation; this repo redistributes the same weights under the same license.
---
## Acknowledgments
- Eyeline Labs for [Vista4D](https://huggingface.co/Eyeline-Labs/Vista4D).
- The Wan-AI team for [Wan 2.1-T2V-14B](https://huggingface.co/Wan-AI/Wan2.1-T2V-14B).
- [kijai](https://github.com/kijai/ComfyUI-WanVideoWrapper) and the diffsynth-engine project for the fp8 linear matmul convention this release matches.
---
## Citation
If you use Vista4D in your research, cite the original paper:
```bibtex
@inproceedings{vista4d2026,
title = {Vista4D: Learning to Reshoot Video with Camera Trajectories},
author = {Eyeline Labs},
booktitle = {CVPR},
year = {2026}
}
```
---
*Maintained by [AEmotionStudio](https://huggingface.co/AEmotionStudio). Issues / questions: open an issue on the upstream Vista4D repo for model behavior, or here for layout / quantization questions.*