---
license: apache-2.0
tags:
- text-to-video
- video-reshooting
- 4d
- vista4d
- wan2.1
- fp8
- quantized
base_model:
- Eyeline-Labs/Vista4D
- Wan-AI/Wan2.1-T2V-14B
---

# Vista4D — fp8 Release
A consumer-GPU-friendly mirror of the Vista4D inference weights, pre-quantized to fp8 (float8_e4m3fn) with per-tensor symmetric scaling. Drops the on-disk size from ~56 GiB (bf16) to ~17 GiB while staying numerically faithful enough for inference on the published Vista4D pipeline (CVPR 2026).
Vista4D reshoots a video from a new camera trajectory using a finetuned Wan 2.1-T2V-14B DiT plus a 4D point-cloud rendering pass. See the project page, paper, and upstream code for the full picture.
## What's in this repo
| Path | Contents | Size |
|---|---|---|
| `384p49_step=30000-fp8/` | DiT for the 672×384, 49-frame checkpoint: fp8 sharded safetensors + `config.yaml` + index | 5.05 GiB |
| `720p49_step=3000-fp8/` | DiT for the 1280×720, 49-frame checkpoint (finetuned from 384p49), same layout | 5.05 GiB |
| `wan-encoders-fp8/` | Wan 2.1's UMT5-XXL text encoder (fp8) + Wan VAE (bf16) + tokenizer | 6.7 GiB |
| `384p49_step=30000/dit.safetensors` | The 384p49 DiT in bf16 full precision (legacy, kept for users who want the un-quantized reference) | 20.1 GiB |
| `384p49_step=30000/config.yaml` | Same config as the `-fp8` variant, kept alongside the bf16 file | < 1 KiB |
Everything lives at the top level of https://huggingface.co/AEmotionStudio/Vista4D.
## Quick download
```bash
# Just the fp8 release (Vista4D 384p + 720p + Wan encoders) — ~17 GiB total
hf download AEmotionStudio/Vista4D \
  --include "*-fp8/*" \
  --local-dir ./vista4d-fp8

# Single checkpoint only
hf download AEmotionStudio/Vista4D \
  --include "384p49_step=30000-fp8/*" \
  --local-dir ./vista4d-fp8
hf download AEmotionStudio/Vista4D \
  --include "wan-encoders-fp8/*" \
  --local-dir ./vista4d-fp8

# bf16 reference (only 384p49 is mirrored in bf16; 720p49 is fp8-only)
hf download AEmotionStudio/Vista4D \
  --include "384p49_step=30000/dit.safetensors" \
  --local-dir ./vista4d-fp8
```
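If you'd rather stay in Python, the same include filter maps onto `huggingface_hub`'s `snapshot_download`, where `allow_patterns` plays the role of `--include`:

```python
from huggingface_hub import snapshot_download

# Same filter as the first CLI call above: fetch only the fp8 release (~17 GiB).
snapshot_download(
    repo_id="AEmotionStudio/Vista4D",
    allow_patterns=["*-fp8/*"],
    local_dir="./vista4d-fp8",
)
```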
You'll also want the upstream Vista4D inference scripts:
```bash
git clone https://github.com/Eyeline-Labs/Vista4D
```
## Quantization details
- Dtype: `torch.float8_e4m3fn` (max representable magnitude 448).
- Scaling: per-tensor symmetric. For each quantized weight `W`: `scale = max(|W|).float() / 448.0`, then `W_fp8 = (W / scale).clamp(-448, 448).to(float8_e4m3fn)`. The scale is saved alongside as a sibling key, e.g. `blocks.0.self_attn.q.weight.scale_weight` (an fp32 scalar); a runnable sketch follows below.
- What was quantized: only 2D `nn.Linear` weight tensors in the DiT — the QKV/output projections in self-/cross-attention and the FFN linears.
- What was kept in source dtype: patch/text/time embeddings, every `*_norm.weight`, modulation tensors, the output head, and all biases (1D tensors are never quantized, regardless of name).
- Reload pattern: `actual_weight = fp8_tensor.to(torch.bfloat16) * scale_tensor`.
- Sharding: ≤ 5 GiB per shard, with a standard `*.safetensors.index.json` mapping every key to a shard. Both Vista4D DiTs ended up as single shards because each `dit.pth` was already ~10 GiB in fp16; the structure supports multi-shard models if you re-quantize larger weights.
This convention matches the `fp8_linear` path used by the WanVideoWrapper / kijai community releases and the `fp8_linear` matmul in diffsynth's loader (see `diffsynth/core/vram/layers.py`). No torchao / GPTQ wrappers — plain safetensors with sibling scales.
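To make the quantize/reload math above concrete, here is a minimal sketch of the round trip. It assumes a PyTorch build with `float8_e4m3fn` support; the function names are illustrative, not part of the upstream tooling:

```python
import torch

FP8_MAX = 448.0  # max representable magnitude of float8_e4m3fn

def quantize_per_tensor_fp8(W: torch.Tensor):
    """Per-tensor symmetric fp8 quantization, as described in the list above."""
    scale = W.abs().max().float() / FP8_MAX          # fp32 scalar
    W_fp8 = (W / scale).clamp(-FP8_MAX, FP8_MAX).to(torch.float8_e4m3fn)
    return W_fp8, scale                              # scale is saved as "<key>.scale_weight"

def dequantize_fp8(W_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Reload pattern: materialize a normal bf16 tensor."""
    return W_fp8.to(torch.bfloat16) * scale
```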
## Loading example (raw safetensors)
```python
import torch
from safetensors import safe_open

ckpt = "vista4d-fp8/384p49_step=30000-fp8/diffusion_pytorch_model-00001-of-00001.safetensors"
with safe_open(ckpt, framework="pt") as f:
    keys = list(f.keys())

    # Quantized weights have a ".scale_weight" sibling
    qk = "blocks.0.self_attn.q.weight"
    w_fp8 = f.get_tensor(qk)
    w_scale = f.get_tensor(qk + ".scale_weight")

print(w_fp8.dtype, w_fp8.shape, w_scale.dtype, w_scale.item())
# -> torch.float8_e4m3fn torch.Size([5120, 5120]) torch.float32 ~0.000XX

# Materialize as bf16 if you want a normal tensor:
w_real = w_fp8.to(torch.bfloat16) * w_scale
```
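To go from single tensors to a fully de-quantized state dict (e.g. to hand to the upstream pipeline), the sketch below walks the `*.safetensors.index.json` and folds every sibling scale back into its weight. `load_dequantized_state_dict` is an illustrative helper, not part of any library:

```python
import json
import os

import torch
from safetensors import safe_open

def load_dequantized_state_dict(model_dir: str) -> dict:
    """Load all shards listed in the index, then fold each ".scale_weight"
    back into its fp8 weight, yielding plain bf16 tensors."""
    index_file = next(p for p in os.listdir(model_dir)
                      if p.endswith(".safetensors.index.json"))
    with open(os.path.join(model_dir, index_file)) as fh:
        weight_map = json.load(fh)["weight_map"]  # tensor key -> shard filename

    sd = {}
    for shard in sorted(set(weight_map.values())):
        with safe_open(os.path.join(model_dir, shard), framework="pt") as f:
            for k in f.keys():
                sd[k] = f.get_tensor(k)

    # Fold scales into their weights, then drop the scale keys.
    for scale_key in [k for k in sd if k.endswith(".scale_weight")]:
        weight_key = scale_key[: -len(".scale_weight")]
        sd[weight_key] = sd[weight_key].to(torch.bfloat16) * sd[scale_key]
        del sd[scale_key]
    return sd

state_dict = load_dequantized_state_dict("vista4d-fp8/384p49_step=30000-fp8")
```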
## Loading example (diffsynth fp8 path)
The fp8 layout is compatible with diffsynth-engine's native fp8 loader. Set `preparing_dtype=torch.float8_e4m3fn` in the config and point the model loader at the shard:
```python
import torch
from diffsynth.core.loader.model import load_model

dit = load_model(
    "vista4d-fp8/384p49_step=30000-fp8/",  # dir containing the safetensors index
    preparing_dtype=torch.float8_e4m3fn,
)
```
The `convert_fp8_linear` helper in WanVideoWrapper (referenced above) is a near-identical drop-in if you're not using diffsynth.
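If you are rolling your own loader instead, the convention boils down to a Linear layer that carries the fp8 weight plus its scalar scale and de-quantizes at forward time. A minimal sketch; the class and its behavior are illustrative, not the WanVideoWrapper or diffsynth implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FP8Linear(nn.Module):
    """Linear layer holding an fp8 weight plus its per-tensor fp32 scale."""

    def __init__(self, w_fp8, scale, bias=None):
        super().__init__()
        self.register_buffer("w_fp8", w_fp8)  # float8_e4m3fn, shape [out, in]
        self.register_buffer("scale", scale)  # fp32 scalar
        self.register_buffer("bias", bias)    # biases were never quantized

    def forward(self, x):
        # De-quantize to the activation dtype, then run a normal matmul.
        w = self.w_fp8.to(x.dtype) * self.scale.to(x.dtype)
        return F.linear(x, w, self.bias)
```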
## Layout vs upstream Vista4D
The upstream pipeline expects:
```
checkpoints/
  vista4d/
    384p49_step=30000/
      config.yaml
      dit.pth
  wan/
    Wan2.1-T2V-14B/
      <full Wan 2.1 repo layout>
```
This repo ships safetensors, not `.pth`, and uses a flat layout instead of nested `wan/Wan2.1-T2V-14B/`. To wire it into the upstream code without modifications:
```bash
# Vista4D DiTs
mkdir -p checkpoints/vista4d
mv vista4d-fp8/384p49_step=30000-fp8 checkpoints/vista4d/384p49_step=30000
mv vista4d-fp8/720p49_step=3000-fp8 checkpoints/vista4d/720p49_step=3000

# Wan encoders (rename to match upstream)
mkdir -p checkpoints/wan/Wan2.1-T2V-14B
mv vista4d-fp8/wan-encoders-fp8/umt5_xxl_e4m3fn_scaled.safetensors checkpoints/wan/Wan2.1-T2V-14B/
mv vista4d-fp8/wan-encoders-fp8/Wan2.1_VAE.bf16.safetensors checkpoints/wan/Wan2.1-T2V-14B/
mv vista4d-fp8/wan-encoders-fp8/tokenizer/umt5-xxl checkpoints/wan/Wan2.1-T2V-14B/google/
```
Most upstream loaders accept either `.pth` or `.safetensors` transparently. You may need to set the fp8-aware loader path in your config (see the diffsynth example above).
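A quick, illustrative sanity check that the rewired tree matches what the upstream scripts expect, using paths taken from the layout above:

```python
from pathlib import Path

# Paths from the rewiring commands above; adjust if you moved things elsewhere.
expected = [
    "checkpoints/vista4d/384p49_step=30000/config.yaml",
    "checkpoints/wan/Wan2.1-T2V-14B/umt5_xxl_e4m3fn_scaled.safetensors",
    "checkpoints/wan/Wan2.1-T2V-14B/Wan2.1_VAE.bf16.safetensors",
]
for p in expected:
    print(f"{'ok' if Path(p).exists() else 'MISSING':8} {p}")
```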
## Source attribution
This release is a quantized derivative of:
- Vista4D — `Eyeline-Labs/Vista4D` (code, paper), Apache 2.0.
- Wan 2.1 — `Wan-AI/Wan2.1-T2V-14B`, Apache 2.0.
All weights here are derived from those releases. No new training was done.
## License
Apache 2.0, inherited from both upstream sources. See the [LICENSE on Eyeline-Labs/Vista4D](https://huggingface.co/Eyeline-Labs/Vista4D/blob/main/LICENSE) and the [LICENSE on Wan-AI/Wan2.1-T2V-14B](https://huggingface.co/Wan-AI/Wan2.1-T2V-14B).
Quantization itself is a numerical transformation; this repo redistributes the same weights under the same license.
## Acknowledgments
- Eyeline Labs for Vista4D.
- The Wan-AI team for Wan 2.1-T2V-14B.
- kijai and the diffsynth-engine project for the fp8 linear matmul convention this release matches.
## Citation
If you use Vista4D in your research, cite the original paper:
```bibtex
@inproceedings{vista4d2026,
  title     = {Vista4D: Learning to Reshoot Video with Camera Trajectories},
  author    = {Eyeline Labs},
  booktitle = {CVPR},
  year      = {2026}
}
```
Maintained by AEmotionStudio. For model-behavior questions, open an issue on the upstream Vista4D repo; for layout or quantization questions, open one here.