AEmotionStudio/Vista4D

Tags: Text-to-Video · Diffusers · Safetensors · video-reshooting · 4d · vista4d · wan2.1 · fp8 · quantized
Instructions for using AEmotionStudio/Vista4D with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • Diffusers

    How to use AEmotionStudio/Vista4D with Diffusers:

    pip install -U diffusers transformers accelerate

    import torch
    from diffusers import DiffusionPipeline
    from diffusers.utils import export_to_video

    # switch device_map to "mps" for Apple devices
    pipe = DiffusionPipeline.from_pretrained(
        "AEmotionStudio/Vista4D", dtype=torch.bfloat16, device_map="cuda"
    )

    prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
    # this is a text-to-video model, so the pipeline returns frames rather than images
    frames = pipe(prompt).frames[0]
    export_to_video(frames, "output.mp4")
  • Notebooks
  • Google Colab
  • Kaggle
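The comment in the snippet above says to switch to "mps" on Apple devices; rather than hand-editing the string, the choice can be made at runtime. A minimal sketch (the helper name `pick_device` is ours, not part of diffusers):

```python
import torch

def pick_device() -> str:
    """Pick the best available accelerator for DiffusionPipeline's device_map."""
    if torch.cuda.is_available():
        return "cuda"
    if torch.backends.mps.is_available():
        return "mps"
    return "cpu"
```

The returned string can be passed straight to `device_map=` in `DiffusionPipeline.from_pretrained`.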
Vista4D (110 GB)
  • 1 contributor
History: 38 commits
AEmotionStudio
Upload vista4d-bf16/720p49_step=3000/config.yaml with huggingface_hub
90ae880 verified about 7 hours ago
  • 384p49_step=30000-fp8
    add: 384p49_step=30000 fp8: 384p49_step=30000-fp8/diffusion_pytorch_model.safetensors.index.json about 20 hours ago
  • 384p49_step=30000
    cleanup: remove redundant single-file fp8 (kept sharded -fp8/ variant + bf16 dit.safetensors) about 19 hours ago
  • 720p49_step=3000-fp8
    add: 720p49_step=3000 fp8: 720p49_step=3000-fp8/diffusion_pytorch_model.safetensors.index.json about 20 hours ago
  • vista4d-bf16
    Upload vista4d-bf16/720p49_step=3000/config.yaml with huggingface_hub about 7 hours ago
  • wan-bf16
    Upload wan-bf16/google/umt5-xxl/tokenizer.json with huggingface_hub about 8 hours ago
  • wan-encoders-fp8
    add: wan encoders fp8: wan-encoders-fp8/umt5_xxl_e4m3fn_scaled.safetensors about 20 hours ago
  • .gitattributes
    1.68 kB
    Upload wan-bf16/google/umt5-xxl/tokenizer.json with huggingface_hub about 8 hours ago
  • README.md
    8.06 kB
    add: model card README (fp8 release notes, quantization details, load examples) about 19 hours ago
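The fp8 variants in the listing exist to roughly halve checkpoint size and load-time memory versus bf16, since bf16 stores 2 bytes per parameter and fp8 (e4m3fn, as in the `_scaled` encoder checkpoint name) stores 1. A back-of-the-envelope check, assuming a 14B parameter count (the size of Wan2.1's larger DiT variant; the actual count is not stated on this page):

```python
def checkpoint_gib(n_params: int, bytes_per_param: float) -> float:
    """Approximate raw checkpoint size in GiB for a dense model."""
    return n_params * bytes_per_param / 2**30

# Assumed parameter count: Wan2.1's 14B DiT variant (not stated on this page).
N = 14_000_000_000

print(f"bf16: {checkpoint_gib(N, 2):.1f} GiB")  # 2 bytes/param
print(f"fp8:  {checkpoint_gib(N, 1):.1f} GiB")  # 1 byte/param (e4m3fn)
```

The `_scaled` suffix reflects that e4m3fn has a narrow dynamic range, so weights are typically stored alongside scale factors to fit into it.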