Text-to-Video
Diffusers
Safetensors
English
FARWanAnyFlowPipeline
Any-Step
Text-to-Video
Image-to-Video
Video-to-Video
Instructions to use nvidia/AnyFlow-FAR-Wan2.1-1.3B-Diffusers with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use nvidia/AnyFlow-FAR-Wan2.1-1.3B-Diffusers with Diffusers:
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained("nvidia/AnyFlow-FAR-Wan2.1-1.3B-Diffusers", dtype=torch.bfloat16, device_map="cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
frames = pipe(prompt).frames[0]  # video pipelines return frames, not images
export_to_video(frames, "output.mp4")
- Notebooks
- Google Colab
- Kaggle
config.json (785 Bytes, commit a86236e):
{
"_class_name": "FAR_Wan_Transformer3DModel",
"_diffusers_version": "0.35.1",
"_name_or_path": "Wan-AI/Wan2.1-T2V-1.3B-Diffusers",
"added_kv_proj_dim": null,
"attention_head_dim": 128,
"chunk_partition": [
1,
3,
3,
3,
3,
3,
3,
2
],
"compressed_patch_size": [
1,
4,
4
],
"cross_attn_norm": true,
"deltatime_type": "r",
"eps": 1e-06,
"ffn_dim": 8960,
"freq_dim": 256,
"full_chunk_limit": 3,
"gate_value": 0.25,
"image_dim": null,
"in_channels": 16,
"init_far_model": true,
"init_flowmap_model": true,
"num_attention_heads": 12,
"num_layers": 30,
"out_channels": 16,
"patch_size": [
1,
2,
2
],
"qk_norm": "rms_norm_across_heads",
"rope_max_seq_len": 1024,
"text_dim": 4096
}
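A few model dimensions can be derived directly from this config. The sketch below is illustrative only (the `tokens_per_frame` helper and the example latent sizes are assumptions, not part of the repo): the transformer's hidden size is `num_attention_heads * attention_head_dim`, and the `patch_size` of `[1, 2, 2]` means each latent frame is patchified 2x2 spatially before entering the transformer.

```python
import json

# Excerpted fields from the config.json shown above.
config = json.loads("""
{
  "attention_head_dim": 128,
  "num_attention_heads": 12,
  "ffn_dim": 8960,
  "num_layers": 30,
  "patch_size": [1, 2, 2],
  "in_channels": 16
}
""")

# Transformer hidden size = heads * per-head dimension.
inner_dim = config["num_attention_heads"] * config["attention_head_dim"]
print(inner_dim)  # 1536

# Illustrative helper: token count per latent frame after spatial patchification.
def tokens_per_frame(latent_h, latent_w, patch_size):
    _, ph, pw = patch_size  # temporal patch is 1, spatial is 2x2
    return (latent_h // ph) * (latent_w // pw)

# e.g. a hypothetical 60x104 latent grid
print(tokens_per_frame(60, 104, config["patch_size"]))  # 1560
```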