WARNING 02-08 03:03:20 [envs.py:235] Flash Attention library "flash_attn" not found, using pytorch attention implementation
================================================================================
CONFIGURATION PARAMETERS:
================================================================================
cfg_scale_text              : 5.0
data_root                   : data_inference/wan_i2v/
dit_root                    : ./weights/Wan2.1-I2V-14B-480P/
extra_module_root           : weights/Stable-Video-Infinity/version-1.0/svi-shot.safetensors
lora_alpha                  : 1.0
max_prompts_per_sample      : None
max_width                   : 832
num_clips                   : 10
num_motion_frames           : 1
num_persistent_param_in_dit : 6000000000
num_steps                   : 50
output                      : videos/svi_shot/
prompt_path                 : /mnt/vita/scratch/vita-students/users/wuli/code/DigitalHuman/VBench/20260207_test/sample1/prompt.txt
prompt_prefix               : none
prompt_repeat_times         : 1
ref_image_path              : /mnt/vita/scratch/vita-students/users/wuli/code/DigitalHuman/VBench/20260207_test/sample1/train_000001.jpg
ref_pad_cfg                 : False
ref_pad_num                 : -1
repeat_first_clip           : False
seed_times                  : 42
test_samples                : None
tile_size                   : [30, 52]
tile_stride                 : [15, 26]
tiled                       : False
train_architecture          : lora
use_first_aug               : False
use_first_prompt_only       : True
================================================================================
Total number of cfg parameters: 27
================================================================================
Using direct paths for reference image and prompt file
Reference image: /mnt/vita/scratch/vita-students/users/wuli/code/DigitalHuman/VBench/20260207_test/sample1/train_000001.jpg
Prompt file: /mnt/vita/scratch/vita-students/users/wuli/code/DigitalHuman/VBench/20260207_test/sample1/prompt.txt
Generated 1 test scenario with 1 prompts
Loading models from: ./weights/Wan2.1-I2V-14B-480P/models_clip_open-clip-xlm-roberta-large-vit-huge-14.pth
    model_name: wan_video_image_encoder model_class: WanImageEncoder
The following models are loaded: ['wan_video_image_encoder'].
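A banner like the one above is typically produced by dumping a flat parameter namespace at startup. A minimal sketch (hypothetical helper, not the actual SVI code) that renders the same layout from a plain dict:

```python
# Hypothetical sketch: render a configuration banner in the style of the
# log above from a flat dict of parameters. Keys are sorted and aligned;
# the parameter count in the footer matches the dict size.
def format_config_banner(cfg: dict, width: int = 80) -> str:
    sep = "=" * width
    key_width = max(len(k) for k in cfg)
    lines = [sep, "CONFIGURATION PARAMETERS:", sep]
    for key in sorted(cfg):
        lines.append(f"{key:<{key_width}} : {cfg[key]}")
    lines += [sep, f"Total number of cfg parameters: {len(cfg)}", sep]
    return "\n".join(lines)

if __name__ == "__main__":
    print(format_config_banner({"cfg_scale_text": 5.0, "num_steps": 50, "seed_times": 42}))
```

In practice the dict would come from `vars(argparse.Namespace)` or a parsed YAML file; sorting the keys makes two runs easy to diff.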
Loading models from: ['./weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00001-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00002-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00003-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00004-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00005-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00006-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00007-of-00007.safetensors']
    model_name: wan_video_dit model_class: WanModel
    This model is initialized with extra kwargs: {'has_image_input': True, 'patch_size': [1, 2, 2], 'in_dim': 36, 'dim': 5120, 'ffn_dim': 13824, 'freq_dim': 256, 'text_dim': 4096, 'out_dim': 16, 'num_heads': 40, 'num_layers': 40, 'eps': 1e-06}
The following models are loaded: ['wan_video_dit'].
Loading models from: ./weights/Wan2.1-I2V-14B-480P/models_t5_umt5-xxl-enc-bf16.pth
    model_name: wan_video_text_encoder model_class: WanTextEncoder
The following models are loaded: ['wan_video_text_encoder'].
Loading models from: ./weights/Wan2.1-I2V-14B-480P/Wan2.1_VAE.pth
    model_name: wan_video_vae model_class: WanVideoVAE
The following models are loaded: ['wan_video_vae'].
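The DiT weights above are a sharded checkpoint: seven `safetensors` files that together hold one state dict. A minimal sketch of the merge step (hypothetical helper; in the real pipeline each shard would be read with `safetensors.torch.load_file`, here plain dicts stand in for the tensors):

```python
# Hypothetical sketch: combine per-shard state dicts (one per
# diffusion_pytorch_model-0000X-of-00007.safetensors file) into a single
# state dict, refusing silently overlapping parameter names.
def merge_shards(shards):
    state_dict = {}
    for shard in shards:
        for name, tensor in shard.items():
            if name in state_dict:
                raise ValueError(f"duplicate parameter across shards: {name}")
            state_dict[name] = tensor
    return state_dict
```

Sharding exists purely so no single file exceeds hub upload limits; parameter names are globally unique, which is why the duplicate check is safe.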
Loading LoRA models from file: weights/Stable-Video-Infinity/version-1.0/svi-shot.safetensors
    Adding LoRA to wan_video_dit (['./weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00001-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00002-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00003-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00004-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00005-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00006-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00007-of-00007.safetensors']). 400 tensors are updated.
Using wan_video_text_encoder from ./weights/Wan2.1-I2V-14B-480P/models_t5_umt5-xxl-enc-bf16.pth.
Using wan_video_dit from ['./weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00001-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00002-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00003-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00004-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00005-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00006-of-00007.safetensors', './weights/Wan2.1-I2V-14B-480P/diffusion_pytorch_model-00007-of-00007.safetensors'].
Using wan_video_vae from ./weights/Wan2.1-I2V-14B-480P/Wan2.1_VAE.pth.
Using wan_video_image_encoder from ./weights/Wan2.1-I2V-14B-480P/models_clip_open-clip-xlm-roberta-large-vit-huge-14.pth.
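The "400 tensors are updated" line corresponds to merging low-rank LoRA deltas into the frozen DiT weights. A minimal sketch of the standard LoRA update rule, not the pipeline's exact code (some implementations scale by `alpha / rank` rather than `alpha`; here `lora_alpha` is 1.0 so the distinction is moot):

```python
import numpy as np

# Hypothetical sketch of the standard LoRA merge: add the low-rank product
# B @ A into a frozen base weight W, scaled by alpha.
def merge_lora(W, A, B, alpha=1.0):
    # W: (out, in), A: (r, in) down-projection, B: (out, r) up-projection.
    # Returns the merged weight with the same shape as W.
    return W + alpha * (B @ A)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
A = rng.normal(size=(2, 8))   # rank-2 down-projection
B = np.zeros((8, 2))          # zero-initialized up-projection, as at training start
assert np.allclose(merge_lora(W, A, B), W)  # zero B => merge is a no-op
```

Merging the deltas into the base weights up front means inference runs at the speed of the plain model, with no extra matmuls per layer.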
####################################################################################################
STARTING SAMPLE 1/1: train_000001
####################################################################################################
Reference image: /mnt/vita/scratch/vita-students/users/wuli/code/DigitalHuman/VBench/20260207_test/sample1/train_000001.jpg
Available prompts: 1
Video dimensions: 832x528
Processing train_000001 with 1 prompts
Generating 10 clips using the first prompt repeatedly
Created output directory for sample: videos/svi_shot/train_000001_20260208_030504
================================================================================
PROCESSING SAMPLE: train_000001
CHUNK: 1/10
PROMPT: An Amtrak train, numbered 146, travels along a set of tracks under a clear blue sky with scattered clouds, surrounded by a forested landscape.
NOTE: Using first prompt only (use_first_prompt_only=True)
================================================================================
Starting video generation...
  0%|          | 0/50 [00:00
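The "10 clips using the first prompt repeatedly" step implies a chunked long-video loop: each clip is conditioned on the last `num_motion_frames` frames of the previous clip (here 1), and with `use_first_prompt_only=True` every chunk reuses prompt 1. A minimal sketch of that loop, with `generate_clip` as a stand-in for the actual diffusion call:

```python
# Hypothetical sketch of the chunked generation loop implied by the log.
# generate_clip(prompt, carry) is a placeholder for the real pipeline call;
# carry is None for the first chunk, then the motion frames from the
# previous clip.
def generate_long_video(prompts, num_clips, num_motion_frames, generate_clip):
    prompt = prompts[0]                      # use_first_prompt_only: reuse prompt 1
    video, carry = [], None
    for chunk in range(num_clips):
        clip = generate_clip(prompt, carry)  # returns a list of frames
        video.extend(clip)
        carry = clip[-num_motion_frames:]    # condition the next chunk on the tail
    return video
```

Carrying only the last frame keeps memory flat regardless of how many clips are chained, at the cost of relying on that single frame for temporal continuity across chunk boundaries.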