---
license: apache-2.0
language:
- en
base_model:
- Wan-AI/Wan2.1-I2V-14B-720P
- Wan-AI/Wan2.1-I2V-14B-720P-Diffusers
pipeline_tag: image-to-video
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: gangtiexia,背景保持不变,这个人开始变身白色机甲,变身过程中出现机甲面罩遮住脸部,变身完成之后这个人向前走
  output:
    url: result/output1.mp4
- text: gangtiexia,背景保持不变,这个人开始变身粉色机甲,变身过程中出现机甲面罩遮住脸部,变身完成之后这个人向前走
  output:
    url: result/output2.mp4
- text: gangtiexia,背景保持不变,这个人开始变身金色机甲,变身过程中出现机甲面罩遮住脸部,变身完成之后这个人向前走
  output:
    url: result/output3.mp4
- text: gangtiexia,背景保持不变,这个人开始变身金色机甲,变身过程中出现机甲面罩遮住脸部,变身完成之后这个人向前走
  output:
    url: result/output4.mp4
- text: gangtiexia,背景保持不变,这个人开始变身白色机甲,变身过程中出现机甲面罩遮住脸部,变身完成之后这个人向前走
  output:
    url: result/output5.mp4
- text: gangtiexia,背景保持不变,这个人开始变身红色机甲,变身过程中出现机甲面罩遮住脸部,变身完成之后这个人向前走
  output:
    url: result/output6.mp4
---
# valiantcat LoRA for Wan2.1 14B I2V 720p

## Overview
This LoRA is trained on the Wan2.1 14B I2V 720p model.
## Features

- Transforms any image of a person into a video of that person turning into a mecha suit
- Trained on the Wan2.1 14B 720p I2V base model
- Consistent results across different subjects
- Simple prompt structure that is easy to adapt
## Example Prompts

The showcase videos above all use the same prompt, varying only the mecha color (output1/output5: white 白色, output2: pink 粉色, output3/output4: gold 金色, output6: red 红色). For example:

`gangtiexia,背景保持不变,这个人开始变身白色机甲,变身过程中出现机甲面罩遮住脸部,变身完成之后这个人向前走`

(English: "gangtiexia, keep the background unchanged; this person begins transforming into white mecha armor; during the transformation a mecha mask appears and covers the face; after the transformation the person walks forward.")
## Model File and Inference Workflow

📥 Download Links:

- `wan2.1-Mecha.safetensors` - LoRA model file
- `wan_img2video_lora_workflow.json` - Wan I2V with LoRA workflow for ComfyUI
## Using with Diffusers

Install the latest Diffusers from source:

```shell
pip install git+https://github.com/huggingface/diffusers.git
```

```python
import numpy as np
import torch
from diffusers import AutoencoderKLWan, WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image
from transformers import CLIPVisionModel

model_id = "Wan-AI/Wan2.1-I2V-14B-720P-Diffusers"
# Keep the image encoder and VAE in float32 for quality; the rest runs in bfloat16.
image_encoder = CLIPVisionModel.from_pretrained(model_id, subfolder="image_encoder", torch_dtype=torch.float32)
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanImageToVideoPipeline.from_pretrained(model_id, vae=vae, image_encoder=image_encoder, torch_dtype=torch.bfloat16)
pipe.to("cuda")
pipe.load_lora_weights("valiantcat/Wan2.1-Mecha-LoRA")
# pipe.enable_model_cpu_offload()  # for low-VRAM environments; use instead of pipe.to("cuda")

prompt = "gangtiexia,背景保持不变,这个人开始变身红色机甲,变身过程中出现机甲面罩遮住脸部,变身完成之后这个人向前走."
# Use the /resolve/ URL so the raw image file is fetched rather than the HTML page.
image = load_image("https://huggingface.co/valiantcat/Wan2.1-Mecha-LoRA/resolve/main/result/test.jpg")

# Resize the input so its area is at most 512*768 pixels, preserving the aspect
# ratio and rounding each side down to a multiple of the model's spatial stride.
max_area = 512 * 768
aspect_ratio = image.height / image.width
mod_value = pipe.vae_scale_factor_spatial * pipe.transformer.config.patch_size[1]
height = round(np.sqrt(max_area * aspect_ratio)) // mod_value * mod_value
width = round(np.sqrt(max_area / aspect_ratio)) // mod_value * mod_value
image = image.resize((width, height))

output = pipe(
    image=image,
    prompt=prompt,
    height=height,
    width=width,
    num_frames=81,
    guidance_scale=5.0,
    num_inference_steps=25,
).frames[0]
export_to_video(output, "output.mp4", fps=16)
```
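The resizing step above can be sketched on its own. This is a minimal, pipeline-free version of the same arithmetic, assuming a spatial stride (`mod_value`) of 16; the helper name `fit_to_area` is ours, not part of Diffusers:

```python
import numpy as np

def fit_to_area(width, height, max_area=512 * 768, mod_value=16):
    """Scale (width, height) to at most max_area pixels, preserving the
    aspect ratio and rounding each side down to a multiple of mod_value."""
    aspect_ratio = height / width
    new_h = int(round(np.sqrt(max_area * aspect_ratio)) // mod_value * mod_value)
    new_w = int(round(np.sqrt(max_area / aspect_ratio)) // mod_value * mod_value)
    return new_w, new_h

# A 1280x720 source image becomes 832x464: both sides divisible by 16,
# aspect ratio approximately preserved, total area under 512*768.
w, h = fit_to_area(1280, 720)
print(w, h)
```

Rounding down (rather than to the nearest multiple) guarantees the result never exceeds `max_area`, which is why the pipeline accepts any input resolution.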
## Recommended Settings

- LoRA Strength: 1.0
- Embedded Guidance Scale: 6.0
- Flow Shift: 5.0
## Trigger Words

The key trigger phrase is: `gangtiexia`

## Prompt Template

For best results, use this prompt structure:

`gangtiexia,背景保持不变,这个人开始变身[color]色机甲,变身过程中出现机甲面罩遮住脸部,变身完成之后这个人向前走`

(English: "gangtiexia, keep the background unchanged; this person begins transforming into [color] mecha armor; during the transformation a mecha mask appears and covers the face; after the transformation the person walks forward.")

Simply replace [color] with the desired mecha color, e.g. 白 (white), 粉 (pink), 金 (gold), or 红 (red).
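The template can also be filled programmatically when batching over colors. A minimal sketch (the `TEMPLATE` and `COLORS` names are ours; the color terms match the showcase prompts above):

```python
# Build prompts from the card's template by substituting the mecha color.
TEMPLATE = ("gangtiexia,背景保持不变,这个人开始变身{color}机甲,"
            "变身过程中出现机甲面罩遮住脸部,变身完成之后这个人向前走")

# Colors used in the showcase videos (Chinese term -> English meaning).
COLORS = {"白色": "white", "粉色": "pink", "金色": "gold", "红色": "red"}

prompts = [TEMPLATE.format(color=c) for c in COLORS]
for p in prompts:
    print(p)
```

Each resulting string can be passed directly as the `prompt` argument in the Diffusers example above.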