Can't load this LoRA with diffusers

#13
by furaidosu - opened

Hi! I'm trying to load this LoRA with diffusers on a ZeroGPU Space (https://huggingface.co/spaces/lichorosario/qwen-image-lora-dlc-v3), without any success.
The LoRA doesn't seem to be loaded with:

import math

import torch
from diffusers import DiffusionPipeline, FlowMatchEulerDiscreteScheduler

dtype = torch.bfloat16
device = "cuda" if torch.cuda.is_available() else "cpu"
base_model = "Qwen/Qwen-Image-2512"

# Scheduler configuration from the Qwen-Image-Lightning repository
scheduler_config = {
    "base_image_seq_len": 256,
    "base_shift": math.log(3),
    "invert_sigmas": False,
    "max_image_seq_len": 8192,
    "max_shift": math.log(3),
    "num_train_timesteps": 1000,
    "shift": 1.0,
    "shift_terminal": None,
    "stochastic_sampling": False,
    "time_shift_type": "exponential",
    "use_beta_sigmas": False,
    "use_dynamic_shifting": True,
    "use_exponential_sigmas": False,
    "use_karras_sigmas": False,
}


scheduler = FlowMatchEulerDiscreteScheduler.from_config(scheduler_config)
pipe = DiffusionPipeline.from_pretrained(
    base_model, scheduler=scheduler, torch_dtype=dtype
).to(device)

pipe.load_lora_weights(
    "Wuli-Art/Qwen-Image-2512-Turbo-LoRA",
    weight_name="Wuli-Qwen-Image-2512-Turbo-LoRA-4steps-V1.0-bf16.safetensors",
    adapter_name="lightning",
)

Is there any known issue with this, or am I doing it the wrong way?
Thanks in advance!

Wuli-Art org

Hi @lichorosario, please use V2.0 or V1.0 (ComfyUI version) and see if it works


V2.0 works perfectly! Thank you very much!
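For anyone hitting the same issue: the fix is only a different `weight_name` in the same `load_lora_weights` call. The exact V2.0 filename isn't quoted in this thread, so a safe way to find it is to list the repo's `.safetensors` files with `huggingface_hub` (this sketch assumes network access to the Hub):

```python
from huggingface_hub import list_repo_files

# List the files in the LoRA repo to find the exact V2.0 weight filename.
repo_id = "Wuli-Art/Qwen-Image-2512-Turbo-LoRA"
weights = [f for f in list_repo_files(repo_id) if f.endswith(".safetensors")]
print(weights)

# Then load the V2.0 file with the same call as in the question, e.g.:
# pipe.load_lora_weights(repo_id, weight_name=<V2.0 filename from the list>,
#                        adapter_name="lightning")
```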
