How to use this model in Python?

#9
by ideusb - opened

The following code seems to have nothing to do with this GGUF model; it is just the official example:

import torch
from PIL import Image
from diffusers import QwenImageEditPlusPipeline

pipeline = QwenImageEditPlusPipeline.from_pretrained("Qwen/Qwen-Image-Edit-2511", torch_dtype=torch.bfloat16)
print("pipeline loaded")

pipeline.to('cuda')
pipeline.set_progress_bar_config(disable=None)
image1 = Image.open("input1.png")
image2 = Image.open("input2.png")

These GGUF files are primarily meant to be run in ComfyUI or stable-diffusion.cpp.

Try this solution.
The related docs are at https://huggingface.co/docs/diffusers/en/quantization/gguf. Be aware that loading GGUF checkpoints via pipelines is currently not supported, so you can only load the quantized checkpoint through the model class and then pass it to the pipeline:

import torch
from diffusers import QwenImageEditPlusPipeline, QwenImageTransformer2DModel

transformer = QwenImageTransformer2DModel.from_single_file(
    "./models/Qwen-Image-Edit-2511-FP8/model.safetensors",
    config="Qwen/Qwen-Image-Edit-2511",
    subfolder="transformer"
)

device = "cuda" if torch.cuda.is_available() else "cpu"

pipe = QwenImageEditPlusPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit-2511",
    transformer=transformer,
    torch_dtype=torch.float16,
).to(device)
