QwXL v.1

I’ve been experimenting with modifying an SDXL model by replacing both of its original text encoders with Qwen 0.5B. I always found the 77-token prompt limit frustrating; it really takes the fun out of creating.

A smaller model like Qwen 0.5B may not match the original encoders in prompt accuracy, but the trade-off is worth it: prompts can go well beyond 77 tokens, which opens up a lot more creative freedom.
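As a toy sketch of why the swap removes the cap (the function names and token counts here are purely illustrative, not this repo's code): a CLIP-style encoder has fixed positional embeddings, so input is hard-truncated at 77 tokens, while a causal LM encoder like Qwen has no such ceiling.

```python
def clip_style_encode(tokens, max_len=77):
    # CLIP-style text encoders use a fixed positional-embedding table,
    # so anything past max_len is silently dropped before encoding.
    return tokens[:max_len]

def lm_style_encode(tokens):
    # A decoder-style LM encoder has no comparable hard cap,
    # so the full prompt reaches the model.
    return tokens

prompt_tokens = list(range(120))  # stand-in for a 120-token prompt
print(len(clip_style_encode(prompt_tokens)))  # 77 — tail of the prompt is lost
print(len(lm_style_encode(prompt_tokens)))    # 120 — everything is kept
```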

```python
import torch
from Q_pipeline import QPipeline

MODEL_PATH = "model"
PROMPT = "portrait of a beautiful woman wearing a sundress at a lake, looking at camera, d & d, nice outfit, long hair, intricate, elegant, stylish, realistic"
NEGATIVE_PROMPT = "low quality, blurry"
OUTPUT_IMAGE_PATH = "1.png"
SEED = 42

def main():
    if not torch.cuda.is_available():
        raise RuntimeError("GPU is required.")

    # Load the pipeline in bfloat16 directly onto the GPU
    pipe = QPipeline.from_pretrained(
        MODEL_PATH,
        torch_dtype=torch.bfloat16,
        device="cuda",
    )

    # Fixed seed for reproducible outputs
    generator = torch.Generator(device="cuda").manual_seed(SEED)

    result = pipe(
        prompt=PROMPT,
        negative_prompt=NEGATIVE_PROMPT,
        num_inference_steps=40,
        guidance_scale=7.5,
        generator=generator,
        width=1024,
        height=1024,
    )

    result["images"][0].save(OUTPUT_IMAGE_PATH)
    print(f"Saved to {OUTPUT_IMAGE_PATH}")

if __name__ == "__main__":
    main()
```
Model tree for kpsss34/QwXL-EXP-1