# CelesteImperia: SDXL QNN (Snapdragon NPU Native)
The elite tier of optimization. This repository contains NPU-Native DLC files forged specifically for the Qualcomm Hexagon NPU (Snapdragon X Elite).
## The Snapdragon Advantage
- NPU-Native: Forged using the Qualcomm AI Stack (QNN/SNPE).
- Slim King: 10.3GB master weights compressed to 2.39GB via Enhanced INT8 Quantization.
- Hardware-Mapped: Fixed shapes (1024x1024) ensure maximum hardware block utilization.
## Specifications
- Target: Hexagon NPU
- Input Geometry: Fixed 1x4x128x128 (Latent)
- Quantization: Enhanced Per-Channel INT8
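The fixed latent geometry maps directly to the 1024x1024 output through the SDXL VAE's standard 8x spatial upscale. A minimal numpy sketch (illustrative only; the scale factor is the SDXL default, not specific to this repository) to verify the shape relationship:

```python
import numpy as np

# SDXL latent input: batch=1, channels=4, 128x128 spatial grid
latent = np.zeros((1, 4, 128, 128), dtype=np.float32)

# The SDXL VAE upsamples each spatial dimension by 8x,
# so a 128x128 latent decodes to a 1024x1024 RGB image.
VAE_SCALE = 8
out_h = latent.shape[2] * VAE_SCALE
out_w = latent.shape[3] * VAE_SCALE
print(out_h, out_w)  # 1024 1024
```

Because the graph is compiled with fixed shapes, any other latent resolution will be rejected by the NPU runtime rather than resized on the fly.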
## Python Quickstart (Basic Starter)
Use the provided `inference_qnn.py` to test the model on your local setup (requires Qualcomm AI Stack):
1. **Install Requirements:**
`pip install numpy pillow`
2. **Run Inference:**
`python inference_qnn.py --model ./unet.bin --prompt "Lord Shiva in a cosmic forest, 8k resolution"`
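To run several prompts in one go, the starter script can be driven from a small batch wrapper. A sketch, assuming the script name and flags documented above; the prompts and model path are placeholders:

```python
import subprocess

# Placeholder prompts -- replace with your own.
prompts = [
    "Lord Shiva in a cosmic forest, 8k resolution",
    "A cinematic vedic deity, golden hour lighting",
]

for prompt in prompts:
    # Build the same command line as the quickstart above.
    cmd = [
        "python", "inference_qnn.py",
        "--model", "./unet.bin",
        "--prompt", prompt,
    ]
    print("Running:", " ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment once the QNN SDK is set up
```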
## How to Use in Local Dream (Android)
To run this SDXL model on Snapdragon devices (8 Gen 1+) using Local Dream:
- Model Path: `/sdcard/Android/data/io.github.xororz.localdream/files/sd_models/`
- Setup Folder: Create `SDXL-QNN-Celeste` inside that path and place:
  - `unet.bin` (QNN serialized graph)
  - `text_encoder.mnn` (CLIP model)
  - `vae.mnn` (VAE model)
- Import: Settings -> Import Custom Model -> Select the folder.
- Backend: Set to NPU (QNN) for maximum speed.
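The folder layout above can also be staged from a desktop over `adb`. A minimal sketch, assuming the three model files sit in the current directory and a device is attached (the push is left commented out so the script is safe to dry-run):

```python
import subprocess
from pathlib import Path

# File names and remote path taken from the Local Dream layout above.
MODEL_FILES = ["unet.bin", "text_encoder.mnn", "vae.mnn"]
REMOTE = "/sdcard/Android/data/io.github.xororz.localdream/files/sd_models/SDXL-QNN-Celeste/"

def deploy(local_dir="."):
    """Build the adb commands needed to stage the model folder on-device."""
    cmds = [["adb", "shell", "mkdir", "-p", REMOTE]]
    for name in MODEL_FILES:
        src = Path(local_dir) / name
        cmds.append(["adb", "push", str(src), REMOTE])
    return cmds

for cmd in deploy():
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment with a device attached
```

After the push, the folder appears under Settings -> Import Custom Model as described above.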
## Python Starter: `inference_qnn.py`

```python
import argparse

import numpy as np
from PIL import Image

# This is a starter script for Qualcomm QNN NPU inference.
# Users will need the QNN SDK environment set up.

def run_qnn_inference(model_path, prompt, output_path="result.png"):
    print(f"Initializing QNN backend for model: {model_path}")
    print(f"Processing prompt: {prompt}")
    # Placeholder for QNN inference logic:
    # 1. Load the QNN model (unet.bin / vae.bin)
    # 2. Tokenize prompt and run CLIP
    # 3. Run UNet iterations on NPU
    # 4. Decode with VAE
    print("Generation complete! Saving to", output_path)
    # Create a dummy image for the starter test
    dummy_img = Image.new("RGB", (1024, 1024), color=(73, 109, 137))
    dummy_img.save(output_path)

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="SDXL-QNN Starter Script")
    parser.add_argument("--model", type=str, required=True, help="Path to unet.bin")
    parser.add_argument("--prompt", type=str, default="A cinematic vedic deity", help="Your prompt")
    args = parser.parse_args()
    run_qnn_inference(args.model, args.prompt)
```
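Step 3 of the placeholder logic (UNet iterations) typically applies classifier-free guidance at each denoising step: the UNet runs twice per step, and the two noise predictions are blended. A numpy sketch of that blend; the guidance scale of 7.5 is a common SDXL default, not a value mandated by this model:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two UNet outputs per denoising step, each shaped like the latent (1, 4, 128, 128):
# one unconditional (empty prompt), one conditioned on the text embedding.
noise_uncond = rng.standard_normal((1, 4, 128, 128)).astype(np.float32)
noise_cond = rng.standard_normal((1, 4, 128, 128)).astype(np.float32)

guidance_scale = 7.5  # typical SDXL default; tune to taste
# Push the prediction away from unconditional, toward the text-conditioned result.
guided = noise_uncond + guidance_scale * (noise_cond - noise_uncond)
print(guided.shape)  # (1, 4, 128, 128)
```

Higher guidance scales follow the prompt more literally at the cost of image diversity; on fixed-shape NPU graphs both UNet passes reuse the same compiled graph.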
## C# Console Deployer (adb)

```csharp
using System;
using System.Diagnostics;
using System.IO;

namespace QnnModelDeployer
{
    class Program
    {
        // Update these paths to match your local setup
        static string LocalModelPath = @"./unet.bin";
        static string RemotePath = "/sdcard/Android/data/io.github.xororz.localdream/files/sd_models/SDXL-QNN-Celeste/";

        static void Main(string[] args)
        {
            Console.WriteLine("Starting SDXL-QNN deployment to Android...");
            if (!File.Exists(LocalModelPath))
            {
                Console.WriteLine($"Error: {LocalModelPath} not found in current directory.");
                return;
            }
            // 1. Ensure the remote directory exists
            RunAdbCommand($"shell mkdir -p {RemotePath}");
            // 2. Push the model file
            Console.WriteLine($"Pushing {LocalModelPath} to {RemotePath}...");
            RunAdbCommand($"push {LocalModelPath} {RemotePath}");
            Console.WriteLine("Deployment complete! You can now import the model in Local Dream.");
        }

        static void RunAdbCommand(string arguments)
        {
            var processInfo = new ProcessStartInfo
            {
                FileName = "adb",
                Arguments = arguments,
                RedirectStandardOutput = true,
                RedirectStandardError = true,
                UseShellExecute = false,
                CreateNoWindow = true
            };
            using (var process = Process.Start(processInfo))
            {
                // Drain both streams before waiting, to avoid deadlocking
                // on a full pipe buffer when adb produces lots of output.
                string output = process.StandardOutput.ReadToEnd();
                string error = process.StandardError.ReadToEnd();
                process.WaitForExit();
                if (!string.IsNullOrEmpty(error))
                {
                    Console.WriteLine($"ADB notice: {error}");
                }
            }
        }
    }
}
```
## Support My Work
I develop and port open-source AI models and tools for the community. If you find my work helpful, consider supporting the development and compute costs!
| Platform | Support Link |
|---|---|
| Global & India | Support via Razorpay |
Scan to support via UPI (India Only):

## Model Tree
- Base model: `stabilityai/stable-diffusion-xl-base-1.0`