Celeste Imperia | Hardware-Aware AI Forge

Bridging the gap between frontier AI architectures and consumer silicon.

Celeste Imperia specializes in the precision optimization of LLM, VLM, and diffusion architectures. We provide hardware-validated weights optimized for private, low-latency, on-device execution across Qualcomm, Intel, and NVIDIA ecosystems.


πŸ—οΈ The Development Forge: Workstation V2.0

All models are forged on a specialized dual-GPU pipeline to ensure stability on both consumer ("Masses-Ready") and professional hardware.

| Component | Legacy Validation Rig | Current AI Workstation |
|---|---|---|
| Processor | Intel Core i5-11400 | Intel Core i5-11400 (6C/12T) |
| Primary GPU | NVIDIA RTX A4000 (16GB) | NVIDIA RTX 3090 (24GB GDDR6X) |
| Secondary GPU | – | NVIDIA RTX A4000 (16GB GDDR6) |
| Memory | 16GB RAM | 64GB DDR4 RAM |
| Storage | Standard SSD | High-speed NVMe + HDD storage pool |
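For readers who want to verify their own rig before running these models, a minimal sketch of enumerating CUDA devices with PyTorch (on the workstation above this would list the RTX 3090 and RTX A4000; on a CPU-only machine it returns an empty list — the function name `list_gpus` is ours, not part of any repo here):

```python
def list_gpus():
    """Return the names of all visible CUDA devices, or [] if none."""
    try:
        import torch
        if torch.cuda.is_available():
            return [torch.cuda.get_device_name(i)
                    for i in range(torch.cuda.device_count())]
    except ImportError:
        pass  # PyTorch not installed; treat as no GPUs
    return []

print(list_gpus())
```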

πŸ“‚ Active Repositories

πŸ‰ Snapdragon & Mobile NPU (QNN Native)

  • SDXL-QNN - Optimized for Qualcomm Hexagon NPU. Includes Python and C# automation tools.
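As a rough illustration of how QNN-optimized weights are consumed, here is a hedged sketch of provider selection with ONNX Runtime's QNN execution provider, falling back to CPU off-device. The helper `pick_provider` and the commented-out model path are our own illustrations, not part of the SDXL-QNN tooling:

```python
def pick_provider(available):
    """Prefer the Hexagon NPU (QNN EP) when present, else CPU."""
    for ep in ("QNNExecutionProvider", "CPUExecutionProvider"):
        if ep in available:
            return ep
    raise RuntimeError("no usable execution provider")

try:
    import onnxruntime as ort
    provider = pick_provider(ort.get_available_providers())
    # session = ort.InferenceSession("unet.onnx", providers=[provider])
except ImportError:
    provider = pick_provider(["CPUExecutionProvider"])  # demo fallback
print(provider)
```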

🧠 Domain-Specific Reasoning

🎨 Cross-Platform Optimization (OpenVINO / GGUF)
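For the OpenVINO path, deployment typically reduces to compiling an IR model on whatever device is available. A minimal sketch, assuming the `openvino` package is installed; the commented-out `model.xml` path is a hypothetical placeholder:

```python
try:
    from openvino import Core
    core = Core()
    # Prefer an Intel GPU/iGPU when the plugin reports one, else CPU.
    device = "GPU" if any(d.startswith("GPU")
                          for d in core.available_devices) else "CPU"
    # compiled = core.compile_model("model.xml", device)
except ImportError:
    device = "CPU"  # openvino not installed; demo fallback
print(device)
```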


β˜• Support the Forge

Maintaining a dual-GPU AI workstation and hosting high-bandwidth models requires significant resources. If our open-source tools power your projects, please consider supporting development:

| Platform | Support Link |
|---|---|
| Global & India | Support via Razorpay |

Scan to support via UPI (India Only):


Connect with the architect: Abhishek Jaiswal on LinkedIn
