---
title: Celeste Imperia | Hardware-Aware AI Forge
emoji: 🌌
colorFrom: blue
colorTo: purple
sdk: static
pinned: true
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/697f74cb005a67fc114da2b4/OCwnTMS-RIt9cPQR0Gnxf.png
license: apache-2.0
tags:
  - edge-ai
  - qnn
  - quantization
  - openvino
  - windows-on-arm
---

# Celeste Imperia | Hardware-Aware AI Forge

**Bridging the gap between frontier AI architectures and consumer silicon.**

Celeste Imperia specializes in the precision optimization of LLM, VLM, and diffusion architectures. We provide hardware-validated weights optimized for private, low-latency on-device execution across Qualcomm, Intel, and NVIDIA ecosystems.

---

## 🏗️ The Development Forge: Workstation V2.0

All models are forged on a specialized dual-GPU pipeline to ensure stability across both consumer-grade and professional hardware.

| Component | **Legacy Validation Rig** | **Current AI Workstation** |
| :--- | :--- | :--- |
| **Processor** | Intel Core i5-11400 | Intel Core i5-11400 (6C/12T) |
| **Primary GPU** | NVIDIA RTX A4000 (16GB) | **NVIDIA RTX 3090 (24GB GDDR6X)** |
| **Secondary GPU** | - | **NVIDIA RTX A4000 (16GB GDDR6)** |
| **Memory** | 16GB RAM | **64GB DDR4 RAM** |
| **Storage** | Standard SSD | **High-speed NVMe + HDD storage pool** |

---

## 📂 Active Repositories

### 🐉 Snapdragon & Mobile NPU (QNN Native)

* **[SDXL-QNN](https://huggingface.co/CelesteImperia/SDXL-QNN)** - Optimized for the Qualcomm Hexagon NPU. Includes Python and C# automation tools.

### 🧠 Domain-Specific Reasoning

* **[Llama-3-Indian-Finance](https://huggingface.co/CelesteImperia/Llama-3-8B-Indian-Finance-Quant-v2.0-RELEASE)** - Specialized reasoning for Indian financial and regulatory sectors.

### 🎨 Cross-Platform Optimization (OpenVINO / GGUF)

* **[SDXL-OpenVINO-Trinity](https://huggingface.co/CelesteImperia/celeste-imperia-sdxl-openvino)** - 4-step image generation for Intel i5/i7 CPUs.
* **[Whisper-Large-V3-Turbo](https://huggingface.co/CelesteImperia/whisper-large-v3-turbo-openvino)** - High-speed speech-to-text.
* **[Qwen2-VL-2B-INT4](https://huggingface.co/CelesteImperia/Qwen2-VL-2B-Instruct-OpenVINO-INT4)** - Vision-language reasoning.
* **[Llama-3.2-1B-GGUF](https://huggingface.co/CelesteImperia/Llama-3.2-1B-Instruct-GGUF)** / **[Phi-3.5-GGUF](https://huggingface.co/CelesteImperia/Phi-3.5-mini-instruct-GGUF)** - Mobile-ready LLMs.

---

### ☕ Support the Forge

Maintaining a dual-GPU AI workstation and hosting high-bandwidth models requires significant resources. If our open-source tools power your projects, consider supporting our development:

| Platform | Support Link |
| :--- | :--- |
| **Global & India** | [Support via Razorpay](https://razorpay.me/@huggingface) |

**Scan to support via UPI (India Only):**

---

**Connect with the architect:** [Abhishek Jaiswal on LinkedIn](https://www.linkedin.com/in/abhishek-jaiswal-524056a/)