itsabhishek19 committed on
Commit 5814026 · verified · 1 Parent(s): fef9442

Update README.md

Files changed (1)
  1. README.md +60 -13
README.md CHANGED
@@ -1,22 +1,69 @@
  ---
- license: mit
- organization_profile: true
  ---

- # 🏛️ Celeste Imperia | High-Efficiency AI Weights

- **The official repository for local-first, hardware-optimized AI models.** ⚡

- This organization is dedicated to making advanced AI accessible on consumer hardware. We specialize in porting heavy encoders and LLMs to run on **NPUs** and **ARM** architectures.

- ### 📦 What you'll find here:
- * **NPU-Optimized Encoders:** CLIP and T5 variants converted for **Intel OpenVINO** and **Qualcomm AI Stack**.
- * **Consistent Character LoRAs:** High-fidelity character models trained for perfect persistence across frames.
- * **Edge-Ready LLMs:** Quantized and ported models specifically tuned for local CPU/NPU inference.

- ### 🛠️ Hardware Focus
- Our models are tested and optimized on local rigs (RTX A4000) to ensure they work for creators, not just data centers.

  ---
- 📫 **Inquiries:** [celesteimperia@gmail.com](mailto:celesteimperia@gmail.com)
- "Forging the future of Edge AI, one model at a time."
 
  ---
+ title: Celeste Imperia | Hardware-Aware AI Forge
+ emoji: 🌌
+ colorFrom: blue
+ colorTo: purple
+ sdk: static
+ pinned: true
+ thumbnail: >-
+   https://cdn-uploads.huggingface.co/production/uploads/697f74cb005a67fc114da2b4/OCwnTMS-RIt9cPQR0Gnxf.png
+ license: apache-2.0
+ tags:
+   - edge-ai
+   - qnn
+   - quantization
+   - openvino
+   - windows-on-arm
  ---

+ # Celeste Imperia | Hardware-Aware AI Forge

+ **Bridging the gap between frontier AI architectures and consumer silicon.**

+ Celeste Imperia specializes in the precision optimization of LLMs, VLMs, and diffusion architectures. We provide hardware-validated weights optimized for private, low-latency local inference across Qualcomm, Intel, and NVIDIA ecosystems.
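The quantized weights referenced throughout this card follow the standard post-training recipe: map floats onto a small integer grid plus a per-tensor scale. As a rough illustration only (not code from any of these repos; real pipelines use tools such as OpenVINO's NNCF or llama.cpp's quantizers), a symmetric INT8 round-trip looks like:

```python
# Illustrative sketch of symmetric INT8 post-training quantization.
# Pure Python for clarity; production quantizers work per-block and
# add zero-points, outlier handling, and calibration.

def quantize_int8(weights):
    """Map float weights onto the signed 8-bit grid [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    if scale == 0.0:
        return [0] * len(weights), 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer codes."""
    return [qi * scale for qi in q]

weights = [0.02, -1.27, 0.64, 0.0, 1.27]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
# max_err is bounded by scale / 2: the cost of storing 8 bits per weight
```

The same idea at 4 bits (INT4) halves the memory again at the price of a coarser grid, which is why the INT4 variants below target the tightest memory budgets.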

+ ---
+
+ ## 🏗️ The Development Forge: Workstation V2.0
+ All models are forged on a dual-GPU pipeline to ensure stability on both consumer-grade and professional hardware.
+
+ | Component | **Legacy Validation Rig** | **Current AI Workstation** |
+ | :--- | :--- | :--- |
+ | **Processor** | Intel Core i5-11400 | Intel Core i5-11400 (6C/12T) |
+ | **Primary GPU** | NVIDIA RTX A4000 (16GB) | **NVIDIA RTX 3090 (24GB GDDR6X)** |
+ | **Secondary GPU** | - | **NVIDIA RTX A4000 (16GB GDDR6)** |
+ | **Memory** | 16GB RAM | **64GB DDR4 RAM** |
+ | **Storage** | Standard SSD | **High-Speed NVMe and HDD Storage Pool** |
37
+
38
+ ---
39
+
40
+ ## ๐Ÿ“‚ Active Repositories
41
+
42
+ ### ๐Ÿ‰ Snapdragon & Mobile NPU (QNN Native)
43
+ * **[SDXL-QNN](https://huggingface.co/CelesteImperia/SDXL-QNN)** - Optimized for Qualcomm Hexagon NPU. Includes Python and C# automation tools.
44
+
45
+ ### ๐Ÿง  Domain-Specific Reasoning
46
+ * **[Llama-3-Indian-Finance](https://huggingface.co/CelesteImperia/Llama-3-8B-Indian-Finance-Quant-v2.0-RELEASE)** - Specialized reasoning for Indian financial and regulatory sectors.
47
 
48
+ ### ๐ŸŽจ Cross-Platform Optimization (OpenVINO / GGUF)
49
+ * **[SDXL-OpenVINO-Trinity](https://huggingface.co/CelesteImperia/celeste-imperia-sdxl-openvino)** - 4-step generation for Intel i5/i7.
50
+ * **[Whisper-Large-V3-Turbo](https://huggingface.co/CelesteImperia/whisper-large-v3-turbo-openvino)** - High-speed speech-to-text.
51
+ * **[Qwen2-VL-2B-INT4](https://huggingface.co/CelesteImperia/Qwen2-VL-2B-Instruct-OpenVINO-INT4)** - Vision-language reasoning.
52
+ * **[Llama-3.2-1B-GGUF](https://huggingface.co/CelesteImperia/Llama-3.2-1B-Instruct-GGUF)** / **[Phi-3.5-GGUF](https://huggingface.co/CelesteImperia/Phi-3.5-mini-instruct-GGUF)** - Mobile-ready LLMs.
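When choosing a GGUF quantization level for a given device, a back-of-the-envelope sizing check helps. The sketch below is illustrative only (the per-weight bit costs are approximate averages for common llama.cpp quant types, not exact file sizes, and it ignores KV cache and runtime overhead):

```python
# Rough weights-only memory estimate for a model at a given bit width.
# Illustrative helper, not part of any linked repo.

def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weights-only memory in GB (decimal), ignoring KV cache."""
    return n_params * bits_per_weight / 8 / 1e9

# A ~1B-parameter model at common quantization levels
# (bits-per-weight values are approximate averages):
for label, bits in [("F16", 16), ("Q8_0", 8.5), ("Q4_K_M", 4.85)]:
    print(f"{label}: ~{weight_memory_gb(1.0e9, bits):.2f} GB")
```

Reading off the estimates makes the trade-off concrete: a 4-bit build fits comfortably in phone-class memory where an F16 build would not.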

  ---
+
+ ### ☕ Support the Forge
+ Maintaining a dual-GPU AI workstation and hosting high-bandwidth models requires significant resources. If our open-source tools power your projects, consider supporting our development:
+
+ | Platform | Support Link |
+ | :--- | :--- |
+ | **Global & India** | [Support via Razorpay](https://razorpay.me/@huggingface) |
+
+ **Scan to support via UPI (India only):**
+
+ <img src="https://huggingface.co/datasets/CelesteImperia/Assets/resolve/main/QrCode.jpeg" width="200">
+
+ ---
+
+ **Connect with the architect:** [Abhishek Jaiswal on LinkedIn](https://www.linkedin.com/in/abhishek-jaiswal-524056a/)