---
license: llama3
base_model:
- meta-llama/Meta-Llama-3-8B
---
# ♾️ Infinity 1.0 (Llama-3-8B GGUF)

**Developed by:** [RockSky1](https://huggingface.co/RockSky1)
**Model Type:** Causal Language Model
**Base Model:** Meta-Llama-3-8B
**Format:** GGUF (quantized for efficiency)

## 🚀 Overview
**Infinity 1.0** is a fine-tuned variant of Meta-Llama-3-8B. It is designed to be the "brain" of the Infinity AI ecosystem, offering fast, creative, and technically sound responses, and it is optimized for local deployment and low-latency interaction.

## ✨ Key Features
* **Optimized Architecture:** Fine-tuned over multiple epochs (v5 development cycle) for stronger reasoning.
* **GGUF Format:** Ready for offline use in LM Studio, Ollama, and mobile LLM runners.
* **Quantized Precision:** Balanced performance-to-size ratio via Q4_K_M quantization.
* **Coding & Logic:** Strong capabilities in full-stack development and architectural logic.

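Since the weights ship as GGUF, a quick way to sanity-check a downloaded file is to inspect its magic bytes before loading it into a runner. A minimal sketch in Python (the `is_gguf` helper is illustrative, not part of this repository):

```python
import struct

def is_gguf(header: bytes) -> bool:
    """Return True if the given bytes look like the start of a GGUF file.

    GGUF files begin with the 4-byte magic b"GGUF" followed by a
    little-endian uint32 format version.
    """
    if len(header) < 8:
        return False
    magic, version = struct.unpack("<4sI", header[:8])
    return magic == b"GGUF" and version >= 1

# Synthetic version-3 GGUF header (not a real model file).
fake_header = b"GGUF" + struct.pack("<I", 3)
print(is_gguf(fake_header))       # True
print(is_gguf(b"NOTAGGUFFILE"))   # False
```

In practice you would read the first 8 bytes of the downloaded `.gguf` file and pass them to the helper.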
## 🛠️ How to Use
You can run this model offline with any GGUF-compatible runner:

1. **LM Studio:** Search for `RockSky1/Infinity_1.0` and download the quantized file.
2. **Ollama:** Create a Modelfile and point it to the `.gguf` file.
3. **Mobile:** Load it via an app such as Layla or MLC LLM.

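For the Ollama route, a minimal Modelfile might look like this (the `.gguf` filename below is a placeholder for whichever quantized file you actually downloaded):

```
FROM ./infinity-1.0.Q4_K_M.gguf
```

Then register and run the model with `ollama create infinity -f Modelfile` followed by `ollama run infinity`.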
## 📜 License
This model is distributed under the Meta Llama 3 Community License.

---
*Created with ❤️ by Shivam Kumar (RockSky1)*