# CelesteImperia Financial Engine v1.1 (Spiral)
*The Quant-Reasoning LoRA for Indian Equities*
This is a specialized Low-Rank Adaptation (LoRA) for Llama-3-8B, fine-tuned on a proprietary dataset of quantitative reasoning chains for the Indian stock market (NSE/BSE).
> ⚠️ **IMPORTANT UPDATE (April 2026):** A significantly upgraded version of this model is now available. **Indian Finance Quant v2.1 - DeepSeek Reasoning Edition** features a "Dual-Brain" architecture capable of synthesizing technical indicators, live market sentiment, and complex regulatory/legal constraints using native `<think>` reasoning chains. Please migrate to v2.1 for all "Indian Finance Quant" deployments.
## Key Features
- **Chain-of-Thought Reasoning:** The engine emits `<think>` tags that break down RSI, Support/Resistance, and Momentum analysis before giving a verdict.
- NSE/BSE Optimized: Understands the nuances of Indian market sentiment and large-cap stock behavior (e.g., Reliance, HDFC, TCS).
- Lightweight & Fast: Optimized via Unsloth for 4-bit inference on consumer GPUs.
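Since the reasoning chains lean on indicators like RSI, a quick reference for how a 14-period RSI is conventionally computed (Wilder's smoothing) may help when sanity-checking the model's arithmetic. This standalone sketch is illustrative only and is not part of the model:

```python
def rsi(closes, period=14):
    """Wilder-smoothed Relative Strength Index over a sequence of closing prices."""
    gains, losses = [], []
    for prev, cur in zip(closes, closes[1:]):
        change = cur - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    if len(gains) < period:
        raise ValueError("need at least period + 1 closes")
    # Seed with simple averages, then apply Wilder's recursive smoothing.
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0  # no losses in the window: maximally overbought
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)
```

Values above ~70 are conventionally read as overbought and below ~30 as oversold, which is the kind of intermediate judgment the reasoning chains verbalize.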
## Repository Structure
To support various algorithmic trading and scraping pipelines, this repository contains three distinct formats:
- `/adapter`: The raw LoRA adapter weights (`.safetensors`).
- `/full_model`: The fully merged and sharded model weights (4 parts).
- `/gguf`: A compressed Q4_K_M version optimized for CPU-bound inference (Hugging Face Spaces Free Tier).
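A tiny dispatch helper (hypothetical, not shipped with this repository) illustrates how a pipeline might select the right artifact per inference backend; the subfolder names match the structure above:

```python
REPO_ID = "CelesteImperia/Llama-3-8B-Quant-Reasoning-v1.1"

# Map each inference backend to the subfolder documented above.
ARTIFACTS = {
    "peft": "adapter",             # raw LoRA weights, applied on top of the base model
    "transformers": "full_model",  # merged, sharded full weights
    "llama_cpp": "gguf",           # Q4_K_M quantization for CPU-bound inference
}

def artifact_subfolder(backend: str) -> str:
    """Return the repo subfolder for a given backend, or raise for unknown ones."""
    try:
        return ARTIFACTS[backend]
    except KeyError:
        raise ValueError(f"unsupported backend: {backend!r}") from None
```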
## Training Infrastructure
The v1.1 weights were trained locally in a dedicated WSL2 environment utilizing a dual-GPU architecture:
- Primary Compute: NVIDIA RTX 3090
- Secondary Accelerator: NVIDIA RTX A4000
## Usage & Quick Start
### Option A: GPU Inference (Unsloth)
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "CelesteImperia/Llama-3-8B-Quant-Reasoning-v1.1",
    load_in_4bit=True,  # matches the 4-bit optimization noted above
)
```
### Option B: CPU/Edge Inference (GGUF)
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="CelesteImperia/Llama-3-8B-Quant-Reasoning-v1.1",
    filename="gguf/llama-3-8b.Q4_K_M.gguf",
)
llm = Llama(model_path=model_path, n_ctx=2048)
```
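Once a completion comes back, the `<think>` reasoning block described above can be separated from the final verdict. This parsing helper reflects an assumption about the output format and is not an official API of the model:

```python
import re

def split_reasoning(completion: str) -> tuple[str, str]:
    """Separate the <think>...</think> chain from the final verdict text.

    Assumes the model wraps its chain-of-thought in a single <think> block;
    if no block is found, the whole completion is treated as the verdict.
    """
    match = re.search(r"<think>(.*?)</think>", completion, flags=re.DOTALL)
    if match is None:
        return "", completion.strip()
    reasoning = match.group(1).strip()
    verdict = completion[match.end():].strip()
    return reasoning, verdict
```

Splitting the two parts lets a pipeline log the reasoning chain for audit while acting only on the verdict.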
## Terms of Use & Licensing
- Base License: Subject to the Meta Llama-3 Community License.
- Financial Data License: Reasoning layers are proprietary to CelesteImperia (CC BY-NC 4.0).
- Non-Commercial Restriction: You may not use this model for commercial trading services or paid signal groups without consent.
- Authenticity Tracking: All outputs contain visible and invisible steganographic watermarks to identify usage.
- Disclaimer: This is an AI research tool. It does not provide financial advice. CelesteImperia is not responsible for financial losses.
**Base model:** `meta-llama/Meta-Llama-3-8B`