Llama-3.1-8B-Instruct-GGUF (Platinum Series)


This repository contains the Platinum Series universal GGUF release of Llama-3.1-8B-Instruct. The model offers a 128k context window and reasoning capabilities distilled from the Llama 3.1 405B model, and this collection is optimized for enterprise-grade financial and regulatory analysis.

πŸ“¦ Available Files & Quantization Details

| File Name | Quantization | Size | Accuracy | Recommended For |
|---|---|---|---|---|
| Llama-3.1-8B-Instruct-Platinum-F16.gguf | FP16 | ~16.1 GB | 100% | Master Reference / Benchmarking |
| Llama-3.1-8B-Instruct-Platinum-Q8_0.gguf | Q8_0 | ~8.1 GB | 99.9% | Platinum Reference / High-Fidelity |
| Llama-3.1-8B-Instruct-Platinum-Q6_K.gguf | Q6_K | ~6.6 GB | 99.8% | Complex Regulatory Reasoning |
| Llama-3.1-8B-Instruct-Platinum-Q5_K_M.gguf | Q5_K_M | ~5.8 GB | 99.5% | Balanced High-End Performance |
| Llama-3.1-8B-Instruct-Platinum-Q4_K_M.gguf | Q4_K_M | ~5.0 GB | 99.0% | Default / General Purpose |
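
As a rule of thumb when picking a quant, the file size plus the KV cache for your chosen context length must fit in VRAM. A rough sketch of that estimate (the layer/head figures are the standard Llama 3.1 8B architecture values; the 1 GiB overhead is an assumption, not a benchmark of this release):

```python
# Rough VRAM estimate for fully offloading a quant (all figures approximate).
# Llama 3.1 8B: 32 layers, 8 KV heads (GQA), head_dim 128, FP16 K+V cache.

def kv_cache_gib(n_ctx: int, n_layers: int = 32, n_kv_heads: int = 8,
                 head_dim: int = 128, bytes_per_elem: int = 2) -> float:
    """Approximate KV-cache size in GiB for a given context length."""
    # factor of 2 covers both the K and V tensors
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * n_ctx / 1024**3

def fits(model_gib: float, n_ctx: int, vram_gib: float,
         overhead_gib: float = 1.0) -> bool:
    """True if weights + KV cache + a fixed runtime overhead fit in VRAM."""
    return model_gib + kv_cache_gib(n_ctx) + overhead_gib <= vram_gib

# Example: Q4_K_M (~5.0 GB) at 32k context on a 12 GiB GPU
print(fits(5.0, 32768, 12.0))
```

At the full 131,072-token context the FP16 KV cache alone reaches roughly 16 GiB, which is why the Python example below defaults to a smaller `n_ctx`.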

🐍 Python Inference (llama-cpp-python)

To run these models with Python, use the `llama-cpp-python` bindings:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3.1-8B-Instruct-Platinum-Q8_0.gguf",
    n_gpu_layers=-1,  # offload all layers to the NVIDIA/Apple GPU
    n_ctx=32768,      # model supports up to 131,072; 32k keeps KV-cache memory modest
)

output = llm("Analyze the 2026 RBI Internal Ombudsman Directions.", max_tokens=512)
print(output["choices"][0]["text"])
```
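
When calling the raw completion API as above, the prompt should already be in Llama 3.1's chat format (`llama-cpp-python`'s `create_chat_completion` method applies the template embedded in the GGUF for you). A minimal sketch of building that prompt by hand, using the standard Llama 3.1 header tokens; the system/user strings are illustrative:

```python
# Build a Llama 3.1 instruct prompt by hand (special tokens per Meta's template).

def llama31_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = llama31_prompt(
    "You are a financial-regulation analyst.",
    "Summarize the key obligations of an internal ombudsman.",
)
# Pass `prompt` to llm(...) as above; the model's reply terminates with <|eot_id|>.
```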

πŸ’» For C# / .NET Users (LLamaSharp)

This collection is fully compatible with .NET applications via the LLamaSharp library.

```csharp
using LLama.Common;
using LLama;

var parameters = new ModelParams("Llama-3.1-8B-Instruct-Platinum-Q8_0.gguf")
{
    ContextSize = 8192,
    GpuLayerCount = 33  // all 32 transformer layers plus the output layer
};

using var model = LLamaWeights.LoadFromFile(parameters);
using var context = model.CreateContext(parameters);
var executor = new InteractiveExecutor(context);

Console.WriteLine("Platinum Finance Engine Active.");
```

πŸ—οΈ Technical Details

  • Optimization Tool: llama-quantize (CMake Build 2026-03-26)
  • Architecture: Llama 3.1 (8B)
  • Hardware Validation: Dual-GPU (RTX 3090 + RTX A4000)

β˜• Support the Forge

Maintaining the production line for high-fidelity models requires significant hardware resources. If these tools power your research or industrial projects, please consider supporting the development:

| Platform | Support Link |
|---|---|
| Global & India | Support via Razorpay |

Scan to support via UPI (India Only):


Connect with the architect: Abhishek Jaiswal on LinkedIn
