# Celeste-Gemma-4-31B-Dense-Platinum-GGUF (Platinum Series)
Optimized GGUF weights for Gemma 4 (31B Dense), ported by CelesteImperia on an NVIDIA RTX 3090 AI Workstation.
## 🌟 Key Features
- Architecture: Gemma 4 (Instruction Tuned)
- Context Window: 256,000 Tokens (Native p-RoPE support)
- Intelligence: Frontier-level reasoning (MMLU Pro 85.2%)
- Quantization: High-fidelity K-Quants forged with the llama.cpp `gemma4-day0` branch (b8642)
This repository contains the Platinum Series universal GGUF release of Gemma-4-31B-Dense. This collection provides professional-grade quantization levels optimized for high-fidelity reasoning, long-context retrieval, and multi-step logic. Ported manually to ensure zero weight-map corruption, these quants are optimized for local 24GB VRAM workstations.
## 📦 Available Files & Quantization Details
| File | Method | Description |
|---|---|---|
| Q3_K_M | k-quant | Consumer Grade (~14.2 GB). Optimized for 16GB VRAM cards (RTX 4080 / A4000). |
| Q4_K_M | k-quant | The Gold Standard. Optimal balance of logic retention and inference speed. |
| Q5_K_M | k-quant | Platinum Tier. Recommended for the RTX 3090 to maintain high reasoning stability. |
| Q6_K | k-quant | High-bit precision for complex logic and massive 100k+ token document analysis. |
| Q8_0 | block-quant | The "Reference" version. Near-perfect fidelity to the original BF16 master. |
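As a rough guide to how these tiers trade size for fidelity, file size scales with the average bits stored per weight. The bits-per-weight figures below are approximate averages typical of llama.cpp k-quants, not measurements of these specific files; real GGUF files also carry metadata and a few higher-precision tensors, so treat the output as a back-of-envelope estimate only.

```python
# Rough GGUF file-size estimate: params * bits-per-weight / 8.
# Bits-per-weight values are APPROXIMATE averages for llama.cpp
# k-quants, used here purely for illustration.
PARAMS = 31e9  # 31B dense

BITS_PER_WEIGHT = {
    "Q3_K_M": 3.9,
    "Q4_K_M": 4.85,
    "Q5_K_M": 5.7,
    "Q6_K": 6.6,
    "Q8_0": 8.5,
}

def estimate_gb(quant: str, params: float = PARAMS) -> float:
    """Estimated file size in gigabytes for a given quant level."""
    return params * BITS_PER_WEIGHT[quant] / 8 / 1e9

for q in BITS_PER_WEIGHT:
    print(f"{q}: ~{estimate_gb(q):.1f} GB")
```

This is why Q4_K_M lands comfortably inside a 24GB card with room for context, while Q8_0 generally needs to spill into system RAM.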
## 🛠️ Usage (Ollama / llama.cpp)
To use the native Thinking Mode, ensure your runtime passes the correct control tokens. With Ollama:

```shell
ollama run Celeste-Gemma-4-31B-Q4_K_M
```
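If you prefer to drive Ollama from code rather than the CLI, its HTTP API can be called directly. This is a minimal sketch assuming Ollama's documented default endpoint (`localhost:11434`, `/api/generate`) and the model tag from the command above:

```python
import json
import urllib.request

# Ollama's default local endpoint (assumes a running `ollama serve`).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "Celeste-Gemma-4-31B-Q4_K_M") -> dict:
    """Assemble the JSON payload for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the response text."""
    data = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Build (but do not send) a payload to show the request shape.
payload = build_request("Summarize GGUF quantization in one sentence.")
print(payload)
```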
## 🐍 Python Inference (llama-cpp-python)
To run these quants from Python using llama-cpp-python:
```python
from llama_cpp import Llama

# Initialize the model for 24GB VRAM (RTX 3090)
llm = Llama(
    model_path="./Gemma-4-31B-Q4_K_M.gguf",
    n_gpu_layers=-1,  # Offload all layers to VRAM
    n_ctx=32768,      # Extended context window
)

# Generate a response with native Thinking tokens
output = llm(
    "<|think|>\nAnalyze the logic of the following legal document:",
    max_tokens=1024,
    stop=["<turn|>", "<|file_separator|>"],
    echo=True,
)
print(output["choices"][0]["text"])
```
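The control-token strings used above can be wrapped in a small helper so prompts stay consistent across calls. The token values here are taken verbatim from the snippet; whether they match this model's actual chat template is an assumption:

```python
# Control tokens copied from the inference snippet above (assumed to be
# this model's thinking-mode markers; verify against the GGUF chat template).
THINK_TOKEN = "<|think|>"
STOP_TOKENS = ["<turn|>", "<|file_separator|>"]

def thinking_prompt(task: str) -> str:
    """Prefix a task with the thinking-mode control token."""
    return f"{THINK_TOKEN}\n{task}"

prompt = thinking_prompt("Analyze the logic of the following legal document:")
print(prompt)
```

The helper's output can be passed straight to `llm(...)` together with `stop=STOP_TOKENS`.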
## 💻 For C# / .NET Users (LLamaSharp)
This collection is fully compatible with .NET applications via the LLamaSharp library:
```csharp
using LLama;
using LLama.Common;

var parameters = new ModelParams("Gemma-4-31B-Q4_K_M.gguf")
{
    ContextSize = 32768,
    GpuLayerCount = -1  // Offload all layers to the RTX 3090's VRAM
};

using var weights = LLamaWeights.LoadFromFile(parameters);
using var context = weights.CreateContext(parameters);
var executor = new InteractiveExecutor(context);

var chatHistory = new ChatHistory();
chatHistory.AddMessage(AuthorRole.System, "You are a helpful assistant.");
var session = new ChatSession(executor, chatHistory);

await foreach (var text in session.ChatAsync(
    new ChatHistory.Message(AuthorRole.User, "Explain GST impact on small businesses."),
    new InferenceParams { MaxTokens = 1024 }))
{
    Console.Write(text);
}
```
## 🏗️ Hardware Requirements
Given the 31B parameter count and the 256K context architecture, the following configurations are recommended:
- Minimum: 24GB VRAM (e.g., RTX 3090 / 4090) for full offloading of Q4_K_M.
- Higher precision: 32GB+ VRAM (or VRAM + system RAM) for the Q6_K / Q8_0 variants.
- NPU Support: Compatible with OpenVINO (Intel Core Ultra) for edge execution.
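Context length drives VRAM beyond the weights themselves: the KV cache grows linearly with tokens held in context, roughly as `2 (K and V) x layers x kv_heads x head_dim x bytes x tokens`. The layer and head counts below are illustrative placeholders, not published Gemma 4 figures; substitute the real values from the GGUF metadata before relying on the numbers.

```python
# KV-cache footprint = 2 (K and V) * layers * kv_heads * head_dim * bytes * tokens.
# Architecture values below are ILLUSTRATIVE PLACEHOLDERS only; read the
# actual config from the GGUF metadata for real planning.
N_LAYERS = 48    # placeholder
N_KV_HEADS = 8   # placeholder (grouped-query attention)
HEAD_DIM = 128   # placeholder
BYTES = 2        # fp16 cache entries

def kv_cache_gb(n_ctx: int) -> float:
    """Approximate KV-cache footprint in GB for a given context length."""
    return 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * BYTES * n_ctx / 1e9

for ctx in (32_768, 131_072, 262_144):
    print(f"{ctx:>7} tokens: ~{kv_cache_gb(ctx):.1f} GB")
```

Even with placeholder numbers, the trend is the point: at six-figure context lengths the cache rivals the quantized weights, which is why the examples above default to a 32K window on 24GB cards.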
## 🏗️ Technical Details
- Optimization Tool: llama.cpp (Day 0 - gemma4-day0 branch)
- Architecture: Gemma 4 (31B Dense)
- Hardware Validation: Dual-GPU (RTX 3090 + RTX A4000)
## ☕ Support the Forge
Maintaining the production line for high-fidelity models requires significant hardware resources. If these tools power your research or industrial projects, please consider supporting the development:
| Platform | Support Link |
|---|---|
| Global & India | Support via Razorpay |
Scan to support via UPI (India Only):
Connect with the architect: Abhishek Jaiswal on LinkedIn