# gemma-4-31B-Zeroclaw-ClaudeReasoning-GGUF
Derivative of gemma-4-31B-it, quantized using MagicQuant hybrid evolutionary per-tensor search.
## Base Model
This is a derivative of google/gemma-4-31B-it. All credit for the base model architecture and weights goes to the original authors, and the base model's license applies to this derivative.
## Quantization Method
Quantized using MagicQuant hybrid evolutionary per-tensor quantization, based on the methodology by magiccodingman:
- Tensors are classified into sensitivity groups (Embeddings, Head, Query, Key, Output, FFN Up/Down, MoE Experts, Router)
- An evolutionary search finds the optimal quantization type per group, balancing size vs. perplexity
- Q4/Q5/Q6 tier targets are produced with different size-quality tradeoffs
- Small-row tensors and sensitivity-critical layers (embeddings, output head, router) are kept at F32/F16/BF16
- This is NOT a uniform quantization -- each tensor group gets its own optimal type
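The search loop described above can be sketched as a simple (1+1) evolutionary hill climb. Everything in this snippet is illustrative: the group names, bit widths, quality-loss factors, and sensitivity weights are toy stand-ins, not MagicQuant's actual tables or fitness function.

```python
import random

# Toy sensitivity groups and candidate quant types. Each type maps to
# (bits per weight, quality-loss factor) -- illustrative values only.
GROUPS = ["embeddings", "head", "query", "key", "output", "ffn_up", "ffn_down"]
TYPES = {"Q4_K": (4.5, 1.0), "Q5_K": (5.5, 0.3), "Q6_K": (6.5, 0.1), "F16": (16.0, 0.0)}

# Hypothetical per-group sensitivity: how strongly quality loss in this
# group is assumed to degrade perplexity.
SENSITIVITY = {"embeddings": 12.0, "head": 12.0, "query": 1.0, "key": 0.5,
               "output": 2.0, "ffn_up": 0.1, "ffn_down": 0.3}

def fitness(assign, ppl_weight=10.0):
    """Lower is better: total size plus a weighted perplexity-loss proxy."""
    size = sum(TYPES[assign[g]][0] for g in GROUPS)
    ppl = sum(TYPES[assign[g]][1] * SENSITIVITY[g] for g in GROUPS)
    return size + ppl_weight * ppl

def evolve(generations=500, seed=0):
    """(1+1) search: mutate one group's quant type, keep any improvement."""
    rng = random.Random(seed)
    best = {g: rng.choice(list(TYPES)) for g in GROUPS}
    for _ in range(generations):
        cand = dict(best)
        cand[rng.choice(GROUPS)] = rng.choice(list(TYPES))
        if fitness(cand) < fitness(best):
            best = cand
    return best

print(evolve())  # sensitive groups drift toward F16/Q6, tolerant ones toward Q4/Q5
```

Because the fitness is scored per tensor group, the search naturally lands on a mixed assignment rather than one uniform type, which is the key property of the hybrid scheme.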
## GGUF Files
| File | Size | Quant |
|---|---|---|
| gemma-4-31B-it-Q4_K_M.gguf | 22.1 GB | Q4 hybrid |
| gemma-4-31B-it-Q5_K_M.gguf | 30.9 GB | Q5 hybrid |
| gemma-4-31B-it-Q6_K.gguf | 32.3 GB | Q6 hybrid |
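As a rough sanity check on the table, the effective bits per weight of each tier can be estimated from file size. This sketch assumes ~31e9 parameters (inferred from the model name) and decimal gigabytes; both are assumptions, not figures from the card.

```python
# Estimate effective bits-per-weight for each tier from the table above.
PARAMS = 31e9  # assumed from the model name

files = {"Q4 hybrid": 22.1e9, "Q5 hybrid": 30.9e9, "Q6 hybrid": 32.3e9}

for tier, size_bytes in files.items():
    bpw = size_bytes * 8 / PARAMS
    print(f"{tier}: {bpw:.2f} bits/weight")  # e.g. "Q4 hybrid: 5.70 bits/weight"
```

Effective bits-per-weight running above the nominal tier is expected here: the hybrid scheme keeps embeddings, the output head, and other sensitive tensors at F32/F16/BF16, as described under Quantization Method.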
## Usage

### LM Studio
- Download the GGUF file of your preferred quantization tier
- Place it in your LM Studio models directory
- Load the model in LM Studio -- it will auto-detect the chat template
- The model supports the base model's full context length
### llama.cpp

```bash
# Interactive chat (the chat template embedded in the GGUF is used automatically)
llama-cli -m gemma-4-31B-it-Q5_K_M.gguf -c 8192 -cnv

# Single prompt
llama-cli -m gemma-4-31B-it-Q5_K_M.gguf -c 8192 -p "Your prompt here"

# Server mode
llama-server -m gemma-4-31B-it-Q5_K_M.gguf -c 8192 --port 8080
```
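`llama-server` exposes an OpenAI-compatible chat endpoint, so the server started above can be queried over HTTP. This is a minimal stdlib sketch, not part of the original card; the host and port are assumed from the `--port 8080` command, and the request is built but only sent in the commented lines.

```python
import json
from urllib import request

# Assumes llama-server is running locally on the port used above.
URL = "http://127.0.0.1:8080/v1/chat/completions"

def build_request(prompt: str) -> request.Request:
    """Build (but do not send) an OpenAI-compatible chat-completion request."""
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return request.Request(
        URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_request("Hello, how are you?")
print(req.full_url)

# To actually query a running server:
# with request.urlopen(req) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
```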
### Python (llama-cpp-python)

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./gemma-4-31B-it-Q5_K_M.gguf",
    n_ctx=8192,
)
output = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Hello, how are you?"}
    ]
)
print(output["choices"][0]["message"]["content"])
```
## Caveats
- The base model's license (apache-2.0) applies to all derivative files
- Quantization reduces precision -- verify outputs for your specific use case
- The hybrid quantization assigns different precision to different tensor groups, which means quality characteristics may differ from uniform quantizations
## Limitations
- Quantized models may exhibit subtle differences from the full-precision fine-tune
- This model inherits any limitations and biases present in the base model
Generated with MagicQuant