# granite-4.0-h-small-ZeroClawToolUse-MagicQuant-GGUF

A fine-tuned derivative of granite-4.0-h-small, trained on ./data/zeroclaw_training_data.jsonl with QLoRA and quantized to GGUF using MagicQuant hybrid evolutionary search.
## Base Model
This is a fine-tuned and quantized derivative of granite-4.0-h-small. All credit for the base model architecture and weights goes to the original authors. The base model's license applies to this derivative.
## Quantization Method
Quantized using MagicQuant hybrid evolutionary per-tensor quantization, based on the methodology by magiccodingman:
- Tensors are classified into sensitivity groups (Embeddings, Head, Query, Key, Output, FFN Up/Down, MoE Experts, Router)
- An evolutionary search finds the optimal quantization type per group, balancing size vs. perplexity
- Q4/Q5/Q6 tier targets are produced with different size-quality tradeoffs
- Small-row tensors and sensitivity-critical layers (embeddings, output head, router) are kept at F32/F16/BF16
- This is NOT a uniform quantization -- each tensor group gets its own optimal type
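The per-group search described above can be sketched as a toy genetic algorithm. Everything in this sketch is illustrative, not MagicQuant's actual implementation: the group names, the candidate types, the (size, perplexity-penalty) numbers, and the fitness function are all made up to show the shape of the search.

```python
import random

# Hypothetical tensor groups and candidate quant types with toy
# (relative_size, perplexity_penalty) costs -- illustrative numbers only.
GROUPS = ["embed", "head", "q", "k", "out", "ffn_up", "ffn_down", "router"]
TYPES = {
    "BF16":   (1.00, 0.000),
    "Q8_0":   (0.53, 0.002),
    "Q6_K":   (0.41, 0.006),
    "IQ4_NL": (0.28, 0.020),
}

def fitness(assignment, ppl_budget=0.05):
    """Smaller is better: total size, heavily penalized past a perplexity budget."""
    size = sum(TYPES[t][0] for t in assignment.values())
    ppl = sum(TYPES[t][1] for t in assignment.values())
    return size + (1000.0 if ppl > ppl_budget else 0.0)

def evolve(generations=200, pop_size=20, seed=0):
    rng = random.Random(seed)
    # Seed the population with the safe all-BF16 assignment plus random ones.
    pop = [{g: "BF16" for g in GROUPS}]
    pop += [{g: rng.choice(list(TYPES)) for g in GROUPS}
            for _ in range(pop_size - 1)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]       # elitism: keep the best half
        children = []
        for parent in survivors:
            child = dict(parent)
            child[rng.choice(GROUPS)] = rng.choice(list(TYPES))  # mutate one group
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)
```

The real pipeline would evaluate actual file size and measured perplexity per candidate; the point here is only the structure: a population of per-group type assignments, elitist selection, and single-group mutation.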
## Training Details
| Parameter | Value |
|---|---|
| Method | QLoRA with completion-only loss masking |
| LoRA rank (r) | 32 |
| LoRA alpha | 64 |
| LoRA dropout | 0.05 |
| Epochs | 3 |
| Learning rate | 0.0002 |
| LR scheduler | cosine |
| Batch size | 2 (effective 8 with gradient accumulation) |
| Optimizer | adamw_8bit |
| Training sequence length | 4096 |
| Precision | BF16 |
| Dataset | ./data/zeroclaw_training_data.jsonl |
| Hardware | AMD Ryzen AI Max+ 395 (Strix Halo), 128 GB unified memory (GTT), ROCm |
| Training pipeline | Custom fast QLoRA with shard-by-shard BnB 4-bit quantization |
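The effective batch size of 8 in the table comes from a micro-batch of 2 with 4 gradient-accumulation steps. A minimal sketch using a toy scalar model (not the training pipeline's code) shows why accumulation reproduces the full-batch gradient:

```python
# Toy model: loss per example = (w - x)^2, so grad = 2 * (w - x).

def grad(w, x):
    return 2.0 * (w - x)

def accumulated_step(w, examples, micro_batch=2, accum_steps=4, lr=0.1):
    """One optimizer step with gradient accumulation (effective batch 8)."""
    assert len(examples) == micro_batch * accum_steps
    total = 0.0
    for step in range(accum_steps):
        micro = examples[step * micro_batch:(step + 1) * micro_batch]
        # Mean over the micro-batch, scaled by 1/accum_steps so the sum
        # equals the mean gradient over the full effective batch.
        total += sum(grad(w, x) for x in micro) / micro_batch / accum_steps
    return w - lr * total

def full_batch_step(w, examples, lr=0.1):
    """Reference: one step on the whole batch at once."""
    g = sum(grad(w, x) for x in examples) / len(examples)
    return w - lr * g
```

Both functions produce the same update; accumulation just trades memory for extra forward/backward passes.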
**Completion-only loss:** Only assistant response turns contribute to the training loss. System and user turns are masked, so the model learns to generate responses rather than memorizing prompts.
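A minimal sketch of this masking, assuming pre-tokenized turns and the `-100` ignore index that PyTorch cross-entropy skips (the function and data layout are hypothetical, not the pipeline's actual code):

```python
IGNORE_INDEX = -100  # label value ignored by PyTorch cross-entropy loss

def build_labels(turns):
    """turns: list of (role, token_ids) pairs for one conversation.
    Returns (input_ids, labels): assistant tokens keep their ids as labels;
    system/user tokens are masked out of the loss."""
    input_ids, labels = [], []
    for role, ids in turns:
        input_ids.extend(ids)
        if role == "assistant":
            labels.extend(ids)                        # contributes to the loss
        else:
            labels.extend([IGNORE_INDEX] * len(ids))  # masked: no loss signal
    return input_ids, labels
```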
## GGUF Files
| File | Size | Quant |
|---|---|---|
| granite-4.0-h-small-Q4-MXFP4_MOE-EH-BF16-OQ-Q80-K-Q6K-U-MXFP4MOE-D-IQ4NL.gguf | 18.6 GB | Q4 hybrid |
| granite-4.0-h-small-Q5-MXFP4_MOE-EHKO-BF16-Q-Q80-U-Q6K-D-MXFP4MOE.gguf | 18.9 GB | Q5 hybrid |
| granite-4.0-h-small-Q6-MXFP4_MOE-EHKOU-BF16-Q-Q80-D-MXFP4MOE.gguf | 19.5 GB | Q6 hybrid |
## Usage

### LM Studio
- Download the GGUF file of your preferred quantization tier
- Place it in your LM Studio models directory
- Load the model in LM Studio -- it will auto-detect the chat template
- The model supports the base model's full context length
### llama.cpp

```bash
# Interactive chat (the chat template embedded in the GGUF is used automatically)
llama-cli -m granite-4.0-h-small-ZeroClawToolUse-MagicQuant-GGUF-Q5.gguf -c 8192 -cnv

# Single prompt
llama-cli -m granite-4.0-h-small-ZeroClawToolUse-MagicQuant-GGUF-Q5.gguf -c 8192 -p "Your prompt here"

# Server mode
llama-server -m granite-4.0-h-small-ZeroClawToolUse-MagicQuant-GGUF-Q5.gguf -c 8192 --port 8080
```
### Python (llama-cpp-python)

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./granite-4.0-h-small-ZeroClawToolUse-MagicQuant-GGUF-Q5.gguf",
    n_ctx=8192,
)
output = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Hello, how are you?"}
    ]
)
print(output["choices"][0]["message"]["content"])
```
## Caveats
- This is a personal fine-tune, not an official release from the base model authors
- The base model's license (apache-2.0) applies to all derivative files
- Quality depends on the training data and may not generalize to all tasks
- Quantization reduces precision -- verify outputs for your specific use case
- The hybrid quantization assigns different precision to different tensor groups, which means quality characteristics may differ from uniform quantizations
## Limitations
- Training data used sequences up to 4096 tokens; the model retains the base model's full context window
- Performance on tasks not represented in the training data may be degraded
- Quantized models may exhibit subtle differences from the full-precision fine-tune
- This model inherits any limitations and biases present in the base model
Generated with the MagicQuant Pipeline
## Model Tree

Base model: ibm-granite/granite-4.0-h-small