# 🦙 Gemma4-E2B-it-Heretic GGUF
This repository contains GGUF quants of Gemma4-E2B-it-Heretic, an uncensored variant of Google's Gemma 4 E2B instruction-tuned model. It is optimized for high instruction-following compliance and reduced safety-filter interference, with the aim of giving more direct, unrestricted responses.
## 📊 Performance Metrics
- Refusal Rate: 7/100 (7%). In internal testing, the model complied in 93% of scenarios where base models typically refuse due to safety filtering.
- Training Note: KL-divergence (Kullback–Leibler divergence) metrics were not recorded for this pass (I forgot to log them).
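A 7/100 sample is small, so the point estimate carries real uncertainty. As a rough sketch (the Wilson-interval calculation is my addition, not part of the original evaluation), the 95% confidence interval on the refusal rate can be computed like this:

```python
from math import sqrt

# Refusal counts reported in the card: 7 refusals out of 100 prompts.
refusals, trials = 7, 100
rate = refusals / trials            # 0.07
compliance = 1 - rate               # 0.93

# Wilson 95% score interval for the refusal rate (z = 1.96).
z = 1.96
center = (rate + z**2 / (2 * trials)) / (1 + z**2 / trials)
half = (z / (1 + z**2 / trials)) * sqrt(
    rate * (1 - rate) / trials + z**2 / (4 * trials**2)
)
print(f"refusal rate: {rate:.0%}, 95% CI: [{center - half:.3f}, {center + half:.3f}]")
```

With these numbers the interval works out to roughly 3–14%, so the true refusal rate could plausibly be double or half the headline figure.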
## 🚀 Benchmarks
The following benchmarks were run on an NVIDIA GeForce RTX 5060 Ti (16 GB) using the `llama-bench` tool with Vulkan offloading and 8 threads.
| Variant | Test | Tokens | Speed (t/s) |
|---|---|---|---|
| Q4_K_M (Medium) | Prompt processing | 512 | 6804.99 ± 261.99 |
| Q4_K_M (Medium) | Text generation | 128 | 158.50 ± 1.09 |
| Q8_0 | Prompt processing | 512 | 7439.29 ± 730.78 |
| Q8_0 | Text generation | 128 | 118.76 ± 0.17 |
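To put the throughput numbers in perspective, they convert to wall-clock latency as tokens ÷ tokens-per-second. A quick sketch using the Q4_K_M means from the table above:

```python
# Measured means from the table above (Q4_K_M on the RTX 5060 Ti).
pp_tps = 6804.99   # prompt processing, tokens/s
tg_tps = 158.50    # text generation, tokens/s

prompt_tokens, gen_tokens = 512, 128
prompt_s = prompt_tokens / pp_tps   # time to ingest the prompt
gen_s = gen_tokens / tg_tps         # time to generate the reply
print(f"prompt: {prompt_s * 1000:.0f} ms, generation: {gen_s:.2f} s")
```

So a 512-token prompt is ingested in roughly 75 ms, and a 128-token reply streams out in well under a second.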
## 🗜️ Files & Quantization
All quants were generated using a storage-aware llama.cpp pipeline.
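As a rule of thumb, a GGUF file's size is roughly parameters × bits-per-weight ÷ 8. The parameter count and bits-per-weight figures below are illustrative assumptions, not values taken from this repository (K-quants mix precisions across tensors, so real files deviate somewhat):

```python
# Rough GGUF size estimate: parameters × bits-per-weight / 8.
# The 2e9 parameter count and the bpw figures are illustrative
# assumptions only, not measurements from this repo's files.
n_params = 2e9
bpw = {"Q4_K_M": 4.85, "Q8_0": 8.5, "F16": 16.0}  # approx. bits per weight

for name, bits in bpw.items():
    gb = n_params * bits / 8 / 1e9
    print(f"{name}: ~{gb:.1f} GB")
```

This is mainly useful for deciding which quant fits your VRAM budget before downloading.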
## 🛠️ Usage
### llama.cpp
- `-cnv`: Conversation mode. Applies the model's chat template so it behaves as an assistant rather than a plain text completer.
- `-ngl`: Number of GPU layers to offload. Setting this to 99 places the entire model on your graphics card for maximum speed.
- `-c`: Context size, the model's "short-term memory." Larger values (like 4096) allow longer chats but use more VRAM.
- `-t`: Number of CPU threads used to process any layers that did not fit on the GPU.
- `--no-mmap`: Disables memory mapping. By default, llama.cpp memory-maps the model file and reads only the parts needed at any moment; pass this flag to force a full load into RAM instead.
- `--mlock`: Locks the model into physical RAM so it cannot be swapped out. It is off by default, which keeps your system stable by letting the OS page data out if RAM runs low.
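Most of the extra VRAM consumed by a larger `-c` goes to the KV cache, which grows linearly with context length. A back-of-the-envelope sketch (the layer and head dimensions below are placeholders, not Gemma's actual architecture):

```python
# KV cache ≈ 2 (K and V) × layers × kv_heads × head_dim × context × bytes/elt.
# The architecture numbers here are illustrative placeholders only.
n_layers, n_kv_heads, head_dim = 30, 8, 128
bytes_per_elt = 2  # f16 cache

def kv_cache_bytes(ctx: int) -> int:
    return 2 * n_layers * n_kv_heads * head_dim * ctx * bytes_per_elt

for ctx in (2048, 4096):
    print(f"ctx={ctx}: ~{kv_cache_bytes(ctx) / 2**20:.0f} MiB")
```

Doubling the context doubles the cache, which is why trimming `-c` is the first lever to pull when you run out of VRAM.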
You can run these models using the `llama-cli` binary built during the pipeline (memory mapping is on and mlock is off by default, so no extra flags are needed for that behavior):

```shell
./llama-cli -m Gemma4-E2B-it-Heretic_Q4_K_M.gguf -cnv -ngl 99 -c 4096 -t 8
```
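The same settings map onto the llama-cpp-python bindings if you prefer to drive the model from Python. This is a sketch, not part of the original card: the parameter names follow that library's `Llama` constructor, and instantiation is skipped when the package isn't installed:

```python
# CLI-flag equivalents for llama-cpp-python's Llama constructor.
params = dict(
    model_path="Gemma4-E2B-it-Heretic_Q4_K_M.gguf",
    n_gpu_layers=99,   # mirrors -ngl 99
    n_ctx=4096,        # mirrors -c 4096
    n_threads=8,       # mirrors -t 8
)

try:
    from llama_cpp import Llama  # pip install llama-cpp-python

    llm = Llama(**params)
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Hello!"}]
    )
    print(out["choices"][0]["message"]["content"])
except ImportError:
    print("llama-cpp-python not installed; parameters only:", params)
```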
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit.