GGUF quantized versions of https://huggingface.co/coder3101/gemma-3-27b-it-heretic-v2
The multimodal projector (mmproj) is included, so image input is supported.
Provided quants:
- IQ3_XS - 12-16GB GPUs
- IQ3_M - 16GB GPUs - slightly better quality, but at long contexts the KV cache must be offloaded to system RAM
- IQ4_XS - 24GB GPUs
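The quants above can be served with llama.cpp. The sketch below is a minimal example, not a verbatim command from this repo: the `.gguf` filenames are assumptions based on the quant names, so check the repository's file listing for the actual names.

```shell
# Sketch: serve the IQ4_XS quant with llama.cpp's llama-server.
# Filenames are assumptions -- substitute the real files from this repo.
llama-server \
  -m gemma-3-27b-it-heretic-v2-IQ4_XS.gguf \
  --mmproj mmproj-model-f16.gguf \
  -ngl 99 \
  -c 8192
# -ngl 99 offloads all layers to the GPU; --mmproj enables the bundled
# multimodal projector. If the KV cache no longer fits in VRAM at long
# contexts (see the IQ3_M note above), reduce -c or offload fewer layers.
```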
Model tree for worstplayer/gemma-3-27b-it-heretic-v2-GGUF:
- Base model: google/gemma-3-27b-pt
- Finetune: coder3101/gemma-3-27b-it-heretic-v2