GGUF quantized versions of https://huggingface.co/coder3101/gemma-3-27b-it-heretic-v2

Multimodal projector included.

Provided quants:

  • IQ3_XS - 12-16GB GPUs
  • IQ3_M - 16GB GPUs - slightly better quality, but long contexts require offloading the KV cache
  • IQ4_XS - 24GB GPUs
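
The KV-cache caveat above can be sanity-checked with a little arithmetic. Below is a rough sketch of a full-attention KV-cache size estimate; the layer count, KV-head count, and head dimension are assumed values for a Gemma-3-27B-class model (check the GGUF metadata for the real ones), and it ignores Gemma 3's sliding-window layers, so it is an upper bound:

```python
def kv_cache_bytes(n_ctx: int,
                   n_layers: int = 62,       # assumed depth for a 27B Gemma 3 model
                   n_kv_heads: int = 16,     # assumed grouped-query KV heads
                   head_dim: int = 128,      # assumed per-head dimension
                   bytes_per_elem: int = 2,  # fp16/bf16 cache entries
                   ) -> int:
    """Upper-bound KV-cache size: two tensors (K and V) per layer,
    each storing n_kv_heads * head_dim values per token."""
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * n_ctx

# At a 32k context the cache alone is in the ~15 GiB range,
# which is why a 16GB GPU has to offload it.
print(f"{kv_cache_bytes(32768) / 2**30:.1f} GiB")  # → 15.5 GiB
```

Quantizing the cache (e.g. llama.cpp's q8_0 cache type) or relying on the sliding-window layers shrinks this considerably, but the order of magnitude explains the IQ3_M note above.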
Model size: 27B params
Architecture: gemma3