Gemma 2 2B parity GGUF (Q8_0)
Same-origin GGUF parity artifact for google/gemma-2-2b-it, produced for backend comparison work in mesh-llm.
- Source checkpoint: google/gemma-2-2b-it
- Conversion flow: original checkpoint -> GGUF f16 -> GGUF Q8_0
- Intended pair: meshllm/gemma-2-2b-it-parity-8bit-mlx
This repo is for backend-parity testing rather than for claiming best overall model quality.
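The conversion flow above can be sketched with llama.cpp's standard tooling. This is a hedged example, not the exact commands used to produce this artifact; the local paths and output filenames are assumptions.

```shell
# Sketch of the conversion flow: original checkpoint -> GGUF f16 -> GGUF Q8_0.
# Assumes llama.cpp is checked out and built, and the HF checkpoint is
# downloaded locally (paths below are placeholders).

# Step 1: convert the HF checkpoint to an f16 GGUF.
python convert_hf_to_gguf.py ./gemma-2-2b-it \
  --outtype f16 \
  --outfile gemma-2-2b-it-f16.gguf

# Step 2: quantize the f16 GGUF to Q8_0.
./llama-quantize gemma-2-2b-it-f16.gguf gemma-2-2b-it-Q8_0.gguf Q8_0
```

For parity runs against the MLX counterpart, keeping the intermediate f16 GGUF as the common ancestor helps isolate quantization effects from backend effects.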