# Gemma 4 E4B Instruct Parity GGUF (Q8_0)
This repository contains the canonical GGUF side of the meshllm Gemma 4 parity pair.
- Source checkpoint: google/gemma-4-E4B-it
- Conversion path: original checkpoint -> GGUF -> Q8_0
- Intended use: backend parity testing against the matching MLX artifact
This artifact is not meant to be a general "best available" Gemma 4 release. It exists so that GGUF and MLX can be compared from the same original model lineage with minimal third-party conversion noise.
Canonical pair:

- GGUF: meshllm/gemma-4-e4b-it-parity-q8_0-gguf
- MLX: meshllm/gemma-4-e4b-it-parity-8bit-mlx
Latest trusted exact-match result:
| Backend | Model | Exact |
|---|---|---|
| GGUF | gemma-4-e4b-it-q8_0.gguf | PASS |
| MLX | meshllm/gemma-4-e4b-it-parity-8bit-mlx | PASS |
Prompt comparison from local same-origin validation:
| Prompt | GGUF Q8_0 | MLX 8bit |
|---|---|---|
| primary | blue | blue |
| alt-green | green | green |
| alt-red | red | red |
| capital-france | Paris | Paris |
| primary-colors | red, green, blue | red, green, blue |
| two-plus-two | 4 | 4 |
| largest-planet | Jupiter | Jupiter |
| breathing-gas | Oxygen | Oxygen |
| opposite-hot | Cold | Cold |
| banana-color | Yellow | Yellow |
| after-monday | Tuesday | Tuesday |
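The exact-match comparison above can be sketched as a small normalize-and-compare check. This is a minimal illustration, not the repository's actual validation harness: the `normalize` rules and the hypothetical `exact_match` helper are assumptions, and the hard-coded responses simply mirror two rows of the table.

```python
def normalize(answer: str) -> str:
    # Strip surrounding whitespace and trailing punctuation, then lowercase,
    # so "Paris." and "paris" count as the same answer.
    return answer.strip().strip(".!").lower()

def exact_match(gguf_out: str, mlx_out: str, expected: str) -> bool:
    # PASS only when both backends agree with the expected answer
    # after normalization (hypothetical criterion for illustration).
    return normalize(gguf_out) == normalize(mlx_out) == normalize(expected)

# Sample responses copied from the prompt-comparison table above.
results = {
    "capital-france": ("Paris", "Paris", "Paris"),
    "two-plus-two": ("4", "4", "4"),
}
for prompt_id, (gguf, mlx, expected) in results.items():
    status = "PASS" if exact_match(gguf, mlx, expected) else "FAIL"
    print(f"{prompt_id}: {status}")
```

Any stricter or looser criterion (e.g. token-level comparison) would change what "Exact" means in the tables above; this sketch only demonstrates the normalized string-equality variant.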
Model tree for meshllm/gemma-4-e4b-it-parity-q8_0-gguf:

- Base model: google/gemma-4-E4B-it