OLMo-2-1124-7B-Instruct parity Q8_0 GGUF

This repository contains a same-origin GGUF export of allenai/OLMo-2-1124-7B-Instruct prepared for local parity testing in mesh-llm.

Artifact:

  • olmo2-7b-instruct-q8_0.gguf

Notes:

  • converted locally from the cached origin checkpoint
  • quantized to Q8_0
  • intended to pair with meshllm/olmo2-7b-instruct-parity-8bit-mlx
  • exact validation passed locally on 2026-04-07
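Before pairing this artifact with the MLX export, it can help to sanity-check that the file really is a well-formed GGUF. The sketch below parses the fixed header fields (4-byte magic, uint32 version, uint64 tensor count, uint64 metadata KV count, all little-endian, per the GGUF spec as published in the ggml repository); it is an illustrative check, not part of this repository's validation tooling, and the `check_gguf_header` name is ours.

```python
import struct

GGUF_MAGIC = b"GGUF"  # magic bytes at offset 0 of every GGUF file

def check_gguf_header(raw: bytes) -> dict:
    """Parse the fixed GGUF header fields from the first 24 bytes.

    Layout (GGUF spec): 4-byte magic, uint32 version,
    uint64 tensor_count, uint64 metadata_kv_count, little-endian.
    """
    if len(raw) < 24:
        raise ValueError("not enough bytes for a GGUF header")
    if raw[:4] != GGUF_MAGIC:
        raise ValueError(f"bad magic: {raw[:4]!r}")
    version, tensor_count, kv_count = struct.unpack_from("<IQQ", raw, 4)
    return {"version": version, "tensors": tensor_count, "metadata_kv": kv_count}

# Usage against the artifact in this repo:
# with open("olmo2-7b-instruct-q8_0.gguf", "rb") as f:
#     print(check_gguf_header(f.read(24)))
```

This only validates the header, not tensor data; for full verification, load the file with llama.cpp or another GGUF-aware runtime.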

Source model:

  • allenai/OLMo-2-1124-7B-Instruct

GGUF metadata:

  • architecture: olmo2
  • model size: 7B params
  • quantization: 8-bit (Q8_0)


Repository:

  • meshllm/olmo2-7b-instruct-parity-q8_0-gguf