Mistral-7B-Instruct-v0.3 Parity BF16 MLX

Same-origin parity artifact derived from mistralai/Mistral-7B-Instruct-v0.3.

This repo contains the high-fidelity bf16 MLX artifact used for mesh-llm backend parity validation against the corresponding GGUF artifact.

Accepted local validation status:

  • Exact prompts: matches GGUF on all checked prompts, including the shared capitalization and verbosity drift on strict one-word canaries
  • Behavior smoke: 0 flagged prompts out of 80 on the MT-Bench-derived harness
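The exact-prompt comparison above can be sketched as a small harness that flags any prompt where the two backends' outputs diverge after normalizing the shared capitalization drift. This is an illustrative sketch only, not the actual mesh-llm validation code; the function name and normalization rule are assumptions.

```python
def flag_mismatches(prompts, mlx_outputs, gguf_outputs):
    """Return the prompts whose backend outputs differ.

    Pure capitalization/whitespace drift is treated as a match, since
    both artifacts are expected to share that drift on one-word canaries.
    (Hypothetical helper; the real harness may normalize differently.)
    """
    flagged = []
    for prompt, mlx_out, gguf_out in zip(prompts, mlx_outputs, gguf_outputs):
        if mlx_out.strip().lower() != gguf_out.strip().lower():
            flagged.append(prompt)
    return flagged


# Example: the second prompt diverges beyond case, so it is flagged.
prompts = ["Capital of France, one word:", "2 + 2, one word:"]
print(flag_mismatches(prompts, ["Paris", "Four"], ["paris", "4"]))
```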

Paired GGUF repo:

  • meshllm/mistral-7b-instruct-v0.3-parity-f16-gguf