Mistral-Small-3.2-24B-Instruct-2506 (NVFP4)

This repository contains an NVFP4 quantization of the following base model:

  • Base model: mistralai/Mistral-Small-3.2-24B-Instruct-2506
  • Quantized model: yepthatsjason/Mistral-Small-3.2-24B-Instruct-2506-nvfp4
  • Quantization: NVFP4
  • Quantized with: llmcompressor

What is this?

This is a quantized version of the base model intended to reduce memory usage and improve inference efficiency, while keeping behavior close to the original.
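As a rough mental model: NVFP4 stores weights as 4-bit floating-point (E2M1) values, with a shared scale per small block of elements (and the block scales themselves kept in FP8/E4M3 in the real format). The sketch below is illustrative only, not llmcompressor's implementation; block size, scale encoding, and rounding are all simplified:

```python
# Illustrative NVFP4-style block quantization (simplified: the scale is kept
# in full precision here, whereas the real format stores it as FP8 E4M3).
# E2M1 (FP4) representable magnitudes:
E2M1_VALUES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_block(block):
    # Scale the block so its largest magnitude maps to the largest FP4 value (6.0),
    # then snap each element to the nearest representable E2M1 magnitude.
    amax = max(abs(x) for x in block) or 1.0
    scale = amax / 6.0
    q = []
    for x in block:
        mag = min(E2M1_VALUES, key=lambda v: abs(abs(x) / scale - v))
        q.append(mag if x >= 0 else -mag)
    return scale, q

def dequantize_block(scale, q):
    return [scale * v for v in q]
```

The coarse 4-bit grid is why per-block scaling matters: each block of values is renormalized independently, so outliers in one block don't destroy the precision of the rest of the tensor.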

Usage

Add your exact loading snippet here (it depends on how llmcompressor exported the artifacts and which runtime you're using).
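Until the exact export details are documented here, a common way to run llmcompressor-produced checkpoints is to serve them with vLLM. This is an assumption about the intended runtime (and it requires hardware with FP4 kernel support), not a confirmed instruction from the author:

```shell
# Serve the quantized checkpoint with vLLM (assumes NVFP4-capable hardware/kernels)
vllm serve yepthatsjason/Mistral-Small-3.2-24B-Instruct-2506-nvfp4
```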

Quantization details

  • Format: NVFP4
  • Tooling: llmcompressor
  • Notes: (add any relevant settings, e.g. target hardware, calibration details, etc.)
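For reference, an NVFP4 export with llmcompressor is typically driven by a recipe passed to a oneshot call. The recipe below is an illustrative sketch, not the settings actually used for this checkpoint (calibration data, ignored modules, and output path are all assumptions):

```python
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

# Illustrative recipe: quantize Linear weights to NVFP4, keep lm_head in higher precision.
recipe = QuantizationModifier(targets="Linear", scheme="NVFP4", ignore=["lm_head"])

oneshot(
    model="mistralai/Mistral-Small-3.2-24B-Instruct-2506",
    recipe=recipe,
    output_dir="Mistral-Small-3.2-24B-Instruct-2506-nvfp4",
)
```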

Limitations / caveats

Quantized models can differ from the base model in edge cases. If you observe regressions, please compare against the base model and share a minimal repro.

Safetensors

  • Model size: 14B params
  • Tensor types: BF16, F32, F8_E4M3, U8
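The U8 tensors hold the packed 4-bit weight codes, two per byte, while the F8_E4M3 tensors hold the block scales. A minimal sketch of that nibble packing (illustrative; which nibble holds which element is an assumption about the layout):

```python
def pack_fp4_pair(lo: int, hi: int) -> int:
    # Pack two 4-bit codes into one uint8; low nibble = first value (assumed order).
    return ((hi & 0xF) << 4) | (lo & 0xF)

def unpack_fp4_pair(byte: int) -> tuple:
    # Recover the two 4-bit codes from one packed byte.
    return byte & 0xF, (byte >> 4) & 0xF
```

This packing is why the checkpoint's parameter count as reported by the file format can look smaller than the nominal 24B: two weights occupy a single stored byte.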