NVFP4 quantization of Mistral-Nemo-Instruct-2407 with the self-attention tensors quantized in FP8_DYNAMIC, created to compare against my hybrid quant. Made with the same versions of llm-compressor and compressed-tensors and the same calibration data, to isolate the variables as much as possible.
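Below is a minimal sketch of how a mixed NVFP4 / FP8_DYNAMIC checkpoint like this can be produced with llm-compressor. It is not the author's actual script: the regex for the attention projections, the calibration dataset, the sample counts, and the use of `config_groups` with `preset_name_to_scheme` are all assumptions for illustration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier
from compressed_tensors.quantization import preset_name_to_scheme

MODEL_ID = "mistralai/Mistral-Nemo-Instruct-2407"
SAVE_DIR = "Mistral-Nemo-Instruct-2407-NVFP4-FP8"

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Self-attention projections get FP8 weights with dynamic per-token activation
# scales; every other Linear layer gets NVFP4. Name-based (regex) matches take
# precedence over the generic "Linear" class match. The regex assumes
# Mistral-style module names (q/k/v/o_proj).
recipe = QuantizationModifier(
    config_groups={
        "attention_fp8": preset_name_to_scheme(
            "FP8_DYNAMIC", ["re:.*self_attn\\.(q|k|v|o)_proj$"]
        ),
        "linear_nvfp4": preset_name_to_scheme("NVFP4", ["Linear"]),
    },
    ignore=["lm_head"],
)

# NVFP4 needs calibration data to fit its global activation scales.
oneshot(
    model=model,
    dataset="open_platypus",  # placeholder; the card does not name its calibration set
    recipe=recipe,
    max_seq_length=2048,
    num_calibration_samples=512,
)

model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
```

The single-modifier `config_groups` layout is one way to mix schemes per layer; the same result could also be expressed with separate recipes, so treat this as a sketch of the idea rather than the exact recipe used here.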

Format: Safetensors
Model size: 8B params
Tensor types: F32, BF16, F8_E4M3, U8