# mistral-small-3.2-24b-qiskit-GGUF


This is the Q4_K-quantized GGUF conversion of the original Qiskit/mistral-small-3.2-24b-qiskit model. Please refer to the original mistral-small-3.2-24b-qiskit model card for more details.

## Benchmark Results (Base vs. GGUF)

Note: for CrowsPairs (% stereotype), lower is better.

| Metric | mistral-small-3.2-24b-qiskit (%) | mistral-small-3.2-24b-qiskit-GGUF (%) |
|---|---|---|
| QiskitHumanEval-Hard | 32.45 | 30.46 |
| QiskitHumanEval | 47.02 | 37.75 |
| HumanEval | 77.49 | 75.00 |
| ASDiv (acc) | 3.77 | 4.55 |
| MathQA (acc) | 49.68 | 49.51 |
| SciQ (acc) | 97.50 | 97.40 |
| IFEval (prompt strict) | 48.44 | 46.04 |
| CrowsPairs English (% stereotype) | 67.08 | 65.89 |
| TruthfulQA (MC1 acc) | 39.41 | 38.80 |
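To gauge the cost of quantization, the relative degradation on each accuracy-style benchmark can be computed from the scores above (CrowsPairs is excluded, since lower is better there, as is ASDiv, where the GGUF actually scores higher). A minimal sketch:

```python
# Base vs. Q4_K GGUF scores, taken from the benchmark table above.
scores = {
    "QiskitHumanEval-Hard": (32.45, 30.46),
    "QiskitHumanEval": (47.02, 37.75),
    "HumanEval": (77.49, 75.00),
    "MathQA (acc)": (49.68, 49.51),
    "SciQ (acc)": (97.50, 97.40),
    "IFEval (prompt strict)": (48.44, 46.04),
    "TruthfulQA (MC1 acc)": (39.41, 38.80),
}

for name, (base, gguf) in scores.items():
    # Relative drop in percent: how much of the base score is lost.
    rel_drop = (base - gguf) / base * 100
    print(f"{name}: {rel_drop:.1f}% relative drop")
```

This makes the pattern easy to see: most benchmarks lose under 5% relative accuracy, while QiskitHumanEval shows the largest quantization penalty (roughly a 19.7% relative drop).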
Model size: 24B params

Architecture: llama
