# QuantFactory/mistral-nemo-gutades-12B-GGUF
This is a quantized version of nbeerbower/mistral-nemo-gutades-12B, created using llama.cpp.
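A minimal sketch of downloading one quant from this repo and running it with llama.cpp's CLI. The exact `.gguf` filename and the choice of Q4_K_M are assumptions; check the repository's file list for the names actually published.

```shell
# Download a single quant (Q4_K_M is a common quality/size trade-off).
# NOTE: the .gguf filename below is an assumption -- verify it in the repo.
huggingface-cli download QuantFactory/mistral-nemo-gutades-12B-GGUF \
  --include "*Q4_K_M*" --local-dir .

# Run a one-off completion with llama.cpp's CLI.
llama-cli -m mistral-nemo-gutades-12B.Q4_K_M.gguf \
  -p "Write a short opening paragraph in the style of a classic novel." -n 256
```

Lower-bit quants (2-bit, 3-bit) trade output quality for a smaller memory footprint; 8-bit is closest to the original weights.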
## Original Model Card

### mistral-nemo-gutades-12B
nbeerbower/mistral-nemo-bophades-12B finetuned on jondurbin/gutenberg-dpo-v0.1.
### Method

ORPO finetuning on an RTX 3090 for 3 epochs.
### Open LLM Leaderboard Evaluation Results

Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 20.76 |
| IFEval (0-shot, strict accuracy) | 34.25 |
| BBH (3-shot, normalized accuracy) | 34.57 |
| MATH Lvl 5 (4-shot, exact match) | 9.89 |
| GPQA (0-shot, acc_norm) | 8.72 |
| MuSR (0-shot, acc_norm) | 8.67 |
| MMLU-PRO (5-shot, accuracy) | 28.45 |
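The reported average appears to be the simple mean of the six benchmark scores, which can be checked directly (values taken from the table above):

```python
# Open LLM Leaderboard scores from the table above.
scores = {
    "IFEval (0-shot)": 34.25,
    "BBH (3-shot)": 34.57,
    "MATH Lvl 5 (4-shot)": 9.89,
    "GPQA (0-shot)": 8.72,
    "MuSR (0-shot)": 8.67,
    "MMLU-PRO (5-shot)": 28.45,
}

# Unweighted mean across the six benchmarks.
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # -> 20.76, matching the "Avg." row
```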
## Available quantizations

- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
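A back-of-envelope sketch of what each quantization level implies for file size on a nominally 12B-parameter model: roughly `params × bits / 8` bytes. This is an approximation of my own, not a published size table; real GGUF quant types (Q4_K_M, etc.) mix bit widths across tensors, so actual files run somewhat larger.

```python
# Rough GGUF size estimate: parameters * bits-per-weight / 8 bytes.
# 12e9 is the nominal parameter count for a "12B" model (assumption).
PARAMS = 12e9
GIB = 1024 ** 3  # bytes per GiB

for bits in (2, 3, 4, 5, 6, 8):
    size_gib = PARAMS * bits / 8 / GIB
    print(f"{bits}-bit: ~{size_gib:.1f} GiB")
# e.g. 4-bit comes out to roughly 5.6 GiB, 8-bit to roughly 11.2 GiB
```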
## Model tree for QuantFactory/mistral-nemo-gutades-12B-GGUF

Base model lineage:

1. mistralai/Mistral-Nemo-Base-2407
2. mistralai/Mistral-Nemo-Instruct-2407 (finetuned from the above)
3. nbeerbower/mistral-nemo-bophades-12B (finetuned from the above)

Dataset used to train: jondurbin/gutenberg-dpo-v0.1