Merge method: Model Stock, introduced in the paper "Model Stock: All we need is just a few fine-tuned models" (arXiv:2403.19522).
This is a merge of pre-trained language models created using mergekit.
This model was merged using the Model Stock merge method, with bunnycore/Llama-3.1-8B-TitanFusion + grimjim/Llama-3-Instruct-abliteration-LoRA-8B as the base (mergekit's `model+lora` syntax, which applies the LoRA adapter to the base model before merging).
The following models were included in the merge:

- bunnycore/Llama-3.1-8B-TitanFusion-v3
- bunnycore/Llama-3.1-8B-TitanFusion-v2
- bunnycore/Llama-3.1-8B-TitanFusion
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: bunnycore/Llama-3.1-8B-TitanFusion-v3
  - model: bunnycore/Llama-3.1-8B-TitanFusion-v2
  - model: bunnycore/Llama-3.1-8B-TitanFusion
merge_method: model_stock
base_model: bunnycore/Llama-3.1-8B-TitanFusion+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
normalize: false
int8_mask: true
dtype: bfloat16
```
Detailed results can be found on the Open LLM Leaderboard:
| Metric | Value (%) |
|---|---|
| Avg. | 24.77 |
| IFEval (0-shot) | 49.25 |
| BBH (3-shot) | 39.54 |
| MATH Lvl 5 (4-shot) | 11.40 |
| GPQA (0-shot) | 6.04 |
| MuSR (0-shot) | 12.46 |
| MMLU-PRO (5-shot) | 29.95 |