# my-output
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the SLERP merge method.
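SLERP (spherical linear interpolation) blends two models' weight tensors along the arc of a hypersphere rather than along the straight chord, which preserves weight magnitudes better than plain linear averaging. As a rough illustration, here is a minimal NumPy sketch of the underlying formula; this is not mergekit's actual implementation, which handles additional per-tensor options:

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between two flattened weight tensors.

    t = 0 returns v0, t = 1 returns v1; intermediate values of t follow
    the arc between the two vectors instead of the straight line.
    """
    # Angle between the two parameter vectors, computed on unit vectors
    v0_u = v0 / (np.linalg.norm(v0) + eps)
    v1_u = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(v0_u, v1_u), -1.0, 1.0)
    omega = np.arccos(dot)
    if np.sin(omega) < eps:
        # Nearly colinear vectors: fall back to ordinary linear interpolation
        return (1.0 - t) * v0 + t * v1
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * v0 + (np.sin(t * omega) / so) * v1
```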
### Models Merged

The following models were included in the merge:

- `jpacifico/Chocolatine-3B-Instruct-DPO-Revised`
- `microsoft/Phi-3.5-mini-instruct`
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: jpacifico/Chocolatine-3B-Instruct-DPO-Revised
        layer_range: [0, 32]
      - model: microsoft/Phi-3.5-mini-instruct
        layer_range: [0, 32]
merge_method: slerp
base_model: microsoft/Phi-3.5-mini-instruct
parameters:
  t:
    - filter: self_attn
      value: [1, 0.75, 0.5, 0.25, 0]
    - filter: mlp
      value: [0, 0.25, 0.5, 0.75, 1]
    - value: 0.5
dtype: bfloat16
```
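In this config, `t` is the SLERP interpolation factor per tensor: the five values in each list are spread across the layer stack, so self-attention weights sweep from one endpoint to the other with depth, MLP weights sweep in the opposite direction, and all remaining tensors use a constant 0.5. Assuming the config is saved as `config.yml`, the merge can be reproduced with mergekit's `mergekit-yaml config.yml ./my-output` command, and the result loads like any other causal LM. A minimal usage sketch (the prompt is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Either the Hub id of this merge or a local mergekit output directory
model_id = "brgx53/3Blareneg-ECE-PRYMMAL-Martial"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize SLERP model merging in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```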
## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.
| Metric              | Value |
|---------------------|------:|
| Avg.                | 21.23 |
| IFEval (0-shot)     | 28.76 |
| BBH (3-shot)        | 35.45 |
| MATH Lvl 5 (4-shot) |  2.95 |
| GPQA (0-shot)       | 11.30 |
| MuSR (0-shot)       | 15.43 |
| MMLU-PRO (5-shot)   | 33.51 |
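These scores come from the leaderboard's evaluation harness (lm-evaluation-harness). To re-run a single benchmark locally, something along the following lines should work; the task name is an assumption based on lm-eval's leaderboard task group and may differ across harness versions:

```python
import lm_eval

# Hypothetical re-run of one leaderboard task; task names vary by lm-eval version
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=brgx53/3Blareneg-ECE-PRYMMAL-Martial,dtype=bfloat16",
    tasks=["leaderboard_ifeval"],  # assumption: v2 leaderboard IFEval task name
)
print(results["results"])
```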