WizardIceLemonTeaRP-32k

This is a merge of pre-trained language models created using mergekit.

Merge Details

I have mixed feelings about this merge result. It's intended as raw material for further merges, not for direct use! Try WestIceLemonTeaRP-32k-7b instead.

Merge Method

This model was merged using the SLERP merge method.
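
For intuition, here is a minimal sketch of what SLERP does for a single pair of weight tensors. This is an illustrative NumPy version, not mergekit's actual implementation; the function name and epsilon handling are my own.

import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors.

    t=0 returns v0 (the base model's tensor), t=1 returns v1.
    """
    # Normalize copies to measure the angle between the two tensors.
    a = v0 / (np.linalg.norm(v0) + eps)
    b = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.sum(a * b), -1.0, 1.0)
    omega = np.arccos(dot)
    # Nearly parallel tensors: fall back to plain linear interpolation.
    if abs(np.sin(omega)) < eps:
        return (1.0 - t) * v0 + t * v1
    # Interpolate along the arc between the tensors instead of the chord,
    # which preserves tensor norms better than plain LERP.
    return (np.sin((1.0 - t) * omega) * v0 + np.sin(t * omega) * v1) / np.sin(omega)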

Models Merged

The following models were included in the merge:

  • IceLemonTeaRP-32k-7b
  • Not-WizardLM-2-7B

Configuration

The following YAML configuration was used to produce this model:


slices:
  - sources:
      - model: Not-WizardLM-2-7B
        layer_range: [0, 32]
      - model: IceLemonTeaRP-32k-7b
        layer_range: [0, 32]
merge_method: slerp
base_model: Not-WizardLM-2-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: float16
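
The value lists under t are interpolated across layer depth, so early self-attention layers stay close to the base model (t=0) while later ones lean toward IceLemonTeaRP (t=1), with the MLP gradient running the opposite way. To reproduce the merge, save the YAML above (e.g. as config.yml) and run it through mergekit. A minimal sketch using mergekit's Python API as shown in its README (MergeConfiguration, MergeOptions, run_merge); the paths and options here are placeholders:

import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the merge recipe from the YAML shown above.
with open("config.yml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Execute the merge and write the result to ./merged.
run_merge(
    merge_config,
    out_path="./merged",
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)

The equivalent CLI call is mergekit-yaml config.yml ./merged --cuda.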

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

Metric                            | Value
----------------------------------|------
Avg.                              | 67.31
AI2 Reasoning Challenge (25-Shot) | 65.61
HellaSwag (10-Shot)               | 85.39
MMLU (5-Shot)                     | 63.29
TruthfulQA (0-shot)               | 58.30
Winogrande (5-shot)               | 77.03
GSM8k (5-shot)                    | 54.21
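
The author recommends WestIceLemonTeaRP-32k-7b for actual use, but if you want to inspect this merge anyway, it loads like any other 7B safetensors checkpoint. A standard transformers sketch (the repo id is taken from this card; device_map="auto" assumes accelerate is installed):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "icefog72/WizardIceLemonTeaRP-32k"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # matches the merge's float16 dtype
    device_map="auto",
)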