Paper: **Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time** ([arXiv:2203.05482](https://arxiv.org/abs/2203.05482))
This model has been quantized using `llama-quantize` from llama.cpp.
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
This model was merged using the linear merge method.
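With `normalize: true` (set in the configuration below), the linear method rescales the per-model weights to sum to one before averaging. In symbols, as a paraphrase of the method rather than notation taken from this card:

$$
\theta_{\text{merged}} = \frac{\sum_i w_i\,\theta_i}{\sum_i w_i}
$$

where $\theta_i$ are the parameter tensors of each input model and $w_i$ are the configured weights (0.6 and 0.4 here).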
The following models were included in the merge:
* [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct)
* [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B)
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Qwen/Qwen2.5-0.5B-Instruct
    parameters:
      weight: 0.6
  - model: Qwen/Qwen2.5-0.5B
    parameters:
      weight: 0.4
merge_method: linear
parameters:
  normalize: true
dtype: bfloat16
```
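As a minimal sketch of what this configuration computes, assuming both checkpoints share an architecture and tensor shapes; this is illustrative, not mergekit's actual implementation, and the output directory is hypothetical:

```python
import torch
from transformers import AutoModelForCausalLM

def linear_merge(state_dicts, weights, normalize=True):
    """Key-wise weighted average: merged[k] = sum_i w_i * sd_i[k]."""
    if normalize:
        total = sum(weights)
        weights = [w / total for w in weights]  # mirrors `normalize: true`
    merged = {}
    for key, ref in state_dicts[0].items():
        if ref.is_floating_point():
            # Accumulate in float32 for precision, then store in bfloat16
            # to match `dtype: bfloat16` above.
            acc = sum(w * sd[key].to(torch.float32)
                      for w, sd in zip(weights, state_dicts))
            merged[key] = acc.to(torch.bfloat16)
        else:
            merged[key] = ref.clone()  # copy integer/bool buffers unchanged
    return merged

instruct = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B-Instruct", torch_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B", torch_dtype=torch.bfloat16)

merged = linear_merge([instruct.state_dict(), base.state_dict()], [0.6, 0.4])
instruct.load_state_dict(merged)
instruct.save_pretrained("./merged")  # hypothetical output directory
```

In practice the YAML above is passed to mergekit itself (for example via its `mergekit-yaml` command), which also takes care of the tokenizer and sharded checkpoints.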
Quantized GGUF variants are available in 4-bit and 8-bit.
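The GGUF files can be run with llama.cpp directly or through its Python bindings. A minimal sketch using llama-cpp-python, where the model file name (a 4-bit Q4_K_M quant) is an assumption:

```python
from llama_cpp import Llama

# Load a quantized GGUF file; the path below is hypothetical.
llm = Llama(model_path="./merged-Q4_K_M.gguf", n_ctx=2048)

out = llm("Briefly explain what a model merge is.", max_tokens=64)
print(out["choices"][0]["text"])
```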