Paper: *Resolving Interference When Merging Models* (arXiv:2306.01708)
This is a merge of pre-trained language models created with mergekit, developed as part of an ongoing blog series. I take no responsibility for the outputs of this model.
This model was merged using the TIES merge method using EryriLabs/Llama-3.2-SARA-3b as a base.
The following models were included in the merge:

- Lyte/Llama-3.2-3B-Overthinker
- HollowMan6/Llama-3.2-3B-SFT-Model-Ocra-500k
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: EryriLabs/Llama-3.2-SARA-3b
    parameters:
      density: 0.99 # keeping 99% of base model weights
  - model: Lyte/Llama-3.2-3B-Overthinker
    parameters:
      density: 0.1 # fraction of weights in differences from the base model to retain
      weight: # weight gradient
        - filter: mlp
          value: 0.1
        - value: 0
  - model: HollowMan6/Llama-3.2-3B-SFT-Model-Ocra-500k
    parameters:
      density: 0.1
      weight: 0.2
merge_method: ties
base_model: EryriLabs/Llama-3.2-SARA-3b
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
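To make the `density` and `weight` parameters concrete: TIES works by trimming each fine-tuned model's delta from the base (keeping only the top-`density` fraction of entries by magnitude), electing a majority sign per parameter, and averaging only the deltas that agree with that sign. The sketch below is a toy NumPy illustration of that idea on flat weight vectors; it is my own simplification, not mergekit's actual implementation.

```python
import numpy as np

def ties_merge(base, tuned, density=0.1, weights=None):
    """Toy TIES-style merge on flat weight vectors (illustrative only)."""
    deltas = [t - base for t in tuned]
    if weights is None:
        weights = [1.0] * len(deltas)
    # Trim: keep only the top-`density` fraction of each delta by magnitude.
    trimmed = []
    for d in deltas:
        k = max(1, int(round(density * d.size)))
        thresh = np.sort(np.abs(d))[-k]
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))
    # Elect signs: take the sign of the weighted sum per parameter.
    stacked = np.stack([w * d for w, d in zip(weights, trimmed)])
    sign = np.sign(stacked.sum(axis=0))
    # Disjoint merge: average only the deltas agreeing with the elected sign.
    agree = (np.sign(stacked) == sign) & (stacked != 0)
    total = np.where(agree, stacked, 0.0).sum(axis=0)
    count = np.maximum(agree.sum(axis=0), 1)
    return base + total / count
```

With `density: 0.1`, only the largest 10% of each model's delta entries survive trimming, which is what keeps the merged model close to the base while still importing the strongest task-specific changes.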