Paper: Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch (arXiv:2311.03099)
This is a merge of pre-trained language models created using mergekit.
This model was merged using the DARE TIES merge method, with bunnycore/Qwen-2.5-3b-RP as the base model.
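DARE TIES combines two ideas: DARE randomly drops a fraction of each fine-tuned model's parameter deltas (keeping roughly `density` of them) and rescales the survivors, while TIES resolves sign conflicts between models before the weighted deltas are added back to the base. The sketch below is a toy, numpy-only illustration of that idea, not mergekit's implementation; all function and variable names are assumptions.

```python
# Illustrative sketch of DARE TIES on flat parameter vectors; a simplification,
# not mergekit's actual code.
import numpy as np

rng = np.random.default_rng(0)

def dare_ties_merge(base, finetuned, densities, weights):
    """Merge fine-tuned models into `base` via DARE drop/rescale + TIES sign election."""
    deltas = []
    for ft, density, weight in zip(finetuned, densities, weights):
        delta = ft - base                              # task vector of one model
        keep = rng.random(delta.shape) < density       # DARE: randomly drop (1 - density)
        delta = np.where(keep, delta, 0.0) / density   # rescale survivors to preserve expectation
        deltas.append(weight * delta)                  # per-model weight from the config
    stacked = np.stack(deltas)
    # TIES: elect a sign per parameter, then keep only contributions that agree with it
    # (summed here without normalization, mirroring `normalize: false`).
    elected = np.sign(stacked.sum(axis=0))
    merged_delta = np.where(np.sign(stacked) == elected, stacked, 0.0).sum(axis=0)
    return base + merged_delta

# Toy example with two "fine-tuned" models, density 0.5 and weight 0.5 each.
base = rng.normal(size=8)
tuned = [base + rng.normal(scale=0.1, size=8) for _ in range(2)]
print(dare_ties_merge(base, tuned, densities=[0.5, 0.5], weights=[0.5, 0.5]))
```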
The following models were included in the merge:

* bunnycore/Qwen-2.5-3b-RP + bunnycore/Qwen-2.5-3b-rp-mix-lora
* Replete-AI/Replete-LLM-V2.5-Qwen-3b + bunnycore/Qwen-2.5-3b-rp-mix-lora
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: bunnycore/Qwen-2.5-3b-RP+bunnycore/Qwen-2.5-3b-rp-mix-lora
    parameters:
      density: 0.5
      weight: 0.5
  - model: Replete-AI/Replete-LLM-V2.5-Qwen-3b+bunnycore/Qwen-2.5-3b-rp-mix-lora
    parameters:
      density: 0.5
      weight: 0.5
merge_method: dare_ties
base_model: bunnycore/Qwen-2.5-3b-RP
parameters:
  normalize: false
  int8_mask: true
dtype: float16
```
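Assuming mergekit is installed (`pip install mergekit`), a configuration like the one above is typically saved to a file and run with the `mergekit-yaml` command, e.g. `mergekit-yaml config.yaml ./output-model`. The sketch below shows how the resulting checkpoint could then be loaded with transformers; the repo id is a placeholder, not the actual name of this model.

```python
# Minimal sketch of loading the merged checkpoint with transformers;
# "your-username/qwen-2.5-3b-merge" is a placeholder repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/qwen-2.5-3b-merge"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

prompt = "Write a short greeting."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Loading in float16 matches the `dtype: float16` setting in the merge configuration, so no extra precision conversion is needed.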