Model Stock: All we need is just a few fine-tuned models (arXiv:2403.19522)
A new merge method with better results, an improvement over the previous version in all respects. The core focus remains large context and an uncensored model aimed at RP and storytelling.
This is a merge of pre-trained language models created using mergekit.
This model was merged using the Model Stock merge method, with \Llama-3-70B-Instruct-Gradient-262k as the base.
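For intuition, the sketch below shows a simplified, per-tensor version of the Model Stock idea as described in the paper: each fine-tuned weight is treated as the base plus a task vector, the fine-tuned weights are averaged, and that average is interpolated back toward the base with a ratio derived from the angle between the task vectors. This is only an illustration, not mergekit's implementation; the function name and per-tensor granularity are assumptions.

```python
import torch

def model_stock_merge(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Simplified per-tensor Model Stock merge (illustrative, not mergekit's code)."""
    assert len(finetuned) >= 2, "Model Stock needs at least two fine-tuned models"
    # Task vectors: how far each fine-tuned model moved from the base.
    deltas = [w - base for w in finetuned]
    n = len(deltas)
    # Average pairwise cosine similarity between the task vectors.
    cos_vals = []
    for i in range(n):
        for j in range(i + 1, n):
            a, b = deltas[i].flatten(), deltas[j].flatten()
            cos_vals.append(torch.dot(a, b) / (a.norm() * b.norm() + 1e-12))
    cos_theta = torch.stack(cos_vals).mean().clamp(min=0.0)
    # Interpolation ratio from the paper: t = N*cos / (1 + (N-1)*cos).
    t = n * cos_theta / (1 + (n - 1) * cos_theta)
    # Pull the average of the fine-tuned weights back toward the base.
    w_avg = torch.stack(finetuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```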
The following models were included in the merge:

* \Llama-3-Giraffe-70B
* \Llama3-70B-Chinese-Chat
* \Higgs-Llama-3-70B
* \Smaug-Llama-3-70B-Instruct
* \Llama-3-Lumimaid-70B-v0.1-OAS
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: \Llama-3-70B-Instruct-Gradient-262k
  - model: \Llama-3-Giraffe-70B
  - model: \Llama3-70B-Chinese-Chat
  - model: \Higgs-Llama-3-70B
  - model: \Smaug-Llama-3-70B-Instruct
  - model: \Llama-3-Lumimaid-70B-v0.1-OAS
merge_method: model_stock
base_model: \Llama-3-70B-Instruct-Gradient-262k
dtype: bfloat16
```
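Once the config has been run through mergekit (its `mergekit-yaml` CLI takes a config file and an output directory), the result loads like any other Llama-3 instruct checkpoint with transformers. A minimal sketch, assuming the merge was written to a local directory; the model path is a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/merged-model"  # placeholder for the local mergekit output directory

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the dtype used in the merge config
    device_map="auto",
)

# Standard Llama-3 chat-style prompting.
messages = [{"role": "user", "content": "Write a short scene set on a rainy space station."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```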