⚠️ Note: This project is still under construction. See this page for more info.

🚀 Mergedonia Suite 24B v1


This is a test suite to see how various merge methods affect a model's performance on metrics and benchmarks.

Instead of Gemma 2 finetunes, Mistral 24B finetunes were used. The following models are featured:

  • Cydonia v4.3
  • Magidonia v4.3
  • Precog v1
  • Fallen Mistral v1e
  • Rivermind

For methods that require a base_model, the best choice is either anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only or Darkhn/Magistral-2509-24B-Text-Only, depending on which elements you want to emphasize.
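For illustration, here is a minimal mergekit-style YAML sketch of a merge that uses an explicit base_model. The finetune repo names (e.g. TheDrummer/Cydonia-24B-v4.3) and the weight/density values are assumptions for the example, not taken from this card:

```yaml
# Sketch of a base_model-style merge config.
# Finetune repo names and parameter values are illustrative assumptions.
merge_method: ties
base_model: anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only
models:
  - model: TheDrummer/Cydonia-24B-v4.3    # assumed repo name
    parameters:
      weight: 0.5
      density: 0.5
  - model: TheDrummer/Magidonia-24B-v4.3  # assumed repo name
    parameters:
      weight: 0.5
      density: 0.5
dtype: bfloat16
```

Swapping the base_model line for Darkhn/Magistral-2509-24B-Text-Only would shift which base's traits the merge emphasizes.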

The top 3 Drummer finetunes, plus the most uncensored one (Fallen Mistral) and a random outlier (Rivermind), were included to see how each method reacts to:

  • censorship
  • intelligence
  • style
  • positivity
  • context retention

So far, 16 merges are planned. These might compete in a bracket on Challonge in a 16 > 8 > 4 > 2 > 1 format.

Update: More methods are planned for inclusion. The outline below is obsolete and will be updated soon.

Regular methods featured:

  • arcee_fusion (2 stages)
  • slerp (2 stages)
  • multislerp
  • model_stock
  • karcher
  • SCE
  • della
  • dare_ties
  • ties
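For the two-stage entries above, a plausible reading is that merges are chained: stage 1 merges pairs of finetunes, and stage 2 merges the stage-1 outputs. A sketch of a single slerp stage, assuming mergekit's slerp (which interpolates between exactly two models via a factor t); the model paths are illustrative:

```yaml
# Stage 1 of a two-stage slerp: interpolate between two finetunes.
# Model paths are illustrative assumptions.
merge_method: slerp
base_model: ./Cydonia-v4.3    # interpolation starts from this model...
models:
  - model: ./Cydonia-v4.3
  - model: ./Magidonia-v4.3   # ...and moves toward this one
parameters:
  t: 0.5                      # 0 = base_model, 1 = the other model
dtype: bfloat16
```

The stage-2 config would take the same shape, with the two stage-1 outputs as inputs.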

Custom:

  • cvs
  • brf
  • magic
  • chiral_qhe
  • flux
  • scream
  • arcane

A list randomizer recommended these matchups for round 1:

  • scream vs model_stock
  • arcane vs arcee_fusion
  • karcher vs ties
  • slerp vs cvs
  • magic vs della
  • SCE vs multislerp
  • brf vs chiral_qhe
  • dare_ties vs flux



Model tree for 24B-Suite/Mergedonia-Suite-24B-v1