KaidenRp2400_12b_v1_m2

This model was created by merging the models listed below using mergekit.

Merge Configuration

```yaml
merge_method: dare_ties
base_model: mistralai/Mistral-Nemo-Base-2407
tokenizer_source: union
parameters:
  density: 0.5
  weight: 1.0
models:
  - model: mergekit-community/MN-Sappho-g2-12B
    parameters:
      weight: 0.33
  - model: nbeerbower/Nemoties-ChatML-12B
    parameters:
      weight: 0.33
  - model: pbevan11/Mistral-Nemo-Baseline-SFT
    parameters:
      weight: 0.34
```
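
Assuming the configuration above is saved to a file named `config.yaml` (the filename and output directory are illustrative), the merge can be reproduced with mergekit's `mergekit-yaml` command-line entry point:

```shell
# Install mergekit, then run the merge described by config.yaml.
pip install mergekit

# Writes the merged model (weights, tokenizer, config) to the output directory.
# --cuda uses a GPU for the merge if one is available (optional).
mergekit-yaml config.yaml ./KaidenRp2400_12b_v1_m2 --cuda
```

With `dare_ties`, each delta from the base model is randomly pruned to the given `density` (here 0.5) and rescaled before the weighted sum, so the three source models contribute roughly equally (0.33/0.33/0.34).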
Model size: 12B parameters (BF16 safetensors).

Model tree for kainatq/KaidenRp2400_12b_v1_m2

Quantizations: 2 models