# hermite_sakana-mathja-7b-hermite-optimal
A model merge with mixing weights λ optimized via Hermite interpolation.
## Merge Configuration
| Parameter | Value |
|---|---|
| Method | Hermite interpolation (Phase 2 optimized) |
| λ | [0.816325, 0.183675] |
| dtype | torch.float16 |
- Model 0 (WizardLMTeam/WizardMath-7B-V1.1): λ=0.816325
- Model 1 (augmxnt/shisa-gamma-7b-v1): λ=0.183675
## Tokenizer
Union tokenizer (mergekit-style): vocab size = 32000
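Since the union vocabulary ends up at exactly 32000, the two parents presumably share the same base vocabulary. A minimal sketch of checking this, assuming both repositories ship a compatible Hugging Face tokenizer:

```python
# Sketch: confirm that the union of the two parent vocabularies matches
# the 32000 entries reported above.
from transformers import AutoTokenizer

tok_a = AutoTokenizer.from_pretrained("WizardLMTeam/WizardMath-7B-V1.1")
tok_b = AutoTokenizer.from_pretrained("augmxnt/shisa-gamma-7b-v1")

union_vocab = set(tok_a.get_vocab()) | set(tok_b.get_vocab())
print(len(union_vocab))  # expected: 32000 (per the card)
```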
## Formula

$$
\theta^{*} = \sum_k \lambda_k \, \theta_k
$$
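A minimal sketch of applying this formula with the λ values above, assuming both checkpoints share identical architectures and parameter names (accumulating in float32 before casting back to float16 is a numerical precaution, not part of the original recipe):

```python
# Sketch of the linear merge θ* = Σ_k λ_k θ_k over the two parent models.
import torch
from transformers import AutoModelForCausalLM

lambdas = [0.816325, 0.183675]
model_ids = ["WizardLMTeam/WizardMath-7B-V1.1", "augmxnt/shisa-gamma-7b-v1"]

# Load the first model as the merge target and scale its parameters by λ_0.
merged = AutoModelForCausalLM.from_pretrained(model_ids[0], torch_dtype=torch.float16)
state = {k: lambdas[0] * v.float() for k, v in merged.state_dict().items()}

# Accumulate the remaining models' parameters, each scaled by its λ.
for lam, model_id in zip(lambdas[1:], model_ids[1:]):
    other = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
    for k, v in other.state_dict().items():
        state[k] += lam * v.float()

# Cast back to float16, load into the target, and save the merged model.
merged.load_state_dict({k: v.half() for k, v in state.items()})
merged.save_pretrained("hermite_sakana-mathja-7b-hermite-optimal")
```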
The mixing weights λ were optimized by minimizing the Hermite polynomial approximation of the loss function (see Phase 2).
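The card does not spell out Phase 2. The following is a hedged sketch of one natural reading: the loss along the one-dimensional merge path θ(t) = (1 - t)·θ_0 + t·θ_1 is approximated by a cubic Hermite interpolant built from endpoint losses and derivatives, and λ = [1 - t*, t*] is taken at its minimizer. All numeric inputs below are illustrative placeholders, not values from the actual optimization.

```python
# Sketch: minimize a cubic Hermite approximation of the loss along the
# merge path t ∈ [0, 1], then read off λ = [1 - t*, t*].
import numpy as np

def hermite(t, p0, p1, m0, m1):
    """Cubic Hermite interpolant on [0, 1] from endpoint values/derivatives."""
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1

# Hypothetical endpoint data: loss and d(loss)/dt at t=0 and t=1.
p0, p1 = 1.20, 1.45   # loss of model 0 / model 1 (placeholders)
m0, m1 = -0.30, 0.80  # directional derivatives (placeholders)

ts = np.linspace(0.0, 1.0, 10001)
t_star = ts[np.argmin(hermite(ts, p0, p1, m0, m1))]
lam = [1.0 - t_star, t_star]
print(f"optimal λ ≈ [{lam[0]:.6f}, {lam[1]:.6f}]")
```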