OrobasVault committed on
Commit 3b81dbc · verified · 1 Parent(s): 4edccb2

Update README.md

Files changed (1)
  1. README.md +64 -4
README.md CHANGED
@@ -1,19 +1,79 @@
  ---
- base_model: []
+ base_model:
+ - mistralai/Mistral-Nemo-Instruct-2407
+ - Vortex5/Prototype-X-12b
+ - Vortex5/Stellar-Witch-12B
+ - Vortex5/Celestial-Queen-12B
+ - Vortex5/Moonlit-Mirage-12B
+ - Vortex5/Crimson-Constellation-12B
+ - Vortex5/Wicked-Nebula-12B
+
  library_name: transformers
  tags:
  - mergekit
  - merge
-
+ - mistral
+ - nemo
+ - karcher_stock
  ---
- # karcher_stock-12B
+ # 👻 Geodesic Phantom 12B
+
+ ![geodesic-phantom](https://cdn-uploads.huggingface.co/production/uploads/69e46bb84df2a2575b60a527/7tnIXKdUUtGLGkbcGPRGK.jpeg)

  This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

+ The merge was run on a RunPod A40 using an [adaptive VRAM chunking script](https://huggingface.co/spaces/Naphula/model_tools/blob/main/graph_v18_runpod_A40.py) (based on `measure.py` by [GrimJim](https://huggingface.co/grimjim)).
+
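+ The script processes each tensor in chunks and, whenever CUDA runs out of memory, halves the chunk size and retries, which is what the warnings below record. A minimal sketch of that backoff pattern, with a hypothetical `process` standing in for the real per-chunk work (see the linked script for the actual logic):
+
+ ```py
+ # Minimal sketch of the OOM-backoff chunking loop suggested by the log below.
+ # `process` is a hypothetical stand-in for the real per-chunk computation.
+ import torch
+
+ def run_chunked(tensors, process, chunk=65536, min_chunk=1024):
+     done, total, attempt = 0, tensors[0].shape[0], 0
+     while done < total:
+         try:
+             # Run the work on a slice of every input tensor
+             process([t[done:done + chunk] for t in tensors])
+             done += chunk
+         except torch.cuda.OutOfMemoryError:
+             attempt += 1
+             torch.cuda.empty_cache()
+             new_chunk = max(chunk // 2, min_chunk)
+             print(f"OOM at chunk {chunk}, reducing to {new_chunk} "
+                   f"(attempt {attempt}, progress: {done}/{total})")
+             chunk = new_chunk
+ ```
+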
+ ```bat
+ WARNING:mergekit.graph:OOM at chunk 65536, reducing to 32768 (attempt 1, progress: 0/131075)
+ WARNING:mergekit.graph:OOM at chunk 32768, reducing to 16384 (attempt 2, progress: 0/131075)
+
+ [Karcher_Stock Audit] Layer: lm_head.weight
+ Stats: Cos(θ): 0.564 | t-factor: 0.8843 | Karcher Iters: 2960
+ (Base) mistralai--Mistral-Nemo-Instruct-2407 : █████ ( 11.57%)
+ (Donor) Vortex5--Prototype-X-12b : ███████ ( 14.74%)
+ (Donor) Vortex5--Stellar-Witch-12B : ███████ ( 14.74%)
+ (Donor) Vortex5--Celestial-Queen-12B : ███████ ( 14.74%)
+ (Donor) Vortex5--Moonlit-Mirage-12B : ███████ ( 14.74%)
+ (Donor) Vortex5--Crimson-Constellation-12B : ███████ ( 14.74%)
+ (Donor) Vortex5--Wicked-Nebula-12B : ███████ ( 14.74%)
+ ```
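+
+ For reference, the audit percentages follow from the Model Stock interpolation factor t = N·cos(θ) / (1 + (N − 1)·cos(θ)): the base keeps 1 − t of the weight and each of the N donors gets t/N. A quick hand check against the logged lm_head.weight numbers (my arithmetic, not script output; the small gaps come from the cosine being rounded in the log):
+
+ ```py
+ # Hand check of the lm_head.weight audit line above (not script output)
+ N = 6                  # donor models
+ cos_theta = 0.564      # logged Cos(θ), rounded to 3 decimals
+ t = (N * cos_theta) / (1 + (N - 1) * cos_theta)
+ print(f"{t:.4f}")      # 0.8859 -- logged t-factor is 0.8843
+ print(f"{1 - t:.2%}")  # 11.41% -- logged base share is 11.57%
+ print(f"{t / N:.2%}")  # 14.76% -- logged donor share is 14.74%
+ ```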
+
+ The following patch was also required for this merge:
+
+ ## `karcher_stock` Adaptive Tanh Soft-Clamp v11
+
+ ```py
+ # ── 11. Model Stock t-factor with Adaptive Soft-Clamp ─────────────
+ # (Excerpt: `ws_2d` and `cos_theta` are defined earlier in the method.)
+ N = len(ws_2d)
+ ct = cos_theta.unsqueeze(-1) if cos_theta.dim() > 0 else cos_theta
+
+ # Raw Model Stock formula: t = N*cos(theta) / (1 + (N - 1)*cos(theta))
+ denom = 1.0 + (N - 1) * ct
+ # Clamp the denominator away from zero to prevent division by zero
+ t_raw = (N * ct) / denom.clamp(min=1e-6)
+
+ # --- BULLETPROOF TANH CLAMP ---
+ # 1. Prevent negative spikes (fall back toward the base model)
+ t_clamped_bottom = torch.clamp(t_raw, min=0.0)
+
+ # 2. Smoothly asymptote positive spikes to L (maximum allowed t-factor)
+ L = 1.5
+ excess = torch.clamp(t_clamped_bottom - 1.0, min=0.0)
+ t_soft_top = 1.0 + (L - 1.0) * torch.tanh(excess / (L - 1.0))
+
+ # 3. Apply: if t <= 1.0, use the exact math; if t > 1.0, use the soft curve.
+ t = torch.where(t_clamped_bottom <= 1.0, t_clamped_bottom, t_soft_top)
+ # ------------------------------
+ ```
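+
+ Numerically, the clamp passes t ≤ 1 through unchanged, floors negative spikes at 0, and squashes large positive spikes so they approach but never exceed L = 1.5. A standalone check with hand-picked `t_raw` values:
+
+ ```py
+ # Standalone check of the soft-clamp on hand-picked t_raw spikes
+ import torch
+
+ L = 1.5
+ t_raw = torch.tensor([-3.0, 0.5, 0.9, 1.2, 2.0, 50.0])
+ bottom = torch.clamp(t_raw, min=0.0)                    # floor negatives at 0
+ excess = torch.clamp(bottom - 1.0, min=0.0)             # overshoot past 1.0
+ soft = 1.0 + (L - 1.0) * torch.tanh(excess / (L - 1.0)) # asymptote to L
+ t = torch.where(bottom <= 1.0, bottom, soft)
+ print(t)  # tensor([0.0000, 0.5000, 0.9000, 1.1900, 1.4820, 1.5000])
+ ```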
+
+ ## Example of the clamp preventing merge corruption
+ ![tanh_clamp](https://cdn-uploads.huggingface.co/production/uploads/68e840caa318194c44ec2a04/eRdxOMhKsRysDgP-6Pkw0.png)
+
  ## Merge Details
  ### Merge Method

- This model was merged using the Karcher-Stock merge method using /workspace/models/mistralai--Mistral-Nemo-Instruct-2407 as a base.
+ This model was merged using the `karcher_stock` merge method, with /workspace/models/mistralai--Mistral-Nemo-Instruct-2407 as the base.
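+
+ The `Karcher Iters` stat in the audit suggests the donor average is computed as a Karcher (Riemannian) mean on the unit hypersphere rather than as a plain arithmetic mean. For intuition only, a generic textbook sketch of that iteration, not this repo's implementation:
+
+ ```py
+ # Generic Karcher mean on the unit sphere via log/exp maps (textbook sketch,
+ # NOT this repo's implementation). vecs: (M, D), rows unit-normalized.
+ import torch
+
+ def karcher_mean(vecs, iters=3000, tol=1e-7):
+     mu = vecs.mean(dim=0)
+     mu = mu / mu.norm()
+     for _ in range(iters):
+         cos = (vecs @ mu).clamp(-1.0, 1.0)                # cosines to current mean
+         theta = torch.acos(cos)                           # geodesic distances
+         perp = vecs - cos.unsqueeze(-1) * mu              # components orthogonal to mu
+         norm = perp.norm(dim=-1, keepdim=True).clamp(min=1e-12)
+         step = (theta.unsqueeze(-1) * perp / norm).mean(dim=0)  # mean log-map
+         if step.norm() < tol:                             # converged
+             break
+         n = step.norm()
+         mu = torch.cos(n) * mu + torch.sin(n) * step / n  # exp-map back to sphere
+         mu = mu / mu.norm()
+     return mu
+ ```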

  ### Models Merged
79