mikeumus-divincian committed on
Commit 22b185f · verified · 1 Parent(s): 46a9aee

regenerate index.html with td-bold header row

Files changed (1):
  index.html +12 -99
index.html CHANGED
@@ -55,108 +55,21 @@ offline-built search index.</p>
  <strong>Llama</strong>, <strong>OpenAI MoE</strong>, plus two 1-bit
  controls.</p>
  <table>
- <thead>
- <tr>
- <th>Model</th>
- <th>Architecture</th>
- <th>Params</th>
- <th>Vindex</th>
- <th>C4 (layer temp)</th>
- <th>Notes</th>
- </tr>
- </thead>
  <tbody>
- <tr>
- <td><strong>Gemma 4 E2B-it</strong></td>
- <td>Dense (Gemma 4)</td>
- <td>2B</td>
- <td><a
- href="https://huggingface.co/Divinci-AI/gemma-4-e2b-vindex">gemma-4-e2b-vindex</a></td>
- <td><strong>0.0407 ± 0.0004</strong> ✓</td>
- <td>3-seed validated; headline universal-constant model</td>
- </tr>
- <tr>
- <td>Qwen3-0.6B</td>
- <td>Dense (Qwen 3)</td>
- <td>0.6B</td>
- <td><a
- href="https://huggingface.co/Divinci-AI/qwen3-0.6b-vindex">qwen3-0.6b-vindex</a></td>
- <td>0.411</td>
- <td>Smallest published; Qwen3 family-elevated C4</td>
- </tr>
- <tr>
- <td>Qwen3-8B bf16</td>
- <td>Dense (Qwen 3)</td>
- <td>8B</td>
- <td><a
- href="https://huggingface.co/Divinci-AI/qwen3-8b-vindex">qwen3-8b-vindex</a></td>
- <td>0.804</td>
- <td>Architecture control for Bonsai</td>
- </tr>
- <tr>
- <td>Qwen3.6-35B-A3B</td>
- <td>MoE (Qwen 3.6)</td>
- <td>35B / 3B active</td>
- <td><a
- href="https://huggingface.co/Divinci-AI/qwen3.6-35b-a3b-vindex">qwen3.6-35b-a3b-vindex</a></td>
- <td>—</td>
- <td>256 experts, 40 layers</td>
- </tr>
- <tr>
- <td>Ministral-3B</td>
- <td>Dense (Mistral 3)</td>
- <td>3B</td>
- <td><a
- href="https://huggingface.co/Divinci-AI/ministral-3b-vindex">ministral-3b-vindex</a></td>
- <td>0.265</td>
- <td>fp8 → bf16 reconstruction</td>
- </tr>
- <tr>
- <td>Llama 3.1-8B</td>
- <td>Dense (Llama 3.1)</td>
- <td>8B</td>
- <td><a
- href="https://huggingface.co/Divinci-AI/llama-3.1-8b-vindex">llama-3.1-8b-vindex</a></td>
- <td><strong>0.012</strong> ✓</td>
- <td>Llama family signature</td>
- </tr>
- <tr>
- <td>MedGemma 1.5-4B</td>
- <td>Dense (Gemma multimodal)</td>
- <td>4B</td>
- <td><a
- href="https://huggingface.co/Divinci-AI/medgemma-1.5-4b-vindex">medgemma-1.5-4b-vindex</a></td>
- <td><strong>1.898 ⚠</strong></td>
- <td>45× cohort anomaly — under investigation</td>
- </tr>
- <tr>
- <td>GPT-OSS 120B</td>
- <td>MoE (OpenAI)</td>
- <td>120B</td>
- <td><a
- href="https://huggingface.co/Divinci-AI/gpt-oss-120b-vindex">gpt-oss-120b-vindex</a></td>
- <td>—</td>
- <td>S[0] grows 117× with depth (L0=111 → final=13,056)</td>
- </tr>
- <tr>
- <td><strong>Bonsai 8B</strong></td>
- <td>1-bit (Qwen 3 base, post-quantized)</td>
- <td>8B</td>
- <td><em>vindex pending publish</em></td>
- <td>0.429</td>
- <td><strong>C5 = 1</strong> (circuit dissolved); var@64 = 0.093</td>
- </tr>
- <tr>
- <td><strong>BitNet b1.58-2B-4T</strong></td>
- <td>1-bit (Microsoft, native)</td>
- <td>2B</td>
- <td><em>vindex pending publish</em></td>
- <td>(Phase 2 pending)</td>
- <td><strong>var@64 = 0.111</strong> mean across 30 layers — n=2
- confirmation of dissolution</td>
- </tr>
+ <tr><td><strong>MODEL</strong></td><td><strong>ARCHITECTURE</strong></td><td><strong>PARAMS</strong></td><td><strong>VINDEX</strong></td><td><strong>C4 (LAYER TEMP)</strong></td><td><strong>NOTES</strong></td></tr>
+ <tr><td><strong>Gemma 4 E2B-it</strong></td><td>Dense (Gemma 4)</td><td>2B</td><td><a href="https://huggingface.co/Divinci-AI/gemma-4-e2b-vindex">gemma-4-e2b-vindex</a></td><td><strong>0.0407 ± 0.0004</strong> ✓</td><td>3-seed validated; headline universal-constant model</td></tr>
+ <tr><td>Qwen3-0.6B</td><td>Dense (Qwen 3)</td><td>0.6B</td><td><a href="https://huggingface.co/Divinci-AI/qwen3-0.6b-vindex">qwen3-0.6b-vindex</a></td><td>0.411</td><td>Smallest published; Qwen3 family-elevated C4</td></tr>
+ <tr><td>Qwen3-8B bf16</td><td>Dense (Qwen 3)</td><td>8B</td><td><a href="https://huggingface.co/Divinci-AI/qwen3-8b-vindex">qwen3-8b-vindex</a></td><td>0.804</td><td>Architecture control for Bonsai</td></tr>
+ <tr><td>Qwen3.6-35B-A3B</td><td>MoE (Qwen 3.6)</td><td>35B / 3B active</td><td><a href="https://huggingface.co/Divinci-AI/qwen3.6-35b-a3b-vindex">qwen3.6-35b-a3b-vindex</a></td><td>—</td><td>256 experts, 40 layers</td></tr>
+ <tr><td>Ministral-3B</td><td>Dense (Mistral 3)</td><td>3B</td><td><a href="https://huggingface.co/Divinci-AI/ministral-3b-vindex">ministral-3b-vindex</a></td><td>0.265</td><td>fp8 → bf16 reconstruction</td></tr>
+ <tr><td>Llama 3.1-8B</td><td>Dense (Llama 3.1)</td><td>8B</td><td><a href="https://huggingface.co/Divinci-AI/llama-3.1-8b-vindex">llama-3.1-8b-vindex</a></td><td><strong>0.012</strong> ✓</td><td>Llama family signature</td></tr>
+ <tr><td>MedGemma 1.5-4B</td><td>Dense (Gemma multimodal)</td><td>4B</td><td><a href="https://huggingface.co/Divinci-AI/medgemma-1.5-4b-vindex">medgemma-1.5-4b-vindex</a></td><td><strong>1.898 ⚠</strong></td><td>45× cohort anomaly — under investigation</td></tr>
+ <tr><td>GPT-OSS 120B</td><td>MoE (OpenAI)</td><td>120B</td><td><a href="https://huggingface.co/Divinci-AI/gpt-oss-120b-vindex">gpt-oss-120b-vindex</a></td><td>—</td><td>S[0] grows 117× with depth (L0=111 → final=13,056)</td></tr>
+ <tr><td><strong>Bonsai 8B</strong></td><td>1-bit (Qwen 3 base, post-quantized)</td><td>8B</td><td><em>vindex pending publish</em></td><td>0.429</td><td><strong>C5 = 1</strong> (circuit dissolved); var@64 = 0.093</td></tr>
+ <tr><td><strong>BitNet b1.58-2B-4T</strong></td><td>1-bit (Microsoft, native)</td><td>2B</td><td><em>vindex pending publish</em></td><td>(Phase 2 pending)</td><td><strong>var@64 = 0.111</strong> mean across 30 layers — n=2 confirmation of dissolution</td></tr>
  </tbody>
  </table>
+
  <hr />
  <h2 id="whats-a-vindex">What's a vindex?</h2>
  <p>Standard model weights tell you <em>what</em> a model computes. A
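The pattern this commit applies can be reduced to a minimal sketch (a hypothetical two-column table for illustration, not the full ten-column one above): the semantic `<thead>`/`<th>` header is dropped and replaced by an ordinary first `<tbody>` row whose cells are bolded with `<strong>`.

```html
<!-- Before: semantic header row (what the diff removes) -->
<table>
  <thead>
    <tr><th>Model</th><th>Vindex</th></tr>
  </thead>
  <tbody>
    <tr><td>Gemma 4 E2B-it</td><td>0.0407</td></tr>
  </tbody>
</table>

<!-- After: td-bold header row (what the diff adds) -->
<table>
  <tbody>
    <tr><td><strong>MODEL</strong></td><td><strong>VINDEX</strong></td></tr>
    <tr><td>Gemma 4 E2B-it</td><td>0.0407</td></tr>
  </tbody>
</table>
```

The trade-off: the td-bold row renders consistently bold even in viewers that apply no styling to `<thead>`, but `<td><strong>` carries none of the header semantics that `<th>` provides to screen readers and table-parsing tools.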