mikeumus-divincian committed on
Commit 72b4f26 · verified · 1 Parent(s): bfc9737

sync table to HTML form for legible headers in HF dark mode

Files changed (1):
  1. README.md +24 -12
README.md CHANGED
@@ -31,18 +31,30 @@ Pick any of 9 models from the dropdown. Toggle between the 3D cylinder spiral an
 
 Cross-family evidence in hand: **Gemma**, **Qwen3**, **Mistral**, **Llama**, **OpenAI MoE**, plus two 1-bit controls.
 
- | Model | Architecture | Params | Vindex | C4 (layer temp) | Notes |
- |-------|-------------|--------|--------|-----------------|-------|
- | **Gemma 4 E2B-it** | Dense (Gemma 4) | 2B | [gemma-4-e2b-vindex](https://huggingface.co/Divinci-AI/gemma-4-e2b-vindex) | **0.0407 ± 0.0004** ✓ | 3-seed validated; headline universal-constant model |
- | Qwen3-0.6B | Dense (Qwen 3) | 0.6B | [qwen3-0.6b-vindex](https://huggingface.co/Divinci-AI/qwen3-0.6b-vindex) | 0.411 | Smallest published; Qwen3 family-elevated C4 |
- | Qwen3-8B bf16 | Dense (Qwen 3) | 8B | [qwen3-8b-vindex](https://huggingface.co/Divinci-AI/qwen3-8b-vindex) | 0.804 | Architecture control for Bonsai |
- | Qwen3.6-35B-A3B | MoE (Qwen 3.6) | 35B / 3B active | [qwen3.6-35b-a3b-vindex](https://huggingface.co/Divinci-AI/qwen3.6-35b-a3b-vindex) | — | 256 experts, 40 layers |
- | Ministral-3B | Dense (Mistral 3) | 3B | [ministral-3b-vindex](https://huggingface.co/Divinci-AI/ministral-3b-vindex) | 0.265 | fp8 → bf16 reconstruction |
- | Llama 3.1-8B | Dense (Llama 3.1) | 8B | [llama-3.1-8b-vindex](https://huggingface.co/Divinci-AI/llama-3.1-8b-vindex) | **0.012** ✓ | Llama family signature |
- | MedGemma 1.5-4B | Dense (Gemma multimodal) | 4B | [medgemma-1.5-4b-vindex](https://huggingface.co/Divinci-AI/medgemma-1.5-4b-vindex) | **1.898 ⚠** | 45× cohort anomaly — under investigation |
- | GPT-OSS 120B | MoE (OpenAI) | 120B | [gpt-oss-120b-vindex](https://huggingface.co/Divinci-AI/gpt-oss-120b-vindex) | — | S[0] grows 117× with depth (L0=111 → final=13,056) |
- | **Bonsai 8B** | 1-bit (Qwen 3 base, post-quantized) | 8B | *vindex pending publish* | 0.429 | **C5 = 1** (circuit dissolved); var@64 = 0.093 |
- | **BitNet b1.58-2B-4T** | 1-bit (Microsoft, native) | 2B | *vindex pending publish* | (Phase 2 pending) | **var@64 = 0.111** mean across 30 layers — n=2 confirmation of dissolution |
+ <table>
+ <thead>
+ <tr style="background:#1e3a2b;color:#faf8f5;">
+ <th style="color:#faf8f5;text-align:left;padding:0.6rem 0.85rem;">Model</th>
+ <th style="color:#faf8f5;text-align:left;padding:0.6rem 0.85rem;">Architecture</th>
+ <th style="color:#faf8f5;text-align:left;padding:0.6rem 0.85rem;">Params</th>
+ <th style="color:#faf8f5;text-align:left;padding:0.6rem 0.85rem;">Vindex</th>
+ <th style="color:#faf8f5;text-align:left;padding:0.6rem 0.85rem;">C4 (layer temp)</th>
+ <th style="color:#faf8f5;text-align:left;padding:0.6rem 0.85rem;">Notes</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr><td><strong>Gemma 4 E2B-it</strong></td><td>Dense (Gemma 4)</td><td>2B</td><td><a href="https://huggingface.co/Divinci-AI/gemma-4-e2b-vindex">gemma-4-e2b-vindex</a></td><td><strong>0.0407 ± 0.0004</strong> ✓</td><td>3-seed validated; headline universal-constant model</td></tr>
+ <tr><td>Qwen3-0.6B</td><td>Dense (Qwen 3)</td><td>0.6B</td><td><a href="https://huggingface.co/Divinci-AI/qwen3-0.6b-vindex">qwen3-0.6b-vindex</a></td><td>0.411</td><td>Smallest published; Qwen3 family-elevated C4</td></tr>
+ <tr><td>Qwen3-8B bf16</td><td>Dense (Qwen 3)</td><td>8B</td><td><a href="https://huggingface.co/Divinci-AI/qwen3-8b-vindex">qwen3-8b-vindex</a></td><td>0.804</td><td>Architecture control for Bonsai</td></tr>
+ <tr><td>Qwen3.6-35B-A3B</td><td>MoE (Qwen 3.6)</td><td>35B / 3B active</td><td><a href="https://huggingface.co/Divinci-AI/qwen3.6-35b-a3b-vindex">qwen3.6-35b-a3b-vindex</a></td><td>—</td><td>256 experts, 40 layers</td></tr>
+ <tr><td>Ministral-3B</td><td>Dense (Mistral 3)</td><td>3B</td><td><a href="https://huggingface.co/Divinci-AI/ministral-3b-vindex">ministral-3b-vindex</a></td><td>0.265</td><td>fp8 → bf16 reconstruction</td></tr>
+ <tr><td>Llama 3.1-8B</td><td>Dense (Llama 3.1)</td><td>8B</td><td><a href="https://huggingface.co/Divinci-AI/llama-3.1-8b-vindex">llama-3.1-8b-vindex</a></td><td><strong>0.012</strong> ✓</td><td>Llama family signature</td></tr>
+ <tr><td>MedGemma 1.5-4B</td><td>Dense (Gemma multimodal)</td><td>4B</td><td><a href="https://huggingface.co/Divinci-AI/medgemma-1.5-4b-vindex">medgemma-1.5-4b-vindex</a></td><td><strong>1.898 ⚠</strong></td><td>45× cohort anomaly — under investigation</td></tr>
+ <tr><td>GPT-OSS 120B</td><td>MoE (OpenAI)</td><td>120B</td><td><a href="https://huggingface.co/Divinci-AI/gpt-oss-120b-vindex">gpt-oss-120b-vindex</a></td><td>—</td><td>S[0] grows 117× with depth (L0=111 → final=13,056)</td></tr>
+ <tr><td><strong>Bonsai 8B</strong></td><td>1-bit (Qwen 3 base, post-quantized)</td><td>8B</td><td><em>vindex pending publish</em></td><td>0.429</td><td><strong>C5 = 1</strong> (circuit dissolved); var@64 = 0.093</td></tr>
+ <tr><td><strong>BitNet b1.58-2B-4T</strong></td><td>1-bit (Microsoft, native)</td><td>2B</td><td><em>vindex pending publish</em></td><td>(Phase 2 pending)</td><td><strong>var@64 = 0.111</strong> mean across 30 layers — n=2 confirmation of dissolution</td></tr>
+ </tbody>
+ </table>
 
 
 ## What's a vindex?
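
The change above follows one reusable pattern: a markdown table inherits whatever header colors the Hub's dark theme applies, while inline `style` attributes on an HTML `<tr>`/`<th>` take precedence over any theme stylesheet rule that lacks `!important`, so the header stays legible in both themes. A minimal sketch of the pattern, using the colors from the diff (the two-column content is illustrative, not from the README):

```html
<!-- Dark-mode-safe table header: inline styles pin the header's
     background and text color so theme CSS cannot restyle them.
     Body rows are left unstyled and follow the active theme.
     Row content below is a hypothetical placeholder. -->
<table>
  <thead>
    <tr style="background:#1e3a2b;color:#faf8f5;">
      <th style="color:#faf8f5;text-align:left;padding:0.6rem 0.85rem;">Model</th>
      <th style="color:#faf8f5;text-align:left;padding:0.6rem 0.85rem;">Vindex</th>
    </tr>
  </thead>
  <tbody>
    <tr><td>Example-1B</td><td><em>pending</em></td></tr>
  </tbody>
</table>
```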